Saturday, April 29, 2017

Week 16: Final Demo

This is the final demo of our project, in which the robot's arm adjusts its pose according to the height of the individual.

Friday, April 21, 2017

Height coordinates and simulation in ROS

After calibrating the colour and depth images, I was finally able to obtain the coordinates of a person's face. I have now been working on using the PR2 to simulate the robot, but it is turning out to be extremely difficult, as most of the documentation is old and not available for ROS Indigo. We intend to keep trying and ensure that it works properly, but given the limited time left it seems to be a tough task.
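A minimal sketch of how the calibrated depth image can be read at the detected face pixel (this is an illustration, not our exact code; it assumes the depth image has already been registered to the colour image, and takes a median over a small window so a few invalid zero-depth pixels do not spoil the reading):

```python
# Sketch: read depth at the face-centre pixel of an aligned depth image.
# depth_image is a row-major 2D list of depth values (e.g. metres);
# zeros are treated as invalid sensor readings and ignored.
def depth_at(depth_image, u, v, window=2):
    """Median depth in a (2*window+1)^2 patch around pixel (u, v)."""
    rows, cols = len(depth_image), len(depth_image[0])
    samples = []
    for dv in range(-window, window + 1):
        for du in range(-window, window + 1):
            r, c = v + dv, u + du
            if 0 <= r < rows and 0 <= c < cols and depth_image[r][c] > 0:
                samples.append(depth_image[r][c])
    if not samples:
        return None  # no valid depth around this pixel
    samples.sort()
    return samples[len(samples) // 2]
```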

Friday, April 14, 2017

Intel realsense, camera calibration, cascade classifiers and ROS

In the past week, we finally obtained the Intel RealSense, which we have begun using in our project. We were able to build the cascade classifier that detects the face of a person. Using the vertices of the rectangle returned by the cascade classifier, we have been able to find the image coordinates of the center of the face.
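A minimal sketch of this step (assuming OpenCV's stock frontal-face Haar cascade, not our own trained one): `detectMultiScale` returns `(x, y, w, h)` rectangles, and the face center is just the rectangle's midpoint.

```python
def face_center(rect):
    """Image coordinates of the center of an (x, y, w, h) detection box."""
    x, y, w, h = rect
    return (x + w // 2, y + h // 2)

def detect_faces(gray_image):
    # cv2 is imported here so the pure-geometry helper above can be used
    # even where OpenCV is not installed; scaleFactor/minNeighbors are
    # illustrative values, not tuned ones.
    import cv2
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    return cascade.detectMultiScale(gray_image, scaleFactor=1.3, minNeighbors=5)
```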

Converting the pixel coordinates into real-world coordinates was very tough, as we had to learn quite a bit of mathematics and camera properties. By calibrating the RealSense camera, we were able to obtain a camera matrix of its intrinsic properties. We will use this in our calculations to get real-world coordinates, from which we will be able to compute the elevation angle and twisting angle.
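The back-projection itself is a short calculation. A minimal sketch, assuming a pinhole model with the intrinsics fx, fy, cx, cy taken from the camera matrix, a depth reading d, and the usual camera-frame convention (X right, Y down, Z forward); the numbers in the tests are invented, not our calibration values:

```python
import math

def deproject(u, v, d, fx, fy, cx, cy):
    """Back-project pixel (u, v) at depth d into 3D camera coordinates."""
    X = (u - cx) * d / fx
    Y = (v - cy) * d / fy
    return (X, Y, d)

def arm_angles(X, Y, Z):
    """Elevation (up is positive, since image Y grows downward) and twist (pan)."""
    elevation = math.atan2(-Y, Z)
    twist = math.atan2(X, Z)
    return (elevation, twist)
```

A face at the principal point gives zero elevation and zero twist, as expected.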

We intend to send this data in the form of messages to the RViz simulator. We have also been studying how to build simulations. Initially we tried to install the Baxter robot drivers, but were unsuccessful. Then Dr. Ramviyas suggested that we could use the PR2 robot simulation instead, which might be easier as it has very good documentation, which we intend to follow.

Hopefully by the end of this week we will be generating all the appropriate data to send to the simulator, and we will begin work on the simulation next week.


Friday, April 7, 2017

Week 14: Cascade Classifiers, Tracking Algorithms

This week we studied more about how to develop and train cascade classifiers. We also studied various tracking algorithms, which could hopefully provide information about the person in the X-Z plane and hence allow our robot to orient itself with the user's position. In the absence of a Kinect, we are currently using an ordinary laptop webcam for preliminary investigations, to develop initial scripts for our project, and to provide data that could be sent to the Gazebo simulator through messages.
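One of the simplest trackers we have looked at is a constant-velocity alpha-beta filter over the person's estimated (X, Z) ground-plane position; a toy sketch (the gains below are illustrative, not tuned values from our project), which smooths noisy per-frame measurements:

```python
# Sketch: constant-velocity alpha-beta filter tracking a person's (X, Z)
# position. Each update predicts forward with the current velocity, then
# corrects position (by alpha) and velocity (by beta) toward the measurement.
class AlphaBetaTracker2D:
    def __init__(self, x, z, alpha=0.85, beta=0.005):
        self.pos = [x, z]
        self.vel = [0.0, 0.0]
        self.alpha, self.beta = alpha, beta

    def update(self, meas, dt=1.0):
        """Predict with constant velocity, then correct toward the measurement."""
        for i in range(2):
            pred = self.pos[i] + self.vel[i] * dt
            resid = meas[i] - pred
            self.pos[i] = pred + self.alpha * resid
            self.vel[i] += self.beta * resid / dt
        return tuple(self.pos)
```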

Friday, March 31, 2017

Week 13: Cascade Classifiers, OpenCV and ROS.

This week we studied cascade classifiers for training and detection. This will help us use opencv_annotation and opencv_visualisation in the course of our project. We are also studying High Dynamic Range imaging.
Further studies are still ongoing on integrating OpenCV's output estimations with ROS, and we hope to make significant progress in the coming weeks.

Friday, March 24, 2017

Week 12: Integrating ROS and Kinect

This week, too, we put considerable effort into trying to integrate the Kinect and ROS with different drivers, but were unsuccessful. We have now requested Dr. Min to help us purchase either a PrimeSense camera, an Xtion Pro, or a Kinect for Windows. We are also studying OpenCV and developing a way to use a normal webcam (in case we are unable to obtain any of the above-mentioned cameras) to obtain the approximate height and width of the person, and possibly their coordinates in 3D space.
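Without a depth sensor, the pinhole model's similar triangles are one way to get rough estimates: pixel size and a known distance give real size, or pixel size and a known real size (e.g. an assumed average face height) give distance. A minimal sketch; the focal length in pixels would come from calibration, and the values in the usage note are invented:

```python
def real_size(pixel_size, distance, focal_px):
    """Approximate real-world size of an object pixel_size tall at `distance`."""
    return pixel_size * distance / focal_px

def distance_from_size(pixel_size, known_size, focal_px):
    """Approximate distance to an object whose real size is known."""
    return known_size * focal_px / pixel_size
```

For example, with a hypothetical focal length of 600 px, a face 120 px tall at 1.5 m works out to 0.3 m of real height, and the same formula run backwards recovers the 1.5 m distance.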

Friday, March 17, 2017

Week 11: Integrating Kinect and ROS

This week we tried to integrate the Kinect One with ROS so that we could obtain depth values of the person from it. But there are very few drivers for integrating the Kinect One with ROS (we tried using iai_kinect, but it did not work). After asking Dr. Ramviyas for advice, he suggested that we might need another stereo camera that works with ROS. We have also simultaneously been researching the various ways we can use computer vision to obtain the position of a person in 3D space as well as their height, which would allow us to ensure that the robot can perform a handshake even if the person is not directly in front of it.