For humans, a single glance is often enough to draw complex conclusions about an object's density, texture, and weight. The reverse also holds: after touching something with our eyes closed, we instantly form an idea of what it looks like. Robots are a different story. Scientists at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) are hard at work teaching machines to interact with their environment just as effectively.
Researchers at the Massachusetts Institute of Technology have completed another successful experiment in robotics, teaching a machine to identify objects in a way comparable to human ability. They equipped a KUKA robotic arm with a GelSight tactile sensor, and the system learns the connection between tactile and visual data about the objects around it.
The sensor was created in 2014. Its sensitive surface and built-in camera let it build a 3D map of any surface it touches. The device has been tested extensively and has repeatedly proved its effectiveness.
In a recent project, the sensor was connected to a neural network. This pairing lets KUKA infer an object's appearance from tactile impressions and, conversely, predict its texture from visual input alone. The AI was trained on a set of 12,000 video recordings covering 200 different objects.
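The idea of linking the two senses can be illustrated with a toy sketch. This is not the MIT team's model: their system is a deep network trained on thousands of videos, while the snippet below stands in for it with a simple least-squares map from synthetic tactile features to synthetic visual features, then identifies an object from touch alone by nearest-neighbour lookup in visual feature space. All data, dimensions, and the `identify_from_touch` helper are invented for illustration.

```python
# Toy cross-modal matching: learn a linear map from tactile features to
# visual features for 200 known objects, then recognise an object from a
# tactile reading by finding the closest match in visual feature space.
# Synthetic data only; a stand-in for the real deep-network approach.
import numpy as np

rng = np.random.default_rng(0)

n_objects, d_touch, d_vision = 200, 8, 16
touch = rng.normal(size=(n_objects, d_touch))      # tactile features, one row per object
true_map = rng.normal(size=(d_touch, d_vision))    # hidden touch-to-vision relation
vision = touch @ true_map + 0.01 * rng.normal(size=(n_objects, d_vision))

# "Training": fit the touch -> vision map by least squares.
W, *_ = np.linalg.lstsq(touch, vision, rcond=None)

def identify_from_touch(touch_reading):
    """Predict visual features from a tactile reading, return the nearest object's index."""
    predicted = touch_reading @ W
    dists = np.linalg.norm(vision - predicted, axis=1)
    return int(np.argmin(dists))

# Touching object 42 should retrieve object 42 from its predicted appearance.
print(identify_from_touch(touch[42]))
```

The same mapping run in the other direction (vision to touch) would let the system predict texture from an image, which is the second half of what the MIT experiment demonstrates.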