MIT Invites Skynet By Teaching Robots To Teach Other Robots

Has MIT finally cracked the secret of the robot revolution? Or will their teaching robots simply make the production line a little easier?

You would think that years of science fiction would have warned us off this. Sure, we've embraced Big Data in a big way, and most IoT devices now have some kind of AI constantly learning our habits, and when we're at our weakest. But MIT took it to the next level by teaching robots how to teach other robots.

Yes. Gone are the heady days when one weary programmer could waste days programming a simple robotic arm not to crush an egg. Now we finally have a way for the machines to teach each other and not just themselves.

Designed by MIT’s Computer Science and Artificial Intelligence Laboratory, or CSAIL, the process is known as C-LEARN. Researchers Claudia Pérez-D’Arpino and Julie A. Shah came up with it by fusing the two current techniques for teaching robots skills. The first is learning by demonstration, where the robot watches a task being performed and then replicates it. The other is motion planning, which requires an expert to craft the plan by hand, since every geometric constraint on the manipulators has to be specified explicitly.

So the team went and made their own method. C-LEARN combines the two, so that a robot can learn a task's geometric constraints through observation.
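MIT hasn't published its implementation here, but the core idea of spotting a geometric constraint in a demonstration can be sketched in a few lines: if some quantity barely changes while the task is performed, treat it as constrained. Everything below (the function name, the orientation tuples, the tolerance) is purely illustrative, not C-LEARN's actual code.

```python
def infer_axis_constraint(demo_orientations, tolerance=0.05):
    """Guess which orientation axes stayed fixed during a demo.

    demo_orientations: list of (roll, pitch, yaw) tuples, in radians,
    observed while a human performs the task. An axis whose spread
    stays within `tolerance` is recorded as a geometric constraint,
    pinned to its average demonstrated value.
    """
    constraints = {}
    for i, name in enumerate(("roll", "pitch", "yaw")):
        values = [pose[i] for pose in demo_orientations]
        if max(values) - min(values) <= tolerance:
            # This axis barely varied: treat it as constrained.
            constraints[name] = sum(values) / len(values)
    return constraints

# A demo where the gripper stays level (roll/pitch fixed) while yaw varies:
demo = [(0.01, 0.00, 0.2), (0.02, 0.01, 0.9), (0.00, 0.01, 1.6)]
print(infer_axis_constraint(demo))  # only roll and pitch come back constrained
```

The point is that nobody typed "keep the gripper level" into a config file; the constraint fell out of watching the demonstration.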

What is a geometric constraint? It’s just the physical limits of an object or task, like the dimensions of the human neck we’d rather our robot overlords not strangle.

The system breaks the task down into a sequence of steps known as keyframes, together with the geometric constraints that apply at each keyframe. This lets someone without a coder’s experience or knowledge teach a robot a series of tasks by providing part of the information about how the task is performed and then showing the robot a demo of the task itself.
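A keyframe-plus-constraints decomposition like the one described above boils down to a very simple data structure. The classes and field names here are assumptions for illustration, not the paper's actual representation:

```python
from dataclasses import dataclass, field

@dataclass
class Keyframe:
    """One step of a demonstrated task (names are illustrative)."""
    pose: tuple                  # target end-effector position (x, y, z)
    constraints: list = field(default_factory=list)  # e.g. "gripper stays vertical"

@dataclass
class TaskPlan:
    name: str
    keyframes: list = field(default_factory=list)

    def add_keyframe(self, pose, constraints=()):
        self.keyframes.append(Keyframe(pose, list(constraints)))

# Teaching by demonstration then amounts to recording keyframes in order:
plan = TaskPlan("pick-and-place")
plan.add_keyframe((0.3, 0.0, 0.2), ["approach from above"])
plan.add_keyframe((0.3, 0.0, 0.05), ["gripper stays vertical"])
plan.add_keyframe((0.6, 0.2, 0.2), ["hold object level"])
print(len(plan.keyframes))  # 3
```

Because the plan is just poses and constraints, no motion-planning expertise is needed to author one; the demo supplies the keyframes.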

As a result, that robot can then be used to teach the skill to another robot, in spite of design differences between the two. Naturally, the obvious application is large-scale production robots teaching each other how to perform a series of tasks from a simple demonstration.
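Why does transfer survive design differences? Because the plan describes *where the end-effector should be*, not which joints to move; each robot translates the same waypoints through its own inverse kinematics. The toy IK functions below are stand-ins invented for this sketch, not real robot models:

```python
# A plan stored as robot-agnostic waypoints (hypothetical format):
plan = [(0.3, 0.0, 0.2), (0.3, 0.0, 0.05), (0.6, 0.2, 0.2)]

def toy_ik_arm_a(pose):
    # Placeholder inverse kinematics for one arm design.
    x, y, z = pose
    return ("armA", round(x + y, 2), round(z, 2))

def toy_ik_arm_b(pose):
    # A differently built arm maps the same waypoint to different joints.
    x, y, z = pose
    return ("armB", round(x - y, 2), round(z * 2, 2))

def execute(plan, inverse_kinematics):
    """Replay one demonstrated plan on any robot that supplies its own IK."""
    return [inverse_kinematics(p) for p in plan]

print(execute(plan, toy_ik_arm_a))
print(execute(plan, toy_ik_arm_b))
```

One demonstration, two very different arms, zero reprogramming: that's the production-line pitch.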

So far the researchers at MIT have only published a paper on the concept, in which they report their findings as a success. We still have a long way to go before the robot uprising arrives and we all run in fear for our lives from the machines.