
Google's DeepMind AI Is Now Learning to Play With Physical Objects

Nov 14, 2016 04:30 AM EST

Researchers from Google's DeepMind announced that some of their new artificial intelligence (AI) projects are learning "how" the world works -- akin to how a child experiments with its surroundings. This opens an entirely new breakthrough in the realm of machine learning.

Misha Denil and colleagues at DeepMind announced that they have trained an AI to learn the "physical properties" of objects by interacting with them virtually.

This covers numerous aspects of the physical world, captured by questions such as "Can I sit on this?" or "Is it squishy?"

In their paper, the AI systems were tested in two environments. The first involved introducing five blocks arranged in a tower. Some were stuck together to form larger blocks, while others were not.

The AI had to work out how many distinct blocks there were. It received a reward for successes and a penalty for failures, and was allowed to experiment and interact with the tower to get the answer.

This was not, in fact, the first time simulations like this were done. Facebook had already used simulations of stacked blocks to teach its neural networks to predict whether towers would fall.

The approach is called deep reinforcement learning. According to New Scientist, DeepMind is known for this approach: it had already trained an AI to play Atari games better than humans, a result that led to Google's purchase of the company.

Deep reinforcement learning is an approach in which an AI learns to solve tasks without explicit instructions, much as animals and human infants solve problems.
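The trial-and-error loop described above can be illustrated with a minimal sketch of tabular Q-learning on a toy task. This is not DeepMind's actual setup (their agents use deep neural networks and simulated physics); the five-state environment, reward values, and hyperparameters below are all invented for illustration, but the reward-for-success, penalty-for-failure learning rule is the same idea.

```python
import random

# Toy environment: the agent starts at state 0 and must reach state 4
# by moving left (0) or right (1). Reaching the goal gives +1; every
# other step gives a small penalty, echoing the reward/penalty scheme
# in the article. All numbers here are illustrative assumptions.
N_STATES, GOAL = 5, 4
ACTIONS = [0, 1]  # 0 = left, 1 = right

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == GOAL else -0.1
    return nxt, reward, nxt == GOAL

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.1):
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q-table: state x action
    rng = random.Random(0)
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Mostly exploit the best-known action, sometimes explore.
            a = rng.choice(ACTIONS) if rng.random() < eps else max(ACTIONS, key=lambda x: q[s][x])
            nxt, r, done = step(s, a)
            # Q-learning update: nudge the estimate toward
            # immediate reward plus discounted future value.
            q[s][a] += alpha * (r + gamma * max(q[nxt]) - q[s][a])
            s = nxt
    return q

q = train()
# After training, "right" (1) dominates in every non-goal state.
policy = [max(ACTIONS, key=lambda a: q[s][a]) for s in range(N_STATES - 1)]
print(policy)
```

No instructions about the goal are ever given; the agent discovers the "always move right" policy purely from the reward signal, which is the essence of reinforcement learning.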

The method allows an AI to discover solutions to problems where none is readily apparent.

This matters because virtual worlds can only do so much: in simulation, an AI has a fixed set of possible interactions and never faces the distractions of the real world. An AI that can learn physical properties through interaction will therefore be useful in fields such as robotics.

For instance, Jiajun Wu from the Massachusetts Institute of Technology said this will be very useful for robots navigating difficult terrain.

© 2017 NatureWorldNews.com All rights reserved. Do not reproduce without permission.
