Tucked away in a corner of Carnegie Mellon University's campus, a computer program diligently sifts through images on the Internet in an effort to build connections between them.

Called the Never Ending Image Learner (NEIL), the project aims to develop common sense in a computer.

The machine sifts through images on the Internet, characterizing them by coloring, lighting and materials in order to build associations, such as that cars are found on roads, buildings are generally vertical and ducks look similar to geese.
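The article does not describe NEIL's actual algorithm, but the basic idea of deriving associations from labeled images can be sketched with a toy co-occurrence count. Everything below, including the label sets and the threshold of two images, is an illustrative assumption, not NEIL's method:

```python
# Toy sketch (NOT NEIL's actual algorithm): associations such as
# "cars are found on roads" can emerge from counting how often
# labels appear together in the same image.
from collections import Counter
from itertools import combinations

# Hypothetical per-image label sets, standing in for detections.
images = [
    {"car", "road", "building"},
    {"car", "road"},
    {"duck", "water"},
    {"goose", "water"},
    {"car", "road", "traffic light"},
]

pair_counts = Counter()
for labels in images:
    for a, b in combinations(sorted(labels), 2):
        pair_counts[(a, b)] += 1

# Label pairs seen together in at least two images become
# candidate associations (an arbitrary threshold for this sketch).
associations = [pair for pair, n in pair_counts.items() if n >= 2]
print(associations)  # → [('car', 'road')]
```

With enough images, frequent pairs like ("car", "road") surface while one-off coincidences fall below the threshold, which is the intuition behind scale-driven learning mentioned later in the article.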

"Images are the best way to learn visual properties," Abhinav Gupta, assistant research professor in Carnegie Mellon's Robotics Institute, said in a statement. "Images also include a lot of common sense information about the world. People learn this by themselves and, with NEIL, we hope that computers will do so as well."

Running 24/7, NEIL has analyzed more than 3 million objects in less than six months, identifying 1,500 types of objects in half a million images and 1,200 types of scenes in hundreds of thousands of pictures. From this, it has connected the dots to form 25,000 associations.

Sometimes NEIL makes incorrect assumptions, and those errors can compound, which is where humans come in, correcting the system as a parent might correct a child. For example, a Google search for "Pink" might lead the computer system to conclude that "pink" is a person, rather than a color.

"People don't always know how or what to teach computers," he said. "But humans are good at telling computers when they are wrong."

Supported by the Office of Naval Research and Google, the project is part of an overall effort to create the world's largest visual structured knowledge base in which objects, scenes, actions, attributes and contextual relationships are labeled and catalogued.

According to Gupta, "What we have learned in the last 5-10 years of computer vision research is that the more data you have, the better computer vision becomes."