A new camera could enable future NASA rovers to function with more autonomy, cutting down on the wait time for new commands and increasing the possibility of missions to sites much farther out in the solar system.

Currently, scientists upload a new agenda to the Mars rover Curiosity at the beginning of every day on the Red Planet. Traveling at light speed, the instructions take about 20 minutes to arrive. Any round-trip exchange therefore takes twice that, making real-time control of the rover impossible. That is manageable for Mars, but if researchers were to send a rover to Jupiter's moon Europa, for example, the delay would more than double.
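For a sense of the arithmetic, here is a minimal sketch of the light-travel delay. The distances are rough assumptions for illustration only; Earth's separation from Mars and from Jupiter varies widely with the planets' orbits.

```python
# Rough illustration (not from the article): one-way light-travel delay
# for a signal sent from Earth. Distances below are assumptions chosen
# near the far end of each planet's range.

SPEED_OF_LIGHT_KM_S = 299_792  # kilometers per second

def one_way_delay_minutes(distance_km: float) -> float:
    """Return the one-way light-travel time in minutes."""
    return distance_km / SPEED_OF_LIGHT_KM_S / 60

mars_far = 360e6       # Earth-Mars distance ranges from ~55 to ~400 million km
jupiter_mid = 800e6    # Earth-Jupiter distance ranges from ~590 to ~970 million km

print(f"Mars, one way:    {one_way_delay_minutes(mars_far):.0f} min")        # ~20 min
print(f"Mars, round trip: {2 * one_way_delay_minutes(mars_far):.0f} min")    # ~40 min
print(f"Europa, one way:  {one_way_delay_minutes(jupiter_mid):.0f} min")     # ~44 min
```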

"Right now for the rovers, each day is planned out on Earth based on the images the rover took the previous day," Kiri Wagstaff, a computer scientist and geologist at the Jet Propulsion Laboratory, said in a statement. "This is a huge limitation and one of the main bottlenecks for exploration with these spacecraft."

And while NASA recently fired up Curiosity's autonomous navigation system, the rover's main source of scientific discovery is taking pictures, which are transmitted back to Earth at 0.012 megabits per second, roughly 250 times slower than a 3G mobile phone connection. Such a strained connection limits the number of images the rover can send home.
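As a back-of-the-envelope illustration of what that rate means, the snippet below assumes a hypothetical one-megabyte compressed image; the image size is an assumption for the sake of the arithmetic, not a published figure.

```python
# Back-of-the-envelope sketch: how long one image takes at the quoted
# direct-to-Earth rate. The image size is an assumed, illustrative value.

LINK_RATE_MBPS = 0.012   # megabits per second, as quoted above
IMAGE_SIZE_MB = 1.0      # hypothetical 1-megabyte compressed image

image_megabits = IMAGE_SIZE_MB * 8
transmit_seconds = image_megabits / LINK_RATE_MBPS
print(f"~{transmit_seconds / 60:.0f} minutes per image")  # roughly 11 minutes
```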

On the other hand, Wagstaff argues, "If the rover itself could prioritize what's scientifically important, it would suddenly have the capability to take more images than it knows it can send back." And this, she explained, "goes hand in hand with its ability to discover new things that weren't anticipated."

This is where TextureCam comes in. When TextureCam's stereo cameras snap 3D images, a processor quickly analyzes them for different textures, allowing it to distinguish between, say, a rock and the sky. It then uses the size of and distance to the rocks to determine whether any of them are layered and thus scientifically important. Should it deem one potentially interesting, it can either send a high-resolution image back to Earth or direct the rover's main processor to move toward the rock and take a sample.
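As a rough sketch of the idea, and not TextureCam's actual implementation, the example below trains a small random-forest classifier on simple texture statistics computed from synthetic "rock" and "sky" patches, then uses it to decide whether a new patch deserves a closer look. The feature choices, class names, and data are all illustrative assumptions.

```python
# Minimal sketch of texture-based scene labeling, in the spirit of training
# on example images and then letting the camera label new scenes on its own.
# Everything here (features, classes, synthetic data) is an assumption.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def patch_features(patch: np.ndarray) -> np.ndarray:
    """Simple texture descriptors for one image patch: brightness,
    contrast, and mean horizontal/vertical gradient magnitude."""
    gy, gx = np.gradient(patch.astype(float))
    return np.array([patch.mean(), patch.std(),
                     np.abs(gx).mean(), np.abs(gy).mean()])

def synthetic_patch(kind: str, size: int = 16) -> np.ndarray:
    """Stand-in training data: smooth bright 'sky' patches versus
    rougher, darker 'rock' patches."""
    if kind == "sky":
        return 200 + rng.normal(0, 2, (size, size))
    return 100 + rng.normal(0, 25, (size, size))

# Build a labeled training set, as a person would by marking example images.
kinds = ["sky", "rock"]
X, y = [], []
for label, kind in enumerate(kinds):
    for _ in range(200):
        X.append(patch_features(synthetic_patch(kind)))
        y.append(label)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# "Onboard" decision: label a new patch and act only if it looks like rock.
new_patch = synthetic_patch("rock")
label = kinds[clf.predict([patch_features(new_patch)])[0]]
if label == "rock":
    print("Candidate rock: flag for a high-resolution image or a closer look")
else:
    print("Sky or background: skip")
```

The training step mirrors what Wagstaff describes below: a person labels example images first, and the classifier then applies those labels to new scenes on its own.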

"You do have to provide it with some initial training, just like you would with a human, where you give it example images of what to look for," said Wagstaff. "But once it knows what to look for, it can make the same decisions we currently do on Earth."