A computer program has learned to play classic Atari games all on its own, and Google, which backs the startup DeepMind Technologies, is hailing it as a significant achievement on the road to true artificial intelligence.

A computer playing a game may not sound like a huge accomplishment. After all, the videogame industry ships increasingly complex in-game AI systems to control non-player characters with every next-generation console and PC release. Even Deep Blue, IBM's headline-making chess computer, trounced world champion Garry Kasparov at his own game nearly two decades ago.

However, what sets DeepMind's program apart from the industry standard, and even from Deep Blue, is that it is not simply following a complex set of rules and parameters (i.e., if the player goes here, take such-and-such an action). In fact, DeepMind's program started playing Atari videogames without knowing any rules or objectives. All it knew was that a high score was good, and it worked things out on its own from there.

The cofounder of DeepMind, Demis Hassabis, called the achievement "the first significant rung on the ladder to proving general learning systems can work."

"It's the first time that anyone has built a single general learning system that can learn directly from experience," he announced to the press last week. "This is going from pixels to actions, and it can work on a challenging task even humans find difficult. It's a baby step, but an important one."

The details of DeepMind's accomplishment have since been published in the journal Nature.

But what exactly does Hassabis mean by "from pixels to actions"? Put simply, the DeepMind team set their general AI program, called the deep Q-network (DQN), loose on each of 49 Atari 2600 console games (think Space Invaders and the like). The program was given just two kinds of input: the pixels on each game's screen and the score achieved at the end of each playthrough.
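To make "pixels to actions" concrete, here is a minimal sketch of what such an interface might look like, in Python. The class and method names are hypothetical stand-ins, not DeepMind's actual code; the point is only that the agent's entire view of the world is a frame of pixels plus a score signal, and its entire output is a joystick action.

```python
from typing import Protocol, Sequence

class AtariAgent(Protocol):
    """Hypothetical 'pixels to actions' interface (names are illustrative).

    The agent is given nothing but the raw screen pixels and the change
    in score since the last frame, and must answer with a joystick action.
    No rules, no goals, no explanation of what any of it means.
    """

    def act(self, pixels: Sequence[int], score_delta: float) -> int:
        """Return the index of the joystick action to take this frame."""
        ...
```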

And that's it. No base rules or explanations of what that information meant were provided. Instead, the program was simply left to play the game again and again, learning what earns a higher score and making its own decisions based on that accumulated knowledge.
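For readers curious how learning from a score alone can work at all, here is a minimal, self-contained sketch of tabular Q-learning, the classic algorithm that DQN extends with a deep neural network. The six-cell "corridor game" below is my own toy stand-in, not one of the Atari titles, but the update rule is the genuine reinforcement-learning idea: nudge each action's value estimate toward the reward it just produced plus the best value expected afterward.

```python
import random

# Toy "game": a corridor of 6 cells. Reaching the right end scores a point;
# every step costs a little. The agent is never told any of this.
N_STATES = 6
ACTIONS = [-1, +1]                 # move left or move right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

# Q-table: estimated future score for each (state, action) pair, unknown at first.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Advance the game one frame; return (next_state, reward, done)."""
    nxt = max(0, min(N_STATES - 1, state + action))
    if nxt == N_STATES - 1:
        return nxt, 1.0, True      # the "high score" event
    return nxt, -0.01, False       # small cost per step

for episode in range(500):         # play the game again and again
    state, done = 0, False
    while not done:
        # Mostly exploit what has worked so far; occasionally explore.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        # Q-learning update: move the estimate toward
        # (reward now) + (discounted best value from where we landed).
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

# The learned policy heads right from every non-terminal cell,
# discovered purely from score feedback.
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)])
```

DQN's leap is to swap the lookup table for a deep convolutional network, so the same trial-and-error rule can cope with millions of raw pixel states instead of six corridor cells.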

This is not unlike other learning AIs that Nature World News has previously discussed, such as LEVAN, a program that crowdsources information from the internet about words and pictures to, as its name implies, "Learn EVerything about ANything."

However, while LEVAN at least had to start with a base knowledge of English and some simple words, DQN started with only the simple rule that a higher score was good and a lower score was bad.

Doing it Doggy Style

This approach is called reinforcement learning, and it wouldn't be unreasonable to compare it to how you train a dog. When a dog is first brought into a household, he knows absolutely nothing about his surroundings. He looks for social and behavioral cues, which are often shaped with positive reinforcement, like a treat or petting, or negative reinforcement, like scolding or a spritz of lemon water in the mouth.

"Dogs are almost information junkies," John Bradshaw, an expert at the University of Bristol in the UK, recently told BBC News when commenting on a study about canine trust.

Therefore, he added, "dogs whose owners are inconsistent to them often have behavioral disorders."

Thankfully for DQN, Atari games are certainly not inconsistent, following the same rules every time, even if small adjustments have to be made from level to level. And because the program starts with no base knowledge, it has nothing to "unlearn" when it moves on to an entirely new game.

"It is worth noting that the games in which DQN excels are extremely varied in their nature, from side-scrolling shooters (River Raid) to boxing games (Boxing) and three-dimensional car-racing games (Enduro)," the DeepMind team wrote in their paper.

However, things are about to get much more challenging. The team now hopes to introduce their program to the first generation of 3D games from the late '90s. This is where that last category, 3D racing games, gets much more complex. It's also what Hassabis is most excited about.

"If this can drive the car in a racing game, then potentially, with a few real tweaks, it should be able to drive a real car," he excitedly told reporters. "That's the ultimate aim."

I certainly wouldn't mind a self-driving car... just as long as it doesn't have to go through a few hundred crashes before learning the morning commute.


- follow Brian on Twitter @BS_ButNoBS