Neural networks usually have to learn everything they need to know about their duties, rather than building on top of existing experiences like real brains do.
Alphabet's DeepMind team hopes to fix that.
They've crafted an algorithm that lets a neural network 'remember' past knowledge and learn more effectively.
The approach is similar to how your own mind works, and might even provide insights into the functioning of human minds.
Much like real synapses, which tend to preserve connections between neurons that have proven useful in the past, the algorithm (known as Elastic Weight Consolidation) gauges how important each connection is to its associated task.
Ask the neural network to learn a new task and the algorithm will safeguard the most valuable connections, linking them to new tasks when relevant. In tests with 10 classic Atari video games, the AI didn't need to learn how to play each game in isolation.
It could learn them sequentially, taking the knowledge accrued in one game and applying it to the others.
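The idea of safeguarding valuable connections can be sketched as a penalty added to the training loss: weights that mattered for an old task are anchored to their old values in proportion to an estimate of their importance. The sketch below is a simplified illustration, not DeepMind's implementation; the function name, the toy weights, and the importance values are all made up for the example.

```python
import numpy as np

def ewc_penalty(theta, theta_star, importance, lam=1.0):
    """Quadratic penalty that discourages moving weights the old task relied on.

    theta      -- current weights while learning the new task
    theta_star -- weights learned on the old task
    importance -- per-weight importance estimate (hypothetical values here)
    lam        -- how strongly to protect old knowledge
    """
    return 0.5 * lam * np.sum(importance * (theta - theta_star) ** 2)

# Toy example: weight 0 mattered a lot for the old task, weight 1 barely did.
theta_star = np.array([1.0, -2.0])
importance = np.array([10.0, 0.1])

# Shifting the important weight by 1.0 costs 100x more than shifting
# the unimportant one by the same amount, so training on the new task
# prefers to reuse or leave intact the connections the old task needs.
cost_important = ewc_penalty(np.array([2.0, -2.0]), theta_star, importance)
cost_unimportant = ewc_penalty(np.array([1.0, -1.0]), theta_star, importance)
print(cost_important, cost_unimportant)
```

During training, this penalty would be added to the new task's ordinary loss, steering learning toward spare capacity instead of overwriting old skills.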
The technology is more than a little rough around the edges. It's a jack of all trades, but a master of none.
A neural network trained on a single game still outperforms it at that game, DeepMind's James Kirkpatrick told Wired.
It's also not ready to adapt to situations on the spot.
The algorithm shows that it's at least possible to give AI memory-like functions, however. And what DeepMind has learned here could shed light on how real brains consolidate information -- it may well validate theories that neuroscientists have held for years.