Back in cognitive science PhD school, I learned about this cool thing rats
do. They learn super, super fast. Faster than I can learn German.
Well, they learn one type of thing super fast. Not German. They learn
whether some food might be poisonous. It’s called the Garcia Effect.
Here is how it works: they eat something, they feel sick, they never eat it
again. They learn in one exposure that a type of food might be poisonous,
or at least best avoided.
Most theories of learning are not like this! The dominant forms of learning
involve repetition (repeat a lot = learn it fully), not “one-shot” effects.
So there are multiple types of learning going on in rats (and people –
because I am pretty sure I have the Garcia Effect when sampling weird food
too…).
But what about computers? What would it take to learn things from really
sparse data, not big data? This is a cool theme that a cognitive scientist
I admire, Gary Marcus, and his colleagues have been working on.
//
Uber Bets on Artificial Intelligence With Acquisition and New Lab – The New
York Times
Instead of training machines by feeding them enormous amounts of data, what
if computers were capable of learning more like humans by extrapolating a
system of rules from just a few or even a single example?
In recent years, researchers at institutions such as the Massachusetts
Institute of Technology, New York University and the University of Toronto
have also worked on similar theories. Using this approach, some reported a
breakthrough in “one-shot” machine learning last December, in which
artificial intelligence advances surpassed human capabilities for a narrow
set of vision-related tasks.
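This is also roughly how the simplest one-shot classifiers are framed in practice. Below is a minimal sketch in Python (assuming NumPy). The embed() function, the OneShotClassifier class, and the toy 8x8 “images” are all hypothetical stand-ins I made up for illustration; this is not the method behind the result the article mentions, just one simple way to classify from a single stored example per class.

```python
import numpy as np

def embed(x: np.ndarray) -> np.ndarray:
    """Toy embedding: flatten and L2-normalize. A real one-shot system
    would use a similarity function learned on *other* classes first."""
    v = x.ravel().astype(float)
    return v / (np.linalg.norm(v) + 1e-9)

class OneShotClassifier:
    """Nearest-prototype classifier: one stored example per class."""

    def __init__(self):
        self.prototypes = {}  # label -> embedded single example

    def learn_one(self, label, example):
        # "One exposure": memorize a single embedded example.
        self.prototypes[label] = embed(example)

    def predict(self, query):
        # Cosine similarity (vectors are unit-norm, so a dot product).
        q = embed(query)
        return max(self.prototypes, key=lambda lbl: q @ self.prototypes[lbl])

# Tiny demo with synthetic 8x8 "images", one example per class.
rng = np.random.default_rng(0)
safe, poison = rng.random((8, 8)), rng.random((8, 8))
clf = OneShotClassifier()
clf.learn_one("safe", safe)
clf.learn_one("poison", poison)
print(clf.predict(poison + 0.05 * rng.random((8, 8))))  # -> poison
```

All the test-time “learning” here is a single memorization step, which is the appeal: like the rat, the system commits after one exposure. The hard part, which this sketch waves away, is getting an embedding good enough that one example is enough.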