
Wednesday, January 03, 2018

Humans as Neurons Strategy

Brought to my attention.  Takes me back to our early days with neural nets, thinking about how error-minimizing reinforcement worked in concert with an architecture of neurons.  We thought of them as very restricted societies, or companies, aiming at some goal: making money, or recognizing patterns.  Some objective function linked to an outcome.  - FAD

The Human Strategy
A Conversation with Alex "Sandy" Pentland, in Edge (podcast)

The idea of a credit assignment function, reinforcing “neurons” that work, is the core of current AI. And if you make those little neurons that get reinforced smarter, the AI gets smarter. So, what would happen if the neurons were people? People have lots of capabilities; they know lots of things about the world; they can perceive things in a human way. What would happen if you had a network of people where you could reinforce the ones that were helping and maybe discourage the ones that weren't?

That begins to sound like a society or a company. We all live in a human social network. We're reinforced for things that seem to help everybody and discouraged from things that are not appreciated. Culture is something that comes from a sort of human AI, the function of reinforcing the good and penalizing the bad, but applied to humans and human problems. Once you realize that you can take this general framework of AI and create a human AI, the question becomes, what's the right way to do that? Is it a safe idea? Is it completely crazy?  .... "
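The credit-assignment idea Pentland describes can be made concrete with a small sketch. The code below is illustrative only, not from the interview: it uses a simple multiplicative-weights update in which "agents" (the people-as-neurons) whose advice matched the outcome gain credit and the rest lose it. The agent names, learning rate, and update rule are all hypothetical choices for illustration.

```python
# Hypothetical sketch of a "human AI" credit-assignment loop:
# reinforce the agents whose advice helped, discount those whose didn't.

def update_credit(weights, advice, outcome, lr=0.5):
    """Multiplicative-weights update: boost agents whose advice
    matched the outcome, shrink the others, then renormalize."""
    new = {}
    for agent, w in weights.items():
        if advice[agent] == outcome:
            new[agent] = w * (1 + lr)   # reinforce the helpful "neuron"
        else:
            new[agent] = w * (1 - lr)   # discourage the unhelpful one
    total = sum(new.values())
    return {a: w / total for a, w in new.items()}

# Hypothetical usage: three advisors answer a binary question; outcome is 1.
weights = {"alice": 1/3, "bob": 1/3, "carol": 1/3}
advice = {"alice": 1, "bob": 0, "carol": 1}
weights = update_credit(weights, advice, outcome=1)
# alice and carol gain credit; bob's share of the weight shrinks
```

Repeated over many questions, credit concentrates on the agents who are reliably helpful, which is the reinforcement dynamic the quoted passage maps onto societies and companies.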

ALEX "SANDY" PENTLAND is a professor at MIT and director of the MIT Connection Science and Human Dynamics labs. He is a founding member of advisory boards for Google, AT&T, Nissan, and the UN Secretary General, and the author of Social Physics and Honest Signals. Sandy Pentland's Edge Bio page ... 
