Deeper Learning

A computing system developed by Google researchers in Great Britain has beaten a top human player at the game of Go. That game is an ancient Eastern contest of strategy and intuition that has far more possible plays and strategies than chess. Chess masters were beaten by computers early on, but Go has been the grail for artificial intelligence (AI) experts for decades.

My friend Steve tried to get me into Go years ago. I tried. I bought a board. But it didn't click for me. First off, I hate board games. I hate most games. I never got into or owned any computer or video gaming systems.

I did have some interest in Scrabble, Trivial Pursuit, and Jeopardy-style games, but not enough to sustain hours of play. I liked Othello and Pente, which are very simplified variations on Go. I learned not how to play well, but that I lack the brain that strategizes. Even when I taught my young sons how to play chess, they were able to beat me very soon. I don't think far enough ahead.

Perhaps this gaming helped me realize that I am a natural Zen Master - living in the moment. Tomorrow? Far away. Three moves ahead? I don't see it. I still play the Scrabble-silly Words with Friends on my phone with a few people and lose almost all the time. I just put in words I know (none of those oddball words that work but no one has ever seen before) and I'm not clever enough or interested enough to use the triple letter and double word opportunities. I don't look to see what letters remain or calculate the opponent's rack of letters. My one son won't even play against me. Too easy to win to be any fun for him.

But a computer beating a master of this 2,500-year-old game was big news. It's not the end of the world or the rising of the machines quite yet, but it is an important event for AI.

It was researchers at DeepMind (a company Google acquired in 2014) who set up the machine-versus-man contest. Their system, called AlphaGo, went up against Fan Hui, the reigning European champion, and last October the machine went undefeated in five games.

This month, AlphaGo defeated Korean grandmaster Lee Sedol, finishing the best-of-five series with four wins and one loss. One for the humans!

Of course, the research behind this isn't designed to win Go matches. Google, Facebook, Microsoft and the rest of the gang are interested in and already using deep learning to identify images, recognize spoken words, and understand natural language.

DeepMind is said to combine deep learning with a technology called reinforcement learning, among other methods. They are looking for ways for an autonomous vehicle or robot to learn to perform physical tasks and respond to its environment.
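DeepMind's actual system pairs large neural networks with reinforcement learning at a scale far beyond anything shown here, but the core idea - an agent learning from trial, error, and reward - fits in a few lines. Here is a minimal sketch of tabular Q-learning, the textbook form of reinforcement learning; every name and number in it is invented for the example, and it has no connection to AlphaGo's internals. An agent in a five-square corridor learns that walking right reaches the reward.

```python
import random

# Toy reinforcement-learning example (NOT DeepMind's method): an agent
# learns by trial and error to walk right along a corridor of squares
# 0..4, where square 4 holds a reward.

N_STATES = 5          # positions 0..4; position 4 is the goal
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration

# Q-table: the agent's estimate of future reward for each (state, action)
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def train(episodes=200, seed=0):
    rng = random.Random(seed)
    for _ in range(episodes):
        state = 0
        while state != N_STATES - 1:
            # Mostly exploit what we know; occasionally explore at random
            if rng.random() < EPSILON:
                action = rng.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: Q[(state, a)])
            nxt = min(max(state + action, 0), N_STATES - 1)
            reward = 1.0 if nxt == N_STATES - 1 else 0.0
            # Standard Q-learning update toward reward plus discounted future value
            best_next = max(Q[(nxt, a)] for a in ACTIONS)
            Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
            state = nxt
    return Q

if __name__ == "__main__":
    train()
    # The learned policy: the action with the higher Q value in each non-goal state
    policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
    print(policy)
```

No one tells the agent that right is the correct direction; the preference emerges from the reward signal alone, which is the property that makes the approach attractive for robots and vehicles learning from their environment.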

Computers have always been better than humans at sorting through lots of data very quickly. They are not so great at figuring out what to do with it. Deep learning will probably be best used in research as a supplement to human researchers.

Lee Sedol won that single game by doing something humans have done to computers for years in sci-fi stories from Asimov to Star Trek - confused them. He said after the win that he "tricked" the AlphaGo computer with a series of unorthodox moves.

Perhaps Steve and others will enjoy reading the four-page, move-by-move explanation of how the machine was beaten, but the dummy version is that Lee went against his own best instincts as a player to create confusion for the computer.

He "lured" AlphaGo into an aggressive position and forced it to neglect precision in order to maintain dominance. It made a serious mistake on move 79, and then, "confused" (we do like to make the machine sound more human, even with our language), it continued to make errors as it attempted to correct itself.
I almost feel bad for the computer, but I must root for mankind. I've read too many sci-fi stories. I know what is coming.
