AlphaGo’s first win against Korean champion opens new chapter in AI
The first stunning win by the strongest Go software ever developed against the best living player, in one of the most closely watched matches between man and machine, has opened a new chapter in artificial intelligence (AI).

Defying earlier expectations of a sweeping win for Korean world champion Lee Se-dol, a 9-dan professional, Google DeepMind's AlphaGo turned the tide in the latter half of the three-and-a-half-hour game, the first round of a five-match series in Seoul on Tuesday, in a surprise upset that forced its human opponent to concede defeat.

What stunned the Go community, along with scientists, was the staggering pace at which Google's latest AI has developed.

Lee is in a different class from AlphaGo's first professional opponent, three-time European Go champion Fan Hui, a 2-dan player whom it crushed in October last year in the first victory by a machine over a professional player without a handicap. Lee is known to play in unfathomable and surprising ways. Even after the machine's monstrous training over the last five months, said to be equivalent to what a human player could accumulate in 1,000 years, most had been skeptical that it could master the 19-by-19 Go board, known as the most complicated and mystifying game devised by humans because it demands an intuition and sensitivity that a machine was, in theory, thought incapable of. Lee said in a press conference that he had been shocked by the formidable way AlphaGo played.

“The advance in AI is much faster than anyone could have imagined,” exclaimed Park Hyung-ju, chair professor in the mathematics department at Ajou University, upon watching the game.

Kam Dong-gun, a professor in the Department of Electrical and Computer Engineering at Ajou University who was involved in developing IBM's AI “Watson,” which beat human players on the U.S. quiz show “Jeopardy!,” said the AlphaGo that sat across from Fan in October was entirely different from the one pitted against the Korean guru.

AlphaGo has undergone stringent training by its developers at DeepMind, studying and processing the configurations, sequences and maneuvers of over 30 million past games. Since exhaustively simulating the game is impossible given its astronomical number of possibilities, said to outnumber all the atoms in the universe, it also trained on mock games played from random configurations. AlphaGo learned to become intuitive, making judgments in the moment by studying the board and predicting the consequences of each move. In short, it has come close to human intelligence. That is how it evolved from an entry-level professional to the level of the highest masters.
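The idea of judging a move by predicting its consequences can be illustrated with a toy sketch. The Python snippet below is a minimal, hypothetical example that scores candidate moves by averaging the outcomes of short random self-play rollouts, loosely in the spirit of the Monte Carlo simulations AlphaGo combines with its neural networks; the board model, the scoring rule and every function name here are simplified stand-ins, not DeepMind's actual implementation.

```python
import random

# Toy illustration only: evaluate a Go move by simulating random
# continuations ("rollouts") and averaging the final scores. The board
# model, scoring rule, and names below are hypothetical simplifications,
# not DeepMind's implementation.

BOARD_SIZE = 9                      # a small board keeps the toy example fast
EMPTY, BLACK, WHITE = 0, 1, 2

def legal_moves(board):
    """Empty intersections (real Go would also enforce ko and suicide rules)."""
    return [i for i, stone in enumerate(board) if stone == EMPTY]

def play(board, move, color):
    """Return a new board with `move` played by `color` (captures ignored)."""
    nxt = list(board)
    nxt[move] = color
    return nxt

def score(board):
    """Crude proxy for territory: black stones minus white stones."""
    return board.count(BLACK) - board.count(WHITE)

def rollout(board, color, max_moves=40):
    """Play random alternating moves for a short mock game and score it."""
    for _ in range(max_moves):
        moves = legal_moves(board)
        if not moves:
            break
        board = play(board, random.choice(moves), color)
        color = WHITE if color == BLACK else BLACK
    return score(board)

def evaluate_move(board, move, color, n_rollouts=50):
    """Estimate a move's value as the average outcome of random rollouts."""
    after = play(board, move, color)
    opponent = WHITE if color == BLACK else BLACK
    avg = sum(rollout(after, opponent) for _ in range(n_rollouts)) / n_rollouts
    return avg if color == BLACK else -avg   # higher is better for `color`

def best_move(board, color):
    """Pick the candidate whose simulated consequences look best."""
    return max(legal_moves(board), key=lambda m: evaluate_move(board, m, color))

if __name__ == "__main__":
    empty_board = [EMPTY] * (BOARD_SIZE * BOARD_SIZE)
    print("Toy engine opens at point", best_move(empty_board, BLACK))
```

In AlphaGo itself, such crude random rollouts are guided and supplemented by deep policy and value networks trained on the millions of positions described above and on games against itself, which is what allows the kind of "intuitive" judgment the article describes rather than brute-force search.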

“Since it wasn’t able to play against professional players, it trained against other AIs. The learning of configurations alone could not have made the software advance so fast over a short period. Google must have a secret algorithm,” observed Kam.

The innate steadiness of a machine may also have helped AlphaGo in the game of nerves. “AI can best be applied in games because the target and the problem to be solved are clear,” said Lee Soo-won, a professor at the School of Software at Soongsil University. “In the first game, a machine that does not make mistakes has won over the human mind.”

So what plans does Google have for its latest prized invention? “The advanced and evolving algorithm can be applied to Google's search engine to make it more precise and interactive for users,” said Kam.

In the longer run, it could serve as the brain behind robots. Google acquired 15 robotics companies before it took over AI developer DeepMind. Among them is Japan's Schaft Inc., which took an unrivaled first place in the Defense Advanced Research Projects Agency (DARPA) Robotics Challenge trials in 2013.

By Won Ho-sup, Lee Young-wook

[ⓒ Pulse by Maeil Business News Korea & mk.co.kr, All rights reserved]