2016/03/09 14:48:29
Subject: Google AI Wins First Match Against Korean Go Game Champion
I'm not a Go player myself, but I've always found the man vs. machine matches interesting. First the computers beat the chess masters, and now they're winning the most complicated game ever made. After DeepMind, you have to wonder what comes next: Deep Thought, or Skynet?
Google DeepMind’s artificial intelligence system beat a top-ranked player of the board game Go in a televised match in South Korea, providing the first evidence that the company’s software has attained super-human status at a challenging 2,500-year-old strategy contest.
The Internet company is playing a five-match tournament against Lee Sedol, who Google said has been the top-ranked Go player of the past decade, to show off the capabilities of software developed by its London-based AI subsidiary DeepMind. Lee’s loss was announced by match organizers in Seoul.
“It’ll never get tired and it’ll never get intimidated,” said DeepMind co-founder Demis Hassabis, at a press conference on Tuesday ahead of the match. “These are the main advantages.”
DeepMind, part of Mountain View, California-based Alphabet Inc., revealed its software, called AlphaGo, in January in a paper published in science journal Nature. AlphaGo had attained expert human-level performance at Go, and had beaten European professional Go player Fan Hui in a match held in the company’s London offices in October. When the paper was published, experts said they had thought a Go-playing AI system was five to 10 years away. The first win against Lee is further confirmation of the power of DeepMind’s system.
Neural Networks
What sets DeepMind’s approach apart from traditional Go-playing software is its use of a technology called a neural network, which lets computers learn from experience, rather than specific programming. This enables it to learn by studying example games, then by playing millions of games against itself, inferring the rules and, eventually, developing long-term strategies it can use to try to win. The system also uses a more traditional computing technique called Monte Carlo Tree Search.
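The Monte Carlo Tree Search idea mentioned above can be illustrated with a toy sketch. This is nothing like AlphaGo's actual engine, just the core trick of estimating moves by random playouts, shown on a trivial made-up game (take 1 or 2 stones; whoever takes the last stone wins):

```python
import random

def legal_moves(n):
    # In this toy game you may take 1 or 2 stones, never more than remain.
    return [m for m in (1, 2) if m <= n]

def rollout(n, player):
    # Play uniformly random moves to the end; return the winning player (0 or 1).
    while True:
        n -= random.choice(legal_moves(n))
        if n == 0:
            return player
        player = 1 - player

def mcts_move(n, player, simulations=2000):
    # Monte Carlo estimate: sample random playouts after each legal move
    # and pick the move with the best observed win rate.
    wins = {m: 0 for m in legal_moves(n)}
    plays = {m: 0 for m in legal_moves(n)}
    for _ in range(simulations):
        m = random.choice(legal_moves(n))
        winner = player if n - m == 0 else rollout(n - m, 1 - player)
        plays[m] += 1
        if winner == player:
            wins[m] += 1
    return max(wins, key=lambda m: wins[m] / max(plays[m], 1))

# Best move with 4 stones left (taking 1 leaves the opponent a losing position).
print(mcts_move(4, 0))
```

Real Go engines search a deep tree rather than a single level, and AlphaGo biases these playouts with its neural networks instead of playing uniformly at random, but the statistical idea is the same.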
Go, also known as Baduk, is a game that sees players battle to take territory on a board by taking turns placing stones on the intersections of a grid. There is only one type of piece and players choose to play as either white or black. On a 19-by-19 Go grid, there are more possible board configurations than there are atoms in the known universe.
“I’m somewhat shocked,” Lee told reporters after the match. “I didn’t really imagine I’d lose. I didn’t foresee AlphaGo would play Go so perfectly.”
AI Uses
The game is played widely in Asia, with tournaments awarding prizes in the hundreds of thousands of dollars. Top players like Lee are treated like celebrities -- DeepMind first contacted him through his agent, rather than reaching out directly, Hassabis said in January.
“Whenever you have a large number of people using something, we can probably use machine intelligence to make it more efficient,” Alphabet Chairman Eric Schmidt said in Seoul.
Google already uses AI across its products, for services like automatically writing e-mails, recommending YouTube videos and providing the brains of its in-development self-driving cars. The next wave of AI technologies will use techniques akin to those developed by DeepMind, but the company hasn’t yet disclosed any particular products that use DeepMind’s techniques.
“Health care is one of the main things we’re looking at next,” Hassabis said. “The system and techniques that we’re using for AlphaGo should be useful for anywhere, any kind of problem where there’s lots and lots of data and you’re trying to understand the structure in that data and make some kind of decision.”
2016/03/10 09:20:25
Subject: Google AI Wins First Match Against Korean Go Game Champion
Second game, too. We might see a reverse sweep, but I sincerely doubt it given that AlphaGo is actually starting to innovate. Heck, Lee Sedol actually left the room during tonight's game after AlphaGo made a fairly unorthodox move early on. I think that heavily contributed to his eventual loss, as he hit overtime a lot earlier than AlphaGo did, and I think that playing from a time deficit didn't help his mindset.
2016/03/10 11:33:57
Subject: Google AI Wins First Match Against Korean Go Game Champion
Is this really surprising, though? It is a tool designed to operate within a very specific set of parameters, and it appears to be functioning correctly. We aren't surprised when a calculator does math faster than a human, or when a computer calculates and renders complex geometries and physics, but for some reason humans have trouble accepting that we can build something that works out solutions to different rule sets faster and better than a human can.
It is a step forward, and the "learning" aspect is really impressive. Self-correction and updating for the win!
-James
2016/03/10 16:57:00
Subject: Google AI Wins First Match Against Korean Go Game Champion
jmurph wrote: Is this really surprising, though? It is a tool designed to operate within a very specific set of parameters, and it appears to be functioning correctly. We aren't surprised when a calculator does math faster than a human, or when a computer calculates and renders complex geometries and physics, but for some reason humans have trouble accepting that we can build something that works out solutions to different rule sets faster and better than a human can.
It is a step forward, and the "learning" aspect is really impressive. Self-correction and updating for the win!
It is, because AlphaGo doesn't operate based on simple parameters. Making a simple deterministic "if this then that" AI for Go is impossible, as the number of potential game states is larger than the number of atoms in the universe. This means that the AI has to take a much more human approach to learning Go, focusing on a far smaller subset of possible moves every turn, and innovating based on what it knows are high-percentage moves in similar situations, weighted by its relative strength at various points on the board. It's really exciting when you look at the advances in pattern recognition and neural networks that made AlphaGo possible.
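That "far smaller subset of possible moves" idea can be sketched in a few lines. This is a hypothetical illustration, not AlphaGo's code: imagine a policy network has already scored each legal move, and the engine keeps only the smallest set of moves covering most of the probability mass before searching deeper (the move names and scores are made up):

```python
import math

def softmax(scores):
    # Convert raw move scores into a probability distribution.
    mx = max(scores.values())
    exps = {m: math.exp(s - mx) for m, s in scores.items()}
    total = sum(exps.values())
    return {m: e / total for m, e in exps.items()}

def prune_moves(scores, mass=0.95):
    # Keep the smallest set of moves covering `mass` of the probability.
    probs = softmax(scores)
    kept, covered = [], 0.0
    for move, p in sorted(probs.items(), key=lambda kv: -kv[1]):
        kept.append(move)
        covered += p
        if covered >= mass:
            break
    return kept

# A 19x19 board has 361 points, but only a couple look promising here.
scores = {"D4": 4.0, "Q16": 3.8, "K10": 1.0, "A1": -5.0, "T19": -5.0}
print(prune_moves(scores))  # → ['D4', 'Q16']
```

The hard part, of course, is producing good scores in the first place; that is what the millions of self-play games train the network to do.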
2016/03/11 01:59:44
Subject: Google AI Wins First Match Against Korean Go Game Champion
hotsauceman1 wrote: but, can it win a game of 40k? And will it understand the rules?
No one understands the rules for 40k. If they could translate them for a computer to understand, they'd have a more solid rule set.
alphago would go full skynet just trying to understand the psychic phase
And this is an impressive feat, jmurph. It is surprising in that this is the first computer program to understand the game well enough to win against the masters, just as it was historic when a computer finally beat the chess masters. Computers are fast at math, but that's child's play; this program is a true step forward in AI. It learns from experience by playing millions of games against itself, and it develops long-term strategies it can use to win.
2016/03/11 02:15:34
Subject: Google AI Wins First Match Against Korean Go Game Champion
Well, it's over for the most part: AlphaGo is now 3-0, but they'll still play the remaining two games. We might actually see a true AI within our lifetimes, and that should really make things interesting around here.
A Google-developed computer programme took an unassailable 3-0 lead in its match-up with a South Korean Go grandmaster on Saturday -- marking a major breakthrough for a new style of "intuitive" artificial intelligence (AI).
The programme, AlphaGo, secured victory in the five-match series with its third consecutive win over Lee Se-Dol -- one of the ancient game's greatest modern players, with 18 international titles to his name.
Lee, who has topped the world ranking for much of the past decade and confidently predicted an easy victory when accepting the AlphaGo challenge, now finds himself fighting to avoid a whitewash defeat in the two remaining games on Sunday and Tuesday.
"AlphaGo played consistently from beginning to the end while Lee, as he is only human, showed some mental vulnerability," said one of Lee's former coaches, Kwon Kap-Yong.
"The machine was increasingly gaining the upper hand as the series progressed," Kwon said.
Lee Se-Dol, one of the greatest modern players of the ancient board game Go, speaks during a press conference after the second game of the Google DeepMind Challenge Match, in Seoul, on March 10, 2016
For AlphaGo's creators, Google DeepMind, victory goes far beyond the $1.0 million prize on offer in Seoul, proving that AI can go beyond superhuman number-crunching.
The most famous AI victory to date came in 1997, when the IBM-developed supercomputer Deep Blue beat Garry Kasparov, the then-world chess champion, in its second attempt.
But a true mastery of Go, which has more possible move configurations than there are atoms in the universe, had long been considered the exclusive province of humans -- until now.
- 'Mt Everest' of AI -
AlphaGo's creators had described Go as the "Mt Everest" of AI, citing the complexity of the game, which requires a degree of creativity and intuition to prevail over an opponent.
Go game fans watch a TV screen broadcasting live footage of the Google DeepMind Challenge Match, at the Korea Baduk Association in Seoul, on March 9, 2016
AlphaGo first came to prominence with a 5-0 drubbing of European champion Fan Hui last October, but it had been expected to struggle against 33-year-old Lee.
Creating "general" or multi-purpose, rather than "narrow", task-specific intelligence, is the ultimate goal in AI -- something resembling human reasoning based on a variety of inputs and, crucially, self-learning.
In the case of Go, Google developers realised a more "human-like" approach would win over brute computing power.
The 3,000-year-old Chinese board game involves two players alternately laying black and white stones on a chequerboard-like grid of 19 lines by 19 lines. The winner is the player who manages to seal off more territory.
AlphaGo uses two sets of "deep neural networks" that allow it to crunch data in a more human-like fashion -- discarding millions of potential moves that human players would instinctively know were pointless.
It also employs algorithms that allow it to learn and improve from matchplay experience.
It is able to predict a winner from each move, thus reducing the search base to manageable levels -- something co-creator David Silver has described as "more akin to imagination".
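Combining that win prediction with the search is often done with a selection rule of roughly this shape. The following is a hedged sketch with made-up move names and numbers, not DeepMind's implementation: each candidate move gets its estimated win rate plus an exploration bonus that favours moves the prior likes but the search has rarely visited:

```python
import math

def puct_score(win_rate, prior, parent_visits, visits, c=1.5):
    # Exploit moves with good win estimates, but keep exploring
    # promising moves that have only been visited a few times.
    exploration = c * prior * math.sqrt(parent_visits) / (1 + visits)
    return win_rate + exploration

def select_move(stats, c=1.5):
    # stats: move -> (win_rate, prior, visits); pick the highest score.
    parent_visits = sum(v for _, _, v in stats.values())
    return max(stats, key=lambda m: puct_score(stats[m][0], stats[m][1],
                                               parent_visits, stats[m][2], c))

stats = {
    "P10": (0.58, 0.40, 90),   # well-explored, strong win estimate
    "C3":  (0.55, 0.35, 8),    # barely explored, decent prior
    "R4":  (0.30, 0.05, 2),    # prior and value both dislike it
}
print(select_move(stats))  # → C3
```

Here the under-visited "C3" wins out over the well-explored "P10" because its exploration bonus is still large; as its visit count grows, the score converges toward the plain win-rate estimate.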
2016/03/13 20:00:59
Subject: Google AI Wins First Match Against Korean Go Game Champion