If you really want to surpass others in ability, you have to be super diligent. Work super hard.
Does methodology matter? It matters a great deal, but once your methodology has been refined to the extreme, you still have to return to the most fundamental diligence, even terrifying diligence!
Is terrifying diligence enough? Still not enough.
We may just be spinning in circles, trying to get results with less effort.
So we have to add one more word: "efficient". Efficient, terrifying diligence.
Two days ago, the Ministry of Education announced that 35 colleges and universities have added undergraduate majors in "artificial intelligence", which has drawn widespread public attention.
This means that artificial intelligence will be integrated into every aspect of our lives with greater speed and momentum.
Most people's understanding of artificial intelligence probably began with AlphaGo, the Go-playing program that made professional players grit their teeth.
AlphaGo embodies "efficient, terrifying diligence" vividly: big data and artificial intelligence technology allow it to establish a learning mechanism with feedback at every step.
If you learn more about AlphaGo's "growth process", you may, like me, go from amazement to shudders.
In March 2016, AlphaGo defeated Lee Sedol; in May 2017, AlphaGo defeated Ke Jie.
▲ Ke Jie said: it made a move that left me in despair; I knew I could not win that game.
The AlphaGo that appeared before Ke Jie had changed a great deal. In that one year, it had evolved from version 1.0 to version 2.0.
Where is the difference?
The version 1.0 AlphaGo that defeated Lee Sedol first studied 100,000 game records, taking in a panoramic view of humanity's classic games. It then analyzed positions and their gains and losses, and finally generated its own strategy algorithm.
But later, DeepMind, the company that developed AlphaGo, felt that this was not the strongest form.
Even if it learned 100,000 human games, that would only amount to all the Go masters of past and present, China and abroad, joining forces against a single opponent.
It could beat a Lee Sedol, but it was destined never to rise much above Lee Sedol.
Against an opponent who is strong enough, even a whole swarm of masters may be no match.
Thus, there was the later AlphaGo 2.0.
The biggest difference in AlphaGo 2.0 from its predecessor is that no game records were fed to it at all.
The engineers told AlphaGo only the most basic rules of Go: roughly, black plays first, then white, the two alternate moves, this is how losses are counted, this is how wins are counted... Then two such AlphaGo "Go babies" were set up to play against each other.
Learning from zero and playing from zero, how many games did it play?
On the first day, it played 1 million games to test the waters.
And so it went: 1 million games a day...
AlphaGo 2.0 no longer learns Go from humanity; it learns from itself.
At this point, AlphaGo surely has no idea what classic Go shapes like the "lovesick cut" or the "worry-free corner" are...
But it knows who wins and who loses, and it can even replay a game, scoring each move: judging which move was right, which was wrong, and which could have been better.
With nothing but the rules and the win-loss outcome, AlphaGo established a feedback system, and at a pace of 1 million games per day it began to continuously optimize its algorithm.
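The loop described above, rules only, self-play, and scoring every move from the final result, can be sketched on a toy game. Go itself is far too large for a few lines, so the sketch below uses the simple game of Nim as a stand-in, with a basic tabular learning rule. All names here are illustrative assumptions; this is not DeepMind's actual method, which combined deep neural networks with Monte Carlo tree search.

```python
import random
from collections import defaultdict

# Toy stand-in for Go: Nim with one pile. A player removes 1-3 stones;
# whoever takes the last stone wins. The agent is given only the rules
# and the win/loss signal -- no human games, mirroring AlphaGo 2.0's setup.

PILE = 15
ACTIONS = (1, 2, 3)

def self_play_train(episodes=20000, lr=0.1, eps=0.1, seed=0):
    rng = random.Random(seed)
    # value[(pile, action)] = estimated chance that the player to move wins
    value = defaultdict(lambda: 0.5)

    for _ in range(episodes):
        pile, moves, player = PILE, [], 0
        while pile > 0:
            legal = [a for a in ACTIONS if a <= pile]
            if rng.random() < eps:                        # explore
                a = rng.choice(legal)
            else:                                         # exploit estimates
                a = max(legal, key=lambda m: value[(pile, m)])
            moves.append((player, pile, a))
            pile -= a
            player ^= 1
        winner = moves[-1][0]                             # took the last stone
        # Replay the finished game and score every move from the final
        # result -- the "feedback at every step" the article describes.
        for p, s, a in moves:
            target = 1.0 if p == winner else 0.0
            value[(s, a)] += lr * (target - value[(s, a)])
    return value

def best_move(value, pile):
    legal = [a for a in ACTIONS if a <= pile]
    return max(legal, key=lambda m: value[(pile, m)])
```

After enough self-play games, the value table alone tells the agent which moves tend to win, with no human examples involved; for instance, from a pile of 3 stones it learns to take all 3 and win immediately.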
In this way, every day, keep learning...
Until Ke Jie appeared. By then, AlphaGo and Ke Jie were no longer on the same order of magnitude.
In other words, it had surpassed the level of all human Go players.
Nie Weiping once said: