Preface

When AlphaGo hit the news in early 2016, we were extremely excited about this groundbreaking advancement in computer Go. At the time, it was widely believed that human-level artificial intelligence for the game of Go was at least 10 years in the future. We followed the games closely and didn't shy away from waking up early or staying up late to watch the live broadcasts. Indeed, we were in good company: millions of people around the globe were captivated by the games against Fan Hui, Lee Sedol, and later Ke Jie and others.

Shortly after the emergence of AlphaGo, we started work on a little open source library we called BetaGo (see http://github.com/maxpumperla/betago) to see whether we could implement some of the core mechanisms powering AlphaGo ourselves. The idea of BetaGo was to illustrate some of the techniques behind AlphaGo for interested developers. Although we were realistic enough to accept that we didn't have the resources (time, computing power, or intelligence) to compete with DeepMind's incredible achievement, it was a lot of fun to create our own Go bot.