A popular concept in science fiction is the singularity, a moment of explosive, accelerating growth in technology and artificial intelligence that rewrites the world. One of the better explanations for how this could happen is described by the British sci-fi author Charles Stross as "a hard take-off singularity in which a human-equivalent AI rapidly bootstraps itself to de-facto god-hood."

To translate: If an AI is capable of improving ("bootstrapping") itself, or of building another, smarter AI, then that next version can do the same, and soon you have exponential growth. In theory this could lead to a system rapidly surpassing human intelligence, and, if you're in a Stross novel, probably a computer that's going to start eating people's brains.

The singularity still seems a long way off (at least until we find a way past the limits of Moore's Law), but at Google I/O, we got a glimpse of our future robot overlords from Google CEO Sundar Pichai.

Pichai talking about Google's AI research at I/O 2017.
Lifelong Learning

The new technology is called AutoML, and it uses a machine learning (ML) system to make other machine learning systems faster or more efficient. Essentially, it's a program that teaches other programs how to learn, without actually teaching them any specific skills (it's the liberal arts college of algorithms).
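Google hasn't published how AutoML works under the hood, but the outer-loop/inner-loop shape of the idea is easy to sketch. Below is a minimal, hypothetical Python version that uses plain random search over a made-up architecture space; the real system uses a far more sophisticated, learned controller, and every name and number here is illustrative:

```python
# A minimal sketch of the AutoML idea, not Google's actual system:
# an outer "meta" loop searches over architectures, and an inner loop
# trains each candidate and reports how well it learned.
import random

# Hypothetical search space: depth, width, and learning rate of a
# feed-forward network. The real space is far larger and richer.
SEARCH_SPACE = {
    "num_layers": [1, 2, 3, 4],
    "layer_width": [16, 32, 64, 128],
    "learning_rate": [0.1, 0.01, 0.001],
}

def sample_architecture():
    """Propose one candidate configuration at random."""
    return {key: random.choice(values) for key, values in SEARCH_SPACE.items()}

def train_and_evaluate(arch):
    """Stand-in for the expensive inner loop: train the candidate
    network on real data and return its validation accuracy.
    Here we fake a score so the sketch runs on its own."""
    return random.random()  # replace with real training + validation

best_arch, best_score = None, float("-inf")
for _ in range(20):  # the real thing runs many more trials
    arch = sample_architecture()
    score = train_and_evaluate(arch)
    if score > best_score:
        best_arch, best_score = arch, score
print("best architecture found:", best_arch)
```

The expensive part is the inner call: each candidate has to be trained and scored before the search can move on, which is why (as we'll see) the real thing needs a warehouse of hardware.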

AutoML comes from the Google Brain division (not to be confused with DeepMind, the other Google AI project). Whereas DeepMind is more focused on general-purpose AI that can adapt to new tasks and situations, Google Brain is focused on deep learning, which is all about specializing and excelling in narrowly defined tasks.

According to Google, AutoML has already been used to design neural networks for speech and image recognition. (Fun fact: The networks that accomplish these two tasks are usually nearly identical. Images are typically analyzed by looking for repeating patterns in pixels, and speech is analyzed by turning sound into a graph of frequency over time, which is then analyzed the same way.) The image recognition algorithms designed by AutoML were as good as those designed by humans, and the speech recognition algorithms were, as of February 2017, "0.09 percent better and 1.05x faster than the previous state-of-the-art model."
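To see why the two tasks end up with near-identical networks, here's a short sketch (assuming NumPy and SciPy are available, and using a synthetic tone in place of real speech) that turns one second of audio into the frequency-over-time "image" described above:

```python
# Turning audio into a spectrogram gives a 2-D "picture" of frequency
# over time, which can be fed to the same kind of convolutional
# network used for image recognition. Sketch only; the signal here is
# a synthetic sine wave, not real speech.
import numpy as np
from scipy.signal import spectrogram

fs = 16000                               # 16 kHz sample rate
t = np.arange(fs) / fs                   # one second of audio
audio = np.sin(2 * np.pi * 440 * t)      # stand-in for a speech clip

freqs, times, power = spectrogram(audio, fs=fs)
print(power.shape)  # a 2-D array: frequency bins x time steps,
                    # i.e. an "image" a conv net can scan for patterns
```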

An engineer-designed network on the left, and an AutoML-designed one on the right. Their structures are fundamentally different.
Using AI to build machine learning systems has been a hot area of research since 2016, as researchers at Google, UC Berkeley, OpenAI, and MIT have worked to reduce the time needed to set up and test new neural architectures.

Google's plan is not to bring about the AI apocalypse, but to lower the barrier to entry for companies interested in machine learning research or products. Instead of "automated" or "self-reinforcing," Google's AutoML software might be better described as "self-assembling" or "self-optimizing" (the term Berkeley researchers used for their similar algorithm).

No, That's A Civet

Traditional machine learning takes one of two main approaches. In the first, a computer is fed thousands of labeled pieces of data (say, photos of a cat, and photos not of a cat), and eventually it builds a system to differentiate "cat" from "not a cat." This system may be unique, and we won't necessarily be able to understand exactly how it works, but in the end it'll reliably identify a tabby.
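Here's a toy, self-contained Python version of that labeled-data approach: a logistic-regression "cat detector" trained on made-up two-number examples standing in for photos. Real systems use deep networks on raw pixels, but the fit-to-labeled-examples loop is the same shape:

```python
# A toy version of supervised learning: logistic regression on fake
# 2-feature examples; label 1 = "cat", 0 = "not a cat".
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))               # 200 fake "photos"
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # a simple hidden rule

w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(500):                         # gradient descent on log loss
    p = 1 / (1 + np.exp(-(X @ w + b)))       # predicted probability of "cat"
    w -= lr * X.T @ (p - y) / len(y)
    b -= lr * np.mean(p - y)

accuracy = np.mean((p > 0.5) == y)
print(f"training accuracy: {accuracy:.2f}")  # reliably separates the classes
```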

The other method is how computers can be trained to solve more open-ended problems, like finding an efficient way to walk or escaping a virtual maze. The computer is given a set of parameters to work within, as well as a failure condition, and is then set to experiment. At first it acts at random, but as it rules out more and more failed attempts, it zeroes in on a solution. A combination of these methods is how AlphaGo learned to play board games.
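And here's a sketch of that trial-and-error loop in Python, with a made-up 4x4 grid (standing in for a maze) and a step limit as the failure condition. Real systems learn from each attempt via reinforcement learning rather than guessing blindly, but the experiment-and-keep-the-best loop looks like this:

```python
# Trial and error: try random action sequences, discard failures,
# keep the best success found so far.
import random

GOAL = (3, 3)            # exit of a tiny 4x4 grid
MAX_STEPS = 12           # failure condition: give up after 12 moves
MOVES = {"U": (0, 1), "D": (0, -1), "L": (-1, 0), "R": (1, 0)}

def run_attempt(actions):
    """Walk the grid with a fixed action sequence; return steps to
    the goal, or None if the attempt fails."""
    x, y = 0, 0
    for step, action in enumerate(actions, start=1):
        dx, dy = MOVES[action]
        x = min(max(x + dx, 0), 3)   # clamp to the grid's edges
        y = min(max(y + dy, 0), 3)
        if (x, y) == GOAL:
            return step
    return None          # never reached the exit: a failed attempt

best = None
for _ in range(1000):    # experiment: mostly failures at first
    attempt = [random.choice("UDLR") for _ in range(MAX_STEPS)]
    steps = run_attempt(attempt)
    if steps is not None and (best is None or steps < best):
        best = steps
print("shortest successful escape:", best, "moves")
```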

Once a network is trained with enough inputs, it can reliably classify novel inputs.
Automated machine learning is just one more layer of abstraction. Instead of learning how to identify a cat by examining thousands of cat photos, the algorithm is trying to build the most efficient system for learning to identify cats.
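To make that extra layer of abstraction concrete, the sketch below reuses the toy classifier above but changes the question: instead of asking "how accurate is this model?", the meta-level loop asks "which training setup learns fastest?" (the candidate learning rates and proficiency threshold are illustrative):

```python
# One layer up: score *learning setups* by how quickly they become
# proficient, rather than scoring a single trained model.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

def epochs_to_proficiency(lr, target=0.95, max_epochs=1000):
    """Train the toy classifier and report how many epochs it takes
    to reach the target accuracy -- the meta-level measure of an
    efficient learner."""
    w, b = np.zeros(2), 0.0
    for epoch in range(1, max_epochs + 1):
        p = 1 / (1 + np.exp(-(X @ w + b)))
        if np.mean((p > 0.5) == y) >= target:
            return epoch
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return max_epochs  # failure condition: never became proficient

# The meta-search: keep the setup that learns to spot "cats" fastest.
candidates = [0.001, 0.01, 0.1, 1.0]
best = min(candidates, key=epochs_to_proficiency)
print("most efficient learning rate:", best)
```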

This has a few major benefits. One is that deep learning systems need to be tuned to the inputs they will be analyzing. Although any machine learning system improves itself over time, a poorly designed algorithm may never be as fast or as accurate as a well-designed one, and designing an optimized algorithm is hard. Some programmers swear they operate by intuition, but no matter the method, tuning and refining an algorithm takes time and expertise. A well-designed algorithm is also quicker to train, requiring less time and data before it's proficient at a task.
AutoML iterates through dozens of network structures to find the most efficient model.
Clouds on the Horizon

As you can probably imagine, the process of using a neural net to create and test a set of other neural nets is incredibly expensive in terms of time and computation. To create the image and speech recognition algorithms designed by AutoML, Google reportedly let a cluster of 800 GPUs iterate and crunch numbers for weeks.

Not GPUs, but a huge cluster of the new Cloud TPU chips designed to bring machine learning to Google Cloud.
This is likely not going to be a tool that you can run on your laptop, but it may become a selling point for Google Cloud. Access to AutoML, and the ability to create and refine a machine learning system without a strong background in AI, could give Google a leg up over Amazon, whose AWS cloud service Google has long trailed.

AutoML and similar tools may be the key to making machine learning accessible to a range of scientists and could help bring AI to new fields of study.

If it doesn't eat our brains first, that is.