Google dabbles in a lot these days. The search engine company works on everything from manufacturing smartphones to self-driving cars to life-extension research to intelligent digital assistants. However, almost every one of those projects has one thing in common: artificial intelligence, or "Google AI".
According to a report the company provided to MIT Technology Review, Google published 218 journal or conference papers on machine learning last year, nearly double its 2015 output and a clear indication of where Google sees future work heading. The reason is obvious: everyone is racing to build more powerful AI and stay ahead. Publishing papers gives Google a distinct advantage in the tech world, signaling that it is deeply involved in the development of artificial intelligence and pouring serious effort into research.
But research and theoretical concepts don’t mean much without practically applying them and Google knows this very well. That is why the tech giant is constantly experimenting with different technologies. Back in November last year, the company started a website called A.I. Experiments that is solely dedicated to all the fun and interactive AI projects based on machine learning. There are currently nine Google AI experiments in play. While all of them are neat in their own way, we’ll present to you the ones we think are the coolest, in no particular order.
Google’s Infinite Drum Machine
We start off with a bang, almost literally. With the tagline "Thousands of everyday sounds, organized using machine learning," it's not immediately clear what to expect. A drum machine can sample anything, and that's exactly what this experiment does: it uses machine learning to organize everyday sounds. Thousands of sounds were played to a computer without any descriptions, tags, or hints on how to classify them.
The experiment uses a dimensionality-reduction technique called t-SNE (t-distributed stochastic neighbor embedding). First, the computer creates a special "fingerprint" (a high-dimensional feature vector) for each sample. Then t-SNE compares all the fingerprints and maps them down to two dimensions, placing similar sounds near each other so that humans can easily visualize them. From there, you can explore neighborhoods of similar sounds by clicking on them and make beats using the drum sequencer.
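The fingerprint-to-map step can be sketched with scikit-learn's off-the-shelf t-SNE. The audio fingerprints Google actually computed aren't published, so random vectors stand in for them here; only the shape of the pipeline is the point.

```python
import numpy as np
from sklearn.manifold import TSNE

# Stand-in "fingerprints": in the real experiment each sound gets a learned
# high-dimensional feature vector; here we fake 200 sounds with 64-dim vectors.
rng = np.random.default_rng(0)
fingerprints = rng.normal(size=(200, 64))

# t-SNE maps the high-dimensional fingerprints down to 2-D so that samples
# with similar fingerprints land near each other on screen.
coords = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(fingerprints)

print(coords.shape)  # one (x, y) point per sound
```

Each row of `coords` is where one sound ends up on the 2-D map; clustering emerges because t-SNE preserves which fingerprints were neighbors in the original space.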
Google AI Duet
We remain in the music category with this cool little experiment. A.I. Duet is Google's latest artificial intelligence experiment: a piano bot that responds to any melody you play on a keyboard with one of its own. It uses neural networks to interpret the input. The computer was first played lots of example melodies, learning the correlations between notes and timings and building a model (a neural net) from those examples. When you play something, the neural net searches for melodic and rhythmic patterns it can identify and tries to produce a close, coherent response to your input.
The experiment is open source, built on the Magenta project's neural net. Don't worry, you don't need a real keyboard to have fun with A.I. Duet (although you can connect a MIDI keyboard); your computer keyboard is enough to unleash your inner Mozart.
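A.I. Duet's actual model is a neural net trained by Magenta, but the core idea, learning which note tends to follow which and using that to answer a melody, can be illustrated with a deliberately simpler stand-in: a first-order Markov model over MIDI note numbers. The training melodies below are made up for the example.

```python
from collections import Counter, defaultdict

# Toy training melodies as sequences of MIDI note numbers (60 = middle C).
melodies = [
    [60, 62, 64, 65, 67, 65, 64, 62, 60],
    [60, 64, 67, 64, 60, 62, 64, 62, 60],
]

# Count, for each note, which notes followed it in the training data.
transitions = defaultdict(Counter)
for melody in melodies:
    for a, b in zip(melody, melody[1:]):
        transitions[a][b] += 1

def respond(note, length=4):
    """Continue from `note` by repeatedly picking the most common successor."""
    out = []
    for _ in range(length):
        if note not in transitions:
            break
        note = transitions[note].most_common(1)[0][0]
        out.append(note)
    return out

print(respond(60))
```

The real system goes much further, modeling timing and longer-range structure, but the shape is the same: statistics learned from examples drive the reply to whatever you play.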
Google Quick, Draw!
Among all of the experiments, Quick, Draw! is probably the most fun. It owes a lot to its simplicity, as what we have here is essentially a friendly game of Pictionary. You draw something and the neural network tries its best to guess what it is. It uses the same AI technology Google Translate uses to recognize your handwriting.
With machine learning, a computer learns not just what was drawn but how it was drawn (which strokes came first, and in what direction). Thanks to those movements, Quick, Draw! starts to recognize patterns and improves its accuracy the more you play with it. To make things more fun, you have a 20-second limit to draw (or at least try). Quick, Draw! will call out the words it thinks you're trying to illustrate until it eventually gets it, provided you draw well enough.
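The "how it was drawn" signal is easy to picture once you see how a doodle is represented. The public Quick, Draw! data releases store each drawing as an ordered list of strokes, each stroke a pair of aligned x and y coordinate lists; the tiny feature below (net direction per stroke) is our own illustrative example, not Google's classifier.

```python
# A doodle as an ordered list of strokes, each stroke a pair of aligned
# x- and y-coordinate lists, in the order the player drew them.
doodle = [
    [[0, 10, 20], [0, 0, 0]],   # first stroke: a left-to-right line
    [[20, 20], [0, 15]],        # second stroke: a downward line
]

def stroke_directions(drawing):
    """Net direction (dx, dy) of each stroke -- a simple 'how it was drawn'
    feature a classifier could use alongside the finished shape."""
    dirs = []
    for xs, ys in drawing:
        dirs.append((xs[-1] - xs[0], ys[-1] - ys[0]))
    return dirs

print(stroke_directions(doodle))  # [(20, 0), (0, 15)]
```

Two doodles can look identical as finished images yet differ in stroke order and direction, which is exactly the extra information the game learns from.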
Google Giorgio Cam
It looks like the majority of Google AI researchers and scientists are musicians deep down, as this is another experiment that focuses on music, or sounds, in this particular case. Giorgio Cam (named after Giorgio Moroder, the Italian DJ and pioneer of electronic dance music) is a camera app that turns what it sees into song lyrics, rhymes and all. The app consists of two parts: image recognition and speech synthesis. It uses image recognition to label what it sees and incorporates those labels into the lyrics of a rap song. Admittedly, this one is strictly for fun, with no other real-life usage, but that doesn't make it any less impressive, and it will surely find its audience.
The Google Thing Translator
Since we’ve touched upon the subject of usefulness, Thing Translator goes out of its way to be genuinely handy for translation. Like Giorgio Cam, it uses your camera: take a photo of anything you want to know how to say in a different language. The basic principle is the same. Image recognition labels what it sees, along with a confidence score (the estimated probability of accuracy) for each label. It then sends the result to the second part of the equation, the Translate API, which translates the label with the top confidence score. As expected, the experiment doesn’t have a perfect recognition record, but it comes pretty close most of the time.
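The recognize-then-translate flow boils down to picking the highest-confidence label and translating that one word. This sketch uses made-up labels and a tiny lookup table in place of the real image-recognition and Translate API calls, whose exact interfaces the experiment doesn't document.

```python
# Hypothetical output of an image-recognition step: labels with
# confidence scores (values here are illustrative only).
labels = [("mug", 0.91), ("cup", 0.74), ("vase", 0.12)]

# Stand-in for a translation API call: a tiny English-to-Spanish table.
SPANISH = {"mug": "taza", "cup": "copa", "vase": "florero"}

def translate_top_label(scored_labels, lexicon):
    """Pick the label with the highest confidence score and translate it."""
    best = max(scored_labels, key=lambda pair: pair[1])[0]
    return lexicon.get(best, best)

print(translate_top_label(labels, SPANISH))  # taza
```

Only the top-scoring label ever reaches the translation step, which is why a near-miss in recognition (say, "cup" vs. "mug") changes the answer you get back.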
Since these experiments are all web-browser tools, the translator works on laptops as well. Still, there is no mention of the exact number of languages it supports (“it can do a bunch of different languages”). We’ll just have to find out for ourselves, which might actually be more fun.
The scope of these experiments shows how Google AI, particularly machine learning and neural networks, has exciting new applications on the horizon. Because the experiments use open-source platforms and APIs, the people behind them include not only the usual developers and programmers but also people without traditional AI expertise. This shows that Google aims to turn its A.I. Experiments initiative into a creative space anyone can explore, bringing its free-to-use AI tools to the public. In return, Google gets valuable feedback on how these experiments can be improved, all while helping create the next generation of powerful AI software.
Applications like Giorgio Cam, Quick, Draw! and the others offer a great, almost unique way to get acquainted with how machine learning and neural networks identify input such as sounds, objects, doodles, and text and turn it into something useful. And since neural networks are a machine learning method, they learn from data, which means they will keep learning from their mistakes and improve the more people use them. Similar systems are already at work on Facebook, Twitter, Google Photos, and the like. Just by watching a simple doodle produce a guess and ultimately a correct answer, we see how the technology works, nuances and all. So far, we are loving it.