The problem with artificial intelligence (AI) today is that training one is a slow process. Take Siri, for example. Here’s an AI that will turn six this October, yet an average six-year-old is still smarter. Try having an intelligent conversation with Siri; it’s impossible.

AI is now developed using deep learning techniques. This involves feeding an algorithm vast quantities of data so that it can learn to recognise patterns in that data. If an AI is being designed to identify cats, for example, it’ll need to analyse millions of images of cats before it can identify one on its own. Humans have to find millions of photos of cats, tag them as such and then feed them to the system.
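To see how much of that burden falls on humans, here is a minimal sketch of such a supervised training loop, assuming PyTorch and torchvision; the folder path and the tiny network are hypothetical and purely illustrative.

```python
# Minimal supervised-learning sketch: every image must already be labelled by a human.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Hypothetical folder of images sorted into "cat" / "not_cat" subfolders by people.
dataset = datasets.ImageFolder(
    "data/cats_vs_not",
    transform=transforms.Compose([transforms.Resize((64, 64)), transforms.ToTensor()]),
)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# A small convolutional classifier: cat or not-cat.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),
)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for images, labels in loader:   # the labels are the slow, human-made part
    optimiser.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimiser.step()
```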

This is slow, time-consuming work, and it’s limited by the data set. More complex processing will require richer data, and that may not always be easy to come by.

If an AI is to understand, say, conversations, imagine the vast quantities of curated voice data needed just to parse speech. This is also why Google Assistant, Siri, Alexa and Samsung’s Bixby can barely understand English: queries usually have to be carefully structured, spoken in the right accent, and so on.

Wired spoke to researchers who are working on a solution to this problem. One such researcher is Ian Goodfellow, who has worked at Google Brain and at the OpenAI research lab. He is pioneering a new technique for training AI, one that largely removes the need for human hand-holding.

The technique, called a Generative Adversarial Network (GAN), is conceptually simple: pit two AIs against each other, one a creator and the other a critic, and let each strive for perfection in its own domain.

Goodfellow’s initial test involves two AIs: one attempts to create realistic images while the other attempts to find flaws in those images.

Rather than have a human teach either AI to do its job, the two are expected to learn from each other.

The first AI needs to get better at creating images. To produce an image that makes sense, it must build a deeper understanding of what images contain. In learning to create the most convincing fakes, it also gains an understanding of how the world works.

The second AI, in learning to spot those fakes, gains a better understanding of images and of the real world by the same means.
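The adversarial loop can be sketched in a few lines of code. The following is a minimal illustration of the idea, again assuming PyTorch; the network sizes, learning rates and image dimensions are illustrative assumptions, not details from Goodfellow’s work.

```python
# Minimal GAN sketch: a "creator" (generator) and a "critic" (discriminator).
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28

# The creator: turns random noise into a fake image.
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)

# The critic: scores an image as real (1) or fake (0).
discriminator = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_images):
    batch = real_images.size(0)
    real, fake = torch.ones(batch, 1), torch.zeros(batch, 1)

    # 1. Train the critic: reward it for telling real images from generated ones.
    noise = torch.randn(batch, latent_dim)
    generated = generator(noise)
    d_loss = bce(discriminator(real_images), real) + \
             bce(discriminator(generated.detach()), fake)
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2. Train the creator: reward it for fooling the critic.
    g_loss = bce(discriminator(generated), real)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```

Each call to train_step takes a batch of real images; no human-written labels are needed beyond the fact that the batch is real, which is exactly the point the researchers are making.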

Wired quotes Goodfellow as saying, “What an AI cannot create, it does not understand.” This is clearly the guiding principle behind Goodfellow’s work.

Goodfellow and other researchers believe that GANs will help create an “unsupervised learning” network: an AI training school for AI, run by AI.

This might very well be the holy grail of computing.

