
Why are Google’s Neural Networks Making these Brain-Melting Images?

Quick, look at the byline below the title of this post. You recognized the tiny photo of me, right? That simple task took the billions of neurons in your brain a mere instant. Computer image recognition software is good, and getting better, but no tech can yet compete with the balls of jelly and few kilos of meat sitting in your skull.

Case in point, here’s what happens when Google’s image-recognition programs are primed to find signals in the noise:

[Image: GoogleIncep_3]

The acid trip you’re looking at is the result of a feedback loop in an artificial neural network, a stack of algorithmic “layers” that act a bit like the neurons in your brain. These networks can learn: each connection between artificial neurons can adjust its own “weighting” and priority, and with practice the network gets much better. Google trains its image-recognizing neural networks by feeding them millions of photos and adjusting the layers until the output is correct. The ultimate goal is a program that could look at any picture with a dog in it, for example, no matter the lighting, orientation, or color, and tell us there’s a dog in the photo.
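
To make that a little more concrete, here is a minimal, hypothetical sketch (in PyTorch, which is not necessarily what Google uses) of the idea: a few layers of artificial neurons whose connection weights get nudged after every batch of labeled photos so the guesses improve. Names like `TinyNet` are invented purely for illustration.

```python
# A toy sketch of a layered image classifier "learning" by adjusting its
# connection weights. This is NOT Google's code, just an illustration.
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    """A few layers of artificial 'neurons' stacked on top of each other."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=3, padding=1),   # early layer: edges and lines
            nn.ReLU(),
            nn.Conv2d(8, 16, kernel_size=3, padding=1),  # later layer: shapes and textures
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(16, num_classes),                  # final layer: "dog" vs. "not dog"
        )

    def forward(self, x):
        return self.layers(x)

net = TinyNet()
optimizer = torch.optim.SGD(net.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Stand-in for "millions of photos": random images with random labels.
images = torch.rand(32, 3, 64, 64)
labels = torch.randint(0, 2, (32,))

for step in range(10):
    optimizer.zero_grad()
    output = net(images)
    loss = loss_fn(output, labels)   # how wrong was the guess?
    loss.backward()                  # figure out which connections to blame
    optimizer.step()                 # adjust each connection's weighting a little
```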

For these trippy images, however, Google engineers decided to look at what each layer of the network was doing as it searched for certain image features, starting with basic components like edges and lines and ending with “dog.” For example, this is what it looks like when the early layers are asked to find edges, and the enhanced result is fed back in to find even more edges, forming a feedback loop:

[Image: GoogleIncep_1. Credit: Zachi Evenor; Günther Noack]
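
Under the hood, that loop amounts to running an image partway through the network, measuring how strongly a chosen layer responds, and then tweaking the image itself so that response gets stronger, over and over. Here is a rough, hypothetical sketch of that idea; it reuses the toy `TinyNet` from the earlier snippet, and the `amplify` helper is made up for illustration rather than being Google’s actual method.

```python
# Feedback-loop sketch: nudge the IMAGE (not the network) so that one
# chosen layer's activations grow stronger, then repeat.
import torch

def amplify(net, image, layer_index, steps=20, lr=0.05):
    """Gradient-ascend on the image to boost one layer's activations."""
    image = image.clone().requires_grad_(True)
    chosen_layers = net.layers[:layer_index + 1]   # run only up to the chosen layer
    for _ in range(steps):
        activations = chosen_layers(image)
        score = activations.norm()        # "how strongly does this layer fire?"
        score.backward()
        with torch.no_grad():
            # step the image in the direction that excites the layer more
            image += lr * image.grad / (image.grad.norm() + 1e-8)
            image.clamp_(0, 1)            # keep valid pixel values
            image.grad.zero_()
    return image.detach()

# Start from a photo, or from pure random static as in the building example below.
start = torch.rand(1, 3, 64, 64)
dreamed = amplify(net, start, layer_index=0)   # early layer: more and more edges
```

Starting from a real photo exaggerates whatever the layer already half-sees in it; starting from random static lets the network hallucinate a scene from scratch.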

Or sometimes Google engineers would ask a layer to key in on animal-like shapes, then run the same kind of feedback loop until there were more and more animals:

[Image: GoogleIncep_5]

Google calls this technique, diving deep into the abstractions a network builds at certain layers of image recognition, “Inceptionism.”

[Image: GoogleIncep_4]

Even when the engineers gave the networks nothing but random static and keyed them in on building-like shapes, the results were just as abstract:

[Image: GoogleIncep_2]

[Image: GoogleIncep_4. Credit: MIT Computer Science and AI Laboratory]

It’s not exactly what we’re looking for in an image recognition program, but these bizarre and often beautiful images can help programmers better understand how these neural networks learn, and how to improve that process. And make horse-riding, acid-trip dog knights.

For more detail on Google’s Inceptionism images, head here.

HT: Google
