Deep Dream (2015)

The Google Deep Dream software was originally designed to enable a computer to recognise objects within images, so that the billions of uncategorised images on the internet could be labelled and made searchable. The software works by training on millions of example images, building up a model of what things should look like. This model is essentially a network of weighted connections, loosely analogous to the human brain but far less complex. The software can caption an image with varying degrees of accuracy, and when it does make mistakes, they tend to be the kind a child might make. Similar software has since been used to distinguish cancerous cells from healthy ones under a microscope.
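By way of illustration, the recognition step can be sketched with a modern pretrained network. The snippet below uses PyTorch's torchvision and its GoogLeNet weights; this is my substitution for the original Caffe-based tooling, and 'photo.jpg' is a hypothetical input file.

    # A minimal sketch of the recognition step, using a pretrained
    # GoogLeNet from torchvision. This is an illustrative stand-in:
    # the original Deep Dream code was built on the Caffe framework.
    import torch
    from torchvision import models
    from PIL import Image

    weights = models.GoogLeNet_Weights.IMAGENET1K_V1
    model = models.googlenet(weights=weights)
    model.eval()

    image = Image.open("photo.jpg")                   # hypothetical input file
    batch = weights.transforms()(image).unsqueeze(0)  # standard preprocessing

    with torch.no_grad():
        logits = model(batch)

    # The highest-scoring class is, in effect, the network's caption.
    print(weights.meta["categories"][logits.argmax(dim=1).item()])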

The inner workings of this software are where it becomes really interesting. Over a series of iterations, the software distorts an image until it conforms to the norms of the dataset it was trained on. For example, the GoogLeNet network, one of the earliest large-scale networks, was trained on many images of animals, and as such produces images strikingly reminiscent of animals, albeit slightly strange, mutant ones. Newer datasets have since appeared, including the 'Places' dataset, which contains mainly scenic images and therefore produces mainly scenic results.
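Concretely, each iteration is a step of gradient ascent: the image is nudged in whatever direction most strongly excites a chosen layer of the network, so the features that layer has learned gradually surface in the picture. A minimal sketch follows, again assuming the torchvision GoogLeNet rather than the original Caffe setup; the layer choice, step size, and iteration count are arbitrary.

    # A minimal sketch of the Deep Dream iteration: gradient ascent on
    # the input image to amplify whatever a chosen layer responds to.
    import torch
    from torchvision import models

    model = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
    model.eval()
    for p in model.parameters():
        p.requires_grad_(False)   # only the image should receive gradients

    # Capture activations from one inner layer via a forward hook.
    activations = {}
    def hook(module, inputs, output):
        activations["target"] = output
    model.inception4c.register_forward_hook(hook)  # layer choice is arbitrary

    def dream_step(image, lr=0.05):
        image = image.clone().detach().requires_grad_(True)
        model(image)
        # The objective is simply the mean activation of the chosen layer;
        # raising it drags the image toward the dataset's norms.
        loss = activations["target"].mean()
        loss.backward()
        grad = image.grad
        # Normalised gradient ascent step.
        return (image + lr * grad / (grad.abs().mean() + 1e-8)).detach()

    img = torch.rand(1, 3, 224, 224)  # stand-in for a preprocessed photo
    for _ in range(20):
        img = dream_step(img)         # each pass distorts the image further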

Early in my experimentation with the Deep Dream software, I became aware of a video version, and quickly took advantage of it to begin animating the otherwise still images. To date I have created nearly 70 short clips using the software, with varying degrees of success. The main limiting factor in the process is the time it takes to render each clip: depending on the resolution and length, it can be anywhere from about 50 minutes to 24 hours or more.
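In outline, the video process treats a clip as a sequence of still frames: each frame is dreamed, and the result is blended into the next input frame so the hallucinations carry over from one frame to the next. The sketch below illustrates the loop with OpenCV; the blend weight, file names, and the dream() placeholder are assumptions for illustration, not the exact script I ran.

    # A sketch of the frame-by-frame video loop: dream each frame, then
    # blend the dreamed result into the next input frame. File names and
    # the 0.5 blend weight are assumptions for illustration.
    import cv2

    def dream(frame):
        # Placeholder: in practice this would run the iterative
        # gradient-ascent process sketched above on a single frame.
        return frame

    cap = cv2.VideoCapture("input.mp4")
    fps = cap.get(cv2.CAP_PROP_FPS)
    size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
            int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
    out = cv2.VideoWriter("dreamed.mp4",
                          cv2.VideoWriter_fourcc(*"mp4v"), fps, size)

    previous = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if previous is not None:
            # Carry half of the previous dreamed frame into this one so
            # the hallucinations persist across frames.
            frame = cv2.addWeighted(frame, 0.5, previous, 0.5, 0)
        dreamed = dream(frame)
        out.write(dreamed)
        previous = dreamed

    cap.release()
    out.release()

Since every frame repeats the full iterative process, total rendering time grows with frame count as well as resolution, which is why even short clips can take many hours.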