Neural Cities

Details / Links
- Supported by the Mondriaan Fund
- The Neural Cities - Arnhem installation is part of the In4Art collection
- Interview at Alphr
- Made using pix2pix, Google Street View, openFrameworks, and ofxStreetview.
Neural Cities is an exploration into generative cityscapes. A neural network was trained on photos of the city of Arnhem, allowing it to generate new, fictional photos of the city on request.
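The training photos were gathered from Google Street View; the project itself used openFrameworks together with the ofxStreetview addon for this. As a rough, hypothetical sketch of what that collection step looks like, here is a Python version using Google's Street View Static API instead. The API key and coordinates are placeholders.

```python
# Illustrative sketch only: the project used openFrameworks + ofxStreetview,
# not this script. It shows the same idea via the Street View Static API.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder, not a real key
BASE_URL = "https://maps.googleapis.com/maps/api/streetview"

def fetch_view(lat: float, lng: float, heading: int, out_path: str) -> None:
    """Download one street-level photo looking in a given compass direction."""
    params = {
        "size": "640x640",           # maximum size on the standard plan
        "location": f"{lat},{lng}",
        "heading": heading,          # 0-360 degrees
        "pitch": 0,
        "key": API_KEY,
    }
    response = requests.get(BASE_URL, params=params, timeout=30)
    response.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(response.content)

# Four views around a single point in Arnhem (coordinates illustrative).
for heading in (0, 90, 180, 270):
    fetch_view(51.9851, 5.8987, heading, f"arnhem_{heading}.jpg")
```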
The generated images contain (or are based on) elements seen around the city. Two things clearly stand out in images of Arnhem: the open skies (there is little high-rise construction) and the abundance of green, tree-like shapes. Using images from other cities would lead to different results, specific to those locations.
Though elements from the input data are visible in the output images, the system has no way of knowing what these elements are: all it knows is pixels. This leads to dream-like locations where recognizable elements such as windows and walls morph into trees and take on unusual shapes and forms.
For the installation, a selection of images was printed and mounted in a metal frame. In reference to the work’s digital origins, the prints are backlit, creating a series of monitor-like panels.
A more in-depth explanation of the process: besides regular photos, Google Street View also provides depth images. These are essentially black-and-white images showing the silhouettes of all objects in a scene, with objects close to the camera drawn in white and silhouettes becoming darker the farther away their objects are.
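As a minimal sketch of that mapping, assuming the raw per-pixel distances have already been decoded into a NumPy array (Street View’s depth data comes in a compressed encoding; ofxStreetview handles the decoding, which is omitted here), converting distances to the white-near/dark-far image could look like this. The clipping distance is an assumed parameter, not a value from the project.

```python
# Hypothetical sketch: turn per-pixel distances (metres) into the grayscale
# depth image described above -- near objects white, fading to black with
# distance. Assumes the Street View depth data is already decoded.
import numpy as np
from PIL import Image

MAX_DEPTH = 100.0  # assumed clipping distance in metres; tune per scene

def depth_to_image(depth: np.ndarray) -> Image.Image:
    normalized = np.clip(depth, 0.0, MAX_DEPTH) / MAX_DEPTH  # 0 near, 1 far
    gray = ((1.0 - normalized) * 255).astype(np.uint8)       # near -> white
    return Image.fromarray(gray, mode="L")

if __name__ == "__main__":
    fake_depth = np.random.uniform(0, MAX_DEPTH, size=(256, 512))  # stand-in
    depth_to_image(fake_depth).save("depth_example.png")
```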
The network was trained on the relation between the regular photos and their depth data. By creating new depth data and using it as input, the system can generate new ‘photos’ that it predicts would match that depth information.
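To make the pairing of depth data and photos concrete: the commonly used PyTorch pix2pix implementation trains on paired images stored side by side, input (A) on the left and target (B) on the right. Whether Neural Cities used this exact layout is an assumption, and the directory names below are illustrative.

```python
# Hypothetical data-preparation sketch for pix2pix-style training pairs:
# each output file holds the depth image (input) and the matching photo
# (target) side by side, the layout expected by the common PyTorch pix2pix
# implementation. Paths and filenames are illustrative.
from pathlib import Path
from PIL import Image

DEPTH_DIR = Path("depth")    # grayscale depth images (network input, "A")
PHOTO_DIR = Path("photos")   # matching Street View photos (target, "B")
OUT_DIR = Path("paired")
OUT_DIR.mkdir(exist_ok=True)

SIZE = (256, 256)  # pix2pix is commonly trained at 256x256

for depth_path in sorted(DEPTH_DIR.glob("*.png")):
    photo_path = PHOTO_DIR / depth_path.name  # pairs share a filename
    a = Image.open(depth_path).convert("RGB").resize(SIZE)
    b = Image.open(photo_path).convert("RGB").resize(SIZE)
    pair = Image.new("RGB", (SIZE[0] * 2, SIZE[1]))
    pair.paste(a, (0, 0))        # left half: depth input
    pair.paste(b, (SIZE[0], 0))  # right half: real photo
    pair.save(OUT_DIR / depth_path.name)
```

Once trained, generating a new cityscape amounts to a single forward pass of the generator on a freshly made depth image.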