NEUROSCAPE V1, 2017-2018
8CH VIDEO, 6CH SOUND, DUMMY HEAD, WEBCAM, CCD CAMERA, DEEP LEARNING ALGORITHM
<International Computer Music Conference>, Art Factory, Daegu, Korea, 2018
<Daejeon Biennale: Artist Project>, KAIST Vision Hall, Daejeon, Korea, 2018
<Getting Arty with Science 2017 III>, Platform-L, Seoul, Korea, 2017
<Getting Arty with Science 2017 II>, SEMA Gallery, Seoul, Korea, 2017
<Getting Arty with Science 2017 I>, KINTEX, Ilsan, Korea, 2017
“We see many landscapes and think many thoughts.
The thoughts reconstruct another scape.
People, nature, and the city.
In the many scenes we see and feel, what does the machine think?”
In 2017, we proposed "NEUROSCAPE," a system for artificial soundscapes based on the multi-modal connection of deep neural networks. "NEUROSCAPE" combines the words "neuron" and "landscape," denoting a memory-scape restructured by artificial neural networks.
We developed a system that analyzes images of natural or urban scenery with an artificial intelligence algorithm and automatically maps them to corresponding sounds and images.
Given an input landscape image of a city or of nature, the system detects related words through a label-detection algorithm. The detected words are then compared against the keywords of the 527 categories in the audio dataset using GloVe word embeddings. Finally, the matched keywords retrieve the most relevant audio files and images from the sound library through a sound-tagging algorithm.
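The matching step above can be sketched in a few lines: detected image labels and audio-category keywords are both embedded as word vectors, and categories are ranked by cosine similarity. This is a minimal illustration, not the installation's actual code; the toy vectors and category names below stand in for pretrained GloVe embeddings and the real 527-category keyword list.

```python
import numpy as np

# Toy stand-ins for GloVe vectors; the real system would load
# pretrained embeddings (e.g. glove.6B) for each word.
EMBEDDINGS = {
    "ocean":   np.array([0.9, 0.1, 0.0]),
    "wave":    np.array([0.8, 0.2, 0.1]),
    "traffic": np.array([0.1, 0.9, 0.3]),
    "car":     np.array([0.2, 0.8, 0.4]),
    "bird":    np.array([0.3, 0.1, 0.9]),
}

# Hypothetical subset of the 527 audio-category keywords.
AUDIO_CATEGORIES = ["wave", "car", "bird"]

def cosine(a, b):
    """Cosine similarity between two word vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_categories(detected_labels, top_k=3):
    """Rank audio categories by similarity to the detected image labels."""
    scores = {}
    for cat in AUDIO_CATEGORIES:
        # Score each category by its best match among the detected labels.
        scores[cat] = max(
            cosine(EMBEDDINGS[label], EMBEDDINGS[cat])
            for label in detected_labels
            if label in EMBEDDINGS
        )
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

print(match_categories(["ocean"]))  # "wave" ranks first
```

The top-ranked category keywords would then key the lookup into the tagged sound library.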
Through this system, we proposed a mixed-media installation that allows users to capture desired landscape scenes and control the resulting sounds. The system responds with the six images and sounds most relevant to the captured scene, generating a kind of audio/visual collage in real time.
This artwork aims to raise a fundamental question about the "coexistence of humans and technology."
GAS 2017 Exhibition
Ministry of Science and ICT
Korea Foundation for the Advancement of Science & Creativity