SEUNGSOON PARK
composer, cross media artist, creative researcher

ARCHIVE & PRESENTATION: INTERMEDIA MOBILITY R&D

ARCHIVE & PRESENTATION: INTERMEDIA MOBILITY R&D, 2018
ARCHIVE, PRESENTATION
VIDEO, SOUND AND KEYNOTE PRESENTATION

@ ZER01NE DAY, ZER01NE, Seoul, Korea | 2018

Intermedia Mobility R&D is a personal project by media musician Seungsoon Park, presented in archive and presentation form, introducing the research and ideation process of how the two themes of 'environmental sound' data and 'artificial neural networks' can be projected onto the medium of the 'automobile'.

The artist envisioned that in the era of the 'self-driving car', the vehicle could expand beyond its conventional function as transportation into a kind of vast 'intermedia (or entertainment) mobility' device.

Linking the various imaginative elements that emerged during the research with current deep learning fields such as COMPUTER VISION and MUSIC INFORMATION RETRIEVAL, the artist is developing an organic model of co-creation among art, technology, and industry through ideations such as 'VISUAL OBJECT TO SOUND MAPPING using real-time video analysis' and 'SPEECH RECOGNITION TO MULTI-MODALITY'. To this end, he plans to establish an independent lab or studio and expand the work into collaborative projects with experts in various fields.

The exhibition space presents, in archive form, the ideation notes produced over three sessions along with the reference materials the artist consulted in various formats (music, video, images, and books), and during ZER01NE DAY the artist will give a keynote presentation in person.

IMAGINARY SOUNDSCAPE

IMAGINARY SOUNDSCAPE, 2018
AI AUDIO/VISUAL SOUNDSCAPE PERFORMANCE

@ ACC_R Creators in Lab Showcase, Asia Culture Center, Gwangju, Korea | 2018. 12. 14 (Opening Showcase)

TELL ME WHAT YOU SEE

TELL ME WHAT YOU SEE, 2018
VIDEO INSTALLATION
3CH VIDEO, 4CH SOUND, DUMMY HEAD

@ACC_R Creators in Lab Showcase, Asia Culture Center, Gwangju, Korea | 2018. 12. 14. - 12. 23.

<Tell Me What You See> is an audio-vision artwork structured as an "auditory game": by listening to sentences and artificial soundscapes produced by the NEUROSCAPE AI system, the audience compares the image they imagined with the original image. NEUROSCAPE is an artificial intelligence soundscape system developed in 2017 by Seungsoon Park, composer and cross media artist, and Jongpil Lee, algorithm developer. It automatically generates and maps environmental sounds that correspond to images of nature or of urban cities.

This work reverses the sequence of the existing NEUROSCAPE process of "image analysis, word embedding, audio tagging and mapping." The audience first views the sentence and label outputs produced by the computer vision API and hears the sound mapped via NEUROSCAPE, and only then sees the original image the machine analyzed. The artist selected target images based on several global issues in order to express this structure more dramatically. Although technology, through industrialization and urbanization, has guided and benefited human life, the natural environment is being destroyed and polluted through human failings such as inequality of wealth and moral defect; in other words, human beings are damaged by other human beings. In particular, reflecting on icebergs fragmenting and melting due to warming, he thought of the refugees pushed down into the sea.

These narratives point to the limitation that artificial intelligence still struggles to address our various social issues. An image can be inferred from text and sounds such as "a large green field with clouds in the sky," "a person in a cage," "a group of people walking in the rain," "a harbor filled with boats," and "a boy lying on the beach."

What image do you think of?

JUXTAPOSITION

JUXTAPOSITION, 2017
AUDIO/VISUAL PERFORMANCE, CHOREOGRAPHY

@ Incheon Art Platform, Incheon, Korea | 2017

Juxtaposition is an experimental electronic music and performance work using audio clips retrieved via "NEUROSCAPE," an artificial intelligence soundscape system.

The production proceeded as follows. Seungsoon Park, composer and cross media artist, selected ten of the landscape images taken in 2017 in the Kassel and Münster regions, in order to juxtapose on stage, simultaneously, the images and sounds produced by machine intelligence with human movement.

For the musical composition, the ten images were analyzed by the NEUROSCAPE system, yielding a total of 60 related audio samples as outputs. These detected environmental sounds served as raw material for experimental music, arranged and manipulated according to the composer's intention. He sequenced the audio samples into a musical timeline of approximately 12 minutes; the context of each existing sound object was determined by the composer, and rhythm was found through decontextualization.

This process of discovering rhythmicity by decontextualizing and restructuring sound objects according to the composer's intention can be seen as connected to the tradition of musique concrète. It represents a first attempt at the musical use of artificial soundscapes.

The two choreographers, Myunghyun Choi and Byungjin Lee, improvised choreographic movement expressing the most heterogeneous feelings evoked by the landscape images.

As a result, artificial sound and human-made movement were demonstrated together on a stage of landscape scenery, revealing the possibility of coexistence among human, machine, and nature.

NEUROSCAPE V1

NEUROSCAPE V1, 2017-2018
MIXED-MEDIA INSTALLATION
8CH VIDEO, 6CH SOUND, DUMMY HEAD, WEBCAM, CCD CAMERA, DEEP LEARNING ALGORITHM

EXHIBITED AT
<International Computer Music Conference>, Art Factory, Daegu, Korea, 2018
<Daejeon Biennale: Artist Project>, KAIST Vision Hall, Daejeon, Korea, 2018
<Getting Arty with Science 2017 III>, Platform-L, Seoul, Korea, 2017
<Getting Arty with Science 2017 II>, SEMA Gallery, Seoul, Korea, 2017
<Getting Arty with Science 2017 I>, KINTEX, Ilsan, Korea, 2017

“We see a lot of landscape, and think of a lot of thoughts. 
The thoughts re-construct another scape. 
People, nature, and city.
In the many scenes we see and feel, what does the machine think?”

In 2017, we proposed "NEUROSCAPE," a system for artificial soundscapes based on the multi-modal connection of deep neural networks. "NEUROSCAPE" is a portmanteau of "neuron" and "landscape," denoting a memory-scape restructured by artificial neural networks.

We developed a system that automatically maps corresponding sounds and images after analyzing an image of natural or urban scenery with an artificial intelligence algorithm.

Given a landscape image of a city or of nature, the system first detects related word labels through a label detection algorithm. The detected words are then compared against the 527 category keywords of an audio dataset using GloVe word embeddings. Finally, the best-matching keywords retrieve the most relevant audio files and images from the sound library through an audio tagging algorithm.
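The label-to-audio matching step above can be sketched as follows. This is a toy illustration, not the actual NEUROSCAPE implementation: the 3-dimensional word vectors, category names, and function names are all hypothetical stand-ins for pretrained GloVe embeddings and the 527 audio-dataset categories.

```python
import math

# Toy word vectors standing in for pretrained GloVe embeddings
# (hypothetical 3-d values, for illustration only).
EMBEDDINGS = {
    "skyscraper": [0.9, 0.1, 0.2],
    "traffic":    [0.8, 0.2, 0.1],
    "forest":     [0.1, 0.9, 0.3],
    "birdsong":   [0.2, 0.8, 0.4],
    "engine":     [0.7, 0.1, 0.0],
}

# Hypothetical audio-dataset category keywords (the real system uses 527).
AUDIO_CATEGORIES = ["traffic", "birdsong", "engine"]

def cosine(u, v):
    """Cosine similarity between two word vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def map_labels_to_audio(labels, top_k=2):
    """Rank audio categories by similarity to the detected image labels."""
    scores = {}
    for cat in AUDIO_CATEGORIES:
        scores[cat] = max(cosine(EMBEDDINGS[label], EMBEDDINGS[cat])
                          for label in labels if label in EMBEDDINGS)
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# A "forest" image label retrieves nature-like audio categories first.
print(map_labels_to_audio(["forest"]))  # → ['birdsong', 'traffic']
```

In the full system, the top-ranked category keywords would then drive the audio-tagging retrieval from the sound library.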

Through this system, we proposed a mixed-media installation that allows users to capture a desired landscape scene and control the sounds. The system retrieves the six images and sounds most relevant to the captured image and generates a kind of audio/visual collage artwork in real time.

This artwork aims to raise a fundamental question on the "coexistence of humans and technology."

GAS 2017 Exhibition
www.gas-art.kr

Supported by
Ministry of Science and ICT
Korea Foundation for the Advancement of Science and Creativity

 

ARTIFICIAL SOUNDSCAPE: WORLD CITY PROJECT

NEUROSCAPE in Incheon (KR), Kassel (DE), and Münster (DE)

ARTIFICIAL SOUNDSCAPE: WORLD CITY PROJECT, 2017
VIDEO INSTALLATION
3CH VIDEO, 6CH SOUND, 2017

EXHIBITED AT
Incheon Art Platform, Incheon, Korea | 2017

We collected video clips of landscapes from Incheon (KR), Kassel (DE), and Münster (DE). Each landscape image was analyzed by a deep learning algorithm, from which we obtained the six images and sounds most relevant to the selected image. Finally, we reconstructed the A.I. soundscape by mapping the extracted sounds.

We surveyed 30 people, and about 30% responded that the artificial soundscape was more realistic than the original one. How, then, can we recognize what is real and what is fiction? Is it even possible to distinguish them?

SLCAT: SOUND/LANDSCAPE COGNITIVE ABILITY TEST


SLCAT (Sound / Landscape Cognitive Ability Test), 2017-2018
ARTIFICIAL SOUNDSCAPE SURVEY
VIDEO, SOUND, GOOGLE FORMS

EXHIBITED AT
Incheon Art Platform, Incheon, Korea | 2017


This work is the second version of SLCAT (Sound/Landscape Cognitive Ability Test), first presented during the exhibition 'Chebo' at the Incheon Art Platform in 2017. It is a participatory soundscape exhibition in survey format, in which participants distinguish between real soundscapes and artificial ones generated by a deep learning algorithm.
If the distinction between an artificial soundscape and a real one becomes blurred, how should we then define and understand 'a real soundscape'?
The test consists of 10 questions, with each correct answer worth 10 points.
Play each video and select whether A or B is the real sound of the landscape.
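The survey's scoring rule (10 questions, 10 points per correct answer) can be sketched as follows; the answer key and function name are hypothetical placeholders, not the actual test content.

```python
# Hypothetical answer key: which option (A or B) is the real soundscape
# for each of the 10 questions. Illustrative only.
ANSWER_KEY = ["A", "B", "A", "A", "B", "B", "A", "B", "A", "B"]

def score(responses):
    """Award 10 points for each response that matches the answer key."""
    return sum(10 for given, correct in zip(responses, ANSWER_KEY)
               if given == correct)

print(score(ANSWER_KEY))  # all correct → 100
```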


SOUND TAXONOMY

Sound Taxonomy, 7:00, Single-Channel Video, 2-Channel Sound, 2017

EXHIBITED AT
Incheon Art Platform Preview Exhibition, Incheon Art Platform, Incheon, Korea | 2017

<Sound Taxonomy> rearranges the landscape images and sounds we encounter in everyday life, drawing on the ecological sound classification criteria discussed in various prior studies. As a basic data-collection stage for subsequent artificial intelligence soundscape research and works, it deals with the universal components of the soundscape.

MOMENTUM


MOMENTUM, 2018
SOUND INSTALLATION
32-CHANNEL SPEAKER, DIRECTIONAL SPEAKER

EXHIBITED AT
ZER01NE DAY, ZER01NE, Seoul, Korea | 2018

MOMENTUM is an INSTALLATION MUSIC project that reinterprets the physical motion of the universe through sound and musical structure.

The two artists Seungsoon Park and 전형산, who have long worked with sound as their primary medium, build on their exploration of the spatiality and motion of sound to express the energy movements of the universe as a multi-channel sound installation, and a musical work based upon it, for a new audiovisual experience.

The work borrows the concept of the planets' apparent retrograde/prograde motion. Apparent retrograde motion is a kind of optical illusion in which a planet or celestial body, observed from a particular position, appears to move opposite to the direction of the other bodies in its planetary system; the work translates and structures this into the auditory images of CONSONANCE and DISSONANCE. The structure takes the form of a sound installation: a circular surround structure of 96 speakers in 32 channels expresses planetary motion musically, while an ultra-directional speaker rotating at a constant speed at the center emits various signals, presenting not only opposition and harmony but also the movement of sound.

By interpreting the properties of sound scientifically and giving sound structure, the work experiments with what kind of musical value structure can arise from sound itself, and within it seeks once more the relations between sound, society, and ourselves. The sounds thus generated move through the space, beginning a motion of opening and closing in taut tension.
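The underlying concept can be illustrated with a sketch: an outer planet observed from Earth appears to move retrograde near opposition, and that apparent direction can be mapped to dissonance or consonance. The orbital parameters and the tritone/fifth mapping below are illustrative assumptions, not the installation's actual signal logic.

```python
import math

def apparent_longitude(t, r_earth=1.0, r_outer=1.5, T_earth=1.0, T_outer=1.88):
    """Ecliptic longitude of an outer planet as seen from Earth,
    assuming idealized circular coplanar orbits (t in Earth years)."""
    ex = r_earth * math.cos(2 * math.pi * t / T_earth)
    ey = r_earth * math.sin(2 * math.pi * t / T_earth)
    px = r_outer * math.cos(2 * math.pi * t / T_outer)
    py = r_outer * math.sin(2 * math.pi * t / T_outer)
    return math.atan2(py - ey, px - ex)

def interval_for(t, dt=0.001):
    """Map apparent motion to a musical interval (in semitones):
    retrograde (longitude decreasing) → dissonant tritone (6),
    prograde (longitude increasing) → consonant perfect fifth (7)."""
    d = apparent_longitude(t + dt) - apparent_longitude(t)
    d = (d + math.pi) % (2 * math.pi) - math.pi  # unwrap across ±pi
    return 6 if d < 0 else 7

# Near opposition (t = 0, both bodies aligned) the motion is retrograde,
# so the mapping yields the dissonant interval; away from opposition,
# the consonant one.
print(interval_for(0.0), interval_for(0.5))
```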

POETIC THEATRE

POETIC THEATRE, 2018
INSTALLATION THEATER

@ PROJECT BOX ‘SEEYA’, WOORAN FOUNDATION, SEOUL, KOREA, 2018

NEIN FEST 1: SURVIVAL

NEIN FEST 1: SURVIVAL, 2016
DAWON FESTIVAL

@ MULLAE ART SPACE M30

Nein Fest 1. Date: Sunday, 31 January 2016, 4-7 pm. Venue: Mullae Art Factory M30 (5-4, Gyeongin-ro 88-gil, Yeongdeungpo-gu, Seoul). Participating artists: 강진안X최민선 | 2; 김규호 | Jukebox Baby; 류혜욱 | 기동상황-1; 박민희X윤재원X최미연 | 몸**건강**정신; 박승순 | Vapor; 이로경 | 물구경; 장홍석X주희 | 난 여기 뒤에 숨어있었다.

DEPENDENT VARIABLE

RADIOPHONICS AUDIO/VISUAL LIVE PERFORMANCE
Experimental Sound and Visual Performance Regulations 002
25 MAR 2016 @ ALLEY SOUND
WWW.RADIOPHONICS.NET

DEPENDENT VARIABLE, 2016
AUDIO/VISUAL PERFORMANCE

@ALLEY SOUND

CHROMATIC SCAPE

CHROMATIC SCAPE, 2016
LIGHT AND SOUND INSTALLATION

@WOORAN FOUNDATION


<LX1. CHROMATIC SCAPE> combines 'chromatic', meaning 'scale' or 'color', with 'scape', referring to landscape. It is a sound and light installation completed by electronic musician Seungsoon Park and architect Jaesung Lee, taking as its motif the 'book', the central medium of the play [초대] (Invitation). By projecting sound-reactive light onto 32 spaces (or bookshelves), in which layers representing cross-sections of life are juxtaposed at slightly different angles, the work expresses a 'timbral landscape' of life and death. ('lx' is the symbol for lux, the unit of illuminance.)

Produced by SEUNGSOON PARK & JAESUNG LEE
Composed by RADIOPHONICS
Supported by WOORAN FOUNDATION

SYMPHONIE AQUATIQUE


SYMPHONIE AQUATIQUE, 2016
WATER-SOUND INSTALLATION

@GALLERIA FATAHILLAH, JAKARTA, INDONESIA, 2016
https://www.thejakartapost.com/life/2016/10/24/engage-the-senses-at-media-installation-art-exhibition.html
https://www.bbc.com/indonesia/majalah-37809376

With emerging technology, we are surrounded by digital devices such as laptops, smartphones, and VR/AR headsets. Music technology likewise offers many devices and interfaces, such as controllers and applications, that let users control things with a touch of their hands and fingers. But I thought that as the digital experience grows in our lives, we grow apart from natural interactions such as feeling the wind, touching water and seeing its reflection, and hearing birds singing or waves on the shore. So I proposed a sound installation called "SYMPHONIE AQUATIQUE," in which everyone from non-musicians to professional musicians can participate, to create an interaction between digital technology and nature.

In the early 1970s, the French experimental musician Jacques Dudon derived musical ideas from water. He developed numerous water-sound instruments, including automatic systems that play using water without a performer as well as instruments played by professional musicians. Of these, only a few are documented, through some pictures and short recordings, such as the Flutabulles and the Aquavina. [1][2] I focused on his fundamental approach of developing methods across several instruments, and tried to explore a new musical interface using water that could expand its usability from the theater to the gallery, as well as the possibilities of how a human being interacts with nature and the computer.

SYMPHONIE AQUATIQUE is composed of six glass bowls containing water, with conductive tape running from the water to the table. Each bowl is mapped to an individual traditional instrument and a different environmental sound. The system uses a sensor board to obtain a signal when the water is touched: the user touches the water with one hand while touching the conductive tape with the other. The user can thus touch the water and watch a water-wave reflection on the wall while listening to music.

Three features of this work are:
1) Applying cultural value to the artwork by using musical instruments and environmental sounds collected in specific regions.
2) Providing an experience in which simply touching water produces a musical-instrument performance that would be difficult for ordinary people to learn.
3) An interactive sound installation engaging three human senses (listening, seeing, touching).

This work was invited to the "4th Korea-Indonesia Media Installation Art" exhibition in Jakarta, Indonesia in 2016, where it was first introduced, designed to map and play different Indonesian instruments. The name SYMPHONIE AQUATIQUE derives from Berlioz's "Symphonie Fantastique" (Fantastic Symphony), suggesting that the artwork gives audiences an experience like a symphony in water.
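The bowl-to-sound triggering described above can be sketched as follows. This is assumed logic, not the installation's actual code: the instrument names, threshold value, and function names are hypothetical placeholders for the sensor-board mapping.

```python
# Hypothetical bowl-to-sound mapping: (instrument, environmental sound)
# per water bowl. The real installation maps six bowls to Indonesian
# instruments and regional environmental sounds.
BOWL_SOUNDS = {
    0: ("gamelan", "rain"),
    1: ("angklung", "waves"),
    2: ("suling", "birdsong"),
}

TOUCH_THRESHOLD = 500  # hypothetical raw capacitive-sensor value

def triggered_sounds(readings):
    """Return the sounds to play for each bowl whose sensor reading
    crosses the touch threshold (i.e. the water is being touched)."""
    return [BOWL_SOUNDS[i] for i, r in readings.items()
            if i in BOWL_SOUNDS and r >= TOUCH_THRESHOLD]

# A touch on bowl 1 only:
print(triggered_sounds({0: 120, 1: 730, 2: 88}))  # → [('angklung', 'waves')]
```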

Produced by Seungsoon Park
Supported by Korean Cultural Center, ArcoLab

AQUAPHONICS V2

AQUAPHONICS V2, 2015
WATER-SOUND INSTALLATION

@DAVINCI CREATIVE 2015, Seoul Art Space Geumcheon, Korea | 2015

AQUAPHONICS V2 is a new interface for music performance, played by controlling the speed of fluid running through several pipes. Through it, the artist explores how human beings can interact with nature by means of music.
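One plausible way to turn measured flow speed into notes is a linear mapping onto a pitch range; the ranges, units, and function below are illustrative assumptions, not the installation's actual Max/MSP patch.

```python
# Hypothetical ranges: flow speed in each pipe (arbitrary units)
# mapped linearly onto a MIDI note range, so faster flow → higher pitch.
MIN_FLOW, MAX_FLOW = 0.0, 2.0
LOW_NOTE, HIGH_NOTE = 48, 84  # MIDI C3..C6

def flow_to_midi(flow):
    """Clamp the flow speed to its range and map it linearly
    to a MIDI note number."""
    flow = max(MIN_FLOW, min(MAX_FLOW, flow))
    span = (flow - MIN_FLOW) / (MAX_FLOW - MIN_FLOW)
    return round(LOW_NOTE + span * (HIGH_NOTE - LOW_NOTE))

print(flow_to_midi(0.0))  # slowest flow → lowest note (48)
print(flow_to_midi(1.0))  # mid-range flow → 66
```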

Produced by Seungsoon Park(a.k.a. RADIOPHONICS)
Project Art Direction by Jinhwa Jeong
Documents Designed by RYU+ICH
Manufactured by IDEAN
Supported by Seoul Foundation for Arts and Culture, Seoul Art Space Geumcheon, DAVINCI CREATIVE 2015

AQUAPHONICS V1

AQUAPHONICS V1, 2014
Water-Sound Installation
PVC PIPE, WATER VALVE, ARDUINO, MAX/MSP

@2015 HCI KOREA, SEOUL, KOREA, 2015
@WORLD WATER FESTIVAL, GYEONGJU, KOREA, 2015