Developing Project: Synthesized Tweets into Animal Voices

They say humans are most sensitive to audio media, especially music. Reading a written sentence may not bring out its expression as much as hearing that sentence as a song lyric. Sometimes it is easier to perceive the emotions of a message's sender when audio accompanies it, such as the sender's voice.

Today, global warming is one of the most controversial topics in society. Some people 'voice' their opinions on the topic in every medium, but what about animals? Aren't they also affected by climate change? If they had a 'voice' in this conversation, what would it sound like? And would humans have more sympathy for wildlife and nature if we could hear the animals' emotions?

My proposed art project is a real-time interactive sound synthesizer in the form of a website, built on customized software that uses multiple lexicons to produce music-like audio from animal sounds. This software will be developed further from my 2018 creation "Signal Moods" [ http://bit.do/signalm ].

Signal Moods is an interactive sound sculpture that examines the human sense of hearing and emotional digital content by visibly and conceptually juxtaposing translations of messages made by computed information and by human intelligence.

Similar to Signal Moods, the proposed project's system will collect tweets on a selected Twitter topic and match them against multiple customized lexicons to generate synthesized sounds in real time, drawing on Christian Schubart's influential descriptions of the emotional characteristics of Western musical keys. The project, however, will specifically examine and interpret the emotions of tweets from around the world, then render that analysis as emotional animal sounds. Audiences can experience the piece in an Internet browser and can participate in the creation of this synthesized music by tweeting on Twitter itself.
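The lexicon-matching step described above could be sketched roughly as follows. This is a minimal illustration, not the project's actual code: the lexicon entries and emotion labels are hypothetical placeholders, and in the running piece the text would arrive from the Twitter API rather than a string literal.

```javascript
// Hypothetical mini-lexicon mapping words to emotion labels.
// The real project uses multiple customized lexicons.
const emotionLexicon = {
  melt: "sadness",
  dying: "sadness",
  destroy: "anger",
  hope: "joy",
  save: "joy",
};

// Tokenize a tweet and tally which emotions its words evoke.
function scoreTweet(text) {
  const counts = {};
  for (const word of text.toLowerCase().match(/[a-z']+/g) || []) {
    const emotion = emotionLexicon[word];
    if (emotion) counts[emotion] = (counts[emotion] || 0) + 1;
  }
  return counts;
}

// Pick the dominant emotion, or null when no lexicon word matched.
function dominantEmotion(counts) {
  let best = null;
  for (const emotion of Object.keys(counts)) {
    if (best === null || counts[emotion] > counts[best]) best = emotion;
  }
  return best;
}
```

In the piece, each incoming tweet would pass through a step like `scoreTweet()`, and the dominant emotion would then drive the choice of chord and animal sound for the synthesizer.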

That being said, this artwork is not a scientific tool for transmitting or translating human or animal emotions. It is an experimental, creative work that uses open-source data from the Internet to spark conversation about environmental issues. This creative digital work may deliver various tones and moods to the viewer's experience.

As a person who was raised in the suburbs and lacks wilderness experience, environmental problems and wildlife extinction have long felt distant and unrelated to me. However, most of my knowledge of nature comes from the Internet and from screenings, and that allows me to form some kind of connection with a natural world I rarely encounter. My proposed project could be seen as an alternative way for people to observe and partake in something they feel distant from and beyond their individual real-life experience.


The current state of this project can be experienced here:

https://symbioticmusic.herokuapp.com/

The artwork has sound. Please adjust your speaker volume.


Some of these video documents have sound but are muted at first; you can turn the volume on manually. Please adjust your speaker volume.

First Sketch: Visual Experiment with P5js and Rita.js

Experiment with Web Audio: Linked Tone.js with Pre-recorded Audio

Second Sketch: Playing Musical Chords Based on Emotion Analysis

Visual and Audio Design: WebGL and Synthesized Sound Experiment

Third Sketch: User Interface

Fourth Sketch: New Visual and User Interface 0.2
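The chord-playback step explored in the sketches above could be wired up roughly as below. The emotion-to-chord table is a hypothetical illustration loosely in the spirit of Schubart's key characteristics, not the project's actual mapping; the commented lines show how the result might be handed to Tone.js in the browser.

```javascript
// Hypothetical emotion-to-chord table; the real project derives its
// mapping from Schubart's descriptions of the emotional character of keys.
const emotionChords = {
  joy:     ["C4", "E4", "G4"], // C major: bright, open
  sadness: ["D4", "F4", "A4"], // D minor: melancholy
  anger:   ["B3", "D4", "F4"], // B diminished: tense
};

// Fall back to an open fifth when the emotion is unknown.
function chordFor(emotion) {
  return emotionChords[emotion] || ["C4", "G4"];
}

// In the browser, the chord could then be played with Tone.js, e.g.:
//   const synth = new Tone.PolySynth().toDestination();
//   synth.triggerAttackRelease(chordFor("sadness"), "2n");
```

Keeping the emotion-to-chord mapping as plain data makes it easy to swap in a different aesthetic interpretation, or layer the animal-sound samples on top of the synthesized chord.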



Related Project: https://tuangstudio.com/portfolio/signal-moods/


Algorithm, creative coding, emotions in English-language text, musical chord theory, audience participation