Blind Sound Walk I

Marshall McLuhan, in his “Visual and Acoustic Space,” tells the story of Jacques Lusseyran, who was accidentally blinded in elementary school. Lusseyran, talking about being blinded, says:

“Sounds had the same individuality as light. They were neither inside nor outside, but were passing through me. They gave me my bearings in space and put me in touch with things. It was not like signals that they functioned but like replies…”

After this quote, McLuhan goes on to postulate that in modern society we are much more visual than aural. He says this was not always the case: before the collapse of oral traditions, sound eclipsed sight and we lived in a primarily audible universe.

I decided I wanted to see what it was like to be blind. My wife and I went to South Williamsburg and helped each other do blindfolded walks around that neighborhood:

[Photos from the walk: WP_20131121_001, WP_20131121_003]

At first it was scary: the fear of falling down, getting run over, or something else outside of my control happening overshadowed the experience. After a little while of walking, though, my brain/body realized Kiori (my wife) wasn’t going to let anything like that happen. Once I could let go of the fear I started to analyze sounds: oh, there is a truck; oh, someone is walking past me; a car over that-a-way is honking its horn. I was also involuntarily putting together a mental image of what I thought the space I was walking through looked like. After a little while longer of walking I let go further, and this analyzing of sounds and visual imagining passed away. I found myself in a wash of sound, sound of all types coming from everywhere. It was such a rich world, and I enjoyed it thoroughly.

After completing our route we switched: Kiori put on the blindfold and I led. This time around the sounds had far less presence for me. I found myself totally absorbed visually, not even really paying attention to the sounds. It was a completely different world! I realized how visual I really am. The way I had imagined the route looking while blindfolded was also totally different from how it looked in reality.

Genetic Algorithm Chord Progression Generator – idea

I’d like to create a plugin for DAWs and/or an iPhone app that generates chord progressions with a genetic algorithm. It would start with a large database of chord progressions from all types of music, which I will accumulate.

Algorithm

  • the initial population is a random selection of progressions from the database
  • the first crossover happens at random, without any fitness evaluation, just to generate something for the user to listen to
  • fitness has two parts: 1. the user listens and rates how the progression sounds, 2. progressions that adhere more closely to voice-leading rules receive higher fitness
  • mutation happens to random parts of the population, driven by a Markov chain analysis of the entire database (a rough sketch of the whole loop follows this list)
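
To make that concrete, here is a rough Python sketch of how the four steps could fit together. The toy chord database, the user_rating callback, and the voice-leading heuristic are placeholders of mine, not the actual plugin code:

    # Minimal sketch of the evolutionary loop; the toy database, the
    # voice-leading heuristic, and the rating callback are all placeholders.
    import random
    from collections import defaultdict

    DATABASE = [
        ["C", "Am", "F", "G"],
        ["C", "G", "Am", "F"],
        ["Dm", "G", "C", "C"],
        ["F", "G", "Em", "Am"],
    ]

    def build_markov(database):
        """Count which chords tend to follow which across the whole database."""
        table = defaultdict(list)
        for prog in database:
            for a, b in zip(prog, prog[1:]):
                table[a].append(b)
        return table

    def voice_leading_score(prog):
        """Toy stand-in for real voice-leading rules."""
        return sum(1 for a, b in zip(prog, prog[1:]) if a != b) / (len(prog) - 1)

    def fitness(prog, user_rating):
        # Two-part fitness: the user's listening rating (0..1) plus rule adherence.
        return 0.5 * user_rating(prog) + 0.5 * voice_leading_score(prog)

    def crossover(a, b):
        cut = random.randrange(1, min(len(a), len(b)))
        return a[:cut] + b[cut:]

    def mutate(prog, markov, rate=0.2):
        # Swap random chords for one the database says often follows the previous chord.
        out = list(prog)
        for i in range(1, len(out)):
            if random.random() < rate:
                choices = markov.get(out[i - 1]) or [c for v in markov.values() for c in v]
                out[i] = random.choice(choices)
        return out

    def evolve(population, user_rating, markov, keep=4):
        ranked = sorted(population, key=lambda p: fitness(p, user_rating), reverse=True)
        parents = ranked[:keep]
        children = [mutate(crossover(*random.sample(parents, 2)), markov)
                    for _ in range(len(population) - keep)]
        return parents + children

    # One round: start from random database picks, then evolve once per button press.
    markov = build_markov(DATABASE)
    population = [random.choice(DATABASE) for _ in range(8)]
    population = evolve(population, lambda p: random.random(), markov)  # fake rating for the demo
    print(population[0])

In the plugin, the rating half of the fitness would come from the fitness knob described below, and the database would keep growing as users add progressions back into it.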

User Interface

  • knobs: 1. rating fitness, 2. changing the tempo of the chord progression (playback), 3. changing the rhythm (playback)
  • buttons: 1. evolving to the next round (evolves only once, then loops the new chord progression), 2. starting over with a completely new population, 3. adding the current chord progression to the database (so the database evolves too)
  • screen for feedback

Here is a quick initial UI mockup:

[Image: chordGeneratorUI mockup]

Shapes as audio

In an attempt to translate the visual world into audio, I wondered: what if the shapes of objects could be the contours of a sound wave? As an experiment I took the contour of an oak leaf and turned it into a waveform.

I first found an outline of an oak leaf online, then cut it in two:

[Image: oakleafside]

Next, using computer vision (OpenCV in Cinder), I analyzed the image for feature points:

[Screenshot: detected feature points on the leaf outline]
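
For reference, something like this Python/OpenCV snippet gets at the same feature-point step (my actual version ran OpenCV inside Cinder; the file name and detector settings here are placeholders):

    # Rough Python/OpenCV equivalent of the feature-point step.
    import csv
    import cv2

    # Load the half-leaf outline as a grayscale image ("half_leaf.png" is a placeholder name).
    img = cv2.imread("half_leaf.png", cv2.IMREAD_GRAYSCALE)

    # Detect up to 500 strong feature points along the outline.
    points = cv2.goodFeaturesToTrack(img, 500, 0.01, 3)

    # Save them as x,y rows so the next step can read them back in.
    with open("leaf_points.csv", "w", newline="") as f:
        writer = csv.writer(f)
        for x, y in points.reshape(-1, 2):
            writer.writerow([float(x), float(y)])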

Using JavaScript I converted these points from a CSV into a table that Max/MSP could use as a waveform. Since some of the points sit over/under each other (and with a waveform you can’t have more than one value at any given point in time), the waveform in Max has some interesting striations:

[Screenshot: the resulting waveform table in Max/MSP]
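
The conversion itself is simple. Here is a Python sketch of the same idea (my original was a quick JavaScript script; the file names are placeholders), which orders the points left to right, normalizes them to the -1..1 range a waveform needs, and writes one sample per line:

    # CSV of feature points in, plain list of samples out. Points that sit
    # over/under each other become back-to-back samples, which is where the
    # striations in the Max waveform come from.
    import csv

    points = []
    with open("leaf_points.csv") as f:
        for x, y in csv.reader(f):
            points.append((float(x), float(y)))

    points.sort()  # left to right across the leaf outline
    ys = [y for _, y in points]
    y_min, y_max = min(ys), max(ys)

    # One sample per line; this plain list of numbers can then be loaded into a
    # Max/MSP table or buffer~ however you prefer (coll, peek~, etc.).
    with open("leaf_wave.txt", "w") as f:
        for y in ys:
            sample = 2.0 * (y - y_min) / (y_max - y_min) - 1.0
            f.write(f"{sample:.6f}\n")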

On an oscilloscope, the wave coming from the synthesizer looks like this:

[Screenshot: oscilloscope view of the wave]

It sounds real buzzy:

But in layers and with some filtering it can have a nice effect:

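For anyone curious, an offline Python approximation of that layering and filtering could look like the sketch below (the real version was patched in Max/MSP; the detune amounts, filter cutoff, and file names are made up):

    # Layer a few detuned copies of the oak-leaf wavetable and low-pass filter the mix.
    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import butter, lfilter

    SR = 44100
    table = np.loadtxt("leaf_wave.txt")      # the wavetable written out above
    t = np.arange(int(SR * 4)) / SR          # four seconds of output

    def wavetable_osc(freq):
        # Scan through the table at the given frequency (no interpolation, stays buzzy).
        phase = (t * freq) % 1.0
        return table[(phase * len(table)).astype(int)]

    # Three slightly detuned layers of the leaf tone.
    mix = sum(wavetable_osc(f) for f in (110.0, 110.5, 220.3)) / 3.0

    # Gentle low-pass filter to take the edge off the buzz.
    b, a = butter(4, 1200 / (SR / 2), btype="low")
    out = lfilter(b, a, mix)

    wavfile.write("leaf_layers.wav", SR, (out * 0.8 * 32767).astype(np.int16))
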

Sol LeWitt – wall drawings as audio, test 2

I started looking at work by Ryoichi Kurokawa, an artist whose work I really like and who I feel mixes the audio and visual worlds really well. His workflow is to storyboard his entire concept (sometimes just in his head), work out a rough video, make audio for that video, then edit the video and audio together precisely. This process can take months.

I made a quick animation with Sol LeWitt’s wall drawing patterns, simulating Kurokawa’s workflow but with very little (read: no) storyboarding or revision.