Rising

April 2022

Movement: Concetta Cariello, Kiori Kawai
Music: Aaron Sherwood
Film making: Kiori Kawai

Developed with support from Ahmed Alyafei at Al Khatim Art Hub, Abu Dhabi, U.A.E.

About the project: This piece is part of a larger project in which Italian dancer Concetta Cariello and Japanese dancer Kiori Kawai explore their movement in three different environments in the U.A.E.: mangrove forest, rocky mountains, and sandy desert.

About the music: The piece begins with a synth drone, paralleling the starkness of the desert. The drone swells from time to time, in a manner similar to desert wind gusts. As the sun rises and the dancers move closer to each other and eventually touch, the music builds and various rhythmic patterns are introduced. There is a definite pulse, with everything fitting together precisely, but, because of the polyrhythms, it’s hard to find exactly where the pulse lands. This feeling mimics the dunes which, one after the other, keep rolling and falling on top of each other, never finally landing, never reaching a point of rest. The listener has to give up on finding that point of rhythmic rest (the downbeat). Thus the music becomes an invitation to let go and dwell in the wild beauty, yet stark brutality, of the desert.

Desert Improvisation Experiments – Day 4

We returned to the Day 3 location; we were satisfied with the setup and wanted to explore it further. The wind was stronger, but we persisted and decided to experiment with some other visuals to see what we liked.

Results:

After trying out many different projected visuals, we found ourselves most attracted, oddly enough, to projecting the code itself. The water visuals were also nice, but without a brighter projector and a more capable low-light camera we weren’t able to capture those images very well. We realized that we’re starting to reach the limits of what we can do with the equipment we brought on this trip.

Details:

Lat/Lon: 23°00’36.5″N 53°45’39.6″E

Wind speed: ~21 kph

MICRO

MICRO explores the small universe that is our body and mind. It consists of an 8 ft x 12 ft x 8 ft structure with 200 translucent balls hanging from the top, each containing a speaker. When a ball is bumped into, it generates a unique sound and lights up in one of five colors. As people play with the balls they are engulfed by a symphony of lights and sounds surrounding them on all sides.

Overview

Each ball is independent from all the others and contains a custom-made circuit board. Since the installation needed to run for days (and later on for months), and we didn’t want to be changing batteries all the time, external power is run into each ball. MICRO also needed to stand up to the elements, as it was originally shown outdoors. All of this proved to be quite the engineering problem. Here’s how we did it…

Inside a sphere


The circuit board contains a microcontroller, an audio amplifier, a flash storage chip for the audio file, a tilt switch, and a voltage regulator, as well as various transistors, capacitors, and resistors. A high-power LED plugs into the circuit board and floats above it. The speaker attaches to the bottom of the board via Velcro.


The first task was to figure out how the interaction would work. When someone touches a ball, the goal, of course, was for it to light up and make sound. We originally considered using an accelerometer, but those are expensive and a little overkill for our intended use. I ended up finding a $0.70 tilt switch that worked beautifully:

We didn’t want wind triggering the balls, but the tilt switch turned out to solve that on its own. Inside the switch, a ball bearing rests in a little cone. If the sphere moves in a smooth arc, much as it would when the wind blows, centrifugal force keeps the bearing seated at the bottom of the cone and the ball is not triggered:
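To make that behavior concrete, here’s a minimal sketch of the per-sphere trigger logic, assuming the tilt switch is wired between a digital pin and ground with the internal pull-up enabled. The pin numbers and the playSound() helper are placeholders for illustration, not the actual MICRO firmware:

```cpp
#include <Arduino.h>

// Hypothetical pin assignments, not the real MICRO board layout.
const uint8_t TILT_PIN = 2;   // tilt switch to ground, internal pull-up enabled
const uint8_t LED_PIN  = 3;   // drives the high-power LED circuit

// Placeholder: on the real board the audio file is streamed from the flash
// chip out of a PWM pin (see the next section).
void playSound() {}

void setup() {
  pinMode(TILT_PIN, INPUT_PULLUP);
  pinMode(LED_PIN, OUTPUT);
}

void loop() {
  // A bump knocks the ball bearing out of its cone and changes the switch
  // state (assumed LOW here); a smooth wind-driven sway leaves it seated.
  if (digitalRead(TILT_PIN) == LOW) {
    digitalWrite(LED_PIN, HIGH);  // light the sphere
    playSound();                  // and trigger its unique sound
    delay(500);                   // hold briefly so the response is visible
    digitalWrite(LED_PIN, LOW);
  }
}
```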

For the microcontroller we went with a Trinket from Adafruit. It was cheap, and it had the added feature that it could PWM an audio file. Adafruit has a wonderful tutorial about how to play audio with the Trinket here. In a nutshell, the Trinket emits the audio as a square wave, then a low-pass filter smooths the PWM into a listenable audio source.
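As an illustration of the PWM-audio idea only (not the tutorial’s actual code), the sketch below loops a tiny waveform baked into program memory and writes each sample out with analogWrite(). The real build follows the Adafruit tutorial, which configures the ATtiny85’s timer for a much faster PWM carrier and streams the full audio file from the external flash chip:

```cpp
#include <Arduino.h>
#include <avr/pgmspace.h>  // AVR-only; for samples stored in program memory

const uint8_t  PWM_PIN     = 9;     // hypothetical PWM-capable output pin
const uint16_t SAMPLE_RATE = 8000;  // 8 kHz, 8-bit mono

// Stand-in waveform (one rough sine cycle); the installation plays a recorded
// audio file stored on the external flash chip instead.
const uint8_t samples[] PROGMEM = {
  128, 176, 218, 245, 255, 245, 218, 176,
  128,  80,  38,  11,   0,  11,  38,  80
};

void setup() {
  pinMode(PWM_PIN, OUTPUT);
}

void loop() {
  static uint16_t i = 0;
  // Each sample sets the duty cycle of the square wave on PWM_PIN; an RC
  // low-pass filter on that pin averages the pulses back into audio.
  analogWrite(PWM_PIN, pgm_read_byte(&samples[i]));
  i = (i + 1) % sizeof(samples);
  delayMicroseconds(1000000UL / SAMPLE_RATE);
}
```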

Most of the circuit operates on 3.3V, but we found we needed 5V to really get the most out of the LED. 5V is fed to the circuit and the LED, then the voltage regulator brings it down to 3.3V for everything else. One of the problems we discovered early on is that high-power LEDs need constant current: if the current starts getting too high it is brought back down, and if it gets too low it is brought back up. Here’s a great instructable on building a constant-current circuit with a few resistors and transistors. You can see the constant-current circuit I designed in the lower right of the MICRO schematic. It’s those two FETs, one transistor, and two resistors. It worked great; no more blown LEDs:
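For the curious, in the usual version of that circuit (and I’m assuming MICRO’s follows the same idea) the current limit is set by a small sense resistor in series with the LED: once the voltage across it reaches the transistor’s base-emitter turn-on voltage, the transistor starts pulling the FET’s gate down and the current can’t climb any higher. That caps the LED current at roughly

$$ I_{\mathrm{LED}} \approx \frac{V_{BE}}{R_{\mathrm{sense}}} \approx \frac{0.6\,\mathrm{V}}{R_{\mathrm{sense}}} $$

so choosing the sense resistor chooses the current.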

[MICRO schematic]

We first showed MICRO at Burning Man, so we needed to be sure it would withstand the elements. It can get quite hot on the playa, so Eric Rosenthal (my mentor through many parts of this project) encouraged me to throw the circuit in my oven. Hesitantly I did, testing the circuit’s temperature with an infrared thermometer and hoping it would still work:

Moving outward

We used clear light fixtures for the balls, in three diameters: 6″, 9″, and 12″. We dipped them multiple times in rubber dip to get the texture and translucency we were looking for. We laser-cut the bottoms of the spheres out of acrylic and gave them the same rubber-dip treatment.



We used a truss structure to suspend the spheres. This was great because it was easy to assemble, relatively light, and very strong. To keep the truss from blowing away in the high winds that often hit Burning Man, we guy-wired the top edges down to the ground. We ran LED strips along the guy wires so unsuspecting people on bikes and in art cars would not have the unpleasant (and possibly quite dangerous) surprise of finding a guy wire where they didn’t expect one.


Power for 200 spheres was a problem. We needed to convert 120V AC down to 5V DC for 200 separate circuits, and make sure we had enough amperage for the LEDs and audio. We ended up using 11 of these switching power supplies, which can deliver 30 amps each. They also carry a watertight rating when hung vertically in their included case, and they’ve proven quite reliable:
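Quick sanity check on the numbers: 11 supplies at 30 amps each gives 330 amps of 5V capacity, which works out to roughly 1.6 amps available per sphere for the high-power LED and the audio amp combined.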

Without whom it would not have happened

One of the most wonderful parts of this project was meeting so many generous and amazing people who helped out along the way, and I would be remiss to overlook any of you. You all have special places in our hearts and in the hearts of all those who have experienced MICRO. This list doesn’t do you justice, but here you are:

Made possible with generous support from:
Burning Man Arts, Federation Square/Pause Fest, Cameron Arts Museum

A Purring Tiger collaboration:

Concept/Design/Creative Direction
Kiori Kawai

Concept/Music/Electrical Design
Aaron Sherwood

Crew
Andy Sigler, Lisa Park, Rosalie Yu, Laura Chen,
Wyna Liu, Scott Horton, Mark Hebert, Elise Knudson, Chris Hallvik, Angela Orofino,
Momo Nakayama, Logan Scharadin, Julia Montepagani, Chelsea Southard, Ni Cai

Performers
Betta Lambertini, Logan Scharadin, Julia Montepagani,
Matthew Hardy, Joshua Batson, Kris Seto, Elise Knudson, Kiori Kawai

Cinematography
Roy Rochlin, Talya Stein, Momo Nakayama

Guardian Angel
Eric Rosenthal

Special thanks
The Generator Inc., Big Bang The, Camp Contact, River School Farm

Photography
Momo Nakayama, Kiori Kawai

Thank you all!

www.purringt.com/micro

Genetic Algorithm Chord Progression Generator – idea

I’d like to create a plugin for DAWs and/or an iPhone app that generates chord progressions genetically. It would start from a large database of chord progressions drawn from all types of music, which I will accumulate.

Algorithm

  • initial population is a random selection of progressions from the database
  • initial crossover will happen at random within the initial population, without fitness evaluation, to give the user something to listen to
  • fitness will have two parts: 1. the user will listen and rate how the progression sounds; 2. progressions adhering more closely to voice-leading rules will receive higher fitness
  • mutation will happen to random parts of the population, driven by a Markov chain analysis of the entire database (see the sketch after this list)
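Here’s a rough C++ sketch of how those pieces could fit together, assuming each progression is just a list of chord symbols and that the UI supplies one rating (0.0 to 1.0) per member of the population. All of the names here (Progression, voiceLeadingScore, the 70/30 fitness weighting, and so on) are placeholders for illustration; the voice-leading score is a stub where the real rules would go, and the Markov-chain mutation is reduced to drawing from a pre-computed list of candidate chords:

```cpp
#include <algorithm>
#include <random>
#include <string>
#include <vector>

using Progression = std::vector<std::string>;  // e.g. {"Cmaj7", "Am7", "Dm7", "G7"}

std::mt19937 rng{std::random_device{}()};

// Fitness part 2: a stub standing in for real voice-leading rules.
double voiceLeadingScore(const Progression& p) {
  return p.empty() ? 0.0 : 1.0;  // placeholder: every progression scores the same
}

// Fitness combines the user's rating with the rule-based score.
double fitness(const Progression& p, double userRating) {
  return 0.7 * userRating + 0.3 * voiceLeadingScore(p);
}

// Single-point crossover: the child takes the head of one parent and the tail
// of the other.
Progression crossover(const Progression& a, const Progression& b) {
  std::uniform_int_distribution<size_t> cutA(0, a.size());
  std::uniform_int_distribution<size_t> cutB(0, b.size());
  Progression child(a.begin(), a.begin() + cutA(rng));
  child.insert(child.end(), b.begin() + cutB(rng), b.end());
  return child;
}

// Mutation: swap a random chord for one drawn from the database's Markov model
// (reduced here to a flat list of candidate chords).
void mutate(Progression& p, const std::vector<std::string>& markovCandidates) {
  if (p.empty() || markovCandidates.empty()) return;
  std::uniform_int_distribution<size_t> pos(0, p.size() - 1);
  std::uniform_int_distribution<size_t> pick(0, markovCandidates.size() - 1);
  p[pos(rng)] = markovCandidates[pick(rng)];
}

// One press of the "evolve" button: rank by fitness, keep the better half,
// and refill the population with mutated children of the survivors.
std::vector<Progression> evolve(const std::vector<Progression>& pop,
                                const std::vector<double>& userRatings,
                                const std::vector<std::string>& markovCandidates) {
  std::vector<size_t> order(pop.size());
  for (size_t i = 0; i < order.size(); ++i) order[i] = i;
  std::sort(order.begin(), order.end(), [&](size_t i, size_t j) {
    return fitness(pop[i], userRatings[i]) > fitness(pop[j], userRatings[j]);
  });

  std::vector<Progression> next;
  const size_t keep = std::max<size_t>(1, pop.size() / 2);
  for (size_t i = 0; i < keep && i < pop.size(); ++i) next.push_back(pop[order[i]]);

  std::uniform_int_distribution<size_t> parent(0, keep - 1);
  while (next.size() < pop.size()) {
    Progression child = crossover(next[parent(rng)], next[parent(rng)]);
    mutate(child, markovCandidates);
    next.push_back(child);
  }
  return next;
}
```

The “evolve to next round” button would call evolve() once with the ratings collected from the fitness knob, then start looping the new progressions for the next round of listening.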

User Interface

  • knobs: 1. evaluating fitness, 2. changing tempo of chord progression (playback), 3. changing rhythm (playback)
  • buttons: 1. evolving to the next round (will evolve only once, then loop the new chord progression), 2. starting over with a completely new population, 3. adding the current chord progression to the database (so the database evolves too)
  • screen for feedback

Here is a quick initial UI mockup:

[chord generator UI mockup]