People Watching…

Two summers ago my wife and I went to a different location in NYC almost every Saturday and did a pop-up interactive street installation. We’d run power off the car’s alternator and set up a webcam, a projector, some sound amplification, and one or two computers. As pedestrians walked by on the sidewalk their movements would be captured by the webcam and would generate music, while the images from the webcam were manipulated and projected on a wall. Here are some video clips of one of those installations:

The reactions to these installations were quite varied. Some people would walk straight by as if there was nothing out of the ordinary going on. Some would walk with their eyes downcast in an apparent hope of avoiding any contact, and some even apologized to us for getting in our way! Most people would slow their pace and scope out what was going on before continuing on to wherever they were going. For the people who did actually stop and ended up playing with the installation, I think I can safely say it was a delight. Adults started acting like kids, and kids just became, well, more kid-like.

The first thing people noticed, of course, was that there was a giant colorful thing happening on a wall. The projection was a software mirror, so it reflected whatever the webcam saw, but the images were being manipulated (using Max), and sometimes people didn’t notice right away that they were seeing themselves. After a few moments, though, they’d say, “hey, that’s me!”, and the novelty of the manipulated projection drew them in. In line with Donald Norman’s “The Design of Everyday Things,” the mirror aspect provided great feedback and told people right away that this was something they could interact with. No instructions were needed; the design let people grasp that on their own, without much effort.

The second interactive part of the installation was a little trickier. The majority of people had no idea that they were triggering the sounds. We tried to make this aspect as simple as possible, and at one point even made it so that they triggered a major scale, note by note, as they walked by, but that still didn’t help. According to Chris Crawford’s “The Art of Interactive Design,” this should’ve been a great design: it had all three necessary parts of communication required for “good” interaction. The webcam took in input (listening), Max would process that input and check whether any movement was happening (thinking), and if so it would trigger a certain note based on where the movement happened in the space (speaking). But people still didn’t pick up on it. Why? I think the main reason is that there just wasn’t enough feedback about which actions made which sounds. People couldn’t map a particular action to a particular sound, and so they just didn’t recognize that they were in fact triggering the sounds.
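For the curious, the patch logic was essentially that listen-think-speak loop run over zones of the webcam frame. The original ran as a Max patch; the sketch below is a rough Python/OpenCV analog of the same idea, and the motion threshold, zone count, and C-major MIDI note numbers are illustrative assumptions rather than the actual patch values.

```python
# Rough sketch of the listen-think-speak loop (not the original Max patch).
# Thresholds and note values are illustrative assumptions.
import cv2
import numpy as np

C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]   # one MIDI note per vertical zone

cap = cv2.VideoCapture(0)                     # "listening": the webcam
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # "thinking": frame differencing to find where movement happened
    diff = cv2.absdiff(gray, prev)
    prev = gray
    _, motion = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)

    # split the frame into vertical zones, one per note of the scale
    zones = np.array_split(motion, len(C_MAJOR), axis=1)
    for note, zone in zip(C_MAJOR, zones):
        if zone.mean() > 10:                  # enough motion in this zone?
            print(f"trigger MIDI note {note}")   # "speaking": fire the note

    cv2.imshow("motion", motion)
    if cv2.waitKey(30) & 0xFF == 27:          # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```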

This is a continual consideration and problem point when I design hands-free gestural sound interfaces. Unless some definite action creates some definite sound, it is very hard to make that interactive element visible. The most success I’ve had is mapping volume to an appendage like an arm. It’s easy for people to cognitively map an arm going up to volume going up, and an arm going down to volume going down. Triggers based on moving the entire body through a particular place in space (which is how the street installations worked) are much harder for people to cognitively map.
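That arm-to-volume mapping is about as simple as it sounds. A minimal sketch, assuming you already have a tracked hand or arm y-coordinate in pixels (the function name and numbers here are just for illustration):

```python
# Hypothetical arm-height-to-volume mapping; y = 0 is the top of the frame,
# so the value is inverted: arm up -> volume up, arm down -> volume down.
def arm_height_to_volume(y_pixel: float, frame_height: float) -> float:
    return max(0.0, min(1.0, 1.0 - y_pixel / frame_height))

print(arm_height_to_volume(120, 480))  # arm raised high -> 0.75
print(arm_height_to_volume(400, 480))  # arm hanging low -> ~0.17
```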

At any rate, the street installations were a large success in my opinion. People could tell that they were interacting with something because of the software mirror, and they generally liked the music, even though they didn’t know they were making it. It was a joy for me watching them play and have fun. And they really did have fun, we all did! In the end, I couldn’t ask for anything more.