Physical Computing’s Greatest Hits (and misses) – response

I’ve seen a lot of the projects listed in Physical Computing’s Greatest Hits (and misses). One that I’ve had recent experience with is the hand-as-cursor example. For GO – Brooklyn Open Studios I created a hand-as-cursor projection where 100 particles followed each hand of each user in a different color and drew lines (pictures below). The installation was pretty effective. People intuitively use their hands when they see something they want to interact with, and with this installation they could immediately see the effects of their actions, since the particles drew lines wherever they moved their hands. The results were visually pleasing too, thanks to the physics driving the particles. It was really enjoyable watching everyone have so much fun with it.
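
For anyone curious about the mechanics, here’s a minimal Python sketch of that kind of particle behavior. The spring-and-damping model, the constants, and the commented-out `draw_line` hook are all illustrative stand-ins; the installation’s actual physics may well have differed:

```python
import random

class Particle:
    """One of the 100 particles that trails a tracked hand."""
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.vx, self.vy = 0.0, 0.0

    def follow(self, hand_x, hand_y, pull=0.02, damping=0.95):
        # Accelerate toward the hand, then damp so the motion stays fluid.
        self.vx = (self.vx + (hand_x - self.x) * pull) * damping
        self.vy = (self.vy + (hand_y - self.y) * pull) * damping
        self.x += self.vx
        self.y += self.vy

hand = (320.0, 240.0)  # stand-in for the tracked hand position
particles = [Particle(hand[0] + random.uniform(-50, 50),
                      hand[1] + random.uniform(-50, 50)) for _ in range(100)]

for _ in range(300):            # each frame: move every particle and draw a
    for p in particles:         # line from its old position to its new one
        old = (p.x, p.y)
        p.follow(*hand)
        # draw_line(old, (p.x, p.y), color)  # rendering left to the host app
```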

The installation also had a sound component: based on the distance between both hands, sounds were triggered in certain spaces of the room. This was a little less effective because there was no visual reference for the users. They could tell that they were doing something that made the sounds, but couldn’t quite tell what the sounds were linked to.
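
As a rough illustration of that mapping, here’s a sketch that measures the distance between the two tracked hands and buckets it into sound zones. The zone boundaries and sound names are made up for the example, and the real installation’s mapping may have worked differently:

```python
import math

def hand_distance(left, right):
    """Euclidean distance between the two tracked hand positions."""
    return math.hypot(right[0] - left[0], right[1] - left[1])

# Hypothetical distance bands (in pixels) and the sounds they trigger.
ZONES = [(100, "low drone"), (250, "mid pad"), (450, "high chime")]

def sound_for(distance):
    for limit, sound in ZONES:
        if distance < limit:
            return sound
    return None  # hands very far apart: silence

print(sound_for(hand_distance((100, 200), (300, 220))))  # -> "mid pad"
```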

I’ve also created body-as-cursor projects for live performance where dancers’ movements are tracked from above. This is a little less obvious to the audience, since they are not directly interacting with it, but it can be effective for turning multiple people’s movements into drawings on screen.

GO – Brooklyn Open Studios pictures:

[installation photos]

People Watching…

Two summers ago my wife and I went to a different location in NYC almost every Saturday and did a pop-up interactive street installation. We’d run power off of the car’s alternator and set up a webcam, a projector, some sound amplification, and one or two computers. As pedestrians walked by on the sidewalk, their movements would be captured by the webcam and would generate music, while the images from the webcam were manipulated and projected on a wall. Here are some video clips of one of those installations:

The reactions to these installations were quite varied. Some people would walk straight by as if there was nothing out of the ordinary going on. Some would walk with their eyes downcast, apparently hoping to avoid any contact, and some even apologized to us for getting in our way! Most people would slow their pace and scope out what was going on before continuing on to wherever they were going. For the people who did actually stop and ended up playing with the installation, I think I can safely say it was a delight. Adults started acting like kids, and kids just became, well, more kid-like.

The first thing people noticed, of course, was that there was a giant colorful thing happening on a wall. The projection was a software mirror, so it reflected whatever the webcam saw, but the images were being manipulated (using Max), and sometimes people didn’t notice right away that they were seeing themselves. After a few moments, though, they pretty quickly started saying, “Hey, that’s me!”, and the novelty of the manipulated projection drew them in. In line with Donald Norman’s “The Design of Everyday Things,” the mirror aspect provided great feedback and helped inform people right away that this was something they could interact with. No instructions were needed; the design enabled people to grasp that on their own, without much effort.

The second interactive part of the installation was a bit trickier. The majority of people had no idea that they were triggering the sounds. We tried to make this aspect as simple as possible, and at one point even made it so that they triggered a major scale, note by note, as they walked by, but that still didn’t help. According to Chris Crawford’s “The Art of Interactive Design,” this should’ve been a great design: it had all three parts of communication required for “good” interaction. The webcam took in input (listening), Max processed that input to see if there was any movement happening (thinking), and if so it triggered a certain note based on where in the space the movement happened (speaking). But people still didn’t pick up on it. Why? I think the main reason is that there just wasn’t enough feedback about which actions made which sounds. People couldn’t map a particular action to a particular sound, and so they simply didn’t recognize that they were in fact triggering the sounds.
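
Here’s roughly what that listening/thinking/speaking loop looks like as a minimal Python/OpenCV sketch using frame differencing. The real installations were built in Max, so this is an approximation, and the `print` stands in for actually playing a note:

```python
import cv2

# One octave of a major scale; horizontal position picks the note.
MAJOR_SCALE = ["C", "D", "E", "F", "G", "A", "B", "C"]

cap = cv2.VideoCapture(0)                      # listening: the webcam
ok, frame = cap.read()
prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # thinking: difference the frames to find where movement happened
    diff = cv2.absdiff(gray, prev)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    m = cv2.moments(mask)
    if m["m00"] > 0:                           # any motion at all?
        cx = m["m10"] / m["m00"]               # horizontal center of motion
        idx = min(int(cx / gray.shape[1] * len(MAJOR_SCALE)),
                  len(MAJOR_SCALE) - 1)
        print("speaking:", MAJOR_SCALE[idx])   # stand-in for playing the note
    prev = gray
```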

This is a continual consideration and problem point when I design hands-free gestural sound interfaces. Unless there is some definite action that creates some definite sound, it is very hard to make that interactive element visible. The most success I’ve had is when volume is mapped to an appendage like an arm: it’s easy for people to cognitively map an arm going up to volume going up, and an arm going down to volume going down, as sketched below. Triggers based on moving the entire body through a particular place in space (which is how the street installations worked) are much harder for people to cognitively map.
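
A sketch of that volume mapping, assuming screen coordinates where y grows downward (so a raised arm means a smaller y); the function name and the floor/ceiling values are just for the example:

```python
def arm_to_volume(arm_y, floor_y=480, ceiling_y=0):
    """Map a tracked arm height to a 0.0-1.0 volume.

    Screen coordinates grow downward, so a raised arm (small y)
    means a louder volume. Values are clamped to the valid range.
    """
    t = (floor_y - arm_y) / (floor_y - ceiling_y)
    return max(0.0, min(1.0, t))

print(arm_to_volume(120))  # arm three-quarters of the way up -> 0.75
```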

At any rate, the street installations were a great success in my opinion. People could tell that they were interacting with something because of the software mirror, and they generally liked the music, even though they didn’t know they were making it. It was a joy watching them play and have fun. And they really did have fun; we all did! In the end, I couldn’t ask for anything more.

On physical interaction…

According to The Art of Interactive Design, physical interaction is communication. It’s not just a switch turning something on and off, nor is it fancy user interface design. It’s a dynamic process that can be broken down into three basic divisions: listening, thinking, and speaking. In the scope of physical computing these can be thought of as input, processing, and output. In terms of our class discussion last week, this would seem to place implicit interaction over explicit, and gestural over tangible, since explicit/tangible interaction was defined as deliberately turning a switch on or off. That line is not always so clear, though. Each situation requires its own solution, as the author of A Brief Rant on the Future of Interaction Design alludes to.

He posits that tools are designed to meet a human need: the tool should fit both the problem and the capabilities of the user. In this light, implicit/gestural interaction is not necessarily better than explicit/tangible. For example, let’s say I’m an OK piano player: I can play some chords, but playing a melody in one hand while playing chords in the other is beyond me. I want to be able to play a melody and have a computer analyze what I’m playing and provide the backing chords as I go. Playing the melody is explicit (I’m choosing which notes to play, what rhythm, and so on), but the aforementioned communication is going on too. The computer listens to what I play, thinks about it, and responds appropriately. The tool fits my capabilities and meets my needs.
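
As a toy illustration of the “thinking” step, here’s one naive harmonization strategy: root a diatonic triad on whatever melody note comes in. A real accompaniment system would need to do far more (key detection, voice leading, rhythm), but this shows the listen/think/respond shape:

```python
# Naive harmonizer: root a diatonic triad on the incoming melody note.
C_MAJOR = ["C", "D", "E", "F", "G", "A", "B"]

def backing_chord(melody_note):
    """Stack thirds within the scale, starting from the melody note."""
    root = C_MAJOR.index(melody_note)
    return [C_MAJOR[(root + step) % 7] for step in (0, 2, 4)]

print(backing_chord("E"))  # -> ['E', 'G', 'B'], the iii chord in C major
```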

The same author also urges us to consider the entire range of human expression when designing interactivity. Computers usually observe such a small fraction of what humans are capable of. We can move and sense and feel and act in nearly countless subtle variations. A really excellent interactive device will be designed with this in mind.

So, taking all of the above into consideration, “good” interactive design could be defined as having the following characteristics:

  • it enables communication, specifically through listening (input), thinking (processing the input), and speaking (output)
  • it considers the user’s capabilities and needs, including making the results of their actions clearly recognizable
  • it addresses a specific situation, problem, or goal
  • it takes into account the whole range of human expression, i.e. listening more subtly and thus responding more naturally