Signal and Noise

Signal and Noise began as a research project in conjunction with the computer science department and the visual studies department at the University of North Carolina Wilmington.

My initial research intent was as follows:

------

1. A participant steps forward to the projection space

2. A motion tracking/depth-sensing camera begins reading the participant 

3. As the participant begins moving their hand(s) across the screen, their position will determine which sound piece is heard

4. The software cycles through the sound pieces at random, manipulating each sound further with every pass

5. This interaction is experienced with hand-painted 16mm film, indefinitely looped

The intentions of this project are to: 

1. Create a tactile and reactive three-dimensional space for sound to exist within

2. Create a user-generated and completely individual process of using movement to control sound

3. Create a conceptual connection between sound deteriorating over time and an image deteriorating with it

Movement will serve as a trigger for the volume of individual sound pieces. With no one occupying the projection space, the only sound is the 16mm projector pulling film through its gate. As someone steps forward and motion tracking begins, volume will increase and decrease according to the area of the projection the participant's hand occupies. A short lag is applied to each volume trigger, ensuring that no movement causes a hard cut-off of sound. 
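
A minimal sketch of that lagged volume behaviour, written here as standalone Python: the class name, the 0-to-1 volume range, and the smoothing constant are illustrative assumptions, and in the installation this logic lives in the TouchDesigner network rather than a script.

```python
class ZoneVolume:
    """Volume for one zone's sound piece, following its target with a short lag."""

    def __init__(self, lag=0.15):
        self.current = 0.0   # volume actually sent to the sound piece
        self.target = 0.0    # 1.0 when a hand occupies the zone, else 0.0
        self.lag = lag       # fraction of the gap closed per update

    def set_occupied(self, occupied: bool):
        self.target = 1.0 if occupied else 0.0

    def update(self):
        # Move a small step toward the target each frame so the sound
        # fades in and out instead of cutting off hard.
        self.current += (self.target - self.current) * self.lag
        return self.current


if __name__ == "__main__":
    zone = ZoneVolume(lag=0.2)
    zone.set_occupied(True)               # hand enters the zone
    for frame in range(10):
        print(f"frame {frame}: volume {zone.update():.2f}")
    zone.set_occupied(False)              # hand leaves the zone
    for frame in range(10, 20):
        print(f"frame {frame}: volume {zone.update():.2f}")
```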

After each individual sound piece reaches its end point, it is cycled back through the program and manipulated by the software (pitch, duration, clarity) before being assigned to a zone. 
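
The cycling step might look roughly like the following sketch, where each pass randomly nudges pitch, duration, and clarity before choosing a new zone. The parameter names and ranges are assumptions for illustration; the actual manipulation happens inside the software patch.

```python
import random

ZONES = list(range(9))  # nine projection zones in this version

def recycle(piece):
    """Manipulate a finished sound piece and reassign it to a zone (illustrative values)."""
    piece["pitch"]    *= random.uniform(0.9, 1.1)   # slight detune each pass
    piece["duration"] *= random.uniform(0.8, 1.2)   # stretch or shrink
    piece["clarity"]  *= random.uniform(0.85, 1.0)  # clarity only ever degrades
    piece["zone"]      = random.choice(ZONES)       # new position on the grid
    return piece

piece = {"pitch": 1.0, "duration": 30.0, "clarity": 1.0, "zone": 0}
for _ in range(3):
    piece = recycle(piece)
    print(piece)
```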

------

As the project took shape, a further relationship emerged between the completely analog visual image and the exclusively software-driven manipulation of the sound. 

The software was created using TouchDesigner. Early tests were run in Max, but TouchDesigner was chosen because of its built-in compatibility with depth-sensing cameras and the data it automatically collects from them. Many of my issues came from calibrating the camera to detect depth while positioned adjacent to a participant rather than in front of or behind them. Ideally, the camera would be mounted above the participant and angled down to capture only the hand and wrist. However, data became less predictable when the software could only see a hand rather than the entire body it was connected to. 

All data was output to a vertical grid with zones calibrated specifically for the projection space. Nine zones were chosen for this version of the project, but this could be expanded to cover each individual pixel of space.
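
For illustration, mapping a hand position onto one of the nine zones could be as simple as the sketch below, assuming positions are normalised to a 0-to-1 range; the 3x3 grid dimensions are a parameter, so expanding toward per-pixel resolution is only a matter of raising them.

```python
GRID_COLS = 3
GRID_ROWS = 3

def zone_for(x: float, y: float) -> int:
    """Return a zone index 0-8 for a normalised (0-1) hand position."""
    col = min(int(x * GRID_COLS), GRID_COLS - 1)
    row = min(int(y * GRID_ROWS), GRID_ROWS - 1)
    return row * GRID_COLS + col

print(zone_for(0.10, 0.10))  # near one corner of the grid -> zone 0
print(zone_for(0.95, 0.95))  # opposite corner             -> zone 8
```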

The pink and blue circles to the left of the grid represent each hand. When data is collected, these representations move into the corresponding zones.

The entire program can be broken down into three stages: collecting data, setting thresholds for the amount and type of data collected, and creating an output whenever those thresholds are met. These outputs are mapped onto a grid that mirrors the projection space.
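
Pulling those stages together, the collect-threshold-output loop might read like the following sketch, which reuses the `ZoneVolume` and `zone_for` sketches above; the confidence threshold and the per-frame call are assumptions standing in for the thresholds set inside the program.

```python
CONFIDENCE_THRESHOLD = 0.5  # ignore low-confidence tracking data (assumed value)

def process_frame(hands, zones):
    """hands: list of (x, y, confidence) tuples; zones: list of ZoneVolume."""
    occupied = set()
    for x, y, confidence in hands:
        if confidence < CONFIDENCE_THRESHOLD:
            continue                      # data below the threshold is discarded
        occupied.add(zone_for(x, y))      # which cell of the grid the hand is in

    levels = []
    for index, zone in enumerate(zones):
        zone.set_occupied(index in occupied)
        levels.append(zone.update())      # lagged volume for this zone's sound piece
        # ...in the installation, this level would drive the zone's audio output...
    return levels

zones = [ZoneVolume(lag=0.2) for _ in range(9)]
print(process_frame([(0.2, 0.4, 0.9), (0.8, 0.7, 0.3)], zones))
```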

Program