I've decided to do some work exploring the relationship between aural and visual language. I have been experimenting with different ways of making sounds based on bitmap images (playing a picture like a musical score).
Using a software program, I open a bitmap file of a photo and downsample it to a much smaller grid of pixels. The musical "score" is based on these pixels and played from left to right: the vertical position of each pixel determines the pitch, its color determines where the note sits in the stereo field (left or right speaker), and its brightness determines the volume.
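To make the mapping concrete, here is a minimal sketch in pure Python. The image is represented as rows of (r, g, b) tuples; real code would load and downsample an actual bitmap (e.g. with a library like Pillow). The specific formulas — two semitones per row, red/blue balance for panning, average channel value for brightness — are illustrative assumptions, not my exact implementation.

```python
def pixels_to_score(image):
    """image[row][col] = (r, g, b), with row 0 at the top.

    Returns a list of note events, one per non-black pixel, ordered
    left to right (column by column), each a dict with:
      time   - column index (the score is read left to right)
      pitch  - MIDI note number; higher rows sound higher pitches
      pan    - -1.0 (left) .. +1.0 (right), from the red/blue balance
      volume - 0.0 .. 1.0, from pixel brightness
    """
    n_rows = len(image)
    events = []
    for col in range(len(image[0])):
        for row in range(n_rows):
            r, g, b = image[row][col]
            brightness = (r + g + b) / (3 * 255)
            if brightness == 0:
                continue  # black pixels are silence
            events.append({
                "time": col,
                # top row -> highest pitch, two semitones per row
                "pitch": 36 + (n_rows - 1 - row) * 2,
                # redder pixels pan left, bluer pixels pan right
                "pan": (b - r) / 255,
                "volume": brightness,
            })
    return events

# A 3x4 toy "image": one bright red pixel, one dim blue pixel, rest black.
toy = [
    [(255, 0, 0), (0, 0, 0), (0, 0, 0),   (0, 0, 0)],
    [(0, 0, 0),   (0, 0, 0), (0, 0, 127), (0, 0, 0)],
    [(0, 0, 0),   (0, 0, 0), (0, 0, 0),   (0, 0, 0)],
]
score = pixels_to_score(toy)
for note in score:
    print(note)
```

The red pixel in the top-left becomes the highest, loudest note, panned hard left; the dim blue pixel two columns later becomes a quieter note panned toward the right speaker.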
Within these constraints, the pitches can be remapped to fit a particular musical scale: removing specific horizontal lines of pixels eliminates every note that falls outside the scale. Further manipulation of the image yields multiple variations of the sound.
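The scale-filtering step can be sketched the same way: black out every horizontal line whose pitch falls outside a chosen scale, so only in-scale notes survive. The one-semitone-per-row mapping and the choice of C major here are illustrative assumptions.

```python
C_MAJOR = {0, 2, 4, 5, 7, 9, 11}  # pitch classes of the C major scale

def filter_rows_to_scale(image, base_pitch=36, scale=C_MAJOR):
    """Return a copy of the image with out-of-scale rows blacked out.

    Row r (0 = top) maps to MIDI pitch base_pitch + (n_rows - 1 - r),
    one semitone per row; rows whose pitch class is not in `scale`
    become silence (black pixels).
    """
    n_rows = len(image)
    out = []
    for r, row in enumerate(image):
        pitch = base_pitch + (n_rows - 1 - r)
        if pitch % 12 in scale:
            out.append(list(row))
        else:
            out.append([(0, 0, 0)] * len(row))  # silence this line
    return out

# Four uniformly gray rows: only the rows whose pitches land on
# scale tones (here, C and D) keep their pixels.
toy = [[(200, 200, 200)] * 2 for _ in range(4)]
filtered = filter_rows_to_scale(toy)
```

Running this on the toy grid blanks the rows that would sound C# and D#, leaving the C and D rows intact.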
Here are a couple of examples, showing the original photograph, the pixelized PNG (stretched out to match the length of the audio file), the further edited PNG, and the resulting MP3 file (looped).
Using this method, I plan to compose a longer (2-3 minute) piece based on a series of original photographs that tell a narrative on their own. Once that is complete, I will create a movie file that scrolls through the line of photographs as the resulting audio plays along in real time, presenting the same story through two distinctly different ways of communicating.