My concept project explores the visual and sonic relationships produced by ecosystemic data mapping. More specifically, I'm interested in how distinct spaces sharing a common boundary (e.g. rooms in a building or buildings within a university) could be melded into a common space, or "composite audiovisual ecosystem."
I would use microphones to track the sonic profiles of multiple distinct environments, preferably public spaces--libraries, hallways, cafes, playgrounds, etc. Using custom software, I would extract frequency and amplitude information from these signals in realtime and transform them into a series of data streams. These fluctuating data streams would be structurally coupled to various sound parameters of the audio signals being tracked, as well as to video of the environments, forming a "net" of data connections among the spaces. This net of data couplings would enable the characteristic sound events of each environment to induce change in the audio and video signals of the others, thus shaping the overall audiovisual output of the piece. (The output would include multiple realtime video projections as well as a multichannel speaker array.) In effect, the composite audiovisual output would represent the interactive intersection of multiple spaces within a single environment.
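One way to sketch the analysis-and-coupling stage described above: extract amplitude (RMS) and a rough frequency measure (spectral centroid) from each space's audio, then map each space's features onto control parameters for the others. The function names, the choice of features, and the specific mappings (gain modulation, filter cutoff) are hypothetical illustrations, not a finished design, and a real version would read live microphone input rather than the synthesized test tones used here.

```python
import numpy as np

SAMPLE_RATE = 44100
FRAME_SIZE = 1024

def analyze_frame(frame, sample_rate=SAMPLE_RATE):
    """Extract an amplitude measure (RMS) and a rough frequency
    measure (spectral centroid) from one audio frame."""
    rms = float(np.sqrt(np.mean(frame ** 2)))
    windowed = frame * np.hanning(len(frame))   # reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    total = spectrum.sum()
    centroid = float((freqs * spectrum).sum() / total) if total > 0 else 0.0
    return rms, centroid

def couple_spaces(frames, sample_rate=SAMPLE_RATE):
    """Derive control parameters for each space from the *other*
    spaces' features -- the "net" of data couplings."""
    features = [analyze_frame(f, sample_rate) for f in frames]
    params = []
    for i in range(len(frames)):
        others = [feat for j, feat in enumerate(features) if j != i]
        # Hypothetical mapping: neighbors' loudness drives this space's
        # gain modulation; their spectral centroids drive a filter cutoff.
        gain_mod = float(np.mean([rms for rms, _ in others]))
        cutoff = float(np.mean([c for _, c in others]))
        params.append({"gain_mod": gain_mod, "filter_cutoff_hz": cutoff})
    return params

# Stand-in for three live spaces: one test tone per space.
t = np.arange(FRAME_SIZE) / SAMPLE_RATE
frames = [np.sin(2 * np.pi * f * t) for f in (220.0, 880.0, 3520.0)]
params = couple_spaces(frames)
```

In this sketch the coupling runs once per analysis frame; in performance it would run continuously, with the resulting parameters smoothed over time before being applied to the audio and video outputs.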