Call() and Response()

A Collaborative Composition Experience

Sana Sharma
MDes Tech 2021
Image 1: Entities as they ‘perform’ their own composition with sound, color, and movement.

Inspired by the famous musical exchange in “Close Encounters of the Third Kind,” “Call() and Response()” lets a human perform a duet with a collection of digital entities that listen, respond, and compose melodies of their own. Driven by machine-learning pitch detection and randomized, fractal-based pattern creation, the experience is part of a larger collection of SCI 6338 final projects focused on interpreting melody to produce creative ‘translations’ and ‘transformations’.
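
The write-up does not name the pitch-detection model or toolkit (ml5.js’s CREPE-based detector is a common choice in creative-coding work). As a self-contained stand-in, the sketch below estimates pitch from the microphone with a naive autocorrelation over the Web Audio API; the function names and thresholds are illustrative assumptions, not the project’s actual code.

```typescript
// Sketch: real-time pitch estimation from the microphone (browser).
// A plain autocorrelation estimate stands in for the project's
// machine-learning detector so the example is self-contained.

async function startListening(): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const ctx = new AudioContext();
  const analyser = ctx.createAnalyser();
  analyser.fftSize = 2048;
  ctx.createMediaStreamSource(stream).connect(analyser);

  const buf = new Float32Array(analyser.fftSize);
  const tick = () => {
    analyser.getFloatTimeDomainData(buf);
    const hz = estimatePitch(buf, ctx.sampleRate);
    if (hz > 0) console.log(`pitch ≈ ${hz.toFixed(1)} Hz`);
    requestAnimationFrame(tick);
  };
  tick();
}

// Naive autocorrelation: the lag with the strongest self-similarity
// corresponds to the fundamental period of the voice.
function estimatePitch(buf: Float32Array, sampleRate: number): number {
  const minLag = Math.floor(sampleRate / 1000); // ~1 kHz ceiling
  const maxLag = Math.floor(sampleRate / 70);   // ~70 Hz floor
  let bestLag = -1;
  let bestCorr = 0.01; // anything below this is treated as silence
  for (let lag = minLag; lag <= maxLag; lag++) {
    let corr = 0;
    for (let i = 0; i < buf.length - lag; i++) corr += buf[i] * buf[i + lag];
    if (corr > bestCorr) { bestCorr = corr; bestLag = lag; }
  }
  return bestLag > 0 ? sampleRate / bestLag : -1;
}
```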

During the experience, the human participant sings or speaks into a microphone, and the digital entities respond in real time with a new melody of similar length based on the participant’s input. The participant can let the entities ‘take the lead,’ building a back-and-forth in which both human and digital composers riff off one another; it is just as interesting to push the experience to its limits with unexpected or difficult-to-interpret inputs. By letting participants discover, through interaction and collaboration, how the digital entities listen, learn, and compose, the project shows how collaborative audio-visual experiences can offer unique perspectives on the complex systems that underlie them.
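
How the entities compose their reply is described only as randomized, fractal-based pattern creation, so the following is a hypothetical reading rather than the project’s algorithm: it perturbs each note of the call and occasionally subdivides a note into a self-similar fragment, producing a related phrase of roughly the same length. The `Note` type, probabilities, and ranges are all assumptions.

```typescript
// Sketch: generate a reply melody of similar length to the call.
// The "fractal" step here is the recursive subdivision of a note
// into shorter self-similar copies of itself.

type Note = { hz: number; durationMs: number };

function respond(call: Note[], depth = 2): Note[] {
  if (depth === 0 || call.length === 0) return call;
  const reply: Note[] = [];
  for (const note of call) {
    // Randomly transpose within ±3 semitones (a semitone is 2^(1/12)).
    const semitones = Math.floor(Math.random() * 7) - 3;
    const hz = note.hz * Math.pow(2, semitones / 12);
    if (Math.random() < 0.3) {
      // Occasionally split one note into two shorter ones and recurse.
      const half = { hz, durationMs: note.durationMs / 2 };
      reply.push(...respond([half, half], depth - 1));
    } else {
      reply.push({ hz, durationMs: note.durationMs });
    }
  }
  return reply;
}

// Play the reply with a bare Web Audio oscillator.
function play(ctx: AudioContext, melody: Note[]): void {
  let t = ctx.currentTime;
  for (const note of melody) {
    const osc = ctx.createOscillator();
    osc.frequency.setValueAtTime(note.hz, t);
    osc.connect(ctx.destination);
    osc.start(t);
    osc.stop(t + note.durationMs / 1000);
    t += note.durationMs / 1000;
  }
}
```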

Project video
Image 2: Entities in their ‘resting’ state, with fluctuations generated by Perlin noise.
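
The caption above credits Perlin noise for the resting-state fluctuation; p5.js exposes this directly as `noise()`, but a minimal 1D gradient-noise stand-in looks like the sketch below. The amplitude and time scale are illustrative assumptions.

```typescript
// Sketch: 1D Perlin-style gradient noise — a smooth random signal
// suited to gentle, organic "resting" motion.

const gradients = Array.from({ length: 256 }, () => Math.random() * 2 - 1);

function fade(t: number): number {
  return t * t * t * (t * (t * 6 - 15) + 10); // Perlin's smoothing curve
}

function perlin1D(x: number): number {
  const i = Math.floor(x);
  const f = x - i;
  const g0 = gradients[i & 255];
  const g1 = gradients[(i + 1) & 255];
  // Blend the two gradient contributions with the fade curve.
  return g0 * f + fade(f) * (g1 * (f - 1) - g0 * f);
}

// Each frame, offset an entity's radius by slowly varying noise.
function restingRadius(base: number, timeSec: number): number {
  return base * (1 + 0.1 * perlin1D(timeSec * 0.5));
}
```
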
Image 3: Detail of appearance when ‘listening’.
Image 4: Detail of appearance when ‘performing’.
Image 5: Entities as they ‘listen’ to a human performer, representing pitch with saturation.
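
The final caption describes the entities representing pitch with color saturation while listening. A minimal sketch of that mapping, assuming a vocal frequency range the write-up does not specify:

```typescript
// Sketch: map detected pitch to HSL saturation. The 70–1000 Hz
// clamp is an assumption, not documented by the project.
function pitchToSaturation(hz: number, minHz = 70, maxHz = 1000): number {
  const t = Math.min(1, Math.max(0, (hz - minHz) / (maxHz - minHz)));
  return Math.round(t * 100); // 0–100% saturation for a CSS hsl() color
}
```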