Call() and Response()
A Collaborative Composition Experience
Inspired by the famous musical interaction in “Close Encounters of the Third Kind,” “Call() and Response()” allows a human to perform a duet with a collection of digital entities that listen, respond, and compose melodies of their own. Driven by both machine-learning pitch detection and randomized, fractal-based pattern creation, this digital experience is part of a larger collection of SCI 6338 final projects that focus on the interpretation of melody to produce creative ‘translations’ and ‘transformations’.
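The listening half of that exchange can be pictured as pitch tracking followed by note extraction. The sketch below is illustrative only, assuming a recorded audio file and using librosa’s pYIN tracker as a stand-in for the project’s machine-learning pitch detector; the function and file names are hypothetical.

```python
# Minimal sketch of the "listening" step: a recorded vocal call is reduced
# to a sequence of MIDI note numbers. librosa's pYIN tracker stands in for
# the project's ML-based pitch detector; names here are illustrative.
import librosa
import numpy as np

def listen(audio_path: str) -> list[int]:
    """Convert a recorded vocal 'call' into a sequence of MIDI notes."""
    y, sr = librosa.load(audio_path, sr=None, mono=True)
    f0, voiced, _ = librosa.pyin(
        y,
        fmin=librosa.note_to_hz("C2"),
        fmax=librosa.note_to_hz("C6"),
        sr=sr,
    )
    # Keep only frames where a pitch was detected, then round each
    # frequency to the nearest MIDI note.
    pitches = f0[voiced & ~np.isnan(f0)]
    midi = np.round(librosa.hz_to_midi(pitches)).astype(int)
    # Collapse runs of identical frames into single held notes.
    return [int(n) for i, n in enumerate(midi) if i == 0 or n != midi[i - 1]]

if __name__ == "__main__":
    print(listen("call.wav"))  # hypothetical input file
```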
During the experience, the human participant sings or speaks into a microphone, and in real time the digital entities respond with a new melody of similar length based on the participant’s original input. The participant can allow the digital entities to ‘take the lead,’ building a back-and-forth in which the human and digital composers riff off of one another. However, it is just as interesting to push the experience to its limits by providing unexpected or difficult-to-interpret inputs. By letting participants glean, through interaction and collaboration, more about how the digital entities listen, learn, and compose, the project showcases how collaborative auditory-visual experiences can offer unique perspectives on the complex systems that underlie them.
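The composing half of the exchange can be sketched just as simply: take the notes recovered from the call and generate a reply of the same length. The weighted random walk below is only a rough stand-in for the project’s randomized, fractal-based pattern creation, and every name in it is hypothetical.

```python
# Minimal sketch of the "composing" step: given the notes of the human's
# call, produce a response melody of the same length. A weighted random
# walk stands in for the project's fractal-based generator.
import random

def respond(call_notes: list[int]) -> list[int]:
    """Compose a reply melody of the same length as the call."""
    if not call_notes:
        return []
    lo, hi = min(call_notes), max(call_notes)
    # Start near the call's final pitch and wander with mostly small,
    # occasionally large steps, staying close to the call's range.
    current = call_notes[-1]
    response = []
    for _ in call_notes:
        step = random.choice([-1, -1, 1, 1, -2, 2, -5, 5, -7, 7])
        current = max(lo - 3, min(hi + 3, current + step))
        response.append(current)
    return response

if __name__ == "__main__":
    call = [67, 69, 65, 53, 60]  # a five-note call in the spirit of the film's motif
    print(respond(call))
```

In the installation itself, a loop of this kind would run continuously, so either party can lead while the other answers.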