
Monday, June 30
(9h-17h, at Room 1.01 of the Main Venue)
Student ThinkTank & Doctoral Consortium
(9h-13h)
Workshops & Tutorials
- Live Coding Live Data: An exploratory approach to data sonification
Shelly Knotts
- Telematic and In-Person Music Co-Creation in Perfect Sync Regardless of the Physical Distance, Network Bandwidth, or Latency: Introducing L2Ork Tweeter and Its Growing International Community of Creatives
Ivica Bukvic
— LUNCH —
(14h-18h)
Workshops & Tutorials
- Kinderklang Jam: expanded and collaborative play among children
Graziele Lautenschlaeger and Rafael Bresciani
- COMPEL: Crowd-Sourcing a Computer Music and Performing Arts Database
Hollis Wittman, Kara Long and Ico Bukvic
Tuesday, July 1
(9h-13h)
Workshops & Tutorials
- The Sounds of Sustainability: Examining Carbon Footprints through Sonification
Jordan Wirfs-Brock and Thomas Hermann
- Ensonification: Facilitating Ensemble’s Improvisations through Data Sonification Performances
Tristan Peng, Hongchan Choi, Chris Chafe and Nilam Ram
- Let’s Move Together: A Workshop on Movement Data Sonification for Sports and Rehabilitation
Vincent van Rheden, Michael Reichmann, Alexander Meschtscherjakov, Sascha Ketelhut, Antoni Rayzhekov, Daniel Hug and Nina Schaffert
— LUNCH —
(14h-15h30 – at the Main Venue)
Welcome Reception
(16h-18h, at Casa das Artes)
Music & Concerts #1
- Echoes of the Unseen
Nádia Carvalho and Jorge Sousa
- Galaxy Spectra Suites
Adrian Garcia Riber and Francisco Serradilla
- Tracotanza
Nicola Fumo Frattegiani
- AIded Sonification: Women in STEAM in Three Movements
Visda Goudarzi and Areti Andreopoulou
- The Cycle of Life and Decay
Berk Yagli
(18h30-20h, at the Main Venue)
Keynote – “Conversations and Reflections with ICAD’s Founder”
by Gregory Kramer
(22h-24h, at Salão Brazil)
Music & Concerts #2
- Tonspur: I’m telling you the truth
Se-Lien Chuang and Andreas Weixler
- Interstellar
L2Ork International Ensemble
- The application of recombinant DNA theory for sonification and music composition
Mark Temple
- Frames for Nothing: Devices for Revelation in the Phase Space of the Marvelous
Jorge Boehringer
Wednesday, July 2
(9h-10h)
Keynote – “Transforming Body Perceptions through Audio (mostly) and other Senses: Neuroscientific, Human-Computer Interaction and AI-driven Approaches and Applications”
by Ana Tajadura-Jiménez
(10h-11h)
Paper Session #1 – Frameworks & Techniques 1
- Pya AGen – Audio Generators for Sonification in Python
Luka Finn Born and Thomas Hermann
- MusicFlower: A Web App and Python Framework for Interactive Music Visualisation
Robert Lieck
- Pixasonics: an image sonification toolbox for Python
Balint Laczko and Alexander Refsum Jensenius
- Introducing WebPdL2Ork
Ivica Bukvic, William Furgerson, Justin Kerobo and Bradley Davis
(11h30-12h45)
Paper Session #2 – Design Challenges
- Sonic Scribbles – Constructing Sketch Classes from Visual Associations of the Mental Model for Audio
Lars Engeln, Rainer Groh and Matthew McGinity
- Sound’s Influences on Perceptions of Anthropomorphism and Animacy in Interfaces
Kc Collins, Maya Murad and Adel Manji
- Signal and Symbol Representations in Sonification: Structuring Data-to-Display Transformations
Miguel Angel Crozzoli, Gonzalo Muruaga and Thor Magnusson
- Investigating the Affective Quality of Audio Designs in Virtual Reality
Alvaro Olsen
- Legacy, Active and Future Software Tools in Sonification Research
Sofia Vallejo Budziszewski and Katharina Groß-Vogt
— LUNCH —
(14h-15h)
Poster Session #1
- Trends in Goals and Authorship Over Three Decades: Insights from the Data Sonification Archive
Per Magnus Lindborg, Valentina Caiola, Sara Lenzi and Paolo Ciuccarelli
- Helium Burning: affordances of stellar nucleosynthesis to explore timbral plasticity, form and spatialization in computer music
Juan Hernandez
- Detmold musical Instrument Timbre Explorer (DmITE): Interactive visualization of musical instruments radiation pattern using IEEE 1599
Davide Andrea Mauro, Luca Andrea Ludovico, Hamit Batuhan Aydin, Axel Berndt and Timo Grothe
- In the middle of the music: A qualitative study of headset and loudspeaker distributed interactive spatial audio within a musical mixed-reality environment
Laurence Cliffe, Mairah Khan and Steve Benford
- A Sensor is not a Sensor: Diffracting the Preservation of Sonic Microinteraction with the SiFiBand
Hugh Alexander von Arnim, Çağrı Erdem, Ulysse Côté-Allard and Alexander Refsum Jensenius
- From Installation to Instrument: A Qualitative Inquiry into Gesture, Expression, and Composition with Soundscape
Trevor Hunter, Eve Klein and Stephen Viller
- An investigation into the mapping of sonic parameters to pen movement in tablet tasks
Georgios Marentakis and Ahmed Raza Mir
- Where is That Bird? The Impact of Artificial Birdsong in Public Indoor Environments
Maham Riaz, Jinyue Guo, Bilge Serdar Göksülük and Alexander Refsum Jensenius
- Playing With Sound Beings: A Phenomenological Exploration
Mariana Seiça, Licinio Roque, Pedro Martins and F. Amílcar Cardoso
- A case study of translating sonifications across musical cultures for an educational application
Chris Harrison, James Trayford, Arron George, Leigh Harrison, Rubén García Benito, Shirin Haque and Rose Shepherd
- Non/Repeat: Three Case Studies of Non-linear Live-Music Practices
Maria Kallionpää, Axel Berndt, Davide Andrea Mauro and Damian Dziwis
Installations & Demos
- Idiosyncrasy: Environment ressoning as a music Instrument
João Simões, Jônatas Manzolli and João Neves
- Sounding the Abstract
Woohun Joo
- Interactive sonification of bibliometric data: the case of publications within the field of sonification and auditory display
Maurizio Berta, Roberto Bresin and Gaël Dubus
- Real-Time Footstep Sonification for Enhancing Perceived Indoor Running Environments
Oskar Lövgren, David Segal and Julia Näsström
- MUSE: A Bio-inspired Musical Memory System
João Aveiro, João Neves and Jônatas Manzolli
- Interactive Visualisation of Tonal Space in Augmented Reality
Toby Watkinson and Robert Lieck
- The Interactive Musical Periodic Table 2.0: Musical Transformations of Element 2 Spectra Sonifications
Walker Smith and Marina Bosi
- Music Video Scope Pad for Fast Video Selection
Masatoshi Hamanaka
- Sound Balls: Revisiting Handheld Spherical Interfaces for Participatory Music
Matthias Jung
(15h-16h)
Paper Session #3 – Playing with AI
- Meaningful Interactions in Human-AI Musicking
Oliver Miles, Adrian Hazzard, Solomiya Moroz, Laura Bishop and Craig Vear
- Leveraging small datasets for ethical and responsible AI music making
Nick Bryan-Kinns, Anna Wszeborowska, Olga Sutskova, Elizabeth Wilson, Phoenix Perry, Rebecca Fiebrink, Gabriel Vigliensoni, Rikard Lindell, Andrei Coronel and Nuno N. Correia
- AI Meets Sonification: Research Agenda and Technology Demonstration
Fabian Hommel and Thomas Hermann
- Motiv: A Dataset of Latent Space Representations of Musical Phrase Motions
Nádia Carvalho, Jorge Sousa, Gilberto Bernardes and Henrique Portovedo
(16h30-18h)
Paper Session #4 – Perception & Cognition & Multimodality
- Interface Sound Variations Influence Learned User Responses
Miguel Valle and Kc Collins
- Exploring How 5-Second-Long Music Causes Waiting Time to Be Perceived as Shorter or Longer
Nao Harada and Takanori Komatsu
- Two Empirical Studies on Audiovisual Semiotics of Uncertainty
Sita Vriend, David Hägele and Daniel Weiskopf
- Audio-Influenced Pseudo-Haptics: A Review of Effects, Applications and Research Directions
Keru Wang, Yi Wu, Pincun Liu, Zhu Wang, Agnieszka Roginska, Qi Sun and Ken Perlin
- Cyclic Patterns and Spatial Orientations in Artificial Impulsive Autonomous Sensory Meridian Response (ASMR) Sounds
Henrik Haraldsen Sveen, Laura Bishop and Alexander Refsum Jensenius
- Harmonic Materials: Composing with found objects and a camera-based synthesiser
Jasmine Butt, Nathan Renney, Benedict Gaster and Tom Mitchell
(22h-24h, at Salão Brazil)
Music & Concerts #3
- The noise of Peace!
Biagio Francia
- Bacteriophage in Granular Waves
Stephen Roddy
- Langue Étrangère
Héloïse Garry
- Echoes of Presence: Sonifying Collective Spatial Behaviour in Workplace Environments
Misaki Yamao
Thursday, July 3
(9h-10h)
Keynote – “Creating Audio Software: a Ride of a Lifetime”
by Nuno Fonseca
(10h-11h)
Paper Session #5 – Design in Context
- Towards sonification for accessibility in public transport
Niklas Rönnberg
- Sound design for HRI: a case study exploring appropriateness, naturalness, and sociality of a zoomorphic robot
Balandino Di Donato, Marta Romeo and Matthew Aylett
- Sonification-based Steering Assistance for Curve Driving: Evaluations in Virtual and Real-World Environments
Sayuri Matsuda, Satoshi Nakamura, Kento Watanabe, Takeshi Torii, Hideyuki Takao, Yuki Mizuhara and Saeri Shimizu
- Sound-driven Design through the lens of four applications for healthcare
Elif Özcan, Nicolas Misdariis and Stefano Delle Monache
(11h30-12h45)
Paper Session #6 – Culture & Listening & Sound Art
- Listening Together-Apart: On the Social Mediation of Sound Installation Listening
Nicole Robson, Andrew McPherson and Nick Bryan-Kinns
- Beyond Universality: Cultural Diversity in Music and Its Implications for Sound Design and Sonification
Rubén García-Benito
- Situating Norths: Critical Thinking from Critical Listening in Data-derived Sound Arts Practice
Jorge Boehringer
- The Conduction Series: Live Collaborative Transmission Art across Borders
August Black, Betsey Biggs, Anna Friz, Maximilian Goldfarb, Peter Courtemanche, Florencia Curci, Virginia Mantinian, Jimmy Garver, Jeff Economy, Rodrigo Ríos Zunino and Galen Joseph-Hunter
— LUNCH —
(14h-15h)
Poster Session #2
- Acoustic Analysis of Resonant Absorber Using Recycled Materials for Standing Wave Reduction
Sanghoon You and Seungyon-Seny Lee
- Refining sonification methods to improve shooting performance and ergonomics for the visually impaired
Coline Fons, Sylvain Huet, Denis Pellerin and Christian Graff
- The Design and Exploration of Auditory Display Effects for General Vision Substitution
David Dewhurst
- Comparing Trend-Based and Direct HRV Biofeedback in an Adaptive Game Environment
Michael Louka, Kedi Krasniqi, Hassan Mahauri and Georgios Marentakis
- 360 VSP: Issues and limitations of 360 Virtual Sound Productions for mobile devices
Balandino Di Donato, Martin Rieger and Iain McGregor
- Making a Binaural Microphone: Comparing Audio Quality, Listener Experience, and Costs of DIY with a Commercial Option
André Hedlund, Tanja Jörgensen and Rikard Lindell
- Virtual Reality For The Visually Impaired: 3D audio, audio description and haptics in games and immersive experiences
Cesar Portillo
- Speed and Direction as two Overlapping Rhythms: Accuracy and Precision in Interpretation and Performance
Alessio Bellino, Davide Rocchesso, Chiara Mascetti and Giorgio Presti
- DJ AI: Optimizing Playlist Alignment and Generating Transitions with Generative and Embedding Models
Orkun Kınay, Barış Tekdemir, Göktuğ Gökyılmaz, Ekmel Yavuz, Berk Ay and Selim Balcisoy
- Creating the Ideal Soundscape: Designing Study Friendly Cafe Environments
Xinyue Hu
- RhyGlyph: Radial Glyph Visualization of Rhythm Interactions
Cagri Erdem, Carsten Griwodz and Davide Rocchesso
- User-Centered Insights into Analogue and Digital Mixing: Perspectives of Novice Music Producers
Vanja Budsberg, Christopher P. Dewey and Austin Moore
Installations & Demos
- Idiosyncrasy: Environment ressoning as a music Instrument
João Simões, Jônatas Manzolli and João Neves
- Sounding the Abstract
Woohun Joo
- Interactive sonification of bibliometric data: the case of publications within the field of sonification and auditory display
Maurizio Berta, Roberto Bresin and Gaël Dubus
- Real-Time Footstep Sonification for Enhancing Perceived Indoor Running Environments
Oskar Lövgren, David Segal and Julia Näsström
- MUSE: A Bio-inspired Musical Memory System
João Aveiro, João Neves and Jônatas Manzolli
- Interactive Visualisation of Tonal Space in Augmented Reality
Toby Watkinson and Robert Lieck
- The Interactive Musical Periodic Table 2.0: Musical Transformations of Element 2 Spectra Sonifications
Walker Smith and Marina Bosi
- Music Video Scope Pad for Fast Video Selection
Masatoshi Hamanaka
- Sound Balls: Revisiting Handheld Spherical Interfaces for Participatory Music
Matthias Jung
(15h-16h)
Paper Session #7 – Games & Immersion
- Plétude: New Paradigms for Notation with Game Engine-Based Scores
Austin Franklin and Mary Bergeron
- An Exploratory Phenomenological Study into how Audio Shapes Emotions in Player Experience
Pedro Garruço, Licínio Roque and Luís Lucas Pereira
- Procedural Sonification of Environmental Phenomena for Realistic Sound Design
Dimitris Menexopoulos
- I am the Ball
Yann Seznec
(16h30-17h45)
Paper Session #8 – Sounds of Science
- The Eucalyptus Tree Monologues
Mark Temple
- Auditory Graphs for Stochastic Processes: A Case Study on Mathematical Accessibility
Haru Negami, Shun Yanashima, Teturo Hori, Makiko Kobayashi, Kyosuke Ono, Kennosuke Tanaka and Ryota Yamakawa
- Sonification Strategies for Mapping Visible Element Emission Spectra to Perceptually Relevant Sounds While Retaining Tonal Information
Walker Smith and Marina Bosi
- Listen to Your Neighbor: Auditory Display of Spatial Microbiome Differences Based on Biological Classification
Fushi Sano, Kihiro Tokuno, Ze Hu and Mamoru Hiramatsu
- Interactive Sonification of 2D Quantum Systems
Jannis L. Müller, Arthur Freye and Thomas Hermann
— Social Event & Conference Dinner —
Friday, July 4
(9h-10h)
Keynote – “Searching for the sweet spot”
by Miriam Quick
(10h-11h)
Paper Session #9 – Frameworks & Techniques 2
- Mapping the Neighborhood of Microtonal Music Scales Using Self-Organizing Maps
Kousha Nikkar, Matteo Sacchetto and Cristina Rottondi
- Harmonic Sonification of High-Dimensional Continuous Data
Tony Dalziel and Robert Lieck
- Navigable Semantic Sound Maps for Auditory Displays
Mika Alexander Sieweke and Thomas Hermann
- An Interactive Self-Assembly Swarm Music System in Extended Reality
Pedro Lucas, Stefano Fasciani, Alex Szorkovszky and Kyrre Glette
(11h30-12h45)
Paper Session #10 – Motion & Sports
- Sonic Skater Jump: Exploring the Sound Design of Complex Auditory Movement Guidance for Physical Therapy
Daniel Hug, Michelle Haas, Jasmin Meier and Eveline Graf
- SoniBould: Towards understanding the design of sonified expressions for bouldering
Hakan Yilmazer, Aykut Coşkun and Florian ‘Floyd’ Mueller
- Flip, Freeze, Flow: Exploring Interactive Sonification of Breakdancing
Casper Preisler, Thomas Saunter, Niccolò Sarraga, William Gerdes and Prithvi Kantan
- Comparison of Speed-based and Position-based Auditory Feedback in Eccentric Strength Training
Ryuto Oishi and Satoshi Nakamura
- Spatial Audio Paths – Spatial Expressiveness of Immersive 3D Audio Trajectory Editing
Lars Engeln, Leon Georgi, Robert Ludwig, Krishnan Chandran, Luca Steindorf, Fabian Töpfer and Matthew McGinity
Keynotes
Gregory Kramer
Conversations and Reflections with ICAD’s Founder – Tuesday, at 18h30

Biography
Gregory Kramer is a meditation teacher, author, researcher, and composer. He is a founding figure in the field of Auditory Display and published the first book in this area, “Auditory Display: Sonification, Audification and Auditory Interfaces” (Addison Wesley). He inaugurated the International Conference on Auditory Display. As a member of the Santa Fe Institute, his area of concentration was sonification of high dimensional systems and understanding complexity. Kramer was on the Editorial Board of the MIT journal Presence and was a visiting professor at Japan’s National Institute of Fusion Science. He received his B.F.A. from California Institute of the Arts, his M.A. in Composition from NYU, and his Ph.D. in “Learning and Change in Human Systems” from CIIS.
A National Endowment for the Arts Composition Fellow and former Assistant Professor of composition at New York University, Kramer also co-founded the non-profit arts organization Harvestworks in New York City. He founded and toured extensively with the Electronic Art Ensemble, and is recorded on multiple labels. Greg has scored award-winning films, dance, and video works. He developed new instruments with Dr. Robert Moog and holds patents in auditory display and audio signal processing.
Gregory is the Founding Teacher of the Insight Dialogue Community and has been teaching insight meditation since 1980. He developed the practice of Insight Dialogue and has been teaching it since 1995, offering retreats worldwide. He has studied with esteemed monastic teachers from Thailand and Sri Lanka. Gregory is the author of “A Whole Life Path: A Layperson’s Guide to a Dhamma-infused Life” (Insight Dialogue Community); “Insight Dialogue: The Interpersonal Path to Freedom” (Shambhala); “Seeding the Heart: Practicing Lovingkindness with Children”; “Meditating Together, Speaking from Silence: The Practice of Insight Dialogue”; and “Dharma Contemplation: Meditating Together with Wisdom Texts”. He has taught widely on a relational understanding of the Buddha’s teachings and is currently writing a book on relational Dharma.
Greg is the father of three sons and grandfather to seven. He lives with his wife on Orcas Island in the northwest USA.
Ana Tajadura-Jiménez
Transforming Body Perceptions through Audio (mostly) and other Senses: Neuroscientific, Human-Computer Interaction and AI-driven Approaches and Applications – Wednesday, at 9h
Body perceptions are crucial for individuals’ motor, social, and emotional functioning, as well as for health. Importantly, neuroscientific research shows that body perceptions are continually updated through sensorimotor information. This talk will showcase our group’s research on how audio (mostly) feedback, particularly sound related to one’s body and actions, can modify body perception, leading to Body Transformation Experiences. I will discuss how these findings contribute to the design of innovative body-centered technologies to address people’s needs and support health. Additionally, beyond such practical applications, these technologies serve as valuable tools for neuroscientific research into multisensory influences on body perception, as well as the interplay between body perception, motor functions, and social interactions. Our ERC-funded project, BODYinTRANSIT, aims to establish a framework for individualized sensorial manipulation of body perceptions with long-lasting effects in everyday use contexts. The framework stands on four scientific pillars to induce, measure, support, personalize, and preserve body transformations: neuroscience of multisensory body perception; AI-driven data modeling of the links between body perception, behavior, and emotion; wearable-based embodied multisensory interaction design; and field studies in real-life contexts with diverse user groups. Finally, I will identify challenges and opportunities in this research field.

Biography
Ana Tajadura-Jiménez is an Associate Professor at Universidad Carlos III de Madrid (UC3M) and Honorary Research Fellow at the University College London Interaction Centre (UCLIC). She leads the i_mBODY lab, which conducts multidisciplinary research at the intersection of Human-Computer Interaction (HCI), Cognitive Neuroscience, Engineering and AI. Her team investigates how to design novel sensorial and body-centred paradigms and technologies that change people’s perceptions of their own body and the world, their interactions and emotions. One of the aims of this research is to inform solutions that support emotional and physical health, as well as drive behaviour change in real-world contexts.
She holds a European Research Council Consolidator Grant BODYinTRANSIT (2022-2026), focused on advancing research on sensory-driven Body Transformation Experiences. She is also Principal Investigator of the project SENSEBEAT-DS, focused on sensory and body perception in depressive symptomatology. Previously, she obtained a PhD in Applied Acoustics from Chalmers University of Technology in Sweden. She was a postdoctoral researcher in the Lab of Action and Body at Royal Holloway, University of London, an ESRC Future Research Leader and Principal Investigator of The Hearing Body project at University College London Interaction Centre, and a Ramón y Cajal fellow at Universidad Loyola Andalucía.
Her research has led her to receive the 2019 “Excellence Award” from the UC3M Consejo Social and the 2021 Science and Engineering Award from Fundación Banco Sabadell. She is an Editor of IEEE Transactions on Affective Computing, Frontiers in Virtual Reality, Frontiers in Computer Science, Scientific Reports and the International Journal of Human-Computer Studies. Her work has been published in high-impact venues such as Current Biology, Neuroimage, Cerebral Cortex, Human-Computer Interaction, and the Proceedings of the ACM CHI Conference on Human Factors in Computing Systems.
E-mail: atajadur@inf.uc3m.es
X: @AnaTajadura
LinkedIn: https://www.linkedin.com/in/ana-tajadura-jimenez-910b8723/
Web: www.imbodylab.com
Nuno Fonseca
Creating Audio Software: a Ride of a Lifetime – Thursday, at 9h
The story of Sound Particles: from the single idea of a university professor who wanted to generate thousands of sounds around the listener, to a company whose software is used by all major Hollywood studios, in productions such as Dune, Oppenheimer, Game of Thrones, and Star Wars.
The keynote will cover the challenges, the adventure, and the highs and lows of creating an audio technology, bringing it to market, and seeing that technology used by some of the best audio professionals in the world.
Finally, the session will also touch on the company’s work on what is probably the industry’s most complex audio problem at present: the search for the holy grail of personalized binaural audio (3D sound over headphones).

Biography
Founder/CEO of Sound Particles, a company that creates 3D audio software, used in AAA games and big Hollywood productions.
With a PhD in Computer Audio, Nuno Fonseca is the author of 2 books, 1 eBook, and more than 20 papers on audio research.
Over his career, Nuno has given presentations at venues including Skywalker, Disney, Pixar, Warner Bros., Universal, Sony, Paramount, Fox, Netflix, Apple, Google, PlayStation, Blizzard, Stanford, TEDx, and the Academy of Motion Pictures, among many others.
More info at www.soundparticles.com
Miriam Quick
Searching for the sweet spot – Friday, at 9h
Making musical sonifications that connect with everyday listeners means navigating a delicate three-way balance. You’re constantly negotiating between data accuracy, musical form, and storytelling – three elements that don’t always want to play nicely together. And often, your audience has never even heard of sonification. So before they can understand what you’ve made, you need to show them what to listen out for. In that moment, your work becomes a stand-in for everything sonification is, or could be.
It’s a lot to carry. But rather than seeing these constraints as limits, what if we treat them as creative fuel? What if they push us to listen more closely to the data, to discover its hidden rhythms and textures?
That’s the approach we take at Loud Numbers, the sonification studio I run with Duncan Geere. We started out making a podcast, and since then we’ve created data-driven music for museums, brands, and art spaces, often based around environmental issues – from dub tracks built on air quality readings, to jazz-infused climate scores performed around COP27 (in collaboration with composer-trombonist Simon Petermann), to dramatic evolving soundscapes inspired by wildfires in Canada and Sweden.
Much of our work is sonification-as-activism: using sound to direct people’s attention to data and topics we think are important. And in all our work, we are chasing a kind of alchemy: the moment where story, sound, and data click into place. The sweet spot. This talk is about how we try to find it – and what we’ve learned along the way.

Biography
Miriam Quick is a UK-based data journalist, artist and musician who explores novel, diverse and multi-sensory ways of telling stories that blend science and art. She has turned numbers into everything from charts, graphics and books to museum installations, necklaces, engraved 12-inch records and pieces of music. She is co-founder of the Loud Numbers data sonification studio and co-author, with Stefanie Posavec, of the award-winning I am a book. I am a portal to the universe (Penguin, 2020).
See more at:
https://www.miriamquick.com/
https://www.loudnumbers.net/
Workshops & Tutorials
Live Coding Live Data: An exploratory approach to data sonification – Shelly Knotts
The workshop explores practices and processes for sonifying live environmental datasets using live coding techniques with Sonic Pi and the IoT platform ThingSpeak.
Live coding enables quick, iterative prototyping of sonifications, is useful in educational contexts, and opens up the potential to engage with sonification techniques in performative and exploratory ways. Live coding tools work with short feedback cycles, providing the potential to work with data in a more dynamic and playful way. Exploring data flexibly may lead to surprising and interesting results that might not be reached through other techniques, and it creates accessible entry points to working with sonification in public-facing settings.
The workshop will explore the whole workflow from sensors to sound, and ways that we can usefully introduce explorative practices into this workflow. Participants will be guided through a set of resources developed for educational purposes and will engage in a short evaluation of these resources.
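As a rough illustration of the sensors-to-sound workflow the workshop covers (a generic sketch, not the workshop’s own materials), the Python snippet below reads a public ThingSpeak channel feed and maps each reading onto a MIDI pitch that could then be played back in Sonic Pi. The channel ID in the comment and the use of `field1` are hypothetical placeholders; any public channel with a numeric field would do.

```python
import json
from urllib.request import urlopen

def value_to_midi(value, lo, hi, note_lo=48, note_hi=84):
    """Linearly map a sensor reading onto a MIDI note range, clamping outliers."""
    if hi == lo:
        return note_lo
    t = min(max((value - lo) / (hi - lo), 0.0), 1.0)
    return round(note_lo + t * (note_hi - note_lo))

def fetch_feed(channel_id, results=20):
    """Fetch the latest readings of field1 from a public ThingSpeak channel."""
    url = f"https://api.thingspeak.com/channels/{channel_id}/feeds.json?results={results}"
    with urlopen(url) as resp:
        feeds = json.load(resp)["feeds"]
    return [float(f["field1"]) for f in feeds if f.get("field1")]

# Offline demo with sample temperature readings standing in for a live feed
# (swap in `fetch_feed(<your channel id>)` to sonify real data):
readings = [18.2, 18.9, 19.4, 21.0, 20.3]
notes = [value_to_midi(r, min(readings), max(readings)) for r in readings]
```

The resulting `notes` list can be pasted into a Sonic Pi `play_pattern` call, closing the loop from live data to live-coded sound.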
Telematic and In-Person Music Co-Creation in Perfect Sync Regardless of the Physical Distance, Network Bandwidth, or Latency: Introducing L2Ork Tweeter and Its Growing International Community of Creatives – Ivica Bukvic
This workshop focuses on L2Ork Tweeter, a free and open source app that is included with the Pd-L2Ork (a Pure-Data variant) for the purpose of collaborative telematic and in-person musicking.
L2Ork Tweeter offers unique affordances, like the ability to co-create and co-direct every stage and aspect of musicking, including instrument design, while maintaining perfect sync and pristine audio quality regardless of the physical distance, network bandwidth, and latency. Originally motivated by the 2020 pandemic, the software was designed to challenge the limitations of telematic musicking, initially focusing on EDM-style music that would stress its synchronization limits. Since then, it has grown into an experimental platform, spawning a number of international communities, including the Virginia Tech L2Ork Ensemble and the L2Ork International Ensemble. The international community has premiered and performed six new works across four continents, with the longest achieved distance of synchronous musicking being 12,100 miles, a mere 300 miles short of the longest possible distance between any two points on Earth.
In the spirit of the conference theme “Let’s Play Together”, a hands-on session comprising the majority of the workshop will engage participants in collaborative musicking in a hybrid setting using L2Ork Tweeter, co-creating with their on-site fellow participants as well as L2Ork International Ensemble community members across the world, who will connect over the internet. The goal of the 4-hour workshop is the co-creation of a new short music piece that leverages L2Ork Tweeter’s unique affordances, to be presented at one of the conference performances, ideally in a late-night informal club setting. Following the workshop, participants will be invited to join the L2Ork Tweeter international community, explore community events such as the annual Hackathon, and/or create a new satellite ensemble of their own.
Kinderklang Jam: expanded and collaborative play among children – Graziele Lautenschlaeger and Rafael Bresciani
This workshop proposal consists of a preliminary exploration to gather creative inputs from children and sound art specialists for developing the first prototype of the cross-disciplinary project Kinderklang Jam: digital media and sound art for, with and by children.
The project envisions the development of a sound environment to be played collaboratively, in a jam logic, by children aged between 4 and 8 years old. In the context of the workshop, the proposed activities are meant to identify the desirable basic features and didactic approaches that foster an interaction contributing to children’s sound-based aesthetic education, simultaneously enabling the construction of repertoire and the joyful experience of improvisation. Through an expanded notion of play, the workshop explores ways in which core aspects of sound art and free improvisation practices, such as proactive engagement, rhythm synchronisation, timbre composition, adaptive harmony, relational difference, aesthetic coherence, collective listening and adaptation, could be designed and implemented ergonomically for and with children.
COMPEL: Crowd-Sourcing a Computer Music and Performing Arts Database – Hollis Wittman, Kara Long and Ico Bukvic
In this workshop, we introduce the COMPEL database, demonstrate its use, and invite workshop participants to contribute their own data or that of their colleagues. This database uses linked open data (LOD) in an open source Wikibase instance to collect metadata about computer music performances (defined very broadly), with a focus on documenting specific performances and living musicians.
The workshop will briefly introduce the project’s history and development but will focus on creating user accounts for attendees, adding and linking items brought by attendees or provided by instructors, and demonstrating querying the database for results.
The Sounds of Sustainability: Examining Carbon Footprints through Sonification – Jordan Wirfs-Brock and Thomas Hermann
As the impacts of climate change become increasingly severe, our connection to the problem remains an abstract one: Our consumption of resources largely remains invisible (and silent) to us, as we do not perceive our electricity use and its associated CO2 emissions directly. How might sonification, which has unique capabilities to engage us and to disrupt our relationship to data, make the energy we consume while conducting daily activities—showering, traveling, eating—more apparent and more visceral?
This workshop uses hands-on sonification activities to deepen our connection with the resources we use in our daily lives. We will facilitate a set of participatory activities to create sonifications of resource use, including but not limited to the carbon footprint of attending the AM.ICAD conference. Working in small groups, participants will develop sonification concepts and use vocal sketching to prototype their ideas.
If time permits, we will also implement these sonifications using digital tools. In a closing discussion, we will examine how our sonic perception can help us understand our individual actions in context with broader human and natural systems, and how sound could more generally be used to boost awareness of the unseen and unheard consequences of our actions.
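One way such a mapping might be prototyped digitally (a minimal sketch under simple assumptions, not the workshop’s toolchain; the CO2 figures below are illustrative placeholders, not measured emission factors) is to let each activity’s share of the day’s total footprint drive pitch and duration, so the biggest contributors dominate the sound:

```python
def footprint_to_events(activities, bpm=90):
    """Map (activity, kg CO2) pairs to (name, MIDI note, duration-in-seconds) events.

    Heavier footprints get lower pitches and longer durations, so the
    largest contributors stand out when the events are played back.
    """
    total = sum(kg for _, kg in activities)
    beat = 60.0 / bpm
    events = []
    for name, kg in activities:
        share = kg / total
        midi = round(84 - share * 36)        # larger share -> lower pitch
        dur = beat * (1 + round(share * 8))  # larger share -> longer note
        events.append((name, midi, dur))
    return events

# Illustrative placeholder figures for one day's activities:
day = [("shower", 0.5), ("commute by car", 4.5), ("lunch", 1.0)]
events = footprint_to_events(day)
```

Each event tuple can then be handed to any synthesis backend; the point of the sketch is only the proportional mapping, which mirrors the vocal-sketching step of the workshop in code.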
Ensonification: Facilitating Ensemble’s Improvisations through Data Sonification Performances – Tristan Peng, Hongchan Choi, Chris Chafe and Nilam Ram
Ensemble sonification, stylized as Ensonification, is an innovative method for engaging with data through data-guided, structured ensemble improvisation. This accessible and democratized paradigm for music-making and data exploration offers the composer, performer, and listener novel opportunities for an embodied experience. While it is important to motivate the purpose of Ensonification in an archival format as a template for conducting this type of sonification performance, in the spirit of Ensonification a participatory, collaborative workshop serves as the best medium for experiencing this new paradigm.
Let’s Move Together: A Workshop on Movement Data Sonification for Sports and Rehabilitation – Vincent van Rheden, Michael Reichmann, Alexander Meschtscherjakov, Sascha Ketelhut, Antoni Rayzhekov, Daniel Hug and Nina Schaffert
Sound and sonification are established feedback modalities in sports and rehabilitation: sound and music can motivate, support, and guide people through physical movement. Although movement sonification is widely acknowledged in HCI and sport science practice as a way to support athletes, creating sonifications from movement data remains a challenge for people who are not musicians, sound designers, or engineers.
This workshop introduces a novel sonification toolkit that supports and simplifies the creation of movement data sonifications and empowers people less experienced with sonification tools. In this workshop we will demonstrate the current version of the toolkit and invite participants to (1) engage with and try out the toolkit, and (2) develop a new sonification feature through an iterative design process. Results of the workshop will be made publicly available and will serve as input for workshops with SportsHCI researchers in follow-up stages.
Music & Concerts – Program
Session #1 (Tuesday, 16h), at Casa das Artes
Echoes of the Unseen
Nádia Carvalho and Jorge Sousa
“Echoes of the Unseen” [c. 10’] is an AI music improvisation for tenor saxophone and live electronics, focusing on live improvised music performance. Building on research into the explainability of Variational Autoencoder (VAE) latent spaces within timbre spaces, the performance explores pre-trained Realtime Audio Variational autoEncoder (RAVE) models (such as musicnet and vocalset). Specifically, it investigates the timbre of the saxophone within these latent-space representations to manipulate the electronics in real time, employing a methodology akin to concatenative synthesis.
Pure Data (Pd) serves as the live-coding platform, utilizing IRCAM’s nn~ object. Real-time visuals accompanying the performance are generated with TouchDesigner based on activations in the latent space, enhancing the audience’s multi-sensory experience. This interdisciplinary approach merges music performance, machine learning, and live coding, offering insights into the creative possibilities at the intersection of AI and improvisational music.
Galaxy Spectra Suites
Adrian Garcia Riber and Francisco Serradilla
Galaxy Spectra Suites is a stereo fixed-media collection of six short electronic suites, generated by converting real galaxy spectra into musical chords. The autonomous composition system used to create the piece generates an underlying musical structure based on Bach’s Cello Suites, producing a completely original, astronomical-data-driven soundtrack while offering a musical exploration of actual galaxy spectra from the CALIFA survey.
Tracotanza
Nicola Fumo Frattegiani
The work presents itself as an allegory of the concept of conflict, understood in its most intimate sense. A descent into the abyss of discord, where acoustic entities battle for supremacy. There is no redemption, only blind fury, untamed and unyielding, that moves toward the destruction of the other. What remains is pure silence.
The entire composition employs concrete samples of percussion, metal objects, and sine-wave frequencies. Signal-processing techniques include time-stretching and granular synthesis. The concrete samples processed with granular synthesis were broken down into tiny fragments through manual editing and then reassembled in a new temporal order through a kind of micro-editing.
AIded Sonification: Women in STEAM in Three Movements
Visda Goudarzi and Areti Andreopoulou
This piece presents a data-driven sonification of women’s representation across STEAM disciplines, using a publicly available dataset on gender distribution in U.S. college majors. Structured in three interconnected movements, the work explores diverse synthesis techniques and expressive mapping strategies.
The first section employs subtractive synthesis, with modulated bandpass filtering to shape harmonic content. Representation values are mapped to amplitude, spectral motion, and spatial positioning. The second movement utilizes granular synthesis of computer-generated female voices, where data controls grain density, duration, playback rate, and stereo spread—evoking layered presence and identity. The final section uses resonant comb filters to sculpt breathy harmonic textures, with representation values mapped to filter count, base frequency, Q factor, and duration—producing a delicate, spatially shifting sound cloud.
Developed in collaboration with ChatGPT-4.0, this piece explores AI as a creative coding companion—supporting the implementation of synthesis techniques in SuperCollider. The collaboration was shaped by the authors’ aesthetic and conceptual vision, including all decisions around data mapping, structure, and sonic design. In line with the conference theme “Let’s Play Together”, the project embraces interdisciplinary play between human and machine, using sound as a medium to reflect on data, gender, and authorship.
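The data-to-parameter mappings described above can be illustrated with a minimal sketch. This is a hypothetical Python example, not the authors’ SuperCollider code: the representation value and parameter ranges are invented for illustration, and only the idea of linearly mapping a representation ratio onto granular-synthesis parameters (grain density, duration, playback rate, stereo spread) follows the description in the program note.

```python
def map_representation(value, lo, hi):
    """Linearly map a 0-1 representation ratio into the range [lo, hi]."""
    return lo + value * (hi - lo)

# Hypothetical share of women in one major (0.0-1.0); the piece itself
# draws on a public dataset of gender distribution in U.S. college majors.
share = 0.35

# Illustrative parameter ranges for a granular-synthesis voice.
params = {
    "grain_density": map_representation(share, 5, 80),      # grains per second
    "grain_duration": map_representation(share, 0.02, 0.2),  # seconds
    "playback_rate": map_representation(share, 0.5, 2.0),    # speed multiplier
    "stereo_spread": map_representation(share, 0.0, 1.0),    # 0 = mono, 1 = wide
}
print(params)
```

A higher representation value thus simultaneously thickens the grain cloud and widens the stereo image, which is one plausible reading of the “layered presence” the second movement evokes.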
The Cycle of Life and Decay
Berk Yagli
The Cycle of Life and Decay is about the condition that binds all living things: everything is caught in a never-ending cycle of life, growth, and decay. Contrary to our everyday perception, nothing is permanent, and everything is bound to change. Life and death, suffering and tranquillity, and ever-changing states of consciousness are the only constants.
In Buddhist belief, Samsara is the endless cycle of life, death, and suffering. This piece draws a parallel between these two notions to create a sonic farewell to the ‘Şampiyon Melekler’ (two Cypriot high-school volleyball teams) who travelled to Turkey in February 2023 to compete in the finals and lost their lives when their hotel collapsed in the 7.8-magnitude earthquake. It is a wish that they have found liberation from this cycle, no longer experiencing suffering but only tranquillity.
Session #2 (Tuesday, 22h), at Salão Brazil
Tonspur: I’m telling you the truth
Se-Lien Chuang and Andreas Weixler
In an artistic analogy to musical expression, the rhythmic dynamics of the marble, its inertia-related behaviour, the articulation of its swinging rotation, and the transformation of the live electronics in moments of curious play create an environment of innovative exploration, combining the characteristics of marble and ceramic within time and space in a distinctively individual and immediately compositional way.
The performance is an occasion to pick up the marble again, linking the play of childhood reminiscence with a globally interconnected understanding of contemporary music in a playful and joyful manner.
The Marble Ceramic Scenic Score is a playful, joyful environment that spurs variety and virtuosity and stirs curiosity about contemporary music, using acoustic and electronic sounds through audiovisual real-time processing.
During the performance, the audience can learn and perceive how compositional concept and improvisational practice shape each other in mutual interaction.
Interstellar
L2Ork International Ensemble
“Interstellar” is the latest work co-created by the members of the L2Ork International Ensemble. Led by its founder and director, Dr. Ivica Ico Bukvic, the performance features live performers over 6,000 miles apart. Its 2024 premiere also integrated projection mapping co-developed by visual artist Thomas Tucker and Bukvic. The tight synchronization of the telematic electronic music, which blends EDM and ambient styles, is made possible by L2Ork Tweeter, a free and open-source software platform that is part of Pd-L2Ork, a variant of Pure Data, and is also designed to interface with the MadMapper software responsible for the premiere’s visual projection mapping. In this iteration, due to logistical constraints, the performance will omit the projection-mapping component.
“Interstellar” is commissioned by the Alexandria VA Office of the Arts. It is inspired by StudioKCA’s “Interstellar Influencer (Make an Impact)” installation on display in Alexandria’s Waterfront Park. Like the installation, this piece tells the story of an asteroid whose impact shaped Chesapeake Bay over 35 million years ago.
L2Ork Tweeter International Ensemble members who co-created this iteration of the work are (listed in alphabetical order): Preston Arnn (Virginia), Ivica Ico Bukvic (Director, Virginia), Uma Futoransky (Buenos Aires, Argentina), Val Gigena (Argentina), Gala Gonzalez Barrios (Virginia), Justin Kerobo (Virginia), Joaquín Montecino (Buenos Aires, Argentina), William Rhodes (North Carolina), and Jacob Alan Smith (North Carolina).
The application of recombinant DNA theory for sonification and music composition
Mark Temple
In this report, sonification algorithms originally designed for DNA sequence analysis were used for music generation in a creative context. The creative, compositional aspect was achieved by editing digital DNA sequences with the aim of creating particular audio patterns and a musical narrative.
The audio is not generative but systematic: the same DNA sequence always gives rise to the same note mapping. This allows one to edit a DNA sequence and listen to the sonified outcome, then re-edit and re-listen until the desired musical patterns are achieved. Constructing recombinant or otherwise designed DNA sequences digitally is common practice in biotechnology; this work reports the creation of novel DNA sequences for the sole purpose of musical composition and science outreach.
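The deterministic edit-and-listen loop described above can be sketched in a few lines. This is a hypothetical illustration in Python: the base-to-note table is invented, not the mapping used by the author’s DNA sonification tools; only the property that identical sequences always yield identical notes reflects the text.

```python
# Hypothetical base-to-MIDI-note table; the actual mapping belongs to the
# author's DNA sonification algorithms and is not reproduced here.
BASE_TO_NOTE = {"A": 60, "C": 62, "G": 64, "T": 67}

def sonify(sequence):
    """Deterministically map a DNA sequence to a list of MIDI note numbers.

    Because the mapping is systematic rather than generative, the same
    sequence always produces the same notes, so a composer can edit the
    sequence, re-listen, and iterate until the desired pattern emerges.
    """
    return [BASE_TO_NOTE[base] for base in sequence.upper()]

print(sonify("GATTACA"))  # the same input yields the same notes every time
```

Editing a single base then changes exactly one note, which is what makes the compose-by-editing workflow predictable.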
Frames for Nothing: Devices for Revelation in the Phase Space of the Marvelous
Jorge Boehringer
This brief proposal summarizes the artistic concept and context, as well as performance procedures and technical apparatus, for an approximately 9-minute solo performance piece whose playfully unwieldy title is given above. This piece builds upon a series of recent sound artworks by the author that intersect with data sonification practices. These works draw upon the inherent criticality of artistic practice to demonstrate how structures phenomenologically encountered within sonification reveal as much about their environmental and systemic context as they do about the data sonified.
The present work draws upon a dataset constructed from pseudo-random values. The piece slowly unfolds from a process of gradual rescaling and reframing, applying common sonification techniques to this (random) data. Structure is clearly perceivable in the resulting sound-world, but where does it come from and what is structuring what?
Session #3 (Wednesday, 22h), at Salão Brazil
The noise of Peace!
Biagio Francia
“The noise of peace” is a music composition for augmented instruments, mixing traditional musical instruments (drums), synths, and live electronics. It is based on a free improvisation solo in which creativity operates without boundaries, freely and independently; this form of musical practice also opens the possibility of interacting live with audiences, who become an active part of the performance.
The musical improvisation incorporates audio samples of speech (excerpts from the final speech of Charlie Chaplin’s The Great Dictator). Thanks to its flexible structure, the composition follows a non-linear music performance structure: it is dynamic music, in the sense that the music can change and adapt in real time to data and moods. The author’s view is that in a live-electronics performance based on free improvisation, listeners have an unfiltered experience and can better perceive sounds as colours that change during the performance.
Bacteriophage in Granular Waves
Stephen Roddy
Bacteriophage in Granular Waves is a musical work and performance that adopts data-driven composition methods, generative systems, and sonification techniques to produce music from a synthetic virology dataset. It has emerged from a larger collaboration exploring creative and artistic strategies for the visualization and sonification of virology data.
LANGUE ÉTRANGÈRE
Héloïse Garry
LANGUE ÉTRANGÈRE reflects on the process of learning and inhabiting a foreign language, where meaning fluctuates between comprehension and abstraction. Constructed entirely from processed recordings of the composer’s own voice, the piece unfolds as a multi-layered sonic landscape interweaving spoken fragments in French, English, Japanese, and Chinese.
The composition is structured in four distinct movements, each portraying a different stage of linguistic and emotional adaptation:
- I. Situation (Situation): The opening movement juxtaposes video installation, live voice performance, and recorded dialogues in Japanese and Chinese, drawn from standardized language learning materials. Everyday phrases — exchanging greetings, asking about nationality, looking at family photos — are recited in a structured, almost mechanical manner. These vignettes highlight the tension between rote linguistic repetition and the deeper human desire for authentic connection.
- II. Paroles Indiscrètes (Indiscreet Words): Language begins to dissolve as spoken words transform into rhythmic chants and vocal loops. A stark black screen replaces the video projection, directing focus toward the physicality of communication. In the background, the performer reflects on the very act of articulating vowels, highlighting the mechanics of speech as sound rather than meaning.
- III. Romances sans Paroles (Romances Without Words): This movement foregrounds phonetic universality by deconstructing speech into its fundamental elements. Against an evolving sonic texture, the performer engages in a rhythmic recitation of Japanese Hiragana characters containing the vowel sound “a” (あ, か, が, さ, ざ, etc.). This phonemic progression highlights the musicality of language, revealing how meaning can exist outside of semantics, purely in rhythm, articulation, and repetition.
- IV. Habiter la Langue (Inhabiting Language): The final movement weaves together the multiple languages internalized — French, English, Japanese, and Chinese — into a dynamic sonic tapestry. Overlapping vocal recordings blend miscommunication with moments of fluency, transforming linguistic struggle into poetic resonance. We can hear a live recitation in French, accompanied by pre-recorded voices in synchronized translations. The layers of sound create a space where languages coalesce and meaning transcends linguistic boundaries.
At the core of this work is a reflection on what it means to dwell within language — to navigate its barriers, its estrangements, and ultimately, its transformative beauty. The following text, translated across languages and embedded within the piece, encapsulates this experience:
“Feeling like a foreigner in a foreign country is a complex and bittersweet experience. It’s as if every sound, every sign, every social code is slightly out of reach, like listening to an unfamiliar melody and desperately trying to hum along.”
ECHOES OF PRESENCE: Sonifying Collective Spatial Behaviour in Workplace Environments
Misaki Yamao
ECHOES OF PRESENCE is a sonification system that translates human position detection data—recorded one hour earlier—into time signal melodies composed to enhance workplace well-being. Each melody functions like an echo, transmitting traces of recent human activity into the near future and allowing listeners to perceive the subtle imprint of past occupancy.
By listening to the sonification of the office floor, individuals can recognise patterns of density, distribution, and dynamics, which may prompt them to consider how to spend the next hour to improve comfort and productivity. The system is designed with an emphasis on intuitive data-to-sound mapping and clearly defined sonification rules that a general audience can easily understand.