Shanghai Conservatory of Music Team Wins Platinum Prize in AI-Generated Art for “Intelligent Mirror” at Future Art & Design Award UK 2025 Autumn Season

A team from the Shanghai Conservatory of Music — Huixin Xue, Shihong Ren, and Xinyu Bai — has been awarded the Platinum Prize in the AI-Generated Art category at the Future Art & Design Award UK 2025 Autumn Season for their groundbreaking project “Intelligent Mirror: An AI Interactive Work of Guqin and Xiao Duet Therapeutic Music with Real-time EEG Feedback.”

The jury praised it as “a visionary work blending traditional artistry, advanced technology, and therapeutic innovation.”


By combining real-time EEG (electroencephalogram) feedback with traditional Chinese instruments Guqin and Xiao, Intelligent Mirror creates a living dialogue between music and mind. The listener’s brainwaves act not as passive input, but as an active compositional force — shaping rhythm, tone, and emotional color in real time. The result is a deeply personal and therapeutic musical experience, where human thought and AI co-create sound.


Interview

Q: What initially inspired this project? Was there a particular idea, moment, or question that sparked its creation?

Team: The technique of using EEG signals for music creation was in fact explored and developed decades ago. However, efforts to incorporate brainwave signals as an active, integral part of the performance and listening feedback loop remain relatively rare even today.


What truly sets our work apart is the novel approach of treating the listener’s brain activity as an autonomous agent, capable of directly and dynamically influencing the parameters of music generation. In this way, we are able to create a unique, highly interactive listening experience that is customized for each individual in real time.


Huixin Xue / Shihong Ren / Xinyu Bai

Q: What was the most exciting or most challenging aspect of bringing this work to life?

Team: From a technical perspective, both brainwaves and sound waves are multichannel signals, and as such, many of the same signal processing techniques and algorithms can be applied to their analysis. However, a major distinction lies in the frequency domains these signals occupy. Brainwaves typically fluctuate at much lower frequencies than those audible to the human ear, making direct conversion impractical.
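The frequency gap the team describes can be made concrete with a small sketch. The snippet below estimates the power of a synthetic "EEG" signal in two standard frequency bands using a plain FFT; the sampling rate, band edges, and signal itself are illustrative assumptions, not the team's data or pipeline.

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Mean spectral power of a 1-D signal in the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= lo) & (freqs <= hi)
    return spectrum[mask].sum() / mask.sum()

# Synthetic "EEG": a 10 Hz (alpha-band) oscillation plus noise, sampled at
# 256 Hz -- orders of magnitude below the audible range, which is why the
# team notes that direct EEG-to-audio conversion is impractical.
fs = 256
t = np.arange(0, 4, 1.0 / fs)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(t.size)

alpha = band_power(eeg, fs, 8, 13)   # dominated by the 10 Hz oscillation
beta = band_power(eeg, fs, 13, 30)   # only noise falls in this band
```

Band powers like these, rather than the raw waveform, are the natural quantities to feed forward into music generation.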


One of the main challenges of our work lies in designing a meaningful and scientifically sound method for encoding EEG data into music, ensuring that the entire system maintains robustness and reliability throughout live performance. To meet this challenge, we decided to let EEG-derived parameters modulate various aspects of the music in real time — such as influencing rhythm, pitch, and loudness.
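A parameter mapping of this kind can be sketched as follows. The function, band names, and scaling constants here are hypothetical choices for illustration only, not the team's actual encoding.

```python
def eeg_to_music_params(alpha, beta, theta):
    """Map relative EEG band powers to coarse musical controls.

    Illustrative convention: higher relative alpha power (associated with
    relaxation) slows the tempo and softens dynamics; higher relative beta
    power (associated with arousal) speeds the tempo and raises the register.
    """
    total = alpha + beta + theta
    a, b = alpha / total, beta / total
    tempo_bpm = 60 + 80 * b              # 60-140 BPM as arousal rises
    pitch_offset = round(12 * (b - a))   # shift register by up to an octave
    loudness = round(0.3 + 0.6 * (1 - a), 2)  # relaxed listener -> quieter mix
    return {"tempo_bpm": tempo_bpm,
            "pitch_offset": pitch_offset,
            "loudness": loudness}

# A listener with dominant alpha power gets slower, lower, quieter music.
params = eeg_to_music_params(alpha=2.0, beta=1.0, theta=1.0)
```

In a live setting these parameters would be recomputed continuously over a sliding window and smoothed, so the music drifts with the listener's state rather than jumping at every sample.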


At an even higher level, the listener’s overall brain state can affect the emotional tone of the music and certain details of the audio mixing process, allowing the music to respond much more subtly to the listener’s changing mental or emotional state.


In order to achieve the best possible outcome, close collaboration with composers and performers is essential, as they must be able to respond in real time to shifts in the generated musical parameters. This project is not only a technical breakthrough, but also a bold and forward-thinking experiment in reimagining the boundaries of traditional music creation and performance.


