Blog

  • Listening to the Mountains: Reflections from the PEMS Study at Euronoise 2025

    When we think of mountains, we picture towering peaks, sweeping valleys, and dramatic skies. Yet, at Forum Acusticum / Euronoise 2025: 11th Convention of the European Acoustics Association in Málaga, we explored something less visible but equally powerful: the soundscapes that define these environments. Our paper, PEMS: People’s Experience of Mountain Soundscapes, presented findings from a global survey of mountaineers and hikers, revealing how sound shapes safety, navigation, and emotional connection in the high places we love.

    Why Soundscapes Matter in the Mountains

    Mountains are dynamic acoustic environments. Wind whistling through ridges, water cascading down slopes, birds calling across valleys—these sounds are not just aesthetic; they are functional. In low-visibility conditions, auditory cues often become lifelines. Creaking ice can warn of instability, a distant rumble may signal rockfall, and the muffled “wumph” of snow can indicate avalanche risk. These natural signals complement visual information, helping mountaineers make critical decisions.

    But soundscapes are more than survival tools. They shape our emotional experience. Participants in our study described feelings of peace, awe, and excitement triggered by natural sounds. Silence itself—often rare in our urbanized lives—was seen as a profound marker of remoteness and fragility.

    The Study at a Glance

    Our research involved 219 participants from 27 countries, ranging from casual hillwalkers to seasoned mountaineers. The median age was 51, and most reported no hearing loss (88%). Interestingly, 17% were audio professionals, adding a unique perspective on acoustic awareness.

    Activities varied widely: hiking and hillwalking dominated, but responses also came from climbers, photographers, and even professional bird surveyors. This diversity enriched the dataset, revealing how soundscapes influence both technical and recreational engagement with mountains.


    How Participants Rated Mountain Soundscapes

    On a scale from 1 (unpleasant) to 5 (pleasant), mountain soundscapes scored a median of 4. Natural sounds—birdsong, wind, running water—were consistently praised for their calming and immersive qualities. These elements fostered a sense of connection to nature and offered psychological restoration.

    Conversely, human-made noise was the villain of the story. Traffic, aircraft, and overcrowding were repeatedly cited as disruptive, masking natural cues and eroding the sense of wilderness. Biodiversity loss was also mentioned as a factor diminishing acoustic richness.

    What We Hear Up There

    The most frequently reported sounds were wind, birds, and water, each with a median frequency of 4 on a 1–5 scale. These were also among the highest-rated, with birds and water achieving a median rating of 5. Silence and wildlife sounds followed closely, reinforcing their value in creating tranquil, restorative experiences.

    On the other end of the spectrum, traffic noise and rockfall were least frequent and least appreciated. While rockfall is a natural phenomenon, its association with danger explains its lower rating (median 3). Traffic noise, unsurprisingly, scored just 1—an unwelcome reminder of human intrusion.

    Soundscapes as Navigation Tools

    One of the most striking findings was the role of sound in navigation: 169 participants said they used auditory cues to orient themselves. Examples included following the sound of rivers during foggy conditions or using wind direction to estimate proximity to ridges. In some cases, anthropogenic sounds—voices, distant traffic—helped locate groups or roads when visibility was poor.

    Real-life anecdotes brought this to life. One participant recalled a misty fell race in the Duddon Valley, where the sound of a river guided them to a checkpoint. Another described navigating thick fog by listening for flowing water, confirming their position when visual cues failed.

    Soundscapes and Safety

    Safety was another domain where sound proved indispensable: 189 participants reported using auditory information for risk assessment. Wind intensity often signaled exposure or approaching storms. Creaking snow and groaning ice warned of instability, while the distinctive “wumph” indicated potential avalanche conditions. Rockfalls and rushing streams also served as hazard indicators, influencing route choices.

    Group communication emerged as a critical safety factor. Hearing teammates’ voices in poor visibility or during emergencies reinforced collective awareness and coordination.

    Understanding the Environment

    Beyond navigation and safety, soundscapes deepen environmental understanding: 193 participants said sound helped them interpret their surroundings. Water sounds revealed terrain features, while wildlife calls highlighted biodiversity. Silence itself conveyed remoteness and fragility, amplifying the sense of solitude and connection to nature.

    Challenges and Disruptions

    While natural sounds were celebrated, anthropogenic noise was a recurring frustration. Traffic, aircraft, and drone activity were seen as intrusive, masking vital cues and diminishing the immersive experience. Overcrowding compounded the issue, introducing chatter and mechanical noise into spaces once defined by tranquillity.

    Looking Ahead: Technology and Conservation

    Our findings underscore the need to preserve natural soundscapes—not just for ecological integrity but for human experience and safety. Future research should explore:

    • Inclusive design for individuals with hearing impairments.
    • Longitudinal studies on climate change and biodiversity loss impacts on acoustic environments.
    • Technological integration, such as AI and AR tools that amplify natural cues for navigation and hazard detection.
    • Public education initiatives to raise awareness about noise pollution in mountain regions.

    Imagine wearable devices that isolate critical sounds—like creaking ice or distant water—while filtering out disruptive noise. Or interactive soundscape maps that help hikers anticipate acoustic conditions along their route. These innovations could transform how we engage with mountains, blending tradition with technology.

    Final Thoughts

    Presenting this work at Euronoise 2025 was a reminder that mountains speak—and we need to listen. Soundscapes are not passive backdrops; they are active, dynamic systems that inform, protect, and inspire. As human activity expands into remote areas, safeguarding these acoustic environments becomes as urgent as preserving the visual landscapes we so admire.

    The next time you venture into the hills, pause and tune in. The wind, the water, the silence—they’re telling you a story. And if our research has shown anything, it’s that listening can make the difference between awe and danger, serenity and stress.


    References

    Di Donato, B., & McGregor, I. (2025, June 23–26). PEMS: People’s Experience of Mountain Soundscapes. Forum Acusticum / Euronoise 2025: 11th Convention of the European Acoustics Association, Málaga, Spain. https://euracoustics.org/conferences/forum-acusticum/


    Author – Dr Balandino Di Donato

  • Reimagining Sound in Live Theatre II

    Part 2: Engineering Actor-Controlled Sound Effects with IoS Devices

    This post builds on insights from Part 1 of our series on interactive theatre sound design. If you haven’t read it yet, check out Part 1: Collaborative Sound Design in Theatre.


    Rethinking Sound Control in Theatre

    Traditional theatre sound design relies heavily on off-stage operators using software like QLab to trigger pre-recorded cues. While reliable, this model limits spontaneity and performer agency. This research investigates a shift: giving actors direct control over sound effects using networked Internet of Sound (IoS) devices embedded in props or costumes.


    What Is the Internet of Sound?

    The Internet of Sound (IoS) is a subdomain of the Internet of Things (IoT), focused on transmitting and manipulating sound-related data over wireless networks. It includes:

    • IoMusT (Internet of Musical Things): Smart instruments with embedded electronics.
    • IoAuT (Internet of Audio Things): Distributed audio systems for production, reception, and analysis.

    This project leans toward the IoMusT domain, emphasizing performer interaction with sound-generating devices.


    Technical Architecture

    The workshop deployed 9 IoS devices built with Arduino MKR 1010 microcontrollers, chosen for their built-in Wi-Fi and affordability. Each device communicated via Open Sound Control (OSC) over UDP, sending sensor data to Pure Data patches running on local laptops.

    Sensors Used:

    • Accelerometers – for dynamic control (e.g., storm intensity)
    • Force Sensitive Resistors (FSRs) – for pressure-based triggers
    • Circular and Rotary Potentiometers – for pitch and volume control
    • Photoresistors – for light-triggered samples
    • Buttons – for simple cue activation

    Each performance space had its own router, enabling modular and fault-tolerant deployment.
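
    To make this architecture concrete, here is a minimal sketch of what the firmware on one of these devices might look like. It is an illustration rather than the workshop’s actual code: it assumes the WiFiNINA and CNMAT OSC Arduino libraries, and the network credentials, laptop IP address, port, and OSC address are all hypothetical placeholders.

    ```cpp
    // Minimal, illustrative firmware for one IoS device (assumed libraries:
    // WiFiNINA and the CNMAT OSC library; network details are placeholders).
    #include <WiFiNINA.h>
    #include <WiFiUdp.h>
    #include <OSCMessage.h>

    const char ssid[] = "theatre-router";        // hypothetical per-space router
    const char pass[] = "********";
    const IPAddress laptopIP(192, 168, 1, 10);   // hypothetical Pure Data laptop
    const unsigned int laptopPort = 9000;        // hypothetical listening port

    WiFiUDP Udp;

    void setup() {
      // Join the local network before sending anything.
      while (WiFi.begin(ssid, pass) != WL_CONNECTED) {
        delay(500);
      }
      Udp.begin(8888);                           // arbitrary local port
    }

    void loop() {
      int raw = analogRead(A0);                  // e.g. an FSR on analogue pin A0

      OSCMessage msg("/prop/fsr");               // hypothetical OSC address
      msg.add(raw / 1023.0f);                    // normalise the 10-bit reading to 0..1

      Udp.beginPacket(laptopIP, laptopPort);
      msg.send(Udp);                             // serialise the message into the packet
      Udp.endPacket();
      msg.empty();                               // free the message for reuse

      delay(20);                                 // roughly 50 updates per second
    }
    ```

    On the receiving side, a Pure Data patch listening on the same port would unpack the incoming OSC message (for example with Pd’s [netreceive -u -b] into [oscparse], or the mrpeach OSC externals) and route the /prop/fsr value to whichever synthesis parameter the sound designer chooses.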

    Hardware setup in the main theatre space.

    Hardware setup in the rehearsal space.

    Interaction Design

    Participants interacted with both pre-recorded samples and procedural audio models:

    Pre-recorded Samples:

    • Triggered via buttons, light sensors, or rotary knobs
    • Used for audience reactions, chorus sounds, and character cues

    Procedural Audio Models:

    • Spark – Triggered by button (gain envelope)
    • Squeaky Duck – Controlled by FSR (pitch modulation)
    • Theremin – Controlled by circular potentiometer (oscillator frequency)
    • Stormstick – Controlled by accelerometer (rain and thunder intensity)

    These models allowed for expressive, real-time manipulation of sound, enhancing immersion and authenticity.
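
    As a worked example of this kind of mapping, the sketch below shows one plausible way a Stormstick-style device could turn accelerometer readings into OSC messages for a rain gain and a thunder trigger rate. It is a sketch under stated assumptions rather than the workshop’s implementation: it assumes a three-axis analogue accelerometer wired to pins A1–A3, the same WiFiNINA and CNMAT OSC libraries as the earlier sketch, and hypothetical network details and OSC addresses.

    ```cpp
    // Illustrative 'Stormstick' mapping (assumptions: analogue three-axis
    // accelerometer on A1..A3, WiFiNINA + CNMAT OSC as before, hypothetical
    // network details and OSC addresses).
    #include <WiFiNINA.h>
    #include <WiFiUdp.h>
    #include <OSCMessage.h>
    #include <math.h>

    const IPAddress laptopIP(192, 168, 1, 10);   // hypothetical Pure Data laptop
    const unsigned int laptopPort = 9000;

    WiFiUDP Udp;
    float smoothed = 0.0f;                       // smoothed shake intensity

    void sendFloat(const char* address, float value) {
      OSCMessage msg(address);
      msg.add(value);
      Udp.beginPacket(laptopIP, laptopPort);
      msg.send(Udp);
      Udp.endPacket();
      msg.empty();
    }

    void setup() {
      while (WiFi.begin("theatre-router", "********") != WL_CONNECTED) {
        delay(500);
      }
      Udp.begin(8888);
    }

    void loop() {
      // Estimate how vigorously the stick is being shaken.
      float ax = analogRead(A1) - 512.0f;        // centre the 10-bit readings on zero
      float ay = analogRead(A2) - 512.0f;
      float az = analogRead(A3) - 512.0f;
      float magnitude = sqrtf(ax * ax + ay * ay + az * az) / 512.0f;

      // One-pole low-pass so the storm swells and fades rather than jittering.
      smoothed = 0.9f * smoothed + 0.1f * magnitude;

      // Gentle movement raises the rain gain; harder shaking also raises the
      // thunder trigger rate (events per second) in the Pure Data patch.
      float rainGain    = constrain(smoothed, 0.0f, 1.0f);
      float thunderRate = constrain((smoothed - 0.5f) * 4.0f, 0.0f, 2.0f);

      sendFloat("/stormstick/rain", rainGain);       // hypothetical OSC addresses
      sendFloat("/stormstick/thunder", thunderRate);

      delay(20);
    }
    ```

    The one-pole smoothing is a deliberate design choice in this sketch: it lets the storm swell and decay with the performer’s gesture rather than flickering with every jolt, which suits a continuous ambience better than a one-shot spot effect.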

    Circular potentiometer used to control a Theremin-type sound effect.

    An accelerometer within the ‘Stormstick’ controls the gain of a rain synthesis model and the trigger rate of a thunder one.

    Participant Feedback & Findings

    Benefits:

    • Enhanced Timing – Actor-triggered cues improved synchronisation
    • Creative Freedom – Enabled improvisation and dynamic adaptation
    • Authenticity – Increased believability and audience engagement
    • Actor Agency – Encouraged deeper integration into the production process

    Challenges:

    • Reliability – Wi-Fi dropouts and device failures were noted
    • Cognitive Load – Actors expressed concern over added responsibilities
    • Integration – Costume and prop design must accommodate sensors
    • Audience Distraction – Poorly integrated devices could break immersion

    Engineering Considerations

    To ensure successful deployment in live theatre:

    • Robust Wi-Fi – Site-specific testing and fallback systems (e.g., QLab) are essential (see the sketch after this list)
    • Thermal Management – Embedded devices must remain cool and accessible
    • Modular Design – Quick-release enclosures and reusable components improve sustainability
    • Cross-Department Collaboration – Early involvement of costume, prop, and production teams is critical
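
    On the robust Wi-Fi point, one simple safeguard is a connection watchdog in the device firmware that keeps retrying the network and signals its state, so the operator knows when to fall back to QLab. The sketch below is illustrative only, again assuming WiFiNINA on the MKR 1010 with hypothetical credentials.

    ```cpp
    // Illustrative Wi-Fi watchdog (assumptions: WiFiNINA on the MKR 1010,
    // hypothetical credentials; LED_BUILTIN is used as a simple status light).
    #include <WiFiNINA.h>

    const char ssid[] = "theatre-router";
    const char pass[] = "********";
    unsigned long lastAttempt = 0;

    void ensureWifi() {
      if (WiFi.status() == WL_CONNECTED) {
        digitalWrite(LED_BUILTIN, HIGH);         // steady light: link is up
        return;
      }
      digitalWrite(LED_BUILTIN, LOW);            // light off: operator can fall back to QLab
      if (millis() - lastAttempt > 5000) {       // retry at most every five seconds
        lastAttempt = millis();
        WiFi.begin(ssid, pass);                  // re-attempt the connection (can take a few seconds)
      }
    }

    void setup() {
      pinMode(LED_BUILTIN, OUTPUT);
      ensureWifi();
    }

    void loop() {
      ensureWifi();
      // ...read sensors and send OSC messages only while connected...
      delay(20);
    }
    ```

    A site visit to test coverage, plus a pre-programmed QLab session mirroring the key cues, covers the cases such a watchdog cannot.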

    Sound Design Strategy

    Sound designers must consider:

    • Spot vs. Atmosphere – One-off effects may suit samples; dynamic ambiences benefit from procedural audio
    • Sensor Mapping – Choose intuitive controls (e.g., FSR for pressure-based sounds)
    • Actor Suitability – Confident performers are better candidates for device control
    • Rehearsal Integration – Early adoption helps reduce cognitive load and improve fluency

    Future Directions

    The next phase involves deploying IoS devices in a live pantomime performance in December 2025. Beyond this, distributed performances across locations (e.g., London and New York) could leverage IoS for synchronised, remote interaction.

    Exploration of alternative microcontrollers (e.g., Teensy) and operating systems (e.g., Elk Audio OS) may improve scalability and reliability.


    Conclusion

    Actor-controlled IoS devices represent a promising evolution in theatre sound design—merging technical innovation with artistic expression. While challenges remain, the potential for more immersive, responsive, and collaborative performances is clear.

  • Reimagining Sound in Live Theatre

    Part 1: Collaborative Sound Design in Theatre – A Workshop Approach

    In an age where immersive experiences are reshaping the boundaries of performance, sound design in theatre is undergoing a quiet revolution. A recent workshop held at The Dibble Tree Theatre in Carnoustie explored this transformation, bringing together actors, sound designers, and experimental technologies to co-create a new kind of theatrical soundscape.

    The Dame and the Baron, ready to collaborate with our sound designers!

    Why Sound Design Needs a Shake-Up

    Despite its central role in storytelling, sound design in theatre has lagged behind lighting and projection in terms of innovation. Traditional tools like QLab remain industry staples, but they often limit sound to pre-programmed cues triggered by operators. This workshop challenged that model by asking: What if actors could control their own sound effects live on stage?


    Collaboration at the Core

    The workshop was designed as a playful, hands-on experience. Participants—ranging from amateur theatre enthusiasts to experienced backstage crew—worked in small groups to rehearse and perform short pantomime scenes. They used Foley props (slide whistles, rain sticks, thunder tubes), pre-recorded samples, and procedural audio models to sketch out their sound designs.

    Importantly, actors and sound designers collaborated from the outset, rehearsing together and experimenting with timing, mood, and interaction. This flattened hierarchy fostered creativity and mutual learning.

    Long John Silver performing his actions along with a sound designer on a slide whistle

    Enter the Internet of Sounds

    A standout feature of the workshop was the use of networked sound devices—custom-built tools powered by Arduino MKR 1010 boards and Pure Data software. These devices allowed actors to trigger sounds via sensors embedded in props or wearable tech. For example:

    • A motion sensor in a prop triggered audience reactions.
    • A rotary knob controlled volume and playback of samples.
    • An accelerometer and force-sensitive resistor enabled real-time manipulation of procedural audio.

    These embodied interfaces blurred the line between performer and sound operator, creating a more organic and responsive soundscape.

    Sound designer studying the script with the Internet of Sound devices beside him.
    Sound designer performing the sounds on the Internet of Sound devices, with the script in her other hand, watching the stage to get her timing right.

    What Participants Learned

    Feedback was overwhelmingly positive. Participants reported:

    • Greater appreciation for the complexity of sound design.
    • Enjoyment of the collaborative and playful structure.
    • Insights into how sound design principles transfer to other media like film and radio.

    Challenges included cognitive load—especially for actors managing props, cues, and performance simultaneously—and occasional technical glitches with Wi-Fi connectivity.


    Key Takeaways

    • Actor-led sound triggering offers better timing and authenticity.
    • Early integration of sound design into rehearsals is crucial.
    • Embodied interaction (e.g., using props or wearables) enhances engagement.
    • Collaboration between departments—sound, props, costumes—is essential for success.

    Final Thought

    This workshop offered a fresh perspective on how sound can be more deeply integrated into live theatre. By inviting collaboration between actors and sound designers and experimenting with interactive technologies, it opened up new possibilities for creative expression. While challenges like reliability and cognitive load remain, the enthusiasm and insights from participants suggest that actor-led sound design is a promising direction worth exploring further.


    In Part 2, we explore the technical implementation of actor-controlled sound effects using Internet of Sound (IoS) devices. Stay tuned for a deeper dive into the engineering behind the performance.

  • Reflections on Graduation and a New Academic Year

    BSc graduates Oran Talbot, Mitchell MacPherson, Andrew Clelland, and Aedan Wilson after receiving their awards.

    This week marks Freshers’ Week at Edinburgh Napier University—the beginning of a new academic year and a time when I welcome new MSc Sound Design students. For the first time, I’ll be greeting both on-campus students and those joining us from around the world. Alongside in-person attendance, we’re introducing new modules designed to challenge and inspire.

    But before diving into Trimester 1, I’d like to take a moment to reflect on a key event from the previous trimester: graduation.


    Celebrating Success at the Usher Hall

    In July, we held our summer graduation ceremony at the iconic Usher Hall in Edinburgh. Six students from the MSc Sound Design programme were awarded their degrees. While three couldn’t attend in person, the other three travelled from England, Italy, and the USA to celebrate their achievements.

    MSc graduate Federico Aramini with his MSc award

    Their dissertations covered a fascinating range of topics, including:

    • Infrasound in horror films
    • Sound design in smart homes
    • Game audio adapted for age-related hearing loss
    • Authenticity of AI in podcasts
    • Techniques to improve dialogue intelligibility

    As always, supervising these projects was a learning experience for me too.


    My First Time on the Graduation Stage

    It was a glorious summer’s day. After locating the staff entrance, donning a gown, and blagging a hat, I joined the academic procession into the hall. This was my first time participating in such a ceremony—my last ENU graduation was when I received my honours degree in electronics, more years ago than I care to admit!

    I found myself seated at the edge of the front row, with the Chancellor’s procession front and centre. The speeches were heartfelt, praising students for their hard work and thanking their loved ones for their support. It was a resonant reminder of the sacrifices made by those closest to our students.


    A Moment of Pride

    As the awards were handed out, I spotted several MSc and BSc Sound Design students I had supervised. I was the only lecturer from the Sound Design team present, so I clapped especially enthusiastically when our students crossed the stage—doing my best to make some noise from behind the Chancellor!

    One student even gave me a big thumbs-up as they walked across the stage—a lovely moment of levity and pride.

    It’s easy to forget how transformative the journey through postgraduate study can be. Many of these students began with uncertainty, juggling work, family, and study. Seeing them walk across the stage, confident and accomplished, was a powerful indication of why we do what we do. Their success is not just academic—it’s personal, creative, and deeply human.


    Sunshine, Smiles, and Goodbyes

    MSc graduate Amanda Rainey travelled all the way from Nashville, USA, to receive her MSc award.

    After the ceremony, we stepped out into the sunshine to meet students and their families. Hands were shaken, photos were taken, and robes were returned. There was laughter, hugs, and a few emotional moments as students said goodbye to classmates and staff.

    Graduation is always bittersweet. While it marks the end of one chapter, it also signals the beginning of new adventures. I’m excited to see where our graduates go next—and equally excited to welcome the next cohort of students ready to begin their own journey.


    Looking Ahead

    This year, the MSc Sound Design programme continues to evolve. We’ve introduced three new modules: Advanced AI for Audio and Sound Design, Introduction to Audio Programming, and Soundscapes. These additions reflect the changing landscape of sound design and aim to give students fresh opportunities to explore emerging technologies and creative practices.

    It’s an exciting time to be teaching and learning in this field. The boundaries of sound design are expanding rapidly, and our students are right at the edge of that frontier. Whether they’re interested in immersive audio, interactive media, or sonic arts, the programme now offers even more pathways to explore.

  • Designing Funny Sounds: A Practical Framework for Comic Timing, Texture, and Payoff

    Why are some sounds instantly funny, while others just miss the mark? Sound has enormous power in comedy, but it is rarely discussed in its own right. It is not just about squeaks and splats. Funny sounds are about timing, tension, layering, and audience permission. Done well, they can elevate a moment into something memorable. Done poorly, they can kill the joke.

    This article lays out a practical, principle-based approach to designing funny sounds, from animation and film to games, performance, and beyond. Whether you are a sound designer, editor, creative director, or someone who just enjoys thinking about what makes people laugh, this framework is for you.

    1. Funny sounds need a comedic foil

    No funny sound works in isolation. It needs a setup, something believable, serious, or steady to push against. This is the role of the comedic foil. A squeaky shoe is funny in a formal hallway. A wet splat is funny against silence. Without the foil, the comic moment has nothing to rupture.

    2. Mistiming is everything, even when it misfires

    Comedy lives in surprise. Funny sounds often arrive just too soon, or just too late. But sometimes what makes a moment land is that it does not land — the creak that never resolves, the fanfare that fails, the punchline that falls flat. These “failures” become part of the rhythm. They create nervous, awkward, or self-aware laughs. The mistimed moment says something went wrong, and that is the joke.

    3. Sequence, overlap, and layering

    No sound exists on its own. Comic moments are shaped by how sounds are arranged, layered, and spaced. A groan over a thud followed by a squeak can create a mini gag in sound alone. A quiet, odd noise buried under other activity can reward the attentive listener with a sonic in-joke. The best comic sound design pays attention to sequence and interplay, not just isolated gags.

    4. Do not leave me hanging

    Funny sound needs to feel either deliberately cut short or uncomfortably extended. A thud that ends abruptly, or a groan that goes on too long, creates a kind of tension that becomes funny through its refusal to resolve. This incomplete rhythm mirrors the social unease of awkward pauses or missteps, and gives the audience something to laugh through.

    5. Mismatch the dynamics

    Big moments with tiny sounds, or small moments with huge sounds, are comic staples. A whisper with the impact of a cannon. A major fall with the sound of a teacup breaking. This mismatch between visual scale and auditory response undermines realism and forces laughter. It is the wrong sound, delivered with full confidence.

    6. Escalate repetition

    The same sound is only funny if it builds. Repetition works when each instance raises the stakes, longer, louder, brighter, more absurd. The laugh comes from tension and excess, not just from hearing the sound again. Without escalation, repetition flattens. With it, the sound becomes a rising joke that demands release.

    7. Use texture to tip it over the edge

    Detail matters. A splat is funnier when it has brightness or a hint of filth. A creak becomes unbearable when it has upper harmonics. The texture pushes the sound closer to bodily, embarrassing, or disgust-inducing territory, just enough to provoke a reaction without crossing into revulsion. Funny sounds often sit right on this line.

    8. Let empathy in, just a little

    Comic sound often represents failure, pain, or humiliation. The audience laughs, but a part of them knows they probably should not. That flicker of empathy, just enough to feel the fall, not enough to stop the laugh, creates comic tension. The sound designer can dial that balance in texture, tone, and pacing.

    9. Place it in the world

    The reaction of characters matters. Does anyone hear it? Is someone embarrassed? Oblivious? Does the sound exist only for the audience? These choices shape the comedy. A sound acknowledged in-world lands differently than one the characters ignore. Comic sound must be worldised, not just audible, but meaningful in the story space.

    10. Leave space for the laugh

    If you do not leave room, the audience cannot laugh. Comic sound design must make space: a beat of silence, a held shot, a moment of stillness after the noise. Without this pause, even the best gag can disappear. The laugh often lives in what follows the sound, not the sound itself.

    11. Use afterthoughts for the final flick

    Sometimes the funniest sound is not the main event, but the tag at the end. A glob of pie hitting the floor. A delayed squeak after a pratfall. A faint ding as something small falls offscreen. These afterthoughts punctuate the moment, not by escalating, but by extending the rhythm in a new, often absurd direction.

    12. Preview the collapse

    Anticipation makes comedy stronger. Let the sound world warn the audience: creaks, rattles, drips, and growing instability. These are previews, and they invite the audience to imagine what is about to go wrong. The laugh builds before the gag lands.

    13. Let the audience in on the joke

    Not every funny sound needs to be big or shared. Sometimes the best comic sound is quiet, tucked into the mix, and heard only by those paying attention. These subtle, almost secret gags invite the listener into a private joke. They create a sense of complicity, as if the sound designer is winking directly at the audience.

    14. Stylise to make the pain safe

    When a comic moment pushes too far, when a fall looks painful or a slap sounds violent, stylised sound can signal that it is okay to laugh. Cartoonish exaggeration acts as an emotional buffer: a boing, a sproing, a rubbery wobble. These reassure the audience that no one is truly hurt. Stylisation protects the joke by softening its impact.

    These principles apply across comic styles, from deadpan realism to farce, but how far each is pushed depends on the tone of the piece.

    Comic sound is not about chaos or randomness. It is about control, of timing, contrast, rhythm, and texture. It is about how a squeak becomes a laugh because of when it lands, what surrounds it, and who reacts. Whether bold or barely noticeable, funny sound lives in friction. And the more carefully it is designed, the more effortlessly it lands.

    Have you used sounds to land a joke, or save one? We would love to hear how others approach funny sound, especially in screen media, games, or performance.

  • Dr John McGowan – Customisable Approaches to Accessible Technologies

    I am Dr John McGowan, a lecturer at Edinburgh Napier University, exploring how customizable approaches to accessible technologies can be useful for neurodiverse adults. 

    Dr John McGowan

    My interest in research stems from a passion for creativity, primarily in music, where non-linear communication led me to explore how multimodality could support expression in neurodiverse adults. I recognise the ways in which musical expression can reflect unspoken emotions and feelings, and how valuable personal expression can be as a transformative part of human development. This was the basis for my PhD research, in which musical exploration was combined with visual modalities in an interactive application that allowed autistic adults with differing capabilities to express themselves through play in music therapy sessions.

    Prior to this, my master’s degree focused on the 3D visualization of sound, using cymatics as a basis. Cymatics are the impressions that sound leaves in media such as water, or in salt on a Chladni plate. The master’s project investigated what sound might look like travelling through air, rendered as sonic bubbles. This concept was further developed during my PhD, where a real-time application visualized sound through a projector using input from a microphone or a MIDI keyboard. Depending on the volume, pitch, and tone of the note triggered, a specific 3D cymatic shape would be visualized in real time. Importantly, users could customize the colours, allowing them to personalize the experience. In addition, a custom interactive table was designed and built to accommodate any skill level: it could be played like an audio-visual instrument, giving immediate audio-visual feedback from tactile input.

    My own continued use of technology, along with contemporary research, has demonstrated both the occasional over-adoption of new tools and the potential of using and augmenting existing technologies. Research in this area also supports this notion, especially for autistic adults, who often rely on familiar tools that already perform reliably for their needs. What we can do, as responsible researchers, is look at ways of exploiting the existing tools and components within mobile technology to develop augmented multimodal stimuli as self-management tools. This may also allow greater accessibility for those who are economically disadvantaged, as well as those who prefer familiar tools and technologies.

    Currently, I am leading a project investigating the potential for augmented reality to be used in stress management for autistic adults. Using real-time biometric monitoring (for example, a smartwatch to detect heart rate, or the microphone on a mobile device to measure breathing), the proposed application will react and offer the user customized sensory stimuli as positive distractors, or as real-time assistance in times of stress. Two phases of this study have already been completed. The first was a survey of over 200 participants, both autistic adults and caregivers, who provided feedback on stress triggers and issues related to hypersensitivity and hyposensitivity. The second comprised interviews with over 20 participants and investigated these issues in more detail, covering the needs of autistic adults and the desire and potential for new tools that could help them manage stressful situations in their day-to-day lives. Some of the key themes we aim to develop focus on familiarity, positive distraction, and alerting individuals to changes in their physical state that may go unnoticed due to sensory issues.

  • Theatre Sound Design Workshop

    Date: Saturday 3rd May 2025
    Location: Dibble Tree Theatre, 99A High St, Carnoustie DD7 7EA
    Morning workshop: 09:30 – 12:30
    Afternoon workshop: 13:30 – 16:30
    Workshop Duration: 3 hours
    Tea, coffee and biscuits provided
    Suitable for individuals aged 16 and over

    Step into the enchanting world of theatre with our exciting project! We’re opening the doors to everyone in our community, giving you a backstage pass to the magic of live performances. Dive into the fascinating realm of sound effects (SFX) and team up with actors to create a one-of-a-kind performance.

    This workshop is a unique adventure, perfect for anyone curious about the hidden roles in theatre or looking to boost their confidence. You’ll see how SFX can transform and elevate the emotions in a performance right before your eyes.

    Unlike typical arts-based workshops, where you collaborate with an artist, this experience lets you interact with sound-producing objects and sensors to craft unique sonic experiences. Work alongside actors to bring a unique performance to life, exploring everything from classic devices to cutting-edge synthesis models. Plus, you’ll discover how STEM skills can be applied to various media. It’s a fun, hands-on journey into the heart of theatre!

    Booking link for morning session (09:30 – 12:30): https://calendly.com/r-selfridge-napier/theatre-sounds-workshop-morning

    Booking link for afternoon session (13:30 – 16:30): https://calendly.com/r-selfridge-napier/theatre-sounds-workshop-afternoon

    Any issues or questions, please contact Rod Selfridge (r.selfridge@napier.ac.uk).

  • Dr Iain McGregor: Advancing Interactive Media Design at Edinburgh Napier University

    Dr Iain McGregor is an Associate Professor at Edinburgh Napier University, where he specialises in interactive media design and auditory perception research. He earned a PhD in soundscape mapping, comparing sound designers’ expectations with listeners’ experiences to provide insights into perceptual differences and design approaches. With over 30 years of experience, he has worked on sound design across various media, including film, video games, mixed reality, and auditory displays. His research covers soundscapes, sonification, and human interaction with auditory systems.

    Contributions to Auditory Perception Research

    Dr McGregor has collaborated with researchers on a range of studies exploring sound design and auditory perception. One such contribution is his patent, Evaluation of Auditory Capabilities (WO2024041821A1), which presents a method for assessing auditory perception, with potential applications in accessibility, user experience design, and auditory technologies.

    Research in Sound and Human-Robot Interaction

    Dr McGregor’s research covers sound design, auditory perception, and human-robot interaction (HRI). He investigates how naming conventions shape perceptions of robotic personalities, improving trust and usability in assistive robotics. His research in sonification aids scientific analysis, while his work on auditory alerts improves their effectiveness in healthcare and transportation. He also explores how immersive audio enriches virtual and mixed reality and examines Foley artistry’s impact on character realism in animation. Collaborating with industry and academia, he applies these insights to mixed reality, film, video games, and robotics.

    Industry Experience

    At the start of his career, Dr McGregor worked with renowned artists and organisations, including the Bolshoi Opera, the City of Birmingham Symphony Orchestra under Sir Simon Rattle, Ravi Shankar, and Nina Simone. His work integrates auditory technologies with creative methodologies, driving innovation in sound research and education. In addition to his academic work, he is currently serving as a consultant for technology companies in the fields of mixed reality and robotics, helping to shape the development of innovative auditory interfaces.

    Academic Contributions and Mentorship

    Beyond his research, Dr McGregor mentors MSc and PhD students in sound design, auditory perception, and human-computer interaction. He encourages interdisciplinary collaboration among designers, engineers, and cognitive scientists. He contributes to curriculum development, aligning courses with advancements in sound and interactive media design. His work in interactive media design and auditory perception informs research and industry practices.

    Technological and Adaptive Advancements in Sound Design

    Advancements in reinforcement learning and edge computing are enabling real-time adaptation in sound design. These technologies allow auditory interfaces to intelligently filter and process sounds, reducing noise while enhancing clarity. Extended audiograms and dynamic digital signal processing (DDSP) further optimise clarity while minimising cognitive load. By integrating real-time adjustments based on user-specific hearing profiles, auditory systems can offer a consistent and accessible listening experience across different environments.

    Sound Design in Cultural and Museum Spaces

    In cultural and museum environments, sound design is also becoming more interactive and adaptive. Augmented reality audio systems offer dynamic storytelling and personalised navigation, responding to visitor movement and engagement levels. Audio cues can guide individuals with mobility constraints along optimised routes, while tailored auditory content enhances inclusivity and immersion.

    Sound Design for Digital and Interactive Environments

    Sound design is transforming interaction with digital environments, robotics, and everyday devices by enhancing immersion, accessibility, and engagement. Spatial audio accurately places sound in mixed reality, creating more natural user experiences, while in robotics, auditory cues foster trust and facilitate smoother interactions. Augmented reality audio supports dynamic storytelling and navigation, adapting to user movement and preferences. Additionally, personalised auditory content and accessibility-focused cues improve inclusivity in museums, public spaces, and virtual environments.

    Sound Design in Transportation and IoT

    To compensate for the near-silent operation of electric vehicles, the automotive industry is developing tailored audio cues that enhance safety and driver awareness. As the Internet of Things (IoT) expands, intuitive auditory interfaces are becoming crucial for seamless device navigation and control. Advancements in loudspeaker technology are also helping reduce noise pollution while improving communication in public spaces.

    The Future of Sound Design

    Research continues to advance adaptive and personalised sound experiences across multiple domains. Innovations in extended audiograms and dynamic digital signal processing (DDSP) optimise clarity while reducing cognitive load, ensuring accessibility across different environments and hearing abilities. Emerging sound technologies are exploring real-time adjustments tailored to user-specific hearing profiles, enhancing personalisation in auditory media experiences. As sound design evolves, it will create more intuitive, efficient, and engaging experiences that seamlessly adapt to diverse user needs.

  • Help Us Improve Your TV and Streaming Experience

    If you’re someone who loves watching films or streaming TV shows, you probably know how important sound is to the experience. Whether it’s the intensity of a car chase or the subtle whispers of a tense scene, sound adds layers to what we see on screen. But what if the sound you hear doesn’t quite match what you’re looking for, or the dialogue is difficult to follow? That’s where the research of PhD candidate Ahmed Shalabi, within the Interactive Media Design group here at Edinburgh Napier University, comes in. He is working to make your listening experience more personalised and tailored for you.

    Ahmed Shalabi

    Why Sound in Films and TV Matters

    When we watch a film or show, we all have our own preferences for how it should sound. We also have our favourite playback hardware and listening devices, be they loudspeakers or earbuds.

    Some people love rich, booming bass; others prefer clear dialogue. But here’s the thing: the sound you hear while watching isn’t always customised to fit your needs. Most systems follow a “one-size-fits-all” approach. They don’t account for things like your hearing abilities, the acoustics of your room, your playback device, or even your taste in audio.

    That’s why Ahmed is developing an adaptive mixing framework that automatically adjusts the sound to your environment and hearing preferences. This could mean clearer dialogue, better-balanced background sounds, and a soundscape that makes watching TV or streaming films more enjoyable and engaging. Imagine being able to fine-tune the sound so that it feels just right for you without having to fiddle with the settings every time.

    Why You Should Participate

    Ahmed is reaching out to the public because each person’s input could help make this a reality. He is looking for participants to help test and develop this personalised audio technology. By participating, you’ll be contributing directly to the development of sound systems that adapt to you.

    But that’s not all. This research could make a big difference for people with hearing difficulties or unique preferences. For instance, it could help those who struggle to hear dialogue clearly or people who want a more immersive sound experience. Ultimately, the goal is to make entertainment more accessible and enjoyable for everyone.

    Even though there’s no monetary reward for participating, your feedback will be an important part of shaping how we experience sound in the future.

    Who Can Join?

    If you’re between 18 and 70 years old and have full or corrected eyesight, you’re eligible to participate. No special knowledge of sound or technology is required — all you need is a love for films and TV shows and a willingness to provide feedback. Whether you watch movies every weekend or just catch the latest episodes on streaming platforms, your experience is valuable.

    What Will You Be Doing?

    You’ll be visiting the auralisation suite at Edinburgh Napier University, a space designed to simulate different sound environments. During your visit, you’ll watch short films in a controlled setting while adjusting each sound element yourself. Then, you’ll give feedback on your experience. Your answers will help refine the adaptive mixing framework.

    This experiment will help us understand how different audio setups influence the way we watch and enjoy media. Your participation will help ensure that future sound systems can adapt to individual needs, making entertainment more enjoyable for everyone.

    How to Get Involved

    If you’re interested in being part of this exciting research, it’s easy to sign up. Just visit this link to schedule your visit to the auralisation suite (C72).

    https://calendly.com/gelby/30min

    Make a Difference in How We Experience Sound

    This is your chance to help improve the way we experience sound in TV and films. By taking part in this research, you’ll help create a more personalised, immersive, and accessible audio experience for everyone. Whether you’re a film lover or just want to help improve how we all listen to media, your feedback will play a huge role in the future of sound tech.

  • Investigating the Impact of Anthropogenic Noise on Freshwater Soundscapes and Invertebrates

    This research is being carried out by PhD student Jess Lister at Edinburgh Napier University. Jess is designing and conducting the study to better understand how anthropogenic noise affects freshwater ecosystems. She is supported by a supervisory team including Dr Jennifer Dodd (Director of Studies), Dr Iain McGregor (Second Supervisor), Dr Matthew Wale, and Buglife’s Conservation Director, Dr Craig Macadam. Their expertise in bioacoustics, environmental science, and invertebrate ecology ensures a multidisciplinary approach to studying noise pollution’s effects on freshwater biodiversity.

    Jess Lister


    Understanding the Challenges of Noise Pollution in Freshwater Ecosystems

    Freshwater environments contain a variety of natural sounds from flowing water, aquatic species, and atmospheric conditions. However, increasing human activity is introducing noise that could interfere with species that rely on acoustic communication. While much research has explored noise pollution’s effects on terrestrial and marine life, freshwater invertebrates remain underrepresented in these studies. Jess’s work addresses this gap by examining how noise impacts stoneflies, an important group of insects in river ecosystems.


    Stoneflies and Vibrational Communication

    Stoneflies (Order: Plecoptera) use substrate-borne vibrational signals, known as drumming, to communicate during mating. This is essential for species recognition and reproduction. However, road traffic noise overlaps with the frequency of their signals, raising concerns that it could disrupt mate attraction. Jess’s research examines whether noise pollution alters their communication patterns.


    Developing a Controlled Research Environment

    To study these effects, Jess has implemented a controlled experimental setup. The BeatBox, an acoustic chamber designed to minimise external interference, allows for precise playback experiments. This setup ensures that stoneflies’ responses to different noise conditions can be observed and measured accurately.


    Experimental Methods and Playback Studies

    Stonefly nymphs are collected from river sites and reared to adulthood in aquaria under controlled conditions. Once they emerge, males are placed in the BeatBox, where their drumming behaviour is recorded with and without road noise playback. This controlled approach ensures accurate measurements and allows for detailed analysis of any changes in communication patterns.

    Initial findings suggest that noise pollution may affect the frequency and timing of stonefly drumming signals. If further analysis confirms this, it will provide important evidence that freshwater invertebrates—like many terrestrial and marine species—are affected by human-generated noise, with potential consequences for biodiversity and ecosystem function.


    The Impact of Noise on River Soundscapes

    Beyond individual species, Jess’s research explores how road traffic noise interacts with river ecosystems. By combining hydrophone recordings with in-air microphones, she is investigating how sound travels through both water and air, providing a broader understanding of how noise pollution alters freshwater environments. She will also be capturing ground-borne noise, adding another dimension to the study by examining how vibrations travel through the riverbed and surrounding terrain. This comprehensive approach will provide deeper insights into how different types of noise interact within freshwater habitats.

    Because stoneflies are sensitive to temperature increases, climate change and habitat loss pose significant threats to their populations. Their decline can lead to disruptions in freshwater food webs, affecting fish populations and overall river health. Monitoring and protecting stoneflies is essential for maintaining biodiversity and ecosystem function in freshwater environments.


    Future Directions

    Jess’s work is contributing new insights to freshwater bioacoustics. As human activity continues to shape natural environments, her findings could inform conservation strategies aimed at reducing the impact of noise pollution on freshwater species. The BeatBox could also be used to study other invertebrates that rely on substrate-borne communication.


    Conclusion

    Jess Lister’s research is helping to clarify how anthropogenic noise affects freshwater ecosystems. Her work highlights an often-overlooked aspect of environmental change, demonstrating the importance of including soundscapes in conservation efforts. By developing new methods and expanding knowledge of freshwater bioacoustics, she is making an important contribution to ecology and environmental science.