Tag: Research

  • Reimagining Sound in Live Theatre II

    Part 2: Engineering Actor-Controlled Sound Effects with IoS Devices

    This post builds on insights from Part 1 of our series on interactive theatre sound design. If you haven’t read it yet, check out Part 1: Collaborative Sound Design in Theatre.


    Rethinking Sound Control in Theatre

    Traditional theatre sound design relies heavily on off-stage operators using software like QLab to trigger pre-recorded cues. While reliable, this model limits spontaneity and performer agency. This research investigates a shift: giving actors direct control over sound effects using networked Internet of Sound (IoS) devices embedded in props or costumes.


    What Is the Internet of Sound?

    The Internet of Sound (IoS) is a subdomain of the Internet of Things (IoT), focused on transmitting and manipulating sound-related data over wireless networks. It includes:

    • IoMusT (Internet of Musical Things): Smart instruments with embedded electronics.
    • IoAuT (Internet of Audio Things): Distributed audio systems for production, reception, and analysis.

    This project leans toward the IoMusT domain, emphasizing performer interaction with sound-generating devices.


    Technical Architecture

    The workshop deployed nine IoS devices built with Arduino MKR 1010 microcontrollers, chosen for their built-in Wi-Fi and affordability. Each device communicated via Open Sound Control (OSC) over UDP, sending sensor data to Pure Data patches running on local laptops.

    Sensors Used:

    • Accelerometers – for dynamic control (e.g., storm intensity)
    • Force Sensitive Resistors (FSRs) – for pressure-based triggers
    • Circular and Rotary Potentiometers – for pitch and volume control
    • Photoresistors – for light-triggered samples
    • Buttons – for simple cue activation

    Each performance space had its own router, enabling modular and fault-tolerant deployment.
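
    To make that transmit path concrete, here is a minimal sketch of how one of these devices might send a single sensor reading as an OSC message over UDP, using the WiFiNINA and CNMAT OSC Arduino libraries. The network credentials, laptop address, port, and OSC path below are placeholders, not the workshop’s actual configuration.

    ```cpp
    // Minimal sketch of the transmit path: one sensor reading per packet,
    // sent as an OSC message over UDP to a Pure Data patch on a laptop.
    // Uses the WiFiNINA and CNMAT OSC Arduino libraries.
    #include <WiFiNINA.h>
    #include <WiFiUdp.h>
    #include <OSCMessage.h>

    const char* ssid = "theatre-router";        // placeholder network name
    const char* pass = "********";              // placeholder password
    const IPAddress laptopIp(192, 168, 1, 10);  // placeholder Pure Data host
    const unsigned int laptopPort = 9000;       // placeholder listening port
    WiFiUDP Udp;

    void setup() {
      while (WiFi.begin(ssid, pass) != WL_CONNECTED) {
        delay(500);  // keep retrying until the router accepts the board
      }
      Udp.begin(8888);  // local port for outgoing UDP
    }

    void loop() {
      int raw = analogRead(A0);  // e.g. a force-sensitive resistor on pin A0

      OSCMessage msg("/prop/fsr");    // placeholder OSC address
      msg.add((float)raw / 1023.0f);  // normalise the 10-bit reading to 0..1
      Udp.beginPacket(laptopIp, laptopPort);
      msg.send(Udp);                  // serialise the message into the packet
      Udp.endPacket();
      msg.empty();                    // reset the message for the next loop

      delay(20);  // ~50 Hz update rate keeps network traffic modest
    }
    ```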

    Hardware setup in the main theatre space.

    Hardware setup in the rehearsal space.

    Interaction Design

    Participants interacted with both pre-recorded samples and procedural audio models:

    Pre-recorded Samples:

    • Triggered via buttons, light sensors, or rotary knobs
    • Used for audience reactions, chorus sounds, and character cues

    Procedural Audio Models:

    • Spark – Triggered by button (gain envelope)
    • Squeaky Duck – Controlled by FSR (pitch modulation)
    • Theremin – Controlled by circular potentiometer (oscillator frequency)
    • Stormstick – Controlled by accelerometer (rain and thunder intensity)

    These models allowed for expressive, real-time manipulation of sound, enhancing immersion and authenticity.
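
    The exact mappings from the workshop aren’t reproduced here, but the sketch below gives a flavour of how the Stormstick’s behaviour could be implemented on the device side. It assumes an analog three-axis accelerometer (e.g., an ADXL335) wired to pins A0–A2; the OSC addresses and scaling constants are illustrative, and a Pure Data patch would map the two messages onto the rain and thunder models.

    ```cpp
    // Illustrative Stormstick mapping: shaking intensity drives the gain of a
    // rain model and the trigger rate of a thunder model in a Pure Data patch.
    #include <WiFiNINA.h>
    #include <WiFiUdp.h>
    #include <OSCMessage.h>

    const IPAddress laptopIp(192, 168, 1, 10);  // placeholder Pure Data host
    const unsigned int laptopPort = 9000;       // placeholder listening port
    WiFiUDP Udp;
    float smoothed = 0.0f;  // exponentially smoothed motion estimate

    void sendFloat(const char* address, float value) {
      OSCMessage msg(address);
      msg.add(value);
      Udp.beginPacket(laptopIp, laptopPort);
      msg.send(Udp);
      Udp.endPacket();
      msg.empty();
    }

    void setup() {
      WiFi.begin("theatre-router", "********");  // placeholders, as above
      Udp.begin(8888);
    }

    void loop() {
      // Read an assumed analog three-axis accelerometer on A0..A2 and centre
      // each axis on its mid-rail 0 g point.
      float x = analogRead(A0) - 512.0f;
      float y = analogRead(A1) - 512.0f;
      float z = analogRead(A2) - 512.0f;
      // Gravity leaves a small resting offset; fine for an illustrative mapping.
      float magnitude = sqrtf(x * x + y * y + z * z) / 512.0f;

      // One-pole low-pass so the storm swells and decays rather than stuttering.
      smoothed += 0.1f * (magnitude - smoothed);

      sendFloat("/storm/rain/gain", constrain(smoothed, 0.0f, 1.0f));
      sendFloat("/storm/thunder/rate", smoothed * 5.0f);  // triggers per second

      delay(20);
    }
    ```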

    Circular potentiometer used to control a theremin-type sound effect.

    An accelerometer within the ‘Stormstick’ controls the gain of a rain synthesis model and the trigger rate of a thunder one.

    Participant Feedback & Findings

    Benefits:

    • Enhanced Timing – Actor-triggered cues improved synchronisation
    • Creative Freedom – Enabled improvisation and dynamic adaptation
    • Authenticity – Increased believability and audience engagement
    • Actor Agency – Encouraged deeper integration into the production process

    Challenges:

    • Reliability – Wi-Fi dropouts and device failures were noted
    • Cognitive Load – Actors expressed concern over added responsibilities
    • Integration – Costume and prop design must accommodate sensors
    • Audience Distraction – Poorly integrated devices could break immersion

    Engineering Considerations

    To ensure successful deployment in live theatre:

    • Robust Wi-Fi – Site-specific testing and fallback systems (e.g., QLab) are essential; a reconnection watchdog is sketched after this list
    • Thermal Management – Embedded devices must remain cool and accessible
    • Modular Design – Quick-release enclosures and reusable components improve sustainability
    • Cross-Department Collaboration – Early involvement of costume, prop, and production teams is critical
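
    On the Wi-Fi point, one low-cost mitigation is a reconnection watchdog on each device, so that a dropped link recovers without a power cycle. The sketch below shows the idea using WiFiNINA; the credentials and timeout are placeholders, and it complements rather than replaces an operator-run QLab fallback.

    ```cpp
    // A minimal connection watchdog: poll the link every loop and rejoin the
    // network if it has dropped, so a device can recover without a restart.
    #include <WiFiNINA.h>

    const char* ssid = "theatre-router";  // placeholder credentials
    const char* pass = "********";

    void ensureWifi() {
      if (WiFi.status() == WL_CONNECTED) return;
      WiFi.disconnect();  // clear any half-dead association first
      unsigned long start = millis();
      while (WiFi.begin(ssid, pass) != WL_CONNECTED) {
        if (millis() - start > 10000) break;  // give up after 10 s; retry next loop
        delay(500);
      }
    }

    void setup() {
      ensureWifi();
    }

    void loop() {
      ensureWifi();  // cheap no-op while the link is healthy
      // ... read sensors and send OSC messages as in the earlier sketches ...
      delay(20);
    }
    ```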

    Sound Design Strategy

    Sound designers must consider:

    • Spot vs. Atmosphere – One-off effects may suit samples; dynamic ambiences benefit from procedural audio
    • Sensor Mapping – Choose intuitive controls (e.g., FSR for pressure-based sounds); one candidate pitch mapping is sketched after this list
    • Actor Suitability – Confident performers are better candidates for device control
    • Rehearsal Integration – Early adoption helps reduce cognitive load and improve fluency
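
    To make the sensor-mapping point concrete, here is one way a pressure reading could be mapped to pitch: exponentially rather than linearly, so that equal changes in pressure produce equal musical intervals. The pin, frequency range, and serial output are illustrative stand-ins; in a deployment like the one above, the value would be sent as an OSC message instead.

    ```cpp
    // Exponential pitch mapping: equal increments of pressure give equal
    // musical intervals, which tends to feel more natural to a performer
    // than a linear sweep in Hz.
    #include <math.h>

    float pressureToHz(float pressure, float lowHz, float highHz) {
      return lowHz * powf(highHz / lowHz, pressure);  // pressure in 0..1
    }

    void setup() {
      Serial.begin(9600);
    }

    void loop() {
      float pressure = analogRead(A0) / 1023.0f;  // assumed FSR on pin A0
      // Half pressure over a 200..1600 Hz range gives ~566 Hz, the geometric
      // (musical) midpoint, rather than the 900 Hz arithmetic midpoint.
      float hz = pressureToHz(pressure, 200.0f, 1600.0f);
      Serial.println(hz);  // in practice, sent as an OSC message instead
      delay(20);
    }
    ```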

    Future Directions

    The next phase involves deploying IoS devices in a live pantomime performance in December 2025. Beyond this, distributed performances across locations (e.g., London and New York) could leverage IoS for synchronised, remote interaction.

    Exploration of alternative microcontrollers (e.g., Teensy) and operating systems (e.g., Elk Audio OS) may improve scalability and reliability.


    Conclusion

    Actor-controlled IoS devices represent a promising evolution in theatre sound design—merging technical innovation with artistic expression. While challenges remain, the potential for more immersive, responsive, and collaborative performances is clear.

  • Reimagining Sound in Live Theatre

    Part 1: Collaborative Sound Design in Theatre – A Workshop Approach

    In an age where immersive experiences are reshaping the boundaries of performance, sound design in theatre is undergoing a quiet revolution. A recent workshop held at The Dibble Tree Theatre in Carnoustie explored this transformation, bringing together actors, sound designers, and experimental technologies to co-create a new kind of theatrical soundscape.

    The Dame and the Baron, ready to collaborate with our sound designers!

    Why Sound Design Needs a Shake-Up

    Despite its central role in storytelling, sound design in theatre has lagged behind lighting and projection in terms of innovation. Traditional tools like QLab remain industry staples, but they often limit sound to pre-programmed cues triggered by operators. This workshop challenged that model by asking: What if actors could control their own sound effects live on stage?


    Collaboration at the Core

    The workshop was designed as a playful, hands-on experience. Participants—ranging from amateur theatre enthusiasts to experienced backstage crew—worked in small groups to rehearse and perform short pantomime scenes. They used Foley props (slide whistles, rain sticks, thunder tubes), pre-recorded samples, and procedural audio models to sketch out their sound designs.

    Importantly, actors and sound designers collaborated from the outset, rehearsing together and experimenting with timing, mood, and interaction. This flattened hierarchy fostered creativity and mutual learning.

    Long John Silver performing his actions along with a sound designer on a slide whistle

    Enter the Internet of Sound

    A standout feature of the workshop was the use of networked sound devices—custom-built tools powered by Arduino MKR 1010 boards and Pure Data software. These devices allowed actors to trigger sounds via sensors embedded in props or wearable tech. For example:

    • A motion sensor in a prop triggered audience reactions.
    • A rotary knob controlled volume and playback of samples.
    • An accelerometer and force-sensitive resistor enabled real-time manipulation of procedural audio.

    These embodied interfaces blurred the line between performer and sound operator, creating a more organic and responsive soundscape.

    Sound designer studying the script with the Internet of Sound devices beside him.
    Sound designer performing the sounds on the Internet of Sound devices, with the script in her other hand, watching the stage to get her timing right.

    What Participants Learned

    Feedback was overwhelmingly positive. Participants reported:

    • Greater appreciation for the complexity of sound design.
    • Enjoyment of the collaborative and playful structure.
    • Insights into how sound design principles transfer to other media like film and radio.

    Challenges included cognitive load—especially for actors managing props, cues, and performance simultaneously—and occasional technical glitches with Wi-Fi connectivity.


    Key Takeaways

    • Actor-led sound triggering offers better timing and authenticity.
    • Early integration of sound design into rehearsals is crucial.
    • Embodied interaction (e.g., using props or wearables) enhances engagement.
    • Collaboration between departments—sound, props, costumes—is essential for success.

    Final Thought

    This workshop offered a fresh perspective on how sound can be more deeply integrated into live theatre. By inviting collaboration between actors and sound designers and experimenting with interactive technologies, it opened up new possibilities for creative expression. While challenges like reliability and cognitive load remain, the enthusiasm and insights from participants suggest that actor-led sound design is a promising direction worth exploring further.


    In Part 2, we explore the technical implementation of actor-controlled sound effects using Internet of Sound (IoS) devices. Stay tuned for a deeper dive into the engineering behind the performance.

  • Theatre Sound Design Workshop

    Date: Saturday 3rd May 2025
    Location: Dibble Tree Theatre, 99A High St, Carnoustie DD7 7EA
    Morning workshop: 09:30 – 12:30
    Afternoon workshop: 13:30 – 16:30
    Workshop Duration: 3 hours
    Tea, coffee and biscuits provided
    Suitable for individuals aged 16 and over

    Step into the enchanting world of theatre with our exciting project! We’re opening the doors to everyone in our community, giving you a backstage pass to the magic of live performances. Dive into the fascinating realm of sound effects (SFX) and team up with actors to create a one-of-a-kind performance.

    This workshop is a unique adventure, perfect for anyone curious about the hidden roles in theatre or looking to boost their confidence. You’ll see how SFX can transform and elevate the emotions in a performance right before your eyes.

    Unlike typical arts-based workshops, where you collaborate with an artist, this experience lets you interact with sound-producing objects and sensors to craft unique sonic experiences. Work alongside actors to bring a unique performance to life, exploring everything from classic devices to cutting-edge synthesis models. Plus, you’ll discover how STEM skills can be applied to various media. It’s a fun, hands-on journey into the heart of theatre!

    Booking link for morning session (09:30 – 12:30): https://calendly.com/r-selfridge-napier/theatre-sounds-workshop-morning

    Booking link for afternoon session (13:30 – 16:30): https://calendly.com/r-selfridge-napier/theatre-sounds-workshop-afternoon

    Any issues or questions, please contact Rod Selfridge (r.selfridge@napier.ac.uk).

  • Help Us Improve Your TV and Streaming Experience

    If you’re someone who loves watching films or streaming TV shows, you probably know how important sound is to the experience. Whether it’s the intensity of a car chase or the subtle whispers of a tense scene, sound adds layers to what we see on screen. But what if the sound you hear doesn’t quite match what you’re looking for, or the dialogue is difficult to follow? That’s where the research of Ph.D. candidate Ahmed Shalabi, within the Interactive Media Design group here at Edinburgh Napier University, comes in. He is working to make your listening experience more personalised and tailored to you.

    Ahmed Shalabi

    Why Sound in Films and TV Matters

    When we watch a film or show, we all have our own preferences for how it should sound. We also have our favourite playback hardware and listening devices, be they loudspeakers or earbuds.

    Some people love rich, booming bass; others prefer clear dialogue. But here’s the thing: the sound you hear while watching isn’t always customised to fit your needs. Most systems follow a “one-size-fits-all” approach. They don’t account for things like your hearing abilities, the acoustics of your room, your playback device, or even your taste in audio.

    That’s why Ahmed is developing an adaptive mixing framework that automatically adjusts the sound to your environment and hearing preferences. This could mean clearer dialogue, better-balanced background sounds, and a soundscape that makes watching TV or streaming films more enjoyable and engaging. Imagine being able to fine-tune the sound so that it feels just right for you without having to fiddle with the settings every time.

    Why You Should Participate

    Ahmed is reaching out to the public because each person’s input could help make this a reality. He is looking for participants to help test and develop this personalised audio technology. By participating, you’ll be contributing directly to the development of sound systems that adapt to you.

    But that’s not all. This research could make a big difference for people with hearing difficulties or unique preferences. For instance, it could help those who struggle to hear dialogue clearly or people who want a more immersive sound experience. Ultimately, the goal is to make entertainment more accessible and enjoyable for everyone.

    Even though there’s no monetary reward for participating, your feedback will be an important part of shaping how we experience sound in the future.

    Who Can Join?

    If you’re between 18 and 70 years old and have full or corrected eyesight, you’re eligible to participate. No special knowledge of sound or technology is required — all you need is a love for films and TV shows and a willingness to provide feedback. Whether you watch movies every weekend or just catch the latest episodes on streaming platforms, your experience is valuable.

    What Will You Be Doing?

    You’ll be visiting the auralisation suite at Edinburgh Napier University, a space designed to simulate different sound environments. During your visit, you’ll watch short films in a controlled setting while adjusting each sound element. Then, you’ll give feedback on your experience. Your answers will help refine the adaptive mixing framework.

    This experiment will help us understand how different audio setups influence the way we watch and enjoy media. Your participation will help ensure that future sound systems can adapt to individual needs, making entertainment more enjoyable for everyone.

    How to Get Involved

    If you’re interested in being part of this exciting research, it’s easy to sign up. Just visit this link to schedule your visit to the auralisation suite (C72).

    https://calendly.com/gelby/30min

    Make a Difference in How We Experience Sound

    This is your chance to help improve the way we experience sound in TV and films. By taking part in this research, you’ll help create a more personalised, immersive, and accessible audio experience for everyone. Whether you’re a film lover or just want to help improve how we all listen to media, your feedback will play a huge role in the future of sound tech.

  • Investigating the Impact of Anthropogenic Noise on Freshwater Soundscapes and Invertebrates

    This research is being carried out by PhD student Jess Lister at Edinburgh Napier University. Jess is designing and conducting the study to better understand how anthropogenic noise affects freshwater ecosystems. She is supported by a supervisory team including Dr. Jennifer Dodd (Director of Studies), Dr. Iain McGregor (Second Supervisor), Dr. Matthew Wale, and Buglife’s Conservation Director, Dr. Craig Macadam. Their expertise in bioacoustics, environmental science, and invertebrate ecology ensures a multidisciplinary approach to studying noise pollution’s effects on freshwater biodiversity.

    Jess Lister


    Understanding the Challenges of Noise Pollution in Freshwater Ecosystems

    Freshwater environments contain a variety of natural sounds from flowing water, aquatic species, and atmospheric conditions. However, increasing human activity is introducing noise that could interfere with species that rely on acoustic communication. While much research has explored noise pollution’s effects on terrestrial and marine life, freshwater invertebrates remain underrepresented in these studies. Jess’s work addresses this gap by examining how noise impacts stoneflies, an important group of insects in river ecosystems.


    Stoneflies and Vibrational Communication

    Stoneflies (Order: Plecoptera) use substrate-borne vibrational signals, known as drumming, to communicate during mating. This is essential for species recognition and reproduction. However, road traffic noise overlaps with the frequency range of their signals, raising concerns that it could disrupt mate attraction. Jess’s research examines whether noise pollution alters their communication patterns.


    Developing a Controlled Research Environment

    To study these effects, Jess has implemented a controlled experimental setup. The BeatBox, an acoustic chamber designed to minimise external interference, allows for precise playback experiments. This setup ensures that stoneflies’ responses to different noise conditions can be observed and measured accurately.


    Experimental Methods and Playback Studies

    Stonefly nymphs are collected from river sites and reared to adulthood in aquaria under controlled conditions. Once they emerge, males are placed in the BeatBox, where their drumming behaviour is recorded with and without road noise playback. This controlled approach ensures accurate measurements and allows for detailed analysis of any changes in communication patterns.

    Initial findings suggest that noise pollution may affect the frequency and timing of stonefly drumming signals. If further analysis confirms this, it will provide important evidence that freshwater invertebrates—like many terrestrial and marine species—are affected by human-generated noise, with potential consequences for biodiversity and ecosystem function.


    The Impact of Noise on River Soundscapes

    Beyond individual species, Jess’s research explores how road traffic noise interacts with river ecosystems. By combining hydrophone recordings with in-air microphones, she is investigating how sound travels through both water and air, providing a broader understanding of how noise pollution alters freshwater environments. She will also be capturing ground-borne noise, adding another dimension to the study by examining how vibrations travel through the riverbed and surrounding terrain. This comprehensive approach will provide deeper insights into how different types of noise interact within freshwater habitats.

    Because stoneflies are sensitive to temperature increases, climate change and habitat loss pose significant threats to their populations. Their decline can lead to disruptions in freshwater food webs, affecting fish populations and overall river health. Monitoring and protecting stoneflies is essential for maintaining biodiversity and ecosystem function in freshwater environments.


    Future Directions

    Jess’s work is contributing new insights to freshwater bioacoustics. As human activity continues to shape natural environments, her findings could inform conservation strategies aimed at reducing the impact of noise pollution on freshwater species. The BeatBox could also be used to study other invertebrates that rely on substrate-borne communication.


    Conclusion

    Jess Lister’s research is helping to clarify how anthropogenic noise affects freshwater ecosystems. Her work highlights an often-overlooked aspect of environmental change, demonstrating the importance of including soundscapes in conservation efforts. By developing new methods and expanding knowledge of freshwater bioacoustics, she is making an important contribution to ecology and environmental science.