Tag: Edinburgh Napier University

  • Reimagining Sound in Live Theatre II

    Part 2: Engineering Actor-Controlled Sound Effects with IoS Devices

    This post builds on insights from Part 1 of our series on interactive theatre sound design. If you haven’t read it yet, check out Part 1: Collaborative Sound Design in Theatre.


    Rethinking Sound Control in Theatre

    Traditional theatre sound design relies heavily on off-stage operators using software like QLab to trigger pre-recorded cues. While reliable, this model limits spontaneity and performer agency. This research investigates a shift: giving actors direct control over sound effects using networked Internet of Sound (IoS) devices embedded in props or costumes.


    What Is the Internet of Sound?

    The Internet of Sound (IoS) is a subdomain of the Internet of Things (IoT), focused on transmitting and manipulating sound-related data over wireless networks. It includes:

    • IoMusT (Internet of Musical Things): Smart instruments with embedded electronics.
    • IoAuT (Internet of Audio Things): Distributed audio systems for production, reception, and analysis.

    This project leans toward the IoMusT domain, emphasising performer interaction with sound-generating devices.


    Technical Architecture

    The workshop deployed nine IoS devices built with Arduino MKR 1010 microcontrollers, chosen for their built-in Wi-Fi and affordability. Each device communicated via Open Sound Control (OSC) over UDP, sending sensor data to Pure Data patches running on local laptops.
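    As a concrete illustration of that pipeline, a minimal MKR 1010 sketch using the WiFiNINA and CNMAT OSC Arduino libraries might look like the following. The network credentials, laptop address, port, and OSC address pattern are placeholders, not the workshop's actual configuration:

    #include <WiFiNINA.h>   // Wi-Fi driver for the MKR 1010
    #include <WiFiUdp.h>
    #include <OSCMessage.h> // CNMAT OSC library

    const char ssid[] = "theatre-space-1";     // hypothetical per-space router
    const char pass[] = "changeme";            // placeholder credentials
    const IPAddress laptopIp(192, 168, 1, 10); // laptop running Pure Data (assumed)
    const unsigned int oscPort = 9000;         // port the Pd patch listens on (assumed)
    const int buttonPin = 2;

    WiFiUDP Udp;

    void setup() {
      pinMode(buttonPin, INPUT_PULLUP);
      while (WiFi.begin(ssid, pass) != WL_CONNECTED) {
        delay(500); // keep retrying until the space's router accepts us
      }
      Udp.begin(8888); // local port for outgoing UDP
    }

    void loop() {
      if (digitalRead(buttonPin) == LOW) {  // button wired active-low
        OSCMessage msg("/device/1/button"); // address pattern is an assumption
        msg.add(1);
        Udp.beginPacket(laptopIp, oscPort);
        msg.send(Udp);  // serialise the OSC message into the UDP packet
        Udp.endPacket();
        msg.empty();    // release the message buffer
        delay(200);     // crude debounce
      }
    }

    On the laptop side, Pure Data can typically unpack such messages with its built-in [netreceive] and [oscparse] objects (or the mrpeach externals) and route the values to the relevant synthesis patch.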

    Sensors Used:

    • Accelerometers – for dynamic control (e.g., storm intensity)
    • Force Sensitive Resistors (FSRs) – for pressure-based triggers
    • Circular and Rotary Potentiometers – for pitch and volume control
    • Photoresistors – for light-triggered samples
    • Buttons – for simple cue activation
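    For illustration, each sensor might be read and normalised on the device before transmission. The pin assignments below are hypothetical, and the accelerometer mapping is sketched separately under Interaction Design:

    // Normalise each analogue sensor to 0..1 (10-bit ADC) before it is
    // packed into an OSC message. Pins are illustrative, not the
    // workshop's wiring.
    float readFsr()    { return analogRead(A1) / 1023.0f; } // pressure pad
    float readPot()    { return analogRead(A2) / 1023.0f; } // knob position
    float readLight()  { return analogRead(A3) / 1023.0f; } // photoresistor divider
    bool  readButton() { return digitalRead(2) == LOW; }    // wired active-low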

    Each performance space had its own router, enabling modular and fault-tolerant deployment.

    Hardware setup in the main theatre space.

    Hardware setup in the rehearsal space.

    Interaction Design

    Participants interacted with both pre-recorded samples and procedural audio models:

    Pre-recorded Samples:

    • Triggered via buttons, light sensors, or rotary knobs
    • Used for audience reactions, chorus sounds, and character cues

    Procedural Audio Models:

    • Spark – Triggered by button (gain envelope)
    • Squeaky Duck – Controlled by FSR (pitch modulation)
    • Theremin – Controlled by circular potentiometer (oscillator frequency)
    • Stormstick – Controlled by accelerometer (rain and thunder intensity)

    These models allowed for expressive, real-time manipulation of sound, enhancing immersion and authenticity.
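    As a sketch of the kind of mapping the Stormstick relied on (the thresholds and scaling below are illustrative guesses, not the workshop's values):

    // Convert raw accelerometer axes (in g) into the two Stormstick controls.
    float shakeEnergy(float ax, float ay, float az) {
      // Magnitude of acceleration minus the ~1 g gravity component,
      // so a device at rest reads zero shake.
      float mag = sqrt(ax * ax + ay * ay + az * az);
      return mag > 1.0f ? mag - 1.0f : 0.0f;
    }

    // Gentle movement raises the rain model's gain; only harder shakes
    // increase the thunder model's trigger rate.
    void mapStormstick(float shake, float &rainGain, float &thunderRate) {
      rainGain    = constrain(shake / 2.0f, 0.0f, 1.0f);
      thunderRate = shake > 1.5f ? (shake - 1.5f) * 2.0f : 0.0f; // events per second
    }

    Each value would then be sent as its own OSC message, with the Pure Data patch applying the gain to the rain synthesiser and using the rate to schedule thunder events.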

    Circular potentiometer used to control a Theremin-type sound effect.

    An accelerometer within the ‘Stormstick’ controls the gain of a rain synthesis model and the trigger rate of a thunder one.

    Participant Feedback & Findings

    Benefits:

    • Enhanced Timing – Actor-triggered cues improved synchronisation
    • Creative Freedom – Enabled improvisation and dynamic adaptation
    • Authenticity – Increased believability and audience engagement
    • Actor Agency – Encouraged deeper integration into the production process

    Challenges:

    • Reliability – Wi-Fi dropouts and device failures were noted
    • Cognitive Load – Actors expressed concern over added responsibilities
    • Integration – Costume and prop design must accommodate sensors
    • Audience Distraction – Poorly integrated devices could break immersion

    Engineering Considerations

    To ensure successful deployment in live theatre:

    • Robust Wi-Fi – Site-specific testing and fallback systems (e.g., QLab) are essential; a minimal connection watchdog is sketched after this list
    • Thermal Management – Embedded devices must remain cool and accessible
    • Modular Design – Quick-release enclosures and reusable components improve sustainability
    • Cross-Department Collaboration – Early involvement of costume, prop, and production teams is critical
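    One possible shape for that watchdog, reusing the ssid and pass credentials from the earlier sketch. The retry limit and LED signalling are assumptions, and setup() is assumed to call pinMode(LED_BUILTIN, OUTPUT):

    int failedRetries = 0;

    // Call at the top of loop(): rejoin the router if the link drops, and
    // after repeated failures flash the built-in LED so the crew knows to
    // switch to the QLab fallback.
    void ensureWifi() {
      if (WiFi.status() == WL_CONNECTED) {
        failedRetries = 0;
        return;
      }
      if (WiFi.begin(ssid, pass) == WL_CONNECTED) {
        failedRetries = 0;
      } else if (++failedRetries >= 5) {
        digitalWrite(LED_BUILTIN, (millis() / 250) % 2); // warning blink
      }
    }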

    Sound Design Strategy

    Sound designers must consider:

    • Spot vs. Atmosphere – One-off effects may suit samples; dynamic ambiences benefit from procedural audio
    • Sensor Mapping – Choose intuitive controls (e.g., FSR for pressure-based sounds)
    • Actor Suitability – Confident performers are better candidates for device control
    • Rehearsal Integration – Early adoption helps reduce cognitive load and improve fluency

    Future Directions

    The next phase involves deploying IoS devices in a live pantomime performance in December 2025. Beyond this, distributed performances across locations (e.g., London and New York) could leverage IoS for synchronised, remote interaction.

    Exploration of alternative microcontrollers (e.g., Teensy) and operating systems (e.g., Elk Audio OS) may improve scalability and reliability.


    Conclusion

    Actor-controlled IoS devices represent a promising evolution in theatre sound design—merging technical innovation with artistic expression. While challenges remain, the potential for more immersive, responsive, and collaborative performances is clear.

  • Reimagining Sound in Live Theatre

    Part 1: Collaborative Sound Design in Theatre – A Workshop Approach

    In an age where immersive experiences are reshaping the boundaries of performance, sound design in theatre is undergoing a quiet revolution. A recent workshop held at The Dibble Tree Theatre in Carnoustie explored this transformation, bringing together actors, sound designers, and experimental technologies to co-create a new kind of theatrical soundscape.

    The Dame and the Baron, ready to collaborate with our sound designers!

    Why Sound Design Needs a Shake-Up

    Despite its central role in storytelling, sound design in theatre has lagged behind lighting and projection in terms of innovation. Traditional tools like QLab remain industry staples, but they often limit sound to pre-programmed cues triggered by operators. This workshop challenged that model by asking: What if actors could control their own sound effects live on stage?


    Collaboration at the Core

    The workshop was designed as a playful, hands-on experience. Participants—ranging from amateur theatre enthusiasts to experienced backstage crew—worked in small groups to rehearse and perform short pantomime scenes. They used Foley props (slide whistles, rain sticks, thunder tubes), pre-recorded samples, and procedural audio models to sketch out their sound designs.

    Importantly, actors and sound designers collaborated from the outset, rehearsing together and experimenting with timing, mood, and interaction. This flattened hierarchy fostered creativity and mutual learning.

    Long John Silver performing his actions along with a sound designer on a slide whistle.

    Enter the Internet of Sounds

    A standout feature of the workshop was the use of networked sound devices—custom-built tools powered by Arduino MKR 1010 boards and Pure Data software. These devices allowed actors to trigger sounds via sensors embedded in props or wearable tech. For example:

    • A motion sensor in a prop triggered audience reactions.
    • A rotary knob controlled volume and playback of samples.
    • An accelerometer and force-sensitive resistor enabled real-time manipulation of procedural audio.

    These embodied interfaces blurred the line between performer and sound operator, creating a more organic and responsive soundscape.
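    For instance, the rotary-knob volume mapping might send its position only when it changes, to avoid flooding the network with redundant packets. This sketch reuses the networking setup from the Part 2 post above, and the OSC address and change threshold are illustrative:

    float lastSent = -1.0f; // force an initial send

    void sendKnobLevel() {
      float level = analogRead(A0) / 1023.0f;   // knob position, normalised 0..1
      if (fabs(level - lastSent) > 0.01f) {     // only report meaningful changes
        OSCMessage msg("/prop/knob/volume");    // hypothetical address
        msg.add(level);
        Udp.beginPacket(laptopIp, oscPort);
        msg.send(Udp);
        Udp.endPacket();
        msg.empty();
        lastSent = level;
      }
    }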

    Sound designer studying the script with the Internet of Sound devices beside him.

    Sound designer performing sounds on the Internet of Sound devices, script in one hand, watching the stage to get her timing right.

    What Participants Learned

    Feedback was overwhelmingly positive. Participants reported:

    • Greater appreciation for the complexity of sound design.
    • Enjoyment of the collaborative and playful structure.
    • Insights into how sound design principles transfer to other media like film and radio.

    Challenges included cognitive load—especially for actors managing props, cues, and performance simultaneously—and occasional technical glitches with Wi-Fi connectivity.


    Key Takeaways

    • Actor-led sound triggering offers better timing and authenticity.
    • Early integration of sound design into rehearsals is crucial.
    • Embodied interaction (e.g., using props or wearables) enhances engagement.
    • Collaboration between departments—sound, props, costumes—is essential for success.

    Final Thought

    This workshop offered a fresh perspective on how sound can be more deeply integrated into live theatre. By inviting collaboration between actors and sound designers and experimenting with interactive technologies, it opened up new possibilities for creative expression. While challenges like reliability and cognitive load remain, the enthusiasm and insights from participants suggest that actor-led sound design is a promising direction worth exploring further.


    In Part 2, we explore the technical implementation of actor-controlled sound effects using Internet of Sound (IoS) devices. Stay tuned for a deeper dive into the engineering behind the performance.

  • Theatre Sound Design Workshop

    Date: Saturday 3rd May 2025
    Location: Dibble Tree Theatre, 99A High St, Carnoustie DD7 7EA
    Morning workshop: 09:30 – 12:30
    Afternoon workshop: 13:30 – 16:30
    Workshop Duration: 3 hours
    Tea, coffee and biscuits provided
    Suitable for individuals aged 16 and over

    Step into the enchanting world of theatre with our exciting project! We’re opening the doors to everyone in our community, giving you a backstage pass to the magic of live performances. Dive into the fascinating realm of sound effects (SFX) and team up with actors to create a one-of-a-kind performance.

    This workshop is a unique adventure, perfect for anyone curious about the hidden roles in theatre or looking to boost their confidence. You’ll see how SFX can transform and elevate the emotions in a performance right before your eyes.

    Unlike typical arts-based workshops, where you collaborate with an artist, this experience lets you interact with sound-producing objects and sensors to craft unique sonic experiences. Work alongside actors to bring a unique performance to life, exploring everything from classic devices to cutting-edge synthesis models. Plus, you’ll discover how STEM skills can be applied to various media. It’s a fun, hands-on journey into the heart of theatre!

    Booking link for morning session (09:30 – 12:30): https://calendly.com/r-selfridge-napier/theatre-sounds-workshop-morning

    Booking link for afternoon session (13:30 – 16:30): https://calendly.com/r-selfridge-napier/theatre-sounds-workshop-afternoon

    Any issues or questions, please contact Rod Selfridge (r.selfridge@napier.ac.uk).

  • Dr Iain McGregor: Advancing Interactive Media Design at Edinburgh Napier University

    Dr Iain McGregor serves as an Associate Professor at Edinburgh Napier University, where he specialises in interactive media design and auditory perception research. He earned a PhD in soundscape mapping, comparing sound designers’ expectations with listeners’ experiences to reveal perceptual differences and inform design approaches. With over 30 years of experience, he has worked in sound design across various media, including film, video games, mixed reality, and auditory displays. His research covers soundscapes, sonification, and human interaction with auditory systems.

    Contributions to Auditory Perception Research

    Dr McGregor has collaborated with researchers on a range of studies that explore sound design and auditory perception. Notably, he holds the patent *Evaluation of Auditory Capabilities* (WO2024041821A1), which presents a method for assessing auditory perception, with potential applications in accessibility, user experience design, and auditory technologies.

    Research in Sound and Human-Robot Interaction

    Dr McGregor’s research covers sound design, auditory perception, and human-robot interaction (HRI). He investigates how naming conventions shape perceptions of robotic personalities, improving trust and usability in assistive robotics. His research in sonification aids scientific analysis, while his work on auditory alerts improves their effectiveness in healthcare and transportation. He also explores how immersive audio enriches virtual and mixed reality and examines Foley artistry’s impact on character realism in animation. Collaborating with industry and academia, he applies these insights to mixed reality, film, video games, and robotics.

    Industry Experience

    At the start of his career, Dr McGregor worked with renowned artists and organisations, including the Bolshoi Opera, the City of Birmingham Symphony Orchestra under Sir Simon Rattle, Ravi Shankar, and Nina Simone. His work integrates auditory technologies with creative methodologies, driving innovation in sound research and education. In addition to his academic work, he is currently serving as a consultant for technology companies in the fields of mixed reality and robotics, helping to shape the development of innovative auditory interfaces.

    Academic Contributions and Mentorship

    Beyond his research, Dr McGregor mentors MSc and PhD students in sound design, auditory perception, and human-computer interaction. He encourages interdisciplinary collaboration among designers, engineers, and cognitive scientists. He contributes to curriculum development, aligning courses with advancements in sound and interactive media design. His work in interactive media design and auditory perception informs research and industry practices.

    Technological and Adaptive Advancements in Sound Design

    Advancements in reinforcement learning and edge computing are enabling real-time adaptation in sound design. These technologies allow auditory interfaces to intelligently filter and process sounds, reducing noise while enhancing clarity. Extended audiograms and dynamic digital signal processing (DDSP) further optimise clarity while minimising cognitive load. By integrating real-time adjustments based on user-specific hearing profiles, auditory systems can offer a consistent and accessible listening experience across different environments.

    Sound Design in Cultural and Museum Spaces

    In cultural and museum environments, sound design is also becoming more interactive and adaptive. Augmented reality audio systems offer dynamic storytelling and personalised navigation, responding to visitor movement and engagement levels. Audio cues can guide individuals with mobility constraints along optimised routes, while tailored auditory content enhances inclusivity and immersion.

    Sound Design for Digital and Interactive Environments

    Sound design is transforming interaction with digital environments, robotics, and everyday devices by enhancing immersion, accessibility, and engagement. Spatial audio accurately places sound in mixed reality, creating more natural user experiences, while in robotics, auditory cues foster trust and facilitate smoother interactions. Augmented reality audio supports dynamic storytelling and navigation, adapting to user movement and preferences. Additionally, personalised auditory content and accessibility-focused cues improve inclusivity in museums, public spaces, and virtual environments.

    Sound Design in Transportation and IoT

    To compensate for the near-silent operation of electric vehicles, the automotive industry is developing tailored audio cues that enhance safety and driver awareness. As the Internet of Things (IoT) expands, intuitive auditory interfaces are becoming crucial for seamless device navigation and control. Advancements in loudspeaker technology are also helping reduce noise pollution while improving communication in public spaces.

    The Future of Sound Design

    Research continues to advance adaptive and personalised sound experiences across multiple domains. Building on techniques such as extended audiograms and dynamic digital signal processing (DDSP), emerging sound technologies are exploring real-time adjustments tailored to user-specific hearing profiles, enhancing personalisation in auditory media while keeping cognitive load low. As sound design evolves, it will create more intuitive, efficient, and engaging experiences that seamlessly adapt to diverse user needs.

  • Help Us Improve Your TV and Streaming Experience

    If you’re someone who loves watching films or streaming TV shows, you probably know how important sound is to the experience. Whether it’s the intensity of a car chase or the subtle whispers of a tense scene, sound adds layers to what we see on screen. But what if the sound you hear doesn’t quite match what you’re looking for, or the dialogue is difficult to follow? That’s where PhD candidate Ahmed Shalabi’s research within the Interactive Media Design group here at Edinburgh Napier University comes in. He is working to make your listening experience more personalised and tailored to you.

    Ahmed Shalabi

    Why Sound in Films and TV Matters

    When we watch a film or show, we all have our own preferences for how it should sound. We also have our favourite playback hardware and listening devices, be they loudspeakers or earbuds.

    Some people love rich, booming bass; others prefer clear dialogue. But here’s the thing: the sound you hear while watching isn’t always customised to fit your needs. Most systems follow a “one-size-fits-all” approach. They don’t account for things like your hearing abilities, the acoustics of your room, your playback device, or even your taste in audio.

    That’s why Ahmed is developing an adaptive mixing framework that automatically adjusts the sound to your environment and hearing preferences. This could mean clearer dialogue, better-balanced background sounds, and a soundscape that makes watching TV or streaming films more enjoyable and engaging. Imagine being able to fine-tune the sound so that it feels just right for you without having to fiddle with the settings every time.

    Why You Should Participate

    Ahmed is reaching out to the public because each person’s input could help make this a reality. He is looking for participants to help test and develop this personalised audio technology. By participating, you’ll be contributing directly to the development of sound systems that adapt to you.

    But that’s not all. This research could make a big difference for people with hearing difficulties or unique preferences. For instance, it could help those who struggle to hear dialogue clearly or people who want a more immersive sound experience. Ultimately, the goal is to make entertainment more accessible and enjoyable for everyone.

    Even though there’s no monetary reward for participating, your feedback will be an important part of shaping how we experience sound in the future.

    Who Can Join?

    If you’re between 18 and 70 years old and have full or corrected eyesight, you’re eligible to participate. No special knowledge of sound or technology is required — all you need is a love for films and TV shows and a willingness to provide feedback. Whether you watch movies every weekend or just catch the latest episodes on streaming platforms, your experience is valuable.

    What Will You Be Doing?

    You’ll be visiting the auralisation suite at Edinburgh Napier University, a space designed to simulate different sound environments. During your visit, you’ll watch short films in a controlled setting while controlling each sound element. Then, you’ll give feedback on your experience. Your answers will help refine the adaptive mixing framework.

    This experiment will help us understand how different audio setups influence the way we watch and enjoy media. Your participation will help ensure that future sound systems can adapt to individual needs, making entertainment more enjoyable for everyone.

    How to Get Involved

    If you’re interested in being part of this exciting research, it’s easy to sign up. Just visit this link to schedule your visit to the auralisation suite (C72).

    https://calendly.com/gelby/30min

    Make a Difference in How We Experience Sound

    This is your chance to help improve the way we experience sound in TV and films. By taking part in this research, you’ll help create a more personalised, immersive, and accessible audio experience for everyone. Whether you’re a film lover or just want to help improve how we all listen to media, your feedback will play a huge role in the future of sound tech.

  • Meaningful Learning Experiences Survey

    Postgraduate student Suzi Cathro is conducting a survey to better understand what makes university life meaningful for a variety of students as part of her PhD study into using mixed reality to create meaningful learning experiences within higher education. The survey will take no more than 30 minutes, and all responses will remain anonymous.

    Who can participate?

    To participate in this study, you must:

    • Be over 18 years old
    • Be a current or former Edinburgh Napier student, or a staff member at Edinburgh Napier University who has attended any university as a student

    Why Participate?

    • Contribute to valuable research that could improve the student experience.
    • Share your personal insights on what makes university life impactful.
    • Participation is completely voluntary, and you can skip any questions you don’t wish to answer.

    If you’re interested in shaping this research, take a few moments to complete the survey. Your voice matters!

    https://forms.office.com/e/2z0CFsK68D