Reimagining Sound in Live Theatre II

Part 2: Engineering Actor-Controlled Sound Effects with IoS Devices

This post builds on insights from Part 1 of our series on interactive theatre sound design. If you haven’t read it yet, check out Part 1: Collaborative Sound Design in Theatre.


Rethinking Sound Control in Theatre

Traditional theatre sound design relies heavily on off-stage operators using software like QLab to trigger pre-recorded cues. While reliable, this model limits spontaneity and performer agency. This research investigates a shift: giving actors direct control over sound effects using networked Internet of Sound (IoS) devices embedded in props or costumes.


What Is the Internet of Sound?

The Internet of Sound (IoS) is a subdomain of the Internet of Things (IoT), focused on transmitting and manipulating sound-related data over wireless networks. It includes:

  • IoMusT (Internet of Musical Things): Smart instruments with embedded electronics.
  • IoAuT (Internet of Audio Things): Distributed audio systems for production, reception, and analysis.

This project leans toward the IoMusT domain, emphasizing performer interaction with sound-generating devices.


Technical Architecture

The workshop deployed nine IoS devices built on Arduino MKR WiFi 1010 microcontrollers, chosen for their built-in Wi-Fi and affordability. Each device communicated via Open Sound Control (OSC) over UDP, sending sensor data to Pure Data patches running on local laptops.
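As a rough sketch of what each device's network traffic looks like: the post doesn't publish the project's actual OSC address space, so the address, port, and values below are illustrative. The OSC 1.0 binary layout itself (null-padded address string, type-tag string, big-endian float32 arguments) is standard, and can be produced with nothing beyond the Python standard library:

```python
import socket
import struct

def osc_pad(data: bytes) -> bytes:
    """Null-terminate and pad to a multiple of 4 bytes, as OSC requires."""
    return data + b"\x00" * (4 - len(data) % 4)

def osc_message(address: str, *args: float) -> bytes:
    """Encode an OSC 1.0 message whose arguments are all float32."""
    packet = osc_pad(address.encode("ascii"))
    packet += osc_pad(("," + "f" * len(args)).encode("ascii"))
    for value in args:
        packet += struct.pack(">f", value)  # big-endian 32-bit float
    return packet

# Hypothetical address and port -- the real patches' OSC namespace isn't published.
packet = osc_message("/stormstick/accel", 0.1, -0.4, 9.8)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(packet, ("127.0.0.1", 9000))  # in deployment: the Pure Data laptop
sock.close()
```

On the Arduino itself the same encoding would typically be done by an OSC library in C++, but the packet on the wire is identical.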

Sensors Used:

  • Accelerometers – for dynamic control (e.g., storm intensity)
  • Force Sensitive Resistors (FSRs) – for pressure-based triggers
  • Circular and Rotary Potentiometers – for pitch and volume control
  • Photoresistors – for light-triggered samples
  • Buttons – for simple cue activation

Each performance space had its own router, enabling modular and fault-tolerant deployment.
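On the receiving side, the workshop's laptops decoded these packets in Pure Data, but the wire format is simple enough to parse in any language. A minimal sketch of a float-only decoder plus a routing table from OSC address to the parameter it drives; the addresses and handler are hypothetical, not taken from the project:

```python
import struct

def parse_osc_floats(packet: bytes) -> tuple[str, list[float]]:
    """Decode an OSC message whose arguments are all float32 ('f' tags)."""
    def read_padded_string(pos: int) -> tuple[str, int]:
        end = packet.index(b"\x00", pos)
        return packet[pos:end].decode("ascii"), end + 4 - end % 4
    address, pos = read_padded_string(0)
    tags, pos = read_padded_string(pos)
    count = tags.count("f")
    values = list(struct.unpack(">" + "f" * count, packet[pos:pos + 4 * count]))
    return address, values

# Hypothetical routing table: each address drives one synthesis parameter.
def set_rain_gain(values):  # stand-in for a send into the synthesis patch
    print("rain gain <-", values)

handlers = {"/stormstick/accel": set_rain_gain}

address, values = parse_osc_floats(
    b"/stormstick/accel\x00\x00\x00,fff\x00\x00\x00\x00"
    + struct.pack(">fff", 0.1, -0.4, 9.8)
)
handlers[address](values)
```

One router per space keeps each such decoder's traffic local, so a fault in one space's network cannot silence the other.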

Hardware setup in the main theatre space.

Hardware setup in the rehearsal space.

Interaction Design

Participants interacted with both pre-recorded samples and procedural audio models:

Pre-recorded Samples:

  • Triggered via buttons, light sensors, or rotary knobs
  • Used for audience reactions, chorus sounds, and character cues

Procedural Audio Models:

  • Spark – Triggered by button (gain envelope)
  • Squeaky Duck – Controlled by FSR (pitch modulation)
  • Theremin – Controlled by circular potentiometer (oscillator frequency)
  • Stormstick – Controlled by accelerometer (rain and thunder intensity)

These models allowed for expressive, real-time manipulation of sound, enhancing immersion and authenticity.
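The post doesn't detail the Stormstick's exact mapping, so the following is a plausible sketch of one: derive a 0–1 "shake" intensity from accelerometer magnitude after subtracting gravity, drive rain gain linearly, and drive thunder trigger rate with a squared curve so thunder stays rare until the stick is shaken hard. The 20 m/s² full-scale figure is an assumption.

```python
import math

def stormstick_map(ax: float, ay: float, az: float) -> tuple[float, float]:
    """Map raw accelerometer axes (m/s^2) to (rain_gain, thunder_rate_hz).

    Assumptions: gravity (~9.81 m/s^2) is subtracted from the magnitude, and
    a shake of 20 m/s^2 above gravity counts as full intensity.
    """
    magnitude = math.sqrt(ax * ax + ay * ay + az * az)
    shake = min(max(magnitude - 9.81, 0.0) / 20.0, 1.0)  # 0..1 intensity
    rain_gain = shake                     # rain volume tracks shaking linearly
    thunder_rate_hz = 0.5 * shake ** 2    # thunder stays rare until shaken hard
    return rain_gain, thunder_rate_hz
```

In the real system a mapping like this would live in the Pure Data patch, fed by the OSC stream from the device.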

Circular potentiometer used to control a Theremin-type sound effect.

An accelerometer within the ‘Stormstick’ controls the gain of a rain synthesis model and the trigger rate of a thunder one.

Participant Feedback & Findings

Benefits:

  • Enhanced Timing – Actor-triggered cues improved synchronisation
  • Creative Freedom – Enabled improvisation and dynamic adaptation
  • Authenticity – Increased believability and audience engagement
  • Actor Agency – Encouraged deeper integration into the production process

Challenges:

  • Reliability – Wi-Fi dropouts and device failures were noted
  • Cognitive Load – Actors expressed concern over added responsibilities
  • Integration – Costume and prop design must accommodate sensors
  • Audience Distraction – Poorly integrated devices could break immersion

Engineering Considerations

To ensure successful deployment in live theatre:

  • Robust Wi-Fi – Site-specific testing and fallback systems (e.g., QLab) are essential
  • Thermal Management – Embedded devices must remain cool and accessible
  • Modular Design – Quick-release enclosures and reusable components improve sustainability
  • Cross-Department Collaboration – Early involvement of costume, prop, and production teams is critical

Sound Design Strategy

Sound designers must consider:

  • Spot vs. Atmosphere – One-off effects may suit samples; dynamic ambiences benefit from procedural audio
  • Sensor Mapping – Choose intuitive controls (e.g., FSR for pressure-based sounds)
  • Actor Suitability – Confident performers are better candidates for device control
  • Rehearsal Integration – Early adoption helps reduce cognitive load and improve fluency
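In practice, much of sensor mapping reduces to one rescaling helper: clamp the raw reading, normalise it, and optionally warp it with a curve so the musically useful part of the range gets the finest control. A minimal sketch; the 10-bit FSR range and pitch values are illustrative, not from the project:

```python
def scale(value: float, in_lo: float, in_hi: float,
          out_lo: float, out_hi: float, curve: float = 1.0) -> float:
    """Clamp a sensor reading to [in_lo, in_hi] and rescale to an output range.

    curve > 1 gives finer control at the low end (often nicer for pitch);
    curve == 1 is plain linear mapping.
    """
    t = (value - in_lo) / (in_hi - in_lo)
    t = min(max(t, 0.0), 1.0) ** curve
    return out_lo + t * (out_hi - out_lo)

# Illustrative: a 10-bit FSR reading driving a squeaky-duck pitch in Hz.
pitch_hz = scale(768, 0, 1023, 300.0, 1200.0, curve=2.0)
```

Getting these ranges right is exactly what early rehearsal integration is for: the curve that feels intuitive to a performer rarely matches the designer's first guess.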

Future Directions

The next phase involves deploying IoS devices in a live pantomime performance in December 2025. Beyond this, distributed performances across locations (e.g., London and New York) could leverage IoS for synchronised, remote interaction.

Exploration of alternative microcontrollers (e.g., Teensy) and operating systems (e.g., Elk Audio OS) may improve scalability and reliability.


Conclusion

Actor-controlled IoS devices represent a promising evolution in theatre sound design—merging technical innovation with artistic expression. While challenges remain, the potential for more immersive, responsive, and collaborative performances is clear.
