Category: Online guest lectures

  • The Fast and the Sonorous: Vehicle Sound Design Insights from Codemasters’ Jethro Dunn

    Jethro Dunn, Senior Audio Designer at Codemasters, has contributed to a range of projects, from tactical military shooters to arcade racing games. During his lecture, he shared how vehicle sound effects are shaped by technical constraints, creative objectives, and genre-specific requirements—whether simulating the weight of an armoured convoy or signalling damage in a playful kart racer.

    Drawing on titles such as Operation Flashpoint: Red River and F1 Race Stars, Dunn focused on practical techniques for crafting immersive vehicle soundscapes, managing acoustics, and enhancing player feedback.

    Jethro Dunn

    Streamlining Vehicle Audio in Tactical Shooters

    In Operation Flashpoint: Dragon Rising and Red River, vehicles like jeeps and APCs required sound design that balanced realism with hardware limitations. Early designs utilised layered loops for engines, transmissions, and mechanical effects, but this approach led to unnecessary system overhead.

    “We were wasting more memory managing complex sound events than on the actual audio data, so we had to rethink how we structured vehicle sounds.” — Jethro Dunn

    The team restructured vehicle audio into smaller, independent elements. Engine and exhaust sounds were separated to enhance spatial realism, and mechanical “sweeteners” were introduced at low acceleration to add life and responsiveness during slower movements.
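
    As a rough sketch of that idea (and not a reconstruction of Codemasters' actual tooling), the snippet below blends a mechanical sweetener in only at low acceleration, with engine and exhaust kept as separate layers; every name, curve and threshold here is invented for illustration.

    ```python
    def layer_gains(normalised_accel: float) -> dict:
        """Return per-layer gains (0.0-1.0) for a normalised acceleration value in [0, 1]."""
        accel = max(0.0, min(1.0, normalised_accel))
        return {
            "engine": 0.4 + 0.6 * accel,                 # engine layer dominates as acceleration rises
            "exhaust": 0.3 + 0.7 * accel,                # exhaust kept separate for spatial placement
            "sweetener": max(0.0, 1.0 - 2.0 * accel),    # rattles and whines fade out above ~50% accel
        }

    if __name__ == "__main__":
        for a in (0.0, 0.25, 0.5, 1.0):
            print(a, layer_gains(a))
    ```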

    Shaping Player Perspective: Interior and Exterior Vehicle Sound

    When players moved inside a vehicle, soundscapes shifted to reflect enclosed acoustics. Manual adjustments ensured consistent transitions between interior and exterior perspectives, with positional tweaks placing engine noise appropriately whether driving, seated as a passenger, or operating a turret.

    Conveying Distance: Designing Distant and Ultra-Distant Vehicle Sounds

    Vehicle sounds were deliberately simplified at distance, becoming ambient rumbles to reflect real-world acoustic behaviour. For ultra-distant scenarios, low-frequency layers simulated convoys heard kilometres away, enhancing environmental awareness without cluttering the soundscape.
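
    A hypothetical sketch of how such distance tiers might be selected is shown below; the band edges and asset names are made up for the example and are not taken from the lecture.

    ```python
    def pick_distance_layers(distance_m: float) -> list[str]:
        """Choose which vehicle layers to play for a listener at the given distance."""
        if distance_m < 150:
            return ["engine_close", "exhaust_close", "mechanical_detail"]
        if distance_m < 1500:
            return ["engine_distant_rumble"]      # simplified ambient layer
        return ["convoy_lowfreq_bed"]             # ultra-distant low-frequency layer only

    print(pick_distance_layers(80))
    print(pick_distance_layers(600))
    print(pick_distance_layers(3000))
    ```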

    Practical Choices: Avoiding Granular Synthesis

    Dunn noted that granular synthesis, commonly used in racing games for dynamic engine sounds, was intentionally avoided for military vehicles.

    “We didn’t use granular synthesis for these vehicles because we didn’t have the recordings, and we didn’t need that level of complexity.”

    Adding Mechanical Detail: Transmission Whine and Brake Squeals

    To enhance realism, layers such as transmission whine and brake squeals were incorporated, helping players interpret vehicle behaviour and reinforcing the mechanical character of military vehicles.

    Communicating Through Sound: Feedback in Arcade Racing

    In F1 Race Stars, sound effects prioritised clear communication over realism.

    “In arcade racing, players need to hear when something’s wrong before they even look at the screen.”

    Exaggerated mechanical noises signalled damage, while distinct cues marked repairs or performance drops—providing immediate, intuitive feedback in a fast-paced environment.

    Recording Challenges and Creative Solutions

    Capturing vehicle audio involved logistical challenges, from limited access to military hardware to managing motorsport recordings.

    “You can’t ask a military driver to do ten perfect laps for recording—you get what you get.”

    For smaller projects, Dunn recorded toy cars in controlled environments—demonstrating adaptability across varying project scopes.

    Reflections on Vehicle Sound Design

    Jethro Dunn’s lecture demonstrated how vehicle sound effects are shaped by technical awareness, efficient workflows, and responsiveness to gameplay needs. From spatial realism through engine and exhaust separation to mechanical sweeteners and clear gameplay cues, his approach highlights the practical decisions that define vehicle sound design across both realistic and stylised game environments.

  • Playing Along: When Music Is Part of the Game World

    “We talk about music that originates from within the diegesis — and not from some non-diegetic player outside of it.”
    — Axel Berndt

    In a guest lecture on game audio, Dr.-Ing. Axel Berndt examined the role of diegetic music — music that exists within a game’s fictional world and can be heard, performed, or even disrupted by its characters. This kind of music, Berndt argued, is not background or emotional subtext. It is part of the world itself.

    Berndt is a member of the Center of Music and Film Informatics at the Detmold University of Music, working at the intersection of sound design, musical interaction, and adaptive systems. His lecture brought together commercial examples, music-theoretic distinctions, and design considerations to illustrate how music behaves differently when it belongs to the world rather than framing it from outside.

    Dr.-Ing. Axel Berndt

    Inside the World: What Makes Music Diegetic

    Diegetic music refers to music that originates within the game’s diegesis — its fictional environment. Berndt described it as everything “within this world”: sounds that characters can hear and react to, including wind, speech, and music performed or played through in-world devices.

    “If someone switches the radio on, triggers the music box, sings a song, or plays an instrument… their music is also diegetic.”

    Examples included a street musician in The Patrician, a pipe player at a party, and the bard at the start of Conquests of the Longbow. In Doom 3, a gaming machine plays music within the scene; in Oceanarium, a robot performs in a clearly defined virtual space. These are not aesthetic flourishes — they anchor music in the logic of the world.

    Berndt contrasted this with non-diegetic music, which accompanies a scene without being part of it — such as a film score swelling during a battle. “There is no orchestra sitting on an asteroid during the space battle,” he remarked, highlighting the artificiality of non-diegetic scoring in game environments that otherwise strive for realism.

    Sound That Can Be Interrupted

    Once music is part of the world, it becomes subject to physical space, interruption, and interaction.

    “The simplest type of interaction may be to switch a radio on and off, but there is much more possible.”

    Berndt categorised musical interactions as either destructive — disrupting a performance — or constructive, where player input enriches or alters the musical output. In Monkey Island 3, players must stop their crew from singing an extended shanty by choosing responses that are woven into the rhyme scheme. Each interruption is musical and interactive.

    “The sequential order of verses and interludes is arranged according to the multiple choice decisions the player makes.”

    Such scenes turn performance into a mechanic. Music is not a layer applied to gameplay — it is the gameplay.

    When Music Isn’t Polished — And Why That Matters

    Berndt emphasised that diegetic music should not always sound flawless. Live performance in reality includes irregularities: tuning fluctuations, missed notes, imperfect timing. Simulating this can enhance believability.

    “Fluctuations of intonation, rhythmic asynchrony, wrong notes — these things simply happen in life situations. Including them brings a gain of authenticity.”

    He cited the harmonica player in Gabriel Knight, whose wavering tone subtly reinforces the impression of a street musician with limited technical control. Imperfection isn’t failure — it is context-aware design.

    Berndt also warned against repetitive loops that expose the limits of a system. When the player leaves and re-enters a scene, and the same music starts again from the beginning, the world appears frozen. “We reached the end of the world,” he said. “There is nothing more to come.”

    To counter this, he advocated techniques such as generative variation, asynchronous playback, and music that continues even when not audible — preserving the impression of an autonomous, living environment.
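
    One simple way to realise the "music keeps playing while you are away" idea, assuming a looping piece of known length, is to track when the performance notionally began and resume at the corresponding offset rather than from the top. The sketch below is generic and not tied to any particular engine.

    ```python
    import time

    class WorldMusicSource:
        """A looping in-world performance that notionally never stops."""

        def __init__(self, loop_length_s: float):
            self.loop_length_s = loop_length_s
            self.start_time = time.monotonic()    # the performance began when the world did

        def current_offset(self) -> float:
            """Seconds into the loop at which playback should resume right now."""
            elapsed = time.monotonic() - self.start_time
            return elapsed % self.loop_length_s

    street_band = WorldMusicSource(loop_length_s=94.0)
    # ... the player wanders off and later re-enters the square ...
    print(f"Resume playback at {street_band.current_offset():.1f} s into the piece")
    ```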

    Games Where Music Is the Environment

    Berndt’s second category of diegetic music is visualised music — where players engage not just with music in the scene, but with music as the environment itself. This includes rhythm games like Guitar Hero, Dance Dance Revolution, and Crypt of the Necrodancer, where music structures time, space, and action.

    “What we actually interact with is music itself. The visuals are just a transformation — an interface that eases our visually coined interaction techniques.”

    In Audiosurf, players import their own tracks and race through colour-coded lanes shaped by the waveform. In Rez, players shoot targets that trigger rhythmic events. These games represent a shift from music as accompaniment to music as system.

    “The diegesis is the domain of musical possibilities. The visual layer follows the routines of the music.”

    Berndt emphasised that this kind of interaction demands careful timing, expressive range, and sometimes even simplification to make musical gameplay accessible.

    From Instruments to Systems

    Not all music-based interaction takes the form of traditional games. Electroplankton allowed Nintendo DS users to create sound patterns through direct manipulation — drawing curves, arranging nodes, or triggering plankton-like agents.

    “Interestingly, all these concepts don’t really need introduction. Give it to the players, let them try it out, and they will soon find out by themselves how it works.”

    Berndt distinguished between note-level interaction (e.g. triggering individual sounds, as in Donkey Konga) and structural interaction, where players influence arrangement, progression, or generative systems. Both approaches are valid, but they ask different things of the player — and of the designer.

    Designing with Music in Mind

    Berndt’s lecture underscored a recurring principle: if music is situated in the world, it should behave accordingly. It must continue when out of frame, shift based on player presence, and reflect changes in the environment. When music is visualised or systematised, it should offer feedback and form, not simply decoration.

    “Music as part of the world has to be interactive, too.”

    This is not a stylistic preference — it is a design commitment. When music is embedded in the rules of the world, it becomes not only more believable, but more meaningful. It can reflect character, reinforce consequence, and establish rhythm within both narrative and mechanics.

    Berndt’s examples — from Monkey Island to Rez, from ambient performance to interactive music toys — show how music can operate on multiple levels at once: as texture, mechanic, and presence. His lecture made clear that diegetic music in games is not a solved problem or a historical curiosity. It remains a rich site for experimentation and design.

  • Understanding Binaural Hearing: Insights from Professor Jens Blauert’s Guest Lecture

    Binaural hearing is fundamental to how we perceive sound in space, influencing everything from daily interactions to the way we experience music, film, and interactive media. In a compelling online guest lecture, Professor Jens Blauert, a leading researcher in psychoacoustics and spatial hearing, provided an in-depth exploration of the principles behind binaural perception. His extensive research has shaped the fields of spatial audio, binaural recording, and 3D sound reproduction. Best known for his influential book Spatial Hearing: The Psychophysics of Human Sound Localization, his insights are particularly valuable for sound designers working in film, virtual reality, game audio, and immersive media.

    Professor Jens Blauert

    The Relationship Between Physics and Perception

    One of the key distinctions Professor Blauert made in his lecture was the difference between the physical properties of sound and auditory perception. Sound, as a physical event, consists of mechanical waves traveling through a medium, whereas auditory perception arises when the brain processes these waves, constructing an auditory event. This distinction is essential for sound designers because reproducing the physical properties of a sound does not guarantee that it will be perceived as intended. The auditory system is not a passive receiver but an active interpreter, reconstructing sound based on cues such as timing, intensity, and spectral content.

    How Humans Localise Sound

    A major focus of the lecture was the way humans determine the position of a sound source. Interaural time differences occur when a sound reaches one ear before the other. The brain interprets this difference as an indication of direction, which is particularly useful for localising low-frequency sounds below 1.5 kHz. At higher frequencies, interaural level differences become more significant, as the head acts as a barrier, creating differences in loudness between the ears. Another critical factor in sound localisation is spectral filtering by the outer ear. The pinnae modify the frequency spectrum of incoming sounds depending on the direction from which they arrive, helping the brain determine elevation and distinguish between front and back sound sources.
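
    To make the time-difference cue concrete, the short calculation below uses the textbook spherical-head (Woodworth) approximation for interaural time difference; the head radius and the formula are standard reference values rather than figures quoted in the lecture.

    ```python
    import math

    HEAD_RADIUS_M = 0.0875     # approximate average human head radius
    SPEED_OF_SOUND = 343.0     # m/s in air at roughly 20 degrees C

    def itd_seconds(azimuth_deg: float) -> float:
        """ITD for a distant source at the given azimuth (0 = straight ahead, 90 = directly to one side)."""
        theta = math.radians(azimuth_deg)
        return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (theta + math.sin(theta))

    for az in (0, 30, 60, 90):
        print(f"{az:>3} degrees -> {itd_seconds(az) * 1e6:.0f} microseconds")
    # At 90 degrees this gives roughly 0.65 ms, the familiar upper bound for human ITD.
    ```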

    For sound designers, understanding these cues is essential when working with spatial audio and binaural rendering. In virtual reality and gaming, the careful manipulation of interaural time differences and interaural level differences ensures that sound sources are perceived as truly occupying a three-dimensional space.

    The Role of Other Sensory Inputs

    Spatial hearing is not an isolated process but is influenced by other sensory inputs, particularly vision and proprioception. Professor Blauert discussed the ventriloquism effect, where conflicting auditory and visual information results in the brain prioritising vision. This is why, in a film, dialogue appears to come from the mouth of an on-screen character, even if the sound is emitted from off-screen speakers.

    Head movements also play an essential role in localisation, as the brain refines auditory perception based on changes in sound cues over time. In virtual reality, integrating real-time head tracking with binaural audio processing enhances immersion, ensuring that spatial cues remain accurate as the listener moves.

    Reverberation, Reflections, and Spatial Awareness

    Reverberation and sound reflections also shape spatial perception. In natural environments, sounds bounce off surfaces before reaching the ears, adding information about distance and space. Early reflections, which arrive within the first few tens of milliseconds after the direct sound, provide cues about room size and material properties. Late reverberation contributes to the sense of spaciousness and immersion.
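
    A small worked example helps show why those early reflections encode room size: the delay between the direct sound and a reflection is simply the extra path length divided by the speed of sound. The geometry below is invented for illustration.

    ```python
    SPEED_OF_SOUND = 343.0  # m/s in air

    def reflection_delay_ms(direct_path_m: float, reflected_path_m: float) -> float:
        """Delay of a single reflection relative to the direct sound, in milliseconds."""
        return (reflected_path_m - direct_path_m) / SPEED_OF_SOUND * 1000.0

    # Listener 3 m from a source; the nearest wall adds about 2.4 m to the path.
    print(f"{reflection_delay_ms(3.0, 5.4):.1f} ms")    # ~7 ms: reads as a small room
    # In a large hall the first strong reflection might travel 20 m further.
    print(f"{reflection_delay_ms(3.0, 23.0):.1f} ms")   # ~58 ms: reads as a much bigger space
    ```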

    For sound designers, controlling reflections is crucial for shaping an environment’s acoustics. Artificial reverberation can make a space feel larger, more intimate, or more diffuse, but excessive reverberation can blur spatial cues, reducing intelligibility.

    The Cocktail Party Effect and Binaural Signal Detection

    The lecture also explored how the auditory system processes multiple overlapping sound sources. One of the most fascinating aspects of binaural hearing is the ability to focus on a particular sound source while filtering out others, a phenomenon known as the cocktail party effect. When multiple sounds arrive at the ears, the brain can separate them based on spatial location and timbre.

    People with hearing impairments, especially those with asymmetrical hearing loss, struggle in noisy environments because they lose this spatial filtering ability. For sound designers, this principle is fundamental to mixing dialogue, music, and effects. Ensuring that critical sound elements remain perceptually distinct is essential for clarity and intelligibility.

    Professor Blauert also explained that binaural perception is not only responsible for spatial hearing but also plays a role in reverberation suppression and timbre correction. When listening with both ears, the auditory system can reduce the perceived reverberation of a space, making sounds clearer. It can also compensate for frequency distortions caused by reflections. A simple experiment demonstrates this effect: if a listener closes one ear while in a reverberant environment, the space sounds more echoic, and the timbre of sounds changes. When both ears are used, the brain naturally suppresses excess reverberation and restores a more natural balance.

    For sound designers, this means that spatial mixing must account for how the brain processes sound, ensuring that artificially introduced reverberation does not interfere with localisation or speech intelligibility.

    Applications for Sound Design and Spatial Audio

    The principles covered in this lecture have direct applications in binaural audio, 3D sound design, and immersive media. Headphone-based binaural recordings create highly realistic spatial experiences, making them ideal for virtual reality, augmented reality, and gaming. In film and theatre, spatial mixing techniques enhance realism and guide audience attention. In architectural acoustics, an understanding of how reflections shape perception is crucial for optimising venues for speech clarity and music performance.

    The research presented by Professor Blauert also informs the development of hearing aids and assistive listening technologies, improving speech intelligibility for individuals with hearing impairments.

    Final Thoughts

    Professor Blauert’s lecture reinforced the importance of understanding how humans perceive sound rather than focusing solely on its physical properties. For sound designers, the key takeaway is that perception determines how spatial audio is experienced. A strong grasp of binaural hearing principles enables the creation of immersive, natural, and convincing soundscapes, ensuring that audio enhances storytelling, gameplay, and user experience.

    As the demand for interactive and immersive media grows, these concepts remain essential tools for crafting engaging auditory environments.

  • Understanding Aural Architecture: A Guest Lecture with Dr Barry Blesser and Dr Linda-Ruth Salter

    The experience of space is often thought of as a visual phenomenon, but our understanding of where we are is deeply tied to sound. In a thought-provoking guest lecture, Drs Barry Blesser and Linda-Ruth Salter explored the concept of aural architecture, discussing how sound shapes our perception of space and influences human interaction. Their insights challenge conventional thinking about hearing and space, bridging disciplines from acoustics and cognitive science to architecture, social anthropology, and Sound Design.

    Dr Barry Blesser

    About the Speakers

    Dr Barry Blesser is a pioneering researcher in audio technology and spatial acoustics, best known for his contributions to digital reverberation and sound processing. As one of the key figures in early digital audio, he played a central role in the development of the first commercial digital reverb unit in the 1970s. His expertise spans psychoacoustics, signal processing, and the experiential aspects of sound perception. His book Spaces Speak, Are You Listening? (co-authored with Dr Linda-Ruth Salter) explores the relationship between sound and space, shaping discussions on aural architecture.

    Dr Linda-Ruth Salter is an interdisciplinary scholar whose work explores the intersection of space, culture, and human perception. With a background in philosophy, social science, and design, she has contributed to research on how architecture and auditory experiences influence human cognition. Her collaboration with Dr Blesser in Spaces Speak, Are You Listening? examines how sound and built environments shape social interactions and emotional responses.

    The Concept of Aural Architecture

    Aural architecture refers to the way sound interacts with a space and how we, as listeners, interpret and experience that interaction. Drs Blesser and Salter highlighted a crucial distinction: hearing space is not the same as hearing sound. While we might assume that knowing where we are is intuitive, the lecture invited us to consider a deeper question: how do we truly know where we are?

    Using historical and experimental examples, the speakers demonstrated that sensory input—especially sound—plays a vital role in spatial awareness. One striking example involved sensory deprivation experiments from the 1950s, where participants placed in silent, isolated environments began to hallucinate within minutes. This underscores how critical sound is for maintaining a coherent sense of place.

    For Sound Designers, this concept is fundamental when creating immersive experiences in film, games, and virtual reality (VR). In horror sound design, for instance, silence can be just as powerful as sound. By gradually removing background noise and narrowing the listener’s sense of space, Sound Designers can create an unsettling effect that plays with the brain’s need for spatial awareness.

    The Role of Sound in Spatial Perception

    Different senses contribute in unique ways to our understanding of space, but hearing is particularly powerful. Unlike vision, which depends on illumination and line of sight, sound travels around obstacles, fills enclosed areas, and provides constant feedback about an environment. This ability to hear space allows us to determine room size, surface materials, and even the presence of unseen objects.

    Drs Blesser and Salter illustrated this with a compelling thought experiment: if you were placed in a completely dark room but could still hear, you would likely be able to infer the shape and size of the space just by listening to how sound behaves. This principle is at the core of aural architecture, influencing everything from concert hall design to everyday experiences in urban and domestic settings.

    In Sound Design, this understanding is crucial when designing game audio environments. Many modern game engines use real-time spatialisation techniques such as occlusion filtering, where sounds are dynamically muffled or altered when obstructed by walls or objects. This not only makes the soundscape more realistic but also enhances gameplay by providing the player with important auditory cues.
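
    As a toy illustration of occlusion filtering (not the implementation of any particular engine), the sketch below attenuates and low-passes a source according to how strongly geometry blocks the line of sight; the cutoff mapping and the one-pole filter are deliberately simple.

    ```python
    import math
    import random

    def occlusion_lowpass(samples, sample_rate, occlusion):
        """occlusion in [0, 1]: 0 = clear line of sight, 1 = fully behind a thick wall."""
        cutoff_hz = 20000.0 * (1.0 - occlusion) + 500.0 * occlusion   # fully occluded -> ~500 Hz
        alpha = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)
        gain = 1.0 - 0.6 * occlusion                                  # overall level drop
        out, y = [], 0.0
        for x in samples:
            y += alpha * (x - y)     # one-pole low-pass
            out.append(gain * y)
        return out

    # Example: one second of "noise" heard through a wall.
    dry = [random.uniform(-1.0, 1.0) for _ in range(48000)]
    muffled = occlusion_lowpass(dry, 48000, occlusion=0.8)
    ```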

    Another example is reverberation in post-production for film and television. When mixing dialogue recorded on a sound stage, Sound Designers often add convolution reverb to match the acoustics of the scene’s visual setting. Without this adjustment, the dialogue may feel disconnected from the environment, breaking immersion.
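
    A minimal convolution-reverb sketch along those lines is shown below, assuming a dry dialogue take and an impulse response matching the scene's space are already loaded as mono float arrays; the wet/dry ratio is a placeholder.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def add_room(dry: np.ndarray, impulse_response: np.ndarray, wet: float = 0.25) -> np.ndarray:
        """Return the dry dialogue with a proportion of convolved 'room' blended underneath."""
        reverb = fftconvolve(dry, impulse_response)[: len(dry)]
        reverb /= max(np.max(np.abs(reverb)), 1e-9)       # normalise the wet signal
        return (1.0 - wet) * dry + wet * reverb

    # Usage (assuming mono arrays loaded elsewhere, e.g. with soundfile.read):
    # treated = add_room(adr_line, location_ir, wet=0.2)
    ```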

    The Impact of Culture and Cognition

    The lecture also explored cultural and cognitive aspects of auditory perception. Different cultures interpret sound in diverse ways, and our brains continuously rewire themselves based on how we use our auditory system. For example, musicians who have trained their ears for years can detect subtle variations in acoustics that others might not even notice. Similarly, some blind individuals develop an advanced ability to hear space through echolocation, using sound reflections to navigate their surroundings.

    The speakers pointed out that aural architecture is as much a cultural phenomenon as it is a scientific one. In some societies, specific sounds become deeply symbolic. The resonance of a cathedral, for instance, has historically been associated with religious experience, while the chime of a village bell once defined local identity in 19th-century France.

    For Sound Designers working in interactive media or theatre, understanding cultural soundscapes can enhance authenticity and immersion. When designing audio for a historical drama, for instance, awareness of period-accurate materials, such as wooden floors, stone walls, or open landscapes, allows designers to recreate convincing acoustic reflections.

    The Changing Nature of Soundscapes

    With advancements in technology, our relationship with sound and space is evolving. Modern electronic devices create virtual auditory environments that can transport our minds elsewhere, detaching us from our physical surroundings. The ubiquity of headphones, for example, allows individuals to curate personal soundscapes, but it also leads to functional deafness—a state where people can no longer hear the sounds that define their immediate environment.

    For Sound Designers, this has significant implications in VR, AR, and immersive media. One example is the use of dynamic object-based audio, such as Dolby Atmos or Ambisonics, which allows sounds to be placed in 3D space and adapt to listener movement. This ensures that spatial relationships between sound sources remain consistent, even as the user moves through a virtual or augmented environment.

    Another example is binaural audio mixing, often used in ASMR, virtual museum guides, and 3D audio storytelling. By recording with a dummy head microphone, Sound Designers can capture the way sound naturally interacts with human ears, providing a hyper-realistic listening experience that can transport users into another environment.

    The Responsibility of Aural Architects

    Drs Blesser and Salter concluded with a call for greater awareness in design, urging architects, engineers, and urban planners to consider aural architecture in their work. They introduced the concept of aural empathy—the ability to design with an awareness of how sound affects human experience.

    A key takeaway from the lecture was that sound is not just a by-product of space; it is an integral part of how we experience it. Thoughtfully designed spaces take into account how soundscapes influence mood, communication, and social interaction.

    For Sound Designers, this means thinking beyond just what a sound effect should be and instead considering how it should be experienced within a space. Sonic accessibility is another important aspect—for instance, ensuring that spatialised audio cues in video games or public environments assist users with different hearing abilities.

    Final Thoughts

    This lecture provided a fascinating lens through which to examine space, demonstrating that aural architecture is not merely a technical concern but a fundamental aspect of human perception. By incorporating auditory awareness into design, we can create richer, more engaging environments that truly reflect how people experience the world.

    For those working in Sound Design, these ideas reinforce the importance of treating space as an active element in an auditory experience. Whether designing immersive film soundtracks, crafting realistic game environments, or developing innovative AR applications, an understanding of aural architecture can elevate the quality of sound experiences.

    The next time you step into a space, take a moment to listen to it. What can the sound tell you about where you are? The answer may be more complex than you think.

  • Dubbed to Perfection: Graham Hartstone’s Guide to Enhancing Storytelling Through Sound

    Graham Hartstone, a highly respected dubbing mixer and former head of post-production at Pinewood Studios, shared his expertise in an online guest lecture. Drawing on his extensive career in film sound, which spans decades and includes work on major productions, he offered a wealth of insights into the art and technical precision of rerecording sound for film.

    Graham Hartstone

    The Evolution of Sound and Its Role in Storytelling

    Hartstone’s career began in 1961 as a cable operator, progressing through various roles in sound before ultimately leading the dubbing team at Pinewood. His experience includes work on iconic productions such as the James Bond films, along with collaborations with directors like Stanley Kubrick and Ridley Scott. He reflected on the shift from analogue mixing techniques to the expansive digital tools available today, discussing how technological advancements have changed the sound mixing process.

    Throughout his career, Hartstone emphasised that sound must serve the narrative, with careful attention to dialogue clarity, atmospheric cohesion, and the interplay between sound effects and music. He discussed the importance of premixing, highlighting how dialogue, effects, and Foley must be balanced to create a seamless final mix. Foley, he stressed, should blend naturally rather than draw attention to itself. Using Aliens as an example, he described how even background movements were carefully crafted to maintain immersion without overwhelming the primary action.

    Collaborations, Challenges, and International Versions

    Hartstone shared experiences working with directors who had strong opinions on sound, such as James Cameron and Stanley Kubrick. Kubrick was known for personally directing foreign language dubs to maintain creative control, often insisting that his own team handle translations to ensure consistency across different languages. Hartstone recalled how Kubrick’s meticulous nature extended to every aspect of post-production, with dialogue edits often requiring multiple iterations to match the director’s high standards. Kubrick even insisted on making foreign dubs sound as close to the original English version as possible, ensuring that voice tone and performance retained the same impact.

    James Cameron was similarly demanding, particularly about technical precision in sound. Hartstone shared an example from Aliens, where Cameron required the sound of motion trackers to be carefully crafted to enhance suspense. He recalled how Cameron would repeatedly review sound effects, adjusting subtle details to make sure they perfectly complemented the tension of each scene. This attention to detail extended to mixing explosions and gunfire, where Cameron wanted the audience to feel every impact without overwhelming the dialogue.

    The challenges of working on large-scale productions also included meeting tight deadlines and working with evolving edits. Hartstone noted that in films like Blade Runner, changes were often made up to the last minute. He shared how the iconic ambient soundscape of Los Angeles in Blade Runner was built from unused Alien sound elements, giving the city a layered, futuristic atmosphere. He also recounted how Ridley Scott requested late-stage changes to music and sound effects after test screenings, requiring the mixing team to make quick adjustments to balance the soundtrack effectively.

    For international versions, Hartstone explained that dialogue premixes had to be prepared well in advance of final mixes to allow time for translation and dubbing. On GoldenEye, special care was taken to ensure the foreign dubs matched the English version’s intensity, particularly during action sequences. His team provided detailed mixing notes, ensuring that foreign versions retained the same dynamic range and impact. He also explained the additional complexities of preparing mixes for different distribution formats, including airline and television edits, which required removing or replacing strong language while maintaining natural speech flow.

    Practical Techniques for Mixing

    Hartstone provided a wealth of practical advice for sound mixers, focusing on achieving clarity, balance, and impact.

    Dialogue Mixing and Clarity

    He advised using high-pass and low-pass filters to enhance dialogue clarity, suggesting a high-pass filter at around 80Hz to eliminate unwanted low-end rumble and a low-pass filter at around 9kHz to reduce sibilance. He explained that dialogue should be prioritised in the mix, ensuring that off-screen lines remain intelligible by adjusting levels and adding subtle reverb to match distance perception.
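
    A hedged sketch of those settings using standard Butterworth filters is shown below; the filter orders are illustrative starting points rather than fixed rules.

    ```python
    import numpy as np
    from scipy.signal import butter, sosfilt

    def clean_dialogue(audio: np.ndarray, sample_rate: int = 48000) -> np.ndarray:
        """High-pass at ~80 Hz and low-pass at ~9 kHz, as a starting point for dialogue."""
        hp = butter(2, 80, btype="highpass", fs=sample_rate, output="sos")
        lp = butter(4, 9000, btype="lowpass", fs=sample_rate, output="sos")
        return sosfilt(lp, sosfilt(hp, audio))

    # Usage: filtered = clean_dialogue(dialogue_mono, 48000)
    ```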

    Hartstone also discussed the importance of perspective in dialogue mixing. He emphasised that the audio should match the framing of the shot—voices should not shift unnaturally in relation to the camera’s viewpoint. For example, close-up dialogue should be crisp and intimate, while wide shots should have a more open sound, reflecting the environment. When working with ADR (Automated Dialogue Replacement), he recommended blending it with the original production sound by matching room acoustics and microphone placement to avoid inconsistencies.

    Balancing Sound Elements and Surround Mixing

    Hartstone stressed the importance of dynamic balance between different sound elements. He warned against overusing compression, explaining that while it can help smooth out levels, excessive compression can make a mix sound unnatural. Instead, he recommended using automation and manual level adjustments to retain natural dynamics, especially for dialogue-driven scenes.

    For surround mixing, Hartstone advised positioning ambient sounds carefully to avoid distracting the audience. Dialogue and primary sound effects should remain anchored in the front channels, while environmental sounds and subtle atmospheric elements should be spread across the surround channels. He suggested that surround effects should be used sparingly in dialogue-heavy scenes but can be more pronounced in action sequences to enhance immersion.

    Layering Explosions and Action Sequences

    Hartstone shared techniques for mixing action-heavy films, particularly regarding explosions and gunfire. He explained that layering sound elements helps create depth and realism. For an explosion, he suggested layering three key components: a bass-heavy thump for impact, a mid-range crack for texture, and high-end debris for detail. He recommended ensuring that these layers are carefully mixed so that the low end does not overpower dialogue and other important sounds.
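
    The sketch below illustrates that three-layer approach in the simplest possible terms: band-limit each component so the layers occupy their own space, then gain-stage them so the low end leaves room for dialogue. Crossover points and gains are illustrative, and the inputs are assumed to be equal-length mono arrays.

    ```python
    import numpy as np
    from scipy.signal import butter, sosfilt

    def build_explosion(thump, crack, debris, sr=48000):
        """Mix three equal-length mono layers, each confined to its own frequency range."""
        low  = sosfilt(butter(4, 120,         btype="lowpass",  fs=sr, output="sos"), thump)
        mid  = sosfilt(butter(2, [200, 3000], btype="bandpass", fs=sr, output="sos"), crack)
        high = sosfilt(butter(4, 4000,        btype="highpass", fs=sr, output="sos"), debris)
        mix = 0.7 * low + 1.0 * mid + 0.5 * high     # keep the sub layer below the mid "crack"
        peak = np.max(np.abs(mix))
        return mix / peak if peak > 1.0 else mix     # leave headroom rather than clip

    # Usage: explosion = build_explosion(thump_take, crack_take, debris_take)
    ```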

    He also discussed the importance of spatial placement for action scenes. For instance, gunfire should have directional placement in the mix to match the on-screen perspective. He recalled how, on James Bond films, the team carefully panned gunfire and bullet ricochets to follow the action, adding realism and depth to chase and fight sequences.

    Checking Mixes Across Different Playback Systems

    To ensure consistency, Hartstone recommended testing mixes on multiple playback systems, from large cinema screens to nearfield monitors. He suggested switching between full surround and stereo playback to detect phase issues or missing elements. He also noted that checking the mix at lower volumes can help identify problems with clarity, as important dialogue or sound effects may get lost when played at lower levels.

    Additionally, he highlighted the importance of attending final screenings to verify the mix in the intended playback environment. He recalled how, during a Blade Runner premiere screening, last-minute mix adjustments were needed to correct sound balance issues, reinforcing the importance of checking the final product under real-world conditions.

    Final Thoughts

    Graham Hartstone’s lecture provided a detailed exploration of film sound design, offering valuable lessons for professionals and enthusiasts alike. His expertise underscored how vital a well-crafted soundtrack is in shaping the audience’s experience, blending technical precision with creative storytelling.

  • David Chan on Game Audio: When It Is Done Right, No One Will Notice

    Game audio is an invisible practice: when executed well, players barely notice it. Yet it is fundamental in shaping an engaging experience. In an insightful online guest lecture, David Chan, Audio Director at Hinterland Games, explored the philosophy and craft of video game sound design. Drawing on a career spanning more than 37 titles, including Mass Effect, Knights of the Old Republic, and Splinter Cell, he detailed how sound can enhance immersion, create emotional impact, and bring virtual worlds to life.

    David Chan

    The Philosophy of Sound Design

    Chan described sound design as performing two essential roles: creating an illusion and reinforcing reality. He linked this to historical examples, such as stage performances that used wooden blocks to mimic galloping horses or metal sheets to simulate thunder. The same principles apply to games, where sound designers must craft worlds that feel authentic, even when they do not exist in reality.

    A clear example comes from Red Dead Redemption, where audio designers carefully reconstructed the sonic environment of the Old West. The ambient sound of the game—horses neighing, conversations on the streets, distant gunfire—contributes to a sense of time and place. Chan explained how these elements reinforce reality, ensuring that the world feels lived-in. He noted that the game’s soundtrack, inspired by spaghetti westerns, further supports this atmosphere, seamlessly integrating music with environmental sounds.

    How Sound Shapes a Scene

    One of the most striking examples Chan presented was how sound can completely change the mood of a scene. He demonstrated this by stripping the original audio from a video clip and replacing it with two different soundscapes:

    • The first version used subtle ambient sounds like birds chirping and distant city noise, creating a neutral, everyday setting.
    • The second version replaced these with an ominous drone and eerie music, transforming the same footage into something foreboding and tense.

    This exercise highlighted how sound designers influence perception and steer player emotions without altering the visuals.

    A more extreme example of this approach comes from Splinter Cell, where Chan and his team had to create the illusion of a prison riot without actually animating one. Due to technical limitations, they could not show hundreds of rioting prisoners on-screen. Instead, they relied on audio cues—distant shouting, the clanging of metal doors, and muffled alarms—to make players believe chaos was unfolding nearby. As the player moved into enclosed spaces, the soundscape changed, becoming quieter and more muffled, reinforcing the illusion that the riot was occurring just out of sight.

    Designing Sound for Fictional Worlds

    One of the key challenges in game audio is developing sounds for fantasy and science fiction worlds. Chan spoke at length about Star Wars: The Old Republic, a game set in the Star Wars universe but in an era not explored in the films.

    He explained that while they aimed to remain faithful to the franchise’s iconic sounds, many of the game’s effects were newly created. For instance, the game introduced new droids that needed to sound as if they belonged in Star Wars, without directly copying R2-D2’s beeps and whistles. The sound team designed robotic sounds that felt authentic to the universe but were built from scratch.

    Another challenge was designing energy weapons for the game’s melee combat—something rarely seen in the Star Wars films. The team had to develop a sound signature that fit within the established audio landscape while remaining distinct from traditional blaster sounds. Chan saw it as a success when players assumed the game had simply reused sounds from the films, when in reality, much of the audio was entirely new.

    In Prey, Chan tackled a different challenge: designing sounds for organic weapons. Unlike traditional sci-fi firearms, these weapons were hybrids of living creatures and technology. One example was a grenade-like alien that the player had to rip apart before throwing. To make this sound believable, the team blended:

    • Wet, organic textures to give the impression of tearing flesh.
    • Squelching and bubbling effects to suggest the creature was still alive.
    • Mechanical clicks and pings to remind the player that it was still a weapon.

    This careful layering of sounds helped create an unsettling but intuitive experience for players.

    Building a Scene with Sound

    Chan provided a detailed breakdown of his sound design process using a scene from Prototype. He demonstrated how game audio is constructed layer by layer:

    1. Environmental Ambience – The first layer consisted of background sounds such as distant city noise, wind, and subtle echoes, setting the foundation for the world.
    2. Character Actions – Next, footsteps, breathing, and interactions with the environment were added to reinforce the character’s presence.
    3. Emotional Elements – Music and additional sound cues were introduced to enhance tension, guiding the player’s emotions.
    4. Final Mix – Once all elements were combined, the scene felt alive and convincing, despite being constructed entirely from separate sound sources.

    This method is essential in games, where every sound must be placed with intention. Unlike film, where microphones capture real-world sounds during production, game soundscapes are built from scratch.

    The Risks of Distracting Sound Design

    While sound design enhances immersion, poorly implemented audio can have the opposite effect. Chan discussed how reusing sounds from other games can break immersion. He pointed to Team Fortress 2, which reused audio effects from Half-Life, making the soundscape feel out of place.

    He also shared humorous examples, such as a reimagined Super Mario Bros. scene where realistic voice acting was added to Mario’s jumps, falls, and collisions. The exaggerated grunts and pain sounds turned the classic game into something unintentionally comedic, showing how audio choices can completely shift a game’s tone.

    Another example came from The Elder Scrolls IV: Oblivion, where a voice line was accidentally repeated in the same conversation. These small mistakes, while often unintentional, can pull players out of the experience and serve as a reminder that they are in a game.

    The Human Side of Game Audio

    Chan also discussed the role of voice acting in game sound. He played outtakes from recording sessions, showing how voice actors experiment with different tones and deliveries. He noted that good voice performances must match the world—whether it is gritty realism in Watch Dogs or over-the-top fantasy in Jade Empire.

    He also shared a humorous example from MDK2, where an alien species communicated by expelling gas—a creative but comedic take on alien speech design. While some sounds need to be grounded in reality, others allow for creative and exaggerated approaches.

    Final Thoughts

    David Chan’s lecture provided an insightful look at the complexities of game audio, from crafting subtle background sounds to designing entire worlds through sound alone. His key message was clear: Great game audio should be felt, not noticed.

    When done well, it deepens the player’s immersion, enhances emotions, and makes virtual worlds more believable. Whether creating the ambience of the Old West, the tension of a sci-fi battle, or the chaos of an unseen riot, the principles he shared continue to shape the way game audio is approached today.

  • Ben Minto’s Guest Lecture: The Complexity and Craft of Runtime Sound Design in Video Games

    Ben Minto, Audio Director at DICE in Sweden, recently delivered an engaging guest lecture on the intricate world of runtime video game sound design. With a career spanning over 15 years in game audio, including work on Star Wars Battlefront and Battlefield 4, Minto shared insights into the evolution of interactive sound, the technical and creative challenges of implementing audio in real time, and the balance between realism and stylisation in modern video games. His talk offered a close look at the process of creating dynamic, responsive soundscapes, where audio is not just a background element but a crucial part of gameplay and player immersion.

    Ben Minto

    From Simple Playback to Dynamic Sound Design

    Minto reflected on how game audio has evolved from its early days, where sound was handled using two basic types: one-shot sounds and looping sounds. Previously, sound was mapped directly to game events, meaning a door opening would always trigger the same sound effect. Over time, game audio has moved towards a more interactive, system-driven approach, where runtime parameters influence how sounds are played.

    Instead of a single “door opening” sound, modern games now generate variations based on factors such as who opened the door, how quickly it was moved, and whether it had been used recently. This shift extends to more complex systems like weapons, explosions, and vehicles, where sounds are constructed from multiple component layers, ensuring they react dynamically to gameplay conditions.
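
    A generic sketch of that kind of parameter-driven variation is shown below; the asset names, thresholds and shaping rules are invented for the example and do not reflect DICE's actual systems.

    ```python
    import random
    import time

    _last_used: dict[str, float] = {}

    def choose_door_sound(opened_by: str, open_speed: float, door_id: str) -> dict:
        """Pick and shape a door-opening variant from runtime context."""
        now = time.monotonic()
        recently_used = now - _last_used.get(door_id, float("-inf")) < 10.0
        _last_used[door_id] = now

        return {
            "asset": random.choice(["door_open_a", "door_open_b", "door_open_c"]),
            "volume": 0.5 + 0.5 * min(open_speed, 1.0),                 # kicked open = louder
            "pitch": 1.0 + random.uniform(-0.03, 0.03),                 # small random pitch offset
            "add_creak": not recently_used and opened_by != "player",   # creak only if the door has settled
        }

    print(choose_door_sound("npc_guard", open_speed=0.9, door_id="barracks_door_02"))
    ```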

    Case Study: The Explosion System in Battlefield 4

    Minto detailed how Battlefield 4 moved away from pre-recorded explosion sounds and instead dynamically constructed them from multiple elements. The explosion system in the game considers various factors, including the initial crack, the main body of the explosion, reflections and echoes based on the surrounding environment, and additional sounds caused by debris. The way an explosion sounds is also influenced by the player’s distance from the event, with close-up explosions featuring sharper, high-energy transients and distant ones creating a rolling, thunderous effect.

    The environmental setting also plays a key role, with explosions in urban environments producing sharp, slapback echoes while those in forests have a more diffuse, drawn-out reverb. Destruction layers add further realism by introducing the appropriate material sounds, such as metal debris, shattered glass, or splintering wood, depending on what has been damaged. By using this method, Battlefield 4 ensures that no two explosions sound exactly the same, making each in-game encounter feel distinct and grounded in its environment.
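
    As a loose sketch of how such runtime factors might steer an explosion event (with all categories and values guessed for illustration, not taken from Frostbite), one could select the tail, debris layer and filtering from the event's context:

    ```python
    def explosion_profile(distance_m: float, environment: str, material: str) -> dict:
        """Assemble a per-event description of how this explosion should be built."""
        return {
            "transient": "sharp_crack" if distance_m < 100 else "rolling_boom",
            "tail": {
                "urban":  "slapback_echoes",
                "forest": "diffuse_long_reverb",
                "open":   "short_dry_decay",
            }.get(environment, "short_dry_decay"),
            "debris_layer": {
                "metal": "metal_debris",
                "glass": "glass_shatter",
                "wood":  "wood_splinter",
            }.get(material),
            "lowpass_hz": max(800.0, 16000.0 - 6.0 * distance_m),   # crude stand-in for air absorption
        }

    print(explosion_profile(40, "urban", "glass"))
    print(explosion_profile(1200, "forest", "wood"))
    ```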

    Field Recording and “Embracing the Dirt”

    Minto emphasised the importance of authentic field recording in capturing believable soundscapes. The team at DICE combines high-fidelity recordings with those made using everyday devices like smartphones and handheld recorders. This approach, which he refers to as “embracing the dirt,” acknowledges that imperfections in sound recordings often add to their authenticity.

    For example, explosions recorded with professional microphones provide clean, detailed transients, while those captured with handheld recorders or consumer devices introduce compression, clipping, and saturation, mimicking how explosions might sound on news footage or personal recordings. This method was particularly effective in Battlefield 4, where the audio aesthetic was influenced by real-world military footage captured on handheld cameras.

    Dynamic Range and Player Experience: “War Tapes” Mode

    Minto also discussed the HDR (High Dynamic Range) audio system used in Battlefield 4, which dynamically prioritises important sounds. In fast-paced combat, players rely on audio cues to stay aware of their surroundings. The HDR system ensures that critical sounds like gunfire and footsteps are emphasised while background noise is adjusted in real time to prevent clutter.
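
    A much-simplified sketch of the loudness-window idea behind HDR audio is shown below: the loudest active sound defines the top of a window, and sources falling far enough below it are culled rather than mixed. The 24 dB window and the loudness values are arbitrary example figures.

    ```python
    def hdr_mix(active_sounds: dict[str, float], window_db: float = 24.0) -> list[str]:
        """active_sounds maps name -> perceived loudness (dB); return the sounds kept in the mix."""
        if not active_sounds:
            return []
        window_top = max(active_sounds.values())
        return [name for name, loudness in active_sounds.items()
                if loudness >= window_top - window_db]

    battle = {"own_gunfire": 120.0, "nearby_footsteps": 100.0,
              "distant_ambience": 70.0, "ui_tick": 60.0}
    print(hdr_mix(battle))   # the quiet sources drop out while the loud ones play
    ```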

    The team also implemented player-adjustable sound profiles, including the “War Tapes” mode, which heavily compresses and saturates the sound for a raw, documentary-like aesthetic. Other modes were tailored for home cinema systems and standard TV speakers, allowing players to adjust the dynamic range based on their listening environment.

    The Role of Foley in Game Audio

    Unlike traditional Foley in film, where sounds are added in post-production, game Foley must be implemented as modular elements that adapt to in-game actions. The sound design approach varies depending on the project. For Mirror’s Edge, Foley was recorded in a highly controlled studio environment, resulting in clean, precise sounds. In contrast, Battlefield used a more organic approach, recording footsteps and clothing movements outdoors to capture the natural imperfections of real-world sound.

    DICE’s Foley system separates different elements into multiple layers, including upper body fabric movement, torso and equipment rustling, boot sounds, and surface interactions such as gravel, snow, or metal. By combining these layers in real time, the system creates a responsive, realistic movement system that changes based on the character’s actions and surroundings.

    The Future of Game Audio

    Minto concluded by discussing the future of runtime sound design, highlighting advancements in procedural sound synthesis, frequency-based mixing, and AI-assisted adaptive soundtracks. He emphasised the importance of collaboration across disciplines, noting that sound designers must work closely with animators, programmers, and level designers to create truly immersive audio experiences.

    One of his key takeaways was the importance of curiosity and adaptability in game sound design. Aspiring sound designers should experiment with different recording techniques, explore procedural sound methods, and challenge traditional workflows to push the medium forward.

    Conclusion

    Ben Minto’s lecture provided a detailed look into the evolving world of video game sound, highlighting the technical expertise and creative problem-solving required to craft dynamic and immersive audio experiences. His insights underscored that sound is not just an add-on to games but a fundamental part of storytelling, player immersion, and emotional engagement. As game worlds become increasingly complex and interactive, sound will continue to shape the way players experience and engage with virtual environments.

  • Making Waves: Dr Nina Schaffert on Sonification in Rowing

    Dr Nina Schaffert, a postdoctoral researcher at the University of Hamburg, delivered an engaging online lecture on the role of sonification in high-performance rowing. The session provided valuable insights into how sound can serve as an acoustic feedback mechanism to enhance elite athletes’ performance.

    Sofirow in use

    Biomechanical Feedback in Rowing

    Dr Schaffert outlined the importance of biomechanical diagnostics in elite rowing, where mobile measurement devices capture dynamic and kinematic parameters such as forces applied by athletes, boat speed, and acceleration. These data points are critical in supporting coaches as they refine technique and optimise training regimens.

    Traditionally, this feedback is presented visually, often through graphical displays. However, focusing on a screen while rowing is impractical, especially in changing outdoor conditions. Dr Schaffert noted that rowers naturally rely on acoustic cues, such as water splashes and boat movement, to assess their performance. Building on this, her team explored whether artificially generated sonification could provide real-time auditory feedback to support technique adjustments.

    What is Sonification?

    Sonification converts data into sound, allowing information to be communicated through auditory cues instead of visual representations. This method is particularly useful in situations where visual attention is occupied, enabling real-time feedback without requiring the user to look at a screen. Unlike traditional auditory feedback, which relies on verbal instructions or pre-recorded sounds, sonification generates dynamic audio based on real-time data, making it an interactive form of feedback.

    Different approaches exist within sonification. Parameter mapping sonification, the most commonly used, assigns data values to sound properties such as pitch or volume. Model-based sonification creates sounds based on physical models of movement, mimicking natural acoustic responses. Audification translates raw data directly into sound waves, making patterns perceptible through listening rather than visual analysis.

    Dr Schaffert’s research applies parameter mapping sonification, translating rowing boat acceleration into sound. This makes subtle movement variations audible, allowing athletes to refine their technique.
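
    As a toy example of parameter mapping in this spirit, the sketch below maps boat acceleration linearly onto pitch, so dips and surges in the stroke become falling and rising tones; the mapping range and the acceleration trace are invented for illustration.

    ```python
    def accel_to_frequency(accel_ms2: float,
                           accel_range=(-3.0, 3.0),
                           freq_range=(220.0, 880.0)) -> float:
        """Map an acceleration sample (m/s^2) linearly onto a pitch in Hz."""
        a_min, a_max = accel_range
        f_min, f_max = freq_range
        t = (min(max(accel_ms2, a_min), a_max) - a_min) / (a_max - a_min)
        return f_min + t * (f_max - f_min)

    stroke_cycle = [-1.8, -0.5, 1.2, 2.6, 1.0, -0.3]   # made-up acceleration trace over one stroke
    print([round(accel_to_frequency(a)) for a in stroke_cycle])
    ```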

    Sonification in Rowing: Communicating Movement Through Sound

    In rowing, acceleration varies across the stroke cycle. A rowing stroke consists of two primary phases: drive and recovery. The key transitions—catch, where the oar enters the water, and finish, where it exits—significantly affect acceleration. Sonification maps these variations to sound, enabling athletes to perceive them intuitively.

    Dr Schaffert’s team tested this approach during on-water training with the German national rowing team. The system transformed real-time acceleration data into sound sequences delivered via loudspeakers or earphones. By listening to these sounds, rowers identified inconsistencies in their strokes, particularly during the recovery phase. Adjusting their technique in response to the sound led to smoother movement and increased boat speed.

    Beyond Rowing: Applications in Other Fields

    Sonification has been successfully applied in various domains beyond rowing. In sports training and performance enhancement, it has been used in speed skating, swimming, tennis, and golf. In speed skating, auditory feedback helps maintain optimal rhythm and stride length. In swimming, stroke consistency has been improved by mapping stroke rate and force to auditory signals. In tennis, racket movement has been sonified to enhance swing accuracy. In golf, putting and swing techniques have benefited from auditory cues linked to club speed and angle.

    Beyond sports, sonification supports medical rehabilitation, scientific research, and accessibility. In stroke recovery, auditory feedback aids movement coordination, while rhythmic cues improve gait stability for individuals with Parkinson’s disease. Prosthetic limb users refine control and movement patterns through sonified feedback. In scientific analysis, space telescope data has been converted into sound to reveal celestial phenomena, earthquake data has been sonified to detect tremors, and MRI and EEG data have been made audible for brain activity analysis. Sonification also enhances accessibility, with screen readers and navigation tools providing auditory cues for visually impaired users, while complex graphs and charts are transformed into sound for auditory data interpretation.

    Sofirow: Acoustic Feedback for Rowers

    To apply sonification in training, Dr Schaffert’s team developed Sofirow, a system designed to provide real-time auditory feedback based on biomechanical data. It measures boat acceleration with a micro-electromechanical sensor, converts the data into sound, and transmits it wirelessly to rowers and coaches.

    Sofirow translates acceleration changes into distinct sound variations, allowing rowers to hear their boat’s motion in real time. The system communicates key performance indicators, including boat speed, acceleration, and deceleration. If a rower moves too abruptly during recovery, the sound reflects this instability, prompting a smoother execution. Conversely, an efficient stroke produces a stable, consistent sound.

    A crucial function of Sofirow is improving the recovery phase. The system highlights when a rower disrupts the boat’s glide by moving too forcefully, allowing them to adjust their approach for minimal drag. Timing at the catch is another focal point, ensuring strokes are synchronised to maintain momentum without unnecessary deceleration.

    The system was tested during multiple training sessions with the German junior and senior national rowing teams. Sonification was introduced in alternating intervals, with sections of training both with and without sound. Results demonstrated that when auditory feedback was present, rowers achieved a more consistent technique and increased boat speed. Acceleration data revealed smoother transitions and reduced deceleration at key points in the stroke cycle.

    Athletes found the auditory feedback intuitive and effective in improving coordination. Dr Schaffert presented recordings of Sofirow’s output, demonstrating how variations in movement execution could be heard through pitch and tone changes.

    Future Possibilities for Sonification in Sports

    Dr Schaffert highlighted the expanding role of sonification in sports science, where advancements in machine learning, real-time data processing, and interactive feedback systems are transforming athletic training. One area of development is cycling performance, where real-time auditory cues on pedalling mechanics have been shown to improve efficiency and endurance. By integrating wearable sensors that monitor cadence and power output, sonification enables cyclists to make immediate adjustments to optimise their form, reduce fatigue, and maintain consistent performance over long distances.

    In racket sports such as squash and tennis, researchers have explored how auditory feedback can assess and refine shot precision. Systems that analyse racket-ball impacts can generate sound cues to help players adjust their stroke technique. This feedback allows athletes to develop greater control and consistency in their shots without relying solely on visual analysis. Similarly, in rowing and endurance sports, sonification can reinforce correct pacing by providing rhythmic auditory signals that help athletes synchronise movement with optimal stroke or stride rates, improving efficiency and reducing energy waste.

    The integration of wearable sonification technology is opening new possibilities for personalised training. Smart garments embedded with motion sensors can detect movement patterns and muscle activation, translating this data into sound cues that guide athletes in refining technique. These advancements are particularly relevant in sports requiring precise biomechanics, such as swimming, weightlifting, and gymnastics. With continued progress in real-time data processing, sonification could become a standard training tool, offering immediate and adaptive feedback to help athletes improve performance, prevent injuries, and achieve greater consistency in their movement execution.

  • Sounds Like a Combo: Jed Miclot’s Killer Approach to Game Audio

    Jed Miclot, Senior Sound Designer at Double Helix Games (now part of Amazon Game Studios), delivered an insightful online guest lecture on the sound design of Killer Instinct for Xbox One. In this engaging session, he provided a detailed breakdown of his creative and technical approach to crafting the game’s dynamic and immersive audio experience.

    Jed Miclot

    From Film to Games: Miclot’s Journey into Sound Design

    Miclot began by sharing his professional background, highlighting his transition from film post-production to video game sound design. Having worked on Harry Potter and other film projects, he eventually shifted his focus to interactive media, drawn by the challenge of designing sound for dynamic gameplay scenarios.

    Building a Unique Sonic Identity for Killer Instinct

    One of the core themes of the lecture was the importance of creating a distinct audio identity for each character in Killer Instinct. Miclot explained how he designed unique sound palettes that reflected each fighter’s personality, abilities, and fighting style.

    Jago, the Tibetan monk fighter, features martial arts-inspired sonic elements that reflect his disciplined yet powerful combat style. His movements are accompanied by crisp martial arts strikes, recorded using real wooden staffs, hand-to-hand impacts, and air displacement effects to simulate the speed of his attacks. To heighten realism, Miclot layered subtle breathing effects and controlled exhalations, making each attack feel deliberate and refined.

    Glacius, an alien composed of ice, required frozen textures and resonant impacts to capture his otherworldly nature. To achieve this, Miclot recorded frozen fabric being twisted and broken, ice cubes cracking in water, and glass-like resonance using contact microphones on frozen metal objects. His attacks, which involve ice shards and liquid nitrogen-inspired transformations, were enhanced by recording icicles being shattered and the sound of dry ice sublimating.

    For Sabrewulf, the werewolf, a blend of organic growls and Foley elements such as breaking wood and cloth ripping emphasised his primal nature. Miclot layered real wolf growls, lion roars, and bear vocalisations, processed to create a hybrid beast-like voice. His claw attacks were enhanced using recordings of splintering wood and ripping fabric, simulating the forceful tearing of his enemies.

    Spinal, the skeletal pirate, was brought to life through creaking bones and wooden textures to enhance his eerie presence. Miclot recorded old wooden floorboards creaking, bones knocking together, and rattling chains to create an undead, cursed aesthetic. Spinal’s vocalisations were constructed using manipulated human screams, whispery ghostly echoes, and reversed percussion elements.

    Foley Recording and Creative Sound Sourcing

    Miclot’s approach to Foley embraced experimentation with physical objects and environmental interactions to craft a rich and immersive soundscape. To enhance the weight and impact of heavy-footed characters like Sabrewulf, he recorded the sound of pumpkins being smashed, allowing the mix of soft pulp and hard shell impacts to produce a visceral quality that made movements feel raw and animalistic. For Glacius, Miclot soaked an old pair of jeans in water and froze them, manipulating the fabric once solid to capture the crisp crackling of frozen textures. This method proved so effective in simulating ice fractures that it even led to confusion among coworkers when they discovered frozen jeans in the office freezer.

    To enhance the eerie atmosphere of Spinal’s stage, Miclot recorded his girlfriend’s snoring while she was unwell, capturing deep, guttural breaths that he later pitched down into a spectral presence. He also manipulated the sound of air shifting in a toilet bowl, producing unsettling moaning effects that contributed to the ghostly ambience of Spinal’s environment.

    For Orchid’s electrical attacks, Miclot recorded a real Tesla coil generating powerful electrical discharges, using its raw, high-voltage arcs to provide an authentic crackling intensity. He controlled the coil’s amplitude and rate of sparks in real time, capturing variations that could be used dynamically during combat sequences. Similarly, for Sadira’s web-based attacks, he needed a sound that conveyed both elasticity and tension. Stretching duct tape across a long surface and peeling it at different speeds allowed him to mimic the sticky, sinewy strands wrapping around enemies, creating a uniquely organic yet unnerving sound.

    Innovative Sound Techniques: Layering and Positional Audio

    A key aspect of Killer Instinct’s audio design was its innovative approach to impact sounds. Rather than relying on a single, static sound effect, Miclot designed each impact to be dynamic and multi-layered, enhancing spatial awareness and immersion. When a character is slammed to the ground, the sound is composed of multiple elements, including positional slapback echoes that create a sense of depth and space.

    Miclot demonstrated how this system worked using Orchid’s backflip slam, a move where the character is thrown to the ground with a heavy impact. Instead of a single sound event, the slam triggered seven different sound layers, including a shockwave layer, multiple slapback echoes, and a low-frequency boom that played through the subwoofer to reinforce the force of the impact.
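
    The sketch below illustrates the general idea of such a multi-layered impact event: a set of layers, each with its own asset, delay, level, and output routing, handed to the engine the moment the slam lands. The layer names, offsets, and gains are invented for demonstration and are not the game’s actual event data.

    ```python
    from dataclasses import dataclass

    # Illustrative multi-layer impact event in the spirit of the slam described above.
    # Asset names, delays, gains, and routing are invented for demonstration.

    @dataclass
    class ImpactLayer:
        sample: str      # audio asset to play
        delay_ms: float  # offset from the moment of impact
        gain_db: float   # level relative to the main layer
        bus: str         # output routing ("main" or "lfe")

    GROUND_SLAM = [
        ImpactLayer("body_impact_close.wav",   0.0,   0.0, "main"),
        ImpactLayer("shockwave_whoosh.wav",    0.0,  -3.0, "main"),
        ImpactLayer("debris_scatter.wav",     30.0,  -6.0, "main"),
        ImpactLayer("slapback_near.wav",      90.0, -10.0, "main"),
        ImpactLayer("slapback_mid.wav",      140.0, -13.0, "main"),
        ImpactLayer("slapback_far.wav",      200.0, -16.0, "main"),
        ImpactLayer("sub_boom.wav",            0.0,  -2.0, "lfe"),  # low-frequency reinforcement
    ]

    def trigger_impact(layers: list[ImpactLayer]) -> None:
        """Hand each layer to a (hypothetical) audio engine with its offset and routing."""
        for layer in layers:
            print(f"play {layer.sample:<24} +{layer.delay_ms:>5.0f} ms "
                  f"{layer.gain_db:>6.1f} dB -> {layer.bus}")

    trigger_impact(GROUND_SLAM)
    ```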

    For Glacius’s ice-based attacks, different layers of sound simulated the fracturing and shifting of frozen structures. When Glacius smashes an enemy with an ice attack, multiple sound components activate: an initial impact recorded using frozen jeans snapping, a delayed crackling sound simulating stress fractures in the ice, and a distant slapback echo mimicking sound reflections off frozen surfaces.

    This dynamic approach was also applied to environmental destruction. When objects in the stage break, multiple sound layers are triggered based on how close the player is to the destruction. If debris falls in the background, the slapback echoes adjust so that the sound seems to travel across the space. Miclot’s use of adaptive layering and positional audio ensured that every attack felt spatially alive, whether a character was fighting in a confined, echo-heavy environment or on an open battlefield.
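
    As a rough illustration of how a slapback echo can be scaled with distance, the snippet below derives the echo’s delay from the round trip to a reflecting surface and attenuates its level with a simple inverse-distance falloff. The physics is standard; applying it this way is an assumption for illustration, not the game’s actual implementation.

    ```python
    import math

    # Illustrative distance scaling for a slapback echo: delay from the round trip
    # to a reflecting surface, level from a simple inverse-distance falloff.

    SPEED_OF_SOUND = 343.0  # metres per second at room temperature

    def slapback(distance_m: float) -> tuple[float, float]:
        """Return (delay in ms, gain in dB) for an echo off a surface distance_m away."""
        delay_ms = (2.0 * distance_m / SPEED_OF_SOUND) * 1000.0    # sound travels out and back
        gain_db = -20.0 * math.log10(max(2.0 * distance_m, 1.0))   # inverse-distance attenuation
        return delay_ms, gain_db

    for d in (5.0, 15.0, 40.0):
        delay, gain = slapback(d)
        print(f"surface at {d:>4.0f} m -> echo after {delay:5.0f} ms at {gain:6.1f} dB")
    ```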

    Adaptive Music: Enhancing Gameplay Feedback

    Miclot also discussed the role of Killer Instinct’s dynamic music system, which was designed in collaboration with composer Mick Gordon. Unlike traditional game scores that loop continuously, Killer Instinct’s soundtrack adapts to player actions. The music shifts intensity when a player achieves a high combo streak, providing real-time feedback on gameplay performance. A granular processing effect momentarily distorts the music when a combo breaker is performed, reinforcing the action’s impact. If players stop fighting for six seconds, the music transitions to classic themes from the original Killer Instinct soundtrack. During an ultra combo, each successful hit triggers a sequence of musical notes tied to the character’s theme, turning the final blows into a rhythmic spectacle.
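
    A toy state machine can make the described behaviour easier to picture: intensity rises with the combo count, a combo breaker momentarily distorts the music bus, and six idle seconds bring back the classic theme. The thresholds, method names, and engine calls below are illustrative assumptions, not the system Double Helix and Mick Gordon actually built.

    ```python
    import time

    # Toy sketch of the adaptive-music behaviour described above. Thresholds and
    # the engine calls (print statements) are placeholders, not the real system.

    class AdaptiveMusic:
        IDLE_TIMEOUT = 6.0  # seconds without fighting before the classic theme returns

        def __init__(self) -> None:
            self.combo = 0
            self.last_action = time.monotonic()
            self.idle = False

        def on_hit(self, ultra: bool = False) -> None:
            self.combo += 1
            self.last_action = time.monotonic()
            self.idle = False
            if ultra:
                print(f"play character-theme note #{self.combo}")  # rhythmic ultra-combo hits
            elif self.combo % 10 == 0:
                print(f"raise music intensity (combo {self.combo})")

        def on_combo_breaker(self) -> None:
            self.combo = 0
            self.last_action = time.monotonic()
            print("apply a short granular distortion to the music bus")

        def update(self) -> None:
            """Called once per frame; falls back to the classic theme when idle."""
            if not self.idle and time.monotonic() - self.last_action > self.IDLE_TIMEOUT:
                self.idle = True
                print("crossfade to a classic Killer Instinct theme")

    music = AdaptiveMusic()
    for _ in range(10):
        music.on_hit()
    music.on_combo_breaker()
    ```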

    Final Reflections

    Miclot’s guest lecture provided an in-depth look at the intricacies of fighting game sound design. His work on Killer Instinct showcased how experimental Foley, creative recording techniques, and adaptive audio implementation can enhance a game’s engagement. By sharing practical insights and demonstrating the thought process behind each sound, his lecture offered valuable knowledge for those looking to push the boundaries of game audio design.

  • Stepping to the Beat: Benoit Tigeot’s Journey in Dance Game Sound Design

    Benoit Tigeot delivered an engaging online lecture on his experiences working on the Just Dance series and the intricacies of sound design in dance video games. His talk provided an in-depth look at the challenges and creative processes involved in crafting immersive audio for an interactive, music-driven game.

    Benoit Tigeot

    From Live Sound to Game Development

    Benoit’s journey into sound design began with work on live shows, concerts, and exhibitions, which provided him with a strong foundation in audio engineering. After completing his studies in France, he gained experience in television production, animation dubbing, and studio recording before transitioning into video game audio. His background in live and recorded sound gave him a unique perspective when he joined Ubisoft to work on Just Dance.

    Adapting to Game Audio

    Despite having no prior experience in game audio, Benoit quickly adapted to the demands of interactive sound design. He worked on multiple Just Dance titles, learning how to integrate music and sound effects into gameplay while ensuring high-quality production standards. The fast-paced development cycle required him to balance creativity with efficiency, as each version of Just Dance was produced in a matter of months.

    The Sound Design Workflow

    Benoit outlined the workflow for sound design in Just Dance, highlighting key stages such as:

    • Track Preparation: Receiving licensed music, ensuring audio quality, and making necessary edits, including removing inappropriate language. For example, in Black Eyed Peas’ songs, multiple words were edited out using backward reverb and other subtle audio modifications to keep the track family-friendly while maintaining its musicality.
    • Marker Placement: Adding timing markers to synchronise choreography, animations, and gameplay elements. Benoit emphasised the importance of precision, as even a millisecond difference could affect the timing of dance moves and scoring (see the beat-marker sketch after this list).
    • Sound Effects (SFX) Design: Creating introductory and concluding sound effects for each song, as well as UI and gameplay sounds. In Just Dance Japan, additional sound effects were incorporated at the beginning and end of tracks to enhance the user experience. The sound team also created unique effects for different dance modes, such as battle mode, where transitional audio had to blend seamlessly between competing tracks. Over 150 different SFX variations were tested to find the right balance between energy and smooth musical transitions.
    • Integration and Testing: Implementing audio into Ubisoft’s proprietary engine, collaborating with developers and artists, and ensuring synchronisation across multiple platforms. Benoit described how the team used text-based scripting in Sublime Text to adjust pitch, loop points, and volume, allowing for quick iteration and adjustments across the game. He also discussed how the team recorded crowd reactions and player feedback sounds in a dedicated studio space to ensure an immersive experience.
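
    As a small illustration of the precision involved in marker placement, the sketch below generates beat markers on a whole-millisecond grid from a constant tempo and the offset of the first downbeat. The function and values are hypothetical and are not part of Ubisoft’s actual pipeline.

    ```python
    # Hypothetical beat-marker generation for a constant-tempo track; not Ubisoft's pipeline.

    def beat_markers(bpm: float, first_beat_ms: float, duration_ms: float) -> list[int]:
        """Return marker positions in milliseconds for every beat of the track."""
        beat_ms = 60_000.0 / bpm        # length of one beat in milliseconds
        markers = []
        t = first_beat_ms
        while t <= duration_ms:
            markers.append(round(t))    # keep markers on a whole-millisecond grid
            t += beat_ms
        return markers

    # Example: a 128 BPM track whose first downbeat lands 412 ms into the file
    print(beat_markers(128.0, 412.0, 5_000.0))
    ```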

    Challenges in Dance Game Audio

    Working on Just Dance presented unique challenges, including:

    • Multi-platform Development: Adapting audio for different consoles and ensuring consistency across devices.
    • Cross-Studio Collaboration: Coordinating with teams worldwide, including those in France, India, and the UK.
    • Real-time Testing: Evaluating sound integration in a dynamic, open-plan workspace filled with music and dance rehearsals. Benoit noted that sound designers had to contend with a noisy environment, making it difficult to hear and refine subtle audio details.
    • Genre Adaptability: Designing sound for a wide range of musical styles while maintaining a cohesive experience. He explained how the team had to ensure that different styles—ranging from electronic dance music to country—had consistent and engaging audio treatments without overwhelming players with excessive effects.

    Reflections on Sound Design in Just Dance

    Benoit’s lecture provided a valuable look at the evolution of Just Dance’s audio technology. He discussed the transition to a new game engine, which improved workflow efficiency and allowed for greater creative flexibility. His work on developing in-game sound effects, enhancing music transitions, and refining player feedback mechanisms contributed significantly to the game’s audio experience. For instance, in Just Dance’s battle mode, the team spent weeks fine-tuning SFX to ensure that energy levels were maintained across song transitions without jarring interruptions. Additionally, subtle effects such as footstep sounds, applause, and even costume rustling were layered in to enhance immersion.

    For aspiring sound designers, Benoit’s talk underscored the importance of adaptability, collaboration, and technical proficiency. His ability to bridge creative and technical aspects of sound design made him a key contributor to one of Ubisoft’s most successful franchises. He also highlighted how working in a rhythm-based game required constant iteration, as any mistake in beat markers or mixing could significantly impact the player’s experience. The balance between technical precision and creative storytelling through sound remains an essential aspect of game audio development.

    Benoit’s lecture offered a fascinating glimpse into the behind-the-scenes work that brings rhythm-based games to life. His experiences serve as an inspiration for those interested in audio design for interactive media, highlighting the rewarding challenges of working in the field of game sound.