Author: iainmcgregor

  • Bumpers, Bells, and Beats: The Dynamic World of Interactive Audio with David Thiel

    The world of game audio presents unique challenges and opportunities, and few individuals have navigated this space with as much depth and insight as David Thiel. In an online guest lecture, Thiel shared his extensive experience in interactive audio, covering its evolution, principles, and creative approaches. With a career spanning over four decades, his work has influenced interactive entertainment, from early arcade machines to modern gaming environments.

    David Thiel

    Interactive Audio vs Linear Audio

    Thiel began by distinguishing between interactive audio—used in games—and linear audio, which is typical of film and television. Unlike linear audio, where sounds are meticulously timed to match fixed visuals, game audio must be dynamic. It adapts in real-time based on player interactions, requiring a more flexible and responsive approach to composition and sound design.

    One key challenge he highlighted is unpredictability. In a film, every sound effect and piece of music is placed with precise timing. In contrast, game audio must account for numerous possibilities—players might trigger events in different sequences or at varying speeds. This means that game sound designers must think beyond static cues, ensuring that every sound conveys meaning while enhancing immersion. Thiel illustrated this with an analogy: imagine doing the audio post-production for a film where you know which events will occur but have no idea when, or in what order, they will happen. He then expanded on this by explaining how game audio is akin to composing for an orchestra where each instrument plays independently based on player input, making real-time adaptation essential.
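
    To make the contrast concrete, the following sketch (not code from the lecture; every name in it is hypothetical) treats interactive audio as a mapping from game events to sounds that fires whenever, and in whatever order, the player triggers them, rather than as a cue list locked to a timeline.

    ```python
    import random

    # Hypothetical event-to-sound mapping; the names are illustrative only.
    EVENT_SOUNDS = {
        "door_open": "sfx_door_creak",
        "coin_pickup": "sfx_coin_chime",
        "enemy_spotted": "stinger_danger",
    }

    def play(sound_name: str) -> None:
        """Stand-in for a call into the game's audio engine."""
        print(f"playing: {sound_name}")

    def on_game_event(event: str) -> None:
        """Interactive audio: respond to events whenever the player causes them."""
        sound = EVENT_SOUNDS.get(event)
        if sound:
            play(sound)

    # The player, not the designer, decides the order and timing of events.
    for event in random.sample(list(EVENT_SOUNDS), k=len(EVENT_SOUNDS)):
        on_game_event(event)
    ```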

    Making Game Audio Meaningful

    One of Thiel’s core principles is that game audio should always provide useful information to the player. Sounds should not just be aesthetically pleasing but should also enhance gameplay. For example, when a player fires a weapon in a game, the sound can communicate crucial details such as the type of gun, whether it is running out of ammunition, or if the shot has hit a target. He elaborated on this concept by breaking down the sound design process for firearms: A shotgun blast should feel weighty and reverberate differently in an open space versus a confined corridor, while an energy weapon should have an otherworldly charge-up sound. Additionally, missed shots and ricochets can provide players with subtle cues about their accuracy, reinforcing the importance of audio feedback.

    Another fundamental aspect is variation. If a game reuses the same audio sample repeatedly, players may quickly lose their sense of engagement. Thiel demonstrated how game audio can introduce subtle variations based on contextual factors, such as the shooter’s position, the remaining ammunition in a weapon, or environmental influences. He provided an example from Borderlands 2, where he spent over 1,000 hours playing and noted how the game’s procedural gun system extended to audio, ensuring that weapons sounded unique based on their make and function. Each gun has a different reload sound sequence, creating deeper immersion and ensuring that players can distinguish between weapons purely through audio cues.
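
    As a minimal sketch of how such variation might be driven (my own illustration, not Thiel's or the Borderlands implementation), the snippet below picks among alternate recordings of the same shot, nudges the pitch slightly on every trigger, and adds a 'running dry' layer once the magazine drops below an arbitrary threshold.

    ```python
    import random
    from dataclasses import dataclass

    @dataclass
    class Weapon:
        name: str
        fire_samples: list[str]   # alternate takes, so repeats never sound identical
        low_ammo_layer: str       # extra rattle layer hinting that the magazine is nearly empty
        ammo: int
        magazine_size: int

    def fire(weapon: Weapon) -> list[tuple[str, float]]:
        """Return (sample_name, pitch_offset_in_semitones) pairs for the engine to play."""
        layers = []
        sample = random.choice(weapon.fire_samples)
        pitch = random.uniform(-0.5, 0.5)              # subtle per-shot variation
        layers.append((sample, pitch))
        if weapon.ammo / weapon.magazine_size < 0.25:  # arbitrary "low ammo" threshold
            layers.append((weapon.low_ammo_layer, 0.0))
        weapon.ammo = max(weapon.ammo - 1, 0)
        return layers

    shotgun = Weapon("shotgun", ["shotgun_a", "shotgun_b", "shotgun_c"],
                     "mag_rattle", ammo=1, magazine_size=8)
    print(fire(shotgun))
    ```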

    Additionally, Thiel discussed the importance of environmental sounds in enhancing game immersion. He explained how in Winter Games (1985), all sound effects were synthesised in real-time, yet they managed to convey the distinct feel of ice skating. By manipulating pitch and timbre, the sound team created convincing audio cues that responded dynamically to skater movements.

    The Role of Music in Interactive Audio

    Music in games also requires a different approach compared to linear media. Thiel recounted his early experiences in the 1980s, where hardware constraints required music to be generated in real-time using algorithms. Though modern hardware allows for pre-recorded music with high production values, he highlighted the benefits of runtime-generated music, such as the ability to synchronise musical cues dynamically with gameplay.

    A particularly engaging example was his work on pinball machines. In Monday Night Football pinball, musical motifs and drum beats were triggered by player actions, enhancing gameplay feedback. When the player scored a goal, a celebratory fanfare played, rising in pitch with each successive goal, reinforcing the excitement. Similarly, synchronised drum fills were used when the ball passed through a spinner, making player actions musically rewarding. Another notable example was from Torpedo Alley, where a cowbell sound layer was introduced when players entered a time-limited game mode, ensuring they knew they had a short window to act. The cowbell was musically integrated, but also acted as a warning that the mode would soon expire, influencing player behaviour.

    Thiel also explored how interactive music could adapt to player performance. He referenced a pinball machine where successfully hitting targets would cause the background music to shift in pitch, making victories more rewarding. Each successful shot in the sequence raised the key of the soundtrack, creating a musical escalation that heightened player excitement.
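
    The escalation idea can be sketched in a few lines (an illustration only, not how the pinball hardware actually implemented it): transpose a short motif upwards by a fixed interval for every successful shot in the streak.

    ```python
    # MIDI note numbers for a C major arpeggio; the motif and interval are invented for illustration.
    BASE_MOTIF = [60, 64, 67, 72]

    def motif_for_streak(successful_shots: int, step_semitones: int = 2) -> list[int]:
        """Raise the key of the motif as the player's streak grows."""
        transpose = successful_shots * step_semitones
        return [note + transpose for note in BASE_MOTIF]

    for streak in range(4):
        print(f"streak {streak}: {motif_for_streak(streak)}")
    ```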

    Challenges in Speech and Sound Effects

    Thiel also touched on the complexities of speech and sound effects in games. While modern storage capacities allow for extensive voice recordings, game dialogue must be carefully managed to maintain clarity and engagement. He shared insights into ‘speech wrangling’—the process of organising, editing, and integrating thousands of voice clips in a way that is useful to game developers.

    Sound effects, meanwhile, are not simply lifted from libraries but are often layered and modified to enhance realism. Thiel illustrated this with an explosion sound effect: Rather than using a single sample, he combined elements such as a low-end impact, a sharp transient, and a synthesised decay to create a more impactful and informative effect. He also explained how the iconic sound of the Ark of the Covenant in the Indiana Jones pinball machine was created using a manipulated orchestral harp sound called ‘Psycho Drone’—an example of how concept sometimes takes precedence over traditional realism in sound design.
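
    A minimal sketch of that layering approach, using synthesised placeholder layers rather than Thiel's actual source material: a low sine 'thump', a sharp noise transient, and a longer synthesised tail are summed and normalised. All durations and frequencies below are arbitrary.

    ```python
    import numpy as np

    SR = 44100  # sample rate in Hz

    def tone(freq_hz: float, dur_s: float, decay_s: float) -> np.ndarray:
        """Exponentially decaying sine burst."""
        t = np.linspace(0, dur_s, int(SR * dur_s), endpoint=False)
        return np.sin(2 * np.pi * freq_hz * t) * np.exp(-t / decay_s)

    def noise_burst(dur_s: float, decay_s: float) -> np.ndarray:
        """White-noise transient with a fast exponential decay."""
        t = np.linspace(0, dur_s, int(SR * dur_s), endpoint=False)
        return np.random.uniform(-1, 1, t.size) * np.exp(-t / decay_s)

    def mix(*layers: np.ndarray) -> np.ndarray:
        """Sum layers of different lengths and normalise the result."""
        out = np.zeros(max(len(l) for l in layers))
        for l in layers:
            out[: len(l)] += l
        return out / np.max(np.abs(out))

    explosion = mix(
        tone(55, 1.2, 0.4),          # low-end impact
        noise_burst(0.15, 0.02),     # sharp transient "crack"
        0.3 * tone(220, 2.0, 0.8),   # synthesised decay / tail
    )
    # write to disk with soundfile.write("explosion.wav", explosion, SR) if desired
    ```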

    Additionally, Thiel described how synchronised sound cues could be used to communicate time-sensitive objectives. In a pinball machine, for example, the sound of a looping crowd chant helped signal an urgent task. Players needed to hit a specific target before the chant faded, using sound as a direct gameplay indicator.

    Mixing and Mastering for Different Environments

    A crucial part of game audio design is ensuring that sounds are balanced correctly in different playback environments. Thiel noted the differences between public spaces (such as arcades or casinos) and private listening setups (home gaming, mobile devices). In noisy public settings, audio cues must be clear and loud, often using aggressive mixing techniques such as ducking (reducing background volume when key sounds play). In contrast, home environments allow for more subtle layering, offering richer detail and depth.
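
    Ducking can be sketched very simply (an illustration, not a production sidechain compressor): attenuate the background while the key sound is audible, then let the gain recover smoothly. The threshold, depth, and release time below are arbitrary.

    ```python
    import numpy as np

    def duck(background: np.ndarray, key: np.ndarray, depth_db: float = -9.0,
             sr: int = 44100, release_s: float = 0.25) -> np.ndarray:
        """Lower the background wherever the key sound is present, then recover."""
        n = len(background)
        key = np.pad(key, (0, max(0, n - len(key))))[:n]
        active = np.abs(key) > 0.01                  # crude "key sound is playing" detector
        ducked_gain = 10 ** (depth_db / 20)          # linear gain while ducked
        release = np.exp(-1 / (release_s * sr))      # one-pole smoothing for the recovery
        gain = np.ones(n)
        g = 1.0
        for i in range(n):
            g = ducked_gain if active[i] else (g * release + (1 - release))
            gain[i] = g
        return background * gain

    # usage (both arrays assumed to be float signals at the same sample rate):
    # mix = duck(music, announcement) + announcement
    ```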

    The Passion Behind Game Audio

    Thiel concluded his talk with reflections on the industry’s approach to audio. Despite the immense progress in gaming technology, he observed that audio still receives a smaller share of development resources compared to graphics. However, for those passionate about sound, game audio remains a deeply rewarding field that requires creativity, problem-solving, and technical expertise.

    This lecture provided a fascinating exploration of interactive audio, offering both historical perspectives and practical insights. Whether you are an aspiring game audio designer or simply interested in the intricacies of interactive sound, Thiel’s knowledge and experience shed light on the challenges and artistry of game audio creation.

     

  • Echo Location: Navigating Sonic Interaction Design with Professor Myounghoon Jeon

    Professor Myounghoon “Philart” Jeon of Virginia Tech recently delivered an engaging online guest lecture on sonic information design, where he explored the intersection of auditory perception, cognitive science, and interactive sound design. His research spans auditory displays, human-computer interaction, and affective computing, with applications in assistive technologies, automotive interfaces, and interactive performance. Throughout the lecture, he shared detailed insights into the process of designing and evaluating auditory cues, explaining how specific sound design choices impact usability, accessibility, and engagement.

    Myounghoon "Philart" Jeon

    The Evolution of Sonic Information Design

    Professor Jeon introduced sonic information design as a field that integrates sonification, auditory displays, auditory user interfaces, and sonic interaction design. While sound design has historically been guided by artistic intuition, his work highlights a shift towards scientific, data-driven approaches. This transition ensures that auditory interfaces are both intuitive and efficient, optimising interaction in hands-free, visually demanding, or multi-tasking environments.

    One example of this approach is his development of “Spindex” (Speech Index), an auditory menu navigation system that enhances efficiency by using compressed speech cues instead of full words. Instead of users listening to long, spoken menu options, Spindex provides shortened speech cues, allowing them to scan options quickly. Through user testing, he found that people could navigate menus more effectively when exposed to a combination of compressed speech and indexed categories, rather than traditional text-to-speech output. The decision to use speech compression without pitch alteration ensured that the information remained intelligible while increasing the speed of interaction.
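
    To illustrate the underlying idea of shortening speech without changing its pitch (a sketch of the principle only, not the published Spindex tooling; the file names are hypothetical), a phase-vocoder time stretch such as librosa's can compress a spoken menu prompt into a brief cue:

    ```python
    import librosa
    import soundfile as sf

    # Hypothetical menu prompts recorded as WAV files.
    MENU_PROMPTS = ["artists_abba.wav", "artists_beatles.wav", "artists_coldplay.wav"]

    def make_compressed_cue(path: str, rate: float = 2.5) -> str:
        """Time-stretch speech so it plays faster (shorter) without altering its pitch."""
        y, sr = librosa.load(path, sr=None)
        cue = librosa.effects.time_stretch(y, rate=rate)   # rate > 1 shortens the clip
        out_path = path.replace(".wav", "_cue.wav")
        sf.write(out_path, cue, sr)
        return out_path

    for prompt in MENU_PROMPTS:
        make_compressed_cue(prompt)
    ```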

    Applications of Auditory Displays

    Professor Jeon discussed a range of applications where sound enhances usability and accessibility, particularly in assistive technology, automotive sound design, and interactive exhibitions. One of his most practical and tested projects focused on indoor navigation for visually impaired users. His team developed a wearable navigation system that incorporates ultrasonic belts providing both tactile and auditory feedback. The sound design choices involved creating gradual frequency shifts to indicate proximity to obstacles. Low-pitched tones signalled distant objects, while higher-pitched tones and increasing intensity indicated closer obstructions, ensuring users could interpret spatial information efficiently.
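
    A toy version of this distance-to-sound mapping might look like the following (my sketch; the frequency range and maximum detection distance are invented for illustration):

    ```python
    def proximity_tone(distance_m: float, max_range_m: float = 4.0,
                       f_far_hz: float = 220.0, f_near_hz: float = 880.0) -> tuple[float, float]:
        """Map obstacle distance to (frequency_hz, amplitude): closer means higher and louder."""
        d = min(max(distance_m, 0.0), max_range_m)
        closeness = 1.0 - d / max_range_m           # 0 at maximum range, 1 at contact
        freq = f_far_hz + closeness * (f_near_hz - f_far_hz)
        amplitude = 0.2 + 0.8 * closeness
        return freq, amplitude

    for d in (3.5, 2.0, 0.5):
        print(d, proximity_tone(d))
    ```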

    His work in automotive auditory interfaces examined how sound can improve situational awareness for drivers. One project involved designing warning systems for railway level crossings, where drivers might overlook visual alerts due to distraction. His team conducted experiments using different auditory cues, testing whether short, rhythmic pulses or long, sweeping alerts were more effective at conveying urgency without overwhelming the driver. Findings showed that spatialised auditory warnings, where sounds were positioned to indicate the direction of an approaching train, helped drivers respond more accurately than traditional beeping tones.

    Professor Jeon also highlighted his work on interactive sonification in public exhibitions, including the Accessible Aquarium project, which used computer vision to track fish movements and convert them into sound and music. The sound design process for this project involved defining sonic mappings that correlated with fish speed, size, and position. Large fish were assigned deep, resonant tones, while smaller fish produced higher-pitched sounds. The system was further refined by introducing dynamic panning, so the audio reflected the fish’s position within the tank, allowing visually impaired visitors to perceive their movements in real-time.
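
    The kind of parameter mapping described here can be sketched as a simple function from tracked fish state to synthesis controls (an illustration of the principle, not the Accessible Aquarium code; all ranges are assumed):

    ```python
    from dataclasses import dataclass

    @dataclass
    class FishState:
        size: float     # 0.0 (small) .. 1.0 (large)
        x: float        # 0.0 (left edge of the tank) .. 1.0 (right edge)
        speed: float    # 0.0 (still) .. 1.0 (fast)

    def sonify(fish: FishState) -> dict:
        """Map tracked fish parameters to simple synthesiser controls."""
        pitch_hz = 880.0 - fish.size * 700.0    # larger fish -> deeper, more resonant tone
        pan = fish.x * 2.0 - 1.0                # -1 = hard left, +1 = hard right
        note_rate_hz = 0.5 + fish.speed * 3.5   # faster fish -> faster note repetition
        return {"pitch_hz": pitch_hz, "pan": pan, "note_rate_hz": note_rate_hz}

    print(sonify(FishState(size=0.9, x=0.25, speed=0.3)))
    ```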

    The project was later expanded by introducing audience interaction through motion-tracking technology. Visitors could use arm movements to mimic fish, triggering musical patterns that followed their gestures. The decision to incorporate layered harmonic structures ensured that overlapping user-generated sounds remained cohesive rather than chaotic, maintaining an aesthetically pleasing experience while preserving informational clarity.

    Designing Effective Auditory Cues

    Throughout the lecture, Professor Jeon provided detailed insights into sound design decision-making, particularly in branding, interaction design, and auditory icons. In his work with LG Electronics and Samsung, he developed sound profiles for home appliances, ensuring that product sounds were both functional and emotionally resonant. His research explored how users interpret different tonal qualities and how sound frequency influences perceived urgency and pleasantness. In one experiment, he tested whether major-key melodic notifications were perceived as more friendly and reassuring than atonal, percussive alerts.

    Another innovative area of his research involved the development of lyricons (lyrics-based earcons), a novel approach where melodic speech reinforces functional commands. Instead of using generic tones, this system integrated spoken words into short musical motifs, making auditory cues more memorable. For example, turning a device on or off could be represented by a short, ascending or descending melodic phrase, rather than a simple beep. His studies demonstrated that users recalled lyricon-based auditory cues more accurately than traditional earcons, highlighting the potential of music as a tool for reinforcing interaction memory.

    In his dance-based sonification research, Professor Jeon explored how motion-capture technology can translate body movements into real-time music generation. His team designed a system where dancers wore infra-red motion sensors, allowing spatial position and gesture dynamics to control auditory parameters. The sound mappings were carefully structured so that slow, fluid movements produced soft, sustained tones, while sharp, rapid gestures triggered percussive elements. By fine-tuning these interactions, the system ensured that each performance remained expressive yet predictable, allowing dancers to intentionally shape the evolving musical landscape.
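
    As a rough illustration of that gesture-to-sound logic (not the team's actual system; the thresholds are invented), motion features normalised to the 0..1 range could select between a soft sustained voice and a percussive one:

    ```python
    def map_gesture(velocity: float, smoothness: float) -> dict:
        """Crude mapping from normalised motion features to synth behaviour."""
        if velocity > 0.7 and smoothness < 0.4:
            # sharp, rapid gesture -> percussive element
            return {"voice": "percussion", "strike_velocity": velocity}
        # slow, fluid movement -> soft, sustained tone
        return {"voice": "pad", "amplitude": 0.3 + 0.4 * smoothness, "sustain_s": 2.0}

    print(map_gesture(velocity=0.9, smoothness=0.2))
    print(map_gesture(velocity=0.2, smoothness=0.8))
    ```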

    The Future of Sonic Interaction

    Looking forward, Professor Jeon discussed how artificial intelligence, machine learning, and real-time sound generation are shaping next-generation auditory interfaces. One of his projects in this area involves music-based social robots for children with autism, where robotic agents use music to enhance social communication. The system was designed with emotion-sensitive audio cues, allowing the robot to modulate its voice and musical output based on the child’s mood. His team experimented with different musical scales and rhythmic patterns, determining that gentle, repetitive melodic structures were the most effective at capturing attention without overwhelming the child.

    His lecture provided a comprehensive and technically rich exploration of sonic information design, demonstrating how scientific principles, auditory perception, and interactive sound technologies continue to shape human-computer interaction. By combining rigorous research with creative experimentation, his work highlights the growing impact of auditory interfaces in accessibility, engagement, and multisensory experiences across multiple fields.

     

  • Selling the Airwaves: Bruce Williams on Crafting Radio Ads That Stick

    Bruce Williams delivered an insightful online guest lecture, offering a detailed look into his extensive career in audio production and radio commercials. With decades of experience spanning from the early days of analogue to the modern digital landscape, his insights provided valuable knowledge about the evolution of the industry and the techniques essential for producing high-quality radio commercials.

    Bruce Williams

    How to Make Effective Radio Commercials

    A key focus of the lecture was Williams’ expertise in producing radio commercials. He discussed the process of scripting, recording, and editing, emphasising the importance of timing, voice modulation, and background music. He shared practical techniques for creating compelling advertisements that effectively convey a message in a limited time frame.

    Understanding the Listening Audience

    Williams highlighted the importance of considering the audience when crafting a commercial. A well-produced ad must resonate with its listeners by using appropriate tone, language, and pacing. He stressed:

    • Tailoring the Tone and Language: A commercial aimed at a younger audience might use a casual, energetic tone, whereas one for a professional service may require a more formal and authoritative delivery.
    • Considering Listening Context: Listeners in a car, at home, or in a busy environment may have different levels of attention. Ensuring clarity and avoiding excessive complexity helps retain engagement.
    • Matching Music and Sound Effects to Audience Expectations: Different genres and styles of background music can evoke specific emotions that resonate with certain audience groups.
    • Focusing on Call to Action: A clear, compelling directive ensures the listener knows what step to take next, whether visiting a website, making a call, or attending an event.

    Managing Workload and Production Time

    Williams discussed the fast-paced nature of commercial radio production and the efficiency required to meet tight deadlines. He noted that he often had to produce multiple adverts per day, sometimes as many as ten or more, depending on demand.

    • Simple adverts – Those with a single voiceover and minimal sound effects could be completed in 30 minutes to an hour.
    • Complex adverts – Those requiring multiple voice actors, intricate sound design, and music synchronisation could take several hours to perfect.
    • Meeting tight deadlines – In a high-paced radio environment, some commercials had to be turned around within the same day, requiring streamlined scripting, efficient recording, and quick but precise editing.

    The Role of Timing in Commercials

    Timing plays a major role in making radio commercials effective. Williams emphasised that pacing, pauses, and synchronisation with background elements can greatly enhance engagement and clarity.

    • Matching Voiceover Speed to Content: The pace of speech should align with the commercial’s objective. High-energy promotions may require a quicker delivery, while more informative or emotional ads benefit from a slower, deliberate approach. A rushed voiceover can overwhelm listeners, while a sluggish delivery may lose their interest.
    • Pausing for Impact: Strategic pauses allow listeners to absorb key points and create emphasis where necessary. Well-placed breaks in dialogue can add dramatic effect and ensure important details are not lost in rapid narration.
    • Synchronising Music and Sound Effects: Background elements should be carefully timed with the voiceover. Music transitions and sound effects must be placed to complement rather than compete with the spoken message, ensuring a seamless and engaging experience.
    • Adhering to Time Constraints: Given that commercials must fit within precise durations, efficient scripting and editing are essential. Removing filler words, tightening sentences, and ensuring smooth transitions help maintain clarity while meeting broadcast length requirements.

    Selecting the Right Voice and Delivery Style

    The voiceover in a commercial plays a major role in setting the tone and evoking the desired emotional response. Williams highlighted how different vocal styles can influence the effectiveness of a commercial. A warm and friendly voice might be ideal for a family-oriented brand, while a dramatic and authoritative voice might suit public service announcements.

    Beyond voice selection, he emphasised proper pacing, intonation, and emphasis on key words. The delivery should feel natural, avoiding monotony or exaggerated enthusiasm. Williams recommended recording multiple versions and selecting the most engaging and well-paced delivery.

    Balancing Music, Sound Effects, and Voice

    Another important aspect of commercial production is the careful blending of music and sound effects without overwhelming the voiceover. Williams described how proper use of background music can enhance the commercial’s impact while maintaining clarity in the spoken message.

    • Music Selection: Choosing the right track reinforces the commercial’s tone. Upbeat music works well for energetic and promotional spots, while softer instrumentals can support emotional or reflective messaging. The tempo should complement the pace of the voiceover rather than competing with it. It is also important to avoid music with heavy vocals that could interfere with speech clarity. Williams suggested testing multiple tracks with the voiceover to ensure a seamless blend before finalising the selection.
    • Sound Effects: Used sparingly, sound effects should reinforce key points and create an engaging experience without being distracting. For example, a car commercial might use the sound of an engine revving at the start to establish context, while a food advertisement could feature subtle sizzling sounds to evoke sensory engagement. Overuse of sound effects can clutter the mix and reduce effectiveness, so strategic placement is necessary.
    • Volume Control: The voiceover should always remain the focal point, with background elements balanced appropriately. Music and sound effects should support, rather than compete with, the spoken message. Williams recommended a slight dip in music volume when important dialogue is delivered and a gradual rise during transitions to maintain a smooth flow.

    Technical Considerations and Audio Processing

    High production quality ensures the commercial sounds polished and professional. Williams covered some key technical aspects, including:

    • Equalisation (EQ): Adjusting frequencies is essential for ensuring clarity and preventing muddiness. For example, reducing low-end frequencies (below 100 Hz) can prevent excessive bass buildup, while slightly boosting the mid-range (2-4 kHz) can enhance speech intelligibility. Additionally, removing any unnecessary high frequencies (above 12 kHz) can eliminate unwanted hiss or harshness in the recording.
    • Compression: Maintaining consistent volume levels keeps a commercial clear and professional. Compression evens out the loud and soft parts of the recording, preventing excessive peaks that could distort the sound. A moderate compression ratio (such as 3:1 or 4:1) with a threshold set to capture only the loudest peaks ensures a balanced sound without making the voiceover sound unnatural or overly processed. A rough code sketch of this EQ and compression chain appears after this list.
    • Noise Reduction: Eliminating background noise is vital to maintaining a clean recording. Williams recommended using noise reduction tools to remove hums, hisses, and low-level room noise while being cautious not to over-process the audio, which could create an unnatural, robotic tone. Recording in a controlled environment, such as a soundproof booth, is the best way to minimise background noise from the outset.
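
    As referenced above, here is a rough Python sketch of such a voiceover chain using the settings Williams mentioned (a simplified illustration, not his actual workflow: the EQ boost is a crude parallel band filter and the compressor is static, with no attack or release stages):

    ```python
    import numpy as np
    from scipy.signal import butter, sosfilt

    SR = 44100  # sample rate in Hz

    def highpass(x: np.ndarray, cutoff_hz: float = 100, order: int = 2) -> np.ndarray:
        """Roll off rumble and bass buildup below roughly 100 Hz."""
        sos = butter(order, cutoff_hz, btype="highpass", fs=SR, output="sos")
        return sosfilt(sos, x)

    def presence_boost(x: np.ndarray, low_hz: float = 2000, high_hz: float = 4000,
                       gain_db: float = 3.0) -> np.ndarray:
        """Add a gentle 2-4 kHz lift for speech intelligibility (parallel band boost)."""
        sos = butter(2, [low_hz, high_hz], btype="bandpass", fs=SR, output="sos")
        band = sosfilt(sos, x)
        return x + band * (10 ** (gain_db / 20) - 1)

    def compress(x: np.ndarray, threshold_db: float = -18.0, ratio: float = 3.0) -> np.ndarray:
        """Very simple static compressor: attenuate peaks above the threshold by the ratio."""
        thr = 10 ** (threshold_db / 20)
        mag = np.abs(x)
        over = mag > thr
        out = x.copy()
        out[over] = np.sign(x[over]) * (thr + (mag[over] - thr) / ratio)
        return out

    def process_voiceover(x: np.ndarray) -> np.ndarray:
        return compress(presence_boost(highpass(x)))

    # demo on a second of placeholder noise standing in for a voiceover recording
    processed = process_voiceover(np.random.randn(SR) * 0.1)
    ```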

    Williams’ insights provided a comprehensive guide to crafting compelling radio commercials. His experiences and advice offered valuable techniques for anyone looking to enhance their skills in audio production and advertising.

     

  • Sound Advice: John Rodda’s Insights into Production Mixing

    John Rodda’s online guest lecture offered an engaging and in-depth exploration of the world of production sound mixing, drawing from his extensive experience across film and television. With a career spanning 35 years and work in over 40 countries, John has established himself as a leading figure in the industry, contributing to productions ranging from documentaries and dramas to major feature films. His lecture provided a rare glimpse into the craft, techniques, and challenges of capturing high-quality audio on set.

    John Rodda

    A Journey Through Sound

    John began by sharing his journey into sound mixing, highlighting how his background in theatre and electronics laid the foundation for his work in film and television. His early experiences included building computers in the late 1970s and working on corporate films and news coverage before transitioning into drama and feature films. He detailed how he navigated the industry at a time when union regulations created significant barriers for newcomers, requiring perseverance and adaptability to succeed.

    Key Roles in Production Sound

    John emphasised the collaborative nature of sound production, highlighting the distinct but interdependent roles within the department:

    • Production Sound Mixer: Oversees all aspects of sound recording on set, ensuring high-quality dialogue capture. They operate the primary recording equipment, balance microphone levels, and collaborate with the director to maintain the intended audio aesthetic. Additionally, they liaise with post-production teams by providing properly labelled sound files and detailed reports.
    • Boom Operator: Responsible for positioning the boom microphone to capture dialogue while staying out of the frame. They must anticipate actor movements, adjust positioning accordingly, and minimise unwanted noise. Boom operators often work in challenging conditions, ensuring optimal sound capture in dynamic filming environments.
    • Sound Assistant: Supports both the mixer and boom operator by setting up equipment, managing cables, placing wireless microphones on actors, and troubleshooting technical issues. They also help maintain sound logs and ensure the smooth operation of the sound department throughout filming.

    Each of these roles contributes to delivering clear, high-quality audio, ultimately enhancing the storytelling experience.

    Adapting to Industry Changes

    John reflected on the evolution of sound recording technology, from mono Nagra tape recorders to sophisticated multi-track digital systems. He discussed how advancements such as wireless microphones and timecode synchronisation have improved sound recording flexibility while accommodating modern filmmaking techniques, including multi-camera setups and wide-and-tight shot combinations. Current industry hardware has significantly improved efficiency and reliability, with modern digital recorders offering multi-track recording, high-resolution audio, integrated timecode systems, and advanced metadata management, enabling seamless file transfers to post-production. Wireless microphone systems now feature extended range, improved RF stability, and digital encryption, enhancing dialogue capture even in challenging environments. Additionally, timecode synchronisation tools ensure frame-accurate alignment between cameras and audio recorders, streamlining workflows and making location sound recording more adaptable for complex setups.

    Challenges and Solutions in Sound Mixing

    John provided practical examples of overcoming sound challenges on set. While working on Downton Abbey, he had to radio mic every actor to meet the director’s preference for unrestricted camera movement. The historical costumes posed additional difficulties in concealing microphones without compromising sound quality. To mitigate these issues, he collaborated with the wardrobe team and developed discreet mic placements that preserved clarity while remaining hidden.

    Another notable example involved a dinner scene, where the clinking of silverware risked overpowering dialogue. John strategically positioned boom microphones and used lavalier mics hidden within costumes to isolate voices while maintaining natural ambiance.

    Similarly, while working on Shackleton, extreme cold conditions threatened equipment functionality. He employed insulated batteries and performed regular system checks to ensure uninterrupted recording.

    For Airport, John devised a wireless timecode system that allowed independent sound recording, enabling him to position himself optimally while the camera moved freely in a busy airport setting.

    Memorable Projects and Industry Recognition

    John shared stories from notable projects, including The Fifth Estate, Longitude, and Shackleton. Longitude, a historical drama, posed unique challenges in capturing the sound of intricate mechanical clockwork, which was integral to the story. In The Fifth Estate, which dealt with the WikiLeaks controversy, he had to navigate fast-paced newsroom settings and international locations, ensuring clear dialogue in constantly shifting environments. His ability to adapt to different genres and production styles has earned him industry recognition, including a BAFTA for Airport and a nomination for Paddington Green. John also spoke about his time on 24: Live Another Day, where he balanced complex action sequences with high-pressure recording environments, demonstrating how experience and quick thinking are essential for a sound mixer.

    Advice for Aspiring Sound Professionals

    John advised aspiring professionals to develop technical skills, gain hands-on experience, and build strong working relationships within the industry. He stressed that attention to detail is key, as minor sound issues can become major post-production problems. He recommended learning about different recording techniques, experimenting with mic placement, and understanding the physics of sound to become a well-rounded professional.

    He also highlighted the importance of being adaptable and proactive. On sets where unexpected technical issues arise, being able to think on one’s feet and offer quick solutions is invaluable. He recalled an instance on 24 when a hidden microphone placement failed during a take, requiring an immediate, seamless backup solution to avoid disrupting the shoot.

    Additionally, he encouraged those entering the field to shadow experienced professionals, seek mentorship opportunities, and remain up to date with industry advancements. Sound recording techniques and equipment continue to evolve, and staying informed about the latest innovations ensures ongoing career growth.

    Conclusion

    John Rodda’s lecture provided invaluable insights into the world of production sound mixing. His extensive experience and practical knowledge underscored the critical role of sound in storytelling. As technology continues to evolve, his insights serve as a testament to the enduring importance of high-quality sound in film and television. For those looking to enter the field, his expertise offered both inspiration and guidance, reinforcing the idea that persistence, adaptability, and a strong technical foundation are crucial to success.

     

  • There and Back Again: The Foley Journey of John Simpson

    The magic of cinema extends far beyond what appears on screen. The immersive power of film owes much to sound, particularly the subtle, often unnoticed details that breathe life into scenes. At the heart of this auditory craft is Foley, a specialised discipline within sound design that recreates everyday sounds to enhance the cinematic experience. From the rustling of fabric to the crunch of footsteps on gravel, Foley artists bring a level of realism and texture that elevates storytelling.

    John Simpson

    John Simpson’s Path into Foley

    A distinguished Foley artist, John Simpson, shared insights into the evolving landscape of the craft. With a career spanning decades, his journey into Foley was, like many others, serendipitous. Initially a Foley recordist, his early work took place in an era when Foley was far less complex than it is today. At that time, Foley was not a comprehensive soundscape but rather a tool for editors to fill in the gaps left by automated dialogue replacement (ADR). Soundtracks were often constructed from a limited number of layers, with minimal dedicated Foley elements. However, as film audio technology advanced and stereo soundtracks became standard, Foley took on a more significant role in shaping cinematic experiences.

    Bringing Iconic Films to Life

    John Simpson’s extensive film credits include work on major productions such as Mad Max: Fury Road, The Adventures of Tintin, Happy Feet, King Kong, The Lego Movie, and The Hobbit trilogy. His expertise has contributed to some of the most visually and sonically compelling films of recent times, adding depth and authenticity to their soundscapes. His ability to craft distinctive auditory textures has made him a highly sought-after Foley artist in the industry.

    The Art of Sound Creation

    Simpson detailed some of the unique approaches he has taken in his work. For The Adventures of Tintin, he described the challenge of creating exaggerated yet believable sounds for animation, including the intricate layers needed for the dog Snowy’s movements. He also explained how he and his team created the sound of ship sequences by recording inside a Foley room, using a specially built box to enclose a microphone and simulate the enclosed resonance of a ship’s interior.

    In Happy Feet, Simpson recalled working extensively on the penguin characters’ movements. To replicate the sound of their feet sliding on ice, he used his fingers on different textured gloves and employed frozen fish to achieve realistic wet movements. The Foley team also created unique water effects by stomping around in a bathtub. Additionally, for the character’s dance sequences, he used wooden boards and various shoe types to capture the different weights and styles of tap dancing.

    Crafting the Sounds of Middle-earth

    For The Hobbit films, he described the meticulous work involved in bringing the sounds of Middle-earth to life. One of the most memorable tasks was recreating the sound of Bilbo running through Smaug’s treasure hoard. This involved pouring and shifting buckets of metal coins across the floor and layering multiple elements, including washers, chains, and lightweight metal pieces, to achieve depth and variation. In addition, he highlighted the use of cloth and military-style rustling to enhance battle sequences. He also mentioned that much of the squishy, organic sounds of creatures in The Hobbit were recorded long before the film, creating a library of textures used in later productions. For dragon movements, he described using leather straps, adding weight by dragging them across various surfaces.

    Experimentation and Innovation

    Experimentation remains at the core of Foley. Simpson recalled a scene in King Kong that required simulating the movement of Kong’s enormous hands gripping the Empire State Building. Instead of relying solely on standard props, he used a large copper pot with padding inside to mimic the deep resonance of Kong’s fingers moving across the structure. He also shared how sounds for the ship sequences in King Kong were recorded by stomping around in different types of boots and walking across various wooden surfaces.

    For The Lego Movie, he described how the character MetalBeard’s mechanical movements were enhanced with retractable vacuum cords, chains, and various metallic elements to create an organic yet plastic sound. He also explained how he carefully mixed different Lego brick sounds at various angles and pressures to ensure authenticity while keeping the movements dynamic and engaging. He mentioned how he used garage sales and second-hand stores to find items that could be creatively repurposed for unique sounds.

    For Walking with Dinosaurs, Simpson shared how he approached the challenge of creating dinosaur footsteps. Boxing gloves were used to strike damp sand, providing a weighty, natural sound. To add layers of movement, leather straps and thick ropes were manipulated to simulate the shifting of large creatures. Additionally, he recorded various cloth and harness movements to replicate the creaking of dinosaur skin and muscle shifts. The roaring of creatures was sometimes constructed using unconventional means, such as dragging large, heavy objects across surfaces to create deep, guttural tones.

    Recording Techniques and Unique Methods

    Simpson also experimented with microphone placement to capture unique sounds. For heavy, weighty footsteps, he buried microphones underground and recorded stomping overhead. To simulate the distant echo of footsteps in deep caves, he used long metal pipes and recorded sounds reverberating through them. Additionally, he used hydrophones to capture underwater movements, such as recording splashing and bubbling sounds for ocean-based scenes.

    The Future of Foley

    Beyond feature films, Foley plays a crucial role in television, video games, and even virtual reality experiences. The craft continues to adapt alongside technological advancements, ensuring that sound remains an integral part of storytelling, no matter the medium. While Foley often goes unnoticed by audiences, its absence would be keenly felt, as it provides the subtle authenticity that draws viewers into the worlds they see on screen.

    This lecture highlighted the dedication and ingenuity required in the field of Foley. The work of Foley artists, often overlooked, remains a cornerstone of cinematic storytelling. As long as there are stories to be told, Foley will continue to shape the way audiences experience them, adding depth, realism, and emotional resonance to every scene.

     

  • The Sound Design of Fantastical Elements: Insights from Brad Meyer

    The world of video games is filled with sounds that transport players to new dimensions, from the roar of mythical creatures to the hum of futuristic technology. Behind these sounds are dedicated professionals like Brad Meyer, Audio Director at Sucker Punch Productions, who delivered an engaging guest lecture on fantasy sound design.

    Brad Meyer

    With extensive experience in the industry, Meyer has worked on a variety of games, including Jurassic Park, Spider-Man: Web of Shadows, and Infamous: Second Son. Throughout his talk, he offered invaluable insights into the creative process of designing sounds for fantastical worlds, emphasising grounding fantasy in reality to create more relatable and engaging audio experiences.

    Bringing the Fantastic to Life Through Sound

    Fantasy in games is often associated with dragons, magic, and mythical landscapes. However, as Meyer pointed out, fantasy sound design extends beyond these stereotypes, encompassing anything that is imagined and does not exist in the real world. Even games with realistic settings use elements of fantasy, whether in the form of futuristic interfaces, enhanced environments, or supernatural abilities.

    Meyer stressed the importance of using real-world sounds as the foundation for fantastical audio effects. By incorporating recognisable sonic elements, such as animal calls, mechanical noises, or environmental recordings, designers create sounds that feel authentic while still supporting the game’s imaginative elements.

    For instance, in Jurassic Park 3: The DNA Factor, Meyer and his team used a mix of bird calls, walrus vocalisations, and cat growls to construct dinosaur sounds that felt realistic despite the fact that no one knows what dinosaurs actually sounded like. Similarly, in Spider-Man: Web of Shadows, he layered compressed air bursts and rope creaks to create the sensation of Spidey’s web-slinging.

    The Process of Crafting Unique Sounds

    Throughout the lecture, Meyer shared several examples of how he and his team develop sounds through experimentation, layering, and creative thinking.

    • Using Unconventional Materials – While designing the sound of Frogger’s tongue, Meyer found inspiration in party whistles, using their retracting motion to simulate the flick of a frog’s tongue. By varying the speed and intensity of the whistle, different versions of the sound were created to match in-game actions.
    • Inventing New Recording Techniques – To create the sound of moving concrete debris in Infamous: Second Son, Meyer built a tumbling machine by filling a padded rubbish bin with rocks and rolling it around. The method allowed for recording a continuous shifting effect, which was later manipulated to fit different gameplay sequences.
    • Manipulating Sound Through Processing – In Infamous, neon powers were designed using fluorescent tube recordings, which were then modified with spectral filters to create a futuristic energy effect. By applying different reverb and pitch-shifting techniques, the sound was given a more otherworldly quality.
    • Blending Organic and Synthetic Sounds – The sound of a dragon’s wings in a fantasy game may be created by layering recordings of bird wing flaps, slowed-down flag waving, and subtle jet engine noise to provide depth and power. Time-stretching and low-pass filters can be applied to emphasise the size of the creature (a rough sketch of this kind of layering appears after this list).
    • Recording and Modifying Everyday Objects – To mimic the sound of a robotic exosuit, designers may record hydraulic lifts, servo motors, and mechanical gears, then process them using granular synthesis and distortion effects to enhance their sci-fi aesthetic.
    • Creating Footsteps for Different Surfaces – Footsteps on gravel may be created by recording boots on loose stones, while icy terrain can be simulated by crushing cornstarch or frozen lettuce to create crisp crunching sounds. Layering multiple recordings at different distances can provide more depth and realism.
    • Using Water for Unusual Effects – To produce eerie, otherworldly sounds, designers can use hydrophones to record underwater gurgling, later modifying the pitch and layering additional elements. This technique has been used for deep-sea creatures, alien environments, and magical spells.
    • Layering Human and Animal Sounds – To create monstrous growls or supernatural voices, sound designers often blend human vocalisations with recordings of animals such as tigers, wolves, and pigs. Adjusting the pitch and applying formant shifting can transform these elements into something unrecognisable yet believable.
    • Using Foley Techniques for Impact Sounds – Sword clashes may be recorded using real metal objects, but layering them with additional materials such as celery snaps or baseball bat impacts adds weight and crunch. Similarly, explosions often incorporate recordings of firecrackers, fireworks, and even slowed-down balloon pops to create the necessary sonic texture.
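
    As referenced in the dragon-wings example above, here is a rough sketch of that layer, pitch-shift, and low-pass workflow (an illustration only, not Meyer's actual session; the source file names are placeholders):

    ```python
    import numpy as np
    import librosa
    import soundfile as sf
    from scipy.signal import butter, sosfilt

    SR = 44100

    def lowpass(y: np.ndarray, cutoff_hz: float = 1500, order: int = 4) -> np.ndarray:
        """Darken a layer so it reads as something larger and heavier."""
        sos = butter(order, cutoff_hz, btype="lowpass", fs=SR, output="sos")
        return sosfilt(sos, y)

    def emphasise_size(y: np.ndarray, semitones: float = -7, stretch: float = 1.5) -> np.ndarray:
        """Pitch down, slow down, and filter a recording to suggest a much bigger source."""
        y = librosa.effects.pitch_shift(y, sr=SR, n_steps=semitones)
        y = librosa.effects.time_stretch(y, rate=1 / stretch)   # rate < 1 lengthens the clip
        return lowpass(y)

    def layer(*tracks: np.ndarray) -> np.ndarray:
        """Sum layers of different lengths and normalise to avoid clipping."""
        mix = np.zeros(max(len(t) for t in tracks))
        for t in tracks:
            mix[: len(t)] += t
        return mix / np.max(np.abs(mix))

    # Placeholder source recordings, not actual production assets.
    wing_flap, _ = librosa.load("bird_wing_flap.wav", sr=SR)
    flag_wave, _ = librosa.load("flag_waving.wav", sr=SR)

    dragon_wing = layer(emphasise_size(wing_flap),
                        0.6 * emphasise_size(flag_wave, semitones=-12))
    sf.write("dragon_wing_sketch.wav", dragon_wing, SR)
    ```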

    The Importance of Collaboration and Experimentation

    Meyer highlighted that sound design is a process of constant learning and experimentation. He encouraged students and professionals alike to embrace failure, as each unsuccessful attempt teaches valuable lessons that contribute to better sound design in the future.

    Additionally, he emphasised the importance of community in the sound design industry. Engaging with other professionals, sharing techniques, and attending meetups can lead to fresh ideas and innovation. In Seattle, where Meyer is based, game audio professionals regularly gather to discuss their work, reinforcing the value of collaboration.

    Guidance for Aspiring Sound Designers

    1. Ground fantasy sound design in reality – Using real-world recordings makes fantastical sounds more relatable and engaging.
    2. Experiment with unconventional sources – Anything from household objects to wildlife recordings can become the basis for unique sound effects.
    3. Keep an organised sound library – Effective cataloguing of sounds ensures efficiency in future projects.
    4. Do not be afraid to fail – Trial and error is part of the creative process.
    5. Engage with the sound design community – Collaboration and networking can lead to new opportunities and insights.

    Meyer’s talk provided a fascinating look into the art and science of crafting compelling audio for games. Whether designing the roar of a dragon or the hum of a futuristic machine, the secret lies in finding inspiration in the real world and shaping it into something new.

    For those interested in pursuing sound design, his advice is clear: stay curious, experiment fearlessly, and never stop learning.

     

  • Stepping into Sound: Insights from Dr Vanessa Ament’s Lecture on Foley

    Dr Vanessa Ament, an acclaimed Foley artist and author of The Foley Grail, shared her insights in a fascinating lecture that covered everything from the nuances of Foley artistry to the philosophy behind sound in film. The Foley Grail is widely recognised as a definitive guide to the craft, offering a comprehensive exploration into the techniques, history, and significance of Foley in cinema.

    Dr Vanessa Theme Ament

    The Power of Sound in Storytelling

    Dr Ament underscored how sound shapes emotional responses, sets the tone, and supports the narrative. She pointed to films like Dirty Rotten Scoundrels, where carefully placed silence enhances audience engagement, avoiding unnecessary auditory clutter. She also referenced The Color Purple, where subtle ambient sounds and quiet moments in key emotional scenes amplified the depth of character interactions, making the audience feel more intimately connected to the story.

    Beyond silence, she highlighted how specific sounds can evoke emotional shifts. In Edward Scissorhands, the delicate snipping noises of the protagonist’s scissors were not just functional but reflective of his emotional state—gentle and rhythmic in moments of tenderness, erratic and sharp in times of distress. This attention to sonic detail, she explained, enhances storytelling in a way that audiences often register subconsciously. By using these examples, Dr Ament reinforced the power of sound as an unseen yet essential component of cinematic storytelling.

    Foley as a Craft

    Foley is not just about adding footsteps or the rustling of fabric—it is about enhancing the believability of a character’s movements and interactions with their environment. Dr Ament explained how the best Foley is indistinguishable from production sound, ensuring seamless integration. In her discussion, she highlighted the difference between various approaches, particularly contrasting the Hollywood tendency for hyperrealism with more nuanced approaches in other parts of the world.

    Dr Ament’s Foley work exemplifies the creativity needed to craft immersive and convincing cinematic soundscapes. In Die Hard 2, she and her team crushed VHS tape cases underfoot to authentically replicate the sound of crunching snow, ensuring that each step taken by the characters felt natural and immersive.

    In Predator, the challenge was to give Arnold Schwarzenegger’s movements a sense of weight and power. To achieve this, Dr Ament used a combination of leather straps and metallic elements to create the sounds of his gear shifting with every step. This meticulous approach enhanced the character’s physical presence, ensuring that audiences felt the weight of his every movement.

    Beyond these, Dr Ament has employed unconventional techniques tailored to specific films. In Die Hard, she used a combination of cracked walnuts and frozen bell peppers to create the distinct sound of breaking bones during the film’s intense fight sequences. For The Addams Family, she layered fabric swishes and creaks to bring authenticity to Morticia Addams’ flowing gown, ensuring that every movement felt as elegant and eerie as Anjelica Huston’s performance. Additionally, in Total Recall, she used compressed air bursts and manipulated rubber materials to enhance the futuristic, mechanical quality of the film’s synthetic environments and action-heavy sequences. These examples demonstrate how Foley is an indispensable tool in enhancing storytelling through sound.

    Working with Actors’ Performances

    One of the more compelling parts of Dr Ament’s talk was her exploration of how an actor’s physicality influences Foley. She spoke about working on Batman Returns, where Michelle Pfeiffer’s precise and deliberate movements as Catwoman allowed for equally meticulous Foley work. In contrast, Danny DeVito’s Penguin, though an interesting challenge, called for consistently grotesque, exaggerated sounds rather than delicate nuance.

    Dr Ament used wet rags manipulated with precision to create the grotesque, slimy textures that defined Danny DeVito’s Penguin. This technique helped reinforce the unsettling nature of the character, making his movements feel more visceral and authentic. The Penguin’s waddling gait was accentuated by dampened fabrics, ensuring that every step carried an additional sense of discomfort and unease.

    Additionally, for the same film, various materials such as stiff rubber and leather were used to capture the distinct sound of Catwoman’s costume, bringing an additional layer of realism to Michelle Pfeiffer’s precise, feline movements. Every flick of her whip and the sleek motion of her tight-fitting suit required sonic precision to maintain the character’s agile and controlled presence. Dr Ament ensured that even the subtlest swish of fabric complemented Pfeiffer’s physicality, enhancing the illusion of fluidity and grace in Catwoman’s movement.

    The Influence of Backgrounds and Training

    Dr Ament discussed how a Foley artist’s personal background can shape their approach to sound, influencing the way they perceive and create auditory experiences. Coming from a performance background herself, she highlighted how musicians often have an acute sensitivity to rhythm, tempo, and tonal variation, which translates seamlessly into the nuanced timing of Foley sounds. Dancers, on the other hand, bring a deep understanding of movement and physicality, allowing them to interpret the kinetic energy of on-screen characters with precision.

    She also noted that artists with a fine arts education tend to approach Foley from a sculptural perspective, treating sound as a three-dimensional entity that interacts dynamically with visual storytelling. Additionally, she emphasised that some of the best Foley artists and sound designers emerge from musical backgrounds, where their appreciation for space, resonance, and dynamics enables them to craft sonic environments that are both immersive and expressive. Dr Ament underscored that this diversity in training enriches the field, allowing for a more varied and innovative approach to Foley work.

    The Evolution of Sound Design

    Comparing classic soundtracks with modern blockbusters, Dr Ament was candid in her critique of contemporary sound design trends. She highlighted how many recent films opt for an overwhelming auditory assault, where layers of sound effects, music, and dialogue compete for attention rather than complementing each other. This, she argued, often leads to sensory overload, diminishing the audience’s ability to engage with the film on a deeper emotional level.

    She contrasted this with earlier approaches where sound designers exercised greater restraint, allowing for moments of silence and subtle audio cues to build tension and heighten suspense. For example, in Predator, strategic use of environmental sounds and quiet moments amplified the sense of unease before action sequences, making the soundscape an active part of the storytelling rather than an indiscriminate barrage of noise. Similarly, in Die Hard, selective use of reverb and distant echoes added a sense of scale to the confined spaces of Nakatomi Plaza, reinforcing the intensity of John McClane’s experience without overwhelming the audience.

    Dr Ament noted that while digital advancements have simplified layering sound, they also pose the risk of overuse, reducing the clarity and impact of a film’s auditory landscape. She suggested that modern filmmakers could benefit from revisiting classic films to appreciate how purposeful restraint in sound design can create a more immersive and emotionally resonant experience.

    Global Perspectives on Foley

    Dr Ament has conducted extensive interviews with Foley artists from around the world, uncovering innovative practices that differ from Hollywood’s established methods. She described how some European Foley artists prefer to record sound effects outdoors for authenticity, capturing the natural resonance of footsteps on varied terrain or the organic rustling of leaves. Others incorporate real-world spaces into their recordings, using locations such as abandoned buildings, underground tunnels, or historic courtyards to enhance the authenticity of their sounds.

    She also highlighted the differences in approach across regions, such as how Scandinavian Foley artists often integrate the natural acoustics of forests and icy landscapes into their recordings, while Japanese practitioners frequently employ traditional materials and handcrafted props to achieve unique textures. Additionally, some European studios encourage improvisation by bringing actors into Foley sessions, allowing them to physically engage with props to create more naturalistic performances.

    Dr Ament’s research underscores the vast diversity of Foley techniques worldwide, demonstrating how each region’s cultural and environmental influences shape the soundscapes of cinema in distinctive ways.

    Final Thoughts

    Dr Vanessa Ament’s lecture offered a compelling exploration of sound design and Foley, highlighting craftsmanship, industry challenges, and the evolving role of sound in cinema. For anyone interested in film, her insights serve as a reminder that sound is not just an accompaniment to visuals—it is a storytelling force in its own right.

    She emphasised that effective Foley seamlessly blends into a film, subtly enhancing the experience without drawing attention to itself. As the industry continues to evolve, the challenge remains to balance technical advancements with artistic integrity, ensuring that sound continues to serve the story rather than overwhelm it.

     

  • Careers Support for Sound Design Students: Kelly Dawson, Career Development Consultant

    My name is Kelly Dawson, and I am the Career Development Consultant for the School of Computing, Engineering and the Built Environment. I help students and graduates to explore their career options, find work, prepare their CVs, cover letters and applications for roles, and improve their interview performance.

    Kelly Dawson

    Accessing careers support

    If you would like to talk to a Career Development Consultant, you can book a 30-minute careers appointment on MyFuture. On-campus and online appointments are available. You have access to careers support while you are a student at Edinburgh Napier University and also for two years after graduation.

    Resources

    You can access a variety of career resources on our CareerHub page. There are tools to help you write your CV and cover letter, and to practise for interviews, assessment centres and psychometric tests.
    We have our new CV and Cover Letter Guide, which is full of advice and examples.
    You can also browse internships and graduate jobs on the Opportunities page on MyFuture.

    Events

    We frequently run events on campus and you can view these on our Events page.
    Keep an eye out in October for our Tech Careers Fair too – this is also open to graduates!

    Job Sites

    I would recommend looking beyond generic job boards for graduate roles and internships as sound design opportunities often appear in specialist places:

    Key Job Boards & Career Resources

    Contact Details

    If you would like to ask any questions, you can contact me by email on k.dawson2@napier.ac.uk. You can also book in for a careers appointment with me on MyFuture.

  • Exploring Sound Design for Animation with Dr Damian Candusso

    We had the privilege of hosting an insightful online guest lecture with award-winning sound designer Dr Damian Candusso. Renowned for his work on films such as The Lego Movie, Happy Feet, and Legend of the Guardians, Dr Candusso shared his experiences in crafting immersive auditory landscapes for animation.

    Dr Damian Candusso

    The Journey into Sound for Animation

    Dr Candusso began by discussing his career trajectory, highlighting his early experiences working on hand-drawn 2D animation. He explained how his role encompassed the entire sound production process—from dialogue recording to Foley, sound effects design, and final mixing. His career then progressed into 3D animation, CGI, and stop-motion, each presenting its own unique challenges in sound design.

    The Art of Bringing Animation to Life

    Unlike live-action films, animation lacks any natural location sound, making it the sound designer’s responsibility to construct an entire sonic world from scratch. Dr Candusso described this as an opportunity to ‘play God,’ using sound to bring animated characters and environments to life. He shared insights into creating organic and believable soundscapes, even when working with fantastical or otherworldly settings.

    A key takeaway from the session was Dr Candusso’s emphasis on originality. While sound libraries can be useful, he strives to record and manipulate his own material to create distinctive sound effects. He noted how audiences quickly recognise overused stock sounds, which can detract from immersion.

    A Deep Dive into Major Film Projects

    Dr Candusso provided fascinating behind-the-scenes insights into some of his most well-known projects:

    • Happy Feet: This Oscar-winning animated film required a vast library of sound effects to recreate the icy Antarctic environment. Dr Candusso and his team recorded actual ice-breaking sounds using liquid nitrogen, as well as penguin crowd noises sourced from scientists in Antarctica. Foley work played a crucial role in achieving authenticity, particularly in the movement of feathers and flippers.
    • Legend of the Guardians: As Australia’s first stereoscopic 3D animated feature, Legend of the Guardians posed unique challenges in sound spatialisation. Dr Candusso discussed the difficulty of designing sound for slow-motion action sequences, particularly in conveying the movement of objects through a 3D space. His work on this project sparked his research into sound perception in stereoscopic films.
    • The Lego Movie: The film’s sound design was a balance between realism and maintaining the distinct plastic nature of Lego bricks. Dr Candusso experimented with actual Lego sounds but recognised that excessive plastic clicks could become irritating. By blending realistic mechanical sounds with carefully selected Lego noises, he crafted a dynamic yet authentic soundscape. Notably, he used a child’s broken toy car to create the distinctive sound of Lord Business’s mechanical limbs.

    The Role of Technology and Remote Collaboration

    Advancements in broadband technology have enabled remote collaboration, which has significantly changed the sound production workflow. Dr Candusso highlighted how, despite being based hundreds of kilometres from Sydney, he seamlessly collaborates with sound teams worldwide. He also discussed his custom-built microphones and recording techniques, demonstrating how innovation plays a vital role in his creative process.

    Practical Sound Design Techniques

    Dr Candusso shared several hands-on sound design techniques during his lecture, explaining how to create unique and immersive sounds using everyday materials. Here are some standout examples:

    • Penguin Flippers (Happy Feet) – To recreate the sound of penguin wings flapping, Dr Candusso used exotic bird feathers from costume stores. Different colours and sizes were chosen to vary the weight and movement sounds.
    • Ice Cracking (Happy Feet) – Large pieces of wood were frozen with liquid nitrogen and then shattered with a hammer to mimic the sound of icebergs breaking apart.
    • Mechanical Transformations (The Lego Movie) – The extension sounds for Lord Business’s mechanical legs were recorded using a broken toy car; the grinding of its exposed gears created a convincing mechanical movement effect.
    • Magnetism (Legend of the Guardians) – To create the ‘flick field’ sound, Dr Candusso combined recordings of resonating bells, glass vibrations, and metallic objects manipulated with electromagnets, then processed them for an ethereal effect.
    • Underwater Ambience (Happy Feet Two) – To recreate realistic underwater sounds, Dr Candusso used hydrophones in a swimming pool and manipulated the recordings to simulate the acoustics of deep-sea environments.
    • Sword Swings (Legend of the Guardians) – For the film’s dramatic battle sequences, Dr Candusso combined recordings of metal rods swooshing through the air with high-pitched bell sounds to create the sharp, resonant swipes of the owls’ weapons.

    For aspiring sound designers, experimenting with found objects and layering multiple recordings with subtle processing can yield unique and captivating results.
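
    As a purely illustrative starting point, the short Python sketch below layers two recordings and applies one piece of subtle processing before mixing. It is a minimal sketch, not a technique taken from the lecture: the file names are placeholders, and it assumes the numpy, scipy, and soundfile packages plus two WAV files recorded at the same sample rate.

    ```python
    import numpy as np
    import soundfile as sf
    from scipy.signal import butter, lfilter

    # Placeholder file names -- substitute your own found-object recordings.
    dry, sr = sf.read("metal_rod_swoosh.wav")
    texture, sr_tex = sf.read("bell_resonance.wav")
    assert sr == sr_tex, "resample one file first if the sample rates differ"

    def mono(x):
        """Collapse a stereo array to mono so the layers sum cleanly."""
        return x.mean(axis=1) if x.ndim > 1 else x

    dry, texture = mono(dry), mono(texture)
    n = min(len(dry), len(texture))  # trim both layers to a common length
    dry, texture = dry[:n], texture[:n]

    # Subtle processing: darken the texture layer with a gentle low-pass filter
    # so it sits underneath the dry recording rather than competing with it.
    b, a = butter(2, 2000 / (sr / 2), btype="low")
    texture = lfilter(b, a, texture)

    # Layer the two recordings, keeping the processed layer well below the dry
    # one, then normalise to avoid clipping before writing the result.
    mix = 0.8 * dry + 0.3 * texture
    mix /= max(np.max(np.abs(mix)), 1e-9)
    sf.write("layered_swipe.wav", mix, sr)
    ```

    Keeping the processed layer quiet relative to the dry recording reflects the spirit of the examples above: the second layer adds texture and character rather than loudness.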

    Key Lessons for Aspiring Sound Designers

    Throughout the lecture, Dr Candusso shared invaluable advice for students and professionals alike:

    1. Performance Over Perfection – A sound’s emotional impact often outweighs technical perfection.
    2. Experimentation is Key – Unique sounds often come from unexpected sources. Dr Candusso recounted how he recorded a moth’s fluttering, which, when processed, resembled a mechanical engine.
    3. Storytelling Through Sound – Every sound should serve the narrative and contribute to the overall experience.
    4. Adaptability is Crucial – Working in animation means constantly adapting as visuals evolve throughout production.

    Closing Reflections

    Dr Candusso’s lecture provided a comprehensive look into the intricacies of sound design for animation. His passion for crafting immersive soundscapes was evident, and his insights offered both inspiration and practical knowledge for anyone interested in film sound. He highlighted the ever-evolving nature of sound design, emphasising the importance of staying innovative and adaptable. Additionally, he encouraged aspiring sound designers to explore unconventional sources of inspiration and experiment with emerging technologies to push creative boundaries.

     

     

  • Beasts, Bots & Booms: Scott Gershin on the Sonic World of Pacific Rim

    Few films delivered the sheer auditory spectacle of Pacific Rim. From the ground-shaking footfalls of colossal Jaegers to the guttural roars of Kaiju, the film’s soundscape was nothing short of a masterpiece. Behind this sonic brilliance was Scott Gershin, a veteran sound designer whose passion for storytelling through sound was evident in every project he touched. In a Q&A, Gershin delved into his process, challenges, and the artistry behind creating the soundscape for Pacific Rim.

    Scott Gershin

    Bringing Kaiju and Jaegers to Life

    One of the most exciting aspects of designing sound for Pacific Rim was crafting distinct voices for the Kaiju. Unlike other monster movies, where creatures might share similar sonic qualities, each Kaiju in Pacific Rim had a unique identity. Gershin described the process as akin to composing music—some creatures required deep, resonant tones, while others needed higher-pitched, aggressive shrieks.

    To achieve this, he recorded a range of animal sounds, including elephants, tigers, lions, and even raccoons. However, real-world recordings weren’t always enough. Some sounds needed to be exaggerated or transformed using digital tools. “I wanted to avoid using my usual sound library and do something unique,” Gershin explained. “So, we went out and recorded all sorts of things—animals, industrial machines, and even dropping massive cargo containers in Long Beach just to get the right impact.”

    Similarly, the Jaegers posed a challenge. These massive machines needed to sound heavy yet functional, avoiding the overly sleek, high-tech sounds associated with films like Transformers. Gershin and his team opted for more mechanical, industrial noises inspired by aircraft carriers and military destroyers. “Guillermo [del Toro] didn’t want them to sound too sci-fi. He wanted them to feel grounded,” he noted.

    The Process: From Pitch to Final Mix

    Gershin’s involvement with Pacific Rim spanned nearly two years, beginning before the film was even greenlit. “Guillermo came to me early on and said, ‘I have this idea. Can you help me with the pitch?’” From there, he became deeply embedded in the film’s development, helping to shape its sonic language from the ground up.

    The sound design process followed a natural progression, starting with broad strokes and gradually refining details as the film took shape. In the early stages, when animation was incomplete, the team used storyboards and animatics to guide their sound experiments. “For a long time, it looked like a giant South Park movie,” Gershin joked. “But as the visuals evolved, so did our approach to sound.”

    One of the most crucial aspects of the process was conveying a sense of scale. When dealing with towering, 25-storey-tall robots, sound design had to reflect their massive weight and power. "We spent a lot of time making sure every punch, stomp, and roar felt enormous but also had clarity," he said.

    Challenges and Creative Problem-Solving

    Sound design was as much about problem-solving as it was about creativity. Gershin recalled an early challenge with one of the film’s Kaiju, Otachi. Initially, the sound team assumed the creature would primarily roar, but as the animation developed, they realised Otachi had a far more dynamic range of movements. "For the longest time, every storyboard had its mouth open, so it was constantly screaming. But when we saw the final animation, we knew we needed to rework its sounds to reflect its personality."

    Another unexpected challenge came from attempting to record mining equipment, which seemed like a great idea conceptually but turned out to produce little more than diesel engine noise. “Sometimes, you think something will sound amazing, and then you get there and realize it doesn’t work at all,” Gershin laughed. “You just have to adapt and keep experimenting.”

    Collaboration and the Art of Mixing

    Despite his extensive hands-on approach, Gershin credited much of the film’s success to the collaborative nature of the project. His team included talented sound designers like Charlie Campagna and Peter Zinda, who helped build a rich and layered sonic environment. “It’s like being in a band. Everyone brings something unique to the table,” he said.

    Mixing the final soundscape was another crucial stage. With over 2,000 audio tracks in play, balancing dialogue, music, and effects required meticulous attention. "At any given moment, someone had to take the lead—sometimes it was the music, sometimes the effects, sometimes silence," he explained. "Silence, if used correctly, is the most powerful sound we have."

    Sound Design Tips from Scott Gershin

    Throughout the Q&A, Gershin shared valuable insights for aspiring sound designers.

    • Use Negative Space: Silence can be one of the most powerful tools in sound design. In Pacific Rim, Gershin emphasised that the real challenge wasn’t deciding where to be loud but rather where to go quiet to give the audience a break.
    • Experiment Relentlessly: Gershin and his team spent months recording unique sounds, including unconventional objects like giant cargo containers and mining equipment. However, not every idea worked, highlighting the importance of trial and error.
    • Think Like a Musician: Gershin compared sound design to composing music, where different elements contribute to a larger composition. This approach helped maintain clarity and balance within the complex soundscape of Pacific Rim.
    • Collaborate Effectively: Sound design is rarely a solo effort. Gershin relied on a team of experts to bring the film’s world to life, likening the process to being in a band where each member contributes something unique.
    • Prioritise Realism When Needed: While Pacific Rim is a science-fiction spectacle, the sound design remained grounded in real-world physics. By basing the Jaegers’ sounds on aircraft carriers and military destroyers, Gershin ensured they felt tangible and weighty.
    • Understand the Emotional Beats: Sound isn’t just about effects—it’s about storytelling. Gershin and his team carefully adjusted the mix to highlight the film’s emotional moments, pulling back sound effects when music or dialogue needed to take centre stage.
    • Build a Personal Sound Library: Gershin recommended that sound designers record their own unique sounds whenever possible. Having a personal collection of recordings allows for more original, distinctive work rather than relying on stock libraries; a brief cataloguing sketch follows this list.
    • Listen to Your Environment: He emphasised the importance of listening to real-world sounds for inspiration. Whether recording the streets of Los Angeles or capturing the ambience of London, immersing oneself in different soundscapes provides a greater understanding of sonic textures.
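
    On the personal sound library point, a simple index can make a growing collection of recordings searchable. The sketch below is a minimal, assumed workflow rather than anything Gershin described: the folder path and the tag-by-subfolder convention are hypothetical, and it relies on the soundfile package to read WAV headers and Python's csv module to write the index.

    ```python
    import csv
    from pathlib import Path

    import soundfile as sf

    # Hypothetical library location -- point this at your own recordings folder.
    LIBRARY = Path("~/sound_library").expanduser()

    rows = []
    for wav in sorted(LIBRARY.rglob("*.wav")):
        info = sf.info(str(wav))  # reads only the header, so indexing stays fast
        rows.append({
            "file": str(wav.relative_to(LIBRARY)),
            "duration_s": round(info.frames / info.samplerate, 2),
            "samplerate": info.samplerate,
            "channels": info.channels,
            # Sub-folder names double as search tags, e.g. "metal/impacts".
            "tags": "/".join(wav.relative_to(LIBRARY).parts[:-1]),
        })

    if rows:
        with open(LIBRARY / "index.csv", "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
            writer.writeheader()
            writer.writerows(rows)
    ```

    A spreadsheet or a quick search over index.csv is then enough to find, say, every short metallic impact without auditioning the whole library.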

    Final Thoughts

    Reflecting on Pacific Rim, Gershin saw it as one of the most rewarding projects of his career. The blend of industrial realism, creature vocalisation, and orchestral collaboration made it a unique challenge, but one he embraced wholeheartedly. “Every film is its own creature. You have to let it tell you what it wants to be,” he said.

    For aspiring sound designers, he offered simple advice: “Care. Want it. Want it badly. If you love what you do and are willing to work hard, you get paid to play. And that’s the best job in the world.”

    Scott Gershin’s work continues to inspire sound designers and filmmakers alike. Whether it was the colossal battles of Pacific Rim or the subtle sonic storytelling of American Beauty, his passion for sound was unmistakable. This Q&A provided an insightful look into the world of film sound design, offering valuable lessons for those looking to follow in his footsteps.