Category: People

  • Dr John McGowan – Customisable Approaches to Accessible Technologies

    I am Dr John McGowan, a lecturer at Edinburgh Napier University, exploring how customisable approaches to accessible technologies can be useful for neurodiverse adults.


    My interest in research stems from a passion for creativity, primarily in music, where non-linear communication led me to explore how multimodality could broaden expressive capabilities in neurodiverse adults. I recognise the ways in which musical expression can reflect unspoken emotions and feelings, and how transformative personal expression can be for human development. This was the basis for my PhD research, in which musical exploration was combined with visual modalities, via an interactive application, so that autistic adults with differing capabilities could express themselves through play in music therapy sessions.

    Prior to this, my master’s degree focused on the 3D visualisation of sound, using cymatics as a basis. Cymatics are the impressions that sound leaves in media such as water, or in salt on a Chladni plate. The visualisation work in the master’s degree investigated what sound might look like travelling through air as sonic bubbles. This concept was developed further during my PhD, where a real-time application visualised sound through a projector, using input from a microphone or a MIDI keyboard. Depending on the volume, pitch, and tone of the note triggered, a specific 3D cymatic shape would be rendered in real time. Importantly, users could customise colour, allowing them to personalise the experience. In addition, a custom interactive table was designed and built to accommodate skills of any level: the table could be played like an audio-visual instrument, giving immediate feedback based on tactile input.
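    To make the note-to-shape idea concrete, here is a minimal illustrative sketch, not the original application: it maps a MIDI note's pitch and velocity to hypothetical cymatic shape parameters. All names here (`CymaticShape`, `nodal_rings`, `shape_for_note`) and the specific mapping rules are invented for illustration.

```python
# Illustrative sketch: deriving a simple cymatic pattern from a MIDI note.
# The mapping rules below are assumptions, not the PhD application's design.

from dataclasses import dataclass


@dataclass
class CymaticShape:
    nodal_rings: int   # number of concentric rings in the pattern
    radius: float      # overall size, driven by loudness
    hue: float         # user-customisable colour, 0.0-1.0


def shape_for_note(midi_note: int, velocity: int, hue: float = 0.6) -> CymaticShape:
    """Derive shape parameters from pitch and volume."""
    # Higher pitches yield more complex patterns: roughly one ring per octave above A0 (21).
    rings = 1 + (midi_note - 21) // 12
    # Louder notes yield larger shapes; MIDI velocity ranges 0-127.
    radius = 0.2 + 0.8 * (velocity / 127.0)
    return CymaticShape(nodal_rings=max(1, rings), radius=radius, hue=hue)


# Example: middle C (note 60) played loudly
print(shape_for_note(60, 100))
```

    In a real-time system, a renderer would redraw the pattern each frame from these parameters, with the hue exposed in the user interface for personalisation.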

    My own continued use of technology, along with contemporary research, has highlighted both the occasional over-adoption of new technologies and the potential of using and augmenting existing ones. Work in this area particularly supports autistic adults using familiar tools that already meet their needs reliably. What we can do, as responsible researchers, is look at ways of exploiting the existing tools and components within mobile technology to develop augmented multimodal stimuli as self-management tools. This may also allow greater accessibility for those who are economically disadvantaged, as well as those who prefer familiar tools and technologies.

    Currently, I am leading a project investigating the potential of augmented reality for stress management in autistic adults. Via real-time biometric monitoring (for example, using a smartwatch to detect heart rate, or the microphone on a mobile device to measure breathing), the proposed application will react and allow the user to deploy customised sensory stimuli as positive distractors, or as real-time assistance in times of stress. Two phases of this study have already been completed: an initial survey with over 200 participants, both autistic people and caregivers, who provided feedback on stress triggers and issues related to hypersensitivity and hyposensitivity; and a second study, comprising interviews with over 20 participants, which investigated these issues in more detail regarding the needs of autistic adults and the desire and potential for new tools that could help them manage stressful situations in day-to-day life. Some of the key themes we aim to develop focus on familiarity, positive distraction, and alerting individuals to changes in their physical state that may go unnoticed due to sensory issues.
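    The kind of trigger logic such an application might use can be sketched very simply: compare recent heart-rate samples against a personal baseline and flag when a calming, user-chosen stimulus should be offered. The thresholds, window size, and function name below are illustrative assumptions, not the project's actual design.

```python
# Hypothetical stress-trigger sketch: flag when the recent average heart rate
# exceeds a personal baseline by a chosen factor. Values are illustrative only.

def should_offer_distractor(samples, baseline_bpm, threshold=1.25, window=5):
    """Return True when the average of the last `window` heart-rate samples
    exceeds baseline_bpm * threshold, suggesting a possible stress response."""
    if len(samples) < window:
        return False                      # not enough data yet
    recent = samples[-window:]
    avg = sum(recent) / len(recent)
    return avg > baseline_bpm * threshold


# Example: resting baseline of 70 bpm, rising readings from a smartwatch
readings = [72, 75, 90, 95, 99, 102, 105]
print(should_offer_distractor(readings, baseline_bpm=70))
```

    A deployed system would of course need smoothing, sensor-error handling, and per-user calibration, and would surface the alert as the user's own customised stimulus rather than a generic notification.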

  • Dr Iain McGregor: Advancing Interactive Media Design at Edinburgh Napier University

    Dr Iain McGregor serves as an Associate Professor at Edinburgh Napier University, where he specialises in interactive media design and auditory perception research. He earned a PhD in soundscape mapping, comparing sound designers’ expectations with listeners’ experiences and providing insights into perceptual differences and design approaches. With over 30 years of experience, he has worked in sound design across various media, including film, video games, mixed reality, and auditory displays. His research covers soundscapes, sonification, and human interaction with auditory systems.

    Contributions to Auditory Perception Research

    Dr McGregor has collaborated with researchers on a range of studies exploring sound design and auditory perception. One notable contribution is his patent, *Evaluation of Auditory Capabilities* (WO2024041821A1), which presents a method for assessing auditory perception, with potential applications in accessibility, user experience design, and auditory technologies.

    Research in Sound and Human-Robot Interaction

    Dr McGregor’s research covers sound design, auditory perception, and human-robot interaction (HRI). He investigates how naming conventions shape perceptions of robotic personalities, improving trust and usability in assistive robotics. His research in sonification aids scientific analysis, while his work on auditory alerts improves their effectiveness in healthcare and transportation. He also explores how immersive audio enriches virtual and mixed reality and examines Foley artistry’s impact on character realism in animation. Collaborating with industry and academia, he applies these insights to mixed reality, film, video games, and robotics.

    Industry Experience

    At the start of his career, Dr McGregor worked with renowned artists and organisations, including the Bolshoi Opera, the City of Birmingham Symphony Orchestra under Sir Simon Rattle, Ravi Shankar, and Nina Simone. His work integrates auditory technologies with creative methodologies, driving innovation in sound research and education. In addition to his academic work, he is currently serving as a consultant for technology companies in the fields of mixed reality and robotics, helping to shape the development of innovative auditory interfaces.

    Academic Contributions and Mentorship

    Beyond his research, Dr McGregor mentors MSc and PhD students in sound design, auditory perception, and human-computer interaction. He encourages interdisciplinary collaboration among designers, engineers, and cognitive scientists. He contributes to curriculum development, aligning courses with advancements in sound and interactive media design. His work in interactive media design and auditory perception informs research and industry practices.

    Technological and Adaptive Advancements in Sound Design

    Advancements in reinforcement learning and edge computing are enabling real-time adaptation in sound design. These technologies allow auditory interfaces to intelligently filter and process sounds, reducing noise while enhancing clarity. Extended audiograms and dynamic digital signal processing (DDSP) further optimise clarity while minimising cognitive load. By integrating real-time adjustments based on user-specific hearing profiles, auditory systems can offer a consistent and accessible listening experience across different environments.
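    The idea of adjusting audio to a user-specific hearing profile can be illustrated with a deliberately simplified sketch: boost each frequency band by the user's measured loss, capped to avoid over-amplification. The band layout, profile values, and compensation rule are assumptions for illustration only, a crude stand-in for the dynamic processing described above.

```python
# Minimal sketch of hearing-profile compensation: per-band gain derived from
# a simplified audiogram. All numbers and the rule itself are illustrative.

def compensate(band_levels_db, hearing_loss_db, max_boost_db=20.0):
    """Boost each band by the user's measured loss in that band,
    capped at max_boost_db to avoid over-amplification."""
    return [
        level + min(loss, max_boost_db)
        for level, loss in zip(band_levels_db, hearing_loss_db)
    ]


# Example: three bands (low, mid, high) with mild high-frequency loss
signal = [60.0, 55.0, 50.0]         # incoming band levels in dB
profile = [0.0, 5.0, 30.0]          # audiogram: measured loss per band in dB
print(compensate(signal, profile))  # → [60.0, 60.0, 70.0]
```

    Real adaptive systems operate on many narrow bands, apply compression rather than flat gain, and update continuously as the acoustic environment changes, but the per-band, profile-driven principle is the same.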

    Sound Design in Cultural and Museum Spaces

    In cultural and museum environments, sound design is also becoming more interactive and adaptive. Augmented reality audio systems offer dynamic storytelling and personalised navigation, responding to visitor movement and engagement levels. Audio cues can guide individuals with mobility constraints along optimised routes, while tailored auditory content enhances inclusivity and immersion.

    Sound Design for Digital and Interactive Environments

    Sound design is transforming interaction with digital environments, robotics, and everyday devices by enhancing immersion, accessibility, and engagement. Spatial audio accurately places sound in mixed reality, creating more natural user experiences, while in robotics, auditory cues foster trust and facilitate smoother interactions. Augmented reality audio supports dynamic storytelling and navigation, adapting to user movement and preferences. Additionally, personalised auditory content and accessibility-focused cues improve inclusivity in museums, public spaces, and virtual environments.

    Sound Design in Transportation and IoT

    To compensate for the near-silent operation of electric vehicles, the automotive industry is developing tailored audio cues that enhance safety and driver awareness. As the Internet of Things (IoT) expands, intuitive auditory interfaces are becoming crucial for seamless device navigation and control. Advancements in loudspeaker technology are also helping reduce noise pollution while improving communication in public spaces.

    The Future of Sound Design

    Research continues to advance adaptive and personalised sound experiences across multiple domains. Extended audiograms and dynamic digital signal processing (DDSP) help optimise clarity while reducing cognitive load, ensuring accessibility across different environments and hearing abilities. Emerging technologies are exploring real-time adjustments tailored to user-specific hearing profiles, bringing greater personalisation to auditory media. As sound design evolves, it will create more intuitive, efficient, and engaging experiences that adapt seamlessly to diverse user needs.