Professor Myounghoon “Philart” Jeon of Virginia Tech recently delivered an engaging online guest lecture on sonic information design, exploring the intersection of auditory perception, cognitive science, and interactive sound design. His research spans auditory displays, human-computer interaction, and affective computing, with applications in assistive technologies, automotive interfaces, and interactive performance. Throughout the lecture, he shared detailed insights into the process of designing and evaluating auditory cues, explaining how specific sound design choices affect usability, accessibility, and engagement.
The Evolution of Sonic Information Design
Professor Jeon introduced sonic information design as a field that integrates sonification, auditory displays, auditory user interfaces, and sonic interaction design. While sound design has historically been guided by artistic intuition, his work highlights a shift towards scientific, data-driven approaches. This transition ensures that auditory interfaces are both intuitive and efficient, optimising interaction in hands-free, visually demanding, or multi-tasking environments.
One example of this approach is his development of “Spindex” (Speech Index), an auditory menu navigation system that enhances efficiency by using compressed speech cues instead of full words. Rather than requiring users to listen to long spoken menu options, Spindex provides shortened speech cues that let them scan the list quickly. Through user testing, he found that people could navigate menus more effectively with a combination of compressed speech and indexed categories than with traditional text-to-speech output alone. The decision to use speech compression without pitch alteration ensured that the information remained intelligible while increasing the speed of interaction.
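As a rough illustration of the idea, the sketch below builds one-letter index cues for a toy menu and compares estimated scanning times with and without them; the menu items, cue durations, and the assumption that a cue is simply the item’s initial letter are illustrative, not details from the study.

```python
# Minimal sketch of the spindex idea: each menu item gets a brief "index" cue
# (here, just its first letter) that a TTS engine would speak quickly while the
# user scrolls, with the full item name spoken only when scrolling pauses.
# The durations are illustrative assumptions, not measured values.

MENU = ["Alarm", "Bluetooth", "Brightness", "Contacts", "Display", "Messages"]

FULL_TTS_SECONDS = 0.9   # assumed time to speak a full item name
SPINDEX_SECONDS = 0.25   # assumed time to speak a one-letter index cue

def spindex_cue(item: str) -> str:
    """Return the short speech cue for an item (its initial letter)."""
    return item[0].upper()

def scan_time(n_items_skipped: int, use_spindex: bool) -> float:
    """Estimate time to scroll past n items before settling on a target."""
    per_item = SPINDEX_SECONDS if use_spindex else FULL_TTS_SECONDS
    # The final, selected item is always spoken in full.
    return n_items_skipped * per_item + FULL_TTS_SECONDS

if __name__ == "__main__":
    for item in MENU:
        print(f"{spindex_cue(item)} -> {item}")
    print("TTS-only scan of 5 items:", scan_time(5, use_spindex=False), "s")
    print("Spindex scan of 5 items: ", scan_time(5, use_spindex=True), "s")
```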
Applications of Auditory Displays
Professor Jeon discussed a range of applications where sound enhances usability and accessibility, particularly in assistive technology, automotive sound design, and interactive exhibitions. One of his most practical and tested projects focused on indoor navigation for visually impaired users. His team developed a wearable navigation system that incorporates ultrasonic belts providing both tactile and auditory feedback. The sound design choices involved creating gradual frequency shifts to indicate proximity to obstacles. Low-pitched tones signalled distant objects, while higher-pitched tones and increasing intensity indicated closer obstructions, ensuring users could interpret spatial information efficiently.
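A minimal sketch of this kind of proximity mapping is shown below; the sensing range, frequency endpoints, and amplitude curve are assumed values chosen only to illustrate the far-to-near pitch and intensity ramp described in the lecture.

```python
# Illustrative mapping from obstacle distance to tone frequency and loudness:
# far obstacles -> low, quiet tones; near obstacles -> higher, louder tones.
# All ranges below are assumptions for the sketch.

MAX_RANGE_M = 4.0        # assumed maximum ultrasonic sensing range
LOW_FREQ_HZ = 220.0      # tone for an obstacle at the edge of range
HIGH_FREQ_HZ = 880.0     # tone for an obstacle directly ahead

def proximity_cue(distance_m: float) -> tuple[float, float]:
    """Map a measured distance to (frequency_hz, amplitude 0..1)."""
    d = min(max(distance_m, 0.0), MAX_RANGE_M)
    closeness = 1.0 - d / MAX_RANGE_M          # 0 = far, 1 = touching
    freq = LOW_FREQ_HZ + closeness * (HIGH_FREQ_HZ - LOW_FREQ_HZ)
    amplitude = 0.2 + 0.8 * closeness          # never fully silent in range
    return freq, amplitude

if __name__ == "__main__":
    for d in (3.5, 2.0, 0.5):
        f, a = proximity_cue(d)
        print(f"{d:.1f} m -> {f:6.1f} Hz, amp {a:.2f}")
```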
His work in automotive auditory interfaces examined how sound can improve situational awareness for drivers. One project involved designing warning systems for railway level crossings, where drivers might overlook visual alerts due to distraction. His team conducted experiments using different auditory cues, testing whether short, rhythmic pulses or long, sweeping alerts were more effective at conveying urgency without overwhelming the driver. Findings showed that spatialised auditory warnings, where sounds were positioned to indicate the direction of an approaching train, helped drivers respond more accurately than traditional beeping tones.
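The directional idea can be sketched as a simple constant-power stereo pan driven by the train’s bearing; the angle range and gain law below are generic audio-engineering conventions used for illustration, not the parameters of the actual study.

```python
import math

# Sketch of a spatialised warning: the azimuth of the approaching train
# (negative = left, positive = right) is rendered as a constant-power stereo
# pan that would be applied to a short rhythmic alert.

def pan_gains(azimuth_deg: float) -> tuple[float, float]:
    """Constant-power left/right gains for an azimuth in [-90, 90] degrees."""
    az = max(-90.0, min(90.0, azimuth_deg))
    theta = (az + 90.0) / 180.0 * (math.pi / 2)  # 0 = hard left, pi/2 = hard right
    return math.cos(theta), math.sin(theta)

if __name__ == "__main__":
    for az in (-60, 0, 60):
        left, right = pan_gains(az)
        print(f"train at {az:+4d} deg -> L {left:.2f}, R {right:.2f}")
```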
Professor Jeon also highlighted his work on interactive sonification in public exhibitions, including the Accessible Aquarium project, which used computer vision to track fish movements and convert them into sound and music. The sound design process for this project involved defining sonic mappings that correlated with fish speed, size, and position. Large fish were assigned deep, resonant tones, while smaller fish produced higher-pitched sounds. The system was further refined by introducing dynamic panning, so the audio reflected each fish’s position within the tank, allowing visually impaired visitors to perceive their movements in real time.
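A toy version of such a mapping might look like the following, where fish size drives pitch, horizontal position drives pan, and swimming speed drives loudness; the Fish record and all numeric ranges are assumptions for illustration rather than the project’s actual mapping tables.

```python
from dataclasses import dataclass

@dataclass
class Fish:
    length_cm: float   # estimated body length from computer vision
    x_norm: float      # horizontal tank position, 0.0 (left) .. 1.0 (right)
    speed_norm: float  # normalised swimming speed, 0.0 .. 1.0

def sonify(fish: Fish) -> dict:
    """Map one tracked fish to simple synthesis parameters."""
    # Larger fish -> lower frequency (inverse mapping within an assumed range).
    size = min(max(fish.length_cm, 5.0), 60.0)
    freq = 1000.0 - (size - 5.0) / 55.0 * 850.0      # 1000 Hz down to 150 Hz
    pan = fish.x_norm * 2.0 - 1.0                     # -1 = left, +1 = right
    gain = 0.3 + 0.7 * fish.speed_norm                # faster = louder
    return {"freq_hz": round(freq, 1), "pan": round(pan, 2), "gain": round(gain, 2)}

if __name__ == "__main__":
    print(sonify(Fish(length_cm=50, x_norm=0.2, speed_norm=0.4)))
    print(sonify(Fish(length_cm=8,  x_norm=0.9, speed_norm=0.9)))
```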
The project was later expanded by introducing audience interaction through motion-tracking technology. Visitors could use arm movements to mimic fish, triggering musical patterns that followed their gestures. The decision to incorporate layered harmonic structures ensured that overlapping user-generated sounds remained cohesive rather than chaotic, maintaining an aesthetically pleasing experience while preserving informational clarity.
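One common way to achieve this kind of cohesion is to quantise every user-triggered pitch to a shared scale, sketched below with an assumed pentatonic scale and MIDI note numbers; the actual harmonic design of the installation may differ.

```python
# Sketch of keeping overlapping, audience-generated sounds consonant by
# snapping every triggered pitch to one shared pentatonic scale.

PENTATONIC = [0, 2, 4, 7, 9]   # C major pentatonic pitch classes

def quantise_to_scale(midi_note: int) -> int:
    """Snap an arbitrary MIDI note to the nearest note of the shared scale."""
    octave, _ = divmod(midi_note, 12)
    candidates = [octave * 12 + pc for pc in PENTATONIC]
    candidates.append((octave + 1) * 12 + PENTATONIC[0])  # allow rounding upward
    return min(candidates, key=lambda n: abs(n - midi_note))

if __name__ == "__main__":
    gestures = [61, 63, 66, 70, 71]   # raw notes derived from arm positions
    print([quantise_to_scale(n) for n in gestures])
```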
Designing Effective Auditory Cues
Throughout the lecture, Professor Jeon provided detailed insights into sound design decision-making, particularly in branding, interaction design, and auditory icons. In his work with LG Electronics and Samsung, he developed sound profiles for home appliances, ensuring that product sounds were both functional and emotionally resonant. His research explored how users interpret different tonal qualities and how sound frequency influences perceived urgency and pleasantness. In one experiment, he tested whether major-key melodic notifications were perceived as more friendly and reassuring than atonal, percussive alerts.
Another strand of his research involved the development of lyricons (lyrics-based earcons), a novel approach in which melodic speech reinforces functional commands. Instead of using generic tones, this system integrated spoken words into short musical motifs, making auditory cues more memorable. For example, turning a device on or off could be represented by a short ascending or descending melodic phrase rather than a simple beep. His studies demonstrated that users recalled lyricon-based auditory cues more accurately than traditional earcons, highlighting the potential of music as a tool for reinforcing interaction memory.
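The concept can be sketched as a small lookup that pairs each spoken word with a melodic contour; the note choices and durations below are invented for illustration and are not taken from the lyricon studies.

```python
# Minimal sketch of the lyricon idea: a spoken word is set to a short melodic
# motif, so the cue carries both verbal and musical information. "on" maps to
# an ascending phrase and "off" to a descending one (assumed design).

LYRICONS = {
    "on":  {"word": "on",  "midi_notes": [60, 64, 67], "note_seconds": 0.15},  # rising
    "off": {"word": "off", "midi_notes": [67, 64, 60], "note_seconds": 0.15},  # falling
}

def describe(event: str) -> str:
    """Describe the melodic-speech cue that would be synthesised for an event."""
    cue = LYRICONS[event]
    contour = "ascending" if cue["midi_notes"][-1] > cue["midi_notes"][0] else "descending"
    return (f"sing '{cue['word']}' over {contour} notes {cue['midi_notes']} "
            f"({cue['note_seconds'] * len(cue['midi_notes']):.2f}s total)")

if __name__ == "__main__":
    print(describe("on"))
    print(describe("off"))
```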
In his dance-based sonification research, Professor Jeon explored how motion-capture technology can translate body movements into real-time music generation. His team designed a system where dancers wore infra-red motion sensors, allowing spatial position and gesture dynamics to control auditory parameters. The sound mappings were carefully structured so that slow, fluid movements produced soft, sustained tones, while sharp, rapid gestures triggered percussive elements. By fine-tuning these interactions, the system ensured that each performance remained expressive yet predictable, allowing dancers to intentionally shape the evolving musical landscape.
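A simplified version of such a mapping is sketched below, where gesture speed selects percussive or sustained material and hand height sets pitch; the thresholds, ranges, and envelope times are assumptions, and the real system works from full motion-capture data rather than two scalar inputs.

```python
# Sketch of a movement-to-sound mapping in the spirit of the dance work:
# gesture speed decides between sustained and percussive material, and the
# performer's hand height above the floor sets pitch.

def map_gesture(speed_m_s: float, hand_height_m: float) -> dict:
    """Translate one motion-capture frame into simple synthesis parameters."""
    percussive = speed_m_s > 1.5              # sharp, rapid gesture (assumed threshold)
    base_freq = 200.0 + 600.0 * min(max(hand_height_m, 0.0), 2.0) / 2.0
    return {
        "mode": "percussive" if percussive else "sustained",
        "freq_hz": round(base_freq, 1),
        "attack_s": 0.005 if percussive else 0.4,   # fast hit vs soft onset
        "release_s": 0.1 if percussive else 2.0,    # short decay vs long tail
    }

if __name__ == "__main__":
    print(map_gesture(speed_m_s=0.3, hand_height_m=1.8))  # slow, high reach
    print(map_gesture(speed_m_s=2.4, hand_height_m=0.6))  # sharp, low strike
```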
The Future of Sonic Interaction
Looking forward, Professor Jeon discussed how artificial intelligence, machine learning, and real-time sound generation are shaping next-generation auditory interfaces. One of his projects in this area involves music-based social robots for children with autism, where robotic agents use music to enhance social communication. The system was designed with emotion-sensitive audio cues, allowing the robot to modulate its voice and musical output based on the child’s mood. His team experimented with different musical scales and rhythmic patterns, determining that gentle, repetitive melodic structures were the most effective at capturing attention without overwhelming the child.
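In the simplest terms, such emotion-sensitive behaviour could be sketched as a lookup from an estimated mood to musical settings, as below; the mood labels and parameter values are purely illustrative and not drawn from the project itself.

```python
# Sketch of emotion-sensitive cue selection for a music-based social robot:
# an estimated mood label picks a tempo, scale, and dynamic level for the
# robot's musical output. Categories and values are assumptions.

MOOD_PROFILES = {
    "calm":     {"tempo_bpm": 72, "scale": "major pentatonic", "dynamics": "soft"},
    "agitated": {"tempo_bpm": 60, "scale": "major pentatonic", "dynamics": "very soft"},
    "engaged":  {"tempo_bpm": 96, "scale": "major",            "dynamics": "moderate"},
}

def select_music(mood: str) -> dict:
    """Choose gentle, repetitive musical settings appropriate to the mood."""
    return MOOD_PROFILES.get(mood, MOOD_PROFILES["calm"])

if __name__ == "__main__":
    for mood in ("calm", "agitated", "unknown"):
        print(mood, "->", select_music(mood))
```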
His lecture provided a comprehensive and technically rich exploration of sonic information design, demonstrating how scientific principles, auditory perception, and interactive sound technologies continue to shape human-computer interaction. By combining rigorous research with creative experimentation, his work highlights the growing impact of auditory interfaces in accessibility, engagement, and multisensory experiences across multiple fields.
