Dr. Zilong Xie Presents at Association for Research in Otolaryngology’s MidWinter Meeting

Last month, School of Communication Science and Disorders assistant professor Dr. Zilong Xie attended the Association for Research in Otolaryngology’s 46th Annual MidWinter Meeting.

“I had the opportunity to catch up with colleagues and friends and meet new people,” he shared. “I attended a couple of great podium and poster sessions related to my work and learned a lot about what the field is excited about.”

Dr. Xie gave both a podium presentation and a poster presentation at the conference, in addition to co-authoring two other presentations: “Cortical Tracking of Continuous Speech-In-Noise: Children’s Use of Linguistic and Acoustic Information” and “Deficits in Sensory Decision-Making Underlie Self-Perceived Hearing Difficulties.”

At the event, Dr. Xie gave the podium presentation “Lexical Bias in Phonemic Categorization: Effects of Spectral Degradation, Cognitive Load, and Aging.” He explained the impact he hopes this work will have, saying, “This study was designed to provide insights into how older adults who wear a cochlear implant to treat hearing loss use linguistic knowledge during speech perception in multisensory environments. I hope that through this line of work, we can better understand why adults who wear a cochlear implant differ significantly in their abilities to understand speech and inform clinical strategies to optimize speech understanding in all cochlear-implant patients.”

Dr. Xie also gave a poster presentation titled “Robust Voice Emotion Recognition Under Cognitive Load.” For this study, the presenters simulated cochlear implant listening in normal-hearing subjects to examine how well they could identify voice emotion while performing everyday tasks. “Accurate emotion perception is crucial for social communication. Human speech conveys information about the speaker’s emotional status, but such information is degraded when listening through a cochlear implant,” Dr. Xie explained. “Our results suggest that subjects can identify voice emotion in such conditions as long as they are not multitasking, albeit they may need to exert more effort.”

Dr. Xie said he especially enjoyed attending the conference, as it was the first in-person MidWinter Meeting since the pandemic. “It is an intense five days but an enriching conference,” he shared.