New advances improve cough detection in wearable health monitors

Researchers have improved the ability of wearable health devices to accurately detect when a patient is coughing, making it easier to monitor chronic health conditions and predict health risks such as asthma attacks. The advance is significant because cough-detection technologies have historically struggled to distinguish the sound of coughing from the sound of speech and nonverbal human noises.

Wearable health technologies offer a practical way to detect sounds. In theory, embedded machine-learning models can be trained to recognize coughs and distinguish them from other types of sounds. However, in real-world use, this task has turned out to be more challenging than expected.

"While models have gotten very good at distinguishing coughs from background noises, these models often struggle to distinguish coughs from speech and similar sounds such as sneezes, throat-clearing, or groans," Lobaton says. "This is largely because, in the real world, these models run across sounds they have never heard before.

"Cough-detection models are 'trained' on a library of sounds, and told which sounds are a cough and which sounds are not a cough," Lobaton says. "But when the model runs across a new sound, its ability to distinguish cough from not-cough suffers."

To address this challenge, the researchers turned to a new source of data for training the cough-detection model: wearable health monitors themselves. Specifically, the researchers collected two types of data from health monitors designed to be worn on the chest. First, they collected audio data picked up by the monitors. Second, they collected data from an accelerometer in the monitors, which detects and measures movement.

"In addition to capturing real-world sounds, such as coughing and groaning, the health monitors capture the sudden movements associated with coughing," Lobaton says.

"Movement alone cannot be used to detect coughing, because movement provides limited information about what is generating the sound," says Yuhan Chen, first author of the paper and a recent Ph.D. graduate from NC State. "Different actions - like laughing and coughing - can produce similar movement patterns. But the combination of sound and movement can improve the accuracy of a cough-detection model, because movement provides complementary information that supports sound-based detection."

In addition to drawing on multiple types of data collected in real-world settings, the researchers also built on previous work to refine the algorithms used by the cough-detection model.

When the researchers tested their new model in a laboratory setting, they found that it was more accurate than previous cough-detection technologies. Specifically, the model had fewer "false positives," meaning that sounds the model identified as coughs were more likely to actually be coughs.
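
In evaluation terms, fewer false positives means higher precision: of everything the model flags as a cough, a larger share are real coughs. A short worked example with hypothetical counts (not figures from the paper):

```python
def precision(true_positives, false_positives):
    """Share of flagged coughs that are actually coughs."""
    return true_positives / (true_positives + false_positives)

# Hypothetical counts, for illustration only (not the paper's results):
# an older model flags 90 real coughs but also raises 30 false alarms;
# the new model keeps the 90 while cutting false alarms to 10.
print(precision(90, 30))  # 0.75 -> 75% of flagged events are real coughs
print(precision(90, 10))  # 0.90 -> fewer false positives, higher precision
```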

"This is a meaningful step forward," Lobaton says. "We've gotten very good at distinguishing coughs from human speech, and the new model is substantially better at distinguishing coughs from nonverbal sounds. There is still room for improvement, but we have a good idea of how to address that and are now working on this challenge."

The paper, "Robust Multimodal Cough Detection with Optimized Out-of-Distribution Detection for Wearables," is published in the IEEE Journal of Biomedical and Health Informatics. The paper was co-authored by Feiya Xiang, a Ph.D. student at NC State; Alper Bozkurt, the McPherson Family Distinguished Professor in Engineering Entrepreneurship at NC State; Michelle Hernandez, professor of pediatric allergy-immunology in the University of North Carolina's School of Medicine; and Delesha Carpenter, a professor in UNC's Eshelman School of Pharmacy.

This work was done with support from the National Science Foundation (NSF) under grants 1915599, 1915169, 2037328 and 2344423. The work was also supported by NC State's Center for Advanced Self-Powered Systems of Integrated Sensors and Technologies (ASSIST), which was created with support from NSF under grant 1160483.
