A study of decodable breathing patterns for augmentative and alternative communication

Bibliographic Details
Main Authors: Yasmin Elsahar, Sijung Hu, Kaddour Bouazza-Marouf, David Kerr, Will Wade, Paul Hewett, Atul Gaur, Vipul Kaushik
Format: Article
Published: 2020
Online Access:https://hdl.handle.net/2134/13295870.v1
Description
Summary: People who use high-tech augmentative and alternative communication (AAC) solutions still face restrictions in the practical use of present AAC devices, especially when speech impairment is compounded with motor disabilities. This study explores an effective way to decode breathing patterns for AAC by means of a breath-activated dynamic air pressure detection system (DAPDS) and supervised machine learning (ML). The aim is to detect a user’s modulated breathing patterns (MBPs) and turn them into synthesized messages for conversation with the outside world. MBPs are processed using a one-nearest neighbor (1-NN) algorithm with variations of dynamic time warping (DTW) to produce synthesized machine spoken words (SMSW) at managed complexities and speeds. An ethically approved protocol was conducted with the participation of 25 healthy subjects to create a library of 1500 MBPs corresponding to four different classes. A mean systematic classification accuracy of 91.97% was obtained using the current configuration. The implications of the study indicate that improved AAC solutions and speech biometrics decoding could be pursued in the future.
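
Illustrative sketch (not part of the record): the summary describes classifying modulated breathing patterns with a one-nearest-neighbor rule over dynamic time warping distances. A minimal Python sketch of that general idea follows; the study's exact DTW variants, pressure features, sampling settings, and class labels are not given in this record, so the function names, toy traces, and labels below are assumptions.

    # Illustrative sketch only: the exact DTW variant, features, and data used in
    # the study are not given in this record, so everything below is assumed.
    import numpy as np

    def dtw_distance(a, b):
        """Classic dynamic time warping distance between two 1-D pressure traces."""
        n, m = len(a), len(b)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = abs(a[i - 1] - b[j - 1])
                cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                     cost[i, j - 1],      # deletion
                                     cost[i - 1, j - 1])  # match
        return cost[n, m]

    def classify_1nn(query, library, labels):
        """Assign the label of the nearest stored pattern under DTW (1-NN rule)."""
        distances = [dtw_distance(query, ref) for ref in library]
        return labels[int(np.argmin(distances))]

    if __name__ == "__main__":
        # Hypothetical breathing-pressure traces standing in for the four classes.
        library = [np.sin(np.linspace(0, k * np.pi, 60)) for k in (1, 2, 3, 4)]
        labels = ["class_1", "class_2", "class_3", "class_4"]
        query = np.sin(np.linspace(0, 2.1 * np.pi, 55))   # varied-length probe
        print(classify_1nn(query, library, labels))       # likely prints "class_2"

Because DTW aligns sequences of different lengths, a probe breath need not match the stored patterns sample-for-sample; the reported system presumably refines this baseline with its own DTW variations to manage complexity and speed.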