Advancing human–machine interface systems through artificial intelligence
Abstract
This study explores the use of artificial intelligence to improve human–machine interaction by decoding neural intent from non-invasive muscle signals. Using a GO/NO-GO task, we recorded high-density electromyography (HD-EMG) and EEG data from participants during movement preparation and cancellation. After preprocessing the data (filtering, dimensionality reduction with PCA, and normalization), we trained a convolutional neural network to classify movement-related brain states. The model achieved 72.4% accuracy in distinguishing two functional states and 49.6% accuracy in a three-class setting. Interpretation techniques, including Grad-CAM and temporal sliding windows, were used to examine model decisions and to trace how the neural signals evolve over time. These findings highlight the potential of surface EMG for decoding movement-related brain activity, paving the way for more intuitive control in applications such as prosthetics, rehabilitation, and real-time assistive technologies.
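The preprocessing pipeline described above (filtering, PCA-based dimensionality reduction, and normalization) can be sketched as follows. This is a minimal illustrative example on synthetic data, not the study's actual code: the sampling rate, band limits, trial shapes, and component count are assumptions, and the FFT-mask filter stands in for whatever band-pass filter the authors used.

```python
import numpy as np

def bandpass_fft(x, fs, lo, hi):
    """Crude band-pass: zero FFT bins outside [lo, hi] Hz along the last axis."""
    spectrum = np.fft.rfft(x, axis=-1)
    freqs = np.fft.rfftfreq(x.shape[-1], d=1.0 / fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return np.fft.irfft(spectrum * mask, n=x.shape[-1], axis=-1)

def pca_reduce(trials, k):
    """Project (n_trials, n_features) onto the top-k principal components via SVD."""
    centered = trials - trials.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:k].T

def zscore(x):
    """Normalize each feature to zero mean and unit variance across trials."""
    return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)

# Synthetic stand-in for HD-EMG trials: 20 trials, 64 channels, 200 samples.
rng = np.random.default_rng(0)
trials = rng.standard_normal((20, 64, 200))

filtered = bandpass_fft(trials, fs=1000, lo=20, hi=450)   # per-channel filtering
flat = filtered.reshape(filtered.shape[0], -1)            # one feature vector per trial
reduced = zscore(pca_reduce(flat, k=10))                  # PCA then normalization
```

The resulting `reduced` array (one low-dimensional, normalized vector per trial) would then be fed to a classifier such as the convolutional network used in the study.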