SHAP value-based ERP analysis (SHERPA): Increasing the sensitivity of EEG signals with explainable AI methods.

Journal: Behavior Research Methods

Volume: 

Issue: 

Year of Publication: 

Affiliated Institutions:  Institute of Computer Science, Osnabrück University, Osnabrück, Germany; Institute of Psychology, Osnabrück University, Osnabrück, Germany. Contact: benjamin.schoene@ntnu.no

Abstract summary 

Conventionally, event-related potential (ERP) analysis relies on the researcher to identify the sensors and time points at which an effect is expected. However, this approach is prone to bias and may limit the ability to detect unexpected effects or to investigate the full range of the electroencephalography (EEG) signal. Data-driven approaches circumvent this limitation; however, the multiple comparison problem and the statistical corrections it requires affect both the sensitivity and the specificity of the analysis. In this study, we present SHERPA - a novel approach based on explainable artificial intelligence (XAI) designed to provide the researcher with a straightforward and objective method for finding relevant latency ranges and electrodes. SHERPA comprises a convolutional neural network (CNN) for classifying the experimental conditions and SHapley Additive exPlanations (SHAP) as a post hoc explainer that identifies the important temporal and spatial features. A classical EEG face perception experiment is used to validate the approach by comparing it to the established researcher-driven and data-driven approaches. In line with these approaches, SHERPA identified an occipital cluster close to the expected temporal coordinates of the N170 effect. Most importantly, SHERPA quantifies the relevance of an ERP for a psychological mechanism by calculating an "importance score". On this basis, SHERPA suggests the presence of a negative selection process at both early and later stages of processing. In conclusion, our new method not only offers an analysis approach suitable for situations with limited prior knowledge of the effect in question, but also provides increased sensitivity capable of distinguishing neural processes with high precision.
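
A minimal sketch of the pipeline described in the abstract, for readers who want to experiment with the general idea: a small PyTorch CNN classifies epoched EEG (electrodes x time points), shap.DeepExplainer attributes the classifier's decisions back to individual electrodes and time points, and the absolute attributions are averaged into an importance score. The architecture, layer sizes, variable names, and random stand-in data below are illustrative assumptions, not the authors' exact model.

import numpy as np
import torch
import torch.nn as nn
import shap

N_CHANNELS, N_TIMES = 64, 256  # hypothetical montage: 64 electrodes, 256 samples per epoch

class ErpCNN(nn.Module):
    """Toy stand-in classifier: a temporal convolution followed by a spatial one."""
    def __init__(self):
        super().__init__()
        self.temporal = nn.Conv2d(1, 8, kernel_size=(1, 25), padding=(0, 12))
        self.relu1 = nn.ReLU()
        self.spatial = nn.Conv2d(8, 16, kernel_size=(N_CHANNELS, 1))
        self.relu2 = nn.ReLU()
        self.fc = nn.Linear(16 * N_TIMES, 2)  # two experimental conditions

    def forward(self, x):  # x: (batch, 1, channels, times)
        z = self.relu1(self.temporal(x))
        z = self.relu2(self.spatial(z))
        return self.fc(z.flatten(1))

model = ErpCNN().eval()
epochs = torch.randn(32, 1, N_CHANNELS, N_TIMES)  # random stand-in for preprocessed EEG epochs

# DeepExplainer needs background samples to estimate expected activations.
explainer = shap.DeepExplainer(model, epochs[:16])
shap_vals = explainer.shap_values(epochs[16:])  # attributions, one set per output class

# Depending on the shap version, multi-class attributions come back as a list
# of arrays (one per class) or as a single array with a trailing class axis.
sv = shap_vals[0] if isinstance(shap_vals, list) else shap_vals[..., 0]

# Average absolute attributions across trials -> "importance score" per
# electrode and time point; its peaks mark candidate sensors/latency ranges.
importance = np.abs(np.asarray(sv)).mean(axis=0).squeeze()  # (channels, times)
ch, t = np.unravel_index(importance.argmax(), importance.shape)
print(f"most informative electrode index: {ch}, sample index: {t}")

In a real analysis, the epochs would be preprocessed, baseline-corrected trials, and the resulting importance map would be inspected for spatiotemporal clusters (such as the occipital N170 cluster reported above) rather than a single peak.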

Authors & Co-authors:  Sylvester, Sagehorn, Gruber, Atzmueller, Schöne

Study Outcome 

Source Link:

Statistics
Citations :
Authors :  5
Identifiers
Doi : 10.3758/s13428-023-02335-7
ISSN : 1554-3528
Study Population
Male, Female
MeSH Terms
Other Terms
Deep learning; EEG; ERP analysis; Explainable AI; Feature importance; SHAP
Study Design
Study Approach
Country of Study
Publication Country
United States