This field concerns research on methods that exploit the specific properties of ear-EEG signals, as well as the joint processing of ear-EEG and signals from other sensing modalities.
Electrophysiological signals in general, and EEG signals in particular, are inherently stochastic, non-stationary, non-linear, and have a low signal-to-noise ratio. These characteristics become even more pronounced when such signals are recorded in real life, where the experimental conditions cannot be controlled to the same extent as in the laboratory. Although EEG signals provide a rich source of information about cortical processing, robust algorithms are needed to extract this information; advanced signal processing and machine learning algorithms are the essential tools for accomplishing this.
One of the key methodologies that have proven efficient is so-called ensemble learning, in which ensembles of classifiers are combined to improve the overall classification performance. Inherently, and somewhat surprisingly, ensemble learning techniques have a positive effect on the bias-variance trade-off; applied in the right way, this effectively diminishes the problem of over-fitting. Our research in EEG and ear-EEG has made extensive use of ensemble learning techniques, tailoring the algorithms to the specific characteristics of ear-EEG and to the specific problems under investigation; see e.g. (Mikkelsen et al., 2017) for an example of this technique. This experience will be used in the further development of methods and will also be extended to unsupervised learning problems such as clustering and anomaly detection. Moreover, the ear-centered sensing platform enables 24/7 recording of physiological and behavioral data from many subjects over extended periods of time, making it well suited for processing by modern machine learning techniques such as deep neural networks and long short-term memory networks. One specific problem that we intend to investigate is the use of learning techniques to adapt a generalized forward model to an individualized forward model based on EEG recordings only. A simple approach to this, without incorporation of a priori knowledge from generalized forward models, has already been investigated in connection with the keyhole hypothesis (Mikkelsen et al., 2017).
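To illustrate the bias-variance benefit of ensemble learning referred to above, the following is a minimal sketch in Python. The feature matrix, labels, and classifier configuration are hypothetical placeholders; the actual ear-EEG pipelines (e.g. Mikkelsen et al., 2017) use tailored feature extraction and validation schemes.

```python
# Minimal ensemble-learning sketch on hypothetical ear-EEG epoch features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical input: one row per 30-s epoch, columns are spectral band
# powers (e.g. delta, theta, alpha, sigma, beta) from a few ear-EEG channels.
X = rng.standard_normal((1000, 15))
y = rng.integers(0, 5, size=1000)   # e.g. five sleep stages (W, N1, N2, N3, REM)

# Bagged decision trees: averaging over many high-variance base learners
# reduces variance without a corresponding increase in bias, which is the
# bias-variance effect described in the text.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```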
EEG reflects an aggregated response of multiple sources activated simultaneously. The use of signal processing methods such as independent component analysis (ICA) (Makeig et al., 1996) and empirical mode decomposition (EMD) (Park et al., 2011) on the multichannel EEG signal enables identification of temporally independent sources, which in turn allows us to isolate the effect of the individual sources in the EEG.
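As a simple illustration of the source-separation principle, the sketch below applies FastICA to a simulated multichannel mixture. The signals and mixing matrix are synthetic assumptions; real ear-EEG analyses would operate on measured data, but the underlying model is the same: observed channels are linear mixtures of temporally independent sources, and ICA estimates an unmixing transform.

```python
# Minimal ICA sketch on a simulated multichannel recording.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
fs, n_samples = 250, 5000
t = np.arange(n_samples) / fs

# Two hypothetical sources: a 10 Hz (alpha-like) oscillation and a slow square wave.
s1 = np.sin(2 * np.pi * 10 * t)
s2 = np.sign(np.sin(2 * np.pi * 0.5 * t))
S = np.c_[s1, s2]

# Mix the sources into four "channels" and add sensor noise.
A = rng.standard_normal((2, 4))
X = S @ A + 0.1 * rng.standard_normal((n_samples, 4))

# Estimate the temporally independent components from the mixtures.
ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)
print(S_hat.shape)   # (5000, 2): one column per recovered source
```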
Another aspect of this research field is the application of sensor fusion techniques to exploit the complementary information provided by other sensor modalities. Examples include: joint processing of ear-EEG, inertial sensing, and respiratory signals to improve sleep assessment; motion artifact reduction in the ear-EEG signal based on mechanical sensor signals; and joint processing of ear-EEG and cardiovascular sounds to decode autonomic nervous system activation. In addition, we will research methods for incorporating metadata, e.g. phenotype data and behavioral data.
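One simple form of sensor fusion is feature-level (early) fusion, sketched below under the assumption that per-epoch features from ear-EEG, inertial, and respiratory sensors are already available; all arrays and feature definitions here are hypothetical, and decision-level fusion or joint models are equally relevant alternatives.

```python
# Minimal feature-level fusion sketch for multimodal sleep assessment.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_epochs = 800

eeg_features = rng.standard_normal((n_epochs, 10))   # e.g. ear-EEG band powers
imu_features = rng.standard_normal((n_epochs, 4))    # e.g. movement counts, posture
resp_features = rng.standard_normal((n_epochs, 2))   # e.g. respiration rate, variability
y = rng.integers(0, 5, size=n_epochs)                # e.g. sleep stages

# Early fusion: concatenate the per-modality features into one feature vector.
X_fused = np.hstack([eeg_features, imu_features, resp_features])

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
print(cross_val_score(clf, X_fused, y, cv=5).mean())
```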