To develop lightweight EEG-based systems for home-based long-term seizure monitoring, algorithms that depend on as few EEG channels as possible, preferably located over the frontal area of the head, are highly desirable. Furthermore, an end-to-end solution without explicit EEG feature extraction would enable computationally efficient online processing. We have developed a Convolutional Neural Network (CNN) based approach, called SeizNet, for absence seizure detection using only 2 frontal EEG channels. This study aims to evaluate the general performance of CNN-based approaches for EEG-based seizure detection. We implement two additional CNN models reported earlier by other groups, SeizureNet and the pyramidal 1D-CNN (P-1D-CNN), and benchmark the 3 CNN models against a support vector machine (SVM) classifier.
EEG data of 29 pediatric patients diagnosed with typical absence seizures are included in this study. Approval was obtained from the institutional review board of KK Women's and Children's Hospital, Singapore. We conduct leave-one-out cross-validation across all subjects. Performance of the methods is assessed with three metrics: sensitivity, the proportion of seizures that are correctly detected; false alarm rate, the number of false positives, i.e., detections of seizure in the absence of seizure, per hour (fp/h); and latency, the mean delay between electrographic onset and the detector's recognition of seizure activity.
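The three metrics above can be sketched as follows. This is an illustrative scoring routine, not the study's exact evaluation code: seizure events are assumed to be (onset, offset) intervals in seconds, detections are timestamps, and a detection counts as a hit if it falls inside an event interval.

```python
def evaluate(events, detections, record_hours):
    """Return (sensitivity, false alarms per hour, mean latency in seconds).

    events     : list of (onset_s, offset_s) seizure intervals
    detections : list of detector firing times in seconds
    """
    detected_latencies = []
    used = [False] * len(detections)
    for onset, offset in events:
        # First unused detection falling within this seizure event.
        hits = [(i, t) for i, t in enumerate(detections)
                if not used[i] and onset <= t <= offset]
        if hits:
            i, t = hits[0]
            used[i] = True
            detected_latencies.append(t - onset)  # delay past onset
    sensitivity = len(detected_latencies) / len(events) if events else 0.0
    false_alarms = used.count(False)  # detections matching no seizure
    far = false_alarms / record_hours
    latency = (sum(detected_latencies) / len(detected_latencies)
               if detected_latencies else float("nan"))
    return sensitivity, far, latency
```

For example, with one seizure at 10-20 s detected at 12.5 s and one spurious detection in a one-hour record, this returns a sensitivity of 0.5 for two events, 1 fp/h, and a 2.5 s latency.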
The experimental results show that all 3 CNN models outperform the baseline SVM classifier (sensitivity 86.6%, false alarm rate 1.98 fp/h). Among the 3 CNN models, P-1D-CNN achieves the best sensitivity, 96.7%, but its false alarm rate is almost 2 fp/h. SeizureNet's false alarm rate is 0.30 fp/h, but its sensitivity is only 90.8%. SeizNet attains a sensitivity of 93.3% with a false alarm rate of 0.60 fp/h. In terms of balancing sensitivity and false alarm rate, SeizNet is the most preferable of the three models we explored. A few features distinguish SeizNet from existing deep learning models for seizure detection. First, dropout layers and batch normalization are added to every convolution layer to avoid model overfitting. Second, dropout is used not only after the fully connected layer but also in various other parts of the model. Finally, the number of filters doubles at each convolution layer, so the network has fewer filters at the lower levels and more filters at the higher levels.
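The three design points above (batch normalization and dropout in every convolution block, and filter counts doubling per block) can be sketched as a layer plan. The base filter count, kernel size, block depth, and dense-layer width below are illustrative assumptions, not the paper's exact hyperparameters:

```python
def seiznet_plan(n_blocks=4, base_filters=8, kernel=5, dropout=0.25):
    """Build an illustrative SeizNet-style layer specification."""
    layers = [("input", {"channels": 2})]  # two frontal EEG channels
    filters = base_filters
    for _ in range(n_blocks):
        layers += [
            ("conv1d", {"filters": filters, "kernel": kernel}),
            ("batch_norm", {}),            # normalization at every conv layer
            ("dropout", {"rate": dropout}),  # dropout at every conv layer
        ]
        filters *= 2  # fewer filters at low levels, more at high levels
    layers += [
        ("flatten", {}),
        ("dense", {"units": 64}),
        ("dropout", {"rate": dropout}),    # dropout after the dense layer too
        ("dense", {"units": 2}),           # seizure vs. non-seizure
    ]
    return layers
```

With the defaults, the convolution blocks carry 8, 16, 32, and 64 filters, and dropout appears five times: once per convolution block plus once after the fully connected layer.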