Molloyhegelund3200


05). It indicates that tDCS can improve the brain functional network of stroke patients during the rehabilitation period, and may provide a theoretical and experimental basis for the application of tDCS in stroke rehabilitation treatment.

The incidence of tinnitus is very high; it can affect patients' attention, emotion and sleep, and can even cause serious psychological distress and suicidal tendencies. Currently there is no uniform, objective method for tinnitus detection and therapy, and the mechanism of tinnitus remains unclear. In this study, we first collected resting-state electroencephalogram (EEG) data from tinnitus patients and healthy subjects. Power spectrum topographies were then compared in the δ (0.5-3 Hz), θ (4-7 Hz), α (8-13 Hz), β (14-30 Hz) and γ (31-50 Hz) bands to explore the central mechanism of tinnitus. A total of 16 tinnitus patients and 16 healthy subjects were recruited for the experiment. The resting-state EEG results showed that the spectral power of tinnitus patients was higher than that of healthy subjects in all of the frequency bands examined. The t-test results showed that the areas of significant difference were mainly concentrated in the right temporal lobe for the θ and α bands, and in the temporal lobe, parietal lobe and frontal area for the β and γ bands. In addition, we designed an attention-related task experiment to further study the relationship between tinnitus and attention. The classification accuracy of tinnitus patients was significantly lower than that of healthy subjects, with the highest classification accuracies being 80.21% and 88.75%, respectively. These results indicate that tinnitus may reduce patients' attention.

Brain-computer interfaces (BCIs) have great potential to replace lost upper-limb function, so there has been great interest in the development of BCI-controlled robotic arms. However, few studies have attempted to use a noninvasive electroencephalography (EEG)-based BCI to achieve high-level control of a robotic arm. In this paper, a high-level control architecture combining an augmented reality (AR) BCI and computer vision was designed to control a robotic arm in a pick-and-place task. A steady-state visual evoked potential (SSVEP)-based paradigm was adopted to realize the BCI system. Microsoft's HoloLens was used to build the AR environment and served as the visual stimulator for eliciting SSVEPs. The proposed AR-BCI was used to select the objects to be operated by the robotic arm, while computer vision provided the location, color and shape of the objects. Based on the outputs of the AR-BCI and computer vision, the robotic arm could autonomously pick up an object and place it at a specified location. Online results from 11 healthy subjects showed that the average classification accuracy of the proposed system was 91.41%. These results verify the feasibility of combining AR, BCI and computer vision to control a robotic arm, and are expected to provide new ideas for innovative robotic arm control approaches.

BCI systems used in practical applications require as few electroencephalogram (EEG) acquisition channels as possible. However, when the montage is reduced to a single channel, it is difficult to remove electrooculogram (EOG) artifacts. This paper therefore proposes an EOG artifact removal algorithm based on the wavelet transform and ensemble empirical mode decomposition. First, the single-channel EEG signal is decomposed by the wavelet transform, and the wavelet components that contain EOG artifacts are further decomposed by ensemble empirical mode decomposition. A predefined autocorrelation coefficient threshold is then used to automatically select and remove the intrinsic mode functions that consist mainly of EOG components, and finally the 'clean' EEG signal is reconstructed. Comparative experiments on simulated and real data show that the proposed algorithm automatically removes EOG artifacts from single-channel EEG signals while causing little EEG distortion and keeping the computational complexity low. This helps to move BCI technology out of the laboratory and toward commercial application.
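As a rough illustration of the single-channel cleaning idea described above, the sketch below combines a wavelet decomposition with ensemble empirical mode decomposition (EEMD) and an autocorrelation-based selection of intrinsic mode functions. It assumes the PyWavelets (pywt) and PyEMD (EMD-signal) packages; the db4 wavelet, the decomposition level, which sub-bands are treated as EOG-contaminated, the lag-1 autocorrelation statistic, its 0.9 threshold, and the convention that high-autocorrelation IMFs are the EOG-like ones are all illustrative assumptions rather than the parameters of the algorithm reported here.

```python
# Sketch of wavelet + EEMD EOG removal for a single EEG channel.
# Assumptions (not from the paper): db4 wavelet, 5 decomposition levels,
# only the approximation and coarsest detail sub-bands carry EOG energy,
# and IMFs with lag-1 autocorrelation >= 0.9 are treated as EOG-like.
import numpy as np
import pywt                      # PyWavelets
from PyEMD import EEMD           # pip install EMD-signal


def lag1_autocorr(x):
    """Lag-1 autocorrelation coefficient of a 1-D signal."""
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.dot(x[:-1], x[1:]) / denom if denom > 0 else 0.0


def remove_eog(eeg, wavelet="db4", level=5, contaminated=(0, 1), ac_thresh=0.9):
    """Return a cleaned copy of a single-channel EEG trace."""
    coeffs = pywt.wavedec(eeg, wavelet, level=level)
    eemd = EEMD()
    for idx in contaminated:                       # sub-bands assumed to hold EOG energy
        imfs = eemd.eemd(coeffs[idx])              # ensemble empirical mode decomposition
        keep = [imf for imf in imfs if lag1_autocorr(imf) < ac_thresh]
        coeffs[idx] = np.sum(keep, axis=0) if keep else np.zeros_like(coeffs[idx])
    clean = pywt.waverec(coeffs, wavelet)
    return clean[: len(eeg)]                       # waverec may pad by one sample
```

In practice the contaminated sub-bands and the threshold would need to be tuned against recordings that include a reference EOG channel.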
Error self-detection based on error-related potentials (ErrP) is promising for improving the practicability of brain-computer interface systems, but single-trial recognition of ErrP remains a challenge that hinders the development of this technology. To assess the performance of different algorithms for decoding ErrP, this paper tested four kinds of linear discriminant analysis, two kinds of support vector machine, logistic regression, and discriminative canonical pattern matching (DCPM) on two openly accessible datasets. All algorithms were evaluated by their classification accuracy and by their generalization ability across training sets of different sizes. The results show that DCPM performed best. This study provides a comprehensive comparison of algorithms for ErrP classification and can guide the selection of an ErrP decoder.

Affective brain-computer interfaces (aBCIs) have important application value in the field of human-computer interaction. Electroencephalography (EEG) has received wide attention in emotion recognition because of its advantages in temporal resolution, reliability and accuracy. However, the non-stationary characteristics of EEG and individual differences limit the generalization of emotion recognition models across time and across subjects. In this paper, in order to recognize emotional states across different subjects and sessions, we propose a new domain adaptation method, maximum classifier difference for domain adversarial neural networks (MCD_DA). By establishing a neural-network emotion recognition model, the shallow feature extractor is trained adversarially against the domain classifier and the emotion classifiers, so that the extractor produces domain-invariant representations while the classifiers learn task-specific decision boundaries, thereby realizing an approximate joint-distribution adaptation.
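To make the adaptation scheme more concrete, the following PyTorch sketch shows a maximum-classifier-discrepancy style training step with a shared shallow feature extractor and two emotion classifiers: the classifiers are first pushed apart on unlabelled target (new subject or session) data, and the extractor is then updated so that they agree again. The layer sizes, the 310-dimensional input (e.g. differential-entropy features), the three emotion classes and the number of extractor updates per step are illustrative assumptions, and the separate domain classifier used in MCD_DA is omitted, so this covers only part of the method.

```python
# Sketch of one maximum-classifier-discrepancy adaptation step (PyTorch).
# G: shared shallow feature extractor; F1, F2: two emotion classifiers.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Extractor(nn.Module):                       # shallow feature extractor
    def __init__(self, in_dim=310, hid=128):      # 310-dim input is an assumption
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU(),
                                 nn.Linear(hid, hid), nn.ReLU())

    def forward(self, x):
        return self.net(x)


class Classifier(nn.Module):                      # task-specific emotion classifier
    def __init__(self, hid=128, n_cls=3):
        super().__init__()
        self.fc = nn.Linear(hid, n_cls)

    def forward(self, feat):
        return self.fc(feat)


def discrepancy(p1, p2):
    """L1 distance between the two classifiers' softmax outputs."""
    return (F.softmax(p1, dim=1) - F.softmax(p2, dim=1)).abs().mean()


def mcd_step(G, F1, F2, opt_g, opt_f, xs, ys, xt, n_gen=4):
    """One adaptation step on a source batch (xs, ys) and a target batch xt."""
    ce = nn.CrossEntropyLoss()
    # (A) train extractor and both classifiers on labelled source data
    opt_g.zero_grad(); opt_f.zero_grad()
    fs = G(xs)
    loss_a = ce(F1(fs), ys) + ce(F2(fs), ys)
    loss_a.backward(); opt_g.step(); opt_f.step()
    # (B) fix the extractor, push the classifiers apart on target data
    opt_f.zero_grad()
    fs, ft = G(xs).detach(), G(xt).detach()
    loss_b = ce(F1(fs), ys) + ce(F2(fs), ys) - discrepancy(F1(ft), F2(ft))
    loss_b.backward(); opt_f.step()
    # (C) fix the classifiers, move the extractor so they agree on target data
    for _ in range(n_gen):
        opt_g.zero_grad()
        ft = G(xt)
        loss_c = discrepancy(F1(ft), F2(ft))
        loss_c.backward(); opt_g.step()
```

Here opt_g and opt_f are two optimizers covering the extractor's and the classifiers' parameters, respectively; at test time the averaged output of the two classifiers on the target features would be used as the predicted emotion.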
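Returning to the ErrP decoder comparison above, a cross-validated evaluation of the standard classifiers can be sketched with scikit-learn as follows. The feature matrix X (one row of epoch features per trial) and the labels y are assumed to come from an earlier epoching and downsampling step, the particular model variants shown are illustrative rather than the four LDA and two SVM configurations used in the study, and DCPM is omitted because it is not available in scikit-learn.

```python
# Sketch of a cross-validated comparison of single-trial ErrP classifiers.
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC


def compare_errp_decoders(X, y, folds=5):
    """Return mean cross-validated accuracy for several ErrP decoders."""
    models = {
        "LDA (shrinkage)": LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto"),
        "linear SVM": make_pipeline(StandardScaler(), SVC(kernel="linear")),
        "RBF SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
        "logistic regression": make_pipeline(StandardScaler(),
                                             LogisticRegression(max_iter=1000)),
    }
    return {name: cross_val_score(model, X, y, cv=folds).mean()
            for name, model in models.items()}
```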