Similarity of Neural Networks and the Human Brain
Functional magnetic resonance imaging (fMRI) refers to a category of neuroimaging methods that monitor the changes in blood-flow characteristics that occur in active brain regions during cognitive processing. Two types of fMRI are primarily used to study functional activity and cognitive behaviour. Resting-state fMRI (rs-fMRI) is used to investigate the intrinsic functional segregation or specialisation of brain areas and networks at rest, while task-based fMRI is frequently used to identify brain regions that are functionally involved in completing a specific task [1,2]. Different types of information processing occur in specialised regions of the brain, and such variation can contribute to a greater understanding of how the brain's cognitive functioning is organised and develops. Further analysis may depend greatly on whether individuals are paying attention during a task scan or are asleep during a resting-state scan. Some of these specialised locations are responsible for activities such as language comprehension, speech, and motor function; these regions are collectively known as the eloquent cortex. Recognising and localising the eloquent cortex is an essential prerequisite in neurosurgery, as it enhances recovery and postoperative quality of life.
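Resting-state analyses of the kind described above commonly summarise rs-fMRI as a functional connectivity matrix: the Pearson correlation between every pair of regional BOLD time series. A minimal numpy sketch with simulated data (the dimensions and signals are hypothetical, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated rs-fMRI data: 200 time points for 10 regions of interest (ROIs).
# In practice these would be mean BOLD time series extracted per parcel.
n_timepoints, n_rois = 200, 10
bold = rng.standard_normal((n_timepoints, n_rois))

# Functional connectivity: Pearson correlation between every pair of ROI
# time series, yielding a symmetric n_rois x n_rois matrix.
connectivity = np.corrcoef(bold.T)

print(connectivity.shape)  # (10, 10); diagonal entries are 1
```

Matrices of this form are the typical input to the connectivity-based models discussed next.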
Recently, a research group utilised resting-state fMRI to create a novel deep learning architecture that concurrently recognises the language and primary motor cortices. Their method combines the generalisation strength of multi-task learning with the representational power of convolutional neural networks to learn a shared representation between the eloquent subnetworks. The foundation of the work is multi-task learning (MTL), which trains the model to carry out multiple tasks at once in order to increase its generalisability. Convolutional and fully connected (FC) layers are both used in the MT-GNN architecture to extract features from the connectivity matrix. The MT-GNN convolutions span whole rows and columns of the graph, whereas a standard convolution assumes a grid-like field of view; as a result, they capture the local neighbourhood connectivity information associated with node pairs (edges). To classify the eloquent brain regions, the resulting graph neural network (GNN) mines the topological structure of the input. Additionally, their training method can readily account for missing patient data in a way that maximises the information that is available. This is quite beneficial because the fMRI paradigms acquired for each patient may differ depending on their circumstances.
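The idea of a convolution that spans whole rows and columns of a connectivity matrix, rather than a grid-like patch, can be illustrated with a toy filter. This is only a sketch in the spirit of such edge-level filters, not the authors' exact implementation; all weights and dimensions here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10                             # number of graph nodes (ROIs)
A = rng.standard_normal((n, n))    # connectivity matrix (the "image")
A = (A + A.T) / 2                  # symmetrise, as for functional connectivity

# Cross-shaped filter: one learned weight per node position along the
# row and one along the column (here random, standing in for learned values).
w_row = rng.standard_normal(n)
w_col = rng.standard_normal(n)

# For each edge (i, j), aggregate over the whole i-th row and j-th column
# instead of a local grid neighbourhood, so the filter sees every edge
# incident to nodes i and j.
out = np.empty_like(A)
for i in range(n):
    for j in range(n):
        out[i, j] = A[i, :] @ w_row + A[:, j] @ w_col

print(out.shape)  # (10, 10)
```

A standard 2-D convolution at (i, j) would instead mix only nearby matrix entries, which have no special meaning on a graph; the row/column span is what ties the filter's receptive field to the node pair.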
Fig.1 Automated eloquent cortex localization in brain tumour patients using multi-task graph neural networks
Source: Nandakumar et al., 2021
Using a private dataset gathered at the Johns Hopkins Hospital (JHH) and publicly available data from the Human Connectome Project (HCP), they validated their methodology by simulating tumours in healthy brain tissue and comparing those results to performance on the healthy HCP data. They confirmed that their MT-GNN outperforms common machine learning baselines at eloquent cortex detection. Additionally, when trained on unilateral language cases, their model is capable of recovering clinically complex bilateral language cases. The robustness of the method was evaluated by varying the functional parcellation used for analysis, jittering the tumour segmentations, measuring the effects of data augmentation, and running a hyperparameter sweep for language-class detection. Incorporating the specialised convolutional layers makes it easier to spot stereotypical connectivity patterns in the distribution of the eloquent cortex. The authors claimed two notable advantages over existing methods: the model operates on whole-brain resting-state fMRI connectivity, maximising the information available to identify eloquent areas, and it explicitly models subject-specific information about tumour size and location.
The application of deep neural networks (DNNs) in cognitive and computational neuroscience is expanding across a variety of classification and regression tasks, such as diagnosing disorders and predicting subject attributes from structural or functional magnetic resonance imaging (sMRI, fMRI) data. Large volumes of imaging data have recently been made available, which is starting to open the door to deep learning models that can outperform traditional machine learning methods in neuroimaging applications that model the human brain. To make DNN predictions more interpretable, a well-known technique called the gradient-based saliency map is used to highlight the defining features of the input space (e.g., each voxel's contribution to the prediction). These saliency maps are functions of both the prediction model and a particular input, because a DNN's nonlinearities allow some input features to alter how those and other input features influence the prediction. The feature weights of a linear model, by contrast, are unaffected by the input features and fully define how they influence the prediction. Saliency maps can be created using a number of different techniques.
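The simplest of these techniques, the vanilla gradient saliency map, can be shown on a tiny hand-written network. The network and its weights below are invented stand-ins for a trained DNN; the point is only that the gradient, and hence the saliency map, changes with the input because of the ReLU nonlinearity:

```python
import numpy as np

rng = np.random.default_rng(2)

# Tiny two-layer network: y = w2 . relu(W1 x), standing in for a trained DNN.
d_in, d_hid = 8, 16
W1 = rng.standard_normal((d_hid, d_in))
w2 = rng.standard_normal(d_hid)

def predict(x):
    return w2 @ np.maximum(W1 @ x, 0.0)

def saliency(x):
    """Vanilla gradient saliency: |dy/dx| at the input x.
    Backpropagates through the ReLU (whose active set depends on x)
    and then through W1, so the map is input-dependent."""
    h = W1 @ x
    grad = W1.T @ (w2 * (h > 0))
    return np.abs(grad)

x = rng.standard_normal(d_in)
s = saliency(x)
print(s.shape)  # one saliency value per input feature
```

For a linear model y = w . x, the same computation would return |w| regardless of x, which is exactly the contrast the paragraph above draws.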
An interesting application of deep learning to fMRI of cognitive brain regions came from a group of researchers who proposed self-supervised algorithms trained on raw waveforms as a promising candidate model of speech processing. They examined whether self-supervised learning on a limited amount of speech suffices to yield a model functionally equivalent to speech perception in the human brain. They trained variants of Wav2Vec 2.0, a state-of-the-art self-supervised architecture for automatic speech recognition, on curated datasets of French and English. They then compared the models' activations to those of a large group of French, English, and Mandarin speakers who listened to audio stories while being passively recorded with fMRI.
Their findings indicate that the self-supervised model acquires representations that linearly map onto a remarkably diverse range of cortical regions. The algorithm can also learn brain-like representations from as little as 600 hours of unlabelled speech, roughly the amount young children are exposed to during language development. The trained models further demonstrate a functional specialisation similar to that of the cortex: Wav2Vec 2.0 learns distinct representations for sounds, speech, and language, mirroring the hierarchy of the temporal and prefrontal cortices. Using behavioural data from 386 additional participants, they verified the comparability of this specialisation. These findings, drawn from one of the largest neuroimaging benchmarks to date, highlight how self-supervised learning can explain the complex organisation of speech processing in the brain and outline a route to identifying the laws of language acquisition that shape it.
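The "linearly map onto cortical regions" analyses in such studies typically fit a regularised linear model from network activations to each voxel's response and score it by the correlation between predicted and observed signals. A toy ridge-regression sketch with simulated data (all shapes, the regularisation strength, and the in-sample scoring are simplifying assumptions, not the study's actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical shapes: activations of one layer of a speech model for
# 500 time windows, and the fMRI response of a single voxel.
n_samples, n_features = 500, 64
X = rng.standard_normal((n_samples, n_features))        # model activations
true_w = rng.standard_normal(n_features)
y = X @ true_w + 0.1 * rng.standard_normal(n_samples)   # simulated voxel signal

# Closed-form ridge regression: w = (X^T X + lambda I)^-1 X^T y
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

# "Brain score": correlation between predicted and observed responses
# (real analyses compute this on held-out data).
pred = X @ w
r = np.corrcoef(pred, y)[0, 1]
print(round(r, 3))
```

Repeating this fit per voxel and per network layer is what produces the layer-to-region correspondence maps the findings describe.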
Source: Millet et al., 2022
1) Zhang S, Li X, Lv J, Jiang X, Guo L, Liu T. Characterizing and differentiating task-based and resting state fMRI signals via two-stage sparse representations. Brain Imaging Behav. 2016;10(1):21-32. doi:10.1007/s11682-015-9359-7
2) Glover GH. Overview of functional magnetic resonance imaging. Neurosurg Clin N Am. 2011;22(2):133-vii. doi:10.1016/j.nec.2010.11.001
3) Cui Y, Zhang H. Educational Neuroscience Training for Teachers' Technological Pedagogical Content Knowledge Construction. Front Psychol. 2021;12. doi:10.3389/fpsyg.2021.792723
4) Nandakumar N, et al. A Multi-task Deep Learning Framework to Localize the Eloquent Cortex in Brain Tumor Patients Using Dynamic Functional Connectivity. In: Machine Learning in Clinical Neuroimaging and Radiogenomics in Neuro-oncology (MLCN/RNO-AI 2020). Lecture Notes in Computer Science, vol 12449. Springer, Cham; 2020. doi:10.1007/978-3-030-66843-3_4
5) Nandakumar N, Manzoor K, Agarwal S, et al. Automated eloquent cortex localization in brain tumor patients using multi-task graph neural networks. Med Image Anal. 2021;74:102203. doi:10.1016/j.media.2021.102203.
6) McClure, P., Moraczewski, D., Lam, K. C., Thomas, A., & Pereira, F. (2020). Improving the Interpretability of fMRI Decoding using Deep Neural Networks and Adversarial Robustness. arXiv preprint arXiv:2004.11114.
7) Millet, J., Caucheteux, C., Orhan, P., Boubenec, Y., Gramfort, A., Dunbar, E., ... & King, J. R. (2022). Toward a realistic model of speech processing in the brain with self-supervised learning. arXiv preprint arXiv:2206.01685.