Principal Investigator  
Principal Investigator's Name: Lauren DeLong
Institution: University of Edinburgh School of Informatics
Department: Artificial Intelligence and its Applications
Country:
Proposed Analysis: Alzheimer’s Disease (AD) is often described as a globally widespread and highly prevalent neurodegenerative disease [8]. As such, some AD-induced neurodegeneration can be detected through specific signatures in magnetic resonance images (MRIs) [11]. However, studies that use machine learning to classify AD patients solely on the basis of those MRI signatures achieve modest results and poor sensitivity [10, 6]. This could be due to the absence of detectable neurodegeneration in early AD, or to neurodegeneration caused by other conditions [5]. Diagnoses based exclusively on MRIs are, therefore, inadequate. Fortunately, AD datasets such as the Alzheimer’s Disease Neuroimaging Initiative (ADNI) are multimodal, comprising other useful data modalities through which AD pathology could be identified.

Multimodal data refers to a dataset, or a combination of datasets, containing data in many different forms, such as images, videos, text or long sequences, and numerical measurements [1, 3]. Multimodal data is particularly common in the biomedical domain [9]. ADNI, for example, contains both raw and processed versions of MRI and PET scans, blood and CSF biomarker information, familial and demographic data, comorbidity and medication records, and more [4, 7]. While using a single type of data at a time is much simpler, the various data formats can complement and complete one another, each providing information that the others lack. However, fusing the various data modalities into a unified input is challenging: one must represent all modalities in a common, typically numerical, form while avoiding added bias or the loss of vital information [1, 3].

Since the value of multimodal data lies in the way its modalities complement one another, it is important to consider what information each modality contributes to the others, and how these relationships can be defined, at least in part, with expert knowledge and basic logic. This could be accomplished with neurosymbolic artificial intelligence (AI). Neurosymbolic AI, a relatively new field of research, combines symbolic AI, which often includes logic- and rule-based approaches, with neural networks and deep learning [2]. Using neurosymbolic AI, we plan to build a model that classifies AD from several data modalities combined through meaningful, rule-based methods (an illustrative sketch is given after the references below). In addition to its wide range of data modalities, ADNI contains over two thousand patients with various diagnoses, making it ideal for our study [7]. Not only will our method encode meaningful relationships between data modalities, but the use of symbolic AI will also make the model more queryable for explanations than traditional deep learning methods. This inherent interpretability could, in turn, be useful for enhanced clinical decision support. Ultimately, we hope to publish our results as an academic article.

References
[1] Jing Gao et al. “A survey on deep learning for multimodal data fusion”. In: Neural Computation 32.5 (2020), pp. 829–864.
[2] Artur d’Avila Garcez and Luis C. Lamb. “Neurosymbolic AI: the 3rd wave”. In: arXiv preprint arXiv:2012.05876 (2020).
[3] Dana Lahat, Tülay Adali, and Christian Jutten. “Multimodal data fusion: an overview of methods, challenges, and prospects”. In: Proceedings of the IEEE 103.9 (2015), pp. 1449–1477.
[4] Susanne G. Mueller et al. “Ways toward an early diagnosis in Alzheimer’s disease: the Alzheimer’s Disease Neuroimaging Initiative (ADNI)”. In: Alzheimer’s & Dementia 1.1 (2005), pp. 55–66.
[5] Manan Binth Taj Noor et al. “Detecting neurodegenerative disease from MRI: a brief review on a deep learning perspective”. In: International Conference on Brain Informatics. Springer, 2019, pp. 115–125.
[6] Paolo Maria Rossini et al. “Early diagnosis of Alzheimer’s disease: the role of biomarkers including advanced EEG signal analysis. Report from the IFCN-sponsored panel of experts”. In: Clinical Neurophysiology 131.6 (2020), pp. 1287–1310.
[7] Yasamin Salimi et al. “ADataViewer: exploring semantically harmonized Alzheimer’s disease cohort datasets”. In: Alzheimer’s Research & Therapy 14.1 (2022), pp. 1–12.
[8] Philip Scheltens et al. “Alzheimer’s disease”. In: The Lancet 397.10284 (2021), pp. 1577–1590.
[9] Sören Richard Stahlschmidt, Benjamin Ulfenborg, and Jane Synnergren. “Multimodal deep learning for biomedical data fusion: a review”. In: Briefings in Bioinformatics 23.2 (2022), bbab569.
[10] Ekin Yagis et al. “3D convolutional neural networks for diagnosis of Alzheimer’s disease via structural MRI”. In: 2020 IEEE 33rd International Symposium on Computer-Based Medical Systems (CBMS). IEEE, 2020, pp. 65–70.
[11] Peter N. E. Young et al. “Imaging biomarkers in neurodegeneration: current and future practices”. In: Alzheimer’s Research & Therapy 12.1 (2020), pp. 1–17.
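Illustrative sketch (not the final method): the code below is a minimal Python/PyTorch sketch of the kind of fusion-plus-rules model described above, in which each modality is encoded separately, the embeddings are fused by concatenation, and a soft logic rule is added as a differentiable penalty on the predicted AD probability. The class names, feature dimensions, and the example rule (“abnormal CSF biomarkers AND hippocampal atrophy suggest AD”) are our own assumptions for illustration, not ADNI variables or a committed design.

import torch
import torch.nn as nn

class ModalityEncoder(nn.Module):
    # Maps one modality's numeric feature vector to a shared embedding size.
    def __init__(self, in_dim, emb_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, emb_dim))

    def forward(self, x):
        return self.net(x)

class FusionClassifier(nn.Module):
    # Encodes each modality separately, fuses by concatenation, outputs P(AD).
    def __init__(self, modality_dims, emb_dim=32):
        super().__init__()
        self.encoders = nn.ModuleList([ModalityEncoder(d, emb_dim) for d in modality_dims])
        self.head = nn.Sequential(nn.Linear(emb_dim * len(modality_dims), 64),
                                  nn.ReLU(), nn.Linear(64, 1))

    def forward(self, modalities):
        fused = torch.cat([enc(x) for enc, x in zip(self.encoders, modalities)], dim=-1)
        return torch.sigmoid(self.head(fused)).squeeze(-1)

def rule_penalty(p_ad, csf_abnormal, atrophy_present):
    # Soft version of the hypothetical rule: IF abnormal CSF AND atrophy THEN AD.
    # Where the antecedent holds (values near 1), a low P(AD) is penalised.
    antecedent = csf_abnormal * atrophy_present
    return (antecedent * (1.0 - p_ad)).mean()

# Toy usage with random stand-ins for MRI-derived, CSF, and demographic features.
torch.manual_seed(0)
batch = 8
mri_feats = torch.randn(batch, 100)   # e.g. regional volumes from processed MRIs
csf_feats = torch.randn(batch, 5)     # e.g. CSF biomarker measurements
demo_feats = torch.randn(batch, 4)    # e.g. age, sex, education, APOE status
labels = torch.randint(0, 2, (batch,)).float()
csf_abnormal = torch.rand(batch)      # soft truth value of "CSF biomarkers abnormal"
atrophy = torch.rand(batch)           # soft truth value of "hippocampal atrophy present"

model = FusionClassifier([100, 5, 4])
p_ad = model([mri_feats, csf_feats, demo_feats])
loss = nn.functional.binary_cross_entropy(p_ad, labels) \
       + 0.5 * rule_penalty(p_ad, csf_abnormal, atrophy)
loss.backward()   # both the data-fit term and the rule term shape the gradients

In the full neurosymbolic model the rules would come from expert knowledge rather than being hand-coded as above, but the sketch shows how a symbolic constraint can be made differentiable and trained jointly with the neural encoders, which is what makes the resulting model queryable for rule-level explanations.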
Additional Investigators  
Investigator's Name: Jacques Fleuriot