Principal Investigator  
Principal Investigator's Name: Michal Golovanevsky
Institution: Brown University
Department: Computer Science
Country:
Proposed Analysis: Alzheimer’s disease (AD) is one of the most common neurodegenerative diseases. There is no known cure for AD, but early detection is crucial for delaying its progression. In recent years, deep learning models have contributed substantially to AD diagnosis research. Between 2019 and 2021, many studies of AD and mild cognitive impairment (MCI) using the Alzheimer’s Disease Neuroimaging Initiative (ADNI) dataset built multi-modal algorithms to classify patients into AD and MCI groups. Using multiple data modalities, such as magnetic resonance imaging (MRI), genetic single nucleotide polymorphisms (SNPs), and clinical test data, provides a more well-rounded view of AD staging. These studies built their models on variations of conventional neural networks and were quite successful in improving accuracy over earlier single-modality models. However, current state-of-the-art models train a separate neural network for each modality and then concatenate the resulting features into a final model. This simple concatenation means the model does not learn from the inherent interactions between modalities. We propose a cross-modality transformer architecture (Tan, Hao, and Mohit Bansal. “LXMERT: Learning Cross-Modality Encoder Representations from Transformers.” EMNLP 2019), which allows our model to infer features from elements of the same modality as well as from components of other modalities. This structure builds both intra-modality and cross-modality relationships. The framework is modeled after recent BERT-based innovations that have been widely effective in question answering and language understanding, largely due to their ability to account for context. We therefore hypothesize that this approach will further increase accuracy in AD and MCI classification. Using the ADNI dataset, we will extend this architecture to an arbitrary number of modalities, ensuring that our resulting model, like a team of doctors, has a holistic view of each patient.
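To make the fusion mechanism concrete, below is a minimal sketch of the intra-modality plus cross-modality attention pattern described above, written in PyTorch. It is not the LXMERT implementation or the proposed final model; the three-modality setup, token counts, embedding dimension, and the sharing of one block across modalities are illustrative assumptions.

```python
# Minimal sketch of intra- + cross-modality attention (LXMERT-style).
# All shapes and the three-modality setup are illustrative assumptions.
import torch
import torch.nn as nn

class CrossModalBlock(nn.Module):
    """One block: a modality's tokens first attend to each other
    (intra-modality), then attend to all other modalities' tokens
    (cross-modality), followed by a feed-forward layer."""
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                 nn.Linear(4 * dim, dim))
        self.norm3 = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor, others: torch.Tensor) -> torch.Tensor:
        # Intra-modality: queries, keys, and values all come from x.
        x = self.norm1(x + self.self_attn(x, x, x)[0])
        # Cross-modality: queries from this modality, keys/values from the
        # concatenated tokens of every other modality.
        x = self.norm2(x + self.cross_attn(x, others, others)[0])
        return self.norm3(x + self.ffn(x))

# Toy usage with three modalities embedded into a shared dimension.
dim, batch = 64, 8
mri  = torch.randn(batch, 16, dim)  # e.g., 16 MRI region/patch tokens
snp  = torch.randn(batch, 32, dim)  # e.g., 32 SNP tokens
clin = torch.randn(batch, 10, dim)  # e.g., 10 clinical-test tokens

block = CrossModalBlock(dim)        # shared weights: a simplification
modalities = [mri, snp, clin]
fused = []
for i, x in enumerate(modalities):
    others = torch.cat([m for j, m in enumerate(modalities) if j != i], dim=1)
    fused.append(block(x, others))
```

In a full model, each modality would typically have its own block parameters, and the fused token sets would feed a classification head for AD/MCI staging; the loop structure is what lets the scheme generalize to an arbitrary number of modalities.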
Additional Investigators  
Investigator's Name: Carsten Eickhoff
Proposed Analysis: Same as the Principal Investigator's proposed analysis above.
Investigator's Name: Ritambhara Singh
Proposed Analysis: Same as the Principal Investigator's proposed analysis above.