There are many active research projects accessing and applying shared ADNI data. This information is requested annually as a requirement for data access.
Principal Investigator
Principal Investigator's Name: | Yi Hao Chan |
Institution: | Nanyang Technological University |
Department: | Biomedical Informatics Lab |
Country: | |
Proposed Analysis: | In recent years, deep learning classifiers have been shown to perform at a level similar to traditional machine learning models, and in some cases to achieve higher accuracy in tasks such as brain disorder classification. However, applying neural networks to multimodal brain imaging data has yet to be explored in great detail. Since different modalities offer different perspectives of the brain, combining brain images from multiple modalities could provide complementary information that boosts classifier accuracy or allows researchers to find cross-modality biomarkers. A multimodal and multiscale deep neural network has already been shown to work well for predicting conversion from Mild Cognitive Impairment to Alzheimer's Disease. However, the majority of current deep learning approaches applied to multimodal data simply concatenate features derived from the different modalities. Much more can be done to learn cross-modality representations of this multi-faceted data, and doing so could lead to better classifiers. Given the wide range of neurological disorders that have yet to be explored with this approach, and the need for more refined techniques to train neural network models on small datasets, there is much more to discover in this field of research. |
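To make the distinction in the proposal concrete, the following minimal NumPy sketch contrasts plain feature concatenation, the baseline the proposal critiques, with one simple cross-modality interaction (an outer product). The modality names, feature dimensions, and the outer-product choice are illustrative assumptions, not part of the proposal itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-subject feature vectors from two imaging modalities
# (names and dimensions are illustrative only).
mri_features = rng.standard_normal(16)  # e.g. structural MRI descriptors
pet_features = rng.standard_normal(8)   # e.g. PET descriptors

# Baseline fusion: concatenate modality features into one vector and
# feed it to a classifier; modalities interact only via learned weights.
concat = np.concatenate([mri_features, pet_features])  # shape (24,)

# One cross-modality alternative: a bilinear (outer-product) interaction
# that represents every pairwise product of features across modalities.
cross = np.outer(mri_features, pet_features).ravel()   # shape (128,)

print(concat.shape, cross.shape)
```

The outer-product representation grows multiplicatively with feature counts, which is one reason more refined fusion layers (e.g. low-rank or attention-based variants) are typically explored instead.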
Additional Investigators |