There are many active research projects accessing and applying shared ADNI data. This information is requested annually as a requirement for data access.
Principal Investigator | |
Principal Investigator's Name: | Pengfei Zhang |
Institution: | Arizona State University |
Department: | Computer Science |
Country: | |
Proposed Analysis: | There is intense interest in adopting computer-aided diagnosis (CAD) systems, particularly those built on deep learning methods, for applications across medical specialties, but the success of these systems depends heavily on the availability of large labeled datasets. Without a large amount of annotated data, deep learning yields algorithms that perform poorly and generalize badly to new data. Yet there is no perfectly sized, systematically labeled dataset for training deep models, particularly in medical imaging, where both data and annotations are expensive to acquire. Our research objective is therefore to overcome this hurdle of CAD systems by building efficient and effective medical image analysis systems that use the self-supervised learning (SSL) paradigm to glean general image representations from unlabeled data. We recently proposed an SSL method that produces generic pre-trained 3D models, named Semantic Genesis, a significant advancement over Models Genesis. Although Semantic Genesis is pre-trained solely on unlabeled chest CT scans, it can effectively detect various diseases across different organs, datasets, and modalities. Nevertheless, self-supervised pre-training yields more generic representations when the source and target domains are closer to each other; in particular, pre-training on the same domain as the target domain is the preferred choice in terms of performance. However, most existing medical datasets are too small for deep models to learn reliable image representations in individual modalities, e.g., MRI and X-ray. The large-scale ADNI MRI data will therefore enable us to pre-train modality- and organ-oriented models via our self-supervised method. |
Additional Investigators |
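The self-supervised pre-training described above can be illustrated with a toy restoration-style pretext task: corrupt unlabeled inputs and train a model to recover the originals, so that useful representations are learned without any annotations. The sketch below is purely illustrative (random arrays standing in for image patches, a linear restorer trained by gradient descent); it is not the Semantic Genesis or Models Genesis implementation, and all names and sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unlabeled "scans": 256 random 8x8 patches, flattened to 64-dim vectors.
# These stand in for unlabeled MRI/CT patches; no labels are used anywhere.
X = rng.normal(size=(256, 64))

def corrupt(x, rng, frac=0.25):
    # Restoration-style pretext task: zero out a random fraction of voxels.
    x = x.copy()
    mask = rng.random(x.shape) < frac
    x[mask] = 0.0
    return x

Xc = np.stack([corrupt(x, rng) for x in X])

# Minimal "model": a linear restorer W, trained to undo the corruption
# by minimizing mean squared reconstruction error via gradient descent.
W = np.zeros((64, 64))
lr = 0.01
losses = []
for _ in range(200):
    pred = Xc @ W                     # attempted restoration
    err = pred - X                    # compare against the uncorrupted input
    losses.append(float(np.mean(err ** 2)))
    grad = Xc.T @ err / len(X)        # gradient of the MSE loss w.r.t. W
    W -= lr * grad

# The reconstruction loss drops as W learns structure from unlabeled data;
# in the real setting, the pre-trained weights would then be fine-tuned
# on a small labeled target task.
```

The key point of the design is that the supervisory signal (the original patch) comes for free from the data itself, which is what makes large unlabeled collections such as the ADNI MRI scans usable for pre-training.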