Principal Investigator  
Principal Investigator's Name: Rosanna Turrisi
Institution: University of Genoa
Department: DIBRIS
Country:
Proposed Analysis: In recent decades, many studies have attempted to elucidate the pathophysiological processes underlying AD by leveraging machine or deep learning methods. The large majority of these are based on data from the North American Alzheimer’s Disease Neuroimaging Initiative (ADNI), the de facto reference dataset in the field. As we strongly believe that results must be comparable to be meaningful, we would like to work on the ADNI dataset and promote its use as a benchmark dataset in machine and deep learning AD studies. Specifically, we propose to carry out a research project on the ADNI dataset comprising two steps, described below.
As a first step, we propose to build a Deep Learning (DL) model based on imaging to diagnose AD. Specifically, we focus on low-resolution (1.5T) MRI exams, keeping in mind that this is a more challenging task from the learning perspective. Further, DL models allow the direct use of the raw input, without any feature-engineering preprocessing phase such as ROI identification and selection. This choice is more difficult and more ambitious, but we argue it is preferable, as it does not require the supervision of a domain expert in the learning phase and the resulting model can be used directly in medical practice.

Ideally, predictive models for 3D MRIs should be built directly on the 3D tensors to retain maximal information. Nevertheless, dealing with 3D images poses hard challenges in terms of both computational and memory requirements. Many papers in the literature sidestep this issue by considering one or more two-dimensional projections of the 3D input tensor. However, if all three projections (axial, coronal, and sagittal) are considered, the computational cost and the overall wall-clock time are likely to increase dramatically. Further, extracting features separately from the 2D projections may discard important volumetric information and yield an oversimplified model that cannot capture the actual complexity of the phenomenon under study. Hence, we propose a pipeline that directly extracts volumetric features, preventing information loss, while simultaneously adopting strategies to avert excessive computational cost. Specifically, we identified a 3D Convolutional Neural Network (CNN) as the best model, as this architecture has proven to be state-of-the-art in most image-based tasks. While choosing a 3D model increases the number of learnable parameters, this burden can be reduced by appropriately choosing the number of filters in the network layers, combined with batch normalization and loss regularization (see the first sketch below).

Last but not least, we note that DL models typically need a large number of samples, which are unfortunately not available in biomedical contexts. To overcome this limit, this project investigates different strategies to augment the dataset, applying and combining different affine transformations to the original images (see the second sketch below). Although the choice of augmentation strategy strongly impacts the augmented dataset size and the model performance, this aspect is often neglected in the literature. Similarly, the network depth is usually chosen without following any specific criterion or empirical analysis. In this work, we aim to combine different model depths and augmentation strategies in order to estimate their impact on AD detection performance. To the best of our knowledge, this is the first work in the AD domain that digs into these aspects of the experimental design. This project also aims to serve as a guide on how to properly choose network and training parameters so as to make 3D model training feasible and guarantee a high-accuracy diagnosis. Further, we want to emphasise that the final model will be reproducible, reliable, and robust, in order to meet fundamental scientific criteria.
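To make this concrete, the following is a minimal PyTorch sketch of the kind of compact 3D CNN we have in mind: few filters per layer, batch normalization in every block, and weight decay as loss regularization. All architectural details here (input size, number of blocks, filter counts, learning rate) are illustrative assumptions, not final design choices.

```python
import torch
import torch.nn as nn

class Small3DCNN(nn.Module):
    """Compact 3D CNN for image-based AD diagnosis from MRI volumes.

    Filter counts are kept deliberately small to limit the number of
    learnable parameters; every block uses batch normalization.
    """

    def __init__(self, in_channels: int = 1, n_classes: int = 2):
        super().__init__()

        def block(c_in: int, c_out: int) -> nn.Sequential:
            return nn.Sequential(
                nn.Conv3d(c_in, c_out, kernel_size=3, padding=1),
                nn.BatchNorm3d(c_out),
                nn.ReLU(inplace=True),
                nn.MaxPool3d(2),  # halve each spatial dimension
            )

        self.features = nn.Sequential(
            block(in_channels, 8),  # few filters to contain the parameter count
            block(8, 16),
            block(16, 32),
            block(32, 64),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),  # global pooling: input-size agnostic
            nn.Flatten(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Loss regularization realised here as weight decay (L2) in the optimizer.
model = Small3DCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-4)
```

The global average pooling keeps the classifier head independent of the input resolution, so the same design can be reused when exploring different crop sizes and network depths.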
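Similarly, the second sketch illustrates how affine transformations could be randomly combined to augment a 3D volume, using NumPy and SciPy; the rotation range, shift range, and augmentation factor are placeholder assumptions.

```python
import numpy as np
from scipy.ndimage import rotate, shift

def augment_volume(vol: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Apply a random combination of affine transformations to a 3D volume."""
    # Small random rotation around a randomly chosen pair of axes.
    angle = rng.uniform(-10.0, 10.0)  # degrees
    axes = tuple(int(a) for a in rng.choice(3, size=2, replace=False))
    out = rotate(vol, angle, axes=axes, reshape=False, order=1, mode="nearest")

    # Random translation of up to 4 voxels along each axis.
    out = shift(out, rng.uniform(-4.0, 4.0, size=3), order=1, mode="nearest")

    # Random flip along the first axis (left-right mirroring).
    if rng.random() < 0.5:
        out = out[::-1, :, :].copy()
    return out

rng = np.random.default_rng(0)
volume = np.zeros((96, 96, 96), dtype=np.float32)  # placeholder MRI volume
augmented = [augment_volume(volume, rng) for _ in range(4)]  # 4x augmentation
```

Each combination of enabled transformations and parameter ranges defines one augmentation strategy, and the loop count fixes the augmented dataset size; this is exactly the design space the project intends to explore systematically.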
Once a good image-based model for AD diagnosis is obtained, the second step will focus on a more challenging task: multi-class classification (i.e., Cognitively Normal (CN) vs. Mild Cognitive Impairment (MCI) vs. AD) based on multi-modal data. More specifically, the purpose of this second step is to integrate different data modalities (e.g., genetic and clinical data, MRI). Indeed, as AD is a complex disease caused by the co-occurrence of multiple factors, the integration of multi-modal data may be fundamental to capture all the necessary information and obtain a complete view of the landscape of the disorder. In carrying out this difficult task, two factors play an essential role: i) each data modality must be treated differently, owing to its different nature; ii) the classification model must retrieve information from all data modalities. For this reason, assuming N data modalities, we propose to jointly train N different feature-extraction models and one classifier for the final task. For example, we propose a CNN feature extractor for the MRI data, while a sparse model is more suitable for extracting features from genetics, owing to their high-dimensional nature. The extracted feature vectors are then concatenated to feed the classification model, as in the sketch below. The joint training makes it possible to incorporate information from all modalities, even at the feature-extraction level, while taking into account the very different nature of the data modalities. This may result in a more meaningful and better-performing model, able to discriminate between CN, MCI, and AD.
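The following is a minimal sketch of this joint architecture, assuming two modalities (MRI and genetics). The genetic input size, feature widths, and L1 weight below are placeholder assumptions, and the MRI branch can be any module producing a fixed-size feature vector, such as the convolutional part of the 3D CNN sketched above.

```python
import torch
import torch.nn as nn

class MultimodalNet(nn.Module):
    """One feature extractor per modality, one shared classifier."""

    def __init__(self, mri_extractor: nn.Module, n_snps: int = 10_000,
                 mri_dim: int = 64, gen_dim: int = 32, n_classes: int = 3):
        super().__init__()
        # Any module mapping an MRI volume to a mri_dim-dimensional vector.
        self.mri_extractor = mri_extractor
        # A linear map on the genetic input; sparsity comes from the
        # L1 penalty below rather than from the architecture itself.
        self.gen_extractor = nn.Linear(n_snps, gen_dim)
        self.classifier = nn.Linear(mri_dim + gen_dim, n_classes)

    def forward(self, mri: torch.Tensor, genetics: torch.Tensor) -> torch.Tensor:
        f_mri = self.mri_extractor(mri)
        f_gen = self.gen_extractor(genetics)
        # Concatenated features feed a single CN / MCI / AD classifier.
        return self.classifier(torch.cat([f_mri, f_gen], dim=1))

def l1_penalty(model: MultimodalNet, lam: float = 1e-3) -> torch.Tensor:
    """L1 term on the genetics branch, yielding a sparse feature extractor."""
    return lam * model.gen_extractor.weight.abs().sum()

# Hypothetical wiring: a tiny MRI branch producing 64-dimensional features.
mri_branch = nn.Sequential(
    nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(8, 64),
)
model = MultimodalNet(mri_branch)
# Joint training: one optimizer over all parameters, with
#   loss = F.cross_entropy(model(mri, genetics), labels) + l1_penalty(model)
```

Because a single loss is backpropagated through both branches, the per-modality extractors and the classifier are optimised jointly, which is what allows modality-specific features to adapt to the final three-class task.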
Additional Investigators  
Investigator's Name: Annalisa Barla
Proposed Analysis: Same as the Principal Investigator's proposed analysis above.
Investigator's Name: Alessandro Verri
Proposed Analysis: Same as the Principal Investigator's proposed analysis above.