Principal Investigator  
Principal Investigator's Name: Adarsh Valoor
Institution: NIT Trichy
Department: Computer Applications
Country:
Proposed Analysis: Artificial Intelligence (AI) and Deep Learning (DL) have a long road ahead, but also immediate potential to transform every field they touch, especially the medical domain. However, the lack of transparency in AI-powered applications creates problems on their path to adoption. This is especially serious in medicine, where the outputs generated by AI systems must be interpreted on a daily basis. Explainable AI (XAI) gives end users an understanding of the grounds on which the system generated a specific output, which can then be interpreted with the help of a domain expert (in this case, clinicians); it is essential in the case of Clinical Decision Support Systems (CDSSs). With XAI enabled, these systems would give medical practitioners a deeper understanding of the system's decisions, make their decision-making process more accurate, and even support crucial inferences in life-or-death situations. The need for such self-explaining systems in the medical domain is heightened by the demand for fair and ethical decision-making: systems trained in restricted environments, as well as systems acting as reinforcement agents, are prone to biases, and any such biases should be uncovered as well. We have conducted an exhaustive literature review covering the state of the art in applying XAI to clinical systems, especially with respect to Alzheimer's Disease (AD). We found that tabular data processing is the most common trend, while XAI-enabled CDSSs with text analytics appear to be the least popular. In this work, we aim to create explainable models for understanding Alzheimer's disease, structured as a series of the following three linked subproblems (illustrative sketches follow this list):
• The first subproblem is to design a classifier that can differentiate Cognitively Normal (CN) from Mild Cognitive Impairment (MCI) patients in the initial stages of disease progression, using cognitive scores, neuropathology, vital signs, symptoms, demographics, etc. as a baseline.
• The second subproblem deals with enhancement of the ADNI dataset using the curvelet transform.
• The third subproblem deals with the prediction, explanation, and optimization of the entire model, using CNN pruning for optimization and SHAP's global and local explainers for adequate feature reduction and selection.
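A minimal sketch of the first subproblem's baseline: a CN-vs-MCI classifier over tabular ADNI-style features. The file name, column names (MMSE, ADAS13, CDRSB, etc.), and label encoding are assumptions for illustration; the proposal does not fix a specific feature set or model, and a random forest is used here only as a reasonable tabular baseline.

```python
# Hypothetical CN-vs-MCI baseline classifier on tabular ADNI-style features.
# "adni_baseline.csv" and the column names below are placeholders, not the
# proposal's actual data layout.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

df = pd.read_csv("adni_baseline.csv")                       # assumed merged baseline table
features = ["AGE", "PTEDUCAT", "MMSE", "ADAS13", "CDRSB"]   # illustrative features only
X = df[features]
y = (df["DX"] == "MCI").astype(int)                         # assumed labels: 0 = CN, 1 = MCI

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)
clf = RandomForestClassifier(n_estimators=300, random_state=42)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test), target_names=["CN", "MCI"]))
```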
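For the second subproblem, the proposal specifies the curvelet transform. No curvelet backend (e.g. CurveLab bindings) is assumed here, so this sketch substitutes a 2-D discrete wavelet transform from PyWavelets to show the general shape of the enhancement pipeline (decompose, rescale detail coefficients, reconstruct); a curvelet implementation would follow the same decompose-modify-reconstruct pattern.

```python
# Enhancement-pipeline sketch using a wavelet transform as a stand-in for the
# curvelet transform named in the proposal.
import numpy as np
import pywt

def enhance(image: np.ndarray, gain: float = 1.5, level: int = 2) -> np.ndarray:
    """Amplify detail (edge) subbands and reconstruct the image."""
    coeffs = pywt.wavedec2(image, "db2", level=level)
    enhanced = [coeffs[0]]                        # keep the approximation band
    for detail in coeffs[1:]:                     # (cH, cV, cD) tuple per level
        enhanced.append(tuple(gain * band for band in detail))
    return pywt.waverec2(enhanced, "db2")

# Usage with a dummy array standing in for an ADNI image slice:
img = np.random.rand(128, 128)
out = enhance(img)
```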
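For the third subproblem's explainability step, a sketch of SHAP's global and local explainers over the tabular classifier from the first sketch (it reuses `clf`, `X_test`, and `features` from there); the mean absolute SHAP value serves as a simple global ranking for feature reduction. The CNN pruning step is separate and not shown.

```python
# SHAP global/local explanation sketch; continues from the classifier sketch
# above. Output shape of shap_values varies by shap version, handled below.
import numpy as np
import shap

explainer = shap.TreeExplainer(clf)
sv = explainer.shap_values(X_test)
sv = sv[1] if isinstance(sv, list) else sv[..., 1]   # MCI-class values, either API

# Global explanation: rank features by mean |SHAP| as a feature-reduction signal.
global_importance = np.abs(sv).mean(axis=0)
ranking = sorted(zip(features, global_importance), key=lambda t: -t[1])
print("Global feature ranking:", ranking)

# Local explanation: per-feature contributions to one patient's prediction.
patient = 0
print("Local contributions:", dict(zip(features, sv[patient])))
```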
Additional Investigators