AI for Leaders

Decoding AI: Key Terminologies and Applications

Multimodal AI

AI models trained to analyse and infer results from multiple data types, such as text, images, and audio, making them effective across a wider range of tasks.
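
To make the idea concrete, below is a minimal, illustrative sketch of one common multimodal pattern, late fusion, in Python (PyTorch): each modality is encoded separately and the embeddings are combined for a shared prediction. The feature dimensions, layer sizes, and the assumption of pre-extracted feature vectors are placeholders, not a reference implementation.

```python
import torch
import torch.nn as nn

# Late fusion: encode each modality separately, then combine the embeddings
# for a shared prediction. All sizes below are arbitrary placeholders.
class LateFusionClassifier(nn.Module):
    def __init__(self, image_dim=512, text_dim=768, hidden=128, n_classes=3):
        super().__init__()
        self.image_encoder = nn.Sequential(nn.Linear(image_dim, hidden), nn.ReLU())
        self.text_encoder = nn.Sequential(nn.Linear(text_dim, hidden), nn.ReLU())
        self.classifier = nn.Linear(2 * hidden, n_classes)

    def forward(self, image_feats, text_feats):
        fused = torch.cat([self.image_encoder(image_feats),
                           self.text_encoder(text_feats)], dim=-1)
        return self.classifier(fused)

model = LateFusionClassifier()
image_feats = torch.randn(4, 512)  # stand-in for image embeddings (e.g. X-rays)
text_feats = torch.randn(4, 768)   # stand-in for clinical-note embeddings
print(model(image_feats, text_feats).shape)  # torch.Size([4, 3])
```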

Use Case

Multimodal AI can analyse patient data from diverse sources (e.g., medical images, electronic health records, and patient narratives) to provide comprehensive diagnostic insights and personalised treatment recommendations.

Single-Cell Multimodal Analysis: In genomics, multimodal analysis, such as weighted-nearest neighbor analysis, has been used to construct a multimodal reference atlas of the human immune system. This approach enhances the resolution of cell state identification, offering insights into immune responses to vaccination and COVID-19 (Hao et al., 2020).

Federated Learning

A decentralized approach to training machine learning models. Instead of sending raw data to a central server, the data remains on client devices where the model is trained locally. Updates from each device are aggregated to form the final model, ensuring data privacy as raw data is not shared.
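
The core loop can be sketched in a few lines. The following is a minimal, illustrative simulation of federated averaging (FedAvg) in Python with NumPy; the linear model, the three simulated clients, and all hyperparameters are placeholder assumptions rather than a production setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: a few gradient-descent steps of
    least-squares regression. The raw data (X, y) never leaves the client."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Simulate three clients (e.g. hospitals), each with a private local dataset.
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

# Federated averaging (FedAvg): the server only ever sees model weights.
global_w = np.zeros(2)
for _ in range(20):
    local_weights = [local_update(global_w, X, y) for X, y in clients]
    global_w = np.mean(local_weights, axis=0)  # aggregate client updates

print(global_w)  # converges near [2.0, -1.0] without sharing raw data
```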

Use Case

Federated Learning enables collaborative model training on patient data from different healthcare institutions while preserving data privacy and facilitating the development of robust and generalised models for disease prediction and drug discovery.

Tumor Segmentation: A decentralized networking framework based on the MQTT protocol was proposed for brain tumor segmentation using Federated Learning. This setup enabled real-time training across physically separated machines in different countries, showcasing FL’s capability in enhancing privacy while facilitating international collaboration in medical research (Tedeschini et al., 2022).

COVID-19 Prediction: Federated Learning was used to train the EXAM model, predicting the future oxygen requirements of symptomatic COVID-19 patients using vital signs, laboratory data, and chest X-rays. Data from 20 institutes across the globe were used, demonstrating Federated Learning’s potential in improving model accuracy and generalisability without compromising patient privacy (Dayan et al., 2021).

Explainable AI (XAI)

AI models designed so that human users can understand how the model arrives at its results, making its decisions and inferences more interpretable and trustworthy.

Use Case

Explainable AI enhances transparency in medical diagnosis and treatment recommendation systems, enabling clinicians to interpret AI-driven insights and understand the rationale behind patient care decisions, ultimately improving clinical decision-making and patient outcomes.

XAI techniques have been explored for analysing and diagnosing health data. The integration of AI with smart wearable devices, such as Fitbits, uses XAI to make the predictions of AI systems understandable and trustworthy. This approach supports accountability, transparency, and model improvement in healthcare (Pawar et al., 2020).
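
As a concrete illustration of one widely used, model-agnostic XAI technique (permutation importance, not necessarily the method of the cited work), the sketch below uses scikit-learn on synthetic stand-ins for wearable-style features; the feature names and data are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Synthetic stand-ins for wearable-derived features.
feature_names = ["heart_rate", "sleep_hours", "step_count"]
X = rng.normal(size=(500, 3))
# Invented label: risk driven mostly by heart rate, slightly by sleep.
y = (2.0 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500)) > 0

model = LogisticRegression().fit(X, y)

# Permutation importance: how much does performance drop when one feature
# is shuffled? A simple, model-agnostic view of what drives predictions.
result = permutation_importance(model, X, y, n_repeats=20, random_state=0)
for name, importance in zip(feature_names, result.importances_mean):
    print(f"{name}: {importance:.3f}")
```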

Few-Shot Learning

A method for training AI models with very limited examples or data, enabling quick adaptation to new tasks. These models are initially pre-trained with data from several related classes and then fine-tuned with a minimal set of examples to recognize additional classes or tasks. This approach is particularly useful in scenarios where available datasets are scarce, such as in the case of rare diseases.
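
One widely used few-shot technique, prototype-based classification in the spirit of prototypical networks, can be sketched as follows. The embeddings and class names are synthetic placeholders standing in for the output of a pre-trained encoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Prototype-based few-shot classification: each new class is represented by
# the mean of its k support embeddings; a query is assigned to the nearest
# prototype. Embeddings are random placeholders for a pre-trained encoder.
k_shot, dim = 5, 64
class_centres = {"rare_disease_a": rng.normal(0.0, 1.0, dim),
                 "rare_disease_b": rng.normal(3.0, 1.0, dim)}

support = {cls: centre + rng.normal(scale=0.5, size=(k_shot, dim))
           for cls, centre in class_centres.items()}
prototypes = {cls: embeddings.mean(axis=0) for cls, embeddings in support.items()}

query = class_centres["rare_disease_b"] + rng.normal(scale=0.5, size=dim)
prediction = min(prototypes, key=lambda c: np.linalg.norm(query - prototypes[c]))
print(prediction)  # rare_disease_b
```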

Use Case

Few-Shot Learning facilitates the development of AI models for rare disease diagnosis by leveraging limited patient data, enabling accurate and timely identification of rare medical conditions with minimal training examples.

1) Addressing privacy concerns in healthcare, a novel framework for secure collaborative Few-Shot Learning was designed. This framework incorporates differential privacy and homomorphic encryption, allowing for the collaborative improvement of Few-Shot Learning models without exposing sensitive patient data, and demonstrating its utility on benchmark datasets (Xie et al., 2020).

2) Few-shot learning has also found applications in processing medical texts. A systematic review explored the state of FSL methods for natural language processing (NLP) in the medical domain, highlighting its potential to leverage limited annotated textual data for tasks such as concept extraction and disease classification (Ge et al., 2022).

Prompt Chaining

A technique that guides an AI model toward the desired result by providing a sequence of smaller prompts rather than a single lengthy, complex prompt.
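
A minimal sketch of the pattern is shown below; call_llm is a hypothetical stand-in for whatever LLM API is in use, and the triage prompts are invented for illustration.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM API client. Here it just echoes
    a canned reply so the chain can be executed end to end."""
    return f"[model reply to: {prompt[:40]}...]"

def triage_note(raw_note: str) -> str:
    # Step 1: extract the clinically relevant facts from the free-text note.
    symptoms = call_llm(
        f"List the symptoms and their duration in this note:\n{raw_note}")
    # Step 2: condense the extraction into a one-sentence presentation.
    summary = call_llm(
        f"Summarise these findings in one sentence:\n{symptoms}")
    # Step 3: classify the summary, the simplest prompt of the three.
    return call_llm(
        "Classify this presentation as routine, urgent, or emergency. "
        f"Answer with one word:\n{summary}")

print(triage_note("Patient reports chest tightness and fatigue for two days."))
```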

Use Case

Prompt Chaining aids in medical decision support systems by breaking down complex diagnostic queries into sequential prompts, facilitating more accurate and efficient patient assessment and treatment recommendations.

Although focused on legal documents, the concept of using prompt chaining to summarise complex documents and then classify them based on extracted information could be applied to healthcare: for instance, summarising patient records or research articles for quicker analysis, or classifying them into relevant medical categories (Trautmann, 2023).

Model Temperature

A parameter that adjusts the creativity and randomness of AI-generated content: a higher temperature yields a more diverse mix of output words, while a lower temperature makes the output more predictable.
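
Under the hood, temperature typically rescales the model's output logits before sampling, so that the probabilities become softmax(logits / T). A minimal NumPy illustration with an invented four-word vocabulary:

```python
import numpy as np

def next_word_probs(logits, temperature):
    """Temperature-scaled softmax: probabilities = softmax(logits / T)."""
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    return probs / probs.sum()

# Invented next-word logits for "Your blood sugar is ..."
words = ["stable", "improving", "elevated", "fluctuating"]
logits = np.array([2.0, 1.0, 0.5, 0.1])

for T in (0.2, 1.0, 2.0):
    probs = next_word_probs(logits, T)
    print(f"T={T}: " + ", ".join(f"{w}={p:.2f}" for w, p in zip(words, probs)))
# Low T concentrates mass on the top word; high T flattens the distribution.
```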

Use Case

Model Temperature adjustment in AI-generated medical reports enables clinicians to customise the level of detail and variability in patient summaries, catering to specific clinical requirements and preferences while maintaining accuracy and relevance.

Context: A digital health platform aims to provide personalised health education and engagement for patients with chronic conditions, such as diabetes. The platform uses a generative AI model to produce tailored health advice, reminders, and motivational messages based on individual patient profiles, which include medical history, preferences, lifestyle, and treatment plans.

Application of Model Temperature:
Creating Motivational Messages: To engage patients and encourage adherence to treatment plans or lifestyle changes, the model’s temperature is slightly increased. This allows the generation of more varied and creative motivational messages that can resonate with the patient’s personal experiences and preferences, making the interaction more engaging and less repetitive. For example, crafting personalized encouragement messages that align with the patient’s progress in managing their condition.

Large Language Model (LLM)

Advanced AI models capable of comprehending and generating human-like language text.

Use Case

Large Language Models facilitate natural language processing tasks in healthcare, including clinical documentation, medical coding, and patient communication, improving efficiency and accuracy in healthcare workflows.

LLMs can serve as powerful tools to enhance patient communication. Often, radiology reports and medical jargon can be perplexing for patients, creating a barrier to understanding their own health status. LLMs can bridge this gap by translating complex medical language into comprehensible layman’s terms, thereby improving patient understanding and engagement (The impact of ChatGPT and LLMs on medical imaging stakeholders: Perspectives and use cases, ScienceDirect).

AI Terminologies Used in Healthcare

Model Drift

When AI models deviate from their intended behaviour over time as a result of changes in the real-world environment.

Use Case

Model Drift detection and mitigation techniques are crucial in healthcare AI systems to ensure the continued accuracy and reliability of diagnostic, prognostic, and treatment recommendation models as patient populations, disease patterns, and medical practices evolve.

An example of model drift could be seen in predictive models used for diagnosing diseases from medical imaging, such as X-rays or MRIs. Suppose a model was trained on a dataset predominantly consisting of images from middle-aged patients with a certain type of lung disease. If, over time, the disease begins to manifest differently due to environmental changes, evolves to affect different demographics more significantly, or if the imaging technology improves, the model’s accuracy could decline because it’s based on outdated patterns.

AI can help manage model drift in healthcare through continuous learning approaches, where the model is regularly updated with new data reflecting current trends and patient demographics.
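
One simple, illustrative drift check is to compare the distribution of an input feature at training time against recent live data, for example with a two-sample Kolmogorov-Smirnov test. The feature, data, and significance threshold below are invented for illustration.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Compare an input feature's training-time distribution (e.g. patient age)
# against recent live traffic. Data and threshold are invented.
train_ages = rng.normal(55, 8, size=5000)  # population the model was trained on
live_ages = rng.normal(48, 12, size=1000)  # live population has shifted younger

statistic, p_value = ks_2samp(train_ages, live_ages)
if p_value < 0.01:
    print(f"Drift detected (KS statistic {statistic:.3f}): "
          "consider retraining or recalibrating the model.")
```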

Model Hallucination

When a generative AI model produces content that sounds relevant and accurate but is, in fact, incorrect or misleading.

Use Case

Model Hallucination poses risks in medical AI systems, where inaccurate generated content may lead to erroneous diagnostic conclusions, incorrect treatment recommendations, and compromised patient safety. Robust validation and verification processes are essential to mitigate the impact of Model Hallucination in healthcare applications.

An example of model hallucination in healthcare could involve an AI model generating or misinterpreting medical information that is false. Mitigation relies on monitoring systems that detect and flag instances of hallucination in real time: by analysing model outputs against expected patterns or clinical guidelines, healthcare organisations can identify aberrant behaviour and intervene promptly to prevent adverse outcomes.
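
As one small, illustrative piece of such monitoring, generated text can be checked against trusted reference data before release. The sketch below flags medication names missing from a placeholder formulary; the extraction heuristic is deliberately simplistic and would be far more robust in practice.

```python
# Placeholder formulary of trusted medication names.
APPROVED_FORMULARY = {"metformin", "lisinopril", "atorvastatin"}

def flag_unverified_medications(generated_report: str) -> set:
    """Return words that look like medication mentions (crude suffix
    heuristic, for illustration only) but are not in the trusted list,
    so a clinician can review the report before release."""
    mentioned = {word.strip(".,").lower()
                 for word in generated_report.split()
                 if word.strip(".,").lower().endswith(("in", "ol", "pril"))}
    return mentioned - APPROVED_FORMULARY

report = "Continue metformin 500mg and start glucorafin 10mg daily."
print(flag_unverified_medications(report))  # {'glucorafin'} -- flagged for review
```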

Generative AI

AI technology that creates content, such as text, images, or videos, autonomously.

Use Case

Generative AI facilitates the creation of synthetic medical images and patient narratives for training healthcare AI models, augmenting limited datasets and enabling more robust model development for disease diagnosis and treatment planning.

Example – Segmed, NVIDIA, and RadImageNet Kickstart Generative AI Initiative for Synthetic Medical Imaging Data.

As part of this initiative, Segmed will offer synthetic medical imaging data on their self-serve medical data curation platform, Segmed Insight. This is in addition to the 60M+ de-identified real-world imaging records that Segmed has access to in their data network.

State-of-the-art generative imaging models were trained to generate synthetic data for CT, MRI, ultrasound, and endoscopic surgery. These models can generate over 160 pathologic classifications, as well as create synthetic segmentations on top of the synthetic image frames (Segmed, NVIDIA, and RadImageNet Kickstart Generative AI Initiative for Synthetic Medical Imaging Data, Nasdaq).

Generative AI also has the potential to use unstructured purchasing and accounts-payable data and, through chatbots, to answer common hospital employee IT and HR questions, all of which could improve employee experience and reduce hospital administrative costs.

Ethical AI

Development and deployment of artificial intelligence systems that emphasise fairness, transparency, accountability, and respect for human values.

Use Case

Ethical AI ensures unbiased and equitable healthcare decision-making, safeguards patient privacy, and fosters trust between patients, clinicians, and AI systems, ultimately promoting ethical healthcare practices and enhancing patient outcomes.

AI-Powered Early Detection System for Mental Health Disorders

Mental health disorders, such as depression and anxiety, are increasingly common worldwide but often go undiagnosed due to stigma, lack of resources, or the subtlety of early symptoms. An AI-powered early detection system can analyse patterns in speech, text, and social media usage to identify early signs of mental health issues. This proactive approach allows for timely intervention and support, potentially improving outcomes for millions of individuals.