The Path to Explainability in AI, Part 2

Calibrating user trust in AI

News

MIT spin-off Liquid AI wants to create a new type of AI. The company plans to use liquid neural networks (LNNs) to build AI systems that are more adaptable than current iterations. LNNs are a type of neural network designed to be more flexible and adaptable than traditional architectures: they process data sequentially, retain memory of past inputs, and remain adaptable even after training.

A Beijing court has ruled that output from AI chatbots is covered under copyright regulations. This differs from American courts, which have ruled that such output does not infringe copyright because it cannot be determined which piece of copyrighted material the output drew from.

Zephyr AI uses advanced machine learning to find new ways to match cancer treatments with patients. Unlike traditional methods that focus on specific genetic changes or proteins, Zephyr AI's technology looks at a wider range of genetic factors. It makes predictions about which drugs will work best using real-world data, and these predictions have been proven accurate. Recently, they successfully identified patients who could benefit from a specific cancer drug, even if they didn't have the usual genetic markers. Their goal is to expand precision medicine to more patients, including those usually left out of clinical trials. They plan to collaborate with drug companies and healthcare providers to improve drug development and patient care using their technology.

Prompt

Definition: Prompt tuning is a technique used in machine learning, particularly with large language models like GPT-3 or GPT-4. It involves adjusting the initial "prompt" or input that is fed to the model to steer the model's responses in a desired direction. This can be done by carefully selecting the words or phrases in the prompt (sometimes called prompt engineering), or, in the research literature, by learning "soft" prompt embeddings while the model's weights stay frozen. Either way, the prompt is "tuned" so the model produces more relevant or specific answers, improves accuracy, or aligns with certain styles or topics. Prompt tuning is a way to make the most of a pre-trained model without extensive additional training or modifications to the model itself.
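The word-selection side of this idea can be sketched in a few lines. The `build_prompt` helper below, along with its style and topic parameters, is hypothetical and not part of any model's API; it simply shows how adding steering phrases to a prompt changes what the model is asked to do, without touching the model itself.

```python
# Illustrative sketch: steering a model by adjusting the prompt, not the weights.
# The helper and its parameters are invented for illustration.

def build_prompt(question: str, style: str = "neutral", topic_hint: str = "") -> str:
    """Compose a tuned prompt from a base question plus steering phrases."""
    parts = []
    if style == "concise":
        parts.append("Answer in one short sentence.")
    elif style == "expert":
        parts.append("Answer as a domain expert, citing key terms.")
    if topic_hint:
        parts.append(f"Focus on {topic_hint}.")
    parts.append(question)  # the base question always comes last
    return " ".join(parts)

# The same question, tuned two different ways:
p1 = build_prompt("What is a liquid neural network?", style="concise")
p2 = build_prompt("What is a liquid neural network?", style="expert",
                  topic_hint="adaptability after training")
print(p1)
print(p2)
```

The tuned prompts would then be sent to the model as ordinary input; only the text changes between calls, which is what makes the technique cheap compared with retraining.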

In our last discussion, we delved into the critical connection between explainability and trust in AI, emphasizing the importance of clear mental models for users to understand and trust AI systems effectively.

Calibrating User Trust

  • Understanding the Limits: AI systems, grounded in statistics and probability, are not infallible. Users need to know that while AI can be highly effective, it's not always right. Clear explanations of how AI works can help users determine when to rely on these systems and when to exercise their judgment.

  • The Role of Explanations: Effective explanations can assist users in making this distinction, enhancing their ability to use AI responsibly and effectively.

Trust Calibration Throughout the User Experience

  • The Evolving Nature of AI: As AI systems learn and adapt over time, the user's relationship with the product will also evolve. This dynamic nature requires ongoing communication about changes and improvements in the AI system.

  • Continuous Trust Building: Establishing and maintaining the right level of trust is a continuous process, necessitating consistent and clear updates about the AI's capabilities and limitations.

Optimizing for User Understanding

  • The Challenge of Complex Algorithms: In many cases, the inner workings of an AI system are complex or not entirely understood, even by its developers. When possible, simplifying these explanations without sacrificing accuracy is key.

  • Making AI Understandable: Strive to communicate the AI's reasoning in terms that are accessible and relatable to users, focusing on the outcomes and implications of its decisions.
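One way to make reasoning accessible is to translate a model's internal signals into a short plain-language summary. The sketch below is illustrative only: the feature names and weights are invented, and in practice the contributions might come from an attribution tool such as SHAP or LIME rather than being hand-written.

```python
# Minimal sketch of turning model signals into a user-facing explanation.
# Feature names and weights are invented for illustration.

def explain_decision(decision: str, contributions: dict) -> str:
    """Summarize the top factors behind a decision in plain language."""
    # Rank factors by absolute influence, strongest first.
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    top = [name for name, _ in ranked[:2]]
    return f"We suggested '{decision}' mainly because of {top[0]} and {top[1]}."

msg = explain_decision(
    "approve loan",
    {"income stability": 0.42, "credit history": 0.31, "recent inquiries": -0.08},
)
print(msg)
```

Note that the summary deliberately drops the raw numbers: the goal, as above, is to focus on the outcomes and implications of the decision rather than the mechanics.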

Managing AI’s Influence on User Decisions

  • Impact on Decision Making: The outputs from AI systems often directly influence user decisions. How the AI conveys its confidence in its predictions can significantly impact these decisions.

  • Balancing Information and Overload: The challenge lies in presenting this information in a way that is informative but not overwhelming, aiding in decision-making without causing confusion or mistrust.
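One common way to convey confidence without overwhelming the user is to map a raw probability to a small set of user-facing labels. The thresholds and wording below are illustrative assumptions, not a standard; in a real product they would be chosen and validated against user research.

```python
# Hedged sketch: map a model's confidence score to user-facing wording,
# signaling when to rely on the system and when to double-check.
# Thresholds are illustrative, not a standard.

def confidence_label(probability: float) -> str:
    """Translate a confidence score in [0, 1] into plain language."""
    if probability >= 0.9:
        return "High confidence"
    if probability >= 0.6:
        return "Moderate confidence: worth a quick check"
    return "Low confidence: please verify this yourself"

for p in (0.95, 0.7, 0.4):
    print(f"{p:.2f} -> {confidence_label(p)}")
```

A three-level label like this trades precision for clarity: users get enough information to calibrate their reliance without being asked to interpret raw probabilities.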

Factors Contributing to User Trust in AI

Ability

  • Demonstrating Competence: The perceived ability of an AI product to effectively meet user needs is fundamental in establishing trust. This involves not only addressing the user’s requirements but also enhancing their overall experience.

  • Visibility of Value: The product must demonstrate its value, making it easy for users to recognize and appreciate the benefits it offers.

Reliability

  • Consistency is Key: Reliability refers to the consistent performance of the AI product, aligning with the expectations set by the user. The AI system must operate reliably under various conditions and scenarios.

  • Setting and Meeting Standards: Launch products only when they meet the quality standards you've set, and communicate those standards transparently to users.

Benevolence

  • User-Centric Design: Benevolence in AI implies that the system is designed with the user's best interests in mind. It's about building a product that genuinely aims to benefit the user.

  • Transparent Motives: Clear communication about what the user gains from using the product and what the provider gains from the user’s engagement is essential in fostering trust.

In the next edition, we will explore a real-world example to illustrate these concepts and offer practical strategies for building trust in AI products.