By Carmen Lamoutte, Project Manager, Wovenware 

Artificial intelligence (AI) is reshaping businesses, yet some AI projects fail to meet expectations around social responsibility, ethics, transparency and fairness. When AI algorithms introduce bias, preventable errors and poor decision-making, they can create mistrust among the very people they are supposed to help. Designing responsible AI applications requires focusing first on the human experience, and then aligning it with what is technically feasible and what is viable for the business.

Design thinking provides a framework for building human-centered, responsible AI solutions. Consider the case of a specialty pharmacy evaluating how to automate the extraction of patient and prescription information from faxes received from physicians’ offices. With more than five hundred faxes arriving each day, the manual work is tedious and time consuming.

From a technical point of view, a data science team might immediately identify an opportunity to use natural language processing (NLP). However, acquiring a deep understanding of human needs and expectations before writing the first line of code can result in a more socially responsible solution.
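As a rough sketch of what such an NLP approach might look like, the example below runs a general-purpose named entity recognizer over OCR’d fax text. The spaCy model, the generic entity labels and the sample text are illustrative assumptions, not a description of the pharmacy’s actual pipeline.

```python
# A minimal sketch of NLP-based extraction from an OCR'd fax.
# Assumes spaCy and its small English model are installed:
#   pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

# Hypothetical OCR output from a faxed prescription.
fax_text = (
    "Patient: Jane Doe, DOB 01/02/1965. "
    "Prescribed by Dr. Smith, Riverside Clinic, on March 3."
)

doc = nlp(fax_text)
for ent in doc.ents:
    # PERSON, ORG, DATE, etc. are generic labels; a production system
    # would need a model tuned for clinical and pharmacy entities.
    print(ent.text, ent.label_)
```

Technically feasible, yes; but as the practices below argue, a sketch like this should come after, not before, understanding the people it serves.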

Here are five lean design practices for the early stages of the AI lifecycle:

1. Begin by Empathizing

To develop responsible AI solutions, we need to empathize with the users, as well as the people and communities that will be indirectly impacted by them.

We need to ask the tough questions: Who is this for? What do they really need it for? What is the experience they expect, and why is this important now? In the case of the specialty pharmacy, the solution will help pharmacists and technicians, who spend too much time on manual administrative tasks and need to focus more on connecting with patients and helping them get the care they need.

Besides the end user, who else is impacted by this solution? Patients with chronic and sometimes serious conditions will be affected by the AI; they rely on this process to order a prescription and receive the treatment directed by their physician. Physicians’ offices and insurance providers are also involved in processing a prescription, so we must understand how information will flow through the people and systems in each of these organizations.

During the Empathize stage, a designer conducts interviews with all of the people involved in processing prescriptions. Open and genuine conversations should lead to a deeper understanding of the hurdles that must be overcome to process prescriptions for patients in need of care.

2. Conduct Inclusive Research

To build responsible AI, the design team must conduct inclusive research to understand the social environment in which the solution will operate. Are there underserved, neglected, vulnerable or minority populations that will be impacted by this application? What is the distribution of people across age groups, races, income levels and health conditions? Is there a risk of introducing unintended bias because of low volumes of data for specific populations? Is there inherent bias in the current operations?

The following two approaches to inclusive research will help frame the social context for an AI solution:

  1. Understand the current environment. Do patients with rare conditions get the same level of service as patients with more common conditions? Do patients from large medical centers get the same level of service as patients from individual physicians’ offices in rural areas with less access to technology? Is there any inherent human bias in the current operational processes?
  2. Anticipate the impact of AI. Are there underrepresented populations in the data? What checks need to be put in place to ensure the quality of care for these populations? (One such check is sketched below.)
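As a minimal illustration of that second check, a representation audit can flag groups with too little data to model reliably. The column names, sample values and 5% cutoff below are assumptions for illustration, not fixed rules.

```python
# A minimal representation audit: flag demographic groups whose share
# of the data falls below a chosen threshold. Column names, sample
# values and the 5% cutoff are illustrative assumptions.
import pandas as pd

records = pd.DataFrame({
    "age_group": ["65+", "65+", "18-40", "65+", "41-64", "65+"],
    "condition": ["common", "common", "rare", "common", "common", "rare"],
})

for column in ["age_group", "condition"]:
    shares = records[column].value_counts(normalize=True)
    low = shares[shares < 0.05]  # groups with very little data
    print(f"{column}:\n{shares.to_string()}\n")
    if not low.empty:
        print(f"Warning: very little data for {list(low.index)}")
```

An audit like this only surfaces the question; deciding what safeguard each underrepresented group needs remains a human judgment.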

3. Keep Humans in the Loop

Responsible AI models can be trained to master specific skills that complement and augment human capabilities. Realizing the potential of AI often requires building a synergistic feedback loop between the human and the machine.

The early stages of design should address the following questions:

  • At what point(s) in the business workflow will people be interacting with the information provided by AI?
  • What is the best way to visually present the information so that humans can use it for its intended purpose?
  • What information needs to be presented? Only insights and predictions, or probabilistic data and confidence levels?
  • What level of human oversight will be required? How can we create a good user experience to correct possible errors in the models? (A sketch of one approach follows this list.)
  • Do humans need to understand the reasoning behind AI insights for regulatory requirements or otherwise?
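To make the oversight question concrete, here is a minimal sketch, assuming extraction results carry a confidence score, of routing low-confidence fields to a human review queue. The 0.90 threshold and the record shapes are assumptions for illustration.

```python
# A minimal sketch of confidence-based routing: predictions below a
# review threshold go to a human queue instead of straight-through
# processing. The threshold and record shapes are assumptions.
from dataclasses import dataclass

@dataclass
class Extraction:
    field: str
    value: str
    confidence: float

REVIEW_THRESHOLD = 0.90

def route(extraction: Extraction) -> str:
    """Return 'auto' for high-confidence fields, 'human_review' otherwise."""
    return "auto" if extraction.confidence >= REVIEW_THRESHOLD else "human_review"

batch = [
    Extraction("patient_name", "Jane Doe", 0.97),
    Extraction("drug_name", "Humira 40mg", 0.72),
]

for item in batch:
    print(item.field, "->", route(item))
# patient_name -> auto
# drug_name -> human_review
```

In practice, the threshold would be set with the pharmacy’s real risk tolerance in mind, which is exactly what the prototyping exercise in the next practice helps establish.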

4. Prototype and Iterate

Before doing data wrangling and exploration, creating low-cost prototypes of a solution will validate the human-in-the-loop design and avoid future rework. When building responsible AI solutions, it is extremely useful to include in the prototype the kinds of errors an AI model might make. Unlike software applications, where end users expect bugs to be corrected, responsible AI solutions require stakeholders to plan for errors.

In the context of extracting patient and prescription information from faxed documents, a design team can manually create low-cost analog prototypes of solutions that have varying degrees of accuracy. After manually processing a sample of documents, different errors can be deliberately presented to a pharmacy technician, such as extracting a wrong or incomplete patient name (a false positive) or failing to identify the patient in a document (a false negative). Pharmacy technicians can then validate the real risk tolerance and define a governance structure to address risk. One way to stage those deliberate errors is sketched below.
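This is a minimal sketch of injecting errors into a prototype sample, assuming simple record dictionaries and arbitrary error rates; none of it reflects the pharmacy’s real data or processes.

```python
# A minimal sketch of deliberate error injection for a prototype:
# some records get a truncated name (false positive), others lose the
# patient entirely (false negative). Rates and formats are assumptions.
import random

def inject_errors(records, fp_rate=0.10, fn_rate=0.05, seed=42):
    rng = random.Random(seed)  # fixed seed so reviewers see the same sample
    staged = []
    for rec in records:
        rec = dict(rec)
        roll = rng.random()
        if roll < fn_rate:
            rec["patient_name"] = None  # false negative: patient not identified
        elif roll < fn_rate + fp_rate:
            rec["patient_name"] = rec["patient_name"].split()[0]  # false positive: incomplete name
        staged.append(rec)
    return staged

sample = [{"patient_name": "Jane Doe", "drug": "Humira"}] * 20
for rec in inject_errors(sample)[:3]:
    print(rec)
```

Walking technicians through a staged sample like this surfaces which error types are tolerable and which demand mandatory human review.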

5. Promote Multidisciplinary Teams

One of the most basic principles of design thinking is “Build with, not Build for.” AI teams should bring together domain experts with diverse profiles (gender, race, age, disability, income level, education level and others), along with designers, data scientists and engineers.

Building responsible AI solutions requires applying a diverse set of knowledge, skills and perspectives. When people impacted by technology are involved in decision-making, design and validation, there are noticeable differences in the quality and user experience of the resulting product.

When developing socially responsible AI solutions, much of the focus is on using unbiased, quality data. But while data fuels AI, human-centered design is in the driver’s seat. A data- and design-centric approach to building responsible AI solutions promotes ethical and fair decision-making that works for everyone.