By Lindsey Asis – 

What is Explainable AI?

Explainable Artificial Intelligence (XAI) refers to processes and methods built into AI systems that allow human users to understand how a machine learning algorithm arrives at its outputs.


The Need for Explainable AI

The field of XAI has grown rapidly in the past few years, driven by increased awareness of, and urgency around, the need for transparency in AI models. The status quo in AI research is the “black box”: models ingest so much data and build such complex internal representations that even the researchers who design them cannot trace exactly how a given input leads to a given output. Outcries against racial bias and a lack of data privacy have mounted as AI has become more ingrained in our lives, and thus far, there have been few solutions.

Not only has there been a public call for accountability in AI, but a legal one as well. The EU’s General Data Protection Regulation (GDPR) is one of the most ambitious data privacy regulations in the world, with direct implications for artificial intelligence. While there is some confusion over whether the regulation contains a “right to explanation” of automated decisions, Articles 13-15 do explicitly grant a right to ‘meaningful information about the logic involved’ in automated decision-making. While there is some gray area as to what this means in practice for companies and consumers, it is clear that the need for transparency is gaining traction and legitimacy.

The conceptual summary of the need for Explainable AI is that we must shift from “black box” decision making to “glass box” decision making. As an ethical principle, artificial intelligence should be transparent so that engineers can identify and correct weaknesses or bias in their models, and users can have an active, informed, and consensual relationship with the technology.


Explainable AI in Practice: Self-Driving Cars

Ethicists and data engineers alike have pointed to self-driving cars as an intuitive illustration of Explainable AI. Suppose you are in an Uber or Lyft coming home from the airport. You know the area well, and you notice that your driver takes an alternate route home that you are not familiar with. In a moment of subtle panic, or just curiosity, you ask your driver why they are taking that route. They respond that there is a road closure ahead, and your fears are soothed.

Now imagine the same situation, except you are in a self-driving Uber, where AI is the driver. If your car begins to take a different route than the one you were expecting, you would want some kind of interface through which to ask why the car made that decision. This is a simple example of why Explainable AI would be beneficial, but the stakes rise quickly: credit scoring and loan approval, for example, are increasingly automated. We need Explainable AI so that people can understand why they were not approved and what steps they could take to improve their situation, as sketched in the example below. Because artificial intelligence has garnered a poor reputation for built-in bias, Explainable AI can be a mechanism for ensuring accountability.
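To make the loan example concrete, here is a minimal, hypothetical sketch in Python of what such an explanation could look like. It assumes a deliberately simple logistic-style scoring model; the feature names (income, debt_ratio, missed_payments), weights, and threshold are invented for illustration, and real credit models are far more complex, which is precisely where the black-box problem arises.

```python
import math

# Hypothetical weights for a toy logistic-style loan-scoring model.
# These values are invented for illustration only.
WEIGHTS = {"income": 0.8, "debt_ratio": -1.5, "missed_payments": -0.9}
BIAS = 0.2
APPROVAL_THRESHOLD = 0.5

def score(applicant):
    """Return the approval probability and each feature's contribution to the score."""
    contributions = {name: WEIGHTS[name] * value for name, value in applicant.items()}
    logit = BIAS + sum(contributions.values())
    probability = 1.0 / (1.0 + math.exp(-logit))
    return probability, contributions

def explain(applicant):
    """Print a decision along with human-readable 'reason codes'."""
    probability, contributions = score(applicant)
    decision = "approved" if probability >= APPROVAL_THRESHOLD else "denied"
    # List the strongest negative factors first, since a denied applicant
    # most needs to see what counted against them.
    ranked = sorted(contributions.items(), key=lambda item: item[1])
    print(f"Decision: {decision} (approval probability {probability:.2f})")
    for name, contribution in ranked:
        direction = "hurt" if contribution < 0 else "helped"
        print(f"  {name}: {direction} the application by {abs(contribution):.2f}")

# A hypothetical applicant with normalized feature values.
explain({"income": 0.4, "debt_ratio": 0.7, "missed_payments": 2.0})
```

Even this toy model shows the shape of the answer a denied applicant would want: which factors counted against them, by how much, and therefore what they might change. The harder, unsolved problem is producing comparably faithful explanations for the far more complex models actually in use.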


Limitations and Counterarguments

Data scientists and ethicists have not reached a robust consensus on the logistics of Explainable AI. For one, in many cases it is extremely difficult, if not impossible, to translate the intricate mathematical path a model takes into layman’s terms that a user can understand. A codified “right to explanation,” such as the one discussed in connection with the GDPR, may therefore not be feasible, because certain models will simply be too complicated to understand and explain.

Another area of disagreement comes from the ethical school of thought that AI need not be explainable, only fair and correct. Consider that we do not often question our medical professionals about a diagnosis, nor about the path they took to reach it. This is because we trust our doctors to do no harm, and because we accept that we would not fully understand their reasoning anyway; it is not our area of expertise. The same argument extends to AI: because most people will not understand the inner workings of artificial intelligence anyway, the burden should instead fall on engineers to ensure that fairness and correctness are protected in whatever they create.


Conclusion

Explainable AI is not a silver bullet for bias in artificial intelligence, at least not right now. While it is true that many existing AI models cannot be reworked to be explainable because they are too complicated, pro-XAI engineers argue that we must engineer explainability into AI from here on out. If we cannot explain or understand the technology we are bringing into the world, perhaps we should not be deploying it until we understand the ramifications.

Moreover, the analogy of trusting our medical professionals does not translate directly to AI: the public generally trusts medical professionals, but it does not generally trust artificial intelligence. While engineers should indeed ensure fairness and correctness in what they create as an ethical principle, we need explainability as a mechanism of accountability in case they do not.

The road to truly Explainable AI is just beginning, and it may never be complete. However, we must start these conversations about the ethics of artificial intelligence and where its faults lie. Artificial intelligence is here to stay, so it is our duty to ensure we are using it to create a better, more equitable world.


Lindsey is the Program Coordinator at AI for Good. She is a senior at the University of Texas at Austin, double-majoring in Government and Sustainability Studies with a minor in Iberian Studies. Lindsey has worked in the non-profit sector, on city and congressional campaigns, and in state-level government.