As it stands, there are many conceptions of what it substantively means to deploy AI technology for the benefit of society. As with any contested topic, what it means to do “good” with AI varies by organization and touches on issues of ethics, value alignment, philosophy, and government regulation, among others. Yet this debate is more than an intellectual exercise: AI is a powerful technology that increasingly shapes our social, political, and economic lives.

Recently, the AI for Good Foundation interviewed affiliate Dr. Lee Schlenker, Principal at the Business Analytics Institute, on his current views on pressing issues such as ethics, value alignment, and AI for “Good” in a short question-and-answer session. Below you will find his concept paper, which extends this short question-and-answer forum, along with his contact information.

 


 

Dr. Lee Schlenker

I am the founder and principal of the Business Analytics Institute, and a consultant and keynote speaker on themes surrounding the managerial challenges of AI, customer analytics, and the Internet of Value(s). Currently, my colleagues and I work at the Institute to help organizations leverage data to improve decision-making. We also organize international summer and winter schools, as well as corporate educational programs, for graduate students and managers in banking and finance, telecommunications, public works, and the service industries.

Our contribution on AI for Good has been designed to provoke discussion and debate around our Winter School on the application of AI for Good, February 1–10 in Mysore, India.

 

Read Dr. Lee Schlenker’s concept paper here.

 

What do you perceive as the biggest challenge regarding the ethical deployment of AI and why?

In applying “ethics” to technology, we suggest that distinctions between good and bad aren’t bound by context and culture, when, in fact, history demonstrates the contrary. Each new generation of technology, whether it be the steam engine, the Internet, or Web3, has brought forth a new set of economic opportunities and ethical challenges.

The biggest challenge I see today is to develop an ethical framework that will allow decision-makers to focus on the process of how we leverage artificial intelligence in business and society.
There are several reasons to focus on the process rather than trying to define the outcomes of what “AI for Good” might look like. To begin, human preferences about what constitutes “Good” evolve over time with social and economic conditions. They differ from one individual to the next, and even between the “experiencing” and “remembering” functions of our own consciousness.

Given this, there is often a notable difference between our intentions and our actions, so AI cannot learn operational definitions of “Good” from human behavior alone. All of these reasons argue for developing ethical frameworks that allow AI development teams and end users to explore the right balance between human and machine intelligence.

 

Do you think that the “needs of the business (economic efficiency, cutting costs, increasing profits, etc.)” could ever truly align with the needs of society?

On the contrary, I think that the needs of business always align with societal needs, but not necessarily to the benefit of all. Our desire to realign business and societal needs is at the heart of leveraging AI for “Good,” but the key is determining where to draw the line.

The first task is learning from the actions of our peers what they truly value (their “utility”). Since we are faced with multiple examples of consumers and managers not working in their own best interests (due to the risk, uncertainty, and ambiguity of the decision-making environment), training algorithms to work for either the individual or the common “Good” is a tricky task. We’ve made some progress with Inverse Reinforcement Learning, but AI practitioners have a long journey ahead.
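
To make this concrete, here is a minimal sketch (mine, not from the interview) of the simplest, one-step case of Inverse Reinforcement Learning: we watch an agent choose among options described by feature vectors, assume the agent is Boltzmann-rational (options are chosen with probability proportional to the exponential of their hidden utility), and recover the utility weights by maximum likelihood. The features, true weights, and learning rate are all illustrative assumptions.

```python
import numpy as np

# Hypothetical setup: an "expert" repeatedly chooses among options described
# by feature vectors, picking option i with probability softmax(feats @ w)[i]
# under hidden utility weights w (Boltzmann rationality). Recovering w from
# the observed choices is the one-step (bandit) case of inverse RL.
rng = np.random.default_rng(0)
n_features, n_options, n_obs = 3, 5, 2000
true_w = np.array([1.5, -2.0, 0.5])                # hidden values to recover
feats = rng.normal(size=(n_options, n_features))   # option descriptions

def softmax(u):
    e = np.exp(u - u.max())
    return e / e.sum()

# Simulate demonstrations from the (noisily rational) expert.
demos = rng.choice(n_options, size=n_obs, p=softmax(feats @ true_w))

# Gradient ascent on the log-likelihood of the demonstrations: the gradient
# is the observed mean feature vector minus the model's expected one.
w = np.zeros(n_features)
for _ in range(500):
    p = softmax(feats @ w)
    w += 0.5 * (feats[demos].mean(axis=0) - p @ feats)

print("true weights:     ", true_w)
print("recovered weights:", np.round(w, 2))
```

Even in this toy setting, the catch raised above is visible: the recovered weights describe what the expert does, which is only a proxy for what is “Good” whenever behavior and best interests diverge.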

Secondly, the challenge of enlisting AI to realign business and societal needs is embodied in the Halting Problem. Artificial intelligence has produced remarkable results in decision environments that are observable and that engage a limited number of actors, where the actions are discrete and the outcomes are specified by known or predictable rules.

Unfortunately, arbitrating the trade-offs between business and society is quite often more complex: dynamically changing multi-agent environments in which the rules and variables are often unknown and/or difficult to predict. These situations require more than the mathematical intelligence that AI can offer; they must enlist the emotional, interpersonal, and ecological intelligence that humans can provide.

Developing algorithms that defer to human intelligence when faced with uncertainty (hence the Halting Problem) is the only plausible response I see for realigning future business and societal needs.
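
A minimal sketch of that “defer when uncertain” principle (my illustration, with an arbitrary confidence threshold rather than a method from the interview): the algorithm acts only when its confidence clears a bar, and otherwise halts and routes the case to a human reviewer.

```python
import numpy as np

def decide_or_defer(class_probs, threshold=0.9):
    """Act on the model's prediction, or defer to a human when unsure.

    class_probs: the model's probabilities over possible outcomes (sum to 1).
    threshold: hypothetical confidence bar below which the machine halts.
    """
    best = int(np.argmax(class_probs))
    confidence = float(class_probs[best])
    if confidence >= threshold:
        return {"decision": best, "decided_by": "machine", "confidence": confidence}
    # Below the bar, the algorithm stops and escalates to human judgment.
    return {"decision": None, "decided_by": "human review", "confidence": confidence}

print(decide_or_defer(np.array([0.97, 0.02, 0.01])))  # confident: machine acts
print(decide_or_defer(np.array([0.55, 0.30, 0.15])))  # uncertain: deferred
```

Where to set such a threshold is itself an ethical judgment about how much machine error a given decision can tolerate.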

 

In your article, you distinguish government regulation from ethics; do you view one as more necessary than the other, or are they complementary?

Although government regulation intends to mitigate perceived imperfections in the market system, attempts to regulate information technology have often failed to produce ethical behavior, for several reasons. The State and society have attempted to legislate technology rather than the ethical consequences of its use. Legislation often seeks to punish unethical behavior rather than incentivize ethical problem-solving. The nature and speed of innovation in IT largely surpass those of previous technologies. Finally, the latency of a necessarily time-consuming and complex legislative process results in legislation that is often outdated before it is enacted.

The European Union’s GDPR is a case in point, for, in spite of its merits, the text falls considerably short of providing a framework for ethical decision-making. These efforts have been designed to protect personal data rather than an individual’s right to privacy, freedom of thought, or freedom of choice. Newer technological innovations, like Web3, rely on a chain of indelible transactions that is incompatible with the codified “right to modify or delete personal information.” This gap between the good intentions of legislation and business practices seems inevitable, given that the very foundation of surveillance capitalism is identifying, monitoring, and influencing behavior.

What do you think is the biggest opportunity for AI developers to reduce bias in data?

Bias can be defined as a quality of an object, idea, or event that cannot be deduced directly from the data itself. We should keep in mind that “bias” isn’t found just in the data, but also in the heuristics, algorithms, and business logic we use to differentiate better from worse.

We all use cognitive biases to make assumptions about the world around us, in order to process information quickly enough to make decisions. Bias and variance are also intrinsic qualities of the algorithms we use to look at data, and in practice they are inversely correlated, as the sketch below illustrates.
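
As an illustrative aside (my sketch, not part of the interview), the following Python snippet repeatedly fits polynomials of increasing degree to noisy samples of a sine curve and decomposes the prediction error at a fixed point into squared bias and variance; the data-generating process and the chosen degrees are assumptions picked for clarity.

```python
import numpy as np

# Illustrative bias-variance decomposition: fit polynomials of several
# degrees to repeated noisy samples of y = sin(2*pi*x), then measure, at a
# fixed test point, the squared bias (systematic error of the average fit)
# and the variance (sensitivity of the fit to the particular sample).
rng = np.random.default_rng(1)

def bias_variance(degree, n_trials=200, n_points=20, x_test=0.25):
    preds = []
    for _ in range(n_trials):
        x = rng.uniform(0, 1, n_points)
        y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, n_points)
        preds.append(np.polyval(np.polyfit(x, y, degree), x_test))
    preds = np.array(preds)
    truth = np.sin(2 * np.pi * x_test)
    return (preds.mean() - truth) ** 2, preds.var()

for degree in (1, 3, 9):
    bias2, var = bias_variance(degree)
    print(f"degree {degree}: bias^2 = {bias2:.3f}, variance = {var:.3f}")
```

Typically the rigid degree-1 model shows high bias and low variance while the flexible degree-9 model shows the reverse; neither term can be driven to zero without inflating the other.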

Bias is neither good nor bad, though implicit bias can distort our ability to leverage AI and lead to erroneous conclusions.

I believe that AI developers should be less concerned with “reducing” bias than with “recognizing” it, and should focus on helping organizations and consumers “assume” or “correct for” bias and its consequences on decision-making. Reducing bias is somewhat illusory, for bias is an inherent quality of the social processes that have defined both the problems and the opportunities we’re addressing.

In developing AI, the specific challenges of tunnel vision regarding mathematical logic and the black box of deep learning also need to be addressed. Finally, the inability of current implementations of AI to generate new concepts and relationships when probing complex decision environments points to the opportunity for AI developers to address the challenge of co-developing human and machine intelligence.

AI for Good Foundation is setting the standard for AI Ethics best practices in company, classroom, and policy settings.

Our work is informed by the UN Sustainable Development Goals — and the human rights they champion.

Read more here about our featured programs: The AI Ethics Audit and The AI Ethics Institute.