How to Establish an Ethical AI Strategy

When it comes to AI, there is a difference between ethical AI and trusted AI. But the two are not mutually exclusive, which is why you must consider both when developing your AI strategy.
Ethical AI asks questions like “are we doing the right things to ensure our AI is unbiased and free from risk?” It’s what helps to ensure that the AI we’re building—and the data we use to train and power the models—is fit for the task. For example, if you were a medical researcher looking to invest in a new drug, would you base all your findings on a data set composed entirely of white males? Of course not, because your results would be skewed and biased toward that demographic, which would inevitably lead to incorrect conclusions.
Trusted AI, on the other hand, focuses on doing things right when it comes to building and training the models. Many of the AI solutions used in the world today are built on poor-quality data. And when bad data feeds your models, your AI returns inaccurate insights which, in turn, can lead to misguided decisions, financial repercussions, or a tarnished brand reputation.
Understanding the Risks Associated with AI
Implementing ethical, trustworthy AI is complex. It requires clean data and a clear definition of the use case you’re looking to address. It also involves collaborating with the business and driving change throughout the organization. And, of course, it requires the right technology, ideally one that is AI-native (not AI-enhanced).
But many businesses don’t fully understand the inherent risks that come along with the implementation of AI solutions. Too often, businesses choose the wrong use case. They implement AI because they want to drive business impact—or because they have FOMO. But many times, in their haste to adopt AI, they choose use cases that are risky or likely to fail. Negative impacts such as bad decisions can be costly, both in dollars and reputational harm. And too often, these bad decisions are based on AI-generated insights fueled by bad data.
Instead, businesses should start with use cases where the conditions for success are in place. Consider use cases with high data volume and availability, measurable impact, and stakeholder buy-in. The simpler the better. That way, you can take steps to embrace AI without putting your organization at risk.
Building an Ethical AI Program
Ethical AI programs start with intent. They provide clear answers to questions like “what are we implementing?” and “why are we doing it?” Businesses that implement ethical AI programs also consider how they can build their program to promote accuracy and fairness—and to mitigate bias. And, of course, they include ways to ensure that their critical business data remains safe and secure.
Companies that prioritize ethical AI also assess their AI data quality, ensuring the data powering their models is accurate, trustworthy, and fit for purpose. After all, there is no point implementing AI-powered solutions if the data that fuels them is not high quality.
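As a quick illustration, here is a minimal sketch of such a data quality assessment in Python using pandas. The file name, the columns, and the 5% completeness tolerance are all hypothetical; adapt them to your own schema and standards.

```python
import pandas as pd

def assess_quality(df: pd.DataFrame) -> pd.DataFrame:
    """Report simple fitness-for-purpose metrics per column."""
    return pd.DataFrame({
        "null_rate": df.isna().mean(),   # completeness: share of missing values
        "unique_values": df.nunique(),   # cardinality sanity check
    })

df = pd.read_csv("customers.csv")  # hypothetical input file
report = assess_quality(df)
print(report)

# Flag columns whose null rate exceeds a chosen tolerance (here 5%).
print("Failing completeness:", list(report.query("null_rate > 0.05").index))
```

Checks like these are no substitute for a full data quality program, but they make “fit for purpose” measurable rather than aspirational.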
Testing Your Data for Bias
Another hallmark of ethical AI programs is testing your data for bias. By doing so, organizations can minimize the risk of unintended bias or discrimination in their results. Sample your data, look at how the model uses it, and determine the main drivers. Take, for example, a male/female field. It may exist as an input in your data, but if it’s not relevant to what the model predicts, it can introduce bias.
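To make this concrete, here is a minimal sketch of one way to surface a model’s main drivers using permutation importance from scikit-learn. The data is synthetic and the feature names hypothetical; in practice, you would run this against a sample of your own training data.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = pd.DataFrame({
    "feature_a": rng.normal(0, 1, n),
    "feature_b": rng.normal(0, 1, n),
    "sex": rng.integers(0, 2, n),  # sensitive field under scrutiny
})
# In this toy data, the labels depend only on the two legitimate features.
y = (X["feature_a"] + 0.5 * X["feature_b"] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: how much does accuracy drop when a column is shuffled?
imp = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(X.columns, imp.importances_mean), key=lambda t: -t[1]):
    print(f"{name:>10}: {score:.3f}")
```

If a field like sex shows real predictive weight, the model is leaning on an input you deemed irrelevant, which is a signal of potential bias.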
To illustrate this point, let’s consider an example. A bank wishes to use AI to determine loan eligibility and risk of default. Male/female may be part of the data set used to train the AI model, but it should not be a consideration when the bank decides whether to approve a loan. Therefore, using it to make decisions could lead to biased results. Credit score, on the other hand, is an important input for the model—and a better indicator of whether a person will default on their loan.
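A simple way to check for this kind of biased outcome is to compare decisions across groups. The sketch below uses synthetic, hypothetical approval decisions to compute the demographic-parity gap: the difference in approval rates between the two groups.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 1000
sex = rng.integers(0, 2, n)
# Hypothetical model decisions that happen to skew toward one group.
approved = rng.random(n) < np.where(sex == 1, 0.65, 0.45)

results = pd.DataFrame({"sex": sex, "approved": approved})
rates = results.groupby("sex")["approved"].mean()
print(rates)
print("Demographic-parity gap:", abs(rates[1] - rates[0]))
```

A gap well above zero means the model approves one group far more often than the other; the next step is to investigate whether legitimate inputs, like credit score, actually explain the difference.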
Another way to test for bias is by evaluating underlying proxies. If your model and its key drivers look correct at face value, it’s important to dig deeper and explore whether underlying proxies exist. Building on our example above, if shoe size is an input that determines whether or not your bank will give you a loan, it could be that shoe size is acting as a proxy for male/female, which we’ve deemed a bias. Obviously, the size of your shoe has no legitimate bearing on whether you will default on your loan.
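A lightweight way to screen for such proxies is to correlate every candidate input with the sensitive attribute. In the synthetic sketch below, shoe size tracks the male/female field closely while credit score does not; any feature showing a strong correlation deserves scrutiny before it enters the model.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 2000
sex = rng.integers(0, 2, n)
df = pd.DataFrame({
    "sex": sex,
    # Shoe size differs by group, so it silently encodes the sensitive field.
    "shoe_size": np.where(sex == 1, rng.normal(44, 2, n), rng.normal(39, 2, n)),
    "credit_score": rng.normal(650, 80, n),  # independent of sex in this data
})

# Absolute correlation of each candidate feature with the sensitive attribute.
proxies = df.drop(columns="sex").corrwith(df["sex"]).abs().sort_values(ascending=False)
print(proxies)
```

Correlation is only a first-pass screen; proxies can also hide in nonlinear combinations of fields, so pair this with the model-level checks described above.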
Practical Steps for Establishing Ethical AI Programs
For businesses looking to establish ethical AI programs, it’s important to keep in mind that true change starts with culture, not technology. That’s why raising awareness about the benefits—and the potential risks—is a good place to start. You may find that some people view ethical or trusted AI as counterproductive, believing that the international standards and laws regulating the ethical use of AI hinder innovation. In reality, this is not the case.
Just as anti-money laundering and compliance laws build trust in finance, AI-related regulations build trust in business. The risks associated with using AI, especially when it is fueled by dirty data, are high, and the consequences for companies relying on inaccurate and untrustworthy insights can be severe. In fact, compliance might be one of the least expensive things companies can do, especially when compared with repairing financial or reputational harm.
Next, you need the right people on board to support your programs. Put in place senior leaders who not only are aware of the risks associated with AI, but also understand how to use (and not use) AI for business decision-making. Many organizations are adding this responsibility to the purview of their chief data officer (CDO), expanding the title to chief data and AI officer (CDAIO). In addition, consider setting up an ethics board and establishing processes through which everyone across the business can voice concerns and provide feedback when insights are wrong.
Finally, you need to hold your teams accountable for improving the quality of the data and providing feedback when they notice data is incorrect. That way, decision-makers can feel confident that the insights the AI generates are trustworthy.
The Future of Ethical AI
As AI takes an even stronger hold on companies worldwide, it’s likely that regional and international organizations will establish better rules and regulations for it. Further, government bodies need to accelerate their understanding of data and AI and acknowledge the inherent risks for our society. They must put measures in place to detect and control misinformation so that the people using AI can feel confident that the information they receive is trustworthy.
As the adoption of AI continues to skyrocket, it’s imperative that humans remain in the loop and take responsibility for the quality of the data fueling the insights AI generates. Raising awareness of not just how to use AI, but also how to ensure it is free from bias, discrimination, and misinformation, is critical. And it’s something every organization needs to consider as it establishes its ethical AI program.
Get a free, no-obligation 30-minute demo of Tamr.
Discover how our AI-native MDM solution can help you master your data with ease!