From XU Magazine, 
Online News

Take the good, leave the bad: keeping human biases out of technology

October 25, 2022

This article originated from the Xero blog. The XU Hub is an independent news and media platform - for Xero users, by Xero users. Any content, imagery and associated links below are directly from Xero and not produced by the XU Hub.
You can find the original post here:
https://www.mindbridge.ai/blog/take-the-good-leave-the-bad-keeping-human-biases-out-of-technology/

Artificial intelligence is becoming more prevalent in our lives, and it's only a matter of time before it's used in everything. Unfortunately, however, biases in AI keep being found in even the most well-meaning of uses, from aiding law enforcement efforts to screening job applicants, such as Amazon's formerly secret recruiting tool.

The global retail giant used a hiring algorithm that was not given federally protected attributes, like race, religion, age, sex, and disability. Yet, even without those identifiers, researchers found the algorithm preferred male candidates, who almost exclusively used aggressive verbs like "executed" or "captured" to describe themselves and their experience on resumés.
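To see how word choice alone can smuggle bias past the removal of protected attributes, here is a minimal sketch (entirely hypothetical data and a deliberately naive scorer, not Amazon's actual system): a model trained on past hiring outcomes learns to reward whichever vocabulary the previously hired group happened to use, with no gender field anywhere in sight.

```python
from collections import Counter

# Hypothetical past hiring outcomes (1 = hired). There is no gender
# column, but verb style correlates with who was hired historically.
history = [
    ("executed migration captured market share led team", 1),
    ("executed strategy captured new accounts", 1),
    ("collaborated on migration supported team growth", 0),
    ("coordinated strategy facilitated account growth", 0),
]

# Count how often each word appears in hired vs. rejected resumes.
hired_words, rejected_words = Counter(), Counter()
for text, hired in history:
    (hired_words if hired else rejected_words).update(text.split())

def score(text):
    """Naive score: hired-count minus rejected-count, summed per word."""
    return sum(hired_words[w] - rejected_words[w] for w in text.split())

# Two candidates with equivalent experience, different verb style:
a = "executed project captured revenue"
b = "coordinated project facilitated revenue"
print(score(a), score(b))  # the "aggressive verb" resume scores higher
```

The model never sees a protected attribute, yet it reproduces the historical preference anyway, because the proxy (vocabulary) was baked into the training data.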

Unbiased AI is a goal we all want to achieve, but it remains difficult because this human-centric technology offers so many places for bias to creep in. It's not impossible, though. So fixing biased AI is a top priority for the industry.

Research from the Association for Computing Machinery found that models trained on large data sets like Wikipedia and movie reviews absorb measurable gender bias, which can be detected through sentiment analysis. This can help guide research down the right path, and many organizations are developing ethical AI with transparency and community to overcome the obstacles.

Building trust in artificial intelligence is essential, and identifying and addressing human biases builds the foundation of that trust.

Identifying biases in AI

Although the breadth of AI bias can be vast, there are two main things we can do as business leaders to tackle the problem. The first and most obvious step in ensuring fairness in machine learning technology and automation is to define exactly what “fair” means. We all have different ideas about what is and isn’t fair.

It's also important to discuss the ways AI augments our decision-making processes. Unlike humans, AI filters through all the noise and determines outcomes based purely on the data it's given. The data humans produce is not always neutral, however, and this can very easily imprint our biases on the AI's outputs.
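As a minimal sketch of that imprinting effect (with made-up numbers and a deliberately simple frequency-based predictor, not any particular product's method): a model "trained" on skewed historical decisions will faithfully replay the skew on new cases, because from the model's perspective the bias is just another pattern in the data.

```python
from collections import defaultdict

# Hypothetical historical decisions: (group, approved). The data
# encodes a human reviewer's skew, not any difference in merit.
history = [("A", True)] * 8 + [("A", False)] * 2 + \
          [("B", True)] * 4 + [("B", False)] * 6

# "Train" by memorizing the approval rate observed for each group.
counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
for group, approved in history:
    counts[group][0] += int(approved)
    counts[group][1] += 1

def predicted_approval_rate(group):
    """Predict a new applicant's approval odds from historical rates."""
    approved, total = counts[group]
    return approved / total

print(predicted_approval_rate("A"))  # 0.8
print(predicted_approval_rate("B"))  # 0.4: the historical skew, replayed
```

Nothing in the code is malicious; the disparity comes entirely from the training data, which is why auditing the inputs matters as much as auditing the algorithm.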

For example, image-generating AI models like DALL-E 2 and Stable Diffusion look like they're creating hyper-realistic images of people. But they only represent the poses and perspectives in which we take and publish photographs of ourselves, much like our Facebook and Instagram profiles are more highlight reels than actual portrayals of our real lives. AI can similarly end up biased toward unnatural beauty standards that are prevalent across mainstream and social media.

That barely scratches the surface of the potential problems with these image-generating AI models, as their reproduction of specific artists, public figures, and brand imagery will inevitably challenge how we interpret our current copyright laws. It's clear the models were trained on data bearing watermarks from Shutterstock, ArtStation, and other marketplaces, as well as on branded IP like Nintendo and Disney. It's not yet clear what counts as fair use when these large data sets draw on such material.

So, how can businesses improve trust and transparency in AI while ensuring we utilize the tools with a sense of technology ethics?

Improving AI trust and transparency

As AI becomes more ubiquitous in business systems, it’s important to be open and transparent about how it’s being used. Educated communication is one of the best ways to relieve fears surrounding emerging technologies. Several great expert resources are linked throughout this post, and you should also check out the Partnership on AI and The Alan Turing Institute’s Fairness, Transparency, Privacy interest group.

Staying connected and informed on what’s happening in the industry at large helps identify and address issues while fixing biased AI. It’s also important to establish processes directly intended to mitigate AI bias, something MindBridge does at the core of our operations. At MindBridge, we help financial professionals identify efficiencies and uncover new streams of revenue by identifying, surfacing, and analyzing risk via the power of data analytics. We are proud to provide the highest level of transparency and assurance of any AI audit technology by being the only industry organization that has completed a comprehensive third-party audit of its algorithms. Using AI in audits can be a powerful tool that unlocks higher-level analysis while streamlining mundane tasks and reducing manual errors.

Regulatory agencies as far-reaching as the Food and Drug Administration (FDA) are now actively working to address AI bias as well. This is because AI is now used throughout various stages of medical device development and operation, along with (but not limited to) drug production and genetic research. The FDA is dedicated to ensuring that healthcare remains accessible to every demographic, which requires fixing biased AI in health technology.

Follow any regulatory guidance as a start, and enlist a partner if necessary to determine how to ethically leverage AI in your business. Like any other tool, AI carries risks, but it's important to understand how the technology can supplement existing processes, from cybersecurity to marketing, compliance, content creation, and more.

Removing human bias from AI

Although artificial intelligence is getting increasingly sophisticated, it’s proving to be a human-centric technology. Still, there are inherent dangers in technology and automation when it isn’t truly representative of the real-world human experience. While powerful, machine learning is only as accurate as the training data it’s given and can very easily amplify human biases in unintentional ways.

Developing unbiased AI is an ongoing effort from both developers and users industry-wide. It requires being honest with ourselves to identify and address human biases, including our own confirmation bias. AI’s outputs are only as diverse and correct as the data sets and algorithms allow them to be, and fixing AI bias is about far more than just being fair.

It’s about having the right answers and doing the right thing.
