
AI Ethical Framework: AI Usage Questions

By Jason Grigsby

Published on May 1st, 2024


This series on AI is co-authored by Megan Notarte and Jason Grigsby.

How we deploy AI in our work and product development may hold as much weight in shaping the risks and ethics as the specific models we choose. If nothing else, being thoughtful about our usage of AI can help reduce negative outcomes.

AI works best when you supply content and ask it to summarize or transform it in some way. Not only is the AI much less likely to hallucinate when you supply the content, but it is also less likely to generate something containing copyrighted material or biases beyond whatever bias was in the content you supplied.
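As a rough illustration, that kind of grounded request might look like the sketch below. It assumes the OpenAI Python client; the model name and prompt wording are placeholders, and the only point is that the model is told to work solely from the text we hand it.

    # A minimal sketch of "supply the content, then ask it to transform it."
    # Assumes the OpenAI Python client; the model name and prompts are
    # illustrative placeholders, not recommendations.
    from openai import OpenAI

    client = OpenAI()

    def summarize(document: str) -> str:
        """Ask the model to summarize only the text we provide."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {
                    "role": "system",
                    "content": (
                        "Summarize the user's text. Use only the text provided; "
                        "if something is not in it, say so."
                    ),
                },
                {"role": "user", "content": document},
            ],
        )
        return response.choices[0].message.content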

Given the black box nature of AI, how do we know it is working as intended? How do we know that the answers given are accurate?

We should prefer processes that keep humans in the loop to vet what AI generates. If humans aren’t in the loop, then we need processes designed to spot check the output of AI to ensure responses are meeting an acceptable accuracy threshold.
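One low-tech version of that spot checking is to sample a slice of responses for human review and track whether the reviewed slice stays above an agreed accuracy threshold. A minimal sketch, where the sample rate and threshold are made-up examples:

    import random

    # Hypothetical spot-check helpers; the 5% sample rate and 95% threshold
    # are arbitrary examples, not recommendations.
    SAMPLE_RATE = 0.05
    ACCURACY_THRESHOLD = 0.95

    def sample_for_review(responses: list[str]) -> list[str]:
        """Pick a random subset of AI responses for a human reviewer."""
        k = max(1, int(len(responses) * SAMPLE_RATE))
        return random.sample(responses, k)

    def meets_threshold(review_results: list[bool]) -> bool:
        """review_results holds one True/False judgment per reviewed response."""
        if not review_results:
            return False
        return sum(review_results) / len(review_results) >= ACCURACY_THRESHOLD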

No AI model can promise 100% accuracy. AI will fabricate answers, introduce bias, and say things that are embarrassing, insulting, or worse. 

These stories may be humorous when an AI bot tries to convince a journalist to leave his wife, but they may be costly when AI provides erroneous information to customers.

What are the worst case scenarios that can happen when AI gets something wrong? 

It’s one thing if AI generates a false answer that a human can quickly identify as wrong and disregard. AI “hallucinations” are a much bigger problem when someone’s livelihood or freedom is affected.

Another way to consider the risk when AI gets something wrong is to ask: who will see the error?

If someone uses an AI chat bot for their own productivity, the only person who will see the AI output is that individual unless they choose to share it with others. Presumably, that person can vet the output before sharing it.

But every time the audience for AI output increases, the chances of something going wrong increase as well. AI used internally and only seen by a company’s employees will be lower risk than a system used by customers. Likewise, an AI integration used by the general public is higher risk than a tool used by customers.

Slapping an AI chat bot onto a product isn’t enough. It will likely aggravate your customers more than it helps them.

The more focused the use case, the more likely AI will be appreciated by users. Adding AI to a product isn’t a goal unto itself. AI has to be in service of a user’s needs. Start by looking at your biggest customer pain points and see if AI can help.

We need to explain how we’re using AI to our users and give them the option to opt out of AI if possible. Ideally, AI would be an additional feature of our products, not a requirement.

AI output, particularly the code generated by AI, is often not accessible. Hidde de Vries explains the challenge:

[An AI] systems’ success rate can be (and is usually) increased by training models specifically with very good examples…For accessibility, this data is hard to get by—most of the web has accessibility problems.

AI isn’t an excuse to ignore accessibility. AI features should be accessible to everyone.

When we incorporate AI into our products, we must ensure we don’t leak a user’s private data inadvertently. We talked earlier about the need to understand how AI models use the data that users provide them, but we also need to evaluate our AI features with an eye on privacy.

We can’t be certain that AI-generated output won’t contain sensitive information that the user has provided, so we shouldn’t publish that output without giving users a chance to review it.
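In practice, that can be as simple as keeping AI output in a draft state until the user explicitly approves it. A minimal sketch, where the Draft type and publish step are hypothetical:

    from dataclasses import dataclass

    # Hypothetical review gate: AI output stays a draft until a person approves it.
    @dataclass
    class Draft:
        text: str
        approved: bool = False

    def publish(draft: Draft) -> None:
        if not draft.approved:
            raise ValueError("AI-generated drafts must be reviewed before publishing")
        # ...send the approved text to its public destination here...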

Not every job can be protected. But when we look at our work, we should consider the greater societal impact. Helping people live fuller lives and do their work more efficiently is something to strive for. Helping a massive company squeeze more out of their employees isn’t.

It feels foolhardy to describe an ethical framework for a field that is evolving so quickly. The way we think about and interact with AI will inevitably change.

But no matter how AI changes, some version of the questions we ask here will remain relevant. We’re trying to understand how to utilize AI in a way that benefits our users and customers the most while reducing the risk of harm. Asking questions like these is the first step.