How to Build an Ethical AI Policy That Actually Works
Whether your company uses AI as a strategic partner, an administrative assistant, an extra body on the help desk or not at all, you should have an internal policy that governs what is considered acceptable, responsible and ethical use of this powerful technology.
Without guidance, employees will decide for themselves how and when to use it. And if your company embraces AI to assist with workflows, decision-making and customer support, you will eventually find yourself in murky ethical and legal waters if you fail to create policies that take these matters into consideration.
Before your team starts making AI-use decisions for your company in the absence of guidance, work up an ethical AI policy that addresses how the technology should and should not be used.
“When you’re dealing with sensitive data or regulations, you can’t afford to be reactive,” said Chintan Mota, a Wipro consultant who focuses on enterprise technology. “You need guardrails before you deploy your first production AI app.”
How should you go about designing, announcing and implementing these policies, though? Here are some pointers.
1. Build an Ethical AI Task Force
“A top-down approach can never work. In addition to C-suite leadership, people in teams that are closer to the ground… have to be made an integral part of the policy’s development because they are generally the first ones to spot AI misuse.”
— Piyanka Jain
President and CEO, Aryng
The first step to designing your AI policy is to build a team tasked with digging into the facts and deciding what use is right — and wrong — for your company. Is this a decision for high-level leadership? Are there cross-functional factors that need to be considered?
When ChatGPT launched, the team at the International Center for Journalists (ICFJ) was divided about how to use it. They decided that, before they could move forward, they needed to dig in, examine the ethics around AI and work up a policy that everyone could agree to and uphold.
“We convened a task force to create an AI Use Policy that establishes a standard for how ICFJ uses, adopts and engages with AI tools,” the team explained in a news release. “The task force comprised staff from various levels and departments to ensure the new policy is an effective tool for everyone in the organization.”
There may be legal considerations to weigh as you develop your policy. If so, you will need people with a legal background on the team. There may be bias, copyright and public relations issues to consider as well. If so, make sure those perspectives are represented, too.
And what about your internal culture? Do you know where it stands? People may have strong feelings for or against the use of AI. Your team should gather input so that you have a sense of how your culture will respond and can build traction among the troops.
“A top-down approach can never work,” said Piyanka Jain, president and CEO of data science and data literacy consulting company Aryng. “In addition to C-suite leadership, people in teams that are closer to the ground, like sales, customer service and operations, have to be made an integral part of the policy’s development because they are generally the first ones to spot AI misuse. They should be looked at as non-technical, ethical gatekeepers.”
“Don’t leave this to data scientists alone,” agreed Mota. “Include HR, legal, marketing, DEI teams and domain experts. Ethics isn’t just a technical challenge. It impacts companies at an organizational level.”
2. Define What Ethical Means to You
Your team should start by defining what “ethical” means to your company, to the culture of your teams and to your customers.
“The AI policy should reflect the company’s broader principles such as fairness, transparency and accountability,” said Mota. “Map those to how AI is developed, evaluated, assessed, tested and deployed.”
But don’t forget issues of customer privacy regarding the data you use in your AI application, any data it might collect and how that data might be used by the AI or by other areas of the company. Consider, too, any legal or governance issues that should guide your use of AI or the data it taps.
“Data is used to develop and fine-tune AI applications,” explained Jiaxi Zho, head of insights and analytics at Google. “How data is defined and labeled will directly influence AI systems. Policies should be translated and embedded into data governance rules and frameworks, such as principles for data sourcing, labeling, fairness audits and usage.”
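To make Zho’s point concrete, here is a minimal sketch of what “policies embedded into data governance rules” might look like as code. The dataset fields, rule names and labeling spec are illustrative assumptions, not a standard framework.

```python
# A minimal sketch of policy-as-code for data governance: each rule
# encodes one principle from the ethics policy, and a dataset must
# pass every rule before it is cleared for training or fine-tuning.
# All field names and rules here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Dataset:
    name: str
    source: str                  # documented provenance of the data
    consent_documented: bool     # was collection consent recorded?
    labeling_guideline: str      # which labeling spec was followed
    fairness_audit_passed: bool  # did a bias review sign off?

GOVERNANCE_RULES = [
    ("documented provenance",   lambda d: bool(d.source)),
    ("recorded consent",        lambda d: d.consent_documented),
    ("approved labeling spec",  lambda d: d.labeling_guideline == "labeling-spec-v2"),
    ("fairness audit sign-off", lambda d: d.fairness_audit_passed),
]

def check_dataset(dataset: Dataset) -> list[str]:
    """Return the names of any governance rules the dataset fails."""
    return [name for name, rule in GOVERNANCE_RULES if not rule(dataset)]

ds = Dataset("support-tickets-2024", source="internal CRM export",
             consent_documented=True, labeling_guideline="labeling-spec-v2",
             fairness_audit_passed=False)
print(check_dataset(ds))  # ['fairness audit sign-off']
```

Framing the rules as data rather than scattered if-statements makes it easy for non-engineers on the task force to review and amend the list.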
Consider, too, the potential for bias in your AI use. Biased decision-making by AI has been reported in healthcare, hiring and other domains. As much as possible, your policies should build in safeguards against bias, tests that detect it when it occurs and limits that keep a biased system from doing damage in the real world.
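What might such a test look like? Here is a minimal sketch of one of the simplest fairness checks, the demographic parity gap, which compares favorable-outcome rates across groups. The sample data and the 0.1 threshold are illustrative assumptions; a real audit would combine several metrics with human review.

```python
# A minimal sketch of one simple bias test: demographic parity, i.e.
# comparing the rate of favorable outcomes across groups. The data
# and the 0.1 threshold below are illustrative assumptions.
from collections import defaultdict

def demographic_parity_gap(decisions: list[tuple[str, bool]]) -> float:
    """decisions: (group label, favorable outcome?) pairs.
    Returns the gap between the highest and lowest approval rates."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        favorable[group] += outcome
    rates = [favorable[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
gap = demographic_parity_gap(decisions)
if gap > 0.1:  # policy threshold (assumed for illustration)
    print(f"Flag for review: parity gap = {gap:.2f}")
```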
3. Consider the Potential Risks
“Not all AI applications carry the same level of risk. A model used for forecasting or impact measurement has a very different risk level when compared to a model that supports customer-facing personalization.”
— Jiaxi Zho
Head of Insights and Analytics, Google
The news is full of AI systems that have made mistakes, doubled down on them and caused problems for the companies deploying them. As you think about your policies around future AI deployments, consider the potential for risk. In what ways could your deployment do harm? Are there ways to mitigate that risk?
Some things to consider here include biased decision-making, compromised data privacy and use cases that don’t align with your company’s mission or with the perception you want customers to have of your company.
“Not all AI applications carry the same level of risk,” said Zho. “A model used for forecasting or impact measurement has a very different risk level when compared to a model that supports customer-facing personalization.”
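One way to act on that distinction is to tier use cases by risk so that review requirements scale with potential harm. The tiers and criteria in this sketch are assumptions to adapt to your own policy, not a formal framework.

```python
# A minimal sketch of tiering AI use cases by risk so that review
# requirements scale with potential harm. The tiers and the criteria
# used to classify are illustrative assumptions.
from enum import Enum

class RiskTier(Enum):
    LOW = "low"        # internal, no personal data, human reviews output
    MEDIUM = "medium"  # internal decisions touching personal data
    HIGH = "high"      # customer-facing or consequential decisions

def classify(customer_facing: bool, uses_personal_data: bool,
             human_in_the_loop: bool) -> RiskTier:
    if customer_facing or (uses_personal_data and not human_in_the_loop):
        return RiskTier.HIGH
    if uses_personal_data:
        return RiskTier.MEDIUM
    return RiskTier.LOW

# Zho's two examples: internal forecasting vs. customer-facing personalization.
print(classify(customer_facing=False, uses_personal_data=False,
               human_in_the_loop=True))   # RiskTier.LOW
print(classify(customer_facing=True, uses_personal_data=True,
               human_in_the_loop=False))  # RiskTier.HIGH
```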
Considering risk from every angle is one reason you want a diverse team to help create your AI policies. Having representatives from technical departments, human resources, communications, legal and customer service will help guard against blind spots.
4. Announce Your Policies in Actionable Ways
Creating a smart AI ethics policy and then storing it in a Google doc or a filing cabinet will do little to ensure that your policy is followed by everyone on the team. “The policy won’t work if it’s just a PDF in a SharePoint folder,” said Mota.
Building the policy from the ground up, with a team that represents all aspects of the company and perhaps even customers and vendors, is a great start. Everyone who had a hand in creating that policy will be aware of it and familiar with its particulars. But you will also have to teach everyone else that the policy exists, how and when to use it and what that means in everyday workflows. Education is key here.
Alex Smith, manager and co-owner of Render3DQuick, said that once his company completed its AI policy, training was the next step. “I set up short team trainings with examples from our actual projects. We walked through actual cases. I walked the team through where their responsibility begins and what checks they’re expected to make.”
The education and publication of your AI ethics policy should not stop with internal education, though.
“If you’re using third-party LLMs or platforms or working with third-party companies, your ethics policy should apply to them too,” said Mota. “Insist on transparency in training data, usage restrictions and governance.”
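In code, extending the policy to third parties can mean gating every outbound call behind your own checks. The sketch below assumes a hypothetical vendor allowlist, forbidden-data patterns and a send_to_llm client; none of these are a real API.

```python
# A minimal sketch of enforcing the ethics policy on third-party LLM
# calls: only approved vendors, plus an outbound scan for data the
# policy forbids sharing. The vendor names, patterns and send_to_llm
# hook are hypothetical placeholders.
import re

APPROVED_VENDORS = {"vendor-a", "vendor-b"}  # vetted for transparency and governance
FORBIDDEN_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),    # SSN-like numbers
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),  # email addresses
]

def guarded_llm_call(vendor: str, prompt: str) -> str:
    if vendor not in APPROVED_VENDORS:
        raise PermissionError(f"{vendor} has not passed vendor review")
    for pattern in FORBIDDEN_PATTERNS:
        if pattern.search(prompt):
            raise ValueError("prompt contains data the policy forbids sharing")
    return send_to_llm(vendor, prompt)  # hypothetical client call

def send_to_llm(vendor: str, prompt: str) -> str:
    return f"[{vendor}] response to: {prompt}"  # stub for illustration

print(guarded_llm_call("vendor-a", "Summarize our Q3 churn drivers."))
```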
Your customers should know if, when and how you use AI, as well. How does AI affect the data you collect about them? Are their questions being answered by AI? Can they trust those answers? Being transparent about these issues will help build customer trust.
Launching Your Ethical AI Policy
Creating policies to govern something as transformative as AI is a big project. It might seem as if you should take your time, gather more information, be sure you are safe and move cautiously toward implementing your ethical AI policies. That is no longer the world we live in, though.
“Don’t wait till everything is perfect,” said Mota. “The AI space is moving fast. Your policies can and should evolve with it. Start simple with the least risky options and iterate fast.”
But while your policy might be a living, growing thing, the way you deliver it should not feel hurried. Be mindful of your company culture and of how your policies affect customers and vendors. “When rolling out policies, do it with the same care you’d give to a product launch,” said Rajeev Kapur, author of "AI Made Simple." “Explain why it matters, how it works and what it protects.”
Think of this as an educational mission rather than an announcement.
Create, too, ways for employees and external stakeholders to report any ethical concerns they have about your use of AI, or about AI in general, and to flag instances where they see your ethics policy being ignored. Offering anonymous reporting will encourage people to participate and will help you improve your policy while guiding your education efforts.
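On the implementation side, the reporting channel can start as something as simple as a structured intake record with identity optional, as in this hypothetical sketch (all field names are assumptions).

```python
# A minimal sketch of an ethics-concern intake record with an
# anonymous option; field names and defaults are assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class EthicsReport:
    description: str                      # what the reporter observed
    policy_section: Optional[str] = None  # which rule was ignored, if known
    reporter: Optional[str] = None        # None means anonymous
    submitted_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

report = EthicsReport(
    "Chatbot answered a medical question it should have escalated.")
print("anonymous" if report.reporter is None else report.reporter)  # anonymous
```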