Fundamentals of a Responsible AI Framework
Adopting artificial intelligence goes beyond a technology implementation. Companies have to consider the impact the technology will have on their employees, customers and other stakeholders, and how their AI strategies align with ethical and compliance standards.
“Companies should consider responsible AI as a business imperative, not a nice-to-have,” said Diya Wynn, responsible AI lead at Amazon Web Services.
Responsible AI refers to “the practice of designing, building and deploying AI safely, securely and inclusively to mitigate unintended consequences or risks and maximize benefit,” Wynn said, and a responsible AI strategy helps companies build trust and support their business objectives.
Why Responsible AI Is Important
While AI offers benefits across a business, the technology doesn’t naturally have moral or ethical capabilities like human workers do, said Robert Howell, professor and chair of philosophy at Rice University, who researches technology ethics.
“It’s up to the companies and the people who use AI to make sure that it lives up to the standards that they would hold themselves and their employees to,” Howell said.
Responsible AI is “top of mind” for organizations as they deploy AI, particularly agentic AI, according to EY’s recent AI Pulse Survey.
Responsible AI is “both a competitive differentiator and a value creator for companies,” said Merve Hickok, founder of AIethicist.org. “Responsible AI means building safer, more secure and better performing systems.”
It helps organizations establish trust with their customers and employees by demonstrating that their interests are valued, Hickok said.
Ultimately, responsible AI means AI systems “behave in ways that people can understand, trust and hold accountable,” said Rhea Saxena, technical and product lead at the nonprofit Responsible AI Institute. It protects companies from risk, compliance failure, reputational harm and missed opportunities for innovation.
Responsible AI Challenges
Companies sometimes struggle with responsible AI, Wynn said. Challenges include knowing where to start, doubting whether they have the internal skills to implement and adopt the practices, and viewing responsible AI as an obstacle to innovation.
“Addressing the first two struggles, start with education and training,” Wynn said, adding that responsibility for AI should extend to everyone within an organization.
Organizations also need to reframe the responsible AI narrative, including that “responsible AI and innovation are not at odds,” Wynn explained. In reality, she said, responsible AI helps innovation, but it may require initial investment.
Too often, companies take a reactive approach, where they don’t focus on AI ethics or responsibility until an issue arises, rather than during development or use of the technology, Saxena said.
Another challenge is that organizations may be using advanced AI but have “immature governance practices,” which can create operational risk, Saxena said.
Building a Responsible AI Framework
“There’s no single silver bullet when it comes to responsible AI frameworks,” Saxena said. But here are some tips for developing a strategy:
Lead With Your Values
Before diving into an AI program, Howell suggested, organizations should first identify the company values they want to preserve and ensure that their responsible AI framework aligns with those values.
Also, think through how AI will affect any humans in the loop at any stage, Howell added.
Establish clear roles and responsibilities across the organization with designated responsible AI leaders in each business unit, and create responsible AI frameworks with regulatory demands in mind, Wynn said. AWS releases AI Service Cards to provide its customers with a single resource covering intended use cases, responsible AI design choices and best practices for performance, she said.
Embed It Into Everything
Responsible AI shouldn’t be “a tick-the-box exercise, only serving to drive compliance,” Wynn said. Instead, it must be embedded into organization-wide operations.
“The goal is to create an environment where responsible AI practices are tightly aligned to organizational structure and incentives, not from external pressures or compliance requirements,” Wynn said.
There’s no one standard for responsible AI, Saxena added. A framework should be built to support responsible and sustainable innovation across a company’s AI lifecycle. The Responsible AI Institute offers several resources, including frameworks and standards, to help organizations build their own policies.
Get Data-Ready
According to the EY survey, 70% of senior leaders said their company’s inability to be data-ready would hinder its adoption of agentic AI, and 20% said a lack of data readiness is a barrier to adopting AI.
Companies must invest in best practices for how they approach data, Hickok said. “This means collecting high-quality data specific to the problem at hand, and not trying to mold the data at hand to the problem.”
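To make that concrete, here is a minimal sketch of the kind of pre-flight check a data team might run before a dataset feeds an AI system. The file name, thresholds and pandas-based approach are illustrative assumptions, not a prescribed standard.

```python
import pandas as pd

def data_readiness_report(df: pd.DataFrame, max_missing_pct: float = 5.0) -> dict:
    """Summarize basic quality signals before a dataset feeds an AI system."""
    missing_pct = df.isna().mean() * 100        # percent of missing values per column
    flagged = missing_pct[missing_pct > max_missing_pct]
    return {
        "rows": len(df),
        "duplicate_rows": int(df.duplicated().sum()),   # exact duplicate records
        "columns_over_missing_threshold": flagged.round(1).to_dict(),
    }

# Hypothetical dataset; swap in the data collected for the problem at hand.
df = pd.read_csv("training_data.csv")
print(data_readiness_report(df))
```

A report like this is a starting point, not a substitute for asking whether the data was collected for the problem at hand in the first place.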
Focus on Impact
AI systems must embed “privacy-, security-, trustworthy-by-design elements” and incorporate consumer interest into the design from the beginning, Hickok said.
“Responsible AI is not a sprint; it is a marathon,” Hickok said. “It does not end with the design of the AI system. It includes the lifecycle of the system and its associated impact.”
A company’s framework should also incorporate how an AI strategy affects the employees using the technology and the consumers on the receiving end of it, Howell added.
Train Teams
AI literacy — including AI’s capabilities, possibilities and limitations — across an organization is critical, Hickok said.
Among senior leaders, 59% said they’ve increased responsible AI training over the past year, and 64% said they will increase it in the coming year, according to the EY report.
Everyone within a company should receive training on the company’s commitment to responsible AI and how to apply it in their roles, Wynn said. Training should include how AI is being used, the company’s responsible AI strategy, data governance and everyone’s shared responsibility.
Incorporate ongoing, role-specific training, too, Saxena said. For instance, engineers need guidance on implementing bias detection, product teams should understand ethics in design, and procurement teams need support evaluating third-party AI tools to ensure they meet the company’s governance standards. And leaders should recognize the value of responsible AI beyond compliance.
Training should also cover bias detection, fairness testing, impact assessments and transparency reporting, Wynn said.
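As one illustration of what fairness testing can look like in practice, the sketch below computes a disparate impact ratio: the favorable-outcome rate for one group divided by the rate for a reference group. The four-fifths (0.8) cutoff echoes a rule of thumb from US employment guidance; the right metric and threshold depend on the use case, and all names and data here are hypothetical.

```python
import numpy as np

def disparate_impact(preds: np.ndarray, group: np.ndarray,
                     protected: str, reference: str) -> float:
    """Ratio of favorable-outcome rates between two groups (1.0 = parity)."""
    rate_protected = preds[group == protected].mean()
    rate_reference = preds[group == reference].mean()
    return float(rate_protected / rate_reference)

# Hypothetical predictions: 1 = favorable outcome, with group labels "a" and "b".
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

ratio = disparate_impact(preds, group, protected="b", reference="a")
print(f"Disparate impact ratio: {ratio:.2f}")   # 0.33 here, well below the 0.8 rule of thumb
```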
“It is critical for all employees to understand what the AI system is supposed to do, so that they can all monitor for unexpected results at the front lines,” Hickok said.
Monitor AI Systems
Responsible AI frameworks should be “more than static principles” — they should include actionable tools and feedback loops, Saxena said. Companies should establish documentation protocols, regular impact assessments and plans for humans to handle high-risk cases, she recommended.
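One way to operationalize the human-handoff piece is a simple routing rule in the serving path: predictions that are high stakes or low confidence go to a reviewer instead of being applied automatically. This is a minimal sketch with hypothetical names and an illustrative threshold, not a prescribed design.

```python
REVIEW_THRESHOLD = 0.85   # illustrative cutoff; tune per use case and risk appetite

def handle_prediction(label: str, confidence: float, high_stakes: bool) -> str:
    """Apply a model decision automatically, or escalate it to a person."""
    if high_stakes or confidence < REVIEW_THRESHOLD:
        return f"escalate to human review (label={label}, confidence={confidence:.2f})"
    return f"auto-apply {label}"

print(handle_prediction("approve", 0.97, high_stakes=False))  # applied automatically
print(handle_prediction("deny", 0.91, high_stakes=True))      # routed to a reviewer
```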
“Conduct impact assessments for all AI systems, evaluating potential risks across fairness, privacy, security and societal impact and appropriate mitigations based on risk level,” Wynn said.
AI is a “probabilistic and brittle technology,” Hickok said. “Even with the best of intentions, it can create unintended results, some of them harmful.”
Ensuring that AI systems perform as intended requires ongoing monitoring for performance and emerging risk, Hickok said. “Responsible AI practices and implementation will mature alongside the maturity of the organization's own know-how and experience.”
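In code, ongoing monitoring often starts with a drift check comparing production behavior against a baseline. The sketch below uses the population stability index (PSI) on model scores; the distributions, bin count and the common rule-of-thumb reading (under 0.1 stable, over 0.25 worth investigating) are illustrative assumptions rather than any of the experts' specific recommendations.

```python
import numpy as np

def psi(baseline: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """Population stability index between baseline and current score distributions."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf                   # catch out-of-range scores
    b = np.histogram(baseline, bins=edges)[0] / len(baseline)
    c = np.histogram(current, bins=edges)[0] / len(current)
    b, c = np.clip(b, 1e-6, None), np.clip(c, 1e-6, None)   # avoid log(0)
    return float(np.sum((c - b) * np.log(c / b)))

# Hypothetical scores: a baseline captured at deployment, current scores from production.
rng = np.random.default_rng(0)
baseline_scores = rng.normal(0.50, 0.10, 10_000)
current_scores = rng.normal(0.58, 0.12, 10_000)
print(f"PSI: {psi(baseline_scores, current_scores):.3f}")   # > 0.25 suggests drift
```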
Editor's Note: For more coverage of AI ethics and AI responsibility:
- Who's Responsible for Responsible AI? — Responsible AI frameworks will inevitably evolve, but you can start building the foundation today that sets your business — and employees — up for success.
- HR Leaders: With Great AI Power Comes Great Responsibility — Global HR analyst Josh Bersin has good news for the HR profession—but it’s news that comes with a warning about ethics.
- AWS's Diya Wynn: Embed Responsible AI Into How We Work — As senior practice manager for responsible AI at AWS, Diya Wynn believes responsible AI should become part of the fabric of every organization working with AI.