AI Culture Practices

The norms, behaviors, rituals, and incentives that help people adapt and thrive in the era of AI

1. Emotional Safety and Agency

Mobilize the AI-ready and reassure the AI-anxious

Different people feel different levels of comfort with AI. While some employees are excited about its possibilities, many are anxious, with worries ranging from “Can I trust this model to be accurate?” to “Will our clients be receptive?” and “What will happen to my job?” Any plan to engage teams around AI should consider both ends of the spectrum and tailor interactions accordingly. People who are ready and excited about AI can be enlisted as AI champions. For those who fear AI, turning that fear into curiosity requires creating the space to address their concerns and inviting them to help envision use cases for AI.

From Leaders in the Field

Both anxiety and blind excitement should be replaced with understanding.
“You have to replace either trepidation or uninformed excitement with some base level of understanding of the technology. The C-suite needs to take real time in their calendars—not 30-minute increments but two hours—to dive deep cross-functionally with experts. This has to be a safe space where they can ask stupid questions and gain their own intuition into the technology and its impacts—both positive and negative.”

— Chief Technology Officer,
Fortune 500 hospitality company

Leaning into the human, emotional aspect of AI is essential.
“I believe that we operate on two levels at once: We strive for efficiency, while also being driven by emotions. This duality is something we should acknowledge all the time. Personally, I spend a third of my time with business partners, focusing on building relationships, supporting them, and understanding their needs. This collaborative, human aspect of work can't be overlooked.”

— Chief Data & Analytics Officer,
professional sports league organization

2. Shared Language

Create shared AI knowledge and vocabulary

When a topic is as ubiquitous and hyped as AI, there is strong pressure for employees to be conversant with it. Yet the absence of universal terms for AI creates confusion, rounds of translation, and misaligned efforts. Creating a shared language around AI is one of the most important practices for ensuring that experts from legal, tech, sales, and other disciplines understand each other. The clarity that emerges from this shared language not only helps the organization work better and faster but also creates a shared identity that can reinforce organizational culture.

From Leaders in the Field

Even technical terms are open to interpretation and should be replaced with descriptive language.
“We used to talk about explainability, which then blurred in meaning with interpretability. But now these terms have lost their clarity. So our company has adopted the term understandability, a word with a clear general definition and no technical baggage. It helps us communicate more effectively.”

— Chief Technology Officer and Co-Founder,
open source AI development company

“Translators” are required to bridge all the languages needed for AI.
“People who speak the language of policy and governance need to work closely with those who focus on technology architecture. Bridging the language gap between them remains a challenge for many companies. They need what I call ‘translators’ who can interpret between the technical and legal aspects of the business.”

— Chief Counsel,
Fortune 500 retail company

3. AI Principles and Guidelines

Define how AI aligns with corporate values and business goals

Organizations need specific principles and guidelines to help people understand which uses of AI are valuable and which are off-limits. By reducing uncertainty, these boundaries provide fertile ground for experimentation and innovation. But for AI principles and guidelines to actually shape employee behavior, they must be tailored to each organization: they should not only follow legal and ethical rules but also translate the organization’s own corporate values, incorporate its business goals, and reflect what uniquely motivates its teams.

From Leaders in the Field

Principles need to be specific in order to be helpful.
“The most successful companies have a very specific set of AI principles. We’re not talking about generic principles such as privacy, security, etc. The specificity is necessary so that employees can understand how to make nuanced decisions that are in line with their own company values.”

— Founder,
AI ethics advisory company

Clear guidelines help practitioners better balance risk and innovation.
“Once AI principles are defined, they can inform specific guidelines and playbooks that help teammates know how to balance risk and innovation in real situations.”

— Chief Product Officer,
fitness tech company

4. Diverse Collaboration

Actively collaborate with diverse stakeholders

To identify multiple sources of value and surface unintended consequences, the design and deployment of AI models require a variety of perspectives. The process should include not only experts from different disciplines but also the people who will be positively or negatively affected by the models’ deployment. This extra collaboration, however, requires that the “how” of collaborating be clear: goals known, decision rights identified, and nothing lost in translation. New, well-structured rituals of collaboration are needed to maintain momentum.

From Leaders in the Field

Unintended consequences must be collectively explored.
“Even before we started building something, we created a safe space to get in a room with other functions, including legal, to write down all of the ways in which things could go wrong.”

— VP, Product Design & Responsible Innovation,
Fortune 500 technology company

The voice of the beneficiary must be brought in throughout the entire process.
“It is of utmost importance that we have 10 licensed clinicians, who represent our beneficiaries, approve a product before it is released. When we have tough clinical decisions to make, each clinician gets veto power.”

— President and Co-Founder,
healthcare technology company

5. Scientific Rigor

Embrace scientific rigor

AI models differ from previous information technologies in that they learn and change all the time, which makes their outputs unpredictable and the potential risks far higher. This context challenges the mindset of “move fast and break things.” Organizations that embrace a scientific mindset and rigorous practices operate in a much safer and clearer space. They document their assumptions, challenge the model, run experiments, share results transparently, and allow independent audits of their approach.

From Leaders in the Field

Documenting certainties and uncertainties is a requirement for navigating AI effectively.
“It cannot be overstated how important it is with AI to state your assumptions up front, to document them, and to understand what the uncertainties are. This is just good science. Without this detail and rigor, you run the risk of creating issues down the line.”

— Chief Technology Officer and Co-Founder,
open source AI development company

Planning for AI audits early is necessary to understand risks.
“If AI is being used to make decisions across the company, you need to understand all the potential harms, and to have the right processes in place to mitigate them. That includes being auditable. If someone was about to start using AI, I'd say bring in an AI auditor now so you know what you'll need later.”

— Chief AI Ethics Advisor,
AI development company

6. Accelerated Learning

Support continuous learning and feedback loops

Organizations can’t understand the evolving value and risks of AI models without sustained human oversight. More frequent human feedback not only helps strengthen models, it also ensures the models’ alignment with the goals of the people using them. A learning mindset needs to be encouraged and rewarded from an innovation standpoint, too: the realm of AI is changing so rapidly that business leaders must keep learning about new models, how those models affect their use cases, and the implications of regulation and civil-society scrutiny.

From Leaders in the Field

Human supervision of AI not only mitigates risk, it also improves the quality and humanity of AI's outputs.
“Having people supervise the work performed by AI is critical: They can help manage risk by evaluating whether AI decisions make sense within a human context. And over time, these reviewers improve the quality and humanity of the AI’s results.”

— Chief Product Officer,
fitness tech company

An emergent field such as AI requires practitioners to both keep learning about it and contribute to it.
“It's important for my teams to keep learning because we need to leverage best-in-class modeling techniques. So I encourage our data scientists to join the research community—to contribute to international competitions, to get published, to be thought leaders.”

— Senior Director,
Fortune 500 technology company