AI Culture Practices
The norms, behaviors, rituals, and incentives that help people adapt and thrive in the era of AI
Mobilize the AI-ready and reassure the AI-anxious
Different people feel different levels of comfort with AI. While some employees are excited about its possibilities, many are anxious, with worries ranging from “Can I trust this model to be accurate?” to “Will our clients be receptive?” and “What will happen to my job?” Any plan to engage teams around AI should consider both ends of the spectrum and tailor interactions accordingly. People who are ready and excited can be enlisted as AI champions. For those who fear AI, turning that fear into curiosity requires creating the space to address their concerns and enrolling them to help envision use cases.
From Leaders in the Field
— Chief Technology Officer, Fortune 500 hospitality company
— Chief Data & Analytics Officer, professional sports league organization
Create shared AI knowledge and vocabulary
When a topic is as ubiquitous and hyped as AI, employees face strong pressure to be conversant in it. Yet the absence of universal terms around AI creates confusion, rounds of translation, and misaligned efforts. Establishing a shared language around AI is one of the most important practices for ensuring that experts from legal, tech, sales, and other disciplines understand one another. The clarity that emerges from this shared language not only helps the organization work better and faster, but also creates a shared identity that can reinforce organizational culture.
From Leaders in the Field
— Chief Technology Officer and Co-Founder, open source AI development company
— Chief Counsel, Fortune 500 retail company
Define how AI aligns with corporate values and business goals
Organizations need specific principles and guidelines to help people understand which uses of AI are valuable and which are off limits. By reducing uncertainty, these boundaries provide fertile ground for experimentation and innovation. But for AI principles and guidelines to actually shape employee behavior, they must be tailored to each organization. They shouldn’t just follow legal and ethical rules; they should also express the organization’s own corporate values, incorporate its business goals, and reflect what uniquely motivates its teams.
From Leaders in the Field
— Founder, AI ethics advisory company
— Chief Product Officer, fitness tech company
Actively collaborate with diverse stakeholders
To identify multiple sources of value and surface unintended consequences, the design and deployment of AI models require a variety of perspectives. The process should include not only experts from different disciplines, but also the people who will be positively or negatively affected by the models’ deployment. This extra collaboration, however, requires that the “how” be clear: goals known, decision rights identified, and nothing lost in translation. New, well-structured rituals of collaboration are needed to maintain momentum.
From Leaders in the Field
“Unintended consequences must be collectively explored.”
— VP, Product Design & Responsible Innovation, Fortune 500 technology company
“The voice of the beneficiary must be brought in throughout the entire process.”
— President and Co-Founder, healthcare technology company
Embrace scientific rigor
AI models differ from previous information technologies in that they are learning and changing all the time, which makes their outputs unpredictable and the potential risks far higher. This context challenges the “move fast and break things” mindset. Organizations that embrace a rigorous scientific mindset and practices operate in a much safer and clearer space: they document their assumptions, challenge the model, run experiments, share results transparently, and allow independent audits of their approach.
From Leaders in the Field
— Chief Technology Officer and Co-Founder, open source AI development company
— Chief AI Ethics Advisor, AI development company
Support continuous learning and feedback loops
Organizations can’t understand the evolving value and risks of AI models without sustained human oversight. More frequent human feedback not only strengthens the models; it also keeps them aligned with the goals of the people using them. A learning mindset needs to be encouraged and rewarded from an innovation standpoint, too: the field of AI is changing so rapidly that business leaders must keep learning about new models, how those models affect their use cases, and the implications of regulation and civil-society scrutiny.
From Leaders in the Field
— Chief Product Officer, fitness tech company
— Senior Director, Fortune 500 technology company