Artificial Intelligence Demands More Than Technical Skills. It Demands Ethical Ones

Published Apr 1, 2026

As AI systems move from experimental pilots into the heart of how organizations hire, lend, diagnose, and design products, one thing is increasingly clear: knowing how to use AI is no longer enough. Leaders and teams also need to know how to question it.

Ethical training in the age of AI is a core competence for anyone whose work is shaped by algorithms.

Why AI Raises the Stakes for Ethics

AI tools now draft emails, screen resumes, prioritize customer leads, and even help make medical or financial recommendations. In a recent Harvard Online webinar, Professor Dustin Tingley described how generative AI has “transformed” data science work by automating large portions of coding and analysis, freeing humans to focus on higher-level questions about what to ask and how to interpret results.

Those questions are fundamentally ethical as much as technical:

  • What counts as a “good” outcome in this context—and for whom?
  • Which tradeoffs are we willing to make between accuracy, fairness, privacy, and efficiency?
  • When is it acceptable to let an algorithm decide, and when should a human override it?

Without training in ethical reasoning, people tend to default to whatever seems objective or neutral. But as Harvard Professor Mahzarin Banaji's decades of work on implicit bias show, there is no such thing as a purely neutral decision—human or algorithmic. In a 2024 conversation on "bias and AI" with Harvard Business School, Banaji underscored a simple warning: do not replace human thinking with the output of an algorithm we don't fully understand.

Ethical training helps professionals slow down, surface these assumptions, and make more transparent, values-informed choices about how AI is used.

Seeing the Blind Spots in Ourselves and Our Systems

In AI conversations, “bias” is often framed as a technical flaw in the data. But bias begins with human judgment: the data we choose to collect, the outcomes we prioritize, and the patterns we decide to encode.

Harvard Online's course Make Better Decisions: The Psychology of Blind Spots for Leaders and Teams starts from the premise that all human minds contain blind spots created by hidden bias. Developed and taught by Professor Banaji, the course helps learners understand bias through evidence from psychology, neuroscience, behavioral economics, and sociology, and then apply strategies to reduce its impact in practice.

Crucially for the AI era, this training explicitly addresses how bias shows up in the workplace, including in the use of AI tools. Participants learn to:

  • Recognize when and where bias is likely to enter work processes, including algorithmic systems
  • Diagnose how hidden bias can lead to inaccurate conclusions and counterproductive decisions
  • Create team cultures that encourage questioning and continuous improvement, rather than blind trust in “smart” tools

When organizations pair this kind of bias literacy with AI adoption, they are far better positioned to catch problems before they become reputational, regulatory, or equity crises.

From “Can We Build It?” to “Should We Use It This Way?”

Ethical training also equips professionals to grapple with a different class of questions: even if an AI system is accurate and efficient, is it aligned with our values?

In Tech Ethics: Critical Thinking in the Age of Apps, Algorithms, and AI, Harvard Professor Michael Sandel invites learners to wrestle with the moral implications of new technologies. Through dialogues with actor and producer Michael B. Jordan and a panel of tech professionals, the course asks participants to consider how AI and other technologies affect power, responsibility, and the future of human flourishing.

Rather than offering a checklist of “right answers,” the course focuses on:

  • Articulating and strengthening personal and organizational values
  • Investigating competing considerations in real-world tech dilemmas
  • Practicing the art of reasoning with others who see issues differently

These are precisely the skills leaders need when deciding how—and whether—to deploy AI in sensitive domains such as hiring, policing, credit scoring, or content moderation.

Building AI-Ready, Ethically Grounded Teams

AI is changing tasks faster than it is changing human judgment. As Dustin Tingley notes, generative AI is shifting what data scientists and knowledge workers do each day, but it has not replaced the need for human understanding of causality, context, and behavior.

Ethical AI isn't only about fairness and bias; it's also about what happens to the data that powers these systems. Data Privacy and Technology, another Harvard Online course, explores how personal data is collected, shared, and used across digital platforms, and what responsible data practices look like in an era of pervasive AI.

As AI becomes woven into every function, the competitive advantage will belong to organizations whose people are not only AI-enabled but ethically prepared. Technical training helps you use the tools. Ethical training helps you decide what you should do with them.