The emergence of advanced technologies, especially artificial intelligence, has dramatically expanded the ethical landscape, revealing clear limitations in traditional ethics instruction. Many classical ethical frameworks (and the way they are taught) did not anticipate issues like autonomous algorithms making life-and-death decisions, social media amplifying misinformation, or mass surveillance enabled by digital tools. As technology outpaces regulation and societal norms, there is an urgent need for ethics education to catch up (NIST, 2023; Autio et al., 2024; UNESCO, 2023).
One way technology exposes the limits of old models is by presenting novel dilemmas that do not fit neatly into the classic case studies or moral theories emphasized in traditional curricula. For example, consider AI-driven decision-making: algorithms now assist or replace human judgment in areas ranging from loan approvals to criminal sentencing. These uses raise questions about accountability (who is responsible for an AI’s decision?), bias (how to prevent algorithms from perpetuating discrimination present in training data), and transparency (can the reasoning of a complex AI system be understood or trusted?). Such questions differ from the archetypal moral dilemmas (like the trolley problem or the abortion debate) that have historically dominated ethics classrooms. Students trained only on the latter may struggle to apply their knowledge to evaluate the ethics of a biased hiring algorithm or a deepfake video. Indeed, even the vocabulary and concepts needed (e.g., data bias, privacy, digital rights) might never be discussed in a traditional ethics course focused on, say, Aristotle and Kant. (Consumer Financial Protection Bureau, 2022; Holcombe, 2025; Wisconsin Supreme Court, 2016; U.S. Department of Justice, 2022; U.S. Equal Employment Opportunity Commission, 2022).
Digital technology also blurs the lines between personal and public ethics in new ways. Issues such as digital privacy and cybersecurity involve individual rights, corporate practices, and public policy all at once. The ethical use of personal data, the balance between security and privacy, and the impact of social media on truth and democracy are all timely problems that demand ethical reasoning. Yet these topics have only recently begun entering academic curricula, often in specialized “tech ethics” electives rather than mainstream ethics courses. (Vosoughi et al., 2018).
Holcombe’s applied approach implicitly addresses this by incorporating current technology-related cases in his teaching material. Notably, Critical Moral Reasoning: An Applied Empirical Ethics Approach includes case studies like “AI Deepfakes” and “AI and Job Applicant Rejection”, as well as scenarios on cyberbullying and revenge porn, which deal with online harassment and privacy (Holcombe, 2025). By doing so, he acknowledges that areas like AI ethics and digital ethics must be part of any relevant moral education today. Traditional ethics courses that ignore these rapidly evolving issues risk leaving students ill-equipped to confront many ethical decisions they will face in modern workplaces and civic life. (Cybersecurity and Infrastructure Security Agency, 2024; Federal Trade Commission, 2024; U.S. Department of Justice, 2022; U.S. Equal Employment Opportunity Commission, 2022).
Moreover, emerging technologies challenge the adequacy of traditional ethical models themselves. For instance, can a rule-based Kantian approach easily resolve questions about autonomous vehicles’ programming (the “self-driving car dilemma” of choosing between two harmful outcomes)? Are utilitarian cost-benefit calculations sufficient for handling long-term, systemic risks like those posed by climate change or AI-run economies? Some scholars argue that new hybrid or context-sensitive frameworks are needed – or at least that traditional theories need reinterpretation – to address such problems. This debate notwithstanding, what’s clear is that teaching only the classic paradigms without update or commentary leaves a gap. Students may not learn how to adapt principles to unprecedented scenarios or to consider the ethical implications of technologies that didn’t exist when those principles were formulated. (Bonnefon et al., 2016; NIST, 2023).
In addition, technology is reshaping how ethics is taught and practiced. The presence of AI tools (like generative AI, including large language models) in the educational environment raises ethical questions about academic integrity, but it also offers opportunities for teaching ethics (e.g., using AI to simulate ethical dilemmas or to analyze arguments). Traditional ethics education has yet to grapple fully with these meta-level issues, such as the ethics of using AI in the classroom or the use of AI as part of ethics pedagogy. This meta aspect goes beyond the content of ethics (the “ethics of AI”) to the practice of ethics education itself, highlighting another frontier where innovation is required. (UNESCO, 2023; Washington Post, 2023).
In summary, emerging technologies spotlight the shortcomings of an ethics curriculum stuck in the past. They introduce new categories of moral problems and complicate old ones, requiring an updated toolkit of ethical concepts and a more interdisciplinary understanding. Traditional models of ethical instruction – if unchanged – risk producing graduates who are versed in theoretical debates of the 19th and 20th centuries but unprepared for the ethical decisions of the 21st. The rapid pace of technological change makes continuous curriculum development not just an academic ideal but a practical necessity. (NIST, 2023; Autio et al., 2024).
Holcombe’s approach provides a practical framework to address these challenges by pairing formal argument discipline with empirically grounded insights about how people actually form moral judgments. The “clear formula” for a logically valid moral argument can be taught as an explicit sequence: state a normative principle, present the case facts, derive a conclusion by valid inference, and then stress-test the argument with counterexamples, defeaters, and alternative principles. This structure keeps claims, evidence, and warrants distinct, which is necessary for diagnosing equivocation, hidden assumptions, and circular reasoning in contemporary tech ethics debates.
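To make that sequence concrete, the following is a brief illustrative sketch in standard argument form; the case and the wording of the principle are hypothetical and are not quoted from Holcombe’s text.
Principle (P1): It is wrong for an employer to use a selection procedure that systematically screens out qualified applicants on the basis of disability.
Case facts (P2): A company’s resume-screening algorithm assigns systematically lower scores to otherwise qualified applicants whose work histories show disability-related gaps.
Conclusion (C): Therefore, it is wrong for the company to use that algorithm as its selection procedure.
The inference from P1 and P2 to C is valid, so the remaining work concerns soundness: Is P1 too broad (would it also condemn legitimate, job-related assessments)? Are the facts asserted in P2 actually established? Would a rival principle, say one prioritizing aggregate hiring efficiency, yield a different conclusion, and if so, which principle better survives counterexamples and generalizes across analogous cases?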
At the same time, Holcombe integrates moral psychology to account for the fact that people often start from intuitive, automatic appraisals rather than slow, reflective reasoning. Research associated with Jonathan Haidt indicates that moral judgments commonly arise from rapid, affect-laden intuitions, with reasoning frequently serving a post hoc justificatory role. This work also proposes recurrent “moral foundations” that shape those intuitions, including care, fairness, loyalty, authority, and sanctity, which are differentially emphasized across persons, communities, and cultures (Haidt, 2012; Graham et al., 2011).
Holcombe’s synthesis treats these intuitive starting points not as fallacies to be dismissed but as data to be analyzed, so that students can surface the values in play before testing arguments for validity and soundness. The empirical literature supports giving students tools to name and examine their intuitive leanings. Cross-cultural work on moral dilemmas, for example, shows both robust regularities and meaningful variability in how people weigh intentions and physical force when evaluating harm, suggesting that intuitions have stable patterns but are also shaped by social context and learning (Bago et al., 2022).
Finally, Holcombe’s emphasis on early socialization speaks to how childhood and adolescent experiences sort intuitive reactions into a functional hierarchy of moral concerns (Kohlberg, 1981; Kohlberg, 1984). Bringing these strands together allows instructors to ask students first to identify the intuitive foundation activated by a case, second to articulate a principle that could justify that intuition, and third to test whether the principle survives counterexamples and generalizes across analogous cases. In short, the method makes moral intuitions visible and revisable within a rigorous argumentative workflow, closing the gap between how moral judgments begin and how they ought to be evaluated (Haidt, 2012; Graham et al., 2011; Bago et al., 2022; Kohlberg, 1981, 1984).
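To see how that three-step workflow might run in practice, consider a hypothetical illustration (not drawn from Holcombe’s text): confronted with a nonconsensual deepfake case, a student might first note that the reaction is driven largely by the care and sanctity foundations (a felt sense of harm and violation); second, articulate a candidate principle such as “it is wrong to use a person’s likeness in fabricated intimate or deceptive content without consent”; and third, test whether that principle survives counterexamples (political satire, consensual uses, clearly labeled parody) and generalizes to analogous cases such as voice cloning.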
Sources:
- Autio, C., Schwartz, R., Dunietz, J., Jain, S., Stanley, M., Tabassi, E., Hall, P., & Roberts, K. (2024). “Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile” (NIST Trustworthy and Responsible AI, NIST AI 600-1). National Institute of Standards and Technology, Gaithersburg, MD. https://doi.org/10.6028/NIST.AI.600-1 (Accessed October 2, 2025)
- Bago, B., Kovacs, M., Protzko, J., Nagy, T., Kekecs, Z., Pálfi, B., … Matibag, C. J. (2022). “Situational factors shape moral judgements in the trolley dilemma in Eastern, Southern and Western countries in a culturally diverse sample.” Nature Human Behaviour, 6(6), 880–895. https://doi.org/10.1038/s41562-022-01319-5
- Bonnefon, J.-F., Shariff, A., & Rahwan, I. (2016). “The social dilemma of autonomous vehicles.” Science, 352(6293), 1573–1576. https://doi.org/10.1126/science.aaf2654
- Consumer Financial Protection Bureau. (2022, May 26). Consumer Financial Protection Circular 2022-03: Adverse action notification requirements in connection with credit decisions based on complex algorithms. https://www.consumerfinance.gov/compliance/circulars/circular-2022-03-adverse-action-notification-requirements-in-connection-with-credit-decisions-based-on-complex-algorithms/
- Consumer Financial Protection Bureau. (2023, September 19). Consumer Financial Protection Circular 2023-03: Adverse action notification requirements and the proper use of the CFPB’s sample forms provided in Regulation B. https://www.consumerfinance.gov/compliance/circulars/
- Federal Trade Commission. (2024, April 4). “FTC announces impersonation rule goes into effect today” [Press release]. https://www.ftc.gov/news-events/news/press-releases/2024/04/ftc-announces-impersonation-rule-goes-effect-today
- Graham, J., Nosek, B. A., Haidt, J., Iyer, R., Koleva, S., & Ditto, P. H. (2011). “Mapping the moral domain.” Journal of Personality and Social Psychology, 101(2), 366–385. https://doi.org/10.1037/a0021847
- Haidt, J. (2012). The righteous mind: Why good people are divided by politics and religion. Pantheon Books.
- Holcombe, M. T. (2025). Critical Moral Reasoning: An Applied Empirical Ethics Approach. Self-published via Amazon KDP. ASIN: B0FQ35RSX8.
- Kohlberg, L. (1981). The philosophy of moral development: Moral stages and the idea of justice (Essays on moral development, Vol. 1). Harper & Row.
- Kohlberg, L. (1984). The psychology of moral development: The nature and validity of moral stages (Essays on moral development, Vol. 2). Harper & Row.
- National Institute of Standards and Technology. (2023, January 26). Artificial Intelligence Risk Management Framework (AI RMF 1.0) (NIST AI 100-1). https://doi.org/10.6028/NIST.AI.100-1
- U.S. Department of Homeland Security, Cybersecurity and Infrastructure Security Agency. (2024, January 30). Risk in Focus: Generative A.I. and the 2024 election cycle. https://www.cisa.gov/resources-tools/resources/risk-focus-generative-ai-and-2024-election-cycle-additional-languages
- U.S. Department of Justice, Civil Rights Division. (2022, May 12). Algorithms, artificial intelligence, and disability discrimination in hiring. https://www.ada.gov/resources/ai-guidance/
- U.S. Equal Employment Opportunity Commission. (2022, May 12). The Americans with Disabilities Act and the use of software, algorithms, and artificial intelligence to assess job applicants and employees (EEOC-NVTA-2022-2). https://www.eeoc.gov/laws/guidance/americans-disabilities-act-and-use-software-algorithms-and-artificial-intelligence
- Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146–1151.
https://doi.org/10.1126/science.aap9559
- Washington Post. (2023, June 2). Turnitin says its AI cheating detector isn’t always reliable. The Washington Post. https://www.washingtonpost.com/technology/2023/06/02/turnitin-ai-cheating-detector-accuracy/