I wrote about this case, which happened to my wife, a couple of weeks ago. Since then I have conducted more research on it. It turns out that using Myers-Briggs and other personality assessments as part of the hiring process is not illegal, but it is generally considered unethical. Below is my ethical analysis, with particular attention to the AI aspects involved.
Case Background
In March 2025, a Hungarian-born woman with extensive international hospitality experience applied for a hostess position at Seasons 52, a Darden Restaurants chain in the United States. Shortly after submitting her online application, she received an automated text message inviting her to complete “the next step” in the hiring process—a personality assessment conducted entirely via a mobile-accessed AI interface.
The assessment, which claimed to generate Myers-Briggs-type profiles, combined visual stimuli with ambiguous text prompts. The applicant, whose first language is not English, found the textual content confusing and poorly aligned with the images. Ten minutes after completing the test, she received another autogenerated text message: “You are no longer being considered for the position.”
At no point did she engage with a human interviewer. No feedback was provided. Her only communication was with an automated AI-powered evaluation system that offered no opportunity for clarification, appeal, or human interaction.
Ethical Analysis
1. Deontological Ethics (Kantian)
According to Immanuel Kant’s second categorical imperative, individuals must always be treated as ends in themselves and never merely as means to an end. In this case, the applicant was reduced to a data point, her personhood abstracted away through automation. Her autonomy and dignity were ignored in favor of hiring efficiency.
The AI system cannot comprehend or evaluate her as a moral agent. By outsourcing the moral decision to an unintelligent system, Darden Restaurants abdicated its moral duty to assess the applicant’s qualifications respectfully and personally.
2. Utilitarianism
From a consequentialist standpoint, employers might argue that AI systems reduce hiring costs and improve scalability. However, the harm inflicted upon rejected applicants—especially those unfairly screened out due to cultural or linguistic misalignment—is significant and systematically unmeasured. Recent empirical research shows that personality-based hiring tools offer low predictive validity and high potential for adverse impact, especially among non-native speakers or neurodiverse populations. Thus, the utilitarian calculus fails: the aggregate harm to applicants’ well-being and the erosion of trust in the hiring process likely outweigh any efficiency gains.
3. Rawlsian Justice
According to Rawls’ “justice as fairness,” ethical rules must be chosen from behind a “veil of ignorance,” meaning one must design rules without knowing whether one will benefit from or be harmed by them. An ethical hiring policy, then, must protect non-native speakers, international applicants, and those unfamiliar with the cultural idioms of AI testing. A Rawlsian approach would demand procedural safeguards—such as human follow-up or the right to appeal—none of which were present in this case.
4. Libertarian Ethics
Unlike traditional criticisms of libertarianism as amoral or indifferent, the revised moral libertarianism articulated in Critical Moral Reasoning emphasizes:
- The inviolability of individual autonomy,
- The right to voluntary and informed participation,
- And the moral responsibility of employers to exercise judgment, not abdicate it to machines.
From this perspective:
- The use of AI violated the applicant’s ability to engage freely and knowingly in a hiring relationship.
- The lack of transparency and appeal mechanisms violated the ethical ideal of a free market governed by moral agents, not algorithms.
- A libertarian employer must take full responsibility for the moral consequences of hiring systems, even if outsourced.
Thus, while Darden Restaurants was within its legal rights, it failed ethically under libertarian principles of voluntary, informed, and accountable human engagement.
5. Empirical and Technological Critique
Myers-Briggs testing, particularly in its AI-enhanced versions, has been widely discredited for use in employment settings. The U.S. Equal Employment Opportunity Commission (EEOC) warns that automated tools may lead to disparate-impact discrimination if they disproportionately reject candidates based on language proficiency, cultural interpretation, or disability. The EEOC further advises that candidates be made aware when algorithmic systems (automated or not) are used during the hiring process. Darden Restaurants provided no such notification, no mechanism for opting out of the automated process, and no method of appeal.
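The standard screen for disparate impact in U.S. employment practice is the EEOC’s “four-fifths” rule of thumb: a selection procedure is flagged when any group’s selection rate falls below 80% of the highest group’s rate. A minimal sketch of that check, using entirely hypothetical applicant numbers (not data from this case):

```python
# Illustrative four-fifths (80%) rule check for adverse impact.
# A group is flagged when its selection rate is less than 80% of the
# rate of the most-selected group. All counts below are hypothetical.

def selection_rate(hired: int, applicants: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return hired / applicants

def four_fifths_check(rates: dict) -> dict:
    """Return {group: rate-ratio} for groups below 80% of the top rate."""
    top = max(rates.values())
    return {g: r / top for g, r in rates.items() if r / top < 0.8}

# Hypothetical outcome of an automated personality screen
rates = {
    "native speakers": selection_rate(hired=60, applicants=100),      # 0.60
    "non-native speakers": selection_rate(hired=25, applicants=100),  # 0.25
}

flagged = four_fifths_check(rates)
# non-native rate is 0.25 / 0.60 ≈ 0.42 of the top rate, well under 0.8,
# so the procedure would be flagged for possible adverse impact.
```

The point of the sketch is that adverse impact is measurable after the fact; a vendor or employer that never runs such a check cannot claim its automated screen is neutral.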
Conclusion
This case illustrates a critical failure of moral reasoning in the adoption of AI hiring tools. By relying exclusively on algorithmic assessments (especially those using discredited typologies like Myers-Briggs) employers risk perpetuating discrimination, violating moral duties, and undermining justice. Automation cannot be ethically neutral if it encodes assumptions that disproportionately harm vulnerable or diverse applicants.
A truly ethical AI hiring process must be transparent, accountable, and complemented by, not replaced by, human judgment.
Citations
Binns et al. (2018). “‘It’s reducing a human being to a percentage’: Perceptions of justice in algorithmic decisions.”
Brown and Raun (2024). “DOL’s guidance on use of AI in hiring and employment.”
Costanza-Chock et al. (2023). “Who audits the auditors? Recommendations from a field scan of the algorithmic auditing ecosystem.”
Raji et al. (2020). “Closing the AI accountability gap: Defining an end-to-end framework for internal algorithmic auditing.”