A software engineer with twelve years of experience applied for a senior role at a Fortune 500 company. Her resume was strong: consistent career progression, relevant technical skills, solid educational credentials. The company's AI hiring platform scored her in the bottom quartile and automatically rejected her application. The reason, which no human ever reviewed, was a six-week gap in her employment history. She had taken unpaid leave for Hajj, the Islamic pilgrimage to Mecca that every Muslim who is physically and financially able is obligated to perform at least once in their lifetime.

The hiring algorithm had learned, from historical data, that employment gaps correlate with lower job performance and higher attrition. It did not know why the gap existed. It did not ask. It simply penalized the pattern. In doing so, it systematically disadvantaged every Muslim applicant who had fulfilled a core religious obligation, every Orthodox Jewish applicant who had taken extended time for religious study, and every applicant from any faith tradition that might produce a resume gap the algorithm could not contextualize.

I have been retained as a technical expert in several matters involving AI systems that produce disparate impacts on the basis of religion. These cases sit at the intersection of two rapidly evolving areas of law: algorithmic discrimination and religious liberty. The technical analysis required to prove these claims is distinct from other forms of AI bias litigation, and the legal frameworks are still developing.

Hiring Algorithms and Religious Observance

The resume gap problem is the most straightforward example, but it is far from the only way hiring AI discriminates against religious candidates. Several additional patterns emerge from technical analysis of these systems.

Schedule availability scoring. Many hiring platforms ask candidates about schedule flexibility, and algorithms score candidates higher for indicating availability on weekends, holidays, and evenings. Candidates who observe Shabbat, Sunday worship, Friday Jummah prayer, or other regular religious commitments score lower. The algorithm treats religious observance as a negative predictor of job fit, which is functionally identical to refusing to hire someone because of their religion.
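
The arithmetic behind this is simple enough to show directly. The sketch below uses hypothetical feature names and invented weights, not any vendor's actual model, but it illustrates how a scoring rule that rewards weekend and evening availability mechanically ranks an observant candidate below an otherwise identical one.

```python
# Minimal sketch: a linear scoring rule with hypothetical features and made-up weights.
# Availability features are facially neutral, but they track religious observance.

WEIGHTS = {
    "years_experience": 0.50,
    "skills_match": 0.40,
    "weekend_availability": 0.25,   # correlates with Shabbat, Sunday worship, Jummah
    "evening_availability": 0.15,
}

def score(candidate: dict) -> float:
    """Weighted sum of normalized features in [0, 1]."""
    return sum(WEIGHTS[k] * candidate.get(k, 0.0) for k in WEIGHTS)

secular = {"years_experience": 0.8, "skills_match": 0.9,
           "weekend_availability": 1.0, "evening_availability": 1.0}
observant = dict(secular, weekend_availability=0.0)  # same resume, keeps Shabbat

print(score(secular), score(observant))  # identical qualifications, lower score
```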

Cultural fit modeling. AI systems that assess "cultural fit" based on language patterns, social media activity, or video interview analysis can penalize candidates whose communication styles, vocabulary, or presentation reflect religious identity. A candidate who references church leadership experience, who wears a yarmulke in a video interview, or whose social media shows active religious participation may be scored differently by systems trained on data reflecting secular professional norms.

Location and commute analysis. Some hiring platforms incorporate commute time or residential location as factors in candidate scoring. In cities where religious communities cluster geographically, this can serve as a proxy for religious identity. An applicant living near a mosque, synagogue, or temple in a predominantly religious neighborhood may be scored differently than an applicant from a secular suburb, not because of explicit religious targeting but because of geographic correlations the algorithm has learned.

The algorithm does not need to know a candidate's religion to discriminate on the basis of religion. It only needs to penalize patterns that correlate with religious observance. The effect is the same, and under a disparate impact theory, Title VII does not require proof of discriminatory intent.
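
In my audits, the first question is whether any facially neutral input tracks observance closely enough to act as a proxy. The sketch below is illustrative only, using synthetic data and hypothetical column names, but it shows the basic measurement: the statistical association between a neutral feature and religious observance in a labeled audit sample.

```python
# Minimal proxy-audit sketch (synthetic data, hypothetical column names): measure how
# strongly a facially neutral feature tracks religious observance. A feature can act
# as a proxy even though religion never appears as a model input.

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 5_000
observant = rng.random(n) < 0.15                    # known only in the audit sample
weekend_available = np.where(observant,
                             rng.random(n) < 0.20,  # observant: rarely available weekends
                             rng.random(n) < 0.85)  # non-observant: usually available

df = pd.DataFrame({"observant": observant, "weekend_available": weekend_available})

# Phi coefficient for two binary variables (Pearson correlation on 0/1 encodings).
phi = df["observant"].astype(int).corr(df["weekend_available"].astype(int))
print(f"phi(weekend_available, observant) = {phi:.2f}")  # strongly negative -> proxy risk
```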

Insurance Algorithms and Religious Dietary Patterns

A less obvious but equally significant area of algorithmic religious discrimination involves insurance underwriting and risk scoring. Health and life insurance algorithms increasingly incorporate lifestyle and behavioral data to assess risk. These systems can flag religious dietary patterns as anomalous or high-risk in ways that produce discriminatory outcomes.

Consider a health insurance algorithm that analyzes purchasing data or dietary questionnaire responses. A person who observes Ramadan, fasting from dawn to sunset for a month each year, may trigger risk flags for irregular eating patterns. A person who keeps halal or kosher may be flagged for dietary restrictions that the algorithm associates with nutritional deficiency. A Seventh-day Adventist who follows a vegetarian diet for religious reasons may be scored differently than someone who follows the same diet for secular health reasons, even though the dietary pattern is identical.

The technical problem is that these algorithms are trained on population-level data that treats majority dietary patterns as the baseline for "normal." Any deviation from that baseline, whether motivated by religion, culture, or personal preference, is treated as a risk factor. But when the deviation is driven by religious observance, the disparate impact falls along religious lines, and that creates a viable discrimination claim.
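
A simplified illustration of that baseline logic, using synthetic numbers and a hypothetical dietary feature, shows why a predictable, religiously motivated practice registers as an extreme outlier.

```python
# Minimal sketch (synthetic numbers, hypothetical feature): an anomaly score built as a
# z-score against a population baseline flags Ramadan observance as "irregular eating"
# even though the pattern is entirely predictable.

import statistics

# Hypothetical feature: width of the daytime eating window in hours (first to last
# daytime meal), averaged over a month.
population_baseline = [11.5, 12.0, 12.5, 11.0, 13.0, 12.0, 11.5, 12.5]
ramadan_month = 0.0   # dawn-to-sunset fasting: no daytime meals recorded

mu = statistics.mean(population_baseline)
sigma = statistics.stdev(population_baseline)
z = (ramadan_month - mu) / sigma
print(f"z = {z:.1f}")  # many standard deviations below baseline -> flagged as high risk
```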

Facial Recognition and Religious Attire

Facial recognition systems have well-documented performance disparities across race and gender. Less discussed, but equally significant for litigation purposes, are the performance disparities that affect people who wear religious head coverings, face veils, or other faith-based attire.

Hijab and niqab. Facial recognition systems trained primarily on uncovered faces perform significantly worse on women wearing hijab, and often fail entirely on women wearing niqab. When these systems are deployed for identity verification at airports, government buildings, or financial institutions, the result is that Muslim women are disproportionately subjected to manual screening, secondary verification, and the delays and indignities that accompany system failures.

Turbans and head coverings. Sikh men wearing turbans, Jewish men wearing kippot, and members of various Christian denominations who wear distinctive head coverings experience elevated false rejection rates in facial recognition systems. The systems were not designed to discriminate, but they were trained on data that underrepresented these populations, and the effect is discriminatory.

Beards and religious grooming. Several faith traditions require or encourage specific grooming practices, including full beards of particular lengths. Facial recognition systems perform differently on bearded versus clean-shaven faces, and the performance gap is not uniform. Systems optimized for the demographics of their training data may perform poorly on faces with grooming patterns associated with specific religious communities.
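
The measurement underlying these claims is straightforward. The sketch below uses synthetic verification logs and hypothetical group labels, but it shows the core computation: the false rejection rate for each group and the disparity relative to the best-served group.

```python
# Minimal sketch (synthetic logs, hypothetical group labels): per-group false rejection
# rates for a face verification system, plus the disparity ratio between groups.

import pandas as pd

# Each row represents a genuine user attempting to verify their own identity.
logs = pd.DataFrame({
    "group":    ["uncovered"] * 1000 + ["hijab"] * 1000 + ["turban"] * 1000,
    "accepted": [True] * 985 + [False] * 15      # uncovered: 1.5% false rejection
              + [True] * 910 + [False] * 90      # hijab:     9.0% false rejection
              + [True] * 940 + [False] * 60,     # turban:    6.0% false rejection
})

frr = 1 - logs.groupby("group")["accepted"].mean()   # false rejection rate per group
print(frr)
print("disparity vs. uncovered:", (frr / frr["uncovered"]).round(1))
```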

The Legal Framework

Religious discrimination claims involving AI systems can be brought under several legal frameworks, and the choice of framework affects what technical evidence is required.

Title VII of the Civil Rights Act prohibits employment discrimination on the basis of religion. Under disparate impact theory, a plaintiff does not need to prove that the employer intended to discriminate. The plaintiff needs to show that a facially neutral practice, such as an AI hiring algorithm, produces a statistically significant disparate impact on members of a protected religious group. This is where technical expert testimony becomes essential: proving disparate impact requires statistical analysis of the algorithm's outputs across religious groups, which in turn requires understanding how the algorithm works and what proxies it uses.
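
Two numbers typically anchor that analysis: the selection-rate ratio (the EEOC's four-fifths guideline) and a significance test on the difference in selection rates. The sketch below uses invented applicant counts purely to illustrate the computation.

```python
# Minimal sketch (hypothetical counts): the selection-rate impact ratio and a
# two-proportion z-test on the difference in selection rates.

from math import erfc, sqrt

# Hypothetical counts: group A = religious observers, group B = all other applicants.
selected_a, total_a = 45, 300
selected_b, total_b = 210, 900

p_a, p_b = selected_a / total_a, selected_b / total_b
impact_ratio = p_a / p_b
print(f"selection rates: {p_a:.2%} vs {p_b:.2%}, impact ratio = {impact_ratio:.2f}")
# An impact ratio below 0.80 falls outside the EEOC four-fifths guideline.

# Two-proportion z-test (pooled), two-sided p-value.
p_pool = (selected_a + selected_b) / (total_a + total_b)
se = sqrt(p_pool * (1 - p_pool) * (1 / total_a + 1 / total_b))
z = (p_a - p_b) / se
p_value = erfc(abs(z) / sqrt(2))
print(f"z = {z:.2f}, p = {p_value:.4f}")
```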

The Religious Freedom Restoration Act (RFRA) and state-level religious liberty statutes provide additional frameworks for challenging government use of AI systems that substantially burden religious exercise. If a federal agency uses facial recognition that systematically fails on religious attire, or if a government benefits algorithm penalizes religious dietary practices, RFRA or its state analogues provide a cause of action that does not require proof of discriminatory intent.

State consumer protection laws may apply when insurance algorithms or consumer-facing AI systems produce religious discrimination. Several states have enacted AI-specific anti-discrimination provisions that explicitly include religion as a protected category.

The Expert Witness Role

In these cases, my analysis typically involves three components. First, I conduct a technical audit of the algorithm to identify which features and proxies correlate with religious identity or observance. Second, I perform statistical analysis of the algorithm's outputs to demonstrate disparate impact across religious groups. Third, I assess whether technically feasible alternatives exist that would achieve the same legitimate business objective with less discriminatory impact. This third component is critical because, under Title VII, the employer can defend against a disparate impact claim by showing that the challenged practice is job related and consistent with business necessity, but the plaintiff can still prevail by demonstrating that a less discriminatory alternative was available.
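
For the third component, the comparison can often be summarized in a small table of candidate models. The sketch below uses hypothetical model names and invented evaluation numbers, but it captures the structure of the analysis: an alternative qualifies if it serves the same business objective at comparable accuracy while producing a materially better impact ratio.

```python
# Minimal sketch (hypothetical model names, invented numbers): comparing the challenged
# model against feasible alternatives on accuracy and disparate impact.

candidates = {
    # model name: (validation accuracy, selection-rate impact ratio)
    "challenged_model":    (0.81, 0.64),
    "drop_proxy_features": (0.80, 0.88),
    "reweighted_training": (0.79, 0.93),
}

baseline_acc, baseline_ratio = candidates["challenged_model"]
for name, (acc, ratio) in candidates.items():
    comparable = acc >= baseline_acc - 0.02        # tolerance for "same business objective"
    less_discriminatory = ratio > baseline_ratio
    print(f"{name:22s} acc={acc:.2f} impact_ratio={ratio:.2f} "
          f"viable_alternative={comparable and less_discriminatory}")
```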

Religious discrimination by algorithm is a category of harm that is growing rapidly but receiving relatively little attention compared to racial and gender bias in AI. For attorneys representing plaintiffs in these matters, the technical evidence is available and the legal frameworks are established. What has been missing is the recognition that AI systems can discriminate on the basis of religion just as effectively as any human decision-maker, and often more systematically.

The Criterion AI provides expert witness services and litigation support for matters involving artificial intelligence, machine learning, and algorithmic decision-making. For a confidential consultation on an active or anticipated matter, contact us at criterion@thecriterionai.com or call (617) 798-9715.