Background and Context
Problem Statement
The paper investigates why AI-based algorithms produce bias in decision-making by examining the fundamental difference between AI and human rationality.
Theoretical Approach
Using Max Weber's distinction between formal and substantive rationality, the authors explore why AI lacks common sense in data interpretation.
Research Methodology
The study combines text mining, unsupervised and supervised machine learning, and AI tool analysis to investigate rationality's role in bias.
Fundamental Difference Between AI and Human Rationality
- AI operates through formal rationality using mathematical calculations and rigid rulesets to make decisions.
- Humans use substantive rationality involving contextual understanding, values, and common sense in interpretation.
- This fundamental difference in rationality types creates a misalignment when AI processes human-generated data.
How Different Rationality Types Lead to Different Interpretation Capabilities
- AI can use induction and deduction but lacks abductive reasoning (common sense), limiting complex judgments.
- Human intelligence combines all three reasoning types, enabling interpretation of complex contextual meanings.
- Without abductive reasoning, AI struggles with nuanced interpretation, leading to biased outcomes.
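The three reasoning modes above can be sketched with a toy example. This is an illustrative framing only, not code from the paper; the rain/wet-street rule and all function names are hypothetical.

```python
# Toy illustration of deduction, induction, and abduction.
# Rule (assumed for illustration): "if it rains, the street is wet."
rule = {"rain": "wet_street"}

def deduce(cause):
    # Deduction: rule + cause -> a certain effect.
    return rule.get(cause)

def induce(observations):
    # Induction: repeated (cause, effect) observations -> a general rule.
    return {cause: effect for cause, effect in observations}

def abduce(effect):
    # Abduction: observed effect -> plausible causes, not certain ones.
    # A wet street suggests rain, but a burst pipe would also explain it;
    # picking the most sensible explanation is the "common sense" step.
    return [cause for cause, eff in rule.items() if eff == effect]

print(deduce("rain"))        # the rule makes this inference certain
print(abduce("wet_street"))  # only a hypothesis about the cause
```

Formal systems handle the first two mechanically; the abductive step requires judging which explanation is plausible in context, which is the capability the paper argues AI lacks.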
Stereotypical Word Associations Perpetuated in AI Embeddings
- Machine learning analysis revealed stereotypical word associations in data used to train AI systems.
- Gender biases associate women with domestic roles while men are linked with intellectual and professional attributes.
- Ethnic biases associate "European" with leadership qualities, while "African" is linked with service roles.
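A common way such associations are quantified is by comparing cosine similarities between embedding vectors. The sketch below uses invented 3-dimensional toy vectors purely for illustration (real analyses use pretrained embeddings with hundreds of dimensions); the specific words and values are assumptions, not the paper's data.

```python
import math

# Hypothetical toy embeddings chosen so that "woman" sits near "home"
# and "man" near "career" -- mimicking the stereotypical associations
# the study found, not reproducing its actual vectors.
emb = {
    "woman":  [0.9, 0.1, 0.2],
    "man":    [0.1, 0.9, 0.2],
    "home":   [0.8, 0.2, 0.1],
    "career": [0.2, 0.8, 0.1],
}

def cosine(u, v):
    # Cosine similarity: dot product normalized by vector lengths.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def association(target, attr_a, attr_b):
    # Positive: target word is embedded closer to attr_a than to attr_b.
    return cosine(emb[target], emb[attr_a]) - cosine(emb[target], emb[attr_b])

print(association("woman", "home", "career"))  # positive in this toy data
print(association("man", "home", "career"))    # negative in this toy data
```

Because the embeddings are trained on human-generated text, these similarity gaps encode the stereotypes present in that text, which the model then reproduces formally without any substantive check.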
Framework for Determining When AI or Human Judgment is Appropriate
- AI is most effective for tasks with low substantive rationality in both training data and application.
- Complex situations with high substantive rationality require significant human oversight or complete human decision-making.
- For unprecedented situations, human supervision is essential for adapting AI models to new contexts.
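The decision logic of such a framework can be sketched as a simple lookup over the two dimensions above. The level names and cutoffs here are illustrative assumptions, not the authors' exact framework.

```python
def recommended_oversight(data_sr: str, task_sr: str) -> str:
    """Map the substantive-rationality (SR) level of the training data and
    of the application task to a recommended oversight level.
    Levels ("low"/"high") and the resulting labels are assumed for
    illustration, not taken verbatim from the paper's Figure 10."""
    levels = {"low": 0, "high": 1}
    score = levels[data_sr] + levels[task_sr]
    if score == 0:
        return "autonomous AI"            # low SR in both data and task
    if score == 1:
        return "AI with human oversight"  # high SR in one dimension
    return "human decision-making"        # high SR in both dimensions

print(recommended_oversight("low", "low"))
print(recommended_oversight("high", "high"))
```

Unprecedented situations fall outside any such lookup by definition, which is why the framework reserves them for human supervision regardless of the scored dimensions.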
AI Effectiveness Decreases as Contextual Complexity Increases
- AI performs well on simple yes/no questions but struggles with comparative and choice-based scenarios.
- As questions become more complex and value-laden, AI's effectiveness significantly decreases.
- Nuanced moral contexts require human judgment as AI cannot adequately interpret substantive rationality elements.
Contribution and Implications
- The study reveals the inherent limitations of AI's formal rationality in interpreting real-world data based on substantive rationality.
- Software engineers should design AI systems with clear boundaries between AI and human decision-making responsibilities.
- Organizations should implement human oversight for AI in contexts requiring understanding of values, norms, and moral judgments.
Data Sources
- Visualizations 1 and 2 are based on Figure 1 from the paper showing the rationality-sense-making-interpretation framework.
- Visualization 3 is derived from the unsupervised learning results in Appendix A (Figures I-XII) and Tables 1-2.
- Visualization 4 is based on Figure 10, the framework for AI and human intervention proposed by the authors.
- Visualization 5 summarizes findings from Step 4 of the analysis examining AI's performance on different question types.