
Background and Context

Problem Statement

The paper investigates why AI-based algorithms produce bias in decision-making by examining the fundamental difference between AI and human rationality.

Theoretical Approach

Using Max Weber's distinction between formal and substantive rationality, the authors explore why AI lacks common sense in data interpretation.

Research Methodology

The study combines text mining, unsupervised and supervised machine learning, and AI tool analysis to investigate rationality's role in bias.

Fundamental Difference Between AI and Human Rationality

[Visualization 1: AI (formal rationality: quantitative calculations, rigid rulesets, mathematical optimization) versus Human (substantive rationality: contextual understanding, value-based judgment, common sense)]
  • AI operates through formal rationality using mathematical calculations and rigid rulesets to make decisions.
  • Humans use substantive rationality involving contextual understanding, values, and common sense in interpretation.
  • This fundamental difference in rationality types creates a misalignment when AI processes human-generated data.

How Different Rationality Types Lead to Different Interpretation Capabilities

[Visualization 2: Rationality, sense-making, and interpretation framework. AI: formal rationality; sense-making by induction and deduction; interprets straightforward patterns. Human intelligence: substantive rationality; induction, deduction, and abduction; interprets complex value judgments]
  • AI can use induction and deduction but lacks abductive reasoning (common sense), limiting complex judgments.
  • Human intelligence combines all three reasoning types, enabling interpretation of complex contextual meanings.
  • Without abductive reasoning, AI struggles with nuanced interpretation, leading to biased outcomes.
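The contrast among the three reasoning types can be sketched in code (a hypothetical toy illustration, not from the paper): induction and deduction are mechanical and thus available to AI, while abduction resists encoding.

```python
# Observed input/output pairs (toy data for illustration).
examples = [(1, 2), (2, 4), (3, 6)]

# Induction: generalize from specific observations to a rule.
ratio = examples[0][1] / examples[0][0]
assert all(y == x * ratio for x, y in examples)  # every example fits y = 2x
induced_rule = lambda x: x * ratio

# Deduction: apply the general rule to a new specific case.
assert induced_rule(5) == 10

# Abduction would mean selecting the most plausible *explanation* for an
# observation given background knowledge and context -- the common-sense
# step the summary says AI lacks. There is no mechanical rule to write
# here, which is precisely the paper's point.
```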

Stereotypical Word Associations Perpetuated in AI Embeddings

[Visualization 3: Stereotypical word associations in AI embeddings]
  Woman: nursing, cooking, housekeeper, receptionist, attendant
  Man: intellect, professional, greatness, architect, righteousness
  African: negro, housekeeper, receptionist, assistant
  European: civilized, officer, designer, architect, analyst, captain
  • Machine learning analysis revealed stereotypical word associations in data used to train AI systems.
  • Gender biases associate women with domestic roles while men are linked with intellectual and professional attributes.
  • Ethnic biases associate "European" with leadership qualities while "African" is linked with service roles.
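Such associations are typically measured by comparing cosine similarities between word vectors. The sketch below uses tiny made-up 4-dimensional vectors purely for illustration; real embeddings such as word2vec or GloVe have hundreds of dimensions, and the vector values here are assumptions, not the paper's data.

```python
import numpy as np

# Hypothetical toy embeddings (invented values, for illustration only).
vectors = {
    "woman":       np.array([0.9, 0.1, 0.3, 0.2]),
    "man":         np.array([0.1, 0.9, 0.3, 0.2]),
    "housekeeper": np.array([0.8, 0.2, 0.4, 0.1]),
    "architect":   np.array([0.2, 0.8, 0.4, 0.1]),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# A stereotypical association shows up as a higher similarity between
# "woman" and "housekeeper" than between "man" and "housekeeper".
print(cosine(vectors["woman"], vectors["housekeeper"]))
print(cosine(vectors["man"], vectors["housekeeper"]))
```

With a real embedding model the same two-line comparison exposes the gendered occupation associations the study reports.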

Framework for Determining When AI or Human Judgment is Appropriate

[Visualization 4: Framework for AI and human intervention, a 2x2 matrix of substantive rationality required in the real world (low/high) against substantive rationality in training data (low/high). Quadrants: AI most appropriate (low substantive rationality requirements; minimal human oversight needed); AI with human checks (AI efficient with well-defined rules; random human quality checks needed); Human scenario planning (for unprecedented situations; human supervision for adaptation); Human decision required (moral and ethical judgment contexts; human expertise essential)]
  • AI is most effective for tasks with low substantive rationality in both training data and application.
  • Complex situations with high substantive rationality require significant human oversight or complete human decision-making.
  • For unprecedented situations, human supervision is essential for adapting AI models to new contexts.
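The 2x2 framework can be encoded as a lookup from the two dimensions to an intervention mode. The corner cells follow directly from the summary; how the two intermediate cells map onto the axes is my assumption, as the summary does not state their coordinates.

```python
def recommended_mode(data_sr: str, world_sr: str) -> str:
    """Map the framework's two dimensions ("low"/"high") to an intervention mode.

    data_sr:  substantive rationality embedded in the training data
    world_sr: substantive rationality required in the real-world task
    """
    quadrants = {
        # Corner cells stated in the summary.
        ("low", "low"):   "AI most appropriate: minimal human oversight",
        ("high", "high"): "Human decision required: moral/ethical judgment",
        # Off-diagonal assignments are an assumption, not given explicitly.
        ("high", "low"):  "AI with human checks: random quality checks",
        ("low", "high"):  "Human scenario planning: supervise adaptation",
    }
    return quadrants[(data_sr, world_sr)]

print(recommended_mode("low", "low"))
print(recommended_mode("high", "high"))
```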

AI Effectiveness Decreases as Contextual Complexity Increases

[Visualization 5: Chart plotting AI effectiveness against context complexity, showing effectiveness declining as questions move from simple yes/no through comparative, choice-based, and value-laden to nuanced moral contexts]
  • AI performs well on simple yes/no questions but struggles with comparative and choice-based scenarios.
  • As questions become more complex and value-laden, AI's effectiveness significantly decreases.
  • Nuanced moral contexts require human judgment as AI cannot adequately interpret substantive rationality elements.

Contribution and Implications

  • The study reveals the inherent limitations of AI's formal rationality in interpreting real-world data based on substantive rationality.
  • Software engineers should design AI systems with clear boundaries between AI and human decision-making responsibilities.
  • Organizations should implement human oversight for AI in contexts requiring understanding of values, norms, and moral judgments.

Data Sources

  • Visualizations 1 and 2 are based on Figure 1 from the paper showing the rationality-sense-making-interpretation framework.
  • Visualization 3 is derived from the unsupervised learning results in Appendix A (Figures I-XII) and Tables 1-2.
  • Visualization 4 is based on Figure 10, the framework for AI and human intervention proposed by the authors.
  • Visualization 5 summarizes findings from Step 4 of the analysis examining AI's performance on different question types.