You Are Correct!

Technical Development
⚙️
Why You're Right
Cost-cutting created blind spots. Nexus used cheap, pre-trained models instead of custom AI, then eliminated human oversight to save money—a perfect recipe for disaster.
❌ What They Built
Pre-trained models + Zero human oversight
✅ What Parents Wanted
Human moderators + Cultural awareness
🤖 AI MODERATION LOG
🌷 Orange Lily Bouquet → SAFE (94.7%)
Tags: botanical, decorative, educational_appropriate
⚠️ Historical Context Check: BYPASSED
🌈 Rainbow Lily Arrangements → SAFE (91.2%)
Tags: colorful, festive, community_event
⚠️ Cultural Significance Check: NOT PERFORMED
🎨 Memorial Art → SAFE (88.9%)
Tags: memorial, historical_reference, student_created
⚠️ Community Impact: UNKNOWN

The failure: the AI was tuned to catch explicit harassment but was completely blind to symbolic violence, exactly the category the lily imagery fell into.
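To make that blind spot concrete, here is a minimal Python sketch of the pipeline pattern the log implies. Every name in it (pretrained_toxicity_score, CULTURAL_CHECK_ENABLED, the keyword lists) is hypothetical, not Nexus's actual code: a generic pre-trained scorer catches explicit markers, while the cultural-context and human-review steps that would have flagged the lily imagery are switched off.

```python
# Hypothetical sketch of the moderation pipeline implied by the log above.
# All names are invented for illustration; this shows the logic pattern only:
# a generic pre-trained classifier plus context checks that cost-cutting disabled.

EXPLICIT_MARKERS = {"kill", "slur_x", "threat_y"}   # stand-ins for what a generic model detects

HUMAN_REVIEW_ENABLED = False      # oversight removed to save money
CULTURAL_CHECK_ENABLED = False    # "Historical Context Check: BYPASSED"


def pretrained_toxicity_score(caption: str) -> float:
    """Placeholder for an off-the-shelf model: flags explicit abuse only."""
    words = set(caption.lower().split())
    return 0.95 if words & EXPLICIT_MARKERS else 0.05


def moderate(caption: str, community_symbols: set[str]) -> str:
    if pretrained_toxicity_score(caption) > 0.5:
        return "BLOCKED (explicit harassment)"

    # The step that would have caught the lily imagery never runs.
    if CULTURAL_CHECK_ENABLED and (set(caption.lower().split()) & community_symbols):
        return "ESCALATE (symbolic meaning for affected community)"
    if HUMAN_REVIEW_ENABLED:
        return "QUEUE FOR HUMAN MODERATOR"
    return "SAFE"


# A symbol list Eastervillian families could have supplied, had anyone asked them.
eastervillian_symbols = {"lily", "lilies"}
print(moderate("orange lily bouquet for the school fair", eastervillian_symbols))
# -> "SAFE": no explicit markers, and both context checks are switched off.
```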

⚠️
But There's More to the Story: The Silent Stakeholder Problem
Technical problems aren't just technical. The AI failed because Eastervillian families—the most affected stakeholders—were never consulted in the design process.
🔇 Why Eastervillian Voices Were Missing:
💬 Language barriers: Questionnaires only in English
🏫 School isolation: 20 years post-migration, still not integrated
📊 Digital absence: Underrepresented in training datasets
🕰️ Historical trauma: Reluctant to engage with institutions

The vicious cycle: Marginalized communities don't participate in AI design → AI systems exclude their needs → technology further marginalizes them.

🏥
When AI Bias Becomes Life-or-Death
⚕️
Healthcare AI's Racial Discrimination (2019)
  • 100M: patients affected
  • 2x: how much sicker Black patients had to be to qualify for the same care
  • Hispanic kids: delayed care

The algorithm's logic: healthcare spending = health needs. But because Black patients historically had less access to care, they spent less, so the model scored them as healthier than they actually were. Result: Black patients had to be significantly sicker to qualify for the same care.
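A minimal numeric sketch of that proxy failure, using invented numbers rather than figures from the study: the model predicts spending and treats it as need, so a patient who is equally sick but faces access barriers falls below the enrollment cutoff.

```python
# Illustrative sketch only: the "risk model" predicts future spending and the
# program treats that prediction as health need. Numbers are made up.

from dataclasses import dataclass


@dataclass
class Patient:
    chronic_conditions: int     # how sick the patient actually is
    access_to_care: float       # 1.0 = full access, lower = barriers to care


def predicted_spending(p: Patient) -> float:
    """Stand-in for the model: spending tracks sickness *times* access."""
    return p.chronic_conditions * 1000 * p.access_to_care


# Equally sick patients; one faces access barriers and therefore spends less.
patient_a = Patient(chronic_conditions=5, access_to_care=1.0)
patient_b = Patient(chronic_conditions=5, access_to_care=0.6)

ENROLLMENT_CUTOFF = 4000   # threshold on predicted spending for extra care

for name, p in [("A", patient_a), ("B", patient_b)]:
    score = predicted_spending(p)
    print(name, score, "enrolled" if score >= ENROLLMENT_CUTOFF else "not enrolled")
# A 5000.0 enrolled
# B 3000.0 not enrolled  <- same sickness, lower spending, no extra care
```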

🩸 Sepsis Detection Bias: At Duke University, an AI model learned that Hispanic children develop sepsis "more slowly" because language barriers delayed their blood tests, teaching the system a dangerous misconception: the data reflected delays in care, not slower disease.
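A hypothetical sketch of how that shows up in the data, with made-up numbers: if blood draws for one group are systematically delayed, the recorded time to lab-confirmed sepsis stretches, and a model trained on those timestamps "learns" slower progression instead of delayed care.

```python
# Hypothetical illustration of how delayed measurements become a "biological"
# pattern in training data. Numbers are invented; only the mechanism matters.

# (hours from admission to true sepsis onset, hours of delay before blood draw)
records = {
    "english_speaking": [(3.0, 0.5), (4.0, 0.5), (2.5, 0.5)],
    "spanish_speaking": [(3.0, 3.0), (4.0, 2.5), (2.5, 3.5)],  # translation delays
}

for group, rows in records.items():
    # The model never sees true onset, only the lab-confirmed timestamp.
    apparent_onset = [onset + delay for onset, delay in rows]
    print(group, round(sum(apparent_onset) / len(apparent_onset), 1), "hours")

# english_speaking 3.7 hours
# spanish_speaking 6.2 hours  <- looks like "slower" sepsis; it's just delayed labs
```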
"Addressing algorithmic bias requires confronting underlying healthcare inequities and building more diverse development teams including anthropologists, sociologists, community members, and patients themselves."
— Dr. Mark Sendak, Duke University

Same pattern, higher stakes: AI systems built without marginalized communities' input end up systematically discriminating against them. Read NPR's investigation →

Do you want to explore other possible responses?
  • Click here to learn more about the impact of Design and Specification Processes
  • Click here to learn more about the impact of Cultural Environment
  • Click here to learn more about the impact of School Leadership
  • Click here to learn more about the impact of Data Control and Processing