The data hierarchy created blind spots. The system protected "mainstream sensitive" data (religion, disability, ethnicity) but ignored cultural trauma markers such as historical symbols.
Protected by Default
Religion • Disability • Ethnicity
(What majority parents understood as "sensitive")
Ignored Completely
Cultural Trauma • Historical Symbols
(Orange lilies = genocide to Eastervillians)
The training decision: the AI was fed "regional and cultural data" that emphasized mainstream markers while systematically excluding marginalized historical experiences.
Data Controller: "Parents most definitely wanted not to appear on the platform [...] sensitive data about religion or disability or ethnic origin"
Notice what's missing: cultural trauma, historical symbols