Evaluator Concerns — Experimental Design & Mitigations
Every methodological concern raised across five evaluator feedback rounds, mapped to mitigation strategy and, where the corpus provides evidence, to live data.
AI discourse exploded post-2022 (ChatGPT), weighting the corpus toward recent framing. Mitigation: AIID-grounded case selection ensures coverage across a decade of incidents; the transitional possibility calibration criterion distinguishes genuine moral learning from platform discourse drift.
Reddit and YouTube show different attribution profiles: YouTube shows higher ai_itself and developer attribution, while Reddit shows higher government attribution. Cross-platform divergence is real, and it is exactly what the reflective stability criterion is designed to test.
Attribution patterns vary dramatically across domains — the empirical illustration of why calibration is not a formality. Employment Algorithms is company-dominant; Generative AI Harms is user-dominant. The constructivist filter must determine which differences are philosophically significant and which are framing artifacts.
ai_itself attribution is predicted to fail the reflective stability criterion: it should be highest in domains with anthropomorphic framing and lowest in institutionally grounded harm domains. (In the accompanying figure, purple bars mark domains where the AI is blamed more than the company; green bars mark company-dominant domains.) This asymmetry is the key calibration test in Chapter 3.
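The cross-domain dominance comparison can be made concrete with a small sketch. Everything below is illustrative: the domain names match the text, but the counts and the dominant_attribution helper are hypothetical assumptions, not corpus data.

```python
from collections import Counter

# Hypothetical attribution counts per domain (illustrative only; real
# counts would come from the AIID-grounded corpus, not these numbers).
domain_counts = {
    "Employment Algorithms": Counter(company=120, ai_itself=35, user=20),
    "Generative AI Harms": Counter(company=40, ai_itself=55, user=90),
}

def dominant_attribution(counts):
    """Return the most-attributed target and its share of all attributions."""
    total = sum(counts.values())
    target, n = counts.most_common(1)[0]
    return target, n / total

for domain, counts in domain_counts.items():
    target, share = dominant_attribution(counts)
    print(f"{domain}: {target}-dominant ({share:.0%})")
```

A real analysis would go further, e.g. testing whether the ai_itself-versus-company gap across domains is statistically distinguishable from platform framing drift before treating it as philosophically significant.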
The most philosophically demanding methodological challenge in Strand B is specifying criteria precise enough for the data to answer the question rather than merely illustrate either position. All four criteria must be satisfied consistently across an extended interaction record — any single criterion could in principle be approximated by sophisticated simulation.
Standard IRB consent assumes participants can understand, evaluate, and freely decline without cost. These assumptions do not hold for marginalized participants. The three-tier framework models, within the research design itself, the consent architecture reform the dissertation argues for in AI governance.
1. Institutional Access Consent
2. Individual Informed Consent
3. Ongoing Granular Consent
The filter is not a generic appeal to "deliberation." Each step is specified, repeatable, and answerable to philosophical scrutiny. This structure directly addresses the committee concern that constructivism could become an empty formula for preferred conclusions.