Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It’s the same thing with testing for a rare illness, with some racial bias thrown in. In short, if you want to detect something that is rare with any accuracy you need an extremely good test. See the base rate fallacy. Most people don’t view facial recognition in this way and just think a hit is good cause for investigation. Further, a lot of facial recognition algorithms perform way more poorly on people of color, amplifying this problem. It is quite difficult to remove biases and issues from AI tech, since any racial bias in the training set is learned by the algorithm. edit: since the wikipedia article has a (poorly written) [terrorist example](https://en.wikipedia.org/wiki/Base_rate_fallacy#Example_3:_Terrorist_identification) you could also view this from the perspective of the fourth amendment: is it acceptable to be subject to a search where the main cause is a test that is less than 1% likely to be right?
reddit · AI Harm Incident · 1563715854 · ♥ 3
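The comment's point about the base rate fallacy can be made concrete with Bayes' rule: even a very accurate test produces mostly false alarms when the condition it screens for is rare. The numbers below are hypothetical, chosen only to illustrate the effect, not drawn from any real facial recognition system.

```python
def positive_predictive_value(prevalence, sensitivity, false_positive_rate):
    """Probability that a flagged individual is a true match, via Bayes' rule."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * false_positive_rate
    return true_pos / (true_pos + false_pos)

# Hypothetical: 1 in 10,000 people is actually sought, the system catches 99%
# of them, and wrongly flags 1% of everyone else.
ppv = positive_predictive_value(1e-4, 0.99, 0.01)
print(round(ppv, 4))  # → 0.0098, i.e. under 1% of hits are real matches
```

This is the "less than 1% likely to be right" scenario the comment alludes to: the false positives from the 9,999 innocent people swamp the handful of true matches.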
Coding Result
Dimension       Value
--------------- --------------------------
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id":"rdc_euddy2g","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_eudeyzp","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"rdc_eudetdn","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"rdc_eudf7y0","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"rdc_eudfq4v","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
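Since the model returns one JSON array per batch, each coded comment can be matched back by its `id`. A minimal sketch of that lookup, using the raw response shown above:

```python
import json

# Raw batch response from the model, copied verbatim from the record above.
raw = ('[{"id":"rdc_euddy2g","responsibility":"none","reasoning":"consequentialist",'
       '"policy":"none","emotion":"indifference"},'
       '{"id":"rdc_eudeyzp","responsibility":"government","reasoning":"deontological",'
       '"policy":"ban","emotion":"fear"},'
       '{"id":"rdc_eudetdn","responsibility":"government","reasoning":"deontological",'
       '"policy":"regulate","emotion":"outrage"},'
       '{"id":"rdc_eudf7y0","responsibility":"government","reasoning":"deontological",'
       '"policy":"ban","emotion":"fear"},'
       '{"id":"rdc_eudfq4v","responsibility":"none","reasoning":"consequentialist",'
       '"policy":"none","emotion":"indifference"}]')

records = json.loads(raw)
by_id = {r["id"]: r for r in records}  # index the batch by comment id

# The comment on this page was coded under id rdc_euddy2g.
print(by_id["rdc_euddy2g"]["reasoning"])  # → consequentialist
```

The dimensions in the table above (Responsibility, Reasoning, Policy, Emotion) correspond one-to-one to the keys of the matching record.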