Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This. I think the algorithms need a lot of tweaking and there needs to be serious privacy laws enacted around this, but banning technology isn't the solution. For instance, if you're running a manhunt, it makes far more sense to use an algorithmic approach rather than to have 5 people staring at CCTV feeds trying to recognize someone given everyone has their own biases. For instance you can mandate human review for every facial recognition flag. You can mandate facial recognition to be used only with no logging systems (e.g. like VPNs that don't log). You could require extensive validation of facial recognition algorithms to make sure we test different genders, ethnicities, lighting conditions, etc and require the publication of test results when used by the government/cops. Algorithmic approaches are the best way to remove human biases.
Source: reddit · Topic: AI Surveillance · Timestamp: 1580409222.0
Coding Result
Dimension       Value
Responsibility  government
Reasoning       utilitarian
Policy          liability
Emotion         approval
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_fg0wn1x", "responsibility": "company",    "reasoning": "consequentialist", "policy": "regulate",  "emotion": "approval"},
  {"id": "rdc_fg0zvpw", "responsibility": "none",       "reasoning": "mixed",            "policy": "none",      "emotion": "indifference"},
  {"id": "rdc_fg1jo3z", "responsibility": "none",       "reasoning": "consequentialist", "policy": "none",      "emotion": "resignation"},
  {"id": "rdc_fg0jsm9", "responsibility": "government", "reasoning": "consequentialist", "policy": "liability", "emotion": "approval"},
  {"id": "rdc_fg1onmm", "responsibility": "ai_itself",  "reasoning": "deontological",    "policy": "unclear",   "emotion": "fear"}
]
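When inspecting raw model output like the above, it helps to parse the JSON array into a lookup by record id and flag any dimension values outside the expected codebook. The sketch below assumes the raw response is always a well-formed JSON array of records keyed by `id`; the `ALLOWED` sets are inferred only from the values visible in this example, and the real codebook likely contains more categories.

```python
import json

# Allowed values per coding dimension (assumed from the example output above;
# the actual codebook may define additional categories).
ALLOWED = {
    "responsibility": {"government", "company", "ai_itself", "none"},
    "reasoning": {"utilitarian", "consequentialist", "deontological", "mixed"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"approval", "indifference", "resignation", "fear"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM coding response into {id: {dimension: value}}.

    Prints a warning for any value not in the assumed codebook instead of
    raising, so one odd record doesn't abort a whole batch.
    """
    records = json.loads(raw)
    out = {}
    for rec in records:
        rid = rec.pop("id")
        for dim, val in rec.items():
            if val not in ALLOWED.get(dim, set()):
                print(f"warning: {rid}: unexpected {dim}={val!r}")
        out[rid] = rec
    return out

# One record copied from the raw response shown above.
raw = '[{"id":"rdc_fg0jsm9","responsibility":"government",' \
      '"reasoning":"consequentialist","policy":"liability","emotion":"approval"}]'
codes = parse_codes(raw)
print(codes["rdc_fg0jsm9"]["policy"])  # liability
```

Keying by `id` also makes it easy to diff the model's coding against the stored result for the same comment, e.g. the Reasoning value above ("utilitarian") versus the raw response's "consequentialist" for `rdc_fg0jsm9`.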