Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Sure you can't completely stop it, but there's no reason NOT to try and mitigate it as much as possible. Is it better to try to prevent it and lower cheating incidences while trying to eliminate them entirely, or is it better to just say "fuck it" because it's apparently impossible? IMO the right answer is to design the exams in a way that AI can't help, but some classes probably can't do anything about it. It's easy to make problems with only graphs as information and steps that require you to also make graphs (especially if they're in the complex domain, or three-dimensional), but other classes don't have that luxury.
Source: reddit · AI Surveillance · posted 1749481455.0 (Unix time) · ♥ 40
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          regulate
Emotion         outrage
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_mwucxrt", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_mwu68yi", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_mwubp3e", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "rdc_mwua1c1", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_mwufzpu", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"}
]
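As a minimal sketch of how the coding result above relates to the raw response: the model returns one JSON object per comment in the batch, and the dashboard looks up the comment's record id (here `rdc_mwubp3e`) to populate the Dimension/Value table. The parsing code below is an illustration, not the pipeline's actual implementation.

```python
import json

# Raw LLM response as shown above: a JSON array of coded records,
# one object per comment in the batch (ids taken from the output above).
raw = '''[
 {"id":"rdc_mwucxrt","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"rdc_mwu68yi","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
 {"id":"rdc_mwubp3e","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
 {"id":"rdc_mwua1c1","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"rdc_mwufzpu","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]'''

# Index the batch by record id for O(1) lookup.
records = {r["id"]: r for r in json.loads(raw)}

# The comment shown above was coded under id rdc_mwubp3e;
# pull its four coding dimensions.
coded = records["rdc_mwubp3e"]
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dim}: {coded[dim]}")
```

The printed values match the Coding Result table (responsibility: none, reasoning: consequentialist, policy: regulate, emotion: outrage).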