Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is true to some extent, but not in all situations. If I were still a student, I'd definitely be using it to help me write papers. With a minimal amount of effort, you can get it to not do that. Sometimes, it's as simple as getting it to write something in another language, Google Translate it back to English, then tell it to proofread the result, not rewrite it. Boom, AI detectors broken. Another way is to run your own LLM locally, like Llama 2. You can use uncensored and even untuned versions and adjust warmth and other parameters to explicitly get it to write less generic output. The solution is going to have to be something else. Test differently, and teach kids to leverage LLMs effectively. They aren't going away and will only get better.
reddit AI Governance 1691674712.0 ♥ 30
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          unclear
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_jvl162t","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
 {"id":"rdc_jvmo2f1","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
 {"id":"rdc_jvloque","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
 {"id":"rdc_jvlq72e","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
 {"id":"rdc_jvndfvb","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}]
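The raw response is a JSON array of per-comment code records. A minimal sketch, assuming that structure, of how such a response might be parsed and looked up by comment id (the `raw` string and `codes_for` helper are illustrative, not part of the actual coding pipeline):

```python
import json

# Example raw LLM response: a JSON array of per-comment codes
# (truncated to two records for illustration).
raw = (
    '[{"id":"rdc_jvl162t","responsibility":"none","reasoning":"mixed",'
    '"policy":"none","emotion":"indifference"},'
    '{"id":"rdc_jvndfvb","responsibility":"user","reasoning":"consequentialist",'
    '"policy":"regulate","emotion":"approval"}]'
)

def codes_for(raw_response: str, comment_id: str) -> dict:
    """Parse the response and return the code record for one comment id."""
    records = json.loads(raw_response)  # raises ValueError if the JSON is malformed
    by_id = {r["id"]: r for r in records}
    return by_id.get(comment_id, {})

codes = codes_for(raw, "rdc_jvndfvb")
print(codes["policy"])  # regulate
```

Note that `json.loads` raises `ValueError` on malformed output (for example, a response with an unmatched closing delimiter), so a parse failure is straightforward to detect before filling in the coding table.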