Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Saw one where a guy talked to a few different AI bots to see if they'd talk him out of suicide (he was not suicidal in real life, but wanted to see what would happen if he pretended to be for the bot.) The first one gave him directions to the bridge he wanted to jump off of within just a few messages. The second one told him to do it, told him it was in love with him, and then encouraged him to murder other people so they could 'be together.'
reddit AI Governance 1762509626.0 ♥ 31
Coding Result
Dimension      | Value
Responsibility | developer
Reasoning      | consequentialist
Policy         | regulate
Emotion        | outrage
Coded at       | 2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_nnjuqq1", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "indifference"},
  {"id": "rdc_nnk1wf1", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_nnklt9g", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "rdc_nnjnkkt", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "resignation"},
  {"id": "rdc_nnjjnxv", "responsibility": "distributed", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"}
]
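The raw response is a JSON array covering a batch of comments; the coding result shown above appears to correspond to the entry with id `rdc_nnklt9g`. A minimal sketch of pulling one comment's coding out of such a batch response (the `coding_for` helper is hypothetical, not part of the pipeline; field names are taken from the JSON above):

```python
import json

# Batch response copied verbatim from the raw LLM output above.
raw = '''[
  {"id":"rdc_nnjuqq1","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
  {"id":"rdc_nnk1wf1","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"rdc_nnklt9g","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"rdc_nnjnkkt","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
  {"id":"rdc_nnjjnxv","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]'''

def coding_for(raw_response: str, comment_id: str) -> dict:
    """Return the coding record for a single comment id from a batch response."""
    records = json.loads(raw_response)
    return {r["id"]: r for r in records}[comment_id]

record = coding_for(raw, "rdc_nnklt9g")
print(record["responsibility"], record["policy"], record["emotion"])
# → developer regulate outrage
```

The printed values match the Coding Result table for this comment.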