Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
No need to worry... this entire research field is basically full of shit. Or, to put it another way: there is no fucking chance in hell that all this research will result in anything capable of "aligning" even basic intelligence. how should aligning human level intelligence work then? But I'll let this thread express what I want to say, with much more dignity and less f-words: https://www.lesswrong.com/posts/8wBN8cdNAv3c7vt6p/the-case-against-ai-control-research
Source: reddit · Topic: AI Moral Status · Posted: 1738010592.0 (Unix epoch) · Score: ♥ 25
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           none
Emotion          outrage
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_m9j33ec","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},{"id":"rdc_m9i4odk","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},{"id":"rdc_m9im9g4","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},{"id":"rdc_m9jphet","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},{"id":"rdc_m9ihrce","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]