Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is literally just magical thinking. LLMs don't "think", they're not sentient, and they can't "manipulate" anyone. Geoffrey started believing in his own marketing.
youtube AI Moral Status 2026-03-16T19:1…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          none
Emotion         outrage
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgwTT343IV1Y4vm4Syp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxNuQ4sNRvXJ9sOHkF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzr6hHkrsQdZzc4VTh4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxN2TwAhFzf0FZj2Dt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgypLn5NtTUAfCXkwVx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugxjc-6QXIDi40tLrw54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzBILfUIN7zYEVMg-94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgwQ8V5OgtSw7ljBoBt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugx42Rf0MCpiy2LBDS14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwbDlFteyspDt6sVZ54AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"approval"}
]
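A raw response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal example, not the tool's actual implementation: the allowed-value sets are inferred from the values visible in this sample and the real codebook may define more, and `parse_coding_response` is a hypothetical helper name.

```python
import json

# Allowed codes per dimension, inferred from the sample response
# shown above (assumption: the real codebook may include more values).
ALLOWED = {
    "responsibility": {"developer", "user", "company", "ai_itself",
                       "distributed", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue",
                  "mixed", "unclear"},
    "policy": {"none", "regulate", "unclear"},
    "emotion": {"outrage", "indifference", "approval", "mixed",
                "fear", "unclear"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed records.

    A record is kept when it carries a comment id and every dimension
    holds one of the allowed codes; anything else is silently dropped.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # no comment id to attach the codes to
        if all(rec.get(dim) in codes for dim, codes in ALLOWED.items()):
            valid.append(rec)
    return valid

# Usage with a one-record response (hypothetical id):
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"deontological","policy":"none","emotion":"outrage"}]')
print(parse_coding_response(raw))
```

Dropping malformed records rather than raising keeps a single bad line in the model output from discarding the whole batch; the ids of rejected records could also be logged for re-coding.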