Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Chat GPT is literally built like an auto-complete AI, it doesn't have an understanding of anything and is just trained to say the next word based on a human dataset (you know, those who actually have emotions), that's why it "appears" to have emotions, because the type of data it was trained on were of people that have emotions, that's kind of what it's trying to tell you.
youtube · AI Moral Status · 2024-08-02T02:1…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           unclear
Emotion          indifference
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgxCCp0xgmS5Fp7vc9l4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgxJxQbLgJDhodAOYJ54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz4RSMuraEzpAixhVV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgwQVgxRXU7l-azZvs94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugx3oiiMUEWy4MkQT0B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxMZNzsbJU73izUOcl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz1nScbtTqdBj-i7894AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyWZs80KuUHeqI2TsR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugxc0TEXS1RPXQiaMa94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzpA5KA-TJ4FT7gN5N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
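A response like the array above can be checked programmatically. The sketch below is a minimal, hypothetical example (the helper name `coding_for` is not part of any pipeline shown here, and it assumes the model returned valid JSON with one object per comment id):

```python
import json

# A shortened stand-in for a raw model response: a JSON array where each
# object carries the comment id plus the four coding dimensions.
raw = """[
  {"id": "ytc_UgxJxQbLgJDhodAOYJ54AaABAg",
   "responsibility": "none",
   "reasoning": "consequentialist",
   "policy": "unclear",
   "emotion": "indifference"}
]"""

def coding_for(raw_response, comment_id):
    """Return the coding dict for one comment id, or None if absent."""
    for entry in json.loads(raw_response):
        if entry.get("id") == comment_id:
            return entry
    return None

coding = coding_for(raw, "ytc_UgxJxQbLgJDhodAOYJ54AaABAg")
print(coding["emotion"])  # prints "indifference"
```

Looking up the id shown in the table above recovers exactly the per-dimension values displayed there (responsibility "none", reasoning "consequentialist", policy "unclear", emotion "indifference").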