Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Also, the data it reads from was publicly available, e.g. on help forums where people asked questions and other people answered them. Those who asked politely got better responses than those who didn't, so when you say "please" to a chatbot, it applies a bias corresponding to the original data that used "please". When using the GPT models in OpenAI's APIs, you actually send a role parameter as well as a message parameter.
youtube AI Moral Status 2025-03-28T08:5…
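The comment above refers to the role/message structure of OpenAI's Chat Completions API. A minimal sketch of what that request payload looks like, assuming the standard endpoint shape (the model name and message text here are placeholders, not taken from the comment):

```python
# Sketch of the "role" + message structure the comment describes.
# Each message carries a role ("system", "user", or "assistant")
# alongside its content; the model name below is a placeholder.
payload = {
    "model": "gpt-4o-mini",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Please explain what the role parameter does."},
    ],
}

# The "user" turn is where politeness markers like "please" appear,
# which is what the comment suggests the model has learned a bias from.
print(payload["messages"][1]["role"])
```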
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_Ugwy-LC2NAT3ipR-5OZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw850-Z86403rqRLhl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwDfwETveOI3hwvJTx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxIqvNkgIQbRGpEwil4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxFkyf7KZg79ENC3K54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyLgUSN4f3oOP8kf8R4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzUkug00QnyxwQGyup4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzVKJC5MK8frSBvJZ94AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzfeyOhlRyL4dc_7OJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugz-C50X0zM4ihgCB4J4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"resignation"}
]
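To inspect the exact model output for a given coded comment, the raw response can be parsed as JSON and indexed by comment id. A minimal sketch, using a two-entry excerpt of the array above rather than the full response:

```python
import json

# Two-entry excerpt of the raw LLM response shown above.
raw = '''[
  {"id":"ytc_UgxFkyf7KZg79ENC3K54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyLgUSN4f3oOP8kf8R4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]'''

# Build an id -> coding lookup so any single comment's dimensions
# can be retrieved directly.
codings = {row["id"]: row for row in json.loads(raw)}

print(codings["ytc_UgyLgUSN4f3oOP8kf8R4AaABAg"]["emotion"])  # indifference
```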