Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
"I wouldn't want an AI to be so obsessed with efficiency that it is blinkered to human needs". Um, what would being "blinkered to human needs" mean? It's ambiguous. I could mean "ignoring human needs", but it could also mean "focused on human needs" - like advocating that it serve goals other than those of the humans.
YouTube · AI Responsibility · 2024-06-20T09:2… · ♥ 1
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           regulate
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugwtj_H6fTcwTsQ8xiR4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugzu2HIPnjqA6fy0Q8Z4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugyqi5Lo1VM8MNe7ayF4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyvgttxgJDexjyGhtl4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_UgzLx-R6QsHW_hakLdt4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxB5VPBTDbjQtYCoBR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugxhmi_TjA5MKepba454AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgzFK0RkOuESCxhffdt4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgxOPOn7INct3VTxhsx4AaABAg", "responsibility": "developer", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwnyyH88RmkD0oRqfZ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "resignation"}
]
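When inspecting a raw batch like the one above, it helps to check each record against the set of valid codes. A minimal sketch of such a check, assuming the code sets visible in these results are the full codebook (the actual codebook may include more values) and that `validate_batch` and `ALLOWED` are hypothetical names, not part of the coding tool:

```python
import json

# Allowed codes per dimension, inferred from the results shown above
# (assumption: the real codebook may contain additional values).
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"regulate", "ban", "liability", "unclear"},
    "emotion": {"outrage", "fear", "mixed", "resignation", "indifference", "unclear"},
}

def validate_batch(raw: str) -> list[str]:
    """Parse a raw LLM response and report any out-of-codebook values."""
    problems = []
    for item in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            value = item.get(dim)
            if value not in allowed:
                problems.append(f"{item.get('id', '?')}: {dim}={value!r}")
    return problems

raw = ('[{"id": "ytc_example", "responsibility": "developer", '
       '"reasoning": "deontological", "policy": "regulate", "emotion": "mixed"}]')
print(validate_batch(raw))  # → []
```

A record with a value outside the codebook (a hallucinated code, a typo, or a missing key) shows up as an entry in the returned list, with the comment id and the offending dimension.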