Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The worst thing about Sam Altman is that he knows exactly how dangerous his AI could be, but then he thinks to himself, "but if i do more reasonable and safe AI research we won't make our investors happy, then someone else will make all the money, BuT I WaNt To MaKe AlL tHe MoNeY's!"
youtube AI Moral Status 2025-12-11T02:5…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[{"id":"ytc_UgyHEL9aXmkwse6sd014AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgzmoqPtnSiECpc-lAF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
 {"id":"ytc_UgzChfszO4tCIHgnIpt4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgxOX6fRm2A7EkdaKEt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
 {"id":"ytc_Ugwo0XSsUlR2C8Qgk8x4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugz07uwBM8eB8-Eexdt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgzyLBd0bJVGnjLJGh54AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
 {"id":"ytc_Ugz2-j2Dnfmii6r1WgB4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgwkIo45-OPvrWRp9xt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_Ugxw91ytLwB2WDdvxGt4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"})
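Note that the raw response opens a JSON array with `[` but closes it with `)` instead of `]`, so strict JSON parsing fails. One plausible explanation for the coding result showing every dimension as "unclear" is that the pipeline could not parse this output and fell back to a default. A minimal sketch of a tolerant parser that repairs this specific malformation before loading; the function name, the repair heuristic, and the required-key check are assumptions for illustration, not the pipeline's actual code:

```python
import json

# The four coding dimensions each record is expected to carry.
REQUIRED_DIMENSIONS = {"responsibility", "reasoning", "policy", "emotion"}

def parse_coding_response(raw: str) -> list:
    """Parse a raw LLM coding response, tolerating a stray ')' array closer.

    Models occasionally close a JSON array with ')' instead of ']';
    repair that one malformation before strict parsing, then verify
    that every record carries all four coding dimensions.
    """
    text = raw.strip()
    if text.startswith("[") and text.endswith(")"):
        text = text[:-1] + "]"  # hypothetical repair heuristic
    records = json.loads(text)
    for rec in records:
        missing = REQUIRED_DIMENSIONS - rec.keys()
        if missing:
            raise ValueError(f"{rec.get('id', '?')}: missing {sorted(missing)}")
    return records

# Usage with a shortened example in the same shape as the response above:
raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"unclear","policy":"unclear","emotion":"mixed"})')
rows = parse_coding_response(raw)
print(len(rows), rows[0]["emotion"])  # 1 mixed
```

Repairing only the one known malformation, rather than stripping arbitrary trailing characters, keeps genuinely corrupt responses failing loudly so they surface in inspection views like this one.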