Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
LLMs also give wrong answers due to post-training. In post-training, humans provide neural networks with a set of questions for which an answer is always available. As a result, LLMs are not exposed to null responses in the training data. Once human trainers begin presenting LLMs with questions where the correct answer is “I don’t know,” the models start responding with “I don’t know.”
YouTube · AI Moral Status · 2026-03-01T12:3…
Coding Result
Responsibility: none
Reasoning: unclear
Policy: unclear
Emotion: indifference
Coded at: 2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_Ugz8K7gIffnKEMKSnNB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyHhli5R6UqJ0qsfTJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgyL797_M71m5hQW-PN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugyillgr3oYJn_d_FnV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz5Juih4UDG8Yij1MN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgzUqHajhQLOQu10Pr54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwjLJk5tZcfPpq5q7N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugy9S4Kpf-J-OMVdrWd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugy9avnzUN7G8NPX67t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugx8zuQBCFBUGuXyjcJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"}
]
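The raw response above is a JSON array of per-comment codes. A minimal sketch of parsing and sanity-checking such a response, assuming the allowed value sets can be inferred from this page alone (they are not taken from the project's actual codebook):

```python
import json
from collections import Counter

# Two records copied from the raw response above, for illustration.
raw = (
    '[{"id":"ytc_Ugz8K7gIffnKEMKSnNB4AaABAg","responsibility":"none",'
    '"reasoning":"unclear","policy":"unclear","emotion":"indifference"},'
    '{"id":"ytc_Ugy9S4Kpf-J-OMVdrWd4AaABAg","responsibility":"developer",'
    '"reasoning":"deontological","policy":"unclear","emotion":"outrage"}]'
)

# Value sets observed on this page only; the real codebook may allow more.
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself"},
    "reasoning": {"unclear", "virtue", "consequentialist", "deontological"},
    "policy": {"unclear"},
    "emotion": {"indifference", "approval", "mixed", "fear", "outrage"},
}

records = json.loads(raw)
for rec in records:
    # Reject any code outside the observed value sets.
    for dim, allowed in ALLOWED.items():
        if rec[dim] not in allowed:
            raise ValueError(f"unexpected {dim} value: {rec[dim]!r}")

# Tally one dimension across the parsed records.
emotion_counts = Counter(rec["emotion"] for rec in records)
print(emotion_counts)  # → Counter({'indifference': 1, 'outrage': 1})
```

A check like this catches the common failure mode where the model returns a label outside the schema, before the codes are written into the results table.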