Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
One comment I read a few years back about alignment has stuck with me for a long time. We won't solve AI alignment, because we haven't even solved human alignment. So, even if you can make an AI aligned with *some* human expectations, it will never be aligned with all human expectations.
youtube AI Moral Status 2025-06-05T15:2… ♥ 1084
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       contractualist
Policy          none
Emotion         resignation
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgyyoIhm8_SZkVjvf7x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzwPbR4SWxF0o5Xk9h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugxg06bqbuPWP1OxfmN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxcMVMyqpobSUlr8Vx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyZpugrSwzENh8MIKh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwsor-inXwc8gRIVkR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxJWqd6yH4AwcJrRf14AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwAGQOlhtV0GQChAst4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugx7yKwaS22O8hdN2094AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy6OnZYb4jCazEQNHN4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"ban","emotion":"outrage"}
]