Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
For me, the big problem with AI or at lest the way its implemented is that it never accepts that its possible and ok to be wrong now and again. People in general, fully understand that they can be wrong and to a lesser extent will correct themself in the future. That never happens with AI.
youtube AI Moral Status 2025-08-28T10:2…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        deontological
Policy           unclear
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgyDz0Op1YtXU_OmSRd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxW95hUyR3-aJpjvkl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgygLh_Mw81ph_Pvrex4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzhqLHiGCZDyeeCB2l4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwtMMjeoLM_rHl09q14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwTxlvM1BXHFvB8x214AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxL3xftZalw2Q9QQF14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz-Su9EtHAgbllTXqN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxnWf7NeV3n4mFfMQV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwoQfOU1XlY_79aMO54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
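Because the model returns one JSON array for a whole batch, each comment's codes have to be matched back by `id`. A minimal sketch of that lookup in Python (the variable names are illustrative, not part of any tool shown here; only two records from the batch above are included for brevity):

```python
import json

# Raw batch output from the model: a JSON array of coding records,
# one object per comment, keyed by the comment's id.
raw_response = """
[
  {"id": "ytc_UgyDz0Op1YtXU_OmSRd4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxL3xftZalw2Q9QQF14AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"}
]
"""

# Index the records by comment id so a comment's codes can be looked up directly.
codes_by_id = {rec["id"]: rec for rec in json.loads(raw_response)}

# Retrieve the coding result for the comment shown above.
record = codes_by_id["ytc_UgxL3xftZalw2Q9QQF14AaABAg"]
print(record["responsibility"], record["reasoning"], record["emotion"])
# ai_itself deontological mixed
```

In practice the parse step should also be validated (the array must be well-formed JSON and every record must carry all four dimensions), since raw LLM output is not guaranteed to conform.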