Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I would correct -> we have a pretty good idea how AI works. On the contrary, when AI decides to do something, we have no easy means to see the reasoning. That's what the man meant when he said "we have no idea how AI works"
youtube AI Moral Status 2025-06-04T15:3…
Coding Result
Dimension        Value
---------------  --------------------------
Responsibility   none
Reasoning        unclear
Policy           unclear
Emotion          indifference
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugz1eawKb73rGrn3tdp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugw88kDvdiexcU6pIat4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugy3m1jtmLL8LoiUrkd4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwJebRFRcDW79KfK5F4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugwpk-DQr6a2M5LoxcV4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugy2tWz1RAlEQVqSFf94AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugz_B_ULID27Pv6Mlzx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugz4TMTm6_vY4kO6TnZ4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "indifference"},
  {"id": "ytc_Ugz3NIfB6hg_h2iNIZx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugz8oYDtXpBmHm9Csjl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "fear"}
]
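A raw response like the one above can be checked before its codes are stored. The sketch below is not the tool's actual code; the allowed values are inferred only from the records shown here and are likely incomplete, and the `validate` function name is illustrative.

```python
import json

# Allowed values per coding dimension, inferred from the records shown
# above (assumption: the real codebook may define more values).
ALLOWED = {
    "responsibility": {"none", "developer", "company", "government", "ai_itself"},
    "reasoning": {"unclear", "virtue", "deontological", "consequentialist"},
    "policy": {"unclear", "none", "regulate", "liability", "ban"},
    "emotion": {"indifference", "mixed", "fear", "outrage", "resignation"},
}

def validate(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and check each record's dimensions."""
    records = json.loads(raw)
    for rec in records:
        # Comment IDs in the dump all carry the ytc_ prefix.
        if not rec.get("id", "").startswith("ytc_"):
            raise ValueError(f"unexpected id: {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim}={rec.get(dim)!r}")
    return records
```

A record with an out-of-vocabulary value (e.g. an emotion the codebook does not define) raises a `ValueError` naming the offending comment ID and dimension, which makes malformed model output easy to spot before it reaches the results table.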