Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
17:10 this is what humans do. We learn things that are true at the time, then later on science proves other things to be true, yet we still believe what was taught at first. I'm speaking in generalization here. AI learns off the average of all human interaction correct? (Correct???).
youtube AI Moral Status 2026-03-19T11:4…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  ai_itself
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzM_r-IAAM_ZO8r3E14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxXnolJKKB9Ov6zuqJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgzzRoQGwAm3lYyvW-t4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwaRVkMByV2LC2Q-P54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzoXjsFWfPiNIShCPB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyNPypgDp0y7R4ByCV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxHiUBn5EHlYiyf0al4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugzo4kLJL-RulOwq8GF4AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgwqL3IHkT9t1r-iXpl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyyYFnwqUibgCJfG894AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"approval"}
]
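To inspect the coding for a single comment, the raw response can be parsed and filtered by comment id. This is a minimal sketch, assuming the batch response is a JSON array of records with the id / responsibility / reasoning / policy / emotion keys shown above; the helper name `coding_for` and the two-record sample are illustrative, not part of the actual pipeline.

```python
import json

# Illustrative excerpt of a raw batch response (shape taken from the
# response above; only two records are reproduced here for brevity).
raw_response = '''[
  {"id": "ytc_UgzM_r-IAAM_ZO8r3E14AaABAg", "responsibility": "ai_itself",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyNPypgDp0y7R4ByCV4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]'''

def coding_for(comment_id, response_text):
    """Return the coding record for one comment id, or None if absent."""
    records = json.loads(response_text)
    return next((r for r in records if r.get("id") == comment_id), None)

record = coding_for("ytc_UgzM_r-IAAM_ZO8r3E14AaABAg", raw_response)
print(record["responsibility"])  # ai_itself
```

Returning None for an unknown id (rather than raising) makes it easy to spot comments the model skipped in a batch.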