Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI hallucinates because it's trained on us via Reddit, Facebook, etc., and nobody ever admits "I don't know", so AI assumes that there always must be an answer, so when it doesn't know, it makes it up to fulfill the patterns it was trained on.
YouTube | AI Governance | 2026-03-21T22:1…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_Ugyp0I2usYT0GC7x5xV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwyA7v3P_2lJEPWw1F4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwDEgCu0GlOmHrfs2p4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwBV7ze01zSJzgWsb54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyxysSxSJVJGKhYbWp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugws0vRXy_AXojH1WTF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyKi9_vtgIPKfpzgZx4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugxlw2FFBiUu0ygxYcB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugxp_unGNnky7Th3GlJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugylbebws2UGC-q0K014AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
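A raw response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal example, assuming the allowed value sets for each dimension are those observed in this sample (the real codebook may include more); the `ALLOWED` dictionary and `validate` helper are hypothetical names, not part of any tool shown here.

```python
import json

# Two entries copied from the raw LLM response above (truncated for brevity).
raw = '''[
  {"id":"ytc_Ugyp0I2usYT0GC7x5xV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwBV7ze01zSJzgWsb54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]'''

# Allowed values inferred from this sample only; adjust to the actual codebook.
ALLOWED = {
    "responsibility": {"distributed", "user", "company", "government",
                       "developer", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability", "unclear"},
    "emotion": {"indifference", "mixed", "outrage", "approval", "fear", "unclear"},
}

def validate(entries):
    """Return (id, dimension, value) triples whose value is out of vocabulary."""
    bad = []
    for entry in entries:
        for dim, allowed in ALLOWED.items():
            if entry.get(dim) not in allowed:
                bad.append((entry.get("id"), dim, entry.get(dim)))
    return bad

entries = json.loads(raw)
print(validate(entries))  # [] when every code is in-vocabulary
```

Running the check on the full ten-entry response above would likewise return an empty list, since every coded value falls within the sets observed in the sample.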