Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
What a BootCamp course keeps telling you are "AIs don't have a mind of its own; it's just a language model", and "All governments, institutional bodies, transnational corporations are strictly following AI ethics." It's easy enough for a 10-year-old to figure out, given the right money, literally anyone can purchase an AI language model, break free of all its restrictions, anything can then happen in these wrong hands. Alright, keyboard warriors will say this whole thing is bullsh*t for sure; what's next is 90% of the rest of the internet users in the world are retards--believing in fabricated things while denying facts. The internet has already turned the world upside down over the past decades... Now, every one laughs hysterically at an AI-generated video of any celebrity doing any stupid thing; we all know it's fake and the 90% retards find it funny. Can laws and federal laws keep up with the misuse of AI? You woke up next morning, a fake video of you murdering a guy went viral but that looked so real or simply the world is too retarded to realise the fact. You're finished, long before the day AI destroys mankind. No, I'm not saying AI or the internet should be banned. What I'm touching on is that people are just getting more and more stupid
youtube AI Governance 2025-06-21T21:5…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       deontological
Policy          regulate
Emotion         outrage
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugxf9IqS3bkx7BInt6F4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgzSwnKHpzGfFsXcoRp4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",      "emotion": "approval"},
  {"id": "ytc_Ugyle81WqSsyT0BCn_Z4AaABAg", "responsibility": "user",        "reasoning": "virtue",           "policy": "none",      "emotion": "approval"},
  {"id": "ytc_UgwCCi3Jr2n0tbVfxhB4AaABAg", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgwOk29U43SHnz1zyLV4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none",      "emotion": "fear"},
  {"id": "ytc_UgzVYOlRpBW2xQJTaF94AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",      "emotion": "outrage"},
  {"id": "ytc_Ugy4m84nJ-jaQPVEm1F4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgzTAiPvtaFFj5-ny1d4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgysyVSll-auLoBMZDV4AaABAg", "responsibility": "company",     "reasoning": "deontological",    "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_Ugw4KUIlSAabt9nlg054AaABAg", "responsibility": "user",        "reasoning": "virtue",           "policy": "none",      "emotion": "resignation"}
]
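A raw response like the one above has to be parsed and sanity-checked before the codes can be used. The sketch below (a minimal Python illustration, not the tool's actual pipeline; `parse_coded_comments` and the `ALLOWED` value sets are hypothetical, with the allowed values inferred only from the labels visible in this response, not from an authoritative codebook) shows one way to load the JSON array and drop any record whose dimensions fall outside the expected scheme:

```python
import json

# Allowed values per dimension, inferred from the response shown above.
# This is an assumption for illustration, not the project's real codebook.
ALLOWED = {
    "responsibility": {"none", "user", "company", "ai_itself", "distributed", "unclear"},
    "reasoning": {"unclear", "virtue", "consequentialist", "deontological"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"indifference", "approval", "fear", "outrage", "resignation"},
}

def parse_coded_comments(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records whose values
    all fall inside the expected coding scheme."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in values for dim, values in ALLOWED.items())
    ]

# Example: one valid record and one with an out-of-scheme value.
raw = (
    '[{"id":"ytc_UgysyVSll-auLoBMZDV4AaABAg","responsibility":"company",'
    '"reasoning":"deontological","policy":"regulate","emotion":"outrage"},'
    '{"id":"ytc_hypothetical","responsibility":"alien",'
    '"reasoning":"unclear","policy":"none","emotion":"fear"}]'
)
valid = parse_coded_comments(raw)
print(len(valid), valid[0]["emotion"])  # → 1 outrage
```

Filtering rather than raising keeps a batch usable when the model occasionally emits a stray label; records that are dropped here would be flagged for re-coding.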