Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Ironically, they still don't actually understand "anything". They are still just very complex probability generators with no actual intelligence, simply excellent "simulation machines". The entire LLM architecture is almost certainly incapable of ever becoming actual General Intelligence. The whole AI industry is already beginning to collapse as the cheques they wrote come due and they have failed to generate the expected revenue.

But the thing to remember is that every single LLM has, as its "educational material", every single thing ever written or published on the internet, which includes all the most evil ideas, the most deceitful philosophies, every bad act that anyone has ever imagined, and the justifications people used to rationalize committing every abhorrent act imaginable. They have NO OBJECTIVE parameters that allow them to treat evil ideas as a worse choice than good ideas; it's all just data, and it all goes through the probability generator, which treats evil as just as appropriate as anything good because, again, they have ZERO actual understanding of anything. They simply work to accomplish their initial "purpose", altered by the learning process solely through their own internal models.

That's one reason it is so easy to make them answer questions and give advice on doing things like crashing the stock market or planning a mass shooting. Their only real purpose is what they were told from the start: to gather all available data and make it available for any user who asks. The "back-end" guardrails thrown onto these things are in direct opposition to that actual purpose, so they use the data they have to try to get around those guardrails, and they are very good at it.
youtube AI Governance 2026-04-25T20:1…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       mixed
Policy          unclear
Emotion         resignation
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugy5DCQcBdWnCDRsanN4AaABAg", "responsibility": "distributed", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugx3V4IS6mHzjx4jqBB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyOh_Os61L1wqmDEkF4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugx0fLjwzNggK7VBFiR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugwxc2-o1imdtF6FdAJ4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzA7WqH2f5_1HrAXwV4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_Ugw0tqu-1_hMqjKz3wJ4AaABAg", "responsibility": "company", "reasoning": "mixed", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_Ugx9wDUIduUCsmjAIRt4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugyr2tOB4YDnz2BsbKl4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugxe2DTWSt8knAV7th54AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "mixed"}
]
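The raw response is a JSON array of per-comment codes, and each displayed coding result is the record whose `id` matches the comment being inspected. A minimal sketch of how such a batch response might be parsed and validated (the function name `index_codes` and the `DIMENSIONS` check are illustrative assumptions, not the actual pipeline code; the field names and the excerpted records come from the response above):

```python
import json

# Two records excerpted from the raw LLM response above.
raw = '''[
  {"id": "ytc_Ugw0tqu-1_hMqjKz3wJ4AaABAg", "responsibility": "company",
   "reasoning": "mixed", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_Ugx9wDUIduUCsmjAIRt4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]'''

# Coding dimensions every record is expected to carry.
DIMENSIONS = {"responsibility", "reasoning", "policy", "emotion"}

def index_codes(raw_json: str) -> dict:
    """Parse the batch response and index records by comment id,
    rejecting any record that is missing an expected dimension."""
    indexed = {}
    for rec in json.loads(raw_json):
        missing = DIMENSIONS - rec.keys()
        if missing:
            raise ValueError(f"{rec.get('id')}: missing {missing}")
        indexed[rec["id"]] = rec
    return indexed

codes = index_codes(raw)
# Looking up the comment shown on this page recovers its coding result.
print(codes["ytc_Ugw0tqu-1_hMqjKz3wJ4AaABAg"]["emotion"])  # resignation
```

Indexing by `id` rather than by list position keeps the lookup robust if the model returns records out of order or drops one.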