Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
One of the dangers of social media is bad faith actors (or even good faith actors frankly) can sow fear, division and discontent amongst a very large number of people. It's a sad part of human psychology that we do such a poor job of defending ourselves against our own worst instincts and not simple to fix. The genie is out of the bottle. For what it is worth. ChatGPT is not an AI, as in it has no intelligence. It might seem intelligent, but all it is doing is statistical pattern matching between the last word and the next word. The fact that the results of this are so amazingly intelligent like is incredible, but it is not intelligence. Currently there is no mechanism or theory or experiment which can create or could plausibly create a true AI (or AGI as you might have heard it called). We are really no nearer this than we have ever been. So yes it might take 5 years, or it might take never (same as we might get hit by a moon sized asteroid in 5 years, or never). The full ramifications of LLMs won't be known for many, many years, much in the same way that the full ramifications of social media was not known for many years. It is not all doom and gloom though and even if it is, like the tides, it's going to happen so you might as well deal with it.
youtube Cross-Cultural 2025-10-17T14:1…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  user
Reasoning       contractualist
Policy          regulate
Emotion         resignation
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgwqaRhmbF8mgqAuQmx4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological",   "policy": "none",     "emotion": "outrage"},
  {"id": "ytc_UgwcqX9VTboFZQuXwCh4AaABAg", "responsibility": "none",      "reasoning": "mixed",           "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_Ugx2ueFrNoB02oBHIZl4AaABAg", "responsibility": "none",      "reasoning": "consequentialist","policy": "unclear",  "emotion": "fear"},
  {"id": "ytc_Ugzme9F50fNbgr-ddJN4AaABAg", "responsibility": "none",      "reasoning": "virtue",          "policy": "none",     "emotion": "approval"},
  {"id": "ytc_Ugy8hxd8BGdqms_RfvB4AaABAg", "responsibility": "none",      "reasoning": "consequentialist","policy": "none",     "emotion": "mixed"},
  {"id": "ytc_Ugzj8U8bqM6i45zib1F4AaABAg", "responsibility": "none",      "reasoning": "mixed",           "policy": "none",     "emotion": "resignation"},
  {"id": "ytc_Ugz6XaUcHY1XKZB4Qrt4AaABAg", "responsibility": "company",   "reasoning": "deontological",   "policy": "liability","emotion": "outrage"},
  {"id": "ytc_UgwfvrDzXADv9AIVFOh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist","policy": "ban",      "emotion": "fear"},
  {"id": "ytc_Ugz01mhwOyJKo7qgCRB4AaABAg", "responsibility": "user",      "reasoning": "contractualist",  "policy": "regulate", "emotion": "resignation"},
  {"id": "ytc_UgxBsVnIVxrJbxKj7bp4AaABAg", "responsibility": "none",      "reasoning": "unclear",         "policy": "none",     "emotion": "outrage"}
]
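A raw response like the one above can be parsed and the coding for a single comment looked up by its id. Below is a minimal Python sketch; the function name `coding_for` is illustrative (not part of any documented pipeline), and the inline JSON is truncated to the one entry that matches the Coding Result shown above:

```python
import json

# Truncated excerpt of the raw LLM response: only the entry for the
# comment whose Coding Result is displayed above (illustrative).
raw_llm_response = """[
  {"id": "ytc_Ugz01mhwOyJKo7qgCRB4AaABAg", "responsibility": "user",
   "reasoning": "contractualist", "policy": "regulate", "emotion": "resignation"}
]"""

def coding_for(raw: str, comment_id: str) -> dict:
    """Parse the model's JSON array and return the coding for one comment id."""
    entries = json.loads(raw)
    matches = [e for e in entries if e["id"] == comment_id]
    if len(matches) != 1:
        raise ValueError(
            f"expected exactly one coding for {comment_id}, got {len(matches)}"
        )
    return matches[0]

coding = coding_for(raw_llm_response, "ytc_Ugz01mhwOyJKo7qgCRB4AaABAg")
print(coding["responsibility"], coding["reasoning"],
      coding["policy"], coding["emotion"])
# prints: user contractualist regulate resignation
```

Checking that each id appears exactly once is a cheap guard against the model duplicating or dropping comments in a batch response.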