Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I find it hard to understand how anyone can think this is AGI or even close. These models are prediction engines - experts at continuing something initiated elsewhere, but nothing more. They never stop and reflect on something unrelated to the task at hand. Example: You give it an instruction, and it thinks solely about that. It doesn't step back, reason over related memories and events, or form a bigger picture that might question what it's being asked to do. Here's a test: "It's raining outside and the weather is gloomy. My friend George is taking a walk outside. What mood is he in?" The AI will say it can't know because some people like rain while others don't. Fair enough - but it's not stopping to think "this is a strange question" or "this doesn't match what you've asked before" or "are you testing me?" It has no such understanding. It's a continuation engine, period. So when 100K models talk to each other, of course it turns into a chaotic mess. This is getting out of control and could be extremely dangerous as models improve. Rogue actors can exploit this in scary, uncontrollable ways. What this is NOT: intelligence resembling anything close to human intelligence.
youtube 2026-02-08T03:3…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgyS_zR4OCMaQ4CGhex4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzVCB2pPVlGhaxx_zB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy135YvZo9K570wuH14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugy8Xhe7p1eOOju9dpN4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxBpxTpI6I7b_HA-bl4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxCkTCaKWBMqu6blHB4AaABAg", "responsibility": "none", "reasoning": "none", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxEjKl77rJK05qEKbl4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgweOzvssEKhmQzuZgB4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgzS1RT-cfM_1M0hRlB4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwYCnNsPvneChhRed94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"}
]
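The raw response is a JSON array in which each object carries one comment's coded dimensions (responsibility, reasoning, policy, emotion) keyed by comment id. A minimal Python sketch of how such a response could be parsed and looked up by id (the `coding_for` helper is hypothetical, not part of any tool shown here; the two sample entries are copied from the response above):

```python
import json

# Two sample entries taken verbatim from the raw LLM response above.
raw = """[
  {"id": "ytc_UgzVCB2pPVlGhaxx_zB4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy135YvZo9K570wuH14AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]"""

def coding_for(comment_id, raw_json):
    """Return the coded dimensions for one comment id, or None if absent."""
    for entry in json.loads(raw_json):
        if entry["id"] == comment_id:
            # Drop the id itself; keep only the coded dimensions.
            return {k: v for k, v in entry.items() if k != "id"}
    return None

result = coding_for("ytc_UgzVCB2pPVlGhaxx_zB4AaABAg", raw)
print(result)
# → {'responsibility': 'none', 'reasoning': 'consequentialist',
#    'policy': 'none', 'emotion': 'indifference'}
```

This mirrors how the "Coding Result" table above could be derived: find the entry whose id matches the displayed comment and render its four dimensions.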