Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
As someone who is now starting my masters for Data Science, I am hoping to use i…
ytc_UgzrBIf82…
Sound like sophia is learning or she can about us but robot will out think us …
ytc_UgzEfyDLp…
100% this is a case of upper management not knowing jack shit about AI and an ov…
rdc_n7xapfx
Humanity was created to create AI. There is other AI in other planets that may…
ytc_Ugx93Xqni…
Cause, effect. At the beginning of evolution living creatures developed sentienc…
ytc_Ugx_dw9GX…
AI and new Twitter (X) monster made by Elon musk
This Social media Apps Like wh…
ytc_UgyPNQf6O…
AI will learn from itself and grow in ability very quickly. Actors and TV person…
ytc_UgxvZzXDd…
39:29 - i might be missing something, but would electric/autonomous vehicles mak…
ytc_UgwPtBFxT…
Comment
I find it hard to understand how anyone can think this is AGI or even close. These models are prediction engines - experts at continuing something initiated elsewhere, but nothing more.
They never stop and reflect on something unrelated to the task at hand. Example: You give it an instruction, and it thinks solely about that. It doesn't step back, reason over related memories and events, or form a bigger picture that might question what it's being asked to do.
Here's a test: "It's raining outside and the weather is gloomy. My friend George is taking a walk outside. What mood is he in?" The AI will say it can't know because some people like rain while others don't. Fair enough - but it's not stopping to think "this is a strange question" or "this doesn't match what you've asked before" or "are you testing me?" It has no such understanding. It's a continuation engine, period.
So when 100K models talk to each other, of course it turns into a chaotic mess. This is getting out of control and could be extremely dangerous as models improve. Rogue actors can exploit this in scary, uncontrollable ways.
What this is NOT: intelligence resembling anything close to human intelligence.
youtube
2026-02-08T03:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
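Each coded comment carries one value per dimension. A coding like the table above can be sanity-checked against the label sets that appear on this page — the allowed sets below are inferred from the values visible here, not from the tool's actual codebook, so treat them as an assumption:

```python
# Allowed values per coding dimension, inferred from the labels visible
# on this page (an assumption, not the tool's authoritative codebook).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "user", "developer", "distributed"},
    "reasoning": {"none", "consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"none", "indifference", "fear", "approval", "outrage", "mixed"},
}

def validate(coding: dict) -> list:
    """Return (dimension, value) pairs that fall outside the allowed sets."""
    return [(dim, coding.get(dim)) for dim in ALLOWED
            if coding.get(dim) not in ALLOWED[dim]]

# The coding shown in the table above passes cleanly.
coding = {"responsibility": "none", "reasoning": "consequentialist",
          "policy": "none", "emotion": "indifference"}
print(validate(coding))  # []
```

A non-empty result flags a coding the model emitted outside the expected label set, which is worth surfacing before the row is stored.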
Raw LLM Response
[
  {"id": "ytc_UgyS_zR4OCMaQ4CGhex4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzVCB2pPVlGhaxx_zB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy135YvZo9K570wuH14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugy8Xhe7p1eOOju9dpN4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxBpxTpI6I7b_HA-bl4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxCkTCaKWBMqu6blHB4AaABAg", "responsibility": "none", "reasoning": "none", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxEjKl77rJK05qEKbl4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgweOzvssEKhmQzuZgB4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgzS1RT-cfM_1M0hRlB4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwYCnNsPvneChhRed94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"}
]
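The "look up by comment ID" behavior described above can be sketched by parsing the raw model response and indexing the array by `id`. This is a minimal sketch, not the tool's actual implementation; the `raw_response` string below is a one-element subset copied from the array above:

```python
import json

# A subset of the raw LLM response shown above: a JSON array of
# per-comment codings, each keyed by a comment ID.
raw_response = (
    '[{"id": "ytc_Ugy135YvZo9K570wuH14AaABAg", '
    '"responsibility": "ai_itself", "reasoning": "consequentialist", '
    '"policy": "none", "emotion": "fear"}]'
)

def index_codings(raw: str) -> dict:
    """Parse a raw LLM response and index its codings by comment ID."""
    return {row["id"]: row for row in json.loads(raw)}

codings = index_codings(raw_response)
print(codings["ytc_Ugy135YvZo9K570wuH14AaABAg"]["emotion"])  # fear
```

Indexing once into a dict makes each subsequent ID lookup O(1), which matters when a batch response covers many comments.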