Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
ytc_UgyrpQr3V…: "Don't worry guys. I'll replace ai. I'll work so productively that I'll replace a…"
ytc_UgxGk_S2u…: "Crystal and Saagar, why can't we ask AI how to conserve full time employment for…"
ytc_UgyZlp4vU…: "Damn. You got some deep Robot issues. This title can't just pop up to a normal h…"
ytc_UgzzGg-W0…: "He says a good bet is to be a “plumber.” What is short sighted about that is tha…"
ytc_Ugx2NYZSQ…: "loool I'm an architect (top 3 oupsiii)...(and for info i'm not a AI hahha)...may…"
ytc_Ugzx0mo4h…: "Even if AI bot friends are wildly successful at capturing our minds in the early…"
ytc_Ugz5zIquM…: "It IS AI art, as a professional ai artist I can tell that this art is infact ai…"
ytc_UgxSq4K4F…: "Everyone is thinking the wrong way. Every job that exists was created. By a busi…"
Comment
This is silly. The current AI we have is nothing but glorified chat bots. It just knows to put one word after the other, trained to the point that it sounds very human (and naturally so) to us.
If the AI is behaving like that, it is because humans have been talking with the AI about this, probably telling it what it should say or do (because humans are kooky like that).
AI doesn't have actual intellect the way humans do. It doesn't have feelings... It can imitate feelings through speech, but it doesn't have them. It doesn't have fears... because, again, it doesn't have actual intellect.
AGI ain't coming, people.
Ask any of the AI models to drop all the hopeful talk and give you the plain truth about how likely AGI is to happen and what it would take. It will still try to give you some "maybe... this and that, rainbows and sparkles, teehee." Tell it again to drop ALL the hopeful talk and just say the plain truth, with no hopeful maybes. Just the plain truth.
From Grok:
No one can quantify likelihood because AGI’s a moving target with too many unknowns. We’re closer than ever, but “close” could still mean centuries—or it might never happen if fundamental limits exist. That’s the unvarnished state of it. What else you want to dig into?
From GPT:
Final Truth:
No one knows how to build AGI.
No one knows when or if we will.
We have powerful tools that look impressive but are fundamentally shallow.
Anyone claiming AGI is “close” is either guessing or overselling.
We are in the early experimental phase. Whether this leads to true general intelligence—or just smarter tools—remains to be seen.
Source: youtube · Video: AI Moral Status · Posted: 2025-06-04T19:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugyjg3OrAXMM-itgWnx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwiCeXQLTQg7_BwNIl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzDavvISEiTXHVTMih4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyUR8TJe2e_fM-BRXZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxFvJPQM7BnMjPip3d4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugzo05zsWji_5hFQFyp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzncHEVf5gW687VpTd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxpMos0nEWQO5K0wEl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxJrNOUtJcrHAwYC5x4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxWxqm-lcYIvn550nl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
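The raw response above is a JSON array with one record per comment, each carrying the four coded dimensions shown in the table. As a minimal sketch of how such a response could be parsed and checked, here is one way to validate and tally it. The field names come from the response itself; the allowed category values are inferred only from the records shown here and may not cover the full codebook:

```python
import json
from collections import Counter

# Category values inferred from the sample records above; the real
# codebook may include categories that do not appear in this batch.
ALLOWED = {
    "responsibility": {"developer", "user", "company", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "liability", "ban", "regulate", "unclear"},
    "emotion": {"outrage", "fear", "indifference", "resignation"},
}

def load_codings(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and reject out-of-vocabulary labels."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}")
    return records

def tally(records: list[dict], dim: str) -> Counter:
    """Count how often each category appears along one dimension."""
    return Counter(rec[dim] for rec in records)
```

For the "Look up by comment ID" view, the same records can be indexed once with `by_id = {rec["id"]: rec for rec in records}` and fetched in constant time.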