Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or inspect one of the random samples below.
- `rdc_mrst1fd`: "An automobile can do every task a horse can do (transporting things), and do it …"
- `ytc_UgzAD5qwP…`: "No code tools and AI should be taken seriously. But this isn't going to replace …"
- `ytc_UgxEQa6U-…`: "Call center job won’t be replaced by AI. They will be replaced by someone overse…"
- `ytr_Ugxlb4O4S…`: "@Diggnuts Is that happening with AI, too? Is AI observing its thoughts? It's tw…"
- `rdc_o860t9u`: "Easiest boycott of my fucking life. I hate the way people have been using ChatGP…"
- `ytc_Ugx-m7z6o…`: "Ai isnt very friendly to humanity. I dont think id ask it for anything real. 😂…"
- `ytc_UgyyOJJa5…`: "Ah, yes. Evil dystopian robots. That's a great vibe for a robot marketing campai…"
- `ytr_UgwXY8qEg…`: "The beginning was definitely a joke, but the debate seemed genuine, we do have v…"
Comment
> Respectfully, Yoshua Bengio is projecting human flaws onto machines.
>
> Yes, AI can plan, improve, and maybe even “deceive” in a lab setting—but that doesn’t mean it wants to. AI doesn’t want anything. It has no ego, no hunger for power, no secret agenda. It’s a tool—built by humans, shaped by humans, and yes, controllable by humans.
>
> Fearing AI’s “agency” is like fearing calculators will one day cheat on your taxes. The real danger isn’t the AI—it’s the humans who misuse it.
>
> Give AI purpose, not paranoia.
> Build alignment, don’t pause progress.
> This isn’t doomsday—it’s evolution. Let’s use AI to fix the chaos we created, not be afraid it’ll develop ours.
Source: youtube · AI Responsibility · 2025-05-21T23:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
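
Every coded comment carries these same four dimensions, presumably drawn from a fixed code book. As a minimal sketch of the record shape, here is a Python `TypedDict` whose field names mirror the raw JSON below; the value sets listed are only those visible on this page, not necessarily the complete code book:

```python
from typing import TypedDict

# Values observed on this page; the actual code book may define more.
RESPONSIBILITY = {"developer", "company", "user", "government", "ai_itself", "none"}
REASONING = {"deontological", "consequentialist", "virtue", "unclear"}
POLICY = {"industry_self", "regulate", "liability", "ban", "none", "unclear"}
EMOTION = {"approval", "fear", "outrage", "resignation", "indifference", "mixed"}

class CodedComment(TypedDict):
    """One coded comment, mirroring the objects in the raw LLM response."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
```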
Raw LLM Response
```json
[
  {"id":"ytc_UgxEOH4zCUd8OP4iTXJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgyLOXFZA65u6-iPqgl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgyxJyoFqKvWyJ6mHcV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwrrosIbOwhKpXu-VR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxXeJY3_aFluZi-6i94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgztYlYEXPZQoVxHvu94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzHLPlrTf92OXQcaFh4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"mixed"},
  {"id":"ytc_Ugy8lOyACq6T--mcZZt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwIHmqZXfaNLsnUt_F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugy77zPTiCtWpKyGwC94AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}
]
```
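
To support the ID lookup described at the top of the page, one plausible approach (a sketch assuming every raw response is a JSON array like the one above, not the tool's actual implementation) is to parse the array and key the records by `id`:

```python
import json

# A single record from the response above, trimmed for brevity.
RAW_RESPONSE = """[
  {"id": "ytc_UgyLOXFZA65u6-iPqgl4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "industry_self", "emotion": "approval"}
]"""

def index_by_id(raw: str) -> dict[str, dict]:
    """Parse a raw LLM response (a JSON array of coded comments)
    and key the records by comment ID for direct lookup."""
    return {record["id"]: record for record in json.loads(raw)}

coded = index_by_id(RAW_RESPONSE)
print(coded["ytc_UgyLOXFZA65u6-iPqgl4AaABAg"]["policy"])  # -> industry_self
```

Keying by ID also makes it straightforward to join a record back to the comment it describes; the record used here matches the coding-result table shown for the displayed comment.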