Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- rdc_oi1vi66: "I think people are missing what’s really going on. There is a belief you can cu…"
- ytc_UgzHC9Cr8…: "Why does someone need to say this every single week in a podcast? I'm starting t…"
- ytc_Ugwg5Du35…: "Everything is inherently derivative. Humans are exactly as much "plagiarism mach…"
- rdc_m9he2m1: "They're justifying an eventual ban on Chinese AI use in the US... As long as the…"
- ytc_UgwJQRnYN…: "I provided a question as input on a LLM console, it came back with false informa…"
- ytr_UgzJFOOVn…: "It's not about the money. Art is a holy grail of mankind, the mirror of our soul…"
- ytc_UgyO-234r…: "I know I'm not the first person to ask this question , but I must ask it. How m…"
- ytc_UgzCmuxHB…: "Good talk, really apricate Dr. Suleyman's perspective on AI. Maybe I'm bias a…"
Comment
Everybody suddenly gets scifi brain whenever LLMs that are being called AI comes up. The way these things will destroy humanity is through ecological devastation and upending labor and therefore the global economy. We're not even at a point where we can talk with scientific certainty about human consciousness let alone intelligence but too many people are convinced that we'll be able to build artificial intelligence. Tech oligarchs are embedding "AI" in everything so we can train it for them and they can have their AI trained robots as their servants when they lock themselves away in their bunkers after destroying the planet and economy. That way they won't have to worry about human servants and their families. It won't work, but it doesn't mean that they aren't delusional enough to try. Stop talking about "AI" in scifi terms and ground yourself in reality when analyzing what's happening.
| Field | Value |
|---|---|
| Platform | youtube |
| Source | Viral AI Reaction |
| Posted | 2025-11-05T14:3… |
| Likes | 1 |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_UgzdXA3SIdZtgnMZv8N4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyWVsegOqRjIBJZD2F4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugzh4EZz0mMFczUT8-d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwE1Y4D1gUWiO3JFOt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwoeGC_0F2DWGUKu794AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyvvMli92sz4GhPYtx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxvBMXFxahCTVGaafl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyGesigDnLk_DNdzoV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxuPmfVMuiYXiLbtel4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxPD-JUEfJXUdVzwVd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}]
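The raw response above is a flat JSON array, one record per comment, keyed by a platform-prefixed comment ID (e.g. `ytc_…`, `rdc_…`). Looking up the coded dimensions for a given comment can be sketched in Python; the `codes` index and the two inlined sample records are illustrative only, copied from the array above:

```python
import json

# Two records copied from the raw LLM response shown above.
# A real run would load the full array from the model output.
raw = '''[
{"id":"ytc_UgzdXA3SIdZtgnMZv8N4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyvvMli92sz4GhPYtx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]'''

# Index records by comment ID so any coded comment can be
# inspected in O(1), mirroring the lookup-by-ID view above.
codes = {rec["id"]: rec for rec in json.loads(raw)}

rec = codes["ytc_UgzdXA3SIdZtgnMZv8N4AaABAg"]
print(rec["policy"], rec["emotion"])  # → regulate outrage
```

The coded dimensions (responsibility, reasoning, policy, emotion) then slot directly into a "Coding Result" table like the one shown for the selected comment.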