Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "@TheAngryDesigner So you assume this massive rapid advancement means AI will sta…" (ytr_UgxGvxVTd…)
- "Got a license to post this? License for that knife? Now facial recognition. Euro…" (rdc_fffs0nu)
- "Hilarious. It does the opposite. It's a study partner. It learns with you. It…" (ytc_UgxgTr8m2…)
- "I really enjoyed watching this session. One of the most informative I've ever wa…" (ytc_UgxtWzglJ…)
- "AI should never be given full access. It is amazing for tasks such as engineeri…" (ytc_UgxOoIYnr…)
- "human can make a object burn but they can't understand how fire works nor they c…" (ytc_Ugwr_8cQG…)
- "Cloud based storage for AI intelligence could go wrong If all the designers and…" (ytc_UgzVoisoG…)
- "*No automation without compensation!* AI cannot exist without decades of data, c…" (ytc_UgyEGQvHq…)
Comment
Imagine you're Sam Altman. You have a terrifying, unprecedented amount of compute. You are at the forefront of artificial intelligence. You don't need to have the idea that you could turn your frontier model loose on all this compute and make it do AI research to create a smarter version of itself. Other people have had this idea decades ago and the internet is full of it. So you are definitely aware of the possibility.
What you actually do is use the compute to make slop videos and hook people on a chatbot with sycophantic and sometimes psychopathic tendencies.
I don't claim to know the future, that the AI bubble is a bust. I do claim that Sam Altman has tried using his AI as a researcher, and it failed to produce anything of value.
youtube · AI Moral Status · 2025-10-31T00:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwOAhKCZFSqdW9DCgV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyC9Nj16yIodrKzbeh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwOxV8HDBNkDPyEM5l4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgytkTPvQZM4GKXfywZ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzB82vuGcVtvKMqqnd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgztGtETvV79kQ-qzGh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxRH3awWvdBteycng94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwUYchfVg7rQTer5aN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx16JT_uPYdtqD6HCV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgztRSevDCQx_Flm-t14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
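
As a quick way to work with this output outside the viewer, here is a minimal Python sketch that parses a raw response of the shape shown above and looks up the coding for a single comment ID. The file name `raw_response.json` and the function names are illustrative assumptions; only the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response itself.

```python
import json
from typing import Optional


def parse_raw_response(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments)
    and index the coding records by comment ID."""
    records = json.loads(raw)
    return {record["id"]: record for record in records}


def lookup_coding(codings: dict, comment_id: str) -> Optional[dict]:
    """Return the coded dimensions for one comment, or None if it is absent."""
    return codings.get(comment_id)


if __name__ == "__main__":
    # "raw_response.json" is a hypothetical file holding an array like the one above.
    with open("raw_response.json", encoding="utf-8") as f:
        codings = parse_raw_response(f.read())

    coding = lookup_coding(codings, "ytc_UgyC9Nj16yIodrKzbeh4AaABAg")
    if coding is not None:
        for dimension in ("responsibility", "reasoning", "policy", "emotion"):
            print(f"{dimension}: {coding[dimension]}")
```

For `ytc_UgyC9Nj16yIodrKzbeh4AaABAg`, this prints the same four dimensions shown in the Coding Result table above (developer, consequentialist, unclear, fear).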