Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- Experts? What experts? Experts in the "Science of the Perceptions and Beliefs of… (`ytr_UgxawWoIX…`)
- American and Western countries are probably idiots thats why they are promoting … (`ytc_UgxKa3jjo…`)
- AI dipshits have to hide in circlejerks. They know damn well what they do is wro… (`ytc_UgzBkG9Mb…`)
- “ If only one life is saved, it will be worth it.” Expect that argument.… (`rdc_eu6ihns`)
- So if I want to swear at it, it will come up with some meaningful shit, and not … (`ytc_Ugyx3G3N2…`)
- My art can be bad but im not going to be replaced by an A.I making arts.… (`ytc_Ugy2HkUdG…`)
- There are no way these predictions are remotely realistic and you're likely bein… (`ytc_UgyMmZofW…`)
- Growing up the very late 1990s, I believe I may see a point during my lifetime i… (`ytc_UgyqPuBCH…`)
Comment
Dr. Roman Yampolskiy warns that AI could lead to human extinction by 2027-2030. He predicts 99% unemployment as AI and humanoid robots replace most jobs, leaving only roles where humans are specifically preferred. He believes superintelligence is uncontrollable, argues we're living in a simulation, and advocates for halting AGI development.
youtube · AI Governance · 2025-09-04T09:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgwErjXqr7IV3balt3J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxMh1uKxQjxumFQZqd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugzrrc6o_aV_mRU5L4V4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgwYCuJ236aQhX_tVc14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxAfAcbg5tfAvXj1Z14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzw8PY18ESht5hNr2Z4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgwclleEd3yahbBFhAR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxrOue4_WYejy_iyjV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyTCvvuqSpvu9BSrF94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyQ6ozb-c1ZcakVVQd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
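A raw response like the one above can be parsed into per-comment coded records and sanity-checked before lookup. The sketch below is a minimal illustration, not the page's actual implementation: the allowed label sets are inferred only from the values visible on this page, so the real codebook may define additional options.

```python
import json

# Allowed labels per coding dimension, inferred from values visible on this
# page (assumption: the real codebook may include more options).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "user", "distributed"},
    "reasoning": {"unclear", "consequentialist", "virtue", "deontological"},
    "policy": {"unclear", "ban", "industry_self", "none", "regulate"},
    "emotion": {"indifference", "fear", "approval", "resignation", "outrage"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: labels},
    raising ValueError on any dimension value outside ALLOWED."""
    coded = {}
    for rec in json.loads(raw):
        labels = {dim: rec[dim] for dim in ALLOWED}
        for dim, value in labels.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{rec['id']}: unexpected {dim}={value!r}")
        coded[rec["id"]] = labels
    return coded

# Usage: look up one coded comment by its ID.
raw = '''[
 {"id":"ytc_UgwErjXqr7IV3balt3J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgxMh1uKxQjxumFQZqd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]'''
coded = parse_coding_response(raw)
print(coded["ytc_UgxMh1uKxQjxumFQZqd4AaABAg"]["policy"])  # ban
```

Validating at parse time means a malformed or off-codebook model response fails loudly here rather than silently appearing as a blank cell in the Coding Result table.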