Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
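Under the hood, a lookup like this can be a simple scan over the stored records. A minimal sketch, assuming each coded comment is stored as one JSON object per line in a file named `coded_comments.jsonl` (the file name and record layout are assumptions for illustration, not the project's actual storage):

```python
import json

def lookup(comment_id: str, path: str = "coded_comments.jsonl") -> dict | None:
    """Scan stored records for one comment ID (hypothetical storage layout)."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            if record.get("id") == comment_id:
                return record
    return None

# e.g. lookup("ytc_UgyAx2Qpr6NczK02Snl4AaABAg") would return the record inspected below
```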
Random samples — click to inspect
- "im disabled, physically. I have periodic paralysis and eds(as well as some other…" (`ytc_Ugxjah5wa…`)
- "This has been going on for a few years now. This is just the introduction to th…" (`ytr_Ugwm1E6xx…`)
- "AI art generate random. Traditional art still required for specific shapes. Comb…" (`ytc_Ugz8FMmz8…`)
- "Letting AI drive any vehicle, let alone trucks, except in extremely controlled a…" (`ytc_Ugza3piS6…`)
- "Nudify AI needs to be banned. Like, not only it's sick to have so many sick peop…" (`ytc_UgyDaneOb…`)
- "This is something I gave a lot of thought over the past 3-4 years and I did inde…" (`ytc_Ugx9QgIJi…`)
- "No need to worry Yoshua! The other humans will end humanity with AI before AI ne…" (`ytc_Ugwkipwqc…`)
- "Not good enough at art in the way you are discussing but am a writer and trying …" (`ytc_UgxGzTR-D…`)
Comment
> You don't ask the pertinent question. What is the way in which A.I. will physically kill the humans? Does one rogue A.I. convince a very special human with the power to do something, then go out to follow specific orders from the A.I.? Or does a company start building robots and downloading A.I. programs into their "brains" who then coordinate destroying the humans? I need to know the physical parameters of how this "kill all humans" thing happens.
youtube · AI Governance · 2026-02-25T06:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
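The row above is taken from the single entry in the raw batch response below whose `id` matches this comment. A minimal extraction sketch, assuming the response parses cleanly as a JSON array (the function name is hypothetical):

```python
import json

def coding_for(raw_response: str, comment_id: str) -> dict | None:
    """Pick one comment's coding out of a batch LLM response."""
    for entry in json.loads(raw_response):
        if entry.get("id") == comment_id:
            return entry
    return None  # model skipped this comment or returned an unexpected id
```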
Raw LLM Response
[
{"id":"ytc_UgxuSEuDMEC0tjiJE9V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyTCxNwGq_lgqDh3Kx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyAx2Qpr6NczK02Snl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxNxLLZ3dY_a9Gt6Dp4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgzinWPBk9jl7p9eQvZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxX6Vu6aMBrr_4GDv14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzdCq2DGG1FE1ibytx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwWW2Il3p8Faim5A6t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzsHTOyhAvQG85uZP54AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx849mvh7WM2eMtkQ54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"approval"}
]
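Because the model generates this JSON freely, a cheap downstream check is to confirm every entry carries a known ID prefix and only expected labels. A sketch, using only the category values visible in this sample; the project's actual codebook may allow more (assumption):

```python
import json

# Values observed in this sample only; the real codebook may be larger.
ALLOWED = {
    "responsibility": {"company", "developer", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"unclear", "regulate", "ban", "liability"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "approval"},
}

def validate_batch(raw_response: str) -> list[str]:
    """Return a list of problems found in one raw batch response."""
    problems = []
    for i, entry in enumerate(json.loads(raw_response)):
        if not str(entry.get("id", "")).startswith(("ytc_", "ytr_")):
            problems.append(f"entry {i}: unexpected id {entry.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if entry.get(dim) not in allowed:
                problems.append(f"entry {i}: bad {dim} value {entry.get(dim)!r}")
    return problems
```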