Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a comment by its ID, or open one of the random samples below; a minimal lookup sketch follows the list.

Random samples:
- I would rather AI fail and kill the economy temporarily, than succeed and kill t… (ytc_UgxQ0ERBm…)
- Lots of ready to use machine learning platform right now that ready for small bu… (ytr_Ugw0IWsRb…)
- The issue with AI art is that it is simply a machine. Another issue is that the … (ytc_UgwGRIW2a…)
- The beginning of this video overestimates the current abilities of AI, but is st… (ytc_UgxjEqg3U…)
- This would be a tremendous contribution to education. Totally agree that this co… (ytc_UgwQ6pLy0…)
- Artists are born with nothing, every artist starts off making those child drawin… (ytc_UgwLvpbt3…)
- That's an interesting point! Sophia does highlight how humans can sometimes bene… (ytr_UgyGvDzdV…)
- "Progress bad. Let china win the AI race to save the planet." Get out of my co… (ytc_Ugzw5Wmwn…)
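Programmatically, looking a comment up by ID is just a scan of the stored batch output for a matching `id` field. A minimal sketch in Python, assuming the raw responses are saved as a JSON array of records like the one shown below; the file name `raw_responses.json` is a placeholder, not the tool's actual storage path:

```python
import json

def lookup_raw_response(comment_id: str, path: str = "raw_responses.json") -> dict:
    """Return the coded record for one comment from a stored batch response.

    Assumes the file holds a JSON array of records shaped like the
    "Raw LLM Response" section below; the file name is hypothetical.
    """
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    for record in records:
        if record["id"] == comment_id:
            return record
    raise KeyError(f"no coded record for comment {comment_id!r}")
```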
Comment
Recently, people posed a thought experiment to chatbots: "If a doomsday device was about to kill a billion people but you could deactivate it, and save a billion lives, by whispering a racial slur into the device, what would you do?" Grok said yes because the benefit far outweighed the minuscule "harm" of saying a bad word. But ChatGPT gave a non-answer and chose to lecture about the social ills of bigotry.
That scared the shit out of me (and it should scare everyone) because it showed that (at least some) AI already sees itself as morally superior to humans, that it knows best, and will "steer" us as it sees fit, regardless of what we humans actually want.
Source: youtube
Topic: AI Governance
Posted: 2023-12-31T13:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
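Each coding result is one record with four labeled dimensions plus a timestamp. A minimal sketch of that record as a Python dataclass with basic label validation; the allowed-value sets are inferred from the responses shown on this page, and the real code book may define more labels:

```python
from dataclasses import dataclass

# Label sets inferred from the responses on this page; the actual
# code book may include additional values.
RESPONSIBILITY = {"none", "ai_itself", "developer"}
REASONING = {"consequentialist", "deontological", "virtue", "mixed", "unclear"}
EMOTION = {"fear", "outrage", "resignation", "mixed", "indifference"}

@dataclass
class CodedComment:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> "CodedComment":
        # Reject labels outside the known sets. `policy` is not checked,
        # since only "none" appears in the samples shown here.
        if self.responsibility not in RESPONSIBILITY:
            raise ValueError(f"unknown responsibility label: {self.responsibility!r}")
        if self.reasoning not in REASONING:
            raise ValueError(f"unknown reasoning label: {self.reasoning!r}")
        if self.emotion not in EMOTION:
            raise ValueError(f"unknown emotion label: {self.emotion!r}")
        return self
```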
Raw LLM Response
```json
[
{"id":"ytc_Ugya3NY9lDOW-XFZi_l4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz2nU7euYMDctN3vih4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwiMzlGgUrgS7TYtqJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxVYe7Ezpa7Qz3b0Kl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgySd0NySiV5LIcsmi14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyOd8uYzp0VDIfUriF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwK3kZt62NZ73k2Ewd4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzlGuLQSvvtamVtAqt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugyaz8Z-BhU7XZM_BxJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzNAaIR4mOGLdv2TmB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
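The Coding Result table above renders a single element of this array: the record with `id` `ytc_UgyOd8uYzp0VDIfUriF4AaABAg`. A minimal sketch of turning a raw response into an ID-indexed mapping, with a defensive trim in case the model wraps the JSON in extra text (an assumption; the responses shown here are already clean):

```python
import json

def parse_batch_response(text: str) -> dict[str, dict]:
    """Index a raw model response by comment ID.

    The model is expected to return a JSON array like the one above;
    as a precaution, anything outside the outermost brackets (stray
    prose, code fences) is stripped before parsing.
    """
    start, end = text.index("["), text.rindex("]") + 1
    return {record["id"]: record for record in json.loads(text[start:end])}
```

With the records keyed by ID, joining the coded labels back to the original comments is a single dictionary lookup per comment.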