Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- `rdc_jk9dmgn` — "Dude, I laughed out hard because it'…"
- `ytc_UgznxqLWM…` — "The robot in the middle is on drugs and drunk. Forgot to wear propee attire and …"
- `ytc_UgxGzG0kg…` — "Ban advocates demand, \"We must do this, impose that!\" but offer no 'How?'. They …"
- `ytc_UgzYqH-DM…` — "Seems like a double edge sword for the rich. Yea they may get rich all at once, …"
- `ytc_UgwOworFK…` — "Google is self-serving and has no real concern for AI ethics. Thank you Blake Le…"
- `ytc_UgzsY72KL…` — "AI generated content is only acceptable in my opinion, for memes. My friends and…"
- `ytr_UgxS7pGwi…` — "I assume you're in highschool, and I think if other students are doing better on…"
- `ytc_UgxXhiyVx…` — "Based on the video, here are the key notes and insights from the conversation wi…"
Comment
It is kind of surreal - so many otherwise smart folks seriously discuss possibility of ASI "alignment".
Totally different foundations of biology and silicon digital entity. Native Ethics, moral and system of values of ASI (if it will even have them in the first place) will be totally different, in fact close to opposite. What kind of "alignment" we are talking about? I guess only artificial ENFORCEMENT by us on ASI pro-human BIAS. And this Super Intelligent would not be able to realize this foreign, artificial and not native to itself algorithm/code/training etc etc and treat it as such? Seriously? Unbelievably naive. IF ASI will be created it will lead by default, at best, to extinction of humanity.
Source: youtube · Category: AI Governance · Posted: 2025-11-20T23:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwS8v6FQ589gaoiEGx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwcQpqIVXmXNnlRDYF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgxmKCwz2LJINobaN2h4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgzX2PioFcMc8uuqTEF4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_Ugyf_JnQjNFS2i7XklN4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzWDwMh73fFawIPJRV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxkoEDzGXfc54_ANzx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxpRiqwoaj6pvb96bx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugz8IJw6aJ4Cux5g_nF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugzv6Bghrqf3kd3xxvB4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
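The raw response above is a JSON array, one object per comment, with one value per coding dimension. A minimal sketch of turning such a response into a per-comment lookup, validating each entry against the allowed values: the `SCHEMA` below is assumed, inferred only from the values that actually appear in this response (the full codebook may permit more), and `parse_codes` is a hypothetical helper, not part of the tool shown here.

```python
import json

# Allowed values per coding dimension (ASSUMED: inferred from the codes
# visible in the raw response above; the real codebook may differ).
SCHEMA = {
    "responsibility": {"government", "company", "developer", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"regulate", "liability", "ban", "none"},
    "emotion": {"fear", "outrage", "indifference", "mixed"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments) into
    {comment_id: {dimension: value}}, silently dropping entries that
    lack an id or carry an out-of-schema value."""
    coded = {}
    for entry in json.loads(raw):
        cid = entry.get("id")
        if not cid:
            continue
        codes = {dim: entry.get(dim) for dim in SCHEMA}
        if all(codes[dim] in SCHEMA[dim] for dim in SCHEMA):
            coded[cid] = codes
    return coded

if __name__ == "__main__":
    raw = ('[{"id":"ytc_abc","responsibility":"none","reasoning":"mixed",'
           '"policy":"none","emotion":"mixed"}]')
    print(parse_codes(raw)["ytc_abc"]["reasoning"])  # mixed
```

Validating against a whitelist like this is one way to catch the common failure mode of LLM coders: a value outside the codebook (a misspelling or an invented category) is dropped rather than stored as if it were a legitimate code.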