Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Relax, this ai is 0.1 alpha version... When we get to beta and first 1.0 AI then…" (ytc_UgyrTzXLD…)
- "The argument against your stance is many artists embracing AI and using it as to…" (ytc_UgwR1imQG…)
- "why cant ai content have a watermark that clearly identifies it as such, they pu…" (ytc_Ugzbhci52…)
- "You could actually read at least 30% of the article before commenting, he was se…" (rdc_lu68v8o)
- "OpenAI has explicitly proved that they do this if you punish them based on infor…" (rdc_mzx4m9p)
- "Fox interviewed "isis" "antifa" and "hamas" and it was all the same white guy in…" (ytc_UgzcwdHIJ…)
- "ai is such a bubble about to blow. and clueless CEOs dont know jack shit and are…" (ytc_Ugw21wb57…)
- "His voice sounds weird when he's saying "haters" you can hear the glitchyness. I…" (ytc_UgwtjemSb…)
Comment
It is a fascinating dissonance to hear a 50-year veteran of AI express shock that his field is actually succeeding. It recalls the phenomenon of Nobel laureates who drift into incoherence later in life (like Montagnier on water memory). Did he really not believe his own research would eventually work? Sci-fi authors identified the alignment problem decades ago; it shouldn't have taken a Berkeley professor until 2013 to have this 'epiphany.'
More critically, the 'catastrophe' narrative betrays a massive status quo bias. Russell worries about the loss of human purpose, yet explicitly admits that for many, this purpose currently consists of 'repetitive work in windowless boxes'. For the billions already living in economic hell (facing poverty, hunger, and meaningless drudgery) the disruption of this system isn't an existential risk; it's a necessity. You can only fear the end of the world if the current world is actually working for you.
youtube · AI Governance · 2025-12-04T20:0… · ♥ 42
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugzh-YnBzznNZ1qWyQ14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwRQFKmdG19FO8-AD94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyW-LL40QAOwv9pVQ54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwgI5BFRrLZ996_qmh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxPhQDdOKhYQvLluoN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugwv4OkyAKs3UvA-eaV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugz4AjPalg3rFU87gH14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzncXg6mJHXTVaCS414AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxFxasyzxX1MBY0FJ54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugx9dnj66M0hUfEUgrN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}
]
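The raw response above is a plain JSON array, one record per comment, with the four coding dimensions from the Coding Result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and indexed by comment ID is shown below; this is not the tool's actual code, and the allowed-value sets are assumptions inferred from the values visible in this dump, not a documented codebook.

```python
import json

# Two records copied verbatim from the raw LLM response above.
raw_response = """
[
 {"id":"ytc_UgzncXg6mJHXTVaCS414AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_Ugx9dnj66M0hUfEUgrN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}
]
"""

# Assumed label sets, inferred only from values seen in this dump.
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "mixed", "resignation", "unclear"},
}

def index_codes(text: str) -> dict:
    """Parse a raw coding response and return {comment_id: codes},
    rejecting any dimension value outside the assumed label sets."""
    codes = {}
    for record in json.loads(text):
        rid = record.pop("id")
        for dim, value in record.items():
            if value not in ALLOWED.get(dim, set()):
                raise ValueError(f"{rid}: unexpected {dim}={value!r}")
        codes[rid] = record
    return codes

codes = index_codes(raw_response)
print(codes["ytc_UgzncXg6mJHXTVaCS414AaABAg"]["emotion"])  # mixed
```

Looking up a record by its comment ID, as the sketch does in the last lines, mirrors the "Look up by comment ID" control in the panel above.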