Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- “Isaac Asimov: In the future, robots will take care of menial labor and dangerous…” (ytc_UgzeZni9S…)
- “Elon's speeches of AI over the years and his set of actions has ironically accel…” (ytc_UgxN_3oBh…)
- “Actually, it's not just Artificial Intelligence (AI) that should be ruled out wh…” (ytc_Ugx_sYu5l…)
- “AI can kill is by making a virus very contagious, very lethal, and very slow…” (ytc_UgyXAjcGS…)
- “It reminds me when I was asking Alexa to "please play a song" or "please stop" a…” (ytc_UgzFGz8yK…)
- “It’s fake he’s actually fighting a guy I seen the original fight some how they c…” (ytc_UgxXougoy…)
- “😒 What we got is LLM, not AI. You won't get the most out of it anyway let alone …” (ytc_Ugzj_Evxp…)
- “I’m genuinely at the point where I think we just need to straight up ban AI and …” (ytc_UgyQ9nTM8…)
Comment
What I got out of this: This talk fails to acknowledge that A.I. is already being coded to reflect the political and social engineering biases of the humans that create it (e.g. the bots that Facebook routinely deploys to look for ToS violations), that those biases are more Left-leaning than Right-leaning, more authoritarian than libertarian, and reflect its own sorts of prejudice. While she worries about results being "too white", many of us are left rolling our eyes when we get a social media warn over something that was clearly a joke that the enforcement bots didn't understand.
youtube
AI Responsibility
2023-12-27T23:2…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
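A coded record like the one above can be checked against the dimension vocabularies before it is stored. The allowed-value sets below are inferred from the values visible on this page (they are an assumption, not an official schema), and `validate` is a hypothetical helper:

```python
# Sketch: validate one coded record against the dimension values seen on this
# page. ALLOWED is inferred from the displayed output, not an official schema.
ALLOWED = {
    "responsibility": {"developer", "company", "government", "society",
                       "user", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "contractualist",
                  "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "ban", "none"},
    "emotion": {"outrage", "fear", "resignation", "indifference", "mixed"},
}

def validate(record: dict) -> list[str]:
    """Return the dimension names whose value falls outside the vocabulary."""
    return [dim for dim, allowed in ALLOWED.items()
            if record.get(dim) not in allowed]

# The record shown in the Coding Result table above passes cleanly.
bad = validate({"responsibility": "developer", "reasoning": "deontological",
                "policy": "regulate", "emotion": "outrage"})
print(bad)  # → []
```

Flagging out-of-vocabulary values rather than raising immediately lets a batch run report every malformed record at once.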
Raw LLM Response
[
{"id":"ytc_Ugx0qKsFULOndt_U5wF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwE1scVjZgJ5TXDR154AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwU862zyjIm5GGYxn54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyDamW5u2ScgC8juh94AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgzSwdbGP3DP3afLTmF4AaABAg","responsibility":"society","reasoning":"contractualist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzIB-ShR0rb8FCAqtN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugxg-RvI2-dkZXb4Br54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwNF_CgfRUTSZBoGZ54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy_b3LYcP2Xre4bZeh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugwpe4ZnICoqyIwZJZR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
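The raw response above is a JSON array of per-comment codings, so the "look up by comment ID" view can be served by parsing it once and indexing by `id`. A minimal sketch, using a two-row stand-in for the full batch output:

```python
import json

# Sketch: parse a raw LLM batch response like the one above and index it by
# comment ID. `raw` is a shortened stand-in for the real model output.
raw = '''[
 {"id":"ytc_Ugx0qKsFULOndt_U5wF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
 {"id":"ytc_Ugxg-RvI2-dkZXb4Br54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]'''

rows = json.loads(raw)
by_id = {row["id"]: row for row in rows}

# Look up the coding for a single comment by its full ID.
coding = by_id["ytc_Ugxg-RvI2-dkZXb4Br54AaABAg"]
print(coding["responsibility"], coding["policy"])  # → developer regulate
```

Indexing into a dict makes each lookup O(1), which matters when the same batch file backs many inspection clicks.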