Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "I mean, I see nothing but benefits at worst we get a peaceful and painless extin…" — `ytr_UgzfLIHIl…`
- "Who would've thought that AI is programmed to make the best results possible. Th…" — `ytc_UgyLI3THs…`
- "But of the tree of the knowledge of good and evil, thou shalt not eat of it: for…" — `ytc_Ugx9qr8sQ…`
- "AI performance of primary job activities is crashing in flames. It is no better …" — `ytc_Ugzrr3qu4…`
- "How stupid person is who posted animated video game and saying this is first rea…" — `ytc_Ugyj29nCR…`
- "If they’re all bald and have a robot next to your ear, even though they don’t ev…" — `ytc_UgyBcqbFI…`
- "Dang. You always hear about AI potentially being able to replace a lot of the me…" — `rdc_f1edm3c`
- "Things always get derailed with this kind of stuff. AI creating AI is going to b…" — `ytc_Ugy9W3d3c…`
Comment
The thing about ai is ai doesn’t get a strange feeling unlike humans
This sixth sense we get is probably way more safer than an ai who only sees data
And tbh I’d rather humanity get wiped out by humans than by ai
It just seems way more natural to die like that
If one person messed up and we all die that’s honestly way more comfortable than an empty non living collection of data determining our fate
Source: youtube · Topic: AI Governance · Posted: 2023-07-07T19:0… · ♥ 14
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxaKlX6vKwzW1AYkbJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy64p3829WCbPu6RGx4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgyXjgvm0pQ6HBtls6N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz-fntdkApdFUzj4cZ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzFwnITx6hAr8NBdKB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwvigxNvI-EDQKMAbZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgyGWsM6yCUvbRPn5VV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgzSDnKPGDw_p17pOE94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgztWTvn69-fCAsUwQ94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzAZGiLH2JamuufDVJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"}
]
```
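The raw response is a JSON array of rows, one per comment, each carrying the four coding dimensions shown in the table above (responsibility, reasoning, policy, emotion) keyed by comment ID. A minimal sketch of how such a batch can be parsed and looked up by ID; the `lookup` function and the inline two-row sample are illustrative, not part of the tool:

```python
import json

# Illustrative batch in the same shape as the raw LLM response above
# (two rows copied from it; not the full batch).
raw = """
[
  {"id": "ytc_UgxaKlX6vKwzW1AYkbJ4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugy64p3829WCbPu6RGx4AaABAg", "responsibility": "company",
   "reasoning": "contractualist", "policy": "regulate", "emotion": "indifference"}
]
"""

# The four coding dimensions displayed in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def lookup(raw_json, comment_id):
    """Parse a batch response and return the coded dimensions for one comment ID,
    or None if the model did not return a row for that ID."""
    rows = json.loads(raw_json)
    by_id = {row["id"]: row for row in rows}
    row = by_id.get(comment_id)
    if row is None:
        return None
    # Keep only the coding dimensions, dropping the ID itself.
    return {dim: row[dim] for dim in DIMENSIONS}

coding = lookup(raw, "ytc_UgxaKlX6vKwzW1AYkbJ4AaABAg")
print(coding)
# {'responsibility': 'ai_itself', 'reasoning': 'consequentialist',
#  'policy': 'none', 'emotion': 'fear'}
```

Returning `None` for missing IDs makes it easy to flag comments the model skipped in a batch, rather than raising mid-pipeline.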