Raw LLM Responses
Inspect the exact model output for any coded comment.
Comments can be looked up by ID, or drawn as random samples for inspection:
- `ytc_UgyoNbyUg…`: If we program AI to hurt humans, it's gonna hurt humans. That's like smashing dr…
- `ytc_UgyFlK4-_…`: I Robot, transformed into reality!!! It will be exactly like in the movie!! We s…
- `ytc_Ugxzsr4GV…`: Many people don’t realize that the unhappiness of working people comes from the …
- `ytc_Ugy8qh5kt…`: A few dozen non-fatal accidents involving Waymo over more than a million miles i…
- `ytc_UgyZNWDEQ…`: Wow , it seems undeveloped countries that are founded by agriculture will start …
- `ytc_Ugz9TFnOQ…`: Ai will never kill the humanity because without human perception on it it will n…
- `ytc_UgxJsVPHv…`: This journalist is annoying. Sure, if you want to talk to a "driver," don't do t…
- `ytr_UgwevfiPZ…`: So-called "White" Americans don't care if AIs are racist to so-called "Blacks. I…
Comment

> one side elon musk saying AI is far dangerous and one side launching AI robots duh

Platform: youtube · Topic: AI Governance · Posted: 2024-10-20T21:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
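Each coded comment carries the four dimensions shown in the table above. A minimal validation sketch for one coded record; the allowed value sets below are inferred solely from the records visible on this page, and the real codebook may define more values:

```python
# Value sets inferred only from the records shown on this page (assumption);
# the actual codebook may allow additional values per dimension.
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological"},
    "policy": {"none", "regulate", "ban", "liability"},
    "emotion": {"outrage", "fear", "indifference", "resignation", "approval"},
}

def unknown_dimensions(record: dict) -> list:
    """Return the dimension names whose value falls outside the known sets."""
    return [dim for dim, allowed in ALLOWED.items()
            if record.get(dim) not in allowed]

# The record from the table above passes; a malformed one does not.
coded = {"responsibility": "developer", "reasoning": "deontological",
         "policy": "liability", "emotion": "outrage"}
print(unknown_dimensions(coded))  # -> [] (every dimension recognized)
```

A non-empty result flags records where the model drifted outside the schema, which is worth checking before aggregating counts.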
Raw LLM Response
```json
[
{"id":"ytc_Ugw4wCWXzWIPS2Icfdl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx_a4ja6ddmeLZz1GJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx4VIYCHmWiY9O98QV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgztD5McpYouOkwFu5Z4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxP8f5OkEAScHuIux54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy6tTPBk8IGTL3meQ94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy4clMtWzNLgvbVZbV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugzqw7C1Gf-rXvVXNB54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgyRNr2Ke8K52o_e0pB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzKfFvUA7Zs11yZONZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
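The raw response is a JSON array with one object per comment in the batch, so looking up a single coded comment is a matter of indexing by `id`. A minimal sketch; the payload here is abbreviated to two of the records shown above:

```python
import json

# Two records copied from the raw response above; the full payload
# contains one object per comment in the batch.
raw = '''[
 {"id":"ytc_UgyRNr2Ke8K52o_e0pB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
 {"id":"ytc_Ugx4VIYCHmWiY9O98QV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]'''

# Index the batch by comment ID for direct lookup.
by_id = {rec["id"]: rec for rec in json.loads(raw)}

rec = by_id["ytc_UgyRNr2Ke8K52o_e0pB4AaABAg"]
print(rec["policy"])  # -> liability
```

Because comment IDs are unique within a batch, the dict comprehension is lossless here; if the model ever repeats an ID, later records silently overwrite earlier ones, which is worth asserting against.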