Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `rdc_oi0si99`: This is actually a really good image, amazes me how AI could do this but we coul…
- `ytc_Ugy1ph7V6…`: I mean people are against AI doctor thing... But then u realize the price... AI …
- `ytr_UgzQm-gW4…`: Correct. Tim needs to do a bit of research into AI Safety. He has a very pop cul…
- `ytc_UgxxIvqTL…`: I wonder with enough examples of contradiction if ai can end up solving examples…
- `ytc_UgzDNvPBh…`: Did like no one ever watch "Colossus the Forbin Project" the original and defini…
- `ytc_UgzCIj5ax…`: And it is usually wrong. We made this stuff thirty years ago. I hacked for the m…
- `ytr_UgyyFDJTC…`: The deployed this tech and are training ai on it in the country it would be hard…
- `ytc_UgwQcdSo7…`: Dignity - the proletariat who’s losing to AI is facing an enormous mental health…
Comment

> They might be, but I think that the people who are working with this technology are going to use it for their personal and political benefit. Making people an instrument of AI was the goal. AI is hallucinating. The issue is that AI is being treated as a human being with feelings and humans are being pushed into subservience

youtube · AI Moral Status · 2025-09-13T13:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgxMsDzeNyWBST32gZR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyCWxAJtHjsW0k2soh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzQoRvgaqlXUxxhD0d4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyN7bKko1NJydZ2Qvx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwW2ieJQsMwsUbzRhd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzyjB5nOrUANdHzKzx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzXxRSQ2gxmTJJt92h4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyH6MAU0cgHFPEH7Vx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyurpE6Uf5DdpKQNDh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgygjVmLLVgjcizgVfx4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
```
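The raw response is a JSON array with one record per coded comment. A minimal sketch of how such a batch might be parsed and sanity-checked before the codes are stored, indexing valid records by comment ID (the allowed values below are only those observed in this sample batch; the actual codebook may define additional categories):

```python
import json

# Allowed values per dimension, as observed in this sample batch.
# Assumption: the full codebook may define more categories than these.
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself", "unclear"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"regulate", "liability", "ban", "unclear"},
    "emotion": {"outrage", "fear", "indifference", "mixed"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM response and index valid records by comment ID."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec.get("id")
        if not cid:
            continue  # skip records missing a comment ID
        bad = [dim for dim, ok in ALLOWED.items() if rec.get(dim) not in ok]
        if bad:
            # Flag unexpected values rather than silently storing them.
            print(f"{cid}: unexpected value(s) for {bad}")
            continue
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded

raw = ('[{"id":"ytc_x","responsibility":"developer",'
       '"reasoning":"deontological","policy":"regulate","emotion":"outrage"}]')
print(parse_batch(raw)["ytc_x"]["policy"])  # regulate
```

Validating against a fixed value set like this catches the common failure mode where the model invents a label outside the codebook; such records are skipped and reported instead of corrupting the coded dataset.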