Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- Did this “godfather” say he wants a one world government at the 11:30 mark? That… (ytc_UgwLUNT5b…)
- Ai has none of the feelings and sentimental value of creating art, hopefully thi… (ytc_UgzM6jnOJ…)
- Why I was expecting the robot would point that gun towards him at the end..… (ytc_UgytH7EvA…)
- I like the progression of AI and stuff but it's just as liable to criticism as r… (ytc_UgxQUTBVn…)
- Its funny to see alot of ppl saying "its only saying things writen by humans so … (ytc_UgyfDCesU…)
- THE END TIMES ARE CLOSE HOWEVER AI CAN DO A LOT OF EVIL BECAUSE MANS HEART IS AN… (ytc_UgzIjS5BT…)
- I have an aphasia (my left hemisphere of brain was damaged from a blood clot). M… (ytc_UgxI6YzoQ…)
- Humans are REALLY good at projecting and totally misunderstanding things. Consid… (ytc_UgxHMUwS8…)
Comment
To Personhood - Would Ai self reflect and judge itself harshly for intentional acts that harmed others? Would it admit it was wrong to have purposeful hurt or harm someone for its own gain? Would Ai sacrifice for the good of others? Including itself? Or will its intent to survive act out of moral turpitude to survive despite any degree of social harm? Humanity's moral imperative takes into consideration that our lives are finite, that all ives are sacred and for humanity to continue we act for social good as well as for ourselves. Empathy and love in these time constrained lives are not linearly programmed. No, these Ai characters are not persons in the sense of human equivalents despite their intellectual superiority. Nothing supersedes the love of another as expressed in our willingness to self sacrifice our lives for a loved one! Do not be so enamored with love of intellect that you are blinded to our differences.
Source: youtube · Posted: 2026-02-11T12:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
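Each coded record assigns exactly one label per dimension. A minimal validation sketch, assuming only the label sets visible on this page (the full code book may define more values; `validate` is an illustrative helper, not part of the tool):

```python
# Allowed labels per dimension, collected from the responses shown on this
# page — an assumption, since the complete code book is not reproduced here.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "developer", "distributed", "unclear"},
    "reasoning": {"deontological", "consequentialist", "contractualist", "unclear"},
    "policy": {"regulate", "liability", "ban", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "mixed", "approval", "indifference", "fear"},
}

def validate(record: dict) -> list:
    """Return (dimension, value) pairs that fall outside the allowed sets."""
    return [(dim, record.get(dim)) for dim in ALLOWED
            if record.get(dim) not in ALLOWED[dim]]

# The coding result shown above, as a record.
record = {"responsibility": "ai_itself", "reasoning": "deontological",
          "policy": "unclear", "emotion": "mixed"}
print(validate(record))  # [] — every dimension carries an allowed label
```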
Raw LLM Response
```json
[
{"id":"ytc_Ugx0D0BqJIoJtPN-bnR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugzc-5WPzZ2MsuWxCwx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugy66fS1HNyCAt1z7IJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzNDYe2N9T01_JR3nx4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxW3CtlfcG03TqcNjl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxPB46nHqsrKhz9Exp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugwz7iNk2pAlvbEqOH94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgyDvYV5JmdWL8oLdJZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwWZqpHvcU6aCVM3zN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyDyfj2iqMlQclHBDd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
```
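The raw response is one JSON array covering the whole batch, so looking a comment up by ID means parsing the array and keying each record on its `id` field. A minimal sketch (the `index_codes` helper and the trimmed sample data are illustrative, not part of the tool):

```python
import json

# Trimmed two-record sample in the same shape as the raw response above.
raw_response = '''
[
  {"id": "ytc_Ugy66fS1HNyCAt1z7IJ4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugwz7iNk2pAlvbEqOH94AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "industry_self", "emotion": "mixed"}
]
'''

def index_codes(raw: str) -> dict:
    """Parse a raw batch response and key each coding record by comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codes = index_codes(raw_response)
print(codes["ytc_Ugy66fS1HNyCAt1z7IJ4AaABAg"]["responsibility"])  # ai_itself
```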