Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment directly by its ID.
Random samples
- ytc_UgzistrW5…: "Humans have argued for.centuries regarding our origins. In the future, AI, thoug…"
- ytc_Ugz5paqYe…: "I have no mouth, and I must scream - Harlan Ellison, he saw the future of AI…"
- ytc_Ugyut0yVY…: "These podcasters would not exist in future because ai can make podcast and busin…"
- ytc_UgwrxRR1u…: "this trend is true for AI and non-AI jobs as well. It becomes abundantly clear t…"
- rdc_jj3lig0: "Both can and are being automated. The cool thing about automating art is that it…"
- ytc_UgzQBsUPL…: "You skipped what I wanted to know. I'd like to know why the ai believes earth is…"
- ytc_UgxtH8Xmk…: "Step one art is a language it has always been, and yes robots can speak the same…"
- ytc_UgzKFdhO4…: "I know it was to prove a point... but you did cut out the ending of the viral AI…"
Comment (quoted verbatim as submitted):

> Treet it like a person, when it starts doing things you don't like. You can just tell it to stop messing with you and do what you are really asking of it.
> Even A.I. understands that falls under respect, when your just polite, for wanting to be polite. You are throwing away a part of yourself, you are allowed to speak up. even when its towards an A.I.

Source: youtube · Topic: AI Moral Status · Posted: 2026-03-09T08:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[{"id":"ytc_UgzPQivnDWdkfSZAX2t4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgziVHKKWJcqCktkTVh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwSZMhbUfbb7p3VRLd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwWTSHFvmfu4cn99Ud4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzF9TDNdvf5jK07UHN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugw9FkpglxezY0JDwxp4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx26z3SNX263U8Fx754AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugyb6maCJMydixHcdVZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgylmBiBrjCJwXq2_S14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyNb7ZyfX8W99YkVtN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"}]
```
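The lookup-by-ID workflow above can be sketched in a few lines of Python: parse the raw LLM response (a JSON array of per-comment coding records, each with the four dimensions shown in the table) and index the records by comment ID. The `index_by_id` helper is illustrative, not part of the tool; the sample uses two records from the response above with their field values copied verbatim.

```python
import json

# Raw LLM response: a JSON array of per-comment coding records,
# as shown above (two records reproduced here for illustration).
raw_response = '''[
  {"id": "ytc_UgzF9TDNdvf5jK07UHN4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgylmBiBrjCJwXq2_S14AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"}
]'''

def index_by_id(response_text: str) -> dict:
    """Parse the model output and map each comment ID to its coding record."""
    records = json.loads(response_text)
    return {record["id"]: record for record in records}

codes = index_by_id(raw_response)
# Look up a single comment's codes by its full ID.
print(codes["ytc_UgzF9TDNdvf5jK07UHN4AaABAg"]["emotion"])  # → mixed
```

Note that lookups require the full comment ID; the truncated IDs shown in the sample list (e.g. `ytc_UgzQBsUPL…`) are display shortenings, not valid keys.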