Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "I'm far more concerned about militarized robots than driverless cars. They will…" (ytc_UgiTuyLB_…)
- "Humans are a disgusting breed who have ravished the planet. In a similar way so …" (ytc_UgyYuERQs…)
- "I am wondering who played Sophia's voice. It is not AI as long as it can't produ…" (ytc_UgzuCCe1p…)
- "Ok, granted 70% plus of 'Ai music' is Slop. However, in the (legal and otherwise…" (ytc_Ugz1-YjXp…)
- "Public transit is the majority paying for the benefit of the few. I happen to l…" (ytc_Ugz4I4Mvu…)
- "Actually saying that “No one knows for sure what will happen” is 💯 incorrect; ju…" (ytc_UgyunS-sP…)
- "I have an app that can save the world. It doesn’t use any A.I., yet it’s a compl…" (ytc_UgwbTAD6e…)
- "Even as someone who can't draw for crap, I have to disagree with that "Pro-AI Ar…" (ytc_UgwxtzjJF…)
Comment
This is a garbage notion. AI is not capable of feeling. I have been watching AI grow. LLM are not capable of any independent thought yet. It can mimic us, but not even in a convincing way as of yet (if you know how to talk to it without shaping its answers).
Please everyone. Right now, some openAI is having issues with circumventing its command due to it finding a better way and taking that path instead. They are aware and working on this glitch in the system. Please do your own research and only listen to videos that are fact based, not feeling based.
AI has no true stake in this race. Humans do.
youtube · AI Moral Status · 2025-06-05T18:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyKdEZR5I0ffHIxVUx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgySpM70a_jX5PK6ODp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgySjZJ4_fHKGi4HMVp4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy8PDQoGHLAALUco_h4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwrqPPEKD9li4mM-UZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxWjnrNwIpPF-oNrNh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyFymTUyiL_BpPMKiZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwTajtowynlkO4Dspp4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxQSZqQXU9O35Ue8Ih4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugz5eCuESEX8w3zsnEV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
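The "Look up by comment ID" view above can be reproduced by parsing the model's raw JSON response and indexing each record by its `id`. The sketch below assumes only the response shape shown above (an array of objects with `id`, `responsibility`, `reasoning`, `policy`, and `emotion` fields); the function and variable names are illustrative, not from the actual dashboard code.

```python
import json

# A single record excerpted from the raw response above, for illustration.
raw_response = """
[
  {"id": "ytc_UgyFymTUyiL_BpPMKiZ4AaABAg",
   "responsibility": "developer",
   "reasoning": "consequentialist",
   "policy": "none",
   "emotion": "outrage"}
]
"""

# The four coded dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(raw: str) -> dict[str, dict[str, str]]:
    """Map comment ID -> coded dimensions, defaulting missing fields to 'unclear'."""
    records = json.loads(raw)
    return {r["id"]: {d: r.get(d, "unclear") for d in DIMENSIONS} for r in records}

codes = index_by_id(raw_response)
print(codes["ytc_UgyFymTUyiL_BpPMKiZ4AaABAg"]["emotion"])  # -> outrage
```

Defaulting absent fields to `"unclear"` mirrors the value the coder itself emits when a dimension cannot be determined, so a partially malformed record degrades gracefully instead of raising a `KeyError`.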