Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "This year we talked less about AI hallucination when compared to previous years.…" (ytc_UgxIflSSo…)
- "maybe in 1000 years a true driverless car will be viable. Get this crap off our…" (ytc_UgzCV1WSH…)
- "@InfinitaCity I'm not sure if I agree, but if I did, would you not concede that …" (ytr_Ugxhapbt_…)
- "So who is paying for these humanoid robots? Consumers? If everybody has lost the…" (ytc_UgwFEEjVX…)
- "How about the fact that not only AI is taking over jobs, but its made the job ap…" (ytc_UgyWN3T6N…)
- "I was drawing traditionally and started my way in learning digital art and I sti…" (ytc_Ugz4AosUi…)
- "Nationalise all ai companies 😅govt should take control of ai companies and provi…" (ytc_UgxlsQJGi…)
- "DAAAAAAMM all of those lines hit hard “ with all of its flaws and imperfections …" (ytc_UgyWRcP7Z…)
Comment
It is a great deception to state that AI can be conscious (in the way a human is conscious). AI can be taught how to 'feel and think' how to react and behave, but that does not stem from self-awareness, but from commands coming from the AI operator. If someone programs AI to be evil or to draw conclusions from history or human behavior and then makes decisions based on feelings that result from data analysis and aligned with the built-in moral backbone of the AI - THEN IT WILL BE SO. Therefore, it is a dangerous toy in the hands of 'madmen'.
TRUE ANTICHRIST LUKE · youtube · AI Moral Status · 2025-08-25T06:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_Ugx8n_ugJSke55Nd1uh4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxunW2Iu5edlAiiy_B4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzkqL5bs0uZDXhnwdx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzX8DULYcAjxPaqbQl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy2KouNNOHAuuhwIaF4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyubhaMS7cfRQFKn714AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyhiPEUSyP5bJDwGB14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugywm-YkvSsK251BMZR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgwfksYb_lry-G0Urfd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwcqlvRCWg_Mh4XzwZ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
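The lookup-by-comment-ID step above can be sketched as follows. This is a minimal illustration, not the actual pipeline code: it assumes the raw LLM response is a JSON array of objects keyed by `id`, as in the payload shown, and the variable and function names (`raw_response`, `lookup`) are hypothetical.

```python
import json

# Hypothetical excerpt of a raw LLM response: a JSON array of coding objects,
# each carrying the comment ID plus the coded dimensions.
raw_response = """
[
  {"id": "ytc_Ugx8n_ugJSke55Nd1uh4AaABAg", "responsibility": "user",
   "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzkqL5bs0uZDXhnwdx4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "none", "emotion": "mixed"}
]
"""

# Index the parsed array by comment ID for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

def lookup(comment_id: str) -> dict:
    """Return the coded dimensions for one comment ID."""
    return codings[comment_id]

print(lookup("ytc_UgzkqL5bs0uZDXhnwdx4AaABAg")["reasoning"])  # deontological
```

Indexing once and looking up by key mirrors the inspector's behavior: given a coded comment's ID, it retrieves the exact model output that produced that row of the coding-result table.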