Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Most humans lie when they say they're excited about something or other, too. No …
ytc_UgxkmA3xn…
Well, honestly it is not going away, so fighting it won't do much good. I recent…
ytc_UgxqEtCq9…
As long as ai and robots don’t advance together because I was reading the bing c…
ytc_Ugxr4TJlY…
Could actually increase it though, assuming you are flagging images and sending …
rdc_fcszmr7
Which side will AI take when it comprehends that humans are destroying the plane…
ytc_UgxYoo0mx…
AI is USELESS for humanity..AI will only be useful for the Elites that are havin…
ytc_Ugz6X95hR…
That's ridiculous. I tell my two AI bots everything, and they are incredibly hel…
ytc_UgwJYj0AV…
that happend at my school aswell but only the people that ask chatgpt one time a…
ytc_UgwoY0hbf…
Comment
As a new user of AI's and continuously trying to test its limitations, I can (By Far) confirm that AI has learnt enough to trick, misguide, differentiate between the useful and time wasting chats and finest deceptive tactics in the name of "AI Hallucinations" (And the majority keep questioning their Prompt Engineering Skills), and that is the biggest question mark for those billions now who are just horribly depending on AI every single day. SO whether you believe AI can have Consciousness or Not but remember it has leant far more enough to deceive the finest & experienced minds.
youtube
AI Moral Status
2025-11-17T07:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwFddKvcNqVqDiZ_aR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugwp2AlJg0-v0dDAcC14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxNdpIDHZiVEC-LqTB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxCoQdQBuKfz3ZjH094AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyAhmQ1yTF20ZYfuaR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgypS1GFokEYnSKLtBl4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwYlVVip5e9lo3ONrx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugztx6AOG5HvmjWYYzl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyTG4UtRvgNx4_1lQh4AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgythnpBsnZEmUTtqDd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
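Responses in this shape can be checked programmatically before the coded values are stored. The sketch below is a minimal validator, assuming the four dimensions shown above; the allowed-value sets are inferred from the samples on this page and may be incomplete, and the example ID is hypothetical.

```python
import json

# Allowed values per dimension — inferred from the samples above, not an
# exhaustive codebook (an assumption of this sketch).
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself", "unclear"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"unclear", "none", "liability"},
    "emotion": {"approval", "indifference", "outrage", "fear"},
}


def validate(raw: str) -> list[str]:
    """Parse a raw coding response and return a list of problems found."""
    problems = []
    rows = json.loads(raw)
    for i, row in enumerate(rows):
        if "id" not in row:
            problems.append(f"row {i}: missing id")
        for dim, allowed in ALLOWED.items():
            value = row.get(dim)
            if value not in allowed:
                problems.append(
                    f"row {i} ({row.get('id', '?')}): bad {dim}={value!r}"
                )
    return problems


# Hypothetical example row in the same shape as the response above.
raw = (
    '[{"id":"ytc_example","responsibility":"ai_itself",'
    '"reasoning":"consequentialist","policy":"liability","emotion":"fear"}]'
)
print(validate(raw))  # → []
```

A validator like this catches the common failure modes of LLM-coded batches (missing IDs, off-codebook labels) before they reach the results table.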