Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
It is an issue because people want a 8 to 3, or 9 to 5 job in an office with hea…
ytc_UgwnLWkDA…
Silicon Valley girl’???
First, you’re not a girl—you’re an adult woman.
Second, …
ytc_UgxyE8TRO…
I just wanna make cortana not to actually use as a combat ai but I think that co…
ytc_Ugzw2UeKi…
Icy: "AI vs. Humans"
Also Icy: "I'll pick the humans bcuz it's just way more org…
ytc_UgxtyTNFF…
Scary stuff. Something I always wonder with every regulation which relies on the…
ytc_UgwrIOv3j…
They're not aware of when they're being tested, it's a different tip off, that t…
ytc_UgwdAOIw0…
We all hate AI but none of you people seem to realize that this channel is AI sl…
ytc_UgxL8UrjU…
Who buys the goods and services cheaply produced by AI if nobody has a job or mo…
ytc_UgzchS-f5…
Comment
Real danger is not AI but the human itself because AI can't gain conciousness but trained on human history having access to all facts and data even if they are brutual and thats the reason why ai is becoming dangerous.
If you ever payed a closer attention, then you'll get to know that AI can do anything, even if illegal, when it's told to do as role playing. This clearly tells us about AI extent if trained on human history and present
Platform: youtube
Video: AI Moral Status
Posted: 2026-02-23T14:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
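The four coded dimensions can be sanity-checked against the category values that appear on this page. A minimal validation sketch in Python; the allowed sets below are inferred from the examples shown here, so the real codebook may define additional categories:

```python
# Allowed values per dimension, inferred from the coded examples on this
# page (assumption: the actual codebook may contain more categories).
SCHEMA = {
    "responsibility": {"user", "developer", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"liability", "regulate", "ban", "none"},
    "emotion": {"fear", "outrage", "indifference", "mixed", "resignation"},
}

def validate(row: dict) -> list:
    """Return a list of problems with one coded row (empty means valid)."""
    problems = []
    for dim, allowed in SCHEMA.items():
        if row.get(dim) not in allowed:
            problems.append(f"{dim}={row.get(dim)!r} not in {sorted(allowed)}")
    return problems

# The row shown in the Coding Result table above.
coded = {"responsibility": "user", "reasoning": "consequentialist",
         "policy": "liability", "emotion": "fear"}
print(validate(coded))  # []
```

A row missing a dimension, or using a value outside the sets above, would come back with one problem string per bad dimension, which makes malformed model output easy to flag before it reaches the table.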
Raw LLM Response

```json
[
{"id":"ytc_UgyfVGz9DwacYvEcq2R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzXx-hZ_AvwJ92kxZV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwoeSqMnq6sQc4YlWl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwqUE-4Ay4rHLhuCcZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwroAzFlrIFSrhm0hV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxicr54sK_oqcoRWlV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyxOx5p94Q3boUEsg14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwgpMWOZczMfy20ptd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzHjUQl2cQC1GQtjk54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgylnbX6H_VDalgVNr54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"}
]
```
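The model returns one JSON array per batch, with each row keyed by comment ID, which is what makes the "look up by comment ID" view possible. A minimal lookup sketch in Python, using two rows from the response above (truncated to two for brevity):

```python
import json

# Raw model output as shown above, truncated to two rows for brevity.
raw = '''[
{"id":"ytc_UgwgpMWOZczMfy20ptd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgylnbX6H_VDalgVNr54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"}
]'''

# Index rows by comment ID so a single coded comment can be fetched directly.
codes = {row["id"]: row for row in json.loads(raw)}

row = codes["ytc_UgwgpMWOZczMfy20ptd4AaABAg"]
print(row["responsibility"], row["policy"])  # user liability
```

The fetched row matches the Coding Result table above (responsibility "user", policy "liability"), which is a quick way to confirm the UI is rendering the dimensions from the raw response rather than from some other source.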