Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- AI "art" is actually really annoying. Just yesterday I was trying to look for ph… (`ytc_UgyUdN3F7…`)
- I think that all ai companys must restart with their ai's. When this is the way … (`ytc_UgyJSyPCw…`)
- @aquarieaux1443 Maybe do some research on Gemini said The human brain doesn't ju… (`ytr_UgxHodCfg…`)
- It's so bad and intrusive, if you are being monitored and analysed by their algo… (`ytc_UgxKIulYX…`)
- The problem with AI is actually us. We tend to anthropomorphize things since ear… (`ytc_UgzW9rgtm…`)
- @YoinkyMcShploinky That's actually what is happening. Just that there's a socie… (`ytr_UgyqfCFJb…`)
- He really should have thought things through before unleashing AI on the world. … (`ytc_Ugw4EYwti…`)
- How come no one talks about the massive energy needs of AI and its effect on wor… (`ytc_UgzrKr2jz…`)
Comment
What hurts the most is the fact that a group of humans agree to train those bots to be more efficient than them . I was part of a project where I was training bots to be harmless, precise etc etc funny thing those ceos of Ai companies will always win because I needed that money I had to do the job . This means they will always find Humans to help destroy humanity
youtube · AI Governance · 2025-09-14T19:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgwE94flJICMea32KjR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxZAjTjwVniNqR5-6J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxYxRf6X6SABVopfLh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugz-bPu5edh5CiGpCfN4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgyeHffo2Jci-FqpM6x4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwOd3doJBOqfR5wSxp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxPxbxEopOUQ1pSvRB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz8aEcWmjGe31ryTl14AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxSira2dCeoNwPBQ9x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzUzs-C6mRScZiZqip4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"}
]
```
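A raw response like the one above can be parsed and sanity-checked before the codings are stored. The sketch below is a minimal, hypothetical validator: the `SCHEMA` sets are assumed from the values visible on this page (the project's full codebook may define more labels), and `validate_coding` is not part of any shown pipeline.

```python
import json

# Allowed labels per coded dimension — ASSUMED from the values observed
# in the raw responses on this page; the real codebook may differ.
SCHEMA = {
    "responsibility": {"none", "developer", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"approval", "indifference", "resignation",
                "outrage", "fear", "mixed"},
}

def validate_coding(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments)
    and return a mapping of comment ID -> coded dimensions,
    rejecting any label outside the schema."""
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim}={row.get(dim)!r}")
        coded[row["id"]] = {dim: row[dim] for dim in SCHEMA}
    return coded
```

Indexing by ID mirrors the page's "look up by comment ID" view: once validated, `coded["ytc_…"]` returns exactly the four dimensions shown in the Coding Result table.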