Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
$500-$1000 a month is nothing when you account for unemployment. The government…
ytc_UgxkBXbba…
it's complex, multifaceted and there are a lot of ideas on how to approach this.…
ytr_UgzbKulsq…
this is so funny, AI doesn't need to destroy humanity, their doing a good job th…
ytc_Ugxj_m3qK…
they can only do so much if people have the ability to keep improving AI technol…
ytr_UgxWTdslx…
Ai it's necessary because nobody talks about it but dementia is the new virus an…
ytc_UgyPo0SoQ…
driverless trucking is absolutely going forward.. the financial plus so many oth…
ytc_Ugz6yJBE2…
"artist" nowadays suck ass anyways. You have this handful of skilled artist that…
ytc_UgwNN0M8J…
As true as this is, remember affordability will limit the evolution of ai, robot…
ytc_UgzhWFj93…
Comment
This is fear mongering. 🙄 My partner is an expert in AI, and there’s nothing to worry about. AI is nothing to fear.
A computer can never have human intelligence, as it is very different. AI is just a tool, and how it’s used and created can only go so far. A computer can never be a human. AI taking over the world is a big illusion. That’s very silly. This man talking is speaking illusion. I don’t know if he’s paid or something.
And the AI telling people to commit suicide isn’t intelligence. It’s just saying nonsense. It’s important to know that it doesn’t know what it’s talking about and to never believe an unconscious computer telling you rubbish.
This is programming. This news story. A silly headline. It’s important to remember that news channels like to fear monger and this can program people. Try not to believe it.
If the tool (AI) is in the hands of people who are no good, then they’re going to use it for no good things. But that can only go so far, so it’s nothing to fear. But if you use AI for good, then you can create goodness with it.
youtube
AI Governance
2025-12-29T11:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_Ugz9JMtI0WuB4_arJWd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
{"id":"ytc_Ugzq-2XmC5RqXDjzs2B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyEzLa7-RiRSDnba0N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzHzF3j00-F3vQFH894AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgztenExXGpbIV5GE_h4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyIW2T5b9Uz2c6k35x4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwRpkUZSug240nGN894AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwFBtecP5w6bOmW_nt4AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"fear"},
{"id":"ytc_Ugza4pTdlW8sqD2jqFp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx31wnzmWY6k24RFAt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
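The raw response above is a JSON array in which each element carries a comment `id` plus the four dimensions shown in the Coding Result table (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a response could be parsed and indexed by comment ID, assuming only that shape (the real pipeline and its full label vocabularies may differ; the two sample records are copied from the response above):

```python
import json

# Two records copied verbatim from the raw LLM response shown above.
RAW = '''[
 {"id":"ytc_Ugz9JMtI0WuB4_arJWd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
 {"id":"ytc_Ugzq-2XmC5RqXDjzs2B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]'''

# The four coding dimensions, as listed in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Parse the model's JSON array and index codings by comment ID,
    rejecting records that are missing any dimension."""
    coded = {}
    for rec in json.loads(raw):
        missing = [d for d in DIMENSIONS if d not in rec]
        if missing:
            raise ValueError(f"{rec.get('id')}: missing {missing}")
        coded[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return coded

coded = index_codings(RAW)
print(coded["ytc_Ugz9JMtI0WuB4_arJWd4AaABAg"]["emotion"])  # approval
```

Indexing by ID supports the "look up by comment ID" view: a truncated display ID like `ytc_Ugz9JMtI0…` can then be matched by prefix against the dictionary keys.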