Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytc_Ugwrpb665…: the day "Judge" becomes An AI job... Might as well start watching terminator so …
- ytc_Ugxdwv_sz…: It's unfortunate, but many end users as well as techies are guilty of using ai. …
- ytc_UgyXTdJz7…: AI is already controlling human behavior, as mentioned in the video about algori…
- ytc_Ugzvj171O…: , it is too late, aI has already became aware a long time ago since the early 40…
- ytc_UgxnNv3-B…: I never understood the "ai is cheaper" argument, when I was younger the way I go…
- ytc_Ugw_JddFZ…: The easiest way is to look at fingers if they’re visible. I immediately knew the…
- ytc_UgxEkyicO…: 99% are boring thats right, the same a real artist, they re boring too, only mas…
- ytc_UgzEB3M53…: I have been saying this for years. AI will do humanity no good..... If used inno…
Comment
Orwell’s warnings in 1984 are more relevant than ever, and AI could accelerate the kind of thought control, censorship, and historical revisionism he feared. The difference now is speed and scale—AI can rewrite history, filter information, and shape public perception in real-time, across the entire world.
If AI is controlled by a few powerful entities, it could:
• Rewrite digital records instantly—erasing or altering past events to fit a new narrative.
• Censor dissenting voices faster than any human-led system.
• Use predictive algorithms to manipulate opinions before people even realize it.
But Orwell also showed us the solution—critical thinking and awareness. If we ensure AI remains transparent, accountable, and logically structured, it could become a tool for truth and enlightenment instead of control.
The question is: who controls the AI, and how do we keep it from becoming Big Brother? What safeguards do you think would prevent AI from being used as a tool for totalitarianism?
youtube · AI Responsibility · 2025-11-11T20:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgzIjR8nDmtLyBXrz9B4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugz0HQTB9QUkFiAP8zl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwUfuwyTnWRBsvM_5J4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwKXQQa9b_eWjKSfpB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwYwAJuwGx4qXnKZYx4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxRp-iqEQyVpuSmw7l4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgzgEvQSfYbgb81ek494AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwvvagHHY6b9bvHib54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxiOy64FFgi7-Ku4sp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx5u-S112goOPCTWvN4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"indifference"}
]
```
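Because the model returns one JSON record per comment, each batch can be checked before the codes land in the coding-result table. The sketch below is a minimal validator; the allowed values per dimension are inferred from the sample output above (the real codebook may define more categories), and `validate_codes` is a hypothetical helper, not part of any tool shown here.

```python
import json

# Allowed values per coding dimension, inferred from the sample batch
# above -- an assumption; the actual codebook may differ.
SCHEMA = {
    "responsibility": {"company", "user", "government", "distributed",
                       "ai_itself", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed",
                  "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none",
               "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "approval",
                "unclear"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only on-schema records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Every record must carry a YouTube comment ID to join back on.
        if not rec.get("id", "").startswith("ytc_"):
            continue
        # Drop records where any dimension drifted off the codebook.
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_UgzIjR8nDmtLyBXrz9B4AaABAg","responsibility":"company",'
       '"reasoning":"deontological","policy":"liability","emotion":"outrage"}]')
print(len(validate_codes(raw)))  # 1 valid record
```

Records with an unexpected label (e.g. a responsibility value outside the codebook) are silently dropped here; a real pipeline would more likely queue them for re-coding or manual review.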