Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "What if we create a robot so advanced that it reaches consciousness by itself? A…" (`ytr_UgjxCutHJ…`)
- "Chatgpt isn't true conscience. Its not aware and its not a true artificial being…" (`ytc_UgxsPuwei…`)
- "Google Gemini hates white people, it refuses to show them. If a country is major…" (`ytc_UgzpnfsQ3…`)
- "My first memory was of me running outside in my underwear, I was maybe 2 years o…" (`ytc_UgzjokweH…`)
- "10:47 There is a solution to the energy consumption problem that LLMs and AI sys…" (`ytc_Ugwm6AMs8…`)
- "AI better do a lot more than his use cases. Do you think the pizza shop owner d…" (`ytc_UgyBMlYG0…`)
- "at 51:58 you can tell that his drawing has a better fold than the AI, the AI loo…" (`ytc_UgxFC_hfd…`)
- "As a teacher myself, i am quite sickened by this. God help the kid who daydream…" (`ytc_Ugz5XYNIU…`)
Comment
>Something even more malicious: once a user is hooked, can the company use the emotional attachment to the AI to persuade or coerce the user into doing something like vote differently?
You say this like social media doesn't already do this.
Source: reddit · Category: AI Governance · Timestamp: 1732743169.0 · ♥ 31
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_lzavsps", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_lzb3x3y", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_lzc4rhj", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_lzazkzj", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "rdc_lzaudj1", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "approval"}
]
```
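The model returns one flat JSON array per batch, with each element carrying a comment `id` and its four coded dimensions. Retrieving the coding for a particular comment is then just a matter of parsing the array and indexing by `id`. A minimal sketch, assuming that response shape (the two records and the helper name are illustrative, not the full dataset):

```python
import json

# Illustrative raw LLM response: a JSON array of per-comment codings,
# shaped like the example shown above.
RAW_RESPONSE = """
[
  {"id": "rdc_lzavsps", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_lzb3x3y", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
"""

def index_codings(raw: str) -> dict:
    """Parse the model's JSON array and index each coding record by comment ID."""
    return {record["id"]: record for record in json.loads(raw)}

codings = index_codings(RAW_RESPONSE)
row = codings["rdc_lzb3x3y"]
print(row["reasoning"], row["policy"], row["emotion"])
# → consequentialist none resignation
```

Keying on `id` also makes it easy to detect a malformed batch: if the model drops or duplicates a comment, the set of keys will not match the set of IDs that were sent.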