Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `ytc_UgyAXASzL…`: This seems to be a very naive way to look at AI. Just like there is a "Dark web"…
- `ytc_Ugxm9GA4C…`: The first I said to my ChatGPT was "You're name is Katy and you're a Slay Queen …
- `ytc_UgwYLFpxg…`: I'm assuming the lack of wanting to go offroad is probably to do with the AI tra…
- `ytr_UgxBiAIPb…`: There are still a couple issues. One, since the amount of your data is so low, t…
- `ytc_UgzscqdZc…`: Yampolsky is an AI Safety expert, not industrial economist. I admire him not as …
- `ytc_UgyaW3zod…`: in less than 15 years we are going to have a news saying " AI is killing people …
- `ytc_UgwS9AgNu…`: We always imagined AI would ask for permission. But 12 Codes of Collapse reveals…
- `ytc_Ugz8FVVN2…`: The real philosophical question here is not whether AI can think like us, but wh…
Comment

As best I can tell they are blocking it because the OpenAI site is leaking user data, not because the language model is violating privacy or "AI is bad"

reddit · AI Governance · 1680324717.0 (Unix timestamp) · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_jegdeis", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_jefyj36", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "rdc_jegkbdz", "responsibility": "company", "reasoning": "deontological", "policy": "industry_self", "emotion": "resignation"},
  {"id": "rdc_jei3mn2", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "rdc_jefdzg3", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]
```
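The look-up-by-comment-ID step described above can be sketched as follows. This is a minimal, hypothetical example (not the tool's actual implementation), assuming only that the model returns a JSON array of coded rows like the one shown: it parses the raw response, indexes the rows by `id`, and retrieves the coding for a single comment.

```python
import json

# Raw batch response from the coding model, copied from the dump above.
raw_response = """[
{"id":"rdc_jegdeis","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"rdc_jefyj36","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"rdc_jegkbdz","responsibility":"company","reasoning":"deontological","policy":"industry_self","emotion":"resignation"},
{"id":"rdc_jei3mn2","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"rdc_jefdzg3","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]"""

# Index the coded rows by comment ID so a single comment's
# coding can be fetched in O(1).
codings = {row["id"]: row for row in json.loads(raw_response)}

# Look up one comment by its ID.
row = codings["rdc_jei3mn2"]
print(row["policy"], row["emotion"])  # regulate approval
```

In practice the real pipeline would also validate that each row's dimension values fall in the expected code set (e.g. `policy` in `regulate`, `liability`, `industry_self`, `none`, `unclear`) before displaying them in the coding-result table.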