Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
"After the lawsuits were filed, ChatGPT was actually severely restricted on what …" (ytc_UgwLgFGOE…)
"D. AI that develops emotions over time is not something that exists completely y…" (ytc_UgzHubPBe…)
"Medical advice from a single AI is risky. I switched to Omnely to get access to …" (ytc_UgwHPVTs5…)
"AI can hallucinate, be hacked, social engineered more than humans can. One tempo…" (ytc_UgzHTGBB6…)
"It won't happen, they are pumping stocks as apartheid Edison has done for long …" (ytc_UgzAQAO3G…)
"You're stuck in early 2024 AI. 2025 AI can perfectly do all that. Read something…" (ytc_UgznmRuSv…)
"Technology has always eliminated jobs, and new jobs were always created. The di…" (ytc_UgxDA7_2-…)
"They came out and said they are happy that people are redrawing the ai peice…" (ytc_UgwS-FzwU…)
Comment
"What people don't get is that not the AI is dangerous. It is humans abusing or misconfiguring AI that makes AI dangerous. AI is not dangerous in itself."
Platform: youtube
Topic: AI Governance
Posted: 2025-12-02T14:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
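The "Coding Result" table above is a per-comment view of one row from the batch response. A minimal sketch of how such a row could be rendered into that table (the function name and field labels are illustrative, not this tool's actual API; the dimension keys follow the schema visible in this sample):

```python
def to_result_table(row: dict, coded_at: str) -> str:
    """Render one coded row as a 'Coding Result' markdown table.

    `row` uses the dimension keys seen in this sample's JSON output;
    `coded_at` is the coding timestamp.
    """
    lines = ["| Dimension | Value |", "|---|---|"]
    for label, key in [("Responsibility", "responsibility"),
                       ("Reasoning", "reasoning"),
                       ("Policy", "policy"),
                       ("Emotion", "emotion")]:
        lines.append(f"| {label} | {row[key]} |")
    lines.append(f"| Coded at | {coded_at} |")
    return "\n".join(lines)

print(to_result_table(
    {"responsibility": "user", "reasoning": "deontological",
     "policy": "none", "emotion": "indifference"},
    "2026-04-27T06:24:53.388235",
))
```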
Raw LLM Response
```json
[
{"id":"ytc_UgzTn4H-aBslHitEt8N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy1f9tJDPAA4U9ls6Z4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzeNlkC880GtDv-U0F4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxeGrBpDfrEwroJ-Gt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzH99lBwMxzxzbSeLN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy_k3ukE57MSHRq6s14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxI9RNVJbo1jnffsj54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw7b0PBbBqaTDZzNZB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzVW2eFYfyJGDG67ht4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzXw3nEtBnItYRHCux4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
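A raw response like this should be validated before it is stored, since the model can emit codes outside the codebook. A minimal validation sketch, assuming the allowed values are exactly those observed in this sample batch (the real codebook may define more categories, and `validate_batch` is a hypothetical helper, not this tool's API):

```python
import json

# Dimension values observed in this sample batch; the full codebook
# (an assumption here) may allow additional categories.
ALLOWED = {
    "responsibility": {"ai_itself", "user", "distributed", "none",
                       "company", "developer"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "unclear", "regulate"},
    "emotion": {"fear", "indifference", "resignation", "approval", "outrage"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject rows with unknown codes."""
    rows = json.loads(raw)
    for row in rows:
        if not str(row.get("id", "")).startswith("ytc_"):
            raise ValueError(f"bad comment id: {row.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: unexpected {dim}={row.get(dim)!r}")
    return rows

raw = ('[{"id":"ytc_Ugy1f9tJDPAA4U9ls6Z4AaABAg","responsibility":"user",'
       '"reasoning":"deontological","policy":"none","emotion":"indifference"}]')
rows = validate_batch(raw)
print(len(rows))  # 1
```

Rejecting whole rows (rather than coercing unknown codes to `unclear`) keeps the failure visible, so a drifting prompt or model shows up as an error instead of silently skewing the coded data.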