Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
| Comment preview | Comment ID |
|---|---|
| I really really dont like where AI realism is heading, and the worst part is tha… | ytc_Ugw66e4Q4… |
| Why would A.I. want to destroy humans? That's an emotion that A.I. doesn't have.… | ytc_UgwvUGpFc… |
| Chinese are using AI and gamification in the wrong places. They're attempting to… | ytc_UgyJn2yMf… |
| Indeed! Given the current climate with uncertainties around regulations, self-ho… | ytr_Ugyho4YAo… |
| "Autopilot" is the term used for airplanes too. Obviously the pilots still have … | ytc_Ugw27WpdV… |
| While humans are capable of incredible nobility, kindness, generosity, patience … | ytc_Ugywwxxa9… |
| I have a question. Why isn't ai reaching out into the real world to do stuff now… | ytr_UgypQfzIY… |
| I saw the title of the show and I immediately thought of robot Rock by daft punk… | ytc_Ugx67u6m0… |
Comment

> Guy is so smart in Ai but so dumb in his political beliefs.. wouldn't be surprised if he wanted to round up people that didn't support his world views and put them in camps....people like him are a problem.

youtube · AI Governance · 2025-06-19T22:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgyOcOrRWPRTaamoCfV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugz_6BF-qrF0d3wK-Xd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzxEv_dkZdKmKyNQdx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxonceApChfecRu5Sx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugz-V0oVp9gOBQh-P3h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxHDx1FQ34JIbG_o3l4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugzn9GQJVKYRowjdIwF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugx8EDh61b0lrGVreKF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyexkVWruXB2a5eo6h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxrmB9BMl4FlZiW_Md4AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"approval"}
]
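A batch response like the one above can be machine-checked before the codes are written back to storage. Below is a minimal validation sketch in Python; the dimension names come from the JSON itself, but the allowed value sets are only those observed in this sample (the full code book may contain more categories), and `validate_batch` is a hypothetical helper, not part of the actual pipeline.

```python
import json

# Category values observed in the sample response above.
# Assumption: the real code book may define additional values.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    # "emotion" is left unchecked here: the sample shows an open-ended set
    # (approval, resignation, fear, outrage, indifference, mixed, ...).
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw batch-coding response and flag rows with unexpected values."""
    rows = json.loads(raw)
    problems = []
    for row in rows:
        for dim, allowed in ALLOWED.items():
            value = row.get(dim)
            if value not in allowed:
                problems.append({"id": row.get("id"), "dimension": dim, "value": value})
    return problems

sample = (
    '[{"id":"ytc_example","responsibility":"developer",'
    '"reasoning":"deontological","policy":"unclear","emotion":"outrage"}]'
)
print(validate_batch(sample))  # [] — every value is in the observed code book
```

Running this on each raw response before accepting the codes catches schema drift early, e.g. when the model invents a category that the downstream aggregation would silently drop.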