Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (truncated previews with comment IDs):

- "AI (Grok 4) Saved My Life from Heart Disease: AI (Grok 4) saved my life when n…" (ytc_UgwoOzlwb…)
- "Because of delicate princesses that get hurt by any statement AI is being made w…" (ytc_Ugxr20uYG…)
- "Using quango's offers 1 degree of separation, even though we know they are govt …" (ytc_Ugz_Hmpkd…)
- "Wolfram seems to be saying that no matter how many levels of “intelligence” or…" (ytc_UgweOSrgE…)
- "Even I have found AI made Fakes of myself and AI on Patreon tried to get me to l…" (ytc_UgztMle_n…)
- "This already happening tbh. Look at the AI startups raising billions while regul…" (rdc_nck6trm)
- "@PappaTom-ub3ht the first 2 paragraphs prove my point about you being facetious,…" (ytr_UgzTgkwhI…)
- "This literally made me tear up. Whats worse is that my father and mother support…" (ytc_UgygUIgIc…)
Comment

> I wonder 🤔, if humanity goes for the Super AI, why would it continue advancing in those things that humans want, like and need if it deems that what we have is good enough! Example, why would it create a nicer and more advanced car model if what we have is good enough?

Platform: youtube · Topic: AI Governance · Timestamp: 2025-09-06T19:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response

```json
[
  {"id":"ytc_UgxeWFc0X8rSZGLRn3d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzOMeU4SklfBOOg26J4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxVCuKGlK3I6RYt1eZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzWvw7L9HUuOAWgC-p4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzR7zHoa-kxVvHXmXx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgyDWBwltZUEVrDBiZZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugxl6Q6t9ewOh5dlnJR4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugzou7iSFrKdKlDARoZ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_Ugx8vPXyb7DBFA_NJld4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwVCJytmkJZE1Gmfxt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
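A batch response in this shape can be parsed, validated, and indexed by comment ID before lookup. A minimal sketch: the `SCHEMA` vocabularies below are inferred only from the values visible in this sample, so the real codebook likely contains more categories (an assumption), and records with unknown values are simply dropped here rather than flagged for re-coding.

```python
import json

# Allowed values per coding dimension, inferred from the sample output
# above. ASSUMPTION: the actual codebook may define additional categories.
SCHEMA = {
    "responsibility": {"ai_itself", "company", "developer", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"mixed", "outrage", "indifference", "resignation", "fear", "approval", "unclear"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response and index valid records by comment ID.

    Records carrying a value outside SCHEMA are skipped, so a malformed or
    hallucinated label never reaches the coded dataset.
    """
    coded = {}
    for rec in json.loads(raw):
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            coded[rec["id"]] = rec
    return coded

# Lookup by comment ID, mirroring the inspector flow above.
raw = ('[{"id":"ytc_UgxeWFc0X8rSZGLRn3d4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}]')
coded = parse_batch(raw)
print(coded["ytc_UgxeWFc0X8rSZGLRn3d4AaABAg"]["responsibility"])  # ai_itself
```

Validating against a fixed vocabulary at parse time is what makes the "Coded at" table above trustworthy: any record shown there has already passed the schema check.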