Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytc_UgyiZXaOe… — "So autopilot accident in 2019 with technology that is several generations older …"
- ytc_Ugyr2NdQx… — "I do love how people sir quote artist when people have been using photoshop for …"
- ytc_UgzATg3OH… — "Well they always censor the name of Jesus but won't ever censor Muhammad in main…"
- ytc_UgyVE08Bm… — "Thanks for these tips........ said AI. Now they will learn and adapt to these ti…"
- ytc_UgzE1RhgO… — "It doesn't make sense to say dumb shit the brushes simply make it easier for des…"
- ytc_UgwO8DI5L… — "Basically, talk to it as if its one of my imaginary friends (not sure how it wou…"
- ytc_UgzYFlzf8… — "It’s ok on my case, because all my therapy sessions were fake…i lied even my nam…"
- ytc_UgzaPNL5z… — "We are making really powerful computer that have the intelligence of a kid, tell…"
Comment
Meaning is not a problem, if you believe in God, not the 6 day version but that which planned and manufactured the charge for primal aka big bang. Question is what the makers of AI have thought and perhaps relayed to AI. Is someone programming AI to make this a paradise for all humans who can behave, or to all despite how they behave or just a certain group. For example rule out certain group or mass of people with certain properties and AI can make it so that these people don't exist the next year by simulating catastrophies or logistical problems.
youtube · AI Governance · 2025-10-30T22:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugz6ZPptRGN-VOz5Or54AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx0rhjxblJ2mWrPrkp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugw9fkT0DVLr3cX7Ql94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwG6N-JLnNh5y13MHN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxOV814nrZYHx7R-P54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugw2zAAlkjF9i2qbnoV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxxeDd9h8jgQNF1x4J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz05BdhmBXFP8PG8Lx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugz-TVhAr6x0uOdNy194AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"unclear"},
{"id":"ytc_UgyDrfbptNcCFWv6B654AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"approval"}
]
```
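A batch response like the one above can be parsed and checked before the codes are stored. The sketch below is a minimal example, not the tool's actual implementation: the allowed values per dimension are assumed from the codes visible in this page (the full codebook may define more), and the `validate_batch` function name is hypothetical.

```python
import json

# Allowed codes per dimension -- assumed from the values observed on this page;
# the real codebook may include additional categories.
ALLOWED = {
    "responsibility": {"government", "company", "developer",
                       "ai_itself", "distributed", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "liability", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference",
                "resignation", "unclear"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed coding records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Every record needs a YouTube comment ID and in-vocabulary codes.
        if not rec.get("id", "").startswith("ytc_"):
            continue
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid
```

Records that fail either check (a malformed ID or an out-of-vocabulary code) are dropped rather than repaired, so a later lookup by comment ID only ever returns codes the codebook recognizes.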