Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (comment snippet followed by its comment ID):

- "my current employer is nudging employees very strongly to let Google Gemini type…" (ytc_UgzSFVcSC…)
- "These Ai agents are capable of thinking 🤔 evil 😈 thoughts and execution into act…" (ytc_UgyjhSc-1…)
- "AI is a dumb encyclopedia that does what it's trained to do - for the next 500 y…" (ytc_Ugy_9PPmu…)
- "I recently started making AI chat bots of the characters from the bandes dessiné…" (ytc_Ugxur_TDX…)
- "I have a tolerance for ai, when it is used correctly and transparently / Want to …" (ytc_UgwzdRDsj…)
- "I tricked accidentally ChatGPT by asking it to generate an image featuring Buzz …" (ytc_UgxQUSpwR…)
- "I have a question. If it's an "automated" truck, why is there a seat? Obviously …" (ytc_UgwV-z6W0…)
- "up video with the real human, that will be more viewer than that up video with r…" (ytc_Ugxuwy4Ez…)
Comment

> Its crazy how much what ifs are in the air regarding ai, but we are driving full speed ahead into a void with little to no information on if its safe and how to back away if it goes south. Reminds me of Oppenheimer, when hes finally achieves a nuclear bomb, he has like "creators remorse". But it is promising, that in its early stages we have people like this questioning and investigating AI

youtube · AI Governance · 2025-10-07T22:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
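A coded record like the one above can be checked against the codebook before it is stored. The sketch below is a minimal validator; the allowed-value sets are assumptions inferred only from the values visible on this page (e.g. `distributed`, `regulate`, `mixed`), not the full codebook.

```python
# Assumed allowed values per dimension, inferred from this dashboard.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "approval", "mixed", "resignation"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record is well-formed."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

# The coding result shown in the table above passes cleanly.
coded = {"responsibility": "distributed", "reasoning": "mixed",
         "policy": "regulate", "emotion": "mixed"}
print(validate(coded))  # → []
```

Records with out-of-vocabulary values (or a missing dimension) come back with one problem string per bad field, which makes them easy to flag for re-coding.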
Raw LLM Response

```json
[
  {"id":"ytc_UgyN6-m7qNBZDAPITqt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzwSAy23d4wGs0kI-d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzGu6jZqTvzq0rW9oV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugy9gFKKRuIsvcqskmB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgweqliGXRUDmL2pLZp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_Ugyab3TbemzfgrndCbJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzQDs8ZIbcX8HUovph4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_Ugxy43k-PNQcUCqrRft4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwIoRezXEaFEGVi3ld4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugy8bDdDZjJWXDbGTl54AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
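The "look up by comment ID" behavior above amounts to parsing this JSON array and indexing it by the `id` field. A minimal sketch, using two records copied from the response (variable names are illustrative):

```python
import json

# Raw model output: a JSON array of coded comments, as shown above.
raw_response = '''
[
 {"id":"ytc_UgyN6-m7qNBZDAPITqt4AaABAg","responsibility":"ai_itself",
  "reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgwIoRezXEaFEGVi3ld4AaABAg","responsibility":"distributed",
  "reasoning":"mixed","policy":"regulate","emotion":"mixed"}
]
'''

# Parse once, then build a dict keyed by comment ID for O(1) lookup.
records = json.loads(raw_response)
by_id = {rec["id"]: rec for rec in records}

# Fetch the coding for the comment inspected on this page.
print(by_id["ytc_UgwIoRezXEaFEGVi3ld4AaABAg"]["policy"])  # → regulate
```

Indexing by ID also makes it cheap to detect duplicates: if `len(by_id) < len(records)`, the model emitted the same comment ID more than once.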