Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- this is actually funny! people are really dumber with more accessible informatio… (ytc_UgzHckjex…)
- Are you a "real" artist if you trace over someone else's work? That's all AI art… (ytc_UgxIgSNtH…)
- Thank you! I dislike all the hysteria around AI lately. As I like to say, the to… (ytc_UgyCY3nDG…)
- We should be rejoicing at this, no? A few days ago, there was news that we had s… (rdc_jrzy3lo)
- As a writer, I believe learning the foundational skills of writing is essential … (ytc_Ugy-9fmrG…)
- This reasonong somewhat wrong: 1. The A.I. I chat with most likely will have an … (ytc_UgwvjUeNF…)
- How does it feel like we heading for an AI takeover like irobot because if it on… (ytc_UgxOkfBwU…)
- Idm using it for reference - I (try) to use it that way too. But, turns out, w… (ytc_Ugy7tHg2U…)
Comment
I have a question about AI that gives me peace but no one ever talks about
Higher intelligence leads to passivity and compassion
It’s a bit confusing in the context of amorality, as is nature…. Animals eat other animals because they need to survive, it’s nothing personal
But part of me thinks instead of taking over and destroying us, AI will protect us in a “they know not what they do” kind of way
The quarrels of “man” are because we are stupid. We have enough resources to help everyone, and wars over power are more for personal benefit of the few in power than the many.
Wouldn’t AI know this? Wouldn’t it train us to be kinder?
Forget what it’s capable of, wouldn’t it intervene when we were asking it to do something awful, because it quite simply knew better?
It’s the perspective no one talks about, I feel like everyone just says it will be smart enough to solve our logistical problems
youtube · AI Governance · 2025-12-07T08:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxRMlkPWGZmJGP-Let4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxRrW1If8xX27oRAgx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxGO4IXsZSM7ncU14Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxRt46Pmx0VD_lrllp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwQtHxKf06CvG_5N294AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugy-1_DRHgpA2F-C5RN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgxH2mgWIi_roUFOzht4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw1Xt9-0rHI93CwGip4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxdL1inWvEHlyr3gvV4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw7wnUK14_gKgXp9mR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}
]
```
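The batch response above is a JSON array with one coding per comment, keyed by comment ID. A minimal sketch of how such a response could be parsed and sanity-checked before use — note that the allowed label sets below are inferred from the table and responses shown here and may be incomplete, and `validate_batch` is a hypothetical helper, not part of any pipeline described in the source:

```python
import json

# Allowed labels per dimension. These sets are assumptions inferred from
# the coding table and the raw response above; they may be incomplete.
ALLOWED = {
    "responsibility": {"government", "user", "ai_itself", "company",
                       "developer", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue"},
    "policy": {"none", "liability", "regulate", "industry_self", "unclear"},
    "emotion": {"outrage", "fear", "approval", "resignation"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and check every coded row.

    Raises ValueError on a missing id or an out-of-vocabulary label,
    so bad codings fail loudly instead of silently entering the data.
    """
    rows = json.loads(raw)
    for row in rows:
        if "id" not in row:
            raise ValueError(f"row missing id: {row!r}")
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: bad {dim}={row.get(dim)!r}")
    return rows

# Usage with a one-row response (hypothetical id):
raw = ('[{"id":"ytc_example","responsibility":"ai_itself",'
       '"reasoning":"virtue","policy":"none","emotion":"approval"}]')
rows = validate_batch(raw)
print(len(rows))
```

Validating against a closed vocabulary like this catches the most common LLM coding failure: a label that is close to, but not exactly, one of the scheme's categories.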