Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "All those dorks a geeks are trying to kill us with AI . . . Its dangerous. Its l…" (ytc_UgxUhnSHJ…)
- "@leihtory7423 Bro even if the chipset was made in Africa they would still be sen…" (ytr_Ugwo2Hk9F…)
- "but we are stuck in the mindset that what we've been doing for last 500 years ha…" (ytc_Ugwgeo8W5…)
- "@CentauriSatoko what harm todays chatgpt can bring to the people? its more about…" (ytr_UgzWDP3y7…)
- "No. AI is not and can never be independent. It's automatic information, not arti…" (ytc_Ugz88LPgz…)
- "Hope this slows Shadiversity down... he has mentioned in the past that he steals…" (ytc_Ugw_-DtWi…)
- "I did the same thing with ChatGPT and asked who exactly is behind all of this - …" (ytc_UgyVYVCR2…)
- "If you want to use an AI for lesson planning and similar tasks, Eduaide is aweso…" (ytr_Ugw5cUNtd…)
Comment
There are no winners to an AI arms race except those few who are able to concentrate power and control the AI. Locking ourselves into the game theoretic mindset of an arms race when it comes to AI is literally the dumbest thing humanity has ever done. Since value judgements like "good" and "evil" are subjective, China has just as valid an argument that the US is "evil" and will use AI for "evil" purposes. If you think, "I'm the only party moral enough to handle this kind of power", then you are the wrong type of person to have that level of power in the first place. Intelligent moral actors would refuse to even pursue such power. I know I would refuse it if it was offered to me. No human should trust themselves with so much power.
youtube · AI Governance · 2025-06-16T15:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugyih1ypEVSyI6BHnJx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwZVixAKvbwJPzrca14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwSHiAovj28AHs40_Z4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxECPaggiLOQMeKohZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzsKByUBG-H16o1pX54AaABAg","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugzr0WZVVhrBwZQDN2R4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugxk7iC65QNgBRm7EVd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzcfMQIF0RO3-JlnmN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwT0E0LopcX6-WyP1V4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwVJY_YsMZVa3olQZp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
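The "look up by comment ID" step above can be sketched in a few lines: parse the raw LLM response (a JSON array of coding records) and index it by the `id` field. This is a minimal illustration, not the tool's actual implementation; `raw_response` excerpts two records from the array shown above.

```python
import json

# Excerpt of the raw LLM response shown above (two records kept for brevity).
raw_response = """
[
  {"id":"ytc_Ugzr0WZVVhrBwZQDN2R4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwZVixAKvbwJPzrca14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
"""

# Build an index from comment ID to its coding record.
codings = {record["id"]: record for record in json.loads(raw_response)}

# Look up the coding for one comment, mirroring the table above.
coding = codings["ytc_Ugzr0WZVVhrBwZQDN2R4AaABAg"]
print(coding["policy"], coding["emotion"])  # regulate fear
```

In practice the response string would be read from wherever the tool stores raw model outputs, and a malformed response would raise `json.JSONDecodeError`, which a real pipeline should catch.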