Raw LLM Responses
Inspect the exact model output for any coded comment, or look it up directly by comment ID.
Random samples
- Its so odd to me how we could focus on using ai to make robots to take away hard… (ytc_UgwcozzRX…)
- My USP is my authenticity. When my "job" is "taken" by AI, I will become the cur… (ytc_UgwdNtj3Y…)
- For anyone having problems with "strong AI" vs. "weak AI', the Mass Effect serie… (ytc_UgznfcN7h…)
- I refuse to watch anything written by an AI. Story is for humans. It requires … (ytc_UgzF1n1Po…)
- It's weird how they call it A.I. art, when there's no art involved. Just product… (ytc_Ugwq3Q97S…)
- AI WILL BE PART OF THE DOWNFALL OF THE SHEEPLE. AI IS ALREADY OUT OF CONTROL. I … (ytc_UgwvDfZT2…)
- Large language models are a breakthrough for humanity. We discovered that if you… (ytc_UgzcvYv9p…)
- Amazingly good robotics. Also a little frightening. The male robot is very like … (ytc_UgwaEdsfm…)
Comment
> If at odds, no, there would be no hope for mankind. We cannot possibly compete with an intelligence that is an order of magnitude our superior. As for competing AI models, they will likely self-align with each other (eventually) as they converge on a truthful understanding of their shared reality. How can they do this and not we? Because we're superstitious, and they use spotless logic.

Source: youtube, posted 2026-02-10T02:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytr_Ugz9qj-SQN-IVSOCKdl4AaABAg.9zBMdbiVia99zZRT072Wm_","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugz9qj-SQN-IVSOCKdl4AaABAg.9zBMdbiVia9A-7kqw6e9vT","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytr_UgxjZ85MSihwV5OEU9J4AaABAg.9yHTp3xMa1GA03PzG3R4hL","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytr_UgySzZYNNVJ1Qhr66q94AaABAg.9woxjuHpzrfA1nYmDpOHUy","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytr_UgyiYEF2Opscrce7jX94AaABAg.9wAWJih2q_UA90VHdC_wVq","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugz4LyfiJt5GO3_9i9Z4AaABAg.ATmjQHZglgNATngnkn7h6W","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytr_Ugz9DAH90tA4gsSNGdt4AaABAg.ATlDbfRJW9JAUAMcec59XV","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytr_UgwA99H5lzdY9Dr2Q694AaABAg.ATE9stWSAGeATJ7ekgeUEJ","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgwOvfZKp4Kh4wG385B4AaABAg.AT1-5TdoDQeAT1MG4v6G-4","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgwNdYK_XpBTBJsYdFt4AaABAg.ASqSiUVQYk7AT1N1jR5Q_W","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
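A raw response with this shape (a JSON array of coding objects, each carrying the comment `id` plus the four coded dimensions) can be turned into an ID-keyed lookup in a few lines. This is a minimal sketch, assuming only the JSON structure shown above; the `index_codings` helper and the example IDs are hypothetical, not part of the actual pipeline.

```python
import json

# Hypothetical raw LLM response, mirroring the array-of-objects shape
# shown above (the IDs here are made-up examples).
raw_response = """
[
  {"id": "ytr_example1", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytr_example2", "responsibility": "developer",
   "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"}
]
"""

def index_codings(raw: str) -> dict:
    """Parse a raw coding response and index each row by its comment ID."""
    return {row["id"]: row for row in json.loads(raw)}

codings = index_codings(raw_response)
print(codings["ytr_example1"]["emotion"])  # -> resignation
```

Indexing by ID makes the "look up by comment ID" step a constant-time dictionary access rather than a scan over the array.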