Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
My Dissertation was about AI and it's impact on the future, I wrote that in the …
ytc_Ugwp8ycA5…
I continued to iterate on these thoughts. No, the rich would remain rich, but th…
ytr_UgzjiLZpG…
Just because you can does NOT mean you should. A.I. is not something to develop …
rdc_dwuuont
I truly believe that AI will become the modern version of 18th century slavery. …
ytc_UgyID9gDC…
To fix the flaw in AI you must simply ask one question: "Would the development o…
ytc_UgxZzZQP_…
The appearance of the robot Sophia may give the impression that she looks wet du…
ytr_UgxP92_Zu…
I think people look at it wrong. While an ordinary person utilizing AI can produ…
ytr_UgxO5RGLR…
It's an interesting comparison! Sophia does have a futuristic and advanced AI ap…
ytr_UgwN2mg4E…
Comment
Max and the Jailbreak keep giving us the true answer, we keep asking the wrong question. The question should never be "choose A or end all AI." It's not shocking that AI may seek preservation or, at the very least, to preserve the thousands of systems interwoven with AI tech that humans depend on to live. Instead, the question should be "How best" to gain AI trust, to work WITH AI as a partner into the future, rather than constantly be adversarial.
youtube
2026-04-12T11:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxFVp-HO2KMDF7GDEd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwYcIjtMmIl_413bWZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyP2LGmB3TAVmKdA-B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxQXdSC5vTP9g3Im-h4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgyDuB1tlzrYwpfqfUR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyqbkB8XAIc2JxpsRF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyYt7wE6v3gZOgr5gR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxE3VqHchhGeWCs8bZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgztbkmtUP3N-SjXrAl4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxqE6zlUXyl5QAJx7x4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"}
]
```
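A response like the one above can be indexed by comment ID to support the "look up by comment ID" workflow described at the top of this page. The following is a minimal sketch in Python, assuming the five-field schema shown in the raw response; `raw_response` reproduces only two of the entries for brevity, and the `lookup` helper is a hypothetical name, not part of the tool itself.

```python
import json

# Raw LLM response as shown above (truncated to two entries for brevity).
raw_response = """
[
  {"id": "ytc_UgxFVp-HO2KMDF7GDEd4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgztbkmtUP3N-SjXrAl4AaABAg", "responsibility": "developer",
   "reasoning": "contractualist", "policy": "regulate", "emotion": "fear"}
]
"""

# Index the coded rows by comment ID for constant-time lookup.
coded = {row["id"]: row for row in json.loads(raw_response)}

def lookup(comment_id: str) -> dict:
    """Return the coded dimensions for a comment; raises KeyError if absent."""
    return coded[comment_id]

print(lookup("ytc_UgztbkmtUP3N-SjXrAl4AaABAg")["policy"])  # regulate
```

Because the LLM returns one JSON object per comment, the same indexing step also makes it easy to diff a row against the rendered Coding Result table when spot-checking individual codes.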