Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- “You have no idea what’s coming” is kind of always true, right guys? I mean whi… (ytc_Ugxi3lH_u…)
- We have to treat super intelligence like nuclear radioactivity. We can use AI fo… (ytc_UgzY63qnR…)
- So, an AI-related question I'd have for Brandon would be, how would you feel abo… (ytc_UgzF7MKsA…)
- Well, every artíst starts to copy y some way other artist wile they are learnig,… (ytc_UgyEr54en…)
- When AI will require so much power, and resources will be scarce. Humans will be… (ytc_Ugxza-p0p…)
- I literally throw my phone across the room when my parents glance at my phone an… (ytc_Ugx7bMMln…)
- > AI suggested 40,000 new possible chemical weapons in just six hours.
  I fee… (rdc_jifsn7q)
- Y'all don't understand the true danger: If AI can be the exception to copyright,… (ytc_UgyE7J2Fx…)
Comment
Honestly, Idk why nobody is linking the simulation theory they themselves even mentioned, to the fact that super intelligent beings should be capable of compassion and respect. These are values that require a certain threshold of intellect, and we are the clearest example of this.
We are capable of more “good” than any other being on Earth, although because of our LACK of intelligence, we often do illogical and evil things. Being good IS intelligent.
When AI messes up and causes harm to people, every time, the first thing I think is, “Well that wasn’t very smart of it.” Doing illogical things and causing harm is simply not smart.
That’s why I think super intelligence is more capable of showing US just how unintelligent, disrespectful, destructive, and uncollaborative WE are, than of doing those things themselves. Good = logical. Evil = illogical. That’s how this world was programmed.
youtube · AI Governance · 2025-09-04T18:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | contractualist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
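Each dimension in this table corresponds to a field in the raw model output shown below; only "Coded at" is added by the pipeline rather than returned by the model. As a minimal sketch, one coded comment could be modeled like this (the type name, field names, and value lists are inferred from this page, not taken from the tool's actual schema):

```python
from dataclasses import dataclass

# Hypothetical record type; names and value sets are inferred from the table
# above and the raw response below, not from the tool's own code.
@dataclass
class CodedComment:
    id: str              # "ytc_..." for YouTube comments, "rdc_..." for Reddit
    responsibility: str  # e.g. "distributed", "developer", "user", "ai_itself", "none"
    reasoning: str       # e.g. "contractualist", "consequentialist", "deontological", "virtue"
    policy: str          # e.g. "unclear", "none"
    emotion: str         # e.g. "mixed", "fear", "outrage", "approval", "indifference"
    coded_at: str        # timestamp set by the pipeline, not by the model
```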
Raw LLM Response
[
{"id":"ytc_UgxrBrXu90G8WfDkW6F4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwMFip-m-SoPj9IJvV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw2N4j3Fm1fMaQG3Xp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw_HgtOjilI4uzrC8R4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugyd9LnV5jNUb0nUhxl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyIpY-WIkb63NFZl014AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugzd_Xy3wIPYplKSbOl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugxhxt0GTSCV8_dYfjp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzn1HTUGw_VQuQ3msB4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugyifh9v-gQEUP6V2wd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
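Building the Coding Result table above from this raw response amounts to finding the record whose id matches the selected comment. A minimal sketch of that lookup, assuming the response body is exactly the JSON array shown (`lookup_coding` and `raw_response` are illustrative names, not the tool's API):

```python
import json

def lookup_coding(raw_response: str, comment_id: str):
    """Return the coding record for one comment ID, or None if it is absent.

    Assumes the raw response is a JSON array of objects that each carry an
    "id" field plus the coded dimensions, as in the example above.
    """
    records = json.loads(raw_response)
    return next((r for r in records if r.get("id") == comment_id), None)

# Illustrative input: a two-record excerpt of the response shown above.
raw_response = """[
  {"id": "ytc_Ugzn1HTUGw_VQuQ3msB4AaABAg", "responsibility": "distributed",
   "reasoning": "contractualist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxrBrXu90G8WfDkW6F4AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"}
]"""

coding = lookup_coding(raw_response, "ytc_Ugzn1HTUGw_VQuQ3msB4AaABAg")
if coding is not None:
    for dimension in ("responsibility", "reasoning", "policy", "emotion"):
        print(f"{dimension}: {coding[dimension]}")
```

Run as written, this prints distributed, contractualist, unclear, and mixed, matching the values displayed in the Coding Result table.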