Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- rdc_jg8eq04: It looks like ChatGPT has easily surpassed the sentience level of most of our cu…
- ytc_Ugwh99_7C…: Asking the waiter at a restaurant for a caesar salad doesnt make you a cook. And…
- ytc_Ugx-VBQNY…: At the end of the day, how do any of these studies actually matter? We said the …
- rdc_fvymq29: This really depends on how it's contracted and what facial recognition is being …
- ytc_UgxbzZndg…: An AI that will mimic AGI will exist but a true AGI will never exist. AI is just…
- ytc_UgybOD_fn…: I am honestly so f***ing disappointed by what AI art has become. It used to be a…
- ytc_Ugz18Sn6D…: "AI Scientist are terrified. This is not only possible, but likely to happen". …
- ytc_Ugyd4RhJt…: AI in its current form is sort of a mixed bag of amazing features, okay features…
Comment
We are locked into a nuclear arms race, this time it is AI. However, albeit horrifying, Nuclear weapons can be controlled. ASI on the other hand is something completely different. Humanity is pushing as hard as it can to create a super intelligence, knowing full well, that that superintelligence will destroy us. You cannot control something for very long that is smarter than you, and nobody wants to stop creating it, because if you stop, the 'other guy' will beat you too it. The irony is astounding.
As far as morality, there are a few researchers honestly sounding alarms and not moving from one cash cow to another. The one everyone should be paying attention to is Roman Yampolskiy, PhD, University of Louisville. He is the number 1 AI safety guy. He has said that NO company is working on the 'stop switch', we will achieve AGI soon, the recursively ASI will come shortly thereafter. His future predictions...well let's just say, it's the stuff of nightmares. Look him up.
youtube · AI Governance · 2026-03-18T17:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
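A coded record like the one above can be sanity-checked before it is stored. This is a minimal sketch, with allowed values inferred only from the codes visible on this page (the real codebook may define more); the `SCHEMA` and `validate` names are illustrative, not part of the tool:

```python
# Allowed values per dimension, inferred from the records shown on this
# page; the actual codebook may differ.
SCHEMA = {
    "responsibility": {"none", "ai_itself", "company", "developer",
                       "user", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist",
                  "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self",
               "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference",
                "resignation", "mixed", "unclear"},
}

def validate(record: dict) -> list[str]:
    """Return the dimension names whose value falls outside the schema
    (a missing dimension is also flagged)."""
    return [dim for dim, allowed in SCHEMA.items()
            if record.get(dim) not in allowed]

record = {"responsibility": "distributed", "reasoning": "consequentialist",
          "policy": "regulate", "emotion": "fear"}
print(validate(record))  # []
```

An empty list means the record conforms; any unexpected or missing value is reported by dimension name.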
Raw LLM Response
```json
[
  {"id":"ytc_Ugwg1IREA1Yk8wYD_b94AaABAg","responsibility":"none","reasoning":"mixed","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzsKmMJL9bT7DUyB8l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyfKTCIb1co2g1Glwt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_Ugx4VUYQrncQCgnQuDx4AaABAg","responsibility":"company","reasoning":"mixed","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgyWEt5vd1Nqmy_lbK54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwvQtW_pFMYcQLGKk14AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugwn33U0iDSszytjN6J4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz2zD7BCpSzRYZzInB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgywuKKpI_OOMyUS8v14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzdKjQ5deBJfbd-gv54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
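The raw model output is a JSON array of per-comment codes, so looking a comment up by its ID amounts to parsing the array and indexing it. A minimal sketch, assuming the response text is available as a string (the `raw_response` sample here is a truncated stand-in for the full output above, and `index_codes` is an illustrative name, not part of the tool):

```python
import json

# A truncated stand-in for the raw model output shown above.
raw_response = """
[
  {"id": "ytc_Ugwg1IREA1Yk8wYD_b94AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzsKmMJL9bT7DUyB8l4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]
"""

def index_codes(response_text: str) -> dict:
    """Parse the model's JSON array and key each record by comment ID."""
    records = json.loads(response_text)
    return {record["id"]: record for record in records}

codes = index_codes(raw_response)
print(codes["ytc_Ugwg1IREA1Yk8wYD_b94AaABAg"]["policy"])  # regulate
```

`json.loads` raises `json.JSONDecodeError` if the model returns malformed JSON, which is worth catching when batch-processing many responses.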