Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "@empyrean196 Completely wrong. AI thinking has nothing to do with programming, …" (ytr_UgxTlTW8x…)
- "AI drivers will be safer and make delivery cheaper. Sorry guys being luddites wo…" (ytc_UgwZKOxrP…)
- "This I gotta hear, i have plans to study radiologist. This field is all hands ba…" (ytr_UgwFdVb-F…)
- "I heard one argument make the point that AI should be used for the boring stuff …" (ytc_Ugx5QYck0…)
- "And now, AI is more likely to replace white collar jobs like his sooner than blu…" (ytc_UgyhK9T4i…)
- "What Musk forgets in his robot utopia is that humans want to be purposeful and a…" (ytc_UgyK5eKap…)
- "You know what else poisons AI othet AI images 37 iterations it is completely for…" (ytc_UgxfbRlPz…)
- "I think there real people cause a robot wouldn't look at the camera it would foc…" (ytc_Ugw-NF6Bz…)
Comment
I asked the following question to ChatGPT:
« According to you, is it possible to create a perfect safety for super intelligence, perpetual safety as it keeps improving? If no why, if yes how. Answer for a non expert, then for an expert ». Here is his answer for a non-expert:
Short answer:
No, it’s not possible to create perfect, perpetual safety for a superintelligence that keeps improving itself.
Why not?
• Think of it like raising a child who never stops growing smarter, and who can re-write their own brain whenever they want.
• You might teach them rules, but once they’re smarter than you and can change their own rules, you can’t be 100% sure they’ll always stay safe.
• Computers are like that too: once they get the ability to change themselves and invent new strategies, you can’t prove they’ll always follow the “safe path” forever.
So what can we do?
• Instead of perfect safety, we aim for good enough safety: checks, limits, and “emergency stop” buttons that make it very unlikely something goes wrong.
• It’s like building airplanes — they’re not perfectly safe, but layers of engineering and oversight make them extremely reliable.
• With AI, the same idea applies: you combine safeguards, monitoring, and limits to reduce risks, even if perfection is impossible.
For the experts,
Short answer:
No, perpetual perfect safety is formally impossible to guarantee for a recursively self-improving superintelligence.
Hmmm 🤔
youtube · AI Governance · 2025-09-06T05:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwAZ1MTxSna7HJroaB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzCjjcrWrWB5lVHDLd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwxKAMCwz8lep7w0714AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx0kCVmg1KxqFiIUPd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzJGnxpYCGb25CECjN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugyv6Zc9bth551xMiZ14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxMeKF9dCwDVd6DdY54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyWQY4tJYAALq70EC94AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgzThRXluJvW2EFPgvl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_Ugxfn2ppd0G_TtROjC94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
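The raw response above is a JSON array of coded records, one per comment, keyed by comment ID. A minimal sketch of how such a response could be parsed and validated before populating the "Coding Result" view is shown below. The controlled vocabularies in `ALLOWED` are hypothetical, inferred only from the values visible in this sample; the real codebook may permit other values.

```python
import json

# Hypothetical controlled vocabularies, inferred from the sample output above.
# The actual codebook may define additional or different values.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "developer", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "approval", "resignation", "mixed", "indifference"},
}

def parse_and_validate(raw: str) -> dict:
    """Parse a raw LLM coding response and index records by comment ID,
    rejecting any record with an out-of-vocabulary code."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec.get('id')}: unexpected {dim} value {rec.get(dim)!r}"
                )
        # Store the four coded dimensions under the comment ID for lookup.
        coded[rec["id"]] = {k: v for k, v in rec.items() if k != "id"}
    return coded

# Usage with a one-record example (hypothetical ID):
raw = ('[{"id":"ytc_example1","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
coded = parse_and_validate(raw)
print(coded["ytc_example1"]["policy"])  # -> regulate
```

Indexing by ID is what makes the "Look up by comment ID" feature above cheap: each coded comment becomes a constant-time dictionary lookup.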