Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a response by comment ID, or inspect one of the random samples below.
- `ytr_UgzOgPb3O…` — @laurentiuvladutmanea Majority of artists aren't creative. They're just like the…
- `ytc_UgwBNHzcq…` — A program is not "a person". It's ridiculous to say such a thing. But I will gra…
- `ytc_Ugx8NaFqL…` — e621 is doing better than other places now (ai has been banned for a long time c…
- `ytc_UgxCxyPJS…` — In the past, we have developed technologies to help humans and reduce their labo…
- `rdc_kozy47c` — AI adopting the Gandhi approach to nukes is not something I had on my 2024 bingo…
- `ytc_UgyPm6zVq…` — sure sure, im sure a soulless generated image will be revered centuries from now…
- `ytc_Ugzk3b74r…` — I cannot believe the amount of stupidity in comments. It is my first thime to en…
- `rdc_fvz7vi2` — These laws need to be setup now not after the technical has been used for a deca…
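The sample previews show truncated comment IDs (e.g. `ytc_Ugx8NaFqL…`), so a lookup against the stored records has to match by prefix. A minimal sketch of that lookup, assuming coded records are kept as a list of dicts with an `id` field as in the raw responses on this page; the sample records here are hypothetical:

```python
# Hypothetical in-memory store mirroring the shape of the coded records;
# real records carry the full dimension set (responsibility, reasoning, ...).
records = [
    {"id": "ytc_Ugx8NaFqLabc123", "emotion": "mixed"},
    {"id": "rdc_kozy47c", "emotion": "fear"},
]

def lookup(prefix: str) -> list[dict]:
    """Return every coded record whose comment ID starts with the prefix.

    Prefix matching is needed because the UI truncates IDs with an ellipsis.
    """
    return [r for r in records if r["id"].startswith(prefix)]

print(lookup("ytc_Ugx8NaFqL"))  # matches the first record only
```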
Comment

You mainly describe Geoffrey Hinton's AI sub-goal dystopia. Fair point. In this scenario, the real danger lies in development and, more importantly, self-improvement heading in dangerous directions.

I am also fascinated by the almost silent transition from classic AI-god narratives to LLMs. On the other hand, those narratives are an available blueprint for how to behave toward, and how to perceive, these magic (Clarke's third law) things.

For vital processes in nuclear power plants, surgery, traffic, vaccine development/distribution and so on, I'm actually not that worried. The effort to make neural networks predictable enough, combined with rule-based checks, will be high enough, I guess.

The bigger danger I see is in complex systems that are less clearly of a "closed shape", such as the political opinions and demands of billions of people. That is simply different from an engineer's work environment, and it is much harder to find non-dangerous AI interventions there.

youtube · AI Moral Status · 2025-10-30T21:0… · ♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response

```json
[
  {"id":"ytr_UgzczosQYWlhu4gNJCl4AaABAg.AOv-s3FOmrWAOvlatiK76M","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytr_Ugx3nSuDFDjpcBaDBdF4AaABAg.AOv-oK_sjhJAOv9g7gfeT1","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytr_UgzUrlFSrmKEOxF9n-N4AaABAg.AOv-_TI2mTXAOv80RLGal7","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgzUrlFSrmKEOxF9n-N4AaABAg.AOv-_TI2mTXAOv8HYMTXbX","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytr_UgzUrlFSrmKEOxF9n-N4AaABAg.AOv-_TI2mTXAOvA2T7sLay","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytr_UgzUrlFSrmKEOxF9n-N4AaABAg.AOv-_TI2mTXAOvAhqaz2Go","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytr_UgzUrlFSrmKEOxF9n-N4AaABAg.AOv-_TI2mTXAOvDOnLnfqS","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_Ugy-H-lkhzRZ5AlKyL94AaABAg.AOv-O0chSbaAOwasIs7OId","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytr_UgwmIejXdDmc3nz1Zy54AaABAg.AOv-G3_fRc9AOwR7xdoXeU","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytr_UgwYSjrR-3YQGIB4WPl4AaABAg.AOv-FgYG3tmAOv3XHim0r5","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
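Before a raw response is accepted into the coded dataset, each record should be checked against the coding scheme. A minimal validation sketch, assuming the allowed values per dimension are exactly those visible in the responses above (the real codebook may define more categories):

```python
import json

# Allowed values per dimension, inferred from the raw responses shown above;
# this is an assumption, not the project's actual codebook.
ALLOWED = {
    "responsibility": {"none", "company", "developer", "government", "ai_itself"},
    "reasoning": {"mixed", "deontological", "consequentialist", "contractualist"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"mixed", "outrage", "indifference", "fear", "approval"},
}

def validate_batch(raw: str) -> dict:
    """Parse a raw LLM response and index valid records by comment ID.

    Raises ValueError if any record uses a value outside the coding scheme,
    so malformed model output never reaches the dataset silently.
    """
    coded = {}
    for rec in json.loads(raw):
        if not all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            raise ValueError(f"unexpected code in record {rec.get('id')!r}")
        coded[rec["id"]] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Hypothetical single-record batch in the same shape as the response above.
raw = '[{"id":"ytc_example","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}]'
coded = validate_batch(raw)
print(coded["ytc_example"]["reasoning"])  # mixed
```

Indexing by ID also makes the "look up by comment ID" view a plain dictionary access once a batch has been validated.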