## Raw LLM Responses
Inspect the exact model output for any coded comment: look one up directly by its comment ID, or pick one of the random samples below to see its full record. A minimal lookup sketch follows the list.

Random samples:
- `ytr_UgwL0ZSvv…`: "It's not an OC in the sense of what OC actually means if ai was involved in it t…"
- `ytc_UgyXw6flZ…`: "I wonder if Geoffrey Hinton, as the father of this technology, should be held a…"
- `ytc_UgzIcRBkc…`: "Senator, what if... we do allow AI to consume jobs and then we give everyone UBI…"
- `ytc_Ugx_QBvpY…`: "did we like not have an entire movie called minority report about predictive pol…"
- `rdc_lubprxe`: "And you can also drink Russian rocket fuel. But why fly it when you can drink i…"
- `ytc_UgwCuf6G2…`: "If you were in a cooking competition would using instant noodles grant you as ch…"
- `ytc_UgwUsbPCg…`: "I mean, that setting is the first thing I disable when using a new AI tool. If t…"
- `ytc_Ugxu0ZC1P…`: "LLM is now being used for translation purposes by major corporations, government…"
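
For context, here is a minimal sketch of what the ID lookup amounts to, assuming the coded records are stored as one JSON array of objects that each carry the comment `id` alongside the coded dimensions shown below. The filename `coded_comments.json` and the merged record layout are assumptions for illustration, not the actual storage format.

```python
import json

# Assumption: coded records live in a single JSON array, each object
# carrying the comment "id" plus the four coded dimensions.
# "coded_comments.json" is a hypothetical filename.
with open("coded_comments.json", encoding="utf-8") as f:
    records = json.load(f)

# Index once by comment ID so repeated lookups are O(1).
by_id = {rec["id"]: rec for rec in records}

rec = by_id.get("ytc_Ugy0O-CQLXFpU5fOhaN4AaABAg")
if rec is None:
    print("no record for that comment ID")
else:
    print({k: rec[k] for k in ("responsibility", "reasoning", "policy", "emotion")})
```

Keying by ID up front keeps browsing cheap when inspecting many samples in a row.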
### Comment

> If anything, giving robots free will would be the most catastrophic thing you can do. They'd become just like humans, but far more intellect and (probably in some cases) incapable feeling pain, so tackling one out of way if you were to fight one for survival, you'd lose without proper weapons... Although all that depends really on robot too. Free will is what makes us do stupid things out of order, instead of set and calculated process that we would follow... So don't give robots free will or research on that. That's at least some of my thoughts. We don't really need any more chaos than what we already do... (Nukes was one big mistake to learn and research about for example... So now it's just matter of time when one blows up...)

| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Governance |
| Posted | 2024-04-03T15:5… |
### Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
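
The codebook itself is not shown on this page, so the value sets below are inferred from the sample responses; the real schema may allow categories that simply never appear in this batch. A minimal validation sketch under that assumption:

```python
# Value sets inferred from the sample responses on this page; the real
# codebook may define additional categories not seen in this batch.
SCHEMA = {
    "responsibility": {"developer", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none"},
    "emotion": {"fear", "outrage", "approval", "resignation", "indifference"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record passes."""
    problems = []
    for dim, allowed in SCHEMA.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}={value!r} not in {sorted(allowed)}")
    return problems

# Example: the coding result above passes under the inferred schema.
print(validate({"responsibility": "developer", "reasoning": "consequentialist",
                "policy": "regulate", "emotion": "fear"}))  # -> []
```

Running a check like this over every parsed batch catches off-schema labels before they reach analysis.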
### Raw LLM Response
```json
[
{"id":"ytc_Ugy0O-CQLXFpU5fOhaN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzE38gYZJHSrcL1VVh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy9v4p1qYDUI1choll4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwpREhCn8rovUo2Oop4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy1NmYI2gi6bujiTnJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwBcw7J-dzxQxV-SqZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxNRYQKZC35KQf5r2t4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgyJx9Gqe-bHpXiQWoZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugy1pqx7b8-3QdFDK8J4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwcgVrr2LAttvXorXZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
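
The model returns one JSON array per batch, one object per comment. Here is a minimal sketch of parsing such a response; `raw_response` stands in for the captured output string and is not a variable from the actual pipeline:

```python
import json

# raw_response stands in for a captured model output like the array above.
raw_response = """\
[
  {"id": "ytc_Ugy0O-CQLXFpU5fOhaN4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"}
]"""

try:
    batch = json.loads(raw_response)
except json.JSONDecodeError as err:
    # Storing the raw text is what makes malformed batches inspectable at all.
    raise SystemExit(f"unparseable model output: {err}")

for item in batch:
    codes = {k: v for k, v in item.items() if k != "id"}
    print(item["id"], codes)
```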