Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "The time is coming but it will have to be enough for everyone to live on and act…" — `ytc_UgwAJp5Bz…`
- "O:25 Well, I think with a dedicated legal AI one could certainly use it. You jus…" — `ytc_UgxSQBcaS…`
- "Enough is enough! But for billionaires, they want more and they will rip off th…" — `rdc_oi2m2aq`
- "In the second AI asks itself: 'Of what use are humans?' we all are screwed…" — `ytc_UgylXhQ91…`
- "We have made zero progress towards AGI and there's no evidence we ever will, nor…" — `ytc_UgyVBrbuf…`
- "But thats not Real or I also haven't dealt much with American robot soldiers. Bu…" — `ytc_Ugz3DEsTR…`
- "It's definitely a thought-provoking perspective! The interaction between humans …" — `ytr_UgxHGZFUu…`
- "Who would downvote that? Btw, when I refer to China or the CCP, I am not refer…" — `rdc_gx7dha3`
Comment
Cenk, your what if's are trivial at best, for any computer scientist out there. Trivial in a sense that it's among the first things they consider - reducing false positives to an absolute minimum, that's less than humans make. Testing and diagnosing before operation usually comes in a form of countless trials, most of which would never happen in normal life, unless someone actually tries to exploit the system. Any decent programmer can rule out any unpredicted behavior to a level of mathematical impossibility, unless the platform is flawed. But rigorous testing of every part really does rule that out. And if one AI unit misbehaves, you have another one as a backup, to shut the first one down, and a backup of that backup, etc.
Don't worry Cenk, you're not the smartest person in this field. People have actually thought that out pretty well, you don't need to worry, since you won't be able to come up with any scenario that wasn't already tackled by specialists in this field. If you really want to find out more, check out some of Computerphile's videos, especially the ones with the guy with the curly hair.
Platform: youtube · Posted: 2015-07-30T20:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UggLZ6M2z5JqTngCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugi3e0GA4HfH8HgCoAEC","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugizge_QLY4xw3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UghHaxdOpGagangCoAEC","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgieDwB_j4qUKngCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgjvbmAc83_c83gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_Ugia_nrfbV5-d3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UggRCZjvTN6Mg3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgjRoWWlA3PONXgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugh0VoxfRhV-tXgCoAEC","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
```
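A raw response like the one above should be validated before its codes are stored, since the model may emit a malformed record or an out-of-vocabulary label. Below is a minimal sketch of such a check in Python. The allowed values per dimension are inferred from the records shown on this page; the actual codebook may define additional categories, and the function name `validate_response` is illustrative, not part of any real pipeline.

```python
import json

# Allowed codes per dimension, inferred from the sample responses above.
# Assumption: the real codebook may contain categories not seen here.
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "user", "none", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "ban", "industry_self", "liability"},
    "emotion": {"fear", "approval", "indifference", "resignation", "mixed"},
}

def validate_response(raw: str) -> list:
    """Parse a raw LLM response and reject records with missing IDs or unknown codes."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing 'id': {rec!r}")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim} value {value!r}")
    return records

raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"industry_self",'
       '"emotion":"indifference"}]')
print(len(validate_response(raw)))  # → 1
```

Rejecting the whole batch on a single bad record is deliberate here: it forces a re-prompt rather than silently storing a partial coding.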