Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Cenk, your what if's are trivial at best, for any computer scientist out there. Trivial in a sense that it's among the first things they consider - reducing false positives to an absolute minimum, that's less than humans make. Testing and diagnosing before operation usually comes in a form of countless trials, most of which would never happen in normal life, unless someone actually tries to exploit the system. Any decent programmer can rule out any unpredicted behavior to a level of mathematical impossibility, unless the platform is flawed. But rigorous testing of every part really does rule that out. And if one AI unit misbehaves, you have another one as a backup, to shut the first one down, and a backup of that backup, etc. Don't worry Cenk, you're not the smartest person in this field. People have actually thought that out pretty well, you don't need to worry, since you won't be able to come up with any scenario that wasn't already tackled by specialists in this field. If you really want to find out more, check out some of Computerphile's videos, especially the ones with the guy with the curly hair.
youtube 2015-07-30T20:0…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           industry_self
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UggLZ6M2z5JqTngCoAEC", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none",          "emotion": "fear"},
  {"id": "ytc_Ugi3e0GA4HfH8HgCoAEC", "responsibility": "none",        "reasoning": "virtue",           "policy": "none",          "emotion": "approval"},
  {"id": "ytc_Ugizge_QLY4xw3gCoAEC", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none",          "emotion": "fear"},
  {"id": "ytc_UghHaxdOpGagangCoAEC", "responsibility": "user",        "reasoning": "deontological",    "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_UgieDwB_j4qUKngCoAEC", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "ban",           "emotion": "fear"},
  {"id": "ytc_UgjvbmAc83_c83gCoAEC", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_Ugia_nrfbV5-d3gCoAEC", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",          "emotion": "approval"},
  {"id": "ytc_UggRCZjvTN6Mg3gCoAEC", "responsibility": "none",        "reasoning": "deontological",    "policy": "none",          "emotion": "resignation"},
  {"id": "ytc_UgjRoWWlA3PONXgCoAEC", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "liability",     "emotion": "fear"},
  {"id": "ytc_Ugh0VoxfRhV-tXgCoAEC", "responsibility": "distributed", "reasoning": "mixed",            "policy": "none",          "emotion": "mixed"}
]
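A minimal sketch of how a raw response like the one above could be parsed to look up the coding for a single comment by its id. This assumes only that the model returns a JSON array of objects with `id`, `responsibility`, `reasoning`, `policy`, and `emotion` keys; the variable names are illustrative, not part of the pipeline.

```python
import json

# Raw LLM response: a JSON array of per-comment codings
# (abbreviated here to two entries from the batch above).
raw_response = """
[
  {"id": "ytc_UgjvbmAc83_c83gCoAEC", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "industry_self",
   "emotion": "indifference"},
  {"id": "ytc_UggLZ6M2z5JqTngCoAEC", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
"""

# Index the codings by comment id for direct lookup.
codings = {item["id"]: item for item in json.loads(raw_response)}

# Retrieve the coding for the comment shown in this section.
coding = codings["ytc_UgjvbmAc83_c83gCoAEC"]
print(coding["responsibility"])  # developer
print(coding["policy"])          # industry_self
```

If the model ever wraps the array in prose or returns malformed JSON, `json.loads` raises `json.JSONDecodeError`, which is a natural place to flag the comment for re-coding rather than silently dropping it.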