Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Look at what human 'hackers' have been able to do. How can anyone realistically expect that Artificial Intelligence protocols will be impervious to breaches that run the gamut between the benign and the devastating? There are always malevolent souls even smarter than the brilliant pioneers in a field, lurking in the shadows with sinister intent. I truly believe that we have no genuine need for the 'progress' that A.I. is expected to deliver. We've done quite well, exponentially so over the last several decades, and we're already quagmired in complications from having lavished incredible intellectual yields on a dangerously average population. It's saddening that this pursuit is essentially one of mere business and intellectual pride for the real might behind it. Many concerns are being posited, but once this leviathan gains its sea legs it will be impossible to stop, and I believe that a time will come when people will be at its mercy. I doubt that the intangible qualities which profoundly define the human experience will ever be apparent in an entity invented to make human life better by being 'superior' to humans themselves. Something about Sam Altman seems artificial and soulless, and interestingly enough I had already started typing that opinion as Dr. Marcus spoke at 2:31:30 .
youtube AI Governance 2023-07-13T23:4…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          ban
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugw-7ZEL9hrvztaPiMJ4AaABAg", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "unclear",   "emotion": "unclear"},
  {"id": "ytc_UgwgwdA86jUKrmJmWR94AaABAg", "responsibility": "company",     "reasoning": "deontological",    "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_UgwnBA39oCgbfTw7NbZ4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear",   "emotion": "resignation"},
  {"id": "ytc_Ugyg-6LiBLujU_NI1qZ4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",      "emotion": "approval"},
  {"id": "ytc_UgyhKEfqOIIhg1krpCp4AaABAg", "responsibility": "government",  "reasoning": "contractualist",   "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_UgxyYcVM-pNzmi5mJZ14AaABAg", "responsibility": "government",  "reasoning": "consequentialist", "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_UgyIACjU5tJLHxxqEBt4AaABAg", "responsibility": "ai_itself",   "reasoning": "deontological",    "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgywPNbhIXkqbD73kyF4AaABAg", "responsibility": "company",     "reasoning": "contractualist",   "policy": "regulate",  "emotion": "indifference"},
  {"id": "ytc_Ugwmcn2lyzFZ1-gKyV14AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "ban",       "emotion": "fear"},
  {"id": "ytc_Ugy-nNw8dGnaG71cyDx4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",      "emotion": "indifference"}
]
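The raw response is a JSON array with one object per comment id, each carrying the four coding dimensions. A minimal sketch of parsing and validating such a payload — the field names come from the response above, but the allowed value sets are inferred from the values that happen to appear here, not from any published codebook, so treat them as assumptions:

```python
import json

# Allowed values per dimension, inferred from the response above
# (assumption: the actual codebook may define additional categories).
ALLOWED = {
    "responsibility": {"company", "government", "distributed", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "approval", "indifference", "unclear"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: {dimension: value}},
    raising ValueError on unknown dimensions or out-of-vocabulary values."""
    codings = {}
    for item in json.loads(raw):
        cid = item.pop("id")
        for dim, value in item.items():
            if dim not in ALLOWED:
                raise ValueError(f"unknown dimension {dim!r} for {cid}")
            if value not in ALLOWED[dim]:
                raise ValueError(f"unexpected value {value!r} for {dim} on {cid}")
        codings[cid] = item
    return codings

# Usage with a hypothetical single-item response:
raw = ('[{"id":"ytc_x","responsibility":"distributed",'
       '"reasoning":"consequentialist","policy":"ban","emotion":"fear"}]')
print(parse_codings(raw)["ytc_x"]["policy"])  # prints: ban
```

Validating against a fixed vocabulary before storing results catches the common failure mode where the model invents a category label that was never in the prompt.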