Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Option one: the world has to become communist. Well, I know how that feels for Americans.

Option two: you have to reform society with a much stronger social foundation. This would not be communist, but to some idiots it would sound like it. Also, in this version AI has to be open to all. You will see that some people will be needed to supervise AI. Take an AI prompt and an exact scene you envision: the result will never be that scene, only an approximation. People will be needed, but the question is how many, and how those jobs will be paid. Somehow I think competition and greed may not be compatible with AI development.

Option three: you completely boycott or forbid commercial AI.

Last option: let it run wild and watch the world and humanity degrade completely.

It is actually a threat to humanity that a***les like Musk try to build robots fit for human work, instead of for dangerous work that humans cannot do or that is very risky for them. As you can see with autonomous driving: an infrastructure will be built for automatic drivers, not for humans. Cars have been part of this shitty process for 100 years; AI only speeds it up. See how many children play on the streets now... Lastly, I think the billionaires will see their own downfall if they make humans a side note. What is their money worth if there is no one around who wants it? They will literally dig humanity's grave until they are alone and get their final bill.
youtube AI Jobs 2025-10-08T20:1…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  distributed
Reasoning       mixed
Policy          regulate
Emotion         resignation
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxFcHc5l10H4NzxrX94AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "regulate", "emotion": "resignation"},
  {"id": "ytc_UgzU40u9FiI1xCuoObN4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyRKLUmikklwQpjmdF4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "liability", "emotion": "indifference"},
  {"id": "ytc_UgxfxOCuC8viId_mquB4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgwaT98M6bE3rDrGfBd4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgwZiJz0Oilbyz5uk8B4AaABAg", "responsibility": "company", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxmC1j8C7RBkER8Pth4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz4huKs4SOhtcoCBq54AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzMZEPcxAUhZPDFRMB4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "resignation"},
  {"id": "ytc_Ugzu0vwtwBChtqDAR654AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "mixed"}
]
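The raw response is a JSON array of per-comment codes, so the coding table for any comment can be recovered by parsing the array and indexing it by `id`. A minimal sketch (the variable names and the truncation to two entries are illustrative, not part of the original pipeline):

```python
import json

# Raw LLM response: a JSON array of per-comment codes. Truncated to two
# entries here for brevity; the full response above contains ten.
raw = """[
  {"id": "ytc_UgxFcHc5l10H4NzxrX94AaABAg", "responsibility": "distributed",
   "reasoning": "mixed", "policy": "regulate", "emotion": "resignation"},
  {"id": "ytc_UgzU40u9FiI1xCuoObN4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]"""

codes = json.loads(raw)

# Index the codes by comment id so a single comment's coding result can be
# looked up directly, as in the "Coding Result" table above.
by_id = {entry["id"]: entry for entry in codes}

print(by_id["ytc_UgxFcHc5l10H4NzxrX94AaABAg"]["policy"])  # regulate
```

Validating that every entry carries the four expected dimensions (`responsibility`, `reasoning`, `policy`, `emotion`) before indexing would catch malformed model output early.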