Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
As long as we're capitalist, it is completely ethical to stop the development of artificial intelligence. But it won't happen. It will continue, people will be unable to afford food and there will be revolutions. Because, if the end game is that the only progress is for humanity is that we all do "creative work"... then we will be in trouble. Not every human being is designed this way. Not every human being aspires to be a creative person. Then how do they earn a living? Do we simply say "Well, it's you're problem now, starve." do we ask them to compete with the machines in term of wage? Thus revisiting slavery? Seriously? What happens when all the "unskilled" labor falls into the hands of machines? Progress doesn't and shouldn't trump everything, at some point, we are going to reach some limits. All things are finite... everything has an end, including wealth and the economy. Unless we're willing to redistribute wealth or come up with some system that can allow hungry people to live, it's likely we'll see misery like we've never seen before. Because that's humanity. We progress, exploit, expand, exterminate.
youtube 2013-12-19T15:1… ♥ 1
Coding Result
Dimension        Value
Responsibility   company
Reasoning        consequentialist
Policy           regulate
Emotion          outrage
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugj8n5yPl__CPngCoAEC", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",      "emotion": "resignation"},
  {"id": "ytc_UgibeR0m4mjcFHgCoAEC", "responsibility": "none",      "reasoning": "deontological",    "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_Ugi8spYGE-m613gCoAEC", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",      "emotion": "fear"},
  {"id": "ytc_Ugj13WCc-Qt8tngCoAEC", "responsibility": "ai_itself", "reasoning": "deontological",    "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_Ugi2tI0tUVjkP3gCoAEC", "responsibility": "company",   "reasoning": "consequentialist", "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_UggkeKzsNYYfQngCoAEC", "responsibility": "developer", "reasoning": "virtue",           "policy": "regulate",  "emotion": "approval"},
  {"id": "ytc_Ugg3o3NV1weQi3gCoAEC", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",      "emotion": "approval"},
  {"id": "ytc_UgiIvBSXGDKxhXgCoAEC", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgjlCRg6glZQA3gCoAEC", "responsibility": "government","reasoning": "consequentialist", "policy": "liability", "emotion": "resignation"},
  {"id": "ytc_UghGXBZHgCA-13gCoAEC", "responsibility": "ai_itself", "reasoning": "deontological",    "policy": "ban",       "emotion": "fear"}
]
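A minimal sketch of how a raw response in this shape could be parsed and validated, assuming the model returns a JSON array where each record carries the four coded dimensions plus an `id`. The comment id, field names, and values below mirror the batch shown above; the `parse_codings` helper itself is illustrative, not part of the tool:

```python
import json

# Raw LLM response: a JSON array of coding records, one per comment
# (trimmed here to the record for the comment displayed above).
raw_response = '''
[
  {"id": "ytc_Ugi2tI0tUVjkP3gCoAEC",
   "responsibility": "company",
   "reasoning": "consequentialist",
   "policy": "regulate",
   "emotion": "outrage"}
]
'''

# The four coded dimensions plus the comment id.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(text):
    """Parse the model output and index records by comment id.

    Raises ValueError if any record is missing an expected key, so
    malformed model output fails loudly instead of coding silently.
    """
    records = json.loads(text)
    by_id = {}
    for rec in records:
        missing = EXPECTED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')} missing {missing}")
        by_id[rec["id"]] = rec
    return by_id

codings = parse_codings(raw_response)
print(codings["ytc_Ugi2tI0tUVjkP3gCoAEC"]["emotion"])  # outrage
```

Indexing by id makes it easy to join the model's coding back to the original comment, which is how the single-comment "Coding Result" view above could be produced from the batch response.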