Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is a very thoughtful mix of proposed solutions. Usually, when the reality of AI being able to replace the majority of jobs becomes a talking point, the idea of "banning AI" outright is the easiest conclusion to jump to. But banning what could free us of all the toil and suffering that comes with labor would be shooting ourselves in the foot. Jobs give us meaning and fulfilment, but they are currently plagued by a lot of suffering under the pretense of being a necessary part of our lives. Without a job, you are no one: the first question you are asked when meeting someone is "What do you do?" AI can, if we strive to make the best world possible, replace all work which we do not *want to do*. If there is ever a part of your work that you find too unfulfilling or repetitive, or you're simply not feeling like working that Monday morning, an AI will do your work for you and continue to provide value to society in your name. This does, however, require concrete guarantees to all workers that their livelihoods will be preserved and elevated through this technology, not circumvented. In my opinion, elevating the livelihoods of (partly) replaced workers is in itself economically attractive. A bankrupt person has no bandwidth for economic spending, and is of no interest to corporations. Thus, rationally acting corporations might trend toward the greater good on their own.
youtube AI Jobs 2025-10-08T02:2…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzjMfU3MXmOQpY-Sj14AaABAg", "responsibility": "oligarchs", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgyN5eDHkSYMHl9WR3l4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugw1Ocma-pS6sO0C8Ah4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgzrWNZ9KavAwf7h6hd4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgzztBcMFJqWI6vTsH54AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugx2-BoO55xvhl4nE3J4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyEGoNnhFdPbszTocN4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwtyARKiUHMr7vQayx4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwvgKpiVgf9sxJvjIB4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzTVDjioprCP7Hw9eB4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "fear"}
]
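The per-comment values shown in the Coding Result are recovered from this raw response by parsing the JSON array and matching on the comment id. A minimal Python sketch of that lookup (the variable names are illustrative; only the id and the four dimension keys come from the response itself, and the array below is truncated to the one record relevant to the comment above):

```python
import json

# One record from the raw LLM response (array truncated for brevity).
raw = '''
[
  {"id": "ytc_UgzztBcMFJqWI6vTsH54AaABAg",
   "responsibility": "none", "reasoning": "mixed",
   "policy": "none", "emotion": "approval"}
]
'''

# Index the coding records by comment id for direct lookup.
records = json.loads(raw)
by_id = {record["id"]: record for record in records}

# Fetch the coding for the comment shown above.
coding = by_id["ytc_UgzztBcMFJqWI6vTsH54AaABAg"]
assert coding["responsibility"] == "none"
assert coding["reasoning"] == "mixed"
assert coding["policy"] == "none"
assert coding["emotion"] == "approval"
```

The assertions mirror the Coding Result table, which suggests the table is simply a rendered view of the matching record in the raw array.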