Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
What frustrates me is that we (humanity) have a choice as to whether we want an AI future or not, but it is more or less being rammed down our throats whether we want it or not. So the thing we should be asking ourselves is, 'will AI actually make life better?' I think the jury is still out on that. The example of 'self-check out' is given in the video. With a few exceptions, everyone I know much prefers having another human being handle their checkout for them. Not only is this faster it is a much nicer experience as you get to interact with another person. I suppose the argument could be made that the machine is saving you money on groceries, but the lowest cost grocery store I visit has human cashiers, so I don't think its that big a deal. Even if the human cashier ends up costing your a few more cents is it really worth the much worse experience you have? There are so many other downsides to AI I wish society would tap the brakes on it a bit. Its like smart phones, they are really cool technology and they provide a lot of conveniences, but realistically given all the downsides have they made life meaningfully better? I would say it is a wash, but at least they haven't put half of society out of work. My ideal future would see a lot more human to human interaction and less human to computer interaction - and I don't see how AI is going to help get us there.
youtube AI Jobs 2025-11-29T00:1…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        mixed
Policy           regulate
Emotion          resignation
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytc_Ugw59EmDtdCHiugdHhB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgyAayq4Z-pU5nv5Vbh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgwvqJFVZN-APk1dVMl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxkhY1z6TjsZJBZs2R4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}, {"id":"ytc_UgxqTghK73hjW9wVjYB4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugxr88BrCjOpfz_33L54AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgyIqmuSK4wuWPoeKkh4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"resignation"}, {"id":"ytc_UgxoTvlOnTUK8ofPzKd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugwh6-TmXzvRXDCdhxx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugz7zYFbowlH-TmtTRZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"} ]