Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The definition of AGI that Neil proposes is more specific than what most experts would use. Normally it's just something that can perform at or above human level on all kinds of cognitive tasks. It does not need to be something with self-motivation (and definitely not a robot). And I'm sure there will be a practically limitless number of big tasks that we will be very happy to let those AGIs take care of, if we manage to construct them. You could have a billion artificial astrophysicists tirelessly working 24/7 on the issues that you or your institute has deemed worthy of spending a bit of electricity on.

As for the idea that new jobs for humans will appear as old ones disappear, remember that the premise is machines that can do any cognitive task at least as well as humans, so you can't combine that with some notion of unique human ingenuity. But consider how we still value items made and services provided by humans simply because they are human, even when a machine could do the same thing "better": handcrafted items with defects a machine-made item wouldn't have, your child's clumsy drawing, and so on. And we probably wouldn't want to replace legislators, judges, and priests entirely with machines either.

Besides, what's this obsession with "work"? There's a strange idea that we will somehow be worse off if we don't have to spend most of our lives doing labor we wouldn't do unless we were paid for it. If machines replace human labor entirely, costs will shrink down to just the fundamentally limited resources consumed in doing or creating whatever the machine is doing or creating. I think nearly everyone completely underestimates just how large a cost reduction that will be. We're talking a super-yachts-for-pennies kind of cost reduction. So materially we will essentially have a post-scarcity situation, unless politics somehow fails to handle that transition. Some might have a harder time making the mental transition into affluent retirement, but eventually I'm sure we'll manage that too.
youtube AI Moral Status 2025-07-26T18:2…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgyYdg2BFc0BCMUThZF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwUVMzbLx8o8zWoHhJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwQUiak7g5jtck8re54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzMCTmRLUy3cKLMR354AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgztM6sNOho9lxsumz94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwocQGv1J3KfFQCBUF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugy5OeXXDJ44T1b9_aF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxKAtijM8vsTEZsDYt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugzl_Hg-_xsVfymXtlh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyPw6bOL39wnfe0Os94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
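The raw response above is a JSON array with one record per comment in the batch. As a minimal sketch of how the coded dimensions for a single comment could be pulled out of such a response, assuming the pipeline receives it as a plain string (the helper name `extract_coding` and the truncated two-record sample are illustrative, not the pipeline's actual API):

```python
import json

# Illustrative two-record excerpt in the same shape as the batch response above.
RAW_RESPONSE = """[
  {"id": "ytc_UgyYdg2BFc0BCMUThZF4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwUVMzbLx8o8zWoHhJ4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]"""

def extract_coding(raw, comment_id):
    """Return the coding record for one comment id, or None if absent."""
    records = json.loads(raw)
    return next((r for r in records if r.get("id") == comment_id), None)

coding = extract_coding(RAW_RESPONSE, "ytc_UgyYdg2BFc0BCMUThZF4AaABAg")
print(coding["emotion"])  # indifference
```

In practice the model output may not be valid JSON every time, so a production version would wrap `json.loads` in error handling and flag unparseable batches for re-coding.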