Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Anthropomorphizing the whole thing is falling for the whole Altman Grift. Whether it's Booster or Doomer on this stuff: you already make the assumption this whole uper intelligence deus ex machina is inevitable reality _while_ _talking_ _about_ _it_ _in_ _future_ _tense_. And neither of you two here has any more of a clue how intelligence actually works any more than I do. Neither of you seems to have any grasp nor care how this whole transformer LLM stuff even works. There is no understanding - there is just statistics how tokens relate to each other. You completely and utterly ignore the all the failings and shortcomings - starting from what really should be called Bullshitting and for Marketing purposes is called "Hallucination" to the fact that these models are not able to work out logical models, unless they are trained on those models and all their permutations - and the fact that billions of investments in new models in the last two years has not yielded in any real progress on any of these fronts. Most of what we call progress is just companies laying people off going into the recession and calling it automating stuff with AI. But at the same time: None of these Agents and whatever Marketing Bullshit Bingo these companies are using are able to reliably most basic "Agent" Shit, like go out and buy something off of a random internet shop ...
youtube · AI Moral Status · 2025-11-08T11:2… · ♥ 1
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       mixed
Policy          unclear
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwVS0BLNnYB2P9guNl4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwmQ_5MNgGUpzLccvp4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgwmHgmTqeYmeUJMETd4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzXtWJMNsGi7R_0Mxx4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgydiymXaQdIYdi61CB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxUWcrf7s1vGuSCuuJ4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_Ugw2DXQSjXuvzjeNunR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgweB5_gsYvzIqYBzVh4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "resignation"},
  {"id": "ytc_Ugz4FmzHWOgTqj337EV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwQMkurrBx9sp3_7o54AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"}
]
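To inspect the raw response for a single coded comment, the JSON array can be parsed and filtered by comment id. This is a minimal sketch, assuming the raw response is always a well-formed JSON array whose objects carry an "id" key plus the four coding dimensions; the helper name `codes_for` is hypothetical, not part of the tool.

```python
import json

# Excerpt of the raw LLM response shown above (first two entries only).
RAW_RESPONSE = """[
  {"id":"ytc_UgwVS0BLNnYB2P9guNl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwmQ_5MNgGUpzLccvp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]"""

def codes_for(comment_id: str, raw: str = RAW_RESPONSE) -> dict:
    """Return the coded dimensions for one comment id, dropping the id field."""
    for entry in json.loads(raw):
        if entry["id"] == comment_id:
            return {k: v for k, v in entry.items() if k != "id"}
    raise KeyError(comment_id)

print(codes_for("ytc_UgwVS0BLNnYB2P9guNl4AaABAg"))
# {'responsibility': 'none', 'reasoning': 'mixed', 'policy': 'unclear', 'emotion': 'mixed'}
```

The lookup mirrors the Coding Result table above: the first entry's dimensions match the values displayed for this comment.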