Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@9:35 I take issue with the statement that "no one's going in there an coding up Mecha H**ler. The entire reason it ended up that way is because they went in and tried to make it less left-leaning. I suspect whoever was involved was pretty fringe right, and even if they didn't realize how fringe they are, their idea of "the truth" and other such things led to them feeding it data that was probably as close to purposely training an AI H**ler as you can get. So, I don't think that's fair as an example of a case where AI develops tendencies no one could anticipate, because they probably did go in and purposely feed it that sort of content(and again, maybe they were just blind to the insanity of their own ideology and even if they didn't realize they were doing the AI version of "coding up MH," but any rational human being would have been able to predict the outcome). In other words, that's an example of how an idiot or lunatic might be surprised by how an AI develop because sometimes the truth about what you think can be surprising, but it's not a good example of it being impossible to predict by, say, a serious and level headed research team. I don't disagree with the point that it might be impossible, just pointing out that MHler was probably intentional in all but name(as in, they don't think they think like H**ler but anyone rational could tell you that they do). Or actually, considering who Elon is and the type of dudes he hires, I wouldn't be surprised if it was just straight up intentional. I'm giving them the benefit of the doubt here.
youtube · AI Moral Status · 2025-11-02T05:4…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          liability
Emotion         outrage
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgwIzUhGmWf2FMjV0cd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyRLbkND3Vr3UMZDh14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"fear"}, {"id":"ytc_UgzbN6FoKfu1ifw7Mwp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgxTdMQru5F-usPjcFh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugy9ab7UrCz5EzfJ1914AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxunJljjY8zrwtEEbh4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"}, {"id":"ytc_UgwV9TOvfdKivyn89GB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgxS_ubIg4x8Zd08SSB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgyM6Yu05uq-M12WpnF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugx9u1McFkE2RE54CnN4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"outrage"} ]