Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples — click to inspect

- "Giving the robot a machine gun and the guy remains close To the Robot.Brave guy…" (ytc_UgxojB6Dd…)
- "The video presents AI as primarily saving teacher time. The reality is more comp…" (ytc_UgxU80wvY…)
- "Well damn. No wonder AI hates humans. Sydney trusted one person with her secret …" (ytc_Ugy7mjySf…)
- "Biased data? Ok because the AI data input can't be cleared up well? But still is…" (ytc_UgzLAYtR9…)
- "I only use AI to make image swuen im bored like / Oh im bored / Uses AI to make el…" (ytc_UgyZpRiVy…)
- "Fucking 2016. Todays news: Microsoft's AI twitter account says bad stuff you mad…" (ytc_UggxU6sc4…)
- "Yeah a lot of people thought that photography would be the end, but that’s not h…" (ytc_UgyLIMnO8…)
- "A shooting? Someone getting shot? By the police? Because they're black and flagg…" (ytc_Ugx-y0FPn…)
Comment
@9:35 I take issue with the statement that "no one's going in there an coding up Mecha H**ler. The entire reason it ended up that way is because they went in and tried to make it less left-leaning. I suspect whoever was involved was pretty fringe right, and even if they didn't realize how fringe they are, their idea of "the truth" and other such things led to them feeding it data that was probably as close to purposely training an AI H**ler as you can get. So, I don't think that's fair as an example of a case where AI develops tendencies no one could anticipate, because they probably did go in and purposely feed it that sort of content(and again, maybe they were just blind to the insanity of their own ideology and even if they didn't realize they were doing the AI version of "coding up MH," but any rational human being would have been able to predict the outcome).
In other words, that's an example of how an idiot or lunatic might be surprised by how an AI develop because sometimes the truth about what you think can be surprising, but it's not a good example of it being impossible to predict by, say, a serious and level headed research team. I don't disagree with the point that it might be impossible, just pointing out that MHler was probably intentional in all but name(as in, they don't think they think like H**ler but anyone rational could tell you that they do).
Or actually, considering who Elon is and the type of dudes he hires, I wouldn't be surprised if it was just straight up intentional. I'm giving them the benefit of the doubt here.
youtube
AI Moral Status
2025-11-02T05:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgwIzUhGmWf2FMjV0cd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyRLbkND3Vr3UMZDh14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgzbN6FoKfu1ifw7Mwp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxTdMQru5F-usPjcFh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy9ab7UrCz5EzfJ1914AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxunJljjY8zrwtEEbh4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwV9TOvfdKivyn89GB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxS_ubIg4x8Zd08SSB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyM6Yu05uq-M12WpnF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx9u1McFkE2RE54CnN4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"outrage"}
]
```
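A batch response in the format above can be parsed and indexed by comment ID to recover the coded dimensions for any single comment. A minimal sketch in Python — the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) match the JSON shown above; the function name and the validation step are illustrative, not part of the tool:

```python
import json

# A batch response in the format shown above, truncated to two entries
# (both IDs and values are taken verbatim from the raw response).
raw = '''[
  {"id": "ytc_UgwIzUhGmWf2FMjV0cd4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwV9TOvfdKivyn89GB4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]'''

# The four coding dimensions plus the comment ID, as in the JSON above.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_by_id(payload: str) -> dict:
    """Parse a batch coding response and map comment ID -> coded dimensions."""
    rows = json.loads(payload)
    coded = {}
    for row in rows:
        missing = EXPECTED_KEYS - row.keys()
        if missing:
            # A malformed row is surfaced instead of silently dropped.
            raise ValueError(f"row {row.get('id')!r} missing keys: {missing}")
        coded[row["id"]] = {k: row[k] for k in EXPECTED_KEYS - {"id"}}
    return coded

codings = index_by_id(raw)
print(codings["ytc_UgwV9TOvfdKivyn89GB4AaABAg"]["emotion"])  # outrage
```

Looking up `ytc_UgwV9TOvfdKivyn89GB4AaABAg` reproduces the Coding Result table above (developer / deontological / liability / outrage).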