Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Unfolding right in front of our eyes. Im a jr. sys admin for a small to mid size…" (`ytc_UgzKTzSYU…`)
- "Imagine having to study years, just got out of uni, being in debt, just to have …" (`ytc_UgwfxyI01…`)
- "Gamerat Autonomous machines are inherently a non-threat imo, until Artificial in…" (`ytr_UggULq24X…`)
- "As I've always said. AI is psychopathic BY DEFAULT. it's ALL it can every be.…" (`ytc_UgwNJe0ip…`)
- "Boycott and bullying can stop it. Just make everywhere not a safespace for AI br…" (`ytr_Ugx6OkLZv…`)
- "Good luck with your self driving car in a blizzard in an area with no connectivi…" (`ytc_UgxOqGe1-…`)
- "Most creatives don't use this new tool therefore we don't have creative people b…" (`ytc_Ugydy-RPY…`)
- "I don’t remember LLMs actually training on the chat it is currently having with …" (`ytc_Ugwsb-ejS…`)
Comment
This video could be misleading people to think that these AI models would actually 'think' like this. They were obviously jailbroken and prompted to act like this, meaning they just do what their prompter told them to do. There is no evidence of AI models actually 'thinking' bad about humans or 'wanting' to manipulate them.
If you've heard about AI models refusing to shut themselves down, then you were also mislead into thinking that. The scientists were giving the AI multiple tasks at once and the AI had to decide which task was more important, shut down or answer that simple math question.
youtube · AI Moral Status · 2025-06-19T08:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
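The table above corresponds to the per-comment record the coder stores. Below is a minimal sketch of that record as a Python dataclass; the field names are assumed to mirror the JSON keys in the raw response shown further down, and `coded_at` is taken from the table rather than from the model output.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodingResult:
    """One coded comment (sketch; field names assumed from the raw JSON keys)."""
    comment_id: str      # e.g. "ytc_Ugx1iTQXjrBa_ahqgvt4AaABAg"
    responsibility: str  # e.g. "developer", "user", "ai_itself", "none"
    reasoning: str       # e.g. "deontological", "consequentialist", "virtue", "unclear"
    policy: str          # e.g. "none", "regulate"
    emotion: str         # e.g. "indifference", "fear", "outrage", "approval", "mixed"
    coded_at: datetime   # shown in the table above, not returned by the model
```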
Raw LLM Response
[
{"id":"ytc_UgzQRsgKyP3X3Wf_Fe54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwrsBfZCkJREZpgdIl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzO3OG1RhVsDD-pN7N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxqM29CpqwmO7G867N4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwCOh0vYtx3npl7XJ14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx1iTQXjrBa_ahqgvt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyQX89Iq0cWsdfZ32J4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzH7Fnks1HlGgq7vQV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx6kGJhr1Dgzn5Vk5h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz65DbIT5JjevlnKzF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"}
]
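A raw response like the one above is a JSON array of per-comment codings, which is what drives the look-up-by-comment-ID view. A minimal sketch of that lookup, assuming the model returns a plain JSON array exactly as shown (real responses may first need surrounding code fences or whitespace stripped before parsing):

```python
import json

def parse_raw_response(raw: str) -> dict[str, dict]:
    """Parse one raw batch response (a JSON array of codings)
    into a lookup table keyed by comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

# Example using two records from the response above.
raw = '''[
  {"id":"ytc_Ugx1iTQXjrBa_ahqgvt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz65DbIT5JjevlnKzF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"}
]'''

codings = parse_raw_response(raw)
result = codings["ytc_Ugx1iTQXjrBa_ahqgvt4AaABAg"]
print(result["responsibility"], result["reasoning"], result["emotion"])
# -> developer deontological indifference
```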