Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- @ytuseraccount thanks for the insight!! I just saw a video on how "all the prob… (ytr_Ugy6-ec3Y…)
- so, both people look completely different, and the IDs were both valid, yet the … (ytc_Ugz07ISHj…)
- It doesn't matter if you have one AI that can do everything or a million AI high… (ytc_UgxMYyd1y…)
- kids wearing a cast but can hold himself up with the hurt hand. They didnt thin… (ytc_Ugyp17MGE…)
- Seriously. And experienced or talented artist that embraces new tools always has… (ytc_UgxJzuiHw…)
- You waste billions of dollars in ram to use ai when you can pay someone 10 dolla… (ytc_UgzS7nZhH…)
- Ok let's now see this driverless truck support an HDD rig and connect half a mil… (ytc_UgxBfU05p…)
- Yeah sometimes I spend WAY too much time hunting down a bug the AI introduced. S… (ytr_UgxHSsrP8…)
Comment
You asked AI to roleplay as a corrupt character, then got upset when it gave a corrupt response in character. That’s not AI being dangerous—that’s you misrepresenting it to push fear. If you told a story where the villain solves a problem immorally, would you blame the story itself?
AI has filters to prevent real harm, and it doesn’t have desires or intentions. But if you keep trying to bait it with manipulative framing, the only thing you’re exposing… is yourself.
youtube · AI Moral Status · 2025-03-21T06:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgwAkmv4a7o71SeOxJF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwnv5kCp6GNTlAF8GN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz8dADB_akeb7z_Ddx4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgyL27Yebmse6HxM8yx4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy9fETxS9nUejr9qXV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugznmz1FB_tU_ScnXVx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgyTBPBYSWLRZSo9jUZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgztQdM-n19qGjPD7vV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwSCzz7kgt0LhMePO94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzPRV3hyLkhFd4B_pl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}
]
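The raw response above is a JSON array with one row per comment, and each row carries the four coding dimensions shown in the result table. A minimal sketch of how such output could be parsed, validated against the schema, and indexed for lookup by comment ID — the allowed values below are inferred only from this sample, and the full codebook may contain values not seen here (assumption):

```python
import json

# Allowed values per dimension, inferred from the observed output;
# the real codebook may include additional values (assumption).
SCHEMA = {
    "responsibility": {"none", "user", "company", "developer", "ai_itself"},
    "reasoning": {"mixed", "consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "unclear", "industry_self", "regulate", "ban"},
    "emotion": {"approval", "outrage", "resignation", "mixed", "indifference", "fear"},
}

def validate_coding(raw: str) -> dict[str, dict]:
    """Parse a raw LLM response and check every row against the schema.

    Returns an id -> row index (supporting lookup by comment ID).
    Raises ValueError on malformed JSON, missing fields, or off-schema
    values, so bad model output is caught before it is stored.
    """
    rows = json.loads(raw)
    if not isinstance(rows, list):
        raise ValueError("expected a JSON array of coded comments")
    index = {}
    for i, row in enumerate(rows):
        comment_id = str(row.get("id", ""))
        # Observed IDs start with ytc_ (comment) or ytr_ (reply).
        if not comment_id.startswith(("ytc_", "ytr_")):
            raise ValueError(f"row {i}: missing or malformed comment id")
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"row {i}: {dim}={row.get(dim)!r} not in schema")
        index[comment_id] = row
    return index
```

With the response above, `validate_coding(raw)["ytc_Ugwnv5kCp6GNTlAF8GN4AaABAg"]` would return the row coded `responsibility=user`, `emotion=outrage` — the same values shown in the Coding Result table.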