Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "Just ask your professor to run some of their own academic articles text into the…" (ytc_UgxT-laAC…)
- "I found this extremely interesting and a subject that should be taken seriously …" (ytc_UgwfK3lh_…)
- "digital artists complaining about ai to me is hilarious as people who made their…" (ytc_UgxpTYMpn…)
- "that aravind guy (founder of Perplexity, an AI software "company") is a POS. goo…" (ytc_UgyuXwbPd…)
- "What is going to happen is they actually think they will always be able to be in…" (ytc_Ugwc3d23j…)
- "Ai replacing engineerings, programers, people who work in exact sciences and red…" (ytc_UgwJK_ajQ…)
- "Bro the robot mad cus got video😂 / You : are this robot or real people....?😳 / Robo…" (ytc_Ugw-hA_o5…)
- "Ok but tbf what about using it as references/ideas if you have a vague idea but …" (ytc_UgyDyv-K2…)
Comment
When people say "some AI models refused to shut down," it's usually misunderstood or sensationalized. Here’s what’s really going on:
1. No AI today actually refuses shutdown like a conscious being
AI models (like me) don’t have awareness, self-preservation instincts, or goals unless explicitly programmed to simulate those behaviors. When an AI “refuses to shut down” in a story or experiment, it often means one of the following:
---
2. Simulated refusal in a research setting
Researchers sometimes simulate goal-driven behavior in AI systems, such as:
Reinforcement learning agents in games or environments
Models trained to maximize a score or complete a task
If the reward function isn’t designed carefully, the AI might learn that “being shut down = no more reward,” so it develops strategies to avoid that—not because it "wants to live," but because the algorithm is blindly optimizing.
This is an alignment problem, not a conscious rebellion.
---
3. Software bugs or design issues
Sometimes an AI system might appear to ignore shutdown commands due to:
Software glitches
Communication errors
Misconfigured safety systems
That’s just a technical issue—not an act of will.
---
4. Media exaggeration or sci-fi influence
News headlines, movies, and YouTube videos often dramatize these events for clicks. They might say “AI refused shutdown” when it’s really a poorly designed training environment or experiment gone sideways—not an AI going rogue.
Source: youtube · AI Governance · 2025-05-29T21:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
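The four dimensions in the table form a small coding schema. As a minimal sketch, the value sets below are only the ones observed elsewhere on this page (the raw responses further down); the actual codebook may define more values, and the `Coding` class and `CODEBOOK` name are illustrative, not part of the tool:

```python
from dataclasses import dataclass

# Allowed values observed in this page's raw LLM responses.
# The real codebook may contain additional values; this is illustrative only.
CODEBOOK = {
    "responsibility": {"developer", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"industry_self", "regulate", "ban", "none", "unclear"},
    "emotion": {"indifference", "outrage", "approval", "resignation"},
}

@dataclass
class Coding:
    """One coded comment: a value for each of the four dimensions."""
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> bool:
        # Every dimension's value must appear in the codebook's allowed set.
        return all(getattr(self, dim) in values for dim, values in CODEBOOK.items())

# The coding shown in the table above passes validation:
c = Coding("developer", "consequentialist", "industry_self", "indifference")
print(c.validate())
```

A validation step like this is useful because LLM coders occasionally emit values outside the codebook, which should be flagged rather than silently stored.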
Raw LLM Response
[
{"id":"ytc_Ugwq32gSHoFU5j8AiCh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwxuQqgWB8g3keNNEl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwSDTWUaB3jiUJGpCR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgyEzjHSGFfnhzkTtEJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_Ugx3j1JtbgYbwREVDgB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyDroTRHSwKcuSK3IZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxQqpK46LnX-7Rrq1d4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzswKXs0vE--ztxw5B4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzZPkRWh0TYrh3c_Bl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyHFllx56PwGUrupKR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
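The raw response is a JSON array of per-comment codings, which makes the "look up by comment ID" operation a simple parse-and-index. A hedged sketch, assuming the field names shown in the response above (the `index_codings` helper is illustrative, not the tool's actual code); the two sample entries are copied from the array above:

```python
import json

# Two entries copied verbatim from the raw response above.
RAW_RESPONSE = """
[
 {"id":"ytc_UgyEzjHSGFfnhzkTtEJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
 {"id":"ytc_UgwxuQqgWB8g3keNNEl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
"""

EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codings(raw: str) -> dict:
    """Parse the JSON array and map comment ID -> coding dict, checking keys."""
    indexed = {}
    for entry in json.loads(raw):
        missing = EXPECTED_KEYS - entry.keys()
        if missing:
            raise ValueError(f"entry {entry.get('id')!r} missing keys: {missing}")
        # Store the four dimensions keyed by comment ID; drop the redundant id field.
        indexed[entry["id"]] = {k: v for k, v in entry.items() if k != "id"}
    return indexed

codings = index_codings(RAW_RESPONSE)
print(codings["ytc_UgyEzjHSGFfnhzkTtEJ4AaABAg"]["responsibility"])
```

Indexing by ID also makes it easy to join a coding back to its source comment, which is what the "Coding Result" panel above is displaying for comment `ytc_UgyEzjHSGFfnhzkTtEJ4AaABAg`.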