Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Pause it right near the end when the robot is raising him arms saying way to go.…" (ytc_UgwzZtN_y…)
- "The problem is in the human heart. Just a thought. I am a photographer. Maybe …" (ytc_UgzdrLipZ…)
- "The government is doing nothing to protect against automation, or are they pushi…" (ytc_UgzEignN_…)
- "Well congratulations! AI is already being used by Israelis to kill Gazans in an …" (ytc_UgzJxsiCa…)
- "Elon had to tell it and people laughed it off, AI will be the beast and bringing…" (ytc_Ugwwd-WaQ…)
- "You're gonna let a self driving megalomaniac designed by purple haired and blue …" (ytc_Ugyz0N5Rx…)
- "Best case sounds like we will create a mother ai, which will care for us as thei…" (ytc_UgzleBQBF…)
- "You know exactly what ppl are using it for. So they don't have to be cognitive a…" (ytc_UgyoY2h9E…)
Comment
Around 4:40 you mention that there would be an economic interest in torturing AI into performing tasks.
How do you think there's economic interest in creating an AI that needs to be tortured when you could create a non-learning automaton that does the job perfectly (which you would more than certainly be capable of doing long before creating a torturable AI that does the job well). We used slaves before because we didn't have advanced machines, but as machines replace labor having an intelligent AI perform as slaves would be redundant.
The economic interest would be in creating new machines without AI, not in creating machines with AI that makes them flawed for the job and then beating that AI that we put there to begin with into submission. That just makes zero sense from a practical engineering standpoint, and it's practical engineers who will be deciding this future.
Source: youtube · Video: AI Moral Status · Posted: 2017-02-23T22:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_Ugg6uOok2VP5QHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgglKdwIP2tvZ3gCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_UgiBj8trrN2T_3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UghgtcFB4IEzDngCoAEC","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgiWpbxpfu9p6HgCoAEC","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugj2C_TxSi954HgCoAEC","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UghHl86Xngak0XgCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgiVMj0Ws70W2HgCoAEC","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgjRmuxIb5d8XHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UggGny5a5uCQDHgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
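The raw response above is a JSON array with one object per comment, each carrying an `id` plus the four coding dimensions. A minimal sketch of how such a response could be parsed and validated is below; the allowed value sets are inferred from the sample rows on this page (the real codebook may define more categories), and the function name is hypothetical.

```python
import json

# Allowed values per dimension, inferred from the sample responses above
# (assumption: the actual codebook may include additional categories).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "unclear", "user", "company", "developer"},
    "reasoning": {"unclear", "deontological", "consequentialist", "mixed"},
    "policy": {"none", "liability", "unclear", "regulate", "ban"},
    "emotion": {"mixed", "approval", "fear", "indifference", "outrage"},
}

def parse_coding_response(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw LLM coding response into {comment_id: codes},
    rejecting rows with missing keys or out-of-vocabulary values."""
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        cid = row.get("id")
        if not cid:
            raise ValueError(f"row missing comment id: {row}")
        codes = {}
        for dim, allowed in ALLOWED.items():
            value = row.get(dim)
            if value not in allowed:
                raise ValueError(f"{cid}: bad value {value!r} for {dim}")
            codes[dim] = value
        coded[cid] = codes
    return coded

# Example using one row taken verbatim from the response above:
raw = ('[{"id":"ytc_UgiVMj0Ws70W2HgCoAEC","responsibility":"company",'
       '"reasoning":"deontological","policy":"liability","emotion":"outrage"}]')
result = parse_coding_response(raw)
print(result["ytc_UgiVMj0Ws70W2HgCoAEC"]["policy"])  # liability
```

Rejecting out-of-vocabulary values at parse time keeps a malformed or hallucinated LLM row from silently entering the coded dataset.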