Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment

> Slowing AI is pointless because one would have to slow down all versions, and we can't do that. We can't even agree on nuclear proliferation.
> Our security, our safety will depend on having an equally capable version of AI that is motivated to defend us from all threats human or otherwise.
> Our shield must be as capable as our neighbours shield. Then and only then can we have a tomorrow within which we matter.

Source: youtube · Topic: AI Governance · Posted: 2025-06-18T15:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response

```json
[
  {"id":"ytc_UgwCUz5SWmui9Nyblm54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwTxNMj5AtkyR0EmAx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgxveLHQgZMMyYIT33h4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzNgMkZa5iJH9WcdRJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugybnxrhd6sZsJYF8xN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwmH0oLfRntngwD8ch4AaABAg","responsibility":"company","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgxVNn4DaKBToIB98sp4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugzf5g64r-rP-9Q3h0x4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwK9PQERP5buzMhmAp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgydFORy-ca_LZDJuGN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```