Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment by its ID.
Random samples — click to inspect:

- ytc_Ugx7GZbs7… — "My question for these ceo's, yes in the shorterm they cut labor costs, and there…"
- ytc_UgzHcVDDE… — "So one lead truck, followed by other autonomous trucks.. i thought convoying was…"
- ytc_UgxmV_nvp… — "Personally I dont have anything against AI if the person informs it is ai genera…"
- ytr_UgyqiYY83… — "So in this example of giving an AI an environment where it is shut down in 7 min…"
- ytc_UgzZEMC9e… — "So what if AI realises it should "fail tests of being senient" to keep you think…"
- ytc_Ugy27Hxz_… — "Because these are the control centers that's gonna shut your money off. Track yo…"
- ytc_UgzLx1F0c… — "It needs to be used in this way, only ever for good and to work in harmony with …"
- ytc_Ugxxu9blW… — "So… the stuff that is already a bit hard for developers are also a bit hard for …"
Comment
> Anthropic has explicitly stated they want to scare people into supporting regulation. That's not research.
> Thar decreases competition and leads to a worse outcome. You're freaked out cause you don't trust or understand complexity.
> He ended on a positive note. Regulations were never going to work much less help. They can't effectively regulate child porn. We aren't going to control something more intelligent than us, that's hubris. We're already giving it agentic control and basically as intelligent as an ape, with more knowledge and better language skills.
> The open source models are almost as good and way more efficient, they'll run on your phone in a few years. There is no gate keeping this, by corporations or government.
> Altmans product will literally put him out of a job.
> In 50 years, everything you think of as a job will be done by robots. You'll make enough money from existing to pay for everything you need and most things you want.
> It WILL go bad. Thousands of people will be killed by AI. Millions of lives will be saved by AI.
youtube · AI Moral Status · 2025-11-04T20:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugz5IrUl-At-Bbp7xaB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyXQN8DPGzhg59PFdZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx_ujM_YSEOXowtVXh4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugz-FqF3Cjw837NCXpZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgzbT4ni6D9X_SCpXtF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxWKKmo5Fq4J3bTVx54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzOSBY719ntx_SgqTZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyPEGOYhaW4ag01Qtp4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwS5zr8ParRGI_K07N4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyNYCRV3Vk1tH-dZdN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
```
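The raw response is a flat JSON array keyed by comment ID, which is what makes "look up by comment ID" straightforward. A minimal sketch in Python of parsing and validating such a response before indexing it — the allowed category sets below are inferred from the values visible on this page and may be incomplete, and the `parse_codes` helper is hypothetical, not part of the actual pipeline:

```python
import json

# Allowed category values, inferred from the codes visible on this page;
# the real coding schema may include values not shown here.
SCHEMA = {
    "responsibility": {"company", "developer", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"regulate", "industry_self", "none"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of per-comment codes)
    into an id -> record index, dropping records that fail validation."""
    records = json.loads(raw)
    indexed = {}
    for rec in records:
        comment_id = rec.get("id")
        if not comment_id:
            continue  # a record without an ID cannot be looked up
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            indexed[comment_id] = rec
    return indexed

# One valid record from the response above, plus a deliberately
# malformed one (fabricated ID and category) to show validation.
raw = '''[
 {"id":"ytc_UgzOSBY719ntx_SgqTZ4AaABAg","responsibility":"developer",
  "reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_bad_record","responsibility":"aliens",
  "reasoning":"consequentialist","policy":"none","emotion":"fear"}
]'''

codes = parse_codes(raw)
print(sorted(codes))  # only the valid record survives
```

Indexing by ID also makes it easy to detect a common LLM failure mode: the model returning codes for IDs that were never in the batch, which a `set` comparison against the submitted IDs would catch.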