Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I don't think there's necessarily a difference in kind between AI and humans. I do however think there is a fundamental limit to what people will actually do. Yes, the AI companies are going nuts with building bigger and hungrier data centers, but we're not even close to AGI with those, let alone Superintelligence. To think that this bubble wouldn't burst well before we get to Superintelligence strikes me as a little silly. I'm worried about the economic and cultural ramifications of AI closer to the way it is now much more than I am about the idea of Superintelligence. I don't think it's completely impossible, but I'd classify Superintelligent AI in the same category as interstellar travel: technically possible, but so unreasonable to accomplish that it's extremely unlikely for us to actually do.

| Field | Value |
|---|---|
| Platform | youtube |
| Video | AI Moral Status |
| Posted | 2025-11-01T00:2… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugz6idoqSOMT011KrQ54AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw6do0hhd3IvUePHQ14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzNEnzhSgveLp2ys-d4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxRGPaiFomrZXKI3Fl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxFSvkAbdRnDfCWTIp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw1sIOM6nQI3GAX3UR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzVouUmDwYZPSfiL7h4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzUgjrdxau62lzsYJ14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugy_UfQv7wTVXnaSLJB4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwnWgz4MBt3ZkNVAj14AaABAg","responsibility":"developer","reasoning":"mixed","policy":"industry_self","emotion":"approval"}
]
```
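A raw response in this format can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal example, assuming the allowed label values are the ones visible in this dump (the real codebook may define more); the function name `parse_raw_response` and the `DIMENSIONS` sets are illustrative, not part of the actual pipeline.

```python
import json

# Allowed values per coding dimension, inferred from the labels shown above.
# ASSUMPTION: the real codebook may include additional values.
DIMENSIONS = {
    "responsibility": {"none", "distributed", "company", "ai_itself", "user", "developer"},
    "reasoning": {"mixed", "consequentialist", "deontological", "unclear"},
    "policy": {"none", "regulate", "ban", "liability", "industry_self"},
    "emotion": {"indifference", "outrage", "mixed", "fear", "resignation", "approval"},
}

def parse_raw_response(text: str) -> list[dict]:
    """Parse a raw LLM coding response (a JSON array of records) and
    reject records with missing ids or out-of-codebook labels."""
    records = json.loads(text)
    for rec in records:
        if not rec.get("id", "").startswith("ytc_"):
            raise ValueError(f"unexpected comment id: {rec.get('id')!r}")
        for dim, allowed in DIMENSIONS.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {rec.get(dim)!r}")
    return records
```

Validating at ingest time means a model that drifts off the codebook (a misspelled label, a hallucinated category) fails loudly instead of silently polluting the coded dataset.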