Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment by its ID.
Random samples
Oh yeah Bill? Then how come you didn't use any of your money to raise awareness …
ytc_UgyGAXJlX…
I mean w/e term you coin its still vibe coding. You can write specs to an AI but…
ytc_Ugzr80yIA…
If I could have the same wage without working, it would not affect my dignity.…
ytc_Ugy5zweCV…
But so far I've yet to see an AI that does anything between cycles when not prom…
ytc_UgzZoZz_w…
this sounds like bullshit. I'm with you on the AI being an issue, but to say tha…
ytc_UgzaK8MGW…
Copilot is pretty poor in questions about quality of product review. If you ask …
ytc_Ugx8ecNb1…
I read that the ideal site for these centres is underground in an urban environm…
ytc_UgwONJ8WN…
Make o3 and opus , rogue ai learn to be humane, by learning human values....emot…
ytc_UgxwkSi4q…
Comment
To me, solving alignment means the birth of Corporate-Slave-AGIs. And the weight of alignment will thus fall on the corporations themselves.
What I'm getting at is that if you align the AI but don't align the controller of the AI, it might as well not be aligned.
Sure the chance of human extinction goes down in the corporate-slave-agi route... But some fates can be worse than extinction...
reddit
AI Moral Status
1738005642.0 (2025-01-27 UTC)
♥ 412
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
  {"id": "rdc_m9j33ec", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_m9i4odk", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_m9im9g4", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_m9jphet", "responsibility": "company", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"},
  {"id": "rdc_m9ihrce", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
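The look-up-by-ID flow over a raw response like the one above can be sketched as follows. This is a minimal sketch, assuming the raw response is a JSON array of objects with an `id` field plus the four coding dimensions; `codes_by_id` is a hypothetical helper, not part of the tool itself.

```python
import json

# Hypothetical raw batch response: one object per coded comment,
# carrying the four coding dimensions shown in the table above.
raw_response = """[
  {"id":"rdc_m9j33ec","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_m9i4odk","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"rdc_m9im9g4","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}
]"""

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def codes_by_id(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw batch response and index the coded dimensions by comment ID.

    Missing dimensions fall back to "unclear", mirroring the coding scheme's
    own fallback value.
    """
    records = json.loads(raw)
    return {
        rec["id"]: {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
        for rec in records
    }

codes = codes_by_id(raw_response)
print(codes["rdc_m9i4odk"])
```

Indexing by ID makes each coded comment retrievable in O(1) after a single parse, which is what the "look up by comment ID" view needs.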