Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
People are misunderstanding Ai again, Ai is like a worker that makes almost ever…
ytc_Ugxeck1It…
silly little chatgpt the clapping noise is also infinite why did you not lead wi…
ytc_UgyW5oVsp…
All the girlies said it's AI generated.
All the boys said it's a human.
Well w…
ytc_UgwEaHLVv…
Haha, if only Sophia could make tea! While she's all about wisdom and learning, …
ytr_Ugzwu-bOP…
you use AI u got no tallent and u are lazy learn art its not that complicated…
ytc_UgxuFgGo8…
AI will be an amplifier, not a full blown replacement.
Stop and think how many …
ytc_UgwzXks_m…
@ranu1745 You do realise training an ai takes time, effort on the trainer as we…
ytr_UgxcPa0H8…
@InsightSplash1 did you ever talk to chat gpt? I did talk to gpt, and i did the…
ytr_UgwPzlEqJ…
Comment
I feel like the thing we miss when talking about misaligned AI is that the corporations developing AI are themselves non-human intelligent organisms misaligned to human thriving. You can argue that humans run corporations but a human misaligned to the corporate directive of "maximize shareholder value" will be removed and replaced with one in alignment. The incentive structure that corporate intelligence responds to encourages designing AI to be misaligned from the start, favoring addiction and subscription over actual utility.
youtube
AI Moral Status
2025-11-01T06:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
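The "look up by comment ID" view above can be sketched as a simple index over coded records. This is a minimal, hypothetical sketch, not the tool's actual implementation; the two records below are copied from the raw LLM response shown in this section.

```python
# Index coded records by comment ID for O(1) lookup.
# Records follow the four coding dimensions in the table above.
coded = [
    {"id": "ytc_Ugwaf8pzYoaKV0wpBx14AaABAg", "responsibility": "company",
     "reasoning": "contractualist", "policy": "regulate", "emotion": "outrage"},
    {"id": "ytc_UgzHBHOAKYeSQpnNNrF4AaABAg", "responsibility": "ai_itself",
     "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
]

# Build the id -> record index.
by_id = {rec["id"]: rec for rec in coded}

rec = by_id["ytc_Ugwaf8pzYoaKV0wpBx14AaABAg"]
print(rec["policy"])  # → regulate
```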
Raw LLM Response
```json
[
{"id":"ytc_UgxyPq1T_w8e9R5FY054AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwRabpPg-Yqo24Smmd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz6Zn1oPjiCtz5tbLV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzHBHOAKYeSQpnNNrF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugwm9rEGyvc9hqTVxaV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugwaf8pzYoaKV0wpBx14AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyyaDvk0iSO2EUnXPl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugyx5Ipo3CfZjr63RfR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzaM1AJbmaQs_IvumF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx038np7EB2vh-X1e94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
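A raw response like the one above has to be parsed and checked before the codes are stored. The sketch below parses the JSON array and validates each record against the four coding dimensions; the allowed category sets are assumptions inferred from the values observed in this response, and the real codebook may differ.

```python
import json

# Allowed values per coding dimension (ASSUMED from the observed
# responses above; the project's actual codebook may include more).
SCHEMA = {
    "responsibility": {"company", "developer", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist",
                  "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "indifference", "mixed"},
}

def parse_coding_response(raw: str) -> list:
    """Parse a raw LLM coding response and validate every record."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec.get('id')}: invalid {dim!r} value {rec.get(dim)!r}")
    return records

# A one-record response in the same shape as the raw output above.
raw = ('[{"id":"ytc_Ugwaf8pzYoaKV0wpBx14AaABAg","responsibility":"company",'
       '"reasoning":"contractualist","policy":"regulate","emotion":"outrage"}]')
coded = parse_coding_response(raw)
print(coded[0]["policy"])  # → regulate
```

Rejecting out-of-schema values at this step keeps malformed or hallucinated codes out of the downstream tables rather than silently storing them.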