Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- ytr_UgwgTOaVC…: No clue where that last part came from but you clearly have no clue how business…
- ytc_UgyRRcISE…: What's the point of going for a walk? I mean you are essentially walking in circ…
- ytc_Ugz7aivzD…: You doomers are dreaming. AI will unlock productivity. Consider Kiru AI. Super i…
- ytc_Ugw1sB2aX…: Deepfakes are ethical. No one owns the rights to their face/likeness. People can…
- ytc_UgwrnJdWR…: I think it is very important to realize that if they are really worried about al…
- ytc_Ugx0rNiw5…: Is it worth learning to code in 2025? No. Is it worth it to create videos promot…
- ytc_Ugwaf2Dtf…: The question you asked at the end was, "It's at least plausible that when you sa…
- ytc_UgxjgAEbV…: This is one of the reasons i love ai art, it inspires the pettiness in the human…
Comment

> Super intelligence is oversold; it’s not even conclusive it’s possible. However, what is overlooked is that you don’t need AGI to destroy the world — just sufficiently efficient AI that is poorly or malignantly aligned. When the next Kazcynski decides to weaponize AI to take down the banking and finance industry, the world as we know it is effectively over. We are much closer to that reality than to super intelligence.

Source: youtube · Video: AI Moral Status · Posted: 2025-12-07T19:1… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugyuk1hBtKCsoVIMlGV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxCMM_nHx7vx3CUi4B4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzVveglUuOdEOPDz0Z4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxX7TxDaYQ34a1_0RB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxtOHpYkiOjd13ruUR4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzysJ0DzXsAajmE7B54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwAYhjcm3oJXCmDcaR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxqlRre80WFcsF3yyF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzNGyLesXo-3GaVwj94AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwRHE4F_qhgUGRtXdJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
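A raw response like the one above can be turned into the per-comment "Coding Result" view by parsing the JSON array and indexing it by comment ID. The sketch below is a minimal, hypothetical example of that step: the `SCHEMA` of allowed values is inferred from the table and the sampled responses (the actual codebook may define more values), and `index_codings` is an illustrative helper, not part of the tool.

```python
import json

# Allowed values per coding dimension. Assumed from the "Coding Result"
# table and the values seen in the sampled responses; the real codebook
# may be larger.
SCHEMA = {
    "responsibility": {"company", "developer", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"regulate", "liability", "none"},
    "emotion": {"fear", "outrage", "approval", "resignation", "indifference", "mixed"},
}

def index_codings(raw_response: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments) and
    index the rows by comment ID, rejecting out-of-schema values."""
    by_id = {}
    for row in json.loads(raw_response):
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: unexpected {dim}={row.get(dim)!r}")
        by_id[row["id"]] = row
    return by_id

# Usage: look up the comment shown above by its ID.
raw = '''[
  {"id":"ytc_UgwRHE4F_qhgUGRtXdJ4AaABAg","responsibility":"distributed",
   "reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''
codings = index_codings(raw)
print(codings["ytc_UgwRHE4F_qhgUGRtXdJ4AaABAg"]["policy"])  # regulate
```

Validating against the schema at parse time catches the common failure mode where the model invents a label outside the codebook, rather than letting it flow silently into the coded dataset.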