Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect):

- For all of you who told us craftsmen to "find better jobs" our life's work, pass… (ytc_Ugx1aBsha…)
- Setting aside ChatGPT for a moment the approach the lawyer seems to be taking is… (ytc_UgzAFdomn…)
- Fool self driving in Australia...somehow I don't think i'm going to need to watc… (ytc_UgxXLWx8P…)
- AI will be bad for the housing market, government debt, and taxes if it leads to… (ytc_UgyQWxqko…)
- One of the computers’ towers looks like a speaker. There’s no way those computer… (rdc_oi3is1t)
- All the YouTube videos about AGI or the impact of AI on society can be very inte… (ytc_UgyA-Q7k9…)
- Hlw sir , plz guide me how to become ai research scientists after bsc from iiser… (ytc_UgztF7rL-…)
- I don't like how they say the terrifying "truth" of AI. This is a professor and… (ytc_UgwacvzQm…)
Comment
I'm much more concerned with the people in power who control these AI systems, than the AI systems themselves. I think the most likely medium-term future is just a continuation of what we're already experiencing -- wealth inequality. Rich people will get richer, and the rest of us will live in the squalor until it reaches a breaking point. My "silver lining" hope, if you can call it that, is that in the aftermath of that breaking point we can create a society that actually distributes the benefits of those AI systems to all people instead of a select few megalomaniacs. Even better would be if we could create that society now, but with the cartoonishly evil people in power it's hard to feel hopeful about that.
youtube · AI Moral Status · 2026-03-02T17:5… · ♥ 26
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
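
The table collapses one entry of the model's JSON output into the four coded dimensions. As a minimal sketch of that schema in Python, assuming the allowed values are exactly those observed in the raw responses on this page (the real codebook may define more), a coding result can be modeled and validated like this:

```python
from dataclasses import dataclass

# Value sets observed in the raw responses on this page; the full
# codebook may allow additional values (assumption).
RESPONSIBILITY = {"user", "company", "developer", "government",
                  "ai_itself", "distributed", "unclear"}
REASONING = {"consequentialist", "deontological", "virtue"}
POLICY = {"regulate", "liability", "none", "unclear"}
EMOTION = {"fear", "outrage", "approval", "mixed", "indifference"}

@dataclass
class CodedComment:
    """One coding result as emitted by the model."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        # Raise if any dimension falls outside the observed value sets.
        if self.responsibility not in RESPONSIBILITY:
            raise ValueError(f"responsibility: {self.responsibility!r}")
        if self.reasoning not in REASONING:
            raise ValueError(f"reasoning: {self.reasoning!r}")
        if self.policy not in POLICY:
            raise ValueError(f"policy: {self.policy!r}")
        if self.emotion not in EMOTION:
            raise ValueError(f"emotion: {self.emotion!r}")
```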
Raw LLM Response
```json
[
  {"id":"ytc_Ugz86s2QFPS-hKYIJjV4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugyynuw930sIpEvB8c94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgyoUlSbaAt-W9OIyhp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyiFYVU0bGYFXPyrgB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyQGW8VNDrxXy1OnG94AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_UgyIrTnRiR256mBIfhV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxucZMERxkle9Caal94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzMWWCYvt50UGk_oER4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwIslLOYeVfkJw7Zsl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwdW6wFrGoEbleaLDJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"indifference"}
]
```
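
To reproduce the page's lookup-by-ID behavior offline, the raw response can be parsed and indexed by comment ID. A minimal sketch, assuming the array above has been saved to a hypothetical file named raw_response.json:

```python
import json

# Hypothetical file holding one raw LLM response (a JSON array like
# the one shown above).
with open("raw_response.json") as f:
    batch = json.load(f)

# Index the batch by comment ID for O(1) lookup, mirroring the
# "Look up by comment ID" feature on this page.
by_id = {row["id"]: row for row in batch}

row = by_id["ytc_UgyiFYVU0bGYFXPyrgB4AaABAg"]
print(row["responsibility"], row["reasoning"], row["policy"], row["emotion"])
# -> user consequentialist regulate fear
```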