Raw LLM Responses
Inspect the exact model output for any coded comment: look one up by comment ID, or pick any of the random samples below to inspect it.

Random samples:
- "Funny how consultants from firms like McKinsey & Company keep warning everyone a…" (ytc_Ugxvh2hde…)
- "But what happens to the middle class. Those who do the most spending on average…" (ytc_Ugwwua4HU…)
- "Thank you for your explanation on using AI tools to enhance a song writers abili…" (ytc_UgyD9OgFc…)
- "Doesn't space and finite resources kind of prevent a post scarcity reality? I do…" (rdc_kiuewjf)
- "Why don’t we just regulate the types of jobs that AI can do for us and keep the …" (ytc_UgygzUfIi…)
- "People are basically paying Tesla for the right to train their machine learning …" (ytc_UgyruD0p4…)
- "It still costs a lot of money to buy a humanoid robot. Humans are still cheaper.…" (ytc_UgzPWpOgL…)
- "That's what's controlling all these uaps and drones and UFOs and stuff…" (ytr_UgwkKBRH2…)
Comment
Thank you for doing this interview and bringing more awareness to this. I think one of the scariest parts of this whole thing is that so many people refuse to engage with the argument around misalignment. A lot of people have already made up their minds that the big problem with AI is deepfakes, or electricity use, or taking peoples jobs, or that it doesn't "really" understand things, and that superintelligence is just some techbro hype made up to distract us from the "real" problems. There's a bunch of that even in the comments to this video, including ones made within minutes of the video coming out. Those things are all going to be big issues in the near term, but they're issues you can only have if you're still alive! We should be putting a lot more effort towards the not dying part!
| Field | Value |
|---|---|
| Source | youtube |
| Video | AI Moral Status |
| Posted | 2025-10-31T02:4… |
| Likes | 2 |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
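Each coding result follows a fixed four-dimension schema. As a reference, here is a minimal sketch of that schema in Python, assuming the label sets are exactly the values observed in the raw batch below; the full codebook may define additional labels, and the type names here are illustrative, not part of the pipeline:

```python
from typing import Literal, TypedDict

# Label sets observed in the sample batch below; the codebook may be larger.
Responsibility = Literal["none", "developer", "ai_itself"]
Reasoning = Literal["consequentialist", "deontological", "mixed", "unclear"]
Policy = Literal["regulate", "ban", "liability", "none", "unclear"]
Emotion = Literal["fear", "outrage", "mixed", "approval", "indifference"]

class CodedComment(TypedDict):
    id: str  # comment ID, e.g. "ytc_…" / "ytr_…" (YouTube), "rdc_…" (Reddit)
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion
```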
Raw LLM Response
[{"id":"ytc_Ugwkf5I1VG9-3QPcsiV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyKImzJEI5bjBdfi4V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxNSpsc9xXpxxv-FSF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyuML_V0-B5EECqCo94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyU5Jdm4-eoCuE-nIB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy2OLctEGun2J6u1IV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxaZLBfKqrXIvI_dMt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxKHywUUqGabg76XMF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgyKU94IJV3IOuG9TWl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxwLPDTCf0z62TS02d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}]