Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- > restrictions on AI are restrictions on YOU. Man, the sabotage of education re… (ytr_UgzvyOOLm…)
- After the examples by Mr Hinton, I have a thought that makes me smile and laugh.… (ytc_UgylKT0vk…)
- Alright, I'll be honest. I'm a dirty boy and I need a shower. I went down the AI… (ytc_UgxJeP4wy…)
- I don't want to dismiss your hard work and growth, but even your beginner art is… (ytc_UgzXLUzD-…)
- 100%. I was working at a hospital about a year back in a high crime area. They w… (rdc_fvz1keq)
- Computer science grad here. Wall of text incoming. First of all I absolutely lov… (ytc_UgyLr47BJ…)
- Well, I didn’t see any AI kill anybody so is this quick bait? Because there was … (ytc_Ugyk32AFQ…)
- @knucklesamidge Here is LaMDA, the first sentient AI, that is also woke, progres… (ytr_UgyRiQThk…)
Comment
I guess I'm an optimist when it comes to AI not ending humanity; not because I think we're going to create consciousness responsibly, but because I think the premise itself it's a fool's errand. And honestly, I feel insane listening to two clearly intelligent men discuss what would happen if a calculator wanted to kill us all, as if a calculator will ever be capable of having wants. I think the personification of AI in general is doing so much harm right now and I will not pretend it is capable of ever being more than an it. I'm so ready for this to go the way of NFTs.
Source: youtube · AI Moral Status · 2025-10-31T18:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgyuLx_n9Z55JJxfFdZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugy-9l3p47Y3HD5zs5V4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyPNrdDRZiPWpfWqHB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwjdYfnsDQuw2Edxfx4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzG5Rr1x_jQ4oSWUrZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzKEgf6P7pZRCRYCEd4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugxfc_dAuv16pJqt3Fx4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugw8-3TVxfY7fty90_B4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwT7RJ1QqXIRXp3f8J4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwAKvWCoXZdweSDSsx4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
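A response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal example, assuming the category values observed in this sample (the real codebook may define more); `validate_batch` and `ALLOWED` are illustrative names, not part of the tool.

```python
import json

# Category values observed in the sample response above.
# Assumption: the actual codebook may allow additional values.
ALLOWED = {
    "responsibility": {"none", "developer", "company", "government", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "unclear", "liability", "regulate"},
    "emotion": {"approval", "indifference", "mixed", "resignation", "outrage", "fear"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows whose codes are recognized."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if "id" not in row:
            continue  # every coded comment must carry its comment ID
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid

sample = ('[{"id":"ytc_x","responsibility":"none","reasoning":"consequentialist",'
          '"policy":"none","emotion":"approval"}]')
print(len(validate_batch(sample)))  # prints 1
```

Dropping unrecognized rows (rather than raising) keeps one malformed line from discarding an entire batch; failed IDs can simply be re-queued for recoding.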