# Raw LLM Responses

Inspect the exact model output for any coded comment, either by looking it up by comment ID or by browsing the random samples below.

## Random samples
- `ytr_UgwK7F5hl…`: @wiliknow I've done a lot of passive thinking about how making a robot to fix a …
- `ytc_Ugx5zpm_b…`: I agree with you. People are letting ChatGPT think for them. ChatGPT is a tool…
- `rdc_g14alid`: **SPOILERS FOR HOW THIS PROBLEM WILL BE SOLVED:** "oops sorry didn't mean to br…
- `ytc_Ugx193u_H…`: huh chatgpt said-Alright, real talk. I’d switch the rail and save the five huma…
- `ytc_UgySrXQ1e…`: ATTENTION!! STOP using AI its killing the polar bears, we don't want the net gen…
- `ytr_UgwJd0q9x…`: @mrpicky1868 He says the quiet part out loud around 1:13:04, that if there was…
- `ytc_UgwtxlNmw…`: I think AI is right on the brink of exploding doing things we’ve never imagined …
- `ytc_UgwC1XiIe…`: This doesn't bother me. Imagine telling your AI avatar to deal with bureaucrats…
## Comment

> Why do they expect AI to have compassion for humans. We humans don’t even have compassion for each other. AI will be the end of the world as we know it.

Source: youtube · "AI Moral Status" · 2025-06-04T15:4…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
## Raw LLM Response

```json
[
  {"id":"ytc_UgxeJQef3emBmhF_jC14AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzzhcooS4d3prH7UCd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwAweJXs0-wKd9AkhB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwQmbWVhSicSyUgsMx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugy9pkoDf0czOJF1vZd4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugxi3PWee-9nnPiCDlR4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugxni_Kih4TLrP2EOMt4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwG2f08ibe0u9Rq89V4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugw9QsvDbPw8LpucuiJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwfz9fiMQKdAEWoXwx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"mixed"}
]
```
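A raw batch response like the one above can be parsed into a per-comment lookup table with a few lines of Python. This is only a sketch of how such a response might be validated, not the page's actual backend: the `SCHEMA` vocabularies are inferred from the values visible in this dump (the real coding scheme may allow more), and `parse_raw_response` is a hypothetical helper name.

```python
import json

# Allowed values per coding dimension, inferred from the dump above
# (assumption: the real schema may include additional categories).
SCHEMA = {
    "responsibility": {"none", "ai_itself", "company", "developer",
                       "distributed", "government"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "mixed", "unclear"},
    "policy": {"none", "ban", "regulate", "liability", "industry_self"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: codes},
    silently dropping rows with missing or out-of-vocabulary values."""
    coded = {}
    for row in json.loads(raw):
        codes = {dim: row.get(dim) for dim in SCHEMA}
        if all(codes[dim] in allowed for dim, allowed in SCHEMA.items()):
            coded[row["id"]] = codes
    return coded

raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"ban","emotion":"fear"}]')
print(parse_raw_response(raw)["ytc_example"]["policy"])  # → ban
```

Keying the result by comment ID mirrors the page's "look up by comment ID" workflow: once parsed, each coded comment's dimensions can be retrieved directly from the dictionary.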