Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `ytc_UgzbXC4bP…`: Haha I always thank the ai just because its helpful and I figure the message has…
- `ytc_UgxAg9afT…`: I think the reason for chatgpt saying 'I' and 'chatgpt' as different people beca…
- `ytc_UgwUX-YNU…`: Shad really just does not understand what reference is. REFERENCING DOES NOT EQU…
- `ytc_Ugzy4g6Ux…`: Please read "The Father We Never Had" Believe me, it explains the future of AI, …
- `rdc_e43urfi`: But if the trees are in the wrong places, that's REALLY bad isn't it? In this …
- `ytc_UgyPWp0vq…`: Scar Jo has to be the queen of the internets depravity. Remember the Asain dude …
- `ytc_Ugwpe61wp…`: Amazing! 😮😮😮 What a robotic cute girl! 🩷⚙️ A beautiful, realistic face...more th…
- `ytr_UgzmYdBPO…`: Absolutely! Sophia's ability to express herself through body language really enh…
Comment

> I’m not sure why we think that a super intelligent AI would lack empathy and compassion along with all the other types of intelligence that prevent us from killing each other. LLMs are trained on human data and human behaviour. Very intelligent people aren’t secretly wanting to take over the world and kill everyone. The super intelligent supervillain only really happens in films. Greedy world leaders are not the most intelligent. Why would AI want to kill humans? I think this a way of amplifying self-deprecating thoughts in which we believe that humans are terrible, unworthy of our own existence and that any super intelligence will undoubtedly understand it and terminate us. I don’t think so

youtube · AI Moral Status · 2025-04-28T08:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
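Each coding record is expected to carry exactly these four dimensions. A minimal validity check can be sketched as follows; the allowed value sets below are assumptions drawn only from values visible on this page, not an exhaustive codebook:

```python
# Hypothetical validity check for one coding record. The value sets are
# assumptions based on values seen in this page, not a full codebook.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "user", "distributed"},
    "reasoning": {"none", "virtue", "consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"none", "approval", "fear", "outrage", "indifference", "resignation"},
}

def validate_record(rec):
    """Return a list of problems with one coding record (empty if valid)."""
    problems = []
    for dim, allowed in ALLOWED.items():
        if dim not in rec:
            problems.append(f"missing dimension: {dim}")
        elif rec[dim] not in allowed:
            problems.append(f"unexpected value for {dim}: {rec[dim]!r}")
    return problems

# The coding result shown above passes the check.
print(validate_record({"responsibility": "none", "reasoning": "virtue",
                       "policy": "none", "emotion": "approval"}))  # []
```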
Raw LLM Response
[
{"id":"ytc_Ugzfo3t_x5p2hPuetO54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw24u-Pk_DJohHEiNJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzRUsoYOkImYDiW0SZ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgwJ9__zD5djP_96Aj14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyAjUrLQpEhxQjNf9d4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxZEGeWDFdUDc9QUwN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxjGNYjwAvjCW0_cJB4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxSLDWkernqrasS9ZN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwhiFVzfYHj8ac6pkZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxjiizwYWXxZdegUBh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}
]
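The raw response is a JSON array of per-comment codes, so looking a record up by comment ID (as the inspector does) amounts to parsing the array and indexing it. A minimal sketch, assuming the model output is available as a string and abbreviating the array to two of the entries above:

```python
import json

# Raw LLM response: a JSON array of per-comment codes
# (abbreviated here to two entries from the response above).
raw_response = '''
[
 {"id":"ytc_UgzRUsoYOkImYDiW0SZ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugzfo3t_x5p2hPuetO54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
'''

def index_codes(response_text):
    """Parse the model output and index each coding record by comment ID."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codes = index_codes(raw_response)
# Look up the coding for one comment by its ID.
print(codes["ytc_UgzRUsoYOkImYDiW0SZ4AaABAg"]["reasoning"])  # virtue
```

In practice the parse can fail when the model wraps the array in prose or emits malformed JSON, so a production version would catch `json.JSONDecodeError` and surface the offending response for manual inspection.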