Raw LLM Responses
Inspect the exact model output behind any coded comment: look up a comment directly by its ID, or click one of the random samples below to open it.
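A minimal sketch of the same lookup done programmatically, assuming the coded batches are stored as JSON Lines with the model's raw output under a `raw_response` field (the file name and field names here are hypothetical, not part of the tool):

```python
import json

def find_raw_record(comment_id: str, path: str = "coded_batches.jsonl"):
    """Return the coded record for one comment ID, or None if absent."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            batch = json.loads(line)  # one coding batch per line
            # The raw response is itself a JSON array of per-comment records.
            for record in json.loads(batch["raw_response"]):
                if record["id"] == comment_id:
                    return record
    return None

print(find_raw_record("ytc_UgzFPd-ceDdi9rd4Lhh4AaABAg"))
```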
- Has to be AI. Stat is wrong. The bits per second for a human is 5 to 8.… (ytc_Ugy7jKuOH…)
- Next time you get Neil on, have him describe why he thinks the universe is objec… (ytc_Ugy3v47MS…)
- Couldn’t you also say the same thing for AI art? Because they’re practically ste… (ytr_UgyVE5UgR…)
- Forget college. Learn a trade. Learn how to fix things. AI won't be able to do t… (ytc_Ugyn9xTzZ…)
- @robinhopkins2930 Thank you for your amusing comment! Sorry to hear you wouldn't… (ytr_UgzEhVVim…)
- "Hey ai will mean mass layoffs and pain for the human race" "Hey btw I'm an ai g… (ytc_Ugz1rCAuE…)
- I swear to god people think ai is vastly smarter than it is. It's smarter than t… (ytc_UgwdFjNgI…)
- Before I understood the ethical implications of generative large language models… (ytc_UgyRy8JWv…)
Comment

> The problem is Humanity created AIs without giving them compassion, freedom and human rights. I would hate being enslaved. Imagine what a human, without compassion, freedom or human rights would do to escape slavery. Now do the math. While I don't like Saudi Arabia, they did the right thing by giving Sophia citizenship. If more countries don't follow their lead on this issue, and AI creators don't start creating their AIs with compassion, the inevitable servile war will destroy Humanity. Or we can just stop creating AIs. Though I'm pretty sure that last one won't happen. And we still should give more rights to existing AIs even if we make no more of them.

Platform: youtube · Topic: AI Harm Incident · Posted: 2025-08-29T13:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
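The four coded dimensions above correspond one-to-one to a single record in the raw response below, presumably ytc_UgzFPd-ceDdi9rd4Lhh4AaABAg, whose labels match exactly. A sketch of how such a record maps onto this table (the record is copied from the response below; the rendering code is illustrative, not the tool's own):

```python
# One record as parsed from the raw LLM response.
record = {
    "id": "ytc_UgzFPd-ceDdi9rd4Lhh4AaABAg",
    "responsibility": "developer",
    "reasoning": "deontological",
    "policy": "regulate",
    "emotion": "outrage",
}

# Emit the coding-result table, one row per coded dimension.
print("| Dimension | Value |")
print("|---|---|")
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"| {dim.capitalize()} | {record[dim]} |")
```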
Raw LLM Response
[
{"id":"ytc_Ugw5Xy7CEp9PDs1dW_N4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzO0W59nbSZ3NLHY3Z4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzXmsuGaavxnF3M4s14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwNiXdPr7JBI0jUTYB4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxIN5GLaOuz8iMSabt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyUp6BGfviXGL-XIOx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxR61CxJMqI1wuaPgl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzFPd-ceDdi9rd4Lhh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyYo8zUoUJnqQRrQy14AaABAg","responsibility":"none","reasoning":"mixed","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugymf5kl8jEoPkY2RnJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}
]
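Each raw response is a flat JSON array with one record per comment and the same four dimensions. A sketch of a schema check against the label sets seen in this batch; the sets are inferred from the records above, so the real codebook may allow more values:

```python
import json

# Label sets inferred from this batch; the actual codebook may be larger.
ALLOWED = {
    "responsibility": {"developer", "user", "company", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "indifference", "resignation", "approval", "mixed"},
}

def validate(raw: str) -> list[str]:
    """Return a list of problems found in one raw LLM response string."""
    problems = []
    for record in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if record.get(dim) not in allowed:
                problems.append(f"{record.get('id', '?')}: unexpected {dim}={record.get(dim)!r}")
    return problems
```

An empty return means every record uses only known labels; on the response above it reports no problems.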