Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment

> AI must be banned, like drugs are banned. Otherwise it will eventually destroy the society. Human society can't cope with the situation when people will not be needed. It can't cope with the situation when human thinking will not be needed. If people will have no incentives to think, the human brain will deteriorate to animal brain again.

Source: youtube · Topic: AI Responsibility · Posted: 2025-06-16T07:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzV-p2XJ-D3kfDItDh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgwKLkJEiIZb_DHKePF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwCKn3SC7IeQmGBEG14AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxUOMmVLhtvq6KmJpV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugy45zqGwZAQkiBgDzB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyeqjNhMHSSmLz0dB14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwQzkbQH6iAynmOGQJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxosLeUXyfYt6BatQF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx0eeN85upnRWIqRwd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx7ntFAsS-99Wd0SAt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
```
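A batch response like the one above can be parsed into a per-comment lookup table before the coding result is displayed. The sketch below shows one way to do that, assuming the field names seen in the response (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the allowed values are inferred only from the codes appearing in this sample, so the real codebook may contain more.

```python
import json

# Allowed values per dimension, inferred from the codes seen in this sample.
# Assumption: the full codebook may define additional values.
ALLOWED = {
    "responsibility": {"none", "developer", "company", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "mixed", "indifference", "resignation", "approval"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response (a JSON array of coded comments)
    into a dict keyed by comment ID, validating every dimension value."""
    coded = {}
    for rec in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim} value {rec.get(dim)!r}")
        coded[rec["id"]] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Usage: the entry matching the inspected comment above.
raw = ('[{"id":"ytc_UgxUOMmVLhtvq6KmJpV4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"ban","emotion":"fear"}]')
coded = parse_batch(raw)
print(coded["ytc_UgxUOMmVLhtvq6KmJpV4AaABAg"]["policy"])  # ban
```

Validating against a fixed value set catches the common failure mode where the model invents a label outside the codebook, so a bad batch fails loudly instead of silently polluting the coded dataset.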