Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytc_UgzshY6Hr… — "Gotta love when people go for the AI argument because it’s quick and easy to mak…"
- ytc_UgyO7_w2K… — "Anything to rain on the poor man’s parade. ChatGPT is not at fault, your survei…"
- rdc_k20m0t9 — "Slow your roll, king. I’m talking about completely AI generated images. Of cours…"
- rdc_deuht8m — "All they had to do was SAY they did it. Like, really enforce the lie and publici…"
- ytr_UgxtP4mSg… — "HELL NO. ai is still bad environmentally, do the research yourself, find inspi…"
- ytc_UgwDLOQ_1… — "While AI is implemented, let's don't forget we are human beings. Its essential t…"
- ytc_UgysK5jbg… — "I like using AI for fun silly things but you really don’t deserve shit if you us…"
- ytc_UgzcBYRpa… — "Always love your work, and I support the fight against AI art. I had trouble wit…"
Comment
I have a lot of respect for this guy but, his is a true lack of ignorance and understanding about AI and the risk. It doesn't need to be conscious. general intelligence leads to general super intelligence. We don't know how these things work and can't predict their behaviors now. The existential risk is that we are creating things that might not even be terminators it could simply make a decision tree or coding mistake and exterminate life. Look into Roman yampolskis work. I am butchering his last name but he has the best explanation of the risks.
youtube · AI Responsibility · 2025-11-14T10:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[{"id":"ytc_UgwwkOrKX0I4sRA41CR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugxm7KXDV7-nrm_5Z6t4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx_82_QfTn6Ocubhzl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyjmKWTRi30rqyxy3J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyQ0XRkZyiufKB2oIl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugz0RJQYGSKSNgCHAZ14AaABAg","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxXT7PwZfQGZxNsNJd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyeDXD9x-_lVhRFFkV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzCgRbX6txKqEBJvr94AaABAg","responsibility":"company","reasoning":"contractualist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyVUL5IUdPLBXoO7iJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]
```