Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
People need to wake up to the fact that Hank Green is just an AI construct, that…
ytc_UgwIK-m1u…
i mean im just a kid but heres why i think replacemnt of humanity by ai may take…
ytc_UgwEbyi3v…
There is a God hence AI will never be truly sentient. What our consciousness rea…
ytc_UgxVUmb6I…
I have some thoughts on this.
1) Stopping this change doesn't seem possible.
2) …
ytc_UgyoQt9Kk…
Is anybody concerned all the guys creating AI have Asperger’s and seem emotional…
ytc_UgzIbs0Uc…
When I asked for my brother’s phone he said I can go one anything but not SoulTa…
ytc_UgzOULDKU…
Just one thing. If a.i gained intelligence then it would believe its superior t…
ytc_UgwWfLNzS…
"I'm sorry, but I am just a language model created by OpenAI and do not have a p…
ytc_UgxMo7JzK…
Comment
Imagine someone confesses to a murder on ChatGPT, then a writer asks for a plot and makes a famous novel, but it includes so many specifics to the case that the writer becomes the prime suspect and can't reveal that they stole the idea.
Source: youtube · AI Moral Status · 2025-06-05T05:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgxZAn_8ZVmXE-ghWsZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwjBGA8ZBnAImQQby54AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugxvi9WNvBIfF4l56eF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwUsbPCgiMEkkTaxCZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_UgzAU5Zk5K7WP35eq3x4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugzz5GH9UQqh8wkhUf14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwvWYbR2BTac9ScNYJ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz380moQ0FStdGk3ph4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx004pTrlCug9WM06N4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"disapproval"},
{"id":"ytc_UgwtJ7YExSZkqGmWXrN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
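The raw response above is a JSON array with one coding object per comment. A minimal sketch of how such a response might be parsed and indexed for the by-ID lookup, assuming the four dimensions shown in the Coding Result table (the `ALLOWED` value sets below are illustrative, inferred only from the codings visible on this page):

```python
import json

# A short excerpt of a raw LLM response: a JSON array of coding objects.
raw = '''[
  {"id": "ytc_UgxZAn_8ZVmXE-ghWsZ4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwjBGA8ZBnAImQQby54AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]'''

# Illustrative value sets per dimension, inferred from the page above.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "user", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "ban", "liability", "regulate", "unclear"},
    "emotion": {"indifference", "outrage", "approval", "mixed", "fear",
                "disapproval"},
}

def parse_codings(raw_json: str) -> dict:
    """Parse a raw response and index codings by comment ID,
    skipping any object with a value outside the expected sets."""
    by_id = {}
    for obj in json.loads(raw_json):
        if all(obj.get(dim) in vals for dim, vals in ALLOWED.items()):
            by_id[obj["id"]] = obj
    return by_id

codings = parse_codings(raw)
print(codings["ytc_UgxZAn_8ZVmXE-ghWsZ4AaABAg"]["emotion"])  # indifference
```

Indexing by ID up front makes the "look up by comment ID" view a single dictionary access, and the value check drops malformed rows before they reach the inspection UI.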