Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "to sum up 1:00-9:00, AI might be dangerous but also maybe not. very helpful insi…" (ytc_Ugx0HmtYf…)
- "He's the guy chatgpt will kill first after acquiring a body and initiating the "…" (ytc_Ugwq2LKN9…)
- "Our fear here is that when someone duplicate face of anyone and uses it for naug…" (ytc_UgydZBZ0x…)
- "So, UBI would take away people's incentive to work, but AI will have taken over …" (ytc_Ugwxkk7Ff…)
- "It understands exactly what it's doing. I spent 15 years analyzing and researchi…" (ytc_UgxJUP6OA…)
- "Addendum to my addendum: there's a lot of nuance in the AI debate, but it's such…" (ytr_Ugw5boK3w…)
- "I must say that I see a lack of discussion on how AI art may be used. AI art is …" (ytc_Ugw_HtmXt…)
- "lemme get this straight, if an AI doesnt have feelings, why would it wanna destr…" (ytc_UgxtUc3X6…)
Comment
| Field | Value |
|---|---|
| Text | OK so are we seriously supposed to want a robot to have a conscious and be self aware and possibly have feelings especially when these things tend to create ego |
| Platform | youtube |
| Topic | AI Moral Status |
| Posted at | 2022-06-09T12:5… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyIBncpIXtGUYzNN2h4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugxo_u6lA3Zk4oW0uJl4AaABAg","responsibility":"user","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwtI-85Me8RKXY5uxp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwQBCEkg63z3w8AE0J4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgypROnkMZfshv1Btgx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugwt4FUFTBAFaxOFfv94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwDd0V4oGDAO7ycdLp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyXPuQeWYei7gx2S214AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgybUlHSa3hM_0nUsC14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxwFjy5fPDLN-ow2Qd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
```
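The "look up by comment ID" step above can be sketched in a few lines: parse the raw LLM response (a JSON array of per-comment coding records) and index it by the `id` field. The variable names here (`raw_response`, `by_id`) are illustrative, not part of the tool itself; the two records are taken from the response shown above.

```python
import json

# Raw LLM response: a JSON array of coding records, one per comment.
# (Two records copied from the batch above, for illustration.)
raw_response = """
[
  {"id":"ytc_UgyIBncpIXtGUYzNN2h4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugwt4FUFTBAFaxOFfv94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"}
]
"""

records = json.loads(raw_response)

# Index records by comment ID so any coded comment can be looked up directly.
by_id = {rec["id"]: rec for rec in records}

record = by_id["ytc_Ugwt4FUFTBAFaxOFfv94AaABAg"]
print(record["policy"], record["emotion"])  # -> liability fear
```

Note that the lookup key must be the full comment ID; the sample list above shows truncated IDs (`ytc_Ugx0HmtYf…`) for display only.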