Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "@gorkyd7912 I still don't get where you're coming from. If AI takes all the jobs…" (ytr_Ugx-CjsSf…)
- "this ai bro is worse than any ai bro ive seen, *hes legit on shrooms 💀💀💀…" (ytc_UgzAE9K6Z…)
- "I can only defend AI in terms of utility as in i use the AI to make schedules an…" (ytc_UgwJ3ckye…)
- "Ai "art" isn't even art! People who use ai for "art" are lazy, because if you ar…" (ytc_UgzmngCNt…)
- "People were also very fearful of electricity and the radio, nothing apocalyptic …" (ytc_UgxU4FH0z…)
- "In the end, it's not even AI replacing people. It's greed devouring jobs and eve…" (ytr_UgwcPiLHS…)
- "Agree. Op makes the mistake of thinking that the AI we have today, is similar t…" (rdc_g0y7v05)
- "The thing is that some things can be automated, but people only value things tha…" (ytc_UgyGqulGF…)
Comment
I really like the idea that movies made that showcase AI in a negative light like Terminator it is a guard rail for our future. It allows us to adopt a cautionary perspective that is complementary to our entertainment perspective. In an increasingly more and more entertainment based era as we are in today, it is nice to couple that with being educated on the possible threats that could show up with the advancements we make as a species.
youtube · AI Moral Status · 2025-08-12T06:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_UgwPz9ygWheepYrI5894AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxuxgtj9e9x7vcrBi94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgxejfLC2zM6RmEhgG54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxJJvnngYkWlGzFLUd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxgBUrTpJ764j8G8CV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugy8QcdoXyKxj20YAFR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgztAxPqB9JLunLtlm94AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgwoeGnkvMaNMA5Xhg14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgztlpjZFpEjSgwLnF54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxcSMkd4Ww9c1ZqFT94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"}]