Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking up a comment ID directly or by browsing the random samples below.
- "We understand your concern about robots resembling humans and potentially affect…" (ytr_UgyOadnL7…)
- "@kooshappreciator4773 shitty art is still art. Someone put effort into it and tho…" (ytr_Ugzal8ibq…)
- "To be fair, the AI clips are completely indistinguishable from actual sheboon be…" (ytc_UgxoeLRnw…)
- "As both a rider and a Tesla owner this video pains me. Not because I want to fly…" (ytc_Ugx8moUl6…)
- "i love 🖤💚 ai so much, more than anything ever. such an incredible time to be wi…" (ytc_Ugy_NaQZK…)
- "I respect the guy a lot as far as machine AI matters go. And that's as far as it…" (ytc_UgyYXmOVY…)
- "I myself have had two specific chats, two instances with ChatGPT, i think in a p…" (ytc_UgxeJH6q3…)
- "AI needs to be strictly REGULATED ...Main concern of course is replacing humans.…" (ytc_UgzH00a3R…)
Comment

> People, stop and think for a moment. AI does not think, it is bits on a data center somewhere. It only executes instructions. The only way it would kill us would be if it were optimizing for less pollution or something and realized humans polluted the most and it was given the nuclear codes. But it is not currently trained to optimize for such, so we can sleep safe. A better video for that would be the one Cleo Abram did, especially in minute 5:48:
> Spread the knowledge. Stop the fear.
> https://youtu.be/MWHN6ojlVXI?si=2YXoOYHeZCdfnzUF

Source: YouTube video "AI Moral Status", posted 2025-04-30T03:0…, 1 like.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
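
The four dimensions in this table recur across every entry in the raw responses below. As a rough sketch, the coding record could be modeled in Python as follows; the category sets are only those observed on this page, and the project's full codebook (assumed, not shown here) may define more:

```python
from dataclasses import dataclass

# Category values observed in the raw responses on this page.
# The full codebook (assumed) may include additional values.
RESPONSIBILITY = {"developer", "company", "government", "ai_itself", "none"}
REASONING = {"consequentialist", "deontological", "virtue", "mixed", "unclear"}
POLICY = {"none", "regulate", "liability", "ban"}
EMOTION = {"fear", "outrage", "indifference", "approval", "mixed"}

@dataclass
class Coding:
    """One coded comment, as shown in the Coding Result table."""
    id: str              # e.g. "ytc_Ugx1TjPCoXECH3DdK6R4AaABAg"
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def __post_init__(self) -> None:
        # Reject any value outside the observed category sets.
        for value, allowed in [
            (self.responsibility, RESPONSIBILITY),
            (self.reasoning, REASONING),
            (self.policy, POLICY),
            (self.emotion, EMOTION),
        ]:
            if value not in allowed:
                raise ValueError(f"unexpected code: {value!r}")
```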
Raw LLM Response
```json
[
{"id":"ytc_UgzernVDoT2vNj5Bf2l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwHi1EfOUL1lok6qtl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgxNuIM5nYiawrmFiYN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgwxQs1c29A-eHZYqYd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxGnfxEk7LjgXE_1gl4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy6_ixvwuiknDz3-qB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx1TjPCoXECH3DdK6R4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwryk8hdwO66O74eKl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxIQ9oCQb9Xfh1Tczx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwDH8nmsCfM6SPohdB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
```
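
Each raw response is a single JSON array covering a batch of comments, one object per comment ID. A minimal lookup sketch in Python, assuming responses follow exactly this shape (the function name and return convention are illustrative, not the tool's actual code):

```python
import json

def find_coding(raw_response: str, comment_id: str) -> dict | None:
    """Parse a raw batch response and return the entry for one comment ID.

    Returns None if the ID is not present in the batch.
    """
    entries = json.loads(raw_response)
    for entry in entries:
        if entry.get("id") == comment_id:
            return entry
    return None

# Against the batch above, looking up "ytc_Ugx1TjPCoXECH3DdK6R4AaABAg"
# returns the developer / consequentialist / none / indifference entry
# shown in the Coding Result table.
```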