Raw LLM Responses
Inspect the exact model output for any coded comment.
Responses can be looked up by comment ID, or selected from the random samples below:
- "scientists say if robots keep being created, then the robot apocalypse will happ…" (ytc_UgggzjmFy…)
- "Unfortunately this didnt age well. Yes top Programmers on complex Softwares are …" (ytc_UgwQ6K4cO…)
- "Bernie look how they use older AI to tattle tell on more advanced AI as they hav…" (ytc_Ugx1Mheav…)
- "The venn diagram between the NFT stans and the AI stans is a circle, as I've hea…" (ytc_UgxjHEYJi…)
- "was this all scripted?-is the question-if i was a robot id take the reigns -HU…" (ytc_UgyNFtvzS…)
- "Very upsetting video. We should not afraid that we don't understand, e.g. comput…" (ytc_UgzJN32Bf…)
- "Great interview! @Novaramedia, you need to change that intro video. That snippet…" (ytc_UgyvKxzyA…)
- "Well, I would agree that there's something about modern films that make them loo…" (ytr_Ugy9HiSn9…)
Comment
one thing that bothers me about all these conversations is that humans are always framed as inherently empathetic and, well, "human". and then we contrast AI to that. we act like AI would do all these dangerous things out of self interest, but like, have you guys looked at the world around us? AI may not have empathy, but many people don't eather. at least AI doesn't have an ego, malice, and all these things most powerful humans in the world do. so in my book, it really can't be worse. There is no point maintaining status quo. As an environmental scientist, status quo is unacceptable. AI is a gamble, but i distrust humans so much more than i do a potential superintelligence.
youtube · AI Moral Status · 2026-01-04T20:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | virtue |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
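Each dimension in this table takes a categorical label. As a minimal sketch of what such a record looks like in code, with illustrative (not actual) class and field names, and label sets inferred only from the values visible on this page:

```python
from dataclasses import dataclass

# Label sets below contain only the values that appear in this section;
# the real coding scheme may define additional categories.
RESPONSIBILITY = {"none", "company", "developer", "ai_itself", "distributed", "unclear"}
REASONING = {"consequentialist", "deontological", "virtue", "unclear"}
POLICY = {"none", "regulate", "liability", "industry_self", "ban", "unclear"}
EMOTION = {"fear", "outrage", "resignation", "indifference", "mixed"}

@dataclass
class CodingResult:
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: str  # ISO timestamp, as in the table above

    def uses_known_labels(self) -> bool:
        """True if every dimension uses a label seen somewhere in this section."""
        return (self.responsibility in RESPONSIBILITY
                and self.reasoning in REASONING
                and self.policy in POLICY
                and self.emotion in EMOTION)

# The record shown in the table above.
result = CodingResult("distributed", "virtue", "unclear", "mixed",
                      "2026-04-26T23:09:12.988011")
assert result.uses_known_labels()
```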
Raw LLM Response
[
{"id":"ytc_Ugwo1P8kisYu_1IAwe54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwAERHzdC0QhPBUAPd4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzU38CVeCSuHrUQ_jt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyMflifZsFXoXafBa54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgyXfHEwu88GP9Htddp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyHHAIpRBNdQfiV78d4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxnuNPn12og6DD9ZMR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwbhKyWyRViJUoFgwF4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz9nrrKluo20eoRQxp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgwTCO29C3Xm7_404-V4AaABAg","responsibility":"company","reasoning":"virtue","policy":"ban","emotion":"outrage"}
]
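Each raw response is a JSON array covering a batch of comments, and the Coding Result table above corresponds to the array entry whose ID matches the displayed comment (here, apparently ytc_UgwbhKyWyRViJUoFgwF4AaABAg, the only entry with distributed / virtue / unclear / mixed). As a minimal sketch of how such a batch could be turned back into per-comment codings, assuming only the structure visible above (the function name and example variable are illustrative, not the tool's actual code):

```python
import json

def index_codings(raw_response: str) -> dict[str, dict]:
    """Parse one raw batched LLM response (a JSON array of per-comment
    codings, as shown above) and key the records by comment ID.

    Illustrative sketch only; the real pipeline's parsing and error
    handling are not shown on this page.
    """
    records = json.loads(raw_response)
    return {record["id"]: record for record in records}

# Minimal example using the entry that matches the Coding Result table above.
raw_response = """[
  {"id": "ytc_UgwbhKyWyRViJUoFgwF4AaABAg",
   "responsibility": "distributed", "reasoning": "virtue",
   "policy": "unclear", "emotion": "mixed"}
]"""

codings = index_codings(raw_response)
coding = codings["ytc_UgwbhKyWyRViJUoFgwF4AaABAg"]
print(coding["responsibility"], coding["reasoning"],
      coding["policy"], coding["emotion"])
# distributed virtue unclear mixed
```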