Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Humans admire success but we are quite slow to improve …. so AI will … in the be…
ytc_UgzLBJE-S…
If he was taking AI health advice, at least half of his braincells were cooked a…
ytc_UgyPX5Dm6…
Human: Destroy them now before it's too late!
A.I.: Destroy them now before it's…
ytc_UgzviTTY7…
Sure, the _current_ "AI's" (in quotes) don't learn in the same sense as humans, …
ytc_UgzPeO97w…
“If the source is an authoritative news outlet, great!” @14:00
Authoritative is…
ytc_UgyCQHSA3…
Bro was like: “I’M AN AI ROBOT!? NO NO NO, I DON’T HAVE A SOUL & HUMANS HATE ME,…
ytc_UgwPAsAPT…
Guess we forgot the people already killed by self driving vehicle’s. No Just mak…
ytc_UgypZKSYF…
AI takes what it learns from humans and repeats it, thats also why it gives inac…
ytc_UgxpOX7s-…
Comment
I have this litmus test of like "if Hank Green hasn't talked about it yet, it probably still isn't that serious or should come with a great deal of skepticism", and this was pretty much that topic - and the broader topic of the more wilder, sci-fi-esque notions of AI - that I've been following a bit because it's potentially so transformative and so profound. Like, obviously still skepticism, still keep a scientific, reasonable mind about these things, but it's kinda like you've been feeling something's up for a while, and your most moderate, levelheaded, down-to-earth friend you have pokes you and goes "hey, have you heard about this?", y'know?
youtube
AI Moral Status
2025-10-30T19:5…
♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgynDYZb4IxHCUrEkpx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyvjbRMju-2VfWSGHJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwmebdj1ebHMVxFsKl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy-fHE2_i-iW0toRId4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxWSFqtsyea6yw-cid4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwwKVRGgfPGlKFLHrF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgzsFSPFip6DBiegTtd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxLqf3SFs2mzZGTotl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxtXkYrDzbMQL1Qo2t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugy2iXA6OZPCs29wmdB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
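The lookup-by-ID step described above can be sketched in a few lines: parse the raw model output (a JSON array of per-comment codes, with the field names shown in the response above) and index the records by comment ID. The `index_by_id` helper is a hypothetical illustration, not the tool's actual implementation; the record below reuses one real entry from the response.

```python
import json

# A raw LLM coding response: a JSON array of per-comment code records.
# (One real record from the response above, kept short for illustration.)
raw = '''[
  {"id": "ytc_UgynDYZb4IxHCUrEkpx4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]'''

# Fields every coded record is expected to carry.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_by_id(raw_json: str) -> dict:
    """Parse a raw coding response and index records by comment ID,
    dropping any record that is missing one of the expected fields."""
    records = json.loads(raw_json)
    return {r["id"]: r for r in records if EXPECTED_KEYS <= r.keys()}

codes = index_by_id(raw)
print(codes["ytc_UgynDYZb4IxHCUrEkpx4AaABAg"]["emotion"])  # fear
```

Indexing once and looking up by ID keeps each inspection O(1), which matters when cross-referencing many truncated IDs like `ytc_UgzLBJE-S…` against the full response.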