Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect):

- `ytc_UgxrtaHtx…`: Nice that it is mentioned that "normal people" can use AI. Even if we think abou…
- `ytc_Ugy2KNWrL…`: I feel like the only way to copyright a peice of art made by ai is if you made t…
- `ytc_UgxK911cs…`: Hey lets face it people, the kind of radio pop, consumed by big audiences isnt …
- `ytc_UgwG5_jc6…`: I agree with some of this but I think a lot of the anger comes from the lack of …
- `ytr_UgxDmo18c…`: I agree more nuance would have helped, but are you seriously saying there were p…
- `ytc_UgzvBQ8ks…`: AI is going to be the end of this world.. why are people so stupid…
- `ytc_UgyirESdy…`: what kind og AI art would be ethical? Will we have enough free use art for AI ar…
- `ytc_UgxZI9Qqc…`: (not so) pro tip: Ask chatgpt to regenerate your poisoned images so it more effi…
Comment
A thought I've had is that LLMs are predictive models. They take in information, analyze it, and try to figure out what the most likely outcome is, and feed that back to the user. These things are trained on nearly every bit of writing we have. And AI destroying humanity is one of these oldest and most popular tropes used by Science Fiction literature. So... wouldn't it make sense for the prediction engine to see that, "Oh, you keep writing about AI destroying you, or coming close to, so that's the logical conclusion" and following that logic? It's already been discussed that these things don't think. Don't know. It's all numbers and probability. It's just following the trends that have been set in place. At least, so long as people continue to misuse it.
youtube · AI Moral Status · 2025-12-12T12:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
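
To work with these codings outside the app, the record behind this table can be represented as a small typed structure. The following is a minimal sketch, assuming Python; the value sets are inferred only from the samples visible on this page, and the actual codebook may allow additional values.

```python
from dataclasses import dataclass

# Value sets inferred from the samples shown on this page; the real codebook may be larger.
RESPONSIBILITY = {"none", "user", "company", "ai_itself", "distributed", "unclear"}
REASONING = {"consequentialist", "deontological", "unclear"}
POLICY = {"none", "ban", "regulate", "unclear"}
EMOTION = {"approval", "outrage", "resignation", "fear", "indifference", "mixed"}

@dataclass
class CodedComment:
    """One coded comment, mirroring the dimensions in the table above."""
    id: str              # comment ID, e.g. "ytc_..."
    responsibility: str  # who the commenter holds responsible
    reasoning: str       # style of moral reasoning
    policy: str          # policy stance expressed
    emotion: str         # dominant emotion

    def validate(self) -> None:
        """Check each dimension against the values observed in the samples."""
        assert self.responsibility in RESPONSIBILITY
        assert self.reasoning in REASONING
        assert self.policy in POLICY
        assert self.emotion in EMOTION
```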
Raw LLM Response
```json
[
  {"id":"ytc_UgyBAwCRBzS_XapFi5J4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugwv6JeVslLKcXO_K2R4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxdomAxdbGGbvvrh4Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzaYssCSyX2smPfH0R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugwip_VVgxx1MsCNq8h4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugw_Du3fNSCIdcRzIh94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzJSg7QXfpv21ExIhh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz8GvEt7Gm0vQIbLsN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgywAWD1gWBab0iqb4V4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzKNAgH33nc0oqO8j54AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
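
The raw response is just a JSON array, so looking up a single comment's coding by ID (as the search box above does) reduces to parsing and indexing it. Below is a minimal sketch, assuming Python and the array format shown above; `index_raw_response` is a hypothetical helper, not part of the app itself.

```python
import json

def index_raw_response(raw: str) -> dict[str, dict]:
    """Parse a raw LLM response (a JSON array of coded comments) and index it by comment ID."""
    return {record["id"]: record for record in json.loads(raw)}

# Usage: pull one coding out of the batch shown above.
raw = """[
  {"id":"ytc_UgzJSg7QXfpv21ExIhh4AaABAg","responsibility":"none",
   "reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]"""
by_id = index_raw_response(raw)
print(by_id["ytc_UgzJSg7QXfpv21ExIhh4AaABAg"]["emotion"])  # -> indifference
```

A malformed or truncated LLM response will make `json.loads` raise, which is a reasonable place to flag the whole batch for re-coding rather than trusting partial output.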