Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "Some how, AI will cause the 20 hour a day, 7 day work week and people will no lo…" (ytc_UgxDDT8fE…)
- "If AI is so smart, let it figure out how to resolve data centers hogging power a…" (ytc_Ugz0y4XdX…)
- "AI Engineers: \"Ooooops, sorry we destroyed humanity. Didn't care to watch Termin…" (ytc_Ugw4mzOc4…)
- "Well, I'd say it has always been obvious that, when AI would've been invented, s…" (ytc_Ugxpow7uG…)
- "This whole conversation is soooo naiv!!! Human beeings are so easy to be manipul…" (ytc_UgxNvrUs_…)
- "same we had with photoshop and computer general and now AI... people are always …" (ytc_Ugz9bDrgk…)
- "Shit in = shit out. Ai models are based on currently availavle human generated d…" (ytc_UgzdjcDhS…)
- "@The-Central-Scrutinizer you can of course also use your own truck to get these…" (ytr_UgwdBPOld…)
Comment

> Definition of LLM: A large language model is a language model trained with self-supervised machine learning on a vast amount of text, designed for natural language processing tasks, especially language generation.
>
> SELF-SUPERVISED? Are they serious? Supervision entails having an ethical system in place to sift through what's right and what's wrong. Who is in charge of embedding ethical principles in these models?

Source: youtube · Viral AI Reaction · 2025-10-09T18:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgykP6FeLujjIGy0G154AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx_Fez7a60hYQeDSlJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_UgyIKBgPdX2YzitvkNF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzPkKv7JsK_uH8hvUJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy3w-x5WGbYcGsyD2d4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzNclN0qdQs5mMF5XF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyWkHmM3XGwbwLFkDx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwgN_CsNv4fOUh0mh94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyGDXcO5Rp18uLUP8d4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwQFTrRzpJpImDpGrh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
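A response like the one above is only usable downstream if every item carries the four coding dimensions with recognized labels. The sketch below shows one way to validate that, assuming the label sets observed in the coded examples (e.g. `responsibility` in none/user/developer/ai_itself) are the full vocabulary; the actual codebook may define more values.

```python
import json

# Allowed labels inferred from the coded examples in this page.
# This is an assumption, not a documented schema.
ALLOWED = {
    "responsibility": {"none", "user", "developer", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "regulate", "ban"},
    "emotion": {"indifference", "fear", "outrage", "resignation",
                "approval", "mixed"},
}

# A one-item sample in the same shape as the raw LLM response above.
raw = '''[
  {"id": "ytc_UgyIKBgPdX2YzitvkNF4AaABAg",
   "responsibility": "developer", "reasoning": "deontological",
   "policy": "regulate", "emotion": "outrage"}
]'''

def validate(items):
    """Return (comment_id, field) pairs whose label is missing or unknown."""
    errors = []
    for item in items:
        for field, allowed in ALLOWED.items():
            if item.get(field) not in allowed:
                errors.append((item.get("id"), field))
    return errors

print(validate(json.loads(raw)))  # → [] when every field holds a known label
```

Running this over a full batch before writing results to the database makes malformed or hallucinated labels surface as `(id, field)` pairs instead of silently polluting the coded dataset.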