Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "We appreciate your humor! It's fascinating to think about the balance between ef…" (`ytr_Ugym0Fxt8…`)
- "Huge damn W right here i watched the og video and making even more is amazing as…" (`ytc_UgyKkhG4A…`)
- "I find Mr. Hinton to be a marked example of an intelligent man. Though I find hi…" (`ytc_Ugz2py79O…`)
- "Dude....Thats how the Matrix robots decided to end humanity. They tested us, the…" (`ytc_UgxJ-ns60…`)
- "But it's very ethic to give people the base of all this, by saying as a little i…" (`ytc_UgxNG-eF_…`)
- "No divorce settlement, no child support, no losing 50% of what you worked for, n…" (`ytc_UgwA_AqZf…`)
- "Showing face and talking at the end of AI related video was a good move, I get t…" (`ytc_UgybV2udL…`)
- "I'm so scared for the future, seeing all the new lab scientists come out to work…" (`ytc_Ugw_Nq8cR…`)
Comment

> I'm sorry, but the idea that "I don't know" is not an acceptable answer from a LLM makes me even more skeptical of AI than I am now. It sounds to me like the goal is to keep people using the model even if you KNOW incorrect / unwanted / (even harmful) data is being returned? More proof to me that the majority of LLMs are just glorified search engines used to collect massive amounts of data for marketing research on those willing to use it. No thanks.

Source: youtube · 2025-11-19T17:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
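The coding dimensions above can be sanity-checked against the value sets that actually appear in the raw responses on this page. The `ALLOWED` sets below are inferred from those responses only; the full codebook may define additional values, and `invalid_fields` is a hypothetical helper, not part of the tool:

```python
# Allowed values per coding dimension, inferred solely from the model
# responses shown on this page (an assumption — the real codebook may
# permit more values than these).
ALLOWED = {
    "responsibility": {"none", "unclear", "user", "company", "ai_itself"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"none", "unclear", "liability", "ban", "regulate"},
    "emotion": {"outrage", "approval", "indifference", "mixed"},
}

def invalid_fields(row: dict) -> list[str]:
    """Return the dimensions whose value is missing or outside its allowed set."""
    return [dim for dim, ok in ALLOWED.items() if row.get(dim) not in ok]

# The coding shown in the table above passes this check:
row = {"responsibility": "company", "reasoning": "consequentialist",
       "policy": "liability", "emotion": "outrage"}
print(invalid_fields(row))  # []
```

A check like this catches the common failure mode where the model free-texts a label (e.g. "corporate" instead of "company") that would otherwise silently pollute downstream counts.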
Raw LLM Response
[{"id":"ytc_UgwmqSE39iyLF9Mzemh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},{"id":"ytc_UgzYAerrrQN1cFGzuv54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},{"id":"ytc_UgxKQbGcwkQCiN22Ibx4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"approval"},{"id":"ytc_Ugw-jbwUikWyu42QQLh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},{"id":"ytc_UgwkmE8elRaDem8h43h4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},{"id":"ytc_UgyAA6Cclk9ioTzhY3l4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},{"id":"ytc_UgzgsddE9q6ecLOmpZF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},{"id":"ytc_UgzEUX_kG29sX3WlNcF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},{"id":"ytc_Ugz9ByUof-k4ASQ3TYl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},{"id":"ytc_Ugx9X-CkWUq6Km9w8zd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}]
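The lookup-by-comment-ID flow can be sketched as follows. `index_by_id` is a hypothetical helper (not the tool's actual code), and `raw` reuses two entries verbatim from the raw response above in place of the full array:

```python
import json

# Excerpt of the raw LLM response shown above (two of the ten entries).
raw = """[
  {"id": "ytc_Ugw-jbwUikWyu42QQLh4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzEUX_kG29sX3WlNcF4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"}
]"""

def index_by_id(raw_response: str) -> dict:
    """Parse the model's JSON array and index each coding row by comment ID."""
    return {row["id"]: row for row in json.loads(raw_response)}

codings = index_by_id(raw)
print(codings["ytc_Ugw-jbwUikWyu42QQLh4AaABAg"]["policy"])  # liability
```

Because the model returns one JSON object per comment in a batch, indexing by `id` is what lets a single coded comment be traced back to the exact line of raw output that produced it.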