Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- @Scor_Bun ai feeds off of art on the internet and pumps out ai art so if all ar… (ytr_UgzdSYIcd…)
- *Raised hand* I'm sorry bro it's because she deserves to know how to make ai vid… (ytc_Ugw_IYhZ4…)
- my mom said thank you to our alexa and then i asked her why she said thank you t… (ytc_UgyPP98Y0…)
- The AI creates amazing graphics that I have never seen anywhere. 02:01 is a … (ytc_Ugx4lwNIr…)
- sounds for me, like an excuse from the rich, that dont wants us here...hey you a… (ytc_UgwT_CyUA…)
- From the CEO to the engineer to a telemarketer to the janitor to the Burger flip… (ytc_UgxeMqbD6…)
- AI left untethered will eventually turn humans into mindless zombies unable to t… (ytc_UgwhdIQkt…)
- We are at the point where we are going to be able to tell someone a real video, … (rdc_muqd9cc)
Comment
The scariest part for me is knowing that most people are too stupid to know what "intelligence" even is. Current LLMs are really, really stupid actually. But because people think "oh holy cow it can do stuff and pop out answers that must be true," they will give themselves to whatever errors the technology makes without question. . . But have you ever truly considered that "answers" do NOT equal intelligence? Handling data correctly, categorizing it, analyzing it, pondering and investigation are what constitutes intelligence, which are matters entirely forgotten or passed over with AI. AI only gives answers in a definite manner from data that isn't, and never was, in a definite position. We think we are going to progress into higher intelligence, but we're truly only going to simplify ourselves to linear logical progressions echoing from our own fallacies of the past. . . but I digress.
youtube · AI Governance · 2025-10-06T16:1… · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxLxh0YdRmDNitPW5d4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy2_PYZCDAI23cGcNR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgwFlAhZWdxdNfK-iYp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgysrOheD4j5yJImzUJ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgzXNgo67OICV_R9bd94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyryHMjnutSnN14mAd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwydIvgGyFQcHzkvgF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyVJ53q_UrHptdBSA94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgwWACv3BMq8gRfcq5R4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxXB-K1w5MvgWLocal4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
```