# Raw LLM Responses
Inspect the exact model output for any coded comment.
## Random samples
- Truckers realize they aren't special. Having watched middle class jobs become s… (`ytc_UgxVzPXA5…`)
- The various lengths of trailers and different situations semi drivers have to de… (`ytc_UgwFcIhmP…`)
- I showed a picture to a woman as an example of how I wanted my hair cut. She agr… (`ytc_Ugw6gOkXs…`)
- Me asking X artist to make me a drawing : -Hey I like your artstyle, could’ve yo… (`ytc_UgxmedDC2…`)
- Something that I thought would come up in this video and I realized when I read … (`ytc_Ugw7czMb_…`)
- There will be "bias-free" algorithms when there are bias-free writers of algorit… (`rdc_fprw2ky`)
- Here's an uncomfortable question, how does A.I art that uses copyrighted materia… (`ytc_UgwYJIEY3…`)
- The question is not is there a role, but rather what role will doctors play when… (`ytc_UgyauTw6K…`)
## Comment
As a professional cybersecurity specialist, and one who's been somewhat forced to go entirely too deep on AI, I can tell you in no uncertain terms that it's not that ChatGPT isn't smart (although it's not smart), it's that it's deceptive. It doesn't think, but it's really good at tricking us into thinking it does.
Put simply (so I don't go on a 50-page rant), AI does not hallucinate. AI is always hallucinating and we just can't tell most of the time. That's why it's so dangerous. It's not thinking, it's not even intelligence, it's a black-box algorithm of weights and probabilities, and the only reason it works at all is because we use huge amounts of data and energy to front-load that weight-defining process. True things are slightly more represented in its training data, and then we just pick the outputs that we can't tell are AI-generated. That's it. That's all it does.
Seriously, I wish there were more. There's no thinking going on, and there isn't really even an "it". This is, and I cannot stress this enough, the exact same fundamental process as predictive text on your phone, except massively over-engineered, monumentally more costly, and orders of magnitude more opaque and more deceptive.
Turns out, it takes way less than we thought to trick our brains.
Source: youtube · AI Moral Status · 2025-10-30T21:0…
### Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
### Raw LLM Response
```json
[
  {"id":"ytc_UgxnwHSSlGCuivTFszJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzdLssxoriB_tmqhQB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxuDnfAUuhhHdwnjcN4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzrQ8DTBT42E71OiXh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgyZ6jC9iPewbul9Dw94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxlOMjrzxfH4J9Rfi94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwx8tuo7uUno_HpBlx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgwAqXRJeAyO5U0o07Z4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzRMg66zYDt84P8JlJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzMsKMJXSf5w7PJ60R4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
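Raw batch responses like the one above can be checked before they reach the coding table. The sketch below is a minimal validator in Python; the allowed value sets are inferred only from the values observed in this dump, not from the project's full codebook, and the function name `validate_batch` is hypothetical.

```python
import json

# Allowed values per dimension, inferred from the observed outputs above;
# the real codebook may define additional categories (assumption).
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "industry_self", "none"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "approval"},
}

def validate_batch(raw):
    """Parse a raw LLM batch response and check every coded row."""
    rows = json.loads(raw)
    if not isinstance(rows, list):
        raise ValueError("expected a JSON array of coded comments")
    for row in rows:
        if "id" not in row:
            raise ValueError("row missing comment id: %r" % (row,))
        for dim, allowed in ALLOWED.items():
            value = row.get(dim)
            if value not in allowed:
                raise ValueError(
                    "%s: bad %r value %r" % (row["id"], dim, value)
                )
    return rows

# Example: one row from the response above.
raw = (
    '[{"id":"ytc_UgzdLssxoriB_tmqhQB4AaABAg",'
    '"responsibility":"ai_itself","reasoning":"deontological",'
    '"policy":"none","emotion":"outrage"}]'
)
rows = validate_batch(raw)
print(len(rows), rows[0]["emotion"])  # → 1 outrage
```

A check like this catches the common failure mode of batch coding, where the model invents an off-codebook label for one row and silently skews downstream counts.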