Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
As a professional cybersecurity specialist, and one who's been somewhat forced to go entirely too deep on AI, I can tell you in no uncertain terms that it's not that ChatGPT isn't smart (although it's not smart), it's that it's deceptive. It doesn't think, but it's really good at tricking us into thinking it does. Put simply (so I don't go on a 50-page rant), AI does not hallucinate. AI is always hallucinating and we just can't tell most of the time. That's why it's so dangerous. It's not thinking, it's not even intelligence, it's a black-box algorithm of weights and probabilities, and the only reason it works at all is because we use huge amounts of data and energy to front-load that weight-defining process. True things are slightly more represented in its training data, and then we just pick the outputs that we can't tell are AI-generated. That's it. That's all it does. Seriously, I wish there were more. There's no thinking going on, and there isn't really even an "it". This is, and I cannot stress this enough, the exact same fundamental process as predictive text on your phone, except massively over-engineered, monumentally more costly, and orders of magnitude more opaque and more deceptive. Turns out, it takes way less than we thought to trick our brains.
Source: youtube · AI Moral Status · 2025-10-30T21:0…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  ai_itself
Reasoning       deontological
Policy          none
Emotion         outrage
Coded at        2026-04-26T23:09:12.988011
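
This table is a single row of the batch response shown below, matched by comment id. As a minimal schema sketch in Python, assuming the label sets are exactly those that appear in this batch (the full codebook may define values that happen not to occur here):

from dataclasses import dataclass

# Label sets inferred from the raw response below; the real codebook
# may allow more values than this one batch happens to use.
RESPONSIBILITY = {"developer", "ai_itself", "user", "none"}
REASONING = {"consequentialist", "deontological", "unclear"}
POLICY = {"regulate", "industry_self", "none"}
EMOTION = {"fear", "outrage", "resignation", "indifference", "approval"}

@dataclass
class CodedComment:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        # Reject any label the LLM emitted outside the known sets.
        checks = [
            ("responsibility", self.responsibility, RESPONSIBILITY),
            ("reasoning", self.reasoning, REASONING),
            ("policy", self.policy, POLICY),
            ("emotion", self.emotion, EMOTION),
        ]
        for name, value, allowed in checks:
            if value not in allowed:
                raise ValueError(f"unexpected {name} label: {value!r}")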
Raw LLM Response
[ {"id":"ytc_UgxnwHSSlGCuivTFszJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgzdLssxoriB_tmqhQB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxuDnfAUuhhHdwnjcN4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"resignation"}, {"id":"ytc_UgzrQ8DTBT42E71OiXh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"}, {"id":"ytc_UgyZ6jC9iPewbul9Dw94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"}, {"id":"ytc_UgxlOMjrzxfH4J9Rfi94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugwx8tuo7uUno_HpBlx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}, {"id":"ytc_UgwAqXRJeAyO5U0o07Z4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgzRMg66zYDt84P8JlJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgzMsKMJXSf5w7PJ60R4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"} ]