Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Studies also say that AIs are intelligent. "Studies" have absolutely no idea what heck they're talking about. AIs can in fact replace the "Bulk" jobs. It's kinda like how artists are scared of losing their jobs. Yes, you are at risk of being replaced as an artist, if what you do is the most generic overdone shit that hundreds of thousands of other people do. But the moment what you do leaves that area, the moment you make anything even remotely creative, AIs fall short. AIs are not intelligent, they do not have comprehension or understanding. They cannot perform complex tasks, nor can they ever check that what they do is correct. And if the argument is "Yes but AIs are still evolving". Turns out, no, they aren't. The current models used for AI has come to a pretty hard stop for a bit. The current models have reached a limit technologically, the only thing we can do is scale. Train them on larger and larger samples, keep correcting their quirks here and there... But that yields less and less results. The only improvement we're still doing right now is finding more applications, more tools that we can use AI into, more places where AI is efficient, and where it sucks and shouldn't be trusted. But unless we get a significant breakthrough, AIs aren't getting any better. And as it stands, we have *no idea* where to go next in AI development. The next step would be to give AI the ability to comprehend and understand things... But that's pretty much asking to create life.
youtube AI Jobs 2025-09-08T22:1…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           none
Emotion          outrage
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgxSmGEdAVrMsBpRWoN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxCieaeJ9gHx0z-FJx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy3dSKVqCi1Rq2bdVJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz7feyLvEt4OB3-U5F4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzWNo8LP2q23WnZY2F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugyw91T_ccWAFYwl6gp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxuQrZ98bEcTLIdZUB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyKJn1hm1ImKnIP6Ql4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwnqzdlBEh0Uod5AoF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxdLV7cbUvgmixUSEh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
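The raw response is a JSON array with one coding record per comment id, each carrying the four dimensions shown in the result table (responsibility, reasoning, policy, emotion). A minimal sketch of parsing such a response back into a per-comment lookup — the field names come from the response itself, but the `codings_by_id` helper is a hypothetical illustration, not part of the tool:

```python
import json

# Excerpt of a raw LLM response: a JSON array of coding records,
# copied verbatim from the output above (two of the ten entries).
raw = '''[
  {"id":"ytc_Ugyw91T_ccWAFYwl6gp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxSmGEdAVrMsBpRWoN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]'''

def codings_by_id(raw_response: str) -> dict:
    """Index coding records by comment id (hypothetical helper)."""
    return {rec["id"]: rec for rec in json.loads(raw_response)}

codings = codings_by_id(raw)
# The record for the comment shown above carries the same values
# as the coding-result table: developer / deontological / none / outrage.
print(codings["ytc_Ugyw91T_ccWAFYwl6gp4AaABAg"]["emotion"])  # outrage
```

Keying the records by `id` makes it easy to join a model's coding back onto the original comment text when inspecting individual responses like the one on this page.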