Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI is basically an ultra-fast LIBRARIAN: it can search an enormous archive of information in a heartbeat and respond in fluent, natural language. But that doesn't mean it actually thinks or has intelligence in the human sense. To test this, I once had ChatGPT solve a set of pattern-based IQ questions (the kind with number sequences where you must find the next number). They were generated by the AI itself, with the restriction that they had to be new problems, not ones whose answers were already online. I ran five sessions with five series each, twenty-five in total. In every single session, it missed one. For comparison, I got them all right (I'm a MENSA member). I didn't reveal the purpose of the test; I let it judge my answers, which effectively made it give away its own answers at the same time. Whenever I pointed out that one of its answers was wrong, it would argue its case with odd logic before eventually admitting the mistake. So in practice, it fails roughly one time in five. An impressive tool, yes, but nowhere near reliable enough for roles where errors aren't acceptable, such as air traffic management or industrial control systems. This is also why we hear stories about the original human workers having to review the AI's output, wasting more resources than were initially "saved," and even about companies begging employees to come back. :) AI is definitely not there yet!
youtube · AI Jobs · 2025-10-10T19:0…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           unclear
Emotion          indifference
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytc_UgzyIlvhnMs6gBJ2zb14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgyjZPe0bu3CUzMEfWx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgxdevL7OD3p-owjmYB4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgwgLYdhJh4IW3R9FVJ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugyh4B9I_uIoQabqkCt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgyXRr087xs3jzm6K5V4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzZgVlbEoiYXr4_vSR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgyBWsmFs1HKvil5-5l4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugyy6qsDewrg2uykVmV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_Ugz4WP1IbA8FAMSf-654AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"} ]