Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Not to hate Max on these, but honestly it's tiring to always see contents putting AI in a bad light. Associating it with fear and job insecurity. While these can be a worst case scenario in a dystopian era, it seems we fail to see that any form of work would still in essence need humanity. No matter how calculated and data fed AI is, it will never be 100% humanize. It will never undergo life and gain experience from life like humans. At most AI when refined will only be a helpful tool for us humans to do their jobs. Like can you imagine trusting your health to an AI alone instead of a human doctor? Why is it that customer chat support would still have an option to talk with a "live agent?" Humans would still be needed to facilitate the use of these AI inventions. The question should not be if AI will replace us, but rather how AI will reshape and refine our jobs.
youtube AI Jobs 2025-09-09T01:4…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       virtue
Policy          none
Emotion         approval
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgySSJNVlHJHT30My8J4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxTIekQLv0Hy2wFzjd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz2DyKGcfxeSrnGeS54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwFdjjTas6FvulgmV14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw3nCOrvgPPOGaj0vJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxWBszzc6apaVYcCa54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgyXvsxpcEw7p1TGKZZ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz2AXzQTwiqTDVn_u94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxaSUulL-3XErjeIRh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzKAe5lvbu0bmJxK114AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
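The raw response above is a JSON array, one object per coded comment, keyed by a `ytc_…` comment id. A minimal sketch of how such a batch response could be parsed into a per-comment lookup is shown below; the `coding_for` helper and the abbreviated one-record payload are hypothetical illustrations, not part of the coding pipeline itself, and which id corresponds to the displayed comment is an assumption based on matching dimension values.

```python
import json

# Abbreviated example payload in the same shape as the raw LLM response above
# (only the record whose dimensions match the displayed Coding Result).
raw = (
    '[{"id":"ytc_UgyXvsxpcEw7p1TGKZZ4AaABAg",'
    '"responsibility":"none","reasoning":"virtue",'
    '"policy":"none","emotion":"approval"}]'
)

# Index the batch by comment id for O(1) lookup.
records = {rec["id"]: rec for rec in json.loads(raw)}

def coding_for(comment_id):
    """Return the coded dimensions for one comment id, or None if absent."""
    return records.get(comment_id)

print(coding_for("ytc_UgyXvsxpcEw7p1TGKZZ4AaABAg"))
```

In practice one would also validate that each record contains all four dimensions and that values fall within the expected code set (e.g. `emotion` in {approval, fear, outrage, mixed, indifference}); malformed records can then be flagged for re-coding rather than silently dropped.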