Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
the idea of being this kind of person terrifies me to my core. being overly confident is like an ego death for me, i don’t say anything of authority unless i KNOW i’ve at least have some advantageous knowledge over the general populace. (like i usually only do it for my special interests or the thing i have my College Degree in) if you cannot explain basic concepts of something you studied for a full 4 yr degree, you learnt nothing and put yourself into debt for no reason. like how does that not scare the shit out of you? how can you say you know everything when a recreational interview can literally put your lack of knowledge on blast? if you don’t like coding, don’t fucking code, don’t dedicate years of your life for smth you obviously dislike. coding is smth delicate but also requires a lot of care to make it work. i’m not even a coder, and i would struggle to learn a lot of coding languages bc of their basis in math (dyscalculia is a bitch), but i at least understand that about coding in general. chatgpt is trained on other coders’ work, on help blogs and open access code, so while yes it will help with tiny issues, using it only for a whole project will mean that shit is jank and will not work correctly. LLM do not understand how it works, they just regurgitate what humans have said without understanding it, because it is an algorithm built on human blood, sweat, and tears; not an actual consciousness.
youtube AI Jobs 2025-10-04T02:2…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgxXwt1g2xBqOFRF7gB4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy3oUjF1gdOaXZnHYB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwDprFxyMykeyHIK8p4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxJgUF3a3C_SnE1bKh4AaABAg","responsibility":"government","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyK04ssS33HDJ5DWlB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzfX2euezcbp9S1Ka14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgzgA3X7OSITbpeXFhF4AaABAg","responsibility":"none","reasoning":"none","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyBBafmz4_9l8v_BUh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwvrygPrgcQfGNnBJR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyW-2bDlLO4KeGinPd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
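A minimal sketch of how one might sanity-check a raw response like the one above before trusting its codes. It assumes only what the response itself shows: a JSON array of records, each carrying the five keys `id`, `responsibility`, `reasoning`, `policy`, and `emotion`. The example data is abbreviated to two records; `json.loads` will also raise on a malformed array (for instance, one closed with `)` instead of `]`), which is one simple way to catch truncated or corrupted model output.

```python
import json

# Abbreviated stand-in for the raw LLM response captured above.
raw = '''[
  {"id": "ytc_UgxXwt1g2xBqOFRF7gB4AaABAg", "responsibility": "none",
   "reasoning": "virtue", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugy3oUjF1gdOaXZnHYB4AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "none", "emotion": "indifference"}
]'''

# The five coding dimensions visible in the raw response.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def missing_dimensions(records):
    """Return the ids of records that lack any required coding dimension."""
    bad = []
    for rec in records:
        if not REQUIRED_KEYS <= rec.keys():
            bad.append(rec.get("id", "<missing id>"))
    return bad

records = json.loads(raw)  # raises json.JSONDecodeError on malformed output
print(missing_dimensions(records))  # [] when every record has all five keys
```

Records whose codes still display as "unclear" in the result table could then be cross-checked against this parsed list by `id`.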