Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
"there's the right way, the wrong way and the Max Power way!" "isn't that jut the 'wrong way'?" "yes, but faster!" we know that predictive LLM 'AI' can make mistakes quite frequently, oftentimes kinds of mistakes that humans would never make, but it is faster. so, people who either know the subject, or who can research the subject, need to verify everything that passes through a predictive LLM 'AI' for accuracy. even if the predictive LLM 'AI' is right 99.99999% of the time, we would always need to account for that possibility of error, because it doesn't actually understand what it is saying, it did not reason its way to its conclusion, it only predicts what would likely come next based on its data and parameters. I'm not saying to not ever use it, or that it doesn't have any potential or benefits, just that we need to remain vigilant about verifying any information that it produces. any company or person who uses predictive LLM 'AI' should be held 100% responsible for any actions taken based on the predictive LLM 'AI' 's output. also, while I am on my high horse, sources scraped for data to create LLM's should be well compensated since the value of those sources was, or is being, stolen and the originators of the information are being denied potential revenue by people going onto their scraped site. (people may have gone to Site A instead of Site B, but by using the LLM, people went to neither). if the LLM 'AI' can't compensate its sources because it can't/won't identify its sources, then it should not be allowed to be used. it is just plagiarism and/or theft of intellectual property.
youtube AI Jobs 2025-05-30T14:5…
Coding Result
Dimension        Value
Responsibility   user
Reasoning        deontological
Policy           none
Emotion          mixed
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[ {"id":"ytc_UgxPOIVaMsJyzG_a7Kh4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"outrage"}, {"id":"ytc_UgwuG6-kRRbKC7TLV694AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgyCILISw1s9FecDAKJ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"}, {"id":"ytc_UgzqGmoaxwwqo8x9LxF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgybNRXahwAaPBlZGf54AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"} ]