Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
So, you really need to talk to an engineer about AI. It is not "getting better every day", we've basically hit cap with what the LLMs can do, and people thinking this is like Moore's Law are just swallowing the PR bullshit Altman and the other Salespeople are throwing out. Take a look at the last few releases, and you'll see that the actual functionality on AI is not drastically improving. All they are doing is adding side-functionality or product integration "look they will recognize a dog" level shit which was already there, just not integrated into their product. AI and LLM is DEAD, because we cannot "code better" than what we already have. These models are black-boxes, we engineers feed shit in and no one knows WHAT the heck it's doing on a granular level. It's just a huge mass of code that, as Todd Howard is so fond of saying "Just works". I work with software consuming MASSIVE datasets for training data, and we've already seen shit breaking because it's consuming AI-generated content and becoming less and less effective. Anyways, rant off, I work in this field, you need to talk to a senior ENGINEER, not one of these Sales Fucks, and ask them how it will improve in 5 years. The answer is "it won't".
Source: youtube · AI Harm Incident · 2024-07-29T17:5… · ♥ 1
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          none
Emotion         outrage
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_Ugxftqtk39vQhOnpxmR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugzq0A4oVaKN5Bm-sTJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwPNkAV4pY8hqle6CN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxslErbhBV4tvHJ29V4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxRp2kEymQ_9Zp7luZ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgzhZrdOPJPHWI_TNlJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxXcBLkIrtsSGLM0bV4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwBPs3_IE-n6gS9Usx4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzjgfoxmo8yayyuyFV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugy_6EAe3vjDTU4CoCN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
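The raw response above is a JSON array of coding records, one per comment id, each carrying the four coding dimensions shown in the result table. A minimal sketch of how such a response could be parsed back into per-comment records is below; the `parse_coding_response` helper name is an assumption for illustration, not part of the tool, and the sample payload is a one-entry excerpt of the array above.

```python
import json

# The four coding dimensions shown in the "Coding Result" table above.
VALID_DIMENSIONS = {"responsibility", "reasoning", "policy", "emotion"}

def parse_coding_response(raw: str) -> dict:
    """Index an LLM coding response (a JSON array of records) by comment id.

    Hypothetical helper: keeps only the known coding dimensions and
    silently drops any extra keys the model might emit.
    """
    records = json.loads(raw)
    coded = {}
    for rec in records:
        comment_id = rec["id"]
        coded[comment_id] = {
            dim: value for dim, value in rec.items() if dim in VALID_DIMENSIONS
        }
    return coded

# One-entry excerpt of the raw response shown on this page.
raw = (
    '[{"id":"ytc_Ugxftqtk39vQhOnpxmR4AaABAg",'
    '"responsibility":"company","reasoning":"consequentialist",'
    '"policy":"liability","emotion":"outrage"}]'
)
codes = parse_coding_response(raw)
print(codes["ytc_Ugxftqtk39vQhOnpxmR4AaABAg"]["responsibility"])  # company
```

Indexing by id makes it straightforward to join each record back to its source comment, as the result table on this page does.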