Raw LLM Responses

Inspect the exact model output behind each coded comment.

Comment
lol AI is NOT coming for your job. In fact, it will almost certainly give cause to keep a human doing the job. I have been dabbling with ChatGPT 5 to modify a Fallout 4 VR environment. Many mod packs are broken due to time and updates galore through it all, including the base game. So it takes a lot of time and effort to update what should be updated and do away with others that do not work at all or introduce issues. It is painfully tedious. I used AI to help with it and while it did eventually get me to a functional game play, I had to give it shit multiple times for incorrect or outdated information...or simply straight up, irrelevant. Many times it would reference Skyrim information...Form 44 isn't a part of Fallout and never was, but fuck, that AI just could not let it go...even after blowing smoke up my ass and saying I was 100% correct and that it would purge any references to Form 44....3 hours later, it mentioned that shit 3 times in one query I made referencing FASE...had nothing to do with the question nor the answer...but it did so anyways. WHY? Because AI is fuckin' stupid. I also had it help me clarify an install process to enable voice conversation in the game, using AI and a few other tools. Gotta say, 4 fuckin' hours I wasted trying to follow the AI's directions to set up Python and the beginnings of the AI model and invariably corrupted my OS in the process. While trying to remove Python. The "uninstaller" for python didn't work. AI gave me a command line that looked "legit", but it fucked the OS so bad, I was forced to reinstall...even restore was fucked... AI sucks! Don't worry about your job.
YouTube · Viral AI Reaction · 2025-11-23T23:1…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          approval
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgxQ8Pf1W-_ic0mUkOd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}, {"id":"ytc_UgwJeOGs63n1uNIhCnN4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"fear"}, {"id":"ytc_UgxNB1F6I5uqlPo9vxt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgzP_ZtKgAwKJ8Saz3d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"}, {"id":"ytc_UgyNdDH_LdW7sLmmMVd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgwVzCD3Xg3wocamLb94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgwAUhAAipzzp2znFnd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgxvRS3iiljIiowtD494AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgwGNrOB7VaSCcQW3vl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzxbJPc1WiyI90btvl4AaABAg","responsibility":"user","reasoning":"mixed","policy":"unclear","emotion":"mixed"} ]