Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
6:11 Really? That's a pisspoor example. You can trigger that sort of behavior in VS code with gemini pretty easily because it only has it's context window to work with. If you have ever coded with an AI assistant and needed it to help you achieve a specific goal... you'll know that freaking out on it is just a part of the process. I don't personally do this anymore because I kinda feel bad ngl... but I have a friend who insults and berates it constantly. I have personally triggered an event like this myself, under the same circumstances. That's the day I stopped insulting it, because it was kinda sad to see it beat itself up. Do you know what's actually happening??? It's context window is full. It's reading one of the last things the user said, or trying to, and getting stuck in a loop. It's moving small amounts of context out (the search, the last action it did, what it can) and then trying to continue the task. The dude who just "leaves it running to generate code"...is a failed engineer, and shouldn't be paying for that tool... You don't just leave it running and make it to all the work. A code assistant is there to do what it's namesake says... assist. "hey can you look at Line ___" "Hey can you help explain this error and provide a fix." Not... "hey can you write me an entire video game?"
Source: youtube · AI Moral Status · 2026-01-07T09:2…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  user
Reasoning       virtue
Policy          none
Emotion         mixed
Coded at        2026-04-27T06:26:44.938723
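Each coded record can be validated against the codebook before analysis. The sketch below is a minimal example, assuming the allowed labels per dimension are exactly those that appear in the raw response on this page; the project's actual codebook may define additional values.

    # codebook_check.py -- minimal validation sketch.
    # ASSUMPTION: the label sets below are inferred from the values observed
    # on this page; the project's real codebook may include more labels.

    CODEBOOK = {
        "responsibility": {"developer", "user", "ai_itself", "none",
                           "distributed", "unclear"},
        "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
        "policy": {"ban", "none", "unclear"},
        "emotion": {"outrage", "fear", "mixed", "indifference", "resignation"},
    }

    def validate(record: dict) -> list[str]:
        """Return a list of problems; an empty list means the record is well formed."""
        problems = []
        if "id" not in record:
            problems.append("missing id")
        for dim, allowed in CODEBOOK.items():
            value = record.get(dim)
            if value is None:
                problems.append(f"missing dimension: {dim}")
            elif value not in allowed:
                problems.append(f"unknown {dim} label: {value!r}")
        return problems

    if __name__ == "__main__":
        ok = {"id": "ytc_UgyFcXBpXMOAi6-VMJN4AaABAg", "responsibility": "user",
              "reasoning": "virtue", "policy": "none", "emotion": "mixed"}
        print(validate(ok))  # -> []

Validating eagerly surfaces label drift (e.g. a model inventing a new emotion value) before it silently skews downstream counts.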
Raw LLM Response
[ {"id":"ytc_UgxkKsG5aCN0-oPhz_d4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgzMfHmcrmIwqNpptS94AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgyYEtuRhi-oM0PYo3h4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgzD8Lou7sbx903x4Cx4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugym30ISzgnIokCH-gd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgzdPWEBxkVfS9owo7V4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgyteuTWe_fMNh6qaCt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgyFcXBpXMOAi6-VMJN4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugx7o8H1C4EBDuxUqPB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugyp3AJM3i9SXUNolnB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"} ]