Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I was in IT for decades as well. I now use AI every day so I can have some first-hand experience. Mostly ChatGPT for various personal tasks. It makes errors and is just flat wrong at least once a day. It said Yul Brynner was bald because of his cancer treatments. It used a 2024 tax rate instead of a 2025 tax rate for just one parameter but not for any others (I just accidentally caught it). And it made up this entire backstory about the origin of a music catalog just because it found two key words of the catalog title in an article in an old magazine which had nothing to do with the catalog. When I confronted it, it said it was 'dead serious' and explained why it feels like a hallucination, and laid out the proof. When I confronted it with the actual data, it stated 'you caught me in a hallucination spiral'. I realize that this is probably the 'kiddie' model of AI. There are probably a lot more sophisticated products out there and it is just going to keep improving. However, I also remember when robots were going to take over all menial tasks and that we would have flying cars. And I really wanted a jet pack.
youtube AI Jobs 2026-02-28T16:3…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           none
Emotion          mixed
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytc_UgzbAaozfAzW5cXGa-h4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgzTL3bh9FdTKA9r8XN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_UgyO7sqxiPjfmj7Zgr54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugwfzhbu8w2u8Lw-Sil4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"mixed"}, {"id":"ytc_UgxcG8osm9Miou2lH9t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyUHmVhBc0RJ8xYGPB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugx4WCuiZDdPdL6uHUl4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgxTJSmfP7Eri_-K2bR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgxSfeBIfiAKRNvIrpZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugx4XFaK7j1c506UEIh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"} ]
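The raw response above is a JSON array with one object per coded comment. A minimal sketch of how such a response could be parsed and checked against the codebook is below; the allowed value sets are inferred only from labels visible in this response (not from an official schema), and the helper names `validate` and `find_code` are illustrative, not part of the actual pipeline.

```python
import json

# Codebook value sets inferred from the labels seen in this raw response.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate", "unclear"},
    "emotion": {"outrage", "mixed", "approval", "indifference", "fear"},
}

# Two records copied from the raw response above; the full response codes ten comments.
raw = """[
  {"id": "ytc_UgyUHmVhBc0RJ8xYGPB4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxTJSmfP7Eri_-K2bR4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]"""

records = json.loads(raw)

def validate(records):
    """Return (id, dimension, value) triples that fall outside the codebook."""
    errors = []
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                errors.append((rec.get("id"), dim, rec.get(dim)))
    return errors

def find_code(records, comment_id):
    """Look up the coded dimensions for one comment id, or None if absent."""
    return next((r for r in records if r["id"] == comment_id), None)

print(validate(records))  # [] when every value is in the codebook
print(find_code(records, "ytc_UgyUHmVhBc0RJ8xYGPB4AaABAg")["responsibility"])
```

Note that the first record's codes match the Coding Result table above, which is how the per-comment view ties back to the batch response.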