Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
They will maintain zero fatalities due to legalese and highly expensive lawyers to keep it at ''zero'' All goes to show how much the nerds that build this shit really hate us all, or they are so caught up in their tech-bubble, they are too stupid to have even thought, not when you bring the subject up, can they even register what you ae conveying when you say ''what do the people do now?'' (I garuntee most of those drivers will be made to feel like burdens, rather than have construcxtive new roles) They can't handle their fake world view crashing like that. NONE of these dweebs are paid to design what happens with all of us who miss out due to these changes (A LOT OF US), except to design software to herd us around, process us into whichever reservation they throw us on, while scoring up criminal pioints on us to justify our further dehumanization to justify their utter shit,. From this year EXPECT more media confusion, less clarity on what is real, or AI created, AI movies, documentaries, art, music (the video won't even always bother to match the audio in any meaningfully sane way, showing how little it's creators give a shit about us), more abuses framed as mercy missions in creating various resource-grabbing proxy wars, common people cocooning themselves in AI ''counselling'' and therapies, while friction between us all sees many of us self-isolating. And once we do that, we stick our heads further up whichever internet rabbit-hole echo-chamber ass. And some of the most dehumanizing, blatantly abusive robber-baron resource wars ever seen.
youtube AI Jobs 2026-02-10T23:1…
Coding Result
Dimension      | Value
Responsibility | developer
Reasoning      | deontological
Policy         | liability
Emotion        | outrage
Coded at       | 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgygRUSCPDpcMirCNBZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxaSZxjBBGkIoWNIpN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzY-Mgh4aOlggGQb7t4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugyz-b2DURd2f-Ilpzp4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugwg0Va9APaqZmsuXzN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxLL4kA060guZF89Hd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugwi1jeHa7zcWOZfUKF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugz5Yl5GER2OGWOXXNR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxfxIavXP9MiNq25Et4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz9GZo8nSf0PGKP4mZ4AaABAg","responsibility":"government","reasoning":"virtue","policy":"regulate","emotion":"mixed"}
]
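The raw response above is a JSON array with one coded object per comment, keyed by id, responsibility, reasoning, policy, and emotion. A minimal sketch of how such a response might be parsed and validated before loading it into the coding table (the allowed value sets below are assumptions inferred from the values visible in this one response, not a definitive codebook; the function name parse_coding is hypothetical):

```python
import json

# Assumed value sets, inferred from the single response above -- the real
# codebook may allow other values.
ALLOWED = {
    "responsibility": {"company", "developer", "government", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"regulate", "liability", "ban", "none"},
    "emotion": {"fear", "outrage", "approval", "mixed"},
}

def parse_coding(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed rows.

    A row is kept if its id looks like a YouTube comment id (the "ytc_"
    prefix seen above) and every coded dimension holds an allowed value.
    """
    valid = []
    for row in json.loads(raw):
        if not str(row.get("id", "")).startswith("ytc_"):
            continue
        if all(row.get(dim) in values for dim, values in ALLOWED.items()):
            valid.append(row)
    return valid

# Example: the row that produced the coding table above.
raw = ('[{"id":"ytc_Ugwg0Va9APaqZmsuXzN4AaABAg",'
       '"responsibility":"developer","reasoning":"deontological",'
       '"policy":"liability","emotion":"outrage"}]')
print(parse_coding(raw))
```

Validating before ingestion matters here because the model occasionally emits values outside the codebook; silently dropping (or flagging) those rows keeps the coded table consistent.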