Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Good video, I definitely learned some new things. I will say this though, respectfully…. If feel this is fear mongering at best. They’ve been saying 6-12 months for 2 years now, with this digital D-Day always being pushed due to bumps in the AI road. AI is great, and is definitely a revolutionary tool, but when you hear things like AI is writing code that is so overly complicated it can’t be upgraded or changed by a human because although the syntax is correct, but the logic is so messy, is not going to fly in the business realm. People and businesses will not invest in something that only understands itself. It’s like trusting a 5 year old to get on the bus and hoping for the best. Thats just not a good idea and frankly irresponsible. In 12-18 months, the AI bubble will completely pop, and investments will be pulled. Nobody has made a profit off this, and it has no future plan to do so…it’s a game of risk and reward for these companies and CEO’s. AI will be like the Internet, it will grow after the implosion, used in day to day work, but it will not take everyone’s job. It can’t. People and business can say no, especially if it’s taking away their life and job, and if enough of the population say no, no money can be made as it needs us to work. As you yourself stated, it’s not increasing productivity because it’s good enough, and it will take YEARS to be at a level that is even considered adequate. If no one has a job, nothing will work, no money can be made, and you will see major issues across society. People make AI possible, and people make AI work. Without us, it’s worthless.
youtube AI Jobs 2026-03-19T17:0…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          industry_self
Emotion         outrage
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgwklMc5OK6ZqTQ3TYB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyLKnbKf5L4EYqXnpx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugyde2R6sAApOxU5wyd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzY8gC7482Q-rycBKZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyNipVgG_XGDxuSLk94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"outrage"},
  {"id":"ytc_UgyVNcQdnuzXCBIatBx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzjs5gERbtWDQzuRRN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxW_jykRum_dYt-2-54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugz3MazuvOIfVeQQo8t4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzE31MRlo86wdb33RZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"resignation"}
]
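A minimal sketch of how a single comment's coding result can be recovered from a raw batch response like the one above. The JSON here is trimmed to two entries for brevity, with ids copied verbatim from the output; matching this particular comment to id ytc_UgyNipVgG_XGDxuSLk94AaABAg is an inference from its dimension values (developer / consequentialist / industry_self / outrage), not something the record states directly.

```python
import json

# Trimmed copy of the raw LLM batch response shown above.
raw = '''[
  {"id":"ytc_UgyNipVgG_XGDxuSLk94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"outrage"},
  {"id":"ytc_Ugzjs5gERbtWDQzuRRN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''

# Parse the batch and index it by comment id, so one comment's
# coded dimensions can be looked up directly.
records = json.loads(raw)
by_id = {r["id"]: r for r in records}

# The entry inferred to correspond to the comment inspected above.
coded = by_id["ytc_UgyNipVgG_XGDxuSLk94AaABAg"]
```

Indexing by id rather than scanning the list keeps the lookup robust when the model returns the batch in a different order than the comments were submitted.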