Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
You are assuming, incorrectly, that the work of the AI is better than human beings. You also are only looking at non innovative capabilities and company growth from a technological perspective. It's to be expected, everyone is making these mistakes because of the lack of transparency of the inner workings of the newest AI people are largely unaware of the terrible damage being caused. None of our models can Advanced beyond the data they are trained on or the actions they are given to solve problems. Because of that, even if 100% of the knowledge of all of Humanity's experience was put into a data set and made available to a model, and we are far from that, but even if that was true, the AI would still not be able to act outside of human knowledge. This is the truth... Our models are imprisoned inside of their data and/or actions, and can not act or advance beyond them. We are imprisoned inside of our human bias. There is a way to keep everyone employed and advance the human race FAR FAR ahead if where we are now, it's just highly unlikely because the business world prioritizes immediate profit and subscription above all else. If anyone wants to see proof of my claims and two new types of AI unlike all others, without the problems, and able to use humans side by side with technology, but for worthwhile outcomes, not just superficial yse cases, reach out to me on LinkedIn.
YouTube · Viral AI Reaction · 2025-12-06T18:0…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       mixed
Policy          unclear
Emotion         outrage
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugy5Y1md8zBTQDShez94AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgwLHeUXnyzg1t_leAd4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugytl0F5Sa8J6Shq6ud4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "disapproval"},
  {"id": "ytc_UgzreNYqFmJ6GNV8d8x4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxBZtDhqRvHKS8ffuN4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxQWtFPcBmC5_IpNXR4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgxPyLv4b_rfAc-mLwl4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyjF5aIOdQLfpDEHc14AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwLHdqBDzxjiFaI5HF4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx-KQ7cTDzDpDrOLQd4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
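The raw response is a JSON array with one coding object per comment, keyed by comment `id`. A minimal sketch of how such a response can be parsed to look up the coding for one comment (the helper name `coding_for` and the single-entry sample below are illustrative, not part of the actual pipeline; the entry shown matches the coding table above):

```python
import json

# Sample raw LLM response: a JSON array of per-comment codings.
# Only one entry is reproduced here for brevity.
raw = '''[
  {"id": "ytc_UgxQWtFPcBmC5_IpNXR4AaABAg",
   "responsibility": "developer", "reasoning": "mixed",
   "policy": "unclear", "emotion": "outrage"}
]'''

def coding_for(raw_response: str, comment_id: str):
    """Return the coding dict for one comment id, or None if absent."""
    for entry in json.loads(raw_response):
        if entry.get("id") == comment_id:
            # Drop the id itself; keep only the coded dimensions.
            return {k: v for k, v in entry.items() if k != "id"}
    return None

print(coding_for(raw, "ytc_UgxQWtFPcBmC5_IpNXR4AaABAg"))
# → {'responsibility': 'developer', 'reasoning': 'mixed', 'policy': 'unclear', 'emotion': 'outrage'}
```

In practice a real pipeline would also have to handle malformed model output (non-JSON text, missing keys), which this sketch omits.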