Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Utter nonsense and this guy knows it! First off the term AI is certainly misleading and has nothing to do with intelligence. Secondly, pattern recognition is a very small part of actual intelligence. The amount of data needed to train these LLMs along with the cost to operate the electronics and the DCs as well as actually querying these programs is simply unsustainable. The actual risk is the deployment of such programs to make decisions such as load balancing in a power plant based on some sensor, same applies to flow controls in a refinery or an air traffic control tower and so on! Not because these programs are so powerful and magical but just because they don't work or least not as advertised. This dude is making stuff up and he knows it! It's even in his academic papers... This game of applied statistics and probability is a serious business and these clowns playing with words is a crime! Homie is literally saying "yeah dude most software engineering will be done by AI" dude, few months ago an AI agent deleted a company's production database! And don't even get me started on the term "reasoning" For fuck's sake CNN when you bring a conartist like this lying pos at least have the decency to bring an honest computer scientist who will tell your audience why this dickhead is lying! Automation existed for a while can always improve but what these marketing dickfaces are spewing is wrong and intentionally misleading.
youtube AI Governance 2025-12-31T05:5…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       deontological
Policy          none
Emotion         outrage
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugze3JN1Q2ZLKNPQOcp4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgydGYm3_Zx5LnNjTkZ4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwQaktm2qMn9JFKg7h4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgxkBTZQT5bXRkujTdN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyOzAxzpzXiLEQdlv14AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxKJOquwR7jxV5ZyFt4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzTqSg-sNko06IsDFF4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxG8bAhtH1FFQ7XlCN4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugwf0IG2WXqq2B3q3Ax4AaABAg", "responsibility": "government", "reasoning": "contractualist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugyqs1qNhVP5KrJ1IpB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
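Inspecting a raw response like the one above usually means parsing the JSON array, checking that every dimension carries an allowed label, and looking up the record for the comment on this page. The sketch below shows one way to do that in Python. The `ALLOWED` category sets are an assumption inferred only from the labels that appear in this particular response; the real codebook may define more values, so treat them as placeholders.

```python
import json

# A fragment of the raw LLM response above, truncated to the record
# that matches this page's coding result (responsibility=none,
# reasoning=deontological, policy=none, emotion=outrage).
raw = """
[
  {"id": "ytc_UgydGYm3_Zx5LnNjTkZ4AaABAg",
   "responsibility": "none", "reasoning": "deontological",
   "policy": "none", "emotion": "outrage"}
]
"""

# ASSUMPTION: label sets inferred from this one response, not the
# actual codebook, which may allow additional values per dimension.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "user", "company", "government"},
    "reasoning": {"deontological", "consequentialist", "contractualist", "mixed"},
    "policy": {"none", "ban", "regulate"},
    "emotion": {"outrage", "indifference", "fear", "resignation"},
}

def validate(records):
    """Keep only records whose every coded dimension is an allowed label."""
    return [
        rec for rec in records
        if all(rec.get(dim) in labels for dim, labels in ALLOWED.items())
    ]

records = json.loads(raw)
valid = validate(records)
print(valid[0]["emotion"])  # -> outrage
```

A check like this catches the common failure mode of structured-output coding, where the model invents a label outside the codebook; invalid records can then be flagged for re-coding instead of silently entering the dataset.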