Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Computer scientists are such alarmists, bless their hearts. Remember the Y2K scare? Financial systems did not collapse. Forty years ago the movie WarGames had Matthew Broderick running around because a computer launched our nuclear missiles. Will never happen. So now it's AI. Don't get me wrong, AI is revolutionary: the algorithms are mathematically brilliant, and the amount of data being processed is impossibly large. Very impressive. But it is still an algorithm on a digital computer, and if you understand the limitations of digital computer architecture, you see the limitations of AI.

First limitation: the data. AI can see patterns in reams of data unseen by mere mortals, resulting in its ability to provide answers to questions previously unknown. But if the answer is simply not in the data, it's not going to be possible. For example, AI can't solve cold fusion because humans never solved it. It's just a theory. The data doesn't exist, so AI will never give you the answer.

Second limitation: Emotional Intelligence. Computers are already faster and smarter than humans in technical knowledge, and AI will make them even smarter. But the real danger in the human condition is our desire and ability to manipulate, motivate, deceive, abuse power, etc. AI does not and never will have Emotional Intelligence, so it will never be smarter than humans in that way. Without EI, AI will never match our capacity to do harm.

In conclusion, AI is not more dangerous than the gun in your safe. As long as you don't point it at your head and pull the trigger, you will be fine.
youtube 2025-12-20T08:3…
Coding Result
Dimension      | Value
---------------|---------------------------
Responsibility | none
Reasoning      | consequentialist
Policy         | none
Emotion        | resignation
Coded at       | 2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgwapNYF6lIRTjzM_v54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxtLlgq0wQciQu-2714AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugx8pzvzp8tY96OPFQJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwpYFIrLWsbkAf_yTt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxNDljaHeIItP5niTB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwvXaPPDGSbkNY30qx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugz5wQjomJz3z70i8Vl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxZp-1rWRUx3IcHGQp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxvncEUWEWg5GTECdx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyFN7Qu_pts5KTY5hZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}
]
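The per-comment coding result shown above appears to correspond to one entry in this JSON array, keyed by comment id. A minimal sketch of how such a response might be parsed into a per-comment lookup (field names are taken from the output above; the parsing code itself is an assumption, not the tool's actual implementation):

```python
import json

# Excerpt of the raw LLM response shown above (two of the ten entries).
raw_response = '''[
  {"id": "ytc_UgwvXaPPDGSbkNY30qx4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwapNYF6lIRTjzM_v54AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]'''

# Index the coded dimensions by comment id so one comment's
# coding result can be looked up directly.
codes = {entry["id"]: entry for entry in json.loads(raw_response)}

result = codes["ytc_UgwvXaPPDGSbkNY30qx4AaABAg"]
print(result["emotion"])    # resignation
print(result["reasoning"])  # consequentialist
```

The dimensions in the Coding Result table (responsibility, reasoning, policy, emotion) match the keys of the entry for this comment's id.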