Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is very interesting, but not surprising. Our biology, and so our history and our data, is poised on self-preservation; unfortunately, sometimes at any cost. Naturally, models simulating this data distribution also capture this pattern. Clearly, nothing sentient is at play. In fact, it's strange referring to these models as "they." Although I admit, the human tendency to anthropomorphise is compelling at times. We should remind ourselves that "it" is more than sufficient to address these complex mathematical functions. This report is a great eye-opener as to how much we need credible bodies that oversee AI ethics. The cybersecurity field would also benefit from more study into possible malicious uses of AI so that such scenarios are stopped before they play out. The good news is that this is already happening on at least a small scale. Today, many people are already aware of the dangers that lace this wonderful new technology. That's thanks to reports like this! Thank you. The internet was challenging to regulate, too, but we managed to come up with a good enough system. I'm positive we'll do it again for AI. It'll likely never be conscious, but it's already dangerous. However, we are also already vigilant. As long as we continue to spread awareness about the realistic cons of AI and work towards fair and strong governance of it, in addition to prudent personal use, we should be fine. In my view, AI is not to be feared at all. It's very exciting, and its pros likely outweigh its cons. I'm looking forward to all the good that AI might bring.
youtube AI Governance 2025-05-30T20:5… ♥ 1
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwNX-bDTodhZMNuaWV4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxCA-FUf6_rGBAREtB4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugy1V58J02_jAgFjTlB4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugx_4tGPUnDCTDmZaCJ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugwuc4Bt7Ift_HjxvXp4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyhHaUc1wIGnDVuNWB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgzvVHZZOksz8EmE8el4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugzovbz4p4TOIK6gywF4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyISX1W-ibYnTQ-sK54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgwT0z0qNk7rEUFmLm94AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"}
]
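When inspecting raw LLM responses like the one above, a small validation pass can catch records where the model drifted outside the code book. The sketch below parses a raw response and flags unexpected values; the allowed-value sets are an assumption inferred only from the codes visible on this page, not the project's actual code book.

```python
import json

# Assumption: allowed values inferred from the responses shown above;
# the real code book may define more categories.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "distributed", "user"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "liability", "unclear", "ban", "regulate"},
    "emotion": {"indifference", "outrage", "fear", "mixed"},
}

def validate_coding(raw: str) -> list[dict]:
    """Parse a raw LLM response and list any out-of-code-book values."""
    records = json.loads(raw)
    problems = []
    for rec in records:
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                problems.append({"id": rec.get("id"), "dimension": dim, "value": value})
    return problems

# Hypothetical single-record response for illustration:
raw = '[{"id":"ytc_x","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}]'
print(validate_coding(raw))  # [] — every value is in the allowed sets
```

An empty result means the response can be loaded straight into the coding table; any flagged record should be re-coded or reviewed by hand.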