Raw LLM Responses

Inspect the exact model output for each coded comment.

Comment
Here is my opinion on AI safety. We have already hit the point where red tape has been cut for developers, these companies can do WHATEVER they want. They might get a fine here and there, so governments look like they are taking action against their evil deeds but nothing that threatens their existence knowing at the end of the tunnel is trillions in government investment. Governments see it as necessary nuisance to obtain safety for their nation. Imagine being a trillion $ company and receiving fines in the millions. To the average folk that does not understand the gap between million and trillion it sounds like not that much BUT to really understand this, It would be like receiving a 1 cent fine for speeding if you had $1000's to your name (according to AI source: GPT-5). These companies do not care at all and honestly have no need to because the ramifications for breaking the rules are so low in comparison to the potential rewards. The only time safety will be considered IMO is after a war unfortunately. The way i see it - It is a race between China and America and the one that does not get there will have to attack the other in some way to get access or destroy the self-thinking AI.
youtube AI Governance 2025-09-04T13:2…
Coding Result
Dimension        Value
Responsibility   company
Reasoning        deontological
Policy           regulate
Emotion          outrage
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzbgvpBaPRFx81dyex4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwKGGfzsGQUL9_xWah4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzKSwdwS4m4XzR_L2t4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxO26DzUkiwLD6JHoF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugwx2jvN4Ngg-5jsKcd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwWD6iaJTTZqizYRPh4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugwfjlf2DncbdIqfVbN4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwdY4DclgB0ywPcJNN4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyMMX92wq-wDxUz9ht4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgzgWfVsVa3fUTN5VVF4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "resignation"}
]
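A response like the one above can be parsed and validated before the per-dimension values are accepted into the coding result. The sketch below is a minimal, hypothetical validator: the allowed value sets are inferred only from the labels that appear in this export (the real codebook may include others), and the `parse_coded_response` helper name is our own, not part of any tool shown here.

```python
import json

# Allowed label sets per dimension, inferred from this export
# (hypothetical; the actual codebook may define more values).
ALLOWED = {
    "responsibility": {"company", "developer", "user", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed", "resignation"},
}

def parse_coded_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and check every record's labels."""
    records = json.loads(raw)
    for rec in records:
        # Comment ids in this export all carry the "ytc_" prefix.
        if not rec.get("id", "").startswith("ytc_"):
            raise ValueError(f"unexpected id: {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim}={rec.get(dim)!r}")
    return records

# Example: the first record from the response above.
raw = ('[{"id":"ytc_UgzbgvpBaPRFx81dyex4AaABAg","responsibility":"company",'
       '"reasoning":"deontological","policy":"regulate","emotion":"outrage"}]')
records = parse_coded_response(raw)
print(records[0]["emotion"])  # → outrage
```

Validating eagerly like this surfaces off-codebook labels (a common LLM failure mode) at ingest time rather than at analysis time.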