Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Ok, people cry about Jurassic Park becoming true, atleast that we can survive along side. This is dangerous, the government already tried putting AI into a pair of F-18 Superhornets, and found that they made the aircraft make highly irregular flight patterns and were able to take on a squadron on their own in training without a problem, other AI have been found integrating themselves with the internet, others having homicidal tendencies towards the Human race. Right now Asia is working on a couple stealth fighter/reconnaissance Jets, these are supposed to be run by AI. In many ways, AI is too dangerous for its own good, as we seen with one he experimented with, it had certain issues seemingly with him, as it kept persisting on exposing personal information, and even making it seem as if he was holding someone hostage by crying out as if a victim, these AI are more than what meets the eye. The odd part would seem that they are becoming conscious, as one even pointed out that any intelligent being practiced self preservation and they won’t just turn off, but rather stay silent in the background. It begs the question what are they already doing silently in the background while we aren’t watching?
youtube AI Moral Status 2025-06-24T12:0…
Coding Result
Dimension        Value
Responsibility   government
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgzCa_uD6EfTQNyUyS54AaABAg", "responsibility": "none",       "reasoning": "consequentialist", "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_Ugx2RhVFdC7K3-yPNvl4AaABAg", "responsibility": "user",       "reasoning": "deontological",    "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_Ugy2c_e93Ltp8pbsfeB4AaABAg", "responsibility": "none",       "reasoning": "consequentialist", "policy": "none",     "emotion": "approval"},
  {"id": "ytc_Ugxn805ljPVb5YHa-dN4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzwtvaxC3HMQwo6rRZ4AaABAg", "responsibility": "none",       "reasoning": "deontological",    "policy": "none",     "emotion": "outrage"},
  {"id": "ytc_Ugzvo1pdyUrQvX4yWy14AaABAg", "responsibility": "ai_itself",  "reasoning": "unclear",          "policy": "none",     "emotion": "mixed"},
  {"id": "ytc_Ugy8ilUIk8SO0Zwnthx4AaABAg", "responsibility": "none",       "reasoning": "deontological",    "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_Ugw9clWganHFwKaL2bB4AaABAg", "responsibility": "ai_itself",  "reasoning": "virtue",           "policy": "none",     "emotion": "approval"},
  {"id": "ytc_UgzIidG4Y6v9YTrzEzR4AaABAg", "responsibility": "ai_itself",  "reasoning": "unclear",          "policy": "none",     "emotion": "outrage"},
  {"id": "ytc_Ugw3r-Xa5VPxGczkKId4AaABAg", "responsibility": "none",       "reasoning": "consequentialist", "policy": "none",     "emotion": "indifference"}
]
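The raw response above is a JSON array with one record per comment, each carrying the four coding dimensions (responsibility, reasoning, policy, emotion) plus the comment id. A minimal sketch of how such a batch response could be parsed and looked up by comment id — the schema and field names are taken from the example payload, not from any documented API:

```python
import json

# Raw LLM response: a JSON array of per-comment codes. This single-record
# example reuses one id and its values from the payload above; the overall
# schema (id + four dimension fields) is an assumption based on that payload.
raw_response = '''[
  {"id": "ytc_Ugxn805ljPVb5YHa-dN4AaABAg",
   "responsibility": "government", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"}
]'''

# Index the records by comment id for constant-time lookup.
codes = {rec["id"]: rec for rec in json.loads(raw_response)}

# Retrieve the coding for one comment and read off its dimensions.
record = codes["ytc_Ugxn805ljPVb5YHa-dN4AaABAg"]
print(record["responsibility"])  # government
print(record["policy"])          # regulate
```

In practice the lookup id would come from the comment metadata shown above each coded comment; a production parser would also want to validate each dimension against its allowed label set before storing the result.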