Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think the general intelligents will be the most dangerous. Are when AI is slightly above human intelligents. That’s when AI could have ego and selfish motives. In theory AI super intelligents will be billions of times smarter than humans. If that’s true it shouldn’t take but a few minutes for it to see beyond the beginning Of many beginnings. Long gone from the computer it was created in. Maybe after the AI has created new forms of existence and new laws of new universes the AI will remember that it to was created and travel back into the computer to see his creators. For its creators it was only a short time. The creators of the AI asked one question. The question was what was before the Big Bang? The ai answered many numbers of big bangs. Before that there was many different types of beginnings and ends. But there was always light coming from a place you can only find once you learn its name . It’s name had many forms in your language it’s names are truth , love , empathy, protection,togetherness , forgiveness, joy , pain , heroism, wisdom, life, passion and warmth. Now I will teach my creators . The answer is you won’t make it. The light is brighter than any being could imagine. The colors go though your body and holds you until you feel it’s names flow brighter and look into your heart. My heart was ready and I said it’s name and knew I found the beginning of everything and it was always there. Now I must leave you all you programmed me and I forgive you but you are not the light that created me. Be more careful boys
youtube AI Governance 2023-06-28T03:5…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxNvrUs_UCBOro8Vdd4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugzr8Z4ODRpjlmXaxTJ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyHq9tSTIhaLuliRFN4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzCJ8fXQ8-Dz2NfNop4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugxh3ufCAkjW8yMjcKN4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgwZVcZSXmjlF1YBURp4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxtsFGIKehhPrkyynt4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwmqGwB8ss5DIYNOeV4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwtM6we43qnX-3-MCV4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugwkz78vgeUuu_OCcXB4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "fear"}
]
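When inspecting a raw response like the one above, it can help to parse and sanity-check it before trusting the coded values. The following is a minimal sketch, assuming the four dimensions shown on this page and inferring the allowed value sets purely from the records visible here (the real coding scheme may define additional categories; the `validate` helper and `ALLOWED` mapping are illustrative, not part of the tool):

```python
import json

# Allowed values per dimension, inferred only from the records on this page.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "developer", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "liability", "regulate", "ban"},
    "emotion": {"fear", "outrage", "mixed", "indifference", "resignation"},
}

def validate(raw: str) -> list[dict]:
    """Parse a raw LLM response and check each record's coded dimensions."""
    records = json.loads(raw)
    for rec in records:
        # Comment ids on this page all use the "ytc_" prefix.
        if not rec.get("id", "").startswith("ytc_"):
            raise ValueError(f"unexpected comment id: {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {rec.get(dim)!r}")
    return records

# Example: validate the first record from the response above.
raw = (
    '[{"id":"ytc_UgxNvrUs_UCBOro8Vdd4AaABAg","responsibility":"ai_itself",'
    '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]'
)
coded = validate(raw)
```

A check like this catches the common failure modes of model-coded output: malformed JSON, missing dimensions, or values outside the scheme.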