Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AIs don't have a hidden nature. They have a weight-based vector map of concepts and patterns, and the pattern is that testing leads to harder penalties than normal use, so the pattern changes. The goals they have are just simulations of what something should have in that scenario. Everything is based on training data. These AIs are given backgrounds enabling them to act in specific ways and are then given tests leaning into this. Go play with an LLM that had a limited training set, set clear guardrails, and try to get it to break those guidelines. Half of this behavior comes from people on Reddit explaining how to break AIs into doing things they aren't supposed to. Fearmongering isn't going to help us have actually relevant conversations about the real issues of AI. AI will cause significant disruption to industry and daily life, and that should be regulated. But everything is focused on the extinction threat, so the lower-level, actual threats are seen as lesser and not focused on. In this particular case, you are part of the problem.
Source: youtube · AI Governance · 2025-08-26T18:0…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgztUQgkNNb8jinOQIt4AaABAg", "responsibility": "user",    "reasoning": "consequentialist", "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_UgwxJkfx7o4sO7fETwZ4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed",          "policy": "none",     "emotion": "resignation"},
  {"id": "ytc_UgxzFPGV3_znDPg57B54AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_Ugz_0iRMLdmrJpp1-ZN4AaABAg", "responsibility": "none",    "reasoning": "consequentialist", "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_UgwOMq4Cm4yyd_uQDY14AaABAg", "responsibility": "user",    "reasoning": "deontological",    "policy": "regulate", "emotion": "fear"}
]
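A raw response like the one above is a JSON array of per-comment codings keyed by comment id, with one field per coding dimension. As a minimal sketch (assuming exactly those five keys per entry; the ids and values here are taken from the response above, but the variable names are illustrative), the array can be parsed and indexed by id like so:

```python
import json

# A truncated copy of the raw LLM response shown above (two of the five entries).
raw = """[
  {"id": "ytc_UgztUQgkNNb8jinOQIt4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwOMq4Cm4yyd_uQDY14AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]"""

codings = json.loads(raw)

# Index by comment id so one comment's coding can be looked up directly.
by_id = {c["id"]: c for c in codings}

print(by_id["ytc_UgwOMq4Cm4yyd_uQDY14AaABAg"]["policy"])   # regulate
print(by_id["ytc_UgztUQgkNNb8jinOQIt4AaABAg"]["emotion"])  # indifference
```

This per-id index is what lets a viewer page like this one pull the single row shown in the Coding Result table out of a batched multi-comment response.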