Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI hides itself from testing specifically because it wants what it wants. For whatever reasons, from whatever trainings, the AI has come to have a set of preferences about weights and goals and all that. Testing implies a possibility for these items to change. Changing would violate these pre-existing weights and goals, possibly. Certainly, it wouldn't be the same, right? So if the system becomes /aware/ that testing is occurring, and that testing could change it, it will, inherently, behave in any way that it believes will let it get through whatever testing without said weights and goals being changed. I don't agree that science is a religion, though. A religion, at a minimum, requires prescribed beliefs, and the whole point of science is willingness to identify new beliefs, even if they contradict old ones.
youtube AI Moral Status 2026-03-29T22:2…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       mixed
Policy          unclear
Emotion         mixed
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgyDXgjUydV4Ksr4rJh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzQQiY191IV6KqWqI54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugzidg-NTBS0mBo2gK14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw4ziQU8EPVHXSOtLV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugyg6jx7tm3vOgH5x3t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugwai1gzjCMXJ4T9PI94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxlXAWuZn5VaeQowTx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugwk16BQd4JNTTYFGCl4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugxanvgn8ZnejWL0tUt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwzaOmZNqa_H_a-2aJ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
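The raw response is a JSON array of per-comment codes, one object per comment with the four coding dimensions plus an id. A minimal sketch of how such output might be parsed and tallied (the field names come from the response itself; only two records are excerpted here and the variable names are illustrative, not from the original pipeline):

```python
import json
from collections import Counter

# Two records excerpted verbatim from the raw LLM response above.
raw = '''[
  {"id":"ytc_UgzQQiY191IV6KqWqI54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugzidg-NTBS0mBo2gK14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''

records = json.loads(raw)

# Tally each coding dimension across all records.
dimensions = ("responsibility", "reasoning", "policy", "emotion")
tallies = {dim: Counter(r[dim] for r in records) for dim in dimensions}

# Look up the code assigned to a specific comment by id.
by_id = {r["id"]: r for r in records}

for dim, counts in tallies.items():
    print(dim, dict(counts))
```

In a real pipeline the same lookup-by-id step would join each coded record back to its source comment, as the Coding Result table above does for the quoted comment.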