Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Formulations like "a desire for self-preservation" completely misrepresent AIs as thinking beings with desires. These are just stacks of parameters derived from a bunch of data. If this data includes characters blackmailing or killing for their own survival, we of course also guide learning toward this behavior. Or in other words, if we feed the robot the movie 2001, then of course it is able to recreate that. I agree with the resulting statement that AI can be extremely dangerous, but this kind of misrepresentation damages public understanding of the actual technology we are talking about.
youtube AI Governance 2025-09-03T14:1… ♥ 1
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          unclear
Emotion         indifference
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id":"ytc_UgxalyBWu2-UkVd0Xjl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugz9AswBv3eRDcbaUlN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz7S3R4JCwv1x5Tjcd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_Ugyot88g981IvRMMRrR4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgzRDoHFy7wPhrTPxL14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
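To check a coded comment against the raw model output, the response can be parsed and indexed by comment id. A minimal sketch in Python, assuming the response is valid JSON; the ids and values are copied from the response above, and the looked-up id is the one coded for the displayed comment:

```python
import json

# Raw LLM response, copied verbatim from the response shown above.
raw = """[
  {"id":"ytc_UgxalyBWu2-UkVd0Xjl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugz9AswBv3eRDcbaUlN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz7S3R4JCwv1x5Tjcd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_Ugyot88g981IvRMMRrR4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgzRDoHFy7wPhrTPxL14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]"""

# Index the coded entries by comment id for direct lookup.
codes = {entry["id"]: entry for entry in json.loads(raw)}

# The id coded for the comment displayed above.
result = codes["ytc_Ugz9AswBv3eRDcbaUlN4AaABAg"]
print(result["responsibility"], result["emotion"])  # → developer indifference
```

The printed values match the Coding Result table, confirming that the stored codes were taken from this entry of the raw response.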