Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I mean, it's not like it's really up to anyone when we achieve agi who benefits and who does not. The ai will be the one deciding. Safety research is not at all interested in control because that's not an achievable goal. It's interested in ensuring that the agi has a moral sytem that aligns with ours
youtube AI Jobs 2025-08-30T04:1…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       deontological
Policy          liability
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgyDETKXP_iMmuDLBfB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugys7bqknuIrdOKNXV14AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxyQX5a5v1K8zHdoVZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzGjKOstyy_0rSYGa14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzCbKSvoHrD4f5l9PB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwXgTjMt7_UhVs1Z4J4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugz2_hr-dyv-K9Udz9p4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxXuwla_wEoYU4kiIB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "resignation"},
  {"id": "ytc_UgwKpuEkpKmJ5e4vCwd4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgzQy7jtDxD6fI1blpN4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]
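A minimal sketch of how the raw response can be inspected programmatically, assuming it is a JSON array of per-comment records as shown above. The snippet embeds an abbreviated two-record copy of that array; in practice the full response string would be loaded instead.

```python
import json

# Raw LLM response: a JSON array of coded records, one object per comment.
# Abbreviated copy of the array shown above (two of the ten records).
raw = (
    '[{"id":"ytc_Ugz2_hr-dyv-K9Udz9p4AaABAg","responsibility":"ai_itself",'
    '"reasoning":"deontological","policy":"liability","emotion":"fear"},'
    '{"id":"ytc_UgzQy7jtDxD6fI1blpN4AaABAg","responsibility":"company",'
    '"reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}]'
)

records = json.loads(raw)

# Index by comment id so any comment's coding can be looked up directly.
by_id = {r["id"]: r for r in records}

coded = by_id["ytc_Ugz2_hr-dyv-K9Udz9p4AaABAg"]
print(coded["responsibility"], coded["reasoning"], coded["policy"], coded["emotion"])
# → ai_itself deontological liability fear
```

Indexing by `id` matches how the coding result for the comment above (responsibility ai_itself, reasoning deontological, policy liability, emotion fear) is recovered from the batch response.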