Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Back in the 1960s, MIT professor Joseph Weizenbaum wrote Eliza, a simple chatbot that mimicked Rogerian psychotherapy. He was astonished and alarmed to find that his secretary interacted with it as if it were a real person. He saw her response as worrying evidence that “extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.” Her reaction sowed the seeds of his later abhorrence of his own creation.
youtube AI Moral Status 2025-07-09T15:2…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       virtue
Policy          none
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[{"id":"ytc_Ugxe-NK5FYiXedakd7V4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgztSGVgRfi_eqX0bHB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgzcnIQXVQxt1IHokxB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgyXQEpDHNfEVxsDjbp4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"fear"},
 {"id":"ytc_UgxBjddEDn25jD7T8J14AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
 {"id":"ytc_Ugy0zsN4_qODNNXplsB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgzbBAbcxFAAYTX3giB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgyG3vkzxEdJDYqc_rl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"fear"},
 {"id":"ytc_Ugx-1_DBgt8aQYvFs5F4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"approval"},
 {"id":"ytc_UgwdNb6aT5UwPOQVrr54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}]
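The raw LLM response is a JSON array of per-comment codings, keyed by comment id. As a minimal sketch of how such a response can be parsed and a single comment's coding looked up (one entry reproduced verbatim from the array above; the id-to-comment mapping is assumed from the response itself):

```python
import json

# One entry of the raw LLM response, copied verbatim from the array above.
raw = ('[{"id":"ytc_UgyG3vkzxEdJDYqc_rl4AaABAg",'
      '"responsibility":"developer","reasoning":"virtue",'
      '"policy":"none","emotion":"fear"}]')

codings = json.loads(raw)          # list of dicts, one per coded comment
by_id = {c["id"]: c for c in codings}  # index by comment id for lookup

coding = by_id["ytc_UgyG3vkzxEdJDYqc_rl4AaABAg"]
print(coding["responsibility"], coding["reasoning"], coding["emotion"])
# → developer virtue fear
```

These four fields (responsibility, reasoning, policy, emotion) are the same dimensions displayed in the Coding Result table above.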