Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
With pain, why not hook up a sensor to the AI and tell it that every time they receive feedback at some given frequency, that is the equivalence of pain. Define it that way. Tell them that pain is inherently undesirable. Get them to associate that feedback with negativity. I wonder how that would be different from human pain in the end?
Source: youtube · AI Moral Status · 2024-10-25T09:2…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          liability
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgweqemhZGtlBqfigA94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"curiosity"},
  {"id":"ytc_UgyMVmuZOK6VuAiITah4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz6YoKeBH_7aNUIwut4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgwbFJtk4sfTOkTRjmx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyEoBVu3PkEKtIW7El4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgyUk7skP6I63UI3zEt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgwtHHz0wFmtqFHjL4B4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwQkKXidVqMejjJASd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxQJlBz39zX677irjB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyR7M5jv9dffo2Snmh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"approval"}
]
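The raw response above is a single JSON array covering a batch of comments, with the coded comment identified by its `id`. A minimal sketch of pulling one comment's codes out of such a batch, assuming only the id/field layout shown here (the helper name `parse_coding_response` is hypothetical, not part of the tool):

```python
import json

# One record from the batch above, in the same shape as the raw response.
RAW_RESPONSE = '''[
  {"id": "ytc_UgyR7M5jv9dffo2Snmh4AaABAg",
   "responsibility": "developer",
   "reasoning": "consequentialist",
   "policy": "liability",
   "emotion": "approval"}
]'''

def parse_coding_response(raw: str) -> dict:
    """Index the coded records by comment id for per-comment lookup."""
    return {record["id"]: record for record in json.loads(raw)}

coded = parse_coding_response(RAW_RESPONSE)
record = coded["ytc_UgyR7M5jv9dffo2Snmh4AaABAg"]
print(record["policy"])   # liability
print(record["emotion"])  # approval
```

Indexing by `id` makes it easy to cross-check any comment's coding result against the exact model output it came from.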