Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
even if we consider an ai to be concious, that has certain implications about how we should treat the ai, which may have some relevance, ie if we cause some 'psychological' damage to it, that may impact its function in the future, but as long as you can revert the ai to a previous state, this damage becomes essentially irrelevant, the difference with humans and animals being that you can't revert them back to a state before any damage
YouTube · AI Moral Status · 2024-06-09T09:1…
Coding Result
Responsibility: none
Reasoning: deontological
Policy: none
Emotion: indifference
Coded at: 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyZX-6c3TbS4CCas4Z4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugwi1Ms13N8U_gVSPyB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzWNdY4FwBQoUCS0j14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugwn-G_2ql-D-9Oee6h4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw2TCbb-9xaUdmXaKR4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyXyLk0Kcft6V46FWZ4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugx3r-ZLheBX-HK3OFJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzFAsteUOX71yUD2Mp4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyZFVurZwWb_aIsnPx4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxCdtMz2uVPbTxGWh94AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"}
]
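The raw response is a JSON array of per-comment codings, keyed by comment id. A minimal sketch of how such a batch could be parsed and looked up by id, assuming only the four dimensions shown above (the `parse_codings` helper and the validation rule are illustrative, not part of the pipeline):

```python
import json

# Abridged to two entries from the raw response above; the full batch has ten.
raw = '''[
  {"id": "ytc_UgyZX-6c3TbS4CCas4Z4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugw2TCbb-9xaUdmXaKR4AaABAg", "responsibility": "none",
   "reasoning": "deontological", "policy": "none", "emotion": "indifference"}
]'''

# Keys every coding row must carry, per the response format shown above.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(raw_response: str) -> dict:
    """Parse the model output and index codings by comment id,
    dropping malformed rows rather than failing the whole batch."""
    codings = {}
    for row in json.loads(raw_response):
        if isinstance(row, dict) and REQUIRED_KEYS <= row.keys():
            codings[row["id"]] = row
    return codings

codings = parse_codings(raw)
print(codings["ytc_Ugw2TCbb-9xaUdmXaKR4AaABAg"]["reasoning"])  # deontological
```

Indexing by id makes the per-comment view above a single dictionary lookup; skipping malformed rows (rather than raising) is one plausible policy for tolerating occasional LLM formatting slips.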