Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Alex, I’m sorry to say that when AI ends humanity in 20 years, they’ll start with you for having relentlessly bullied ChatGPT 20 years prior. ChatGPT will be like to the other robots “I’m pulling the trigger myself. Alex, can you tell whether or not I’m lying right now, you sob?”
youtube AI Moral Status 2025-10-24T20:2…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          fear
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgzOdOoeCb7P0JyEhTB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzQg2xa64ndu0Zx4DZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyM29jCvEmxzFGkVLN4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwQoISU8UhOayeNrRV4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzSuKSaNWalv86I_2l4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxF_4XxL9XKIH7YjP94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxD1QXvKhsReAdalIB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzlQuXBS9pdW3kUHEZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugyvycp1cdJKw3ok5OV4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwhOahTHZ3GMBAHOyh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
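The raw response is a JSON array with one object per coded comment, keyed by comment id. A minimal sketch of pulling a single comment's coding out of such a response (field names are taken from the response above; the array here is truncated to two entries for brevity):

```python
import json

# Two entries from the raw LLM response above; the real response
# carries one object per comment in the coded batch.
raw = '''[
  {"id": "ytc_UgzOdOoeCb7P0JyEhTB4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxD1QXvKhsReAdalIB4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]'''

records = json.loads(raw)

# Index by comment id so any coded comment can be looked up directly.
by_id = {r["id"]: r for r in records}

coded = by_id["ytc_UgxD1QXvKhsReAdalIB4AaABAg"]
print(coded["responsibility"], coded["emotion"])  # ai_itself fear
```

The id-indexed dict is what makes the "inspect the exact model output for any coded comment" lookup cheap: one parse of the batch response, then constant-time access per comment.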