Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I wonder if the solution is to fight fire with fire, like give the alignment problem to AI, tell it what we're trying to do and let it combat itself, denies itself any work around it might find because it is instructed to make sure it's aligned
Source: YouTube · AI Moral Status · 2023-08-23T08:5…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugz8Y9PgDiCkVPxeU4F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgypRRiNL5Y87f-dCUR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugx0eJa37HDXuxw36U14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwYVrO_9UnIq3rFm2h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw30jGtF5E8WqxCHyp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwpJJVp4AYEJJuOIcR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyZdHhLsPdEdYVbfQd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyqgX8wec5DxT0XjKx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx1V9AGRFKCrNOgBR14AaABAg","responsibility":"government","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwjLETtCLI-pVT06y14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
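The raw response above is a JSON array, one object per coded comment, so any comment's coding can be looked up by its id. A minimal sketch of that lookup follows; the raw string is copied verbatim from the batch above, and matching the displayed comment to the id `ytc_UgwYVrO_9UnIq3rFm2h4AaABAg` is an inference from the Coding Result values (it is the only entry coded ai_itself / consequentialist / none / fear).

```python
import json

# Raw LLM response copied verbatim from the record above.
raw = """[ {"id":"ytc_Ugz8Y9PgDiCkVPxeU4F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgypRRiNL5Y87f-dCUR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}, {"id":"ytc_Ugx0eJa37HDXuxw36U14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwYVrO_9UnIq3rFm2h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugw30jGtF5E8WqxCHyp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgwpJJVp4AYEJJuOIcR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyZdHhLsPdEdYVbfQd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyqgX8wec5DxT0XjKx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugx1V9AGRFKCrNOgBR14AaABAg","responsibility":"government","reasoning":"unclear","policy":"none","emotion":"fear"}, {"id":"ytc_UgwjLETtCLI-pVT06y14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"} ]"""

# Parse the batch and index it by comment id for direct lookup.
codes = json.loads(raw)
by_id = {c["id"]: c for c in codes}

# Look up the coding for the comment shown above (id inferred, see lead-in).
record = by_id["ytc_UgwYVrO_9UnIq3rFm2h4AaABAg"]
print(record["responsibility"], record["reasoning"], record["policy"], record["emotion"])
# → ai_itself consequentialist none fear
```

The same index supports spot-checking the batch, e.g. confirming that all ten ids were returned and that each object carries the four expected dimensions.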