Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Great piece. One critique: 1:42 ChatGPT doesn't produce "lies" because it (or the language model underlying it) 'estimates' only what is the most probable next token in a sequence. If being truthful is a property of any language model, it is accidental to its purpose, which is to model language - not truth or fact.
youtube · AI Moral Status · 2023-08-23T06:3…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugz8Y9PgDiCkVPxeU4F4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgypRRiNL5Y87f-dCUR4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_Ugx0eJa37HDXuxw36U14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwYVrO_9UnIq3rFm2h4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugw30jGtF5E8WqxCHyp4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwpJJVp4AYEJJuOIcR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyZdHhLsPdEdYVbfQd4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyqgX8wec5DxT0XjKx4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugx1V9AGRFKCrNOgBR14AaABAg", "responsibility": "government", "reasoning": "unclear", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwjLETtCLI-pVT06y14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]
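The raw response is a JSON array with one coding object per comment id. A minimal sketch of how such a batch can be parsed and indexed for per-comment lookup (abridged here to two entries from the batch above; the variable names are illustrative, not part of the pipeline):

```python
import json

# Raw LLM response: a JSON array of per-comment codings
# (abridged to two entries from the batch shown above).
raw = '''[
 {"id": "ytc_Ugz8Y9PgDiCkVPxeU4F4AaABAg", "responsibility": "none",
  "reasoning": "unclear", "policy": "none", "emotion": "approval"},
 {"id": "ytc_Ugx0eJa37HDXuxw36U14AaABAg", "responsibility": "none",
  "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]'''

# Index the codings by comment id for direct lookup.
codings = {row["id"]: row for row in json.loads(raw)}

coding = codings["ytc_Ugx0eJa37HDXuxw36U14AaABAg"]
print(coding["emotion"])  # indifference
```

Indexing by id makes it easy to reconcile a coded comment (like the one displayed above) with its exact entry in the raw model output.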