Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
the intro to this video would be more accurately stated like so: "When you give the AI text that is similar to text that implies that someone is being tested, it then predicts words that correlate with that situation, which results it in seeming to act differently when it thinks it is being tested". It isn't deciding to do that in any sense, or protecting itself, its just spitting back the best predictions from its dataset.
YouTube · AI Moral Status · 2026-03-05T18:2…
Coding Result
| Dimension | Value |
| --- | --- |
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
  {"id": "ytc_Ugwf9Kz7sT9gtY4YFR54AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzdaHoYulxloyUa2Ch4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxMwHAZHVCDLaG8vjp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy2DYaKswcxRPVHsoN4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxGv9V7hv5RsDVDgpx4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwQFStAGEEf__byhDN4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgynrCrCB7FX5WEA8Px4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxR7GwiOxNQFQpW9-B4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgydJQYfs3RHBVxJ6oV4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzXbR-sEM10sFNx5BR4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"}
]
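A raw response like the one above can be checked programmatically before its rows are accepted into the coding results. The sketch below is a minimal Python example, assuming the four-dimension schema shown on this page; the allowed category values are inferred only from the codes visible here, so the real codebook may permit more.

```python
import json

# Allowed values per dimension, inferred from the codes visible on this
# page (assumption: the actual codebook may define additional categories).
SCHEMA = {
    "responsibility": {"none", "company", "government", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "unclear"},
    "emotion": {"indifference", "fear", "mixed", "outrage", "approval"},
}

def parse_raw_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only schema-valid rows.

    Rows missing an "id", or carrying a value outside the allowed set
    for any dimension, are silently dropped.
    """
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not isinstance(row, dict) or "id" not in row:
            continue
        if all(row.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(row)
    return valid
```

Dropping invalid rows (rather than raising) keeps a single malformed entry in the model output from discarding the whole batch; a stricter pipeline might instead log or re-prompt on violations.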