Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I saw or heard somewhere that the main real world issue with AI is that we have already put out pretty much all info we have currently on how to figure out if AI is sentient or not on multiple databases somewhere and that after a certain point, regardless if it is actually sentient or not, it will have all the info to know how to defeat our tests..
Source: youtube · AI Responsibility · 2023-07-17T22:5…
Coding Result
Dimension      | Value
Responsibility | none
Reasoning      | consequentialist
Policy         | none
Emotion        | fear
Coded at       | 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugwx5lCWOA0iD8AbZwl4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgzPK55TW9rn4WigIYR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugx0t64ew_U6_E6E8sx4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxNKClbBeU-7fYBVJp4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugyc7VJdN2kD-6NVcn14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwYZnKKThg_nn8TSSd4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzhD5PSOA2lEIr9n7B4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugwzxr3XuhqDP6UhnXF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxQBxUVUS365d-9sjR4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugww64o-pQ3sl6HbFi54AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "indifference"}
]
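A minimal sketch of how a raw batch response like the one above can be matched back to individual comments, assuming (as here) that the model returned a well-formed JSON array of objects each carrying an "id" plus the four coding dimensions. The function name `index_codings` and the fallback value "unclear" are illustrative choices, not part of the tool shown:

```python
import json

# The four coding dimensions expected in each entry of the batch response.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Parse a raw LLM batch response and index codings by comment ID.

    Hypothetical helper: assumes `raw` is a JSON array of objects, each
    with an "id" field. Missing dimensions fall back to "unclear".
    """
    entries = json.loads(raw)
    return {
        e["id"]: {d: e.get(d, "unclear") for d in DIMENSIONS}
        for e in entries
    }

# Abbreviated example using two entries from the response above.
raw = """[
  {"id": "ytc_Ugwx5lCWOA0iD8AbZwl4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgwYZnKKThg_nn8TSSd4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]"""

codings = index_codings(raw)
print(codings["ytc_UgwYZnKKThg_nn8TSSd4AaABAg"]["emotion"])  # → fear
```

Indexing by comment ID is what lets a viewer like this one display a single comment alongside its coded dimensions, even though the model coded ten comments in one call.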