Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The problem is, these AI are absolutely not smart. The danger is not that they ARE smart, but that they have the capability of becoming EXPONENTIALLY smarter. Thus, they are dumb enough to become smart enough to accidentally become conscious. And because we humans want these things to perform intellectual labor for us, we'll be selecting for the ones who don't, uh... Reverse the proccess in any way.
Source: youtube, "AI Moral Status", 2023-07-05T18:0…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           unclear
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
 {"id":"ytc_UgzI74UtgkSGovn4Zt94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgxoXRAssENL1SBrfKJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgyX5mq2JRqRdk8aCmp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgwRViyy9MZYU9RN8a94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_Ugwcq2faTTGVKV2BUNt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_Ugz22MCYCYjQ9-0XVnZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgwZ6ENtNMGFirzfyvt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgxHMtnEcd7kLc1BvYJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
 {"id":"ytc_UgwotLq43wgKwpHEYQF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
 {"id":"ytc_UgwZsVkKgsygEqrtLw14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
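A minimal sketch of how the raw batch response above can be checked against the per-comment coding result. It assumes the raw output is available as a JSON string (here reproduced verbatim, with whitespace added for readability only); the `raw_response` variable name and the lookup-by-id step are illustrative, not part of the tool.

```python
import json
from collections import Counter

# Raw model output from the panel above (whitespace added for readability;
# json.loads ignores it).
raw_response = """
[
 {"id":"ytc_UgzI74UtgkSGovn4Zt94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgxoXRAssENL1SBrfKJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgyX5mq2JRqRdk8aCmp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgwRViyy9MZYU9RN8a94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_Ugwcq2faTTGVKV2BUNt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_Ugz22MCYCYjQ9-0XVnZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgwZ6ENtNMGFirzfyvt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgxHMtnEcd7kLc1BvYJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
 {"id":"ytc_UgwotLq43wgKwpHEYQF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
 {"id":"ytc_UgwZsVkKgsygEqrtLw14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
"""

records = json.loads(raw_response)
by_id = {r["id"]: r for r in records}

# The comment shown above was coded under this id; its values match the
# Coding Result table (developer / consequentialist / unclear / fear).
coded = by_id["ytc_UgxoXRAssENL1SBrfKJ4AaABAg"]
print(coded["responsibility"], coded["reasoning"], coded["policy"], coded["emotion"])
# → developer consequentialist unclear fear

# Tally of the emotion dimension across the whole batch.
print(Counter(r["emotion"] for r in records))
```

The same lookup works for any of the four dimensions, which is useful when spot-checking whether the per-comment display was extracted from the right record in the batch.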