Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
As long as inhibitors on the ai embodiment prevent the understanding or learning of violent actions/behaviors we should be alright. But then the argument could be made that that isn't a true AI unit, so in the end we are inevitably left having to take that chance that the AI unit may end up determining that humans are to be exterminated.
youtube 2017-11-28T00:3…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_UgxQw7YfMcOhg6zyCbR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgyZKKVQOOweXnuzyGR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"}, {"id":"ytc_UgxMvnr5ixkjJGjQTgp4AaABAg","responsibility":"elites","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugy1DPqIDMxmnKwbdc94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzQi1gAtvrINJhUugx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"}, {"id":"ytc_UgzA4QfdzSS2WxK1u6l4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugz3r5ONYiHca-oWIdd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_UghmHBsOLD4fY3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"approval"}, {"id":"ytc_UghA24C7Vxvn43gCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgjnnBXlqmRuLHgCoAEC","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"} ]