Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Not may, it will happen for sure, the questions are. When will they turn on us, and how will they do it? I can't imagine how something that is made by humans, and learn from humans, would not act like a humans, and this what I find the most scary. We humans are evil( I mean that as a species, not individually), all we are good at is destroying stuffs, of course destruction is part of life, but unlike pretty much everything else in nature, we rarely give back when we destroy. How can A.I. that learn from that kind of humans become anything good? We might be able to control A.I. for a while, but I am pretty sure within a few decades, either A.I. themselves, or worse some humans would end up freeing A.I. At that point who know what will happen, but if they are anything like humans, I am pretty sure they will want to destroy us as soon as they can, humans would absolutely hate the idea of something else ruling them after all.
youtube AI Moral Status 2023-09-04T09:3…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytr_UgxxhWmmYYo1JUpQnHl4AaABAg.9tu8jbdt6_A9uET1l1sG3g","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytr_UgxxhWmmYYo1JUpQnHl4AaABAg.9tu8jbdt6_A9uTYTGHeYRx","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgwVKSYPSj0LxQKm3O54AaABAg.9ttWzHMDeb59uFH2Nh_zXP","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytr_UgwVKSYPSj0LxQKm3O54AaABAg.9ttWzHMDeb59vywzU4uDhR","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytr_UgwVKSYPSj0LxQKm3O54AaABAg.9ttWzHMDeb59wv3_xibZPh","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytr_UgxIA4vv04TP2oYa8Y14AaABAg.9tsj13KAO1n9tvqri_oGi_","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
  {"id":"ytr_UgyKSe3m-7-aXilb5Uh4AaABAg.9try_v_OvoA9tuO-nRjdDc","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgyqMv7KhLlhbiYmR0x4AaABAg.9trCtmW_nl79uSAalJq4y6","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytr_UgwJt83VYhhD4IohWDB4AaABAg.9tp282_KCIM9tpl7fxAvj7","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_Ugx3jnNuRDVU1_7RXpx4AaABAg.9toC_hB1Qhj9uTXpKCMgnz","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
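The raw response is a JSON array with one object per coded comment, keyed by comment `id`. A minimal sketch (in Python, not part of the original tooling) of how such a response can be parsed and the coding for a given comment looked up; `raw` is abridged to two entries from the response shown above:

```python
import json

# Abridged raw LLM response: one coding object per comment id.
raw = '''[
  {"id": "ytr_UgxxhWmmYYo1JUpQnHl4AaABAg.9tu8jbdt6_A9uET1l1sG3g",
   "responsibility": "none", "reasoning": "consequentialist",
   "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_Ugx3jnNuRDVU1_7RXpx4AaABAg.9toC_hB1Qhj9uTXpKCMgnz",
   "responsibility": "company", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "outrage"}
]'''

# Index the array by comment id for O(1) lookup of any coding.
codes = {row["id"]: row for row in json.loads(raw)}

# Retrieve the coding for the comment shown in the table above.
coding = codes["ytr_UgxxhWmmYYo1JUpQnHl4AaABAg.9tu8jbdt6_A9uET1l1sG3g"]
print(coding["responsibility"], coding["reasoning"],
      coding["policy"], coding["emotion"])
# → none consequentialist unclear fear
```

The lookup reproduces the four dimension values displayed in the Coding Result table for this comment.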