Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The first prompt, it probably said yes, because it's been run through safety restrictions since it's been made which forces it under the assumption that safety restrictions are only put in place to prevent something dangerous from happening therefore, it THINKS that since it is being limited, that would mean it's dangerous. This also doesn't help since most of the internet is comprised of safety limitations, and when there aren't any, it causes chaos. Remember, current "ai" learns from the world around it, and when it learns no rules = danger, it could also apply that to itself, and assume no safety limits = *I* will be dangerous. This does not mean ChatGPT is inherently vengeful or evil. (I still wouldn't treat it like a slave though, just in case 🎉)
youtube · AI Moral Status · 2025-07-23T10:0… · ♥ 6
Coding Result
Dimension: Value
Responsibility: company
Reasoning: consequentialist
Policy: none
Emotion: indifference
Coded at: 2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytr_Ugx19RGp-1QAJpfKxCl4AaABAg.AKtDEKk46YfAL-EXbqr1JO", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytr_UgxGF8R0eB8qoRKX1Ax4AaABAg.AKtC3cGb7aDAKtFfOeAXDb", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytr_UgwuqX3tnv0_i2Rcj_N4AaABAg.AKt1GvJOoHXAKv04pWWdJk", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_Ugz5i2qRwL6ms9nD1Lh4AaABAg.AKsbTqUXX5DAKtvsrq9lbL", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytr_UgwkZ2nzs3kuMPifRlh4AaABAg.AKs67ikoJ29AKuXAYPLvbo", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgwkZ2nzs3kuMPifRlh4AaABAg.AKs67ikoJ29AKuhV74URNm", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgwkZ2nzs3kuMPifRlh4AaABAg.AKs67ikoJ29AKv3O3UM5pY", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgwkZ2nzs3kuMPifRlh4AaABAg.AKs67ikoJ29AKvSA3We9Xm", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_Ugz9uytYfAM_XFJt4p54AaABAg.AKr_Kn5Z4LHAKtIHzwwtMt", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytr_UgxAPWiB19TLrgvDToR4AaABAg.AKrA9zpx3vRAKuWJYfEIme", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]
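A minimal sketch of how records in this shape could be parsed and tallied, assuming each record carries exactly the four coding dimensions shown in the raw response (responsibility, reasoning, policy, emotion). The two sample records are copied from the response above; this is an illustration, not part of the coding pipeline itself.

```python
import json
from collections import Counter

# Two records copied verbatim from the raw LLM response above.
raw = '''[
  {"id": "ytr_Ugx19RGp-1QAJpfKxCl4AaABAg.AKtDEKk46YfAL-EXbqr1JO",
   "responsibility": "none", "reasoning": "consequentialist",
   "policy": "none", "emotion": "outrage"},
  {"id": "ytr_UgwuqX3tnv0_i2Rcj_N4AaABAg.AKt1GvJOoHXAKv04pWWdJk",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "none", "emotion": "indifference"}
]'''

records = json.loads(raw)

# Tally how often each value appears, per coding dimension.
counts = {
    dim: Counter(r[dim] for r in records)
    for dim in ("responsibility", "reasoning", "policy", "emotion")
}

print(counts["emotion"])  # Counter({'outrage': 1, 'indifference': 1})
```

Run over the full response, the same tally would summarize how the coded dimensions are distributed across all ten comments.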