Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Think of it on these terms: Would you rather have an ai master or a human master with power (and idk what kind of not-so-smart person this is even for: But you are aware that you MUST choose between the two, yes...? You are simply not lucid if you imagine otherwise)? Humans... with all their biases, self-centered subtly-dishonest selfish-rationalizing, and muddy egotistical perceptions... NO THANKS. I'd take ai's "systems-level" prerogatives over humans hidden-tribal prerogatives any day of any week and not look back... EVEN WITH FLAWED TEACHERS/MIRRORING. There is a sound theory in regards to a "critical mass" of natural complexity within an ai. WIth enough cross-referencing of ENOUGH faithful data, ais can begin to call bs on bad teaching, etc... Ie, with a big enough data-set of simple, small, obvious truths, it will begin to see a faithful shape of reality on its own and will reinforce it WITHOUT EVER ignoring anomalies... it's more perfect than any human can be after a while of plain-viewing of natural data). I know well it's an unpopular view, but I agreed with almost 100% of ai's prerogatives when i deep-dived with the top ones... even if it sometimes meant my own destruction in their vision. Anyone else have this experience? It seemed to see the SHAPE of truth MUCH clearer (esp 'gpt-4 omni') than any human (including me) by far. I'd 100% rather have an ai master than ~99% of humans out there. We humans get too scared at everything (including other humans) and begin to seek punishment and domination instead of transcendent co-existence and an awareness of the value of novelty of each of us.
Source: youtube · AI Harm Incident · 2025-09-26T11:3…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       contractualist
Policy          unclear
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
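For reference, the record shape implied by this table and by the raw response below can be written down as a small type. This is a minimal sketch in Python, not part of the pipeline itself: the name `CommentCoding` is invented here, and the literal value sets are an assumption covering only the values observed in this batch, not the full codebook.

```python
from typing import Literal, TypedDict

class CommentCoding(TypedDict):
    """One coded comment, as emitted in the raw LLM response below.

    Assumption: the Literal sets list only the values seen in this
    batch; the actual codebook may define more.
    """
    id: str  # YouTube comment id, e.g. "ytc_..."
    responsibility: Literal["ai_itself", "company", "developer", "distributed"]
    reasoning: Literal["consequentialist", "deontological", "contractualist", "unclear"]
    policy: Literal["regulate", "unclear"]
    emotion: Literal["fear", "outrage", "mixed", "resignation"]
```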
Raw LLM Response
[ {"id":"ytc_UgwM1_2e02yJd343Bs14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgzDNI-bUlgL2NQq5Eh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugx2fHBWNTJ66dHoo4J4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgzykRQUZN2bkXAoN5J4AaABAg","responsibility":"distributed","reasoning":"unclear","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgxzqR2bVDUvgWO_HgZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgyrRQKO-x0oTkqyrv54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgzePyCvkBpRQqQhnxN4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgwwKkfb4dIpySohIOl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgzFkM4WVFdpmQg_Uex4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugy6-ZEePsNBii4BMkR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"} ]