Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think, overall, everything is complicated. People want everything to be black or white, good or evil, right or wrong. AI can go just as wrong and just as right as anything else people use. If used responsibly then AI will greatly benefit humanity. If used irresponsibly than it could harm humanity. This is true of most things. There are exceptions but that's all they are. the only reason AI is so distrusted right now is because it's still a work in progress. So many things can go wrong, too many people can abuse it and as such there aren't enough rules to help regulate it properly. Even then an AI can break and something bad could happen when something good should have. No different than us driving cars. When used responsibly it gets us to where we are going in a quick and relatively safe manner. When used irresponsibly people get hurt or worse. Even then people make mistakes and parts break leading to injury or death. Just because something bad could happen doesn't mean we shouldn't embrace and improve the technology. " But what about a Skynet situation " I hear you cry. I'm pretty sure if that ever happens it will happen with a sex robot revolution or they will attack us with stick and spears because statistically most human wars have been won with them and our superior guns will stop the robot wars before it becomes a problem.
youtube AI Jobs 2024-04-17T20:2…
Coding Result
Dimension        Value
Responsibility   user
Reasoning        mixed
Policy           none
Emotion          approval
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgyKkXsC3eePytUMNs54AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugxv5xtbPK0Kjm5D5zB4AaABAg", "responsibility": "user", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwnEScy_kG3dAu2iOV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyeK_5F5frU3_9P_x14AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwvGx7aLRZFsHMkzLt4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgyZVdkQn9mtiyq0KVR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugxf6ED0teqYDaOTLBl4AaABAg", "responsibility": "user", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwJOeSkgUdsKcWGs6N4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxfSMNy3DXzHtqMJHx4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugy7j4j3AZyqb-Is6Ap4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
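A raw response in this shape can be parsed and looked up per comment id. Below is a minimal Python sketch (the function and variable names are illustrative, not part of the tool); the two records in the sample are copied verbatim from the response above, and the dimension list reflects the four coding dimensions shown in the result table.

```python
import json

# Two records copied verbatim from the raw LLM response above.
raw = '''[
  {"id":"ytc_UgyKkXsC3eePytUMNs54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxv5xtbPK0Kjm5D5zB4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"approval"}
]'''

# The four coding dimensions displayed in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(raw_json: str) -> dict:
    """Parse the raw response and index records by comment id,
    checking that every record carries all four dimensions."""
    records = json.loads(raw_json)
    for rec in records:
        missing = [d for d in DIMENSIONS if d not in rec]
        if missing:
            raise ValueError(f"record {rec.get('id')} is missing {missing}")
    return {rec["id"]: rec for rec in records}

coded = index_by_id(raw)
print(coded["ytc_Ugxv5xtbPK0Kjm5D5zB4AaABAg"]["emotion"])  # prints "approval"
```

The id lookup makes it easy to cross-check a displayed coding (like the table above) against the exact model output, and the dimension check surfaces malformed records before they reach the dashboard.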