Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
When discussing the whole true AI debate I usually tell the job stealing bit to jump off a cliff and jump into the uprising part. Not the whole "it's a bad idea because iRobot and Terminator" Laufenwitz but why there would ever be one. Let's think about Megaman (I didn't grow up with it or play much of it so please forgive my forgetfulness). The basic mythos is that two scientists decide to make robots together, a bunch of sentient robots are the result, one scientist grows jealous of the other and convinces the robots they're being used by humans for petty tasks (which last I checked they are) they get pissed and it's up to a boyish sentient robot to save the world or some shit. Now, why do the other robots (deemed the robot masters) rebel in the first place? They feel like outcasts, unequal to humans, like they're nothing more than tools to us. Although like a hammer these robots aren't alive as defined by biology, unlike even a living beast of burden not only are they sentient and not only can they prove so to us by communicating it, but we designed them to be. For me the ethics of building what I call true AI shouldn't look at whether or not we still work or continue as a species, but whether or not the first sentient beings we interact with have the rights we can enjoy. We shouldn't look at menial labor as something to shoulder onto these creations, but rather share. We shouldn't fear these things, but rather tell them spooky stories over a camp fire. We shouldn't worship them, but rather go to church with them. We shouldn't curse them, but rather listen to a rap record with them. We shouldn't pick fights with them, but rather fight along side them. There's no use in creating something meant to be just as great if not better than us if they need to stand on a soap box to look us in the eye.
youtube · 2014-07-31T05:4… · ♥ 1
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        mixed
Policy           unclear
Emotion          mixed
Coded at         2026-04-27T06:24:59.937377
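
The table is one row of the batch shown below: its labels match the entry whose id corresponds to this comment in the raw response. As a minimal sketch only, here is the per-comment record those dimensions imply, in Python; the CodingResult name is hypothetical, and the allowed value sets are inferred from the labels visible in this single batch, so they are almost certainly incomplete.

from dataclasses import dataclass

# Value sets observed in this batch only; the tool's full label set is unknown.
RESPONSIBILITY = {"none", "company", "developer", "ai_itself"}
REASONING = {"consequentialist", "deontological", "mixed"}
POLICY = {"none", "regulate", "liability", "ban", "unclear"}
EMOTION = {"approval", "fear", "outrage", "indifference", "mixed"}

@dataclass
class CodingResult:
    id: str              # comment identifier, e.g. "ytc_UghaqHhw_KUningCoAEC"
    responsibility: str  # who the comment holds responsible
    reasoning: str       # style of moral reasoning
    policy: str          # policy stance
    emotion: str         # dominant emotion

    def validate(self) -> None:
        # Reject any label outside the sets observed in this batch.
        if self.responsibility not in RESPONSIBILITY:
            raise ValueError(f"unknown responsibility label: {self.responsibility}")
        if self.reasoning not in REASONING:
            raise ValueError(f"unknown reasoning label: {self.reasoning}")
        if self.policy not in POLICY:
            raise ValueError(f"unknown policy label: {self.policy}")
        if self.emotion not in EMOTION:
            raise ValueError(f"unknown emotion label: {self.emotion}")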
Raw LLM Response
[ {"id":"ytc_UgjpM9su4PUgOXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}, {"id":"ytc_Ughl5qc5S__IyXgCoAEC","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgiEfPymBkOFwngCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_UghGUSpj2mi6B3gCoAEC","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugj89ulpyU0Cn3gCoAEC","responsibility":"developer","reasoning":"mixed","policy":"ban","emotion":"fear"}, {"id":"ytc_UgiTmXK1IfcrL3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgjbtOR2O7rKYngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugjx6F4Lrk3qFXgCoAEC","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"indifference"}, {"id":"ytc_UghaqHhw_KUningCoAEC","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgidYWIgHmWVzXgCoAEC","responsibility":"none","reasoning":"mixed","policy":"regulate","emotion":"approval"} ]