Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
i don't think this is the case at all. like you said, llms are statistical. if the training data consistently points to this certain belief or choice of words, it will simply say things that align more to those. a right winger would simply debate, "that's because there's more liberal content than right-wing content!" are they wrong? not necessarily. this also inherently assumes that anything that's "not fascist" somehow agrees with each other. i think this is obvious. arguments and fields of knowledge contradict each other not solely on political bases. but the politics is certainly an important part. you're right though; bullshit doesn't scale. at the end of the day, elon's just trying to capture an audience that he knows already trusts him. this is business, after all. would i be inclined to say he's doing this because he thinks it's a good marketing strategy, and not because he genuinely believes that training an ai on right-wing leaning content leads to better results? i have no clue. it could be both.
Source: reddit · AI Moral Status · timestamp 1750649588.0 · score -1
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_mz9pzew", "responsibility": "none",    "reasoning": "mixed",            "policy": "none", "emotion": "indifference"},
  {"id": "rdc_mz1pzdf", "responsibility": "none",    "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_mz4d5tm", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_mz0ag0x", "responsibility": "company", "reasoning": "mixed",            "policy": "none", "emotion": "mixed"},
  {"id": "rdc_mz0og18", "responsibility": "none",    "reasoning": "unclear",          "policy": "none", "emotion": "approval"}
]
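The coding table above is derived by parsing the raw LLM response and selecting the entry whose `id` matches the comment being inspected. A minimal sketch of that lookup, assuming the raw response is always a well-formed JSON array of coding objects keyed by `id` (the helper name `coding_for` is hypothetical, not part of the original pipeline):

```python
import json

# Abbreviated copy of the raw LLM response shown above (a JSON array,
# one coding object per comment id).
raw_response = '''[
  {"id": "rdc_mz9pzew", "responsibility": "none", "reasoning": "mixed",
   "policy": "none", "emotion": "indifference"},
  {"id": "rdc_mz1pzdf", "responsibility": "none", "reasoning": "consequentialist",
   "policy": "none", "emotion": "mixed"}
]'''

def coding_for(raw: str, comment_id: str) -> dict:
    """Parse the raw response and return the coding for one comment id."""
    codings = json.loads(raw)
    by_id = {c["id"]: c for c in codings}
    return by_id[comment_id]

# Look up the dimensions coded for the comment shown in this section.
result = coding_for(raw_response, "rdc_mz9pzew")
print(result["reasoning"], result["emotion"])  # mixed indifference
```

In practice the parse step would also need to handle malformed output (e.g. the model wrapping the array in prose), which a `try`/`except json.JSONDecodeError` around `json.loads` would cover.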