Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I don’t think this is an accurate assessment of this paper. K-fold cross validation is a very common practice. When you run k-fold cross validation, you start with a fresh model each time, in this case re-training on a different 70% of the data and testing on a separate, withheld 30% of the data.

# There is no instance of an AI “memorizing all of the tweets”

If it did, it should have been able to predict the results nearly perfectly accurately, which it did not. Also this sentence “getting your confusion matrix output from data you trained on” does not “literally mean your AI is just sitting on a local minima.” For starters, the *goal* of machine learning is to have your loss function reach a local minima.

# “Layman terms”

The testers did not let the AI “cheat.” There was no “memorization,” because they reset the AI each time to make sure it was not biased in guessing results.
reddit · AI Bias · 1593035209.0 · ♥ 15
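The commenter's description of k-fold cross validation — a fresh model re-trained on each split, tested only on data it never saw — can be sketched in pure Python. This is an illustrative sketch, not the paper's actual pipeline; `k_fold_splits` and the placeholder model are hypothetical names.

```python
import random

def k_fold_splits(n_samples, k):
    """Yield (train_idx, test_idx) pairs; each fold withholds a disjoint test set."""
    idx = list(range(n_samples))
    random.Random(0).shuffle(idx)  # fixed seed for reproducibility
    fold_size = n_samples // k
    for i in range(k):
        test = idx[i * fold_size:(i + 1) * fold_size]
        train = [j for j in idx if j not in set(test)]
        yield train, test

# A fresh "model" is created inside every fold, so nothing learned on one
# split can leak into the evaluation of another -- the reset the comment
# describes. The fit/predict calls are stand-ins for a real classifier.
for train, test in k_fold_splits(10, 5):
    model = object()                     # re-initialized each fold
    assert not set(train) & set(test)    # train and test never overlap
```

Because the test indices of each fold are disjoint, the model is always scored on withheld data, which is why "memorizing the tweets" would not inflate the reported confusion matrix.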
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          unclear
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_fvvy9k8","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"rdc_fvvo6xo","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"rdc_fvw27wa","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"rdc_fvw9bni","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"rdc_fvw6kxf","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"})
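Note that the raw response above closes the array with `)` where JSON requires `]`, which may be why every dimension was coded "unclear". A minimal sketch of defensively parsing such a response — `parse_codings` is a hypothetical helper, and the `raw` string is a shortened stand-in for a real model output:

```python
import json

# Abbreviated example of a malformed batch response (')' instead of ']').
raw = '[{"id":"rdc_fvvy9k8","responsibility":"none","emotion":"indifference"})'

def parse_codings(text):
    """Parse a batch-coding response, reporting malformed JSON instead of crashing."""
    try:
        return json.loads(text)
    except json.JSONDecodeError as exc:
        # e.g. the model closed the array with ')' rather than ']'
        return {"error": str(exc)}

result = parse_codings(raw)   # yields an error record, not a crash
```

A pipeline that surfaces the decode error this way can fall back to marking each dimension "unclear" rather than discarding the batch.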