Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I am not an epidemiologist, but: this is a fascinating result, because if true, it has a really interesting upshot: the Chinese government figures on coronavirus fatalities in Wuhan are actually **accurate**, or at least in the right ballpark. If, and I stress if, we can take the paper at face value: by the results from Katz et al., the virus has an IFR of 0.66%. 3.9% of Wuhan is about 0.039 times 11 million, or 429,000 people ever infected, which yields an expected mortality of 2,831 (0.0066 times 429,000); the actual death toll reported by China for Wuhan is 3,689, i.e. in the same ballpark!

Caveats:

1. I'm not sure how good this study is; not everything that passes peer review is correct.
2. I'm not sure how long detectable IgG antibodies last: I've seen papers reporting anywhere from about three months up to 223 days, but I simply don't know yet, so this could be an extreme undercount. On the other hand, samples were taken between March and May, so at least some antibodies should have been preserved given that the peak was in Jan-Feb. If we could see a time series of when people tested positive on the IgG assay, we could tentatively correct for this.
3. I'm unsure how representative this sampling of the Wuhan population was: for example, the paper says that none of the participants had a known history of COVID-19, so could this be a selected, mostly healthy population, and thus an artificially low seroprevalence?
4. With a claimed seroprevalence this low, did the authors check for Bayesian-style false positives or negatives that could have skewed the result?

Regardless, none of these errors would by themselves be able to hide a massive death toll on the scale seen in, say, New York (+27,000 excess deaths), so the only inference left is that the death toll from Wuhan is as honest as they were trying to make it.
reddit Cross-Cultural 1603529947.0 ♥ 2
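The comment's arithmetic, together with its fourth caveat about test error, can be sketched as a quick calculation. The population, seroprevalence, IFR, and reported death toll below are the figures quoted in the comment; the assay sensitivity/specificity values are hypothetical (not from the paper), and the Rogan-Gladen correction is one standard way to address the false-positive concern, named here as an illustration:

```python
# Back-of-the-envelope check of the arithmetic in the comment above.
wuhan_population = 11_000_000
seroprevalence = 0.039      # 3.9% IgG-positive in the cited study
ifr = 0.0066                # 0.66% IFR from Katz et al., as cited
reported_deaths = 3_689     # official Wuhan figure quoted in the comment

ever_infected = wuhan_population * seroprevalence    # ~429,000
expected_deaths = ever_infected * ifr                # ~2,831

print(f"expected deaths ~ {expected_deaths:,.0f} vs reported {reported_deaths:,}")

# Caveat 4: with prevalence this low, test error matters. The Rogan-Gladen
# estimator recovers the true prevalence implied by an apparent prevalence
# and the test's sensitivity/specificity.
def rogan_gladen(apparent_prev: float, sensitivity: float, specificity: float) -> float:
    """True prevalence implied by apparent prevalence and test accuracy."""
    return (apparent_prev + specificity - 1) / (sensitivity + specificity - 1)

# Hypothetical assay accuracy (NOT reported in the paper): even at 99%
# specificity, a 3.9% apparent prevalence shifts noticeably.
adjusted = rogan_gladen(seroprevalence, sensitivity=0.90, specificity=0.99)
print(f"adjusted prevalence ~ {adjusted:.2%}")
```

The point of the second half is that at single-digit seroprevalence, a percentage point of lost specificity moves the estimate materially, which is exactly why caveat 4 matters.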
Coding Result
Dimension      | Value
Responsibility | none
Reasoning      | unclear
Policy         | none
Emotion        | indifference
Coded at       | 2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id":"rdc_g9vpp0b","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_g9vraql","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_g9w9xoq","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"rdc_g9wge0b","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"rdc_g9t4nh1","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]
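The raw response is a JSON array with one object per coded comment. A minimal sketch of how such a batch could be tallied, using only the field names that appear in the response itself (the `raw` string below reproduces that response verbatim):

```python
import json
from collections import Counter

# The raw LLM response shown above, verbatim.
raw = '''[ {"id":"rdc_g9vpp0b","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"rdc_g9vraql","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"rdc_g9w9xoq","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}, {"id":"rdc_g9wge0b","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"rdc_g9t4nh1","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"} ]'''

records = json.loads(raw)

# Tally each coded dimension across the batch of five comments.
emotions = Counter(r["emotion"] for r in records)
reasoning = Counter(r["reasoning"] for r in records)

print(emotions)    # outrage appears twice; the other emotions once each
print(reasoning)   # consequentialist: 3, unclear: 2
```

Each object's `id` matches a coded comment, so a tally like this is a quick consistency check that the per-comment table above agrees with the raw batch output.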