Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The main concern with bioterrorism when it comes to LLMs is their ability to assist a moderately technical person in synthesizing a pandemic-grade, lethal pathogen with lab equipment that can be bought on the cheap. This is something machine learning algorithms are well suited to, as evidenced by the success of programs like AlphaFold in predicting protein structures from amino acid sequences. With the right hardware and a local setup, a malicious actor could theoretically get a jailbroken frontier model to provide step-by-step instructions for synthesizing a biological agent today. And, unlike with nuclear weapons, the associated costs and technical barriers will only continue to fall over time, even if we do somehow manage to regulate compute the way we regulated fissile material.
reddit · AI Governance · 2026-02-12 (1770928048) · ♥ 1
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_o50nb5q", "responsibility": "company",   "reasoning": "virtue",          "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_o51l7fw", "responsibility": "none",      "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"},
  {"id": "rdc_oa4057u", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "rdc_oabz523", "responsibility": "user",      "reasoning": "unclear",          "policy": "unclear",   "emotion": "indifference"},
  {"id": "rdc_oa0gx99", "responsibility": "ai_itself", "reasoning": "unclear",          "policy": "unclear",   "emotion": "mixed"}
]
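The model returns one batch response covering several comments, so the coding result shown above has to be matched out of the JSON array by record id. A minimal sketch of that lookup, assuming the comment above corresponds to id "rdc_o51l7fw" (inferred from the fact that its dimension values match the displayed coding result):

```python
import json

# Raw batch response copied verbatim from the model output above.
raw = '''[
  {"id":"rdc_o50nb5q","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"rdc_o51l7fw","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"rdc_oa4057u","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"rdc_oabz523","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_oa0gx99","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]'''

records = json.loads(raw)

# Pull the record for the comment displayed on this page. The id here is an
# assumption; in the real pipeline it would come from the comment's metadata.
coded = next(r for r in records if r["id"] == "rdc_o51l7fw")
print(coded["policy"])   # regulate
print(coded["emotion"])  # fear
```

In practice a lookup dict keyed by id (`{r["id"]: r for r in records}`) scales better than a linear scan when the batch is large, and a `KeyError`/`StopIteration` on a missing id is worth catching, since models occasionally drop or mangle ids in batch responses.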