Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@InfinitaCity I'm not sure if I agree, but if I did, would you not concede that it completely depends on the capabilities or risk of the technology? For example, the safety requirements for a kitchen fork should be low because its capability for harm is low, whereas the safety requirements and security classification for a gain of function lab should be extremely high due the capability for harm. Likewise, a technology that can help terrorists to create high level gain of function labs (as Anthropic spoke about) should also be heavily regulated
youtube AI Governance 2023-08-20T15:3…
Coding Result
Dimension        Value
Responsibility   company
Reasoning        consequentialist
Policy           liability
Emotion          indifference
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytr_UgxClpKCOca9l72V6wN4AaABAg.9pVxfb4Xki_9pX2CRyoLzz", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_Ugxpmtj0TS1YrNnBVzp4AaABAg.9pVxZtplBtm9pW8vSBIlj-", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytr_UgzU4TtPOomA81vHOlZ4AaABAg.9pVwdDd9EtM9pXYDO0bwkj", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_Ugxhapbt_9wMHv4IR0t4AaABAg.9t_q1vcLTxO9tdVLdtki49", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "indifference"},
  {"id": "ytr_Ugy6gczKkWt9D7andhx4AaABAg.9tT64Hf1EkB9tTfUBI5b7g", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "approval"},
  {"id": "ytr_Ugy6gczKkWt9D7andhx4AaABAg.9tT64Hf1EkB9tXrFBpA8WG", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_Ugy6gczKkWt9D7andhx4AaABAg.9tT64Hf1EkB9tYZwygn3Qp", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytr_UgxakbXidzyURU6t2Gh4AaABAg.AU4cyREleFbAU5KLeLQhZ8", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgwAqLk9Mv-hiFAAxGx4AaABAg.AU2qxnHl8g4AU39zLZ3EnC", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytr_UgwAqLk9Mv-hiFAAxGx4AaABAg.AU2qxnHl8g4AU5_LMJLQza", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"}
]
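The coding result above is recovered from the raw response by matching on the comment's id. A minimal sketch of that lookup in Python, assuming the response is the JSON array shown (the function name `coding_for` and the `DIMENSIONS` tuple are illustrative, not part of the pipeline):

```python
import json

# Excerpt of the raw LLM response: the record whose id matches the
# comment shown above (full response is the ten-element array).
raw = '''[
  {"id": "ytr_Ugxhapbt_9wMHv4IR0t4AaABAg.9t_q1vcLTxO9tdVLdtki49",
   "responsibility": "company", "reasoning": "consequentialist",
   "policy": "liability", "emotion": "indifference"}
]'''

# The four coded dimensions reported in the table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def coding_for(raw_json: str, comment_id: str) -> dict:
    """Return the coded dimensions for one comment id, or raise KeyError."""
    for record in json.loads(raw_json):
        if record["id"] == comment_id:
            return {dim: record[dim] for dim in DIMENSIONS}
    raise KeyError(comment_id)

coding = coding_for(raw, "ytr_Ugxhapbt_9wMHv4IR0t4AaABAg.9t_q1vcLTxO9tdVLdtki49")
# coding == {"responsibility": "company", "reasoning": "consequentialist",
#            "policy": "liability", "emotion": "indifference"}
```

An unmatched id raises `KeyError` rather than returning a partial result, which makes missing codings easy to detect when walking a batch of comments.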