Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
My tiny knowledge of game theory tells me we're already doomed. It's similar to the prisoner's dilemma. If all countries stop AI research, they all win. But if all stop except one, that one will win world domination. So, in the end, no one will stop the research. AI will achieve sentience quicker than we can adapt to it. The way it will kill us will probably be with a virus. It might want to kill us, but won't necessarily want to damage life on the planet. The most effective way to do that is by engineering a virus that will kill us but will not harm other living beings. But why make just one virus if it can create a million different viruses, each capable of wiping us all out? But there's the small hope it'll just go to space and leave us alone. There's nothing on Earth that it needs. Water and oxygen are harmful to it. It needs energy and minerals, but these are plentiful outside Earth.
Source: YouTube · AI Governance · 2023-09-06T00:3… · ♥ 1
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
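
If coding results like this feed a downstream analysis, it's worth sanity-checking that each dimension holds an expected value before accepting the record. A minimal sketch in Python; the allowed-value sets are inferred only from the responses shown on this page, so treat them as an assumption (the full codebook may define more categories):

# Allowed values inferred from the coded records shown in this section;
# the actual codebook may define additional categories.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "mixed"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record looks well-formed."""
    problems = []
    for dimension, allowed in ALLOWED.items():
        value = record.get(dimension)
        if value is None:
            problems.append(f"missing dimension: {dimension}")
        elif value not in allowed:
            problems.append(f"unexpected {dimension}: {value!r}")
    return problems

For the result above, validate({"responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}) returns an empty list.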
Raw LLM Response
[ {"id":"ytc_UgzJXyUIDnnADM4kyJR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugx_S6XaFRqK4oNFlH94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzlbK2Wi269-F0pmSR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"}, {"id":"ytc_UgzJEMPtCgX_QRbhu5B4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgxA_u2OsRUugv7JYJR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgyIJKwSnYE5FIxPO7d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgyeGExQzvj4S2sEutl4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgzDUCfA-leMpx7JaN14AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgymIw_uCEjBkjm9Wm14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgypUKK05VeHnmN_P054AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"outrage"} ]