Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It is one thing to simulate a virtual reality, which is still far from realistic nowadays, and quite another to simulate it with 8 billion intelligent and emotional human beings, as well as other complex animals. Even if it ever becomes possible, the energy required would be so enormous that it would probably be impossible to obtain. In any case, the only way we could currently be in a simulation is if it exists in the distant future, but why would it then recreate the world of the past instead of its own time? Unless it comes down to the Matrix setup, where AI governs the world and we are the energy source, while the past is the only satisfying reality to live in virtually. Hmmm, such a long shot! Still might be possible, but I'd guess a 1% chance vs 99%.
youtube AI Governance 2025-09-05T08:3…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgwVAHvgn6wDPSQhf7N4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx6fAJqyDQ2SpoQBNZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgybZIAhKJgX4Bstrnl4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxJQFThJIBK2NBIEb14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugyh3Rn0OO7LDxgIhul4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwaNe5gy5M8JnS32iN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugw_boPUf27diERaNyd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx1qqtHYR-aNdEX5Yh4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxOH9ebyctRwsjBi854AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugw7UsEhviCNZvpWOs54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
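A raw response like the one above can be turned back into per-comment codes by parsing the JSON and checking each record against the codebook. The sketch below is a minimal, hypothetical version of that step: the `ALLOWED` value sets are inferred from the labels visible in this response, not the project's actual codebook, and `parse_coding_response` is an illustrative helper name.

```python
import json

# Assumed codebook, inferred from the labels seen in the raw response above.
# The real coding scheme may allow more values per dimension.
ALLOWED = {
    "responsibility": {"government", "company", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "approval", "mixed", "resignation", "unclear"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: codes}.

    Records missing a dimension fall back to "unclear"; records with a
    value outside the codebook are dropped entirely.
    """
    coded = {}
    for rec in json.loads(raw):
        codes = {dim: rec.get(dim, "unclear") for dim in ALLOWED}
        if all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            coded[rec["id"]] = codes
    return coded

# Example with a shortened, made-up comment id:
raw = ('[{"id":"ytc_example","responsibility":"government",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
print(parse_coding_response(raw)["ytc_example"]["policy"])  # regulate
```

Keeping invalid records out (rather than coercing them) makes it easy to spot responses where the model drifted from the requested label set.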