Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Regarding simulation theory… As a “Theory of Everything” (TOE) it amounts to an infinite-stack-of-turtles argument. Because, if *our* universe is a simulation running on some computer in some universe that contains us, then wouldn’t the entity that created the simulation look around and conclude *their* universe must be a simulation? And the universe the next level up, same thing. And on and on to infinity. Computer scientists seem to love simulation theory but it isn’t grounded in anything. I am willing to accept that our universe is *computational*, by which I mean that its laws are best described by a computer program rather than the traditional equations that physicists use. I.e. the universe could be viewed as a giant computer. But that doesn’t imply there is some outer universe that built the computer. Everything in our universe could have *evolved* by some generalized version of Darwinian evolution. Perhaps our universe is a giant living organism and we are living organisms contained in it. Perhaps the Big Bang was a *birth* event and the expansion of the universe is the growth of a living universe. In this view the development of AI represents evolution of humans into something else which is sort of analogous to the way ants evolved into ant colonies, with the AI we are developing being the mind of the human colony. That’s my theory!
youtube AI Governance 2025-09-07T08:2…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugy4qYwTBOOtfkLEZo14AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_Ugzjhs6z0UigGOpa8nt4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none",     "emotion": "mixed"},
  {"id": "ytc_UgzaTrxrzl0f18KiP5N4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_UgxQbxUofOl8ohlt2894AaABAg", "responsibility": "company",   "reasoning": "consequentialist", "policy": "ban",      "emotion": "outrage"},
  {"id": "ytc_UgzMkPjVKuDiJ3zesjp4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",     "emotion": "resignation"},
  {"id": "ytc_UgyLwxHA4cO2XhRvUEt4AaABAg", "responsibility": "user",      "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxWYiFqbnbsX1x5XAp4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_Ugzj1Tx360M1iJtGnO14AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",     "emotion": "mixed"},
  {"id": "ytc_Ugx69p2HChSqNYl8i-t4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_UgxHyIhYc7CP-ktC5754AaABAg", "responsibility": "ai_itself", "reasoning": "deontological",    "policy": "none",     "emotion": "fear"}
]
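A raw response like the one above is a JSON array of per-comment records, each carrying the comment id plus the four coding dimensions (responsibility, reasoning, policy, emotion). The sketch below shows one minimal way to parse and validate such a response before tabulating it; `parse_codings` and `DIMENSIONS` are illustrative names for this example, not part of the actual pipeline, and the two records are copied from the response shown above.

```python
import json

# Two records copied verbatim from the raw LLM response above.
raw_response = '''
[
  {"id": "ytc_Ugy4qYwTBOOtfkLEZo14AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxQbxUofOl8ohlt2894AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"}
]
'''

# The four coding dimensions every record must carry.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codings(text: str) -> dict[str, dict[str, str]]:
    """Parse a raw response and index the codings by comment id.

    Raises ValueError if a record is missing its id or any dimension,
    so malformed model output is caught before it reaches a table.
    """
    records = json.loads(text)
    out = {}
    for rec in records:
        if "id" not in rec or any(d not in rec for d in DIMENSIONS):
            raise ValueError(f"malformed record: {rec!r}")
        out[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return out

codings = parse_codings(raw_response)
print(codings["ytc_Ugy4qYwTBOOtfkLEZo14AaABAg"]["emotion"])  # indifference
```

Indexing by id makes it cheap to look up the coding for any single comment, which is exactly the lookup the "Coding Result" table above performs for the quoted comment.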