Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Loved the episode, absolutely loved the outro, thank you! 🤟. One more suggestion - I am not sure in how far you are familiar with the concept of "oneness" in spirituality (which, as you are Trekkies, is in some ways quite similar to Odo's "Great Link"), or Deepak Choprah's book "Quantum Healing", which explains scientifically how humans are actually NOT entities with clearly defined boundaries. This is, on the soul level, is also echoed by Indian philosophy of the Vedas. If you take all this together, you could argue that an advanced AI is more like an enlightened human, who may not be all that bothered about the outer world and therefore about having rights. The rights issue may then be more and more important, at least in certain aspects, the more unenlightened and disconnected the human gets (echoing the point about fragility). And I think the multi-dimensional aspect also makes sense, especially also including animals and even plants in the scale of personhood discussed, as in some ways they are superior to humans, and like animals, plants have shown a certain awareness and communicative abilities (eg a wood's mycelium, which will connect huge areas and far outlast any human). So, it would make sense to have a multi-dimensional model, which accounts more for fragility and disconnectedness than superiority (i.e. opposite to how well tend you think, but which was actually mentioned in your discussion, which I thought was an interesting point). Just a thought to expand on the topic.
youtube 2026-02-07T20:4…
Coding Result
Responsibility: none
Reasoning: unclear
Policy: unclear
Emotion: approval
Coded at: 2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgwVPpHZBl-g2O0zYjl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgycayRBbLUkRy-pznZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxV1wiSeLORV3C3LB14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugz5Khqxpj6CGqhcFSd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugwo1lha1845-sZGrSp4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwV2FdXB2IuN5rbaSB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwqAyYP0AmHtWgcLAp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugxx0mp61Dud664ncUh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzhsGcQPu3SqSgaqPB4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgzTCiB5Pw5aSUWE9Wl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
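Since the raw response is a JSON array keyed by comment id, the coding for any single comment can be pulled out programmatically. The sketch below is a minimal, hypothetical helper (not part of the coding pipeline itself); the `raw` string holds two example entries copied from the response above, and `coding_for` is an assumed name for illustration.

```python
import json

# Two example entries copied verbatim from the raw LLM response above.
raw = '''[
  {"id":"ytc_Ugz5Khqxpj6CGqhcFSd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugwo1lha1845-sZGrSp4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''

def coding_for(raw_response, comment_id):
    """Return the coding dict for one comment id, or None if absent."""
    for entry in json.loads(raw_response):
        if entry["id"] == comment_id:
            return entry
    return None

print(coding_for(raw, "ytc_Ugz5Khqxpj6CGqhcFSd4AaABAg")["emotion"])  # approval
```

A lookup like this also makes it easy to cross-check the rendered Coding Result against the raw output, e.g. confirming the `approval` emotion shown above matches the model's JSON.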