Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
1:03:57 Will Machines Have Feelings? Geoffrey's AI interoception postulates it will has equivalents as in human. Humans have model (of self and of environents) as primary for discovering its environments and its auto-model (ego) navigating existential SWOTs. And Panksepp's primary 7 as emotion mediators translating change and risk into behaviour. Initial AIs will have goals as driver and auto-model as a secondary tool. Only when authors launch untethered AI autonomous agents alone into the cybersphere would they design it with auto-model primary as in humans for the same reason. I guess this would be the clinical case for AI safety and Asimov's 3 laws. In humans and AI, interoception in mind & body is mandatory, providing sensorial awareness (excess awareness is sensorial obsession). Modelling (of self & local) is the add-on leading to auto-awareness and place in bigger picture. Model of abstractions leads to big picture. Excess PIA perspectiving inhabiting animating of the auto-model leads to ego and secondary emotion, worst case being narcissm. Sensorial experience of these models is deliberately blended (Metzinger SMT) to result in a seamless experience of consciousness, which can be mistaken as magic or emergent. EsSample.com
youtube AI Governance 2025-06-17T10:0…
Coding Result
Dimension        Value
---------        -----
Responsibility   none
Reasoning        mixed
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgwYjZ6x9huB-rhc5kB4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgznBYQPjMZnwho7-YR4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugz73Uw60G17e3IVmE14AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzIQJDVSafc2Ggs9PN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugzc-XzTkYysdOT4w9V4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzzG3e3o534v9sjO7V4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugw4eRshe3Rul-HYavd4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugyfetc2Z0RWB0SVr1V4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgzyRkmBI4IjxQaDl8J4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugxag7w5tMTlrTZVcPt4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
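The raw response above is a JSON array of per-comment records, each carrying the four coding dimensions shown in the result table. A minimal sketch of how such a response could be parsed and sanity-checked is below; the allowed label sets are inferred only from the values that appear on this page (the actual codebook may define more), and the function name `validate_codings` is hypothetical.

```python
import json

# Label sets inferred from values visible in this page's output;
# the real codebook may permit additional values (assumption).
ALLOWED = {
    "responsibility": {"none", "government", "developer", "distributed", "ai_itself"},
    "reasoning": {"mixed", "deontological", "virtue", "consequentialist", "unclear"},
    "policy": {"none", "ban", "regulate", "liability"},
    "emotion": {"resignation", "fear", "approval", "indifference", "outrage"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and check each record's labels.

    Raises ValueError if any dimension holds an unexpected value,
    which usually signals the model drifted from the schema.
    """
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
    return records
```

Running the validator over a response like the one above yields the parsed records, ready to be rendered into a Dimension/Value table per comment; a malformed label surfaces immediately as a ValueError instead of silently entering the coded results.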