Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
https://www.youtube.com/watch?v=ykhKHl7SxRM I’m a clinician and not an AI expert but currently completing a Masters in AI, and this episode with Professor Hinton felt both revelatory and strangely comforting. One question I used to ponder growing up without ever finding a satisfying answer, was whether trees feel emotions or care for other trees or care even across species. At 27:40 (Alps programme-see link at the start) onwards, finally gave voice to something I’ve intuitively felt but never been able to articulate: that trees do communicate, respond and in some sense, exhibit forms of collective care, protective and some sort of emotion. It wasn’t that the evidence didn’t exist before, it’s that we didn’t know how to look or what to look for. This reframes how I now think about consciousness in AI. Perhaps the question isn’t if machines can ever reason or possess consciousness in the way we currently define it, but rather whether we’re missing new forms of intelligence that don’t map neatly onto human categories. Just as the emotional interconnectivity of trees was always there, requiring a shift in perspective and the right methods to uncover it, it’s possible that the building blocks of machine consciousness or at least machine intuition and emergent reasoning may become clearer in retrospect. We may be living through the early root system of a new kind of sentience. Thank you for making these ideas more accessible and provoking a line of thought that stretches far beyond code and neurons. This is what meaningful science communication looks like. Fascinating talk!
Source: youtube · Topic: AI Governance · Posted: 2025-07-10T19:0…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           unclear
Emotion          approval
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugw5EWFvhkSeSj526hB4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "unclear",  "emotion": "approval"},
  {"id": "ytc_UgwUqV2FBdjn3s5sAjV4AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "none",     "emotion": "fear"},
  {"id": "ytc_Ugyaerjl2ScACRdrcxt4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "unclear",  "emotion": "mixed"},
  {"id": "ytc_UgyWfJmnh8a1ukRdN4J4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear",  "emotion": "resignation"},
  {"id": "ytc_UgwCi_itcHGrqSHmR9B4AaABAg", "responsibility": "none",        "reasoning": "virtue",           "policy": "unclear",  "emotion": "approval"},
  {"id": "ytc_Ugx97YQBzVZrZFjymMJ4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxM7A_NAtDUnUwUpqV4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "unclear",  "emotion": "indifference"},
  {"id": "ytc_UgxILpeyjj0KQiLivyV4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "unclear",  "emotion": "mixed"},
  {"id": "ytc_UgzWuECOSpZl2RCJJXh4AaABAg", "responsibility": "ai_itself",   "reasoning": "deontological",    "policy": "none",     "emotion": "outrage"},
  {"id": "ytc_UgydE9NMfW-pTwbnFW14AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear",  "emotion": "fear"}
]
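The raw response above is a JSON array with one coding object per comment, keyed by comment id. A minimal sketch of how such a response could be parsed and indexed for lookup (the ids and field names are taken from the output above; the variable names and lookup step are illustrative assumptions, not part of any documented pipeline):

```python
import json

# Excerpt of the raw LLM response shown above: a JSON array of
# per-comment codings with the four coding dimensions.
raw = '''[
  {"id":"ytc_Ugw5EWFvhkSeSj526hB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugx97YQBzVZrZFjymMJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]'''

# Index the codings by comment id for direct lookup.
codings = {row["id"]: row for row in json.loads(raw)}

# Retrieve the coding for the comment shown on this page.
coding = codings["ytc_Ugw5EWFvhkSeSj526hB4AaABAg"]
print(coding["emotion"])  # approval
```

This mirrors the "Coding Result" table above: each JSON object carries the same four dimensions (responsibility, reasoning, policy, emotion), so the table row for a comment is simply its entry in the parsed array.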