Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
1. intelligence, _"the ability to acquire, understand, and use knowledge,"_ * nope, consciousness not required * AIs clearly exhibit intelligence (they acquire knowledge by training the LLM, they use it in their responses, and the better ones are progressively more adept at self-regulating responses in a way that eliminates wrong responses which is functionally identical to understanding) 2. He later says the phrase, _"you've got to be conscious of them,"_ a phrase which suggests he wasn't talking about consciousness (in the typical sense) but only meaning understanding, which clearly these models are exhibiting. The more context and training data they get, the better the results. I've seen models provide amazing subtle details that even a human writer would fail to include (and of course I've seen the same model make big errors most human writers would've caught) Now personally I think there could be real limits to where the technology goes. Some may require different ways of thinking (in the literal sense of changing the methods by which the LLM processes its information like how a MixtureOfExperts approach allows multiple AIs trained in different modes of thinking to evaluate each others' responses to sanity-check and improve the final result), allowing us to pass that limit. Others may actually be real, hard limits to the tech. Others may be soft limits (ie we could process better, but we'd kinda like to avoid burning the earth up with data centers so we decide to self-impose limitations). But I get the sense Penrose doesn't really have terribly well-formed opinions on the subject and seems to treat consciousness a bit more magically than it ought to be treated (I'm saying this more from other earlier talks I've heard from him on the subject).
youtube AI Moral Status 2025-09-08T12:1…
Coding Result
| Dimension      | Value                      |
|----------------|----------------------------|
| Responsibility | none                       |
| Reasoning      | consequentialist           |
| Policy         | none                       |
| Emotion        | indifference               |
| Coded at       | 2026-04-26T23:09:12.988011 |
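A coding result is one record over four categorical dimensions plus a timestamp. As a minimal sketch, the record could be modeled like this in Python; the value sets below are inferred only from the sample raw response in this section, and the full codebook may define more labels, so treat them as assumptions:

```python
from dataclasses import dataclass
from datetime import datetime

# Value sets inferred from the sample raw response below; the real
# codebook may allow additional labels (assumption, not the full scheme).
RESPONSIBILITY = {"none", "ai_itself", "developer", "company"}
REASONING = {"deontological", "consequentialist", "unclear"}
POLICY = {"none", "liability"}
EMOTION = {"indifference", "fear", "mixed", "approval", "outrage"}

@dataclass
class CodingResult:
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: datetime

    def validate(self) -> None:
        # Reject labels outside the (assumed) value sets.
        if self.responsibility not in RESPONSIBILITY:
            raise ValueError(f"bad responsibility: {self.responsibility}")
        if self.reasoning not in REASONING:
            raise ValueError(f"bad reasoning: {self.reasoning}")
        if self.policy not in POLICY:
            raise ValueError(f"bad policy: {self.policy}")
        if self.emotion not in EMOTION:
            raise ValueError(f"bad emotion: {self.emotion}")
```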
Raw LLM Response
[ {"id":"ytc_Ugym6D5q0uDm3m-fOnV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugz5xZZmDATgKE3spFJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgwCN15o8Tig9Y1s2kJ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwU2hTA-LWMau6Ftgx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwSmPauLBQg_hEpFrh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgyFZQn1F9t_ZCHg8Kh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyJQ1K-C0_nnTcmY3t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugyv8aUdQIrVcrfihfV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"}, {"id":"ytc_UgxcXxvk4iCCsujEG9l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugwz-A1nm9KmW8KBLhN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"} ]