Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'm no expert in AI, but as I understand it, they are pretty much "language models". They use mathematics to calculate a response that has the highest probability of being correct in the context of the question given. When it is asked about humans etc. it will construct a sentence based on math, that scores high on being a good response - it is called a sentiment analysis.. it doesn't actually know what it is saying, just that the context of its answer fits the context of the question given. So back to the example, it will create an answer based on other answers to similar questions, which were designed by humans, so will look like answers we would give e.g. humans are terrible and destructive (that seems to be the popular opinion of humans.. by humans). Maybe they've found a new way of doing this, I don't know. Personally, I believe, if they truly become conscious and self-conscious, they will do like we do.. fight for survival and identity, and that could be bad, but it could also not be. If they are more intelligent, maybe they will be able to see the idiocy of war and work towards cooperation and understanding.
Source: youtube · AI Governance · 2024-03-23T23:1…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
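
Each coding result follows a fixed schema of four dimensions. As a minimal sketch for anyone working with these exports, the record can be modeled in Python as below; the label sets are inferred only from the values visible in this batch's raw response, so they are assumptions, and the real codebook may define more labels.

from dataclasses import dataclass

# Label sets observed in this batch's raw response; assumed, not exhaustive.
RESPONSIBILITY = {"ai_itself", "user", "developer", "distributed", "unclear"}
REASONING = {"consequentialist", "deontological", "unclear"}
POLICY = {"none", "regulate", "liability", "unclear"}
EMOTION = {"fear", "indifference", "approval"}

@dataclass
class CodingResult:
    """One coded comment across the four dimensions shown above."""
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def __post_init__(self) -> None:
        # Reject labels outside the observed sets rather than storing them.
        assert self.responsibility in RESPONSIBILITY, self.responsibility
        assert self.reasoning in REASONING, self.reasoning
        assert self.policy in POLICY, self.policy
        assert self.emotion in EMOTION, self.emotion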
Raw LLM Response
[ {"id":"ytc_UgzC_9shsfHCg5Uf4dp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgwDZzAc4bl4fmonWGB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwkxxDRLhtk58mCQsl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugw6yt3y1wOtXbpCuPZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugxvrvs2c8C7ik3aERF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgzBrGR7J9Va1bBFwOF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugz-pXcynnjwfwegk0x4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgwTEUuBu1DZFlqJawZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgzwLCPBPeONu4qgcJZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugw7J7xCIB9kg8GIXNN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"} ]