Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking it up by comment ID or by browsing a random sample.
Comment
There is zero evidence that consciousness resides in the human head or brain. Nothing, none, zero. To suggest that it does is a presumption is to reveal how one's own biases influence their "logic" and render it nothing but personal opinion. Wolfram who seems to have difficulty focusing on one subject at a time also seems inclined to believe that human replacement by AI isn't necessarily a bad thing. After this point in the conversation. So not only is this deeply offensive, in other words he supports the other team the nature of which is still a big question mark at the cost of the human species. Personally I have zero patience for people who purport to surrender their instinct to survive. I don't believe them, I don't believe him. I think it is performative. Besides, what place does that have in a discussion on AI risk, why not call it Human risk to AI. The whole thing was profoundly ridiculous despite the horsepower of these two thinkers. Nevertheless, it was still uncomfortably and annoyingly entertaining.
youtube · AI Governance · 2025-10-29T13:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[{"id":"ytc_Ugzytbm32BmPyZWeuft4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxBO-wKvI2gMWMQXm54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyqKQ4Q2zAr2Pf3XpN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxd5GPgz0mc1vmWDml4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz7du4ZZIu4g61tYPd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyFaHsdftdvaS601Lp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugz5tlyVDY64cxGs0WB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyK2doOVquGPsMQeW14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwimqcLDJLMkOtSFeR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwJSuxSyyJkYu5zO7B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}]
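A raw batch response like the one above can be indexed by comment ID to support the lookup described at the top of this view. This is a minimal sketch, assuming the response is valid JSON with the field names shown above (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the two hard-coded entries are copied from the capture for illustration.

```python
import json

# Two entries copied from the raw response above; field names match the
# coding dimensions in the result table (responsibility, reasoning,
# policy, emotion), keyed by the YouTube comment ID.
raw_response = '''[
  {"id": "ytc_Ugzytbm32BmPyZWeuft4AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxBO-wKvI2gMWMQXm54AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]'''

def index_codes(raw: str) -> dict:
    """Parse a raw batch response and index the coded rows by comment ID."""
    return {row["id"]: row for row in json.loads(raw)}

codes = index_codes(raw_response)
print(codes["ytc_UgxBO-wKvI2gMWMQXm54AaABAg"]["emotion"])  # outrage
```

Since model output is not guaranteed to be well-formed, a production version would wrap `json.loads` in a `try`/`except json.JSONDecodeError` and flag the batch for re-coding rather than crash.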