Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
Someone literally made lord of the rings trailer in ghibli style with that ai
A…
ytc_UgwgX8N-0…
You mean like ChatGPT? AI is only useful if it is trained on factual informati…
ytr_Ugy1ygeWn…
No, AI will not destroy humanity,
AI is the tool, the bicycle, the spaceship or…
ytc_UgyN0O2yA…
Is the tesla robot now saying stoer ? From the dutch ? Really hearing wird. Stoe…
ytc_UgzdgY3sw…
Its not just drivers, its 50% of all jobs, will likely be gone in the next 10 ye…
ytr_UgzZVYPoP…
We’re quick to push tests on AI to see if it’s conscious but nobody is testing y…
ytc_UgyMSL6d4…
Political influence through AI is up and running as we speak. As one example, tr…
ytc_UgyPpwwAw…
This is why governments need to create additional jobs by investing in infrastru…
rdc_gkpn7q5
Comment
> I sit and stare at the wall and I just think that that intelligence is a difference of of kind rather than o degree because there might be something different about what's what's going on in here versus what's going on in these current algorithms.
Even if this is true, quoting gwern's clippy story:
> We should pause to note that a Clippy^2 still doesn’t really think or plan. It’s not really conscious. It is just an unfathomably vast pile of numbers produced by mindless optimization starting from a small seed program that could be written on a few pages.
> It has no qualia, no intentionality, no true self-awareness, no grounding in a rich multimodal real-world process of cognitive development yielding detailed representations and powerful causal models of reality which all lead to the utter sublimeness of what it means to be human; it cannot ‘want’ anything beyond maximizing a mechanical reward score, which does not come close to capturing the rich flexibility of human desires, or resolving the historical Eurocentric contingency of such narrow conceptualizations, which are, at root, problematically Cartesian.
> When it ‘plans’, it would be more accurate to say it fake-plans; when it ‘learns’, it fake-learns; when it ‘thinks’, it is just interpolating between memorized data points in a high-dimensional space, and any interpretation of such fake-thoughts as real thoughts is highly misleading; when it takes ‘actions’, they are fake-actions optimizing a fake-learned fake-world, and are not real actions, any more than the people in a simulated rainstorm really get wet, rather than fake-wet.
> (The deaths, however, are real.)
youtube
AI Moral Status
2025-10-31T21:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxDlAQpJvFbgGM4r4N4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyjUsv4wUBOyvwEwRd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugwaha5FvqKpPn5hTL14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzQ7alvMqtC2j7XWyN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyBNlFBAtH6vnIH_7F4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxD7d65FwNleHg6ndh4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxdJNz5J6OmXBHtUHR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugzc0TaJYKf3z5Gpc6F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwDSWrHCmEbQb6BGxp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzeml3bGYbm1250Kup4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
```
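A raw response like the one above is a JSON array of per-comment codings, one object per comment ID, each carrying the four coding dimensions shown in the table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and indexed for lookup by comment ID — the function name `index_codings` and the two-entry sample (taken verbatim from the response above) are illustrative, not part of the tool:

```python
import json

# Two entries copied from the raw LLM response above; a real response
# contains one object per coded comment.
raw_response = """[
  {"id": "ytc_UgxDlAQpJvFbgGM4r4N4AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugzc0TaJYKf3z5Gpc6F4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]"""

# The four coding dimensions from the "Coding Result" table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw coding response and index it by comment ID,
    keeping only the four expected dimensions."""
    return {
        entry["id"]: {dim: entry[dim] for dim in DIMENSIONS}
        for entry in json.loads(raw)
    }

by_id = index_codings(raw_response)
print(by_id["ytc_UgxDlAQpJvFbgGM4r4N4AaABAg"]["emotion"])  # outrage
```

Indexing by ID this way supports the same lookup the page offers: given a comment ID such as `ytc_…` or `rdc_…`, retrieve the exact coding the model emitted for it.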