Raw LLM Responses
Inspect the exact model output for any coded comment.
You can look up any coded comment directly by its ID, or open one of the random samples below; a minimal lookup sketch follows the sample list.
- "I appreciate the attempt at simplifying a complex field of computing science. W…" (ytc_UgyPC9oD4…)
- "Googles AI will never begin personalizing results for white people which depicts…" (ytc_UgzOZYCqI…)
- "Fire, a great tool, in control a loyal slave ; turns into a ruthless master if n…" (ytc_UgyIxd5OI…)
- "Hi there! Thank you for reaching out. We actually have a curated playlist organi…" (ytr_UgxOyIVlF…)
- "Lex - it’s wonderful that you ask such ‘innocent’ questions - however, maybe loo…" (ytc_Ugz5x6Yug…)
- "That looks AI generated. Like as in animated, not real android. Even still, God…" (ytc_Ugw5uwdTL…)
- "It's insane to realize that someone who is a software engineer lost his shit to …" (ytc_Ugz8ugXu2…)
- "We don't need robots to do the jobs we hate. Just need to restructure society an…" (ytc_UgwiF5u2D…)
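The ID lookup described above is easy to reproduce offline. Here is a minimal sketch, assuming the batch output shown under "Raw LLM Response" below has been exported to a file named raw_llm_responses.json (the filename and the export step are assumptions, not features of this page):

```python
import json


def load_codings(path: str) -> dict[str, dict]:
    """Index a batch of coded comments by their comment ID.

    Assumes `path` points to a JSON array of objects shaped like
    {"id": "ytc_...", "responsibility": ..., "reasoning": ...,
     "policy": ..., "emotion": ...}, matching the raw LLM response
    format shown at the bottom of this page.
    """
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    return {rec["id"]: rec for rec in records}


# Hypothetical usage: look up one coded comment by its full ID.
codings = load_codings("raw_llm_responses.json")
coding = codings.get("ytc_Ugzyw7P6UIG7qr9orm94AaABAg")
if coding is not None:
    print(coding["responsibility"], coding["reasoning"],
          coding["policy"], coding["emotion"])
```

Indexing once into a dict makes repeated lookups constant-time, which matters when spot-checking many IDs against a large export.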
Comment
These are not rabbit holes. They are arguments that go round in ever decreasing circles.
You need to define your terms before you can begin to describe these problems. Yudowsky and Wolfram have a very good try here and are two of the most qualified thinkers on the subject however I am at 3 hours and they still haven't described or agreed the terms necessary for discussing AI risk in detail.
Define Intelligence
Define general Intelligence
Etc
What they and many others don't seem to discuss, in detail at least, is the vital element: Agency, or what Leibniz described as will. This is, I think, the missing element. When we have a machine with genuine agency then there are massive risks, end of story. Whether we can define, agree on a definition of will or agency is another question!
Is a reward function enough of an impetus? I don't know how reward functions are built and how or even if they differ across different systems. What a time to be alive.
Platform: youtube · Topic: AI Governance · Date: 2024-12-06T11:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
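For downstream analysis it can help to treat each coding result as a typed record rather than a loose dict. A minimal sketch follows; note that the category sets contain only the values observed in the sample batch on this page, so the real codebook may define more:

```python
from dataclasses import dataclass

# Values observed in the sample batch below; the full codebook
# may allow additional categories (assumption).
RESPONSIBILITY = {"developer", "ai_itself", "user", "distributed", "none", "unclear"}
REASONING = {"consequentialist", "deontological", "virtue", "unclear"}
POLICY = {"regulate", "liability", "industry_self", "none", "unclear"}
EMOTION = {"fear", "mixed", "approval", "outrage", "indifference"}


@dataclass(frozen=True)
class Coding:
    """One coded comment: the four dimensions plus its coding timestamp."""
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: str  # ISO 8601, e.g. "2026-04-27T06:24:53.388235"

    def __post_init__(self):
        # Flag any value outside the observed category sets.
        for value, allowed in [
            (self.responsibility, RESPONSIBILITY),
            (self.reasoning, REASONING),
            (self.policy, POLICY),
            (self.emotion, EMOTION),
        ]:
            if value not in allowed:
                raise ValueError(f"unexpected category: {value!r}")


# Usage with the first record from the batch below.
row = Coding(
    comment_id="ytc_Ugzyw7P6UIG7qr9orm94AaABAg",
    responsibility="developer",
    reasoning="consequentialist",
    policy="regulate",
    emotion="fear",
    coded_at="2026-04-27T06:24:53.388235",
)
```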
Raw LLM Response
```json
[
  {"id":"ytc_Ugzyw7P6UIG7qr9orm94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz5qfO2p5ouopqxF9J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw5jx3JN_iJjVdgF-V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxgabcdIuRhNkDAGoZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzK0cxdklJv4XjEKQV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugwk38JoiF5nupttEiV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"outrage"},
  {"id":"ytc_UgxUpWrqOtfeJUqbHoB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw9Yn37_qtH16HPxL54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzRiCvRXTjY9wSaOpB4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxrxwC9GQeGPZSOxHV4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"mixed"}
]
```
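Since the model returns a whole batch in one JSON array, a quick schema check before ingestion catches truncated or malformed records early. A minimal sketch, assuming the raw response text is already held in a string; the ytc_/ytr_ ID prefixes are simply the only ones observed in this sample:

```python
import json

REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}


def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and reject malformed records."""
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded comments")
    for i, rec in enumerate(records):
        if not isinstance(rec, dict):
            raise ValueError(f"record {i} is not a JSON object")
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {i} is missing keys: {sorted(missing)}")
        if not rec["id"].startswith(("ytc_", "ytr_")):
            raise ValueError(f"record {i} has an unexpected ID prefix: {rec['id']!r}")
    return records
```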