Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "Great content as always. I’ve really enjoyed the recent AI content despite how t…" (ytc_UgzHMXezo…)
- "I was with you until you said purge the machines now. How far back should we g…" (rdc_nsz2pnk)
- "As a scientist, I absolutely agree. OP probably doesn't understand what a rat ra…" (rdc_ewg433a)
- "Copyright is the wrong focus for the problem of generative A.I. The reason why c…" (ytc_Ugydy_jNa…)
- "It is more than that. I grew up in the 80s and 90s when computers were first bei…" (rdc_ohp0cri)
- "Artists will not be replaced by AI. Artists will be replaced by other artists us…" (ytc_UgzefaRGC…)
- "6:15 This reaction of \"it's just data, it's valueless, I have no reaction to it\"…" (ytc_UgxZXGn9y…)
- "The claim that the Tesla Robotaxi is \"smarter\" is crazy. Waymo cars are using ne…" (ytc_UgwVqbGbo…)
Comment
While entertaining, the vast majority of this presentation is pure fiction, the mere musings of a mortal. First, every stage is actually rule-based, not the simple rule-based of stage one, but nevertheless just more and more sophisticated rules. Second, AI can never truly have consciousness or become self-aware. Yes, it can simulate those and even deceive people, but it will never achieve actual consciousness. It can never ask "what if", it can never just wonder, it can never advance any values apart from strict utilitarian ones, it can never love unconditionally, it can never have intuition. It is just a tool that could be enormously useful, even cause our own destruction, all based on the rules and values of a human.
youtube · AI Governance · 2024-04-13T12:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugxt_8HTpVINV-Pzpdp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy5LGatvNDO0y7P2Pp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz0Eycb112-NxfVnVZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_Ugzrwd5I13OX9ykx2ZR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugw2VxjUhKGUSZuhaW94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugwz4gJBCMC735BNAOd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz6e7xqPvtr3969DA94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyWwW2QDqB1Cmpjlnt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwBWomd7dzhsYugY1p4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz6vBthBPLBzroIBBJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
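A raw LLM response like the one above is a JSON array with one coding object per comment, so looking up a coding by comment ID is a parse-and-index step. The sketch below (a minimal illustration; the `index_by_id` helper name is assumed, the field names and sample entries are copied from the response above) shows one way to do it with the standard library.

```python
import json

# Two entries copied verbatim from the raw LLM response shown above.
raw_response = """
[
  {"id":"ytc_Ugzrwd5I13OX9ykx2ZR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugz6vBthBPLBzroIBBJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
"""

def index_by_id(response_text: str) -> dict:
    """Parse a raw LLM response and key each coding object by its comment ID."""
    codings = json.loads(response_text)
    return {c["id"]: c for c in codings}

codings = index_by_id(raw_response)
print(codings["ytc_Ugzrwd5I13OX9ykx2ZR4AaABAg"]["emotion"])  # → mixed
```

In practice a batch of such responses would be indexed the same way, with the dict serving as the lookup table behind an "inspect by comment ID" view.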