Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Consider the processing power of ai vs the human brain and how it basically equa…
ytc_Ugz7MeCsT…
I think it is a simple matter of not programming "things" to need rights. So for…
ytc_Uggf3fSG7…
12:12 I think I just realize some implications. Emotions aren't human. It's not …
ytc_Ugz7MbanY…
AI will always remain artificial, it can never match human ingenuity. While it a…
ytc_UgzZOMDEu…
I'm at a company that does a lot of AI and the executives are pushing hard to ge…
rdc_mva6xkk
All eyes are on you, Norway. You have every reason to do the right thing.…
rdc_dsbfz4k
A few years ago I was in Frankfurt airport on transit. The restrooms have automa…
ytc_UgwG272GY…
I feel like usually I don't get freaked out by AI I'm more impressed then anythi…
ytc_Ugz-Qabbw…
Comment
Clever, not intelligent. Saying machines are more intelligent than humans is like saying cars are better runners than humans because they are faster. What machines do is algorithmic, whereas the consciousness and intelligence that humans, and arguably even octopuses, display is a fundamentally different process, because they are "being-in-the-world". Dreyfus, citing Heidegger, cleaned Minsky's clock on this issue years ago. Scientists, like their machines, work in a very narrow, abstracted space that can at times imprison them in a dogmatic, reductive materialism that fails to appreciate the true and incomparable nature of human intelligence. Although there are exceptions, most notably Sir Roger Penrose's thoughts on the non-algorithmic and possibly quantum nature of the human consciousness necessary for intelligence, scientists, and also certain noisy tech billionaires, make poor philosophers.
youtube
AI Governance
2023-05-05T19:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzFB9meqjeGABYy4bd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzOd4M95zSbzdFnVw14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzZYISp6oOOwl987Fh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxUvJOd2CDPbPjcd4Z4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyrN8QIzYFlt6jKv4B4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugzv4ryNXsMRY8-9KxB4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgyhZ_34WDu_FBtNxjN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwSkS9JirZGT_dVyz94AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy08fn_D7zcsJKclkp4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwXci4JUPmEmwq-yn94AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"mixed"}
]
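A raw response like the one above has to be parsed and checked before the per-comment codings reach the table view. The following is a minimal sketch of that validation step, assuming the allowed values per dimension are only those visible on this page (the actual codebook may define more categories, and the function name `validate_batch` is hypothetical):

```python
import json

# Allowed values per coding dimension, assumed from the codes seen on this page.
ALLOWED = {
    "responsibility": {"government", "company", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"fear", "indifference", "resignation", "approval", "mixed"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed codings."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Each entry must be an object carrying a comment ID.
        if not isinstance(row, dict) or "id" not in row:
            continue
        # Every dimension must be present with a recognized value.
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid

raw = ('[{"id":"ytc_example","responsibility":"government",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
print(validate_batch(raw))  # the single row passes every check
```

Rows that fail validation can then be queued for re-coding rather than silently dropped, which keeps the coded dataset aligned with the raw model output shown here.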