Raw LLM Responses
Inspect the exact model output behind any coded comment, or look up a record directly by its comment ID.
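The lookup itself is a simple scan over the parsed records. A minimal sketch, assuming the stored model output has already been read in as a JSON array like the one shown at the bottom of this page; the `raw_response` variable and `find_by_comment_id` helper are illustrative names, not part of the tool:

```python
import json
from typing import Optional

# `raw_response` stands in for one stored model output: a JSON array of
# coded records like the "Raw LLM Response" shown at the bottom of this page.
raw_response = '''[
  {"id": "ytc_UgyPBXax3hfTyFkR71F4AaABAg", "responsibility": "none",
   "reasoning": "deontological", "policy": "industry_self", "emotion": "approval"}
]'''

def find_by_comment_id(records: list[dict], comment_id: str) -> Optional[dict]:
    """Return the coded record whose "id" field matches, or None."""
    return next((r for r in records if r.get("id") == comment_id), None)

records = json.loads(raw_response)
record = find_by_comment_id(records, "ytc_UgyPBXax3hfTyFkR71F4AaABAg")
print(record)  # the full coding for that comment, or None if the ID is unknown
```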
Random samples
- "I guess it's possible, but it seems like a bad idea for Alphabet to do so. They'…" (`rdc_dfti1vi`)
- "We have had the technology to implement autonomous cars for at least 10 years no…" (`ytc_UgzzYHKOn…`)
- "I think the ironic part about this is the fact he claims to be ‘great’ at things…" (`ytc_Ugx5i0hKm…`)
- "Taking people's jobs and lively hoods aside. The fact driverless cars have less …" (`ytr_Ugzc4lsnU…`)
- "That abomination is not human. It’s not even an AI. It’s not an Earthling. It’s …" (`ytc_UgwQA53bo…`)
- "AI will have a phylum within the animal kingdom. Seems like we are at the cusp o…" (`ytc_Ugw2jsrom…`)
- "I had a left side ablation done. My doctor used AI to guide the wire to the left…" (`ytc_UgzJNFXti…`)
- "Like a comment further down said, Snapchat is conducting facial recognition, but…" (`rdc_h93dwd4`)
Comment

> I think some of the analogies are very generalised and in my view wrong. For example, comparing the arms race (which by definition was about war and destruction) with the AI race which is about who has the better more intelligent system is wrong. If we do a comparison, then we conclude that AI is about war and destruction, which obviously it is not. AI should be stopping us of doing bad things, making bad decisions or prediction bad things. Historically the largest number of human deaths were caused by religion (man-made), wars (man-made) and disease (man-spread). So what if AI stops all the bad things? Also about this gorilla problem, this is not a proper comparison or analogy. The gorillas will never understand the difference in intelligence between them and humans. With humans, you can reason, communicate, understand. It is not about us being more intelligent that gorillas.

youtube · AI Governance · 2025-12-06T19:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
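Each dimension is coded from a small closed vocabulary. A sketch of the record schema, inferring the value sets only from codes that appear on this page (the type names are illustrative, and the actual codebook may include values not shown here):

```python
# Illustrative schema for one coded record. Value sets are inferred solely
# from the codes visible on this page; the real codebook may be larger.
from typing import Literal, TypedDict

Responsibility = Literal["developer", "user", "ai_itself", "distributed", "none", "unclear"]
Reasoning = Literal["consequentialist", "deontological", "virtue", "mixed", "unclear"]
Policy = Literal["regulate", "liability", "industry_self", "none", "unclear"]
Emotion = Literal["fear", "outrage", "approval", "indifference", "mixed"]

class CodedComment(TypedDict):
    id: str
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion
```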
Raw LLM Response
```json
[
  {"id":"ytc_UgywCcOS-qa6xCJxT3d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyLMHUK_Ydr4CVKFtt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxFzcjQTwFi8wKHtC94AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzoLDNz5PX5rHFvRVh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzdDByjokAHZCRsyc54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugzicl3veLvMyjzKRt54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwEZ12voOS-Cu-tLBZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyPBXax3hfTyFkR71F4AaABAg","responsibility":"none","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgzyiuH83f1r97718Ax4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzJaXZRz4jqI9OA3bB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}
]
```
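Before trusting a stored response, it is worth checking that the model actually returned well-formed JSON with the expected fields. A minimal validation sketch under the same assumptions as above, with field names taken from the array shown here; `EXPECTED_KEYS` and `validate_response` are illustrative names:

```python
import json

# Fields every coded record must carry, per the array shown above.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def validate_response(raw: str) -> list[dict]:
    """Parse one raw LLM response and reject malformed records."""
    records = json.loads(raw)  # raises json.JSONDecodeError on invalid JSON
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded records")
    for i, rec in enumerate(records):
        if not isinstance(rec, dict):
            raise ValueError(f"record {i} is not a JSON object")
        missing = EXPECTED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {i} is missing fields: {sorted(missing)}")
    return records
```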