Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `ytc_Ugw-6oERr…`: "I think the antagonistic nature doesn't do anyone any favors on either side. The…"
- `rdc_esq4wj1`: "If anyone knows anything about ML and AI its not foolproof. Anyone that takes th…"
- `ytc_UgxVj8bh2…`: "AI Yai Yai Yai Yah !! a daily expression from the working public especially in t…"
- `ytc_UgwvSMYiY…`: "Isn’t it ironic that a professor is concerned about his books being copied by An…"
- `ytc_UgwpFxWKu…`: "As everyone has pointed out, it wasn't the drivers fault. But it was the cars fa…"
- `ytc_Ugx9v50rR…`: "Research shows development is 20% slower with AI, not developers THINK it’s quic…"
- `ytc_UgypvmG1r…`: "I guess I'm missing your point the first time a real AI speaks it's actually min…"
- `ytc_UgyNnHkGU…`: "Ok and what are you gonna do about that, even if by some miracle you could make…"
Comment
The entire topic starts with a very foolish idea. "If we could see aliens arriving in 10 years, we'd have to do something". And then it continues into "safety" and compliance regulations against AI products.
If those said aliens would indeed arrive in 10 years then humanity has 10 years left to become peaceful, get rid of those who can not be peaceful or isolate them into far away enclaves with high borders.
Those said aliens better have a good impression as their intelligence is infinitely higher.
But IF mankind would develop safety features, weapons and policies to FORCE the infinitely higher IQ aliens to comply, then mankind would be counted.
What you can see here is that the "godfather of AI" who was found to have plagiarized vast sections of his nobel prize work from other "godfathers" has found out that only one message is giving him the attention he likes.
Alarmism.
He certainly is decades past his peak and regretful that he actually is not participating in AI.
Source: youtube · AI Jobs · 2025-11-02T19:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[{"id":"ytc_Ugy2q5g11CkdudNQEYN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugy_5q-aM7oq5xcR0WN4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgyOUegVk79KoFEXv7t4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
 {"id":"ytc_UgzkrV6Rxh_aMtej2Ol4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_UgxYc138ekCngIVeOtZ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
 {"id":"ytc_UgyoYimEo1bpgmJQbWZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgyN-2xQ9oOqmVkJw1F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgzJ4NeY6V58pAtKxSl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgyDLclhNNTr1HxsAmV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
 {"id":"ytc_UgyJ3TjChKWlpUXjZTZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}]
```
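The raw response is a JSON array with one record per coded comment. A minimal sketch of how such a batch could be parsed and validated before the per-comment results are stored (the allowed value sets below are inferred from the examples shown here and are an assumption; the real code set may be larger):

```python
import json

# Allowed values per coding dimension -- assumption: inferred only from
# the sample records above, not from the full codebook.
ALLOWED = {
    "responsibility": {"none", "user", "company", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"approval", "mixed", "resignation", "outrage", "indifference"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response and index the codes by comment ID.

    Raises ValueError if a record carries a value outside the known code set,
    so malformed model output is caught before it reaches the database.
    """
    coded = {}
    for rec in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim} value {rec.get(dim)!r}")
        # Store everything except the ID itself, keyed by the ID.
        coded[rec["id"]] = {k: v for k, v in rec.items() if k != "id"}
    return coded

# Example: the last record from the batch above.
raw = ('[{"id":"ytc_UgyJ3TjChKWlpUXjZTZ4AaABAg","responsibility":"none",'
       '"reasoning":"mixed","policy":"none","emotion":"resignation"}]')
coded = parse_batch(raw)
```

Looking up a record by its comment ID (`coded["ytc_UgyJ3TjChKWlpUXjZTZ4AaABAg"]`) then returns exactly the dimension/value pairs shown in the Coding Result table.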