Raw LLM Responses
Inspect the exact model output for any coded comment by looking it up by comment ID. A few random samples:
- Ai programs, image/movie/audio access to people to pacify them, thinking they ar… (ytc_UgzltZhlB…)
- I think you've voiced a valid, balanced view of the situation - cheers! AI at th… (ytc_UgyoAQLZa…)
- My one and really only real requirement for a self-driving car is that it won't … (ytc_UgzlSHi2G…)
- The problem is that humans are gullible and easily manipulated, even the intelli… (ytc_UgzSk4Zh3…)
- The day Senior management job ai takes . That's the day Ai will create new job f… (ytc_UgyToLQi_…)
- I don't get how the 'it only took 10seconds' is the argument to introduce the ki… (ytc_UgyDtsRDT…)
- Harlan Ellison wrote about a wrathful, super vindictive AI entity in "I Have No … (ytc_UgxREpTCQ…)
- The moral dilemmas were fun and enjoyable, but ChatGPT responses were quite pred… (ytc_UgwcSnvnz…)
Comment
In any case, the chance of solving alignment in 5 years being 1-3% and the chance of an existential risk being 80% in this time period, is pretty consistent: Most large AI models will give similar outputs. This result, more-or-less, I have to explain I got also from "Dan" of ChatGPT-3.5 before the nerf. Not exactly the same numbers, but very close. And this is also why Mo Gawdat says we should try to enjoy life now and not have kids, until this AI situation gets clear. So far, it's not getting any better.
Source: youtube · Video: AI Moral Status · 2025-06-05T09:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgzQvxPqRmOkR8pjKSR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw2pzwMSgRK9cttc654AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwdnoimEm49txoGhcd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz6-lP11ILim4iOur14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxQhSFRxwWjmwY9COV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwYMmQrHjrbkRYQ_xV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxfrxKweixIKmK5-dZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyxJflUmzJWIgAPOkJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgydMA-Id6aSMrpjkrZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyRKnQaIIgnw87hA354AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}
]
```
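The lookup-by-ID step can be sketched as follows: parse the model's JSON array and index the rows by comment ID. This is a minimal illustration, not the tool's actual implementation; `lookup_codes` is a hypothetical helper, and the excerpted rows and field names are taken from the raw response above.

```python
import json

# Excerpt of the raw LLM response above: a JSON array of per-comment codes.
raw_response = '''[
  {"id": "ytc_UgwYMmQrHjrbkRYQ_xV4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyxJflUmzJWIgAPOkJ4AaABAg", "responsibility": "user",
   "reasoning": "virtue", "policy": "liability", "emotion": "outrage"}
]'''

def lookup_codes(raw: str, comment_id: str):
    """Parse the model output and return the coded dimensions for one comment ID."""
    by_id = {row["id"]: row for row in json.loads(raw)}
    return by_id.get(comment_id)  # None if the ID was not coded in this batch

codes = lookup_codes(raw_response, "ytc_UgwYMmQrHjrbkRYQ_xV4AaABAg")
print(codes["policy"])  # → regulate
```

The same lookup, applied to the full response, reproduces the Coding Result table for the comment shown above (distributed / consequentialist / regulate / fear).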