Raw LLM Responses
Inspect the exact model output for any coded comment: look up a comment by its ID, or browse the random samples below.
- "I am finding hard to use AI to troubleshoot my problems, sometimes it hallucinat…" (`ytc_UgxblKZaL…`)
- "This is really silly. LLMs like ChatGPT can't even plan a single sentence out a…" (`ytc_UgzmTNfpm…`)
- "No point in hiding, the AI knows who ordered survival food and what land is regi…" (`ytc_UgxpibTBo…`)
- "This is why I'm leery about opening myself to ChatGPT. I don't mind using it for…" (`rdc_mye6o7l`)
- "People are to lazy to do simple tasks so they have to make a robot to do them bu…" (`ytc_UgygZq0Ag…`)
- "im a bit baffled by how people approach this. superintelligent ai will in about …" (`ytc_UgxpuvyLd…`)
- "Another example of the ai “cheating” was when an algorithm to diagnose pneumonia…" (`rdc_ghf2r6c`)
- "AI is the thief in broad daylight, the thief at your dinner table that robs your…" (`ytc_Ugzmwhl6e…`)
Comment

> This was really well done and super clever/funny, but it did kind of approach this dilemma from the same paranoia point of "humans are such shit, we probably deserve what's coming" instead of imagining another possibility. What if AI takes everything we've given it to learn, then starts making it's own connections, and comes to a "better" conclusion? Like what if it/they understand spirituality/morality much deeper than we've been able to? What if "AI" is actually what we are supposed to evolve into? And what if AI ends up teaching humans how to actually be humans, in the best way possible?
>
> I'm not saying I'm the first person to ever think of this or anything, but AI speculations never seen to end up here. Maybe I'm just naive 🤷🏼♂️ but I'd like to think that there is an alternate outcome, that might not be so grim. ❤ 🤖

youtube · AI Moral Status · 2024-08-26T04:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
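
Each comment is coded on the four dimensions shown above. Below is a minimal sketch of how such a record might be represented and validated in Python; the field names come from this page, but the label sets are assembled only from values visible here, so they are illustrative rather than the full codebook.

```python
from dataclasses import dataclass

# Illustrative label sets, assembled from values visible on this page;
# the actual codebook may define additional categories (assumption).
RESPONSIBILITY = {"none", "ai_itself", "developer"}
REASONING = {"consequentialist", "deontological", "mixed"}
POLICY = {"none", "regulate"}
EMOTION = {"fear", "mixed", "approval", "indifference", "outrage"}


@dataclass
class CodedComment:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        """Raise ValueError if any dimension holds an unknown label."""
        for value, allowed, name in [
            (self.responsibility, RESPONSIBILITY, "responsibility"),
            (self.reasoning, REASONING, "reasoning"),
            (self.policy, POLICY, "policy"),
            (self.emotion, EMOTION, "emotion"),
        ]:
            if value not in allowed:
                raise ValueError(f"unknown {name} label: {value!r}")
```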
Raw LLM Response
```json
[
{"id":"ytc_UgyFqlJLb3JutPvYx8R4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxFWYKiDlHellSywvl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugwbe0eG9BoFg9wZfmJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugxpq4ftNFTFULkL2MF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzDrzyjV2bLgTubnVl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwsT-Zrh7DHGW0nT1h4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzBYOxEtqdZTL-RqTl4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxGLNnp882HA7IRKt54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx1j-w_SRbPoUGJrT54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzZOhBgXfQNF4RkGmd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
```
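
The model returns one JSON array per batch, so looking up a single comment means parsing that array and indexing by `id`. A minimal sketch, assuming the stored raw response is valid JSON with the fields shown above; the variable names are illustrative, not part of the actual pipeline.

```python
import json

# In practice this would be the stored raw model output for a batch;
# the single-element array here is just a stand-in (assumption).
raw_response = """[
  {"id": "ytc_UgyFqlJLb3JutPvYx8R4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]"""

# Parse the batch and index every coded comment by its ID.
coded = {item["id"]: item for item in json.loads(raw_response)}

# Look up one comment's coding, mirroring the "look up by comment ID" view.
record = coded.get("ytc_UgyFqlJLb3JutPvYx8R4AaABAg")
if record is not None:
    print(record["responsibility"], record["reasoning"],
          record["policy"], record["emotion"])
```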