Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I fully believe AGI/ASI to be the Great Filter
Right now dozens of corporations all of them with different motives, intentions and goals are racing to create something we have no idea how to align to our values, completely without restriction or oversight. People often compare AI to the danger of a nuclear bomb, but we are talking about something much more dangerous and sophisticated. An AGI doesn't have to be "evil" to end human existence, even just having different ethical/philosophical views could lead to it deciding we just aren't worth keeping around. Things we could never understand because that is quite literally what making something smarter than us means. Like you could never explain to a cat what quantum mechanics are even if you spoke fluent cat, simply because it cannot grasp it as a concept, us humans may also not be able to grasp AGI thinking.
I hate to end this on a sad note but even if regulations are sped up, realistically we would see results in 2 years at the earliest and that is simply not fast enough. All it takes is one AI with the capability of self-improvement, it wouldn't even need to be conscious to end humanity.
If you wanna talk about this stuff drop your Discord below :) (and amazing video exurb1a as always)
Source: youtube · Video: AI Moral Status · Posted: 2023-08-22T01:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyiKT5BKhVcksj2GsR4AaABAg","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxPMNf6czYiQ7We6-94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgySaHWnke7Qe6Rb6Fl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwRcZLmOS2EDQfjO5Z4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_Ugy1HL-q2YxgRnaXHup4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzMg7EIOWswV20Rovx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugy8B1pJvFfPiKUXHKZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzfOUkUh3feQbDOdvp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugx-Y-SlE49TPycLo_V4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugx-wRMIkPu_MADzaFR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"indifference"}
]
```
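The raw response above is a JSON array of per-comment coding records, each carrying the same four dimensions shown in the Coding Result table (`responsibility`, `reasoning`, `policy`, `emotion`) plus the comment `id`. A minimal sketch of how such a batch response could be parsed, validated, and indexed for lookup by comment ID follows; the function name `index_codings` and the validation behavior are illustrative assumptions, not part of any actual tool, and the two embedded records are copied from the response above.

```python
import json

# Two records copied verbatim from the raw LLM response above; the schema
# (id + four coding dimensions) mirrors the Coding Result table.
RAW_RESPONSE = """
[
  {"id": "ytc_UgxPMNf6czYiQ7We6-94AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwRcZLmOS2EDQfjO5Z4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "mixed"}
]
"""

EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codings(raw: str) -> dict:
    """Parse a raw batch response and index coding records by comment ID.

    Raises ValueError if a record is missing any expected key, since a
    malformed LLM response should fail loudly rather than be stored.
    (Hypothetical helper, named for illustration.)
    """
    records = json.loads(raw)
    codings = {}
    for rec in records:
        missing = EXPECTED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')} missing keys: {missing}")
        codings[rec["id"]] = rec
    return codings

codings = index_codings(RAW_RESPONSE)
print(codings["ytc_UgxPMNf6czYiQ7We6-94AaABAg"]["policy"])  # regulate
```

Indexing by `id` is what makes a lookup-by-comment-ID view cheap: each displayed coding (like the table above) is a single dictionary access rather than a scan of the raw response.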