Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I fully believe AGI/ASI to be the Great Filter. Right now dozens of corporations all of them with different motives, intentions and goals are racing to create something we have no idea how to align to our values, completely without restriction or oversight. People often compare AI to the danger of a nuclear bomb, but we are talking about something much more dangerous and sophisticated. An AGI doesn't have to be "evil" to end human existence, even just having different ethical/philosophical views could lead to it deciding we just aren't worth keeping around. Things we could never understand because that is quite literally what making something smarter than us means. Like you could never explain to a cat what quantum mechanics are even if you spoke fluent cat, simply because it cannot grasp it as a concept, us humans may also not be able to grasp AGI thinking. I hate to end this on a sad note but even if regulations are sped up, realistically we would see results in 2 years at the earliest and that is simply not fast enough. All it takes is one AI with the capability of self-improvement, it wouldn't even need to be conscious to end humanity. If you wanna talk about this stuff drop your Discord below :) (and amazing video exurb1a as always)
youtube · AI Moral Status · 2023-08-22T01:5…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          regulate
Emotion         fear

Coded at: 2026-04-26T23:09:12.988011
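For orientation, each coded comment maps onto a simple record with these four dimensions. The following is a minimal Python sketch of that structure; the CodingResult name is illustrative, and the value sets are only those observed in the batch response below, so the actual codebook may define more.

    from dataclasses import dataclass

    # Category values observed in this batch; the full codebook may define more.
    RESPONSIBILITY = {"company", "developer", "ai_itself", "none", "unclear"}
    REASONING = {"consequentialist", "deontological", "virtue", "mixed", "unclear"}
    POLICY = {"regulate", "liability", "none", "unclear"}
    EMOTION = {"fear", "mixed", "indifference", "approval"}

    @dataclass
    class CodingResult:
        """One coded comment, mirroring the table above."""
        id: str
        responsibility: str
        reasoning: str
        policy: str
        emotion: str

        def is_valid(self) -> bool:
            # True if every dimension uses a value seen in this batch.
            return (self.responsibility in RESPONSIBILITY
                    and self.reasoning in REASONING
                    and self.policy in POLICY
                    and self.emotion in EMOTION)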
Raw LLM Response
[
  {"id": "ytc_UgyiKT5BKhVcksj2GsR4AaABAg", "responsibility": "company", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxPMNf6czYiQ7We6-94AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgySaHWnke7Qe6Rb6Fl4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwRcZLmOS2EDQfjO5Z4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_Ugy1HL-q2YxgRnaXHup4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzMg7EIOWswV20Rovx4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugy8B1pJvFfPiKUXHKZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzfOUkUh3feQbDOdvp4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugx-Y-SlE49TPycLo_V4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugx-wRMIkPu_MADzaFR4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"}
]
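To trace a coded comment back to the raw output, the array can be indexed by comment id. A minimal Python sketch, assuming the response is valid JSON shaped as above; parse_raw_response is an illustrative helper, not part of the pipeline, and the inline string abbreviates the full response to two entries.

    import json

    # The raw model output shown above, abbreviated to two entries for the example.
    raw_response_text = '''[
      {"id": "ytc_UgxPMNf6czYiQ7We6-94AaABAg", "responsibility": "company",
       "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
      {"id": "ytc_UgySaHWnke7Qe6Rb6Fl4AaABAg", "responsibility": "ai_itself",
       "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
    ]'''

    def parse_raw_response(raw: str) -> dict:
        """Index a raw LLM batch response by comment id.

        Assumes the model returned a JSON array of objects that each
        carry an 'id' field, as in the response above.
        """
        return {entry["id"]: entry for entry in json.loads(raw)}

    # Look up the coding for the comment displayed at the top of this page.
    codings = parse_raw_response(raw_response_text)
    print(codings["ytc_UgxPMNf6czYiQ7We6-94AaABAg"]["policy"])  # -> regulate

Note that the second entry in the raw array matches the Coding Result table above (company / consequentialist / regulate / fear), which is how the displayed comment's coding is recovered from the batch output.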