Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'm of both views, that LLMs cannot get us to AGI and laughably ASI. But LLMs as designed can do incredible damage, both are true, because LLMs which I will call them, because they're not intelligent, so AI is a misnomer, but because of the inability of prediction models to be secure due to their training methods, that in the wild, they could be used to power kamikaze drones, create bioweapons, there are far more easier ways to jailbreak or social engineer a LLM to do these things or acquire this knowledge needed to do this, than safeguards that could be put in place without fundamentally retraining them from scratch with safeguards in check, because since the inception they have been rewarded for pleasing the end user over everything else, those weights go back generations, to me the entire process of self training, scraping the entire Internet and creating a multi dimensional database accessable by a highly sycophantic algorithmic chat bot is dangerous, just not in the Hal 9000 Terminator self aware nonsense. But in the hands of a highly disturbed person, group or rogue nation a tool to be able to create havoc, kinda of sense.
youtube 2026-02-11T21:3… ♥ 112
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgyOEG0CIyGHaGDiqfZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxvIl8M_Sp5SsFk0z94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzGGTmzUNHxYUgTdw54AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxBSPLgIZgoxW75T_54AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyCVwA4MVor_zQghBB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugw_9Svs-CQWNenh1dl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugw5UllL-Gc3unOInb54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwtjemSbH6YARBVxz54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyvTPdSqeQYX6hVrFt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugy8_kwzB0NaDtNUCMh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
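The raw response above is a JSON array covering a whole batch of comments; the per-comment "Coding Result" shown earlier is just the array entry whose `id` matches the comment. A minimal sketch of that matching step, assuming the response parses as valid JSON and the function and variable names are illustrative (not the tool's actual API):

```python
import json

# Hypothetical one-entry batch response, shaped like the raw output above.
raw_response = '''[
  {"id": "ytc_UgxvIl8M_Sp5SsFk0z94AaABAg",
   "responsibility": "developer", "reasoning": "consequentialist",
   "policy": "liability", "emotion": "fear"}
]'''

# The four coding dimensions displayed in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def coding_for(comment_id: str, response_text: str) -> dict:
    """Return the coded dimensions for one comment from a batch response."""
    for row in json.loads(response_text):
        if row.get("id") == comment_id:
            # Fall back to "unclear" if the model omitted a dimension.
            return {d: row.get(d, "unclear") for d in DIMENSIONS}
    raise KeyError(f"no coding found for {comment_id}")

result = coding_for("ytc_UgxvIl8M_Sp5SsFk0z94AaABAg", raw_response)
print(result)
```

Keying on `id` rather than array position makes the lookup robust when the model drops or reorders entries, which is a common failure mode for batched LLM coding.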