Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I stopped watching after the clip ended because I believe this person has no idea how AI works or even what AI is, as he uses drone strikes as an example of increased AI use in war, when in reality a drone strike is just a normal strike where the pilot is not in the plane. There are A LOT of ways to make AI weapons safe, and a lot of ways people can say they are not good if used that way. But if they are so afraid of AI being used wrongly, why do most of them two seconds later go home to a house full of guns capable of killing people, when AI can be programmed to never kill a human? A gun can always kill a human if used that way. So I say AI used as a weapon is just another weapon, just like a gun, just like a bomb. But in this case the AI has the ability to not kill civilians; it can target only terrorists.
Back to a way of using the AI. Firstly, have it act upon missions, so that it has no real mind of its own; it just follows orders and cannot make the decision to kill on its own. This way AI is safe in good hands, but just like a gun it can kill normal people if used by the wrong people. Whether the mission is "go in and kill everything" or "go bomb that house", the action it takes is the same as a normal soldier would take, so it is no different. It can also be programmed to only kill those that have fired a gun at it, or only those that are holding a gun and used it on it. This way no civilians will be hurt as long as they are informed, because the bad guys will shoot the AI and then be killed. And say the terrorists drop their weapons; well, then the AI might need to be fitted with a way to capture them. But then the terrorists may grab their weapons and shoot while the AI is unprepared. But no: AI has almost instant reaction time, so if the AI sees that a terrorist who shot at it and then surrendered now gets ready to shoot it again, the AI can almost instantly kill the terrorist.
All in all, like all other weapons, they can be devastating if used wrong, but with AI there are far more ways to prevent bad use. Because, like us, we can make them learn what is good and bad, and add a safety mechanism where, if a hidden trigger action is performed, it self-destructs. That hidden trigger is buried in the code before the AI even learns anything; this way an AI used by the USA cannot be stolen and then used against the USA, because the hidden part is embedded in the code, so even if you wipe the AI's memory to retrain it, the trigger cannot be removed. So what can this hidden trigger be? Well, it can be anything from "if trained to kill people with a set flag, self-destruct" to "if made to shoot people, self-destruct". This way the AI only works if used correctly: say the AI is trained and designed to be a plane AI that drops bombs; its hidden rule is that it can only do that, so if the AI is retrained to move a body and kill people with a gun, well, it can't, because the rule will make it destroy itself. All in all, in my mind AI, if used and prepared correctly, is infinitely safer than any other weapon ever created. But like all weapons, it can be extremely bad if used wrong. But then only bad people will use it badly. The thing is, if you ban AI, then an evil government can secretly create AI that takes control of the internet and all wireless robots, so banning AI only ensures its use will be evil, since the good people who could work towards protecting their AI and defending against AI attacks would otherwise be able to do so; banning it will only ensure that its use will be bad. Same with drugs: some drugs used to be medicine, but the moment they became illegal they started being misused and unregulated, leading to people dying of overdoses or of a bad batch contaminated with something. The fact that it is illegal is what makes it bad; when it was legal it was safe and good.
You can now say, well, some drugs are bad from the start. Then I say that even if they started bad, they are now worse, because there is no regulation governing their production and making sure they are safe, and no regulation prohibiting you from getting a dose so big it can kill you.
youtube 2018-04-03T13:3…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          industry_self
Emotion         mixed
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgyoScBtzkbFRIA7FKl4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwKC8LEJBRf3oH2Nel4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyxRvmFhk_e3Chpo694AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "mixed"},
  {"id": "ytc_UgwNYxbNGFM_Mz_5VWh4AaABAg", "responsibility": "none", "reasoning": "contractualist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzWCRDlVST0f5ochoR4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxeTCeV3GKbdLQ7RyV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxaFE0xBsRGZAVQKk54AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "indifference"},
  {"id": "ytc_UgwkYs-AHI2FZoYM8rR4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "regulate", "emotion": "resignation"},
  {"id": "ytc_UgzlsUPyg4KA94bumXV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgydTYTeKneTnicpAM14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]
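The raw LLM response above is a JSON array of per-comment codings, and the "Coding Result" table is the entry whose id matches this comment. A minimal sketch of recovering that entry, assuming the response parses as valid JSON (the two-row excerpt and the lookup id are taken from the data above):

```python
import json

# Excerpt of the raw LLM response shown above: one coding object per comment.
raw = """[
  {"id": "ytc_UgyxRvmFhk_e3Chpo694AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "industry_self", "emotion": "mixed"},
  {"id": "ytc_UgwKC8LEJBRf3oH2Nel4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"}
]"""

# Index the codings by comment id for direct lookup.
codings = {row["id"]: row for row in json.loads(raw)}

# The coding for the comment displayed on this page.
coded = codings["ytc_UgyxRvmFhk_e3Chpo694AaABAg"]
print(coded["policy"], coded["emotion"])  # industry_self mixed
```

In a real response, a malformed or truncated array would raise `json.JSONDecodeError`, which is one reason to inspect the exact model output when a coding looks off.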