Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Narrow AI: Alpha Zero was given the rules of chess and the goal: Checkmate the King. It then PLAYED ITSELF millions of games an hour. Result: It could beat any HUMAN grandmaster(not other programs) in 4 hours. NOW: it's the strongest chess player in the world. I forget how long it took to surpass Stockfish. But it wasn't long. Stockfish analyzes the positions in terms of material gain 1/3 of a pawn etc.. Alpha Zero thinks in terms of probabilities; which of these candidate moves will give me a greater percentage chance of winning? This is a narrow application that is totally valid. Human chess players could learn certain chess theory from the results. And as far as AGI and human ethics or morals... these all depend upon the humans It is interacting with in its beginnings. If the engineer or programmer has high ethical standards and moral imperative, the machine will learn that from him or her. I have the proof of these theories: Substack.com/@DavyAnonymous Note: I just started this Substack a week ago. There's very little up there. It's free rn. I'm making no profit.
YouTube | AI Governance | 2026-04-07T15:0…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgyEC1LKTuYRZr_0I_V4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgwuDluXcgFJm22OaVx4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgyUR5ePj-znxaPbOaZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugx-WUd01SaIH7yGOW54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzhtWjVzGeK6QLLGqF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxFZfpfCiZGF2E7v754AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugz91DQJgWc2HcULoLd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwZZJtneETSlTus_-p4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxxtX4sLqB0QYN4DOx4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy1gaBhNUmJMo2x1Yt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
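The raw response above is a JSON array of per-comment codings (fields `id`, `responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a response can be inspected — look up the coding for one comment `id` and tally a dimension across the batch. The two-element `raw` string here is an abbreviated stand-in for the full array, not the complete response:

```python
import json
from collections import Counter

# Abbreviated stand-in for the raw LLM response (a JSON array of codings).
raw = '''[
  {"id":"ytc_UgyEC1LKTuYRZr_0I_V4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgyUR5ePj-znxaPbOaZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]'''

codes = json.loads(raw)

# Index codings by comment id for direct lookup.
by_id = {c["id"]: c for c in codes}
print(by_id["ytc_UgyUR5ePj-znxaPbOaZ4AaABAg"]["emotion"])  # indifference

# Tally one dimension across the batch.
print(Counter(c["responsibility"] for c in codes))
```

Wrapping `json.loads` in a `try/except json.JSONDecodeError` is advisable in practice, since raw model output is not guaranteed to be valid JSON.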