Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Geoffrey Hinton, I'm shocked. You created some scoring algorithm, back propagation, that was useful. Your neural capsules, not so much. But this, saying if the AI knows it's being tested it will act dumb or misleading, that is 100% complete nonsense, and if people at your level really believe this, then the non technical people you speak to have no hope of understanding the truth. These models are nothing more than frequency counters of words and their order relative within a sentence (attention) and relative to themselves (self-attention). They don't think and they don't have an agenda, and although there can be specific behavior wrapped around them to detect certain things and speak accordingly, that is 100% programmed in by the AI companies themselves. If you really don't know the difference, then you have no right speaking as any kind of expert. Frankly, I'm embarrassed for you on your behalf, and if you weren't damaging people and causing Mass stress and panic two non-technical people that don't deserve this, I wouldn't say a word. But you are hurting people by spreading this nonsense and there are plenty of things about these newer types of generative AI that are legitimately harmful and by you not talking with any knowledge on the subject you are making a bad thing worse while also making a fool of yourself and increasing people's danger and stress while increasing their risk of danger at the same time. Reach out to me on LinkedIn - Trevor Chandler in Thornton, CO, USA, and I will be happy to get you up to speed and show you code line by line, data by row and behavior for each so you can correctly understand and escape the deception.
youtube AI Moral Status 2026-03-03T23:4…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       mixed
Policy          none
Emotion         outrage
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugw2lC8T6rEQGjgwLPR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugz0_RMtm6G_eREqEQd4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_UgzCdladb4_DlqJeoaR4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgzmXqhiLm3KbEpR7iJ4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwW5Xp5tFx1LiOq1BF4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxCh3wo_i8GmcynC_F4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzxQ57i3d5w2WCyrBF4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxARRFJPpZ-3LsS07N4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxSEBFDFflcbMi7MP14AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyP_cNNdkGIRF-Tv4d4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
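When inspecting a raw response like the one above, it helps to check that every record parses and that each dimension carries an expected value. The sketch below does that in Python; the dimension names (`responsibility`, `reasoning`, `policy`, `emotion`) come from the records shown here, but the allowed-value sets are assumptions inferred only from the values observed in this batch and may be incomplete for the full coding scheme.

```python
import json

# Allowed values per dimension. NOTE: these sets are an assumption,
# reconstructed from the values that appear in the batch above; the
# real codebook may define additional labels.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed", "unclear"},
}

def validate_codings(raw: str) -> list[str]:
    """Parse a raw LLM response and return the ids of records whose
    dimension values fall outside the allowed sets."""
    bad_ids = []
    for record in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if record.get(dim) not in allowed:
                bad_ids.append(record.get("id", "<missing id>"))
                break  # one violation is enough to flag the record
    return bad_ids

# Usage with a tiny inline sample (a hypothetical id for illustration):
sample = '[{"id":"ytc_example","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"outrage"}]'
print(validate_codings(sample))  # → [] when every value is in its allowed set
```

Running this over the raw payload above catches malformed JSON (via the `json.loads` exception) and off-schema labels in one pass, before the records are loaded into the coding table.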