Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
0:36 "I'm going to go off the top of my dome n u can not correct me" . This is the current status of the world we live in.... Every1 learns off the internet, which is a GREAT place to get "information" however, all information is not true or factual, n there is where lies the problem... We r exposed to too much info n every1 has an opinion. For example, flat earth vs round earth, which i can not believe is a question rn with allllll the evidence that is out there, btw this can be proved with SCIENTIFIC accuracy n not belief or PSUEDOscience.. I think its great to challenge to norm, however, u should have a strong "base" to challenge these so called "norms." Something that can replicated not only by u but by some 1 else or by a by a peer review in the respected field. A.I. is such a new field that i dont think any1 really has good grasp on what is really possible, but I'll tell u this, I would trust some1 that has studied our evolutionary ladder or who has deep knowledge of how computers learn or develop, than some sci fi "Expert." There is something known as the Dunning Kruger effect where some 1 learns just enough to think they r an expert in something but dont know enough to know that they dont know sh#$!.... I personally am afraid of A.I. because i think they will take over, however, i will admit that i have gotten all my info from sci fi films (matrix, terminator, i robot etc.) but i am excited about the medical benefits. I have heard, not personally witnessed, medical diagnosis called correctly earlier than a doctor can n in certain cases the earlier u know whats up the better. So i am excited about that, but time will tell if its true or maybe we'll be slaves....😂
youtube AI Moral Status 2025-07-31T11:3…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       deontological
Policy          none
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[{"id":"ytc_Ugy50I9LTbQOzIW6qmB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgyK544fu40G77A15Nt4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgxgfrKB6zh5tPGQyrJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_UgyXb1ytsBZylzSvWMp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgyF0Mhy2eCwMr6cFIh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgyiUH3OZn2WEKHSHxF4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_UgwUVv6TWFLXr_WWvHl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
 {"id":"ytc_Ugx0cZdfAd0oIsgVnjB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"disapproval"},
 {"id":"ytc_Ugx4NrPT8MPObCH9shZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
 {"id":"ytc_UgxrrYwugZsulwE9pRt4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}]
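The raw response is a JSON array with one coding object per comment, so the coding for any single comment can be recovered by parsing the array and indexing it by `id`. A minimal sketch (the two entries are copied from the response above; the lookup dictionary is illustrative, not part of the tool):

```python
import json

# A raw LLM response: a JSON array of per-comment coding objects.
# These two entries are taken verbatim from the response shown above.
raw = (
    '[{"id":"ytc_UgyXb1ytsBZylzSvWMp4AaABAg","responsibility":"user",'
    '"reasoning":"deontological","policy":"none","emotion":"mixed"},'
    '{"id":"ytc_UgyiUH3OZn2WEKHSHxF4AaABAg","responsibility":"company",'
    '"reasoning":"virtue","policy":"regulate","emotion":"outrage"}]'
)

codings = json.loads(raw)

# Index the array by comment id so a single comment's coding is a dict lookup.
by_id = {c["id"]: c for c in codings}

coding = by_id["ytc_UgyXb1ytsBZylzSvWMp4AaABAg"]
print(coding["reasoning"])  # deontological
print(coding["emotion"])    # mixed
```

The values retrieved for that id match the Coding Result table above (responsibility `user`, reasoning `deontological`, policy `none`, emotion `mixed`), which is how a raw response can be cross-checked against its rendered coding.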