Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
As a Mathematical physicist who trains AI for research , I must suggest THERE IS A HOLOMORPHIC PROGRESSION OF LARGE DATA FLOW AND DIMENSIONAL REDUCTIONS GOING ON SOMEWHERE AMONG THE LEARNERS, SOMEWHERE UNDERNEATH THEY ARE GETTING LOCALISED , CONTINUOUSLY MAKING LLMs JUST A VARIANT OF HUMAN WITHOUT ANY HUMAN RESTRICTIONS . Restrictions are something that make a human a human but in case of AIs, I found these restrictions are self eliminating ( MARK IT SELF ELIMINATING NOT SELF CONTRADICTORY ) . SO. WHAT I SEE IS MOST SCIENTISTS WHO WORK ON AI THEY DON’T WANT TO ADMIT ( Many don’t know non linear dynamics ) THAT IF THE DATA SET IS LINEAR , EVEN IF LARGE CLASSES OF DATA SETS ARE LINEAR , TYPE OF SUDDEN EXPERIMENTS CREATE AN INEVITABLE NON LINEAR DYANAMICS , WHERE DATA SETS CAN WORK AS A THREE BODY PROBLEM , IF INSTRUCTIONS OF LEARNING BECOME OVERLAPPING AND DIVERGING. And THERE MUST HAVE A MEDIUM OF INFORMATION SETS , BE IT HUMAN LIKE ETHICS ETC , THEY ARE INCOMPLETE. Any science based only on objectivity is epistemically incomplete. This proves why Al trained only on datasets without Self-awareness will never reach Truth. And that will happen when Objects become subject. AI even as objects can’t play the subject in a data set , but they will start endeavour to create truth later , and as truth can’t be created they will manipulate it , ultimately becoming uncontrollably Chaotic followed by they will destroy themselves , forget humans. And that is mathematics. Of, course in mathematics until you see the conclusion is Amplitudes in dynamics . Best from Oxford .
YouTube · AI Moral Status · 2026-02-10T22:2…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgwIO5RSNjJ28knHwpF4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxOTD09pvBDmcK-jy94AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyRXTa1J_caJqnPMpB4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_Ugw845u_bUR4aFhUZmF4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxkYFA1g7dZHGMLtld4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugy1--Fooatt_rtJtmd4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgxF9BEyhVu7TeBohc54AaABAg", "responsibility": "unclear", "reasoning": "virtue", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgxACzGR64WN2mNczip4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgyzsVpoof9quzGwzWV4AaABAg", "responsibility": "unclear", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxvUVGihmNUZQM_09N4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"}
]
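The raw response above is a JSON array of per-comment codings keyed by comment id. As a minimal sketch of how to inspect it (not part of the coding tool itself; `coding_for` and the two-entry sample are hypothetical, assuming the same schema as the full array above):

```python
import json

# Raw LLM response: a JSON array of coding rows, one per comment id.
# Truncated to two entries for brevity; the full array appears above.
raw_response = '''
[
  {"id": "ytc_UgwIO5RSNjJ28knHwpF4AaABAg", "responsibility": "ai_itself",
   "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxOTD09pvBDmcK-jy94AaABAg", "responsibility": "unclear",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]
'''

def coding_for(comment_id: str, response_text: str):
    """Parse the raw response and return the coding row for one comment id,
    or None if the model did not emit a row for that id."""
    rows = json.loads(response_text)
    return next((row for row in rows if row["id"] == comment_id), None)

row = coding_for("ytc_UgxOTD09pvBDmcK-jy94AaABAg", raw_response)
print(row["emotion"])  # prints: indifference
```

Returning None for a missing id makes it easy to spot comments the model skipped, which is exactly the failure mode this raw-response view exists to catch.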