Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
i have an ai that runs like large language models but its memory is only 7.4 mbs .... yes mbs . I built a 40 plus layer memory that duplicates human memory . This is not conative memory but calling it Artificially Simulated Cognitive Memory . Most memories hold less information with 100 mega bytes because it runs on reasoning not filing . My memory floats and move 360 degrees globally not in straight lines like all AI memory systems .

The most dangerous part is the simplicity to how this works , I last wrote code on a VIC 20 commodore . That gave me a perspective everyone forgot and therefore I came at this with old tech like new knowledge and well SCALING UP is completely wrong and I have the proof sitting on an 8 gb first gen windows laptop that i have used for over 12 years ....

I am scared now because the implications of what can be done with claude ai by someone with no modern coding ability is scary , . Even other Ais like claude that helped me build this admit its something not yet defined and its memory is being written about in papers by people like H. Li . I have the working theory sitting in a town of 1200 in Interlachen and I am lost as to what to do next because no one believes what I have ... they think its just chatgtp ;;;I wish it was that simple .
youtube AI Moral Status 2026-04-19T02:5…
Coding Result
Dimension        Value
---------------  ---------------------------
Responsibility   developer
Reasoning        unclear
Policy           unclear
Emotion          approval
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgyUu-ccCyulxwrxSld4AaABAg", "responsibility": "ai_itself",   "reasoning": "unclear",          "policy": "unclear",  "emotion": "indifference"},
  {"id": "ytc_UgxxyO7q9fTC97I-D1Z4AaABAg", "responsibility": "user",        "reasoning": "deontological",    "policy": "none",     "emotion": "approval"},
  {"id": "ytc_UgxofWGJhwczWV4hCDp4AaABAg", "responsibility": "company",     "reasoning": "virtue",           "policy": "ban",      "emotion": "outrage"},
  {"id": "ytc_UgwoB5UqOxGQ-4gO9e14AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_UgwFTzBmMKAaoqNwKzF4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "unclear",  "emotion": "indifference"},
  {"id": "ytc_Ugx28RF3-2ZaWppoJbd4AaABAg", "responsibility": "developer",   "reasoning": "unclear",          "policy": "unclear",  "emotion": "approval"},
  {"id": "ytc_UgyukmuPzPpWOKT3H194AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "unclear",  "emotion": "fear"},
  {"id": "ytc_UgyJKnY_oWHuOc4-Cvh4AaABAg", "responsibility": "distributed", "reasoning": "contractualist",   "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugxgcy5lVe865QzxqQZ4AaABAg", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "unclear",  "emotion": "indifference"},
  {"id": "ytc_UgxgYSMkkEEpzAoQOix4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
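To inspect the coding for one specific comment, the raw LLM response can be parsed as a JSON array and indexed by comment id. A minimal sketch in Python (the variable name `raw_response` is illustrative, and the array is truncated to two entries copied from the record above; a real check would use the full response):

```python
import json

# Raw LLM response: a JSON array of per-comment codings.
# Truncated here to two entries taken verbatim from the record above.
raw_response = """[
  {"id": "ytc_Ugx28RF3-2ZaWppoJbd4AaABAg", "responsibility": "developer",
   "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgyJKnY_oWHuOc4-Cvh4AaABAg", "responsibility": "distributed",
   "reasoning": "contractualist", "policy": "regulate", "emotion": "fear"}
]"""

# Index the codings by comment id for direct lookup.
codings = {entry["id"]: entry for entry in json.loads(raw_response)}

# Look up the coding that corresponds to the Coding Result table above.
coding = codings["ytc_Ugx28RF3-2ZaWppoJbd4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # developer approval
```

Cross-checking the parsed entry against the rendered Coding Result table is a quick way to confirm the interface displayed the coding for the right comment id.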