Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Scientists are trying to duplicate a human mind by providing it with vast amounts of data. Indeed, the amount of hardware used to train LLMs is staggering as is the SIZE of the dataset. ChatGPT4 had 1 Trillion parameters for training and was trained on over 45 Terabytes of data. The output of that is then, for all intents and purposes, "the AI". Consider how a child develops. Each and every child goes through a common set of training in order to succeed. No child can survive without a caregiver. Children learn to survive while also learning abstract concepts like science or math. Eventually, if everything goes well, the child ends up as a healthy adult member of society who is capable of caring for itself, and perhaps many others. What could happen if you put not only the knowledge, but the analysis of the knowledge into a 5 year old kid's hands. They would immediately become intellectual adults. THAT is the problem. The MODEL will always be put into the equivalent of a new born child. This is EXACTLY why they are afraid. In this video, one person worries that we might not know if AI is being deceptive. Why? Because when you apply weights to data in order to arrive at an outcome, veracity goes out the window. Like a good conspiracy theory, you can find plenty of supporting information.
Source: YouTube · AI Governance · 2024-04-12T18:2…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyjsZavbDnuZjvsZh54AaABAg", "responsibility": "ai_itself",   "reasoning": "deontological",    "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxVShchXzguWy4sndh4AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxAlW58yfa0-uqD5_d4AaABAg", "responsibility": "user",        "reasoning": "virtue",           "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgweoTnhEkKW6mGpdh94AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwDhcWSWJ_VDyjOTLh4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwBJIYFXoP-8eFz9X14AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwQYDQVI5opZKhzUfR4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "ban",  "emotion": "outrage"},
  {"id": "ytc_UgxcUKe3rphgkBmbJ0N4AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzdT3oTjob9BznSVaJ4AaABAg", "responsibility": "ai_itself",   "reasoning": "unclear",          "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyPNCl042wrLRYvzLt4AaABAg", "responsibility": "company",     "reasoning": "virtue",           "policy": "none", "emotion": "outrage"}
]
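The raw response is a JSON array of per-comment coding records; the row shown in the Coding Result table above corresponds to the record whose `id` matches the comment. A minimal sketch of how such a batch could be parsed and looked up by comment id (the payload here is shortened to two of the records shown; field names follow the output above, and the lookup approach is an assumption, not necessarily the tool's implementation):

```python
import json

# Shortened copy of the raw LLM response: a JSON array of coding records.
raw_response = """
[
  {"id": "ytc_UgwBJIYFXoP-8eFz9X14AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyjsZavbDnuZjvsZh54AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]
"""

records = json.loads(raw_response)

# Index the batch by comment id so a single comment's coding can be inspected,
# as the Coding Result table above does for one comment.
by_id = {rec["id"]: rec for rec in records}

coding = by_id["ytc_UgwBJIYFXoP-8eFz9X14AaABAg"]
print(coding["responsibility"], coding["reasoning"], coding["emotion"])
# developer consequentialist indifference
```

In practice the model output would also be validated (every `id` present, each dimension drawn from its allowed label set) before the records are written back to the coding table.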