Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Good interview. Valid questions and enlightening answers. Who decides what to teach A.I? Who decides what A.I's purpose is?

Here's one question that's been bugging me for a while, and never really gets any airtime. Why are we creating A.I? What's the intent?

This is inflammatory so I apologise, but I think it carries some weight when we view history and predictable behaviour: Why did we split the atom? What was the intention?

OK. So, the intention was exploration of the fabric of existence, right? The intention was to learn more, and explore the potential energy benefits, but the scientists that pushed forward with the work were mainly concerned with how and if it could be done, and to see what might be revealed about the nature of our existence if it could be done.

The upshot is all those curiosity-based intentions went out the window as soon as a few people could "reason" to a conclusion that it was in the interest of national security to use the technology destructively.

How is that not inevitable with A.I? If you think about it, one of the simplest and most destructive uses for A.I would be to set it on a path to make money, right? How quickly could a small group in control of an A.I system become the most powerful (based on money) group on the planet? We should hold off. We do not know how to properly explain ourselves or our intentions in relation to all life on earth. We have a lot of growing up to do, and this A.I situation seems to demand, at a minimum, a self-awareness and respect for all life, which all societies are woefully lacking on.
YouTube · AI Moral Status · 2022-06-29T11:3… · ♥ 11
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          regulate
Emotion         mixed
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id":"ytc_UgwP3cSvp3BP2hPid7x4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwQlAEIJXlbYc4GSuV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzkbsI-hTociEX8WTF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgxC9sAaEuyrFWJ3zph4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxFxphwt5_rqlZ4nER4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
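The raw LLM response is a JSON array with one object per comment, where each object carries the comment id plus the four coded dimensions. A minimal Python sketch of how such a response could be parsed into a lookup by comment id (the ids and values below are copied from the response above; `codes_by_id` is a hypothetical helper, not part of any documented tool):

```python
import json

# One entry from the raw LLM response shown above (the coded comment on this page).
raw_response = '''[
  {"id":"ytc_UgzkbsI-hTociEX8WTF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"}
]'''

def codes_by_id(raw: str) -> dict:
    """Parse a raw LLM response (JSON array of per-comment codes)
    into a mapping of comment id -> coded dimensions."""
    return {
        item["id"]: {k: v for k, v in item.items() if k != "id"}
        for item in json.loads(raw)
    }

codes = codes_by_id(raw_response)
print(codes["ytc_UgzkbsI-hTociEX8WTF4AaABAg"]["policy"])  # → regulate
```

Keying by id makes it straightforward to match each coded record back to its source comment, which is what the "Coding Result" table above does for this page's comment.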