Raw LLM Responses

Inspect the exact model output behind any coded comment.

Comment
Ways we can prevent AI from destroying us:

1. Employ people NOW who have good ethics and thought processes to train AI. If AI does all it's learning from what is presently on the internet, then we're in trouble. (Garbage in = garbage out. We can see this in AI summaries that may have gathered much of its information from fictional movies and books. We can see this when AI writes a speech or puts together a video.)
2. Throw money into research on the human brain. Work out how it works and how it can be improved so humans can become super-smart.
3. More important than being able to recall and regurgitate information, begin training for humans to not just be better educated, or better storers of information, but a society which is wiser. Wisdom does not seem to be (yet) something which AI can learn. This would set humans apart from AI and maintain the need for humankind to exist. Start to put the emphasis on the ways we should be pushing our brains to think rather than just scoring a certain percentage on an examination.
4. Work out a way to integrate AI into the human brain. Make AI part of humankind, rather than a separate entity.
5. Develop a universal set of parameters for AI. Start making progress for that to be signed by all nations. (This one is super-basic and not the preferred method. It's too dependent on pathetic humans in power who are ego-centric, short-term thinkers, and open to corruption.)

Add your own ideas in the reply, please. We should be able to think further than just to 'try to convince AI that we're worthwhile' as is expressed in this interview. Think wider, think bigger, think outside the box.
Source: youtube · AI Governance · 2025-06-25T10:0…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
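Each coding result reduces to one record over four fixed label sets. The Python sketch below is a hypothetical model of that record; the class and field names are illustrative, and the label sets are assumed to be closed, inferred only from the values visible in the raw responses on this page.

from dataclasses import dataclass

# Label sets observed in the raw responses below (assumed closed vocabularies).
RESPONSIBILITY = {"developer", "company", "ai_itself", "distributed", "none"}
REASONING = {"consequentialist", "deontological", "virtue", "mixed", "unclear"}
POLICY = {"regulate", "ban", "none"}
EMOTION = {"fear", "outrage", "approval", "indifference", "mixed"}

@dataclass(frozen=True)
class CodingResult:
    """One coded comment: four dimensions plus the coding timestamp."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: str  # ISO 8601, e.g. "2026-04-27T06:24:59.937377"

    def validate(self) -> None:
        # Raise if any dimension falls outside its label set.
        for name, value, allowed in [
            ("responsibility", self.responsibility, RESPONSIBILITY),
            ("reasoning", self.reasoning, REASONING),
            ("policy", self.policy, POLICY),
            ("emotion", self.emotion, EMOTION),
        ]:
            if value not in allowed:
                raise ValueError(f"{name}={value!r} not in {sorted(allowed)}")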
Raw LLM Response
[ {"id":"ytc_Ugz_ybgufAgnJUsJET94AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyLS8_5xfZCQcIc-CV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgxWl3LU_ffv37wzbbl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_UgyDZIMb9GLD4ojZlbl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwdKano-bUypol0tCN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgxSoqudLLkTzV3L0WV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgxDVk7UCNXHu8zbLGx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"}, {"id":"ytc_UgyzK-Cz9MXYz7aXdpV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwBrY80RNrkaehqgXx4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgzR9MPvUZzYT8z9V114AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"mixed"} ]