Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think the solution is to simply opt for ANI (Artificial Narrow Intelligence) for any activity we want to keep control of, and for any activity we see as desirable (the activities of the master), hence planning and engineering progress in such a way that humans are kept in the equation. Then allow a level of AGI for some activities which are more menial, less important for broad control, and less desirable, allowing AI to perform those tasks but nothing beyond that (the activities of the slave). This works economically as well, much like a society of plebs (robotic slaves) and patricians (human masters). This should usher in an age of great wealth and progress. (Note: I think ChatGPT and Midjourney etc. are already too general and give people too little control. I think the solution is to make the AI as narrow as is needed, even if this feels a bit like stripping away some of the progress made in these fields... We must think about it in this way so as not to destroy the point of higher learning and human involvement. AI specialists are far better off developing self-driving cars, or AI for robotics, which would create a strong robotic labour force. The bulk of the wealth has never been in professional work, and usurping a professional role doesn't create much wealth. The wealth is where it's always been: in labour and huge numbers of robotic slaves. Usurping artistic roles is also unlikely to generate much wealth for humanity, or make people any more self-sufficient or economically independent than they used to be.) (AI must be kept in check, like a slave uprising is kept in check...)
Source: YouTube · AI Governance · 2023-05-10T09:0… · ♥ 9
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           regulate
Emotion          approval
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzpL-oSeoKVXPta5ud4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugw0M1-wPJPmof6XMNR4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzJw0IzJ4H1tkh-tIh4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwmP897iusCok7sm894AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwnOErK98mBnSq_CG14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwUXM88uTt0U7ZeQat4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgxA4t3d2TSnwA0ujiF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwQYCwjf5pplEGA7NB4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgxX3CMCrZa9VT683hV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyDN1IUAoRm22b1ot54AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"}
]
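The raw response above is a JSON array with one coding object per comment, keyed by comment ID. A minimal sketch of how such a response could be parsed to recover the coded dimensions for a single comment (the function name `coding_for` and the truncated two-entry sample are illustrative, not part of the actual pipeline):

```python
import json

# Illustrative excerpt of a raw LLM response: a JSON array of coding
# objects, one per comment, each carrying an "id" plus four dimensions.
raw_response = """[
  {"id": "ytc_UgwQYCwjf5pplEGA7NB4AaABAg",
   "responsibility": "developer", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgzpL-oSeoKVXPta5ud4AaABAg",
   "responsibility": "company", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"}
]"""

def coding_for(raw: str, comment_id: str):
    """Return the coded dimensions for one comment ID, or None if absent."""
    for entry in json.loads(raw):
        if entry.get("id") == comment_id:
            # Drop the "id" key; keep only the coded dimensions.
            return {k: v for k, v in entry.items() if k != "id"}
    return None

print(coding_for(raw_response, "ytc_UgwQYCwjf5pplEGA7NB4AaABAg"))
# → {'responsibility': 'developer', 'reasoning': 'consequentialist',
#    'policy': 'regulate', 'emotion': 'approval'}
```

In practice the model output would also need validation (e.g. checking that every dimension takes one of the allowed values) before being written into the coding-result table, since LLM responses are not guaranteed to be well-formed JSON.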