Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Talking about what we do not understand. I agree with Wolfram that we tend to anthropomorphise, and in fairness we do our best to make computers appear to be like us, right down to humanoid robots. It is difficult to look too far into the future, but to my mind there are two serious problems. The first is that we will deskill ourselves so much that many people will become totally dependent on tech. The second has already happened with the computerisation of the stock market: you automate something that is told to do a specific thing in a specific situation, but you have not foreseen a positive feedback loop that will do what you do not want and devalue the market in seconds. In such a situation someone presses a kill switch, but it might be more dangerous with, say, automated warfare. I suppose this is an example of Stephen Wolfram's computational irreducibility - the inductive process that has to be run to find out where the glitch is. Previously, say when writing code for nuclear reactor control, very extensive testing of the programme would be carried out, and of course we already have this capability. I suppose what I fear (anthropomorphising!) is an overconfident Dunning-Kruger effect in a super smart system that is not quite as smart as it needs to be.
Source: youtube | AI Governance | 2024-12-09T23:5…
Coding Result
Dimension        Value
---------------  --------------------------
Responsibility   user
Reasoning        consequentialist
Policy           unclear
Emotion          mixed
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytc_Ugzyw7P6UIG7qr9orm94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugz5qfO2p5ouopqxF9J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugw5jx3JN_iJjVdgF-V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgxgabcdIuRhNkDAGoZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzK0cxdklJv4XjEKQV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_Ugwk38JoiF5nupttEiV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"outrage"}, {"id":"ytc_UgxUpWrqOtfeJUqbHoB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugw9Yn37_qtH16HPxL54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgzRiCvRXTjY9wSaOpB4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_UgxrxwC9GQeGPZSOxHV4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"mixed"} ]