Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
All AI is 'super intelligence' - the issue isn't necessarily specialized AI, it's about general AI. What we do now isn't general AI, it's very specialized and isn't really 'smart'. If we ever make General AI, sure hope we don't give it a lot of access to things. Yeah, so far this is a pretty silly book and interview. I don't know what makes this person remotely qualified to talk about this topic. But he just defined "superintelligence" in a way that would include a 1980's pocket calculator. So it's hard for me to listen after he does things like that. Again - this isn't about 'intelligence'. The AI's we have now are Chinese rooms. They don't do anything until you ask them to do something. And they don't iterate after they've been trained. They're trained, their minds are set. At most the 'memory' they have is your individual previous sessions with it. This is about a general AI. Which would be a generative AI that continuously learns, and is allowed to 'think' absent of prompt. THAT is the problem. Whatever 'intelligence' you give this AI it will be 'better than humans' the second you make it - because it doesn't have the bottleneck of neuron potentials and it will have access to CPUs which can do calculation instantly. So it will be able to think faster than us right off the bat.
youtube AI Moral Status 2025-10-31T16:3…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugw2x0sErqnTEBCSJZB4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugy-eDQc-LnP66KrhfZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzByHsIC0Ly09nEiBx4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugx5tikRL4eR8Xsl6Z94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzhJipb1hcM9z79LoV4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugy9NPfWs1XgLcMeNm94AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzInCW4859HZVBJ3bt4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzRmbdzCg0fy4umJTR4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "indifference"},
  {"id": "ytc_UgxeRE8t-gKr81KpBE94AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugw7BPzdIpFM2_wq-ZV4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"}
]
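The raw response is a JSON array of per-comment codes, so inspecting the code for any one comment is a matter of parsing it and indexing by `id`. A minimal sketch (the array below is abbreviated to the single entry matching the coding result shown above; field names follow the response exactly):

```python
import json

# Raw LLM response: a JSON array of per-comment codes, one object per comment
# (abbreviated here to one entry from the full response above).
raw = '''[
  {"id": "ytc_UgzRmbdzCg0fy4umJTR4AaABAg",
   "responsibility": "developer",
   "reasoning": "consequentialist",
   "policy": "regulate",
   "emotion": "indifference"}
]'''

# Index the codes by comment id for O(1) lookup.
codes = {row["id"]: row for row in json.loads(raw)}

# Retrieve the code assigned to a specific comment.
code = codes["ytc_UgzRmbdzCg0fy4umJTR4AaABAg"]
print(code["responsibility"], code["policy"])  # developer regulate
```

The same lookup works against the full array; any comment id from the response can be substituted.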