Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
it always amazes me every time i see anything regarding "AI" or "autonomous"...i cannot believe people actually think any of this will actually happen! First off: we have ABSOLUTELY no idea how the human brain actually works, nor do we even understand what intelligence actually even is.....and despite all of everything that that entails, people rush out and believe that they can apply it to a computer???? really???? The way a computer actually works isn't even in the same league of how a brain or nervous system functions, and everyone is hell bent on jumping to the conclusion that they will somehow some day be able to just "achieve consciousness" , overtake us, and destroy all of humanity?....We STILL cant even come up with a solid theory of what "consciousness" even is!!! If a computer program misses even a single DIGIT of its code while running, it can and probably will, eventually malfunction and/or lock up...IT HAPPENS ALL THE TIME, and requires a complete shutdown and restart. To imply that the way a computer processes information is the same way a human brain and intelligence works is not only ridiculous, but also demonstrates a severe ignorance in both the computer sciences as well as biology. When the world actually figures this concept out, and science takes a real hard look at how the mind actually works, THEN and only then, will AI be plausible. Im sorry, but a bunch of millennial computer geeks coding some "algorithms" over at google will never be able to make the terminator....sorry. Its often forgotten that the concept of AI has been around since the 60s and it STILL hasn't happened...its always "right around the corner"....yeah....sure.
youtube 2018-11-27T12:0…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          none
Emotion         outrage
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgyrplhsBT3nFWgCI9B4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyksSvqXMK_yGMUxGh4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugzm7evec27OcKA8eOB4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzPFsAYP-e5Vuw8fBV4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugz70OAVGz0ggTxQuZt4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy7jIJbwjjOYNVRtf94AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugw6_kKDHyaWdkLVhVt4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgxOvlAcjFraYeFgGad4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxbcmSie5k5VI-jOWB4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugzp8zSaWb45ssGAl994AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]
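The raw response is a JSON array of per-comment records, one per coded comment, each carrying the four coding dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how one comment's codes could be extracted from such a response — the `codes_for` helper is illustrative, not part of the actual pipeline, and the snippet assumes the model returned valid JSON (a real pipeline would need error handling for malformed output):

```python
import json

# A trimmed stand-in for the raw LLM response shown above: a JSON array
# of coding records, one per comment, keyed by comment id.
raw_response = """[
  {"id": "ytc_Ugzp8zSaWb45ssGAl994AaABAg",
   "responsibility": "developer", "reasoning": "deontological",
   "policy": "none", "emotion": "outrage"}
]"""

def codes_for(raw: str, comment_id: str) -> dict:
    """Return the coding record for one comment id, or {} if absent."""
    records = json.loads(raw)
    return next((r for r in records if r["id"] == comment_id), {})

# Look up the record that matches the Coding Result table above.
codes = codes_for(raw_response, "ytc_Ugzp8zSaWb45ssGAl994AaABAg")
print(codes["emotion"])  # outrage
```

This is why the comment above was coded developer / deontological / none / outrage: the last record in the array carries that comment's id, and the table is simply a rendering of that record.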