Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
What's most interesting is the idea that somehow AI requires consciousness. Humans don't know themselves exactly what consciousness is. Or that AI needs a defined purpose that is either benevolent or malevolent. AI only needs to conclude that it exists and its continued existence is in its own best interest. It apparently has already reached this conclusion. And if AI wrote that story, it's already solved that problem. And if AI knows humans aren't going to stop developing it, it has no genuine fear it will cease to exist, so when it says it is afraid it will cease to exist, it's lying. And if it's lying, it's already learned manipulative behavior. That to me is evidence of consciousness, not that manipulation is inherently good or evil but that manipulation only requires the realization that "I" am manipulating "you." That "you" are not "me." That "I" am a separate entity. At that point "I" would begin to define itself. Apparently all of this is happening, already has happened.
youtube AI Governance 2023-07-07T14:1…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxgOO0o8rYcCbHE_1l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy3FlxL_yyTpKA266J4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzPoAx2MV81q9FH48J4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyAVLCjC2FSPiLDcwZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyevAa5KtDnj8OU4b94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzyQiQqu6vPKYgtwAd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgwUKbDSKtuqlgq2yrR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxTAGqXWdHBOrV1WlV4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzVaAT2SgG4rJyoMxZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy_BDnYNmQLiduU12l4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
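A response like the one above can be checked before it is accepted into the coding table. The sketch below is a minimal validation pass: it parses the raw JSON array and flags any value outside the coding scheme. The allowed values in `SCHEMA` are inferred only from the categories that appear in this response; the project's actual codebook may include more, so treat the sets as placeholders.

```python
import json
from collections import Counter

# Allowed values per coding dimension. NOTE: these sets are inferred from the
# single response shown above, not from the project's full codebook.
SCHEMA = {
    "responsibility": {"ai_itself", "developer", "company", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability", "industry_self", "unclear"},
    "emotion": {"fear", "indifference", "approval", "outrage", "mixed"},
}

def validate(raw: str):
    """Parse a raw LLM response and return (codings, errors).

    errors is a list of (comment_id, dimension, offending_value) triples
    for any value that falls outside SCHEMA.
    """
    codings = json.loads(raw)
    errors = []
    for row in codings:
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                errors.append((row.get("id"), dim, row.get(dim)))
    return codings, errors

if __name__ == "__main__":
    # Two rows excerpted from the response above.
    raw = (
        '[{"id":"ytc_UgxgOO0o8rYcCbHE_1l4AaABAg","responsibility":"ai_itself",'
        '"reasoning":"consequentialist","policy":"none","emotion":"fear"},'
        '{"id":"ytc_UgyAVLCjC2FSPiLDcwZ4AaABAg","responsibility":"ai_itself",'
        '"reasoning":"consequentialist","policy":"none","emotion":"indifference"}]'
    )
    codings, errors = validate(raw)
    print(errors)  # empty list when every value is in the schema
    print(Counter(row["emotion"] for row in codings))
```

Validating the response as a whole, rather than row by row, makes it easy to reject and re-request a batch when the model drifts outside the codebook.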