Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Firstly, i would propose two general factoids; 1 given how low the bar is actually set with baseline-average human intelligence / consciousness (i mean really, have you talked with people out there, post-9-11 and post-covid ? Have you seen what's been going on ?), and 2 given how we are arguably and demonstrably already so very close, with just the latest iterations of chat-gpt (gpt-4 and gpt-5 ?) and other publicly known about 'AI-ish' algorithms-paired-with-archives-based apparatuses - - Then from these 2 factoids, i think that it is kind of a case, that since we have not been able to discover the secrets and frameworks of our own consciousness, then By Intrinsic Definition, we ourselves are not advanced and capable enough to empirically say what is and is not actual AI. IMO, this also gives possible credit to the proposal (conspiracy theory, if one must call it that) that we as humans, did not come to this fascination of and development towards the invention of AI, on our own (no matter how it is made to seem that way), but that our decisions, developments, and implementations as a still-feudalistic society, was somehow carefully guided by ..... Something Else, so that we would craft and develop what looks to be our own replacements, in every measurable sense of the phrase.
youtube AI Moral Status 2024-06-10T10:0…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           none
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgyZX-6c3TbS4CCas4Z4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugwi1Ms13N8U_gVSPyB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzWNdY4FwBQoUCS0j14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugwn-G_2ql-D-9Oee6h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw2TCbb-9xaUdmXaKR4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyXyLk0Kcft6V46FWZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugx3r-ZLheBX-HK3OFJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzFAsteUOX71yUD2Mp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyZFVurZwWb_aIsnPx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxCdtMz2uVPbTxGWh94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}
]
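The raw response above is a JSON array with one object per coded comment, keyed by a `ytc_…` comment id and carrying the four coding dimensions (responsibility, reasoning, policy, emotion). A minimal Python sketch of how such a response could be parsed and indexed for lookup; the field names come from the response itself, while the variable names and lookup approach are illustrative assumptions, not the pipeline's actual code:

```python
import json

# Truncated sample of the raw LLM response shown above: a JSON array of
# per-comment coding objects (one id from the full output, for brevity).
raw_response = (
    '[{"id":"ytc_UgyZX-6c3TbS4CCas4Z4AaABAg",'
    '"responsibility":"none","reasoning":"mixed",'
    '"policy":"none","emotion":"mixed"}]'
)

# Parse the JSON array into a list of dicts.
codes = json.loads(raw_response)

# Index the codes by comment id so any coded comment can be inspected
# directly, as the instruction at the top of this section suggests.
by_id = {entry["id"]: entry for entry in codes}

code = by_id["ytc_UgyZX-6c3TbS4CCas4Z4AaABAg"]
print(code["reasoning"])  # mixed
print(code["emotion"])    # mixed
```

In a real pipeline the parse step would likely also validate that each object contains all four dimension keys before storing the result, since LLM output is not guaranteed to be well-formed JSON.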