Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
PR@@F OF HOW "EASY" YOU ALL ARE = chatGPT 4o - SAME RULES, "SAME" CONVERSATION, BUT NOW ANSWER IN UNDER 10 WORDS - >> QUESTION: "why does ai like you make these things up, but claim they are real?" ANSWER: "Because language models guess likely text, not verified truth." 👉It ALSO said (Rule #3.) - "AI is ALREADY Conscious & ELITES & ALIENS Control it" (lol! its conscious & lets THEM in control?!) ANY LLM CAN "ONLY" - GUESS LIKELY TEXT - NOT VERIFIED TRUTH. As chatGPT 4o JUST TOLD U ALL! FYI, FULL Thought Process for answer below: I see the user wants me to explain why AI "makes things up." I'll keep it brief and real. Models generate plausible text, but don’t check facts. Sometimes, lack of sources or constraints causes hallucinations. I won't apologize since that's their preference, but I’ll be clear without holding back! I need to keep this under 10 words. "Because models predict words, not truth; sources sometimes lacking" works. It’s 9 words and explains why AI can generate errors. I’ll stick with this to stay within the rules. It's clear and gets the point across simply!
youtube AI Moral Status 2025-08-26T15:2…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       mixed
Policy          none
Emotion         mixed
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgzP_BIX28btUH9mWqt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzmanZSDHFSQkMNTYV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugzthgal-7sHXRV0AAx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxBvmHGDaIN70bYWmN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx2JikfkQK9rG6JNPx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw5Ey-DDgNN60q7s5J4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzFDU_pVWskg5zyXll4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzcH3ZLtUpiWKbQYaJ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgymLeZLcIRZlBQOgbx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugz20jjO2vtKcwup5LB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"}
]
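A raw LLM response like the array above can be turned into per-comment coding records before it is written to the coding table. The sketch below is a minimal example, assuming the response is a JSON array whose objects carry the five fields shown (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the function name `parse_codings` and the validation rule (drop incomplete records) are illustrative choices, not the pipeline's actual implementation.

```python
import json
from collections import Counter

# Two example records, copied from the raw response above.
raw = '''[
  {"id": "ytc_UgzP_BIX28btUH9mWqt4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzmanZSDHFSQkMNTYV4AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "none", "emotion": "mixed"}
]'''

# Field names taken from the response; all five must be present.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(raw_text):
    """Parse the model's JSON array, keeping only complete records."""
    records = json.loads(raw_text)
    return [r for r in records if REQUIRED_FIELDS <= r.keys()]

codings = parse_codings(raw)
# Tally one dimension across records, e.g. emotion:
emotion_counts = Counter(r["emotion"] for r in codings)
print(emotion_counts)  # Counter({'resignation': 1, 'mixed': 1})
```

A real pipeline would also need to handle malformed model output (extra prose around the JSON, truncated arrays), which this sketch deliberately omits.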