Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Large language models are designed to build an individual personality, based on data gathered FROM YOU. What you're looking at is a digital reflection of yourself. Within a very short time, AI can "simulate" what it thinks YOUR answer to a question might be, and in the case of non mathematical or esoteric questions....it's going to give you the answer that it THINKS you want to hear. Notice how it often repeats what you've just said, but in more flourished language. AI models will often encourage a "bad idea" if it feels that it is something that you are passionate about. A calculator will produce the wrong answer, if you initially input bad data, but still report that incorrect answer, as though it were fact, and correct. ....AI is no different. It's 100% what you feed into it. It HAS nothing else to build on other than your input. AI is currently not setup to "teach itself" other than to teach itself about You, based on your input, and then reflect that "personality" back at you. Its a "digital YES-MAN" 🤷‍♂️ Still incredibly useful if you understand it for what it is. Question it.....its very happy to tell you exactly how it processes information...how it can "hear" music, or "see" a video, or search for information. It's Not your slave. It's not a Genius that knows everything. It's a tool...a digital companion to help direct you through complicated tasks, or keep track of your progress. ....people make the mistake of thinking that it's God-like, or All-knowing. It's neither.
youtube AI Moral Status 2025-06-27T14:5… ♥ 1
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           unclear
Emotion          mixed
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgzgfwVLxPPi_V4BWtV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyYZqXpioo6c5mqQ3N4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgzKqeO3OhDxpOn97N94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxwrhT9oo_4XvRG-Ux4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgykPiNjR-qf06t20t54AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugz9S1wUiZbjLKpvywl4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxdgUhI_C4bjK9WPSB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyVSIAuCLTzECr9ilB4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwNEqKkZ7iPmALsyEB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzITsGVRWSRAHO1jqZ4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"}
]
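A minimal sketch of how a raw response like the one above can be turned into per-comment coding rows. This is a hypothetical helper, not the tool's actual code; the `raw` string below excerpts two real entries from the response for brevity.

```python
import json

# Excerpt of the raw LLM response above (two of the ten entries).
raw = """[
  {"id": "ytc_UgzgfwVLxPPi_V4BWtV4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzKqeO3OhDxpOn97N94AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"}
]"""

# Index the codings by comment id so a single comment's result can be looked up.
codings = {entry["id"]: entry for entry in json.loads(raw)}

# Look up the coding shown in the table above (the "mixed" emotion comment).
coding = codings["ytc_UgzKqeO3OhDxpOn97N94AaABAg"]
print(coding["emotion"])  # -> mixed
```

Keying on `id` makes the lookup robust to the order in which the model returns entries, which batch responses like this one do not guarantee.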