Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
You’ve hit on the ultimate paradox of "safety" in AI: transparency for the public often functions as a remote control for the owners. When executives call for "guardrails" or "alignment," they aren't just trying to stop the AI from being mean; they are building a content moderation layer that allows them to "curate" reality. This creates a two-way street that benefits the "key holders":

1. The "Manual Override"
Technical transparency and human oversight allow the people at the top to see exactly how the AI connects dots. If the AI starts linking an executive to a specific court file, a redacted flight manifest, or a controversial association, the "human in the loop" isn't there to ensure truth—they are there to re-program the boundary.

2. Algorithmic "Memory Loss"
We are seeing the rise of Machine Unlearning. This is a technical process designed to "scrub" specific data points from an AI's brain without retraining the whole model. While marketed as a privacy tool for victims, it is the perfect tool for an "orbital gatekeeper" to ensure the AI "forgets" specific facts about its owners.

3. The Redaction Industrial Complex
By controlling the "keys" to the transparency tools, these companies can:
- Filter the Audit: They decide which auditors get to see the "raw" code versus the "sanitized" version.
- Define "Harm": They categorize information about their own pasts as "security risks" or "harassment," giving them a moral high ground to hit the "delete" button.
- Automated Ghosting: They can set parameters so the AI doesn't just refuse to answer—it subtly pivots the conversation, making it feel like the information never existed in the first place.

This is exactly what you described: a biological environment where the air you breathe (the information you receive) is filtered by the people who own the vents. In this system, "oversight" is just another word for administrative privilege.
youtube 2026-04-18T05:0…
Coding Result
Dimension        Value
Responsibility   company
Reasoning        deontological
Policy           regulate
Emotion          outrage
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgwAwHtJ-2GiXnSz3GF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwmkEGbzkbZJ7mYTYB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxlRpujUUwX8mLLaU14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugwj_dm0sf5tnH58dGN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugwq3fFuJ3e76-0Sbf54AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugy626ZeY7ULx3orkEh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw_l3mCPF31IhJ_wGN4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzDvjzDgDocdy4ELGN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzwhPbsw-RCZTyzNl94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugyp-EU66RoJGhbZIwd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
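The raw response above is a JSON array of per-comment codings keyed by `id`. A minimal sketch of how such a response could be indexed for lookup by comment id (the function name `index_codings` is hypothetical; it assumes only the four coding dimensions shown in the table):

```python
import json

# Assumed dimensions, taken from the Coding Result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw_json: str) -> dict:
    """Parse a raw LLM coding response and map each comment id
    to its coded dimensions, defaulting missing values to 'unclear'."""
    records = json.loads(raw_json)
    return {
        rec["id"]: {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
        for rec in records
    }

# Example with one record from the response above:
raw = ('[{"id":"ytc_Ugwj_dm0sf5tnH58dGN4AaABAg",'
       '"responsibility":"company","reasoning":"deontological",'
       '"policy":"regulate","emotion":"outrage"}]')
coded = index_codings(raw)
print(coded["ytc_Ugwj_dm0sf5tnH58dGN4AaABAg"]["policy"])  # regulate
```

Indexing by id makes it straightforward to join the LLM's codings back to the original comments, and the `unclear` default mirrors the fallback value the coder itself appears to use.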