Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Here's what I think is needed for ethical AI

1. HEAVY regulation of generative models. I'm not talking fines for using unlicensed content, I'm talking full deletion of the model. Imagine if every time Midjourney got caught training it's models on copyrighted content, the courts had the power to hit Ctrl+A, Del on that model, setting it back to square one. That'd hit LLMs where it hurts.

2. Compensation schemes for licensed content. I'd be down with a revenue sharing system where everyone who contributes to an LLM gets a share of the profits. Sure, it'd probably be microscopic, but it'd be fair at least. Or a one-time sale option where you can sell your content to a training model. Heck, maybe even LLM brokers, where you can sign up, upload your content, and then they sell their content collections to the LLMs and you get a percentage of the sales.

3a. LLM server farms being restricted to ONLY running on renewables and MUST generate a percentage more than they need, which is then exported to the local grid to support the communities. or

3b. LLM server farms can only be run in environments where cooling is provided by the natural climate (and still need to use renewable only power). Think geothermally powered farms in Norway or Greenland or Siberia. Basically zero environmental impact (this should be applied to all server farms above a certain size IMO)

4. All LLMs/Gen AI need to hard-code bake in a digital watermark into their content so AI generated content can be detected and filtered. This is in their interests too because they don't want to inbreed their models by training them on other AI slop.
Source: YouTube · Viral AI Reaction · 2025-11-11T10:4… · ♥ 3
Coding Result
Dimension        Value
---------------  --------------------------
Responsibility   company
Reasoning        consequentialist
Policy           regulate
Emotion          outrage
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugz9zljtBjrOQXIn--J4AaABAg", "responsibility": "none",      "reasoning": "deontological",    "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_UgyoAnt8jin_JtSzfcx4AaABAg", "responsibility": "company",   "reasoning": "consequentialist", "policy": "regulate",      "emotion": "outrage"},
  {"id": "ytc_UgwCpZCmzOOKD0JnwLN4AaABAg", "responsibility": "unclear",   "reasoning": "unclear",          "policy": "unclear",       "emotion": "indifference"},
  {"id": "ytc_Ugy3UHHaE8MZwAC7mIZ4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",          "emotion": "resignation"},
  {"id": "ytc_Ugx8yHp9R522SoQNhU54AaABAg", "responsibility": "none",      "reasoning": "deontological",    "policy": "liability",     "emotion": "approval"},
  {"id": "ytc_UgwBcBnZq1bjIDCx8R14AaABAg", "responsibility": "user",      "reasoning": "unclear",          "policy": "unclear",       "emotion": "mixed"},
  {"id": "ytc_Ugxf-WZkm-yg4gZG_ml4AaABAg", "responsibility": "company",   "reasoning": "consequentialist", "policy": "ban",           "emotion": "outrage"},
  {"id": "ytc_Ugzg79LFQp-4zHYe9Ml4AaABAg", "responsibility": "user",      "reasoning": "virtue",           "policy": "industry_self", "emotion": "resignation"},
  {"id": "ytc_UgxDWvAbuZRpGaz7fnZ4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological",    "policy": "ban",           "emotion": "outrage"},
  {"id": "ytc_UgxNf2l_utWXcbKSdFZ4AaABAg", "responsibility": "user",      "reasoning": "consequentialist", "policy": "regulate",      "emotion": "mixed"}
]
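Because the raw response is just a JSON array of per-comment codes, it can be validated and indexed programmatically before inspection. A minimal sketch in Python: the category sets below are inferred solely from the values visible in this one response, so the project's actual codebook may define additional categories, and `parse_coding_response` is a hypothetical helper name.

```python
import json

# Allowed values per dimension, inferred from this single response;
# the full codebook may include categories not seen here.
SCHEMA = {
    "responsibility": {"none", "company", "user", "ai_itself", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"none", "regulate", "liability", "ban", "industry_self", "unclear"},
    "emotion": {"indifference", "outrage", "resignation", "approval", "mixed"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response and index the records by comment id.

    Raises ValueError if a record is missing a dimension or uses a value
    outside the known schema, so malformed outputs fail loudly instead of
    silently entering the analysis.
    """
    coded = {}
    for rec in json.loads(raw):
        comment_id = rec["id"]
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{comment_id}: bad value for {dim!r}: {rec.get(dim)!r}")
        coded[comment_id] = {dim: rec[dim] for dim in SCHEMA}
    return coded

# Usage with one record from the response above:
raw = ('[{"id":"ytc_UgyoAnt8jin_JtSzfcx4AaABAg","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}]')
coded = parse_coding_response(raw)
# coded["ytc_UgyoAnt8jin_JtSzfcx4AaABAg"]["policy"] == "regulate"
```

Validating against a closed category set is deliberate: LLM coders occasionally emit labels outside the codebook, and rejecting those records at parse time keeps the downstream counts clean.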