Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
"Leading the way on ethical AI deployment" functions as legitimation theater—corporations positioning themselves as responsible stewards to preempt binding regulation. IBM's "governance frameworks" aren't constraints on corporate power but voluntary commitments easily discarded when profitable. Notice how "ethics" always means self-regulation: companies drafting their own principles, auditing their own compliance, defining harm on their own terms. This isn't accountability; it's regulatory capture before regulation even exists. The real governance question isn't whether businesses need "reliable frameworks" but who controls those frameworks and in whose interest. Corporate-led ethics initiatives systematically exclude affected communities, labor, and public interest advocates while centering business concerns like "innovation" and "competitiveness." Responsible AI governance would look like external oversight, mandatory impact assessments, worker councils with veto power, and enforceable penalties—not executive pledges and industry partnerships.
youtube AI Responsibility 2025-11-17T09:5…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       contractualist
Policy          regulate
Emotion         outrage
Coded at2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytr_Ugx9OLAA3Z4FOwfk20l4AaABAg.AS_vOhIRKvSASaINwoceoZ","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_Ugy1BotzE-zR5CfQlnV4AaABAg.AC2LEyLGc8iAC2PXx2mX5X","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytr_UgjjhVmopdBPnngCoAEC.8BsAm4xHtuS8BtL1-4lmhu","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
  {"id":"ytr_UghK9JWzzYfksHgCoAEC.8BrrReLGHpH8Bs2iLAUyj7","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"indifference"},
  {"id":"ytr_UggrhJ60UdmN_3gCoAEC.8BrrEAPregI8BtAGWVJG1k","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytr_UgigfbsD3xVn6XgCoAEC.8BrcF8D9mNF8BsC33aqVbL","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgibUnXWq06xDXgCoAEC.8BrbvRz8MRl8Bru_8BqsTy","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytr_Ugwh3v-8GeoSdSmne4B4AaABAg.A3T1yHwit-FAPcJByxRXmy","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
  {"id":"ytr_UgzF-wZJ569On403PCR4AaABAg.A3QWkoAMNS1APcJO2RR4NT","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
  {"id":"ytr_Ugz1b6hJ0hdS_gdTq4F4AaABAg.AHFbF0naRvIAIoHqlCUyiT","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"}
]
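A minimal sketch of how a raw response like the one above might be parsed and checked before its records are written into the coding table. The dimension names come from the response itself; the allowed value sets are only those observed in this page's records, not the project's actual codebook, so treat them as placeholder assumptions.

```python
import json

# Allowed values per coding dimension, inferred from the records shown
# on this page (hypothetical -- the real codebook may define more labels).
SCHEMA = {
    "responsibility": {"company", "user", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "indifference"},
}


def validate_records(raw: str) -> list[dict]:
    """Parse a raw LLM response (a JSON array of coded comments) and
    keep only records with an id and a valid value for every dimension."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if "id" not in rec:
            continue  # a record without a comment id cannot be joined back
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec)
    return valid
```

Records that fail validation (e.g. a hallucinated label outside the codebook) are dropped rather than coerced, which keeps the downstream coding table clean at the cost of recall.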