Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I do think the best way is a global committee, each nation represented by 1 scientist, 100% consensus required, they have the full responsibility of regulating time, how far progress etc, that's unlikely to happen. We can all contact politicians but I disagree, I do think as a collective we can self-regulate others, YouTube for example, report those that are promoting hate, division, if someone's using AI generated content and is not using a disclaimer, report them, if you feel comfortable and think someone is not doing the right thing but their intent isn't bad, then reach out to them and convey your thoughts respectfully. YouTube is making an attempt to regulate these types of things, I don't believe that the intent behind the algo rhythm is driven necessarily by money, more so in providing the viewer with content that is of interest to them, it's about keeping the customer happy and repeat business, same with Google search. I use paid Google search ads plus have a YT channel, what Google wants from me is to be a reliable source and one that gives the customer what they're after as easily as possible (that's where SEO comes in, sites that are slow to load or being directed to a site that has nothing to do with the search, all looked down on), the better that you can do these things, the better the traffic you get, lowering your costs. Ultimately their great success comes from looking after the customer first and foremost, same as any successful business. I don't think that sitting back and doing nothing is our only choice. In 5 years' time if I looked back and did nothing, I'd regret it, alternatively, at least I would know I tried.
youtube AI Governance 2025-12-15T15:2…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        contractualist
Policy           regulate
Emotion          approval
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_Ugy_KJIF54nsX8ABnjZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxiYPwDd-RenHY92VF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugz0BIHI03t4YPfFvKV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz7cny-ciazG9sDmSB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzShTbphvk_W9sNsb54AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyNx2T5ACixVQQWaLV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwaA-f3MSN4rdh7nsJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyEgsds7AJrSt6_6S54AaABAg","responsibility":"government","reasoning":"mixed","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzpBbZxCvYoPUCBDLF4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugz10pPlxxlVr1qedGV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"fear"}
]
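Before a raw response like the one above is written into the results table, it helps to validate it programmatically. The sketch below is a minimal, hypothetical validator: the dimension vocabularies are inferred only from the values observed in this batch (the real coding scheme may include labels not seen here), and the function name `validate_batch` is an illustrative choice, not part of the tool.

```python
import json

# Dimension vocabularies inferred from the values observed in this batch;
# the full coding scheme may define additional labels (an assumption).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "user", "distributed", "government"},
    "reasoning": {"unclear", "consequentialist", "deontological",
                  "contractualist", "virtue", "mixed"},
    "policy": {"unclear", "none", "regulate"},
    "emotion": {"indifference", "fear", "approval", "mixed"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every record against the vocabularies.

    Raises ValueError on a malformed record so a bad coding fails loudly
    instead of being silently stored.
    """
    records = json.loads(raw)
    for rec in records:
        if not rec.get("id", "").startswith("ytc_"):
            raise ValueError(f"missing or malformed comment id: {rec!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {rec.get(dim)!r}")
    return records

# One record from the batch above, passed through the validator.
raw = ('[{"id":"ytc_UgzpBbZxCvYoPUCBDLF4AaABAg","responsibility":"distributed",'
       '"reasoning":"contractualist","policy":"regulate","emotion":"approval"}]')
batch = validate_batch(raw)
print(batch[0]["policy"])  # regulate
```

Failing loudly at this boundary matters because LLM output is not guaranteed to follow the prompt's label set; an off-vocabulary value caught here is one coding error that never reaches the aggregated results.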