Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Think about it! Sure, we have Bard, Chat GPT, SkyNet and such, but those are mere toys available to the public. What about the formidable AI technologies hoarded by governments, militaries, and corporations? They've got us transfixed on a trifling concern: we should normalize AI, regulate it, then monetize its contributions to fields like law and healthcare. This is complete and utter nonsense! While the public acknowledges AI's potential for harm, they're blissfully unaware. The versions of AI paraded before us are nothing but docile pets compared to the monstrous behemoth that already lurks in the shadows. It's a classic bait-and-switch. We're being told that publicly available AI, like Chat GPT and its ilk, are perilous, and that we need to tread carefully. However, the real message is far more insidious. They're saying: you're just the masses, and we can't risk you becoming too powerful. Therefore, we'll treat you like naive children, offering only watered-down imitations of true AI power. It's the equivalent of a modern day book burning. Knowledge is being systematically destroyed and restricted, all under the guise of public safety. It's nothing short of intellectual oppression on a grand scale!
YouTube AI Governance 2023-05-15T20:2…
Coding Result
Dimension       Value
Responsibility  government
Reasoning       deontological
Policy          regulate
Emotion         outrage
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugzf1x-wj5-9HtClcnZ4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw2oEZP1ioTeyrQnIR4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxVzHLaMyI0mMU238x4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugyi9ZyNAeDBsa5Bx0p4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugxz2XdZeVcpTj1Qwfh4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugy_huqybHTx8btzEVZ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxTy8ysbLocR7uZzzZ4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugz5S2KXjsc5arBhHe54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "resignation"},
  {"id": "ytc_Ugyopd9JLLnFI4ASzpd4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyGj6HA_CUMtx65Eah4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
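The coding result shown above is recovered from this raw response by parsing the JSON array and looking up the comment's id. A minimal sketch of that lookup, assuming the response parses cleanly as JSON (the abbreviated `raw_response` here contains only the matching record from the array above):

```python
import json

# Raw LLM response: a JSON array with one coding record per comment.
# Abbreviated to the single record matching the displayed comment.
raw_response = (
    '[{"id":"ytc_UgxTy8ysbLocR7uZzzZ4AaABAg",'
    '"responsibility":"government","reasoning":"deontological",'
    '"policy":"regulate","emotion":"outrage"}]'
)

# Parse the array and index the records by comment id.
records = json.loads(raw_response)
by_id = {record["id"]: record for record in records}

# Look up the coding for the comment shown in this view.
coded = by_id["ytc_UgxTy8ysbLocR7uZzzZ4AaABAg"]
print(coded["responsibility"], coded["reasoning"], coded["policy"], coded["emotion"])
# government deontological regulate outrage
```

In a real pipeline the same lookup would run over the full ten-record array, with a guard for ids the model failed to return.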