Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Sasha Luccioni regards AI as dangerous, not because of speculative, far-off existential threats, but because of its immediate, real-world impacts on society and the environment. Her main concerns are:

1) Environmental Impact: Training and running large AI models require massive computational resources, leading to significant energy consumption and carbon emissions. For example, training a single large model can use as much energy as 30 homes in a year and emit as much carbon as a car driving around the Earth five times. The trend toward ever-larger models is making AI increasingly unsustainable, and the industry often lacks transparency about these environmental costs.

2) Ethical Issues with Data Use: AI models are frequently trained on vast datasets that include copyrighted works (art, images, texts) without creators' consent. This raises serious ethical and legal questions, as artists and authors struggle to prove their work was used and to seek recourse.

3) Bias and Discrimination: AI systems often reinforce and amplify existing societal biases. For instance, image generation and facial recognition models can perpetuate stereotypes and underrepresent marginalized groups, sometimes resulting in real-world harms such as wrongful accusations or discriminatory outcomes in law enforcement and hiring.

4) Lack of Transparency and Accountability: Many AI systems are "black boxes," making it difficult to understand or challenge their decisions, especially when they have serious social consequences.

Luccioni argues that these present-day harms (environmental degradation, copyright infringement, and social bias) are the real dangers of AI and deserve urgent attention, rather than focusing solely on hypothetical doomsday scenarios. She advocates for practical solutions: measuring and mitigating environmental impact, respecting creators' rights, and building tools to expose and address bias in AI systems.
-- Thanks and credit for this response goes to Perplexity AI in response to my prompt
youtube AI Responsibility 2025-05-31T01:5…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           regulate
Emotion          unclear
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgwwTN_DLK1VCyt_xcF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "concern"},
  {"id": "ytc_Ugwc2lsYzEdWt7fRQid4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugwf3ciJwd11_NRmbVJ4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgwoPVOClrGOPzP39Th4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzG3v3JoFUvv8tnw4R4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgygAtZD99g6U0lrnn54AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgyxYCyu1kR3N4-Hip94AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "concern"},
  {"id": "ytc_UgzGVozdlRhyVVOy1OF4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugwth5iwMud3k0sKZpN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgyUiRIiI6yojlz4D3d4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
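The raw response is a JSON array with one object per coded comment, each carrying the four coding dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch could be parsed and tallied in Python; the two short `ytc_a`/`ytc_b` ids and their values are hypothetical stand-ins, not entries from the actual batch:

```python
import json
from collections import Counter

# Hypothetical two-entry batch in the same shape as the raw LLM response above.
raw = (
    '[{"id":"ytc_a","responsibility":"distributed","reasoning":"consequentialist",'
    '"policy":"regulate","emotion":"concern"},'
    '{"id":"ytc_b","responsibility":"ai_itself","reasoning":"mixed",'
    '"policy":"none","emotion":"fear"}]'
)

codes = json.loads(raw)

# Tally each coding dimension across all coded comments.
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    counts = Counter(c[dim] for c in codes)
    print(dim, dict(counts))
```

Each printed line is one dimension with its value frequencies, which is how a per-comment table like the Coding Result above could be aggregated across a whole batch.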