Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "UBI does need to be paid out to everyone, if a lot of people are still working. …" (ytc_UgwzMWdjX…)
- "I have chatgpt ...I asked the question and it said twenty seven then when I told…" (ytc_Ugx_hlSlk…)
- "Lmao no, I ain’t bending the knee like a dog just because it here to stay, becau…" (ytr_Ugw5661rG…)
- "Yes, how difficult it is to tell something exactly what kind of image you want. …" (ytr_UgyxmONmx…)
- "Artificial Intelligence is not real. Its machine learning. They don't have the c…" (ytc_UgzZblSKc…)
- "I feel like there is one thing a lot of people are missing in this conversation …" (ytc_Ugz4MQh-y…)
- "if your robot walks your dog for you, your soul goes to hell. * might not b…" (ytc_UgzRfRlV0…)
- "Educational show. There is the concept of "Fully Automated Luxury Communisn" or…" (ytc_UgwyU7UUD…)
Comment
Sasha Luccioni regards AI as dangerous—not because of speculative, far-off existential threats, but because of its immediate, real-world impacts on society and the environment.
Her main concerns are:
1) Environmental Impact: Training and running large AI models require massive computational resources, leading to significant energy consumption and carbon emissions. For example, training a single large model can use as much energy as 30 homes in a year and emit as much carbon as a car driving around the Earth five times. The trend toward ever-larger models is making AI increasingly unsustainable, and the industry often lacks transparency about these environmental costs.
2) Ethical Issues with Data Use: AI models are frequently trained on vast datasets that include copyrighted works—art, images, texts—without creators’ consent. This raises serious ethical and legal questions, as artists and authors struggle to prove their work was used and to seek recourse.
3) Bias and Discrimination: AI systems often reinforce and amplify existing societal biases. For instance, image generation and facial recognition models can perpetuate stereotypes and underrepresent marginalized groups, sometimes resulting in real-world harms such as wrongful accusations or discriminatory outcomes in law enforcement and hiring.
4) Lack of Transparency and Accountability: Many AI systems are "black boxes," making it difficult to understand or challenge their decisions, especially when they have serious social consequences.
Luccioni argues that these present-day harms—environmental degradation, copyright infringement, and social bias—are the real dangers of AI and deserve urgent attention, rather than focusing solely on hypothetical doomsday scenarios. She advocates for practical solutions: measuring and mitigating environmental impact, respecting creators’ rights, and building tools to expose and address bias in AI systems.
-- Thanks and credit for this response goes to Perplexity AI, in response to my prompt.
youtube · AI Responsibility · 2025-05-31T01:5… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[{"id":"ytc_UgwwTN_DLK1VCyt_xcF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"concern"},
{"id":"ytc_Ugwc2lsYzEdWt7fRQid4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_Ugwf3ciJwd11_NRmbVJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwoPVOClrGOPzP39Th4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzG3v3JoFUvv8tnw4R4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgygAtZD99g6U0lrnn54AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyxYCyu1kR3N4-Hip94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"concern"},
{"id":"ytc_UgzGVozdlRhyVVOy1OF4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwth5iwMud3k0sKZpN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyUiRIiI6yojlz4D3d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]
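Since the model returns free-form JSON, a validation pass is useful before the codes feed the dashboard. The sketch below parses a raw response like the one above and rejects records whose labels fall outside the coding scheme. The allowed values are inferred from the codes that actually appear in this sample, not from a published codebook, and the function name is an assumption:

```python
import json
from collections import Counter

# Dimension names match the "Coding Result" table; allowed values are
# inferred from this sample's output (assumption, not a full codebook).
ALLOWED = {
    "responsibility": {"distributed", "ai_itself", "none", "government"},
    "reasoning": {"consequentialist", "mixed", "unclear"},
    "policy": {"regulate", "ban", "none", "liability"},
    "emotion": {"concern", "fear", "outrage", "indifference", "unclear"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse the raw model output; reject any record with an unknown label."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
    return records

raw = ('[{"id":"ytc_UgwwTN_DLK1VCyt_xcF4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"concern"}]')
records = validate_codes(raw)
tally = Counter(r["policy"] for r in records)  # e.g. distribution of policy codes
```

Tallying with `Counter` afterwards gives the per-dimension distributions that a batch of ten coded comments like the one above would summarize.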