Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The speaker, an AI researcher, received an email from a stranger expressing concern that their work in AI could lead to the end of humanity. AI has become a hot topic, making headlines both for positive contributions such as medical discoveries and for failures such as biased AI systems. The talk argues that the focus should shift from future existential risks to AI's current, tangible impacts on the environment, society, and individuals.

AI models contribute to climate change: training them consumes large amounts of energy and emits significant amounts of carbon, and large language models such as GPT-3 are responsible for substantial emissions. Tools like CodeCarbon estimate and track the energy consumption and carbon emissions of AI models, allowing informed choices about which models to deploy (a usage sketch follows this comment).

Artists and authors face having their work used for AI training without their consent, and tools like "Have I Been Trained?" were created to provide evidence of unauthorized use. Bias in AI systems can have harmful consequences, as shown by cases where biased systems led to false accusations and wrongful imprisonment. The Stable Bias Explorer tool reveals bias in image-generation models through the lens of professions, highlighting the underrepresentation of various groups.

Creating tools to measure AI's impact is crucial for addressing bias, copyright, and climate change, and it empowers users to make informed choices. The speaker emphasizes transparency, governance, and collective decisions in shaping the direction of AI's development. In sum, the talk calls for addressing the current, real-world impacts of AI rather than fixating on distant existential risks.
youtube AI Responsibility 2023-11-09T00:1…
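As a rough illustration of the emissions tracking mentioned in the comment, here is a minimal sketch using the codecarbon Python package; the `train` function and `project_name` are placeholders, not anything from the talk, and constructor options may vary across codecarbon versions.

```python
# Minimal sketch: measuring the carbon footprint of a compute job
# with codecarbon. train() is a hypothetical stand-in for a real
# model-training loop.
from codecarbon import EmissionsTracker

def train():
    # Placeholder workload; substitute actual training code here.
    return sum(i * i for i in range(10_000_000))

tracker = EmissionsTracker(project_name="demo-training-run")
tracker.start()
try:
    train()
finally:
    emissions_kg = tracker.stop()  # estimated kg of CO2-equivalent

print(f"Estimated emissions: {emissions_kg:.6f} kg CO2eq")
```

The tracker samples hardware power draw while the wrapped code runs, which is how such tools let you compare the footprint of candidate models before deploying one.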
Coding Result
Dimension      | Value
---------------|---------------------------
Responsibility | none
Reasoning      | consequentialist
Policy         | none
Emotion        | indifference
Coded at       | 2026-04-27T06:24:53.388235
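The four coding dimensions take categorical labels. A sketch of the schema as Python enums, with the value sets inferred from the raw responses below; the actual codebook may define additional labels, so treat these sets as an assumption.

```python
# Hypothetical reconstruction of the coding schema; label sets are
# inferred from the raw LLM responses shown below.
from enum import Enum

class Responsibility(Enum):
    NONE = "none"
    AI_ITSELF = "ai_itself"
    DISTRIBUTED = "distributed"
    SOCIETY = "society"
    USER = "user"

class Reasoning(Enum):
    CONSEQUENTIALIST = "consequentialist"
    DEONTOLOGICAL = "deontological"
    MIXED = "mixed"
    UNCLEAR = "unclear"

class Policy(Enum):
    NONE = "none"  # only value observed in this batch

class Emotion(Enum):
    INDIFFERENCE = "indifference"
    FEAR = "fear"
    OUTRAGE = "outrage"
    APPROVAL = "approval"
    MIXED = "mixed"
```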
Raw LLM Response
[{"id":"ytc_UgziHLQc_W-hzbcZxQx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},{"id":"ytc_UgzrtIen2bQFgvgeJ_x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},{"id":"ytc_UgwW18mw2FNAklJtvD14AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"outrage"},{"id":"ytc_UgxchNHs2DG9eyjAiXp4AaABAg","responsibility":"society","reasoning":"deontological","policy":"none","emotion":"indifference"},{"id":"ytc_UgzqhQehG2zuRAi8UKh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},{"id":"ytc_UgywRWzBR0tG89z5NLR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},{"id":"ytc_Ugxg82bGEhIzPr7NZ314AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},{"id":"ytc_Ugz-HpiUQuTtVurY2ft4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"outrage"},{"id":"ytc_UgxLOBgJdP_UTrMccSR4AaABAg","responsibility":"society","reasoning":"deontological","policy":"none","emotion":"indifference"},{"id":"ytc_UgwhLpwG-NknAHUe56N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}]