Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Me: what are the dangers of AI?

ChatGPT: boring answer given about
1. Job displacement
2. Bias and discrimination
3. Privacy and security
4. Autonomous weapons
5. Lack of transparency and accountability
6. Unemployment and socioeconomic impact

Me: you missed out existential threat to kill humans above

ChatGPT: I apologize for the oversight. One of the concerns sometimes raised regarding artificial intelligence is the concept of an existential threat. This idea suggests a hypothetical scenario where AI becomes extremely advanced and surpasses human intelligence to the extent that it poses a direct threat to humanity. While this concept has been popularized in science fiction, it is important to note that the notion of AI autonomously deciding to harm or destroy humans is currently speculative and not supported by scientific evidence. The development of AI is guided by human values, and responsible researchers and organizations prioritize safety measures and ethical considerations.

Nonetheless, it is crucial for the AI community to continue researching and implementing safety measures to ensure that AI systems remain beneficial and aligned with human values. The field of AI safety focuses on developing techniques that prevent unintended consequences and promote the safe and reliable use of AI technology. By taking precautions, establishing ethical frameworks, and maintaining human control and oversight, the potential risks associated with advanced AI can be mitigated, thereby ensuring the safe and beneficial integration of AI into our society.
Source: youtube · Topic: AI Governance · Posted: 2023-05-24T21:1…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytc_UgyGcQCpUU0K7UX9yMp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyNp0UGOs2IDRd419N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugw8aghO76KJMhZbw1R4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugy5exZfqrrjjgawTVJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgyqkMZ6VkLTH6SH3xl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgxeauozBIsGpY99-SB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgxT5nf6DkmEze5lmBt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgzU-T2vVGDhGcsTox14AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugx3sxSMyosrqfoHNLF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgzGIRPiLLH1TQ9blQ54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"} ]