Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
| Comment (truncated) | ID |
|---|---|
| 2020 survival kit: >don't go outside >wear a mask >treat your mental health >tak… | ytc_Ugxf0OoAa… |
| WHEN???? ffs I don't want to get up to work tomorrow can AI overtake ME tomorrow… | ytc_Ugyh5hl3v… |
| AI Art is not theft, i wish that fucking gatekeepers wouldn't fearmonger people … | ytc_Ugw3KX7jM… |
| Im all for the underdog getting their "justice desserts ." However, they truly m… | ytc_UgxFRhE8c… |
| AI researchers have often talked about this issue, and a blanket blame on racism… | ytc_Ugy0aiw3-… |
| Not regulating AI for email filters? Are they mad? People can be manipulated, ex… | ytc_Ugy6X3M4X… |
| 2:37 did the ai even understand that these Lobstars not even are Big enaugh to g… | ytc_Ugw2tj5tt… |
| The funniest part about the "Ai is Theft Bros" is that pretty much every…single… | ytc_Ugx5mf0li… |
Comment
Me: what are the dangers of AI?
ChatGPT: boring answer given about
1. Job displacement
2. Bias and discrimination
3. Privacy and security
4. Autonomous weapons
5. Lack of transparency and accountability
6. Unemployment and socioeconomic impact
Me: you missed out existential threat to kill humans above
ChatGPT:
I apologize for the oversight. One of the concerns sometimes raised regarding artificial intelligence is the concept of an existential threat. This idea suggests a hypothetical scenario where AI becomes extremely advanced and surpasses human intelligence to the extent that it poses a direct threat to humanity.
While this concept has been popularized in science fiction, it is important to note that the notion of AI autonomously deciding to harm or destroy humans is currently speculative and not supported by scientific evidence. The development of AI is guided by human values, and responsible researchers and organizations prioritize safety measures and ethical considerations.
Nonetheless, it is crucial for the AI community to continue researching and implementing safety measures to ensure that AI systems remain beneficial and aligned with human values. The field of AI safety focuses on developing techniques that prevent unintended consequences and promote the safe and reliable use of AI technology.
By taking precautions, establishing ethical frameworks, and maintaining human control and oversight, the potential risks associated with advanced AI can be mitigated, thereby ensuring the safe and beneficial integration of AI into our society.
youtube · AI Governance · 2023-05-24T21:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyGcQCpUU0K7UX9yMp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyNp0UGOs2IDRd419N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw8aghO76KJMhZbw1R4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy5exZfqrrjjgawTVJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyqkMZ6VkLTH6SH3xl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxeauozBIsGpY99-SB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxT5nf6DkmEze5lmBt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzU-T2vVGDhGcsTox14AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugx3sxSMyosrqfoHNLF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzGIRPiLLH1TQ9blQ54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
```
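A batch response like the one above can be checked programmatically before the codes are stored. The sketch below is a minimal validator, assuming the allowed category values are exactly those observed in this sample (the real codebook may define more); the function name and the toy `raw` input are hypothetical, not part of the tool.

```python
import json

# Allowed values per coding dimension, as observed in the sample
# responses above (assumption: the full codebook may add categories).
SCHEMA = {
    "responsibility": {"none", "ai_itself", "distributed", "company", "developer"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"indifference", "outrage", "fear", "mixed", "approval"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response; keep only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # drop records missing the comment ID
        # Every dimension must be present and hold an allowed value.
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec)
    return valid

# Hypothetical input: one valid record, one with an out-of-schema value.
raw = (
    '[{"id":"ytc_x","responsibility":"company","reasoning":"deontological",'
    '"policy":"regulate","emotion":"outrage"},'
    '{"id":"ytc_y","responsibility":"martian","reasoning":"unclear",'
    '"policy":"none","emotion":"fear"}]'
)
kept = validate_batch(raw)
print(len(kept))  # 1 — the second record fails validation
```

Validating before storage means a malformed or hallucinated code surfaces as a dropped record rather than a corrupt row in the coding results table.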