Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "What use would it have to give a robot emotions? It would only 'make sense' if y…" (`ytc_UgipEs5Bc…`)
- "5:06 I'm interested how that argument goes. It certainly is not self evident. I …" (`ytc_Ugy1ISGTt…`)
- "I used to be like RLLY creative until I used chatgpt. Like I used to make ocs wi…" (`ytc_UgwpLvvEL…`)
- "I completely agree, my bad art is my bad art and i dont want AI using it to make…" (`ytc_Ugx9YJcBD…`)
- "Yk what I would do I would put a watermark and a message saying ‘this is not ai’…" (`ytc_Ugwi4n-MM…`)
- "4:20 you could just put multiple boxes for the same promotional item, and make r…" (`ytc_Ugx8KEVu_…`)
- "I love capitalism and profits. I've owned businesses, made ---a decent net worth…" (`ytc_UgzyE-NId…`)
- "Why are all these stories not using actual screenshots of the User Interface? Wi…" (`ytc_UgwlxsHdV…`)
Comment
Highlights
🤖 Governments must urgently build AI expertise to make informed regulatory decisions.
⚖ AI’s rapid progress risks worsening economic inequality without new income distribution policies.
🚀 AGI could revolutionise productivity but also eliminate many human jobs.
🌍 Global cooperation among AI superpowers is essential to manage risks responsibly.
📉 Labour market disruption from AI might destabilise societies if unaddressed.
🏫 Education should focus on teaching people how to effectively use AI tools.
🏢 Fierce competition in the AI industry today may soon consolidate, requiring careful governance.
youtube · AI Jobs · 2025-06-23T02:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxWF4RvnASfuk5J4XN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugxp6iZWh7050sfQmYx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyLwCtx0rxzxshfMOR4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugz9tSr8sDMJpY2CZad4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
{"id":"ytc_UgxQuzsBh_4w_nWdH0p4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz1XivwAhWS2JonJZt4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgynhQakJklmClkvC9d4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugxza-p0pl_TmATAj8R4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyRqzm932wccnGtf1B4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgwIrPdrNXGoek8nv654AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
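As a sketch of how a raw response like the one above can be consumed downstream, the snippet below parses the JSON array, validates each record against the coding scheme, and looks up a coded comment by (possibly truncated) ID. The category vocabularies are assumptions inferred only from the label values visible on this page; the actual codebook may define more.

```python
import json

# A two-record excerpt in the same shape as the raw LLM response above.
RAW_RESPONSE = '''[
  {"id": "ytc_UgyLwCtx0rxzxshfMOR4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgxWF4RvnASfuk5J4XN4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]'''

# Vocabularies inferred from the values shown on this page (assumption,
# not the authoritative codebook).
ALLOWED = {
    "responsibility": {"government", "company", "none", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "liability", "ban", "none", "unclear"},
    "emotion": {"outrage", "fear", "mixed", "approval", "indifference", "resignation"},
}

def validate(records):
    """Keep only records whose every dimension carries an expected label."""
    return [
        rec for rec in records
        if all(rec.get(dim) in labels for dim, labels in ALLOWED.items())
    ]

def lookup(records, id_prefix):
    """Find coded comments by comment ID, allowing a truncated prefix."""
    return [rec for rec in records if rec["id"].startswith(id_prefix)]

records = validate(json.loads(RAW_RESPONSE))
match = lookup(records, "ytc_UgyLwCtx")
print(match[0]["policy"])  # regulate
```

Prefix matching mirrors the truncated IDs shown in the sample list, so a partial ID copied from the page is enough to retrieve the full coded record.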