Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Ai is not to blame lol. it literally told him he just didn't listen. lol…" (ytc_Ugwn7vo3q…)
- "We could require every AI innovation to include a human rescue plan and a layoff…" (ytc_UgyM9zKkR…)
- "the dudes arguing that AI is already conscious fail at the first hurdle of stati…" (ytc_UgwooZhiV…)
- "Thank you for your comment! While Sophia's responses are indeed based on her pro…" (ytr_UgybFvoz2…)
- "getting an entry level job has ALWAYS been a pain in the behind. back in the day…" (ytc_Ugw85V3M8…)
- "I kinda like that ai still recognizes it as Rhodesia. Means it's watched blood d…" (ytc_UgyUJwHmE…)
- "I still don't see.the problem here. Use different names.for people not their fir…" (ytc_Ugz-Wwmx4…)
- "because they are right and they can see it, how AI will reduce labout by 70-90%.…" (ytc_UgwR31OPB…)
Comment
Well done scientists. How about an enquiry into the entities approving such research from the get go? This is precisely like the problem atomic energy created. Unlike his flippant optimism expressed, we will not and do not collaborate internationally to minimise the risk of nuclear destruction. There is no evidence humans/nations will treat this technology any better. He did say the military are most interested! Also, it seems rather late in the day to be voicing concerns that AI poses an existential threat. I believe these scientists should be accountable. Knowing the risks they have released AI already into the public domain and now admit they cannot control it. Well done for introducing another existential threat to humanity. Very foolish, now we must contend with nuclear destruction, life ending climate change and uncontrollable AI.
youtube
AI Governance
2023-05-14T01:5…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
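Every coded record fills the same four dimensions shown in the table. A quick completeness check can be sketched as below; note the allowed value sets are assumptions inferred only from values visible in this page's raw responses, not a complete codebook:

```python
# Dimensions from the coding-result table. The value sets are assumptions
# drawn from values observed in the raw LLM responses on this page.
DIMENSIONS = {
    "responsibility": {"government", "company", "user", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "resignation", "mixed"},
}

def is_valid(record: dict) -> bool:
    """Return True if the record has every dimension set to an allowed value."""
    return all(record.get(dim) in allowed for dim, allowed in DIMENSIONS.items())

# The coding shown in the table above passes the check.
print(is_valid({"responsibility": "government", "reasoning": "deontological",
                "policy": "liability", "emotion": "fear"}))  # → True
```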
Raw LLM Response
```json
[
  {"id":"ytc_UgyTwIYUBDb5I_rtHjR4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugwk0uuwjOC7lf5ke6B4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw_vHn-k_pb5_0y7px4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugz6fCk7C4QVwJa1ja94AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz9P_ZlLPQqA7mmbyd4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgzwGJsCsgJ9zV46N8Z4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyoHqiStiywSpFf4aN4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgzaVorEQs87Hei4Cbh4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxsJ6-HeXrNlwGoeBN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyHHTAXGz8RHlrQlrB4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]
```