Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking up a comment ID directly or by browsing random samples.

Random samples (previews and comment IDs truncated):
- "I'd put Sophia's head on Atlas, the Boston dynamics robot, just to scare the end…" (ytc_UgwOR4nQ7…)
- "Anyone else feel concerned when this guy says ai can assist with 'war' why not h…" (ytc_Ugz7iJOHw…)
- "The robot was just malfunctioning, not attacking. No more than my 57 Chevy when …" (ytc_UgyIv7o_N…)
- "Isn't there a country that does this already? I think they have one of the best …" (ytc_UgzoDIjLR…)
- "Well I don't think ai can have consciousness in any near future. If any ai can p…" (ytc_UghfjZEpf…)
- "Completely get this. AI and the bigger ethical questions can hit hard emotionall…" (ytr_Ugz71U6yL…)
- "Ai prompters put the same effort in as googling an image and think it makes them…" (ytc_UgyuYl5fa…)
- "we got AI slavery before GTA 6, at this point the apocalypse is bound to happen…" (ytc_Ugz-IpKoT…)
Comment
I think ChatGPT is amazing and can be super useful, but I also think governments must pull the brake on it now until we figure out how to make sure we control it, rather than it ending up controlling us. Because if you let tech companies regulate themselves, you end up with devastating implications for society. Social networks almost led to a coup in the US. What do you think a smarter-than-human AI will end up doing if we don't strongly regulate the whole thing?
Source: youtube · Topic: AI Governance · Posted: 2023-05-06T20:0… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzFB9meqjeGABYy4bd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzOd4M95zSbzdFnVw14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzZYISp6oOOwl987Fh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxUvJOd2CDPbPjcd4Z4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyrN8QIzYFlt6jKv4B4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugzv4ryNXsMRY8-9KxB4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgyhZ_34WDu_FBtNxjN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwSkS9JirZGT_dVyz94AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy08fn_D7zcsJKclkp4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwXci4JUPmEmwq-yn94AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"mixed"}
]
```
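The raw response is a JSON array with one object per comment, carrying the same four coding dimensions shown in the table above (responsibility, reasoning, policy, emotion). Retrieving a coding for a given comment ID can be sketched as below; this is a minimal illustration (the `index_codings` helper and the two-entry sample are hypothetical, not the tool's actual implementation):

```python
import json

# Two entries copied from the raw response above, as a small sample.
raw_response = """
[
  {"id": "ytc_UgzFB9meqjeGABYy4bd4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzOd4M95zSbzdFnVw14AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
"""

# The four coding dimensions every entry is expected to carry.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")


def index_codings(raw: str) -> dict:
    """Parse the model's JSON array and index codings by comment ID,
    skipping malformed entries that lack any of the four dimensions."""
    rows = json.loads(raw)
    return {
        row["id"]: {dim: row[dim] for dim in DIMENSIONS}
        for row in rows
        if all(dim in row for dim in DIMENSIONS)
    }


codings = index_codings(raw_response)
print(codings["ytc_UgzOd4M95zSbzdFnVw14AaABAg"]["policy"])  # regulate
```

Indexing by ID also makes it easy to spot entries the model dropped or duplicated: compare the dictionary's keys against the list of comment IDs that were sent in the batch.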