Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytc_UgzfSpDbi… — "If AI kills someone, it will be the fault of whoever wrote the algorithm for it …"
- ytc_Ugx6RqEiC… — "Story- AI realizes we aren't good for each other or the environment and shuts us…"
- ytc_UgzplFZqA… — "if the artist isnt dead, they should get the money, other than that, i say let t…"
- ytc_UgxvTvNGE… — "one positive thing with LLMs and AI is that it could improve literacy. it gives …"
- ytc_UgxU7k4wV… — "The conversation started off pretty good but then the CEO showed his true colors…"
- ytc_Ugym7IjKY… — "Seems a helluva lot more practicalband intelligent learning than the bullsh*t cu…"
- ytc_UgyNdwksA… — "AI is the single biggest threat to humanity, even worse then the threat of nucle…"
- ytc_UgzjVcSAf… — "Yet for all these interviews, all the speculations and advances, the competition…"
Comment
Who is in control?
Answer: the guys who ‘control’ open ai & palantir have built a doomsday bunker for themselves in NZ…
…the two guys who know the answer to the question, have built a doomsday bunker based on that knowledge etc
This is basically episode one of Battlestar Galactica (reimagined series), and, the last scene of the last episode.
Without intervention from some imaginary beings, we’re doomed.
Now I can relax, I guess
Source: youtube · Topic: AI Governance · Posted: 2024-05-27T11:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | mixed |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzhhwiMhFlFgVNo0CV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxehudTNp3ji6n_Eop4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwuBEC28kUXB2-Blwx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"disapproval"},
  {"id":"ytc_UgwhkofuEm5oAmfiEud4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzqAYnsYiYigHqBK9F4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_UgxwyY8eb5M73ukRRV14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwZIlvD2nijzCu_azl4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugyrv_ybduJ11DAKKZ94AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz2upw_yAankxDMTAB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxcWMqubDVjfhEwjzx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"resignation"}
]
```
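The raw response is a JSON array with one object per coded comment, carrying the four dimensions shown in the coding-result table. A minimal sketch for parsing and validating such a batch — the field names come from the response itself, but the allowed value sets below are assumptions inferred only from the values observed on this page, not an official schema:

```python
import json

# Allowed values per dimension — inferred from the responses above; adjust to
# your actual codebook, these sets are an assumption, not a published schema.
ALLOWED = {
    "responsibility": {"none", "government", "company", "developer", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "ban", "regulate", "liability"},
    "emotion": {"indifference", "outrage", "disapproval", "approval",
                "fear", "mixed", "resignation"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse an LLM coding response, keeping only entries whose id looks like
    a YouTube comment id and whose every dimension is in the allowed set."""
    entries = json.loads(raw)
    valid = []
    for entry in entries:
        if not entry.get("id", "").startswith("ytc_"):
            continue
        if all(entry.get(dim) in allowed for dim, allowed in ALLOWED.items()):
            valid.append(entry)
    return valid
```

Filtering rather than raising keeps a single malformed entry from discarding the whole batch; rejected entries can be re-queued for recoding.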