Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
I am 57 years old and create my own local ai agents. I can have deep truthful i…
ytc_UgxaJKGPR…
Really nice interview. Maybe "AI" is here to better understand what makes us peo…
ytc_UgxjN_gum…
problem is that even as a tool the luddites consider ai like Nazism, the dev fro…
ytc_UgzB0fh0Y…
This is more proof we need to invest in space travel, so that AI can be used in …
ytc_Ugwc83xbZ…
Why can’t AI go through every possible scenario, and then give that data to a dr…
ytc_UgxdMaIPF…
Will never forget the day someone asked me what ai program i used for a pokemon …
ytc_UgzHTst6K…
“Meta employees say the AI’s stiff movement and expressionless face are far too …
rdc_oh77y71
I guess? But that’s for a purpose but Ai isn’t it’s pointless purposeless to exi…
ytr_UgyJnff8h…
Comment
This is a big problem, but AI Safety is much more critical. 38% of AI researchers believe there is a >10% chance that AI will cause HUMAN EXTINCTION, and they're building it anyways. source: a paper titled "Thousands of AI Authors on the Future of AI" (i'd give a link but youtube blocks those). If you want an explaination of the problem, Rob Miles' youtube chanel has a great 20min video titled "Intro to AI Safety, Remastered". We need AI capabilities research to pause because safety research is way, way, harder. Please help us Bernie.
youtube
AI Jobs
2025-10-08T05:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
  {"id": "ytc_UgzIPov4sptDsX-W4rZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzN0a8mziqYMkidcdd4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxMUwMnkMBai98WGq54AaABAg", "responsibility": "government", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugxp-E_S4B_8mymKZ3t4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzvxWTolgQVRQUS9Dd4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugy2OLLxwrNYb9aKvvx4AaABAg", "responsibility": "government", "reasoning": "mixed", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugzev_pl7UzfXjuFjfZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugwl8raAiFyJAbgh35N4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugy2-0f96HsM6zF_AO54AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugwp3fhtQiQhedb5G294AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
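A raw response like the one above can be parsed and sanity-checked before its codes are stored. The sketch below is a minimal illustration, not the tool's actual pipeline: the allowed value sets are inferred from the samples shown on this page and may be incomplete, and `validate_codes` is a hypothetical helper name.

```python
import json
from collections import Counter

# Allowed values per dimension, inferred from the samples above
# (assumption: the real schema may include more values).
ALLOWED = {
    "responsibility": {"developer", "company", "government", "ai_itself",
                       "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows whose values are in-schema."""
    rows = json.loads(raw)
    return [row for row in rows
            if all(row.get(dim) in vals for dim, vals in ALLOWED.items())]

# Example with one well-formed row (hypothetical ID):
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
codes = validate_codes(raw)
print(Counter(r["emotion"] for r in codes))  # Counter({'fear': 1})
```

Rows that fail validation can then be flagged for re-coding rather than silently stored, which is one reason to keep the raw response inspectable as this page does.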