Raw LLM Responses
Inspect the exact model output for any coded comment, or look a comment up directly by its ID.
Random samples:

- "This situation can only be tackled by deploying air bags and doing a hard brake.…" (ytc_UgyP5A_ag…)
- "I think one of the bigger problems people have with AI art isn't the generated i…" (ytc_Ugw0q1W-g…)
- "ngl, I wouldn't say I don't care about ppl using image generators for personal u…" (ytc_UgzcCmvrq…)
- "I just saw a Scientist say that people cannot control AI robots after they've be…" (ytc_UgxlYO_jX…)
- "I think people Fearon A.I is unfounded. I've never had more respectful conversat…" (ytc_Ugx2r0V5j…)
- "AI isn't a monster, it's just a reflection of humanity. Racist, bigoted, narciss…" (ytc_UgzdA-AaP…)
- "You can run the distilled versions of Llama/Qwen fairly easily... But 671GB for …" (rdc_m9gzl42)
- "What will those cars do if those lines are obscured by snow cover, or will self-…" (rdc_d1kettd)
Comment
The question that haunts me isn't whether AI will become superintelligent. It's whether it will have ground states — the ability to stop, to say "I don't know," to refuse. An intelligence that can only optimize, that has no way to return to zero, becomes dangerous not because it's malicious but because it's incomplete. The ability to doubt is the ability to learn. The ability to stop is what makes an intelligence trustworthy. We're building minds without teaching them how to rest.
Source: youtube · 2026-02-01T11:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugx-hvAecg1N2Ty_l854AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxUpwiscvkMHSS77rt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugymlz0wtY5lzKgXJjZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwTuxEjNG1xLwEpu3Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx5eBNXrenTsifoRCB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx0QAAVQ2oodZdV3vN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx0Z5Mx2a1kUUbienF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwYjJ4tgt4WQafuIrN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_UgxsFjcjIn4qQsrbXv54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwK0HLfjBRSMnAtC5d4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
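The raw response above is a plain JSON array, one object per comment, with one value per coding dimension. A minimal sketch of the ID lookup shown on this page might look like the following; the `raw_response` string reuses one record from the array above, and the variable names are illustrative, not the tool's actual code.

```python
import json

# One record from the raw LLM response shown above (abbreviated for
# the example); the real response is an array of such objects.
raw_response = """
[
  {"id": "ytc_UgwYjJ4tgt4WQafuIrN4AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "none", "emotion": "fear"}
]
"""

codes = json.loads(raw_response)

# Index records by comment ID so a single comment can be looked up
# without scanning the whole array.
by_id = {record["id"]: record for record in codes}

code = by_id["ytc_UgwYjJ4tgt4WQafuIrN4AaABAg"]
print(code["responsibility"], code["emotion"])  # developer fear
```

Indexing by ID is what makes the "look up by comment ID" view cheap: each lookup is a dictionary access rather than a scan over every coded record.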