Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "OK, next up in 10 years 14:21" (ytc_Ugz4HsBjg…)
- "Anyone tried Anthropics Claude code? 😂 How much bloat it generates for even 1 si…" (ytc_Ugx_kc5ZP…)
- "AI art collapse is inevitable. Nightshade is really just exacerbating an inheren…" (ytc_UgxvWxBZs…)
- "So, AI is pulling data on human knowledge, exposing racism. It's actually quite …" (ytc_UgwrZVVQ9…)
- "Bruh did he use the self driving or did he not!? It’s an easy answer , that only…" (ytc_Ugx_J0VTC…)
- "One of the biggest problems throughout human history has been coordination. Our …" (ytr_Ugxi5ce8E…)
- "If you're needing something complex fron the ai be very clear about it: hey can …" (ytc_UgxNQ3POo…)
- "Is it ai or someone else like in the movie Electric state Were using advance t…" (ytc_Ugw0rNz6N…)
Comment
> The biggest issue with the AI is, it is moving too fast, people and organizations cannot keep up.
> Also there are is a lot of uncertainties around AI's quality and accuracy.
> Moreover, a company can easily get locked in with a certain model and when the company changes anything it will break the whole process, AI and their outputs are not predictable they are more probabilistic.

Source: youtube · 2025-10-31T00:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyN8YpZfMlH2ii-LJV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyZTmFOrUUtpq-P5494AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxSibPsB1KBiqHgPhd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx5Ov_thf5bpOZrYCZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugzpp5hEw3lpZ0th5Z94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgweY3RfD-M7JZwXtw94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"frustration"},
  {"id":"ytc_UgzcscWDXiel9VdWAOR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxZw1qwKE1HKAynneJ4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugwc7tS0z9fWrRW8cvh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz9mObD5LXs4byMGiZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"}
]
```
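A raw batch response like the one above has to be parsed and sanity-checked before the per-comment codes can be trusted. Below is a minimal sketch of that step; the `SCHEMA` value sets are inferred only from the values visible in this batch and in the table above, not from the project's actual codebook, and the function name `validate_batch` is illustrative.

```python
import json

# Allowed values per dimension, inferred from this batch; the real codebook may
# include categories that simply did not appear in these ten records.
SCHEMA = {
    "responsibility": {"ai_itself", "company", "user", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"fear", "outrage", "indifference", "approval", "frustration", "mixed"},
}

def validate_batch(raw: str) -> dict:
    """Parse a raw LLM coding response and index valid records by comment ID."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec.get("id")
        if not cid:
            continue  # skip records the model emitted without an ID
        bad = [k for k, allowed in SCHEMA.items() if rec.get(k) not in allowed]
        if bad:
            raise ValueError(f"{cid}: out-of-schema values for {bad}")
        coded[cid] = {k: rec[k] for k in SCHEMA}
    return coded
```

Indexing by ID is what makes the "look up by comment ID" view possible: `coded["ytc_Ugx5Ov_thf5bpOZrYCZ4AaABAg"]` would return the responsibility/reasoning/policy/emotion codes shown in the table.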