Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
If chat GPT could access LexusLaw or WestLaw, this would be a totally different …
ytc_UgyQU4_eH…
I know this is probably the wrong place for this, since most of you already made…
rdc_oa85whv
I think the scientists definition of "smarter", "learn" or more "intelligent" sh…
ytc_Ugx8ikakN…
We need to focus on ETHICS before the science. Science without ethics is what br…
ytc_Ugyz5Rhpq…
Billionaires talking about how AI will give everyone UBI is insane. They don't e…
ytc_UgxJlDzIn…
Not AI. it’s visual effects. Meaning we made it from scratch with CGI. it was a …
ytr_UgwyMA4xj…
The danger that humans are putting into the data of an AI... is the issue of hum…
ytc_UgxRkdx4y…
When artificial intelligence can create other artificial intelligences
So the w…
ytc_UgyDO6H8_…
Comment
This garbage everyone calls AI is not AI. Grow a brain people. This tech does not have automated learning; it is orchestrated by people that call themselves AI Engineers, it does not rewrite its own code without direction and input from people. What that actually means is that these people use exception reports on the data gathered by their systems to determine what data should feed back into the system to build what they believe is relevant links between data points. Don't trust me, ask ChatGPT or any other "AI" service yourself. Ask it if it learns on its own in real time. This "AI" is as dangerous as the stupid people using it to fill gaps in their knowledge of a problem space they're using it for. Without said knowledge, how can stupid people verify that the result provided by "AI" is actually correct?
youtube
Cross-Cultural
2026-02-10T02:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgwYVly-Qzya-RBUWOV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugws38YkxsmZximBvKN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyjbg-W_ywoDQqslGt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxMNvD7PDwzJJZKYVZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzE0tFPZt1QxMgom7B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgzKaPWRShTQjOhJpFp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyKZjACL5mUzoZPwRJ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwdbmFbTLYllWV9UKR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxT5-cIWCQGqOksWCh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyzZXKt8yj9l7VaP5Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
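A raw response like the one above is a JSON array of per-comment codings, which can be parsed and indexed by comment ID for lookup. Below is a minimal sketch of such a parser; the `ALLOWED` value sets are an assumption inferred only from the values visible in this sample (the full codebook may define more), and the function name is hypothetical.

```python
import json

# Allowed values per coding dimension, inferred from the sample above
# (assumption -- the real codebook may permit additional values).
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "unclear"},
    "emotion": {"indifference", "approval", "fear", "outrage",
                "mixed", "resignation"},
}

def parse_llm_response(raw: str) -> dict:
    """Parse a raw LLM coding response and index rows by comment ID.

    Raises ValueError if a row is missing a dimension or carries a
    value outside the ALLOWED sets.
    """
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        cid = row["id"]
        for dim, allowed in ALLOWED.items():
            value = row.get(dim)
            if value not in allowed:
                raise ValueError(f"{cid}: bad {dim} value {value!r}")
        # Keep only the known dimensions, keyed by comment ID.
        coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return coded
```

With the response above, `parse_llm_response(raw)["ytc_UgwdbmFbTLYllWV9UKR4AaABAg"]["emotion"]` would return `"outrage"`, matching the coding-result table for that comment.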