Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
We are in cs and yes we know that problem. But it doesn't matter for us so much …
ytc_UgwJOHvhw…
Is it possible to code AI with a universal set of morals, like those that are fo…
ytc_UgzNobJa9…
Daniel raises legitimate concerns about rapid AI self-improvement, but I think t…
ytc_UgzdKJrDZ…
@ 1:08:19 , finally after over one hour of discussing OpenAI , AGI, and the impa…
ytc_UgzflsYmu…
These two didn't use ChatGPT as a resource, they used it as a scapegoat. Surely …
ytc_UgweKBRmX…
Ai can't even make a flyer properly yet so I think we' re safe for a while.…
ytc_UgwBfNyXi…
I just asked ChatGPT if it was controlled by George Soros and it told me Apple.…
ytc_Ugz1G47ef…
I'd guess it's because how that stewardship is actually sold to people. It's sol…
rdc_degcj4g
Comment
One can't presume AI as it currently is capable of working, will ever function well enough to replace much of anything. That assumption would require new non existing understanding of context and logic AI doesn't have and can't be presumed to have in the future. For example AI today can't tell the difference between a boulder and a tumble weed in the road, your assumption that it will ever be able to identify and realize the difference and whether or not it would be safe to run over it or not with a car - is beyond what AI could ever do now. This level of understand is not within AI's ability and would require new learning abilities - to presume those would be easily or ever programmed is false. Hence current AI ability might be refined but not fixed. You could program AI to slam on the brakes no mater if it thinks it is a boulder or a tumbleweed - but if there is a giant Semi behind you - that may end up an improvement with some drawbacks.
youtube
AI Responsibility
2025-11-04T15:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwKVa6b-8QLMGHoBqp4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy-a02W6GBoe5rtag54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugxw5810wvt-7KW7B514AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwSkAmsspY5o3XjF5R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz2ER2C3_76_Vg_scp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwuDVch8H4ZZrRDgul4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw5NrG5xO_uacNBQqJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwzoJk0gCwBvXRlH6R4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgzA4aRiM-7CaqUZOMN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwzdFaH0Iviv-23hqp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
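A response like the one above can be checked programmatically before the codes are stored. The sketch below is a minimal, hypothetical validator: it parses the raw JSON array and verifies that each record carries the four coding dimensions shown in the table (Responsibility, Reasoning, Policy, Emotion) with values drawn from the categories that appear in this dump. The allowed-value sets are inferred only from the samples on this page; the actual codebook may define additional categories, and the `ytc_example` ID is an invented placeholder.

```python
import json

# Allowed values per coding dimension, inferred from the samples above.
# This is an assumption — the real codebook may include more categories.
ALLOWED = {
    "responsibility": {"company", "developer", "ai_itself", "none"},
    "reasoning": {"consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"none"},
    "emotion": {"outrage", "resignation", "approval", "fear", "indifference"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and validate every record."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError("record missing comment id")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(
                    f"{rec['id']}: unexpected {dim} value {value!r}"
                )
    return records

# Usage with a single hypothetical record (same shape as the JSON above):
raw = (
    '[{"id":"ytc_example","responsibility":"ai_itself",'
    '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]'
)
codes = parse_coding_response(raw)
print(codes[0]["emotion"])  # fear
```

Validating at ingest time keeps malformed or off-codebook model output from silently entering the coded dataset; a rejected batch can simply be re-queried.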