Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "41:41 These AI chat bots are going to be staying. I wonder if there could be a w…" (ytc_UgzwLfNfz…)
- "I was just wondering that as well, something might be an AI for someone but a ma…" (ytr_UgiyiP9uz…)
- "Thank you for this video, Sam. I'm graphic designer and I understand this perfec…" (ytc_UgwbfDK5_…)
- "Fr. Their arguments are so fundamentally flawed that I don't know where to start…" (ytr_UgwWfVPoy…)
- "As someone who uses Ai for personal concepts, I agree wholeheartedly that it is …" (ytc_UgxhweXa1…)
- "Still don't get whats the overall benefit to humanity overall. A law needs to be…" (ytc_UgzwlYxI1…)
- "Is AI just a machine with known limitations like the halting problem and incompl…" (ytc_Ugzsg8sUU…)
- "If the AI is sourcing its information from falsified models, untested science, f…" (ytc_UgzzLU2lV…)
Comment
We need to differentiate between "AI" and large language models.
I have no doubt that ONE DAY, true AI absolutely will far exceed human capability across a broad range of tasks, and crucially it will know what it's doing.
In contrast, the LLMs we have today are language models that are trained using statistical methods to predict what the next word is most likely to be.
If you ask a true AI what your birthday is, it will answer "I don't know", because it doesn't know. If you ask an LLM the same question, it will confidently answer a date, because saying "I don't know" is guaranteed to be wrong, but answering a random date gives it a 1 in 365 chance of guessing correctly.
youtube · AI Jobs · 2026-03-22T23:5… · ♥ 1
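The commenter's "1 in 365" figure is easy to sanity-check. A minimal sketch (an illustration only, not part of the dashboard; leap years are ignored, matching the comment's own simplification):

```python
# The commenter's claim: guessing a uniformly random calendar date gives a
# 1-in-365 chance of matching an unknown birthday, while answering
# "I don't know" can never match a date.
p_guess = 1 / 365
p_abstain = 0.0  # abstaining is never scored as the correct date

print(f"chance of a lucky guess: {p_guess:.4%}")  # → 0.2740%
```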
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyKGzrz9BkFkpgYYCx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugxwzk4L5L1kLasKKKR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgyPtztW3sj5LMDNiyx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugyps9pVRoc6sSwE6at4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx6YnfaHu3PcVjh5T14AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxnG0QSAfsYi8_WMop4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzvX8SZazAqTnoqBVB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxXgUwpHOWP7WJJkIR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugwj0u24C6SVvDEEn8d4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxoSTpJRmGK3Z2DqVJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
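Because the raw response is plain JSON, the per-comment codes can be parsed and cross-checked against the Coding Result table. A minimal sketch, assuming the field names shown in the response; the `raw` string is abridged to two records copied verbatim from the array above (the first is the only record whose values match the table):

```python
import json
from collections import Counter

# Two records copied from the raw LLM response above (abridged for the sketch).
raw = '''[
  {"id":"ytc_Ugx6YnfaHu3PcVjh5T14AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzvX8SZazAqTnoqBVB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}
]'''

# Index by comment ID so a single coded comment can be looked up directly.
records = {r["id"]: r for r in json.loads(raw)}

# Look up the record whose codes match the "Coding Result" table above.
row = records["ytc_Ugx6YnfaHu3PcVjh5T14AaABAg"]
print(row["responsibility"], row["reasoning"])  # → unclear mixed

# Tally one dimension across the batch to summarize the coding run.
print(Counter(r["responsibility"] for r in records.values()))
```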