Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Production speed, maybe. Quality not so much, since it is still gonna depend on …
ytr_Ugz_TIHce…
AI doesn't credit the artists whose work it's pulling from, either. So it's not …
ytc_UgyWJHZTK…
Come to AI, I trusted Elon's concern. Regarding AG1, I did try it and had to st…
ytc_UgxcQDx4A…
You know what they were doing was on its own was a great selling point they coul…
ytc_UgzHpj1Fo…
The title of this video is miss-leading. There's nothing scary about what you sa…
ytc_UgwAHhZay…
AI is not a problem, human greed is. People like Peter Thiel, Alex Karp, Elon Mu…
ytc_UgzlaNJXN…
This is like Trump blaming Cohen for paying off Stormy Daniels. Who imports the …
rdc_gx81uk7
I do find it funny that they didn’t mention Israel once during the topic of Ai w…
ytc_Ugypk_9m1…
Comment
The first question I would ask: what does being "objective" really mean?
Say- if we were to ask the AI to grade a whole student body, a school for example- on a piece of creative writing- how would the grades received by the students from the AI, correlate with the grades between different teachers. What would that tell us about the AI and the teachers?
How would we make an optimal "marking" machine, while improving ourselves by using it?
How do we even begin to decide what a "statistical anomaly is", when the question is no longer- concensus, but optimization.
The obvious answer is trial and error, but the thing you are playing with here- education, is not unimportant. In fact, if the "error" part goes really wrong, that's a recipe for apocalypse. Terrible education and unsustainable beliefs are the most likely thing that might end our existence. That's my opinion though. I just don't want to see us fail.
youtube
2023-06-24T23:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugz5WXaXziw05GUjU9x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwvG5HCRRkDlOodkpd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwKxS4ZfsEy3hn4tO54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz62lgq0rNZwmoRmCx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugzbtukhe2ba8bb7PfN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzRwpCXIFXkHr9hqxF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxGbXpeympB340d-RR4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgyTytVoOxTf29rg9cV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwEcy1JBN5T9vN8kWx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxHyyZUvjvI3wGisz94AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}
]
```
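The lookup-by-comment-ID workflow above can be sketched in a few lines: parse the raw JSON array and index the coding records by their `id` field. This is a minimal illustration, not the tool's actual implementation; the two records in `raw_response` are copied from the response shown above, and the function name `index_codings` is hypothetical.

```python
import json

# Two records copied from the raw LLM response shown above
# (illustrative subset; real responses carry one record per comment).
raw_response = """
[
  {"id": "ytc_Ugz5WXaXziw05GUjU9x4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzRwpCXIFXkHr9hqxF4AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]
"""

def index_codings(response_text: str) -> dict:
    """Parse a raw coding response and index the records by comment ID.

    Hypothetical helper: assumes the response is a JSON array of objects,
    each carrying an "id" plus the four coding dimensions.
    """
    records = json.loads(response_text)
    return {record["id"]: record for record in records}

codings = index_codings(raw_response)
coding = codings["ytc_UgzRwpCXIFXkHr9hqxF4AaABAg"]
print(coding["emotion"])  # → outrage
```

With the index in hand, inspecting the exact model output for any coded comment is a single dictionary lookup on its ID.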