Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- Even if we slow down on the race, China will still be in it. I fear that the whe… (ytc_UgxoIzPw8…)
- "Oh, it ended up being good for everyone." you mean like the way automating auto… (ytc_UgyH3vwBt…)
- This is myopic. You need workers to introduce innovations that AI will either n… (ytc_UgzzSXYiZ…)
- Nice video. Great to see you! - Although, that is exactly what an AI *would* say… (ytc_Ugw3kWj-H…)
- To be fair, there's nothing weird or wrong about studying nutrition. Dude was ju… (ytc_Ugz8QY02X…)
- Ai just hype…it still cannot fix simple code..it can write some fixed known code… (ytc_Ugyjkhxph…)
- Current LLMs have an intrinsic limitation that is language itself. Language isn'… (ytc_Ugxs2lO24…)
- As someone who is doing research on AI, I can tell you that in most cases you ha… (ytc_Ugz_atSeI…)
Comment
The generative AI genie is already out of the bottle. Unless we develop a way to verify that any given creative work was made by a human without the use of AI, there is no practical way of putting it back. While we may be able to push major AI companies to pay for training data, individual users are already creating finetunes on consumer hardware that use open-source base models. At best, we might create a scenario where the big guys have to play by the rules while the small guys are able to carry on ignoring them. After all, how would anyone know? Unless we demand that creatives show their process and prove that they put in the manual work, nobody will ever be able to trust them.
youtube
2025-04-08T16:3…
♥ 7
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgyNpiWWVrjNR-hf3EB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy9GtaapYNn8UaBQxJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz_ZQCRNmYrmDmLZn14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugw_PU3QxH-R0hms0yV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugxo9hCTO3KWQstiBgZ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwQANnKNXwEpwvAOmd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgxJuLf1Dgeks_PxYHx4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"liability","emotion":"approval"},
{"id":"ytc_UgytDo8sP3G8TByDCf14AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugz8OSSPHtwwkDZbe4R4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxkfTYDWThQdNPVkfR4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
```
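Looking up a single comment's coding in a raw batch response like the one above amounts to parsing the JSON array and indexing it by the `id` field. A minimal sketch (the variable names and the two embedded rows are illustrative, taken from the response above):

```python
import json

# Two rows copied from the raw LLM response above, standing in for the
# full batch; in practice this string would be the model's entire output.
raw_response = """[
{"id":"ytc_Ugw_PU3QxH-R0hms0yV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugxo9hCTO3KWQstiBgZ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]"""

# Build an ID -> coding map so any coded comment can be inspected directly.
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["ytc_Ugw_PU3QxH-R0hms0yV4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # distributed resignation
```

Building the dictionary once makes repeated ID lookups O(1), which matters when cross-referencing many coded comments against the same batch response.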