# Raw LLM Responses
Inspect the exact model output for any coded comment.
## Look up by comment ID

## Random samples
- `ytc_Ugy4kyA-a…`: tbh the argument that "ai art is really good for people who are disabled" has al…
- `ytr_Ugxrma_9j…`: @bergweg These airplane softwares would be done by Electrical and Computer Engin…
- `ytc_UgzpJSnZu…`: When products really doesn't make values in ground people its wipe out soon. Sim…
- `ytc_Ugxy3T2KK…`: But the thing is -- there often still be what *appears* to be an artist's point …
- `ytc_UgzUsl3z1…`: The last thing I want is any AI, benign or not, to be or to behave any smarter t…
- `ytr_UgxJKaOY1…`: We appreciate your engagement with the content. If you have any questions or top…
- `ytc_Ugzsig0Dk…`: Why I’m against not the use of Facial Recognition, it should be properly regulat…
- `ytc_UgwIMriQa…`: It's a bit callous to say "Oh well, after the progress dust settles everything w…
## Comment
Also important to keep in mind that the best AI solutions will be the ones with the most capabilities, which will be the ones with the most technical requirements. Yes, even if they're 100% prompt based - which i anticipate that they wont be. At least not single prompt. Lets not forget that a language model is exactly that, a LANGUAGE model. The best outputs will be producible by people who understand all that language the model is using (programming concepts and overarching ideas).
youtube · AI Jobs · 2024-03-11T09:1…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
## Raw LLM Response
```json
[{"id":"ytc_UgzqP8Cu_2H5u3zF4gp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyvgYhUsWQ8i2jWmQl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw342YlNkwg7-YHkB54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzeNf1JWyR4iptZI8h4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgyUwjDetSu0mfJrpcR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwPvt0E64M1iO2m6WN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugw2tBtZxjvNkee-itZ4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzD8NIsUqIVlJF5lwt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxnjcDje4QySAJyuI54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwg8_HhqYmU62WsufJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"}]
```
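The raw response above is a JSON array of per-comment code assignments. A minimal sketch of how such a payload could be parsed and validated before it is stored (the `ALLOWED` value sets here are assumptions inferred from the samples in this section, not the tool's actual codebook):

```python
import json

# Assumed codebook, reconstructed from the values visible above;
# the real tool may define more codes per dimension.
ALLOWED = {
    "responsibility": {"none", "developer", "company", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "virtue",
                  "contractualist", "deontological"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"approval", "resignation", "outrage",
                "fear", "indifference"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response and index the codes by comment ID."""
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        comment_id = row["id"]
        codes = {dim: row[dim] for dim in ALLOWED}
        # Reject any value outside the codebook so malformed model
        # output is caught before it reaches the results table.
        for dim, value in codes.items():
            if value not in ALLOWED[dim]:
                raise ValueError(
                    f"{comment_id}: unknown {dim} code {value!r}")
        coded[comment_id] = codes
    return coded
```

With the payload above, `parse_coding_response(raw)["ytc_UgzeNf1JWyR4iptZI8h4AaABAg"]["reasoning"]` would return `"virtue"`, which is how a comment-ID lookup like the one at the top of this page can be backed by the raw response.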