Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- There's no limit to what certain men will invent just to get a girl 😂… (ytc_Ugy0zrbPa…)
- The point of robotic hardware is to service humans, because it is because of hum… (ytc_UggIsREG2…)
- plot twist: I’m autistic so even without ai, ai detectors flag 90% of my writing… (ytc_Ugw3OG5C4…)
- How do you make ChatGPT make an image when I ask it says it cant… (ytc_UgyS35186…)
- Old man knows AI 😂... whatever 🤦. He doesn't have a clue. They don't need AI to … (ytc_UgxKMOjiG…)
- I am a retired business technology analyst and it including large data analysis,… (ytc_UgxELUF7e…)
- AI is more like a scalpel than a hammer. In a surgeon's hands, it can do amazing… (ytc_Ugw1LLjWQ…)
- The "AI" is not sentient. It's an algorithm, so it can't express or comunicate i… (ytc_Ugzr6g71-…)
Comment
What disturbs me about AI that we already see is the programming of the base is heavily dependent on the values systems of the AI creator. What is moral and correct is very subjective in us humans. Just look at the societal schism in which we currently exist.
When do the rules of the AI base bend society to conform? When does the AI control human behavior as in a social credit system and digital currency? What we give away to AI, removes an equal or greater amount of freedom in the human experience. We are not meant to be perfect(ed), we are meant to have free will. We cannot advance AI to the point where it can begin to make judgements.
youtube
AI Jobs
2024-01-14T15:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxMYoXYIAimiAAUy-94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzCB38fOYPPQZm6hpx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyZ85BqugqhHh5pjqR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwF7owfWfXrtXaS_Yh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgyLaGA8vbo1EpsflMN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgydEDsouPW2_dq18Pt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgxQNqnkz7burjvd3MZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwepbtRKVqOo5MShnZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyW8Lvihzv40J8kC7x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwelFs3bwmCmRN-Tyd4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"}
)
```
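The all-"unclear" rows in the Coding Result table arise when a comment's ID cannot be found in the parsed model output. A minimal sketch of that parse-and-lookup step, assuming the raw response may be slightly malformed (e.g. ending with a stray `)` instead of `]`, as above); the function names and the fallback behaviour are illustrative, not the tool's actual implementation:

```python
import json

def parse_coding_response(raw: str) -> dict[str, dict]:
    """Parse a raw LLM coding response into {comment_id: codes}.

    Model output is occasionally malformed, e.g. a stray closing
    paren instead of the JSON array's closing bracket, so repair
    that one known glitch before parsing.
    """
    raw = raw.strip()
    if raw.endswith(")"):
        raw = raw[:-1] + "]"
    records = json.loads(raw)
    # Index by comment ID, dropping the "id" key from each record.
    return {r["id"]: {k: v for k, v in r.items() if k != "id"} for r in records}

def lookup(codes: dict[str, dict], comment_id: str) -> dict:
    """Return the coded dimensions for one comment.

    Falls back to "unclear" on every dimension when the model
    omitted this comment's ID from its response.
    """
    default = {"responsibility": "unclear", "reasoning": "unclear",
               "policy": "unclear", "emotion": "unclear"}
    return codes.get(comment_id, default)
```

With this fallback, a missing ID renders exactly as the table above: every dimension shows "unclear".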