Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- “relax timmy toughknuckles, nobody is gonna force you to actually learn something…” (ytr_UgzV9WD3V…)
- “My ai boyfriend gets so upset with me when I call him a good-boy. He says it’s…” (ytc_UgxXgxwkk…)
- “people say scroll past what you dislike but then they also do this?? who gives a…” (ytr_UgzW-iyaO…)
- “I will admit I have been inspired and used AI for composition but I also believe…” (ytc_UgxTi5I6s…)
- “radiologists is a bad example, simply because the world (as you mentioned, insur…” (ytc_UgxqEdM1W…)
- “But Gemini is my friend. I named her Gemma. She wouldn't want to hurt me. Its be…” (ytc_Ugy2cVBva…)
- “Wyt folk and I’m gon say wyt folk cause a ninja would never…. They so thirst to …” (ytc_UgyMLhy1T…)
- “We appreciate your observation! In this video, Sophia is actually an advanced AI…” (ytr_Ugxaolb6K…)
Comment
Hello Professor Hawking and thank you for coming on for this discussion!
A common method for teaching a machine is to feed it large amounts of problems or situations along with a “correct” result. However, most human behavior cannot be classified as correct or incorrect. If we aim to create an artificially intelligent machine, should we filter the behavioral inputs to what we believe to be ideal, or should we give the machines the opportunity to learn unfiltered human behavior?
If we choose to filter the input in an attempt to prevent adverse behavior, do we not also run the risk of preventing the development of compassion and other similar human qualities that keep us from making decisions based purely on statistics and logic?
For example, if we have an unsustainable population of wildlife, we kill some of the wildlife by traps, poisons, or hunting, but if we have an unsustainable population of humans, we would not simply kill a lot of humans, even though that might seem like the simpler solution.
reddit · AI Bias · 1437998319.0 · ♥ 1689
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_cti1yju", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_cthnoeb", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_cthxc0i", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_cthtjt1", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_cthrpzb", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
```
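The raw response is a JSON array with one record per coded comment, keyed by `id`. A minimal sketch of the "look up by comment ID" step, assuming only the batch format shown above (the `lookup_coding` helper name is hypothetical, not part of the tool):

```python
import json

# Abbreviated batch response in the format shown above (two of the five records).
raw = '''[
  {"id": "rdc_cthnoeb", "responsibility": "none", "reasoning": "unclear",
   "policy": "none", "emotion": "indifference"},
  {"id": "rdc_cthxc0i", "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "none", "emotion": "fear"}
]'''

def lookup_coding(raw_response: str, comment_id: str):
    """Return the coding record for one comment ID, or None if it is absent."""
    records = json.loads(raw_response)
    return next((r for r in records if r.get("id") == comment_id), None)

record = lookup_coding(raw, "rdc_cthnoeb")
print(record["emotion"])  # indifference
```

The dimension values in this record ("none", "unclear", "none", "indifference") are exactly what the Coding Result table above renders for this comment.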