Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a coded comment by its ID, or click one of the random samples below to inspect it.
AI aka ROBOT is soo interesting technology. This is just the beginning, maybe so…
ytc_UgyGsH9-t…
@Vi-El what you express here is just your rage, not truly valid arguments.
I s…
ytr_Ugz5byaaG…
cant wait till they realize the only time they see AI is the bad AI.…
ytc_UgztQeapB…
I applaud Sasha Luccioni's efforts. We are in the early stage of making AI gener…
ytc_UgxOVSzoo…
The company that offers the self-driving software should always be at fault in t…
ytc_UgyiK2ldf…
Considering AI is a hype train and not actually very helpful CEO replacement wou…
ytc_Ugx9GGwa7…
"This job is bori-"
"Did you just mess up?"
"DID YOU JUST MESS UP?!"
"HE MESSED …
ytc_UgwoTfcSL…
Think about it the robot prefers men and Caucasian people maybe because they are…
ytr_UgyyHdNUu…
Comment
There are a couple of catches with actually using this technology. AI does not 'think' on it's own so is not really intelligence. It does crunch data and give answers that seem human to us (mostly). When AI does make a mistake or do one it isn't just a little off, it is straight forwardly weird. Finally, If AI does actual doctor work, it will not be able to doubt itself, know there are limits, consult with others who know a procedure might have an uncertain outcome, not will if fear accountability because for it, doing harm can not have consequence.
I like it like I like google. I think AI is a good tool to supplement human education and abilities, not replace or supersede them.
youtube · AI Harm Incident · 2024-05-31T16:3… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyeRIUzWibLNMrEnQd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugyx60GwAmPsAACY1-R4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxHlgO2QKiD6XKFUo94AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwFuP_2V-b8gy5017d4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwyTaQbGoahOm5Q5RV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyBlBOAfDnOr4_wim14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy9b7IQyWfxsPsVeWF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx6Nf-jAVJkaIWrX2h4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyKpp4tSrs88CGZ8zR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugx4rC1BorCZ5sAmW814AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
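The lookup this page offers can be sketched in a few lines: parse the model's raw JSON array and index the codings by comment ID. This is a minimal illustration, not the tool's actual implementation; the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) are taken from the response above, and the shortened two-row `raw` string is an assumed stand-in for a full response.

```python
import json

# Example raw response text, abbreviated to two rows from the array above.
raw = """[
  {"id":"ytc_UgyeRIUzWibLNMrEnQd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugyx60GwAmPsAACY1-R4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"}
]"""

def index_codings(raw_response: str) -> dict:
    """Parse a raw LLM response (a JSON array of coding rows)
    into a dict keyed by comment ID for O(1) lookup."""
    return {row["id"]: row for row in json.loads(raw_response)}

codings = index_codings(raw)
coding = codings["ytc_Ugyx60GwAmPsAACY1-R4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # → ai_itself fear
```

A dict keyed by ID mirrors the "look up by comment ID" behaviour of the page; rendering one entry as the dimension/value table shown above is then a straightforward iteration over the row's keys.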