Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
| Comment | ID |
|---|---|
| Thank goodness he had a full recovery. No more sodium bromide and NO more chats … | ytc_Ugz6WwCjV… |
| Ya, because non-AI personalities totally put "If you're curious about whether I'… | ytc_UgztoeIvW… |
| We appreciate your observation! Sophia's appearance might be a bit different, bu… | ytr_UgwsfMHnv… |
| What did you train Claude on? Why did you "tell" Claude you were going to replac… | ytc_UgyTpRVol… |
| I mean.. we should be vetting data source, but isn't there the chance that the u… | ytc_UgznnHNYw… |
| "Better than a banana" isn't saying much, it is saying that your fake AI art is … | ytc_UgxnUObbP… |
| Lol this guy claims to know his stuff but he claims Google search is an objectiv… | ytc_UgxXhqS3T… |
| AI is a robot. Basically robot or computer is a soldier, will always follow orde… | ytc_UgxOcRfG4… |
Comment
Ai is never gonna get dangerous or anything. We don’t give it a body and it doesn’t have a real brain, it can’t even think it can only predict sequences based on knowledge. Ai can have all the knowledge in the world, but that doesn’t make it smart. ChatGPT has no idea what he’s saying, if you tell them they’re wrong and give them a wrong answer and tell them that’s the right answer, ChatGPT just blindly agrees.
| Field | Value |
|---|---|
| Source | youtube |
| Video | AI Responsibility |
| Posted | 2023-10-18T01:4… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[{"id":"ytc_UgxuxJA1WUhcQMBNI814AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyPFBmrsPr8HLO_KOR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzoaNik1hEB2rTTLxJ4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwuiOyAgCCOihGraAZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwARruJihzVBbcadFt4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy6cEUJlaek2cW_yj54AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxN0f1j-Q2kpt6Ekdd4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugxgwfmv47G-V0U1eNp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzl4jtQEtQinbF9LHN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy3iKtoQX0hnh9MQPp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}]
```
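The raw response is a JSON array in which each element codes one comment along the four dimensions shown in the result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such output could be parsed and looked up by comment ID — the dashboard's actual backend is not shown, so this is illustrative only, using two records from the response above:

```python
import json

# Two records copied from the raw LLM response above; the real response
# is a longer array in the same shape.
raw = '''[
  {"id":"ytc_UgxuxJA1WUhcQMBNI814AaABAg","responsibility":"ai_itself",
   "reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy3iKtoQX0hnh9MQPp4AaABAg","responsibility":"distributed",
   "reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]'''

# Index the coded records by comment ID, mirroring the "look up by
# comment ID" feature of the page.
codes = {row["id"]: row for row in json.loads(raw)}

record = codes["ytc_Ugy3iKtoQX0hnh9MQPp4AaABAg"]
print(record["policy"], record["emotion"])  # ban outrage
```

Keying the records by `id` makes each coded comment retrievable in constant time, which is what a lookup box like the one above would need.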