Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "I mean... the question isn't really whether surgeons working with AI have fuckup…" (ytc_UgzsnWXw_…)
- "I'd never trust a driverless car to ruin my life potentially I rather trust in J…" (ytc_UgxbS57Oq…)
- "This question is everything. We're so busy debating whether AI CAN take jobs tha…" (ytc_UgyyHxBug…)
- "Pictures of herself. Her face right? Thats the issue? She would never have actua…" (ytc_Ugxhg4eUn…)
- "The time to clear up AI errors is estimated at ten years per app, let that perme…" (ytr_UgygnbQec…)
- "You would be surprised how many people are deeply concerned about AI !! I am de…" (ytc_UgwgQKjpM…)
- "If you're trying to cause ChatGPT to experience a moral dilemma, you will surely…" (ytc_UgyOrsnF_…)
- "Unfortunately they'll likely not distinguish that it's ai and will push their vi…" (ytc_UgwKgRdw8…)
Comment
"The good news is, if you look at the last 2 to 3 years there have been very very few downsides. It's very hard to say explicitly what harm an LLM has caused."
Well, that's a relief. I think I'll go back to bed.
Source: youtube · Posted: 2025-03-04T17:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[{"id":"ytc_UgyNved0ZOeqj4VTwPJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx2hVjQnI_sRKJ9Kc94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy16wJr1osWJMt73uF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugxn8TKylzvd150PSOh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyhMgqDe0H2VMSSNC14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx2tU_gpB2t4YFwt6t4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugxp4KVrBZYYq14srbJ4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_UgwhEGU6LUTE_PpwNOh4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgzApIU3DserV8DTO9J4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgyXawm7a5Bn3TaQeI54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"}]
```
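The lookup-by-ID view above can be reproduced offline: the raw response is a JSON array of flat records, each keyed by a comment ID, so indexing it is a single dictionary comprehension. A minimal sketch, assuming only the record shape shown above (the `index_by_comment_id` helper and the two-record sample payload are illustrative, not part of the tool):

```python
import json

# Two records copied from the raw LLM response above, truncated for brevity.
raw_response = '''[
  {"id": "ytc_UgyNved0ZOeqj4VTwPJ4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugx2hVjQnI_sRKJ9Kc94AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]'''

def index_by_comment_id(payload: str) -> dict[str, dict]:
    """Parse a batch of coded records and index them by comment ID."""
    records = json.loads(payload)
    return {record["id"]: record for record in records}

codes = index_by_comment_id(raw_response)
print(codes["ytc_Ugx2hVjQnI_sRKJ9Kc94AaABAg"]["emotion"])  # fear
```

Each coding dimension in the result table (Responsibility, Reasoning, Policy, Emotion) maps directly onto a field of the looked-up record.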