Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
> This is outrageous that a lisenced lawyer fully relied on ChatGPT for their arguments. The reality is though, this happened because ChatGPT was used improperly. You can't use it to provide philosophically or logically valid outputs with 100% consistency. If you're doing legal research and you prompt ChatGPT something like "what are the potential damages for a plaintiff suing because of a slip and fall at a resturant" and you take the potential damages they mention and compare that to your personal knowledge and what it says in case law and then you put in your arguments damages that are within legal reality. You have to process and verify every single output from a language model because they are not perfect and they do not have the same capabilities as a learned knowledgeable lawyer.... But it sure as hell can compliment a really good lawyer or paralegal by helping them have a broader view of things they can consider.... No human has the same amount of data that a language model has, so it follows that a language model may suggest something valid that a lisenced legal professional wouldn't have thought of. Don't rely on it. But I don't see a problem using it to help brainstorm with you.
| Platform | Topic | Posted |
|---|---|---|
| youtube | AI Responsibility | 2023-06-10T22:4… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
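The coded dimensions above draw from a fixed codebook. A minimal validation sketch, assuming the value sets are exactly those visible in the codings on this page (the real codebook may define more categories):

```python
# Allowed values per coding dimension -- inferred from the codings shown
# on this page, not from the full codebook (an assumption of this sketch).
CODEBOOK = {
    "responsibility": {"user", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue"},
    "policy": {"liability", "regulate", "none"},
    "emotion": {"outrage", "fear", "approval", "indifference"},
}

def validate_coding(coding: dict) -> list[str]:
    """Return a list of problems; an empty list means the coding is valid."""
    problems = []
    for dim, allowed in CODEBOOK.items():
        value = coding.get(dim)
        if value is None:
            problems.append(f"missing dimension: {dim}")
        elif value not in allowed:
            problems.append(f"unknown {dim} value: {value!r}")
    return problems

# The coding shown in the table above passes:
print(validate_coding({
    "responsibility": "user",
    "reasoning": "deontological",
    "policy": "liability",
    "emotion": "outrage",
}))  # -> []
```

A check like this is useful because model output is free text: a batch response can drift outside the codebook, and out-of-vocabulary values should be flagged rather than silently stored.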
Raw LLM Response
```json
[
  {"id":"ytc_UgwKowQJIbVOLHypGg14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwTknYs240QRTsYgex4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxixMF6IhLCURDj_1x4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugy1ERDapyGxRbA3p5h4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxC1dt0qD3tGf4TmvN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxMiZINhRGEBzDMKiV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxHisd3-2iFGDDa9b54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzm7aI1UCbbpuAitf54AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugxw9ojsOAB9bjelWXd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgzEynVuBwMi-F142Op4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"}
]
```
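A raw response like the one above is a JSON array of per-comment codings. A minimal sketch of the lookup-by-comment-ID step, assuming the response parses cleanly as JSON (shown here with two records taken from the batch above):

```python
import json

# Two records copied from the raw LLM response above.
raw_response = '''[
  {"id": "ytc_UgwKowQJIbVOLHypGg14AaABAg", "responsibility": "user",
   "reasoning": "virtue", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugzm7aI1UCbbpuAitf54AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]'''

# Index the batch by comment ID so a single coding can be retrieved directly.
codings_by_id = {item["id"]: item for item in json.loads(raw_response)}

coding = codings_by_id["ytc_Ugzm7aI1UCbbpuAitf54AaABAg"]
print(coding["emotion"])  # -> outrage
```

In practice the `json.loads` call should sit inside a `try`/`except`, since a model can return truncated or non-JSON output for a batch.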