Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a response by its comment ID, or browse the random samples below.

Random samples
| Comment ID | Preview |
|---|---|
| ytc_UgwPqg1h2… | Clearly AI generated. Reaction is not in place, comments are not in place. Movem… |
| ytc_Ugz_Z9ygM… | If this female robot has a working vagina, it will be next to impossible for r… |
| ytc_UgysEuAOb… | Data center water usage sounds like a trivially solvable problem being blown out… |
| ytr_Ugx8J-BM7… | And this is just wrong and ignorant. You do not understand AI, neither does the … |
| ytc_UgwbOlujW… | I failed to get a job long before AI was in the market. There were no jobs aroun… |
| ytc_Ugy2g65jQ… | Its not right that 😊Ai is being forced on people worldwide.they think they own u… |
| ytc_Ugw0Q-eeM… | I guess you scientist will never learn! You’ve already had to shut AI down becau… |
| ytr_Ugy-D6N1J… | @100c0c And we don't mind other artists being inspired by us. We give our conse… |
Comment
> To suggest this case and others like it are evidence that LLM based AI agents will not imminently and competently displace human lawyers is naive.
>
> A single LLM like GPT-3 or GPT-4 is like a genius with no capacity for reflection. Speaking the first thing it thinks with no review. If an accurate answer is not forthcoming (e.g. how to win an unwinnable case), it naturally makes something up because that's what is most consistent with being helpful (being helpful is it's prime directive).
>
> In this case, GPT-3 didn't fail. It did a superb job of exactly what it was set up to do: predict what a lawyer would most likely say given the absolute premise that the lawyer has something helpful to say.
>
> This is already a very well understood and solvable problem.
>
> Advanced techniques such as "chain of thought", and simply incorporating accuracy into the reward function, are already yielding promising results. None of these techniques were deployed in the case in question, so failure was a likely result.
youtube · AI Responsibility · 2023-09-13T21:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
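
For reference, here is a minimal sketch of how a coded record like the one above could be represented in a Python pipeline. This is an illustrative assumption, not the project's actual schema: the class and field names are hypothetical, and the value sets contain only the categories observed in the responses on this page.

```python
from dataclasses import dataclass

# Categories observed in the raw responses on this page (assumed, not exhaustive).
RESPONSIBILITY = {"ai_itself", "user", "none"}
REASONING = {"consequentialist", "deontological", "mixed", "unclear"}
POLICY = {"none", "industry_self"}
EMOTION = {"indifference", "approval", "mixed", "outrage", "fear"}


@dataclass
class CodingResult:
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: str  # ISO 8601 timestamp, e.g. "2026-04-27T06:26:44.938723"

    def validate(self) -> None:
        # Flag any dimension value that falls outside the observed category sets.
        checks = [
            ("responsibility", self.responsibility, RESPONSIBILITY),
            ("reasoning", self.reasoning, REASONING),
            ("policy", self.policy, POLICY),
            ("emotion", self.emotion, EMOTION),
        ]
        for name, value, allowed in checks:
            if value not in allowed:
                raise ValueError(f"unexpected {name}: {value!r}")
```

A validation step like this is mainly useful for catching model outputs that drift outside the expected categories before they reach a dashboard like this one.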
Raw LLM Response
[
{"id":"ytc_UgzDGZfGItK9LX2OAMR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzdYPkIAaHaRkeQuil4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzYx8QkEw5qEIlVa_V4AaABAg","responsibility":"user","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgyStPwsGTg_T6SQsMV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxowcuLHbRZgbj5O7d4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugxts-i6mZS9sahdjeR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyBK4e_MpA_S778Hlp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyE0HauJPrrOzypmcd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxiWicCvPsoko-JQc14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgzPyi3Axuars712SjV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
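
As an illustration of the "look up by comment ID" workflow, here is a hedged sketch of parsing a raw batch response like the one above and pulling out a single comment's coding. The function names are hypothetical; the only assumption is that the response arrives as a JSON array of objects keyed by "id", as shown above.

```python
import json
from typing import Optional


def parse_raw_response(raw: str) -> dict[str, dict]:
    """Parse a raw batch response (a JSON array of per-comment codings)
    into a mapping keyed by comment ID."""
    entries = json.loads(raw)
    return {entry["id"]: entry for entry in entries}


def lookup(codings: dict[str, dict], comment_id: str) -> Optional[dict]:
    """Return the coding for a single comment, or None if the model skipped it."""
    return codings.get(comment_id)


# Example using the first entry from the response above.
raw = (
    '[{"id":"ytc_UgzDGZfGItK9LX2OAMR4AaABAg","responsibility":"ai_itself",'
    '"reasoning":"consequentialist","policy":"none","emotion":"indifference"}]'
)
codings = parse_raw_response(raw)
print(lookup(codings, "ytc_UgzDGZfGItK9LX2OAMR4AaABAg"))
```

Keying the parsed entries by ID also makes it easy to spot comments the model silently dropped from a batch: a missing ID simply returns None.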