Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a coding by comment ID.
Comment
I am SO referencing this specific case in my degree's final project for Computer Engineering. I'm writing about ChatGPT and I have a section entirely dedicated to ethics and THIS is a perfect example of the downsides of LLMs. Because they only predict the text that follows, and this causes them to "hallucinate", it is so easy for them to generate misinformation when they don't have a very specific dataset or when they have to create something entirely new. GPT3.5 and GPT4 is obviously really advanced and can generate very convincing text that seems as if it had been made by a human, but the overreliance on these Large Language Models is causing people to do... very stupid things.
Even as someone who isn't a lawyer, the mistakes made by Schwartz and Lodoca are so clearly easily avoidable by FACT-CHECKING. And it's very telling that Schwartz thought ChatGPT was a "search engine" because I'm sure a lot of people think that (and I'm not going to get into the can of worms that is Bing Chat, which must not be helping this confusion that people have with what LLMs are).
LLMs and AI should be approached with a degree of skepticism, because they make PREDICTIONS according to a dataset, they can't spit out objective facts.
youtube · AI Responsibility · 2023-06-22T13:0… · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgyvONssAtPiQd8nQ754AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzKUTDvS_WODcFuMkx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw1GGxyjlVbhEngQ_J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwX48gPD3tgbVlbHpB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugz4fr9MCbpwqkVXswl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugyq0kAvNFGLZw3rWLd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxbGThnct8U6zDYUO54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgygKaXohmripXAyaZh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugxtd8u5Wll6kqVg8W94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugz_DzXBW3yOtTiuqId4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]