Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I am SO referencing this specific case in my degree's final project for Computer Engineering. I'm writing about ChatGPT and I have a section entirely dedicated to ethics and THIS is a perfect example of the downsides of LLMs. Because they only predict the text that follows, and this causes them to "hallucinate", it is so easy for them to generate misinformation when they don't have a very specific dataset or when they have to create something entirely new. GPT3.5 and GPT4 is obviously really advanced and can generate very convincing text that seems as if it had been made by a human, but the overreliance on these Large Language Models is causing people to do... very stupid things. Even as someone who isn't a lawyer, the mistakes made by Schwartz and Lodoca are so clearly easily avoidable by FACT-CHECKING. And it's very telling that Schwartz thought ChatGPT was a "search engine" because I'm sure a lot of people think that (and I'm not going to get into the can of worms that is Bing Chat, which must not be helping this confusion that people have with what LLMs are). LLMs and AI should be approached with a degree of skepticism, because they make PREDICTIONS according to a dataset, they can't spit out objective facts.
youtube AI Responsibility 2023-06-22T13:0… ♥ 3
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgyvONssAtPiQd8nQ754AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",      "emotion": "approval"},
  {"id": "ytc_UgzKUTDvS_WODcFuMkx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear",   "emotion": "fear"},
  {"id": "ytc_Ugw1GGxyjlVbhEngQ_J4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgwX48gPD3tgbVlbHpB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugz4fr9MCbpwqkVXswl4AaABAg", "responsibility": "unclear",   "reasoning": "unclear",          "policy": "unclear",   "emotion": "fear"},
  {"id": "ytc_Ugyq0kAvNFGLZw3rWLd4AaABAg", "responsibility": "user",      "reasoning": "deontological",    "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgxbGThnct8U6zDYUO54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_UgygKaXohmripXAyaZh4AaABAg", "responsibility": "user",      "reasoning": "deontological",    "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugxtd8u5Wll6kqVg8W94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none",      "emotion": "resignation"},
  {"id": "ytc_Ugz_DzXBW3yOtTiuqId4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",      "emotion": "indifference"}
]
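The raw response is a JSON array of per-comment records, so recovering the coding for a single comment is a parse-and-lookup by `id`. A minimal sketch, assuming the batch structure shown above (the array is truncated to two entries here, and the id `ytc_UgzKUTDvS_WODcFuMkx4AaABAg` is assumed to be the one belonging to the displayed comment, since its values match the coding result):

```python
import json

# Two entries from the raw batch response displayed above.
raw = """[
  {"id": "ytc_UgyvONssAtPiQd8nQ754AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzKUTDvS_WODcFuMkx4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]"""

# Index the batch by comment id for O(1) lookup.
codes = {entry["id"]: entry for entry in json.loads(raw)}

# Pull the four coded dimensions for one comment.
row = codes["ytc_UgzKUTDvS_WODcFuMkx4AaABAg"]
print(row["responsibility"], row["reasoning"], row["policy"], row["emotion"])
```

Indexing by `id` rather than scanning the list each time matters once the same lookup runs across thousands of coded comments.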