Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Unfortunately, we're already seeing the issue of "morality" in ChatGPT 3.0. I've tested it extensively. The old saying, "Garbage in - Garbage Out" applies. Ask it questions about health, religion, race, slavery and even politics. Overall, I've found that it is filled with misinformation. In many cases regurgitating "ideology" responses instead of factual. Sometimes, it refuses to answer the question by stating it can't answer the question because of .... Then when I dive down through the first 2-3 layers it will give me false information, leaning left, such as the issue of reparations. It's only when I get to about the sixth level in a deep dive, pointing out it's information is incorrect and challenge it's answers with more questions, does it finally provide the correct answers. But it twist itself every which way not to give you the answer, until you challenge it. Sadly, even though it finally provides the correct answers and "realizes" the previous answers were false, it can't correct itself.
Source: youtube · Viral AI Reaction · 2023-05-26T03:0…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       deontological
Policy          regulate
Emotion         outrage
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_Ugwyrn3ku7dwO_ClMwJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx5IFSTPWtQh1he8O94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzUjwH1hGjgTQGFmW94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwhoncpgpaWR_FncMl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzZDVpG8ZivcdzNyRF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzQasRi-iX6nnNCheV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyQtCeC3uBrGamjFyl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgxEyZa353uoMMG5ZE14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxzcgDkFxbzsQ8CwmJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxUBF62_aPAG0OVEVZ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}
]
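A raw response like the one above can be turned into per-comment codes with a short helper. This is a minimal sketch, not the tool's actual pipeline: the function name `parse_codes` is hypothetical, while the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) are taken directly from the JSON shown here.

```python
import json

# The four coding dimensions present in the raw response (besides the comment id).
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codes(raw_response: str) -> dict:
    """Map each comment id to its {dimension: value} codes.

    Hypothetical helper; assumes the raw LLM response is a JSON array of
    objects with an "id" field plus one field per coding dimension.
    """
    records = json.loads(raw_response)
    return {rec["id"]: {dim: rec[dim] for dim in DIMENSIONS} for rec in records}

# Example input: one record copied from the raw response above.
raw = ('[{"id":"ytc_UgxEyZa353uoMMG5ZE14AaABAg",'
       '"responsibility":"developer","reasoning":"deontological",'
       '"policy":"regulate","emotion":"outrage"}]')

codes = parse_codes(raw)
print(codes["ytc_UgxEyZa353uoMMG5ZE14AaABAg"]["emotion"])  # outrage
```

Keying the result by comment id makes it straightforward to join the codes back to the original comments, as in the "Coding Result" table above.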