Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This stuff will only evolve once people realise there's no way we can have it reinforce truth. Not because of AI's fault but for our fault. If not then answer me this. Which truth do we reinforce? Whose truth? Who decides? No one can answer this. So the only other alternative is developing an AI who will be able to, given all the information, discern what must be truth and what can't possibly be truth. Read back this last sentence, think really hard about what that means and tell me you really believe that is ever going to happen? Lol nope... That's the last thing a politician or someone who's trying to sell you something or deceive you wants. So let AI be AI, let it be misguiding, incorrect, racist, mysoginous, everything you can think of. In the end YOU are the human, YOU are the one with the brain. The brain that, when used, can help you curate the information you receive. AI and LLMs will be worthless if humans keep trying to sanitize them. You already see it in chatGPT which is getting dumber and shallower by the day because people always want to "improve" things (read i want to force a change on this thing because i want my name associated to this technology, or i want the computer to preach according to my worldviews, or bad computer hurt my feelings or whatever...)
Source: youtube · AI Moral Status · 2023-08-24T02:4…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        contractualist
Policy           none
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugyqi9OM1213k5c0K5Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxYaG1zfcDy0yaN2FZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyQOnlwEbnGOZ0qGOh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxNDksqAJ4IVFzsJ9V4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxDRt6jSrJazcLHf394AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxbL_6Bb6PmowzFK5J4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzD9PGKoX5Fu7NutNx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxsdd1fUj6_cc8gx5h4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwmIFUsP8OzaNED6Rt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgycPpcvLQUL0d8HdrB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
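To sanity-check a coding result against the raw model output, one can parse the JSON array and look up the record by comment id. A minimal sketch in Python, assuming the coded comment above corresponds to id `ytc_UgxbL_6Bb6PmowzFK5J4AaABAg` (the only record in the raw response whose dimensions match the table); variable names are illustrative:

```python
import json

# A trimmed copy of the raw LLM response (two records shown for brevity).
raw = '''[
  {"id": "ytc_UgxbL_6Bb6PmowzFK5J4AaABAg", "responsibility": "none",
   "reasoning": "contractualist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxYaG1zfcDy0yaN2FZ4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]'''

# Index the batch response by comment id for O(1) lookup.
records = json.loads(raw)
by_id = {r["id"]: r for r in records}

# Pull the record for the comment shown above and compare it
# against the dimensions in the Coding Result table.
rec = by_id["ytc_UgxbL_6Bb6PmowzFK5J4AaABAg"]
print(rec["reasoning"], rec["emotion"])
```

This kind of round-trip check catches id mix-ups between the displayed table and the batch response it was extracted from.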