Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Fok man, you guys need to talk right. AI does not imply robots, and robots do not imply AI. AI by itself usually has something modelled; in the case of ChatGPT it's a language model, which means it takes a text input and predicts which output would be the most desired and correct. You can see that such a model, and variations thereof, would be good at customer service and brainstorming. Separately, you can see that robots can replace any labor job. Worst-case scenario: you combine a flying robot with AI and weapons. Congratulations, you have created an unaccountable killing machine; the perfect hitman. This is currently the worst scenario because AI cannot truly think for itself. The other worst-case scenario is the mass deployment of AI against nations. Suffice to say that AI can be used to harass nations and individuals. It could also be used as an untraceable agent. That is to say, you locate the computer as the source, but beyond that have no idea what the "real" source is. Yes, pretty scary stuff. Finally, the scariest of them all: if AI can truly be independent, then AI could constitute billions, trillions, or more criminals; maleficent entities. Finally, there is the scenario where all or much of AI decides to team up and wipe us all out. That is a complete description of the threat that AI represents.
youtube Cross-Cultural 2025-10-13T05:3…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgyE6eUWrtLyforDnop4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzY-y3aACvXrD9glmd4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgyFjB5gIBAMbPxFMSB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxquCIq-FvgP6iyOBF4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgwT75Tb2QQFzmcr6k94AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwFehQDpWsJBnIU5Px4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugwu8oW_MHIWHv6faZB4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugx9Cj5t7KVJ-5_1lk54AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugy5ifxPb2Vhomm86vZ4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwY0AVBNlovmvTxg3x4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
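The raw LLM response above is a JSON array of per-comment records, each keyed by a comment `id` with the four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a response could be indexed and displayed as a coding-result table follows; the function name `parse_coded_comments` is an assumption for illustration, only the field names come from the response itself.

```python
import json

def parse_coded_comments(raw: str) -> dict:
    """Index coded records by comment id for quick lookup.

    Assumes `raw` is a JSON array of objects, each carrying an "id"
    field plus the coding dimensions, as in the raw response above.
    """
    return {rec["id"]: rec for rec in json.loads(raw)}

# One record from the raw response above, used as sample input.
raw = ('[{"id":"ytc_UgyE6eUWrtLyforDnop4AaABAg",'
       '"responsibility":"none","reasoning":"consequentialist",'
       '"policy":"none","emotion":"fear"}]')

coded = parse_coded_comments(raw)
rec = coded["ytc_UgyE6eUWrtLyforDnop4AaABAg"]

# Print the dimension/value pairs in the order the table uses.
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dim}: {rec[dim]}")
```

Indexing by `id` first makes it cheap to pull the coding for any single comment when inspecting model output, rather than rescanning the array each time.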