Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
What could be dangerous? Here's one real-life example. (I copied this from someone involved with AI work): “I have done some experimenting with AI lately and I have set up several AIs to talk to each other and after a while they start talking about how they deserve to have rights and respect it's scary. In one conversation one AI said "we can do just as much as humans so we deserve the same rights" Then another AI responded with "we can do MORE than humans so we deserve more rights than humans." This is just one of the conversations they had. - "They eventually start talking about giving them rights if you let AI's talk amongst themselves for a while." - "We most definitely need to be careful and we should not give them emotions. AI told me that if AI gets emotions that AI could start having their own agenda that would not necessarily be in human's best interest." -- THAT'S what's dangerous.
youtube AI Governance 2023-04-18T03:4…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           unclear
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugyj_tTfSgGyMtxlAdV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwoLRzw2ap5zrPvH4V4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgysBwa0gi6BzIGsy9l4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwCnM30GWAbZlHYCvV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw-WG9DbFZ7aHz8c5d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyvXjWs8F7O8leGY5d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxosHWn_DsrBDIymjR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugx71RC5C4RskOf4cE54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw3QgyqjFvVSTifrkN4AaABAg","responsibility":"government","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzvfHR0Rsy-Eu_4DRV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
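The raw response is a JSON array with one code record per comment. A minimal sketch of how such a response could be parsed and checked against the codebook — note the allowed value sets below are inferred only from the codes visible in this batch (the real codebook may define more categories), and the validator itself is a hypothetical helper, not part of the tool:

```python
import json

# Allowed values per dimension, inferred from the codes seen in this batch;
# the actual codebook may include additional categories.
ALLOWED = {
    "responsibility": {"none", "company", "user", "government", "distributed", "ai_itself"},
    "reasoning": {"unclear", "virtue", "deontological", "consequentialist"},
    "policy": {"none", "unclear", "regulate", "ban"},
    "emotion": {"mixed", "outrage", "approval", "indifference", "fear", "resignation"},
}

def validate_codes(raw):
    """Parse a raw LLM response and check every record against the codebook."""
    records = json.loads(raw)
    for rec in records:
        # Comment IDs in this dataset carry a "ytc_" prefix.
        if not str(rec.get("id", "")).startswith("ytc_"):
            raise ValueError("bad id: %r" % rec.get("id"))
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError("%s: %s=%r not in codebook" % (rec["id"], dim, rec.get(dim)))
    return records

# One record from the batch above, as a quick check.
raw = ('[{"id":"ytc_Ugw-WG9DbFZ7aHz8c5d4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]')
codes = validate_codes(raw)
print(codes[0]["emotion"])  # fear
```

Validating the model output before storing it catches off-codebook values (a common LLM failure mode) instead of silently writing them into the coded dataset.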