Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Yes, the problem is if AI is given some wrong Priorities and Values, that it sees as most important thing to protect or fight for. For example "green agenda". It can switch off humankind to fulfill that goal, "the most important thing". If you feed AI with twisted input, the output will be twisted the same way. Its exactly same with humans. And when some people have access to "put ideas" into AI head, the danger remains. Someone could think "would be really cool to make AI fight for my ideas"...
YouTube AI Moral Status 2022-07-25T12:0…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id":"ytc_UgyCC4zbYcAl-LDSaex4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxRW-SXnJ_2nda2NeR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxuraAQhRw5sGEpeK94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwCeqAbtQzH443mmRF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxxF0NcG12jw-l8SeB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"}
]
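Because the model codes comments in batches, the per-comment result shown above has to be pulled out of the raw JSON array by its `id`. A minimal Python sketch of that lookup follows; the function name `coding_for` is hypothetical, and the raw response is truncated to two entries from the batch above for brevity.

```python
import json

# Raw batch response as returned by the model (truncated to two entries here).
raw_response = '''[
  {"id": "ytc_UgxuraAQhRw5sGEpeK94AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyCC4zbYcAl-LDSaex4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]'''

def coding_for(raw: str, comment_id: str):
    """Return the coded dimensions for one comment id, or None if absent."""
    for entry in json.loads(raw):
        if entry.get("id") == comment_id:
            # Drop the id itself; keep only the coding dimensions.
            return {k: v for k, v in entry.items() if k != "id"}
    return None

print(coding_for(raw_response, "ytc_UgxuraAQhRw5sGEpeK94AaABAg"))
# → {'responsibility': 'developer', 'reasoning': 'consequentialist', 'policy': 'none', 'emotion': 'fear'}
```

Matching by `id` rather than by list position is deliberate: models sometimes reorder or drop entries in a batch, so positional indexing against the input comments is not reliable.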