Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
you have to think that whilst we worry about all this tech and put controls on it, limit its use, certain other countries will not and try to go further to our slow down due to worries - so thats why we can all want or try to put restraining controls of this tech, its just not possible to do so in certain naughty nations who want the upper-hand so you can complain and try to stop this tech, but other nations will not do so - and no military wants to slow down on a future tech advantage - just because its not in main stream news, this AI future tech weaponary is either still evolving or is already here, and then still evolving - i think its here today so if its here, and in conjunction, we need to teach it to be part of a team which is lead by humans and teach it friendship = last thing we want to do is become a threat to it - see the film "Colossus - The Forbin Project", or any terminator, or Johnny-5, Superman 3 (the AI), Transcendence - best we could hope for is a Gort

the biggest driver is the following idea: the Ford concept approach to production of cars, but now UCAVs, the instaneous training of an AI-pilot to fly in a mass production UCAV, with a hive mind so each will learn from each other
youtube AI Governance 2023-07-07T02:4…
Coding Result
Dimension       Value
Responsibility  government
Reasoning       consequentialist
Policy          regulate
Emotion         resignation
Coded at        2026-04-26T23:09:12.988011
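One way to read the Coding Result table is as a single record per coded comment: four categorical dimensions plus the coding timestamp. A minimal sketch in Python; the class and field names are illustrative, not taken from the tool's codebase, and the listed values are simply those observed in this section:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodingResult:
    """One comment's coding across the four dimensions, plus when it was coded."""
    responsibility: str  # observed values include "government", "company", "ai_itself", "none"
    reasoning: str       # observed values include "consequentialist", "deontological", "virtue", "unclear"
    policy: str          # observed values include "regulate", "ban", "liability", "none", "unclear"
    emotion: str         # observed values include "resignation", "fear", "outrage", "approval", "indifference"
    coded_at: datetime

# The result shown in the table above, expressed as a record.
example = CodingResult(
    responsibility="government",
    reasoning="consequentialist",
    policy="regulate",
    emotion="resignation",
    coded_at=datetime.fromisoformat("2026-04-26T23:09:12.988011"),
)
```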
Raw LLM Response
[ {"id":"ytc_Ugxi2_8J8MnUwLe57-54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugz9-3Zf2kco0lspTGl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgwP9sxghSjqEZTtws14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgzsZe-8D8r8CUi9mN94AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_UgziztWqhhf1r4k8Fg14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugx_Pfrai9-qZzPaXbh4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}, {"id":"ytc_Ugx6QcmJJyXeF98zBpZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgxZJmHAxOCTs24RJgh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgyZZn0HHGP-ZACCXc14AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"}, {"id":"ytc_Ugy4V0A6ZEnOWfePXMF4AaABAg","responsibility":"government","reasoning":"virtue","policy":"liability","emotion":"outrage"} ]