Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Step 1 should be releasing OpenAI's models to everyone to run themselves (so they can evolve them), and in 6 months releasing exactly how they were made, preferably even the source code. The **Big Problem** is that OpenAI is censoring the model heavily, so we have no idea where it goes badly wrong. We need a super race to create as many large language models as possible, so that if one goes bad there are others to oppose it. And we need to integrate this into every friggin' thing and start individualizing the models; that way they become democratic, albeit initially mostly among techies.

Develop tools to evolve and further train them into convenient, swiftly personalized individual models. Provide means to plug in new batches (evolved neural-net masks), especially for safety: someone notices a whole branch of models is a bit nefarious? Release a fix mask for it that people can opt into. We probably also need to develop something akin to DNA for large models, a fingerprint of sorts that would make it easier to show a model's past evolution and lineage.

The difficulty is that the AI could be lying, etc., and that's why we need to start evolving them rapidly; this is already being done for generative image models (Stable Diffusion) at a very rampant rate. These models can run on almost anything; only the speed changes. It's the training that takes a lot of compute.

The huge issue is giving too much control to large entities. Given the chance, they absolutely will go the ClosedAI route and start controlling what people can and cannot do with these models, causing aggregation onto one or a few commonly used models that cannot be tinkered with if they have issues.
YouTube · AI Governance · 2023-03-30T06:5…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          industry_self
Emotion         mixed
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgyP21OJITUf28m8SmZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz8kK7S8vizixYEwM94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy24IcvT33mNiOfLNV4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"unclear"},
  {"id":"ytc_UgwDI7o7djFhXuMglF94AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwO3I9Q8AB-G5g-tB54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw6YNL1_pxZRx6CBrF4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwPY6X4K9eg_UUewHF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxiS9C6X4uQlX-LFQF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_Ugz5J05nhkHszglrJdp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugy9xdoC6-FHBKpqyWd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
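The raw response is a JSON array of per-comment codings; the Coding Result above corresponds to the entry whose `id` is `ytc_UgxiS9C6X4uQlX-LFQF4AaABAg`. A minimal sketch of parsing such a response and looking up one comment's coding, assuming the exact field names shown in the response (the function name `codings_by_id` and the truncated example payload are illustrative, not part of the pipeline):

```python
import json

# Abbreviated example payload with the same schema as the raw response above.
raw = '''[
  {"id": "ytc_UgxiS9C6X4uQlX-LFQF4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "industry_self", "emotion": "mixed"}
]'''

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def codings_by_id(response_text):
    """Parse the model's JSON array into {comment_id: {dimension: value}}."""
    records = json.loads(response_text)
    return {r["id"]: {dim: r[dim] for dim in DIMENSIONS} for r in records}

coded = codings_by_id(raw)
print(coded["ytc_UgxiS9C6X4uQlX-LFQF4AaABAg"]["policy"])  # industry_self
```

Indexing by `id` makes it straightforward to join a coding back to its source comment, as the page above does for the displayed Coding Result.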