Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think, first of all, a categorization of AI companies is required. First are the ones who do research, come up with new powerful general-purpose models, and want to deploy them to the world. Second are the ones who depend on the first to create use-case-specific software. Third are the companies that just buy the end product and deploy it for use. IMO, the first group should be regulated by an agency and a licensing process if what you are putting out is usable beyond just a research paper, is fairly general purpose, and is beyond a certain threshold of capabilities. The second group can follow use-case-based general rules, so that someone trying to build software on top of the powerful models shouldn't need to go through government approvals. For them, high-risk use cases like employment and healthcare can have higher requirements to satisfy, while medium-risk ones have lower requirements and low-risk ones should be virtually regulation-free. In the third case, a company that is buying an end product should not be stifled with regulations at all, because that will slow down the adoption of the technology for good. They should be able to buy a product once it is compliant with the regulations of the risk category it belongs to and works on an agency-approved model. Still, the following things are unclear to us: 1. What to do with very powerful yet use-case-specific models that independent researchers create? Should research and publications be regulated? I am leaning towards a no. 2. What to do with open-source models? On one hand, open-source models are the best way to assure transparency, yet on the other hand they give a lot of power to everybody. Should the creation of open-source tools be restricted, or should the offering be regulated? What if a SaaS company uses such a model in the background? Which rules will apply to them?
youtube · AI Governance · 2023-05-20T14:5… · ♥ 1
Coding Result
Dimension       Value
Responsibility  company
Reasoning       contractualist
Policy          regulate
Emotion         indifference

Coded at: 2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgzZm19vwQkQl6KV7xd4AaABAg", "responsibility": "ai_itself",  "reasoning": "unclear",          "policy": "unclear",  "emotion": "indifference"},
  {"id": "ytc_UgyxKdWzkSOpeCVgKj54AaABAg", "responsibility": "developer",  "reasoning": "consequentialist", "policy": "unclear",  "emotion": "indifference"},
  {"id": "ytc_UgyAqod00ZRxj9dEJoF4AaABAg", "responsibility": "distributed","reasoning": "virtue",           "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxPEUo1WRd7GVEJD_t4AaABAg", "responsibility": "ai_itself",  "reasoning": "unclear",          "policy": "unclear",  "emotion": "mixed"},
  {"id": "ytc_Ugxnkg9Pu9pk0dBSzGB4AaABAg", "responsibility": "unclear",    "reasoning": "consequentialist", "policy": "unclear",  "emotion": "fear"},
  {"id": "ytc_UgwRcfh48Mk3g7ovLcJ4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "unclear",  "emotion": "fear"},
  {"id": "ytc_UgyGETzKrBCwBgtdicd4AaABAg", "responsibility": "company",    "reasoning": "contractualist",   "policy": "regulate", "emotion": "indifference"},
  {"id": "ytc_UgxOXA6s98zdf6zLu1t4AaABAg", "responsibility": "unclear",    "reasoning": "unclear",          "policy": "unclear",  "emotion": "mixed"},
  {"id": "ytc_Ugxf_TXzr9t3rRcfSnh4AaABAg", "responsibility": "unclear",    "reasoning": "consequentialist", "policy": "unclear",  "emotion": "indifference"},
  {"id": "ytc_UgxWjX1Oo-kwDB2eHup4AaABAg", "responsibility": "developer",  "reasoning": "virtue",           "policy": "regulate", "emotion": "approval"}
]
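The raw response is a JSON array of per-comment codings, keyed by comment `id`, with one value for each coding dimension (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch response can be parsed and the coding for one comment looked up; the function name `extract_coding` and the truncated two-entry sample are illustrative assumptions, not part of the original tool:

```python
import json

# Illustrative excerpt of a raw batch-coding response (two entries shown).
RAW_RESPONSE = """[
  {"id": "ytc_UgyGETzKrBCwBgtdicd4AaABAg", "responsibility": "company",
   "reasoning": "contractualist", "policy": "regulate", "emotion": "indifference"},
  {"id": "ytc_UgxWjX1Oo-kwDB2eHup4AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "regulate", "emotion": "approval"}
]"""

def extract_coding(raw: str, comment_id: str) -> dict:
    """Parse a raw LLM batch response and return the dimension -> value
    mapping for one coded comment, dropping the id field itself."""
    for entry in json.loads(raw):
        if entry.get("id") == comment_id:
            return {k: v for k, v in entry.items() if k != "id"}
    raise KeyError(f"no coding found for {comment_id}")

coding = extract_coding(RAW_RESPONSE, "ytc_UgyGETzKrBCwBgtdicd4AaABAg")
print(coding)
```

For the comment shown above, this yields exactly the Dimension/Value table in the Coding Result (responsibility: company, reasoning: contractualist, policy: regulate, emotion: indifference).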