Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
AI right now is garbage and I don't feel sorry for these tech companies losing h…
ytc_Ugxq5sSMM…
"AI won't kill us all " - the first statement of the description of this video, …
ytc_Ugzm_NBs2…
AI doesn't make art "accessible" or lowers the cost of entry. AI lowers the skil…
ytc_UgwbcH1vS…
Terms like autopilot and full-self-driving are fraudulent and Tesla should be su…
ytc_Ugzolth84…
I write to chat gpt like i would write a bro of mine. The Ai starts talking back…
ytc_Ugy65YUCY…
Those of you who are worried about robots killing us all are too late. *sigh* Th…
ytc_Ugz2HwJU5…
Not within our lifetime. It'll take 50-100 years for this thing to be anything o…
ytc_Ugyvif_xn…
Those fast food restaurant owners are ridiculous
They tried to use ai at the dr…
ytc_UgwM4cJBS…
Comment
I think, first of all, a categorization of AI companies is required. First are the ones who do research, come up with new, powerful general-purpose models, and want to deploy them to the world. Second are the ones who depend on the first to create use-case-specific software. Third are the companies who just buy the end product and deploy it for use.
IMO, the first group should be regulated through an agency and licensing process if what you are putting out is usable beyond just a research paper, is fairly general-purpose, and exceeds a certain threshold of capabilities. The second group can follow use-case-based general rules, so that someone building software on top of the powerful models shouldn't need to go through government approvals. For them, high-risk use cases like employment and healthcare can carry higher requirements, medium-risk ones lower requirements, and low-risk ones should be virtually regulation-free. In the third case, a company buying an end product should not be stifled with regulations at all, because that would slow down the adoption of the technology for good. They should be able to buy a product once it is compliant with the regulations of its risk category and runs on an agency-approved model.
Still, the following things are unclear to us:
1. What to do with very powerful yet use-case-specific models that independent researchers create? Should research and publications be regulated? I am leaning towards a no.
2. What to do with open-source models? On one hand, open-source models are the best way to ensure transparency; on the other hand, they give a lot of power to everybody. Should the creation of open-source tools be restricted, or should the offering be regulated? What if a SaaS company uses such a model in the background: which rules will apply to them?
youtube
AI Governance
2023-05-20T14:5…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
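A coded row like the one above can be checked against the scheme's categorical vocabularies before being stored. A minimal sketch, assuming the value sets inferred from the samples on this page are exhaustive (the real coding scheme may include more categories):

```python
# Allowed values per dimension, inferred from the codings shown on this
# page (an assumption; the real vocabulary may be larger).
ALLOWED = {
    "responsibility": {"company", "developer", "government",
                       "distributed", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "contractualist", "virtue", "unclear"},
    "policy": {"regulate", "unclear"},
    "emotion": {"indifference", "fear", "outrage", "approval",
                "mixed", "unclear"},
}

def validate(row: dict) -> list[str]:
    """Return the dimensions whose value is out of vocabulary."""
    return [dim for dim, allowed in ALLOWED.items()
            if row.get(dim) not in allowed]

row = {"responsibility": "company", "reasoning": "contractualist",
       "policy": "regulate", "emotion": "indifference"}
print(validate(row))  # []
```

Anything the model emits outside these sets (a misspelled label, a new category) surfaces immediately instead of silently entering the dataset.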
Raw LLM Response
[
{"id":"ytc_UgzZm19vwQkQl6KV7xd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyxKdWzkSOpeCVgKj54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyAqod00ZRxj9dEJoF4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxPEUo1WRd7GVEJD_t4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugxnkg9Pu9pk0dBSzGB4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwRcfh48Mk3g7ovLcJ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyGETzKrBCwBgtdicd4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgxOXA6s98zdf6zLu1t4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugxf_TXzr9t3rRcfSnh4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxWjX1Oo-kwDB2eHup4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"approval"}
]
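The raw response above is a JSON array with one object per coded comment. The lookup-by-comment-ID step that this page offers can be sketched in a few lines of Python, using one row copied from the output above:

```python
import json

# Raw LLM response: a JSON array of coded comments (one row copied
# from the output above as an example).
raw = '''[
  {"id": "ytc_UgyGETzKrBCwBgtdicd4AaABAg",
   "responsibility": "company", "reasoning": "contractualist",
   "policy": "regulate", "emotion": "indifference"}
]'''

# Index the array by comment ID so individual codings can be looked up.
coded = {row["id"]: row for row in json.loads(raw)}

entry = coded["ytc_UgyGETzKrBCwBgtdicd4AaABAg"]
print(entry["policy"])  # regulate
```

Because comment IDs are unique, the dict comprehension gives O(1) lookup for the "Look up by comment ID" view without rescanning the array.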