U.S. President Joe Biden has decided that cracking down on open-source AI software development is a good idea.
An executive order released by the Biden administration at the end of October states that its objective is to establish industry standards for artificial intelligence and to protect individuals, government entities, and businesses.
Some have seen this as an effort to limit open-source AI and lock it up in the hands of a few large companies working under closed, proprietary development paradigms, which would leave those companies in control of the entire industry.
The executive order has been criticized as vague and overly wordy, which has caused confusion. Even so, based on what is known so far, influential figures in the industry are worried about its potential impact on innovation and the risk of entrenching monopolies.
Critics argue that the executive order hamstrings smaller businesses and startups that cannot afford the resources needed to compete or comply.
The order tasks the National Telecommunications and Information Administration (NTIA) with proposing measures to limit open-source AI by July 2024, though no final decision has been made yet.
Opponents of the order are right to be concerned that Big Tech and its “symbiotic partner,” the government, would reap the benefits if the proposals come to fruition. Leading AI ventures, such as the ironically named OpenAI and comparable efforts at Google and Microsoft, are already largely closed off and probably not keen on competition.
With this executive order, the president hopes to show that the US, already a global AI research and development leader, will also be a regulatory frontrunner in this rapidly evolving field. Although the order primarily focuses on security and safety requirements, it also includes measures to promote US artificial intelligence research and development, such as funding incentives for international students and researchers to study at US universities.
Limiting China’s progress is another part of the policy, one Biden has openly acknowledged. He pointed, for example, to the recently tightened export rules that restrict Beijing’s access to the most powerful computer chips needed to build large language models, the AI systems trained on massive amounts of data.
Microsoft, Google, and two AI start-ups, OpenAI and Anthropic, met with the Biden administration in May, and in July those four, joined by three more companies, voluntarily pledged to carry out security and safety testing on their systems.