Shipping News and Reviews

Big tech's stranglehold on artificial intelligence needs to be regulated

Google CEO Sundar Pichai has suggested more than once that artificial intelligence (AI) will affect human evolution more profoundly than humanity's harnessing of fire. He was speaking, of course, of AI as a technology that gives machines or software the ability to mimic human intelligence and perform increasingly complex tasks with little or no human intervention.

You may dismiss Pichai's comparison as the usual Silicon Valley hype, but the company's dealmakers aren't laughing. Since 2007, Google has bought at least 30 AI companies working on everything from image recognition to human-sounding computer voices – more than any of its big tech counterparts. One of those acquisitions, DeepMind, which Google bought in 2014, recently announced that it could predict the structure of every protein in the human body from the DNA of cells – an accomplishment that could spark numerous breakthroughs in biological and medical research. Of course, these breakthroughs will only happen if Google grants broad access to DeepMind's findings. The good news is that Google has decided to do so – but there is a catch.

For one thing, Google isn't the only gatekeeper whose decisions will largely determine the direction of AI technology. The list of companies snapping up AI startups around the world is dominated by the same familiar big tech names that so often accompany the search and advertising giant: Apple, Facebook, Microsoft and Amazon. In 2016, this group, along with Chinese mega-players like Baidu, spent $20 billion to $30 billion of an estimated global total of $26 billion to $39 billion on AI-related research, development and acquisitions. With their dominance in search, social media, online retail and app stores, these companies hold near-monopolies on user data. And with their rapidly growing, increasingly ubiquitous cloud services, Google, Microsoft, Amazon and their Chinese counterparts are positioning themselves to become the top AI suppliers for everyone else. (In fact, AI-as-a-service is already a $2 billion-a-year industry and is expected to grow at 34 percent annually.) According to a soon-to-be-released study by my team at Digital Planet, AI talent is also highly concentrated in US corporations: the median number of AI employees among the top five – Amazon, Google, Microsoft, Facebook and Apple – is around 18,000, while the median for companies ranked six through 24 is around 2,500. Beyond that, the numbers drop sharply.

The potential of AI is great and widespread: from efficiency gains and cost savings in almost every industry to revolutionary impacts in education, agriculture, finance, national security and more. We just saw an example of the many AI-powered changes: lockdown restrictions imposed in the wake of the COVID-19 pandemic led many companies to adopt bots and automation to replace people. At the same time, AI could also create new jobs and increase productivity. AI is two-faced in other ways as well: it accelerated the development and adoption of COVID vaccines by predicting the spread of infections at the district level to inform site selection for clinical trials, and it helped social media companies spot fake news without having to employ human editors. But AI-optimized algorithms in search and social media have also created echo chambers for anti-vaxxer conspiracy theories by targeting the most vulnerable users. There are growing concerns about ethics, fairness, privacy, surveillance, social justice and transparency in AI-powered decision-making. Critics warn that democracy itself could be threatened if AI runs amok.

In other words, this mix of positives and negatives is what makes this potent new suite of technologies so consequential right now. Can we trust a handful of companies that have already lost public trust to steer AI in the right direction? Given the business models that drive their motivations, we have ample cause for concern. For ad-driven companies like Google and Facebook, it's clearly beneficial to amplify content that travels fast and attracts attention – as misinformation usually does – while micro-targeting that content using collected user data. Consumer goods companies like Apple will be motivated to prioritize AI applications that help differentiate and sell their most profitable products – hardly a recipe for maximizing AI's positive impact.

Another challenge is how innovation resources are prioritized. The shift online during the pandemic produced outsized profits for these companies and put even more power in their hands. They can be expected to try to keep this momentum going by prioritizing the AI investments that best align with their narrow commercial goals while ignoring the myriad other opportunities. In addition, big tech operates in markets with strong economies of scale, which encourages big bets that can waste enormous resources. Who remembers IBM's Watson initiative? It aspired to become the universal digital decision-making tool, especially in healthcare – and failed to live up to the hype, as have the much-hyped driverless car initiatives from Amazon and Google parent Alphabet. While failures, false starts and pivots are a natural part of innovation, expensive flops by a few enormously wealthy companies divert resources from more diversified investments in a range of socially productive applications.

Despite the growing importance of AI, US technology policy is fragmented and lacks a unified vision. It also seems like an afterthought, as lawmakers focus on big tech's anticompetitive behavior in its main markets – from search to social media to app stores. This is a missed opportunity, because AI has the potential for far deeper societal impact than search, social media and apps.

There are three types of action that policymakers should consider to pry AI out of the clutches of big tech. First, they can increase public investment in AI. Second, mechanisms should be put in place to deter harmful uses of AI and to protect consumer privacy. Third, given the concentration of AI in the hands of just a few big tech players, the antitrust machinery should be made more forward-looking. That would mean anticipating the risks of a small group of large companies controlling a technology with such wide-ranging applications – and establishing a system of carrots and sticks to channel that control well. Such proactive regulation is needed even though policymakers, given these companies' scale, technical knowledge and market access, will ultimately have to rely on the same companies to drive AI development.

While the federal budget proposal for 2022 includes $171 billion for public research and development, it does not specify how much will be spent on AI. By some estimates, federal AI research will receive $6 billion, with an additional $3 billion allocated for external AI-related contracts. In 2020, one major federal agency, the National Science Foundation, spent $500 million on AI and worked with other agencies to allocate an additional $1 billion to 12 institutes and public-private partnerships. Budget allocations for 2021 include $180 million for new AI research institutes and an additional $20 million for studying AI ethics. Other federal departments, such as Energy, Defense and Veterans Affairs, have AI projects of their own underway. In August 2020, for example, the Department of Energy allocated $37 million over three years to fund AI research and development for processing data and operations at the department's scientific user facilities. All of these numbers are dwarfed by big tech's.

In addition to public investment in AI, we need to anticipate future uses of AI and regulate current ones. The US National Defense Authorization Act aims to ensure that AI is developed ethically and responsibly. The National Institute of Standards and Technology is tasked with managing AI risk. The Government Accountability Office has published reports highlighting the public-safety risks of facial recognition and forensic algorithms, and has provided an accountability framework to help federal agencies and others use AI responsibly. However, all of these guidelines need to be integrated into a more formal legal framework.

With the vast majority of AI investment and talent concentrated in a small handful of companies, the Biden administration's emerging antitrust revolution could play a key role. The administration is targeting big tech's overwhelming dominance in social media, search, app stores and online retail. Many of these markets and their structures may be difficult to change, as the tech companies act preemptively to entrench their positions, as I have described previously in Foreign Policy. The AI market, however, is still emerging and potentially malleable. The large technology companies could be given incentives to prioritize socially beneficial AI applications and to open their data, platforms and products to the public. To unlock these AI vaults, the US government could use the leverage created by the multiple antitrust proceedings under consideration against big tech. The historic precedent of Bell Labs can provide inspiration: the 1956 federal consent decree against the Bell System, which at the time held a national telecommunications monopoly, kept the company intact, but in return required it to license all of Bell Labs' patents royalty-free to other companies. This use of public leverage led to a burst of technological innovation across several economic sectors.

You may or may not agree with Pichai's claim that AI's impact on humanity will rival that of fire, but he made another comment that is much harder to argue with: "(Fire) kills people, too." To its credit, Google-owned DeepMind is providing open access to more than 350,000 protein structures for public use. At the same time, it remains unclear whether Google gave life science companies within Alphabet's corporate empire proprietary early access to the protein trove – and if so, how those companies might leverage it.

When the emerging world of AI is dominated by a handful of companies without public oversight and engagement, we run two risks: we limit others' access to the tools to light their own fires, and we could burn down parts of the social fabric if these companies point the fire in the wrong direction. If we can create new mechanisms to avoid these risks, AI could turn out to be even bigger than fire.