By Georgios · May 9, 2023 · In AI, OpenAI

US Fails to Adequately Address AI Regulation Concerns: Limited Funding and Exclusion of Ethics Researchers

President Joe Biden’s recent meeting with CEOs of major AI companies, including Google, Microsoft, OpenAI, and Anthropic, aimed to emphasize the importance of ensuring the safety of AI products. However, the meeting has drawn criticism for the exclusion of ethics researchers and the limited funding allocated to AI research. This article explores the concerns raised by the AI community and the potential consequences of the US administration’s approach to AI regulation.

Limited funding and focus on the wrong actors

The White House’s announcement of a $140 million investment to launch seven AI research institutes through the National Science Foundation is insufficient when compared to the scale and potential impact of AI technology. As AI continues to advance at a rapid pace, proper funding and support are essential to ensure that research and development remain focused on addressing the pressing ethical and societal issues that AI presents.

Moreover, Biden’s decision to consult with the CEOs of major AI companies has raised concerns among AI ethics researchers who have been warning about AI’s dangers for years. AI researcher Dr. Timnit Gebru criticized the meeting on Twitter, stating: “A room full of the dudes who gave us the issues & fired us for talking about the risks, being called on by the damn president to protect people’s rights.” By excluding these researchers from the conversation, the administration runs the risk of perpetuating the very problems it aims to address.

Risks of relying on industry leaders

By inviting the leaders of the very companies responsible for creating the AI problems the White House seeks to address, the administration inadvertently reinforces the narrative that corporations can self-regulate. University of Oxford AI ethics researcher Elizabeth Renieris expressed her concerns on Twitter: “Unfortunately, and with all due respect POTUS, these are not the people who can tell us what is ‘most needed to protect society’ when it comes to #AI.”

This approach is problematic, as it undermines the need for a more comprehensive and balanced approach to AI regulation, involving not just industry players but also independent researchers, ethicists, and watchdog organizations.

The inadequate response to AI’s impact on society

The current approach to AI regulation in the US does not adequately address the potential negative consequences of AI technology, including privacy issues, employment bias, and the potential for AI to be used in misinformation campaigns. The Biden administration’s “AI Bill of Rights” is a step in the right direction, but it is crucial to involve a broader range of stakeholders and voices to ensure that AI technology is developed and deployed responsibly.

Inadequate Leadership: Vice President Kamala Harris


Another point of concern in the US administration’s approach to AI regulation is the leadership role assigned to Vice President Kamala Harris. Harris, who chaired the meeting with AI company CEOs, may not be the most suitable person to lead the discussion and address the complex issues surrounding AI technology, for several reasons:

  1. Lack of technical expertise: Harris, a career politician and former attorney general, may not possess the necessary technical expertise to fully comprehend the complexities and intricacies of AI technology. Her background in law and politics, while valuable in other areas, does not necessarily qualify her to make informed decisions about the future of AI research and its impact on society.
  2. Potential conflicts of interest: As a politician, Harris may be influenced by political considerations, which could potentially sway her decisions regarding AI regulation. This creates a conflict of interest, as her primary responsibility should be to protect the public’s best interests, rather than to cater to the interests of industry leaders or political allies.
  3. Failure to involve AI ethics researchers: Harris’s decision to chair a meeting that excluded AI ethics researchers, who have been warning about AI’s dangers for years, further calls into question her ability to effectively address the challenges posed by AI technology. By neglecting to involve these experts, Harris risks perpetuating the very problems that AI regulation should address.
  4. Insufficient emphasis on the broader implications of AI: Harris’s statement following the White House meeting focused primarily on the ethical, moral, and legal responsibilities of private companies in ensuring the safety and security of their products. While these are important considerations, they do not address the broader societal, economic, and psychological impacts of AI technology. A more comprehensive approach to AI regulation would involve considering these wider implications and engaging a diverse range of stakeholders to ensure that AI serves the best interests of society as a whole.

The US administration’s approach to AI regulation, as evidenced by the recent White House meeting and limited funding allocation, falls short of addressing the critical concerns raised by AI ethics researchers and the broader AI community. To ensure that AI technology serves society’s best interests, it is crucial to involve a diverse range of voices and perspectives, allocate sufficient funding, and prioritize a comprehensive approach to regulation that acknowledges and addresses the potential risks and challenges that AI presents.
