Assess risks and promote democratic values: AI experts weigh in on Fintech and global competitiveness

When regulating artificial intelligence (AI) in financial services, and in the context of the United States’ global competitiveness, we must be vigilant against risk while emphatically promoting democratic values, said experts who testified at the US Chamber of Commerce’s AI Commission field hearing in London.

The importance of AI in financial services

“Financial services need AI … There are many, many older technological and manual data processes,” testified Rupak Ghose, Chief Operating Officer at Galytix, an AI-powered FinTech firm.

But before taking full advantage of AI, Ghose emphasized the need to examine the impact of potential bad actors and the interplay between different AI models. AI bots, for example, have the scope and influence to move markets with a single tweet.

Ghose added, “Rules are only as good as the police we have implementing these rules … the question is, do you have the right people in place in the private sector and the government to monitor this?”

Regulation of AI

According to Philip Lockwood, NATO’s Deputy Head of Innovation, the main driver of innovation and cutting-edge technology has moved from government and the defense industry to the private sector.

“If you look at the list of technologies on our [emerging and disruptive technologies] list, AI, quantum, autonomy, biotechnology, human enhancement, this kind of thing, the vast majority of the spending on these actually comes from the private sector.” The defense and security use of AI is therefore inextricably linked to commercial use. At present, the EU’s draft AI regulation excludes defense, security, and military uses from its scope. However, “if most of the AI development is really driven by commercial purposes, most of the AI that we are interested in at a basic level is actually within the scope of the regulation. And then it has a very significant impact [on our work].”

When it comes to AI regulation, Kenneth Cukier, executive editor at The Economist and host of its Babbage podcast, drew a distinction between input privacy and output privacy.

“Input privacy is the data that goes into the model, and output privacy is how the data is used … Often, in privacy law, we regulate the collection of the data, because it’s easier … but with use, it’s a little harder,” said Cukier. To illustrate the difference, he pointed to images that people upload to social media and want to keep online. But if a platform uses those images in ways we are not comfortable with, such as for law enforcement, then we would regulate output privacy.

AI’s impact on society

“Most technologies in recent centuries have been a democratizing force … The problem with AI, at least so far, is that it seems to be very hierarchical and not democratizing,” Cukier said. “It requires increasing levels of scale and resources to be extremely good at it … the companies that have adopted AI outperform others in their industry by 10 to 20 times the baseline.”

But the answer is not to pull down the winners. “We should let the winners flourish, but help people, not companies. I think public policy should focus on that,” he added.

Carissa Véliz, Associate Professor at the Faculty of Philosophy and the Institute for Ethics in AI and Tutorial Fellow at the University of Oxford, also highlighted how AI can affect people.

“The way we deploy AI is changing the distribution of risk in society in problematic ways, especially in the financial sector,” she said. Referring to how responsibility for risk shifted from banks to individuals in the 2008 financial crisis, Véliz warned: “There was a disconnect between the people who made the risky decisions and the people who paid the price when things went wrong … And I think we may be facing a similar type of risk, where we use AI to minimize the risk to an institution … but it’s actually just pushing risk onto the shoulders of individuals.”

Global competition for AI influence

Witnesses emphasized the differing, values-based approaches of Western countries and more authoritarian regimes such as China, Russia, and others.

“We are going to have spheres of influence in AI, just as we have had in international relations,” Cukier said. “We’re going to have a Western flavor of artificial intelligence based on Western values - it’s going to make the disputes between America and Europe over GDPR seem like a trifle, because there is so much more that brings us together than separates us - versus the authoritarian countries, China, Russia, many others, and their flavor of AI.”

In addition, Cukier touched on how this struggle for influence will play out in markets such as Latin America, Asia, and Africa: “So the stakes are very high. And I think the Chamber of Commerce has a big role to play in ensuring that these values are part of the AI conversation.”

Is the United States behind China?

Some speakers discussed the growing gap between the United States and China. “In financial services, I think more than in any other industry, China is ahead in AI,” Ghose noted. “They are far ahead when it comes to mass consumption of artificial intelligence in the financial sector.”

“China is actually surpassing the United States in terms of STEM PhD growth,” said Nathan Benaich, founder and general partner of Air Street Capital, a venture capital firm that invests in AI-first technology and life science companies. “In fact, they are projected to produce roughly double the number of STEM PhDs by 2025. In the meantime, you see many examples in the Western world of STEM budgets being cut, and that is driving this migration into industry.”

Export of democratic values

When comparing our progress with China’s, our goal should not be to imitate or compete against their model, Véliz emphasized.

“Instead of moving away from a system like China’s techno-authoritarian style, we’re actually trying to compete with them. And I think this is a mistake,” she said. “This is a time to defend our liberal values and for the world’s democracies to come together … Given that China exports surveillance, our job as liberal democracies is to export privacy.”

Lockwood reiterated this point: “We believe that accelerating responsible innovation is essential to ensuring that we build trust and accountability in these areas, and that is on the basis of our shared democratic principles … We must be able to demonstrate that we are taking concrete steps and actions to bridge this gap and to demonstrate that we are in fact different from adversaries and competitors in this area.”

What’s next?

To explore critical questions about AI, the US Chamber’s AI Commission is holding a series of field hearings in the United States and abroad to hear from experts on a variety of topics. Previous hearings took place in Austin, TX; Cleveland, OH; Palo Alto, CA; and London, UK. The final field hearing will take place in Washington, DC, on July 21 and will focus on national security and intellectual property in artificial intelligence.

Learn more about the AI Commission here.

About the author

Michael Richards

Director, Policy, US Chamber of Commerce Technology Engagement Center (C_TEC)
