Rapid advancements in AI make upholding high ethical standards essential, as much for legal reasons as moral ones.
During a session at this year’s AI & Big Data Expo Europe, a panel of experts provided their views on what businesses need to consider before deploying artificial intelligence.
Here’s a list of the panel’s participants:
Moderator: Frans van Bruggen, Policy Officer for AI and FinTech at De Nederlandsche Bank (DNB)
Aoibhinn Reddington, Artificial Intelligence Consultant at Deloitte
Sabiha Majumder, Model Validator – Innovation & Projects at ABN AMRO Bank N.V.
Laura De Boel, Partner at Wilson Sonsini Goodrich & Rosati
The first question called for thoughts about current and upcoming regulations that affect AI deployments. As a lawyer, De Boel kicked things off by giving her take.
De Boel highlights the EU’s upcoming AI Act, which builds on the foundations of existing legislation such as GDPR and extends them to artificial intelligence.
“I think that it makes sense that the EU wants to regulate AI, and I think it makes sense that they are focusing on the highest risk AI systems,” says De Boel. “I just have a few concerns.”
De Boel’s first concern is how complex it will be for lawyers like herself.
“The AI Act creates many different responsibilities for different players. You’ve got providers of AI systems, users of AI systems, importers of AI systems into the EU — they all have responsibilities, and lawyers will have to figure it out,” De Boel explains.
The second concern is how costly this will all be for businesses.
“A concern that I have is that all these responsibilities are going to be burdensome, a lot of red tape for companies. That’s going to be costly — costly for SMEs, and costly for startups.”
Similar concerns were raised about GDPR. Critics argue that overreaching regulation drives innovation, investment, and jobs out of the EU and leaves countries like the USA and China to lead the way.
Peter Wright, Solicitor and MD of Digital Law UK, once told AI News about GDPR: “You’ve got your Silicon Valley startup that can access large amounts of money from investors, access specialist knowledge in the field, and will not be fighting with one arm tied behind its back like a competitor in Europe.”
The concerns raised by De Boel echo Wright’s, and it’s true that the AI Act will have a greater impact on startups and smaller companies, which already face an uphill battle against established industry titans.
De Boel’s final concern on the topic is about enforcement and how the AI Act goes even further than GDPR’s already strict penalties for breaches.
“The AI Act really copies the enforcement of GDPR but sets even higher fines of 30 million euros or six percent of annual turnover. So it’s really high fines,” comments De Boel.
“And we see with GDPR that when you give these types of powers, it is used.”
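To make that penalty ceiling concrete, here is a minimal sketch in Python. It assumes, as under GDPR-style penalty regimes, that the applicable maximum is whichever is higher of the fixed amount and the turnover-based figure; the numbers themselves come straight from De Boel’s quote above.

```python
# Illustrative only: assumes the cap is the higher of a fixed amount and a
# percentage of annual turnover, mirroring the GDPR penalty structure.
FIXED_CAP_EUR = 30_000_000   # 30 million euros
TURNOVER_RATE = 0.06         # six percent of annual turnover

def max_fine(annual_turnover_eur: float) -> float:
    """Return the maximum possible fine for a given annual turnover."""
    return max(FIXED_CAP_EUR, TURNOVER_RATE * annual_turnover_eur)

# A company with EUR 1bn in turnover faces a ceiling of EUR 60m, not EUR 30m.
print(f"Maximum fine: EUR {max_fine(1_000_000_000):,.0f}")
```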
Outside of Europe, different laws apply. In the US, rules such as those around biometric recognition can vary greatly from state to state. China, meanwhile, recently introduced a law that requires companies to give consumers the option to opt out of things like personalised advertising.
Keeping up with the ever-changing laws around the world that may impact your AI deployments will be difficult, but failing to do so could result in severe penalties.
The financial sector is already subject to very strict regulations and has used statistical models for decades for things such as lending. The industry is now increasingly using AI for decision-making, which brings with it both great benefits and substantial risks.
“The EU requires auditing of all high-risk AI systems in all sectors, but the problem with external auditing is there could be internal data, decisions, or confidential information which cannot be shared with an external party,” explains Majumder.
Majumder goes on to explain that it’s therefore important to have a second line of review that is internal to the organisation but looks at the model from an independent, risk management perspective.
“So there are three lines of defence: first, when developing the model; second, when we assess it independently through risk management; and third, the auditors and the regulators,” Majumder concludes.
Of course, when AI makes the right decisions, everything is great. When it inevitably doesn’t, the results can be seriously damaging.
The EU is keen on banning AI for “unacceptable” risk purposes that may damage the livelihoods, safety, and rights of people. Three other categories (high risk, limited risk, and minimal/no risk) will be permitted, with decreasing amounts of legal obligations as you go down the scale.
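As a rough illustration of how a compliance team might record this tiering internally, here is a minimal sketch. The four categories follow the Act’s proposal, but the example systems and obligation summaries are hypothetical and not taken from the legislation’s text.

```python
from enum import Enum

# Risk tiers proposed in the EU AI Act, ordered from most to least regulated.
class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "conformity assessment, documentation, and auditing"
    LIMITED = "transparency obligations"
    MINIMAL = "no additional legal obligations"

# Hypothetical internal register mapping AI use cases to an assessed tier.
ai_register = {
    "social scoring of individuals": RiskTier.UNACCEPTABLE,
    "credit-scoring model": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

for use_case, tier in ai_register.items():
    print(f"{use_case}: {tier.name} -> {tier.value}")
```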
Source: https://www.artificialintelligence-news.com/