Every time the government announces a new form of support, fraud is not far behind. The recent economic stimulus grant scheme, which provides help to businesses and individuals affected by the Covid-19 pandemic, is no exception. We began to see stories about economic stimulus fraud within a few days of the roll-out of the scheme.
So far, so predictable. Yet economic stimulus fraud has, in other ways, some unique features. As one of the biggest programs of support ever rolled out by the US government, and with more than 150 million people eligible to claim it, the scale of potential fraud is unprecedented. It’s no wonder, then, that the government is turning to an emerging technology – Artificial Intelligence (AI) – in order to investigate potentially fraudulent claims.
In this article, we’ll look at why the stimulus package has become such a target for fraud, and how AI is being used to limit this.
There are a number of reasons why the recent economic stimulus package has become such a target for fraudsters. The simplest is that a huge amount of money is being distributed through the scheme, and fraudsters think they can therefore gain substantially by scamming the US government.
This factor is compounded by another: the rules of the scheme are incredibly complex, and change all the time. In this regard, the economic stimulus package is similar to some of the other support packages rolled out by the US government over the last decade, which also became targets for fraud.
It’s hardly surprising, then, that many analysts are worried about the scale of the fraud that the economic stimulus package, also known as the CARES Act, will catalyze. The NYT reported last month that criminals – including organized crime enterprises such as the Mafia – are planning to take advantage. Most of these fraud schemes rely on the fact that the rules of the CARES Act are constantly changing, betting that even employees of the federal government can’t keep up with them.
AI to the Rescue?
At first glance, the task of identifying economic stimulus fraud would seem to be a great fit for AI systems, which have already had impressive success in protecting against identity theft, a parallel type of crime that shares many features with CARES fraud. Specifically, AI can sift through enormous amounts of data (relatively) quickly and identify suspicious patterns.
In addition, many of the organizations tasked with fighting CARES fraud – not just the US government, but also the banks who will distribute funds through the scheme – lack the necessary resources to investigate potential fraud manually. Many of these organizations have sought to cut costs in recent years, in no small part due to the imposition of data compliance legislation that requires significant investment.
AI systems designed to catch fraud are also some of the most well-developed solutions around. Many millions of dollars of research funding have been poured into building systems that are able to identify credit card fraud, for instance, and at first glance CARES fraud appears to operate in much the same way.
All this said, there remain some issues when it comes to using AI to fight fraud, and especially economic stimulus fraud. The problems stem from the basic model by which AIs are deployed against fraud. The standard approach is to build these systems to err on the side of caution, meaning they flag many more instances of “potential” fraud than actually exist. In statistical jargon, these extra flags are known as false positives.
In the AI-driven fraud prevention attempts that have been used up until now, this model has worked just fine. It assumes that credit card fraud – for instance – is fairly rare in comparison to the number of accounts held by a particular institution. Because of this, it’s possible for human analysts to check through each reported instance of fraud, and investigate it more fully. Given the scale of the economic stimulus package, some analysts fear that this will not be possible in the present case.
In this context, some analysts are arguing that fraud-fighting AI needs to be improved. By deploying even more technology, they argue, these systems can be made more effective at catching fraud, and this will have a follow-through effect on all types of fraud.
Others are not so sure. As we’ve seen, one of the major reasons why the CARES Act has become such a magnet for fraud is that the provisions it contains are complex, and have been changed many times in just a few months. In addition, the basic text of the legislation contains some fundamental errors that could also be exploited by scammers. The problem is not fraud itself, it seems, but that the rushed process in which the act was drafted makes it difficult to stop criminals from abusing it.
At an even broader level, it’s far from clear that the federal government is the best-prepared organization to disburse these funds. It has a bad record of protecting citizens’ money from criminals, after all, and doesn’t appear to have the basic technological protections in place to do so. Adding AI to this mix, it could be argued, just gives hackers another way to target their attacks, since AI itself is also a target for scams.
In short, the addition of AI to fraud-prevention programs for the CARES Act seems to be an attempt to fix a problem that is inherent to government support schemes. That shouldn’t, of course, stop you from investigating the many forms of support that are available to you personally. But it does mean, when all this is over, that we need to look again at how we distribute money in such situations.