Right-to-work checks should concern all employers, but not for the reasons you might think

Contrary to populist belief, UK immigration and enforcement law covering illegal workers is actually quite stringent. Home Office fines issued for non-compliance have increased seven-fold in the past five years and now stand at £49m in the twelve months to October*. Businesses that don’t pay the resulting “Civil Penalty Orders” can see their bank accounts frozen or their premises closed for 48 hours. Responsibility for compliance rests squarely on the shoulders of employers, but is often delegated to frontline managers. The problem is that a hiring manager is rarely the right person to be conducting the check. Firstly, they have an inherent conflict of interest: they typically benefit from the candidate starting work as soon as possible. Secondly, they have little expertise in spotting incorrect or fraudulent identity and right-to-work documents.

Out with the old

Employers know all this. Through a combination of legacy and inertia, however, they’ve allowed slow, offline, locally administered right-to-work record-keeping to persist for decades. With the introduction of a new data protection regime (GDPR), this inheritance is coming back to haunt them in the form of offline data storage, inadequate audit trails and disaggregated, unfettered access to employee records. Streamlined right-to-work technology can mitigate all of these.

Frustrating as these practical problems may be for employers, they aren’t the most sinister feature of current right-to-work screening.

As innovation at border crossings shows, there is diminishing need or logic for humans to ‘read’ our IDs. We are all now accustomed to scanning our passports face-down at an automated immigration gate, and there are sound moral arguments against manual ID checking. As a species we are vulnerable to bias and prejudice, both conscious and unconscious, however immune we might think ourselves. The same argument holds in remote situations, where the document holder isn’t physically present. Machine learning technology, trained on real and fake documents in volumes no individual could ever contemplate, has already set the bar for quality while removing the potential for human error or misjudgment.

Unbiased background checking

Employment enforcement should aim to catch up with border control, free from both the missed forgeries of an untrained eye and the wrongful rejections that flow from unconscious bias against particular ethnicities or nationalities. Academic research has already shown that, on average, minority-ethnic university applicants receive fewer offers than comparable white British applicants**, while more recently Airbnb introduced a “Community Commitment” to try to ensure that its platform’s users did not discriminate against each other on the basis of “race, religion, national origin, ethnicity, disability, sex, gender identity, sexual orientation, or age”.

Enter technology. It’s now not only viable but advisable to use identity verification technology to vet individuals, even remotely. Developments in optical character recognition (OCR) have removed the need for flatbed scanners, as newer generations of mobile phones and webcams carry cameras with sufficient resolution. Combined with machine learning and biometrics, standards of automated fraud and impersonation detection have risen to keep pace with the fraudsters.
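To make that concrete, below is a minimal, hypothetical sketch of how the pieces of a remote check fit together: OCR pulls the text from a photographed document, and a biometric comparison checks that the person submitting it matches the photo on it. The open-source libraries (pytesseract, face_recognition), file names and tolerance value are illustrative assumptions only, not a description of any vendor’s production pipeline, which would layer trained fraud-detection models and liveness checks on top.

# Illustrative only: a toy remote right-to-work check combining OCR with a
# biometric face comparison. Library choices are stand-ins for this sketch.
import pytesseract
import face_recognition
from PIL import Image

def extract_document_text(document_path: str) -> str:
    # Run OCR over a photo of the identity document (e.g. a passport data page).
    return pytesseract.image_to_string(Image.open(document_path))

def faces_match(document_path: str, selfie_path: str, tolerance: float = 0.6) -> bool:
    # Compare the face on the document with a selfie captured during onboarding.
    doc_image = face_recognition.load_image_file(document_path)
    selfie_image = face_recognition.load_image_file(selfie_path)

    doc_encodings = face_recognition.face_encodings(doc_image)
    selfie_encodings = face_recognition.face_encodings(selfie_image)
    if not doc_encodings or not selfie_encodings:
        return False  # no face found: escalate for review rather than pass silently

    return face_recognition.compare_faces(
        [doc_encodings[0]], selfie_encodings[0], tolerance=tolerance
    )[0]

if __name__ == "__main__":
    text = extract_document_text("passport_photo.jpg")
    matched = faces_match("passport_photo.jpg", "selfie.jpg")
    print("OCR extract:", text[:200])
    print("Face match:", matched)

Returning False when no face is detected is a deliberate choice in the sketch: an inconclusive automated result should be escalated to a specialist rather than waved through.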

While there are good business reasons for updating legacy right-to-work and onboarding processes, there are moral imperatives for abandoning manual ones.

*Source: FOIA request to the Home Office.

**Source: Black and Minority Ethnic Access to Higher Education: A Reassessment, http://www.lse.ac.uk/website-archive/newsAndMedia/PDF/NuffieldBriefing.pdf

Edward is Chief Commercial Officer at Onfido.