Are your employee social media checks consistent and fair?

By Daniel Callaghan, CEO and co-founder of global pre-employment screening platform Veremark

How extensively do you check a job candidate’s social media presence before hiring them? The rules in the UK are clear. If you’re going to check a candidate’s social media, you must get their consent in advance and you must be fair and consistent in your approach: if you check one candidate’s profiles, you must do exactly the same for the others.

ACAS states that employers must make sure social media doesn’t inform whether or not they interview or hire someone, or they may be breaking the law. Yet it’s a very common way to vet candidates, either pre-interview or pre-hire. Only 20% of employers at medium-sized organisations and 40% at small businesses say they would not or do not check prospective employees’ social media activity, according to YouGov research.

And close to a fifth (19%) of employers have turned down candidates for jobs at their companies because of their online activity, the same survey found. The three things most likely to put employers off? Using aggressive or offensive language (75%), references to drug use (71%), and poor spelling and grammar (56%).

Customer faith in your brand is crucial, and your employees are the walking, talking face of your company. Hiring someone untrustworthy or otherwise unprofessional therefore has the potential to severely dent your company’s standing. That’s not to mention the financial cost of making a bad hire, which is estimated to be three times higher than the salary they are paid, according to a recent study by the Recruitment & Employment Confederation.

Social media has, in effect, given risk-averse employers a lens through which to glimpse the life and views of a future employee (as long as their profiles are not set to ‘private’) and the chance to scan for any red flags before officially bringing someone onboard.

But as well as an opportunity, social media can also be an HR and employment law minefield, because those making judgements based on social media activity aren’t always fair. Things become problematic when hiring managers make potentially discriminatory conclusions about what they’re seeing. Should a person charting their journey through IVF and likely to soon be on maternity leave, or someone with tattoos, be avoided? Absolutely not, and yet this still happens – leaving some candidates unjustly disadvantaged and employers at risk of legal action.


A big part of the problem is that social media checks are still routinely carried out manually, usually by a hiring manager or an HR person using Google. A quick scroll through any profiles that aren’t hidden by privacy settings, and judgments are made. This is problematic in more ways than one.

For starters, if the candidate has a common name, the checker could be looking at the wrong person altogether. And, given the ease with which online profiles can be manufactured and identities stolen, the profiles could also be fake. Candidates are also vulnerable to the personal and political views of the individual doing those checks. Can you be absolutely sure that someone in your company has never rejected a candidate on the grounds of a ‘protected characteristic’, such as disability, race, religion or sexual orientation?

To minimise these risks, automated and compliant social media screening software – which, unlike a human reviewer, is consistent and impartial – is widely and inexpensively available. But awareness of this technology is still low: last year, automated social media checks accounted for less than 1% of all pre-employment checks carried out by Veremark, with the majority of companies pursuing a manual approach.

Automated checks use software to scan social media profiles in the public domain for red-flag content, and the search criteria can be tailored to the role. For example, a business hiring for a tech role with access to sensitive data could search for instances of the word ‘hacking’ or the phrase ‘dark web’.

Such software can analyse years of posts and images across multiple, distinct risk classifications, using advanced machine learning and natural language processing to flag posts for specific risk factors including bullying, self-harm, narcotics, violence, violent imagery and (political) hate speech. It can produce a report in less than 30 minutes, while conforming to official guidelines around social media privacy.
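For illustration only, the sketch below shows a heavily simplified version of this kind of scan in Python. The risk categories, trigger phrases and sample posts are hypothetical, and real screening platforms such as Veremark’s rely on trained machine learning models rather than simple keyword matching.

```python
# Illustrative sketch only: a simplified keyword-based scan of public posts.
# Real screening tools use ML/NLP classifiers; the categories and phrases
# below are hypothetical examples, not any product's actual rule set.
from dataclasses import dataclass

# Hypothetical risk categories mapped to example trigger phrases.
RISK_CATEGORIES = {
    "bullying": ["pathetic idiot", "everyone hates you"],
    "narcotics": ["drug use", "dealing"],
    "data_security": ["hacking", "dark web"],  # e.g. tailored for a tech role
}

@dataclass
class Flag:
    post: str
    category: str
    phrase: str

def scan_posts(posts: list[str]) -> list[Flag]:
    """Return a flag for every post containing a trigger phrase."""
    flags = []
    for post in posts:
        text = post.lower()
        for category, phrases in RISK_CATEGORIES.items():
            for phrase in phrases:
                if phrase in text:
                    flags.append(Flag(post=post, category=category, phrase=phrase))
    return flags

if __name__ == "__main__":
    sample_posts = [
        "Great day volunteering at the food bank!",
        "Found a new forum on the dark web last night...",
    ]
    for flag in scan_posts(sample_posts):
        # The report surfaces flagged content for a human to review in
        # context; it does not make the hiring decision itself.
        print(f"[{flag.category}] '{flag.phrase}' found in: {flag.post}")
```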

Screening will return any examples of potential ‘adverse content’ on someone’s public page or profile for the employer to look at, giving them a chance to put it into context and consider whether or not it poses a risk.

Some content will be clear-cut. Posts containing racially abusive language, or photos showing drug use or criminal activity, will likely lead to an employer revoking a job offer (there are precedents for this in the UK).

But not all cases are so black and white. Often, checks will flag content that is embarrassing or slightly inappropriate, rather than downright offensive. This gives employers a valuable opportunity to discuss the matter with candidates and perhaps encourage them to delete the content, rather than have it potentially surface later.

While screening candidates is a good place to start, it should not be where a social media policy starts and finishes. It’s best practice for businesses to have a clear, well-communicated social media policy in place and to rescreen employees every two or three years. The policy should also set out how far back the business will dig into an applicant’s online presence; decades-old activity is usually considered irrelevant.

A clear policy will also set clear expectations about behaviour. Unfortunately, there are plenty of examples of people badmouthing their employers online (particularly on TikTok, where it has become something of a trend). And even a seemingly innocent photo snapped in the office could, if it included a computer screen displaying sensitive information, be considered a risk.

So, don’t let social media become a blind spot in your hiring process. Almost eight in 10 UK citizens use at least one platform, and for SMEs, doing social media screening well will minimise both risk and the potential for discrimination, as well as help ensure compliance.

Daniel Callaghan is CEO and co-founder of global pre-employment screening platform Veremark. Its mission is to help the business world ‘trust faster’, as well as hire and onboard new staff seamlessly, with automated and Blockchain-verified pre-employment background checks and periodic employee screening. Offering 40 types of credentials checks in 150 countries, Veremark helps businesses of all sizes to verify the integrity of existing and prospective employees while delivering the best candidate experience possible.

 
