As I sit here typing away at my keyboard, I know full well how words appear on my screen.
First I type, with a satisfying clunk, on a mechanical keyboard.
Signals travel from each key through the USB cable into my laptop.
Tiny elves transport the signals via miniature paintbrushes… wait.
Oh, never mind. I don’t really need to know how it works; it works well enough.
Until something goes wrong, and turning it off then turning it on again doesn’t do its job.
Time to get the experts in.
This is the Illusion of Explanatory Depth - our belief that we understand the world more fully than we do.
Until we’re asked to explain its workings and find the limit of our understanding.
Until things go wrong and it’s on us to fix them.
It’s a concept with an important place in recruitment.
Especially for hiring processes that think they know how to recruit, yet aren’t accountable for their part if things go wrong.
Where there isn’t sufficient knowledge to ask the right questions, to get to the root of what happened, and find solutions to problems not known to exist.
Typically represented by assumptions such as ‘all recruiters are the same’, ‘adverts don’t work’, ‘we give a great candidate experience’, and all that jazz.
“We work with specialist recruiters.”
What is a specialism in recruitment?
Is it knowledge of a market vertical, where your expertise can probe to establish what ‘right’ looks like and bring candidates forward for the right reasons?
Is it doing the same type of vacancy over and over, where you accumulate a density of keywords without the wherewithal to ask substantive questions?
Is it horizontal expertise in recruitment marketing, copywriting, consultation and advocacy?
If you rely on the specialism of your recruiters, how do you challenge their expertise to see if they specialise in what you need, not what you think you want?
“We provide an excellent candidate experience.”
To whom do you provide that?
Is it the type of candidate who you may wish to employ?
Is it suitable applicants who aren’t right for your vacancy?
Is it unsuitable applicants who see themselves as a candidate for employment?
Is it the people you’d love to employ, who actively chose not to engage, sometimes without you being aware of them?
Is it the people you’d love to employ, who you haven’t discovered, and who can’t discover you?
If the answer isn’t yes to all, and you aren’t measuring it, how good a candidate experience are you actually giving?
Clue: “If you don’t hear from us within one week, please assume you were unsuccessful,” means you can’t provide a holistically good candidate experience.
What impact will that have?
“There are no USPs in recruitment”
A unique selling proposition. Is that so?
What is it that we are selling? Is it CVs? Is it a CV database? Is it candidates (and what is a candidate)? Is it process? Is it philosophy?
Is it automation in the guise of AI? Is it more, quicker, better? Is it fewer, more accurately, more specifically?
Is it fill rate? Is it retention? Performance beyond expectation?
How does that matter for your recruitment?
What problems do they solve for you?
Are your problems unique to you, in which case shouldn’t it matter what service you buy from a recruiter?
And if your problems are unique, how are you assessing which recruiters are suited if their proposition isn’t both unique and uniquely aligned to your problems?
“Adverts don’t work”
Is that so? What evidence do you have to show this?
Is it the evidence of your applications? The evidence of candidate availability in your marketplace compared to market conditions?
An analysis of employer-centric (inside-out) adverts vs candidate-centric (outside-in) adverts?
Do your adverts give candidates reasons to get in touch, let alone apply?
I can’t speak for anyone else, but my adverts fill around half of my roles, including skills-short and ‘passive’/‘problem unaware’ candidates.
And this post shared by Mitch Sullivan shows an A/B test of how language affects advert performance.
And given an advert doesn’t just mean a message shared above the line on a job board, but also those below the line in DMs, emails and phone calls, I’d be worried by anyone who claims they don’t work without evidence that the fault isn’t theirs.
How do you know there aren’t buyers if you don’t actively sell through your words?
Do they know how adverts work, to say that they don’t?
“70% of candidates don’t apply to adverts”
Or whatever the latest stat is, used to support the passive-candidate argument. But is that even the right argument, considering an effectively written advert, in the right place, can appeal to passive readers?
These are my thoughts.
And if passive isn’t the right term, how about problem awareness?
Or how about people who are problem unaware one day, and problem aware the next, when they are sacked over Zoom through no fault of their own?
Are these people who then wouldn’t apply to adverts?
What’s holding people back from applying? Is it status, awareness, or a reaction to what they read?
While if people don’t apply to adverts, why might they respond instead to a message, attractive or otherwise?
Or could it be a good thing, not to advertise, given the 200 good candidates who applied across 3 vacancies last week, with over 1,000 applications? Would a headhunt be less work, with the same outcome of filling those vacancies?
Isn’t the better question where the candidates likely to be suited to a vacancy are, rather than whether they might apply for a job?
Given the crux of marketing is the right place, alongside the right person, the right time, the right offer and the right message.
“AI can’t replace the human side of recruitment”
But what is AI? Is it automation dressed up as intelligence?
Is it technology now, in the public domain, which changed again yesterday with 4o?
Is it technology being worked on, in line with Moore’s law, that is ready but not released?
Is it the aggregation of different automation across the recruitment lifecycle that, if implemented well, provides a better experience for its users - candidates?
What is the human side anyway? Is it trust? We trust our devices with no end of sensitive data as we doomscroll our feeds and subscribe to another app.
Is it contextual insight? Perhaps so right now, but if AI becomes intelligent why couldn’t it gain that straightforwardly, given technology is iterative and can only get better?
Is that a genuine statement to rely on, or are we Blockbuster when we didn’t buy Netflix in 2000?
Development - release - implementation - adoption - entrenchment. There are yards to go before we even know what we are dealing with.
I don’t think the Valley of Despair was the right term for me sliding down from Mount Stupid.
It’s an exhilarating ride to discover all the things you don’t know and unpick the things you thought you did.
It starts with understanding there are no elves - only a key press closing a circuit, sending a unique scan code to the computer for translation into a character and display on screen.
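That elf-free journey can be sketched in a few lines. The scan codes below are real USB HID usage IDs for a keyboard, but the tiny lookup table and `key_press` function are a toy illustration of what the operating system does, not how any real driver is written:

```python
# Toy sketch: key press -> scan code over USB -> character on screen.
# Real keyboards send USB HID "usage IDs"; the OS maps them to
# characters via a keyboard layout. This table is a tiny stand-in.

SCAN_CODE_TO_CHAR = {
    0x04: "a",  # HID usage ID for the A key
    0x05: "b",
    0x06: "c",
    0x2C: " ",  # space bar
}

def key_press(scan_code: int) -> str:
    """Circuit closes, scan code travels up the USB cable, a character comes back."""
    return SCAN_CODE_TO_CHAR.get(scan_code, "?")  # '?' for keys we haven't mapped

# Pressing A, B, C and the space bar in turn:
typed = "".join(key_press(code) for code in [0x04, 0x05, 0x06, 0x2C])
print(typed)  # -> "abc "
```

No magic, no paintbrushes: just a lookup that turns an electrical event into the letter you see.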
And when you blow up illusory depth, there are opportunities to learn, to get better at what we do by cutting past assumptions and leaning into what we don’t know.
If you want to fix your keyboard that is.
Regards,
Greg