You're Hiring Programmers Wrong: A Case for Interview Standardization
I've conducted about 100 technical interviews over the past six months for Triplebyte, a software development recruiting company. I've also been doing consulting work, which has required me to sit on the other side of the table for numerous technical interviews. Contrasting the two experiences has been instructive for identifying what works and what doesn't.
Every Triplebyte interview begins with the candidate coding a short game. The exercise proceeds in a series of steps, and each step precisely defines simple requirements for the program to handle. A couple of minutes into the two-hour interview, I can generally tell whether the candidate will be successful. There are certainly outliers (as well as mechanisms to prevent bias), but in general I can quickly ascertain how well a candidate stacks up technically. This raises the question: why is it so hard to hire engineers? The answer is that most of us are doing it wrong.
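To make the format concrete, here's a hypothetical sketch of what a step-defined exercise might look like (an invented example, not Triplebyte's actual problem). Each step names exactly one function and its requirement, so every candidate builds the same program in the same order:

```python
# Hypothetical step-defined interview exercise (invented for illustration).
# Each step specifies one function and its requirement, so every candidate
# attempts the same work in the same order.

# Step 1: Create an empty 3x3 board.
def new_board():
    return [[" "] * 3 for _ in range(3)]

# Step 2: Place a player's mark ("X" or "O") at (row, col).
def place_mark(board, row, col, mark):
    if board[row][col] != " ":
        raise ValueError("square already taken")
    board[row][col] = mark

# Step 3: Return the winning mark if any row is complete, else None.
def row_winner(board):
    for row in board:
        if row[0] != " " and row[0] == row[1] == row[2]:
            return row[0]
    return None

if __name__ == "__main__":
    board = new_board()
    place_mark(board, 0, 0, "X")
    place_mark(board, 0, 1, "X")
    place_mark(board, 0, 2, "X")
    assert row_winner(board) == "X"
```

Because the steps are fixed, how far a candidate gets and how cleanly each function works are directly comparable across everyone who sits the interview.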
Interview Standardization
About twelve months ago, while running Fullstack Academy's Chicago campus, I was looking to hire a qualified instructor. Our interview consisted of a live lecture as well as a mock one-on-one teaching session, in which the candidate had to guide me (playing the role of a struggling student) through a simple recursion problem. The latter exercise was almost always a better indicator of instructor fit than the former. The reason was simple: the teaching exercise was assigned to the candidate, whereas the lecture topic was their choice. Because every lecture covered a different topic, lectures were difficult to compare. In contrast, after seeing dozens of candidates attempt to teach me the same recursion problem, it was obvious who stood out.
The first mistake companies make is failing to standardize the interview across candidates. As interviewers, it's tempting to say, "let's have the candidate pair with us; that way we can see how they perform on real day-to-day challenges." Another fault I've seen committed: giving some candidates one problem and other candidates a different one. This way a company can protect against its problems getting leaked online, right?
The issue is that, as an interviewer, unless you have witnessed a dozen candidates attempt a given problem, it's very hard to assess any one candidate's ability. It's easy to fall victim to the "I solved it, why can't they?" bias. You need to compare candidates to each other to assess who did a good job and who didn't. Which brings me to my next point.
Objective Scoring
You need to compare candidates objectively, scoring them on every exercise across a variety of factors. There must be well-defined criteria to differentiate scores, and scoring should be done immediately upon completion of the exercise.
Suppose you task the candidate with constructing a simple web application. You assign a score of 1 through 4 on each of the following factors:
- Productivity
- Programming Style
- Language/Framework Familiarity
- Debugging Ability
Beforehand, you need to write a short description of what qualifies a candidate for each score on each of the above factors. This rubric is what you consult during the interview to determine the score a candidate earns on each factor. Productivity in particular can be measured exceptionally objectively if you have the candidate follow a prescribed series of steps to solve the programming challenge. At Triplebyte, for example, we tell candidates (loosely) what functions to write and in what order.
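Here's a minimal sketch of what such a scorecard might look like in code, using the four factors above. The rubric text and candidate identifier are invented for illustration; write criteria that fit your own exercise:

```python
# A minimal sketch of a standardized scorecard. The rubric descriptions
# and candidate identifier below are invented for illustration.
from dataclasses import dataclass, field
from datetime import datetime

RUBRIC = {
    "Productivity": {
        1: "Completed few of the prescribed steps",
        2: "Completed about half of the steps",
        3: "Completed most steps with minor prompting",
        4: "Completed every step unassisted, ahead of pace",
    },
    "Debugging Ability": {
        1: "Stuck on errors without a strategy",
        2: "Fixed errors only with significant hints",
        3: "Isolated and fixed most errors independently",
        4: "Diagnosed errors quickly and systematically",
    },
    # ...plus 1-4 descriptions for Programming Style and
    # Language/Framework Familiarity.
}

@dataclass
class Scorecard:
    candidate: str
    scores: dict = field(default_factory=dict)
    recorded_at: datetime = None

    def score(self, factor: str, value: int) -> None:
        # Only rubric-defined factors and score values are allowed.
        assert factor in RUBRIC and value in RUBRIC[factor]
        self.scores[factor] = value

    def finalize(self) -> None:
        # Timestamp the scores immediately upon completion of the
        # exercise, before memory fades.
        self.recorded_at = datetime.now()

card = Scorecard("candidate-042")
card.score("Productivity", 3)
card.score("Debugging Ability", 2)
card.finalize()
print(card)
```

Because every candidate fills out the identical scorecard for the identical exercise, the cards can be laid side by side weeks later and compared directly.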
After a week or two of interviewing a half dozen candidates, this scoring becomes critical: it allows you to avoid biases such as the serial-position effect, the tendency to remember the first and last candidates best.
Culture Fit
Surprisingly, much of this advice also applies to assessing culture fit. You can have the same person ask the same questions, in the same order, and use the same rubric to compare every answer. The risk is that you inadvertently dehumanize the candidate, but it is possible to conduct this objective assessment in a way the candidate never notices. At the very least, document your impression of the candidate immediately after the interview, lest you rely on your memory and feelings a week later when making a decision.
You could take this philosophy to an absurd extreme, and it would certainly eliminate bias, though I believe there is a point of diminishing returns beyond which you degrade the interviewee's experience. Nevertheless, some standardization across interviews is critical to avoiding implicit bias and identifying good candidates, and I encourage every tech company to examine its interview process and assess its level of standardization.
If you have experienced challenges doing technical hiring, I've helped companies streamline their process and increase their hiring success rate. Please contact me for more details.