Technical Interviews in the AI Era
The arrival of Large Language Models has disrupted our work habits in countless ways, but how AI fits into the technical interviewing process has proven particularly divisive. On one end of the spectrum, companies like Canva now require the use of AI coding tools during technical interviews; on the other end, some of the bigger tech companies prohibit AI tools in interviews entirely. There's nuance to how these tools should be deployed during the hiring process, and the all-important step that engineering leaders skip when setting policy here is getting explicit about what information they're trying to get out of each interview.
Let's break down a few common interview formats, the signal we might want from each, and the role of AI as a tool for the candidate. For simplicity, I'll scope the discussion to hiring for individual contributor (IC) roles.
Impact of AI on Engineering
This disruption parallels when web search became truly effective and engineers were better off being good at finding answers than having memorized them. But AI assistants go further: where Google search required knowing what to ask and evaluating results, AI can generate novel solutions and iterate on feedback. Someone could reasonably have solid systems knowledge and describe a desired architecture to Claude Code without knowing a lick of JavaScript. This means that during technical interviews we will often want to shift from assessing declarative knowledge to assessing problem solving and the ability to track down a solution with the tools the candidate will have access to on the job.
Technical Phone Screen
The technical phone screen assesses whether a candidate has the fundamental technical knowledge and communication skills to warrant further investment in the interview process. Usually we're looking for simple signals: basic fluency with the programming language of choice, some problem solving, and the ability to communicate ideas. Before AI assistants, this was the place where an easy Leetcode problem most often showed up. Because this assessment is run remotely, it's also the hardest place to enforce a prohibition on AI use, if that's your policy.
Here I'm a fan of a couple of formats:
- One is to stick with a simple Leetcode problem, ideally one that maps to realistic work in the role, but identify spots where the interviewer can verbally probe for more information. The goal isn't just to get a working solution; it's to see whether the candidate knows what tradeoffs their solution is making.
- Another option is to have the candidate lead a code review or debugging session of some company-supplied code, which naturally emphasizes verbal analysis. Interviewers should be trained to go two or three layers deep in discussion to screen out surface-level answers supplied by AI. A good source of material is AI-generated code where iterating on prompts failed to improve the result.
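To make the debugging format concrete, here's a sketch of the kind of exercise snippet I have in mind (the function and scenario are hypothetical, not from any real screen): code that looks plausible at a glance but carries a subtle bug for the candidate to find and explain.

```python
# Hypothetical debugging-session snippet: plausible-looking code with a
# subtle bug. The bug: the mutable default argument `seen` is created
# once and shared across calls, so state silently leaks between them.

def dedupe(items, seen=set()):
    """Return items with duplicates removed, preserving order."""
    result = []
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result

# The first call behaves as expected...
print(dedupe(["a", "b", "a"]))   # ['a', 'b']
# ...but a second, supposedly independent call drops "a" and "b".
print(dedupe(["a", "b", "c"]))   # ['c']
```

A strong candidate doesn't just spot the shared default; they can explain why it happens and propose a fix (e.g., `seen=None` with a fresh set created inside the function), which is exactly the layered discussion you want the format to surface.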
Coding Interview
Here there are two categories of goals:
- Understand technical ability. Probe familiarity with specific technical concepts and programming languages, and the pace at which the candidate can apply a solution.
- Understand critical thinking. Determine how the candidate can break down a problem, identify edge cases, map a solution to business requirements, and communicate all of that.
If your ICs use AI coding assistants daily, coding interviews should let candidates use the same tools. You want to assess the candidate's actual on-the-job capability, where effective AI use is a positive signal.
My favorite format here has become a take-home coding assignment where the candidate produces something fairly substantial with AI tools, like a web app, and then a follow-up session is a paired code review that the candidate leads. The candidate needs to demonstrate understanding of their own code during that session. The session can also include a live refactor: "let's update this so that it uses an API gateway for the backend service" or "we need to add more strict validation of the user input at this chat node." You can also review the prompts the candidate used to generate the code, if you really want to go deep on how they use an assistant.
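As a sketch of what that validation-focused live refactor might look like (the handler and field names here are hypothetical, invented for illustration), the candidate starts from a handler that trusts its input and tightens it up on the spot:

```python
# Hypothetical live-refactor exercise: a chat-message handler that
# trusts its input completely.
MAX_MESSAGE_LEN = 2000

def handle_chat_message(payload: dict) -> str:
    # No validation: a missing key, wrong type, or oversized message
    # all slip through (or blow up with an unhelpful KeyError).
    return payload["message"]

# The candidate refactors it live to validate strictly before accepting
# the input.
def handle_chat_message_strict(payload: dict) -> str:
    message = payload.get("message")
    if not isinstance(message, str):
        raise ValueError("message must be a string")
    message = message.strip()
    if not message:
        raise ValueError("message must not be empty")
    if len(message) > MAX_MESSAGE_LEN:
        raise ValueError(f"message exceeds {MAX_MESSAGE_LEN} characters")
    return message
```

The interesting signal isn't the validation code itself; it's whether the candidate can articulate why each check exists, what failure mode it prevents, and where in the system that validation belongs.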
Focus on the candidate's ability to produce something fully featured, understand the code, anticipate edge cases, and refactor live.
Project Deep Dive
This one is straightforward: this should be verbal communication by the candidate. Some orgs like when a candidate has slides prepared to whip out, but I actually hate it. I want to have an organic and deep conversation about the work. There's little threat of AI interfering with your assessment here.
Systems Design
This session is about problem solving, identifying requirements, and understanding tradeoffs. This is one session where you do want the candidate to showcase declarative knowledge. Resources like ByteByteGo that candidates use for prep can also help refine what you want to assess here. In terms of actually conducting the session, I advocate keeping it largely verbal, with a whiteboarding tool that requires the candidate to input information manually (Google, Zoom, Excalidraw, Miro, etc.). A good way to make the take-home assignment feel more worthwhile to the candidate is to leverage that same content here: "let's add a load balancer" or "here are some new requirements for state management, let's talk through adding that."
Takeaways
Know what you're interviewing for in each session! The interview isn't a gauntlet to cull weak engineers; it's a way to deeply understand a candidate's strengths and weaknesses so you can make an informed decision. Today we're often looking for someone who is good at leveraging AI code assistants, so don't eliminate that from the assessment during the interview.
- Phone Screen: Keep the coding problem small and focused. Create opportunities to probe in ways that are hard to answer with AI. Debugging AI-generated code can be really effective.
- Coding Interview: Favor take-home assignments with a high bar, and let the candidate use whatever tools they like. The depth of assessment comes from the code review of the assignment.
- Project Deep Dive and Systems Design: Keep these largely verbal and focus on critical thinking. These are places where you want to assess the candidate's ability in the absence of AI tools.