GitHub rebuilt its public API on GraphQL in 2016. Shopify processes billions of GraphQL queries daily. Netflix uses it across its streaming platform. GraphQL adoption keeps climbing in developer surveys like the State of JavaScript, and "explain how you'd solve the N+1 problem" has become a standard senior developer interview question.
This guide covers the GraphQL concepts interviewers actually ask about, from basic queries and mutations to advanced topics like DataLoader, schema design patterns, and real-time subscriptions. Whether you're preparing for a frontend or backend role, these questions will help you demonstrate real GraphQL expertise.
Table of Contents
- GraphQL Fundamentals Questions
- GraphQL vs REST Questions
- Schema Design Questions
- Resolver Questions
- N+1 Problem and DataLoader Questions
- Authentication and Authorization Questions
- Error Handling Questions
- Performance and Security Questions
- Pagination Questions
- Subscriptions Questions
- Apollo Client Questions
- GraphQL Interview Scenario Questions
GraphQL Fundamentals Questions
These questions test your understanding of GraphQL's core concepts and how it works at a high level.
What is GraphQL and what problem does it solve?
GraphQL is a query language for APIs and a runtime for executing those queries, developed by Facebook in 2012 and open-sourced in 2015. It solves the fundamental problems of over-fetching (receiving more data than needed) and under-fetching (needing multiple requests to gather related data) that plague traditional REST APIs.
Unlike REST where the server dictates the response shape, GraphQL lets clients specify exactly what data they need in a single request. This is particularly valuable for mobile applications where bandwidth is limited and for complex UIs that need data from multiple resources.
Core concepts:
- Schema - A strongly-typed contract defining all available types, queries, and mutations
- Queries - Read operations where clients specify exactly which fields they want
- Mutations - Write operations for creating, updating, or deleting data
- Resolvers - Functions that fetch the data for each field
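These four pieces can be made concrete without any library. The sketch below is a toy "executor" (hand-rolled for illustration; real servers use graphql-js or Apollo Server) showing that resolvers are plain functions and the client's field selection decides what comes back:

```javascript
// Toy illustration: resolvers are plain functions, and the client's
// selection controls the response shape (hand-rolled sketch, not graphql-js).
const db = {
  users: { '1': { id: '1', name: 'Alice', email: 'a@example.com' } }
};

const resolvers = {
  user: (args) => db.users[args.id]
};

// "execute" a single root field: run its resolver, then keep only
// the fields the client asked for — no over-fetching.
function execute(field, args, selection) {
  const result = resolvers[field](args);
  const shaped = {};
  for (const key of selection) shaped[key] = result[key];
  return shaped;
}

execute('user', { id: '1' }, ['name']); // → { name: 'Alice' }
```

A real executor walks nested selections recursively and calls a resolver per field, which is exactly why the N+1 problem discussed later can arise.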
How would you give a 30-second explanation of GraphQL in an interview?
When an interviewer asks "What is GraphQL?", they want to see that you can communicate complex concepts concisely. A strong 30-second answer demonstrates understanding without overwhelming with details.
Here's an effective response: "GraphQL is a query language for APIs developed by Facebook that lets clients request exactly the data they need in a single request. Unlike REST with multiple endpoints returning fixed data structures, GraphQL has one endpoint where clients specify their requirements declaratively. This eliminates over-fetching and under-fetching, makes APIs self-documenting through the schema, and enables frontend teams to work independently without waiting for backend changes."
After giving this answer, wait for follow-up questions rather than launching into advanced topics like resolvers and DataLoader unprompted.
What are the main benefits and tradeoffs of using GraphQL?
GraphQL excels in specific scenarios but isn't universally superior to alternatives. Understanding when to use it shows architectural maturity that interviewers value highly.
GraphQL shines when you have complex, nested data requirements, multiple clients (mobile, web, desktop) needing different data shapes, or teams that need to iterate quickly without backend dependencies. If you're building a dashboard that pulls data from multiple sources, GraphQL's ability to aggregate data in a single request is invaluable.
Benefits:
- Eliminates over-fetching and under-fetching
- Strong typing enables better tooling and validation
- Self-documenting through introspection
- Frontend teams can work independently
Tradeoffs:
- Implementation complexity compared to simple REST
- Caching challenges since everything goes through one endpoint
- Need to prevent expensive queries from overwhelming servers
- Steeper learning curve for teams new to GraphQL
GraphQL vs REST Questions
Understanding the practical differences between GraphQL and REST is crucial for senior-level interviews.
What is the difference between GraphQL and REST APIs?
The fundamental difference lies in how clients interact with the API. REST uses multiple endpoints where the server determines the response structure, while GraphQL uses a single endpoint where clients declare their exact data needs.
In REST, fetching a user's profile with their posts and comments might require three separate requests to different endpoints. Each request returns a fixed data structure defined by the server, often including fields the client doesn't need. GraphQL accomplishes this in one request where the client specifies exactly which user fields, post fields, and comment fields to return.
# REST approach requires multiple requests:
# GET /api/users/123
# GET /api/users/123/posts
# GET /api/posts/456/comments
# GraphQL accomplishes this in ONE request:
query GetUserDashboard {
user(id: "123") {
name
email
posts(limit: 5) {
title
commentCount
comments(limit: 3) {
text
author {
name
}
}
}
}
}
When should you choose GraphQL over REST?
The decision between GraphQL and REST should be based on your specific use case, not on which technology is "better." Interviewers want to hear that you understand the practical implications of this choice.
GraphQL is the better choice when you have complex, nested data requirements that would require multiple REST calls, when multiple clients need different views of the same data, or when you want frontend teams to iterate independently. REST is preferable for simple CRUD operations, when you need HTTP caching at the CDN level, for file uploads/downloads, or when your team lacks GraphQL experience.
Choose GraphQL when:
- Building complex UIs with nested data from multiple sources
- Supporting mobile apps where bandwidth efficiency matters
- Multiple clients need different data shapes
- Frontend teams need to move fast without backend dependencies
Choose REST when:
- Simple CRUD operations with predictable data shapes
- Need HTTP-level caching at CDN/proxy layers
- Handling file uploads and downloads
- Small team without GraphQL expertise
Schema Design Questions
Schema design reveals whether you truly understand GraphQL or just copy-paste from tutorials.
What is the GraphQL Schema Definition Language (SDL)?
The Schema Definition Language is a human-readable syntax for defining GraphQL schemas. It specifies the types available in your API, their fields, and the relationships between them. The schema serves as a contract between frontend and backend teams and enables powerful tooling like auto-completion and validation.
Every GraphQL API has a schema that defines three special root types: Query for read operations, Mutation for write operations, and Subscription for real-time updates. Custom types describe your domain objects like User, Post, or Comment.
# Schema Definition Language (SDL)
type Query {
user(id: ID!): User
posts(limit: Int = 10, offset: Int = 0): [Post!]!
searchContent(query: String!): [SearchResult!]!
}
type Mutation {
createPost(input: CreatePostInput!): CreatePostResult!
updateUser(id: ID!, input: UpdateUserInput!): User
deletePost(id: ID!): Boolean!
}
type User {
id: ID!
name: String!
email: String!
posts: [Post!]!
createdAt: DateTime!
}
type Post {
id: ID!
title: String!
content: String!
author: User!
comments: [Comment!]!
publishedAt: DateTime
}
What is the difference between Input types and Output types?
GraphQL distinguishes between types used for input (arguments to queries and mutations) and types used for output (return values). This separation ensures clean API design and enables different validation rules for each direction.
Input types can only contain scalar types, enums, and other input types—they cannot include output types or fields with arguments. This restriction exists because inputs represent data flowing into your API, while outputs represent data flowing out. The same conceptual entity often needs different shapes for input versus output.
# Input types for mutations (can't use regular types)
input CreatePostInput {
title: String!
content: String!
authorId: ID!
}
# Output type
type Post {
id: ID!
title: String!
content: String!
author: User! # Can include relationships
comments: [Comment!]!
createdAt: DateTime! # Server-generated field
}
# Union type for polymorphic returns
union SearchResult = User | Post | Comment
# Union for error handling (modern pattern)
union CreatePostResult = Post | ValidationError | AuthorizationError
type ValidationError {
message: String!
field: String!
}
What is the difference between [Post]!, [Post!], and [Post!]! in GraphQL?
Nullability in GraphQL arrays is a common interview question because it reveals deep understanding of schema design. The exclamation mark means "non-null," and its position relative to the brackets changes the meaning significantly.
Understanding these patterns helps you design schemas that accurately represent your data constraints and handle edge cases gracefully. Choosing the wrong nullability can lead to cascading null errors or overly defensive client code.
type User {
# [String]! - Non-null list, nullable items
# Valid: [], ["tag1", null, "tag2"]
# Invalid: null
tags: [String]!
# [String!] - Nullable list, non-null items
# Valid: null, [], ["tag1", "tag2"]
# Invalid: ["tag1", null]
middleNames: [String!]
# [String!]! - Non-null list, non-null items
# Valid: [], ["role1", "role2"]
# Invalid: null, ["role1", null]
roles: [String!]!
}
Practical guidance:
- Use [Type!]! for collections that always exist and can't have null items (user's roles)
- Use [Type]! when items might be null but the list always exists (optional tags)
- Use [Type!] when the entire field might be absent but items are never null
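The three shapes can be checked mechanically. This toy runtime validator (a hypothetical helper — graphql-js performs these checks for you during execution) makes the rules concrete:

```javascript
// Toy validator for GraphQL list nullability rules (illustrative only).
// listNonNull  → the list itself may not be null  ([...]!)
// itemNonNull  → items inside may not be null     ([Type!])
function validateList(value, { listNonNull, itemNonNull }) {
  if (value == null) return !listNonNull; // null list allowed only if nullable
  if (!Array.isArray(value)) return false;
  if (itemNonNull) return value.every(item => item != null);
  return true;
}

// [String!]! — roles: list and items both required
validateList(null, { listNonNull: true, itemNonNull: true });         // → false
validateList(['admin'], { listNonNull: true, itemNonNull: true });    // → true
// [String]! — tags: list required, items may be null
validateList(['a', null], { listNonNull: true, itemNonNull: false }); // → true
// [String!] — middleNames: list optional, items never null
validateList(null, { listNonNull: false, itemNonNull: true });        // → true
```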
Resolver Questions
Resolvers are where GraphQL connects to your actual data sources. Understanding them is essential for any GraphQL role.
What are GraphQL resolvers and what arguments do they receive?
Resolvers are functions that execute when a field is requested, fetching or computing the data for that field. They're the bridge between your schema definition and your actual data sources—databases, REST APIs, microservices, or any other backend.
Every resolver receives four arguments in a specific order: parent (the result from the parent resolver), args (arguments passed to the field), context (per-request data shared across all resolvers), and info (metadata about the query). Understanding these arguments is crucial for building effective GraphQL APIs.
const resolvers = {
Query: {
user: async (parent, args, context, info) => {
// parent: undefined for root queries
// args: { id: "123" } from query arguments
// context: shared per-request data (auth, DB, loaders)
// info: query metadata (rarely used directly)
return context.db.users.findById(args.id);
},
posts: async (parent, { limit, offset }, context) => {
// Destructure args for cleaner code
return context.db.posts.findMany({
take: limit,
skip: offset,
orderBy: { createdAt: 'desc' }
});
}
},
User: {
// This resolver runs for the 'posts' field on User type
posts: async (parent, args, context) => {
// parent is the User object from the parent resolver
return context.db.posts.findMany({
where: { authorId: parent.id }
});
},
// Computed field - doesn't exist in database
fullName: (parent) => {
return `${parent.firstName} ${parent.lastName}`;
}
}
};
Why should resolvers be thin and delegate to service layers?
A common mistake is putting business logic directly in resolvers. This makes code harder to test, reuse, and maintain. Experienced developers keep resolvers thin, delegating actual work to service layers that can be tested independently.
Thin resolvers also enable code reuse—your service layer can be called from resolvers, background jobs, or other entry points without duplicating logic. This separation of concerns is a sign of production-ready GraphQL architecture.
// BAD: Business logic in resolver
const resolvers = {
Mutation: {
createPost: async (parent, { input }, context) => {
// Don't do all this in the resolver
if (!context.currentUser) throw new Error('Not authenticated');
if (input.title.length < 5) throw new Error('Title too short');
const post = await context.db.posts.create({
data: { ...input, authorId: context.currentUser.id }
});
await sendNotification(post);
return post;
}
}
};
// GOOD: Delegate to service layer
const resolvers = {
Mutation: {
createPost: async (parent, { input }, context) => {
return context.services.posts.create(input, context.currentUser);
}
}
};
N+1 Problem and DataLoader Questions
The N+1 problem is the senior developer interview question. Being able to explain and solve it demonstrates real GraphQL expertise.
What is the N+1 problem in GraphQL and why does it occur?
The N+1 problem occurs when fetching a list of N items triggers 1 initial query plus N additional queries to fetch related data for each item. This happens because GraphQL resolvers execute independently—each field resolver doesn't know what other fields are being requested simultaneously.
When you query users with their posts, the user resolver fetches all users in one query, but then the posts resolver runs separately for each user, resulting in N additional queries. For 100 users, that's 101 database calls instead of the optimal 2.
// BAD: N+1 queries - 101 database calls for 100 users
const resolvers = {
Query: {
users: () => db.users.findMany() // Query 1: Get 100 users
},
User: {
posts: (user) => db.posts.findMany({
where: { authorId: user.id }
}) // Queries 2-101: One for EACH user!
}
};
When you query users { name posts { title } }, here's what happens:
- One query fetches 100 users
- For each of the 100 users, a separate query fetches their posts
- Total: 101 database queries instead of 2
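The query count is easy to demonstrate with a simulated database that counts its own calls (hypothetical in-memory "db", mirroring the naive resolvers above):

```javascript
// Simulate the N+1 pattern: a counter stands in for real database round trips.
let queries = 0;
const db = {
  users: Array.from({ length: 100 }, (_, i) => ({ id: i })),
  findUsers() { queries++; return this.users; },                   // 1 query
  findPostsByAuthor(id) { queries++; return [`post-of-${id}`]; }   // N queries
};

// Naive resolution: one query for the list, then one per user for posts
const users = db.findUsers();
users.forEach(u => db.findPostsByAuthor(u.id));

console.log(queries); // → 101
```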
How does DataLoader solve the N+1 problem?
DataLoader is a utility that batches and caches requests within a single event loop tick. Instead of executing N separate database queries for N users' posts, DataLoader collects all the user IDs being requested, waits until the current execution tick completes, then makes a single batched query.
The key insight is that DataLoader's batch function receives an array of keys and must return results in the same order. This lets it combine what would be N individual queries into a single optimized query.
const DataLoader = require('dataloader');
// Batch function: receives array of IDs, returns array of results
// Results MUST be in same order as input IDs
const batchGetPostsByUserIds = async (userIds) => {
// ONE query for all users
const posts = await db.posts.findMany({
where: { authorId: { in: userIds } }
});
// Group posts by authorId
const postsByUser = {};
posts.forEach(post => {
if (!postsByUser[post.authorId]) {
postsByUser[post.authorId] = [];
}
postsByUser[post.authorId].push(post);
});
// Return in same order as input IDs
return userIds.map(id => postsByUser[id] || []);
};
// Create loaders per request (important!)
const createLoaders = () => ({
postsByUser: new DataLoader(batchGetPostsByUserIds),
users: new DataLoader(async (ids) => {
const users = await db.users.findMany({
where: { id: { in: ids } }
});
return ids.map(id => users.find(u => u.id === id));
})
});
// Resolvers using DataLoader
const resolvers = {
User: {
posts: (user, args, { loaders }) => {
// DataLoader batches all calls in one event loop tick
return loaders.postsByUser.load(user.id);
}
}
};
Now instead of 101 queries, you get 2: one for users, one for all their posts.
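The batching trick itself fits in a few lines. This stripped-down loader (a toy sketch — the real DataLoader also caches per key and handles errors) collects keys synchronously and flushes them in one batch on the microtask queue:

```javascript
// Toy DataLoader: collect keys during the current tick, flush once.
class TinyLoader {
  constructor(batchFn) {
    this.batchFn = batchFn;
    this.queue = []; // pending { key, resolve } entries
  }
  load(key) {
    return new Promise(resolve => {
      this.queue.push({ key, resolve });
      // First load this tick: schedule ONE flush after synchronous code ends
      if (this.queue.length === 1) queueMicrotask(() => this.flush());
    });
  }
  async flush() {
    const batch = this.queue;
    this.queue = [];
    const results = await this.batchFn(batch.map(e => e.key)); // single call
    batch.forEach((entry, i) => entry.resolve(results[i]));
  }
}

// Usage: three load() calls in the same tick, but batchFn runs once
let calls = 0;
const loader = new TinyLoader(async keys => {
  calls++;
  return keys.map(k => `posts-for-${k}`);
});

Promise.all([loader.load(1), loader.load(2), loader.load(3)])
  .then(results => console.log(calls, results)); // calls === 1
```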
Why must you create new DataLoader instances per request?
DataLoader caches results within its instance to avoid duplicate fetches during a single request. However, this same caching becomes dangerous if you reuse DataLoader instances across requests—you'd return stale data and potentially leak information between users.
Creating fresh DataLoader instances in your context function ensures each request starts with an empty cache. This is essential for correctness and security in production systems.
// Context setup - new loaders per request
const context = ({ req }) => ({
currentUser: getUserFromToken(req.headers.authorization),
db,
loaders: createLoaders() // Fresh loaders prevent cross-request caching
});
Authentication and Authorization Questions
Security is critical in any API. These questions test your understanding of GraphQL's security model.
How do you implement authentication in GraphQL?
Authentication (verifying who a user is) should happen outside GraphQL, typically through HTTP headers containing JWT tokens or session identifiers. The GraphQL context function validates the token and attaches the authenticated user to the context object, making it available to all resolvers.
This approach keeps authentication logic centralized and ensures consistent behavior across all operations. Never scatter token validation across individual resolvers.
// Context setup: authentication
const context = async ({ req }) => {
const token = req.headers.authorization?.replace('Bearer ', '');
let currentUser = null;
if (token) {
try {
const decoded = jwt.verify(token, process.env.JWT_SECRET);
currentUser = await db.users.findById(decoded.userId);
} catch (err) {
// Invalid token - user stays null
}
}
return {
currentUser,
db,
loaders: createLoaders()
};
};
What is the difference between authentication and authorization in GraphQL?
Authentication verifies identity ("who are you?") and happens in the context function before resolvers run. Authorization controls access ("what can you do?") and happens inside resolvers based on the authenticated user's permissions.
This separation keeps concerns clear: context handles identity, resolvers handle permissions. Field-level authorization lets you protect specific fields or operations based on user roles or ownership.
// Authorization in resolvers
const resolvers = {
Query: {
me: (parent, args, context) => {
if (!context.currentUser) {
throw new AuthenticationError('Must be logged in');
}
return context.currentUser;
},
adminDashboard: (parent, args, context) => {
if (!context.currentUser) {
throw new AuthenticationError('Must be logged in');
}
if (context.currentUser.role !== 'ADMIN') {
throw new ForbiddenError('Admin access required');
}
return getDashboardStats();
}
},
Mutation: {
deletePost: async (parent, { id }, context) => {
if (!context.currentUser) {
throw new AuthenticationError('Must be logged in');
}
const post = await context.db.posts.findById(id);
// Field-level authorization
if (post.authorId !== context.currentUser.id &&
context.currentUser.role !== 'ADMIN') {
throw new ForbiddenError('Not authorized to delete this post');
}
await context.db.posts.delete(id);
return true;
}
}
};How do you implement directive-based authorization?
Custom directives provide a declarative way to specify authorization requirements directly in the schema. This makes security requirements visible at the schema level and reduces boilerplate in resolvers.
Directive-based authorization is particularly useful for APIs with consistent role-based access patterns. The directive implementation handles the actual permission checking, keeping resolvers focused on data fetching.
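At runtime, a directive like @auth(requires: ADMIN) boils down to wrapping the field's resolver with a permission check. Here is a hand-rolled sketch of that wrapper (real implementations typically use schema transforms from @graphql-tools/utils; the names here are illustrative):

```javascript
// What an @auth directive delegates to: a higher-order resolver that
// checks the caller's role before running the original resolver.
const ROLE_RANK = { USER: 1, ADMIN: 2 };

function requireRole(requiredRole, resolve) {
  return (parent, args, context, info) => {
    const user = context.currentUser;
    if (!user) throw new Error('Must be logged in');
    if (ROLE_RANK[user.role] < ROLE_RANK[requiredRole]) {
      throw new Error(`${requiredRole} access required`);
    }
    return resolve(parent, args, context, info);
  };
}

// Usage: equivalent to `adminDashboard: Dashboard! @auth(requires: ADMIN)`
const adminDashboard = requireRole('ADMIN', () => ({ stats: 'ok' }));
adminDashboard(null, {}, { currentUser: { role: 'ADMIN' } }); // → { stats: 'ok' }
```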
# Schema with auth directives
directive @auth(requires: Role = USER) on FIELD_DEFINITION
enum Role {
USER
ADMIN
}
type Query {
publicPosts: [Post!]!
me: User! @auth
adminDashboard: Dashboard! @auth(requires: ADMIN)
}
Error Handling Questions
GraphQL's error model differs fundamentally from REST. Understanding this is crucial for building robust APIs.
How does error handling work in GraphQL?
GraphQL can return both data and errors in the same response—a fundamentally different model from REST where a request either succeeds or fails entirely. This allows partial success: if one field fails to resolve, other fields can still return data.
Errors appear in a top-level errors array with information about what failed and where. The path field shows which field in the query caused the error, helping clients handle failures gracefully.
// GraphQL can return BOTH data and errors
{
"data": {
"user": {
"name": "Alice",
"posts": null // This field failed
}
},
"errors": [
{
"message": "Database connection failed",
"path": ["user", "posts"],
"extensions": {
"code": "INTERNAL_SERVER_ERROR"
}
}
]
}
What is the union type pattern for error handling?
The modern approach uses union types to represent expected errors as part of your schema rather than throwing exceptions. This makes errors type-safe and forces clients to handle them explicitly through the type system.
This pattern distinguishes between expected errors (validation failures, permission denied) that clients should handle gracefully and unexpected errors (database down, null pointer) that indicate system problems. Expected errors become first-class citizens in your API.
union CreateUserResult = User | EmailTakenError | ValidationError
type EmailTakenError {
message: String!
suggestedEmail: String
}
type ValidationError {
message: String!
field: String!
}
type Mutation {
createUser(input: CreateUserInput!): CreateUserResult!
}
// Resolver returning typed errors
const resolvers = {
Mutation: {
createUser: async (parent, { input }, context) => {
// Check for existing email
const existing = await context.db.users.findByEmail(input.email);
if (existing) {
return {
__typename: 'EmailTakenError',
message: 'Email already registered',
suggestedEmail: suggestAlternative(input.email)
};
}
// Validate input
const validation = validateUserInput(input);
if (!validation.valid) {
return {
__typename: 'ValidationError',
message: validation.error,
field: validation.field
};
}
// Success case
const user = await context.db.users.create(input);
return { __typename: 'User', ...user };
}
}
};
Performance and Security Questions
These questions reveal whether you've operated GraphQL in production environments.
What security vulnerabilities does GraphQL have and how do you prevent them?
GraphQL's flexibility is also its vulnerability—clients can craft arbitrarily complex queries that overwhelm your server. A malicious query with deep nesting or requesting huge lists can consume excessive resources, effectively creating a denial-of-service attack.
Production GraphQL APIs need multiple layers of protection: query depth limiting prevents infinitely nested queries, complexity analysis assigns costs to fields and rejects expensive queries, and introspection should be disabled in production to hide your API structure from attackers.
# PROBLEM: Malicious deeply nested query
query EvilQuery {
users { # Level 1
posts { # Level 2
comments { # Level 3
author { # Level 4
posts { # Level 5
comments { # Level 6 - recursion continues...
...
}
}
}
}
}
}
}
// SOLUTION 1: Query depth limiting
const depthLimit = require('graphql-depth-limit');
const server = new ApolloServer({
typeDefs,
resolvers,
validationRules: [depthLimit(5)] // Max 5 levels deep
});
// SOLUTION 2: Query complexity analysis
const { createComplexityLimitRule } = require('graphql-validation-complexity');
const complexityRule = createComplexityLimitRule(1000, {
scalarCost: 1,
objectCost: 10,
listFactor: 20 // Lists multiply cost
});
// SOLUTION 3: Disable introspection in production
const server = new ApolloServer({
typeDefs,
resolvers,
introspection: process.env.NODE_ENV !== 'production'
});
What are persisted queries and when should you use them?
Persisted queries are a whitelist approach where only pre-registered queries can execute. Instead of sending the full query text, clients send a hash that the server looks up. This provides security (arbitrary queries can't run) and performance (smaller payloads, no parsing overhead).
This approach is ideal for production applications where you control the clients. It eliminates the risk of malicious queries entirely since only queries you've explicitly approved can execute.
// SOLUTION 4: Persisted queries
// Note: Apollo's persistedQueries option is automatic persisted queries
// (APQ) — a performance feature where clients send a SHA-256 hash instead
// of the full query text. A true allowlist additionally requires
// registering approved operations at build time and rejecting unknown hashes.
const server = new ApolloServer({
typeDefs,
resolvers,
persistedQueries: {
cache: new RedisCache({ host: 'localhost' })
}
});
Pagination Questions
Pagination in GraphQL follows specific patterns that differ from REST approaches.
What is cursor-based pagination and why is it preferred?
Cursor-based pagination uses opaque cursors (usually encoded IDs) to mark positions in a dataset. Unlike offset pagination where "page 5" means different things as data changes, cursors point to specific items and remain stable even when items are added or removed.
The Relay Connection specification has become the standard for cursor-based pagination in GraphQL. It provides a consistent structure with edges, nodes, cursors, and pageInfo that clients can rely on across different APIs.
type Query {
# Cursor-based pagination (recommended)
posts(first: Int, after: String, last: Int, before: String): PostConnection!
}
type PostConnection {
edges: [PostEdge!]!
pageInfo: PageInfo!
totalCount: Int!
}
type PostEdge {
node: Post!
cursor: String!
}
type PageInfo {
hasNextPage: Boolean!
hasPreviousPage: Boolean!
startCursor: String
endCursor: String
}
How do you implement cursor-based pagination in a resolver?
Implementing cursor pagination requires encoding positions as opaque strings, fetching one extra item to determine if more pages exist, and constructing the connection response with proper pageInfo. The cursor is typically a base64-encoded ID, though any unique stable identifier works.
The pattern of fetching first + 1 items lets you determine hasNextPage without an additional count query. This optimization is important for performance with large datasets.
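The cursor encoding itself is just base64 over a stable ID, kept opaque so clients aren't tempted to construct cursors by hand. A minimal helper pair (hypothetical names — the resolver that follows performs the same Buffer calls inline):

```javascript
// Opaque cursor helpers: base64-encode a stable ID and decode it back.
const encodeCursor = id => Buffer.from(String(id)).toString('base64');
const decodeCursor = cursor => Buffer.from(cursor, 'base64').toString();

encodeCursor('post_42');               // → 'cG9zdF80Mg=='
decodeCursor(encodeCursor('post_42')); // → 'post_42'
```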
// Cursor-based pagination resolver
const resolvers = {
Query: {
posts: async (parent, { first = 10, after }, context) => {
// Decode cursor (base64 encoded ID)
const afterId = after ? Buffer.from(after, 'base64').toString() : null;
// Fetch one extra to check hasNextPage
const posts = await context.db.posts.findMany({
take: first + 1,
cursor: afterId ? { id: afterId } : undefined,
skip: afterId ? 1 : 0,
orderBy: { createdAt: 'desc' }
});
const hasNextPage = posts.length > first;
const edges = posts.slice(0, first).map(post => ({
node: post,
cursor: Buffer.from(post.id).toString('base64')
}));
return {
edges,
pageInfo: {
hasNextPage,
hasPreviousPage: !!after,
startCursor: edges[0]?.cursor,
endCursor: edges[edges.length - 1]?.cursor
},
totalCount: await context.db.posts.count()
};
}
}
};
Subscriptions Questions
Real-time updates are a key GraphQL feature for interactive applications.
What are GraphQL subscriptions and when should you use them?
Subscriptions enable real-time updates by establishing a persistent connection (typically WebSocket) between client and server. When relevant data changes, the server pushes updates to subscribed clients without them needing to poll.
Subscriptions are ideal for features requiring instant updates: chat applications, live notifications, collaborative editing, or real-time dashboards. For data that changes infrequently, polling is often simpler and more scalable.
type Subscription {
postAdded: Post!
commentAdded(postId: ID!): Comment!
}
How do you implement subscriptions with a PubSub system?
Subscriptions use a publish-subscribe pattern where mutations publish events and subscription resolvers listen for them. The PubSub system acts as a message broker, routing events to interested subscribers.
For production systems, use a distributed PubSub implementation (Redis, Kafka) rather than the in-memory version, which doesn't work across multiple server instances.
const { PubSub } = require('graphql-subscriptions');
const pubsub = new PubSub();
const resolvers = {
Mutation: {
createPost: async (parent, { input }, context) => {
const post = await context.db.posts.create(input);
// Publish to subscribers
pubsub.publish('POST_ADDED', { postAdded: post });
return post;
},
addComment: async (parent, { postId, text }, context) => {
const comment = await context.db.comments.create({
postId,
text,
authorId: context.currentUser.id
});
pubsub.publish(`COMMENT_ADDED_${postId}`, {
commentAdded: comment
});
return comment;
}
},
Subscription: {
postAdded: {
subscribe: () => pubsub.asyncIterator(['POST_ADDED'])
},
commentAdded: {
subscribe: (parent, { postId }) => {
return pubsub.asyncIterator([`COMMENT_ADDED_${postId}`]);
}
}
}
};
Apollo Client Questions
Understanding client-side GraphQL demonstrates full-stack awareness.
How does Apollo Client manage caching and state?
Apollo Client uses a normalized cache that stores each entity by its unique identifier (typically id and __typename). When a query returns data, Apollo automatically updates all parts of your UI that reference the same entities—no manual cache invalidation needed.
The InMemoryCache's type policies let you customize caching behavior: merging paginated results, defining key fields for non-standard IDs, or specifying how fields should be read and written.
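The normalization idea can be shown without Apollo at all: entities are stored once, keyed by `__typename:id`, and later writes merge into the same record. This is a toy model, not Apollo's actual implementation:

```javascript
// Toy normalized cache: one record per entity identity, merged on write.
const cache = new Map();

const keyOf = entity => `${entity.__typename}:${entity.id}`;

function writeEntity(entity) {
  const key = keyOf(entity);
  cache.set(key, { ...cache.get(key), ...entity }); // merge by identity
  return { __ref: key }; // parents store a reference, not a copy
}

// Two queries return overlapping data for the same user;
// the cache keeps ONE record and every consumer sees the merged result.
writeEntity({ __typename: 'User', id: '1', name: 'Alice' });
writeEntity({ __typename: 'User', id: '1', email: 'a@example.com' });

cache.get('User:1');
// → { __typename: 'User', id: '1', name: 'Alice', email: 'a@example.com' }
```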
import { ApolloClient, InMemoryCache, gql, useQuery } from '@apollo/client';
// Client setup
const client = new ApolloClient({
uri: 'https://api.example.com/graphql',
cache: new InMemoryCache({
typePolicies: {
Query: {
fields: {
posts: {
// Merge pagination results
keyArgs: false,
merge(existing = { edges: [] }, incoming) {
return {
...incoming,
edges: [...existing.edges, ...incoming.edges]
};
}
}
}
}
}
})
});
How do you use the useQuery hook in React?
The useQuery hook provides a declarative way to fetch data in React components. It returns loading state, error state, and data, automatically re-rendering your component when results arrive or cache updates occur.
The hook handles the full lifecycle: triggering the request on mount, tracking loading state, catching errors, and subscribing to cache updates so your component stays synchronized with the latest data.
// React hook usage
const GET_USER = gql`
query GetUser($id: ID!) {
user(id: $id) {
id
name
posts {
id
title
}
}
}
`;
function UserProfile({ userId }) {
const { loading, error, data } = useQuery(GET_USER, {
variables: { id: userId }
});
if (loading) return <Spinner />;
if (error) return <Error message={error.message} />;
return (
<div>
<h1>{data.user.name}</h1>
{data.user.posts.map(post => (
<PostCard key={post.id} post={post} />
))}
</div>
);
}
GraphQL Interview Scenario Questions
Scenario questions reveal how you apply GraphQL concepts to real problems.
How would you design a GraphQL API for a social media feed?
When asked to design an API, interviewers want to see that you consider the full picture: types, relationships, pagination, real-time updates, and performance implications.
A strong answer addresses each concern systematically: "I'd start with the core types: User, Post, Comment, and Like. The Query type would have feed(first: Int, after: String) returning a PostConnection for cursor-based pagination. I'd use DataLoader for the author relationship to prevent N+1 queries. For real-time updates, I'd add a postAdded subscription filtered by followed users. Authentication would be JWT-based through the context, with field-level authorization for sensitive data like email."
How would you migrate from REST to GraphQL?
This question tests your ability to manage technical transitions pragmatically. The key is advocating for incremental migration rather than a risky big-bang approach.
"I'd take an incremental approach rather than a big-bang migration. First, I'd create a GraphQL layer that wraps existing REST endpoints—resolvers would call the REST API internally. This lets us validate the schema design with real usage. Then we'd gradually move data fetching directly to the database, replacing REST calls with direct queries. The REST API could run in parallel during migration, eventually becoming deprecated."
How would you debug a slow GraphQL API?
Performance debugging questions test operational experience. You should mention specific tools and common causes you'd investigate.
"I'd start with Apollo Studio or similar tools to trace resolver execution times. Common culprits are N+1 queries (add DataLoader), missing database indexes (add indexes on foreign keys), or over-fetching at the database level (use projections). I'd also check query complexity—maybe clients are requesting too much nested data. Solutions include query complexity limits, depth limiting, and persisted queries to whitelist allowed operations."
Quick Reference
| Concept | Purpose | Example |
|---|---|---|
| Query | Read data | query { user(id: "1") { name } } |
| Mutation | Write data | mutation { createUser(name: "Alice") { id } } |
| Subscription | Real-time updates | subscription { postAdded { title } } |
| Resolver | Fetch data for field | user: (parent, args, ctx) => db.findUser(args.id) |
| Schema | Type definitions | type User { id: ID!, name: String! } |
| DataLoader | Batch/cache queries | new DataLoader(batchFn) |
| Fragment | Reusable field sets | fragment UserFields on User { id name } |
| Directive | Field behavior | @deprecated, @auth(requires: ADMIN) |
Related Articles
If you found this helpful, check out these related guides:
- Complete Node.js Backend Developer Interview Guide - comprehensive preparation guide for backend interviews
- REST API Interview Guide - API design principles and best practices
- Node.js Advanced Interview Guide - Event loop, streams, and Node.js internals
- TypeScript Type vs Interface Interview Guide - Type definitions for GraphQL schemas
