55+ Azure Interview Questions 2025: VMs, App Service, AKS & Azure AD

28 min read · azure · cloud · devops · microsoft · interview-preparation

Azure is the second-largest cloud platform and the default choice for many enterprises, especially those with Microsoft investments. If you're interviewing for a role at a company using .NET, Office 365, or any Microsoft stack, expect Azure questions.

Azure interviews test both general cloud knowledge and Microsoft-specific patterns. You need to understand Azure AD (identity is central to everything), ARM (how Azure manages resources), and the core compute, storage, and networking services. This guide covers the Azure services that appear in nearly every interview and the questions interviewers actually ask.

Table of Contents

  1. Azure Fundamentals Questions
  2. Azure Resource Manager Questions
  3. Virtual Machine Questions
  4. App Service Questions
  5. Azure Functions Questions
  6. Azure AD and Identity Questions
  7. RBAC and Access Control Questions
  8. Virtual Network Questions
  9. Network Security Questions
  10. Storage Account Questions
  11. Cosmos DB Questions
  12. Azure SQL Questions
  13. AKS and Container Questions
  14. Azure Architecture Scenario Questions

Azure Fundamentals Questions

Before diving into specific services, you need to understand how Azure organizes and manages resources at a fundamental level.

How is Azure's resource hierarchy organized?

Azure uses a specific hierarchy for organizing and managing resources that determines how policies, billing, and access control are applied. Understanding this hierarchy is essential because it affects how you structure enterprise Azure deployments and how permissions cascade.

The hierarchy flows from Tenant (your organization's Azure AD instance) down through optional Management Groups, then Subscriptions, Resource Groups, and finally individual Resources. Each level can have policies and RBAC assignments that inherit downward.

flowchart TB
    T["Tenant (Azure AD)"]
    MG["Management Groups<br/>(optional)"]
    S["Subscriptions"]
    RG["Resource Groups"]
    R["Resources"]
 
    T --> MG --> S --> RG --> R

Key concepts:

  • Tenant: Your organization's instance of Azure AD. One tenant can have multiple subscriptions
  • Management Groups: Optional containers for organizing subscriptions and applying policies at scale
  • Subscription: A billing and access boundary. Resources in different subscriptions are isolated by default
  • Resource Group: A logical container for resources that share the same lifecycle

What happens when you delete a Resource Group in Azure?

Resource Groups are lifecycle containers—they're designed to hold resources that should be created and deleted together. When you delete a Resource Group, all resources contained within it are also deleted. This is by design and makes cleanup straightforward, but it also means you must be extremely careful with production Resource Groups.

This behavior is why proper Resource Group organization is critical. Group resources by lifecycle: if you'd want to delete them together, they belong in the same Resource Group. For example, a dev environment's VMs, storage, and networking should be in one group that can be deleted when the environment is no longer needed.
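
As a minimal sketch (the group and lock names are placeholders), deleting a group and everything in it is a single CLI call, and a resource lock can guard a production group against the same accident:

# Delete the resource group and every resource inside it (irreversible)
az group delete --name dev-environment-rg --yes --no-wait

# Add a CanNotDelete lock to protect a production group
az lock create --name protect-prod --resource-group prod-rg --lock-type CanNotDelete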

What are Azure Regions and Availability Zones?

Azure's global infrastructure is organized into Regions and Availability Zones that provide the foundation for building highly available applications. Understanding these concepts is crucial because they directly impact your application's latency, compliance posture, and disaster recovery capabilities.

A Region is a geographic area containing one or more data centers (e.g., East US, West Europe). You choose regions based on latency to users, compliance requirements, and service availability. An Availability Zone is a physically separate data center within a region with independent power, cooling, and networking—not all regions have AZs. Paired Regions are region pairs used for disaster recovery, with some services replicating automatically.

| Region | Paired Region |
|---|---|
| East US | West US |
| West Europe | North Europe |
| Southeast Asia | East Asia |

How do you achieve high availability in Azure?

High availability in Azure is achieved through redundancy at multiple levels, and the approach differs depending on whether you're protecting against hardware failures, data center outages, or regional disasters.

For high availability within a region, deploy across multiple Availability Zones. This protects against data center failures with a 99.99% SLA for VMs. For disaster recovery, replicate to a paired region using services like Azure Site Recovery or geo-redundant storage. Use zone-redundant services where available (Zone-Redundant Storage, zone-redundant AKS) to simplify the architecture.


Azure Resource Manager Questions

ARM is the deployment and management layer for Azure. Every Azure operation—whether from the Portal, CLI, PowerShell, or SDK—goes through ARM.

What is Azure Resource Manager and how does it work?

Azure Resource Manager (ARM) is the unified management layer that handles all Azure resource operations. When you create, update, or delete a resource through any Azure interface, that request goes through ARM, which authenticates and authorizes the request via Azure AD before processing it.

ARM provides several key capabilities: consistent management layer across all tools, declarative deployments through templates, dependency management that ensures resources are created in the correct order, and RBAC integration for access control. Understanding ARM is fundamental because it's how Azure actually works under the hood.

What is the difference between ARM Templates and Bicep?

ARM Templates and Bicep are both infrastructure-as-code solutions for Azure, but they differ significantly in syntax and usability. ARM Templates use JSON format, which can become verbose and difficult to read for complex deployments. Bicep is a domain-specific language that compiles to ARM templates but offers cleaner, more readable syntax.

Microsoft recommends Bicep for new infrastructure-as-code projects because it's significantly easier to write and maintain. However, ARM templates remain important to understand because Bicep compiles to them, and you'll encounter existing ARM templates in many organizations.

ARM Template (JSON):

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [{
    "type": "Microsoft.Storage/storageAccounts",
    "apiVersion": "2021-02-01",
    "name": "mystorageaccount",
    "location": "[resourceGroup().location]",
    "sku": {"name": "Standard_LRS"},
    "kind": "StorageV2"
  }]
}

Bicep (cleaner syntax):

resource storage 'Microsoft.Storage/storageAccounts@2021-02-01' = {
  name: 'mystorageaccount'
  location: resourceGroup().location
  sku: { name: 'Standard_LRS' }
  kind: 'StorageV2'
}
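
Either format deploys the same way through ARM. A minimal sketch, assuming the Bicep above is saved as main.bicep and the resource group already exists:

# Deploy the Bicep file to a resource group; ARM resolves dependencies and ordering
az deployment group create \
  --resource-group my-rg \
  --template-file main.bicep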

Virtual Machine Questions

Azure VMs provide full control over the operating system and are the foundation of IaaS workloads.

What are the different VM series in Azure and when would you use each?

Azure VMs are organized into series based on their optimization for different workloads. Choosing the right series affects both performance and cost, so understanding the options is essential for designing efficient infrastructure.

Each series is named with a letter indicating its purpose, and sizes within the series indicate resource amounts. For example, Standard_D4s_v3 is a D-series (general purpose) VM with 4 vCPUs, "s" indicating premium storage support, version 3.

| Series | Use Case | Characteristics |
|---|---|---|
| B | Burstable | Variable workloads, cost-effective |
| D | General purpose | Balanced CPU/memory |
| E | Memory optimized | High memory-to-CPU ratio |
| F | Compute optimized | High CPU-to-memory ratio |
| N | GPU | ML training, graphics |
| L | Storage optimized | High disk throughput |

What is the difference between Availability Sets and Availability Zones?

Both Availability Sets and Availability Zones provide high availability for VMs, but they protect against different types of failures and offer different SLA levels. Understanding when to use each is a common interview question.

Availability Sets distribute VMs across fault domains (separate racks) and update domains within a single data center. They protect against rack-level hardware failures and planned maintenance, providing a 99.95% SLA. Availability Zones distribute VMs across physically separate data centers within a region, providing a 99.99% SLA and protecting against entire data center failures.

Use Availability Zones when they're available in your region for production workloads—they provide higher availability. Use Availability Sets as a fallback in regions without zones or for legacy workloads that can't span zones.
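
A rough sketch of pinning VMs to specific zones at creation time (resource names and the image alias are placeholder assumptions):

# Create two VMs in different Availability Zones of the same region
az vm create --resource-group my-rg --name web-vm-1 --image Ubuntu2204 --zone 1 --generate-ssh-keys
az vm create --resource-group my-rg --name web-vm-2 --image Ubuntu2204 --zone 2 --generate-ssh-keys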

What are VM Scale Sets and when would you use them?

VM Scale Sets are groups of identical VMs that can automatically scale based on demand or a schedule. They're the foundation for building scalable, highly available applications on Azure VMs without manually managing individual instances.

Scale Sets integrate with Azure Load Balancer and Application Gateway for traffic distribution. You define scaling rules based on metrics like CPU utilization or queue depth, and Azure handles adding or removing instances. They're essential for any workload that needs to handle variable load or requires more than a few VM instances.
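
A minimal sketch of a CPU-based autoscale rule for an existing scale set (names and thresholds are placeholders):

# Define autoscale bounds for the scale set
az monitor autoscale create \
  --resource-group my-rg \
  --resource my-vmss --resource-type Microsoft.Compute/virtualMachineScaleSets \
  --name cpu-autoscale --min-count 2 --max-count 10 --count 2

# Add one instance whenever average CPU exceeds 70% for 5 minutes
az monitor autoscale rule create \
  --resource-group my-rg --autoscale-name cpu-autoscale \
  --condition "Percentage CPU > 70 avg 5m" --scale out 1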


App Service Questions

App Service is Azure's fully managed platform for web apps, APIs, and mobile backends.

What are the different App Service Plan tiers and when would you use each?

App Service Plans define the compute resources your applications run on, and choosing the right tier significantly impacts cost, performance, and available features. The plan determines what features are available to all apps running on it.

Lower tiers are suitable for development and testing, while production workloads typically require Standard or higher for features like auto-scaling and deployment slots. The Isolated tier provides dedicated resources and VNet integration for enterprise scenarios with strict isolation requirements.

| Tier | Features |
|---|---|
| Free/Shared | Development, shared infrastructure |
| Basic | Dedicated compute, manual scaling |
| Standard | Auto-scale, staging slots, daily backups |
| Premium | More instances, more storage, better performance |
| Isolated | VNet integration, dedicated environment |

How do you achieve zero-downtime deployments in App Service?

Deployment slots are App Service's solution for zero-downtime deployments. A slot is essentially a separate instance of your app with its own hostname where you can deploy and test before affecting production.

The workflow is: deploy to a staging slot, warm up the application and run tests, then swap with production. The swap is nearly instant because it's just a DNS/routing change—no files are copied. If issues occur after swapping, you can immediately swap back to the previous version. This pattern eliminates deployment downtime and provides instant rollback capability.
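
A minimal CLI sketch of that workflow (app and slot names are placeholders):

# Create a staging slot, deploy to it, then swap into production
az webapp deployment slot create --resource-group my-rg --name my-app --slot staging
az webapp deployment slot swap --resource-group my-rg --name my-app --slot staging --target-slot production

# If something breaks, swapping again restores the previous version
az webapp deployment slot swap --resource-group my-rg --name my-app --slot staging --target-slot production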


Azure Functions Questions

Azure Functions is Azure's serverless compute platform for event-driven, pay-per-execution workloads.

What are the different hosting plans for Azure Functions?

Azure Functions offers three hosting plans that determine scaling behavior, timeout limits, and pricing model. Choosing the right plan depends on your workload patterns and requirements for cold start latency and execution duration.

The Consumption plan is the true serverless option—you pay only for execution time and it scales to zero when idle. However, it has cold start latency and timeout limits. The Premium plan provides pre-warmed instances to avoid cold starts and supports longer execution times. The Dedicated plan runs on an App Service Plan, giving you predictable costs if you already have App Service infrastructure (see the sketch after the table).

| Plan | Scaling | Timeout | Use Case |
|---|---|---|---|
| Consumption | Auto (0 to many) | 5-10 min | Sporadic, unpredictable workloads |
| Premium | Pre-warmed instances | 60 min | Avoid cold starts, VNet needed |
| Dedicated | App Service Plan | Unlimited | Existing App Service, predictable load |
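
As a rough sketch (names, region, and runtime are placeholder assumptions), a Consumption-plan Function App is created like this:

# Consumption plan: pay per execution, scales to zero when idle
az functionapp create \
  --resource-group my-rg --name my-func-app \
  --storage-account myfuncstorage \
  --consumption-plan-location eastus \
  --runtime dotnet-isolated --functions-version 4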

What are triggers and bindings in Azure Functions?

Triggers and bindings are declarative ways to connect Azure Functions to other services without writing boilerplate connection code. A trigger is what starts a function execution, while bindings provide input data or output destinations.

Triggers include HTTP requests, timers, queue messages, blob storage events, Cosmos DB changes, and more. Bindings can be input (read data when function starts) or output (write data when function completes). This declarative approach reduces code and makes functions easier to test.

// A message on the "orders" queue triggers the function and is deserialized into Order;
// the output binding writes a new document to Cosmos DB (connection setting omitted for brevity).
[FunctionName("ProcessOrder")]
public static void Run(
    [QueueTrigger("orders")] Order order,
    [CosmosDB("db", "processed")] out dynamic document)
{
    document = new { id = order.Id, processed = DateTime.UtcNow };
}

When should you choose VMs vs App Service vs Functions vs AKS?

This is one of the most common Azure architecture questions. The answer depends on your requirements for control, scaling patterns, and operational overhead you're willing to accept.

Use this decision framework: VMs when you need full OS control or are doing lift-and-shift. App Service for web applications that need always-on hosting with minimal operations overhead. Functions for event-driven, short-running tasks where you want pay-per-execution. AKS when you're running containerized microservices that need orchestration.

flowchart TD
    Q1{"Need full<br/>OS control?"}
    Q1 -->|Yes| A1["Virtual Machine"]
    Q1 -->|No| Q2{"Container<br/>workload?"}
    Q2 -->|Yes| A2["AKS or Container Apps"]
    Q2 -->|No| Q3{"Event-driven,<br/>short tasks?"}
    Q3 -->|Yes| A3["Azure Functions"]
    Q3 -->|No| A4["App Service"]

Azure AD and Identity Questions

Identity is central to Azure—almost everything authenticates through Azure AD (now called Entra ID).

What is the difference between Azure AD and traditional Active Directory?

This is perhaps the most important Azure identity question. Azure AD and traditional Active Directory are fundamentally different services designed for different purposes, despite the similar names.

Traditional AD is an on-premises directory service that uses LDAP and Kerberos for authentication. It's designed for domain-joined Windows machines and on-premises infrastructure. Azure AD is a cloud-based identity service that uses REST APIs and modern protocols like OAuth 2.0, SAML, and OpenID Connect. It's designed for cloud and SaaS applications.

| Feature | Traditional AD | Azure AD |
|---|---|---|
| Location | On-premises | Cloud |
| Protocols | LDAP, Kerberos | OAuth 2.0, SAML, OIDC |
| Structure | OUs, GPOs, domain-joined | Flat, no GPOs |
| Use case | Domain-joined machines | Cloud/SaaS apps |

Azure AD is not "AD in the cloud." You can sync on-premises AD to Azure AD using Azure AD Connect, enabling hybrid identity scenarios.

What is a Managed Identity and why should you use it?

Managed Identities solve the problem of credential management for Azure services. Instead of storing connection strings or secrets in your application, Azure automatically provides and manages an identity that your application can use to authenticate to other Azure services.

There are two types: system-assigned identities are tied to a specific resource and deleted when the resource is deleted, while user-assigned identities are independent resources that can be shared across multiple services. Managed Identities eliminate secrets from code and configuration, with Azure handling credential rotation automatically.

How should an Azure Function access Key Vault securely?

The recommended approach is to use a Managed Identity rather than storing Key Vault credentials in your application. Enable system-assigned identity on the Function App, grant that identity access to Key Vault secrets via RBAC or access policy, then reference Key Vault in your configuration. No credentials ever appear in code.

// No credentials needed - uses Managed Identity automatically
var client = new SecretClient(
    new Uri("https://myvault.vault.azure.net/"),
    new DefaultAzureCredential());

The DefaultAzureCredential class automatically discovers the managed identity when running in Azure and uses it for authentication.
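
The setup on the Azure side is two steps. A minimal sketch, assuming an RBAC-enabled Key Vault and placeholder names:

# 1. Enable a system-assigned identity on the Function App (output includes its principalId)
az functionapp identity assign --resource-group my-rg --name my-func-app

# 2. Grant that identity read access to secrets
az role assignment create \
  --assignee <principal-id-from-step-1> \
  --role "Key Vault Secrets User" \
  --scope /subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.KeyVault/vaults/myvault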


RBAC and Access Control Questions

Azure Role-Based Access Control determines who can do what on which resources.

How does Azure RBAC work?

Azure RBAC is the authorization system that controls access to Azure resources. It works by assigning roles to security principals at specific scopes, and permissions are additive—if you have multiple role assignments, your effective permissions are the union of all roles.

A role assignment combines three elements: a security principal (who—user, group, service principal, or managed identity), a role definition (what they can do—the permissions), and a scope (where—management group, subscription, resource group, or resource). Permissions inherit downward through the scope hierarchy.

What are the built-in RBAC roles and when would you use each?

Azure provides built-in roles for common access patterns, and understanding the key roles helps you implement least-privilege access. The most important distinction is between Owner, Contributor, and Reader.

| Role | Permissions |
|---|---|
| Owner | Full access, can assign roles to others |
| Contributor | Full access, cannot assign roles |
| Reader | View only |
| User Access Administrator | Manage user access only |

Scenario: A developer needs to deploy to App Service but shouldn't access production databases. Create a custom role or use the built-in "Website Contributor" role scoped only to the App Service resource group. Don't grant access at subscription level—follow the principle of least privilege.
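
A minimal sketch of that scoped assignment (the user and IDs are placeholders):

# Grant "Website Contributor" only on the App Service resource group, not the subscription
az role assignment create \
  --assignee dev-user@contoso.com \
  --role "Website Contributor" \
  --scope /subscriptions/<sub-id>/resourceGroups/app-service-rg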

What is Conditional Access in Azure AD?

Conditional Access policies add conditions to authentication decisions, enabling zero-trust security patterns. Instead of simply allowing or denying access based on credentials, you can require additional verification based on context.

Common policies include requiring MFA for specific applications, blocking access from certain geographic locations, requiring compliant devices for access to sensitive data, or forcing password changes for risky sign-ins. Conditional Access is a premium Azure AD feature that's essential for enterprise security.


Virtual Network Questions

Virtual Networks (VNets) are the foundation of Azure networking. Every VM, isolated App Service, and AKS cluster runs inside a VNet.

What are the core components of an Azure VNet?

A Virtual Network is your isolated network in Azure where you define an address space using CIDR notation. VNets contain subnets, which are ranges within the VNet where you deploy resources. Network Security Groups provide stateful firewall rules at the subnet or NIC level.

Understanding VNet design is critical because it affects security, connectivity, and IP address management. Plan your address spaces carefully—VNets that need to peer or connect via VPN cannot have overlapping address ranges.

flowchart TB
    subgraph vnet["VNet: 10.0.0.0/16"]
        subgraph web["web-tier (10.0.1.0/24)"]
            nsg1["NSG: allow 80, 443<br/>from internet"]
        end
        subgraph app["app-tier (10.0.2.0/24)"]
            nsg2["NSG: allow from<br/>web-tier only"]
        end
        subgraph data["data-tier (10.0.3.0/24)"]
            nsg3["NSG: allow from<br/>app-tier only"]
        end
    end
 
    web --> app --> data
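
A minimal CLI sketch of the three-tier layout above (names and prefixes are placeholders):

# VNet with the web-tier subnet
az network vnet create --resource-group my-rg --name app-vnet \
  --address-prefix 10.0.0.0/16 \
  --subnet-name web-tier --subnet-prefix 10.0.1.0/24

# Additional subnet for the app tier
az network vnet subnet create --resource-group my-rg --vnet-name app-vnet \
  --name app-tier --address-prefixes 10.0.2.0/24

# NSG rule allowing HTTPS into the web tier
az network nsg create --resource-group my-rg --name web-tier-nsg
az network nsg rule create --resource-group my-rg --nsg-name web-tier-nsg \
  --name allow-https --priority 100 --direction Inbound --access Allow \
  --protocol Tcp --destination-port-ranges 443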

What connectivity options are available for Azure VNets?

Azure provides multiple options for connecting VNets to each other and to on-premises networks, each suited to different requirements for bandwidth, latency, and security.

MethodUse Case
VNet PeeringConnect VNets (same or different regions/subscriptions)
VPN GatewayEncrypted connection over internet to on-premises
ExpressRoutePrivate dedicated connection to on-premises
Private EndpointAccess Azure PaaS services over private IP
Service EndpointOptimized route to Azure services (still public IP)

What is the difference between Private Endpoints and Service Endpoints?

Both provide more secure connectivity to Azure PaaS services, but they work differently and offer different security levels. Understanding the distinction is important for designing secure architectures.

Service Endpoints enable a direct, optimized route from your VNet to Azure services over the Azure backbone network. Traffic stays on Microsoft's network, but the service still has a public endpoint. Private Endpoints create a private IP address in your VNet for the Azure service—the service becomes accessible only via that private IP, with no public endpoint exposure. Private Endpoints are more secure but require more configuration.

How do you ensure traffic to Azure SQL never goes over the public internet?

Use a Private Endpoint to create a private IP in your VNet for the SQL server. After creating the Private Endpoint, disable public network access on the SQL server. All traffic between your VNet and SQL stays entirely within Azure's network, never touching the public internet.

This pattern applies to any Azure PaaS service that supports Private Endpoints: Key Vault, Storage Accounts, Cosmos DB, and more.
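
A rough sketch of the two steps for Azure SQL (names are placeholders, and exact flag names can vary slightly between CLI versions, so treat this as illustrative):

# 1. Private Endpoint for the SQL server in the app subnet
az network private-endpoint create \
  --resource-group my-rg --name sql-pe \
  --vnet-name app-vnet --subnet app-tier \
  --private-connection-resource-id $(az sql server show -g my-rg -n my-sql-server --query id -o tsv) \
  --group-id sqlServer --connection-name sql-pe-conn
# (a Private DNS zone for privatelink.database.windows.net is also needed for name resolution)

# 2. Turn off the public endpoint entirely
az sql server update --resource-group my-rg --name my-sql-server --enable-public-network false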


Network Security Questions

Network security in Azure involves multiple layers of defense.

What is the difference between NSG and Azure Firewall?

NSGs and Azure Firewall both provide network security, but at different layers and with different capabilities. Most architectures use both in combination—NSGs for microsegmentation between subnets, Azure Firewall for centralized perimeter security.

NSGs are free, stateful packet filters that work at Layer 3/4 (IP addresses and ports). They're applied at the subnet or NIC level and provide basic allow/deny rules. Azure Firewall is a paid, managed service that provides Layer 3-7 filtering with features like FQDN filtering, threat intelligence, centralized logging, and application-aware inspection.

| Feature | NSG | Azure Firewall |
|---|---|---|
| Cost | Free | Paid service |
| Layer | L3/L4 (IP, port) | L3-L7 (application aware) |
| Scope | Subnet/NIC | Centralized |
| Features | Basic allow/deny | FQDN filtering, threat intel, logging |
| Use case | Microsegmentation | Enterprise perimeter security |

When would you use Azure Firewall over NSGs?

Use Azure Firewall when you need capabilities beyond basic packet filtering: centralized logging across all traffic, FQDN filtering (allow traffic to *.microsoft.com), threat intelligence to block known malicious IPs, or application-layer inspection.

NSGs are sufficient for basic network segmentation—controlling which subnets can communicate with which. Azure Firewall adds the enterprise security features needed for regulated environments or when you need deep visibility into network traffic.


Storage Account Questions

Azure Storage Accounts provide access to multiple storage services under a single account.

What storage services are available in an Azure Storage Account?

A single Storage Account can provide access to four different storage services, each optimized for different data types and access patterns. Understanding when to use each is essential for designing efficient storage architectures.

| Service | Type | Use Case |
|---|---|---|
| Blob Storage | Object storage | Unstructured data, images, backups |
| File Storage | SMB file shares | Lift-and-shift, shared storage |
| Queue Storage | Message queuing | Decoupling components |
| Table Storage | NoSQL key-value | Simple structured data (consider Cosmos DB) |

What are the Blob Storage access tiers and when would you use each?

Blob Storage offers three access tiers that trade storage cost against access cost. Choosing the right tier based on access patterns can significantly reduce storage costs for large datasets.

Hot tier has higher storage costs but lower access costs—use for frequently accessed data. Cool tier has lower storage costs but higher access costs—use for data accessed less than once per month. Archive tier has the lowest storage cost but requires rehydration before access—use for data accessed rarely (compliance archives, long-term backups).

| Tier | Access | Cost Pattern |
|---|---|---|
| Hot | Frequent | Higher storage, lower access |
| Cool | Infrequent (30+ days) | Lower storage, higher access |
| Archive | Rare (180+ days) | Lowest storage, rehydration required |
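
A minimal sketch of moving a blob between tiers (account, container, and blob names are placeholders):

# Move an infrequently accessed blob to the Cool tier (uses your Azure AD login for auth)
az storage blob set-tier --auth-mode login --account-name mystorageaccount \
  --container-name backups --name report-2023.pdf --tier Cool

# Archive it entirely; reading it later requires rehydration back to Hot or Cool
az storage blob set-tier --auth-mode login --account-name mystorageaccount \
  --container-name backups --name report-2023.pdf --tier Archive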

What redundancy options are available for Azure Storage?

Azure Storage provides four redundancy options that balance durability, availability, and cost. The choice depends on your requirements for data protection against different failure scenarios.

| Option | Description | Durability |
|---|---|---|
| LRS | 3 copies in one data center | 11 nines |
| ZRS | 3 copies across Availability Zones | 12 nines |
| GRS | LRS + async copy to paired region | 16 nines |
| GZRS | ZRS + async copy to paired region | 16 nines |
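
The redundancy option is simply the SKU on the storage account. A minimal sketch (names and region are placeholders):

# Zone-redundant storage: three synchronous copies across Availability Zones
az storage account create --resource-group my-rg --name mystorageaccount \
  --location eastus --sku Standard_ZRS --kind StorageV2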

Cosmos DB Questions

Cosmos DB is Azure's globally distributed, multi-model NoSQL database designed for planet-scale applications.

What makes Cosmos DB different from other databases?

Cosmos DB's key differentiators are global distribution with multi-region writes, guaranteed low latency, and tunable consistency levels. It's designed for applications that need to operate globally with consistent performance, something traditional databases struggle to provide.

Cosmos DB supports multiple APIs (SQL, MongoDB, Cassandra, Gremlin, Table), so you can use familiar query languages while getting Cosmos DB's global distribution benefits. However, this flexibility comes with complexity in pricing (based on Request Units) and the need to understand consistency tradeoffs.

What are the Cosmos DB consistency levels and when would you use each?

Cosmos DB offers five consistency levels that let you trade consistency guarantees against latency and availability. This is unique among databases and is a common interview topic.

| Level | Guarantee | Latency |
|---|---|---|
| Strong | Linearizable (always read latest write) | Highest |
| Bounded Staleness | Consistent within K versions or T time | High |
| Session | Consistent within a session | Medium (default) |
| Consistent Prefix | Reads never see out-of-order writes | Low |
| Eventual | No ordering guarantee | Lowest |

Most applications work fine with Session consistency (the default), which guarantees that within a user session, you'll always read your own writes. Strong consistency is needed only when you absolutely cannot read stale data—financial transactions or inventory systems where overselling is costly—but it limits you to single-region writes and adds latency.
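
The default consistency is set at the account level (the SDK can relax it per request, but not strengthen it). A minimal sketch with placeholder names:

# Cosmos DB account with Session consistency and a secondary read region
az cosmosdb create --resource-group my-rg --name my-cosmos-account \
  --default-consistency-level Session \
  --locations regionName=eastus failoverPriority=0 \
  --locations regionName=westeurope failoverPriority=1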

What is a partition key and why does it matter in Cosmos DB?

The partition key determines how Cosmos DB distributes your data across physical partitions. Choosing a good partition key is critical because it affects performance, scalability, and cost. A poor partition key leads to hot partitions—one partition handling disproportionate load—which causes throttling.

Choose a partition key that distributes data evenly and matches your query patterns. Most queries should include the partition key to enable efficient single-partition queries. Avoid partition keys with low cardinality (few unique values) or that create uneven distribution.
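
A minimal sketch of creating a container with its partition key (database, container, and key path are placeholders):

# Partition by customer so each customer's documents share a logical partition
az cosmosdb sql container create --resource-group my-rg \
  --account-name my-cosmos-account --database-name orders-db \
  --name orders --partition-key-path "/customerId" --throughput 400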


Azure SQL Questions

Azure SQL provides managed SQL Server database services with varying levels of compatibility and control.

What are the different Azure SQL options and when would you use each?

Azure provides three main SQL Server options, each offering different tradeoffs between compatibility, control, and management overhead.

| Option | Description |
|---|---|
| Azure SQL Database | Single database, fully managed, most cost-effective |
| Elastic Pool | Multiple databases sharing resources, cost optimization |
| SQL Managed Instance | Near 100% SQL Server compatibility |

Use Azure SQL Database for new applications—it's the most fully managed option. Use Elastic Pools when you have many databases with variable usage patterns that can share resources. Use Managed Instance for lift-and-shift migrations that require SQL Server features not available in Azure SQL Database.

When would you use SQL Managed Instance over Azure SQL Database?

SQL Managed Instance provides near-complete SQL Server compatibility for scenarios where Azure SQL Database's limitations are blockers. This typically includes cross-database queries, SQL Server Agent jobs, CLR integration, Service Broker, or other features only available in full SQL Server.

Managed Instance is more expensive and complex than Azure SQL Database, so use it only when you need those specific features. For new applications, start with Azure SQL Database unless you have a clear requirement for Managed Instance features.

How do you choose between Azure SQL and Cosmos DB?

This is a common architecture question that tests your understanding of relational vs NoSQL tradeoffs in the Azure context.

| Factor | Azure SQL | Cosmos DB |
|---|---|---|
| Data model | Relational, joins | Document, key-value |
| Schema | Fixed | Flexible |
| Scaling | Vertical (mostly) | Horizontal (unlimited) |
| Consistency | Strong (ACID) | Tunable |
| Global distribution | Geo-replication | Multi-region writes |
| Best for | Transactional, complex queries | Scale, global apps, flexible schema |

AKS and Container Questions

Azure Kubernetes Service (AKS) is managed Kubernetes where Azure handles the control plane.

What does Azure manage vs what do you manage in AKS?

Understanding the responsibility split is essential for AKS interviews. Azure manages the Kubernetes control plane (API server, etcd, scheduler, controller manager), while you manage the worker nodes and your applications.

Azure handles control plane availability, upgrades, and scaling. You're responsible for worker node configuration, application deployments, and monitoring your workloads. Node pool configuration, including choosing VM sizes and enabling features like autoscaling, is your responsibility.

flowchart TB
    subgraph aks["AKS Cluster"]
        sys["System Node Pool<br/>(Linux, system pods)"]
        user1["User Node Pool 1<br/>(Linux, general workloads)"]
        user2["User Node Pool 2<br/>(Windows, .NET apps)"]
    end
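
A rough sketch of adding the pools shown above (cluster and pool names are placeholders):

# Linux user node pool with the cluster autoscaler enabled
az aks nodepool add --resource-group my-rg --cluster-name my-aks \
  --name userpool1 --node-vm-size Standard_D4s_v3 \
  --enable-cluster-autoscaler --min-count 2 --max-count 10

# Windows node pool for .NET Framework workloads (requires Azure CNI networking)
az aks nodepool add --resource-group my-rg --cluster-name my-aks \
  --name win1 --os-type Windows --node-vm-size Standard_D4s_v3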

What is the difference between Kubenet and Azure CNI networking in AKS?

AKS supports two networking models that affect how pods get IP addresses and integrate with Azure networking. The choice impacts IP address consumption, Windows container support, and network policy options.

Kubenet is simpler: nodes get Azure VNet IPs, but pods get IPs from a separate range and use NAT to communicate outside the cluster. It's IP-efficient but doesn't support Windows containers. Azure CNI assigns VNet IPs directly to pods, enabling direct communication with VNet resources. It requires a larger subnet but is required for Windows nodes and offers more network policy options.

| Feature | Kubenet | Azure CNI |
|---|---|---|
| Pod IPs | NAT'd behind node | Direct VNet IPs |
| IP consumption | Efficient | Requires large subnet |
| Windows support | No | Yes |
| Network policies | Calico only | Azure or Calico |
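
The network plugin is chosen when the cluster is created. A minimal sketch (names and the subnet ID are placeholders):

# Azure CNI: pods receive IPs directly from the VNet subnet
az aks create --resource-group my-rg --name my-aks \
  --network-plugin azure --generate-ssh-keys \
  --vnet-subnet-id /subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.Network/virtualNetworks/app-vnet/subnets/aks-subnet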

How do you securely pull images from ACR to AKS?

The recommended approach is to attach Azure Container Registry (ACR) to your AKS cluster using managed identity. Running az aks update --attach-acr <acr-name> automatically grants the AKS managed identity the AcrPull role on the registry.

This eliminates the need for image pull secrets in your Kubernetes manifests. The cluster can pull images from the attached registry without any credentials in configuration files.
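
For reference, the attach command mentioned above, plus the equivalent at cluster creation (registry and cluster names are placeholders):

# Grant the cluster's managed identity AcrPull on the registry
az aks update --resource-group my-rg --name my-aks --attach-acr myregistry

# Or attach at creation time
az aks create --resource-group my-rg --name my-aks --attach-acr myregistry --generate-ssh-keys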


Azure Architecture Scenario Questions

These questions test your ability to combine Azure services into complete solutions.

How would you design a highly available web application on Azure?

When designing for high availability, think through each layer of the stack and how it achieves redundancy. A complete answer addresses compute, networking, data, and operational concerns.

A strong architecture includes:

  • App Service deployed across multiple regions, or VMs spread across Availability Zones, for compute redundancy
  • Azure Front Door or Traffic Manager for global load balancing and failover
  • Azure SQL with geo-replication (or Cosmos DB) for database high availability
  • Blob Storage with CDN for static assets
  • Azure AD for authentication and Key Vault for secrets, accessed via Managed Identity
  • Application Insights for monitoring and alerting

How would you migrate an on-premises .NET application to Azure?

Migration questions test practical experience. A methodical approach demonstrates maturity:

  1. Assess: Use Azure Migrate to discover the application and its dependencies
  2. Choose target: App Service (easiest for web apps), AKS (if containerizing), or VMs (pure lift-and-shift)
  3. Database: Azure SQL Managed Instance for maximum compatibility, or Azure SQL Database for new apps
  4. Identity: Azure AD Connect to sync on-premises Active Directory
  5. Networking: VPN Gateway or ExpressRoute for hybrid connectivity during and after migration
  6. Migration: Azure Site Recovery for VM migration, or direct deployment for App Service

How would you design secure access from a container application to Azure SQL and Key Vault?

This scenario tests your understanding of secure service-to-service communication in Azure.

Deploy to AKS with Azure CNI networking for VNet integration. Enable workload identity on AKS (the successor to pod identity). Grant that identity access to Key Vault via RBAC, and create an Azure AD user for it in the SQL database so it can authenticate without SQL credentials. Create Private Endpoints for both SQL and Key Vault to keep traffic on the Azure backbone. Your application uses DefaultAzureCredential—no secrets in code or configuration.

A VM can't connect to Azure SQL Database. What do you check?

Troubleshooting questions reveal operational experience. Work through the network path systematically:

  1. SQL Server firewall: Is the client IP or VNet allowed in the firewall rules?
  2. NSG on VM subnet: Is outbound traffic to port 1433 allowed?
  3. Private Endpoint: If using Private Endpoint, is private DNS resolution working correctly?
  4. Connection string: Are the server name, database, and authentication method correct?
  5. Authentication: If using SQL authentication, is it enabled on the server?

App Service can't access Key Vault. What do you check?

Another common troubleshooting scenario that tests understanding of managed identity and networking:

  1. Managed Identity: Is it enabled on the App Service?
  2. Access permissions: Is the identity granted access via RBAC or Key Vault access policy?
  3. Key Vault URI: Is the correct URI configured in the application?
  4. Network access: If using Private Endpoint, is VNet integration configured on App Service?
  5. Key Vault firewall: Is the firewall blocking the App Service?

AKS pods are stuck in Pending state. What do you check?

Kubernetes scheduling issues require understanding how the scheduler works; the kubectl commands sketched after this list cover most of these checks:

  1. Node resources: Is there enough CPU/memory available? (kubectl describe node)
  2. Node pool scaling: Are nodes available, or is the autoscaler still adding capacity?
  3. Taints and tolerations: Does the pod tolerate any taints on available nodes?
  4. Node selectors: Does the pod's nodeSelector match any available nodes?
  5. PVC issues: If using persistent storage, is the PersistentVolumeClaim bound?
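
A few kubectl commands that cover most of these checks (pod and namespace names are placeholders):

# Why is the pod unscheduled? The Events section at the bottom usually names the reason
kubectl describe pod my-pod -n my-namespace

# Allocatable vs requested resources (and taints) per node
kubectl describe nodes | grep -A 5 "Allocated resources"

# Is the PersistentVolumeClaim bound, or stuck waiting for a volume?
kubectl get pvc -n my-namespace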

Quick Reference

Resource Hierarchy: Tenant → Management Groups → Subscriptions → Resource Groups → Resources

Compute:

  • VMs: Full control, IaaS
  • App Service: Managed web hosting, PaaS
  • Functions: Serverless, event-driven
  • AKS: Managed Kubernetes

Identity:

  • Azure AD: Cloud identity platform
  • Managed Identity: No credentials in code
  • RBAC: Role-based access control

Networking:

  • VNet: Virtual network
  • NSG: Stateful firewall (free)
  • Azure Firewall: Enterprise firewall (paid)
  • Private Endpoint: Private IP for PaaS services

Storage/Database:

  • Blob: Object storage
  • Azure SQL: Managed SQL Server
  • Cosmos DB: Global NoSQL, tunable consistency

If you found this helpful, explore our other cloud and DevOps guides.

Ready to ace your interview?

Get 550+ interview questions with detailed answers in our comprehensive PDF guides.

View PDF Guides