Azure Files scalability and performance targets (2024)

Azure Files offers fully managed file shares in the cloud that are accessible via the Server Message Block (SMB) and Network File System (NFS) file system protocols. This article discusses the scalability and performance targets for Azure storage accounts, Azure Files, and Azure File Sync.

The targets listed here might be affected by other variables in your deployment. For example, the performance of I/O for a file might be impacted by your SMB client's behavior and by your available network bandwidth. You should test your usage pattern to determine whether the scalability and performance of Azure Files meet your requirements.

Applies to

| File share type | SMB | NFS |
| --- | --- | --- |
| Standard file shares (GPv2), LRS/ZRS | Yes | No |
| Standard file shares (GPv2), GRS/GZRS | Yes | No |
| Premium file shares (FileStorage), LRS/ZRS | Yes | Yes |

Azure Files scale targets

Azure file shares are deployed into storage accounts, which are top-level objects that represent a shared pool of storage. This pool of storage can be used to deploy multiple file shares. There are therefore three categories to consider: storage accounts, Azure file shares, and individual files.

Storage account scale targets

Storage account scale targets apply at the storage account level. There are two main types of storage accounts for Azure Files:

  • General purpose version 2 (GPv2) storage accounts: GPv2 storage accounts allow you to deploy Azure file shares on standard/hard disk-based (HDD-based) hardware. In addition to storing Azure file shares, GPv2 storage accounts can store other storage resources such as blob containers, queues, or tables. File shares can be deployed into the transaction optimized (default), hot, or cool tiers.

  • FileStorage storage accounts: FileStorage storage accounts allow you to deploy Azure file shares on premium/solid-state disk-based (SSD-based) hardware. FileStorage accounts can only be used to store Azure file shares; no other storage resources (blob containers, queues, tables, etc.) can be deployed in a FileStorage account.

| Attribute | GPv2 storage accounts (standard) | FileStorage storage accounts (premium) |
| --- | --- | --- |
| Number of storage accounts per region per subscription | 250¹ | 250¹ |
| Maximum storage account capacity | 5 PiB² | 100 TiB (provisioned) |
| Maximum number of file shares | Unlimited | Unlimited; the total provisioned size of all shares must be less than the maximum storage account capacity |
| Maximum concurrent request rate | 20,000 IOPS² | 102,400 IOPS |
| Throughput (ingress + egress) for LRS/GRS in these regions: Australia East, Central US, East Asia, East US 2, Japan East, Korea Central, North Europe, South Central US, Southeast Asia, UK South, West Europe, West US | Ingress: 7,152 MiB/sec; Egress: 14,305 MiB/sec | 10,340 MiB/sec |
| Throughput (ingress + egress) for ZRS in these regions: Australia East, Central US, East US, East US 2, Japan East, North Europe, South Central US, Southeast Asia, UK South, West Europe, West US 2 | Ingress: 7,152 MiB/sec; Egress: 14,305 MiB/sec | 10,340 MiB/sec |
| Throughput (ingress + egress) for redundancy/region combinations not listed in the previous rows | Ingress: 2,980 MiB/sec; Egress: 5,960 MiB/sec | 10,340 MiB/sec |
| Maximum number of virtual network rules | 200 | 200 |
| Maximum number of IP address rules | 200 | 200 |
| Management read operations | 800 per 5 minutes | 800 per 5 minutes |
| Management write operations | 10 per second / 1,200 per hour | 10 per second / 1,200 per hour |
| Management list operations | 100 per 5 minutes | 100 per 5 minutes |

1 With a quota increase, you can create up to 500 storage accounts with standard endpoints per region. For more information, see Increase Azure Storage account quotas.

2 General-purpose version 2 storage accounts support higher capacity limits and higher limits for ingress by request. To request an increase in account limits, contact Azure Support.

Azure file share scale targets

Azure file share scale targets apply at the file share level.

| Attribute | Standard file shares¹ | Premium file shares |
| --- | --- | --- |
| Minimum size of a file share | No minimum | 100 GiB (provisioned) |
| Provisioned size increase/decrease unit | N/A | 1 GiB |
| Maximum size of a file share | 100 TiB | 100 TiB |
| Maximum number of files in a file share | No limit | No limit |
| Maximum request rate (max IOPS) | 20,000 | Baseline IOPS: 3,000 + 1 IOPS per GiB, up to 102,400; IOPS bursting: Max (10,000, 3x IOPS per GiB), up to 102,400 |
| Throughput (ingress + egress) for a single file share (MiB/sec) | Up to storage account limits | 100 + CEILING(0.04 * ProvisionedStorageGiB) + CEILING(0.06 * ProvisionedStorageGiB) |
| Maximum number of share snapshots | 200 snapshots | 200 snapshots |
| Maximum object name length² (full pathname including all directories, file names, and backslash characters) | 2,048 characters | 2,048 characters |
| Maximum length of individual pathname component² (in the path \A\B\C\D, each letter represents a directory or file that is an individual component) | 255 characters | 255 characters |
| Hard link limit (NFS only) | N/A | 178 |
| Maximum number of SMB Multichannel channels | N/A | 4 |
| Maximum number of stored access policies per file share | 5 | 5 |

1 The limits for standard file shares apply to all three of the tiers available for standard file shares: transaction optimized, hot, and cool.

2 Azure Files enforces certain naming rules for directory and file names.
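
To make the provisioned (premium) formulas in the table above easier to apply, here's a minimal Python sketch that computes baseline IOPS, burst IOPS, and throughput for a given provisioned size. The function name and example values are illustrative, and the sketch interprets "3x IOPS per GiB" as 3 IOPS per provisioned GiB; treat it as an estimate, not an official calculator.

```python
import math

def premium_share_limits(provisioned_gib: int) -> dict:
    """Estimate premium file share limits from the provisioned size (GiB),
    using the formulas in the table above."""
    baseline_iops = min(3000 + provisioned_gib, 102_400)         # 3,000 + 1 IOPS per GiB, capped at 102,400
    burst_iops = min(max(10_000, 3 * provisioned_gib), 102_400)  # Max(10,000, 3 IOPS per GiB), capped at 102,400
    throughput_mib_per_sec = (
        100
        + math.ceil(0.04 * provisioned_gib)
        + math.ceil(0.06 * provisioned_gib)
    )
    return {
        "baseline_iops": baseline_iops,
        "burst_iops": burst_iops,
        "throughput_mib_per_sec": throughput_mib_per_sec,
    }

# Example: a 10 TiB (10,240 GiB) provisioned share
print(premium_share_limits(10_240))
# {'baseline_iops': 13240, 'burst_iops': 30720, 'throughput_mib_per_sec': 1125}
```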

File scale targets

File scale targets apply to individual files stored in Azure file shares.

| Attribute | Files in standard file shares | Files in premium file shares |
| --- | --- | --- |
| Maximum file size | 4 TiB | 4 TiB |
| Maximum concurrent request rate | 1,000 IOPS | Up to 8,000¹ |
| Maximum ingress for a file | 60 MiB/sec | 200 MiB/sec (up to 1 GiB/sec with SMB Multichannel)² |
| Maximum egress for a file | 60 MiB/sec | 300 MiB/sec (up to 1 GiB/sec with SMB Multichannel)² |
| Maximum concurrent handles for root directory³ | 10,000 handles | 10,000 handles |
| Maximum concurrent handles per file and directory³ | 2,000 handles | 2,000 handles |

1 Applies to read and write I/Os (typically I/O sizes less than or equal to 64 KiB). Limits for metadata operations other than reads and writes may be lower. These are soft limits, and throttling can occur beyond them.

2 Subject to machine network limits, available bandwidth, I/O sizes, queue depth, and other factors. For details see SMB Multichannel performance.

3 Azure Files supports 10,000 open handles on the root directory and 2,000 open handles per file and directory within the share. The number of active users supported per share is dependent on the applications that are accessing the share. If your applications aren't opening a handle on the root directory, Azure Files can support more than 10,000 active users per share. However, if you're using Azure Files to store disk images for large-scale virtual desktop workloads, you might run out of handles for the root directory or per file/directory. In this case, you might need to use multiple Azure file shares. For more information, see Azure Files sizing guidance for Azure Virtual Desktop.

Azure Files sizing guidance for Azure Virtual Desktop

A popular use case for Azure Files is storing user profile containers and disk images for Azure Virtual Desktop, using either FSLogix or App attach. In large scale Azure Virtual Desktop deployments, you might run out of handles for the root directory or per file/directory if you're using a single Azure file share. This section describes how handles are consumed by various types of disk images, and provides sizing guidance depending on the technology you're using.

FSLogix

If you're using FSLogix with Azure Virtual Desktop, your user profile containers are either Virtual Hard Disk (VHD) or Hyper-V Virtual Hard Disk (VHDX) files, and they're mounted in a user context, not a system context. Each user opens a single root directory handle to the file share. Azure Files can support a maximum of 10,000 users, assuming the layout is the file share (\\storageaccount.file.core.windows.net\sharename) + the profile directory (%sid%_%username%) + the profile container (profile_%username%.vhd(x)).

If you're hitting the limit of 10,000 concurrent handles for the root directory or users are seeing poor performance, try using an additional Azure file share and distributing the containers between the shares.

Warning

While Azure Files can support up to 10,000 concurrent users from a single file share, it's critical to properly test your workloads against the size and type of file share you've created. Your requirements might vary based on users, profile size, and workload.

For example, if you have 2,400 concurrent users, you'd need 2,400 handles on the root directory (one for each user), which is below the limit of 10,000 open handles. For FSLogix users, reaching the limit of 2,000 open file and directory handles is extremely unlikely. If you have a single FSLogix profile container per user, you'd only consume two file/directory handles: one for the profile directory and one for the profile container file. If users have two containers each (profile and ODFC), you'd need one additional handle for the ODFC file.
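
As a rough sanity check against these limits, the following sketch estimates FSLogix handle consumption. The function and its parameters are illustrative assumptions based on the description above, not an official sizing tool.

```python
ROOT_DIR_HANDLE_LIMIT = 10_000      # max concurrent handles on the share's root directory
PER_FILE_DIR_HANDLE_LIMIT = 2_000   # max concurrent handles per individual file or directory

def fslogix_handle_check(concurrent_users: int, containers_per_user: int = 1) -> None:
    """Each user opens one root directory handle, plus handles on their own
    profile directory and container file(s). Per-user files aren't shared, so
    the per-file/directory limit is effectively never the bottleneck."""
    root_handles = concurrent_users             # one per signed-in user
    per_user_handles = 1 + containers_per_user  # profile directory + each container file
    print(f"Root directory handles: {root_handles} / {ROOT_DIR_HANDLE_LIMIT}")
    print(f"Handles on any single user's files/directories: "
          f"{per_user_handles} / {PER_FILE_DIR_HANDLE_LIMIT}")
    if root_handles > ROOT_DIR_HANDLE_LIMIT:
        print("-> Split users across additional Azure file shares.")

fslogix_handle_check(concurrent_users=2_400, containers_per_user=2)  # profile + ODFC containers
```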

App attach with CimFS

If you're using MSIX App attach or App attach to dynamically attach applications, you can use Composite Image File System (CimFS) or VHD/VHDX files for disk images. Either way, the scale limits are per VM mounting the image, not per user. The number of users is irrelevant when calculating scale limits. When a VM is booted, it mounts the disk image, even if there are zero users.

If you're using App attach with CimFS, the disk images only consume handles on the disk image files. They don't consume handles on the root directory or the directory containing the disk image. However, because a CimFS image is a combination of the .cim file and at least two other files, for every VM mounting the disk image, you'll need one handle each for three files in the directory. So if you have 100 VMs, you'll need 300 file handles.

You might run out of file handles if the number of VMs per app exceeds 2,000. In this case, use an additional Azure file share.
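
The same kind of back-of-the-envelope estimate works for CimFS images, assuming three files per image as described above; the helper below is illustrative only.

```python
PER_FILE_DIR_HANDLE_LIMIT = 2_000   # max concurrent handles per individual file

def cimfs_handles(num_vms: int, files_per_image: int = 3) -> None:
    """Each VM that mounts a CimFS image opens one handle on each of the image's
    files (the .cim plus at least two companion files); no root directory or
    parent directory handles are consumed."""
    total_file_handles = num_vms * files_per_image
    handles_per_file = num_vms      # each individual image file gets one handle per VM
    print(f"Total file handles for this image: {total_file_handles}")
    print(f"Handles on any single image file: {handles_per_file} / {PER_FILE_DIR_HANDLE_LIMIT}")
    if handles_per_file > PER_FILE_DIR_HANDLE_LIMIT:
        print("-> More than 2,000 VMs per app: use an additional Azure file share.")

cimfs_handles(num_vms=100)   # 100 VMs -> 300 file handles, 100 handles per file
```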

App attach with VHD/VHDX

If you're using App attach with VHD/VHDX files, the files are mounted in a system context, not a user context, and they are shared and read-only. More than one handle on the VHDX file can be consumed by a connecting system. To stay within Azure Files scale limits, the number of VMs multiplied by the number of apps must be less than 10,000, and the number of VMs per app can't exceed 2,000. So the constraint is whichever you hit first.

In this scenario, you could hit the per file/directory limit with 2,000 mounts of a single VHD/VHDX. Or, if the share contains multiple VHD/VHDX files, you could hit the root directory limit first. For example, 100 VMs mounting 100 shared VHDX files will hit the 10,000 handle root directory limit.

In another example, 100 VMs accessing 20 apps will require 2,000 root directory handles (100 x 20 = 2,000), which is well within the 10,000 limit for root directory handles. You'll also need a file handle and a directory/folder handle for every VM mounting the VHD(X) image, so 200 handles in this case (100 file handles + 100 directory handles), which is comfortably below the 2,000 handle limit per file/directory.
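
The sketch below expresses this handle arithmetic for shared VHD/VHDX images; the function and parameter names are illustrative, and the counting follows the description above.

```python
ROOT_DIR_HANDLE_LIMIT = 10_000      # max concurrent handles on the share's root directory
PER_FILE_DIR_HANDLE_LIMIT = 2_000   # max concurrent handles per individual file or directory

def app_attach_vhdx_handles(num_vms: int, num_apps: int) -> None:
    """Each VM mounting a shared VHD/VHDX consumes one root directory handle per
    app, plus one handle on the image file and one on its directory."""
    root_handles = num_vms * num_apps    # must stay under 10,000
    handles_per_image_file = num_vms     # per-file limit: 2,000 mounts of a single image
    handles_per_image_dir = num_vms
    print(f"Root directory handles: {root_handles} / {ROOT_DIR_HANDLE_LIMIT}")
    print(f"Handles per image file: {handles_per_image_file} / {PER_FILE_DIR_HANDLE_LIMIT}")
    print(f"Handles per image directory: {handles_per_image_dir} / {PER_FILE_DIR_HANDLE_LIMIT}")
    if root_handles > ROOT_DIR_HANDLE_LIMIT or num_vms > PER_FILE_DIR_HANDLE_LIMIT:
        print("-> Use an additional Azure file share.")

app_attach_vhdx_handles(num_vms=100, num_apps=20)   # 2,000 root handles; 100 per file/directory
```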

If you're hitting the limits on maximum concurrent handles for the root directory or per file/directory, use an additional Azure file share.

Azure File Sync scale targets

The following table indicates which targets are soft, representing the Microsoft tested boundary, and hard, indicating an enforced maximum:

| Resource | Target | Hard limit |
| --- | --- | --- |
| Storage Sync Services per region | 100 Storage Sync Services | Yes |
| Storage Sync Services per subscription | 15 Storage Sync Services | Yes |
| Sync groups per Storage Sync Service | 200 sync groups | Yes |
| Registered servers per Storage Sync Service | 99 servers | Yes |
| Private endpoints per Storage Sync Service | 100 private endpoints | Yes |
| Cloud endpoints per sync group | 1 cloud endpoint | Yes |
| Server endpoints per sync group | 100 server endpoints | Yes |
| Server endpoints per server | 30 server endpoints | Yes |
| File system objects (directories and files) per sync group | 100 million objects | No |
| Maximum number of file system objects (directories and files) in a directory (not recursive) | 5 million objects | Yes |
| Maximum object (directories and files) security descriptor size | 64 KiB | Yes |
| File size | 100 GiB | No |
| Minimum file size for a file to be tiered | Double the file system cluster size. For example, if the file system cluster size is 4 KiB, the minimum file size is 8 KiB. | Yes |

Note

An Azure File Sync endpoint can scale up to the size of an Azure file share. If the Azure file share size limit is reached, sync won't be able to operate.

Azure File Sync performance metrics

Since the Azure File Sync agent runs on a Windows Server machine that connects to the Azure file shares, effective sync performance depends upon a number of factors in your infrastructure: Windows Server and the underlying disk configuration, network bandwidth between the server and the Azure storage, file size, total dataset size, and the activity on the dataset. Since Azure File Sync works on the file level, the performance characteristics of an Azure File Sync-based solution should be measured by the number of objects (files and directories) processed per second.

For Azure File Sync, performance is critical in two stages:

  1. Initial one-time provisioning: To optimize performance on initial provisioning, refer to Onboarding with Azure File Sync for the optimal deployment details.
  2. Ongoing sync: After the data is initially seeded in the Azure file shares, Azure File Sync keeps multiple endpoints in sync.

Note

When many server endpoints in the same sync group are syncing at the same time, they're contending for cloud service resources. As a result, upload performance is impacted. In extreme cases, some sync sessions will fail to access the resources, and will fail. However, those sync sessions will resume shortly and eventually succeed once the congestion is reduced.

Internal test results

To help you plan your deployment for each of the stages (initial one-time provisioning and ongoing sync), here are the results we observed during internal testing on a system with the following configuration:

| System configuration | Details |
| --- | --- |
| CPU | 64 virtual cores with 64 MiB L3 cache |
| Memory | 128 GiB |
| Disk | SAS disks in RAID 10 with battery-backed cache |
| Network | 1 Gbps network |
| Workload | General purpose file server |

Initial one-time provisioning

| Initial one-time provisioning | Details |
| --- | --- |
| Number of objects | 25 million objects |
| Dataset size | ~4.7 TiB |
| Average file size | ~200 KiB (largest file: 100 GiB) |
| Initial cloud change enumeration | 80 objects per second |
| Upload throughput | 20 objects per second per sync group |
| Namespace download throughput | 400 objects per second |

Initial cloud change enumeration: When a new sync group is created, initial cloud change enumeration is the first step that executes. In this process, the system will enumerate all the items in the Azure file share. During this process, there will be no sync activity. No items will be downloaded from cloud endpoint to server endpoint, and no items will be uploaded from server endpoint to cloud endpoint. Sync activity will resume once initial cloud change enumeration completes.

Initial cloud change enumeration proceeds at about 80 objects per second. You can estimate the time it takes to complete by determining the number of items in the cloud share and using the following formula to get the time in days.

Time (in days) for initial cloud enumeration = (Number of objects in cloud endpoint)/(80 * 60 * 60 * 24)
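
As a worked example, the estimate can be expressed as a small Python helper; the 80 objects per second rate is the internally tested figure above, and the function name is illustrative.

```python
def cloud_enumeration_days(num_objects: int, objects_per_sec: int = 80) -> float:
    """Estimated days for initial cloud change enumeration of an Azure file share."""
    return num_objects / (objects_per_sec * 60 * 60 * 24)

# Example: a cloud endpoint containing 25 million objects
print(f"{cloud_enumeration_days(25_000_000):.1f} days")   # ~3.6 days
```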

Initial sync of data from Windows Server to Azure File share: Many Azure File Sync deployments start with an empty Azure file share because all the data is on the Windows Server. In these cases, the initial cloud change enumeration is fast, and the majority of time is spent syncing changes from the Windows Server into the Azure file share(s).

While sync uploads data to the Azure file share, there's no downtime on the local file server, and administrators can set up network limits to restrict the amount of bandwidth used for background data upload.

Initial sync is typically limited by the initial upload rate of 20 files per second per sync group. Customers can estimate the time to upload all their data to Azure using the following formula to get the time in days:

Time (in days) for uploading files to a sync group = (Number of objects in server endpoint)/(20 * 60 * 60 * 24)

Splitting your data into multiple server endpoints and sync groups can speed up this initial data upload, because the upload can be done in parallel for multiple sync groups at a rate of 20 items per second each. So, two sync groups would be running at a combined rate of 40 items per second. The total time to complete would be the time estimate for the sync group with the most files to sync.
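
A short sketch of this estimate, including the effect of splitting data across sync groups, is shown below; the object counts are illustrative and the 20 objects per second rate is the tested figure above.

```python
def upload_days(objects_in_sync_group: int, objects_per_sec: int = 20) -> float:
    """Estimated days to upload one sync group's data at 20 objects per second."""
    return objects_in_sync_group / (objects_per_sec * 60 * 60 * 24)

# Example: 25 million objects, either in one sync group or split into two (15M and 10M).
# Sync groups upload in parallel, so the total time is governed by the largest one.
sync_groups = [15_000_000, 10_000_000]
print(f"Single sync group: {upload_days(sum(sync_groups)):.1f} days")                 # ~14.5 days
print(f"Split (parallel):  {max(upload_days(n) for n in sync_groups):.1f} days")      # ~8.7 days
```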

Namespace download throughput: When a new server endpoint is added to an existing sync group, the Azure File Sync agent doesn't download any of the file content from the cloud endpoint. It first syncs the full namespace and then triggers background recall to download the files, either in their entirety or, if cloud tiering is enabled, according to the cloud tiering policy set on the server endpoint.

Ongoing sync

| Ongoing sync | Details |
| --- | --- |
| Number of objects synced | 125,000 objects (~1% churn) |
| Dataset size | 50 GiB |
| Average file size | ~500 KiB |
| Upload throughput | 20 objects per second per sync group |
| Full download throughput* | 60 objects per second |

*If cloud tiering is enabled, you're likely to observe better performance as only some of the file data is downloaded. Azure File Sync only downloads the data of cached files when they're changed on any of the endpoints. For any tiered or newly created files, the agent doesn't download the file data, and instead only syncs the namespace to all the server endpoints. The agent also supports partial downloads of tiered files as they're accessed by the user.

Note

These numbers aren't an indication of the performance that you'll experience. The actual performance depends on multiple factors as outlined in the beginning of this section.

As a general guide for your deployment, keep a few things in mind:

  • The object throughput approximately scales in proportion to the number of sync groups on the server. Splitting data into multiple sync groups on a server yields better throughput, which is also limited by the server and network.
  • The object throughput is inversely proportional to the MiB per second throughput. For smaller files, you'll experience higher throughput in terms of the number of objects processed per second, but lower MiB per second throughput. Conversely, for larger files, you'll get fewer objects processed per second, but higher MiB per second throughput. The MiB per second throughput is limited by the Azure Files scale targets.

See also

  • Understand Azure Files performance
  • Planning for an Azure Files deployment
  • Planning for an Azure File Sync deployment