Getting the right server configuration for your ConfigMgr 2012 R2 environment is far from an exact science, but with a few simple tests you can at least get an idea of whether you’re dead wrong or still on track… As you probably know, ConfigMgr 2012 requires a lot more CPU and memory than ConfigMgr 2007 ever did, but the key bottleneck is most often disk I/O.
Reaching out to the community
The information later in this post is from my own experience from various customer engagements over the last few years, but I would love to hear from you. I’m hoping to gather additional real-world numbers here, providing a few more samples that could help others size their ConfigMgr servers better. If you are willing, please contact me (via email) and we’ll take it from there.
What I’m asking for is the following:
- Notify the help desk that ConfigMgr will be unavailable for 30 minutes.
- Notify the storage folks that you plan to put a really high load on their SAN for about 15 minutes (ask nicely, and/or wait until off-peak hours).
- Create as large a benchmark file (see note below) as you can on each volume.
- Stop all ConfigMgr/SQL services on the site server (if possible, disconnect the server from the network); see the sketch after this list.
- Run SQLIO (use the PowerShell script below and pipe the output to a text file).
- Send me the text file, plus some info about your site server(s), VM configuration (CPU/memory/disk), SAN hardware, etc.
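As a rough sketch of the service-stop step, something like this could work. The service names assume a default SQL Server instance (MSSQLSERVER) and the standard ConfigMgr 2012 services; verify them with Get-Service on your own site server first.

# A hedged sketch for stopping the site server services before benchmarking.
# Service names are assumptions for a typical ConfigMgr 2012 site server with
# a default SQL instance; check Get-Service on your server before running.
'SMS_EXECUTIVE','SMS_SITE_COMPONENT_MANAGER','SMS_SITE_VSS_WRITER','MSSQLSERVER' |
    ForEach-Object { Stop-Service -Name $_ -Force -ErrorAction SilentlyContinue }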
Together we can provide some real-world configurations that other admins can learn from…
Thanks / Johan
Single Primary Site Server, supporting 12,000 clients
For this scenario, I would start off with a single VM running Windows Server 2012 or Windows Server 2012 R2, running SQL Server 2012 SP1 locally on the VM, with the VM configured with 4 vCPUs and 32 GB of RAM.
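If the VM happens to run on Hyper-V, that configuration could be set with something like the following; the VM name CM01 is just a placeholder.

# A minimal Hyper-V sketch; CM01 is a hypothetical VM name.
Set-VMProcessor -VMName CM01 -Count 4
Set-VMMemory -VMName CM01 -StartupBytes 32GB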
But before installing SQL Server 2012 SP1, I request a few disk volumes from the storage group so I can determine what the final disk layout should be. To do that, I use SQLIO from Microsoft to get a rough idea of the performance I get from each volume. After gathering and reviewing the SQLIO results, I request the final disk layout from the storage group.
The critical thing about using SQLIO is to have a large enough file to test with (a 100 GB file is enough for most tests), and to run the tests for at least a few minutes. And please do not create the file using FSUtil, because it will just create an empty file, which the SAN cache may suck into RAM immediately, and your test results will be off the charts. Create a “real” file with content: generate a giant ISO file, or a large WinRAR archive, anything you can think of, as long as the file is full of data.
You can also download the free CreateFile.exe written by Deepak Kumar (Adaptiva), which creates test files that are incompressible, again, so you get real results.
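If you prefer plain PowerShell, a rough sketch like this one also produces an incompressible file; the path and size are just examples.

# A rough sketch: fills G:\Benchmarkfile.dat with 100 GB of pseudo-random data.
# Random content cannot be compressed or deduplicated away by the SAN cache,
# so the benchmark hits real storage.
$path   = 'G:\Benchmarkfile.dat'
$rand   = New-Object System.Random
$buffer = New-Object byte[] (4MB)
$stream = [System.IO.File]::Create($path)
try {
    for ($written = 0L; $written -lt 100GB; $written += $buffer.Length) {
        $rand.NextBytes($buffer)
        $stream.Write($buffer, 0, $buffer.Length)
    }
}
finally {
    $stream.Dispose()
}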
Next step, benchmarking
Then run some SQLIO tests with various block sizes; here is a good starting point:
# Read Sequential, various block sizes (8, 64 and 512 KB)
.\SQLIO.EXE -s120 -kR -fsequential -b8 -t4 -o2 -LS -BN G:\Benchmarkfile.dat
.\SQLIO.EXE -s120 -kR -fsequential -b64 -t4 -o2 -LS -BN G:\Benchmarkfile.dat
.\SQLIO.EXE -s120 -kR -fsequential -b512 -t4 -o2 -LS -BN G:\Benchmarkfile.dat

# Read Random, various block sizes (8, 64 and 512 KB)
.\SQLIO.EXE -s120 -kR -frandom -b8 -t4 -o16 -LS -BN G:\Benchmarkfile.dat
.\SQLIO.EXE -s120 -kR -frandom -b64 -t4 -o16 -LS -BN G:\Benchmarkfile.dat
.\SQLIO.EXE -s120 -kR -frandom -b512 -t4 -o16 -LS -BN G:\Benchmarkfile.dat

# Write Random, various block sizes (8, 64 and 512 KB)
.\SQLIO.EXE -s120 -kW -frandom -b8 -t4 -o16 -LS -BN G:\Benchmarkfile.dat
.\SQLIO.EXE -s120 -kW -frandom -b64 -t4 -o16 -LS -BN G:\Benchmarkfile.dat
.\SQLIO.EXE -s120 -kW -frandom -b512 -t4 -o2 -LS -BN G:\Benchmarkfile.dat
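If you save those commands as a script, capturing everything to the text file mentioned in the list earlier could look like this; the script name is just an example.

# Run-SQLIOBenchmark.ps1 is a hypothetical name for a script holding the commands above.
.\Run-SQLIOBenchmark.ps1 | Out-File -FilePath C:\Temp\SQLIO-Results.txt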
For example, if the Read Sequential test with 8 KB blocks is below 25,000 IOPS and the Random Write test with 8 KB blocks is below 7,000 IOPS, I would expect a somewhat normal SAN (or just a badly configured/sized high-end SAN), and would recommend a classic ConfigMgr disk layout with six volumes.
In this configuration I’m splitting the DB files, the DB Logs, and the TempDB over three different volumes.
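As an illustration only (the exact split is my assumption, adjust to taste), the six volumes could be carved up like this:
- C: Operating system
- D: ConfigMgr installation and content library
- E: SQL Server database files
- F: SQL Server transaction logs
- G: TempDB (data and log)
- H: Backups and package sources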
However, if I’m starting to see values way over 50,000 IOPS for the same read test, and 20,000 IOPS for the same write test, I would expect a really high-end SAN, possibly a local SSD array, or a local accelerator card (FusionIO etc.), and would most likely recommend a different disk layout with only four volumes. Because of the large amount of I/O available, dividing the database components is not as critical as in the previous scenario.
SSD or SSD Accelerator card configuration.
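Again as an illustration only (my assumption, not a mandated standard), a four-volume layout on that class of storage could look like this:
- C: Operating system
- D: ConfigMgr installation and content library
- E: SQL Server database files, transaction logs, and TempDB
- F: Backups and package sources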