One of the first arguments I ever had about ConfigMgr was around distribution points. The initial design drawn up included several secondary site servers sitting right alongside our primary site server in our campus datacenter. The reason? As documented, each site can only support up to 250 ‘regular’ distribution points and we had nearly that exact number of remote sites. I thought that was dumb and argued to use pull distribution points instead, since a single site can support over 2000 of them. While that was the singular reason we initially used Pull DPs, it ended up being a fortuitous decision for other reasons I’ve been meaning to write about.
Wait, What the Heck is a Pull DP?
If you are not familiar, Pull DPs were introduced in ConfigMgr 2012 SP1 with the specific goal of solving the problem I outlined above: avoiding the need for secondary sites. A regular DP has content pushed to it by the distribution manager in a nice, controlled, centralized manner. While that has some advantages, the processing overhead required was creating bottlenecks. Administrators were seeing their site servers get crushed by the processing and memory required to distribute large content sets (think software updates) to hundreds of DPs.
Pull DPs solve this bottleneck by taking on the overhead of managing the data transfer and verifying that it was successful. All the distribution manager has to do is tell the Pull DP that it has a job to do, wait for it to do its thing, and process the status messages it sends back. This distributed processing results in a ten-fold increase in the number of DPs a single site can support.
Nothing’s Free, What’s the Catch?
Ok, so there are some downsides to Pull DPs. Really, there are just two and they’re related: scheduling and rate limiting. Because the distribution manager is no longer handling the transfer you lose the ability to schedule when transfers happen and to configure a rate limit. However, even those aren’t really a total loss since Pull DPs download content just like a regular client using Background Intelligent Transfer Service (BITS). Since you can control BITS throttling using client settings you can effectively do the same things. Specify when you don’t want your Pull DPs to transfer data and target them with a client setting that limits transfers to 0 Kbps during that time. Outside of that time, specify how much bandwidth they can use. Boom, you have scheduling and rate limiting.
Note: If you throttle BITS on a PullDP that applies to any ConfigMgr client running on it as well. Set it to 0 and the client will not be able to download content … including from itself.
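If you drive client settings from PowerShell, the whole trick looks something like this. This is a minimal sketch using the ConfigurationManager module; the setting name ‘PullDP BITS Throttle’ is hypothetical and assumes you’ve created a custom client setting that you deploy to a collection containing only your Pull DPs:
#Run from ConfigMgr Console
# Hypothetical custom client setting deployed to a collection of Pull DPs
$bits = @{
    Name                       = 'PullDP BITS Throttle'
    EnableBitsMaxBandwidth     = $true
    MaxBandwidthBeginHr        = 8      # throttling window start (8 AM)
    MaxBandwidthEndHr          = 17     # throttling window end (5 PM)
    MaxTransferRateOnSchedule  = 0      # 0 Kbps inside the window = no transfers, per the trick above
    EnableDownloadOffSchedule  = $true
    MaxTransferRateOffSchedule = 4096   # cap at 4 Mbps outside the window
}
Set-CMClientSettingBackgroundIntelligentTransfer @bits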
But Wait! There’s MORE!
Sure, everything I just said should be enough to convince you if you are bumping up against the ‘regular’ DP limit of 250. Should the vast majority of organizations not hitting that limit still consider Pull DPs? Yes. Yes, they very much should. Let me tell you why.
Hub and Spoke Content Distribution
Since Pull DPs … pull … the data, you get to define where they pull it from, and those sources can themselves be Pull DPs. You can have multiple sources, allowing for fault tolerance, and you can order their priority to provide some amount of load balancing. More importantly, if you work in a widely distributed organization you can create a hub and spoke distribution architecture without secondary sites. Simply tier your Pull DPs in whatever hierarchy fits your needs to lessen the use of slow WAN links. This becomes really handy if you need to remotely populate a new DP. Make it a Pull DP, point it at the closest/fastest source DP, and watch content flow onto it. Great for replacing remote DPs.
Note: You can only select HTTP sources from within the console. If the PullDP has the ConfigMgr client installed you can configure HTTPS sources using the ConfigMgr Software Development Kit (docs).
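To make the tiering concrete, here’s a sketch of pointing a branch Pull DP at two prioritized sources. The server names are hypothetical; -SourceDistributionPoint and -SourceDPRank control the source list and its priority order:
#Run from ConfigMgr Console
# Hypothetical names: prefer the regional hub, fall back to the datacenter DP
Set-CMDistributionPoint -SiteSystemServerName 'dp-branch01.contoso.com' `
    -EnablePullDP $true `
    -SourceDistributionPoint 'dp-hub01.contoso.com','dp-dc01.contoso.com' `
    -SourceDPRank 1,2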
Pull DP + Dedupe + BranchCache = Freaking MAGIC
Before I get into this I should acknowledge and thank Phil Wilcock of 2Pint Software for giving me and the community at large a free masters class in dedupe and BranchCache. Andreas doesn’t exactly suck either but Phil patiently answered my onslaught of questions over the years so to him goes the glory.
Simply put: go enable data deduplication (docs), enable BranchCache on the Pull DP (docs), and watch magic happen. Data deduplication has its own simple to understand allure: reduced disk space usage. However, the additional benefit for Pull DPs is that dedupe will create all the necessary hash values for the content on the drive. Since the hashes already exist they are ready to be used by BranchCache to eliminate downloading content that’s already on the drive. What this means is that right out of the gate you get the WAN and storage benefits of BranchCache by first enabling dedupe.
Note: For this dedupe magic to work your Pull DPs need to be on Server 2012 or better. Windows 7 and 10 lack dedupe, so while they can use BranchCache they don’t get the extra benefits that dedupe brings to the table.
The end result is that your Pull DPs will use less storage (dedupe) and bandwidth across your WAN (BranchCache). I experienced a 30-40% reduction in disk space and data transfer across several hundred DPs doing this. Phil confirmed that this is in line with his experience as well so I think it’s a fair benchmark.
Enabling deduplication on the DP is relatively easy:
#Run locally on each DP
Import-Module ServerManager
Add-WindowsFeature -Name FS-Data-Deduplication    # install the dedupe feature
Import-Module Deduplication
Enable-DedupVolume <YourDriveLetter>:             # turn on dedupe for the content drive
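Once the first optimization job has run you can sanity-check your savings with the same Deduplication module:
#Run locally on each DP
# Shows per-volume savings after dedupe has processed the content
Get-DedupStatus | Select-Object Volume, FreeSpace, SavedSpace, OptimizedFilesCount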
Enabling BranchCache from ConfigMgr isn’t hard either:
#Run from ConfigMgr Console
Get-CMDistributionPoint | Set-CMDistributionPoint -EnableBranchCache $true
Phil’s BranchCache Pro-Tips
Here are a few nitty-gritty things I learned from Phil that you may want to do to further optimize BranchCache. Not strictly necessary, but you’ll thank Phil sooner or later that you did.
If you are not enabling BranchCache on clients then you should consider configuring the Pull DP to use local caching mode (PoSH: Enable-BCLocal). This will prevent BranchCache from reaching out to local peers for content thus drastically speeding up the lookup process.
By default when you enable BranchCache the cache location is on the C: drive. You probably have a drive dedicated to hosting the data and will want to move it there:
Set-BCCache -MoveTo <PathToValhalla>
If you are working with some very large packages you may want to change the size of the BranchCache … cache. Similar to the ConfigMgr client’s cache, it needs to be as large as your largest package. The default size is 5% of the disk volume it is on, but if that’s not enough you can increase it:
Set-BCCache -Percentage <MoreThan5_LessThan100_101IsRightOut>
or
Set-BCCache -SizeBytes <AVeryBigNumber>
Lastly, there’s apparently a bug that can cause BranchCache to crap all over itself when you update existing content. If you start seeing BranchCache Event 13 being thrown you will want to clear out your BranchCache cache (PoSH: Clear-BCCache). Note that this does not impact the stored data, only the cache that BranchCache uses.
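If you’d rather not watch the event log by hand, a rough sketch like this could run as a scheduled task (it assumes Event 13 shows up in the BranchCache operational log):
#Run locally on the Pull DP
# Look for BranchCache Event 13 and reset the cache if it shows up
$filter = @{ LogName = 'Microsoft-Windows-BranchCache/Operational'; Id = 13 }
$events = Get-WinEvent -FilterHashtable $filter -ErrorAction SilentlyContinue
if ($events) { Clear-BCCache -Force }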
Who Needs Scheduling and Throttling When You Have LEDBAT?
Earlier I called out the lack of scheduling and rate limiting as one of the things you lose with Pull DPs. While you can approximate those features by managing BITS, there’s an even better solution: Windows’ implementation of Low Extra Delay Background Transport (LEDBAT). Ok, so LEDBAT is more of a companion feature to BITS rather than a replacement, but I recommend enabling LEDBAT first and only messing with BITS throttling afterwards if you have to (Spoiler: you shouldn’t have to). LEDBAT is a congestion protocol that attempts to maximize bandwidth utilization while prioritizing other, non-LEDBAT traffic. If the line becomes congested by non-LEDBAT traffic it will throttle its own jobs until the other traffic finishes and bandwidth becomes available. Importantly, LEDBAT reacts to congestion along the entire network path between sender and receiver. It self-adjusts based on the weakest link.
LEDBAT was initially released as part of Windows Server 2019 but was back-ported to Server 2016 with the release of the May 2018 Servicing Stack Update (KB4132216) and the July 2018 Cumulative Update (KB4284833). If you’re still trying to get rid of Server 2008 boxes, don’t despair. LEDBAT works solely on the sender side, which means only your upstream or source DPs need to have LEDBAT enabled. Get your upstream DPs to 2016 or better and everything downstream will benefit regardless of OS level.
In Current Branch 1806 (doc) a setting was added to the Distribution Point configuration to enable LEDBAT on Server 2016 and above. There’s quite literally no reason to not enable this on all of your DPs so go do it now:
#Run from ConfigMgr Console
Get-CMDistributionPoint | Set-CMDistributionPoint -EnableLedbat $true
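If you want to confirm the OS side of things on a source DP, the congestion provider is visible in the TCP settings. The ConfigMgr checkbox handles the actual wiring, so this is just a sanity check:
#Run locally on a source DP
Get-NetTCPSetting | Select-Object SettingName, CongestionProvider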
Go Forth and Dedupe, Cache, and Use … Lead Bats?
There it is; my best case for using Pull DPs. The advantages they provide over ‘regular’ DPs are enormous and should not be ignored. They are, by far, the greatest Distribution Point OF ALL TIME.
Great article Brian. Can you clarify if de-dup is recommended on PullDPs only, or should it be enabled on the source DP as well?
I have 205 pull DPs and the crazy thing I keep experiencing is that every time we do a build upgrade we have pull DPs that quit working and I end up having to delete them and rebuild them. Do you have this problem?
I think with 1910 I had to rebuild 50. With Build 2006 it’s about 12.
Anyone else experience this? I spoke with a Microsoft person today and he made it sound like it’s a known issue and he just tells all his customers to go back to push DPs. Am I going to be told I need a CAS next (LOL)?
Yea, I will admit that Pull DPs can be finicky in annoying ways. I’m not at that same org anymore so I can’t speak to recent upgrades, but we didn’t see that issue. What we would see is that the final “I’m done installing” status message would get lost (or whatever) and while the Pull DP would work it would show up as having an issue in the console.
When you say ‘Microsoft person’ is that support or just someone you know? Because the solution in these kinds of cases usually means being stubborn with support. “I am using this as designed but it’s not working. Reinstalling the DP is useful for troubleshooting but is not a solution nor a root cause. Please either help me resolve this issue globally or escalate me to someone who can identify the root cause.” From personal experience, that can be a huge slog but once you get to one of the Sr. Support Engineers they don’t take this kind of stuff lightly. Sure, they might tell you it’s a bug and there’s nothing you can do right now but at least there’s hope for a fix. Then, after every upgrade, just publicly shame the product team on Twitter until it’s resolved.
Hi! If you already run a ‘normal’ DP on a remote site, is it possible to make it a Pull DP? Does it have to pull all the content again? And if the PXE responder is enabled on the remote DP, does it still work after the conversion?
Thanks in advance… Dietmar
Yes, you can switch between the two types of DPs without having to re-download the content since it’s stored in the same way/place. In fact, it’s a successful strategy for bringing up a new remote ‘normal’ DP: make it a PullDP to get the content from an existing local DP and then change it back.
PXE is supported on PullDPs but I’ve never actually tested that transition process between DP types personally.
out of curiosity, do you do any exclusions when setting up dedup?
I see some articles only applying dedup on SCCMContentLib and SMSPKGE$ while excluding everything else.
my testing doesn’t show any negative effect if I just do what you do so yea, just asking out of curiosity lol
In our case all our Pull DPs had a data drive (D:) and we had no_sms_on_drive.sms on the primary (C:) drives so we just deduped the data drive without exclusions.
ah it’s the same for us. There are files in other folders like SMSSIG so that’s why I was curious why some guides exclude those folders.
all good, thanks for the article Bryan!
Great post. Some really interesting use cases. Only other thing I might include is the limitation around HTTPS source DPs.
Done, added a note and a link to the docs regarding HTTPS sources.
Nice one. It’s weird you have to use the SDK to set an HTTPS source. HTTPS only is mandatory in the larger environments I’ve worked in. I’m sure there’s a highly technical explanation as to why they’ve made it so difficult…
Added that to my list of questions to ask at MMSJAZZ … I’ll let you know if I get an answer.
did you ever get an answer since you wrote this comment?