HOWTO: Disaster Recovery with Replication
- Updated on 16 May 2022
- 6 Minutes to read
How ProGet Supports Your Disaster Recovery Plan Using Feed Replication
Teams looking to fortify their disaster recovery plans can leverage ProGet’s replication feature to create standby feeds for use in an emergency.
ProGet can replicate a feed to be used in catastrophic disaster situations, like a fire, tornado, or long-term power outage, that would leave your production server unusable.
Replicated feeds can be filled with business-critical packages that your organization can’t afford to go without during a disaster. After setting up a disaster recovery feed with all business-critical packages, teams can still work at minimal capacity, limping along until the production server is back up and running.
This doc walks through an example of a company, Kramerica, using ProGet’s replication feature to replicate a feed to a disaster recovery server. Kramerica has two servers with two instances of ProGet Enterprise running:
- Production server in Ohio.
- Disaster recovery server in Florida.
To help differentiate between the two instances, screenshots from Kramerica's disaster recovery instance will be in dark mode.
Step 1: Plan What Feeds You Need
A disaster recovery server can be thought of like a bunker. It should be nice but doesn’t need to have all the comforts of home. Creating an exact duplicate of a production server would be expensive, difficult to upkeep, and a time sink.
Generally, teams should only replicate feeds to a disaster recovery server that:
- Are critical to business operation
- Would be necessary to rebuild your production server if it were lost
For example, Kramerica will not be replicating feeds like: ci-apps, testing-apps, unapproved-nuget, approved-nuget, pre-release nuget, etc. This is because those feeds aren’t considered business-critical, and are mostly filled with packages that are still being developed.
Of course, those feeds could be replicated -- but doing so would mean bringing over everything every time a new version of a package was built, or a package was changed. It’s just not necessary, especially for Kramerica’s “bunker” scenario.
Duplicating the environment would mean that Kramerica would be paying for larger, faster, load-balanced servers, something that isn’t in their budget. Instead, Kramerica only plans on replicating one feed to their ProGet disaster recovery instance: kramerica-internal-nuget
Step 2: Create Disaster Recovery Feed
To start, connect to your ProGet instance on a disaster recovery server which is, ideally, in a separate location.
For our example, we’ll create a NuGet feed and give it the same name as the feed on the production server: kramerica-internal-nuget. We’ve done this so that when it comes time to use the disaster recovery feed, we can simply switch the DNS to point to the disaster recovery server.
Note, we've created a "Private/internal" feed since we don't want this feed connecting to any public repository like NuGet.org.
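Because both feeds share one name, switching DNS means client configuration never has to change. A developer's nuget.config might look like the following sketch; the proget.kramerica.com hostname and the /nuget/&lt;feed&gt;/v3/index.json URL layout are assumptions for this scenario, not values from the source:

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <!-- Normally resolves to the Ohio production server;
         in a disaster, DNS is re-pointed to the Florida server
         and this entry keeps working unchanged. -->
    <add key="kramerica-internal-nuget"
         value="https://proget.kramerica.com/nuget/kramerica-internal-nuget/v3/index.json" />
  </packageSources>
</configuration>
```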
By default, all packages are replicated from one feed to another. To ensure that only necessary packages are stored in the disaster recovery feed, teams can set up more aggressive retention rules. As discussed in step 1, we don’t want or need our disaster recovery server to be large or powerful. So, the less disk space used, the better.
To set up retention rules, navigate to your disaster recovery feed > "Manage Feed" > "Storage & Retention" > "Add"
Here teams can set up retention rules that fit into their current disaster recovery plans and server capabilities.
For this example, we’ll create rules to tell ProGet to delete pre-release versions, all versions past the latest 10, and all unused versions not requested in the past 30 days.
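To make the combined effect of those three rules concrete, here is a minimal sketch that simulates them for a single package. The version-sorting and rule logic are simplified assumptions for illustration, not ProGet's actual retention engine:

```python
from datetime import datetime, timedelta

def apply_retention(versions, now, keep_latest=10, unused_days=30):
    """Simulate simplified retention rules for one package:
    drop pre-release versions, keep only the newest `keep_latest`
    stable versions, and drop versions not requested within
    `unused_days`. `versions` is a list of
    (version_string, last_requested_datetime) tuples ordered
    newest-first. Returns the version strings that survive."""
    cutoff = now - timedelta(days=unused_days)
    # SemVer pre-release versions contain a hyphen (e.g. 2.0.0-beta1)
    stable = [(v, t) for v, t in versions if "-" not in v]
    kept = [(v, t) for v, t in stable[:keep_latest] if t >= cutoff]
    return [v for v, _ in kept]
```

For example, given a feed holding 2.0.0, 2.0.0-beta1, an unused 1.9.0, and 1.8.0, only 2.0.0 and 1.8.0 would survive: the beta is pre-release and 1.9.0 hasn't been requested within the window.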
Step 3: Configure Production Feed for Replication
Now that our disaster recovery feed is set up, we can configure our production feed to be replicated.
To do this, access your production server, click on "Replication" > "Configure New Replication" and select which feed you would like to replicate.
We want our production feed to be replicated and want to make sure it isn’t accidentally modified by any other feeds. We can do this by selecting "Incoming" and generating a specific sync token that will only be shared with the disaster recovery feed.
Feed replication can be used for many use cases like Edge Computing or Federated Architecture. However, to properly configure a feed for Disaster Recovery, we will set the Replication mode to "Push Content to Other Instances."
To complete the configuration, confirm your configuration settings and click on "Add New Replication." Finally, click on the ProGet logo to navigate to your homepage and copy your URL. We will use that URL in the next step to establish communication with our disaster recovery feed.
Now that our production feed is configured, any number of feeds could connect and replicate it.
Step 4: Configure Disaster Recovery Feed for Replication
The disaster recovery feed now needs to be configured so it can replicate the production feed.
To do this, access your disaster recovery server, click "Replication" > "Configure New Replication" and select the feed that will replicate the production feed.
Next, we'll configure this feed for Outgoing communication and enter the URL from our production server as well as the sync token we generated in the previous step. Since both our feeds are named kramerica-internal-nuget, we'll check the "Other feed names" box.
For packages to be properly replicated from our production feed to our disaster recovery feed, we'll select "Pull Content from Other Instances."
After reviewing the configurations, click "Add New Replication" and your disaster recovery feed will be fully configured.
Step 5: Test Run Replication
After configuring your disaster recovery feed you'll be redirected to the Replication Overview.
By default, replication runs every 60 seconds, so we could simply wait for it to run automatically. But for this example, Kramerica wants the replication to run immediately to verify its success. To do this, click "Run All Replications Now."
Now all the packages from the production server feed will be in our disaster recovery feed.
Step 6: Test Run Retention
After performing the replication, you should also test that the retention rules you set are working on the disaster recovery feed.
You can do this by going to "Admin" > "Additional Logs & Events" > "Scheduled Jobs" and pressing the green play button beside the desired feed.
Verify Successful Replication
There are many ways to check that the replication was a success in ProGet.
To view a brief overview of your replication history, connect to your disaster recovery instance and navigate to "Replication Overview" > "Replication" > "History".
For a more detailed history, including the names of the packages that were replicated, click "View last run" to review the full execution details.
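Beyond the ProGet UI, replication can also be spot-checked by comparing the package lists on the two instances. A minimal sketch, assuming both feeds expose a NuGet V3 search endpoint; the hostnames and URLs in the comments are hypothetical, and the network calls are left commented out:

```python
import json
import urllib.request

def list_package_ids(search_url):
    """Return the set of package IDs reported by a NuGet V3
    search endpoint (the URL is instance-specific)."""
    with urllib.request.urlopen(f"{search_url}?take=1000") as resp:
        data = json.load(resp)
    return {item["id"] for item in data["data"]}

def missing_from_dr(prod_ids, dr_ids):
    """Package IDs present in production but absent from the
    disaster recovery feed, sorted for stable reporting."""
    return sorted(prod_ids - dr_ids)

# Hypothetical endpoints for Kramerica's two instances:
# prod = list_package_ids("https://proget-ohio.kramerica.com/nuget/kramerica-internal-nuget/v3/search")
# dr   = list_package_ids("https://proget-florida.kramerica.com/nuget/kramerica-internal-nuget/v3/search")
# missing_from_dr(prod, dr)  # an empty list means every package made it across
```

Because the disaster recovery feed applies aggressive retention, version counts will differ by design; comparing package IDs rather than every version is the more meaningful check here.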
Step 7: Test Disaster Recovery Plan
This of course will be different from team to team. But generally, you’d want to test your disaster recovery plan by migrating an existing ProGet installation to a new server and deploying from it.
Kramerica will test their disaster recovery plan by setting up a usable ProGet instance with the packages available in their disaster recovery feed. They will then attempt to deploy from the newly set up instance that, in a true crisis situation, would serve as their new production server.
After successfully deploying from the new production server, Kramerica will then do a one-time replication from the disaster recovery feed to the new production feeds.
Production Server Feed One-Time Replication Configuration
- Outgoing options: Only apply external changes to local feed
- Incoming options: Disabled
Disaster Recovery Feed One-Time Replication Configuration
- Outgoing Replication: Disabled
- Incoming Replication: Allow external feeds to replicate from this feed
Once all of Kramerica’s business-critical packages are confirmed to be on both the disaster recovery and production servers, we’ll switch the inbound and outbound settings back to those configured in steps 3 and 4.