A Typical IT Backup Policy and Backup Procedure

Let's see why a backup policy was important for a typical IT company, and how the company implemented one.

Background:

As we progress along our pathway towards higher levels of Enterprise Management Systems, the number of critical Intranet Systems has also increased, and it will keep increasing in the years to come. Recently xyz implemented high-end e-Servers for hosting its Intranet Web site and Applications. This first step shows how seriously xyz takes providing the best system support for its mission-critical applications.

As a second step towards providing 100% availability of applications, xyz plans to implement an extensive backup policy for its intranet servers. As the Technical Leader from abc consulting, my responsibility is to set up the Backup Policy and then implement it.

We have studied various parameters and have come up with four levels of backup:

  • Level 0 – Daily backups on the same servers
    • Process: Automatic
  • Level 1 – Stand by system
    • Process: Automatic
  • Level 2 – CD based backup
    • Process: Manual
  • Level 3 – Magnetic Tape based backup
    • Process: Automatic

The Backup Policy we have designed for xyz's Intranet Systems is very cost-effective, in the sense that we are not using any branded backup software. We have used Linux shell scripting and scheduling to set up the various backup methods. I shall explain the Backup Policy and methods in detail below.

 Backup Policy:

At xyz, abc consulting is maintaining 3 servers – 11.1.xx.xx (Intranet Web site, HR management system, Task Scheduler, Meeting Software), 11.2.xx.xx (Workflow) and 11.3.xx.xx (Stand by server, Internet Gateway, Development/Testing server). Of these, backup policies will be implemented for 11.1.xx.xx (www server) and 11.2.xx.xx (workflow server); 11.3.xx.xx will be the Stand by server.

We will take full backups of the Database and Programs at every level of backup (Level 0 through Level 3), every time a backup is taken; this runs daily during off hours. The Data Files, however, come in bulk amounts, so for them it is advisable to take daily incremental backups and weekly full backups. We will therefore stick to daily incremental and weekly full backups of Data Files at Levels 1, 2 and 3. At Level 0 we take a daily full backup of the Data Files as well, since that backup is stored on the same server.

It would be possible to take a daily full backup of the Data Files too, but the problem lies in writing gigabytes of data onto CDs, and in transferring and storing those same gigabytes on the Stand by server and the Magnetic Tapes. abc has developed custom Linux shell scripts that back up all the required parts of both servers (Programs, file data and Database) on appropriate schedules.
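The daily-incremental / weekly-full scheme for Data Files can be sketched with GNU tar's snapshot feature. This is an illustrative sketch only, not xyz's actual script: the /tmp paths stand in for the real '/virtualHosts.data/' and '/backup/' areas, and a throw-away demo tree is created so the commands can run anywhere.

```shell
#!/bin/sh
# Illustrative sketch of daily incremental + weekly full backups,
# using GNU tar's --listed-incremental snapshot file. Paths are
# stand-ins, not the production areas.
DATA_DIR=/tmp/demo-data
BACKUP_DIR=/tmp/demo-backup
SNAPSHOT="$BACKUP_DIR/datafiles.snar"   # records what the last run saw
STAMP=$(date +%Y%m%d)

mkdir -p "$DATA_DIR" "$BACKUP_DIR"
echo "day one" > "$DATA_DIR/report.csv"

# Weekly full backup: remove the snapshot so a fresh cycle begins
rm -f "$SNAPSHOT"
tar --listed-incremental="$SNAPSHOT" -czf "$BACKUP_DIR/Data-full-$STAMP.tgz" -C "$DATA_DIR" .

# The next day, only files changed since the snapshot go into the archive
echo "day two" > "$DATA_DIR/new.csv"
tar --listed-incremental="$SNAPSHOT" -czf "$BACKUP_DIR/Data-incr-$STAMP.tgz" -C "$DATA_DIR" .
```

In production, the full-backup branch would run on the weekly schedule and the incremental branch on the remaining days, driven by the system scheduler.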

Now, to save space on the backup media, we will rotate CDs on a weekly basis for the CD based backup, and clean out two-week-old full backups for the other backup methods. For the Level 0, Level 1 and Level 3 methods, we will set up schedulers to do this clean-up job.
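The clean-up rule itself is a one-liner around `find`. Below is a hedged sketch (the script name and /tmp paths are ours, not xyz's; the real job would run against '/backup/'), with one stale and one fresh archive simulated so the rule can be exercised:

```shell
#!/bin/sh
# Sketch of the fortnightly clean-up: delete full-backup archives
# older than 14 days. A /tmp directory stands in for '/backup/'.
BACKUP_DIR=/tmp/demo-cleanup
mkdir -p "$BACKUP_DIR"

# Simulate one stale and one fresh full backup
touch -d '20 days ago' "$BACKUP_DIR/Data-full-old.tgz"
touch "$BACKUP_DIR/Data-full-new.tgz"

# The actual rule: remove *full*.tgz files last modified >14 days ago
find "$BACKUP_DIR" -type f -name '*full*.tgz' -mtime +14 -exec rm -f {} \;
```

A crontab entry such as `0 1 * * 0 /usr/local/bin/cleanup_backups.sh` (hypothetical path) would schedule this weekly on the Level 0, 1 and 3 servers.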

Server areas to back up:

The following tables show the critical areas of the servers that we will back up.

1.      11.1.xx.xx (Table 1)

Area                                    Description
/usr/users/xxx/imp/                     All Database stuff
/virtualHosts/, /virtualHosts.lib/      All Programs and Library
/virtualHosts.data/                     All Data files
/virtualHosts.mirror/                   Mirror scripts and daily tar balls
/workCsvs/                              Daily workflow CSVs

2.      11.2.xx.xx (Table 2)

Area                                    Description
/usr/users/wf/imp/, /usr/users/meet/imp/,
/usr/users/meet12/imp/, /usr/users/hp/imp/   All Database stuff
/virtualHosts/, /virtualHosts.lib/      All Programs and Library
/virtualHosts.data/                     All Data files
/virtualHosts.mirror/                   Mirror scripts and daily tar balls

Now, I shall describe each Backup method in detail.

Backup Methods: 

 

LEVEL 0 Backup on Same servers:

The Linux system scheduler invokes the shell scripts that back up the required parts of the servers, as listed in Table 1 and Table 2. The backup is stored in the '/backup/' area of the respective server. This is a daily scheduled process that runs at midnight. This is the Level 0 backup.
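A minimal sketch of such a Level 0 job follows. The script and file names are illustrative, not xyz's actual scripts; /tmp trees stand in for the Table 1/Table 2 areas and for '/backup/' so the sketch runs anywhere.

```shell
#!/bin/sh
# Sketch of the midnight Level 0 job: tar each critical area into
# the backup area with a date-stamped name. Paths are stand-ins.
BACKUP_DIR=/tmp/demo-level0      # '/backup/' on the real servers
STAMP=$(date +%Y%m%d)
mkdir -p "$BACKUP_DIR"

# Stand-ins for /usr/users/xxx/imp/, /virtualHosts/, etc.
AREAS="/tmp/demo-imp /tmp/demo-virtualHosts"
for AREA in $AREAS; do
    mkdir -p "$AREA" && echo sample > "$AREA/file.txt"
    NAME=$(basename "$AREA")
    # One compressed archive per area, e.g. demo-imp-20260101.tgz
    tar -czf "$BACKUP_DIR/$NAME-$STAMP.tgz" -C "$AREA" .
done
```

On the real servers the loop would list the Table 1/Table 2 areas, and a crontab entry such as `0 0 * * * /usr/local/bin/level0_backup.sh` (hypothetical path) would run it nightly at midnight.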

The following table shows the projected growth in data for the various applications at the end of 5 years (we have considered the maximum possible case).

Table 3: 

(Data in MB)   Work Flow (11.2.xx.xx)   Other Intranet apps (11.2.xx.xx)   Intranet Web site, etc (11.1.xx.xx)
Database       300                      300                                2000
Data Files     7000                     5000                               8000
Programs       70                       300                                2000

 LEVEL 1 Stand by System:

We will maintain a Stand by server (11.3.xx.xx) to cope with any unexpected system failure. The setup works as follows: shell scripts on each of the production servers push the daily backups to the Stand by server during off hours, every day. These backups are stored in well defined areas of the Stand by server, from where the Database, Data Files and Programs can later be restored to bring the application up on this server. This switch-over takes a minimum of 1 hour, i.e. the minimum system downtime will be 1 hour.

The entire backup can be found in the '/backup/' area of the Stand by server. It should be noted that we can only bring the system back to the state it was in at the last backup. For example, if the failure happens at 2 pm, we lose the roughly 14 hours of data entered since the last backup, which happened the previous midnight.

One very important thing to remember at this point is that trying to keep the backup fully up to date (hot backups, or a backup every 15 minutes, say) would load the systems heavily while taking backups, transferring data and restoring the systems automatically.
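The nightly push to the Stand by server can be sketched as below. In production the destination would be the Stand by host over SSH, e.g. `scp "$BACKUP_DIR"/*-"$STAMP".tgz backup@11.3.xx.xx:/backup/` (the user name and password-less key setup are assumptions); here a local directory stands in so the logic can be exercised anywhere.

```shell
#!/bin/sh
# Sketch of the Level 1 push: after the midnight Level 0 run, copy
# the day's archives to the Stand by server's '/backup/' area.
# A local directory stands in for the remote host in this demo.
BACKUP_DIR=/tmp/demo-push-src
DEST=/tmp/demo-standby              # 'backup@11.3.xx.xx:/backup/' in production
STAMP=$(date +%Y%m%d)

mkdir -p "$BACKUP_DIR" "$DEST"
echo data > "$BACKUP_DIR/Database-$STAMP.tgz"   # stand-in archive

# Copy only today's archives to the stand-by area
cp "$BACKUP_DIR"/*-"$STAMP".tgz "$DEST"/
```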

 

LEVEL 2 CD based Backup:  

xyz will provide good quality R/W CDs for the CD based backups. The following table summarizes the number of CDs currently required to manage the backups, and the procedures used. Please note that these figures assume normal system growth; we may need to purchase more CDs in the coming years. Given the pace of advances in IT, especially in Compact Disc technology, we should not purchase stock in bulk; the stock should simply be sufficient to meet the standing requirement.

Table 4: 

All three applications follow the same three-part procedure; only the CD counts differ:

  • Programs – daily full backups; the CDs are rotated on alternate days.
  • Database – daily full backups; the CDs are rotated every week so that the latest copy always remains.
  • Data Files – considering their huge size and growth in the coming years: on a starting weekend (Friday/Saturday) we take a complete backup on a specified set of CDs (Set 1); on the following weekdays we take daily incremental backups on separate CDs; the next weekend we take a full backup on a separate set (Set 2); during the second week the daily incremental CDs are rotated; and from the third weekend onwards, Set 1 and Set 2 are rotated.

The CD counts per application:

  • Work Flow on 11.2.xx.xx – 30 CDs: 2 for Programs, 6 for Database, 20 to start with for Data Files.
  • Meeting Software etc on 11.2.xx.xx – 20 CDs: 2 for Programs, 6 for Database, 12 for Data Files.
  • Intranet web site, etc on 11.1.xx.xx – 50 CDs: 15 for Programs, 15 for Database, 20 for Data Files.

 Total number of CDs to be purchased = 100

LEVEL 3 Magnetic Tape based backup:

xyz has purchased two magnetic tape devices (Tandberg 30/60 GB SLR60 SCSI), one for each of the live servers (11.1.xx.xx and 11.2.xx.xx). The daily backups created in the '/backup/' area of each server will be stored on its respective tape device. The setup works as follows: shell scripts on each of the production servers push the daily backups to the respective tape device during off hours, every day.
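The tape push can be sketched as a single tar write to the drive's device node. The device name is an assumption (SCSI tape drives commonly appear as /dev/st0 on Linux); a plain file stands in for the device here so the command can be exercised without hardware.

```shell
#!/bin/sh
# Sketch of the Level 3 job: write the day's '/backup/' area to the
# tape device in one pass. A file stands in for '/dev/st0' here.
BACKUP_DIR=/tmp/demo-tape-src
TAPE=/tmp/demo-tape-device          # '/dev/st0' on the real servers

mkdir -p "$BACKUP_DIR"
echo data > "$BACKUP_DIR/Database.tgz"   # stand-in archive

# Write the whole backup area to the tape; with a real drive, an
# 'mt -f /dev/st0 rewind' would typically precede this step.
tar -cf "$TAPE" -C "$BACKUP_DIR" .
```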

Restoring Backed up data: 

  • Restoring Database  

1. Untar the file '*Database*.tgz' in the '/usr/users/' area using the command

                          gtar -zxvf *Database*.tgz

2. Go inside the ‘impex’ area and run the import. Before running the import, make sure that the tables exist on the Stand by server with the same schema.

3. Remove the tar file.

  • Restoring Data Files

1. Do a test untar of the files ‘*Data200*.tgz’ and/or ‘*virtualHostsMirror*.tgz’ and/or ‘*attendanceCSVs*.tgz’ to see the complete path into which each gets uncompressed. Go to the root (‘/’) of that area.

2. Untar the file ‘*Data200*.tgz’ inside that area.

3. Remove the tar file.

  • Restoring Programs 
    1. Do a test untar of the files ‘*Lib200*.tgz’ and (‘*Scripts*.tgz’ or ‘*virtualHosts200*.tgz’) to see the complete path into which each gets uncompressed. Go to the root (‘/’) of that area.
    2. Untar the files inside their respective areas.
    3. Remove the tar files.
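The restore steps above can be sketched end to end. File names follow the ‘*Database*.tgz’ pattern from the text; the /tmp paths are stand-ins for ‘/usr/users/’ so the sketch runs anywhere, and ‘gtar’ in the text is GNU tar, which is plain ‘tar’ on Linux.

```shell
#!/bin/sh
# End-to-end sketch of the restore procedure. Paths and the archive
# name are illustrative stand-ins, not the production layout.
WORK=/tmp/demo-restore
mkdir -p "$WORK/src" "$WORK/usr-users"
echo dump > "$WORK/src/export.dmp"
tar -czf "$WORK/xyzDatabase.tgz" -C "$WORK/src" .   # stand-in backup archive

# 1. Test untar first: list the contents to see where they will land
tar -tzf "$WORK/xyzDatabase.tgz"

# 2. Untar into the target area (the text's 'gtar -zxvf *Database*.tgz')
tar -xzf "$WORK/xyzDatabase.tgz" -C "$WORK/usr-users"

# 3. Remove the tar file once the restore is verified
rm -f "$WORK/xyzDatabase.tgz"
```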