Zmanda Cloud Backup adds Tokyo as its latest cloud storage location

March 16th, 2011

We are adding support for the Asia Pacific (Tokyo) Region in Zmanda Cloud Backup (ZCB). This is the fifth worldwide location supported by ZCB.

This support provides faster uploads for ZCB users in Japan. Throughput will be significantly higher because of fewer hops along the way and the very high bandwidth connections typically available in Japan. Overall processing will also be faster because of lower latency (expected to be single-digit millisecond latency for most end users in Japan).

Cloud Backup to Three Continents Now Includes Japan

This support enables users to ensure that their data does not leave Japan, e.g. if required for compliance reasons.

In summary, users in Japan now have an effective and scalable solution to back up their Windows filesystems, Microsoft applications, and databases (MySQL, SQL Server, Oracle) to a robust storage cloud.

As an introductory offer to our customers in Japan, we are waiving all transfer and storage charges to the Tokyo location until April 30th, 2011. You pay only the initial setup fee ($4.95) and the pro-rated monthly fee ($4.95 per month). After April 30th, our regular charges will apply, on par with all other supported regions.

There is more on the horizon for our Japanese customers. We will soon offer a fully localized Japanese version of ZCB (the current shipping version has already been tested with Japanese file and folder names). Watch this space for an announcement on that within a few weeks.

Zmanda Cloud Backup with Japanese Files/Folders

MySQL Backup Webinar Series: Scalable backup of live databases

October 14th, 2010


Setting up a good backup and recovery strategy is crucial for any serious MySQL implementation. This strategy can vary from site to site based on various factors, including the size of the database, rate of change, security needs, retention and other compliance policies. In general, MySQL DBAs are also expected to keep the impact on usability and performance of the database as low as possible during backup - i.e., MySQL and its dependent applications should remain hot during backup.

Join MySQL backup experts from Zmanda for two webinars dedicated to hot backup of MySQL:

MySQL Backup Essentials: In this webinar, we will go over best practices for backing up live MySQL databases. We will also cover the Zmanda Recovery Manager (ZRM) for MySQL product in detail, including a walk-through of the configuration and management processes. We will discuss various features of ZRM, including full backups using snapshots, point-in-time recovery, monitoring and reporting.

Register for MySQL Backup Essentials Webinar on November 23rd at 10:00AM PT

MySQL Backup to Cloud: In this webinar, we will focus on backing up MySQL databases running on Windows to the cloud. Cloud storage provides an excellent alternative to backing up to removable media and shipping it to a remote secure site. We will provide a live demonstration of the Zmanda Cloud Backup (ZCB) product backing up MySQL to Amazon S3 storage. ZCB is an easy-to-use cloud backup solution which supports all Windows platforms. We will also discuss recovering MySQL databases in the cloud, creating a radically low-cost disaster recovery solution for MySQL.

Register for MySQL Backup to Cloud Webinar on November 30th at 10:00AM PT

Zmanda @ Oracle OpenWorld 2010

September 7th, 2010


If you are coming to this year’s Oracle OpenWorld 2010, please do visit us at Booth #3824.

We will have our backup solution experts at the show to discuss any of your database or infrastructure backup needs.

When it comes to backing up the various products offered by Oracle, we have several solutions.

We hope to see you at the show!

Go Tapeless - Use Zmanda Cloud Backup for backup and disaster recovery

June 23rd, 2010

If you are in charge of ensuring backup and disaster recovery of critical servers for your business, you have undoubtedly grappled with unwieldy tapes. In this age of digital everything, writing to tapes and then shipping them to a remote location seems like a relic from another era. Advances in cloud-based services, e.g. those offered by Amazon Web Services, provide an excellent alternative to tapes for backup and disaster recovery.

We have been offering an Amazon S3-based cloud backup solution for about three years now. Today we are announcing the third generation of our Zmanda Cloud Backup product. Particularly exciting for me is the support for the Asia Pacific Region.

Cloud Backup to Three Continents

For many of the same reasons that Amazon picked Singapore as their first Asia Pacific Region, Singapore is a great destination to preserve your valuable assets. The performance and robustness of Singapore's Internet connectivity are a major plus for backup and disaster recovery needs.

Backing up your data to the cloud requires several steps. You need to (1) plan what you want to back up and when; (2) extract data out of your live applications, e.g. SQL Server or Exchange; (3) stage this backup image for transfer to the cloud; (4) monitor the transfer for any Internet hiccups and take corrective actions; and (5) delete backup images which have expired per your retention policy. Zmanda Cloud Backup automates these steps through easy GUI-based backup configuration and management. ZCB integrates with S3's REST API to coordinate the transfer of on-premises data to the storage cloud.
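
For illustration, here is a minimal Python sketch of steps (4) and (5) using the boto3 library - not ZCB's actual implementation, which is driven through its GUI; the bucket name and key layout are hypothetical:

import boto3

# Hypothetical bucket; ZCB's real transfer logic is not shown here.
BUCKET = "example-zcb-backups"
s3 = boto3.client("s3", region_name="ap-southeast-1")  # Singapore region

def upload_backup_image(staged_file):
    """Step 4: transfer a staged backup image; boto3 retries transient errors."""
    key = "backups/" + staged_file.rsplit("/", 1)[-1]
    # upload_file performs a managed (multipart, retrying) transfer over
    # S3's REST API - the same API that ZCB integrates with.
    s3.upload_file(staged_file, BUCKET, key)
    return key

def expire_old_backups(keep=30):
    """Step 5: delete backup images that have expired per the retention policy."""
    listing = s3.list_objects_v2(Bucket=BUCKET, Prefix="backups/")
    objects = sorted(listing.get("Contents", []), key=lambda o: o["LastModified"])
    for obj in objects[:-keep]:
        s3.delete_object(Bucket=BUCKET, Key=obj["Key"])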

In the third generation of ZCB we also added support for international character sets, so ZCB is friendly with files and folders named in, for example, Chinese (Simplified or Traditional), Japanese or Korean.

Backup What screenshot - Chinese filenames

Backup What screenshot - Japanese filenames

While a lot of Zmanda's customers back up to local disks or tapes, cloud backup is the fastest-growing part of our business. In many environments, customers are backing up some backup sets to local media and other backup sets to the cloud - with plans to move their entire backup to storage on the cloud in a few years. We have seen this adoption across the board, including in the traditionally conservative financial industry. So, it appears more and more IT managers are daring to go tapeless when it comes to their backup operations!

Disaster Recovery in the Cloud

June 21st, 2010

Most small and medium-sized businesses do not have a formal Disaster Recovery (DR) plan and implementation because of its cumbersome and costly nature. Various factors make DR complex, including: (1) allocation and administration of remote compute and storage resources; (2) the data transport mechanism - e.g. tape shipment or data replication; and (3) application environment synchronization. To make matters worse, regular testing of a DR implementation tends to be complicated, and in many cases not practical.

Cloud Computing provides an excellent means to radically simplify the DR process. This is achieved by backing up your critical applications to a Storage Cloud (e.g. Amazon S3), and making preparation to quickly recover in the nearby Compute Cloud (e.g. Amazon EC2).

We have two solutions for backup and DR in the cloud: Amanda Enterprise (with the Amazon S3 Option) and Zmanda Cloud Backup (ZCB). Amanda Enterprise is meant for environments with heterogeneous systems, whereas ZCB is targeted at small businesses with a handful of Windows servers and desktops.

Setup of Amanda Enterprise for Cloud Based DR

Setup of Zmanda Cloud Backup for Cloud Based DR

The process of setting up DR in the cloud is as follows:

  1. Set up backup process to Amazon S3.
  2. Complete first backup of applications on primary site to S3.
  3. Configure standby VMs on EC2 to match the OS (and patch level) of the corresponding systems on your primary site. For all data storage, use Elastic Block Store (EBS), so you have persistent data across reboots.
  4. Install Zmanda backup software on these standby VMs.
  5. Install the same S3 certificate that is used in step #1 on the standby VMs.
  6. In the case of Amanda Enterprise, set up the AE-DR option to replicate the backup catalog and configuration to the standby VM running the AE server.
  7. Perform full recovery from S3 to standby VMs.
  8. Take a snapshot of the standby VMs.
  9. Shut down the standby VMs.
  10. Optionally, start the standby VMs periodically to repeat steps #6-#8. This reduces the time to recover after a disaster and also tests your DR process (a sketch of automating this drill follows the list).
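
To make the periodic drill concrete, here is a hedged Python/boto3 sketch of automating steps #8 and #9 around a drill run - the instance IDs are placeholders, and the recovery itself (step #7) is assumed to be triggered separately on the standby VMs:

import time
import boto3

STANDBY_INSTANCES = ["i-0123456789abcdef0"]  # hypothetical standby VM IDs
ec2 = boto3.client("ec2")

def dr_drill():
    # Bring the standby VMs up for the drill.
    ec2.start_instances(InstanceIds=STANDBY_INSTANCES)
    ec2.get_waiter("instance_running").wait(InstanceIds=STANDBY_INSTANCES)

    # ... trigger the full recovery from S3 on the standby VMs here ...

    # Step 8: snapshot each standby VM as a machine image.
    for instance_id in STANDBY_INSTANCES:
        ec2.create_image(InstanceId=instance_id,
                         Name="dr-standby-%s-%d" % (instance_id, int(time.time())))

    # Step 9: shut the standby VMs down until the next drill.
    ec2.stop_instances(InstanceIds=STANDBY_INSTANCES)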

If you are considering the cloud for your DR needs, come join us tomorrow (June 22nd) for a webinar: Leveraging the Cloud for Radically Simple and Cost-Effective Disaster Recovery. Noted storage analyst Lauren Whitehouse, from Enterprise Strategy Group, will be joining me.

Taking a Snapshot of a Thousand Dancing Dolphins

April 12th, 2010

An increasing number of large MySQL applications, e.g. social networking and SaaS back-ends, use a distributed MySQL architecture. MySQL data is distributed logically or heuristically on multiple, and in some cases thousands of, real or virtual servers. Backing up such large and dynamic environments presents its own complexities.

In this blog, we will use cluster terminology - but we do not imply that the NDB Cluster storage engine is being used for MySQL. Most implementations use InnoDB for data and MyISAM for the dictionary. The typical architecture for such applications uses database sharding - i.e., shared-nothing partitioning of data across similarly configured nodes.

In most sharded environments, high availability is built in - i.e. the cluster can continue to answer queries and commit transactions for all users in the face of a node failure. This is typically accomplished either by database-level replication or by designing the application so that each row is mirrored on two or more nodes. If MySQL replication is being used, then slaves can be used for load balancing as well - as long as it is acceptable that some clients may not get the latest data from the master node; e.g. a profile update by a user may not be visible to all her friends right away.

But built-in high availability does not do away with the need for setting up a backup and recovery process. Just like RAID does not replace backup, Sharding with redundancy does not replace backup either. The inherent complexity of large scale distributed database environments makes errors (human, system, environmental) more probable. Also, the implied availability of these environments increases the stress during the recovery process.

Here are the backup and recovery needs for such environments; some of these needs conflict with each other:

  • Application managers desire a point-in-time restore which is coordinated across multiple servers.
  • IT managers want configurations to be as identical as possible across all nodes, so the process of replacing nodes stays simple.
  • Depending on the application, the retention policy could span several years.
  • The overall application should be able to recover from multiple node failures, human errors or sabotage, and geographic problems (disaster, connectivity, etc.).

Zmanda Recovery Manager for MySQL is designed to meet these challenging needs. It uses various backup methods for backing up individual shards, and manages backup and recovery of the overall MySQL environment.

For point-in-time restore capability, ZRM uses MySQL binary logs. In heavily update-oriented environments, these binary logs can grow very large. In such environments, if the organization's Recovery Point Objective (RPO) requires the ability to recover to any point within the past few weeks, it may not be possible to store these binary logs on the MySQL node itself. In any case, to be able to recover in the face of complete node failures, these logs need to be stored outside of the node. So, a storage environment which is physically or logically shared among the nodes is typically a requirement for storing the backup images. This shared secondary storage does not violate the shared-nothing principles of sharding, because it is not in the path of the actual application; it is out-of-band storage accessed and managed by the backup software. Also note that ZRM can automatically remove the binary logs from the MySQL node once they have been copied over to their archive location.
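
ZRM automates this archive-then-purge cycle. Purely as an illustration of the idea (this is not ZRM code; paths and credentials are made up), a sketch in Python might look like:

import shutil
import mysql.connector

BINLOG_DIR = "/var/lib/mysql"                 # where MySQL writes binary logs
ARCHIVE_DIR = "/mnt/shared-backup/binlogs"    # out-of-band shared storage

def archive_and_purge_binlogs():
    conn = mysql.connector.connect(host="localhost", user="backup", password="...")
    cur = conn.cursor()
    cur.execute("FLUSH BINARY LOGS")   # rotate, so the previous log is closed
    cur.execute("SHOW BINARY LOGS")
    logs = [row[0] for row in cur.fetchall()]
    # Copy every closed log to the shared archive location...
    for name in logs[:-1]:
        shutil.copy2("%s/%s" % (BINLOG_DIR, name), ARCHIVE_DIR)
    # ...then let MySQL delete the local copies it no longer needs.
    cur.execute("PURGE BINARY LOGS TO '%s'" % logs[-1])
    conn.close()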

Taking a snapshot of multiple MySQL databases

ZRM can use two techniques to allow for point-in-time recovery of distributed MySQL environments: Coordinated Backups or Coordinated Restores:

Coordinated backup provides a backup image of all nodes consistent to a specific event - e.g. all rows up to a specific Global Sequence Number (GSN) are backed up, assuming a GSN exists in the application. Another option is to create a checkpoint event specifically for backup purposes. Of course, having a GSN or a checkpoint event may create periodic brief hiccups, which may or may not be acceptable for the business needs. But this process creates the cleanest backup images for the whole application.
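
For illustration, one way to create such a checkpoint event is to briefly quiesce every shard and record its binary log position. The following Python sketch shows the idea (hypothetical hosts and credentials, no parallelism or error handling - and not ZRM's implementation):

import mysql.connector

SHARDS = ["shard1.example.com", "shard2.example.com"]  # hypothetical nodes

def coordinated_checkpoint():
    conns = [mysql.connector.connect(host=h, user="backup", password="...")
             for h in SHARDS]
    positions = {}
    try:
        # Briefly quiesce every shard so all backups share one logical point.
        for conn in conns:
            conn.cursor().execute("FLUSH TABLES WITH READ LOCK")
        # Record each shard's binary log position as the checkpoint event.
        for host, conn in zip(SHARDS, conns):
            cur = conn.cursor()
            cur.execute("SHOW MASTER STATUS")
            log_file, log_pos = cur.fetchone()[:2]
            positions[host] = (log_file, log_pos)
        # (Kick off the per-shard backups here, then release the locks.)
    finally:
        for conn in conns:
            conn.cursor().execute("UNLOCK TABLES")
            conn.close()
    return positions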

Coordinated restore allows each individual node to be backed up independently of the others. This eliminates the need for a backup checkpoint event. However, at recovery time more processing is required to make sure all nodes are recovered to a point which is logically acceptable to the higher-level application. ZRM can be scripted to identify this point in the backed-up binary logs for every shard. Also, the visual log analyzer feature of ZRM helps DBAs efficiently search for these points. Note that the shards may not all be recovered to their state as it existed at the exact same time; however, they should be recovered to a state which is acceptable for the overall application. Keeping the clocks of the nodes synchronized will also help DBAs identify points of recovery across nodes, by making events easy to correlate.

Being able to back up a smaller shard instead of the whole dataset provides opportunities from both technical and logical perspectives. Since the size of each shard may be relatively small, a particular backup method may be acceptable even though it would not have been if the whole dataset were in one monolithic database. If data is distributed among shards using some external criterion (e.g. users of each zip code go to a particular shard), then backup images of each shard may be individually usable by an application. ZRM creates portable backup images - a key need for backing up shards - so backups from one node can be restored on another.

If recovery from a site-wide disaster is also an objective, then suitable backup images need to be securely transported to the remote site. This can be done via the new Disaster Recovery Option now available for ZRM. This option replicates backup images, the backup catalog and configuration data to the remote site - enabling full disaster recovery on an as-needed basis. Individual nodes need not be replicated, saving enormous hassle and cost.

If your show is backed by a pod of dancing dolphins, a well implemented and documented backup and disaster recovery process is a good investment.

What’s New in Amanda Community: Postgres Backups

March 25th, 2010

Second installment in a series of posts about recent work on Amanda.

The Application API allows Amanda to back up structured data — data that cannot be handled well by dump or tar. Most databases fall into this category, and with the 3.1 release, Amanda Community Edition ships with ampgsql, which supports backing up Postgres databases using the software’s point-in-time recovery mechanism.

The how-to for this application is on the Amanda wiki.

Operation

Postgres, like most “advanced” databases, uses a logging system to ensure consistency even in the face of (some) hardware failures. In essence, it writes every change that it makes to the database to the logfile before changing the database itself. This is similar to the operation of logging filesystems. The idea is that, in the face of a failure, you just replay the log to re-apply any potentially corrupted changes.

Postgres calls its log files WAL (write-ahead log) files. By default, they are 16MB. Postgres runs a shell command to “archive” each logfile when it is full.

So there are two things to back up: the data itself, which can be quite large, and the logfiles. A full backup works like this (a code sketch follows the list):

  • Execute PG_START_BACKUP(ident) with some unique identifier.
  • Dump the data directory, excluding the active WAL logs. Note that the database is still in operation at this point, so the dumped data, taken alone, will be inconsistent.
  • Execute PG_STOP_BACKUP(). This archives a text file with the suffix .backup that indicates which WAL files are needed to make the dumped data consistent again.
  • Dump the required WAL files.
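
Here is a minimal Python sketch of that sequence, using psycopg2 and GNU tar. ampgsql's actual implementation differs; the data directory path and dump destination are illustrative:

import subprocess
import psycopg2

DATA_DIR = "/var/lib/pgsql/data"   # illustrative Postgres data directory

def full_backup(label, dump_path="/tmp/pgdata-full.tar"):
    conn = psycopg2.connect("dbname=postgres")
    conn.autocommit = True
    cur = conn.cursor()
    cur.execute("SELECT PG_START_BACKUP(%s)", (label,))
    try:
        # Dump the data directory while the database stays live; exclude the
        # active WAL directory (pg_xlog in the Postgres versions of that era).
        subprocess.check_call(["tar", "-cf", dump_path,
                               "--exclude=pg_xlog", "-C", DATA_DIR, "."])
    finally:
        # Archives the .backup history file naming the WAL files still needed.
        cur.execute("SELECT PG_STOP_BACKUP()")
    conn.close()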

An incremental backup, on the other hand, only requires backing up the already-archived WAL files.

A restore is still a manual operation — a DBA would usually want to perform a restore very carefully. The process is described on the wiki page linked above, but boils down to restoring the data directory and the necessary WAL files, then providing postgres with a shell command (the restore_command setting in recovery.conf) to “pull” the WAL files it wants. When postgres next starts up, it will automatically enter recovery mode and replay the WAL files as necessary.

Quiet Databases

On older Postgres versions, making a full backup of a quiet database is actually impossible. After PG_STOP_BACKUP() is invoked, the final WAL file required to reconstruct a consistent database is still “in progress” and thus not archived yet. Since the database is quiet, postgres does not get any closer to archiving that WAL file, and the database hangs (or, in the case of ampgsql, times out).

Newer versions of Postgres do the obvious thing: PG_STOP_BACKUP() “forces” an early archiving of the current WAL file.

The best solution for older versions is to make sure transactions are being committed to the database all the time. If the database is truly silent during the dump (perhaps it is only accessed during working hours), then this may mean writing garbage rows to a throwaway table:

CREATE TABLE push_wal AS SELECT * FROM GENERATE_SERIES(1, 500000);
DROP TABLE push_wal;

Note that using CREATE TEMPORARY TABLE will not work, as temporary tables are not written to the WAL file.

As a brief encounter in #postgres taught me, another option is to upgrade to a more modern version of Postgres!

Log Incremental Backups

DBAs and backup admins generally want to avoid making frequent full backups, since they’re so large. The usual pattern is to make a full backup and then dump the archived log files on a nightly basis for a week or two. As the log files are dumped, they can be deleted from the database server, saving considerable space.

In Amanda terms, each of these dumps is an incremental, and is based on the previous night’s backup. That means that the dump after the full is level 1, the next is level 2, and so on. Amanda currently supports 99 levels, but this limit is fairly arbitrary and can be increased as necessary.

The problem in ampgsql, as implemented, is that it allows Amanda to schedule incremental levels however it likes. Amanda considers a level-n backup to be everything that has changed since the last level-n-1 backup. This works great for GNU tar, but not so well for Postgres. Consider the following schedule:

Monday level 0
Tuesday level 1
Wednesday level 2
Thursday level 1

The problem is that the dump on Thursday, as a level 1, needs to capture all changes since the previous level 0, on Monday. That means that it must contain all WAL files archived since Monday, so those WAL files must remain on the database server until Thursday.

The fix to this is to only perform level 0 or level n+1 dumps, where n is the level of the last dump performed. In the example above, this means either a level 0 or level 3 dump on Thursday. A level 0 is a full backup and requires no history. A level 3 would only contain WAL files archived since the level 2 dump on Wednesday, so any WAL files before that could be deleted from the database server.
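
A few lines of Python make the retention arithmetic concrete (an illustration of the rule, not ampgsql code):

def wal_needed_since(levels):
    """levels[i] is the dump level on day i; the final entry is the dump being
    planned. Return the oldest day whose archived WAL files must still be on
    the database server for that dump to be complete."""
    n = levels[-1]
    for day in range(len(levels) - 2, -1, -1):
        if levels[day] < n:    # the most recent dump at a lower level
            return day
    return 0

print(wal_needed_since([0, 1, 2, 1]))  # 0: Thursday's level 1 reaches back to Monday
print(wal_needed_since([0, 1, 2, 3]))  # 2: a level 3 needs only WAL since Wednesday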

Summary

A powerful open source database system and the open source ampgsql plugin combine to produce well-protected storage for your mission-critical data. We will continue to develop additional Application API plugins, and encourage you and other members of the community to do the same!

What’s New in Amanda: Automated Tests

March 12th, 2010

This is the first in what will be a series of posts about recent work on Amanda. Amanda has a reputation as old and crusty — not so! Hopefully this series will help to illustrate some of the new features we’ve completed, and what’s coming up. I’m cross-posting these on my own blog, too.

Among open-source applications, Amanda is known for being stable and highly reliable. To ensure that Amanda lives up to this reputation, we’ve constructed an automated testing framework (using Buildbot) that runs on every commit. I’ll give some of the technical details after the jump, but I think the numbers speak for themselves. The latest release of Amanda (which will soon be 3.1.0) has 2936 automated tests!

These tests range from highly-focused unit tests, for example to ensure that all of Amanda’s spellings of “true” are parsed correctly, all the way up to full integration: runs of amdump and the recovery applications.

The tests are implemented with Perl’s Test::More and Test::Harness. The result for the current trunk looks like this:

=setupcache.....................ok
Amanda_Archive..................ok
Amanda_Changer..................ok
Amanda_Changer_compat...........ok
Amanda_Changer_disk.............ok
Amanda_Changer_multi............ok
Amanda_Changer_ndmp.............ok
Amanda_Changer_null.............ok
Amanda_Changer_rait.............ok
Amanda_Changer_robot............ok
Amanda_Changer_single...........ok
Amanda_ClientService............ok
Amanda_Cmdline..................ok
Amanda_Config...................ok
Amanda_Curinfo..................ok
Amanda_DB_Catalog...............ok
Amanda_Debug....................ok
Amanda_Device...................ok
        211/428 skipped: various reasons
Amanda_Disklist.................ok
Amanda_Feature..................ok
Amanda_Header...................ok
Amanda_Holding..................ok
Amanda_IPC_Binary...............ok
Amanda_IPC_LineProtocol.........ok
Amanda_Logfile..................ok
Amanda_MainLoop.................ok
Amanda_NDMP.....................ok
Amanda_Process..................ok
Amanda_Recovery_Clerk...........ok
Amanda_Recovery_Planner.........ok
Amanda_Recovery_Scan............ok
Amanda_Report...................ok
Amanda_Tapelist.................ok
Amanda_Taper_Scan...............ok
Amanda_Taper_Scan_traditional...ok
Amanda_Taper_Scribe.............ok
Amanda_Util.....................ok
Amanda_Xfer.....................ok
amadmin.........................ok
amarchiver......................ok
amcheck.........................ok
amcheck-device..................ok
amcheckdump.....................ok
amdevcheck......................ok
amdump..........................ok
amfetchdump.....................ok
amgetconf.......................ok
amgtar..........................ok
amidxtaped......................ok
amlabel.........................ok
ampgsql.........................ok
        40/40 skipped: various reasons
amraw...........................ok
amreport........................ok
amrestore.......................ok
amrmtape........................ok
amservice.......................ok
amstatus........................ok
amtape..........................ok
amtapetype......................ok
bigint..........................ok
mock_mtx........................ok
noop............................ok
pp-scripts......................ok
taper...........................ok
All tests successful, 251 subtests skipped.
Files=64, Tests=2936, 429 wallclock secs (155.44 cusr + 31.48 csys = 186.92 CPU)

The skips are due to tests that require external resources - tape drives, database servers, etc. The first part of the list contains tests for almost all Perl packages in the Amanda namespace. These are generally unit tests of the new Perl code, although some tests integrate several units due to limitations of the interfaces. The second half of the list is tests of Amanda command-line tools. These are integration tests, and ensure that all of the documented command-line options are present and working, and that the tool’s behavior is correct. The integration tests are necessarily incomplete, as it’s simply not possible to test every permutation of this highly flexible package.

The =setupcache test at the top is interesting: because most of the Amanda applications need some dumps to work against, we “cache” a few completed amdump runs using tar, and re-load them as needed during the subsequent tests. This speeds things up quite a bit, and also removes some variability from the tests (there are a lot of ways an amdump can go wrong!).

The entire test suite is run at least 54 times for every commit by Buildbot. We test on 42 different architectures - about a dozen Linux distros, in both 32- and 64-bit varieties, plus Solaris 8 and 10, and Darwin-8.10.1 on both x86 and PowerPC. The remaining tests are for special configurations - server-only, client-only, special runs on a system with several tape drives, and so on.

Fast Backups of MySQL Running on Amazon EC2

January 23rd, 2010

If you are running your MySQL databases on the Amazon EC2 compute cloud, Zmanda Recovery Manager (ZRM) for MySQL can perform fast full backups of these databases by using Elastic Block Store (EBS) snapshots. ZRM takes only a momentary read lock on the MySQL database during the creation of the snapshot, in order to ensure consistency of the backed-up database archive. MySQL backups using Amazon EBS snapshots are differential, meaning that only the blocks that have changed since your last full backup (via EBS snapshot) will be saved. For example, if you have a database with 100 GB of data, but only 5 GB has changed since your last snapshot, only the 5 additional GB of snapshot data will be stored back to Amazon S3 during the current full backup run.
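
As an illustration of the snapshot step (this is not ZRM's code; the volume ID and credentials are placeholders), the key point is that the read lock only needs to be held while the snapshot is initiated - EBS then copies the changed blocks to S3 in the background:

import boto3
import mysql.connector

def snapshot_mysql_volume(volume_id="vol-0123456789abcdef0"):
    db = mysql.connector.connect(host="localhost", user="backup", password="...")
    cur = db.cursor()
    cur.execute("FLUSH TABLES WITH READ LOCK")   # momentary quiesce
    try:
        ec2 = boto3.client("ec2")
        # create_snapshot returns as soon as the point-in-time snapshot is
        # initiated, so the lock is held only for a moment.
        snap = ec2.create_snapshot(VolumeId=volume_id,
                                   Description="MySQL full backup snapshot")
    finally:
        cur.execute("UNLOCK TABLES")
        db.close()
    return snap["SnapshotId"]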

EC2 to S3 MySQL backup diagram

ZRM automatically deletes EBS snapshots (containing full backups of MySQL) according to the configured retention policy. Just like other snapshot-based full backups, ZRM intelligently correlates EBS snapshots with incremental backups using MySQL logs, enabling you to recover your MySQL instances running on EC2 to any point in time.
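
Retention enforcement amounts to listing the snapshots of the backed-up volume and deleting all but the newest few. ZRM does this automatically; the sketch below is hypothetical:

import boto3

def expire_snapshots(volume_id, keep=4):
    ec2 = boto3.client("ec2")
    snaps = ec2.describe_snapshots(
        OwnerIds=["self"],
        Filters=[{"Name": "volume-id", "Values": [volume_id]}])["Snapshots"]
    snaps.sort(key=lambda s: s["StartTime"])
    # Delete everything older than the newest `keep` full backups.
    for snap in snaps[:-keep]:
        ec2.delete_snapshot(SnapshotId=snap["SnapshotId"])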

Backups made using EBS snapshots can be recovered on the original EC2 instance or on a new EC2 instance. This also provides a quick and convenient mechanism to instantiate new MySQL database servers based on the database state from a desired point-in-time.

ZRM can run on the same EC2 instance as the MySQL database. Alternatively, if you have multiple EC2 instances with MySQL databases, you can run ZRM on one centralized EC2 instance dedicated to backup purposes. In this case, backup configuration and management for all MySQL databases is performed via the Zmanda Management Console from this centralized backup server.

We have created an Amazon Machine Image (AMI) with ZRM pre-configured. This makes implementation of a MySQL backup solution on the cloud even simpler. We have used the “EC2 Small Instance” - which is powerful enough to back up most MySQL workloads in the cloud. This also makes it a very cost-effective option. This AMI is available to all ZRM customers as part of the ZRM Enterprise subscription. You will need to create your own Amazon EC2 account, and pay the standard per-hour price to Amazon to run an instance based on this AMI. Note that you can configure your backup server instance to run only during the backup window. So, if you are backing up your databases once a week, and your backups take less than an hour, then you can have this instance up only during that hour. EC2 pricing is per instance-hour consumed from the time an instance is launched until it is terminated. Each partial instance-hour consumed will be billed as a full hour. In addition to the EC2 compute capacity, you will pay standard storage charges for Amazon S3 (to store EBS snapshots).

Join us on January 28th for a webinar on MySQL Backups (hosted by Sun/MySQL). Along with an introduction to Zmanda Recovery Manager, we will also discuss backing up MySQL applications on the cloud, and demonstrate the new ZRM AMI.

Red Hat Enterprise Linux and Amanda Enterprise: IT Manager’s Backup Solution

January 14th, 2010

A backup server represents a very important component of any IT infrastructure. You need to pick the right components to implement a scalable, robust and secure backup server. The choice of the operating system has crucial implications. Red Hat Enterprise Linux (RHEL) provides many of the features needed from an ideal OS for a backup server. Some of these include:

Virtualization: RHEL includes a modern hypervisor (Red Hat Enterprise Virtualization Hypervisor) based on Kernel-based Virtual Machine (KVM) technology. The Amanda backup server can run as a virtual machine on this hypervisor, and this virtual backup server can be brought up as needed. This provides optimal resource management; e.g. you can bring up the backup server just for the backup window or for restores. A virtualized backup server also makes it much easier to change resource levels depending on business needs, e.g. if more oomph is needed from the backup server prior to a data center move.

High I/O Throughput: A backup server generates huge I/O load, typically characterized by large sequential writes. RHEL, both as a real and a virtual system, provides the high I/O throughput needed for a backup server workload. RHEL 5 allows for switching I/O schedulers on the fly, so a backup administrator can fine-tune I/O activity to match the higher-level function (e.g. write-heavy backups vs. read-heavy restores).
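
For example, the scheduler can be inspected and switched at runtime through the sysfs interface (run as root; the device name is an example). A small Python sketch:

def set_io_scheduler(device="sda", scheduler="deadline"):
    path = "/sys/block/%s/queue/scheduler" % device
    with open(path) as f:
        print("before:", f.read().strip())   # e.g. "noop anticipatory deadline [cfq]"
    with open(path, "w") as f:
        f.write(scheduler)                   # takes effect immediately

# e.g. favor large sequential writes during the backup window:
# set_io_scheduler("sda", "deadline")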

Security: Securing a backup server is critical in any overall IT security planning. In a targeted attack, a backup server provides a juicy target, because all of the data an organization deems important can be had from one place. Security-Enhanced Linux (SELinux) in RHEL implements a variety of security policies, including U.S. Department of Defense-style mandatory access controls, through the use of Linux Security Modules (LSM) in the Linux kernel. Amanda supports the RHEL SELinux configuration, allowing users to run the backup server in a secure environment.

Scalable Storage: Storage technologies built into RHEL provide the scalability needed from backup storage. The ext3 filesystem supports file systems of up to 16TB. Logical Volume Manager (LVM) allows backup storage to sit on a pool of devices which can be extended when needed. System administrators can also leverage Global File System (GFS) to give the backup server direct access to the data to be backed up, bypassing the production network.

Compatibility: RHEL is found on the compatibility matrix of virtually any modern secondary storage device - whether it be a tape drive, tape library or NAS device. RHEL also supports a wide variety of SAN architectures, including iSCSI and Fibre Channel. This, along with Amanda’s use of native drivers to access secondary media, gives IT managers the widest choice in the market for devices to store backup archives.

Manageability: An easy update mechanism, e.g. using yum with the Red Hat Network, makes it easier for the administrator to keep the backup server updated with the latest fixes (including security patches). Amanda depends on some of the system libraries and tools to perform backup and recovery operations. A system administrator can pare down a RHEL environment to the bare-minimum set of packages needed for Amanda, and then use RHN to keep these packages up-to-date.

Long Retention Lifecycle: Many organizations need to retain their backup archives for several years for business or compliance reasons. Each version of RHEL comes with seven years of support. This, combined with the open formats used by Amanda Enterprise, makes it practical for IT managers to implement truly long-term retention policies, with confidence that they will be able to recover their data several years from now.

In summary, if you are in the process of making a choice for your backup server, RHEL should certainly be on the short list for operating systems, and (yes, we are biased) Amanda on the short list for backup software. We will discuss this combination in detail in a webinar on January 21st. Red Hat is warming up this webinar by offering a $10 Starbucks card to every attendee. Join us!