Telcos are typically concerned with availability, and large companies have stringent availability requirements for both services and data. Information today is stored on computers rather than in hard copy, so those computers need to stay operational around the clock; unlike books, they depend on power and storage media to deliver information to consumers. At the moment, SSDs are considered more reliable and durable than HDDs, but they are often too expensive for typical cloud services that must store and process terabytes of data.
In the early days, hard drives were expensive, physically large, and limited in capacity. As centralized data storage became a requirement and tape-based storage was no longer sufficient, engineers began to arrange multiple hard drives to simulate a single large one. Availability started to become an issue, and companies demanded access to their data whenever possible. Networking progressively became an integral component of computing infrastructure. More and more people started to think about data access and backup, especially after their simple data center went down for the first time.
In this situation, speed of access wasn't the priority. Reliability and availability were the bigger issues, because it could take days for a center to become operational again after a failure. Each workstation depended on the others to keep the whole system running: lose one workstation and it was possible to lose the entire network. Mesh and star topologies could reduce reliance on a single monolithic system, but they wouldn't mean a thing if multiple workstations failed.
Only after network stability improved did users begin to demand more storage and faster access. Availability was provided by early forms of backup, and then the Microsoft Distributed File System was introduced: duplicates of data could be created automatically, so if one file server failed, others could provide the same service. Eventually the Internet and the WWW came along, and the rest is history. Thanks to the redundancy provided by these backup systems, remote services, data, and applications arrived with a vengeance.
Today we expect high availability, and consumers can get pretty sore when their applications and data are unreachable. Cloud services were eventually introduced to deliver information that is effectively available everywhere. With redundancy, a simple web browser can reach that information 24/7, year-round. Add today's sophisticated smartphones and tablets, and it becomes clear why infrastructure must be highly available and reliable through redundancy.
In fact, redundancy should be seen as the path to resilience, and this should sound straightforward and easy to understand. Network designers incorporate redundancy in every part of the infrastructure. Redundancy helps us overcome hardware limitations that service providers and manufacturers can only minimize, never eliminate; we can't expect a server to work perfectly and last forever under all conditions. Redundancy is, ultimately, how we achieve a high degree of reliability and availability.
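The failover idea behind redundancy can be sketched in a few lines: keep the same data on several servers and fall back to the next one when a read fails. This is a minimal illustration only; the replica names and reader callables here are hypothetical, not any particular product's API.

```python
# Minimal sketch of redundancy through failover: try each replica in
# turn and return the first successful read. Replica names and the
# reader callables are hypothetical, for illustration only.

def read_with_failover(replicas):
    """Return data from the first replica that responds."""
    errors = []
    for name, read_fn in replicas:
        try:
            return read_fn()  # success: one healthy replica is enough
        except Exception as exc:  # replica down or unreachable
            errors.append((name, exc))
    raise RuntimeError(f"all replicas failed: {errors}")

# Usage: two replicas hold the same file; the primary is down,
# so the read transparently falls back to the secondary.
def primary():
    raise ConnectionError("primary file server offline")

def secondary():
    return b"report.txt contents"

data = read_with_failover([("primary", primary), ("secondary", secondary)])
```

The point of the sketch is that the client never needs to know which server answered; as long as one replica survives, the service stays available.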