For financial firms, the data center of the future will succeed only if it lets them adapt to specific business operational characteristics (performance, cost, regulatory and others) for high-velocity electronic and algorithmic trading, risk management, and complex synthetic hedging strategies. This will require a value chain platform that executes in real time, delivering greater performance at lower cost than today's data centers.
To get there, firms have to understand the design limitations created by past decisions. Previous data center designs have produced complexity, waste, performance barriers and cost models that simply do not work. Without understanding and transparency about those past decisions, infrastructure will remain misaligned with business needs. Critical design limitations include:
- Supply-Driven Management: Most data center infrastructure is designed and managed from the bottom up. The typical approach is to standardize, partition, allocate and implement a vanilla solution attached to the network based on the topology of the data center floor, then provision for peak workloads. Business workload and service requirements such as performance, price and efficiency are never factored in, resulting in inconsistent service delivery and a misalignment of needs.
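The cost of provisioning for peak can be seen in a minimal capacity-planning sketch. The hourly workload profile, the 20 percent headroom figure and the function names below are all illustrative assumptions, not data from any real trading floor:

```python
# A minimal sketch contrasting supply-driven (provision for peak) with
# demand-driven capacity planning. All workload numbers are hypothetical.

hourly_load = [40, 35, 30, 28, 30, 45, 70, 95,      # server-equivalents
               120, 140, 150, 145, 130, 125, 135, 150,  # needed per hour
               140, 110, 85, 70, 60, 55, 50, 45]

def peak_provisioned(load):
    """Supply-driven: size the pool for the worst hour, all day long."""
    cap = max(load)
    return [cap] * len(load)

def demand_driven(load, headroom=0.2):
    """Demand-driven: track the workload with a fixed safety margin."""
    return [int(h * (1 + headroom)) for h in load]

def utilization(load, capacity):
    """Fraction of provisioned capacity that actually does work."""
    return sum(load) / sum(capacity)

peak_cap = peak_provisioned(hourly_load)
dyn_cap = demand_driven(hourly_load)

print(f"peak-provisioned utilization: {utilization(hourly_load, peak_cap):.0%}")
print(f"demand-driven utilization:    {utilization(hourly_load, dyn_cap):.0%}")
```

With these assumed numbers, sizing for the single worst hour leaves the pool under 60 percent utilized across the day, while tracking demand with a safety margin keeps utilization above 80 percent; the gap is idle capital expense.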
- One Size Fits All: Data center infrastructure and vendor strategy are usually built around a perceived standardized footprint. The problem is that this footprint is typically designed bottom-up, with little or no correlation to the workflows, workloads, information, content and connectivity requirements, or to the competitive needs of the business. Such disconnects result in poor performance, unnecessary cost and waste, and agility problems for both the business and IT.
- Spaghetti Transaction Flow: Transaction flow across traditional infrastructures is designed without regard for the proximity of the devices that comprise a service unit. Compute, memory, input/output (I/O) fabric, disk storage and connectivity to external feeds are placed in terms of floor layout -- not in terms of service delivery. This approach can degrade performance by a factor of 30 and generates unnecessary network traffic and bandwidth usage, causing return on equity to suffer.
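The proximity effect can be sketched with a toy placement model. The tier latencies, component names and floor coordinates below are assumptions chosen only to make the comparison concrete:

```python
# A minimal sketch of why device proximity matters for a transaction path.
# Latency figures and placements are illustrative assumptions.

# One-way latency (microseconds) per network tier a hop must cross.
TIER_LATENCY_US = {"same_rack": 2, "same_row": 10, "cross_floor": 60}

# A transaction touches these components in order (hypothetical path).
PATH = ["feed_handler", "compute", "io_fabric", "storage", "compute"]

def hop_latency(loc_a, loc_b):
    """Classify a hop by how far apart the two devices sit on the floor."""
    if loc_a == loc_b:
        return TIER_LATENCY_US["same_rack"]
    if loc_a[0] == loc_b[0]:                 # same row, different rack
        return TIER_LATENCY_US["same_row"]
    return TIER_LATENCY_US["cross_floor"]

def path_latency(placement):
    """Total latency of one transaction across consecutive hops."""
    return sum(hop_latency(placement[a], placement[b])
               for a, b in zip(PATH, PATH[1:]))

# Layout-driven: each resource class pooled in its own area of the floor.
pooled = {"feed_handler": (0, 0), "compute": (3, 5),
          "io_fabric": (1, 2), "storage": (5, 1)}

# Service-unit design: all devices for this service share one rack.
colocated = {c: (0, 0) for c in pooled}

print(f"pooled layout:   {path_latency(pooled)} us per transaction")
print(f"co-located unit: {path_latency(colocated)} us per transaction")
```

Under these assumed numbers, every hop in the pooled layout crosses the floor and the transaction path runs 30 times slower than the co-located service unit, consistent with the order-of-magnitude penalty described above.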
- Definition of Insanity: Conducting the same action time after time and expecting a different outcome. The typical data center layout persists in homogeneously pooling each class of resource -- servers are grouped by class into multiple pools; file and block storage sit in their own areas of the floor; network load balancers, switches and routers are pooled and deployed across various areas of the data center. This approach is not designed for business impact, optimal workload throughput, time to provision, or ideal space and power usage. With this layout, the average provisioning cycle is measured in weeks or months, versus the minutes or days required to provision, troubleshoot and meet the needs of the business.