Infrastructure

Becca Lipman
Commentary

Updating the Datacenter: Monitoring the Gradual Shift

To gradually virtualize legacy infrastructure, institutions first need a detailed view of performance across systems.

IT loves the next new, shiny object and is fascinated by technical capability, but as an industry, financial services remains one of the most fundamentally resistant to change. Firms are adopting new infrastructure without retiring the old, and without fundamentally changing how they approach computing. As a result, many of the newest technologies remain reliant on the performance and availability of aging legacy infrastructure.

John Gentry, vice president of marketing and alliances at Virtual Instruments, says, "If you look at a banking institution, it may launch a new smartphone application that gives access to an internal transfer of funds, and it may put that on new scale-out architecture, but in reality the core of the compute that pulls money and balances accounts is still on the legacy infrastructure. There's too much risk in trying to move that."

Faced with an ever-accelerating pace of innovation, everyone is eager to make the most economically sound and future-proof decisions about infrastructure. But the complexity of existing systems makes it risky to move workloads to more efficient networks without impacting the performance of mission-critical systems.

In simpler times, if a company wanted to move infrastructure out of a datacenter to a net-new virtual or cloud environment, Gentry explains, the preferred approach was a massive overbuild: stand up an entirely new landing point for the storage, migrate everything over wholesale, and retire the legacy. It was arduous and inefficient, but ultimately better than risking a shutdown of operations for even a moment.

However, the sheer growth of application portfolios (numbering in the thousands), compounded by the growth of data that needs to be stored, means the overbuild approach is no longer manageable. Datacenters are too large, he explains: "As they look at modernizing, they realize they can no longer overbuild, and there is the inflection point for a lot of the new technologies that are easing that migration and abstracting the underlying physical infrastructure."

Given these difficulties, gradual upgrades of financial datacenters are the new approach. According to Gentry, that opens the door for issues that impact performance, and IT teams are hard-pressed to guarantee system availability during the transfer of virtualized workloads.

"There is a shift happening where you used to have experts in various silos, like a storage team or server team, who knew the best technologies in their domain and would build them out to industry best-practices, but weren't necessarily concerned with performance of the overall system. They also had monitoring for each silo, and the reality is that silo-based approach is now breaking down. When you look at going to cloud and going to truly agile computing, those individuals' views are no longer sufficient to give the business visibility into whether or not they are appropriately leveraging the infrastructure that's being deployed, and whether it's being deployed at the appropriate cost or tier to support the business."

Gentry believes those challenges are becoming increasingly acute, and the financial risk associated with performance management is becoming increasingly pressing to CFOs, such that they are leaning on CIOs to reduce cost while continuing to maintain capacity and availability.

What's lacking is visibility into the end-to-end system, says Gentry. To get it, IT teams are now leveraging infrastructure performance management tools for a vendor- and domain-agnostic view into the performance of systems across legacy physical infrastructure as well as modern virtualized and cloud-based infrastructure.
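Gentry doesn't name a specific product, but the core idea of rolling silo metrics up into one end-to-end picture can be sketched in a few lines. Everything below is illustrative: the tier names, the latency samples, and the `end_to_end_view` helper are assumptions, not any vendor's API; in a real deployment the samples would come from taps or agents in each domain.

```python
from statistics import mean

# Hypothetical per-tier latency samples (ms) along one transaction path.
# Tier names and numbers are invented for illustration.
tier_latency_ms = {
    "app_server":    [4.1, 3.9, 4.3],
    "san_fabric":    [0.6, 0.7, 0.6],
    "storage_array": [7.8, 8.1, 12.4],
}

def end_to_end_view(samples):
    """Roll per-silo latency metrics up into one end-to-end view."""
    per_tier = {tier: mean(vals) for tier, vals in samples.items()}
    total = sum(per_tier.values())
    # Each tier's share of total latency shows which silo dominates
    # the path -- the visibility a single silo's monitor can't give.
    shares = {tier: avg / total for tier, avg in per_tier.items()}
    return total, per_tier, shares

total, per_tier, shares = end_to_end_view(tier_latency_ms)
print(f"end-to-end avg: {total:.1f} ms")
for tier, share in sorted(shares.items(), key=lambda kv: -kv[1]):
    print(f"  {tier}: {per_tier[tier]:.1f} ms ({share:.0%})")
```

The point of the rollup is that no single silo's monitor would flag a problem here, yet the combined view immediately shows which tier dominates the transaction path.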

With complete oversight of the environment, teams can use performance monitoring tools to map out how applications interconnect and identify where resources are being used inefficiently. More importantly, by measuring performance, the tools can react to status changes during a system migration.
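Reacting to status changes during a migration usually amounts to comparing live measurements against a pre-migration baseline. A minimal sketch of that check, with invented workload names and an assumed 25% tolerance threshold (no real product's defaults are implied):

```python
# Illustrative only: flag workloads whose live latency degrades past a
# tolerance of their pre-migration baseline during a migration window.

def regression_alerts(baseline_ms, live_ms, tolerance=1.25):
    """Return (workload, baseline, live) for each workload whose live
    latency exceeds baseline * tolerance."""
    alerts = []
    for workload, base in baseline_ms.items():
        live = live_ms.get(workload)
        if live is not None and live > base * tolerance:
            alerts.append((workload, base, live))
    return alerts

# Hypothetical numbers: a fund-transfer path slows sharply mid-migration,
# while a balance lookup stays within tolerance.
baseline = {"fund_transfer": 40.0, "balance_lookup": 12.0}
during_migration = {"fund_transfer": 95.0, "balance_lookup": 12.5}

for name, base, live in regression_alerts(baseline, during_migration):
    print(f"ALERT {name}: {base:.0f} ms -> {live:.0f} ms")
```

Catching the regression while the migration is in flight is what lets a team pause or roll back before a mission-critical workload is affected.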

Gentry says innovative companies are actively searching for alternatives, looking beyond cloud at modern scale-out architectures and open-source initiatives like OpenStack. But before they can adopt and connect with the new, they must disconnect from the old, or risk the health of their overall system.

Becca Lipman is Senior Editor for Wall Street & Technology. She writes in-depth news articles with a focus on big data and compliance in the capital markets. She regularly meets with information technology leaders and innovators and writes about cloud computing, datacenters, ...