Trading Technology

The High-Speed Arms Race on Wall Street Is Leading Firms to Tap High-Performance Computing

Wall Street firms are turning to high-performance computing technologies to meet the increasing computing demands of quantitative analysts, traders and risk managers without driving up power consumption.

In the ongoing arms race on Wall Street, high-performance computing (HPC), also referred to as supercomputing, provides a huge competitive advantage that no firm can afford to miss out on. HPC is the engine that lets firms analyze and run simulation models on complex financial instruments and portfolios, price options, detect fraud, and predict currency shifts. It lets companies analyze risk across portfolios as well as the enterprise, and it enables quantitative analysts, traders and portfolio managers to understand the relationship between risk and profitability. But the speed and processing power come at a substantial -- and continually escalating -- cost.

Over the past few years, Wall Street has taken a brute force approach to HPC, populating data centers with thousands of servers and racks of blades. These tactics can meet high-performance needs, but the grids and server clusters grow larger every day as firms develop new products and models that demand high performance. "We see a continuing ratcheting up of the amount of high-performance computing that's being done on the Street to stay competitive," notes Gartner analyst Carl Claunch.

But where do firms scale from here, and, critically, how do they manage escalating power needs? Each AMD or Intel chip draws roughly 75 watts of power. Multiply that by 10,000 chips and you're drawing 750 kilowatts before cooling even enters the picture -- and the tremendous heat those chips generate requires cooling, which draws even more power, creating a vicious energy-hogging cycle. And some facilities -- particularly in and around market centers such as Manhattan, London and Tokyo -- are running out of available space and electricity.
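To put a number on that cycle, here is a back-of-the-envelope sketch in C++ of the power math above. The 75-watt and 10,000-chip figures come from this article; the 2.0 overhead multiplier (a power usage effectiveness, or PUE, value) is an illustrative assumption, not a reported number.

    // Back-of-the-envelope data center power math. The per-chip draw and
    // chip count are the article's figures; the PUE multiplier is an
    // assumed value covering cooling and other facility overhead.
    #include <cstdio>

    int main() {
        const double watts_per_chip = 75.0;   // per-chip draw cited above
        const int    chip_count     = 10000;
        const double pue            = 2.0;    // assumption: each watt of compute
                                              // costs roughly another watt of
                                              // cooling and overhead

        const double it_load_kw = watts_per_chip * chip_count / 1000.0; // 750 kW
        const double total_kw   = it_load_kw * pue;                     // ~1,500 kW

        std::printf("IT load: %.0f kW; total facility draw: %.0f kW\n",
                    it_load_kw, total_kw);
        return 0;
    }

At an assumed PUE of 2.0, those 10,000 chips alone imply a facility draw on the order of 1.5 megawatts -- before counting memory, disks, networking or the rest of each server.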

"There's a trade-off we're all having to make at the moment," notes John van Uden, SVP, capital markets and banking technology at Citigroup. "The faster the clock cycles go, the more heat they require to be dissipated. The more heat they generate, the more cooling they need, so they therefore start driving power usage up." If old computers are left in the data centers, the number of machines necessary for HPC multiplies quickly. Yet refreshing hardware frequently to follow the HPC curve is expensive both from an equipment and a power-consumption point of view.

As a result, the financial services industry is exploring new ways of obtaining more HPC cycles while keeping power demands in check, following the lead of HPC pioneers such as universities and the Department of Defense. Among this diverse wave of new HPC offerings are software that helps applications take advantage of parallel processors, as well as graphics boards, PCI (peripheral component interconnect) cards and Cell processors that handle portions of HPC tasks. Also among the next generation of products is Microsoft's latest cluster server, which is designed to extend HPC to more applications and workers.

Exploiting Multicores and Blades

The way in which performance gains are achieved on computer chips has changed, creating a new challenge for HPC software. "For years, Intel just kept doubling, doubling, doubling clock speeds," Microsoft CEO Steve Ballmer tells Wall Street & Technology in a recent exclusive interview. "Now they don't double clock speeds anymore -- they give us twice as many cores or processors. Figuring out how to take any application and parallelize it so it can run across multiple cores will be a key, not just in high-performance computing and not just in financial scenarios but in all applications -- figuring out how to exploit the increase in power that physics is giving us." Microsoft's Windows Compute Cluster Server 2003 supports up to four processors per server, and the Vista operating system supports multicore processors, according to the company.

Intel researchers recently developed an 80-core chip the size of a fingernail that, according to Intel, uses less electricity than a home appliance. That chip may never come to market, but it illustrates the direction in which computer chips are headed. Intel's Clovertown Xeon chip and AMD's Barcelona processor, due out this fall, each contain four cores, and eight-core chips may be available within 12 months, industry observers say. In a similar vein, server blades from IBM and HP that enable the packing of many servers into one rack are hugely popular on Wall Street.

But such dense computing does not automatically equal high performance. If applications are not written to run in parallel, the benefits of multiple processors and dense computing will be limited. "One approach we've seen emerge over the last year is companies trying to be smarter about the way applications accomplish tasks so they don't need so much horsepower," says Gartner's Claunch.

Virtualization software (such as Xen and VMware) can help, as can grid computing software from the likes of Platform Computing and DataSynapse. Aspeed (New York) provides software that lets developers modify applications so they can run across multiple processors within a server or across multiple servers. The Aspeed tools identify loops or iterations that can be run in parallel, form subroutines out of them, and manage the parallel instances at run time. Microsoft's Windows Compute Cluster Server (CCS) comes with a message-passing interface that can be used to help port existing parallel applications. Microsoft also has a partnership with Platform Computing, a vendor of grid-based job schedulers, that allows Platform's software to communicate with CCS. And Visual Studio 2005 includes the OpenMP API (application programming interface) -- which supports multiprocessing programming in C/C++ and Fortran -- as well as a parallel debugger, both of which support HPC application development.
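The loop-level parallelism these tools target is easy to picture. Below is a minimal sketch of the OpenMP pattern Visual Studio 2005 supports, with a hypothetical revalue() function standing in for a real pricing or risk calculation (compile with /openmp under MSVC or -fopenmp under gcc):

    // Minimal OpenMP sketch: independent loop iterations are fanned out
    // across all available cores. revalue() is a hypothetical stand-in
    // for an expensive per-position pricing or risk calculation.
    #include <cmath>
    #include <cstdio>
    #include <vector>

    double revalue(int position) {
        double x = 0.0;
        for (int i = 1; i <= 100000; ++i)   // simulate a costly computation
            x += std::sin(position * 0.001 + i * 1e-5);
        return x;
    }

    int main() {
        const int n = 10000;
        std::vector<double> results(n);

        // No iteration depends on any other, so the runtime can execute
        // them concurrently on however many cores the machine has.
        #pragma omp parallel for
        for (int i = 0; i < n; ++i)
            results[i] = revalue(i);

        std::printf("first result: %f\n", results[0]);
        return 0;
    }

Without the pragma the loop still runs correctly, just serially, which is what makes this style of retrofit attractive for existing codebases.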

Turbochargers

Under consideration at many Wall Street firms are a variety of specialized hardware components, known as accelerators, that help a computer run an application more efficiently. An accelerator is like a turbocharger in a car -- it makes the engine go faster when you need it to. On Wall Street, accelerators can be used to speed up part of an application, such as Monte Carlo simulations, with a low incremental power use, freeing up the core processors for other work.
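Monte Carlo pricing is a natural candidate because every simulated path is independent of every other. As a rough illustration, here is a minimal sketch of a Monte Carlo pricer for a European call option under standard Black-Scholes assumptions; all parameter values are illustrative, and the OpenMP reduction here stands in for the thousands of lightweight threads a GPU or Cell processor would throw at the same loop:

    // Minimal Monte Carlo European call pricer: the kind of embarrassingly
    // parallel kernel that accelerators target. All parameters are
    // illustrative, not drawn from the article.
    #include <algorithm>
    #include <cmath>
    #include <cstdio>
    #include <random>

    int main() {
        const double S0 = 100.0, K = 100.0;   // spot and strike
        const double r = 0.05, sigma = 0.2;   // risk-free rate, volatility
        const double T = 1.0;                 // one year to expiry
        const int paths = 1000000;

        const double drift = (r - 0.5 * sigma * sigma) * T;
        const double vol   = sigma * std::sqrt(T);

        double sum = 0.0;
        // Paths are independent, so the loop parallelizes cleanly.
        #pragma omp parallel for reduction(+ : sum)
        for (int i = 0; i < paths; ++i) {
            // Per-path seeding keeps iterations independent (fine for a
            // sketch; production code would use a parallel-safe generator).
            std::mt19937_64 rng(static_cast<unsigned long long>(i) + 1);
            std::normal_distribution<double> z(0.0, 1.0);
            const double ST = S0 * std::exp(drift + vol * z(rng));
            sum += std::max(ST - K, 0.0);     // call payoff at expiry
        }

        const double price = std::exp(-r * T) * sum / paths;
        std::printf("Monte Carlo call price: %.4f\n", price);
        return 0;
    }

The payoff loop is where virtually all of the time goes, and it has no cross-path dependencies -- exactly the shape of workload that maps well onto GPUs, PCI accelerator boards and the Cell processor.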

Accelerators don't require a major data center upgrade, making them an attractive option for financial firms. They come in the form of graphics processing units (GPUs), PCI boards and the IBM Cell processor. Using such devices in combination with traditional HPC equipment sometimes is referred to as "heterogeneous computing" or "heterogeneous architecture." This technology category is brand-new, however, and many of the providers are start-ups, so Wall Street firms are not yet ready to reveal which products they're trying. Executives at Citigroup and Merrill Lynch, for instance, say they are evaluating accelerators but decline to specify which ones. "We are following this marketplace closely," says Hasan Dewan, managing director, technology architecture solutions and services at Merrill Lynch. "We know the companies leading this space and talk to them every quarter. Everybody is dabbling in this right now, like us."

"If you're a market leader and you want to be able to price products with more accuracy, then you'll investigate these technologies," asserts Adam Vile, head of grid, HPC and technical computing at consulting firm Excelian. Consider a complicated model, such as one for CDO squared -- a type of collateralized debt obligation where the underlying portfolio includes tranches of other CDOs -- that can take six to eight hours to run on a standard computer. The ability to put that algorithm on a faster device that can perform the same calculations in five minutes is potentially worth millions of dollars and could best competitors, Vile notes. Even Tier 2 firms that can't afford or don't have space for a grid are investigating accelerators because of the boosts in processing speed they can provide in a small amount of space with low power usage, he adds.
