Arista Networks on Monday introduced a high-powered, low-latency network switch that can process a transaction in five microseconds, five millionths of a second. It already sells a switch to high-frequency traders that can process an order in 600 nanoseconds, 600 billionths of a second.

Blade Networks and Voltaire, also on Monday, announced a partnership that will combine their switching networks for data center customers. The maximum time for data to move through the combined fabric is estimated at 2.4 microseconds. That figure may be why Blade chief executive Vikram Mehta declared, at the opening session of the 2010 High Performance Computing Linux Financial Markets Show and Conference in New York, that the next bottleneck in high-speed trading is not hardware, but software.

In effect, he said, success in increasingly fast-moving automated markets will depend on how efficiently traders’ programmers write the algorithms that drive their buying and selling of securities in the ever-smaller slices of time that technology permits.

“If you look at the rest of the infrastructure, your servers, your storage subsystems, your fabrics, people in the trading world have access to the same technology that their competitors do,’’ Mehta said. “So, I really think at the end of the day, in the pursuit of zero latency, the ultimate decider is going to be how smart those algos are, as opposed to specifically what architecture is being deployed.’’

Mehta contended that “those architectures are going to be very universal, whether you’re a high-frequency trader or an exchange offering those services” to clients. And Andy Bechtolsheim, founder of Arista Networks, on the same panel, said technology will keep driving the speed of execution toward the speed of light. Right now, for instance, it is a big deal to send data across Ethernet networks at 10 gigabits per second. But, Bechtolsheim noted, that rate will be leapfrogged twice over the next 19 months, first to 40 gigabits and then to 100 gigabits per second; he expects both steps to be complete by the end of 2011.

Also cutting time will be moves to put more functions associated with trading venues into single blades or computing servers, putting a stock exchange in a box. The technology unit of NYSE Euronext, for instance, is creating a single server that will provide “trading in a box.”

The device will include chips acting as engines: market data engines to pull in market data and feed it to trading engines; trading engines to run a firm’s strategies; smart order routing engines to pick the right venue for each order; and, finally, market access engines to deliver those orders to the chosen markets instantly. The single-box trading server was described by Feargal O’Sullivan, head of enterprise software in the Americas for NYSE Technologies.
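The four stages described above form a pipeline. The sketch below is purely illustrative; the engine names, strategy, and venue choice are hypothetical stand-ins, not NYSE Technologies’ actual design:

```python
def market_data_engine(raw_feed):
    """Pull in raw market data and normalize it into (symbol, bid, ask) ticks."""
    for symbol, bid, ask in raw_feed:
        yield symbol, bid, ask

def trading_engine(ticks, max_spread=0.02):
    """Toy strategy: emit a buy order whenever the quoted spread is tight."""
    for symbol, bid, ask in ticks:
        if ask - bid <= max_spread:
            yield {"symbol": symbol, "side": "BUY", "price": ask}

def smart_order_router(orders, venues=("VENUE_A", "VENUE_B")):
    """Pick a venue for each order (here, trivially the first one)."""
    for order in orders:
        order["venue"] = venues[0]
        yield order

def market_access_engine(routed_orders):
    """Hand orders to the chosen venues; here we just collect them."""
    return list(routed_orders)

feed = [("IBM", 130.00, 130.01), ("XYZ", 10.00, 10.50)]
sent = market_access_engine(
    smart_order_router(trading_engine(market_data_engine(feed)))
)
# Only the tight-spread IBM quote produces an order; XYZ's 0.50 spread is skipped.
```

Putting all four stages in one box removes network hops between them, which is the latency argument behind “trading in a box.”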

But software is not the only bottleneck. Every step of the trading cycle has to be analyzed and the time spent in each step cut, said Mats Andersson, the chief technology officer of Nasdaq OMX Group.

Nasdaq, he said, uses a planning and management system called a “balanced scorecard” to keep driving down order execution times. Andersson said the Nasdaq exchange now handles 99 percent of all orders from trading servers co-located in the same facility as its matching engines in 272 microseconds, 272 millionths of a second.

“The real fight is for talent,” said Jeff Birnbaum, chief technology architect and global head of architecture at Bank of America Merrill Lynch.

“It’s not just the hardware technology that makes the difference,” Birnbaum said in his own presentation on “The Changing Landscape of High Performance Computing.” “You need the software talent to go with that.”

Birnbaum noted that he has tried to engage "unblemished" minds before they even get to college. Last year, for instance, he taught aspects of parallel computing to students and faculty at Brooklyn Technical High School in New York.

Afterward, he gave one 15-year-old student a challenge: create a faster parser.

A parser is a program, usually part of a compiler, that receives input in the form of sequential source instructions, interactive commands, markup tags, or some other defined format, and breaks it into parts that other programs can then process.
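In trading, the input to be parsed is often a delimited message rather than source code. As a minimal sketch in that spirit (the message format and function below are generic illustrations, not the student’s actual work), here is a parser that breaks a FIX-style “tag=value” message into fields:

```python
def parse_message(message: str, sep: str = "|") -> dict:
    """Break a delimited tag=value message into a tag -> value dictionary."""
    fields = {}
    for token in message.strip(sep).split(sep):
        tag, _, value = token.partition("=")
        fields[tag] = value
    return fields

msg = "35=D|55=IBM|54=1|38=100"
parse_message(msg)  # {'35': 'D', '55': 'IBM', '54': '1', '38': '100'}
```

In a low-latency system, the interesting engineering is making this kind of field splitting run at wire speed, which is where the throughput claim below comes in.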

The student took about three months to return a response. The code was "horrific," Birnbaum said.

But the approach was creative, and fast. With some help on the code, Birnbaum said, he believed he could get the parser to process 2 billion bits a second, several times the conventional rate.
