Businesses and consumers are getting a lot of high-level advertising and messaging about new computing models, IBM’s “Smarter Planet” campaigns, for instance. What does that really mean for the future?
IBM has been on the Smarter Planet theme for a long time. Three years ago, we were working on seven of the top 10 smart grid deployments. Now we are in 150 smart grid deployments around the world. Having done a lot of these, we've started to see some things in common. There is a set of messages around what is happening in the data center and a separate message that says some IT is coming out of the data center. There are examples in smart grids, smart traffic, smart water and smart wireless, kinds of deployments where some of the IT, typically in the form of an appliance that combines hardware and software, is moved closer to the end user. For a smart phone, there might be a base station or something else that has IT and compute function nearer the user. It's computing in places where IT has never gone before, where there's never been a computer before.
Does that mean thinking about a new kind of architecture?
From a systems perspective I think first about workload optimization. There is a big difference between one-size-fits-all commodity systems and purpose-built systems tuned to a task. It's a big difference if you tune or you don't. Smarter Planet workloads span different domains, traffic, communications, energy, each representing different activities, and no single architecture is appropriate for all of them. You have to have some flavor of transaction processing, you might have to manage data that is coming at you at high speed, you might have to analyze at high speed and you might have to [run] deep analytics. What you need is not a single architecture, but a multi-architecture or a flexible system that can support multiple types of sub-workloads within broader workloads.
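The placement idea in that answer can be sketched in a few lines: each sub-workload of a broader workload is routed to the engine best suited to it. The workload names and the architecture mapping below are illustrative assumptions, not any real product's configuration.

```python
# Hypothetical best-fit table: which engine suits which kind of sub-workload.
# The keys and values here are illustrative, not a shipping product's terms.
BEST_FIT = {
    "transaction_processing": "mainframe",
    "high_speed_ingest": "appliance",
    "stream_analytics": "x86_cluster",
    "deep_analytics": "power_cluster",
}

def place(sub_workloads):
    """Map each sub-workload of a broader workload to an architecture,
    defaulting to a commodity cluster when no tuned engine is listed."""
    return {w: BEST_FIT.get(w, "x86_cluster") for w in sub_workloads}

# A single "workload" spanning two very different sub-workloads:
place(["transaction_processing", "deep_analytics"])
```

The point of the sketch is only that one broader workload fans out to more than one architecture, rather than forcing everything onto a single engine.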
So from a systems view, Smarter Planet sounds like a backbone message about replacing obsolete systems with new, ubiquitous, multipurpose architecture.
Yes, multipurpose, multi-architecture is clearly an important high-level theme for us. Last year we introduced the zEnterprise mainframe, which also included the zEnterprise BladeCenter Extension (zBX), which allows you to have x86 blades, Power blades and the mainframe all integrated into a single package. That to me is the first version of a multi-architecture system, and there is a single Unified Resource Manager that lets you define the concept of a workload, where the workload can run partly on x86, partly on Power blades and partly on the mainframe. You can set performance goals for your workload and the system will automatically decide to give it more x86 MIPS [processing capacity] or less, more Power MIPS or less, more memory or less, and so on.
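The goal-based behavior described there can be sketched as a simple feedback loop: if a workload misses its performance goal, it gets more capacity on every architecture it spans; if it comfortably exceeds the goal, some capacity is reclaimed. This is a minimal sketch of the idea, not IBM's Unified Resource Manager API; all names, thresholds and units are assumptions.

```python
class Workload:
    """A workload with a performance goal and capacity shares spread
    across several architectures (e.g. x86, Power, mainframe)."""
    def __init__(self, name, goal_tps, shares):
        self.name = name
        self.goal_tps = goal_tps      # target transactions per second
        self.shares = dict(shares)    # capacity units per architecture

def rebalance(workload, measured_tps, step=0.1):
    """Adjust capacity shares toward the performance goal."""
    if measured_tps < workload.goal_tps:           # missing the goal: grow
        factor = 1 + step
    elif measured_tps > 1.2 * workload.goal_tps:   # well over the goal: shrink
        factor = 1 - step
    else:
        return workload.shares                     # within band: leave alone
    for arch in workload.shares:                   # scale every architecture
        workload.shares[arch] = round(workload.shares[arch] * factor, 2)
    return workload.shares

w = Workload("billing", goal_tps=500, shares={"x86": 10, "power": 6, "z": 4})
rebalance(w, measured_tps=420)   # under goal: all shares grow by 10%
```

The key design point the answer makes is that the operator states a goal for the workload as a whole, and the system decides per-architecture allocations automatically.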
Why is that architectural flexibility important?
Our customers already have many different kinds of workloads running on all these kinds of architectures. If you go to them today and say, 'I've got this great new system for you, but it's all on blades,' and they can't migrate all their applications to it, they can't use it. Or, you offer something that consumes one-fifth the power and one-tenth the floor space, and it networks really well, but it's only x86 and Linux; what are they going to do with their other applications? And yes, we like to use the word "flexible," it's a flex architecture.
In a grid, how does networking become more important?
The networking trends are making a lot of this work possible. If I go back 10 years, my bandwidth to memory [access] was maybe 1,000 times faster than my bandwidth over a network to get to storage or to another processor. Fast forward to today, and that difference is a factor of two instead of 1,000. So all of a sudden, even things like memory could potentially be shared. You could carve it out and say, I've got this much memory in my system, I want to give one-third of it to my x86 processor and another third to something else, and be able to change that over time. That level of flexibility is going to be possible, I think.
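The carving described there can be illustrated with a toy pool partitioner that splits a shared memory pool by fraction and can be re-run at any time to shift capacity. The pool size and consumer names are made up for illustration; no real system's API is implied.

```python
POOL_GB = 96  # assumed size of a shared memory pool, for illustration only

def carve(fractions):
    """Split the pool by fraction per consumer; fractions must sum to <= 1."""
    assert sum(fractions.values()) <= 1.0, "cannot allocate more than the pool"
    return {name: POOL_GB * frac for name, frac in fractions.items()}

# Give one-third each to three consumers, as in the example above:
slices = carve({"x86": 1/3, "power": 1/3, "storage_cache": 1/3})

# Later, shift capacity without touching hardware, just by re-carving:
slices = carve({"x86": 0.5, "power": 0.25, "storage_cache": 0.25})
```

The narrowing gap between memory bandwidth and network bandwidth is what makes this kind of run-time re-carving plausible across a network rather than only inside one box.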
We’re talking to CIOs more about the effect of the network than we did in the past. What else has changed besides bandwidth?
One of the trends today is that there is a lot of function in a core network switch. It moves and switches bits very fast, which is the important thing, but it also carries a lot of the control function. With the compute capacity available on servers today, much of that function can be moved into software running on servers, and the switches themselves can stick to switching bits at very high speed. So there is a whole trend toward moving function out of these core switches, particularly control-plane processing, and back into software or into access-level switches that are much closer to the server. That will give us a much better level of network virtualization. We've had server virtualization and storage virtualization, and increasingly we'll see the same thing with networking.
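The control-plane/data-plane split described there can be sketched in miniature: the "switch" only matches destinations against a table and forwards at speed, while all policy lives in controller software running on a server, which pushes rules down. This is illustrative only; real software-defined networking stacks (OpenFlow-based controllers, for example) are far richer, and every name below is an assumption.

```python
class Switch:
    """Data plane: a dumb, fast match table from destination to egress port."""
    def __init__(self):
        self.table = {}

    def forward(self, dst):
        # Unknown destinations are flooded (or punted to the controller).
        return self.table.get(dst, "flood")

class Controller:
    """Control plane: runs in software on a server and programs switches."""
    def __init__(self, topology):
        self.topology = topology              # destination -> egress port

    def program(self, switch):
        switch.table = dict(self.topology)    # install forwarding rules

sw = Switch()
Controller({"10.0.0.5": "port2", "10.0.0.9": "port7"}).program(sw)
sw.forward("10.0.0.5")    # forwarded per the installed rule
sw.forward("10.0.0.99")   # no rule installed, so it floods
```

Because the policy logic is ordinary software, it can be virtualized and updated like any server workload, which is the parallel the answer draws to server and storage virtualization.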