So, to summarize the introductory part: Clouds are clearly playing a very important part in our computational infrastructure, in our cyber-infrastructure. They will become part of production computational environments, and they will play a role in science and engineering, in research and in education. The reason is clear: Clouds provide many benefits that you have learned about throughout this week. There is the idea of shifting from capital expenses to operational expenses; you can scale your operational expense on demand as you need it and pay only for what you use. Clouds also provide a platform for quick startups, quick prototyping, and one-off tasks: if you have certain requirements and need something right away, you can do it quickly rather than investing in a large-scale infrastructure. And there is the cost model, which says that the cost of one machine for a thousand hours is the same as the cost of a thousand machines for one hour. That is very interesting for a lot of applications, because it supports this kind of dynamic scale-up. [pause] Integrating Clouds into cyber-infrastructure gives us a lot of opportunities for transforming science: new application formulations, new delivery modes, HPC as a Cloud, new usage modes with hybrid workflows, democratization, and many other things.
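The machine-hours equivalence mentioned above can be checked with a minimal sketch. The hourly rate here is an assumed, illustrative on-demand price, not a figure from the talk:

```python
# Illustrative pay-per-use cloud cost model: total cost depends only on
# machine-hours consumed, not on how they are split across machines.
HOURLY_RATE = 0.10  # assumed price per machine-hour (illustrative only)

def cost(machines: int, hours: float, rate: float = HOURLY_RATE) -> float:
    """Total cost of running `machines` instances for `hours` each."""
    return machines * hours * rate

serial = cost(machines=1, hours=1000)    # one machine for a thousand hours
parallel = cost(machines=1000, hours=1)  # a thousand machines for one hour

assert serial == parallel == 100.0  # same spend, ~1000x faster turnaround
```

The interesting consequence for applications is exactly the one in the talk: for the same money, an embarrassingly parallel job can finish a thousand times sooner.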
And there is also a challenging research agenda here, not only in building better Clouds, what I am calling the Science of Clouds, that is, better technologies and approaches for building more effective, more secure, more robust Clouds, but also Science on Clouds: how can I use these capabilities to do better, more effective science and get more insights out of what Clouds provide. So this concludes the introductory part of my presentation. I want to shift now to the next part, which is an introduction to CometCloud. [pause] CometCloud is something we have been working on here at Rutgers University for some time. (new speaker)
We’re checking for questions. (Manish Parashar)
Okay, thanks. I believe there was a question. [pause] (new speaker)
Yes, UCLA has a question. (new speaker)
Yeah, the question was about the relationship between Cloud technology maturing and the necessity of supercomputing centers. Over the past few days we have been talking about doing the computation where the data exists. If Cloud technology is mature enough, we can do the computation near the data, so at a… [unknown] we will scale it only if we need it. Do we still have the necessity of supercomputing centers, where we have to move the data to this other place? [pause] (Manish Parashar)
Okay, I am not sure I fully understood the data aspect of the question, but as far as the need for supercomputing capabilities: there are definitely capabilities that these resources provide. High-end computers such as the Crays and the IBM Blue Genes have architectures that are very well suited to very computationally intense applications. There is also specialized hardware, such as the Anton machine, which has capabilities targeted at a certain class of problems. So clearly the Cloud type of platform will not be able to meet all of these specialized needs, and there will be a need for those resources alongside Cloud resources that can target a broader, more general class of applications. As far as moving the data, it depends on how and where the data is produced. It might be produced by instruments that are located next to, or within, an HPC center, for instance, and in that case using the resources there might make sense. So I don't know if that answers the question, or if I understood the data part of your question correctly. (new speaker)
Thanks. (Manish Parashar)
Okay. If there are no other questions, I'd like to move on and give you a brief overview of CometCloud. As I was saying, CometCloud is a framework that we have been building here at Rutgers. The goal is to enable an autonomic federation of Cloud infrastructure and HPC infrastructure, and then to support applications and programming paradigms on top of such a federated infrastructure. It comes from the recognition that most applications have rather heterogeneous and dynamic demands for resources: in the types of resources they need during the lifetime of an application workflow, as well as in the number and scale of those resources. Being able to flexibly join, or federate, these resources to support those dynamic needs is important. And there are various constraints on this: some phases of an application workflow may have very strict throughput requirements, there may be budget constraints that force tradeoffs, and there could be run-time limitations. So being able to use autonomic capabilities to balance these different objectives and dimensions is interesting. [pause] Going beyond just federating existing infrastructure to add Cloud services as part of the federation was an important part of CometCloud, so being able to use appropriate numbers and types of resources from Cloud providers, within these constraints, was another motivation. [pause] The overall goal is really to provision the right mix of resources to meet these objectives and to execute application workflows more effectively. [pause] So the key features of CometCloud are the following. The basic abstraction that CometCloud provides is the idea of a federated pool of heterogeneous resources that applications can access as they need.
And so it allows you to cloudburst: you can use different policies that let the system scale out, either to public Clouds or to other resource types, to meet the requirements you have. You can decide, while running an application, that you now need a certain type of resource as part of your federation, bridge it in on the fly, and remove it from the federation when you no longer need it. For example, if I needed a large-memory system to do some analytics, I could federate it in without having to restart my application. The autonomics is something we have been focusing on a lot: using policies to drive the federation process as well as the execution of the application. This includes user policies, for example "I want this done by this deadline and I don't care how much I spend on it," or "I cannot spend more than x dollars on this." It also includes resource constraints, such as how to deal with failures or systems being down: rather than stopping the application, I can use the federation to move my workload elsewhere. Resources might be available only during certain times of the day, so how do I keep my federation adapting to those kinds of constraints? Then, on top of this core infrastructure, which you will learn more about in the hands-on, we have also built more popular programming models: a classic bag-of-tasks, master/worker model on top of the CometCloud framework; something like MapReduce/Hadoop on top of it; or a more standard application workflow that combines these pieces on top of this kind of infrastructure.
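The deadline- and budget-driven cloudbursting described above can be sketched as a simple decision function. This is a hypothetical illustration of the policy idea, not CometCloud's actual policy engine or API; all names, parameters, and thresholds are made up for the example:

```python
# Hypothetical sketch of policy-driven cloudbursting: decide whether to
# scale out to (e.g. public-cloud) workers given a deadline and a budget.
# Not CometCloud's real API; all names here are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Policy:
    deadline_s: Optional[float] = None   # "done by this deadline" policy
    budget_usd: Optional[float] = None   # "spend no more than x dollars" policy

def should_burst(tasks_left: int, rate_per_worker: float, workers: int,
                 elapsed_s: float, spent_usd: float,
                 cost_per_worker_hour: float, policy: Policy) -> bool:
    """Return True if the federation should acquire additional workers."""
    # Estimated completion time with the current worker pool
    # (rate_per_worker is tasks completed per second per worker).
    eta_s = elapsed_s + tasks_left / (rate_per_worker * workers)
    if policy.deadline_s is not None and eta_s > policy.deadline_s:
        # Deadline at risk: burst out, unless that would break the budget.
        extra_cost = cost_per_worker_hour * (eta_s - elapsed_s) / 3600
        if policy.budget_usd is None or spent_usd + extra_cost <= policy.budget_usd:
            return True
    return False

# Deadline at risk and budget allows it: burst out.
print(should_burst(tasks_left=1000, rate_per_worker=0.5, workers=2,
                   elapsed_s=600, spent_usd=1.0, cost_per_worker_hour=0.10,
                   policy=Policy(deadline_s=900, budget_usd=10.0)))  # True
```

A real autonomic manager would re-evaluate such policies continuously and also handle the other cases in the talk (worker failures, time-of-day availability) by treating them as further inputs to the same decision loop.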