Research and High Performance Computing
Rutgers recently embarked on a high-profile High Performance Computing initiative with the launch of the Rutgers Discovery Informatics Institute (RDI2). The institute is currently in its first phase and operates an IBM Blue Gene/P supercomputer. The SAS is also moving forward with an HPC initiative in cooperation with this new institute and OIT. This new HPC cluster will complement the work of RDI2 by providing different computational technologies. Our goal is to create a cooperating group of HPC systems and personnel to expand the availability and utilization of these resources. All of these initiatives seek to broaden the use of advanced computing while leveraging the economies of scale that come from centralizing networking, storage, power, cooling, and staffing.
The SAS project began after we received several requests for high performance computing resources within a short time frame. Rather than create several small individual clusters, we contacted the faculty involved, who agreed to put their funds toward a larger, centralized cluster. In exchange for expanding the project's scope to the entire University, OIT agreed to contribute significant additional resources, including funds for staffing and equipment, and to house the cluster in their machine room.
Our goal is to create a voluntary center where different areas of Rutgers can pool technological and staffing resources to make the best possible HPC resources available to faculty and students. This means there are many ways for different groups to participate. Not everyone will choose simply to buy into this new cluster, but we hope this will become a venue where collaboration around HPC can take place. We anticipate that those who choose to work with us will do so in one of three ways:
- Groups will join our initiative and expand one of the clusters we're building rather than continuing on their own. This would buy them more computing power and storage than they could otherwise afford, provide redundancy in staffing, and standardize operations.
- Groups will continue to maintain their own clusters but will move them into the same space as ours so they can leverage the better storage, higher-speed networking, cooling, and power that we can provide.
- Groups will continue to maintain their own clusters in their own space while participating in our HPC group to help develop guidelines for the creation and management of future clusters. For example, it would help if different clusters used the same scheduler and Unix distribution so that faculty who move between systems would have a similar experience.
We will have more information on the cluster in the near future. Please make sure you're signed up for our mailing list, or check back here regularly for updates. We hope this new type of cooperative structure will prove successful and lead to similar collaboration in other areas.