Learn how to use the NSFCloud!
All researchers who have computational and big data analytic needs are encouraged to attend!
The University of Texas at San Antonio (UTSA) is one of five institutions nationwide partnering on a $10 million National Science Foundation (NSF) grant to create a cloud-computing testbed that will let researchers develop and experiment with new cloud architectures and applications.
The NSFCloud project will support the design, deployment and initial operation of Chameleon, a large-scale testbed consisting of 650 cloud nodes and five petabytes of storage.
Paul Rad, co-principal investigator for Chameleon and Assistant Director of the Open Cloud Institute at UTSA, will lead training and outreach efforts for the project.
Jon Weissman, Ph.D.
Dept. of Computer Science and Engineering
University of Minnesota, Twin Cities
Cloud computing is evolving from the centralized data center to a more widely distributed collection of data centers, and even to the use of edge devices, so-called Fog Computing. This evolution of clouds is driven by the proliferation of dispersed data, user-facing latency-sensitive applications, and the availability of powerful edge devices. We term this new frontier of cloud computing Geo-Distributed Clouds, and discuss our work in supporting computation and storage in two geo-distributed cloud projects: Nebula and Tiera.
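A basic decision in a geo-distributed cloud is which site should serve a given request. As a minimal sketch (not actual Nebula or Tiera code; the site names and latencies are invented for illustration), placement can start from measured round-trip latencies:

```python
# Hypothetical sketch: pick the data center with the lowest measured
# round-trip latency for a latency-sensitive, user-facing request.

def nearest_datacenter(latencies_ms):
    """Return the site name with the smallest measured latency."""
    return min(latencies_ms, key=latencies_ms.get)

# Invented example measurements (milliseconds):
sites = {"us-east": 42.0, "eu-west": 110.0, "ap-south": 95.0}
print(nearest_datacenter(sites))  # → us-east
```

Real systems weigh more than latency (data location, load, cost), but this is the kernel of the placement problem the talk describes.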
Faculty reception to follow.
N. Rama Rao Professor of Computer Science
My group is working with a consortium of bulk electric power transmission operators for the Northeastern US on a cloud-based "smart grid" infrastructure. Our long-term goal is to use the cloud to host machine-learning and optimization technologies, but before this can be done, the scale of the problem forces us to think about how one can build a cloud-scale solution secure enough to support a nationally critical resource, strongly consistent, and seamlessly recoverable after disruption.

Today we have a platform running: we call it GridCloud, and we have just completed a DOE-funded effort to show that it could monitor the national bulk power grid. One follow-on project will seek to transition the technology to the consortium mentioned earlier: ISO-NE, NYPA, and NYISO. Another follow-on will deploy GridCloud in a smart power distribution system with substantial microgeneration (rooftop solar) and schedulable loads (a/c, baseboard electric heaters, hot water heaters), and will explore distributed supply/demand control with much larger numbers of sensor endpoints and fairly tight real-time control-loop objectives.

GridCloud could be useful in other real-time settings too. For example, self-driving cars might require cloud-hosted situation awareness tools, automated health-care solutions could easily have the mix of security and reliability issues on which we focus, and similar comments can be made about smart homes and office complexes, banking, and other demanding use cases. In fact, we increasingly think of the system as an operating system for the mission-critical cloud, hosted inside systems like Amazon AWS or AWS GovCloud, and augmenting the standard cloud frameworks with extra features aimed at improving consistency and fault tolerance and supporting real-time responses within time bounds that could be 100 ms or smaller.
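To make the real-time bound concrete: a monitoring platform with a 100 ms objective must notice when sensor data stops arriving on time. The following is a hypothetical illustration, not GridCloud's actual API; the sensor names and timestamps are invented:

```python
# Hypothetical sketch: flag sensors whose latest reading has missed a
# real-time delivery deadline (e.g. 100 ms), as a monitor might do.

DEADLINE_MS = 100.0  # assumed deadline, matching the talk's "100 ms or smaller"

def late_readings(readings, now_ms, deadline_ms=DEADLINE_MS):
    """Return ids of sensors whose latest reading is older than the deadline."""
    return [sensor_id for sensor_id, ts_ms in readings.items()
            if now_ms - ts_ms > deadline_ms]

# Invented phasor-measurement-unit timestamps (milliseconds):
readings = {"pmu-1": 950.0, "pmu-2": 880.0, "pmu-3": 990.0}
print(late_readings(readings, now_ms=1000.0))  # → ['pmu-2']
```

The hard part in a real deployment is doing this consistently across replicated collectors after failures, which is exactly the consistency and fault-tolerance work the abstract describes.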
Professor of High Performance Computing
Norwegian University of Science and Technology (NTNU)
Current cloud infrastructures are mostly homogeneous, centrally managed, and made available to end users through the three standard delivery models: IaaS, PaaS, and SaaS. As systems of different types are added to the growing cloud infrastructure in order to maximize performance and power efficiency, heterogeneous clouds are being created. However, exploiting different architectures poses significant challenges. Efficiently accessing heterogeneous resources while also exploiting them to reduce application development effort, make optimizations easier, and simplify service deployment requires a re-evaluation of our approach to service delivery, and of how we implement HPC applications and computer simulations requiring fast inter-processor communication in various engineering fields.
In this talk I will present the main ideas behind the current EU project CloudLightning, led by Prof. John Morrison's group at UCC, Cork, Ireland, in collaboration with my group at NTNU in Trondheim, Norway, along with DCU and Intel Ireland, Maxeler Technologies of the UK, IeAT Romania, and CERTH & DUTH of Greece. The project is based on principles of self-organization and self-management that shift the deployment and optimization effort from the consumer to the software stack running on the cloud infrastructure. Our framework is general, but our proposed use cases focus on how to enable cloud services for high performance computing. These use cases include genomics, oil and gas exploration, and ray tracing. My research group is responsible for the oil and gas use case and risk management, as well as for the overall test-bed, which includes a novel Numascale SMP with GPUs, a small GPU cluster, an Intel Phi system, and a Maxeler FPGA-based system. The oil and gas use case is done in collaboration with Statoil and the OPM (Open Porous Media) project.
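The self-organizing idea can be sketched in miniature: instead of the consumer choosing hardware, each heterogeneous resource advertises how well suited it is to a workload class, and the service lands on the best-scoring one. This is purely an illustrative toy, not CloudLightning code; the resource names and scores are invented (though the workload classes echo the project's use cases):

```python
# Hypothetical sketch: place a service on the heterogeneous resource that
# advertises the highest suitability score for its workload class.

def place_service(resources, workload):
    """Pick the resource whose advertised score for this workload is highest."""
    return max(resources, key=lambda r: r["scores"].get(workload, 0.0))

# Invented resource advertisements:
resources = [
    {"name": "gpu-cluster",  "scores": {"ray-tracing": 0.9, "genomics": 0.7}},
    {"name": "fpga-maxeler", "scores": {"oil-and-gas": 0.95}},
    {"name": "x86-cluster",  "scores": {"genomics": 0.5, "oil-and-gas": 0.4}},
]
print(place_service(resources, "oil-and-gas")["name"])  # → fpga-maxeler
```

The point of the design is that the deployment effort shifts into the cloud's own software stack: the consumer names the workload, and the infrastructure organizes itself around it.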
PhD, Associate Professor, Department of Biochemistry at
UT Health Science Center San Antonio
In this seminar I will provide an overview of the biophysical technique of analytical ultracentrifugation (AUC). This technique is widely used for the solution-phase study of biopolymers and other molecules at the nanoscale. Data resulting from this technique are analyzed with a series of algorithms that involve finite element simulations, non-negative least squares grid searches, and Monte Carlo and genetic algorithm optimization. Data analysis is currently performed using x86-type parallel cluster architectures on national supercomputers. I will present the underlying algorithms and ask whether the current MPI code can be ported to GPU-type architectures. The seminar will be followed by a discussion with the audience.
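One of the algorithm families mentioned, non-negative least squares, fits experimental data as a non-negative combination of simulated model curves. The sketch below is a generic toy (not the speaker's MPI code): it solves a tiny NNLS problem by projected coordinate descent, with an invented orthogonal basis standing in for simulated sedimentation profiles:

```python
# Hypothetical sketch: non-negative least squares via projected coordinate
# descent — fit data y as sum_j c_j * basis_j with all c_j >= 0.

def nnls_coordinate_descent(basis, y, iters=200):
    """Minimize ||sum_j c_j * basis_j - y||^2 subject to c_j >= 0."""
    n, m = len(basis), len(y)
    c = [0.0] * n
    for _ in range(iters):
        for j in range(n):
            # Residual with component j excluded.
            r = [y[i] - sum(c[k] * basis[k][i] for k in range(n) if k != j)
                 for i in range(m)]
            num = sum(basis[j][i] * r[i] for i in range(m))
            den = sum(basis[j][i] ** 2 for i in range(m))
            c[j] = max(0.0, num / den)  # project onto the constraint c_j >= 0
    return c

# Invented model curves and data constructed as exactly 2*b1 + 3*b2:
b1 = [1.0, 0.0, 1.0, 0.0]
b2 = [0.0, 1.0, 0.0, 1.0]
y  = [2.0, 3.0, 2.0, 3.0]
print(nnls_coordinate_descent([b1, b2], y))  # → [2.0, 3.0]
```

In real AUC analysis the basis comes from finite element simulations over a grid of sedimentation parameters, and the grid search is far larger — which is why the inner loops here are a natural candidate for GPU parallelization.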
Once a month, Paul Rad provides a webinar to show new users of chameleoncloud.org how they can use Chameleon Cloud for their projects.
This month the webinar will be on August 18th; you will need to sign up with the following form beforehand.
For more information about this opportunity, visit Chameleon Cloud here.
PhD, Professor, Department of Epidemiology and Biostatistics at
UT Health Science Center San Antonio
"SALSI: Cloud Based Precision Medicine"
SEPTEMBER 27, 2016, 4:00-5:00 P.M.
LOEFFLER ROOM, BSB 3.03.02
PhD, Director, Center for Computational Research
University at Buffalo, The State University of New York
"Optimizing the Performance of High Performance Computing Systems"
This presentation will discuss the development of an open-source tool (Open XDMoD) for the comprehensive management of high performance computing (HPC) systems. Today's HPC systems are complex combinations of computer hardware and software, and it is important that support personnel have at their disposal tools to ensure that this complex infrastructure is running with optimal efficiency, as well as the ability to proactively identify under-performing hardware and software. In addition, most HPC systems are oversubscribed, and support personnel desire the capability to monitor and analyze all end-user jobs to determine how efficiently they are running and what resources they are consuming (computer memory, processing, storage, networking, etc.) in order to optimize job throughput and plan for future upgrades. Open XDMoD is the first fully comprehensive tool for supporting the information needs of all stakeholders using and running HPC systems. Case studies of the application of Open XDMoD to HPC systems at the University of Texas and at UB's Center for Computational Research are included. The case studies indicate the level of detailed system metrics that are available, as well as the proactive identification of under-performing hardware and software.
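One simple job-level metric of the kind such monitoring enables is CPU efficiency: the fraction of allocated core-time a job actually used. The snippet below is a generic illustration, not the Open XDMoD API; the job records and the 50% threshold are invented:

```python
# Hypothetical sketch: flag under-performing jobs by comparing CPU time
# consumed against the core-time the job was allocated.

def cpu_efficiency(cpu_seconds_used, wall_seconds, cores):
    """Fraction of allocated core-time actually used (1.0 = fully busy)."""
    return cpu_seconds_used / (wall_seconds * cores)

# Invented job records:
jobs = [
    {"id": "job-1", "cpu": 7200.0, "wall": 3600.0, "cores": 2},  # fully busy
    {"id": "job-2", "cpu": 900.0,  "wall": 3600.0, "cores": 4},  # mostly idle
]
flagged = [j["id"] for j in jobs
           if cpu_efficiency(j["cpu"], j["wall"], j["cores"]) < 0.5]
print(flagged)  # → ['job-2']
```

Aggregating metrics like this across every job on an oversubscribed system is what lets support staff spot wasted allocations and plan upgrades, which is the use case the talk describes.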
Friday, December 2, 2016, 10:00-11:30 A.M.
JPL Assembly Room, JPL 4.04.22
Hosted by University of Texas at San Antonio College of Business and College of Engineering.
PhD, SUNY Distinguished Professor of Computer Science and Engineering at
University at Buffalo; VP for Research and Economic Development
Given the pervasive use of e-commerce transactions and personal data storage in the cloud, society has an urgent need for a robust process that authenticates and protects the privacy and online assets of individuals and organizations. We recommend a totally new approach that rethinks the entire "science of authentication." The biometrics and cyber security communities have approached the challenge from different vantage points: the former focuses on the "individuality" and "liveness" of human characteristics, whereas the latter has primarily considered encryption and elaborate software protocols. This talk explores methods that go beyond the traditional biometrics of physical and behavioral modalities by integrating tests for humanness and identity in a cognitive framework. We are creating a comprehensive solution that will address a host of challenging AI problems, ranging from OCR and object recognition to natural language understanding. We will also show how our holistic process allows for a more practical approach to security within the framework of a continuous authentication scenario.
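In a continuous authentication scenario, several signals (behavioral, physical, cognitive) are typically fused into a running confidence score that is re-evaluated as the user works. The fusion rule below is a generic weighted sum, not the speaker's actual method; the signal names and weights are invented for illustration:

```python
# Hypothetical sketch: fuse multiple authentication signals into one
# confidence score that a continuous-authentication loop could re-check.

WEIGHTS = {"keystroke": 0.3, "face": 0.5, "challenge": 0.2}  # assumed weights

def auth_confidence(signals):
    """Weighted combination of per-signal match scores in [0, 1]."""
    return sum(WEIGHTS[name] * score for name, score in signals.items())

# Invented per-signal match scores for the current session:
score = auth_confidence({"keystroke": 0.8, "face": 0.9, "challenge": 1.0})
print(round(score, 2))  # → 0.89
```

A real system would add liveness and humanness tests as gating signals rather than simple weights, which is closer to the cognitive framework the talk proposes.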