CfP: High Performance Computing Symposium 2015
The 2015 Spring Simulation Multi-conference will feature the 23rd High Performance Computing Symposium (HPC 2015), devoted to the impact of high performance computing and communications on computer simulations. Full information is available on the HPC 2015 symposium website.
Abstract submissions are due on September 12, 2014, but are not compulsory. Full paper submissions are due on November 22, 2014. Topics of interest include:
- High performance/large scale application case studies
- GPU for general purpose computations (GPGPU)
- Multicore and many-core computing
- Power aware computing
- Cloud, distributed, and grid computing
- Asynchronous numerical methods and programming
- Hybrid system modeling and simulation
- Large scale visualization and data management
- Tools and environments for coupling parallel codes
- Parallel algorithms and architectures
- High performance software tools
- Resilience at the simulation level
- Component technologies for high performance computing
Looking forward to your submissions.
Phi and Kepler Run Monte Carlo Race
There have been a number of recent efforts to delineate the differences in performance, portability, and functionality between GPUs and the new Xeon Phi coprocessors, with some organizations benchmarking against industry-specific algorithms.
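To give a sense of the workload behind such comparisons, here is a minimal Monte Carlo sketch in C with OpenMP. It estimates pi by random sampling; the estimator, sample count, and seeding scheme are illustrative assumptions, not the industry-specific kernels used in the benchmarks mentioned above, but the overall structure (many independent samples followed by a reduction) is the same.

```c
/* Minimal Monte Carlo pi estimator in C with OpenMP.
 * Illustrative only: real benchmark kernels are domain-specific,
 * but share this pattern of independent samples plus a reduction. */
#define _POSIX_C_SOURCE 200112L
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

int main(void)
{
    const long n_samples = 100000000L;   /* total random points */
    long hits = 0;                       /* points inside the unit circle */

    #pragma omp parallel reduction(+:hits)
    {
        unsigned int seed = 1234u + omp_get_thread_num();  /* per-thread seed */
        #pragma omp for
        for (long i = 0; i < n_samples; i++) {
            double x = (double)rand_r(&seed) / RAND_MAX;
            double y = (double)rand_r(&seed) / RAND_MAX;
            if (x * x + y * y <= 1.0)
                hits++;
        }
    }

    printf("pi ~ %.6f\n", 4.0 * (double)hits / (double)n_samples);
    return 0;
}
```

The same loop maps naturally onto a GPU kernel or a Xeon Phi offload region, which is why Monte Carlo codes are a popular yardstick for comparing the two.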
NVIDIA Shows Off Mobile Variant of Kepler GPU
NVIDIA used a recent graphics conference to demonstrate Project Logan, the low-power version of the Kepler GPU that it’s developing for smartphones and tablets. The GPU maker says that, when it ships, Project Logan will enable graphics capabilities for mobile devices that are on par with the most powerful desktop- and console-based gaming systems.
The Modern GPU: A Graphic History
What do the Atari 2600 and Tianhe-2 have in common? It may be difficult to imagine, but both systems are examples of cutting-edge graphics processors for their times. This reflects the fascinating evolution of the GPU, which today is one of the most critical hardware components of supercomputer architectures.
NC Researchers Claim Pathway for Processor Scalability
Scalability is one of the biggest challenges I have encountered in parallel computing, so research of this kind is quite interesting.
RESEARCH TRIANGLE PARK, N.C., Apr 16 — Researchers sponsored by Semiconductor Research Corporation (SRC), the world’s leading university-research consortium for semiconductors and related technologies, today announced that they have identified a path to overcome challenges for scaling multi-core semiconductors by successfully addressing how to scale memory communications among the cores. The results could allow ever-smaller integrated circuits (ICs) to be designed into computer hardware without the expense of rewriting all software from scratch to accommodate the increased capabilities.
A heterogeneous approach to solving big data analytics graph problems
Sponsored Content by Convey Computer
Today’s commodity servers, as well as systems designed specifically for numerically intensive algorithms (“supercomputers”), are ill suited for many applications in the world of big data analytics. Such applications often use graph manipulation algorithms and data structures, and are best addressed by architectural extensions not found in commodity systems. Convey Computer Corporation’s hybrid-core system takes a heterogeneous approach to solving graph-type problems, and the resulting performance is characterized by results on the Graph500 Benchmark (www.graph500.org). Let’s take a look at architectural features that accelerate graph problems, and how Convey has implemented these features in its reconfigurable computing system.
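The Graph500 benchmark is built around breadth-first search over very large graphs, and a small sketch makes clear why such workloads stress memory systems. The plain C example below (compressed-sparse-row layout, all names illustrative assumptions, not Convey's implementation) chases data-dependent pointers on every step, so caches and prefetchers on commodity processors help little.

```c
/* Minimal breadth-first search over a graph stored in
 * compressed-sparse-row (CSR) form -- the kernel at the heart of
 * the Graph500 benchmark.  Illustrative sketch only.  The random,
 * data-dependent accesses to adj[] and dist[] are what make this
 * workload hard for cache-based commodity processors. */
#include <stdlib.h>

void bfs(long n_vertices,
         const long *row_start,  /* CSR offsets, length n_vertices + 1 */
         const long *adj,        /* concatenated adjacency lists       */
         long source,
         long *dist)             /* output: hop distance from source   */
{
    long *queue = malloc(n_vertices * sizeof *queue);
    long head = 0, tail = 0;

    for (long v = 0; v < n_vertices; v++)
        dist[v] = -1;            /* -1 means "not visited yet" */

    dist[source] = 0;
    queue[tail++] = source;

    while (head < tail) {
        long u = queue[head++];
        /* Each access to adj[] and dist[] can land anywhere in memory,
         * which defeats hardware prefetching and cache locality. */
        for (long e = row_start[u]; e < row_start[u + 1]; e++) {
            long v = adj[e];
            if (dist[v] == -1) {
                dist[v] = dist[u] + 1;
                queue[tail++] = v;
            }
        }
    }
    free(queue);
}
```

Architectures aimed at graph analytics, such as Convey's hybrid-core system, target exactly this irregular access pattern rather than the dense floating-point loops that conventional supercomputers optimize for.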
From Mobile Phones to Supercomputers
By Robert Gelber
While almost every system on the Top500 list makes use of multicore CPUs, one supercomputing research team looks to buck that trend. Next month, the Barcelona Supercomputing Center (BSC) will begin building the second prototype of their Mont-Blanc supercomputer, using NVIDIA’s Tegra 3 processors, the same chips found in mobile devices. In a Wired article published this week, Mont-Blanc project lead Alex Ramirez talks about the work and what to expect from the new prototype system.
$10 million NSF Grant for Code Automation by nine universities and institutions
I found this news at HPC Wire. It sounds like an ambitious project, but why not? I am really curious about how smart the developed tools will be. If it succeeds, $10 million is quite cheap.
HOUSTON, TX, April 3 — Computer scientists from Rice University, the University of Pennsylvania and seven other institutions are teaming up to address one of the greatest ironies of the information age: While computers and robots have automated the manufacture of thousands of products, the software that allows them to do this is still written mostly by hand.
Armed with a $10 million grant from the National Science Foundation (NSF), the researchers hope to create intelligent software agents — smart programs that can first observe and learn from human programmers and then help humans write code faster and with fewer errors. Based at Penn, the five-year effort is dubbed Expeditions in Computer Augmented Program Engineering, or ExCAPE. It is funded by the NSF’s Expeditions in Computing program, which supports ambitious research agendas that will define the future of computing.
OpenMP Announces Improvements for Multicore and Accelerators
CHAMPAIGN, Ill., Mar 27 — OpenMP, the de facto standard for parallel programming on shared memory systems, continues to extend its reach beyond pure HPC to include embedded systems, multicore and real time systems. A new version is being developed that will include support for accelerators, error handling, thread affinity, tasking extensions and Fortran 2003. The OpenMP consortium welcomes feedback from all interested parties and will use this feedback to improve the next version of OpenMP.
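For readers less familiar with OpenMP, the C sketch below shows the directive style the announcement builds on: a worksharing loop and tasks (already in OpenMP 3.x), plus an accelerator-offload region. The `target` syntax shown follows what was eventually standardized in OpenMP 4.0 and is included here as an illustration of the direction described above, not as a quote from the draft specification.

```c
/* Sketch of OpenMP directives relevant to the announcement.
 * parallel-for and task constructs exist in OpenMP 3.x; the
 * "target" offload directive follows the syntax later adopted in
 * OpenMP 4.0 and is shown for illustration only. */
#include <stdio.h>
#include <omp.h>

#define N 1000000

int main(void)
{
    static float a[N], b[N], c[N];

    /* Classic shared-memory worksharing loop. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++) {
        a[i] = i * 0.5f;
        b[i] = i * 2.0f;
    }

    /* Task-based parallelism; the proposed tasking extensions
     * build on this model. */
    #pragma omp parallel
    #pragma omp single
    {
        #pragma omp task
        printf("task A on thread %d\n", omp_get_thread_num());
        #pragma omp task
        printf("task B on thread %d\n", omp_get_thread_num());
    }

    /* Accelerator offload: map the arrays to the device and run
     * the loop there (falls back to the host if no device exists). */
    #pragma omp target map(to: a, b) map(from: c)
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        c[i] = a[i] + b[i];

    printf("c[N-1] = %f\n", c[N - 1]);
    return 0;
}
```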