The 2015 Spring Simulation Multi-conference will feature the 23rd High Performance Computing Symposium (HPC 2015), devoted to the impact of high performance computing and communications on computer simulations. Full information is available on the HPC 2015 website.
Abstract submissions are due on September 12, 2014, but are optional. Full paper submissions are due on November 22, 2014. Topics of interest include:
- High performance/large scale application case studies
- GPU for general purpose computations (GPGPU)
- Multicore and many-core computing
- Power aware computing
- Cloud, distributed, and grid computing
- Asynchronous numerical methods and programming
- Hybrid system modeling and simulation
- Large scale visualization and data management
- Tools and environments for coupling parallel codes
- Parallel algorithms and architectures
- High performance software tools
- Resilience at the simulation level
- Component technologies for high performance computing
We look forward to your submissions.
There have been a number of recent efforts to delineate the differences between GPUs and the new Xeon Phi coprocessors in terms of performance, portability, and functionality, with some organizations benchmarking against industry-specific algorithms.
NVIDIA used a recent graphics conference to demonstrate Project Logan, the low-power version of the Kepler GPU that it’s developing for smartphones and tablets. The GPU maker says that, when it ships, Project Logan will enable graphics capabilities for mobile devices that are on par with the most powerful desktop- and console-based gaming systems.
What do the Atari 2600 and Tianhe-2 have in common? It may be difficult to imagine, but both systems are examples of the use of cutting-edge graphics processors for their times. This demonstrates the fascinating evolution of the GPU, which today is one of the most critical hardware components of supercomputer architectures.
Scalability is one of the biggest challenges I have encountered in parallel computing, which makes this kind of research quite interesting.
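To see why scalability is so hard, a quick sketch of Amdahl's law helps: even a small serial fraction caps the speedup no matter how many cores are added. The 95% parallel fraction below is an illustrative assumption, not a figure from any of the systems discussed here.

```python
def amdahl_speedup(parallel_fraction, n_cores):
    """Amdahl's law: speedup is limited by the serial fraction of the work."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cores)

# With 95% of the work parallelizable, speedup saturates well below
# the core count: the serial 5% caps it at 1/0.05 = 20x.
for cores in (4, 16, 64, 256):
    print(f"{cores:4d} cores -> {amdahl_speedup(0.95, cores):5.1f}x")
```

Running this shows the speedup climbing from about 3.5x on 4 cores to under 19x on 256 cores, which is why reducing the serial fraction (and communication overhead) dominates parallel-computing research.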
RESEARCH TRIANGLE PARK, N.C., Apr 16 — Researchers sponsored by Semiconductor Research Corporation (SRC), the world's leading university-research consortium for semiconductors and related technologies, today announced that they have identified a path to overcome challenges in scaling multi-core semiconductors by successfully addressing how to scale memory communications among the cores. The results could enable the continued design of ever-smaller integrated circuits (ICs) in computer hardware without the expensive rewriting of software from scratch to accommodate the increased capabilities.
Sponsored Content by Convey Computer
Today’s commodity servers, as well as systems designed specifically for numerically intensive algorithms (“supercomputers”), are ill suited for many applications in the world of big data analytics. Such applications often use graph manipulation algorithms and data structures, and are best addressed by architectural extensions not found in commodity systems. Convey Computer Corporation’s hybrid-core system takes a heterogeneous approach to solving graph-type problems, and the resulting performance is characterized by results on the Graph500 Benchmark (www.graph500.org). Let’s take a look at architectural features that accelerate graph problems, and how Convey has implemented these features in its reconfigurable computing system.
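For context on what "graph-type problems" means here: the Graph500 benchmark's core kernel is a breadth-first search over a very large graph, whose irregular, pointer-chasing memory accesses are exactly what commodity memory hierarchies handle poorly. The sketch below is a minimal level-synchronous BFS in plain Python, just to show the access pattern; it is in no way Convey's hybrid-core implementation, and the tiny example graph is invented for illustration.

```python
from collections import deque

def bfs_parents(adj, source):
    """Level-synchronous BFS producing a parent map, the output
    form validated by the Graph500 benchmark's search kernel."""
    parent = {source: source}
    frontier = deque([source])
    while frontier:
        v = frontier.popleft()
        # Each neighbor lookup is an irregular memory access --
        # the pattern that stresses commodity memory systems.
        for w in adj.get(v, ()):
            if w not in parent:
                parent[w] = v
                frontier.append(w)
    return parent

# Tiny undirected example graph (each edge stored in both directions)
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
print(bfs_parents(adj, 0))
```

On the example graph this prints `{0: 0, 1: 0, 2: 0, 3: 1}`. At Graph500 scale the adjacency data far exceeds cache, so performance is dominated by random memory latency rather than arithmetic, which is the case Convey's architecture targets.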