Posts filed under ‘Computer Architecture’

NC Researchers Claim Pathway for Processor Scalability

Scalability is one of the biggest challenges I have encountered in parallel computing, so this kind of research is quite interesting.

RESEARCH TRIANGLE PARK, N.C., Apr 16 — Researchers sponsored by Semiconductor Research Corporation (SRC), the world’s leading university-research consortium for semiconductors and related technologies, today announced that they have identified a path to overcome challenges for scaling multi-core semiconductors by successfully addressing how to scale memory communications among the cores. The results can lead to continued design of ever-smaller integrated circuits (ICs) into computer hardware without expensive writing of all new software from scratch to accommodate the increased capabilities.

Read more

April 17, 2012 at 1:17 am Leave a comment

From Mobile Phones to Supercomputers

By Robert Gelber

While almost every system on the Top500 list makes use of multicore CPUs, one supercomputing research team looks to buck that trend. Next month, the Barcelona Supercomputing Center (BSC) will begin building the second prototype of their Mont-Blanc supercomputer, using NVIDIA’s Tegra 3 processors, the same chips found in mobile devices. In a Wired article published this week, Mont-Blanc project lead Alex Ramirez talks about the work and what to expect from the new prototype system.

Read more …

April 5, 2012 at 8:21 pm Leave a comment

AMD Opens Up Heterogeneous Computing


“HSA, which until recently was known as the Fusion architecture, is AMD’s platform design for integrating CPU and GPU cores onto the same chip. But HSA is more than AMD’s attempt to define an architecture for internal use, as was the case for Fusion. Rather, HSA is an open specification that AMD wants the industry to adopt as the de facto platform for heterogeneous computing…”

For full article, visit this link:

March 14, 2012 at 10:21 pm Leave a comment

DDN solutions now available through Penguin Computing

DataDirect Networks (DDN) has announced that Penguin Computing has signed an agreement to offer DDN’s award-winning suite of HPC and Big Data storage solutions to its global customer base. Effective immediately, customers will be able to source DDN products from Penguin Computing, including the SFA storage platforms, the GRIDScaler and EXAScaler parallel file storage systems, NAS Scaler, DDN’s enterprise scale-out NAS platform, and WOS, the company’s hyperscale object storage system.

For full article, visit here:

March 14, 2012 at 10:11 pm Leave a comment

Facebook Shakes Hardware World With Own Storage Gear

Just read the article “Facebook Shakes Hardware World With Own Storage Gear”. For a fast-growing company like Facebook, such hardware requirements are to be expected. Reading through the comments on this article is also kind of fun.

“Facebook already built its own data center and its own servers. And now the social-networking giant is building its own storage hardware — hardware for housing all the digital stuff uploaded by its more than 845 million users.

Like the web’s other leading players — including Google and Amazon — Facebook runs an online operation that’s well beyond the scope of the average business, and that translates to unprecedented hardware costs — and hardware complications. If you’re housing 140 billion digital photos, you need a new breed of hardware.

In building its own data center on the Oregon high desert, Facebook did away with electric chillers, uninterruptible power supplies, and other terribly inefficient gear. And in working with various hardware manufacturers to build its own servers, the company not only reduced power consumption, it stripped the systems down to the bare essentials, making them easier to repair and less expensive. Frankovsky and his team call this “vanity free” engineering, and now, they’ve extended the philosophy to storage hardware….”

For full article, visit this link

February 26, 2012 at 12:29 am Leave a comment

CPU Affinity – Why and How?

CPU affinity (or processor affinity) is a capability provided by operating systems such as Windows and Linux that lets you select the specific CPUs or cores your program will run on. In a multi-core or multi-processor system, the assignment of a process to a CPU is decided automatically by the OS scheduler. However, you can still override this scheduling by pinning your program to a specific CPU.

Why is CPU affinity needed?

You may ask, “Why do I need it if the OS is handling everything for me?”. You are right: in most cases you won’t need this feature. However, if runtime performance is a concern, it can be worth experimenting with CPU affinity. For demonstration, I wrote a simple parallel program doing some computations on multiple cores. When I ran it on a 12-core machine (two processors, six cores per processor) with and without CPU affinity, I got the runtime results below.

CPU affinity can improve runtime performance because it improves cache behavior by reducing cache misses: a process that stays on one core keeps its working set warm in that core’s cache instead of refilling it after every migration. In a NUMA system, combining CPU affinity with memory allocation on the local (faster) node can speed up the process as well.

How to?

There are two ways to set the CPU affinity in both Linux and Windows.

In Windows:
Method 1: Set the CPU affinity using Task Manager

  • Open Task Manager by pressing Ctrl + Alt + Delete and selecting Task Manager
  • Select the Processes tab
  • Right-click the process for which you want to set CPU affinity
  • Select “Set affinity…” from the context menu
  • Check the CPUs that you want your program to run on

Method 2: Set the CPU affinity from your source code
You can use the Windows API function SetProcessAffinityMask to set the CPU affinity from your program code.

In Linux:

Method 1: Launch the program from the command line using taskset
The command below launches gedit on CPUs 0 and 3 (i.e., the first and fourth logical CPUs):

taskset -c 0,3 gedit

Method 2: Set the CPU affinity from your source code
You can use the function sched_setaffinity from sched.h to manage CPU affinity from your code.

August 5, 2011 at 2:40 pm 1 comment

Microsoft wants Intel to build 16 core Atom chip

Microsoft has asked Intel to develop a 16-core version of its low power Atom chip for use in servers, part of a wider effort to reduce power consumption in its massive data centres, a Microsoft executive said Thursday.

There’s a “huge opportunity” to improve energy efficiency by using servers based on small, low-power chip designs such as Intel’s Atom and Advanced Micro Devices’ Bobcat, said Dileep Bhandarkar, a distinguished engineer with Microsoft’s Global Foundation Services, which runs the company’s data centres.

Read more

February 4, 2011 at 4:14 pm Leave a comment

Strategy Analytics: Multi-Core Processors to Penetrate 45 Percent of Smartphones by 2015

Over the next few years multi-core processors are expected to play an important role in enabling high-performance mobile computing at low-power consumption, according to, “Multi-core Processor Penetration in Smartphones Will Hit 15 Percent in 2011,” from Strategy Analytics’ Handset Component Technologies service. Samsung, NVIDIA, Qualcomm, ST-Ericsson, Texas Instruments, Marvell, Renesas and Broadcom are well-positioned to capture market share in the smartphone multi-core applications processor market.

Read more

February 3, 2011 at 2:24 pm Leave a comment

Intel is preparing a 1000-core processor

According to Timothy Mattson, a research scientist at Intel speaking at the Supercomputing Conference 2010 in New Orleans, a concept chip with 1000 working cores could be built on the scalable architecture of Intel’s 48-core processor. Mattson said that, thanks to this scalability, a 1000-core chip could match an entire datacenter in use today. Read more

A 1000-core processor? This is not entirely new, because Scottish scientists recently created a 1000-core processor on a single FPGA chip (Read more). This gave me the feeling that the supercomputing era is back. But this hardware must be very expensive, because FPGAs are never cheap. So, for anyone who is looking for a powerful and cheap many-core processor, why not consider GPUs? NVIDIA now has a 448-core chip, and ATI has GPUs with up to 1000 cores. Although there are limitations on writing programs that run on GPUs, they are a great option if we are short on budget.

Anyway, I hope that this Intel 1000-core processor will come soon.

January 12, 2011 at 1:39 pm Leave a comment

Facing the Multicore-Challenge

“Facing the Multicore-Challenge: Aspects of New Paradigms and Technologies in Parallel Computing”, edited by Rainer Keller, David Kramer, and Jan-Philipp Weiss is an outcome of the conference titled “Facing the Multicore” held at the Heidelberger Akademie der Wissenschaften, March 17–19, 2010. The conference focused on topics related to the impact of multicore and coprocessor technologies in science and for large-scale applications in an interdisciplinary environment. Read more

January 12, 2011 at 1:37 pm Leave a comment

