Moore’s “Law”: an empirical observation by Intel co-founder Gordon Moore in 1965. The number of components in computer circuits had doubled every year since 1958, and Moore predicted that this doubling trend would continue for another decade. Remarkably, over four decades later, that number has continued to double every two years or less.
However, since about 2005, it has been impossible to achieve such performance improvements by making single-CPU circuits larger and faster. Instead, the industry has turned to multi-core CPUs: single chips that contain multiple circuits for carrying out instructions, called cores.
The number of cores per CPU chip is growing exponentially, in order to maintain the exponential growth curve of Moore’s Law. But most software has been designed for a single core.
Therefore, CS students must learn the principles of parallel computing to be prepared for careers that will increasingly require them to take advantage of multi-core systems.
Comments: Thus, parallelism takes place in hardware, whereas concurrency takes place in software. Operating systems must use concurrency, since they must manage multiple processes that are abstractly executing at the same time – and that can physically execute at the same time, given parallel hardware (and a capable OS).
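The software/hardware distinction can be made concrete with a small sketch. The Python code and names below are illustrative, not from the notes: the program expresses *concurrency* (three independent tasks), while whether they actually run in *parallel* is decided by the hardware and the OS scheduler.

```python
import threading
import os

def task(name, results):
    # Each thread is a concurrent unit of work; the OS decides whether
    # threads run interleaved on one core or simultaneously on several.
    results[name] = f"ran on a thread in process {os.getpid()}"

results = {}
threads = [threading.Thread(target=task, args=(f"task{i}", results))
           for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# All three concurrent tasks completed, regardless of how much
# physical parallelism the hardware provided.
print(len(results))  # → 3
```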
Comments: Every process has at least one thread of execution, defined by that process’s program counter. If there are multiple threads within a process, they share resources such as the process’s memory allocation. This reduces the computational overhead of switching among threads (also called lightweight processes) and enables efficient sharing of resources (e.g., communication through shared memory locations).
Comments: CS students have traditionally learned primarily sequential programming. These skills remain relevant, because concurrent programs ordinarily consist of sets of sequential programs intended for various cores or computers.
Comments: Both of these types of computing may be present in the same system (as in our MistRider and Helios clusters).
Comments: A telephone call center illustrates data parallelism: each incoming customer call (or outgoing telemarketer call) represents the same service being performed on different data. An assembly line (or computational pipeline) illustrates task parallelism: each stage is carried out by a different person (or processor), and all persons work in parallel (but on different stages of different items).
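The call-center and assembly-line analogies can be sketched in code. This Python sketch is ours, with hypothetical stage names: data parallelism applies one operation to many independent items at once, while task parallelism divides the work into distinct stages (here shown as a simple sequential pipeline; a real task-parallel system would run the stages concurrently on different items).

```python
from concurrent.futures import ThreadPoolExecutor

# Data parallelism: the same service ("handle one call") applied to
# many independent data items, like agents in a call center.
def handle_call(customer_id):
    return f"served {customer_id}"

with ThreadPoolExecutor(max_workers=4) as pool:
    served = list(pool.map(handle_call, range(8)))
print(len(served))  # → 8

# Task parallelism: different stages applied to each item, like an
# assembly line; each stage could run on its own processor.
def stage_a(x):
    return x + 1   # first station

def stage_b(x):
    return x * 2   # second station

pipeline_out = [stage_b(stage_a(x)) for x in (1, 2, 3)]
print(pipeline_out)  # → [4, 6, 8]
```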
Comments: Although multi-core processors are driving the movement to introduce more parallelism into CS courses, distributed computing concepts also merit study. For example, Intel’s recently announced 48-core research chip behaves like a distributed system with regard to interactions among its cache memories.