In simple data parallelism, parallel computations may not need to share data with one another while their programs execute. However, most other forms of concurrency require communication between parallel computations. Here are three options for communication among processes/threads running in parallel:

  * Shared memory
  * Message passing
  * Distributed memory
Comments: For distributed systems, message passing and distributed memory (if available) may be used. All three approaches may be used for concurrent programming on multi-core systems; however, shared (local) memory access typically offers a substantial speed advantage over message passing and remote distributed-memory access.
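To make the message-passing option concrete, here is a minimal sketch (not from the original notes) in which a parent process sends a message to a child over a POSIX pipe, so the two processes communicate without sharing the data in memory; the message text and buffer size are arbitrary choices.

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

/* Minimal message passing: the parent process sends a message to the
   child through a pipe; no memory is shared for the data itself. */
int main(void) {
    int fd[2];                      /* fd[0]: read end, fd[1]: write end */
    char buf[64];

    if (pipe(fd) == -1) return 1;

    if (fork() == 0) {              /* child: receive the message */
        close(fd[1]);
        ssize_t n = read(fd[0], buf, sizeof buf - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("child received: %s\n", buf);
        }
        close(fd[0]);
        return 0;
    }

    /* parent: send the message */
    close(fd[0]);
    const char *msg = "hello from parent";
    write(fd[1], msg, strlen(msg));
    close(fd[1]);
    wait(NULL);
    return 0;
}
```

Shared-memory communication between threads, by contrast, needs no explicit send and receive steps, but it raises the race-condition issue discussed next.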
When multiple processes or threads have both read and write access to a memory location, there is potential for a race condition, in which the correct behavior of the system depends on the timing of their operations. (Example: multiple threads filling a shared array, each claiming slots by reading and updating a shared variable nextindex; see the sketch after the comment below.)
Comments: A memory location is an example of an operating-system (OS) resource; other examples are: files; open files; network connections; disks; GPUs; print queue entries. Race conditions may occur around the handling of any OS resource, not just memory locations.
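Here is a minimal sketch of the shared-array example, assuming POSIX threads; the thread count, array size, and the function name fill are illustrative choices. Each thread claims slots by reading and then incrementing nextindex; because that read-increment-write sequence is not atomic, another thread can interleave between the two steps and claim the same slot.

```c
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 8          /* illustrative thread count */
#define SLOTS    100000     /* illustrative array size   */

int array[SLOTS];
int nextindex = 0;          /* shared index: the racy resource */

/* Each thread claims slots by reading and then incrementing nextindex.
   The read and the write are separate steps, so another thread may
   run between them and claim the same slot: a race condition. */
void *fill(void *arg) {
    long claimed = 0;
    (void)arg;
    while (nextindex < SLOTS) {
        int i = nextindex;      /* read                       */
        nextindex = i + 1;      /* write -- race window here  */
        if (i < SLOTS) {
            array[i] = 1;
            claimed++;
        }
    }
    return (void *)claimed;
}

int main(void) {
    pthread_t t[NTHREADS];
    long total = 0;

    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, fill, NULL);
    for (int i = 0; i < NTHREADS; i++) {
        void *claimed;
        pthread_join(t[i], &claimed);
        total += (long)claimed;
    }

    /* Correct behavior would give total == SLOTS; with the race,
       duplicate claims can make total exceed SLOTS. */
    printf("slots claimed: %ld (expected %d)\n", total, SLOTS);
    return 0;
}
```

Compiled with cc -pthread (optimization off makes the interleaving easier to observe), repeated runs may report more than SLOTS slots claimed in total, with the excess varying from run to run; that run-to-run variation is exactly the timing dependence that defines a race condition.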