In concurrent computing, a program is viewed and designed as a collection of computational processes that interact with one another and are therefore capable of being executed in parallel.
– One major characteristic of such programs is that they can all be executed on a single processor.
– The processes involved are also known as threads.
– In that case, the processor interleaves their execution steps through the technique of time slicing.
– Alternatively, each computational process can be assigned to one processor out of a set of processors, so that the processes execute in parallel.
– These processors may be close together or distributed.
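The time-sliced interleaving described above can be sketched with Python threads (the `worker` function and `log` list are illustrative names, not part of any standard API):

```python
import threading

def worker(name, log, steps=3):
    # Each append is one execution step; the scheduler may switch
    # threads between steps, interleaving the two workers.
    for i in range(steps):
        log.append((name, i))

log = []
t1 = threading.Thread(target=worker, args=("A", log))
t2 = threading.Thread(target=worker, args=("B", log))
t1.start(); t2.start()
t1.join(); t2.join()

# All six steps complete; only their order depends on the interleaving.
print(sorted(log))
```

Only the order of the entries in `log` varies from run to run; the set of completed steps is always the same.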
Challenges while Designing Concurrent Programs
However, designing concurrent programs poses certain challenges, such as the following:
– Ensuring that the sequencing of communications or interactions between the different executions is correct.
– Coordinating access to resources shared among the executions.
How are concurrent programs implemented?
There are many methods for implementing concurrent programs, of which the following two are the most prominent:
- Implementing each computational execution as an operating system process.
- Implementing the executions as sets of threads within a single operating system process.
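A minimal sketch of the two approaches, assuming Python's standard `threading` and `multiprocessing` modules; `task` is a hypothetical workload:

```python
import threading
import multiprocessing

def task(n):
    return n * n

# Second approach: several threads within a single operating system
# process. The threads share the process's memory, so they can all
# append to the same list.
results = []
threads = [threading.Thread(target=lambda i=i: results.append(task(i)))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))  # [0, 1, 4, 9]

# First approach: each execution as a separate operating system process,
# each with its own memory; results travel back over inter-process channels.
if __name__ == "__main__":
    with multiprocessing.Pool(processes=2) as pool:
        print(pool.map(task, range(4)))  # [0, 1, 4, 9]
```

The thread version shares state directly, while the process version must copy inputs and outputs between address spaces, which illustrates the trade-off between the two implementations.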
– C.A.R. Hoare, Per Brinch Hansen and Edsger Dijkstra are among the pioneers of the field of concurrent computing.
– Some computer systems hide the communication among concurrent components from the designer and the programmer.
– In other computing systems, the programmer must handle this communication explicitly.
Different forms of Concurrency
– Explicit concurrent communication can be divided into the following two classes:
Shared memory communication:
– Concurrent components communicate by altering the contents of shared memory.
– This style of concurrent programming applies some form of locking, such as monitors, semaphores or mutexes, to coordinate the executing threads.
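A minimal sketch of shared-memory coordination using a mutex, here Python's `threading.Lock`; the shared counter is just an illustrative resource:

```python
import threading

counter = 0
lock = threading.Lock()  # mutex coordinating access to the shared value

def increment(times):
    global counter
    for _ in range(times):
        with lock:        # only one thread may alter the counter at a time
            counter += 1

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000 — no updates are lost
```

Without the lock, two threads could read the same old value of `counter` and each write back old value + 1, losing one of the increments.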
Message passing communication:
– Exchanging messages is the means by which the concurrent components communicate with each other.
– Messages may be exchanged asynchronously or in a rendezvous manner.
– In the latter, the sender blocks until its message has been received.
– In the former, message delivery may at times be unreliable.
– For this reason, asynchronous message passing is also known as 'send and pray'.
– Message-passing concurrency tends to be easier to reason about than shared-memory concurrency and is therefore considered a more robust form of concurrent programming.
– Message passing can be implemented efficiently on symmetric multiprocessors, either with shared coherent memory or without it.
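A sketch of asynchronous message passing between two threads, using Python's `queue.Queue` as the mailbox (a true rendezvous would require the sender to block until the receiver takes the message, which `queue.Queue` does not provide directly):

```python
import queue
import threading

channel = queue.Queue()  # asynchronous mailbox: put() does not wait for the receiver

def producer():
    for i in range(3):
        channel.put(("data", i))  # send a message, then continue immediately
    channel.put(("done", None))   # sentinel message ending the conversation

def consumer(received):
    while True:
        tag, value = channel.get()  # blocks until a message arrives
        if tag == "done":
            break
        received.append(value)

received = []
p = threading.Thread(target=producer)
c = threading.Thread(target=consumer, args=(received,))
p.start(); c.start()
p.join(); c.join()
print(received)  # [0, 1, 2]
```

The producer never touches the consumer's state; all coordination happens through the messages themselves.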
– The two types of concurrency have different performance characteristics.
– Typically, the per-process memory overhead and the task-switching overhead are lower in a message-passing system.
– However, the overhead of passing a message is greater than that of a procedure call.
– Preventing concurrent processes from interfering with each other's work is one of the major issues in concurrent computing.
– Such problems typically arise among processes that share resources.
– Solving them requires concurrency control or the use of non-blocking algorithms.
– Concurrent systems rely heavily on shared resources.
– Their implementation therefore requires an arbiter to mediate access to those resources.
– Unfortunately, the solutions that exist for these issues can introduce concurrency problems of their own.
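One common arbiter pattern can be sketched with a dedicated Python thread that owns the shared resource and serializes all requests through a queue; the names `arbiter` and `client` and the sentinel shutdown protocol are illustrative assumptions:

```python
import queue
import threading

requests = queue.Queue()
resource = []  # the shared resource, owned exclusively by the arbiter

def arbiter():
    # Drain the request queue one item at a time, so all accesses
    # to the resource are serialized through this single thread.
    while True:
        item = requests.get()
        if item is None:        # sentinel: shut down the arbiter
            break
        resource.append(item)   # only the arbiter ever touches the resource

def client(name):
    for i in range(3):
        requests.put((name, i))  # clients never access the resource directly

arb = threading.Thread(target=arbiter)
arb.start()
clients = [threading.Thread(target=client, args=(n,)) for n in ("A", "B")]
for t in clients:
    t.start()
for t in clients:
    t.join()
requests.put(None)
arb.join()
print(len(resource))  # 6
```

Note how the arbiter itself becomes a point of contention: every client queues up behind it, which illustrates why such solutions can have concurrency issues of their own.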