Asynchronous Processes

When an asynchronous process is initiated, the necessary memory structures are created for the new process. Thereafter, the new process and the initiator execute in parallel. Although they execute at the same time, they do not necessarily execute at the same speed. It is for this reason that the new process is called asynchronous.

Examples of statements that initiate asynchronous processes are the PROCESS statement in ALGOL or COBOL, and the PROCESS RUN or PROCESS <subroutine> statement in WFL.
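The exact PROCESS syntax is specific to WFL and the MCP languages, but the initiation pattern itself is general. The following Python sketch is an analogy only (threading.Thread stands in for the initiated process; none of these names come from WFL or ALGOL): initiation creates the structures for the new unit of execution, after which initiator and new process run in parallel.

```python
import threading

results = []

def worker():
    # The "asynchronous process": runs in parallel with its initiator.
    results.append("worker done")

# Initiation: the structures for the new thread are created here;
# thereafter, initiator and worker execute in parallel.
t = threading.Thread(target=worker)
t.start()

# The initiator continues without waiting for the worker...
results.append("initiator continues")

t.join()  # ...until it explicitly chooses to wait for completion.
print(results)
```

Note that the two appends may occur in either order; only the explicit join guarantees that the worker has finished.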

Asynchronous processes are useful because, in many situations, two or more processes running in parallel can do needed work in less elapsed time than a single process. Savings in elapsed time, however, do not necessarily translate into savings in processor or I/O time.

The task attributes of an asynchronous process can be read or assigned by its initiator while the asynchronous process is executing. This makes it possible for the initiator to intervene in the execution of the asynchronous process.
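Task attributes are an MCP-specific mechanism, but the idea of inspecting and intervening in a running process from its initiator can be sketched in Python with the standard subprocess module (an analogy, not the task-attribute interface itself; the child command here is an arbitrary stand-in):

```python
import subprocess
import sys

# Start a child process; Popen returns immediately, so the
# initiator and the child execute in parallel.
child = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(0.2)"])

# While the child runs, the initiator can read its state
# (loosely analogous to reading a task attribute)...
was_running = child.poll() is None  # None means still executing

# ...and can intervene in its execution, here by terminating it
# (loosely analogous to assigning a task attribute).
child.terminate()
child.wait()
print(was_running, child.returncode)
```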

A disadvantage of initiating processes asynchronously is that, except in WFL, the programmer must take special measures to prevent a critical block exit error from occurring. (See the discussion of “Critical Blocks” in this section.)


In addition, initiating processes asynchronously can create ambiguous timing situations, because it is impossible to predict exactly how long a process will take to execute. If an asynchronous process and its initiator share a data item, such as a global variable, and both change its value, the order in which the changes occur cannot be reliably predicted.
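This ambiguity is not specific to MCP environments. The following Python sketch (again an analogy; the names are illustrative, not from WFL or ALGOL) shows an initiator and an asynchronous process both updating a shared data item: both updates always occur, but their relative order depends on scheduling.

```python
import threading

shared = []  # data item visible to both the initiator and the new process

def process_update():
    shared.append("process")

# Initiate the asynchronous process, then update the shared
# item from the initiator while the process is running.
t = threading.Thread(target=process_update)
t.start()
shared.append("initiator")
t.join()

# Both changes have been made, but their ORDER is unpredictable:
# shared may be ["initiator", "process"] or ["process", "initiator"].
print(shared)
```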

Various methods are used to regulate the timing of asynchronous processes. These methods are discussed in “Using Events and Interlocks.”