When processes access a single disk file through separate logical files, some additional synchronization problems arise beyond those that occur for shared logical files. These problems arise because read and write operations affect physical files indirectly, through a set of buffers associated with the logical file. The system moves information between the buffers and the physical file only when necessary, and the application typically plays no role in determining when such updates occur.
What is more, the BLOCKSIZE file attribute can cause multiple records to be read into or written from a buffer, so that changes by other processes to nearby records can be accidentally overwritten. (For more information about file blocking and file buffers, refer to the I/O Subsystem Programming Guide.)
These synchronization concerns arise only in cases where at least one of the processes writes to the shared physical file. If all the processes simply read from the file, then the order in which they execute their read operations makes no difference.
However, if at least one of the processes writes to the shared physical file, then you must use one of the following methods to ensure that I/O operations are properly synchronized:
- Use the BUFFERSHARING file attribute to cause multiple logical files to share the same set of buffers.
- Use the EXCLUSIVE file attribute to ensure that only one process has the physical file open at any given time.
Using the BUFFERSHARING File Attribute
You can use the BUFFERSHARING file attribute to allow separate logical files to share the same set of buffers. The effect is to make the results of each write statement immediately visible to all processes that use the same buffers, even if the physical file has not yet been updated. The following are the possible mnemonic values for this attribute:
- NONE. The logical file does not share buffers with any other logical file. This value is the default.
- SHARED. The logical file shares buffers with any other logical files that have a BUFFERSHARING value of SHARED or EXCLUSIVELYSHARED. Logical files with a BUFFERSHARING value of NONE can link to the same physical file, but use separate buffers.
- EXCLUSIVELYSHARED. The logical file shares buffers with any other logical files that link to the same physical file. Logical files with a BUFFERSHARING value of NONE are prevented from linking to the same physical file.
Note that several restrictions apply to the use of the BUFFERSHARING file attribute. For example, the physical file must reside on the local host and the BLOCKSTRUCTURE attribute must be FIXED. For a detailed list of the restrictions, refer to the File Attributes Programming Reference Manual.
If you use the SHARED or EXCLUSIVELYSHARED value for all the processes that share the same physical file, then the issue of when the physical file is updated becomes irrelevant. However, you must still handle the types of synchronization problems that were previously described under Synchronizing Access with Shared Logical Files in this section.
You can handle these synchronization problems in any of the following ways:
- Through the use of event variables, using the techniques described under Synchronizing Access with Shared Logical Files earlier in this section.
- By setting the APPEND file attribute to TRUE. The primary effect of this setting is to prevent random writes to a file and to cause all serial writes to begin at the current end-of-file position. However, when BUFFERSHARING has a value of SHARED or EXCLUSIVELYSHARED, an APPEND value of TRUE has the additional effect of causing the system to provide implicit locking for write statements. This implicit locking prevents write statements issued by different processes from conflicting and overwriting each other. Note that this locking is effective only if all the processes that write to the file use a BUFFERSHARING value of SHARED or EXCLUSIVELYSHARED and an APPEND value of TRUE.
- Through the use of a feature called record locking. This record locking feature makes it possible for a process to secure exclusive access to one or more file records while the process reads and updates those records. The record locking feature can be used only for files that have BUFFERSHARING = SHARED or EXCLUSIVELYSHARED or that use direct I/O.
Record locking is supported in COBOL85 through the LOCKRECORD and UNLOCKRECORD statements. Record locking is available in other languages through calls on the MCPSUPPORT library procedures RECORDLOCKER, DIRECTRECORDLOCKER, RECORDLOCKTEST, and DIRECTRECORDLOCKTEST.
Record locks are of two types: shared locks and exclusive locks.
Before performing a read operation, each process should typically secure a shared lock on the file records that are to be read. A shared lock allows other processes to establish shared locks that overlap the same region, but prevents other processes from establishing exclusive locks that overlap that region. After performing the read operation, the process should then remove the shared lock from the region.
Before performing a write operation, each process should typically secure an exclusive lock on the file records that are to be written. An exclusive lock prevents other processes from establishing any shared locks or exclusive locks that overlap the same region. After performing the write operation, the process should then remove the exclusive lock from the region.
Note that it is possible for processes to read or write a record without first locking the record, regardless of the value of the BUFFERSHARING attribute. Locking a record only prevents another process from locking the same record, not from performing I/O on that record. Record locking works well as long as all participating processes follow the convention of locking records before reading or writing them.
If a process locks a record and then terminates before unlocking the record, the system implicitly unlocks the record.
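The compatibility rules for shared and exclusive locks can be pictured with a small simulation. The sketch below is illustrative Python only; it models the convention described above and is not the MCP locking interface, and all names in it are invented for the example.

```python
# Toy model of shared/exclusive record-lock compatibility.
# Shared locks on a record may coexist; an exclusive lock conflicts
# with any other lock on the same record.

class RecordLockTable:
    def __init__(self):
        # record number -> (kind, set of owners)
        self._locks = {}

    def try_lock(self, record, owner, exclusive=False):
        """Return True if the lock is granted, False if it conflicts."""
        held = self._locks.get(record)
        if held is None:
            kind = "exclusive" if exclusive else "shared"
            self._locks[record] = (kind, {owner})
            return True
        kind, owners = held
        # A shared request is compatible only with existing shared locks.
        if not exclusive and kind == "shared":
            owners.add(owner)
            return True
        return False

    def unlock(self, record, owner):
        kind, owners = self._locks[record]
        owners.discard(owner)
        if not owners:
            del self._locks[record]

table = RecordLockTable()
assert table.try_lock(7, "reader1")                     # shared lock granted
assert table.try_lock(7, "reader2")                     # shared locks coexist
assert not table.try_lock(7, "writer", exclusive=True)  # exclusive lock blocked
table.unlock(7, "reader1")
table.unlock(7, "reader2")
assert table.try_lock(7, "writer", exclusive=True)      # now succeeds
```

A real facility also has to decide what a blocked requester does; as described later for RECORDLOCKER and LOCKRECORD, the MCP interfaces can either fail immediately or wait for a time limit.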
The following paragraphs briefly introduce the MCPSUPPORT procedures and COBOL85 statements that support record locking, and then explain how record locking applies to direct I/O files.
Record Locking with MCPSUPPORT Procedures
A program can lock records by calling the MCPSUPPORT procedure RECORDLOCKER. This procedure includes parameters that allow you to specify
- Whether a shared lock or exclusive lock is to be used.
- The starting position and length of the locked region.
- For regions that already have a conflicting lock, whether the RECORDLOCKER procedure should fail or wait (and if it waits, for how long).
A program can interrogate the availability of a range of records for locking by invoking the procedure RECORDLOCKTEST.
For direct files, the program must use the procedure DIRECTRECORDLOCKER in place of RECORDLOCKER, and DIRECTRECORDLOCKTEST in place of RECORDLOCKTEST.
For detailed descriptions of each of these procedures, refer to the MCP System Interfaces Programming Reference Manual.
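The region-oriented nature of these requests can be sketched as follows. This is an illustrative Python model of overlap detection and the fail-or-wait choice; the parameter names and result strings are invented and do not correspond to the actual RECORDLOCKER signature.

```python
# Hypothetical sketch of region locking: a request names a starting record,
# a length, a shared/exclusive mode, and how conflicts are handled.

def regions_overlap(start_a, len_a, start_b, len_b):
    return start_a < start_b + len_b and start_b < start_a + len_a

held = []  # list of (start, length, exclusive) regions currently locked

def request_lock(start, length, exclusive, wait=False):
    """Grant the lock, or report a conflict. A real facility could block
    for a configurable time before giving up instead of returning at once."""
    for s, l, e in held:
        # Two locks conflict if their regions overlap and either is exclusive.
        if regions_overlap(start, length, s, l) and (exclusive or e):
            return "conflict-wait" if wait else "conflict-fail"
    held.append((start, length, exclusive))
    return "granted"

print(request_lock(10, 5, exclusive=True))    # granted
print(request_lock(12, 2, exclusive=False))   # conflict-fail (overlaps 10..14)
print(request_lock(20, 5, exclusive=False))   # granted
```

A test-style interrogation such as RECORDLOCKTEST corresponds to running the overlap check without taking the lock.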
Record Locking with COBOL85 Statements
A COBOL85 program can lock a record with the LOCKRECORD <file name> statement. This statement creates exclusive locks only (never shared locks). The record to be locked is not specified by the LOCKRECORD statement, but rather by the ACTUAL KEY phrase for the file. The statement can optionally include an ON EXCEPTION clause and a NOT ON EXCEPTION clause (specifying actions to be taken if the lock fails or succeeds, respectively). For a file named THETA-DATA and subroutines named BAD-LOCK and GOOD-LOCK, you could use a statement of the following form:
LOCKRECORD THETA-DATA
    ON EXCEPTION PERFORM BAD-LOCK
    NOT ON EXCEPTION PERFORM GOOD-LOCK.
If a conflicting lock already exists for the requested region, then the LOCKRECORD statement waits for the length of time specified by the FILELOCKTLIMIT (File Lock Time Limit) system command. If the conflicting lock is removed before the time limit expires, the LOCKRECORD statement succeeds, and the NOT ON EXCEPTION clause is invoked (if there is one). If the time limit expires, the LOCKRECORD statement fails, and the ON EXCEPTION clause is invoked (if there is one).
You can unlock a record using the UNLOCKRECORD statement, which has syntax similar to that of the LOCKRECORD statement. The UNLOCKRECORD statement can also include ON EXCEPTION or NOT ON EXCEPTION clauses.
If the compiler control option MUSTLOCK is TRUE and the FD statement for a file assigns a BUFFERSHARING value of SHARED or EXCLUSIVELYSHARED, then the COBOL85 compiler generates extra code for any WRITE statements that write to that file. This extra code checks to see whether the program has previously locked the record being written to. If not, then the program incurs an error. If the program has specified a file status data item, a USE procedure, or an INVALID KEY clause, then the error is nonfatal and control returns to the program. Otherwise, the error is fatal and the program is discontinued.
Note that this special feature of the WRITE statement is enabled only if the FD statement includes the BUFFERSHARING assignment. If the BUFFERSHARING value is altered by a CHANGE ATTRIBUTE statement in the program, the behavior of the WRITE statement is not affected by the change.
For detailed explanations of these COBOL85 features, refer to the COBOL ANSI-85 Programming Reference Manual, Volume 1: Basic Implementation.
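The check that MUSTLOCK causes the compiler to generate can be modeled as follows. This is an illustrative Python sketch of the decision logic only; the function and its parameters are invented for the example and do not represent compiler-generated code.

```python
# Toy model of the MUSTLOCK-generated check on a WRITE statement:
# the write proceeds only if the target record was previously locked;
# otherwise the error is nonfatal or fatal depending on whether the
# program supplied an error handler (file status item, USE procedure,
# or INVALID KEY clause).

def checked_write(record, locked_records, has_error_handler):
    if record not in locked_records:
        if has_error_handler:
            return "nonfatal error; control returns to the program"
        raise RuntimeError("fatal error; program discontinued")
    return "write performed"

print(checked_write("r1", {"r1"}, has_error_handler=False))  # write performed
print(checked_write("r2", set(), has_error_handler=True))    # nonfatal error
```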
Record Locking for Direct I/O Files
Direct I/O is a technique that allows a program to create and control a private buffer for file I/O. The program creates a buffer by declaring a direct array. The application program is responsible for initiating and monitoring the transfer of information between the physical file and the direct array.
The BUFFERSHARING file attribute must have a value of NONE for direct I/O files. If multiple programs access the same physical disk file through direct I/O, each direct array exists independently. Changes made to the contents of one direct array are not propagated to other direct arrays.
Further, changes made through direct I/O have no effect on the file buffers of processes that access the file through normal I/O. This problem exists even if the processes using normal I/O have set BUFFERSHARING = SHARED.
Note: If multiple processes update the same disk file through different logical files, then either all the processes should use direct I/O or all the processes should use normal I/O. It is not possible to reliably coordinate the updating of buffers if a file is accessed by both direct I/O and normal I/O.
The system does support the use of record locking with direct I/O files, by way of the same MCPSUPPORT procedures mentioned previously. You can use this record locking feature effectively for cases where multiple processes all use direct I/O to update the same physical disk file. However, you must take measures to ensure that the contents of any direct arrays used to access a file are kept properly updated.
For example, suppose a certain record in a file stores the current balance in a customer's account. Suppose also that you want a program to use direct I/O to increase the account balance by $10. In order to ensure accurate results, you should design the program to follow these steps:
1. Secure an exclusive lock on the record.
2. Read the latest contents of the record into the direct array. This is necessary because another process might have updated the record since it was last read by this process.
3. Wait for the read operation to complete.
4. Update the data in the direct array to reflect the $10 increase.
5. Write the updated data from the direct array to the physical file. This write is necessary to make the results of the action visible to other processes that do not share this direct array.
6. Wait for the write operation to complete.
7. Remove the exclusive lock from the record.
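The steps above can be sketched in Python. Here a dictionary stands in for the physical file, a local variable for the direct array, and a threading.Lock for the exclusive record lock; the read and write waits collapse into ordinary assignments. All names are illustrative and none of this is the direct I/O interface itself.

```python
import threading

# Toy model of the locked read-modify-write sequence for a direct I/O file.
physical_file = {"balance": 100}
record_lock = threading.Lock()

def credit_account(amount):
    with record_lock:                               # step 1: secure the exclusive lock
        direct_array = physical_file["balance"]     # steps 2-3: read latest contents, wait
        direct_array += amount                      # step 4: update the buffered data
        physical_file["balance"] = direct_array     # steps 5-6: write back, wait
    # step 7: the lock is released on leaving the with-block

credit_account(10)
print(physical_file["balance"])  # 110
```

Skipping step 2 (the fresh read) is the classic lost-update bug: the process would add $10 to a balance that another process may already have changed.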
Using the EXCLUSIVE File Attribute
If a file is used by multiple processes, but the processes access the file only occasionally, then you may find it convenient to simply ensure that only one of those processes has the file open at any given time. When each process closes the file, the system updates the physical file with any outstanding changes that are stored in the file buffers. These changes are therefore visible to the next process that opens the file.
The simplest method of ensuring exclusive access to a physical file is to create the file with the default PROTECTION value, which is TEMPORARY. The file is not entered in the directory and therefore is not visible to other processes. Later, the process can close the file with LOCK, thus entering the file in the directory and making it available to other processes. If another process attempts to access the file before it is locked, the process is suspended with a “NO FILE” condition. When the file is locked, the process resumes.
Another method of securing exclusive access to a file is by setting the EXCLUSIVE file attribute to TRUE before opening the file. The EXCLUSIVE file attribute specifies that no other process can have the physical file open at the same time as this process.
If a process sets EXCLUSIVE to TRUE and then opens a file, any other process that attempts to open that physical file is suspended until the first process closes it. Conversely, if a process sets EXCLUSIVE to TRUE and attempts to open a physical file that is already in use by another process, the process is suspended until the other process closes the file. In either case, the RSVP message displayed is “WAITING ON: <file title>.”
It is possible for multiple processes to be waiting to open the same physical file with EXCLUSIVE = TRUE. When the file becomes available, one of the waiting processes opens the file and the other processes continue to wait. It is not possible to predict which of the waiting processes will succeed in opening the file first.
If it is not desirable for the program to be suspended until the file becomes available, the process can attempt a conditional open operation instead. This can be achieved by using an open statement with the AVAILABLE option set or by interrogating the AVAILABLE file attribute. If another process is currently using the file with EXCLUSIVE = TRUE, the conditional open operation fails and returns a result reporting the reason for the failure. (The results are documented in the AVAILABLE file attribute description in the File Attributes Programming Reference Manual.) The process then continues executing normally.
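This conditional open is an instance of the familiar try-acquire pattern. The following Python sketch shows the idea in generic form; it is not the MCP AVAILABLE interface, and the names and result strings are invented.

```python
import threading

# Stands in for exclusive use of the physical file by some process.
file_in_use = threading.Lock()

def conditional_open():
    """Attempt the open without blocking; on failure, report why and let
    the caller continue executing normally."""
    if file_in_use.acquire(blocking=False):
        return "opened"
    return "unavailable: file is opened EXCLUSIVE by another process"

print(conditional_open())  # opened
print(conditional_open())  # unavailable: file is opened EXCLUSIVE ...
```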
Exclusive files are best suited to situations where a single body of information is to be transmitted from one process to another. An extended dialogue between processes cannot be implemented efficiently by this method, because it requires repeated file open and close operations, each of which consumes many times the resources required to access an event or perform a simple read or write operation.
Avoiding Nonpreferred Methods
If you do not use the BUFFERSHARING or EXCLUSIVE file attributes, then any number of logical files with separate buffers can be linked to the same physical disk file at the same time. However, this type of disk file sharing involves synchronization complexities that are extremely difficult to resolve. When different buffers are used, it is not possible to predict the order in which read and write operations submitted by different processes will be executed.
Some programmers have mistakenly believed that each SEEK statement to record number -1 causes the physical file to be updated with the contents of the file buffers. Although this statement sometimes has the desired effect, it is not and never has been a reliable method of flushing the buffers.
A newer feature is the SYNCHRONIZE file attribute. When SYNCHRONIZE is set to OUT, the system updates the physical file before completing any given WRITE statement. This technique can help reduce the data loss caused by a program failure or system halt/load. However, you should be aware that SYNCHRONIZE does not provide any buffer updating for READ statements.
For example, suppose that two separate processes access a physical file through separate buffers with SYNCHRONIZE set to OUT. The first process issues a WRITE statement, during which the physical file is updated. The first process then uses an event variable to signal the second process that it can read the file. The second process issues a READ statement, but the READ statement may reflect outdated contents of the file buffers rather than the current contents of the physical file.
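The hazard can be modeled concretely. In the Python sketch below, each process holds a private buffer; the writer's flush corresponds to SYNCHRONIZE = OUT, but nothing invalidates the reader's buffer. All names are illustrative.

```python
# Toy model of the stale-read hazard with SYNCHRONIZE = OUT.
physical_file = {"rec": "old"}

writer_buffer = dict(physical_file)
reader_buffer = dict(physical_file)   # reader filled its buffer earlier

# Writer updates and, as with SYNCHRONIZE = OUT, flushes immediately:
writer_buffer["rec"] = "new"
physical_file["rec"] = writer_buffer["rec"]

# Writer signals the reader; the reader reads -- but the read is satisfied
# from its own buffer, which was never refreshed from the physical file:
print(reader_buffer["rec"])   # "old", even though the physical file holds "new"
```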
Because of these and other problems, the concurrent use of physical disk files through separate buffers is not recommended.
Synchronizing Access on Shared Disk Families
You can use the SHARE (Shared Family) system command to enable multiple hosts to share the same disk family. (A host is either an A Series system or the MCP environment of a ClearPath system.) One of the hosts is designated as the master host, and only programs on the master host can create new files, expand existing files, or modify file attributes of files on the shared family. For a full description of the SHARE command and its effects, refer to the System Commands Reference.
The shared families feature can be a convenient way of making code files available for execution by multiple hosts or for making files available for reading by programs on multiple hosts.
The shared families feature also allows existing rows of a file to be rewritten by programs on any of the sharing hosts. However, such updating by programs on multiple hosts can result in timing issues that go beyond those previously discussed in this section. For instance:
- The methods of synchronization discussed under Using Shared Logical Files apply only to processes that are running on the same host.
- Most of the synchronization methods discussed under Synchronizing Access with Separate Logical Files depend on the use of the BUFFERSHARING file attribute or the EXCLUSIVE file attribute. However, programs are not permitted to set either of these attributes for files on shared families.
- It is possible to use direct I/O files on shared families, together with record locking through the MCPSUPPORT procedure RECORDLOCKER. However, the record locking affects only processes on a single host. It does nothing to prevent concurrent access by processes running on different hosts.
Given these limitations, it is generally best to avoid application architectures in which processes running on separate hosts are responsible for updating the same file on a shared family during the same period.

