
During "normal" functioning we assume the filesystem ensures that only one
node writes to any given block at a time, so a write request will:

- set the appropriate bit (if not already set)
- commit the write to all mirrors
- schedule the bit to be cleared after a timeout.
Reads are just handled normally. It is up to the filesystem to
ensure one node doesn't read from a location where another node (or the same
node) is writing.
There are two locks for managing the device:
The bm_lockres protects individual node bitmaps. They are named in the
form bitmap000 for node 1, bitmap001 for node 2, and so on. When a node
joins the cluster, it acquires the lock in PW mode and it stays so
during the lifetime the node is part of the cluster. The lock resource
number is based on the slot number returned by the DLM subsystem. Since
DLM slot numbers start at one, one is
subtracted from the DLM slot number to arrive at the bitmap slot number.
3.1.1 METADATA_UPDATED: informs other nodes that the metadata has been
updated, and the node must re-read the md superblock. This is performed
synchronously.

3.1.2 RESYNCING: informs other nodes that a resync is initiated or ended,
so that each node may suspend or resume the region.
The DLM LVB is used to communicate between the nodes of the cluster. There
are three resources used for the purpose:
3.2.1 Token: The resource which protects the entire communication
system. The node holding the token resource is allowed to
communicate.

3.2.2 Message: The lock resource which carries the data to
communicate.

3.2.3 Ack: The resource whose acquisition means that the message has been
acknowledged by all nodes in the cluster. The BAST of the resource
is used to inform the receiving node that a node wants to communicate.
or other events that happened while waiting for the TOKEN may have made
[ wait until all receivers have *processed* the MESSAGE ]
receiver processes the message
When a node fails, the DLM informs the cluster with the slot number. The node
then performs the following:

- acquires the bitmap<number> lock of the failed node
- opens the bitmap
- reads the bitmap of the failed node
- copies the set bitmap to the local node
- cleans the bitmap of the failed node
- releases the bitmap<number> lock of the failed node
- initiates resync of the bitmap on the current node
The resync process is the regular md resync. However, in a clustered
environment it needs to inform the other nodes
of the areas which are suspended. Before a resync starts, the node
sends out RESYNC_START with the (lo,hi) range of the area which needs
to be suspended. Each node maintains a suspend_list, which contains
the list of ranges which are currently suspended. On receiving
RESYNC_START, the node adds the range to the suspend_list. Similarly,
when the node performing resync finishes, it sends RESYNC_FINISHED
to other nodes and other nodes remove the corresponding entry from
the suspend_list.
Device failures are handled and communicated with the metadata update
routine.
For adding a new device, it is necessary that all nodes "see" the new device
to be added. For this, the following algorithm is used:
4. In userspace, the node searches for the disk, perhaps
5. Other nodes issue either of the following depending on whether the disk
   was found
8. If node 1 gets the lock, it sends METADATA_UPDATED after unmarking the disk
9. If it does not get the no-new-dev lock, it fails the operation and sends
   METADATA_UPDATED.
10. Other nodes learn whether the disk was added or not
    from the following METADATA_UPDATED.