Lines Matching refs:and
1 Path walking and name lookup locking
9 path string. Then repeating the lookup from the child dentry and finding its
10 child with the next element, and so on.
12 Since it is a frequent operation for workloads like multiuser environments and
16 Prior to 2.5.10, dcache_lock was acquired in d_lookup (dcache hash lookup) and
18 algorithm changed this by holding the dcache_lock at the beginning and walking
21 the lock hold time significantly and affects performance in large SMP machines.
25 All the above algorithms required taking a lock and reference count on the
27 next path element. This is inefficient and unscalable. It is inefficient
28 because of the locks and atomic operations required for every dentry element
32 common path elements causes lock and cacheline queueing.
42 A name string specifies a start (root directory, cwd, fd-relative) and a
52 parent with the given name and if it is not the desired entry, make it the
55 A parent, of course, must be a directory, and we must have appropriate
58 Turning the child into a parent for the next lookup requires more checks and
60 name in the name string, and require some recursive path walking. Mount points
68 - perform permissions and validity checks on inodes;
72 - lookup and create missing parts of the path on demand.
80 and use that to select a bucket in the dcache-hash table. The list of entries
81 in that bucket is then walked, and we do a full comparison of each entry
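The bucket walk those lines describe can be sketched in userspace C. This is an illustrative model, not the kernel's code: all names here (struct sdentry, sd_hash, sd_lookup) are invented for the sketch, and the hash is a toy; the point is hashing the (parent, name) pair to pick a bucket, then doing a full parent-pointer and name comparison on each entry in the chain.

```c
/*
 * Sketch of the dcache-style hash lookup: hash (parent, name), pick a
 * bucket, walk the bucket's chain, and only accept a full match of
 * both parent pointer and name string. Illustrative names only.
 */
#include <assert.h>
#include <string.h>

#define NBUCKETS 64

struct sdentry {
    struct sdentry *parent;
    const char *name;
    struct sdentry *next;   /* bucket chain */
};

static struct sdentry *buckets[NBUCKETS];

static unsigned int sd_hash(const struct sdentry *parent, const char *name)
{
    unsigned int h = (unsigned int)(unsigned long)parent;
    while (*name)
        h = h * 31 + (unsigned char)*name++;
    return h % NBUCKETS;
}

static void sd_insert(struct sdentry *d)
{
    unsigned int b = sd_hash(d->parent, d->name);
    d->next = buckets[b];
    buckets[b] = d;
}

/* Walk the bucket; only a full parent+name comparison counts. */
static struct sdentry *sd_lookup(struct sdentry *parent, const char *name)
{
    struct sdentry *d = buckets[sd_hash(parent, name)];
    for (; d; d = d->next)
        if (d->parent == parent && strcmp(d->name, name) == 0)
            return d;
    return NULL;
}
```

Hashing on the parent as well as the name is what lets two directories each contain an entry called "src" without colliding on a name-only comparison.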
88 Parent and name members of a dentry, as well as its membership in the dcache
89 hash, and its inode are protected by the per-dentry d_lock spinlock. A
91 and this stabilises its d_inode pointer and actual inode. This gives a stable
95 read-only protection and no durability of results, so care must be taken when
101 will happen to an object is insertion, and then eventually removal from the
107 moved to a new hash list. Allocating and inserting a new alias would be
108 expensive and also problematic for directory dentries. Latency would be far too
109 high to wait for a grace period after removing the dentry and before inserting
115 lookup of the old list veering off into the new (incorrect) list and missing
120 dentry. So a seqlock is used to detect when a rename has occurred, and so the
129 Rename of dentry 2 may require it deleted from the above list, and inserted
170 Between deleting the dentry from the old hash list, and inserting it on the new
178 the dentry, stabilising it while comparing its name and parent and then
185 looks like (its name, parent, and inode). That snapshot is then used to start
187 care must be taken to load the members up-front, and use those pointers rather
192 no non-atomic stores to shared data), and to recheck the seqcount when we are
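The seqlock/seqcount pattern those lines describe can be sketched as follows. This is a simplified, single-threaded userspace model with invented names; the kernel's real readers use read_seqcount_begin()/read_seqcount_retry() with memory barriers, and spin while the counter is odd. The shape is the same: snapshot the (name, parent) tuple up-front, then recheck the counter and restart if a rename raced with the read.

```c
/*
 * Sketch of the per-dentry seqcount: the writer bumps the counter
 * around a rename (odd = write in progress); a lockless reader
 * snapshots name and parent, then rechecks the counter and retries
 * if it changed. Illustrative names only, barriers omitted.
 */
#include <assert.h>
#include <string.h>

struct sdentry {
    unsigned int seq;           /* even = stable, odd = write in progress */
    const char *name;
    struct sdentry *parent;
};

static unsigned int read_begin(const struct sdentry *d)
{
    return d->seq;              /* real code spins while odd, with barriers */
}

static int read_retry(const struct sdentry *d, unsigned int start)
{
    return d->seq != start;     /* counter moved: snapshot is unusable */
}

static void rename_dentry(struct sdentry *d, struct sdentry *newp,
                          const char *newname)
{
    d->seq++;                   /* begin write: counter goes odd */
    d->parent = newp;
    d->name = newname;
    d->seq++;                   /* end write: counter even again */
}

/* Lockless read: loop until a consistent (name, parent) snapshot. */
static struct sdentry *snapshot_parent(struct sdentry *d, const char **name)
{
    struct sdentry *p;
    unsigned int s;

    do {
        s = read_begin(d);
        *name = d->name;        /* load members up-front, use the loaded */
        p = d->parent;          /* pointers, not fresh re-loads */
    } while (read_retry(d, s));
    return p;
}
```

Loading the members into locals before the recheck matters: if the reader re-loaded d->name after validating the counter, it could mix fields from before and after a rename.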
212 Path walking code now has two distinct modes, ref-walk and rcu-walk. ref-walk
214 serialise concurrent modifications to the dentry and take a reference count on
215 it. ref-walk is simple and obvious, and may sleep, take locks, etc while path
217 lookups, and can perform lookup of intermediate elements without any stores to
247 | name: "/" | inode's permission, and then look up the next
254 | name: "home" | hash lookup, then note d_seq and compare name
255 | inode: 678 | string and parent pointer. When we have a match,
257 +---------------------+ check inode and look up the next element.
262 | inode: 543 | parent for d_seq verification, and grandparents
274 re-checking its d_seq, and then incrementing its refcount is called
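That legitimising step can be sketched as a small helper. This is an assumption-laden model, not the kernel's implementation (which uses lockrefs and atomics): take the reference first, then re-check the sequence count seen during rcu-walk; on mismatch, undo the reference and signal the caller to restart the lookup in ref-walk mode.

```c
/*
 * Sketch of legitimising an rcu-walk dentry: grab a reference, then
 * re-check the d_seq value observed earlier; if it moved, drop the
 * reference and report failure so the caller restarts. Illustrative
 * names only; the kernel uses atomic ops here.
 */
#include <assert.h>

struct sdentry {
    unsigned int seq;           /* per-dentry sequence count */
    int refcount;
};

/* Returns 1 on success; 0 means the caller must restart the lookup. */
static int legitimise(struct sdentry *d, unsigned int seq_seen)
{
    d->refcount++;              /* kernel: atomic/lockref increment */
    if (d->seq != seq_seen) {   /* raced with rename/unlink: undo */
        d->refcount--;
        return 0;
    }
    return 1;
}
```

Taking the reference before the recheck is the safe order: once the count is held the dentry cannot be freed, so a stale seq can be detected and the reference dropped without touching freed memory.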
287 these cases is fundamental for performance and scalability because blocking
288 operations such as creates and unlinks are not uncommon.
296 access d_ops and i_ops during rcu-walk.
299 lookups, and to assume dentry mount points and mount roots are stable up and
301 * Have a per-dentry seqlock to protect the dentry name, parent, and inode,
302 so we can load this tuple atomically, and also check whether any of its
307 * inode is also RCU protected so we can load d_inode and use the inode for
324 very least because i_mutex needs to be grabbed, and objects allocated.
328 and refcounts (both of which can be made per-cpu), and we also store to the
329 stack (which is essentially CPU-local), and we also have to take locks and
333 or stored into. The result is massive improvements in performance and
342 drop rcu that fail due to d_seq failure and requiring the entire path lookup
346 and link for symlink traversal requiring drop.
375 Papers and other documentation on dcache locking