What:		/sys/fs/lustre/version
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Shows the currently running Lustre version.

What:		/sys/fs/lustre/pinger
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Shows whether the lustre module has pinger support.
		"on" means yes and "off" means no.

What:		/sys/fs/lustre/health
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Shows whether the current system state is believed to be
		"healthy", "NOT HEALTHY", or "LBUG" if Lustre has experienced
		an internal assertion failure.

What:		/sys/fs/lustre/jobid_name
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Name of the currently running job on this node, transferred
		to the Lustre servers for QoS and statistics-gathering
		purposes. Writing into this file changes the name; reading
		outputs the currently set value.

What:		/sys/fs/lustre/jobid_var
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Control file for the Lustre "jobstats" functionality; write a
		value from the list below to change the mode:
		disable - disable job name reporting to the servers (default)
		procname_uid - form the job name from the currently running
			command name and pid with a dot in between,
			e.g. dd.1253
		nodelocal - use the jobid_name value from above.

What:		/sys/fs/lustre/timeout
Date:		June 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Controls the "lustre timeout" variable, also known as
		obd_timeout in older manuals. In the past obd_timeout was of
		paramount importance as the timeout value used everywhere,
		from which other timeouts were derived. These days it is much
		less important as network timeouts are mostly determined by
		AT (adaptive timeouts).
		Unit: seconds, default: 100

What:		/sys/fs/lustre/max_dirty_mb
Date:		June 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Controls the total amount of dirty cache (in megabytes)
		allowed across all mounted Lustre filesystems.
		Since writeout of dirty pages in Lustre is somewhat expensive,
		allowing too many dirty pages might lead to performance
		degradation as the kernel desperately tries to find pages to
		free or write out.
		Default: 1/2 RAM. Min value 4, max value 9/10 of RAM.

What:		/sys/fs/lustre/debug_peer_on_timeout
Date:		June 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Controls if LNet debug information should be printed when
		an RPC timeout occurs.
		0 disabled (default)
		1 enabled

What:		/sys/fs/lustre/dump_on_timeout
Date:		June 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Controls if the Lustre debug log should be dumped when an RPC
		timeout occurs. This is useful if your debug buffer typically
		rolls over by the time you notice RPC timeouts.

What:		/sys/fs/lustre/dump_on_eviction
Date:		June 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Controls if the Lustre debug log should be dumped when this
		client is evicted from one of the servers.
		This is useful if your debug buffer typically rolls over
		by the time you notice the eviction event.
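
The debug_peer_on_timeout, dump_on_timeout and dump_on_eviction attributes
above are plain 0/1 toggles. A minimal Python sketch of enabling all three
for a troubleshooting session and reading back the overall health state
(assumes the lustre module is loaded and the script runs as root):

    # Sketch: turn on extra Lustre debug output while chasing RPC timeouts.
    from pathlib import Path

    LUSTRE = Path("/sys/fs/lustre")

    # Each of these attributes accepts "0" or "1".
    for attr in ("debug_peer_on_timeout", "dump_on_timeout", "dump_on_eviction"):
        (LUSTRE / attr).write_text("1\n")

    print("health:", (LUSTRE / "health").read_text().strip())
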
What:		/sys/fs/lustre/at_min
Date:		July 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Controls the minimum adaptive timeout in seconds. If you
		encounter a case where clients time out due to the
		server-reported processing time being too short, you might
		consider increasing this value. One common case of this is
		when the underlying network has unpredictable long delays.
		Default: 0

What:		/sys/fs/lustre/at_max
Date:		July 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Controls the maximum adaptive timeout in seconds. If the
		at_max timeout is reached for an RPC, the RPC will time out.
		Some genuinely slow network hardware might warrant increasing
		this value.
		Setting this value to 0 disables the Adaptive Timeouts
		functionality and the old-style obd_timeout value is then
		used.
		Default: 600

What:		/sys/fs/lustre/at_extra
Date:		July 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Controls how much extra time, in seconds, to request for
		requests that are still being processed. Normally a
		server-side parameter, it is also used on the client for
		responses to various LDLM ASTs that are handled by a special
		server thread on the client.
		This is a way for the servers to ask the clients not to time
		out a request that has reached the current servicing time
		estimate and to give it some more time.
		Default: 30

What:		/sys/fs/lustre/at_early_margin
Date:		July 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Controls when to send the early reply for requests that are
		about to time out, as an offset to the estimated service time
		in seconds.
		Default: 5

What:		/sys/fs/lustre/at_history
Date:		July 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Controls for how many seconds to remember the slowest events
		encountered by the adaptive timeouts code.
		Default: 600

What:		/sys/fs/lustre/llite/<fsname>-<uuid>/blocksize
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Biggest blocksize on the object storage servers for this
		filesystem.

What:		/sys/fs/lustre/llite/<fsname>-<uuid>/kbytestotal
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Shows the total number of kilobytes of space on this
		filesystem.

What:		/sys/fs/lustre/llite/<fsname>-<uuid>/kbytesfree
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Shows the total number of free kilobytes of space on this
		filesystem.

What:		/sys/fs/lustre/llite/<fsname>-<uuid>/kbytesavail
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Shows the total number of free kilobytes of space on this
		filesystem that is actually available for use (taking into
		account per-client grants and filesystem reservations).

What:		/sys/fs/lustre/llite/<fsname>-<uuid>/filestotal
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Shows the total number of inodes on the filesystem.

What:		/sys/fs/lustre/llite/<fsname>-<uuid>/filesfree
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Shows the estimated number of free inodes on the filesystem.

What:		/sys/fs/lustre/llite/<fsname>-<uuid>/client_type
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Shows whether this filesystem considers this client to be
		compute cluster-local or remote. Remote clients have
		additional uid/gid converting logic applied.
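
The per-filesystem kbytestotal/kbytesfree/kbytesavail and
filestotal/filesfree counters above lend themselves to a df-like report.
A minimal Python sketch, assuming a single mounted Lustre filesystem whose
llite directory name (the <fsname>-<uuid> instance) is discovered with a
glob:

    # Sketch: df-style summary built from the per-filesystem llite counters.
    # The glob below assumes exactly one Lustre mount; adjust as needed.
    from pathlib import Path

    llite = next(Path("/sys/fs/lustre/llite").glob("*"))

    def read_int(name):
        return int((llite / name).read_text())

    total = read_int("kbytestotal")
    avail = read_int("kbytesavail")
    files_total = read_int("filestotal")
    files_free = read_int("filesfree")

    print(f"{llite.name}: {total} kB total, {avail} kB available "
          f"({100.0 * avail / total:.1f}% usable), "
          f"{files_total - files_free} of {files_total} inodes used")
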
What:		/sys/fs/lustre/llite/<fsname>-<uuid>/fstype
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Shows the filesystem type of the filesystem.

What:		/sys/fs/lustre/llite/<fsname>-<uuid>/uuid
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Shows this filesystem's superblock uuid.

What:		/sys/fs/lustre/llite/<fsname>-<uuid>/max_read_ahead_mb
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Sets the maximum number of megabytes of system memory to be
		given to the read-ahead cache.

What:		/sys/fs/lustre/llite/<fsname>-<uuid>/max_read_ahead_per_file_mb
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Sets the maximum number of megabytes to read ahead for a
		single file.

What:		/sys/fs/lustre/llite/<fsname>-<uuid>/max_read_ahead_whole_mb
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		For small reads, how many megabytes to actually request from
		the server as initial read-ahead.

What:		/sys/fs/lustre/llite/<fsname>-<uuid>/checksum_pages
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Enables or disables per-page checksums at the llite layer,
		before the pages are handed to the lower layers for network
		transfer.

What:		/sys/fs/lustre/llite/<fsname>-<uuid>/stats_track_pid
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Limits Lustre vfs operations gathering to just a single pid.
		0 to track everything.

What:		/sys/fs/lustre/llite/<fsname>-<uuid>/stats_track_ppid
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Limits Lustre vfs operations gathering to just a single ppid.
		0 to track everything.

What:		/sys/fs/lustre/llite/<fsname>-<uuid>/stats_track_gid
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Limits Lustre vfs operations gathering to just a single gid.
		0 to track everything.

What:		/sys/fs/lustre/llite/<fsname>-<uuid>/statahead_max
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Controls the maximum number of statahead requests to send when
		a sequential readdir+stat pattern is detected.

What:		/sys/fs/lustre/llite/<fsname>-<uuid>/statahead_agl
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Controls if AGL (async glimpse ahead - obtaining object
		information from the OSTs in parallel with the MDS during
		statahead) should be enabled or disabled.
		0 to disable, 1 to enable.

What:		/sys/fs/lustre/llite/<fsname>-<uuid>/lazystatfs
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Controls statfs(2) behaviour in the face of down servers.
		If 0, always wait for all servers to come online;
		if 1, ignore inactive servers.

What:		/sys/fs/lustre/llite/<fsname>-<uuid>/max_easize
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Shows the maximum number of bytes that file striping data
		could occupy in the current storage configuration.

What:		/sys/fs/lustre/llite/<fsname>-<uuid>/default_easize
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Shows the maximum file striping data size observed by this
		filesystem client instance.
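
The read-ahead limits above (max_read_ahead_mb, max_read_ahead_per_file_mb,
max_read_ahead_whole_mb) are plain integer tunables. A minimal Python
sketch of inspecting and raising two of them on every mounted Lustre
filesystem; the 256/64 values are illustrative only, not recommendations,
and the script assumes root:

    # Sketch: bump read-ahead limits on every mounted Lustre filesystem.
    from pathlib import Path

    for llite in Path("/sys/fs/lustre/llite").glob("*"):
        for attr, mb in (("max_read_ahead_mb", 256),
                         ("max_read_ahead_per_file_mb", 64)):
            path = llite / attr
            old = path.read_text().strip()
            path.write_text(f"{mb}\n")
            print(f"{llite.name}/{attr}: {old} -> {mb}")
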
What:		/sys/fs/lustre/llite/<fsname>-<uuid>/xattr_cache
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Controls the client-side extended attributes cache.
		1 to enable, 0 to disable.

What:		/sys/fs/lustre/ldlm/cancel_unused_locks_before_replay
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Controls if the client should replay unused locks during
		recovery. If a client tends to have a lot of unused locks in
		the LRU, recovery times might become prolonged.
		1 - just locally cancel unused locks (default)
		0 - replay unused locks.

What:		/sys/fs/lustre/ldlm/namespaces/<name>/resource_count
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Displays the number of lock resources (objects on which
		individual locks are taken) currently allocated in this
		namespace.

What:		/sys/fs/lustre/ldlm/namespaces/<name>/lock_count
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Displays the number of locks allocated in this namespace.

What:		/sys/fs/lustre/ldlm/namespaces/<name>/lru_size
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Controls and displays the LRU size limit for unused locks for
		this namespace.
		0 - LRU size is unlimited, controlled by server resources
		positive number - number of locks to allow in the lock LRU
		list

What:		/sys/fs/lustre/ldlm/namespaces/<name>/lock_unused_count
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Displays the number of locks currently sitting in the LRU list
		of this namespace.

What:		/sys/fs/lustre/ldlm/namespaces/<name>/lru_max_age
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Maximum number of milliseconds a lock can sit in the LRU list
		before the client voluntarily cancels it as unused.

What:		/sys/fs/lustre/ldlm/namespaces/<name>/early_lock_cancel
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Controls the "early lock cancellation" feature on this
		namespace if supported by the server.
		When enabled, the client tries to preemptively cancel locks
		that would be cancelled by various operations and bundles the
		cancellation requests in the same RPC as the main operation,
		which results in significant speedups due to reduced
		lock-pingpong RPCs.
		0 - disabled
		1 - enabled (default)
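
The per-namespace lock_count and lock_unused_count attributes above make it
easy to see which namespaces hold the most unused locks in their LRU lists
(relevant when tuning lru_size or cancel_unused_locks_before_replay). A
minimal read-only Python sketch; namespace names vary per mount and target:

    # Sketch: list the LDLM namespaces with the most unused locks in the LRU.
    from pathlib import Path

    rows = []
    for ns in Path("/sys/fs/lustre/ldlm/namespaces").glob("*"):
        locks = int((ns / "lock_count").read_text())
        unused = int((ns / "lock_unused_count").read_text())
        rows.append((unused, locks, ns.name))

    for unused, locks, name in sorted(rows, reverse=True)[:10]:
        print(f"{name}: {unused} unused of {locks} locks")
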
What:		/sys/fs/lustre/ldlm/namespaces/<name>/pool/granted
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Displays the number of granted locks in this namespace.

What:		/sys/fs/lustre/ldlm/namespaces/<name>/pool/grant_rate
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Number of locks granted in this namespace during the last
		time interval.

What:		/sys/fs/lustre/ldlm/namespaces/<name>/pool/cancel_rate
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Number of lock cancellations in this namespace during the
		last time interval.

What:		/sys/fs/lustre/ldlm/namespaces/<name>/pool/grant_speed
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Calculated speed of lock granting (grant_rate - cancel_rate)
		in this namespace.

What:		/sys/fs/lustre/ldlm/namespaces/<name>/pool/grant_plan
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Estimated number of locks to be granted in the next time
		interval in this namespace.

What:		/sys/fs/lustre/ldlm/namespaces/<name>/pool/limit
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Controls the number of allowed locks in this pool.
		When lru_size is 0, this is the effective limit.

What:		/sys/fs/lustre/ldlm/namespaces/<name>/pool/lock_volume_factor
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Multiplier for all lock volume calculations above.
		Default is 1. Increase it to make the client clean its lock
		LRU list for this namespace more aggressively.

What:		/sys/fs/lustre/ldlm/namespaces/<name>/pool/server_lock_volume
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Calculated server lock volume.

What:		/sys/fs/lustre/ldlm/namespaces/<name>/pool/recalc_period
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Controls the length of time between recalculations of the
		above values (in seconds).

What:		/sys/fs/lustre/ldlm/services/ldlm_cbd/threads_min
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Controls the minimum number of ldlm callback threads to start.

What:		/sys/fs/lustre/ldlm/services/ldlm_cbd/threads_max
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Controls the maximum number of ldlm callback threads to start.

What:		/sys/fs/lustre/ldlm/services/ldlm_cbd/threads_started
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Shows the actual number of ldlm callback threads running.

What:		/sys/fs/lustre/ldlm/services/ldlm_cbd/high_priority_ratio
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Controls what percentage of ldlm callback threads is dedicated
		to "high priority" incoming requests.
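
grant_speed above is simply grant_rate minus cancel_rate, recomputed every
recalc_period seconds. A minimal Python sketch that dumps the pool counters
for one namespace and re-derives grant_speed from the other two values;
"some-namespace" is a placeholder, and the derived and reported numbers may
differ slightly because the files are read at different moments:

    # Sketch: dump LDLM pool statistics and re-derive grant_speed.
    from pathlib import Path

    pool = Path("/sys/fs/lustre/ldlm/namespaces/some-namespace/pool")

    def read_int(name):
        return int((pool / name).read_text())

    granted = read_int("granted")
    grant_rate = read_int("grant_rate")
    cancel_rate = read_int("cancel_rate")

    print(f"granted={granted} grant_rate={grant_rate} cancel_rate={cancel_rate}")
    print("derived grant_speed:", grant_rate - cancel_rate,
          "reported grant_speed:", read_int("grant_speed"))
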
What:		/sys/fs/lustre/{obdtype}/{connection_name}/blocksize
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Blocksize on the backend filesystem for the service behind
		this obd device (or the biggest blocksize for compound
		devices like lov and lmv).

What:		/sys/fs/lustre/{obdtype}/{connection_name}/kbytestotal
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Total number of kilobytes of space on the backend filesystem
		for the service behind this obd (or the total amount for
		compound devices like lov and lmv).

What:		/sys/fs/lustre/{obdtype}/{connection_name}/kbytesfree
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Number of free kilobytes on the backend filesystem for the
		service behind this obd (or the total amount for compound
		devices like lov and lmv).

What:		/sys/fs/lustre/{obdtype}/{connection_name}/kbytesavail
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Number of kilobytes of free space on the backend filesystem
		for the service behind this obd (or the total amount for
		compound devices like lov and lmv) that is actually available
		for use (taking into account per-client and filesystem
		reservations).

What:		/sys/fs/lustre/{obdtype}/{connection_name}/filestotal
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Number of inodes on the backend filesystem for the service
		behind this obd.

What:		/sys/fs/lustre/{obdtype}/{connection_name}/filesfree
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Number of free inodes on the backend filesystem for the
		service behind this obd.

What:		/sys/fs/lustre/mdc/{connection_name}/max_pages_per_rpc
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Maximum number of readdir pages to fit into a single readdir
		RPC.

What:		/sys/fs/lustre/{mdc,osc}/{connection_name}/max_rpcs_in_flight
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Maximum number of parallel RPCs to allow on the wire on this
		connection. Increasing this number can help on higher-latency
		links, but risks overloading a server if too many clients do
		the same.
		Default: 8

What:		/sys/fs/lustre/osc/{connection_name}/max_pages_per_rpc
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Maximum number of pages to fit into a single RPC.
		Typically bigger RPCs allow for better performance.
		Default: however many pages are needed to form 1M of data
		(256 pages for 4K page size platforms).

What:		/sys/fs/lustre/osc/{connection_name}/active
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Controls accessibility of this connection. If set to 0,
		all accesses fail immediately.

What:		/sys/fs/lustre/osc/{connection_name}/checksums
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Controls whether to checksum bulk RPC data over the wire
		to this target.
		1: enable (default) ; 0: disable

What:		/sys/fs/lustre/osc/{connection_name}/contention_seconds
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Controls for how long to consider a file contended once
		indicated as such by the server.
		When a file is considered contended, all operations switch to
		synchronous lockless mode to avoid cache and lock pingpong.
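
max_rpcs_in_flight and max_pages_per_rpc above are the two most common
per-connection tuning knobs for higher-latency links. A minimal Python
sketch of raising max_rpcs_in_flight on every OSC connection of this
client; 16 is an illustrative value, not a recommendation, and the script
assumes root:

    # Sketch: raise max_rpcs_in_flight on all OSC connections of this client.
    from pathlib import Path

    NEW_RPCS_IN_FLIGHT = 16

    for osc in Path("/sys/fs/lustre/osc").glob("*"):
        attr = osc / "max_rpcs_in_flight"
        if not attr.exists():
            continue
        old = attr.read_text().strip()
        attr.write_text(f"{NEW_RPCS_IN_FLIGHT}\n")
        print(f"{osc.name}: max_rpcs_in_flight {old} -> {NEW_RPCS_IN_FLIGHT}")
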
What:		/sys/fs/lustre/osc/{connection_name}/cur_dirty_bytes
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Displays how many dirty bytes are presently in the cache for
		this target.

What:		/sys/fs/lustre/osc/{connection_name}/cur_grant_bytes
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Shows how many bytes we have as a "dirty cache" grant from the
		server. Writing a value smaller than the one shown releases
		some grant back to the server.
		The dirty cache grant is how Lustre ensures that cached
		successful writes on the client do not end up discarded by
		the server later on due to lack of space.

What:		/sys/fs/lustre/osc/{connection_name}/cur_lost_grant_bytes
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Shows how many granted bytes were released to the server due
		to lack of write activity on this client.

What:		/sys/fs/lustre/osc/{connection_name}/grant_shrink_interval
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Number of seconds with no write activity on this target
		before the client starts releasing dirty grant back to the
		server.

What:		/sys/fs/lustre/osc/{connection_name}/destroys_in_flight
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Number of DESTROY RPCs currently in flight to this target.

What:		/sys/fs/lustre/osc/{connection_name}/lockless_truncate
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Controls whether lockless truncate RPCs are allowed to this
		target.
		Lockless truncate causes the server to perform the locking,
		which is beneficial if the truncate is not immediately
		followed by a write.
		1: enable ; 0: disable (default)

What:		/sys/fs/lustre/osc/{connection_name}/max_dirty_mb
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Controls how much dirty data this client can accumulate
		for this target. This is orthogonal to the dirty grant and is
		a hard limit even if the server would allow a bigger dirty
		cache.
		While allowing a higher dirty cache is beneficial for write
		performance, flushing the write cache takes longer and as
		such the node might be more prone to OOMs.
		Setting this value too low might result in not being able
		to send enough parallel WRITE RPCs.
		Default: 32

What:		/sys/fs/lustre/osc/{connection_name}/resend_count
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Controls how many times to try and resend RPCs to this target
		that failed with a "recoverable" status, such as EAGAIN or
		ENOMEM.

What:		/sys/fs/lustre/lov/{connection_name}/numobd
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Number of OSC targets managed by this LOV instance.

What:		/sys/fs/lustre/lov/{connection_name}/activeobd
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Number of OSC targets managed by this LOV instance that are
		actually active.

What:		/sys/fs/lustre/lmv/{connection_name}/numobd
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Number of MDC targets managed by this LMV instance.
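
cur_dirty_bytes, cur_grant_bytes and max_dirty_mb above are all per-target.
A minimal Python sketch that sums them across every OSC connection to show
this client's overall dirty-cache footprint:

    # Sketch: aggregate per-OSC dirty cache and grant usage for this client.
    from pathlib import Path

    dirty = grant = limit_mb = 0
    for osc in Path("/sys/fs/lustre/osc").glob("*"):
        if not (osc / "cur_dirty_bytes").exists():
            continue
        dirty += int((osc / "cur_dirty_bytes").read_text())
        grant += int((osc / "cur_grant_bytes").read_text())
        limit_mb += int((osc / "max_dirty_mb").read_text())

    print(f"dirty: {dirty >> 20} MB, grant: {grant >> 20} MB, "
          f"sum of per-target max_dirty_mb: {limit_mb} MB")
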
What:		/sys/fs/lustre/lmv/{connection_name}/activeobd
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Number of MDC targets managed by this LMV instance that are
		actually active.

What:		/sys/fs/lustre/lmv/{connection_name}/placement
Date:		May 2015
Contact:	"Oleg Drokin" <oleg.drokin@intel.com>
Description:
		Determines the inode placement policy in case of multiple
		metadata servers:
		CHAR - based on a hash of the file name used at creation
			time (default)
		NID - based on a hash of the creating client's network id.
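
The numobd/activeobd pairs on the lov and lmv entries above allow a quick
check for degraded data or metadata layouts. A minimal Python sketch that
warns when an instance reports fewer active targets than it manages:

    # Sketch: warn when a LOV or LMV instance reports inactive targets.
    from pathlib import Path

    for kind in ("lov", "lmv"):
        for dev in Path("/sys/fs/lustre", kind).glob("*"):
            total = int((dev / "numobd").read_text())
            active = int((dev / "activeobd").read_text())
            if active < total:
                print(f"{kind}/{dev.name}: only {active} of {total} targets active")
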