Lines Matching refs:queue
47 - Per-queue parameters
112 Tuning at a per-queue level:
114 i. Per-queue limits/values exported to the generic layer by the driver
117 a per-queue level (e.g. maximum request size, maximum number of segments in
121 major/minor are now directly associated with the queue. Some of these may
123 have been incorporated into a queue flags field rather than separate fields
127 Some new queue property settings:
136 - The request queue's max_sectors, which is a soft size in
140 - The request queue's max_hw_sectors, which is a hard limit
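
The max_sectors/max_hw_sectors split above is easiest to see from the driver
side. A minimal sketch, assuming the 2.6-era helpers from <linux/blkdev.h>;
the numeric limits and the example_set_limits() wrapper are illustrative,
not values from the source:

    #include <linux/blkdev.h>

    static void example_set_limits(request_queue_t *q)
    {
            blk_queue_max_sectors(q, 128);         /* soft cap per request */
            blk_queue_max_phys_segments(q, 32);    /* sg entries after merging */
            blk_queue_max_hw_segments(q, 32);      /* sg entries the HBA accepts */
            blk_queue_max_segment_size(q, 65536);  /* bytes per sg segment */
            blk_queue_hardsect_size(q, 512);       /* device sector size */
    }
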
163 New queue flags:
176 setting the queue bounce limit for the request queue for the device
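
The bounce-limit hit at 176 refers to blk_queue_bounce_limit(). A hedged
sketch for a PCI driver, assuming the stock 2.6-era constants
(BLK_BOUNCE_ANY / BLK_BOUNCE_HIGH) and DMA_64BIT_MASK; pdev and q are
assumed to come from the driver's probe path:

    #include <linux/blkdev.h>
    #include <linux/pci.h>
    #include <linux/dma-mapping.h>

    static void example_setup_bounce(request_queue_t *q, struct pci_dev *pdev)
    {
            /* Hardware that can DMA anywhere never needs bouncing;
             * 32-bit-limited hardware bounces highmem pages down. */
            if (!pci_set_dma_mask(pdev, DMA_64BIT_MASK))
                    blk_queue_bounce_limit(q, BLK_BOUNCE_ANY);
            else
                    blk_queue_bounce_limit(q, BLK_BOUNCE_HIGH);
    }
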
227 queue or pick from (copy) existing generic schedulers and replace/override
235 I/O scheduler wrappers are to be used instead of accessing the queue directly.
267 requests in the queue. For example it allows reads for bringing in an
269 requests which haven't aged too much on the queue. Potentially this priority
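
The wrapper rule at 235 shows up most plainly in a driver's request
function. A minimal sketch, assuming the 2.6-era wrappers
elv_next_request() and blkdev_dequeue_request(); issue_to_hardware() is a
hypothetical driver hook:

    static void example_request_fn(request_queue_t *q)
    {
            struct request *rq;

            /* Peek at the next ready request through the elevator
             * wrapper rather than walking the queue lists directly. */
            while ((rq = elv_next_request(q)) != NULL) {
                    blkdev_dequeue_request(rq);  /* take it off the queue */
                    issue_to_hardware(rq);       /* hypothetical hook */
            }
    }
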
292 can instead be used to directly insert such requests in the queue or preferably
293 the blk_do_rq routine can be used to place the request on the queue and
356 request on the queue, rather than construct the command on the fly in the
357 driver while servicing the request queue when it may affect latencies in
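
A hedged sketch of the pre-building idea described at 292-293 and 356-357.
Field and flag names roughly follow early 2.6 (rq->flags, REQ_BLOCK_PC,
rq->cmd); the CDB bytes are illustrative, and the final hand-off is left
as a comment because blk_do_rq's exact prototype changed over time (it was
later renamed blk_execute_rq):

    struct request *rq;

    rq = blk_get_request(q, READ, GFP_KERNEL);  /* may sleep for a free rq */
    rq->flags |= REQ_BLOCK_PC;                  /* pre-built packet command */
    memset(rq->cmd, 0, sizeof(rq->cmd));
    rq->cmd[0] = 0x12;                          /* e.g. SCSI INQUIRY */
    rq->cmd[4] = 96;                            /* allocation length */
    /* ... attach a data buffer, then place the request on the queue
     * with blk_do_rq() (later blk_execute_rq()) and wait for it. */
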
432 struct bio *bi_next; /* request queue link */
496 places it on the queue and invokes the driver's request_fn. The driver makes
498 off the queue. Control or diagnostic functions might bypass block and directly
513 rq->queue is gone
670 the i/o hardware can handle, based on various queue properties.
718 one outstanding command on a queue at any given time.
727 Teardown tag info associated with the queue. This will be done
728 automatically by block if blk_queue_cleanup() is called on a queue
739 for this queue is already achieved (or if the tag wasn't started for
747 To minimize struct request and queue overhead, the tag helpers utilize some
748 of the same request members that are used for normal request queue management.
749 This means that a request cannot both be an active tag and be on the queue
757 queue. For instance, on IDE any tagged request error needs to clear both
758 the hardware and software block queue and enable the driver to sanely restart
763 Clear the internal block tag queue and re-add all the pending requests
764 to the request queue. The driver will receive them again on the
775 Returns 1 if the queue 'q' is using tagging, 0 if not.
783 Return current queue depth.
787 Returns 1 if the queue can accept a new queued command, 0 if we are
802 int busy; /* queue depth */
803 int max_depth; /* max queue depth */
808 but in the event of any barrier requests in the tag queue we need to ensure
809 that requests are restarted in the order they were queued. This may happen
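
Pulling the tag-helper hits (718-809) together, a hedged sketch of the
flow. The depth of 4 is arbitrary, NULL asks block to allocate the tag map
itself, and issue_to_hardware() is again a hypothetical hook:

    static int example_init_tags(request_queue_t *q)
    {
            /* returns 0 on success, -ENOMEM on failure */
            return blk_queue_init_tags(q, 4, NULL);
    }

    static void example_tagged_request_fn(request_queue_t *q)
    {
            struct request *rq;

            while ((rq = elv_next_request(q)) != NULL) {
                    /* non-zero: max depth reached, wait for completions */
                    if (blk_queue_start_tag(q, rq))
                            break;
                    /* start_tag also took rq off the queue, since a
                     * request cannot be both tagged and queued */
                    issue_to_hardware(rq);  /* hypothetical */
            }
    }

    static void example_complete(request_queue_t *q, struct request *rq)
    {
            blk_queue_end_tag(q, rq);  /* free the tag first */
            /* then normal end-of-request processing */
    }

    /* error recovery: blk_queue_invalidate_tags(q) re-adds all
     * outstanding tagged requests to the request queue. */
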
887 queue and specific I/O schedulers. Unless stated otherwise, elevator is used
890 Block layer implements generic dispatch queue in block/*.c.
891 The generic dispatch queue is responsible for requeueing, handling non-fs
898 be built inside the kernel. Each queue can choose a different one and can also
933 elevator_dispatch_fn* fills the dispatch queue with ready requests.
935 not filling the dispatch queue unless @force
938 they belong to generic dispatch queue.
950 current context to queue a new request even if
951 it is over the queue limit. This must be used
967 for a queue.
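
A hedged sketch of a scheduler written against this interface, condensed
from the shape of block/noop-iosched.c in 2.6; all example_* names are
made up, while elv_dispatch_sort() and struct elevator_type are the
2.6-era entry points:

    struct example_data {
            struct list_head queue;  /* scheduler-private FIFO */
    };

    static int example_dispatch(request_queue_t *q, int force)
    {
            struct example_data *ed = q->elevator->elevator_data;
            struct request *rq;

            if (list_empty(&ed->queue))
                    return 0;
            rq = list_entry(ed->queue.next, struct request, queuelist);
            list_del_init(&rq->queuelist);
            elv_dispatch_sort(q, rq);  /* hand to the dispatch queue */
            return 1;
    }

    static void example_add_request(request_queue_t *q, struct request *rq)
    {
            struct example_data *ed = q->elevator->elevator_data;

            list_add_tail(&rq->queuelist, &ed->queue);
    }

    static struct elevator_type example_elevator = {
            .ops = {
                    .elevator_dispatch_fn = example_dispatch,
                    .elevator_add_req_fn  = example_add_request,
            },
            .elevator_name  = "example",
            .elevator_owner = THIS_MODULE,
    };
    /* module init/exit: elv_register()/elv_unregister(), plus an
     * elevator_init_fn to allocate example_data (omitted here). */
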
1011 iii. Plugging the queue to batch requests in anticipation of opportunities for
1015 that it collects up enough requests in the queue to be able to take
1017 queue is empty when a request comes in, then it plugs the request queue
1021 passing them down to the device. There are various conditions when the queue is
1026 the queue gets explicitly unplugged as part of waiting for completion on that
1033 and allowing a big queue to build up in software, while letting the device be
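
One concrete consequence of plugging, sketched under the assumption of the
2.6-era API: a caller that must wait on I/O it just submitted kicks the
plugged queue first. generic_unplug_device() is the stock unplug_fn; the
completion here is illustrative:

    #include <linux/blkdev.h>
    #include <linux/completion.h>

    static void example_wait_for_io(request_queue_t *q, struct completion *done)
    {
            generic_unplug_device(q);   /* unplug: run request_fn now */
            wait_for_completion(done);  /* then sleep until the I/O ends */
    }
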
1048 5.1 Granular Locking: io_request_lock replaced by a per-queue lock
1052 granular locking. The request queue structure has a pointer to the
1053 lock to be used for that queue. As a result, locking can now be
1054 per-queue, with a provision for sharing a lock across queues if
1055 necessary (e.g. the scsi layer sets the queue lock pointers to the
1060 should still be SMP safe. Drivers are free to drop the queue
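
A hedged init-time sketch of lock sharing (example_request_fn as in the
earlier sketch; error unwinding omitted): two queues behind one controller
share a lock simply by passing the same spinlock to blk_init_queue():

    static spinlock_t example_lock;
    static request_queue_t *q0, *q1;

    static int __init example_init(void)
    {
            spin_lock_init(&example_lock);
            /* both queues now serialize on example_lock */
            q0 = blk_init_queue(example_request_fn, &example_lock);
            q1 = blk_init_queue(example_request_fn, &example_lock);
            if (!q0 || !q1)
                    return -ENOMEM;  /* cleanup omitted in this sketch */
            return 0;
    }
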
1083 generic_make_request even before invoking the queue-specific make_request_fn,
1104 (struct request->queue has been removed)
1118 etc per queue now. Drivers that used to define their own merge functions i
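
To close the loop on generic_make_request, a hedged submission sketch
using early-2.6 bio fields; example_end_io and the sector/bdev/page
variables are assumptions, and submit_bio() simply sets bi_rw before
calling generic_make_request():

    struct bio *bio = bio_alloc(GFP_NOIO, 1);   /* room for one vec */

    bio->bi_sector = sector;          /* where on the device */
    bio->bi_bdev   = bdev;            /* which device (set before add_page) */
    bio->bi_end_io = example_end_io;  /* completion callback, assumed */
    bio_add_page(bio, page, PAGE_SIZE, 0);  /* respects the queue limits */
    submit_bio(READ, bio);            /* wraps generic_make_request() */
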