Searched refs:throughput (Results 1 - 174 of 174) sorted by relevance

/linux-4.4.14/drivers/net/wireless/iwlwifi/dvm/
H A Dled.c56 { .throughput = 0, .blink_time = 334 },
57 { .throughput = 1 * 1024 - 1, .blink_time = 260 },
58 { .throughput = 5 * 1024 - 1, .blink_time = 220 },
59 { .throughput = 10 * 1024 - 1, .blink_time = 190 },
60 { .throughput = 20 * 1024 - 1, .blink_time = 170 },
61 { .throughput = 50 * 1024 - 1, .blink_time = 150 },
62 { .throughput = 70 * 1024 - 1, .blink_time = 130 },
63 { .throughput = 100 * 1024 - 1, .blink_time = 110 },
64 { .throughput = 200 * 1024 - 1, .blink_time = 80 },
65 { .throughput = 300 * 1024 - 1, .blink_time = 50 },
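The led.c entry above is a threshold table: measured throughput in Kbit/s selects an LED blink period in ms, with faster blinking at higher rates. A standalone sketch of how such a table can be consulted (the function name `pick_blink_time` and the highest-match-wins walk are assumptions for illustration, not the driver's exact code path):

```c
#include <assert.h>

/* Mirrors the iwlwifi table above: throughput thresholds in Kbit/s
 * mapped to LED blink periods in ms. */
struct blink_entry { int throughput; int blink_time; };

static const struct blink_entry blink_tbl[] = {
	{ 0,              334 },
	{ 1 * 1024 - 1,   260 },
	{ 5 * 1024 - 1,   220 },
	{ 10 * 1024 - 1,  190 },
	{ 20 * 1024 - 1,  170 },
	{ 50 * 1024 - 1,  150 },
	{ 70 * 1024 - 1,  130 },
	{ 100 * 1024 - 1, 110 },
	{ 200 * 1024 - 1, 80 },
	{ 300 * 1024 - 1, 50 },
};

/* Hypothetical lookup: walk from the highest threshold down; the first
 * entry whose threshold the measured throughput exceeds wins, otherwise
 * fall back to the idle entry. */
static int pick_blink_time(int tpt_kbps)
{
	int i;

	for (i = (int)(sizeof(blink_tbl) / sizeof(blink_tbl[0])) - 1; i > 0; i--)
		if (tpt_kbps > blink_tbl[i].throughput)
			return blink_tbl[i].blink_time;
	return blink_tbl[0].blink_time;
}
```

The same table shape recurs in the iwlegacy and ath9k entries further down this listing; mac80211's `tpt_trig_timer()` consumes it generically.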
H A Drs.h131 /* uCode API values for OFDM high-throughput (HT) bit rates */
270 LQ_SISO, /* high-throughput types */
300 s32 average_tpt; /* success ratio * expected throughput */
318 const u16 *expected_tpt; /* throughput metrics; expected_tpt_G, etc. */
H A Drs.c169 * The following tables contain the expected throughput metrics for all rates
450 * Static function to get the expected throughput from an iwl_scale_tbl_info
480 /* Get expected throughput */ rs_collect_tx_data()
528 /* Calculate average throughput, if we have enough history. */ rs_collect_tx_data()
1081 * Set frame tx success limits according to legacy vs. high-throughput,
1108 * Find correct throughput table for given mode of modulation
1155 * Find starting rate for new "search" high-throughput mode of modulation.
1157 * above the current measured throughput of "active" mode, to give new mode
1163 * to decrease to match "active" throughput. When moving from MIMO to SISO,
1176 /* expected "search" throughput */ rs_get_best_rate()
1194 * approximately the same throughput as "active" if: rs_get_best_rate()
1197 * great), and expected "search" throughput (under perfect rs_get_best_rate()
1199 * measured "active" throughput (but less than expected rs_get_best_rate()
1200 * "active" throughput under perfect conditions). rs_get_best_rate()
1203 * and expected "search" throughput (under perfect rs_get_best_rate()
1205 * "active" throughput (under perfect conditions). rs_get_best_rate()
2316 /* Get expected throughput table and history window for current rate */ rs_rate_scale_perform()
2335 * throughput, keep analyzing results of more tx frames, without rs_rate_scale_perform()
2357 * actual average throughput */ rs_rate_scale_perform()
2402 /* Revert to "active" rate and throughput info */ rs_rate_scale_perform()
2445 /* No throughput measured yet for adjacent rates; try increase. */ rs_rate_scale_perform()
2456 * throughput; we're using the best rate, don't change it! */ rs_rate_scale_perform()
2463 /* At least one adjacent rate's throughput is measured, rs_rate_scale_perform()
2466 /* Higher adjacent rate's throughput is measured */ rs_rate_scale_perform()
2468 /* Higher rate has better throughput */ rs_rate_scale_perform()
2476 /* Lower adjacent rate's throughput is measured */ rs_rate_scale_perform()
2478 /* Lower rate has better throughput */ rs_rate_scale_perform()
2489 /* Sanity check; asked for decrease, but success rate or throughput rs_rate_scale_perform()
2566 /* Save current throughput to compare with "search" throughput */ rs_rate_scale_perform()
H A Dcommands.h247 * High-throughput (HT) rate format for bits 7:0 (bit 8 must be "1"):
1645 * 1) If using High-throughput (HT) (SISO or MIMO) initial rate:
1705 * 1) Calculate actual throughput (success ratio * expected throughput, see
1717 * enough history to calculate a throughput. That's okay, we might try
1722 * b) lower adjacent rate has better measured throughput ||
1723 * c) higher adjacent rate has worse throughput, and lower is unmeasured
1729 * c) current measured throughput is better than expected throughput
1735 * b) higher adjacent rate has better measured throughput ||
1736 * c) lower adjacent rate has worse throughput, and higher is unmeasured
1752 * throughput. The "while" is measured by numbers of attempted frames:
1756 * For high-throughput modes (SISO or MIMO), search for new mode after:
1771 * for which the expected throughput (under perfect conditions) is about the
1772 * same or slightly better than the actual measured throughput delivered by
1775 * Actual throughput can be estimated by multiplying the expected throughput
1779 * metric values for expected throughput assuming 100% success ratio.
1796 * frames or 8 successful frames), compare success ratio and actual throughput
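The commands.h walkthrough above (measure actual throughput as success ratio times expected throughput, then compare against the adjacent higher and lower rates) can be condensed into one decision function. All names here are hypothetical and the rule set is a simplified sketch of the documented logic, not the driver's full state machine:

```c
#include <assert.h>

#define TPT_UNMEASURED (-1)

enum rate_action { RATE_STAY, RATE_DECREASE, RATE_INCREASE };

/* tpt arguments are success_ratio * expected_throughput for the current,
 * lower-adjacent, and higher-adjacent rates, or TPT_UNMEASURED when not
 * enough tx history exists yet. */
static enum rate_action rate_scale_decide(int cur_tpt, int low_tpt, int high_tpt)
{
	/* No throughput measured yet for adjacent rates: try increase. */
	if (low_tpt == TPT_UNMEASURED && high_tpt == TPT_UNMEASURED)
		return RATE_INCREASE;

	/* Both neighbors measured and neither beats the current rate:
	 * we're using the best rate, don't change it. */
	if (low_tpt != TPT_UNMEASURED && high_tpt != TPT_UNMEASURED &&
	    low_tpt < cur_tpt && high_tpt < cur_tpt)
		return RATE_STAY;

	/* Higher adjacent rate measured and better: increase. */
	if (high_tpt != TPT_UNMEASURED && high_tpt > cur_tpt)
		return RATE_INCREASE;

	/* Lower adjacent rate measured and better: decrease. */
	if (low_tpt != TPT_UNMEASURED && low_tpt > cur_tpt)
		return RATE_DECREASE;

	return RATE_STAY;
}
```

The rs.c and 4965-rs.c entries elsewhere in this listing implement the full version of this comparison inside `rs_rate_scale_perform()`.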
H A Dtt.c352 * 3) Avoid throughput performance impact as much as possible
H A Dlib.c750 * at the expense of throughput, but only when not in powersave to
H A Drx.c230 * to improve the throughput.
/linux-4.4.14/net/x25/
H A Dx25_facilities.c108 facilities->throughput = p[1]; x25_parse_facilities()
215 if (facilities->throughput && (facil_mask & X25_MASK_THROUGHPUT)) { x25_create_facilities()
217 *p++ = facilities->throughput; x25_create_facilities()
296 if (theirs.throughput) { x25_negotiate_facilities()
297 int theirs_in = theirs.throughput & 0x0f; x25_negotiate_facilities()
298 int theirs_out = theirs.throughput & 0xf0; x25_negotiate_facilities()
299 int ours_in = ours->throughput & 0x0f; x25_negotiate_facilities()
300 int ours_out = ours->throughput & 0xf0; x25_negotiate_facilities()
302 SOCK_DEBUG(sk, "X.25: inbound throughput negotiated\n"); x25_negotiate_facilities()
303 new->throughput = (new->throughput & 0xf0) | theirs_in; x25_negotiate_facilities()
307 "X.25: outbound throughput negotiated\n"); x25_negotiate_facilities()
308 new->throughput = (new->throughput & 0x0f) | theirs_out; x25_negotiate_facilities()
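The x25_facilities.c lines above pack two throughput classes into one byte: the low nibble is the inbound class, the high nibble the outbound class, and `x25_negotiate_facilities()` lowers each to the peer's offer when the peer offers less. A self-contained sketch of just that nibble arithmetic (function name assumed; the real code also checks that a throughput facility was present at all):

```c
#include <assert.h>

static unsigned int x25_negotiate_tput(unsigned int theirs, unsigned int ours)
{
	unsigned int negotiated = ours;
	unsigned int theirs_in  = theirs & 0x0f, theirs_out = theirs & 0xf0;
	unsigned int ours_in    = ours   & 0x0f, ours_out   = ours   & 0xf0;

	if (theirs_in < ours_in)	/* inbound throughput negotiated down */
		negotiated = (negotiated & 0xf0) | theirs_in;
	if (theirs_out < ours_out)	/* outbound throughput negotiated down */
		negotiated = (negotiated & 0x0f) | theirs_out;
	return negotiated;
}
```

Each direction is negotiated independently, so a connection can end up with, say, the peer's inbound class and our own outbound class.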
H A Daf_x25.c23 * the throughput upper limit.
582 x25->facilities.throughput = 0; /* by default don't negotiate x25_create()
583 throughput */ x25_create()
1457 if (facilities.throughput) { x25_ioctl()
1458 int out = facilities.throughput & 0xf0; x25_ioctl()
1459 int in = facilities.throughput & 0x0f; x25_ioctl()
1461 facilities.throughput |= x25_ioctl()
1466 facilities.throughput |= x25_ioctl()
/linux-4.4.14/drivers/gpu/drm/radeon/
H A Dradeon_benchmark.c80 unsigned int throughput = (n * (size >> 10)) / time; radeon_benchmark_log_results() local
82 " %d to %d in %u ms, throughput: %u Mb/s or %u MB/s\n", radeon_benchmark_log_results()
84 throughput * 8, throughput); radeon_benchmark_log_results()
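The radeon_benchmark_log_results() lines above compute `(n * (size >> 10)) / time`: total KiB moved over `n` copies, divided by elapsed milliseconds. KiB/ms is numerically close to MB/s (the 1024-vs-1000 difference is treated as noise), and multiplying by 8 gives the Mb/s figure in the log line. A sketch of that unit arithmetic (function name assumed):

```c
#include <assert.h>

/* n copies of `size` bytes completed in `time_ms` milliseconds. */
static unsigned int bench_throughput_mbs(unsigned int n, unsigned int size,
					 unsigned int time_ms)
{
	return (n * (size >> 10)) / time_ms;	/* logged as "MB/s" */
}
```

The amdgpu_benchmark.c entry below in this listing uses the identical formula.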
H A Dradeon_vm.c744 * caching. This leads to large improvements in throughput when the radeon_vm_frag_ptes()
/linux-4.4.14/include/uapi/linux/
H A Dgen_stats.h18 * struct gnet_stats_basic - byte/packet throughput statistics
H A Dx25.h105 unsigned int throughput; member in struct:x25_facilities
H A Dwireless.h971 __u32 throughput; /* To give an idea... */ member in struct:iw_range
973 * TCP/IP throughput, because with most of these devices the
H A Dethtool.h405 * throughput under high packet rates. Some drivers only implement
H A Dnl80211.h2448 * @NL80211_STA_EXPECTED_THROUGHPUT: expected throughput considering also the
/linux-4.4.14/drivers/gpu/drm/amd/amdgpu/
H A Damdgpu_benchmark.c65 unsigned int throughput = (n * (size >> 10)) / time; amdgpu_benchmark_log_results() local
67 " %d to %d in %u ms, throughput: %u Mb/s or %u MB/s\n", amdgpu_benchmark_log_results()
69 throughput * 8, throughput); amdgpu_benchmark_log_results()
H A Damdgpu_vm.c555 * caching. This leads to large improvements in throughput when the amdgpu_vm_frag_ptes()
/linux-4.4.14/drivers/md/
H A Ddm-service-time.c119 * <relative_throughput>: The relative throughput value of st_add_path()
205 * Case 1: Both have same throughput value. Choose less loaded path. st_compare_load()
211 * Case 2a: Both have same load. Choose higher throughput path. st_compare_load()
212 * Case 2b: One path has no throughput value. Choose the other one. st_compare_load()
253 * Case 4: Service time is equal. Choose higher throughput path. st_compare_load()
341 MODULE_DESCRIPTION(DM_NAME " throughput oriented path selector");
H A Draid5.h388 * To improve write throughput, we need to delay the handling of some
H A Draid5.c6631 * it reduces the queue depth and so can hurt throughput.
/linux-4.4.14/include/linux/
H A Dtfrc.h35 * @tfrctx_x_calc: return value of throughput equation (3.1)
H A Dnfs_iostat.h39 * These counters give a view of the data throughput into and out
H A Dswap.h196 * throughput.
H A Dpagemap.h92 * throughput (it can then be mapped into user
H A Dieee80211.h1302 * Maximum length of AMPDU that the STA can receive in high-throughput (HT).
/linux-4.4.14/arch/arm/mach-omap2/
H A Domap-pm.h96 * omap_pm_set_min_bus_tput - set minimum bus throughput needed by device
99 * @r: minimum throughput (in KiB/s)
101 * Request that the minimum data throughput on the OCP interconnect
119 * throughput restriction for this device, call with r = 0.
H A Domap_hwmod.c121 * - bus throughput & module latency measurement code
/linux-4.4.14/fs/nfsd/
H A Dstats.c12 * statistics for IO throughput
/linux-4.4.14/drivers/net/wireless/iwlegacy/
H A D4965-rs.c161 * The following tables contain the expected throughput metrics for all rates
400 * Static function to get the expected throughput from an il_scale_tbl_info
432 /* Get expected throughput */ il4965_rs_collect_tx_data()
480 /* Calculate average throughput, if we have enough history. */ il4965_rs_collect_tx_data()
995 * Set frame tx success limits according to legacy vs. high-throughput,
1023 * Find correct throughput table for given mode of modulation
1067 * Find starting rate for new "search" high-throughput mode of modulation.
1069 * above the current measured throughput of "active" mode, to give new mode
1075 * to decrease to match "active" throughput. When moving from MIMO to SISO,
1089 /* expected "search" throughput */ il4965_rs_get_best_rate()
1108 * approximately the same throughput as "active" if: il4965_rs_get_best_rate()
1111 * great), and expected "search" throughput (under perfect il4965_rs_get_best_rate()
1113 * measured "active" throughput (but less than expected il4965_rs_get_best_rate()
1114 * "active" throughput under perfect conditions). il4965_rs_get_best_rate()
1117 * and expected "search" throughput (under perfect il4965_rs_get_best_rate()
1119 * "active" throughput (under perfect conditions). il4965_rs_get_best_rate()
1849 /* Get expected throughput table and history win for current rate */ il4965_rs_rate_scale_perform()
1867 * throughput, keep analyzing results of more tx frames, without il4965_rs_rate_scale_perform()
1888 * actual average throughput */ il4965_rs_rate_scale_perform()
1930 /* Revert to "active" rate and throughput info */ il4965_rs_rate_scale_perform()
1973 /* No throughput measured yet for adjacent rates; try increase. */ il4965_rs_rate_scale_perform()
1983 * throughput; we're using the best rate, don't change it! */ il4965_rs_rate_scale_perform()
1988 /* At least one adjacent rate's throughput is measured, il4965_rs_rate_scale_perform()
1991 /* Higher adjacent rate's throughput is measured */ il4965_rs_rate_scale_perform()
1993 /* Higher rate has better throughput */ il4965_rs_rate_scale_perform()
1999 /* Lower adjacent rate's throughput is measured */ il4965_rs_rate_scale_perform()
2001 /* Lower rate has better throughput */ il4965_rs_rate_scale_perform()
2011 /* Sanity check; asked for decrease, but success rate or throughput il4965_rs_rate_scale_perform()
2059 /* Save current throughput to compare with "search" throughput */ il4965_rs_rate_scale_perform()
H A D3945-rs.c315 /* Calculate average throughput, if we have enough history. */ il3945_collect_tx_data()
734 /* No throughput measured yet for adjacent rates, il3945_rs_get_rate()
745 * better throughput; we're using the best rate, don't change il3945_rs_get_rate()
754 /* At least one of the rates has better throughput */ il3945_rs_get_rate()
758 /* High rate has better throughput, Increase il3945_rs_get_rate()
773 * throughput, decrease rate */ il3945_rs_get_rate()
779 /* Sanity check; asked for decrease, but success rate or throughput il3945_rs_get_rate()
H A Dcommands.h248 * High-throughput (HT) rate format for bits 7:0 (bit 8 must be "1"):
1895 * 1) If using High-throughput (HT) (SISO or MIMO) initial rate:
1955 * 1) Calculate actual throughput (success ratio * expected throughput, see
1967 * enough history to calculate a throughput. That's okay, we might try
1972 * b) lower adjacent rate has better measured throughput ||
1973 * c) higher adjacent rate has worse throughput, and lower is unmeasured
1979 * c) current measured throughput is better than expected throughput
1985 * b) higher adjacent rate has better measured throughput ||
1986 * c) lower adjacent rate has worse throughput, and higher is unmeasured
2002 * throughput. The "while" is measured by numbers of attempted frames:
2006 * For high-throughput modes (SISO or MIMO), search for new mode after:
2021 * for which the expected throughput (under perfect conditions) is about the
2022 * same or slightly better than the actual measured throughput delivered by
2025 * Actual throughput can be estimated by multiplying the expected throughput
2029 * metric values for expected throughput assuming 100% success ratio.
2046 * frames or 8 successful frames), compare success ratio and actual throughput
H A Dcommon.c456 {.throughput = 0, .blink_time = 334},
457 {.throughput = 1 * 1024 - 1, .blink_time = 260},
458 {.throughput = 5 * 1024 - 1, .blink_time = 220},
459 {.throughput = 10 * 1024 - 1, .blink_time = 190},
460 {.throughput = 20 * 1024 - 1, .blink_time = 170},
461 {.throughput = 50 * 1024 - 1, .blink_time = 150},
462 {.throughput = 70 * 1024 - 1, .blink_time = 130},
463 {.throughput = 100 * 1024 - 1, .blink_time = 110},
464 {.throughput = 200 * 1024 - 1, .blink_time = 80},
465 {.throughput = 300 * 1024 - 1, .blink_time = 50},
H A Dcommon.h156 * @sched_retry: indicates queue is high-throughput aggregation (HT AGG) enabled
2583 /* uCode API values for OFDM high-throughput (HT) bit rates */
2721 LQ_SISO, /* high-throughput types */
2758 s32 average_tpt; /* success ratio * expected throughput */
2776 s32 *expected_tpt; /* throughput metrics; expected_tpt_G, etc. */
2884 * The specific throughput table used is based on the type of network
H A D4965-mac.c624 * N_RX_MPDU (HT high-throughput N frames). */
1095 * at the expense of throughput, but only when not in powersave to
/linux-4.4.14/drivers/net/wireless/ath/ath9k/
H A Dhtc_drv_init.c51 { .throughput = 0 * 1024, .blink_time = 334 },
52 { .throughput = 1 * 1024, .blink_time = 260 },
53 { .throughput = 5 * 1024, .blink_time = 220 },
54 { .throughput = 10 * 1024, .blink_time = 190 },
55 { .throughput = 20 * 1024, .blink_time = 170 },
56 { .throughput = 50 * 1024, .blink_time = 150 },
57 { .throughput = 70 * 1024, .blink_time = 130 },
58 { .throughput = 100 * 1024, .blink_time = 110 },
59 { .throughput = 200 * 1024, .blink_time = 80 },
60 { .throughput = 300 * 1024, .blink_time = 50 },
H A Dinit.c80 { .throughput = 0 * 1024, .blink_time = 334 },
81 { .throughput = 1 * 1024, .blink_time = 260 },
82 { .throughput = 5 * 1024, .blink_time = 220 },
83 { .throughput = 10 * 1024, .blink_time = 190 },
84 { .throughput = 20 * 1024, .blink_time = 170 },
85 { .throughput = 50 * 1024, .blink_time = 150 },
86 { .throughput = 70 * 1024, .blink_time = 130 },
87 { .throughput = 100 * 1024, .blink_time = 110 },
88 { .throughput = 200 * 1024, .blink_time = 80 },
89 { .throughput = 300 * 1024, .blink_time = 50 },
H A Dmci.c158 * to improve WLAN throughput. ath_mci_update_scheme()
/linux-4.4.14/drivers/net/wireless/iwlwifi/mvm/
H A Dfw-api-rs.h135 * High-throughput (HT) rate format
137 * Very High-throughput (VHT) rate format
159 * High-throughput (HT) rate format for bits 7:0
192 * Very High-throughput (VHT) rate format for bits 7:0
H A Drs.h224 s32 average_tpt; /* success ratio * expected throughput */
267 const u16 *expected_tpt; /* throughput metrics; expected_tpt_G, etc. */
H A Drs.c409 * The following tables contain the expected throughput metrics for all rates
644 * Static function to get the expected throughput from an iwl_scale_tbl_info
669 /* Get expected throughput */ _rs_collect_tx_data()
716 /* Calculate average throughput, if we have enough history. */ _rs_collect_tx_data()
1392 * Set frame tx success limits according to legacy vs. high-throughput,
2241 /* Get expected throughput table and history window for current rate */ rs_rate_scale_perform()
2252 * throughput, keep analyzing results of more tx frames, without rs_rate_scale_perform()
2307 /* Revert to "active" rate and throughput info */ rs_rate_scale_perform()
2410 /* Save current throughput to compare with "search" throughput */ rs_rate_scale_perform()
/linux-4.4.14/net/mac80211/
H A Drc80211_minstrel_ht.c317 * Return current throughput based on the average A-MPDU length, taking into
326 /* do not account throughput if success prob is below 10% */ minstrel_ht_get_tp_avg()
336 * For the throughput calculation, limit the probability value to 90% to minstrel_ht_get_tp_avg()
348 * Find & sort topmost throughput rates
350 * If multiple rates provide equal throughput the sorting is based on their
509 * probability and throughput during strong fluctuations
511 * higher throughput rates, even if the probability is a bit lower
565 /* Find max throughput rate set */ minstrel_ht_update_stats()
574 /* Find max throughput rate set within a group */ minstrel_ht_update_stats()
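The rc80211_minstrel_ht.c lines above describe two clamps in `minstrel_ht_get_tp_avg()`: a rate whose success probability is under 10% contributes no throughput at all, and the probability is capped at 90% so a slightly slower but far more reliable rate can still win the sort. A standalone sketch of that clamping (name and the permille/Kbit scale are assumptions; the real function also folds in average A-MPDU length):

```c
#include <assert.h>

/* prob_permille: success probability in tenths of a percent (0..1000);
 * rate_kbps: nominal rate in Kbit/s. */
static int minstrel_tp_est(int rate_kbps, int prob_permille)
{
	if (prob_permille < 100)	/* below 10%: do not account throughput */
		return 0;
	if (prob_permille > 900)	/* cap at 90% for the tp comparison */
		prob_permille = 900;
	return (int)((long long)rate_kbps * prob_permille / 1000);
}
```

The cap means a 54000 Kbit/s rate at 95% success and one at 100% success compare equal, which stabilizes the max-throughput sort under strong fluctuations.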
H A Dled.c271 if (tpt_trig->blink_table[i].throughput < 0 || tpt_trig_timer()
272 tpt > tpt_trig->blink_table[i].throughput) { tpt_trig_timer()
H A Drc80211_minstrel.c72 /* return current EMWA throughput */ minstrel_get_tp_avg()
91 /* find & sort topmost throughput rates */
229 * choose the maximum throughput rate as max_prob_rate minstrel_update_stats()
390 * in a large throughput loss. */ minstrel_get_rate()
H A Drc80211_minstrel.h21 /* number of highest throughput rates to consider*/
H A Dstatus.c546 * - current throughput (higher value for higher tpt)?
/linux-4.4.14/arch/ia64/lib/
H A Dmemcpy.S156 * an overriding concern here, but throughput is. We first do
171 * latency is 2 cycles/iteration. This gives us a _copy_ throughput
/linux-4.4.14/drivers/usb/gadget/legacy/
H A Dmass_storage.c21 * double-buffering for increased throughput. Last but not least, it
/linux-4.4.14/drivers/staging/vt6655/
H A Dchannel.c197 it is for better TX throughput */ set_channel()
H A Dcard.c314 * better TX throughput; MAC will need 2 us to process, so the CARDbSetPhyParameter()
/linux-4.4.14/drivers/usb/c67x00/
H A Dc67x00-hcd.h47 * isochronous transfers are scheduled), in order to optimize the throughput
/linux-4.4.14/arch/unicore32/include/mach/
H A Dregs-umal.h77 * throughput
/linux-4.4.14/drivers/staging/fbtft/
H A Dfbtft-core.c350 long fps, throughput; fbtft_update_display() local
402 throughput = ktime_us_delta(ts_end, ts_start); fbtft_update_display()
403 throughput = throughput ? (len * 1000) / throughput : 0; fbtft_update_display()
404 throughput = throughput * 1000 / 1024; fbtft_update_display()
408 throughput, fps); fbtft_update_display()
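The fbtft-core.c lines above turn an elapsed time in microseconds into KiB/s in two steps: `(len * 1000) / us` is bytes per millisecond, and `* 1000 / 1024` rescales that to KiB per second, with a guard for a zero delta. A sketch of the arithmetic (function name assumed):

```c
#include <assert.h>

/* `len` bytes pushed to the display in `elapsed_us` microseconds. */
static long fbtft_throughput_kib(long len, long elapsed_us)
{
	long throughput = elapsed_us ? (len * 1000) / elapsed_us : 0;

	return throughput * 1000 / 1024;	/* KiB/s */
}
```

For example, 1024 bytes in 1000 µs is 1 MB/s, reported as roughly 1000 KiB/s after the 1024 rescale.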
/linux-4.4.14/drivers/staging/rtl8192u/
H A Dr8192U_wx.c208 __u32 throughput; /* To give an idea... */ member in struct:iw_range_with_scan_capa
210 * TCP/IP throughput, because with most of these devices the
249 range->throughput = 5 * 1000 * 1000; rtl8180_wx_get_range()
H A Dr8192U.h284 /* Interpret RtsRate field as high throughput data rate */
H A Dr8192U_core.c4620 /* 11n High throughput rate */ UpdateReceivedRateHistogramStatistics8190()
/linux-4.4.14/drivers/spi/
H A Dspi-mpc512x-psc.c251 * to balance throughput against system load; the mpc512x_psc_spi_transfer_rxtx()
259 * of the timeout either decreases throughput mpc512x_psc_spi_transfer_rxtx()
/linux-4.4.14/drivers/infiniband/hw/mthca/
H A Dmthca_profile.c263 mthca_warn(dev, "Disabling memory key throughput optimization.\n"); mthca_make_profile()
H A Dmthca_mr.c845 mthca_dbg(dev, "Memory key throughput optimization activated.\n"); mthca_init_mr_table()
/linux-4.4.14/block/
H A Dblk.h160 * throughput too. For example, we have request flush1, write1, __elv_next_request()
H A Ddeadline-iosched.c24 by the above parameters. For throughput. */
H A Dblk-settings.c459 * sustained throughput is desired.
478 * sustained throughput is desired.
H A Dblk-core.c3013 * of the request stacking driver and prevents I/O throughput regression
3528 /* used for unplugging and affects IO latency/throughput - HIGHPRI */ blk_dev_init()
/linux-4.4.14/net/dccp/ccids/lib/
H A Dtfrc_equation.c24 The following two-column lookup table implements a part of the TCP throughput
/linux-4.4.14/net/ipv4/
H A Dtcp_htcp.c118 /* achieved throughput calculations */ measure_achieved_throughput()
H A Dtcp_cdg.c12 * throughput and delay. Future work is needed to determine better defaults,
H A Dtcp_dctcp.c13 * - High throughput (continuous data updates, large file transfers)
H A Dtcp.c191 * algorithm. This doubles throughput
H A Dtcp_input.c326 * throughput and the higher sensitivity of the connection to losses. 8)
/linux-4.4.14/drivers/scsi/
H A Dconstants.c931 {0x5D19, "Hardware impending failure throughput performance"},
944 {0x5D29, "Controller impending failure throughput performance"},
957 {0x5D39, "Data channel impending failure throughput performance"},
971 {0x5D49, "Servo impending failure throughput performance"},
984 {0x5D59, "Spindle impending failure throughput performance"},
997 {0x5D69, "Firmware impending failure throughput performance"},
H A Dfdomain.c254 up the machine. I have found that 2 is a good number, but throughput may
H A Dhpsa.c994 * processor. This seems to give the best I/O throughput. set_ioaccel1_performant_mode()
1018 * processor. This seems to give the best I/O throughput. set_ioaccel2_tmf_performant_mode()
1040 * processor. This seems to give the best I/O throughput. set_ioaccel2_performant_mode()
H A DNCR5380.c131 * throughput. Note that both I_T_L and I_T_L_Q nexuses are supported,
H A Datari_NCR5380.c118 * throughput. Note that both I_T_L and I_T_L_Q nexuses are supported,
H A Deata.c279 * increase in the range 10%-20% on i/o throughput.
H A Dadvansys.c10819 * by enabling clustering, I/O throughput increases as well.
/linux-4.4.14/drivers/net/ethernet/intel/fm10k/
H A Dfm10k_pci.c1783 /* 8b/10b encoding reduces max throughput by 20% */ fm10k_slot_warn()
1787 /* 8b/10b encoding reduces max throughput by 20% */ fm10k_slot_warn()
1791 /* 128b/130b encoding has less than 2% impact on throughput */ fm10k_slot_warn()
1821 /* 8b/10b encoding reduces max throughput by 20% */ fm10k_slot_warn()
1825 /* 8b/10b encoding reduces max throughput by 20% */ fm10k_slot_warn()
1829 /* 128b/130b encoding has less than 2% impact on throughput */ fm10k_slot_warn()
H A Dfm10k_main.c1360 * minimize response time while increasing bulk throughput.
H A Dfm10k_pf.c741 * of Mb/s of outgoing Tx throughput.
/linux-4.4.14/drivers/staging/rtl8192u/ieee80211/
H A Dieee80211_tx.c477 {// 11n High throughput case. ieee80211_query_protectionmode()
514 // throughput around 10M, so we disable of this mechanism. 2007.08.03 by Emily ieee80211_query_protectionmode()
/linux-4.4.14/kernel/power/
H A Dqos.c11 * There are 3 basic classes of QoS parameter: latency, timeout, throughput
15 * throughput: kbs (kilo byte / sec)
/linux-4.4.14/drivers/usb/serial/
H A Dkeyspan_usa26msg.h97 1999feb10 add txAckThreshold for fast+loose throughput enhancement
H A Dkeyspan_usa49msg.h100 1999feb10 add txAckThreshold for fast+loose throughput enhancement
H A Dkeyspan_usa67msg.h101 1999feb10 add txAckThreshold for fast+loose throughput enhancement
H A Dcypress_m8.c258 * 115200bps (but the actual throughput is around 3kBps). analyze_baud_rate()
/linux-4.4.14/drivers/net/wireless/ath/carl9170/
H A Dmac.c355 * vicinity and the network throughput will suffer carl9170_set_operating_mode()
H A Dcarl9170.h160 * retries => Latency goes up, whereas the throughput goes down. CRASH!
/linux-4.4.14/drivers/net/ethernet/stmicro/stmmac/
H A Ddwmac1000.h273 * but packet throughput performance may not be as expected.
/linux-4.4.14/drivers/fpga/
H A Dzynq-fpga.c255 * - set throughput for maximum speed zynq_fpga_ops_write_init()
/linux-4.4.14/drivers/staging/rtl8192e/rtl8192e/
H A Drtl_wx.c298 __u32 throughput; /* To give an idea... */ member in struct:iw_range_with_scan_capa
300 * TCP/IP throughput, because with most of these devices the
331 range->throughput = 130 * 1000 * 1000; _rtl92e_wx_get_range()
/linux-4.4.14/drivers/net/wireless/ti/wlcore/
H A Dwlcore_i.h88 * as it might hurt the throughput of active STAs.
/linux-4.4.14/include/math-emu/
H A Dop-2.h330 multiplication has much bigger throughput than integer multiply.
/linux-4.4.14/include/linux/mtd/
H A Dubi.h178 * improves write throughput.
/linux-4.4.14/drivers/input/serio/
H A Dhp_sdc.c42 * fully in the ISR, so there are no latency/throughput problems there.
46 * keeping outbound throughput flowing at the 6500KBps that the HIL is
/linux-4.4.14/drivers/tty/
H A Dmetag_da.c61 * A short put delay improves latency but has a high throughput overhead
/linux-4.4.14/drivers/infiniband/hw/qib/
H A Dqib_pcie.c556 * Check and optionally adjust them to maximize our throughput.
/linux-4.4.14/drivers/parisc/
H A Dled.c348 ** calculate if there was TX- or RX-throughput on the network interfaces
/linux-4.4.14/drivers/net/wimax/i2400m/
H A Dnetdev.c94 * for minimizing the jitter in the throughput.
/linux-4.4.14/drivers/net/wireless/ath/ath6kl/
H A Dhif.c520 * determine that we are in a low-throughput mode, we can rely on proc_pending_irqs()
H A Dtxrx.c1363 * Take lock to protect buffer counts and adaptive power throughput ath6kl_rx()
H A Dhtc_mbox.c2219 * performance in high throughput situations. ath6kl_htc_rxmsg_pending_handler()
/linux-4.4.14/drivers/net/ethernet/sfc/
H A Dnic.h99 * throughput, so we only do this if both hardware and software TX rings
/linux-4.4.14/drivers/net/ethernet/atheros/alx/
H A Dreg.h83 /* bit30: L0s/L1 controlled by MAC based on throughput(setting in 15A0) */
/linux-4.4.14/arch/alpha/lib/
H A Dev6-memset.S18 * however the loop has been unrolled to enable better memory throughput,
/linux-4.4.14/fs/nfs/
H A Dproc.c9 * so at last we can have decent(ish) throughput off a
/linux-4.4.14/include/trace/events/
H A Dblock.h486 * the queue to improve throughput performance of the block device.
/linux-4.4.14/arch/m68k/fpsp040/
H A Dbugfix.S14 | * the handler permanently to improve throughput.
/linux-4.4.14/drivers/usb/host/
H A Dehci-hcd.c545 * NVidia and ALI silicon), maximizes throughput on the async ehci_init()
549 * make problems: throughput reduction (!), data errors... ehci_init()
H A Doxu210hp-hcd.c2645 * NVidia and ALI silicon), maximizes throughput on the async oxu_hcd_init()
2649 * make problems: throughput reduction (!), data errors... oxu_hcd_init()
H A Dfotg210-hcd.c5017 * NVidia and ALI silicon), maximizes throughput on the async hcd_fotg210_init()
5021 * make problems: throughput reduction (!), data errors... hcd_fotg210_init()
/linux-4.4.14/kernel/sched/
H A Dsched.h1521 * also adds more overhead and therefore may reduce throughput.
1536 * Unfair double_lock_balance: Optimizes throughput at the expense of
H A Dfair.c1864 * little over-all impact on throughput, and thus their for_each_online_node()
6614 /* Move if we gain throughput */ fix_small_imbalance()
/linux-4.4.14/drivers/ide/
H A Dide-tape.c71 * using a high value might improve system throughput.
1115 * The tape is optimized to maximize throughput when it is transferring an
/linux-4.4.14/include/net/
H A Dmac80211.h3237 * @get_expected_throughput: extract the expected throughput towards the
3558 * struct ieee80211_tpt_blink - throughput blink description
3559 * @throughput: throughput in Kbit/sec
3564 int throughput; member in struct:ieee80211_tpt_blink
3569 * enum ieee80211_tpt_led_trigger_flags - throughput trigger flags
3677 * ieee80211_create_tpt_led_trigger - create throughput LED trigger
3680 * @blink_table: the blink table -- needs to be ordered by throughput
H A Dcfg80211.h1058 * @expected_throughput: expected throughput in kbps (including 802.11 headers)
4820 * but probably no less than maybe 50, or maybe a throughput dependent
/linux-4.4.14/drivers/staging/rtl8712/
H A Drtl8712_recv.c1028 /* Test throughput with Netgear 3700 (No security) with Chariot 3T3R recvbuf2recvframe()
H A Drtl871x_ioctl_linux.c884 range->throughput = 5 * 1000 * 1000; r8711_wx_get_range()
/linux-4.4.14/drivers/staging/lustre/lustre/lov/
H A Dlov_cl_internal.h459 * throughput.
/linux-4.4.14/drivers/staging/rdma/hfi1/
H A Dpcie.c468 * Check and optionally adjust them to maximize our throughput.
/linux-4.4.14/drivers/scsi/ufs/
H A Dufshcd.h528 * CAUTION: Enabling this might reduce overall UFS throughput.
/linux-4.4.14/drivers/mmc/host/
H A Ddavinci_mmc.c155 * platform data) == 16 gives at least the same throughput boost, using
/linux-4.4.14/drivers/net/caif/
H A Dcaif_hsi.c48 * Warning: A high threshold value might increase throughput but it
/linux-4.4.14/drivers/net/wireless/cw1200/
H A Dtxrx.c123 * of time (100-200 ms), leading to valuable throughput drop. tx_policy_build()
/linux-4.4.14/drivers/ata/
H A Dsata_sil24.c1340 * write throughput for pci-e variants. sil24_init_one()
/linux-4.4.14/drivers/atm/
H A Dfore200e.h709 u32 tq_plen; /* transmit throughput measurements */
/linux-4.4.14/net/dccp/ccids/
H A Dccid3.c673 * Assume that X_recv can be computed by the throughput equation
/linux-4.4.14/fs/ufs/
H A Dufs_fs.h193 * however throughput drops by fifty percent if the file system
/linux-4.4.14/net/rds/
H A Dib_send.c346 * using a spinlock showed a 5% degradation in throughput at some
H A Diw_send.c332 * using a spinlock showed a 5% degradation in throughput at some
H A Dsend.c840 * throughput hits a certain threshold. rds_send_queue_rm()
/linux-4.4.14/drivers/usb/dwc3/
H A Dgadget.c153 * to improve FIFO usage and throughput, while still allowing
2483 * Due to this problem, we might experience lower throughput. The dwc3_gadget_linksts_change_interrupt()
/linux-4.4.14/drivers/usb/gadget/function/
H A Df_mass_storage.c45 * double-buffering for increased throughput.
143 * To provide maximum throughput, the driver uses a circular pipeline of
H A Df_ncm.c115 * throughput and will be mostly sending smaller infrequent frames.
/linux-4.4.14/drivers/net/wireless/hostap/
H A Dhostap_ioctl.c1049 /* estimated maximum TCP throughput values (bps) */ prism2_ioctl_giwrange()
1050 range->throughput = over2 ? 5500000 : 1500000; prism2_ioctl_giwrange()
/linux-4.4.14/drivers/staging/rtl8723au/include/
H A Drtl8723a_spec.h801 /* Difference of gain index between legacy and high throughput OFDM. */
/linux-4.4.14/drivers/usb/core/
H A Durb.c254 * throughput. With that queuing policy, an endpoint's queue would never
H A Dmessage.c472 * significantly improve USB throughput.
/linux-4.4.14/drivers/net/ethernet/nvidia/
H A Dforcedeth.c3622 * (reduce CPU and increase throughput). They use descriptor version 3, nv_probe()
5873 /* start off in throughput mode */ nv_probe()
6362 MODULE_PARM_DESC(optimization_mode, "In throughput mode (0), every tx & rx packet will generate an interrupt. In CPU mode (1), interrupts are controlled by a timer. In dynamic mode (2), the mode toggles between throughput and CPU mode based on network load.");
/linux-4.4.14/drivers/crypto/
H A Dpicoxcell_crypto.c1695 * reasonable trade off of latency against throughput but can be spacc_probe()
/linux-4.4.14/drivers/base/power/opp/
H A Dcore.c177 * conditions) for short duration of times to finish high throughput work
/linux-4.4.14/drivers/usb/gadget/udc/
H A Dat91_udc.c412 * throughput much. (Unlike preventing OUT-NAKing!) write_fifo()
/linux-4.4.14/drivers/tty/serial/
H A Dioc3_serial.c464 * throughput by 10% or so unless we enable high speed polling port_init()
/linux-4.4.14/drivers/net/wireless/realtek/rtlwifi/rtl8192de/
H A Dhw.c834 /* For throughput */ _rtl92de_hw_configure()
/linux-4.4.14/drivers/net/ethernet/intel/i40evf/
H A Di40e_txrx.c329 * while increasing bulk throughput.
/linux-4.4.14/drivers/net/ethernet/intel/ixgbe/
H A Dixgbe_main.c277 /* 8b/10b encoding reduces max throughput by 20% */ ixgbe_check_minimum_link()
281 /* 8b/10b encoding reduces max throughput by 20% */ ixgbe_check_minimum_link()
285 /* 128b/130b encoding reduces throughput by less than 2% */ ixgbe_check_minimum_link()
2256 * while increasing bulk throughput.
/linux-4.4.14/drivers/net/wireless/
H A Dwl3501_cs.c1513 range->throughput = 2 * 1000 * 1000; /* ~2 Mb/s */ wl3501_get_range()
H A Dairo.c6943 /* Set an indication of the max TCP throughput airo_get_range()
6947 range->throughput = 5000 * 1000; airo_get_range()
6949 range->throughput = 1500 * 1000; airo_get_range()
H A Dray_cs.c1336 range->throughput = 1.1 * 1000 * 1000; /* Put the right number here */ ray_get_range()
H A Dmwl8k.c4464 /* Set if peer supports 802.11n high throughput (HT). */
/linux-4.4.14/drivers/net/wireless/ath/ath10k/
H A Dhtt_rx.c178 * This probably comes at a cost of lower maximum throughput but ath10k_htt_rx_msdu_buff_replenish()
H A Dmac.c2336 * zero in VHT IE. Using it would result in degraded throughput. ath10k_peer_assoc_h_vht()
/linux-4.4.14/drivers/net/wireless/ath/ath5k/
H A Dath5k.h394 * that is supposed to provide a throughput transmission speed up to 40Mbit/s
/linux-4.4.14/drivers/net/ethernet/smsc/
H A Dsmc91x.c145 * but to the expense of reduced TX throughput and increased IRQ overhead.
/linux-4.4.14/net/batman-adv/
H A Dbat_iv_ogm.c1228 * interfaces and other half duplex devices suffer from throughput batadv_iv_ogm_calc_tq()
/linux-4.4.14/sound/isa/wavefront/
H A Dwavefront_synth.c62 throughput based on my limited experimentation.
/linux-4.4.14/drivers/staging/rtl8188eu/os_dep/
H A Dioctl_linux.c883 range->throughput = 5 * 1000 * 1000; rtw_wx_get_range()
/linux-4.4.14/drivers/staging/fwserial/
H A Dfwserial.c488 * relatively high throughput, the ldisc frequently lags well behind the driver,
/linux-4.4.14/drivers/net/ethernet/intel/i40e/
H A Di40e_txrx.c826 * while increasing bulk throughput.
/linux-4.4.14/drivers/net/ethernet/intel/igbvf/
H A Dnetdev.c660 * time while increasing bulk throughput.
/linux-4.4.14/drivers/net/usb/
H A Dhso.c43 * throughput.
/linux-4.4.14/drivers/net/wan/
H A Dfarsync.c60 * and maximise throughput
/linux-4.4.14/drivers/net/ethernet/intel/
H A De100.c1222 * the ACKs were received was enough to reduce total throughput, because
/linux-4.4.14/drivers/gpu/drm/i915/
H A Dintel_ringbuffer.c901 /* Improve HiZ throughput on CHV. */ chv_init_workarounds()
/linux-4.4.14/drivers/block/
H A Dpktcdvd.c832 * - Optimize for throughput at the expense of latency. This means that streaming
/linux-4.4.14/mm/
H A Dpage-writeback.c720 * The wb's share of dirty limit will be adapting to its throughput and
/linux-4.4.14/net/core/
H A Dsock.c65 * Alan Cox : Added optimistic memory grabbing for AF_UNIX throughput.
/linux-4.4.14/drivers/net/ethernet/intel/e1000e/
H A Dnetdev.c1390 * packet throughput, so unsplit small packets and e1000_clean_rx_irq_ps()
2505 * while increasing bulk throughput. This functionality is controlled
/linux-4.4.14/drivers/net/ethernet/intel/igb/
H A Digb_main.c4428 * throughput.
4502 * while increasing bulk throughput.
/linux-4.4.14/drivers/net/wireless/brcm80211/brcmsmac/
H A Dmain.c1958 * accesses phyreg throughput mac. This can be skipped since brcms_b_radio_read_hwdisabled()
2040 * phyreg throughput mac, AND phy_reset is skipped at early stage when brcms_b_corereset()
/linux-4.4.14/drivers/net/ethernet/intel/ixgbevf/
H A Dixgbevf_main.c1182 * while increasing bulk throughput.
/linux-4.4.14/drivers/net/ethernet/myricom/myri10ge/
H A Dmyri10ge.c3321 * The Lanai Z8E PCI-E interface achieves higher Read-DMA throughput
/linux-4.4.14/drivers/net/ethernet/intel/e1000/
H A De1000_main.c2579 * while increasing bulk throughput.
/linux-4.4.14/drivers/net/wireless/ipw2x00/
H A Dipw2100.c6805 range->throughput = 5 * 1000 * 1000; ipw2100_wx_get_range()
H A Dipw2200.c8852 range->throughput = 27 * 1000 * 1000; ipw_wx_get_range()

Completed in 7865 milliseconds