ath11k: fix memory leak of 'combinations' #183
Closed
Conversation
Author comment:
Master branch: d82a532
Pull request is NOT updated. Failed to apply https://patchwork.kernel.org/project/bpf/list/?series=360459
conflict:
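For reference, the series fixes a leak of the interface-combination array that ath11k allocates when registering with mac80211. A minimal sketch of the kind of cleanup involved (function name and exact placement are assumptions based on the ath11k/cfg80211 sources, not copied from the patch itself):

```c
#include <linux/slab.h>
#include <net/mac80211.h>

/* Sketch: free what the interface-combinations setup allocated. The
 * patch's exact placement (unregister and/or register error paths)
 * may differ; this function name is hypothetical.
 */
static void ath11k_mac_cleanup_iface_combinations(struct ieee80211_hw *hw)
{
	struct wiphy *wiphy = hw->wiphy;

	kfree(wiphy->iface_combinations[0].limits);
	kfree(wiphy->iface_combinations);
	wiphy->iface_combinations = NULL;
	wiphy->n_iface_combinations = 0;
}
```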
kernel-patches-bot pushed a commit that referenced this pull request on Nov 12, 2021 (the same commit was pushed again several times through Nov 16, 2021):

This patch adds '--timing' to test_progs. It tracks and prints timing information for each test, and also prints the top 10 slowest tests in the summary. Example output:

$ ./test_progs --timing -j
#1 align:OK (16 ms)
...
#203 xdp_bonding:OK (2019 ms)
#206 xdp_cpumap_attach:OK (3 ms)
#207 xdp_devmap_attach:OK (4 ms)
#208 xdp_info:OK (4 ms)
#209 xdp_link:OK (4 ms)

Top 10 Slowest tests:
#48 fexit_stress: 34356 ms
#160 test_lsm: 29602 ms
#161 test_overhead: 29190 ms
#159 test_local_storage: 28959 ms
#158 test_ima: 28521 ms
#185 verif_scale_pyperf600: 19524 ms
#199 vmlinux: 17310 ms
#154 tc_redirect: 11491 ms (serial)
#147 task_local_storage: 7612 ms
#183 verif_scale_pyperf180: 7186 ms

Summary: 212/973 PASSED, 3 SKIPPED, 0 FAILED

Signed-off-by: Yucong Sun <[email protected]>
kernel-patches-bot pushed a commit that referenced this pull request on Nov 18, 2021 (pushed twice with the same message):

In this patch -
1) Add a new prog "for_each_helper" which tests the basic functionality of the bpf_for_each helper.
2) Add pyperf600_foreach and strobemeta_foreach to test the performance of using bpf_for_each instead of a for loop.

The results of pyperf600 and strobemeta are as follows:

~strobemeta~
Baseline
verification time 6808200 usec
stack depth 496
processed 592132 insns (limit 1000000) max_states_per_insn 14
total_states 16018 peak_states 13684 mark_read 3132
#188 verif_scale_strobemeta:OK (unrolled loop)

Using bpf_for_each
verification time 31589 usec
stack depth 96+408
processed 1630 insns (limit 1000000) max_states_per_insn 4
total_states 107 peak_states 107 mark_read 60
#189 verif_scale_strobemeta_foreach:OK

~pyperf600~
Baseline
verification time 29702486 usec
stack depth 368
processed 626838 insns (limit 1000000) max_states_per_insn 7
total_states 30368 peak_states 30279 mark_read 748
#182 verif_scale_pyperf600:OK (unrolled loop)

Using bpf_for_each
verification time 148488 usec
stack depth 320+40
processed 10518 insns (limit 1000000) max_states_per_insn 10
total_states 705 peak_states 517 mark_read 38
#183 verif_scale_pyperf600_foreach:OK

Using the bpf_for_each helper led to approximately a 99% decrease in the verification time and in the number of instructions.

Signed-off-by: Joanne Koong <[email protected]>
kernel-patches-bot pushed this commit repeatedly between Nov 23 and Nov 30, 2021, referencing this pull request; the final version of the message:

This patch tests bpf_loop in pyperf and strobemeta, and measures the verifier performance of replacing the traditional for loop with bpf_loop.

The results are as follows:

~strobemeta~
Baseline
verification time 6808200 usec
stack depth 496
processed 554252 insns (limit 1000000) max_states_per_insn 16
total_states 15878 peak_states 13489 mark_read 3110
#192 verif_scale_strobemeta:OK (unrolled loop)

Using bpf_loop
verification time 31589 usec
stack depth 96+400
processed 1513 insns (limit 1000000) max_states_per_insn 2
total_states 106 peak_states 106 mark_read 60
#193 verif_scale_strobemeta_bpf_loop:OK

~pyperf600~
Baseline
verification time 29702486 usec
stack depth 368
processed 626838 insns (limit 1000000) max_states_per_insn 7
total_states 30368 peak_states 30279 mark_read 748
#182 verif_scale_pyperf600:OK (unrolled loop)

Using bpf_loop
verification time 148488 usec
stack depth 320+40
processed 10518 insns (limit 1000000) max_states_per_insn 10
total_states 705 peak_states 517 mark_read 38
#183 verif_scale_pyperf600_bpf_loop:OK

Using the bpf_loop helper led to approximately a 99% decrease in the verification time and in the number of instructions.

Signed-off-by: Joanne Koong <[email protected]>
Signed-off-by: Alexei Starovoitov <[email protected]>
Acked-by: Andrii Nakryiko <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
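For context, a minimal sketch of the pattern these selftests measure: replacing an unrolled loop with the bpf_loop helper plus a callback, so the verifier checks the loop body once instead of once per unrolled iteration. The names below are illustrative, not taken from the actual pyperf/strobemeta sources:

```c
// SPDX-License-Identifier: GPL-2.0
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct loop_ctx {
	__u64 sum; /* illustrative accumulator */
};

/* Callback invoked once per iteration; return 0 to continue, 1 to stop early. */
static long accumulate(__u64 index, void *data)
{
	struct loop_ctx *lc = data;

	lc->sum += index;
	return 0;
}

SEC("tc")
int sum_with_bpf_loop(struct __sk_buff *skb)
{
	struct loop_ctx lc = {};

	/* One verified callback instead of 600 unrolled copies; this is
	 * where the ~99% verification-time reduction comes from.
	 */
	bpf_loop(600, accumulate, &lc, 0);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";
```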
kernel-patches-bot pushed a commit that referenced this pull request on Oct 4, 2022:

Add a big batch of selftests to extend test_progs with various tc link, attach ops and old-style tc BPF attachments via libbpf APIs. Also test multi-program attachments, including mixing the various attach options:

# ./test_progs -t tc_link
#179 tc_link_base:OK
#180 tc_link_detach:OK
#181 tc_link_mix:OK
#182 tc_link_opts:OK
#183 tc_link_run_base:OK
#184 tc_link_run_chain:OK
Summary: 6/0 PASSED, 0 SKIPPED, 0 FAILED

All new and existing test cases pass.

Co-developed-by: Nikolay Aleksandrov <[email protected]>
Signed-off-by: Nikolay Aleksandrov <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
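A rough illustration of the link-based tc attachment this libbpf API provides (a sketch: the object file "tc_prog.bpf.o" and program name "tc_prog" are assumptions, and the BPF program is expected to carry a SEC("tcx/ingress") annotation so libbpf selects the ingress hook):

```c
#include <stdio.h>
#include <bpf/libbpf.h>

int attach_tc_link(int ifindex)
{
	LIBBPF_OPTS(bpf_tcx_opts, opts); /* default attach options */
	struct bpf_object *obj;
	struct bpf_program *prog;
	struct bpf_link *link;

	obj = bpf_object__open_file("tc_prog.bpf.o", NULL); /* hypothetical object */
	if (!obj || bpf_object__load(obj))
		return -1;

	prog = bpf_object__find_program_by_name(obj, "tc_prog"); /* hypothetical name */
	if (!prog)
		return -1;

	/* Create a tcx link; the hook (ingress/egress) comes from the
	 * program's SEC() annotation.
	 */
	link = bpf_program__attach_tcx(prog, ifindex, &opts);
	if (!link) {
		fprintf(stderr, "attach failed\n");
		return -1;
	}

	bpf_link__destroy(link); /* destroying the link detaches the program */
	bpf_object__close(obj);
	return 0;
}
```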
kernel-patches-daemon-bpf bot pushed a commit that referenced this pull request on Mar 12, 2024:

Attempting to do lock_sock on .recvmsg may cause a deadlock as shown
below, so instead of using lock_sock this uses sk_receive_queue.lock
on bt_sock_ioctl to avoid the UAF:
INFO: task kworker/u9:1:121 blocked for more than 30 seconds.
Not tainted 6.7.6-lemon #183
Workqueue: hci0 hci_rx_work
Call Trace:
<TASK>
__schedule+0x37d/0xa00
schedule+0x32/0xe0
__lock_sock+0x68/0xa0
? __pfx_autoremove_wake_function+0x10/0x10
lock_sock_nested+0x43/0x50
l2cap_sock_recv_cb+0x21/0xa0
l2cap_recv_frame+0x55b/0x30a0
? psi_task_switch+0xeb/0x270
? finish_task_switch.isra.0+0x93/0x2a0
hci_rx_work+0x33a/0x3f0
process_one_work+0x13a/0x2f0
worker_thread+0x2f0/0x410
? __pfx_worker_thread+0x10/0x10
kthread+0xe0/0x110
? __pfx_kthread+0x10/0x10
ret_from_fork+0x2c/0x50
? __pfx_kthread+0x10/0x10
ret_from_fork_asm+0x1b/0x30
</TASK>
Fixes: 2e07e83 ("Bluetooth: af_bluetooth: Fix Use-After-Free in bt_sock_recvmsg")
Signed-off-by: Luiz Augusto von Dentz <[email protected]>
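The gist of the fix, sketched (an illustrative pattern, not the exact upstream diff; the helper name here is hypothetical): peek the receive queue under its own spinlock rather than taking the socket lock, so the ioctl path cannot deadlock against hci_rx_work holding the socket lock:

```c
#include <linux/skbuff.h>
#include <net/sock.h>

/* Illustrative: query queued bytes without lock_sock(). */
static int bt_sock_queued_bytes(struct sock *sk)
{
	struct sk_buff *skb;
	int amount = 0;

	spin_lock(&sk->sk_receive_queue.lock);
	skb = skb_peek(&sk->sk_receive_queue);
	if (skb)
		amount = skb->len;
	spin_unlock(&sk->sk_receive_queue.lock);

	return amount;
}
```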
guidosarducci added a commit to guidosarducci/bpf-ci that referenced this pull request (pushed repeatedly between Sep 19 and Sep 24, 2025):

This test_progs test fails on 32-bit armhf:

root@qemu-armhf:/usr/libexec/kselftests-bpf# test_progs -a lwt_seg6local
[...]
test_lwt_seg6local:PASS:setup 0 nsec
test_lwt_seg6local:PASS:open ns6 0 nsec
test_lwt_seg6local:PASS:start server 0 nsec
test_lwt_seg6local:PASS:open ns1 0 nsec
test_lwt_seg6local:PASS:start client 0 nsec
test_lwt_seg6local:PASS:build target addr 0 nsec
test_lwt_seg6local:PASS:send packet 0 nsec
test_lwt_seg6local:FAIL:receive packet unexpected receive packet: actual 4 != expected 7
kernel-patches#183 lwt_seg6local:FAIL

This happens because a sendto() call mistakenly uses 'sizeof(char *)' as the message length rather than the actual string ("foobar\0") size, e.g.:

bytes = sendto(cfd, foobar, sizeof(foobar), 0, ...

This likely passed by accident until now because BPF CI only tests 64-bit targets. Fix by using strlen() to determine the message length.

Fixes: 1041b8b ("selftests/bpf: lwt_seg6local: Move test to test_progs")
Signed-off-by: Tony Ambardar <[email protected]>
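The bug class in miniature (a sketch: the variable names come from the commit message, the destination arguments are assumed): when the message is referenced through a pointer, sizeof() yields the pointer size, so only 4 bytes go out on a 32-bit target, whereas strlen()+1 sends the full 7-byte "foobar\0":

```c
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>

static const char *foobar = "foobar"; /* pointer: sizeof(foobar) == sizeof(char *) */

ssize_t send_foobar(int cfd, const struct sockaddr *dst, socklen_t dlen)
{
	/* Buggy: sends sizeof(char *) bytes (4 on armhf), truncating the message.
	 *   return sendto(cfd, foobar, sizeof(foobar), 0, dst, dlen);
	 */

	/* Fixed: send the string plus its NUL terminator (7 bytes, matching
	 * the test's expected length).
	 */
	return sendto(cfd, foobar, strlen(foobar) + 1, 0, dst, dlen);
}
```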
Pull request for series with
subject: ath11k: fix memory leak of 'combinations'
version: 1
url: https://patchwork.kernel.org/project/bpf/list/?series=360459