Fixed overflow bug in binary search. #14
Closed
Conversation
Author
Just saw the Documentation/SubmittingPatches file; will resubmit properly.
torvalds pushed a commit that referenced this pull request Feb 8, 2012
Overly indented code should be refactored.
Suggest refactoring excessive indentation of
if/else/for/do/while/switch statements.
For example:
$ cat t.c
#include <stdio.h>
#include <stdlib.h>
int main(int argc, char **argv)
{
if (1)
if (2)
if (3)
if (4)
if (5)
if (6)
if (7)
if (8)
;
return 0;
}
$ ./scripts/checkpatch.pl -f t.c
WARNING: Too many leading tabs - consider code refactoring
#12: FILE: t.c:12:
+ if (6)
WARNING: Too many leading tabs - consider code refactoring
#13: FILE: t.c:13:
+ if (7)
WARNING: Too many leading tabs - consider code refactoring
#14: FILE: t.c:14:
+ if (8)
total: 0 errors, 3 warnings, 17 lines checked
t.c has style problems, please review.
If any of these errors are false positives, please report
them to the maintainer, see CHECKPATCH in MAINTAINERS.
Signed-off-by: Joe Perches <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
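As a hedged illustration of the refactoring the warning asks for (not part of the patch itself), the nested ifs in t.c can be folded into a single condition, which keeps the indentation below the threshold:

#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
	/* Same effect as the eight nested ifs above, but without the deep
	 * indentation that triggers "Too many leading tabs". */
	if (1 && 2 && 3 && 4 && 5 && 6 && 7 && 8)
		;
	return 0;
}

Running the same ./scripts/checkpatch.pl -f command on this version should report no leading-tab warnings.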
jkstrick pushed a commit to jkstrick/linux that referenced this pull request Feb 11, 2012

If the netdev is already in NETREG_UNREGISTERING/_UNREGISTERED state, do not update the real num tx queues. netdev_queue_update_kobjects() is already called via remove_queue_kobjects() at NETREG_UNREGISTERING time. So, when upper layer driver, e.g., FCoE protocol stack is monitoring the netdev event of NETDEV_UNREGISTER and calls back to LLD ndo_fcoe_disable() to remove extra queues allocated for FCoE, the associated txq sysfs kobjects are already removed, and trying to update the real num queues would cause something like below:

...
PID: 25138  TASK: ffff88021e64c440  CPU: 3  COMMAND: "kworker/3:3"
#0 [ffff88021f007760] machine_kexec at ffffffff810226d9
#1 [ffff88021f0077d0] crash_kexec at ffffffff81089d2d
#2 [ffff88021f0078a0] oops_end at ffffffff813bca78
#3 [ffff88021f0078d0] no_context at ffffffff81029e72
#4 [ffff88021f007920] __bad_area_nosemaphore at ffffffff8102a155
#5 [ffff88021f0079f0] bad_area_nosemaphore at ffffffff8102a23e
#6 [ffff88021f007a00] do_page_fault at ffffffff813bf32e
#7 [ffff88021f007b10] page_fault at ffffffff813bc045
    [exception RIP: sysfs_find_dirent+17]
    RIP: ffffffff81178611  RSP: ffff88021f007bc0  RFLAGS: 00010246
    RAX: ffff88021e64c440  RBX: ffffffff8156cc63  RCX: 0000000000000004
    RDX: ffffffff8156cc63  RSI: 0000000000000000  RDI: 0000000000000000
    RBP: ffff88021f007be0  R8: 0000000000000004  R9: 0000000000000008
    R10: ffffffff816fed00  R11: 0000000000000004  R12: 0000000000000000
    R13: ffffffff8156cc63  R14: 0000000000000000  R15: ffff8802222a0000
    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
#8 [ffff88021f007be8] sysfs_get_dirent at ffffffff81178c07
#9 [ffff88021f007c18] sysfs_remove_group at ffffffff8117ac27
#10 [ffff88021f007c48] netdev_queue_update_kobjects at ffffffff813178f9
#11 [ffff88021f007c88] netif_set_real_num_tx_queues at ffffffff81303e38
#12 [ffff88021f007cc8] ixgbe_set_num_queues at ffffffffa0249763 [ixgbe]
#13 [ffff88021f007cf8] ixgbe_init_interrupt_scheme at ffffffffa024ea89 [ixgbe]
#14 [ffff88021f007d48] ixgbe_fcoe_disable at ffffffffa0267113 [ixgbe]
#15 [ffff88021f007d68] vlan_dev_fcoe_disable at ffffffffa014fef5 [8021q]
#16 [ffff88021f007d78] fcoe_interface_cleanup at ffffffffa02b7dfd [fcoe]
#17 [ffff88021f007df8] fcoe_destroy_work at ffffffffa02b7f08 [fcoe]
#18 [ffff88021f007e18] process_one_work at ffffffff8105d7ca
#19 [ffff88021f007e68] worker_thread at ffffffff81060513
#20 [ffff88021f007ee8] kthread at ffffffff810648b6
#21 [ffff88021f007f48] kernel_thread_helper at ffffffff813c40f4

Signed-off-by: Yi Zou <[email protected]>
Tested-by: Ross Brattain <[email protected]>
Tested-by: Stephen Ko <[email protected]>
Signed-off-by: Jeff Kirsher <[email protected]>
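A minimal C sketch of the guard the message describes (an illustration only; sketch_set_real_num_tx_queues is a placeholder name, not the actual netif_set_real_num_tx_queues() change): once the device has started unregistering, leave the tx queue count and its sysfs kobjects alone.

#include <linux/netdevice.h>

/* Hedged sketch of the guard described above; this is a placeholder, not
 * the actual netif_set_real_num_tx_queues() change from the patch. */
static int sketch_set_real_num_tx_queues(struct net_device *dev,
					 unsigned int txq)
{
	/* Once unregistering has started, remove_queue_kobjects() has already
	 * torn down the txq sysfs kobjects; touching them again would oops. */
	if (dev->reg_state == NETREG_UNREGISTERING ||
	    dev->reg_state == NETREG_UNREGISTERED)
		return 0;

	/* ... otherwise update the txq kobjects and the real queue count ... */
	dev->real_num_tx_queues = txq;
	return 0;
}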
zachariasmaladroit pushed a commit to galaxys-cm7miui-kernel/linux that referenced this pull request Feb 11, 2012
If the netdev is already in NETREG_UNREGISTERING/_UNREGISTERED state, do not update the real num tx queues.
tworaz pushed a commit to tworaz/linux that referenced this pull request Feb 13, 2012

…S block during isolation for migration

commit 0bf380b upstream.

When isolating for migration, migration starts at the start of a zone which is not necessarily pageblock aligned. Further, it stops isolating when COMPACT_CLUSTER_MAX pages are isolated so migrate_pfn is generally not aligned. This allows isolate_migratepages() to call pfn_to_page() on an invalid PFN which can result in a crash. This was originally reported against a 3.0-based kernel with the following trace in a crash dump.

PID: 9902  TASK: d47aecd0  CPU: 0  COMMAND: "memcg_process_s"
#0 [d72d3ad0] crash_kexec at c028cfdb
#1 [d72d3b24] oops_end at c05c5322
#2 [d72d3b38] __bad_area_nosemaphore at c0227e60
#3 [d72d3bec] bad_area at c0227fb6
#4 [d72d3c00] do_page_fault at c05c72e
#5 [d72d3c80] error_code (via page_fault) at c05c47a4
    EAX: 00000000  EBX: 000c0000  ECX: 00000001  EDX: 00000807  EBP: 000c0000
    DS: 007b  ESI: 00000001  ES: 007b  EDI: f3000a80  GS: 6f50
    CS: 0060  EIP: c030b15a  ERR: ffffffff  EFLAGS: 00010002
#6 [d72d3cb4] isolate_migratepages at c030b15a
#7 [d72d3d14] zone_watermark_ok at c02d26cb
#8 [d72d3d2c] compact_zone at c030b8d
#9 [d72d3d68] compact_zone_order at c030bba1
#10 [d72d3db4] try_to_compact_pages at c030bc84
#11 [d72d3ddc] __alloc_pages_direct_compact at c02d61e7
#12 [d72d3e08] __alloc_pages_slowpath at c02d66c7
#13 [d72d3e78] __alloc_pages_nodemask at c02d6a97
#14 [d72d3eb8] alloc_pages_vma at c030a845
#15 [d72d3ed4] do_huge_pmd_anonymous_page at c03178eb
#16 [d72d3f00] handle_mm_fault at c02f36c6
#17 [d72d3f30] do_page_fault at c05c70ed
#18 [d72d3fb] error_code (via page_fault) at c05c47a4
    EAX: b71ff000  EBX: 00000001  ECX: 00001600  EDX: 00000431
    DS: 007b  ESI: 08048950  ES: 007b  EDI: bfaa3788  SS: 007b  ESP: bfaa36e0  EBP: bfaa3828  GS: 6f50
    CS: 0073  EIP: 080487c8  ERR: ffffffff  EFLAGS: 00010202

It was also reported by Herbert van den Bergh against 3.1-based kernel with the following snippet from the console log.

BUG: unable to handle kernel paging request at 01c00008
IP: [<c0522399>] isolate_migratepages+0x119/0x390
*pdpt = 000000002f7ce001 *pde = 0000000000000000

It is expected that it also affects 3.2.x and current mainline.

The problem is that pfn_valid is only called on the first PFN being checked and that PFN is not necessarily aligned. Lets say we have a case like this

H = MAX_ORDER_NR_PAGES boundary
| = pageblock boundary
m = cc->migrate_pfn
f = cc->free_pfn
o = memory hole

H------|------H------|----m-Hoooooo|ooooooH-f----|------H

The migrate_pfn is just below a memory hole and the free scanner is beyond the hole. When isolate_migratepages started, it scans from migrate_pfn to migrate_pfn+pageblock_nr_pages which is now in a memory hole. It checks pfn_valid() on the first PFN but then scans into the hole where there are not necessarily valid struct pages.

This patch ensures that isolate_migratepages calls pfn_valid when necessary.

Reported-by: Herbert van den Bergh <[email protected]>
Tested-by: Herbert van den Bergh <[email protected]>
Signed-off-by: Mel Gorman <[email protected]>
Acked-by: Michal Nazarewicz <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
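A hedged C sketch of the extra check the message describes (an illustration, not the actual mm/compaction.c patch; low_pfn, end_pfn and page are assumed to come from the surrounding scanner loop): re-run pfn_valid() whenever the scan enters a new MAX_ORDER_NR_PAGES block, before calling pfn_to_page().

/* Hedged sketch, not the real isolate_migratepages(): only the first PFN of
 * the range was validated, so re-check pfn_valid() whenever low_pfn crosses
 * into a new MAX_ORDER_NR_PAGES block, where the memmap may have a hole. */
for (; low_pfn < end_pfn; low_pfn++) {
	if ((low_pfn & (MAX_ORDER_NR_PAGES - 1)) == 0 &&
	    !pfn_valid(low_pfn)) {
		low_pfn += MAX_ORDER_NR_PAGES - 1;  /* skip the whole invalid block */
		continue;
	}
	page = pfn_to_page(low_pfn);  /* now known to be backed by a struct page */
	/* ... existing isolation logic ... */
}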
xXorAa pushed a commit to xXorAa/linux that referenced this pull request Feb 17, 2012
…S block during isolation for migration
koenkooi pushed a commit to koenkooi/linux that referenced this pull request Feb 23, 2012
…S block during isolation for migration
koenkooi pushed a commit to koenkooi/linux that referenced this pull request Mar 1, 2012
…S block during isolation for migration
koenkooi pushed a commit to koenkooi/linux that referenced this pull request Mar 19, 2012
…S block during isolation for migration
koenkooi pushed a commit to koenkooi/linux that referenced this pull request Mar 22, 2012
…S block during isolation for migration
koenkooi pushed a commit to koenkooi/linux that referenced this pull request Apr 2, 2012
…S block during isolation for migration
koenkooi pushed a commit to koenkooi/linux that referenced this pull request Apr 9, 2012
…S block during isolation for migration
koenkooi pushed a commit to koenkooi/linux that referenced this pull request Apr 11, 2012
…S block during isolation for migration
koenkooi pushed a commit to koenkooi/linux that referenced this pull request Apr 12, 2012
…S block during isolation for migration
psanford pushed a commit to retailnext/linux that referenced this pull request Apr 16, 2012
…S block during isolation for migration

BugLink: http://bugs.launchpad.net/bugs/931719

commit 0bf380b upstream.

Signed-off-by: Tim Gardner <[email protected]>
koenkooi pushed a commit to koenkooi/linux that referenced this pull request Apr 19, 2012
…S block during isolation for migration
koenkooi pushed a commit to koenkooi/linux that referenced this pull request May 4, 2012
…S block during isolation for migration
koenkooi pushed a commit to koenkooi/linux that referenced this pull request May 4, 2012
…S block during isolation for migration
koenkooi pushed a commit to koenkooi/linux that referenced this pull request May 5, 2012
…S block during isolation for migration
ssvb
pushed a commit
to ssvb/linux-n810
that referenced
this pull request
May 6, 2012
commit 3d2606f upstream.
IBS initialization is a mix of per-core register access and per-node pci device setup. Register access should be pinned to the cpu, but pci setup must run with preemption enabled.
This patch better separates the code into non-/preemptible sections and fixes sleeping with preemption disabled. See bug message below. Fixes also freeing the eilvt entry by introducing put_eilvt().
BUG: sleeping function called from invalid context at mm/slub.c:824
in_atomic(): 1, irqs_disabled(): 0, pid: 32357, name: modprobe
INFO: lockdep is turned off.
Pid: 32357, comm: modprobe Not tainted 2.6.39-rc7+ #14
Call Trace:
[<ffffffff8104bdc8>] __might_sleep+0x112/0x117
[<ffffffff81129693>] kmem_cache_alloc_trace+0x4b/0xe7
[<ffffffff81278f14>] kzalloc.constprop.0+0x29/0x2b
[<ffffffff81278f4c>] pci_get_subsys+0x36/0x78
[<ffffffff81022689>] ? setup_APIC_eilvt+0xfb/0x139
[<ffffffff81278fa4>] pci_get_device+0x16/0x18
[<ffffffffa06c8b5d>] op_amd_init+0xd3/0x211 [oprofile]
[<ffffffffa064d000>] ? 0xffffffffa064cfff
[<ffffffffa064d298>] op_nmi_init+0x21e/0x26a [oprofile]
[<ffffffffa064d062>] oprofile_arch_init+0xe/0x26 [oprofile]
[<ffffffffa064d010>] oprofile_init+0x10/0x42 [oprofile]
[<ffffffff81002099>] do_one_initcall+0x7f/0x13a
[<ffffffff81096524>] sys_init_module+0x132/0x281
[<ffffffff814cc682>] system_call_fastpath+0x16/0x1b
Reported-by: Dave Jones <[email protected]>
Signed-off-by: Robert Richter <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
koenkooi
pushed a commit
to koenkooi/linux
that referenced
this pull request
May 7, 2012
koenkooi
pushed a commit
to koenkooi/linux
that referenced
this pull request
May 9, 2012
koenkooi
pushed a commit
to koenkooi/linux
that referenced
this pull request
May 14, 2012
koenkooi
pushed a commit
to koenkooi/linux
that referenced
this pull request
May 16, 2012
koenkooi
pushed a commit
to koenkooi/linux
that referenced
this pull request
May 17, 2012
koenkooi
pushed a commit
to koenkooi/linux
that referenced
this pull request
May 21, 2012
koenkooi
pushed a commit
to koenkooi/linux
that referenced
this pull request
May 22, 2012
koenkooi
pushed a commit
to koenkooi/linux
that referenced
this pull request
May 22, 2012
aotot
pushed a commit
to jove-decompiler/linux
that referenced
this pull request
Oct 25, 2025
aotot
pushed a commit
to jove-decompiler/linux
that referenced
this pull request
Oct 25, 2025
aotot
pushed a commit
to jove-decompiler/linux
that referenced
this pull request
Oct 25, 2025
aotot
pushed a commit
to jove-decompiler/linux
that referenced
this pull request
Oct 26, 2025
…nally
If
- generic_file_read_iter() gets called with a zero read length,
- the read offset is at a page boundary,
- IOCB_DIRECT is not set
- and the page in question hasn't made it into the page cache yet,
then do_generic_file_read() will trigger a readahead with a req_size hint of zero. Since roundup_pow_of_two(0) is undefined, UBSAN reports
UBSAN: Undefined behaviour in include/linux/log2.h:63:13
shift exponent 64 is too large for 64-bit type 'long unsigned int'
CPU: 3 PID: 1017 Comm: sa1 Tainted: G L 4.5.0-next-20160318+ #14
[...]
Call Trace:
[...]
[<ffffffff813ef61a>] ondemand_readahead+0x3aa/0x3d0
[<ffffffff813ef61a>] ? ondemand_readahead+0x3aa/0x3d0
[<ffffffff813c73bd>] ? find_get_entry+0x2d/0x210
[<ffffffff813ef9c3>] page_cache_sync_readahead+0x63/0xa0
[<ffffffff813cc04d>] do_generic_file_read+0x80d/0xf90
[<ffffffff813cc955>] generic_file_read_iter+0x185/0x420
[...]
[<ffffffff81510b06>] __vfs_read+0x256/0x3d0
[...]
when get_init_ra_size() gets called from ondemand_readahead().
The net effect is that the initial readahead size is arch dependent for requested read lengths of zero: for example, since 1UL << (sizeof(unsigned long) * 8) evaluates to 1 on x86 while its result is 0 on ARMv7, the initial readahead size becomes 4 on the former and 0 on the latter.
What's more, whether or not the file access timestamp is updated for zero length reads is decided differently for the two cases of IOCB_DIRECT being set or cleared: in the first case, generic_file_read_iter() explicitly skips updating that timestamp while in the latter case, it is always updated through the call to do_generic_file_read().
According to POSIX, zero length reads "do not modify the last data access timestamp" and thus, the IOCB_DIRECT behaviour is POSIXly correct.
Let generic_file_read_iter() unconditionally check the requested read length at its entry and return immediately with success if it is zero.
Signed-off-by: Nicolai Stange <[email protected]>
Cc: Al Viro <[email protected]>
Reviewed-by: Jan Kara <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
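Editor's note: a minimal user-space illustration of the behaviour the patch standardises on. Per POSIX, a read with a zero byte count returns 0 and has no other effect, which is exactly the result the fixed generic_file_read_iter() now produces up front, before readahead or timestamp updates are reached. The program is illustrative only; the file path is whatever you pass on the command line.
/* zero_read.c - show that read(fd, buf, 0) simply returns 0.
 * Build: cc -o zero_read zero_read.c && ./zero_read /etc/hostname
 */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(int argc, char **argv)
{
        char buf[16];
        int fd;
        ssize_t n;

        if (argc < 2) {
                fprintf(stderr, "usage: %s <file>\n", argv[0]);
                return 1;
        }
        fd = open(argv[1], O_RDONLY);
        if (fd < 0) {
                perror("open");
                return 1;
        }
        n = read(fd, buf, 0);   /* zero-length read: POSIX says return 0 */
        printf("read(fd, buf, 0) = %zd\n", n);
        close(fd);
        return 0;
}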
aotot
pushed a commit
to jove-decompiler/linux
that referenced
this pull request
Oct 26, 2025
Calling btrfs_qgroup_reserve_meta_prealloc from btrfs_delayed_inode_reserve_metadata can result in flushing delalloc while holding a transaction and delayed node locks. This is deadlock prone. In the past multiple commits:
* f2d7f11 ("btrfs: qgroup: don't try to wait flushing if we're already holding a transaction")
* 8eee22d ("btrfs: qgroup: don't commit transaction when we already hold the handle")
Tried to solve various aspects of this but this was always a whack-a-mole game. Unfortunately those 2 fixes don't solve a deadlock scenario involving btrfs_delayed_node::mutex. Namely, one thread can call btrfs_dirty_inode as a result of reading a file and modifying its atime:
PID: 6963   TASK: ffff8c7f3f94c000  CPU: 2   COMMAND: "test"
#0 __schedule at ffffffffa529e07d
#1 schedule at ffffffffa529e4ff
#2 schedule_timeout at ffffffffa52a1bdd
#3 wait_for_completion at ffffffffa529eeea        <-- sleeps with delayed node mutex held
#4 start_delalloc_inodes at ffffffffc0380db5
#5 btrfs_start_delalloc_snapshot at ffffffffc0393836
#6 try_flush_qgroup at ffffffffc03f04b2
#7 __btrfs_qgroup_reserve_meta at ffffffffc03f5bb6 <-- tries to reserve space and starts delalloc inodes.
#8 btrfs_delayed_update_inode at ffffffffc03e31aa  <-- acquires delayed node mutex
#9 btrfs_update_inode at ffffffffc0385ba8
#10 btrfs_dirty_inode at ffffffffc038627b          <-- TRANSACTION OPENED
#11 touch_atime at ffffffffa4cf0000
#12 generic_file_read_iter at ffffffffa4c1f123
#13 new_sync_read at ffffffffa4ccdc8a
#14 vfs_read at ffffffffa4cd0849
#15 ksys_read at ffffffffa4cd0bd1
#16 do_syscall_64 at ffffffffa4a052eb
#17 entry_SYSCALL_64_after_hwframe at ffffffffa540008c
This will cause an asynchronous work to flush the delalloc inodes to happen which can try to acquire the same delayed_node mutex:
PID: 455    TASK: ffff8c8085fa4000  CPU: 5   COMMAND: "kworker/u16:30"
#0 __schedule at ffffffffa529e07d
#1 schedule at ffffffffa529e4ff
#2 schedule_preempt_disabled at ffffffffa529e80a
#3 __mutex_lock at ffffffffa529fdcb               <-- goes to sleep, never wakes up.
#4 btrfs_delayed_update_inode at ffffffffc03e3143 <-- tries to acquire the mutex
#5 btrfs_update_inode at ffffffffc0385ba8         <-- this is the same inode that pid 6963 is holding
#6 cow_file_range_inline.constprop.78 at ffffffffc0386be7
#7 cow_file_range at ffffffffc03879c1
#8 btrfs_run_delalloc_range at ffffffffc038894c
#9 writepage_delalloc at ffffffffc03a3c8f
#10 __extent_writepage at ffffffffc03a4c01
#11 extent_write_cache_pages at ffffffffc03a500b
#12 extent_writepages at ffffffffc03a6de2
#13 do_writepages at ffffffffa4c277eb
#14 __filemap_fdatawrite_range at ffffffffa4c1e5bb
#15 btrfs_run_delalloc_work at ffffffffc0380987   <-- starts running delayed nodes
#16 normal_work_helper at ffffffffc03b706c
#17 process_one_work at ffffffffa4aba4e4
#18 worker_thread at ffffffffa4aba6fd
#19 kthread at ffffffffa4ac0a3d
#20 ret_from_fork at ffffffffa54001ff
To fully address those cases the complete fix is to never issue any flushing while holding the transaction or the delayed node lock. This patch achieves it by calling qgroup_reserve_meta directly which will either succeed without flushing or will fail and return -EDQUOT. In the latter case that return value is going to be propagated to btrfs_dirty_inode which will fallback to start a new transaction. That's fine as the majority of time we expect the inode will have BTRFS_DELAYED_NODE_INODE_DIRTY flag set which will result in directly copying the in-memory state.
Fixes: c69d420 ("btrfs: qgroup: try to flush qgroup space when we get -EDQUOT")
CC: [email protected] # 5.10+
Reviewed-by: Qu Wenruo <[email protected]>
Signed-off-by: Nikolay Borisov <[email protected]>
Signed-off-by: David Sterba <[email protected]>
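Editor's note: the control flow the fix settles on can be summarised with a small stand-alone sketch. This is not the btrfs API; try_reserve_noflush, reserve_slow_path, update_inode and the quota counter are invented names used only to model "attempt the reservation without flushing while locks are held, and fall back to the slower path only when -EDQUOT comes back".
/* reserve_fallback.c - toy model of "reserve without flushing, fall back
 * on -EDQUOT". Build: cc -o reserve_fallback reserve_fallback.c
 */
#include <stdio.h>
#include <errno.h>

static long quota_left = 100;

/* Non-flushing reservation: either succeeds immediately or fails with
 * -EDQUOT; it never blocks waiting for delalloc to be flushed. */
static int try_reserve_noflush(long bytes)
{
        if (bytes > quota_left)
                return -EDQUOT;
        quota_left -= bytes;
        return 0;
}

/* Slow-path stand-in: in the real code this corresponds to starting a
 * new transaction, which may flush because no locks are held there. */
static int reserve_slow_path(long bytes)
{
        printf("falling back to slow path for %ld bytes\n", bytes);
        return 0;
}

static int update_inode(long bytes)
{
        int ret = try_reserve_noflush(bytes);

        if (ret == -EDQUOT)
                ret = reserve_slow_path(bytes);
        return ret;
}

int main(void)
{
        printf("first update:  %d\n", update_inode(80));   /* fast path */
        printf("second update: %d\n", update_inode(80));   /* fallback  */
        return 0;
}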
aotot
pushed a commit
to jove-decompiler/linux
that referenced
this pull request
Oct 26, 2025
The evlist has the maps with its own refcounts so we don't need to set
the pointers to NULL. Otherwise the following error was reported by Asan.
# perf test -v 4
4: Read samples using the mmap interface :
--- start ---
test child forked, pid 139782
mmap size 528384B
=================================================================
==139782==ERROR: LeakSanitizer: detected memory leaks
Direct leak of 40 byte(s) in 1 object(s) allocated from:
#0 0x7f1f76daee8f in __interceptor_malloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cpp:145
#1 0x564ba21a0fea in cpu_map__trim_new /home/namhyung/project/linux/tools/lib/perf/cpumap.c:79
#2 0x564ba21a1a0f in perf_cpu_map__read /home/namhyung/project/linux/tools/lib/perf/cpumap.c:149
#3 0x564ba21a21cf in cpu_map__read_all_cpu_map /home/namhyung/project/linux/tools/lib/perf/cpumap.c:166
#4 0x564ba21a21cf in perf_cpu_map__new /home/namhyung/project/linux/tools/lib/perf/cpumap.c:181
#5 0x564ba1e48298 in test__basic_mmap tests/mmap-basic.c:55
torvalds#6 0x564ba1e278fb in run_test tests/builtin-test.c:428
torvalds#7 0x564ba1e278fb in test_and_print tests/builtin-test.c:458
torvalds#8 0x564ba1e29a53 in __cmd_test tests/builtin-test.c:679
torvalds#9 0x564ba1e29a53 in cmd_test tests/builtin-test.c:825
torvalds#10 0x564ba1e95cb4 in run_builtin /home/namhyung/project/linux/tools/perf/perf.c:313
torvalds#11 0x564ba1d1fa88 in handle_internal_command /home/namhyung/project/linux/tools/perf/perf.c:365
torvalds#12 0x564ba1d1fa88 in run_argv /home/namhyung/project/linux/tools/perf/perf.c:409
torvalds#13 0x564ba1d1fa88 in main /home/namhyung/project/linux/tools/perf/perf.c:539
torvalds#14 0x7f1f768e4d09 in __libc_start_main ../csu/libc-start.c:308
...
test child finished with 1
---- end ----
Read samples using the mmap interface: FAILED!
failed to open shell test directory: /home/namhyung/libexec/perf-core/tests/shell
Signed-off-by: Namhyung Kim <[email protected]>
Acked-by: Jiri Olsa <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Stephane Eranian <[email protected]>
Cc: Ian Rogers <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Adrian Hunter <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Leo Yan <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
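Editor's note: the leak pattern behind this and the following perf test fixes is generic: the evlist holds its own references on the cpu/thread maps, so the test must drop its references with the matching put instead of just clearing its pointers. A stand-alone refcount toy (refobj, refobj_get and refobj_put are invented names, not the libperf API) makes the point:
/* refcount_put.c - why NULLing a pointer is not the same as put().
 * Build: cc -o refcount_put refcount_put.c
 */
#include <stdio.h>
#include <stdlib.h>

struct refobj {
        int refcount;
};

static struct refobj *refobj_new(void)
{
        struct refobj *o = calloc(1, sizeof(*o));

        if (!o)
                exit(1);
        o->refcount = 1;
        return o;
}

static struct refobj *refobj_get(struct refobj *o)
{
        o->refcount++;
        return o;
}

static void refobj_put(struct refobj *o)
{
        if (o && --o->refcount == 0) {
                printf("freeing object\n");
                free(o);
        }
}

int main(void)
{
        struct refobj *maps = refobj_new();           /* test owns one ref   */
        struct refobj *evlist_ref = refobj_get(maps); /* evlist owns another */

        /* Buggy pattern: maps = NULL;  the test's reference would leak. */
        refobj_put(evlist_ref);  /* evlist teardown drops its reference  */
        refobj_put(maps);        /* test drops its own reference: freed  */
        return 0;
}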
aotot
pushed a commit
to jove-decompiler/linux
that referenced
this pull request
Oct 26, 2025
The evlist and the cpu/thread maps should be released together.
Otherwise the following error was reported by Asan.
Note that this test still has memory leaks in DSOs so it still fails
even after this change. I'll take a look at that too.
# perf test -v 26
26: Object code reading :
--- start ---
test child forked, pid 154184
Looking at the vmlinux_path (8 entries long)
symsrc__init: build id mismatch for vmlinux.
symsrc__init: cannot get elf header.
Using /proc/kcore for kernel data
Using /proc/kallsyms for symbols
Parsing event 'cycles'
mmap size 528384B
...
=================================================================
==154184==ERROR: LeakSanitizer: detected memory leaks
Direct leak of 439 byte(s) in 1 object(s) allocated from:
#0 0x7fcb66e77037 in __interceptor_calloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cpp:154
#1 0x55ad9b7e821e in dso__new_id util/dso.c:1256
#2 0x55ad9b8cfd4a in __machine__addnew_vdso util/vdso.c:132
#3 0x55ad9b8cfd4a in machine__findnew_vdso util/vdso.c:347
#4 0x55ad9b845b7e in map__new util/map.c:176
#5 0x55ad9b8415a2 in machine__process_mmap2_event util/machine.c:1787
torvalds#6 0x55ad9b8fab16 in perf_tool__process_synth_event util/synthetic-events.c:64
torvalds#7 0x55ad9b8fab16 in perf_event__synthesize_mmap_events util/synthetic-events.c:499
torvalds#8 0x55ad9b8fbfdf in __event__synthesize_thread util/synthetic-events.c:741
torvalds#9 0x55ad9b8ff3e3 in perf_event__synthesize_thread_map util/synthetic-events.c:833
torvalds#10 0x55ad9b738585 in do_test_code_reading tests/code-reading.c:608
torvalds#11 0x55ad9b73b25d in test__code_reading tests/code-reading.c:722
torvalds#12 0x55ad9b6f28fb in run_test tests/builtin-test.c:428
torvalds#13 0x55ad9b6f28fb in test_and_print tests/builtin-test.c:458
torvalds#14 0x55ad9b6f4a53 in __cmd_test tests/builtin-test.c:679
torvalds#15 0x55ad9b6f4a53 in cmd_test tests/builtin-test.c:825
torvalds#16 0x55ad9b760cc4 in run_builtin /home/namhyung/project/linux/tools/perf/perf.c:313
torvalds#17 0x55ad9b5eaa88 in handle_internal_command /home/namhyung/project/linux/tools/perf/perf.c:365
torvalds#18 0x55ad9b5eaa88 in run_argv /home/namhyung/project/linux/tools/perf/perf.c:409
torvalds#19 0x55ad9b5eaa88 in main /home/namhyung/project/linux/tools/perf/perf.c:539
torvalds#20 0x7fcb669acd09 in __libc_start_main ../csu/libc-start.c:308
...
SUMMARY: AddressSanitizer: 471 byte(s) leaked in 2 allocation(s).
test child finished with 1
---- end ----
Object code reading: FAILED!
Signed-off-by: Namhyung Kim <[email protected]>
Acked-by: Jiri Olsa <[email protected]>
Cc: Adrian Hunter <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: Ian Rogers <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Leo Yan <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Stephane Eranian <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
aotot
pushed a commit
to jove-decompiler/linux
that referenced
this pull request
Oct 26, 2025
The evlist and the cpu/thread maps should be released together.
Otherwise the following error was reported by Asan.
$ perf test -v 28
28: Use a dummy software event to keep tracking:
--- start ---
test child forked, pid 156810
mmap size 528384B
=================================================================
==156810==ERROR: LeakSanitizer: detected memory leaks
Direct leak of 40 byte(s) in 1 object(s) allocated from:
#0 0x7f637d2bce8f in __interceptor_malloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cpp:145
#1 0x55cc6295cffa in cpu_map__trim_new /home/namhyung/project/linux/tools/lib/perf/cpumap.c:79
#2 0x55cc6295da1f in perf_cpu_map__read /home/namhyung/project/linux/tools/lib/perf/cpumap.c:149
#3 0x55cc6295e1df in cpu_map__read_all_cpu_map /home/namhyung/project/linux/tools/lib/perf/cpumap.c:166
#4 0x55cc6295e1df in perf_cpu_map__new /home/namhyung/project/linux/tools/lib/perf/cpumap.c:181
#5 0x55cc626287cf in test__keep_tracking tests/keep-tracking.c:84
torvalds#6 0x55cc625e38fb in run_test tests/builtin-test.c:428
torvalds#7 0x55cc625e38fb in test_and_print tests/builtin-test.c:458
torvalds#8 0x55cc625e5a53 in __cmd_test tests/builtin-test.c:679
torvalds#9 0x55cc625e5a53 in cmd_test tests/builtin-test.c:825
torvalds#10 0x55cc62651cc4 in run_builtin /home/namhyung/project/linux/tools/perf/perf.c:313
torvalds#11 0x55cc624dba88 in handle_internal_command /home/namhyung/project/linux/tools/perf/perf.c:365
torvalds#12 0x55cc624dba88 in run_argv /home/namhyung/project/linux/tools/perf/perf.c:409
torvalds#13 0x55cc624dba88 in main /home/namhyung/project/linux/tools/perf/perf.c:539
torvalds#14 0x7f637cdf2d09 in __libc_start_main ../csu/libc-start.c:308
SUMMARY: AddressSanitizer: 72 byte(s) leaked in 2 allocation(s).
test child finished with 1
---- end ----
Use a dummy software event to keep tracking: FAILED!
Signed-off-by: Namhyung Kim <[email protected]>
Acked-by: Jiri Olsa <[email protected]>
Cc: Adrian Hunter <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: Ian Rogers <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Leo Yan <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Stephane Eranian <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
aotot
pushed a commit
to jove-decompiler/linux
that referenced
this pull request
Oct 26, 2025
The evlist and cpu/thread maps should be released together.
Otherwise the following error was reported by Asan.
$ perf test -v 35
35: Track with sched_switch :
--- start ---
test child forked, pid 159287
Using CPUID GenuineIntel-6-8E-C
mmap size 528384B
1295 events recorded
=================================================================
==159287==ERROR: LeakSanitizer: detected memory leaks
Direct leak of 40 byte(s) in 1 object(s) allocated from:
#0 0x7fa28d9a2e8f in __interceptor_malloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cpp:145
#1 0x5652f5a5affa in cpu_map__trim_new /home/namhyung/project/linux/tools/lib/perf/cpumap.c:79
#2 0x5652f5a5ba1f in perf_cpu_map__read /home/namhyung/project/linux/tools/lib/perf/cpumap.c:149
#3 0x5652f5a5c1df in cpu_map__read_all_cpu_map /home/namhyung/project/linux/tools/lib/perf/cpumap.c:166
#4 0x5652f5a5c1df in perf_cpu_map__new /home/namhyung/project/linux/tools/lib/perf/cpumap.c:181
#5 0x5652f5723bbf in test__switch_tracking tests/switch-tracking.c:350
torvalds#6 0x5652f56e18fb in run_test tests/builtin-test.c:428
torvalds#7 0x5652f56e18fb in test_and_print tests/builtin-test.c:458
torvalds#8 0x5652f56e3a53 in __cmd_test tests/builtin-test.c:679
torvalds#9 0x5652f56e3a53 in cmd_test tests/builtin-test.c:825
torvalds#10 0x5652f574fcc4 in run_builtin /home/namhyung/project/linux/tools/perf/perf.c:313
torvalds#11 0x5652f55d9a88 in handle_internal_command /home/namhyung/project/linux/tools/perf/perf.c:365
torvalds#12 0x5652f55d9a88 in run_argv /home/namhyung/project/linux/tools/perf/perf.c:409
torvalds#13 0x5652f55d9a88 in main /home/namhyung/project/linux/tools/perf/perf.c:539
torvalds#14 0x7fa28d4d8d09 in __libc_start_main ../csu/libc-start.c:308
SUMMARY: AddressSanitizer: 72 byte(s) leaked in 2 allocation(s).
test child finished with 1
---- end ----
Track with sched_switch: FAILED!
Signed-off-by: Namhyung Kim <[email protected]>
Acked-by: Jiri Olsa <[email protected]>
Cc: Adrian Hunter <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: Ian Rogers <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Leo Yan <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Stephane Eranian <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
aotot
pushed a commit
to jove-decompiler/linux
that referenced
this pull request
Oct 26, 2025
It should release the maps at the end.
$ perf test -v 71
71: Convert perf time to TSC :
--- start ---
test child forked, pid 178744
mmap size 528384B
1st event perf time 59207256505278 tsc 13187166645142
rdtsc time 59207256542151 tsc 13187166723020
2nd event perf time 59207256543749 tsc 13187166726393
=================================================================
==178744==ERROR: LeakSanitizer: detected memory leaks
Direct leak of 40 byte(s) in 1 object(s) allocated from:
#0 0x7faf601f9e8f in __interceptor_malloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cpp:145
#1 0x55b620cfc00a in cpu_map__trim_new /home/namhyung/project/linux/tools/lib/perf/cpumap.c:79
#2 0x55b620cfca2f in perf_cpu_map__read /home/namhyung/project/linux/tools/lib/perf/cpumap.c:149
#3 0x55b620cfd1ef in cpu_map__read_all_cpu_map /home/namhyung/project/linux/tools/lib/perf/cpumap.c:166
#4 0x55b620cfd1ef in perf_cpu_map__new /home/namhyung/project/linux/tools/lib/perf/cpumap.c:181
#5 0x55b6209ef1b2 in test__perf_time_to_tsc tests/perf-time-to-tsc.c:73
torvalds#6 0x55b6209828fb in run_test tests/builtin-test.c:428
torvalds#7 0x55b6209828fb in test_and_print tests/builtin-test.c:458
torvalds#8 0x55b620984a53 in __cmd_test tests/builtin-test.c:679
torvalds#9 0x55b620984a53 in cmd_test tests/builtin-test.c:825
torvalds#10 0x55b6209f0cd4 in run_builtin /home/namhyung/project/linux/tools/perf/perf.c:313
torvalds#11 0x55b62087aa88 in handle_internal_command /home/namhyung/project/linux/tools/perf/perf.c:365
torvalds#12 0x55b62087aa88 in run_argv /home/namhyung/project/linux/tools/perf/perf.c:409
torvalds#13 0x55b62087aa88 in main /home/namhyung/project/linux/tools/perf/perf.c:539
torvalds#14 0x7faf5fd2fd09 in __libc_start_main ../csu/libc-start.c:308
SUMMARY: AddressSanitizer: 72 byte(s) leaked in 2 allocation(s).
test child finished with 1
---- end ----
Convert perf time to TSC: FAILED!
Signed-off-by: Namhyung Kim <[email protected]>
Acked-by: Jiri Olsa <[email protected]>
Cc: Adrian Hunter <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: Ian Rogers <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Leo Yan <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Stephane Eranian <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
aotot
pushed a commit
to jove-decompiler/linux
that referenced
this pull request
Oct 26, 2025
I got a segfault when using the -r option with event groups. The option
makes it run the workload multiple times and it will reuse the evlist
and evsel for each run.
While most of the resources are allocated and freed properly, the id hash
in the evlist was not and it resulted in the bug. You can see it with
the address sanitizer like below:
$ perf stat -r 100 -e '{cycles,instructions}' true
=================================================================
==693052==ERROR: AddressSanitizer: heap-use-after-free on
address 0x6080000003d0 at pc 0x558c57732835 bp 0x7fff1526adb0 sp 0x7fff1526ada8
WRITE of size 8 at 0x6080000003d0 thread T0
#0 0x558c57732834 in hlist_add_head /home/namhyung/project/linux/tools/include/linux/list.h:644
#1 0x558c57732834 in perf_evlist__id_hash /home/namhyung/project/linux/tools/lib/perf/evlist.c:237
#2 0x558c57732834 in perf_evlist__id_add /home/namhyung/project/linux/tools/lib/perf/evlist.c:244
#3 0x558c57732834 in perf_evlist__id_add_fd /home/namhyung/project/linux/tools/lib/perf/evlist.c:285
#4 0x558c5747733e in store_evsel_ids util/evsel.c:2765
#5 0x558c5747733e in evsel__store_ids util/evsel.c:2782
torvalds#6 0x558c5730b717 in __run_perf_stat /home/namhyung/project/linux/tools/perf/builtin-stat.c:895
torvalds#7 0x558c5730b717 in run_perf_stat /home/namhyung/project/linux/tools/perf/builtin-stat.c:1014
torvalds#8 0x558c5730b717 in cmd_stat /home/namhyung/project/linux/tools/perf/builtin-stat.c:2446
torvalds#9 0x558c57427c24 in run_builtin /home/namhyung/project/linux/tools/perf/perf.c:313
torvalds#10 0x558c572b1a48 in handle_internal_command /home/namhyung/project/linux/tools/perf/perf.c:365
torvalds#11 0x558c572b1a48 in run_argv /home/namhyung/project/linux/tools/perf/perf.c:409
torvalds#12 0x558c572b1a48 in main /home/namhyung/project/linux/tools/perf/perf.c:539
torvalds#13 0x7fcadb9f7d09 in __libc_start_main ../csu/libc-start.c:308
torvalds#14 0x558c572b60f9 in _start (/home/namhyung/project/linux/tools/perf/perf+0x45d0f9)
Actually, the nodes in the hash table are struct perf_stream_id entries
that were freed in the previous run. Fix it by resetting the hash before
the evlist is reused.
Signed-off-by: Namhyung Kim <[email protected]>
Acked-by: Jiri Olsa <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Arnaldo Carvalho de Melo <[email protected]>
Cc: Ian Rogers <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Stephane Eranian <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
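As a rough illustration of the reset described in the commit message above, here is a simplified userspace sketch of clearing a bucketed id hash before it is reused; the structure, bucket count, and function names are invented for the example and do not match libperf's actual definitions.

```c
#include <stddef.h>

#define ID_HASH_BUCKETS 256	/* illustrative size, not libperf's real value */

struct id_node {
	unsigned long long id;
	struct id_node *next;
};

struct toy_evlist {
	struct id_node *heads[ID_HASH_BUCKETS];	/* one chain per hash bucket */
};

/* Drop every stale chain head so a reused evlist never walks freed nodes. */
static void toy_evlist_reset_id_hash(struct toy_evlist *evlist)
{
	for (size_t i = 0; i < ID_HASH_BUCKETS; i++)
		evlist->heads[i] = NULL;
}

int main(void)
{
	struct toy_evlist evlist = { { 0 } };

	toy_evlist_reset_id_hash(&evlist);	/* done before each repeated run */
	return 0;
}
```

The point is only that the bucket heads are reinitialized; the stale nodes themselves were presumably already freed along with the previous run's other resources.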
aotot
pushed a commit
to jove-decompiler/linux
that referenced
this pull request
Oct 26, 2025
…faces

Robert Morris created a test program which can cause
usb_hub_to_struct_hub() to dereference a NULL or inappropriate pointer:

Oops: general protection fault, probably for non-canonical address 0xcccccccccccccccc: 0000 [#1] SMP DEBUG_PAGEALLOC PTI
CPU: 7 UID: 0 PID: 117 Comm: kworker/7:1 Not tainted 6.13.0-rc3-00017-gf44d154d6e3d torvalds#14
Hardware name: FreeBSD BHYVE/BHYVE, BIOS 14.0 10/17/2021
Workqueue: usb_hub_wq hub_event
RIP: 0010:usb_hub_adjust_deviceremovable+0x78/0x110
...
Call Trace:
<TASK>
? die_addr+0x31/0x80
? exc_general_protection+0x1b4/0x3c0
? asm_exc_general_protection+0x26/0x30
? usb_hub_adjust_deviceremovable+0x78/0x110
hub_probe+0x7c7/0xab0
usb_probe_interface+0x14b/0x350
really_probe+0xd0/0x2d0
? __pfx___device_attach_driver+0x10/0x10
__driver_probe_device+0x6e/0x110
driver_probe_device+0x1a/0x90
__device_attach_driver+0x7e/0xc0
bus_for_each_drv+0x7f/0xd0
__device_attach+0xaa/0x1a0
bus_probe_device+0x8b/0xa0
device_add+0x62e/0x810
usb_set_configuration+0x65d/0x990
usb_generic_driver_probe+0x4b/0x70
usb_probe_device+0x36/0xd0

The cause of this error is that the device has two interfaces, and the
hub driver binds to interface 1 instead of interface 0, which is where
usb_hub_to_struct_hub() looks.

We can prevent the problem from occurring by refusing to accept hub
devices that violate the USB spec by having more than one configuration
or interface.

Reported-and-tested-by: Robert Morris <[email protected]>
Cc: stable <[email protected]>
Closes: https://lore.kernel.org/linux-usb/95564.1737394039@localhost/
Signed-off-by: Alan Stern <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Greg Kroah-Hartman <[email protected]>
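To make the described check concrete, here is a small standalone C sketch of rejecting a hub that exposes more than one configuration or interface; the structure and field names are stand-ins for the USB descriptor fields, not the kernel's actual types, and the check presumably lives in the hub driver's probe path.

```c
#include <stdio.h>

/* Stand-ins for the relevant USB descriptor fields; not the kernel's structs. */
struct toy_hub_device {
	unsigned char num_configurations;
	unsigned char num_interfaces;
};

/*
 * A spec-compliant hub has exactly one configuration with one interface, so
 * later code may safely assume it is bound to interface 0. Refuse anything else.
 */
static int toy_hub_probe(const struct toy_hub_device *hdev)
{
	if (hdev->num_configurations > 1 || hdev->num_interfaces > 1) {
		fprintf(stderr, "ignoring non-compliant hub device\n");
		return -1;
	}
	return 0;
}

int main(void)
{
	struct toy_hub_device bad = { .num_configurations = 1, .num_interfaces = 2 };

	return toy_hub_probe(&bad) ? 1 : 0;	/* exits 1: the device is rejected */
}
```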
mj22226
pushed a commit
to mj22226/linux
that referenced
this pull request
Oct 27, 2025
[ Upstream commit 738d5a5 ]

hfsplus_bmap_alloc can trigger a crash if a record offset or length is
larger than node_size

[ 15.264282] BUG: KASAN: slab-out-of-bounds in hfsplus_bmap_alloc+0x887/0x8b0
[ 15.265192] Read of size 8 at addr ffff8881085ca188 by task test/183
[ 15.265949]
[ 15.266163] CPU: 0 UID: 0 PID: 183 Comm: test Not tainted 6.17.0-rc2-gc17b750b3ad9 torvalds#14 PREEMPT(voluntary)
[ 15.266165] Hardware name: QEMU Ubuntu 24.04 PC (i440FX + PIIX, 1996), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[ 15.266167] Call Trace:
[ 15.266168] <TASK>
[ 15.266169] dump_stack_lvl+0x53/0x70
[ 15.266173] print_report+0xd0/0x660
[ 15.266181] kasan_report+0xce/0x100
[ 15.266185] hfsplus_bmap_alloc+0x887/0x8b0
[ 15.266208] hfs_btree_inc_height.isra.0+0xd5/0x7c0
[ 15.266217] hfsplus_brec_insert+0x870/0xb00
[ 15.266222] __hfsplus_ext_write_extent+0x428/0x570
[ 15.266225] __hfsplus_ext_cache_extent+0x5e/0x910
[ 15.266227] hfsplus_ext_read_extent+0x1b2/0x200
[ 15.266233] hfsplus_file_extend+0x5a7/0x1000
[ 15.266237] hfsplus_get_block+0x12b/0x8c0
[ 15.266238] __block_write_begin_int+0x36b/0x12c0
[ 15.266251] block_write_begin+0x77/0x110
[ 15.266252] cont_write_begin+0x428/0x720
[ 15.266259] hfsplus_write_begin+0x51/0x100
[ 15.266262] cont_write_begin+0x272/0x720
[ 15.266270] hfsplus_write_begin+0x51/0x100
[ 15.266274] generic_perform_write+0x321/0x750
[ 15.266285] generic_file_write_iter+0xc3/0x310
[ 15.266289] __kernel_write_iter+0x2fd/0x800
[ 15.266296] dump_user_range+0x2ea/0x910
[ 15.266301] elf_core_dump+0x2a94/0x2ed0
[ 15.266320] vfs_coredump+0x1d85/0x45e0
[ 15.266349] get_signal+0x12e3/0x1990
[ 15.266357] arch_do_signal_or_restart+0x89/0x580
[ 15.266362] irqentry_exit_to_user_mode+0xab/0x110
[ 15.266364] asm_exc_page_fault+0x26/0x30
[ 15.266366] RIP: 0033:0x41bd35
[ 15.266367] Code: bc d1 f3 0f 7f 27 f3 0f 7f 6f 10 f3 0f 7f 77 20 f3 0f 7f 7f 30 49 83 c0 0f 49 29 d0 48 8d 7c 17 31 e9 9f 0b 00 00 66 0f ef c0 <f3> 0f 6f 0e f3 0f 6f 56 10 66 0f 74 c1 66 0f d7 d0 49 83 f8f
[ 15.266369] RSP: 002b:00007ffc9e62d078 EFLAGS: 00010283
[ 15.266371] RAX: 00007ffc9e62d100 RBX: 0000000000000000 RCX: 0000000000000000
[ 15.266372] RDX: 00000000000000e0 RSI: 0000000000000000 RDI: 00007ffc9e62d100
[ 15.266373] RBP: 0000400000000040 R08: 00000000000000e0 R09: 0000000000000000
[ 15.266374] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
[ 15.266375] R13: 0000000000000000 R14: 0000000000000000 R15: 0000400000000000
[ 15.266376] </TASK>

When calling hfsplus_bmap_alloc to allocate a free node, this function
first retrieves the bitmap from the header node and map node using
node->page together with the offset and length from hfs_brec_lenoff

```
len = hfs_brec_lenoff(node, 2, &off16);
off = off16;
off += node->page_offset;
pagep = node->page + (off >> PAGE_SHIFT);
data = kmap_local_page(*pagep);
```

However, if the retrieved offset or length is invalid (i.e. exceeds
node_size), the code may end up accessing pages outside the allocated
range for this node.

This patch adds proper validation of both offset and length before use,
preventing out-of-bounds page access. Move is_bnode_offset_valid and
check_and_correct_requested_length to hfsplus_fs.h, as they may be
required by other functions.

Reported-by: [email protected]
Closes: https://lore.kernel.org/all/[email protected]/
Signed-off-by: Yang Chenzhi <[email protected]>
Reviewed-by: Viacheslav Dubeyko <[email protected]>
Signed-off-by: Viacheslav Dubeyko <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Viacheslav Dubeyko <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
thestinger
pushed a commit
to GrapheneOS/kernel_common-6.12
that referenced
this pull request
Oct 28, 2025
…th_get()

Fix the following KASAN complaint:

BUG: KASAN: slab-out-of-bounds in firmware_param_path_get+0x11e/0x130
Write of size 1 at addr ffff888156945fff by task dracut/7151

CPU: 114 UID: 0 PID: 7151 Comm: dracut Not tainted 6.12.23-dbg torvalds#14 b37048002fbe82089398ca883b3197e4fe6e7ef6
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
Call Trace:
<TASK>
show_stack+0x4d/0x60
dump_stack_lvl+0x61/0x80
print_address_description.constprop.0+0x8b/0x320
print_report+0xe7/0x1c5
kasan_report+0xd1/0x1c0
__asan_report_store1_noabort+0x1b/0x20
firmware_param_path_get+0x11e/0x130
param_attr_show+0x13b/0x200
module_attr_show+0x46/0x70
sysfs_kf_seq_show+0x1f2/0x350
kernfs_seq_show+0x118/0x160
seq_read_iter+0x2bb/0x1040
kernfs_fop_read_iter+0xe9/0x150
vfs_read+0x711/0xd40
ksys_read+0x10b/0x200
__x64_sys_read+0x76/0xb0
x64_sys_call+0x1678/0x1790
do_syscall_64+0x92/0x180
entry_SYSCALL_64_after_hwframe+0x4b/0x53
</TASK>

Allocated by task 7093:
kasan_save_stack+0x2f/0x50
kasan_save_track+0x18/0x40
kasan_save_alloc_info+0x3b/0x50
__kasan_kmalloc+0xaf/0xc0
__kmalloc_node_noprof+0x1cd/0x4c0
__kvmalloc_node_noprof+0x55/0x100
seq_read_iter+0x6af/0x1040
proc_reg_read_iter+0x1a6/0x270
vfs_read+0x711/0xd40
ksys_read+0x10b/0x200
__x64_sys_read+0x76/0xb0
x64_sys_call+0x1678/0x1790
do_syscall_64+0x92/0x180
entry_SYSCALL_64_after_hwframe+0x4b/0x53

Freed by task 7093:
kasan_save_stack+0x2f/0x50
kasan_save_track+0x18/0x40
kasan_save_free_info+0x3f/0x50
__kasan_slab_free+0x56/0x70
kfree+0x13b/0x3f0
kvfree+0x2d/0x40
single_release+0x77/0xc0
close_pdeo.part.0+0xe3/0x2d0
close_pdeo+0x155/0x170
proc_reg_release+0x16d/0x1d0
__fput+0x356/0xa40
__fput_sync+0x2f/0x40
__x64_sys_close+0x81/0xd0
x64_sys_call+0x11fc/0x1790
do_syscall_64+0x92/0x180
entry_SYSCALL_64_after_hwframe+0x4b/0x53

Bug: 420669812
Test: blktests nvme/044
Fixes: ac1a02d ("ANDROID: firmware_loader: Add support for customer firmware paths")
Change-Id: I1fbf1f8d48e4611468e2ad80650001afdd3ea784
Signed-off-by: Bart Van Assche <[email protected]>
intel-lab-lkp
pushed a commit
to intel-lab-lkp/linux
that referenced
this pull request
Nov 3, 2025
When using perf record with the `--overwrite` option, a segmentation fault
occurs if an event fails to open. For example:
perf record -e cycles-ct -F 1000 -a --overwrite
Error:
cycles-ct:H: PMU Hardware doesn't support sampling/overflow-interrupts. Try 'perf stat'
perf: Segmentation fault
#0 0x6466b6 in dump_stack debug.c:366
#1 0x646729 in sighandler_dump_stack debug.c:378
#2 0x453fd1 in sigsegv_handler builtin-record.c:722
#3 0x7f8454e65090 in __restore_rt libc-2.32.so[54090]
#4 0x6c5671 in __perf_event__synthesize_id_index synthetic-events.c:1862
#5 0x6c5ac0 in perf_event__synthesize_id_index synthetic-events.c:1943
torvalds#6 0x458090 in record__synthesize builtin-record.c:2075
torvalds#7 0x45a85a in __cmd_record builtin-record.c:2888
torvalds#8 0x45deb6 in cmd_record builtin-record.c:4374
torvalds#9 0x4e5e33 in run_builtin perf.c:349
torvalds#10 0x4e60bf in handle_internal_command perf.c:401
torvalds#11 0x4e6215 in run_argv perf.c:448
torvalds#12 0x4e653a in main perf.c:555
torvalds#13 0x7f8454e4fa72 in __libc_start_main libc-2.32.so[3ea72]
torvalds#14 0x43a3ee in _start ??:0
The --overwrite option implies --tail-synthesize, which collects non-sample
events reflecting the system status when recording finishes. However, when
evsel opening fails (e.g., unsupported event 'cycles-ct'), session->evlist
is not initialized and remains NULL. The code unconditionally calls
record__synthesize() in the error path, which iterates through the NULL
evlist pointer and causes a segfault.
To fix it, move the record__synthesize() call inside the error check block, so
it's only called when there was no error during recording, ensuring that evlist
is properly initialized.
Fixes: 4ea648a ("perf record: Add --tail-synthesize option")
Signed-off-by: Shuai Xue <[email protected]>
Signed-off-by: Namhyung Kim <[email protected]>
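A tiny standalone sketch of the guard the fix above describes, with stand-in names rather than the actual builtin-record.c symbols: synthesis is attempted only after a successful run with an initialized evlist.

```c
#include <stdio.h>

struct toy_evlist;	/* opaque; stands in for session->evlist */

/* Stand-in for record__synthesize(): would walk every event in the list. */
static int toy_synthesize(struct toy_evlist *evlist)
{
	(void)evlist;
	printf("synthesizing tail events\n");
	return 0;
}

/*
 * Only synthesize when recording succeeded and the evlist was actually set
 * up, mirroring "move the call inside the error check" from the fix above.
 */
static int toy_finish_record(int err, struct toy_evlist *evlist)
{
	if (!err && evlist)
		return toy_synthesize(evlist);
	return err;
}

int main(void)
{
	/* Failed event open: no evlist exists, so synthesis is skipped safely. */
	return toy_finish_record(-1, NULL) ? 1 : 0;
}
```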
thestinger
pushed a commit
to GrapheneOS/kernel_common-6.12
that referenced
this pull request
Nov 3, 2025
…ccess to avoid Priority Inversion in SRIOV

commit dc0297f upstream.

RLCG Register Access is a way for virtual functions to safely access
GPU registers in a virtualized environment, including TLB flushes and
register reads. When multiple threads or VFs try to access the same
registers simultaneously, it can lead to race conditions. By using the
RLCG interface, the driver can serialize access to the registers. This
means that only one thread can access the registers at a time,
preventing conflicts and ensuring that operations are performed
correctly.

Additionally, when a low-priority task holds a mutex that a
high-priority task needs (i.e., if a thread holding a spinlock tries to
acquire a mutex), it can lead to priority inversion. Register access in
amdgpu_virt_rlcg_reg_rw, especially in a fast code path, is critical.

The call stack shows that the function amdgpu_virt_rlcg_reg_rw is being
called, which attempts to acquire the mutex. This function is invoked
from amdgpu_sriov_wreg, which in turn is called from
gmc_v11_0_flush_gpu_tlb. The [ BUG: Invalid wait context ] indicates
that a thread is trying to acquire a mutex while it is in a context
that does not allow it to sleep (like holding a spinlock).

Fixes the below:

[ 253.013423] =============================
[ 253.013434] [ BUG: Invalid wait context ]
[ 253.013446] 6.12.0-amdstaging-drm-next-lol-050225 torvalds#14 Tainted: G U OE
[ 253.013464] -----------------------------
[ 253.013475] kworker/0:1/10 is trying to lock:
[ 253.013487] ffff9f30542e3cf8 (&adev->virt.rlcg_reg_lock){+.+.}-{3:3}, at: amdgpu_virt_rlcg_reg_rw+0xf6/0x330 [amdgpu]
[ 253.013815] other info that might help us debug this:
[ 253.013827] context-{4:4}
[ 253.013835] 3 locks held by kworker/0:1/10:
[ 253.013847] #0: ffff9f3040050f58 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x3f5/0x680
[ 253.013877] #1: ffffb789c008be40 ((work_completion)(&wfc.work)){+.+.}-{0:0}, at: process_one_work+0x1d6/0x680
[ 253.013905] #2: ffff9f3054281838 (&adev->gmc.invalidate_lock){+.+.}-{2:2}, at: gmc_v11_0_flush_gpu_tlb+0x198/0x4f0 [amdgpu]
[ 253.014154] stack backtrace:
[ 253.014164] CPU: 0 UID: 0 PID: 10 Comm: kworker/0:1 Tainted: G U OE 6.12.0-amdstaging-drm-next-lol-050225 torvalds#14
[ 253.014189] Tainted: [U]=USER, [O]=OOT_MODULE, [E]=UNSIGNED_MODULE
[ 253.014203] Hardware name: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 11/18/2024
[ 253.014224] Workqueue: events work_for_cpu_fn
[ 253.014241] Call Trace:
[ 253.014250] <TASK>
[ 253.014260] dump_stack_lvl+0x9b/0xf0
[ 253.014275] dump_stack+0x10/0x20
[ 253.014287] __lock_acquire+0xa47/0x2810
[ 253.014303] ? srso_alias_return_thunk+0x5/0xfbef5
[ 253.014321] lock_acquire+0xd1/0x300
[ 253.014333] ? amdgpu_virt_rlcg_reg_rw+0xf6/0x330 [amdgpu]
[ 253.014562] ? __lock_acquire+0xa6b/0x2810
[ 253.014578] __mutex_lock+0x85/0xe20
[ 253.014591] ? amdgpu_virt_rlcg_reg_rw+0xf6/0x330 [amdgpu]
[ 253.014782] ? sched_clock_noinstr+0x9/0x10
[ 253.014795] ? srso_alias_return_thunk+0x5/0xfbef5
[ 253.014808] ? local_clock_noinstr+0xe/0xc0
[ 253.014822] ? amdgpu_virt_rlcg_reg_rw+0xf6/0x330 [amdgpu]
[ 253.015012] ? srso_alias_return_thunk+0x5/0xfbef5
[ 253.015029] mutex_lock_nested+0x1b/0x30
[ 253.015044] ? mutex_lock_nested+0x1b/0x30
[ 253.015057] amdgpu_virt_rlcg_reg_rw+0xf6/0x330 [amdgpu]
[ 253.015249] amdgpu_sriov_wreg+0xc5/0xd0 [amdgpu]
[ 253.015435] gmc_v11_0_flush_gpu_tlb+0x44b/0x4f0 [amdgpu]
[ 253.015667] gfx_v11_0_hw_init+0x499/0x29c0 [amdgpu]
[ 253.015901] ? __pfx_smu_v13_0_update_pcie_parameters+0x10/0x10 [amdgpu]
[ 253.016159] ? srso_alias_return_thunk+0x5/0xfbef5
[ 253.016173] ? smu_hw_init+0x18d/0x300 [amdgpu]
[ 253.016403] amdgpu_device_init+0x29ad/0x36a0 [amdgpu]
[ 253.016614] amdgpu_driver_load_kms+0x1a/0xc0 [amdgpu]
[ 253.017057] amdgpu_pci_probe+0x1c2/0x660 [amdgpu]
[ 253.017493] local_pci_probe+0x4b/0xb0
[ 253.017746] work_for_cpu_fn+0x1a/0x30
[ 253.017995] process_one_work+0x21e/0x680
[ 253.018248] worker_thread+0x190/0x330
[ 253.018500] ? __pfx_worker_thread+0x10/0x10
[ 253.018746] kthread+0xe7/0x120
[ 253.018988] ? __pfx_kthread+0x10/0x10
[ 253.019231] ret_from_fork+0x3c/0x60
[ 253.019468] ? __pfx_kthread+0x10/0x10
[ 253.019701] ret_from_fork_asm+0x1a/0x30
[ 253.019939] </TASK>

v2: s/spin_trylock/spin_lock_irqsave to be safe (Christian).

Fixes: e864180 ("drm/amdgpu: Add lock around VF RLCG interface")
Cc: lin cao <[email protected]>
Cc: Jingwen Chen <[email protected]>
Cc: Victor Skvortsov <[email protected]>
Cc: Zhigang Luo <[email protected]>
Cc: Christian König <[email protected]>
Cc: Alex Deucher <[email protected]>
Change-Id: I07b8ef7deb0bf74b4855b1199ac0708a74ac9078
Signed-off-by: Srinivasan Shanmugam <[email protected]>
Suggested-by: Alex Deucher <[email protected]>
Reviewed-by: Christian König <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
[ Minor context change fixed. ]
Signed-off-by: Wenshan Lan <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
(cherry picked from commit 07ed75b)
Signed-off-by: Greg Kroah-Hartman <[email protected]>
intel-lab-lkp
pushed a commit
to intel-lab-lkp/linux
that referenced
this pull request
Nov 7, 2025
When capturing PMU info with perf, it is necessary to know the nodeid
of the interface. Here, we add topology debug info for the clock domain
to display the nodeid of the interfaces, as shown below:

$ cat /sys/kernel/debug/arm-ni/map_1f3c06
| HSNI #5 || PMNI torvalds#6 || PMU torvalds#14 |

Signed-off-by: Shouping Wang <[email protected]>
(left + right)/2 can overflow in cases where left + (right - left)/2 wouldn't.
See, for example, left = 2 and right = 2^32 - 1.
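For concreteness, here is a minimal standalone C sketch of the two midpoint computations; the helper names and the choice of 32-bit unsigned indices are assumptions made for the illustration, not taken from the patch itself.

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical helper names, used only for this illustration. */
static uint32_t mid_overflowing(uint32_t left, uint32_t right)
{
	return (left + right) / 2;        /* left + right can wrap past 2^32 - 1 */
}

static uint32_t mid_safe(uint32_t left, uint32_t right)
{
	return left + (right - left) / 2; /* never exceeds right, so it cannot wrap */
}

int main(void)
{
	uint32_t left = 2, right = UINT32_MAX;   /* right = 2^32 - 1, as in the example */

	printf("naive midpoint: %" PRIu32 "\n", mid_overflowing(left, right)); /* wraps, prints 0 */
	printf("safe midpoint:  %" PRIu32 "\n", mid_safe(left, right));        /* prints 2147483648 */
	return 0;
}
```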