Putting some beer in the freezer #6
Conversation
beer rulez!
epic
Hahaha nice!
github is too github...
Oh my... EPIC!
#win!
Unnecessary =p
Unnecessary * 2 This is not Orkut =/
Thanks for linux!
totally unnecessary, congratz!
yeah make pull requests either vanish or be a link to https://github.com/torvalds/linux/tree/master/Documentation/development-process |
This is not Orkut =/ 2
I am thoroughly disappoint. |
...This is just crazy |
@torvalds I will volunteer to help clean up spam requests if there is a way to do so. |
Great way to introduce someone very prominent in the open source community.
No more beers for you, going back to BSD |
"No more beers for you, going back to BSD" :D |
@diegoviola you might want to cool it a bit. We're not a lynch mob; the goal was to stop having joke pull requests started on @torvalds's repository. Save the 'saving the world' bit for later. :)
@diegoviola, you're cool. Just something we all might want to keep in mind.
The amount of social networking b.s. for an operating system kernel's source code repository IS TOO DAMN HIGH. |
+1 |
Add mount options backupuid and backupgid. These allow an authenticated user who may not have access permission on files, but who holds the "Backup files and directories" user right on them (by virtue of being part of the built-in group Backup Operators), to access those files, including their ACLs, with the intent to back them up. When the mount option backupuid is specified, the cifs client restricts the use of backup intents to the user whose effective user id matches the id specified with the mount option. When the mount option backupgid is specified, the cifs client restricts the use of backup intents to users whose effective group id matches the group id specified with the mount option. If an authenticated user is not part of the built-in group Backup Operators at the server, access to such files is denied, even if allowed by the client.

Signed-off-by: Shirish Pargaonkar <[email protected]>
Reviewed-by: Jeff Layton <[email protected]>
Signed-off-by: Steve French <[email protected]>
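To illustrate how these options are consumed, here is a minimal, hedged usage sketch: a userspace program passing backupuid through the mount(2) data string. The server path, mount point, and uid are placeholders, not values from the patch; in practice the mount.cifs helper normally wraps this call.

```c
#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
        /* Restrict backup-intent access to the user with euid 1000;
         * share, target and credentials are illustrative only. */
        if (mount("//server/share", "/mnt/backup", "cifs", 0,
                  "username=backupadmin,backupuid=1000") != 0)
                perror("mount");
        return 0;
}
```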
This patch validates the sdev pointer in scsi_dh_activate before proceeding further. Without this check we might see a panic like the one below. I have seen this panic multiple times.

Call trace:
 #0 [ffff88007d647b50] machine_kexec at ffffffff81020902
 #1 [ffff88007d647ba0] crash_kexec at ffffffff810875b0
 #2 [ffff88007d647c70] oops_end at ffffffff8139c650
 #3 [ffff88007d647c90] __bad_area_nosemaphore at ffffffff8102dd15
 #4 [ffff88007d647d50] page_fault at ffffffff8139b8cf
    [exception RIP: scsi_dh_activate+0x82]
    RIP: ffffffffa0041922 RSP: ffff88007d647e00 RFLAGS: 00010046
    RAX: 0000000000000000 RBX: 0000000000000000 RCX: 00000000000093c5
    RDX: 00000000000093c5 RSI: ffffffffa02e6640 RDI: ffff88007cc88988
    RBP: 000000000000000f R8: ffff88007d646000 R9: 0000000000000000
    R10: ffff880082293790 R11: 00000000ffffffff R12: ffff88007cc88988
    R13: 0000000000000000 R14: 0000000000000286 R15: ffff880037b845e0
    ORIG_RAX: ffffffffffffffff CS: 0010 SS: 0000
 #5 [ffff88007d647e38] run_workqueue at ffffffff81060268
 #6 [ffff88007d647e78] worker_thread at ffffffff81060386
 #7 [ffff88007d647ee8] kthread at ffffffff81064436
 #8 [ffff88007d647f48] kernel_thread at ffffffff81003fba

Signed-off-by: Babu Moger <[email protected]>
Cc: [email protected]
Signed-off-by: James Bottomley <[email protected]>
commit a18a920 upstream.

This patch validates the sdev pointer in scsi_dh_activate before proceeding further. Without this check we might see a panic like the one below. I have seen this panic multiple times.

Call trace:
 #0 [ffff88007d647b50] machine_kexec at ffffffff81020902
 #1 [ffff88007d647ba0] crash_kexec at ffffffff810875b0
 #2 [ffff88007d647c70] oops_end at ffffffff8139c650
 #3 [ffff88007d647c90] __bad_area_nosemaphore at ffffffff8102dd15
 #4 [ffff88007d647d50] page_fault at ffffffff8139b8cf
    [exception RIP: scsi_dh_activate+0x82]
    RIP: ffffffffa0041922 RSP: ffff88007d647e00 RFLAGS: 00010046
    RAX: 0000000000000000 RBX: 0000000000000000 RCX: 00000000000093c5
    RDX: 00000000000093c5 RSI: ffffffffa02e6640 RDI: ffff88007cc88988
    RBP: 000000000000000f R8: ffff88007d646000 R9: 0000000000000000
    R10: ffff880082293790 R11: 00000000ffffffff R12: ffff88007cc88988
    R13: 0000000000000000 R14: 0000000000000286 R15: ffff880037b845e0
    ORIG_RAX: ffffffffffffffff CS: 0010 SS: 0000
 #5 [ffff88007d647e38] run_workqueue at ffffffff81060268
 #6 [ffff88007d647e78] worker_thread at ffffffff81060386
 #7 [ffff88007d647ee8] kthread at ffffffff81064436
 #8 [ffff88007d647f48] kernel_thread at ffffffff81003fba

Signed-off-by: Babu Moger <[email protected]>
Signed-off-by: James Bottomley <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
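The essence of the fix is a defensive check before the pointer is dereferenced. A minimal sketch of that pattern with simplified names (activate_sketch is hypothetical; SCSI_DH_NOSYS, SCSI_DH_OK and activate_complete are real scsi_dh definitions), not the exact upstream diff:

```c
/* Bail out early if the scsi_device has already been torn down,
 * completing the request instead of dereferencing a NULL pointer. */
static int activate_sketch(struct scsi_device *sdev,
                           activate_complete fn, void *data)
{
        if (!sdev) {
                if (fn)
                        fn(data, SCSI_DH_NOSYS);
                return SCSI_DH_NOSYS;
        }
        /* ... safe to use sdev and its device handler from here on ... */
        return SCSI_DH_OK;
}
```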
If the pte mapping in generic_perform_write() is unmapped between iov_iter_fault_in_readable() and iov_iter_copy_from_user_atomic(), the "copied" parameter to ->end_write can be zero. ext4 couldn't cope with it with delayed allocations enabled. This skips the i_disksize enlargement logic if copied is zero and no new data was appended to the inode.

gdb> bt
 #0 0xffffffff811afe80 in ext4_da_should_update_i_disksize (file=0xffff88003f606a80, mapping=0xffff88001d3824e0, pos=0x108000, len=0x1000, copied=0x0, page=0xffffea0000d792e8, fsdata=0x0) at fs/ext4/inode.c:2467
 #1 ext4_da_write_end (file=0xffff88003f606a80, mapping=0xffff88001d3824e0, pos=0x108000, len=0x1000, copied=0x0, page=0xffffea0000d792e8, fsdata=0x0) at fs/ext4/inode.c:2512
 #2 0xffffffff810d97f1 in generic_perform_write (iocb=<value optimized out>, iov=<value optimized out>, nr_segs=<value optimized out>, pos=0x108000, ppos=0xffff88001e26be40, count=<value optimized out>, written=0x0) at mm/filemap.c:2440
 #3 generic_file_buffered_write (iocb=<value optimized out>, iov=<value optimized out>, nr_segs=<value optimized out>, pos=0x108000, ppos=0xffff88001e26be40, count=<value optimized out>, written=0x0) at mm/filemap.c:2482
 #4 0xffffffff810db5d1 in __generic_file_aio_write (iocb=0xffff88001e26bde8, iov=0xffff88001e26bec8, nr_segs=0x1, ppos=0xffff88001e26be40) at mm/filemap.c:2600
 #5 0xffffffff810db853 in generic_file_aio_write (iocb=0xffff88001e26bde8, iov=0xffff88001e26bec8, nr_segs=<value optimized out>, pos=<value optimized out>) at mm/filemap.c:2632
 #6 0xffffffff811a71aa in ext4_file_write (iocb=0xffff88001e26bde8, iov=0xffff88001e26bec8, nr_segs=0x1, pos=0x108000) at fs/ext4/file.c:136
 #7 0xffffffff811375aa in do_sync_write (filp=0xffff88003f606a80, buf=<value optimized out>, len=<value optimized out>, ppos=0xffff88001e26bf48) at fs/read_write.c:406
 #8 0xffffffff81137e56 in vfs_write (file=0xffff88003f606a80, buf=0x1ec2960 <Address 0x1ec2960 out of bounds>, count=0x4000, pos=0xffff88001e26bf48) at fs/read_write.c:435
 #9 0xffffffff8113816c in sys_write (fd=<value optimized out>, buf=0x1ec2960 <Address 0x1ec2960 out of bounds>, count=0x4000) at fs/read_write.c:487
 #10 <signal handler called>
 #11 0x00007f120077a390 in __brk_reservation_fn_dmi_alloc__ ()
 #12 0x0000000000000000 in ?? ()
gdb> print offset
$22 = 0xffffffffffffffff
gdb> print idx
$23 = 0xffffffff
gdb> print inode->i_blkbits
$24 = 0xc
gdb> up
 #1 ext4_da_write_end (file=0xffff88003f606a80, mapping=0xffff88001d3824e0, pos=0x108000, len=0x1000, copied=0x0, page=0xffffea0000d792e8, fsdata=0x0) at fs/ext4/inode.c:2512
2512 if (ext4_da_should_update_i_disksize(page, end)) {
gdb> print start
$25 = 0x0
gdb> print end
$26 = 0xffffffffffffffff
gdb> print pos
$27 = 0x108000
gdb> print new_i_size
$28 = 0x108000
gdb> print ((struct ext4_inode_info *)((char *)inode-((int)(&((struct ext4_inode_info *)0)->vfs_inode))))->i_disksize
$29 = 0xd9000
gdb> down
2467 for (i = 0; i < idx; i++)
gdb> print i
$30 = 0xd44acbee

This is 100% reproducible with some autonuma development code tuned in a very aggressive manner (not the normal way, even for knumad) which does "exotic" changes to the ptes. It wouldn't normally trigger, but I don't see why it can't happen normally if the page is added to swap cache in between the two faults, leading to "copied" being zero (which then hangs in ext4). So it should be fixed. Especially possible with lumpy reclaim (albeit disabled if compaction is enabled) as that would ignore the young bits in the ptes.
Signed-off-by: Andrea Arcangeli <[email protected]>
Signed-off-by: "Theodore Ts'o" <[email protected]>
Cc: [email protected]
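A hedged sketch of the guard described above, as it would sit in ext4_da_write_end (simplified; the surrounding logic is more involved than shown): only consider growing i_disksize when something was actually copied.

```c
/* If copied == 0, no new data was appended, so skip the
 * i_disksize enlargement logic entirely. */
if (copied && new_i_size > EXT4_I(inode)->i_disksize) {
        if (ext4_da_should_update_i_disksize(page, end))
                ext4_update_i_disksize(inode, new_i_size);
}
```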
Cancel the idle timer in musb_platform_exit. The idle timer could trigger after the clock had been disabled, leading to a kernel panic when MUSB_DEVCTL is accessed in musb_do_idle on 2.6.37. The fault below is no longer triggered on 2.6.38-rc4 (the clock is disabled later, and only if compiled as a module, and the offending memory access has moved) but the timer should be cancelled nonetheless.

Rebooting...
musb_hdrc musb_hdrc: remove, state 4
usb usb1: USB disconnect, address 1
musb_hdrc musb_hdrc: USB bus 1 deregistered
Unhandled fault: external abort on non-linefetch (0x1028) at 0xfa0ab060
Internal error: : 1028 [#1] PREEMPT
last sysfs file: /sys/kernel/uevent_seqnum
Modules linked in:
CPU: 0 Not tainted (2.6.37+ #6)
PC is at musb_do_idle+0x24/0x138
LR is at musb_do_idle+0x18/0x138
pc : [<c02377d8>] lr : [<c02377cc>] psr: 80000193
sp : cf2bdd80 ip : cf2bdd80 fp : c048a20c
r10: c048a60c r9 : c048a40c r8 : cf85e110
r7 : cf2bc000 r6 : 40000113 r5 : c0489800
r4 : cf85e110 r3 : 00000004 r2 : 00000006
r1 : fa0ab000 r0 : cf8a7000
Flags: Nzcv IRQs off FIQs on Mode SVC_32 ISA ARM Segment user
Control: 10c5387d Table: 8faac019 DAC: 00000015
Process reboot (pid: 769, stack limit = 0xcf2bc2f0)
Stack: (0xcf2bdd80 to 0xcf2be000)
dd80: 00000103 c0489800 c02377b4 c005fa34 00000555 c0071a8c c04a3858 cf2bdda8
dda0: 00000555 c048a00c cf2bdda8 cf2bdda8 1838beb0 00000103 00000004 cf2bc000
ddc0: 00000001 00000001 c04896c8 0000000a 00000000 c005ac14 00000001 c003f32c
dde0: 00000000 00000025 00000000 cf2bc000 00000002 00000001 cf2bc000 00000000
de00: 00000001 c005ad08 cf2bc000 c002e07c c03ec039 ffffffff fa200000 c0033608
de20: 00000001 00000000 cf852c14 cf81f200 c045b714 c045b708 cf2bc000 c04a37e8
de40: c0033c04 cf2bc000 00000000 00000001 cf2bde68 cf2bde68 c01c3abc c004f7d8
de60: 60000013 ffffffff c0033c04 00000000 01234567 fee1dead 00000000 c006627c
de80: 00000001 c00662c8 28121969 c00663ec cfa38c40 cf9f6a00 cf2bded0 cf9f6a0c
dea0: 00000000 cf92f000 00008914 c02cd284 c04a55c8 c028b398 c00715c0 becf24a8
dec0: 30687465 00000000 00000000 00000000 00000002 1301a8c0 00000000 00000000
dee0: 00000002 1301a8c0 00000000 00000000 c0450494 cf52792 00011f10 cf2bdf08
df00: 00011f10 cf2bdf10 00011f10 cf2bdf18 c00f0b44 c004f7e8 cf2bdf18 cf2bdf18
df20: 00011f10 cf2bdf30 00011f10 cf2bdf38 cf401300 cf486100 00000008 c00d2b28
df40: 00011f10 cf401300 00200200 c00d3388 00011f10 cfb63a88 cfb63a80 c00c2f08
df60: 00000000 00000000 cfb63a80 00000000 cf0a3480 00000006 c0033c04 cfb63a80
df80: 00000000 c00c0104 00000003 cf0a3480 cfb63a80 00000000 00000001 00000004
dfa0: 00000058 c0033a80 00000000 00000001 fee1dead 28121969 01234567 00000000
dfc0: 00000000 00000001 00000004 00000058 00000001 00000001 00000000 00000001
dfe0: 4024d200 becf2cb0 00009210 4024d218 60000010 fee1dead 00000000 00000000
[<c02377d8>] (musb_do_idle+0x24/0x138) from [<c005fa34>] (run_timer_softirq+0x1a8/0x26c)
[<c005fa34>] (run_timer_softirq+0x1a8/0x26c) from [<c005ac14>] (__do_softirq+0x88/0x138)
[<c005ac14>] (__do_softirq+0x88/0x138) from [<c005ad08>] (irq_exit+0x44/0x98)
[<c005ad08>] (irq_exit+0x44/0x98) from [<c002e07c>] (asm_do_IRQ+0x7c/0xa0)
[<c002e07c>] (asm_do_IRQ+0x7c/0xa0) from [<c0033608>] (__irq_svc+0x48/0xa8)
Exception stack(0xcf2bde20 to 0xcf2bde68)
de20: 00000001 00000000 cf852c14 cf81f200 c045b714 c045b708 cf2bc000 c04a37e8
de40: c0033c04 cf2bc000 00000000 00000001 cf2bde68 cf2bde68 c01c3abc c004f7d8
de60: 60000013 ffffffff
[<c0033608>] (__irq_svc+0x48/0xa8) from [<c004f7d8>] (sub_preempt_count+0x0/0xb8)
Code: ebf86030 e5940098 e594108c e5902010 (e5d13060)
---[ end trace 3689c0d808f9bf7c ]---
Kernel panic - not syncing: Fatal exception in interrupt

Cc: [email protected]
Signed-off-by: Johan Hovold <[email protected]>
Signed-off-by: Felipe Balbi <[email protected]>
Signed-off-by: Sriramakrishnan A G <[email protected]>
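A hedged sketch of the fix described above (assuming the OMAP glue layer's musb_idle_timer; simplified, not the exact diff): the timer must be cancelled synchronously before the clock is disabled, so musb_do_idle can no longer run.

```c
static int musb_platform_exit(struct musb *musb)
{
        /* Make sure the idle timer cannot fire after this point;
         * it would touch MUSB_DEVCTL with the clock already gated. */
        del_timer_sync(&musb_idle_timer);

        /* ... existing clock-disable and cleanup code follows ... */
        return 0;
}
```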
[ Upstream commit e226930 ] This code has been broken forever, but in several different and creative ways. So far as I can work out, the R6040 MAC filter has 4 exact-match entries, the first of which the driver uses for its assigned unicast address, plus a 64-entry hash-based filter for multicast addresses (maybe unicast as well?). The original version of this code would write the first 4 multicast addresses as exact-match entries from offset 1 (bug #1: there is no entry 4, so this could write to some PHY registers). It would fill the remainder of the exact-match entries with the broadcast address (bug #2: this would overwrite the last used entry). If more than 4 multicast addresses were configured, it would set up the hash table, write some random crap to the MAC control register (bug #3) and finally walk off the end of the list when filling the exact-match entries (bug #4). All of this seems to be pointless, since it sets the promiscuous bit when the interface is made promiscuous or if >4 multicast addresses are enabled, and never clears it (bug #5, masking bug #2). The recent(ish) changes to the multicast list fixed bug #4, but completely removed the limit on iteration over the exact-match entries (bug #6). Bug #4 was reported as <https://bugzilla.kernel.org/show_bug.cgi?id=15355> and more recently as <http://bugs.debian.org/600155>. Florian Fainelli attempted to fix these in commit 3bcf822, but that actually dealt with bugs #1-3, bug #4 having been fixed in mainline at that point. This patch fixes the most important current bug, #6.

Signed-off-by: Ben Hutchings <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
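To make the fixed iteration concrete, a hedged sketch of the bounded walk (register writes elided; MCAST_MAX and the overall shape are illustrative, not the exact driver code): never step past the exact-match slots, and divert longer lists to the hash filter.

```c
#define MCAST_MAX 3     /* exact-match slots left after the unicast entry */

static void set_multicast_sketch(struct net_device *dev)
{
        struct netdev_hw_addr *ha;
        int i = 0;

        netdev_for_each_mc_addr(ha, dev) {
                if (i >= MCAST_MAX)
                        break;  /* remaining addresses go via the 64-entry hash */
                /* write ha->addr into exact-match entry i + 1 here */
                i++;
        }
}
```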
Patch series "mm: remove nth_page()", v2. As discussed recently with Linus, nth_page() is just nasty and we would like to remove it. To recap, the reason we currently need nth_page() within a folio is because on some kernel configs (SPARSEMEM without SPARSEMEM_VMEMMAP), the memmap is allocated per memory section. While buddy allocations cannot cross memory section boundaries, hugetlb and dax folios can. So crossing a memory section means that "page++" could do the wrong thing. Instead, nth_page() on these problematic configs always goes from page->pfn, to the go from (++pfn)->page, which is rather nasty. Likely, many people have no idea when nth_page() is required and when it might be dropped. We refer to such problematic PFN ranges and "non-contiguous pages". If we only deal with "contiguous pages", there is not need for nth_page(). Besides that "obvious" folio case, we might end up using nth_page() within CMA allocations (again, could span memory sections), and in one corner case (kfence) when processing memblock allocations (again, could span memory sections). So let's handle all that, add sanity checks, and remove nth_page(). Patch #1 -> #5 : stop making SPARSEMEM_VMEMMAP user-selectable + cleanups Patch torvalds#6 -> torvalds#13 : disallow folios to have non-contiguous pages Patch torvalds#14 -> torvalds#20 : remove nth_page() usage within folios Patch torvalds#22 : disallow CMA allocations of non-contiguous pages Patch torvalds#23 -> torvalds#33 : sanity+check + remove nth_page() usage within SG entry Patch torvalds#34 : sanity-check + remove nth_page() usage in unpin_user_page_range_dirty_lock() Patch torvalds#35 : remove nth_page() in kfence Patch torvalds#36 : adjust stale comment regarding nth_page Patch torvalds#37 : mm: remove nth_page() A lot of this is inspired from the discussion at [1] between Linus, Jason and me, so cudos to them. This patch (of 37): In an ideal world, we wouldn't have to deal with SPARSEMEM without SPARSEMEM_VMEMMAP, but in particular for 32bit SPARSEMEM_VMEMMAP is considered too costly and consequently not supported. However, if an architecture does support SPARSEMEM with SPARSEMEM_VMEMMAP, let's forbid the user to disable VMEMMAP: just like we already do for arm64, s390 and x86. So if SPARSEMEM_VMEMMAP is supported, don't allow to use SPARSEMEM without SPARSEMEM_VMEMMAP. This implies that the option to not use SPARSEMEM_VMEMMAP will now be gone for loongarch, powerpc, riscv and sparc. All architectures only enable SPARSEMEM_VMEMMAP with 64bit support, so there should not really be a big downside to using the VMEMMAP (quite the contrary). This is a preparation for not supporting (1) folio sizes that exceed a single memory section (2) CMA allocations of non-contiguous page ranges in SPARSEMEM without SPARSEMEM_VMEMMAP configs, whereby we want to limit possible impact as much as possible (e.g., gigantic hugetlb page allocations suddenly fails). Link: https://lkml.kernel.org/r/[email protected] Link: https://lkml.kernel.org/r/[email protected] Link: https://lore.kernel.org/all/CAHk-=wiCYfNp4AJLBORU-c7ZyRBUp66W2-Et6cdQ4REx-GyQ_A@mail.gmail.com/T/#u [1] Signed-off-by: David Hildenbrand <[email protected]> Acked-by: Zi Yan <[email protected]> Acked-by: Mike Rapoport (Microsoft) <[email protected]> Acked-by: SeongJae Park <[email protected]> Reviewed-by: Wei Yang <[email protected]> Reviewed-by: Lorenzo Stoakes <[email protected]> Reviewed-by: Liam R. 
Howlett <[email protected]> Cc: Huacai Chen <[email protected]> Cc: WANG Xuerui <[email protected]> Cc: Madhavan Srinivasan <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Nicholas Piggin <[email protected]> Cc: Christophe Leroy <[email protected]> Cc: Paul Walmsley <[email protected]> Cc: Palmer Dabbelt <[email protected]> Cc: Albert Ou <[email protected]> Cc: Alexandre Ghiti <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Andreas Larsson <[email protected]> Cc: Alexander Gordeev <[email protected]> Cc: Alexander Potapenko <[email protected]> Cc: Alexandru Elisei <[email protected]> Cc: Alex Dubov <[email protected]> Cc: Alex Willamson <[email protected]> Cc: Bart van Assche <[email protected]> Cc: Borislav Betkov <[email protected]> Cc: Brendan Jackman <[email protected]> Cc: Brett Creeley <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Christian Borntraeger <[email protected]> Cc: Christoph Lameter (Ampere) <[email protected]> Cc: Damien Le Maol <[email protected]> Cc: Dave Airlie <[email protected]> Cc: Dennis Zhou <[email protected]> Cc: Dmitriy Vyukov <[email protected]> Cc: Doug Gilbert <[email protected]> Cc: Heiko Carstens <[email protected]> Cc: Herbert Xu <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Inki Dae <[email protected]> Cc: James Bottomley <[email protected]> Cc: Jani Nikula <[email protected]> Cc: Jason A. Donenfeld <[email protected]> Cc: Jason Gunthorpe <[email protected]> Cc: Jason Gunthorpe <[email protected]> Cc: Jens Axboe <[email protected]> Cc: Jesper Nilsson <[email protected]> Cc: Johannes Weiner <[email protected]> Cc: John Hubbard <[email protected]> Cc: Jonas Lahtinen <[email protected]> Cc: Kevin Tian <[email protected]> Cc: Lars Persson <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Marco Elver <[email protected]> Cc: "Martin K. Petersen" <[email protected]> Cc: Maxim Levitky <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Muchun Song <[email protected]> Cc: Niklas Cassel <[email protected]> Cc: Oscar Salvador <[email protected]> Cc: Pavel Begunkov <[email protected]> Cc: Peter Xu <[email protected]> Cc: Robin Murohy <[email protected]> Cc: Rodrigo Vivi <[email protected]> Cc: Shameerali Kolothum Thodi <[email protected]> Cc: Shuah Khan <[email protected]> Cc: Suren Baghdasaryan <[email protected]> Cc: Sven Schnelle <[email protected]> Cc: Tejun Heo <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Cc: Thomas Gleinxer <[email protected]> Cc: Tvrtko Ursulin <[email protected]> Cc: Ulf Hansson <[email protected]> Cc: Vasily Gorbik <[email protected]> Cc: Vlastimil Babka <[email protected]> Cc: Will Deacon <[email protected]> Cc: Yishai Hadas <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
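For context, a hedged sketch of what nth_page() boils down to on the problematic configs versus a contiguous memmap (the real macro lives in include/linux/mm.h; this is a simplified rendition):

```c
/* SPARSEMEM without SPARSEMEM_VMEMMAP: the memmap is per-section, so
 * plain "page + n" may walk off the end of one section's array. Going
 * via the PFN is always safe, at the cost of two conversions: */
#define nth_page_sparse(page, n)  pfn_to_page(page_to_pfn(page) + (n))

/* With a virtually contiguous memmap this collapses to pointer math: */
#define nth_page_contig(page, n)  ((page) + (n))
```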
- treat tailcall count as 32-bit for access and update (see the sketch after this message)
- change out_offset scope from file to function
- minor format/structure changes for consistency

Testing: (skipping fentry, fexit, freplace)
========
root@qemu-armhf:/usr/libexec/kselftests-bpf# modprobe test_bpf test_suite=test_tail_calls
test_bpf: #0 Tail call leaf jited:1 967 PASS
test_bpf: #1 Tail call 2 jited:1 1427 PASS
test_bpf: #2 Tail call 3 jited:1 2373 PASS
test_bpf: #3 Tail call 4 jited:1 2304 PASS
test_bpf: #4 Tail call load/store leaf jited:1 1684 PASS
test_bpf: #5 Tail call load/store jited:1 2249 PASS
test_bpf: #6 Tail call error path, max count reached jited:1 22538 PASS
test_bpf: #7 Tail call count preserved across function calls jited:1 1055668 PASS
test_bpf: #8 Tail call error path, NULL target jited:1 513 PASS
test_bpf: #9 Tail call error path, index out of range jited:1 392 PASS
test_bpf: test_tail_calls: Summary: 10 PASSED, 0 FAILED, [10/10 JIT'ed]
root@qemu-armhf:/usr/libexec/kselftests-bpf# ./test_progs -n 397/1-12,17-18,23-24,27-31
397/1 tailcalls/tailcall_1:OK
397/2 tailcalls/tailcall_2:OK
397/3 tailcalls/tailcall_3:OK
397/4 tailcalls/tailcall_4:OK
397/5 tailcalls/tailcall_5:OK
397/6 tailcalls/tailcall_6:OK
397/7 tailcalls/tailcall_bpf2bpf_1:OK
397/8 tailcalls/tailcall_bpf2bpf_2:OK
397/9 tailcalls/tailcall_bpf2bpf_3:OK
397/10 tailcalls/tailcall_bpf2bpf_4:OK
397/11 tailcalls/tailcall_bpf2bpf_5:OK
397/12 tailcalls/tailcall_bpf2bpf_6:OK
397/17 tailcalls/tailcall_poke:OK
397/18 tailcalls/tailcall_bpf2bpf_hierarchy_1:OK
397/23 tailcalls/tailcall_bpf2bpf_hierarchy_2:OK
397/24 tailcalls/tailcall_bpf2bpf_hierarchy_3:OK
397/27 tailcalls/tailcall_failure:OK
397/28 tailcalls/reject_tail_call_spin_lock:OK
397/29 tailcalls/reject_tail_call_rcu_lock:OK
397/30 tailcalls/reject_tail_call_preempt_lock:OK
397/31 tailcalls/reject_tail_call_ref:OK
397 tailcalls:OK
Summary: 1/21 PASSED, 0 SKIPPED, 0 FAILED

Signed-off-by: Tony Ambardar <[email protected]>
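As background for the first bullet, a hedged pseudo-C model of the tail-call guard a JIT has to emit (the real change emits arm instructions; the helper name and exact layout here are illustrative): BPF caps chained tail calls with a counter compared against MAX_TAIL_CALL_CNT, and on a 32-bit JIT it matters that the counter is accessed and updated as a single 32-bit word.

```c
#define MAX_TAIL_CALL_CNT 33            /* kernel's tail-call chain limit */

static int tail_call_sketch(u32 *tcc, u32 index, u32 map_entries)
{
        if (index >= map_entries)
                return -1;              /* fall through to the next insn */
        if (*tcc >= MAX_TAIL_CALL_CNT)  /* one 32-bit load ... */
                return -1;
        (*tcc)++;                       /* ... and one 32-bit store */
        return 0;                       /* proceed with the indirect jump */
}
```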
Commit 0e2f80a("fs/dax: ensure all pages are idle prior to filesystem unmount") introduced the WARN_ON_ONCE to capture whether the filesystem has removed all DAX entries or not and applied the fix to xfs and ext4. Apply the missed fix on erofs to fix the runtime warning: [ 5.266254] ------------[ cut here ]------------ [ 5.266274] WARNING: CPU: 6 PID: 3109 at mm/truncate.c:89 truncate_folio_batch_exceptionals+0xff/0x260 [ 5.266294] Modules linked in: [ 5.266999] CPU: 6 UID: 0 PID: 3109 Comm: umount Tainted: G S 6.16.0+ torvalds#6 PREEMPT(voluntary) [ 5.267012] Tainted: [S]=CPU_OUT_OF_SPEC [ 5.267017] Hardware name: Dell Inc. OptiPlex 5000/05WXFV, BIOS 1.5.1 08/24/2022 [ 5.267024] RIP: 0010:truncate_folio_batch_exceptionals+0xff/0x260 [ 5.267076] Code: 00 00 41 39 df 7f 11 eb 78 83 c3 01 49 83 c4 08 41 39 df 74 6c 48 63 f3 48 83 fe 1f 0f 83 3c 01 00 00 43 f6 44 26 08 01 74 df <0f> 0b 4a 8b 34 22 4c 89 ef 48 89 55 90 e8 ff 54 1f 00 48 8b 55 90 [ 5.267083] RSP: 0018:ffffc900013f36c8 EFLAGS: 00010202 [ 5.267095] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000 [ 5.267101] RDX: ffffc900013f3790 RSI: 0000000000000000 RDI: ffff8882a1407898 [ 5.267108] RBP: ffffc900013f3740 R08: 0000000000000000 R09: 0000000000000000 [ 5.267113] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000 [ 5.267119] R13: ffff8882a1407ab8 R14: ffffc900013f3888 R15: 0000000000000001 [ 5.267125] FS: 00007aaa8b437800(0000) GS:ffff88850025b000(0000) knlGS:0000000000000000 [ 5.267132] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 5.267138] CR2: 00007aaa8b3aac10 CR3: 000000024f764000 CR4: 0000000000f52ef0 [ 5.267144] PKRU: 55555554 [ 5.267150] Call Trace: [ 5.267154] <TASK> [ 5.267181] truncate_inode_pages_range+0x118/0x5e0 [ 5.267193] ? save_trace+0x54/0x390 [ 5.267296] truncate_inode_pages_final+0x43/0x60 [ 5.267309] evict+0x2a4/0x2c0 [ 5.267339] dispose_list+0x39/0x80 [ 5.267352] evict_inodes+0x150/0x1b0 [ 5.267376] generic_shutdown_super+0x41/0x180 [ 5.267390] kill_block_super+0x1b/0x50 [ 5.267402] erofs_kill_sb+0x81/0x90 [erofs] [ 5.267436] deactivate_locked_super+0x32/0xb0 [ 5.267450] deactivate_super+0x46/0x60 [ 5.267460] cleanup_mnt+0xc3/0x170 [ 5.267475] __cleanup_mnt+0x12/0x20 [ 5.267485] task_work_run+0x5d/0xb0 [ 5.267499] exit_to_user_mode_loop+0x144/0x170 [ 5.267512] do_syscall_64+0x2b9/0x7c0 [ 5.267523] ? __lock_acquire+0x665/0x2ce0 [ 5.267535] ? __lock_acquire+0x665/0x2ce0 [ 5.267560] ? lock_acquire+0xcd/0x300 [ 5.267573] ? find_held_lock+0x31/0x90 [ 5.267582] ? mntput_no_expire+0x97/0x4e0 [ 5.267606] ? mntput_no_expire+0xa1/0x4e0 [ 5.267625] ? mntput+0x24/0x50 [ 5.267634] ? path_put+0x1e/0x30 [ 5.267647] ? do_faccessat+0x120/0x2f0 [ 5.267677] ? do_syscall_64+0x1a2/0x7c0 [ 5.267686] ? from_kgid_munged+0x17/0x30 [ 5.267703] ? from_kuid_munged+0x13/0x30 [ 5.267711] ? __do_sys_getuid+0x3d/0x50 [ 5.267724] ? do_syscall_64+0x1a2/0x7c0 [ 5.267732] ? irqentry_exit+0x77/0xb0 [ 5.267743] ? clear_bhb_loop+0x30/0x80 [ 5.267752] ? 
clear_bhb_loop+0x30/0x80 [ 5.267765] entry_SYSCALL_64_after_hwframe+0x76/0x7e [ 5.267772] RIP: 0033:0x7aaa8b32a9fb [ 5.267781] Code: c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 f3 0f 1e fa 31 f6 e9 05 00 00 00 0f 1f 44 00 00 f3 0f 1e fa b8 a6 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 05 c3 0f 1f 40 00 48 8b 15 e9 83 0d 00 f7 d8 [ 5.267787] RSP: 002b:00007ffd7c4c9468 EFLAGS: 00000246 ORIG_RAX: 00000000000000a6 [ 5.267796] RAX: 0000000000000000 RBX: 00005a61592a8b00 RCX: 00007aaa8b32a9fb [ 5.267802] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 00005a61592b2080 [ 5.267806] RBP: 00007ffd7c4c9540 R08: 00007aaa8b403b20 R09: 0000000000000020 [ 5.267812] R10: 0000000000000001 R11: 0000000000000246 R12: 00005a61592a8c00 [ 5.267817] R13: 0000000000000000 R14: 00005a61592b2080 R15: 00005a61592a8f10 [ 5.267849] </TASK> [ 5.267854] irq event stamp: 4721 [ 5.267859] hardirqs last enabled at (4727): [<ffffffff814abf50>] __up_console_sem+0x90/0xa0 [ 5.267873] hardirqs last disabled at (4732): [<ffffffff814abf35>] __up_console_sem+0x75/0xa0 [ 5.267884] softirqs last enabled at (3044): [<ffffffff8132adb3>] kernel_fpu_end+0x53/0x70 [ 5.267895] softirqs last disabled at (3042): [<ffffffff8132b5f4>] kernel_fpu_begin_mask+0xc4/0x120 [ 5.267905] ---[ end trace 0000000000000000 ]--- Fixes: bde708f ("fs/dax: always remove DAX page-cache entries when breaking layouts") Signed-off-by: Yuezhang Mo <[email protected]> Reviewed-by: Friendy Su <[email protected]> Reviewed-by: Daniel Palmer <[email protected]> Reviewed-by: Gao Xiang <[email protected]> Signed-off-by: Gao Xiang <[email protected]>
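A hedged sketch of the shape of the fix, assuming erofs follows the xfs/ext4 pattern referenced above (function body simplified; whether the real diff touches exactly this spot is an assumption):

```c
static void erofs_evict_inode_sketch(struct inode *inode)
{
        /* Wait for all DAX page-cache entries/pages to go idle before
         * the final truncate, as xfs and ext4 already do on unmount. */
        if (IS_DAX(inode))
                dax_break_layout_final(inode);

        truncate_inode_pages_final(&inode->i_data);
        clear_inode(inode);
}
```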
Patch series "mm: remove nth_page()", v2. As discussed recently with Linus, nth_page() is just nasty and we would like to remove it. To recap, the reason we currently need nth_page() within a folio is because on some kernel configs (SPARSEMEM without SPARSEMEM_VMEMMAP), the memmap is allocated per memory section. While buddy allocations cannot cross memory section boundaries, hugetlb and dax folios can. So crossing a memory section means that "page++" could do the wrong thing. Instead, nth_page() on these problematic configs always goes from page->pfn, to the go from (++pfn)->page, which is rather nasty. Likely, many people have no idea when nth_page() is required and when it might be dropped. We refer to such problematic PFN ranges and "non-contiguous pages". If we only deal with "contiguous pages", there is not need for nth_page(). Besides that "obvious" folio case, we might end up using nth_page() within CMA allocations (again, could span memory sections), and in one corner case (kfence) when processing memblock allocations (again, could span memory sections). So let's handle all that, add sanity checks, and remove nth_page(). Patch #1 -> #5 : stop making SPARSEMEM_VMEMMAP user-selectable + cleanups Patch torvalds#6 -> torvalds#13 : disallow folios to have non-contiguous pages Patch torvalds#14 -> torvalds#20 : remove nth_page() usage within folios Patch torvalds#22 : disallow CMA allocations of non-contiguous pages Patch torvalds#23 -> torvalds#33 : sanity+check + remove nth_page() usage within SG entry Patch torvalds#34 : sanity-check + remove nth_page() usage in unpin_user_page_range_dirty_lock() Patch torvalds#35 : remove nth_page() in kfence Patch torvalds#36 : adjust stale comment regarding nth_page Patch torvalds#37 : mm: remove nth_page() A lot of this is inspired from the discussion at [1] between Linus, Jason and me, so cudos to them. This patch (of 37): In an ideal world, we wouldn't have to deal with SPARSEMEM without SPARSEMEM_VMEMMAP, but in particular for 32bit SPARSEMEM_VMEMMAP is considered too costly and consequently not supported. However, if an architecture does support SPARSEMEM with SPARSEMEM_VMEMMAP, let's forbid the user to disable VMEMMAP: just like we already do for arm64, s390 and x86. So if SPARSEMEM_VMEMMAP is supported, don't allow to use SPARSEMEM without SPARSEMEM_VMEMMAP. This implies that the option to not use SPARSEMEM_VMEMMAP will now be gone for loongarch, powerpc, riscv and sparc. All architectures only enable SPARSEMEM_VMEMMAP with 64bit support, so there should not really be a big downside to using the VMEMMAP (quite the contrary). This is a preparation for not supporting (1) folio sizes that exceed a single memory section (2) CMA allocations of non-contiguous page ranges in SPARSEMEM without SPARSEMEM_VMEMMAP configs, whereby we want to limit possible impact as much as possible (e.g., gigantic hugetlb page allocations suddenly fails). Link: https://lkml.kernel.org/r/[email protected] Link: https://lkml.kernel.org/r/[email protected] Link: https://lore.kernel.org/all/CAHk-=wiCYfNp4AJLBORU-c7ZyRBUp66W2-Et6cdQ4REx-GyQ_A@mail.gmail.com/T/#u [1] Signed-off-by: David Hildenbrand <[email protected]> Acked-by: Zi Yan <[email protected]> Acked-by: Mike Rapoport (Microsoft) <[email protected]> Acked-by: SeongJae Park <[email protected]> Reviewed-by: Wei Yang <[email protected]> Reviewed-by: Lorenzo Stoakes <[email protected]> Reviewed-by: Liam R. 
Howlett <[email protected]> Cc: Huacai Chen <[email protected]> Cc: WANG Xuerui <[email protected]> Cc: Madhavan Srinivasan <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Nicholas Piggin <[email protected]> Cc: Christophe Leroy <[email protected]> Cc: Paul Walmsley <[email protected]> Cc: Palmer Dabbelt <[email protected]> Cc: Albert Ou <[email protected]> Cc: Alexandre Ghiti <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Andreas Larsson <[email protected]> Cc: Alexander Gordeev <[email protected]> Cc: Alexander Potapenko <[email protected]> Cc: Alexandru Elisei <[email protected]> Cc: Alex Dubov <[email protected]> Cc: Alex Willamson <[email protected]> Cc: Bart van Assche <[email protected]> Cc: Borislav Betkov <[email protected]> Cc: Brendan Jackman <[email protected]> Cc: Brett Creeley <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Christian Borntraeger <[email protected]> Cc: Christoph Lameter (Ampere) <[email protected]> Cc: Damien Le Maol <[email protected]> Cc: Dave Airlie <[email protected]> Cc: Dennis Zhou <[email protected]> Cc: Dmitriy Vyukov <[email protected]> Cc: Doug Gilbert <[email protected]> Cc: Heiko Carstens <[email protected]> Cc: Herbert Xu <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Inki Dae <[email protected]> Cc: James Bottomley <[email protected]> Cc: Jani Nikula <[email protected]> Cc: Jason A. Donenfeld <[email protected]> Cc: Jason Gunthorpe <[email protected]> Cc: Jason Gunthorpe <[email protected]> Cc: Jens Axboe <[email protected]> Cc: Jesper Nilsson <[email protected]> Cc: Johannes Weiner <[email protected]> Cc: John Hubbard <[email protected]> Cc: Jonas Lahtinen <[email protected]> Cc: Kevin Tian <[email protected]> Cc: Lars Persson <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Marco Elver <[email protected]> Cc: "Martin K. Petersen" <[email protected]> Cc: Maxim Levitky <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Muchun Song <[email protected]> Cc: Niklas Cassel <[email protected]> Cc: Oscar Salvador <[email protected]> Cc: Pavel Begunkov <[email protected]> Cc: Peter Xu <[email protected]> Cc: Robin Murohy <[email protected]> Cc: Rodrigo Vivi <[email protected]> Cc: Shameerali Kolothum Thodi <[email protected]> Cc: Shuah Khan <[email protected]> Cc: Suren Baghdasaryan <[email protected]> Cc: Sven Schnelle <[email protected]> Cc: Tejun Heo <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Cc: Thomas Gleinxer <[email protected]> Cc: Tvrtko Ursulin <[email protected]> Cc: Ulf Hansson <[email protected]> Cc: Vasily Gorbik <[email protected]> Cc: Vlastimil Babka <[email protected]> Cc: Will Deacon <[email protected]> Cc: Yishai Hadas <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
Petr Machata says:

====================
bridge: Allow keeping local FDB entries only on VLAN 0

The bridge FDB contains one local entry per port per VLAN, for the MAC of the port in question, and likewise for the bridge itself. This allows the bridge to locally receive and punt "up" any packets whose destination MAC address matches that of one of the bridge interfaces or of the bridge itself. The number of these local "service" FDB entries grows linearly with the number of bridge-global VLAN memberships, but that in turn will tend to grow quadratically with the number of ports and per-port VLAN memberships. While that does not cause issues during forwarding lookups, it does make dumps impractically slow.

As an example, with 100 interfaces, each on 4K VLANs, a full dump of an FDB that just contains these 400K local entries takes 6.5 s. That's _without_ considering iproute2 formatting overhead; this is just how long it takes to walk the FDB (repeatedly), serialize it into netlink messages, and parse the messages back in userspace. This is to illustrate that with a growing number of ports and VLANs, the time required to dump this repetitive information blows up. Arguably 4K VLANs per interface is not a very realistic configuration, but then modern switches can instead have several hundred interfaces, and we have fielded requests for >1K VLAN memberships per port among customers.

FDB entries are currently all kept on a single linked list, and dumping uses this linked list to walk all entries and dump them in order. When the message buffer is full, the iteration is cut short, and later restarted. Of course, to restart the iteration, it's first necessary to walk the already-dumped front part of the list before starting dumping again. So one possibility is to organize the FDB entries in a different structure more amenable to walk restarts.

One option is to walk the hash table directly. The advantage is that no auxiliary structure needs to be introduced. With a rough sketch of this approach, the above scenario gets dumped in not quite 3 s, saving over 50 % of the time. However, hash table iteration requires maintaining an active cursor that must be collected when the dump is aborted. It looks like that would require changes in the NDO protocol to allow running this cleanup. Moreover, on hash table resize the iteration is simply restarted. FDB dumps are currently not guaranteed to correspond to any one particular state: entries can be missed, or be duplicated. But with hash table iteration we would get that, plus the much less graceful resize behavior, where swaths of the FDB are duplicated.

Another option is to maintain the FDB entries in a red-black tree. We have a PoC of this approach on hand, and the above scenario is dumped in about 2.5 s. Still not as snappy as we'd like, but better than the hash table. However, the savings come at the expense of a more expensive insertion, and require locking during dumps, which blocks insertion. The upside of these approaches is that they provide benefits whatever the FDB contents. But it does not seem like either of these is workable. However, we intend to clean up the RB tree PoC and present it for consideration later on in case the trade-offs are considered acceptable.

Yet another option might be to use in-kernel FDB filtering, and to filter the local entries when dumping. Unfortunately, this does not help all that much either, because the linked-list walk still needs to happen. Also, with the obvious filtering interface built around ndm_flags / ndm_state filtering, one can't just exclude pure local entries in one query. One needs to dump all non-local entries first, and then, to get permanent entries, run another dump filtering local & added_by_user. I.e. one needs to pay the iteration overhead twice, and then integrate the result in userspace. To get significant savings, one would need a very specific knob like "dump, but skip/only include local entries". But if we are adding a local-specific knob, maybe let's have an option to just not duplicate them in the first place.

All this FDB duplication is there merely to make things snappy during forwarding. But high-radix switches with thousands of VLANs typically do not process much traffic in the SW datapath at all, but rather offload the vast majority of it. So we could exchange some of the runtime performance for a neater FDB. To that end, this patchset introduces a new bridge option, BR_BOOLOPT_FDB_LOCAL_VLAN_0, which, when enabled, has local FDB entries installed only on VLAN 0, instead of duplicating them across all VLANs. Then, to maintain the local termination behavior, on FDB miss the bridge does a second lookup on VLAN 0.

Enabling this option changes the bridge behavior in expected ways. Since the entries are only kept on VLAN 0, FDB get, flush and dump will not perceive them on non-0 VLANs. And deleting the VLAN 0 entry affects forwarding on all VLANs.

This patchset is loosely based on a privately circulated patch by Nikolay Aleksandrov.

The patchset progresses as follows:
- Patch #1 introduces a bridge option to enable the above feature. Then patches #2 to #5 gradually patch the bridge to do the right thing when the option is enabled. Finally, patch #6 adds the UAPI knob and the code for when the feature is enabled or disabled.
- Patches #7, #8 and #9 contain fixes and improvements to selftest libraries
- Patch #10 contains a new selftest
====================

Link: https://patch.msgid.link/[email protected]
Signed-off-by: Jakub Kicinski <[email protected]>
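A hedged sketch of the miss-time fallback described above (br_fdb_find_rcu and br_boolopt_get are real bridge helpers; the wrapper and its placement are illustrative): with the option on, a lookup that misses on the packet's VLAN retries against the VLAN 0 local entry.

```c
static struct net_bridge_fdb_entry *
local_lookup_sketch(struct net_bridge *br, const u8 *addr, u16 vid)
{
        struct net_bridge_fdb_entry *f = br_fdb_find_rcu(br, addr, vid);

        /* Local entries now live only on VLAN 0: fall back on miss. */
        if (!f && br_boolopt_get(br, BR_BOOLOPT_FDB_LOCAL_VLAN_0))
                f = br_fdb_find_rcu(br, addr, 0);
        return f;
}
```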
Patch series "mm: remove nth_page()", v2. As discussed recently with Linus, nth_page() is just nasty and we would like to remove it. To recap, the reason we currently need nth_page() within a folio is because on some kernel configs (SPARSEMEM without SPARSEMEM_VMEMMAP), the memmap is allocated per memory section. While buddy allocations cannot cross memory section boundaries, hugetlb and dax folios can. So crossing a memory section means that "page++" could do the wrong thing. Instead, nth_page() on these problematic configs always goes from page->pfn, to the go from (++pfn)->page, which is rather nasty. Likely, many people have no idea when nth_page() is required and when it might be dropped. We refer to such problematic PFN ranges and "non-contiguous pages". If we only deal with "contiguous pages", there is not need for nth_page(). Besides that "obvious" folio case, we might end up using nth_page() within CMA allocations (again, could span memory sections), and in one corner case (kfence) when processing memblock allocations (again, could span memory sections). So let's handle all that, add sanity checks, and remove nth_page(). Patch #1 -> #5 : stop making SPARSEMEM_VMEMMAP user-selectable + cleanups Patch torvalds#6 -> torvalds#13 : disallow folios to have non-contiguous pages Patch torvalds#14 -> torvalds#20 : remove nth_page() usage within folios Patch torvalds#22 : disallow CMA allocations of non-contiguous pages Patch torvalds#23 -> torvalds#33 : sanity+check + remove nth_page() usage within SG entry Patch torvalds#34 : sanity-check + remove nth_page() usage in unpin_user_page_range_dirty_lock() Patch torvalds#35 : remove nth_page() in kfence Patch torvalds#36 : adjust stale comment regarding nth_page Patch torvalds#37 : mm: remove nth_page() A lot of this is inspired from the discussion at [1] between Linus, Jason and me, so cudos to them. This patch (of 37): In an ideal world, we wouldn't have to deal with SPARSEMEM without SPARSEMEM_VMEMMAP, but in particular for 32bit SPARSEMEM_VMEMMAP is considered too costly and consequently not supported. However, if an architecture does support SPARSEMEM with SPARSEMEM_VMEMMAP, let's forbid the user to disable VMEMMAP: just like we already do for arm64, s390 and x86. So if SPARSEMEM_VMEMMAP is supported, don't allow to use SPARSEMEM without SPARSEMEM_VMEMMAP. This implies that the option to not use SPARSEMEM_VMEMMAP will now be gone for loongarch, powerpc, riscv and sparc. All architectures only enable SPARSEMEM_VMEMMAP with 64bit support, so there should not really be a big downside to using the VMEMMAP (quite the contrary). This is a preparation for not supporting (1) folio sizes that exceed a single memory section (2) CMA allocations of non-contiguous page ranges in SPARSEMEM without SPARSEMEM_VMEMMAP configs, whereby we want to limit possible impact as much as possible (e.g., gigantic hugetlb page allocations suddenly fails). Link: https://lkml.kernel.org/r/[email protected] Link: https://lkml.kernel.org/r/[email protected] Link: https://lore.kernel.org/all/CAHk-=wiCYfNp4AJLBORU-c7ZyRBUp66W2-Et6cdQ4REx-GyQ_A@mail.gmail.com/T/#u [1] Signed-off-by: David Hildenbrand <[email protected]> Acked-by: Zi Yan <[email protected]> Acked-by: Mike Rapoport (Microsoft) <[email protected]> Acked-by: SeongJae Park <[email protected]> Reviewed-by: Wei Yang <[email protected]> Reviewed-by: Lorenzo Stoakes <[email protected]> Reviewed-by: Liam R. 
Howlett <[email protected]> Cc: Huacai Chen <[email protected]> Cc: WANG Xuerui <[email protected]> Cc: Madhavan Srinivasan <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Nicholas Piggin <[email protected]> Cc: Christophe Leroy <[email protected]> Cc: Paul Walmsley <[email protected]> Cc: Palmer Dabbelt <[email protected]> Cc: Albert Ou <[email protected]> Cc: Alexandre Ghiti <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Andreas Larsson <[email protected]> Cc: Alexander Gordeev <[email protected]> Cc: Alexander Potapenko <[email protected]> Cc: Alexandru Elisei <[email protected]> Cc: Alex Dubov <[email protected]> Cc: Alex Willamson <[email protected]> Cc: Bart van Assche <[email protected]> Cc: Borislav Betkov <[email protected]> Cc: Brendan Jackman <[email protected]> Cc: Brett Creeley <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Christian Borntraeger <[email protected]> Cc: Christoph Lameter (Ampere) <[email protected]> Cc: Damien Le Maol <[email protected]> Cc: Dave Airlie <[email protected]> Cc: Dennis Zhou <[email protected]> Cc: Dmitriy Vyukov <[email protected]> Cc: Doug Gilbert <[email protected]> Cc: Heiko Carstens <[email protected]> Cc: Herbert Xu <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Inki Dae <[email protected]> Cc: James Bottomley <[email protected]> Cc: Jani Nikula <[email protected]> Cc: Jason A. Donenfeld <[email protected]> Cc: Jason Gunthorpe <[email protected]> Cc: Jason Gunthorpe <[email protected]> Cc: Jens Axboe <[email protected]> Cc: Jesper Nilsson <[email protected]> Cc: Johannes Weiner <[email protected]> Cc: John Hubbard <[email protected]> Cc: Jonas Lahtinen <[email protected]> Cc: Kevin Tian <[email protected]> Cc: Lars Persson <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Marco Elver <[email protected]> Cc: "Martin K. Petersen" <[email protected]> Cc: Maxim Levitky <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Muchun Song <[email protected]> Cc: Niklas Cassel <[email protected]> Cc: Oscar Salvador <[email protected]> Cc: Pavel Begunkov <[email protected]> Cc: Peter Xu <[email protected]> Cc: Robin Murohy <[email protected]> Cc: Rodrigo Vivi <[email protected]> Cc: Shameerali Kolothum Thodi <[email protected]> Cc: Shuah Khan <[email protected]> Cc: Suren Baghdasaryan <[email protected]> Cc: Sven Schnelle <[email protected]> Cc: Tejun Heo <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Cc: Thomas Gleinxer <[email protected]> Cc: Tvrtko Ursulin <[email protected]> Cc: Ulf Hansson <[email protected]> Cc: Vasily Gorbik <[email protected]> Cc: Vlastimil Babka <[email protected]> Cc: Will Deacon <[email protected]> Cc: Yishai Hadas <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
Patch series "mm: remove nth_page()", v2. As discussed recently with Linus, nth_page() is just nasty and we would like to remove it. To recap, the reason we currently need nth_page() within a folio is because on some kernel configs (SPARSEMEM without SPARSEMEM_VMEMMAP), the memmap is allocated per memory section. While buddy allocations cannot cross memory section boundaries, hugetlb and dax folios can. So crossing a memory section means that "page++" could do the wrong thing. Instead, nth_page() on these problematic configs always goes from page->pfn, to the go from (++pfn)->page, which is rather nasty. Likely, many people have no idea when nth_page() is required and when it might be dropped. We refer to such problematic PFN ranges and "non-contiguous pages". If we only deal with "contiguous pages", there is not need for nth_page(). Besides that "obvious" folio case, we might end up using nth_page() within CMA allocations (again, could span memory sections), and in one corner case (kfence) when processing memblock allocations (again, could span memory sections). So let's handle all that, add sanity checks, and remove nth_page(). Patch #1 -> #5 : stop making SPARSEMEM_VMEMMAP user-selectable + cleanups Patch torvalds#6 -> torvalds#13 : disallow folios to have non-contiguous pages Patch torvalds#14 -> torvalds#20 : remove nth_page() usage within folios Patch torvalds#22 : disallow CMA allocations of non-contiguous pages Patch torvalds#23 -> torvalds#33 : sanity+check + remove nth_page() usage within SG entry Patch torvalds#34 : sanity-check + remove nth_page() usage in unpin_user_page_range_dirty_lock() Patch torvalds#35 : remove nth_page() in kfence Patch torvalds#36 : adjust stale comment regarding nth_page Patch torvalds#37 : mm: remove nth_page() A lot of this is inspired from the discussion at [1] between Linus, Jason and me, so cudos to them. This patch (of 37): In an ideal world, we wouldn't have to deal with SPARSEMEM without SPARSEMEM_VMEMMAP, but in particular for 32bit SPARSEMEM_VMEMMAP is considered too costly and consequently not supported. However, if an architecture does support SPARSEMEM with SPARSEMEM_VMEMMAP, let's forbid the user to disable VMEMMAP: just like we already do for arm64, s390 and x86. So if SPARSEMEM_VMEMMAP is supported, don't allow to use SPARSEMEM without SPARSEMEM_VMEMMAP. This implies that the option to not use SPARSEMEM_VMEMMAP will now be gone for loongarch, powerpc, riscv and sparc. All architectures only enable SPARSEMEM_VMEMMAP with 64bit support, so there should not really be a big downside to using the VMEMMAP (quite the contrary). This is a preparation for not supporting (1) folio sizes that exceed a single memory section (2) CMA allocations of non-contiguous page ranges in SPARSEMEM without SPARSEMEM_VMEMMAP configs, whereby we want to limit possible impact as much as possible (e.g., gigantic hugetlb page allocations suddenly fails). Link: https://lkml.kernel.org/r/[email protected] Link: https://lkml.kernel.org/r/[email protected] Link: https://lore.kernel.org/all/CAHk-=wiCYfNp4AJLBORU-c7ZyRBUp66W2-Et6cdQ4REx-GyQ_A@mail.gmail.com/T/#u [1] Signed-off-by: David Hildenbrand <[email protected]> Acked-by: Zi Yan <[email protected]> Acked-by: Mike Rapoport (Microsoft) <[email protected]> Acked-by: SeongJae Park <[email protected]> Reviewed-by: Wei Yang <[email protected]> Reviewed-by: Lorenzo Stoakes <[email protected]> Reviewed-by: Liam R. 
Howlett <[email protected]> Cc: Huacai Chen <[email protected]> Cc: WANG Xuerui <[email protected]> Cc: Madhavan Srinivasan <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Nicholas Piggin <[email protected]> Cc: Christophe Leroy <[email protected]> Cc: Paul Walmsley <[email protected]> Cc: Palmer Dabbelt <[email protected]> Cc: Albert Ou <[email protected]> Cc: Alexandre Ghiti <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Andreas Larsson <[email protected]> Cc: Alexander Gordeev <[email protected]> Cc: Alexander Potapenko <[email protected]> Cc: Alexandru Elisei <[email protected]> Cc: Alex Dubov <[email protected]> Cc: Alex Willamson <[email protected]> Cc: Bart van Assche <[email protected]> Cc: Borislav Betkov <[email protected]> Cc: Brendan Jackman <[email protected]> Cc: Brett Creeley <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Christian Borntraeger <[email protected]> Cc: Christoph Lameter (Ampere) <[email protected]> Cc: Damien Le Maol <[email protected]> Cc: Dave Airlie <[email protected]> Cc: Dennis Zhou <[email protected]> Cc: Dmitriy Vyukov <[email protected]> Cc: Doug Gilbert <[email protected]> Cc: Heiko Carstens <[email protected]> Cc: Herbert Xu <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Inki Dae <[email protected]> Cc: James Bottomley <[email protected]> Cc: Jani Nikula <[email protected]> Cc: Jason A. Donenfeld <[email protected]> Cc: Jason Gunthorpe <[email protected]> Cc: Jason Gunthorpe <[email protected]> Cc: Jens Axboe <[email protected]> Cc: Jesper Nilsson <[email protected]> Cc: Johannes Weiner <[email protected]> Cc: John Hubbard <[email protected]> Cc: Jonas Lahtinen <[email protected]> Cc: Kevin Tian <[email protected]> Cc: Lars Persson <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Marco Elver <[email protected]> Cc: "Martin K. Petersen" <[email protected]> Cc: Maxim Levitky <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Muchun Song <[email protected]> Cc: Niklas Cassel <[email protected]> Cc: Oscar Salvador <[email protected]> Cc: Pavel Begunkov <[email protected]> Cc: Peter Xu <[email protected]> Cc: Robin Murohy <[email protected]> Cc: Rodrigo Vivi <[email protected]> Cc: Shameerali Kolothum Thodi <[email protected]> Cc: Shuah Khan <[email protected]> Cc: Suren Baghdasaryan <[email protected]> Cc: Sven Schnelle <[email protected]> Cc: Tejun Heo <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Cc: Thomas Gleinxer <[email protected]> Cc: Tvrtko Ursulin <[email protected]> Cc: Ulf Hansson <[email protected]> Cc: Vasily Gorbik <[email protected]> Cc: Vlastimil Babka <[email protected]> Cc: Will Deacon <[email protected]> Cc: Yishai Hadas <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
Patch series "mm: remove nth_page()", v2. As discussed recently with Linus, nth_page() is just nasty and we would like to remove it. To recap, the reason we currently need nth_page() within a folio is because on some kernel configs (SPARSEMEM without SPARSEMEM_VMEMMAP), the memmap is allocated per memory section. While buddy allocations cannot cross memory section boundaries, hugetlb and dax folios can. So crossing a memory section means that "page++" could do the wrong thing. Instead, nth_page() on these problematic configs always goes from page->pfn, to the go from (++pfn)->page, which is rather nasty. Likely, many people have no idea when nth_page() is required and when it might be dropped. We refer to such problematic PFN ranges and "non-contiguous pages". If we only deal with "contiguous pages", there is not need for nth_page(). Besides that "obvious" folio case, we might end up using nth_page() within CMA allocations (again, could span memory sections), and in one corner case (kfence) when processing memblock allocations (again, could span memory sections). So let's handle all that, add sanity checks, and remove nth_page(). Patch #1 -> #5 : stop making SPARSEMEM_VMEMMAP user-selectable + cleanups Patch torvalds#6 -> torvalds#13 : disallow folios to have non-contiguous pages Patch torvalds#14 -> torvalds#20 : remove nth_page() usage within folios Patch torvalds#22 : disallow CMA allocations of non-contiguous pages Patch torvalds#23 -> torvalds#33 : sanity+check + remove nth_page() usage within SG entry Patch torvalds#34 : sanity-check + remove nth_page() usage in unpin_user_page_range_dirty_lock() Patch torvalds#35 : remove nth_page() in kfence Patch torvalds#36 : adjust stale comment regarding nth_page Patch torvalds#37 : mm: remove nth_page() A lot of this is inspired from the discussion at [1] between Linus, Jason and me, so cudos to them. This patch (of 37): In an ideal world, we wouldn't have to deal with SPARSEMEM without SPARSEMEM_VMEMMAP, but in particular for 32bit SPARSEMEM_VMEMMAP is considered too costly and consequently not supported. However, if an architecture does support SPARSEMEM with SPARSEMEM_VMEMMAP, let's forbid the user to disable VMEMMAP: just like we already do for arm64, s390 and x86. So if SPARSEMEM_VMEMMAP is supported, don't allow to use SPARSEMEM without SPARSEMEM_VMEMMAP. This implies that the option to not use SPARSEMEM_VMEMMAP will now be gone for loongarch, powerpc, riscv and sparc. All architectures only enable SPARSEMEM_VMEMMAP with 64bit support, so there should not really be a big downside to using the VMEMMAP (quite the contrary). This is a preparation for not supporting (1) folio sizes that exceed a single memory section (2) CMA allocations of non-contiguous page ranges in SPARSEMEM without SPARSEMEM_VMEMMAP configs, whereby we want to limit possible impact as much as possible (e.g., gigantic hugetlb page allocations suddenly fails). Link: https://lkml.kernel.org/r/[email protected] Link: https://lkml.kernel.org/r/[email protected] Link: https://lore.kernel.org/all/CAHk-=wiCYfNp4AJLBORU-c7ZyRBUp66W2-Et6cdQ4REx-GyQ_A@mail.gmail.com/T/#u [1] Signed-off-by: David Hildenbrand <[email protected]> Acked-by: Zi Yan <[email protected]> Acked-by: Mike Rapoport (Microsoft) <[email protected]> Acked-by: SeongJae Park <[email protected]> Reviewed-by: Wei Yang <[email protected]> Reviewed-by: Lorenzo Stoakes <[email protected]> Reviewed-by: Liam R. 
Howlett <[email protected]> Cc: Huacai Chen <[email protected]> Cc: WANG Xuerui <[email protected]> Cc: Madhavan Srinivasan <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Nicholas Piggin <[email protected]> Cc: Christophe Leroy <[email protected]> Cc: Paul Walmsley <[email protected]> Cc: Palmer Dabbelt <[email protected]> Cc: Albert Ou <[email protected]> Cc: Alexandre Ghiti <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Andreas Larsson <[email protected]> Cc: Alexander Gordeev <[email protected]> Cc: Alexander Potapenko <[email protected]> Cc: Alexandru Elisei <[email protected]> Cc: Alex Dubov <[email protected]> Cc: Alex Willamson <[email protected]> Cc: Bart van Assche <[email protected]> Cc: Borislav Betkov <[email protected]> Cc: Brendan Jackman <[email protected]> Cc: Brett Creeley <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Christian Borntraeger <[email protected]> Cc: Christoph Lameter (Ampere) <[email protected]> Cc: Damien Le Maol <[email protected]> Cc: Dave Airlie <[email protected]> Cc: Dennis Zhou <[email protected]> Cc: Dmitriy Vyukov <[email protected]> Cc: Doug Gilbert <[email protected]> Cc: Heiko Carstens <[email protected]> Cc: Herbert Xu <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Inki Dae <[email protected]> Cc: James Bottomley <[email protected]> Cc: Jani Nikula <[email protected]> Cc: Jason A. Donenfeld <[email protected]> Cc: Jason Gunthorpe <[email protected]> Cc: Jason Gunthorpe <[email protected]> Cc: Jens Axboe <[email protected]> Cc: Jesper Nilsson <[email protected]> Cc: Johannes Weiner <[email protected]> Cc: John Hubbard <[email protected]> Cc: Jonas Lahtinen <[email protected]> Cc: Kevin Tian <[email protected]> Cc: Lars Persson <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Marco Elver <[email protected]> Cc: "Martin K. Petersen" <[email protected]> Cc: Maxim Levitky <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Muchun Song <[email protected]> Cc: Niklas Cassel <[email protected]> Cc: Oscar Salvador <[email protected]> Cc: Pavel Begunkov <[email protected]> Cc: Peter Xu <[email protected]> Cc: Robin Murohy <[email protected]> Cc: Rodrigo Vivi <[email protected]> Cc: Shameerali Kolothum Thodi <[email protected]> Cc: Shuah Khan <[email protected]> Cc: Suren Baghdasaryan <[email protected]> Cc: Sven Schnelle <[email protected]> Cc: Tejun Heo <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Cc: Thomas Gleinxer <[email protected]> Cc: Tvrtko Ursulin <[email protected]> Cc: Ulf Hansson <[email protected]> Cc: Vasily Gorbik <[email protected]> Cc: Vlastimil Babka <[email protected]> Cc: Will Deacon <[email protected]> Cc: Yishai Hadas <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
Add the preliminary header file for the PRUSS INTC instance present in the ICSSG IP on AM65x SoCs. The INTC on AM65x SoCs has an increased number of host interrupts & channels (20 vs 10) and PRU System Events (160 vs 64), so there are new additional registers. NOTE: The current pruIntc is defined as a single structure, and the defined global instance variable CT_INTC leverages Constant Table Entry #0 even though there is an additional Constant Table Entry #6 that starts at an offset of 0x200. Constant Table Entry #6 should therefore be commented out in any example linker command files. Signed-off-by: Suman Anna <[email protected]>
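For context, this is roughly how such a header pins a register overlay structure to a constant table entry with the TI PRU compiler; a hedged sketch, where the cregister name "PRU_INTC" and the exact attribute spelling are assumptions for illustration:

    /* Map the pruIntc overlay through Constant Table Entry #0; accesses
     * via CT_INTC are then resolved against that entry by the compiler.
     * The cregister name "PRU_INTC" is assumed here. */
    volatile __far pruIntc CT_INTC
        __attribute__((cregister("PRU_INTC", far), peripheral));

Because CT_INTC already reaches the INTC through entry #0, keeping entry #6 active in a linker command file would only create a conflicting second mapping, hence the advice to comment it out.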
Add the RPMsg echo examples for the new K3 AM65x SoC family. The AM65x SoCs have three ICSSG IP instances, with each instance containing 2 PRU cores and 2 new auxiliary cores called RTU cores. Examples are added for each of the PRU and RTU cores within a single ICSSG IP instance. The same source files are used to rebuild the project for all three ICSSG instances - ICSSG0, ICSSG1 and ICSSG2. The output ELF firmware images and map files are generated under a "gen/icssg<#inst>/" folder in each respective core example folder. The following are the notable changes w.r.t AM572x: - There is neither a STANDBY_INIT nor a SYSCFG register with ICSSG, so there is no need to program any value. - The rpmsg channel description and port numbers are constructed using macros passed through the compiler. - The PRU resource tables use the new PRU interrupt resource types and macros, but use the Version 0 interrupt resource type for illustration purposes (Version 1 is recommended for all PRU and RTU cores in general). - The RTU resource tables use the newer Version 1 interrupt resource type that had to be introduced specifically for the AM65x ICSSG. - AM65x-specific linker command files; most of the Constants for AM65x are now local ICSSG sub-module addresses. Note that each PRU and RTU core needs its own linker command file due to differences in their Constant Table Registers #10, #11 and #22. - Use the AM65x-specific interrupt header file. See commit c97858e ("AM65x: Add preliminary header file for ICSSG INTC") and the commented-out CREGISTER #6 for some usage caveats. - Makefile improvements to build three output images for each ICSSG instance, reusing the same source files and linker command files for each of the PRU0, PRU1, RTU0 and RTU1 cores. Signed-off-by: Suman Anna <[email protected]>
Add the preliminary header file for the PRUSS INTC instance present in the ICSSG IP on J721E SoCs. The INTC on J721E SoCs is identical to the INTC present on AM65x SoCs, with 20 host interrupts & channels and 164 PRU System Events. The file is copied from the corresponding AM65x file. NOTE: 1. The ICSSG on J721E SoCs has two additional Tx_PRU cores and associated Task Managers, but there is no increase in the INTC host interrupts. 2. The current pruIntc is defined as a single structure, and the defined global instance variable CT_INTC leverages Constant Table Entry #0 even though there is an additional Constant Table Entry #6 that starts at an offset of 0x200. Constant Table Entry #6 should therefore be commented out in any example linker command files. Signed-off-by: Suman Anna <[email protected]>
Add the RPMsg echo examples for the latest K3 J721E SoC family. The J721E SoCs use the next version of the AM65x ICSSG IP and contain two instances of this newer ICSSG IP. Each ICSSG instance contains 2 PRU cores, 2 RTU cores, and 2 new additional auxiliary cores called Transmit PRU (Tx_PRU) cores that are normally used to control the TX L2 FIFO, if enabled, in Ethernet applications. Examples are added for each of the PRU and RTU cores (Tx_PRUs are left out for the moment) within a single ICSSG IP instance. The same source files are used to rebuild the project for both ICSSG instances - ICSSG0 and ICSSG1. The output ELF firmware images and map files are generated under a "gen/icssg<#inst>/" folder in each respective core example folder. The following are the notable differences w.r.t AM65x: - The Tx_PRUs use the same interrupt sources as the regular PRU cores, so the resource tables can use the Version 0 interrupt resource type (Version 1 is still recommended for all PRU, RTU and Tx_PRU cores). - J721E-specific linker command files; the Constants are identical to AM65x, but the addition of the Tx_PRU cores required additional partitioning of the Data RAMs. The second 4 KB in each Data RAM is equally partitioned between the RTU and Tx_PRU cores, while the size for the PRU core is left unchanged (a worked layout sketch follows this entry). - Use a separate copy of the interrupt header file in a J721E-specific folder. The INTC is identical to that of AM65x, so the same register definitions are used. See commit 91f4f18 ("J721E: Add header file for ICSSG INTC"). The same limitations around CREGISTER #6 exist as on AM65x. Please see commit df1d9da ("Examples: AM65x: Add RPMsg examples for AM65x SoCs") for differences w.r.t AM572x. Signed-off-by: Suman Anna <[email protected]>
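As a worked example of the Data RAM partitioning described above; the sizes follow from the text ("second 4 KB equally partitioned" means 2 KB each), but the symbol names and base offsets are hypothetical, for illustration only:

    /* Per-slice Data RAM layout sketch for J721E, per the description
     * above: the PRU core's share is unchanged, the second 4 KB is
     * split equally between the RTU and Tx_PRU cores. */
    #define PRU_DMEM_BASE     0x0000
    #define PRU_DMEM_SIZE     0x1000   /* 4 KB, left unchanged */
    #define RTU_DMEM_BASE     0x1000
    #define RTU_DMEM_SIZE     0x0800   /* 2 KB */
    #define TXPRU_DMEM_BASE   0x1800
    #define TXPRU_DMEM_SIZE   0x0800   /* 2 KB */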
- treat tailcall count as 32-bit for access and update - change out_offset scope from file to function - minor format/structure changes for consistency Testing: (skipping fentry, fexit, freplace) ======== root@qemu-armhf:/usr/libexec/kselftests-bpf# modprobe test_bpf test_suite=test_tail_calls test_bpf: #0 Tail call leaf jited:1 967 PASS test_bpf: #1 Tail call 2 jited:1 1427 PASS test_bpf: #2 Tail call 3 jited:1 2373 PASS test_bpf: #3 Tail call 4 jited:1 2304 PASS test_bpf: #4 Tail call load/store leaf jited:1 1684 PASS test_bpf: #5 Tail call load/store jited:1 2249 PASS test_bpf: #6 Tail call error path, max count reached jited:1 22538 PASS test_bpf: #7 Tail call count preserved across function calls jited:1 1055668 PASS test_bpf: #8 Tail call error path, NULL target jited:1 513 PASS test_bpf: #9 Tail call error path, index out of range jited:1 392 PASS test_bpf: test_tail_calls: Summary: 10 PASSED, 0 FAILED, [10/10 JIT'ed] root@qemu-armhf:/usr/libexec/kselftests-bpf# ./test_progs -n 397/1-12,17-18,23-24,27-31 397/1 tailcalls/tailcall_1:OK 397/2 tailcalls/tailcall_2:OK 397/3 tailcalls/tailcall_3:OK 397/4 tailcalls/tailcall_4:OK 397/5 tailcalls/tailcall_5:OK 397/6 tailcalls/tailcall_6:OK 397/7 tailcalls/tailcall_bpf2bpf_1:OK 397/8 tailcalls/tailcall_bpf2bpf_2:OK 397/9 tailcalls/tailcall_bpf2bpf_3:OK 397/10 tailcalls/tailcall_bpf2bpf_4:OK 397/11 tailcalls/tailcall_bpf2bpf_5:OK 397/12 tailcalls/tailcall_bpf2bpf_6:OK 397/17 tailcalls/tailcall_poke:OK 397/18 tailcalls/tailcall_bpf2bpf_hierarchy_1:OK 397/23 tailcalls/tailcall_bpf2bpf_hierarchy_2:OK 397/24 tailcalls/tailcall_bpf2bpf_hierarchy_3:OK 397/27 tailcalls/tailcall_failure:OK 397/28 tailcalls/reject_tail_call_spin_lock:OK 397/29 tailcalls/reject_tail_call_rcu_lock:OK 397/30 tailcalls/reject_tail_call_preempt_lock:OK 397/31 tailcalls/reject_tail_call_ref:OK 397 tailcalls:OK Summary: 1/21 PASSED, 0 SKIPPED, 0 FAILED Signed-off-by: Tony Ambardar <[email protected]>
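For orientation, the bound exercised by the "max count reached" test above is the generic tail-call limit; a hedged excerpt of the interpreter-side semantics that any JIT must mirror (after the shape of kernel/bpf/core.c, not the arm JIT code this patch touches):

    /* MAX_TAIL_CALL_CNT is 33 in current kernels, so the counter always
     * fits comfortably in 32 bits; that is what makes 32-bit access and
     * update of the tailcall count safe. */
    if (unlikely(tail_call_cnt >= MAX_TAIL_CALL_CNT))
        goto out;         /* the "max count reached" error path */
    tail_call_cnt++;      /* preserved across function calls (test #7) */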
Patch series "mm: remove nth_page()", v2. As discussed recently with Linus, nth_page() is just nasty and we would like to remove it. To recap, the reason we currently need nth_page() within a folio is because on some kernel configs (SPARSEMEM without SPARSEMEM_VMEMMAP), the memmap is allocated per memory section. While buddy allocations cannot cross memory section boundaries, hugetlb and dax folios can. So crossing a memory section means that "page++" could do the wrong thing. Instead, nth_page() on these problematic configs always goes from page->pfn, to the go from (++pfn)->page, which is rather nasty. Likely, many people have no idea when nth_page() is required and when it might be dropped. We refer to such problematic PFN ranges and "non-contiguous pages". If we only deal with "contiguous pages", there is not need for nth_page(). Besides that "obvious" folio case, we might end up using nth_page() within CMA allocations (again, could span memory sections), and in one corner case (kfence) when processing memblock allocations (again, could span memory sections). So let's handle all that, add sanity checks, and remove nth_page(). Patch #1 -> #5 : stop making SPARSEMEM_VMEMMAP user-selectable + cleanups Patch torvalds#6 -> torvalds#13 : disallow folios to have non-contiguous pages Patch torvalds#14 -> torvalds#20 : remove nth_page() usage within folios Patch torvalds#22 : disallow CMA allocations of non-contiguous pages Patch torvalds#23 -> torvalds#33 : sanity+check + remove nth_page() usage within SG entry Patch torvalds#34 : sanity-check + remove nth_page() usage in unpin_user_page_range_dirty_lock() Patch torvalds#35 : remove nth_page() in kfence Patch torvalds#36 : adjust stale comment regarding nth_page Patch torvalds#37 : mm: remove nth_page() A lot of this is inspired from the discussion at [1] between Linus, Jason and me, so cudos to them. This patch (of 37): In an ideal world, we wouldn't have to deal with SPARSEMEM without SPARSEMEM_VMEMMAP, but in particular for 32bit SPARSEMEM_VMEMMAP is considered too costly and consequently not supported. However, if an architecture does support SPARSEMEM with SPARSEMEM_VMEMMAP, let's forbid the user to disable VMEMMAP: just like we already do for arm64, s390 and x86. So if SPARSEMEM_VMEMMAP is supported, don't allow to use SPARSEMEM without SPARSEMEM_VMEMMAP. This implies that the option to not use SPARSEMEM_VMEMMAP will now be gone for loongarch, powerpc, riscv and sparc. All architectures only enable SPARSEMEM_VMEMMAP with 64bit support, so there should not really be a big downside to using the VMEMMAP (quite the contrary). This is a preparation for not supporting (1) folio sizes that exceed a single memory section (2) CMA allocations of non-contiguous page ranges in SPARSEMEM without SPARSEMEM_VMEMMAP configs, whereby we want to limit possible impact as much as possible (e.g., gigantic hugetlb page allocations suddenly fails). Link: https://lkml.kernel.org/r/[email protected] Link: https://lkml.kernel.org/r/[email protected] Link: https://lore.kernel.org/all/CAHk-=wiCYfNp4AJLBORU-c7ZyRBUp66W2-Et6cdQ4REx-GyQ_A@mail.gmail.com/T/#u [1] Signed-off-by: David Hildenbrand <[email protected]> Acked-by: Zi Yan <[email protected]> Acked-by: Mike Rapoport (Microsoft) <[email protected]> Acked-by: SeongJae Park <[email protected]> Reviewed-by: Wei Yang <[email protected]> Reviewed-by: Lorenzo Stoakes <[email protected]> Reviewed-by: Liam R. 
Howlett <[email protected]> Cc: Huacai Chen <[email protected]> Cc: WANG Xuerui <[email protected]> Cc: Madhavan Srinivasan <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Nicholas Piggin <[email protected]> Cc: Christophe Leroy <[email protected]> Cc: Paul Walmsley <[email protected]> Cc: Palmer Dabbelt <[email protected]> Cc: Albert Ou <[email protected]> Cc: Alexandre Ghiti <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Andreas Larsson <[email protected]> Cc: Alexander Gordeev <[email protected]> Cc: Alexander Potapenko <[email protected]> Cc: Alexandru Elisei <[email protected]> Cc: Alex Dubov <[email protected]> Cc: Alex Willamson <[email protected]> Cc: Bart van Assche <[email protected]> Cc: Borislav Betkov <[email protected]> Cc: Brendan Jackman <[email protected]> Cc: Brett Creeley <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Christian Borntraeger <[email protected]> Cc: Christoph Lameter (Ampere) <[email protected]> Cc: Damien Le Maol <[email protected]> Cc: Dave Airlie <[email protected]> Cc: Dennis Zhou <[email protected]> Cc: Dmitriy Vyukov <[email protected]> Cc: Doug Gilbert <[email protected]> Cc: Heiko Carstens <[email protected]> Cc: Herbert Xu <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Inki Dae <[email protected]> Cc: James Bottomley <[email protected]> Cc: Jani Nikula <[email protected]> Cc: Jason A. Donenfeld <[email protected]> Cc: Jason Gunthorpe <[email protected]> Cc: Jason Gunthorpe <[email protected]> Cc: Jens Axboe <[email protected]> Cc: Jesper Nilsson <[email protected]> Cc: Johannes Weiner <[email protected]> Cc: John Hubbard <[email protected]> Cc: Jonas Lahtinen <[email protected]> Cc: Kevin Tian <[email protected]> Cc: Lars Persson <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Marco Elver <[email protected]> Cc: "Martin K. Petersen" <[email protected]> Cc: Maxim Levitky <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Muchun Song <[email protected]> Cc: Niklas Cassel <[email protected]> Cc: Oscar Salvador <[email protected]> Cc: Pavel Begunkov <[email protected]> Cc: Peter Xu <[email protected]> Cc: Robin Murohy <[email protected]> Cc: Rodrigo Vivi <[email protected]> Cc: Shameerali Kolothum Thodi <[email protected]> Cc: Shuah Khan <[email protected]> Cc: Suren Baghdasaryan <[email protected]> Cc: Sven Schnelle <[email protected]> Cc: Tejun Heo <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Cc: Thomas Gleinxer <[email protected]> Cc: Tvrtko Ursulin <[email protected]> Cc: Ulf Hansson <[email protected]> Cc: Vasily Gorbik <[email protected]> Cc: Vlastimil Babka <[email protected]> Cc: Will Deacon <[email protected]> Cc: Yishai Hadas <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
[ Upstream commit 181993b ] Commit 0e2f80a ("fs/dax: ensure all pages are idle prior to filesystem unmount") introduced a WARN_ON_ONCE to catch filesystems that have not removed all DAX entries at unmount, and applied the fix to xfs and ext4. Apply the missing fix to erofs (sketched after this entry) to resolve the runtime warning: [ 5.266254] ------------[ cut here ]------------ [ 5.266274] WARNING: CPU: 6 PID: 3109 at mm/truncate.c:89 truncate_folio_batch_exceptionals+0xff/0x260 [ 5.266294] Modules linked in: [ 5.266999] CPU: 6 UID: 0 PID: 3109 Comm: umount Tainted: G S 6.16.0+ #6 PREEMPT(voluntary) [ 5.267012] Tainted: [S]=CPU_OUT_OF_SPEC [ 5.267017] Hardware name: Dell Inc. OptiPlex 5000/05WXFV, BIOS 1.5.1 08/24/2022 [ 5.267024] RIP: 0010:truncate_folio_batch_exceptionals+0xff/0x260 [ 5.267076] Code: 00 00 41 39 df 7f 11 eb 78 83 c3 01 49 83 c4 08 41 39 df 74 6c 48 63 f3 48 83 fe 1f 0f 83 3c 01 00 00 43 f6 44 26 08 01 74 df <0f> 0b 4a 8b 34 22 4c 89 ef 48 89 55 90 e8 ff 54 1f 00 48 8b 55 90 [ 5.267083] RSP: 0018:ffffc900013f36c8 EFLAGS: 00010202 [ 5.267095] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000 [ 5.267101] RDX: ffffc900013f3790 RSI: 0000000000000000 RDI: ffff8882a1407898 [ 5.267108] RBP: ffffc900013f3740 R08: 0000000000000000 R09: 0000000000000000 [ 5.267113] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000 [ 5.267119] R13: ffff8882a1407ab8 R14: ffffc900013f3888 R15: 0000000000000001 [ 5.267125] FS: 00007aaa8b437800(0000) GS:ffff88850025b000(0000) knlGS:0000000000000000 [ 5.267132] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 5.267138] CR2: 00007aaa8b3aac10 CR3: 000000024f764000 CR4: 0000000000f52ef0 [ 5.267144] PKRU: 55555554 [ 5.267150] Call Trace: [ 5.267154] <TASK> [ 5.267181] truncate_inode_pages_range+0x118/0x5e0 [ 5.267193] ? save_trace+0x54/0x390 [ 5.267296] truncate_inode_pages_final+0x43/0x60 [ 5.267309] evict+0x2a4/0x2c0 [ 5.267339] dispose_list+0x39/0x80 [ 5.267352] evict_inodes+0x150/0x1b0 [ 5.267376] generic_shutdown_super+0x41/0x180 [ 5.267390] kill_block_super+0x1b/0x50 [ 5.267402] erofs_kill_sb+0x81/0x90 [erofs] [ 5.267436] deactivate_locked_super+0x32/0xb0 [ 5.267450] deactivate_super+0x46/0x60 [ 5.267460] cleanup_mnt+0xc3/0x170 [ 5.267475] __cleanup_mnt+0x12/0x20 [ 5.267485] task_work_run+0x5d/0xb0 [ 5.267499] exit_to_user_mode_loop+0x144/0x170 [ 5.267512] do_syscall_64+0x2b9/0x7c0 [ 5.267523] ? __lock_acquire+0x665/0x2ce0 [ 5.267535] ? __lock_acquire+0x665/0x2ce0 [ 5.267560] ? lock_acquire+0xcd/0x300 [ 5.267573] ? find_held_lock+0x31/0x90 [ 5.267582] ? mntput_no_expire+0x97/0x4e0 [ 5.267606] ? mntput_no_expire+0xa1/0x4e0 [ 5.267625] ? mntput+0x24/0x50 [ 5.267634] ? path_put+0x1e/0x30 [ 5.267647] ? do_faccessat+0x120/0x2f0 [ 5.267677] ? do_syscall_64+0x1a2/0x7c0 [ 5.267686] ? from_kgid_munged+0x17/0x30 [ 5.267703] ? from_kuid_munged+0x13/0x30 [ 5.267711] ? __do_sys_getuid+0x3d/0x50 [ 5.267724] ? do_syscall_64+0x1a2/0x7c0 [ 5.267732] ? irqentry_exit+0x77/0xb0 [ 5.267743] ? clear_bhb_loop+0x30/0x80 [ 5.267752] ?
clear_bhb_loop+0x30/0x80 [ 5.267765] entry_SYSCALL_64_after_hwframe+0x76/0x7e [ 5.267772] RIP: 0033:0x7aaa8b32a9fb [ 5.267781] Code: c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 f3 0f 1e fa 31 f6 e9 05 00 00 00 0f 1f 44 00 00 f3 0f 1e fa b8 a6 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 05 c3 0f 1f 40 00 48 8b 15 e9 83 0d 00 f7 d8 [ 5.267787] RSP: 002b:00007ffd7c4c9468 EFLAGS: 00000246 ORIG_RAX: 00000000000000a6 [ 5.267796] RAX: 0000000000000000 RBX: 00005a61592a8b00 RCX: 00007aaa8b32a9fb [ 5.267802] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 00005a61592b2080 [ 5.267806] RBP: 00007ffd7c4c9540 R08: 00007aaa8b403b20 R09: 0000000000000020 [ 5.267812] R10: 0000000000000001 R11: 0000000000000246 R12: 00005a61592a8c00 [ 5.267817] R13: 0000000000000000 R14: 00005a61592b2080 R15: 00005a61592a8f10 [ 5.267849] </TASK> [ 5.267854] irq event stamp: 4721 [ 5.267859] hardirqs last enabled at (4727): [<ffffffff814abf50>] __up_console_sem+0x90/0xa0 [ 5.267873] hardirqs last disabled at (4732): [<ffffffff814abf35>] __up_console_sem+0x75/0xa0 [ 5.267884] softirqs last enabled at (3044): [<ffffffff8132adb3>] kernel_fpu_end+0x53/0x70 [ 5.267895] softirqs last disabled at (3042): [<ffffffff8132b5f4>] kernel_fpu_begin_mask+0xc4/0x120 [ 5.267905] ---[ end trace 0000000000000000 ]--- Fixes: bde708f ("fs/dax: always remove DAX page-cache entries when breaking layouts") Signed-off-by: Yuezhang Mo <[email protected]> Reviewed-by: Friendy Su <[email protected]> Reviewed-by: Daniel Palmer <[email protected]> Reviewed-by: Gao Xiang <[email protected]> Signed-off-by: Gao Xiang <[email protected]> Signed-off-by: Sasha Levin <[email protected]>
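For illustration, the shape of such a fix mirrors what the xfs/ext4 change did in their inode eviction paths; a hedged sketch under the assumption that erofs gains an equivalent hook (the actual patch may differ in detail):

    /* Wait for DAX page references to drop before truncation, so that
     * truncate_folio_batch_exceptionals() finds no busy entries left. */
    static void erofs_evict_inode(struct inode *inode)
    {
        if (IS_DAX(inode))
            dax_break_layout_final(inode);
        truncate_inode_pages_final(&inode->i_data);
        clear_inode(inode);
    }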
Patch series "mm: remove nth_page()", v2. As discussed recently with Linus, nth_page() is just nasty and we would like to remove it. To recap, the reason we currently need nth_page() within a folio is because on some kernel configs (SPARSEMEM without SPARSEMEM_VMEMMAP), the memmap is allocated per memory section. While buddy allocations cannot cross memory section boundaries, hugetlb and dax folios can. So crossing a memory section means that "page++" could do the wrong thing. Instead, nth_page() on these problematic configs always goes from page->pfn, to the go from (++pfn)->page, which is rather nasty. Likely, many people have no idea when nth_page() is required and when it might be dropped. We refer to such problematic PFN ranges and "non-contiguous pages". If we only deal with "contiguous pages", there is not need for nth_page(). Besides that "obvious" folio case, we might end up using nth_page() within CMA allocations (again, could span memory sections), and in one corner case (kfence) when processing memblock allocations (again, could span memory sections). So let's handle all that, add sanity checks, and remove nth_page(). Patch #1 -> #5 : stop making SPARSEMEM_VMEMMAP user-selectable + cleanups Patch torvalds#6 -> torvalds#13 : disallow folios to have non-contiguous pages Patch torvalds#14 -> torvalds#20 : remove nth_page() usage within folios Patch torvalds#22 : disallow CMA allocations of non-contiguous pages Patch torvalds#23 -> torvalds#33 : sanity+check + remove nth_page() usage within SG entry Patch torvalds#34 : sanity-check + remove nth_page() usage in unpin_user_page_range_dirty_lock() Patch torvalds#35 : remove nth_page() in kfence Patch torvalds#36 : adjust stale comment regarding nth_page Patch torvalds#37 : mm: remove nth_page() A lot of this is inspired from the discussion at [1] between Linus, Jason and me, so cudos to them. This patch (of 37): In an ideal world, we wouldn't have to deal with SPARSEMEM without SPARSEMEM_VMEMMAP, but in particular for 32bit SPARSEMEM_VMEMMAP is considered too costly and consequently not supported. However, if an architecture does support SPARSEMEM with SPARSEMEM_VMEMMAP, let's forbid the user to disable VMEMMAP: just like we already do for arm64, s390 and x86. So if SPARSEMEM_VMEMMAP is supported, don't allow to use SPARSEMEM without SPARSEMEM_VMEMMAP. This implies that the option to not use SPARSEMEM_VMEMMAP will now be gone for loongarch, powerpc, riscv and sparc. All architectures only enable SPARSEMEM_VMEMMAP with 64bit support, so there should not really be a big downside to using the VMEMMAP (quite the contrary). This is a preparation for not supporting (1) folio sizes that exceed a single memory section (2) CMA allocations of non-contiguous page ranges in SPARSEMEM without SPARSEMEM_VMEMMAP configs, whereby we want to limit possible impact as much as possible (e.g., gigantic hugetlb page allocations suddenly fails). Link: https://lkml.kernel.org/r/[email protected] Link: https://lkml.kernel.org/r/[email protected] Link: https://lore.kernel.org/all/CAHk-=wiCYfNp4AJLBORU-c7ZyRBUp66W2-Et6cdQ4REx-GyQ_A@mail.gmail.com/T/#u [1] Signed-off-by: David Hildenbrand <[email protected]> Acked-by: Zi Yan <[email protected]> Acked-by: Mike Rapoport (Microsoft) <[email protected]> Acked-by: SeongJae Park <[email protected]> Reviewed-by: Wei Yang <[email protected]> Reviewed-by: Lorenzo Stoakes <[email protected]> Reviewed-by: Liam R. 
Howlett <[email protected]> Cc: Huacai Chen <[email protected]> Cc: WANG Xuerui <[email protected]> Cc: Madhavan Srinivasan <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Nicholas Piggin <[email protected]> Cc: Christophe Leroy <[email protected]> Cc: Paul Walmsley <[email protected]> Cc: Palmer Dabbelt <[email protected]> Cc: Albert Ou <[email protected]> Cc: Alexandre Ghiti <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Andreas Larsson <[email protected]> Cc: Alexander Gordeev <[email protected]> Cc: Alexander Potapenko <[email protected]> Cc: Alexandru Elisei <[email protected]> Cc: Alex Dubov <[email protected]> Cc: Alex Willamson <[email protected]> Cc: Bart van Assche <[email protected]> Cc: Borislav Betkov <[email protected]> Cc: Brendan Jackman <[email protected]> Cc: Brett Creeley <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Christian Borntraeger <[email protected]> Cc: Christoph Lameter (Ampere) <[email protected]> Cc: Damien Le Maol <[email protected]> Cc: Dave Airlie <[email protected]> Cc: Dennis Zhou <[email protected]> Cc: Dmitriy Vyukov <[email protected]> Cc: Doug Gilbert <[email protected]> Cc: Heiko Carstens <[email protected]> Cc: Herbert Xu <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Inki Dae <[email protected]> Cc: James Bottomley <[email protected]> Cc: Jani Nikula <[email protected]> Cc: Jason A. Donenfeld <[email protected]> Cc: Jason Gunthorpe <[email protected]> Cc: Jason Gunthorpe <[email protected]> Cc: Jens Axboe <[email protected]> Cc: Jesper Nilsson <[email protected]> Cc: Johannes Weiner <[email protected]> Cc: John Hubbard <[email protected]> Cc: Jonas Lahtinen <[email protected]> Cc: Kevin Tian <[email protected]> Cc: Lars Persson <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Marco Elver <[email protected]> Cc: "Martin K. Petersen" <[email protected]> Cc: Maxim Levitky <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Muchun Song <[email protected]> Cc: Niklas Cassel <[email protected]> Cc: Oscar Salvador <[email protected]> Cc: Pavel Begunkov <[email protected]> Cc: Peter Xu <[email protected]> Cc: Robin Murohy <[email protected]> Cc: Rodrigo Vivi <[email protected]> Cc: Shameerali Kolothum Thodi <[email protected]> Cc: Shuah Khan <[email protected]> Cc: Suren Baghdasaryan <[email protected]> Cc: Sven Schnelle <[email protected]> Cc: Tejun Heo <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Cc: Thomas Gleinxer <[email protected]> Cc: Tvrtko Ursulin <[email protected]> Cc: Ulf Hansson <[email protected]> Cc: Vasily Gorbik <[email protected]> Cc: Vlastimil Babka <[email protected]> Cc: Will Deacon <[email protected]> Cc: Yishai Hadas <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
- treat tailcall count as 32-bit for access and update - change out_offset scope from file to function - minor format/structure changes for consistency Testing: (skipping fentry, fexit, freplace) ======== root@qemu-armhf:/usr/libexec/kselftests-bpf# modprobe test_bpf test_suite=test_tail_calls test_bpf: #0 Tail call leaf jited:1 967 PASS test_bpf: #1 Tail call 2 jited:1 1427 PASS test_bpf: #2 Tail call 3 jited:1 2373 PASS test_bpf: #3 Tail call 4 jited:1 2304 PASS test_bpf: #4 Tail call load/store leaf jited:1 1684 PASS test_bpf: #5 Tail call load/store jited:1 2249 PASS test_bpf: torvalds#6 Tail call error path, max count reached jited:1 22538 PASS test_bpf: torvalds#7 Tail call count preserved across function calls jited:1 1055668 PASS test_bpf: torvalds#8 Tail call error path, NULL target jited:1 513 PASS test_bpf: torvalds#9 Tail call error path, index out of range jited:1 392 PASS test_bpf: test_tail_calls: Summary: 10 PASSED, 0 FAILED, [10/10 JIT'ed] root@qemu-armhf:/usr/libexec/kselftests-bpf# ./test_progs -n 397/1-12,17-18,23-24,27-31 397/1 tailcalls/tailcall_1:OK 397/2 tailcalls/tailcall_2:OK 397/3 tailcalls/tailcall_3:OK 397/4 tailcalls/tailcall_4:OK 397/5 tailcalls/tailcall_5:OK 397/6 tailcalls/tailcall_6:OK 397/7 tailcalls/tailcall_bpf2bpf_1:OK 397/8 tailcalls/tailcall_bpf2bpf_2:OK 397/9 tailcalls/tailcall_bpf2bpf_3:OK 397/10 tailcalls/tailcall_bpf2bpf_4:OK 397/11 tailcalls/tailcall_bpf2bpf_5:OK 397/12 tailcalls/tailcall_bpf2bpf_6:OK 397/17 tailcalls/tailcall_poke:OK 397/18 tailcalls/tailcall_bpf2bpf_hierarchy_1:OK 397/23 tailcalls/tailcall_bpf2bpf_hierarchy_2:OK 397/24 tailcalls/tailcall_bpf2bpf_hierarchy_3:OK 397/27 tailcalls/tailcall_failure:OK 397/28 tailcalls/reject_tail_call_spin_lock:OK 397/29 tailcalls/reject_tail_call_rcu_lock:OK 397/30 tailcalls/reject_tail_call_preempt_lock:OK 397/31 tailcalls/reject_tail_call_ref:OK 397 tailcalls:OK Summary: 1/21 PASSED, 0 SKIPPED, 0 FAILED Signed-off-by: Tony Ambardar <[email protected]>
Patch series "mm: remove nth_page()", v2. As discussed recently with Linus, nth_page() is just nasty and we would like to remove it. To recap, the reason we currently need nth_page() within a folio is because on some kernel configs (SPARSEMEM without SPARSEMEM_VMEMMAP), the memmap is allocated per memory section. While buddy allocations cannot cross memory section boundaries, hugetlb and dax folios can. So crossing a memory section means that "page++" could do the wrong thing. Instead, nth_page() on these problematic configs always goes from page->pfn, to the go from (++pfn)->page, which is rather nasty. Likely, many people have no idea when nth_page() is required and when it might be dropped. We refer to such problematic PFN ranges and "non-contiguous pages". If we only deal with "contiguous pages", there is not need for nth_page(). Besides that "obvious" folio case, we might end up using nth_page() within CMA allocations (again, could span memory sections), and in one corner case (kfence) when processing memblock allocations (again, could span memory sections). So let's handle all that, add sanity checks, and remove nth_page(). Patch #1 -> #5 : stop making SPARSEMEM_VMEMMAP user-selectable + cleanups Patch torvalds#6 -> torvalds#13 : disallow folios to have non-contiguous pages Patch torvalds#14 -> torvalds#20 : remove nth_page() usage within folios Patch torvalds#22 : disallow CMA allocations of non-contiguous pages Patch torvalds#23 -> torvalds#33 : sanity+check + remove nth_page() usage within SG entry Patch torvalds#34 : sanity-check + remove nth_page() usage in unpin_user_page_range_dirty_lock() Patch torvalds#35 : remove nth_page() in kfence Patch torvalds#36 : adjust stale comment regarding nth_page Patch torvalds#37 : mm: remove nth_page() A lot of this is inspired from the discussion at [1] between Linus, Jason and me, so cudos to them. This patch (of 37): In an ideal world, we wouldn't have to deal with SPARSEMEM without SPARSEMEM_VMEMMAP, but in particular for 32bit SPARSEMEM_VMEMMAP is considered too costly and consequently not supported. However, if an architecture does support SPARSEMEM with SPARSEMEM_VMEMMAP, let's forbid the user to disable VMEMMAP: just like we already do for arm64, s390 and x86. So if SPARSEMEM_VMEMMAP is supported, don't allow to use SPARSEMEM without SPARSEMEM_VMEMMAP. This implies that the option to not use SPARSEMEM_VMEMMAP will now be gone for loongarch, powerpc, riscv and sparc. All architectures only enable SPARSEMEM_VMEMMAP with 64bit support, so there should not really be a big downside to using the VMEMMAP (quite the contrary). This is a preparation for not supporting (1) folio sizes that exceed a single memory section (2) CMA allocations of non-contiguous page ranges in SPARSEMEM without SPARSEMEM_VMEMMAP configs, whereby we want to limit possible impact as much as possible (e.g., gigantic hugetlb page allocations suddenly fails). Link: https://lkml.kernel.org/r/[email protected] Link: https://lkml.kernel.org/r/[email protected] Link: https://lore.kernel.org/all/CAHk-=wiCYfNp4AJLBORU-c7ZyRBUp66W2-Et6cdQ4REx-GyQ_A@mail.gmail.com/T/#u [1] Signed-off-by: David Hildenbrand <[email protected]> Acked-by: Zi Yan <[email protected]> Acked-by: Mike Rapoport (Microsoft) <[email protected]> Acked-by: SeongJae Park <[email protected]> Reviewed-by: Wei Yang <[email protected]> Reviewed-by: Lorenzo Stoakes <[email protected]> Reviewed-by: Liam R. 
Howlett <[email protected]> Cc: Huacai Chen <[email protected]> Cc: WANG Xuerui <[email protected]> Cc: Madhavan Srinivasan <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Nicholas Piggin <[email protected]> Cc: Christophe Leroy <[email protected]> Cc: Paul Walmsley <[email protected]> Cc: Palmer Dabbelt <[email protected]> Cc: Albert Ou <[email protected]> Cc: Alexandre Ghiti <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Andreas Larsson <[email protected]> Cc: Alexander Gordeev <[email protected]> Cc: Alexander Potapenko <[email protected]> Cc: Alexandru Elisei <[email protected]> Cc: Alex Dubov <[email protected]> Cc: Alex Willamson <[email protected]> Cc: Bart van Assche <[email protected]> Cc: Borislav Betkov <[email protected]> Cc: Brendan Jackman <[email protected]> Cc: Brett Creeley <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Christian Borntraeger <[email protected]> Cc: Christoph Lameter (Ampere) <[email protected]> Cc: Damien Le Maol <[email protected]> Cc: Dave Airlie <[email protected]> Cc: Dennis Zhou <[email protected]> Cc: Dmitriy Vyukov <[email protected]> Cc: Doug Gilbert <[email protected]> Cc: Heiko Carstens <[email protected]> Cc: Herbert Xu <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Inki Dae <[email protected]> Cc: James Bottomley <[email protected]> Cc: Jani Nikula <[email protected]> Cc: Jason A. Donenfeld <[email protected]> Cc: Jason Gunthorpe <[email protected]> Cc: Jason Gunthorpe <[email protected]> Cc: Jens Axboe <[email protected]> Cc: Jesper Nilsson <[email protected]> Cc: Johannes Weiner <[email protected]> Cc: John Hubbard <[email protected]> Cc: Jonas Lahtinen <[email protected]> Cc: Kevin Tian <[email protected]> Cc: Lars Persson <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Marco Elver <[email protected]> Cc: "Martin K. Petersen" <[email protected]> Cc: Maxim Levitky <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Muchun Song <[email protected]> Cc: Niklas Cassel <[email protected]> Cc: Oscar Salvador <[email protected]> Cc: Pavel Begunkov <[email protected]> Cc: Peter Xu <[email protected]> Cc: Robin Murohy <[email protected]> Cc: Rodrigo Vivi <[email protected]> Cc: Shameerali Kolothum Thodi <[email protected]> Cc: Shuah Khan <[email protected]> Cc: Suren Baghdasaryan <[email protected]> Cc: Sven Schnelle <[email protected]> Cc: Tejun Heo <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Cc: Thomas Gleinxer <[email protected]> Cc: Tvrtko Ursulin <[email protected]> Cc: Ulf Hansson <[email protected]> Cc: Vasily Gorbik <[email protected]> Cc: Vlastimil Babka <[email protected]> Cc: Will Deacon <[email protected]> Cc: Yishai Hadas <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
- treat tailcall count as 32-bit for access and update - change out_offset scope from file to function - minor format/structure changes for consistency Testing: (skipping fentry, fexit, freplace) ======== root@qemu-armhf:/usr/libexec/kselftests-bpf# modprobe test_bpf test_suite=test_tail_calls test_bpf: #0 Tail call leaf jited:1 967 PASS test_bpf: #1 Tail call 2 jited:1 1427 PASS test_bpf: #2 Tail call 3 jited:1 2373 PASS test_bpf: #3 Tail call 4 jited:1 2304 PASS test_bpf: #4 Tail call load/store leaf jited:1 1684 PASS test_bpf: #5 Tail call load/store jited:1 2249 PASS test_bpf: torvalds#6 Tail call error path, max count reached jited:1 22538 PASS test_bpf: torvalds#7 Tail call count preserved across function calls jited:1 1055668 PASS test_bpf: torvalds#8 Tail call error path, NULL target jited:1 513 PASS test_bpf: torvalds#9 Tail call error path, index out of range jited:1 392 PASS test_bpf: test_tail_calls: Summary: 10 PASSED, 0 FAILED, [10/10 JIT'ed] root@qemu-armhf:/usr/libexec/kselftests-bpf# ./test_progs -n 397/1-12,17-18,23-24,27-31 397/1 tailcalls/tailcall_1:OK 397/2 tailcalls/tailcall_2:OK 397/3 tailcalls/tailcall_3:OK 397/4 tailcalls/tailcall_4:OK 397/5 tailcalls/tailcall_5:OK 397/6 tailcalls/tailcall_6:OK 397/7 tailcalls/tailcall_bpf2bpf_1:OK 397/8 tailcalls/tailcall_bpf2bpf_2:OK 397/9 tailcalls/tailcall_bpf2bpf_3:OK 397/10 tailcalls/tailcall_bpf2bpf_4:OK 397/11 tailcalls/tailcall_bpf2bpf_5:OK 397/12 tailcalls/tailcall_bpf2bpf_6:OK 397/17 tailcalls/tailcall_poke:OK 397/18 tailcalls/tailcall_bpf2bpf_hierarchy_1:OK 397/23 tailcalls/tailcall_bpf2bpf_hierarchy_2:OK 397/24 tailcalls/tailcall_bpf2bpf_hierarchy_3:OK 397/27 tailcalls/tailcall_failure:OK 397/28 tailcalls/reject_tail_call_spin_lock:OK 397/29 tailcalls/reject_tail_call_rcu_lock:OK 397/30 tailcalls/reject_tail_call_preempt_lock:OK 397/31 tailcalls/reject_tail_call_ref:OK 397 tailcalls:OK Summary: 1/21 PASSED, 0 SKIPPED, 0 FAILED Signed-off-by: Tony Ambardar <[email protected]>
[ Upstream commit 181993b ] Commit 0e2f80a("fs/dax: ensure all pages are idle prior to filesystem unmount") introduced the WARN_ON_ONCE to capture whether the filesystem has removed all DAX entries or not and applied the fix to xfs and ext4. Apply the missed fix on erofs to fix the runtime warning: [ 5.266254] ------------[ cut here ]------------ [ 5.266274] WARNING: CPU: 6 PID: 3109 at mm/truncate.c:89 truncate_folio_batch_exceptionals+0xff/0x260 [ 5.266294] Modules linked in: [ 5.266999] CPU: 6 UID: 0 PID: 3109 Comm: umount Tainted: G S 6.16.0+ torvalds#6 PREEMPT(voluntary) [ 5.267012] Tainted: [S]=CPU_OUT_OF_SPEC [ 5.267017] Hardware name: Dell Inc. OptiPlex 5000/05WXFV, BIOS 1.5.1 08/24/2022 [ 5.267024] RIP: 0010:truncate_folio_batch_exceptionals+0xff/0x260 [ 5.267076] Code: 00 00 41 39 df 7f 11 eb 78 83 c3 01 49 83 c4 08 41 39 df 74 6c 48 63 f3 48 83 fe 1f 0f 83 3c 01 00 00 43 f6 44 26 08 01 74 df <0f> 0b 4a 8b 34 22 4c 89 ef 48 89 55 90 e8 ff 54 1f 00 48 8b 55 90 [ 5.267083] RSP: 0018:ffffc900013f36c8 EFLAGS: 00010202 [ 5.267095] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000 [ 5.267101] RDX: ffffc900013f3790 RSI: 0000000000000000 RDI: ffff8882a1407898 [ 5.267108] RBP: ffffc900013f3740 R08: 0000000000000000 R09: 0000000000000000 [ 5.267113] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000 [ 5.267119] R13: ffff8882a1407ab8 R14: ffffc900013f3888 R15: 0000000000000001 [ 5.267125] FS: 00007aaa8b437800(0000) GS:ffff88850025b000(0000) knlGS:0000000000000000 [ 5.267132] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 5.267138] CR2: 00007aaa8b3aac10 CR3: 000000024f764000 CR4: 0000000000f52ef0 [ 5.267144] PKRU: 55555554 [ 5.267150] Call Trace: [ 5.267154] <TASK> [ 5.267181] truncate_inode_pages_range+0x118/0x5e0 [ 5.267193] ? save_trace+0x54/0x390 [ 5.267296] truncate_inode_pages_final+0x43/0x60 [ 5.267309] evict+0x2a4/0x2c0 [ 5.267339] dispose_list+0x39/0x80 [ 5.267352] evict_inodes+0x150/0x1b0 [ 5.267376] generic_shutdown_super+0x41/0x180 [ 5.267390] kill_block_super+0x1b/0x50 [ 5.267402] erofs_kill_sb+0x81/0x90 [erofs] [ 5.267436] deactivate_locked_super+0x32/0xb0 [ 5.267450] deactivate_super+0x46/0x60 [ 5.267460] cleanup_mnt+0xc3/0x170 [ 5.267475] __cleanup_mnt+0x12/0x20 [ 5.267485] task_work_run+0x5d/0xb0 [ 5.267499] exit_to_user_mode_loop+0x144/0x170 [ 5.267512] do_syscall_64+0x2b9/0x7c0 [ 5.267523] ? __lock_acquire+0x665/0x2ce0 [ 5.267535] ? __lock_acquire+0x665/0x2ce0 [ 5.267560] ? lock_acquire+0xcd/0x300 [ 5.267573] ? find_held_lock+0x31/0x90 [ 5.267582] ? mntput_no_expire+0x97/0x4e0 [ 5.267606] ? mntput_no_expire+0xa1/0x4e0 [ 5.267625] ? mntput+0x24/0x50 [ 5.267634] ? path_put+0x1e/0x30 [ 5.267647] ? do_faccessat+0x120/0x2f0 [ 5.267677] ? do_syscall_64+0x1a2/0x7c0 [ 5.267686] ? from_kgid_munged+0x17/0x30 [ 5.267703] ? from_kuid_munged+0x13/0x30 [ 5.267711] ? __do_sys_getuid+0x3d/0x50 [ 5.267724] ? do_syscall_64+0x1a2/0x7c0 [ 5.267732] ? irqentry_exit+0x77/0xb0 [ 5.267743] ? clear_bhb_loop+0x30/0x80 [ 5.267752] ? 
clear_bhb_loop+0x30/0x80 [ 5.267765] entry_SYSCALL_64_after_hwframe+0x76/0x7e [ 5.267772] RIP: 0033:0x7aaa8b32a9fb [ 5.267781] Code: c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 f3 0f 1e fa 31 f6 e9 05 00 00 00 0f 1f 44 00 00 f3 0f 1e fa b8 a6 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 05 c3 0f 1f 40 00 48 8b 15 e9 83 0d 00 f7 d8 [ 5.267787] RSP: 002b:00007ffd7c4c9468 EFLAGS: 00000246 ORIG_RAX: 00000000000000a6 [ 5.267796] RAX: 0000000000000000 RBX: 00005a61592a8b00 RCX: 00007aaa8b32a9fb [ 5.267802] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 00005a61592b2080 [ 5.267806] RBP: 00007ffd7c4c9540 R08: 00007aaa8b403b20 R09: 0000000000000020 [ 5.267812] R10: 0000000000000001 R11: 0000000000000246 R12: 00005a61592a8c00 [ 5.267817] R13: 0000000000000000 R14: 00005a61592b2080 R15: 00005a61592a8f10 [ 5.267849] </TASK> [ 5.267854] irq event stamp: 4721 [ 5.267859] hardirqs last enabled at (4727): [<ffffffff814abf50>] __up_console_sem+0x90/0xa0 [ 5.267873] hardirqs last disabled at (4732): [<ffffffff814abf35>] __up_console_sem+0x75/0xa0 [ 5.267884] softirqs last enabled at (3044): [<ffffffff8132adb3>] kernel_fpu_end+0x53/0x70 [ 5.267895] softirqs last disabled at (3042): [<ffffffff8132b5f4>] kernel_fpu_begin_mask+0xc4/0x120 [ 5.267905] ---[ end trace 0000000000000000 ]--- Fixes: bde708f ("fs/dax: always remove DAX page-cache entries when breaking layouts") Signed-off-by: Yuezhang Mo <[email protected]> Reviewed-by: Friendy Su <[email protected]> Reviewed-by: Daniel Palmer <[email protected]> Reviewed-by: Gao Xiang <[email protected]> Signed-off-by: Gao Xiang <[email protected]> Signed-off-by: Sasha Levin <[email protected]>
Patch series "mm: remove nth_page()", v2. As discussed recently with Linus, nth_page() is just nasty and we would like to remove it. To recap, the reason we currently need nth_page() within a folio is because on some kernel configs (SPARSEMEM without SPARSEMEM_VMEMMAP), the memmap is allocated per memory section. While buddy allocations cannot cross memory section boundaries, hugetlb and dax folios can. So crossing a memory section means that "page++" could do the wrong thing. Instead, nth_page() on these problematic configs always goes from page->pfn, to the go from (++pfn)->page, which is rather nasty. Likely, many people have no idea when nth_page() is required and when it might be dropped. We refer to such problematic PFN ranges and "non-contiguous pages". If we only deal with "contiguous pages", there is not need for nth_page(). Besides that "obvious" folio case, we might end up using nth_page() within CMA allocations (again, could span memory sections), and in one corner case (kfence) when processing memblock allocations (again, could span memory sections). So let's handle all that, add sanity checks, and remove nth_page(). Patch #1 -> #5 : stop making SPARSEMEM_VMEMMAP user-selectable + cleanups Patch torvalds#6 -> torvalds#13 : disallow folios to have non-contiguous pages Patch torvalds#14 -> torvalds#20 : remove nth_page() usage within folios Patch torvalds#22 : disallow CMA allocations of non-contiguous pages Patch torvalds#23 -> torvalds#33 : sanity+check + remove nth_page() usage within SG entry Patch torvalds#34 : sanity-check + remove nth_page() usage in unpin_user_page_range_dirty_lock() Patch torvalds#35 : remove nth_page() in kfence Patch torvalds#36 : adjust stale comment regarding nth_page Patch torvalds#37 : mm: remove nth_page() A lot of this is inspired from the discussion at [1] between Linus, Jason and me, so cudos to them. This patch (of 37): In an ideal world, we wouldn't have to deal with SPARSEMEM without SPARSEMEM_VMEMMAP, but in particular for 32bit SPARSEMEM_VMEMMAP is considered too costly and consequently not supported. However, if an architecture does support SPARSEMEM with SPARSEMEM_VMEMMAP, let's forbid the user to disable VMEMMAP: just like we already do for arm64, s390 and x86. So if SPARSEMEM_VMEMMAP is supported, don't allow to use SPARSEMEM without SPARSEMEM_VMEMMAP. This implies that the option to not use SPARSEMEM_VMEMMAP will now be gone for loongarch, powerpc, riscv and sparc. All architectures only enable SPARSEMEM_VMEMMAP with 64bit support, so there should not really be a big downside to using the VMEMMAP (quite the contrary). This is a preparation for not supporting (1) folio sizes that exceed a single memory section (2) CMA allocations of non-contiguous page ranges in SPARSEMEM without SPARSEMEM_VMEMMAP configs, whereby we want to limit possible impact as much as possible (e.g., gigantic hugetlb page allocations suddenly fails). Link: https://lkml.kernel.org/r/[email protected] Link: https://lkml.kernel.org/r/[email protected] Link: https://lore.kernel.org/all/CAHk-=wiCYfNp4AJLBORU-c7ZyRBUp66W2-Et6cdQ4REx-GyQ_A@mail.gmail.com/T/#u [1] Signed-off-by: David Hildenbrand <[email protected]> Acked-by: Zi Yan <[email protected]> Acked-by: Mike Rapoport (Microsoft) <[email protected]> Acked-by: SeongJae Park <[email protected]> Reviewed-by: Wei Yang <[email protected]> Reviewed-by: Lorenzo Stoakes <[email protected]> Reviewed-by: Liam R. 
Howlett <[email protected]> Cc: Huacai Chen <[email protected]> Cc: WANG Xuerui <[email protected]> Cc: Madhavan Srinivasan <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Nicholas Piggin <[email protected]> Cc: Christophe Leroy <[email protected]> Cc: Paul Walmsley <[email protected]> Cc: Palmer Dabbelt <[email protected]> Cc: Albert Ou <[email protected]> Cc: Alexandre Ghiti <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Andreas Larsson <[email protected]> Cc: Alexander Gordeev <[email protected]> Cc: Alexander Potapenko <[email protected]> Cc: Alexandru Elisei <[email protected]> Cc: Alex Dubov <[email protected]> Cc: Alex Willamson <[email protected]> Cc: Bart van Assche <[email protected]> Cc: Borislav Betkov <[email protected]> Cc: Brendan Jackman <[email protected]> Cc: Brett Creeley <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Christian Borntraeger <[email protected]> Cc: Christoph Lameter (Ampere) <[email protected]> Cc: Damien Le Maol <[email protected]> Cc: Dave Airlie <[email protected]> Cc: Dennis Zhou <[email protected]> Cc: Dmitriy Vyukov <[email protected]> Cc: Doug Gilbert <[email protected]> Cc: Heiko Carstens <[email protected]> Cc: Herbert Xu <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Inki Dae <[email protected]> Cc: James Bottomley <[email protected]> Cc: Jani Nikula <[email protected]> Cc: Jason A. Donenfeld <[email protected]> Cc: Jason Gunthorpe <[email protected]> Cc: Jason Gunthorpe <[email protected]> Cc: Jens Axboe <[email protected]> Cc: Jesper Nilsson <[email protected]> Cc: Johannes Weiner <[email protected]> Cc: John Hubbard <[email protected]> Cc: Jonas Lahtinen <[email protected]> Cc: Kevin Tian <[email protected]> Cc: Lars Persson <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Marco Elver <[email protected]> Cc: "Martin K. Petersen" <[email protected]> Cc: Maxim Levitky <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Muchun Song <[email protected]> Cc: Niklas Cassel <[email protected]> Cc: Oscar Salvador <[email protected]> Cc: Pavel Begunkov <[email protected]> Cc: Peter Xu <[email protected]> Cc: Robin Murohy <[email protected]> Cc: Rodrigo Vivi <[email protected]> Cc: Shameerali Kolothum Thodi <[email protected]> Cc: Shuah Khan <[email protected]> Cc: Suren Baghdasaryan <[email protected]> Cc: Sven Schnelle <[email protected]> Cc: Tejun Heo <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Cc: Thomas Gleinxer <[email protected]> Cc: Tvrtko Ursulin <[email protected]> Cc: Ulf Hansson <[email protected]> Cc: Vasily Gorbik <[email protected]> Cc: Vlastimil Babka <[email protected]> Cc: Will Deacon <[email protected]> Cc: Yishai Hadas <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
In November last year, I sent an RFC to introduce CAN XL [1]. That RFC, despite positive feedback, was put on hold due to some unanswered questions concerning the PWM encoding [2]. While stuck, some small preparation work was done in parallel in [3] by refactoring struct can_priv and doing some trivial clean-up and renaming. Initially, [3] received zero feedback but was eventually merged after being split into smaller parts and resent. Finally, in July this year, we clarified the remaining mysteries about the PWM calculation, thus unlocking the series. Summer being a bit busy because of some personal matters brings us to now. After doing all the refactoring and adding all the CAN XL features, the final result is roughly 30 patches, probably too much for a single series. So I am splitting it in two: - preparation (this series) - CAN XL (will come later, after this series gets ACK-ed) And so, this series continues and finishes the preparation work done in [3]. It contains all the refactoring needed to smoothly introduce CAN XL. The goal is to: - split the functions into smaller pieces: CAN XL will introduce a fair amount of code, and some functions which are already fairly long (86 lines for can_validate(), 216 lines for can_changelink()) would grow to disproportionate sizes if the CAN XL logic were inlined into them. - repurpose the existing code to handle both CAN FD and CAN XL: a huge part of CAN XL simply reuses the CAN FD logic, so all the existing CAN FD logic is made more generic to handle both CAN FD and XL. In more detail: - Patch #1 moves struct data_bittiming_params from dev.h to bittiming.h, and patch #2 makes can_get_relative_tdco() FD agnostic before also moving it to bittiming.h. - Patch #3 adds some comments to netlink.h tagging which IFLA symbols are FD specific. - Patches #4 to #6 refactor can_validate() and can_validate_bittiming() (a rough sketch of the split follows this entry). - Patches #7 to #11 refactor can_changelink() and can_tdc_changelink(). - Patches #12 and #13 refactor can_get_size() and can_tdc_get_size(). - Patches #14 to #17 refactor can_fill_info() and can_tdc_fill_info(). - Patch #18 makes can_calc_tdco() FD agnostic. - Patch #19 adds can_get_ctrlmode_str(), which converts control mode flags into strings, in preparation for patch #20. - Patch #20 is the final patch and improves the user experience by providing detailed error messages whenever invalid parameters are provided. All those error messages came in handy when debugging the upcoming CAN XL patches. Aside from the last patch, these changes do not impact any existing functionality. The follow-up series which introduces CAN XL is nearly complete but will be sent only once this one is approved: one thing at a time, I do not want to overwhelm people (including myself).
[1] https://lore.kernel.org/linux-can/[email protected]/ [2] https://lore.kernel.org/linux-can/[email protected]/ [3] https://lore.kernel.org/linux-can/[email protected]/ To: Marc Kleine-Budde <[email protected]> To: Oliver Hartkopp <[email protected]> Cc: Vincent Mailhol <[email protected]> Cc: Stéphane Grosjean <[email protected]> Cc: Robert Nawrath <[email protected]> Cc: Minh Le <[email protected]> Cc: Duy Nguyen <[email protected]> Cc: [email protected] Cc: [email protected] Signed-off-by: Vincent Mailhol <[email protected]> --- Changes in v3: - Add a static_assert() in can_validate_databittiming() to prove that the nla attributes were already correctly aligned. Link to v2: https://lore.kernel.org/r/[email protected] Changes in v2: - Move can_validate()'s comment block to can_validate_databittiming(). Consequently, [PATCH 07/21] can: netlink: remove comment in can_validate() from v1 is removed. - Change any occurrences of WARN_ON(1) into return -EOPNOTSUPP to suppress the three gcc warnings which were reported by the kernel test robot: Link: https://lore.kernel.org/linux-can/[email protected]/ Link: https://lore.kernel.org/linux-can/[email protected]/ Link: https://lore.kernel.org/linux-can/[email protected]/ - Small rewrite of patch #12 "can: netlink: make can_tdc_get_size() FD agnostic" description to add more details. Link to v1: https://lore.kernel.org/r/[email protected] --- b4-submit-tracking --- { "series": { "revision": 3, "change-id": "20250831-canxl-netlink-prep-9dbf8498fd9d", "prefixes": [], "prerequisites": [ "base-commit: net-next/main" ], "history": { "v1": [ "[email protected]" ], "v2": [ "[email protected]" ] } } }
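To make the split concrete, here is a rough sketch of the direction taken by patches #4 to #6; the function name can_validate_databittiming() comes from the series itself (see the v3 changelog above), but the body is illustrative only, not the actual patch:

    /* Data bittiming validation hoisted out of can_validate(), so the
     * same path can serve CAN FD today and CAN XL in the follow-up. */
    static int can_validate_databittiming(struct nlattr *data[],
                                          struct netlink_ext_ack *extack)
    {
        if (!data[IFLA_CAN_DATA_BITTIMING])
            return 0;
        if (nla_len(data[IFLA_CAN_DATA_BITTIMING]) <
            sizeof(struct can_bittiming)) {
            NL_SET_ERR_MSG(extack, "invalid data bittiming length");
            return -EINVAL;
        }
        return 0;
    }

This keeps can_validate() itself short: each frame type contributes a single helper call instead of an inlined block, which is what keeps the upcoming CAN XL additions from bloating the shared functions.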
Patch series "mm: remove nth_page()", v2. As discussed recently with Linus, nth_page() is just nasty and we would like to remove it. To recap, the reason we currently need nth_page() within a folio is because on some kernel configs (SPARSEMEM without SPARSEMEM_VMEMMAP), the memmap is allocated per memory section. While buddy allocations cannot cross memory section boundaries, hugetlb and dax folios can. So crossing a memory section means that "page++" could do the wrong thing. Instead, nth_page() on these problematic configs always goes from page->pfn, to the go from (++pfn)->page, which is rather nasty. Likely, many people have no idea when nth_page() is required and when it might be dropped. We refer to such problematic PFN ranges and "non-contiguous pages". If we only deal with "contiguous pages", there is not need for nth_page(). Besides that "obvious" folio case, we might end up using nth_page() within CMA allocations (again, could span memory sections), and in one corner case (kfence) when processing memblock allocations (again, could span memory sections). So let's handle all that, add sanity checks, and remove nth_page(). Patch #1 -> #5 : stop making SPARSEMEM_VMEMMAP user-selectable + cleanups Patch torvalds#6 -> torvalds#13 : disallow folios to have non-contiguous pages Patch torvalds#14 -> torvalds#20 : remove nth_page() usage within folios Patch torvalds#22 : disallow CMA allocations of non-contiguous pages Patch torvalds#23 -> torvalds#33 : sanity+check + remove nth_page() usage within SG entry Patch torvalds#34 : sanity-check + remove nth_page() usage in unpin_user_page_range_dirty_lock() Patch torvalds#35 : remove nth_page() in kfence Patch torvalds#36 : adjust stale comment regarding nth_page Patch torvalds#37 : mm: remove nth_page() A lot of this is inspired from the discussion at [1] between Linus, Jason and me, so cudos to them. This patch (of 37): In an ideal world, we wouldn't have to deal with SPARSEMEM without SPARSEMEM_VMEMMAP, but in particular for 32bit SPARSEMEM_VMEMMAP is considered too costly and consequently not supported. However, if an architecture does support SPARSEMEM with SPARSEMEM_VMEMMAP, let's forbid the user to disable VMEMMAP: just like we already do for arm64, s390 and x86. So if SPARSEMEM_VMEMMAP is supported, don't allow to use SPARSEMEM without SPARSEMEM_VMEMMAP. This implies that the option to not use SPARSEMEM_VMEMMAP will now be gone for loongarch, powerpc, riscv and sparc. All architectures only enable SPARSEMEM_VMEMMAP with 64bit support, so there should not really be a big downside to using the VMEMMAP (quite the contrary). This is a preparation for not supporting (1) folio sizes that exceed a single memory section (2) CMA allocations of non-contiguous page ranges in SPARSEMEM without SPARSEMEM_VMEMMAP configs, whereby we want to limit possible impact as much as possible (e.g., gigantic hugetlb page allocations suddenly fails). Link: https://lkml.kernel.org/r/[email protected] Link: https://lkml.kernel.org/r/[email protected] Link: https://lore.kernel.org/all/CAHk-=wiCYfNp4AJLBORU-c7ZyRBUp66W2-Et6cdQ4REx-GyQ_A@mail.gmail.com/T/#u [1] Signed-off-by: David Hildenbrand <[email protected]> Acked-by: Zi Yan <[email protected]> Acked-by: Mike Rapoport (Microsoft) <[email protected]> Acked-by: SeongJae Park <[email protected]> Reviewed-by: Wei Yang <[email protected]> Reviewed-by: Lorenzo Stoakes <[email protected]> Reviewed-by: Liam R. 
Cc: Huacai Chen <[email protected]>
Cc: WANG Xuerui <[email protected]>
Cc: Madhavan Srinivasan <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Nicholas Piggin <[email protected]>
Cc: Christophe Leroy <[email protected]>
Cc: Paul Walmsley <[email protected]>
Cc: Palmer Dabbelt <[email protected]>
Cc: Albert Ou <[email protected]>
Cc: Alexandre Ghiti <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: Andreas Larsson <[email protected]>
Cc: Alexander Gordeev <[email protected]>
Cc: Alexander Potapenko <[email protected]>
Cc: Alexandru Elisei <[email protected]>
Cc: Alex Dubov <[email protected]>
Cc: Alex Williamson <[email protected]>
Cc: Bart Van Assche <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Brendan Jackman <[email protected]>
Cc: Brett Creeley <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Christian Borntraeger <[email protected]>
Cc: Christoph Lameter (Ampere) <[email protected]>
Cc: Damien Le Moal <[email protected]>
Cc: Dave Airlie <[email protected]>
Cc: Dennis Zhou <[email protected]>
Cc: Dmitry Vyukov <[email protected]>
Cc: Doug Gilbert <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: Herbert Xu <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Inki Dae <[email protected]>
Cc: James Bottomley <[email protected]>
Cc: Jani Nikula <[email protected]>
Cc: Jason A. Donenfeld <[email protected]>
Cc: Jason Gunthorpe <[email protected]>
Cc: Jason Gunthorpe <[email protected]>
Cc: Jens Axboe <[email protected]>
Cc: Jesper Nilsson <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: John Hubbard <[email protected]>
Cc: Joonas Lahtinen <[email protected]>
Cc: Kevin Tian <[email protected]>
Cc: Lars Persson <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Marco Elver <[email protected]>
Cc: "Martin K. Petersen" <[email protected]>
Cc: Maxim Levitsky <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Muchun Song <[email protected]>
Cc: Niklas Cassel <[email protected]>
Cc: Oscar Salvador <[email protected]>
Cc: Pavel Begunkov <[email protected]>
Cc: Peter Xu <[email protected]>
Cc: Robin Murphy <[email protected]>
Cc: Rodrigo Vivi <[email protected]>
Cc: Shameerali Kolothum Thodi <[email protected]>
Cc: Shuah Khan <[email protected]>
Cc: Suren Baghdasaryan <[email protected]>
Cc: Sven Schnelle <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: Thomas Bogendoerfer <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Tvrtko Ursulin <[email protected]>
Cc: Ulf Hansson <[email protected]>
Cc: Vasily Gorbik <[email protected]>
Cc: Vlastimil Babka <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Yishai Hadas <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
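For reference, the "page->pfn, then (++pfn)->page" detour described above is visible directly in the pre-removal definition. The following is a sketch of that definition as it has long appeared in include/linux/mm.h (reproduced from memory, not quoted from this series):

    /*
     * On SPARSEMEM without SPARSEMEM_VMEMMAP, the memmap is allocated per
     * memory section, so plain pointer arithmetic ("page + n") may step
     * outside the current section's memmap. nth_page() therefore takes
     * the detour via the PFN; on all other configs it degenerates to
     * simple pointer arithmetic.
     */
    #if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
    #define nth_page(page, n)  pfn_to_page(page_to_pfn((page)) + (n))
    #else
    #define nth_page(page, n)  ((page) + (n))
    #endif

Once folios, CMA ranges, and SG entries are guaranteed to consist of "contiguous pages", the first branch has no remaining users and the whole macro can go away, which is what the final patch of the series does.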
Thanks for sharing Linux on GitHub! The beer is free too!