Keyword substitution: kv
Default branch: MAIN
futex(2): Avoid returning early on timeout.

Rounding in the arithmetic leading into cv_timedwait_sig, and any skew between the timecounter used by clock_gettime and the hardclock timer used to wake cv_timedwait_sig, can lead cv_timedwait_sig to wake up before the deadline as observable by clock_gettime. futex(FUTEX_WAIT) is not supposed to do that, so ignore when cv_timedwait_sig returns EWOULDBLOCK -- we'll notice the deadline has passed in the next iteration anyway, if it has actually passed.

While here, make sure that we never pass less than 1 tick to cv_timedwait_sig -- that turns it into cv_wait_sig, to wait indefinitely with no timeout.

With this change, I have not seen any failures as reported in:

PR kern/59132: t_futex_ops:futex_wait_timeout_* sometimes fails on early wakeup

Some instrumentation in futex_wait to count when cv_timedwait_sig returns early as measured by clock_gettime (not committed in this change, just local experiments) supports this hypothesis for the symptoms observed in the PR.
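The tick-clamping half of this fix can be illustrated with a small user-space sketch. The names (`timeout_to_ticks`, `HZ` value) are illustrative, not the actual sys_futex.c code; the point is that converting a small remaining timespec to hardclock ticks can round down to 0, and a 0 timeout turns cv_timedwait_sig into an indefinite cv_wait_sig.

```c
#include <assert.h>

#define HZ 100	/* assumed hardclock frequency for this sketch */

/*
 * Hypothetical model of converting a remaining (sec, nsec) timeout
 * into hardclock ticks for cv_timedwait_sig.  A sub-tick remainder
 * rounds down to 0, which would mean "no timeout", so clamp to 1.
 */
static int
timeout_to_ticks(long sec, long nsec)
{
	long ticks = sec * HZ + nsec / (1000000000L / HZ);

	/* Never pass 0: that would turn the timed wait into cv_wait_sig. */
	return (int)(ticks < 1 ? 1 : ticks);
}
```

The EWOULDBLOCK half of the fix is then just a matter of looping: when the clamped (and possibly early) sleep returns EWOULDBLOCK, re-read clock_gettime and only report a timeout once the deadline has genuinely passed.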
futex(2): Rename various parameters to clarify correspondence. No functional change intended. I have spent way too much time puzzling over what val/val2/val3 mean for each operation; let's just give them all meaningful names and write down the correspondence in the dispatch switch in do_futex. Prompted by how much time I spent scratching my head for: PR kern/59129: futex(3): missing sign extension in FUTEX_WAKE_OP
futex(2): Fix some comments to match the usual argument order. No functional change intended. Prompted by: PR kern/59129: futex(3): missing sign extension in FUTEX_WAKE_OP
futex(2): Sign-extend FUTEX_WAKE_OP oparg/cmparg as Linux does. Also mask off bits in the FUTEX_OP macro as Linux does so that passing negative arguments works like in Linux. PR kern/59129: futex(3): missing sign extension in FUTEX_WAKE_OP
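A user-space sketch of the encoding and decoding involved (the macro mirrors the Linux FUTEX_OP layout: 4-bit op, 4-bit cmp, two 12-bit signed immediates; the helper names are illustrative, not kernel code). Masking in the macro lets callers pass negative oparg/cmparg, and the decoder sign-extends the 12-bit fields so -1 round-trips as -1.

```c
#include <assert.h>

/* Linux-compatible FUTEX_OP encoding, with the fields masked so that
 * negative arguments work as on Linux. */
#define FUTEX_OP(op, oparg, cmp, cmparg)				\
	((unsigned)((op) & 0xf) << 28 | (unsigned)((cmp) & 0xf) << 24 |	\
	 (unsigned)((oparg) & 0xfff) << 12 | (unsigned)((cmparg) & 0xfff))

/* Sign-extend a 12-bit field without relying on implementation-defined
 * right shifts of negative values. */
static int
sign_extend12(unsigned x)
{
	return ((int)(x & 0xfff) ^ 0x800) - 0x800;
}

static int
futex_op_oparg(unsigned val3)	/* bits 12..23 */
{
	return sign_extend12(val3 >> 12);
}

static int
futex_op_cmparg(unsigned val3)	/* bits 0..11 */
{
	return sign_extend12(val3);
}
```

Without the sign extension, an encoded oparg of -1 would decode as 4095, which is exactly the class of mismatch PR kern/59129 describes.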
futex(2): Fix return value of FUTEX_CMP_REQUEUE.

The return value is the number of waiters woken _or requeued_, not just the number of waiters woken:

   FUTEX_CMP_REQUEUE
          Returns the total number of waiters that were woken up or
          requeued to the futex for the futex word at uaddr2.  If this
          value is greater than val, then the difference is the number
          of waiters requeued to the futex for the futex word at uaddr2.

https://man7.org/linux/man-pages/man2/futex.2.html

While here, clarify some of the arguments with comments so it's not quite so cryptic with val/val2/val3 everywhere.

PR kern/56828: futex calls in Linux emulation sometimes hang
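A toy model of the return-value rule (not the kernel code; the function and parameter names are illustrative): the result counts both woken and requeued waiters, so a caller can recover the requeued count as the result minus the wake limit.

```c
#include <assert.h>

/*
 * Model of the FUTEX_CMP_REQUEUE result: wake up to nwake waiters,
 * requeue up to nrequeue of the rest, and return woken + requeued.
 */
static int
cmp_requeue_retval(int nwaiters, int nwake, int nrequeue)
{
	int woken = nwaiters < nwake ? nwaiters : nwake;
	int requeued = nwaiters - woken;

	if (requeued > nrequeue)
		requeued = nrequeue;
	return woken + requeued;	/* not just woken */
}
```

With 5 waiters, wake limit 1, and an unbounded requeue limit, the call returns 5; per the man page excerpt above, the caller computes 5 - 1 = 4 waiters requeued.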
futex(2): Fix FUTEX_CMP_REQUEUE to always compare even if no waiters.

It must always compare the futex value and fail with EAGAIN on mismatch, even if there are no waiters.

   FUTEX_CMP_REQUEUE (since Linux 2.6.7)
          This operation first checks whether the location uaddr still
          contains the value val3.  If not, the operation fails with the
          error EAGAIN.  Otherwise, the operation [...]

https://man7.org/linux/man-pages/man2/futex.2.html

PR kern/56828: futex calls in Linux emulation sometimes hang
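A minimal user-space sketch of the required ordering (the function name is hypothetical, not the actual kernel routine): the comparison against val3 happens unconditionally, before any wake or requeue work, and is not skipped on an empty wait queue.

```c
#include <errno.h>
#include <assert.h>

/*
 * Sketch of the fixed control flow: compare the futex word first,
 * returning EAGAIN on mismatch even when nwaiters == 0.
 */
static int
futex_cmp_requeue_model(const int *uaddr, int val3, int nwaiters)
{
	(void)nwaiters;		/* the compare must not depend on this */

	if (*uaddr != val3)
		return EAGAIN;	/* mismatch fails, waiters or not */

	/* ... wake/requeue any waiters here ... */
	return 0;
}
```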
sys_futex.c: Fix illustration of futex(2). In this illustration, we need to _set_ bit 1 to claim ownership, not _clear_ bit 1 to claim ownership. No functional change intended -- comment only.
kern: Eliminate most __HAVE_ATOMIC_AS_MEMBAR conditionals. I'm leaving in the conditional around the legacy membar_enters (store-before-load, store-before-store) in kern_mutex.c and in kern_lock.c because they may still matter: store-before-load barriers tend to be the most expensive kind, so eliding them is probably worthwhile on x86. (It also may not matter; I just don't care to do measurements right now, and it's a single valid and potentially justifiable use case in the whole tree.) However, membar_release/acquire can be mere instruction barriers on all TSO platforms including x86, so there's no need to go out of our way with a bad API to conditionalize them. If the procedure call overhead is measurable we could just change them to be macros on x86 that expand into __insn_barrier. Discussed on tech-kern: https://mail-index.netbsd.org/tech-kern/2023/02/23/msg028729.html
futex(9): Convert membar_enter/exit to membar_acquire/release. No functional change -- this is just in an illustrative comment!
sys: Use membar_release/acquire around reference drop. This just goes through my recent reference count membar audit and changes membar_exit to membar_release and membar_enter to membar_acquire -- this should make everything cheaper on most CPUs without hurting correctness, because membar_acquire is generally cheaper than membar_enter.
sys: Membar audit around reference count releases.

If two threads are using an object that is freed when the reference count goes to zero, we need to ensure that all memory operations related to the object happen before freeing the object.

Using an atomic_dec_uint_nv(&refcnt) == 0 ensures that only one thread takes responsibility for freeing, but it's not enough to ensure that the other thread's memory operations happen before the freeing.

Consider:

	Thread A			Thread B
	obj->foo = 42;			obj->baz = 73;
	mumble(&obj->bar);		grumble(&obj->quux);
	/* membar_exit(); */		/* membar_exit(); */
	atomic_dec -- not last		atomic_dec -- last
					/* membar_enter(); */
					KASSERT(invariant(obj->foo,
					    obj->bar));
					free_stuff(obj);

The memory barriers ensure that

	obj->foo = 42;
	mumble(&obj->bar);

in thread A happens before

	KASSERT(invariant(obj->foo, obj->bar));
	free_stuff(obj);

in thread B. Without them, this ordering is not guaranteed. So in general it is necessary to do

	membar_exit();
	if (atomic_dec_uint_nv(&obj->refcnt) != 0)
		return;
	membar_enter();

to release a reference, for the `last one out hit the lights' style of reference counting. (This is in contrast to the style where one thread blocks new references and then waits under a lock for existing ones to drain with a condvar -- no membar needed thanks to mutex(9).)

I searched for atomic_dec to find all these. Obviously we ought to have a better abstraction for this because there's so much copypasta. This is a stop-gap measure to fix actual bugs until we have that. It would be nice if an abstraction could gracefully handle the different styles of reference counting in use -- some years ago I drafted an API for this, but making it cover everything got a little out of hand (particularly with struct vnode::v_usecount) and I ended up setting it aside to work on psref/localcount instead for better scalability.

I got bored of adding #ifdef __HAVE_ATOMIC_AS_MEMBAR everywhere, so I only put it on things that look performance-critical on 5sec review.
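The same pattern can be expressed as a runnable user-space analogue with C11 atomics (this is a sketch using <stdatomic.h>, not the kernel membar(9) API; `struct obj` and `obj_rele` are illustrative names):

```c
#include <stdatomic.h>
#include <stdbool.h>

struct obj {
	atomic_uint refcnt;
	/* ... payload the last reference holder may free ... */
};

/*
 * Release one reference.  Returns true iff the caller took the last
 * reference and may now free the object.
 */
static bool
obj_rele(struct obj *o)
{
	/* membar_exit()/membar_release() analogue: publish this
	 * thread's writes to the object before the decrement. */
	atomic_thread_fence(memory_order_release);

	if (atomic_fetch_sub_explicit(&o->refcnt, 1,
	    memory_order_relaxed) != 1)
		return false;	/* not the last reference */

	/* membar_enter()/membar_acquire() analogue: make every other
	 * thread's pre-release writes visible before freeing. */
	atomic_thread_fence(memory_order_acquire);
	return true;
}
```

As the commit message notes, the release fence alone makes one winner of the race to zero, but only the paired acquire fence lets that winner safely read and free what the losers wrote.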
We should really adopt membar_enter_preatomic/membar_exit_postatomic or something (except they are applicable only to atomic r/m/w, not to atomic_load/store_*, making the naming annoying) and get rid of all the ifdefs.
Cherry-pick this sys_futex.c revision and associated changes, to make it easier to test this sys_futex.c with trunk:

revision 1.13
date: 2021-09-28 08:05:42 -0700; author: thorpej; state: Exp; lines: +11 -9; commitid: FPndTp2ZDjYuyJaD;

futex_release_all_lwp(): No need to pass the "tid" argument separately; that is a vestige of an older version of the code. Also, move a KASSERT() that both futex_release_all_lwp() call sites had inside of futex_release_all_lwp() itself.
merge rev. 1.15 from HEAD: fix a typo in compare_futex_key().
fix a typo in compare_futex_key().
fix various typos, mainly in comments, but also in man pages and log messages.
futex_release_all_lwp(): No need to pass the "tid" argument separately; that is a vestige of an older version of the code. Also, move a KASSERT() that both futex_release_all_lwp() call sites had inside of futex_release_all_lwp() itself.
The return values for FUTEX_REQUEUE and FUTEX_CMP_REQUEUE are different, but we weren't doing the right thing. FUTEX_REQUEUE returns the number of waiters awakened. FUTEX_CMP_REQUEUE returns the number of waiters awakened plus the number of waiters requeued (and it is an exercise for the caller to calculate the number requeued, if it cares).
Isolate knowledge of the union-ness of futex_key to where it's declared.
Correct a comment.
At the end of futex_wait(), when sleepq_block() returns 0, we would like to assert that l->l_futex == NULL, because all of the code paths that awaken a blocked thread in sys_futex.c itself clear l->l_futex. Unfortunately, there are certain received-a-signal situations (e.g. SIGKILL) where sleepq_block() will not return an error after being awakened by the signal, rendering this assertion too strong. So, rather than going down the rabbit hole of reasoning out and altering long-standing behavior of the signals code, just don't assert there and treat a zero-return from sleepq_block() as an aborted futex wait if l->l_futex != NULL. (Thanks chs@ for helping chase this one down.)
Bring over just the futex sleepq infrastructure changes from thorpej-futex to a new branch based on current HEAD. This contains only the fixes for the priority problems, and is intended to finish debugging those changes (without the new extensions).
Sync with HEAD.
need <sys/param.h> for COHERENCY_UNIT. Minor KNF along the way.
futex_func_wait(): If TIMER_ABSTIME, sanity check that the deadline provided by the caller is not ridiculous.
Major overhaul of futex implementation:

- Use sleepqs directly, rather than using condition variables and separate wait queues / structures. By doing this, and using the standard mechanism for keeping sleepqs sorted by priority, we can ensure that the highest priority waiters will be awakened, rather than naively awakening in FIFO order.

- As part of the data structure re-organization, struct lwp gains "l_futex" (the futex an LWP is blocked on) and "l_futex_wakesel" (the futex wake selector bitset) fields (and loses l___rsvd1). Please note the special locking considerations for these fields documented in the comments.

- Add the notion of a "futex class". This is prep work for eventually supporting the FUTEX_*_PI operations, as well as some future NetBSD extensions to the futex interface.

- Add a preliminary implementation of the first of those NetBSD extensions, FUTEX_NETBSD_RW_WAIT and FUTEX_NETBSD_RW_HANDOFF. These are designed to implement reader/writer locks with direct-handoff to the correct priority thread(s) (real-time read-waiters need to have priority over non-real-time write-waiters). NOTE: this is currently disabled due to a mysterious panic that hasn't yet been tracked down.

- Add some SDT probes to aid in debugging.
Revert "Use cv_timedwaitclock_sig in futex." Turned out to break things; we'll do this another way.
Revert "Make sure futex waits never return ERESTART." Part of redoing the timedwaitclock changes, which were buggy and committed a little too fast.
Make sure futex waits never return ERESTART. If the user had passed in a relative timeout, this would have the effect of waiting for the full relative time repeatedly, without regard for how much time had elapsed during the wait before a signal. In principle this may not be necessary for absolute timeouts or indefinite timeouts, but it's not clear there's an advantage; we do the same for various other syscalls like nanosleep. Perhaps in the future we can arrange to keep the state of how much time had elapsed when we restart like Linux does, but that's a much more ambitious change.
Use cv_timedwaitclock_sig in futex. Possible fix for hangs observed with Java under Linux emulation.
Make FUTEX_WAIT_BITSET(bitset=0) fail with EINVAL to match Linux.
Fix waiting on a zero bitset.

The logic in futex_wait assumes there are two paths out:

1. Error (signal or timeout), in which case we take ourselves off the queue.

2. Wakeup, in which case the waker takes us off the queue.

But if the user does FUTEX_WAIT_BITSET(bitset=0), as in the futex_wait_pointless_bitset test, then we will never even go to sleep, so there will be nobody to wake us as in (2), but it's not an error as in (1) either. As a result, we're left on the queue.

Instead, don't bother with any of the wait machinery in that case. This does not actually match Linux semantics -- Linux returns EINVAL if bitset is zero. But let's make sure this passes the releng test rig as the tests are written now, and then fix both the logic and the tests -- this is a candidate fix for:

lib/libc/sys/t_futex_ops (277/847): 20 test cases
    futex_basic_wait_wake_private: [6.645189s] Passed.
    futex_basic_wait_wake_shared: [6.572692s] Passed.
    futex_cmp_requeue: [4.624082s] Passed.
    futex_requeue: [4.427191s] Passed.
    futex_wait_pointless_bitset: [0.202865s] Passed.
    futex_wait_timeout_deadline: [ 9074.4164779] panic: TAILQ_INSERT_TAIL 0xffff000056a1ad48 /tmp/bracket/build/2020.04.28.03.00.23-evbarm-aarch64/src/sys/kern/sys_futex.c:826
[ 9074.4340691] cpu0: Begin traceback...
[ 9074.4340691] trace fp ffffc0004ceffb40
[ 9074.4340691] fp ffffc0004ceffb60 vpanic() at ffffc000004aac58 netbsd:vpanic+0x160
[ 9074.4441432] fp ffffc0004ceffbd0 panic() at ffffc000004aad4c netbsd:panic+0x44
[ 9074.4441432] fp ffffc0004ceffc60 futex_wait_enqueue() at ffffc000004b7710 netbsd:futex_wait_enqueue+0x138
[ 9074.4555795] fp ffffc0004ceffc80 futex_func_wait.part.5() at ffffc000004b82f4 netbsd:futex_func_wait.part.5+0x17c
[ 9074.4660518] fp ffffc0004ceffd50 do_futex() at ffffc000004b8cd8 netbsd:do_futex+0x1d0
[ 9074.4660518] fp ffffc0004ceffdf0 sys___futex() at ffffc000004b9078 netbsd:sys___futex+0x50
Rename futex_get -> futex_lookup_create. Remove futex_put. Just use futex_rele instead of futex_put. There may once have been a method to the madness of this alias in an early draft, but there is no longer. No functional change; all names are private to sys_futex.c.
Fix races in aborted futex waits.

- Re-check the wake condition in futex_wait in the event of error.
  => Otherwise, if futex_wait times out in cv_timedwait_sig but futex_wake wakes it while cv_timedwait_sig is still trying to reacquire fw_lock, the wake would be incorrectly accounted.

- Fold futex_wait_abort into futex_wait so it happens atomically.
  => Otherwise, if futex_wait times out and releases fw_lock, then, before futex_wait_abort reacquires the lock and removes it from the queue, the waiter could be woken by futex_wake. But once we enter futex_wait_abort, the decision to abort is final, so the wake would be incorrectly accounted.

- In futex_wait_abort, mark each waiter aborting while we do the lock dance, and skip over aborting waiters in futex_wake and futex_requeue.
  => Otherwise, futex_wake might move it to a new futex while futex_wait_abort has released all the locks -- but futex_wait_abort still has the old futex, so TAILQ_REMOVE will cross the streams and bad things will happen.

- In futex_wait_abort, release the futex we moved the waiter off.
  => Otherwise, we would leak the futex reference acquired by futex_func_wait, in the event of aborting. (For normal wakeups, futex_wake releases the reference on our behalf.)

- Consistently use futex_wait_dequeue rather than TAILQ_REMOVE so that all changes to fw_futex and the waiter queue are isolated to futex_wait_enqueue/dequeue and happen together.

Patch developed with and tested by thorpej@.
We would have bigger problems if PAGE_SIZE were < sizeof(int). Remove a CTASSERT() that can't be evaluated at compile-time on all platforms.
fix DIAGNOSTIC build
Add a NetBSD native futex implementation, mostly written by riastradh@. Map the COMPAT_LINUX futex calls to the native ones.