Up to [cvs.NetBSD.org] / src / sys / kern
Default branch: MAIN
Current tag: vmlocking
Revision 1.128.2.13, Thu Nov 1 21:10:14 2007 UTC (16 years, 4 months ago) by ad
Branch: vmlocking
Changes since 1.128.2.12: +44 -6 lines
Diff to previous 1.128.2.12; next main 1.129
pool_reclaim: acquire kernel_lock if the pool is at IPL_SOFTCLOCK, SOFTNET or SOFTSERIAL, as mutexes at these levels must still be spinlocks. It's not yet safe for e.g. ip_intr() to block, as this upsets code calling up from the socket layer: it can find pcbs sitting half baked.

pool_cache_xcall: go to splvm to prevent kernel_lock from being taken, for the reason listed above. Pointed out by yamt@.
Revision 1.128.2.12, Mon Oct 29 16:37:44 2007 UTC (16 years, 4 months ago) by ad
Branch: vmlocking
Changes since 1.128.2.11: +12 -9 lines
Diff to previous 1.128.2.11
pool_drain_start: tweak assertions/comments.
Revision 1.128.2.11, Fri Oct 26 17:03:10 2007 UTC (16 years, 4 months ago) by ad
Branch: vmlocking
Changes since 1.128.2.10: +113 -24 lines
Diff to previous 1.128.2.10
- Use a cross call to drain the per-CPU component of pool caches.
- When draining, skip over pools that are completely inactive.
Revision 1.128.2.10, Tue Sep 25 01:36:19 2007 UTC (16 years, 5 months ago) by ad
Branch: vmlocking
Changes since 1.128.2.9: +14 -10 lines
Diff to previous 1.128.2.9
If no constructor/destructor are provided for a pool_cache, use nullop. Remove the tests for pc_ctor/pc_dtor != NULL.
Revision 1.128.2.9, Mon Sep 10 11:13:17 2007 UTC (16 years, 6 months ago) by ad
Branch: vmlocking
Changes since 1.128.2.8: +6 -8 lines
Diff to previous 1.128.2.8
Fix a deadlock.
Revision 1.128.2.8, Sun Sep 9 23:17:14 2007 UTC (16 years, 6 months ago) by ad
Branch: vmlocking
Changes since 1.128.2.7: +16 -55 lines
Diff to previous 1.128.2.7
- Re-enable pool_cache, since it works on i386 again after today's pmap change. pool_cache_invalidate() no longer invalidates objects stored in the per-CPU caches; this needs some thought.
- Remove pcg_get, pcg_put since they are only called from one place each.
- Remove cc_busy assertions, since they don't work correctly. Pointed out by yamt@.
- Add some more assertions and simplify.
Revision 1.128.2.7, Sat Sep 1 12:55:15 2007 UTC (16 years, 6 months ago) by ad
Branch: vmlocking
Changes since 1.128.2.6: +639 -310 lines
Diff to previous 1.128.2.6
- Add a CPU layer to pool caches. In combination with vmem/kmem this provides CPU-local slab/object and general purpose allocators. The strategy used is as described in Jeff Bonwick's USENIX paper, except in at least one place where the described allocation strategy doesn't make sense. For exclusive access to the CPU layer the IPL is raised or kernel preemption is disabled. Where the interrupt priority levels are software emulated this is much cheaper than taking a lock, and I think that writing to a local %pil register is likely to have a similar penalty to taking a lock. No tuning of the group sizes is currently done: all groups have 15 items each, but this should be fairly easy to implement. Also, the reclamation mechanism should probably use a cross-call to drain the CPU-level caches on remote CPUs. Currently this causes kernel memory corruption on i386, yet works without a problem on amd64. The cache layer is disabled for the time being until I can find the bug.
- Change the pool_cache API so that the caches are themselves dynamically allocated, and so that each cache is tied to a single pool only. Add some stubs to change pool_cache parameters that call directly through to the pool layer (e.g. pool_cache_sethiwat). The idea here is that pool_cache should become the default object allocator (and so LKM friendly), and that the pool allocator should be for kernel-internal use only. This will be posted to tech-kern@ for review.
Revision 1.128.2.6, Mon Aug 20 21:27:37 2007 UTC (16 years, 7 months ago) by ad
Branch: vmlocking
Changes since 1.128.2.5: +28 -9 lines
Diff to previous 1.128.2.5
Sync with HEAD.
Revision 1.128.2.5, Sun Jul 29 11:34:47 2007 UTC (16 years, 7 months ago) by ad
Branch: vmlocking
Changes since 1.128.2.4: +4 -2 lines
Diff to previous 1.128.2.4
Trap free() of areas that contain undestroyed locks. Not a major problem but it helps to catch bugs.
Revision 1.128.2.4, Thu Mar 22 12:30:29 2007 UTC (17 years ago) by ad
Branch: vmlocking
Changes since 1.128.2.3: +3 -12 lines
Diff to previous 1.128.2.3
- Remove debugging crud.
- wakeup -> cv_broadcast.
Revision 1.128.2.3, Wed Mar 21 20:10:22 2007 UTC (17 years ago) by ad
Branch: vmlocking
Changes since 1.128.2.2: +2 -9 lines
Diff to previous 1.128.2.2
GC the simplelock/spinlock debugging stuff.
Revision 1.128.2.2, Tue Mar 13 17:50:58 2007 UTC (17 years ago) by ad
Branch: vmlocking
Changes since 1.128.2.1: +119 -131 lines
Diff to previous 1.128.2.1
Pull in the initial set of changes for the vmlocking branch.
Revision 1.128.2.1, Tue Mar 13 16:51:56 2007 UTC (17 years ago) by ad
Branch: vmlocking
Changes since 1.128: +7 -7 lines
Diff to previous 1.128
Sync with head.
Revision 1.128, Sun Mar 4 06:03:07 2007 UTC (17 years ago) by christos
Branch: MAIN
Branch point for: vmlocking
Changes since 1.127: +25 -25 lines
Diff to previous 1.127
Kill caddr_t; there will be some MI fallout, but it will be fixed shortly.