Below is a list of news around the DAMON project.
This list is not exhaustive but just a DAMON maintainer’s collection of news. If you find news that should be added to this list, please let us know at sj@kernel.org and/or damon@lists.linux.dev.
2024

2024-10-08: Videos for DAMON recipes at Open Source Summit EU'2024 and DAMON long-term plans at Kernel Memory Management Microconference'2024 have been uploaded to YouTube.
2024-10-01: The 2024-Q3 DAMON newsletter, covering new features, more users, repository reorganizations, and conference talks, has been posted.
This document helps you estimate the amount of benefit that you could get from DAMON-based system optimizations, and describes how you could achieve it.
Check The Signs

No optimization can provide the same extent of benefit in every case. Therefore you should first estimate how much improvement you could get using DAMON. If some of the conditions below match your situation, you could consider using DAMON.
Low IPC and High Cache Miss Ratios.
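For a rough, first-pass check of this sign, the following Python sketch (a hypothetical helper, not part of DAMON or its documentation) runs a workload under perf stat and reports its IPC and cache miss ratio. It assumes perf is installed and that its CSV output mode (-x,) prints the counter value first and the event name third.

import subprocess
import sys

# Standard hardware events for computing IPC and the cache miss ratio.
EVENTS = "instructions,cycles,cache-references,cache-misses"

def measure(cmd):
    """Run 'cmd' under 'perf stat' with CSV output and return event counts."""
    proc = subprocess.run(
        ["perf", "stat", "-x,", "-e", EVENTS, "--"] + cmd,
        stderr=subprocess.PIPE, text=True)
    counts = {}
    for line in proc.stderr.splitlines():
        fields = line.split(",")
        if len(fields) < 3:
            continue
        # CSV format: counter value, unit, event name, ...; strip modifiers
        # such as ':u' that perf may append to the event name.
        name = fields[2].split(":")[0]
        try:
            counts[name] = float(fields[0])
        except ValueError:
            pass  # skip '<not counted>' and similar non-numeric lines
    return counts

if __name__ == "__main__":
    # Usage (hypothetical): python3 check_signs.py ./my_workload arg1 arg2
    c = measure(sys.argv[1:])
    print("IPC: %.2f" % (c["instructions"] / c["cycles"]))
    print("cache miss ratio: %.2f%%"
          % (100.0 * c["cache-misses"] / c["cache-references"]))

Low IPC together with a high cache miss ratio suggests the workload is bound by memory accesses, which is the kind of situation where access-pattern-aware optimizations tend to pay off.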
DAMON is lightweight. It increases system memory usage by 0.39% and slows target workloads down by 1.16%.
DAMON is accurate and useful for memory management optimizations. An experimental DAMON-based operation scheme for THP, namely ‘ethp’, removes 76.15% of THP memory overheads while preserving 51.25% of THP speedup. Another experimental DAMON-based ‘proactive reclamation’ implementation, namely ‘prcl’, reduces resident set sizes by 93.38% and system memory footprint by 23.63% while incurring only 1.22% runtime overhead in the best case (parsec3/freqmine).
As workloads become more data-intensive while DRAM capacity remains limited, optimal memory management based on dynamic data access patterns is becoming increasingly important. Such mechanisms are only possible if accurate and efficient dynamic access pattern monitoring is available.
DAMON is a Data Access MONitoring framework subsystem for the Linux kernel, developed for such memory management. It is designed with key mechanisms (refer to Design for the details) that make it

- accurate (the monitoring output is useful enough for DRAM-level memory management, though it might not be appropriate for CPU cache levels),
- light-weight (the monitoring overhead is low enough to be applied online), and
- scalable (the upper bound of the overhead stays in a constant range regardless of the size of the target workloads).
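To make the "scalable" property concrete, below is a small conceptual sketch in Python, an illustration of the region-based sampling idea from the Design document rather than DAMON's actual kernel code: each sampling pass checks only one sampled address per region, so the per-pass work is bounded by the number of regions rather than by the size of the monitored address space.

import random

class Region:
    """A contiguous address range that is monitored as one unit."""
    def __init__(self, start, end):
        self.start = start       # inclusive start address
        self.end = end           # exclusive end address
        self.nr_accesses = 0     # accesses observed for this region

def sampling_pass(regions, page_accessed):
    """Check one sampled address per region; cost is len(regions) checks."""
    for r in regions:
        addr = random.randrange(r.start, r.end)
        # In the kernel this would test and clear the Accessed bit of the
        # page table entry mapping 'addr'; here a toy predicate stands in.
        if page_accessed(addr):
            r.nr_accesses += 1

# Ten 1 MiB regions cover the whole (toy) target address space; one pass
# costs ten checks no matter how large that space is.
regions = [Region(i << 20, (i + 1) << 20) for i in range(10)]
sampling_pass(regions, page_accessed=lambda addr: addr % 2 == 0)
print([r.nr_accesses for r in regions])

In the real subsystem, DAMON additionally merges and splits regions adaptively so that their number stays within user-configurable bounds, which is what keeps the upper bound of the monitoring overhead constant regardless of workload size.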