
DAMON News List

Below is a list of news around the DAMON project. This list is not exhaustive, but is just a DAMON maintainer's collection of news. If you find news that should be added to this list, please let us know at sj@kernel.org and/or damon@lists.linux.dev.

2024

2024-04-28: Yet another academic paper exploring DAMON for serverless computing was published at ASPLOS 24.
2024-04-17: The third in-person DAMON meetup was held as an unconference session of Open Source Summit North America 2024.

DAMON-based System Optimization Guide

This document helps you estimate the amount of benefit that you could get from DAMON-based system optimizations, and describes how you could achieve it.

Check The Signs

No optimization can provide the same extent of benefit in every case. Therefore you should first guess how much improvement you could get using DAMON. If some of the conditions below match your situation, you could consider using DAMON.

Low IPC and High Cache Miss Ratios
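As a quick check, you could collect instruction, cycle, and cache counters with a profiler (for example, perf stat) and compute the ratios yourself. Below is a minimal Python sketch of the arithmetic only; the counter values and thresholds shown are hypothetical, and in practice you would plug in the numbers your profiler reports.

    # Minimal sketch: compute IPC and cache miss ratio from raw counters.
    # The counter values below are hypothetical; in practice, take them
    # from a profiler such as `perf stat`.

    def ipc(instructions: int, cycles: int) -> float:
        """Instructions per cycle; low values suggest the CPU is stalling,
        often while waiting for memory."""
        return instructions / cycles

    def cache_miss_ratio(misses: int, references: int) -> float:
        """Fraction of cache references that missed."""
        return misses / references

    # Hypothetical numbers for a memory-bound workload.
    print(f"IPC: {ipc(4_000_000_000, 10_000_000_000):.2f}")                    # ~0.40, low
    print(f"Cache miss ratio: {cache_miss_ratio(300_000_000, 500_000_000):.0%}")  # 60%, high

If your workload shows figures in this neighborhood, it spends much of its time waiting on memory accesses, and access-pattern-aware optimizations such as those DAMON enables are more likely to help.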

DAMON Evaluation

DAMON is lightweight. It increases system memory usage by only 0.39% and slows target workloads down by only 1.16%. DAMON is accurate and useful for memory management optimizations. An experimental DAMON-based operation scheme for THP, namely ‘ethp’, removes 76.15% of the THP memory overhead while preserving 51.25% of the THP speedup. Another experimental DAMON-based ‘proactive reclamation’ implementation, namely ‘prcl’, reduces the resident set by 93.38% and the system memory footprint by 23.63% while incurring only a 1.22% runtime overhead in the best case (parsec3/freqmine).

DAMON: Data Access Monitor

With increasingly data-intensive workloads and limited DRAM capacity, memory management optimized for dynamic data access patterns is becoming more important. Such mechanisms are only possible if accurate and efficient dynamic access pattern monitoring is available. DAMON is a Data Access MONitoring framework subsystem for the Linux kernel developed for such memory management. It is designed with key mechanisms (refer to Design for the details) that make it accurate (the monitoring output is useful enough for DRAM-level memory management, though it might not be appropriate for CPU cache levels), light-weight (the monitoring overhead is low enough to be applied online), and scalable (the upper bound of the overhead stays constant regardless of the size of target workloads).
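The scalability comes from DAMON's region-based sampling: each sampling interval checks only one page per monitoring region, so the monitoring cost is bounded by the number of regions rather than the size of the monitored address space. Below is a minimal, simplified Python sketch of that idea; the page_accessed() probe, the region layout, and the numbers are hypothetical illustrations, not DAMON's actual kernel implementation.

    # Toy model of region-based sampling (illustrative only, not DAMON's
    # kernel code). Per interval, one page per region is checked, so the
    # cost depends on the number of regions, not the address space size.
    import random
    from dataclasses import dataclass

    @dataclass
    class Region:
        start: int            # inclusive start address
        end: int              # exclusive end address
        nr_accesses: int = 0  # sampling intervals in which an access was seen

    def page_accessed(addr: int) -> bool:
        """Hypothetical access probe; the kernel would instead check the
        page's Accessed bit (or a similar hardware-provided signal)."""
        return random.random() < 0.3  # placeholder probability

    def sample_regions(regions: list[Region], nr_intervals: int) -> None:
        """Check only ONE randomly chosen page per region per interval."""
        for _ in range(nr_intervals):
            for r in regions:
                addr = random.randrange(r.start, r.end)
                if page_accessed(addr):
                    r.nr_accesses += 1

    if __name__ == "__main__":
        # Eight 128 MiB regions covering a 1 GiB toy address space; the
        # per-interval sampling cost would stay the same even for a much
        # larger space, as long as the number of regions is fixed.
        regions = [Region(i * 2**27, (i + 1) * 2**27) for i in range(8)]
        sample_regions(regions, nr_intervals=20)
        for r in regions:
            print(f"[{r.start:#x}, {r.end:#x}): {r.nr_accesses} accessed intervals")

In DAMON's real design the regions are also split and merged adaptively based on observed access frequencies, which is how the fixed region budget is spent on the parts of the address space where access patterns actually differ.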