Buddy System Memory Allocation

Source: https://www.cnblogs.com/linhaostudy/archive/2020/05/16/12900614.html

The kernel's commonly used interface for allocating physical memory pages is alloc_pages(), which allocates one or more physically contiguous pages; the number of pages can only be a power of two. Compared with making many allocations of scattered physical pages, allocating contiguous pages helps reduce fragmentation of system memory, and memory fragmentation is a notoriously painful problem. alloc_pages() takes two parameters: the allocation mask gfp_mask and the allocation order.

[include/linux/gfp.h]
#define alloc_pages(gfp_mask, order)	\
	alloc_pages_node(numa_node_id(), gfp_mask, order)

The allocation mask is a very important parameter, and it is likewise defined in the gfp.h header file.

/* Plain integer GFP bitmasks. Do not use this directly. */
#define ___GFP_DMA		0x01u
#define ___GFP_HIGHMEM		0x02u
#define ___GFP_DMA32		0x04u
#define ___GFP_MOVABLE		0x08u
#define ___GFP_WAIT		0x10u
#define ___GFP_HIGH		0x20u
#define ___GFP_IO		0x40u
#define ___GFP_FS		0x80u
#define ___GFP_COLD		0x100u
#define ___GFP_NOWARN		0x200u
#define ___GFP_REPEAT		0x400u
#define ___GFP_NOFAIL		0x800u
#define ___GFP_NORETRY		0x1000u
#define ___GFP_MEMALLOC		0x2000u
#define ___GFP_COMP		0x4000u
#define ___GFP_ZERO		0x8000u
#define ___GFP_NOMEMALLOC	0x10000u
#define ___GFP_HARDWALL		0x20000u
#define ___GFP_THISNODE		0x40000u
#define ___GFP_RECLAIMABLE	0x80000u
#define ___GFP_NOTRACK		0x200000u
#define ___GFP_NO_KSWAPD	0x400000u
#define ___GFP_OTHER_NODE	0x800000u
#define ___GFP_WRITE		0x1000000u

The kernel divides allocation mask bits into two groups: zone modifiers and action modifiers. Zone modifiers specify which zone the pages should be allocated from, and are defined by the lowest 4 bits of the mask: ___GFP_DMA, ___GFP_HIGHMEM, ___GFP_DMA32 and ___GFP_MOVABLE.

/* If the above are modified, __GFP_BITS_SHIFT may need updating */

/*
 * GFP bitmasks..
 *
 * Zone modifiers (see linux/mmzone.h - low three bits)
 *
 * Do not put any conditional on these. If necessary modify the definitions
 * without the underscores and use them consistently. The definitions here may
 * be used in bit comparisons.
 */
#define __GFP_DMA	((__force gfp_t)___GFP_DMA)
#define __GFP_HIGHMEM	((__force gfp_t)___GFP_HIGHMEM)
#define __GFP_DMA32	((__force gfp_t)___GFP_DMA32)
#define __GFP_MOVABLE	((__force gfp_t)___GFP_MOVABLE)  /* Page is movable */
#define GFP_ZONEMASK	(__GFP_DMA|__GFP_HIGHMEM|__GFP_DMA32|__GFP_MOVABLE)

Action modifiers do not restrict which memory zone the allocation may come from, but they do change the allocation behavior. They are defined as follows:

/*
 * Action modifiers - doesn't change the zoning
 *
 * __GFP_REPEAT: Try hard to allocate the memory, but the allocation attempt
 * _might_ fail.  This depends upon the particular VM implementation.
 *
 * __GFP_NOFAIL: The VM implementation _must_ retry infinitely: the caller
 * cannot handle allocation failures.  This modifier is deprecated and no new
 * users should be added.
 *
 * __GFP_NORETRY: The VM implementation must not retry indefinitely.
 *
 * __GFP_MOVABLE: Flag that this page will be movable by the page migration
 * mechanism or reclaimed
 */
#define __GFP_WAIT	((__force gfp_t)___GFP_WAIT)	/* Can wait and reschedule? */
#define __GFP_HIGH	((__force gfp_t)___GFP_HIGH)	/* Should access emergency pools? */
#define __GFP_IO	((__force gfp_t)___GFP_IO)	/* Can start physical IO? */
#define __GFP_FS	((__force gfp_t)___GFP_FS)	/* Can call down to low-level FS? */
#define __GFP_COLD	((__force gfp_t)___GFP_COLD)	/* Cache-cold page required */
#define __GFP_NOWARN	((__force gfp_t)___GFP_NOWARN)	/* Suppress page allocation failure warning */
#define __GFP_REPEAT	((__force gfp_t)___GFP_REPEAT)	/* See above */
#define __GFP_NOFAIL	((__force gfp_t)___GFP_NOFAIL)	/* See above */
#define __GFP_NORETRY	((__force gfp_t)___GFP_NORETRY) /* See above */
#define __GFP_MEMALLOC	((__force gfp_t)___GFP_MEMALLOC)/* Allow access to emergency reserves */
#define __GFP_COMP	((__force gfp_t)___GFP_COMP)	/* Add compound page metadata */
#define __GFP_ZERO	((__force gfp_t)___GFP_ZERO)	/* Return zeroed page on success */
#define __GFP_NOMEMALLOC ((__force gfp_t)___GFP_NOMEMALLOC) /* Don't use emergency reserves.
							 * This takes precedence over the
							 * __GFP_MEMALLOC flag if both are
							 * set
							 */
#define __GFP_HARDWALL   ((__force gfp_t)___GFP_HARDWALL) /* Enforce hardwall cpuset memory allocs */
#define __GFP_THISNODE	((__force gfp_t)___GFP_THISNODE)/* No fallback, no policies */
#define __GFP_RECLAIMABLE ((__force gfp_t)___GFP_RECLAIMABLE) /* Page is reclaimable */
#define __GFP_NOTRACK	((__force gfp_t)___GFP_NOTRACK)  /* Don't track with kmemcheck */

#define __GFP_NO_KSWAPD	((__force gfp_t)___GFP_NO_KSWAPD)
#define __GFP_OTHER_NODE ((__force gfp_t)___GFP_OTHER_NODE) /* On behalf of other node */
#define __GFP_WRITE	((__force gfp_t)___GFP_WRITE)	/* Allocator intends to dirty page */

We will look at these flags in detail when we meet them in later code.

Below, taking GFP_KERNEL as an example, let's see how alloc_pages() allocates physical memory in the ideal case.

[Example of allocating physical memory]
page = alloc_pages(GFP_KERNEL, order);

The GFP_KERNEL allocation mask is defined in the gfp.h header and is a combination of mask bits. The commonly used combinations are:

/* This equals 0, but use constants in case they ever change */
#define GFP_NOWAIT	(GFP_ATOMIC & ~__GFP_HIGH)
/* GFP_ATOMIC means both !wait (__GFP_WAIT not set) and use emergency pool */
#define GFP_ATOMIC	(__GFP_HIGH)
#define GFP_NOIO	(__GFP_WAIT)
#define GFP_NOFS	(__GFP_WAIT | __GFP_IO)
#define GFP_KERNEL	(__GFP_WAIT | __GFP_IO | __GFP_FS)
#define GFP_TEMPORARY	(__GFP_WAIT | __GFP_IO | __GFP_FS | \
			 __GFP_RECLAIMABLE)
#define GFP_USER	(__GFP_WAIT | __GFP_IO | __GFP_FS | __GFP_HARDWALL)
#define GFP_HIGHUSER	(GFP_USER | __GFP_HIGHMEM)
#define GFP_HIGHUSER_MOVABLE	(GFP_HIGHUSER | __GFP_MOVABLE)
#define GFP_IOFS	(__GFP_IO | __GFP_FS)
#define GFP_TRANSHUGE	(GFP_HIGHUSER_MOVABLE | __GFP_COMP | \
			 __GFP_NOMEMALLOC | __GFP_NORETRY | __GFP_NOWARN | \
			 __GFP_NO_KSWAPD)

So the GFP_KERNEL allocation mask consists of the three flags __GFP_WAIT, __GFP_IO and __GFP_FS, which works out to 0xd0 in hexadecimal.
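
As a quick sanity check, this arithmetic can be replayed in user space. The sketch below is not kernel code; it simply copies the relevant bit values from the ___GFP_* definitions quoted above and prints the combined masks, including GFP_HIGHUSER_MOVABLE, which appears in a later example.

[user-space sketch]
#include <stdio.h>

/* Bit values copied from the ___GFP_* definitions quoted above;
 * the FLAG_* names are local stand-ins, not kernel identifiers. */
#define FLAG_HIGHMEM	0x02u
#define FLAG_MOVABLE	0x08u
#define FLAG_WAIT	0x10u
#define FLAG_IO		0x40u
#define FLAG_FS		0x80u
#define FLAG_HARDWALL	0x20000u

int main(void)
{
	unsigned int gfp_kernel = FLAG_WAIT | FLAG_IO | FLAG_FS;
	unsigned int gfp_highuser_movable = gfp_kernel | FLAG_HARDWALL |
					    FLAG_HIGHMEM | FLAG_MOVABLE;

	printf("GFP_KERNEL           = 0x%x\n", gfp_kernel);		/* 0xd0 */
	printf("GFP_HIGHUSER_MOVABLE = 0x%x\n", gfp_highuser_movable);	/* 0x200da */
	return 0;
}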

alloc_pages() eventually calls __alloc_pages_nodemask(), which is the core function of the buddy system.

/*
 * This is the 'heart' of the zoned buddy allocator.
 */
struct page *
__alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order,
			struct zonelist *zonelist, nodemask_t *nodemask)
{
	struct zoneref *preferred_zoneref;
	struct page *page = NULL;
	unsigned int cpuset_mems_cookie;
	int alloc_flags = ALLOC_WMARK_LOW|ALLOC_CPUSET|ALLOC_FAIR;
	gfp_t alloc_mask; /* The gfp_t that was actually used for allocation */
	struct alloc_context ac = {
		.high_zoneidx = gfp_zone(gfp_mask),
		.nodemask = nodemask,
		.migratetype = gfpflags_to_migratetype(gfp_mask),
	};

struct alloc_context is the data structure the buddy allocator functions use to carry the allocation parameters. gfp_zone() computes the zone index from the allocation mask and stores it in the high_zoneidx member.

static inline enum zone_type gfp_zone(gfp_t flags)
{
	enum zone_type z;
	int bit = (__force int) (flags & GFP_ZONEMASK);

	z = (GFP_ZONE_TABLE >> (bit * ZONES_SHIFT)) &
					 ((1 << ZONES_SHIFT) - 1);
	VM_BUG_ON((GFP_ZONE_BAD >> bit) & 1);
	return z;
}

gfp_zone() relies on the GFP_ZONEMASK, GFP_ZONE_TABLE and ZONES_SHIFT macros. They are defined as follows:

#define GFP_ZONEMASK	(__GFP_DMA|__GFP_HIGHMEM|__GFP_DMA32|__GFP_MOVABLE)
#define GFP_ZONE_TABLE ( \
	(ZONE_NORMAL << 0 * ZONES_SHIFT)				      \
	| (OPT_ZONE_DMA << ___GFP_DMA * ZONES_SHIFT)			      \
	| (OPT_ZONE_HIGHMEM << ___GFP_HIGHMEM * ZONES_SHIFT)		      \
	| (OPT_ZONE_DMA32 << ___GFP_DMA32 * ZONES_SHIFT)		      \
	| (ZONE_NORMAL << ___GFP_MOVABLE * ZONES_SHIFT)			      \
	| (OPT_ZONE_DMA << (___GFP_MOVABLE | ___GFP_DMA) * ZONES_SHIFT)	      \
	| (ZONE_MOVABLE << (___GFP_MOVABLE | ___GFP_HIGHMEM) * ZONES_SHIFT)   \
	| (OPT_ZONE_DMA32 << (___GFP_MOVABLE | ___GFP_DMA32) * ZONES_SHIFT)   \
)
#if MAX_NR_ZONES < 2
#define ZONES_SHIFT 0
#elif MAX_NR_ZONES <= 2
#define ZONES_SHIFT 1
#elif MAX_NR_ZONES <= 4
#define ZONES_SHIFT 2
#else
#error ZONES_SHIFT -- too many zones configured adjust calculation
#endif

GFP_ZONEMASK covers the lowest 4 bits of the allocation mask. On the ARM Vexpress platform there are only two populated zones, ZONE_NORMAL and ZONE_HIGHMEM, but the computation of __MAX_NR_ZONES must also count ZONE_MOVABLE, so MAX_NR_ZONES is 3 and ZONES_SHIFT is 2; GFP_ZONE_TABLE then evaluates to 0x200010.

In the example above, feeding the GFP_KERNEL mask (0xd0) into gfp_zone() yields 0, i.e. high_zoneidx is 0.
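
The table lookup is easy to replay outside the kernel. Here is a minimal sketch, hard-coding the ARM Vexpress values derived above (ZONES_SHIFT = 2, GFP_ZONE_TABLE = 0x200010):

[user-space sketch]
#include <stdio.h>

/* Values derived in the text for ARM Vexpress:
 * ZONE_NORMAL = 0, ZONE_HIGHMEM = 1, ZONE_MOVABLE = 2. */
#define ZONES_SHIFT	2
#define GFP_ZONEMASK	0x0fu		/* low 4 bits of the mask */
#define GFP_ZONE_TABLE	0x200010ul	/* computed in the text */

static int gfp_zone(unsigned int flags)
{
	int bit = flags & GFP_ZONEMASK;

	return (GFP_ZONE_TABLE >> (bit * ZONES_SHIFT)) &
	       ((1 << ZONES_SHIFT) - 1);
}

int main(void)
{
	printf("gfp_zone(0xd0)    = %d\n", gfp_zone(0xd0));	/* GFP_KERNEL -> 0 (ZONE_NORMAL) */
	printf("gfp_zone(0x200da) = %d\n", gfp_zone(0x200da));	/* GFP_HIGHUSER_MOVABLE -> 2 (ZONE_MOVABLE) */
	return 0;
}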

In addition, the gfpflags_to_migratetype() call in __alloc_pages_nodemask()'s initialization of struct alloc_context converts the gfp_mask into a MIGRATE_TYPES value: for GFP_KERNEL it is MIGRATE_UNMOVABLE; for a GFP_HIGHUSER_MOVABLE mask it would be MIGRATE_MOVABLE.

/* Convert GFP flags to their corresponding migrate type */
static inline int gfpflags_to_migratetype(const gfp_t gfp_flags)
{
	WARN_ON((gfp_flags & GFP_MOVABLE_MASK) == GFP_MOVABLE_MASK);

	if (unlikely(page_group_by_mobility_disabled))
		return MIGRATE_UNMOVABLE;

	/* Group based on mobility */
	return (((gfp_flags & __GFP_MOVABLE) != 0) << 1) |
		((gfp_flags & __GFP_RECLAIMABLE) != 0);
}
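
A minimal user-space replay of this grouping logic, assuming the kernel's enum order MIGRATE_UNMOVABLE = 0, MIGRATE_RECLAIMABLE = 1, MIGRATE_MOVABLE = 2:

[user-space sketch]
#include <stdio.h>

#define FLAG_MOVABLE		0x08u		/* ___GFP_MOVABLE */
#define FLAG_RECLAIMABLE	0x80000u	/* ___GFP_RECLAIMABLE */

/* Same grouping as gfpflags_to_migratetype():
 * bit 1 = movable, bit 0 = reclaimable. */
static int to_migratetype(unsigned int gfp)
{
	return (((gfp & FLAG_MOVABLE) != 0) << 1) |
	       ((gfp & FLAG_RECLAIMABLE) != 0);
}

int main(void)
{
	printf("GFP_KERNEL (0xd0)              -> %d (MIGRATE_UNMOVABLE)\n",
	       to_migratetype(0xd0));
	printf("GFP_HIGHUSER_MOVABLE (0x200da) -> %d (MIGRATE_MOVABLE)\n",
	       to_migratetype(0x200da));
	return 0;
}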

Let's continue with __alloc_pages_nodemask().

[__alloc_pages_nodemask]
retry_cpuset:
	cpuset_mems_cookie = read_mems_allowed_begin();

	/* We set it here, as __alloc_pages_slowpath might have changed it */
	ac.zonelist = zonelist;
	/* The preferred zone is used for statistics later */
	preferred_zoneref = first_zones_zonelist(ac.zonelist, ac.high_zoneidx,
				ac.nodemask ? : &cpuset_current_mems_allowed,
				&ac.preferred_zone);
	if (!ac.preferred_zone)
		goto out;
	ac.classzone_idx = zonelist_zone_idx(preferred_zoneref);

	/* First allocation attempt */
	alloc_mask = gfp_mask|__GFP_HARDWALL;
	page = get_page_from_freelist(alloc_mask, order, alloc_flags, &ac);
	if (unlikely(!page)) {
		/*
		 * Runtime PM, block IO and its error handling path
		 * can deadlock because I/O on the device might not
		 * complete.
		 */
		alloc_mask = memalloc_noio_flags(gfp_mask);

		page = __alloc_pages_slowpath(alloc_mask, order, &ac);
	}

	if (kmemcheck_enabled && page)
		kmemcheck_pagealloc_alloc(page, order, gfp_mask);

	trace_mm_page_alloc(page, order, alloc_mask, ac.migratetype);

out:
	/*
	 * When updating a task's mems_allowed, it is possible to race with
	 * parallel threads in such a way that an allocation can fail while
	 * the mask is being updated. If a page allocation is about to fail,
	 * check if the cpuset changed during allocation and if so, retry.
	 */
	if (unlikely(!page && read_mems_allowed_retry(cpuset_mems_cookie)))
		goto retry_cpuset;

	return page;

get_page_from_freelist() first attempts the allocation; if it fails, __alloc_pages_slowpath() is called, which handles many special situations. Here we assume the ideal case in which get_page_from_freelist() succeeds.

/*
 * get_page_from_freelist goes through the zonelist trying to allocate
 * a page.
 */
static struct page *
get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
						const struct alloc_context *ac)
{
	struct zonelist *zonelist = ac->zonelist;
	struct zoneref *z;
	struct page *page = NULL;
	struct zone *zone;
	nodemask_t *allowednodes = NULL;/* zonelist_cache approximation */
	int zlc_active = 0;		/* set if using zonelist_cache */
	int did_zlc_setup = 0;		/* just call zlc_setup() one time */
	bool consider_zone_dirty = (alloc_flags & ALLOC_WMARK_LOW) &&
				(gfp_mask & __GFP_WRITE);
	int nr_fair_skipped = 0;
	bool zonelist_rescan;

zonelist_scan:
	zonelist_rescan = false;

	/*
	 * Scan zonelist, looking for a zone with enough free.
	 * See also __cpuset_node_allowed() comment in kernel/cpuset.c.
	 */
	for_each_zone_zonelist_nodemask(zone, z, zonelist, ac->high_zoneidx,
								ac->nodemask) {

get_page_from_freelist() must first decide which zone memory can be allocated from. The for_each_zone_zonelist_nodemask macro scans the memory node's zonelist looking for a zone suitable for this allocation.

/**
 * for_each_zone_zonelist_nodemask - helper macro to iterate over valid zones in a zonelist at or below a given zone index and within a nodemask
 * @zone - The current zone in the iterator
 * @z - The current pointer within zonelist->zones being iterated
 * @zlist - The zonelist being iterated
 * @highidx - The zone index of the highest zone to return
 * @nodemask - Nodemask allowed by the allocator
 *
 * This iterator iterates though all zones at or below a given zone index and
 * within a given nodemask
 */
#define for_each_zone_zonelist_nodemask(zone, z, zlist, highidx, nodemask) \
	for (z = first_zones_zonelist(zlist, highidx, nodemask, &zone);	\
		zone;							\
		z = next_zones_zonelist(++z, highidx, nodemask),	\
			zone = zonelist_zone(z))

for_each_zone_zonelist_nodemask first uses first_zones_zonelist() to search starting from a given zone index; that index, highidx, is the value obtained earlier from gfp_zone().

/**
 * first_zones_zonelist - Returns the first zone at or below highest_zoneidx within the allowed nodemask in a zonelist
 * @zonelist - The zonelist to search for a suitable zone
 * @highest_zoneidx - The zone index of the highest zone to return
 * @nodes - An optional nodemask to filter the zonelist with
 * @zone - The first suitable zone found is returned via this parameter
 *
 * This function returns the first zone at or below a given zone index that is
 * within the allowed nodemask. The zoneref returned is a cursor that can be
 * used to iterate the zonelist with next_zones_zonelist by advancing it by
 * one before calling.
 */
static inline struct zoneref *first_zones_zonelist(struct zonelist *zonelist,
					enum zone_type highest_zoneidx,
					nodemask_t *nodes,
					struct zone **zone)
{
	struct zoneref *z = next_zones_zonelist(zonelist->_zonerefs,
							highest_zoneidx, nodes);
	*zone = zonelist_zone(z);
	return z;
}

first_zones_zonelist() calls next_zones_zonelist() to compute the zoneref and finally returns the zone data structure.

struct zoneref *next_zones_zonelist(struct zoneref *z,
					enum zone_type highest_zoneidx,
					nodemask_t *nodes)
{
	/*
	 * Find the next suitable zone to use for the allocation.
	 * Only filter based on nodemask if it's set
	 */
	if (likely(nodes == NULL))
		while (zonelist_zone_idx(z) > highest_zoneidx)
			z++;
	else
		while (zonelist_zone_idx(z) > highest_zoneidx ||
				(z->zone && !zref_in_nodemask(z, nodes)))
			z++;

	return z;
}

The core of zone selection is next_zones_zonelist(), where highest_zoneidx is the value gfp_zone() computed from the allocation mask. A zonelist contains a zoneref array; struct zoneref holds a zone pointer to a zone data structure and a zone_index member giving that zone's index. The kernel initializes this array during boot, in build_zonelists_node(). On the ARM Vexpress platform, the relationship between the zone types, the _zonerefs[] array and the zone indexes is:

ZONE_HIGHMEM	_zonerefs[0]->zone_index = 1
ZONE_NORMAL	_zonerefs[1]->zone_index = 0

_zonerefs[0] is ZONE_HIGHMEM, whose zone_index is 1; _zonerefs[1] is ZONE_NORMAL, whose zone_index is 0. In other words, the zone-based design prefers ZONE_HIGHMEM when allocating physical pages, because ZONE_HIGHMEM comes before ZONE_NORMAL in the zonelist.

Back to our earlier example: gfp_zone(GFP_KERNEL) returns 0, so highest_zoneidx is 0, while this memory node's first zone is ZONE_HIGHMEM with zone_index 1. next_zones_zonelist() therefore steps past it with z++, and first_zones_zonelist() ends up returning ZONE_NORMAL. The for_each_zone_zonelist_nodemask() loop can then only ever visit this single zone, ZONE_NORMAL.

Consider another example: the GFP_HIGHUSER_MOVABLE allocation mask, which includes __GFP_HIGHMEM. Which zone does next_zones_zonelist() return in that case?

GFP_HIGHUSER_MOVABLE is 0x200da, so gfp_zone(GFP_HIGHUSER_MOVABLE) returns 2, i.e. highest_zoneidx is 2, while this memory node's first zone is ZONE_HIGHMEM with zone_index 1:

  • In first_zones_zonelist(), since the first zone's zone_index is less than highest_zoneidx, ZONE_HIGHMEM is returned.

  • In for_each_zone_zonelist_nodemask(), next_zones_zonelist(++z, highidx, nodemask) then returns ZONE_NORMAL.

  • So both ZONE_HIGHMEM and ZONE_NORMAL are traversed, ZONE_HIGHMEM first and ZONE_NORMAL after it.

To correctly understand the behavior of the for_each_zone_zonelist_nodemask() macro, two things need to be understood:

  • How highest_zoneidx is computed, i.e. how the allocation mask is parsed; that is gfp_zone()'s job.
  • Every memory node has a struct pglist_data whose node_zonelists member is a struct zonelist; the zonelist contains a struct zoneref _zonerefs[] array describing the zones. ZONE_HIGHMEM comes first, with _zonerefs[0]->zone_index = 1; ZONE_NORMAL comes after it, with _zonerefs[1]->zone_index = 0.

This design can feel convoluted, but it is the foundation for correctly understanding zone-based physical page allocation. (Honestly, zone selection is rather marvelous~)
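
To make the traversal concrete, here is a small user-space simulation of the _zonerefs[] walk under the ARM Vexpress layout described above; the end-of-array sentinel handling is simplified compared with the kernel:

[user-space sketch]
#include <stdio.h>

struct zoneref {
	const char *name;	/* NULL marks the end of the array */
	int zone_index;
};

/* Layout described in the text: ZONE_HIGHMEM first, ZONE_NORMAL second. */
static struct zoneref zonerefs[] = {
	{ "ZONE_HIGHMEM", 1 },
	{ "ZONE_NORMAL",  0 },
	{ NULL, 0 },
};

/* Same skip rule as next_zones_zonelist(), without the nodemask filter. */
static struct zoneref *next_zones_zonelist(struct zoneref *z, int highest_zoneidx)
{
	while (z->name && z->zone_index > highest_zoneidx)
		z++;
	return z;
}

static void walk(int highest_zoneidx)
{
	printf("highest_zoneidx=%d:", highest_zoneidx);
	for (struct zoneref *z = next_zones_zonelist(zonerefs, highest_zoneidx);
	     z->name;
	     z = next_zones_zonelist(++z, highest_zoneidx))
		printf(" %s", z->name);
	printf("\n");
}

int main(void)
{
	walk(0);	/* GFP_KERNEL: visits only ZONE_NORMAL */
	walk(2);	/* GFP_HIGHUSER_MOVABLE: ZONE_HIGHMEM, then ZONE_NORMAL */
	return 0;
}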

The first_zones_zonelist() call in __alloc_pages_nodemask() computes preferred_zoneref and stores its zone index in the ac.classzone_idx variable, which the kswapd kernel thread will also use later. With GFP_KERNEL as the allocation mask, preferred_zone is ZONE_NORMAL and ac.classzone_idx is 0.

Back in get_page_from_freelist(): now that for_each_zone_zonelist_nodemask() has found the zones the allocation may come from, some necessary checks follow.

    [get_page_from_freelist()]
    ....
    		if (cpusets_enabled() &&
    			(alloc_flags & ALLOC_CPUSET) &&
    			!cpuset_zone_allowed(zone, gfp_mask))
    				continue;
    		/*
    		 * Distribute pages in proportion to the individual
    		 * zone size to ensure fair page aging.  The zone a
    		 * page was allocated in should have no effect on the
    		 * time the page has in memory before being reclaimed.
    		 */
    		if (alloc_flags & ALLOC_FAIR) {
    			if (!zone_local(ac->preferred_zone, zone))
    				break;
    			if (test_bit(ZONE_FAIR_DEPLETED, &zone->flags)) {
    				nr_fair_skipped++;
    				continue;
    			}
    		}
    		/*
    		 * When allocating a page cache page for writing, we
    		 * want to get it from a zone that is within its dirty
    		 * limit, such that no single zone holds more than its
    		 * proportional share of globally allowed dirty pages.
    		 * The dirty limits take into account the zone's
    		 * lowmem reserves and high watermark so that kswapd
    		 * should be able to balance it without having to
    		 * write pages from its LRU list.
    		 *
    		 * This may look like it could increase pressure on
    		 * lower zones by failing allocations in higher zones
    		 * before they are full.  But the pages that do spill
    		 * over are limited as the lower zones are protected
    		 * by this very same mechanism.  It should not become
    		 * a practical burden to them.
    		 *
    		 * XXX: For now, allow allocations to potentially
    		 * exceed the per-zone dirty limit in the slowpath
    		 * (ALLOC_WMARK_LOW unset) before going into reclaim,
    		 * which is important when on a NUMA setup the allowed
    		 * zones are together not big enough to reach the
    		 * global limit.  The proper fix for these situations
    		 * will require awareness of zones in the
    		 * dirty-throttling and the flusher threads.
    		 */
    		if (consider_zone_dirty && !zone_dirty_ok(zone))
    			continue;
    .....
    

The code below checks whether the current zone's watermark level is adequate.

    [get_page_from_freelist()]
    ...
    		mark = zone->watermark[alloc_flags & ALLOC_WMARK_MASK];
    		if (!zone_watermark_ok(zone, order, mark,
    				       ac->classzone_idx, alloc_flags)) {
    			...
    			ret = zone_reclaim(zone, gfp_mask, order);
    			switch (ret) {
    			case ZONE_RECLAIM_NOSCAN:
    				/* did not scan */
    				continue;
    			case ZONE_RECLAIM_FULL:
    				/* scanned but unreclaimable */
    				continue;
    			default:
    				/* did we reclaim enough */
    				if (zone_watermark_ok(zone, order, mark,
    						ac->classzone_idx, alloc_flags))
    					goto try_this_zone;
    
    				/*
    				 * Failed to reclaim enough to meet watermark.
    				 * Only mark the zone full if checking the min
    				 * watermark or if we failed to reclaim just
    				 * 1<<order pages or else the page allocator
    				 * fastpath will prematurely mark zones full
    				 * when the watermark is between the low and
    				 * min watermarks.
    				 */
    				if (((alloc_flags & ALLOC_WMARK_MASK) == ALLOC_WMARK_MIN) ||
    				    ret == ZONE_RECLAIM_SOME)
    					goto this_zone_full;
    
    				continue;
    			}
    		}
    
    try_this_zone:
    		page = buffered_rmqueue(ac->preferred_zone, zone, order,
    						gfp_mask, ac->migratetype);
    		if (page) {
    			if (prep_new_page(page, order, gfp_mask, alloc_flags))
    				goto try_this_zone;
    			return page;
    		}
    ...
    

The zone data structure has a watermark member that records the watermark levels. The system defines three watermarks: WMARK_MIN, WMARK_LOW and WMARK_HIGH. The watermarks are computed in __setup_per_zone_wmarks().

    [mm/page_alloc.c]
    static void __setup_per_zone_wmarks(void)
    {
    	unsigned long pages_min = min_free_kbytes >> (PAGE_SHIFT - 10);
    	unsigned long lowmem_pages = 0;
    	struct zone *zone;
    	unsigned long flags;
    
    	/* Calculate total number of !ZONE_HIGHMEM pages */
    	for_each_zone(zone) {
    		if (!is_highmem(zone))
    			lowmem_pages += zone->managed_pages;
    	}
    
    	for_each_zone(zone) {
    		u64 tmp;
    
    		spin_lock_irqsave(&zone->lock, flags);
    		tmp = (u64)pages_min * zone->managed_pages;
    		do_div(tmp, lowmem_pages);
    		if (is_highmem(zone)) {
    			/*
    			 * __GFP_HIGH and PF_MEMALLOC allocations usually don't
    			 * need highmem pages, so cap pages_min to a small
    			 * value here.
    			 *
    			 * The WMARK_HIGH-WMARK_LOW and (WMARK_LOW-WMARK_MIN)
    			 * deltas controls asynch page reclaim, and so should
    			 * not be capped for highmem.
    			 */
    			unsigned long min_pages;
    
    			min_pages = zone->managed_pages / 1024;
    			min_pages = clamp(min_pages, SWAP_CLUSTER_MAX, 128UL);
    			zone->watermark[WMARK_MIN] = min_pages;
    		} else {
    			/*
    			 * If it's a lowmem zone, reserve a number of pages
    			 * proportionate to the zone's size.
    			 */
    			zone->watermark[WMARK_MIN] = tmp;
    		}
    
    		zone->watermark[WMARK_LOW]  = min_wmark_pages(zone) + (tmp >> 2);
    		zone->watermark[WMARK_HIGH] = min_wmark_pages(zone) + (tmp >> 1);
    
    		__mod_zone_page_state(zone, NR_ALLOC_BATCH,
    			high_wmark_pages(zone) - low_wmark_pages(zone) -
    			atomic_long_read(&zone->vm_stat[NR_ALLOC_BATCH]));
    
    		setup_zone_migrate_reserve(zone);
    		spin_unlock_irqrestore(&zone->lock, flags);
    	}
    
    	/* update totalreserve_pages */
    	calculate_totalreserve_pages();
    }
    

Computing the watermarks relies on min_free_kbytes, which is derived at boot from the number of free pages in the system, in init_per_zone_wmark_min(). Once the system is up it can also be tuned through sysfs, at /proc/sys/vm/min_free_kbytes. The watermark formula is not complicated; the results end up in each zone's watermark array, where the buddy system and the kswapd kernel thread use them later.
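
For a lowmem zone the formula reduces to a few lines. Here is a sketch with made-up inputs (min_free_kbytes = 2884 and a single lowmem zone of 192000 pages are assumptions for illustration only):

[user-space sketch]
#include <stdio.h>

int main(void)
{
	/* Assumed inputs, chosen for illustration: 4KB pages, one lowmem
	 * zone of 192000 pages, min_free_kbytes = 2884. */
	unsigned long min_free_kbytes = 2884;
	unsigned long page_shift = 12;
	unsigned long managed_pages = 192000;
	unsigned long lowmem_pages = 192000;	/* total !ZONE_HIGHMEM pages */

	unsigned long pages_min = min_free_kbytes >> (page_shift - 10);
	/* tmp is this zone's proportional share of pages_min. */
	unsigned long tmp = pages_min * managed_pages / lowmem_pages;

	unsigned long wmark_min  = tmp;			/* lowmem zone case */
	unsigned long wmark_low  = wmark_min + (tmp >> 2);
	unsigned long wmark_high = wmark_min + (tmp >> 1);

	/* Prints WMARK_MIN=721 WMARK_LOW=901 WMARK_HIGH=1081 pages. */
	printf("WMARK_MIN=%lu WMARK_LOW=%lu WMARK_HIGH=%lu pages\n",
	       wmark_min, wmark_low, wmark_high);
	return 0;
}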

Back in get_page_from_freelist(), the WMARK_LOW watermark value is read into the variable mark, and zone_watermark_ok() decides whether the zone's free pages satisfy the WMARK_LOW watermark.

    [get_page_from_freelist->zone_watermark_ok->__zone_watermark_ok]
    
    /*
     * Return true if free pages are above 'mark'. This takes into account the order
     * of the allocation.
     */
    static bool __zone_watermark_ok(struct zone *z, unsigned int order,
    			unsigned long mark, int classzone_idx, int alloc_flags,
    			long free_pages)
    {
    	/* free_pages may go negative - that's OK */
    	long min = mark;
    	int o;
    	long free_cma = 0;
    
    	free_pages -= (1 << order) - 1;
    	if (alloc_flags & ALLOC_HIGH)
    		min -= min / 2;
    	if (alloc_flags & ALLOC_HARDER)
    		min -= min / 4;
    #ifdef CONFIG_CMA
    	/* If allocation can't use CMA areas don't use free CMA pages */
    	if (!(alloc_flags & ALLOC_CMA))
    		free_cma = zone_page_state(z, NR_FREE_CMA_PAGES);
    #endif
    
    	if (free_pages - free_cma <= min + z->lowmem_reserve[classzone_idx])
    		return false;
    	for (o = 0; o < order; o++) {
    		/* At the next order, this order's pages become unavailable */
    		free_pages -= z->free_area[o].nr_free << o;
    
    		/* Require fewer higher order pages to be free */
    		min >>= 1;
    
    		if (free_pages <= min)
    			return false;
    	}
    	return true;
    }
    

The parameter z is the zone to check, order is the order of the requested allocation, and mark is the watermark to check against. Kernel paths that allocate physical pages normally check against the WMARK_LOW watermark, while the kswapd page-reclaim thread checks against WMARK_HIGH. That mismatch makes the zones of one memory node age at different rates, and the kernel has grown quite a few odd patches to deal with it; we will come back to this problem later.

__zone_watermark_ok() first checks whether the zone's free pages fall below the sum of the given watermark and the zone's lowmem reserve (lowmem_reserve). It returns true if the free pages are above the watermark, false otherwise.
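
The order loop is the interesting part: a zone can have plenty of free pages overall yet still fail a high-order check. A simplified user-space replay (ALLOC_HIGH/ALLOC_HARDER/CMA handling and lowmem_reserve omitted; the free-list counts are made up):

[user-space sketch]
#include <stdio.h>
#include <stdbool.h>

#define MAX_ORDER 11

/* Simplified __zone_watermark_ok(): lowmem_reserve assumed 0. */
static bool watermark_ok(long free_pages, const long nr_free[MAX_ORDER],
			 unsigned int order, long mark)
{
	long min = mark;

	free_pages -= (1 << order) - 1;
	if (free_pages <= min)
		return false;
	for (unsigned int o = 0; o < order; o++) {
		/* Pages of order o cannot satisfy an order-'order' request. */
		free_pages -= nr_free[o] << o;
		min >>= 1;	/* require fewer higher-order pages free */
		if (free_pages <= min)
			return false;
	}
	return true;
}

int main(void)
{
	/* Hypothetical zone: 1200 free pages, mostly small blocks. */
	long nr_free[MAX_ORDER] = { 600, 200, 40, 5 };
	long free_pages = 600*1 + 200*2 + 40*4 + 5*8;	/* = 1200 */

	printf("order 0 ok: %d\n", watermark_ok(free_pages, nr_free, 0, 901)); /* 1 */
	printf("order 3 ok: %d\n", watermark_ok(free_pages, nr_free, 3, 901)); /* 0 */
	return 0;
}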

Back in get_page_from_freelist(): if the zone's free pages turn out to be below the WMARK_LOW watermark, zone_reclaim() is called to reclaim pages. Here we assume zone_watermark_ok() finds free pages plentiful, so the next step is to call buffered_rmqueue() to allocate physical pages from the buddy system.

    [__alloc_pages_nodemask()->get_page_from_freelist()->buffered_rmqueue()]
    /*
     * Allocate a page from the given zone. Use pcplists for order-0 allocations.
     */
    static inline
    struct page *buffered_rmqueue(struct zone *preferred_zone,
    			struct zone *zone, unsigned int order,
    			gfp_t gfp_flags, int migratetype)
    {
    	unsigned long flags;
    	struct page *page;
    	bool cold = ((gfp_flags & __GFP_COLD) != 0);
    
    	if (likely(order == 0)) {
    		struct per_cpu_pages *pcp;
    		struct list_head *list;
    
    		local_irq_save(flags);
    		pcp = &this_cpu_ptr(zone->pageset)->pcp;
    		list = &pcp->lists[migratetype];
    		if (list_empty(list)) {
    			pcp->count += rmqueue_bulk(zone, 0,
    					pcp->batch, list,
    					migratetype, cold);
    			if (unlikely(list_empty(list)))
    				goto failed;
    		}
    
    		if (cold)
    			page = list_entry(list->prev, struct page, lru);
    		else
    			page = list_entry(list->next, struct page, lru);
    
    		list_del(&page->lru);
    		pcp->count--;
    	} else {
    		if (unlikely(gfp_flags & __GFP_NOFAIL)) {
    			/*
    			 * __GFP_NOFAIL is not to be used in new code.
    			 *
    			 * All __GFP_NOFAIL callers should be fixed so that they
    			 * properly detect and handle allocation failures.
    			 *
    			 * We most definitely don't want callers attempting to
    			 * allocate greater than order-1 page units with
    			 * __GFP_NOFAIL.
    			 */
    			WARN_ON_ONCE(order > 1);
    		}
    		spin_lock_irqsave(&zone->lock, flags);
    		page = __rmqueue(zone, order, migratetype);
    		spin_unlock(&zone->lock);
    		if (!page)
    			goto failed;
    		__mod_zone_freepage_state(zone, -(1 << order),
    					  get_freepage_migratetype(page));
    	}
    
    	__mod_zone_page_state(zone, NR_ALLOC_BATCH, -(1 << order));
    	if (atomic_long_read(&zone->vm_stat[NR_ALLOC_BATCH]) <= 0 &&
    	    !test_bit(ZONE_FAIR_DEPLETED, &zone->flags))
    		set_bit(ZONE_FAIR_DEPLETED, &zone->flags);
    
    	__count_zone_vm_events(PGALLOC, zone, 1 << order);
    	zone_statistics(preferred_zone, zone, gfp_flags);
    	local_irq_restore(flags);
    
    	VM_BUG_ON_PAGE(bad_range(zone, page), page);
    	return page;
    
    failed:
    	local_irq_restore(flags);
    	return NULL;
    }
    

Here the code takes one of two paths depending on order. When order is 0, i.e. a single physical page is requested, the allocation is served from the zone->pageset per-CPU lists; when order is greater than 0, it comes from the buddy system. We only follow the order > 0 case, which eventually reaches __rmqueue_smallest().
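
Before moving on, note the one subtlety in the order-0 path: the __GFP_COLD flag. Hot allocations take from the head of the per-CPU list, cold ones from the tail, mirroring the list->next vs list->prev choice in buffered_rmqueue() above. A toy sketch of just that discipline (an array stands in for the kernel's list_head):

[user-space sketch]
#include <stdio.h>
#include <stdbool.h>

/* Toy per-CPU page list: index 0 is the hot end (list head),
 * index count-1 is the cold end (list tail). */
struct pcp_list {
	int pages[8];
	int count;
};

static int take_page(struct pcp_list *pcp, bool cold)
{
	int i = cold ? pcp->count - 1 : 0;
	int page = pcp->pages[i];

	if (!cold)	/* removing the head shifts the rest up */
		for (int j = 0; j < pcp->count - 1; j++)
			pcp->pages[j] = pcp->pages[j + 1];
	pcp->count--;
	return page;
}

int main(void)
{
	struct pcp_list pcp = { { 101, 102, 103, 104 }, 4 };

	printf("hot  request -> page %d\n", take_page(&pcp, false)); /* 101 */
	printf("cold request -> page %d\n", take_page(&pcp, true));  /* 104 */
	return 0;
}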

    [get_page_from_freelist()->buffered_rmqueue()->__rmqueue()->__rmqueue_smallest()]
    
    /*
     * Go through the free lists for the given migratetype and remove
     * the smallest available page from the freelists
     */
    static inline
    struct page *__rmqueue_smallest(struct zone *zone, unsigned int order,
    						int migratetype)
    {
    	unsigned int current_order;
    	struct free_area *area;
    	struct page *page;
    
    	/* Find a page of the appropriate size in the preferred list */
    	for (current_order = order; current_order < MAX_ORDER; ++current_order) {
    		area = &(zone->free_area[current_order]);
    		if (list_empty(&area->free_list[migratetype]))
    			continue;
    
    		page = list_entry(area->free_list[migratetype].next,
    							struct page, lru);
    		list_del(&page->lru);
    		rmv_page_order(page);
    		area->nr_free--;
    		expand(zone, page, order, current_order, area, migratetype);
    		set_freepage_migratetype(page, migratetype);
    		return page;
    	}
    
    	return NULL;
    }
    

In __rmqueue_smallest(), the search starts at order and walks up the zone's free lists. If the list for the requested migratetype in the free_area of the current order holds no free block, the search moves on to the next higher order.

Why is that? Because at system startup free pages are placed on the MAX_ORDER-1 lists as far as possible; you can see hints of this right after boot with 'cat /proc/pagetypeinfo'. Once some order's free list for the requested migratetype holds a free block, the block is taken off the list and expand() is called to "cut the cake": the block that was taken is usually larger than what was requested, and after cutting, the leftover pieces have to be put back into the buddy system.

expand() is what implements the cake cutting. Its parameter high is current_order, which is normally larger than the requested order. On each pass area is decremented, which amounts to stepping down one order, and finally list_add() puts the remaining half onto the free list one order lower. (A standalone simulation follows the kernel code below.)

    [get_page_from_freelist()->buffered_rmqueue()->__rmqueue()->__rmqueue_smallest()->expand()]
    /*
     * The order of subdivision here is critical for the IO subsystem.
     * Please do not alter this order without good reasons and regression
     * testing. Specifically, as large blocks of memory are subdivided,
     * the order in which smaller blocks are delivered depends on the order
     * they're subdivided in this function. This is the primary factor
     * influencing the order in which pages are delivered to the IO
     * subsystem according to empirical testing, and this is also justified
     * by considering the behavior of a buddy system containing a single
     * large block of memory acted on by a series of small allocations.
     * This behavior is a critical factor in sglist merging's success.
     *
     * -- nyc
     */
    static inline void expand(struct zone *zone, struct page *page,
    	int low, int high, struct free_area *area,
    	int migratetype)
    {
    	unsigned long size = 1 << high;
    
    	while (high > low) {
    		area--;
    		high--;
    		size >>= 1;
    		VM_BUG_ON_PAGE(bad_range(zone, &page[size]), &page[size]);
    
    		if (IS_ENABLED(CONFIG_DEBUG_PAGEALLOC) &&
    			debug_guardpage_enabled() &&
    			high < debug_guardpage_minorder()) {
    			/*
    			 * Mark as guard pages (or page), that will allow to
    			 * merge back to allocator when buddy will be freed.
    			 * Corresponding page table entries will not be touched,
    			 * pages will stay not present in virtual address space
    			 */
    			set_page_guard(zone, &page[size], high, migratetype);
    			continue;
    		}
    		list_add(&page[size].lru, &area->free_list[migratetype]);
    		area->nr_free++;
    		set_page_order(&page[size], high);
    	}
    }
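
Putting __rmqueue_smallest() and expand() together: the allocator searches upward for the first non-empty order, then hands the leftover halves back down one order at a time. Here is a user-space simulation with made-up free-list counts:

[user-space sketch]
#include <stdio.h>

#define MAX_ORDER 11

/* Free blocks per order for one migratetype (made-up: only order 5 is
 * populated, as large blocks dominate shortly after boot). */
static int nr_free[MAX_ORDER] = { [5] = 1 };

/* Mimic __rmqueue_smallest() + expand(): find the smallest order with
 * a free block, then return the leftover halves to the lower lists. */
static int alloc_order(int order)
{
	for (int current_order = order; current_order < MAX_ORDER; current_order++) {
		if (nr_free[current_order] == 0)
			continue;
		nr_free[current_order]--;	/* take the whole block */
		for (int high = current_order; high > order; high--) {
			/* put the upper half back one order lower */
			nr_free[high - 1]++;
			printf("  put back one order-%d block\n", high - 1);
		}
		return current_order;
	}
	return -1;	/* nothing large enough */
}

int main(void)
{
	printf("request order 2, found block of order %d\n", alloc_order(2));
	for (int o = 0; o < 6; o++)
		printf("order %d: %d free\n", o, nr_free[o]);
	return 0;
}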
    

Once the requested pages have been split off, __rmqueue() returns the struct page of the first page of the block. Back in buffered_rmqueue(), zone_statistics() finally performs some statistics bookkeeping.

Back in get_page_from_freelist(): the page still has to pass some interesting checks in prep_new_page() before it can finally leave the factory.

    [__alloc_pages_nodemask()->get_page_from_freelist()->prep_new_page()->check_new_page()]
    /*
     * This page is about to be returned from the page allocator
     */
    static inline int check_new_page(struct page *page)
    {
    	const char *bad_reason = NULL;
    	unsigned long bad_flags = 0;
    
    	if (unlikely(page_mapcount(page)))
    		bad_reason = "nonzero mapcount";
    	if (unlikely(page->mapping != NULL))
    		bad_reason = "non-NULL mapping";
    	if (unlikely(atomic_read(&page->_count) != 0))
    		bad_reason = "nonzero _count";
    	if (unlikely(page->flags & PAGE_FLAGS_CHECK_AT_PREP)) {
    		bad_reason = "PAGE_FLAGS_CHECK_AT_PREP flag set";
    		bad_flags = PAGE_FLAGS_CHECK_AT_PREP;
    	}
    #ifdef CONFIG_MEMCG
    	if (unlikely(page->mem_cgroup))
    		bad_reason = "page still charged to cgroup";
    #endif
    	if (unlikely(bad_reason)) {
    		bad_page(page, bad_reason, bad_flags);
    		return 1;
    	}
    	return 0;
    }
    

check_new_page() mainly performs the following checks:

  • A freshly allocated page must have a struct page _mapcount of 0.
  • page->mapping must be NULL.
  • _count must be 0 here. Note that a page returned by alloc_pages() has _count equal to 1, but at this point it is still 0, because set_page_refcounted()->set_page_count(), called after this function, is what sets _count to 1.
  • None of the PAGE_FLAGS_CHECK_AT_PREP flags may be set. These flags were already cleared when the page was freed; if any of them are set now, something went wrong during the allocation.

Once all these checks pass, the allocated page is up to standard and ready to leave the factory, and the page begins its own colorful life cycle.

