This post picks up the page-freeing path that earlier articles left unexplored and analyses in detail how page freeing is implemented in the buddy allocator. The entry point for freeing a page is __free_page(), which is in fact just a macro.
Its definition:
【file:/include/linux/gfp.h】
#define __free_page(page) __free_pages((page), 0)
And the implementation of __free_pages():
【file:/mm/page_alloc.c】
void __free_pages(struct page *page, unsigned int order)
{
    if (put_page_testzero(page)) {
        if (order == 0)
            free_hot_cold_page(page, 0);
        else
            __free_pages_ok(page, order);
    }
}
Here put_page_testzero() atomically decrements and tests the _count reference counter of the page structure, i.e. it checks whether the page is still in use; only when the count reaches zero is the page actually released. The order parameter gives the block size as 2^order pages. When a single page (order 0) is freed, free_hot_cold_page() is called to return it to the per-CPU page cache rather than to the buddy allocator; what really releases pages back into the buddy allocator is __free_pages_ok(), which is also the path taken when multi-page blocks are freed.
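As a quick orientation before diving in, here is a minimal, hypothetical usage sketch (not taken from the source being analysed; the function name and sizes are made up for illustration) showing how kernel code typically pairs alloc_pages() with __free_pages():

/* Hypothetical illustration: allocating and freeing pages in kernel code.
 * __free_page() used in this article is just the order-0 shorthand for
 * __free_pages() shown above. */
#include <linux/gfp.h>
#include <linux/mm.h>

static void demo_page_alloc_free(void)
{
    struct page *page;

    /* one physically contiguous block of 2^2 = 4 pages */
    page = alloc_pages(GFP_KERNEL, 2);
    if (!page)
        return;

    /* ... use the pages ... */

    /* drops the reference; once _count reaches zero the block goes back
     * to the buddy system (via __free_pages_ok(), since order != 0) */
    __free_pages(page, 2);
}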
Let's begin the analysis with free_hot_cold_page():
【file:/mm/page_alloc.c】
/*
* Free a 0-order page
* cold == 1 ? free a cold page : free a hot page
*/
void free_hot_cold_page(struct page *page, int cold)
{
    struct zone *zone = page_zone(page);
    struct per_cpu_pages *pcp;
    unsigned long flags;
    int migratetype;

    if (!free_pages_prepare(page, 0))
        return;

    migratetype = get_pageblock_migratetype(page);
    set_freepage_migratetype(page, migratetype);
    local_irq_save(flags);
    __count_vm_event(PGFREE);

    /*
     * We only track unmovable, reclaimable and movable on pcp lists.
     * Free ISOLATE pages back to the allocator because they are being
     * offlined but treat RESERVE as movable pages so we can get those
     * areas back if necessary. Otherwise, we may have to free
     * excessively into the page allocator
     */
    if (migratetype >= MIGRATE_PCPTYPES) {
        if (unlikely(is_migrate_isolate(migratetype))) {
            free_one_page(zone, page, 0, migratetype);
            goto out;
        }
        migratetype = MIGRATE_MOVABLE;
    }

    pcp = &this_cpu_ptr(zone->pageset)->pcp;
    if (cold)
        list_add_tail(&page->lru, &pcp->lists[migratetype]);
    else
        list_add(&page->lru, &pcp->lists[migratetype]);
    pcp->count++;
    if (pcp->count >= pcp->high) {
        unsigned long batch = ACCESS_ONCE(pcp->batch);
        free_pcppages_bulk(zone, batch, pcp);
        pcp->count -= batch;
    }

out:
    local_irq_restore(flags);
}
First, look at the implementation of free_pages_prepare():
【file:/mm/page_alloc.c】
static bool free_pages_prepare(struct page *page, unsigned int order)
{
    int i;
    int bad = 0;

    trace_mm_page_free(page, order);
    kmemcheck_free_shadow(page, order);

    if (PageAnon(page))
        page->mapping = NULL;
    for (i = 0; i < (1 << order); i++)
        bad += free_pages_check(page + i);
    if (bad)
        return false;

    if (!PageHighMem(page)) {
        debug_check_no_locks_freed(page_address(page),
                                   PAGE_SIZE << order);
        debug_check_no_obj_freed(page_address(page),
                                 PAGE_SIZE << order);
    }
    arch_free_page(page, order);
    kernel_map_pages(page, 1 << order, 0);

    return true;
}
Here trace_mm_page_free() feeds the kernel's trace machinery, while kmemcheck_free_shadow() serves the kmemcheck memory-checking tool and compiles to an empty function when CONFIG_KMEMCHECK is not set. Next, if PageAnon() shows the page is anonymous, its mapping pointer is cleared, and free_pages_check() then validates the state of every page in the block to decide whether it may be freed, guarding against erroneous releases. In short, this function exists mainly for checking and debugging.
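As a rough idea of what free_pages_check() looks for, here is a simplified sketch (renamed, and not the exact kernel source; the real function also reports the offending state via bad_page()):

/* Simplified sketch of the kind of checks free_pages_check() performs:
 * a page being freed must not be mapped, must not belong to a mapping,
 * must have a zero reference count, and must not carry flags that are
 * forbidden at free time. */
static inline int free_pages_check_sketch(struct page *page)
{
    if (unlikely(page_mapcount(page) ||                 /* still mapped */
                 page->mapping != NULL ||               /* still owned  */
                 atomic_read(&page->_count) != 0 ||     /* still referenced */
                 (page->flags & PAGE_FLAGS_CHECK_AT_FREE)))
        return 1;                                       /* bad page */

    /* clear flags that may legitimately still be set */
    if (page->flags & PAGE_FLAGS_CHECK_AT_PREP)
        page->flags &= ~PAGE_FLAGS_CHECK_AT_PREP;
    return 0;
}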
Back in free_hot_cold_page(): get_pageblock_migratetype() and set_freepage_migratetype() respectively read the page's migrate type and record it in the page itself (it ends up in page->index), while local_irq_save() and the matching local_irq_restore() at the end disable local interrupts, saving the flags, and restore them afterwards.
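In this kernel generation the two freepage helpers are, roughly, trivial accessors on page->index (a from-memory sketch of include/linux/mm.h, not a verbatim quote):

/* Sketch: the freed page's migrate type is cached in page->index so that
 * free_pcppages_bulk() can read it back later without consulting the
 * pageblock bitmap again. */
static inline void set_freepage_migratetype(struct page *page, int migratetype)
{
    page->index = migratetype;
}

static inline int get_freepage_migratetype(struct page *page)
{
    return page->index;
}

Back in the function body, the next fragment handles migrate types that are not tracked on the pcp lists: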
    if (migratetype >= MIGRATE_PCPTYPES) {
        if (unlikely(is_migrate_isolate(migratetype))) {
            free_one_page(zone, page, 0, migratetype);
            goto out;
        }
        migratetype = MIGRATE_MOVABLE;
    }
MIGRATE_PCPTYPES here is the number of migrate-type lists maintained in the per-CPU page-frame cache. A page whose migrate type is not below MIGRATE_PCPTYPES is not tracked separately on those lists: if its type is MIGRATE_ISOLATE it is released straight into the buddy allocator via free_one_page(); otherwise (e.g. MIGRATE_RESERVE) it is treated as movable and queued on the MIGRATE_MOVABLE list.
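For reference, the migrate types themselves are defined in include/linux/mmzone.h along the following lines (a from-memory sketch; the CONFIG-dependent entries are only present in matching builds):

enum {
    MIGRATE_UNMOVABLE,
    MIGRATE_RECLAIMABLE,
    MIGRATE_MOVABLE,
    MIGRATE_PCPTYPES,   /* the number of types tracked on the pcp lists */
    MIGRATE_RESERVE = MIGRATE_PCPTYPES,
#ifdef CONFIG_CMA
    MIGRATE_CMA,
#endif
#ifdef CONFIG_MEMORY_ISOLATION
    MIGRATE_ISOLATE,    /* can't allocate from here */
#endif
    MIGRATE_TYPES
};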
The final part:
    pcp = &this_cpu_ptr(zone->pageset)->pcp;
    if (cold)
        list_add_tail(&page->lru, &pcp->lists[migratetype]);
    else
        list_add(&page->lru, &pcp->lists[migratetype]);
    pcp->count++;
    if (pcp->count >= pcp->high) {
        unsigned long batch = ACCESS_ONCE(pcp->batch);
        free_pcppages_bulk(zone, batch, pcp);
        pcp->count -= batch;
    }
Here pcp is the zone's per-CPU page-set structure, and cold distinguishes cold pages from hot ones: a cold page is appended to the tail of the list for its migrate type, whereas a hot page is inserted at the head so that it is handed out again first, while its cache lines are still warm. The if (pcp->count >= pcp->high) test deserves attention: once the pages cached on this CPU exceed the high watermark, a batch of pcp->batch pages is flushed back into the buddy allocator.
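The fields used here belong to struct per_cpu_pages (include/linux/mmzone.h), which looks roughly like this:

struct per_cpu_pages {
    int count;      /* number of pages currently on the lists below */
    int high;       /* high watermark: emptying needed above this */
    int batch;      /* chunk size for buddy add/remove */

    /* one free list per migrate type tracked on the pcp lists */
    struct list_head lists[MIGRATE_PCPTYPES];
};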
Now look at free_pcppages_bulk(), which performs that bulk release into the buddy allocator:
【file:/mm/page_alloc.c】
/*
* Frees a number of pages from the PCP lists
* Assumes all pages on list are in same zone, and of same order.
* count is the number of pages to free.
*
* If the zone was previously in an "all pages pinned" state then look to
* see if this freeing clears that state.
*
* And clear the zone's pages_scanned counter, to hold off the "all pages are
* pinned" detection logic.
*/
static void free_pcppages_bulk(struct zone *zone, int count,
                               struct per_cpu_pages *pcp)
{
    int migratetype = 0;
    int batch_free = 0;
    int to_free = count;

    spin_lock(&zone->lock);
    zone->pages_scanned = 0;

    while (to_free) {
        struct page *page;
        struct list_head *list;

        /*
         * Remove pages from lists in a round-robin fashion. A
         * batch_free count is maintained that is incremented when an
         * empty list is encountered. This is so more pages are freed
         * off fuller lists instead of spinning excessively around empty
         * lists
         */
        do {
            batch_free++;
            if (++migratetype == MIGRATE_PCPTYPES)
                migratetype = 0;
            list = &pcp->lists[migratetype];
        } while (list_empty(list));

        /* This is the only non-empty list. Free them all. */
        if (batch_free == MIGRATE_PCPTYPES)
            batch_free = to_free;

        do {
            int mt; /* migratetype of the to-be-freed page */

            page = list_entry(list->prev, struct page, lru);
            /* must delete as __free_one_page list manipulates */
            list_del(&page->lru);
            mt = get_freepage_migratetype(page);
            /* MIGRATE_MOVABLE list may include MIGRATE_RESERVEs */
            __free_one_page(page, zone, 0, mt);
            trace_mm_page_pcpu_drain(page, 0, mt);
            if (likely(!is_migrate_isolate_page(page))) {
                __mod_zone_page_state(zone, NR_FREE_PAGES, 1);
                if (is_migrate_cma(mt))
                    __mod_zone_page_state(zone, NR_FREE_CMA_PAGES, 1);
            }
        } while (--to_free && --batch_free && !list_empty(list));
    }
    spin_unlock(&zone->lock);
}
The outer while loop keeps running until the requested count of pages has been released. On each pass, the inner do-while below walks round-robin over the MIGRATE_PCPTYPES per-CPU lists, incrementing batch_free for every list it inspects, until it finds a non-empty one; because empty lists bump the counter, fuller lists end up giving back more pages:
        do {
            batch_free++;
            if (++migratetype == MIGRATE_PCPTYPES)
                migratetype = 0;
            list = &pcp->lists[migratetype];
        } while (list_empty(list));
If batch_free reaches MIGRATE_PCPTYPES, every other list was empty, so all remaining pages are released from this single non-empty list. In the second do{}while(), pages are then taken from the tail of the chosen list: each one is removed from its lru list, its recorded migrate type is read back with get_freepage_migratetype(), it is released into the buddy system with __free_one_page(), and __mod_zone_page_state() finally updates the zone's free-page statistics.
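To make the round-robin accounting concrete, the following is a small self-contained user-space model of the selection logic; it is purely illustrative and uses plain integer counters in place of the kernel's list_head lists:

#include <stdio.h>

#define MIGRATE_PCPTYPES 3

int main(void)
{
    /* pages currently sitting on each pcp list:
     * [0] unmovable, [1] reclaimable, [2] movable */
    int lists[MIGRATE_PCPTYPES] = { 0, 2, 9 };
    int to_free = 8;            /* how many pages to flush in this call */
    int migratetype = 0;
    int batch_free = 0;

    while (to_free) {
        int take;

        /* round-robin: skip empty lists, counting every list inspected */
        do {
            batch_free++;
            if (++migratetype == MIGRATE_PCPTYPES)
                migratetype = 0;
        } while (lists[migratetype] == 0);

        /* only one list still has pages: drain the remainder from it */
        if (batch_free == MIGRATE_PCPTYPES)
            batch_free = to_free;

        /* the kernel's inner do-while frees at most batch_free pages,
         * bounded by the list length and by to_free */
        take = batch_free;
        if (take > lists[migratetype])
            take = lists[migratetype];
        if (take > to_free)
            take = to_free;

        printf("free %d page(s) from list %d\n", take, migratetype);
        lists[migratetype] -= take;
        batch_free -= take;
        to_free -= take;
    }
    return 0;
}

Running it, the first passes take single pages from each non-empty list, and once only the movable list has pages left (batch_free reaches MIGRATE_PCPTYPES) the remaining quota is drained from it in one go.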
Now focus on the implementation of __free_one_page():
【file:/mm/page_alloc.c】
/*
* Freeing function for a buddy system allocator.
*
* The concept of a buddy system is to maintain direct-mapped table
* (containing bit values) for memory blocks of various "orders".
* The bottom level table contains the map for the smallest allocatable
* units of memory (here, pages), and each level above it describes
* pairs of units from the levels below, hence, "buddies".
* At a high level, all that happens here is marking the table entry
* at the bottom level available, and propagating the changes upward
* as necessary, plus some accounting needed to play nicely with other
* parts of the VM system.
* At each level, we keep a list of pages, which are heads of continuous
* free pages of length of (1 << order) and marked with _mapcount
* PAGE_BUDDY_MAPCOUNT_VALUE. Page's order is recorded in page_private(page)
* field.
* So when we are allocating or freeing one, we can derive the state of the
* other. That is, if we allocate a small block, and both were
* free, the remainder of the region must be split into blocks.
* If a block is freed, and its buddy is also free, then this
* triggers coalescing into a block of larger size.
*
* -- nyc
*/
static inline void __free_one_page(struct page *page,
        struct zone *zone, unsigned int order,
        int migratetype)
{
    unsigned long page_idx;
    unsigned long combined_idx;
    unsigned long uninitialized_var(buddy_idx);
    struct page *buddy;

    VM_BUG_ON(!zone_is_initialized(zone));

    if (unlikely(PageCompound(page)))
        if (unlikely(destroy_compound_page(page, order)))
            return;

    VM_BUG_ON(migratetype == -1);

    page_idx = page_to_pfn(page) & ((1 << MAX_ORDER) - 1);

    VM_BUG_ON_PAGE(page_idx & ((1 << order) - 1), page);
    VM_BUG_ON_PAGE(bad_range(zone, page), page);

    while (order < MAX_ORDER-1) {
        buddy_idx = __find_buddy_index(page_idx, order);
        buddy = page + (buddy_idx - page_idx);
        if (!page_is_buddy(page, buddy, order))
            break;
        /*
         * Our buddy is free or it is CONFIG_DEBUG_PAGEALLOC guard page,
         * merge with it and move up one order.
         */
        if (page_is_guard(buddy)) {
            clear_page_guard_flag(buddy);
            set_page_private(page, 0);
            __mod_zone_freepage_state(zone, 1 << order,
                                      migratetype);
        } else {
            list_del(&buddy->lru);
            zone->free_area[order].nr_free--;
            rmv_page_order(buddy);
        }
        combined_idx = buddy_idx & page_idx;
        page = page + (combined_idx - page_idx);
        page_idx = combined_idx;
        order++;
    }
    set_page_order(page, order);

    /*
     * If this is not the largest possible page, check if the buddy
     * of the next-highest order is free. If it is, it's possible
     * that pages are being freed that will coalesce soon. In case,
     * that is happening, add the free page to the tail of the list
     * so it's less likely to be used soon and more likely to be merged
     * as a higher order page
     */
    if ((order < MAX_ORDER-2) && pfn_valid_within(page_to_pfn(buddy))) {
        struct page *higher_page, *higher_buddy;
        combined_idx = buddy_idx & page_idx;
        higher_page = page + (combined_idx - page_idx);
        buddy_idx = __find_buddy_index(combined_idx, order + 1);
        higher_buddy = higher_page + (buddy_idx - combined_idx);
        if (page_is_buddy(higher_page, higher_buddy, order + 1)) {
            list_add_tail(&page->lru,
                &zone->free_area[order].free_list[migratetype]);
            goto out;
        }
    }

    list_add(&page->lru, &zone->free_area[order].free_list[migratetype]);
out:
    zone->free_area[order].nr_free++;
}
Everything before while (order < MAX_ORDER-1) is validation of the page being freed. Inside the loop, __find_buddy_index() yields the index of the buddy block at the current order, from which the buddy's page address is computed; page_is_buddy() then checks whether that buddy is itself free at the same order and therefore mergeable, and if not the loop breaks out. The if (page_is_guard(buddy)) branch inspects the page's debug_flags; with CONFIG_DEBUG_PAGEALLOC not configured, page_is_guard() always returns false, so what remains is to unlink the buddy from its free list, merge the two blocks, and raise the order by one.
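The buddy arithmetic here is plain bit manipulation on the page index; __find_buddy_index() is essentially the following (a from-memory sketch of the helper in mm/page_alloc.c):

/* Sketch: the buddy of a 2^order-aligned block is the block whose index
 * differs only in bit 'order'. */
static inline unsigned long
__find_buddy_index(unsigned long page_idx, unsigned int order)
{
    return page_idx ^ (1 << order);
}

For example, with page_idx = 8 and order = 2, the buddy index is 8 ^ 4 = 12; if that buddy is free, combined_idx = 12 & 8 = 8, i.e. the merged order-3 block starts at index 8 and the loop retries one order higher.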
After the loop exits, set_page_order() records the order the block was finally merged up to. Then, unless the block is already close to the maximum order, the code checks whether the buddy of the next-higher order is free; if so, the page is added to the tail of the free list so it is less likely to be allocated soon and more likely to be merged into an even larger block, otherwise it is added at the head. Finally, nr_free of the order the block ended up on is incremented.
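For completeness, set_page_order() and the rmv_page_order() used inside the merge loop are roughly the following pair of helpers (from-memory sketch):

/* Sketch: a free block's order is kept in page_private() of its head page,
 * and the head page is tagged as belonging to the buddy system. */
static inline void set_page_order(struct page *page, int order)
{
    set_page_private(page, order);
    __SetPageBuddy(page);
}

static inline void rmv_page_order(struct page *page)
{
    __ClearPageBuddy(page);
    set_page_private(page, 0);
}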
This completes the page-freeing path of the buddy allocator.
As for __free_pages_ok(), its call chain for releasing pages is:
__free_pages_ok()
 -> free_one_page()
 -> __free_one_page()
Different routes, same destination: in the end it is still __free_one_page() that does the release, so the details will not be walked through again.
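Still, for completeness, here is a rough, abridged sketch of what __free_pages_ok() looks like in this kernel generation (reconstructed from memory, so treat it as indicative rather than a verbatim quote): it runs the same free_pages_prepare() checks, records the migrate type, and hands the whole 2^order block to free_one_page(), which takes zone->lock and calls __free_one_page() as analysed above.

static void __free_pages_ok(struct page *page, unsigned int order)
{
    unsigned long flags;
    int migratetype;

    if (!free_pages_prepare(page, order))
        return;

    local_irq_save(flags);
    __count_vm_events(PGFREE, 1 << order);
    migratetype = get_pageblock_migratetype(page);
    set_freepage_migratetype(page, migratetype);
    /* free_one_page() takes zone->lock and calls __free_one_page() */
    free_one_page(page_zone(page), page, order, migratetype);
    local_irq_restore(flags);
}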