The Linux Kernel's New Contiguous Memory Allocator (CMA): Avoiding Large Static Memory Reservations [Repost]

Source: https://www.cnblogs.com/linhaostudy/archive/2018/12/25/10174653.html


When working with ARM and other embedded Linux systems, one persistent headache is that the GPU, camera, HDMI and similar devices all need large contiguous memory regions reserved for them. This memory sits idle most of the time, yet the conventional approach still requires reserving it up front. Marek Szyprowski and Michal Nazarewicz have now implemented a brand-new Contiguous Memory Allocator. With this mechanism, no memory needs to be reserved in advance: the memory remains generally usable, and is only handed to the camera, HDMI and other devices when actually needed. The basic code flow is analyzed below.

1. Declaring contiguous memory

During kernel boot, arm_memblock_init() in arch/arm/mm/init.c calls dma_contiguous_reserve(min(arm_dma_limit, arm_lowmem_limit));

This function lives in drivers/base/dma-contiguous.c:

/**
 * dma_contiguous_reserve() - reserve area for contiguous memory handling
 * @limit: End address of the reserved memory (optional, 0 for any).
 *
 * This function reserves memory from early allocator. It should be
 * called by arch specific code once the early allocator (memblock or bootmem)
 * has been activated and all other subsystems have already allocated/reserved
 * memory.
 */
void __init dma_contiguous_reserve(phys_addr_t limit)
{
        unsigned long selected_size = 0;
 
        pr_debug("%s(limit %08lx)\n", __func__, (unsigned long)limit);
 
        if (size_cmdline != -1) {
                selected_size = size_cmdline;
        } else {
#ifdef CONFIG_CMA_SIZE_SEL_MBYTES
                selected_size = size_bytes;
#elif defined(CONFIG_CMA_SIZE_SEL_PERCENTAGE)
                selected_size = cma_early_percent_memory();
#elif defined(CONFIG_CMA_SIZE_SEL_MIN)
                selected_size = min(size_bytes, cma_early_percent_memory());
#elif defined(CONFIG_CMA_SIZE_SEL_MAX)
                selected_size = max(size_bytes, cma_early_percent_memory());
#endif
        }   
 
        if (selected_size) {
                pr_debug("%s: reserving %ld MiB for global area\n", __func__,
                         selected_size / SZ_1M);
 
                dma_declare_contiguous(NULL, selected_size, 0, limit);
        }
}

where size_bytes is defined as:

static const unsigned long size_bytes = CMA_SIZE_MBYTES * SZ_1M;

By default, CMA_SIZE_MBYTES is defined as 16 MB, which comes from CONFIG_CMA_SIZE_MBYTES=16.
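
The size_cmdline checked above lets the boot command line override that default. A minimal sketch of the cma= parameter hook, modeled on the dma-contiguous code of this era (the exact handler may differ between kernel versions):

static long size_cmdline = -1;

/* "cma=64M" on the kernel command line overrides the Kconfig default */
static int __init early_cma(char *p)
{
        pr_debug("%s(%s)\n", __func__, p);
        size_cmdline = memparse(p, &p); /* accepts K/M/G suffixes */
        return 0;
}
early_param("cma", early_cma);

dma_contiguous_reserve() then hands the chosen size to dma_declare_contiguous():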

int __init dma_declare_contiguous(struct device *dev, unsigned long size,
                                  phys_addr_t base, phys_addr_t limit)
{
        ...
        /* Reserve memory */
        if (base) {
                if (memblock_is_region_reserved(base, size) ||
                    memblock_reserve(base, size) < 0) {
                        base = -EBUSY;
                        goto err;
                }
        } else {
                /*
                 * Use __memblock_alloc_base() since
                 * memblock_alloc_base() panic()s.
                 */
                phys_addr_t addr = __memblock_alloc_base(size, alignment, limit);
                if (!addr) {
                        base = -ENOMEM;
                        goto err;
                } else if (addr + size > ~(unsigned long)0) {
                        memblock_free(addr, size);
                        base = -EINVAL;
                        goto err;
                } else {
                        base = addr;
                }
        }
 
        /*
         * Each reserved area must be initialised later, when more kernel
         * subsystems (like slab allocator) are available.
         */
        r->start = base;
        r->size = size;
        r->dev = dev;
        cma_reserved_count++;
        pr_info("CMA: reserved %ld MiB at %08lx\n", size / SZ_1M,
                (unsigned long)base);
 
        /* Architecture specific contiguous memory fixup. */
        dma_contiguous_early_fixup(base, size);
        return 0;
err:
        pr_err("CMA: failed to reserve %ld MiB\n", size / SZ_1M);
        return base;
} 

As we can see, the contiguous memory region is likewise grabbed very early in kernel boot, via __memblock_alloc_base().
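
Besides the global area, architecture code can also give a specific device its own dedicated region by calling dma_declare_contiguous() directly during early init. A hypothetical board-file sketch (the device and size here are illustrative only, not from the original article):

/* Hypothetical board init code: reserve a private 32 MiB CMA area for
 * a camera platform device. base = 0 lets memblock pick the address;
 * limit = 0 means no upper-bound constraint. */
static void __init my_board_reserve(void)
{
        if (dma_declare_contiguous(&my_camera_device.dev, SZ_32M, 0, 0))
                pr_err("camera CMA area reservation failed\n");
}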

In addition:

core_initcall() in drivers/base/dma-contiguous.c causes cma_init_reserved_areas() to be called:

cma_create_area() calls cma_activate_area(), which in turn calls, for each pageblock in the reserved area:

init_cma_reserved_pageblock(pfn_to_page(base_pfn));

That function marks the pages as MIGRATE_CMA via set_pageblock_migratetype(page, MIGRATE_CMA):

#ifdef CONFIG_CMA
/* Free whole pageblock and set its migratetype to MIGRATE_CMA. */
void __init init_cma_reserved_pageblock(struct page *page)
{                                    
        unsigned i = pageblock_nr_pages;
        struct page *p = page;
        
        do {
                __ClearPageReserved(p);
                set_page_count(p, 0);
        } while (++p, --i);
        
        set_page_refcounted(page);
        set_pageblock_migratetype(page, MIGRATE_CMA);
        __free_pages(page, pageblock_order);
        totalram_pages += pageblock_nr_pages;
}       
#endif

The __free_pages(page, pageblock_order) call here eventually reaches __free_one_page(page, zone, order, migratetype), and the pages are added to the MIGRATE_CMA free_list:

list_add(&page->lru, &zone->free_area[order].free_list[migratetype]);
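
Pages sitting on the MIGRATE_CMA free_list are not wasted while no device needs them: the buddy allocator may hand them out as a fallback for movable allocations, which can later be migrated away. A sketch of the fallback table from the CMA patchset era (mm/page_alloc.c; the exact entries vary across kernel versions):

/*
 * Fallback order for each migratetype (3.x-era sketch). Movable
 * allocations may fall back to MIGRATE_CMA pageblocks, which is safe
 * because movable pages can be migrated out again when CMA needs them.
 */
static int fallbacks[MIGRATE_TYPES][4] = {
        [MIGRATE_UNMOVABLE]   = { MIGRATE_RECLAIMABLE, MIGRATE_MOVABLE, MIGRATE_RESERVE },
        [MIGRATE_RECLAIMABLE] = { MIGRATE_UNMOVABLE,   MIGRATE_MOVABLE, MIGRATE_RESERVE },
#ifdef CONFIG_CMA
        [MIGRATE_MOVABLE]     = { MIGRATE_CMA, MIGRATE_RECLAIMABLE,
                                  MIGRATE_UNMOVABLE, MIGRATE_RESERVE },
#else
        [MIGRATE_MOVABLE]     = { MIGRATE_RECLAIMABLE, MIGRATE_UNMOVABLE, MIGRATE_RESERVE },
#endif
        [MIGRATE_RESERVE]     = { MIGRATE_RESERVE }, /* never used */
};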

2. Allocating contiguous memory

Allocating contiguous memory still goes through the standard dma_alloc_coherent() and dma_alloc_writecombine() defined in arch/arm/mm/dma-mapping.c. Both indirectly call the following function in drivers/base/dma-contiguous.c:

struct page *dma_alloc_from_contiguous(struct device *dev, int count,
                                       unsigned int align)
{
       ...
 
       for (;;) {
                pageno = bitmap_find_next_zero_area(cma->bitmap, cma->count,
                                                    start, count, mask);
                if (pageno >= cma->count) {
                        ret = -ENOMEM;
                        goto error;
                }
 
                pfn = cma->base_pfn + pageno;
                ret = alloc_contig_range(pfn, pfn + count, MIGRATE_CMA);
                if (ret == 0) {
                        bitmap_set(cma->bitmap, pageno, count);
                        break;
                } else if (ret != -EBUSY) {
                        goto error;
                }
                pr_debug("%s(): memory range at %p is busy, retrying\n",
                         __func__, pfn_to_page(pfn));
                /* try again with a bit different memory target */
                start = pageno + mask + 1;
        }
       ...
 
}

->

int alloc_contig_range(unsigned long start, unsigned long end,
                       unsigned migratetype)

alloc_contig_range() must first isolate the pages; the comment in the code explains what isolation accomplishes:

        /*
         * What we do here is we mark all pageblocks in range as
         * MIGRATE_ISOLATE.  Because of the way page allocator work, we
         * align the range to MAX_ORDER pages so that page allocator
         * won't try to merge buddies from different pageblocks and
         * change MIGRATE_ISOLATE to some other migration type.
         *
         * Once the pageblocks are marked as MIGRATE_ISOLATE, we
         * migrate the pages from an unaligned range (ie. pages that
         * we are interested in).  This will put all the pages in
         * range back to page allocator as MIGRATE_ISOLATE.
         *
         * When this is done, we take the pages in range from page
         * allocator removing them from the buddy system.  This way
         * page allocator will never consider using them.
         *
         * This lets us mark the pageblocks back as
         * MIGRATE_CMA/MIGRATE_MOVABLE so that free pages in the
         * MAX_ORDER aligned range but not in the unaligned, original
         * range are put back to page allocator so that buddy can use
         * them. 
         */  
                
        ret = start_isolate_page_range(pfn_align_to_maxpage_down(start),
                                       pfn_align_to_maxpage_up(end),
                                       migratetype);

Simply put, the relevant pageblocks are marked MIGRATE_ISOLATE so that the buddy system will no longer use them.

/*      
 * start_isolate_page_range() -- make page-allocation-type of range of pages
 * to be MIGRATE_ISOLATE.
 * @start_pfn: The lower PFN of the range to be isolated.
 * @end_pfn: The upper PFN of the range to be isolated.
 * @migratetype: migrate type to set in error recovery.
 *
 * Making page-allocation-type to be MIGRATE_ISOLATE means free pages in
 * the range will never be allocated. Any free pages and pages freed in the
 * future will not be allocated again.
 *
 * start_pfn/end_pfn must be aligned to pageblock_order.
 * Returns 0 on success and -EBUSY if any part of range cannot be isolated.
 */
int start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
                             unsigned migratetype)
{
        unsigned long pfn;
        unsigned long undo_pfn;
        struct page *page;
 
        BUG_ON((start_pfn) & (pageblock_nr_pages - 1));
        BUG_ON((end_pfn) & (pageblock_nr_pages - 1));
 
        for (pfn = start_pfn;
             pfn < end_pfn;
             pfn += pageblock_nr_pages) {
                page = __first_valid_page(pfn, pageblock_nr_pages);
                if (page && set_migratetype_isolate(page)) {
                        undo_pfn = pfn;
                        goto undo;
                }
        }
        return 0;
undo:
        for (pfn = start_pfn;
             pfn < undo_pfn;
             pfn += pageblock_nr_pages)
                unset_migratetype_isolate(pfn_to_page(pfn), migratetype);
 
        return -EBUSY;
}

Next, __alloc_contig_migrate_range() is called to isolate and migrate the pages currently in use:

static int __alloc_contig_migrate_range(unsigned long start, unsigned long end) 
{
        /* This function is based on compact_zone() from compaction.c. */
 
        unsigned long pfn = start;
        unsigned int tries = 0; 
        int ret = 0; 
 
        struct compact_control cc = {
                .nr_migratepages = 0, 
                .order = -1,
                .zone = page_zone(pfn_to_page(start)),
                .sync = true,
        };   
        INIT_LIST_HEAD(&cc.migratepages);
 
        migrate_prep_local();
 
        while (pfn < end || !list_empty(&cc.migratepages)) {
                if (fatal_signal_pending(current)) {
                        ret = -EINTR;
                        break;
                }    
 
                if (list_empty(&cc.migratepages)) {
                        cc.nr_migratepages = 0; 
                        pfn = isolate_migratepages_range(cc.zone, &cc, 
                                                         pfn, end);
                        if (!pfn) {
                                ret = -EINTR;
                                break;
                        }    
                        tries = 0; 
                } else if (++tries == 5) { 
                        ret = ret < 0 ? ret : -EBUSY;
                        break;
                }    
 
                ret = migrate_pages(&cc.migratepages,
                                    __alloc_contig_migrate_alloc,
                                    0, false, true);
        }    
 
        putback_lru_pages(&cc.migratepages);
        return ret > 0 ? 0 : ret; 
}
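
The allocation callback handed to migrate_pages() above simply obtains a fresh movable page for each page being moved. A sketch based on the CMA patchset (the exact flags may differ between patch revisions):

/* new_page_t callback: allocate the destination page for one migrated
 * page; GFP_USER | __GFP_MOVABLE keeps the replacement page movable. */
static struct page *
__alloc_contig_migrate_alloc(struct page *page, unsigned long private,
                             int **resultp)
{
        gfp_t gfp_mask = GFP_USER | __GFP_MOVABLE;

        if (PageHighMem(page))
                gfp_mask |= __GFP_HIGHMEM;

        return alloc_page(gfp_mask);
}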

migrate_pages() carries out the actual migration: it allocates new pages through the __alloc_contig_migrate_alloc() callback passed in, and copies the contents of the old pages into the new ones:

int migrate_pages(struct list_head *from,
                new_page_t get_new_page, unsigned long private, bool offlining,
                bool sync)
{
        int retry = 1; 
        int nr_failed = 0; 
        int pass = 0; 
        struct page *page;
        struct page *page2;
        int swapwrite = current->flags & PF_SWAPWRITE;
        int rc;
 
        if (!swapwrite)
                current->flags |= PF_SWAPWRITE;
 
        for(pass = 0; pass < 10 && retry; pass++) {
                retry = 0; 
 
                list_for_each_entry_safe(page, page2, from, lru) {
                        cond_resched();
 
                        rc = unmap_and_move(get_new_page, private,
                                                page, pass > 2, offlining,
                                                sync);
 
                        switch(rc) {
                        case -ENOMEM:
                                goto out; 
                        case -EAGAIN:
                                retry++;
                                break;
                        case 0:
                                break;
                        default:
                                /* Permanent failure */
                                nr_failed++;
                                break;
                        }    
                }    
        }    
        rc = 0;
...
} 

The key piece is unmap_and_move(), defined in mm/migrate.c:

/*
 * Obtain the lock on page, remove all ptes and migrate the page
 * to the newly allocated page in newpage.
 */
static int unmap_and_move(new_page_t get_new_page, unsigned long private,
            struct page *page, int force, bool offlining, bool sync)
{
    int rc = 0;
    int *result = NULL;
    struct page *newpage = get_new_page(page, private, &result);
    int remap_swapcache = 1;
    int charge = 0;
    struct mem_cgroup *mem = NULL;
    struct anon_vma *anon_vma = NULL;
 
    ...
 
    /* charge against new page */
    charge = mem_cgroup_prepare_migration(page, newpage, &mem);
    ...
 
    if (PageWriteback(page)) {
        if (!force || !sync)
            goto uncharge;
        wait_on_page_writeback(page);
    }
    /*
     * By try_to_unmap(), page->mapcount goes down to 0 here. In this case,
     * we cannot notice that anon_vma is freed while we migrates a page.
     * This get_anon_vma() delays freeing anon_vma pointer until the end
     * of migration. File cache pages are no problem because of page_lock()
     * File Caches may use write_page() or lock_page() in migration, then,
     * just care Anon page here.
     */
    if (PageAnon(page)) {
        /*
         * Only page_lock_anon_vma() understands the subtleties of
         * getting a hold on an anon_vma from outside one of its mms.
         */
        anon_vma = page_lock_anon_vma(page);
        if (anon_vma) {
            /*
             * Take a reference count on the anon_vma if the
             * page is mapped so that it is guaranteed to
             * exist when the page is remapped later
             */
            get_anon_vma(anon_vma);
            page_unlock_anon_vma(anon_vma);
        } else if (PageSwapCache(page)) {
            /*
             * We cannot be sure that the anon_vma of an unmapped
             * swapcache page is safe to use because we don't
             * know in advance if the VMA that this page belonged
             * to still exists. If the VMA and others sharing the
             * data have been freed, then the anon_vma could
             * already be invalid.
             *
             * To avoid this possibility, swapcache pages get
             * migrated but are not remapped when migration
             * completes
             */
            remap_swapcache = 0;
        } else {
            goto uncharge;
        }
    }
 
    ...
    /* Establish migration ptes or remove ptes */
    try_to_unmap(page, TTU_MIGRATION|TTU_IGNORE_MLOCK|TTU_IGNORE_ACCESS);
 
skip_unmap:
    if (!page_mapped(page))
        rc = move_to_new_page(newpage, page, remap_swapcache);
 
    if (rc && remap_swapcache)
        remove_migration_ptes(page, page);
 
    /* Drop an anon_vma reference if we took one */
    if (anon_vma)
        drop_anon_vma(anon_vma);
 
uncharge:
    if (!charge)
        mem_cgroup_end_migration(mem, page, newpage, rc == 0);
unlock:
    unlock_page(page);
 
move_newpage:
    ...
}

Through unmap_and_move(), the old pages are migrated over to the new pages.

Next comes page reclaim. Its purpose is to keep the system from becoming memory-starved after a chunk of contiguous memory has been taken:

->

        /*
         * Reclaim enough pages to make sure that contiguous allocation
         * will not starve the system.
         */
        __reclaim_pages(zone, GFP_HIGHUSER_MOVABLE, end-start);

->

/*
 * Trigger memory pressure bump to reclaim some pages in order to be able to
 * allocate 'count' pages in single page units. Does similar work as
 * __alloc_pages_slowpath() function.
 */
static int __reclaim_pages(struct zone *zone, gfp_t gfp_mask, int count)
{
        enum zone_type high_zoneidx = gfp_zone(gfp_mask);
        struct zonelist *zonelist = node_zonelist(0, gfp_mask);
        int did_some_progress = 0;
        int order = 1;
        unsigned long watermark;
 
        /*
         * Increase level of watermarks to force kswapd do his job
         * to stabilise at new watermark level.
         */
        __update_cma_watermarks(zone, count);
 
        /* Obey watermarks as if the page was being allocated */
        watermark = low_wmark_pages(zone) + count;
        while (!zone_watermark_ok(zone, 0, watermark, 0, 0)) {
                wake_all_kswapd(order, zonelist, high_zoneidx, zone_idx(zone));
 
                did_some_progress = __perform_reclaim(gfp_mask, order, zonelist,
                                                      NULL);
                if (!did_some_progress) {
                        /* Exhausted what can be done so it's blamo time */
                        out_of_memory(zonelist, gfp_mask, order, NULL);
                }
        }
 
        /* Restore original watermark levels. */
        __update_cma_watermarks(zone, -count);
 
        return count;
}
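
The __update_cma_watermarks() helper used above came from the same patchset: it temporarily raises the zone watermarks by the requested page count so kswapd stabilizes at the new level, then restores them when called with a negative count. A sketch of that era's implementation (the min_cma_pages field was later removed; details varied across revisions):

/* Raise (count > 0) or restore (count < 0) this zone's watermarks so
 * that reclaim targets account for the pending CMA allocation. */
static void __update_cma_watermarks(struct zone *zone, int count)
{
        unsigned long flags;

        spin_lock_irqsave(&zone->lock, flags);
        zone->min_cma_pages += count;
        spin_unlock_irqrestore(&zone->lock, flags);

        setup_per_zone_wmarks(); /* recompute min/low/high watermarks */
}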

3. Releasing contiguous memory

Releasing the memory is comparatively simple; the call chain is:

arch/arm/mm/dma-mapping.c

void dma_free_coherent(struct device *dev, size_t size, void *cpu_addr, dma_addr_t handle)

->

arch/arm/mm/dma-mapping.c:

static void __free_from_contiguous(struct device *dev, struct page *page,
                                   size_t size)
{
        __dma_remap(page, size, pgprot_kernel);
        dma_release_from_contiguous(dev, page, size >> PAGE_SHIFT);
}

->

bool dma_release_from_contiguous(struct device *dev, struct page *pages,
                                 int count)
{
        ...
        free_contig_range(pfn, count);
        ...
 
}

->

void free_contig_range(unsigned long pfn, unsigned nr_pages)
{       
        for (; nr_pages--; ++pfn)
                __free_page(pfn_to_page(pfn));
}  

handing the pages back to the buddy allocator.

4. The migratetype of kernel memory allocations

Kernel memory allocations carry GFP_ flags, and those GFP_ flags can be converted into a migratetype:

static inline int allocflags_to_migratetype(gfp_t gfp_flags)
{
        WARN_ON((gfp_flags & GFP_MOVABLE_MASK) == GFP_MOVABLE_MASK);
 
        if (unlikely(page_group_by_mobility_disabled))
                return MIGRATE_UNMOVABLE;
 
        /* Group based on mobility */
        return (((gfp_flags & __GFP_MOVABLE) != 0) << 1) |
                ((gfp_flags & __GFP_RECLAIMABLE) != 0); 
}
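
As a worked example of the bit trick above (__GFP_MOVABLE supplies bit 1 and __GFP_RECLAIMABLE supplies bit 0, matching the 3.x-era enum where MIGRATE_UNMOVABLE=0, MIGRATE_RECLAIMABLE=1, MIGRATE_MOVABLE=2):

/* allocflags_to_migratetype() examples:
 *
 *   GFP_KERNEL                     -> 0b00 -> MIGRATE_UNMOVABLE
 *   GFP_KERNEL | __GFP_RECLAIMABLE -> 0b01 -> MIGRATE_RECLAIMABLE
 *   GFP_HIGHUSER_MOVABLE           -> 0b10 -> MIGRATE_MOVABLE
 */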

Later, when memory is allocated, the free_list matching the migratetype is searched:

        page = get_page_from_freelist(gfp_mask|__GFP_HARDWALL, nodemask, order,
                        zonelist, high_zoneidx, ALLOC_WMARK_LOW|ALLOC_CPUSET,
                        preferred_zone, migratetype);

The author also wrote a test module that can exercise CMA on demand:

/*
 * kernel module helper for testing CMA
 *
 * Licensed under GPLv2 or later.
 */
 
#include <linux/module.h>
#include <linux/device.h>
#include <linux/fs.h>
#include <linux/miscdevice.h>
#include <linux/dma-mapping.h>
 
#define CMA_NUM  10
static struct device *cma_dev;
static dma_addr_t dma_phys[CMA_NUM];
static void *dma_virt[CMA_NUM];
 
/* any read request will free coherent memory, eg.
 * cat /dev/cma_test
 */
static ssize_t
cma_test_read(struct file *file, char __user *buf, size_t count, loff_t *ppos)
{
    int i;
 
    for (i = 0; i < CMA_NUM; i++) {
        if (dma_virt[i]) {
            dma_free_coherent(cma_dev, (i + 1) * SZ_1M, dma_virt[i], dma_phys[i]);
            _dev_info(cma_dev, "free virt: %p phys: %p\n", dma_virt[i], (void *)dma_phys[i]);
            dma_virt[i] = NULL;
            break;
        }
    }
    return 0;
}
 
/*
 * any write request will alloc coherent memory, eg.
 * echo 0 > /dev/cma_test
 */
static ssize_t
cma_test_write(struct file *file, const char __user *buf, size_t count, loff_t *ppos)
{
    int i;
 
    for (i = 0; i < CMA_NUM; i++) {
        if (!dma_virt[i]) {
            dma_virt[i] = dma_alloc_coherent(cma_dev, (i + 1) * SZ_1M, &dma_phys[i], GFP_KERNEL);
 
            if (dma_virt[i]) {
                void *p;
                /* touch every page in the allocated memory */
                for (p = dma_virt[i]; p <  dma_virt[i] + (i + 1) * SZ_1M; p += PAGE_SIZE)
                    *(u32 *)p = 0;
 
                _dev_info(cma_dev, "alloc virt: %p phys: %p\n", dma_virt[i], (void *)dma_phys[i]);
            } else {
                dev_err(cma_dev, "no mem in CMA area\n");
                return -ENOMEM;
            }
            break;
        }
    }
 
    return count;
}
 
static const struct file_operations cma_test_fops = {
    .owner =    THIS_MODULE,
    .read  =    cma_test_read,
    .write =    cma_test_write,
};
 
static struct miscdevice cma_test_misc = {
    .name = "cma_test",
    .fops = &cma_test_fops,
};
 
static int __init cma_test_init(void)
{
    int ret = 0;
 
    ret = misc_register(&cma_test_misc);
    if (unlikely(ret)) {
        pr_err("failed to register cma test misc device!\n");
        return ret;
    }
    cma_dev = cma_test_misc.this_device;
    cma_dev->coherent_dma_mask = ~0;
    _dev_info(cma_dev, "registered.\n");
 
    return ret;
}
module_init(cma_test_init);
 
static void __exit cma_test_exit(void)
{
    misc_deregister(&cma_test_misc);
}
module_exit(cma_test_exit);
 
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Barry Song <[email protected]>");
MODULE_DESCRIPTION("kernel module to help the test of CMA");
MODULE_ALIAS("CMA test");

To allocate memory:

# echo 0 > /dev/cma_test

To free memory:

# cat /dev/cma_test

  • 通過以下方式可以高效,並保證數據同步的可靠性 1.API設計 使用RESTful設計,確保API端點明確,並使用適當的HTTP方法(如POST用於創建,PUT用於更新)。 設計清晰的請求和響應模型,以確保客戶端能夠理解預期格式。 2.數據驗證 在伺服器端進行嚴格的數據驗證,確保接收到的數據符合預期格 ...