How Does Apache DolphinScheduler Implement Automated Packaging and Standalone/Cluster Deployment?

来源:https://www.cnblogs.com/DolphinScheduler/archive/2023/09/12/17697551.html




Apache DolphinScheduler is an open-source distributed task scheduling system designed to help users automate the scheduling and management of complex workflows. It supports many task types and can run either on a single machine or in a cluster. The sections below describe how to automate the packaging of DolphinScheduler and how to deploy it in standalone and cluster modes.

Automated Packaging

Required environment: Maven, JDK
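Before building, it helps to confirm that both tools are on the PATH. A quick sanity check (the dev branch generally needs JDK 8 or later and Maven 3; treat the exact version requirements as an assumption to verify against the project README):

mvn -version
java -version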

Run the following shell functions to pull the code and build the package. The tarball is produced at /opt/action/dolphinscheduler/dolphinscheduler-dist/target/apache-dolphinscheduler-dev-SNAPSHOT-bin.tar.gz:

# Pull the code (the function header was absent in the source; the name "clone" is assumed)
clone(){
sudo su - root <<EOF
cd /opt/action
git clone [email protected]:apache/dolphinscheduler.git
# git clone creates a lowercase "dolphinscheduler" directory
cd dolphinscheduler
git fetch origin dev
git checkout -b dev origin/dev
#git log --oneline
EOF
}

# Build the package
build(){
sudo su - root <<EOF
cd /opt/action/dolphinscheduler
mvn -B clean install -Prelease -Dmaven.test.skip=true -Dcheckstyle.skip=true -Dmaven.javadoc.skip=true
EOF
}
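If the clone and build functions above are saved in one script, appending these calls at the end runs the whole pipeline (the final ls only confirms the tarball exists):

clone
build
ls -lh /opt/action/dolphinscheduler/dolphinscheduler-dist/target/apache-dolphinscheduler-dev-SNAPSHOT-bin.tar.gz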

Standalone Deployment

1. Environment required to run DolphinScheduler

Required environment: JDK, ZooKeeper, MySQL

Initialize the ZooKeeper environment (ZooKeeper v3.8 or later is recommended).

Download the installation package from the official site: https://zookeeper.apache.org/

# Initialize ZooKeeper (the function header was absent in the source; the name "init_zookeeper" is assumed)
init_zookeeper(){
sudo su - root <<EOF
# Enter /opt (choose your own installation directory)
cd /opt
# Unpack the archive
tar -xvf apache-zookeeper-3.8.0-bin.tar.gz
# Rename the directory
sudo mv apache-zookeeper-3.8.0-bin zookeeper
# Enter the zookeeper directory
cd zookeeper/
# Create /opt/zookeeper/zkData to hold ZooKeeper's data files
mkdir zkData
# Enter the conf directory
cd conf/
# Copy zoo_sample.cfg to zoo.cfg, since ZooKeeper only reads zoo.cfg
cp zoo_sample.cfg zoo.cfg
# Point dataDir at the zkData directory created above
sed -i 's/\/tmp\/zookeeper/\/opt\/zookeeper\/zkData/g' zoo.cfg
# Stop any ZooKeeper instance that is already running (\$2 is escaped so awk, not the outer shell, expands it inside the heredoc)
ps -ef|grep QuorumPeerMain|grep -v grep|awk '{print "kill -9 " \$2}' |sh
# Start ZooKeeper
sh /opt/zookeeper/bin/zkServer.sh start
EOF
}
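Once the script finishes, a quick health check confirms ZooKeeper is actually serving. Port 2181 is the default client port, and the ruok probe only answers if it is on ZooKeeper's 4lw.commands.whitelist:

sh /opt/zookeeper/bin/zkServer.sh status
echo ruok | nc localhost 2181   # expect the reply "imok"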

JDK and MySQL installation are not covered in detail here.
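One assumption worth making explicit: the MySQL initialization in step 4 below expects a dolphinscheduler database to already exist and connects as root with the password root@123. A minimal preparation sketch, reusing the credentials that appear later in this article (adapt them to your environment):

mysql -uroot -proot@123 -e "CREATE DATABASE IF NOT EXISTS dolphinscheduler DEFAULT CHARACTER SET utf8mb4;"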

2. Initialize the configuration

2.1 Configuration file initialization

Place the initialization files in a dedicated directory (this article uses /opt/action/tool as the example).

  • 2.1.1 Create the directories
mkdir -p /opt/action/tool
mkdir -p /opt/Dsrelease
  • 2.1.2 Create the initialization file common.properties
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

# user data local directory path, please make sure the directory exists and have read write permissions
data.basedir.path=/tmp/dolphinscheduler

# resource storage type: HDFS, S3, NONE
resource.storage.type=HDFS

# resource store on HDFS/S3 path, resource file will store to this hadoop hdfs path, self configuration, please make sure the directory exists on hdfs and have read write permissions. "/dolphinscheduler" is recommended
resource.upload.path=/dolphinscheduler

# whether to startup kerberos
hadoop.security.authentication.startup.state=false

# java.security.krb5.conf path
java.security.krb5.conf.path=/opt/krb5.conf

# login user from keytab username
[email protected]

# login user from keytab path
login.user.keytab.path=/opt/hdfs.headless.keytab

# kerberos expire time, the unit is hour
kerberos.expire.time=2
# resource view suffixs
#resource.view.suffixs=txt,log,sh,bat,conf,cfg,py,java,sql,xml,hql,properties,json,yml,yaml,ini,js
# if resource.storage.type=HDFS, the user must have the permission to create directories under the HDFS root path
hdfs.root.user=root
# if resource.storage.type=S3, the value like: s3a://dolphinscheduler; if resource.storage.type=HDFS and namenode HA is enabled, you need to copy core-site.xml and hdfs-site.xml to conf dir
fs.defaultFS=file:///
aws.access.key.id=minioadmin
aws.secret.access.key=minioadmin
aws.region=us-east-1
aws.endpoint=http://localhost:9000
# resourcemanager port, the default value is 8088 if not specified
resource.manager.httpaddress.port=8088
# if resourcemanager HA is enabled, please set the HA IPs; if resourcemanager is single, keep this value empty
yarn.resourcemanager.ha.rm.ids=192.168.xx.xx,192.168.xx.xx
# if resourcemanager HA is enabled or not use resourcemanager, please keep the default value; If resourcemanager is single, you only need to replace aws2 to actual resourcemanager hostname
yarn.application.status.address=http://aws2:%s/ws/v1/cluster/apps/%s
# job history status url when application number threshold is reached(default 10000, maybe it was set to 1000)
yarn.job.history.status.address=http://aws2:19888/ws/v1/history/mapreduce/jobs/%s

# datasource encryption enable
datasource.encryption.enable=false

# datasource encryption salt
datasource.encryption.salt=!@#$%^&*

# data quality option
data-quality.jar.name=dolphinscheduler-data-quality-dev-SNAPSHOT.jar

#data-quality.error.output.path=/tmp/data-quality-error-data

# Network IP gets priority, default inner outer

# Whether hive SQL is executed in the same session
support.hive.oneSession=false

# use sudo or not, if set true, executing user is tenant user and deploy user needs sudo permissions; if set false, executing user is the deploy user and doesn't need sudo permissions
sudo.enable=true

# network interface preferred like eth0, default: empty
#dolphin.scheduler.network.interface.preferred=

# network IP gets priority, default: inner outer
#dolphin.scheduler.network.priority.strategy=default

# system env path
#dolphinscheduler.env.path=dolphinscheduler_env.sh

# development state
development.state=false

# rpc port
alert.rpc.port=50052

# Url endpoint for zeppelin RESTful API
zeppelin.rest.url=http://localhost:8080
  • 2.1.3 Create the initialization file core-site.xml
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://aws1</value>
</property>
<property>
<name>ha.zookeeper.quorum</name>
<value>aws1:2181</value>
</property>
<property>
<name>hadoop.proxyuser.root.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.root.groups</name>
<value>*</value>
</property>
</configuration>
  • 2.1.4 Create the initialization file hdfs-site.xml
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>/opt/bigdata/hadoop/ha/dfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/opt/bigdata/hadoop/ha/dfs/data</value>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>aws2:50090</value>
</property>
<property>
<name>dfs.namenode.checkpoint.dir</name>
<value>/opt/bigdata/hadoop/ha/dfs/secondary</value>
</property>
<property>
<name>dfs.nameservices</name>
<value>aws1</value>
</property>
<property>
<name>dfs.ha.namenodes.aws1</name>
<value>nn1,nn2</value>
</property>
<property>
<name>dfs.namenode.rpc-address.aws1.nn1</name>
<value>aws1:8020</value>
</property>
<property>
<name>dfs.namenode.rpc-address.aws1.nn2</name>
<value>aws2:8020</value>
</property>
<property>
<name>dfs.namenode.http-address.aws1.nn1</name>
<value>aws1:50070</value>
</property>
<property>
<name>dfs.namenode.http-address.aws1.nn2</name>
<value>aws2:50070</value>
</property>
<property>
<name>dfs.datanode.address</name>
<value>aws1:50010</value>
</property>
<property>
<name>dfs.datanode.ipc.address</name>
<value>aws1:50020</value>
</property>
<property>
<name>dfs.datanode.http.address</name>
<value>aws1:50075</value>
</property>
<property>
<name>dfs.datanode.https.address</name>
<value>aws1:50475</value>
</property>
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://aws1:8485;aws2:8485;aws3:8485/mycluster</value>
</property>
<property>
<name>dfs.journalnode.edits.dir</name>
<value>/opt/bigdata/hadoop/ha/dfs/jn</value>
</property>
<property>
<name>dfs.client.failover.proxy.provider.aws1</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
<name>dfs.ha.fencing.methods</name>
<value>sshfence</value>
</property>
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/root/.ssh/id_dsa</value>
</property>
<property>
<name>dfs.ha.automatic-failover.enabled</name>
<value>true</value>
</property>
</configuration>
  • 2.1.5 Upload the initialization JAR mysql-connector-java-8.0.16.jar
  • 2.1.6 Upload the initialization JAR ojdbc8.jar

2.2 Replace the initialization files

# Replace the initialization files (the function header was absent in the source; the name "replace_init_files" is assumed)
replace_init_files(){
cd /opt/Dsrelease
sudo rm -r $today/

echo "rm -r $today"

# /opt/release is where the build tarball is expected to have been copied
cd /opt/release

cp $packge_tar /opt/Dsrelease

cd /opt/Dsrelease

tar -zxvf $packge_tar
mv $packge $today
p_api_lib=/opt/Dsrelease/$today/api-server/libs/
p_master_lib=/opt/Dsrelease/$today/master-server/libs/
p_worker_lib=/opt/Dsrelease/$today/worker-server/libs/
p_alert_lib=/opt/Dsrelease/$today/alert-server/libs/
p_tools_lib=/opt/Dsrelease/$today/tools/libs/
p_st_lib=/opt/Dsrelease/$today/standalone-server/libs/


p_api_conf=/opt/Dsrelease/$today/api-server/conf/
p_master_conf=/opt/Dsrelease/$today/master-server/conf/
p_worker_conf=/opt/Dsrelease/$today/worker-server/conf/
p_alert_conf=/opt/Dsrelease/$today/alert-server/conf/
p_tools_conf=/opt/Dsrelease/$today/tools/conf/
p_st_conf=/opt/Dsrelease/$today/standalone-server/conf/


cp $p0 $p4 $p_api_lib
cp $p0 $p4 $p_master_lib
cp $p0 $p4 $p_worker_lib
cp $p0 $p4 $p_alert_lib
cp $p0 $p4 $p_tools_lib
cp $p0 $p4 $p_st_lib

echo "cp $p0 $p_api_lib"

cp $p1 $p2 $p3 $p_api_conf
cp $p1 $p2 $p3 $p_master_conf
cp $p1 $p2 $p3 $p_worker_conf
cp $p1 $p2 $p3 $p_alert_conf
cp $p1 $p2 $p3 $p_tools_conf
cp $p1 $p2 $p3 $p_st_conf



echo "cp $p1 $p2 $p3 $p_api_conf"
}

define_param(){

# Build artifacts produced by the packaging step
packge_tar=apache-dolphinscheduler-dev-SNAPSHOT-bin.tar.gz
packge=apache-dolphinscheduler-dev-SNAPSHOT-bin
# Initialization files and JARs prepared in section 2.1
p0=/opt/action/tool/mysql-connector-java-8.0.16.jar
p1=/opt/action/tool/common.properties
p2=/opt/action/tool/core-site.xml
p3=/opt/action/tool/hdfs-site.xml
p4=/opt/action/tool/ojdbc8.jar

# Release directory name, e.g. 0912 for September 12
today=`date +%m%d`
}
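Because the replacement function in section 2.2 consumes the variables set here, define_param must run first. If both functions sit in the same deployment script, the call order is simply:

define_param
replace_init_files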

2.3 Replace configuration file contents

# Replace configuration values (the function header was absent in the source; the name "replace_config" is assumed)
replace_config(){
sed -i 's/spark2/spark/g' /opt/Dsrelease/$today/worker-server/conf/dolphinscheduler_env.sh

cd /opt/Dsrelease/$today/bin/env/
sed -i '$a\export SPRING_PROFILES_ACTIVE=permission_shiro' dolphinscheduler_env.sh
sed -i '$a\export DATABASE="mysql"' dolphinscheduler_env.sh
sed -i '$a\export SPRING_DATASOURCE_DRIVER_CLASS_NAME="com.mysql.jdbc.Driver"' dolphinscheduler_env.sh
# Customize the MySQL connection settings as needed
sed -i '$a\export SPRING_DATASOURCE_URL="jdbc:mysql://ctyun6:3306/dolphinscheduler?useUnicode=true&characterEncoding=UTF-8&allowMultiQueries=true&allowPublicKeyRetrieval=true"' dolphinscheduler_env.sh
sed -i '$a\export SPRING_DATASOURCE_USERNAME="root"' dolphinscheduler_env.sh
sed -i '$a\export SPRING_DATASOURCE_PASSWORD="root@123"' dolphinscheduler_env.sh
echo "JDBC configuration replaced"
# Customize the ZooKeeper connection settings as needed
sed -i '$a\export REGISTRY_TYPE=${REGISTRY_TYPE:-zookeeper}' dolphinscheduler_env.sh
sed -i '$a\export REGISTRY_ZOOKEEPER_CONNECT_STRING=${REGISTRY_ZOOKEEPER_CONNECT_STRING:-ctyun6:2181}' dolphinscheduler_env.sh
echo "ZooKeeper configuration replaced"
sed -i 's/resource.storage.type=HDFS/resource.storage.type=NONE/' /opt/Dsrelease/$today/master-server/conf/common.properties
sed -i 's/resource.storage.type=HDFS/resource.storage.type=NONE/' /opt/Dsrelease/$today/worker-server/conf/common.properties
sed -i 's/resource.storage.type=HDFS/resource.storage.type=NONE/' /opt/Dsrelease/$today/alert-server/conf/common.properties
sed -i 's/resource.storage.type=HDFS/resource.storage.type=NONE/' /opt/Dsrelease/$today/api-server/conf/common.properties
sed -i 's/hdfs.root.user=root/resource.hdfs.root.user=root/' /opt/Dsrelease/$today/master-server/conf/common.properties
sed -i 's/hdfs.root.user=root/resource.hdfs.root.user=root/' /opt/Dsrelease/$today/worker-server/conf/common.properties
sed -i 's/hdfs.root.user=root/resource.hdfs.root.user=root/' /opt/Dsrelease/$today/alert-server/conf/common.properties
sed -i 's/hdfs.root.user=root/resource.hdfs.root.user=root/' /opt/Dsrelease/$today/api-server/conf/common.properties
sed -i 's/fs.defaultFS=file:/resource.fs.defaultFS=file:/' /opt/Dsrelease/$today/master-server/conf/common.properties
sed -i 's/fs.defaultFS=file:/resource.fs.defaultFS=file:/' /opt/Dsrelease/$today/worker-server/conf/common.properties
sed -i 's/fs.defaultFS=file:/resource.fs.defaultFS=file:/' /opt/Dsrelease/$today/alert-server/conf/common.properties
sed -i 's/fs.defaultFS=file:/resource.fs.defaultFS=file:/' /opt/Dsrelease/$today/api-server/conf/common.properties
sed -i '$a\resource.hdfs.fs.defaultFS=file:///' /opt/Dsrelease/$today/api-server/conf/common.properties
echo "common.properties replaced"
# Adjust master/worker heap sizes (api and alert can be tuned the same way); size them to the server's hardware, but keep the ratio Xms = Xmx = 2 * Xmn
cd /opt/Dsrelease/$today/
sed -i 's/Xms4g/Xms2g/g' worker-server/bin/start.sh
sed -i 's/Xmx4g/Xmx2g/g' worker-server/bin/start.sh
sed -i 's/Xmn2g/Xmn1g/g' worker-server/bin/start.sh
sed -i 's/Xms4g/Xms2g/g' master-server/bin/start.sh
sed -i 's/Xmx4g/Xmx2g/g' master-server/bin/start.sh
sed -i 's/Xmn2g/Xmn1g/g' master-server/bin/start.sh
echo "master/worker memory updated"
}
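Before starting any service, it is worth spot-checking that the replacements actually landed (paths follow the layout used above):

grep -E 'SPRING_DATASOURCE|REGISTRY_ZOOKEEPER' /opt/Dsrelease/$today/bin/env/dolphinscheduler_env.sh
grep 'resource.storage.type' /opt/Dsrelease/$today/api-server/conf/common.properties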

3. Remove the HDFS configuration

echo "開始刪除hdfs配置"
sudo rm /opt/Dsrelease/$today/api-server/conf/core-site.xml
sudo rm /opt/Dsrelease/$today/api-server/conf/hdfs-site.xml
sudo rm /opt/Dsrelease/$today/worker-server/conf/core-site.xml
sudo rm /opt/Dsrelease/$today/worker-server/conf/hdfs-site.xml
sudo rm /opt/Dsrelease/$today/master-server/conf/core-site.xml
sudo rm /opt/Dsrelease/$today/master-server/conf/hdfs-site.xml
sudo rm /opt/Dsrelease/$today/alert-server/conf/core-site.xml
sudo rm /opt/Dsrelease/$today/alert-server/conf/hdfs-site.xml
echo "結束刪除hdfs配置"
}

4. Initialize MySQL

init_mysql(){
# Import the schema shipped with the distribution; assumes the dolphinscheduler database already exists (see step 1)
sql_path="/opt/Dsrelease/$today/tools/sql/sql/dolphinscheduler_mysql.sql"
sourceCommand="source $sql_path"
echo $sourceCommand
echo "start source:"
mysql -hlocalhost -uroot -proot@123 -D "dolphinscheduler" -e "$sourceCommand"
echo "finished source:"
}
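A simple way to verify that the schema was imported is to list a few tables (table names vary across versions, so this only checks that the database is no longer empty):

mysql -hlocalhost -uroot -proot@123 -D dolphinscheduler -e "SHOW TABLES;" | head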

5. Start the DolphinScheduler services

stop_all_server(){
cd /opt/Dsrelease/$today
./bin/dolphinscheduler-daemon.sh stop api-server
./bin/dolphinscheduler-daemon.sh stop master-server
./bin/dolphinscheduler-daemon.sh stop worker-server
./bin/dolphinscheduler-daemon.sh stop alert-server
ps -ef|grep api-server|grep -v grep|awk '{print "kill -9 " $2}' |sh
ps -ef|grep master-server |grep -v grep|awk '{print "kill -9 " $2}' |sh
ps -ef|grep worker-server |grep -v grep|awk '{print "kill -9 " $2}' |sh
ps -ef|grep alert-server |grep -v grep|awk '{print "kill -9 " $2}' |sh
}

run_all_server(){
cd /opt/Dsrelease/$today
./bin/dolphinscheduler-daemon.sh start api-server
./bin/dolphinscheduler-daemon.sh start master-server
./bin/dolphinscheduler-daemon.sh start worker-server
./bin/dolphinscheduler-daemon.sh start alert-server
}
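After run_all_server, each component runs in its own JVM, so jps gives a quick liveness check (the process names below match the 3.x/dev code line; adjust for other versions). By default the web UI is served by the api-server on port 12345 under /dolphinscheduler/ui, with initial credentials admin / dolphinscheduler123; treat the port and path as defaults to verify against your build:

jps | grep -E 'ApiApplicationServer|MasterServer|WorkerServer|AlertServer'
curl -I http://localhost:12345/dolphinscheduler/ui/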

Cluster Deployment

1. Open the MySQL and ZooKeeper ports so the other nodes can reach them
2. Deploy and start the cluster

Copy the fully initialized directory to each designated server and start the required service(s) there to complete the cluster deployment; all nodes must connect to the same ZooKeeper and MySQL. A distribution sketch follows.
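A minimal distribution sketch, assuming passwordless SSH from the build machine and using node1/node2 as placeholder hostnames. Which services run where is up to you; splitting master/api and worker/alert across two nodes is just one example:

for host in node1 node2; do
  ssh root@$host "mkdir -p /opt/Dsrelease"
  scp -r /opt/Dsrelease/$today root@$host:/opt/Dsrelease/
done
# Example split: master + api on node1, worker + alert on node2
ssh root@node1 "cd /opt/Dsrelease/$today && ./bin/dolphinscheduler-daemon.sh start master-server && ./bin/dolphinscheduler-daemon.sh start api-server"
ssh root@node2 "cd /opt/Dsrelease/$today && ./bin/dolphinscheduler-daemon.sh start worker-server && ./bin/dolphinscheduler-daemon.sh start alert-server"

The registry (ZooKeeper) is how masters and workers discover each other, which is why every node's dolphinscheduler_env.sh must point at the same connect string.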

This article is published with support from 白鯨開源 (WhaleOps)!

