[20170623] Using transportable tablespaces to recover data, part 2 (利用傳輸表空間恢複數據庫2.txt)

来源:http://www.cnblogs.com/lfree/archive/2017/06/23/7070263.html

--//Continuing this morning's test, this time with truncate. In theory it should work as well; my main goal is to check whether a log switch is required before the transport.
--//Reference: http://blog.itpub.net/267265/viewspace-2141166/

1. Environment:
SCOTT@book> @ &r/ver1
PORT_STRING                    VERSION        BANNER
------------------------------ -------------- --------------------------------------------------------------------------------
x86_64/Linux 2.4.xx            11.2.0.4.0     Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production

SCOTT@book> alter system archive log current ;
System altered.
--//Switch the log once before the test.

SCOTT@book> select count(*) from t;
    COUNT(*)
------------
       84192

SCOTT@book> select current_scn,sysdate from v$database ;
 CURRENT_SCN SYSDATE
------------ -------------------
 13276962316 2017-06-23 15:21:54

SCOTT@book> truncate table t ;
Table truncated.

2. Start the recovery test:
$ mkdir /home/oracle/aux1

RMAN> transport tablespace tea tablespace destination '/home/oracle/aux1' auxiliary destination '/home/oracle/aux1' until scn 13276962316;

RMAN-05026: WARNING: presuming following set of tablespaces applies to specified point-in-time

List of tablespaces expected to have UNDO segments
Tablespace SYSTEM
Tablespace UNDOTBS1

Creating automatic instance, with SID='hFvw'

initialization parameters used for automatic instance:
db_name=BOOK
db_unique_name=hFvw_tspitr_BOOK
compatible=11.2.0.4.0
db_block_size=8192
db_files=200
sga_target=1G
processes=80
db_create_file_dest=/home/oracle/aux1
log_archive_dest_1='location=/home/oracle/aux1'
#No auxiliary parameter file used


starting up automatic instance BOOK

Oracle instance started

Total System Global Area    1068937216 bytes

Fixed Size                     2260088 bytes
Variable Size                285213576 bytes
Database Buffers             771751936 bytes
Redo Buffers                   9711616 bytes
Automatic instance created
Running TRANSPORT_SET_CHECK on recovery set tablespaces
TRANSPORT_SET_CHECK completed successfully

contents of Memory Script:
{
# set requested point in time
set until  scn 13276962316;
# restore the controlfile
restore clone controlfile;
# mount the controlfile
sql clone 'alter database mount clone database';
# archive current online log
sql 'alter system archive log current';
}
executing Memory Script

executing command: SET until clause

Starting restore at 2017-06-23 15:24:49
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=127 device type=DISK
allocated channel: ORA_AUX_DISK_2
channel ORA_AUX_DISK_2: SID=133 device type=DISK
allocated channel: ORA_AUX_DISK_3
channel ORA_AUX_DISK_3: SID=139 device type=DISK

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: reading from backup piece /u01/app/oracle/fast_recovery_area/BOOK/autobackup/2017_06_23/o1_mf_s_947414679_dns04qp7_.bkp
channel ORA_AUX_DISK_1: piece handle=/u01/app/oracle/fast_recovery_area/BOOK/autobackup/2017_06_23/o1_mf_s_947414679_dns04qp7_.bkp tag=TAG20170623T104439
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
output file name=/home/oracle/aux1/BOOK/controlfile/o1_mf_dnsjl28h_.ctl
Finished restore at 2017-06-23 15:24:51

sql statement: alter database mount clone database

sql statement: alter system archive log current

contents of Memory Script:
{
# set requested point in time
set until  scn 13276962316;
# set destinations for recovery set and auxiliary set datafiles
set newname for clone datafile  1 to new;
set newname for clone datafile  3 to new;
set newname for clone datafile  2 to new;
set newname for clone tempfile  1 to new;
set newname for datafile  6 to
 "/home/oracle/aux1/tea01.dbf";
# switch all tempfiles
switch clone tempfile all;
# restore the tablespaces in the recovery set and the auxiliary set
restore clone datafile  1, 3, 2, 6;
switch clone datafile all;
}
executing Memory Script

executing command: SET until clause


executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

renamed tempfile 1 to /home/oracle/aux1/BOOK/datafile/o1_mf_temp_%u_.tmp in control file

Starting restore at 2017-06-23 15:24:56
using channel ORA_AUX_DISK_1
using channel ORA_AUX_DISK_2
using channel ORA_AUX_DISK_3

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00003 to /home/oracle/aux1/BOOK/datafile/o1_mf_undotbs1_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00006 to /home/oracle/aux1/tea01.dbf
channel ORA_AUX_DISK_1: reading from backup piece /home/oracle/backup/full_20170623_f8s7gn1n_1_1.bak
channel ORA_AUX_DISK_2: starting datafile backup set restore
channel ORA_AUX_DISK_2: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_2: restoring datafile 00001 to /home/oracle/aux1/BOOK/datafile/o1_mf_system_%u_.dbf
channel ORA_AUX_DISK_2: reading from backup piece /home/oracle/backup/full_20170623_f9s7gn1n_1_1.bak
channel ORA_AUX_DISK_3: starting datafile backup set restore
channel ORA_AUX_DISK_3: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_3: restoring datafile 00002 to /home/oracle/aux1/BOOK/datafile/o1_mf_sysaux_%u_.dbf
channel ORA_AUX_DISK_3: reading from backup piece /home/oracle/backup/full_20170623_f7s7gn1n_1_1.bak
channel ORA_AUX_DISK_1: piece handle=/home/oracle/backup/full_20170623_f8s7gn1n_1_1.bak tag=TAG20170623T100023
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:15
channel ORA_AUX_DISK_2: piece handle=/home/oracle/backup/full_20170623_f9s7gn1n_1_1.bak tag=TAG20170623T100023
channel ORA_AUX_DISK_2: restored backup piece 1
channel ORA_AUX_DISK_2: restore complete, elapsed time: 00:00:15
channel ORA_AUX_DISK_3: piece handle=/home/oracle/backup/full_20170623_f7s7gn1n_1_1.bak tag=TAG20170623T100023
channel ORA_AUX_DISK_3: restored backup piece 1
channel ORA_AUX_DISK_3: restore complete, elapsed time: 00:00:15
Finished restore at 2017-06-23 15:25:11

datafile 1 switched to datafile copy
input datafile copy RECID=17 STAMP=947431511 file name=/home/oracle/aux1/BOOK/datafile/o1_mf_system_dnsjl8cy_.dbf
datafile 3 switched to datafile copy
input datafile copy RECID=18 STAMP=947431511 file name=/home/oracle/aux1/BOOK/datafile/o1_mf_undotbs1_dnsjl8cp_.dbf
datafile 2 switched to datafile copy
input datafile copy RECID=19 STAMP=947431511 file name=/home/oracle/aux1/BOOK/datafile/o1_mf_sysaux_dnsjl8dc_.dbf
datafile 6 switched to datafile copy
input datafile copy RECID=20 STAMP=947431511 file name=/home/oracle/aux1/tea01.dbf

contents of Memory Script:
{
# set requested point in time
set until  scn 13276962316;
# online the datafiles restored or switched
sql clone "alter database datafile  1 online";
sql clone "alter database datafile  3 online";
sql clone "alter database datafile  2 online";
sql clone "alter database datafile  6 online";
# recover and open resetlogs
recover clone database tablespace  "TEA", "SYSTEM", "UNDOTBS1", "SYSAUX" delete archivelog;
alter clone database open resetlogs;
}
executing Memory Script

executing command: SET until clause

sql statement: alter database datafile  1 online

sql statement: alter database datafile  3 online

sql statement: alter database datafile  2 online

sql statement: alter database datafile  6 online

Starting recover at 2017-06-23 15:25:11
using channel ORA_AUX_DISK_1
using channel ORA_AUX_DISK_2
using channel ORA_AUX_DISK_3

starting media recovery
archived log for thread 1 with sequence 697 is already on disk as file /u01/app/oracle/archivelog/book/1_697_896605872.dbf
archived log for thread 1 with sequence 698 is already on disk as file /u01/app/oracle/archivelog/book/1_698_896605872.dbf
archived log for thread 1 with sequence 699 is already on disk as file /u01/app/oracle/archivelog/book/1_699_896605872.dbf
archived log for thread 1 with sequence 700 is already on disk as file /u01/app/oracle/archivelog/book/1_700_896605872.dbf
archived log for thread 1 with sequence 701 is already on disk as file /u01/app/oracle/archivelog/book/1_701_896605872.dbf
archived log for thread 1 with sequence 702 is already on disk as file /u01/app/oracle/archivelog/book/1_702_896605872.dbf
archived log for thread 1 with sequence 703 is already on disk as file /u01/app/oracle/archivelog/book/1_703_896605872.dbf
archived log for thread 1 with sequence 704 is already on disk as file /u01/app/oracle/archivelog/book/1_704_896605872.dbf
archived log for thread 1 with sequence 705 is already on disk as file /u01/app/oracle/archivelog/book/1_705_896605872.dbf
archived log for thread 1 with sequence 706 is already on disk as file /u01/app/oracle/archivelog/book/1_706_896605872.dbf
archived log for thread 1 with sequence 707 is already on disk as file /u01/app/oracle/archivelog/book/1_707_896605872.dbf
archived log for thread 1 with sequence 708 is already on disk as file /u01/app/oracle/archivelog/book/1_708_896605872.dbf
archived log for thread 1 with sequence 709 is already on disk as file /u01/app/oracle/archivelog/book/1_709_896605872.dbf
archived log file name=/u01/app/oracle/archivelog/book/1_697_896605872.dbf thread=1 sequence=697
archived log file name=/u01/app/oracle/archivelog/book/1_698_896605872.dbf thread=1 sequence=698
archived log file name=/u01/app/oracle/archivelog/book/1_699_896605872.dbf thread=1 sequence=699
archived log file name=/u01/app/oracle/archivelog/book/1_700_896605872.dbf thread=1 sequence=700
archived log file name=/u01/app/oracle/archivelog/book/1_701_896605872.dbf thread=1 sequence=701
archived log file name=/u01/app/oracle/archivelog/book/1_702_896605872.dbf thread=1 sequence=702
archived log file name=/u01/app/oracle/archivelog/book/1_703_896605872.dbf thread=1 sequence=703
archived log file name=/u01/app/oracle/archivelog/book/1_704_896605872.dbf thread=1 sequence=704
archived log file name=/u01/app/oracle/archivelog/book/1_705_896605872.dbf thread=1 sequence=705
archived log file name=/u01/app/oracle/archivelog/book/1_706_896605872.dbf thread=1 sequence=706
archived log file name=/u01/app/oracle/archivelog/book/1_707_896605872.dbf thread=1 sequence=707
archived log file name=/u01/app/oracle/archivelog/book/1_708_896605872.dbf thread=1 sequence=708
archived log file name=/u01/app/oracle/archivelog/book/1_709_896605872.dbf thread=1 sequence=709
media recovery complete, elapsed time: 00:00:04
Finished recover at 2017-06-23 15:25:16

database opened

contents of Memory Script:
{
# make read only the tablespace that will be exported
sql clone 'alter tablespace  TEA read only';
# create directory for datapump export
sql clone "create or replace directory STREAMS_DIROBJ_DPDIR as ''
/home/oracle/aux1''";
}
executing Memory Script

sql statement: alter tablespace  TEA read only

sql statement: create or replace directory STREAMS_DIROBJ_DPDIR as ''/home/oracle/aux1''
Performing export of metadata...
   EXPDP> Starting "SYS"."TSPITR_EXP_hFvw":
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
   EXPDP> Master table "SYS"."TSPITR_EXP_hFvw" successfully loaded/unloaded
   EXPDP> ******************************************************************************
   EXPDP> Dump file set for SYS.TSPITR_EXP_hFvw is:
   EXPDP>   /home/oracle/aux1/dmpfile.dmp
   EXPDP> ******************************************************************************
   EXPDP> Datafiles required for transportable tablespace TEA:
   EXPDP>   /home/oracle/aux1/tea01.dbf
   EXPDP> Job "SYS"."TSPITR_EXP_hFvw" successfully completed at Fri Jun 23 15:25:53 2017 elapsed 0 00:00:31
Export completed

/*
   The following command may be used to import the tablespaces.
   Substitute values for <logon> and <directory>.
   impdp <logon> directory=<directory> dumpfile= 'dmpfile.dmp' transport_datafiles= /home/oracle/aux1/tea01.dbf
*/
--------------------------------------------------------------
-- Start of sample PL/SQL script for importing the tablespaces
--------------------------------------------------------------
-- creating directory objects
CREATE DIRECTORY STREAMS$DIROBJ$1 AS  '/home/oracle/aux1/';
CREATE DIRECTORY STREAMS$DIROBJ$DPDIR AS  '/home/oracle/aux1';
/* PL/SQL Script to import the exported tablespaces */
DECLARE
  -- the datafiles
  tbs_files     dbms_streams_tablespace_adm.file_set;
  cvt_files     dbms_streams_tablespace_adm.file_set;
  -- the dumpfile to import
  dump_file     dbms_streams_tablespace_adm.file;
  dp_job_name   VARCHAR2(30) := NULL;
  -- names of tablespaces that were imported
  ts_names       dbms_streams_tablespace_adm.tablespace_set;
BEGIN
  -- dump file name and location
  dump_file.file_name :=  'dmpfile.dmp';
  dump_file.directory_object := 'STREAMS$DIROBJ$DPDIR';
  -- forming list of datafiles for import
  tbs_files( 1).file_name :=  'tea01.dbf';
  tbs_files( 1).directory_object :=  'STREAMS$DIROBJ$1';
  -- import tablespaces
  dbms_streams_tablespace_adm.attach_tablespaces(
    datapump_job_name      => dp_job_name,
    dump_file              => dump_file,
    tablespace_files       => tbs_files,
    converted_files        => cvt_files,
    tablespace_names       => ts_names);
  -- output names of imported tablespaces
  IF ts_names IS NOT NULL AND ts_names.first IS NOT NULL THEN
    FOR i IN ts_names.first .. ts_names.last LOOP
      dbms_output.put_line('imported tablespace '|| ts_names(i));
    END LOOP;
  END IF;
END;
/
-- dropping directory objects
DROP DIRECTORY STREAMS$DIROBJ$1;
DROP DIRECTORY STREAMS$DIROBJ$DPDIR;
--------------------------------------------------------------
-- End of sample PL/SQL script
--------------------------------------------------------------

Removing automatic instance
shutting down automatic instance
database closed
database dismounted
Oracle instance shut down
Automatic instance removed
auxiliary instance file /home/oracle/aux1/BOOK/datafile/o1_mf_temp_dnsjlx0l_.tmp deleted
auxiliary instance file /home/oracle/aux1/BOOK/onlinelog/o1_mf_3_dnsjlwnd_.log deleted
auxiliary instance file /home/oracle/aux1/BOOK/onlinelog/o1_mf_2_dnsjlwj5_.log deleted
auxiliary instance file /home/oracle/aux1/BOOK/onlinelog/o1_mf_1_dnsjlw9h_.log deleted
auxiliary instance file /home/oracle/aux1/BOOK/datafile/o1_mf_sysaux_dnsjl8dc_.dbf deleted
auxiliary instance file /home/oracle/aux1/BOOK/datafile/o1_mf_undotbs1_dnsjl8cp_.dbf deleted
auxiliary instance file /home/oracle/aux1/BOOK/datafile/o1_mf_system_dnsjl8cy_.dbf deleted
auxiliary instance file /home/oracle/aux1/BOOK/controlfile/o1_mf_dnsjl28h_.ctl deleted
--//OK, no problems this time. I still don't know what went wrong in this morning's first attempt.

$ ls -l /u01/app/oracle/archivelog/book
total 215340
-rw-r----- 1 oracle oinstall    79360 2017-06-23 09:54:41 1_695_896605872.dbf
-rw-r----- 1 oracle oinstall 11775488 2017-06-23 09:59:36 1_696_896605872.dbf
-rw-r----- 1 oracle oinstall   101888 2017-06-23 10:01:06 1_697_896605872.dbf
-rw-r----- 1 oracle oinstall     7680 2017-06-23 10:01:18 1_698_896605872.dbf
-rw-r----- 1 oracle oinstall    13824 2017-06-23 10:01:43 1_699_896605872.dbf
-rw-r----- 1 oracle oinstall 11830272 2017-06-23 10:07:44 1_700_896605872.dbf
-rw-r----- 1 oracle oinstall   298496 2017-06-23 10:10:11 1_701_896605872.dbf
-rw-r----- 1 oracle oinstall    99328 2017-06-23 10:12:26 1_702_896605872.dbf
-rw-r----- 1 oracle oinstall   745984 2017-06-23 10:15:53 1_703_896605872.dbf
-rw-r----- 1 oracle oinstall 50181632 2017-06-23 10:24:41 1_704_896605872.dbf
-rw-r----- 1 oracle oinstall 50181632 2017-06-23 10:24:42 1_705_896605872.dbf
-rw-r----- 1 oracle oinstall 50181632 2017-06-23 10:24:44 1_706_896605872.dbf
-rw-r----- 1 oracle oinstall 38688768 2017-06-23 13:55:30 1_707_896605872.dbf
-rw-r----- 1 oracle oinstall  5877760 2017-06-23 15:21:37 1_708_896605872.dbf
-rw-r----- 1 oracle oinstall   141824 2017-06-23 15:24:55 1_709_896605872.dbf

--//Judging from the file timestamps, a log switch was executed automatically during the transport. The alert log confirms it:
Fri Jun 23 15:21:37 2017
ALTER SYSTEM ARCHIVE LOG
Fri Jun 23 15:21:37 2017
Beginning log switch checkpoint up to RBA [0x2c5.2.10], SCN: 13276962295
Thread 1 advanced to log sequence 709 (LGWR switch)
  Current log# 3 seq# 709 mem# 0: /mnt/ramdisk/book/redo03.log
Archived Log entry 1248 added for thread 1 sequence 708 ID 0x4fb7d86e dest 1:
Fri Jun 23 15:21:38 2017
Completed checkpoint up to RBA [0x2c5.2.10], SCN: 13276962295
Fri Jun 23 15:24:55 2017
ALTER SYSTEM ARCHIVE LOG
Fri Jun 23 15:24:55 2017
Beginning log switch checkpoint up to RBA [0x2c6.2.10], SCN: 13276962601
Thread 1 advanced to log sequence 710 (LGWR switch)
  Current log# 1 seq# 710 mem# 0: /mnt/ramdisk/book/redo01.log
Archived Log entry 1249 added for thread 1 sequence 709 ID 0x4fb7d86e dest 1:
Fri Jun 23 15:25:04 2017
Incremental checkpoint up to RBA [0x2c5.2.0], current log tail at RBA [0x2c6.a.0]
Fri Jun 23 15:29:53 2017
Completed checkpoint up to RBA [0x2c6.2.10], SCN: 13276962601
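The automatic switch can also be confirmed from v$archived_log. A minimal sketch, run as a privileged user; the sequence range 707-710 is specific to this test run and would differ elsewhere:

```shell
# List the archived logs produced around the transport operation.
# Sequence 709, completed at 15:24:55, is the switch RMAN triggered itself.
sqlplus -S / as sysdba <<'EOF'
select sequence#, first_change#, next_change#,
       to_char(completion_time,'yyyy-mm-dd hh24:mi:ss') completed
  from v$archived_log
 where thread# = 1 and sequence# between 707 and 710
 order by sequence#;
EOF
```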

3. Import the data:
SCOTT@book> grant dba to ttt IDENTIFIED BY ttt;
Grant succeeded.

$ cp /home/oracle/aux1/dmpfile.dmp /u01/app/oracle/admin/book/dpdump/


$ impdp system/oracle dumpfile=dmpfile.dmp transport_datafiles=/home/oracle/aux1/tea01.dbf REMAP_TABLESPACE=TEA:MILK
REMAP_SCHEMA=scott:ttt logfile=impdp.log

Import: Release 11.2.0.4.0 - Production on Fri Jun 23 15:36:59 2017

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01":  system/a****** dumpfile=dmpfile.dmp
transport_datafiles=/home/oracle/aux1/tea01.dbf REMAP_TABLESPACE=TEA:MILK REMAP_SCHEMA=scott:ttt logfile=impdp.log
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Job "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully completed at Fri Jun 23 15:37:03 2017 elapsed 0 00:00:03

--//OK, success!

4. Verify:
SCOTT@book> select count(*) from scott.t;
    COUNT(*)
------------
           0

SCOTT@book> select count(*) from ttt.t;
    COUNT(*)
------------
       84192

--//The test passes. RMAN's transport tablespace command turns out to be quite simple to use; it wraps a fairly complex sequence of operations.
--//As for the first failure this morning, it can no longer be reproduced, so I'll stop investigating it.
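For reference, the whole procedure condenses to three steps. A sketch reusing the paths, SCN, and remap names from this test; the dump-file copy assumes the default DATA_PUMP_DIR location and would need adjusting elsewhere:

```shell
# 1. Recover the TEA tablespace to the SCN captured before the truncate,
#    letting RMAN build an automatic auxiliary instance under /home/oracle/aux1.
mkdir -p /home/oracle/aux1
rman target / <<'EOF'
transport tablespace tea
  tablespace destination '/home/oracle/aux1'
  auxiliary destination '/home/oracle/aux1'
  until scn 13276962316;
EOF

# 2. Make the generated dump file visible to the default Data Pump directory.
cp /home/oracle/aux1/dmpfile.dmp /u01/app/oracle/admin/book/dpdump/

# 3. Plug the recovered tablespace back in, remapping tablespace and schema
#    so the original SCOTT.T and tablespace TEA are left untouched.
impdp system/oracle dumpfile=dmpfile.dmp \
  transport_datafiles=/home/oracle/aux1/tea01.dbf \
  REMAP_TABLESPACE=TEA:MILK REMAP_SCHEMA=scott:ttt logfile=impdp.log
```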



