https://www.sqlite.org/speed.html compares the speed of SQLite, MySQL, and PostgreSQL. The database versions used are quite old, but the test methodology is still instructive.
Summary
We ran a series of tests to measure the relative performance of SQLite 2.7.6, PostgreSQL 7.1.3, and MySQL 3.23.41. The general conclusions drawn from these experiments are:
- SQLite 2.7.6 is significantly faster (sometimes as much as 10 or 20 times faster) than the default PostgreSQL 7.1.3 installation on RedHat 7.2 for most common operations.
- SQLite 2.7.6 is often faster (sometimes more than twice as fast) than MySQL 3.23.41 for most common operations.
- SQLite does not execute CREATE INDEX or DROP TABLE as fast as the other databases. This matters little, though, because those are uncommon operations.
- SQLite works best when multiple operations are grouped together into a single transaction.
Notes on the tests:
- They do not measure multi-user performance or the optimization of complex queries involving multiple joins and subqueries.
- They were run on a relatively small (roughly 14 megabyte) database.
Test Environment
The platform used for these tests was a 1.6GHz Athlon with 1GB of memory and an IDE disk drive. The operating system was RedHat Linux 7.2 with a stock kernel.
The PostgreSQL and MySQL servers used were as delivered by default on RedHat 7.2 (PostgreSQL version 7.1.3 and MySQL version 3.23.41). Note in particular that the default MySQL configuration on RedHat 7.2 does not support transactions. Not having to support transactions gives MySQL a big speed advantage, yet SQLite was still able to stay ahead on most tests.
The default PostgreSQL configuration in RedHat 7.3 is too conservative (it is designed to work on a machine with 8MB of memory), and PostgreSQL can be made to run much faster with some configuration tuning. Matt Sergeant reports that he tuned his PostgreSQL installation, and his results show PostgreSQL and MySQL running at roughly the same speed. He tested SQLite in the same configuration that appears on the website: it was compiled with -O6 optimization and with the -DNDEBUG=1 switch, which disables the many "assert()" statements in the SQLite code. The -DNDEBUG=1 compiler option roughly doubles the speed of SQLite.
A simple Tcl script was used to generate and run all the tests. A copy of this Tcl script can be found in the SQLite source tree in the file tools/speedtest.tcl.
The Tests
Test 1: 1000 INSERTs
CREATE TABLE t1(a INTEGER, b INTEGER, c VARCHAR(100));
INSERT INTO t1 VALUES(1,13153,'thirteen thousand one hundred fifty three');
INSERT INTO t1 VALUES(2,75560,'seventy five thousand five hundred sixty');
... 995 lines omitted
INSERT INTO t1 VALUES(998,66289,'sixty six thousand two hundred eighty nine');
INSERT INTO t1 VALUES(999,24322,'twenty four thousand three hundred twenty two');
INSERT INTO t1 VALUES(1000,94142,'ninety four thousand one hundred forty two');
- Results:
| Engine                 | Time (seconds) |
| ---------------------- | -------------- |
| PostgreSQL             | 4.373          |
| MySQL                  | 0.114          |
| SQLite 2.7.6           | 13.061         |
| SQLite 2.7.6 (nosync)  | 0.223          |
Because SQLite has no central server to coordinate access, it must close and reopen the database file, and thus invalidate its cache, for each transaction. In this test, each SQL statement is a separate transaction, so the database file had to be opened and closed and the cache flushed 1000 times. In spite of this, the asynchronous version of SQLite is still nearly as fast as MySQL. Notice, however, how much slower the synchronous version is: SQLite calls fsync() after each synchronous transaction to make sure that all data is safely on the disk surface before continuing. For most of the 13 seconds of the synchronous test, SQLite sat idle waiting for disk I/O to complete.
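The sync/nosync gap is easy to reproduce with Python's standard sqlite3 module on a modern SQLite build. SQLite 2.7.6 toggled nosync differently, but in current SQLite the switch is the synchronous pragma; this is a minimal sketch, with illustrative table and file names:

import sqlite3
import time

def timed_inserts(sync_mode):
    # isolation_level=None puts the connection in autocommit mode, so every
    # INSERT is its own transaction, as in Test 1.
    conn = sqlite3.connect('sync_demo.db', isolation_level=None)
    conn.execute('PRAGMA synchronous = {}'.format(sync_mode))  # FULL waits on fsync(), OFF does not
    conn.execute('DROP TABLE IF EXISTS demo')
    conn.execute('CREATE TABLE demo(a INTEGER, b INTEGER)')
    t0 = time.time()
    for i in range(1000):
        conn.execute('INSERT INTO demo VALUES(?, ?)', (i, i * 2))
    elapsed = time.time() - t0
    conn.close()
    return elapsed

print('synchronous=FULL:', timed_inserts('FULL'))
print('synchronous=OFF :', timed_inserts('OFF'))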
Test 2: 25000 INSERTs in a transaction
BEGIN;
CREATE TABLE t2(a INTEGER, b INTEGER, c VARCHAR(100));
INSERT INTO t2 VALUES(1,59672,'fifty nine thousand six hundred seventy two');
... 24997 lines omitted
INSERT INTO t2 VALUES(24999,89569,'eighty nine thousand five hundred sixty nine');
INSERT INTO t2 VALUES(25000,94666,'ninety four thousand six hundred sixty six');
COMMIT;
- Results:
| Engine                 | Time (seconds) |
| ---------------------- | -------------- |
| PostgreSQL             | 4.900          |
| MySQL                  | 2.184          |
| SQLite 2.7.6           | 0.914          |
| SQLite 2.7.6 (nosync)  | 0.757          |
When all the INSERTs are put in a transaction, SQLite no longer has to close and reopen the database file or invalidate its cache between statements, and no fsync() is needed until the final COMMIT. Freed of that overhead, SQLite is much faster than both PostgreSQL and MySQL.
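The same effect can be reproduced with the standard sqlite3 module; a minimal sketch, with illustrative table and file names:

import sqlite3
import time

conn = sqlite3.connect('txn_demo.db', isolation_level=None)  # autocommit mode
conn.execute('DROP TABLE IF EXISTS demo')
conn.execute('CREATE TABLE demo(a INTEGER, b INTEGER)')
rows = [(i, i * 2) for i in range(25000)]

# One implicit transaction per INSERT: a commit (and normally an fsync) per row.
t0 = time.time()
for row in rows:
    conn.execute('INSERT INTO demo VALUES(?, ?)', row)
print('autocommit :', time.time() - t0)

# All INSERTs inside one transaction: a single commit at the end.
conn.execute('DELETE FROM demo')
t0 = time.time()
conn.execute('BEGIN')
conn.executemany('INSERT INTO demo VALUES(?, ?)', rows)
conn.execute('COMMIT')
print('transaction:', time.time() - t0)
conn.close()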
Test 3: 25000 INSERTs into an indexed table
BEGIN;
CREATE TABLE t3(a INTEGER, b INTEGER, c VARCHAR(100));
CREATE INDEX i3 ON t3(c);
... 24998 lines omitted
INSERT INTO t3 VALUES(24999,88509,'eighty eight thousand five hundred nine');
INSERT INTO t3 VALUES(25000,84791,'eighty four thousand seven hundred ninety one');
COMMIT;
- Results:
| Engine                 | Time (seconds) |
| ---------------------- | -------------- |
| PostgreSQL             | 8.175          |
| MySQL                  | 3.197          |
| SQLite 2.7.6           | 1.555          |
| SQLite 2.7.6 (nosync)  | 1.402          |
There have been reports that SQLite performs poorly on an indexed table. This test was added recently to refute those rumors. It is true that SQLite is not as fast at creating new index entries as the other engines (see Test 6 below), but its overall speed is still better.
Test 4: 100 SELECTs without an index
BEGIN;
SELECT count(*), avg(b) FROM t2 WHERE b>=0 AND b<1000;
SELECT count(*), avg(b) FROM t2 WHERE b>=100 AND b<1100;
... 96 lines omitted
SELECT count(*), avg(b) FROM t2 WHERE b>=9800 AND b<10800;
SELECT count(*), avg(b) FROM t2 WHERE b>=9900 AND b<10900;
COMMIT;
- Results:
| Engine                 | Time (seconds) |
| ---------------------- | -------------- |
| PostgreSQL             | 3.629          |
| MySQL                  | 2.760          |
| SQLite 2.7.6           | 2.494          |
| SQLite 2.7.6 (nosync)  | 2.526          |
This test does 100 queries against a 25000-entry table without an index, so each query requires a full table scan. Prior versions of SQLite used to be slower than PostgreSQL and MySQL on this test, but recent performance enhancements have made it the fastest of the group.
Test 5: 100 SELECTs on a string comparison
BEGIN;
SELECT count(*), avg(b) FROM t2 WHERE c LIKE '%one%';
SELECT count(*), avg(b) FROM t2 WHERE c LIKE '%two%';
... 96 lines omitted
SELECT count(*), avg(b) FROM t2 WHERE c LIKE '%ninety nine%';
SELECT count(*), avg(b) FROM t2 WHERE c LIKE '%one hundred%';
COMMIT;
- Results:
| Engine                 | Time (seconds) |
| ---------------------- | -------------- |
| PostgreSQL             | 13.409         |
| MySQL                  | 4.640          |
| SQLite 2.7.6           | 3.362          |
| SQLite 2.7.6 (nosync)  | 3.372          |
This test still does 100 full table scans, but it uses string comparisons instead of numerical comparisons. SQLite is over three times faster than PostgreSQL here and about 30% faster than MySQL.
Test 6: Creating an index
CREATE INDEX i2a ON t2(a);
CREATE INDEX i2b ON t2(b);
- Results:
| Engine                 | Time (seconds) |
| ---------------------- | -------------- |
| PostgreSQL             | 0.381          |
| MySQL                  | 0.318          |
| SQLite 2.7.6           | 0.777          |
| SQLite 2.7.6 (nosync)  | 0.659          |
SQLite is slower at creating new indices. This is not a big problem (new indices are not created very often), but it is being worked on; hopefully future versions of SQLite will do better here.
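A practical consequence of Tests 3 and 6 taken together: when bulk-loading, it is often cheaper to insert the rows first and build the index afterwards, since maintaining the index row-by-row costs more than one bulk index build. A minimal sketch of the comparison, using an in-memory database and illustrative names:

import sqlite3
import time

def load(index_first):
    conn = sqlite3.connect(':memory:')
    conn.execute('CREATE TABLE t(a INTEGER, c TEXT)')
    if index_first:
        conn.execute('CREATE INDEX i ON t(c)')  # every INSERT now also updates the index
    rows = [(i, 'row number {}'.format(i)) for i in range(25000)]
    t0 = time.time()
    with conn:  # one transaction around the whole load
        conn.executemany('INSERT INTO t VALUES(?, ?)', rows)
        if not index_first:
            conn.execute('CREATE INDEX i ON t(c)')  # build the index once, after loading
    elapsed = time.time() - t0
    conn.close()
    return elapsed

print('index before load:', load(True))
print('index after load :', load(False))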
Test 7: 5000 SELECTs with an index
SELECT count(*), avg(b) FROM t2 WHERE b>=0 AND b<100;
SELECT count(*), avg(b) FROM t2 WHERE b>=100 AND b<200;
SELECT count(*), avg(b) FROM t2 WHERE b>=200 AND b<300;
... 4994 lines omitted
SELECT count(*), avg(b) FROM t2 WHERE b>=499700 AND b<499800;
SELECT count(*), avg(b) FROM t2 WHERE b>=499800 AND b<499900;
SELECT count(*), avg(b) FROM t2 WHERE b>=499900 AND b<500000;
- Results:
| Engine                 | Time (seconds) |
| ---------------------- | -------------- |
| PostgreSQL             | 4.614          |
| MySQL                  | 1.270          |
| SQLite 2.7.6           | 1.121          |
| SQLite 2.7.6 (nosync)  | 1.162          |
All three database engines run faster when they have an index to work with, but SQLite is still the fastest.
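With a reasonably recent SQLite you can watch the planner switch strategies using EXPLAIN QUERY PLAN; the exact output text varies by version, but the scan-versus-search distinction is what matters. A minimal sketch:

import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE t2(a INTEGER, b INTEGER, c TEXT)')

# Without an index on b, the planner must scan the whole table.
query = 'SELECT count(*), avg(b) FROM t2 WHERE b>=0 AND b<100'
for row in conn.execute('EXPLAIN QUERY PLAN ' + query):
    print(row)  # e.g. (..., 'SCAN t2')

# With an index on b, it can do a range search instead.
conn.execute('CREATE INDEX i2b ON t2(b)')
for row in conn.execute('EXPLAIN QUERY PLAN ' + query):
    print(row)  # e.g. (..., 'SEARCH t2 USING COVERING INDEX i2b (b>? AND b<?)')
conn.close()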
Test 8: 1000 UPDATEs without an index
BEGIN;
UPDATE t1 SET b=b*2 WHERE a>=0 AND a<10;
UPDATE t1 SET b=b*2 WHERE a>=10 AND a<20;
... 996 lines omitted
UPDATE t1 SET b=b*2 WHERE a>=9980 AND a<9990;
UPDATE t1 SET b=b*2 WHERE a>=9990 AND a<10000;
COMMIT;
- Results:
| Engine                 | Time (seconds) |
| ---------------------- | -------------- |
| PostgreSQL             | 1.739          |
| MySQL                  | 8.410          |
| SQLite 2.7.6           | 0.637          |
| SQLite 2.7.6 (nosync)  | 0.638          |
MySQL is consistently five to ten times slower than PostgreSQL and SQLite on this test, which is puzzling, since MySQL is normally a very fast engine. Perhaps the problem has been fixed in later versions of MySQL.
Test 9: 25000 UPDATEs with an index
BEGIN;
UPDATE t2 SET b=468026 WHERE a=1;
UPDATE t2 SET b=121928 WHERE a=2;
... 24996 lines omitted
UPDATE t2 SET b=35065 WHERE a=24999;
UPDATE t2 SET b=347393 WHERE a=25000;
COMMIT;
- Results:
| Engine                 | Time (seconds) |
| ---------------------- | -------------- |
| PostgreSQL             | 18.797         |
| MySQL                  | 8.134          |
| SQLite 2.7.6           | 3.520          |
| SQLite 2.7.6 (nosync)  | 3.104          |
As recently as version 2.7.0, SQLite ran at about the same speed as MySQL on this test. But recent optimizations to SQLite have more than doubled the speed of UPDATEs.
Test 10: 25000 text UPDATEs with an index
BEGIN;
UPDATE t2 SET c='one hundred forty eight thousand three hundred eighty two' WHERE a=1;
UPDATE t2 SET c='three hundred sixty six thousand five hundred two' WHERE a=2;
... 24996 lines omitted
UPDATE t2 SET c='three hundred eighty three thousand ninety nine' WHERE a=24999;
UPDATE t2 SET c='two hundred fifty six thousand eight hundred thirty' WHERE a=25000;
COMMIT;
- Results:
| Engine                 | Time (seconds) |
| ---------------------- | -------------- |
| PostgreSQL             | 48.133         |
| MySQL                  | 6.982          |
| SQLite 2.7.6           | 2.408          |
| SQLite 2.7.6 (nosync)  | 1.725          |
Here again, version 2.7.0 of SQLite used to run at about the same speed as MySQL. But now version 2.7.6 is over two times faster than MySQL and over twenty times faster than PostgreSQL.
Test 11: INSERTs from a SELECT
BEGIN;
INSERT INTO t1 SELECT b,a,c FROM t2;
INSERT INTO t2 SELECT b,a,c FROM t1;
COMMIT;
- Results:
| Engine                 | Time (seconds) |
| ---------------------- | -------------- |
| PostgreSQL             | 61.364         |
| MySQL                  | 1.537          |
| SQLite 2.7.6           | 2.787          |
| SQLite 2.7.6 (nosync)  | 1.599          |
The asynchronous SQLite is just a shade slower than MySQL on this test. (MySQL seems to be especially adept at INSERT...SELECT statements.) The PostgreSQL engine spends most of its time waiting on disk I/O.
Test 12: DELETE without an index
DELETE FROM t2 WHERE c LIKE '%fifty%';
- Results:
| Engine                 | Time (seconds) |
| ---------------------- | -------------- |
| PostgreSQL             | 1.509          |
| MySQL                  | 0.975          |
| SQLite 2.7.6           | 4.004          |
| SQLite 2.7.6 (nosync)  | 0.560          |
The synchronous version of SQLite is the slowest of the group on this test, but the asynchronous version is the fastest. The difference is the extra time needed to execute fsync().
Test 13: DELETE with an index
DELETE FROM t2 WHERE a>10 AND a<20000;
- Results:
| Engine                 | Time (seconds) |
| ---------------------- | -------------- |
| PostgreSQL             | 1.316          |
| MySQL                  | 2.262          |
| SQLite 2.7.6           | 2.068          |
| SQLite 2.7.6 (nosync)  | 0.752          |
Here PostgreSQL is faster than MySQL. The asynchronous SQLite, however, is faster than both.
Test 14: A big INSERT after a big DELETE
INSERT INTO t2 SELECT * FROM t1;
- Results:
| Engine                 | Time (seconds) |
| ---------------------- | -------------- |
| PostgreSQL             | 13.168         |
| MySQL                  | 1.815          |
| SQLite 2.7.6           | 3.210          |
| SQLite 2.7.6 (nosync)  | 1.485          |
Older versions of SQLite (prior to version 2.4.0) would show decreasing performance after a sequence of DELETEs followed by new INSERTs. As this test shows, that problem has now been resolved.
Test 15: A big DELETE followed by many small INSERTs
BEGIN;
DELETE FROM t1;
INSERT INTO t1 VALUES(1,10719,'ten thousand seven hundred nineteen');
... 11997 lines omitted
INSERT INTO t1 VALUES(11999,72836,'seventy two thousand eight hundred thirty six');
INSERT INTO t1 VALUES(12000,64231,'sixty four thousand two hundred thirty one');
COMMIT;
- Results:
| Engine                 | Time (seconds) |
| ---------------------- | -------------- |
| PostgreSQL             | 4.556          |
| MySQL                  | 1.704          |
| SQLite 2.7.6           | 0.618          |
| SQLite 2.7.6 (nosync)  | 0.406          |
SQLite is very good at doing INSERTs within a transaction, which is probably why it is so much faster than the other databases on this test.
Test 16: DROP TABLE
DROP TABLE t1;
DROP TABLE t2;
DROP TABLE t3;
- Results:
| Engine                 | Time (seconds) |
| ---------------------- | -------------- |
| PostgreSQL             | 0.135          |
| MySQL                  | 0.015          |
| SQLite 2.7.6           | 0.939          |
| SQLite 2.7.6 (nosync)  | 0.254          |
SQLite is slower than the other databases at dropping tables, because it has to go through the database file and erase the records that belong to the table. MySQL and PostgreSQL, by contrast, use a separate file for each table, so they can drop a table simply by deleting that file, which is much faster. Dropping tables is not a common operation, so the extra time SQLite takes is not a big deal.
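The single-file design is easy to observe from Python: after a DROP TABLE, the database file keeps its size, and the freed pages go onto an internal free list. A minimal sketch using the freelist_count pragma of a modern SQLite, with an illustrative file name:

import os
import sqlite3

conn = sqlite3.connect('drop_demo.db')
conn.execute('DROP TABLE IF EXISTS t1')
conn.execute('CREATE TABLE t1(a INTEGER, c TEXT)')
with conn:
    conn.executemany('INSERT INTO t1 VALUES(?, ?)',
                     [(i, 'x' * 100) for i in range(10000)])

size_before = os.path.getsize('drop_demo.db')
with conn:
    conn.execute('DROP TABLE t1')  # erases the table's records inside the one database file
print('file size before/after DROP:', size_before, os.path.getsize('drop_demo.db'))
print('pages now on the free list:', conn.execute('PRAGMA freelist_count').fetchone()[0])
conn.close()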
Test Scenarios
- Empty-database scenario.
- Business scenario: a database of roughly 26 GB; see the data-model section of the performance test plan.
- Performance-testing scenario: run through an API or from the command line during performance, load, spike, and stress testing.
Test Code
The following code implements the tests in Python with the sqlite3 library, and also exports the SQL statements to .sql files so the same tests can be run from the command line. Support for driving our company's database through its Python API will be added later.
The latest version of the code lives at https://github.com/china-testing/python-testing-examples/blob/master/db/sqlite_speed.py
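For the command-line scenario, the exported .sql files can be replayed with the sqlite3 shell, or from Python with executescript(); a minimal sketch (the file name matches one of the exports produced by the script below, so run the script first):

import sqlite3
import time

# Replay one of the exported SQL files against a fresh database.
conn = sqlite3.connect('replay.db')
with open('2.sql') as f:  # produced by insert_25000_transaction() below
    script = f.read()
t0 = time.time()
conn.executescript(script)  # runs every statement, honoring the BEGIN/COMMIT in the file
conn.close()
print('replayed 2.sql in', time.time() - t0, 'seconds')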
import random
import time

from num2word import word  # converts an integer into its English words
import sqlite3
def open_database():
    db = 'pydb.db'
    conn = sqlite3.connect(db)
    return conn

def get_conn_and_cursor(name, sql=''):
    print("*"*20, name)
    conn = open_database()
    cursor = conn.cursor()
    if sql:
        cursor.execute(sql)
        conn.commit()
    return conn, cursor

def list2file(lists, filename):
    with open(filename, 'w') as f:
        for item in lists:
            f.write(item + "\n")
def insert_1000():
    # Test 1: 1000 INSERTs, committing after every statement so that each
    # INSERT runs in its own transaction.
    sqls = []
    sql = '''DROP TABLE IF EXISTS t1;'''
    sqls.append(sql)
    conn, cursor = get_conn_and_cursor("Test 1: 1000 INSERTs", sql)
    t1 = time.time()
    sql = '''CREATE TABLE t1(a INTEGER, b INTEGER, c VARCHAR(100));'''
    cursor.execute(sql)
    conn.commit()
    sqls.append(sql)
    for i in range(1, 1001):
        num = random.randint(1, 100000000)
        num_str = word(num).lower()
        sql = '''INSERT INTO t1 VALUES({}, {},'{}');'''.format(i, num, num_str)
        cursor.execute(sql)
        sqls.append(sql)
        conn.commit()
    print("*"*10, time.time()-t1)
    list2file(sqls, "1.sql")
    conn.close()
def insert_25000_transaction():
    # Test 2: 25000 INSERTs wrapped in a single explicit transaction.
    sqls = []
    sql = '''DROP TABLE IF EXISTS t2;'''
    sqls.append(sql)
    conn, cursor = get_conn_and_cursor("Test 2: 25000 INSERTs in a transaction", sql)
    t1 = time.time()
    cursor.execute("BEGIN")
    sqls.append("BEGIN;")
    sql = '''CREATE TABLE t2(a INTEGER, b INTEGER, c VARCHAR(100));'''
    sqls.append(sql)
    cursor.execute(sql)
    for i in range(1, 25001):
        num = random.randint(1, 100000000)
        num_str = word(num).lower()
        sql = '''INSERT INTO t2 VALUES({}, {},'{}');'''.format(i, num, num_str)
        sqls.append(sql)
        cursor.execute(sql)
    conn.commit()
    sqls.append("COMMIT;")
    print("*"*10, time.time()-t1)
    conn.close()
    list2file(sqls, "2.sql")
def insert_25000_transaction_index():
    # Test 3: 25000 INSERTs into an indexed table, in a single transaction.
    sqls = []
    sql = '''DROP TABLE IF EXISTS t3;'''
    sqls.append(sql)
    conn, cursor = get_conn_and_cursor("Test 3: 25000 INSERTs into an indexed table", sql)
    t1 = time.time()
    cursor.execute("BEGIN")
    sqls.append("BEGIN;")
    sql = '''CREATE TABLE t3(a INTEGER, b INTEGER, c VARCHAR(100));'''
    sqls.append(sql)
    cursor.execute(sql)
    sql = '''CREATE INDEX i3 ON t3(c);'''
    sqls.append(sql)
    cursor.execute(sql)
    for i in range(1, 25001):
        num = random.randint(1, 100000000)
        num_str = word(num).lower()
        sql = '''INSERT INTO t3 VALUES({}, {},'{}');'''.format(i, num, num_str)
        sqls.append(sql)
        cursor.execute(sql)
    conn.commit()
    sqls.append("COMMIT;")
    print("*"*10, time.time()-t1)
    conn.close()
    list2file(sqls, "3.sql")
def select_100_without_index():
    sqls = []
    conn, cursor = get_conn_and_cursor("Test 4: 100 SELECTs without an index")
    t1 = time.time()
    cursor.execute("BEGIN")
    sqls.append("BEGIN;")
    for i in range(100):
        sql = '''SELECT count(*), avg(b) FROM t2 WHERE b>={} AND b<{};'''.format(i*100, i*100+1000)
        sqls.append(sql)
        cursor.execute(sql)
    conn.commit()
    sqls.append("COMMIT;")
    print("*"*10, time.time()-t1)
    conn.close()
    list2file(sqls, "4.sql")

def select_100_comparison():
    sqls = []
    conn, cursor = get_conn_and_cursor("Test 5: 100 SELECTs on a string comparison")
    t1 = time.time()
    cursor.execute("BEGIN")
    sqls.append("BEGIN;")
    for i in range(1, 101):
        sql = '''SELECT count(*), avg(b) FROM t2 WHERE c LIKE '%{}%';'''.format(word(i).lower())
        sqls.append(sql)
        cursor.execute(sql)
    conn.commit()
    sqls.append("COMMIT;")
    print("*"*10, time.time()-t1)
    conn.close()
    list2file(sqls, "5.sql")
def create_index():
    sqls = []
    conn, cursor = get_conn_and_cursor("Test 6: Creating an index")
    t1 = time.time()
    sql = '''CREATE INDEX i2a ON t2(a);'''
    sqls.append(sql)
    cursor.execute(sql)
    conn.commit()
    sql = '''CREATE INDEX i2b ON t2(b);'''
    sqls.append(sql)
    cursor.execute(sql)
    conn.commit()
    print("*"*10, time.time()-t1)
    conn.close()
    list2file(sqls, "6.sql")

def select_5000_with_index():
    sqls = []
    conn, cursor = get_conn_and_cursor("Test 7: 5000 SELECTs with an index")
    t1 = time.time()
    for i in range(5000):
        sql = '''SELECT count(*), avg(b) FROM t2 WHERE b>={} AND b<{};'''.format(i*100, i*100+100)
        sqls.append(sql)
        cursor.execute(sql)
    conn.commit()
    print("*"*10, time.time()-t1)
    conn.close()
    list2file(sqls, "7.sql")
def update_1000_without_index():
    sqls = []
    conn, cursor = get_conn_and_cursor("Test 8: 1000 UPDATEs without an index")
    t1 = time.time()
    cursor.execute("BEGIN")
    sqls.append("BEGIN;")
    for i in range(1000):
        sql = '''UPDATE t1 SET b=b*2 WHERE a>={} AND a<{};'''.format(i*10, i*10+10)
        sqls.append(sql)
        cursor.execute(sql)
    conn.commit()
    sqls.append("COMMIT;")
    print("*"*10, time.time()-t1)
    conn.close()
    list2file(sqls, "8.sql")

def update_25000_with_index():
    sqls = []
    conn, cursor = get_conn_and_cursor("Test 9: 25000 UPDATEs with an index")
    t1 = time.time()
    cursor.execute("BEGIN")
    sqls.append("BEGIN;")
    for i in range(25000):
        sql = '''UPDATE t2 SET b={} WHERE a={};'''.format(random.randint(1, 100000000), i+1)
        sqls.append(sql)
        cursor.execute(sql)
    conn.commit()
    sqls.append("COMMIT;")
    print("*"*10, time.time()-t1)
    conn.close()
    list2file(sqls, "9.sql")

def update_25000_text_with_index():
    sqls = []
    conn, cursor = get_conn_and_cursor("Test 10: 25000 text UPDATEs with an index")
    t1 = time.time()
    cursor.execute("BEGIN")
    sqls.append("BEGIN;")
    for i in range(25000):
        sql = '''UPDATE t2 SET c='{}' WHERE a={};'''.format(word(random.randint(1, 100000000)).lower(), i+1)
        cursor.execute(sql)
        sqls.append(sql)
    conn.commit()
    sqls.append("COMMIT;")
    print("*"*10, time.time()-t1)
    conn.close()
    list2file(sqls, "10.sql")
def insert_from_select():
    sqls = []
    conn, cursor = get_conn_and_cursor("Test 11: INSERTs from a SELECT")
    t1 = time.time()
    cursor.execute("BEGIN")
    sqls.append("BEGIN;")
    sql = '''INSERT INTO t1 SELECT b,a,c FROM t2;'''
    sqls.append(sql)
    cursor.execute(sql)
    sql = '''INSERT INTO t2 SELECT b,a,c FROM t1;'''
    sqls.append(sql)
    cursor.execute(sql)
    conn.commit()
    sqls.append("COMMIT;")
    print("*"*10, time.time()-t1)
    conn.close()
    list2file(sqls, "11.sql")

def del_without_index():
    sqls = []
    conn, cursor = get_conn_and_cursor("Test 12: DELETE without an index")
    t1 = time.time()
    sql = '''DELETE FROM t2 WHERE c LIKE '%fifty%';'''
    sqls.append(sql)
    cursor.execute(sql)
    conn.commit()
    print("*"*10, time.time()-t1)
    conn.close()
    list2file(sqls, "12.sql")

def del_with_index():
    sqls = []
    conn, cursor = get_conn_and_cursor("Test 13: DELETE with an index")
    t1 = time.time()
    sql = '''DELETE FROM t2 WHERE a>10 AND a<20000;'''
    sqls.append(sql)
    cursor.execute(sql)
    conn.commit()
    print("*"*10, time.time()-t1)
    conn.close()
    list2file(sqls, "13.sql")
def big_insert_after_big_del():
    sqls = []
    conn, cursor = get_conn_and_cursor("Test 14: A big INSERT after a big DELETE")
    t1 = time.time()
    cursor.execute("BEGIN")
    sqls.append("BEGIN;")
    sql = '''DELETE FROM t2;'''
    sqls.append(sql)
    cursor.execute(sql)
    sql = '''INSERT INTO t2 SELECT * FROM t1;'''
    sqls.append(sql)
    cursor.execute(sql)
    conn.commit()
    sqls.append("COMMIT;")
    print("*"*10, time.time()-t1)
    conn.close()
    list2file(sqls, "14.sql")

def small_insert_after_big_del():
    sqls = []
    conn, cursor = get_conn_and_cursor("Test 15: A big DELETE followed by many small INSERTs")
    t1 = time.time()
    cursor.execute("BEGIN")
    sqls.append("BEGIN;")
    sql = '''DELETE FROM t1;'''
    sqls.append(sql)
    cursor.execute(sql)
    for i in range(1, 12001):
        num = random.randint(1, 100000000)
        num_str = word(num).lower()
        sql = '''INSERT INTO t1 VALUES({}, {},'{}');'''.format(i, num, num_str)
        sqls.append(sql)
        cursor.execute(sql)
    conn.commit()
    sqls.append("COMMIT;")
    print("*"*10, time.time()-t1)
    conn.close()
    list2file(sqls, "15.sql")
def drop_table():
    sqls = []
    conn, cursor = get_conn_and_cursor("Test 16: DROP TABLE")
    t1 = time.time()
    cursor.execute("BEGIN")
    sqls.append("BEGIN;")
    sql = '''DROP TABLE t1;'''
    sqls.append(sql)
    cursor.execute(sql)
    conn.commit()
    sql = '''DROP TABLE t2;'''
    sqls.append(sql)
    cursor.execute(sql)
    conn.commit()
    sql = '''DROP TABLE t3;'''
    sqls.append(sql)
    cursor.execute(sql)
    conn.commit()
    sqls.append("COMMIT;")
    print("*"*10, time.time()-t1)
    conn.close()
    list2file(sqls, "16.sql")
insert_1000()
insert_25000_transaction()
insert_25000_transaction_index()
select_100_without_index()
select_100_comparison()
create_index()
select_5000_with_index()
update_1000_without_index()
update_25000_with_index()
update_25000_text_with_index()
insert_from_select()
del_without_index()
del_with_index()
big_insert_after_big_del()
small_insert_after_big_del()
drop_table()