1. Download and start a clickhouse-server. By default, the server instance started below runs as the default user with no password.
docker run -d --name ch-server --ulimit nofile=262144:262144 -p 8123:8123 -p 9000:9000 -p 9009:9009 yandex/clickhouse-server
Or add a mount:
docker run -d --name ch-server --ulimit nofile=262144:262144 -p 8123:8123 -p 9000:9000 -p 9009:9009 -v e:\\taobao:/usr/src/app/ yandex/clickhouse-server
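If you want to check that the server actually came up before going further, something like the following should work (a sketch; ch-server is the container name used above):
# Confirm the container is running and look at the server log
docker ps --filter name=ch-server
docker logs ch-server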
2. Open a browser and enter the URL below; the query result will be downloaded.
http://localhost:8123/?query=SELECT%20%27Hello,%20ClickHouse!%27
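The same HTTP interface also works from the command line. A minimal sketch with curl (the FORMAT clause and the output file name are my own additions):
# GET, as in the browser example above, saving the result to a file
curl 'http://localhost:8123/?query=SELECT%20version()%20FORMAT%20CSV' -o result.csv
# Or POST the query in the request body, which avoids URL-encoding
curl 'http://localhost:8123/' --data-binary 'SELECT version() FORMAT CSV'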
3. Enter the Docker container's CLI. The server side is installed under the /etc/clickhouse-server directory; use clickhouse-client to run client commands.
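A minimal sketch of this step plus creating the target table. The column names and types follow the ones that appear in the import error further below; the MergeTree engine and the ORDER BY key are my assumptions:
docker exec -it ch-server clickhouse-client
-- inside clickhouse-client: a possible schema for tblSale
CREATE TABLE tblSale
(
    id UInt32,
    prod_id UInt32,
    user_id UInt32,
    cnt UInt32,
    total_price Float32,
    date DateTime
)
ENGINE = MergeTree
ORDER BY (date, id);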
4. Import data from a CSV file. Mount the local directory to /usr/src/app:
clickhouse-client --query='INSERT INTO tblSale FORMAT CSV' < /usr/src/app/tblsale_in.csv
The CSV must be comma-separated, and the DateTime values must not contain milliseconds; otherwise you get the error below:
clickhouse-client --query='INSERT INTO tblSale FORMAT CSV' < /usr/src/app/tblsale_in.csv
Code: 117. DB::Exception: Expected end of line: (at row 1)
Row 1:
Column 0, name: id, type: UInt32, parsed text: "1"
Column 1, name: prod_id, type: UInt32, parsed text: "1"
Column 2, name: user_id, type: UInt32, parsed text: "1"
Column 3, name: cnt, type: UInt32, parsed text: "1"
Column 4, name: total_price, type: Float32, parsed text: "1.00"
Column 5, name: date, type: DateTime, parsed text: "2022-04-22 17:56:00"
ERROR: garbage after DateTime: ".783<CARRIAGE RETURN><LINE FEED>2,2,"
ERROR: DateTime must be in YYYY-MM-DD hh:mm:ss or NNNNNNNNNN (unix timestamp, exactly 10 digits) format.
: While executing CSVRowInputFormat: data for INSERT was parsed from stdin: (in query: INSERT INTO tblSale FORMAT CSV). (INCORRECT_DATA)
My data was exported from SQL Server. BCP can export it with a comma delimiter, and the Convert with style 120 outputs the DateTime without milliseconds:
bcp "SELECT [id],[prod_id],[user_id] ,[cnt],[total_price],Convert(varchar(20) , date, 120) as date FROM [taobao].[dbo].[tblSale_in]" queryout tblSale_in.csv -c -t , -T -S .\SQLExpress
When the CSV was small, the import succeeded quickly. But with a CSV of 2 million rows, the command appeared to hang. Opening a new CLI and starting clickhouse-client got no response either, so I could not tell whether the server had died.
After waiting 5 minutes, this error appeared:
Code: 209. DB::NetException: Timeout exceeded while reading from socket (127.0.0.1:9000, 300000 ms). (SOCKET_TIMEOUT)
But the next day, importing even 20 million rows was fast, finishing within 1 minute; I do not know why.
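For very large files, the workaround I would try (a sketch, not something verified here) is to split the CSV into chunks and import them one by one, and/or raise the client-side timeout that corresponds to the 300000 ms in the error above:
# Split the CSV into 1,000,000-line chunks and import them sequentially
split -l 1000000 /usr/src/app/tblsale_in.csv /usr/src/app/tblsale_part_
for f in /usr/src/app/tblsale_part_*; do
    clickhouse-client --query='INSERT INTO tblSale FORMAT CSV' < "$f"
done
# Or raise the receive timeout (the default is 300 s, i.e. the 300000 ms above)
clickhouse-client --receive_timeout=1800 --query='INSERT INTO tblSale FORMAT CSV' < /usr/src/app/tblsale_in.csv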
========================= Performance comparison: ClickHouse vs. SQL Server 2012 =====================================
Hardware: ThinkPad 470, i5 CPU, 8 GB RAM, mechanical hard disk (HDD)
-- Daily sales count over 20 million rows: SQL Server takes 17 seconds
SELECT Convert(varchar(8) , date, 112) as date,count(*) as dayCnt FROM [taobao].[dbo].[tblSale] group by Convert(varchar(8) , date, 112) order by Convert(varchar(8) , date, 112)
-- Daily count over 25 million rows: ClickHouse takes 8.35 seconds on the first run, 0.3 seconds on the second
SELECT toYYYYMMDD(date), count(*) as dayCnt FROM tblSale group by toYYYYMMDD(date) order by toYYYYMMDD(date)
Continuing to import 20 million rows at a time and rerunning the query above: ClickHouse takes 0.58 s at 45 million rows, 0.5 s at 60 million rows (1.24 s on the first run), 0.68 s at 80 million rows, and 0.95 s at 100 million rows.
=================== Conclusion ==================================
For typical OLAP use cases, such as querying a user's history, do not go down the path of sharding databases and tables; that is a detour. Switch to a columnar database instead, build a large wide table, and avoid joins as much as possible; even a single machine can deliver this level of performance.
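To make the "wide table" idea concrete: instead of joining dimension tables at query time, store the attributes you would otherwise join in (product name, user region, and so on) directly in the fact table. All column names below are hypothetical:
-- Hypothetical denormalized wide table; attributes that would otherwise require joins are stored inline
CREATE TABLE tblSaleWide
(
    id UInt32,
    prod_id UInt32,
    prod_name String,
    prod_category String,
    user_id UInt32,
    user_region String,
    cnt UInt32,
    total_price Float32,
    date DateTime
)
ENGINE = MergeTree
ORDER BY (user_id, date);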