Each recipe in python3-cookbook uses three parts (problem, solution, and discussion) to explore the best way to solve a particular class of problem in Python 3, or, put differently, how to make better use of Python 3's own data structures, functions, classes, and other features for that class of problem. The book is a great help for deepening your understanding of Python 3 and improving your Python programming skills, and it is especially useful for learning how to improve the performance of Python programs. If you have the time, I strongly recommend reading it.
This post is a set of study notes. It only covers the parts of the book that relate to my own work and day-to-day use, and most of the example code is copied directly from the original text; that said, most of it has been verified in a Python 3.6 environment. Different areas of programming have different concerns, so if you are interested, go read the full book.
python3-cookbook:https://python3-cookbook.readthedocs.io/zh_CN/latest/index.html
6.1 Reading and Writing CSV Data
For CSV files, unless special handling is required, you should always use the csv module to read and write them so that as little as possible can go wrong. Below are a few simple examples of reading and writing CSV files.
The CSV file stocks.csv contains the following:
```
Symbol,Price,Date,Time,Change,Volume
"AA",39.48,"6/11/2007","9:36am",-0.18,181800
"AIG",71.38,"6/11/2007","9:36am",-0.15,195500
"AXP",62.58,"6/11/2007","9:36am",-0.46,935000
"BA",98.31,"6/11/2007","9:36am",+0.12,104800
"C",53.08,"6/11/2007","9:36am",-0.25,360900
"CAT",78.29,"6/11/2007","9:36am",-0.23,225400
```
```python
import csv

# Read the data as a list per row
with open('stocks.csv') as f:
    f_csv = csv.reader(f)
    # headers and each row are lists
    headers = next(f_csv)
    print(headers)
    for row in f_csv:
        print(row)
```
```python
import csv

# Read the data as a dict per row
with open('stocks.csv') as f:
    f_csv = csv.DictReader(f)
    # Each row is an OrderedDict; the first one prints as:
    # OrderedDict([('Symbol', 'AA'), ('Price', '39.48'), ('Date', '6/11/2007'),
    #              ('Time', '9:36am'), ('Change', '-0.18'), ('Volume', '181800')])
    for row in f_csv:
        print(row)
```
```python
headers = ['Symbol', 'Price', 'Date', 'Time', 'Change', 'Volume']
rows = [('AA', 39.48, '6/11/2007', '9:36am', -0.18, 181800),
        ('AIG', 71.38, '6/11/2007', '9:36am', -0.15, 195500),
        ('AXP', 62.58, '6/11/2007', '9:36am', -0.46, 935000),
        ]

# Write the data from a list of tuples
with open('stocks.csv', 'w') as f:
    f_csv = csv.writer(f)
    # Write a single row
    f_csv.writerow(headers)
    # Write multiple rows
    f_csv.writerows(rows)
```
```python
headers = ['Symbol', 'Price', 'Date', 'Time', 'Change', 'Volume']
rows = [{'Symbol': 'AA', 'Price': 39.48, 'Date': '6/11/2007',
         'Time': '9:36am', 'Change': -0.18, 'Volume': 181800},
        {'Symbol': 'AIG', 'Price': 71.38, 'Date': '6/11/2007',
         'Time': '9:36am', 'Change': -0.15, 'Volume': 195500},
        {'Symbol': 'AXP', 'Price': 62.58, 'Date': '6/11/2007',
         'Time': '9:36am', 'Change': -0.46, 'Volume': 935000},
        ]

# Write the data from a list of dicts
with open('stocks.csv', 'w') as f:
    f_csv = csv.DictWriter(f, headers)
    f_csv.writeheader()
    f_csv.writerows(rows)
```
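One thing worth remembering is that csv does not try to interpret the data: every field is read back as a string, so any type conversion is up to you. Below is a minimal sketch of per-column conversion, assuming the stocks.csv layout above; the col_types list is only an illustration, not part of the original example.

```python
import csv

# One converter per column: Price and Change become floats, Volume an int,
# everything else stays a string.
col_types = [str, float, str, str, float, int]

with open('stocks.csv') as f:
    f_csv = csv.reader(f)
    headers = next(f_csv)
    for row in f_csv:
        # Apply each converter to its matching field
        row = tuple(convert(value) for convert, value in zip(col_types, row))
        print(row)
```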
6.3 Parsing Simple XML Data
As the section title says, this only covers simple XML parsing. For small and uncomplicated XML files, the built-in xml.etree.ElementTree is sufficient; for complex XML documents, the third-party library lxml is more powerful and faster. For the example code below, the import can simply be replaced with from lxml.etree import parse.
```python
from urllib.request import urlopen
from xml.etree.ElementTree import parse

# Download and parse the XML feed
u = urlopen('http://planet.python.org/rss20.xml')
doc = parse(u)

# Find the title node under the channel node
e = doc.find('channel/title')
# Print the tag name: title
print(e.tag)
# Print the node text: Planet Python
print(e.text)
# Print an attribute value; this node has no attributes,
# so asking for 'xxx' returns None
print(e.get('xxx'))

# Iterate over the item nodes under channel
for item in doc.iterfind('channel/item'):
    # Look up the text of the relevant child nodes of item
    title = item.findtext('title')
    date = item.findtext('pubDate')
    link = item.findtext('link')

    print(title)
    print(date)
    print(link)
    print()
```
```
title
Planet Python
None
Codementor: Automating Everything With Python: Reading Time: 3 Mins
Sat, 22 Feb 2020 09:01:58 +0000
https://www.codementor.io/maxongzb/automating-everything-with-python-reading-time-3-mins-13v57qt7y6

Quansight Labs Blog: My Unexpected Dive into Open-Source Python
Fri, 21 Feb 2020 18:38:07 +0000
https://labs.quansight.org/blog/2020/02/my-unexpected-dive-into-open-source-python/

...
```
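As noted above, lxml can be used as a drop-in replacement here. A rough sketch, assuming lxml is installed (pip install lxml):

```python
from urllib.request import urlopen
from lxml.etree import parse  # drop-in replacement for xml.etree.ElementTree.parse

u = urlopen('http://planet.python.org/rss20.xml')
doc = parse(u)
# The same find/iterfind/findtext calls work as before
print(doc.findtext('channel/title'))  # Planet Python
```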
6.4 Parsing Huge XML Files Incrementally
If the XML file to be parsed is too large, consider incremental parsing with from xml.etree.ElementTree import iterparse. Note that of the two versions below, loading the entire XML document into memory performs better than incremental parsing, but its memory consumption is far higher.
Part of the XML file potholes.xml to be parsed is shown below; the task is to count the contents of the zip node inside each row node:
```xml
<response>
  <row>
    <row ...>
      <creation_date>2012-11-18T00:00:00</creation_date>
      <status>Completed</status>
      <completion_date>2012-11-18T00:00:00</completion_date>
      <service_request_number>12-01906549</service_request_number>
      <type_of_service_request>Pot Hole in Street</type_of_service_request>
      <current_activity>Final Outcome</current_activity>
      <most_recent_action>CDOT Street Cut ... Outcome</most_recent_action>
      <street_address>4714 S TALMAN AVE</street_address>
      <zip>60632</zip>
      <x_coordinate>1159494.68618856</x_coordinate>
      <y_coordinate>1873313.83503384</y_coordinate>
      <ward>14</ward>
      <police_district>9</police_district>
      <community_area>58</community_area>
      <latitude>41.808090232127896</latitude>
      <longitude>-87.69053684711305</longitude>
      <location latitude="41.808090232127896" longitude="-87.69053684711305" />
    </row>
    <row ...>
      <creation_date>2012-11-18T00:00:00</creation_date>
      <status>Completed</status>
      <completion_date>2012-11-18T00:00:00</completion_date>
      <service_request_number>12-01906695</service_request_number>
      <type_of_service_request>Pot Hole in Street</type_of_service_request>
      <current_activity>Final Outcome</current_activity>
      <most_recent_action>CDOT Street Cut ... Outcome</most_recent_action>
      <street_address>3510 W NORTH AVE</street_address>
      <zip>60647</zip>
      <x_coordinate>1152732.14127696</x_coordinate>
      <y_coordinate>1910409.38979075</y_coordinate>
      <ward>26</ward>
      <police_district>14</police_district>
      <community_area>23</community_area>
      <latitude>41.91002084292946</latitude>
      <longitude>-87.71435952353961</longitude>
      <location latitude="41.91002084292946" longitude="-87.71435952353961" />
    </row>
  </row>
</response>
```
Loading everything into memory and parsing:
```python
from xml.etree.ElementTree import parse
from collections import Counter

potholes_by_zip = Counter()

doc = parse('potholes.xml')
for pothole in doc.iterfind('row/row'):
    potholes_by_zip[pothole.findtext('zip')] += 1

for zipcode, num in potholes_by_zip.most_common():
    print(zipcode, num)
```
Incremental parsing:
```python
from xml.etree.ElementTree import iterparse
from collections import Counter

def parse_and_remove(filename, path):
    path_parts = path.split('/')
    # 'start' event: fired when an element is first created
    # 'end' event: fired when an element has been fully built
    doc = iterparse(filename, ('start', 'end'))
    # Skip the root element
    next(doc)

    tag_stack = []
    elem_stack = []
    for event, elem in doc:
        if event == 'start':
            tag_stack.append(elem.tag)
            elem_stack.append(elem)
        elif event == 'end':
            if tag_stack == path_parts:
                yield elem
                # This is the key line for keeping memory usage low:
                # detach the yielded element from its parent node
                elem_stack[-2].remove(elem)
            try:
                tag_stack.pop()
                elem_stack.pop()
            except IndexError:
                pass

potholes_by_zip = Counter()

data = parse_and_remove('potholes.xml', 'row/row')
for pothole in data:
    potholes_by_zip[pothole.findtext('zip')] += 1

for zipcode, num in potholes_by_zip.most_common():
    print(zipcode, num)
```
6.5 Turning a Dictionary into XML
from xml.etree.ElementTree import Element can be used to build XML, but note that it only stores string values, so anything else has to be converted to str first.
```python
from xml.etree.ElementTree import Element, tostring

def dict_to_xml(tag, d):
    """Build an XML element from a dictionary"""
    elem = Element(tag)
    for key, val in d.items():
        child = Element(key)
        # The text value must be a str
        child.text = str(val)
        elem.append(child)
    return elem

s = {'name': 'GOOG', 'shares': 100, 'price': 490.1}
e = dict_to_xml('stock', s)
# Set an attribute on the element
e.set('_id', '1234')
print(e)
print(tostring(e))
```
```
<Element 'stock' at 0x000001761DB01B88>
b'<stock _id="1234"><name>GOOG</name><shares>100</shares><price>490.1</price></stock>'
```
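A side benefit of building XML with Element objects rather than by string concatenation is that special characters in the values are escaped for you when serializing. If you assemble XML strings by hand instead, you have to escape them yourself, for example with xml.sax.saxutils. A small sketch of the difference, reusing the dict_to_xml function above:

```python
from xml.etree.ElementTree import tostring
from xml.sax.saxutils import escape, unescape

# Element-based construction escapes special characters during serialization
e = dict_to_xml('item', {'name': '<spam>'})
print(tostring(e))               # b'<item><name>&lt;spam&gt;</name></item>'

# When building XML strings manually, escape the values yourself
print(escape('<spam>'))          # &lt;spam&gt;
print(unescape('&lt;spam&gt;'))  # <spam>
```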
6.6 Parsing and Modifying XML
When modifying XML, note that in the example all modifications are performed through the parent node, which can be treated much like a list:

- Removing a node: use the parent's remove() method.
- Adding a node: use the parent's insert() and append() methods.
- Indexing and slicing: nodes support operations such as element[i] and element[i:j].
- Creating a new node: just use the Element class.
The prepared file pred.xml:
<?xml version="1.0"?> <stop> <id>14791</id> <nm>Clark & Balmoral</nm> <sri> <rt>22</rt> <d>North Bound</d> <dd>North Bound</dd> </sri> <cr>22</cr> <pre> <pt>5 MIN</pt> <fd>Howard</fd> <v>1378</v> <rn>22</rn> </pre> <pre> <pt>15 MIN</pt> <fd>Howard</fd> <v>1867</v> <rn>22</rn> </pre> </stop>
```python
>>> from xml.etree.ElementTree import parse, Element
>>> doc = parse('pred.xml')
>>> root = doc.getroot()
>>> root
<Element 'stop' at 0x100770cb0>
>>> # Remove the sri and cr nodes through the parent
>>> root.remove(root.find('sri'))
>>> root.remove(root.find('cr'))
>>> # Position of the nm node among the parent's children
>>> root.getchildren().index(root.find('nm'))
1
>>> # Create a new node and insert it right after nm
>>> e = Element('spam')
>>> e.text = 'This is a test'
>>> root.insert(2, e)
>>> doc.write('newpred.xml', xml_declaration=True)
>>>
```
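For reference, the resulting newpred.xml should look roughly like this: the sri and cr nodes are gone and the new spam node sits right after nm (the exact whitespace and XML declaration written by ElementTree may differ slightly, and the pre blocks are elided here):

```xml
<?xml version='1.0' encoding='us-ascii'?>
<stop>
  <id>14791</id>
  <nm>Clark &amp; Balmoral</nm>
  <spam>This is a test</spam>
  <pre>
    ...
  </pre>
  <pre>
    ...
  </pre>
</stop>
```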