1. Complete the ball-game simulation program and test it.
Only the end of the code needs a small change: if the code is correct, the program runs; otherwise it prints Error.
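The change at the end amounts to wrapping the call to main() in a try/except, so a faulty program prints Error instead of crashing with a traceback. A minimal sketch of that pattern (assuming main() is already defined, as in the full listing below):

try:
    main()            # runs if the code is correct
except:
    print("Error!")   # any exception is reported as Error instead of a traceback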
from random import random

# print the program's introduction
def printIntro():
    print("19信計2班23號鄧若言")
    print("這個程式模擬兩個選手A和B的乒乓球比賽")
    print("程式運行需要A和B的能力值(以0到1之間的小數表示)")

# get the simulation parameters from the user
def printInputs():
    a = eval(input("請輸入選手A的能力值(0-1): "))
    b = eval(input("請輸入選手B的能力值(0-1): "))
    n = eval(input("模擬比賽的場次: "))
    return a, b, n

# simulate n matches
def simNGames(n, probA, probB):
    winsA, winsB = 0, 0
    for i in range(n):
        gamesA, gamesB = 0, 0
        while gamesA < 4 and gamesB < 4:    # best of 7: first to win 4 games takes the match
            scoreA, scoreB = simOneGame(probA, probB)
            if scoreA > scoreB:
                gamesA += 1
            else:
                gamesB += 1
        if gamesA > gamesB:
            winsA += 1
        else:
            winsB += 1
    return winsA, winsB

# simulate one game
def simOneGame(probA, probB):
    scoreA, scoreB = 0, 0                 # initialise the scores of A and B
    serving = "A"
    while not gameOver(scoreA, scoreB):   # play rallies until the game is over
        if scoreA == 10 and scoreB == 10:
            return simOneGame2(probA, probB)   # 10:10 -> switch to deuce rules
        if serving == "A":
            if random() < probA:          # a random number decides who wins the rally
                scoreA += 1
            else:
                serving = "B"
        else:
            if random() < probB:
                scoreB += 1
            else:
                serving = "A"
    return scoreA, scoreB

# simulate the deuce phase starting from 10:10
def simOneGame2(probA, probB):
    scoreA, scoreB = 10, 10
    serving = "A"
    while not gameOver2(scoreA, scoreB):
        if serving == "A":
            if random() < probA:
                scoreA += 1
            else:
                serving = "B"
        else:
            if random() < probB:
                scoreB += 1
            else:
                serving = "A"
    return scoreA, scoreB

# a normal game ends when either player reaches 11
def gameOver(a, b):
    return a == 11 or b == 11

# the deuce phase ends when the lead reaches 2 points
def gameOver2(a, b):
    return abs(a - b) >= 2

# print the summary of the simulation
def printSummary(winsA, winsB):
    n = winsA + winsB
    print("競技分析開始,共模擬{}場比賽".format(n))
    print("選手A獲勝{}場比賽,占比{:0.1%}".format(winsA, winsA/n))
    print("選手B獲勝{}場比賽,占比{:0.1%}".format(winsB, winsB/n))

# main program
def main():
    printIntro()
    probA, probB, n = printInputs()
    winsA, winsB = simNGames(n, probA, probB)
    printSummary(winsA, winsB)

try:
    main()           # if the code is correct the simulation runs
except:
    print("Error!")  # otherwise an error message is printed
The result is as follows:
The test confirms the code runs without errors.
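As an extra sanity check (not part of the assignment, just a sketch reusing the simNGames defined above): with an ability value of 1.0 for A and 0.0 for B, player A wins every rally on serve, so A should win every simulated match.

winsA, winsB = simNGames(100, 1.0, 0.0)   # extreme ability values chosen for the check
print(winsA, winsB)                        # expected output: 100 0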
2. Use the requests library's get() function to visit the Sogou homepage 20 times (the URL can be swapped, e.g. for Bing), print the returned status and the text content, and compute the lengths of the page content given by the text attribute and the content attribute.
For more about the requests library, see the following link:
https://www.cnblogs.com/deng11/p/12863994.html
import requests

for i in range(20):
    r = requests.get("https://www.sogou.com", timeout=30)   # the URL can be swapped for another page
    r.raise_for_status()
    r.encoding = 'utf-8'
    print('狀態={}'.format(r.status_code))
    print(r.text)
    print('text屬性長度{},content屬性長度{}'.format(len(r.text), len(r.content)))
The result is as follows (showing one of the 20 requests; the content of the text attribute is too long to display):
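The two lengths differ because r.text is the decoded string while r.content is the raw bytes: on a UTF-8 page each Chinese character counts as one position in text but three bytes in content, so len(r.content) comes out larger. A small sketch of the same comparison on a single string, with no network access needed:

s = "搜狗"                       # two Chinese characters
print(len(s))                    # 2  (characters, like r.text)
print(len(s.encode('utf-8')))    # 6  (bytes, like r.content)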
3. Given the HTML page below, stored as a string, complete the following requirements:
(1) Print the content of the head tag together with the last two digits of your student ID
(2) Get the content of the body tag
(3) Get the tag object whose id is "first"
(4) Extract and print the Chinese characters in the HTML page
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>菜鳥教程(runoob.com)</title>
</head>
<body>
<h1>我的第一個標題</h1>
<p id="first">我的第一個段落。</p>
</body>
<table border="1">
<tr>
<td>row 1, cell 1</td>
<td>row 1, cell 2</td>
</tr>
</table>
</html>
The code is as follows:
import re
from bs4 import BeautifulSoup

r = '''
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>菜鳥教程(runoob.com) 23號的作業</title>
</head>
<body>
<h1>我的第一個標題</h1>
<p id="first">我的第一個段落。</p>
</body>
<table border="1">
<tr>
<td>row 1, cell 1</td>
<td>row 1, cell 2</td>
</tr>
</table>
</html>
'''

demo = BeautifulSoup(r, "html.parser")
print(demo.head)                                    # (1) head tag; the title carries the last two digits of the student ID (23)
print(demo.body)                                    # (2) content of the body tag
print(demo.find(id="first"))                        # (3) the tag object whose id is "first"
print(re.findall('[\u4e00-\u9fa5]+', demo.text))    # (4) the Chinese characters in the page
The output is as follows:
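For reference, the same lookups can also be written with find() and CSS selectors; a small sketch assuming the demo object built above:

print(demo.find('head'))           # equivalent to demo.head
print(demo.find(id="first"))       # the tag object whose id is "first"
print(demo.select_one("#first"))   # CSS-selector form of the line above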
4. Crawl the 2016 Chinese university rankings and save the data to a CSV file.
import requests
from bs4 import BeautifulSoup

ALL = []

# download the ranking page, return "" on failure
def getHTMLtext(url):
    try:
        r = requests.get(url, timeout=30)
        r.raise_for_status()
        r.encoding = 'utf-8'
        return r.text
    except:
        return ""

# collect the cells of every table row into the ALL list
def fillUni(soup):
    data = soup.find_all('tr')
    for tr in data:
        td1 = tr.find_all('td')
        if len(td1) == 0:
            continue
        Single = []
        for td in td1:
            Single.append(td.string)
        ALL.append(Single)

# print the first num universities, padded with full-width spaces (chr(12288))
def printUni(num):
    print("{1:^2}{2:{0}^10}{3:{0}^6}{4:{0}^6}{5:{0}^6}{6:{0}^6}{7:{0}^6}{8:{0}^6}{9:{0}^5}{10:{0}^6}{11:{0}^6}{12:{0}^6}{13:{0}^6}".format(
        chr(12288), "排名", "學校名稱", "省市", "總分",
        "生源質量", "培養結果", "科研規模", "科研質量",
        "頂尖成果", "頂尖人才", "科技服務",
        "產學研究合作", "成果轉化"))
    for i in range(num):
        u = ALL[i]
        print("{1:^4}{2:{0}^10}{3:{0}^6}{4:{0}^8}{5:{0}^9}{6:{0}^9}{7:{0}^7}{8:{0}^9}{9:{0}^7}{10:{0}^9}{11:{0}^8}{12:{0}^9}{13:{0}^9}".format(
            chr(12288), u[0],
            u[1], u[2], eval(u[3]),
            u[4], u[5], u[6], u[7], u[8],
            u[9], u[10], u[11], u[12]))

def main(num):
    url = "http://www.zuihaodaxue.com/zuihaodaxuepaiming2016.html"
    html = getHTMLtext(url)
    soup = BeautifulSoup(html, "html.parser")
    fillUni(soup)
    printUni(num)

main(10)
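The chr(12288) appearing in the format strings of printUni is the full-width space (U+3000), used as the fill character so that columns of Chinese text stay aligned. A tiny sketch of the idea with made-up column values:

print("{1:{0}^10}{2:{0}^6}".format(chr(12288), "學校名稱", "省市"))   # header row
print("{1:{0}^10}{2:{0}^6}".format(chr(12288), "某某大學", "某省"))   # a data row, padded the same way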
Output:
To save the crawled data to a CSV file, only the printUni() function needs to be replaced.
The modified code is as follows:
import requests
from bs4 import BeautifulSoup
import csv
import os

ALL = []

# download the ranking page, return "" on failure
def getHTMLtext(url):
    try:
        r = requests.get(url, timeout=30)
        r.raise_for_status()
        r.encoding = 'utf-8'
        return r.text
    except:
        return ""

# collect the cells of every table row into the ALL list
def fillUni(soup):
    data = soup.find_all('tr')
    for tr in data:
        td1 = tr.find_all('td')
        if len(td1) == 0:
            continue
        Single = []
        for td in td1:
            Single.append(td.string)
        ALL.append(Single)

# write the first num rows to a CSV file
def writercsv(save_road, num, title):
    if os.path.isfile(save_road):
        # the file already exists: append the rows without a header
        with open(save_road, 'a', newline='') as f:
            csv_write = csv.writer(f, dialect='excel')
            for i in range(num):
                u = ALL[i]
                csv_write.writerow(u)
    else:
        # the file does not exist yet: write the header line first
        with open(save_road, 'w', newline='') as f:
            csv_write = csv.writer(f, dialect='excel')
            csv_write.writerow(title)
            for i in range(num):
                u = ALL[i]
                csv_write.writerow(u)

title = ["排名", "學校名稱", "省市", "總分", "生源質量", "培養結果", "科研規模",
         "科研質量", "頂尖成果", "頂尖人才", "科技服務", "產學研究合作", "成果轉化"]
save_road = "C:\\Users\\鄧若言\\Desktop\\html.csv"

def main(num):
    url = "http://www.zuihaodaxue.com/zuihaodaxuepaiming2016.html"
    html = getHTMLtext(url)
    soup = BeautifulSoup(html, "html.parser")
    fillUni(soup)
    writercsv(save_road, num, title)

main(10)
Output:
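To confirm that the file was written correctly, the CSV can be read back with the csv module; a small sketch assuming the same save_road path as above:

import csv

with open(save_road, newline='') as f:   # the path used by writercsv
    for row in csv.reader(f):
        print(row)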