When to close cursors using MySQLdb

Source: http://www.cnblogs.com/kungfupanda/archive/2016/09/15/5874191.html
http://stackoverflow.com/questions/5669878/when-to-close-cursors-using-mysqldb

I'm building a WSGI web app and I have a MySQL database. I'm using MySQLdb, which provides cursors for executing statements and getting results. What is the standard practice for getting and closing cursors? In particular, how long should my cursors last? Should I get a new cursor for each transaction?

I believe you need to close the cursor before committing the connection. Is there any significant advantage to finding sets of transactions that don't require intermediate commits so that you don't have to get new cursors for each transaction? Is there a lot of overhead for getting new cursors, or is it just not a big deal?

4 Answers

Accepted answer (29 votes)

Instead of asking what is standard practice, since that's often unclear and subjective, you might try looking to the module itself for guidance. In general, using the with keyword as another user suggested is a great idea, but in this specific circumstance it may not give you quite the functionality you expect.

As of version 1.2.5 of the module, MySQLdb.Connection implements the context manager protocol with the following code (github):

def __enter__(self):
    if self.get_autocommit():
        self.query("BEGIN")
    return self.cursor()

def __exit__(self, exc, value, tb):
    if exc:
        self.rollback()
    else:
        self.commit()

There are several existing Q&A about with already, or you can read Understanding Python's "with" statement, but essentially what happens is that __enter__ executes at the start of the with block, and __exit__ executes upon leaving the with block. You can use the optional syntax with EXPR as VAR to bind the object returned by __enter__ to a name if you intend to reference that object later. So, given the above implementation, here's a simple way to query your database:

connection = MySQLdb.connect(...)
with connection as cursor:            # connection.__enter__ executes at this line
    cursor.execute('select 1;')
    result = cursor.fetchall()        # connection.__exit__ executes after this line
print result                          # prints "((1L,),)"

The question now is, what are the states of the connection and the cursor after exiting the with block? The __exit__ method shown above calls only self.rollback() or self.commit(), and neither of those methods goes on to call the close() method. The cursor itself has no __exit__ method defined, and it wouldn't matter if it did, because with is only managing the connection. Therefore, both the connection and the cursor remain open after exiting the with block. This is easily confirmed by adding the following code to the above example:

try:
    cursor.execute('select 1;')
    print 'cursor is open;',
except MySQLdb.ProgrammingError:
    print 'cursor is closed;',
if connection.open:
    print 'connection is open'
else:
    print 'connection is closed'

You should see the output "cursor is open; connection is open" printed to stdout.

I believe you need to close the cursor before committing the connection.

Why? The MySQL C API, which is the basis for MySQLdb, does not implement any cursor object, as implied in the module documentation: "MySQL does not support cursors; however, cursors are easily emulated." Indeed, the MySQLdb.cursors.BaseCursor class inherits directly from object and imposes no such restriction on cursors with regard to commit/rollback. An Oracle developer had this to say:

cnx.commit() before cur.close() sounds most logical to me. Maybe you can go by the rule: "Close the cursor if you do not need it anymore." Thus commit() before closing the cursor. In the end, for Connector/Python, it does not make much difference, but for other databases it might.

I expect that's as close as you're going to get to "standard practice" on this subject.
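
As a minimal sketch of that convention (placeholder credentials and a hypothetical widgets table, not code from the original answer), committing before closing the cursor might look like this:

import MySQLdb

# placeholder credentials -- substitute your own
conn = MySQLdb.connect(host='localhost', user='user', passwd='pass', db='test')
cur = conn.cursor()
try:
    cur.execute("UPDATE widgets SET status = %s WHERE id = %s", ('done', 42))
    conn.commit()       # commit while the cursor is still open
except MySQLdb.Error:
    conn.rollback()
finally:
    cur.close()         # close the cursor once you no longer need it
    conn.close()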

Is there any significant advantage to finding sets of transactions that don't require intermediate commits so that you don't have to get new cursors for each transaction?

I very much doubt it, and in trying to do so, you may introduce additional human error. Better to decide on a convention and stick with it.

Is there a lot of overhead for getting new cursors, or is it just not a big deal?

The overhead is negligible, and doesn't touch the database server at all; it's entirely within the implementation of MySQLdb. You can look at BaseCursor.__init__ on github if you're really curious to know what's happening when you create a new cursor.
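
If you're curious, you can measure this yourself. A rough sketch, assuming placeholder credentials: cursor construction is pure client-side Python, so nothing is sent to the server until execute() is called.

import timeit
import MySQLdb

# placeholder credentials -- substitute your own
conn = MySQLdb.connect(host='localhost', user='user', passwd='pass', db='test')

# Constructing and closing a cursor never touches the wire;
# the server only sees traffic once cursor.execute() runs.
elapsed = timeit.timeit(lambda: conn.cursor().close(), number=10000)
print 'created and closed 10000 cursors in %.3f seconds' % elapsed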

Going back to earlier when we were discussing with, perhaps now you can understand why the MySQLdb.Connection class __enter__ and __exit__ methods give you a brand new cursor object in every with block and don't bother keeping track of it or closing it at the end of the block. It's fairly lightweight and exists purely for your convenience.
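
A quick way to see that each with block hands back a fresh cursor, continuing with the same placeholder-style connection as above (an illustrative sketch, not part of the original answer):

import MySQLdb

# placeholder credentials -- substitute your own
connection = MySQLdb.connect(host='localhost', user='user', passwd='pass', db='test')

with connection as cursor_one:
    cursor_one.execute('select 1;')
with connection as cursor_two:
    cursor_two.execute('select 1;')

print cursor_one is cursor_two   # False: each with block returned a brand new cursor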

If it's really that important to you to micromanage the cursor object, you can use contextlib.closing to make up for the fact that the cursor object has no defined __exit__ method. For that matter, you can also use it to force the connection object to close itself upon exiting a with block. This should output "my_curs is closed; my_conn is closed":

from contextlib import closing
import MySQLdb

with closing(MySQLdb.connect(...)) as my_conn:
    with closing(my_conn.cursor()) as my_curs:
        my_curs.execute('select 1;')
        result = my_curs.fetchall()
try:
    my_curs.execute('select 1;')
    print 'my_curs is open;',
except MySQLdb.ProgrammingError:
    print 'my_curs is closed;',
if my_conn.open:
    print 'my_conn is open'
else:
    print 'my_conn is closed'

Note that with closing(arg_obj) will not call the argument object's __enter__ and __exit__ methods; it will only call the argument object's close method at the end of the with block. (To see this in action, simply define a class Foo with __enter__, __exit__, and close methods containing simple print statements, and compare what happens when you do with Foo(): pass to what happens when you do with closing(Foo()): pass; a sketch follows below.)
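
Here is a minimal sketch of that comparison (the Foo class is purely illustrative):

from contextlib import closing

class Foo(object):
    def __enter__(self):
        print '__enter__'
        return self

    def __exit__(self, exc, value, tb):
        print '__exit__'

    def close(self):
        print 'close'

with Foo():           # prints "__enter__", then "__exit__"
    pass

with closing(Foo()):  # prints only "close"
    pass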

This has two significant implications. First, if autocommit mode is enabled, MySQLdb will BEGIN an explicit transaction on the server when you use with connection, and commit or roll back the transaction at the end of the block. These are default behaviors of MySQLdb, intended to protect you from MySQL's default behavior of immediately committing any and all DML statements. MySQLdb assumes that when you use a context manager, you want a transaction, and uses the explicit BEGIN to bypass the autocommit setting on the server. If you're used to using with connection, you might think autocommit is disabled when actually it was only being bypassed. You might get an unpleasant surprise if you add closing to your code and lose transactional integrity; you won't be able to roll back changes, you may start seeing concurrency bugs, and it may not be immediately obvious why.
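
If you do adopt closing and still want transactional behavior, one option is to manage the transaction explicitly. A minimal sketch, assuming placeholder credentials and a hypothetical widgets table:

from contextlib import closing
import MySQLdb

# placeholder credentials -- substitute your own
with closing(MySQLdb.connect(host='localhost', user='user',
                             passwd='pass', db='test')) as conn:
    conn.autocommit(False)   # statements are no longer committed implicitly
    with closing(conn.cursor()) as cur:
        try:
            cur.execute("UPDATE widgets SET status = %s WHERE id = %s",
                        ('done', 42))
            conn.commit()
        except MySQLdb.Error:
            conn.rollback()
            raise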

Second, with closing(MySQLdb.connect(user, pass)) as VAR binds the connection object to VAR, in contrast to with MySQLdb.connect(user, pass) as VAR, which binds a new cursor object to VAR. In the latter case you would have no direct access to the connection object! Instead, you would have to use the cursor's connection attribute, which provides proxy access to the original connection. When the cursor is closed, its connection attribute is set to None. This results in an abandoned connection that will stick around until one of the following happens:

  • All references to the cursor are removed
  • The cursor goes out of scope
  • The connection times out
  • The connection is closed manually via server administration tools

You can test this by monitoring open connections (in Workbench or by using SHOW PROCESSLIST) while executing the following lines one by one:

with MySQLdb.connect(...) as my_curs:
    pass
my_curs.close()
my_curs.connection          # None
my_curs.connection.close()  # throws AttributeError, but connection still open
del my_curs                 # connection will close here
answered Mar 24 '14 by Air (edited Jun 16 by ray)

Comments:
  • Your post was most exhaustive, but even after re-reading it a few times, I find myself still puzzled regarding closing cursors. Judging from the numerous posts on the subject, it seems to be a common point of confusion. My takeaway is that cursors seemingly DO NOT require .close() to be called, ever. So why even have a .close() method? – SMGreenfield Jul 8 '15
  • The short answer is that cursor.close() is part of the Python DB API, which was not written specifically with MySQL in mind. – Air Jul 8 '15
  • Why will the connection close after del my_curs? – BAE Oct 6 '15
  • @ChengchengPei my_curs holds the last reference to the connection object. Once that reference no longer exists, the connection object should be garbage collected. – Air Oct 6 '15

 
Answer (20 votes)

It's better to rewrite this using the with keyword. Together with contextlib.closing, with takes care of closing the cursor automatically (important, because the cursor is an unmanaged resource), and it will close the cursor even if an exception is raised.

from contextlib import closing
import MySQLdb

''' At the beginning you open a DB connection. The exact point at which
  you open the connection depends on your approach:
  - it can be inside the same function where you work with cursors
  - in the class constructor
  - etc.
'''
db = MySQLdb.connect("host", "user", "pass", "database")
with closing(db.cursor()) as cur:
    cur.execute("somestuff")
    results = cur.fetchall()
    # do stuff with results

    cur.execute("insert operation")
    # call commit if you do INSERT, UPDATE or DELETE operations
    db.commit()

    cur.execute("someotherstuff")
    results2 = cur.fetchone()
    # do stuff with results2

# at some point when you decided that you do not need
# the open connection anymore you close it
db.close()
answered May 23 '13 by Roman Podlinov (edited Jul 26 '14 by Martin Thoma)

Comments:
  • +1 for using with – snooze92 Jan 20 '14
  • I don't think with is a good option if you want to use it in Flask or another web framework. If the situation is http://flask.pocoo.org/docs/patterns/sqlite3/#sqlite3 then there will be problems. – James King Jan 25 '14
  • @james-king I haven't worked with Flask, but in your example Flask will close the db connection itself. Actually, in my code I use a slightly different approach: I use with for closing cursors, e.g. with closing(self.db.cursor()) as cur: cur.execute("UPDATE table1 SET status = %s WHERE id = %s", (self.INTEGR_STATUS_PROCESSING, id)) followed by self.db.commit(). – Roman Podlinov Jan 27 '14
  • @RomanPodlinov Yeah, if you use it with the cursor then things would be fine. – James King Jan 28 '14
Answer (4 votes)

I think you'll be better off trying to use one cursor for all of your executions, and close it at the end of your code. It's easier to work with, and it might have efficiency benefits as well (don't quote me on that one).

conn = MySQLdb.connect("host","user","pass","database")
cursor = conn.cursor()
cursor.execute("somestuff")
results = cursor.fetchall()
# do stuff with results
cursor.execute("someotherstuff")
results2 = cursor.fetchall()
# do stuff with results2
cursor.close()

The point is that you can store the results of a cursor's execution in another variable, thereby freeing your cursor to make a second execution. You run into problems this way only if you're using fetchone(), and need to make a second cursor execution before you've iterated through all results from the first query.
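
For example, continuing with the conn from the snippet above, something like the following sketch (hypothetical widgets and parts tables) is where a single shared cursor would bite you, because re-executing it mid-iteration would discard the remaining rows; a second cursor avoids that:

outer = conn.cursor()
outer.execute("SELECT id FROM widgets")
row = outer.fetchone()
while row is not None:
    # Re-using `outer` for the inner query would replace its pending
    # result set, so give the inner query its own cursor instead.
    inner = conn.cursor()
    inner.execute("SELECT name FROM parts WHERE widget_id = %s", (row[0],))
    names = inner.fetchall()
    # do stuff with names
    inner.close()
    row = outer.fetchone()
outer.close()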

Otherwise, I'd say just close your cursors as soon as you're done getting all of the data out of them. That way you don't have to worry about tying up loose ends later in your code.

answered Jul 30 '11 by nct25

Comments:
  • Thanks. Considering that you have to close the cursor to commit an update/insert, I guess one easy way to do it for updates/inserts would be to get one cursor for each daemon, close the cursor to commit, and immediately get a new cursor so you're ready the next time. Does that sound reasonable? – jmilloy Jul 31 '11
  • Hey, no problem. I didn't actually know about committing the update/insert by closing your cursors, but a quick search online shows something like: conn = MySQLdb.connect(arguments_go_here); cursor = conn.cursor(); cursor.execute(mysql_insert_statement_here); try: conn.commit() except: conn.rollback() # undo changes made if an error occurs. This way the database itself commits the changes, and you don't have to worry about the cursors themselves. Then you can just have one cursor open at all times. Have a look here: tutorialspoint.com/python/python_database_access.htm – nct25 Jul 31 '11
  • Yeah, if that works then I'm just wrong and there was some other reason that caused me to think I had to close the cursor in order to commit the connection. – jmilloy Aug 1 '11
  • Yeah, I dunno, the link I posted makes me think that works. I guess a little more research would tell you if it definitely works or not, but I think you could probably just go with it. Hope I was of help to you! – nct25 Aug 1 '11
Answer (-3 votes)

I suggest doing it like PHP and MySQL: open the connection at the beginning of your code, before printing the first data, so that if you get a connection error you can display a 50x error message (I don't remember exactly which internal error it is). Keep the connection open for the whole session and close it when you know you won't need it anymore.

answered Apr 14 '11 by KilledKenny

Comments:
  • In MySQLdb, there is a difference between a connection and a cursor. I connect once per request (for now) and can detect connection errors early. But what about cursors? – jmilloy Apr 14 '11
  • IMHO it's not accurate advice. It depends. If your code keeps the connection open for a long time (e.g. it fetches some data from the DB and then spends 1-5-10 minutes doing something on the server while holding the connection) and it's a multi-threaded application, it will create a problem pretty soon (you will exceed the maximum allowed connections). – Roman Podlinov May 23 '13
