ORACLE wait event: log file parallel write

Source: http://www.cnblogs.com/kerrycode/archive/2016/09/07/5849564.html


Introduction to log file parallel write

The log file parallel write event is a wait event that belongs exclusively to the LGWR process. It occurs while LGWR writes redo information from the log buffer (log_buffer) to the member files of the current online redo log group, and LGWR waits on this event for that write to complete. Waits on this event indicate that the disk device holding the redo logs is slow or contended. Let's look at how the official material explains the log file parallel write wait event.
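For reference, the query below is a minimal sketch (using the standard v$log and v$logfile views) that lists the online redo log groups and member files that LGWR writes to:

-- List online redo log groups, their member files, size and status.
-- Requires SELECT privilege on V$LOG and V$LOGFILE.
SELECT l.group#,
       f.member,
       l.bytes / 1024 / 1024 AS size_mb,
       l.status,
       l.archived
FROM   v$log     l,
       v$logfile f
WHERE  l.group# = f.group#
ORDER  BY l.group#, f.member;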

 

log file parallel write

 

Writing redo records to the redo log files from the log buffer.

 

Wait Time: Time it takes for the I/Os to complete. Even though redo records are written in parallel, the parallel write is not complete until the last I/O is on disk.

 

 

Parameters:

 

 

·         P1 = files

·         P2 = blocks

·         P3 = requests

·         files: Number of files written to.

        If you have more than one log member per group then the files are written to in parallel (if possible). This is the number of redo log members (files) that the writes apply to.

·         blocks: Number of blocks to be written.

        The number of REDO blocks being written to each log member. eg: A value of 10 indicates 10 redo blocks for each log member.

·         requests: Number of I/O requests.

        The number of distinct IO requests. The number of blocks to be written is divided up into a number of IO requests.
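To see these parameters for a live wait, the following is a small sketch against v$session_wait; this event is normally posted only by the LGWR background process, so the query may return no rows while LGWR is idle:

-- Current 'log file parallel write' wait and its parameters.
-- P1 = number of log members written, P2 = redo blocks per member,
-- P3 = number of distinct I/O requests.
SELECT sid,
       event,
       p1 AS files,
       p2 AS blocks,
       p3 AS requests,
       state,
       seconds_in_wait
FROM   v$session_wait
WHERE  event = 'log file parallel write';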

 

 

Wait Time:

 

 

        The actual wait time is the time taken for all the outstanding I/O requests to complete. Even though the writes may be issued in parallel, LGWR needs to wait for the last I/O to be on disk before the parallel write is considered complete. Hence the wait time depends on the time it takes the OS to complete all requests.

 

Note: The word "parallel" in the event name, together with claims found online such as "LGWR writes to the multiple redo log members of a group 'simultaneously' via asynchronous I/O, so the log file parallel write wait event should only appear when a group has more than one redo log member", can give the impression that the event requires multiple members. In fact, log file parallel write appears whether a redo log group has one member or several. Whenever the log buffer is written to the redo log, this wait occurs; it does not only mean parallel writes to multiple redo files, and a single-member group behaves the same way.

 

Causes of log file parallel write

 

If this wait event shows up in the database, it means the disk devices holding the redo logs are slow at I/O or are experiencing contention (a histogram query that helps tell the two apart follows the list below):

 

· Poor disk I/O performance

· The placement of the REDO files causes I/O contention, for example, member files of the same redo group located on the same disk.
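The histogram query below helps distinguish generally slow I/O from occasional contention spikes. It is a sketch that assumes 10g or later, where v$event_histogram is available:

-- Distribution of 'log file parallel write' wait times since instance
-- startup; a long tail above 16-32ms usually points at slow redo disks
-- or contention on them.
SELECT event,
       wait_time_milli,
       wait_count
FROM   v$event_histogram
WHERE  event = 'log file parallel write'
ORDER  BY wait_time_milli;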

 

 

Checking the log file parallel write wait event

 

SQL> select s.event            as event_name
  2        ,s.time_waited/100  as time_waited
  3        ,s.average_wait     as averge_wait
  4  from v$system_event s
  5  where s.event in ('log file parallel write','log file sync');
 
EVENT_NAME                                                       TIME_WAITED AVERGE_WAIT
---------------------------------------------------------------- ----------- -----------
log file parallel write                                             45650.75         .03
log file sync                                                        54045.4         .04
 
SQL> 

 

The time_waited and average_wait columns above are in hundredths of a second. If the average wait time of the 'log file parallel write' event is greater than 10ms (or 1cs), this usually indicates slow I/O throughput. Slow LGWR writes will also lengthen the time user transactions spend committing.
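Because average_wait is reported in centiseconds, it is easier to check against the 10ms rule of thumb by computing the average directly from the microsecond columns. The following sketch assumes 10g or later, where time_waited_micro is available in v$system_event:

-- Average wait per event in milliseconds, computed from the microsecond
-- columns so it does not depend on the centisecond-based AVERAGE_WAIT.
SELECT event,
       total_waits,
       ROUND(time_waited_micro / 1000, 0)                           AS time_waited_ms,
       ROUND(time_waited_micro / NULLIF(total_waits, 0) / 1000, 2)  AS avg_wait_ms
FROM   v$system_event
WHERE  event IN ('log file parallel write', 'log file sync');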

 

If the log file parallel write average wait time is greater than 10ms (or 1cs), this normally indicates slow I/O throughput. The cure is the same as for the db file parallel write waits. Enable asynchronous writes if your redo logs are on raw devices and the operating system supports asynchronous I/O. For redo logs on file systems, use synchronous direct writes.

 

Other SQL for viewing redo-log-related information

SELECT b.name, 
       a.value, 
       Round(SYSDATE - c.startup_time) days_old 
FROM   v$sysstat a, 
       v$statname b, 
       v$instance c 
WHERE  a.statistic# = b.statistic# 
       AND b.name IN ( 'redo wastage', 'redo size' ); 
 
 
SELECT sid, 
       value 
FROM   v$sesstat 
WHERE  statistic# = (SELECT statistic# 
                     FROM   v$statname 
                     WHERE  name = 'user commits') 
ORDER  BY value DESC; 
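Another common companion query looks at how often the logs switch; this sketch counts redo log switches per hour over the last 7 days from v$log_history:

-- Redo log switches per hour over the last 7 days; frequent switches
-- suggest a high redo generation rate or undersized redo logs.
SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour,
       COUNT(*)                               AS log_switches
FROM   v$log_history
WHERE  first_time > SYSDATE - 7
GROUP  BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
ORDER  BY hour;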

 

How to reduce log file parallel write wait time

 

To reduce log file parallel write wait time, you can work on the following two areas (for further detail, see the English material below):

(1) Place the redo log file groups on fast I/O disks (and do not put member files of the same log group on disks with differing I/O characteristics). Avoid placing redo logs on RAID 5 disks.

(2) Reduce the amount of redo generated as far as possible:

       1: Use the NOLOGGING option whenever possible, for example when rebuilding or reorganizing indexes, and for operations such as CREATE TABLE ... AS SELECT ... (see the sketch after this list).

       2: Hot backups can generate a large amount of redo, so run hot backups during off-peak hours.

       3: A lower commit frequency in the application can also bring some improvement.
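As an illustration of point 1, the statements below are only a sketch; the object names (sales_idx, sales_2016, sales, sale_date) are hypothetical, and remember that NOLOGGING objects need a fresh backup afterwards because their data cannot be rebuilt from the redo stream:

-- Rebuild an index without generating full redo (hypothetical index name).
ALTER INDEX sales_idx REBUILD NOLOGGING;

-- CTAS with NOLOGGING (hypothetical table names); only the direct-path
-- load itself skips redo, normal DML on the new table is still logged.
CREATE TABLE sales_2016 NOLOGGING
AS SELECT * FROM sales WHERE sale_date >= DATE '2016-01-01';

-- Note: NOLOGGING is ignored when the database is in FORCE LOGGING mode.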

 

Reducing Waits / Wait times:

You might want to reduce "log file parallel write" wait times in order to reduce user waits which depend on LGWR.

· Ensure tablespaces are NOT left in HOT BACKUP mode longer than needed.

· Tablespaces in HOT BACKUP mode cause more redo to be generated for each change which can vastly increase the rate of redo generation.

· NOLOGGING / UNRECOVERABLE operations may be possible for certain operations to reduce the overall rate of redo generation

· Redo log members should ideally be on high speed disks

· eg: Historically RAID5 was not a good candidate for redo log members, although some of the RAID 5 drawbacks have been addressed more recently through larger intelligent disk caches, etc.

· Redo log members should be on disks with little/no IO activity from other sources.

· (including low activity from other sources against the same disk controller)

· With file systems like ASM and ZFS, you may also have to experiment with the disk stripe size to make it more friendly to the type of IO LGWR usually performs.

· Check that the system has not become CPU (or memory) bound otherwise it may take LGWR more time to recognize IO has been completed.

· The higher the value for CPU_COUNT, the more strands can be used for writing redo. Having an increased number of strands can lead to an increase in the average waits on "log file parallel write" (which in turn can affect "log file sync" waits). Filesystems that do not support Concurrent IO (CIO) are more likely to be affected by this, compared to ASM or raw devices. However, it can also impact filesystems that support CIO. On servers with CPU_COUNT greater than 128 LGWR behaviour may need manually adjusting through underscore parameters. If you think you are encountering long "log file parallel write" due to high CPU counts then contact Oracle Support.
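Related to the first bullet above, the following query is a quick check (against the standard v$backup and v$datafile views) for datafiles left in hot backup mode; an empty result means no tablespace is currently in backup mode:

-- Datafiles currently in hot (BEGIN BACKUP) mode; each change to these
-- files generates extra redo until END BACKUP is issued.
SELECT d.name   AS datafile,
       b.status,
       b.time   AS backup_started
FROM   v$backup   b,
       v$datafile d
WHERE  b.file# = d.file#
AND    b.status = 'ACTIVE';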

 

---------------------------------------------------------------------------------------------------------

Unfortunately, you cannot spawn more than one LGWR process. In this case, it is critical that nothing else is sharing the mount point of the redo log files. Make sure the controller that services the mount point is not overloaded. Moving the redo logs to higher speed disks will also help.

We strongly suggest that you avoid putting redo logs on RAID5 disks, but we also understand that many times you don’t have a choice or a say in it.

Besides improving the I/O throughput, you can also work on lowering the amount of redo entries. This will provide some relief, but not the cure. Whenever possible, use the NOLOGGING option. Indexes should be created or rebuilt with the NOLOGGING option. CTAS operations should also use this option.

Note:

       The NOLOGGING option doesn’t apply to normal DML operations such as inserts, updates, and deletes. Objects created with the NOLOGGING option are unrecoverable unless a backup is taken prior to the corruption. If you have to take an additional backup, then the I/Os that you save by not logging will be spent on backup.

       A database in FORCE LOGGING mode will log all changes (except for changes in temporary tablespaces), regardless of the tablespace and object settings.

       A lower commit frequency at the expense of higher rollback segment usage can also provide some relief.

       A high commit frequency causes the LGWR process to be overactive and when coupled with slow I/O throughput will only magnify the log file parallel write waits.

       The application may be processing a large set of data in a loop and committing each change, which causes the log buffer to be flushed too frequently. In this case, modify the application to commit at a lower frequency. There could also be many short sessions that log in to the database, perform a quick DML operation, and log out.
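To illustrate the point about commit frequency, here is a PL/SQL sketch that commits once per batch of rows instead of once per row; the table and column names (order_staging, status) are hypothetical:

-- Commit every 1000 rows instead of once per row, so the session does
-- not force LGWR to flush the log buffer for every single change.
DECLARE
  l_batch_size CONSTANT PLS_INTEGER := 1000;
  l_count      PLS_INTEGER := 0;
BEGIN
  FOR r IN (SELECT rowid AS rid FROM order_staging WHERE status = 'NEW') LOOP
    UPDATE order_staging
    SET    status = 'PROCESSED'
    WHERE  rowid = r.rid;

    l_count := l_count + 1;
    IF MOD(l_count, l_batch_size) = 0 THEN
      COMMIT;   -- commit once per batch, not per row
    END IF;
  END LOOP;
  COMMIT;       -- final commit for the remaining rows
END;
/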

 

Below is a script for troubleshooting log file sync waits and log file parallel write. It comes from Oracle Support document ID 1064487.1.

-- NAME: LFSDIAG.SQL
-- ------------------------------------------------------------------------
-- AUTHOR: Michael Polaski - Oracle Support Services
-- ------------------------------------------------------------------------
-- PURPOSE:
-- This script is intended to provide a user friendly guide to troubleshoot
-- log file sync waits. The script will look at important parameters involved
-- in log file sync waits, log file sync wait histogram data, and the script
-- will look at the worst average log file sync times in the active session
-- history data and AWR data and dump information to help determine why those
-- times were the highest. The script will create a file called
-- lfsdiag_<timestamp>.out in your local directory.
 
set echo off
set feedback off
column timecol new_value timestamp
column spool_extension new_value suffix
select to_char(sysdate,'Mondd_hh24mi') timecol,
'.out' spool_extension from sys.dual;
column output new_value dbname
select value || '_' output
from v$parameter where name = 'db_name';
spool lfsdiag_&&dbname&&timestamp&&suffix
set trim on
set trims on
set lines 140
set pages 100
set verify off
alter session set optimizer_features_enable = '10.2.0.4';
 
PROMPT LFSDIAG DATA FOR &&dbname&&timestamp
PROMPT Note: All timings are in milliseconds (1000 milliseconds = 1 second)
	   
