log file parallel write overview
The log file parallel write event is a wait event specific to the LGWR process. It occurs when LGWR writes redo records from the log buffer (log_buffer) to the member files of the current online redo log group; LGWR waits on this event for the write to complete. Waits on this event indicate that the disk devices holding the redo logs are slow or contended. The following is how official documentation explains the log file parallel write wait event.
log file parallel write
Writing redo records to the redo log files from the log buffer.
Wait Time: Time it takes for the I/Os to complete. Even though redo records are written in parallel, the parallel write is not complete until the last I/O is on disk.
Parameters:
· P1 = files
· P2 = blocks
· P3 = requests
· files Number of files written to.
If you have more than one log member per group then the files are written to in parallel (if possible). This is the number of redo log members (files) that the writes apply to.
· blocks Number of blocks to be written.
The number of REDO blocks being written to each log member. eg: A value of 10 indicates 10 redo blocks for each log member.
· requests Number of I/O requests.
The number of distinct IO requests. The number of blocks to be written is divided up into a number of IO requests.
The actual wait time is the time taken for all the outstanding I/O requests to complete. Even though the writes may be issued in parallel, LGWR needs to wait for the last I/O to be on disk before the parallel write is considered complete. Hence the wait time depends on the time it takes the OS to complete all requests.
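As a quick illustration of these parameters, the P1/P2/P3 values of a session currently waiting on this event can be inspected through the standard v$session view (a minimal sketch; run as a suitably privileged user):

```sql
-- Show the file count (P1), block count (P2) and I/O request count (P3)
-- for any session currently waiting on 'log file parallel write'
-- (normally this is only the LGWR background process).
SELECT sid,
       event,
       p1 AS files,
       p2 AS blocks,
       p3 AS requests
FROM   v$session
WHERE  event = 'log file parallel write';
```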
Note: Regarding the word "parallel" in this event's name, a claim sometimes seen online is that "LGWR writes 'simultaneously' to the multiple redo log members of a group via asynchronous I/O, so the log file parallel write wait event should only appear when a group has more than one member." In fact, the event occurs whether a redo log group has one member or several. Whenever the log buffer is flushed to the redo log this wait appears; it does not require parallel writes to multiple redo files, and a single-member group behaves the same way.
Causes of log file parallel write
If this wait event shows up in the database, it means the disk devices holding the redo logs (redo log) have slow I/O or contention:
· poor disk I/O performance
· REDO file placement that causes I/O contention, for example member files of the same REDO group placed on the same disk.
Viewing the log file parallel write wait event
SQL> select s.event as event_name
2 ,s.time_waited/100 as time_waited
3 ,s.average_wait as average_wait
4 from v$system_event s
5 where s.event in ('log file parallel write','log file sync');
EVENT_NAME TIME_WAITED AVERAGE_WAIT
---------------------------------------------------------------- ----------- -----------
log file parallel write 45650.75 .03
log file sync 54045.4 .04
SQL>
The units of time_waited and average_wait above are hundredths of a second (in hundredths of a second). An average wait on 'log file parallel write' greater than 10ms (or 1cs) usually indicates slow I/O throughput. Slow LGWR writes in turn lengthen user transaction commit times.
If the log file parallel write average wait time is greater than 10ms (or 1cs), this normally indicates slow I/O throughput. The cure is the same as for the db file parallel write waits. Enable asynchronous writes if your redo logs are on raw devices and the operating system supports asynchronous I/O. For redo logs on file systems, use synchronous direct writes.
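To see whether slow writes are occasional outliers or consistently slow, the wait-time distribution can also be checked (a sketch against the v$event_histogram view, available in 10g and later):

```sql
-- Distribution of 'log file parallel write' waits by latency bucket.
-- Many waits in buckets above 16ms point to slow redo I/O.
SELECT wait_time_milli,
       wait_count
FROM   v$event_histogram
WHERE  event = 'log file parallel write'
ORDER  BY wait_time_milli;
```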
Other SQL for viewing redo log related information
SELECT b.name,
a.value,
Round(SYSDATE - c.startup_time) days_old
FROM v$sysstat a,
v$statname b,
v$instance c
WHERE a.statistic# = b.statistic#
AND b.name IN ( 'redo wastage', 'redo size' );
SELECT sid,
value
FROM v$sesstat
WHERE statistic# = (SELECT statistic#
FROM v$statname
WHERE name = 'user commits')
ORDER BY value DESC;
How to reduce log file parallel write wait time
To reduce log file parallel write wait time, work on the following two aspects (for further details see the English material below):
(1) Place the redo log file groups on high-speed I/O disks (and do not place members of the same log group on disks with different I/O characteristics). Avoid placing redo logs on RAID5 disks.
(2) Reduce the amount of redo generated as far as possible:
1: Use the NOLOGGING option where feasible, for example for index rebuilds and reorganizations, and for create table...as select... operations.
2: Hot backups can generate a large amount of redo, so they should run during off-peak hours.
3: A lower application commit frequency can also bring some improvement.
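For the NOLOGGING point above, a hedged sketch of typical usage (the index and table names here are hypothetical, purely for illustration):

```sql
-- Rebuild an index without generating full redo (hypothetical index name).
ALTER INDEX emp_name_ix REBUILD NOLOGGING;

-- CTAS with NOLOGGING (hypothetical table names).
CREATE TABLE emp_copy NOLOGGING AS
  SELECT * FROM emp;
```

Take a backup after such operations, since they cannot be recovered from redo alone.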
Reducing Waits / Wait times:
You might want to reduce "log file parallel write" wait times in order to reduce user waits which depend on LGWR.
· Ensure tablespaces are NOT left in HOT BACKUP mode longer than needed.
· Tablespaces in HOT BACKUP mode cause more redo to be generated for each change which can vastly increase the rate of redo generation.
· NOLOGGING / UNRECOVERABLE operations may be possible for certain operations to reduce the overall rate of redo generation
· Redo log members should ideally be on high speed disks
· eg: Historically RAID5 was not a good candidate for redo log members, although some of the RAID 5 drawbacks have been addressed more recently through larger intelligent disk caches, etc.
· Redo log members should be on disks with little/no IO activity from other sources.
· (including low activity from other sources against the same disk controller)
· With file systems like ASM and ZFS, you may also have to experiment with the disk stripe size to make it friendlier to the type of IO LGWR usually performs.
· Check that the system has not become CPU (or memory) bound otherwise it may take LGWR more time to recognize IO has been completed.
· The higher the value for CPU_COUNT, the more strands can be used for writing redo. Having an increased number of strands can lead to an increase in the average waits on "log file parallel write" (which in turn can affect "log file sync" waits). Filesystems that do not support Concurrent IO (CIO) are more likely to be affected by this, compared to ASM or raw devices. However, it can also impact filesystems that support CIO. On servers with CPU_COUNT greater than 128, LGWR behaviour may need manual adjustment through underscore parameters. If you think you are encountering long "log file parallel write" waits due to high CPU counts then contact Oracle Support.
---------------------------------------------------------------------------------------------------------
Unfortunately, you cannot spawn more than one LGWR process. In this case, it is critical that nothing else is sharing the mount point of the redo log files. Make sure the controller that services the mount point is not overloaded. Moving the redo logs to higher speed disks will also help.
We strongly suggest that you avoid putting redo logs on RAID5 disks, but we also understand that many times you don’t have a choice or a say in it.
Besides improving the I/O throughput, you can also work on lowering the amount of redo entries. This will provide some relief, but not the cure. Whenever possible, use the NOLOGGING option. Indexes should be created or rebuilt with the NOLOGGING option. CTAS operations should also use this option.
Note:
The NOLOGGING option doesn’t apply to normal DML operations such as inserts, updates, and deletes. Objects created with the NOLOGGING option are unrecoverable unless a backup is taken prior to the corruption. If you have to take an additional backup, then the I/Os that you save by not logging will be spent on backup. Database in FORCE LOGGING mode will log all changes (except for changes in temporary tablespaces), regardless of the tablespace and object settings.
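Whether FORCE LOGGING is in effect (which would override NOLOGGING settings) can be checked through standard dictionary views (a minimal sketch):

```sql
-- Database-wide force logging setting.
SELECT force_logging FROM v$database;

-- Per-tablespace logging attributes.
SELECT tablespace_name, logging, force_logging
FROM   dba_tablespaces;
```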
A lower commit frequency at the expense of higher rollback segment usage can also provide some relief.
A high commit frequency causes the LGWR process to be overactive and when coupled with slow I/O throughput will only magnify the log file parallel write waits.
The application may be processing a large set of data in a loop and committing each change, which causes the log buffer to be flushed too frequently. In this case, modify the application to commit at a lower frequency. There could also be many short sessions that log in to the database, perform a quick DML operation, and log out.
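The commit-per-row loop described above can be reworked to commit in batches; a hedged PL/SQL sketch (table and column names are hypothetical):

```sql
-- Commit every 1000 rows instead of once per row, reducing the
-- number of log buffer flushes LGWR must perform.
DECLARE
  v_count PLS_INTEGER := 0;
BEGIN
  FOR r IN (SELECT id FROM staging_rows) LOOP
    UPDATE target_table
       SET processed = 'Y'
     WHERE id = r.id;

    v_count := v_count + 1;
    IF MOD(v_count, 1000) = 0 THEN
      COMMIT;
    END IF;
  END LOOP;
  COMMIT;  -- final commit for the remainder
END;
/
```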
Below is a script for troubleshooting log file sync waits and log file parallel write. The official document ID is 1064487.1.
-- NAME: LFSDIAG.SQL
-- ------------------------------------------------------------------------
-- AUTHOR: Michael Polaski - Oracle Support Services
-- ------------------------------------------------------------------------
-- PURPOSE:
-- This script is intended to provide a user friendly guide to troubleshoot
-- log file sync waits. The script will look at important parameters involved
-- in log file sync waits, log file sync wait histogram data, and the script
-- will look at the worst average log file sync times in the active session
-- history data and AWR data and dump information to help determine why those
-- times were the highest. The script will create a file called
-- lfsdiag_<timestamp>.out in your local directory.
set echo off
set feedback off
column timecol new_value timestamp
column spool_extension new_value suffix
select to_char(sysdate,'Mondd_hh24mi') timecol,
'.out' spool_extension from sys.dual;
column output new_value dbname
select value || '_' output
from v$parameter where name = 'db_name';
spool lfsdiag_&&dbname&&timestamp&&suffix
set trim on
set trims on
set lines 140
set pages 100
set verify off
alter session set optimizer_features_enable = '10.2.0.4';
PROMPT LFSDIAG DATA FOR &&dbname&&timestamp
PROMPT Note: All timings are in milliseconds (1000 milliseconds = 1 second)