Oracle Study -- An Introduction to Oracle Supplemental Logging

Introduction:

Oracle supplemental logging comes in several variants, depending on what it supplements: minimal, all columns (ALL), primary key, unique key, and foreign key. Columns of LONG, LOB, LONG RAW, and collection types cannot be supplementally logged.

With minimal supplemental logging enabled, LogMiner can handle chained rows, cluster tables, and index-organized tables. You can check whether minimal supplemental logging is enabled with the following SQL:

SELECT supplemental_log_data_min FROM v$database;

If the query returns YES or IMPLICIT, minimal supplemental logging is enabled; when ALL, PRIMARY KEY, UNIQUE, or FOREIGN KEY supplemental logging is in use, minimal supplemental logging is turned on implicitly (the check then returns IMPLICIT). In practice we enable primary-key and unique-key supplemental logging when running a logical standby, but a table may have no primary key, unique key, or unique index at all; the experiments below summarize how Oracle behaves in that situation.
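Beyond the minimal flag, v$database exposes one flag per identification-key level; a quick reference query against the standard view columns is:

select supplemental_log_data_min,
       supplemental_log_data_pk,
       supplemental_log_data_ui,
       supplemental_log_data_fk,
       supplemental_log_data_all
  from v$database;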

First, create the test table:

alter database add supplemental log data (primary key, unique index) columns;
create table test (t1 int, t2 int, t3 int, t4 int);
alter table test add constraint pk_t1 primary key (t1);  -- add the primary key
-- insert a batch of rows in a loop (a sketch of the loop follows below)
update test set t2=10;
commit;  -- update the data
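The insert loop itself is not shown in the original; judging from the values that later appear in the redo output, it might look like the following PL/SQL sketch (the row count of 100 is an arbitrary choice):

begin
  for i in 1 .. 100 loop          -- row count is illustrative only
    insert into test values (i, i + 1, i + 2, i + 3);
  end loop;
  commit;
end;
/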

Analyzing these operations with LogMiner, the SQL recorded in the redo takes the following form:

update "SYS"."TEST" set "T2" = '10' where "T1" = '64' and "T2" = '65'
     and ROWID = 'AAAMiSAABAAAOhiAA/';

The WHERE clause records the primary-key value, the before-image of the modified column, and the ROWID of the original row.
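For reference, a minimal LogMiner session of the kind used to extract these statements might look as follows (the log file path is a placeholder, and the online data dictionary catalog is assumed to be available):

execute dbms_logmnr.add_logfile(logfilename => '/u01/app/oracle/oradata/orcl/redo01.log', options => dbms_logmnr.new);
execute dbms_logmnr.start_logmnr(options => dbms_logmnr.dict_from_online_catalog);
select sql_redo from v$logmnr_contents where seg_name = 'TEST';   -- reconstructed SQL
execute dbms_logmnr.end_logmnr;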

Now let's drop the primary key from the table and observe again.

alter table test drop constraint pk_t1;
update test set t2=11;
commit;  -- update the data

LogMiner analysis now shows the following SQL recorded in the redo:

update "SYS"."TEST" set "T2" = '11' where "T1" = '1'
     and "T2" = '10' and "T3" = '3' and "T4" = '4'
     and ROWID = 'AAAMiSAABAAAOhiAAA';

Without a primary key, the WHERE clause records every column value along with the ROWID.

The next experiment looks at the behaviour when a unique index exists:

create unique index pk_t1 on test(t1);
update test set t2=15;
commit;

LogMiner analysis shows the following SQL recorded in the redo:

update "SYS"."TEST" set "T2" = '15' where "T1" = '9' and "T2" = '11'
    and "T3" = '11' and "T4" = '12' and ROWID = 'AAAMiSAABAAAOhiAAI';

The above is the case where t1 has a unique index but no NOT NULL constraint; now add the NOT NULL constraint:

alter table test modify t1 not null;
update test set t2=21;
commit;

LogMiner analysis shows the following SQL recorded in the redo:

update "SYS"."TEST" set "T2" = '21' where "T1" = '2' and "T2" = '15'
    and ROWID = 'AAAMiSAABAAAOhiAAB';

As the SQL above shows, with only a unique index the WHERE clause still records every column and the ROWID; with both a unique index and a NOT NULL constraint, the behaviour matches the primary-key case.

When a table has many columns and lacks a primary key or a unique index on NOT NULL columns, enabling supplemental logging can increase the total volume of redo substantially.

First, create a table with 250 columns:

drop table test;
create table test (
t1 varchar2(5),
t2 varchar2(5),
t3 varchar2(5),
t4 varchar2(5),  ... t250 varchar2(5));
insert into test values ('TEST', 'TEST', ...);  -- populate all 250 columns
commit;
alter database drop supplemental log data (primary key, unique index) columns;  -- disable supplemental logging
set autotrace on;
update test set t2='BZZZZ' where t1='TEST';
commit;

The autotrace statistics show that this update generated 516 bytes of redo.

alter database add supplemental log data (primary key, unique index) columns;  -- re-enable supplemental logging
update test set t2='FSDSD' where t1='TEST';

The autotrace statistics now show 3044 bytes of redo.
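As a cross-check that is not part of the original test, the redo generated so far by the current session can also be read from the session statistics (this assumes the updates are run from a single dedicated session):

select n.name, s.value as redo_bytes
  from v$mystat s, v$statname n
 where s.statistic# = n.statistic#
   and n.name = 'redo size';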

Supplemental logging is also scoped either at the database level or at the table level. Table-level supplemental logging is further divided into conditional and unconditional forms. A conditional table-level log group only takes effect when one of its specified columns is updated; conditional groups are rarely used and are not covered by the experiments here (a syntax sketch is shown below for completeness).
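For completeness, a conditional table-level group is created with the same statement but without the ALWAYS keyword; a sketch against the test table used here (the group name test_cond_lgp is made up for illustration):

alter table test add supplemental log group test_cond_lgp (t1, t2);
-- t1 and t2 are logged only when one of them is actually updated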

Now let's look at how unconditional table-level supplemental logging behaves:

alter database drop supplemental log data (primary key, unique index) columns;
alter table test add supplemental log data (primary key, unique index) columns;
update test set t2='ZZZZZ';
commit;

Viewing the redo SQL with LogMiner:

update "SYS"."TEST" set "T2" = 'ZZZZZ'
   where "T1" = 'TEST' and "T2" = 'AAAAA' and "T3" = 'TEST' ...

The WHERE clause again contains every column value.

delete test;
commit;

Viewing the redo SQL with LogMiner:

delete from "SYS"."TEST" where "T1" = 'TEST' and "T2" = 'ZZZZZ'
   and "T3" = 'TEST' and "T4" = 'TEST' and "T5" ...

The DELETE likewise carries every column value in its WHERE clause.

Alternatively, we can define a named supplemental log group on specific columns to reduce the number of column values that appear after WHERE.

alter table test drop supplemental log data (primary key, unique index) columns;  -- drop the table-level supplemental logging added above
alter table test add supplemental log group test_lgp (t1, t2, t3, t4, t5, t6, t12, t250) always;  -- create an unconditional log group
update test set t2='XXXXX';
commit;

Viewing the redo SQL with LogMiner:

update "SYS"."TEST" set "T2" = 'XXXXX' where "T1" = 'TEST' and "T2" = 'TEST'
   and "T3" = 'TEST' and "T4" = 'TEST' and "T5" = 'TEST' and "T6" = 'TEST'
   and "T12" = 'TEST' and "T250" = 'TEST' and ROWID = 'AAAMieAABAAAOhnAAA';

As shown above, the redo for the UPDATE now correctly carries just the columns specified in the log group.

delete test;

Viewing the redo SQL with LogMiner:

delete from "SYS"."TEST" where "T1" = 'TEST' and "T2" = 'XXXXX'
   and "T3" = 'TEST' ...

The DELETE still records every column value in the redo.

For tables with many columns, when a set of columns is guaranteed to be unique and non-null (an application-level primary key, in effect), defining a supplemental log group on those columns reduces the redo generated by UPDATE operations; for DELETE operations, however, it brings no real improvement.

Additional notes on supplemental logging in Oracle


The earlier introduction to supplemental logging omitted the relationship between minimal supplemental logging and table-level supplemental logging: table-level supplemental logging only takes effect when minimal supplemental logging is enabled. If a database has never had minimal supplemental logging enabled, or it has been removed with DROP SUPPLEMENTAL LOG DATA, then even with table-level supplemental logging specified, the records written to the redo log still contain only the ROWID and the changed column values.

The command to enable minimal supplemental logging is:

Alter database add supplemental log data;

Next, if, as described earlier, a table has a very large number of columns (more than 200), check the view dba_logstdby_not_unique, which lists the tables in the primary database that do not have a primary key or a unique index with NOT NULL columns. For example:

select owner, table_name, bad_column
  from dba_logstdby_not_unique
 where table_name not in
       (select table_name from dba_logstdby_unsupported);
OWNER     TABLE_NAME    BAD_COLUMN
TSMSYS    SRS$          Y
HTEST     TEST          N
HGET      GETMAXID      N
HGET      HUSER         N
SCOTT     BONUS         N
SCOTT     SALGRADE      N

The bad_column column is the key one here. Y means the table has a column defined with a large data type such as CLOB or BLOB; SQL Apply will try to maintain such tables, but you must guarantee that the remaining columns uniquely identify each row. In other words, if two rows in the table are identical in every column except the LOB column, changes to that table cannot be applied on the logical standby and SQL Apply stops. N means the table contains enough column information to be maintained on the logical standby.
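Relatedly, the tables that SQL Apply cannot support at all can be listed from dba_logstdby_unsupported; a simple check (only the owner and table name columns are relied on here) is:

select distinct owner, table_name
  from dba_logstdby_unsupported
 order by owner, table_name;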

Tables with a large number of columns (more than 200) to which no primary key or unique NOT NULL index can be added therefore need special attention. In practice, however, finding out how much redo a single segment generates over a given period is quite difficult (i.e., how to check how much redo is generated by one segment).

Known approaches such as the LogMiner tool, dumping the redo logs, and oradebug do not by themselves provide enough information for this kind of accounting.

The only practical method is to estimate with LogMiner: the rbablk and rbabyte columns recorded in v$logmnr_contents are the block offset within the redo log (a redo log block is 512 bytes) and the byte offset within that block. By computing the differences between consecutive positions and grouping by the data_obj# column, the redo generated against a given segment over a period of time can be estimated roughly. The short query below illustrates the raw offsets; the full estimation script follows.
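A quick way to see the raw positions is to query the offset columns directly (a sketch; it assumes a LogMiner session has already been started):

select rbasqn, rbablk, rbabyte,
       rbablk*512 + rbabyte as pos_in_log   -- byte position within the log sequence
  from v$logmnr_contents
 where rownum <= 10;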

create table redo_analysis nologging as
select data_obj#, oper, rbablk*512 + rbabyte curpos,
       lead(rbablk*512 + rbabyte, 1, 0) over (order by rbasqn, rbablk, rbabyte) nextpos
from
( select distinct data_obj#, operation oper,
         rbasqn, rbablk, rbabyte from v$logmnr_contents
  order by rbasqn, rbablk, rbabyte );

select data_obj#, oper, obj_name, sum(redosize) total_redo
from
(
  select data_obj#, oper, obj.name obj_name, nextpos-curpos-1 redosize
  from   redo_analysis redo1, sys.obj$ obj
  where  (redo1.data_obj# = obj.obj# or redo1.data_obj# = obj.dataobj#)
  and    nextpos != 0  -- for the boundary condition
  union all
  select data_obj#, oper, 'internal ', nextpos-curpos redosize
  from   redo_analysis redo1
  where  redo1.data_obj# = 0
  and    nextpos != 0
)
group by data_obj#, oper, obj_name
order by 4;

This estimate is not exact; the error becomes significant when logs are switched manually (switch logfile) and in other special cases.

Supplementary notes on Oracle supplemental logging
Oracle redo logs are normally used for instance recovery and media recovery, but when complete DML operations have to be reconstructed from the logs (for LogMiner, Streams, or, here, GoldenGate), the amount of information logged by default is not enough. For an UPDATE, for example, the redo by default records only the ROWID and the modified columns; when GoldenGate replays that transaction it locates the row at the SQL level by a unique key value rather than by ROWID, so the primary key (or other identifying columns) must also be written into the log. Adding these extra columns to the redo is what enabling supplemental logging (Add Supplemental Logging) means. Because it increases the volume of redo, enable only the levels and objects that are actually needed.
Oracle supplemental logging can be configured at the database level or at the table level. At the database level there are several types, according to how much information is supplemented:
(1) Minimal supplemental logging: the minimum required for LogMiner; it provides the information needed for chained rows and for various storage arrangements such as cluster tables and index-organized tables. From Oracle 9.2 onward it is disabled by default.
(2) Primary key supplemental logging: all primary-key columns are logged. If the table has no primary key, a non-null unique index key is logged instead; if there is no non-null unique index key either, all columns except LOB and LONG types are logged, which is effectively the same as ALL supplemental logging below.
(3) Unique key supplemental logging: when a column of a unique key or a bitmap index is modified, all columns of that unique key or bitmap index are logged. Enabling unique key supplemental logging also enables primary key supplemental logging. Note that this level is conditional, i.e. triggered only by qualifying changes.
(4) Foreign key supplemental logging: when a foreign-key column is modified, all columns of that foreign key are logged. This level is also conditional.
(5) ALL supplemental logging: every column is logged (LOB and LONG types excluded).
The individual statements for each of these operations are not listed one by one here; a summary sketch follows below.
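For reference, the database-level statements follow the standard documented pattern below; enable only the levels that are actually needed:

alter database add supplemental log data;                         -- minimal
alter database add supplemental log data (primary key) columns;   -- primary key
alter database add supplemental log data (unique) columns;        -- unique key
alter database add supplemental log data (foreign key) columns;   -- foreign key
alter database add supplemental log data (all) columns;           -- all columns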
Of the five database-level types, all except minimal supplemental logging can also be configured at the table level. In addition, at the table level you can explicitly name the columns to be supplemented.
Table-level supplemental logging in Oracle only takes effect when minimal supplemental logging is enabled: if a database has never enabled minimal supplemental logging, or has previously dropped it with DROP SUPPLEMENTAL LOG DATA, then even if table-level supplemental logging is specified, the redo records still contain only the ROWID and the changed column values. Conversely, minimal supplemental logging can only be dropped after every other database-level supplemental logging level has been dropped first.

Oracle official documentation:

  1. Supplemental Logging

  2. Redo log files are generally used for instance recovery and media recovery. The data needed for such operations is automatically recorded in the redo log files. However, a redo-based application may require that additional columns be logged in the redo log files. The process of logging these additional columns is called supplemental logging.

  3. By default, Oracle Database does not provide any supplemental logging, which means that by default LogMiner is not usable. Therefore, you must enable at least minimal supplemental logging before generating log files which will be analyzed by LogMiner.

  4. The following are examples of situations in which additional columns may be needed:

  5. An application that applies reconstructed SQL statements to a different database must identify the update statement by a set of columns that uniquely identify the row (for example, a primary key), not by the ROWID shown in the reconstructed SQL returned by the V$LOGMNR_CONTENTS view, because the ROWID of one database will be different and therefore meaningless in another database.

  6. An application may require that the before-image of the whole row be logged, not just the modified columns, so that tracking of row changes is more efficient.

  7. A supplemental log group is the set of additional columns to be logged when supplemental logging is enabled. There are two types of supplemental log groups that determine when columns in the log group are logged:

  8. Unconditional supplemental log groups: The before-images of specified columns are logged any time a row is updated, regardless of whether the update affected any of the specified columns. This is sometimes referred to as an ALWAYS log group.

  9. Conditional supplemental log groups: The before-images of all specified columns are logged only if at least one of the columns in the log group is updated.

  10. Supplemental log groups can be system-generated or user-defined.

  11. In addition to the two types of supplemental logging, there are two levels of supplemental logging, as described in the following sections:

  12. Database-Level Supplemental Logging

  13. Table-Level Supplemental Logging

  14. See Also:

  15. Querying Views for Supplemental Logging Settings

  16. Database-Level Supplemental Logging

  17. There are two types of database-level supplemental logging: minimal supplemental logging and identification key logging, as described in the following sections. Minimal supplemental logging does not impose significant overhead on the database generating the redo log files. However, enabling database-wide identification key logging can impose overhead on the database generating the redo log files. Oracle recommends that you at least enable minimal supplemental logging for LogMiner.

  18. Minimal Supplemental Logging

  19. Minimal supplemental logging logs the minimal amount of information needed for LogMiner to identify, group, and merge the redo operations associated with DML changes. It ensures that LogMiner (and any product building on LogMiner technology) has sufficient information to support chained rows and various storage arrangements, such as cluster tables and index-organized tables. To enable minimal supplemental logging, execute the following SQL statement:

  20. ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
  21. Note:

  22. In Oracle Database release 9.0.1, minimal supplemental logging was the default behavior in LogMiner. In release 9.2 and later, the default is no supplemental logging. Supplemental logging must be specifically enabled.

  23. Database-Level Identification Key Logging

  24. Identification key logging is necessary when redo log files will not be mined at the source database instance, for example, when the redo log files will be mined at a logical standby database.

  25. Using database identification key logging, you can enable database-wide before-image logging for all updates by specifying one or more of the following options to the SQL ALTER DATABASE ADD SUPPLEMENTAL LOG statement:

  26. ALL system-generated unconditional supplemental log group

  27. This option specifies that when a row is updated, all columns of that row (except for LOBs, LONGS, and ADTs) are placed in the redo log file.

  28. To enable all column logging at the database level, execute the following statement:

  29. SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
  30. PRIMARY KEY system-generated unconditional supplemental log group

  31. This option causes the database to place all columns of a row's primary key in the redo log file whenever a row containing a primary key is updated (even if no value in the primary key has changed).

  32. If a table does not have a primary key, but has one or more non-null unique index key constraints or index keys, then one of the unique index keys is chosen for logging as a means of uniquely identifying the row being updated.

  33. If the table has neither a primary key nor a non-null unique index key, then all columns except LONG and LOB are supplementally logged; this is equivalent to specifying ALL supplemental logging for that row. Therefore, Oracle recommends that when you use database-level primary key supplemental logging, all or most tables be defined to have primary or unique index keys.

  34. To enable primary key logging at the database level, execute the following statement:

  35. SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
  36. UNIQUE system-generated conditional supplemental log group

  37. This option causes the database to place all columns of a row's composite unique key or bitmap index in the redo log file if any column belonging to the composite unique key or bitmap index is modified. The unique key can be due to either a unique constraint or a unique index.

  38. To enable unique index key and bitmap index logging at the database level, execute the following statement:

  39. SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (UNIQUE) COLUMNS;
  40. FOREIGN KEY system-generated conditional supplemental log group

  41. This option causes the database to place all columns of a row's foreign key in the redo log file if any column belonging to the foreign key is modified.

  42. To enable foreign key logging at the database level, execute the following SQL statement:

  43. ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (FOREIGN KEY) COLUMNS;
  44. Note:

  45. Regardless of whether identification key logging is enabled, the SQL statements returned by LogMiner always contain the ROWID clause. You can filter out the ROWID clause by using the NO_ROWID_IN_STMT option to the DBMS_LOGMNR.START_LOGMNR procedure call. See Formatting Reconstructed SQL Statements for Re-execution for details.

  46. Keep the following in mind when you use identification key logging:

  47. If the database is open when you enable identification key logging, all DML cursors in the cursor cache are invalidated. This can affect performance until the cursor cache is repopulated.

  48. When you enable identification key logging at the database level, minimal supplemental logging is enabled implicitly.

  49. Supplemental logging statements are cumulative. If you issue the following SQL statements, both primary key and unique key supplemental logging is enabled:

  50. ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
      ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (UNIQUE) COLUMNS;
  51. Disabling Database-Level Supplemental Logging

  52. You disable database-level supplemental logging using the SQL ALTER DATABASE statement with the DROP SUPPLEMENTAL LOGGING clause. You can drop supplemental logging attributes incrementally. For example, suppose you issued the following SQL statements, in the following order:

  53. ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
      ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (UNIQUE) COLUMNS;
      ALTER DATABASE DROP SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
      ALTER DATABASE DROP SUPPLEMENTAL LOG DATA;
  54. The statements would have the following effects:

  55. After the first statement, primary key supplemental logging is enabled.

  56. After the second statement, primary key and unique key supplemental logging are enabled.


  1. After the third statement, only unique key supplemental logging is enabled.

  2. After the fourth statement, all supplemental logging is not disabled. The following error is returned: ORA-32589: unable to drop minimal supplemental logging.

  3. To disable all database supplemental logging, you must first disable any identification key logging that has been enabled, then disable minimal supplemental logging. The following example shows the correct order:

  4. ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
     ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (UNIQUE) COLUMNS;
     ALTER DATABASE DROP SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
     ALTER DATABASE DROP SUPPLEMENTAL LOG DATA (UNIQUE) COLUMNS;
     ALTER DATABASE DROP SUPPLEMENTAL LOG DATA;
  5. Dropping minimal supplemental log data is allowed only if no other variant of database-level supplemental logging is enabled.

  6. Table-Level Supplemental Logging

  7. Table-level supplemental logging specifies, at the table level, which columns are to be supplementally logged. You can use identification key logging or user-defined conditional and unconditional supplemental log groups to log supplemental information, as described in the following sections.

  8. Table-Level Identification Key Logging

  9. Identification key logging at the table level offers the same options as those provided at the database level: all, primary key, foreign key, and unique key. However, when you specify identification key logging at the table level, only the specified table is affected. For example, if you enter the following SQL statement (specifying database-level supplemental logging), then whenever a column in any database table is changed, the entire row containing that column (except columns for LOBs, LONGs, and ADTs) will be placed in the redo log file:

  10. ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
  11. However, if you enter the following SQL statement (specifying table-level supplemental logging) instead, then only when a column in the employees table is changed will the entire row (except for LOB, LONGs, and ADTs) of the table be placed in the redo log file. If a column changes in the departments table, only the changed column will be placed in the redo log file.

  12. ALTER TABLE HR.EMPLOYEES ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
  13. Keep the following in mind when you use table-level identification key logging:

  14. If the database is open when you enable identification key logging on a table, all DML cursors for that table in the cursor cache are invalidated. This can affect performance until the cursor cache is repopulated.

  15. Supplemental logging statements are cumulative. If you issue the following SQL statements, both primary key and unique index key table-level supplemental logging is enabled:

  16. ALTER TABLE HR.EMPLOYEES
      ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
      ALTER TABLE HR.EMPLOYEES
      ADD SUPPLEMENTAL LOG DATA (UNIQUE) COLUMNS;
  17. See Database-Level Identification Key Logging for a description of each of the identification key logging options.

  18. Table-Level User-Defined Supplemental Log Groups

  19. In addition to table-level identification key logging, Oracle supports user-defined supplemental log groups. With user-defined supplemental log groups, you can specify which columns are supplementally logged. You can specify conditional or unconditional log groups, as follows:

  20. User-defined unconditional log groups

  21. To enable supplemental logging that uses user-defined unconditional log groups, use the ALWAYS clause as shown in the following example:

  22. ALTER TABLE HR.EMPLOYEES
      ADD SUPPLEMENTAL LOG GROUP emp_parttime (EMPLOYEE_ID, LAST_NAME,
      DEPARTMENT_ID) ALWAYS;
  23. This creates a log group named emp_parttime on the hr.employees table that consists of the columns employee_id, last_name, and department_id. These columns will be logged every time an UPDATE statement is executed on the hr.employees table, regardless of whether the update affected these columns. (If you want to have the entire row image logged any time an update was made, use table-level ALL identification key logging, as described previously).

  24. Note:

  25. LOB, LONG, and ADT columns cannot be supplementally logged.

  26. User-defined conditional supplemental log groups

  27. To enable supplemental logging that uses user-defined conditional log groups, omit the ALWAYS clause from the SQL ALTER TABLE statement, as shown in the following example:

  28. ALTER TABLE HR.EMPLOYEES
      ADD SUPPLEMENTAL LOG GROUP emp_fulltime (EMPLOYEE_ID, LAST_NAME,
      DEPARTMENT_ID);
  29. This creates a log group named emp_fulltime on table hr.employees. Just like the previous example, it consists of the columns employee_id, last_name, and department_id. But because the ALWAYS clause was omitted, before-images of the columns will be logged only if at least one of the columns is updated.

  30. For both unconditional and conditional user-defined supplemental log groups, you can explicitly specify that a column in the log group be excluded from supplemental logging by specifying the NO LOG option. When you specify a log group and use the NO LOG option, you must specify at least one column in the log group without the NO LOG option, as shown in the following example:

  31. ALTER TABLE HR.EMPLOYEES
      ADD SUPPLEMENTAL LOG GROUP emp_parttime(
      DEPARTMENT_ID NO LOG, EMPLOYEE_ID);
  32. This enables you to associate this column with other columns in the named supplemental log group such that any modification to the NO LOG column causes the other columns in the supplemental log group to be placed in the redo log file. This might be useful, for example, if you want to log certain columns in a group if a LONG column changes. You cannot supplementally log the LONG column itself; however, you can use changes to that column to trigger supplemental logging of other columns in the same row.

  33. Usage Notes for User-Defined Supplemental Log Groups

  34. Keep the following in mind when you specify user-defined supplemental log groups:

  35. A column can belong to more than one supplemental log group. However, the before-image of the columns gets logged only once.

  36. If you specify the same columns to be logged both conditionally and unconditionally, the columns are logged unconditionally.










This article was reposted from 客居天涯's 51CTO blog; the original post is at http://blog.51cto.com/tiany/1552783. Please contact the original author before reprinting.