[20170207]UNIFORM SIZE=10M index insert分裂2.txt

http://blog.itpub.net/267265/viewspace-2133066/

--In the test from a few days ago, I noticed the author first ran delete mytest where x<=199000; committed, and only then executed the insert:
--insert into mytest with x as (select level i from dual connect by level <= 199000) select i from x where mod(i, 250) = 0;
--What if I combine the two operations into a single statement? Let's see what happens:
DELETE FROM mytest WHERE x <= 199000 AND MOD (x, 250) != 0;
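--//As a quick sanity check (my addition, not part of the original test), the combined DELETE should leave exactly the rows the author's delete-then-insert produced: the 796 multiples of 250 up to 199000, plus the 1126 rows from 199001 to 200126:
select count(*) from mytest;
--//expected: 796 + 1126 = 1922 rows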

1. Environment:

SCOTT@book> @ &r/ver1

PORT_STRING                    VERSION        BANNER
------------------------------ -------------- --------------------------------------------------------------------------------
x86_64/Linux 2.4.xx            11.2.0.4.0     Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production


2. To make the test easy to repeat, I wrote a script:
--//A note on the pp script: it just displays the current trace file; many similar scripts can be found online.
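--//For reference, a minimal pp-like script might look like this (my sketch, assuming v$diag_info as in 11g; the author's actual pp may differ):
$ cat pp.sql
column trace_file new_value v_trace noprint
select value trace_file from v$diag_info where name = 'Default Trace File';
host cat &v_trace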

$ cat a1.sql

--Parameter 1: datafile size; parameter 2: UNIFORM SIZE; parameter 3: manual or auto
--DROP TABLESPACE tea INCLUDING CONTENTS AND DATAFILES;
CREATE TABLESPACE TEA DATAFILE
  '/mnt/ramdisk/book/tea01.dbf' SIZE &&1 AUTOEXTEND ON NEXT 1280K MAXSIZE UNLIMITED
LOGGING
ONLINE
EXTENT MANAGEMENT LOCAL UNIFORM SIZE &&2
BLOCKSIZE 8K
SEGMENT SPACE MANAGEMENT &&3
FLASHBACK ON;

drop table mytest purge;
create table mytest(x number) tablespace tea;
create index i_mytest_x on mytest(x) tablespace tea;
insert into  mytest select level from dual connect by level <= 200126;
commit;

col object_id for 9999999 new_value b_object_id
select object_id,data_object_id from dba_objects where owner=user and object_name='I_MYTEST_X';

--delete mytest where x<=199000;
--commit;

--insert into mytest with x as (select level i from dual connect by level <= 199000) select i from x where mod(i, 250) = 0;
--commit ;

DELETE FROM mytest WHERE x <= 199000 AND MOD (x, 250) != 0;
commit;
alter session set events 'immediate trace name treedump level &b_object_id';
@ &r/pp

$ cat a2.sql
alter system flush buffer_cache;
alter system flush buffer_cache;
alter session set events '10046 trace name context forever, level 12';
INSERT INTO MYTEST VALUES (200127);
alter session set events '10046 trace name context off';
commit ;
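--//If you prefer, the raw 10046 trace can also be summarized with tkprof (my addition; the trace file name below is illustrative):
$ tkprof /u01/app/oracle/diag/rdbms/book/book/trace/book_ora_<pid>.trc a2_out.txt sys=no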

3. Test:
SCOTT@book> @ a1 100M 10M  manual
..

----- begin tree dump
branch: 0x1800581 25167233 (0: nrow: 399, level: 1)
   leaf: 0x1800582 25167234 (-1: nrow: 540 rrow: 2)
   leaf: 0x1800583 25167235 (0: nrow: 533 rrow: 2)
   leaf: 0x1800584 25167236 (1: nrow: 533 rrow: 2)
   leaf: 0x1800585 25167237 (2: nrow: 533 rrow: 2)
   leaf: 0x1800586 25167238 (3: nrow: 533 rrow: 2)
   leaf: 0x1800587 25167239 (4: nrow: 533 rrow: 2)
   leaf: 0x1800588 25167240 (5: nrow: 533 rrow: 2)
   leaf: 0x1800589 25167241 (6: nrow: 533 rrow: 3)
   leaf: 0x180058a 25167242 (7: nrow: 533 rrow: 2)
...
   leaf: 0x180070a 25167626 (391: nrow: 500 rrow: 2)
   leaf: 0x180070b 25167627 (392: nrow: 500 rrow: 2)
   leaf: 0x180070c 25167628 (393: nrow: 500 rrow: 2)
   leaf: 0x180070d 25167629 (394: nrow: 500 rrow: 2)
   leaf: 0x180070e 25167630 (395: nrow: 500 rrow: 128)
   leaf: 0x180070f 25167631 (396: nrow: 500 rrow: 500)
   leaf: 0x1800710 25167632 (397: nrow: 500 rrow: 500)
----- end tree dump

--//Compared with the earlier test you can see nrow: 533 here, whereas with the previous delete-then-insert the result was nrow: 2.
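--//To cross-check how many deleted entries are still hanging in the leaf blocks (my addition; analyze ... validate structure and the index_stats view are standard):
SCOTT@book> analyze index i_mytest_x validate structure;
SCOTT@book> select lf_rows, del_lf_rows, lf_blks from index_stats;
--//del_lf_rows should account for the nrow-rrow gap seen in the treedump.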

SCOTT@book> @ a2

=====================
PARSING IN CURSOR #140672891095112 len=34 dep=0 uid=83 oct=2 lid=83 tim=1486428871330173 hv=1063204715 ad='7de42b98' sqlid='5zb5uy0zpydvb'
INSERT INTO MYTEST VALUES (200127)
END OF STMT
PARSE #140672891095112:c=1000,e=1053,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=0,tim=1486428871330167
WAIT #140672891095112: nam='db file sequential read' ela= 13 file#=6 block#=128 blocks=1 obj#=89754 tim=1486428871330358
WAIT #140672891095112: nam='db file sequential read' ela= 11 file#=6 block#=432 blocks=1 obj#=89754 tim=1486428871330422
WAIT #140672891095112: nam='db file sequential read' ela= 11 file#=6 block#=1409 blocks=1 obj#=89755 tim=1486428871330518
WAIT #140672891095112: nam='db file sequential read' ela= 10 file#=6 block#=1808 blocks=1 obj#=89755 tim=1486428871330572
WAIT #140672891095112: nam='db file sequential read' ela= 12 file#=3 block#=256 blocks=1 obj#=0 tim=1486428871330712
WAIT #140672891095112: nam='db file sequential read' ela= 10 file#=3 block#=2533 blocks=1 obj#=0 tim=1486428871330792
WAIT #140672891095112: nam='db file sequential read' ela= 11 file#=6 block#=1408 blocks=1 obj#=89755 tim=1486428871330920
EXEC #140672891095112:c=1000,e=968,p=7,cr=2,cu=26,mis=0,r=1,dep=0,og=1,plh=0,tim=1486428871331246
STAT #140672891095112 id=1 cnt=0 pid=0 pos=1 obj=0 op='LOAD TABLE CONVENTIONAL  (cr=2 pr=7 pw=0 time=922 us)'
WAIT #140672891095112: nam='SQL*Net message to client' ela= 2 driver id=1650815232 #bytes=1 p3=0 obj#=89755 tim=1486428871331418
WAIT #140672891095112: nam='SQL*Net message from client' ela= 233 driver id=1650815232 #bytes=1 p3=0 obj#=89755 tim=1486428871331682
CLOSE #140672891095112:c=0,e=7,dep=0,type=0,tim=1486428871331731
=====================

SCOTT@book> column PARTITION_NAME noprint
SCOTT@book> select * from dba_extents where file_id=6;
OWNER  SEGMENT_NAME         SEGMENT_TYPE       TABLESPACE_NAME                 EXTENT_ID    FILE_ID   BLOCK_ID      BYTES     BLOCKS RELATIVE_FNO
------ -------------------- ------------------ ------------------------------ ---------- ---------- ---------- ---------- ---------- ------------
SCOTT  MYTEST               TABLE              TEA                                     0          6        128   10485760       1280            6
SCOTT  I_MYTEST_X           INDEX              TEA                                     0          6       1408   10485760       1280            6

SCOTT@book>

--//You can see that this time Oracle does not read nearly as many blocks: it does not consider these blocks empty, so it has no need to scan them all.
--//Only at the end is block dba=6,1408 read. In other words, the earlier phenomenon can only appear after a massive delete, when the freed space is reclaimed and reused and the maximum value is then inserted.
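--//The file#/block# pairs in the 10046 waits map directly to the dbas in the treedump; for example (my sketch, using the standard dbms_utility call):
SCOTT@book> select dbms_utility.make_data_block_address(6,1409) dba, to_char(dbms_utility.make_data_block_address(6,1409),'fmxxxxxxxx') hex from dual;

       DBA HEX
---------- ---------
  25167233 1800581

--//block#=1409 is the root branch 0x1800581 in the treedump; block#=1408 (0x1800580) is the index segment header.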

4. What if UNIFORM SIZE is set smaller?
--This index occupies about 400 blocks.
--With UNIFORM SIZE=3M, each extent occupies 3*1024*1024/8192 = 384 blocks.
--With UNIFORM SIZE=2M, each extent occupies 2*1024*1024/8192 = 256 blocks.
--With UNIFORM SIZE=1M, each extent occupies 1*1024*1024/8192 = 128 blocks.
--Let's test each case. First, modify the script as follows:

$ cat a1x.sql

--Parameter 1: datafile size; parameter 2: UNIFORM SIZE; parameter 3: manual or auto
--DROP TABLESPACE tea INCLUDING CONTENTS AND DATAFILES;
CREATE TABLESPACE TEA DATAFILE
  '/mnt/ramdisk/book/tea01.dbf' SIZE &&1 AUTOEXTEND ON NEXT 1280K MAXSIZE UNLIMITED
LOGGING
ONLINE
EXTENT MANAGEMENT LOCAL UNIFORM SIZE &&2
BLOCKSIZE 8K
SEGMENT SPACE MANAGEMENT &&3
FLASHBACK ON;

drop table mytest purge;
create table mytest(x number) tablespace tea;
create index i_mytest_x on mytest(x) tablespace tea;
insert into  mytest select level from dual connect by level <= 200126;
commit;

col object_id for 9999999 new_value b_object_id
select object_id,data_object_id from dba_objects where owner=user and object_name='I_MYTEST_X';

delete mytest where x<=199000;
commit;

insert into mytest with x as (select level i from dual connect by level <= 199000) select i from x where mod(i, 250) = 0;
commit ;

--DELETE FROM mytest WHERE x <= 199000 AND MOD (x, 250) != 0;
--commit;
alter session set events 'immediate trace name treedump level &b_object_id';
@ &r/pp


--Run it as follows:
SCOTT@book> @ a1x 100M 3M manual
SCOTT@book> @a2

$ grep "blocks=1 obj#=89765" /u01/app/oracle/diag/rdbms/book/book/trace/book_ora_37173.trc |wc
      4      52     483

--Normal: only 4 blocks scanned.
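--//obj# in the 10046 trace is the data_object_id; it can be mapped back to the index with (my addition):
SCOTT@book> select object_name, data_object_id from dba_objects where data_object_id = 89765;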

@ a1x 100M 2M manual
@ a2

$ grep "blocks=1 obj#=89767" /u01/app/oracle/diag/rdbms/book/book/trace/book_ora_37193.trc |wc
    400    5200   48034
--400 blocks scanned, i.e. essentially the whole index.

@ a1x 100M 1M manual
@ a2

$ grep "blocks=1 obj#=89769" /u01/app/oracle/diag/rdbms/book/book/trace/book_ora_37210.trc |wc
      4      52     482

--Now test the ASSM cases:
@ a1x 100M 3M auto
@ a2
$ grep "blocks=1 obj#=89771" /u01/app/oracle/diag/rdbms/book/book/trace/book_ora_37256.trc|wc
     26     338    3158

@ a1x 100M 2M auto
@ a2

$ grep "blocks=1 obj#=89773" /u01/app/oracle/diag/rdbms/book/book/trace/book_ora_37270.trc |wc
     72     936    8654

@ a1x 100M 1M auto
@ a2

$ grep "blocks=1 obj#=89775"  /u01/app/oracle/diag/rdbms/book/book/trace/book_ora_37281.trc|wc
     70     910    8406
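--//A quick way to summarize the single-block reads per obj# across a whole trace (my sketch; the file name is illustrative):
$ grep "db file sequential read" book_ora_37281.trc | awk -F'obj#=' '{split($2,a," "); print a[1]}' | sort | uniq -c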

5. Summary:
--UNIFORM SIZE=3M, MSSM: 4 blocks scanned
--UNIFORM SIZE=2M, MSSM: 400 blocks scanned
--UNIFORM SIZE=1M, MSSM: 4 blocks scanned

--UNIFORM SIZE=3M, ASSM: 26 blocks scanned
--UNIFORM SIZE=2M, ASSM: 72 blocks scanned
--UNIFORM SIZE=1M, ASSM: 70 blocks scanned

--I won't analyze this further; I don't know why. It may well be a bug.
