PostgreSQL function sfa_tmp_sleep() runs slower and slower - SQL analysis

Summary:
The PostgreSQL function sfa_tmp_sleep() has been taking longer and longer to run.

Recently we noticed that the function sfa_tmp_sleep(), scheduled to run once an hour and processing 100 records per run, now takes up to 1h32m per execution. By the time the third scheduled run starts, the first one still has not finished.
So it is worth digging into.

Function definition
CREATE OR REPLACE FUNCTION sfa_tmp_sleep()
RETURNS text AS
$BODY$
DECLARE
    sqlsexec character varying := '';
    r record;
BEGIN
    ------------------------------------------
    -- Purpose: compute sample statistics for dormant samples
    ------------------------------------------
    set work_mem = '3000MB';

    -- this function runs only this one SQL query (the LIMIT belongs inside the string literal)
    sqlsexec := 'SELECT t1.sample_hash FROM t_sfa_sample t1, t_white t2
                 where t1.sample_hash = t2.sample_hash and t1.state = -1
                   and t2.sample_type = 7
                   and not exists (select sample_hash from t_sfa_tmp
                                    where t_sfa_tmp.sample_hash = t1.sample_hash)
                 limit 100';
    RAISE NOTICE '%', sqlsexec;
    FOR r IN EXECUTE sqlsexec
    LOOP
        RAISE NOTICE 'sample_hash = %', r.sample_hash;
        PERFORM sfa_tmp_sample(r.sample_hash);   -- each hash found is passed to sfa_tmp_sample() for scoring
    END LOOP;
    RETURN 'DONE';
EXCEPTION WHEN OTHERS THEN
    RAISE EXCEPTION '(%)', SQLERRM;
    RETURN 'error';
END
$BODY$
LANGUAGE plpgsql;

Run the first SQL statement manually, without the LIMIT, and see how long it takes:
SELECT t1.sample_hash FROM t_sfa_sample t1,t_white t2 where t1.sample_hash=t2.sample_hash and t1.state=-1 
and t2.sample_type=7 and not exists (select sample_hash from t_sfa_tmp where t_sfa_tmp.sample_hash=t1.sample_hash) ;  ---67528 ms. 

Change the LIMIT to 20 rows and see how long that takes.
The test took 4115 s, more than an hour. That is simply unacceptable.
We need to find out where the time is going.
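One way to see where time is spent inside a PL/pgSQL function is the auto_explain extension with nested-statement logging; this is a hedged sketch of the approach, the article does not state which tooling was actually used:

-- Hedged sketch: auto_explain logs the plan and runtime of every statement
-- executed inside the function, which points directly at the slow query.
load 'auto_explain';
set auto_explain.log_min_duration = 0;          -- log every statement's plan
set auto_explain.log_nested_statements = on;    -- include SQL run inside functions
set auto_explain.log_analyze = on;              -- include actual row counts and times
select sfa_tmp_sleep();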

First, from the pgbadger reports we can see that sfa_tmp_sample() itself is fast: updating the statistics for a single hash takes only milliseconds.
select sfa_tmp_sample('A60CD39357F6D192547CDF85065C43D7');   -- run on its own it is fast, about 500 ms
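pgbadger builds its report from the PostgreSQL server log, so per-call timings like this are only visible when statement durations are logged. A minimal sketch of that setup, assuming log-based monitoring (the exact configuration used here is not stated in the article):

-- Log every statement together with its duration so each sfa_tmp_sample()
-- call shows up with its runtime in the pgbadger report.
-- In postgresql.conf:
--   log_min_duration_statement = 0
-- then reload the configuration:
select pg_reload_conf();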

The initial suspicion is that the SELECT inside sfa_tmp_sleep() is what takes so long.
explain SELECT t1.sample_hash FROM t_sfa_sample t1,t_white t2 where t1.sample_hash=t2.sample_hash 
and t2.sample_type=7  and t1.state=-1 and not exists (select sample_hash from t_sfa_tmp where t_sfa_tmp.sample_hash=t1.sample_hash) limit 1; 
"Limit (cost=1.31..4.34 rows=1 width=33)" 
" -> Merge Join (cost=1.31..14435878.87 rows=4759688 width=33)" 
" Merge Cond: ((t2.sample_hash)::text = (t1.sample_hash)::text)" 
" -> Merge Append (cost=1.31..11326897.81 rows=19474512 width=33)" 
" Sort Key: t2.sample_hash" 

At first I thought t_white was simply too large, so a somewhat long runtime would be understandable.
But then I noticed something odd: when the query is run manually, the elapsed time with and without the LIMIT differs enormously.
NOTICE: SELECT t1.sample_hash FROM t_sfa_sample t1,t_white t2 where t1.sample_hash=t2.sample_hash and t1.state=-1 
and t2.sample_type=7 and not exists (select sample_hash from t_sfa_tmp where t_sfa_tmp.sample_hash=t1.sample_hash) ; 
Without the LIMIT, select sfa_tmp_sleep() took only 81 s. Because the function had just been run, the query returns 0 rows, so no sfa_tmp_sample() calls are needed; the function should return as soon as this SQL finishes.

NOTICE: SELECT t1.sample_hash FROM t_sfa_sample t1,t_white t2 where t1.sample_hash=t2.sample_hash and t1.state=-1 
and t2.sample_type=7 and not exists (select sample_hash from t_sfa_tmp where t_sfa_tmp.sample_hash=t1.sample_hash) limit 100; 
This one ran for a very long time without producing a result.

Why does adding a LIMIT hurt performance? Let's look at the detailed EXPLAIN ANALYZE output:
explain analyze SELECT t1.sample_hash FROM t_sfa_sample t1,t_white t2 where t1.sample_hash=t2.sample_hash and t1.state=-1 
and t2.sample_type=7 and not exists (select sample_hash from t_sfa_tmp where t_sfa_tmp.sample_hash=t1.sample_hash); 

"Hash Join (cost=288876.92..3072363.39 rows=4764135 width=33) (actual time=74750.884..74750.884 rows=0 loops=1)"     --总耗时才74s,下面是每一步的耗时 
" Hash Cond: ((t2.sample_hash)::text = (t1.sample_hash)::text)" 
" -> Append (cost=0.00..2589277.04 rows=19542413 width=33) (actual time=0.515..47951.685 rows=19051749 loops=1)"
" -> Seq Scan on t_white t2 (cost=0.00..0.00 rows=1 width=33) (actual time=0.001..0.001 rows=0 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Bitmap Heap Scan on t_white20110701 t2 (cost=4.61..27.18 rows=46 width=33) (actual time=0.513..1.464 rows=43 loops=1)"     ---位图索引,t_white每个分区表的查询耗时都还算快。 
" Recheck Cond: (sample_type = 7)" 
" -> Bitmap Index Scan on t_white20110701_sample_type_idx (cost=0.00..4.60 rows=46 width=0) (actual time=0.500..0.500 rows=53 loops=1)" 
" Index Cond: (sample_type = 7)" 
" -> Bitmap Heap Scan on t_white20110801 t2 (cost=296.54..3235.54 rows=15520 width=33) (actual time=3.014..128.033 rows=15201 loops=1)" 
" Recheck Cond: (sample_type = 7)" 
" -> Bitmap Index Scan on t_white20110801_sample_type_idx (cost=0.00..292.66 rows=15520 width=0) (actual time=2.753..2.753 rows=15895 loops=1)" 
" Index Cond: (sample_type = 7)" 
" -> Seq Scan on t_white20110901 t2 (cost=0.00..179.12 rows=851 width=33) (actual time=0.243..3.192 rows=828 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Seq Scan on t_white20111001 t2 (cost=0.00..33.66 rows=57 width=33) (actual time=0.243..1.555 rows=55 loops=1)"
" Filter: (sample_type = 7)" 
" -> Seq Scan on t_white20111101 t2 (cost=0.00..26.19 rows=210 width=33) (actual time=0.290..1.000 rows=183 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Bitmap Heap Scan on t_white20111201 t2 (cost=762.69..8933.83 rows=40571 width=33) (actual time=8.507..420.264 rows=40073 loops=1)" 
" Recheck Cond: (sample_type = 7)" 
" -> Bitmap Index Scan on t_white20111201_sample_type_idx (cost=0.00..752.55 rows=40571 width=0) (actual time=7.232..7.232 rows=41184 loops=1)" 
" Index Cond: (sample_type = 7)" 
" -> Seq Scan on t_white20120101 t2 (cost=0.00..1534.72 rows=8484 width=33) (actual time=0.247..27.334 rows=8279 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Seq Scan on t_white20120201 t2 (cost=0.00..8966.33 rows=56440 width=33) (actual time=0.215..153.553 rows=54567 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Seq Scan on t_white20120301 t2 (cost=0.00..6881.99 rows=43735 width=33) (actual time=0.239..115.809 rows=42545 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Bitmap Heap Scan on t_white20120401 t2 (cost=1000.51..11229.42 rows=53193 width=33) (actual time=10.874..426.664 rows=52150 loops=1)" 
" Recheck Cond: (sample_type = 7)" 
" -> Bitmap Index Scan on t_white20120401_sample_type_idx (cost=0.00..987.21 rows=53193 width=0) (actual time=9.207..9.207 rows=53658 loops=1)" 
" Index Cond: (sample_type = 7)" 
" -> Bitmap Heap Scan on t_white20120501 t2 (cost=2679.92..32892.85 rows=143434 width=33) (actual time=25.837..1573.999 rows=143011 loops=1)" 
" Recheck Cond: (sample_type = 7)" 
" -> Bitmap Index Scan on t_white20120501_sample_type_idx (cost=0.00..2644.06 rows=143434 width=0) (actual time=21.501..21.501 rows=146864 loops=1)"   --部分分区表能够使用建立在sample_type字段的索引,因此时间明显比没用索引的要短 
" Index Cond: (sample_type = 7)" 
" -> Seq Scan on t_white20120601 t2 (cost=0.00..16996.70 rows=98081 width=33) (actual time=0.227..359.610 rows=96404 loops=1)"     --即使没有使用到sample_type上的索引,耗时也还能接受 
" Filter: (sample_type = 7)" 
" -> Seq Scan on t_white20120701 t2 (cost=0.00..12755.74 rows=77482 width=33) (actual time=0.233..250.715 rows=76203 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Seq Scan on t_white20120801 t2 (cost=0.00..36999.70 rows=214737 width=33) (actual time=0.239..628.646 rows=211867 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Seq Scan on t_white20120901 t2 (cost=0.00..23962.33 rows=149082 width=33) (actual time=0.246..373.619 rows=144067 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Seq Scan on t_white20121001 t2 (cost=0.00..11539.94 rows=84231 width=33) (actual time=0.216..179.243 rows=81386 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Seq Scan on t_white20121101 t2 (cost=0.00..24504.95 rows=190559 width=33) (actual time=0.246..353.219 rows=184554 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Seq Scan on t_white20121201 t2 (cost=0.00..26269.97 rows=180864 width=33) (actual time=0.259..400.979 rows=174210 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Seq Scan on t_white20130101 t2 (cost=0.00..21436.80 rows=170187 width=33) (actual time=0.241..316.030 rows=165762 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Seq Scan on t_white20130201 t2 (cost=0.00..56572.03 rows=428164 width=33) (actual time=0.010..936.162 rows=418646 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Seq Scan on t_white20130301 t2 (cost=0.00..80714.98 rows=649873 width=33) (actual time=0.234..1272.789 rows=638643 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Seq Scan on t_white20130401 t2 (cost=0.00..48517.72 rows=385268 width=33) (actual time=0.213..729.710 rows=379044 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Seq Scan on t_white20130501 t2 (cost=0.00..43888.30 rows=359839 width=33) (actual time=0.222..690.170 rows=352550 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Seq Scan on t_white20130601 t2 (cost=0.00..46958.70 rows=364984 width=33) (actual time=0.235..785.484 rows=357265 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Seq Scan on t_white20130701 t2 (cost=0.00..59807.29 rows=362261 width=33) (actual time=0.264..929.197 rows=361180 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Seq Scan on t_white20130801 t2 (cost=0.00..80945.31 rows=599443 width=33) (actual time=0.008..1185.351 rows=582070 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Seq Scan on t_white20130901 t2 (cost=0.00..66983.62 rows=551307 width=33) (actual time=0.258..893.968 rows=536798 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Seq Scan on t_white20131001 t2 (cost=0.00..76896.57 rows=561052 width=33) (actual time=0.234..1008.501 rows=547033 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Seq Scan on t_white20131101 t2 (cost=0.00..86725.89 rows=644766 width=33) (actual time=0.216..1205.315 rows=626491 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Seq Scan on t_white20131201 t2 (cost=0.00..81994.80 rows=590138 width=33) (actual time=0.723..1646.654 rows=572707 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Seq Scan on t_white20140101 t2 (cost=0.00..74730.84 rows=471926 width=33) (actual time=0.254..1226.444 rows=462548 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Seq Scan on t_white20140201 t2 (cost=0.00..104963.35 rows=794670 width=33) (actual time=0.258..1566.819 rows=784287 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Seq Scan on t_white20140301 t2 (cost=0.00..139674.42 rows=1093780 width=33) (actual time=0.256..1892.263 rows=1077644 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Seq Scan on t_white20140401 t2 (cost=0.00..192073.55 rows=1735035 width=33) (actual time=0.215..2498.861 rows=1706597 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Seq Scan on t_white20140501 t2 (cost=0.00..156879.65 rows=1284820 width=33) (actual time=0.243..2429.547 rows=1262875 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Seq Scan on t_white20140601 t2 (cost=0.00..75085.46 rows=509768 width=33) (actual time=0.200..1142.757 rows=501259 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Seq Scan on t_white20140701 t2 (cost=0.00..109932.02 rows=801796 width=33) (actual time=0.254..1699.791 rows=785950 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Seq Scan on t_white20140801 t2 (cost=0.00..124129.98 rows=870717 width=33) (actual time=0.244..1959.016 rows=857404 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Seq Scan on t_white20140901 t2 (cost=0.00..91497.77 rows=677569 width=33) (actual time=0.208..1499.579 rows=654824 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Seq Scan on t_white20141001 t2 (cost=0.00..99435.20 rows=652200 width=33) (actual time=0.480..1943.429 rows=640646 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Seq Scan on t_white20141101 t2 (cost=0.00..80946.43 rows=629295 width=33) (actual time=0.250..1826.538 rows=597895 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Seq Scan on t_white20141201 t2 (cost=0.00..95591.54 rows=836886 width=33) (actual time=0.382..2146.592 rows=801980 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Seq Scan on t_white20150101 t2 (cost=0.00..79795.20 rows=629169 width=33) (actual time=0.022..1899.265 rows=584735 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Seq Scan on t_white20150201 t2 (cost=0.00..64520.84 rows=581339 width=33) (actual time=0.230..1430.519 rows=530069 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Seq Scan on t_white20150301 t2 (cost=0.00..83947.32 rows=660344 width=33) (actual time=0.342..2088.060 rows=653357 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Seq Scan on t_white20150401 t2 (cost=0.00..37661.30 rows=288239 width=33) (actual time=0.504..892.213 rows=285864 loops=1)"    
" Filter: (sample_type = 7)" 
" -> Hash (cost=277212.84..277212.84 rows=933126 width=33) (actual time=6726.655..6726.655 rows=568872 loops=1)"
" Buckets: 131072 Batches: 1 Memory Usage: 36111kB" 
" -> Hash Anti Join (cost=147500.92..277212.84 rows=933126 width=33) (actual time=3212.152..6523.566 rows=568872 loops=1)" 
" Hash Cond: ((t1.sample_hash)::text = (t_sfa_tmp.sample_hash)::text)" 
" -> Seq Scan on t_sfa_sample t1 (cost=0.00..106786.89 rows=1861524 width=33) (actual time=0.020..1122.771 rows=1859951 loops=1)"   ---顺序扫描   t_sfa_sample 中sample_hash是主键 
" Filter: (state = (-1))"     ----过滤条件 state=-1 意味着规划节点为它扫描的每一行检查该条件,并且只输出符合条件的行 
" -> Hash (cost=123638.74..123638.74 rows=1908974 width=33) (actual time=3211.371..3211.371 rows=1898083 loops=1)" 
" Buckets: 262144 Batches: 1 Memory Usage: 120484kB" 
" -> Seq Scan on t_sfa_tmp (cost=0.00..123638.74 rows=1908974 width=33) (actual time=0.392..2210.484 rows=1898083 loops=1)"     ---顺序扫描t_sfa_tmp 表中sample_hash是主键 
"Total runtime: 74751.386 ms" 
The measured time is very close to the time of executing the function directly.

explain  analyze  SELECT t1.sample_hash FROM t_sfa_sample t1,t_white t2 where t1.sample_hash=t2.sample_hash and t1.state=-1 
and t2.sample_type=7 and not exists (select sample_hash from t_sfa_tmp where t_sfa_tmp.sample_hash=t1.sample_hash) limit 100; 
"Limit (cost=6.88..270.13 rows=100 width=33) (actual time=4208033.574..4208033.574 rows=0 loops=1)"       ---总耗时 
" -> Merge Join (cost=6.88..12541509.94 rows=4764141 width=33) (actual time=4208033.571..4208033.571 rows=0 loops=1)" 
" Merge Cond: ((t1.sample_hash)::text = (t2.sample_hash)::text)" 
" -> Merge Anti Join (cost=5.58..1027854.50 rows=933126 width=33) (actual time=0.302..323876.834 rows=568872 loops=1)" 
" Merge Cond: ((t1.sample_hash)::text = (t_sfa_tmp.sample_hash)::text)" 
" -> Index Scan using t_sfa_sample_pkey on t_sfa_sample t1 (cost=0.00..481736.97 rows=1861524 width=33) (actual time=0.277..79479.520 rows=1859951 loops=1)" 
" Filter: (state = (-1))" 
" -> Index Scan using t_sfa_tmp_pkey on t_sfa_tmp (cost=0.00..527407.31 rows=1908974 width=33) (actual time=0.017..229564.972 rows=1898083 loops=1)"   --跟上面的2210.484一比,虽然行数是一样的,但是耗时确实100倍左右 
" -> Materialize (cost=1.31..11414825.13 rows=19542439 width=33) (actual time=13.404..3855134.924 rows=19051765 loops=1)" 
" -> Merge Append (cost=1.31..11365969.04 rows=19542439 width=33) (actual time=13.397..3831856.556 rows=19051765 loops=1)" 
" Sort Key: t2.sample_hash" 
" -> Index Scan using t_white_sample_hash_key on t_white t2 (cost=0.00..8.27 rows=1 width=33) (actual time=0.005..0.005 rows=0 loops=1)"   ---所有的t_white的分区表都没有走sample_type的索引,导致耗时也增加了 
" Filter: (sample_type = 7)" 
" -> Index Scan using u_t_white20110701 on t_white20110701 t2 (cost=0.00..121.13 rows=46 width=33) (actual time=0.040..31.548 rows=43 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Index Scan using u_t_white20110801 on t_white20110801 t2 (cost=0.00..13031.54 rows=15520 width=33) (actual time=1.879..3965.901 rows=15201 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Index Scan using u_t_white20110901 on t_white20110901 t2 (cost=0.00..712.33 rows=851 width=33) (actual time=0.911..199.757 rows=828 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Index Scan using u_t_white20111001 on t_white20111001 t2 (cost=0.00..140.85 rows=57 width=33) (actual time=0.046..32.823 rows=55 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Index Scan using u_t_white20111101 on t_white20111101 t2 (cost=0.00..112.66 rows=210 width=33) (actual time=0.023..57.701 rows=183 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Index Scan using u_t_white20111201 on t_white20111201 t2 (cost=0.00..34747.64 rows=40571 width=33) (actual time=1.072..4954.557 rows=40073 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Index Scan using u_t_white20120101 on t_white20120101 t2 (cost=0.00..5799.41 rows=8484 width=33) (actual time=0.024..1942.406 rows=8279 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Index Scan using u_t_white20120201 on t_white20120201 t2 (cost=0.00..35006.71 rows=56440 width=33) (actual time=0.022..11110.476 rows=54567 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Index Scan using u_t_white20120301 on t_white20120301 t2 (cost=0.00..26839.66 rows=43735 width=33) (actual time=1.412..8513.762 rows=42545 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Index Scan using u_t_white20120401 on t_white20120401 t2 (cost=0.00..43719.06 rows=53193 width=33) (actual time=0.819..11715.886 rows=52150 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Index Scan using u_t_white20120501 on t_white20120501 t2 (cost=0.00..127606.13 rows=143434 width=33) (actual time=0.309..35385.370 rows=143011 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Index Scan using u_t_white20120601 on t_white20120601 t2 (cost=0.00..66095.17 rows=98081 width=33) (actual time=0.255..19346.965 rows=96404 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Index Scan using u_t_white20120701 on t_white20120701 t2 (cost=0.00..48646.21 rows=77482 width=33) (actual time=0.031..15112.948 rows=76203 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Index Scan using u_t_white20120801 on t_white20120801 t2 (cost=0.00..140243.01 rows=214737 width=33) (actual time=0.026..40043.470 rows=211867 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Index Scan using u_t_white20120901 on t_white20120901 t2 (cost=0.00..91808.20 rows=149082 width=33) (actual time=0.764..27296.248 rows=144067 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Index Scan using u_t_white20121001 on t_white20121001 t2 (cost=0.00..45707.42 rows=84231 width=33) (actual time=0.271..17041.615 rows=81386 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Index Scan using u_t_white20121101 on t_white20121101 t2 (cost=0.00..96983.57 rows=190559 width=33) (actual time=0.220..37807.532 rows=184554 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Index Scan using u_t_white20121201 on t_white20121201 t2 (cost=0.00..103932.13 rows=180864 width=33) (actual time=0.019..39480.933 rows=174210 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Index Scan using u_t_white20130101 on t_white20130101 t2 (cost=0.00..84114.31 rows=170187 width=33) (actual time=0.238..29166.750 rows=165762 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Index Scan using u_t_white20130201 on t_white20130201 t2 (cost=0.00..223210.01 rows=428164 width=33) (actual time=0.229..82961.080 rows=418646 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Index Scan using u_t_white20130301 on t_white20130301 t2 (cost=0.00..318834.06 rows=649873 width=33) (actual time=0.225..139716.301 rows=638643 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Index Scan using u_t_white20130401 on t_white20130401 t2 (cost=0.00..191974.93 rows=385268 width=33) (actual time=0.226..80686.644 rows=379044 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Index Scan using u_t_white20130501 on t_white20130501 t2 (cost=0.00..173435.62 rows=359839 width=33) (actual time=0.239..74029.353 rows=352550 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Index Scan using u_t_white20130601 on t_white20130601 t2 (cost=0.00..185554.28 rows=364984 width=33) (actual time=0.242..75051.125 rows=357265 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Index Scan using u_t_white20130701 on t_white20130701 t2 (cost=0.00..235635.52 rows=362261 width=33) (actual time=0.634..111325.936 rows=361180 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Index Scan using u_t_white20130801 on t_white20130801 t2 (cost=0.00..319847.98 rows=599443 width=33) (actual time=0.021..137529.831 rows=582070 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Index Scan using u_t_white20130901 on t_white20130901 t2 (cost=0.00..265118.29 rows=551307 width=33) (actual time=0.199..113658.102 rows=536798 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Index Scan using u_t_white20131001 on t_white20131001 t2 (cost=0.00..304541.23 rows=561052 width=33) (actual time=0.025..115868.850 rows=547033 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Index Scan using u_t_white20131101 on t_white20131101 t2 (cost=0.00..343331.79 rows=644766 width=33) (actual time=0.309..134153.298 rows=626491 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Index Scan using u_t_white20131201 on t_white20131201 t2 (cost=0.00..324976.99 rows=590138 width=33) (actual time=0.023..125280.760 rows=572706 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Index Scan using u_t_white20140101 on t_white20140101 t2 (cost=0.00..296371.65 rows=471926 width=33) (actual time=0.244..108254.151 rows=462547 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Index Scan using u_t_white20140201 on t_white20140201 t2 (cost=0.00..412775.87 rows=794670 width=33) (actual time=0.655..154092.691 rows=784287 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Index Scan using u_t_white20140301 on t_white20140301 t2 (cost=0.00..553468.21 rows=1093780 width=33) (actual time=0.020..209569.984 rows=1077644 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Index Scan using u_t_white20140401 on t_white20140401 t2 (cost=0.00..760834.35 rows=1735035 width=33) (actual time=0.281..191973.327 rows=1706594 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Index Scan using u_t_white20140501 on t_white20140501 t2 (cost=0.00..621494.36 rows=1284820 width=33) (actual time=0.317..189809.749 rows=1262874 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Index Scan using u_t_white20140601 on t_white20140601 t2 (cost=0.00..297441.20 rows=509768 width=33) (actual time=0.028..108351.308 rows=501259 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Index Scan using u_t_white20140701 on t_white20140701 t2 (cost=0.00..435577.15 rows=801796 width=33) (actual time=0.027..156456.805 rows=785950 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Index Scan using u_t_white20140801 on t_white20140801 t2 (cost=0.00..491279.21 rows=870717 width=33) (actual time=0.154..184267.384 rows=857403 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Index Scan using u_t_white20140901 on t_white20140901 t2 (cost=0.00..362271.39 rows=677569 width=33) (actual time=0.549..134776.949 rows=654824 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Index Scan using u_t_white20141001 on t_white20141001 t2 (cost=0.00..392512.54 rows=652200 width=33) (actual time=0.025..160953.113 rows=640646 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Index Scan using u_t_white20141101 on t_white20141101 t2 (cost=0.00..320294.70 rows=629295 width=33) (actual time=0.035..115167.440 rows=597895 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Index Scan using u_t_white20141201 on t_white20141201 t2 (cost=0.00..377995.69 rows=836886 width=33) (actual time=0.024..145096.638 rows=801980 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Index Scan using u_t_white20150101 on t_white20150101 t2 (cost=0.00..314925.97 rows=629169 width=33) (actual time=0.023..95266.226 rows=584735 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Index Scan using u_t_white20150201 on t_white20150201 t2 (cost=0.00..254031.29 rows=581339 width=33) (actual time=0.027..89744.102 rows=530069 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Index Scan using u_t_white20150301 on t_white20150301 t2 (cost=0.00..336172.21 rows=660344 width=33) (actual time=0.025..109274.329 rows=653357 loops=1)" 
" Filter: (sample_type = 7)" 
" -> Index Scan using u_t_white20150401 on t_white20150401 t2 (cost=0.00..152327.58 rows=288265 width=33) (actual time=0.031..47819.164 rows=285895 loops=1)" 
" Filter: (sample_type = 7)" 
"Total runtime: 4208034.397 ms"       ----跟上面一比,耗时真是的是好多倍呢~ 

Summary:
Normally a LIMIT reduces runtime by cutting down the number of rows returned, but this example shows that a LIMIT does not always have that effect.
Once the LIMIT causes the execution plan to change, the result can be disastrous.
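If the row cap were still wanted, one possible workaround, an assumption on my part rather than what was done here, is to hide the LIMIT from join planning behind a CTE, which acts as an optimization fence on PostgreSQL versions before 12:

-- Hedged sketch, not the fix actually applied below: on PostgreSQL < 12 a CTE
-- is an optimization fence, so the join is planned as in the fast, LIMIT-less
-- case and the LIMIT is applied only to the materialized result.
with candidates as (
    select t1.sample_hash
      from t_sfa_sample t1, t_white t2
     where t1.sample_hash = t2.sample_hash
       and t1.state = -1
       and t2.sample_type = 7
       and not exists (select sample_hash from t_sfa_tmp
                        where t_sfa_tmp.sample_hash = t1.sample_hash)
)
select sample_hash from candidates limit 100;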

Fix:
After removing the LIMIT from the function, running select sfa_tmp_sleep() is fast again, only about 60 s.
In practice, the number of new hashes that need statistics within one hour is small anyway.
This also explains why select sfa_tmp_sample() runs in seconds on its own, while adding the query above pushed the whole function to over an hour.
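For reference, the adjusted assignment inside sfa_tmp_sleep() simply drops the LIMIT; this is a restatement of the fix described above, and the rest of the function is unchanged:

sqlsexec := 'SELECT t1.sample_hash FROM t_sfa_sample t1, t_white t2
             where t1.sample_hash = t2.sample_hash and t1.state = -1
               and t2.sample_type = 7
               and not exists (select sample_hash from t_sfa_tmp
                                where t_sfa_tmp.sample_hash = t1.sample_hash)';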

Retest:
Once the query actually has hashes that need statistics, run select sfa_tmp_sleep() again;   -- 63752 ms
Still quite fast.

It turns out the function itself was not the problem: a clause we added because it felt like it would speed things up ended up dragging the function down.