In this article I’ll cover parallel reads from the tables created in the first part of this series, which you can find here.
As Exadata has many cores in every configuration, this is where I expect it to shine.
Let’s check it.
1. select /*+ parallel */ count(*) from tdh_nopartitions_normal;
   12.27 sec
2. select /*+ parallel */ count(*) from tdh_nopartitions_flash;
   3.8 sec
3. select /*+ parallel */ count(*) from tdh_nopartitions_columnar;
   1.5 sec
4. select /*+ parallel */ count(*) from tdh_nopartitions_columnar_flsh;
   0.3 sec
5. select /*+ parallel */ count(*) from tdh_nopartitions_in_memory;
   0.24 sec
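Before interpreting the numbers, it is worth confirming that each statement really ran in parallel and with what degree of parallelism. A minimal check, using standard Oracle views rather than anything specific to my Exadata setup, could look like this (run in the same session right after the query):

-- number of parallel server processes used by the last query
select * from v$pq_sesstat where statistic = 'Server Threads';

-- or pull the last execution plan together with its runtime statistics
select * from table(dbms_xplan.display_cursor(format => 'ALLSTATS LAST'));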
The first performance boost comes when flash storage is used (case 2): from 12.27 sec down to 3.8 sec, more than a 3x speedup.
As you might recall from the first part of this series, the same switch with a serial read improved the time from 11.33 sec to 7.6 sec, roughly a 1.5x gain.
This is proof that Exadata really loves parallelism.
The first result, however, is disappointing: a parallel read of the normal, SAS-based table takes longer than a serial read of the same table (case 1).
Another significant boost comes with the columnar compression table (case 3). Again, the parallel version gains more than the serial one: in Part I the serial time dropped from 7.6 sec to 4.1 sec, while in the parallel run the impact is even greater, from 3.8 sec to 1.5 sec.
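For reference, a heap table can be moved into Hybrid Columnar Compression with a single ALTER TABLE MOVE. The actual DDL for tdh_nopartitions_columnar is in Part I, so the compression level below (QUERY HIGH, one of the four HCC levels) is only my assumption:

-- rebuild the segment with Hybrid Columnar Compression (assumed level)
alter table tdh_nopartitions_columnar move compress for query high;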
When I move the columnar table onto flash storage I get a further improvement, from 1.5 sec to 0.3 sec (case 4).
This is much better than the serial version of the same SQL, where the time only went from 4.1 sec to 3.8 sec.
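One way to keep a segment in the Exadata Smart Flash Cache is the CELL_FLASH_CACHE storage attribute shown below; Part I may have placed the table on flash-based grid disks instead, so treat this only as a sketch:

-- ask the storage cells to keep this segment in Smart Flash Cache
alter table tdh_nopartitions_columnar_flsh storage (cell_flash_cache keep);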
The last case (case 5) is again only slightly faster (from 0.3 sec to 0.24 sec), while the difference in the serial version is more noticeable in absolute terms (3.8 sec to 3.4 sec).
This is actually good news, as the In-Memory option is available on non-Exadata hardware too.
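For completeness, here is a sketch of how a table is marked for the In-Memory column store and how to check that it is fully populated; the PRIORITY clause is my assumption and is not taken from Part I:

-- enable the In-Memory column store for the table (assumed priority)
alter table tdh_nopartitions_in_memory inmemory priority high;

-- verify the segment is populated in the column store
select segment_name, populate_status, bytes_not_populated from v$im_segments;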
In the next article, I’ll continue with testing a huge list-range partitioned table.