Hybrid Columnar Compression (Oracle Exadata V2 and V3, Linux)
Hybrid Columnar Compression [message #578770] Mon, 04 March 2013 18:52
Messages: 107
Registered: May 2005
Location: Louisville
Senior Member
We have a de-normalized table with an average row length of 300 bytes. The table is range-partitioned on a date column and hash sub-partitioned on an ID column. It is expected to contain over 1 trillion records. Insert-only table; no updates.

Can we write data into the table while it is defined with HCC compression (high) at the partition level? Or does the current partition have to be uncompressed for inserts? Can someone please explain the compression methodology that can be applied?
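For reference, a partition-level HCC clause might look like the following. This is only a sketch: the table and column names are illustrative, and HCC clauses such as COMPRESS FOR QUERY HIGH require Exadata storage.

```sql
-- Hypothetical DDL sketch: range partition on date, hash sub-partitions on ID,
-- with an HCC clause declared at the partition level.
CREATE TABLE sales_fact (
  sale_id    NUMBER,
  sale_date  DATE,
  payload    VARCHAR2(300)
)
PARTITION BY RANGE (sale_date)
SUBPARTITION BY HASH (sale_id) SUBPARTITIONS 16
(
  PARTITION p2013q1 VALUES LESS THAN (DATE '2013-04-01')
    COMPRESS FOR QUERY HIGH
);
```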


Re: Hybrid Columnar Compression [message #578771 is a reply to message #578770] Mon, 04 March 2013 19:09
Messages: 26766
Registered: January 2009
Location: SoCal
Senior Member
Please read and follow the forum guidelines, to enable us to help you.


What exactly prevents you from constructing a small and simple test case and seeing for yourself what happens?
Re: Hybrid Columnar Compression [message #578784 is a reply to message #578770] Tue, 05 March 2013 01:55
John Watson
Messages: 8640
Registered: January 2010
Location: Global Village
Senior Member
Direct loads will be HCC compressed; conventional inserts will be OLTP compressed. I think it becomes clear when you think it through: the HCC block format is different (a row will be distributed throughout the blocks of a compression unit), so it can't be done in the buffer cache, only in the PGA.
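The distinction above can be sketched as follows (table names are illustrative; only the direct-path insert engages HCC):

```sql
-- Direct-path insert (APPEND hint): blocks are formatted in the PGA and
-- written above the high-water mark, so they can be built as HCC
-- compression units.
INSERT /*+ APPEND */ INTO sales_fact
SELECT * FROM staging_sales;
COMMIT;

-- Conventional insert: goes through the buffer cache, so on a table
-- declared with an HCC clause the rows fall back to OLTP-style compression.
INSERT INTO sales_fact VALUES (1, DATE '2013-03-04', 'row payload');
COMMIT;
```

Other direct-path methods (SQL*Loader with DIRECT=TRUE, CREATE TABLE AS SELECT, ALTER TABLE ... MOVE) behave like the APPEND case.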

By the way, if you find this useful, a fair return would be detail of the compression ratios you are achieving.
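To estimate the ratio before loading, one option is the DBMS_COMPRESSION package (available from 11.2). A sketch, assuming a scratch tablespace named USERS and the illustrative table/partition names from above:

```sql
-- Hypothetical sketch: ask Oracle to sample the segment and estimate the
-- QUERY HIGH compression ratio. Scratch tablespace and object names are
-- placeholders.
SET SERVEROUTPUT ON
DECLARE
  l_blkcnt_cmp    PLS_INTEGER;
  l_blkcnt_uncmp  PLS_INTEGER;
  l_row_cmp       PLS_INTEGER;
  l_row_uncmp     PLS_INTEGER;
  l_cmp_ratio     NUMBER;
  l_comptype_str  VARCHAR2(100);
BEGIN
  DBMS_COMPRESSION.GET_COMPRESSION_RATIO(
    scratchtbsname => 'USERS',
    ownname        => USER,
    tabname        => 'SALES_FACT',
    partname       => 'P2013Q1',
    comptype       => DBMS_COMPRESSION.COMP_FOR_QUERY_HIGH,
    blkcnt_cmp     => l_blkcnt_cmp,
    blkcnt_uncmp   => l_blkcnt_uncmp,
    row_cmp        => l_row_cmp,
    row_uncmp      => l_row_uncmp,
    cmp_ratio      => l_cmp_ratio,
    comptype_str   => l_comptype_str);
  DBMS_OUTPUT.PUT_LINE('Estimated compression ratio: ' || l_cmp_ratio);
END;
/
```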

John Watson
Oracle Certified Master DBA