Oracle 11g New Tuning Features Donald K. Burleson Burleson Oracle Consulting

About Don Burleson:    

 Chief Tech Officer – BC Remote DBA
 Full-time DBA since 1983
 Adjunct IT professor
 Author of five Oracle Press books

www.guidehorse.org

 On-site custom Oracle training  Oracle Tuning & Oracle RAC Support  Remote DBA Support

11g new tuning features:
 Objects
  – Table compression
  – Function-based virtual columns
 SQL & Optimizer
  – Improved dbms_stats (extended stats)
  – Real Application Testing (RAT)
     Database workload replay
     SQL Performance advisor
 Instance management
  – Automated Memory Management
  – Instance caging
  – Flash cache

11g compression  Using compression saves disk space, reduces memory use in the buffer cache and increases the speed of disk read operations.

alter table table_name compress for all operations;

How it works (source: Oracle)  "The algorithm works by eliminating duplicate values within a database block, even across multiple columns. Compressed blocks contain a structure called a symbol table that maintains compression metadata.  When a block is compressed, duplicate values are eliminated by first adding a single copy of the duplicate value to the symbol table. Each duplicate value is then replaced by a short reference to the appropriate entry in the symbol table."

11g inline compression  Up to a 3x disk savings - Depending on the nature of your data, Oracle compression will result in huge savings on disk space.  Cheaper solid-state disk - Because compressed tables reside on fewer disk blocks, shops that might not otherwise be able to afford solid-state flash disk can now enjoy I/O speeds up to 300x faster than platter disk.

11g inline compression  Faster full scan/range scan operations - Because tables will reside on fewer data blocks, full table scans and index range scans can retrieve the rows with less disk I/O.  Reduced network traffic - Because the data blocks are compressed/decompressed only within Oracle, the external network packets will be significantly smaller.

Compression issues  Overhead at DML time - Whenever a SQL update, insert, or delete changes a data block in RAM, Oracle must determine if the data block should be unlinked from the freelist (this threshold is defined by the PCTFREE parameter).  Compression on write - An outbound data block must be compressed to fit into its tertiary block size (as defined by db_block_size and the tablespace blocksize keyword). For example, an uncompressed block might occupy up to 96k in RAM and be compressed into its tertiary disk blocksize of 32k upon a physical disk write.

Compression issues  Decompress on read - At physical read time, incoming disk blocks must be expanded once and stored in the RAM data buffer. The exact mechanism for this expansion is not published in the Oracle 11g documentation, but it is most likely a block versioning scheme similar to the one used for maintaining read consistency.  Increased likelihood of disk contention - Because the data is tightly compressed on the data blocks, more rows can be stored per block, thus increasing the possibility of "hot" blocks on disk. Of course, using large data buffers and/or solid-state disk (RAM-SAN) will alleviate this issue.

Function-based virtual columns  For example, assume that we have a table column named gross_pay which is defined as a function of the (hours_worked * hourly_pay_rate) columns.
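The gross_pay example above can be sketched as DDL; the table name and datatypes are assumptions, since only the column names appear in the text:

```sql
-- Hypothetical payroll table: gross_pay is never stored on disk,
-- it is computed from the expression whenever the row is read.
create table payroll (
   emp_name        varchar2(30),
   hours_worked    number,
   hourly_pay_rate number,
   gross_pay       number generated always as
                   (hours_worked * hourly_pay_rate) virtual
);
```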

Function-based virtual column benefits  Automatic re-computation of derived columns for ad-hoc query tools  Moves data processing logic into the database  Reduction in redundant disk space for columns that must be derived from other columns (e.g. a MONTH column that is derived from another DATE column).  Easier interval partitioning

Function-based virtual columns

create table employees(
   employee_name varchar2(30),
   start_date    date,
   end_date      date,
   hourly_rate   generated always as (annual_salary/2080),
   annual_salary number,
   active        as (case when end_date is null then 'Y' else 'N' end));
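A quick usage sketch against the employees table above (the inserted values are hypothetical):

```sql
-- Only the real columns are supplied; the virtual columns are computed.
insert into employees (employee_name, start_date, end_date, annual_salary)
values ('SMITH', sysdate - 365, null, 104000);

-- hourly_rate and active are derived at query time:
--   hourly_rate = 104000 / 2080 = 50, active = 'Y'
select employee_name, hourly_rate, active from employees;
```

Note that virtual columns cannot be the target of an insert or update; only the base columns are writable.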

Improved dbms_stats  Runs up to two times faster.  Automatically collects a statistically significant sample size.  Supports hybrid (expression-based) histograms  Supports multi-column (column group) histograms

Hybrid histograms:  A specific city_name column value correlates to specific state_name column values.  A zip_code column value is only NOT NULL when country_code = 'USA'.  A current_age column is related to the date_of_birth.  The senior_citizen_flag = 'Y' correlates to rows where (SYSDATE - date_of_birth)/365.25 > 65.  The zodiac_sign column directly relates to the value of month_of_birth

Column relationships  A credit_status of 'BAD' indicates a credit_rating value below 400  A lawsuit_status of 'overturned' corresponds with an appeal = 'YES' column value.  A non-zero value for prison_sentence corresponds to a verdict value of 'GUILTY'.

CBO extended statistics Here are the steps to create extended statistics for related table columns with dbms_stats.create_extended_stats: 1 - The first step is to create column histograms for the related columns. 2 - Next, we run dbms_stats.create_extended_stats to relate the columns together.
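The two steps might look like this for the sales example that follows; the schema name and column list are assumptions:

```sql
-- Step 1: histograms on the individual related columns
begin
   dbms_stats.gather_table_stats(
      ownname    => 'SCOTT',
      tabname    => 'SALES',
      method_opt => 'for columns product_price, sales_tax size skewonly');
end;
/

-- Step 2: relate the columns together as a column group extension
select dbms_stats.create_extended_stats(
          'SCOTT', 'SALES', '(product_price, sales_tax)')
from   dual;
```

The function returns a system-generated extension name; subsequent statistics gathering on the table will then collect statistics on the column group.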

Extended stats  Below, we create the column relationship where the total_price column relates to the row expression total_price = product_price + sales_tax:

select dbms_stats.create_extended_stats (NULL, 'sales', '(product_price+sales_tax)') from dual;

CBO expression stats: Expression statistics can be created in two ways:  Using the method_opt argument in dbms_stats  Invoking the dbms_stats create_extended_stats procedure

1 - Collect extended statistics directly on the expression.  For example, if we had a region function, we can include the region function directly in gather_table_stats:

dbms_stats.gather_table_stats (
   ownname    => 'SCOTT',
   tabname    => 'SALES',
   method_opt => 'FOR ALL COLUMNS SIZE AUTO FOR COLUMNS (region(amt_sold))');

2 - Create a virtual column on the expression, then index the virtual column.

alter table sales add region generated always as (region(amt_sold));

3 - Use method_opt

begin
   dbms_stats.gather_table_stats (
      ownname          => 'SCOTT',
      tabname          => 'SALES',
      estimate_percent => 100,
      method_opt       => 'FOR ALL COLUMNS SIZE SKEWONLY FOR COLUMNS (region_amt_sold)',
      cascade          => true);
end;
/

Database workload replay  Real-world workloads eliminate guesswork.  SQL Tuning sets can be captured in real-time or from historical data.

Real Application Testing (RAT)  Two components:  Database workload replay  SQL Performance Analyzer

Setup of RAT: 1 - Define the SQL workload - The DBA defines a "set" of problematic SQL statements using exception thresholds (e.g. all SQL with > 100,000 disk reads), selecting from the cursor cache or the AWR. This is called the SQL Tuning Set, or STS. 2 - Set up a changed environment - Here you can choose to change your initialization parameters, test your performance against a previous release of the CBO (a very useful feature when testing upgrades) or conduct "custom experiments" on the effect of environmental changes on your SQL tuning set.

Setup of RAT: 3 - Schedule & run your tests - The workload is scheduled for execution during "low usage" periods, so that an empirical sample of real-world execution times can be collected and compared, using different execution plans. 4 - Implement the changes - You can flag SQL statements for changes and tune them with the 11g SQL Tuning advisor.
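Under the hood, workload capture and replay are driven by the dbms_workload_capture and dbms_workload_replay packages. A minimal sketch, assuming a pre-created directory object named RAT_DIR and hypothetical capture/replay names:

```sql
-- On production: capture one hour of workload into the directory
begin
   dbms_workload_capture.start_capture(
      name     => 'peak_load_cap',
      dir      => 'RAT_DIR',
      duration => 3600);   -- seconds; capture stops automatically
end;
/

-- Later, on the test system: preprocess and stage the replay
begin
   dbms_workload_replay.process_capture(capture_dir => 'RAT_DIR');
   dbms_workload_replay.initialize_replay(
      replay_name => 'peak_load_rep',
      replay_dir  => 'RAT_DIR');
end;
/
```

External replay clients (the wrc utility) then connect and drive the replayed sessions against the test instance.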

Things to test in RAT:     

Optimize the server kernel Adjusting your optimizer statistics Adjust optimizer parameters Optimize your instance Tune your SQL Access workload with physical indexes and materialized views -

Automated Memory Management (AMM)  Prior to Oracle 11g, you set the sga_target and sga_max_size parameters, allowing Oracle to reallocate RAM within the SGA. The PGA was independent, as governed by the pga_aggregate_target parameter.  Now in Oracle 11g we see the memory_max_target parameter, which governs the total maximum RAM for both the PGA and SGA regions, and the new memory_target parameter, which governs their current combined size.
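The switch to AMM might be sketched like this; the 2G figures are placeholders, not recommendations:

```sql
-- Cap total RAM (SGA + PGA) and let Oracle divide it up
alter system set memory_max_target = 2G scope=spfile;
alter system set memory_target     = 2G scope=spfile;

-- Zero the old targets so they act only as minimums, not constraints
alter system set sga_target           = 0 scope=spfile;
alter system set pga_aggregate_target = 0 scope=spfile;

-- Restart the instance for memory_max_target to take effect
```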

Automated Memory Management (AMM)
 Upsides
  – Great for beginner DBAs (The 2-day DBA)
  – Great for "hands off" departmental systems
  – Great for systems with lots of RAM

 Downsides:
  – Waits until there is a problem before morphing RAM
  – Resize operations can CAUSE performance problems (when used on a server with too little RAM)
  – Not as flexible as manual allocation/management

Instance caging (from Oracle):  Instance caging is a method that uses an initialization parameter to limit the number of CPUs that an instance can use simultaneously. . . . If you use instance caging to limit the number of CPUs to four for each of the four instances, there is less likelihood that one instance can interfere with the others.
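Instance caging is enabled by setting cpu_count and activating a resource manager plan; the four-CPU limit below matches the Oracle example above, and DEFAULT_PLAN is the built-in plan shipped with 11g:

```sql
-- Limit this instance to 4 CPUs
alter system set cpu_count = 4;

-- Caging only takes effect while a resource manager plan is active
alter system set resource_manager_plan = 'DEFAULT_PLAN';
```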

Instance caging (from Oracle):  Caging (a.k.a. fencing, CPU affinity) has been around since the 1980s.  Used on large SMP servers (32 or 64 processors) to split off CPU resources.

Instance caging
 Upsides
  – Keeps a "runaway instance" from hogging CPU
  – Allows virtualization (processor affinity)

 Downsides
  – Does not effectively allow for sharing of computing resources
  – Does not fence RAM resources

Flash cache (re Oracle docs)  A flash cache is an extension of the database buffer cache that lives on a flash disk, which is a solid state storage device that uses flash memory.  Without flash cache, the database re-uses each clean buffer in main memory as needed, overwriting it. If the overwritten buffer is needed later, then the database must read it from magnetic disk.

Flash cache (re Oracle)  With flash cache, the database can write the body of a clean buffer to the flash cache, enabling reuse of its main memory buffer.  The database keeps the buffer header in an LRU list in main memory to track the state and location of the buffer body in the flash cache. If this buffer is needed later, then the database can read it from the flash cache instead of from magnetic disk.
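The flash cache is configured with two initialization parameters; the file path and size below are assumptions (Oracle suggests sizing it as a multiple of the buffer cache):

```sql
-- Point the flash cache at a solid-state device and size it
alter system set db_flash_cache_file = '/flash/oracle_flash.dat' scope=spfile;
alter system set db_flash_cache_size = 32G scope=spfile;
```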

Got Questions? Get updates as I find new tips, techniques and methods:

BC Oracle Forum www.dba-oracle.com