How to Keep Oracle Database SQL Performance Stable (reposted from an ITPUB article)

Unstable SQL performance, where a statement suddenly deteriorates, can turn into a serious system-wide problem. For large systems, SQL whose performance suddenly degrades is encountered frequently, and it is one of the challenges a DBA faces.

In application systems that use Oracle Database, a SQL statement that normally runs well can suddenly perform far worse. For OLTP systems in particular, performance problems in frequently executed core SQL usually affect the performance of the whole database and therefore the normal operation of the entire system. For individual SQL, such as infrequently used query statements, a problem usually affects only a small number of functional modules and does not affect the whole system.

So, how should we keep SQL performance stable?

SQL performance usually deteriorates when the SQL statement is re-parsed and the parse produces a wrong execution plan. The following situations cause SQL to be re-parsed:

1. The SQL statement does not use bind variables, so every execution must be parsed.
2. The SQL has not been run for a long time and has been flushed out of the shared pool, so it must be re-parsed the next time it is executed.
3. A DDL operation has been executed on an object referenced by the SQL (a table, view, etc.), or its structure has even changed, for example an index has been created.
4. Privileges on an object referenced by the SQL have changed.
5. Statistics on a table or index referenced by the SQL have been re-collected, or the statistics have been deleted.
6. Some performance-related parameters have been modified.
7. The shared pool has been flushed.
Of course, restarting the database also causes all SQL to be re-parsed.

When a re-parsed SQL statement suddenly performs much worse than it used to, it is usually for the following reasons:

1. The optimizer statistics for tables and indexes have been deleted, or the re-collected statistics are not accurate. Inaccurate re-collection is usually caused by an incorrect collection strategy (method), for example analyzing a partitioned table with the analyze command instead of the dbms_stats package, or using a sampling ratio that is too small. The Oracle optimizer depends heavily on statistics; statistics that cannot be trusted easily lead SQL to a wrong execution plan.
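As a hedged illustration of the correct approach (the APP_OWNER schema and ORDERS table names are assumptions), statistics on a partitioned table would be gathered with the dbms_stats package rather than analyze, letting Oracle choose the sample size and collecting both global and partition-level statistics:

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
      ownname          => 'APP_OWNER',
      tabname          => 'ORDERS',
      estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
      granularity      => 'ALL',   -- global plus partition-level statistics
      cascade          => TRUE);   -- also gather index statistics
END;
/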

2. Bind variable peeking (bind peeking), combined with a histogram on the column the bind variable is compared against, a bind variable whose range of values is too large, or partition data that is distributed very unevenly:

1) A histogram on the bind-variable column:

Suppose the orders table stores all orders, and its state column has three distinct values: 0 means unprocessed, 1 means processed successfully, and 2 means processing failed. There is an index on the state column; the vast majority of rows in the table have state 1, while 0 and 2 are in the minority. Consider the following SQL:

select * from orders where state = :b1

:b1 is a bind variable. In most cases its value is 0 and the index should be used, but if the SQL happens to be parsed when the application first runs it with a b1 value of 1, the index will not be used and the table will be accessed with a full table scan. For SQL using bind variables, only the first execution peeks at the bind variable values and determines the execution plan; all subsequent executions of the SQL reuse that plan. When later executions pass in a b1 value of 0, they still use the plan chosen on the first run, that is, a full table scan, which leads to poor performance.
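One way to confirm this behaviour is to look at the cursor's plan together with the bind value that was peeked when it was built. The sketch below assumes a hypothetical sql_id looked up in v$sql; the PEEKED_BINDS format option is available in recent releases:

select *
  from table(dbms_xplan.display_cursor(
         sql_id          => '7b2twsn8vgfsq',  -- hypothetical, take it from v$sql
         cursor_child_no => 0,
         format          => 'TYPICAL +PEEKED_BINDS'));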

2) The range of bind variable values is too large:

Similarly, suppose the orders table has a created_date column recording when each order was placed, and the table stores the most recent year of data. Consider the following SQL:

select * from orders where created_date >= :b1;

In most cases the application passes a b1 value that is a date within the last few days, and the SQL should then use the index on the created_date column; but when the value of b1 is a date five months in the past, a full table scan should be used. As with the histogram problem described above, if the variable value passed in on the first execution of the SQL leads to a full table scan, all subsequent executions of the SQL will use a full table scan, hurting performance.
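What the optimizer currently believes about such a column can be checked in the data dictionary (a minimal sketch, using the illustrative orders table from above): the value range it assumes and whether a histogram exists.

select column_name, num_distinct, histogram, low_value, high_value
  from user_tab_col_statistics
 where table_name  = 'ORDERS'
   and column_name = 'CREATED_DATE';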

3) Uneven data distribution across partitions:

With range and list partitioning, the amount of data in each partition can be very uneven. For example, suppose the orders table is partitioned by geographical area, the P1 partition holds only a few thousand rows, while the P2 partition holds 2 million rows. Suppose there is also a product_id column with a local partitioned index on it, and the following SQL:

select * from orders where area = :b1 and product_id = :b2

The condition on area lets this SQL benefit from partition elimination. If on the first execution the application passes a b1 value that happens to fall on the small P1 partition, the optimizer is likely to choose a full scan, and as described previously all subsequent executions of the SQL will then use a full scan.
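How unevenly the rows are spread across the partitions can be seen from the data dictionary, provided statistics have been gathered (again using the illustrative orders table):

select partition_name, num_rows
  from user_tab_partitions
 where table_name = 'ORDERS'
 order by partition_position;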

3. Other reasons, such as a MOVE or similar operation on a table leaving its indexes unusable, or indexes being changed. Of course, these situations are caused by improper maintenance and are beyond the scope of this article.

In summary, sudden deterioration of SQL performance is mainly caused by bind variables and statistics. Note that we only discuss sudden deterioration here, not gradual deterioration caused by growth in data volume and business activity.

To keep SQL performance, or in other words the execution plan, stable, consider the following aspects:

1. Plan the statistics collection strategy. For Oracle 10g, the default strategy can satisfy most needs, but the default collection policy gathers histograms on too many columns. Histograms and bind variables are inherently in conflict: to keep performance stable, do not collect histograms on columns that are queried with bind variables, and for columns that really need histograms, do not use bind variables on those columns in SQL conditions. For the collection strategy, consider using the system default for most tables; for problem tables, lock their statistics with DBMS_STATS.LOCK_TABLE_STATS so that the system does not automatically gather statistics on them, and then write a customized script to collect statistics on those tables. The script is similar to the following:

exec dbms_stats.unlock_table_stats …
exec dbms_stats.gather_table_stats …
exec dbms_stats.lock_table_stats …
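A fuller sketch of such a script, with assumed owner, table and option values, unlocks the statistics, gathers them without histograms (SIZE 1 means no histogram buckets), and locks them again:

BEGIN
  DBMS_STATS.UNLOCK_TABLE_STATS(ownname => 'APP_OWNER', tabname => 'ORDERS');
  DBMS_STATS.GATHER_TABLE_STATS(
      ownname          => 'APP_OWNER',
      tabname          => 'ORDERS',
      estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
      method_opt       => 'FOR ALL COLUMNS SIZE 1',  -- no histograms
      cascade          => TRUE);
  DBMS_STATS.LOCK_TABLE_STATS(ownname => 'APP_OWNER', tabname => 'ORDERS');
END;
/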

2. Modify the SQL statement to add a HINT so that the statement executes with the execution plan specified by the hint. This requires modifying the application as well as the SQL statements, plus testing and release, so it takes longer and carries higher cost and risk.
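For example (the index name orders_state_idx is hypothetical), the orders query from above could be forced onto the index access path with a hint, at the cost of changing and redeploying application code:

select /*+ index(o orders_state_idx) */ *
  from orders o
 where state = :b1;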

3. Set the hidden parameter _optim_peek_user_binds to FALSE to turn off bind peeking. Modifying this parameter may itself cause performance problems (what is discussed here is the stability issue).
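Turning off bind peeking instance-wide would look roughly like the following (illustrative only; as noted above, changing this hidden parameter can itself cause problems):

alter system set "_optim_peek_user_binds" = false scope = spfile;
-- takes effect after the instance is restarted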

4. Use outlines. For a SQL statement whose execution plan has already suddenly deteriorated, you can use an OUTLINE to pin its execution plan. In 10g, DBMS_OUTLN.CREATE_OUTLINE can create an outline from an existing cursor of a normally performing SQL statement. If the execution plans of all frequently executed core SQL are pinned with outlines in advance, sudden deterioration of SQL performance can be avoided as far as possible.

Note: DBMS_OUTLN is installed by the $ORACLE_HOME/rdbms/admin/dbmsol.sql script.
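A sketch of pinning a plan with an outline; the hash_value and child_number are hypothetical values taken from v$sql for the cursor that currently has the good plan, and stored outlines are only used while use_stored_outlines is enabled:

BEGIN
  DBMS_OUTLN.CREATE_OUTLINE(
      hash_value   => 3389102310,   -- v$sql.hash_value of the good cursor
      child_number => 0,
      category     => 'DEFAULT');
END;
/
alter system set use_stored_outlines = true;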

5. Use SQL Profiles. SQL Profiles are a new feature of Oracle 10g and are not described here; please refer to the relevant documentation.

In addition, some parameters can be adjusted to avoid potential problems, for example setting the "_btree_bitmap_plans" parameter to FALSE (for this parameter, please refer to articles on the Internet or the Oracle documentation).

In practical work, by using a customized statistics collection strategy and by using outlines on part of the system, the system has essentially been free of sudden deterioration of existing SQL performance. Of course, there are still cases of sudden SQL performance deterioration caused by improper maintenance operations, for example creating an index without gathering statistics on it, causing SQL to use the new index even though it was not suitable for that SQL, or maintenance staff accidentally deleting the statistics of a table's indexes.

The above describes a number of methods and issues; treating them as material for further study, the reward will be even greater.