SQL performance optimization: a case of EBS asset report optimization

Posted by NoName.ba on Tue, 18 Jan 2022 09:03:07 +0100

The asset module has a customized report that obtains asset and accounting information by joining the FA_* asset tables with the subledger (XLA) tables. Before optimization, running a single asset book for a single month took one hour; after the SQL was optimized and rewritten, it completes in about one second. This article analyzes, step by step, how the optimization was done with fairly simple means.

Core guiding principles

  1. Narrow the data range of the query as much as possible
  2. Drive the large table with the small table (see the sketch below)
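
As a schematic of the second principle (small_table, large_table and large_table_fk_idx are made-up names for illustration), "drive the large table with the small table" means the optimizer scans the small table once and probes an index on the large table for each of its rows:

--a minimal sketch with hypothetical table and index names
SELECT /*+ leading(s) use_nl(l) index(l large_table_fk_idx) */
       s.pk_col, l.data_col
  FROM small_table s,
       large_table l
 WHERE l.fk_col = s.pk_col --one index probe into the large table per small-table row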

Locating the problem SQL

The report lives in a large PL/SQL package that may run to hundreds of thousands of lines of code. How do we quickly locate which code is the culprit behind the slow report? Oracle's built-in profiling tool DBMS_HPROF does the job; refer to this article for an introduction and usage:
ORACLE Performance Optimization - DBMS_HPROF
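
A minimal DBMS_HPROF session looks roughly like this (the directory object and the report procedure name are assumptions for illustration, not taken from the original report):

BEGIN
  --write the raw profile to a server directory object (assumed to exist)
  dbms_hprof.start_profiling(location => 'PLSHPROF_DIR',
                             filename => 'fa_report.trc');
END;
/
--run the slow report here, e.g. xx_fa_report_pkg.run(...) (hypothetical name)
BEGIN
  dbms_hprof.stop_profiling;
END;
/
--then generate the HTML analysis on the command line:
--plshprof -output fa_report fa_report.trc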

SQL before optimization

With the few simple commands from the previous step we located the problem SQL: a UNION of several branches with very similar structure. Only one branch is extracted here as an example.

--This is a report SQL; several parameter conditions irrelevant to this optimization have been removed
SELECT a.asset_id,
       to_char(h.transaction_date_entered, 'yyyy-mm-dd hh24:mi:ss'),
       to_char(xe.transaction_date, 'yyyy-mm-dd'),
       b.book_type_code,
       a.asset_number,
       a.description,
       c.segment1,
       c.segment2,
       b.original_cost amount,
       'increase' trans_type
  FROM fa_transaction_headers   h,
       fa_additions_vl          a,
       fa_books                 b,
       fa_categories            c,
       xla_transaction_entities xte,
       xla_events               xe,
       fa_asset_invoices        i
 WHERE a.asset_id = h.asset_id
       AND b.asset_id = a.asset_id
       AND c.category_id = a.asset_category_id
       AND h.book_type_code = b.book_type_code
       AND i.asset_id = a.asset_id
       AND b.date_ineffective IS NULL
       AND h.transaction_type_code = 'ADDITION'
       AND h.transaction_header_id = xte.source_id_int_1
       AND xte.entity_id = xe.entity_id

Optimization approach

Data analysis (without business knowledge)

Assume first that we are not familiar with the EBS table structures or the business, and analyze purely from a technical point of view.
This SQL is not complex: it joins only seven tables. Checking the data volumes, the FA_* tables hold on the order of tens of thousands of rows each, so they are small tables; the XLA_* tables hold millions to tens of millions of rows, so they are large tables.
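
The volumes can be gauged quickly from optimizer statistics (a sketch; NUM_ROWS is only as current as the last statistics gathering):

SELECT owner, table_name, num_rows
  FROM all_tables
 WHERE table_name IN ('FA_TRANSACTION_HEADERS', 'FA_BOOKS',
                      'XLA_TRANSACTION_ENTITIES', 'XLA_EVENTS')
 ORDER BY num_rows DESC NULLS LAST;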
Looking at the indexes of the related tables (focusing on the indexes and structure of the large tables), we find that the XLA_* tables are partitioned on the APPLICATION_ID column, yet the SQL above applies no restriction on the partition key. Running the query shows that every row returned has the same APPLICATION_ID: 140. An EBS technical consultant with even a year or two of experience knows this column is the business-module ID (recall that we are assuming no EBS knowledge ourselves). This SQL joins the asset module to the subledger; every XLA_* table carries the APPLICATION_ID module column, and the APPLICATION_ID corresponding to the asset module is 140 and nothing else.
We can therefore exploit partition pruning by adding the following conditions on the large tables:

AND xte.application_id = xe.application_id --narrow the scope via partition pruning
AND xte.application_id = 140 --narrow the scope via partition pruning
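
Both facts are easy to verify from the data dictionary and the data itself (a sketch; run with access to the XLA schema):

--confirm the partition key of the XLA tables
SELECT name, column_name
  FROM all_part_key_columns
 WHERE name IN ('XLA_TRANSACTION_ENTITIES', 'XLA_EVENTS');

--confirm that only APPLICATION_ID = 140 appears for asset transaction rows
SELECT DISTINCT xte.application_id
  FROM xla_transaction_entities xte,
       fa_transaction_headers   h
 WHERE h.transaction_header_id = xte.source_id_int_1;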

Re-running the query shows a large improvement, but months with a slightly larger volume (thousands of rows) still take a few minutes, and small ones (hundreds of rows) about a minute. On a database with more than 100 GB of memory and 24 logical cores, that is still far too long for this amount of data; there should be room for further optimization.

Analysis combined with business knowledge

While the report runs, we use session tracing to find the problematic SQL_ID, then run SQLHC, a free Oracle tool, with one very simple command:
SQL> @sqlhc T [sqlid]
This produces a comprehensive analysis report (the same information is available from the SQL Tuning Advisor, but SQLHC is simpler: one command does it), including the following optimization suggestions:

1- SQL Profile Finding (see explain plans section below)
--------------------------------------------------------
  A potentially better execution plan was found for this statement.

  Recommendation (estimated benefit: 99.98%)
  ------------------------------------------
  - Consider accepting the recommended SQL profile.
    execute dbms_sqltune.accept_sql_profile(task_name =>
            'fpkqh87w1y36b_tuning_task', task_owner => 'SYS', replace =>
            TRUE);

2- Index Finding (see explain plans section below)
--------------------------------------------------
  The execution plan of this statement can be improved by creating one or more
  indices.

  Recommendation (estimated benefit: 98.65%)
  ------------------------------------------
  - Consider running the Access Advisor to improve the physical schema design
    or creating the recommended index.
    create index XLA.IDX$$_A3EE0001 on XLA.XLA_TRANSACTION_ENTITIES("SOURCE_ID_
    INT_1");

  - Consider running the Access Advisor to improve the physical schema design
    or creating the recommended index.  If you choose to create the
    recommended index, consider dropping the index "FA"."FA_BOOKS_N2" because
    it is a prefix of the recommended index.
    create index FA.IDX$$_A3EE0002 on FA.FA_BOOKS("TRANSACTION_HEADER_ID_OUT","
    ASSET_ID");

  - Consider running the Access Advisor to improve the physical schema design
    or creating the recommended index.
    create index FA.IDX$$_A3EE0003 on FA.FA_TRANSACTION_HEADERS("BOOK_TYPE_CODE
    ","TRANSACTION_TYPE_CODE","TRANSACTION_SUBTYPE");

  - Consider running the Access Advisor to improve the physical schema design
    or creating the recommended index.
    create index FA.IDX$$_A3EE0004 on FA.FA_ADJUSTMENTS("TRANSACTION_HEADER_ID"
    );

  Rationale
  ---------
    Creating the recommended indices significantly improves the execution plan
    of this statement. However, it might be preferable to run "Access Advisor"
    using a representative SQL workload as opposed to a single statement. This
    will allow to get comprehensive index recommendations which takes into
    account index maintenance overhead and additional space consumption.

The first finding is a SQL profile that replaces the execution plan; in some cases this alone gives a big improvement, and you can try accepting it into the system.
The second finding is about indexes. Note the suggestion to create an index on SOURCE_ID_INT_1 of the large table XLA_TRANSACTION_ENTITIES. That column generally holds the primary key of transactions coming from the various business modules, but the table is a shared subledger table used by other modules as well, so we dare not add an index rashly for fear of degrading those modules' update performance. Instead, consider whether the table's existing indexes can be used.
The table has two indexes. A non-unique composite (function-based) index:

(LEDGER_ID, ENTITY_CODE, NVL(SOURCE_ID_INT_1,(-99)), NVL(SOURCE_ID_INT_2,(-99)), NVL(SOURCE_ID_INT_3,(-99)), NVL(SOURCE_ID_INT_4,(-99)), NVL(SOURCE_ID_CHAR_1,' '), NVL(SOURCE_ID_CHAR_2,' '), NVL(SOURCE_ID_CHAR_3,' '), NVL(SOURCE_ID_CHAR_4,' '))

A unique composite index:

(ENTITY_ID, APPLICATION_ID)
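
For reference, both definitions can be read straight from the data dictionary (a sketch):

SELECT index_name, column_position, column_name
  FROM all_ind_columns
 WHERE table_name = 'XLA_TRANSACTION_ENTITIES'
 ORDER BY index_name, column_position;

--function-based expressions such as NVL(SOURCE_ID_INT_1,(-99)) appear here
SELECT index_name, column_expression
  FROM all_ind_expressions
 WHERE table_name = 'XLA_TRANSACTION_ENTITIES';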

We are already using the unique index, so consider whether the composite index can be used as well. Our SQL references SOURCE_ID_INT_1 of the large table XLA_TRANSACTION_ENTITIES, but because the leading columns LEDGER_ID and ENTITY_CODE are not referenced, that index cannot be used. Can we rewrite the SQL to supply that index prefix without changing the results? From business analysis we learn that LEDGER_ID is the GL ledger (set of books) ID, and that the FA module flows into the subledger in two streams, asset transactions and asset depreciation, corresponding to the two ENTITY_CODE values TRANSACTIONS and DEPRECIATION. Our report does not involve depreciation, so on this large table we can fix the ENTITY_CODE value to 'TRANSACTIONS'.
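
One subtlety before rewriting: the composite index is function-based and stores NVL(SOURCE_ID_INT_1,(-99)) rather than the raw column, so the join predicate must repeat that exact expression for the index to be usable. This is why the optimized SQL below joins as follows:

AND h.transaction_header_id = nvl(xte.source_id_int_1, (-99)) --matches the indexed expression
--the bare column (h.transaction_header_id = xte.source_id_int_1) would not match the function-based index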
At the same time, the asset book configuration table fa_book_controls maps each asset book to its general ledger, so joining through it supplies LEDGER_ID and lets the large table's existing index narrow the range scan, as in the following SQL:

--add the asset book configuration table
fa_book_controls  fbc
--add the following join conditions
AND xte.ledger_id = fbc.set_of_books_id --supplies the leading column of the existing xla_transaction_entities index
AND fbc.book_type_code = h.book_type_code --links the asset book to its GL ledger
AND xte.entity_code = 'TRANSACTIONS' --supplies the second column of the existing index

Optimized SQL

The optimized SQL is as follows:

SELECT a.asset_id,
       to_char(h.transaction_date_entered, 'yyyy-mm-dd hh24:mi:ss'),
       to_char(xe.transaction_date, 'yyyy-mm-dd'),
       b.book_type_code,
       a.asset_number,
       a.description,
       c.segment1,
       c.segment2,
       b.original_cost amount,
       'increase' trans_type
  FROM fa_transaction_headers   h,
       fa_additions_vl          a,
       fa_books                 b,
       fa_categories            c,
       xla_transaction_entities xte,
       xla_events               xe,
       fa_asset_invoices        i,
       fa_book_controls         fbc --asset book configuration, links the asset book to the GL ledger
 WHERE a.asset_id = h.asset_id
       AND b.asset_id = a.asset_id
       AND c.category_id = a.asset_category_id
       AND h.book_type_code = b.book_type_code
       AND i.asset_id = a.asset_id
       AND i.po_vendor_id = nvl(p_vendor_id, i.po_vendor_id)
       AND (a.description LIKE '%' || p_asset_desc || '%' OR p_asset_desc IS NULL)
       AND b.date_ineffective IS NULL
       AND h.transaction_type_code = 'ADDITION'
       AND b.book_type_code = p_book_type_code
       AND h.transaction_header_id = nvl(xte.source_id_int_1, (-99))
       AND xte.entity_id = xe.entity_id
       AND xe.transaction_date >= to_date(p_date_from, 'yyyy-mm-dd hh24:mi:ss')
       AND xe.transaction_date <= to_date(p_date_to, 'yyyy-mm-dd hh24:mi:ss') + 1
       /* the new join conditions added by the optimization */
       AND xte.application_id = xe.application_id --narrow the scope via partition pruning
       AND xte.application_id = 140 --narrow the scope via partition pruning
       AND xte.ledger_id = fbc.set_of_books_id --supplies the leading column of the existing xla_transaction_entities index
       AND fbc.book_type_code = h.book_type_code --links the asset book to its GL ledger
       AND xte.entity_code = 'TRANSACTIONS' --supplies the second column of the existing index

After this rewrite, the report returns a single asset book's data for a single month in about a second.
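
To confirm that the new plan really prunes partitions and uses the composite index, the critical access path can be checked with a cut-down probe (a sketch; the literals 1 and 12345 are placeholders for a ledger ID and a transaction header ID):

EXPLAIN PLAN FOR
SELECT xte.entity_id
  FROM xla_transaction_entities xte
 WHERE xte.application_id = 140
   AND xte.ledger_id = 1
   AND xte.entity_code = 'TRANSACTIONS'
   AND nvl(xte.source_id_int_1, (-99)) = 12345;

SELECT plan_table_output FROM TABLE(dbms_xplan.display);

--expect single-partition access (Pstart = Pstop) and an INDEX RANGE SCAN on the composite index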

Analysis

The rewrites above follow two very simple ideas:

  1. Narrow the data range of the query as much as possible, for example with partition restrictions and fixed business constants
  2. Drive the large table with the small table. Ideally the join columns on both tables are indexed; the next best case is an unindexed small table joining to an indexed large table.

These two principles apply not only to Oracle but to other relational databases as well.

Topics: Oracle