
Application Development Guide for Adaptive Server Enterprise 11.x & OpenClient

This guide is intended as a supplement to the set of Adaptive Server Enterprise 11.x publications to provide some essential guidelines when developing and tuning applications for Sybase ASE 11.x.
 
 
 
 
This document was originally written for the Adaptive Server Enterprise 11.0.x version and contains some information (particularly the discussion of deadlocks) that is largely no longer applicable to the 11.9.x version, which provides row-level locking. However, other information can still be useful for enterprise solutions developers writing their database driver code for the Sybase environment.
 

Table of Contents

1. Overview
2. General
3. Server


4. Client


1. Overview

    This guide is intended as a supplement to the set of publications that is available with the release of Sybase Adaptive Server Enterprise 11.x (ASE 11.x). Its purpose is to provide some guidelines for developing and tuning applications that run on Adaptive Server Enterprise 11.x. This guide by no means replaces the need to attend the Performance & Tuning course given by Sybase Education; that course is a prerequisite to any serious performance and tuning analysis. This guide is meant to complement the Server publications and course material, and in many places great care has been taken not to duplicate those materials.

    A new document in the Adaptive Server Enterprise 11.x set is the Performance and Tuning Guide (P&T Guide). It is expected that the reader of this guide has already read that document and is familiar with its concepts. This guide is intended to focus on some of the specific areas that developers of third party applications run across.

    We also strongly advise you to (periodically) visit the Sybase World Wide Web pages for the latest information, whitepapers, performance tips, sample programs, etc. Specifically look at:
     

    http://www.sybase.com

    http://www.sybase.com/products/databases/sql_server/information/performance.html

    http://www.sybase.com/products/samples/
     

    Our goal is to help you achieve what you want of your application - to be fast (as measured by response time and throughput) and to be able to support more concurrent users.

    Being a developer of applications that will be deployed at your customers' sites presents you with three challenges the typical end user is spared (not everyone in the audience will face all three):

      writing your application to support more than one RDBMS, using a database-specific driver;

      the application was originally architected as a mainframe application and has been (minimally) re-written for client/server technology and for relational databases (i.e., it uses the database to store data but doesn't apply the power and technologies offered); and

      you need to integrate your application with the RDBMS as seamlessly as possible (this especially applies to backup and system administration functions).

    This document will address the first two items that deal with performance issues.

    We realize that many of you have not chosen Sybase as your primary platform and accordingly have written your application with another architecture in mind. This can often lead to practices that are inefficient on Sybase, which we will try to point out, along with feasible solutions.

    We will concentrate on two areas, namely concurrency (locking and deadlocking issues) and performance (once we get it to run, making it run fast).

    The table below is a summary of the topics/solutions that are discussed in more depth in subsequent sections. We included a rating (High - Medium - Low) for each on the benefit that might be gained from implementation and for the coding effort you might expect.
     
     
     
     

    Topic                                      Section   Benefit   Coding Effort
    coding to avoid ‘true’ deadlocks           5.1       H         H
    retry-on-deadlock                          5.1       M         H
    update only what needs to be updated       5.1       H         M
    strive for updates in place                5.1       H         L
    tracking the cause of deadlocks            5.1       M         L
    hold no locks during user ‘think-time’     5.2       H         L
    lock at the minimum isolation level        5.2       M         L
    max-rows-per-page: spreading the data      5.2       H         L
    short transactions                         5.2       M         M
    avoid table locks                          5.2       L         L
    named caches                               5.3       M         M
    large I/Os                                 5.3       L         M
    using data slices                          5.3       M         L
    TCP_NODELAY                                6.1       H         L
    setting packet size                        6.2       H         L
    CS_NOAPI_CHK                               6.3       M         L
    using the right library                    6.4       H         L
    ct_describe() and ct_param()               6.5       M         L
    array binding                              6.6       M         M

     

    2. General

     The architecture of many of the applications written by enterprise solutions developers for Sybase today follows an approach where the client and the Adaptive Server Enterprise reside on the same machine.

    Adaptive Server Enterprise was not designed to share resources with the application server.
     
     

    Ideally, whether or not your application resides on the same machine as the server, you want to minimize the traffic between the application and Adaptive Server Enterprise. This means:

    · Doing the work as close to the data as possible. Let the server do the work through stored procedures, triggers, joins, and other set-oriented processing wherever possible. Avoid processing one row at a time (a short sketch follows this list).

    · Sending multiple commands in a single batch instead of sending each command independently.

    · Filtering the result set before it is returned, eliminating unnecessary columns and rows.

    You also want to build the application as efficiently as possible. Keep the executable as small as possible and compile with optimization flags. Link only the libraries that are necessary.
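
    A minimal sketch of the first point above (the table, column, and procedure names here are hypothetical, not from this document): one set-oriented stored procedure call replaces a client-side loop that fetches and updates one row at a time.

        -- do the work in the server as a single set-oriented statement
        create procedure raise_salaries
            @dept_id int,
            @pct     float
        as
            update employee
            set    salary = salary * (1 + @pct)
            where  dept_id = @dept_id
        go

        -- the client then sends one call instead of one update per row
        exec raise_salaries @dept_id = 42, @pct = 0.05
        go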
     
     

    Of course, the place where you can gain the most performance improvement is in your query design, transaction design, and database design. If you have the flexibility to modify the design of your database, pay particular attention to chapter 2 of the P&T Guide and the discussions of collapsing and splitting tables.
     

    Tuning for Performance

    Step 1 - Knowing Your Application

     There are two parts to knowing your application. The first is understanding how the application is to be used. Which tables get hit most frequently, what is the mix of transactions, are there certain hours when the database is in an insert/update-intensive mode, other periods when reporting is the major activity, are there somewhat quiet times? Knowing how the data is to be used allows you to decide whether or not some tables should be de-normalized (adding redundant columns to eliminate joins, adding derived columns so that aggregate values are readily available), and/or whether some tables should be duplicated (replicated) or split.
     
     

    The other part is the empirical knowledge you gain after collecting and examining all the statistics, either from a running system or, better yet, from a controlled, repeatable simulation environment. Here is where data from the profiled application is captured through a variety of tools: operating system monitoring tools, Sybase SQL monitoring tools (such as SQL Monitor and "dbcc monitor"), third-party monitoring tools, and traces and statistics collected through the application. Because you, the developer of a database-independent API, don't have any control over (or, sometimes, knowledge of) what the application is sending, if you can build into the application tools that capture important information, such as the flow and frequency of statements and how long they take to process, you'll be way ahead of the game in being able to tune your application.

    Step 2 - Knowing What's Tunable

     - packet size, frame size, and other networking parameters (a configuration sketch follows this list)

    - server configuration parameters (e.g., cschedspins, memory, cache)

    - application configurations

    - number of cursor rows fetched at a time
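
    For example, the network packet size parameters can be examined and changed from isql with sp_configure. Exact parameter names vary across ASE releases, so treat the names below as illustrative only; larger packet sizes generally also require additional network memory to be configured, and some of these parameters take effect only after a server restart.

        1> sp_configure "default network packet size"
        2> go
        1> sp_configure "default network packet size", 2048
        2> go
        1> sp_configure "max network packet size", 4096
        2> go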

     Step 3 - Having the Right Testing Environment

     This includes being able to re-run the benchmarks at will (after first establishing the baseline). It means having a benchmark that approximates reality (containing a transaction mix, multiple users, and appropriate timing, simulating what the real world will offer). And it means having the results of the tests be easily understood and compared (this means you can't stand there with a stopwatch - you must build the timings into the application).
     
     

    There are testing tools available that can help you build a multi-user testing environment. The following is not an exhaustive list by any means (or an endorsement), but a starting place of vendors who have offerings: Performix, Mercury, and Neal Nelson.

    Step 4 - What to Monitor

     At the OS level: (use tools such as vmstat, netstat, iostat)

    memory usage

    CPU usage (especially important when client and server, or multiple clients, are on the same machine)

    swapping activity

    paging activity

    network activity

    I/O activity per controller
     
     

       
    At the Adaptive Server Enterprise level: (use tools such as SQL Monitor, sp_lock, sp_monitor, dbcc)

    number of packets sent/received (use dbcc monitor)

    CPU usage

    cache hit rates

    I/O activity

    if you see deadlocks, determine whether they occur on data pages or on the index pages of a non-clustered index

    average packet size

    look at the TDS packets

    number of queries/traffic to network
     

     And, of course, from the application:

    how long did the job run, or

    what is the average response time

    using gprof output can be very useful
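
    From isql, a quick way to sample the Adaptive Server Enterprise-level items above is with the standard system procedures; sp_who, sp_lock, and sp_monitor all exist in ASE 11.x, and sp_monitor reports activity since its last execution.

        1> sp_who
        2> go
        1> sp_lock
        2> go
        1> sp_monitor
        2> go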
     

      Step 5 - What Can Be Done

    if the application allows it, try parallelizing batch operations (beware of increased possibility of causing deadlocks)

    use named caches to prevent data that's going to be re-used from being flushed

    look at hardware alternatives (if allowing write caching shows improvement in your development environment, then perhaps a solid state device might help at the production site)

    check showplan output, verify that the optimizer is doing for you what you’d expect

    look at the MRU/LRU strategy being used, override if necessary

    are there table scans where an index should be used

    set TCP_NODELAY on both the client and the server

    test with various packet sizes to optimize network performance

    if dynamic SQL is being used, are the statements being re-used?

    do the work close to the data, use stored procedures

    use temp tables
      - for non-shared data
      - for intermediate join results
      - to establish result sets without holding locks
      (they will be cleaned up automatically)

    use optimistic locking (via a timestamp or version number)

    use the minimal level of locking for each command;
     

    use isolation level 0 (dirty reads) wherever you can
     
     
     
     

    The query analysis tools that you will use most often are:

    Tool                     Use
    set showplan on          Shows the query plan for a query, including the indexes selected,
                             join order, and worktables.
    set statistics io on     Shows how many logical and physical reads and writes are performed
                             to process the query.
    set statistics time on   Shows how long it takes to execute the query.
    set noexec on            Usually used with set showplan on; suppresses execution of the query.
                             You see the plan the optimizer would choose, but the query is not
                             executed. This is very useful when the query would return very long
                             results or could cause performance problems on a production system.
                             Note that statistics io output is not shown when noexec is in effect
                             (since the query performs no I/O).
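
    A typical way to combine these tools from isql, using the sample table foo that appears later in this document (remember that statistics io produces no output while noexec is on):

        1> set showplan on
        2> set statistics io on
        3> set statistics time on
        4> go
        1> set noexec on
        2> go
        1> select count(*) from foo where fld_a >= 20
        2> go
        1> set noexec off
        2> go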

     
     

    3. Server

      3.1 Deadlocking Relief
    A deadlock occurs when two users each have a lock on a separate object. Each wants to acquire an additional lock on the other user's object. When this happens, the first user is waiting for the second to let go of the lock, but the second user will not let it go until the lock on the first user's object is released.

     It is possible to encounter deadlocks when many long-running transactions are executed at the same time in the same database. Deadlocks become more common as the lock contention increases between those transactions (decreasing concurrency).

     Well-designed applications can avoid deadlocks (but may not fully eliminate them) by always acquiring locks in the same order. Updates to multiple tables should always be performed in the same order, with no table updated twice.

     At all costs, you must code to avoid deadlocks BUT they will happen anyhow (you can influence the frequency). When a deadlock occurs, Adaptive Server Enterprise will terminate one of the transactions so that the other(s) can complete. Whenever this occurs, the transaction that was terminated must be started again - automatically if it occurs in an (unattended) batch process.

       code to avoid 'true' deadlocks
       

       Be consistent in the order that all transactions access the tables. For example, wherever possible, try to avoid a situation such as:
       
       

         
      Transaction A          Transaction B
      begin transaction      begin transaction
      ...                    ...
      update table_1         update table_2
      ...                    ...
      update table_2         update table_1
      ...                    ...
      end transaction        end transaction

       
       
         
      retry-on-deadlock logic should be placed in the application for multi-update transactions
       

      When a deadlock occurs, Adaptive Server Enterprise:

      chooses one of the connections as the victim (the user whose process has accumulated the least amount of CPU time) and aborts its transaction, allowing the other connection to complete;

      rolls back the transaction; and

      returns an error code of 1205.
       

       The application needs to check the error code and automatically restart the transaction if and when a 1205 is returned.

       The following client pseudo-code illustrates retrying a transaction when it is chosen as a deadlock victim:

           do
               begin transaction
               update tab1 set fld1 = @newval_1 where fld2 = @testval_1
               update tab2 set fld1 = @newval_2 where fld2 = @testval_2
           while (@@error = 1205)   /* retry while chosen as the deadlock victim */
           commit transaction

       A word of caution - make sure your retry logic doesn't put you into a continuous loop. Add a counter or some other check to allow the application to quit trying after multiple attempts.

      update only the fields that need to be updated
       

       If you update all the fields in the record, Adaptive Server Enterprise will have to lock all the indexes, possibly unnecessarily. In particular, avoid updating key fields when the values don't change.

       strive for updates in place
       

      Adaptive Server Enterprise 11 will perform the update "in place", that is, it just updates the record on its current page rather than doing a delete and insert, as long as the record still fits on the page and the clustered index value hasn't changed.

       If your records include variable-length fields, you can use the max-rows-per-page feature to spread your data, leaving enough room on the page to accommodate the expected expansion of these fields.

      tracking the cause of deadlocks
       

      (note: the following TechNote can be found in Answerbase)

         
       SYBASE TechNote

       Adaptive Server Enterprise Trace Flags for Tracking Deadlocks

       Summary

      -----------------------------------------------------

      This note explains some Adaptive Server Enterprise trace flags which can be used to trace the cause of deadlocks.

       Content

      -----------------------------------------------------

      If you encounter deadlocks in your application, start Adaptive Server Enterprise with trace flags -T3605 and -T1204 (not 1205). The output from these trace flags can be used to trace the cause of the deadlocks.
       
       

      WARNING!

      Use trace flag 1204 only to trace the cause of deadlocks as it can seriously degrade performance.
       
       

      The following output illustrates how trace flag output can be used to trace the cause of deadlocks.
       
       

      ================== errorlog file ==================

         
      *** Deadlock detected - process 16 trying to wait

      on process 41

      Deadlock chain -->

      LOCK REQUEST INITIATING DEADLOCK: LOGICAL:

      EX_PAGE at 0xaac690

      lockid=107001 spid=16 dbid=6

      Process 41 waiting on Process 16 for resource:

      LOGICAL. Lock requested by

      spid 41:

      SH_PAGE at 0xac549c

      lockid=106991 spid=41 dbid=6

      BLOCKED by spid 16 with the following lock:

      EX_PAGE Blocking at 0x20be7c0

      lockid=106991 spid=16 dbid=6

      pstat=0x0100 [ ]

      VICTIM: process 41 ; pstat 0x0100 [ ] ; cputime = 26

      Process 41 was selected as victim

      Waking up victim process 41 (110 : 0x0100)

      ==============================================
       
       

      As this output illustrates, a shared lock is requested by server process ID (spid) 41 on page 106,991, which is already exclusively locked by spid 16, so spid 41 must wait for spid 16 to finish. Spid 16 in turn requests an exclusive lock on page 107,001. However, spid 41 already holds a lock on page 107,001, and a deadlock occurs. To solve this problem, do the following.
       
       

      1. Issue the following isql commands:

      -------------------------------------

      1> select db_name(6)

      2> go
       
       

      1> use database_name

      2> go
       
       

      1> dbcc traceon(3604)

      2> go
       
       

      1> dbcc page(6, 107001)
      2> go

      (examine the PAGE HEADER output: objid, level, and indid)
       
       

      1> select object_name( objid )

      2> go
       
       

      1> sp_helpindex table_name

      2> go
       
       

      2. Apply step 1 to page 106,991. In some cases, the pages may have been deallocated.

         

      3.2 Other Locking Issues

      lock at the minimum isolation level required by the application
       

      Adaptive Server Enterprise supports isolation levels 0 (uncommitted or ‘dirty’ reads), 1, and 3 (holdlocks). The isolation level can be set per command and/or per connection. Remember that while reads at isolation level 3 guarantee data consistency throughout the transaction, they also mean that the locks are held until the end of the transaction. Other transactions will be blocked and have to wait until the transaction releases its locks, causing poor performance.
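
      As an illustration (the table and column names are hypothetical), the isolation level can be requested per query or per session in Transact-SQL:

          -- dirty read for a single query
          select balance from account where acct_id = 1001 at isolation read uncommitted

          -- dirty reads for the whole session
          set transaction isolation level 0

          -- holdlock requests level-3 behaviour for one table in one query
          select balance from account holdlock where acct_id = 1001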

      short transactions

      By keeping transactions short, you lessen the time a lock will be held. Often some of the work might be done outside the transaction, or a transaction might be broken into two or more smaller transactions.

      For example, instead of:

          begin transaction
          foreach deposit_record from deposit_table
              update individual_accounts_table
              delete deposit_record from deposit_table
          end-foreach
          commit transaction

      do this:

          foreach deposit_record from deposit_table
              begin transaction
              update individual_accounts_table
              delete deposit_record from deposit_table
              commit transaction
          end-foreach
        hold no locks during user ‘think-time’
         

       If there is a lock held while data is displayed on a user's terminal, waiting for him to decide on an action, what's to prevent him from taking a coffee break, a lunch break, or going on vacation? "But if there are no locks, how do I know that no one else has updated the record?"

      One method to ensure data consistency is to have a timestamp on the record and use it to make sure that no one else has updated it.
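
      A minimal sketch of that approach, assuming a hypothetical version column on the table: read the row (and its version) with no lock held, and at update time make the previously read version part of the WHERE clause.

          create procedure update_balance
              @acct_id     int,
              @new_balance money,
              @old_version int
          as
              update account
              set    balance = @new_balance,
                     version = version + 1
              where  acct_id = @acct_id
              and    version = @old_version

              if @@rowcount = 0
                  /* the row changed during think-time: tell the client to */
                  /* re-read the row and let the user resolve the conflict */
                  raiserror 30001 "row was changed by another user; please re-read"
          go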

       max-rows-per-page: spreading the data

        Lock contention on heavily accessed tables may be alleviated by spreading the data out using the max-rows-per-page feature. This feature can be used on the leaf-level pages of any index, clustered or non-clustered. (Of course, when you use it on the clustered index you're spreading the actual data records.) There is a trade-off here: while spreading the data will alleviate lock contention, you'll pay for it by using more disk, doing more physical I/O, and requiring more memory for additional cache to hold the additional pages read.
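
      For illustration (object names are hypothetical), max_rows_per_page can be set when a table or index is created, or changed afterwards with sp_chgattribute:

          -- spread data rows: at most 4 rows on each data page
          create table hot_table
              (id int not null, status char(1) not null)
              with max_rows_per_page = 4

          -- spread the leaf pages of a non-clustered index
          create index hot_idx on hot_table (status)
              with max_rows_per_page = 4

          -- change the setting on an existing object
          sp_chgattribute hot_table, "max_rows_per_page", 4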

      avoid table locks (configurable lock promotion)
        By default, a lock is escalated to the entire table once a command locks 200 pages in the table. This threshold (default value of 200) can be raised, but keep in mind that you may have to allocate more memory to handle the additional page locks being held.
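
      On releases that provide it, the promotion threshold can be tuned per table, per database, or server-wide with the sp_setpglockpromote system procedure; the argument values below (low water mark, high water mark, percentage) are shown purely as an illustration, and the table name is hypothetical.

          -- raise the promotion threshold for one busy table
          sp_setpglockpromote "table", hot_table, 400, 400, 100

          -- if you raise the threshold, you may also need more locks configured
          sp_configure "number of locks", 10000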

      declare cursor’s intent (Read Only or Updateable)
       

       If you don't explicitly make the declaration, then Adaptive Server Enterprise will most often choose to open the cursor as updatable, which could lead to lock contention.
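
      For example (cursor, table, and column names are hypothetical), declare the intent explicitly:

          -- read-only cursor: no positioned updates, lighter locking
          declare browse_csr cursor
          for select acct_id, balance from account
          for read only

          -- updatable cursor, restricted to the columns that will actually change
          declare upd_csr cursor
          for select acct_id, balance from account
          for update of balance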

      3.3 New Adaptive Server Enterprise 11.x Features to Boost Performance

      named caches
       

      Named Caches is a completely new feature for Adaptive Server Enterprise 11.x. No competing RDBMS product has anything of a similar concept, and its ramifications are currently being evaluated outside Sybase to fully analyze the potential benefits.

       

      Prior to Adaptive Server Enterprise 11.x, all data read in from disk resided in a single server-wide cache. When the cache becomes full, the next call for cache space is satisfied by discarding the contents of the least recently used cache pages and refilling them with the currently needed data. In a heavily loaded system those least-recently-used pages may well still be needed for other ongoing or repeated queries, which will consequently need to refetch them from disk. A single ill-timed ad-hoc 'select *' query from a large table might flush the cache of critical data that is currently or soon-to-be in use by multiple core business transactions.

      The Named Caches feature provides a means to avoid these costly repeated physical I/Os for regularly useful data. It allows the DBA to divide the available cache space into any number of named caches, with the remainder staying the default cache. The DBA can then bind one or more objects to a named cache. If the bound object(s) can fit entirely into the named cache, their pages will remain in memory once read in, and will be available for all queries but are insulated from flushing due to the cache needs of other queries. Note that queries which access huge uncacheable tables can still benefit greatly by binding the index(es) to a devoted cache. A table and an index may be bound to separate caches. As an example, if a huge table were the target of occasional queries whose data weren't likely to be of repeated value in cache, the table might be bound to a purposely small named cache, ensuring that pages are swapped in and out as needed, never swamping even the default cache. Additionally, this table's index might be designed to cover most of the frequent queries, and could be bound separately to another cache which holds all of it. This is also true if the index is clustered, in that the non-leaf pages of a clustered index can be separately bound to a cache whether or not the data pages are.

      Note that system tables can also be bound to a named cache. Sysindexes is typically a 'hot' table in OLTP environments. Lastly, note that you can bind a database to a named cache. Tempdb is a logical choice for OLTP using lots of temp tables.

      Please note that some conservatism is warranted when applying this new capability because you still have a finite amount of cache to partition, and any cache space that is devoted to a named cache will be inaccessible for anything but the object(s) bound to the cache. The default cache behaves like the cache of previous releases, and should remain of a useful size for all the non-bound data.
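
      As a sketch of the DBA-side configuration (the cache names, sizes, and objects below are purely illustrative; on 11.x, creating or resizing a named cache required a server restart):

          -- create a 50 MB named cache
          sp_cacheconfig "hot_cache", "50M"

          -- bind a table, one of its indexes, and tempdb to caches
          sp_bindcache "hot_cache", mydb, hot_table
          sp_bindcache "hot_cache", mydb, hot_table, hot_idx
          sp_bindcache "tempdb_cache", tempdb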

       large I/Os
       

      For some operations such as table scans and N-way joins, it would be possible for the optimizer to predict that the next N pages of a table are also going to be useful when it is asked to fetch page 1. This new feature allows the server to issue a single I/O call of up to 16k (8 pages) at a time. For any cache, including the default cache, the space of a cache can be divided into pools of different page sizes. The default configuration for any cache is a single pool of 2k pages.

       

      The possible page sizes are 2k, 4k, 8k and 16k. Depending upon the object(s) bound to the cache and the intended use of the object(s), one might configure a 2k pool and a 16k pool. Note that the actual pool chosen for I/O depends on complexities such as whether the page is already in the cache in one or other of the pools. There is new syntax to force this prefetch behavior on or off; by default it is on. In some cases, with prefetching multiple pages in a single I/O, a table scan may become faster than an index scan. Here also, conservatism is apt. If circumstances force the use of the 2k pool for a query suited to the 16k pool, and the 2k pool is too small, the processing may have to wait while 2k pages are freed up for use. If your application performs I/O on heaps, it should use a cache that allows large I/O.
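
      A sketch of how a DBA and a query might take advantage of large I/O (the cache, table, and index names are hypothetical):

          -- carve a 16K (8-page) buffer pool out of an existing named cache
          sp_poolconfig "hot_cache", "20M", "16K"

          -- request 16K I/O for one table in one query
          select count(*) from big_table (index big_idx prefetch 16)

          -- or turn prefetch off for the session
          set prefetch off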

      data slices

      Data slices is a feature aimed at improving the parallel performance of inserting rows into a table without a clustered index. It allows multiple parallel insertion processes to run without contention. Formerly these processes would contend for the last page of the heap, where each new row is inserted. The DBA is now able to assign a configurable number of slices to a table. Each of these slices has its own 'last page' where the next row will be inserted. Thus for a 5-slice table, 5 inserters can run unimpeded.

      Note that this is suited well for history, log tables and fast parallel BCP, but does not address the updating of indexes. If the insertions are in the order of any index the multiple processes will still serialize on the last page of that index.

       Limitations with Data Slices: A table cannot simultaneously have a clustered index and slices. Select statements going on in parallel with these inserts may incur/cause additional deadlocking because of the different order in which locks are obtained for reads and writes.
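
      The DDL involved is small (the table name is hypothetical); a heap table is simply partitioned after creation:

          -- create the heap (no clustered index), then give it 5 slices
          create table audit_log
              (event_time datetime     not null,
               detail     varchar(255) null)
          alter table audit_log partition 5

          -- the slices can be removed later
          alter table audit_log unpartition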

      3.4 Server Helpful Hints

      Simple Tricks to Use for Good Performance

       use ">=" rather than ">" whenever possible

      "select * from foo where x > 3"

      must scan all index pages where x=3 just to find the first qualified row!
      "select * from foo where x >= 4"
      can go directly to first qualified row.

      example:  Using a table of 10,000 records with a non-unique, non-clustered index on fld_a (an integer). The distribution of the records is such that there are approximately 5% of the records (or 500) for each value of fld_a from 1 to 20.

      query                                   results   logical reads   access method used
      select count(*) from foo
        where fld_a > 19                      490       7               index scan beginning at fld_a = 19
      select count(*) from foo
        where fld_a >= 20                     490       5               index scan beginning at fld_a = 20
      select count(*) from foo
        where fld_a > 20                      0         5               index scan beginning at fld_a = 20
      select count(*) from foo
        where fld_a >= 21                     0         2               index scan looking to begin at fld_a = 21
      select count(*) from foo                10000     55              index scan beginning at first record of index
      select count(*) from foo
        where fld_x = fld_x                   10000     625             scan of all data pages
      select count(*) from foo
        where fld_a >= 20
        and fld_x = fld_x                     490       495             index scan beginning at fld_a = 20, but for
                                                                        each row found the data page must be read

       

      conclusions:

      The first 4 queries illustrate how using ">=" instead of ">" saves unnecessary I/O. The last 3 queries illustrate how covered queries reduce the I/O.

      use "EXISTS" and "IN" rather than "NOT EXISTS" and "NOT IN" (or COUNT)
       

      Faster in both subqueries and IF statements. It is easy to re-write stored procedures using EXISTS or IN. For example,

          if not exists (select * from ...)
          begin
              ... /* statement group */
          end

      could be re-written as:

          if exists (select * from ...)
          begin
              goto exists_label
          end
          ... /* statement group */
          exists_label:

      EXISTS stops after the 1st match, as opposed to COUNT, which does all the I/O to count!

      example:  Using the same table of 10,000 records, with no index on fld_x; approximately 10% of the records have fld_x = 3.

      query                              results   logical reads
      if not exists
        (select * from foo
         where fld_x = 3)                false     625
      if exists
        (select * from foo
         where fld_x = 3)                true      1

       

      conclusions:  A little restructuring of your logic flow can go a long way to improving performance.

      IN clause
       
      instead of just:

          where x in (@a, @b, @c)

      add a between clause:

          where x in (@a, @b, @c)
          and x between min(@a, @b, @c) and max(@a, @b, @c)

      example:  Still using the same table of 10,000 records with the non-clustered, non-unique index on fld_a.

      query                                  results   logical reads
      select count(*) from foo
        where fld_a IN (8,12,13)             1468      55
      select count(*) from foo
        where fld_a IN (8,12,13)
        and fld_a between 8 and 13           1468      19

       

      conclusions:  At times, you can help the optimizer and therefore improve performance.

      Avoid the following, if possible:

      mathematical manipulation of SARGs (search arguments)
       

      SELECT name FROM employee WHERE salary * 12 > 100000

       better to use

      SELECT name FROM employee WHERE salary > 100000 / 12

      example: Still using the same table of 10,000 records with the non-clustered, non-unique index of salary.

      query                                  results   logical reads
      select count(*) from foo
        where salary * 12 > 72000            4990      75
      select count(*) from foo
        where salary > 72000/12              4990      39

       

      conclusions:

       The optimizer is not very good at algebra, but if you simplify its task then the results will be improved performance.

      use of Incompatible Datatypes between columns, SARGs, or SPROC Parameters
       

      For example: float & int, char & varchar, binary & varbinary are incompatible. int and intn (allows nulls) are OK.

      example:  Still using the same table of 10,000 records with a non-clustered, unique index on emp_id which is defined as numeric(8,0).

      query                                       results   logical reads
      select emp_id from foo
        where emp_id = 34                         34        625
      select emp_id from foo
        where emp_id = convert(numeric(8,0),34)   34        3
      conclusions:

      Experience has shown big wins when care is taken to prevent these type mismatches.

      The following shows the output generated when the above 2 queries were run with:

      - set showplan on

      - set statistics io on

      - set statistics time on

      - dbcc traceon(3604,302)

      1> set statistics io on

      2> set statistics time on

      3> set showplan on

      ... 1> dbcc traceon(3604, 302)

      ...

      1> select emp_id from foo

      where emp_id = 34

      Finishing q_score_index() for table 'foo' (objectid 16003088).

      Cheapest index is index 4, costing 62 pages and generating 10000 rows per scan.

      Index covers query.

      Search argument selectivity is 1.000000.

      *******************************

      QUERY IS CONNECTED

      0 -

      NEW PLAN (total cost = 1240): varno=0 indexid=4 path=0x20707878 pathtype=sclause method=NESTED ITERATION

      outerrows=1 rows=10000 joinsel=1 cpages=62 lp=62 pp=62 corder=1

      TOTAL # PERMUTATIONS: 1

      TOTAL # PLANS CONSIDERED: 2

      FINAL PLAN (total cost = 1240):

       varno=0 indexid=4 path=0x20707878 pathtype=sclause method=NESTED ITERATION

      outerrows=1 rows=10000 joinsel=1 cpages=62 lp=62 pp=62 corder=1

      STEP 1

      The type of query is SELECT.

      FROM TABLE

      foo

      Nested iteration

      Index : emp_emp_id

      Parse and Compile Time 0.

      Adaptive Server Enterprise cpu time: 0 ms.

      emp_id

      --------------------

      34.000000

      Table: foo scan count 1, logical reads: 60, physical reads: 0

      Total writes for this command: 0

      Execution Time 0.

      Adaptive Server Enterprise cpu time: 0 ms. SQL Server elapsed time: 243 ms.

      (1 row affected)

      now using the convert function to ensure type matching

      1> select emp_id from foo

      where emp_id = convert(numeric(8,0),34)

      *******************************

      Entering q_score_index() for table 'foo' (objectid 16003088).

      The table has 10000 rows and 625 pages.

      Scoring the SEARCH CLAUSE: emp_id EQ

      Base cost: indid: 0 rows: 10000 pages: 625

      Unique nonclustered index found--return rows 1 pages 2

      Cheapest index is index 4, costing 2 pages and generating 1 rows per scan.

      Index covers query.

      Search argument selectivity is 0.000100.

      *******************************

      QUERY IS CONNECTED

      NEW PLAN FOR ONEROW (total cost = 40):

      varno=0 indexid=4 path=0x2015386c pathtype=sclause method=NESTED ITERATION

      outerrows=1 rows=1 joinsel=163854 cpages=2 lp=2 pp=2 corder=1

      TOTAL # PERMUTATIONS: 0

      TOTAL # PLANS CONSIDERED: 0

      FINAL PLAN (total cost = 538558464):

      varno=0 indexid=4 path=0x2015386c pathtype=sclause method=NESTED ITERATION

      outerrows=1 rows=1 joinsel=163854 cpages=2 lp=2 pp=2 corder=1

      STEP 1

      The type of query is SELECT.

      FROM TABLE

            foo

            Nested iteration

            Index : emp_emp_id

            Parse and Compile Time 0.

            Adaptive Server Enterprise cpu time: 0 ms.

            emp_id

            --------------------

            34.000000

            Table: foo scan count 1, logical reads: 2, physical reads: 0

            Total writes for this command: 0

            Execution Time 0.

            Adaptive Server Enterprise cpu time: 0 ms. SQL Server elapsed time: 0 ms.

             (1 row affected)

                use of multiple "OR" statements - especially on different columns in same table
                 

                 If any portion of the OR clause requires a table scan, it will scan the table! The OR strategy requires the additional cost of creating and sorting a work table. Evaluate UNIONs as an alternative!

                not using the Leading Portion of the Index
                 

                (unless the query is completely covered)

                use of Non-Equal Expressions (!=) in WHERE Clause
                whenever possible, populate temporary tables with "select into"
                 

                 SELECT INTO operations are not logged and therefore are significantly faster than using an INSERT with a nested SELECT.
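
                 A small illustration, reusing the deposit_table example from the short-transactions discussion (the column names are assumed): the select into form creates and fills the temporary table without logging each row, whereas create table plus insert...select logs every inserted row.

                     /* fast, minimally logged */
                     select   acct_id, total = sum(amount)
                     into     #acct_totals
                     from     deposit_table
                     group by acct_id

                     /* slower, fully logged alternative */
                     create table #acct_totals2 (acct_id int, total money)
                     insert   #acct_totals2
                     select   acct_id, sum(amount)
                     from     deposit_table
                     group by acct_id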

              3.5 Trace Flags with Which to Boot the Server
               
               
              1610 - TCP_NODELAY  (see section 6.1 below)

              260 - DONE_IN_PROC

              To facilitate SQLDebug, Adaptive Server Enterprise needs to append an additional 9 bytes of data (a "done_in_proc" message) to the TDS packet for every command in a stored procedure.

             These messages may impose additional TDS communications. Stored procedures with significant "server-based" processing can reduce this overhead by suppressing these "done_in_proc" messages.

             The default behaviour of the pre-Adaptive Server Enterprise 11.x server was to send these 9 bytes. To prevent "done_in_proc" messages, start Adaptive Server Enterprise with trace flag #260. Note: your errorlog will not indicate the use of this option with any message!

             In the Adaptive Server Enterprise 11.x server, the default behaviour has changed, and more (and easier) control has been given to the user. A new configuration variable (called done_inproc) has been made available. The values for this variable are:

             0 - "done in proc" turned off for the whole server (default in Adaptive Server Enterprise 11.x)

             1 - "done in proc" turned on for the whole server (default in pre-Adaptive Server Enterprise 11.x)

             2 - "done in proc" turned on for this connection only

             3 - "done in proc" turned off for this connection only

             Test carefully: turning this off has been known to break some existing 3rd-party applications.


     

    DOCUMENT ATTRIBUTES
    Last Revised: Oct 06, 1999
    Product: Open Client, Adaptive Server Enterprise
    Technical Topics: Performance & Tuning, Application Development
      
    Business or Technical: Technical
    Content Id: 1001342
    Infotype: White Paper-Technical
     
     
     

     