"Let
the Server serve"
Most
of the 'heavy lifting' done in database applications is moving raw data: I/O.
Disk I/O is slow, and network I/O is slower. The best way to get performance
is to do as little I/O as possible.
The Adaptive Server Enterprise is designed specifically to let you do as much data manipulation as possible where the data already resides, instead of shipping it anywhere. Put as much of your processing into stored procedures as possible.
Also note that the Adaptive Server Enterprise is optimized for set-wise operations. Many applications are ports from row-at-a-time data management systems, which makes it seductively straightforward to retool them for an RDBMS; but the best performance gains any Sybase customer ever gets come when this sort of application logic is recoded to operate on whole sets of data at one swoop. Per-row conditional logic is still available where necessary in stored procedures by using database cursors. To best achieve your goals, both in performance and in simplicity, strive for set-oriented queries that replace seemingly complicated row-at-a-time processes.
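To illustrate, here is a minimal CT-Lib sketch of the set-oriented approach; the table and column names (salesdetail, discount, qty) are invented for this example. One language command replaces a client loop that would otherwise fetch each row, modify it, and send an update back:

/*
** Hypothetical example: change every qualifying row in one
** server-side operation instead of one round trip per row.
*/
retcode = ct_command(cmd, CS_LANG_CMD,
        "update salesdetail set discount = discount + 1.0 \
         where qty > 100",
        CS_NULLTERM, CS_UNUSED);
if (retcode == CS_SUCCEED)
        retcode = ct_send(cmd);
/* Then handle the outcome with the usual ct_results() loop. */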
This recoding for performance is nontrivial. We know. And coupled with that, some of our competitors' products currently perform better (less poorly?) than we do in row-at-a-time settings. This is logical, because our competitors' products were invariably designed before Sybase, using older data models, just as Model T Fords work better than Corvettes in ploughed fields. But regardless of your RDBMS, your performance will benefit from a deep client-server architecture. No product is faster at moving data than not moving data. Sybase is committed to serving the needs of the customer, including row-at-a-time, but if you can get us onto the highway you won't be disappointed.
4.1 TCP_NODELAY

Improving Performance by Setting the TCP/IP Network Option TCP_NODELAY
This information no longer applies to Open Client CT-Library version 11.1, which already enables TCP_NODELAY. However, it is still necessary to set the Adaptive Server Enterprise 11.x configuration option "tcp no delay" so that the Adaptive Server communicates using TCP_NODELAY with applications written with Open Client 11.1 CT-Library.
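For example, on an 11.x server the option is set with sp_configure (a restart of the server may be required before the change takes effect):

sp_configure "tcp no delay", 1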
The
TCP/IP Network Protocol has, by default, a built-in delay to wait until
a network packet is full before sending it. TCP/IP will put small logical
packets into one larger physical packet by briefly delaying packets in an
effort to fill the network frames with as much data as possible. In some
cases this has no effect on Adaptive Server Enterprise or Client performance,
but in other instances waiting for the packets to fill can cause a delay
in client response times. This is especially true when the result set returned
by the server exceeds the configured packet size. For example, if the server
is returning 600 bytes of data and TDS information, and the client packet
size is configured for 512 byte packets, there will be a delay experienced
on the client side in waiting for the second packet of data to arrive.
Most flavors of TCP/IP support the network option TCP_NODELAY. This is an option set on the socket that tells TCP to send a packet immediately, rather than waiting to fill it. Setting TCP_NODELAY can make dramatic improvements to client/server performance. The only argument against implementing TCP_NODELAY is that it increases the number of packets on the network. In most cases this is not problematic, but if the client connects to the server over a WAN, we do not recommend a configuration that could potentially flood the WAN with small packets. In most cases, however, customers have experienced improvements in response times by using this option. The option is enabled within a C program by making a call to a system routine. On the Adaptive Server Enterprise, the option is enabled via a trace flag. The purpose of this article is to provide examples that show how to turn on the TCP_NODELAY option in CT-lib, DB-lib, and the Adaptive Server Enterprise. Examples are given for both socket-based TCP/IP (HP-UX, AIX, SunOS 4) and TLI (Sun Solaris).
Many Sybase VARs have experienced improvement in response times by setting the TCP_NODELAY option on either the Sybase client (DB-lib or CT-lib), on the Adaptive Server Enterprise, or both. The best way to see if you can benefit from this configuration change is to test it with your applications. We recommend running timed tests with the option set on the server only, on the client only, and then on both the server and client. Here are code examples that you can follow to enable the TCP_NODELAY option.
Setting TCP_NODELAY for DBlib clients:
The
following provides information on setting the TCP_NODELAY option for TCP/IP
sockets. The code fragments below were used for setting the socket option
in the HP9000/800 HP-UX 9.0x environment. Include files may need to be altered
for other TCP/IP socket platforms.
The following include files need to be added to the top of the source file:

/* Include these network files for setting the socket option. */
#include <sys/socket.h>
#include <netdb.h>
#include <sys/types.h>
#include <netinet/tcp.h>
- You need to have the following variables declared:

int             dbfd;       /* The file descriptor for this connection. */
struct protoent *proto;     /* Represents the network protocol you are using. */
DBPROCESS       *dbproc;    /* Your dbprocess. You probably already have this declared. */
int             optval;     /* Used in setsockopt(). */
- After making your dbopen() call to get your dbproc, insert the following code:

/*
** Get the socket (file descriptor) associated with this
** dbproc's write connection.
*/
dbfd = DBIOWDESC(dbproc);

/*
** Tell TCP to send all data as soon as it gets it,
** rather than waiting for acks from the receiving
** side.
*/
if ((proto = getprotobyname("TCP")) == (struct protoent *)NULL)
{
        perror("getprotobyname() failed");
        exit(ERREXIT);
}

optval = 1;
if (setsockopt(dbfd, proto->p_proto, TCP_NODELAY, (char *)&optval,
        sizeof(optval)) == -1)
{
        perror("setsockopt() failed");
        exit(ERREXIT);
}
This
turns on the TCP_NODELAY option. Then go ahead and use the dbproc for all
future DB-Library work. Note that for each new dbproc you need to do similar
processing.
Note:
This should be done before any query is sent to the server by dbsqlexec(),
dbsqlsend(), or dbrpcsend().
Setting TCP_NODELAY on the Server:
TCP_NODELAY
on the Server side is implemented by booting the Server with the trace flag
-T1610. Here is a sample RUN_SERVER file with the 1610 trace flag enabled:
#!/bin/sh
#
# Adaptive Server Enterprise Information:
#  name:               REL1001_SUN4
#  master device:      /remote/ts2/devices/REL1001_SUN4/master.dat
#  master device size: 8704
#  errorlog:           /remote/releases/sun4/rel1001/install/REL1001_SUN4.errorlog
#  interfaces:         /remote/releases/sun4/rel1001
#
/remote/releases/sun4/rel1001/bin/dataserver -T1610 \
-d/remote/ts2/devices/REL1001_SUN4/master.dat -sREL1001_SUN4 \
-e/remote/releases/sun4/rel1001/install/REL1001_SUN4.errorlog \
-i/remote/releases/sun4/rel1001
You
will see the following message in the server boot sequence in the Sybase
errorlog if TCP_NODELAY is enabled:
00:94/09/30 14:49:21.37 kernel  SQL Server booted with TCP_NODELAY enabled.
This server option is available on Adaptive Server Enterprise for HP-UX, AIX, SunOS 4.x, DG AViiON, DEC ALPHA OSF/1, and NCR. For 10.0.1 servers on Sun Solaris, request the latest EBF from Sybase Technical Support that contains bug fix 53676. For Sequent and SCO UNIX, request the EBF containing bug fix 40688.
Setting TCP_NODELAY for CT-Lib clients:
The
following code fragments can be used to set the TCP_NODELAY socket option
for CT-Lib clients. In this example, a function set_nodelay is provided.
This example was done for HP9000/800 HP-UX 9.0x. On other TCP/IP platforms
the include files may differ.
Add the following include files to the top of the source file:

/* Include network includes for setting the socket */
#include <sys/socket.h>
#include <netdb.h>
#include <sys/types.h>
#include <netinet/tcp.h>
Call the set_nodelay function after the ct_connect function has been called to establish a connection:

/*
** Allocate a connection structure, set its properties, and
** establish a connection.
**
** If desired, pass in an explicit server name, and the
** server name's length, in place of NULL and 0, respectively.
*/
retcode = ct_connect(&connection, NULL, 0);

/*
** Change the socket option on the connection for TCP_NODELAY.
*/
retcode = set_nodelay(connection);
Here is the text of the set_nodelay function:

/*
** set_nodelay(connection)
**
** Obtain the socket descriptor for the connection and call setsockopt
** to enable TCP_NODELAY for the connection.
*/
int set_nodelay(connection)
CS_CONNECTION *connection;
{
        int             dbfd;    /* The file descriptor for this connection. */
        CS_INT          outlen;  /* Length of buffer returned. */
        struct protoent *proto;  /* Represents the network protocol you are using. */
        int             optval;  /* Used in setsockopt(). */
        CS_RETCODE      retcode;

        retcode = ct_con_props(connection, CS_GET, CS_ENDPOINT, &dbfd,
                CS_UNUSED, &outlen);

        if ((proto = getprotobyname("TCP")) == (struct protoent *)NULL)
        {
                printf("getprotobyname() failed");
                exit(0);
        }

        optval = 1;
        if ((setsockopt(dbfd, proto->p_proto, TCP_NODELAY, (char *)&optval,
                sizeof(optval))) == -1)
        {
                printf("setsockopt() failed");
                exit(0);
        }
        return (retcode);
}
Setting TCP_NODELAY for TLI DBlib clients:

The following provides information on setting the TCP_NODELAY option for TLI clients. The code fragments below were used for setting the TLI option on Sun Solaris 2.3. The include files may need to be altered for other TLI platforms. Also, be aware that the t_optmgmt function may vary widely across TLI platforms, unlike the socket call setsockopt, which is quite uniform across platforms.
/* Include the following to support TCP_NODELAY */
#include <sys/types.h>
#include <errno.h>
#include <sys/tiuser.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

#define T_FUNC_MAX_RETRIES 10

/* We'll use the function set_nodelay to set the TLI option */
/* for TCP_NODELAY. Here is a forward declaration of the function */
int set_nodelay(int);
Add the following declarations to the function main:

int dbfd;       /* The file descriptor for this connection. */
int set_ret;    /* Return code for set_nodelay. */
After making your dbopen() call to get your dbproc, insert the following code:

/*
** Get the socket (file descriptor) associated with this
** dbproc's write connection.
*/
dbfd = DBIOWDESC(dbproc);

/* Set the TCP_NODELAY option */
set_ret = set_nodelay(dbfd);
Here is the text of the function set_nodelay:

/*
** int set_nodelay ( int fd )
**
** Where fd is the file descriptor returned by a previous
** call to t_open().
**
** Returns
**      0    Success
**     -1    Error
**
** If -1 is returned then the global variables errno and t_errno
** contain the system supplied error numbers.
*/
int set_nodelay(fd)
int fd;
{
int count;
struct t_optmgmt req, ret;
struct opthdr {
long level;
long name;
long len;
};
struct ndstruct {
struct opthdr header;
long value;
} ndreq, ndret;
t_errno = errno = 0;
ndreq.header.level = IPPROTO_TCP;
ndreq.header.name = TCP_NODELAY;
ndreq.header.len = sizeof(ndreq.value);
ndreq.value = 1;
ndret.header.level = 0;
ndret.header.name = 0;
ndret.header.len = 0;
ndret.value = 0;
req.flags = T_NEGOTIATE;
req.opt.maxlen = req.opt.len = sizeof(ndreq);
req.opt.buf = (char *)&ndreq;
ret.flags = 0;
ret.opt.maxlen = sizeof(ndret);
ret.opt.len = 0;
ret.opt.buf = (char *)&ndret;
count = 0;
while ( t_optmgmt(fd,&req,&ret) != 0 )
{
if ((t_errno == TSYSERR) && (errno == EINTR)
&& (count++ < T_FUNC_MAX_RETRIES))
continue;
return -1;
}
return 0;
}
This
turns on the TCP_NODELAY option. Then go ahead and use the dbproc for all
future DB-Library work. Note that for each new dbproc you need to do similar
processing.
Note:
This should be done before any query is sent to the server by dbsqlexec(),
dbsqlsend(), or dbrpcsend().
TCP_NODELAY on the Server side is done with trace flag -T1610. This was not implemented on many TLI platforms, and in some cases EBFs will have to be requested from Sybase to have this option enabled on the server side.
Setting TCP_NODELAY for TLI CT-Lib clients:
The
following code fragments can be used to set the TCP_NODELAY socket option
for CT-Lib TLI clients. In this example, a function set_nodelay is provided.
This example was done for Sun Solaris 2.3. On other TLI platforms the include
files may differ.
Add the following include files to the top of the source file:

#include <sys/types.h>
#include <errno.h>
#include <sys/tiuser.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
Add the following declarations to function main:

int    dbfd;     /* The file descriptor for this connection. */
int    set_ret;  /* The return code from set_nodelay. */
CS_INT outlen;   /* Length of buffer returned by ct_con_props(). */
Call the set_nodelay function after the ct_connect function has been called to establish a connection:

/*
** Allocate a connection structure, set its properties, and
** establish a connection.
**
** If desired, pass in an explicit server name, and the
** server name's length, in place of NULL and 0, respectively.
*/
retcode = ct_connect(&connection, NULL, 0);

/*
** Change the socket option on the connection for TCP_NODELAY.
*/
retcode = ct_con_props(connection, CS_GET, CS_ENDPOINT, &dbfd, CS_UNUSED,
        &outlen);
set_ret = set_nodelay(dbfd);
Use the same set_nodelay function provided in the Section entitled "Setting TCP_NODELAY for TLI DBlib clients" to set the TCP_NODELAY option.
4.2 TDS packet size

Overview
TDS
packets are exchanged between Sybase clients and servers as the means of communicating
commands and data. The packets are managed by a network protocol such as TCP/IP
working over various types of physical connection such as Ethernet.
Pre-v4.6 TDS packet size is limited to 512 bytes, while network frame sizes are significantly larger, typically 1508 bytes on Ethernet and 4120 bytes on Token Ring. Note that the specific protocol may have other limitations, for example:

- IPX is limited to 576 bytes in a routed network.
- SPX requires acknowledgment of each packet before it will send another.
TDS packet size can now be set in multiples of 512 bytes, from the default of 512 up to a maximum of 8192 bytes. Choosing a packet size greater than 512 may result in more efficient network throughput. Implementation requires:

- Consideration of the best (or best mix) of TDS packet size(s).
- Coding the client to specify TDS packet size.
- Understanding the impact of Adaptive Server Enterprise configuration.
- Configuring the Adaptive Server Enterprise to support the TDS packet size(s).
- (Extra credit) Tuning the client network buffers.
CHOOSING TDS PACKET SIZE

How TDS Packet Size Is Important
In a 512-byte packet there are approximately 480 bytes available for the packet's `data', that is, for command syntax, result set descriptions, or result set data. If more than 480 bytes are sent to the Adaptive Server Enterprise, or returned by it, the information must be spread over multiple packets. At the receiving end, the client or server must wait for the network to deliver multiple packets before processing can begin.

For example, if a client selects a single row with, say, 1600 bytes of data, that data is spread over 4 packets, and the underlying CT-Lib or DB-Lib must wait until all of them have been received before returning control to the higher-level client functions where the row's data will be processed. In addition, there is the result set descriptive information (name, size, datatype, etc., for all columns in the result set), which may require additional packets.
The
default 512 packet size is well suited to many applications; for example,
the proverbial ATM application where relatively small amounts of information
are exchanged between clients and the server. Even with occasional larger,
multi-packet operations, performance will not be appreciably slower.
When Setting Packet Size Might Be Helpful
The choice of packet size is influenced by the amount of information passed between a client and the server for each individual client 'request' (and in aggregate for all the requests a client makes), not the total volume of bytes processed by the client in a particular period. That is, a million 10-byte commands that each return 100 bytes of data will not run faster if the packet size is increased from 512 to 8192; in fact, they will likely run slower.
Having
said that, applications in which clients and the server exchange many bytes
for a typical request (e.g., long SQL command syntax, select of long rows
or a large number of rows, frequent insert or update of long rows) will benefit
from increasing the TDS packet size.
So why not use the maximum packet size of 8192 bytes and be done with it? There are several reasons why an arbitrary choice of packet size is possibly harmful, or at least unhelpful.

A packet is always padded to its specified size; so if the 'typical' request-response were, say, 2000 bytes, each 8192-byte packet would carry over 6000 bytes of spaces. This imposes a network load in which the waste/overhead is three times the load needed for the useful information flow; a packet size of 2048 would be much better.
In
addition, underlying network protocols may still need to break a packet into
the frames used for transport (e.g., 1508 bytes for Ethernet). It is better
to avoid this network overhead whenever possible.
Evaluation Of Possible Packet Sizes
There
may not be a single 'best' TDS packet size for all client programs in an application,
clients which request large amounts of data (e.g., reporting programs) will
do better with a larger packet size, while OLTP clients may do better will
smaller packet sizes. A program which opens multiple server connections may
optimally use a different packet size on each connection.
A packet size choice for an individual program (connection) can be estimated from a profile of application functions, the expected volume of use, and an understanding of the mix of client requests and server response sizes. While this information is important to the application development overall, it is not a critical step in choosing packet size.
Rather,
build the 'set packet size' capability into clients from the start. Test initially
with a 512 packet size and evaluate each program (connection) using sp_monitor.
Examine the count of packets received by the Adaptive Server Enterprise, the count sent, and the average number of bytes in these packets. If the number of packets sent is much higher than the number received, retest with a packet size that brings the received/sent ratio closer to 1/1.
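For example, the relevant sp_monitor columns look like this (figures invented; the parenthesized values are the counts since sp_monitor was last run):

packets_received    packets_sent    packet_errors
----------------    ------------    -------------
10866(234)          32555(705)      0(0)

Here 234 packets received against 705 sent is roughly the 1/3 ratio used in the example below.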
For example, if the received/sent ratio is 1/3 and the 512-byte packets contain an average of 450 bytes, consider testing a packet size greater than 3 * 450, i.e., the 1536 size. Next try the 1024 size; it or the 1536 size is likely to be the most suitable choice, but also test the 2048 size, largely to confirm having reached the point of diminishing returns.
Note
that received/sent parity is not an end in itself, only a useful place to
start. Stop testing larger packet sizes once the average bytes per packet
is less than the current packet size.
The
Best Advice
Most
importantly, as a client program use many different requests/responses, with
widely varying content, the best packet size may well be the one smaller than
the packet size which shows receive/send count parity.
That is, if a majority of request/responses are satisfied using the smaller packet size, then those request/responses which require multiple packets will not affect performance as much as using the larger packet size would, with its average overhead of some 512 blanks in most packets.
Use
the smallest packet size that gives the necessary performance.
Actual
performance measurement is the only correct determinant of what a good packet
size might be. As with most other things, the suitability of results depends
on how realistic the testing is.
Another
important goal is to have most of the connections made at or below the "default
network packet size" (defined more fully in PACKET SIZE AND THE Adaptive Server
Enterprise section) to avoid memory management problems. This means that if
a significant number of client programs would use a packet size of say 1024,
the "default network packet size" should be re-configured for that value in
place of the 512 value automatically specified at Adaptive Server Enterprise
installation.
SPECIFYING PACKET SIZE IN CLIENT CODE

The client code to set the TDS packet size is simple; prior to opening the Adaptive Server Enterprise connection:
Client-Library

Use the ct_con_props() call, passing a pointer to the variable holding the requested packet size. Example:
...
CS_INT tds_pktsz = 1024;
...
retcode = ct_con_props(connection, CS_SET, CS_PACKETSIZE,
(CS_VOID *)&tds_pktsz, CS_UNUSED, (CS_INT *)NULL);
...
DB-Library

Use the DBSETLPACKET macro, passing the variable holding the requested packet size. Example:
...
short tds_pktsz = 1024;
...
DBSETLPACKET(loginrec, tds_pktsz);
...
PACKET SIZE AND THE Adaptive Server Enterprise

The Client 'Proposes', the Adaptive Server Enterprise 'Disposes'
While a client may request a particular packet size when connecting to an Adaptive Server Enterprise, the Server's configuration will control what happens to the request.

This is based primarily on the values for "default network packet size" and for "maximum network packet size", i.e.:

- If the client connection (CT-Lib or DB-Lib) does *not* specify a packet size, 512 is used.
- If the client packet size > server `maximum' packet size, return ERROR.
- If the client packet size <= server `default' packet size, use the client packet size.
- If the client packet size > server `default' and <= server `maximum', use the client packet size, provided enough "additional netmem" is available; if not, use as much as the memory will allow (rounded down to the nearest multiple of 512).
So the name of the configuration parameter "default network packet size" is a little misleading. It is a threshold used by the Adaptive Server Enterprise to decide which memory pool will be used for the connection's packets; it has nothing to do with the packet size used if the client does not specifically request a size.
Choosing
Configuration Values
Default-size
(or smaller) packets are buffered using space from the Server Memory pool.
Larger packets (i.e., bigger than `default') are buffered in the Additional
Network Memory pool, allocated from the host system's memory much as the normal
server memory pool is.
As
noted in an earlier section, it is beneficial to have most client programs
connect with TDS packet size <= "default network packet size". And as covered
in the next section, "additional netmem" must be evaluated whenever there
is significant change in the number of uses of large packets or the mix of
large packet sizes client programs are using.
SETTING THE Adaptive Server Enterprise PACKET SIZE PARAMETERS

The DBA (sa) must set the Adaptive Server Enterprise's configuration properly, i.e.:

sp_configure "default network packet size", nnnnn
    Sets the default packet size per client connection. (512 as installed)

sp_configure "maximum network packet size", nnnnn
    Sets the maximum packet size per client connection. (512 as installed)

sp_configure "additional netmem", nnnnn
    Memory for large packets (i.e., > "default network packet size") is taken
    from a separate pool; it does not come from the server's memory pool set
    by `sp_configure memory'. (0 as installed)
An optimum value for "additional netmem" (round up to a multiple of 2048) is:

    1.02 * #-connections-using-large-packets * large-packet-size * 3

as each connection uses 3 network buffers: one to read, one to write, and one for overflow. The `large-packet-size' value may have to be a weighted average if clients use a variety of large packet sizes. An additional 2% (the `.02') is included for overhead.
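A worked example, with invented figures: for 20 connections using 2048-byte packets,

    1.02 * 20 * 2048 * 3 = 125,337.6 bytes

which rounds up to 126,976 (62 * 2048), the value to give sp_configure "additional netmem".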
EFFICIENT USE OF CLIENT NETWORK BUFFERS
Further
efficiencies may be gained by adjusting the size of the buffers that are available
to the client when writing to, or reading from, the network.
How
Send and Receive Buffers Are Important
During
application/network interface, TDS packets are placed into the socket send/receive
buffers. If the packet is too big to fit into a buffer it will be split, requiring
multiple buffer transmissions to send the entire packet. There for, the buffers
should be at least as large as the TDS packet size.
As
an example, the default size of such buffers on HP/UX is 8K, on AIX it appears
to be somewhere around 16K. Note that an 8K buffer can completely hold the
8192 TDS packet; any overhead is over and above the 8K.
If
the default buffer size is larger than the most frequently used TDS packet
size, there is no reason to change it. Presumably, the O/S vendor carefully
chose an appropriate default size for these. Anyway, CT-Lib’s code structure
actually makes it impossible to reduce the buffer size.
If
the default size is smaller, increase the buffer size to handle most TDS packets
in a single transmission; that is, to the TDS packet size chosen for the connection.
As described in the CHOOSING TDS PACKET SIZE section, this may be smaller
than the maximum block of data to be sent, but those will be split across
packets and transmitted without impacting overall performance.
Influence
of TCP_NODELAY
Once
the send buffer is full, such as happens if the buffer and packet size are
the same, network transmission of the buffer happens immediately.
When TCP_NODELAY is enabled, network transmission also happens as soon as any data (e.g., a TDS packet) is placed in the buffer, regardless of how full the buffer is. If TCP_NODELAY is not enabled, and the buffer is larger than the data placed in it, the network layer briefly delays transmission waiting for additional data before sending the entire buffer, reducing network traffic.
If
the send buffer and packet size are the same, setting client TCP_NODELAY is
minimally useful.
If the Adaptive Server Enterprise has TCP_NODELAY enabled, the receive buffer size is irrelevant, as long as it is >= the TDS packet size.
However,
both the send and receive buffers require O/S memory for servicing, so increasing
them does have some cost, which is difficult to determine and probably varies
from vendor to vendor. Certainly increasing them beyond the chosen TDS packet
size is unnecessary.
SPECIFYING NETWORK BUFFERS IN CLIENT CODE

#include <netdb.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <netinet/tcp.h>
...
CS_INT tds_pktsz = 1024;
...
if (setsockopt(dbfd, SOL_SOCKET, SO_SNDBUF, (char *)&tds_pktsz,
        sizeof(tds_pktsz)) == -1)
{
        perror("setsockopt() failed - SO_SNDBUF");
        exit(ERREXIT);
}
if (setsockopt(dbfd, SOL_SOCKET, SO_RCVBUF, (char *)&tds_pktsz,
        sizeof(tds_pktsz)) == -1)
{
        perror("setsockopt() failed - SO_RCVBUF");
        exit(ERREXIT);
}
...
4.3 CS_NOAPI_CHK
The routines in Client-Library do extensive error checking to guard against segmentation violations. This is especially important during the development and debugging phases of application development. But like the debug libraries (see section 6.4), this checking imposes significant overhead and is not necessary in a production environment.
Turning
argument checking off typically results in a 10-15% performance gain in
Client-Library usage. How much overall time is gained depends on the application
processing mix.
However,
if you send the wrong datatype to one of the CT-Lib APIs and this option
is set (CS_NOAPI_CHK is TRUE) you will probably CORE DUMP. If the option
is not set and you send the wrong datatype, you'll get a CT-Lib error message.
In most production applications, issues such as datatype matching should
not arise.
* A Code Example

This is the code to set the CS_NOAPI_CHK option. It turns off CT-Lib's checking for valid datatypes. This call should be made after ct_init() and before ct_connect().
CS_CONTEXT *cp;
CS_BOOL    boolval;
...
boolval = CS_TRUE;

/* This API must be called after ct_init() is called */
if (ct_config(cp, CS_SET, CS_NOAPI_CHK, (CS_VOID *)&boolval,
        CS_UNUSED, NULL) != CS_SUCCEED)
{
        /* Release the context structure. */
        (CS_VOID)ct_exit(cp, CS_UNUSED);
        (CS_VOID)cs_ctx_drop(cp);
        FPRINTF(stdout, "Can't turn off CTLIB API checking. Exiting\n");
        return (CS_FAIL);
}
In addition to speeding up insert, update, and select operations, it should also reduce the CPU cycles used by each work process.
CS-Library argument checking can also be turned off. Use the CS-Library routine cs_config() to set the property CS_NOAPI_CHK to CS_TRUE. The default is CS_FALSE.
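A minimal sketch of that CS-Library call, reusing the context pointer cp from the example above:

CS_BOOL boolval = CS_TRUE;

/* Disable CS-Library argument checking on this context. */
if (cs_config(cp, CS_SET, CS_NOAPI_CHK, (CS_VOID *)&boolval,
        CS_UNUSED, NULL) != CS_SUCCEED)
{
        FPRINTF(stdout, "Can't turn off CS-Library API checking.\n");
}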
Comments for both

In argument checking, parameters passed to library (ct or cs) functions are checked to see that they are valid. For example, if you are passing a pointer to a CS_* structure, the pointer might be checked to see if it is valid. If the pointer is valid, then the structure contents might be checked.

In state checking, when library functions (ct or cs) are called, they check that they have been called at a legal time. For example, you cannot send a command with ct_send() if you have not opened the connection first.
If
you disable argument and state error checking, and your code contains a
single API usage error in a single application code path, you are likely
to get a core dump or incorrect results instead of error messages. Platforms
that don't protect memory are likely to lock up (or worse).
The
two sample outputs below illustrate the effects of CS_NOAPI_CHK. In both,
statechk is a simple Sun4 CT-Lib app that attempts to send a query, fetch
results, and close the connection -- All done without opening the connection
first. This is a connection state error: you can't do any of these operations
if the connection is not open.
In
the first example, API checking is enabled (CS_FALSE) and in the second,
it is off.
---- output w/ CS_NOAPI_CHK == CS_FALSE -----------------------
bosco% ./statechk
Client Library error: number(60) severity(1) layer(1)
ct_command(): user api layer: external error:
There is a usage error. This routine has been called at an illegal time.
ERROR: app_run_select: ct_command failed!
ERROR: app_run_select() failed!
Client Library error: number(60) severity(1) layer(1)
ct_close(): user api layer: external error:
There is a usage error. This routine has been called at an illegal time.
ERROR: ct_close failed!
----------------------------------------------------------

---- output w/ CS_NOAPI_CHK == CS_TRUE ------------------------
bosco% ./statechk
Segmentation fault (core dumped)
4.4 using the right library
The Sybase installation delivers two sets of libraries under the $SYBASE directories: devlib and lib. The libraries in devlib are debug libraries which are very useful while developing and debugging an application. The stripped versions of the libraries in the directory lib have had the debug code compiled out and typically run two or three times faster than the debug libraries. Don't use debug libraries for production software once development and testing are completed.
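For example, a production link line on UNIX might look like the following; the library names here are illustrative, since the exact list varies by platform and Open Client version (check the makefiles shipped with the Sybase sample programs):

cc -o myapp myapp.c -I$SYBASE/include -L$SYBASE/lib \
        -lct -lcs -ltcl -lcomn -lintl -lm

While developing, point -L at $SYBASE/devlib instead to pick up the debug libraries.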
4.5 ct_describe() and ct_param()
The sample programs that are distributed with the installation of the Sybase Client Libraries illustrate the use of ct_describe() to get a description of the column being retrieved and ct_param() to pass in a parameter value. These sample programs are likely to be the starting point for a lot of customer applications. But very often it is not necessary to use ct_describe() in applications, since the columns being fetched are already known to the application. Parameters can also be passed by position, relieving the application of the need to place the name of the parameter and the name length in the datafmt structure passed into ct_param(). In addition, there is a performance gain on the server side, which does not have to validate the parameter name and can process the parameters based on position.
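Here is a sketch of passing a parameter by position for an RPC command; the procedure name (myproc) and the value are invented. Leaving the datafmt name and namelen fields unset makes CT-Lib send the parameter positionally:

CS_DATAFMT datafmt;
CS_INT     qty;

retcode = ct_command(cmd, CS_RPC_CMD, "myproc", CS_NULLTERM,
        CS_NO_RECOMPILE);

memset(&datafmt, 0, sizeof(datafmt));
datafmt.namelen = 0;              /* no name: pass by position */
datafmt.datatype = CS_INT_TYPE;
datafmt.maxlength = CS_UNUSED;    /* fixed-length type */
datafmt.status = CS_INPUTVALUE;   /* input parameter */

qty = 100;
retcode = ct_param(cmd, &datafmt, (CS_VOID *)&qty, sizeof(qty), 0);
retcode = ct_send(cmd);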
4.6 array binding
Array
binding is available for regular and cursor row results. This functionality
is provided to simplify application programming and very often increases
Client-Library performance. At fetch time multiple rows are copied into
array variables with a single fetch call. The application uses array binding
by setting the count in the datafmt structure that is passed to ct_bind()
and providing appropriate variable arrays.
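A sketch of array binding for a single integer column; the column number and the array size are illustrative:

#define ROWS_PER_FETCH 10

CS_DATAFMT fmt;
CS_INT     ids[ROWS_PER_FETCH];
CS_INT     rows_read;

memset(&fmt, 0, sizeof(fmt));
fmt.datatype = CS_INT_TYPE;
fmt.maxlength = sizeof(CS_INT);
fmt.format = CS_FMT_UNUSED;
fmt.count = ROWS_PER_FETCH;       /* array binding: rows copied per fetch */

retcode = ct_bind(cmd, 1, &fmt, (CS_VOID *)ids, NULL, NULL);

while ((retcode = ct_fetch(cmd, CS_UNUSED, CS_UNUSED, CS_UNUSED,
        &rows_read)) == CS_SUCCEED || retcode == CS_ROW_FAIL)
{
        /* ids[0] through ids[rows_read - 1] hold the fetched rows. */
}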
4.7
how to choose the right CT-Lib command
A
Client-Library user has several options for sending SQL commands to the
server. It is not always obvious which is the most effective and fastest
method to use. The following can help you make the best decision, to use
step through the following:
- If information about the input parameters is needed, use ct_dynamic().
- If the SQL command is used repeatedly, use a stored procedure.
- If simultaneous results must be fetched or processed, use ct_cursor() (note: setting the cursor row count can improve performance; also, always declare the cursor's intent (Read Only or Updateable); a sketch follows this list).
- Otherwise, use a language command with ct_command().
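For instance, here is a sketch of declaring a read-only cursor with its row count set; the cursor name and the query are invented:

retcode = ct_cursor(cmd, CS_CURSOR_DECLARE, "c1", CS_NULLTERM,
        "select title_id, price from titles", CS_NULLTERM, CS_READ_ONLY);

/* Ask the server to return up to 20 rows per cursor fetch. */
if (retcode == CS_SUCCEED)
        retcode = ct_cursor(cmd, CS_CURSOR_ROWS, NULL, CS_UNUSED,
                NULL, CS_UNUSED, (CS_INT)20);

if (retcode == CS_SUCCEED)
        retcode = ct_cursor(cmd, CS_CURSOR_OPEN, NULL, CS_UNUSED,
                NULL, CS_UNUSED, CS_UNUSED);

if (retcode == CS_SUCCEED)
        retcode = ct_send(cmd);   /* then the usual ct_results()/ct_fetch() loop */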