8.0.3 Bug fixes
(see also Critical Bug Fixes)
(see also New Features)
This section contains a description of bug fixes made since the release
of version 8.0.3.
================(Build #5304 - Engineering Case #405060)================
When Connection Lifetime was specified in a connection string, the connection
duration would have been calculated incorrectly. This has been fixed so that
the behaviour is as follows:
- If the time span since the connection was created exceeds the value specified
by Connection Lifetime, the connection will be destroyed when it is closed.
- If the time span does not exceed the value specified by Connection Lifetime,
the connection will be returned to the pool when it is closed.
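For example, a minimal C# sketch (the Pooling option and server name shown are
assumptions for illustration, not part of the fix):
    // A pooled connection older than 60 seconds is destroyed on Close();
    // a younger one is returned to the pool.
    AsaConnection conn = new AsaConnection(
        "uid=dba;pwd=sql;eng=asademo;Pooling=true;Connection Lifetime=60" );
    conn.Open();
    // ... use the connection ...
    conn.Close();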
================(Build #5299 - Engineering Case #403057)================
Calling the ChangeDatabase method would have resulted in the provider looking
for the name of the DSN, instead of the database name. The ChangeDatabase method
has been corrected to now use the DatabaseName(DBN) connection parameter
to change the current database.
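For example, a short C# sketch (the server and database names are hypothetical):
    AsaConnection conn = new AsaConnection( "uid=dba;pwd=sql;eng=myserver" );
    conn.Open();
    // Now resolved through the DatabaseName (DBN) connection parameter.
    conn.ChangeDatabase( "demo" );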
================(Build #5289 - Engineering Case #400563)================
It was not possible to call a stored procedure via ADO.Net and have it use
the default values for the stored procedure parameters, unless the parameters
were at the end of the list. This has been resolved by adding a new class,
AsaDefault/SADefault, to the .NET provider. If the value of a parameter is
set to AsaDefault.Value/SADefault.Value, the server will use the default value
to execute the procedure.
For example:
create procedure DBA.myproc( in arg1 int default 1, in arg2 int default 2,
                             in arg3 int default 3 )
begin
    select arg1 + arg2 + arg3 from dummy;
end
To use the default values, set the parameters' value to SADefault.Value:
SAConnection conn = new SAConnection( "DSN=SQL Anywhere 10 Demo" );
conn.Open();
SACommand cmd = new SACommand( "DBA.myproc", conn );
cmd.CommandType = CommandType.StoredProcedure;
cmd.Parameters.Add( new SAParameter( "arg1", SADefault.Value ) );
cmd.Parameters.Add( new SAParameter( "arg2", SADefault.Value ) );
cmd.Parameters.Add( new SAParameter( "arg3", SADefault.Value ) );
int val = ( int ) cmd.ExecuteScalar();
================(Build #5260 - Engineering Case #392294)================
If the server was already stopped, the AsaClient would have thrown an exception
when closing the connection. The AsaClient was checking the error code when
closing the connection, and threw an exception if the error code was not -85
"Communication error". The AsaClient now ignores errors when closing the connection.
================(Build #5258 - Engineering Case #391887)================
Calling the method AsaDataReader.GetSchema() could have generated the exception:
"ASA .NET Data Provider: Column 'table_name' not found (-143)", if the database
also had a user table named systable. This has now been fixed by qualifying
references to system tables with the SYS owner name.
================(Build #5242 - Engineering Case #387070)================
The method AsaDataAdapter.Fill may have caused an InvalidCastException
if the query returned columns which had the same name but different data
types. This problem has been fixed, but it is not recommended to use duplicate
column names when filling a DataSet.
================(Build #5239 - Engineering Case #386109)================
The AsaDataAdapter object was very slow when filling a DataSet or a DataTable
which had a primary key. This problem has been fixed.
================(Build #5236 - Engineering Case #385349)================
When inserting Multi-byte Character Set strings, by passing them as parameters
to an AsaCommand object, the strings were not saved. This has been fixed.
================(Build #5217 - Engineering Case #379532)================
The same command object could have been deleted twice when running in a multi-threaded
environment. This could potentially have caused a crash. The problem has
been fixed.
================(Build #5212 - Engineering Case #378360)================
The Data Adapter did not set the IsKey property to true when filling a DataTable
if the source tables had unique indexes. This problem has been fixed.
================(Build #5195 - Engineering Case #374580)================
An InvalidCastException would have been thrown when filling a DataTable using
AsaDataAdapter, if a returned TIME column was mapped to a STRING column in
the DataTable.
This problem has been fixed.
================(Build #5180 - Engineering Case #371411)================
The isolation level for a transaction was being set to 1 when the connection
was opened. Now, the isolation level is no longer set to any specific value.
The server default is the value defined for the connection by the database
option Isolation_level.
Note, this problem was introduced in the following builds:
8.0.2 build 4442
8.0.3 build 5128
9.0.0 build 1333
9.0.1 build 1887
9.0.2 build 2528
The old behaviour is now restored.
================(Build #5176 - Engineering Case #370326)================
The method AsaDataReader.GetSchemaTable() may have caused an InvalidCastException
when the data reader had some unique columns and computed columns. This problem
has been fixed.
================(Build #5173 - Engineering Case #369704)================
When filling a DataSet using the ASADataAdapter object, the AutoIncrement
property of DataColumn was not set properly. This has now been fixed.
================(Build #5162 - Engineering Case #367464)================
The ASACommandBuilder class could not derive parameters if the stored procedure
name was quoted. This has been fixed by parsing the command text and using the
unquoted procedure name when deriving parameters.
================(Build #5162 - Engineering Case #363211)================
A FillError exception was not thrown when an error occurred during a fill
operation of the ASADataAdapter object. Now, when an error occurs during
a fill operation, the adapter calls the FillError delegate. If the handler
sets Continue to true, the adapter will continue the fill operation; otherwise,
it will throw the exception.
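For example, a C# sketch of a handler (the query and table are hypothetical,
and the adapter is assumed to expose the standard ADO.NET FillError event):
    ASADataAdapter adapter = new ASADataAdapter( "select * from t", conn );
    DataSet dataSet = new DataSet();
    adapter.FillError += new FillErrorEventHandler( OnFillError );
    adapter.Fill( dataSet );

    static void OnFillError( object sender, FillErrorEventArgs args )
    {
        // Setting Continue to true resumes the fill with the remaining rows;
        // leaving it false causes the adapter to throw the exception.
        Console.WriteLine( args.Errors.Message );
        args.Continue = true;
    }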
================(Build #5158 - Engineering Case #366428)================
The AsaCommandBuilder class could not generate INSERT, UPDATE or DELETE
statements for parameterized queries if the parameters were not provided.
Now, if the command is a stored procedure, the AsaClient will call AsaCommandBuilder.DeriveParameters
to add parameters for the SELECT command. If the command is text, the AsaClient
will add dummy parameters.
================(Build #5148 - Engineering Case #364573)================
It was not possible to assign an enum value to AsaParameter.Value without
an explicit cast. Now, when AsaParameter.Value is set to an enum value, the
AsaParameter.AsaDbType is set to the underlying type of the enum value (Byte,
Int16, UInt16, Int32, UInt32, Int64 or UInt64) and the value is converted
to the underlying type.
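For example, a C# sketch (the Status enum is hypothetical):
    enum Status : short { Active = 1, Inactive = 2 }

    AsaParameter parm = new AsaParameter();
    // AsaDbType is inferred from the enum's underlying type (Int16 here)
    // and the value is converted to (short)1; no explicit cast is required.
    parm.Value = Status.Active;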
================(Build #5143 - Engineering Case #363177)================
An application that created multiple threads, and opened and closed pooled
connections on each thread, could possibly have had the threads become deadlocked,
causing some connections to fail, if the 'Max Pool Size' was smaller than
the number of threads. This problem has been fixed.
================(Build #5130 - Engineering Case #360761)================
Calling the AsaDataAdapter.Fill method multiple times on the same DataTable
that had a primary key, would have caused a 'System.Data.ConstraintException'
exception on the second call. Now, if a primary key exists, incoming rows
are merged with matching rows that already exist. If no primary key exists,
incoming rows are appended to the DataTable. If primary key information is
present, any duplicate rows are reconciled and only appear once in the DataTable.
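For example, a C# sketch (the table t, with a primary key, is hypothetical):
    AsaDataAdapter adapter = new AsaDataAdapter( "select * from t", conn );
    DataTable table = new DataTable();
    adapter.MissingSchemaAction = MissingSchemaAction.AddWithKey; // fetch key info
    adapter.Fill( table );
    // Previously this second call raised a ConstraintException; now the
    // incoming rows are merged with the matching rows already present.
    adapter.Fill( table );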
================(Build #5129 - Engineering Case #360591)================
A data reader opened with a select statement would have held a lock on the
table, even after the data reader had been closed. This has now been fixed.
================(Build #5123 - Engineering Case #359136)================
UltraLite.NET has a simpler error system and thus does not have the ADO Errors
collection. In order to make it easier to move from UltraLite to ASA, two
new properties have been added: AsaException.NativeError and AsaInfoMessageEventArgs.NativeError.
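For example, a C# sketch (the command setup is assumed):
    try
    {
        cmd.ExecuteNonQuery();
    }
    catch( AsaException ex )
    {
        // The server error code is available directly, without walking
        // an ADO-style Errors collection.
        Console.WriteLine( "Native error: {0}", ex.NativeError );
    }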
================(Build #5120 - Engineering Case #358333)================
A NullReferenceException could have occurred when fetching long varchar or
long binary data using the DataReader object. This problem has been fixed.
================(Build #5120 - Engineering Case #353442)================
Calling stored procedures with long varchar or long binary output parameters
would have resulted in the data being corrupted after the first 32K. The AsaClient
was always using a default maximum length of 32K. This problem has been fixed.
================(Build #5119 - Engineering Case #355145)================
A .NET application, using multiple database connections through separate
threads, could have hung when updating the same table in different threads.
When this situation occurred, one thread would have been blocked in the server
(which is expected, as it is blocked against the other connection which is
holding a lock on the table as a result of its update), and the other thread
would have appeared to hang as well, but it would not have been blocked in the
server. What was happening was that the first thread had entered a critical
section and was waiting for the server's response, while the second thread
was waiting to enter the same critical section, thus causing the application
to hang. This has been fixed.
================(Build #5117 - Engineering Case #355929)================
The ASA provider could have loaded the wrong unmanaged dll (dbdata8.dll
or dbdata9.dll) if multiple versions of ASA were installed. Now, the ASA
provider will search for the unmanaged dll and will continue searching until
it finds and loads the right one. If a matching version cannot be found,
the latest version will be loaded with a warning message.
================(Build #5117 - Engineering Case #355474)================
When a query with multiple result sets was opened with ExecuteReader(CommandBehavior.SingleRow),
calling NextResult would always have returned false. Only a single row
from the first result set could have been fetched. This problem has been
fixed so that a single row is now fetched from each result set, which matches
the .NET specification.
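For example, a C# sketch (the procedure myproc, returning two result sets,
is hypothetical):
    AsaCommand cmd = new AsaCommand( "call myproc()", conn );
    AsaDataReader reader = cmd.ExecuteReader( CommandBehavior.SingleRow );
    do
    {
        while( reader.Read() )        // one row per result set
        {
            Console.WriteLine( reader.GetValue( 0 ) );
        }
    } while( reader.NextResult() );   // previously always returned false
    reader.Close();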
================(Build #5003 - Engineering Case #355587)================
On dual cpu machines, after creating a new prepared AsaCommand and inserting
new rows in a loop, a communication error would have occurred after some
iterations.
For example (VB.NET code):
Imports iAnywhere.Data.AsaClient
Module Module1
    Sub Main()
        Dim conn As AsaConnection
        Dim cmd As AsaCommand
        Dim i As Int32
        Try
            conn = New AsaConnection("uid=dba;pwd=sql;eng=asatest")
            conn.Open()
            For i = 1 To 2000
                cmd = New AsaCommand("insert into ian values( 1 )", conn)
                cmd.Prepared = True
                cmd.ExecuteNonQuery()
            Next
            ' i is 2001 after the loop completes
            Console.WriteLine("Inserted {0} rows", i - 1)
            conn.Close()
        Catch e As Exception
            Console.WriteLine(e.ToString())
        End Try
    End Sub
End Module
This problem has been fixed.
================(Build #5394 - Engineering Case #429262)================
If an ODBC application called SQLSetConnectAttr() to set the isolation level
while there were concurrent requests using the same connection on other threads,
the application could have hung, crashed, or the call could have failed with
an error. This problem does not occur on Windows systems when using the
Microsoft ODBC Driver Manager. This problem has been fixed.
Note, similar problems with SQLConnect, SQLBrowseConnect, SQLDriverConnect,
SQLAllocStmt/SQLAllocHandle and SQLFreeStmt/SQLFreeHandle have also been
corrected.
================(Build #5272 - Engineering Case #395662)================
An Embedded SQL or ODBC application, which used wide fetches or ODBC multi-row
rowset fetches on a cursor which had prefetch enabled, could have returned
a row with invalid data or data from a previous row when an error should
have been returned instead. An example of a row error which could have caused
this type of behaviour is the "Subquery cannot return more than one row"
error. Also, for Embedded SQL applications, SQLCOUNT was not being set correctly
to the number of rows fetched on an error. These problems have been fixed
so that the error is correctly returned, and SQLCOUNT is set correctly on
errors.
================(Build #5203 - Engineering Case #375971)================
An application could have hung, received a communication error, or have possibly
seen other incorrect behaviour, when doing a fetch with prefetch enabled,
and then immediately doing a commit, rollback, or another fetch with an absolute
or negative offset. It was rare on multiprocessor machines, and would have
been even rarer on single processor machines. As well, there may have been
other timing dependent cases which could have failed. This has been fixed.
================(Build #5191 - Engineering Case #373482)================
If a connection string included the TCP parameter VerifyServerName=NO, and
contained an incorrect server name, the connection would have failed; essentially,
the VerifyServerName parameter was ignored. This has been fixed.
================(Build #5191 - Engineering Case #373480)================
If a server was started with a server name containing non-7-bit ASCII characters,
and the client machine's character set did not match the server machine's
character set, applications may not have been able to connect when specifying
the server name (i.e. the ENG parameter). This has been fixed.
================(Build #5187 - Engineering Case #372175)================
The server could have leaked memory, eventually resulting in an 'Out of Memory'
error. This could have occurred while executing INSERT or LOAD TABLE statements
for tables for which the server maintains statistics. This has been fixed.
================(Build #5153 - Engineering Case #365460)================
An application that used GetData to retrieve an unbound column on the first
row could have had poor performance, if prefetch was enabled and some, but
not all, columns were bound before the first fetch. This poor performance
would have been particularly noticeable if the first few rows of the query
were expensive to evaluate. Applications which used the iAnywhere JDBC driver,
on queries which had columns with type LONG VARCHAR or LONG BINARY, were
also affected by this poor performance. This has been fixed.
================(Build #5148 - Engineering Case #364378)================
If an error occurred when positioning a cursor, future fetches would have
failed with the error -853 "Cursor not in a valid state". When prefetch
was enabled (the default) the specific error when positioning a cursor may
not have been returned to the application, with "Cursor not in a valid state"
being returned instead.
For example, if a query had a WHERE clause which caused a conversion error,
the application may never have received an error stating a conversion error
occurred, but would have received the error "Cursor not in a valid state"
instead.
This has been fixed so that the initial error, which put the cursor in an
invalid state, is now returned to the application.
================(Build #5145 - Engineering Case #336902)================
In rare instances, NetWare or Unix client applications, using ecc_tls or
rsa_tls encryption, would have had connection attempts fail. The debug log
would have shown a message that the Certicom handshake had failed. This has
been fixed.
================(Build #5137 - Engineering Case #362197)================
If a multi-threaded client application had more than one connection (on more
than one thread) logging to the same log file, the logged entries from the
connections could have been mixed up. Specifically, several timestamps may
have appeared together, followed by the text of the messages. Also, in 9.x
clients, the date stamp at the top of the file would always have used lowercase
English strings. This has been fixed.
================(Build #5129 - Engineering Case #360597)================
Applications attempting Shared memory connections may have hung if the server
was forcibly shut down during the connection attempt. This has been fixed.
================(Build #5117 - Engineering Case #354838)================
If an error occurred on an embedded SQL EXECUTE statement, and there were
bound columns with all NULL data pointers, a communication error could have
occurred and the connection would have been dropped.
An example of bound columns with all NULL data pointers from ESQL is:
    SQLDA *sqlda = alloc_sqlda( 1 );
    sqlda->sqld = 1;
    sqlda->sqlvar[0].sqltype = DT_INT;
    sqlda->sqlvar[0].sqldata = NULL;
    EXEC SQL EXECUTE stmt INTO DESCRIPTOR sqlda;
This has been fixed so that an error is returned and the connection is not
dropped.
================(Build #5117 - Engineering Case #352159)================
It was possible for an application to have hung after attempting to cancel
a request. This problem would only have occurred very rarely on multiprocessor
systems, and was even less likely to have occurred on single processor systems.
This has been fixed.
================(Build #5004 - Engineering Case #367691)================
On AIX systems running version 4.3.1, applications, including the ASA utilities
such as dblocate, would have failed to find any network servers over TCP/IP,
if the host=xxx parameter was not specified. Code specific to AIX 4.3.1 to
find the broadcast mask was incorrect. This has now been fixed.
================(Build #5277 - Engineering Case #396873)================
Changes for Engineering Case 392484 ensured that data exceptions that occurred
during a wide fetch were not lost, but the changes introduced an error such
that warnings were lost instead. This has been corrected so that both data
exceptions and warnings are properly reported to the client.
================(Build #5269 - Engineering Case #394722)================
If an application retrieved the ResultSetMetaData and then queried the datatype
of an unsigned smallint, unsigned int or unsigned bigint column, the datatype
returned would have been incorrect. This problem has now been fixed so that
an application can properly determine the unsigned column type using the
ResultSetMetaData.getColumnType() and ResultSetMetaData.isSigned() methods.
================(Build #5267 - Engineering Case #393604)================
If an application used ResultSet.relative(0) to attempt to refresh a row,
then the iAnywhere JDBC Driver would usually have given an "Invalid cursor
position" or a "Not on row" error. It should be noted that the "Invalid cursor
position" error is valid since that error is usually given by the underlying
ODBC driver when the Statement or PreparedStatement that generated the ResultSet
is of type TYPE_FORWARD_ONLY. However, when the Statement or PreparedStatement
is scrollable, then the iAnywhere JDBC Driver should refresh the row rather
than give the "Not on row" error. This problem has been fixed.
================(Build #5246 - Engineering Case #388573)================
If a result set had a DECIMAL column containing an integer value with more
than 10 digits, but fewer than 19 digits, then calling ResultSet.getLong()
should have returned the entire value, but did not. This problem has been
fixed.
================(Build #5209 - Engineering Case #377885)================
If an application used the PreparedStatement.setTimestamp() method to set
a timestamp parameter, then the millisecond portion of the timestamp would
not have been set. This problem has been fixed.
================(Build #5196 - Engineering Case #374840)================
Calling the Connection.getCatalog() method, when using the iAnywhere JDBC
Driver, would have yielded a string with extra characters. Note that this
problem only existed if the JDBC Driver was used to connect to a server other
than an ASA server. The problem has been fixed.
================(Build #5194 - Engineering Case #374451)================
An application using the iAnywhere JDBC Driver would have leaked memory if
it called Connection.getMetaData() repeatedly. This problem has been fixed.
================(Build #5189 - Engineering Case #373086)================
If a JDBC cursor was positioned on a row with a LONG VARCHAR column, then
calling ResultSet.getString() on the column would have returned the proper
value for the first call, but each subsequent call would have returned NULL
if the cursor had not been repositioned. This problem has now been fixed.
================(Build #5138 - Engineering Case #362356)================
While connected to a multi-byte character set database with the iAnywhere
JDBC Driver, executing a procedure whose result set was defined to have a
varchar column, but the size of the column in the definition was too small,
could have resulted in an "Out of memory" exception. This problem has now
been fixed.
For example:
CREATE PROCEDURE test()
result( c1 varchar(254) )
begin
    select repeat( 'abcdef', 1000 )
end
Notice that a varchar( 254 ) column is much too small to hold the result
of repeat( 'abcdef', 1000 ). In this case, executing the procedure test would
have resulted in an "Out of memory" exception.
================(Build #5117 - Engineering Case #350688)================
If an application called ResultSet.last(), to scroll to the last row in the
result set, and then called ResultSet.isLast(), to check to see if the cursor
was positioned on the last row, the iAnywhere JDBC Driver would have incorrectly
returned false, rather than true. This problem has now been fixed.
================(Build #5343 - Engineering Case #417500)================
When an array of parameters was used as input to an ODBC execute (for example,
to insert more than one row with one SQLExecute), some of the parameter values
could have been sent to the server incorrectly. It was possible for this to
have occurred if CHARACTER or BINARY data was used, and the data size for
a particular column was slightly larger in subsequent rows than previous
rows within a single array of parameters (for example, if the first row in
the array had a value of length 1000, and the second row in the array had
a value of length 1050 for the same column). This has now been fixed so that
the data is correctly sent to the server in this situation.
================(Build #5314 - Engineering Case #407012)================
The following ODBC driver problems have been fixed:
1) Fetching multiple rows into a bound WCHAR column, which was too small
for the column (thus truncating the data), could have copied the data into
the wrong user buffer location. Data was filled correctly on the first fetch,
but incorrectly on subsequent fetches, likely overrunning memory and possibly
crashing the application.
2) Using SQLGetData to put data into a WCHAR buffer, where the client or
database charset was multibyte, may have caused a subsequent SQLGetData on
the same column to contain incorrect data. Also, indicator values may have
been set incorrectly for SQLGetData or SQLFetch calls to fill WCHAR buffers
when using multibyte character sets.
3) Using SQLGetData to put data into a WCHAR buffer with more than 64K WCHARs
could have returned the wrong indicator when there was no truncation.
4) Using SQLPutData to put more than 64K of data of ANY TYPE in one SQLPutData
call, may have put incorrect data.
5) Calling SQLPutData with CHAR (not WCHAR) data and SQL_NTS length, only
appended the first 256K of data. Appended data greater than 256K was lost.
6) Calling SQLPutData with WCHAR data and using odd lengths (i.e. partial
characters), may have put incorrect data if the client or database charset
was multibyte.
================(Build #5286 - Engineering Case #400466)================
If an ODBC application described a signed bigint column, the precision would
have been returned as 20 when it should really have been 19. This problem
has been fixed.
Note that the precision of unsigned bigint columns is still returned as
20, which is the correct value.
================(Build #5272 - Engineering Case #392484)================
If an application using either the ASA ODBC driver, or the iAnywhere JDBC
driver, fetched a set of rows in which one of the rows encountered a data
exception, then it was likely that the error would not have been reported.
Note that Prefetch must have been on for the problem to occur. This problem
has now been fixed, but in addition to this change, the changes to the server
for Engineering Case 395662 are also required.
================(Build #5266 - Engineering Case #393587)================
The ODBC driver was not returning some reserved words for SQLGetInfo( SQL_KEYWORDS
). The missing reserved words were:
character
dec
options
proc
reference
subtrans
These words are synonyms for other reserved words, and have now been added
to the list returned by SQLGetInfo( SQL_KEYWORDS ).
================(Build #5204 - Engineering Case #355595)================
Calling the ODBC function SQLGetData() with a length of 0 would have failed
for SQL_WCHAR.
SQLRETURN SQLGetData(
    SQLHSTMT       StatementHandle,
    SQLUSMALLINT   ColumnNumber,
    SQLSMALLINT    TargetType,
    SQLPOINTER     TargetValuePtr,
    SQLINTEGER     BufferLength,
    SQLINTEGER *   IndPtr );
SQLGetData can be used to obtain the amount of data available by passing
0 for the BufferLength argument. The amount of data available is returned
in the location pointed to by IndPtr. If the amount available cannot be determined,
SQL_NO_TOTAL is returned. When the TargetType was SQL_C_WCHAR, the amount
of available data was incorrect (a character count rather than byte count
was returned). This has been fixed.
There were also some problems returning correct indicator values for databases
using the UTF8 collation. This has also been fixed.
================(Build #5186 - Engineering Case #370604)================
In ODBC, changing the option for a cursor's scrollability could have caused
the driver to change the cursor type as well. For instance, if the cursor
type was forward-only, changing the scrollability to scrollable would have
changed the cursor type to dynamic. The problem was that the driver was
always changing the cursor type to dynamic, regardless of the existing cursor
type. This has been corrected.
================(Build #5182 - Engineering Case #370905)================
An ODBC application that allocated both an ODBC 2.0 style environment handle
and an ODBC 3.0 style environment handle could have returned ODBC 3.0
result codes when functions were called in the ODBC 2.0 environment, or vice
versa. This could have led to subsequent function calls failing with erroneous
error messages, including 'function sequence error'. Now, the driver will
always allocate a separate environment handle each time SQLAllocEnv or SQLAllocHandle
is called.
================(Build #5160 - Engineering Case #367232)================
If a database created with the UTF8 collation had a column that contained
a 5 or 6 byte Chinese character sequence, ODBC client applications would
likely have crashed fetching the column. This has been fixed.
This problem was introduced by the changes for Engineering Case 364608.
================(Build #5148 - Engineering Case #364608)================
If an ODBC application running on a Windows system fetched a string with
embedded null characters, the resulting string would have been truncated
at the first null character. This problem has been fixed.
Note that a similar problem exists for applications running on Unix platforms
as well. However, this problem exists in the iAnywhere JDBC Driver and is
addressed by Engineering Case 364379.
================(Build #5147 - Engineering Case #364278)================
Support for message callbacks has now been added to the ODBC driver. The
message handler is installed by calling the SQLSetConnectAttr() function.
For example:
static char mybuff[80];
static int  mytype;
// callback for messages
void SQL_CALLBACK my_msgproc(
    void *          sqlca,
    unsigned char   msg_type,
    long            code,
    unsigned short  len,
    char *          msg )
{
    memcpy( mybuff, msg, len );
    mybuff[ len ] = '\0';
    mytype = msg_type;
}
// install the message handler for this connection
rc = SQLSetConnectAttr( dbc,
    ASA_REGISTER_MESSAGE_CALLBACK,
    (SQLPOINTER) &my_msgproc, SQL_IS_POINTER );
Then a SQL statement such as:
Message 'after' type status to client;
will invoke the ODBC client application's message handler.
================(Build #5128 - Engineering Case #360479)================
The ODBC SQLDisconnect() function could have returned a failing error code
if the server dropped the client's connection (e.g. for idle time-out reasons)
while the connection had a dirty transaction to be committed or rolled back
(i.e. SQLExecute, SQLExecDirect or SQLSetPos had been called), and the connection's
autocommit attribute was set to SQL_AUTOCOMMIT_OFF.
Now, SQLDisconnect() returns SQL_SUCCESS_WITH_INFO and sets the SQLSTATE
to 01002 (Disconnect error).
================(Build #5124 - Engineering Case #359428)================
A memory leak could have occurred in the ODBC driver if the database server
closed the connection for any reason, for example an idle time-out. A pointer
to the driver's connection object was being set to null, but the object was
not freed. This has now been corrected.
================(Build #5117 - Engineering Case #352572)================
An application that used multiple threads to access the same connection through
ODBC could have hung. This has been fixed.
================(Build #5117 - Engineering Case #349975)================
Blob data, written to a database by an ODBC application using SQLPutData,
would have been corrupted if the following conditions were true:
- the application used a different charset than the database
- the server had character set translation enabled
- the length parameter passed to SQLPutData was larger than SQL_ATTR_MAX_LENGTH
In this case the data was sent as VARCHAR and the server did character set
translation. This has now been fixed.
================(Build #5412 - Engineering Case #432537)================
The OLE DB provider did not properly support 4-part queries when used with
the SQL Server 2005 Linked Server feature. For example, attempting the following
SQL statement in Microsoft SQL Server 2005 Management Studio:
SELECT * FROM ASASERVER..dba.customer
would have caused the error:
Msg 7320, Level 16, State 2, Line 1 Cannot execute the query
However, the OPENQUERY form of SELECT works fine:
SELECT * FROM OPENQUERY( ASASERVER, 'SELECT * FROM customer')
This problem has now been fixed. Note, this problem does not appear with
SQL Server 2000.
================(Build #5401 - Engineering Case #433726)================
The Windows CE version of the OLE DB Provider could have failed when attempting
conversions to and from Variant types. Also, some properties were not supported
properly under Windows CE and would have incorrectly returned errors. These
problems have now been fixed.
================(Build #5395 - Engineering Case #429527)================
The changes for Engineering Case 404908 introduced a problem where obtaining
parameter information on a stored procedure that was owned by a different
user could have resulted in a crash.
The following Delphi code demonstrates the problem:
procedure TForm1.Button1Click(Sender: TObject);
var
  adoParams: Parameters;
  nCnt: integer;
begin
  mmoParams.Clear;
  FAdoCommand.Set_ActiveConnection(ADOConnection1.ConnectionObject);
  FAdoCommand.Set_CommandText('sp_test');
  FAdoCommand.Set_CommandTimeout(30);
  FAdoCommand.Set_CommandType(adCmdStoredProc);
  adoParams := FAdoCommand.Get_Parameters();
  adoParams.Refresh;
  for nCnt := 0 to adoParams.Count - 1 do begin
    mmoParams.Lines.Add(adoParams.Item[nCnt].Name);
  end;
end;
This problem has been fixed.
================(Build #5381 - Engineering Case #426364)================
When scrolling backwards or forwards through a cursor, the OLEDB provider
would have refetched deleted rows. This problem has been fixed.
================(Build #5381 - Engineering Case #426361)================
The OLEDB provider was not converting columns of type BIT to DBTYPE_BOOL
correctly. A true value of 1 should map to the 16-bit integer value -1
(VARIANT_TRUE), rather than to the value 1, as it was. This problem has now
been fixed.
================(Build #5381 - Engineering Case #424931)================
When the OLEDB provider was used with ADO, wide character strings (UTF16/OLECHAR)
were written correctly to a database that used the UTF8 collation. However,
when fetching the inserted values back from the database, the UTF8 strings
were not correctly converted back into UTF16. This has been fixed. The OLEDB
provider now does proper conversion from UTF8 strings to UTF16 strings.
================(Build #5375 - Engineering Case #424217)================
The iAnywhere OLEDB provider was not correctly executing multi-row fetches
as a result of calling the GetRowsAt() method. For example, when using GetRowsAt()
to fetch rows 1 through 50 of a result set and then fetching rows 26 through
75, the provider would have incorrectly returned rows 51 through 75 on the
second fetch. This problem has been fixed.
================(Build #5373 - Engineering Case #423911)================
Insertion of rows into a UTF8 database, using ADO and the OLEDB provider,
would sometimes have failed. The following Visual Basic script is an example
that reproduces the problem.
recordset.Open "Employees", connection, , , adCmdTable
recordset.AddNew
recordset.Fields.Item("EmployeeID") = 1000
recordset.Fields.Item("GivenName") = "Munroe"
recordset.Fields.Item("Surname") = "Marvin"
recordset.Fields.Item("DepartmentID") = 201
recordset.Fields.Item("Street") = "No fixed Street"
recordset.Fields.Item("City") = "No City"
recordset.Fields.Item("Salary") = 12000.00
recordset.Fields.Item("StartDate") = "2006-01-03"
recordset.Update
This has now been fixed.
================(Build #5373 - Engineering Case #423901)================
The OLEDB GetRowsByBookmark method, as implemented by the iAnywhere provider,
did not work correctly. It was possible to return invalid row handle values.
As well, incorrect row status array values were being returned. This has
now been fixed.
================(Build #5373 - Engineering Case #423508)================
When an ADO application created a parameter with the CreateParameter method
using the adBSTR datatype, memory would have been leaked each time the Execute
method was called.
The following is a C++ example of the code.
std::wstring strParam1( 84, L'\x9F5F' );
// Create the parameters.
_ParameterPtr param1;
_variant_t val1( strParam1.c_str() );
param1 = cmd->CreateParameter( L"", adBSTR, adParamInput,
                               (long)strParam1.length(), val1 );
cmd->Parameters->Append( param1 );
// Run the command.
_variant_t vtEmpty1( DISP_E_PARAMNOTFOUND, VT_ERROR );
_variant_t vtEmpty2( DISP_E_PARAMNOTFOUND, VT_ERROR );
cmd->Execute( &vtEmpty1, &vtEmpty2, adCmdStoredProc | adExecuteNoRecords );
When a variant BSTR object (val1) was converted to a BSTR object (the bound
parameter), a new object was created and the memory for that object was lost.
This problem has now been corrected.
================(Build #5348 - Engineering Case #417837)================
When the ADO methods AddNew() or Update() were called, a memory leak would
have occurred in the OLE DB provider. This problem has been fixed by appropriately
freeing the allocated memory.
================(Build #5343 - Engineering Case #417622)================
The OLE DB schema rowsets (e.g., INDEXES) are implemented by stored procedures
like sa_oledb_indexes (and others). They return result sets that allow for
table names and other identifiers up to 128 characters in length. However,
the input parameters to these stored procedures only allowed for identifiers
of up to 80 characters in length. This problem has been fixed so that parameters
can now be up to 128 characters in length.
================(Build #5322 - Engineering Case #409320)================
When retrieving long varchar fields using a client side ADO recordset, a
null character was appended to the end of the data by the OLEDB provider.
This problem has been fixed.
================(Build #5304 - Engineering Case #405321)================
An attempt to insert a value into a SQL TINYINT column declared as adTinyInt,
that was greater than 127, would have failed with the error "Count field
incorrect". Unfortunately, this message did not clearly indicate where the
problem occurred. The ADO adTinyInt type (DBTYPE_I1) includes numbers in
the range -128 to 127. A value like 255, which is outside of this range,
cannot be converted to an adTinyInt. The ASA TINYINT type corresponds to
an unsigned 1-byte integer, which matches the ADO adUnsignedTinyInt type
(DBTYPE_UI1). This type accepts values in the range 0 to 255. The conversion
error that results from trying to convert a value like 255 to an adTinyInt
type is now properly reported as "Cannot convert parameter X to a DBTYPE_I1"
where X is the index of the parameter to the INSERT statement.
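For example, a C# sketch using System.Data.OleDb (the table t and its TINYINT
column c are hypothetical):
    OleDbCommand cmd = new OleDbCommand( "insert into t( c ) values( ? )", conn );
    // Declare the parameter as UnsignedTinyInt (DBTYPE_UI1, 0 to 255) to match
    // the ASA TINYINT type; OleDbType.TinyInt (DBTYPE_I1, -128 to 127) would
    // fail the conversion for a value such as 255.
    cmd.Parameters.Add( "c", OleDbType.UnsignedTinyInt ).Value = (byte) 255;
    cmd.ExecuteNonQuery();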
================(Build #5303 - Engineering Case #404908)================
A memory leak in the OLE DB provider associated with calling stored procedures
has been fixed.
================(Build #5303 - Engineering Case #404290)================
A stored procedure call that included the owner name would have failed when
using ADO and the SQL Anywhere OLE DB provider. The following is an example
using Visual Basic:
' Set CommandText equal to the stored procedure name.
objCmd.CommandText = "fred.ShowSalesOrderDetail"
objCmd.CommandType = adCmdStoredProc
objCmd.ActiveConnection = objConnection
A couple of problems arose when using this form of the stored procedure
call. Internally, the OLE DB provider uses a SQL query to obtain information
on the number of parameters to the stored procedure. It asked for information
on a procedure name called "fred.ShowSalesOrderDetail" and there was no such
procedure name since the owner name had not been separated from the procedure
name. This problem has been fixed.
The second problem involved using the ADO parameter "Refresh" method as
shown below in a Visual Basic example:
' Automatically fill in parameter info from stored procedure.
' This avoids having to do CreateParameter calls.
objCmd.Parameters.Refresh
The problem is related to the way ADO interprets the components of the stored
procedure name. There are three basic forms recognized by ADO.
1. ProcedureName
2. Owner.Catalog
3. Owner.ProcedureName.Catalog
Here is how ADO interprets these forms.
The first form is what is most often used. When only a name appears (no
qualifiers), it is interpreted to be the procedure name. This form poses
no problems.
When the second form is used (two names separated by a period), the first
name is assumed to be the owner name and the second name is assumed to be
the catalog name. SQL Anywhere doesn't support the notion of catalogs.
When the Refresh method is called, the OLE DB provider is presented with
the following query by ADO.
EXECUTE sa_oledb_procedure_parameters inProcedureCatalog='Catalog',inProcedureSchema='Owner'
This query from ADO is meaningless to the SQL Anywhere OLE DB provider,
as the procedure name is missing. ADO has interpreted the two components
as a schema name (the owner) and a catalog name. The query returns information
on all procedures owned by "Owner" which is of no use.
When the third form is used, the first name is assumed to be the owner name,
the second name is assumed to be the procedure name, and the third name is
assumed to be the catalog name. When the Refresh method is called, the OLE
DB provider is presented with the following query by ADO.
EXECUTE sa_oledb_procedure_parameters inProcedureCatalog='Catalog',inProcedureSchema='Owner',inProcedureName='ProcedureName'
Since the procedure name and owner are present and since the catalog name
is ignored by the OLE DB provider, this query will return the correct information
about the procedure parameters. However, the actual procedure call will result
in a syntax error since "call Owner.ProcedureName.Catalog" is not an acceptable
form for a stored procedure call.
These problems can be avoided by not using the ADO "Refresh" method and,
also, by avoiding the use of the three-part syntax. The best solution is
to avoid the use of an owner qualification entirely since the "X.Y" form
is misinterpreted by ADO.
================(Build #5298 - Engineering Case #403026)================
For FORWARD-ONLY cursors, the iAnywhere OLEDB provider was returning the
error "HY109" when fetching rowsets that had more than 1 row and contained a
column with more than 200 bytes. This problem has been fixed.
================(Build #5298 - Engineering Case #403014)================
The iAnywhere OLEDB Provider's default ('PUBLIC') for the GRANTEE restriction
column for the COLUMN_PRIVILEGES and TABLE_PRIVILEGES rowsets was incorrect.
The default should be the empty string (''). Also, the Provider did not enforce
the GRANTOR and GRANTEE restriction columns for the COLUMN_PRIVILEGES rowset.
These problems have been fixed.
To install new versions of the OLEDB provider schema rowsets into a database,
load and rerun scripts\oleschem.sql.
================(Build #5298 - Engineering Case #403010)================
The iAnywhere OLEDB Provider could have reported a "Table not found" error
for any of the following rowsets:
COLUMN_PRIVILEGES
TABLE_PRIVILEGES
FOREIGN_KEYS
The SQL scripts that implement these schema rowsets did not qualify the
SYSGROUP or SYSTRIGGER tables with "SYS.". This has been corrected.
To install new versions of the OLEDB provider schema rowsets into a database,
load and rerun scripts\oleschem.sql.
================(Build #5293 - Engineering Case #402278)================
The Microsoft Query Analyzer uses the iAnywhere OLEDB provider to process
queries from (ASA) Linked Servers. The Query Analyzer can call ASA stored
procedures provided that Remote Procedure Calls have been enabled. The Linked
Server properties RPC and RPC Out must be selected; otherwise a message like
"Server 'asatest9' is not configured for RPC" will be issued.
The Query Analyzer forms a query that is syntactically incorrect for ASA
and this results in a syntax error message.
For example, the following query will result in the error messages shown
below.
asatest9..dba.sp_customer_list
Could not execute procedure 'sp_customer_list' on remote server 'asatest9'.
[OLE/DB provider returned message: Syntax error near '1' on line 1]
The Query Analyzer issues the SQL statement "{?=call "dba"."sp_customer_list";1}".
The ";1" results in a syntax error.
This has been fixed. The iAnywhere OLEDB provider will now remove the ";1"
from the statement as a work-around.
================(Build #5293 - Engineering Case #402276)================
The iAnywhere OLEDB Provider was reporting inconsistent type information
for NUMERIC/DECIMAL columns. In some cases it would have reported DBTYPE_DECIMAL
for NUMERIC/DECIMAL columns, in other cases it reported DBTYPE_NUMERIC. It
would also have inconsistently reported precision for NUMERIC/DECIMAL columns,
changing it between compile time and runtime. These problems have now been
fixed. The changes have been made to the provider, as well as the supporting
stored procedures. To install new versions of the OLEDB provider schema rowsets
into a database, scripts\oleschem.sql must be rerun against the database.
================(Build #5293 - Engineering Case #401990)================
When adding a new row into a table using the OLEDB provider, if the value
for a column was the empty string, Visual Basic would have reported a run-time
error. The conversion functions assumed that a result length of
0 indicated an error, which is true if the input length is non-zero; but
if the input length is 0, then a result length of 0 is expected. This problem
has been fixed.
================(Build #5293 - Engineering Case #401989)================
A Visual Basic application, using the OLEDB provider to add a new row to
a table that contained an uninitialized (NULL) column value, would have crashed
with a null pointer exception. This problem has been fixed.
================(Build #5293 - Engineering Case #401987)================
When adding a row to a table that contained a BIT column using the OLEDB
provider, the BIT column value was always recorded as FALSE. This problem
has been fixed.
================(Build #5285 - Engineering Case #400186)================
Insertion of multibyte strings (Chinese, Japanese, Korean, etc.) into a database
using the UTF8 collation would have failed. The following C# code illustrates
the problem.
System.Data.OleDb.OleDbDataAdapter Ada = new System.Data.OleDb.OleDbDataAdapter(
    "select * from test", conn );
System.Data.OleDb.OleDbCommandBuilder CommandBuilder =
    new System.Data.OleDb.OleDbCommandBuilder( Ada );
System.Data.DataTable dt = new DataTable();
Ada.FillSchema( dt, System.Data.SchemaType.Mapped );
System.Data.DataRow dr = dt.NewRow();
dr["Col01"] = textBox1.Text;
dt.Rows.Add( dr );
Ada.Update( dt );
If the text entered into "textBox1" was a multibyte character, it would
not have been inserted correctly into column "Col01" of the "test" table
when the database was using the UTF8 collation. This problem has now been
fixed.
================(Build #5283 - Engineering Case #399872)================
When using FoxPro to display a result set, the first record was missing.
The following code sample illustrates the problem.
conn = CREATEOBJECT( 'ADODB.Connection' )
conn.ConnectionString = [Provider=ASAProv.90;uid=dba;pwd=sql;]
conn.Open
rs = CREATEOBJECT( 'ADODB.RecordSet' )
rs.ActiveConnection = conn
curs = CREATEOBJECT( 'CursorAdapter' )
curs.datasourcetype = 'ADO'
curs.datasource = rs
curs.SelectCmd = 'SELECT * FROM SYS.SYSTABLE WHERE table_id = 2'
curs.alias = 'ADO Test'
curs.cursorfill
BROWSE
FoxPro obtained the first row in the result set and then issued a RestartPosition()
call to reposition to the first row in the result set. The cursor type for
the query in the above example was FORWARD ONLY. With this type of cursor,
the RestartPosition() procedure failed to reposition to the start of the
result set. This problem has been fixed. RestartPosition() will now re-execute
the query to position to the start of the result set when the cursor type
is FORWARD ONLY.
================(Build #5281 - Engineering Case #397798)================
If a Visual Basic application attempted to open a record set using a STATIC
cursor with a PESSIMISTIC locking mechanism, the OLEDB provider would have
selected a DYNAMIC cursor instead. This was done by the provider to ensure
that updating of the records was possible, usually by locking records at
the data source immediately before editing. Of course, this also meant that
the records were unavailable to other users once editing had begun, until
the lock was released by calling Update. This type of lock is used in a system
where you can't afford to have concurrent changes to data, such as in a reservation
system. If the application then tried to obtain the current bookmark value,
an error would have occurred. This made it appear that STATIC cursors didn't
support bookmarks, whereas the real problem was that DYNAMIC cursors do not
support bookmarks. If the application specifies a STATIC cursor with a READ-ONLY
locking mechanism, then a STATIC cursor will be used and bookmarks are supported.
The OLEDB provider has been changed so that a KEYSET cursor will be selected
instead of a DYNAMIC cursor when PESSIMISTIC locking is requested. This will
allow the use of bookmarks.
================(Build #5249 - Engineering Case #389502)================
The OLEDB PROVIDER_TYPES rowset did not implement the DATA_TYPE and BEST_MATCH
restrictions. It implemented a restriction on TYPE_NAME instead and ignored
BEST_MATCH. This problem has been fixed so that the PROVIDER_TYPES rowset
now implements the DATA_TYPE and BEST_MATCH restrictions. To install a new
version of the PROVIDER_TYPES rowset into your database, load and run scripts\oleschem.sql
against the database.
As well, not all type names were included in the rowset, and some data types
that could have been included were not. This has also been corrected.
================(Build #5217 - Engineering Case #379901)================
The OLEDB provider was failing to close the result set cursor between prepared
command executions. Engineering Case 351298 reintroduced this bug, which was
originally described by Case 271435. This fix addresses both issues: an open
cursor is now closed before a SQLExecute when the command has been previously
prepared.
================(Build #5211 - Engineering Case #376453)================
An ADO .Net application that attempted to obtain the primary keys from a
query on a table using the OLEDB provider may have received incorrect
results when the table had more than one primary key column and/or columns
with unique constraints or unique indexes.
A sample code fragment follows:
DataTable Table = new DataTable( textTableName.Text );
OleDbDataAdapter adapter;
OleDbConnection connection = new OleDbConnection( textConnectionString.Text );
using ( connection )
{
    try
    {
        connection.Open();
        adapter = new OleDbDataAdapter( "select * from dba." + textTableName.Text
            + " where 1=0", connection );
        adapter.MissingSchemaAction = MissingSchemaAction.AddWithKey;
        adapter.Fill( Table );
        listBox1.Items.Clear();
        foreach( DataColumn col in Table.PrimaryKey )
        {
            listBox1.Items.Add( col.ColumnName );
        }
    }
    catch( Exception ex )
    {
        MessageBox.Show( ex.Message );
    }
}
The DataTable PrimaryKey property is an array of columns that function as
primary keys for the data table. This problem has been fixed.
One of the elements that ADO.Net uses in deciding whether a column belongs
in this set is the column metadata rowset.
IColumnsRowset::GetColumnsRowset - Returns a rowset containing metadata
about each column in the current rowset. This rowset is known as the column
metadata rowset and is read-only. The optional Metadata Column DBCOLUMN_KEYCOLUMN
is described to contain one of the values VARIANT_TRUE or VARIANT_FALSE or
NULL.
VARIANT_TRUE - The column is one of a set of columns in the rowset that,
taken together, uniquely identify the row. The set of columns with DBCOLUMN_KEYCOLUMN
set to VARIANT_TRUE must uniquely identify a row in the rowset. There is
no requirement that this set of columns is a minimal set of columns. This
set of columns may be generated from a base table primary key, a unique constraint
or a unique index.
VARIANT_FALSE - The column is not required to uniquely identify the row.
This column used to contain VARIANT_TRUE or VARIANT_FALSE. It now contains
NULL since OLEDB cannot correctly set the value. As a result, ADO.Net uses
other means for determining which columns belong in the PrimaryKey columns
property.
================(Build #5172 - Engineering Case #369521)================
If the GetCurrentCommand method of the ICommandPersist interface was called,
the memory heap could have been corrupted. This problem has been fixed.
================(Build #5172 - Engineering Case #369072)================
When using the OLEDB provider ASAProv, String parameters may not have been
passed correctly to stored procedures. This problem has been fixed.
The following Visual Basic example calls a stored procedure with a String
parameter.
Dim sendParam1 As String
sendParam1 = "20040927120000"
Dim cmd As ADODB.Command
cmd = New ADODB.Command
With cmd
    .CommandText = "testproc1"
    .CommandType = ADODB.CommandTypeEnum.adCmdStoredProc
    .ActiveConnection = myConn
    .Prepared = True
    .Parameters(0).Value = sendParam1
    Call .Execute()
End With
An example of a stored procedure follows.
ALTER PROCEDURE "DBA"."testproc1" (in param1 varchar(30))
BEGIN
    message 'in Parameter [' + param1 + ']';
END
================(Build #5171 - Engineering Case #369272)================
A call to IRowsetChange::InsertRow() in the OLEDB provider, ASAProv, resulted
in a crash. This call can be made from C++ using a simple table insert:
CTable<CAccessor<CSimpleAccessor> > dbSimple;
hr = dbSimple.Insert(1);
This problem has been fixed.
================(Build #5171 - Engineering Case #368574)================
The execution of a SELECT statement containing JOINs of several tables by
applications using the OLEDB provider ASAProv, would have resulted in a memory
leak. This has been fixed.
================(Build #5169 - Engineering Case #369016)================
When an application using the OLEDB driver provided a DBTYPE_DECIMAL parameter
with over 15 digits, the most significant digits would have been lost. For
example, if the value 1234567890.123456 was provided as a DBTYPE_DECIMAL
parameter, this would have been incorrectly interpreted as 234567890.123456
(the leading 1 would be lost). In particular, this could affect Visual Basic
applications using an OleDbDataAdapter on a query with a numeric or decimal
typed column, and a generated DataSet. The problem has now been fixed.
================(Build #5138 - Engineering Case #362011)================
The OLEDB provider ASAProv assumed that DBTYPE_BOOL values were 1 byte long.
So for database columns of type BIT, it would indicate that only 1 byte needed
to be allocated for a DBTYPE_BOOL column. This was incorrect, since DBTYPE_BOOL
values are actually 2 bytes long. Any consumer application that fetched
2 bytes for a DBTYPE_BOOL column (such as applications based on Borland's
Delphi), and examined both bytes, would have obtained an incorrect result.
Also, columns adjacent to DBTYPE_BOOL columns would have overlapped in memory.
This has been fixed.
================(Build #5137 - Engineering Case #362207)================
In the FOREIGN_KEYS rowset (implemented by the sa_oledb_foreign_keys stored
procedure), the DEFERRABILITY column contained the values 5 or 6. This column
should contain one of the following:
DBPROPVAL_DF_INITIALLY_DEFERRED 0x01
DBPROPVAL_DF_INITIALLY_IMMEDIATE 0x02
DBPROPVAL_DF_NOT_DEFERRABLE 0x03
These corrections will appear in the "oleschema.sql" file located in the
"scripts" folder once the EBF has been applied. To implement the corrections
to an existing database, connect to the database with Interactive SQL and
load and run the contents of "oleschema.sql".
================(Build #5137 - Engineering Case #362198)================
In the COLUMNS, PROCEDURE_PARAMETERS and PROCEDURE_COLUMNS rowsets, the CHARACTER_MAXIMUM_LENGTH
column contained incorrect values for BIT and LONG VARCHAR/LONG VARBINARY
columns and parameters. This column should contain the maximum possible length
of a value in the column. For character, binary, or bit columns, this is
one of the following:
1) The maximum length of the column in characters, bytes, or bits, respectively,
if one is defined. For example, a CHAR(5) column in an SQL table has a maximum
length of five (5).
2) The maximum length of the data type in characters, bytes, or bits, respectively,
if the column does not have a defined length.
3) Zero (0) if neither the column nor the data type has a defined maximum
length.
4) NULL for all other types of columns.
As well, the CHARACTER_OCTET_LENGTH column contained incorrect values for
LONG VARCHAR/LONG VARBINARY columns and parameters. The CHARACTER_OCTET_LENGTH
column should contain the maximum length in bytes of the parameter if the
type of the parameter is character or binary. A value of zero means the parameter
has no maximum length. NULL for all other types of parameters.
In the PROCEDURE_COLUMNS and PROCEDURE_PARAMETERS rowsets, the column name
should have been CHARACTER_OCTET_LENGTH rather than CHAR_OCTET_LENGTH.
In the PROVIDER_TYPES rowset, the COLUMN_SIZE column contained incorrect
values for LONG VARCHAR/LONG VARBINARY types.
These problems have been corrected and will appear in the "oleschema.sql"
file located in the "scripts" folder once the EBF has been applied. To implement
the corrections to an existing database, connect to the database with Interactive
SQL (dbisql) and run the contents of "oleschema.sql".
================(Build #5135 - Engineering Case #361464)================
This change fixes several problems with the DBSCHEMA_INDEXES rowset, implemented
by the sa_oledb_indexes stored procedure.
The COLLATION column was always empty. It now contains a 1 for ASCENDING
ordering and a 2 for DESCENDING ordering.
The CLUSTERED column was always TRUE, even for a non-clustered index. This
column now contains 0, unless the index is clustered, in which case it will
contain a 1.
The INDEX_NAME column contained the additional string "(primary key)" when
the index was based on a primary key. In the DBSCHEMA_PRIMARY_KEYS rowset
(implemented by the sa_oledb_primary_keys stored procedure), the PK_NAME
column does not contain this string.
Since the names were different, it was difficult to join the information
in these two tables. Elsewhere the index name is reported using only the
table name (in access plans for example). For these reasons, the INDEX_NAME
column will no longer contain the string "(primary key)".
The following column names have been corrected:
FKTABLE_CAT is now FK_TABLE_CATALOG
FKTABLE_SCHEMA is now FK_TABLE_SCHEMA
PKTABLE_CAT is now PK_TABLE_CATALOG
PKTABLE_SCHEMA is now PK_TABLE_SCHEMA
These corrections are to the "oleschema.sql" file located in the "scripts"
folder, which will be in effect for newly created databases. To implement
the corrections to an existing database, connect to the database with Interactive
SQL (dbisql) and run the contents of "oleschema.sql".
================(Build #5130 - Engineering Case #360678)================
ADO applications using the ASA OLEDB provider, ASAProv, could have failed
with an "invalid rowset accessor" error.
The following Visual Basic code example demonstrates the problem:
Dim conn As New OleDbConnection()
Dim cmd As OleDbCommand
Dim reader As OleDbDataReader
Try
    conn.ConnectionString = "Provider=ASAProv;uid=dba;pwd=sql;eng=asademo"
    conn.Open()
    cmd = New OleDbCommand("SELECT * FROM DEPARTMENT", conn)
    reader = cmd.ExecuteReader()
    While reader.Read()
        Console.WriteLine(reader.GetInt32(0).ToString() + ", " _
            + reader.GetString(1) + ", " + reader.GetInt32(2).ToString())
    End While
    reader.Close()
    reader = Nothing
    conn.Close()
    conn = Nothing
Catch ex As Exception
    MessageBox.Show(ex.Message)
End Try
This problem has been fixed; the rgStatus array passed to the IAccessor::CreateAccessor
method is now correctly initialized.
================(Build #5127 - Engineering Case #360196)================
The ASA OLEDB provider ASAProv did not return the correct error codes as
documented by Microsoft. For example, the ICommand::Execute method should
have returned DB_E_INTEGRITYVIOLATION when a literal value in the command
text violated the integrity constraints for a column, but was returning E_FAIL.
This has been corrected.
The following additional error codes are also now returned:
DB_E_NOTABLE
DB_E_PARAMNOTOPTIONAL
DB_E_DATAOVERFLOW
DB_E_CANTCONVERTVALUE
DB_E_TABLEINUSE
DB_E_ERRORSINCOMMAND
DB_SEC_E_PERMISSIONDENIED
================(Build #5124 - Engineering Case #359675)================
When a binary array of bytes was inserted into a binary column using the
OLEDB provider "ASAProv", the data was converted to a hexadecimal string
and stored into the binary column. For example:
BYTE m_lParm1[2] = {0x10, 0x5f};
would have been stored as the binary value 0x31303566 which is the original
binary value stored as a hexadecimal string of characters. This has been
fixed so that parameters are not converted from the user's type to a string.
Instead, bound parameters are converted to the type specified by the application.
================(Build #5123 - Engineering Case #358843)================
A memory leak occurred in the OLEDB provider, ASAProv, when a repeated sequence
of calls to SetCommandText(), Prepare(), and GetColumnInfo() was executed.
These calls could be generated by an ADO Open() call with a SELECT statement
containing a number of table JOINs. This problem has now been fixed.
================(Build #5117 - Engineering Case #350319)================
When using the Microsoft Query Analyzer with Microsoft SQL Server 2000 to
issue a query on a Linked Server definition that referenced an ASA server,
an error such as the following would have been reported:
Server: Msg 7317, Level 16, State 1, Line 1
OLE DB provider 'ASAProv.80' returned an invalid schema definition.
For example:
select * from ASA8.asademo.dba.customer
where "ASA8" is the name of the Linked Server, "asademo" is the catalog
name, "dba" is the schema name and "customer" is the table name. This problem
has been fixed, but the following must also be done in order to support a
Linked Server query:
- When the Linked Server is defined, "Provider Options" must be selected
(this button is greyed out and unusable once the Linked Server has been defined).
In the Provider Options dialog, the "Allow InProcess" option must be selected.
- ASA does not support catalogs, so the four-part table reference must omit
the catalog name (two consecutive periods with no intervening characters,
i.e. select * from ASA8..dba.customer). Including a catalog name will result
in the error: "Invalid schema or catalog specified for provider 'ASAProv.80'"
- The database must be updated to include the revised stored procedures
found in the oleschema.sql file in the "scripts" directory. This file includes a new stored
procedure dbo.sa_oledb_tables_info that is required for Linked Server support.
================(Build #5003 - Engineering Case #357700)================
When using a database with the UTF8 collation, statements containing non-English
characters could fail with the error "Syntax error or access violation",
and Unicode bound data stored in the database could be corrupted.
This problem would affect any application using ODBC or OLEDB, including
Java-based applications using the JDBC-ODBC bridge (8.0) or iAnywhere JDBC
Driver (9.0), including DBISQL and Sybase Central.
dbmlsync was also affected.
The bug was introduced in the following versions and builds:
8.0.2 build 4409
9.0.0 build 1302
9.0.1 build 1852
This problem has been fixed.
================(Build #5350 - Engineering Case #417977)================
If an ASA process (such as dbremote, dbmlsync or dbmlsrv8) had been started
as a service, it was possible for the process to hang when the service was
shut down. This has been corrected so that these services now shut down correctly.
================(Build #5272 - Engineering Case #395071)================
After installing the 8.0.3 build 5260 EBF for CE, it did not deploy to the
device at the end of the install, even if that option was selected. This has
been corrected.
A workaround is to select "Deploy SQL Anywhere for Windows CE" from the Start
Menu under: SQL Anywhere 8.
================(Build #5269 - Engineering Case #384130)================
If the length of an indexed table column was increased on a big endian machine,
using the index may have caused the server to crash due to an unaligned memory
reference. This has been fixed.
================(Build #5219 - Engineering Case #380746)================
Playback of a silent install recording may have failed if the recording included
the ADO.NET feature, but the machine being installed to did not have the
.NET Framework installed. This has been fixed.
================(Build #5219 - Engineering Case #379106)================
A multithreaded Embedded SQL application could, depending on timing, have
failed with the error "Invalid statement" (SQLCODE -130). For this to have
occurred, the application had to use the syntax "EXEC SQL DECLARE ... CURSOR
FOR SELECT ..." in code which could be run by multiple threads concurrently.
This has been fixed so that the SQL Preprocessor generates code for the syntax
"EXEC SQL DECLARE ... CURSOR FOR SELECT ..." that is thread safe.
Note the syntax "EXEC SQL DECLARE ... CURSOR FOR :stmt_num" is thread safe
(and is not affected by this problem), while the syntax "EXEC SQL DECLARE
... CURSOR FOR statement_name" is not thread safe (and cannot be made thread
safe).
================(Build #5170 - Engineering Case #369278)================
The stored procedure sp_jdbc_stored_procedures is used by jConnect to retrieve
stored procedure metadata. Unfortunately, the definition of the stored procedure
was incorrect: the PROCEDURE_TYPE column of the metadata result set was returning
whether or not the particular stored procedure returned a result set, when it
should return whether or not the procedure returns a return value. This procedure
has now been corrected.
Note, new databases will have the corrected procedure, but to update existing
databases, run the Upgrade utility dbupgrad.
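For example (the connection parameters shown are placeholders for an actual
DBA connection to the database being upgraded):
dbupgrad -c "uid=DBA;pwd=SQL;eng=myserver;dbn=mydb"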
================(Build #5426 - Engineering Case #437306)================
It was possible, although extremely rare, for a database to get into an inconsistent
state, where some rolled back transactions would appear to have been committed.
The chances of this happening were more likely when performing several large
operations without doing any commits, and the operations also spanned checkpoints.
The connection doing the operations would have needed to be disconnected
before committing, and then the server must have gone down dirty at the appropriate
time. It also would have been more likely when 'savepoints' and 'rollback
to savepoint' were used. This has now been fixed.
================(Build #5416 - Engineering Case #433741)================
Although rare, the server may have crashed if the client process disappeared
at the moment a shared memory connection was being established. For the same
reason, a client application may have crashed if the server disappeared at
the moment the application was about to establish a shared memory connection.
This has been fixed.
================(Build #5414 - Engineering Case #433047)================
The LOAD TABLE statement could have incorrectly loaded multi-byte character
data, due to incorrect accounting for a split character in a buffer. The
loaded data would appear to have characters or bytes within it that were
not part of the original data. This has been fixed.
================(Build #5414 - Engineering Case #431369)================
An INSERT statement with the ON EXISTING UPDATE clause could have failed
with an incorrect referential integrity error. This would only have occurred
if the table had a single-column primary key, and there existed a row where
the value of this column was the same as another column value in the VALUES clause.
This has been fixed, but a workaround would be to use an UPDATE statement
when the primary key value already exists.
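The following minimal sketch (the table, column names and values are hypothetical)
illustrates the failing case and the workaround:
create table t ( pk int primary key, val int );
insert into t values ( 1, 2 );
-- The val value below matches the existing pk value 1, which previously
-- triggered the spurious referential integrity error:
insert into t ( pk, val ) on existing update values ( 1, 1 );
-- Workaround when the primary key value is known to exist:
update t set val = 1 where pk = 1;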
================(Build #5413 - Engineering Case #430576)================
It was possible, although rare, for the server to hang while doing an ALTER
TABLE. A deadlock problem updating histograms has been fixed.
================(Build #5411 - Engineering Case #432886)================
Attempting to reference a hexadecimal literal with an odd number of characters
between 9 and 15 characters long inclusive, would have caused a syntax error.
For example:
select cast( 0xF1234ABCD as bigint );
This has been fixed; an extra zero is now added to the front of odd-length
hexadecimal strings. Note, hexadecimal strings that are odd in length, and
too large to fit into a BIGINT, will still generate a syntax error.
================(Build #5404 - Engineering Case #431810)================
The server may have crashed, rather than return the error "OMNI cannot handle
expressions involving remote tables inside stored procedures" (SQLCODE -823
: SQLE_OMNI_REMOTE_ERROR).
For example (where x is a remote table):
declare @x int
declare @y int
select @x =1
select @y =1
if (select x from x where y = @y) = @x + 1
print 'yes'
else
print 'no'
This has been fixed.
================(Build #5404 - Engineering Case #409006)================
The server could have deadlocked at the end of an on-line backup if at least
two other concurrent transactions performed various unsafe actions. This
has been fixed.
================(Build #5395 - Engineering Case #426799)================
The server could have become deadlocked (it would have appeared to be hung),
if non-temporary database pages were updated, then freed, in quick succession.
This has been fixed.
================(Build #5394 - Engineering Case #429438)================
When the return value of a user-defined function was truncated, the server
created a temporary string that was not freed until the cursor was closed.
This problem only occurred with multibyte character set databases. This has
been corrected.
================(Build #5393 - Engineering Case #429029)================
The changes for Engineering Case #406765 could have caused a server crash.
The circumstances where this occurred were rare, and would have required
that the CONNECT or REMOTE permissions for a user be dropped, while dbremote
was actively replicating transactions for the user. This has been fixed.
================(Build #5391 - Engineering Case #428879)================
When using the Windows Service Manager to start and stop ASA services, the
option to pause running services was available. This option has been removed
for all services, except for dbltm.
================(Build #5391 - Engineering Case #428685)================
If the NUMBER(*) function was used in a subquery of an expression, and the
expression referenced a proxy table, the server may have crashed. This has
been fixed.
================(Build #5384 - Engineering Case #426984)================
When upgrading a database, either by executing the ALTER DATABASE UPGRADE
statement, or running the Upgrade utility, the upgrade did not run the oleschema.sql
script. This has now been corrected.
================(Build #5381 - Engineering Case #426056)================
Attempting to execute a "SELECT INTO #temporary-table-name" statement, may
have caused a server crash in cases where it should have returned the error
"Statement size or complexity exceeds server limits". This has been fixed.
================(Build #5381 - Engineering Case #407422)================
It was possible for the server to fail with Assertion 200602: 'Incorrect
page count after deleting pages from table {table_name} in database {database_name}',
when dropping or truncating a table for which a LOAD TABLE had failed. It
was also possible for a failed LOAD TABLE to leak table pages, or temporary
file pages. A failed ALTER TABLE could have resulted in many other assertions
or strange behaviours. These problems would only have occurred if there where
other transactions actively committing and rolling back transactions at the
time of the LOAD TABLE or the ALTER TABLE. These problems have now been
fixed.
================(Build #5376 - Engineering Case #423681)================
The server may have crashed during the second or subsequent executions of
a procedure, if after its first execution one of the referenced objects (table,
view, proxy table, etc.) had been dropped and recreated with a changed type.
For example, a referenced table was dropped and a view with the same name
was created in its place. This has been fixed.
================(Build #5376 - Engineering Case #423578)================
Attempting to OPEN a cursor on a procedure call that returned the SQL warning
"Procedure has completed", would not have opened a cursor. The server still
created a cursor handle though, which was not freed until database shutdown.
This has been fixed so that the server will now return a NULL handle if the
cursor has not been opened.
================(Build #5373 - Engineering Case #423858)================
Applications connected via the CmdSeq protocol could have crashed if a connection
request resulted in a warning. This has been fixed.
================(Build #5373 - Engineering Case #423129)================
It was possible, although very rare, for all connections to the server to appear
to be hung (deadlocked) in the presence of DDL statements, checkpoints, backups,
or other operations, if a connection that was executing Java was cancelled.
This has now been fixed.
================(Build #5371 - Engineering Case #422487)================
If a cursor was opened on a stored procedure call, and the procedure executed
a RAISERROR statement, the error was not reported on the OPEN. The value
of @@error would have been set to the integer value given in the RAISERROR
statement and would have remained set to that value even after other statements
were executed, until another RAISERROR was executed or the connection was
dropped. This has been fixed so that an error will now be reported to the
application on the OPEN.
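A minimal sketch of the situation (the procedure name and error number are
hypothetical):
create procedure raise_err()
begin
    raiserror 99999 'forced failure';
end;
-- Before the fix, an OPEN on a cursor over "call raise_err()" reported no
-- error, but left @@error set to 99999 until another RAISERROR was executed
-- or the connection was dropped.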
================(Build #5369 - Engineering Case #408903)================
Running the Validation utility would have caused the server's temporary file
to grow on each execution if validation with express check (-fx) was chosen.
This is the default mode in 9.0.2. The amount of growth would depend on the
number of tables in the database and the size of those tables. This has now
been fixed. A workaround is to validate using -fn rather than -fx. Validation
performance should improve with this fix as well.
================(Build #5364 - Engineering Case #420171)================
A failed UPDATE statement, for example due to a deadlock error, could have
resulted in a corrupted index. It is likely that only databases involved
in replication would have been affected by this problem. A database validation
with 'full check' would have detected that the index had missing values.
This has been fixed. Dropping and re-creating an index in such a state will
also repair the corruption.
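For example, to repair an affected index (the table and index names here are
hypothetical):
drop index t.idx_c1;
create index idx_c1 on t ( c1 );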
================(Build #5361 - Engineering Case #420042)================
An INSERT, UPDATE or DELETE statement may have incorrectly returned the error
"Invalid statement", if the execution of the statement caused a trigger to
fire, or an RI action was executed, and a subsequent statement then failed
with a deadlock error. This has been fixed.
================(Build #5360 - Engineering Case #420864)================
While a database upgrade is in progress, event execution is not allowed.
If a scheduled event did fire during a database upgrade, it was correctly
not executed, but it was also incorrectly not scheduled for its next execution.
This meant that the event would not have been executed again until the database
was restarted. This has been fixed, so that now the event is rescheduled for
the next execution.
================(Build #5359 - Engineering Case #419782)================
If a query contained an alias in the SELECT list, with the aliased expression
containing an IF or CASE expression, where the predicate in the IF or CASE
contained an EXISTS, ANY, or ALL predicate, and the aliased expression was
used as an outer reference in another subquery, and the execution plan contained
a materializing operator such as Hash Join, then incorrect results could
have been returned for the IF/CASE expression and the referencing subquery.
The following query demonstrates the problem using the demo database:
select T2.dept_id,
       ( if exists ( select 1 from sys.dummy T4 where T2.dept_head_id <> -1 )
         then 1 endif ) as S2,
       ( select 1 from sys.dummy T5 where S2 > 0 ) as S3
from rowgenerator T0 with ( no index )
     join department as T2 with ( no index )
     on T0.row_num*100 = T2.dept_id
This has been fixed.
================(Build #5359 - Engineering Case #419708)================
Subqueries with an ANY or ALL condition, that also contained both a DISTINCT
and a TOP n clause (where n > 1), could have returned incorrect results.
The server was incorrectly eliminating the DISTINCT, which meant that some
rows could have been missed from the subquery. This has now been fixed.
================(Build #5357 - Engineering Case #420137)================
On Unix systems, if an application autostarted a server, used ASA's embedded
SQL function db_start_engine(), or used the Spawn utility to start a server,
and the application was terminated abnormally by SIGINT, SIGQUIT, SIGTERM
or some other unhandled signal (SEGV, etc.), the server that had been started
would have shut down unexpectedly, although cleanly. This has been fixed.
A work-around is to specify -ud on the server command line, although this
has the side effect that the controlling application (e.g. dbspawn) will
return/continue immediately before the server has fully started all of its
databases, etc.
================(Build #5350 - Engineering Case #418346)================
If two or more users with DBA authority granted the same column update permission
to a user, these column permissions could then only have been revoked by
revoking the table update permission. Any attempt to revoke the column update
permission would have failed with the error "You do not have permission to
revoke permissions on <table>". This has been fixed.
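A sketch of the scenario (the user, table and column names are hypothetical):
grant update ( city ) on dba.customer to some_user;  -- granted by one DBA
grant update ( city ) on dba.customer to some_user;  -- granted by another DBA
-- This REVOKE previously failed with a permissions error:
revoke update ( city ) on dba.customer from some_user;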
================(Build #5347 - Engineering Case #404396)================
The server could have gone into an infinite loop while performing operations
on a database corrupted in a certain way. Validation of the database may
or may not have experienced the same behaviour. The server will now fail
with "Assertion 202101: Invalid bitmap links on page 0x%x" when such a condition
is encountered.
================(Build #5337 - Engineering Case #409861)================
The server could have failed with the error "Assertion failed: 101412 -
Page Number on page does not match the page requested", or possibly crashed.
This has been fixed.
================(Build #5336 - Engineering Case #415979)================
When the server was using a database file that was located on another machine
via Windows file sharing, or was located on the local machine but referenced
using a UNC name (such as \\mymachine\myshare\mydb.db), it could have failed
with one of the following errors (and possibly other assertion failures and
fatal errors, including OS error 58):
I/O Fatal error: Unknown device error
Assertion failed: 201125 Read Error with OS error code: 64
The problem only occurs on certain machines with certain patches and/or
hardware and/or drivers, the specifics of which have not been completely
identified. The problem is related to scattered reads (aka "group reads")
-- see the documentation to determine when the server may use scattered reads.
The server now works around this OS problem by disabling scattered reads
for files accessed by remote file sharing, or by UNC reference.
================(Build #5326 - Engineering Case #407565)================
When run on some Windows CE devices, after the server had grown a file, the
OS would start to ignore the FILE_FLAG_WRITE_THROUGH flag, as well as the
FlushFileBuffers() call. On Windows CE, the server uses FlushFileBuffers()
to guarantee IO ordering for recoverability. In an attempt to work around
this problem, the server has been changed to close and reopen files after
the first call to FlushFileBuffers() after a file grows. It is not known
exactly which versions of Windows CE contain this problem.
================(Build #5325 - Engineering Case #409772)================
The server may have crashed when executing a query with a subselect, if the
subselect caused an error when describing the result set. This has been fixed.
================(Build #5323 - Engineering Case #409773)================
If an error occurred while executing a subquery, there was a chance that
the server would have left heap memory unfreed. This has now been fixed.
================(Build #5322 - Engineering Case #408655)================
Executing a very large query could have caused the server to crash. This
has been fixed.
================(Build #5321 - Engineering Case #409283)================
If an SMTP email was sent using the xp_sendmail procedure, and the engine
received a socket timeout error for a socket receive, the next xp_sendmail
or xp_stopsmtp call would have caused the server to crash. This has been
fixed.
================(Build #5320 - Engineering Case #402416)================
The performance of a Java stored procedure may have degraded the longer it
ran. This would occur if a JDBC Statement object that remained open for a
long time caused multiple warnings. Eventually, the buildup of SQLWarning
objects would have exhausted the Java heap, causing the Java garbage collector
to run continuously. These SQLWarning objects would have been garbage collected
when the JDBC Statement was closed, or when the clearWarnings function was
called by the Java procedure. This problem has been fixed by clearing the
Statement's SQLWarning objects whenever the Statement's associated ResultSet
is closed.
Note that the default heap size may be too small for some applications, but
can be increased with the Java_heap_size database option.
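For example (the value shown is arbitrary, specified in bytes):
SET OPTION PUBLIC.Java_heap_size = 2000000;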
================(Build #5319 - Engineering Case #408745)================
Image backups produced by a server when running on a multi-processor machine
could have been corrupt. This would have occurred if there was more than
one transaction concurrently modifying the database. This has now been corrected.
================(Build #5319 - Engineering Case #402417)================
A memory leak in the Java VM would have caused poor performance of a Java
stored procedure if it was called hundreds of times. The cause of the performance
drop was exhaustion of the Java heap, forcing the Java garbage collector
to run continuously. This has been fixed.
Note that the default heap size may be too small for some applications, but
can be increased with the Java_heap_size database option.
================(Build #5318 - Engineering Case #407670)================
Subqueries with an EXISTS predicate may no longer be flattened if the subquery
contains two or more tables in the FROM clause, and it is not known if the
subquery returns at most one row. The main issue with flattening subqueries
with more than one table in the FROM clause is that a Distinct on rowids
is always needed for the plan of the main query block.
================(Build #5317 - Engineering Case #408062)================
If a query referencing a proxy table was to be executed in "No passthru"
mode and the proxy table contained a varchar column, then any trailing blanks
in the varchar column would have been incorrectly stripped. This problem
has been fixed.
================(Build #5315 - Engineering Case #408097)================
On Unix systems, calling the ODBC function SQLDataSources() would have caused
the server to crash with a SegFault, if no Driver Manager was being used.
This has been fixed.
================(Build #5311 - Engineering Case #406857)================
If a query like "SELECT TOP 5 * FROM T" was executed without an ORDER BY
clause, the server would normally return a warning stating that "the result
returned is non-deterministic." For Open Client applications, this warning
is now no longer returned.
================(Build #5311 - Engineering Case #406765)================
Database recovery could have failed with assertion 201501 "Page for requested
record not a table page or record not present on page". The problem would
only have occurred in the event of the database server not shutting down
cleanly while dbremote replication was taking place. It would also have been
required that another connection revoked CONNECT or REMOTE from the remote
user for which the transactions were being replicated. This has been fixed
so that connections attempting to revoke permissions from the user for which
replication is currently occurring will now receive an error indicating that
the operation is "Not allowed while user 'user_name' is actively replicating
transactions". The connection attempting to REVOKE permissions will need
to wait until replication completes (commit/rollback) before being able to
revoke the permissions.
================(Build #5311 - Engineering Case #400091)================
When a multi-byte character string was truncated in the middle of a character
by being put into a column that was not wide enough for the whole string,
subsequent selects would have returned a NULL at the end of the data. This could
have caused MobiLink synchronizations to fail, as the NULL was misinterpreted.
This has been fixed by truncating the string at the last full character.
================(Build #5311 - Engineering Case #393270)================
If a server was running with a ServerName longer than 32 bytes, attempts
to connect to it using SPX would have failed. This has now been fixed so
that only the first 32 bytes are relevant for SPX.
================(Build #5309 - Engineering Case #406030)================
The server could have crashed when referencing a variable declared as a VARBINARY
of unspecified length. This has been fixed.
================(Build #5309 - Engineering Case #405397)================
When run on Windows CE devices, the server was not recognising the registry
setting to change the location of the temporary folder. This has been fixed,
so that temporary files may now be moved to a storage card.
================(Build #5303 - Engineering Case #404767)================
If an INSERT...SELECT was executed, such that the table being inserted into
was in a publication and the table being selected from was a proxy table,
then it was possible that the statement would have failed with unexpected
errors. The most likely error was a "conversion error", but any number of
errors could have been returned. The problem has now been fixed.
================(Build #5302 - Engineering Case #393093)================
If executing an SQL Remote PASSTHROUGH statement caused a trigger to fire,
the statement would not have been sent to the remote server. The problem
did not occur for PASSTHROUGH ONLY statements; since the statement did not
execute locally, no triggers were fired. This has been fixed.
================(Build #5301 - Engineering Case #404151)================
Calling a procedure owned by another user with the result of a function owned
by a third user would have resulted in a 'permission denied' error. This
has now been fixed.
For example, the following sequence would have generated the 'permission
denied' error:
create procedure userA.hello( in name char(10) )
begin
message 'Hello ' || name;
end;
grant execute on userA.hello to userC;
create function userB.world()
returns char(10)
begin
select 'world';
end;
grant execute on userB.world to userC;
create procedure userC.say_hello( )
begin
call userA.hello( userB.world() );
end;
call userC.say_hello();
================(Build #5301 - Engineering Case #394297)================
Attempting to execute a statement like INSERT INTO v SELECT ..., where v
is a view on a proxy table, would have caused the server to crash. The problem
has now been fixed.
================(Build #5299 - Engineering Case #403355)================
Executing a REORGANIZE TABLE statement without the PRIMARY KEY, FOREIGN KEY
or INDEX clauses could have caused the server to become deadlocked and appear
to hang. This problem has now been fixed.
================(Build #5299 - Engineering Case #403151)================
The database server could have crashed when connecting to the utility_db
from an ODBC application. This would only have occurred if the ODBC driver
was a newer version than the server. This has been fixed.
================(Build #5298 - Engineering Case #402727)================
Issuing a START DATABASE request for a database with a mismatched transaction
log could have failed with the error message "Cannot open transaction log:
'???' belongs to a different database". The error message now correctly
specifies the transaction log file name in place of the string '???'.
================(Build #5293 - Engineering Case #402038)================
Executing an INSERT statement that specified a very long list of columns
(approx 1/4 of the database page size in bytes), could have caused a server
crash. This problem has been resolved. A temporary (inefficient) work around
would be to start the server with a larger cache page size.
================(Build #5292 - Engineering Case #401645)================
The server gathers and updates column statistics as part of the query execution
process. In the case of NOT predicates, the server could have incorrectly
changed the stored selectivity of these predicates to 100%. This problem
has been resolved.
================(Build #5290 - Engineering Case #400924)================
Setting the system clock back to an earlier time (either manually or by a
system daemon) on non-Windows platforms (i.e. NetWare, Unix) could have caused
any of the following symptoms:
- timestamps reported in the request level log could jump ahead approximately
25 days
- client applications could be incorrectly disconnected
This has been fixed.
================(Build #5289 - Engineering Case #399488)================
Multiple errors were defined with SQLSTATE 28000. As a result, an incorrect
error message could have been returned. This has been corrected with the
following changes:
SQLSTATE_PARM_TOO_LONG changed from 28000 to 53W06
SQLSTATE_PASSWORD_TOO_SHORT changed from 28000 to 54W07
SQLSTATE_PASSWORD_TOO_LONG changed from 28000 to 54W08
SQLSTATE_INVALID_LOGON and SQLSTATE_INVALID_PASSWORD continue to return
28000
================(Build #5289 - Engineering Case #399155)================
The server would have crashed under the following conditions:
- A BEGIN END block contained a nested BEGIN END block
- Inside the nesting block a local temporary table was declared (e.g. using
execute immediate)
- The outer block declared a cursor on the temporary table, and the cursor
was opened inside the nested block but not closed in the nested block
- The cursor used an index on the temporary table
This has been fixed.
================(Build #5286 - Engineering Case #400467)================
Additional changes have been made to ensure that proper metadata is provided
for newer versions of jConnect. If a new version of jConnect is to be used,
then it is recommended that the new version of JCATALOG.SQL is run against
the database.
================(Build #5282 - Engineering Case #398122)================
While initializing the in-memory free list on start up, the server could
have run out of memory if the database was corrupt. The server will now generate
assertion 202100 - "Invalid bitmap page at page (page number)" when encountering
such corruption.
================(Build #5281 - Engineering Case #398604)================
It was possible for the server to loop infinitely while going through automatic
database recovery if the database file was corrupted. This could have occurred
when deleting an entry from a corrupted index. "Assertion 200901: Corrupted
page link found on page (0x%x) while deleting from index %s" will now be
generated for this situation.
================(Build #5279 - Engineering Case #397772)================
If a LOAD TABLE command failed, for example with a primary key violation,
then it was possible that the server would leak table pages and index pages
in the database. These leaked pages would not have belonged to any database
object and would also not have been marked as free. As a result, the database
file could have been larger than necessary. The only way to recover the
lost space is to rebuild the database.
================(Build #5277 - Engineering Case #398132)================
If a backup of the database was being performed, with the transaction log
either being renamed or truncated, and the database engine was unable to open
a new transaction log (or mirror log), an error would have been returned
to the backup process, but the server could have continued to apply operations
without logging them to the transaction log. The server will now fail an
assertion when it cannot open a new transaction log or mirror log after the
truncation or rename, as this should be a fatal error.
================(Build #5275 - Engineering Case #397635)================
The server could have crashed while executing a stored procedure, if it was
executed often enough to enable caching. This has been fixed.
================(Build #5274 - Engineering Case #396584)================
If a query referenced a proxy table and contained an ORDER BY clause, and
an ON condition in the FROM clause, it may have returned incorrect results
or an error. This has been fixed.
================(Build #5273 - Engineering Case #397134)================
A query that referenced proxy tables and contained a row limitation in the
SELECT clause (i.e. FIRST n or TOP n) may have incorrectly returned the warning
"The result returned is non-deterministic", even if the query had an ORDER BY
clause. This has been fixed.
Note, this will not have happened if the complete query was forwarded to
the remote server (i.e. Full Pass-thru mode).
================(Build #5272 - Engineering Case #396813)================
Changes for Engineering Case 395908 introduced a bug such that long binary
and long varchar columns were truncated for Open Client and jConnect applications.
This problem has been fixed.
================(Build #5272 - Engineering Case #396058)================
Inserting a string longer than 64K bytes into a column of a proxy table,
would have caused the local server to crash. This has been fixed.
================(Build #5272 - Engineering Case #395908)================
If an Open Client or jConnect application described a column that was of
type Varchar(n) or Varbinary(n), the size reported for the column would have
been 32768 instead of n, if n was greater than 255. This problem has now
been fixed.
================(Build #5272 - Engineering Case #395054)================
If the database option Wait_for_commit was set to ON while executing a LOAD
TABLE statement, and it failed with a referential integrity error, then the
database could have been left in an inconsistent or corrupt state. This has
been fixed.
Some of the errors that might be characteristic of this problem are:
- Assertion 200602 - Incorrect page count after deleting pages from table
'table_name' in database 'database_name' - could occur during TRUNCATE TABLE
or DROP TABLE.
- Database validation could report that an index has inaccurate leaf page
count statistics.
- Database validation could report that a foreign key is invalid and that
some primary key values are missing.
- Database validation could report that the rowcount in SYSTABLE is incorrect.
- Inconsistent row counts might be observed when querying the table sequentially
versus via an index.
================(Build #5270 - Engineering Case #393745)================
When running the reload.sql generated by the Unload utility, executing LOAD
STATISTICS statements may have failed. This would have occurred if the column
was of type binary or long binary, and the source database and the target
database had different collations (e.g. one had a single-byte collation and
the other a multi-byte collation). This has been fixed so that the statistics
of binary columns are now only loaded if both databases have the same collation.
================(Build #5269 - Engineering Case #394668)================
The ON EXISTING UPDATE clause of the INSERT statement can be used to update
rows that already exist in the database. By default, columns with default
values in existing rows should be left unmodified unless their values are
explicitly changed by the INSERT statement. Under some circumstances, the
server could have modified these columns incorrectly. This problem has been
resolved.
As a simplified example consider the following:
drop table a;
CREATE TABLE a (
a1 INTEGER NOT NULL,
a2 INTEGER NOT NULL,
a3 INTEGER NOT NULL DEFAULT AUTOINCREMENT,
a4 INTEGER NOT NULL,
PRIMARY KEY ( a1, a2 ) );
INSERT INTO a VALUES( 1, 1, 1, 1);
INSERT INTO a VALUES( 2, 1, 2, 2);
commit;
INSERT a ON EXISTING UPDATE WITH AUTO NAME
SELECT 1 AS a1, 99 AS a2, 11 AS a4
union all
SELECT 2 AS a1, 1 AS a2, 88 AS a4;
The INSERT statement should:
1. Insert a new row into table a with PKEY <1,99>, and
2. Update the value of a.a4 to 88 in the row with PKEY <2,1>. The default
column a.a3 in this row should remain unchanged.
================(Build #5267 - Engineering Case #387997)================
A database file may have grown, even when free pages existed in the dbspace,
if the free space was fragmented such that no 8-page cluster aligned on an
8-page boundary existed within the dbspace, and pages for a large table (one
with bitmaps) were being allocated. When growing a large table prior to this
change, the server always allocated table pages in clusters of eight pages
so that group-reads could be performed on a sequential scan. If no clusters
were found, the dbspace was grown to create one. Now, if no free cluster
is found, the server will attempt to allocate pages from a cluster that has
both free pages as well as pages allocated to the table that is being grown.
If no free pages are found by this method, the server will use any free page
in the dbspace. So now the dbspace will not grow until there are no free pages
left in the dbspace.
As a work-around, periodically running REORGANIZE TABLE on all tables will
generally avoid the problem.
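For example (the table name is hypothetical):
REORGANIZE TABLE dba.sales_order;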
================(Build #5267 - Engineering Case #384959)================
Changes made for Engineering Case #363767 could have caused the database
file to grow unnecessarily. A page that was in use as of the last checkpoint
is allowed to be reused before the next checkpoint, provided its preimage
has been saved in the checkpoint log. Prior to the changes for case 363767,
the preimage for a freed page was forced to disk and the page was allowed
to be reused immediately. After the changes for case 363767, the freed page
was not allowed to be reused until after the next checkpoint, because the
server no longer forced the preimage to disk for performance reasons. If
an application freed and reused pages frequently (for example, repeatedly
deleting all rows from a table then inserting rows back into the table),
the server would not have allowed many of the free pages to be used until
after the next checkpoint. The problem has been fixed by keeping track of
the set of free pages that would normally be allowed to be reused if only
the preimages were committed to disk.
Note that this growth was not unbounded and was not a 'leak', as the pages
are freed as of the next checkpoint. This problem only affected databases
created with 8.0.0 or later.
================(Build #5266 - Engineering Case #393746)================
If a space did not follow the method name in the EXTERNAL NAME clause of
the wrapper function to a Java method, calls to the function would have resulted
in a procedure not found error.
For example, a wrapper function definition of
CREATE FUNCTION MyMeth (IN arg1 INT, IN arg2 varchar(255),IN arg3 INT )
RETURNS Int
EXTERNAL NAME 'TestClass.MyMethod(ILjava/lang/String;I)I'
LANGUAGE JAVA;
would have resulted in the error
Procedure 'TestClass.MyMethod(ILjava/lang/stringI)I' not found.
whereas
EXTERNAL NAME 'TestClass.MyMethod (ILjava/lang/String;I)I'
would have worked. This has been fixed so that a space is no longer required
between the method name and the left parenthesis.
================(Build #5263 - Engineering Case #393022)================
If an expression contained a reference to a proxy table, the server would
have crashed if the expression was used:
- in a MESSAGE or PRINT statement
- in a RETURN statement of a function or procedure
- in a time/delay expression in a WAITFOR statement
- in an offset expression of a FETCH statement
This has been fixed so that the server now correctly returns the error "OMNI
cannot handle expressions involving remote tables inside stored procedures".
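A sketch of one affected statement (t_remote stands for a hypothetical proxy
table):
MESSAGE ( select max( id ) from t_remote ) TO CLIENT;
-- previously crashed the server; now returns SQLE_OMNI_REMOTE_ERROR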
================(Build #5262 - Engineering Case #392668)================
If an application used the Remote Data Access feature to perform an INSERT
from SELECT in 'no passthru' mode, and the insert received an error, it was
possible for the server to have crashed. This problem has now been fixed.
================(Build #5262 - Engineering Case #392216)================
If a proxy table contained a column declared with DEFAULT AUTOINCREMENT,
and an insert into that table did not contain a value for that column, the
server may have crashed. For this to have happened, one of the column values
in the insert statement had to be an expression or function call that needed
to be evaluated. This has been fixed.
================(Build #5261 - Engineering Case #392502)================
If a Java application closed an SAConnection object, and then subsequently
called the 'isClosed' method on the same object, an exception would have
been thrown erroneously. This has been fixed.
================(Build #5259 - Engineering Case #392068)================
When backing up a database to multiple tapes, after the first tape had been
written, the request for the next tape would have failed. This problem has
been fixed.
================(Build #5258 - Engineering Case #391751)================
Numerous changes have been made to the system procedures used by jConnect
when connected to ASA servers. Newly created databases will have these changes,
but to update existing databases run the script jcatalog.sql, which is in
the scripts directory.
================(Build #5257 - Engineering Case #391357)================
If the text of a stored procedure, view or trigger was made unreadable using
the HIDDEN keyword, and the definition string was invalid, the server may
have crashed. This has been fixed.
================(Build #5257 - Engineering Case #391182)================
If the text of a stored procedure was made unreadable with the HIDDEN clause,
and it contained a call to itself with at least one parameter, then any call
to this procedure would have failed with the error "Wrong number of parameters
to function 'proc_name'". The error would have disappeared after restarting
the database or reloading the procedures due to the execution of a DDL statement.
This has now been fixed.
================(Build #5255 - Engineering Case #390765)================
If an Open Client dblib application attempted to fetch a tinyint value using
the dbdata() function, instead of dbbind(), then the application would always
get the value 0, instead of the actual tinyint value. Note that this problem
only occurred if the tinyint value was nullable. This problem has now been
fixed.
================(Build #5251 - Engineering Case #389796)================
When performing an UPDATE (or possibly a DELETE) using a keyset cursor, over
a table that was joined to itself (i.e. appears multiple times in the UPDATE
or FROM clause), the server could have failed to obtain write locks on rows
it modified. This could have resulted in lost updates, or a corrupted index,
if an update was made to an indexed column. This has been fixed.
================(Build #5250 - Engineering Case #389598)================
If an Open Client application used unsigned datatypes in a Remote Procedure
Call, there was a good chance the application would hang. This problem has
now been fixed.
================(Build #5249 - Engineering Case #388498)================
The server automatically gathers and maintains column statistics as queries
are executed. If multiple connections concurrently updated column statistics
as a result of query execution, there was a potential for some column statistics
to become inaccurate. This problem has been resolved.
================(Build #5249 - Engineering Case #382839)================
When using the Microsoft SQL Server "Import and Export Data" tool to move
tables from a Microsoft SQL Server database to an ASA database, and the connection
to the ASA server used the OLEDB provider, column data was truncated to 200
bytes. This has now been fixed.
================(Build #5248 - Engineering Case #389230)================
While running multiple connections that fetched from different proxy tables,
and different remote servers using ASEJDBC, if one connection was killed,
then the server's Java Virtual Machine could no longer be started. This has
now been fixed.
================(Build #5248 - Engineering Case #388838)================
If a proxy table was created with a column that contained a DEFAULT clause,
then an insert into that table would have failed if the insert explicitly
specified the column, but with a different case for the column name. The returned
error would have been "Duplicate insert column". For example:
create table T1 ( col1 int default 10, col2 int ) at '....';
insert into T1 ( COL1, col2 ) values ( 1, 1 );
This has been fixed.
================(Build #5248 - Engineering Case #388752)================
If a column's COMPUTE clause contained a string constant, or a string constant
expression, the server would have crashed each time the compute expression
was evaluated. This has been fixed.
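A minimal sketch of the scenario (the table and compute expression are
hypothetical):
create table t (
    c1 varchar(10),
    c2 varchar(20) compute ( 'id-' || c1 )
);
insert into t ( c1 ) values ( 'abc' );  -- evaluating the compute expression
                                        -- previously crashed the server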
================(Build #5241 - Engineering Case #387061)================
The locked_heap_pages and main_heap_pages statistics were being reported
incorrectly if the performance monitor was left running during server restarts.
Further, these counters were not being reported as accurately as they could
have been. These problems have been corrected.
================(Build #5240 - Engineering Case #386918)================
When an error occurred in a subselect that was part of a procedural statement
(for example, SET, MESSAGE, IF/WHILE conditions, etc.), the server would
have failed to release part of the cache that was used by that subselect.
Subselects that are part of queries that return result sets, explicitly
opened cursors, insert...select, or select...into statements, are not affected.
This would not cause any immediate problems, however if a large number of
calls were made to such procedures, an increasing portion of the database
server cache would have become unavailable to the server for normal use.
This would then have caused the cache to grow larger than necessary, and
eventually, given enough such calls, have failed with a 'Dynamic Memory Exhausted'
error. This may also have shown up as steadily decreasing server performance.
This problem was more likely to appear if stored procedures were written
with exception handlers or ON EXCEPTION RESUME. This has now been fixed.
A workaround is to restart the server whenever performance drops below an
acceptable level, or at shorter intervals than the memory exhaustion error
is reported.
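A sketch of one affected pattern (the procedure and the failing subselect are
hypothetical):
create procedure p()
begin
    declare v int;
    declare err int;
    set v = ( select 1 / 0 from dummy );  -- error raised in the subselect
exception
    when others then
        set err = 1;  -- handled, but cache pages were previously not released
end;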
================(Build #5236 - Engineering Case #377749)================
A REMOVE JAVA CLASS ... statement would sometimes have failed to remove the
Java class, as an obsolete version of the class had a foreign key reference
to the current version. This problem has been fixed by deleting all obsolete
versions of a Java class first, and then deleting the current version.
================(Build #5235 - Engineering Case #385530)================
If an ALTER VIEW statement was executed by the creator of the view, but that
user no longer had RESOURCE authority, a "permission denied" error would
have been reported. The view definition in SYSTABLE.view_def would still
have been updated, but the preserved source for the view in SYSTABLE.source
would not have been updated. This has been fixed; now no error will be reported
in this situation, and the preserved source will be updated.
================(Build #5235 - Engineering Case #385494)================
The IsNumeric() function would have returned an error when the parameter
was too long. It now returns FALSE, since the parameter can't be numeric.
================(Build #5234 - Engineering Case #385158)================
When the schema of a database object was modified by a DDL statement, any
existing views that referred to the modified object could potentially have
become invalid. However, the server did not detect any problems until the
view was subsequently referenced. In order to avoid such problems from happening,
it was necessary to recompile the affected views via the "ALTER VIEW ...
RECOMPILE" statement. If such recompilation was not done after dropping a
column that is referenced by a view for example, then the server could have
crashed in certain situations when the affected view was referenced. This
has been fixed; the server will now generate an error without crashing.
================(Build #5233 - Engineering Case #381771)================
When performing a backup to a tape device on Windows systems, the server
would have asked for a tape switch after 1.5 GB of data had been backed up,
even if the tape capacity was larger. This problem has been fixed. The server
will now use the entire capacity remaining on the tape.
As a workaround, the desired capacity can be specified using the "capacity="
option in the device string.
For example:
BACKUP DATABASE TO '\\.\tape0;capacity=20064829' ATTENDED ON etc.
The value specified is in K and is calculated by dividing 20,546,384,896
(which is the capacity in bytes reported by Windows for a 20 GB tape) by
1024.
================(Build #5230 - Engineering Case #382307)================
When run on Sun Solaris systems, the server could, in rare circumstances,
have crashed executing a CREATE DOMAIN statement. This has been fixed.
================(Build #5229 - Engineering Case #382022)================
A memory leak in the Java heap would have caused poor performance for Java
stored procedures that used internal JDBC to access the database, if they
were called repeatedly. When a procedure like this was called repeatedly without
disconnection, the Java Heap would slowly grow until it reached its maximum,
at which time the Java Garbage Collector would run every time a memory allocation
request was made, causing poor performance. The memory leak has been fixed.
================(Build #5227 - Engineering Case #381486)================
If a large number of statements was being used concurrently by applications
connected to a database, a "SQL Statement error" could be issued. The maximum
number of concurrent statements that can be handled by the server per database,
was limited to 20 * (number of connections to the database) + 16384. This
limit has now been changed to be approximately 20 * (number of connections
to the database) + 65534. The exact number is dependent on the page size
of the database cache.
================(Build #5227 - Engineering Case #377911)================
Using AWE on Windows 2003 would very likely have caused some or all of the
following:
1) a blue screen error 76 with text "Process has locked pages",
2) event log messages indicating that a "driver is leaking locked pages",
3) ASA fatal errors indicating the reads or writes were failing with the
error code 1453 (ERROR_WORKING_SET_QUOTA), and/or
4) other generic fatal read/write errors
Microsoft has fixed this problem in Service Pack 1 of Windows 2003. It is
our understanding that no fix will be made by Microsoft prior to the release
of Service Pack 1.
In order to prevent these serious consequences the database server can no
longer be started on Windows 2003 pre-SP1 while using AWE. Any attempt to
do so will result in a startup error "Windows 2003 does not properly support
AWE caching before Service Pack 1".
At the time this description was written there was no existing Microsoft
Knowledge Base (KB) article describing this issue.
================(Build #5226 - Engineering Case #382345)================
Attempting to autostart a database with an invalid DatabaseSwitches connection
parameter could have caused the server to hang. If the server was also being
autostarted, the connection attempt could have hung. If the server was already
running, the connection attempt would not hang, but the server may have hung
when shutting down. These problems have now been fixed.
================(Build #5226 - Engineering Case #381438)================
It was possible for column statistics to become incorrect, which could have
potentially resulted in poor access plans. The situation was rare, and was
likely to have no other visible symptoms. This problem has been fixed.
================(Build #5226 - Engineering Case #381264)================
When running on NetWare in a low-memory situation, if the server was unloaded
using the NetWare console, and there were active connections to the server,
it was possible that the server would abend. This has been fixed.
================(Build #5225 - Engineering Case #379077)================
When the LOAD TABLE statement is used with the CHECK CONSTRAINTS OFF clause,
it does not validate any check constraints on the table. However, the check
constraints were still built and annotated as part of the LOAD TABLE, and a
failure to build them resulted in the statement being rejected. The server
will now ignore the check constraints completely, by not attempting to build
them at all.
As an example, the following would have returned a "table not found" error
when the table CT1 referenced in the check constraint is not visible to the
user executing the LOAD TABLE statement:
grant dba,resource to dbowner
CREATE TABLE dbowner.CT1 ( c1 integer NOT NULL )
CREATE TABLE dbowner.T2 ( c1 integer NOT NULL, check(not c1 = any(select
c1 from CT1) ) )
LOAD TABLE dbowner.T2 ... CHECK CONSTRAINTS OFF ...
================(Build #5223 - Engineering Case #381001)================
An ASA remote database which had been synchronized, and then forcefully shut
down, may not have recovered properly. Further synchronization attempts may fail
with the error "Missing transaction log(s) ... ". This problem has now been
fixed.
================(Build #5222 - Engineering Case #381217)================
If a BACKUP DATABASE statement specified a long directory name for the target
directory and also included TRANSACTION LOG RENAME MATCH, the server could
have subsequently crashed. This has been fixed. A workaround is to use a
shorter directory name in the BACKUP statement.
================(Build #5220 - Engineering Case #381112)================
The Dateadd() and Datediff() functions would sometimes have produced incorrect
results when the time unit was Hours, Days, Weeks, Months or Years.
For example,
select dateadd(month, 119987, '0001/1/1'), dateadd(week,521722,'0001-01-01'),
datediff(month,'0001-01-01','9999-12-31')
produced the result:
****-04-01 00:00:00.000 1833-11-10 19:44:00.000 -11085
This has been fixed. Now, the result will be:
9999-12-01 00:00:00.000 9999-12-27 00:00:00.000 119987
================(Build #5220 - Engineering Case #380970)================
A CREATE PROCEDURE statement that did not qualify the procedure name with
a userid, would have failed with the error "Item 'procedurename' already
exists", even if the user did not own a procedure with the same name, but
a procedure with the same name existed in the user's namespace (ie owned
by a group of which the user was a member). This has been corrected.
================(Build #5220 - Engineering Case #380491)================
When adding days to a date using the Dateadd() function, it may have overflowed
without an error, returning an incorrect date value.
For example:
select dateadd(day, 2250318, '2005-02-16') would have returned '0000-01-00'.
This has been fixed; an error message will now be returned.
================(Build #5220 - Engineering Case #379951)================
On some new versions of the Windows CE operating system, it was possible
to get a 'Fatal Error: No such file or directory' message from the server,
when bringing the device out of STANDBY mode. This has been fixed.
================(Build #5219 - Engineering Case #380351)================
The server will no longer display command-line options in the console window
when any part of the command line is derived from an obfuscated file. It
will now display a command-line syntax error message with asterisks, instead
of the text that caused the syntax error, when obfuscation is used (ie Error
in command near "***").
================(Build #5217 - Engineering Case #380378)================
Malformed GRANT statements could have resulted in incorrect behaviour. This
has now been fixed.
================(Build #5217 - Engineering Case #379892)================
If a variable was assigned a long string value (>254 characters) from a table,
and the table was subsequently dropped or truncated, the server would likely
have crashed on the next use of the variable.
For example:
begin
declare local temporary table t(c varchar(32767));
declare rv varchar(32767);
insert into t values(repeat('*',5000));
select c into rv from t;
truncate table t;
select lcase(rv) from dummy;
end;
A work around is to concatenate a zero-length string to the column value, i.e.
use "select c || '' into rv from t" in the example above.
================(Build #5216 - Engineering Case #380206)================
The server could have left connections hanging indefinitely, when receiving
multi-piece strings on a heavily loaded system. There was a chance that this
problem could also have occurred under other conditions, such as when the
network connection between client and server introduced long delays. This
was more likely to occur when the number of active requests was larger than
the number of tasks that were servicing requests (ie -gn). This has been
fixed.
================(Build #5216 - Engineering Case #380084)================
The Dateadd() function could have produced incorrect results when the time
unit was milliseconds, minutes, hours, days, or weeks.
For example:
select dateadd(hour,365*24*7923,'0001-01-01 21:45:37.027'),dateadd(hour,69399300,'0001-01-01
21:45:37.027')
would have resulted in:
'****-09-18 00:00:37.027' '9998-07-01 00:00:37.027'
This has been fixed. Now, the results are:
'7918-09-29 00:00:37.027' '7918-01-15 00:00:37.027'
Similarly, the Datediff() function produced incorrect results when the
time unit was milliseconds, minutes, hours, days, or weeks.
For example,
select datediff(minute,'2005-02-03 21:45:37.027','6088-02-26 23:52:37.027')
resulted in a range error. This has been fixed. Now, the result is
2147483647
================(Build #5216 - Engineering Case #379896)================
Dropping a column from a table could have caused any UPDATE OF {column} triggers
on that table to fire incorrectly. This has been fixed.
================(Build #5215 - Engineering Case #379916)================
Attempting to insert into a local table, selected rows from a proxy table
when the remote server was down or not available, would likely have caused
the server to crash. Note that the crash will only occur if this INSERT-SELECT
performs the first connection attempt to the remote server. This problem
is now fixed and a proper error message will now get displayed.
================(Build #5213 - Engineering Case #379688)================
If an application was using Java in the database, or was using Remote Data
Access with a JDBC class, then there was a possibility that the server may
have lost a SQL error. This was most likely to occur if the SQL error was
set, but the SQL error did not get reported to the client prior to the VM
Garbage Collector running. Due to the asynchronous nature of the VM Garbage
Collector, this problem is highly unreproducible. The problem has been fixed.
================(Build #5213 - Engineering Case #379414)================
The Dateadd() function would have produced incorrect results when the value
to be added was close to the maximum or minimum 32-bit signed integer values
and the time unit was milliseconds. For example:
select dateadd(ms,2147483647,'2005-02-03 21:45:37.027')
would have resulted in:
2005-01-10 01:15:**.***
This has been fixed, now, the result is:
2005-02-28 18:17:00.674
The Datediff() function also produced incorrect results when the difference
was close to the maximum or minimum 32-bit signed integer values and the
time unit was milliseconds. For example:
select datediff(ms,'2005-02-28 18:17:00.675','2005-02-03 21:45:37.027')
would have resulted in a range error. This has also been fixed, the result
is now:
-2147483648
================(Build #5213 - Engineering Case #379371)================
If a CREATE TABLE statement was executed, and the table had a multi-column
primary key, the statistics of the first primary key column would not have
been used until the database had been restarted. This has been fixed.
================(Build #5213 - Engineering Case #379190)================
The Dateadd() function would have produced incorrect results when the value
to be added was close to the maximum or minimum 32-bit signed integer values
and the time unit was seconds. For example:
select dateadd(ss,2147483647,'2005-02-03 11:45:37.027')
would have resulted in:
1937-01-16 08:32:**.***
This has been fixed, now, the result is:
2073-02-21 14:59:44.027
The Datediff() function also produced incorrect results when the difference
was close to the maximum or minimum 32-bit signed integer values and the
time unit was seconds. For example:
select datediff(ss,'2073-02-21 14:59:45.027','2005-02-03 11:45:37.027')
would have resulted in a range error. This has also been fixed, the result
is now:
-2147483648
================(Build #5213 - Engineering Case #378835)================
The INSERT ... ON EXISTING UPDATE statement updates an existing row with
the new column values. If a column list had been specified, then in addition
to modifying the specified columns, the statement also modified columns with
their default values. Now, the server will no longer update default columns
unless explicitly asked to.
The following describes the new server behaviour (see the sketch after this
list):
1. When the row does not exist, the new row is inserted as per the
   usual rules of the INSERT statement.
2. If the row exists, and ON EXISTING SKIP is specified, no changes
   are made to the row.
3. If the row exists, and ON EXISTING UPDATE is specified, the row is
   updated as per the following rules:
   (a) All columns that have been explicitly specified in the
       INSERT statement are updated with the specified values.
   (b) Columns with defaults that are meant to be changed on an
       UPDATE are modified accordingly. These special defaults include
       "DEFAULT TIMESTAMP", "DEFAULT UTC TIMESTAMP", and
       "DEFAULT LAST USER".
   (c) Columns with other defaults are not modified, unless some
       of these are explicitly mentioned with a non-default
       value in the INSERT statement, in which case these columns
       are modified as per rule 3(a) above.
   (d) Computed columns are re-evaluated and modified using the new row.
   (e) Any other columns are left unchanged.
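A minimal sketch of rule 3 (table and column names are assumed for
illustration, not taken from the original case):
create table T ( pk int primary key,
                 val int,
                 cnt int default 0,
                 modified timestamp default timestamp );
insert into T ( pk, val ) values ( 1, 10 );
-- The row with pk = 1 exists, so val is updated (rule 3a), the DEFAULT
-- TIMESTAMP column is refreshed (rule 3b), and cnt is left at its current
-- value because its default is not an on-update default (rule 3c).
insert into T ( pk, val ) on existing update values ( 1, 20 );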
================(Build #5213 - Engineering Case #378742)================
If an item in the GROUP BY clause was an alias reference, that appeared more
than once, then the statement would have failed with the error "Function or
column reference to '???' must also appear in a GROUP BY". This has been
fixed.
The workaround is to remove the repeated item.
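A hypothetical query of the kind that incorrectly failed (table and column
names assumed):
select x + 1 as a
from T
group by a, a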
================(Build #5210 - Engineering Case #378243)================
Repeatedly calling a stored procedure that performed an INSERT, UPDATE or
DELETE into a proxy table, would likely have caused a server crash. This
problem has been fixed.
================(Build #5210 - Engineering Case #378242)================
If a batch containing a call to an external procedure was executed, and the
external procedure was subsequently canceled, the batch would have continued
execution, instead of being canceled as well. This problem has been fixed.
================(Build #5210 - Engineering Case #378026)================
Executing a query involving a column for which the server estimate for NULL
selectivity had become invalid (ie greater than 100%), could have caused
the server to crash. The server will now deal with this situation without
crashing. The problem can be rectified by recreating the affected column
statistics using the CREATE STATISTICS statement.
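For example, assuming the affected column is col1 of table T (both names are
hypothetical), the statistics can be recreated with:
CREATE STATISTICS T ( col1 );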
================(Build #5209 - Engineering Case #377433)================
If a jConnect or Open Client application made several requests to the server
using many host variables, but didn't open any cursors, and then attempted
to use a jConnect or Open Client cursor with host variables, then the server
would likely have crashed. This problem has been fixed.
================(Build #5207 - Engineering Case #377566)================
Under rare circumstances, when executing an UPDATE statement, the server
could have created empty histograms, and/or corrupted the selectivity of
IS NULL predicates. This problem has now been resolved.
================(Build #5206 - Engineering Case #370899)================
Several race conditions in the server while starting and stopping databases
have now been fixed:
- The server could have autostopped a database when it should not have,
or not autostopped a database when it should have.
- In rare timing dependent cases, the server could have deadlocked, asserted
or possibly crashed, when a database was starting up or shutting down.
Also in rare timing dependent cases, the server could have asserted or possibly
crashed if HTTP or HTTPS requests were made to a database that was starting
up or shutting down.
================(Build #5205 - Engineering Case #376977)================
If an application's operating system used a multibyte character set, which
was different from the character set of the database being unloaded by the
Unload utility dbunload, then dbunload could have generated an invalid reload.sql
script, and dbunload -an could have failed with a syntax error. Note that
dbunload -an turns off character set translation so the character set used
by dbunload in that case is the same as the database character set. For example,
running dbunload -an on a Chinese (cp936) machine to unload and recreate
a UTF8 database could have failed with a syntax error, or could have crashed.
This has been fixed.
================(Build #5204 - Engineering Case #376742)================
If the sybase_sql_ASAUtils_retrieveClassDescription() function was called
with a very long class name, a server crash could have occurred. This has
been fixed.
================(Build #5204 - Engineering Case #376699)================
On Windows 95, 98 or ME, the build number of the operating system was displayed
incorrectly.
For example:
"Running on Win98 build 67766222"
The correct OS build number is now displayed.
================(Build #5203 - Engineering Case #376608)================
If an Open Client application opened a cursor which caused the warning: "cursor
options changed", the application would have failed to open the cursor. This
problem has now been fixed. There are situations where Open Client applications
are not expecting warnings, so certain warnings that are known to not be
handled are suppressed, while other warnings are sent as the client actually
expects them. The "cursor options changed" warning has been added to this
list of warnings not to be returned to Open Client applications.
================(Build #5203 - Engineering Case #376606)================
Creating a COMMENT on a local temporary table would have caused the server
to fail with assertion 201501 - "Page for requested record not a table page
or record not present on page".
Example:
declare local temporary table temp1(c1 int);
comment on table temp1 is 'my comment';
Now an error is returned when attempting to add a comment to a local temporary
table.
================(Build #5203 - Engineering Case #368540)================
When used in Java Stored Procedures, cursors and prepared statements were
left open until the connection disconnected. If called repeatedly, they could
accumulate until a "resource governor exceeded" error occurred. This has been
fixed.
================(Build #5201 - Engineering Case #376019)================
Executing queries containing the GROUP BY clause could have caused the server
to crash. The changes for Engineering Case 332010, and a related change for
case 363861, introduced the problem, which has now been fixed.
================(Build #5200 - Engineering Case #375757)================
If a BACKUP or RESTORE statement was executed from the Open Client Isql utility,
while the backup.syb file was marked as read-only, the server could have
crashed. This has been fixed.
================(Build #5198 - Engineering Case #375236)================
Under rare situations, calls to functions that took string parameters, could
have crashed the server. This was only a problem on Unix systems, and has
now been fixed.
================(Build #5197 - Engineering Case #375084)================
If a request-level log contained host variable information for a TDS connection,
the system procedure sa_get_request_times would not have recorded the host
variable information in the satmp_request_hostvar table. This has been fixed.
================(Build #5197 - Engineering Case #374988)================
An INSERT statement, using the ON EXISTING clause to insert the result set
of a query involving a remote table into a local table, would have failed
with a syntax error. The server will now execute these statements correctly.
For example, instead of generating a syntax error, the following will cause
table 'bar' to contain one row:
CREATE SERVER asademo CLASS 'asaodbc' USING 'driver=Adaptive Server Anywhere
9.0;dbn=asademo';
CREATE TABLE foo(c1 int) AT 'asademo...';
create table bar( c1 int primary key );
insert into foo values(1);
insert into foo values(1);
insert into foo values(1);
commit;
insert into bar on existing skip select * from foo;
select * from bar
================(Build #5197 - Engineering Case #374976)================
The debugger would have shown all connections on the server, instead of only
showing those connections to the database that the debugger was connected
to. This has been fixed.
================(Build #5194 - Engineering Case #374452)================
If a proxy table to a DB2 table with a CLOB column, was used in a query,
then selecting that CLOB column would have failed with an unsupported datatype
error. This problem has been fixed.
================(Build #5191 - Engineering Case #373613)================
An obsolete Java class could have caused the error "-110 - 'Item ... already
exists'", when attempting to install a new version of a Java class previously
removed. This has been fixed.
================(Build #5191 - Engineering Case #373462)================
If a CREATE TABLE statement failed, for example because of duplicate column
names, and no commit or rollback was executed so far, the next attempt to
execute a CREATE TABLE statement, on any connection, would have crashed the
server or caused assertion failure 102801. This has now been fixed.
================(Build #5190 - Engineering Case #372882)================
If a view was created with an owner, like "create view dba.V1 as select c1
from T1", the preserved source of this view (column SOURCE in SYSTABLE) would
have contained a space character following the owner; e.g. "create view dba
.V1 as select c1 from T1".
As well, if a view was created with a "select * ...", the view definition
(column VIEW_DEF in SYSTABLE) was unparsed without the space between the
"select" and "*"; e.g. "select* ...".
Neither of these errors caused problems in ASA, but did cause problems for
Powerdesigner. Both issues have been fixed.
================(Build #5189 - Engineering Case #373028)================
Stopping a server while a database was in the process of either starting
or stopping, could have caused incorrect behaviour, such as the database
requiring recovery the next time it is started, or the server asserting,
crashing or hanging. Now, server shutdown waits for databases which are not
active to finish starting or stopping before shutting down, and ensures
that a database is not stopped twice.
================(Build #5188 - Engineering Case #373039)================
An attempt to create two user-defined types, whose names were the same except
for case, in a case sensitive database was permitted. This now results in
an error, since these names should always be case insensitive.
Also, dropping a user-defined type required the name to have matching case,
in a case sensitive database. This is no longer required.
================(Build #5186 - Engineering Case #372605)================
If an application, connected to a server via jConnect, cancelled a request
or closed a JDBC statement, the cancel or close could have failed and/or
dropped the connection entirely. This problem has been fixed.
================(Build #5185 - Engineering Case #372481)================
The estimate of the rate at which pages are being dirtied has been made less
pessimistic (a smoothed version of the estimates used in 7.x). Also, on
32 bit Windows systems, the server now measures the random write times rather
than using the cost model estimates, as this caused the estimates to be off
by a factor of 50 in some cases.
This is a further performance improvement to the issue originally addressed
by Engineering Case 355123.
================(Build #5185 - Engineering Case #372122)================
Engineering Case 304975 added support for handling UUID/GUID columns in proxy
tables to remote servers. Unfortunately, that change had the side effect
of disallowing creation of existing proxy tables with smalldatetime columns.
The problem with the smalldatetime column has now been fixed.
================(Build #5184 - Engineering Case #355123)================
The server could have performed poorly relative to 7.x servers when doing
a long sequence of database inserts, updates or deletes. The server was spending
longer than necessary cleaning up the cache in preparation for a checkpoint.
This time has now been reduced. Also, current servers now estimate the recovery
time better. Thus the Recovery_time database option may need to be set to
a larger value in order to have the server more closely match the value the
7.x server would have used.
================(Build #5181 - Engineering Case #370421)================
If the ROUND() function rounded a numeric value, the resulting value may
not have fit into the original NUMERIC data type's precision. For example,
the constant 9.995 is of type NUMERIC(4,3). The result of ROUND(9.995,1)
is 10.000, which does not fit into NUMERIC(4,3). As a result, the numeric
value generated by the ROUND() function could have been invalid, and a
conversion of this numeric value to a string could have returned '?'.
This problem has been fixed. If the numeric value passed to ROUND() is a
constant, the resulting data type's precision is increased by one (NUMERIC(5,3)
in the above example). If it is not a constant and the resulting value does
not fit, then a SQLE_OVERFLOW_ERROR is generated if the option Conversion_error
is set, otherwise NULL is returned.
================(Build #5180 - Engineering Case #343581)================
When a row was inserted into a table with a COMPUTE or CHECK clause that
contained a correlated subselect, where the outer reference was to a column
of the base table, the server may have crashed. This has been fixed.
In the example below, the outer reference T.a is used in a subselect of
the COMPUTE clause for the column T.b:
create table T(
a int,
b char(10) not null COMPUTE (left( '0001', (select
max(row_num) from rowgenerator where row_num = T.a )) ))
insert into T values (1, '1')
================(Build #5179 - Engineering Case #371032)================
If a query referenced both proxy and local tables, or proxy tables from different
servers, then it will have to be executed in 'no passthru' or 'partial passthru'
mode. If such a query also contained column references of the form 'user.table.column',
then the query would have failed with error -845 "Owner '<owner name>' used
in qualified column reference does not match correlation name '<table name>'".
This problem has now been fixed.
================(Build #5179 - Engineering Case #370456)================
Executing a VALIDATE TABLE statement, and using the WITH EXPRESS clause,
(or dbvalid -fx), would have failed with the error "Not enough memory to
start", if the currently available cache space was not large enough. If cache
resizing is possible, the server will now try to increase the cache size
to the amount required.
================(Build #5179 - Engineering Case #370071)================
When the BACKUP DATABASE TO statement failed and returned an error, (for
example if the location for the archive file was not writable), then subsequent
BACKUP DATABASE TO statements that failed would have caused the server to
fail with assertion 104400 (a stack overflow) on Solaris 8 or 9 systems.
This has been fixed.
================(Build #5176 - Engineering Case #370312)================
A query in a procedure or batch, with an expression that involved remote
tables and a unary minus or simple cast operator, would have failed with
the error:
ASA Error -823: OMNI cannot handle expressions involving remote tables
inside stored procedures.
This problem has now been fixed, so that these operators now work in expressions
involving remote tables.
================(Build #5175 - Engineering Case #370045)================
Insert performance on systems with three or more processors would have been
much poorer than on single processor systems. The drop in performance would
have been more noticeable as the number of processors increased (and was likely
even more noticeable on Unix systems). This problem has now been corrected.
================(Build #5171 - Engineering Case #369410)================
If a stored procedure was dropped and then another stored procedure with
the same name was immediately created, users who had permission to access
the first procedure, and had already called it, would still have been able
to access the second procedure, even if they had not explicitly been given
permission, until the next time the database was stopped. This has been fixed.
================(Build #5170 - Engineering Case #369234)================
The server may have occasionally appeared to temporarily hang with CPU usage
at 100%. This would have occurred when there were only a few pages of the
server's cache available for reuse. The server would have continually reused
these few pages rather than immediately grow the cache. This problem has
been corrected.
================(Build #5170 - Engineering Case #369122)================
The server may have exhibited poor performance if many connections tried to
concurrently truncate a global temporary table. This was due to each connection
attempting to acquire an exclusive lock on the global temporary table definition.
Since each connection already has a pointer to the table definition, acquiring
an exclusive lock is no longer done.
================(Build #5170 - Engineering Case #368236)================
In rare circumstances, if a database which required recovery was autostarted,
the server could hang with the server window still minimized. One situation
where this could have occurred was when the database had a "disconnect" event.
A workaround is to start the database manually first to allow the database
to recover, and then shut down this engine.
This issue has been fixed.
================(Build #5168 - Engineering Case #368251)================
The server would have failed to return the result set under certain circumstances.
One such situation was when the option Row_counts was set to 'ON' and the
query access plan had an indexed sort node at the top. This problem has now
been fixed.
================(Build #5167 - Engineering Case #368551)================
The server could have crashed when executing Java code. This has been fixed.
================(Build #5167 - Engineering Case #366682)================
A query that satisfied the following conditions would have incorrectly failed
with the error "Invalid use of an aggregate function":
- it used an ANY, ALL, IN, or NOT IN subquery
- the subquery used a grouped derived table or view
- the derived table or view aliased an aggregate function
- the alias was used in another column of the derived table or view
For example:
select 1
where 1 not in ( select b from (select sum(1) as b, b + 1 as c) dt )
This problem has now been fixed.
================(Build #5165 - Engineering Case #368127)================
If a remote server was created using one of the Remote Data Access ODBC classes,
opening a cursor on a proxy table from that server would have leaked about
8 bytes of memory for each remote column. Memory allocated at cursor open
time, to hold the indicator for each column, is now freed when the cursor
is closed.
================(Build #5165 - Engineering Case #368053)================
If a client application terminated while the last connection to the server
was in the process of being disconnected, the server may not have autostopped
when it should have. In these cases, the server icon and window would still have
been active, and the database would have been stopped, but no connections
could have been made to the server. Pressing shutdown would have stopped
the server though. This problem has now been corrected.
Note, this scenario could have occurred in a multithreaded application if
all of the following conditions were true:
- the server was autostarted by a multithreaded application
- the main thread of the application signaled a child thread to shut down
- the child thread did a disconnect as part of shutting down
- the main thread did not wait for the child thread to complete before ending
This has been fixed so that the server will correctly autostop.
================(Build #5164 - Engineering Case #367935)================
When run on Unix systems, the server could have crashed when a non-DBA user
was connected, if auditing was enabled. This has been fixed.
================(Build #5164 - Engineering Case #367688)================
Support for textual options to the Transact-SQL statement SET TRANSACTION
ISOLATION LEVEL, have been added for compatibility with Sybase ASE and Microsoft
SQL Server. Applications can now issue the following variants:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
which correspond to setting the isolation level of the connection to 0,
1, 2, or 3 respectively.
================(Build #5164 - Engineering Case #366401)================
Rebuilding databases on Unix systems, using the Unload utility dbunload with
the -ar or -an command line options, would have failed during the rebuild
operation, if the source database had table or row constraints that specified
stored procedures. This has been fixed.
================(Build #5163 - Engineering Case #367716)================
The userid 'dbo' was unable to use EXECUTE IMMEDIATE to execute a string
containing a multi-statement batch. This restriction has now been removed.
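For illustration, a hypothetical multi-statement batch of the kind that
previously failed when executed as dbo:
execute immediate 'create table t1( c1 int ); insert into t1 values( 1 );';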
================(Build #5162 - Engineering Case #367689)================
When running on NetWare systems, executing complex queries could have caused
the server to abend with CPU Hog Timeout. This has been fixed by adding more
yields, and checks for stack overflows, in the optimizer.
================(Build #5162 - Engineering Case #367663)================
The server could have failed to drop a temporary table on a database opened
read-only. This would only have occurred if the temporary table was declared
using Transact-SQL syntax, (ie "#table_name"). This has been fixed.
================(Build #5162 - Engineering Case #367661)================
Executing an INSERT ... ON EXISTING UPDATE statement could have caused a
deadlock in the server, if another transaction was updating the table that
was being modified, and a checkpoint (or DDL statement) was pending. This
has been fixed.
================(Build #5162 - Engineering Case #363767)================
Deleting or updating a large number of rows could have taken longer than
a comparable operation done with a server from version 8.0.1 or earlier.
This would only have been observed when using a database created with version
8.0.0 or later. This has been corrected.
================(Build #5161 - Engineering Case #367342)================
If the first byte of the DELIMITED BY string for a LOAD TABLE statement was
greater than or equal to 0x80, the LOAD TABLE statement would not have recognized
any delimiters in the input file. This is now fixed.
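For illustration (the file name and delimiter byte are assumed), a statement
of this form would previously have found no delimiters:
LOAD TABLE t1 FROM 'data.txt' DELIMITED BY '\xa7';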
================(Build #5160 - Engineering Case #367252)================
If the get_identity() function was used to allocate an identity value for
a table, but the table itself was not modified by the current connection,
or any other connection, then the value of the SYSCOLUMN.max_identity column
was not updated at the next checkpoint. If the database was shut down and
restarted, get_identity() would then have re-used values previously generated.
This has been fixed.
Note that the use of an empty table having an autoincrement column, together
with get_identity(), may still have resulted in values being re-used if the
database was not shut down cleanly and values were allocated since the last
checkpoint. Depending on how the values were used, it may have been possible
to correct the starting value in a DatabaseStart event by calling sa_reset_identity()
with the next value to use. For example:
declare maxval unsigned bigint;
set maxval = (select max(othercol) from othertab);
call sa_reset_identity('IDGenTab', 'DBA', maxval);
================(Build #5160 - Engineering Case #365953)================
If a user-defined function contained a COMMIT, calling the function in a
SELECT statement within a batch or procedure would have caused the cursor
for the batch or procedure to be closed if the cursor was not declared WITH
HOLD. This may have resulted in unexpected error messages like "Column '@var'
not found". Now these cursors will not be closed.
================(Build #5159 - Engineering Case #367116)================
Executing a LOCK TABLE ... IN EXCLUSIVE MODE statement on a table did not
prevent other transactions from subsequently obtaining exclusive locks on
rows in the table when executing INSERT ... ON EXISTING UPDATE statements,
although it did prevent explicit UPDATE statements from subsequently
updating rows. This could have resulted in applications deadlocking unexpectedly.
This has been fixed.
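A minimal sketch of the scenario (table name and values assumed):
-- Connection 1:
lock table T in exclusive mode;
-- Connection 2 could previously still obtain exclusive row locks on T with:
insert into T on existing update values( 1, 'x' );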
================(Build #5159 - Engineering Case #366920)================
Calling the DATEPART() function, with the date-part CalWeekOfYear, would
have returned the wrong week number if the year started with a Friday, Saturday
or Sunday and the day of the date-expression passed was a Sunday, but not
the very first one. For example, DATEPART( cwk, '2005/01/09' ) would have
incorrectly returned 2 instead of 1. This has now been fixed.
================(Build #5159 - Engineering Case #366552)================
When making a remote procedure call to a remote server whose class was either
ASAJDBC or ASEJDBC, if the remote procedure was a Transact-SQL procedure,
with either an INOUT or OUT argument that returned a result set, then it
was likely that the rows in the result set will not have been returned. The
INOUT or OUT parameters were incorrectly being fetched first, prior to fetching
the result set. In JDBC, fetching the value of an OUT or INOUT parameter
will close all result sets. Now the values of OUT or INOUT parameters are
fetched only when the procedure has completed execution.
================(Build #5159 - Engineering Case #365603)================
Making an RPC call, or executing a FORWARD TO statement, may have failed
to return a result set, even though one was returned by the remote server.
Note that this problem only happened when the Remote Data Access class was
either ASAJDBC or ASEJDBC. This has been corrected.
================(Build #5158 - Engineering Case #366562)================
If the subsume_row_locks option is on and a table T is locked exclusively,
the server should not obtain row locks for the individual rows in T when
executing an UPDATE. This was not the case if T was updated through a join
(or if T had triggers, computed columns, etc.), or if T was modified via
a keyset cursor. Now, no locks are acquired in this situation.
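A minimal sketch of the join-update case (table and column names assumed):
set option subsume_row_locks = 'On';
lock table T in exclusive mode;
-- This update through a join previously still acquired individual row locks
-- on T; now it does not:
update T set T.c1 = S.c1 from T join S on T.id = S.id;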
================(Build #5158 - Engineering Case #366233)================
If a stored procedure contained a statement that performed a sequential scan
of a global temporary table, executing the statement could have caused the
server to crash. This problem would have occurred if the following conditions
held:
- The statement plan was cached
- The table was declared as "ON COMMIT DELETE ROWS"
- The table had more than 100 pages when the plan was cached
- COMMIT was called before the statement was executed
This problem has been fixed. The crash could be avoided by setting the option
'Max_plans_cached' to 0.
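For example, the workaround could be applied as:
SET OPTION PUBLIC.Max_plans_cached = 0;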
================(Build #5158 - Engineering Case #365453)================
Calling a wrapper procedure for a Java class which returned a result set
would have leaked memory and could have crashed the server. This has now
been fixed.
================(Build #5157 - Engineering Case #366532)================
Computed columns with JAVA expressions may have been incorrectly parsed, causing
the error: "ASA Error -94: Invalid type or field reference". This problem
occurred if the computed column belonged to a table B, and there existed
another table A, used in the same statement, having a column with the same
name as the JAVA class name. This has been fixed.
For example:
The following query returned the error "ASA Error -94: Invalid type or field
reference":
select * FROM A WHERE A.ID NOT IN ( SELECT B.ID FROM B );
Table B has the computed column "EntityAddressId" referencing the JAVA class
"Address", and table A has a base table column named "Address". Note that
the computed column doesn't have to be referenced in the query.
CREATE TABLE A
(
ID int,
"Address" varchar (10)
);
CREATE TABLE B
(
ID int,
"EntityAddressId" numeric(10,0) NULL COMPUTE (Address >> FindAddress(0,
'/', 0)),
);
================(Build #5157 - Engineering Case #366364)================
Database validation, using either the Validation utility dbvalid, or executing
the VALIDATE DATABASE statement, could have failed to detect some corrupted
LONG VARCHAR columns. Assertion 202000 is now generated when a corrupted
LONG VARCHAR is encountered.
================(Build #5157 - Engineering Case #366282)================
The CREATE SCHEMA statement was not being logged correctly in the transaction
log if it contained at least one CREATE VIEW statement. The statement logged
was the last CREATE VIEW statement, instead of the entire CREATE SCHEMA statement.
As a consequence, the CREATE SCHEMA statement was not recoverable. Also,
recovery could have failed with a "Failed to redo an operation" assertion,
if the logged CREATE VIEW statement could not be executed, e.g., because
it referred to a table created within the original CREATE SCHEMA statement.
This problem has been resolved.
================(Build #5157 - Engineering Case #366167)================
Database recovery could have failed when mixing Transact-SQL and Watcom SQL
dialects for Create/Drop table statements. This has been fixed. The following
example could have caused database recovery to fail if the server crashed
before the next checkpoint.
create global temporary table #test (col1 int, col2 int);
drop table #test;
create global temporary table #test (col1 int, col2 int);
drop table #test;
A workaround is to only use #table_name for creation of local temporary
tables.
================(Build #5156 - Engineering Case #366150)================
The server could have become deadlocked when executing the system procedure
sa_locks. For this to have occurred, two connections must have been issuing
sa_locks calls concurrently, or the user definition for the owning connection
was not in cache, which is not a likely occurrence. This problem has been
fixed.
================(Build #5156 - Engineering Case #365147)================
When using the -m command line option (truncate transaction log after checkpoint),
if a transaction log file was being actively defragmented or virus scanned
at the time a checkpoint occurred, then the server could have failed with
assertion 101201. The operating system will not allow the file to be recreated
until the virus scan or defragmentation has completed. As a result, the server
will now wait and retry the operation several times. A workaround would be
to remove the transaction log file from the list of files that are actively
scanned or defragmented.
================(Build #5156 - Engineering Case #358350)================
If a table was part of a publication, altering a trigger on that table could
have caused the server to fail with assertion 100905 on a subsequent INSERT,
UPDATE or DELETE. For this to have occurred, the table must have been referenced
in a stored procedure and the procedure must have been called at least once
before the ALTER and once after. This has been fixed.
================(Build #5154 - Engineering Case #365730)================
If a server attempted to start what appeared to be a valid database file,
and the database failed to start for any reason, then unexpected behavior
could have occurred on future requests to the same server. The unexpected
behavior could have included server crashes, assertions, and possibly database
corruption. This has been fixed.
================(Build #5153 - Engineering Case #365480)================
When running on NetWare systems, if the ASA server failed with a fatal error,
the NetWare server would have abended. This has been fixed.
================(Build #5153 - Engineering Case #365188)================
If a view was defined with the "WITH CHECK OPTION" clause and had predicates
using subqueries, then opening an updatable cursor or executing an UPDATE
statement might have caused a server crash. This has been fixed.
For example:
create view products
as select p.* from
prod as p
where p.id =
any(select soi.prod_id from sales_order_items soi KEY JOIN sales_order
so
where so.order_date > current date )
with check option
The following INSERT statement would have crashed the server:
insert into products (id) values ( 1001 )
================(Build #5151 - Engineering Case #365038)================
If a statement that modified a remote table was executed within a savepoint,
no error was given, even though remote savepoints are not supported. Now,
if an UPDATE, DELETE or INSERT statement attempts to modify a remote table
while inside a savepoint, it will fail with the error "remote savepoints
are not supported". Note that remote procedure calls within a savepoint will
also fail with this error, as there is a chance the remote procedure will
modify tables on the remote database.
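A minimal sketch (the proxy table name is assumed):
begin
  savepoint sp1;
  -- This now fails with "remote savepoints are not supported":
  update remote_t set c1 = 1;
end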
================(Build #5149 - Engineering Case #362895)================
In some cases, the selectivity estimate for a predicate '<column> IS NULL'
could have been set incorrectly to 0. This could have led the query optimizer
to select poor execution plans, for example, selecting an index to satisfy
an IS NULL predicate instead of another, more selective predicate.
For this problem to have occurred, a query must have contained a predicate
of the form:
T.col = <expr>
The expression <expr> must have been an expression whose value was not known
at query open time. For example, <expr> could have been a column of another
table, or it could have been a function or expression that was not evaluated
at optimization time. The predicate must have been the first predicate evaluated
for the associated table scan, and the table T must have been scanned once
with the value of <expr> being NULL. In these circumstances, the selectivity
of 'T.col IS NULL' would be incorrectly set to 0. This has been fixed.
If an application opened a cursor over a query that contained an index scan
with a single column in the index being searched for equality, the selectivity
estimate could have been lowered incorrectly if the application scrolled
the cursor using absolute fetches and did not visit all of the rows of the
result set but ended the scan after the last row of the result set. This
problem would have resulted in selectivity values being stored that were
lower than expected, and could have led the query optimizer to select poor
execution plans by picking an index on this column instead of a better index.
This problem has also been fixed.
================(Build #5148 - Engineering Case #365508)================
When used with multiple Java threads and synchronized Java methods, Java
stored procedures could have produced unexpected results. This has now been
fixed.
================(Build #5147 - Engineering Case #364246)================
A reference to a column not in the GROUP BY list, when made from an IN condition,
was not being reported as an error.
For example:
select if emp_id in (1) then 0 else 1 endif
from employee
group by state
This problem is now fixed.
================(Build #5145 - Engineering Case #363861)================
A query such as the following:
select 'a', 'a' from employee group by 'a', 'a'
where a constant string appears in both the select list and the GROUP BY
clause, could have caused the server to crash. This has been fixed.
================(Build #5143 - Engineering Case #363251)================
When using the FORWARD TO statement in interactive mode (i.e. issuing a "FORWARD
TO <server>" statement first and then issuing individual statements, all
of which are to be executed on the remote server), there was a possibility
that one, or all, of the statements would have been executed locally instead
of remotely. There was also a possibility that the statements would not have
been executed at all. This problem was most likely to have occurred when
connected via jConnect, or if the remote server name had a space in it, or
any character that would require quoting. This problem has now been fixed.
================(Build #5143 - Engineering Case #362311)================
When estimating the selectivity of a predicate of the form "column = (correlated
subselects)", the server was treating the predicate as "column = (single
value)". This assumption could have lead to overly low selectivity estimates
resulting in poor plans. In reality, the correlated subselect can cause a
large number of values to be compared with the column, resulting in many
rows.
Now, the server will use a "guess" estimate of 50% for the selectivity of
these predicates.
================(Build #5142 - Engineering Case #363397)================
The server could possibly have crashed on shutdown, if the Java VM had been
used. This would have occurred if the VM needed to load additional classes
during shutdown. Now, failure to load a new class during shutdown is handled
by Java's exception mechanism, and the VM will still shut down.
================(Build #5140 - Engineering Case #361512)================
If a stored procedure contained a CREATE VIEW statement, the second execution
of the CREATE VIEW statement may have failed with a "column not found" error.
This has been fixed.
A workaround is to use EXECUTE IMMEDIATE to create the view.
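For example, inside the procedure (view and table names are assumed):
execute immediate 'create view V1 as select c1 from T1';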
================(Build #5140 - Engineering Case #359745)================
If a stored procedure declared a temporary table and returned a result set,
opening a cursor on the procedure would have made the temporary table visible
outside the procedure. This has been fixed.
================(Build #5139 - Engineering Case #362585)================
Queries that used a multi-column index could have returned incorrect results.
For this to have occurred, all of the following must have been true:
- The index must have been comparison-based.
- One of the first few columns being indexed must have been a short character
(or binary) column [the column must have been fully hashed]. This column
must not have been the last column being indexed.
- The query must have contained a comparison involving this column [say
with domain char(n)] and a string s with length > n whose n-byte prefix appeared
as a value in the column.
This problem has now been corrected.
================(Build #5139 - Engineering Case #362220)================
If another transaction attempted to query or modify a table while the fast
form of TRUNCATE TABLE was executing on the same table, the server could
have failed an assertion, and in some cases, possibly corrupted the database.
This was not likely to occur on single processor Windows platforms. This
problem has been corrected.
================(Build #5139 - Engineering Case #360885)================
Using variable assignments in a positioned update, (e.g update T1 set id
= @V1, @V1 = @V1 + 1 where current of curs), would have caused a server crash.
Now, variable assignments in a positioned update are supported.
================(Build #5138 - Engineering Case #362312)================
The restructuring of column statistics in the server could have caused memory
corruption, which could result in various symptoms, including server crashes
and assertions. The chances of this problem happening were remote. It could
only have occurred if the memory allocator returned the same memory as that
used in a previous invocation of the restructuring of the same histogram.
This problem has now been resolved.
================(Build #5138 - Engineering Case #354157)================
When starting a database "read-only" using the "-r" command line option,
the server could have failed with assertion 201851, if the database had been
created by ASA 8.0.0 or newer. This has now been fixed.
================(Build #5136 - Engineering Case #361999)================
Index statistics for SYSATTRIBUTE could have become out of date, resulting
in errors being found when running the Database Validation utility dbvalid.
This problem has now been resolved.
================(Build #5136 - Engineering Case #360237)================
Memory-intensive operations, such as a sort, hash join, hash group-by, or
hash distinct, could have caused the server to fail with a fatal "memory
exhausted" error, if they were executed in an environment where the operation
could not be completed entirely within the available memory. This issue affected
all platforms, and has now been fixed.
================(Build #5135 - Engineering Case #360694)================
The server could have deadlocked, and appear to be hung, if a transaction
yielded to a checkpoint (by context switching, waiting for a lock or waiting
for network I/O) after rolling back to a savepoint. This has been fixed.
================(Build #5133 - Engineering Case #361184)================
A query with a large WHERE clause containing the conjunction and disjunction
of many literals could have caused the server to hang with 100% cpu usage
and eventually run out of memory and fail with a "Dynamic Memory Exhausted"
message. This has been fixed.
================(Build #5132 - Engineering Case #360311)================
A query with a large number of OR'ed predicates (about 20,000 on Windows
systems) may have caused the server to crash.
For example:
select T2_N.b
from T1, T2, T2_N
where T1.a = T2.b and T2.b = T2_N.b and
( T1.a = 1 or T1.a = 2 or T1.a = 3 or .... or T1.a = 20000)
The number of OR conditions to cause the crash depended on the available
stack size. This problem has been fixed. Now these queries return an error
"Statement size or complexity exceeds server limits".
================(Build #5132 - Engineering Case #359741)================
If a synchronization was performed prior to changing the database options
Truncate_timestamp_values to ON and Default_timestamp_increment to a value
which would disallow timestamp values with extra digits of precision, the
next synchronization would have caused a server crash. The server will now
display an assertion indicating that an attempt to store an invalid timestamp
value in a temporary table was made. The options must be changed before the
first synchronization is performed.
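For illustration, the options might be changed as follows before the first
synchronization (the increment value shown is assumed):
SET OPTION PUBLIC.Truncate_timestamp_values = 'On';
SET OPTION PUBLIC.Default_timestamp_increment = 1000;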
================(Build #5130 - Engineering Case #356739)================
If a trigger on table T referred to a column of T that had been dropped or
renamed, then the server could have crashed when processing a query referring
to T after the server was restarted. For the crash to have occurred, the
referencing query must have been sufficiently complicated to allow predicates
to be inferred. The cause of the crash has been fixed, and other changes
have already made it impossible to rename or drop columns referenced by triggers.
================(Build #5129 - Engineering Case #366267)================
When the database option Ansi_close_cursors_on_rollback was set to 'ON',
the Validation utility dbvalid would have failed to validate all the tables
in the database. The error 'cursor not open' would have been displayed. This
has been fixed.
================(Build #5129 - Engineering Case #360455)================
When unloading or rebuilding a database, a non-clustered index may have
been recreated as a clustered index. This would have happened if there was
at least one table with a clustered index and subsequent unloaded table definitions
had a non-clustered index with the same index id as the clustered index.
This has now been fixed.
================(Build #5128 - Engineering Case #360464)================
If a grouped query had a base table's column T.A in its select list, and
the table T qualified to be eliminated by the join elimination algorithm,
then T.A might have been incorrectly renamed. This has been fixed.
For example:
The query below returned a result set which had the first column renamed
"A2":
create table T1 ( A1 int primary key);
create table T2 ( A2 int,
foreign key (A2 ) references T1(A1) );
select A1
from T1 key join T2
group by A1
Note that the query was rewritten by the join elimination algorithm into:
select A2
from T2
group by A2
Now, the query is rewritten into:
select A2 as A1
from T2
group by A2
A work around for this problem is to alias all the columns referenced in
the SELECT list with their own names.
For example, the query Q' below is not affected by this bug:
Q':
select A1 as A1
from T1 key join T2
group by A1
================(Build #5128 - Engineering Case #357965)================
Under certain conditions, SELECT DISTINCT queries with complex ORDER BY expressions
may have received an erroneous syntax error.
For example, the following query would have failed with SQLCODE -149:
Select distinct e.emp_lname + space(50) + '/' + e.emp_fname
from employee e, employee e2
where e.emp_id = e2.emp_id and e2.dept_id = 100 and (e.city = 'Needham'
or e2.city = 'Burlington' )
order by e.emp_lname + space(50) + '/' + e.emp_fname
This problem has been corrected.
================(Build #5124 - Engineering Case #359212)================
The debug server log, (output generated by the -z command line option), could
have contained extraneous "Communication function <function name> code <number>"
diagnostic messages. For example after stopping a database using dbstop.
Similarly there could have been extraneous instances of this diagnostic message
in the client LogFile.
These extraneous diagnostic messages have been removed. Please note that
this diagnostic message can still validly occur under some circumstances
and may be useful to help Technical Support diagnose other problems.
This change also prevents some spurious "connection ... terminated abnormally"
messages.
================(Build #5123 - Engineering Case #359242)================
The server could have crashed when attempting to get a db_property for a
database which is in the process of being started. When a database is being
started, a "stub" database object is added to the server's list of databases,
which has NULL pointers for a number of fields, including the statistics
counter array. Calling the db_property() function would have attempted to
get statistics for the stub database. This has been fixed by only getting
properties for active databases.
================(Build #5123 - Engineering Case #355292)================
Updating the version of jConnect to a newer version than the one shipped
with ASA (ie newer than 5.5), would likely have resulted in positioned
updates failing with an exception. Versions of jConnect newer than 5.5 support
the new status byte that was added to allow distinguishing between a NULL
string and an empty string. When performing positioned updates, jConnect
sends the KEY column values so that the row being updated can be uniquely
identified. This status byte was not supported for KEY columns, but the server
was still expecting it, resulting in a protocol error. This has been fixed.
================(Build #5120 - Engineering Case #357683)================
When an application closed a cursor, the server was not freeing the cursor's
resources before dropping the associated prepared statement or when the connection
ended. This caused problems for applications that open many cursors on the
same prepared statement. These applications would get errors when attempting
to open a cursor, such as "Resource governor for 'cursors' exceeded", if
the option MAX_CURSOR_COUNT was not set, or "Cursor not open". Now the cursor's
resources are freed when a cursor is closed.
================(Build #5119 - Engineering Case #358497)================
If a database with auditing enabled required recovery, the server may have
indicated during recovery that the log file was invalid. If an audit record
in the transaction log was only partially written, the audit record would
have appeared corrupt. This is now ignored if the partial audit record is
at the end of the log.
================(Build #5119 - Engineering Case #356216)================
The creation or execution of a stored procedure may have caused a server
crash if the parameter list contained the special values SQLCODE or SQLSTATE,
and in the procedure body a Transact-SQL variable was declared (ie variables
that start with @). This has now been fixed.
================(Build #5117 - Engineering Case #358040)================
In rare low-memory situations, the server could have crashed or quietly ended.
This has been fixed.
================(Build #5117 - Engineering Case #357689)================
A row limit (ie 'FIRST' or 'TOP nnn') could have been ignored on DELETE and
UPDATE statements. This was most likely to occur on smaller tables and for
very simple statements.
Example:
create table temp1 (nID int not null);
insert into temp1 (nID) values (1);
insert into temp1 (nID) values (1);
commit;
delete first from temp1; //deletes both rows
This problem has been corrected. A workaround is to attach a redundant ORDER
BY clause to the statement.
delete first from temp1 order by 1+1; //deletes just one row
================(Build #5117 - Engineering Case #357306)================
The server could have chosen an inefficient execution plan for a query, even
after freshly creating statistics for all tables involved in the query. The
poor plan was usually caused by poor join selectivity estimation. Now, the
server will have more information available after a CREATE STATISTICS is
executed, which should improve the quality of plans chosen.
================(Build #5117 - Engineering Case #356795)================
On Windows 95, 98 or ME, if a network server had both TCP/IP and SPX connections,
the server could have hung with 100% CPU usage. This has been fixed.
Note if using a network server, Windows NT, 2000, XP or 2003 are recommended
over Windows 95, 98 or ME, to ensure better performance and reliability.
================(Build #5117 - Engineering Case #356762)================
A non-fatal assertion failure: 105200 "Unexpected error locking row during
fetch" could have been reported when executing an outer join with a temporary
table on the null-supplying side of an outer join. This would have appeared
to an application as error -300 "Run time SQL error" or -853 "Cursor not
in a valid state". This has now been fixed.
================(Build #5117 - Engineering Case #356595)================
If a RAISERROR or PRINT statement contained a subselect in the format string
or in a PRINT expression, the server may have crashed or returned an error.
This has been fixed.
================(Build #5117 - Engineering Case #355965)================
When run on SMP systems using processors from Intel's P6 family, (as well
as Pentium 4 and XEON), the server could have hung when receiving multi-piece
strings via shared memory connections. This problem affected ODBC, OLEDB
and Embedded SQL clients as well. It has been fixed.
================(Build #5117 - Engineering Case #355831)================
Executing an ALTER TABLE statement which attempted to modify a column and
then drop the column in the same statement would have caused the server to
crash. Attempting to modify and drop a column in the same ALTER TABLE statement
will now generate the error "ALTER clause conflict". These changes must be
made with separate ALTER TABLE statements.
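For example (a sketch; table and column names are illustrative):
ALTER TABLE t1 MODIFY col1 INTEGER, DELETE col1; -- now reports "ALTER clause conflict"
Instead, use two statements:
ALTER TABLE t1 MODIFY col1 INTEGER;
ALTER TABLE t1 DELETE col1;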
================(Build #5117 - Engineering Case #355527)================
If a server goes down dirty (eg. due to a power failure), there can be a
partial operation at the end of the log. If such a log was applied to a database
by using the -a (apply named transaction log file) server command line option,
restarting the server using that database and log file (without -a) could
have caused the server to fail to start with the message "not expecting any
operations in transaction log". The problem would only have occurred if the
incomplete operation was the first operation of a new transaction and there
were no other transactions active after all complete operations had been
applied. The problem has been fixed by removing the partial operation from
the log after the log is applied (or recovery is completed).
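For example (file names are illustrative), the named log would be applied with:
dbeng8 mydb.db -a mydb.log
after which the server could be restarted normally with:
dbeng8 mydb.db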
================(Build #5117 - Engineering Case #355459)================
When sending or receiving multi-piece strings on a heavily loaded system,
the server could have deadlocked, causing a hang. This has been fixed. A
workaround would be to increase the number of tasks available to service
requests (-gn). Alternatively, a dba user could use a pre-existing connection
with the DEDICATED_TASK option set, to manually break the deadlock by cancelling
one or more executing requests.
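For example (values are illustrative), the number of tasks can be increased
at startup:
dbsrv8 -gn 40 mydb.db
and a dba connection can reserve a task in advance for breaking such deadlocks:
SET TEMPORARY OPTION DEDICATED_TASK = 'ON';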
================(Build #5117 - Engineering Case #355299)================
When the server ran databases created with ASA versions 4.x and earlier (or
databases upgraded from ASA versions 4.x and earlier), queries that made
use of index scans over fully hashed indexes could have returned incorrect
results. An optimization for index scans in older databases was incorrect.
This optimization has now been removed, so a drop in performance when using
older databases will likely be noticed. An unload/reload is recommended if
the resulting performance is not acceptable.
================(Build #5117 - Engineering Case #355098)================
An ALTER TABLE statement that added, modified or deleted a table's CHECK
constraint, a column's CHECK constraint or renamed a column, had no effect
on INSERT, UPDATE or DELETE statements inside stored procedures and triggers,
if the procedure or trigger was executed at least once prior to the ALTER
TABLE statement. This problem has been fixed.
================(Build #5117 - Engineering Case #355012)================
Calling any of the external mail routines (such as xp_sendmail or xp_startmail)
could have caused the server to crash. A problem with passing NULL parameters
has been fixed.
================(Build #5117 - Engineering Case #354381)================
When running on Windows 2003, the server could have crashed on startup, if
the machine had no TCP/IP address, or was unplugged from the network. This
has been fixed.
================(Build #5117 - Engineering Case #354096)================
The server would go into an infinite loop, with nearly 100 percent cpu usage,
when executing a query like the following:
select (select systable.first_page from systable where systable.table_id
= 1) as id0,
id0 as id1
from syscolumn
group by id1
The problem occurred under the following conditions:
- the query had a subselect in the select list or in the WHERE clause
- the subquery had an alias name ("id0" in the above query) and the alias
name was aliased by a second alias name ("id0 as id1" see above), so that
both alias names were syntactically identical
- the second alias name was part of a GROUP BY element
This problem has been fixed.
================(Build #5117 - Engineering Case #353753)================
Running the Validation utility dbvalid to validate a read-only database would
have caused the error:
A write failed with error code: (5), Access is denied.
Fatal error: Unknown device error
This has been fixed.
================(Build #5117 - Engineering Case #353336)================
If connections were being made concurrently which required a server to be
autostarted, in rare timing dependent cases, the server could have crashed.
This has now been fixed.
================(Build #5117 - Engineering Case #353334)================
Calling the system extended procedures xp_scanf, xp_sprintf or xp_startsmtp
could, in very rare circumstances, have caused the server to crash. These
procedures have now been fixed.
================(Build #5117 - Engineering Case #353026)================
Calling the system procedure "sa_get_request_times" could have caused a
server crash. This has now been fixed.
================(Build #5117 - Engineering Case #352793)================
If the database option Truncate_date_values was set to OFF before populating
a row containing a DATE column with a value including both date and time,
and the Truncate_date_values option was subsequently set back to its default
of ON, updating the row in any way which caused the row to be moved to another
page, would have resulted in an assertion failure 105400. This has been fixed.
A workaround is to set the option to OFF, manually update any DATE values
to eliminate the time component, then set the option to ON. A statement such
as the following could be used:
update t set datecol = date(datecol)
where datepart(hour,datecol)<>0
or datepart(minute,datecol)<>0
Normally this option should be left at its default setting.
================(Build #5117 - Engineering Case #352471)================
When the server was run on multi-processor Windows platforms, tasks for connections
where the database option Background_priority was set to 'On', would have
been scheduled by the OS such that only one was running at a time, even if
other processors were idle. This has been corrected.
================(Build #5117 - Engineering Case #351001)================
On CE devices, queries could have failed with the error "Dynamic memory exhausted",
if a Join Hash operator was used in an access plan and the server cache size
was too small. This has been fixed by disabling the Join Hash operator during
optimization on CE devices when the available cache has fewer than 2000 pages,
resulting in access plans that do not contain such joins.
================(Build #5117 - Engineering Case #350058)================
When run on Windows 95, 98, ME or NetWare, the server could have crashed
when receiving BLOBs over TCP/IP or SPX connections. The probability of this
crash was very slight and timing dependent. This has now been fixed.
================(Build #5117 - Engineering Case #349954)================
A new connection could have been refused, in very rare timing dependent cases,
when it should have been allowed. In order for this to have occurred the
network server must have been at its maximum number of licensed clients (Note,
each unique client address from a remote machine counts as one license),
and the last connection from a client machine must have been disconnecting
at the same time a new connection was being made at the same client machine.
This has been fixed so that the server calculates licenses used accurately.
================(Build #5117 - Engineering Case #349901)================
The server would have crashed during execution of a query that used an index,
if all of the following conditions were true:
- the index was a compressed B-tree index
- the index contained a character or binary key column
- the length of the search value for this key column was almost the page
size, or was longer
- the search value for this key column was not a constant
This has now been fixed.
================(Build #5117 - Engineering Case #349450)================
After recovering a database which used no transaction log file, shutting
down the server before modifying the database could have caused assertion
failures 201810 "Checkpoint log: the database is not clean during truncate"
or 201117 "Attempt to close a file marked as dirty". If the server was killed
or the machine or server crashed before the database was modified, then subsequently
checkpointed, the database could have been corrupt. Only databases created
with 8.0.0 or later are affected. This problem has now been corrected.
================(Build #5117 - Engineering Case #348751)================
If the server had multiple, memory intensive transactions running concurrently,
it may have erroneously failed with an 'out of memory' error. This would
only have occurred on multiprocessor systems. This has been fixed.
================(Build #5117 - Engineering Case #348610)================
Committing a transaction that deleted rows containing blobs, whose aggregate
size was larger than the current cache size, could have taken a very long
time. The time to do these deletes has been reduced significantly. As well,
blob columns created by servers containing this change, can be deleted even
more efficiently.
================(Build #5117 - Engineering Case #348512)================
When connected to the utility database, (utility_db), executing a SET OPTION
statement would have caused the next statement to fail with a "Connection
error". This has been fixed. SET OPTION statements will now return an error,
as they are not supported when connected to the utility_db. Subsequent statements
will work as expected.
================(Build #5117 - Engineering Case #347493)================
The server could have crashed on startup, if a large number of tasks were
specified, and they could not all be created due to lack of memory. This
problem was more likely to occur on Windows platforms using AWE, with 8.0.2
build 4076, or later. This has been fixed in the Windows server (a fix for
NetWare and Unix servers is to follow); the server will now fail with an
error indicative of excessive memory usage.
================(Build #5117 - Engineering Case #346991)================
Assigning the result of a string concatenation to a variable could have caused
the size limit of the variable to be exceeded. This would only have occurred
with the bar ( || ) operator. This has now been fixed; the concatenated string
is truncated to the maximum size.
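For example (a sketch), the following assignment now yields a value truncated
to 10 characters instead of exceeding the variable's declared size:
begin
declare s varchar(10);
set s = 'abcdefghij' || 'klmno';
end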
================(Build #5117 - Engineering Case #346886)================
If a query contained an expression in the select list using the built-in
function NUMBER(*), (such as NUMBER(*)+1000) or a non-deterministic function,
then a wrong answer could have been returned if the query also contained
an EXISTS style predicate (or ANY, ALL or IN), where the predicate was re-written
as a join with a DISTINCT operation. The wrong answer could have contained
more rows than expected or an incorrect value for the expression derived
from the NUMBER(*) or a non-deterministic function.
For example, the following query demonstrates the problem, depending on
the plan selected by the query optimizer:
select R1.row_num, rand(), number(*)+100
from rowgenerator R1
where exists ( select * from rowgenerator R2
where R2.row_num <> R1.row_num
and R2.row_num <= 2)
This problem has been fixed.
================(Build #5117 - Engineering Case #346766)================
The server could have faulted with an integer divide by zero error, when
executing a memory intensive query that caused the cache to grow dynamically
to the maximum allowed. This has now been fixed; a workaround is to disable
dynamic cache sizing (i.e. specifying -ca 0 on the command line).
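For example (engine and database names are illustrative):
dbsrv8 -ca 0 mydb.db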
================(Build #5117 - Engineering Case #346715)================
If a Watcom-SQL stored procedure or trigger contained a statement like:
execute var;
a syntax error should be reported, but was not. Instead, when the procedure
or trigger was executed, the message "Invalid prepared statement type" was
reported. A syntax error will now be given when the procedure or trigger
is created. If the intent of the original statement was to treat the contents
of the string variable "var" as a statement to be executed, EXECUTE IMMEDIATE
must be used instead.
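For example (a sketch; the statement text is illustrative):
begin
declare var varchar(255);
set var = 'update t1 set col1 = col1 + 1';
execute immediate var;
end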
================(Build #5117 - Engineering Case #346604)================
Calling the system procedure sa_locks, could have caused a deadlock in the
server. This problem was more likely to occur on multiprocessor and Unix
systems. This has now been fixed.
================(Build #5117 - Engineering Case #346507)================
In databases using 1253ELL (Greek) collation, identifiers containing Greek
letters required double-quotes because the Greek letters were not properly
identified as alphabetic. This has been corrected, so that Greek letters
can now be used without quotes.
================(Build #5117 - Engineering Case #346245)================
Selecting blob data from a variable could have caused the server to stop
with an assertion failure. This has been resolved.
================(Build #5117 - Engineering Case #345734)================
If a predicate qualified to be pushed into a view, then the process of inferring
new predicates in the view query block, might not have used this pushed predicate.
This may have resulted in less than optimal access plans, due to the fact
that useful sargable predicates were not inferred. This has been fixed.
The following conditions must have been met for a query to have exhibited
this problem:
(1) the main query block was a grouped query on a view V1
(2) the main query block contained a local view predicate on V1 (e.g., "V1.col1
= constant")
(3) the view V1 contained other predicates that, with the help of the pushed
predicate, could have been used to infer sargable predicates on base tables
(e.g., "col1 = T.x")
(4) the view V1 was grouped as well
Example:
select V1.col2, count(*)
from V1
where V1.col1 = c
group by V1.col2
V1: select V2.col3, count(*)
from V2, T
where V2.col1 = T.x
group by V2.col3
V2: select *
from R1
UNION ALL
select *
from R2
================(Build #5004 - Engineering Case #386569)================
The index corruption caused as a result of Engineering Case 383145 could
not have been detected by database validation. Validation of an index with
this type of corruption will now generate an assertion failure, (such as
100305, 102300, or 201601). Dropping and recreating the index will solve
the problem.
================(Build #5003 - Engineering Case #361188)================
The collation 1250LATIN2 was missing case conversions for "Letter O with
double acute" and "Letter U with double acute". As a result, the functions
UPPER() and LOWER() would have failed to convert these letters to their corresponding
case, and comparisons would also have failed to match these characters with
other O and U characters when the case was different. This has now been fixed,
but existing databases will need to be rebuilt to get the new conversions.
================(Build #5003 - Engineering Case #356798)================
Messages sent from the server to the client application would have been corrupted
if the database character set was different from the client's connection
character set. This problem would most likely have occurred on multi-byte
character set systems with utf8 databases. This has been fixed.
================(Build #5003 - Engineering Case #355245)================
Attempting to unload a database created prior to SQL Anywhere version 5 would
have resulted in an error that user "dbo" did not exist. If the dbo user
was created, a different error would have been given, since the view dbo.sysusers
would not have existed. This has been fixed. A workaround is to run the Upgrade
utility dbupgrad before unloading the database.
================(Build #5003 - Engineering Case #354948)================
When the database option Optimization_goal was set to 'First-row', the optimizer
did not always respect it for queries with equijoins. This has been fixed.
================(Build #5003 - Engineering Case #354773)================
If an external procedure attempted to return a string longer than 65535 bytes,
via a single call to the set_value callback function, the string would have
been truncated. This has been fixed. A workaround is to call set_value multiple
times to build up the result in pieces, each being shorter than 65535.
================(Build #5003 - Engineering Case #353893)================
If an event was scheduled to execute only once, and the event completed at
the same time as the database was shut down, a server crash could have resulted.
This has been fixed.
================(Build #5003 - Engineering Case #353678)================
If either of the -ar or -an command line options was used with Unload utility
DBUNLOAD, column statistics from the unloaded database would not have been
preserved in the new database. This could have resulted in different query
execution plans being chosen when using the new database. This has been fixed.
================(Build #5393 - Engineering Case #429117)================
In the Details panel of a table's result set, if a row was selected and the
Delete key pressed, the row would have been successfully deleted. If the
Delete key was pressed again, an error would have been displayed. This has
been fixed.
================(Build #5391 - Engineering Case #428363)================
Both the SQL Anywhere and MobiLink plug-ins permitted pausing and continuing
Windows services for dbsrv8, dbsrv9, dbeng8, dbeng9, dbremote, dbmlsync,
dbmlsrv8 and dbmlsrv9. This ability has been removed, as none of these
executables actually support pausing. Pause and continue operations are now
only permitted on dbltm Windows services.
================(Build #5219 - Engineering Case #380793)================
The background color of explanatory text was being set incorrectly in wizards
when Sybase Central was run on Linux. This has been fixed so that the wizard
backgrounds are now transparent.
================(Build #5189 - Engineering Case #373172)================
In the Translate Log File wizard, selecting 'Include trigger generated transactions'
and 'Include as comments only', would have included the trigger generated
transactions as statements rather than as comments. This has been fixed.
================(Build #5140 - Engineering Case #362803)================
If the help window was opened and closed, and then Sybase Central was minimized,
the help window would have been reopened when Sybase Central was then maximized.
Note, this same problem affected Interactive SQL dbisql, as well. This has
been fixed.
================(Build #5139 - Engineering Case #362728)================
Turning warnings on or off in the Preferences dialog from the Tools menu
(Tools->Adaptive Server Anywhere 9->Preferences - 'Confirm deletions when
editing table') would have had no effect while table data was being edited.
This has been fixed.
================(Build #5137 - Engineering Case #362226)================
On Linux systems, getting information about a table's primary key, by clicking
the 'Details' button on the Property dialog, would have caused a ClassCastException.
This is now fixed.
================(Build #5117 - Engineering Case #351546)================
When in the Code Editor, if Auto indent was set to Default or Smart, pressing
enter with text selected would have added a new line to the selection, rather
than replacing the selection with a new line. The problem did not occur
when Auto indent was set to None. This problem has now been fixed.
================(Build #5003 - Engineering Case #356390)================
The "do not ask again" checkbox, that is shown when deleting a table record,
could have been selected and then the "No" button clicked. This would have
resulted in the dialog never being shown again. In 8.x versions, "Yes" would
always have been assumed, and in 9.x versions, "No" would always have been
assumed. This has been changed so that the buttons now say "Ok" and "Cancel",
and the checkbox is ignored when "Cancel" is pressed.
================(Build #5003 - Engineering Case #355830)================
A NullPointerException could have occurred in the following situation:
- Open a second window, like when editing a stored procedure
- Make a change to the procedure, close the window, when it prompts to save
changes, respond NO
- The window closes, but the Sybase Central window does not have focus
- Pressing TAB or CTRL-TAB would have caused the exception
This has now been corrected.
================(Build #5413 - Engineering Case #430586)================
The Interactive SQL utility was failing to execute a ROLLBACK following a
failed statement if the Auto_commit option was ON. The behaviour has been
changed so that a ROLLBACK is now executed in this situation. Note, this
is now the same behavior as DBISQLC.
================(Build #5411 - Engineering Case #431876)================
The Interactive SQL utility could have reported a spurious table missing
error if a connection executed a SELECT statement, disconnected, then connected
to a different database and executed an INSERT statement. The problem was
due to the "Auto_refetch" option being on. This option was not sensitive
to the connection being closed; this has now been fixed.
================(Build #5405 - Engineering Case #432032)================
When run on Unix systems with the -q "silent mode" command line option, dbisqlc
would have crashed if a MESSAGE statement was executed with a TYPE ACTION
TO CLIENT or a TYPE WARNING TO CLIENT clause. This has been fixed.
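For example, a statement of this form (message text is illustrative) would
have caused the crash:
MESSAGE 'processing complete' TYPE WARNING TO CLIENT;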
================(Build #5391 - Engineering Case #428508)================
The Interactive SQL utility could have reexecuted more than the last SELECT
statement if the "Auto_refetch" option was on. For this to have occurred,
DBISQL would also have to have been run in windowed mode, with multiple statements
executed at once, starting with a SELECT statement. e.g.
SELECT * FROM myTable;
OUTPUT TO 'c:\\test.txt';
Then an INSERT, UPDATE, or DELETE statement was executed, which would have
caused the statements following the SELECT statement (in this case, the OUTPUT
statement) to have been reexecuted. This has been fixed.
================(Build #5314 - Engineering Case #407408)================
If the full path to the active transaction log was greater than 57 characters,
calling the Backup utility with the command line option -xo "delete and restart
the transaction log without backup" would have failed to erase the renamed
transaction log. The renamed transaction log file is now properly deleted.
================(Build #5311 - Engineering Case #406904)================
If the Translation utility was used with the command line option -it "specify
table names to include in output", it would have crashed when the transaction
log file contained transactions that continue from the previous transaction
log file. This has now been
fixed.
================(Build #5302 - Engineering Case #401948)================
The corruption caused by the problem fixed in Engineering Case 383145 could
have been undetectable by database validation. Engineering Case 386569 attempted
to address this, but the change may have caused validation of non-corrupted
indexes to fail. This has been fixed. Validation of a database affected by
the corruption in Engineering Case 383145 is now detected properly and results
in the error "Index 'index_name' on table 'table_name' has an invalid right
spine". This type of corruption should be fixed by rebuilding the index.
================(Build #5286 - Engineering Case #400332)================
Changes for Engineering Case 394722 to the iAnywhere JDBC Driver prevented
Interactive SQL from displaying, importing, or exporting unsigned numeric
types correctly. For example, when displaying unsigned values, an error dialog
could have been displayed saying that the value would not fit in an "n" byte
value, where "n" was a small whole number. This has now been fixed.
================(Build #5281 - Engineering Case #369150)================
If the server command line passed to the Spawn utility dbspawn contained
the @filename option, dbspawn would have expanded the contents of the file
and then spawned the server. This meant that the server command line would
have included the contents of the file. If the file contained certificate
passwords or database encryption keys, they would then be visible through
the 'ps' command or equivalent. This has been changed; dbspawn will no longer
expand the @filename parameter.
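For example (file names are illustrative), in the following invocation the
@server.cfg parameter is now passed through to the server unexpanded:
dbspawn dbsrv8 @server.cfg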
================(Build #5270 - Engineering Case #394136)================
If a table had been created with the PCTFREE clause (table page percent free),
the unload or reload of that table would not have used this PCTFREE in the
CREATE TABLE statement for the new database. This has been fixed.
================(Build #5270 - Engineering Case #391900)================
Interactive SQL could have reported an 'internal error' if the Import Wizard
was used to read data from an ASCII file into an existing table, and there
were fewer columns in the file than columns in the table. This has been fixed.
================(Build #5258 - Engineering Case #391577)================
The global variables @@fetch_status, @@sqlstate, @@error, and @@rowcount
may have been set with incorrect values if a cursor's select statement contained
a user-defined function that itself used a cursor. This has been fixed.
================(Build #5256 - Engineering Case #386025)================
When attempting to rebuild a strongly encrypted database with the Unload
utility with the 'rebuild and replace database' command line option (-ar),
if the database was involved in replication or synchronization, the current
transaction log offset of the newly rebuilt database would not have been
set correctly. When rebuilding databases of this type, the Unload utility
will now assume that the encryption key specified with the -ek or -ep switch
is the encryption key for both the database being rebuilt, and the newly
rebuilt database. In addition, using the -ar option will now return an error
in the following situations, where previously the database would have been
rebuilt but the log offset would not have been set correctly :
1) The database is involved in replication or synchronization, but the encryption
key provided with the -ek or -ep switch is not a valid encryption key of
the database being rebuilt.
2) The database is involved in replication or synchronization, the database
being rebuilt is strongly encrypted, but the -ek or -ep switch was not provided
on the dbunload command line.
================(Build #5252 - Engineering Case #388837)================
When building a remote database from a reload file generated by extracting
from a consolidated database there could have been a failure. This would
only have occurred if there existed a publication on a subset of columns
in a table and there were also statistics on some columns in the table that
were not part of the publication. This has been fixed.
A workaround would have been to drop the statistics on the table or column
before extracting the user database.
================(Build #5239 - Engineering Case #383145)================
After deleting rows from the end of a large trie-based index, attempting
to insert or search for a value larger than any indexed value could have
resulted in fatal I/O errors (such as reading past the end of the file),
or assertions such as 102300, or 201601. For this to have occurred, the
index had to contain at least 3 levels (more than 50,000 rows with a 2k page
size), the value being deleted had to be the last value in the index, and
the leaf page containing the entry, (as well as the leaf's parent), must
have contained only a single entry. Rebuilding (or reorganizing) the index
would have fixed the problem.
Note, dbvalid would not have caught this problem.
================(Build #5221 - Engineering Case #381327)================
Some utilities, (ie dbltm.exe, ssremote.exe, dbremote.exe and ssqueue.exe)
would have crashed if they had been given a non-existent configuration file
on their command lines. The utilities now return the usage screen.
================(Build #5212 - Engineering Case #378613)================
The following problems related to the Import Wizard have been fixed:
1. When importing ASCII or FIXED files into an existing table, the column
data types were always being displayed as "VARCHAR" on the last page. Now,
the actual column types are displayed.
2. When importing FIXED data in an existing table, if fewer column breaks
were placed so that fewer columns were defined than appeared in the actual
table, the preview would still have shown columns for all of the columns
in the database table. This was incorrect, and clicking on these extra columns
would have caused Interactive SQL to crash. These extra columns are now no
longer displayed.
3. If the Import Wizard was closed by clicking the close box, it could
still attempt to import the data. Now, clicking the close box is synonymous
with clicking the "Cancel" button.
================(Build #5204 - Engineering Case #377116)================
The reload.sql file created by the Reload utility did not double-quote the
login name for CREATE EXTERNLOGIN statements. This may have caused a syntax
error during reload. This has been fixed, the login name is now double-quoted.
================(Build #5196 - Engineering Case #374705)================
When the Unload utility dbunload, was run with the command line option -ar
(rebuild and replace database), the old transaction log file may not have
been deleted after the database was successfully rebuilt, even if there was
no replication/synchronization involved in the original database. This problem
has been fixed.
================(Build #5191 - Engineering Case #373477)================
Running the Unload utility dbunload, with both the -ar (rebuild and replace
database) and -ek (specify encryption key for new database) command line
options, would have failed when attempting to connect to the new database
with the error "Unable to open database file "<file>" -- Missing database
encryption key." The last step of dbunload -ar is to rename the log file,
but the encryption key was not specified when it should have been. This has
now been fixed; the encryption key is now specified correctly.
================(Build #5189 - Engineering Case #373179)================
If two instances of the SQL preprocessor sqlpp were run at the same time,
the generated code could have been invalid. The concurrently running preprocessors
could have attempted to use each other's temporary file, and silently generated
invalid code. This problem has been fixed by including the process id in the
temporary file name.
================(Build #5188 - Engineering Case #372897)================
If a database consisted of dbspaces other than just the SYSTEM dbspace,
and an attempt was made to unload the data from this database to another
database with the same structure using the Unload utility dbunload:
DBUNLOAD -d -ac <connection-parameters-to-new-database>
The Unload utility would have attempted to create a dbspace for the new
database and would have reported an error if the dbspace already existed.
Now, dbunload will not attempt to create dbspaces when reloading into another
database if the -d command-line option is used.
================(Build #5178 - Engineering Case #370724)================
The Unload utility dbunload, would have silently placed old transaction log
files into the root directory, when the command line option -ar (rebuild
and replace database) was used and no log directory was specified, for databases
that were involved in synchronization/replication using RepAgent. Now, if
this situation occurs, the old transaction log file will be placed in the
log directory of the database.
================(Build #5177 - Engineering Case #370567)================
The Unload utility dbunload, may have crashed if the command line option
-ar (rebuild and replace database) was used with a database that had no online
transaction log. This problem has been fixed.
================(Build #5172 - Engineering Case #369491)================
When running on Unix systems, the Interactive SQL utility dbisql, would have
displayed the usage message when a full path to a SQL script file was given.
The leading '/' was being interpreted as a command line switch. This
has been fixed.
================(Build #5169 - Engineering Case #368825)================
If a datasource name was specified on the command line that contained an
encrypted password, dbisql would not have immediately connected to the database,
but would have first displayed the "Connect" dialog. Now an attempt is made
to connect immediately, without first displaying the "Connect" dialog.
================(Build #5167 - Engineering Case #368475)================
Very occasionally, dbisql could have reported an internal error (NullPointerException)
when scrolling down in the "Results" pane. This has been fixed.
================(Build #5156 - Engineering Case #365940)================
If the trantest sample application (PerformanceTransaction) was executed
with -a odbc -n <threadcount>, and the thread count was higher than one, it
may have crashed in NTDLL.DLL. This has been fixed.
================(Build #5150 - Engineering Case #364921)================
Interactive SQL dbisql could have failed with an internal error when rows
of a table were selected and then the DELETE key was pressed to delete them.
The following conditions had to be true for the error to have occurred:
- There must have been more than about 125 rows in the table
- The rows had to have been selected using the keyboard
- The table was scrolled while selecting, past the initial 125 rows
- The "Show multiple result sets" option was OFF.
This problem has been fixed.
In a related issue, if rows were selected, then CTRL+C was pressed to copy
them, extra lines of empty values would have been selected after the last row.
All the table data would have been copied correctly; the error was the addition
of the blank rows. This has also been fixed.
================(Build #5139 - Engineering Case #362497)================
The OUTPUT statement could have failed to write any rows to a file, even
if there were rows to write, if the "Output_format" option was set to an
invalid value. Now, it is impossible to set the "Output_format" option to
an invalid value. When connecting to a database in which the option has been
set to a bad value, the bad value is ignored and the default (ASCII) is assumed.
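For example (a sketch), a valid value can be re-established with:
SET OPTION "Output_format" = 'ASCII';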
================(Build #5134 - Engineering Case #361599)================
Executing a STOP DATABASE statement which attempted to stop a database running
on a server to which you were not currently connected, would have resulted
in dbisql failing with an internal error. This has been fixed.
================(Build #5132 - Engineering Case #361206)================
When run on Unix systems, the Data Source utility dbdsn, required write permission
on the .odbc.ini file, even if just listing or reading the DSNs. This has
been fixed; now only read permission is required, unless the -d or -w options
are used.
================(Build #5126 - Engineering Case #360001)================
The context menu for Results tables could have appeared far from the mouse
pointer if the whitespace to the right or below the actual table data was
clicked on. This has been fixed so that the context menu always appears where
the mouse was clicked.
================(Build #5126 - Engineering Case #359828)================
When the Histogram utility dbhist, was run on non-English systems, mangled
characters would have been shown in the title, legend, etc, of the generated
Excel chart. This problem has now been corrected.
================(Build #5120 - Engineering Case #358151)================
The Interactive SQL utility dbisqlc, did not display help when the "Help
Topics" menu was selected. This has been fixed.
================(Build #5117 - Engineering Case #355787)================
An internal error could have been reported in response to pressing the DELETE
key when an uneditable result set was displayed in the "Results" panel and
the results table had the focus. This has been fixed.
================(Build #5117 - Engineering Case #354822)================
If the Histogram utility dbhist was not provided with connection arguments
(ie -c options), it would have assumed a default connection string of UID=DBA;PWD=SQL.
Now, dbhist will no longer assume any default connection arguments, which
is consistent with the behaviour of other ASA utilities.
================(Build #5117 - Engineering Case #354617)================
If dbisql reported an internal error, the password used in the current connection
(if any) was shown in clear text in the error details. It has now been replaced
by three asterisks. Note that passwords given as part of a "-c" command line
option are still displayed in clear text in the error details.
================(Build #5117 - Engineering Case #354337)================
An internal error (IllegalArgumentException) could have been reported by
dbisql, when an attempt was made to edit the result set of a stored procedure.
The result set should not have been editable in the first place. This has
now been corrected.
This problem would only have occurred when connecting using the iAnywhere
JDBC Driver.
================(Build #5117 - Engineering Case #351394)================
If a query had duplicate ORDER BY items, opening it in the Query Editor would
have caused its parser to generate an error.
For example:
SELECT emp_fname, emp_lname
FROM employee
ORDER BY emp_fname, emp_fname
SELECT emp_fname, emp_lname
FROM employee
ORDER BY 1, 1
This has now been fixed; the duplicate ORDER BY item is ignored by the Query
Editor's parser.
================(Build #5117 - Engineering Case #349930)================
If incorrect options were used with the Unload Database utility dbunload,
it could have crashed after displaying the usage. This has been fixed.
================(Build #5117 - Engineering Case #348793)================
It was possible to edit the result set of a query, even though some, or all,
of the primary keys were not included in the result set. Now, the result
set can only be edited if all of the primary key columns are included, or
the table has no primary key. These conditions are in addition to the existing
conditions: that the columns must all come from one table, and that no Java
columns are included.
Updating rows without the entire primary key being in the result set, could
have inadvertently modified or deleted more than one row.
Some examples, using the sample database (ASADemo):
1. SELECT * FROM customer
The query includes all primary key columns from the
"customer" table, so the results are editable.
2. SELECT year, quarter FROM fin_data
The query does not include all of the primary key columns
("code" is missing), so the results are not editable.
================(Build #5117 - Engineering Case #347779)================
The Unload utility dbunload, or Sybase Central's Unload Database wizard,
could have failed with a syntax error if a comment on an integrated login
id contained a double quote character. Unlike other types of comments, double
quotes are used to enclose the comment string, but any double quotes in the
string were not being doubled. Now the comment will be enclosed in single
quotes, and any single quote or escape characters will be doubled.
================(Build #5117 - Engineering Case #346263)================
The following problems could have been seen when launching or running the
graphical administration tools (ie, Sybase Central, DBISQL, DBConsole, MobiLink
Monitor)
1. A crash on startup -- The Java VM may have reported that an exception
occurred in the video card driver.
2. Painting problems -- On Windows XP, the task switcher that comes with
Windows XP Powertoys caused the administration tools to paint incorrectly
when switching through the list of tasks.
These problems have been fixed. A workaround is to disable the computer's
use of DirectDraw and Direct3D acceleration.
================(Build #5117 - Engineering Case #345634)================
If the server issued an error message in response to committing changes,
the error message would not be displayed if the commit was a side-effect
of shutting down DBISQL. This situation could occur if the dbisql option
Wait_for_commit was 'On'. Now the message is always displayed.
================(Build #5117 - Engineering Case #313786)================
The Database Initialization utility dbinit, could have failed if the SQLCONNECT
environment variable specified a database file name (DBN). This has been
fixed so that the SQLCONNECT environment variable does not affect dbinit.
================(Build #5003 - Engineering Case #356790)================
An error would have been reported if an UNLOAD TABLE statement was executed
that included an APPEND clause. This has been fixed.
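For example (a sketch; table and file names are illustrative, and this assumes
the APPEND ON form of the clause):
UNLOAD TABLE t1 TO 'c:\\unload\\t1.dat' APPEND ON;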
================(Build #5381 - Engineering Case #426153)================
If the MobiLink client was shut down by clicking the shutdown button on the
window, using the stop method on the integration component, or sending a
WM_CLOSE message to the dbmlsync window, then it could have hung with 100%
CPU utilization. The behaviour would have occurred intermittently, and would
have been more likely on machines with multiple processors. This has now
been fixed.
================(Build #5369 - Engineering Case #422828)================
If an sp_hook_dbmlsync_process_exit_code hook was defined and an invalid
dbmlsync extended option was specified, then the synchronization client would
have crashed. This has been fixed.
================(Build #5335 - Engineering Case #415767)================
If the MobiLink client was running on a schedule, and had successfully synchronized,
in very rare circumstances the section of the transaction log that was scanned
during a failed synchronization might not have been scanned again in the next
synchronization. This has now been fixed.
================(Build #5322 - Engineering Case #400188)================
If a MobiLink client connected to a database where SQL Remote was already
running, it was possible for the MobiLink client to report "Cannot convert
scanner to a unsigned int". When this error was reported, the current synchronization
would fail. This has been fixed so that the error is no longer reported.
================(Build #5304 - Engineering Case #405360)================
If a value longer than 255 bytes was passed to a MobiLink client hook procedure,
through the #hook_dict table, then the MobiLink client could have become
unstable. This may have occurred when a communications address longer than
255 bytes was specified, there was a failure connecting to the Mobilink server,
and there was an sp_hook_dbmlsync_ml_connect_failed hook defined. This has
now been fixed.
================(Build #5292 - Engineering Case #400604)================
At the end of each synchronization dbmlsync reports a line like the following:
End synchronizing 'template_P1' for MobiLink user 'template_U1'
If more than one publication was specified on the MobiLink client command line
(either together as -n P1,P2 or separately as -n P1 -n P2), then the publication
reported in this message may have been incorrect, although processing of
the synchronization would have continued correctly. Only the message was
wrong. The message now reports the correct publication name or names.
================(Build #5236 - Engineering Case #385179)================
The MobiLink client may not have detected that a table had been altered,
and would have sent an invalid upload stream to the consolidated database.
The following operations in a log file demonstrate the problem, when the
client scanned the log during a single synchronization:
1) Data on table t1 is changed.
2) Table t1 is removed from the only publication it belongs to.
3) Data on table t1 is changed again.
4) Table t1 is altered.
5) Table t1 is added back to the publication it was removed from.
Now, the MobiLink client will report the error "Table 't1' has been altered
outside of synchronization at log offset X" when this situation arises.
================(Build #5234 - Engineering Case #385171)================
If an sp_hook_dbmlsync_logscan_begin hook was defined that modified a table
being synchronized, and the extended option Locktables was set to 'off',
then actions performed by the hook would not have been uploaded during the
current synchronization. Actions would have been uploaded correctly though
during the next synchronization. This has been changed so that any changes
made to synchronization tables by the sp_hook_dbmlsync_logscan_begin hook
will now be uploaded during the current synchronization regardless of the
setting of the Locktables option.
================(Build #5233 - Engineering Case #372331)================
During synchronization it was possible for a Windows CE device to go into
sleep mode. Now the MobiLink client makes system calls to ensure that this
does not happen. It is still possible for a device to go into sleep mode
during a delay caused by the sp_hook_dbmlsync_delay hook or during the pause
between scheduled synchronizations.
================(Build #5230 - Engineering Case #384141)================
When using the Database Tools interface to run the MobiLink synchronization
client, if the a_sync_db version field was set to a value that was 8000 or
greater, and less than the version supported by the dbtools library, then
the upload_defs field would have been ignored. If the database had more than
one subscription, then this would have caused the synchronization to report
the following error message:
Multiple synchronization subscriptions found in the database. Please specify
a publication and/or MobiLink user on the command line.
This behaviour could also be seen if the MobiLink client was used with a
later version of the dbtools library. This has been corrected.
================(Build #5225 - Engineering Case #382167)================
If event hooks were defined, the MobiLink client would not have recognized
and executed them if their procedure name was not entered entirely in lower
case. The case of the procedure name is now ignored.
Note, a similar problem with the Extraction utility and SQL Remote has also
been fixed.
================(Build #5225 - Engineering Case #382166)================
When the MobiLink client extended option TableOrder was specified, the synchronization
would have failed if the tables, or their owners, were specified in a different
case from the one in which they were defined in the database. This problem
occurred whether the database was case sensitive or not. The tables and owners
specified by this option are now always treated as case insensitive.
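For example (connection parameters and table names are illustrative), the
tables named in the following are now matched regardless of case:
dbmlsync -c "DSN=remote" -e TableOrder=DBA.t1,DBA.t2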
================(Build #5212 - Engineering Case #378115)================
The MobiLink ASA client may not have shut down gracefully when it was running
as a Windows service. This may have caused resources such as temporary files,
not to have been cleaned up before shutting down. This problem has now been
fixed.
Note, this problem applied to the MobiLink synchronization server and SQL
Remote for ASA as well, and has also been fixed in both.
================(Build #5209 - Engineering Case #377878)================
The MobiLink client could have crashed, or behaved erratically, when
the value of the 'charset' database property for the remote database was
longer than 29 bytes. In particular, this was the case for databases with
the EUC_JAPAN collation, although there may be other such collations. This
issue has been fixed.
================(Build #5206 - Engineering Case #377036)================
If the MobiLink client was run with the -vn option, ('show upload/download
row counts'), but not the -vr option, ('show upload/download row values'),
or the -v+ option, ('show all messages'), then the upload row counts reported
for each table would have been cumulative; that is, each row count included
not just the rows uploaded from that table, but also those uploaded for all
previous tables. This has now been fixed.
================(Build #5194 - Engineering Case #374490)================
If the environment variables TMP or TEMP were not set, the MobiLink client
dbmlsync, would have given the error:
"Unable to open temporary file "MLSY\xxxx" -- No such file or directory"
and refused to start. This problem is now fixed.
================(Build #5193 - Engineering Case #374070)================
The ASA client dbmlsync, could have crashed, either while creating the upload,
or at the end of the synchronization. This was more likely to occur with
very large uploads. This behaviour has been corrected.
================(Build #5184 - Engineering Case #372085)================
When the Synchronization Client dbmlsync crashed, it would have left behind
a temporary file, which would never have been deleted. Now, a check is made
at startup for any temporary files left from previous runs, and they are
deleted if found.
================(Build #5178 - Engineering Case #370609)================
The MobiLink Client dbmlsync, would have complained of an "invalid option
...", if it was started with a configuration file, @filename, and filename
contained any extended options specified as
-e opt1="val1";opt2=val2;...
even if all the extended options were valid. This problem has now been fixed.
================(Build #5172 - Engineering Case #369238)================
If the schema of a table outside of a publication was altered (for example,
table "t1"), and a synchronizing table existed, whose name started with this
table's name (for example, table "t1_synch"), that had outstanding changes
to synchronize, then dbmlsync would incorrectly report that the schema of
the synchronizing table had been altered outside of synchronization. This
has now been fixed.
================(Build #5161 - Engineering Case #367528)================
Autodial was not responding, even when the network_name parameter was specified.
Autodial functionality has now been restored.
================(Build #5128 - Engineering Case #360258)================
The total accumulated delay caused by the sp_hook_dbmlsync_delay hook was
being calculated incorrectly when a synchronization was restarted using the
sp_hook_dbmlsync_end hook. As a result the sp_hook_dbmlsync_delay hook might
not be called or the delay produced by the hook might be shorter than specified.
Following are the specific conditions required to see this problem:
- Both an sp_hook_dbmlsync_end hook and an sp_hook_dbmlsync_delay hook have
been coded.
- During a synchronization the delay hook was called one or more times.
Those calls resulted in a total delay D and the maximum accumulated delay
parameter was set to some value M.
- When the end hook is called it sets the 'restart' parameter to 'sync'
or 'true' to restart the synchronization.
When the above conditions are met, the sum of delays caused by the delay
hook was not being reset before the synchronization was restarted. As a
result, on the restarted synchronization, the delay hook would not be called
if D >= M. If D < M then the maximum delay that would be allowed before
the synchronization occurred would be M - D when it should have been M.
The sum of delays is now reset before the synchronization is restarted so
that the delay hook will have the same behavior on a restarted synchronization
as it does on a first synchronization.
================(Build #5119 - Engineering Case #358388)================
When the ADDRESS specified for the ASA client dbmlsync, to connect to a Mobilink
server, contained the 'security' parameter and the cipher specified was not
recognized, dbmlsync would have reported an error indicating that it could
not load a DLL (usually dbsock?.dll or dbhttp?.dll). A more meaningful error
message is now displayed.
================(Build #5117 - Engineering Case #346769)================
If the Synchronization client dbmlsync was set to synchronize on a schedule,
and the MobiLink server was shut down when the upload stream was being sent
and then started up
again, dbmlsync could have continuously failed to synchronize until it was
shut down and restarted. The MobiLink server would simply have reported
"Synchronization Failed", with no more information. This has now been fixed.
================(Build #5003 - Engineering Case #357701)================
Synchronizing to an ASA remote database using the UTF8 collation could fail
with errors or put corrupted data into the database.
The same problem would affect any application using ODBC or OLEDB, including
Java-based applications using the JDBC-ODBC bridge (8.0) or iAnywhere JDBC
Driver (9.0), including DBISQL and Sybase Central.
The bug was introduced in the following versions and builds:
8.0.2 build 4409
9.0.0 build 1302
9.0.1 build 1852
This problem has been fixed.
================(Build #5117 - Engineering Case #345236)================
Microsoft Windows, for Asian (multi-byte) languages, allows a user to define
their own characters, including the glyph that is displayed. As part of defining
a character, the user picks an unused code point in the character set. MobiLink
and ASA were not aware of this new code point, and character set conversion
would substitute the "invalid character" for any user-defined characters.
Now, the mappings for user-defined characters in cp950 (Traditional Chinese)
are included.
================(Build #5117 - Engineering Case #358138)================
Connecting to a database in the MobiLink plug-in by opening the Connect dialog
and specifying the connection information, would have caused the information
specified (excluding the password) to be saved in the user's .scUserPreferences
file under the default connection key "DefConn". The information was saved
so that the next
time the Connect dialog was opened, it would contain the previous connection
information. For security reasons, this feature has been removed. Now, this
information is no longer implicitly saved in the user's .scUserPreferences
file. Instead, it is persisted in memory for the current Sybase Central session
only. Note that the user can still use
connection profiles to explicitly persist connection information in the
.scUserPreferences file.
This change also fixes a problem which could have caused the password to
be incorrectly persisted as "***".
================(Build #5154 - Engineering Case #365731)================
When a dialog was opened from another dialog (rather than from the main window),
closing the topmost dialog would have returned focus to the main window,
instead of the initial dialog. This has been corrected.
================(Build #5136 - Engineering Case #362053)================
The MobiLink Monitor was reporting the wrong number of uploaded bytes. The
Monitor would most often have reported the actual value plus one, but it
could also have reported even larger values. This has been corrected.
================(Build #5397 - Engineering Case #430086)================
After a deadlock occurred, the upload data for the tables prior to the current
table that caused the deadlock may not have been applied to the consolidated
database by the MobiLink synchronization server if the following situations
applied:
1) the MobiLink synchronization was running with the rowset size greater
than 1: the command line option -s X (X > 1) was used or the command line
contained no -s option;
2) the deadlock occurred when the MobiLink synchronization server was applying
the upload operations in multi-row mode and there was no deadlock when the
server was retrying to apply the same operations for the current table in
single-row mode.
This problem is now fixed.
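For illustration of condition 1), running the server with a rowset size of
1 forces single-row mode from the start and avoids the multi-row retry path
(a sketch; the executable name mlsrv9 and the DSN are assumptions):
mlsrv9 -c "dsn=consol_db" -s 1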
================(Build #5339 - Engineering Case #415985)================
The documented order of events in the prepare_for_download transaction has
been incorrect since the feature first appeared in 7.0.1. The correct order
is:
------------------------------------------------------
prepare_for_download
------------------------------------------------------
modify_last_download_timestamp
prepare_for_download
if( modify_last_download_timestamp script is defined
or prepare_for_download script is defined ) {
COMMIT
}
Previous documentation had the order of the scripts reversed. The above
order was chosen because the last-download timestamp (LDT) affects the content
of the download. If the LDT is being modified, it must be modified before
the download logic kicks in (i.e. in the prepare_for_download script or in
the download scripts).
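As a sketch of why the order matters, a modify_last_download_timestamp
connection script can be installed with the ml_add_connection_script system
procedure; here RewindLDT is a hypothetical stored procedure that receives
the LDT as an INOUT parameter and moves it back before the download is built:
call ml_add_connection_script( 'v1',
    'modify_last_download_timestamp',
    'call RewindLDT( ? )' );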
================(Build #5321 - Engineering Case #409377)================
If the -clrVersion or -clrFlavor .NET command line options were specified
when starting the MobiLink server, the MobiLink server would have reported
that the options were invalid, or not specified correctly. The MobiLink server
will now properly parse and accept these two .NET options.
================(Build #5303 - Engineering Case #402359)================
The MobiLink Synchronization Server was not able to synchronize multi-byte
characters (e.g. Chinese characters) between ASA with the UTF8 collation
and Microsoft SQL Server, if the columns were defined as CHAR (varchar, char,
or long varchar) in ASA and NVARCHAR in Microsoft SQL Server. A new command
line option, -hwH+, has been added to the MobiLink server. By default, it
is on for Microsoft SQL Server and off for all other consolidated databases.
When it is on, the MobiLink server calls SQLDescribeParam (this function
may not be supported by all ODBC drivers, but it is available in the driver
for Microsoft SQL Server) to determine the data types of the parameters
in the consolidated database, and binds the parameters using the consolidated
data types for all columns with a CHAR datatype.
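For example, the option can be specified explicitly on the server command
line (a sketch; the executable name mlsrv9 and the DSN are assumptions):
mlsrv9 -c "dsn=mss_consol" -hwH+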
================(Build #5302 - Engineering Case #404448)================
When connected to an Adaptive Server Enterprise server whose version was
greater than or equal to 12.5.1, the MobiLink Synchronization Server could
have uploaded incorrect data from ASA TIME columns into ASE DATETIME columns
when a cursor-based upload was used. The date part of the uploaded rows
in the ASE database would have been the current date, instead of '1900-01-01'.
This problem is now fixed.
================(Build #5299 - Engineering Case #403187)================
When synchronizing against DB2, the synchronization could have been blocked.
MobiLink selected the current timestamp from the table sysibm.systables, and
so it would have been blocked if another application was updating sysibm.systables
at the same time. To solve this problem, the table sysibm.sysdummy1, which
has only one row and exists for compatibility purposes, is now used instead.
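The statement now used is equivalent to the standard DB2 idiom (a sketch,
not necessarily the exact internal statement):
SELECT CURRENT TIMESTAMP FROM sysibm.sysdummy1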
================(Build #5272 - Engineering Case #394331)================
When scanning the transaction log to determine which changes need to be uploaded,
if dbmlsync first found a DML statement on a table (for example, an insert),
and then later found a DDL statement on the same table (for example, an ALTER
TABLE), dbmlsync should have failed and reported an error similar to "Table
't1' has been altered outside of synchronization at log offset X". If the
table in question (or its owner) had a name that required double quotes
around it in the log file (such as reserved words or numeric names such as
"42"), then dbmlsync would not have detected that the schema of the table
had changed and would not have reported the error. Also, if the ALTER TABLE
statement that was executed included a comment either before the ALTER TABLE
statement or between "ALTER TABLE" and the table name, dbmlsync would also
have failed to detect that the schema of the table had changed and would
not have reported the error. Dbmlsync will now report the error "Table 't1'
has been altered outside of synchronization at log offset X" when either
of these situations arise.
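For illustration, either of the following statements would previously have
escaped detection (the table and column names are hypothetical):
ALTER TABLE "42" ADD c2 INT;
ALTER TABLE /* widen the schema */ t1 ADD c2 INT;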
================(Build #5249 - Engineering Case #388904)================
The MobiLink server could have failed to gather the synchronization scripts
for a given table and reported the error "Error fetching table script t1.begin_synchronization",
even though table t1 did not have a begin_synchronization script. This problem
was more likely to have occurred when using an Oracle consolidated database
and the "iAnywhere Solution 9 - Oracle Wire Protocol" ODBC driver. This
problem has now been fixed.
================(Build #5249 - Engineering Case #388858)================
When synchronizing an ASA MobiLink Client that had multiple publications,
MobiLink would have allocated new memory to store the publication information
on each synchronization. Now, the memory is freed after the synchronization
completes.
================(Build #5225 - Engineering Case #381111)================
When used with a case-sensitive database, the MobiLink client could have
behaved incorrectly if MobiLink user and publication names were not specified
in the same case as they were defined in the database. These identifiers
might have been specified in any of the following places:
- the CREATE/ALTER SYNCHRONIZATION USER statement
- the CREATE/ALTER SYNCHRONIZATION SUBSCRIPTION statement
- the dbmlsync command line
- the CREATE/ALTER PUBLICATION statement
The incorrect behaviour could take one of the following forms:
- MobiLink client could have crashed
- synchronizations could have failed inappropriately
- if a MobiLink user was subscribed to more than one overlapping publication,
operations that belonged to both publications might have been uploaded more
than once, resulting in server side errors during synchronization.
MobiLink user and publication names are now treated case-insensitively.
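For example, with a case-sensitive remote database, defining the user as
MLUser1 but synchronizing as mluser1, as below, could previously have triggered
these problems (the names and connection string are hypothetical):
CREATE SYNCHRONIZATION USER MLUser1 TYPE tcpip ADDRESS 'host=localhost';
dbmlsync -c "uid=dba;pwd=sql" -u mluser1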
================(Build #5212 - Engineering Case #376840)================
The MobiLink Server could have crashed when using the HTTP link for synchronization,
if the remote stopped communicating with the server at a specific point in
the synchronization. The crash actually occurred when the server timed out
the connection. This has been fixed.
================(Build #5209 - Engineering Case #377471)================
Attempts to synchronize proxy tables would have failed with the error message
"Feature 'remote savepoints' not implemented". This has been fixed.
================(Build #5191 - Engineering Case #367198)================
In some very rare cases, an UltraLite application may have marked a column
as a primary key, as well as an index column, thus causing the MobiLink server
to crash when the application synchronized. This problem has been fixed.
Now, the MobiLink server will give a protocol error when this situation is
detected. To avoid the protocol error, the table will need to be dropped
and recreated.
================(Build #5188 - Engineering Case #372262)================
Connecting to the MobiLink server immediately after a successful autodial
could have failed with error WSAEHOSTUNREACH (10065). The fix is to repeatedly
attempt to open the session until it succeeds, or until the network_connect_timeout
period (default 2 minutes) expires.
================(Build #5188 - Engineering Case #372098)================
In rare circumstances, an upload could have failed with the error "Unknown
Client Error n", where n was some random large number. This error was usually
followed by another error reporting that "A protocol error occurred when
attempting to retrieve the remote client's synchronization log". Although
there are circumstances where this is a valid error to report, an instance
where this error was incorrectly reported has now been fixed.
================(Build #5172 - Engineering Case #369479)================
The MobiLink server could have crashed if all of the following had occurred on
the same worker thread:
- an error was handled on upload on the last table
- a download cursor was opened for the first time on any table
- a subsequent sync used the download table script without having an upload
error handled, and there were multiple rows to download
This is now fixed.
================(Build #5163 - Engineering Case #367764)================
If the consolidated and remote databases had different collations, the MobiLink
Synchronization server may not have respected the column width defined in
the remote database for columns defined with char or varchar datatypes. This
may have caused the ASA client to crash. Now, the MobiLink server will display
an error, and abort the synchronization, if the length of the column value
is greater than the column width defined in the remote database.
================(Build #5138 - Engineering Case #362597)================
It was possible for the MobiLink server to have crashed while doing secure
synchronizations. This has been fixed.
================(Build #5136 - Engineering Case #362015)================
The MobiLink Server would not have detected update conflicts, if the server
was running with the command line option -s n (where n was greater than 1)
or without -s at all, and the consolidated database was a Microsoft SQL Server
or Oracle database. Also, the synchronization had to have been a statement-based
upload, with no upload_fetch script, and the upload stream had to have contained
updates with a WHERE clause that was unsatisfied in the consolidated database.
These updates would have failed due to the unsatisfied WHERE clause, but
the MobiLink server would have ignored these failures without giving any
error or trying to resolve these conflicts. Now if the batch contains updates
and the number of affected rows doesn't match the number of rows applied,
the server will roll back the operations and try them again using single-row
mode.
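Since the problem required that no upload_fetch script be defined, one way
to get conflict detection is to provide such a script, which the server uses
to fetch the current consolidated row for comparison. A minimal sketch using
the ml_add_table_script system procedure (the table and column names are
hypothetical):
call ml_add_table_script( 'v1', 't1', 'upload_fetch',
    'SELECT pk, c1 FROM t1 WHERE pk = ?' );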
================(Build #5122 - Engineering Case #359575)================
When using the iAnywhere Solutions 8 Oracle WP driver for Win32 to connect
to Oracle 9i, NCHAR data was not correctly returned from the server. This
problem is fixed in version 4.20.0081 of this driver. To update the driver,
the following files need to be updated:
wqora1919.dll
wqora19r.dll
wqora19s.dll
wqbas19.dll
wqbas19r.dll
wqicu19.dll
wqutl19.dll
wqutl19r.dll
================(Build #5122 - Engineering Case #359568)================
When using the iAnywhere Solutions ODBC driver for DB2 on Windows systems,
(wqdb219.dll), and fetching BLOB data back in chunks, the trailing byte of
the buffer would have been set to 0x00. This problem has been corrected.
================(Build #5119 - Engineering Case #343460)================
If a combination of inserts and updates existed in the upload stream for
a given table, and an error occurred when MobiLink was applying these operations,
it was possible for the wrong script contents to be written to the MobiLink
log when the error context information was being written. The correct script
contents are now written to the log.
================(Build #5117 - Engineering Case #357506)================
With statement-based scripting, uploaded updates can be ignored by not providing
an upload_update script. However, when an update row was encountered on the
upload, an error would have been generated indicating the missing script,
and the synchronization would have been aborted. This has now been corrected.
A workaround is to provide any of the conflict resolution scripts (resolve_conflict,
upload_insert_new_row, or upload_insert_old_row).
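If updates should be applied rather than ignored, an upload_update script
can be defined instead; a minimal sketch using the ml_add_table_script system
procedure (the table and column names are hypothetical, with the non-key
column values bound before the primary key value):
call ml_add_table_script( 'v1', 't1', 'upload_update',
    'UPDATE t1 SET c1 = ? WHERE pk = ?' );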
================(Build #5414 - Engineering Case #433281)================
If a transaction log was renamed and a new one started, and there were transactions
that spanned the old and new transaction logs, then it was possible for any
of the log scanning tools (dbmlsync, dbremote, dbltm or dbtran) to crash
while processing the two transaction logs in sequence. This problem has now
been fixed.
================(Build #5343 - Engineering Case #416772)================
The Unload utility may not have properly unloaded a remote database that
was involved in synchronization, causing the generated reload file to contain
syntax errors. This would have occurred if any of the options for synchronization
users or synchronization subscriptions in the remote database had been completely
dropped with:
ALTER SYNCHRONIZATION USER ml_username DELETE ALL OPTION
or
ALTER SYNCHRONIZATION SUBSCRIPTION
TO publication-name
[ FOR ml_username, ... ]
DELETE ALL OPTION
This problem has now been fixed.
================(Build #5139 - Engineering Case #362725)================
Some SSL clients could have rejected certificates generated with the Certificate
Generation utility gencert. Beginning with version 8.0.2, gencert added
an extended key usage field to certificates it generated. Since this does
not seem to be accepted universally, it has been removed.
================(Build #5133 - Engineering Case #363625)================
A situation where a client request could potentially have crashed the ISAPI
redirector has been fixed.
================(Build #5117 - Engineering Case #348166)================
An application using the DBSynchronizeLog function of the Database Tools
interface could have crashed, if the msgqueuertn function pointer in the
a_sync_db structure was set to NULL. Now, if the pointer is set to NULL,
a default implementation that sleeps for the requested time period is used.
================(Build #5117 - Engineering Case #344018)================
The MobiLink user authentication utility dbmluser would have crashed if it
couldn't determine the default collation from the locale setting. The documentation
is incorrect: the default collation does not fall back to 1252LATIN1 on single-byte
machines and 932JPN on multi-byte machines. The default collation actually
would have become 'unknown' (or in some cases 'ISO_BINENG'), which was a
collation that dbmluser did not expect.
Now the problem in determining the default collation has been corrected,
as well as the cause of the crash.
================(Build #5369 - Engineering Case #421399)================
It was possible for dbremote to fail to send or receive messages, but no
line was written to the SQL Remote log that started with "E", indicating
that an error had occurred. This has been corrected so that an error line
is now always written to the log after the send or receive of a message fails.
================(Build #5319 - Engineering Case #397776)================
When using the FILE based messaging system for SQL Remote, there is a maximum
of 47,988 possible file names that SQL Remote will use to generate messages.
If all of these file names were already in use, then SQL Remote would have
looped forever trying to find a file name to generate a message. SQL Remote
will now detect that it has tried every single file name and break out of
the loop, reporting that "The maximum number of messages in the messaging
system has been reached". Note that although SQL Remote will now no longer
loop infinitely in this situation, to allow for the creation of new messages
for this remote user, all the files in the user's inbox may need to be manually
deleted, and possibly, the maximum message size increased using the -l switch
on the SQL Remote command line.
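For example, to increase the maximum message length (a sketch; the connection
string is hypothetical, and the unit of the -l value is assumed to be bytes):
dbremote -c "uid=dba;pwd=sql" -l 102400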
================(Build #5117 - Engineering Case #355574)================
The SQL Remote Message Agents dbremote and ssremote, the SQL Remote Open
Server ssqueue, the Log Transfer Manager dbltm, and MobiLink Client dbmlsync,
could have hung when attempting to write a message to the output log that
was greater than 64KB in size. This has now been fixed.
================(Build #5400 - Engineering Case #430594)================
If a passthrough session contained "create variable" or "drop variable" statements,
it was possible for SQL Remote to crash when applying the passthrough session
at the remote database. A case-sensitive string comparison for "VARIABLE"
was being done, but if a create or drop variable command was executed in
lower case during a passthrough session, the receiving side would fail to
find the name of the variable, leading to the crash. The string is now converted
to upper case before doing the comparison.
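For illustration, a passthrough session like the following, where the statement
is entered in lower case, could previously have crashed the remote when the
messages were applied (the user and variable names are hypothetical):
PASSTHROUGH FOR rem_user;
create variable @status int;
PASSTHROUGH STOP;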
================(Build #5311 - Engineering Case #404450)================
If dbremote was forced to retry an operation that involved a long varchar
or long binary, then dbremote would have reported an error in the dbremote
log similar to "There is already a variable named 'n1'". Dbremote will no
longer report this error.
================(Build #5307 - Engineering Case #406125)================
If the sending and receiving phases of dbremote were being run in separate
dbremote processes, and the sending phase was processing a SYNCHRONIZE SUBSCRIPTION
request, it was possible for the two phases of dbremote to deadlock, resulting
in the sending phase being rolled back. The sending phase of dbremote will
now lock all the tables that are about to be synchronized if it detects that
the receiving phase of dbremote is also active. If the sending phase fails
to acquire the table locks, it will attempt to get the locks 5 times, with
an increasing delay between attempts (100ms, 200ms, 400ms, 800ms). If
all five attempts to acquire the locks fail, the sending phase will attempt
to process the SYNCHRONIZE SUBSCRIPTION without acquiring the locks, which
could still result in a deadlock. To further reduce the possibility of deadlock,
the receiving phase of dbremote should run with "-g 1" to prevent the grouping
of transactions.
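For example, the two phases might be run as separate processes, with the
receiving process using -g 1 to prevent transaction grouping (a sketch; the
connection string is hypothetical, and the send-only and receive-only
switches -s and -r are assumptions):
dbremote -c "uid=dba;pwd=sql" -s
dbremote -c "uid=dba;pwd=sql" -r -g 1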
================(Build #5304 - Engineering Case #405375)================
If the sending phase of the SQL Remote Message Agent was the victim of a
deadlock, it could have crashed. Note that in order for the sending phase
of dbremote to have been the victim of deadlock, it would have to have been
in the process of satisfying multiple SYNCHRONIZE SUBSCRIPTION commands,
and the send and receive phases would have had to be running on separate
dbremote processes. The Message Agent will no longer crash in this situation,
but will now report that the sending of messages has failed and will shut
down as expected.
================(Build #5206 - Engineering Case #376895)================
When the database option "Delete_old_logs" was set to "on", SQL Remote, (as
well as MobiLink, and the ASA RepAgent), may reported "missing transaction
log(s)...". This would have occurred in the following situation:
1) the online transaction that contains the last replication/synchronization
offset, had been renamed, say to X;
2) the offline log X contained transactions that started from an early log,
say Y; and
3) the log Y contained transactions started from an earlier log, say Z.
Transaction log Z may have already been deleted. This problem is fixed now.
================(Build #5166 - Engineering Case #354147)================
When the log scanning tools were looking for the log file with the desired
starting log offset, if that log file had a transaction in it which began
in an earlier log file, but the log file that contained the start of the
transaction could not be found, an error would have been reported similar
to "Missing transaction log(s) in between file AC.log (ending at offset X)
and file AD.log (starting at offset Y)". The offsets reported would have
been incorrect, and upon inspection, the ending log offset of AC.log would
have likely been the same as the starting log offset of AD.log. The correct
error is now reported, "Missing transaction log(s) before file AA.log".
================(Build #5127 - Engineering Case #360190)================
SQL Remote (dbremote) may have hung when scanning transaction logs, if the
server had logged a checkpoint whose previous-checkpoint pointer pointed to
itself. This problem has been fixed.
Note that this problem also affected the MobiLink Synchronization Client
and the ASA Replication Agent.
================(Build #5423 - Engineering Case #403525)================
When large databases were synchronized, UltraLite on Palm with the file-based
data store could have become very slow, and would eventually have timed out
during conduit synchronization. This has been fixed by allowing users to specify
a larger cache size, which significantly improves synchronization time by
reducing file I/O.
Now, when the UltraLite conduit is loaded by the HotSync Manager (on the
desktop), it attempts to set the cache size using the value "CacheSize" if
it is specified in the following registry key:
Software\Sybase\Adaptive Server Anywhere\<version>\Conduit\<CRID>
where <CRID> is the creator ID used by the UltraLite Palm application on
the remote device. The "CacheSize" value is in bytes; the suffix k or K can
be used to indicate kilobytes, and m or M to indicate megabytes. If the value
is invalid, it is ignored and the old default cache size of 128KB is used.
If the "CacheSize" value is not set, the UltraLite conduit will use the new
default cache size of 4MB.
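For example, the following value would set an 8 megabyte cache (a sketch;
the version number 9.0 and the creator ID Syb1 are hypothetical):
Software\Sybase\Adaptive Server Anywhere\9.0\Conduit\Syb1
    CacheSize = "8M"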
================(Build #5324 - Engineering Case #409958)================
When synchronizing with the Palm HotSync conduit, the HotSync connection
could have timed out if the UltraLite application (with a record-based
data store) required a long synchronization. The error -309, 'Memory error
-- transaction rolled back', would then have been signalled. This has been fixed.
Now the worker thread handles all communication between HotSync and MobiLink,
which prevents any device timeouts due to the long communication between
HotSync and the MobiLink server.
In addition, the HotSync dialog is now properly updated - the "Exchange"
arrows indicator is kept progressing while the connection is kept alive.
================(Build #5363 - Engineering Case #420496)================
On Palm T|X devices running the latest Palm OS v5.4.9, the OS can leave the
record busy bit turned on when the device is reset. This could have caused
UltraLite applications to fail on startup in the call ULAppLaunch(). A
workaround has now been implemented.
================(Build #5301 - Engineering Case #401161)================
UltraLite can now be run on Palm NVFS devices, and it supports both the record-based
and file-based data stores.
The following is a summary of UltraLite clarifications for the Palm OS platform.
A. Database
An internal file volume may also be present on some NVFS devices, such as
Tungsten T5 handhelds, which you can access and use similarly to an SD slot.
Therefore, call ULEnableFileDB to store UltraLite databases on either an
internal drive (built-in storage card) or external slot (expansion card).
Furthermore, use the existing connection parameter palm_db=CRID:n to specify
the data store on either an internal drive or external slot. n is the volume
ordinal from enumerating the mounted volumes, and the default value of n
is 0. Tungsten T5, for example, has two volumes, the internal file volume
and the SD slot volume. If the device cannot locate a volume with the user-specified
volume ordinal, SQLCODE -82 is reported when opening the database.
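For example, the following connection parameter selects the second mounted
volume (the creator ID Syb1 is hypothetical; which physical volume ordinal
1 refers to depends on the enumeration order):
palm_db=Syb1:1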
B. Synchronization Conduit
The conduit synchronizes the first UltraLite database found on the
device. The database lookup sequence is first the record storage, then
the file system including internal drive and external slot. Make the UltraLite
database which you intend to synchronize available as the first one found
in that sequence.
================(Build #5198 - Engineering Case #373035)================
Performing a synchronization that downloaded rows to a table on which a cursor
had been opened could have resulted in the cursor being positioned on an
incorrect row following an ABSOLUTE fetch.
UltraLite tries to optimize ABSOLUTE fetches by replacing them with RELATIVE
ones, if it determines that will be more efficient. The algorithm first verifies
that no tables in the cursor have been modified; if any have been, it does
not attempt the optimization. The problem occurred because the synchronization
code was not marking the tables as modified. Now it does.
================(Build #5125 - Engineering Case #359409)================
When a java.sql.Timestamp value was inserted into a column of type TIMESTAMP,
the conversion of the nanosecond portion of the Java timestamp to the microsecond
portion of the UlDateTime was incorrect. As a result, invalid timestamps such
as 01:20:60 were possible. This has now been corrected.
================(Build #5117 - Engineering Case #349254)================
If a download conflict error occurred during synchronization, the UltraLite
application could possibly have crashed at some point in the future. This
has been fixed.
================(Build #5189 - Engineering Case #373248)================
When creating a foreign key with the UltraLite Schema Painter, the table
for which the foreign key is being created (the table doing the referencing)
was not listed as a table that could be referenced. This has been fixed;
self-referencing foreign keys can now be created.
================(Build #5186 - Engineering Case #372513)================
The UltraLite Schema painter could have crashed when creating a foreign key.
This has been fixed.
================(Build #5127 - Engineering Case #360361)================
In some circumstances, the UltraLite Schema Painter would have crashed when
browsing the File menu. This has been fixed.
The following could have caused this behaviour:
- Start the UltraLite Schema painter and create a new schema
- Right click on the schema in the left pane but do not choose any menu
items
- Select the File menu and choose Close (when asked to save, click No)
- Select the File menu again and move over the items in the menu. The application
could crash at this point.
================(Build #5193 - Engineering Case #372615)================
When the UltraLite Initialization utility ulinit, was run against a blank
padded ASA database, each object (table, column, index, publication etc.)
in the generated UltraLite schema was padded by many spaces. Since the -z
switch on ulinit (used to specify a table ordering) required a list of tables,
ulinit could not properly handle this switch. This has been fixed.
The workaround is to unload and reload the ASA database into a non-blank
padded reference database.
================(Build #5189 - Engineering Case #373250)================
When creating a foreign key with one of the UltraLite utilities (Schema Painter,
ULXML, ulload, ulcreate, ulconv or ulisql), the application could have crashed,
if the foreign key referenced a table that had a self-referencing foreign
key. This is now fixed.
================(Build #5117 - Engineering Case #347352)================
When using the UltraLite initialization utility ulinit, to create a .usm
file for an UltraLite application, the -t option was ignored. This option
is used to specify the file containing the root certificates that the application
should accept in an SSL/TLS synchronization. This has been fixed. The -t
option will now be respected.