Loading Java Classes from a Different Schema

I needed to dynamically load Java classes installed in a different schema than the one containing the class loading them. I found this description in the Oracle Database Java Developer’s Guide. It refers to the class oracle.aurora.rdbms.DbmsJava. This class is found in the JVM of the Oracle Database, but I was not able to locate any further documentation on it. Extracting information from DBA_JAVA_METHODS, DBA_JAVA_ARGUMENTS, and DBA_JAVA_THROWS reveals a little more. I will not list everything, just the methods relevant for loading classes:

package oracle.aurora.rdbms;

import oracle.sql.CHAR;

public class Schema {}

public class DbmsJava {
    public static Class classForNameAndSchema(String name, String schema)
        throws ClassNotFoundException {}
    public static Class classForNameAndSchema(String name, Schema schema)
        throws ClassNotFoundException {}
    public static Class classForNameAndSchema(String name, CHAR schema)
        throws ClassNotFoundException {}
    public static Class classForNameInSystemSchema(String name)
        throws ClassNotFoundException {}
}

Note that the example using classForNameAndSchema(String,String) in the documentation is wrong. The example passes the schema as the first argument and the name as the second, but it should be the other way round.

In case you wonder what aurora in the package name means: the JVM inside the Oracle Database was known as the “Aurora JVM” when it was introduced back in 8i.


ORA-24757 Duplicate transaction identifier on Read-only Database

Today was the first time I saw that error. The documentation is not very helpful (and apparently some tech writer changed bad tech speak in the action into something completely useless):

ORA-24757: duplicate transaction identifier

Cause: An attempt was made to start a new transaction with an identifier already in use by an existing transaction.

Action: Verify that the identifier is not in use.

This seems to apply mainly to OCI programming when creating an autonomous transaction. However, this was PL/SQL code using PRAGMA AUTONOMOUS_TRANSACTION, where the handling of TXIDs is done automatically, so how was this possible?
It turned out that, by mistake, I had connected to the standby database, which is in read-only mode. The code in question does logging over a database link, and to avoid problems with “ORA-02049 Timeout: Distributed Transaction Waiting for Lock” in operations that are inherently local, the logging API uses autonomous transactions. In this case, there was actually no DML, so I did not get an error telling me that I could not do X on a read-only database, only that strange message when the logging API got called.
Apparently, there is a fixed value of the TXID on a read-only database. So, when attempting to start an autonomous transaction, the new TXID is equal to the old, and we get this error.
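For reference, the logging API followed the usual autonomous-transaction pattern. A minimal sketch (procedure and table names are hypothetical, not the actual code):

```sql
-- Minimal sketch of an autonomous logging procedure (names are made up).
-- On a read-only standby, the call fails with ORA-24757 when the
-- autonomous transaction is started, before any DML is attempted.
create or replace procedure log_msg (p_text in varchar2) as
  pragma autonomous_transaction;
begin
  insert into log_table (ts, text) values (systimestamp, p_text);
  commit;  -- an autonomous transaction must commit or roll back before returning
end;
/
```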


Returning Results from External Procedures by Reference

In the documentation for writing external C procedures (e.g. in the Database Development Guide, http://docs.oracle.com/database/121/ADFNS/adfns_externproc.htm#ADFNS726), it is stated that values must be returned from an external procedure by pointer, except for scalar types such as int. That leaves open which object to point to.

  • If you use a local variable, it will be allocated on the stack, and be gone when the procedure returns.
  • A static (or even global) variable is not good programming practice as it is not thread safe. That might mean that it could be overwritten by another session if you use a multi-threaded extproc agent.
  • If you use malloc, you have no way of releasing the space.

The documentation for OCIExtProcAllocCallMemory states: “Any memory allocated by the function is freed automatically as soon as control returns to PL/SQL.” If this means that the memory is still allocated when the result is passed back, you should be OK. The only documentation I could find on this is this old example from 9i: http://docs.oracle.com/cd/B10501_01/appdev.920/a96590/adg11rtn.htm#1003308 . This indeed suggests that this is the way to do it. I can find no sensible alternatives, and if you assume that logically there must be a way to do it (otherwise there is a design fault from Oracle’s side, and we know that that never happens 🙂 ), this must be it.
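Note that to use OCIExtProcAllocCallMemory, the C function must receive the OCI context, which means the PL/SQL call specification needs WITH CONTEXT. A sketch (library and function names are hypothetical):

```sql
-- Hypothetical call spec for a C function that allocates its return
-- value with OCIExtProcAllocCallMemory. WITH CONTEXT passes in the
-- OCIExtProcContext* that the allocation call requires.
create or replace function get_message return varchar2
as language c
   library my_extproc_lib
   name "get_message"
   with context
   parameters (context, return string);
/
```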

It is possible to have 40 significant digits in a NUMBER

I have known for a long time that the maximum number of digits in a NUMBER is 38. Indeed, according to the Oracle Documentation (e.g. http://docs.oracle.com/cd/E11882_01/server.112/e40402/limits001.htm#REFRN0041), the maximum precision of a NUMBER is 38 digits:

NUMBER
Maximum value: 999…(38 9’s) x 10^125, representable to full 38-digit precision (the mantissa)
Minimum value: -999…(38 9’s) x 10^125, representable to full 38-digit precision (the mantissa)

I just tried to insert 2^128 into a table, and to my surprise the full precision of the number was maintained, despite 2^128 having 39 decimal digits. So I decided for a test:

create table t (n number(3,0),x number(*,0));
select column_name,data_type,data_length,data_precision
from user_tab_columns where table_name='T';


An interesting point here is data_length=22. One byte is for the column length and one for the exponent, leaving 20 bytes for the mantissa, not the 19 expected if the precision were 38 digits. Since each mantissa byte holds two decimal digits, 20 bytes correspond to 40 digits.

Let’s try inserting some data:

begin
  for i in 0 .. 140 loop
    insert into t values (i, power(2, i));
  end loop;
end;
/

And look at the rows:

select n,to_char(x,'fm9999999999999999999999999999999999999999999') xx,
length(to_char(x,'fm9999999999999999999999999999999999999999999')) lx
from t
order by n;

0 1 1
1 2 1
2 4 1
…
126 85070591730234615865843651857942052864 38
127 170141183460469231731687303715884105728 39
128 340282366920938463463374607431768211456 39
129 680564733841876926926749214863536422912 39
130 1361129467683753853853498429727072845824 40
131 2722258935367507707706996859454145691648 40
132 5444517870735015415413993718908291383296 40
133 10889035741470030830827987437816582766600 41

Note that we only start losing precision when the length of the number exceeds 40 digits: 2^133 should end in 592, not 600.

In other words, it is indeed possible to have 40 digits in a NUMBER(*,0).

However, if you try to create a table with a NUMBER(40,0) column you get:
ORA-01727: numeric precision specifier is out of range (1 to 38)

If you have been told that NUMBER(38), NUMBER(38,0), INTEGER, and NUMBER(*,0) are equivalent, you have been lied to. Let’s try:

create table u (x integer,y number(*,0),z number,w number(*),p number(38,0),q number(38));
select column_name,data_type,data_length,data_precision,data_scale
from user_tab_columns where table_name='U';


INTEGER and NUMBER(*,0) are the same. NUMBER(38) and NUMBER(38,0) are the same. But these are different from each other and from NUMBER (and NUMBER(*)).

NUMBER(38) can hold integers up to 38 digits.

NUMBER(*,0) can hold any integer, but with loss of precision beyond 40 significant digits.

NUMBER can hold any number, but only accurate to 40 significant digits.
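A quick way to see the difference between the declared and the actual limits (a sketch; the table name is mine):

```sql
-- NUMBER(38,0) enforces the declared precision; NUMBER(*,0) does not.
create table prec_test (a number(38,0), b number(*,0));

insert into prec_test (b) values (power(2,128));  -- 39 digits: accepted, exact
insert into prec_test (a) values (power(2,128));  -- ORA-01438: value larger
                                                  -- than specified precision
```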

Freeing Empty Index Blocks with Automatic Segment Management

This post is a work in progress, just some observations with no conclusion (yet).

We have a table with an ID column generated from a sequence and an index on the ID column. From time to time, cleanup is done using basically

delete from t where id<x;

where x is a number that represents the oldest entry that should still be kept.

This keeps the table small; space gets reused. This also used to work fine for the index. After we switched to automatic segment space management, however, it seems this no longer works for the index: the index just continues to grow. Also, the delete statement seems to take longer and longer. I checked the execution plan; it uses an index range scan.

Quoting “Database Concepts” (version 11.2), “Logical Data Structures”, “Reuse of Index Space”:

The database can reuse an index block itself. Unlike a table block, an index block only becomes free when it is empty. The database places the empty block on the free list of the index structure and makes it eligible for reuse.

This obviously describes what happens when you use free list segment space management, implying that the empty block is unlinked from the b-tree.

Now the open question (and what I need to dig into) is: When using automatic segment space management, are empty index blocks left linked in the b-tree, and only recovered after the index is coalesced?
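In the meantime, one thing to try (the index name here is hypothetical) is to watch the segment size and coalesce the index, which merges the contents of sparse leaf blocks and frees the emptied blocks for reuse:

```sql
-- Hypothetical index name. Check the segment size, coalesce, re-check.
select bytes/1024/1024 as mb
from   user_segments
where  segment_name = 'T_ID_IDX';

alter index t_id_idx coalesce;
```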

Releasing Space to the File System

You no doubt have run into

ORA-01653: unable to extend table MY.TBL by 128 in tablespace TBS

or similar error messages to the same effect: you are out of space.

In the worst scenario, not only is the tablespace full, but it is also not possible to extend a data file, because the file system is full. It often happens that other tablespaces have plenty of free space, but that does not help you, as you need the space in a different tablespace. You need space released to the file system, so that you can extend your full tablespace.

Resize Datafile

So, what can you do? Short of dropping tablespaces completely, there is only one thing that reclaims space to the file system, the resize datafile command:

ALTER DATABASE DATAFILE {id|file name} RESIZE newsize;

A data file can be identified either by its ID or its name. In the following discussion, I will use the ID exclusively. You can shrink a data file only down to the end of its last extent. If you want to trim all your data files to their minimum size, you can find the value to substitute for newsize above with the following query:
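A sketch of such a query (my own reconstruction; it computes, per data file, the byte position just past the highest allocated extent):

```sql
-- Minimum possible size per data file: the end of the highest allocated
-- extent. The block size comes from the owning tablespace; a few header
-- blocks of slack may need to be added on top.
select e.file_id,
       max(e.block_id + e.blocks - 1) * t.block_size as min_bytes
from   dba_extents e
join   dba_tablespaces t
  on   t.tablespace_name = e.tablespace_name
group  by e.file_id, t.block_size;
```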


This of course only releases space at the end of each data file; it does not release free space in the middle of the file.

Moving Data

To free more space, you will need to move data at the end of the data file into the free space.

Note that this can be rather disruptive to your users, but on the other hand they cannot work anyway, because the file system is full. Then again, the users having trouble because of the full file system may very well be different from the ones being disturbed; after all, the latter have plenty of space in their tablespace.

After moving the data, you can resize the data file and reclaim the space. The only way this can be done is to move the entire segment into new extents. Depending on the type of segment, the following commands do that:
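Typical commands per segment type (a sketch; owner, object, and partition names are placeholders):

```sql
alter table scott.tab move;                   -- heap table
alter table scott.tab move partition p1;      -- table partition
alter index scott.ind rebuild;                -- index
alter index scott.ind rebuild partition p1;   -- index partition
```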


To move a LOB segment, LOB index, or nested table, you will need to move the table itself. You find the name of the table with the relevant one of these queries:

 SELECT OWNER, TABLE_NAME, COLUMN_NAME
   FROM DBA_LOBS
  WHERE OWNER=lobowner
    AND SEGMENT_NAME=lobsegment;

 SELECT OWNER, TABLE_NAME, COLUMN_NAME
   FROM DBA_LOBS
  WHERE OWNER=lobowner
    AND INDEX_NAME=lobindex;

 SELECT OWNER, PARENT_TABLE_NAME
   FROM DBA_NESTED_TABLES
  WHERE OWNER=nsttabown
    AND TABLE_NAME=nsttab;

I have seen in various posts references to an “ALTER TABLE MOVE LOB” command. This is a misunderstanding; it is really an “ALTER TABLE MOVE” command with a “LOB” option. This option allows you to specify different storage, such as a separate tablespace for the LOB column, but it does not allow moving just the LOB.

An index-organized table shows up as an index in DBA_EXTENTS, but if you try to rebuild it, you will get

ORA-28650: Primary index on an IOT cannot be rebuilt

If you run into this, look up the table name with

 SELECT TABLE_OWNER, TABLE_NAME
   FROM DBA_INDEXES
  WHERE OWNER=indowner
    AND INDEX_NAME=indname;

and move the table instead.

The remaining type of segment is cluster. It is not possible to move clusters.

Also be aware that moving a table makes its indexes unusable, so they will have to be rebuilt afterwards.

Reclaiming Space by Moving Objects to Another Tablespace

The most efficient way of reclaiming space is to create a new tablespace, move all the objects into it, drop the old tablespace, and rename the new. Be aware that there may be segment-less objects that are supposed to be created in the tablespace; they will have to be moved as well. Also, remember to change the default tablespace of partitioned tables and indexes.

Reclaiming Space by Moving Objects in the Same Tablespace

If your file system is full, the above may not be possible.

Another strategy is to move extents near the end of the tablespace into free space in the same tablespace. There is no way to do this directly, but it can be achieved indirectly by clever manipulation. It depends on second-guessing the strategy for allocating blocks for new extents. From what I can deduce from observation, one of the rules is that a new extent will be put in existing free space of one of the data files of the tablespace if possible. “Possible” means not just that there is free space, but also that the space is large enough to accommodate the intended size of the new extent.

The technique consists of moving the segments at the end of the data file into free space closer to the beginning, and trimming the data file. More precisely, it consists of repeatedly trimming the data file to its minimum size and moving the last segment. Since the data file is trimmed just prior to the move, there is no free space after the segment, so the segment will (hopefully) end up farther from the end.

It will often mean that the same segment is moved several times, so this can be costly in terms of I/O.

Querying the data dictionary to find the segments to move may actually be even more costly. I have found the following approach, which saves some rounds of data dictionary queries. The basic query is
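A sketch of such a query (my own reconstruction; TOP is the byte position just past the segment's highest extent in the file, so the file can be trimmed to TOP before the segment is moved):

```sql
-- Segments in data file :file_id, ordered from the top of the file down.
-- :blk_size is the block size of the owning tablespace.
select owner, segment_name, segment_type, partition_name,
       max(block_id + blocks - 1) * :blk_size as top
from   dba_extents
where  file_id = :file_id
group  by owner, segment_name, segment_type, partition_name
order  by top desc;
```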


You loop over the result set, and for each round, you

  1. Trim the data file to the point given by the TOP column of the query.
  2. Move the segment using the appropriate command as described above.

The hope is that each segment is moved below the other ones, but sooner or later, one extent of the moved segment will be put above the highest extent of the next segment to process. This happens because of free holes in the data file, some of which actually appear only when you move the segment. When that happens you run into

ORA-03297: file contains used data beyond requested RESIZE value

Your response is simply to rerun the query, and start over again. Sometimes, this error happens even for the first segment. I do not completely understand why, but I have found that adding a bit to the resize value (64k worked for me) before the data file was resized the first time, and then moving the segment as usual made it possible to continue.

The next question is “when to stop?”. You may have an idea when you start what data file size you may be able to achieve by looking at the total size and the amount of free space. But this can be misleading for two reasons, pulling in opposite directions:

  • Some of the free space may be small holes, too small for the allocation of new extents. It may not be possible to free this space.
  • When you move segments, you will reclaim free space in the data blocks, meaning that the segment will contain fewer data blocks when you are done. This means that it may be possible to free more space from the data file than the original free space.

If you automate the process, I recommend monitoring it (i.e. watching the size of the data file and the position of the last extent) and simply killing the job when it no longer seems to get anywhere.

If the tablespace has more than one data file, the data files are handled one by one.

Partition Exchange

Oracle lets you exchange a partition (or subpartition) of a table with another (non-partitioned) table. It even lets you exchange a subpartitioned partition with a partitioned table. The DDL statement to use is
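The general form looks like this (a sketch; the table and partition names are mine, and the INCLUDING INDEXES and validation clauses are optional and discussed below):

```sql
alter table sales
  exchange partition p2024
  with table sales_new
  including indexes
  without validation;
```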


Partition exchange can be useful in a number of situations, especially in the area of data warehousing. For example:

  • Data cleanup
  • Data warehouse refresh
  • Move data to a different schema (as the two tables may be in different schemas).

The basic workflow is that you create a table with the new contents of the partition and then exchange them. The advantage of this approach over e.g. truncating the table and inserting the rows is that the table is available to queries during the process. Several conditions need to be fulfilled for this mechanism to work.


The first is that the columns (in physical order) have the same type and size. The column names are irrelevant. This can usually be achieved by creating the table to be used for the new contents from the target table using
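That is, a create-table-as-select with a false predicate (a sketch; the names are mine):

```sql
-- Copies the column definitions of the target table without any rows.
create table sales_new as
select * from sales where 1 = 0;
```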


(I will refer to this statement as “CTAS*”.) The caveats here are hidden and virtual columns. A hidden column (e.g. from SET UNUSED) in the target table will not be part of the new table. Conversely, a virtual column will become a physical column in the new table. Both situations will prevent partition exchange. If you plan to use partition exchange for maintenance of a table, don’t give it virtual columns, and don’t remove columns with SET UNUSED.


The next is that the constraints are the same. Again, names are irrelevant, but state is important, i.e. whether the constraint is enabled and/or validated. Constraints that are neither enabled nor validated are ignored. This applies to

  • Not null constraints
  • Other check constraints
  • Primary key constraints
  • Unique constraints
  • Foreign key constraints

If the new table was created with CTAS*, then not null constraints that are enabled and validated are created automatically. Note that primary key and unique constraints are not interchangeable: if a constraint is a primary key on one table, it cannot be a unique constraint on the other, even if the columns are not null. The existence of a unique index on the columns is not sufficient either. The partition key(s) must be part of the primary/unique key.

Incoming foreign keys are another issue. If the table has any referencing foreign keys, they will be checked as part of the partition exchange operation. Rumor has it that if the child table is partitioned the same way, and if the corresponding partition is empty (i.e. has been truncated, or never had any rows inserted), then this step is skipped. I have not done any investigation into this matter. Partition by reference may also be of help here, but again I have no experience with this.


If the target table has indexes, they have to be handled as well. The options depend on whether an index is local (i.e. partitioned the same way as the table) or global.

Local Indexes

For local indexes, the simplest way is to add indexes to the new table, corresponding to the local indexes on the target table and then specify INCLUDING INDEXES on the EXCHANGE statement. Using this approach, the indexes are available immediately when the exchange takes place. The indexes must be created identically to the ones on the target table. This includes:

  • Columns and their order, including function based indexes.
  • Key compression.
  • Whether the index is unique, non-unique, or bitmap.

For bitmap indexes, there is further the complication that the Håkan factor must match. See my post Bitmap Indexes and the Håkan factor for what this means and for how to achieve the match. The solution requires the use of CTAS*, so it works only for tables with no virtual or hidden columns.

Global Indexes

For global indexes, there are basically two options: Either you rebuild them as part of the exchange operation, or you invalidate them. Either option results in some degree of unavailability. You also have the option to specify exactly what to rebuild; please refer to the Oracle documentation if you want to explore this.


You also have the option of choosing whether to check the partition key of every row during the exchange. If you use partition exchange as part of a maintenance process, you will normally know that all rows belong in the partition, and you can specify WITHOUT VALIDATION.

If you are not absolutely sure about it, you may specify WITH VALIDATION.


If you plan to use partition exchange as part of the day-to-day operations of a system, you should plan the database such that you (if at all possible):

  • Avoid virtual columns.
  • Use only local indexes.

Also, do not use SET UNUSED for structure changes.

One Little Trick

You may use the technique for a non-partitioned table as well. In this case, create the new table as a partitioned table with one partition. Be sure to put at least one column from every unique index into the partition key.
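A sketch of the trick (all names are mine): replace the contents of a non-partitioned table t by exchanging it with a one-partition copy.

```sql
-- One-partition staging table; the partition key (id) is part of the
-- primary key, as required.
create table t_new (
  id  number primary key,
  val varchar2(100)
)
partition by range (id) (partition p_all values less than (maxvalue));

-- Load t_new, create indexes matching those on t, then:
alter table t_new exchange partition p_all with table t including indexes;
```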