Returning Results from External Procedures by Reference

In the documentation for writing external C procedures (e.g. in the Database Development Guide, http://docs.oracle.com/database/121/ADFNS/adfns_externproc.htm#ADFNS726), it is stated that values must be returned from an external procedure by pointer, except for scalar types such as int. That leaves open the question of which object to point to.

  • If you use a local variable, it will be allocated on the stack, and be gone when the procedure returns.
  • A static (or even global) variable is not good programming practice, as it is not thread safe: it could be overwritten by another session if you use a multi-threaded extproc agent.
  • If you use malloc, you have no way of releasing the space.

The documentation for OCIExtProcAllocCallMemory states: “Any memory allocated by the function is freed automatically as soon as control returns to PL/SQL.” If this means that the memory is still allocated when the result is passed back, you should be OK. The only documentation I could find on this is this old example from 9i: http://docs.oracle.com/cd/B10501_01/appdev.920/a96590/adg11rtn.htm#1003308 . It indeed suggests that this is the way to do it. I can find no sensible alternatives, and if you assume that logically there must be a way to do it (otherwise there is a design fault on Oracle’s side, and we know that that never happens :-) ), this must be it.
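
For what it is worth, here is a minimal C sketch of the pattern the 9i example suggests (the function name and message are made up; only OCIExtProcAllocCallMemory and the context type come from the OCI headers):

#include <string.h>
#include <ociextp.h>   /* OCIExtProcContext, OCIExtProcAllocCallMemory */

/* Hypothetical external function returning a VARCHAR2 (external type STRING).
   The PL/SQL call spec must be declared WITH CONTEXT so that ctx is passed. */
char *make_greeting(OCIExtProcContext *ctx)
{
  const char *msg = "Hello from extproc";
  size_t      len = strlen(msg) + 1;           /* include the terminator */

  /* Call-duration memory: freed by Oracle after control returns to PL/SQL,
     but still valid while the result is copied back. */
  char *buf = (char *)OCIExtProcAllocCallMemory(ctx, len);

  if (buf != NULL)
    memcpy(buf, msg, len);

  return buf;                                  /* returned by pointer */
}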

It is possible to have 40 significant digits in a NUMBER

I have known for a long time that the maximum number of digits in a NUMBER is 38. Indeed, according to the Oracle Documentation (e.g. http://docs.oracle.com/cd/E11882_01/server.112/e40402/limits001.htm#REFRN0041), the maximum precision of a NUMBER is 38 digits:

NUMBER
  • Maximum value: 999…(38 9’s) x 10^125; can be represented to full 38-digit precision (the mantissa)
  • Minimum value: -999…(38 9’s) x 10^125; can be represented to full 38-digit precision (the mantissa)

I just tried to insert 2^128 into a table, and to my surprise the full precision of the number was maintained, despite 2^128 having 39 decimal digits. So I decided to run a test:

create table t (n number(3,0),x number(*,0));
select column_name,data_type,data_length,data_precision
from user_tab_columns where table_name='T';

COLUMN_NAME  DATA_TYPE  DATA_LENGTH  DATA_PRECISION
N            NUMBER     22           3
X            NUMBER     22

An interesting point here is data_length=22. One byte is for the column length and one for the exponent, leaving 20 bytes for the mantissa, not the 19 expected if the precision were 38 digits. Since each mantissa byte holds two decimal digits (the mantissa is stored base 100), 20 bytes correspond to 40 digits.

Let’s try inserting some data:

begin
  for i in 0 .. 140 loop
    insert into t values (i, power(2,i));
  end loop;
end;
/

And look at the rows:

select n,to_char(x,'fm9999999999999999999999999999999999999999999') xx,
length(to_char(x,'fm9999999999999999999999999999999999999999999')) lx
from t
order by n;

N XX LX
0 1 1
1 2 1
2 4 1
..
126 85070591730234615865843651857942052864 38
127 170141183460469231731687303715884105728 39
128 340282366920938463463374607431768211456 39
129 680564733841876926926749214863536422912 39
130 1361129467683753853853498429727072845824 40
131 2722258935367507707706996859454145691648 40
132 5444517870735015415413993718908291383296 40
133 10889035741470030830827987437816582766600 41
..

Note that we only start losing precision when the length of the number exceeds 40 digits: 2^133 should end in 592, not 600.

In other words, it is indeed possible to have 40 digits in a NUMBER(*,0).

However, if you try to create a table with a NUMBER(40,0) column you get:
ORA-01727: numeric precision specifier is out of range (1 to 38)

If you have been told that NUMBER(38), NUMBER(38,0), INTEGER, and NUMBER(*,0) are equivalent, you have been lied to. Let’s try:

create table u (x integer,y number(*,0),z number,w number(*),p number(38,0),q number(38));
select column_name,data_type,data_length,data_precision,data_scale
from user_tab_columns where table_name='U';

COLUMN_NAME  DATA_TYPE  DATA_LENGTH  DATA_PRECISION  DATA_SCALE
X            NUMBER     22                           0
Y            NUMBER     22                           0
Z            NUMBER     22
W            NUMBER     22
P            NUMBER     22           38              0
Q            NUMBER     22           38              0

INTEGER and NUMBER(*,0) are the same. NUMBER(38) and NUMBER(38,0) are the same. But these are different from each other and from NUMBER (and NUMBER(*)).

NUMBER(38) can hold integers up to 38 digits.

NUMBER(*,0) can hold any integer, but with loss of precision if more than 40 digits.

NUMBER can hold any number, but only accurate to 40 significant digits.
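
To see the difference in practice, a quick check against the table u from above (2^130 has 40 decimal digits) might look like this:

insert into u (y) values (power(2,130));  -- NUMBER(*,0): accepted
insert into u (p) values (power(2,130));  -- NUMBER(38,0): fails with ORA-01438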

Freeing Empty Index Blocks with Automatic Segment Management

This post is a work in progress, just some observations with no conclusion (yet).

We have a table with an ID column generated from a sequence and an index on the ID column. From time to time, cleanup is done using basically

delete from t where id<x;

where x is a number that represents the oldest entry that should still be kept.

This keeps the table small; space gets reused. It also used to work fine for the index. After we switched to automatic segment space management, however, this no longer seems to work for the index: the index just continues to grow. Also, the delete statement seems to take longer and longer. I checked the execution plan; it uses an index range scan.

Quoting “Database Concepts” (version 11.2), “Logical Data Structures”, “Reuse of Index Space”:

The database can reuse an index block itself. Unlike a table block, an index block only becomes free when it is empty. The database places the empty block on the free list of the index structure and makes it eligible for reuse.

This obviously describes what happens when you use free list segment space management, implying that the empty block is unlinked from the b-tree.

Now the open question (and what I need to dig into) is: When using automatic segment space management, are empty index blocks left linked in the b-tree, and only recovered after the index is coalesced?
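
One way to follow up is to look at the leaf-row statistics of the index after a cleanup run. A sketch, assuming an index T_ID_IX on t(id) (note that VALIDATE STRUCTURE locks the table while it runs):

analyze index t_id_ix validate structure;

select lf_rows, del_lf_rows, used_space, btree_space, pct_used
  from index_stats;

If DEL_LF_ROWS keeps growing from run to run, that would suggest the deleted space is not being reused.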

Releasing Space to the File System

You no doubt have run into

ORA-01653: unable to extend table MY.TBL by 128 in tablespace TBS

or similar error messages to the same effect: you are out of space.

In the worst scenario, not only is the tablespace full, but it is also impossible to extend a data file, because the file system is full. It often happens that other tablespaces have plenty of free space, but that does not help you, as you need the space in a different tablespace. You need space released to the file system so that you can extend your full tablespace.

Resize Datafile

So, what can you do? Short of dropping tablespaces completely, there is only one thing that reclaims space to the file system, the resize datafile command:

ALTER DATABASE DATAFILE {id|file name} RESIZE newsize;

A data file can be identified either by its ID or by its name. In the following discussion, I will use the ID exclusively. You can shrink a data file only down to the end of its last extent. If you want to trim all your data files to their minimum size, you can find which value to substitute for newsize above with the following query:

SELECT FILE_ID,MAX((BLOCK_ID+BLOCKS)*BYTES/BLOCKS) FROM DBA_EXTENTS GROUP BY FILE_ID;

This of course only releases space at the end of each data file; it does not release free space in the middle of the file.

Moving Data

To free more space, you will need to move data at the end of the data file into the free space.

Note that this can be rather disruptive to your users, but on the other hand they cannot work anyway, because the file system is full. Then again, the users having trouble because of the full file system may very well be different from the ones being disturbed; after all, the latter have plenty of space in their tablespace.

After moving the data, you can resize the data file and reclaim the space. The only way this can be done is to move the entire segment into new extents. Depending on the type of segment, the following commands do that:

ALTER TABLE tab MOVE {TABLESPACE tbs}
ALTER TABLE tab MOVE PARTITION part {TABLESPACE tbs}
ALTER TABLE tab MOVE SUBPARTITION subpart  {TABLESPACE tbs}
ALTER INDEX ind REBUILD {TABLESPACE tbs}
ALTER INDEX ind REBUILD PARTITION part {TABLESPACE tbs}
ALTER INDEX ind REBUILD SUBPARTITION subpart {TABLESPACE tbs}

To move a LOB segment, LOB index, or nested table, you will need to move the owning table. You find the name of the table with the relevant one of these queries:

SELECT TABLE_NAME
  FROM DBA_LOBS
 WHERE OWNER=lobowner
   AND SEGMENT_NAME=lobsegment;
SELECT TABLE_NAME
  FROM DBA_LOBS
 WHERE OWNER=lobowner
   AND INDEX_NAME=lobindex;
SELECT PARENT_TABLE_NAME
  FROM DBA_NESTED_TABLES
 WHERE OWNER=nsttabown
   AND TABLE_NAME=nsttab;

I have seen references in various posts to an “ALTER TABLE MOVE LOB” command. This is a misunderstanding; it is really an “ALTER TABLE MOVE” command with a “LOB” option. This option allows you to specify different storage, such as a separate tablespace for the LOB column, but it does not allow moving just the LOB.
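
For example, relocating a LOB still goes through a move of the whole table; something like this (table, column, and tablespace names are made up):

ALTER TABLE own.tab MOVE
  LOB (doc) STORE AS (TABLESPACE lob_tbs);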

An index-organized table shows up as an index in DBA_EXTENTS, but if you try to rebuild it, you will get

ORA-28650: Primary index on an IOT cannot be rebuilt

If you run into this, look up the table name by

SELECT TABLE_NAME
  FROM DBA_INDEXES
 WHERE OWNER=indowner
   AND INDEX_NAME=indname;

The remaining segment type is the cluster. It is not possible to move clusters.

Also be aware that moving a table makes its indexes unusable, so they will have to be rebuilt afterwards.
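
After a round of moves, a query along these lines lists the rebuilds needed for non-partitioned indexes (it only generates the statements; for partitioned indexes the status is tracked per partition in DBA_IND_PARTITIONS):

SELECT 'ALTER INDEX "' || OWNER || '"."' || INDEX_NAME || '" REBUILD;' CMD
  FROM DBA_INDEXES
 WHERE STATUS = 'UNUSABLE';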

Reclaiming Space by Moving Objects to Another Tablespace

The most efficient way of reclaiming space is to create a new tablespace, move all the objects into it, drop the old tablespace, and rename the new one. Be aware that there may be objects without segments that are defined to be created in the tablespace; they will have to be moved as well. Also, remember to change the default tablespace of partitioned tables and indexes.

Reclaiming Space by Moving Objects in the Same Tablespace

If your file system is full, the above may not be possible.

Another strategy is to move extents near the end of the tablespace into free space in the same tablespace. There is no way to do this directly, but it can be achieved indirectly by clever manipulation. It depends on second-guessing the strategy for allocating blocks to new extents. From what I can deduce from observation, one of the rules is that a new extent will be put in existing free space of one of the data files of the tablespace if possible. “Possible” means not just that there is free space, but also that the space is large enough to accommodate the intended size of the new extent.

The technique consists of moving the segments at the end of the data file into free space closer to the beginning, and trimming the data file. More precisely, it consists of repeatedly trimming the data file to its minimum size and moving the last segment. Since the data file is trimmed just prior to the move, there is no free space after the segment, so the segment will (hopefully) end up farther from the end.

It will often mean that the same segment is moved several times, so this can be costly in terms of I/O.

Querying the data dictionary to find the segments to move may actually be even more costly. I have found the following approach, which saves some rounds of data dictionary queries. The basic query is

SELECT OWNER,SEGMENT_NAME,PARTITION_NAME,SEGMENT_TYPE,
    MAX((BLOCK_ID+BLOCKS)*BYTES/BLOCKS) TOP
  FROM DBA_EXTENTS
 WHERE FILE_ID=fid
 GROUP BY OWNER,SEGMENT_NAME,PARTITION_NAME,SEGMENT_TYPE
 ORDER BY TOP DESC;

You loop over the result set, and for each round (see the sketch after the list), you

  1. Trim the data file to the point given by the TOP column of the query.
  2. Move the segment using the appropriate command as described above.
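
Putting the two steps together, here is a minimal PL/SQL sketch. It handles only plain (non-partitioned) tables and indexes and assumes a hypothetical file ID of 4; partitioned segments, LOBs, IOTs, and clusters need the commands described earlier, and indexes left unusable by table moves are not rebuilt here.

declare
  fid constant number := 4;   -- hypothetical data file to shrink
begin
  for r in (select owner, segment_name, partition_name, segment_type,
                   max((block_id + blocks) * bytes / blocks) top
              from dba_extents
             where file_id = fid
             group by owner, segment_name, partition_name, segment_type
             order by top desc)
  loop
    -- 1. Trim the file to the end of this segment's highest extent.
    --    This raises ORA-03297 once a moved segment has landed above it;
    --    rerun the whole block when that happens.
    execute immediate 'alter database datafile ' || fid || ' resize ' || r.top;

    -- 2. Move the segment.
    if r.segment_type = 'TABLE' then
      execute immediate 'alter table "' || r.owner || '"."' ||
                        r.segment_name || '" move';
    elsif r.segment_type = 'INDEX' then
      execute immediate 'alter index "' || r.owner || '"."' ||
                        r.segment_name || '" rebuild';
    end if;
  end loop;
end;
/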

The hope is that each segment is moved below the other ones, but sooner or later, one extent of the moved segment will be put above the highest extent of the next segment to process. This happens because of free holes in the data file, some of which actually appear only when you move the segment. When that happens you run into

ORA-03297: file contains used data beyond requested RESIZE value

Your response is simply to rerun the query and start over again. Sometimes this error happens even for the first segment. I do not completely understand why, but I have found that adding a little (64K worked for me) to the resize value before resizing the data file the first time, and then moving the segment as usual, made it possible to continue.

The next question is: when do you stop? You may have an idea, when you start, of what data file size you should be able to achieve, by looking at the total size and the amount of free space. But this can be misleading, for two reasons pulling in opposite directions:

  • Some of the free space may be small holes, too small for the allocation of new extents. It may not be possible to free this space.
  • When you move segments, you will reclaim free space in the data blocks, so the segment will contain fewer data blocks when you are done. This means that it may be possible to free more space from the data file than the original amount of free space suggests.

If you automate the process, I recommend monitoring it (i.e. watching the size of the data file and the position of the last extent) and simply killing the job when it no longer seems to get anywhere.

If the tablespace has more than one data file, the data files are handled one by one.

Partition Exchange

Oracle lets you exchange a partition (or subpartition) of a table with another (non-partitioned) table. It even lets you exchange a subpartitioned partition with a partitioned table. The DDL statement to use is

ALTER TABLE … EXCHANGE PARTITION … WITH TABLE …

Partition exchange can be useful in a number of situations, especially in the area of data warehousing. For example:

  • Data cleanup
  • Data warehouse refresh
  • Move data to a different schema (as the two tables may be in different schemas).

The basic workflow is that you create a table with the new contents of the partition and then exchange the two. The advantage of this approach over e.g. truncating the table and inserting the rows is that the table remains available to queries during the process. Several conditions need to be fulfilled for this mechanism to work.
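
As a sketch of that workflow (SALES, its partition P2023, and the staging table SALES_STAGE are made-up names):

CREATE TABLE sales_stage AS
  SELECT * FROM sales WHERE 1 = 0;   -- same physical column layout

-- ... load sales_stage and create indexes matching the local ones ...

ALTER TABLE sales
  EXCHANGE PARTITION p2023 WITH TABLE sales_stage
  INCLUDING INDEXES WITHOUT VALIDATION;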

Columns

The first condition is that the columns (in physical order) have the same types and sizes. The column names are irrelevant. This can usually be achieved by creating the table to be used for the new contents from the target table using

CREATE TABLE … AS SELECT * FROM …

(“CTAS*”). The caveats here are hidden and virtual columns. A hidden column (e.g. left over from SET UNUSED) in the target table will not be part of the new table. Conversely, a virtual column will become a physical column in the new table. Both of these situations will prevent partition exchange. If you plan to use partition exchange for maintenance of a table, don’t give it virtual columns, and don’t remove columns with SET UNUSED.

Constraints

The next condition is that the constraints are the same. Again, names are irrelevant, but state is important, i.e. whether the constraint is enabled and/or validated. Constraints that are neither enabled nor validated are ignored. This applies to

  • Not null constraints
  • Other check constraints
  • Primary key constraints
  • Unique constraints
  • Foreign key constraints

If the new table was created with CTAS*, then the not null constraints that are enabled and validated are created automatically. Note that primary key and unique constraints are not interchangeable: if a constraint is a primary key on one table, it cannot be a unique key on the other, even if the columns are not null. The existence of a unique index on the columns is not sufficient either. The partition key(s) must be part of the primary/unique key.

Incoming foreign keys are another issue. If the table has any referencing foreign keys, they will be checked as part of the partition exchange operation. Rumor has it that if the child table is partitioned the same way, and if the corresponding partition is empty (i.e. has been truncated, or never had any row inserted), then this check is skipped. I have not done any investigation into this matter. Partition by reference may also be of help here, but again I have no experience with this.

Indexes

If the target table has indexes, they have to be handled as well. The options depend on whether an index is local (i.e. partitioned the same way as the table) or global.

Local Indexes

For local indexes, the simplest way is to add indexes to the new table corresponding to the local indexes on the target table, and then specify INCLUDING INDEXES in the EXCHANGE statement. With this approach, the indexes are available immediately when the exchange takes place. The indexes must be created identically to the ones on the target table. This includes:

  • Columns and their order, including function based indexes.
  • Key compression.
  • Whether the index is unique, non-unique, or bitmap.

For bitmap indexes, there is the further complication that the Håkan factor must match. See my post Bitmap Indexes and the Håkan Factor for what this means and for how to achieve the match. The solution requires the use of CTAS*, so it works only for tables with no virtual or hidden columns.

Global Indexes

For global indexes, there are basically two options: Either you rebuild them as part of the exchange operation, or you invalidate them. Either option results in some degree of unavailability. You also have the option to specify exactly what to rebuild; please refer to the Oracle documentation if you want to explore this.

Validation

You also have a choice of whether to check, during the exchange, that every row actually belongs in the partition. If you use partition exchange as part of a maintenance process, you will normally know that this is the case, and you can specify WITHOUT VALIDATION.

If you are not absolutely sure about it, you may specify WITH VALIDATION.

Recommendations

If you plan to use partition exchange as part of the day-to-day operation of a system, you should plan the database such that you (if at all possible):

  • Avoid using virtual columns.
  • Use only local indexes.

Also, do not use SET UNUSED for structure changes.

One Little Trick

You may use this technique for a non-partitioned table as well. In this case, create the new table as a partitioned table with one partition. Be sure to put at least one column from every unique index into the partition key.
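
A sketch of this (all names made up; t is assumed to have a numeric id column that is also part of its unique index):

CREATE TABLE t_stage
  PARTITION BY RANGE (id) (PARTITION p_all VALUES LESS THAN (MAXVALUE))
  AS SELECT * FROM t WHERE 1 = 0;

-- ... load t_stage with the new contents, create matching indexes ...

ALTER TABLE t_stage
  EXCHANGE PARTITION p_all WITH TABLE t
  INCLUDING INDEXES WITHOUT VALIDATION;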

Query for Validation of Check Constraint

I have had some cases where validation of an enabled NOT NULL constraint did not use the bitmap index on the column. This is not easily checked in advance, as EXPLAIN PLAN does not work on DDL statements. But the DDL statement actually runs an ordinary query behind the scenes, and you can use EXPLAIN PLAN on that. You have to know exactly which query it is in order to get a reliable execution plan.

While validating a NOT NULL constraint on an 11.2 Oracle Database, I looked into V$SQL to find the actual query being run.

The constraint had been added non-validated to the table using

ALTER TABLE OWN.TAB MODIFY (COL CONSTRAINT C_TAB_COL_N NOT NULL ENABLE NOVALIDATE)

Then it was validated using the DDL statement:

ALTER TABLE OWN.TAB MODIFY CONSTRAINT C_TAB_COL_N VALIDATE

Resulting query in V$SQL:

select /*+ all_rows ordered */ A.rowid, :1, :2, :3 from "OWN"."TAB" A where not ( "COL" IS NOT NULL)

The text inside the parentheses is exactly the contents of the SEARCH_CONDITION column in DBA_CONSTRAINTS, so I expect that this form is used for the validation of all enabled check constraints. Of course, this may be different in other versions of Oracle Database.
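
So, to check the plan ahead of time, you can explain the equivalent query yourself (names taken from the example above; the bind variables are not needed for the plan):

explain plan for
  select /*+ all_rows ordered */ rowid
    from "OWN"."TAB"
   where not ("COL" is not null);

select * from table(dbms_xplan.display);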

Bitmap Indexes and the Håkan Factor

ORA-14642: Bitmap index mismatch

You have created a new table with the same column structure as the existing one, filled it with data, and created indexes to match the local ones on the existing table, and then attempted ALTER TABLE … EXCHANGE PARTITION … WITH TABLE … INCLUDING INDEXES, only to fail with this error. So, what is mismatched?

The likely answer is that the Håkan factor is different between the two tables. Håkan factor?? What is that, and what has that to do with bitmap indexes?

In Oracle Database, the term “Håkan factor” (sometimes “Hakan factor”) denotes the maximum number of rows in a data block of a table. This is relevant for bitmap indexes, because they represent each data block by a number of bits, one for each possible row in the block. So the Håkan factor is the number of bits allocated for each data block. This number is the same for all bitmap indexes on a table, allowing bit operations (e.g. AND/OR) between indexes.

For partition exchange to work with bitmap indexes, the Håkan factor of the two tables must be the same.

When the first bitmap index is created on a table, Oracle calculates how many rows could possibly be in a data block. This is done by finding the minimum size of a row, determined by things like the data types and NULL/NOT NULL settings of the columns. Even if you later e.g. add columns to the table, the number will not change.

ALTER TABLE MINIMIZE RECORDS_PER_BLOCK

There is a different way of calculating the Håkan factor. The ALTER TABLE … MINIMIZE RECORDS_PER_BLOCK command finds the actual maximum number of rows in a data block of the table and sets the Håkan factor accordingly. If done this way, the Håkan factor becomes a limit that is enforced for future INSERT statements. Note that this command can only be used when no bitmap index exists on the table.
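
For example (hypothetical table and index names; remember that this must be done before the first bitmap index is created):

ALTER TABLE my_tab MINIMIZE RECORDS_PER_BLOCK;
CREATE BITMAP INDEX my_tab_bix ON my_tab (status);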

The purpose of doing this is to make bitmap indexes take up fewer bits. If the Håkan factor is too large, a number of zero bits will be left at the end of each block’s bitmap. As zero bits are compressed when the bitmap index is stored, these do not matter much for disk space usage. But there may be an impact on memory usage when doing bitmap operations during bitmap index access. I do not know the actual code, so I do not know this for sure.

SYS.TAB$.SPARE1

The Håkan factor is not found in any standard data dictionary views, such as DBA_TABLES. You can find it in the SPARE1 column (a NUMBER) of SYS.TAB$. This is what I have been able to deduce about its contents by experimenting on a 10g database. It consists of a number of bit fields:

  • Lower bits (at least 12, perhaps as many as 15): Håkan factor. The largest number I have seen was just above 2k, on a database with 32k block size. I have never seen any table where the next 3 bits of this column were non-zero. Since 32k is the largest possible block size, and since a row must be at least 1 byte long, there is no way the number can exceed 15 bits. On the other hand, I have come across another limit of 8k, so perhaps 2 bits have a different meaning.
  • 0x08000: MINIMIZE RECORDS_PER_BLOCK in effect
  • 0x10000: Seems to mean that the Håkan factor has been fixed higher than the value calculated from the column definitions. This could be the case e.g. if it was set by event 14529 (see below) or if NOT NULL columns have been added after a bitmap index was created.
  • 0x20000: Table compression is enabled

The SYS.TAB$ table uses the object ID as its key, in the OBJ# column. It matches the OBJECT_ID column found in the xxx_OBJECTS data dictionary views. For example, you can use a query like the following to get the Håkan factor and the above bits:

select object_name, to_char(spare1,'0xxxxx') spare1_hex, bitand(spare1,32767) hakan_factor
  from user_objects o, sys.tab$ t
 where object_name = 'MYTABLE'
   and object_type = 'TABLE'
   and o.object_id = t.obj#;

The Solution: Event 14529

So, how do you make sure that the table you want to exchange for a partition has the right Håkan factor? There is no way you can set the Håkan factor manually, but there is a trick: You can copy the Håkan factor from the original table to the new one when creating it. To do so:

  1. Make sure that event 14529 is set in the session.
  2. Create the table as SELECT * from the original table.

For example:

ALTER SESSION SET EVENTS '14529 TRACE NAME CONTEXT FOREVER, LEVEL 1';
CREATE TABLE TMP AS SELECT * FROM PART_TBL WHERE 0=1;
ALTER SESSION SET EVENTS '14529 TRACE NAME CONTEXT OFF';

This construct ensures that the Håkan factors of the two tables are equal. Then go on filling the table, creating indexes, etc. Now the partition exchange will work.

Oracle states that:

  1. The event must be set only during a CREATE TABLE … AS SELECT * …
  2. The event has no effect outside a CREATE TABLE … AS SELECT * …