Comment 1 for bug 1417337

Revision history for this message
Anoop Sharma (anoop-sharma) wrote :

Inserts into a table with indexes are transactional inserts even if load syntax is used.
They are running into some kind of DTM cache/memory limit due to
the large number of rows being inserted within a single transaction.
Inserting that many rows within one transaction is never recommended.

This issue has been seen before, and logs can provide more info
on what is happening, but there isn't going to be any quick remedy.
It ultimately gets stuck on underlying system issues: region splits,
data flushes, memory overflow, and the like.

To insert that many rows, one should first insert them
into the table without the index, which will use load to do the insert.
Once that insert is done, one should create the index on the table, which
will then internally use load to populate the index.
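A minimal sketch of that two-step workflow, assuming Trafodion's LOAD syntax and hypothetical table/index/column names (SALES, STAGING, SALES_IDX, CUSTOMER_ID):

```sql
-- Step 1: bulk-load the rows while the table has no index,
-- so the load path is used instead of transactional inserts.
-- (Table names here are hypothetical.)
LOAD INTO sales SELECT * FROM staging;

-- Step 2: create the index afterwards; populating the index
-- then internally uses load rather than per-row transactions.
CREATE INDEX sales_idx ON sales (customer_id);
```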

Alternatively, one can disable the index, load the table, and enable the
index afterwards (disable-index support is not yet in). 'upsert using load'
can also do that internally. There may be an unexternalized option to
do that. Khaled can comment on that.
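For reference, the 'upsert using load' path mentioned above looks roughly like this (hypothetical names; whether the index is populated non-transactionally may depend on the unexternalized option noted above):

```sql
-- Hypothetical table names; 'upsert using load' takes the
-- load path instead of issuing transactional per-row inserts.
UPSERT USING LOAD INTO sales SELECT * FROM staging;
```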

When we have support for incremental loads, that may make it easier to load
incremental data into a table in non-transactional mode.

We can probably give a warning or an error if one tries to do large
inserts on a table with an index.