The dump file must be on a disk big enough to hold all the data being written. If there is insufficient space for all of the data, then an error is returned for the CREATE TABLE AS SELECT statement. One way to alleviate the problem is to create multiple files in multiple directory objects (assuming those directories are on different disks) when executing the CREATE TABLE AS SELECT statement. Multiple files can be created by specifying multiple locations in the form directory:file in the LOCATION clause and by specifying the PARALLEL clause. Each parallel I/O server process that is created to populate the external table writes to its own file. The number of files in the LOCATION clause should match the degree of parallelization, because each I/O server process requires its own file. Any extra files that are specified are ignored. If there are not enough files for the degree of parallelization specified, then the degree of parallelization is lowered to match the number of files in the LOCATION clause.
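For example, a minimal sketch of the directory:file form, assuming two hypothetical directory objects dump_dir1 and dump_dir2 that map to separate disks (the table and file names are also illustrative):

-- Write two dump files to two different directory objects;
-- with PARALLEL 2, each I/O server process writes to its own file.
CREATE TABLE inventories_xt_split
ORGANIZATION EXTERNAL
(
  TYPE ORACLE_DATAPUMP
  DEFAULT DIRECTORY dump_dir1
  LOCATION ('dump_dir1:inv_a.dmp', 'dump_dir2:inv_b.dmp')
)
PARALLEL 2
AS SELECT * FROM oe.inventories;

Because two files are listed and PARALLEL 2 is specified, the two I/O server processes write to files on separate disks, spreading the space requirement across both.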
Here is an example of unloading the inventories table into three files.
SQL> CREATE TABLE inventories_XT_3
  2  ORGANIZATION EXTERNAL
  3  (
  4    TYPE ORACLE_DATAPUMP
  5    DEFAULT DIRECTORY def_dir1
  6    LOCATION ('inv_xt1.dmp', 'inv_xt2.dmp', 'inv_xt3.dmp')
  7  )
  8  PARALLEL 3
  9  AS SELECT * FROM oe.inventories;

Table created.
When the ORACLE_DATAPUMP access driver is used to load data, parallel processes can read multiple dump files or even chunks of the same dump file concurrently. Thus, data can be loaded in parallel even if there is only one dump file, as long as that file is large enough to contain multiple file offsets. The degree of parallelization is not tied to the number of files in the LOCATION clause when reading from ORACLE_DATAPUMP external tables.
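As a minimal sketch of such a parallel read, assuming the inventories_XT_3 table created above and a hypothetical target table inventories_copy, the load can request a degree of parallelism higher than the number of dump files:

-- Load the unloaded data back into a regular heap table.
-- The degree of parallelism (6) is illustrative; it does not need
-- to match the three dump files behind inventories_XT_3.
CREATE TABLE inventories_copy
PARALLEL 6
AS SELECT * FROM inventories_XT_3;

Here the parallel query processes divide the dump files among themselves, reading different files or different offsets within the same file concurrently.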