Release Notes Oracle Data Integrator
Oracle Data Integrator Marketplace 14.1.2.0.x Issues and Workarounds
Use this information to understand the known issues related to Oracle Data Integrator
(ODI) Marketplace 14.1.2.0.x and their workarounds.
ODI MP 14.1.2.0.x repositories must be on Oracle Database. MySQL-based ODI
repositories are not supported.
Currently, the option to create an embedded repository is not supported in the ODI MP
14.1.2.0.x image.
This section contains information on the following issues:
• Connection to Data Server Fails if Existing ODI 12.2.1.4 MP Repository is used
During Provisioning
• Data Server is not Created by Default if Oracle Database 23ai is used for
Repository Creation
• Default Data Server JDBC URL has Incomplete Connection Details
• Agent does not Start Automatically if Instance is Created Using Existing
Repository
• ODI Agent Fails to Update Schedule Time
• Discover ADBs Feature Fails to Fetch List of Available Autonomous Database
Instances
• Oracle Object Storage Data Server Missing Post Upgrade
Connection to Data Server Fails if Existing ODI 12.2.1.4 MP Repository is used During Provisioning
As a workaround:
1. Login to ODI Studio.
2. In the Topology navigator, click Technologies -> Oracle and select the data server.
3. Click the JDBC tab and delete the following from the Properties table:
oracle.net.wallet_location
oracle.net.ssl_server_dn_match
4. In the Definition tab, click the browse icon beside the Credential File text box and
browse to select the wallet for the data server from the /u01/oracle/mwh/wallets
directory. The Connection Details text box appears.
5. Choose the connection URL from the Connection Details drop-down list.
6. Enter the credentials required to open the wallet file.
The JDBC URL and JDBC driver details are auto-populated with the credentials
retrieved from the wallet file.
7. In the JDBC tab, validate whether the JDBC connection URL is auto-populated.
8. Click Save to save the Data Server details.
9. Click Test Connection to test the connection.
The test connection will complete successfully. [38015158]
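If you are unsure which wallet to choose in step 4, you can first list the available wallets over SSH; a minimal sketch using the wallet directory named above:
ssh opc@<IP Address>
ls -l /u01/oracle/mwh/wallets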
Default Data Server JDBC URL has Incomplete Connection Details
When you create an ODI instance, the default data server is created automatically with
pre-populated connection details. You only need to provide the username and
password for the created instance to connect to the data server.
However, the default data server that is created for an ODI MP 14.1.2.0.x instance has
jdbc:oracle:thin:@null in the JDBC URL. Note that this issue does not occur for
newly created data servers.
To populate the correct JDBC URL,
1. Login to ODI Studio.
2. In Topology navigator -> Technologies -> Oracle, select the data server.
3. In the Definition tab, click the browse icon beside the Credential File text box and
browse to select the wallet for the data server from the /u01/oracle/mwh/wallets
directory.
The Connection Details text box appears.
4. Choose the connection URL from the Connection Details drop-down list.
5. Enter the credentials required to open the wallet file.
The JDBC URL and JDBC driver details are auto-populated with the credentials
retrieved from the wallet file.
6. In the JDBC tab, validate whether the JDBC connection URL is auto-populated.
7. Click Save to save the Data Server details.
8. Click Test Connection to test the connection.
The JDBC URL details will be saved correctly. [37993161]
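For reference, a correctly populated wallet-based JDBC URL typically resembles the following; the service name and wallet folder here are placeholders, not values from your instance:
jdbc:oracle:thin:@<db_name>_high?TNS_ADMIN=/u01/oracle/mwh/wallets/<wallet_folder>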
ODI Agent Fails to Update Schedule Time
When you change the schedules for running any mappings, packages, or load plans,
the ODI agent fails to update the schedule time.
To fix this,
1. Close ODI Studio.
2. Stop any Agent service that is running.
a. Log in to the provisioned ODI instance on Oracle Cloud Marketplace using
SSH as opc user:
ssh opc@<IP Address>
b. Run the stop functionality using systemctl.
sudo systemctl stop manageappsodi.service
3. Under /u01/oracle/mwh/odi/common/, create the directory path odi-ff/MP.
4. Create an ffdefinition.config file in /u01/oracle/mwh/odi/common/odi-ff/MP.
5. Add the following content to the ffdefinition.config file:
#Features config
repo-misfire-schedule-retry=false
6. Start the Agent service.
a. Log in to the provisioned ODI instance on Oracle Cloud Marketplace using
SSH as opc user:
ssh opc@<IP Address>
b. Run the start functionality using systemctl.
sudo systemctl start manageappsodi.service
7. Start ODI Studio using the following command:
Windows: odi.exe -clean -initialize
UNIX: ./odi.sh -clean -initialize
The schedule time is updated successfully. [37987790]
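Steps 2 through 6 above can also be run as a single shell session. This is a minimal sketch, assuming the paths and service name shown in the procedure; review it before running on your instance:
# Stop the agent service
sudo systemctl stop manageappsodi.service
# Create the feature-flag directory and config file
mkdir -p /u01/oracle/mwh/odi/common/odi-ff/MP
cat > /u01/oracle/mwh/odi/common/odi-ff/MP/ffdefinition.config <<'EOF'
#Features config
repo-misfire-schedule-retry=false
EOF
# Restart the agent service
sudo systemctl start manageappsodi.service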
Discover ADBs Feature Fails to Fetch List of Available Autonomous Database Instances
As a workaround, add the following VM options to the ODI Studio configuration file (odi.conf):
AddVMOption -Djavax.net.ssl.trustStore=$JAVA_HOME/lib/security/cacerts
AddVMOption -Djavax.net.ssl.trustStorePassword=changeit
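To confirm that the referenced trust store exists and is readable, you can inspect it with keytool; a minimal sketch, assuming the default JDK trust store password (changeit) used above:
keytool -list -keystore $JAVA_HOME/lib/security/cacerts -storepass changeit | head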
Oracle Object Storage Data Server Missing Post Upgrade
As a workaround, use Smart Import to import the Smart Export file that contains the Oracle Object Storage data server:
a. Log in to the ODI MP 14.1.2.0.x instance.
b. Open ODI Studio.
c. From the Topology Navigator toolbar, select Import....
d. In the Import Selection dialog, select Smart Import.
e. In the File Selection field, enter the location of the Smart Export file to import.
f. Click Next.
g. In the Enter Export Key dialog provide the export key that you used for the
export process.
h. Click Finish to start the import process.
In ODI Studio, navigate to Topology -> Technologies -> Oracle Object Storage to verify
that the data server is created. [38037301]
When you perform the following steps:
• Enable GIT/Subversion
• Enable wallet
• Create connection to GIT/Subversion
• Add mapping to VCS
• Modify mapping
and then terminate ODI Studio and start it again to create a version for a mapping
including dependencies, you get a null pointer error.
As a workaround:
• Navigate to Team -> Settings -> Edit Connection and click OK.
The wallet password dialog appears.
• Enter the wallet password and then create the version with dependencies.
You can now successfully create a version for the mapping, including dependencies. [25168395]
Update any code that imports the Apache Commons Lang 2 packages to use the Commons Lang 3 package names instead. For example, change:
import org.apache.commons.lang.ArrayUtils;
to
import org.apache.commons.lang3.ArrayUtils;
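If many scripts require the same change, the package prefix can be rewritten in bulk. A hypothetical sketch (the scripts/ directory is a placeholder; review the rewritten files before use):
grep -rl 'org\.apache\.commons\.lang\.' scripts/ | xargs sed -i 's/org\.apache\.commons\.lang\./org.apache.commons.lang3./g'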
A JSON payload must not contain null as a value. As a workaround, replace:
• a string value with "null", "n/a", or any logical value in double quotation marks
• an integer value with the value 0 (zero)
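For example, a hypothetical payload such as:
{"name": null, "count": null}
could be rewritten as:
{"name": "n/a", "count": 0}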
studio. This behavior is observed when a large number of child nodes are associated
with a parent node. As a workaround, avoid repeated refresh-on-save operations, and
limit the number of child nodes inside a folder to a maximum of 200 to avoid
performance issues. [27395959]
• Log Level and Log File Not Displayed in the Complex File Dataserver Properties
• BinaryType Data Type Not Supported in Spark 1.1
• Hive Complex Datatypes Not Supported by LKM Spark to Hive
• Spark Execution Supports only YARN Deployment
• Spark-Cassandra: Permission Errors in YARN-client mode
• Known Datatype Issues using Spark 1.6
• Unable to Store Alias Error in Pig
• KMs Replaced During Repository Upgrade
• Erroneously Published SDK API Classes Removed from the 12c Javadocs
• CKM Fails with XML and Complex Files When Database is Set to External
• Flexfields Tab of KM Editor May Not Display Newly Created KMs
This is caused by Hive bugs HIVE-5672 and HIVE-6410, which cause the INSERT
OVERWRITE statement to fail when writing to HDFS. Note that these Hive bugs have
already been fixed; the issue is resolved by upgrading to a recent version of CDH or
Hortonworks. [21529011]
Log Files are Deleted Even in Case of Failure when Using the
OdiOSCommand on Oozie
Many KMs that use OdiOSCommand use the OUT_FILE/ERR_FILE parameters to
redirect output into log files. The directory for such files is based on the KM option
TEMP_DIR, which uses a default value of System.getProperty("java.io.tmpdir").
This causes ODI on Oozie to use an Oozie job temporary directory, which gets
cleaned up on job completion, irrespective of whether the job was successful. This
results in the log files not being available after execution.
As a workaround, when executing on Oozie, override the KM option TEMP_DIR with a
specific temporary directory. [21232650]
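For example, the KM option might be overridden with a directory that survives Oozie job cleanup; the path below is a placeholder:
TEMP_DIR = /tmp/odi_oozie_logs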
The issue occurs on pure CDH5.4.0+ pseudo/multi node clusters.
As a workaround,
1. Make sure that oozie share lib is already created using the following command:
oozie-setup sharelib create -fs hdfs:///user/oozie -locallib <path to local folder [oozie-sharelib-yarn]>
Note:
Folder oozie-sharelib-yarn is local to the Oozie setup. After creating the
sharelib, you can verify the sharelib on HDFS at the location
hdfs:///user/oozie/share/lib/lib_<timestamp>.
2. Add the following properties to oozie-site.xml. These properties are needed for
Oozie to obtain the Hadoop configuration files to access HDFS. In the first property
value, add the path after "*=".
<property>
<name>oozie.service.HadoopAccessorService.hadoop.configurations</name>
<value>*=<replace_this_with_path_to_hadoop_configuration_folder For Example: /etc/hadoop/conf></value>
</property>
<property>
<name>oozie.service.WorkflowAppService.system.libpath</name>
<value>hdfs:///user/oozie/share/lib</value>
</property>
For example:
hdfs dfs -mkdir -p /opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars
hdfs dfs -copyFromLocal /opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/* /opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars
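After these steps, you can verify that the sharelib is registered; a minimal sketch, assuming a default Oozie server URL:
oozie admin -oozie http://localhost:11000/oozie -shareliblist
hdfs dfs -ls /user/oozie/share/lib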
XKM SQL Distinct Limitation
When a mapping is created with Oracle as source and Oracle as target using a
Distinct component and the XKM SQL Distinct is selected in the DISTINCT node, the
mapping fails and the following error is displayed:
The physical node DISTINCT_ cannot be supported by technology Oracle on
execution unit src_UNIT of mapping Mapping New_Mapping[11] owning
folder=ODIOGG.First Folder
To resolve this issue, upgrade the topology information so Support Distinct Operator is
set to True. [20234590]
Log Level and Log File Not Displayed in the Complex File
Dataserver Properties
When creating a Complex File dataserver, the log level (ll) and log file (lf) properties
are not displayed in the Properties tab. [20377218]
Spark Execution Supports only YARN Deployment
It is recommended to run Spark applications on YARN because ODI supports only
yarn-client and yarn-cluster execution modes and performs a runtime check. Switch to
YARN execution if you have been using other Spark execution modes. [24846472]
If switching to YARN execution mode is not possible or you wish to continue with
unsupported Spark execution modes, the following DataServer property must be
added to the Spark DataServer:
odi.spark.enableUnsupportedSparkModes = true
Note that no Support Requests can be raised for unsupported Spark execution modes.
Spark-Cassandra: Permission Errors in YARN-client mode
py4j.protocol.Py4JJavaError: An error occurred while calling o140.jdbc.
: java.sql.SQLException: [FMWGEN][Cassandra JDBC Driver]
[Cassandra]Unable to create local database file: $$ The cause: $$
This error is often caused by the driver not having write access to the target directory.
[24928801]
Known Datatype Issues using Spark 1.6
• Use of extended TIMESTAMP and INTERVAL datatypes such as TIMESTAMP WITH
TIME ZONE, TIMESTAMP WITH LOCAL TIME ZONE, INTERVAL DAY TO SECOND, and
INTERVAL YEAR TO MONTH will cause the following errors:
py4j.protocol.Py4JJavaError:
An error occurred while calling o43.jdbc.:
java.sql.SQLException: Unsupported type -101
Unable to Store Alias Error in Pig
If the mapping execution in Pig fails and the Unable to store alias error is
displayed, the pig.optimizer.rules.disabled property for the Pig server should be
set to FilterLogicExpressionSimplifier. [20520865]
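If it is not already present, the property can be added to the Pig server as a key/value pair; a minimal sketch using the value named above:
pig.optimizer.rules.disabled=FilterLogicExpressionSimplifier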
Erroneously Published SDK API Classes Removed from the 12c Javadocs
• IModelObjectChange
• IModelObjectChange.ChangeType
• IObjectAdapterFactory
• LocationAdapterBase
• MapAttribute.ConnectionTypeInfo
• MapAttribute.ConnectionTypeSelector
• MapAttribute.DefaultConnectionTypeSelector
• MapComponent
• MapComponentOwner
• MapComponentType.uidef
• MapPhysicalDesign.ContextualComponentTreeNode
• MapPhysicalDesign.ExecutionUnitConfiguration
• MapPhysicalDesign.ExecutionUnitGraph
• MapPhysicalDesign.ExecutionUnitGraphNode
• MapPhysicalDesign.MapPhysicalDesignConfig
• MapPhysicalDesign.NodeConfiguration
• MapPhysicalDesign.PushDirection
• MapPhysicalNode.RMCStackPropertyManager
• MapRootContainer
• MappingGenericTechnology.MappingLanguage
• MappingGenericTechnology.MappingLanguageElement
• MappingGenericTechnology.MappingSubLanguage
• NamedObject
• OdiComponent
• OdiInterface.IPersistenceComparable
• PropertyOwner
• ResourceLoader
• ResourceLoader.ResourceCandidate
• ReusableMappingComponent.RMCConnectorPointDelegate
• Root
• RootIssue.TextPos
• TargetLoadOrderException
CKM Fails with XML and Complex Files When Database is Set to
External
Flow control steps (CKM) fail with ORA-00904: "NOW": invalid identifier errors
when a CKM is used with XML and Complex Files: the mapping is defined to load data
into a Complex File target datastore, while the Complex File data server is defined to
use an external database.
You get the following error message:
at oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:495)
at oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:447)
The problem is due to ODI not being able to pick the right DATE function when the flow
or static control is run on an XML (or Complex File) data server defined to use an
external database.
One of the main reasons behind this limitation is that the CKM code executed on the
external database technology (for example, Oracle) should use the DATE function
specific to that technology. Instead, it gets the information from the definition of the
XML or Complex File technology, and the resulting function does not apply to the
external database technology. As a result, ODI is not able to run static or flow control
(CKM) on technologies such as XML and Complex Files when the data server is set
to use an external database.
The workaround is to edit the target commands of the CKM's Insert PK errors, Insert
AK errors, Insert FK errors, and Insert CK errors tasks, replacing
OdiRef.getInfo("DEST_DATE_FCT") with the date function of the external database
technology in use, for example sysdate if you are using an Oracle external
database. [28641256]
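As an illustration, the edit in each listed task's target command amounts to a substitution like the following, assuming an Oracle external database:
Before: OdiRef.getInfo("DEST_DATE_FCT")
After: sysdate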
Flexfields Tab of KM Editor May Not Display Newly Created KMs
When you re-open the KM Editor and go to the Flexfields tab, the newly created flexfields
may not be displayed, even though they are already saved. Refreshing the tree on save
when multiple editors are open may result in performance issues. To avoid
performance issues, refresh the parent of the KM before you re-open it. [28561299]
You can find out more information on the post-installation patches for Oracle Data
Integrator 14c (14.1.2.0.0).
After installing Oracle Data Integrator 14c (14.1.2.0.0), perform the following steps:
1. Make a backup of your ODI repository schema.
2. Upgrade all ODI repositories associated with the installation using the Upgrade
Assistant. See your Upgrade documentation for detailed upgrade instructions.
Note:
Once the ODI repository is upgraded, it cannot be reverted even if you remove
the patch. Make sure you take a proper backup of your existing ODI repository
so that it can be restored if you remove this patch in the future for any reason.
3. For setting up new domains with this patch, follow the instructions in Installing and
Configuring Oracle Data Integrator.
4. Clearing of the JDev cache is required for all installations where the ODI Client is
to be launched:
• For UNIX platforms:
Locate system14.1.2.0.0 in your Home directory and remove it.
For example: rm -rf $HOME/.odi/system14.1.2.0.0
• For Windows platforms:
Locate system14.1.2.0.0 in your Home directory and remove it.
For example: delete C:\Users\<username>\AppData\Roaming\odi\system14.1.2.0.0
5. Start ODI Studio.
6. Depending upon the installation type, start the Standalone Agent or all servers
(Administration Server and all Managed Servers).
Documentation Accessibility
For information about Oracle's commitment to accessibility, visit the Oracle
Accessibility Program website at http://www.oracle.com/pls/topic/lookup?ctx=acc&id=docacc.
Oracle Fusion Middleware Release Notes for Oracle Data Integrator, 14c (14.1.2.0.0)
G10244-03