Data relationship management batch client


Request-level actions are performed on the instance tab for each request. The main functional areas of the Data Relationship Management Web Client are as follows:

Browse: The Browse pages are used to browse, search, and edit versions, hierarchies, nodes, and properties. Batch validations are run and their results are returned on the Browse pages. The administration of versions and hierarchies, including creating, deleting, and assigning access, is also handled on these pages. When you browse a hierarchy for a version, each hierarchy opens on a separate instance tab, and hierarchy node-level actions are performed on that tab.

Query: The Query pages are used to manage and run property queries and work with the results of a query process. The details and results of each query are displayed on a separate instance tab. The results of each query can be edited, so you can make changes to hierarchy nodes and properties without navigating to the Browse pages.

Compare: The Compare pages are used to manage and run compares and work with the results of the comparison process. The details and results of each compare are displayed on a separate instance tab. The results of each compare can be edited, so you can make changes to hierarchy nodes and properties without navigating to the Browse pages.

Script: You can have only one action script and its results open at a time. An action script that has been loaded from another source can be edited directly from the Script page.

Import: The Import pages are used to manage and run imports and view the results of an import process. The details and results of each import are displayed on a separate instance tab.

Blend: The Blend pages are used to manage and run blenders and view the results of a blender process. The details and results of each blender are displayed on a separate instance tab.

Export: The Export pages are used to manage exports and books, run export processes, and view the results of an export process. The details and results of each export or book are displayed on a separate instance tab.

Audit: The Audit pages are used to query and view history for transactions, jobs, and external requests. Only one type of history can be selected and viewed at a time.

Administer: The Administer pages are used to administer metadata, workflow, and security for the Data Relationship Management application. The details of each metadata, workflow, or security object are displayed on a separate instance tab.

They are not exported to the downstream application, so you have to look at the export to know which hierarchies are exported and then look at the property to check whether it is being exported. This approach may ultimately mean more values set in the database, but it has positive logic and is typically easier to understand. An export property should be created for each subscribing application, even if that application should get the combination of two other applications.

Cube3 gets all the accounts from Cube1 and Cube2. This may be cut and dried for the Accounts hierarchy, but when you get to Departments or another hierarchy, you may find that this logic does not work.

Then you have to add the Cube3Export property anyway, and you end up with one hierarchy that looks at that property and another hierarchy that looks at two other properties.

A version moves through several statuses. Submitted: A submitted version can be edited only by users with the Data Manager role or by the version owner. Finalized: A finalized version cannot be edited by any user.

The idea is that the version is complete and ready to be published to downstream systems. Expired: A version is typically set to Expired once the new version for the current fiscal period has been created and is in Working status. No users can edit an expired version. For example, it is common practice to create Current and Previous version variables. The Current variable would be used in most, if not all, system exports. Without a version variable, you would have to open each export every month to switch it to the newest version.

With a version variable, you can set the new version once, and then all of your DRM objects using that variable will be updated automatically. A version can be assigned to multiple variables, but a variable can point to only one version. A hierarchy can only be associated with a single group for each hierarchy group property. The core Hierarchy Group property may be used for default grouping purposes. Additional hierarchy group properties may be added to handle alternate grouping requirements. When browsing hierarchies, use the Group By drop-down list to select a different hierarchy group property to use for grouping.
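The indirection a version variable provides can be sketched as a simple mapping; the variable, version, and export names below are hypothetical, and the code is a conceptual model rather than the DRM API.

```python
# Minimal sketch of version-variable indirection (hypothetical names).
# Exports reference a variable, not a concrete version, so repointing
# the variable once updates every object that uses it.

version_variables = {"Current": "FY24_Oct", "Previous": "FY24_Sep"}

exports = [
    {"name": "GL_Accounts_Export", "version": "Current"},
    {"name": "Planning_Depts_Export", "version": "Current"},
]

def resolve(export):
    """Return the concrete version an export will run against."""
    return version_variables[export["version"]]

# Month-end roll: repoint the variables once...
version_variables["Previous"] = version_variables["Current"]
version_variables["Current"] = "FY24_Nov"

# ...and every export picks up the new version automatically.
assert all(resolve(e) == "FY24_Nov" for e in exports)
```

Note that the mapping enforces the rule from the text: a variable points at exactly one version, while one version may be the target of several variables.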


A node in DRM is the equivalent of a member in Essbase. Master or reference data records used to describe, qualify, or summarize transactional data in a subscribing system are managed in Data Relationship Management as nodes.

For example, within a hierarchy that represents an organizational structure, a node might represent a department or a cost center. Nodes in a version can have properties called global node properties. Nodes in a hierarchy can have hierarchy-specific properties called local node properties. Within a version, a node may be a part of more than one hierarchy. A node can have many user-defined properties that store information about the node and control the use of the node within the information systems of an organization.

If the node is a domain node whose domain prevents deletion, you cannot delete the node. Delete: Deletes the node from all parents in all hierarchies. Merge: Deletes the node and merges it into another node; this option is available only if the UseMerge system preference is enabled. Annul: Removes the node from all hierarchies where it has the same parent, and also deletes the node if it is then an orphan.


For example, if a node has the same parent in all hierarchies in which it exists, then Annul removes the node from all hierarchies and then deletes it because it is an orphan.

If the node does not have the same parent in all hierarchies, then Annul produces the same result as Remove. Annul All Below: Removes all the children of the selected node. If any of the children become orphans, they are also deleted.

Remove: Removes the node from the current parent but not from other parents in other hierarchies. The node is not deleted from the version and is available for reinsertion. Inactivate: Flags the node as inactive so that it can be filtered from an export. When a domain node is inactivated in a version, you can provide a domain inactive date value.
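The Annul behavior described above can be modeled with a toy in-memory structure; the hierarchy and node names are illustrative and this is a sketch of the semantics, not the product's implementation.

```python
# Toy model of the Annul action: each hierarchy maps child -> parent.
# Annul removes a node from every hierarchy where it sits under the
# given parent, then deletes it if it has become an orphan.

hierarchies = {
    "MainOrg": {"Dept100": "Division1"},
    "AltOrg":  {"Dept100": "Division1"},
}

def annul(node, parent):
    """Remove node wherever its parent matches; delete it if orphaned."""
    for rels in hierarchies.values():
        if rels.get(node) == parent:
            del rels[node]
    # If the node no longer appears anywhere, it is an orphan: delete it.
    orphan = all(node not in rels for rels in hierarchies.values())
    return orphan  # True means the node was also deleted from the version

deleted = annul("Dept100", "Division1")
assert deleted  # same parent in every hierarchy -> removed and deleted
```

If the node had a different parent in one hierarchy, the loop would skip that hierarchy, the node would survive there, and the result would match Remove, as the text describes.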

The default domain inactive date is the current date.

Global queries are defined and performed against all nodes in a version. Because a global query runs against a set of nodes without reference to any specific hierarchy, only version and global node properties may be included in the criteria.

Local Queries are defined and performed against a node and its descendants within a hierarchy. They can be run from the top node of a hierarchy or from a node within the hierarchy.
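What a local query evaluates can be sketched with a toy model; the node names, the AccountType property, and the helper functions here are illustrative, not DRM's API.

```python
# Sketch of a local query: evaluate a property criterion against a node
# and its descendants within one hierarchy (illustrative names).

hierarchy = {  # child -> parent
    "Salaries": "OpEx", "Travel": "OpEx", "OpEx": "Expenses",
    "Revenue": "Income",
}
properties = {
    "Salaries": {"AccountType": "Expense"},
    "Travel":   {"AccountType": "Expense"},
    "OpEx":     {"AccountType": "Expense"},
    "Revenue":  {"AccountType": "Revenue"},
}

def descendants(top):
    """All nodes at or below `top` in the hierarchy."""
    found = {top}
    changed = True
    while changed:
        changed = False
        for child, parent in hierarchy.items():
            if parent in found and child not in found:
                found.add(child)
                changed = True
    return found

def local_query(top, prop, value):
    """Nodes under `top` whose property matches the criterion."""
    return sorted(n for n in descendants(top)
                  if properties.get(n, {}).get(prop) == value)

assert local_query("OpEx", "AccountType", "Expense") == ["OpEx", "Salaries", "Travel"]
```

A global query, by contrast, would iterate over every node in the version rather than over `descendants(top)`, which is why it can only reference version and global node properties.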


Local queries can reference any property, including version, hierarchy, global node, and local node properties.

Namespaces

Namespaces are used in property definitions to avoid conflicts where properties from different sources have similar names and need to remain separate for data integrity purposes.

Property names are differentiated using a namespace prefixing convention; for example, a system-defined property such as Core.Abbrev is kept distinct from a user-defined property in the Custom namespace by its prefix. Special rules in Data Relationship Management apply to namespaces to ensure that conflicts do not occur, and certain namespaces are reserved for use by Data Relationship Management application templates for other Oracle products.

Auditability of a data element, its metadata, and its relationships in the enterprise is significantly easier and more transparent with an MDM solution.

Master Data Management Implementations

The implementation of an MDM initiative is commonly completed through a set of processes by which the data elements of an organization are collected or created and then managed and distributed to external systems.

The goal is to have all downstream systems use the same data elements, definitions, hierarchical constructs, and metadata through central management and governance. Another key aspect of an MDM implementation is the ability to enforce business rules to ensure that the master data conforms to predefined rules prior to being exported to downstream systems.

Implementing MDM within an organization requires more than just technology; governance processes that facilitate successful MDM strategies for data elements are undoubtedly already in place at many organizations.

Most of these processes are tied to job functions and roles. For example, the creation of a new General Ledger (GL) account requires approval by different groups within the organization. This simple example provides a scenario that can illustrate different levels of master data management maturity within an organization.

At Level 0 of MDM maturity, the process is siloed, and each system has the ability to manage its accounts and account structures separately. At Level 1 of MDM maturity, the process is coordinated in a manual fashion: an email notification is sent to the administrators of the different systems, but there is no proactive way to know that the account structures are synchronized across the different systems. At Level 2 of MDM maturity, the account is managed centrally: properties for each downstream application are managed separately in the MDM product, and business rules are implemented to validate that the account meets all predefined data governance and formatting rules.

The account structure is synchronized to downstream systems at the required frequency for each system. At Level 3 of MDM maturity, the process is coordinated through a workflow tool and the account is added to a central MDM tool after being approved and enriched by all of the relevant stakeholder groups.

At each step of the enrichment process, business rules are implemented to validate that the account meets all predefined data governance and formatting rules. Lastly, the approved account structure is synchronized to downstream systems at the required frequency for each system.

Data Governance

Data governance crosses a large set of functional areas within an organization and consists of the appropriate mix of people, processes, and technologies necessary to centrally manage data across the enterprise.

Data governance initiatives drive the need for technical MDM solutions, but there are also scenarios where the implementation of a technical MDM solution is the starting point for a data governance initiative. Because data governance initiatives can create bureaucratic and political issues within the organization, it is important to appropriately and efficiently scope the rules for the governance of master data in the organization.

The diagram below outlines the potential negative impact to the output of systems when the master data is managed in silos and the organization does not have a sound overall data governance strategy. With the knowledge that governance initiatives can break down the effectiveness of master data or the functionality of the systems themselves, it is important to design the leanest approach for the creation and management of master data in the organization. This efficient governance approach should be applied to both the data elements as well as the properties that are collected for each data element.

This concept allows for the seamless governance of data elements directly on the master data platform through a configurable workflow. The Oracle DRG platform is discussed separately in Chapter 8 of the book. DRM software provides a significant number of capabilities to aid in the management of master data inside an organization. Upon installation of the DRM application, several components are automatically installed on the server. One of these, the console, provides the ability to check application logs and is the module where updates are applied to the database when DRM is upgraded or patched.

The DRM application's backend database utility is located in the console, and the console is used during the initial installation of DRM. The web client is the user interface that permits access from system administrators down to the end user. It allows for the configuration of each of the system components, modification of hierarchies, and the addition of data elements and metadata.

The component is a very useful tool for the ongoing maintenance of DRM, especially if there is interest in creating new DRM objects in a development or test environment prior to moving to the production environment.

All of the functions that can be performed in the web client may be performed using this module, and the MDM Connect component allows functions to be completed in a bulk, automated fashion.

A scheduling tool can even be used to schedule the batch scripts. Additionally, to be truly effective with a DRM implementation, users must be able to translate DRM terminology into the language used by administrators of subscribing systems. The paragraphs that follow describe some of the most commonly used terminology to provide a solid foundation for working with the product. A useful capability of the DRM product is the ability to create and maintain separate Versions of master data.

Versioning capability provides the ability to take snapshots of master data at a specific point in time or perform what-if analyses.


The Hierarchy in DRM is a collection of nodes that are organized to represent a business structure within an organization. A typical hierarchy within an organization may be the structure of organizational departments or a global chart of accounts.

The additional hierarchies commonly include alternate rollups of the same structure for reporting purposes. A Node in DRM represents a single element of master data. Nodes are the core elements in DRM. A single node is structured inside a DRM hierarchy and is described by attributes. An example of a node includes a single account in a chart of accounts or department in an organizational structure. Nodes are broken into two distinct categories through a system attribute to define the role of the node in the structure.

The first is a Leaf, which is a node that exists only at the bottom-most level of the structure. A leaf may not have any nodes underneath it in the structure. In an account structure, a leaf node would be a natural account segment. It is possible to allow limb nodes to be at the bottom-most level of the structure, but it is more effective to assign bottom-level members as leaf nodes.

Assigning leaf nodes to the bottom of the structure allows DRM developers to perform specific integrity checks on structures, especially for integrations with subscribing systems that require the distinction.
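The kind of integrity check described above can be sketched as follows; the account hierarchy and the rule itself are illustrative assumptions, not DRM's built-in validation.

```python
# Sketch of a leaf/limb integrity check: every bottom-most node should be
# flagged as a leaf, and no leaf should have children (illustrative data).

parents = {  # child -> parent
    "4000": "Revenue", "5000": "Expenses",
    "Revenue": "PL", "Expenses": "PL",
}
is_leaf = {"4000": True, "5000": True, "Revenue": False,
           "Expenses": False, "PL": False}

def integrity_errors():
    """Return a list of leaf/limb violations in the structure."""
    errors = []
    all_parents = set(parents.values())
    for node, leaf in is_leaf.items():
        has_children = node in all_parents
        if leaf and has_children:
            errors.append(f"leaf node {node} has children")
        if not leaf and not has_children:
            errors.append(f"limb node {node} is at the bottom of the structure")
    return errors

assert integrity_errors() == []
```

A subscribing system that distinguishes leaves from limbs (postable versus summary accounts, for example) would reject a structure failing either rule, which is why the distinction is worth enforcing upstream in DRM.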

The attributes of a node data element are called properties. Properties provide significant flexibility in managing information about a node as well as specific node formats for subscribing systems. Properties can be sourced as well as derived, allowing the use of logic to generate content about a node or about the position of the node in the hierarchy.

DRM provides the ability to perform a check on the data entered into any area of the application against constraints or other values in the application. Each constraint is referred to as a validation, and validations can be run during data imports, user interactions, and even during a batch export cycle. These validations are key to keeping clean and accurate data in the application. It is important to understand that in Oracle DRM, metadata is a term that refers to configuration objects.
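A validation of this kind can be sketched as a simple rule function; the account format, the function name, and the return convention are illustrative assumptions, not DRM's validation API.

```python
# Sketch of a validation: a constraint that could be checked at import,
# edit, or export time (the rule here is illustrative, not DRM's API).

import re

def validate_account_name(name):
    """Example rule: a natural account is 4 digits, optionally suffixed."""
    if not re.fullmatch(r"\d{4}(-[A-Z]{2})?", name):
        return f"'{name}' does not match the required account format"
    return None  # None means the node passed the validation

assert validate_account_name("4100") is None
assert validate_account_name("4100-US") is None
assert validate_account_name("41A0") is not None
```

Running such rules at several points (import, interactive edit, batch export) is what keeps bad values from ever reaching subscribing systems.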

The term metadata in DRM is different from common definitions, where the nodes, properties, and hierarchies are referred to as metadata and the financial numbers at the intersections of these dimensions are referred to as data.

DRM software starts with the concept of a hierarchy, which is a business definition of the relationships of data elements to one another in a business construct. For example, the organizational structure of departments is a common hierarchy seen across many environments. These hierarchies contain multiple levels and are commonly ragged, where some parts of the hierarchy contain different numbers of levels than others.

Hierarchies in DRM are defined through the creation of parent-child relationships between data elements. Multiple hierarchies can be set up depending on the business need for a business hierarchy structure or a subscribing downstream system. In addition to hierarchies, each data element in the hierarchy contains a set of system-defined and administrator-defined fields for capturing attributes.
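How parent-child pairs define a ragged hierarchy can be shown with a short sketch; the department names are illustrative.

```python
# Sketch of how a ragged hierarchy is defined by parent-child pairs and
# rendered as an indented tree (department names are illustrative).

pairs = [  # (parent, child)
    ("AllDepts", "Finance"), ("AllDepts", "IT"),
    ("Finance", "Accounting"), ("Finance", "Treasury"),
    ("Accounting", "AP"),  # this branch is one level deeper: ragged
]

children = {}
for parent, child in pairs:
    children.setdefault(parent, []).append(child)

def render(node, depth=0, out=None):
    """Depth-first walk producing one indented line per node."""
    out = [] if out is None else out
    out.append("  " * depth + node)
    for child in children.get(node, []):
        render(child, depth + 1, out)
    return out

tree = render("AllDepts")
assert tree[0] == "AllDepts"
assert "      AP" in tree  # AP sits three levels down: a ragged branch
```

The same list of pairs could describe an alternate rollup simply by changing the parents, which is how multiple hierarchies over the same nodes are maintained.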

The attributes are defined at a central level in the application and can be manually populated or derived from other values in the application. Attributes can also be controlled with security, allowing controls on the properties to be used across the application.

Integration Capabilities

Another core capability of DRM is the ability to obtain and broker data in an organization. DRM provides functionality for importing from source systems as well as exporting to subscribing systems.

DRM serves as a platform-agnostic hub in the organization, allowing for import from and export to any system. Hierarchies can be brought into DRM in two primary ways. The first is to utilize the import functionality, which can accept either a file or a database connection as input.

Once the hierarchy is loaded into DRM, it can be blended with the existing hierarchy to bring the DRM hierarchy into synchronization with the source system. The second method is to use the action script module, which has a specific format and allows bulk changes. Both processes can be executed in an automated fashion using batch scripts or run directly in the DRM Web Client.

Additionally, many organizations implement a change data capture (CDC) process in a relational database when loading to DRM, where only changes are loaded into the application to obtain performance benefits. The import and change data capture processes are described in more detail in Chapter 6 of the book. Standard and commonly used exports allow the target to be a staging database or a text file, but the DRM software also contains web services integration along with some native integration with the Oracle General Ledger and Oracle Enterprise Performance Management Architect (EPMA).
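The change data capture step can be sketched as a set comparison between the source extract and what is already in DRM; the account values here are illustrative, and real implementations typically do this in the relational staging database.

```python
# Sketch of a change-data-capture step: compare the source extract with
# the current DRM contents and load only the differences (illustrative).

drm_nodes    = {"4000": "Revenue", "5000": "Expenses", "6000": "Expenses"}
source_nodes = {"4000": "Revenue", "5000": "OpEx",     "7000": "Expenses"}

# Nodes in the source but not in DRM must be added.
adds    = {n: p for n, p in source_nodes.items() if n not in drm_nodes}
# Nodes in DRM but gone from the source are candidates for removal.
deletes = {n: p for n, p in drm_nodes.items() if n not in source_nodes}
# Nodes whose parent changed must be moved.
moves   = {n: p for n, p in source_nodes.items()
           if n in drm_nodes and drm_nodes[n] != p}

assert adds    == {"7000": "Expenses"}
assert deletes == {"6000": "Expenses"}
assert moves   == {"5000": "OpEx"}
```

Loading only these three small sets, rather than the full extract, is where the performance benefit mentioned above comes from.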

Exporting from DRM provides one of the most useful features of the product: the ability to give each subscribing system only the portion of the structure it needs. Most subscribing systems contain only parts of the business data element structure of interest.

In some cases, these structures are part of a larger structure and adding that entire structure to the application results in extra maintenance and may potentially have a negative impact on performance. In these specific circumstances, DRM shines with its export functionality.

The DRM application can generate exactly what each application needs, based on the requirements of the subscribing system, while maintaining the entire structure of the business.

Many large businesses maintain multiple financial systems, and these systems may require different levels of granularity within the accounts dimension. For example, the planning and budgeting system may require that data is entered and reported upon at the natural account level, while a consolidation system may require only the parents of the natural accounts level of detail.
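Deriving those two subscriber views from one master structure can be sketched as follows; the account numbers and rollup names are illustrative assumptions.

```python
# Sketch of deriving two subscriber views from one master structure:
# planning needs natural accounts, consolidation only their parents.

parents = {  # child -> parent in the master accounts hierarchy
    "4100": "Revenue", "4200": "Revenue",
    "5100": "OpEx", "5200": "OpEx",
    "Revenue": "PL", "OpEx": "PL",
}

# A leaf (natural account) is any node that is never used as a parent.
leaves = {n for n in parents if n not in parents.values()}

planning_export      = sorted(leaves)                        # bottom level
consolidation_export = sorted({parents[n] for n in leaves})  # parents only

assert planning_export      == ["4100", "4200", "5100", "5200"]
assert consolidation_export == ["OpEx", "Revenue"]
```

Both exports are computed from the same `parents` map, so the two systems stay consistent with the single master hierarchy even though they receive different levels of detail.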