SAP Signavio Process Intelligence User Guide
2025-03-24
2 Processes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.1 Prepare a process. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .20
Creating a Process. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
Process Settings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
Editing and Deleting a Process. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
Set the merge strategy for data uploads. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .25
Define Access with Process Views. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
Custom attributes for event-level analysis. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Roles and user management. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Manage API access to a process. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
OData Views. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
2.2 Provide Data as Files. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Upload process data files. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Data file types. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
Delete process data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
5 Actions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 522
5.1 Action Setup. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 524
Send E-Mails. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 524
Send Messages to Microsoft Teams. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 529
Integrate Using Webhooks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 534
Send Messages to SAP Cloud Integration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 538
Start Business Processes and Automations in SAP Build Process Automation. . . . . . . . . . . . . . 542
Send Messages to SAP Event Mesh. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 549
5.2 Managing Actions and Their Tasks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 553
5.3 Running Actions Manually. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 555
5.4 Viewing Action Results. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 555
5.5 Monitoring Action Performance in History. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 556
5.6 Activating and Deactivating Actions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 556
5.7 Action Configuration for Workspace Administrators. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 557
Setting Up the Integration with SAP Build Process Automation. . . . . . . . . . . . . . . . . . . . . . . . . 557
Setting Up the Integration with SAP Event Mesh. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 559
Setting Up the Integration with SAP Cloud Integration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .560
6 Integrate SAP Signavio Process Intelligence and SAP Data Intelligence Cloud. . . . . . . . . . . . 563
SAP Signavio Process Intelligence is a tool for in-depth process analysis that helps you optimize operations, improve customer service, and identify compliance violations. It provides insights into process flows, the root causes of issues, and performance bottlenecks, making it valuable for a wide range of scenarios and organizations.
The processes that run in your organization continuously leave traces of data behind in places such as your
ERP and CRM systems. SAP Signavio Process Intelligence is about analyzing this data to gain insights into the
exact flow of your processes.
The following scenarios serve as example use cases for SAP Signavio Process Intelligence:
• You manage several production facilities. Although they use the same equipment, resources, and processes, some facilities are more productive than others. You want to find out why and optimize operations based on your findings.
• You run a customer service center. You notice that more and more customers with apparently trivial problems are waiting a long time for a response from the support team. You need to find out why your support process performs poorly in these cases.
• You're a risk manager in a financial organization. In the past, rare cases with risky transactions were overlooked. Now, you want to identify such cases before the transactions are carried out. Unfortunately, you don't have the resources to identify cases with non-compliant behavior manually.
SAP Signavio Process Intelligence can address all of these scenarios. It lets you evaluate process data according to your needs and gain deep insights into your processes.
This guide explains how to use the SAP Signavio Process Intelligence application.
Key Steps
On a high level, these are the key steps for your journey through the application:
1. Clearly define the business challenge that you want to solve, for example:
• Identify the root causes of poorly performing processes.
• Detect and visualize compliance violations.
• Monitor process performance and act on critical cases and performance bottlenecks.
2. To deliver actionable insights, SAP Signavio Process Intelligence needs your process data. To provide this
data, do either of the following:
• Use the features for process data management to extract, transform, and load data into a process:
You can connect your source system with SAP Signavio Process Intelligence and extract data. Using
process data pipelines, the data can be transformed and loaded into a process. For more information,
see Process Data Management [page 48].
• Create a process:
Create a process and upload your event and case data to it. For more information, see Creating a Process [page 20] and Upload process data files [page 41].
3. Analyze the process data, using these features:
• Generate insights to learn about data correlations and process anomalies, see Insights [page 454].
• Build investigations and dashboards with widgets that visualize your process and mining results, see
Process Mining [page 301].
• Use pre-defined or custom metrics to accelerate the time to insight, see Work with Metrics [page 473].
• Set up actions to inform others about mining results or further process the results in other SAP or
non-SAP applications, see Actions [page 522].
Understand the prerequisites for access to SAP Signavio Process Intelligence and how to grant access.
Prerequisites
• Users must have an account for SAP Signavio Process Transformation Suite, see Signing Up.
• The workspace a user logs in to must have a valid Process Intelligence license.
Grant Access
Note
If your workspace has a valid Process Intelligence license, the SAP Signavio Process Intelligence feature set
is active for all users in this workspace by default.
To restrict access to SAP Signavio Process Intelligence, a workspace administrator needs to deactivate
the Process Intelligence feature set for users who aren't meant to access the product. This is done in SAP
Signavio Process Manager, see Activate feature sets.
The following feature sets are available for SAP Signavio Process Intelligence by default:
• SAP Signavio Process Intelligence: Allows users to access SAP Signavio Process Intelligence.
• SAP Signavio Process Intelligence - Create Process: Allows users to create processes in SAP Signavio Process Intelligence.
Find an overview of all feature sets of SAP Signavio Process Transformation Suite in section Activate feature sets.
Note
The feature sets for process data management are only available on request.
Read more in section Access Requirements for Process Data Management [page 52].
How to log in to SAP Signavio Process Transformation Suite and get access to all SAP Signavio products, and how to log in when you've received a shared link.
Browser Compatibility
SAP Signavio supports all popular browsers. For a detailed description of the supported browsers, see Browser
Compatibility.
After you've created your SAP Signavio Process Transformation Suite account (see Signing Up), use your
account email and password to log in.
If your workspace administrator created the account for you, you received an email to reset your password.
When SSO is enabled for your workspace, you log in through a shared link. The link is shared with you, for example, in an invitation email or on a wiki page.
We recommend that you bookmark the shared link for future logins. Depending on your workspace
configuration, you might only be able to log in to SAP Signavio Process Transformation Suite through
the shared link.
Learn how to navigate SAP Signavio Process Intelligence and access the features for process data management, process analysis, and automation.
2 - Side Bar
Use the side bar to access available feature areas:
• Process Data Management [page 48] to get process data from source systems into SAP Signavio Process
Intelligence, transform raw data into process data models, and load them into a process for analysis and
mining.
• Processes [page 19] to do a deep-dive analysis on process performance and how your processes are
actually run.
• Actions [page 522] to create automations that query process data and act on the results, for example, by
performing a task or starting a process.
3 - Navigation Bar
The navigation bar provides options to search, check notifications, open help resources, check the workspace
ID, switch to other SAP Signavio solutions, and open your user profile. For more information, see Navigation
Bar [page 13].
4 - Header
The header of an opened item provides the following:
• Breadcrumbs for navigating to items or searching for them, see Breadcrumb search [page 17].
• Renaming by selecting the title of the item that is currently open.
• Tabs for navigating to the elements of an opened item, for example:
• Connections, source data, data views, business objects, and more for process data pipelines
• Dashboards, investigations, metrics, variables, insights, actions, and more for a process
• Results and the history for actions
5 - Top Menu
The top menu provides options like create, search, filter, sort, and more. Which options are available depends
on the context that you're currently in.
6 - Work Area
Here you manage and configure your process data pipelines, processes, and actions, including everything they contain: source data and connections, dashboards, investigations and metrics, or tasks and integrations, respectively.
Discover the actions you can take with the SAP Signavio Process Intelligence navigation bar at the top of your
view.
The navigation bar displays the label of SAP Signavio Process Intelligence. This identification of your current working product is useful when you navigate back and forth between SAP Signavio products and workspaces.
The navigation bar allows for quick access to common functions across SAP Signavio products. Each product
displays a navigation bar with its own product-specific available icons.
Note
• App Switcher: Choose this icon to view a list of SAP Signavio products to which you can navigate.
Your view reflects the products you are able to access.
If you are a workspace administrator, you can also choose User Management here.
• User Profile Menu: This icon displays your own initials as the logged-in user. Choose this icon to view
a drop-down menu with the following options:
• Profile settings: Select your email address from the drop-down menu to open the My Profile tab. Make
changes to your personal data, language preference, or password. Here, you can also view your groups
and licenses.
• Personal settings: Choose this option to toggle your notifications.
• Workspaces: Choose this option to change workspaces if you are a member of more than one
workspace.
• Logout: Choose this option to log out of SAP Signavio Process Intelligence.
1.1.3.2 Notifications
Read about the notifications available in SAP Signavio Process Intelligence. Select the links below to learn
more.
In the header, the Notifications icon shows you the number of new notifications. Select the icon to open your notifications.
Actions
You're notified about new results for an action you created or are assigned to.
Insights
Note
In order to be notified about a new comment on an insight, you must have previously commented on the
insight.
Learn which email notifications are available, and how to enable them.
Context
You can choose to receive email notifications for the following events:
Insights:
Note
In order to be notified about a new comment on an insight, you must have previously commented on the
insight.
Procedure
1. In the Navigation Bar, open your User Profile Menu by selecting the icon displaying your initials.
2. Select Personal Settings.
3. Use the toggles to enable or disable email notifications for each event type.
1.1.3.3 Search
Get to know the different search options in SAP Signavio Process Intelligence.
Global Search
With the search option in the top navigation bar, you can search for content from the complete SAP Signavio
Process Transformation Suite.
To find content specific to SAP Signavio Process Intelligence, limit your search to the content types PI Processes or Investigations. The search results open in SAP Signavio Process Collaboration Hub.
Object-Specific Search
When you select a process, a process data pipeline, a dashboard, or any other object, you have the following options to find other objects of the same type:
Understand how to identify and provide process data in general, and learn how to view the volume of uploaded
process data as well as the last upload date.
• Based on a process
Define the beginning and end of your process and identify all tasks and events in between.
• Based on business objects
Identify business documents involved in the process and track their lifecycle. For example, business
documents can be orders and invoices.
Once the scope of the data is determined, extract the data from your applications and provide it to SAP
Signavio Process Intelligence.
A workspace administrator can view the number of processes, cases, and events in the data store.
To view this information, choose Processes in the sidebar. In the header menu, the number of cases and
events are displayed. The number of processes is in the column header of the processes list.
When you open a process, you can view when the last data upload took place in the header menu.
The date and time are displayed according to your browser's timezone setting.
The last upload date is updated in the following cases:
• With each new data upload, whether data was uploaded manually, using the API, or through a process data pipeline
• When the merge strategy is changed
• When uploaded data is deleted
Learn the steps and prerequisites for creating a process in SAP Signavio Process Intelligence.
Prerequisites
You need the feature set for process creation in SAP Signavio Process Intelligence to use these functions. A
workspace administrator can assign this feature set to you.
Context
Create processes in SAP Signavio Process Intelligence for the end-to-end processes you want to analyze.
The creator of a process has the manager role for this process automatically. To grant access to a process,
users must be assigned a role. Read more in section Roles and user management [page 31].
1. Open SAP Signavio Process Intelligence. If the process overview isn't opened by default, choose Processes in the sidebar.
2. Choose Create, enter a process name, and confirm with Create.
Results
The process is created and the settings page opens. You can configure the process now or later.
For example, you can select a system type and a process type. When you choose the system type, it sets
system-specific default values for variables in metrics. If you select the process type, the system suggests
process-specific metrics when you set up the metrics collection.
Continue with providing event log data to the process. For example, upload an event log manually or link the
process to a process data pipeline.
Learn about process settings: data upload, API connection, custom attributes, process views, as well as user
and role management.
Note
The settings available to you depend on the role you have been assigned for the process. Read more in section Roles and user management [page 31].
1. Open the drop-down menu on the top-left corner of the screen. The menu slides out from the side.
2. Select Processes. The list of processes is displayed.
3. Open your process and select the settings icon in the header menu.
The settings page opens and displays the different settings in tabs. The settings are described below.
Data
To be able to run an investigation, you need to provide data for your process. You have the following options:
• Upload process data as files. Read more in section Upload process data files [page 41].
• Link a process data pipeline. A data pipeline defines how to extract data from a data connection. Read
more in section Manage process data pipelines [page 229].
You can select a source system to provide default values for metric variables. Read more in section Assigning
Values to Variables [page 503].
If you select a set of process types, the metric library can recommend metrics matching those choices. The Select process type dropdown contains a list of process types, each with a checkbox. You can select multiple process types.
You can check or delete the data that has been uploaded to a process under Event log history:
• To delete imported data, choose the delete option and confirm. Read more in section Delete process data [page 46].
Define how to process existing and new data when uploading more or new data to a process. Read more in
section Set the merge strategy for data uploads [page 25].
API
Create and manage the access token of the process. With the access token, you can connect to a process using
the API, for example to upload process data. Read more in section Manage API access to a process [page 35].
You find supported API requests in section SAP Signavio Process Intelligence API.
Data Views
Define which process data is accessible to users. Read more in section Define and grant access to process data
with process views [page 26].
Users
Grant users access to the process. You can assign them roles and process views. Read more in section Roles
and user management [page 31].
Learn how to rename a process, edit process settings, and delete a process in SAP Signavio Process
Intelligence. A workspace administrator can activate the necessary feature sets for you.
Prerequisites
• You have the feature sets for process creation and process editing in SAP Signavio Process Intelligence. A
workspace administrator can activate these feature sets for you.
• You have the manager role for the process, see Roles and user management [page 31].
Deleting a Process
If a process already has data and you want to upload more or new data, you need to specify how new and
existing data is processed. This is done with the merge strategy.
Note
The merge strategy applies only to event logs. For the upload of case attribute logs, the following applies:
• The merge strategy doesn't affect the upload of case attribute logs.
• When you upload a new case attribute log, data is added to the existing data.
• When you upload additional case attribute logs, data is replaced for existing case IDs.
1. Select the process on the Processes overview and choose Process settings.
The settings page opens.
2. Under Process views, click Merge strategy and choose one of the following options:
• Update and Append: Existing events that match an incoming event on case ID, event name, and timestamp are replaced.
With process views, you either define which process data your users can view, or you use them to grant access
to dashboards.
For each process, separate process views must be created by a user with the manager role. Using a process
view across several processes isn't possible.
Users with the manager role have access to all process views. If users with the analyst role are assigned to
more than one process view, they can switch between the process views, for example, when creating an action
or viewing a dashboard.
SAP Signavio users viewing widgets and investigations in other applications of the SAP Signavio Process
Transformation Suite must also be assigned to a process view.
Process views serve two purposes:
• Control which data users can view in investigations, dashboards, generated insights, or value cases
• Grant access to a specific dashboard
• Control access to process data: You specify which portion of the process data users are allowed to access by hiding or filtering attributes. Also, you assign the users or groups who are allowed to view or work with the data.
• Control access to dashboards: You specify the users or groups who are allowed to access the dashboard to which the process view is assigned.
Note
Process views of this type aren't available for selection in
the process view switcher on dashboards.
The process view Complete attribute set (default) is available in all processes by default. This process view
grants access to the complete data set of a process and can't be changed. Filtering or hiding data using this
process view isn't possible.
To restrict access to the process data for users or groups, additional process views need to be created, usually
one process view for each stakeholder group.
To ensure that users only access the process data for which they're authorized, the system requires assigning a
process view whenever an action, investigation, or a dashboard is created.
If users are assigned to multiple process views, they can switch between the process views on a dashboard.
This changes the dashboard data only for the user. However, the assigned process view of the dashboard isn't
changed.
Switching the process view is done using the drop-down menu in the upper right corner of a dashboard. The
selection is saved to the user's browser storage. When users switch the browser or clear the browser storage,
the dashboard opens again with the assigned process view.
Only process views that control data access are available for selection in the process view switcher.
Read how to create process views that define which process data users can view or that grant users access to
dashboards.
Note
You can only select a group if you're part of the group. Workspace administrators can select all groups
without being part of the groups themselves.
Note
Process views of this type aren't available for selection in the process view switcher on dashboards.
Note
If insights exist for a process view, changes to the process view aren't applied to existing insights with data
snapshots. So, any data snapshot is still visible to anyone with access to the process view. If you want to
restrict a process view, we recommend creating a new process view and re-assigning the users. Read more on
insights in section Insights [page 454].
When you delete a process view:
• Investigations, dashboards, actions, and insights created with this process view are deleted.
• The users added to the process view lose view access to the investigations or dashboards.
How to create your own attributes that count events or measure the duration between two events.
When you configure a widget, you can use the attributes from the event log to specify what data is displayed.
Also, you can create your own attributes for event-level analysis.
Learn how to assign roles to users for process access in SAP Signavio Process Intelligence. Managers have full data set access, analysts have limited access, and consumers can only view. Workspace administrators can access all processes. Use roles to efficiently manage user access in your organization.
To grant other users access to your process, you assign a role to them. The role determines whether a user can
access the complete data set, create and customize process analysis, or only consume the result of a process
analysis.
• The user who creates a process is automatically given the manager role.
Roles
• Manager: Users with the manager role have access to the complete data set in the process. They can do the following:
Note
Dedicated feature sets are available for working with data pipelines. Read more in section Access to process data management features [page 241].
• Analyst: Users with the analyst role can only access the process data that has been determined for them in a process view. They can do the following:
• view dashboards
• view investigations
• create, generate, edit, and delete insights, and comment on insights
• view value cases or link and remove an initiative from a value case
They can't edit the defined data set or any process view.
Workspace Administrators
Workspace administrators can see all processes of a workspace, regardless of whether they have a role in a
process.
When workspace administrators want to interact with a process, for example to upload data, they need to grant
themselves access to the process by assigning themselves a role.
To grant access to a process, you add a user or user group to a process and assign a role.
Users with a license for SAP Signavio Process Intelligence are automatically added to the group All SAP
Signavio Process Intelligence Users (default).
This group is a transitional measure to reduce disruption for existing users. It exists only in SAP Signavio
Process Intelligence and is not part of the SAP Signavio user management. The group provides users with
unrestricted access to processes created before August 14, 2019. New license holders are added to this group
automatically. For processes created after August 14, 2019, the group can't be added. These newer processes
are only visible to users who have been given access.
When you remove this group from a process, the process is only visible to users who have been given access.
Note
• If you create process views for processes created before August 14, 2019, anyone has access as long as the group All SAP Signavio Process Intelligence Users (default) is listed as a user.
• When you don't have the manager role and remove All SAP Signavio Process Intelligence Users (default), you lose access to the process.
• The group can only be removed and not added to processes.
How to create and manage the access token of a process. With the access token, users can connect to the
process using API requests, for example, to upload process data.
Note
You need the manager role for your process to use this function.
You can connect to a process using the API, for example, to upload data to a process. For that, you need to
create an access token.
An access token is created for each process and is thus valid only for this one process.
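For orientation, the following minimal Python sketch shows how such a token might be used in an authenticated request. The host, endpoint path, and form field are hypothetical placeholders, not the documented API; take the actual requests from section SAP Signavio Process Intelligence API.
Sample Code (Python)
import requests

# Access token created on the API tab of the process settings.
ACCESS_TOKEN = "<your-process-access-token>"

# Hypothetical placeholders for illustration only; use the host and endpoint
# documented in the SAP Signavio Process Intelligence API reference.
BASE_URL = "https://<your-workspace-host>"
UPLOAD_ENDPOINT = f"{BASE_URL}/process-data/upload"

# Upload an event log file to the process that the token belongs to.
with open("event_log.csv", "rb") as event_log:
    response = requests.post(
        UPLOAD_ENDPOINT,
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        files={"file": event_log},
    )
response.raise_for_status()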
When you renew the access token, the existing access token is replaced by a new one. Applications or features
using the previous access token can no longer access the process.
Note
The previous access token is replaced by a new one. To copy the new token to the clipboard, choose the copy icon.
When you invalidate the access token, the existing access token is deleted. Applications or features using the
access token can no longer access the process.
OData views allow SAP Signavio Process Intelligence users to store SIGNAL queries that provide relevant analytical results. These results can then be shared in third-party systems, for example SAP Analytics Cloud, using the SIGNAL OData API.
Use Cases
For analytical results derived in process analysis, this feature enables their secure and timely distribution
outside of SAP Signavio Process Intelligence. Users can develop dashboards in third-party systems based on
the underlying analytical results from process analysis. Depending on the third-party system used, the data
behind these dashboards could be configured to update on a regular basis, such as daily or weekly.
Again depending on the third-party system used, access rights to the dashboard could be managed to
configure roles. For example, the author might be allowed to edit or perform manual data refreshes, whereas a
viewer could be restricted to reading a static version of the dashboard based on the last refresh.
The SIGNAL OData API is used to retrieve the result from an OData view in SAP Signavio Process Intelligence,
which makes it accessible to external systems without exposing the underlying data model.
For more information about what prerequisites are required to use the API, how access to OData views is
controlled, and what API requests are available, see SIGNAL OData API.
Learn how to create OData tokens: user-scoped authentication tokens that have the same permissions as the users who create them. These tokens are needed to access OData views using the SIGNAL OData API.
Prerequisites
The feature set “SAP Signavio Process Intelligence – Signal OData API” is activated for you. This is explained
further in User Access.
You have the analyst or manager role for the process, see Roles and user management [page 31].
Context
To expose your OData views to any third-party tool, you need to create an OData token.
Note
This token is used to securely authenticate a user to external systems. Its scope is limited exclusively to the
secure authentication of SIGNAL OData API requests for OData views.
OData tokens are user-based, but not specific to processes. The tokens you create in one process are available
in any other processes where you have the analyst or manager role.
Each token is set up with an expiration time, ranging from one day up to a year.
Restriction
You can create a maximum of 100 tokens. Exceeding this limit causes requests to return a 422 error.
Procedure
Results
Caution
Make sure to copy the token now. If you leave this page or start another task here, you're not going to see
the token again.
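For orientation, the following minimal Python sketch shows how an external application might authenticate a SIGNAL OData API request with the token. The URL is a hypothetical placeholder; take the actual endpoint structure from the SIGNAL OData API documentation.
Sample Code (Python)
import requests

# OData token copied when it was created; it can't be displayed again later.
ODATA_TOKEN = "<your-odata-token>"

# Hypothetical placeholder URL of an OData view, for illustration only.
ODATA_VIEW_URL = "https://<your-workspace-host>/odata/<your-view>"

response = requests.get(
    ODATA_VIEW_URL,
    headers={"Authorization": f"Bearer {ODATA_TOKEN}"},
)
response.raise_for_status()
print(response.json())  # the analytical result stored in the OData view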
Creating OData Views
Prerequisites
You have the analyst or manager role for the process, see Roles and user management [page 31].
Context
With an OData view, you save the result of a SIGNAL query. The result can be accessed by external applications
via the SIGNAL OData API.
Note
The following restrictions apply to the SIGNAL query used for an OData view:
• Selecting all columns (SELECT *) is not supported.
• All columns must be given an alias name. This name must start with a letter or underscore and be
followed by at most 127 letters, underscores, or digits. Even when enclosed in double quotes, alias
names must not contain white spaces or special characters.
• Applying the same alias to more than one column in the same query is invalid and causes an error.
• The use of the process ID in the FROM statement is mandatory. It's not possible to use
THIS_PROCESS as a process ID. You find the process ID in the SIGNAL view in the query editor
of a saved widget or action.
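For illustration, a minimal query that satisfies these restrictions could look like the following sketch. The process ID is a placeholder, and case_id is used as a typical event log attribute.
Sample Code (SIGNAL)
SELECT COUNT(case_id) AS case_count
FROM "a1b2c3d4-example-process-id"
The query selects a single explicitly aliased column instead of SELECT *, the alias contains no white spaces or special characters, and the FROM statement references the process ID directly rather than THIS_PROCESS.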
Results
Related Information
Aliases
The SIGNAL code editor
Editing and Deleting OData Views
Learn how to edit or delete OData views in SAP Signavio Process Intelligence.
Prerequisites
You have the analyst or manager role for the process, see Roles and user management [page 31].
Context
OData views produce data that is shared in third-party applications via the SIGNAL OData API. Modifications,
whether editing or deletion, made to an OData view affect the shared data in these third-party applications.
Procedure
Note
The same restrictions apply to the SIGNAL query as when creating an OData view; see the restriction list in the previous section.
Results
Learn how to upload data files, the supported data types and formats, and how to view your upload status and
uploaded data.
For the upload of your process data files, the following applies:
• When using zipped CSV or XES files, each zipped file can only contain one file.
• To update the data in your process, upload new files. You can't edit uploaded data.
• Process data is only uploaded in full. If the upload fails at any point, no data is uploaded at all.
Read more on supported files types in section Data file types [page 44].
• SAP Signavio Process Intelligence detects the mandatory columns Case ID, Activity, and End
Timestamp and checks if the data has the required qualities.
• For other columns, a data type is suggested. Accept the suggestion or select a different type.
• For columns with timestamps, select a format or enter your own.
• If SAP Signavio Process Intelligence doesn't detect the mandatory columns of an event log, you're asked to
select the mandatory columns in the data set or to change the log type.
• A data type is suggested for all columns. You can accept the suggestions in bulk or select different types.
Data types
• Choice: For columns that contain multiple options, for example different suppliers or locations.
• Text: For text attributes only shown in the case table, for example free text fields that exist only once in the data set.
Timestamp format tokens
• MM: month (01 02 ... 11 12)
• HH: hour (00 01 ... 22 23)
• mm: minute (00 01 ... 58 59)
• ss: second (00 01 ... 58 59)
Formats and examples
• YYYY-MM-DDTHH:mm:ss, for example 2021-05-07T09:04:07
• YYYY-MM-DDTHH:mm:ss.SSS, for example 2021-05-07T09:04:16.300
• YYYY-MM-DDTHH:mm:ssZ, for example 2021-05-07T09:04:44+02:00
• YYYY-MM-DDTHH:mm:ss.SSSZ, for example 2021-05-07T09:05:02.516+02:00
Upload status
Under Process settings > Data > Event log history, you can check the status of uploaded logs.
• File not imported: The file upload failed. Hover over the status icon to get more details.
• Deleted: This process data has been deleted from the process.
To view uploaded process data, open Process settings > Data and click View imported data.
Learn about the supported file types, and the requirements for event logs and case attribute logs, for the
upload of process data to SAP Signavio Process Intelligence.
• XES
• CSV
• ZIP (CSV or XES)
• GZ (CSV or XES)
Column names in process data files can contain only the following characters:
• a-z
• A-Z
• 0-9
• §±!@#$%^*()_-+=[]{}'`~\|/.>? äöüÄÖÜß
Uploading process data files will fail if any column name contains other characters.
XES files
CSV files
Event log
This log contains the core properties of all events that occurred during the specified process.
Case ID;Activity;Timestamp
100430031000112060012015;Create FI invoice by vendor;2021-01-12T00:00:00.000
100430031000112060012015;Post invoice in FI;2021-01-08T14:26:02.000
Case attribute log
This log contains properties that provide more details about the cases. Properties can be, for example, invoice due date, customer type, user name, country, order amount, or type of goods.
The properties apply to a case in general, and aren't related to specific events.
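For illustration, a case attribute log with some of the properties mentioned above could look like the following lines. The values are made up, and the case ID must match a case ID from the event log:
Case ID;Invoice due date;Customer type;Country
100430031000112060012015;2021-02-15;B2B;DE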
Learn about extracting raw data from source systems into SAP Signavio Process Intelligence, transforming the
data into an event log, and preparing it for process analytics.
The intended audience is anyone who acquires data from the supported source systems, performs data modeling, and manages and prepares data for process analytics.
Using the Process Data Management features, you can extract and transform large volumes of your data, and
load it into a process. The transformed data can then be used for further process analytics. The goal is to help
organizations optimize the use of data for making strategic decisions to improve business outcomes.
Process data management involves the following steps:
1. Systematically extracting data from your source systems, hosted in the cloud or on premise
2. Performing transformations on the data
3. Generating an event log (process data) from the transformed data
4. Loading the transformed data into a process for analysis and process mining
For process analytics, it’s essential to have an event log where each event corresponds to a case, an activity,
and a timestamp. Essentially, an event log can be considered as a collection of cases (traces), with each case
representing a sequence of events.
Using the event log, you can perform conformance checking, for example, validate whether reality, as recorded in the event log, conforms to the defined process data model.
Note
It's important to have a basic understanding of SQL to add or customize the extraction and transformation
rules. This knowledge helps you to fine-tune the transformation rules and tailor them to your specific
business requirement.
Features
Following are the features that provide a comprehensive solution for effective process data management:
• Source Data: Using Source Data, you define from which tables of a connected source system data is extracted and loaded into SAP Signavio Process Intelligence.
• Process Data Pipelines: A process data pipeline carries out the data processing tasks: extract, transform, and load. Using this feature, you can define data transformation rules and connect to a process in which you want to load the transformed data.
• Processes: With Processes, you provide data for the analysis of a business process. You can load the data into a process in several ways, for example by linking a process data pipeline or by uploading files.
As a user, you can take advantage of help topics, organized into categories that focus on what you want to
achieve.
Find how to get access to the feature sets for process data management in SAP Signavio Process Intelligence.
Before you start requesting access to the feature sets for process data management, make sure that you’re
aware of the SAP Signavio Process Intelligence access requirements.
Note
The feature sets for process data management are only available on request.
Please contact our SAP Signavio service experts via the SAP for Me portal.
Once the features sets for process data management are available in a workspace, workspace administrators
have access to them by default.
For other users, workspace administrators need to enable the feature sets for process data management in the
SAP Signavio Process Manager under Setup > Manage Access Rights > User Groups > Feature Sets.
• View the list of tables and fields that are already extracted from the source data
• Create and delete business objects and event collectors in a process data pipeline
• Run data transformations and load transformed data into a process
ETL - Reader Role: Users with this role can't access connections and source data.
The following table provides the regions and their allowed IP addresses.
Note
The provided IP addresses are applicable only for the following connectors:
Region | NAT IPs (Egress: IPs for requests from SAP Signavio Process Intelligence to customer system) | IPs (Ingress: Incoming requests from customer system to SAP Signavio Process Intelligence) | Application URLs
Connections establish a link between your source systems and SAP Signavio Process Intelligence. This enables the transfer of data between your source systems and SAP Signavio Process Intelligence.
It allows you to access and analyze your data within the SAP Signavio Process Intelligence platform.
Connections also define from where a data pipeline can extract the data. You can create new connections as
well as manage existing ones.
Here you can find the list of available connectors for data pipelines. The connectors link the source systems to
SAP Signavio Process Intelligence.
SAP Signavio Process Intelligence enables you to connect with both cloud and on-premises systems. This
allows you to access data from various source systems, regardless of their hosting environment.
List of Connectors
Note
Except for SAP Datasphere and SAP Cloud Integration, all the connectors listed can establish connections with source systems hosted in both cloud and on-premises environments. SAP Datasphere and SAP Cloud Integration support only cloud environments.
Enterprise systems
SAP ERP (SAP RFC): RFC-compatible SAP systems. For a list of supported systems, see Supported Systems and RFC Usage [page 58].
SAP SuccessFactors
ServiceNow
Jira Software
AWS Athena: Read more in the Usage Recommendations [page 88] section.
Google BigQuery
Snowflake
SAP Datasphere
Database
SAP HANA
MySQL
PostgreSQL
MongoDB
Other
Elastic Search
Learn about the high-level steps involved while connecting with SAP ERP through RFC function module.
The following topics describe the high-level steps to connect with an SAP ERP system hosted on-premises through the RFC function module and extract the data.
Setting Up a New User to Access SAP Tables with RFC [page 61]
Set up a user to access SAP Tables through the RFC function module.
Find out the SAP ERP supported systems and their versions, and RFC usage.
You can establish a connection with any ABAP system version that's compatible with the RFC function module RFC_READ_TABLE.
Using the standard RFC function module RFC_READ_TABLE, you can extract data from the SAP database tables. The RFC function module can be used for ABAP systems with SAP Basis component version 7.40 and above. For instance, product versions with minimum SAP Basis 7.40 are supported.
Note
Ensure that your SAP ERP (SAP ECC) version is compatible with SAP Notes listed in the Prerequisites
section. Review the information about supported packages included in each SAP Note.
Get a list of required and recommended SAP Notes for working with SAP ERP (SAP RFC) connector.
Caution
Your SAP Basis version must include the functionality of the following list of SAP Notes. If not, install the necessary SAP Notes. If you are not compliant, you can't extract your enterprise process data.
• 2246160
• 3297175
• 3139000
For instructions on how to install SAP Notes, see Note Assistant | SAP Help Portal.
Note
Make sure that you implement correction instructions in all SAP Notes. Also, check the supported packages
information available in each SAP Note.
Note
The following SAP Notes are optional, but need to be installed based on the specific use case.
Learn how to create an SAP service account and the access required for SAP service user.
Get an SAP service user account to work with the RFC function. For information about SAP service user
creation, see Setting Up a New User to Access SAP Tables with RFC [page 61].
The service user requires the following authorization objects:
• S_TABU_NAM
• S_TABU_DIS
Set up a user to access SAP Tables through the RFC function module.
To familiarize yourself with SAP authorization, see the SAP Authorization Concept.
1. Create a new role (TEST_EXTRACTION) by following the steps in SAP Guide and Assigning SAP
Authorizations to the RFC User.
7. On the Authorization tab, manually add the following authorization objects, which enable the authorization checks for different functions and tables:
• S_RFC
• S_TABU_NAM
• S_TABU_DIS
8. Choose the role name at the top of the list and expand it to display the lowest level of each entry:
• If you're editing an existing role with an existing profile, update the profile.
11. Go to /nSUPC.
12. Enter the role name, for example SIGNAVIO_EXTRACTION, and execute.
14. Click the generate icon.
A dialog window opens.
15. Click Online.
16. To apply the roles and profiles to a new user, go to /nSU01.
17. Enter the user name, for example SIGNAVIO, and click Create.
18. In the Logon Data tab, select User Type based on your organization’s roles and authorization policy. For
example, Dialog, system, communication, service. For more information, see User types.
Note
• SAP_BC_JSF_COMMUNICATION
• SAP_BC_JSF_COMMUNICATION_RO
• SAP_BC_JSF_COMMUNICATION__NAMED
19. On the Roles tab, add the role SIGNAVIO_EXTRACTION or the name of the role you created.
20. If one of the entries is in a red status, click User master record.
The entry changes to a green status.
21. On the Profiles tab, check if the following profiles are added automatically:
• SIGNAVIO: This is added automatically when the SIGNAVIO_EXTRACTION role is added in the Roles
tab.
• T-I3550107: Only if SAP_BC_JSF_COMMUNICATION_RO role is added in the Roles tab.
• T-I3551007: Only if SAP_BC_JSF_COMMUNICATION role is added in the Roles tab.
Related Information
Authorization in RFC_READ_TABLE
Learn how to connect with your SAP ERP source system through RFC function module.
The following table shows the list of credentials to connect with the SAP ERP source system through RFC
function module.
• System Number: The number by which the target system is defined. Used when setting the host connection property.
• SNCQop: The quality of protection. Possible values:
1: Authentication only
8: Default protection
9: Maximum protection
Example:
SNCQop=8
• SNCPartnerName: The application server's SAP Secure Network Connection name. This is a required input when using SAP Secure Network Connection.
• SNCLibPath
Example
With an application server's hostname (ldcsqm7.wdf.sap.corp) and port number (19363):
MessageServer=/H/ldcsqm7.wdf.sap.corp/S/19363;SystemId=QM7;Group=PUBLIC
Generic format:
MessageServer=<YOUR_MESSAGE_SERVER>;SystemId=<YOUR_SYSTEM_ID>;Group=<YOUR_GROUP>
You can get all these necessary details from the SAP GUI Logon screen. For example, if you're working with Windows OS, see https://help.sap.com/docs/sap_gui_for_windows/63bd20104af84112973ad59590645513/64dba409f8484e8ea5f8de81f74d4112.html?version=800.07. In this example, the MessageServer field contains the application host name (abcdefg.wdf.sap.corp) and the port number (123456).
• InitialValueMode=InitialValue
The InitialValueMode property is used within the SAP driver to control how null values and initial/default values from SAP are handled and represented in the retrieved data. By default, the InitialValueMode property is set to null. This means that any initial values (default values as defined by SAP) are treated as null in the results returned by the SAP driver. By setting InitialValueMode to InitialValue, the original default value is extracted.
• ReplaceInvalidDatesWithNull=True
Usage Recommendations
• The data extraction of large tables like CDPOS, CDHDR, BSEG, BKPF requires an elaborate partitioning
strategy to run without errors. Read more in the section Partition Strategies [page 186].
• Extracting data from CDS Views is now supported. To connect CDS Views of SAP S/4HANA systems that
are running in the public and private cloud, you can use the SAP S/4HANA connector. Read more in the
Connector - SAP S/4HANA [page 79] section.
Allowed IP Addresses
The firewall allows data through specific IP addresses only. For the list of IPs, see the Regions, IP Addresses, and URLs section.
Security Recommendations
• Use On-Premises extractor with SNC for both encryption and authentication.
• For basic authentication, it's recommended to periodically rotate passwords or do so immediately if there
is any suspicion that a password may have been compromised.
To establish a connection with your source system hosted on-premises, enable the option This datasource connects to an on-premises system when setting up a connection in SAP Signavio Process Intelligence. For information on how to connect with on-premises systems, see Set Up and Manage On-Premises Extractors [page 137].
When extracting data while connected to the SAP ERP system, you can encounter any of the following errors.
If a problem persists, please contact our SAP Signavio service experts via the SAP for Me portal.
If your data has an unsupported date format, an error occurs.
Solution:
The date must be in the ISO-8601 format. To fix the error, add the following argument in the Extra Connection Arguments field on your connector configuration screen and then extract the data.
ReplaceInvalidDatesWithNull=True
If your data has an unsupported time format, an error occurs.
Solution:
The time must be in the Unix time or ISO-8601 format. To fix the error, add the following argument in the Extra
Connection Arguments field on your connector configuration page and then extract data.
ReplaceInvalidTypesWithNull=True
This argument replaces invalid data types other than date in your dataset with a null value.
In some cases, the value of the column is extracted as null, even though the original default value is a different
one.
Example:
When extracting data from SAP ERP through the RFC connector, the field ERZET with value '00:00:00' is
extracted as NULL.
When extracting data from SAP ERP through the RFC connector, the field VBTYP from the table LIKP with an
empty string value ('') is extracted as NULL.
Solution:
To extract the same values in source data, add the following argument in the Extra Connection Arguments field
on your connector configuration page.
InitialValueMode=InitialValue
By setting InitialValueMode to InitialValue, the previously mentioned ERZET field will be extracted as
'00:00:00' and the VBTYP field will be extracted as an empty string ('').
Note
This change is applied to the values of all the tables linked to that connection.
When extracting data from SAP ERP ECC systems, sometimes you encounter duplicate entries in the extracted
table.
Solution:
Add the following argument in the Extra connection arguments field in the SAP ERP (RFC) connection.
RFCReadTableOptions=USE_ET_DATA_4_RETURN,GET_SORTED;Pagesize=20000
You can change the Pagesize based on your need, ranging from 5000 to 1 million.
When performing advanced extractions from SAP ERP systems, you sometimes encounter an out-of-memory exception. This is caused by fetching more than 500,000 entries in one logical partition.
Solution:
Each complex script must use multiple partitions so that only a limited number of entries is fetched from each partition. The extraction limit is set to roughly 500,000 entries per partition in our system.
For example, let's assume that GJAHR 2020-2023 for BSEG has the following logical partitions:
If one of the partitions is greater than the limit set in our system (~500,000 entries), the extraction fails with an out-of-memory issue.
Assuming that we have 750,000 entries for 2022 and company codes are equally distributed across BSEG for 2022, the solution could be to add BUKRS = (0001, A100, A200, A201). Then we have the following logical partitions:
BSEG for GJAHR 2022 and BUKRS = 0001 => assume 300,000
BSEG for GJAHR 2022 and BUKRS = A100 => assume 150,000
BSEG for GJAHR 2022 and BUKRS = A200 => assume 250,000
.....
Now, the largest partition size is lowered from 750,000 to 300,000 entries. This will not lead to out-of-memory issues.
By default, the tilde "~" character is the column delimiter. If a table cell contains a tilde "~" character, the information after the tilde is pushed into the next column to the right, and the information in all subsequent columns of that row is pushed one column to the right.
Solution:
To use a character other than the tilde "~" as the column delimiter, add the following argument in the Extra connection arguments field in the SAP ERP (RFC) connection.
ReadTableDelimiter=<Delimiter character>
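For example, to use the pipe character as the column delimiter (an illustrative choice; pick any character that doesn't occur in your table data):
ReadTableDelimiter=|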
Learn about the required credentials for establishing a connection between SAP Signavio Process Insights and
SAP Signavio Process Intelligence.
Stage: Released
Version: 22.0.8370.0
For information about the steps required in your SAP BTP subaccount before you can load data from SAP
Signavio Process Insights to SAP Signavio Process Intelligence, see Prepare to Connect SAP Signavio Process
Insights.
OAuth Authentication
Credential descriptions:
• OAuth Client Secret: The OAuth client secret used to authenticate the OAuth Client Id. This corresponds to uaa.clientsecret from the service key.
• Service Root: For information about completing this field, see Service Root with Region Information [page 75].
• OAuth Access Token Url: The URL used to retrieve the access token. This corresponds to uaa.url from the service key.
Note
The suffix /oauth/token must be added.
• Target Currency: All monetary values from the extracted data are converted to the target currency specified here. The target currency specified here must be a currency that is available in SAP Signavio Process Insights, for example, USD, EUR, or GBP.
Note
We do not recommend changing the target currency after data has already been loaded because this might result in inconsistent values.
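For example, if uaa.url in your service key is https://mysubaccount.authentication.eu20.hana.ondemand.com (a made-up value for illustration), enter https://mysubaccount.authentication.eu20.hana.ondemand.com/oauth/token in the OAuth Access Token Url field.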
Learn how to complete the Service Root field for the SAP Signavio Process Insights connector.
In the Service Root field, enter the URL in the format <root-URL>/api/<service>/v1/, where you can find
the information for <root-URL> in the following table.
Note
The root URLs for eu10 and eu20 need to be handled differently to the root URLs for other region IDs. For example, for region ID eu20, the root URL is:
https://bpi-plug-and-gain-001.cfapps.eu20-001.hana.ondemand.com
Note
Please check the uaa.url field in your service key to confirm the region information that applies in your case.
<service> specifies which data (process flows and performance indicators) you want to extract from SAP
Signavio Process Insights. The supported services are described in the following table.
• EventLogsService: This service is relevant for the entry point for process landscape analysis.
• Finance_I2CService: This service is relevant for the Value Accelerator for Analysis of Invoice to Cash.
• Finance_I2PService: This service is relevant for the Value Accelerator for Analysis of Invoice to Pay.
• ManufacturingService: This service is relevant for the Value Accelerator for Analysis of Plan to Fulfill.
• AssetManagementService: This service is relevant for the Value Accelerator for Analysis of Acquire to Decommission.
• SourcingAndProcurementService: This service is relevant for the Value Accelerator for Analysis of Source to Pay.
• SalesService: This service is relevant for the Value Accelerator for Analysis of Lead to Cash.
• RecordToReportService: This service is relevant for the Value Accelerator for Analysis of Record to Report Processes (Financial Closing).
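Putting both parts together: for region ID eu20 and the EventLogsService, the Service Root would be https://bpi-plug-and-gain-001.cfapps.eu20-001.hana.ondemand.com/api/EventLogsService/v1/, following the <root-URL>/api/<service>/v1/ format described above.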
Get the service key information required to set up the connection between SAP Signavio Process Intelligence
and SAP Signavio Process Insights. You will find your service key details in SAP BTP.
Prerequisites
• The SAP Signavio Process Insights tenant has already been set up and has data available that you want to
send to SAP Signavio Process Intelligence.
• A service instance has been created with the api-plan service plan. This is the same plan used for the
analytical API of SAP Signavio Process Insights.
• Your SAP BTP user has the Subaccount Administrator role collection for the subaccount in which the
service instance of the SAP Signavio Process Insights API service was created.
You can see whether you have this authorization in the SAP BTP cockpit by navigating to the subaccount
and choosing Users. If you don't have this role collection, a subaccount administrator can assign it to you.
See Add Members to Your Subaccount.
• Your user has the Org Manager role in your organization.
You can see whether you have this authorization in the SAP BTP cockpit by navigating to the subaccount and choosing Cloud Foundry > Org Members. If you don't have this role, an org manager can assign it to you. See Add Org Members Using the Cockpit.
• Your user has the Space Developer role in the space created by the booster.
You can see whether you have this role in the SAP BTP cockpit by navigating to the subaccount,
choosing Cloud Foundry Spaces , opening the relevant space, and choosing Space Members. If you
haven't yet been added to the space, a space manager can add you as a member and assign the Space
Developer role.
Context
When a service instance is created in your subaccount for SAP Signavio Process Insights using the api-plan
service plan, a service key is also created for that service instance in the SAP BTP cockpit.
The service key contains the following parameters that are required to connect your SAP Signavio Process
Intelligence system to your SAP Signavio Process Insights tenant:
• uaa.clientid and uaa.clientsecret: Client ID and client secret used to authenticate to SAP Signavio
Process Insights. Enter the uaa.clientid value into the OAuth Client Id field in the SAP Signavio Process
Insights connector and the uaa.clientsecret value into the OAuth Client Secret field.
• uaa.url: Token endpoint to generate the authorization token. Enter this value into the OAuth Access Token
Url field in the SAP Signavio Process Insights connector.
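For illustration, the relevant part of a service key JSON typically looks like the following sketch. All values here are placeholders, and your actual key contains additional fields.
{
  "uaa": {
    "clientid": "sb-example-client-id",
    "clientsecret": "example-secret",
    "url": "https://your-subdomain.authentication.eu20.hana.ondemand.com"
  }
}
In this sketch, the value to enter in the OAuth Access Token Url field would be https://your-subdomain.authentication.eu20.hana.ondemand.com/oauth/token, because the suffix /oauth/token must be added to the uaa.url value.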
1. In the SAP BTP cockpit, navigate to your subaccount and choose Instances and Subscriptions.
2. Under Instances, select the service instance for SAP Signavio Process Insights.
3. Under Service Keys, choose the link for the key that was created for the service instance.
4. Copy or download the JSON so you have the details on hand.
Caution
Outside of the SAP BTP cockpit, service keys must be stored securely, since a key allows data to be sent to your tenant. If you need a service key, create it directly in the SAP BTP cockpit and access it from there whenever you need it.
Next Steps
Now that you have the service key details, you can use them as you create the connection to SAP Signavio Process Insights. For more information about creating this connection, see Connector - SAP Signavio Process Insights [page 73].
For general information about how to create a connector, see Creating a Connection in the SAP Signavio
Process Intelligence User Guide.
Learn how to connect with your SAP S/4HANA CDS Views source system.
Stage Released
Version 22.0.8370.0
Overview
You can connect to SAP S/4HANA systems, on-premise, public cloud, and private cloud, and extract data using the Cloud Data Integration (CDI) adapter. The adapter provides access to the business objects in the connected source system.
Using a CDI adapter, you can access the Core Data Service (CDS) extraction views available in SAP S/4HANA Cloud.
Credential Description
Client The three-digit unique code of the client to use in the source system for this connection. If no value is provided, the system's default client is used. For example, 100.
Note
If the client certificate isn't configured, the connector only supports basic authentication, which is logging
in with a user name and password.
Security Recommendations
• For basic authentication, it's recommended to periodically rotate passwords or do so immediately if there
is any suspicion that a password may have been compromised.
• Use HTTPS if possible.
Following are the high-level steps to connect with your source system and extract data:
To establish a connection with your source system hosted on-premises, you need to enable the option This datasource connects to an on-premises system while setting up a connection in SAP Signavio Process Intelligence. For information on how to connect with an on-premises system, see Set Up and Manage On-Premises Extractors [page 137].
Related Information
https://editor.signavio.com/g/statics/pi-etl/documentation/solutions/templates-intro
Stage Released
Version 22.0.8307.0
Overview
Credential Description
Host Set this to the URL of the server where your SAP
SuccessFactors instance is hosted.
The firewall allows data through specific IP addresses. For the list of IPs, see the Regions, IP Addresses, and URLs section.
Security Recommendations
Password rotation: periodically rotate passwords, or do so immediately if there is any suspicion that a password may have been compromised.
Following are the high-level steps to connect with your source system and extract data:
Related Information
Find the required credentials for the ServiceNow connector, and high-level information on how to create a connector and extract data.
Stage Released
Version 22.0.8370.0
Overview
You can connect to the instances hosted on ServiceNow and third-party sites.
Credential Description
Security Recommendations
Password rotation: periodically rotate passwords, or do so immediately if there is any suspicion that a password may have been compromised.
Following are the high-level steps to connect with your source system and extract data:
To establish a connection with your source system hosted on-premises, you need to enable the option This datasource connects to an on-premises system while setting up a connection in SAP Signavio Process Intelligence. For information on how to connect with an on-premises system, see Set Up and Manage On-Premises Extractors [page 137].
Find the credentials for the Jira Software connector, and how to create a connection and extract data.
Stage Released
Version 22.0.8370.0
Overview
Credential Description
Enterprise URL The URL to your Jira endpoint, for example https://jira.acme.com/.
Security Recommendations
• Password rotation: periodically rotate passwords, or do so immediately if there is any suspicion that a password may have been compromised.
• Use HTTPS if possible.
Following are the high-level steps to connect with your source system and extract data:
To establish a connection with your source system hosted on-premises, you need to enable the option This datasource connects to an on-premises system while setting up a connection in SAP Signavio Process Intelligence. For information on how to connect with an on-premises system, see Set Up and Manage On-Premises Extractors [page 137].
Find the credentials for the AWS Athena connector, and how to create a connection and extract data.
Stage Released
Version 2.0.35.1001
Overview
Credential Description
AWS Access Key ID Your AWS account access key. You access this value from
your AWS security credentials page.
AWS Secret Access Key Your AWS account secret key. You access this value from
your AWS security credentials page.
AWS Region The hosting region for your Amazon Web Services.
Extra Connection Arguments Timeout: The value in seconds until the timeout error is thrown, canceling the operation, for example 10 for a timeout after 10 seconds.
Usage Recommendations
• When you use this connector, do not extract the columns c_key and c_fetchdate, as these are created internally by the connector itself.
• No validation: This field can't be validated when testing the connection. The validation won't detect any errors related to the information provided in this field. If the information in this field isn't correct, later operations such as preview and extraction fail even if the validation is successful.
Access key rotation: periodically rotate access keys, or do so immediately if there is any suspicion that an access key may have been compromised.
Following are the high-level steps to connect with your source system and extract data:
To establish a connection with your source system hosted on-premises, you need to enable the option This datasource connects to an on-premises system while setting up a connection in SAP Signavio Process Intelligence. For information on how to connect with an on-premises system, see Set Up and Manage On-Premises Extractors [page 137].
Find the credentials for the AWS S3 connector, and how to create a connection and extract data.
Overview
Credential Description
AWS Access Key ID Your AWS account access key. You access this value from
your AWS security credentials page.
AWS Secret Access Key Your AWS account secret key. You access this value from
your AWS security credentials page.
Delimiter Character Character that separates the columns in the CSV, TSV, or
TXT file.
Security Recommendation
Access key rotation: periodically rotate access keys, or do so immediately if there is any suspicion that an access key may have been compromised.
To customize the driver behavior when discovering the schema of the files, add the file schema.ini in your bucket in the same location where all the files exist. In the schema.ini file, you can specify the format of a text file you want to model as a table, and you can also define the columns of the table. The driver uses a definition in schema.ini if one exists, and otherwise uses the file name to report the table.
Any section in schema.ini must begin with the file name including the file extension enclosed in square
brackets.
Example:
[InvoicesFile.txt]
With the ColNameHeader property, you can specify whether the file contains a header or not.
Example:
ColNameHeader=True
You can set the Format property to the format of the file. The following values are possible:
• CSVDelimited
• TabDelimited
• Delimited (custom character)
Example:
Format=Delimited(,)
The DateTimeFormat property can be set for date, time, and datetime type columns. All standard formats are
supported.
Example 1:
DateTimeFormat=M/d/yyyy
Example 2:
DateTimeFormat=yyyy-M-dTHH:mm:ss.SSSZ
Define Columns
There are two ways to define columns based on the fields in your text files:
• Define the column names in the file's first row, the header row. When you connect to an AWS S3 bucket,
the driver determines the data type.
• Define the column number, name, data type, and width in schema.ini. Columns defined this way
override columns initially accepted from the header row. You can ignore a file's header row by specifying
ColNameHeader=False in the file's section in schema.ini.
To define a column in schema.ini, use the following format: Coln=ColumnName DataType [Width <width>]
If you set a column to a fixed length, it is mandatory to define the width of each column as well.
Example:
[invoices.csv]
ColNameHeader=True
Format=Delimited(,)
DateTimeFormat=d/M/yyyy
Col1=id numeric
Col2=invoicedate date
Col3=total numeric
The following data types are supported:
• boolean
• date
• time
• datetime
• decimal
• double
• tinyint
• smallint
• integer
• bigint
• float
• string
• text
• longtext
• char
• varchar
• nvarchar
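As an illustration of the width syntax above, the following sketch defines a character column with an explicit width. The file and column names are placeholders.
[customers.csv]
ColNameHeader=True
Format=Delimited(,)
Col1=id integer
Col2=name char Width 40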
Following are the high-level steps to connect with your source system and extract data:
To establish a connection with your source system hosted on-premises, you need to enable the option This datasource connects to an on-premises system while setting up a connection in SAP Signavio Process Intelligence. For information on how to connect with an on-premises system, see Set Up and Manage On-Premises Extractors [page 137].
Find the credentials for the Google BigQuery connector, and how to create a connection and extract data.
Stage Released
Version 22.0.8370.0
Overview
Credential Description
Service User Key You can download the Service key from a Google Service
Account.
Extra Connection Arguments Timeout: The value in seconds until the timeout error is thrown, canceling the operation, for example 10 for a timeout after 10 seconds.
Remember
• The Google BigQuery connector can retrieve results only if destination tables are defined. This is because the maximum response size for queries written to temporary tables is 10 GB. Including a destination table in your query gives results without any query or extraction errors. Read more at https://cloud.google.com/knowledge/kb/bigquery-response-too-large-to-return-consider-setting-allowlargeresults-to-true-in-your-job-configuration-000004266.
Following are the high-level steps to connect with your source system and extract data:
To establish a connection with your source system hosted on-premises, you need to enable the option This datasource connects to an on-premises system while setting up a connection in SAP Signavio Process Intelligence. For information on how to connect with an on-premises system, see Set Up and Manage On-Premises Extractors [page 137].
Find the credentials for the Snowflake connector, and how to create a connection and extract data.
Stage Released
Version 22.0.8370.0
Overview
Credential Description
Extra Connection Arguments Timeout: The value in seconds until the timeout error is thrown, canceling the operation, for example 10 for a timeout after 10 seconds.
Security Recommendation
Password rotation: periodically rotate passwords, or do so immediately if there is any suspicion that a password may have been compromised.
Following are the high-level steps to connect with your source system and extract data:
To establish a connection with your source system hosted on-premises, you need to enable the option This datasource connects to an on-premises system while setting up a connection in SAP Signavio Process Intelligence. For information on how to connect with an on-premises system, see Set Up and Manage On-Premises Extractors [page 137].
Find the credentials for the Azure Data Lake Storage Gen2 connector, and learn how to create a connection and extract data.
Stage Released
Version 23.0.8839.0
Connects to an Azure Data Lake Storage Gen2 and extracts CSV, TSV, or TXT files.
Credential Description
OAuth Client Id The OAuth Client Id of a custom created Azure OAuth app.
OAuth Client Secret The OAuth Client Secret of a custom created Azure OAuth
app.
Location The path to the stored files. Starts with the name of the filesystem followed by the path to the files to be extracted. Schema: filesystem/path-to-files. For example: exports/2024/12 or exports/2000-2010/2008.
Azure Tenant The name or ID of the Azure tenant containing the Azure Data Lake Gen2.
Delimiter Character Character that separates the columns in the CSV, TSV, or TXT file. To specify a tab as a delimiter, set this value to TabDelimited.
See the Azure documentation on the data principal. For authentication, use the client secret and not the certificate option.
Regularly change the OAuth Client Secret and do so immediately if there is any suspicion that the secret has
been compromised.
Any section in schema.ini must begin with the file name including the file extension enclosed in square
brackets.
Example:
[InvoicesFile.txt]
With the ColNameHeader property, you can specify whether the file contains a header or not.
Example:
ColNameHeader=True
You can set the Format property to the format of the file. The following values are possible:
• CSVDelimited
• TabDelimited
• Delimited (custom character)
Example:
Format=Delimited(,)
The DateTimeFormat property can be set for date, time, and datetime type columns. All standard formats are
supported.
Example 1:
DateTimeFormat=M/d/yyyy
Example 2:
DateTimeFormat=yyyy-M-dTHH:mm:ss.SSSZ
Define Columns
There are two ways to define columns based on the fields in your text files:
• Define the column names in the file's first row, the header row. When you connect to Azure Data Lake Storage Gen2, the driver determines the data type.
• Define the column number, name, data type, and width in schema.ini. Columns defined this way
override columns initially accepted from the header row. You can ignore a file's header row by specifying
ColNameHeader=False in the file's section in schema.ini.
To define a column in schema.ini, use the following format: Coln=ColumnName DataType [Width <width>]
If you set a column to a fixed length, it is mandatory to define the width of each column as well.
Example:
[invoices.csv]
ColNameHeader=True
Format=Delimited(,)
DateTimeFormat=d/M/yyyy
Col1=id numeric
Col2=invoicedate date
Col3=total numeric
The following data types are supported:
• boolean
• date
• time
• datetime
• decimal
• double
• tinyint
• smallint
• integer
• bigint
• float
• string
• text
• longtext
• char
• varchar
• nvarchar
Following are the high-level steps to connect with your source system and extract data:
To establish a connection with your source system hosted on-premises, you need to enable the option This datasource connects to an on-premises system while setting up a connection in SAP Signavio Process Intelligence. For information on how to connect with an on-premises system, see Set Up and Manage On-Premises Extractors [page 137].
Stage Released
Version 22.0.8472.0
Overview
Credential Description
Username The user account provided for authentication with the SAP
HANA database.
Value Description
O Organization
OU Organizational Unit
L Locality
S State
C Country
E Email Address
Value Description
USER - default For Windows, this specifies that the certificate store is a certificate store owned by the current user. Note that this store type is not available in Java.
JKSFILE The certificate store is the name of a Java key store (JKS) file containing certificates. Note that this store type is only available in Java.
The firewall allows data through specific IP addresses. For the list of IPs, see the Regions, IP Addresses, and URLs section.
• To get the allowed IP address for Managed Private Cloud, contact your customer success manager.
• A secure websocket and HTTPS traffic to the IP addresses need to be allowed on TCP port 443.
Following are the high-level steps to connect with your source system and extract data:
To establish a connection with your source system hosted on-premises, you need to enable the option This datasource connects to an on-premises system while setting up a connection in SAP Signavio Process Intelligence. For information on how to connect with an on-premises system, see Set Up and Manage On-Premises Extractors [page 137].
Related Information
Find the credentials for the MySQL connector, and how to create a connection and extract data.
Stage Released
Version 22.0.8370.0
Overview
• mysql
• information_schema
• performance_schema
• sys
Extra Connection Arguments Timeout: The value in seconds until the timeout error is thrown, canceling the operation, for example 10 for a timeout after 10 seconds.
Security Recommendation
Password rotation: periodically rotate passwords, or do so immediately if there is any suspicion that a password may have been compromised.
Following are the high-level steps to connect with your source system and extract data:
To establish a connection with your source system hosted on-premises, you need to enable the option This datasource connects to an on-premises system while setting up a connection in SAP Signavio Process Intelligence. For information on how to connect with an on-premises system, see Set Up and Manage On-Premises Extractors [page 137].
Find the credentials for the PostgreSQL connector, and how to create a connection and extract data.
Stage Released
Version 22.0.8370.0
Overview
Credential Description
Extra Connection Arguments Timeout: The value in seconds until the timeout error is thrown, canceling the operation, for example 10 for a timeout after 10 seconds.
Security Recommendation
Password rotation: periodically rotate passwords, or do so immediately if there is any suspicion that a password may have been compromised.
Following are the high-level steps to connect with your source system and extract data:
Learn about the parameters needed to connect with your MongoDB database.
Stage Released
Version 22.0.8370.0
Overview
Connects to a MongoDB database, whether it's installed and run in the cloud or on-premises.
The following table shows the essential details for connecting with your MongoDB, whether it's a cloud deployment or on-premises.
Parameters Description
Database The name of the MongoDB database that you want to read
from and write to.
If you don't pass values to this parameter, the driver uses the
default value, admin.
Note
You need both parameters, Database and Authentication Database, to properly authenticate with a different database on the server.
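For example, to read from a database other than the default while authenticating against the admin database, the two parameters might be set as follows (the database name is illustrative):
Database=sales
Authentication Database=admin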
To establish a connection with a MongoDB database installed on-premises, you need to enable the option This datasource connects to an on-premises system while setting up a connection in SAP Signavio Process Intelligence.
Security Recommendations
Following are the high-level steps to connect with your cloud system and extract data:
To extract data from your MongoDB database available on-premises, refer to the On-Premises Extractor [page
131] section.
Stage Released
Version 1.0
Overview
Credential Description
API endpoint The endpoint URL generated when you create a connection with the Ingestion API source system. Use the endpoint URL to connect your third-party source systems to SAP Signavio Process Intelligence.
Prerequisites
Before uploading, make the following checks and preparations in your data to avoid encountering errors.
• Ensure dates and times use supported data types. These include:
• Date
• Timestamp (millisecond precision)
• Time (millisecond precision)
• Convert date and timestamp formats in files to milliseconds before upload; a conversion sketch follows this list. For example:
Fri Jun 24 2022 10:58:41 GMT+0200 becomes 1656061121670 milliseconds
• Replace NULL values with an empty string in files. Not all NULL value types are detected.
• Column names must not contain special characters. Only characters from the character class [A-Za-z_]
are valid for column names. For example, column names must not contain spaces.
• Avoid spaces in the names of uploaded files.
• Avoid spaces in the specified table name in the schema.
• SAP Signavio Process Intelligence doesn't support pseudonymizing the uploaded data.
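As a minimal sketch of the timestamp conversion mentioned in the list above, the following command uses GNU date (assumed to be available, for example on Linux) to turn the example timestamp into epoch milliseconds:
date -d '2022-06-24 10:58:41 +0200' +%s%3N
The command prints 1656061121000, which matches the example above apart from the sub-second part.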
Uploading Data
1. Set up data ingestion by creating an ingestion connection, source data, or process data pipeline in SAP
Signavio Process Intelligence. Read more in section Setting Up Source Data [page 216].
Note
• Connections to the Ingestion API are authenticated by an access token. This token can't be
refreshed once you have created an Ingestion API connection. To get a new token, delete the
existing connection and create a new connection. See Ingestion API Authentication for more
information.
• Only one connection can be linked to one source data for data ingestion.
2. Call the API using the API credentials and upload the data. Read more in the sections Ingestion Request
and Ingestion Status Request.
Note
• The duration between API calls that upload data to the same source must be at least 30 seconds.
Otherwise, you get a timeout error.
• If the size of the CSV or TSV file exceeds 150 MB, we recommend you split it into multiple files of
maximum 150 MB each. You can then make multiple upload requests using the same schema.
• API calls mustn't contain more than five files per call.
3. Run the initial transformation and load. Pipeline logs are generated to provide transformation and load
information. Read more in section Running the Transformation and Load [page 289].
Once uploaded, the data is ready for investigation. Read how to define and grant access to process data in
section Prepare a process [page 20] and how to analyze data in section Process Mining [page 301].
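To make step 2 above more concrete, the following is a minimal sketch of an upload call, assuming a multipart HTTP POST authenticated with the access token. The endpoint path, header, and form field names are purely illustrative assumptions; the actual contract is described in the Ingestion Request section.
curl -X POST "<API endpoint>" \
  -H "Authorization: Bearer <access token>" \
  -F "schema=@schema.json" \
  -F "file=@events.csv"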
• Your data must conform to the existing data ingestion schema. The Ingestion API can't be used to modify
the schema.
• The primary key of your new data must be the same as that of the existing data. The primary key of an
existing ingestion table cannot be modified.
• If you upload data for existing records which were added in earlier requests, it is assumed you're
performing an update. Pushing data to the same table with the same primary keys overwrites existing
data. Note that this doesn't apply to duplicate records uploaded in the same request. It is expected that all
records in a single upload request are unique.
Stage Released
Version 22.0.8369.0
AthenaJDBC42-unsigned 2.0.16.1000
Before using the manual CSV upload connector, read the requirements and considerations for CSV files. See
Upload process data as CSV files [page 128].
Overview
Once the data is uploaded, it gets parsed via the CSV CData driver, and the connection is then backed by an AWS Athena database for the extractor to read data from.
Current Limitations
Stage Released
Version 22.0.8370.0
Overview
Credential Description
Extra Connection Arguments Timeout: The value in seconds until the timeout error is thrown, canceling the operation, for example 10 for a timeout after 10 seconds.
Security Recommendations
• API key rotation: periodically rotate the API key, or do so immediately if there is any suspicion that an API key may have been compromised.
• Use HTTPS if possible.
Following are the high-level steps to connect with your source system and extract data:
To establish a connection with your source system hosted on-premises, you need to enable the option This datasource connects to an on-premises system while setting up a connection in SAP Signavio Process Intelligence. For information on how to connect with an on-premises system, see Set Up and Manage On-Premises Extractors [page 137].
Find the credentials for the OData connector, and how to create a connection and extract data.
Stage Released
Version 22.0.8370.0
Overview
This adapter connects to OData 2.0, 3.0, and 4.0 services. The OData webservice needs to support $select
queries as follows:
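An illustrative $select request, with placeholder service root, entity, and field names:
GET https://MySite/MyOrganization/Entities?$select=Field1,Field2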
If this isn't supported, the OData webservice won't be able to select individual fields from a table or entity.
Authentication
When configuring an OData (Open Data Protocol) connector, two authentication methods are available:
• Basic
• OAuth
Basic Authentication
Credential Description
Service Root URL to the organization root or the OData services file, for
example https://MySite/MyOrganization.
OAuth Authentication
Credential Description
OAuth Client Id The OAuth Client Id used to authenticate to the OData database.
OAuth Client Secret The OAuth Client Secret used to authenticate the OAuth Client Id.
Service Root URL to the organization root or the OData services file, for example https://MySite/MyOrganization.
OAuth Access Token Url The OAuth Access Token Url used to retrieve the access token.
Security Recommendations
Following are the high-level steps to connect with your source system and extract data:
To establish a connection with your source system hosted on-premises, you need to enable the option This datasource connects to an on-premises system while setting up a connection in SAP Signavio Process Intelligence. For information on how to connect with an on-premises system, see Set Up and Manage On-Premises Extractors [page 137].
Find the credentials for the Microsoft SQL Server connector, and how to create a connection and extract data.
Stage Released
Version 22.0.8370.0
Overview
Credential Description
Extra Connection Arguments Timeout: The value in seconds until the timeout error is thrown, canceling the operation, for example 10 for a timeout after 10 seconds.
Security Recommendations
Following are the high-level steps to connect with your source system and extract data:
To establish a connection with your source system hosted on-premises, you need to enable the option This datasource connects to an on-premises system while setting up a connection in SAP Signavio Process Intelligence. For information on how to connect with an on-premises system, see Set Up and Manage On-Premises Extractors [page 137].
Connect to SAP Cloud and extract data from the integrated applications.
Stage Released
Version 22.0.8370.0
Note
Currently, SAP Cloud Integration only supports full data extraction. If you want to filter out specific data
and extract only the data you need, use the Ingestion API. For information on the Ingestion API, see Upload
Data Using the Ingestion API [page 215].
Basic Authentication
When configuring the SAP Cloud Integration connector, you'll use the basic authentication method.
Credential Description
Service Root URL to the organization root or the SAP Cloud services file,
for example https://MySite/MyOrganization.
Security Recommendation
Password rotation: periodically rotate passwords, or do so immediately if there is any suspicion that a password may have been compromised.
Find the credentials for the SAP Datasphere connector, and how to create a connection and extract data.
Stage Released
Version 22.0.8472.0
Overview
For information on obtaining Open SQL schema connection information, see the Connect to your Open SQL Schema section in the SAP Help portal.
Credential Description
Username The user account obtained from the Open SQL schema to authenticate with SAP Datasphere.
Port 443
Database H00
Value Description
O Organization
OU Organizational Unit
L Locality
S State
C Country
E Email Address
Value Description
USER - default For Windows, this specifies that the certificate store is a certificate store owned by the current user. Note that this store type is not available in Java.
JKSFILE The certificate store is the name of a Java key store (JKS) file containing certificates. Note that this store type is only available in Java.
Allowed IP Addresses
The firewall allows data through specific IP addresses. For the list of IPs, see the Regions, IP Addresses, and URLs section.
Security Recommendations
Following are the high-level steps to connect with your source system and extract data:
To establish a connection with your source system hosted on-premises, you need to enable the option This datasource connects to an on-premises system while setting up a connection in SAP Signavio Process Intelligence. For information on how to connect with an on-premises system, see Set Up and Manage On-Premises Extractors [page 137].
Related Information
Enterprise systems
SAP ERP (SAP RFC) • Use the On-Premises extractor with SNC for both encryption and authentication.
• For basic authentication, it's recommended to periodically rotate passwords or do so immediately if there is any suspicion that a password may have been compromised.
SAP S/4HANA CDS Views • For basic authentication, it's recommended to periodically rotate passwords or do so immediately if there is any suspicion that a password may have been compromised.
• Use HTTPS if possible.
Database
Microsoft SQL Server • Enable encryption using the Encrypt extra connection argument.
• Password rotation: periodically rotate passwords, or do so immediately if there is any suspicion that a password may have been compromised.
Other
Elastic Search • API key rotation: periodically rotate the API key, or do so immediately if there is any suspicion that an API key may have been compromised.
• Use HTTPS if possible.
Connections define from where a data pipeline will extract the data. You can create new connections as well as
manage existing ones using filtering and sorting.
For each process data pipeline, you need to specify from where to extract the data. For that, you set up a
connection and link it with your process data pipeline.
Depending on the source system, a connection includes one of the following:
• Credentials and connection parameters for accessing a source system that is hosted in a cloud or on-premises environment
• A link to an on-premises extractor in case of a source system hosted in an on-premises environment
• Raw process data uploaded manually as zipped CSV files
Learn how to get to the Connections feature in SAP Signavio Process Intelligence and view the available
options.
To view the connections, open (Process Data Management) in the sidebar and select Connections.
Note
Your assigned feature set determines which options appear in the action menu. For information on feature sets, see Access Requirements for Process Data Management [page 52].
How to create a connection between SAP Signavio Process Intelligence and a source system.
You can upload raw process data manually using the Source Data feature. For information on how to upload the data in zipped CSV files, see the Uploading Data Manually section.
For information on the connection types and the list of connectors, see the Connection Types and Available Connectors [page 55] section.
Once you've established a valid connection, the next step is to create a source data using this connection.
Related Information
How to revalidate the credentials in the connection if the link to a source system is lost.
Once you've created a valid connection, if something has changed in the connection details, for example, if the user password has been changed, the connection becomes invalid in SAP Signavio Process Intelligence. You then need to revalidate the credentials.
How to change a connection or delete it. Changing a connection requires entering the password for the source
system.
Editing a Connection
Note
You need to confirm changes to a connection by re-entering the password. Because of that, the connection
is marked as invalid as soon as you start editing a connection.
Deleting a Connection
Note
Deleting a connection that is linked to a process data pipeline breaks the source data. Then, data can no
longer be extracted.
How to create a connection that links Process Intelligence with a source system.
An on-premises extractor is installed and operated within your company's own physical infrastructure. The primary function of an on-premises extractor is to collect and extract data from various sources hosted in your company's environment.
Using an on-premises extractor, you can connect your on-premises source systems to Process Data Management in SAP Signavio Process Intelligence.
You can set up and run the on-premises extractor in either of the following ways:
• Download and install manually. Then, run the extractor using commands.
See the Set Up and Manage On-Premises Extractors [page 137] section.
• Set up and run an on-premises extractor using Docker.
See the On-Premises Extractor Setup Using Docker [page 146] section.
Authentication
Authentication between the on-premises extractor and the process data management bridge is handled
through the ID and the secret of the extractor. Both parameters are passed as request headers for every
request done by the on-premises extractor. The parameters are configured when connecting the on-premises
package on the server with the on-premises extractor in SAP Signavio Process Intelligence, read more in
section Set up the connection to an on-premises system [page 137].
Encryption
The secret is used to authenticate the on-premises extractor instance when connecting to the process data
management bridge.
1. The on-premises extractor uses the secret to get a temporary encryption key.
2. The on-premises extractor uses the temporary encryption key to encrypt the ID, and the result is passed as another HTTP header.
3. The original secret is used to decrypt the authentication header. If the decrypted value matches the ID, the authentication is successful; if not, it's denied.
Certificates
Client-side certificates aren't used when connecting an on-premises extractor and the process data
management bridge.
The on-premises extractor and the process data management bridge communicate using HTTPS and Secure
WebSocket.
The following illustration provides the high-level overview of the systems and components involved when you
establish a connection between on-premises systems and SAP Signavio Process Intelligence.
The data flow between your on-premises source system and SAP Signavio Process Intelligence is described as follows:
1. You start the on-premises extractor, which establishes the initial connection to Process Data Management.
Note that this initial connection can only be started from the on-premises extractor (so you only need
to enable outbound communication for your firewall). After the channel is established, the on-premises
extractor polls Process Data Management for extraction requests.
2. You start the extraction in Process Data Management.
3. This extraction request is sent to the on-premises extractor via the permanent bidirectional WebSocket
channel established in step 1.
4. The on-premises extractor receives the extraction request (through its polling of Process Data Management for extraction requests) and queries the source system.
5. The source system responds and sends the table data to the on-premises extractor. If applicable,
pseudonymization happens at this point.
6. The parquet file for the table data is created in the on-premises extractor and sent via an HTTPS channel to
Process Data Management. This is not a permanent channel. A new channel is established for each table.
When you create a process data pipeline that has a transformation script and run the pipeline, the data is
transformed and an event log is generated. This event log is further uploaded into the process defined in your
process data pipeline. You can then use this data available in the process for data analysis and process data
mining.
Related Information
Read an overview of all the components used when setting up an on-premises system with SAP Signavio Process Intelligence.
Before setting up your connection to an on-premises system, familiarize yourself with the following component descriptions.
The following table provides a description of each component used when connecting to an on-premises
system:
Component Description
on-premises system This is the source system that stores your business process data, for example SAP. Your company hosts the source system in an on-premises environment.
Related Information
Learn more about some of the security considerations when using the on-premises extractor.
Security Considerations
Priority Secure Operations Map Title Default Setting or Behavior Last Update Index
Critical Secure SAP Code Security Fixes The on-premises extractor is an application deployed by you, the customer. SAP Signavio can't update the software component automatically. April 11, 2024 PI-SSR-0001
Critical Authentication Encryption, Data in Transit The on-premises extractor doesn't expose any endpoint to the Internet. The on-premises extractor is used to connect to your internal systems. The connection settings are configured by customers. We highly recommend using secure protocols. April 11, 2024 PI-SSR-0002
Critical Authentication Access Control The on-premises extractor sends data to SAP Signavio Process Intelligence endpoints. The configuration is loaded from a config.env file. April 11, 2024 PI-SSR-0003
Critical Secure SAP Code Security Fixes To run, the on-premises extractor depends on a Java SDK. The Java SDK is an on-premises software component, which you must patch regularly. April 11, 2024 PI-SSR-0004
Advanced Security Hardening Encryption, Data in Transit By default, the service supports all cipher suites used by your Java virtual machine. April 11, 2024 PI-SSR-0006
Before you get started with setting up the on-premises extractor, make sure you've met the following
prerequisites.
List of on-premises server requirements needed before connecting to an on-premises server with SAP Signavio
Process Intelligence.
Before setting up a process data management on-premises connector in SAP Signavio Process Intelligence to your on-premises server, the following requirements and configuration are needed.
Hardware
• A server, for example an Azure machine, AWS EC2 or GCP instance, a virtual machine, or a physical server
• More than 50 GB of free disk space
• RAM: minimum 20 GB
• CPU: x86-64 architecture with a quad-core processor (minimum)
Software
• Windows OS:
• 64 bit Windows 10 or later
• Windows Server 2019
• MacOS:
• MacOS 11 or later
• Linux OS:
• Ubuntu 18.04 LTS or later
• Red Hat Enterprise Linux 7 or later
• SUSE Enterprise Linux (SLES) 12 or 15
• Oracle Linux 6 or 7
• OpenJDK 17
If you're using an earlier or later version of Java, switch to Java 17. Then, download and install the latest version of the on-premises extractor. For more information, see Download, verify, and install the on-premises package [page 139].
If you run into issues, see the Errors related to on-premises source systems [page 160] section.
Server Configuration
• The server is located in the same network as the source system so that it can reach the system that holds the data.
The firewall allows access through specific IP addresses. For the list of IP addresses, see the Regions and IP
Addresses section.
Related Information
Steps to set up the connection from SAP Signavio Process Intelligence to an on-premises system.
To set up the connection between your on-premises system and SAP Signavio Process Intelligence, follow
these steps:
Read how to configure your on-premises server for the on-premises connector.
The first step is to set up your on-premises server that the process data management on-premises connector
in SAP Signavio Process Intelligence connects to.
Set up the on-premises server using the requirements listed in the Hardware, Software, and Network
Requirements section.
After setting up the on-premises server, get the required access to your source system. For this, see the Prepare Access to your Source System section.
Read how to prepare access to your SAP or other source systems, which are used with on-premises connector.
After setting up your on-premises server, the next step is to provide access to your source system.
1. In your SAP system, install a custom RFC function. Read more in section Supported Systems and RFC
Usage [page 58].
2. Get the SAP credentials for the connector. For SAP ERP, read more in section Connector - SAP ERP (RFC)
[page 65].
3. If you want to secure the connection of your SAP source system, you can enable SAP Secure Network
Connection. Read more in section Connector - SAP ERP (RFC) [page 65].
To provide access to your source system, get the source system credentials for the connector. Read about the
credentials for each connector in section Connection Types and Available Connectors [page 55].
After preparing access to your source system, download and install the on-premises package on the server. For this, see the Download and Verify the On-premises Package section.
Related Information
Read how to download, verify, and install the on-premises connector on your on-premises server.
To verify the integrity of the on-premises package, run the package file through a checksum tool on the
on-premises server. Compare the file hash value from the checksum tool with the copied hash value from the
Download the on-premises package [page 139] section. If the hash values match, the package is secure and
can be installed on the on-premises server.
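For example, on a Linux server you might compute the hash with a tool such as sha256sum; the package file name here is illustrative, and the algorithm must match the one used for the published hash value:
sha256sum signavio-onprem-package.zip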
After installing the on-premises package, you need to create an on-premises extractor in SAP Signavio Process
Intelligence.
Related Information
Find the steps for installing the on-premises package on Windows and Linux servers, and subsequently
executing the extractor using script.
Context
After downloading and verifying the on-premises package, you need to install it on the server of your choice,
Windows or Linux.
Note
For Linux: If you prefer to manually install and run the on-premises extractor on your Linux machine, follow these steps. However, if you want to run the on-premises package as a service, refer to the Install the On-premises Package as a Service on Linux and Run the Extractor section.
Before you get started, make sure you met the requirements listed in the Hardware, Software, and Network
Requirements section.
Procedure
Related Information
Follow the provided steps to install and run the On-premises Package as a Service on Linux. Link the extractor
with a connection after installation.
Note
We recommend creating a non-root Linux user named etluser to run only the on-premises extractor. In
addition, make sure a home directory is also created for the etluser.
You can create a non-root Linux user named etluser by running the command sudo useradd -m etluser. Alternatively, you can use an existing non-root user by changing the variable User=<name> in the extractor.service file.
After downloading, save the unzipped package in the non-root Linux user's home directory.
Before you start installing and running the extractor, make sure that the following conditions are also met:
After installing and running the on-premises extractor, you need to link the extractor with an existing or new
connection.
Create a Connection and Link it with the On-Premises Extractor [page 143]
After downloading, verifying, and installing the on-premises package on your on-premises server, the next step is to create an on-premises extractor in SAP Signavio Process Intelligence.
Note
After creating an on-premises extractor in SAP Signavio Process Intelligence, the next step is to connect the
on-premises package on the server with the on-premises extractor and run it.
Related Information
Connect the On-Premises Package on the Server with the On-Premises Extractor and Run the Extractor [page
142]
Read how to connect the on-premises package on the on-premises server with the on-premises extractor in
SAP Signavio Process Intelligence.
After creating an on-premises extractor in SAP Signavio Process Intelligence, the next step is to connect the
on-premises package on the server with the on-premises extractor in SAP Signavio Process Intelligence.
After running the on-premises extractor, link the extractor with a new or existing connection.
Related Information
Create a Connection and Link it with the On-Premises Extractor [page 143]
Read how to create a connection in SAP Signavio Process Intelligence and link it to the on-premises extractor.
After connecting the on-premises package on the on-premises server with the on-premises extractor in SAP
Signavio Process Intelligence, the next step is to create a connection and then link it to the extractor you
created in SAP Signavio Process Intelligence.
The connection is set up. You can set up process data pipelines using this connection.
Related Information
How to manage on-premises extractors and packages. On-premises extractors and packages are necessary to
connect SAP Signavio Process Intelligence with a data system that is hosted locally.
You can manage on-premises extractors and packages in the following ways:
Note
An extractor can't be edited. You need to create a new extractor for changes, read more in section Set up
the connection to an on-premises system [page 137].
We provide new versions of the on-premises package through the user interface.
If a newer version of your on-premises package is available, this is indicated in the Version column.
On the on-premises extractors page, you can find the new version steps, template documentation, public documentation, and release notes.
3. Connect the on-premises package on the server with the on-premises extractor in SAP Signavio Process Intelligence.
Learn how to set up the connection from SAP Signavio Process Intelligence to an on-premises system using
Docker.
You can use Docker as an alternative method to install and run the on-premises extractor. With Docker, setting
up a connection with the on-premises extractor is now faster and simpler. It reduces operational costs and
enables you to upgrade to the latest version of the extractor with a quick command run.
Note
• You can install the on-premises extractor using Docker on both Linux and Windows.
• It's recommended to have a basic knowledge of Docker.
At a high-level, the following steps are involved in installing the on-premises extractor using Docker.
1. Ensure that you meet all the system requirements, access permissions, and Docker requirements.
See the Requirements for On-Premises Extractor Setup Using Docker [page 147] section.
2. Create an on-premises extractor in SAP Signavio Process Intelligence user interface.
This generates extractor parameters, which need to be copied and used while setting up the environment
for an on-premises extractor.
See the Create an On-Premises Extractor in SAP Signavio Process Intelligence [page 142] section.
3. If you're using a Windows system, skip to the next step. If you're using a Linux system, you need to create a
user named etluser and add that user to the user group named docker.
See the Creating a User (Linux) [page 148] section.
4. Set up the environment to install the on-premises extractor through Docker.
See the Preparing the Environment for On-Premises Extractor (Linux and Windows) [page 149] section.
5. Log in to the SAP's Docker repository using your S-User name.
See the Logging into the Docker Repository [page 152] section.
6. Run the on-premises extractor using the run command. When you run the command, the docker image
pulls the specified version of the on-premises extractor and runs the extractor.
See the Running the On-Premises Extractor Using Docker (Linux and Windows) [page 153] section.
7. Create a new connection in SAP Signavio Process Intelligence user interface and link this connector to it.
See the Create a Connection and Link it with the On-Premises Extractor [page 143] section.
8. Create a source data and perform the extraction.
See the Creating a Source Data [page 171] section.
Updating the On-Premises Extractor Version (Linux and Windows) [page 155]
Stopping and Deleting the On-Premises Extractor (Linux and Windows) [page 159]
Find out the prerequisites for using the on-premises extractor through Docker.
Before setting up the on-premises extractor using Docker, make sure you've met all the following requirements:
Note
Only Linux users need to create a new user named etluser. Windows users don't require this.
Find out what's needed to configure and run the on-premises extractor using Docker.
Before you start using Docker for configuring the on-premises extractor, make sure you've met the following
requirements in addition to the prerequisites listed in the Requirements for On-premises Extractor Setup Using
Docker section.
Linux
Note
It's recommended that you install Docker Desktop on your Windows machine.
Related Information
Context
Once you've met all the requirements to set up the on-premises extractor using Docker, create the user named etluser and add that user to the user group named docker. Then, you can prepare the environment for the on-premises extractor and log in to SAP's Docker repository.
Note
Ensure that you've got the necessary permissions to create a new user on your Linux system. Then add
that user to the docker user group.
User creation is only required if you're working with a Linux system.
Procedure
Learn how to install and run the on-premises extractor with Docker.
Before getting started, make sure that you've met all the prerequisites.
Related Information
Learn how to configure the environment for the on-premises extractor on Linux and Windows machines.
Prerequisites
• Check if you've met all the requirements listed in the Docker Environment Requirements (Linux and
Windows) [page 147] section.
• For Linux systems, make sure you've created a new user named etluser and added that user to the user
group named docker.
Once you've created the on-premises extractor in SAP Signavio Process Intelligence, you need to configure the
on-premises extractor.
Linux
Procedure
mkdir -p ~/onprem-extractor/etltmp/logs
Running this command creates a directory called onprem-extractor, along with a sub-directory called etltmp, which contains another sub-directory called logs.
2. Make sure that the etluser has write access to the directory. To do so,
1. Change the owner of the directory to the etluser by running:
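A minimal sketch of such an ownership change, assuming the etluser created earlier and the working directory from step 1 (the exact options may differ in your setup):
sudo chown -R etluser ~/onprem-extractor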
cd ~/onprem-extractor
sudo su etluser
5. Copy the extractor parameters from SAP Signavio Process Intelligence user interface.
1. Open (Process Data Management) in the sidebar.
2. Select On-Premises Extractors. The on-premises extractors overview opens.
3. To copy the extractor parameters, select (More) Copy Parameter to Clipboard for the
extractor that you want to connect.
6. Paste the copied parameters into the configuration file config.env using either of the following ways:
1. Recommended: Create the config.env file manually in the working directory, onprem-extractor,
using the file explorer of your choice. Paste the parameters in the config.env file.
2. Alternatively, you can run the following command:
Replace the "<<extractor_credentials>>" placeholder with the copied parameters, and make sure that the credentials are within quotes. After substituting, the command looks like the following:
echo "EXTRACTOR_ID=value
EXTRACTOR_SECRET=value
WEBSOCKET_SERVER_ADDRESS=value" > config.env
Caution
Running the echo command stores the extractor credentials in the console's command history.
The etluser has write access to the working directory onprem-extractor, which contains a valid
config.env file.
Next Steps
Windows
Procedure
1. Create a working directory, for example in your home directory with etltmp\logs sub-directory. To do so,
run the command:
mkdir %HOMEDRIVE%%HOMEPATH%\onprem-extractor\etltmp\logs
Running this command creates a directory called onprem-extractor, along with a sub-directory called etltmp, which contains another sub-directory called logs.
2. Switch to the working directory by running:
cd %HOMEDRIVE%%HOMEPATH%\onprem-extractor
3. Copy the extractor parameters from SAP Signavio Process Intelligence user interface.
1. Open (Process Data Management) in the sidebar.
2. Select On-Premises Extractors. The on-premises extractors overview opens.
3. To copy the extractor parameters, select (More) Copy Parameter to Clipboard for the
extractor that you want to connect.
4. Paste the copied parameters into the configuration file config.env of your working directory, onprem-
extractor.
Create the config.env file manually in the working directory using the file explorer of your choice. Paste
the parameters in the config.env file.
The working directory, onprem-extractor, exists with a valid config.env file.
Context
Once you've prepared the environment for the on-premises extractor, the next step is to log into SAP's Docker
repository.
Procedure
Learn how to run the on-premises extractor using the Docker image.
Context
Once you've prepared the environment for the on-premises extractor and logged in to SAP's Docker repository,
you must download a specific version of the extractor and run it.
You can get the latest version of the on-premises extractor from the SAP Signavio Process Intelligence user
interface. Open the On-Premises Extractors from the sidebar and hover over the option to view the version
number.
For more information on how to use Docker, refer to the official documentation of Docker at https://docs.docker.com/.
Linux
Prerequisites
Procedure
1. In the following command, substitute the <<version>> (available at two places in the command) with the
new version of the on-premises extractor you've downloaded. Ensure that the working directory contains
the config.env file, which was created while preparing the environment. Then, execute the command in
the working directory to run the extractor.
docker run \
--detach \
--env-file config.env \
--env ETLUSER_UID=$(id -u) \
--volume ./etltmp/logs:/etltmp/logs \
--name sap-signavio-pi-onprem-extractor-<<version>> \
--restart unless-stopped \
--memory 8g \
--hostname=docker-$(hostname) \
73555000100900006498.dockersrv.base.repositories.cloud.sap/sap-signavio-pi-
onprem-extractor:<<version>>
To check that the container is running, run:
docker container ls
For more information on how to use Docker, refer to the official documentation of Docker.
Next Steps
Create a connection and link it to the on-premises extractor. Then, create a source data and perform the test
extraction.
See the Creating a Source Data [page 171] and Create a Connection and Link it with the On-Premises Extractor
[page 143] sections.
Windows
Prerequisites
Procedure
1. In the following command, substitute the <<version>> (available at two places in the command) with the
new version of the on-premises extractor you've downloaded. Ensure that the working directory contains
the config.env file, which was created while preparing the environment. Then, execute the command in
the working directory to run the extractor.
docker run ^
--detach ^
--env-file config.env ^
--env ETLUSER_UID=0 ^
--volume .\etltmp\logs:/etltmp/logs ^
--name sap-signavio-pi-onprem-extractor-<<version>> ^
--restart unless-stopped ^
--memory 8g ^
73555000100900006498.dockersrv.base.repositories.cloud.sap/sap-signavio-pi-
onprem-extractor:<<version>>
For more information on how to use Docker, refer to the official documentation of Docker.
2. After executing this command, you'll be asked to allow Docker access to the logs directory in your working
directory. Select Yes.
3. Switch to the SAP Signavio Process Intelligence user interface and check if the on-premises extractor
status is connected.
Next Steps
Create a connection and link it to the on-premises extractor. Then, create a source data and perform the test
extraction.
See the Creating a Source Data [page 171] and Create a Connection and Link it with the On-Premises Extractor
[page 143] sections.
You can configure and run the new version of the on-premises extractor using Docker.
You need to create an on-premises extractor in the SAP Signavio Process Intelligence user interface.
For information on how to create one, see the Create an On-Premises Extractor in SAP Signavio Process Intelligence [page 142] section.
Step 2: Set the Environment for the New Version of On-Premises Extractor
To set up the environment for the new on-premises extractor version, the working folder name must be unique. When creating the working directory, choose a different name from the one used by the already running extractor. For example, onprem-extractor-new (a new extractor name) instead of onprem-extractor (an old extractor name).
All the commands in this step use onprem-extractor-new as the working directory name for the new
extractor version.
When setting the environment, make sure that you substitute the correct name of the new working
directory and the new version of the on-premises extractor into the commands.
When obtaining credentials from the user interface, get them from the newly created extractor.
Once the new extractor is installed and tested, you can delete the old extractor.
Linux
Prerequisites:
• A new user named etluser was created and added to the docker user group.
• A new on-premises extractor in SAP Signavio Process Intelligence.
mkdir -p ~/onprem-extractor-new/etltmp/logs
2. Make sure that the etluser has write access to the directory. To do so:
• Change the owner of the directory to the etluser, for example by running:
sudo chown -R etluser:etluser ~/onprem-extractor-new
• Change into the working directory and switch to the etluser:
cd ~/onprem-extractor-new
sudo su etluser
5. Copy the extractor parameters from the SAP Signavio Process Intelligence user interface.
1. Open Process Data Management in the sidebar.
2. Select On-Premises Extractors. The on-premises extractors overview opens.
3. To copy the extractor parameters, select (More), then Copy Parameter to Clipboard for the extractor that you want to connect.
6. Paste the copied parameters into the configuration file config.env using either of the following ways:
• Recommended: Create the config.env file manually in the working directory using the file explorer
of your choice. Paste the parameters in the config.env file.
• Alternatively, you can run the following command, replacing <<extractor_credentials>> with the copied parameters. Make sure that the parameters are enclosed in quotes.
echo "<<extractor_credentials>>" > config.env
After substituting the extractor credentials with the actual values, the command looks like the following:
echo "EXTRACTOR_ID=value
EXTRACTOR_SECRET=value
WEBSOCKET_SERVER_ADDRESS=value" > config.env
Windows
Prerequisite: Ensure that you've already created a new on-premises extractor in SAP Signavio Process
Intelligence.
mkdir %HOMEDRIVE%%HOMEPATH%\onprem-extractor-new\etltmp\logs
cd %HOMEDRIVE%%HOMEPATH%\onprem-extractor-new
The commands for logging into the SAP-provided Docker repository are the same for both Linux and Windows systems.
For commands, see the Logging into the Docker Repository [page 152] section.
Linux
1. Substitute the <<version>> (available at two places in the command) with the new extractor version and
execute this command in the working directory to run the extractor:
docker run \
--detach \
--env-file config.env \
--env ETLUSER_UID=$(id -u) \
--volume ./etltmp/logs:/etltmp/logs \
--name sap-signavio-pi-onprem-extractor-<<version>> \
--restart unless-stopped \
--memory 8g \
--hostname=docker-$(hostname) \
73555000100900006498.dockersrv.base.repositories.cloud.sap/sap-signavio-pi-onprem-extractor:<<version>>
2. Switch to the SAP Signavio Process Intelligence user interface and check whether the on-premises extractor status is connected. Perform a test extraction.
If the new extractor isn't working properly, don't perform the next step, which is deleting the old version of
the extractor.
3. If the new extractor works, link the new extractor to the existing connection.
Windows
1. Substitute the <<version>> with the new extractor version and execute this command in the working
directory to run the extractor:
docker run ^
--detach ^
--env-file config.env ^
--env ETLUSER_UID=0 ^
--volume .\etltmp\logs:/etltmp/logs ^
--name sap-signavio-pi-onprem-extractor-<<version>> ^
--restart unless-stopped ^
--memory 8g ^
73555000100900006498.dockersrv.base.repositories.cloud.sap/sap-signavio-pi-onprem-extractor:<<version>>
2. After executing this command, you'll be asked to allow Docker access to the logs directory in your working
directory. Select Yes.
3. Switch to the SAP Signavio Process Intelligence user interface and check whether the on-premises extractor status is connected. Perform a test extraction.
If the new extractor isn't working properly, don't perform the next step, which is deleting the old version of
the extractor.
4. If the new extractor works, link the new extractor to the existing connection.
Linux
1. Get the container ID of the running on-premises extractor by listing the running containers:
docker container ls
2. Stop and remove the container, replacing <<containerId>> with the ID from the previous step:
docker rm -f <<containerId>>
Windows
Related Information
Create a Connection and Link it with the On-Premises Extractor [page 143]
Learn how to stop the running extractor and then delete it.
Prerequisites
Procedure
1. Get the container ID of the running on-premises extractor by listing the running containers:
docker container ls
2. Stop and remove the container, replacing <<containerId>> with the ID from the previous step:
docker rm -f <<containerId>>
Windows
Procedure
Find solutions to errors that can occur when using the on-premises source systems.
When running the start.bat script, the following error occurs in the terminal:
Solution:
Solution:
Note
This solution is only applicable to non-Windows servers. For Windows servers, please contact our SAP
Signavio service experts from the SAP for Me portal .
Edit the start.sh script and increase the memory allocation in the first line of the script.
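The exact content of start.sh depends on your extractor version. As a purely hypothetical illustration, if the first line launches a Java process, the maximum heap is typically controlled by the JVM -Xmx option:
# Hypothetical example only; the actual script line, file names, and values on your system may differ.
java -Xmx8g -jar extractor.jar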
When extracting data with the on-premises extractor connected to the RFC function module, numeric values are extracted as multiples or fractions of the actual value. This is due to a misinterpretation of decimal separators. For example, the actual value is 15.000 USD but the extracted value is 15 USD.
Solution:
Learn different ways to get your process data into SAP Signavio Process Intelligence.
Process data can reside in different databases, in the cloud, or in files. SAP Signavio Process Intelligence offers
the following ways to help you bring in your process data and start analyzing it.
• Data extraction
• Manual data upload
• APIs
Data Extraction
Connect to your source data systems and extract data. Depending on your use case, perform extraction in
Standard or Advanced mode. SAP Signavio Process Intelligence offers various extraction options, each with
its own purpose.
Manual Data Upload
Upload your data manually into a process, or use the Manual upload connection type to upload data as zipped CSV files.
Using APIs
For information about uploading and ingesting data, see Data Upload and Ingestion Options [page 163].
Learn about the extraction types, different extraction options, and modes of extraction.
To understand the extraction options, you first need to understand the extraction types and how data is extracted based on the configuration in the source data.
The following are the main data extraction types, along with specifics on what is considered during data
extraction:
Extraction Options
The functionality of extraction options varies based on where they're available within SAP Signavio Process
Intelligence:
• Extract button on the Source Data > Configuration screen: performs the initial load extraction for the selected table.
• Extract option in the table details view of the source data: performs a delta load for the selected table.
• Extract option in the Source Data header menu: performs a delta load for the tables selected on the Source Data screen.
• Run ETL (manually or automatically) in a process data pipeline: performs a delta load for all the source data tables in the pipeline.
Modes of Extraction
You can extract data from your source system using either of the following modes:
• Standard
• Advanced
Related Information
Find different ways to ingest and upload data, and also their use cases.
The different ways to ingest and upload data into SAP Signavio Process Intelligence are as follows:
Data Ingestion
Data ingestion is a process of loading data from one or more sources into SAP Signavio Process Intelligence.
Once the data is ingested, it can be transformed, making it available for querying and analysis.
Data ingestion through connectors: Connect your source systems with SAP Signavio Process Intelligence for the extraction and transformation of your process data, to make it available for process mining or process analysis. Different connection types, such as SAP ERP, AWS S3, ServiceNow, BigQuery, and more, link source data systems with a process data pipeline. Data is ingested into SAP Signavio Process Intelligence from source systems hosted in a cloud environment. Refer to the Creating a Connection section.
Data Ingestion API: Use this API to ingest your data into SAP Signavio Process Intelligence. Supports CSV and TSV file formats.
Data Upload
You can manually upload data into SAP Signavio Process Intelligence and its process data management system. If you ingest raw data, it can be transformed and loaded into the system to query and analyze the data. If you ingest final event logs or already transformed data, it can be analyzed directly in SAP Signavio Process Intelligence.
Manual CSV upload connector: Use this connector to manually upload raw data in zipped CSV files.
Manual upload directly into a process: Use this option to manually upload data directly into a process. You create a process in SAP Signavio Process Intelligence and upload data to it. Supports CSV and XES file formats. Read more about file types in the Data file types [page 44] section.
Data upload API (also known as the event log upload API): Use this API if you have already extracted, transformed, and loaded data in a system outside of SAP Signavio and want to load your final event logs into a process to analyze the data. Supports final event logs in CSV and XES file formats.
Related Information
Get a quick technical overview of the data extraction from SaaS applications, and how the pseudonymization
happens during data extraction.
Pseudonymization replaces personally identifiable data with artificial pseudonyms. The pseudonymization
happens before the data reaches SAP Signavio Process Intelligence. During the data extraction process,
the SaaS extractor checks the data from the source system for pseudonymized fields. When configured for
pseudonymization, the extractor encodes the data before it reaches the SAP Signavio Process Intelligence
application.
The following illustration provides the high-level overview of the systems and components involved when you
establish a connection between a SaaS application and SAP Signavio Process Intelligence.
1. When the user triggers the data extraction in SAP Signavio Process Intelligence, the extractor utilizes the
JDBC driver to extract the data from the connected source system.
This extraction process returns a Java object with data to the SAP Signavio Process Intelligence application. The application processes each data record within the Java object sequentially, on a record-by-record basis.
Note
The Java object is stored in memory until it's processed, and then the object is deleted by Java’s
garbage collector.
2. The SAP Signavio SaaS extractor checks the retrieved data for pseudonymized fields.
3. If the field is configured for pseudonymization, the application encodes the field and then writes the
encoded values to the parquet file.
The application encodes the text values into hexadecimal values using the SHA3-256 algorithm.
4. For further processing, the parquet files are stored in the temp directory, which is created by the
application.
5. The data in the form of parquet files is transferred to the SAP Signavio Process Intelligence through the
connection type that you've created as part of the setup.
Note
The Table Preview tab within the Source Data feature also displays the pseudonymized data retrieved
from the source system with its placeholder values. The data accessible in the Table Preview tab is
obtained directly by returning the Java object to the SAP Signavio Process Intelligence application. It
doesn't need to be written to the parquet files.
6. When you create a process data pipeline with transformation script and run the pipeline, the data is
transformed and an event log is generated. This event log is uploaded into the process linked in your
process data pipeline.
7. The data available in the process can be used for data analysis and process data mining.
Learn how to get started with extracting data from source systems.
Data extraction is the process of getting data from a given source into SAP Signavio Process Intelligence for
further processing and analysis. The data can come from various sources, such as enterprise systems, cloud storage or warehouses, databases, CSV file uploads, APIs, and others.
Related Information
This section explains the use cases of two modes of extraction, standard and advanced.
Standard data extraction is handled using SAP Signavio Process Intelligence user interface components, without custom code. The standard extraction setup is straightforward, and the only requirement is your data. Therefore, whenever possible, use the standard data extraction.
Note
For more complex scenarios, the data needs to be extracted using custom code. Therefore, you must be familiar with SQL and YAML to use the advanced data extraction.
Following are other possible use cases for choosing advanced data extraction:
• When you need two or more delta parameters to extract the table.
• When you want to partition data based on range.
• When you want to join two or more tables for the extraction.
• When the table doesn't have sufficient columns or data to run delta loads, for example, a missing date parameter that indicates the change or creation of a record, or a missing ascending object number.
• When you want to reduce the number of records in a table that is linked to your main table, even if it has a delta parameter. In this case, you can use another table to pull records of a certain type, such as a specified document type.
• When tables have key columns with NULL values that need to be handled separately, as in the sketch after this list. For example, for table QAVA: CONCAT(MANDANT, PRUEFLOS, KZART, COALESCE(ZAEHLER, 'NULL')) AS c_key.
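As an illustrative sketch of the NULL-handling use case, an advanced extraction query could build the key as follows. It reuses the QAVA columns from the example above together with the c_key and c_fetchdate conventions from the advanced extraction scripts later in this guide; the exact query for your system may differ:
SELECT
CONCAT(MANDANT, PRUEFLOS, KZART, COALESCE(ZAEHLER, 'NULL')) AS c_key,
CURRENT_TIMESTAMP() AS c_fetchdate,
MANDANT,
PRUEFLOS,
KZART,
ZAEHLER
FROM QAVA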
To extract data, you need a connector that is linked to a source system. The connector forms a link between
your source system and SAP Signavio Process Intelligence. All connectors support standard data extraction,
whereas only the SAP ERP connector supports both standard and advanced extraction.
The examples in the data extraction sections use the SAP ERP connector.
Find how to customize source data to define which data is extracted from a connected source system and
loaded into SAP Signavio Process Intelligence.
A source data defines what data is extracted from a source system. The data extraction is configured in source
data. You can set up different source data using the same connection and data set.
Related Information
Learn how to get to the Source Data feature in SAP Signavio Process Intelligence and view the available
options.
To view the source data, open (Process Data Management) in the sidebar and select Source Data.
The Source Data overview page appears with the following options:
Each card shows the status of the source data. For example,
Never executed, Error, Canceled, Completed.
Note
Your assigned feature set determines which options ap-
pear in the action menu. For information on feature
sets, see Access Requirements for Process Data Man-
agement [page 52]
Following status messages show up next to the name of the source data in the header:
Find out how to create a source data, which defines what data is extracted for a process data pipeline.
Note
You need the feature set ETL - Superuser Role to use these functions. Your workspace administrator
can enable it for you.
After establishing a connection with your source system, you must create a source data using that connection
type. While creating a source data, you're prompted to choose a connection, and you need to choose the one
you've created earlier.
• You create a new source data and link it to a process data pipeline.
• While creating a process data pipeline, if you choose the New Source Data option, then a source data is
automatically created and linked to the process data pipeline. The following applies:
• If you've set up your process data pipeline with a transformation template, the source data is
preconfigured. You can customize the source data if necessary.
• If you didn't use a template, you need to configure the source data.
Read more in the section Create a process data pipeline [page 245].
Example
To customize the source data, you can perform the following tasks:
• Managing Tables for Data Extraction [page 173]
• Standard Data Extraction [page 178]
• Advanced Data Extraction [page 201]
• Running the Initial Load Extraction [page 213]
• Scheduling the Data Extraction [page 214]
Related Information
Find out how to link a connection while creating and editing a source data.
Context
While creating a source data, you're prompted to choose a connection. If you've already linked a connection,
you can still change to a new connection. Before you do that, make sure that the source data is not linked to
any process data pipeline that's running or scheduled to run. If the source data is part of any pipeline run, then
that pipeline execution fails.
Note
Before you edit or delete a source data, make sure that the source data is not linked to any process data
pipeline that's running or scheduled to run. If the source data is part of any pipeline run, then it fails.
You can customize the data extraction by adding or removing tables and columns to or from source data. You
can also add column descriptions.
After creating a source data with a linked connection, you need to add the tables and columns from which you want to extract data. While creating a source data, you can add tables and columns. If you chose not to do that during source data creation, you can still add tables and columns while editing the source data.
After customizing tables and columns, and defining the initial load and delta load criteria, you need to run the extraction. You can monitor the status of the extraction from the Logs tab in the source data.
For different extraction types and options, see Extraction Options [page 162].
The options available on the Source Data header menu, Tables tab, and the table detail view screen are as
follows.
Delete data / table: Shows options to delete extracted data while retaining the table, or to delete the entire table from your source data.
Learn how to add tables and columns to the source data for data extraction.
Context
After creating a source data with a linked connection, you need to add the tables and columns from which you want to extract data. While creating a source data, you can add tables and columns. If you chose not to do that during source data creation, you can still add tables and columns while editing the source data.
When adding a table, you're required to add at least one column. Editing a table lets you add and edit columns.
Note
NULL values aren’t supported in key columns for SAP source data. To use NULL values in key columns for
SAP source data, use the advanced extraction mode.
Note
If your linked connection is SAP RFC, the extract configuration type dialog opens.
The table is added to the source data. Using an SQL and YAML script, you then define the initial load and delta load extraction.
Related Information
Find out how to rename the table alias, which is used as a table reference during data transformation.
Context
The table alias is used as a reference in the transformation scripts of business objects. Alias isn't included in
the data extraction.
Note
• When you add a new table to the source data, the table alias is created automatically with the table
name.
• If you want to link multiple source data to your process data pipeline, table names must be unique
across all source data. For this reason, you need to create aliases for tables with the same name.
• When creating, changing, or removing aliases for table names, you must update the table names
accordingly in the transformation scripts of business objects.
Procedure
3. From the table header menu, select (More), then Rename alias.
4. In the Rename alias popup, enter a name, and confirm with Save.
Learn how to extract data from your source systems using the standard data extraction feature.
The following image shows the standard data extraction workflow for SAP ERP (RFC) connector.
Note
For SAP ERP (RFC) source data, you don't need to prepare an initial load for source data tables where
you choose Automatic Pagination as your partition strategy. The Automatic Pagination feature enables
automatic data partitioning when you run the extraction. For more information, see Automatic Pagination
[page 190].
Related Information
Learn how to add columns to your source data, how to define key columns, and add descriptions to columns.
While creating a source data, you can add tables and columns to it. If you chose not to add the tables and
columns during source data creation, you can still do it while editing the source data. For more information on
adding tables, see the Adding Tables [page 176] section.
If you've added a table but not all the required columns, you can then add columns to an existing table. In
addition, you can set key columns, and pseudonymize column values.
You can also extract selected table data and run initial extraction from the table details view screen.
The following options are available in the table details header menu and the columns table.
Delete data / table: Shows options to delete extracted data while retaining the table, or to delete the entire table from your source data.
Next: Adding Column to an Existing Table and Deleting a Column [page 182]
Related Information
While creating a source data, if you haven't already added all the required columns to a table, you can still add
columns to an existing table. You can also delete columns from a table in source data.
Using the Edit columns option in the table details view, you can add columns to an existing table, edit, and
delete columns.
Column names and aliases can contain only alphanumeric characters and the following special characters:
• a-z
• A-Z
• 0-9
• §±!@#$%^*()_-+=[]{}'`~\|/.>? äöüÄÖÜß
Data pipelines fail if any name or alias contains other characters. Read more in the Troubleshooting Transformation Errors section.
Deleting a Column
Select the columns that you want to delete. Then, select Delete and confirm with Save.
Related Information
Learn how to set key columns for the tables that you want to extract, and how to add descriptions to the
columns.
Context
After creating a source data and adding tables and columns to it, you must set key columns in each table.
Key columns are used to identify unique rows in a table and to find duplicate rows, so it's important that the key columns are correctly defined in each table. To define them, do the following:
Procedure
1. On the Columns tab, select Edit columns. All the existing columns are editable.
2. Choose the column and enter text in the Description field.
Tip
3. Select the Key column checkbox for a column that you want to define as a row identifier.
Remember
You can select up to 63 columns as the primary key in a table. Selecting more causes an error.
Previous: Adding Column to an Existing Table and Deleting a Column [page 182]
Related Information
Learn how to pseudonymize extracted data by replacing personally identifiable data with artificial pseudonyms.
After adding tables and columns to your source data, you must enable the pseudonymize option for the
columns with personally identifiable data.
Personally identifiable data can be replaced with artificial pseudonyms. Pseudonymization happens during data extraction, before the data reaches SAP Signavio Process Intelligence.
When you select a column for pseudonymization, the values are encoded using the SHA3-256 algorithm.
To learn about pseudonymization in SaaS applications, see the How the Data is Extracted from SaaS Applications [page 165] section; for applications hosted on-premises, see the How the On-Premises Extractor Works [page 132] section.
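For illustration, the same SHA3-256 hashing appears explicitly in the advanced extraction scripts later in this guide, where a username column is pseudonymized directly in the extraction SQL:
TOSTRING(HASHBYTES('SHA3_256', CDHDR.USERNAME)) AS USERNAME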
Note
• You can pseudonymize data of the type Text and the number types, BigInt, Double, and Numeric (but
not Integer).
• While previewing the table data, you can see the artificial pseudonyms assigned to values. To preview,
open the table and in the side panel, select Table preview.
• Pseudonymization is applied in the next extraction run.
Related Information
The largest amount of data is pulled during the first extraction. To avoid overloading the system, you can configure a partition strategy. With the partition strategy, you define how the data is extracted in chunks.
Note
The optimal partition size depends on the total number of data rows and how the data is distributed over a
certain period of time. For example, you select a different strategy for data that is uniformly distributed over
a period of time than for data that isn't equally distributed.
For each partition, an extraction is performed. On the one hand, the more partitions you specify to reduce the partition size, the more extractions are run. This can reduce system performance. On the other hand, if partitions are too large, connection timeouts to source systems can prevent the extraction.
Therefore, we recommend that you analyze the amount of extraction data and its distribution. Also, check
with the preview function whether the selected partition strategy is reasonable.
When you set a partition strategy, it's also applied to the delta extraction. For more information, see Setting Delta Filters [page 192].
Available Strategies
• None: No partitions.
• Manual Date (Time): Load data based on its date and time information. Format for dates: specify the format in which you want to enter the dates.
• Manual Static Value: Load data based on selected attributes, for example industries or countries. Choose a column, then add the values that are used to create partitions.
• Automatic (Pagination): Partition extracted data into chunks of rows. You can change the default page size of 25,000 rows. This option is only available for SAP ERP (RFC) connections. For more information, see Automatic Pagination [page 190].
Note
If you change the partition strategy for a table from Automatic (Pagination) to Manual or None, all the
already extracted data for that table is deleted.
Error Handling
If the extraction fails:
• Reduce the partition size by changing the partition parameters and try again.
• Increase the value for the connection timeout via the connection parameters in the connection and try again.
Related Information
3.6.4.3.3.5.1 Manual
For manual partition, you need to prepare the initial load and then extract the initial load.
After adding tables and columns to your source data, and customizing columns, you need to configure tables
to extract data.
During initial extraction, a large amount of data is pulled. To avoid overloading the system, you need to
configure a partition strategy. With the partition strategy, you define to extract the data in chunks.
Before you run the initial extraction, we highly recommend applying data partitioning to all the tables in your
source data. This ensures that the initial load is executed successfully. Configuring partitions is most important
for large tables such as EKKO.
For information on different use cases and the way data is extracted based on the criteria in table configuration,
see Delta Load and Partition Strategy [page 194].
Note
For partitioning and extracting data from your SAP ERP (RFC) source system, you don't need to perform
manual partition for source data tables where you choose Automatic Pagination as your partition strategy.
The Automatic Pagination feature enables automatic data partitioning when you run the extraction. For
more information, see Automatic Pagination [page 190].
For other connectors and for SAP ERP (RFC) source data tables where you have not selected Automatic
Pagination as the partition strategy, you still need to prepare the initial load.
For information on how to partition SAP ERP (RFC) source data tables, see Automatic [page 190].
Once the initial extraction is done, you can no longer change the partition strategy to keep the data consistent.
To choose a different partition strategy, delete the extracted data and reset the partition strategy.
Related Information
Context
When you are partitioning data for SAP ERP (RFC) connections, you have the option of Automatic Pagination
as well as the manual options available for the other connectors. To set a partition strategy for an SAP ERP
(RFC) source data, follow these steps:
Procedure
Note
This feature applies only to SAP ERP (RFC) connections where you have selected Automatic (Pagination) as
the partition strategy. For other connectors or for SAP ERP (RFC) connections where you haven't enabled
Automatic Pagination, you still need to prepare the initial load. For more information, see Manual [page
188].
By default, when you select Automatic Pagination as the partition strategy for an SAP ERP (RFC) source
data table, data is extracted in chunks of 25,000 rows. You can change this page size to meet your own
requirements. This means that you no longer need to use advanced data extraction if your sole purpose was
data partitioning. The maximum page size using this data extraction pagination feature is 50,000 rows and the
minimum page size is 10,000 rows. You can change the page size in intervals of 5,000 rows. We recommend
using the default page size of 25,000 rows unless you have a specific need to change it.
Caution
The SAP ERP (SAP RFC) connector page size should be greater than or equal to the page size set here. The
default connector page size is 50,000.
Note
Make sure that each extracted table contains the key columns correctly defined.
When you view the extraction log for a table, you see just one entry, which is updated as the data is extracted. This differs from the previous behavior, where you saw a separate entry for each partition.
When the data extraction fails, the retry begins from the end of the last page that was successfully extracted. For example, if the error occurs at row 50,786 and the default page size of 25,000 is used, the retry starts with row 50,001.
For information about changing the default pagination for an SAP ERP (RFC) source data table, see Changing
the Page Size for Automatic Pagination [page 192].
Before you activate Automatic Pagination, ensure that you meet the following minimum requirements:
• You must meet the prerequisites of your SAP ERP (RFC) source system, including having the required SAP
Notes installed. For more information on the required SAP Notes, see Required and Use Case Specific SAP
Notes [page 59].
• If you are using the on-premises extractor, you must be using version v1.944.0 or later for the Automatic
Pagination feature to work.
• Validate that all the key columns are correctly configured.
Context
To change the default pagination for an SAP ERP (RFC) source data table, follow these steps:
Procedure
Note
The maximum page size using this data extraction pagination feature is 50,000 rows and the minimum
page size is 10,000 rows. You can change the page size in intervals of 5,000 rows. We recommend
using the default page size of 25,000 rows unless you have a specific need to change it. For more
information, see Automatic Pagination [page 190].
7. Choose Change.
8. Save your changes.
Overview
After adding tables and columns to your source data, and configuring the partition strategy, you can set delta
filters if you want to periodically extract data from your source system.
Make sure that each extracted table contains a primary key definition, PK.
Note
The delta parameter in the advanced extraction usually pulls data up to the current day minus one (current
day - 1), while the standard extraction always pulls data up to the current date.
Note
You can only use columns with the types Date and Timestamp as delta filters. Where Timestamp is used
as a delta filter, only the date part of the timestamp is considered. The time part of the timestamp is not
considered.
If you've set up your process data pipeline using a process-based template, check for each table whether the
default initial values of the delta filter are meaningful for your use case.
• Newly added rows are extracted based on the timestamp or insertion sequence.
• Updated rows are extracted as new parquet files with the extraction timestamp, c_fetchdate.
For each extracted table, at least two parquet files are generated, one with the raw data extracted during the
initial run and the other with the deduplicated data, which includes the existing and newly extracted data. Data
deduplication happens by grouping all the rows based on the primary key, then choosing the most recent entry
extracted from all parquet files. All this happens internally, therefore, you can only access the optimized table
with unique data.
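Conceptually, the deduplication keeps, for each primary key, the row with the most recent c_fetchdate across all parquet files. The following SQL is an illustrative sketch only, not the actual internal implementation; all_extracted_rows is a placeholder for the union of all parquet files of a table:
SELECT *
FROM (
SELECT *,
ROW_NUMBER() OVER (PARTITION BY c_key ORDER BY c_fetchdate DESC) AS rn
FROM all_extracted_rows
) ranked
WHERE rn = 1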
Let's assume you want to extract the delta load. Consider that you initially extracted data from a table named "Sales" on 2022-11-05.
After initial extraction, you have updated the existing customer data (Customer ID 003) and added new data,
as follows:
As the existing data is updated and new data is added, a parquet file called "Sales-20221105.parquet" is
generated. This parquet file contains duplicate rows for the record, 003. The data in these two tables is then
grouped by primary key and c_fetchdate, so that only the most recent records are available in the optimized
table.
The parquet files are maintained by SAP Signavio on the AWS platform and are currently not accessible.
The data extraction happens based on the table configuration in source data.
The following table shows the behavior of delta load based on different configurations.
• Partition strategy: Manual Date (Time); delta filter: configured; the column in the delta filter is the same as the column in the partition strategy.
Data is extracted and partitioned with the columns used in the partition strategy and the delta filter. Dates defined in the delta filter are considered.
• Delta filter: configured; the column in the delta filter is not the same as the column in the partition strategy.
Dates defined in the delta filter are considered.
Find out how to apply an SQL filter to define what data to extract with a data pipeline.
On table level, you can limit the extraction data with an SQL filter. Then, only data sets with attributes matching
the SQL query are extracted.
Note
You cannot start the filter with a WHERE clause because the WHERE clause is automatically applied by
the system. In addition, the filter cannot contain a semicolon (;).
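For example, a valid filter could look like the following; the column names are illustrative. Note that there is no leading WHERE and no semicolon:
CustomerID = 13
AND Country = 'France'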
To edit an SQL filter, select the code, edit, and confirm Save.
Note
For SAP ERP (SAP RFC) connectors, if you are using a static date in the filter, ensure that each element is separated by a hyphen, for example '2024-07-24'.
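For instance, with an illustrative column name:
DocumentDate >= '2024-07-24'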
You can add comments to your SQL query to help explain different parts of the query or if you want to
temporarily exclude part of the query. Comments can be placed anywhere within the query, including at the
beginning and end of the filter.
Multi-line comments begin with a forward-slash (/) followed by an asterisk (*) on the first of the lines that
you want to use as a comment and end with an asterisk (*) followed by a forward slash (/) on the last line
of the comment. These comments span multiple lines, allowing for more detailed descriptions or temporary
exclusions of larger sections of the query.
/*
This is a test
multi-line comment.
*/
CustomerID = 13
AND Country = 'France'
You can save your filter with the comments included and then preview the actual query that will be executed by
the system with all the comments removed. Comments are not sent to the source system and have no impact
on the logic or execution of the query.
Related Information
3.6.4.3.3.7.1 Subqueries
A subquery is a query that is nested inside another SQL statement. You can add a subquery in the SQL Filter.
This subquery corresponds to the WHERE clause of the main query.
Note
• We only support the following aggregate functions: AVG, COUNT, MIN, MAX, SUM.
• EXISTS statements are not supported.
InvoiceNumber in (
SELECT Number FROM Invoices WHERE Value > 10
)
Nested queries such as the following are supported for all connectors except for SAP ERP (RFC) tables where
you have enabled Automatic Pagination. This is because subqueries would be paginated for these SAP ERP
(RFC) tables.
InvoiceNumber = (
SELECT Number FROM Invoices
WHERE AmountOfItems = (
SELECT MAX(AmountOfItems) FROM Purchase
)
)
Similarly, multiple subqueries such as the following are supported for all connectors except for SAP ERP (RFC) tables where you have enabled Automatic Pagination. Again, this is because subqueries would be paginated for these SAP ERP (RFC) tables.
InvoiceNumber in (
SELECT Number FROM Invoices WHERE Value > 10
)
AND
AmountOfItems = (
SELECT MAX(AmountOfItems) FROM Purchase
)
3.6.4.3.3.7.2 Variables
Note
This variables feature is only available for SAP ERP (RFC) connectors.
You can use dynamic time variables in the SQL Filter to specify delta criteria in subqueries, or for delta logic that can't be handled by the delta filter.
The following example shows a case where a variable is required because the delta logic can't be handled by the delta filter.
KOART = 'K'
AND (
( AUGCP >= '$delta' )
OR
(
BELNR IN (
SELECT
DISTINCT BELNR
FROM BKPF
WHERE
( CPUDT >= '$delta'
OR AEDAT >= '$delta'
OR UPDDT >= '$delta')
)
)
)
Learn how to delete the extracted data while either keeping the table's configuration or deleting the table
entirely in your source data.
You can delete the extracted data while retaining the configured table, or delete the entire table from your
source data in SAP Signavio Process Intelligence. To do this, use the Delete data / table option, which is
accessible in SAP Signavio Process Intelligence at the following places:
Caution
• Deleting the extracted data or the entire table that is linked to a process data pipeline will affect the
data transformation within that pipeline.
• Data is deleted permanently from SAP Signavio Process Intelligence and this can't be undone.
Tip
You can also delete from the table details view. Expand the table and from the header menu, select ,
then Delete data/table.
A popup window appears with the following options for you to choose.
• Delete only extracted data and keep the table: Deletes only the extracted data while keeping the table's
configuration for later use. Selecting this option allows you to re-extract the data using the existing
table configuration.
The following table configuration-related options are also displayed:
• Keep partition strategy: Selected by default if partitions are set for your table, otherwise disabled.
• Keep delta filter: Selected by default if the delta filter is configured for your table. The following delta options are displayed:
• Keep the initial value
• Make current value as initial value
• Keep SQL filter: Selected by default if SQL filters are defined for your table, otherwise disabled.
• Keep SQL filter and variables: Selected by default if SQL filters and variables are defined for your table, otherwise disabled. Deselect to delete the filter and variables. This variables feature is only available for SAP ERP (RFC) connectors.
• Delete the table completely: The table and the extracted data are deleted permanently.
Learn how to extract a large volume of data from your source systems using the advanced data extraction
feature.
Note
• To extract data, you need a connector that is linked to a source system. All connectors support
standard data extraction, whereas only the SAP ERP connector supports both standard and advanced
extraction.
• When it comes to more complex scenarios, the data needs to be extracted using custom code.
Therefore, you must be familiar with SQL code and YAML to use the advanced data extraction.
Use Cases
Following are other possible use cases for choosing advanced data extraction:
• When you need two or more delta parameters to extract the table.
• When you want to partition data based on range.
• When you want to join two or more tables for the extraction.
In a table's advanced extraction mode, you use a script to specify what data to extract from a table.
In an extraction script, you can specify extraction options such as a partition strategy and filters, as well as
custom extraction options.
The script editor provides a linter that parses the script and detects errors related to extraction code. Rows
with errors are highlighted and indicated by a red dot. If available, additional information is displayed when you
hover over the error.
Learn the different ways of partitioning your data, and why it's necessary.
Before you run the initial extraction, we highly recommend applying data partitioning to the table. Due to the
high amount of data being extracted from the source systems, you need to split the data into smaller chunks.
This improves the processing time.
To configure the initial load partitions and delta criteria, open your source data and expand the table. The script
editor opens in the side panel for you to add your code.
Learn how to use static values to partition your data. Example code is included to help you get started.
Example code:
tableSyncConfigurations:
alias: bkpf
sql: |
SELECT
CONCAT(MANDT, BUKRS, BELNR, GJAHR) AS c_key,
CURRENT_TIMESTAMP() AS c_fetchdate,
MANDT,
BUKRS,
BELNR,
GJAHR,
CPUDT,
CPUTM
FROM BKPF
WHERE GJAHR = :gjahr
keyColumn: c_key
mostRecentRowColumn: c_fetchdate
partitions:
- name: gjahr
values:
- '2021'
- '2022'
- ...
In the above code, the partitions section is where you define the values used to filter the data (here, example fiscal years). When using partitions, the extraction code references the partition in two places: in the WHERE clause of the SQL query and under partitions. One extraction is started per defined partition value.
In the example, GJAHR is the column in the table that you want to partition by, and :gjahr is the link to the partition values defined under partitions in the query.
Learn how to partition your data based on its date values. Example code is included to help you get started.
Example code:
tableSyncConfigurations:
alias: cdhdr
sql: |
SELECT
CONCAT(MANDANT, OBJECTCLAS, OBJECTID, CHANGENR) AS c_key,
CURRENT_TIMESTAMP() AS c_fetchdate,
MANDANT,
OBJECTCLAS,
OBJECTID,
CHANGENR,
UDATE,
UTIME,
TOSTRING(HASHBYTES('MD5', CDHDR.USERNAME)) AS USERNAME
FROM CDHDR
WHERE UDATE >= :partition_date
AND UDATE <= DATEADD('MM', 1, :partition_date)
keyColumn: c_key
mostRecentRowColumn: c_fetchdate
partitions:
- name: partition_date
values:
- '2022-01-01'
- '2022-02-01'
- '2022-03-01'
- '2022-04-01'
- ...
In DATEADD('MM', 1, :partition_date), 'MM' represents monthly intervals. You can set daily intervals with DD, weekly with WW, and monthly with MM.
Learn how to partition your data based on subquery results. Example code is included to help you get started.
Use this partition strategy when the table does not have a delta parameter.
Example code:
tableSyncConfigurations:
alias: cdhdr
sql: |
SELECT
CONCAT(MANDANT, OBJECTCLAS, OBJECTID, CHANGENR) as c_key,
CURRENT_TIMESTAMP() AS c_fetchdate,
CHANGENR,
MANDANT,
OBJECTCLAS,
OBJECTID,
UDATE,
TOSTRING(HASHBYTES('SHA3_256', CDHDR.USERNAME)) AS USERNAME,
UTIME
FROM CDHDR
WHERE OBJECTID IN (
SELECT DISTINCT CONCAT(AUFK.MANDT, AUFK.AUTYP, AUFK.AUFNR) AS aufk_keys
FROM AUFK
WHERE ERDAT >= :partition_date
AND ERDAT <= DATEADD('WW', 1, :partition_date))
keyColumn: c_key
mostRecentRowColumn: c_fetchdate
partitions:
- name: partition_date
values:
- '2020-01-01'
- '2020-01-08'
- '2020-01-15'
- '2020-01-22'
- '2020-01-29'
The following explains the code for partitioning data based on subquery results.

WHERE OBJECTID IN (
SELECT DISTINCT CONCAT(AUFK.MANDT, AUFK.AUTYP, AUFK.AUFNR) AS aufk_keys
FROM AUFK
WHERE ERDAT >= :partition_date
AND ERDAT <= DATEADD('WW', 1, :partition_date))

Using the partition placeholder with ":" shows the connection to the defined partitions. The partition interval can be daily, weekly, or monthly. The subquery is executed before the main query and then applied as a filter to make sure that the desired data is extracted.

partitions:
- name: partition_date
values:
- '2020-01-01'
- '2020-01-08'
- '2020-01-15'
- '2020-01-22'
- '2020-01-29'

After the initial load, make sure you set up the delta load based on the partitions defined here. When creating the delta load using the date information, add a parameters section with an initial date for the very first delta load. This ensures that the data extraction happens based on correct date values. For information about delta load, see Delta load using date information [page 208].
Learn how to partition your data based on the change number range.
Use the following script to extract data from the CDPOS table based on the change number range:
tableSyncConfigurations:
alias: cdpos
sql: |
SELECT
CONCAT(MANDANT, OBJECTCLAS, OBJECTID, CHANGENR, TABNAME, TABKEY, FNAME,
CHNGIND) AS c_key,
CURRENT_TIMESTAMP() AS c_fetchdate,
MANDANT,
OBJECTCLAS,
OBJECTID,
CHANGENR,
TABNAME,
TABKEY,
FNAME,
CHNGIND,
VALUE_NEW
FROM CDPOS
WHERE CHANGENR >= :changenr AND CHANGENR <= :max_changenr
keyColumn: c_key
mostRecentRowColumn: c_fetchdate
parameters:
- name: changenr
initial: '0000000001'
idformat: '%010d'
pagesize: 100000
type: id
In the preceding script, the WHERE condition in the SQL query uses the values defined in the parameters code block. This means the table is extracted using parameters, which dynamically partition the data based on the initial and pagesize values.
For the initial data extraction from the CDPOS table, you first need to extract data from the CDHDR table based on udate, the required date range. For example, extract the data that is added or updated on or after 2023-01-01 (udate >= '2023-01-01'). For information on extracting data based on date, see Partition based on date [page 204].
Example
Example Query
tableSyncConfigurations:
alias: cdhdr
sql: |
SELECT
CONCAT(MANDANT, OBJECTCLAS, OBJECTID, CHANGENR) AS c_key,
CURRENT_TIMESTAMP() AS c_fetchdate,
MANDANT,
OBJECTCLAS,
OBJECTID,
CHANGENR,
UDATE,
UTIME,
TOSTRING(HASHBYTES('MD5', CDHDR.USERNAME)) AS USERNAME
FROM CDHDR
WHERE UDATE >= :partition_date
AND UDATE <= DATEADD('MM', 1, :partition_date)
keyColumn: c_key
mostRecentRowColumn: c_fetchdate
partitions:
- name: partition_date
values:
- '2023-01-01'
- '2023-02-01'
- '2023-03-01'
- '2023-04-01'
- ...
Next, you need to get the minimum change number from the CDHDR table that was extracted in your process
data pipeline. For that, use the following query:
SELECT min(changenr)
FROM CDHDR
Let's assume the minimum change number you get is 00000000. Pass this number as the input to the initial parameter.
The pagesize parameter defines the number of rows in a partition. This parameter dynamically splits the table into partitions, starting from the minimum change number. You define only the initial value and the page size; the system then iterates and creates partitions until a partition returns no new results. When a partition doesn't return results, the last value used for the extraction is saved and used as the new initial value for the next data extraction. This saved value is passed as an input to the initial parameter in the next extraction.
Let's assume:
initial: 00000000,
pagesize: 1000000
then the dynamic partitioning happens internally in the CDPOS table, as follows:
WHERE changenr >= 00000000 and changenr < 01000000. The result of initial + pagesize, 00000000 + 1000000, is 01000000.
CDPOS WHERE changenr >= 01000000 and changenr < 02000000. The result of initial + pagesize + pagesize, 00000000 + 1000000 + 1000000, is 02000000.
If the second partition doesn't return results, the last extracted value in the first partition 01000000 is saved in
the background for the next extraction. The 01000000 value is passed as an input to the initial parameter
in the next extraction.
Caution
The extraction process stops when a partition doesn't return any results. Similarly, if a specified pagesize doesn't return any results due to other applied filters, the extraction stops, regardless of the presence of records with a higher CHANGENR. In such cases, we recommend adjusting the pagesize parameter value accordingly.
Related Information
Delta load is the process of extracting only the new data added to a table since the last successful load. You can
define delta criteria such as certain date and time for each field in a table. Based on the set criteria, the delta
load of the table is extracted. Each table extracted as a parquet file has an extraction timestamp, c_fetchdate.
Make sure that each extracted table contains a primary key definition, PK.
Example
• Newly added rows are extracted based on the timestamp or insertion sequence.
• Updated rows are extracted as new parquet files with the extraction timestamp, c_fetchdate.
For each extracted table, at least two parquet files are generated, one with the raw data extracted during
the initial run and the other with the deduplicated data, which includes the existing and newly extracted
data. Data deduplication happens by grouping all the rows based on the primary key, then choosing the
most recent entry extracted from all parquet files. All this happens internally, therefore, you can only access
the optimized table with unique data.
Let's assume you want to extract the delta load. Consider that you initially extracted data from a table named "Sales" on 2022-11-05.
After initial extraction, you have updated the existing customer data (Customer ID 003) and added new
data, as follows:
As the existing data is updated and new data is added, a parquet file called "Sales-20221105.parquet" is
generated. This parquet file contains duplicate rows for the record, 003. The data in these two tables is
then grouped by primary key and c_fetchdate, so that only the most recent records are available in the
optimized table.
The parquet files are maintained by SAP Signavio on the AWS platform and are currently not accessible.
For advanced data extraction, you can configure the delta load in the following ways:
• Delta load using date information
• Delta load using ascending numbers
• Delta load using subqueries
Note
The delta parameter in the advanced extraction usually pulls data up to the current day minus one (current
day - 1), while the standard extraction always pulls data up to the current date.
When you run the extraction, the data is extracted based on the defined delta criteria.
Related Information
Following is the example code for delta load using date information:
tableSyncConfigurations:
alias: cdhdr
sql: |
SELECT
CONCAT(MANDANT, OBJECTCLAS, OBJECTID, CHANGENR) AS c_key,
CURRENT_TIMESTAMP() AS c_fetchdate,
MANDANT,
OBJECTCLAS,
OBJECTID,
CHANGENR,
UDATE,
UTIME,
TOSTRING(HASHBYTES('MD5', CDHDR.USERNAME)) AS USERNAME
FROM CDHDR
WHERE UDATE >= :left_erdat
keyColumn: c_key
mostRecentRowColumn: c_fetchdate
parameters:
- name: erdat
initial: 2023-01-01
format: yyyy-MM-dd
type: date
In the above code snippet, :left_ combined with the parameter name entered in the parameters section (here, :left_erdat) is the delta variable.
In the parameters section of the code, you are free to choose a name for your parameter. The initial date that you enter is used for the very first delta load. After this load, the value of that parameter is automatically adjusted in the background and set to the date of the last extraction.
After configuring the delta load, run the extraction to pull the new data continuously into SAP Signavio Process
Intelligence. You can then create a process data pipeline and perform data transformations. You can also
monitor the pipeline.
The following is example code for delta load using ascending numbers. This query can also be used for the initial extraction, for example with changenr of the CDPOS table.
tableSyncConfigurations:
alias: cdpos
sql: |
SELECT
CONCAT(MANDANT, OBJECTCLAS, OBJECTID, CHANGENR, TABNAME, TABKEY, FNAME,
CHNGIND) AS c_key,
CURRENT_TIMESTAMP() AS c_fetchdate,
MANDANT,
OBJECTCLAS,
OBJECTID,
CHANGENR,
TABNAME,
TABKEY,
FNAME,
CHNGIND,
VALUE_NEW
FROM CDPOS
WHERE CHANGENR >= :changenr AND CHANGENR <= :max_changenr
keyColumn: c_key
mostRecentRowColumn: c_fetchdate
parameters:
- name: changenr
initial: '0000000001'
idformat: '%010d'
pagesize: 1000000
type: id
In the above sample code, the placeholders prefixed with ":" show the connection to the parameters defined for the load.
For the first delta load, the value entered as initial in the parameters section is the starting point. The parameter logic loads the number of rows defined in the pagesize parameter and then returns the results.
We recommend that you increase the page size when you add additional parameters.
After configuring the delta load, run the extraction to pull the new data continuously into SAP Signavio Process
Intelligence. You can then create a process data pipeline and perform data transformations. You can also
monitor the pipeline.
Learn how to configure delta load using subqueries for advanced data extraction.
tableSyncConfigurations:
alias: cdhdr
sql: |
SELECT
CONCAT(MANDANT, OBJECTCLAS, OBJECTID, CHANGENR) as c_key,
In the above sample code, :left_ combined with the parameter name entered in the parameters section is the delta variable.
In the parameters section, you can assign a name for your parameter. The initial date that you enter is used for the first delta load. After this load, the value of that parameter is automatically adjusted in the background and set to the date of the last extraction.
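The sample code above is abbreviated. As an illustrative sketch only, a complete script following this pattern could combine the subquery from the partition example with a date parameter, as follows. The table and column names are taken from the earlier CDHDR and AUFK examples; the exact script for your system may differ:
tableSyncConfigurations:
alias: cdhdr
sql: |
SELECT
CONCAT(MANDANT, OBJECTCLAS, OBJECTID, CHANGENR) as c_key,
CURRENT_TIMESTAMP() AS c_fetchdate,
MANDANT,
OBJECTCLAS,
OBJECTID,
CHANGENR,
UDATE,
UTIME
FROM CDHDR
WHERE OBJECTID IN (
SELECT DISTINCT CONCAT(AUFK.MANDT, AUFK.AUTYP, AUFK.AUFNR)
FROM AUFK
WHERE ERDAT >= :left_erdat)
keyColumn: c_key
mostRecentRowColumn: c_fetchdate
parameters:
- name: erdat
initial: 2023-01-01
format: yyyy-MM-dd
type: date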
Note
After configuring the delta load, run the extraction to pull the new data continuously into SAP Signavio Process
Intelligence. You can then create a process data pipeline and perform data transformations. You can also
monitor the pipeline.
Learn how to delete the extracted data while either saving the script or deleting the table entirely in your source
data.
Context
You can delete the extracted data while saving the advanced extraction's script, or delete the entire table from
your source data in SAP Signavio Process Intelligence. To do this, use the Delete data / table option, which is
accessible in SAP Signavio Process Intelligence at the following places:
• Action menu of a selected table on the source data tables list screen
• Table configuration screen
Caution
• Deleting the extracted data or the entire table that is linked to a process data pipeline will affect the data transformation within that pipeline.
• Data is deleted permanently from SAP Signavio Process Intelligence and this can't be undone.
Procedure
• Delete only extracted data and keep the table: This deletes the extracted data and saves the current
script for later use. For new data with current configuration, you need to re-extract data from your
source system.
• Delete the table completely: This deletes the table and extracted data permanently.
The data extraction happens based on the table configuration in source data. After adding required tables and
columns, and configuring initial load and delta load, as needed, you can then trigger the data extraction.
Note
Only the Extract (initial load) button on the table configuration screen in the source data can trigger the initial load extraction process.
For delta load extraction options, see Data Extraction Options [page 162] and for defining delta load filters,
see Setting Delta Filters [page 192].
1. Open your source data and expand the table whose initial data load you want to extract.
2. On the Configuration tab, select Extract (initial load).
3. Close the table details view.
4. Select the Logs tab to monitor the extraction and to view the extraction details of all the tables.
5. Select the log entry that you want to monitor.
The side panel opens with the details of the selected log entry.
You can preview the extracted data per table. To do so, switch to the Tables tab. Expand the table and select the
Table preview tab.
After running your initial extraction, configure and schedule the delta load to automatically pull new data from
your source system.
For each new process data pipeline, you need to run the initial data extraction manually.
Related Information
While creating or editing the source data, you can set a schedule for the data extraction. Based on the set criteria, the new data since the last extraction is extracted from the source system.
After configuring the delta load, the new data that is added to your source system is continuously pulled into
SAP Signavio Process Intelligence. Then, you can create a process data pipeline and run data transformations.
You can also monitor the pipeline during data transformation.
Related Information
Learn how to configure an Ingestion API connection and source data, then upload data to SAP Signavio Process
Intelligence.
Uploading data to SAP Signavio Process Intelligence with the Ingestion API differs from the standard way of uploading and transforming data. There is no extraction step after uploading data to SAP Signavio Process Intelligence, only transformation and load.
You can set up data ingestion by either creating a source data or a process data pipeline.
Prerequisites
Before uploading, make the following checks and preparations in your data to avoid encountering errors.
• Ensure dates and times use supported data types. These include:
• Date
• Timestamp (millisecond precision)
• Time (millisecond precision)
• Convert date and timestamp formats in files to milliseconds before upload (see the sketch after this list). For example:
Fri Jun 24 2022 10:58:41 GMT+0200 becomes 1656061121670 milliseconds
• Replace NULL values with an empty string in files. Not all NULL value types are detected.
• Column names must not contain special characters. Only characters from the character class [A-Za-z_]
are valid for column names. For example, column names must not contain spaces.
• Avoid spaces in the names of uploaded files.
• Avoid spaces in the specified table name in the schema.
• SAP Signavio Process Intelligence doesn't support pseudonymizing the uploaded data.
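If you want to sanity-check a converted value, the following sketch computes the epoch milliseconds for the example above in Trino SQL, the dialect this guide references for transformations. The query itself is only an illustration; the actual conversion must happen in your files before upload.

-- Epoch milliseconds for 2022-06-24 10:58:41 at UTC+02:00.
-- to_unixtime returns seconds as a double; multiply by 1000 for milliseconds.
SELECT CAST(to_unixtime(TIMESTAMP '2022-06-24 10:58:41 +02:00') AS bigint) * 1000
-- Returns 1656061121000, matching the example value up to sub-second precision.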
Uploading Data
1. Set up data ingestion by creating an ingestion connection, source data, or process data pipeline in SAP
Signavio Process Intelligence. Read more in section Setting Up Source Data [page 216].
Note
• Connections to the Ingestion API are authenticated by an access token. This token can't be
refreshed once you have created an Ingestion API connection. To get a new token, delete the
existing connection and create a new connection. See Ingestion API Authentication for more
information.
2. Call the API using the API credentials and upload the data. Read more in the sections Ingestion Request
and Ingestion Status Request.
Note
• The duration between API calls that upload data to the same source must be at least 30 seconds.
Otherwise, you get a timeout error.
• If the size of the CSV or TSV file exceeds 150 MB, we recommend you split it into multiple files of
maximum 150 MB each. You can then make multiple upload requests using the same schema.
• API calls mustn't contain more than five files per call.
3. Run the initial transformation and load. Pipeline logs are generated to provide transformation and load
information. Read more in section Running the Transformation and Load [page 289].
Once uploaded, the data is ready for investigation. Read how to define and grant access to process data in
section Prepare a process [page 20] and how to analyze data in section Process Mining [page 301].
• Your data must conform to the existing data ingestion schema. The Ingestion API can't be used to modify
the schema.
• The primary key of your new data must be the same as that of the existing data. The primary key of an
existing ingestion table cannot be modified.
• If you upload data for existing records which were added in earlier requests, it is assumed you're
performing an update. Pushing data to the same table with the same primary keys overwrites existing
data. Note that this doesn't apply to duplicate records uploaded in the same request. It is expected that all
records in a single upload request are unique.
Read about how to set up data ingestion with the Ingestion API in SAP Signavio Process Intelligence.
You can set up data ingestion by either creating an ingestion source data or a process data pipeline.
Note
• You can delete only the ingested data, or delete the entire table. To delete, select the table, then Delete
data / table. Choose one of the options and select Delete.
Note
The source data and linked connection are created at the same time. They both share the same name.
Read about how to manage the Ingestion API access token in SAP Signavio Process Intelligence.
An access token is used to authenticate calls to the Ingestion API. The token is created when you set up an
ingestion connection, as described in Setting Up Source Data [page 216].
After creating an ingestion connection, the generated access token can't be refreshed or changed. If an access
token for a specific connection is compromised, you must get a new access token.
To get a new access token, delete the existing connection and create a new connection. Use the new access
token to authenticate the Ingestion API calls to SAP Signavio Process Intelligence.
Note
Read about the reference information for the Ingestion API available in SAP Signavio Process Intelligence.
The following list describes possible solutions to problems you might encounter when using the Ingestion API.
• Access is denied: The access token for your connection might be invalid. Obtain a new access token. For more information on obtaining a new token, see Ingestion API Authentication.
• Your data files are rejected as an unsupported type: The supported file extensions are CSV and TSV. Check your file extensions.
• An invalid character exists between the encapsulated token and the delimiter: The relevant line number is provided in the error trace. Find that line in your data and amend the invalid character.
• A name in the header is missing or duplicated: Check that the header line of the CSV or TSV is present, complete, and has no repeated names.
• A field is rejected as not accepting NULL values: In the schema, set the relevant field as nullable.
• An error is encountered for a specific input string when converting the CSV/TSV file: Ensure that the type in the schema matches the corresponding data in the files.
• You receive the message "The initial request for table <Table Name> is still in progress. Please wait for it to finish before starting a new request.": Wait for the first table ingestion request to be concluded before starting a second one. After that, other requests can be started, but there should be at least 30 seconds between them. Otherwise, you may get a timeout error.
How to create a connection that connects SAP Signavio Process Intelligence with a source system.
In SAP Signavio Process Intelligence, you can find the following source data systems to upload data manually: the manual upload and the legacy manual upload.
The following list summarizes the key differences between the two types of uploads:
• Setup: With the manual upload, configuring and uploading zipped CSV files is centralized and no connection type is needed to create a source data, which simplifies and speeds up the setup. With the legacy manual upload, configuring and uploading CSV files requires a connection setup and involves navigating multiple screens.
• Table customization: With the manual upload, table customization is simplified with the improved user interface. With the legacy manual upload, table customization is complex and requires additional steps.
• Delimiters: The manual upload supports delimiters such as comma (,), semicolon (;), and the tab control character. The legacy manual upload supports only comma (,).
• Zipped files: Both support uploading zipped CSV files.
Using the manual upload source data system, you can upload any data as zipped CSV files into SAP Signavio
Process Intelligence. Before uploading data, get familiar with the guidelines on preparing the data.
Prerequisites
Before uploading, make the following checks and preparations in your data to avoid errors.
• Column names must not contain special characters. Only characters from the character class [A-Za-
z_] are valid for column names. For example, column names must not contain spaces.
• Replace NULL values with an empty string in files. Not all NULL value types are detected.
• Avoid spaces in the names of uploaded files.
• Avoid spaces in the specified table name in the schema.
• Each row must have its own unique identifier in the process data.
• Delimiters such as comma (,), semicolon (;), and the tab control character are supported.
Data that uses the literal string “\t” as a separator is not supported.
• CSV files must contain headers.
• SAP Signavio Process Intelligence doesn't support pseudonymizing the uploaded data.
• Data in UNIX timestamp format is supported. For example, while configuring a table, for the timestamp value 1656061121670, you can choose the date, timestamp, or time data type.
• To use a string as date or timestamp, convert the string into UNIX timestamp format before uploading. For example, convert the string 1999-05-05'T'05:05:05 into the UNIX timestamp 925880705000 (assuming UTC).
• If your file has a column with timestamp data in another format, for example, Fri Jun 24 2022 10:58:41 GMT+0200, use the STRING data type for the column while configuring a table.
• The duration between each file upload to the same source must be at least 30 seconds. Otherwise, you get a timeout error.
Related Information
For detailed data requirements, see the Prerequisites [page 220] section.
The manual upload uses the Apache Avro library to serialize input data into supported Avro data types. While configuring the table, the data types you choose for the date, date-time, and time formats are automatically converted into the appropriate Avro data types.
This section describes the date and time data types that are supported and must be used when configuring
tables.
Date
A date logical type annotates an Avro int, where the int stores the number of days from the unix epoch, 1 January 1970 (ISO calendar).
Time
A time-millis logical type annotates an Avro int, where the int stores the number of milliseconds after midnight, 00:00:00.000.
Timestamp
A timestamp-millis logical type annotates an Avro long, where the long stores the number of milliseconds from the unix epoch, 1 January 1970 00:00:00.000 UTC.
Nullability
Unions are used to represent nullable fields. For example, ["null", "string"] declares a field that may be either null or a string.
How to create, edit, and delete source data. A source data defines what data is extracted from a source system.
You can upload data manually as zipped CSV files. A zip file can contain data with the same schema or different
schemas.
You'll find the Manual upload option while creating a source data.
Before you get started, make sure you check the Prerequisites [page 220] and Supported Data Types [page
221].
After uploading, configure the table with the correct data types and key columns. See Configuring a Data Table
[page 223].
How to create, edit, and delete source data. A source data defines what data is extracted from a source system.
After uploading the data into SAP Signavio Process Intelligence, you can configure the data table with the correct data types and key columns. For a list of supported data types, see the Supported Data Types [page 221] section.
Note
A data table can be configured for multiple files with the same header only.
1. Open your source data.
2. Choose the zip file, open the menu, and select one or more CSV files.
3. To configure a data table for multiple files with the same header, choose the files and select Configure table.
4. In the Configure table popup window, enter a name for the data table.
5. Choose either single or double quotes and a field separator from the list and select Next.
Example
Each value in a row is checked against the selected quotation mark and parsed by the separator. If the selected quotation mark matches the quotation marks around the value, then these values are formatted automatically. Otherwise, the value format remains unchanged. The parsed values are then available for the table configuration.
The following examples demonstrate how quotation marks work with comma (,) as a separator:
• Selected quotation mark: double ("). Input row: 001," Process ",FALSE. Parsed values: 001 | Process | FALSE. The values are parsed and auto-formatted. For example, the space and double quotation marks (") around the value Process are removed because the selected quotation mark matches the quotation marks around the value.
• Selected quotation mark: single ('). Input row: 001," Process ",FALSE. Parsed values: 001 | " Process " | FALSE. The values are parsed, but the value format remains unchanged. For example, the space and double quotation marks (") around the value Process remain the same because the selected quotation mark doesn't match the quotation marks around the value.
• Selected quotation mark: single ('). Input row: 001," Process,Data ",FALSE. Parsed values: 001 | " Process | Data " | FALSE. The value format remains unchanged. If your selected quotation mark is single, the comma-separated values enclosed in double quotation marks (") are considered as two separate values while parsing.
Note
Make sure the timestamp data type is set to string. Otherwise, the file upload fails.
8. Choose one or more columns as key columns by ticking the checkboxes and select Create table.
Remember
You can select up to 63 columns as the primary key in a table. Selecting more than 63 columns is invalid and causes an error.
9. To view all the columns in a data table, choose the data table and open the menu.
Related Information
How to create, edit, and delete source data. A source data defines what data is extracted from a source system.
You can edit the unprocessed CSV file names and zipped file names. You can also edit the configured data table
names.
How to create, edit, and delete source data. A source data defines what data is extracted from a source system.
You can delete manually uploaded files, configured data tables, and columns.
Open your source data, choose the file, and select Action menu > Delete.
Open your source data, choose the data table, and select Action menu > Delete.
1. Open your source data, choose the data table and open the menu.
How to create a connection that connects SAP Signavio Process Intelligence with a source system.
Note
The legacy manual upload feature will no longer be available after September 9, 2023.
As an alternative, you can use the improved manual upload feature. Using the improved manual upload, you can configure and upload files from one place. You don't need a connection type for creating a source data. This simplifies and speeds up the setup process. For information about the improved manual upload, see the Manual upload [page 220] section.
Using legacy manual upload, you can upload process data as zipped CSV files.
Related Information
How to create a connection that connects SAP Signavio Process Intelligence with a source system.
Setting up and working with the legacy manual upload feature involves the following procedures:
Remember
You can select up to 63 columns as the primary key in a table. Selecting more than 63 columns is invalid and causes an error.
Get an overview of transforming the extracted data and loading it into a process.
After extracting data from source systems, transform and load data into a process for further investigations.
Using process data pipelines, the data is transformed and an event log is generated as a result. The event log
is then loaded into a process linked within a process data pipeline. You can use this process to perform further
investigations and generate insights.
To transform and load data into a process, you need the following:
• A valid connection with your source system whose data you want to transform.
• Data extracted into SAP Signavio Process Intelligence using source data.
If the data hasn't been extracted yet, you can still extract it using the Run ETL option in process data pipelines.
• A process data pipeline. During process data pipeline creation, you're asked to choose the appropriate template and then the source data. The connection associated with the selected source data is linked automatically.
• Data transformation rules defined in business objects, event collectors, and a case attribute.
• A process linked to your process data pipeline. This is to upload the event log with transformed data into a
process for further analysis and mining.
• Access to Run ETL or Run T&L options.
In a process data pipeline, you define how the data pipeline extracts and transforms data, and where to load the
data.
A process data pipeline contains all settings necessary to run a data pipeline:
Note
We recommend setting up and testing data pipelines in a test environment. After a successful test, you
can transfer the process data pipeline to your production system by using the import and export functions.
Read more in section Export and import a process data pipeline [page 254].
Concepts
• Business process: The process that you want to analyze in your organization. When you set up a process
data management pipeline, you model the business process in the process data pipeline.
• Business object: An artifact in a process data pipeline. A business object, for example, a lead, consists of
events and attributes.
• Events: Activities for a specific business object, for example the creation and qualification of a lead.
• Event collectors: The scripts for events are called event collectors.
• Attributes: A characteristic of an event on event-level or case-level. For example, the name of the person
that created a lead is an attribute on event-level. The ID of that person is an attribute on case-level.
Data transformation templates, which are part of our accelerators for SAP Signavio Process Intelligence, speed up the creation of process data pipelines. They're based on business processes in specific source systems, like the Incident-to-Resolution process in ServiceNow. Here you find an overview of all templates that you can use to create data models for process data pipelines.
For some source systems, we provide transformation templates in which the business process, the extraction,
and the transformation are preconfigured.
The templates are based on common business processes like Lead-to-Opportunity, Lead-to-Quote, or Incident-
to-Resolution.
For a detailed description of each template, see section Data transformation templates.
• You need the feature set ETL - Superuser Role to use the templates. Your workspace administrator
can enable it for you.
• The template documentation is available only for SAP Signavio users with a license for SAP Process
Intelligence.
Use Cases
You can use a process-specific template as a starting point and customize it based on your needs.
Blank templates are empty. Use a blank template if no template is available for your business process.
Here, you can find all templates that are developed and maintained by us or by our partners.
Each entry lists the template name, followed by: source system | end-to-end process | line of business | value drivers | industry | availability | provider.
• Acquire to Onboard (SAP S/4HANA): SAP S/4HANA | Acquire to Decommission | Asset Management | Reduce Asset Maintenance Cost | All | Available | SAP Signavio
• Acquire to Onboard (SAP ECC): SAP ECC | Acquire to Decommission | Asset Management | Reduce Asset Maintenance Cost | All | Available | SAP Signavio
• Aligning Demand, Supply, and Financial Plans (SAP Integrated Business Planning for Supply Chain): SAP IBP | Plan to Fulfill | Supply Chain | Reduce Days in Inventory | All | Available | SAP Signavio
• Attract to Acquire Talent (SAP SuccessFactors): SAP SuccessFactors | Recruit to Retire | Human Resources | Reduce HR Manual Transaction Effort; Reduce Time to Fill | All | Available | SAP Signavio
• Incident to Resolution (ServiceNow): ServiceNow | Lead to Cash | Sales | Reduce Service and Support Cost | All | Available | SAP Signavio
• Inspect to Quality (SAP S/4HANA): SAP S/4HANA | Plan to Fulfill | Manufacturing | Reduce Waste Generation Cost; Reduce Total Manufacturing Cost; Improve On-Time Delivery Performance | All | Available | SAP Signavio
• Inspect to Quality (SAP ECC): SAP ECC | Plan to Fulfill | Manufacturing | Reduce Waste Generation Cost; Reduce Total Manufacturing Cost; Improve On-Time Delivery Performance | All | Available | SAP Signavio
• Invoice Excellence for Invoice to Pay Supported by OpenText in SAP S/4HANA: SAP S/4HANA | Finance | Finance | Improve Days Payable Outstanding; Reduce Finance Cost; Improve Accounts Payable FTE Productivity | All | Available | SAP Signavio
• Invoice Excellence for Invoice to Pay Supported by OpenText in SAP ECC: SAP ECC | Finance | Finance | Improve Days Payable Outstanding; Reduce Finance Cost; Improve Accounts Payable FTE Productivity | All | Available | SAP Signavio
• Invoice to Cash (SAP ECC): SAP ECC | Finance | Finance | Reduce Days Sales Outstanding; Reduce Compliance and Risk Management Cost | All | Available | SAP Signavio
• Invoice to Cash (SAP S/4HANA): SAP S/4HANA | Finance | Finance | Reduce Days Sales Outstanding; Reduce Compliance and Risk Management Cost | All | Available | SAP Signavio
• Invoice to Cash (SAP S/4HANA Cloud Public Edition): SAP S/4HANA Cloud Public Edition | Finance | Finance | Reduce Days Sales Outstanding; Reduce Compliance and Risk Management Cost | All | Available | SAP Signavio
• Invoice to Pay (SAP ECC): SAP ECC | Finance | Finance | Improve Days Payable Outstanding; Reduce Compliance and Risk Management Cost | All | Available | SAP Signavio
• Invoice to Pay (SAP S/4HANA): SAP S/4HANA | Finance | Finance | Improve Days Payable Outstanding; Reduce Compliance and Risk Management Cost | All | Available | SAP Signavio
• Issue to Resolution (Jira Service Management for Cloud): Jira Service Management for Cloud | Issue to Resolution | Service | Reduce Customer Churn; Reduce Service and Support Cost | All | Available | SAP Signavio
• Issue to Resolution (Jira Service Management for Data Center and Server): Jira Service Management for Data Center and Server | Issue to Resolution | Service | Reduce Customer Churn; Reduce Service and Support Cost | All | Available | SAP Signavio
• Lead to Close (Salesforce Sales Cloud) by PwC Switzerland: Salesforce Sales Cloud | Lead to Close | Sales | Reduce Sales Cost | All | Available | Partner offering by PwC Switzerland
• Lead to Opportunity (Salesforce Sales Cloud): Salesforce Sales Cloud | Lead to Cash | Sales | Reduce Sales Cost | All | Available only for existing customers | SAP Signavio
• Lead to Opportunity (SAP Sales Cloud): SAP Sales Cloud | Lead to Cash | Sales | Reduce Sales Cost | All | Available | SAP Signavio
• Lead to Quote (Salesforce Sales Cloud): Salesforce Sales Cloud | Lead to Cash | Sales | Reduce Sales Cost | All | Available only for existing customers | SAP Signavio
• Make to Stock (SAP ECC): SAP ECC | Plan to Fulfill | Manufacturing | Reduce Days Sales Outstanding; Reduce Compliance and Risk Management Cost | Process industry | Available | SAP Signavio
• Make to Stock (SAP S/4HANA): SAP S/4HANA | Plan to Fulfill | Manufacturing | Reduce Days Sales Outstanding; Reduce Compliance and Risk Management Cost | Process industry | Available | SAP Signavio
• Manage Personal Employee Information (SAP SuccessFactors): SAP SuccessFactors | Recruit to Retire | Human Resources | Reduce HR Manual Transaction Effort | All | Available | SAP Signavio
• Manage Transportation Execution (SAP Transportation Management): SAP Transportation Management | Plan to Fulfill | Supply Chain | Reduce Total Logistics Cost; Reduce Transportation Spend | All | Available | SAP Signavio
• Manage Warehouse and Inventory, Outbound Processing (SAP EWM): SAP EWM | Plan to Fulfill | Supply Chain | Reduce Total Logistics Cost | All | Available | SAP Signavio
• Meter to Cash (SAP for Utilities (IS-U)): SAP for Utilities (IS-U) | Lead to Cash | Finance | Reduce Customer Churn; Reduce Sales Cost | Utilities | Available | SAP Signavio
• Meter to Cash (SAP S/4HANA Utilities): SAP S/4HANA Utilities | Lead to Cash | Finance | Reduce Customer Churn; Reduce Sales Cost | Utilities | Available | SAP Signavio
• Operate Manufacturing (SAP Digital Manufacturing): SAP Digital Manufacturing | Plan to Fulfill | Manufacturing | Reduce Total Manufacturing Cost; Reduce Manufacturing Cycle Time | All | Available | SAP Signavio
• Operate to Maintain (SAP ECC): SAP ECC | Acquire to Decommission | Asset Management | Reduce Asset Maintenance Cost | All | Available | SAP Signavio
• Operate to Maintain (SAP S/4HANA): SAP S/4HANA | Acquire to Decommission | Asset Management | Reduce Asset Maintenance Cost | All | Available | SAP Signavio
• Optimizing Payments for Premium Collections (SAP Collections and Disbursements for Insurance) by BearingPoint: SAP S/4HANA, SAP ECC | Invoice to Cash | Finance | Increase Adherence to Standardized SAP Business Processes; Reduce Days Sales Outstanding | Insurance | Available | Partner offering by BearingPoint
• Order to Cash (SAP ECC): SAP ECC | Lead to Cash | Sales | Reduce Sales Cost; Reduce Customer Churn; Reduce Days Sales Outstanding; Improve Customer Satisfaction | All | Available | SAP Signavio
• Order to Cash (SAP S/4HANA): SAP S/4HANA | Lead to Cash | Sales | Reduce Sales Cost; Reduce Customer Churn; Reduce Days Sales Outstanding; Improve Customer Satisfaction | All | Available | SAP Signavio
• Procure to Pay (SAP Ariba): SAP Ariba | Source to Pay | Sourcing & Procurement | Improve User Compliance; Increase Adherence to Standardized SAP Business Processes; Reduce Compliance and Risk Management Cost | All | Available | SAP Signavio
• Procure to Pay (SAP Ariba and SAP S/4HANA or SAP Ariba and SAP ECC) – Cross-System Accelerator: SAP Ariba and SAP S/4HANA | Source to Pay | Sourcing & Procurement | Improve User Compliance; Increase Adherence to Standardized SAP Business Processes; Reduce Compliance and Risk Management Cost | All | Available | SAP Signavio
• Procure to Pay (Oracle JD Edwards) by Globant: Oracle JD Edwards | Source to Pay | Sourcing & Procurement | Reduce Compliance and Risk Management Cost; Improve On-Time Delivery Performance; Increase Adherence to Standardized SAP Business Processes | All | Available | Partner offering by Globant
• Procure to Pay (SAP S/4HANA): SAP S/4HANA | Source to Pay | Sourcing & Procurement | Reduce Compliance and Risk Management Cost; Reduce Finance Cost; Improve On-Time Delivery Performance; Increase Adherence to Standardized SAP Business Processes | All | Available | SAP Signavio
• Procure to Pay (SAP ECC): SAP ECC | Source to Pay | Sourcing & Procurement | Reduce Compliance and Risk Management Cost; Reduce Finance Cost; Improve On-Time Delivery Performance; Increase Adherence to Standardized SAP Business Processes | All | Available | SAP Signavio
• Project to Cash (SAP S/4HANA Cloud Public Edition): SAP S/4HANA Cloud Public Edition | Lead to Cash | Service | Reduce Days Sales Outstanding | Professional Services | Available | SAP Signavio
• Reduce Duplicate Invoices for Invoice to Pay (SAP S/4HANA): SAP S/4HANA | Finance | Finance | Reduce Finance Cost; Reduce Compliance and Risk Management Cost | All | Available | SAP Signavio
• Reduce Duplicate Invoices for Invoice to Pay (SAP ECC): SAP ECC | Finance | Finance | Reduce Finance Cost; Reduce Compliance and Risk Management Cost | All | Available | SAP Signavio
• Request to Service (SAP Service Cloud): SAP Service Cloud | Lead to Cash | Service | Reduce Customer Churn; Reduce Finance Cost | All | Available | SAP Signavio
• Source Products and Services (SAP Ariba): SAP Ariba | Source to Pay | Sourcing & Procurement | Reduce Compliance and Risk Management Cost; Reduce Overall Supply Chain Planning Cost; Reduce Sourcing Cycle Time; Improve Procurement FTE Productivity | Procurement | Available | SAP Signavio
• Vendor Invoice Management (OpenText in SAP ECC or SAP S/4HANA): OpenText in SAP S/4HANA or SAP ECC | Finance | Finance | Improve Days Payable Outstanding; Reduce Finance Cost; Improve Accounts Payable FTE Productivity | All | Available | SAP Signavio
For a detailed description of the process-specific templates, see SAP Signavio Value Accelerators for SAP
Signavio Process Intelligence.
You can access the template documentation when the prerequisites listed above are met:
• Directly open the documentation by clicking the following link, based on the region in which your
workspace is hosted:
• Australia (AU): Template documentation
• Canada (CA): Template documentation
• Europe (EU): Template documentation
• Japan (JP): Template documentation
• Singapore (SGP): Template documentation
• South Korea (KR): Template documentation
• USA (US): Template documentation
Example
• When you're creating a process data pipeline and about to select a template, you can use the More info link on the template tile to open the template documentation.
Read more about process data pipelines in section Process Data Pipelines [page 229].
Overview of ETL requirements considering feature sets, connectors, transformation templates, and source
systems.
You can manage data pipelines only if the following requirements are met:
All connectors for source systems and all data transformation templates are available by default for users with the feature set ETL - Superuser Role. For more information about managing feature sets, see section Activate feature sets.
Source systems
Find the supported data types, and the mandatory columns for case attribute and event collectors.
The execution of data transformation scripts uses AWS Athena, which is based on open-source Trino and
Presto projects.
For a list of supported data types, see the Data Definition Language (DDL) column in the table Data types in Amazon Athena - Amazon Athena.
For a general reference guide covering SQL query operators and functions, refer to the Trino documentation.
Each business object has one case attribute and multiple event collectors where you specify the
transformation rules in the form of SQL queries. These queries require certain mandatory columns to generate
an event log.
For example:
• YYYY-MM-DD HH:mm:ss: '2024-03-25 11:12:13'
• YYYY-MM-DD HH:mm:ss.SSS: '2024-03-25 11:12:13.456'
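To try both formats, you can run a minimal, table-free query in the script editor's preview. This is only an illustration of the two literal forms; the column aliases are made up:

-- Both accepted timestamp formats as Trino/Athena literals.
SELECT
  TIMESTAMP '2024-03-25 11:12:13'     AS ts_seconds, -- YYYY-MM-DD HH:mm:ss
  TIMESTAMP '2024-03-25 11:12:13.456' AS ts_millis   -- YYYY-MM-DD HH:mm:ss.SSS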
Related Information
High-level description on how to set up and run a data pipeline in SAP Signavio Process Intelligence.
1. Ensure that the requirements are met, read more in section Requirements for data pipelines [page 241].
2. Create a connection. If you want to select a connection that exists, you can skip this step.
Read more in section Manage connections [page 127].
3. Create a process data pipeline and set it up.
Read more in section Create a process data pipeline [page 245].
You can use a template to accelerate process data pipeline creation. Read more in section Use data transformation templates [page 230].
When setting up the process data pipeline, you perform the following tasks:
• Define what data to extract in a source data, read more in the sections Manage process data pipelines
[page 168] and Link source data [page 247].
You can create, edit, and delete a process data pipeline. A process data pipeline can have different source data systems. For transformed data to be loaded into a process for further investigations, you must link a process within a process data pipeline.
Learn how to get to the Process Data Pipelines feature in SAP Signavio Process Intelligence and view the
available options.
To view all the process data pipelines, open (Process Data Management)
Each card shows the status of the process data pipeline run.
For example, Never executed, Error, Canceled, Completed.
Note
Your assigned feature set determines which options appear in the action menu. For information on feature sets, see Access Requirements for Process Data Management [page 52].
Note
To create a process data pipeline, you need the feature set ETL - Superuser Role. Your workspace
administrator can enable it for you.
Note
• Process-based template
The process data pipeline is pre-configured. For example, a process-based template is the template for the
Incident-to-Resolution in ServiceNow. You can customize the template.
• Blank template
The template is empty. You need to set up the business process, the extraction, and the transformation
rules on your own.
Read more on templates in section Use data transformation templates [page 230].
Related Information
How to link source data to a process data pipeline. The source data defines what data to extract.
In source data, you define which data is extracted from a source system.
To run transformations for complex processes, you can extract data from different source systems. To do this,
you can link multiple source data systems to your data pipeline.
The table names must be unique across all source data systems that are linked to one process data
pipeline.
In case of redundant table names, you need to create aliases before you can link source data to a process
data pipeline. Read more in section Create a table name alias [page 173].
Note
A complete data pipeline executes for each source data that is linked to a process data pipeline and a
separate event log is generated for each pipeline execution.
You can remove a linked source data in three places, on the Overview, Connections, and Source data tabs.
Related Information
How to link a process to a data pipeline to specify where to load the transformed data. How to create, edit, and delete data models.
By linking the data pipeline to a process, you define where to load transformed data.
The user linking a process needs the Manager role assigned for the process. Read more in section Roles
and user management [page 31].
Remove a linked process
1. On the Overview tab of your process data pipeline, click the linked process, then Remove.
The confirmation dialog opens.
2. To confirm the removal, select Remove Link in the dialog.
Related Information
Context
Generally, events within a case are sorted by timestamp. For multiple events that occur at an identical
timestamp, you can configure the order of events. If no specific order is configured, the conflicting events
are sorted in alphabetical order.
During the next pipeline run, the events within a case are sorted based on the defined configuration.
Procedure
5. To add more event names, select (Add Event Name) and enter the text.
6. To change the order of events, use options , beside the field.
7. To delete the event name, select its option.
8. Confirm with Save.
Your changes are applied in the next pipeline run.
Gain insights into the health of modeling entities by running the validation checks at the pipeline level.
The pipeline health option actively identifies potential issues in the data transformation logic of a process data pipeline. It runs validation checks while you define the transformation rules, helping data experts and analysts ensure the success of subsequent process data pipeline runs. This reduces the time needed to generate consistent data for process analysis.
You can find the Pipeline Health Check option in the process data pipeline's header menu.
A series of validation checks are performed based on the query definitions of data views, event collectors, and
case attributes.
If there are no errors or warnings, the pipeline health check option is displayed in green.
1. To view the query issues in event collectors, case attributes, and data views, select the Pipeline Health
Check option in the process data pipeline's header menu.
2. From the pipeline health check popup, you can select the error, warning, or information to quickly navigate
to the query and resolve it.
For users with the superuser role: Share data models for data pipelines between SAP Signavio workspaces
using the export and import functions.
With the export and import functions, you can use process data pipelines in different workspaces.
If the import fails, check the import data or re-export the process data pipeline and run the import again.
Related Information
How to change the settings of a data model that is used to run a data pipeline.
Note
Deleting a process data pipeline breaks the scheduled pipeline run, preventing the related source data from extracting the most recent data. As a result, the linked process can't access the latest data for further analysis and mining.
Related Information
How to map extracted source system data to business objects in SAP Signavio Process Intelligence using SQL
transformation scripts, which collect case-level attributes and events.
You can model your business process and define transformation rules in a process data pipeline. For that, you
need to do the following:
• Model your business process with activities and objects on a canvas in your process data pipeline.
• Define transformation rules in business objects for extracted data.
Each business object consists of one case attribute and multiple event collectors where you specify the
transformation rules in separate scripts.
The result of the transformation is event logs. The logs are loaded to the process, which is linked in the process
data pipeline, providing the data for investigations.
The transformation rules must contain certain mandatory columns to generate an event log. For a list of supported and unsupported data types, as well as mandatory columns, see Data Type Requirements for Process Data Pipelines [page 242].
If the data hasn't changed since it was last cached, the data pipeline run utilizes the cached information. To
view if your data pipeline run utilized a cached result, select the log entry from your process data pipeline.
Details are displayed in the Message column on the respective tab in the Transformation & Load or Full ETL
screen.
Note
It's important to have a good understanding of SQL to add or customize transformation rules or scripts.
This knowledge helps you to fine-tune the transformation rules and tailor them to your specific business
requirement.
You can view, create, edit, delete, enable, or disable business objects from the following places in a process
data pipeline:
Concepts
• Business process: The process that you want to analyze in your organization. When you set up a process
data management pipeline, you model the business process in the process data pipeline.
Enabling or Disabling Business Objects, Event Collectors, and Case Attributes [page 266]
Learn how to model the business process and its objects on a canvas in a process data pipeline.
Related Information
A business object is an artifact in the business process. Each business object consists of a case attribute and
event collectors. The transformation rules are defined as SQL queries written within event collectors and case
attributes.
You can view, create, edit, rename, and delete a business object from the following places in a process data
pipeline:
• Business Objects tab: You can directly open the Business Objects tab, or from the Overview tab, choose the
Process Data Model card. This opens the Business Objects tab.
Your first business object can only be created in the Business Objects tab.
• Query editor: When you select an existing event collector or case attribute to edit, you get to the query
editor interface.
Tip
If you already have an existing business object, select its event collector or case attribute. The query
editor opens with Business Objects and Data Views tabs in a side panel along with all the options.
Context
You can rename and delete a business object from the following places in a process data pipeline:
• Business Objects tab: You can directly open the Business Objects tab, or from the Overview tab, select
Process data model card > Edit. The Business Objects tab opens.
Your first business object can only be created in the Business Objects tab.
• Query editor: When you select an existing event collector or case attribute to edit, you get to the query
editor interface.
Note
To rename or delete a business object from the query editor, select the event collector or case attribute.
The query editor opens with Business Objects and Data Views tabs in a side panel along with all the
options.
Procedure
1. Open your process data pipeline and select the Business Objects tab.
2. Select for a business object, then Rename.
3. Enter the text and select Rename.
Procedure
1. Open your process data pipeline and select the Business Objects tab.
2. Select for a business object, then Delete.
3. Confirm the message by ticking the check box, and select Delete.
Learn how to create an event collector and add transformation rules to it.
The scripts for events are called event collectors. Events are the activities for a specific business object. For
example, the creation and qualification of a lead.
Each business object contains exactly one case attribute and multiple event collectors in which the transformation rules can be defined. The transformation rules are written as SQL queries. These queries require certain mandatory columns. Read more in Data Type Requirements for Process Data Pipelines [page 242].
During a pipeline run, the defined rules are applied to transform the data, which generates an event log
(process data). The event log is then loaded into a process within a process data pipeline.
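For orientation, a minimal event collector query could look like the following sketch. The source table and columns (service_requests, request_id, created_at) are hypothetical, and the output column names are a common convention; check Data Type Requirements for Process Data Pipelines [page 242] for the exact mandatory columns:

-- One 'Request Created' event per service request (hypothetical schema).
SELECT
  req.request_id                                  AS case_id,    -- case identifier
  'Request Created'                               AS event_name, -- fixed name for this event
  date_parse(req.created_at, '%Y-%m-%d %H:%i:%s') AS end_time    -- event timestamp
FROM service_requests AS req
WHERE req.created_at IS NOT NULL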
Note
An event collector can be enabled or disabled. To apply the defined transformation rules during the pipeline
run, you need to enable the event collector. For more information, see the Enabling or Disabling Business
Objects, Event Collectors, and Case Attributes section.
You can create, edit, rename, and delete an event collector from two places in a process data pipeline:
1. Open your process data pipeline and select the Business Objects tab.
2. Choose a business object and expand to view its case attributes and event collectors.
3. Select Create for the event collector. A popup opens to create an event collector.
Tip
You can also create an event collector in the query editor. Select any entity of a business object, and the
query editor opens with Business Objects and Data Views tabs in a side panel along with all the options.
1. On the Business Objects tab, select the event collector to add transformation rules. The query editor
opens.
2. Write your query in the script editor.
Tip
To add a new part to the existing query, move the cursor to the right place and add the extension.
Under Available Columns, search for available data views, source data tables and their columns in the
tree structure, or enter text in the search field. To add a column, source data table, or data view choose
the data element and select its . The data element is then added to the script at the cursor position.
3. Issues within queries are categorized and represented as errors, warnings, and informational messages.
To view and resolve the list of errors, go to the editor's header menu and select Issues. For information on
issue types and reasons for the issue, see Viewing Pipeline Health Status section.
Tip
In the query editor, you can identify and view the errors, warnings, and informational messages using
the icons in the Data Views, Business Objects, and Available Columns tabs in the side panel.
4. Confirm with Save. It saves the changes made to that specific event collector.
Note
• You can save an event collector with invalid queries. However, it's important to ensure that all event collectors have valid and functioning queries before running a pipeline. If a pipeline run includes an enabled event collector with invalid queries, the pipeline execution fails and no transformations are applied.
For information on how to monitor the pipeline run, see the Monitoring Data Pipelines section.
• When more than one user modifies the same event collector at once and tries to save it, the last user who edited that event collector is prompted to overwrite the changes. In the popup window, using the More option, the incoming changes can be accepted or rejected.
Related Information
Context
You can delete an event from the following places in a process data pipeline:
Context
If the data hasn't changed since it was last cached, the data pipeline run utilizes the cached information. To
view if your data pipeline run utilized a cached result, select the log entry from your process data pipeline.
Procedure
1. On the Business Objects tab, expand a business object and select the event collector.
The query editor opens.
2. Apply your changes and confirm with Save.
Context
Deleting an event collector can't be undone. Deleting an event collector that is included in a process data
pipeline can fail the execution of pipeline run. Therefore, check for any dependencies before deleting.
Procedure
1. On the Business Objects tab, select for the event collector you want to delete, then Delete.
2. Confirm by ticking the checkbox, and select Delete.
Context
The case attribute is a characteristic of an event on case-level. For example, the name of the person that
created a lead is an attribute on event-level. The ID of that person is an attribute on case-level.
Each business object contains exactly one case attribute and multiple event collectors in which the transformation rules can be defined. The transformation rules are written as SQL queries. These queries require certain mandatory columns. Read more in Data Type Requirements for Process Data Pipelines [page 242].
During a pipeline run, the defined rules are applied to transform the data, which generates event logs. These
event logs are then loaded into a process within a process data pipeline.
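As an illustration, a case attribute query could look like the following sketch, returning exactly one row per case. All table and column names are hypothetical; see Data Type Requirements for Process Data Pipelines [page 242] for the mandatory columns:

-- One row per case, carrying case-level attributes (hypothetical schema).
SELECT
  req.request_id AS case_id,    -- unique case identifier
  req.created_by AS creator_id, -- case-level attribute
  req.priority   AS priority    -- case-level attribute
FROM service_requests AS req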
If the data hasn't changed since it was last cached, the data pipeline run utilizes the cached information. To
view if your data pipeline run utilized a cached result, select the log entry from your process data pipeline.
Details are displayed in the Message column on the respective tab in the Transformation & Load or Full ETL
screen.
Note
Procedure
1. Open your process data pipeline and select the Business Objects tab.
2. Select Create to add a business object.
Tip
You can also create a business object using the query editor. Select any entity of a business object,
and the query editor opens with Business Objects and Data Views tabs in a side panel along with all the
options.
3. Enter a name and select Create. The business object and case attribute are created.
4. Select the case attribute to add transformation rules. The query editor opens.
5. Write your query in the editor.
To add a new part to the existing script, move the cursor to the right place and add the extension.
Under Available columns, search for available data views, source data tables and their columns using
the tree structure, or enter text in the search field. To add a column or data view or source data table,
choose the entity and select . The entity is then added to the script at the cursor position.
6. Confirm with Save. It saves the changes made to that specific case attribute.
Your changes are applied in the next pipeline run.
Issues within queries are categorized and represented as errors, warnings, and informational messages.
To view and resolve the list of errors, go to the editor's header menu and select Issues. For information on
issue types and reasons for the issue, see Viewing Pipeline Health Status section.
Tip
In the query editor, you can identify and view the errors, warnings, and informational messages using
the icons in the Data Views, Business Objects, and Available Columns tabs in the side panel.
Related Information
Procedure
1. On the Business Objects tab, expand a business object and select the case attribute.
The query editor opens.
2. Apply your changes and confirm with Save.
Context
Note
Deleting a case attribute can't be undone. Deleting a case attribute that's included in a process data
pipeline can fail the execution of the pipeline run. Therefore, check for any dependencies before deleting.
To delete a case attribute, you need to delete the associated business object.
Related Information
Learn how to model the business process and its objects on a canvas in a process data pipeline.
The enable and disable options help you design the business process model. You can enable or disable
business objects, case attributes, and event collectors.
You can enable or disable a business object, event collector, and case attribute from two places in a process
data pipeline:
• When you enable or disable a business object, its corresponding event collectors and case attributes are
automatically enabled or disabled. However, you have the option to disable specific event collectors and
case attributes in the active business object.
• Only the enabled business objects are part of the data transformation pipeline.
• For the pipeline to run, at least one business object with an event collector must be enabled.
• While the pipeline is running, any changes made to the process data pipeline are not reflected.
Example
You can also open the event collector or case attribute of a business object. The query editor opens
with Business Objects and Data Views tabs in a side panel along with all the options. Then, follow step 2.
Data views are virtual tables that can retrieve and join data from tables and other data views through queries
that you define. You can create data views in a process data pipeline and reuse them across business objects or
other data views within the same process data pipeline.
Data views are similar to SQL queries used in event collectors and case attributes of business objects.
During the initial pipeline run, the data view's query result is cached and retrieved in subsequent runs. This
speeds up the data transformation. Whenever the pipeline runs, it checks for new data in the underlying tables,
any changes to the table, or a data view definition that requires an update. In such cases, the query results will
be cached again.
You can break down complex queries into multiple chunks and save each chunk as a data view. This reduces
the complexity of a large single query, as a significant chunk of business logic is collected in the data view. For
instance, one data view prepares the header of a specific business object, which is then combined with data
from the item table in the actual business case.
• Reuse data views: In a data view query, add references to other data views and tables.
• Structure the query however you want.
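To make the header/item split concrete, here's a hedged sketch of a header data view; the table and column names are invented for illustration:

-- Data view 'order_header_std': standard order headers, reusable in
-- business objects and other data views (hypothetical schema).
SELECT
  hdr.order_id,
  hdr.order_type,
  hdr.created_at
FROM order_header AS hdr
WHERE hdr.order_type = 'Standard'

An event collector can then join this data view with the order item table instead of repeating the header logic in every query.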
Note
• The execution of data transformation scripts in data views uses AWS Athena, which is based on
open-source Trino and Presto projects. For a general reference guide covering SQL query operators and
functions, refer to the Trino documentation.
Related Information
You can create a data view with columns from source data tables and other existing data views available in a
process data pipeline. The data views created for a process data pipeline are visible only within that pipeline.
You can create, rename, edit, and delete a data view from the following places:
Tip
To add a new part to the existing query, move the cursor to the right place and add the extension.
Under Available Columns, search for available data views, source data tables and their columns in the
tree structure, or enter text in the search field. To add a column, source data table, or data view choose
the data element and select its . The data element is then added to the script at the cursor position.
7. Issues within queries are categorized and represented as errors, warnings, and informational messages.
To view and resolve the list of errors, go to the editor's header menu and select Issues. For information on
issue types and reasons for the issue, see Viewing Pipeline Health Status section.
Tip
In the query editor, you can identify and view the errors, warnings, and informational messages using
the icons in the Data Views, Business Objects, and Available Columns tabs in the side panel.
The validation checks are performed in the background for the syntactic and semantic correctness of the data.
Your changes are applied in the next process data pipeline run. You can then use the data views while creating
queries in business objects.
Note
• You can save a data view with an invalid query. However, it's important to ensure that all data views have valid and functioning queries before running a pipeline. If a pipeline run includes a data view with an invalid query, the pipeline execution fails and no transformations are applied.
For information on how to monitor the pipeline run, see the Monitoring Data Pipelines section.
• All the data views in your process data pipeline are accessible from the Data views tab and Available
Columns section in the SQL editor's sidebar.
• Data views that are empty can be saved but can't be used in other data views or business objects.
• The execution of data transformation scripts in data views uses AWS Athena, which is based on
open-source Trino and Presto projects. For a general reference guide covering SQL query operators and
functions, refer to the Trino documentation.
While creating a data view, make sure that the following rules are met:
Related Information
A nested data view is created when a data view query has references to other data views. You can only nest the
data views that are created within a process data pipeline.
For information about creating data views and rules, see Create a data view [page 269] section.
Example
The following example describes how to create a nested data view within a process data pipeline.
Let's consider that you have a process data pipeline with Recruit to Hire data.
In this pipeline, create a data view named job_requisition_uk. The query retrieves the job requisition IDs that require a 40-hour workweek in the United Kingdom. It also returns the requisition closed date and time.
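The original query isn't reproduced here; a sketch with assumed table and column names (job_requisition, work_week_hours, closed_date_time) could look like this:

-- Data view 'job_requisition_uk' (hypothetical schema).
SELECT
  req.job_req_id,
  req.closed_date_time
FROM job_requisition AS req
WHERE req.country = 'United Kingdom'
  AND req.work_week_hours = 40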
To check how the data is displayed for your defined query, use the Preview data option in the script editor.
Example
Query result:
In the same process data pipeline, create a data view named job_requisition_candidates_uk that references the first data view.
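A hedged sketch of the nested query, again with assumed table and column names (job_application, candidate_id, status):

-- Nested data view 'job_requisition_candidates_uk': references the
-- first data view and joins it with the job application table.
SELECT
  req.job_req_id,
  app.candidate_id,
  req.closed_date_time
FROM job_requisition_uk AS req -- reference to the first data view
JOIN job_application AS app
  ON app.job_req_id = req.job_req_id
WHERE app.status = 'Hired' -- assumed status value for hired candidates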
The query retrieves the job requisition IDs, hired candidate IDs, and other specified columns data in the
SELECT statement. It joins the data from the job_requisition_uk data view and the job application table, based
on the job requisition ID.
Example
Query result:
• All the nested data views in a process data pipeline are accessible from the Available Columns in the
SQL editor's side panel. You can also access them from the Available Columns in the Business Object
editor's side panel.
• When you modify a data view, make sure all data views that reference it are updated accordingly. Otherwise, the data transformation fails when you run the pipeline. For example, before deleting a column from a data view, you must first delete the references to that column. The same applies to changing a column or data type in the data view.
Related Information
Note
Query Editor
1. Select the data view from the Data Views tab.
The query editor opens.
2. From the Data Views tab in the side panel, choose the data view, select , then Delete.
3. Confirm by ticking the checkbox, and select Delete.
Note
• The execution of data transformation scripts in data views uses AWS Athena, which is based on
open-source Trino and Presto projects. For a general reference guide covering SQL query operators and
functions, refer to the Trino documentation.
• You can modify the data view name using the option > Rename in the header menu. Changes are
applied immediately.
• When more than one person modifies the same data view at once and tries to save it, the last person who edited that data view is prompted to overwrite the changes. In the popup window, using the More option, the incoming changes can be accepted or rejected. If you choose to reject, your changes are saved as a new data view.
The script editor provides features for text editing and quick navigation within the editor.
• Preview your SQL query result along with the query execution time.
• Find and replace script elements.
• Change font size of your script.
• View shortcuts for editing the script.
• Compare changes that you make in the script to the last saved version.
Related Information
Learn how to add, edit, and delete a script in the query editor.
Context
From the query editor, you can add and edit scripts in case attributes and event collectors. In addition, you can add SQL queries in data views.
Note
The execution of data transformation scripts uses AWS Athena, which is based on open-source Trino and
Presto projects. For a general reference guide covering SQL query operators and functions, refer to the
Trino documentation.
Actions
• Add a script in case attributes and event collectors: Select the case attribute or event collector from the Business Objects tab in the side panel. Then write or paste your script.
• Add a script for case attributes: Select the Case Attributes tab, and write or paste your script.
• Add a script for events: Select Add event collector, and write or paste your script.
• Add queries in the data view: Select the data view from the Data Views tab in the side panel. Add your query.
• Add a new part to the existing script: Select the correct place in the script and add the extension.
• Search for available data views: Under Available columns, you can search for available data views, source data tables, and their columns. Either search in the tree structure or enter a search string in the search field. To add an available column, choose the column and select its icon. The column is then added to the script at the cursor position. Similarly, you can also add source data tables to the script editor from the Available columns.
• Delete the script of an event collector: Either select the script and clear it using the Delete key on the keyboard, or delete the event collector. To delete the event collector, open its menu, then select Delete.
• Delete the script of a case attribute: Select the script and clear it using the Delete key on the keyboard. To delete the case attribute entity, you need to delete the business object.
Procedure
The script editor supports multi-cursor editing to quickly edit multiple rows of code and multiple occurrences
of an element in the code at once. Each cursor functions independently, adapting to its specific context. You
can efficiently change your event collectors, case attributes, and data views in a process data pipeline.
Using keyboard shortcuts, you can add additional cursors to the subsequent lines of code and also select
multiple occurrences of an element.
• To view keyboard shortcuts for multi-cursor editing, in the query editor header menu, select , then
Keyboard Shortcuts. A popup opens with a list of keyboard shortcuts.
Under the Selection category, you find the following:
• To remove multiple cursors, either click somewhere in the script editor area or select Esc on your keyboard.
Tip
Each cursor comes with its own clipboard that allows you to copy and paste common patterns.
1. Position your cursor at the end of the timesheet_record table name and press ⌘ + ⌥ + ↓ to add an
additional cursor in the next line of code.
2. Add your code and save your changes.
1. Select the code to bulk edit and press the keyboard shortcut Command+D. Then, add your code.
2. To edit specific text in rows of code as shown in the example, position the cursor at the start of the text and press ⌥ + Click (Mac) or Alt + Click (Windows).
3. Select and copy the code.
4. Position the cursor where you want to add the code and paste it as shown in the example.
3. If you want to replace a finding, select and enter the new term in the Replace with field.
4. Choose whether to consider capitalization by selecting the Consider capitalization option.
5. Select Replace or press Enter. To update all findings at once, select Replace All.
The script is updated accordingly.
Get familiar with the options in the script editor while previewing your query result.
The following options are available while previewing the query result:
• Preview: Displays the preview panel with the query result and its execution time. The query result shows any NULL and empty values retrieved.
• Row Limit: The preview panel's default setting for returned rows is 100. You can change this limit using the Row Limit field at the bottom of the panel. The predefined row limits are 100, 250, 500, and 999.
Generating a preview can take some time. While a script preview is loading, you can switch to another script.
A list of keyboard shortcuts is provided for you to quickly navigate while working with your script. Shortcuts are
categorized based on their function. Available categories are Actions, Selection, Navigation, and Application.
Procedure
A popup window appears with a list of shortcuts grouped based on their function.
2. Select a category tab to view a list of available shortcuts.
3. Select Close to view the editor.
How to define the transformation rules for case attributes and events with SQL scripts. You can also add your
own event collectors.
Make sure that the scripts contain all the mandatory script items:
Mandatory script items are indicated by a status indicator placed above the script:
The script editor provides a linter that parses the script to detect errors related to event collectors and case
attributes. Rows with errors are highlighted and indicated by a red dot. Each error in a row is indicated by a
wavy underscore. If available, additional information is displayed when you hover the error.
Note
Resolve all indicated errors. If one of the mandatory script items is missing, the pipeline breaks.
Example
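A minimal event collector sketch that contains all mandatory script items, c_caseid, c_time, and c_eventname; the source table and column names are illustrative, not part of the product:
-- Hypothetical event collector: one row per event
SELECT
    order_id AS c_caseid
    , created_at AS c_time
    , 'Create Order' AS c_eventname
FROM orders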
Related Information
Compare changes that you make in the script to the last saved version or to a previously saved version.
Choosing the Compare Changes button opens the code comparison view, where you can compare the working
version to the latest saved version or to any previously saved version. Choose Latest Saved Version to see
the list of previously saved versions. You can then choose the previous version that you want to compare the
current working version with.
If the working version looks as expected, click Cancel to get back to the SQL editor, where you can save your
changes. If you are not happy with your changes, you can revert some or all of them to the latest saved version
or to the previously saved version that you have opened. Then choose Apply Changes.
If another user has saved changes to the script since you opened it, you receive a warning message in the
SQL editor that the script has been updated by another user. In the message, click More to proceed. You can
either choose Get new version, which discards any unsaved changes that you have made to the script or Review
changes, which opens the code comparison view. In the code comparison view, you can compare your working
version to the new latest saved version. In the 3-way comparison view, you see your working version of the
script side-by-side with your starting version and the latest saved version.
Learn how to run your process data pipeline and schedule pipeline runs. You can run the pipeline manually or schedule an automatic execution at any time. You can also execute the transformation and load steps on their own at any time.
After setting up your process data pipeline, you can do the following:
Note
If the data hasn't changed since it was last cached, the data pipeline run utilizes the cached information. To
view if your data pipeline run utilized a cached result, select the log entry from your process data pipeline.
Details are displayed in the Message column on the respective tab in the Transformation & Load or Full ETL
screen.
How to start the execution of the data pipeline. You can run it manually or schedule an automatic execution at any time.
When the Process data pipeline is set up, you can run the complete pipeline manually at any time.
• To run the pipeline, open your Process data pipeline and select Run ETL. If the Run ETL button is
deactivated, check on the Overview tab if there is a process linked to your data pipeline.
The data is extracted, transformed, and uploaded to the process that is linked in the Process data pipeline.
• To navigate to the process with the uploaded data, open your Process data pipeline and on the Overview
tab click the linked process card.
• The process opens in a new tab. You can view the uploaded data in the process settings; read more in the Process Settings section.
You can schedule the automatic execution of the data pipeline. The pipeline then extracts the data that has
changed since the last extraction. You can enable and configure a schedule on source data and Process data
pipeline level. Read more in the Scheduled Pipeline Running section.
Related Information
How to activate the scheduler for a data pipeline. The pipeline then automatically pulls your data from the
source system into SAP Signavio Process Intelligence.
You can schedule the automatic execution of the data pipeline. The pipeline then extracts the data that has
changed since the last extraction. You can enable and configure the scheduler in a source data and in a process
data pipeline.
Note
• As long as an extraction is running, no further extraction can be started, even though it's scheduled.
For example, if an extraction takes 2 hours and it's scheduled to run every hour, one extraction is
always skipped.
• Users with the analyst role can't edit or configure schedules in process data pipelines and source data.
Time Zone
For any date and time set with the scheduler, the Coordinated Universal Time (UTC) standard is used.
You can enable the scheduler in two places, in source data and in a process data pipeline.
Note
Enabling the scheduler in source data only schedules the extraction step of data pipelines.
Note
Enabling the scheduler in a process data pipeline schedules the full ETL or T&L (Transform & Load)
pipelines. Read more in section Pipeline scheduling overview [page 288].
1. Open your process data pipeline and select the Schedule tab.
2. Select the schedule toggle or Activate on the upper right corner of the page.
The schedule is activated for the process data pipeline.
3. Confirm with Save.
The execution of the full ETL or T&L data pipeline is scheduled by default to run daily at midnight.
You can customize the scheduler in two places, in source data and in a process data pipeline.
1. Open your process data pipeline and select the Schedule tab.
2. Select the schedule toggle or Activate on the upper right corner of the page.
3. Customize the scheduler. The following options are available, including the days of the week on which the pipeline runs:
• Saturday
• Sunday
• Monday
• Tuesday
• Wednesday
• Thursday
• Friday
This section provides an overview of the data pipeline. You can specify which source data are included and
executed in the scheduled pipeline.
Note
If no source data is included in the schedule, only Transformation and Load (T&L) will execute once the
schedule is triggered.
1. Open your process data pipeline and select the Schedule tab.
2. Select the schedule toggle or Activate on the upper right corner of the page.
3. Customize the scheduler.
4. To include or exclude specific source data from the schedule, select the toggle on the related source data.
5. Confirm with Save.
You can disable the scheduler in two places, in source data and in a process data pipeline.
1. Open the source data and select Schedule in the header.
2. Select the schedule toggle on the upper right corner of the page.
The scheduler is automatically saved and disabled.
1. Open the process data pipeline and select the Schedule tab.
2. Select the scheduler toggle on the upper right corner of the page.
The scheduler is automatically saved and disabled.
Related Information
Learn how to start the execution of the data pipeline. You can run it manually or schedule an automatic execution at any time.
You can transform the data extracted from your source system and then load that data into the linked process
at any time.
• To run the data transformation and load the transformed data to the linked process, open your Process
data pipeline and select Run T&L. Read more in the Scheduled Pipeline Running section.
Note
The data is transformed and loaded to the process that is linked in the Process data pipeline.
Related Information
How to view the execution history of a data pipeline. There is a history for each process data pipeline.
You can cancel the extraction and the transformation independently for a running pipeline. You can do this in
two places, in source data and in a Process data pipeline.
1. While the pipeline is running, view the Pipeline logs section on the Overview tab.
The logs are displayed.
2. To cancel the extraction for the running pipeline, select Cancel.
A confirmation dialog opens.
3. Select Confirm.
The execution of the running pipeline is canceled immediately. You can now execute other pipelines without
waiting for the previous pipeline cancellation to complete. Already extracted data isn't deleted.
1. While the pipeline is running, open the Overview tab and select Pipeline logs.
The logs are displayed at the bottom of the page.
2. To cancel the extraction, transformation, and load for the running pipeline, select Cancel.
A confirmation dialog opens.
3. Select Confirm.
You can monitor the pipeline run from the Overview tab in a process data pipeline. Upon successful pipeline
run, you can download the event log as a file in XES format.
Accessing the Business Objects and Data Views from Pipeline Logs [page 294]
From the pipeline logs in your process data pipeline, you can access data views, business objects, case
attributes, and event collectors.
Learn how to access the data pipeline logs, view log details, and rerun the extraction from the logs. The section
also describes how to sort and filter log entries, show or hide IDs, and copy error messages.
The execution of each data extraction and each process data pipeline run is logged for you to view the history.
You can view the last 100 extractions.
You can view the status of data upload calls in the logs section of an Ingestion API source data. Read more in
the section Use the ingestion status API.
All: Shows all the pipeline executions and data extraction runs of any source data.
Example
Log Details
• Extractions: View the extracted tables and how long their extraction took. If no extraction was run, this tab isn't displayed.
• Data view transformation: View the used data views and how long their processing took. If no data views are used, this tab isn't displayed.
• Case attribute transformations: View transformed case attributes per business object.
• Event collector transformations: View transformed event collectors per business object.
• Event log load: View the number of events and cases and how long event log generation took. If errors happen, corresponding messages are listed here.
If the data hasn't changed since it was last cached, the data pipeline run utilizes the cached information. To
view if your data pipeline run utilized a cached result, select the log entry from your process data pipeline.
Details are displayed in the Message column on the respective tab in the Transformation & Load or Full ETL
screen.
If you want to rerun all failed extractions at once, select in the header of the Status column.
• To sort and filter the log, use the functions in the table header.
• To show or hide the IDs in the log, select Show IDs or Hide IDs.
• To copy a condition or error message to the clipboard, choose a condition or message and select .
Related Information
Learn how to view the execution history of a data pipeline. There is a history for each process data pipeline.
Note
You need the feature set ETL - Superuser Role to download the pipeline event log. Your workspace
administrator can enable it for you.
4. Click .
The file is saved to your browser's download folder.
Related Information
From the pipeline logs in your process data pipeline, you can access data views, business objects, case
attributes, and event collectors.
Procedure
Find solutions to errors that can occur during the transformation and load steps of a data pipeline.
If a problem continues, please contact our SAP Signavio service experts from the SAP for Me portal.
Solution:
All columns must have unique names or aliases. To fix the error, assign unique names or aliases to all columns
exposed by the case collector query.
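For example, a join that exposes the same column name from two tables fails until each column receives a unique alias. This is a hypothetical sketch; table and column names are illustrative:
-- Both tables expose a column named id, so each one needs a unique alias
SELECT
    o.id AS "Order ID"
    , i.id AS "Item ID"
FROM orders o
JOIN order_items i
    ON 1=1
    AND i.order_id = o.id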
Column names can be interpreted as time values or date-time values with time zone information. If queries in a
case attribute script contain such column names, the pipeline fails with a message like this:
If queries in event collectors scripts contain such column names, the pipeline fails with a message like this:
Solution:
To fix the error, change your query to avoid creating any column with a name that can be interpreted as time zone information.
If you are using an Athena function, check the Trino documentation to see which functions don't include time zone information in their output.
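As a minimal sketch, assuming a hypothetical source column created_at of type timestamp with time zone, a cast to a plain timestamp removes the time zone information from the output:
SELECT
    order_id AS c_caseid
    -- the cast drops the time zone part from the column
    , cast(created_at AS timestamp) AS c_time
    , 'Create Order' AS c_eventname
FROM orders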
When column or alias names contain characters that aren't supported, the pipeline fails. For example, a column
with the name "SalesDoc:Number" results in a failing pipeline with a message like this:
Some characters are not allowed on column names. Please avoid [':', '&', '<'] on
column names.
Whenever possible, stick to alphanumeric based column names (uppercase letters,
lowercase letters, whitespaces and numbers).
Column '"sales: report"' needs to be renamed to avoid the use of problematic
characters
Solution:
Column names and aliases can only contain alpha-numeric and supported special characters. To fix these
errors, check the column names and aliases for columns from the queries in the failing script. Read more on
supported characters in section Supported characters in names and aliases [page 173].
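For example, a column like "SalesDoc:Number" can be given an alias that uses only supported characters; the table name below is illustrative:
SELECT
    -- the alias avoids the unsupported ':' character
    "SalesDoc:Number" AS "SalesDoc Number"
FROM sales_documents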
Running the pipeline or previewing the result of a transformation script fails with any of the following error
messages:
This error occurs when the AWS Athena memory limit is reached. For example, this can happen when
transformation scripts with memory expensive operations are run on large data sets.
Solution:
To solve this error, reorganize and optimize any resource-heavy query in transformation scripts. For example, you can optimize grouping, ordering, and joining operations as described in the AWS blog post with performance tuning tips.
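One common optimization of this kind, assuming an exact result isn't required, is to replace a memory-intensive exact distinct count with Trino's approximate variant; the table and column names are illustrative:
-- count(DISTINCT customer_id) keeps all values in memory;
-- approx_distinct() gives an estimate with bounded memory use
SELECT approx_distinct(customer_id) FROM orders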
Example:
SELECT
...
Solution:
Cast columns that select a plain NULL value to an explicit type, for example:
SELECT
...
, cast(null as varchar) EventCreatedByUserType
...
Follow these best practices when writing SQL queries for transformations.
Required columns
List the required columns first (c_caseid, c_time, c_eventname).
Indented columns
Use a new, indented line for each column.
Comma positioning
Start each new column with a comma, allowing users to comment out lines of code without producing errors.
Aliases
Alias multi-word column names with double quotes and spaces, capitalizing every word ("My Column Name").
Changelogs
At the start of a script, include a change log covering any alterations to the code.
Keyword casing
Capitalize SQL keywords (SELECT, FROM, JOIN, etc.).
1. Column Selection
1. List the required columns first (c_caseid, c_time, c_eventname).
2. Use a new, indented line for each column.
3. Start each new column with a comma, allowing users to comment out lines of code without producing
errors.
4. Alias multi-word column names with double quotes and spaces, capitalizing every word ("My Column Name").
2. Table Joins
1. Start all table joins with "ON 1=1" to increase readability of subsequent (uniformly indented)
conditions.
2. Use a new, indented line for each join condition.
3. Comments
1. At the start of a script, include a change log covering any alterations to the code.
2. Start inline comments with a change date and the editor's initials.
3. End custom code with an inline comment including the same change date and initials.
4. Where Conditions
1. Start where clauses with "WHERE 1=1" to increase readability of subsequent (uniformly indented)
conditions.
2. Use a new, indented line for each WHERE condition.
5. General Formatting
1. Capitalize SQL keywords (SELECT, FROM, JOIN, etc.).
2. Use lower case letters for source column and table names.
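Putting these conventions together, a transformation script might look like the following sketch; the table and column names, dates, and initials are illustrative:
-- Change log:
-- 2024-05-01 JD: initial version
SELECT
    o.order_id AS c_caseid
    , o.created_at AS c_time
    , 'Create Order' AS c_eventname
    , c.customer_type AS "Customer Type" -- 2024-05-01 JD
FROM orders o
JOIN customers c
    ON 1=1
    AND c.customer_id = o.customer_id
WHERE 1=1
    AND o.created_at >= date '2024-01-01'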
To analyze the process data that has been uploaded to your process, you can create investigations and
dashboards.
Investigation
You can create many investigations for each business process. This allows you to focus on data related to a
different aspect in each investigation.
Dashboard
With a dashboard, you can do in-depth process mining analysis and tell your story using data visualizations. It
also lets you monitor key performance indicators that are relevant to a specific goal.
You can create many dashboards for each business process, for example, one dashboard for each audience. On a dashboard, you can:
• visualize data in widgets, grouping different aspects of complex processes on separate pages
• narrow down data with filters
The layout of a dashboard is flexible. You can resize and rearrange widgets based on your preference.
Multiple users can work simultaneously on a dashboard without overriding each other’s work. So, you can
make changes that don't need to be saved, for example, when you change filters for exploration purposes. Only
after you save your changes, they become available to other users. If you close the workspace without saving
your work, your changes are not preserved.
Users with consumer role can also explore the dashboards shared with them, for example, they can apply
individual filters. However, they cannot save their changes.
You can shorten the time to insight by using our pre-configured dashboards, which are tailored to specific use cases and mining needs. These dashboards are available in the value accelerator library for SAP Signavio solutions.
Investigation vs Dashboard
• Process view
Investigation: One per investigation, applied to all widgets.
Dashboard: One per dashboard, applied to all widgets.
• Access rights
Investigation: Private, anyone can view, anyone can edit. Read more in section Share an investigation with other SAP Signavio Process Intelligence users [page 309].
Dashboard: Private, anyone can view, anyone can edit. View access still allows users to edit a dashboard for exploration purposes, but the changes can't be saved.
• Available widgets
Investigation: Find the list of available widgets in section Add widgets to an investigation [page 337].
Dashboard: Find the list of available widgets in section Add widgets to a dashboard [page 337].
• Display widgets in other SAP Signavio applications
Investigation: Embed a widget using the widget ID; read more in section Display widgets in other SAP Signavio applications [page 341].
Dashboard: Currently, embedding widgets that are on a dashboard is not supported.
• Changes persisted
Investigation: Changes, for example, adding filters and widgets, are always persisted without further ado.
Dashboard: By default, changes to filters and widgets are only saved locally for the user and are not persisted automatically. The user needs to manually click Save to store changes.
• Content structure
Investigation: Group widgets in chapters, from top to bottom; read more in section Add, edit, and delete chapters [page 306].
Dashboard: Group widgets on pages, each page on a different tab; read more in section Add, edit, and delete pages [page 326].
• Link to a specific location
Investigation: Sharing links to single chapters of investigations is not supported.
Dashboard: Share links to single pages of a dashboard.
• Insights
Investigation: Available for investigations and widgets; read more in section Insights [page 454].
Dashboard: Available for dashboards and widgets; read more in section Insights [page 454].
• Process model
Investigation: Can be linked. Learn more in Link a BPMN Diagram to an Investigation [page 311].
Dashboard: Can be linked. Learn more in Link a BPMN Diagram to a Dashboard [page 328].
• Export and import
Investigation: Single investigations, as JSON; see Export and import an investigation [page 317].
Dashboard: Single dashboards, as JSON (see Export and import dashboards [page 333]), or bundled dashboards using the central repository for value accelerators (see Value Accelerator Library for SAP Signavio Solutions).
Related Information
4.1 Investigations
Learn about investigations, one of the data analysis options in SAP Signavio Process Intelligence.
You can create many investigations for each business process. This allows you to focus on data related to a
different aspect in each investigation.
Note
To access investigations, open your process and choose the Investigations tab. Here, you can find existing
investigations and create new ones.
Related Information
Create an investigation
Duplicate an investigation
Note
Users with the manager role for a process can change all process views of a duplicated investigation.
Analysts can only change the process views to which they have access.
Related Information
Share an investigation with other SAP Signavio Process Intelligence users [page 309]
Add widgets to an investigation [page 337]
A chapter is a section in an investigation where you can group or organize widgets based on your preferences.
Learn how to add, edit, and delete chapters, or how to change their descriptions.
• Create a chapter
• Rename a chapter
• Change the chapter description
• Copy a chapter
• Change the chapter order
• Delete a chapter
Note
You need the manager or analyst role for a process to use these functions.
Create a chapter
Rename a chapter
Copy a chapter
You can copy a chapter within an investigation or to any other of your investigations. In either case, the chapter is copied to the end of the investigation. Follow these steps:
Delete a chapter
Note
Related Information
Note
You need the manager or analyst role for a process to use these functions.
Edit an investigation
1. Open your process and under Investigations, select for your investigation, then Settings.
The investigation settings open.
2. You can change the following:
• Select another process view
• Link a BPMN diagram
• Configure a metrics bar
3. Confirm with Done.
Your changes are applied to the investigation.
Rename an investigation
1. Open your process and under Investigations, select for your investigation, then Rename.
Delete an investigation
Note
1. Open your process and under Investigations, select for your investigation, then Delete.
2. Confirm in the dialog with Delete.
The investigation is deleted.
Related Information
Learn how to make investigations private, visible, or editable for other users of SAP Signavio Process
Intelligence.
By default, you're the owner of the investigation that you create or duplicate. The owner can share the
investigation with other users and the owner's name is displayed in the Investigations tab of your process.
You can use process views, either to define which process data your users can view, or use them to grant
access to investigations. For more information, see Define Access with Process Views [page 26].
• Anyone can view: Any user can view the investigation, but only the owner can edit it.
• Anyone can edit: Any user can view and edit the investigation.
Related Information
Learn how to share your investigations in SAP Signavio Process Collaboration Hub. Then, users who don't have
access to SAP Signavio Process Intelligence can also view your findings.
For other users, you need to grant access by assigning a role and a process view.
If a BPMN diagram is linked to the investigation, the diagram is visible in the selected user's Shared
Documents in SAP Signavio Process Collaboration Hub.
When the diagram is opened, only one of its linked investigations is displayed - the one that is first in
alphabetical order. If investigations are renamed, or new investigations are added, these changes are
considered when determining which investigation is first in alphabetical order.
Related Information
Share an investigation with other SAP Signavio Process Intelligence users [page 309]
Define Access with Process Views [page 26]
Export and Import Investigations [page 317]
Roles and user management [page 31]
Linking your BPMN diagram lets you compare your expected process with your actual process - for example, by
using the Activity List, Process Conformance, or Variant Explorer widgets.
Procedure
• To map automatically, select Auto-map and follow the on-screen instructions. The auto-map function
maps events and activities with identical names, but isn't case-sensitive. If multiple activities have the
same name, the event is mapped to the first occurrence of the activity on the process model. You can
edit this manually if needed.
Note
The auto-map function is only available if there are matches between event names in the event log
and the activities in the BPMN diagram. If there are no matches, you need to manually map events
and activities.
Related Information
Widgets allow you to organize and visualize information about your process.
For information about how to add a widget to an investigation or dashboard, see Building Widgets - New User
Experience [page 350].
Related Information
Learn how to copy and duplicate widgets. You can copy widgets from one investigation or dashboard into
another. Duplicating widgets is only possible within the same investigation or dashboard.
• Copy – Specify where the new widget is added, for example, to another chapter, another investigation, or a
dashboard.
• Duplicate – The duplicate is always automatically added next to or below the original widget.
You can copy widgets from an investigation to other investigations or dashboards, and from a dashboard to
other dashboards or investigations.
Duplicate a widget
Related Information
How to change the widget configuration, re-arrange widgets on the user interface, and delete widgets.
Move a widget
On an investigation
Choose (more options) in the widget title and select Move up or Move down.
If you want to move a widget to a different chapter or to another investigation, use the copy function. Read
more in section Copy or duplicate widgets [page 338].
On a dashboard
You can drag and drop widgets to any desired location on a dashboard page. When the Freeform layout option is
selected, you can overlap widgets or place them anywhere.
The following applies to all layout options: You can resize the widgets and rearrange them based on the
available space on the dashboard.
Example
Delete a widget
Note
Related Information
Learn how to configure widgets to display the output of metrics, or add a metrics bar to an investigation. Also,
read how to delete metrics from the metrics bar.
Note
You need the manager or analyst role for a process to use these functions.
To configure widgets to display the output of one or more metrics, follow the instructions in section Add widgets to an investigation [page 337].
The metrics bar contains widgets that are preconfigured with metrics. You specify the widgets that are
displayed in the bar.
Note
A user with the manager role needs to add metrics to the process before you can select metrics for the
metrics bar. Read more in section Add Metrics to a Process [page 489].
• The metrics bar is always located at the top of an investigation. You can't move it.
• You can change the order in which the metrics are displayed.
• The widget for each metric is preconfigured. You can't edit these widgets.
3. Under Configure metrics bar, select (delete) for the metric you want to remove.
4. Confirm with Done.
The metric is no longer displayed in the investigation.
Learn how to export and import an investigation to share it across processes or workspaces.
You can use your investigations in different processes and workspaces. For that, you export your investigation
and import it into another process. To speed up the creation of an investigation, you can even import it into the
same process.
The export only includes references to process data, not the actual process data. Whether you need to
configure an imported investigation depends on how much the data of the source and target process differ.
Select the links below to learn more.
Related Information
Share an investigation with other SAP Signavio Process Intelligence users [page 309]
Work with Metrics [page 473]
Manage widgets [page 337]
Context
The export only includes references to process data, not the actual process data. Whether you need to
configure an imported investigation depends on how much the data of the source and target process differ.
The export includes the following:
• widgets
• chapters
• filters
• the metrics bar
• SIGNAL queries that are configured in widgets
• metrics that are added to the process, including their variables
• references to process data
• references to custom attributes
Note
The export doesn't include the following:
• process data
• custom attributes
• the link to a BPMN process model
Procedure
The investigation is exported as a JSON file. The file is saved to your browser's download folder.
Context
Procedure
The investigation to import is checked. If there aren't any conflicts, the investigation is imported and a
confirmation message appears.
If there are conflicts, you can import the investigation as-is, but you then need to reconfigure the widgets
that show an unexpected result. Alternatively, you can resolve the conflicts. For more information, see
Resolving Import Conflicts for Investigations [page 319].
Conflicts can occur in the following cases:
• the same name is used for a metric in both the exported investigation and the target process, but the
metrics have different SIGNAL code
• the same name is used for a metric variable in both the exported investigation and the target process, but
the variables have different values
• widgets are configured with custom attributes
Custom attributes: For each conflicting attribute, you can select an attribute to which you want to map it.
Related Information
Share an investigation with other SAP Signavio Process Intelligence users [page 309]
Work with Metrics [page 473]
Manage widgets [page 337]
Learn about dashboards, one of the data analysis options in SAP Signavio Process Intelligence.
With a dashboard, you can do in-depth process mining analysis and tell your story using data visualizations. It
also lets you monitor key performance indicators that are relevant to a specific goal.
You can create many dashboards for each business process, for example, one dashboard for each audience. On a dashboard, you can:
• visualize data in widgets, grouping different aspects of complex processes on separate pages
• narrow down data with filters
The layout of a dashboard is flexible. You can resize and rearrange widgets based on your preference.
Multiple users can work simultaneously on a dashboard without overriding each other’s work. So, you can
make changes that don't need to be saved, for example, when you change filters for exploration purposes. Only
after you save your changes, they become available to other users. If you close the workspace without saving
your work, your changes are not preserved.
Users with consumer role can also explore the dashboards shared with them, for example, they can apply
individual filters. However, they cannot save their changes.
You can shorten the time to insight by using our pre-configured dashboards, which are tailored to specific use
cases and mining needs. These dashboards are available in the value accelerator library for SAP Signavio
solutions. To get them, please contact your workspace administrator. For more information, see Value
Accelerator Library for SAP Signavio Solutions.
Note
To access dashboards, open your process and choose the Dashboards tab. Here, you can find existing
dashboards and create new ones.
Related Information
Create a dashboard
Edit a dashboard
Copy a dashboard
Note
Related Information
Read how to share a single dashboard with multiple stakeholder groups for which different parts of the process
data are relevant.
Let's assume you want to make KPIs available to several departments, but each department is only allowed to
view a certain portion of the data. To set up one dashboard, which can be shared with all departments, follow
these steps:
1. Create a process view with the option Use for access-control only activated and specify all users of all
departments. Read how to create the process view in section For Dashboard Access [page 28].
2. Create your dashboard and assign the process view by following the steps in section Create a dashboard
[page 322].
You can add widgets now or later.
3. Set the dashboard visibility to Anyone can view or Anyone can edit, based on your preference. Read more in
section Define Access with Process Views [page 26].
4. For each department, create a process view that controls data access. In each process view, specify the
following:
1. The data that the users of each department can view
2. The users of the respective department
Access setup is complete. You can add the KPIs by creating widgets. Also, the specified users can access the
dashboard.
If users are assigned to multiple process views, they can switch between the process views on a dashboard.
This changes the dashboard data only for the user. However, the assigned process view of the dashboard isn't
changed.
Switching the process view is done using the drop-down menu in the upper right corner of a dashboard. The
selection is saved to the user's browser storage. When users switch the browser or clear the browser storage,
the dashboard opens again with the assigned process view.
Only process views that control data access are available for selection in the process view switcher.
Read how to assign a different process view to a dashboard, and how to switch between process views on a
dashboard.
If users are assigned to multiple process views, they can switch between the process views on a dashboard.
This changes the dashboard data only for the user. However, the assigned process view of the dashboard isn't
changed.
Switching the process view is done using the drop-down menu in the upper right corner of a dashboard. The
selection is saved to the user's browser storage. When users switch the browser or clear the browser storage,
the dashboard opens again with the assigned process view.
Only process views that control data access are available for selection in the process view switcher.
Learn how to set the layout for pages on dashboards in SAP Signavio Process Intelligence. You can choose to
freely arrange widgets or have them automatically organized based on a grid.
Context
Choose the layout for your dashboard pages. By default, widgets can be arranged freely.
Example
Procedure
1. On your dashboard, open the menu at the bottom of a page and choose Layout Settings.
2. In the dialog, choose one of these canvas types:
Related Information
With pages, you can organize widgets based on your preference. Read how to create, rename, duplicate, and
delete pages.
A page is a section on a dashboard where you can organize widgets based on your preferences. You can add
many pages to a dashboard based on your requirement.
Adding a Page
In the bottom-left corner of your dashboard, choose New. A new page tab is added.
Renaming a Page
Open the menu of your page and choose Rename. Type a new name and confirm with Enter.
Duplicating a Page
Deleting a Page
Read how to make dashboards private, visible, or editable for other users of SAP Signavio Process Intelligence.
By default, you're the owner of the dashboard that you create or duplicate. The owner of the dashboard can
share the dashboard with other users and the owner's name is displayed in the Dashboards tab of your process.
Use process views, either to define which process data your users can view, or use them to grant access to
dashboards. For more information, see Define Access with Process Views [page 26].
• Anyone can view: Any user can view the dashboard and can edit it for exploration purposes, but the changes can't be saved.
• Anyone can edit: Any user can view and edit the dashboard.
Related Information
Linking your BPMN diagram lets you compare your expected process with your actual process - for example, by
using the Process Conformance or Variant Explorer widgets.
Procedure
• To map automatically, select Auto-map and follow the on-screen instructions. The auto-map function
maps events and activities with identical names, but isn't case-sensitive. If multiple activities have the
same name, the event is mapped to the first occurrence of the activity on the process model. You can
edit this manually if needed.
Note
The auto-map function is only available if there are matches between event names in the event log
and the activities in the BPMN diagram. If there are no matches, you need to manually map events
and activities.
• To map manually, drag and drop each event from the event list into its corresponding activity on the
diagram. Your manual mapping remains in place even if you use Auto-map later.
• To unmap an event or activity, select it in the event list or diagram and choose , then .
5. Confirm with Go to Dashboard or Save.
Widgets allow you to organize and visualize information about your process.
For information about how to add a widget to an investigation or dashboard, see Building Widgets - New User
Experience [page 350].
Related Information
Learn how to copy and duplicate widgets. You can copy widgets from one investigation or dashboard into
another. Duplicating widgets is only possible within the same investigation or dashboard.
• Copy – Specify where the new widget is added, for example, to another chapter, another investigation, or a
dashboard.
• Duplicate – The duplicate is always automatically added next to or below the original widget.
Copy a widget
You can copy widgets from an investigation to other investigations or dashboards, and from a dashboard to
other dashboards or investigations.
Related Information
How to change the widget configuration, re-arrange widgets on the user interface, and delete widgets.
Move a widget
On an investigation
Choose (more options) in the widget title and select Move up or Move down.
On a dashboard
You can drag and drop widgets to any desired location on a dashboard page. When the Freeform layout option is
selected, you can overlap widgets or place them anywhere.
The following applies to all layout options: You can resize the widgets and rearrange them based on the
available space on the dashboard.
Example
Process Discovery widgets have a minimum size to ensure that all elements for widget navigation, such as
buttons, menus, and sliders, are always displayed.
Delete a widget
Note
Related Information
Learn how to drag, drop, and resize widgets on a dashboard in SAP Signavio Process Intelligence.
You can drag and drop widgets to any desired location on a dashboard page. When the Freeform layout option is
selected, you can overlap widgets or place them anywhere.
The following applies to all layout options: You can resize the widgets and rearrange them based on the
available space on the dashboard.
Example
Related Information
Share dashboards between SAP Signavio workspaces using the export and import functions.
You can use your dashboards in different processes and workspaces. For that, you export your dashboard and
import it into another process. To speed up the creation of a dashboard, you can even import it into the same
process.
The export only includes references to process data, not the actual process data. Whether you need to
configure an imported dashboard depends on how much the data of the source and target process differ.
Select the links below to learn more.
Related Information
Share a dashboard with other SAP Signavio Process Intelligence users [page 327]
Work with Metrics [page 473]
Manage widgets [page 337]
Context
The export only includes references to process data, not the actual process data. Whether you need to
configure an imported dashboard depends on how much the data of the source and target process differ.
The export includes the following:
• widgets
• pages
• filters
• the metrics bar
• SIGNAL queries that are configured in widgets
• metrics that are added to the process, including their variables
• references to process data
• references to custom attributes
Note
The export doesn't include the following:
• process data
• custom attributes
The dashboard is exported as a JSON file. The file is saved to your browser's download folder.
Context
Procedure
The dashboard to import is checked. If there aren't any conflicts, the dashboard is imported and a
confirmation message appears.
If there are conflicts, you can import the dashboard as-is, but you then need to reconfigure the widgets that
show an unexpected result. Alternatively, you can resolve the conflicts. For more information, see Resolving
Import Conflicts for Dashboards [page 336].
Conflicts can occur in the following cases:
• the same name is used for a metric in both the exported dashboard and the target process, but the metrics
have different SIGNAL code
• the same name is used for a metric variable in both the exported dashboard and the target process, but
the variables have different values
• widgets are configured with custom attributes
Custom attributes: For each conflicting attribute, you can select an attribute to which you want to map it.
Share a dashboard with other SAP Signavio Process Intelligence users [page 327]
Work with Metrics [page 473]
Manage widgets [page 337]
Widgets are objects that allow you to organize and visualize information about your process. You can add them to your investigations and dashboards.
As a result, you can evaluate and benchmark the performance of your business processes.
Read more about the different widgets in section Widget types [page 343].
Widgets allow you to organize and visualize information about your process.
For information about how to add a widget to an investigation or dashboard, see Building Widgets - New User
Experience [page 350].
Related Information
Learn how to copy and duplicate widgets. You can copy widgets from one investigation or dashboard into
another. Duplicating widgets is only possible within the same investigation or dashboard.
• Copy – Specify where the new widget is added, for example, to another chapter, another investigation, or a
dashboard.
• Duplicate – The duplicate is always automatically added next to or below the original widget.
Copy a widget
You can copy widgets from an investigation to other investigations or dashboards, and from a dashboard to
other dashboards or investigations.
Duplicate a widget
Related Information
How to change the widget configuration, re-arrange widgets on the user interface, and delete widgets.
Move a widget
On an investigation
Choose (more options) in the widget title and select Move up or Move down.
If you want to move a widget to a different chapter or to another investigation, use the copy function. Read
more in section Copy or duplicate widgets [page 338].
On a dashboard
You can drag and drop widgets to any desired location on a dashboard page. When the Freeform layout option is
selected, you can overlap widgets or place them anywhere.
The following applies to all layout options: You can resize the widgets and rearrange them based on the
available space on the dashboard.
Example
Delete a widget
Note
Related Information
How to exclude data from widgets for which no value exists in the event log.
A NULL value is used in the event log when the value for an attribute is empty. Widgets always include NULL
values unless they are explicitly excluded.
• Adjust the filters applied to an investigation or a widget. It depends on the filter type whether NULL values
can be excluded. Read more in section Filter process data [page 435].
• Adjust the SIGNAL query used to configure a widget, for example, to filter out NULL values (see the sketch after this list). This option is only available on the following widgets:
• Breakdown [page 345]
• Correlation [page 346]
• Over time [page 347]
• Value [page 349]
• Hide the NULL value group in the widget by deselecting it in the widget legend. This is only possible when
NULL values exist and attributes are grouped. This option is only available on the following widgets:
• Breakdown [page 345]
• Correlation [page 346]
• Over time [page 347]
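In SIGNAL, NULL values can be excluded with an IS NOT NULL condition. The following is a hedged sketch; the attribute name "Customer Type" is illustrative and depends on your process data:
SELECT "Customer Type", count(case_id)
FROM THIS_PROCESS
WHERE "Customer Type" IS NOT NULL
GROUP BY "Customer Type"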
Learn about the process and requirements for embedding widgets in other applications. For example, on the
launchpad of SAP Signavio Process Collaboration Hub, in journey models, or as Live Insights shapes in process
diagrams or value chains in SAP Signavio Process Manager.
To share analysis results and benchmarks with a wider audience, widgets can be embedded in other SAP
Signavio applications as follows:
Embedding Widgets
When view access isn't granted, embedded widgets don't display data, and Live Insights shapes stay
gray.
2. If you want to link widgets to Live Insights shapes, you must configure thresholds for the widgets:
• Breakdown: Add a threshold [page 345]
• Distribution: Add a threshold [page 346]
• Over time: Add a threshold [page 347]
• Value: Add a threshold [page 349]
3. Get the ID of the widget that you want to embed. To copy the ID to your clipboard, click in the widget
and select Copy widget ID.
Note
The widget ID can't be copied when an investigation or dashboard is private. Read how to change the
status in section Share an investigation with other users [page 309] and Share a dashboard with other
SAP Signavio Process Intelligence users [page 327].
• Tree map
• Pie chart
• Heat map
• Sankey chart
Find out which limits apply to the widgets in SAP Signavio Process Intelligence.
Configuration Types
Only the following widgets can be configured with attributes, SIGNAL queries, and metrics:
When you configure the Over time widget with attributes, the following is not supported:
Widgets that are configured by a SIGNAL query or metrics don't support the following features:
• filtering on event-level
• user permissions
Get to know the available widgets, learn how to create and configure them on dashboards and investigations,
and find out what you can do in each widget.
Remember
The classic widget creation experience is no longer available. Instead, use the new user experience to build
your widgets. All widgets you were able to create with the classic experience can still be built using the new
interface.
For detailed information, see Building Widgets - New User Experience [page 350].
This widget lists all activities that occur in the process. If a BPMN model is linked to the dashboard, the
activities are grouped by conformance.
Remember
The classic widget creation experience is no longer available. Instead, use the new user experience to build
your widgets. All widgets you were able to create with the classic experience can still be built using the new
interface.
4.4.2 Breakdown
Learn how to use the Breakdown widget to display your process data in charts.
Remember
The classic widget creation experience is no longer available. Instead, use the new user experience to build
your widgets. All widgets you were able to create with the classic experience can still be built using the new
interface.
For more information, see Charts and Tables [page 353] and Example: How to Create a Breakdown Widget
[page 362].
Create a table with case information that you select. When you select a case on the widget, you can also view
the events and event level attributes.
Remember
The classic widget creation experience is no longer available. Instead, use the new user experience to build
your widgets. All widgets you were able to create with the classic experience can still be built using the new
interface.
For more information, see Charts and Tables [page 353] and Example: How to Create a Table Widget [page
360].
Learn how to use the Correlation widget to display process data as a scatter plot.
Remember
The classic widget creation experience is no longer available. Instead, use the new user experience to build
your widgets. All widgets you were able to create with the classic experience can still be built using the new
interface.
For more information, see Charts and Tables [page 353] and Example: How to Create a Correlation Widget
Using a Scatter Plot Visualization [page 366].
4.4.5 Diagram
Display the latest revision of a process model from SAP Signavio Process Manager. From the widget, you can
open the model in the editor of SAP Signavio Process Manager, or compare it with other revisions or models.
Remember
The classic widget creation experience is no longer available. Instead, use the new user experience to build
your widgets. All widgets you were able to create with the classic experience can still be built using the new
interface.
4.4.6 Distribution
Learn how to use the Distribution widget to display how much time your cases take to complete.
Remember
The classic widget creation experience is no longer available. Instead, use the new user experience to build
your widgets. All widgets you were able to create with the classic experience can still be built using the new
interface.
For more information, see Charts and Tables [page 353] and Example: How to Create a Distribution Widget
[page 363].
Learn how to use the Over time widget to display activities in a time series chart. For example, you can view the
cycle time of activities over days and weeks, the number of cases during a certain duration, or the volume of
help requests.
Remember
The classic widget creation experience is no longer available. Instead, use the new user experience to build
your widgets. All widgets you were able to create with the classic experience can still be built using the new
interface.
For more information, see Charts and Tables [page 353] and Example: How to Create an Over Time Widget
[page 365].
Check if the actual process flow, as recorded in the event log, matches the planned model and vice versa.
Identify hotspot activities in variants and see what actual paths look like.
Remember
The classic widget creation experience is no longer available. Instead, use the new user experience to build
your widgets. All widgets you were able to create with the classic experience can still be built using the new
interface.
View a process model that is generated based on the event log data and understand how your process is
performing, in terms of complexity and efficiency. Analyze conformance, cycle times, and occurrences of the
individual activity sequences.
Remember
The classic widget creation experience is no longer available. Instead, use the new user experience to build
your widgets. All widgets you were able to create with the classic experience can still be built using the new
interface.
See, for each variant, which cases follow a specific path and how many. Find out where cases enter and leave the process and whether activities outside the process exist.
Remember
The classic widget creation experience is no longer available. Instead, use the new user experience to build
your widgets. All widgets you were able to create with the classic experience can still be built using the new
interface.
Learn how to use the SIGNAL table widget to display the result of a SIGNAL query as a table.
Remember
The classic widget creation experience is no longer available. Instead, use the new user experience to build
your widgets. All widgets you were able to create with the classic experience can still be built using the new
interface.
For more information, see Charts and Tables [page 353] and Example: How to Create a Table Widget [page
360].
4.4.12 Spreadsheet
Learn how to add a spreadsheet to your process analysis. With a spreadsheet, you can include additional
information on your data analysis, and perform quick calculations with functions and formulas.
Remember
The classic widget creation experience is no longer available. Instead, use the new user experience to build
your widgets. All widgets you were able to create with the classic experience can still be built using the new
interface.
Use the text box to include additional information about your process analysis. Common formatting options are
available.
Remember
The classic widget creation experience is no longer available. Instead, use the new user experience to build
your widgets. All widgets you were able to create with the classic experience can still be built using the new
interface.
4.4.14 Value
Learn how to use the Value widget to display case data that is aggregated to a single value.
Remember
The classic widget creation experience is no longer available. Instead, use the new user experience to build
your widgets. All widgets you were able to create with the classic experience can still be built using the new
interface.
For more information, see Charts and Tables [page 353] and Example: How to Create a Value Widget [page
361].
Deep dive into the variants of your process, explore their distribution, and compare them with each other.
Analyze conformance, cycle times, and occurrences of the individual activity sequences.
Remember
The classic widget creation experience is no longer available. Instead, use the new user experience to build
your widgets. All widgets you were able to create with the classic experience can still be built using the new
interface.
Learn about our new default widget creation experience on dashboards and investigations.
• Charts and Tables [page 353]: Visualize your data based on your needs and preferences.
• Process-Related Widgets [page 406]: Find out what your processes look like, whether they conform to the planned process, how many variants exist, and more.
• Utility Widgets [page 430]: Add more information to your analysis, using spreadsheets or text fields.
Remember
The new widget creation and configuration experience brings a more visual and simplified way to work with
your widgets. The goal of this change is to make things easier, faster, and more flexible.
• Widget creation
Classic experience: Sometimes, widget creation starts with the question where to find the Create button.
New experience: The Create Widget button is always visible at the top of a dashboard.
• Choosing a widget type
Classic experience: You need to decide first in which chart to display your analysis goal. For example, when you want to view your data in a bar chart, the user interface only allows building a Breakdown widget. For any other chart, you need to create a different widget and start the data configuration over again. Also, creating process-related widgets takes more clicks than actually needed.
New experience: For any sort of chart or graph, select Charts & Tables and start configuring your data. As you go, you can try out different visualizations and pick the one that represents your analysis goal the best. Process-related and utility widgets are created with a single click.
• Usability
Classic experience: The code editor is small, so SIGNAL queries are hard to read. You need to know which chart type to use. Some widget settings are hard to find as they're scattered throughout the user interface.
New experience: In the wide code editor, your SIGNAL code is easy to see and work with. While configuring and previewing your widget, visualizations that match the current structure of your data configuration are recommended. Widget settings like the number of results, intervals for time series, or chart sorting are built into the user interface, making them easy to find and change.
Get to know the user interface for creating widgets that visualize your data in charts or tables, and learn how to
use it.
1 - Side Navigation with Access to the Data Configuration and Code View
• Configure the data that is shown in your widget: The configuration view is for setting up the data that you want to display in the widget. Here, you can choose between drag-and-drop configuration in Interactive mode and query-based configuration in SIGNAL mode.
• Show the complete SIGNAL code for your widget: The code view shows both what you've set up and what a widget inherits from the dashboard, such as filters.
Remember
You can copy the SIGNAL code from the code view, but to make any changes, you need to switch over to the configuration view and choose the SIGNAL mode.
• The Attributes list starts with predefined attributes, followed by case-level attributes, and ends with event-level attributes.
• The Metrics list separates the valid metrics from the invalid ones, with the valid ones placed at the top of
the list.
The search field above both lists helps you find the attributes or metrics you need.
Note
You can combine both modes. For example, you can start in Interactive mode and then switch to SIGNAL
mode to enhance the query. However, if you are in SIGNAL mode and switch to Interactive mode, all
changes made since the start of configuration will be lost.
Ready to create your first widgets? Follow the instructions in How to Build Data Visualizations [page 356].
If your selected measures and dimensions aren't a valid combination, try out a different combination.
5 - Visualization Options
You've got a variety of charts to choose from, or you can pick the table or KPI (value) option to bring your
selected data to life.
If the combination of measures and dimensions can't be visualized, change the data configuration or try
another visualization.
6 - Preview Area
Here, you can look at the outcome of your data configuration. The moment you change the configuration, the
preview refreshes on the fly.
This is also where you decide how to present the widget content. You can set things like the name and
description, data labels, color palettes, a sorting order, how to round numbers, currency values, and much
more. To view all options, select the Customize button in the top-right corner.
Ready to create your first widgets? Follow the instructions in How to Build Data Visualizations [page 356].
Related Information
As a data analyst, understand what is important for the creation of widgets that visualize data in any sort of
graph, as a table, or as a KPI (value).
Before you start creating widgets, find out more about these topics:
As soon as you're ready to create your first widgets, follow the instructions in How to Build Data Visualizations
[page 356].
Understand how your data is structured, learn what dimensions and measures are, and explore some example
analyses.
Data Structure
The data input for all chart visualizations is a table, with one or more fields, and one or more rows. So, the
default view when creating a widget in SAP Signavio Process Intelligence is a table.
Based on your analysis and visualization goals, you need to configure this table of data using aggregated
attribute values about your data (measures) and attributes by which you want to segment or group your data
(dimensions).
Assume, for example, you want to determine how long it takes, on average, from order placement to goods
delivery for different customer types, such as 'standard' and 'premium'. Then, the average of the aggregated
cycle time is the measure and the customer type is the dimension.
By combining different measures with different dimensions, you can create views of your data that can be used
to visualize the data according to your needs.
In General
When looking at data, dimension and measure are assigned according to the type of attribute, and usually the
following applies:
Measures are aggregated values derived from the quantitative attributes of your data, for example:
To determine a measure based on an attribute, you first select that attribute, and then choose the calculation
to achieve the desired aggregated value.
Measures return one value if there are no grouping dimensions. If dimensions exist, then one value is returned
for each distinct group within the dimension.
Metrics, which are a type of measure, are pre-defined for easy reuse.
Imagine you have an event log with the five attributes 'case_id', 'customer_id', 'city', 'order_date', and 'order_amount'.
If you configure one measure based on 'order_amount' with the average (AVG) aggregation function, the result is a table with one column, 'AVG(order_amount)', with the value '648.50'.
AVG(order_amount)
648.50
Now assume you configure:
• one measure based on 'order_amount', with the average (AVG) aggregation function
• one dimension based on 'city', which segments the data and provides a measure value for each city
AVG(order_amount) city
952.67 Berlin
213.50 Sydney
606.00 Tokyo
If you configure neither a measure nor a dimension, the result is the full five-column, six-row data table, just like the one at the beginning of this example section.
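In SIGNAL mode, a configuration with one measure and one dimension like the second one above corresponds to a query along these lines. This is a minimal sketch that assumes the attributes are technically named 'order_amount' and 'city'; the names in your process data may differ.
Example
SELECT "city", AVG("order_amount")
FROM THIS_PROCESS
GROUP BY "city"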
Learn about the two modes for data configuration and understand how to create widgets in SAP Signavio
Process Intelligence.
• In Interactive mode, you build your data view by combining attributes or metrics, using a drag-and-drop
user interface. In the background, your configuration is turned into a SIGNAL query automatically.
• In SIGNAL mode, you write the SIGNAL query in a code editor. A visual user interface eases the selection of
a visualization.
Note
You can combine both modes. For example, you can start in Interactive mode and then switch to SIGNAL
mode to enhance the query. However, if you are in SIGNAL mode and switch to Interactive mode, all
changes made since the start of configuration will be lost.
Related Information
Creating Charts and Tables Using Attributes and Metrics (Interactive Mode) [page 356]
Creating Charts and Tables Using SIGNAL Code (SIGNAL Mode) [page 368]
Learn how to create charts and tables by combining attributes and metrics, using a drag-and-drop user
interface, in SAP Signavio Process Intelligence. Also find out how to apply aggregate functions to measures,
apply sorting, display data in a time series chart, and more.
Prerequisites
• You're familiar with the concepts of measures and dimensions and understand how to work with time
series and case-and event-level metrics. For more information, see Best Practices for Data Configuration
[page 370].
Build your data view by combining attributes and metrics, using a drag-and-drop user interface. In the
background, your configuration is turned into a SIGNAL query automatically.
Note
You can combine both modes. For example, you can start in Interactive mode and then switch to SIGNAL
mode to enhance the query. However, if you're in SIGNAL mode and switch to Interactive mode, all changes
made since the start of configuration will be lost.
Procedure
1. On a dashboard or investigation, select Create Widget or Add widget respectively. Then, select Charts &
Tables.
The widget configuration dialog opens in Interactive mode.
2. To build the data query, drag attributes or metrics into the Measures and Dimensions fields.
The data column on the left displays case or event attributes, custom attributes, and the events within the
attributes. Icons help you identify the type of an attribute or event:
3. To aggregate the values of a measure, select an aggregation function.
Remember
Which aggregation function is available depends on the measure. The default for numbers is AVG, while for text values the default is COUNT.
The result is previewed as a table.
4. Select a visualization option.
Recommended visualizations for the current data configuration are indicated by a blue dot on the chart icon. Depending on the combination of measures and dimensions, the number of possible visualizations varies.
The preview updates dynamically.
5. To display activities in a time series chart, activate the Time series option in the configuration area and
choose the length of periods — per day, week, month, quarter, or year — represented on the graph.
6. To customize the appearance of your result, select Customize in the preview area and use these options:
The available options depend on the result of the data configuration and the selected visualization type. For
example, you can change the orientation for bar charts on the Visual tab, but this option isn't available for
pie charts.
7. To save your widget configuration, choose Create. On a dashboard, finish by selecting Save.
The widget builder closes and your new widget is displayed.
Related Information
Learn how to build a widget to display case data in a table using the widget builder.
Context
Assume, for example, you want to display how many orders are placed by different customer types, such as
'standard' and 'premium'. Classically, you would create a Case Table or a SIGNAL table widget to visualize your
information.
In the widget builder, you can create a view of your data by combining measures with dimensions and visualize
the data according to your needs. In the following example, we're creating a table showing the order amount for
each customer type.
Example
Procedure
1. On a dashboard or investigation, select Create Widget or Add widget respectively. Then, select Charts &
Tables.
The widget builder opens.
2. From the Attributes list, drag an 'Order Amount' attribute into the Measures field.
The aggregation function is set to AVG by default.
3. To calculate the total order amount, select SUM as the aggregation function.
4. From the Attributes list, drag a 'Customer Type' attribute into the Dimensions field.
Use the Limit settings to make further adjustments to your configuration if needed.
Your widget is displayed in the preview area. The default visualization is (Table).
5. To name your widget, use Customize.
If you don't provide a name for the widget, the visualization option is used as the default name.
6. To save your widget configuration, choose Create. On a dashboard, finish by selecting Save.
The widget builder closes and your new widget is displayed.
Learn how to build a widget to display a single and significant value of your process using the widget builder.
Context
Assume, for example, you want to display the total sales value of your process. Classically, you would create a
Value widget to visualize your information.
In the widget builder, you can create a view of your data by using an 'Order Amount' measure and visualize the
data as a 'Value'.
Example
Procedure
1. On a dashboard or investigation, select Create Widget or Add widget respectively. Then, select Charts &
Tables.
The widget builder opens.
2. From the Attributes list, drag an 'Order Amount' attribute into the Measures field.
The aggregation function is set to AVG by default.
3. To calculate the total sales value, select SUM as the aggregation function.
4. From the visualization menu, select (Value).
Your Value widget is displayed in the preview area.
5. To name your widget, use Customize.
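For reference, the same single value can also be produced in SIGNAL mode. This is a minimal sketch that assumes the attribute is technically named 'order_amount':
Example
SELECT SUM("order_amount")
FROM THIS_PROCESS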
Related Information
Learn how to create a widget to display the division or distribution of your process data in bar charts.
Context
Assume, for example, you want to display the average cycle time, grouped by customer type and city.
Classically, you would create a Breakdown widget to visualize your information.
In the widget builder, you can create a view of your data by combining a 'Cycle Time' measure with a 'Customer
Type' and a 'City' dimension and visualize the data using a bar chart.
Example
1. On a dashboard or investigation, select Create Widget or Add widget respectively. Then, select Charts &
Tables.
The widget builder opens.
2. From the Attributes list, select a 'Cycle Time' attribute and drag it into the Measures field.
The aggregation function is set to AVG by default.
3. From the Attributes list, select a 'Customer Type' attribute and drag it into the Dimensions field, then select a 'City' attribute and also drag it into the Dimensions field.
Your widget is displayed in the preview area. The default visualization is (Table).
4. From the visualization menu, select (Bar Chart).
5. To further customize your visualization and to name your widget, use Customize.
For example, you can change the colors and orientation of the bars in a bar chart. You can also set a time
range for the y-axis.
If you don't provide a name for the widget, the visualization option is used as the default name.
6. To save your widget configuration, choose Create. On a dashboard, finish by selecting Save.
The widget builder closes and your new widget is displayed.
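For reference, grouping one measure by two dimensions corresponds in SIGNAL mode to a query roughly like the following sketch, which assumes the attributes are technically named 'cycle_time', 'customer_type', and 'city':
Example
SELECT "customer_type", "city", AVG("cycle_time")
FROM THIS_PROCESS
GROUP BY "customer_type", "city"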
Related Information
Learn how to create a widget to display how much time your cases take to complete using the widget builder.
Context
Assume, for example, you want to determine how much time sales orders for different product types take to be
completed. Classically, you would create a Distribution widget to visualize your information.
In the widget builder, you can create a view of your data by combining a 'Cycle Time' measure with a 'Type of
Goods' dimension and visualize the data according to your needs.
Example
1. On a dashboard or investigation, select Create Widget or Add widget respectively. Then, select Charts &
Tables.
The widget builder opens.
2. From the Attributes list, select a 'Cycle Time' attribute and drag it into the Measures field.
The aggregation function is set to AVG by default.
3. From the Attributes list, select a 'Type of Goods' attribute and drag it into the Dimensions field.
Your widget is displayed in the preview area. The default visualization is (Table).
4. From the visualization menu, select (Histogram).
The average (µ), median (x̃), and quartile (Q) values are displayed by default.
The first quartile (Q1) is the value under which 25% of data points are found when they are arranged in increasing order. The third quartile (Q3) is the value under which 75% of data points are found when arranged in increasing order.
5. To further customize your visualization and to name your widget, use Customize.
If you don't provide a name for the widget, the visualization option is used as the default name.
6. To save your widget configuration, choose Create. On a dashboard, finish by selecting Save.
Related Information
Learn how to build a widget to display activities in a time series chart using the widget builder.
Context
Assume, for example, you want to determine the average duration of your process cases per city. Classically,
you would create an Over time widget to visualize your information.
In the widget builder, you can create a view of your data by combining a 'Cycle Time' measure with a 'City'
dimension and visualize the data according to your needs.
Example
1. On a dashboard or investigation, select Create Widget or Add widget respectively. Then, select Charts &
Tables.
The widget builder opens.
2. From the Attributes list, select a 'Cycle Time' attribute and drag it into the Measures field.
The aggregation function is set to AVG by default.
3. From the Attributes list, select a 'City' attribute and drag it into the Dimensions field.
4. Activate the Time Series setting, then select Quarter from the Interval drop-down list.
Your widget is displayed in the preview area. The default visualization is (Table).
5. From the visualization menu, select your preferred visualization option, for example (Line Chart).
Your Over time widget is displayed in the preview area.
6. To further customize your visualization and to name your widget, use Customize.
If you don't provide a name for the widget, the visualization option is used as the default name.
7. To save your widget configuration, choose Create. On a dashboard, finish by selecting Save.
The widget builder closes and your new widget is displayed.
Related Information
Learn how to create a widget to display process data as a scatter plot. A scatter plot is a graphical
representation of numerical variables plotted along two axes.
Context
Assume, for example, you want to discover a relationship between two numerical values, such as the total
quantity of orders and the total gross revenue. You also want to group them by company code. Classically, you
would create a Correlation widget to visualize your information.
In the widget builder, you can combine numerical values (labeled "#") for the total quantity and the total gross
amount and visualize the data as a scatter plot.
Procedure
1. On a dashboard or investigation, select Create Widget or Add widget respectively. Then, select Charts &
Tables.
The widget builder opens.
2. From the Attributes list, select numerical values (labeled '#') for 'Total Quantity' and for 'Total Gross
Amount' and drag them into the Dimensions field.
Your widget is displayed in the preview area. The default visualization is (Table).
3. From the visualization menu, select (Scatterplot).
Your widget is visualized as a single-colored scatter plot.
4. From the Attributes list, select a 'Company Code' attribute and drag it into the Dimensions field.
Your scatter plot is now colored and grouped by company code.
5. To further customize your visualization, and to name your widget, use Customize.
You can set data ranges for the coordinate axes and choose various color sets.
If you don't provide a name for the widget, the visualization option is used as the default name.
6. To save your widget configuration, choose Create. On a dashboard, finish by selecting Save.
The widget builder closes and your new widget is displayed.
Related Information
Learn how to create charts and tables using SIGNAL code in SAP Signavio Process Intelligence.
Prerequisites
• You're familiar with the concepts of measures and dimensions and understand how to work with time
series and case-and event-level metrics. For more information, see Best Practices for Data Configuration
[page 370].
Context
Build your data view by writing a SIGNAL query in a code editor. A visual user interface eases the selection of a
visualization.
Note
You can combine both modes. For example, you can start in Interactive mode and then switch to SIGNAL
mode to enhance the query. However, if you're in SIGNAL mode and switch to Interactive mode, all changes
made since the start of configuration will be lost.
Procedure
1. On a dashboard or investigation, select Create Widget or Add widget respectively. Then, select Charts &
Tables.
The widget builder opens.
2. Switch the data configuration mode from Interactive to SIGNAL.
3. Enter your query.
The data column on the left displays case or event attributes, custom attributes, the events within the
attributes, and the distinct values for each of them. These functions make writing queries easier and faster
for you:
• Show concrete values for an attribute or event by selecting it. The number on the right indicates how many distinct values exist.
• Copy an attribute, event, or a value by hovering over it and selecting the copy icon.
• Icons help you identify the type of an attribute or event.
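For example, you might enter a query like the following sketch, which counts cases per city. It assumes an attribute technically named 'city' and that the case identifier is exposed as case_id; the names in your process data may differ.
Example
SELECT "city", COUNT(case_id)
FROM THIS_PROCESS
GROUP BY "city"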
4. Select a visualization option.
Recommended visualizations for the current data configuration are indicated by a blue dot on the chart icon. Depending on the combination of measures and dimensions, the number of possible visualizations varies.
The preview updates dynamically.
5. To customize the appearance of your result, select Customize in the preview area and use these options:
The available options depend on the result of the data configuration and the selected visualization type. For
example, you can change the orientation for bar charts on the Visual tab, but this option isn't available for
pie charts.
6. To save your widget configuration, choose Create. On a dashboard, finish by selecting Save.
The widget builder closes and your new widget is displayed.
Related Information
Find out how measures and dimensions can be combined and which predefined attributes exist. Understand
how to work with time series and case- and event-level metrics to optimize data configuration and analysis in
SAP Signavio Process Intelligence.
• These predefined attributes are numerical values, which can't be used as a dimension together with a measure:
• 'Case'
About 'Cases Started', 'Cases Ended', and 'Cases Open' in Time Series
In a time series chart, cases are counted for each period of time like day, week, month, and so on. Since a case
potentially spans many periods of time, you have to define exactly what you're counting.
• 'Cases Started': Count all cases where the first event happens in the specified period of time.
Example
Here, the cases are counted for each day. Each bar stands for a case, and the bar length represents the duration of a case.
• 'Cases Ended': Count all cases where the last event happened in the specified period of time.
Example
• 'Cases Open': Count all cases that started before or in the specified period of time, but aren't yet finished.
Example
Here, the cases are counted for each day. Each bar stands for a case, and the bar length represents the duration of a case. The red bars stand for the cases that started before or on January 3, 2023, but are not yet finished on that day.
You can configure widgets using two types of metrics. One type aggregates data over cases, while the other
aggregates data over events.
Restriction
• Case- and event-level metrics are only available for widgets on dashboards. On investigations, you can
only configure widgets using case-level metrics.
• You can configure a widget using either case-level or event-level metrics, but you can't combine case-
and event-level metrics.
Related Information
Learn how to specify the number of results to display in a widget, with options to show all results or set a
specific limit. This feature allows for better control and organization of data presentation in a dashboard.
Prerequisites
• You're familiar with the concepts of measures and dimensions and understand how to work with time
series and case-and event-level metrics. For more information, see Best Practices for Data Configuration
[page 370].
• You're using the Interactive mode to create or edit the widget, see Creating Charts and Tables Using
Attributes and Metrics (Interactive Mode) [page 356].
If you want to specify the number of rows to return in SIGNAL mode, see LIMIT Clause in the SAP Signavio
Analytics Language Guide.
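For example, a LIMIT clause restricts the number of returned rows. This is a minimal sketch, assuming the case identifier is exposed as case_id:
Example
SELECT case_id
FROM THIS_PROCESS
LIMIT 100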
Context
By default, a widget displays up to 500 rows of the query result. You can customize this limit when creating or
editing a widget.
Note
If you choose to hide the warning about the results limit, your data could be more difficult for
others to interpret correctly.
Results
Related Information
Creating Charts and Tables Using Attributes and Metrics (Interactive Mode) [page 356]
Editing Widgets With Charts and Tables [page 375]
Learn how to edit the name and the description of a widget in the widget builder in SAP Signavio Process
Intelligence.
Procedure
If you don't provide a name for the widget, the visualization option is used as the default name.
A description adds context to the data. When a description exists, an info icon is displayed on the
widget. When users select the icon, the description is shown.
4. On a dashboard, choose Save to save the changes.
Learn how to customize widgets displaying charts and tables in SAP Signavio Process Intelligence.
Prerequisites
You're the owner of the investigation or dashboard with the widget that you want to edit.
If you aren't the owner, you need editing rights to the investigation or dashboard. The (Anyone can edit) icon
in the header indicates that you can edit the investigation or dashboard.
For more details on access rights to investigations and dashboards, and corresponding editing options for
widgets, see Share an investigation with other SAP Signavio Process Intelligence users [page 309] and Share a
dashboard with other SAP Signavio Process Intelligence users [page 327].
Context
Edit widgets when you want to reconfigure the data that is displayed in the widget, change the chart type or
switch to table format, change the appearance of the visualization, and more.
Tip
To view other widgets while editing yours, use the collapse options from the dialog header:
• (Dock right) or (Maximize) to pin a narrow version of the dialog to the right or switch back to full
size
• (Minimize) or (Maximize) to minimize the dialog at the bottom or switch back to full size
Results
Changes are applied immediately. If the widget is displayed in other SAP Signavio solutions, changes are visible
there as well.
Related Information
Learn how to set decimal places or specify a rounding procedure for widgets that display a KPI (single
aggregated value) in SAP Signavio Process Intelligence. Also, learn how to add the duration, a currency symbol,
or a custom unit (suffix) to this value.
Prerequisites
For an example of how to configure a Value widget, see Example: How to Create a Value Widget [page 361].
Rounding Procedure
• Round up
Rounds the decimal place to the next higher value.
• Round down
Rounds the decimal place to the next lower value.
• Round to closest
Rounds values above or equal "5" to the next higher value.
The following example lists results after rounding up or down:
Example
Activate rounding, then increase or decrease the decimal places with the increase and decrease buttons.
To learn how to configure units in metrics, see Create a custom metric [page 489].
Learn how to configure the format of data that is displayed in widgets in SAP Signavio Process Intelligence. Use
these options to adjust the optimal level of detail for represented KPIs.
Data formatting options are available under Customize in the preview area of the widget builder and must be
set for each widget individually.
To change the format, open the Customize menu and, on the Format tab, choose your preferred format.
In Table widgets, you can set the format for each column.
Durations
Durations can range from milliseconds to years. The format for calculated durations is determined
automatically by default. For example, longer periods of time are represented in months or years, while shorter
periods are more likely to be represented in hours or days. Manual changes can be reset to Auto to have the
format determined automatically again.
Example
A widget showing the lead time using the Day, Hour, Minute format:
By default, the full name of each unit is displayed. To save space, you can display only the first letter using the
Display units option.
Date and Time
Date and time data formatting is available for Time series options.
You can specify dates in any combination of days, months, and years. They can be displayed in short, medium, or long formats. For example, a medium-length date format can be DD/MMM/YYYY (day, month, year).
Time data can range from milliseconds to hours, and it can be displayed in the 12-hour format (AM/PM) or the
24-hour format. For example, a time format can be hh:mm (hour, minute) in the AM/PM format.
You can turn the date and time settings on or off individually.
Example
A widget showing the average cycle time over a quarterly interval using the Day, Month, Year format.
Numeric Values
For numbers, you can set decimal places or specify a rounding procedure. You can also add the duration, a
currency symbol, or a custom unit (suffix). For details, see Decimal Places, Rounding Procedure, and a Unit
Type for KPIs [page 376].
Example
A widget showing an order amount with thousand separator, decimal places, and currency prefix.
Table
Example
For information on how to create a table, see Example: How to Create a Table Widget [page 360].
Value
Display case data that’s aggregated to a single value, for example, the average order value at a given location.
Example
Bar Chart
Example
For information on how to create a bar chart, see Example: How to Create a Breakdown Widget [page 362].
Pie Chart
Values are represented as pie slices to show the relative sizes of data.
Example
Dual Axis Chart
The values of two measures are displayed on one chart to illustrate the relationships between the values.
To create a dual axis chart, specify two measures and one dimension. The measures can be two attributes, two metrics, or one attribute and one metric.
Example
The bars represent the values of the first selected attribute or metric, the line graph represents the values
of the second selected attribute or metric.
For information on how to create a dual axis chart, see Example: How to Create a Breakdown Widget [page
362].
Line Chart
Values are displayed as a series of data points that are connected by straight-line segments.
For information on how to create a line chart, see Example: How to Create an Over Time Widget [page 365].
You can also create line charts with SIGNAL code that don't relate to time data. For example, you can render
the number of sales cases in cities. See more in the SAP Signavio Analytics Language user guide.
Area Chart
An area chart is a line chart where the areas between each line and the following line are shaded with a color.
When breaking down the values into groups, the groups are vertically arranged on top of each other. Each
group has a different color.
The top line shows the sum of all groups. When hovering over the chart, tooltips show the specific data values
for each group.
Scatterplot
Display two numerical variables plotted along two axes. This graph is useful for identifying or showing the relationship between two variables.
Example
For information on how to create a scatterplot, see Example: How to Create a Correlation Widget Using a
Scatter Plot Visualization [page 366].
Histogram
Use histograms to display how the numeric or duration attributes of your cases are distributed. For example, you can see how the times for processing a sales order are distributed. You can group this data by order status, by adding 'order status' as a dimension.
The average (µ), median (x̃), and quartile (Q) values are displayed by default. The first quartile (Q1) is the value under which 25% of data points are found when they're arranged in increasing order. The third quartile (Q3) is the value under which 75% of data points are found when arranged in increasing order.
Example
Note
You can only create histograms in the Interactive mode of the widget builder. This visualization option isn't
available in SIGNAL mode.
For information on how to create a histogram, see Example: How to Create a Distribution Widget [page
363].
Tree Map
Hierarchical data is displayed as nested rectangles. Use it to compare quantities and patterns.
For information on how to create a tree map, see Example: How to Create a Breakdown Widget [page 362].
Heat Map
• When you configure the widget with attributes, specify one measure and two dimensions.
• When you configure the widget with metrics, specify two measures.
• When you configure the widget with SIGNAL code, specify two dimensions.
The values of the two measures are represented by colored fields. The color scale represents the range
between the minimum value and the maximum value. The value range can be reduced to any range of
interest by moving the sliders.
For information on how to create a heat map, see Example: How to Create a Breakdown Widget [page 362].
Sankey Chart
A Sankey chart depicts the flow from one set of values to another. The connections between attribute values
are called links. The thickness of the links is proportional to the quantity or size of the flow.
To create a Sankey chart, select one metric or an aggregated case-level attribute that represents a quantity as
the measure and two or more case-level attributes as dimensions.
Example
This example shows the average cycle time per city, broken down by premium and standard customers.
Links depicting the cycle time for standard customers are always thicker than for premium customers,
indicating that standard customers generally have longer cycle times.
To view the cycle time per city and customer type, hover over the links.
Learn how to customize sorting, orientation, stacking, and data colors in widgets displaying charts in SAP
Signavio Process Intelligence. To better meet your needs, you can also add data labels and adjust the minimum
and maximum scale values of the vertical axis (y-axis).
Visual appearance options are available under Customize in the preview area of the widget builder and must be
set for each widget individually.
Note
Which option is available depends on the visualization option that you've selected in the widget builder.
Bar charts display data in vertical clusters by default. You can change both the stacking and the orientation as follows:
Sorting for Bar Charts, Pie Charts, and Dual Axis Charts
You can sort data in bar charts, pie charts, and dual axis charts. To sort data, activate sorting and select the
value to sort and the sort order.
The average (µ), median (x̃), and quartile (Q) values are displayed in histograms by default. You can show or hide individual data values or all at once.
Several predefined color palettes are available for all visualization types, except for tables and KPIs (single
values).
Some of these color palettes are available for most visualization types; others are available for heat maps and tree maps with two measures and one dimension.
You can show the values of a bar, an area, or a line directly on a chart. The option is deactivated by default. Once you've activated the Data Label option, you can specify the position, orientation, font size, and font style for the data labels.
By default, the range of values shown in the axes is set to visualize all the data returned by the query. To provide
a tailored context of the data values when designing a data analysis or to make multiple charts comparable,
you can adjust the minimum and maximum scale values.
• Limits are available for dual axis charts, area charts, line charts, bar charts, and histograms.
• For dual axis charts, you can specify limits for both measures.
• You can specify negative values if needed.
• Whether a comma or a period works as a decimal separator depends on your browser. For example, it's a comma in Microsoft Edge and Google Chrome, and a period in Mozilla Firefox.
• For durations, specify the minimum and maximum values for the week (--w), day (-d), hour (--h), and minute (--m). For example, a threshold of eight days (one week plus one day) is specified with 01 w, 1 d, ‐‐ h, ‐‐ m.
Related Information
Creating Charts and Tables Using Attributes and Metrics (Interactive Mode) [page 356]
Creating Charts and Tables Using SIGNAL Code (SIGNAL Mode) [page 368]
Editing Widgets With Charts and Tables [page 375]
Widget functionality in SAP Signavio Process Intelligence lets you show or hide data groups, drill down into data, add thresholds, filter based on selected data, and export data as CSV. Use these options to highlight important metrics in widgets displaying charts, tables, or KPIs (single aggregated values) and to export data for further analysis.
Show or Hide Data Groups
For data that's displayed in charts and grouped several times, you can show or hide the groups in the widget. To do so, select or deselect the groups in the chart legend.
Example
Drill Down Into Data
Explore multidimensional data by navigating from one level down to a more detailed one. You don't need to configure a new widget to do this; instead, you break down the data of an existing widget.
To drill down into your data, select (Add) in the widget and select an option. The result set is opened as a
new widget.
Which option is available depends on the data that you've selected in the widget.
Example
Watch how to view a bar chart's data over time and its distribution:
Add Thresholds
Thresholds help draw attention to metric values that are above or below a limit or outside a specific range. You
can add visual thresholds to widgets that display charts, a single metric value, or tables.
Filter Based on Selected Data
You can select one or more data points in a chart and use this selection to set a filter. To create the filter, select the data in the widget and choose (Add). Then, decide to which level to add the filter (widget or dashboard) and whether to include or exclude the selected data.
Example
Watch how to select data in a bar chart and create a filter based on the selection:
Export Data as CSV
You can export chart or table data from a dashboard widget to a CSV file. For more details, see Data Export (CSV) [page 405].
Related Information
4.5.2.10 Thresholds
Learn about thresholds on widgets in SAP Signavio Process Intelligence.
Thresholds help draw attention to metric values that are above or below a limit or outside a specific range. You
can add visual thresholds to widgets that display charts, a single metric value, or tables.
You can set a single threshold or an upper and lower threshold. Different highlight colors are available for areas
inside or outside these thresholds.
Examples
The metric (877) is displayed as a blue bar. A second bar displays the regions above, between, and below the
threshold value according to the set colors. In addition, a goal (880) can be displayed by a dashed line.
Prerequisites
The widget displays case data in a chart, using one of these visualization types:
• Area Chart
• Bar Chart
• Line Chart
To create your widget, see the instructions in How to Build Data Visualizations [page 356].
Context
• Line
The threshold is displayed as a dashed line including the threshold value and label.
Example
• 2 regions
Different background colors for the upper and lower regions are assigned.
• 3 regions
A second threshold can be set, resulting in three regions: Above the upper threshold, below the lower
threshold, and between the thresholds.
Example
For durations, specify the threshold value for the week (--w), day (-d), hour (--h), and minute (--m). For example, a threshold of eight days (one week plus one day) is specified with 01 w, 1 d, ‐‐ h, ‐‐ m.
4. If you like, add a goal, for example to indicate a target value.
5. You can re-assign the background colors using the color bar. To swap the color assignment, choose Swap
color order.
6. Confirm with Save.
Results
Prerequisites
• The widget displays data in a table, using the visualization type Table.
• You can add thresholds only to columns with numeric data.
To create your widget, see the instructions in How to Build Data Visualizations [page 356].
Context
• 2 regions
Different background colors for the upper and lower regions are assigned.
• 3 regions
A second threshold can be set, resulting in three regions: Above the upper threshold, below the lower
threshold, and between the thresholds.
Procedure
Results
Prerequisites
The widget displays case data that's aggregated to a single value, using the visualization type Value.
To create your widget, see the instructions in How to Build Data Visualizations [page 356].
• 2 regions
The threshold value is displayed above the calculated metric. The color of the metric indicates whether the
value is above or below the threshold. You can specify which color is used to indicate going below or above
the threshold.
Procedure
Results
Context
When a threshold is removed, Live Insights shapes based on the threshold may stop working.
Results
Related Information
Live Insights shapes in SAP Signavio Process Manager require the threshold setting.
With the Live Insights shapes, you can add insights and KPIs you want to monitor to BPMN diagrams, value
chains, and navigation maps.
For that, you add a Live Insights shape to your diagram and link it with a widget from SAP Signavio Process
Intelligence. Users can then view the Live Insights in SAP Signavio Process Collaboration Hub.
In SAP Signavio Process Intelligence, thresholds need to be defined for the widgets that are linked to Live
Insights shapes. In SAP Signavio Process Manager, the color of the shape indicates how the current result of
the widget relates to the defined thresholds. The following example shows how the sentiment shape reflects
the current widget result:
The color of a shape is only visible in SAP Signavio Process Collaboration Hub. In SAP Signavio Process
Manager, the shapes stay grey.
Related Information
Learn how you can export chart or table data from a dashboard or investigation widget in SAP Signavio Process
Intelligence to a CSV file.
Note
The CSV export is available for widgets showing a table, a KPI (value), or any type of chart, but not for
widgets that show a histogram.
To export data shown in a widget on a dashboard or investigation, select (more options) in the widget and choose Export as CSV. The file is saved to your browser's download folder.
Format
Example
The exported data for a widget, which displays a chart with the order amount for each city, can appear in
the CSV file as follows:
Order Amount,City
210.4030769230769,San Francisco
197.6191666666667,Houston
168.4736363636364,Washington
192.35599999999997,New York
250.01249999999996,Miami
235.17714285714285,Boston
Values for cycle time are always exported as milliseconds. If you need a different time format, you need to
convert the milliseconds outside of SAP Signavio Process Intelligence.
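For example, an exported cycle time of 259,200,000 milliseconds corresponds to 259,200,000 ÷ 86,400,000 = 3 days, since one day has 86,400,000 milliseconds.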
By default, the widget displays and exports up to 500 rows of the data set.
• When you edit the widget in Interactive mode, activate the Limit option in the configuration area and define
a new limit.
• When you edit the widget in SIGNAL mode, use the LIMIT Clause to define a limit.
If you then run the export again, the corresponding amount of data is exported.
Related Information
Creating Charts and Tables Using Attributes and Metrics (Interactive Mode) [page 356]
Creating Charts and Tables Using SIGNAL Code (SIGNAL Mode) [page 368]
Use these SAP Signavio Process Intelligence widgets to discover which process steps are taken, whether your
actual process conforms with the target process model, how many process variants exist, and much more.
This widget lists all activities that occur in the process. If a BPMN model is linked to the dashboard or
investigation, the activities are grouped by conformance.
That way, you can compare the activities of the actual process with the activities in the linked process model,
to see how conforming your actual process data is. If no BPMN model is linked, all activities are treated as
non-conforming.
For all activities, the total number of occurrences is given in absolute numbers, as well as percentages.
Prerequisites
A BPMN model is linked to the dashboard or investigation. See more in Link a BPMN Diagram to a Dashboard
[page 328] and Link a BPMN Diagram to an Investigation [page 311].
To add a widget to a dashboard or investigation, select Create Widget or Add widget respectively, then choose
the widget type you need.
The widget displays up to five activities by default. To show all activities, click Show more. You can reduce the number of activities again with Show less.
To open the widget settings, choose (more options) in the widget and select Edit. Then, apply your
changes.
Check if the actual process flow, as recorded in the event log, matches the planned model and vice versa. Identify hotspot activities in variants and see what actual paths look like.
You can browse through all variants one by one. If you want to compare several variants, we recommend using
the Variant Explorer widget.
Prerequisites
A BPMN model is linked to the dashboard or investigation. See more in Link a BPMN Diagram to a Dashboard
[page 328] and Link a BPMN Diagram to an Investigation [page 311].
To add a widget to a dashboard or investigation, select Create Widget or Add widget respectively, then choose
the widget type you need.
Widget Settings
To open the widget settings, choose (more options) in the widget and select Edit. Then, apply your
changes.
With Variant path and Hotspots, you can switch between path view and hotspot view.
Example
The first shipment lasts on average 1 day and 6 hours, the second 2 days.
• Hotspot: Displays how often activities of a process variant have been executed.
• Hotspots are marked with a turquoise circle and display the execution frequency in total numbers and
in percent.
• The circle size indicates the execution frequency.
• 68 times goods were shipped using the standard shipping, which is 25% of all activities.
• Receiving a t-shirt from printing occurs 34 times, which is 12.5% of all activities.
• Shipping goods with express was never executed.
Exploring Variants
• Number of cases
• Duration attribute
Select a duration attribute from the drop-down list.
• Cost attribute
Select a cost attribute from the drop-down list.
A variant filter narrows down data to the activities of one process variant.
Example
This filter option is available in addition to the filter function in the widget menu, read more in section Apply
filters [page 435].
Zoom in
Zoom out
You can also press Ctrl on your keyboard and use your mouse wheel/trackpad to zoom in and zoom out.
Related Information
Display the latest revision of a process model from SAP Signavio Process Manager. From the widget, you can
open the model in the editor of SAP Signavio Process Manager, or compare it with other revisions or models.
• BPMN diagrams, see section Business Process Modeling and Notation (BPMN)
• Value Chains
• Customer Journey Maps
Prerequisites
A BPMN model is linked to the dashboard or investigation. See more in Link a BPMN Diagram to a Dashboard
[page 328] and Link a BPMN Diagram to an Investigation [page 311].
To add a widget to a dashboard or investigation, select Create Widget or Add widget respectively, then choose
the widget type you need.
To open the widget settings, choose (more options) in the widget and select Edit. Then, apply your
changes.
To open the diagram in SAP Signavio Process Manager, click Open in Editor.
• Compare with previous version: Available when the same diagram is linked in the widget and in the dashboard or investigation. SAP Signavio Process Manager opens in comparison view, displaying the latest revision in the widget and the revision linked in the dashboard or investigation.
• Compare with linked model: Available when different diagrams are linked in the widget and in the dashboard or investigation. SAP Signavio Process Manager opens in comparison view, displaying the diagram in the widget and the diagram linked in the dashboard or investigation.
Zoom in
Zoom out
You can also press Ctrl on your keyboard and use your mouse wheel/trackpad to zoom in and zoom out.
View a process model that is generated based on the event log data and understand how your process is
performing, in terms of complexity and efficiency. Analyze conformance, cycle times, and occurrences of the
individual activity sequences.
By default, the widget initially displays the most common or significant activities and paths.
On the widget, you can view the entire process or parts of it by adjusting the percentage of activities and paths
that are displayed. The higher the percentage, the more activities or paths are displayed.
To add a widget to a dashboard or investigation, select Create Widget or Add widget respectively, then choose
the widget type you need.
Widget Settings
To open the widget settings, choose (more options) in the widget and select Edit. Then, apply your
changes.
Example
• Cycle time: Display how much time is between two activities or how long an activity takes.
• Events are sorted by timestamp. If multiple events have the same timestamp, they're sorted
alphabetically by name.
• The line width shows the duration between activities.
• The number on a connector shows the average duration between two activities.
• Added metrics: Display the selected metric on the diagram's activities.
Note
Only metrics of the aggregation type over events are available here. Read more in section Where to use
metrics [page 473].
Example
You can set how many activities and paths are displayed with the sliders:
• If you change the activities percentage, more or fewer activities are displayed in the diagram. Activities are added or removed based on a significance algorithm, with frequency being an important factor.
• If you change the paths percentage, more or fewer paths are displayed for the currently displayed activities. Paths are added or removed based on the frequency of the path.
For example, when you change the percentage of activities to 60%, the widget displays 60% of activities with
the highest frequency.
When changing percentages, newly added activities and paths are colored blue for a few seconds to highlight
the addition.
Example
To visually compare multiple discoveries, add several Process Discovery widgets to your dashboard or
investigation.
You can select one or more activities or a connector in the widget and create a filter from that selection.
Example
• Multiple activities selected:
• Cases that contain all of the selected activities
• Cases that contain none of the selected activities
• One connector selected:
• Cases where the first related activity is directly followed by the second related activity
• Cases where the first related activity isn't directly followed by the second related activity
• Cases where the first related activity is eventually followed by the second related activity
• Cases where the first related activity isn't eventually followed by the second related activity
This filter option is available in addition to the filter function in the widget menu, read more in section Apply
filters [page 435].
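In SIGNAL terms, such connector filters correspond to behavior conditions. The following is a rough sketch with the hypothetical activities 'Receive Customer Order' and 'Ship Goods', assuming the behavior pattern syntax described in the SAP Signavio Analytics Language Guide, where ~> stands for 'eventually followed by':
Example
SELECT case_id
FROM THIS_PROCESS
WHERE event_name MATCHES ('Receive Customer Order' ~> 'Ship Goods')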
Zoom in
Zoom out
You can also press Ctrl on your keyboard and use your mouse wheel/trackpad to zoom in and zoom out.
See for each variant which and how many cases follow a specific path. Find out where cases enter and leave the
process and whether activities outside the process exist.
Prerequisites
A BPMN model is linked to the dashboard or investigation. See more in Link a BPMN Diagram to a Dashboard
[page 328] and Link a BPMN Diagram to an Investigation [page 311].
To add a widget to a dashboard or investigation, select Create Widget or Add widget respectively, then choose
the widget type you need.
Example
Widget Settings
To open the widget settings, choose (more options) in the widget and select Edit. Then, apply your
changes.
You can set how many variants are displayed with the Variants slider.
Zoom in
Zoom out
You can also press Ctrl on your keyboard and use your mouse wheel/trackpad to zoom in and zoom out.
Deep dive into the variants of your process, explore their distribution, and compare them with each other.
Analyze conformance, cycle times, and occurrences of the individual activity sequences.
You can display one or several variants at a time and export a BPMN model based on selected variants.
To check whether activities of a variant conform with the flow of a BPMN model or to view hotspot activities, we
recommend using the Process Conformance widget.
Prerequisites
A BPMN model is linked to the dashboard or investigation. See more in Link a BPMN Diagram to a Dashboard
[page 328] and Link a BPMN Diagram to an Investigation [page 311].
To add a widget to a dashboard or investigation, select Create Widget or Add widget respectively, then choose
the widget type you need.
Widget Settings
To open the widget settings, choose (more options) in the widget and select Edit. Then, apply your
changes.
Example
View a process variant with conforming and non-conforming activities.
You can display occurrences and cycle times in the process flow. The following options are available:
• Occurrences
• Cycle time
The thickness of the paths within the process flow indicates the number of cases.
• Number of cases
Note
Only metrics of the aggregation type over cases are available here. Read more in section Where to
use metrics [page 473].
Example
Watch how to view the path of one or more variants by selecting or deselecting the variants.
Example
Watch how to select an attribute to display and switch from total numbers to percentages.
A variant filter narrows down data to the activities of one or more process variants.
Example
This filter option is available in addition to the filter function in the widget menu, read more in section Filter
process data [page 435].
Zoom in
Zoom out
You can also press Ctrl on your keyboard and use your mouse wheel/trackpad to zoom in and zoom out.
Related Information
4.5.4.1 Spreadsheet
Learn how to add a spreadsheet to your process analysis. With a spreadsheet, you can include additional
information on your data analysis, and perform quick calculations with functions and formulas.
To add a widget to a dashboard or investigation, select Create Widget or Add widget respectively, then choose
the widget type you need.
To open the widget settings, choose (more options) in the widget and select Edit. Then, apply your
changes.
Insert Options
When pasting content, the context menu provides the following options:
Formatting Options
When selecting content, the context menu provides the following options:
• Horizontal alignment: Left, Right, Center, Justify
• Vertical alignment: Top, Middle, Bottom
• Text style: Bold, Italic, Underline, Strike
• Any numbers, negative and positive, as float or integer, for example 1, -2, 1.45, -3.67
• Cell coordinates, for example A1
• Excel formulas, for example: ABS, ACCRINT, ACOS, ACOSH, ACOT, ACOTH, ADD, AGGREGATE, AND, ARABIC, ARGS2ARRAY, ASIN, ASINH, ATAN, ATAN2, ATANH, AVEDEV, AVERAGE, AVERAGEA, AVERAGEIF, AVERAGEIFS, BASE, BESSELI, BESSELJ, BESSELK, BESSELY, BETA.DIST, BETA.INV, BETADIST, BETAINV, BIN2DEC, BIN2HEX, BIN2OCT, BINOM.DIST, BINOM.DIST.RANGE, BINOM.INV, BINOMDIST, BITAND, BITLSHIFT, BITOR, BITRSHIFT, BITXOR, CEILING, CEILINGMATH, CEILINGPRECISE, CHAR, CHISQ.DIST, CHISQ.DIST.RT, CHISQ.INV, CHISQ.INV.RT, CHOOSE, CLEAN, CODE, COLUMN, COLUMNS, COMBIN, COMBINA, COMPLEX, CONCATENATE, CONFIDENCE, CONFIDENCE.NORM, CONFIDENCE.T, CONVERT, CORREL, COS, COSH, COT, COTH, COUNT, COUNTA, COUNTBLANK, COUNTIF, COUNTIFS, COUNTIN, COUNTUNIQUE, COVARIANCE.P, COVARIANCE.S, CSC, CSCH, CUMIPMT, CUMPRINC, DATE, DATEVALUE, DAY, DAYS, DAYS360, DB, DDB, DEC2BIN, DEC2HEX, DEC2OCT, DECIMAL, DEGREES, DELTA, DEVSQ, DIVIDE, DOLLAR, DOLLARDE, DOLLARFR, E, EDATE, EFFECT, EOMONTH, EQ, ERF, ERFC, EVEN, EXACT, EXP, EXPON.DIST, EXPONDIST, F.DIST, F.DIST.RT, F.INV, F.INV.RT, FACT, FACTDOUBLE, FALSE, FDIST, FDISTRT, FIND, FINV, FINVRT, FISHER, FISHERINV, FIXED, FLATTEN, FLOOR, FORECAST, FREQUENCY, FV, FVSCHEDULE, GAMMA, GAMMA.DIST, GAMMA.INV, GAMMADIST, GAMMAINV, GAMMALN, GAMMALN.PRECISE, GAUSS, GCD, GEOMEAN, GESTEP, GROWTH, GTE, HARMEAN, HEX2BIN, HEX2DEC, HEX2OCT, HOUR, HTML2TEXT, HYPGEOM.DIST, HYPGEOMDIST, IF, IMABS, IMAGINARY, IMARGUMENT, IMCONJUGATE, IMCOS, IMCOSH, IMCOT, IMCSC, IMCSCH, IMDIV, IMEXP, IMLN, IMLOG10, IMLOG2, IMPOWER, IMPRODUCT, IMREAL, IMSEC, IMSECH, IMSIN, IMSINH, IMSQRT, IMSUB, IMSUM, IMTAN, INT, INTERCEPT, INTERVAL, IPMT, IRR, ISBINARY, ISBLANK, ISEVEN, ISLOGICAL, ISNONTEXT, ISNUMBER, ISODD, ISOWEEKNUM, ISPMT, ISTEXT, JOIN, KURT, LARGE, LCM, LEFT, LEN, LINEST, LN, LOG, LOG10, LOGEST, LOGNORM.DIST, LOGNORM.INV, LOGNORMDIST, LOGNORMINV, LOWER
For information on how to use these functions and formulas, read more in Microsoft's documentation for Excel formulas and functions.
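For example, entering =AVERAGE(A1:A3) in a cell calculates the mean of the values in cells A1 through A3, and formulas can reference other cells, as in =IF(A4>100, "high", "low"). The cell references here are only an illustration.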
4.5.4.2 Text
Use the text box to include additional information about your process analysis. Common formatting options are available.
To add a widget to a dashboard or investigation, select Create Widget or Add widget respectively, then choose
the widget type you need.
Widget Settings
To open the widget settings, choose (more options) in the widget and select Edit. Then, apply your
changes.
• Insert a link
4.6 Filters
SAP Signavio Process Intelligence allows you to filter the process data to focus on relevant information. Filters
can be combined on multiple levels and are immediately applied to the displayed data, providing a more
efficient and tailored analysis.
Event-level filters modify cases by including or excluding events based on the applied filter criteria. Event-level
filtering can leave behind cases with empty event lists.
The following case-level configuration filters out all cases with a 'Receive Customer Order' event.
However, the same configuration on event-level returns the cases without the 'Receive Customer Order'
events in the result set.
Notice how the same cases are listed, but there are fewer events for each case.
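Expressed in SIGNAL, the case-level variant could look roughly like the following sketch. It assumes the behavior syntax from the SAP Signavio Analytics Language Guide, where a pattern with a single activity matches all cases that contain that activity:
Example
SELECT case_id
FROM THIS_PROCESS
WHERE NOT event_name MATCHES ('Receive Customer Order')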
Filters can be applied at the following levels:
• Insights level, where filters are applied to the automated insights you create in the Auto-insights tab.
• Dashboard level, where the filters impact data across all pages and widgets or a specific page.
• Investigation level, in which the filters are applied to all chapters and widgets.
• Chapter level, where filters affect all widgets in a specific chapter of an investigation.
• Widget level, which restricts the filter's effect to that particular widget.
• Value case level, where the filter is applied on a particular value case calculation.
Each of these levels can have multiple filters. Filters across all levels are combined using the logical AND
operator.
The order in which the data is filtered depends on the type of filter and the level at which the filter was applied, not on the order in which the filters were created or displayed in the user interface.
Filter Result
The number of cases included in the filter result is displayed in the investigation header. View an example
below.
Example
For this investigation, three filters were created. The result includes 185 cases, which is 21% of the cases.
On widgets, the filter icon in the widget header indicates how many filters were created. View an example
below.
Two filters are applied to the widget. When you hover over the filter icon, the absolute and percentage number of cases are displayed.
Related Information
Learn how to customize data analysis by creating filters on dashboards, investigations, widgets, and Insights.
Context
Use filters to adjust which events and cases to include or exclude in the process analysis.
Note
The dashboard owner can choose whether other users can save changes to filters.
Related Information
Get to know the different types of filters available in SAP Signavio Process Intelligence.
Activities
Then, specify whether to include or exclude, and pick the events. To find cases that include certain events while
excluding other events, two separate filters are needed.
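As a rough sketch, the combined effect of such a pair of filters corresponds to a single condition that joins an inclusion and an exclusion; the activity names are illustrative, and the pattern syntax assumes SIGNAL's MATCHES behavior patterns:
event_name MATCHES ('Receive Customer Order') AND NOT event_name MATCHES ('Cancel Order')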
Example
Advanced Filter
Filter for data using your own conditions, written in SIGNAL. For example, you can filter for all Miami
purchase orders with the status 'Delivered' by applying the condition ("ORDER Status"='Delivered' AND
"City"='Miami').
Behavior
Use this filter to find cases with activities that show a certain behavior. For example, you can focus on cases
that start or end with certain activities, include certain activity sequences, or include or exclude rework.
Starts/ends with: Show cases that start or end with a certain activity.
Rework: When searching for cases with rework, you can specify how often activities were repeated.
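As a rough SIGNAL sketch of such behavior conditions (the activity names are illustrative; ^ anchors a pattern to the start of a case, and ~> means "eventually followed by", as described in the SIGNAL documentation):
event_name MATCHES (^ 'Receive Customer Order')
event_name MATCHES ('Approve Order' ~> 'Ship Order')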
Example
Case ID
To get the ID of the cases to be analyzed, create a Case Table and copy the ID from there.
Note
Make sure to only enter the ID and only enter an ID once. If you enter leading or trailing spaces or enter an
ID multiple times, the filter doesn't return any result.
For more information on how to create a Case Table widget, see Case Table [page 345].
Date
Use this filter to find cases that have a certain status during a specified date range.
For filtering on case level, specify whether to find cases that are started, running, ended, or started and ended. Then, pick one of these date range options:
• Rolling date range: Display data from a fixed period back in time up to today's date. If you set the filter to 10 days, for example, data from the last 10 days is displayed.
• Custom date range: Display an exact date range by selecting a start date and an end date.
When filtering on event level, you directly pick the date range option.
Since a case potentially spans many days, weeks, or other periods of time, you have to define exactly what
you're searching for.
Started: Find all cases where the first event happens in the specified period of time.
Running: Find all cases that started before or in the specified period of time, but aren't yet finished.
Ended: Find all cases where the last event happened in the specified period of time.
Started and ended: Find all cases that started before or in the specified period of time and that are finished.
Attribute
Choose an attribute to find cases where the attribute does or doesn't have a certain value.
The data type of the selected attribute determines the filtering options. For each filtering option, there's an
example below.
Example
For choice attributes, you can specify whether to include or exclude them.
For durations, you can specify a minimum, maximum, or both, as well as the period of time.
For dates, you can specify the start date, end date, or both.
Filters are listed in the title. Select to hide or to show the filter.
1. In your auto-insights page or widget, select (Filter applied) to open the filter user interface.
2. Filters are listed in the filter overview pane. Select to hide or to show the filter.
3. Close the filter user interface with Ok.
4. Review your changes and confirm with Save.
Introduction to insights, one of the data analysis options in SAP Signavio Process Intelligence.
An insight captures your data and discoveries at a specific point in time. You can think of an insight like a note,
where you write down your findings and if you wish, share them with your teammates.
Insights can be created manually, or generated automatically. This section explains how to create and save
insights, who can work with them, how to manage them, and the underlying technology behind the algorithms
that generate them.
Manual insights are ones that you capture yourself. This section explains how to create and save manual insights for investigations, dashboards, and widgets.
Learn how to create an insight in an investigation, dashboard, or widget in SAP Signavio Process Intelligence.
Context
Note
Access to the Initiatives feature requires a license for SAP Signavio Process Transformation Manager.
For more information, see User Administration, Authentication, and Authorization in the Security Guide for
SAP Signavio Process Transformation Manager.
Procedure
1. To open your process, choose (Processes) in the sidebar, then select your process from the list.
2. In Investigations or Dashboards, select the investigation or dashboard in which you want to create an
insight.
3. To add a new insight:
a. If your investigation, dashboard or widget doesn't already have saved insights, select the icon.
b. If your investigation, dashboard or widget already has saved insights, select Create Insight.
The insight creation dialog opens. The dialog displays the insight details. Any filters (whether case or event
level) that were applied to the investigation or dashboard at the time of the insight's creation are displayed
under Applied Filters.
4. Edit your insight's settings as needed. You have the following options:
The insight is saved automatically. The saved insight includes an icon. Selecting the icon opens the insight's associated investigation, dashboard, or process in a new window.
Note
• In some cases, a data snapshot is saved along with the insight. This snapshot shows the data at the specific time that the insight was created and can't be edited afterwards. In SAP Signavio Process Intelligence, data snapshots are captured in Breakdown, Over time, SIGNAL table, Value, and Correlation widgets. For more information, see Data Snapshots and Highlights [page 471].
5. Select to exit the insight creation dialog. To delete the insight, select (Delete).
Related Information
4.7.2 Auto-Insights
This section explains how to use auto-insights to discover anomalies and trends in your process data.
The generation of automated insights is performed by integrated algorithms. Automated insights are
generated based on the valid case-level metrics in your metric collection, and all case-level attributes in the
event log. The cycle time metric is available by default in every process. So, you can generate cycle time-based
insights instantly. The more metrics in your metric collection, the longer it can take to generate insights.
Comparison of Auto-Insights in Investigations, Dashboards and the Auto-Insights Tab [page 459]
Learn how auto-insights differ between investigations or dashboards, compared to the Auto-insights
tab.
Context
Filters applied to the investigation or dashboard are taken into account when generating insights. For more
information, see Filters [page 435].
By default, insights are generated for all valid case-level metrics in your metric collection, and all case-level
attributes in your event log.
Procedure
1. To open your process, choose (Processes) in the sidebar, then select your process from the list.
2. The process overview opens. In Investigations or Dashboards, select the investigation or dashboard for
which you want to create an insight.
3. To generate insights, select Auto-insights. If you'd like to generate insights only for specific metrics and
attributes, select (Automated Insights settings). Choose your settings, then confirm with Apply.
Results
Insights generation starts, and a progress bar displays the insight generation progress. The time needed
for generation depends on the volume of the data set and whether the data was cached previously. In the
meantime, you can visit other pages, and the generation will continue in the background.
You can use the dropdown list to group the insights by metric. The insights are then grouped by the valid
metrics from your metric collection from which they have been generated. By default, it is set to Ungrouped.
If any filters were applied to the investigation or dashboard at the time an insight was generated, they are
displayed in the insight's settings under Applied Filter.
For information about saving a generated insight, see Saving Auto-Insights [page 460].
Related Information
Learn how to generate auto-insights in the Auto-insights tab of your process. Insights can be generated here
even if there aren't any investigations or dashboards yet.
Context
Note
This page describes how to generate insights using the existing All Auto-insights feature. If you are a
participant in the Text to Insights beta release, see AI-assisted Process Analyzer (Beta).
By default, the Auto-insights tab generates insights for all valid case-level metrics in your metric collection, and
all case-level attributes in your event log. If you add or change a filter and save it, new insights are generated to
reflect the new filter. Filters applied in the Insights tab are not applied to investigations or dashboards. For more
information, see Filters [page 435].
Procedure
1. To open your process, choose (Processes) in the sidebar, then select your process from the list.
• If you have access to more than one process view, select (Process view) to choose which one to use
for auto-insights generation.
• To generate insights only for specific metrics and attributes, select (Automated Insights settings).
Choose your settings, then confirm with Apply.
Results
Insights generation starts, and a progress bar displays the insight generation progress. The time needed
for generation depends on the volume of the data set and whether the data was cached previously. In the
meantime, you can visit other pages, and the generation will continue in the background.
You can use the dropdown list to group the insights. The default setting is Ungrouped. Selecting Group by Metric displays a list of all valid metrics in the metric collection for which insights have been generated.
Enter a keyword in the Search field to find insights that include specific terms.
For information about saving a generated insight, see Saving Auto-Insights [page 460].
Learn how auto-insights differ between investigations or dashboards, compared to the Auto-insights tab.
Process view
• Investigations and dashboards: Insights are generated based on the process view that is configured for the investigation or dashboard. For investigations, you can only change the process view if you have the manager or analyst role for a process.
• Auto-insights tab: Insights are generated based on the process view that is assigned to you. When multiple process views are assigned to you, you can choose which one to use for insights generation.
Filters
• Investigations and dashboards: When filters are applied to the investigation or dashboard, insights are generated only for the filtered data.
• Auto-insights tab: When filters are applied, insights are generated only for the filtered data.
Related Information
Learn how to save the automated insights you've generated to the Auto-insights tab, or turn them into a widget
on an investigation or dashboard.
Prerequisites
Context
Note
Access to the Initiatives feature requires a license for SAP Signavio Process Transformation Manager.
For more information, see User Administration, Authentication, and Authorization in the Security Guide for
SAP Signavio Process Transformation Manager.
Procedure
• To save your insight, select the icon. You have the following options:
• Save as an insight:
a. Select Save to insights. The insight creation dialog opens in a side panel.
b. If needed, configure the insight's Status, Initiative, Details, Business Impact, and Priority.
The saved insight includes a icon. Selecting the icon opens the insight's associated investigation,
dashboard, or process, in a new window.
• Save your insight as a widget:
a. Select Save as widget.
b. Choose whether to save the widget to an investigation or dashboard, then select the specific location
as required.
c. Confirm with Save.
The widget is created and you can further explore the data in it.
Note
Learn about the algorithms that generate auto-insights in SAP Signavio Process Intelligence.
Note
• The algorithms work only with valid metrics in the metric collection. The algorithms don't work with
metrics in the metric library, as they are not valid by default.
• The algorithms work only on case-level metrics.
• A maximum of 20 insights on outliers are displayed per algorithm.
Anomaly detection is based on the identification of outliers. This algorithm defines an outlier as a value that
falls outside the range of the weighted mean +/- twice the weighted standard deviation (95% significance). To
identify outliers, the algorithm groups each metric based on selected attributes. The algorithm first calculates
the aggregated metric value for each group. The second calculation is the mean of the metric values over all
groups, weighted by the number of cases underlying each group.
Outliers are displayed in a bar chart. The chart displays the metric values as bars, with its value axis on the left.
The number of cases is displayed as a line, with its value axis on the right. To view a chart displaying an outlier,
select the example below.
Example
• Only one insight is created per metric value and attribute pair.
• An attribute must have 2 to 500 distinct values to be considered for insights generation. The number of
rows isn't limited.
This algorithm identifies outliers in time series data, which can indicate important points in time where
something unusual happened. This algorithm defines an outlier as a data point that falls outside the range
of the mean +/- twice the standard deviation (95% significance). Note that this algorithm doesn't incorporate
any weighting.
The algorithm creates a time series for each metric selected. The time series sequence is automatically
determined as days, weeks, or months, based on the total time range.
To identify outliers, the algorithm applies a sliding window to the time series and performs a z-score analysis.
The sliding window is roughly 10% of the size of the total number of time buckets.
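In formula terms (an interpretation of the description above): for a bucket value $x_t$ and its sliding window $W_t$ covering roughly 10% of all buckets, the point is flagged as an outlier when

$$|z_t| = \left|\frac{x_t - \mu_{W_t}}{\sigma_{W_t}}\right| > 2,$$

where $\mu_{W_t}$ and $\sigma_{W_t}$ are the mean and standard deviation within the window.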
The insight includes a graph, with the outliers highlighted as blue points. You can view more details about an
outlier by hovering over it. You can use the Time slider to narrow the displayed time frame.
A gray area is also shown: the statistical acceptance threshold above which a spike in the time series is considered an outlier or anomaly. The gray area visualizes the maximum rolling deviation from the average, based on the z-score criterion of 2.0 defined by the algorithm. The highlighted outliers are the ones that breach the gray area threshold.
Note
• The gray area and outlier visualizations are not displayed in the graph if the insight is saved as a widget.
• An attribute must have 2 to 500 distinct values to be considered for insights generation. The number of
rows isn't limited.
Correlation Algorithm
The algorithm calculates the correlation coefficient for all pairs of case-level attributes with numeric values, and also between case-level metrics in the metric collection. Custom attributes are also considered. Only strong correlations and anti-correlations are returned. Insights are provided when the calculation produces one of the following results:
Note
Related Information
This section explains how to navigate, view, edit, and delete your insights.
Prerequisites
Procedure
1. To open your process, choose (Processes) in the sidebar, then select your process from the list.
2. To view your saved insights, select the icon. You can find this icon:
• Use the Filter By and Sort by dropdown lists to filter and sort your insights as needed. The default sort
setting is Sort by Last Changed.
• The creator and last editor of the insight, along with their respective timestamps, are shown in the
insight details view. The time or date displayed is based on your current timezone.
• To share an insight, choose the insight you want to share, then select to manage access and share
it with other members. Users with access to the process view for which the insight was created already
have access to the insight, so you don't need to share it with them again.
So that you don't lose track of an insight's context, insights that were captured manually link back to the investigation or dashboard where they were created.
Context
Procedure
1. To open your process, choose (Processes) in the sidebar, then select your process from the list.
2. Navigate to your insights management page by selecting Insights.
3. Select Saved to view all saved insights, then choose the insight.
Learn how to edit insights, and which insight properties can be edited.
Prerequisites
Note
Access to the Initiatives feature requires a license for SAP Signavio Process Transformation Manager.
For more information, see User Administration, Authentication, and Authorization in the Security Guide for
SAP Signavio Process Transformation Manager.
Prerequisites
Context
Procedure
Results
All users who have access to your process in SAP Signavio Process Intelligence can work with insights as
follows:
Note
Access to the Initiatives feature requires a license for SAP Signavio Process Transformation Manager.
For more information, see User Administration, Authentication, and Authorization in the Security Guide for
SAP Signavio Process Transformation Manager.
If filters have been applied to an investigation, dashboard, widget, or the Auto-insights tab, the filters are stored
in all created insights. To view the filters applied to an insight, you need to open the insight.
Changes to filters aren't applied to existing insights. When you change a filter, you need to create new insights.
Process views control the data for which users can create insights. Also, users can only access insights for process views to which they have access, or which have been shared with them explicitly, either directly or as part of an initiative.
If insights exist for a process view, changes to the process view aren't applied to the existing insights. So, the data snapshot is still visible to anyone with access to the process view. If you want to restrict a process view, we recommend creating a new process view and re-assigning the users.
Related Information
4.7.6 Comments
Learn how to view comments, how to write or delete comments, and how to mention others.
Select an insight to write, edit, or delete comments. Then, select Comments to display all comments related to that insight.
• Use Status to filter comments that are open, resolved, or rejected, or to view all comments.
• To write a comment, enter your text in the Add a comment field and select Comment.
Your comment will now appear in the list.
• Use @ to mention someone in a comment.
• To resolve a comment, select Resolve.
• To reply to a comment, select Reply.
You can also receive notifications about insight comments. Select the links below to learn more.
Related Information
Note
Access to the Initiatives feature requires a license for SAP Signavio Process Transformation Manager.
For more information, see User Administration, Authentication, and Authorization in the Security Guide for
SAP Signavio Process Transformation Manager.
When you add insights to the Breakdown, Over time, SIGNAL table, Value, and Correlation widgets, a snapshot of the data is included. Other widgets don't provide a visual snapshot of the data. Data snapshots are also included when generating automated insights.
Example
Note
This option is only available for Breakdown and Over time widgets.
To draw attention to specific data displayed on a widget, you can create an insight with a highlighted data area.
For that, select the data that you want to highlight and create the insight. Your highlight and current zoom level
are saved to the data snapshot.
Example
How to work with business metrics (KPIs) in SAP Signavio Process Intelligence to evaluate, measure, and
benchmark the performance of business processes.
A metric is a quantifiable measure that you can use to evaluate and track the performance of your processes. Such a quantifiable measure can be, for example, the average cycle time of your cases.
Metric Collection
Each process has a metric collection containing all metrics that are available for process analytics. By default,
the Average cycle time metric is available in every process.
You can add metrics to the metric collection by assigning them from the metric library or by creating your own
metrics.
To access the metric collection, open your process and choose the Metrics tab.
Metric Library
Our metric library provides you with preconfigured metrics. Some of them are agnostic, others are tailored to
specific business processes. Some metrics work out of the box, others can contain variables or SIGNAL code
that you need to customize.
Prerequisites
You can set up and manage metrics for a process when the following prerequisites are met:
• The process contains data. Without data, metrics you import or create will remain invalid until data is
uploaded.
• You have SIGNAL knowledge to create your own or customize existing metrics.
All users who have access to your process in SAP Signavio Process Intelligence can work with metrics as
follows:
Add metrics from the library: yes / yes / – (see Add from metric library [page 489])
Edit and delete metrics: yes / yes / – (see Edit and delete metrics [page 492])
You can configure widgets to display metric values. The widget in which a metric can be used depends on the
metric aggregation type. The following table describes the different options.
Note
You can only select metrics of an aggregation type supported by that widget. Metrics aggregated over
cases will not appear in the list of available metrics for a widget supporting aggregation over events, and
vice versa.
Over cases: Metrics of this type can be displayed in several widgets.
Example
View a Breakdown widget with the average cycle time metric.
Over events: Metrics of this type can be displayed as values in the Process Discovery [page 347] widget. Read more in section Explore the variants [page 349].
Example
View a Process Discovery widget with the average customer satisfaction (NPS) for each activity.
The metric aggregation types determine how metrics are calculated and where they can be used.
Over Cases
Metrics of this type return one value for a group of cases, for example, the average cycle time for all cases in the
month of July.
Aggregation input is the standard case level event log, where each case is one record in the log.
When used with variants, for example in the Variant Explorer widget, the metric returns the value for the cases
that make up a variant.
Example
View a metric that calculates the average cycle time from the first to the last event
SIGNAL code:
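The original code sample isn't reproduced here. As an illustrative sketch only, assuming the standard end_time event attribute and the FLATTEN event view (check the SIGNAL documentation for the exact form of the default cycle time metric):
AVG((SELECT LAST(end_time) FROM FLATTEN(THIS_PROCESS)) - (SELECT FIRST(end_time) FROM FLATTEN(THIS_PROCESS)))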
Over Events
Metrics of this type return one value for each event (activity) in the event log, for example, the average
customer satisfaction for events.
Aggregation input is the flattened event log, where each event is represented as one record in the log.
When displaying an over events metric in the Process Discovery widget, the metric value is returned for each
event in the process diagram.
Example
View a metric that calculates the percentage of automated events, based on the definition of the status
'Automated'
SIGNAL code:
AVG(IF("Automation Status"='Automated',1,0))*100
The IF expression maps each automated event to 1 and every other event to 0. Averaging over the flattened event log then gives the fraction of automated events, and multiplying by 100 turns it into a percentage.
If filters are applied to investigations, dashboards, or widgets, metrics return the value for the filtered set of
cases. When you change a filter, metric values are immediately updated.
Related Information
All metrics in the metric collection are available for analyzing and mining a process. Read here how to manage
the metric collection.
Each process has a metric collection containing all metrics that are available for process analytics. By default,
the Average cycle time metric is available in every process.
You can add metrics to the metric collection by assigning them from the metric library or by creating your own
metrics.
To access the metric collection, open your process and choose the Metrics tab.
This page describes the activities you can carry out to manage the metric collection.
Example
An example of the blue line highlight.
To search for metrics by their name, use the search field on the top of the metric list.
Sort Metrics
To sort the metrics, select the table headers. You can sort by the following:
The progression of a metric visualizes the history of that metric's value. It shows the value at points where the
user uploads and updates a dataset in a process. For more information, see Metric Progression [page 482].
The Result column shows the metric result calculated over the event log. If a metric direction is specified, an
arrow showing the metric's change (trending up or trending down) is displayed. The arrow color is green or
red, depending on whether the metric change is defined as positive or not. For more information about metric
direction, see section General Settings [page 488].
Example
Share Metrics
You can link directly to a metric and share this link with others.
In the metric collection, select (More) > Copy link. The link is then copied to your clipboard. Following this
link takes you to that metric's details.
To view and configure the settings of a metric, open the metric settings page by selecting the metric from the
collection.
Prerequisites
In order for a metric to have its progression displayed, the metric must be in your metric collection, must be
valid, and must have a calculated value.
About Progression
The progression of a metric visualizes the history of that metric's value. It shows the value at points where the
user uploads and updates a dataset in a process. Progression is aggregated by the selected time interval (for
example, day, week, month). The progression of a metric can be viewed in the Progression column in the metric
collection, and in more detail within the metric settings.
The metric collection's Progression column shows the progression of each metric listed. A simplified preview of
the line chart is displayed, showing up to six of the latest values. The last value of the metric is also displayed,
along with any change over its preceding value. The last value of metric progression is an aggregation for the
selected interval, and may vary from the latest metric result.
To view an example of the Progression column in the metric collection, select the example below:
Example
Selecting your metric from the metric collection opens its settings panel. The Progression tab allows you to
view and edit your metric's progression settings, as described below:
Threshold: Add threshold values to the graph. From the drop-down, choose which type of threshold to add:
• Line: Adds a single horizontal line to the graph. Enter a number into Value to determine where on the
vertical axis the line appears. You can also provide a Label for this line.
• 2 regions: Functions like the Line threshold, but also adds color to the horizontal regions above and below
the threshold line. You can use Color above and Color below to change each region's color.
• 3 regions: Functions like the 2 regions threshold, but also adds another threshold line, dividing the chart
into three regions.
An interval selector allows you to choose between levels of precision for the time axis. Select Day, Week, Month,
Quarter, or Year to calculate average values for the chosen interval.
A line chart is displayed, showing the metric's average value over the selected interval. Hover over a data point to see its value in a tooltip.
Note
The metric's actual value may differ from the latest value in the line chart, since the latest value is an
average of values in the interval.
For information about your metric's progression history, select (Overview of all the metric progression
updates). You can export the displayed information with Export to CSV .
To view an example of the Progression tab in the metric settings, select the example below:
Example
Our metric library provides you with preconfigured metrics. Some of them are agnostic, others are tailored to
specific business processes. Some metrics work out of the box, others can contain variables or SIGNAL code
that you need to customize.
Find a list of the default metrics and a description of their variables in section Metrics overview [page 486].
View Labels
Recommended: In the process settings, you can choose a system type and a set of process types. A metric is labeled as 'Recommended' if its system type matches your chosen system type and its process type matches any of your chosen process types.
Analysis Objective: Filter metrics by use case, for example to analyze automation rates, cycle time, or conformance.
If you've already chosen a system type and a set of process types for your process, selecting Recommended
Metrics filters metrics based on your choices.
To search for metrics by their name, use the search field on the top of the metric list.
Sort Metrics
To sort the metrics, select the table headers. You can sort by the following:
System Type: Sort the metrics by the systems for which the metric is relevant, for example SAP Ariba.
Analysis Objective: Sort metrics by use case, for example to analyze data for the level of automation, the cycle time, or conformance.
Select a metric to preview it. The preview opens in a side panel displaying the following metric settings:
You can already assign your values to variables here or do that at a later point in time in the metric collection.
You can find a description of metric variables in section Metrics overview [page 486].
If you've specified the system from which your process data originates, default values are assigned to the
variables of recommended metrics.
Related Information
List of process-independent and process-specific business metrics (KPIs) in SAP Signavio Process Intelligence
• Process-Agnostic Metrics
• Metrics for Acquire to Onboard (SAP S/4HANA)
• Metrics for Acquire to Onboard (SAP ECC)
• Metrics for Attract to Acquire Talent (SAP SuccessFactors)
• Metrics for Incident to Resolution (ServiceNow)
• Metrics for Inspect to Quality (SAP S/4HANA)
• Metrics for Inspect to Quality (SAP ECC)
• Metrics for Invoice to Cash (SAP S/4HANA Cloud Public Edition)
• Metrics for Invoice to Cash (SAP S/4HANA)
• Metrics for Invoice to Pay (SAP S/4HANA)
• Metrics for Invoice to Pay (SAP ECC)
• Metrics for Issue to Resolution (Jira Service Management for Cloud)
A metric has a range of associated settings. View and configure these using Metric Settings.
To access Metric Settings, open the Metric Collection for a process and select a metric from the list. The
metric's settings are arranged into sections:
• General
• Signal
• Progression
• Usage
• History
You can navigate directly to a section by selecting the tab bearing its name.
The following sections describe the settings and how you can configure them.
• Enter or update your metric's name and description in the Name and Description fields.
The metric name must be unique in the associated process. To avoid metrics being considered duplicates, avoid very similar names, such as 'Metric 1', 'Metric 1', 'Metric {1}', and 'Metric_1'.
• Select whether your metric should be aggregated over cases or over events using the Aggregation type checkboxes.
• The Unit dropdown list allows you to specify your units as duration, currency, or suffix, if needed.
• You can enable rounding and select the rounding type using the Rounding dropdown list.
• The Metric direction dropdown list allows you to specify which type of metric change is considered positive.
Example
A t-shirt store uses a metric to track how many of their orders are canceled per month. They aim to
reduce the number of canceled orders. As such, they select their metric direction as Positive is down.
The t-shirt store uses a different metric to track their revenue every month, which they aim to increase.
For this metric, they select the metric direction Positive is up.
In the Signal section, the code of the metric is displayed along with a listing of the process attributes and
variables. The attributes are organized into Case level and Event level. In some cases, the variables are labeled.
Variables from the metric library are labeled Library. Empty variables are labeled Empty. Invalid variables are
grayed out and cannot be used.
• You can search the list of attributes by typing into the Search textbox. All attributes not matching the
search criteria are filtered out.
• You can update the metric code by altering it in the textbox and then selecting Save.
You can go directly to an investigation or dashboard that uses the metric via its corresponding Link.
Metrics maintain a history of the changes made to them, which is available in the History section. You can
explore several aspects of the metric's history:
• The date and time of each change is displayed, along with the avatar of the user who made it. Hovering
over the avatar shows a tooltip containing that user's email address. The nature of the change is displayed
in bold text. The metric's result appears on the right.
• Expand the change to see details of how the metric was modified, including the name of the changed
property, and its original and updated values.
Adding metrics to your process allows analysts to use them for process mining.
Note
If you want to have variable values assigned automatically to your metrics, select a source system first. For
more information, see Auto-Assigning Variable Values Based on the Source System [page 503].
1. To open your process, choose (Processes) in the sidebar, then select your process from the list.
2. For the metric to be duplicated, open its action menu and choose Duplicate.
The dialog for adding a metric opens.
3. You can edit the metric, read more in section Edit a metric [page 492].
4. Confirm with Save.
The metric is added at the top of the metric collection.
Import Metrics
Note
You can only import metrics available as JSON files. For example, the metric export creates such files.
Read how to export metrics in section Export and import metrics [page 493].
1. To set up the Value widget, follow the instructions in section Add and configure the widget [page 349] and
select the configuration option SIGNAL code.
2. After adding SIGNAL code, choose Save as New Metric.
The code is added to the metric collection of your process. Users can select the metric when they
configure new widgets.
If you select Save as New Metric again, another metric is created.
Related Information
Learn how to edit the query code, the unit, or other settings of a metric, and how to delete metrics from your
process.
• You can edit any metric that you have added to your process.
• Any change applies only to the metric added to your process. The metric in the library remains unchanged.
Edit a Metric
Note
Deleting a metric can't be undone and breaks the widgets in which the metric is used.
Related Information
Learn how to share metrics between SAP Signavio workspaces using the export and import functions.
With these functions, you can use your metrics in different workspaces.
Export Metrics
Import Metrics
Note
You can only import metrics available as JSON files. For example, the metric export creates such files.
Resolving Conflicts
In some cases, conflicts are created when importing metrics. For example:
• A JSON file you are importing contains duplicate metrics within the file.
• A metric you are importing is a duplicate of one of your existing metrics.
• You are importing duplicate metrics from the metric library.
• You are importing invalid metrics (for example, a metric has missing attributes).
In case of any conflicts, the import dialog prompts you to resolve them.
1. If there are conflicts within a JSON file you are importing, the import dialog displays details about the
conflicts. To resolve the conflicts, you must open the JSON file, make the required changes (for example,
renaming duplicate metrics), and save your changes. To continue, select Import, then select the amended
JSON file.
2. If several tabs appear, ensure the Conflicts tab is selected.
3. If a metric you're importing conflicts with an existing metric, a section named Conflicted metrics is
available. Expand this section.
4. For each conflict, the original name is shown with a New metric name beside it. You can change the
suggested name to something else if you prefer.
5. Once all conflicts are resolved, select Confirm, then Import.
Note
An imported metric and an existing metric might also be identical in both name and content. This is called a
match. A Match tab appears during import, allowing you to review any matching metrics.
Related Information
Learn how to configure widgets to display the output of metrics, or add a metrics bar to an investigation. Also
read how to delete metrics from the metrics bar.
Note
You need the manager or analyst role for a process to use these functions.
To configure widgets to display the output of one or more metrics, follow the instructions in section Add widgets to an investigation [page 337].
The metrics bar contains widgets that are preconfigured with metrics. You specify the widgets that are
displayed in the bar.
Note
A user with the manager role needs to add metrics to the process before you can select metrics for the
metrics bar. Read more in section Add Metrics to a Process [page 489].
• The metrics bar is always located at the top of an investigation. You can't move it.
• You can change the order in which the metrics are displayed.
• The widget for each metric is preconfigured. You can't edit these widgets.
3. Under Configure metrics bar, select (delete) for the metric you want to remove.
4. Confirm with Done.
The metric is no longer displayed in the investigation.
A process view can restrict access to data in such a way that a metric cannot deliver results. In this case, the
metric becomes invalid. If multiple process views are assigned to you, you can check for each process view if
the metrics in the metrics collection are valid.
Related Information
Sometimes metrics work out of the box, other times they require configuration.
• In the metric collection, hover over the Invalid label of a metric to view a tooltip suggesting solutions.
• In the metric settings, the cause of the problem is displayed in an error message below the metric's title.
Missing Variable Values
The metric query refers to variables for which no values were set.
To fix the metric, add missing variable values or customize the existing ones. For more information, see
Assigning Values to Variables [page 503].
Incorrect Query
To fix the metric, customize the SIGNAL query. For support, read more in the following sections:
A metric can be invalid because you do not have access to the queried data. In this case, contact the user with
the manager role for your process and request access to the data. Access to data is provided using process
views, read more in section Define access to process data with process views [page 26].
If metrics are imported into the metric collection before process data is uploaded, all metrics will be marked
with the status 'Invalid'.
Also, a warning message appears in the metrics collection guiding you to import the process data and then
choose your system type in the process settings.
4.8.10 Variables
A variable is an entity where information can be maintained and referenced, for example in the SIGNAL code
editor. This section describes the types of variables you can use in your metrics, and how to create them.
Learn about library variables and how to view and edit them.
Context
A library variable is a variable coming from a metric template created while adding metrics from the metric
library. The variable can be used in a metric's SIGNAL interface and shared between multiple metrics.
Note
Procedure
Custom variables are variables that you create yourself. This section describes how to create and manage
custom variables, and which data types and expressions you can use.
To access your custom variables, open your process, then select Variables. Your custom variables are
displayed, under the following headings:
Select (More) to view more options such as (Duplicate), (Export), (Copy link), and (Delete).
Context
This topic explains how to create a custom variable in the Variables tab of your process.
If you enter a non-existent variable into the SIGNAL code editor, the dialog displays a Create Variable button.
Selecting this button opens the custom variable creation dialog described below.
For more information about which types of expressions and data types are permitted in variables, see
Permitted Expressions and Data Types in Variables [page 500].
Procedure
Related Information
Procedure
Related Information
Learn which expressions and data types you can use in variables.
Expressions in Variables
Expression (OR): IncidentPriority = 1 OR "IncidentCategory" = 'Urgent'
Data types
• Strings
• Numbers stored as double precision floating point
• Timestamps stored with millisecond precision, without time zone information.
• Durations stored with millisecond precision
• Booleans
Both case and event attributes can be NULL, indicating the absence of a value or an unknown value.
To access your custom variable's settings page, open your process, select Variables, then select a custom
variable from the list. The variable's settings page opens, displaying the following sections:
You can navigate directly to a section by selecting the tab bearing its name.
The following sections describe the settings and how you can configure them.
General
• Update the name your organization is using for this variable in Business Name. The variable name must be unique in the associated process. To avoid variables being considered duplicates, avoid very similar names, such as 'Variable 1', 'Variable 1', 'Variable {1}', and 'Variable_1'. Note that your variable's Reference Name is generated from the business name at the time the variable is created. If you update Business Name later, Reference Name remains the same.
• Update the Description field.
Value
In the Value section, a text panel is displayed along with a listing of the process attributes. The attributes are
organized into Case level and Event level.
• You can search the list of attributes by typing into the Search textbox. All attributes not matching the
search criteria are filtered out.
• Update the variable value by altering it in the text panel, then selecting Save.
Usage
In the Usage section, you can see which metrics or widgets a variable is used in. Select to open a metric or
widget.
History
The History section displays a list of the times and dates at which the variable was created or changed. Select an entry to see details of how the variable was modified, including the name of the changed property, and its original and updated values.
Context
If a metric's code has variables, you need to assign values to them, like SIGNAL expressions, thresholds,
events, or attributes. Otherwise, the metric can't determine the KPI.
By specifying the system from which your process data originates, default values are provided to metrics
variables automatically.
Note
This function doesn't change the metrics created before a source system was specified.
Procedure
Changes are saved automatically. When you add metrics to your process, matching values are
automatically assigned to variables in the metric. The assigned value can come from one of two places:
• If the variable doesn't already exist in the metric collection, the variable takes the template value for that system type.
• If the variable already exists in the metric collection, its existing value is used.
Note
If you alter the variable's value, that change propagates to all objects referencing the variable.
Procedure
The metric settings appear. If the query has variables, each appears in the Signal section in a text field
below the query.
3. Enter the variable values, for example SIGNAL expressions, thresholds, events, or attributes.
4. Confirm with Save.
Procedure
The dialog for editing variables opens, displaying the library variables of all metrics in the process.
3. Search for the variable you wish to add values to. You can search by variable name, filter empty variables,
and see in which metrics a variable is used. Select to view the settings of a specific metric.
Autocompletion
Autocompletion helps you to quickly write SIGNAL queries while minimizing typing and syntax errors.
Autocomplete support is location-specific. For example, if your cursor is in the SELECT area of the query,
suggestions include variables, attributes, and metrics. In other areas of the query, autocompletion supports
query operators, functions, and expressions.
To trigger autocompletion, press Ctrl + Space for Windows or control + Space for Mac. This opens a list and you
can select a completion. The list of items is filtered and narrowed down as you type. The SIGNAL editor in the
widget builder provides the option to show the necessary keyboard shortcut ( Show keyboard shortcut).
On a Mac, if the shortcut doesn't work, this combination of keystrokes is probably mapped to some other
function. Check and change the mapping under System Settings > Keyboard > Keyboard Navigation > Keyboard
Shortcuts.
Color Scheme
The color scheme for code simplifies reading and writing queries.
• keywords are pink, for example, SELECT, GROUP BY, and FROM
• functions are yellow, for example, COUNT, MAX, and FLATTEN
• identifiers are green, for example, attributes and table names
• comments are orange
The SIGNAL code editor provides a linter that parses the code to detect errors. Each error in a row is indicated by a wavy underline. If available, additional information is displayed when you hover over the error.
Related Information
Value analysis helps you to quantify the business impact of your process mining or transformation projects.
Value analysis enables you to use a structured approach for calculating the savings potential when improving
a certain metric. Important information can be maintained directly in the tool. Value analysis calculations are
performed using proven formulas provided by SAP Value Lifecycle Management (VLM).
Learn how to create a value case for a metric in SAP Signavio Process Intelligence.
Context
Note
You need the manager or analyst role for a process to create value cases.
Creating a value case allows you to perform value analysis on your data. A value case is a small project, based
on exactly one metric and value calculation. Each value case belongs to a specific value driver.
You can create value cases from the metric library, using metrics which are not yet in your metric collection.
You can also create value cases directly from your metric collection.
Procedure
The selected metrics are added to your metric collection and your value cases are created.
If your value case does not have a pre-defined calculation formula, you can enter your own formula. For
more information, see Calculation [page 511].
• Create a value case from the Value Cases tab:
a. Open your process and select Value Cases.
b. In the Create Value Case dropdown, select From Metrics Library or From Collection as needed.
The metric library or metric collection opens. From here, you can create value cases as described in
the previous sections.
Learn how to delete a value case from SAP Signavio Process Intelligence.
Context
Note
• You need the manager or analyst role for the Process view of a value case to delete value cases.
• Deleting a value case can't be undone.
• A deleted value case is automatically removed from any initiative to which it might be linked.
Procedure
Learn how we calculate the potential value (Profit & Loss or Working Capital) for a value case. The potential
value is the amount of money you can expect to save or gain by implementing a process improvement activity.
Potential value is calculated as the difference between the baseline monetary impact and the target monetary
impact. Both the baseline and target monetary impact are calculated based on a predefined formula, or one
that's customized by you.
• The metric which has been selected for the value case.
• SAP-suggested assumptions (for predefined formulas): These are default assumptions defined by SAP. For
example, hourly rates for manual processing, average process times.
• The annualization factor: Baseline Monetary Impact per Year and Target Monetary Impact per Year are calculated for a year using the annualization factor. The denominator represents the number of days that you've selected in the baseline date range.
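Taken together, and reading the annualization factor as 365 divided by the number of days in the baseline date range (the 365 is an assumption; the guide only states the denominator):

$$\text{Potential Value} = \text{Baseline Monetary Impact per Year} - \text{Target Monetary Impact per Year}, \qquad \text{annualization factor} = \frac{365}{\text{days in baseline range}}.$$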
Note
Learn how the potential value can be recalculated by referring to the Calculation section in Edit a Value
Case [page 511].
You can modify the potential value calculation to ensure that it:
Related Information
Learn how to view and interpret your saved value cases in SAP Signavio Process Intelligence.
To view your saved value cases, open your process and select Value Cases. The tab displays a list of your value
cases under the following headings.
Value Driver: A value driver is a metric category, with value analysis driving a specific value. One metric can have multiple value calculations which belong to different value drivers.
Name: By default, a value case has the same name as the metric for which it was created. Select the value case to open its settings and update its name, if needed.
Baseline Date Range: The range for historical data with defined baseline value. The default baseline (at the time of creating the value case) is the previous 12 months.
Current: The current value of the value driver. This value is calculated based on the last 30 days.
Related Information
Note
• You need the manager or analyst role for the Process view of a value case to edit or filter value cases.
To access the details of a value case in SAP Signavio Process Intelligence, open your process, choose the Value
Cases tab, and select the value case from the list.
At the top of the page, a section displays key information about the value case, like its Value Driver, Potential Value, and so on. For more details about what these terms mean, see View Saved Value Cases [page 509].
Select (Filters) to apply a filter on your value case and recalculate the metric and monetary values. For more
information, see Filters [page 435].
General
From the General tab, configure the settings according to your requirement. For example, you can update the
name, add a description, or change the value driver. However, changing the value driver of a value case doesn't
change its calculation formula.
Currently, each value case is linked to a single Process view. The Process view of a value case is determined
automatically, based on your selection while creating the value case.
Calculation
This section allows you to review and update the formula and the values used for the calculation of the potential value. The formula is purely mathematical; you can't add any special functions to it.
We provide pre-defined formulas for metrics that are assigned to a value driver. For other metrics, you need to
manually add the formula yourself.
Baseline State
This section quantifies the initial performance of the metric measurement, along with the associated monetary
value. It's the value collected from your system before the process improvement efforts, so you can measure
your progress against it.
Target State
This section shows the monetary impact based on the target or improvement percentages. You can edit the target value to automatically update the improvement percentage. This also affects the potential value. Otherwise, the improvement percentage remains fixed, even when you update the formula or the other values.
Calculation Details
This section shows the calculated values, and how they were calculated.
Note
• Once you save the edits that you made to the formula or the values, the Potential Value is recalculated and updated. You can also select Cancel to discard all the changes.
• Baseline Monetary Impact per Year and Target Monetary Impact per Year are calculated for a year using
the annualization factor. The denominator represents the number of days that you've selected in the
baseline date range.
Related Information
Learn how to link an SAP Signavio Process Intelligence value case to an initiative in SAP Signavio Process Transformation Manager.
Prerequisites
Note
The actions available to you in a specific initiative or insight depend on your access rights.
For more information, see User Administration, Authentication, and Authorization in the Security Guide for
SAP Signavio Process Transformation Manager.
You can either link an existing initiative to a value case or create a new initiative for your value case. To link an
initiative to a value case, follow these steps:
1. Open your process and click the Value Cases tab.
2. Select the value case to which you'd like to link an initiative.
3. Select (Add) under Initiative.
You can link your value case to an existing initiative (if available) by choosing Select Initiative. If you choose
this option, you don't need to continue with the steps provided below.
4. To link your value case to a new initiative, select Create New Initiative.
5. Enter your initiative's title and description.
6. Review the rest of the initiative settings. You have the following options:
Note
• When you create an initiative, you are automatically set as the owner.
7. Select Create.
A new initiative is created.
8. Select the newly created initiative from the dropdown.
Results
The value case is linked to the initiative. You can also remove or change the initiative for your value case by
selecting (Cancel) or Change under Initiative.
Related Information
Learn about the levels of access different users have to value cases.
All users who have access to the Process view of a particular value case can use the value case as follows:
Link value cases to initiatives: Yes for all roles, if they have an SAP Signavio Process Transformation Manager license and the Editor or Owner role for the initiatives.
Root cause analysis allows you to understand why a value driver, metric, or KPI is off target.
Uncovering improvement potential and understanding what drives process performance indicators, along with
their monetary values, requires manual slicing and dicing of the event log. Automated root cause analysis
uncovers the subsets of cases from a process (subgroups) that drive process performance and ranks them by
their contribution.
For example, your goal is to improve the automation rate in the Procure to Pay process. Currently, the
automation rate is 40% and you're wondering why it has been below the target of 60% in the last quarter.
By running a root cause analysis, you'll find all improvement potential to achieve your goal.
Restriction
Root cause analysis is currently available only for the Invoice to Pay process in the plug and gain approach
(performance indicator ID: KPPURCH460). For more information, see the Getting Started Guide for the
Plug and Gain Approach.
Context
This procedure orders an analysis to be run on your process data. The analysis identifies a set of subgroups
contributing to the performance of your process or providing insight into why it deviates from the specified
target. For more information, see Subgroups [page 518].
The analysis results are presented in a waterfall chart, which clearly highlights the key subgroups affecting your
current process performance and identifies any deviations from the specified target. Additionally, it illustrates
the improvement potential by showing the gap between your current performance and your desired target
performance. The comprehensive view allows you to identify areas for optimization and prioritize actions to
enhance overall process efficiency.
Procedure
1. Select your process from the list of processes and then choose the Root Causes tab.
When no results exist yet, the tab prompts you to start a new analysis. Results of earlier analyses are displayed on this screen; however, results aren't stored permanently and are subject to automatic removal after a certain duration.
Note
Metrics are pulled from the metric collection. Not all metrics are supported.
Results
A progress indicator is displayed while the analysis is underway. The time until the analysis completes depends on the size of the event log and the number of case attributes of type Text.
Upon completion, the results are displayed. You're now ready for the next step: interpreting the results.
Related Information
When an analysis has been completed, you're presented with results like in this example:
This analysis lists root causes that explain why the Average Cost of Production diverges from its target of €70
by +€9.39. The root causes are ranked by their contribution. That is the amount by which the global Average
Cost of Production would improve if the respective cause was behaving on target.
Subgroups
A subgroup is a subset of cases from a process and is specified by a WHERE filter (a feature of the SIGNAL
query language). Each subgroup can positively or negatively affect a process.
The algorithm behind root cause analysis identifies non-overlapping subgroups. In other words, each case
from the process data becomes a member of only one subgroup. This ensures a consistent explanation of the
performance.
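For illustration, a subgroup such as invoices for one company code corresponds to a WHERE filter like the following sketch; the attribute names are hypothetical:

  -- All cases belonging to one subgroup
  SELECT COUNT(case_id)
  FROM THIS_PROCESS
  WHERE "Company Code" = '0023' AND "Invoice Type" = 'Standard'

The expression after WHERE is exactly what the results display as the subgroup definition.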
Performance contribution is attributed to individual subgroups. For more information, see Contribution [page
519].
The following sections explain how to interpret each area of the results.
• Top 4 Opportunities: The sum of contributions of the four most effective detractors of target
performance. This shows by how much the overall metric could improve if these four detractors performed
on target.
Chart
The chart visualizes both the top opportunities and the top contributions, in other words, the subgroups with
the largest absolute contributions (negative or positive).
To the right of the chart, the definitions of each subgroup are listed. Each definition is presented as the
expression part of a WHERE clause.
Note
To improve readability, the definition omits certain elements of the usual syntax, such as quotation marks.
Contribution
The contribution of a subgroup is the amount by which the global performance would change if the subgroup
were to perform on target.
Example
If invoices for company code 0023 were to be late 17% instead of 54%, then the global performance would
improve by 5% to 29%.
Contribution respects exhaustiveness and exclusivity. In other words, the sum of the contributions of two non-overlapping subgroups equals the contribution of their union. Consequently, the sum of the contributions of MECE subgroups equals the difference between the reference level and the observed global performance.
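Stated formally, for two non-overlapping subgroups $S_1$ and $S_2$, and for a MECE partition $S_1, \dots, S_n$ of all cases:

$$c(S_1 \cup S_2) = c(S_1) + c(S_2), \qquad \sum_{i=1}^{n} c(S_i) = p_{\text{reference}} - p_{\text{observed}}$$

where $c(S)$ denotes the contribution of subgroup $S$, $p_{\text{reference}}$ the reference level, and $p_{\text{observed}}$ the observed global performance.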
Data Grid
The data grid below the top subgroup section shows a hierarchical table of all subgroups that the algorithm
identified as relevant during the analysis. It allows manual exploration of the analysis results with the goal of
finding insights. Find more details in Data Grid [page 520].
Each row in the data grid is either a subgroup or a partition, which is a grouping of subgroups. The data grid
contains the following columns:
• Subgroups: This column contains the subgroup definitions and allows navigation through the results.
Subgroups are displayed hierarchically and arranged into partitions.
• Contribution: Indicates the impact of a partition or subgroup on the overall performance. Depending on its
level in the hierarchy, the contribution column shows one of the following:
• For a subgroup, the column displays the subgroup's absolute contribution.
• For a partition, the column displays the sum of all positive and negative contributions of the subgroups
constituting this partition.
• Signal Strength: Applies to subgroups only. A statistical measure of how the behavior of a subgroup's
population differs from that of its parent. This measure doesn't consider the subgroup's size, in other
words the case count.
• Metric: Applies to subgroups only. Displays the metric value for every subgroup.
• Case Count: Applies to subgroups only. The number of cases within the subgroup.
Available Interactions
The data grid can be adjusted in a number of ways to make exploration easier:
• Arranging columns: To arrange a column, select (move icon) in its column header. This displays a
number of options, including pinning the column in place or moving it left or right.
Note
Since it's used to navigate the grid, the Subgroups column can't be arranged and is always the leftmost
column.
Question: Why can't I look at the subgroups with the largest signal?
Solution: Signal strength only measures the statistical difference between a subgroup and its parent. This doesn't necessarily imply a large impact on the overall performance. Small subgroups can deviate significantly from the parent group while still having only a small effect due to their size. Finding actionable and impactful insights requires navigating the data grid by contribution and signal strength at the same time.
Question: I was expecting to see a specific subgroup in the data grid. Why doesn't it appear?
Solution: The underlying root cause analysis algorithm makes no assumptions about the data or the context of the process. All results are based on a statistical analysis. The absence of a subgroup in the data grid indicates that other subgroups were statistically more relevant. You can validate the results by comparing subgroups manually in a dashboard.
Question: Why is the sum of all positive and negative contributions different between partitions? Shouldn't it be the same for all partitions?
Solution: Theoretically, every partition explains the full difference between the actual performance and the target. However, the algorithm ignores subgroups that are too small or statistically insignificant. Consequently, the results might not contain all subgroups of every partition. In those cases, the positive and negative contributions in a displayed partition don't add up to the parent group's contribution.
Use actions to automatically query process data and act on the results, for example, by executing a task or
starting a process. This section describes when to use actions, what you need to set up actions, where to set up
actions, and information about the time format used in actions.
For each action, you specify a SIGNAL query and one or more tasks. The SIGNAL query produces results when
the action is run. These results are then used in the defined tasks. For example, you can define that if there are
more than 10 unprocessed purchase orders, an email is sent to the responsible persons.
For example, you can set up email notifications for tickets that exceed a certain processing time. Then, you can
immediately approach affected customers or even prevent a breach of your service level agreement.
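A condition query for the purchase order example might look like the following sketch; the attribute names are hypothetical and depend on your event log:

  -- Unprocessed purchase orders; the tasks use the result rows
  SELECT case_id, "Purchase Order Number"
  FROM THIS_PROCESS
  WHERE "Order Status" = 'Unprocessed'

The tasks defined for the action then work with the returned rows, for example, notifying the responsible persons when the result exceeds your threshold.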
You can set up and manage actions for a process when the following prerequisites are met:
• You have the manager or analyst role for the process for which you want to set up actions.
• The process contains data. Without data, you can't set up actions.
• You have SIGNAL knowledge to specify the queries performed with the action.
• Some actions can be set up only when a workspace administrator has integrated the necessary application
with SAP Signavio Process Intelligence.
• On the Actions overview, choose New Action: Set up an action from scratch; nothing is preselected.
• In a process, go to the Actions tab and choose Create: The process is preselected.
• On an investigation, choose (Create Action): The process and the process view from the investigation are preselected.
• On a dashboard, choose (Create Action): The process and the process view from the dashboard are preselected. Any changes you make to an action created that way are not reflected in the original widget. The same principle applies in reverse; changes made later to the widget are not reflected in the action.
Note
This option is only available on widgets that display a bar chart, pie chart, table, or value.
To start setting up actions, choose a task from the list in section Action Setup [page 524] and follow the
instructions.
UTC Time
Times are expressed in UTC, for example, execution times that you see on the overview pages or in the history
of an action.
Example
What is UTC?
UTC is Coordinated Universal Time, the main standard by which clocks are globally synchronized.
A UTC timestamp represents time measured at 0° longitude. All time zones are offset from UTC to
calculate local time. For example, Central European Time (CET) is UTC+1. A local time of 09:00 CET would
be 08:00 UTC.
UTC doesn't change with seasons and isn't affected by daylight saving. Therefore, Central European
Summer Time (CEST) is UTC+2. A local time of 10:00 CEST would be 08:00 UTC.
Related Information
Get to know the various tasks that are available for actions and learn how to start the action setup.
Start Business Processes and Automations in SAP Build Process Automation [page 542]
Learn how to set up actions that trigger business processes and automations in SAP Build Process
Automation. These actions transform the SIGNAL query result into the complex data structure defined by an SAP Build Process Automation workflow and support a single level of nesting.
Learn how to set up e-mail notifications that contain SIGNAL query results.
You can set up e-mail notifications for yourself and others. Every time the action is run, an e-mail with a custom
subject and message is sent. The first five rows of the SIGNAL query result are displayed in the e-mail body.
The full result set is attached as a CSV or XLS file. If you configure the action to send only new data, the
attached file also contains only new data.
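Because only the first five rows appear in the e-mail body, it can help to sort the query result so that the most relevant rows come first. A sketch with hypothetical attribute names:

  -- Most overdue tickets first; the top five rows appear in the e-mail body
  SELECT case_id, "Ticket ID", "Processing Time (days)"
  FROM THIS_PROCESS
  WHERE "Processing Time (days)" > 10
  ORDER BY "Processing Time (days)" DESC

The complete, sorted result set is still attached as a CSV or XLS file.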
Note
Prerequisites
• You have the manager or analyst role for the process for which you want to set up actions.
Note
This option is only available on widgets that display a bar chart, pie chart, table, or value.
2. On the Create New Action page, specify settings for the following:
• Under General, the process and the data view, run frequency, collaborators, and more
• Under SIGNAL, the condition that must be met for the action to be executed
• Under Task, what happens when the action is executed
See the sections below to learn more about the configuration details.
3. With Add Task, you can add more tasks to your action.
4. You can save actions with incomplete settings as drafts and complete their configuration later.
5. Once all settings are specified, save with Create.
The action is created, and the action overview opens. To view results for your action immediately, you can
run it manually.
General
• View: The process view that defines which process data you can view and query. Default: The process view assigned to you is preselected. If you have multiple process views assigned, you need to select one.
• Run Frequency: To specify when to run the action, select an option. Default: Data Load.
Note
The minimum time between data loads must be 30 minutes to trigger an action. Data loads that occur at shorter intervals will not trigger an action.
• Collaborators: Select the users or user groups who can access the action and its results. Default: No collaborators are specified.
• Only send new data: Choose whether to send the full result set every time or only data that is new since the last run. Default: Just new data is sent.
SIGNAL
Enter your SIGNAL query in the code editor. The result is displayed below.
If you've created the action on a widget, the SIGNAL query from the widget is taken over automatically. You can
edit the query here. However, any changes you make are not reflected in the original widget. The same principle
applies in reverse; changes made to the widget are not reflected in the action.
Read about the SIGNAL editor functions like autocompletion, color scheme, and error linting in section The
SIGNAL Code Editor [page 505].
To define what happens when the action is run, specify the task settings.
Caution
Ensure that you are not sharing sensitive internal data with external parties. Breaching this protocol could jeopardize your company's security and violate your privacy policy. Always double-check the list of recipients before saving the action.
Learn how to send messages with SIGNAL query results to Microsoft Teams.
SIGNAL query results can be posted as messages to channels in Microsoft Teams. Every time the action is run,
a message with query results is sent to the specified channel.
Note
Users with access to the Microsoft Teams channel can view the message.
The message in Microsoft Teams includes information about the action and some results. If necessary, a
custom text message can be added. From the message, you can go to the full result set in SAP Signavio
Process Intelligence.
Example
• You have the manager or analyst role for the process for which you want to set up actions.
• The process contains data. Without data, you can't set up actions.
• You have SIGNAL knowledge to specify the queries performed with the action.
• An incoming webhook was created for the Microsoft Teams channel to which you want to send messages.
To create an incoming webhook in Microsoft Teams, follow the instructions in the Microsoft Teams
documentation .
Note
This option is only available on widgets that display a bar chart, pie chart, table, or value.
2. On the Create New Action page, specify settings for the following:
• Under General, the process and the data view, run frequency, collaborators, and more
General
• View: The process view that defines which process data you can view and query. Default: The process view assigned to you is preselected. If you have multiple process views assigned, you need to select one.
• Run Frequency: To specify when to run the action, select an option. Default: Data Load.
Note
The minimum time between data loads must be 30 minutes to trigger an action. Data loads that occur at shorter intervals will not trigger an action.
• Collaborators: Select the users or user groups who can access the action and its results. Default: No collaborators are specified.
• Only send new data: Choose whether to send the full result set every time or only data that is new since the last run. Default: Just new data is sent.
SIGNAL
Enter your SIGNAL query in the code editor. The result is displayed below.
If you've created the action on a widget, the SIGNAL query from the widget is taken over automatically. You can
edit the query here. However, any changes you make are not reflected in the original widget. The same principle
applies in reverse; changes made to the widget are not reflected in the action.
Read about the SIGNAL editor functions like autocompletion, color scheme, and error linting in section The
SIGNAL Code Editor [page 505].
To define what happens when the action is run, specify the following:
• Webhook Endpoint: Paste the URL from your Microsoft Teams webhook.
Related Information
Learn how to integrate SAP Signavio Process Intelligence with any SAP and non-SAP system that supports
webhooks. Webhooks are custom callbacks that are used to notify systems of events. In our context, they allow
you to send SIGNAL query results to other systems.
Note
To connect to a secured Web service endpoint, you can use different authentication methods. For example,
when you want an action to trigger custom SAP BTP services that require credentials, you can set up basic
HTTP authentication.
For advanced connectivity options or transformation of the action results, we recommend integrating SAP
Cloud Integration, which can process received messages and route them to other SAP and non-SAP cloud and
on-premise applications. See section Send Messages to SAP Cloud Integration [page 538].
Prerequisites
• You have the manager or analyst role for the process for which you want to set up actions.
• The process contains data. Without data, you can't set up actions.
• You have SIGNAL knowledge to specify the queries performed with the action.
• A webhook URL is required to set up the integration. To create a webhook in your SAP or non-SAP system,
follow the instructions in the documentation of your SAP or non-SAP system.
Note
This option is only available on widgets that display a bar chart, pie chart, table, or value.
2. On the Create New Action page, specify settings for the following:
• Under General, the process and the data view, run frequency, collaborators, and more
• Under SIGNAL, the condition that must be met for the action to be executed
• Under Task, what happens when the action is executed
See the sections below to learn more about the configuration details.
3. With Add Task, you can add more tasks to your action.
4. You can save actions with incomplete settings as drafts and complete their configuration later.
5. Once all settings are specified, save with Create.
The action is created, and the action overview opens. To view results for your action immediately, you can
run it manually.
General
• View: The process view that defines which process data you can view and query. Default: The process view assigned to you is preselected. If you have multiple process views assigned, you need to select one.
• Run Frequency: To specify when to run the action, select an option. Default: Data Load.
Note
The minimum time between data loads must be 30 minutes to trigger an action. Data loads that occur at shorter intervals will not trigger an action.
• Collaborators: Select the users or user groups who can access the action and its results. Default: No collaborators are specified.
• Only send new data: Choose whether to send the full result set every time or only data that is new since the last run. Default: Just new data is sent.
SIGNAL
Enter your SIGNAL query in the code editor. The result is displayed below.
If you've created the action on a widget, the SIGNAL query from the widget is taken over automatically. You can
edit the query here. However, any changes you make are not reflected in the original widget. The same principle
applies in reverse; changes made to the widget are not reflected in the action.
Read about the SIGNAL editor functions like autocompletion, color scheme, and error linting in section The
SIGNAL Code Editor [page 505].
To define what happens when the action is executed, specify the following:
• Columns to Send to Webhook: Specify which data from the SIGNAL query result is sent with the notification.
Use Test Webhook to simulate sending a payload consisting of the column data to the webhook endpoint. If
you've specified authentication details, it's also checked whether the provided credentials are correct.
Related Information
Learn how to send SIGNAL query results to SAP Cloud Integration, which can process received messages and
route them to other SAP and non-SAP cloud and on-premise applications.
Prerequisites
• You have the manager or analyst role for the process for which you want to set up actions.
• The process contains data. Without data, you can't set up actions.
• You have SIGNAL knowledge to specify the queries performed with the action.
• Your workspace administrator has integrated an SAP Cloud Integration tenant, otherwise you can't set up
an action of this type. For more information, see Setting Up the Integration with SAP Cloud Integration
[page 560].
Note
This option is only available on widgets that display a bar chart, pie chart, table, or value.
2. On the Create New Action page, specify settings for the following:
• Under General, the process and the data view, run frequency, collaborators, and more
• Under SIGNAL, the condition that must be met for the action to be executed
• Under Task, what happens when the action is executed
See the sections below to learn more about the configuration details.
3. With Add Task, you can add more tasks to your action.
4. You can save actions with incomplete settings as drafts and complete their configuration later.
5. Once all settings are specified, save with Create.
The action is created, and the action overview opens. To view results for your action immediately, you can
run it manually.
General
• View: The process view that defines which process data you can view and query. Default: The process view assigned to you is preselected. If you have multiple process views assigned, you need to select one.
• Run Frequency: To specify when to run the action, select an option. Default: Data Load.
Note
The minimum time between data loads must be 30 minutes to trigger an action. Data loads that occur at shorter intervals will not trigger an action.
• Collaborators: Select the users or user groups who can access the action and its results. Default: No collaborators are specified.
• Only send new data: Choose whether to send the full result set every time or only data that is new since the last run. Default: Just new data is sent.
SIGNAL
Enter your SIGNAL query in the code editor. The result is displayed below.
If you've created the action on a widget, the SIGNAL query from the widget is taken over automatically. You can
edit the query here. However, any changes you make are not reflected in the original widget. The same principle
applies in reverse; changes made to the widget are not reflected in the action.
Read about the SIGNAL editor functions like autocompletion, color scheme, and error linting in section The
SIGNAL Code Editor [page 505].
To define what happens when the action is executed, specify the following:
• Integration Flows: Choose the integration flow that is meant to process the message. The list includes only integration flows for receivers that are REST services.
• Select Attributes: Specify which data from the SIGNAL query result is sent with the message.
Use Test Connection to simulate sending a payload consisting of the column data to the SAP Cloud Integration endpoint and to check the authentication credentials.
Related Information
Learn how to set up actions that trigger business processes and automations in SAP Build Process
Automation. These actions transform the SIGNAL query result into the complex data structure defined by an SAP Build Process Automation workflow and support a single level of nesting.
For example, you can use a business process in SAP Build Process Automation to unblock sales orders that
exceed a credit limit. Every time the action is run and blocked sales orders are found, a business process in
SAP Build Process Automation is started, either for each sales order or combining the sales orders that belong
together. In the first case, five business processes are started if the SIGNAL query returns five blocked sales
orders. In the second case, all blocked sales orders belonging to the same case are combined in one business
process, given that the case ID was chosen as the primary key.
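A query for this example might look like the following sketch; the attribute names are hypothetical:

  -- Blocked sales orders; each row (or group of rows sharing the
  -- primary key) starts one business process instance
  SELECT case_id, "Sales Order", "Blocked Amount"
  FROM THIS_PROCESS
  WHERE "Credit Block" = 'X'

With case_id chosen as the primary key, rows that share a case ID are combined into a single business process instance.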
Note
• You have the manager or analyst role for the process for which you want to set up actions.
• The process contains data. Without data, you can't set up actions.
• You have SIGNAL knowledge to specify the queries performed with the action.
• Your workspace administrator has integrated an SAP Build Process Automation tenant, otherwise you
can't set up an action of this type. Read more in section Setting Up Integration with SAP Build Process
Automation [page 557].
Note
This option is only available on widgets that display a bar chart, pie chart, table, or value.
2. On the Create New Action page, specify settings for the following:
• Under General, the process and the data view, run frequency, collaborators, and more
• Under SIGNAL, the condition that must be met for the action to be executed
• Under Task, what happens when the action is executed
See the sections below to learn more about the configuration details.
3. With Add Task, you can add more tasks to your action.
4. You can save actions with incomplete settings as drafts and complete their configuration later.
5. Once all settings are specified, save with Create.
The action is created, and the action overview opens. To view results for your action immediately, you can
run it manually.
General
• View: The process view that defines which process data you can view and query. Default: The process view assigned to you is preselected. If you have multiple process views assigned, you need to select one.
• Run Frequency: To specify when to run the action, select an option. Default: Data Load.
Note
The minimum time between data loads must be 30 minutes to trigger an action. Data loads that occur at shorter intervals will not trigger an action.
• Collaborators: Select the users or user groups who can access the action and its results. Default: No collaborators are specified.
• Only send new data: Choose whether to send the full result set every time or only data that is new since the last run. Default: Just new data is sent.
SIGNAL
Note
Every row in the SIGNAL query result maps to one business process instance or automation instance. For
example, a query result with 5 rows will trigger 5 instances, each getting one result row as input.
Enter your SIGNAL query in the code editor. The result is displayed below.
If you've created the action on a widget, the SIGNAL query from the widget is taken over automatically. You can
edit the query here. However, any changes you make are not reflected in the original widget. The same principle
applies in reverse; changes made to the widget are not reflected in the action.
Task
To define what happens when the action is executed, specify the following:
Map the values from the SIGNAL query result to the parameters of the business process or automation artifact. Business process parameters nested up to the first level are supported.
To map, select the result values on the left and the input parameters on the right. The mapped result values are then used as input parameters.
Example
Assume a query result with two distinct case IDs, and
one case ID appears twice.
The result columns are case_id, case_date, item_id, item_desc, and invoice_status.
Related Information
Actions of this type send messages with SIGNAL query results to a named queue and optionally to a topic in
SAP Event Mesh. Any SAP or non-SAP application with a queue subscription can then process the data.
For example, let's assume that an action sends messages to SAP Event Mesh when new employees are
hired. And let's also assume that SAP Ariba has a corresponding queue subscription. Then, SAP Ariba can
automatically place hardware and software equipment orders as soon as new employee accounts appear in the
system data.
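An event-level query for the new-hire example could look like the following sketch; FLATTEN switches the query to the event level, and the event name is hypothetical:

  -- One row per hiring event
  SELECT case_id, end_time
  FROM FLATTEN(THIS_PROCESS)
  WHERE event_name = 'Hire Employee'

Each run sends the query result as a message to the configured queue, where subscribers such as SAP Ariba can consume it.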
Note
Prerequisites
• You have the manager or analyst role for the process for which you want to set up actions.
• The process contains data. Without data, you can't set up actions.
• You have SIGNAL knowledge to specify the queries performed with the action.
• Your workspace administrator has integrated an SAP Event Mesh instance, otherwise you can't set up an
action of this type. Read more in section Setting Up the Integration with SAP Event Mesh [page 559].
Note
This option is only available on widgets that display a bar chart, pie chart, table, or value.
2. On the Create New Action page, specify settings for the following:
• Under General, the process and the data view, run frequency, collaborators, and more
• Under SIGNAL, the condition that must be met for the action to be executed
• Under Task, what happens when the action is executed
See the sections below to learn more about the configuration details.
3. With Add Task, you can add more tasks to your action.
4. You can save actions with incomplete settings as drafts and complete their configuration later.
5. Once all settings are specified, save with Create.
The action is created, and the action overview opens. To view results for your action immediately, you can
run it manually.
General
• View: The process view that defines which process data you can view and query. Default: The process view assigned to you is preselected. If you have multiple process views assigned, you need to select one.
• Run Frequency: To specify when to run the action, select an option. Default: Data Load.
Note
The minimum time between data loads must be 30 minutes to trigger an action. Data loads that occur at shorter intervals will not trigger an action.
• Collaborators: Select the users or user groups who can access the action and its results. Default: No collaborators are specified.
• Only send new data: Choose whether to send the full result set every time or only data that is new since the last run. Default: Just new data is sent.
SIGNAL
Enter your SIGNAL query in the code editor. The result is displayed below.
If you've created the action on a widget, the SIGNAL query from the widget is taken over automatically. You can
edit the query here. However, any changes you make are not reflected in the original widget. The same principle
applies in reverse; changes made to the widget are not reflected in the action.
Task
To define what happens when the action is executed, specify the following:
• Queue: Select the queue to which you want to send SIGNAL query results.
• Columns: Specify which data from the SIGNAL query result is sent with the message.
Get to know the options to manage your actions, for example, how to view actions and edit them, add or
remove tasks, or delete actions. This section also explains how to navigate from an action to the linked process.
Viewing Actions
You can only view the actions that you've set up or to which you're assigned as a collaborator.
Tab Description
Overview List of the actions that you've set up. Here, you can manage
your actions.
Results List of results for all action runs in your workspace. You have
access to the results of the actions that you have set up or to
which you're assigned as a collaborator.
Editing an Action
• On the Actions overview, open the menu of the action that you want to edit and select Edit.
• On the action details page, choose Edit for the action you want to change.
You can save actions with incomplete settings as drafts and complete their configuration later. Drafts are marked as such on the Actions overview.
A saved action can't be put back into draft mode, even if you edit the action later.
Only actions that were saved with complete settings can run and send results.
1. On the Actions overview, open the menu of the action that you want to edit and select Edit.
2. On the configuration page, you have the following options:
• To add a task, scroll down and choose Add Task. Then, specify the task settings.
• To remove a task, choose Delete.
3. Confirm with Update.
The action is updated. Changes are applied when the action is run again.
Each action includes a link to the process for which the action is configured. You can view this link as follows:
Deleting an Action
When you delete an action, the action results are also deleted.
Note
1. On the Actions overview, open the menu of the action you want to delete and choose Delete.
Related Information
Learn how to run an action manually and view the SIGNAL query results, for example, to check whether your configuration provides the expected results.
1. On the Actions overview, open the menu of your action and select Run now.
The action and its tasks are executed.
Related Information
You can view the results of an action in SAP Signavio Process Intelligence when you've set up the action or are
assigned as a collaborator.
• On the Actions overview, open the Results tab and choose from the list of all actions that were run.
• Open your action and on the Results tab, choose from the list of result sets.
Related Information
The action history provides details like changes to actions, run times and statuses, as well as error messages
for failed runs.
Related Information
If you want to stop an action from running, you can deactivate it. For that, open the Actions overview and switch
the toggle in the Inactive / Active column off.
Find instructions for workspace administrators on how to set up integrations with other SAP applications.
Then, users can set up actions that send SIGNAL query results to the integrated applications.
Note
On the Actions overview, select in the header menu. This opens the Configurations page where you set up
integrations with other SAP applications.
Setting Up the Integration with SAP Build Process Automation [page 557]
Integrate SAP Signavio Process Intelligence with SAP Build Process Automation as a workspace
administrator. This integration allows users to set up actions that trigger business processes and
automations in your SAP Build Process Automation tenant.
Integrate SAP Signavio Process Intelligence with SAP Build Process Automation as a workspace administrator.
This integration allows users to set up actions that trigger business processes and automations in your SAP
Build Process Automation tenant.
Prerequisites
• You have an SAP BTP account and a subaccount that you can use to subscribe to the SAP Build Process
Automation service.
Context
You can integrate one SAP Build Process Automation tenant per SAP Signavio workspace.
Procedure
If the service key is valid, the integration is created; otherwise, the integration setup fails.
You and other users can now set up actions that trigger business processes in your SAP Build Process
Automation tenant.
Related Information
Start Business Processes and Automations in SAP Build Process Automation [page 542]
Integrate SAP Signavio Process Intelligence with SAP Event Mesh as a workspace administrator. This
integration allows users to set up actions that send messages to a queue and optionally to a topic in your
SAP Event Mesh tenant.
Prerequisites
Context
You can integrate one SAP Event Mesh tenant per SAP Signavio workspace.
Results
If the service key is valid, the integration is created; otherwise, the integration setup fails.
You and other users can now set up actions that send messages to your SAP Event Mesh tenant.
Related Information
Integrate SAP Signavio Process Intelligence with SAP Cloud Integration as a workspace administrator. This
integration allows users to set up actions that send messages to your SAP Cloud Integration tenant.
Prerequisites
• You have two service instances, one with the plan 'integration-flow' and one with the plan 'api', for your SAP Cloud Integration tenant. They contain the service keys, which you need to set up this integration.
Find detailed instructions in section Creating Service Instance and Service Key for Inbound Authentication.
• Integration flows have the sender adapter type 'HTTPS'.
Context
You can integrate one SAP Cloud Integration tenant per SAP Signavio workspace.
Results
If the service key is valid, the integration is created; otherwise, the integration setup fails.
You and other users can now set up actions that send messages to your SAP Cloud Integration tenant.
Related Information
Some actions require integration with SAP or non-SAP applications. Read how to modify or delete these
integrations.
Prerequisites
Procedure
Context
Note
Procedure
Learn how to ingest data extracted from SAP Data Intelligence Cloud into SAP Signavio Process Intelligence.
The integration of SAP Signavio Process Intelligence and SAP Data Intelligence Cloud provides you with the
following capabilities:
• connecting to and extracting data from your source systems with SAP Data Intelligence Cloud
• extracting data from tables with more than 100 million rows
• collecting data from source systems that are not supported in Process Data Management
For more use cases, see the Integrating SAP Signavio Process Intelligence and SAP Data Intelligence Cloud
blog post.
This section outlines how to feed data into SAP Signavio Process Intelligence using an SAP Data Intelligence pipeline.
1. Connect SAP Data Intelligence and the source systems from which you want to extract the data. To do so, follow the instructions in the Create a Connection section in the SAP Help Portal.
2. Configure data extraction in SAP Data Intelligence Cloud. To do so, follow the instructions in the Replicating Data section in the SAP Help Portal.
3. Ingest data into SAP Signavio Process Intelligence. You can do this in either of the following ways:
• Send data from the SAP Data Intelligence Cloud system to the SAP Signavio process data management system through the Ingestion API. To ingest data, you must first create an SAP Data Intelligence Python operator. This operator converts the extracted data into CSV format and triggers the Ingestion API.
To create a custom Python operator, refer to the Custom python operator for beginners blog post and the Configure Python3 operator V2 topic in the SAP Help Portal. For information about sending data via the Ingestion API, refer to the Ingestion API documentation.
• Send data to SAP Signavio Process Intelligence through AWS S3 buckets. For more information on the configuration, refer to the Connector - AWS S3 [page 89] section.
To ingest data in SAP Signavio Process Intelligence, you need to create a new connection with the source system AWS Connect to S3. Read more about adding a new connection in the Create a connection [page 128] section. Then, create source data with the same source system and add a table by importing the CSV file. This is the CSV file produced by SAP Data Intelligence Cloud. After you've added all the columns and set the primary key, run the extraction to view the data. Read more about creating new source data in the Create, edit, and delete source data [page 171] section.
The value accelerator library for SAP Signavio solutions, an embedded platform within SAP Signavio Process
Transformation Suite, functions as a central repository for value accelerators.
With the value accelerator library, you can explore available value accelerators, install them in SAP Signavio Process Intelligence or SAP Signavio Process Manager, and tailor them to your needs in your workspace.
Note
Using value accelerators is optional and not part of the business functionality of the products of SAP
Signavio Process Transformation Suite. Value accelerators are subject to change and may be changed,
discontinued, or replaced by SAP at any time for any reason without notice.
The library is accessible from within SAP Signavio Process Collaboration Hub. For more information, see
Required Licenses and Authorization.
Related Information
Learn how to access the content imported from the value accelerator library into SAP Signavio Process
Intelligence and how to delete it.
By default, the newly created processes are displayed first in the list. The sorting indicator next to the Last Edited column name uses the process creation date and time to determine the ascending or descending order of the list.
2. Select the sorting indicator next to the Last Edited column name to view the processes from oldest to newest.
3. Select the sorting indicator again to view the processes from newest to oldest.
For more information about viewing metrics, see The Metric Collection.
For more information about viewing process data pipelines, see Viewing and Managing Process Data Pipelines.
Note
• When installing the accelerator into an existing process, make sure you remember the list of
dashboards, metrics, and process data pipelines included in that accelerator. Currently, it's not
possible to distinguish between existing content and newly installed content in SAP Signavio Process
Intelligence.
• A new process with a value accelerator appears in the processes list with the value accelerator name.
To delete accelerators that were installed in SAP Signavio Process Intelligence, you have the following options:
Caution
Check for any dependencies before deleting. For example, deleting a metric will break the widgets that use the metric. After existing dependencies have been resolved, you can delete items that are no longer needed.
Note
After you've deleted a value accelerator from your workspace, all entries referring to the previous
installation or installation attempts will still be displayed under Value Accelerator Library Installed
Accelerators in the SAP Signavio Process Collaboration Hub settings.
Understand what the plug and gain approach is and how the value accelerators for the plug and gain approach
can help you.
The plug and gain approach offers an accelerated approach to business transformation and continuous
improvement. It combines the strengths of SAP Signavio Process Transformation Suite to provide you with
a predefined starting point for transformation and continuous improvement projects based on data from SAP
ERP Central Component (SAP ECC), SAP S/4HANA, or SAP Ariba. Using the plug and gain approach therefore
lets you combine the capabilities of SAP Signavio Process Insights and SAP Signavio Process Intelligence. You
benefit from being able to get fast data insights from SAP Signavio Process Insights and then combine them with the flexibility offered by the features of SAP Signavio Process Intelligence.
The plug and gain approach lets you load data from SAP Signavio Process Insights to SAP Signavio Process
Intelligence quickly and easily. It also allows you to do a deep-dive analysis on process performance and
how your processes are actually run. This significantly reduces the time needed to prepare for your transformation and continuous improvement projects by helping you prepare and deploy faster and promote continuous process improvement. Typical use cases include the following:
• To help reduce the complexity and cost of SAP S/4HANA transformation projects
• To address challenges arising from mergers and acquisitions, resulting in distributed system landscapes
and limited process harmonization
• To handle the challenge of ERP systems that have grown historically with custom development, and
obsolete configuration settings and data
• To avoid costly process mining projects that require significant setup time before processes can be
improved
For detailed information about the plug and gain approach and how to install the corresponding value
accelerators for process landscape analysis and specific processes, see the Getting Started Guide for the Plug
and Gain Approach.
There are several ways to find more information and get support for SAP Signavio Process Intelligence.
In-App Help
SAP Signavio Process Intelligence provides on-screen explanations of features and interface elements.
Restriction
• What's New information on the main page of SAP Signavio Process Intelligence
• Several help topics and a guided tour about the widget builder on any dashboard
To see any of this content, navigate to the main page or any dashboard and open the in-app help.
• (Help Topics): Quick reference information about specific user interface elements to help you perform
your tasks
• (Guided Tours): Guided tours of more complex procedures
You can find an overview of all related documentation at SAP Signavio Process Intelligence.
To learn how to get your questions answered when using SAP Signavio products and how to create a support
case, see SAP Signavio Support.
Hyperlinks
Some links are classified by an icon and/or a mouseover text. These links provide additional information.
About the icons:
• Links with the icon : You are entering a Web site that is not hosted by SAP. By using such links, you agree (unless expressly stated otherwise in your
agreements with SAP) to this:
• The content of the linked-to site is not SAP documentation. You may not infer any product claims against SAP based on this information.
• SAP does not agree or disagree with the content on the linked-to site, nor does SAP warrant the availability and correctness. SAP shall not be liable for any
damages caused by the use of such content unless damages have been caused by SAP's gross negligence or willful misconduct.
• Links with the icon : You are leaving the documentation for that particular SAP product or service and are entering an SAP-hosted Web site. By using
such links, you agree that (unless expressly stated otherwise in your agreements with SAP) you may not infer any product claims against SAP based on this
information.
Example Code
Any software coding and/or code snippets are examples. They are not for productive use. The example code is only intended to better explain and visualize the syntax
and phrasing rules. SAP does not warrant the correctness and completeness of the example code. SAP shall not be liable for errors or damages caused by the use of
example code unless damages have been caused by SAP's gross negligence or willful misconduct.
Bias-Free Language
SAP supports a culture of diversity and inclusion. Whenever possible, we use unbiased language in our documentation to refer to people of all cultures, ethnicities,
genders, and abilities.
SAP and other SAP products and services mentioned herein as well as
their respective logos are trademarks or registered trademarks of SAP
SE (or an SAP affiliate company) in Germany and other countries. All
other product and service names mentioned are the trademarks of their
respective companies.