Destinations
Setting up writeback destinations in Inforiver Enterprise
Inforiver supports fast writeback setup and execution for several types of data destinations:
Databases / data warehouses / data lakes: Azure SQL, SQL Server, Synapse Analytics Dedicated SQL Pool, Azure Data Lake, Fabric Lakehouse, Fabric Warehouse, Dataverse, Databricks, Snowflake, BigQuery, Amazon Redshift, SAP HANA, Oracle, SingleStore, PostgreSQL, and MySQL
File destinations: OneDrive, SharePoint
Webhook URLs: can be used to trigger workflows in iPaaS tools such as Power Automate and Logic Apps
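The exact shape of Inforiver's webhook payload is defined by the product; purely as an illustration, the following Python sketch posts a hypothetical payload to a Power Automate / Logic Apps HTTP trigger (the URL and the payload fields below are assumptions, not Inforiver's actual schema):

```python
import requests

# Hypothetical HTTP trigger URL generated by Power Automate or Logic Apps;
# replace it with the webhook URL from your own flow.
WEBHOOK_URL = "https://prod-00.westus.logic.azure.com/workflows/example/triggers/manual/paths/invoke"

# Illustrative payload only -- the real writeback payload is produced by
# Inforiver and its schema may differ.
payload = {
    "report": "Sales Forecast",
    "changes": [
        {"region": "EMEA", "measure": "Forecast", "value": 125000.0},
    ],
}

response = requests.post(WEBHOOK_URL, json=payload, timeout=30)
response.raise_for_status()
print("Webhook accepted with status", response.status_code)
```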
Configuring writeback destinations is straightforward and follows the same procedure (pictured below) for every destination. A list of destinations is displayed for you to select from; the configuration steps for each destination are covered in the upcoming sections.
After configuring the destination, you can choose whether it should be client-managed or Inforiver-managed. If you wish to restrict access to the database, opt for a client-managed destination.
For Inforiver-managed destinations, Inforiver creates the writeback table automatically; you only need to grant Inforiver the requisite permissions on the table.
For client-managed destinations, Inforiver generates the table-creation scripts, which you execute manually against the database (see the sketch below).
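As a rough idea of what running a generated script could look like, here is a minimal Python sketch assuming the script was saved as inforiver_writeback_table.sql and the target is an Azure SQL database reachable via pyodbc (the file name, connection details, and naive GO handling are all assumptions):

```python
import pyodbc

# Assumed connection details for an Azure SQL database; adjust the driver,
# server, database, and credentials for your environment.
conn_str = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myserver.database.windows.net;"
    "DATABASE=mydb;UID=writeback_admin;PWD=<password>"
)

# Assumed file name for the script generated by Inforiver.
with open("inforiver_writeback_table.sql", encoding="utf-8") as f:
    ddl = f.read()

conn = pyodbc.connect(conn_str, autocommit=True)
try:
    cursor = conn.cursor()
    # Naive handling of GO batch separators; a real script may need a
    # more robust splitter.
    for statement in ddl.split("\nGO"):
        if statement.strip():
            cursor.execute(statement)
finally:
    conn.close()
print("Writeback table created.")
```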
Inforiver rounds off all numeric values, including percentages, to a specified number of decimal places as per your configuration. When you create the first connection, you can specify the decimal precision that applies to all connections in that report.
When you apply the decimal precision shown in the image above, Inforiver writes back values (in this example, to Snowflake) rounded off to 5 decimal places.
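To make the rounding concrete, here is a small Python sketch; the half-up rounding mode is an assumption made for the example, and Inforiver's exact rounding rule may differ:

```python
from decimal import Decimal, ROUND_HALF_UP

def round_value(value: float, precision: int = 5) -> Decimal:
    """Round a numeric value to the given number of decimal places."""
    quantum = Decimal(1).scaleb(-precision)  # e.g. Decimal("0.00001") for 5
    return Decimal(str(value)).quantize(quantum, rounding=ROUND_HALF_UP)

print(round_value(0.123456789))  # 0.12346
print(round_value(42.0))         # 42.00000
```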
When you configure individual writeback destinations, you'll notice an option for batched writeback wherever applicable (batched writeback is not available for destinations such as SharePoint, OneDrive, and REST APIs). If your writeback payload exceeds 50,000 records, Inforiver can split the payload into multiple chunks and write them back batch by batch, as sketched below.
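The chunking idea can be sketched as follows; the 50,000-record threshold comes from the behavior described above, while the payload shape and the exact chunk size are illustrative assumptions:

```python
from itertools import islice
from typing import Iterable, Iterator

BATCH_SIZE = 50_000  # threshold described above; actual chunk size may differ

def batches(records: Iterable[dict], size: int = BATCH_SIZE) -> Iterator[list]:
    """Yield successive fixed-size chunks from a writeback payload."""
    it = iter(records)
    while chunk := list(islice(it, size)):
        yield chunk

# Hypothetical 120,000-record payload split into three batches.
payload = [{"id": i, "value": i * 1.5} for i in range(120_000)]
for n, chunk in enumerate(batches(payload), start=1):
    print(f"batch {n}: {len(chunk):,} records")  # 50,000 / 50,000 / 20,000
```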
You can specify whether you will use a temporary table to hold the batched data.
To use this feature, enable the checkbox as shown in the image above.
You can specify a custom table name or use the default table created by Inforiver to store batched data.
You can analyze the writeback log, which captures how the payload was split into multiple chunks and processed in parallel for a performance boost.