Setting up writeback destinations in Inforiver Enterprise

Inforiver supports fast writeback setup and execution for several types of data destinations:

  • Databases / data warehouses / data lakes: Azure SQL, SQL Server, Synapse Analytics Dedicated SQL Pool, Azure Data Lake, Fabric Lakehouse, Fabric Warehouse, Dataverse, Databricks, Snowflake, BigQuery, Amazon Redshift, SAP HANA, Oracle, SingleStore, PostgreSQL, and MySQL

  • File Destinations: OneDrive, SharePoint

  • Webhook URLs: can be used to trigger workflows in iPaaS platforms such as Power Automate and Logic Apps
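To illustrate the webhook option, here is a minimal sketch of posting writeback data to a webhook URL to trigger an iPaaS workflow (for example, a Power Automate flow with an HTTP trigger). The payload shape, field names, and URL are illustrative assumptions, not Inforiver's actual webhook schema.

```python
# Sketch: POST changed rows to a webhook URL to kick off a workflow.
# The {"rows": [...]} payload shape is a placeholder assumption.
import json
import urllib.request


def build_payload(rows: list) -> bytes:
    """Serialize changed rows into a JSON request body."""
    return json.dumps({"rows": rows}).encode("utf-8")


def trigger_webhook(url: str, rows: list) -> int:
    """POST the payload to the webhook URL and return the HTTP status."""
    req = urllib.request.Request(
        url,
        data=build_payload(rows),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

The receiving workflow (e.g. a Power Automate "When an HTTP request is received" trigger) would parse the JSON body and act on the rows.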

Configuring a writeback destination is straightforward and follows the same procedure (pictured below) for every destination type. A list of destinations is displayed for you to select from; the upcoming sections cover the configuration steps for each destination in detail.

After configuring the destination, you can choose whether it should be client-managed or Inforiver-managed. If you wish to restrict direct access to the database, opt for a client-managed destination.

  • For Inforiver-managed destinations, Inforiver creates the writeback table automatically; you need to grant Inforiver the requisite permissions on the table.

  • For client-managed destinations, Inforiver generates the scripts, and you execute them manually against the database.
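For the client-managed path, executing a generated script typically amounts to running a DDL statement against the destination with your own credentials. The sketch below uses Python's built-in sqlite3 as a stand-in database, and the table definition is purely illustrative, not the script Inforiver actually generates.

```python
# Sketch: running a generated writeback DDL script against a database.
# sqlite3 stands in for the real destination (Snowflake, Azure SQL, etc.);
# the table name and columns below are hypothetical.
import sqlite3

DDL_SCRIPT = """
CREATE TABLE writeback_data (
    row_id      INTEGER PRIMARY KEY,
    measure     TEXT,
    value       REAL,
    updated_at  TEXT
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(DDL_SCRIPT)

# Confirm the writeback table now exists in the destination.
tables = [
    row[0]
    for row in conn.execute("SELECT name FROM sqlite_master WHERE type='table'")
]
```

Against a real warehouse you would run the generated script through that platform's own client or SQL console instead, using an account that has DDL privileges.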

Inforiver rounds off all numeric values, including percentages, to a configured number of decimal places. When you create the first connection, you can specify the decimal precision used by all connections in that report.

When you apply the decimal precision shown in the image above, Inforiver writes back values (in this example, to Snowflake) rounded to 5 decimal places.
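The rounding behaviour described above can be sketched as follows. Note that Inforiver's exact rounding mode is not stated here; half-up rounding is an assumption for illustration.

```python
# Sketch: rounding numeric values to a configured decimal precision
# (5 places, as in the example above) before writeback.
# ROUND_HALF_UP is an assumed rounding mode, not a documented one.
from decimal import Decimal, ROUND_HALF_UP


def round_for_writeback(value: float, places: int = 5) -> Decimal:
    """Round a value to the configured number of decimal places."""
    quantum = Decimal(1).scaleb(-places)  # e.g. Decimal('0.00001') for 5 places
    return Decimal(str(value)).quantize(quantum, rounding=ROUND_HALF_UP)


round_for_writeback(0.123456789)  # -> Decimal('0.12346')
```

Percentages are treated the same way as other numeric values, so 0.123456789 written back as a percentage field would also be stored as 0.12346.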
