Orchestrate and transform your entire data workflow using Logic Rivers

Rivery enables the orchestration and transformation of data workflows with its Logic River capabilities. A Logic River allows users to create a data workflow that includes both the ingestion of data to the cloud and in-database transformations.

THE BASICS

In a Logic River, users can orchestrate a series of logical steps without any coding. Each step can perform a different type of process on your data.

  • River: Trigger an existing river in your account to run.
  • SQL / Script: Run an in-database query and store the results in a table or in file storage, or run a custom SQL script in the syntax of your cloud database (see the sketch after this list).
  • Action: Make any custom REST call.
  • Condition: Perform conditional logic in your workflow. For example, skip a step in the process if a certain condition isn’t met.
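As a rough sketch of the SQL / Script step mentioned above, the snippet below shows the kind of custom script such a step might run. The table and column names (raw_orders, country) are purely illustrative, and the exact syntax depends on your cloud database:

    -- Hypothetical cleansing script for a SQL / Script step.
    -- Table and column names are illustrative; adjust to your own schema.
    DELETE FROM raw_orders
    WHERE order_id IS NULL;            -- drop rows with no usable key

    UPDATE raw_orders
    SET country = UPPER(TRIM(country)) -- normalize country codes
    WHERE country IS NOT NULL;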

Once you have created the first steps of a Logic River, you can wrap them in a container to organize your workflow. Containers enable you to group different logic steps together and perform the same action on all of them. For example, you can group multiple Data Source to Target Rivers (the ingestion river type) in a single container and set the container to run in parallel, so that all jobs are kicked off at the same time. When all steps in the container have completed successfully, the following step will run.

Another user-friendly feature is the ability to disable any container or logic step in the workflow. This makes building a Logic River a truly modular process, as you can disable longer-running parts of your river while developing and testing a different step.

When a logic step is set to ‘SQL / Script,’ you can perform data transformations using the resources of your cloud database. This step type allows for complete flexibility and customization of queries to cleanse, prep, or blend data.
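For instance, a transformation query in this step might blend two ingested tables into a single analysis-ready result. The sketch below uses hypothetical table and column names (raw_orders, dim_customers):

    -- Illustrative blend query: join ingested orders to a customer dimension
    -- and keep only completed orders. Names are hypothetical.
    SELECT
        o.order_id,
        o.order_date,
        c.customer_name,
        c.region,
        o.amount
    FROM raw_orders o
    JOIN dim_customers c
        ON c.customer_id = o.customer_id
    WHERE o.status = 'COMPLETED';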

The results of your source query can be stored in a database table, in file storage, or in a variable.

When saving to a database table, all you have to do is specify the desired table name and location for the query results to land. Rivery supports multiple loading modes (Overwrite, Upsert-Merge, and Append). This requires no initial setup on the database side: if a table doesn’t yet exist, Rivery will create it on the fly. No need to define your table metadata beforehand!
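Rivery generates and runs the load statement for you, but conceptually the Upsert-Merge mode behaves like a standard SQL MERGE keyed on the columns you choose. The database-agnostic sketch below uses illustrative names (target_orders, staged_orders), and the actual syntax varies by warehouse:

    -- Conceptual equivalent of Upsert-Merge: update matching rows, insert new ones.
    -- Rivery handles the real statement for you; this is only a sketch.
    MERGE INTO target_orders t
    USING staged_orders s
        ON t.order_id = s.order_id
    WHEN MATCHED THEN
        UPDATE SET status = s.status,
                   amount = s.amount
    WHEN NOT MATCHED THEN
        INSERT (order_id, status, amount)
        VALUES (s.order_id, s.status, s.amount);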

BUT…THERE’S MORE

In addition to enabling the orchestration of the ingestion and transformation of an entire data pipeline, Logic Rivers have a few ‘tricks up their sleeve’ that make for user-friendly development of dynamic and generic data processes.

Logic Rivers provide the ability to store data values in variables and use them throughout the workflow.
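For example, a value stored in a variable can be referenced later in a query. The sketch below assumes a curly-brace placeholder that Rivery substitutes at run time, with illustrative table and variable names (raw_orders, last_loaded_date):

    -- Illustrative incremental filter using a river variable.
    -- {last_loaded_date} is assumed to be substituted by Rivery at run time.
    SELECT *
    FROM raw_orders
    WHERE order_date > '{last_loaded_date}';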

In addition to organizing and grouping logic steps, containers can be used as a mechanism for looping or for adding conditional logic to your river (check out a looping use case here). Since the variable values are recalculated every time the river runs, this is a truly dynamic way to develop a data pipeline.
