Hello all,
is it possible to load a ZIP archive containing differently structured ASCII files with fixed field lengths into different tables in Snowflake?
For example:
Source: ZFILE.zip contains file1.txt and file2.txt on SFTP
Target: table_for_file1 and table_for_file2 in Snowflake.
Is it possible to unzip the data on the SFTP server with Rivery? With a Logic River or something like that?
Thanks a lot!
Hi @Oleksandra!
It is not possible to split these into two different tables during the load, but it can be done with a Logic step afterwards.
Once you load the initial .txt files (note: Rivery will treat them as one table, so the schemas should match), you can use a Logic SQL step to parse them out into multiple tables. Check out this article on how to use looping to do so: Dynamically create and populate tables using a looping container (this works with any Rivery DWH target, although the example uses Snowflake).
Hope that helps!
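As a rough sketch of that looping pattern (using Python with sqlite3 purely as a stand-in for Snowflake and the Rivery looping container, and assuming a hypothetical `source_file` differentiator column in the staging table):

```python
# Sketch: split one combined staging table into per-source tables,
# keyed on a differentiator column. sqlite3 stands in for Snowflake;
# in Rivery this would be a Logic SQL step inside a looping container.
# The `source_file` column is a hypothetical differentiator, not
# something Rivery adds automatically.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Combined staging table, as loaded from the .txt files
cur.execute("CREATE TABLE staging (id INTEGER, name TEXT, source_file TEXT)")
cur.executemany(
    "INSERT INTO staging VALUES (?, ?, ?)",
    [
        (1, "Anton", "file1"),
        (2, "Boris", "file1"),
        (1, "Audi", "file2"),
        (2, "Ford", "file2"),
    ],
)

# Loop over the distinct differentiator values (the "looping container")
# and create one target table per value. The values come from our own
# controlled staging data, so interpolating them here is acceptable
# for a sketch.
for (src,) in cur.execute("SELECT DISTINCT source_file FROM staging").fetchall():
    cur.execute(
        f"CREATE TABLE table_for_{src} AS "
        f"SELECT id, name FROM staging WHERE source_file = '{src}'"
    )

conn.commit()
```

In Rivery itself, the loop variable would play the role of `src`, and each iteration would run the `CREATE TABLE ... AS SELECT` against Snowflake.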
Taylor
Thank you very much for your answer and for the very helpful tip about looping, @taylor.mcgrath!
The upload as one table works. But the Logic step is not possible in our case: creating the different tables with the loop assumes there is a differentiator that can be used in the variable.
For files like the ones below, there is no such differentiator.
File 1

| ID | Name |
|----|-------|
| 1 | Anton |
| 2 | Boris |
| 3 | Chris |

File 2

| ID | Manu |
|----|-------|
| 1 | Audi |
| 2 | Ford |
| 3 | Mazda |
Loaded as one table, everything ends up together like this (see below), and it is no longer possible to recover the original context.
Target_table

| ID | Name |
|----|-------|
| 1 | Anton |
| 2 | Boris |
| 3 | Chris |
| 1 | Audi |
| 2 | Ford |
| 3 | Mazda |
It would help if it were possible to read metadata during the upload, such as the file name or a file number (1, 2, 3…). There was a related topic where a “short-term roadmap” was stated: SFTP->Snowflake: get source (file) name
How long could that take?
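In the meantime, one possible workaround outside Rivery is a small script that unzips the archive itself and loads each member file into its own table, using the file name as the missing differentiator. A minimal sketch (sqlite3 stands in for Snowflake; the archive contents and the 4-character fixed field width are assumptions for illustration):

```python
# Workaround sketch: unzip the archive and load each member file into
# its own table, using the member's filename as the differentiator.
# sqlite3 stands in for Snowflake; the archive name ZFILE.zip and the
# fixed field layout (chars 0-3 = id, rest = value) are assumptions.
import io
import sqlite3
import zipfile

# Build a small ZFILE.zip in memory so the sketch is self-contained
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("file1.txt", "1   Anton\n2   Boris\n3   Chris\n")
    zf.writestr("file2.txt", "1   Audi\n2   Ford\n3   Mazda\n")
buf.seek(0)

conn = sqlite3.connect(":memory:")
with zipfile.ZipFile(buf) as zf:
    for member in zf.namelist():
        # Derive the target table name from the file name,
        # e.g. file1.txt -> table_for_file1
        table = "table_for_" + member.removesuffix(".txt")
        conn.execute(f"CREATE TABLE {table} (id INTEGER, value TEXT)")
        for line in zf.read(member).decode().splitlines():
            # Fixed field lengths: first 4 chars = id, remainder = value
            conn.execute(
                f"INSERT INTO {table} VALUES (?, ?)",
                (int(line[:4]), line[4:].strip()),
            )
conn.commit()
```

Against real Snowflake, the same idea would mean staging the unzipped files and running one COPY per file into its own target table.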