Using an S3 event and a Python-based Lambda to load data into PostgreSQL – or – Redshift
This post is just a reference to another post I found that does the job in a rather basic way. Note […]
From Redshift, usually as a superuser, issue this create external schema SQL:

-- Creating external schema
create external schema myspectrum_schema from
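In full, the statement points the schema at a data catalog and at an IAM role that Redshift is allowed to assume. A minimal sketch of the complete statement; the database name, account number and role ARN are placeholders:

-- external schema backed by the Glue/Athena data catalog (placeholder names)
create external schema myspectrum_schema
from data catalog
database 'myspectrum_db'
iam_role 'arn:aws:iam::123456789012:role/mySpectrumRole'
create external database if not exists;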
Redshift Spectrum can do partition pruning if you create partitioned tables and you partition the underlying S3 data into folders, so that a query which filters on the partition column only scans the matching folders.
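For example, something along these lines defines a partitioned external table over folder-per-date S3 data and registers one partition; the table, column and bucket names are illustrative:

-- external table partitioned by date (placeholder names)
create external table myspectrum_schema.sales (
    sale_id integer,
    amount decimal(10,2)
)
partitioned by (sale_date date)
stored as parquet
location 's3://my-example-bucket/sales/';

-- register one folder as a partition
alter table myspectrum_schema.sales
add if not exists partition (sale_date = '2020-01-01')
location 's3://my-example-bucket/sales/sale_date=2020-01-01/';

A query that filters on sale_date will then only read the folders for the matching partitions.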
Install packages into <dir>. By default this will not replace existing files/folders in <dir>. Use --upgrade to replace existing packages
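That is the behaviour of pip's -t/--target option, which is the usual way to bundle a dependency such as psycopg2 next to the Lambda handler before zipping it up. A typical invocation would be something like pip install psycopg2-binary -t package/ --upgrade, where the package directory name is illustrative and the exact psycopg2 distribution you need depends on the Lambda runtime.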
ALTER DEFAULT PRIVILEGES allows you to set the privileges that will be applied to objects created in specified schemas: https://www.postgresql.org/docs/current/sql-alterdefaultprivileges.html
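In this context it is handy for making sure that tables created later on, for example by the Lambda's database user, are automatically readable by other roles without re-granting each time. A minimal sketch, with placeholder role names:

alter default privileges for role etl_user in schema public
    grant select on tables to reporting_user;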
Materializations define how a model persists, or does not persist, data. Generally a model is just SQL, so based […]
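In dbt the materialization is usually set in a config block at the top of the model file. A small sketch; the model name and source table are made up:

-- models/customer_orders.sql (hypothetical model)
{{ config(materialized='table') }}

select customer_id, count(*) as order_count
from raw.orders
group by customer_id

Swapping materialized='table' for 'view', 'incremental' or 'ephemeral' changes how, or whether, the compiled select is persisted in the warehouse.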
Lambda code will look like this:

import json
import psycopg2
import os

def lambda_handler(event, context):
    print("event collected is {}".format(event))
    # an S3 put notification carries one or more records, each describing an uploaded object
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        key = record['s3']['object']['key']
        print("new object s3://{}/{}".format(bucket, key))
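From there the handler needs to connect and load the new object. A minimal sketch of a helper that could be called from the loop above, assuming connection details are passed in as environment variables and the target is Redshift, where COPY can read the file straight from S3 via an attached IAM role; the staging table, environment variable names and role are all placeholders:

import os
import psycopg2

def load_object(bucket, key):
    # connection details come from the function's environment variables (illustrative names)
    conn = psycopg2.connect(
        host=os.environ['DB_HOST'],
        port=os.environ.get('DB_PORT', '5439'),
        dbname=os.environ['DB_NAME'],
        user=os.environ['DB_USER'],
        password=os.environ['DB_PASSWORD'],
    )
    try:
        with conn.cursor() as cur:
            # COPY the uploaded JSON file from S3 into a staging table;
            # psycopg2 quotes the path and role ARN when it interpolates the parameters
            cur.execute(
                "copy staging.events from %s iam_role %s format as json 'auto'",
                ("s3://{}/{}".format(bucket, key), os.environ['COPY_IAM_ROLE']),
            )
        conn.commit()
    finally:
        conn.close()

For plain PostgreSQL the same connection code applies, but the object would typically be downloaded with boto3 first and then inserted or loaded with copy_expert from the local file instead of a server-side COPY from S3.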