(required) Exposure to AWS Glue, a Spark-based serverless ETL engine and service. Exposure to AWS CodeCommit and
Perform data extraction, transformation, and loading (ETL) operations. Analyze complex data sets. Prior experience in a similar BI Analyst role. Strong knowledge of ETL data flow processes. Demonstrable experience in identifying
Requirements:
· Terraform; Python 3.x; PySpark
· Boto3; ETL
· Docker; Linux/Unix; Big Data; PowerShell/Bash
Advantageous (in order of importance):
· Trino (distributed SQL queries)
· Glue (ETL scripting)
· CloudWatch; SNS
· Athena; S3
· Kinesis
solutions using languages such as Python and big data pipelines (ETL, SQL). Strong working knowledge of software development. 3 years' experience building big data pipelines (ETL, SQL, etc.). ADVANTAGEOUS TECHNICAL SKILLS: Understanding
Data modelling, ETL processes, and data warehousing. Design, implement, and maintain ETL processes for data
Responsible for driving, designing, and building scalable ETL systems for a big data warehouse, following a robust methodology. Adept at the design and development of ETL processes. SQL development experience, preferably
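The design-and-maintain-ETL duty above can be sketched as a minimal extract/transform/load pipeline. This is an illustrative example only, not part of the role's actual codebase: the `RAW_CSV` data, the `sales` table, and the function names are all hypothetical, and SQLite stands in for a real warehouse target.

```python
import csv
import io
import sqlite3

# Hypothetical raw extract, standing in for a source-system export.
RAW_CSV = """id,amount,currency
1,100.50,usd
2,99.00,USD
3,12.25,eur
"""

def extract(text):
    """Extract: parse the raw CSV into dict rows."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: cast types and normalise the currency code."""
    return [
        {"id": int(r["id"]),
         "amount": float(r["amount"]),
         "currency": r["currency"].upper()}
        for r in rows
    ]

def load(rows, conn):
    """Load: write the cleaned rows into a warehouse table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS sales (id INTEGER, amount REAL, currency TEXT)")
    conn.executemany(
        "INSERT INTO sales VALUES (:id, :amount, :currency)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_CSV)), conn)
total = conn.execute(
    "SELECT SUM(amount) FROM sales WHERE currency = 'USD'").fetchone()[0]
```

Keeping extract, transform, and load as separate functions, as above, is what makes such pipelines testable and scalable: each stage can be swapped out (e.g. S3 for the source, a PySpark job for the transform) without touching the others.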
mapping, field mapping, and value mapping for the ETL process. Ability to manage, facilitate, and drive
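Field mapping and value mapping, as mentioned above, can be sketched as a small mapping spec applied to each source record. The field names, value codes, and `apply_mapping` helper here are hypothetical, for illustration only.

```python
# Hypothetical source-to-target mapping spec for an ETL step:
# FIELD_MAP renames source fields to target fields; VALUE_MAPS
# translates source-system codes into target-system values.
FIELD_MAP = {"cust_nm": "customer_name", "cntry_cd": "country"}
VALUE_MAPS = {"country": {"ZA": "South Africa", "UK": "United Kingdom"}}

def apply_mapping(record, field_map, value_maps):
    """Apply field mapping, then value mapping, to one source record."""
    out = {}
    for src_field, value in record.items():
        target_field = field_map.get(src_field, src_field)  # default: keep name
        value_map = value_maps.get(target_field, {})
        out[target_field] = value_map.get(value, value)     # default: keep value
    return out

mapped = apply_mapping({"cust_nm": "Acme Ltd", "cntry_cd": "ZA"},
                       FIELD_MAP, VALUE_MAPS)
```

Driving a transformation from a declarative spec like this, rather than hard-coding it, is what lets analysts review and maintain the mapping without reading pipeline code.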
with a minimum of 5 years' relevant experience in ETL, SQL, and data cleansing. Responsibilities: Data profiling
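The data-profiling responsibility above typically means summarising each column's completeness and cardinality before cleansing. A minimal sketch, with hypothetical sample rows and a `profile` helper invented for illustration:

```python
from collections import Counter

# Hypothetical extract with typical quality issues (blanks, casing drift).
ROWS = [
    {"email": "a@x.com", "city": "Cape Town"},
    {"email": "",        "city": "cape town"},
    {"email": "b@x.com", "city": "Durban"},
]

def profile(rows):
    """Per-column profile: row count, missing values, distinct values, mode."""
    report = {}
    for col in rows[0]:
        values = [r[col] for r in rows]
        report[col] = {
            "rows": len(values),
            "missing": sum(1 for v in values if v in ("", None)),
            "distinct": len(set(values)),
            "top": Counter(values).most_common(1)[0][0],
        }
    return report

report = profile(ROWS)
```

A profile like this drives the cleansing step that follows: the blank email surfaces as a missing value, and "Cape Town" vs "cape town" shows up as inflated distinct counts, flagging a normalisation rule to write.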