Our client requires the services of a Data Engineer/Scientist (Senior) – Midrand/Menlyn/Rosslyn/Home as possible.

ROLE: Data Engineers are responsible for building and maintaining Big Data pipelines using GROUP Data Platforms. Data Engineers are custodians of data and must ensure that data is shared in line with …

ESSENTIAL TECHNICAL SKILLS:
- Oracle/PostgreSQL
- PySpark
- Boto3
- ETL
- Docker
- Linux / Unix
- Big Data
- PowerShell / Bash
- Experience in working with Enterprise …
- Knowledge of data formats such as Parquet, AVRO, JSON, XML, CSV, etc.
- Experience working with Data Quality tools

ADVANTAGEOUS TECHNICAL SKILLS:
- Demonstrate expertise in data modelling of complex data sets.
- Perform thorough testing and data validation to ensure the accuracy of data transformations.
Job Title: AWS Data Engineers Required – Project Based – Contractual – to start ASAP
Location: JHB – Hybrid

Responsibilities:
1. Develop and implement AWS cloud solutions for data engineering projects.
2. Utilise AWS services such as API Gateway to build scalable and efficient data pipelines.
3. Collaborate with cross-functional teams.

Required Qualifications:
- Proven experience as an AWS Data Engineer or similar role
- Strong understanding of scheduling and identifying data/job dependencies
- Knowledge / experience with data engineering using the AWS platform
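The posting above asks for a "strong understanding of scheduling and identifying data/job dependencies". A minimal sketch of what that means in practice, using Python's standard-library `graphlib` to derive a valid run order from a pipeline dependency graph; the job names are invented for illustration, not taken from any specific platform:

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline: each job maps to the set of jobs it depends on.
jobs = {
    "extract_orders": set(),
    "extract_customers": set(),
    "clean_orders": {"extract_orders"},
    "join_datasets": {"clean_orders", "extract_customers"},
    "publish_report": {"join_datasets"},
}

def run_order(dependencies):
    """Return one valid execution order for the job graph.

    Raises graphlib.CycleError if the dependencies contain a cycle,
    which is exactly the kind of problem a scheduler must surface.
    """
    return list(TopologicalSorter(dependencies).static_order())

if __name__ == "__main__":
    print(run_order(jobs))
```

Real orchestrators (e.g. AWS Step Functions or Glue workflow triggers) express the same idea declaratively, but the underlying model is this dependency graph.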
Our client requires the services of a Data Engineer/Scientist (Expert) – Midrand/Menlyn/Rosslyn/Home as possible.

ROLE: Data Engineers are responsible for building and maintaining Big Data pipelines using GROUP Data Platforms. Data Engineers are custodians of data and must ensure that data is shared in line with …

ESSENTIAL TECHNICAL SKILLS:
- Boto3
- ETL
- Docker
- Linux / Unix
- Big Data
- PowerShell / Bash
- Cloud Data Hub (CDH)
- CDEC Blueprint

ADVANTAGEOUS SKILLS:
- Expertise in data modelling with Oracle SQL.
- Exceptional analytical skills for working with complex data sets.
- Thorough testing and data validation to ensure the accuracy of data transformations.
- Knowledge of data formats such as Parquet, AVRO, JSON, XML, CSV, etc.
- Experience with Data Quality tools and APIs.
- Experience building data pipelines using AWS Glue or Data Pipeline, or similar platforms.
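The skills above repeatedly mention ETL and data formats such as CSV, JSON, and Parquet. A minimal, stdlib-only sketch of one such conversion step, parsing CSV rows into JSON-ready records; a production pipeline would more likely use PySpark or AWS Glue and write columnar formats like Parquet, and the function name and sample data here are purely illustrative:

```python
import csv
import io
import json

def csv_to_json_records(csv_text):
    """Parse CSV text (first row = header) into a list of dicts.

    All values stay as strings; type casting and validation would be
    separate transformation steps in a real pipeline.
    """
    reader = csv.DictReader(io.StringIO(csv_text))
    return [dict(row) for row in reader]

if __name__ == "__main__":
    sample = "part_number,description\nP100,Bracket\nP200,Bolt\n"
    records = csv_to_json_records(sample)
    print(json.dumps(records))
```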
Description: Data Steward
Location: Midrand
Salary: R25 000 – R35 000 CTC
Mission: The Data Steward will … the local ERP system, provide general and technical data on existing part numbers, and add local translations.
Join us as an AWS Data Engineer (Expert) and leverage your skills in building and maintaining Big Data pipelines, with expertise in Python, SQL, and AWS services to drive our data initiatives forward. Be part of a collaborative team where your contributions ensure accurate and secure data management.

ESSENTIAL TECHNICAL SKILLS REQUIREMENTS:
- ETL
- Docker
- Linux / Unix
- Big Data
- PowerShell / Bash
- Cloud Data Hub (CDH)
- CDEC Blueprint
- Any …

ADVANTAGEOUS SKILLS REQUIREMENTS:
- Expertise in data modelling with Oracle SQL.
- Exceptional analytical skills.