Our client requires the services of a Data Engineer/Scientist (Senior) – Midrand/Menlyn/Rosslyn/Home Office – as soon as possible. ROLE: Data Engineers are responsible for building and maintaining Big Data Pipelines using GROUP Data Platforms. Data Engineers are custodians of data and must ensure that data is shared in line with the information classification requirements on a need-to-know basis. ESSENTIAL SKILLS: Oracle/PostgreSQL; PySpark; Boto3; ETL; Docker; Linux/Unix; Big Data; PowerShell/Bash; experience in working with Enterprise Collaboration tools; knowledge of data formats such as Parquet, AVRO, JSON, XML, and CSV; experience working with Data Quality tools. (A short PySpark sketch of the format handling follows this listing.)
Work in accordance with the agreed hours of operation within a shift model. Usage of automation tools to monitor, observe, and manage applications on Kubernetes clusters. Data modelling and database technologies (relational and non-relational).
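As a rough illustration of the PySpark and data-format skills this listing asks for, here is a minimal sketch that converts a CSV extract into partitioned Parquet. The bucket, paths, and column names are invented placeholders, not details from the advert.

```python
# Minimal PySpark sketch: convert a CSV extract to partitioned Parquet.
# All paths and column names below are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("csv_to_parquet").getOrCreate()

df = (spark.read
      .option("header", True)
      .option("inferSchema", True)
      .csv("s3a://example-bucket/raw/orders.csv"))   # hypothetical source

(df.write
   .mode("overwrite")
   .partitionBy("order_date")                        # hypothetical partition column
   .parquet("s3a://example-bucket/curated/orders/"))
```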
Our client requires the services of a Data Engineer/Scientist (Entry) – Midrand/Menlyn/Rosslyn/Home Office – as soon as possible. ROLE: Data Engineers are responsible for building and maintaining Big Data Pipelines using GROUP Data Platforms. Data Engineers are custodians of data and must ensure that data is shared in line with the information classification requirements on a need-to-know basis. ESSENTIAL SKILLS: Boto3; ETL; Docker; Linux/Unix; Big Data; PowerShell/Bash; GROUP Cloud Data Hub (CDH); GROUP CDEC Blueprint; knowledge of data formats such as Parquet, AVRO, JSON, XML, and CSV; experience working with Data Quality tools.
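For the Boto3 skill named above, a minimal sketch of the kind of S3 interaction involved might look like the following; the bucket and key names are placeholders, not details from the advert.

```python
# Minimal Boto3 sketch: upload a file and list objects under a prefix.
# Bucket and key names are hypothetical.
import boto3

s3 = boto3.client("s3")

s3.upload_file("daily_extract.csv", "example-bucket", "raw/daily_extract.csv")

resp = s3.list_objects_v2(Bucket="example-bucket", Prefix="raw/")
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])
```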
Our client requires the services of a Data Engineer/Scientist (Expert) – Midrand/Menlyn/Rosslyn/Home Office – as soon as possible. ROLE: Data Engineers are responsible for building and maintaining Big Data Pipelines using GROUP Data Platforms. Data Engineers are custodians of data and must ensure that data is shared in line with the information classification requirements on a need-to-know basis. ESSENTIAL SKILLS: Boto3; ETL; Docker; Linux/Unix; Big Data; PowerShell/Bash; GROUP Cloud Data Hub (CDH); GROUP CDEC Blueprint; knowledge of data formats such as Parquet, AVRO, JSON, XML, and CSV. Experience working with Data Quality Tools.
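The advert does not name a specific Data Quality tool, so as a neutral illustration here is a minimal hand-rolled check in PySpark; the dataset path and key column are assumptions for the example only.

```python
# Minimal data-quality sketch in PySpark: null-key and duplicate-key checks.
# The path and the "order_id" key column are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq_checks").getOrCreate()
df = spark.read.parquet("s3a://example-bucket/curated/orders/")

total = df.count()
null_keys = df.filter(F.col("order_id").isNull()).count()
dupes = total - df.dropDuplicates(["order_id"]).count()

assert null_keys == 0, f"{null_keys} rows with null order_id"
assert dupes == 0, f"{dupes} duplicate order_id rows"
```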
Our client requires the services of a Data Engineer/Scientist (Senior) – Midrand/Menlyn/Rosslyn/Home Office – as soon as possible. ROLE: Data Engineers are responsible for building and maintaining Big Data Pipelines using GROUP Data Platforms. Data Engineers are custodians of data and must ensure that data is shared in line with the information classification requirements on a need-to-know basis. ESSENTIAL SKILLS: Linux/Unix; Big Data; PowerShell/Bash. ADVANTAGEOUS TECHNICAL SKILLS: Demonstrate expertise in data modelling of complex data sets. Perform thorough testing and data validation to ensure the accuracy of data transformations.
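For the testing-and-validation duty above, one common approach is a pytest suite around individual transformations. The sketch below is a minimal example under assumptions of my own: the clean_amounts() function is invented purely to have something to test.

```python
# Minimal pytest sketch validating a PySpark transformation.
# clean_amounts() is a hypothetical transformation, not from the advert.
import pytest
from pyspark.sql import SparkSession

@pytest.fixture(scope="module")
def spark():
    return SparkSession.builder.master("local[1]").appName("tests").getOrCreate()

def clean_amounts(df):
    # Transformation under test: drop rows with non-positive amounts.
    return df.filter(df.amount > 0)

def test_clean_amounts_drops_non_positive(spark):
    df = spark.createDataFrame([(1, 10.0), (2, -5.0), (3, 0.0)], ["id", "amount"])
    result = clean_amounts(df)
    assert sorted(r.id for r in result.collect()) == [1]
```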
Requirements: Solid 5 years of experience with Azure Data Factory, Databricks, and Power BI. Degree in Computer Science or a related field. Experience in the context of data manipulation and notebook-based analytics. A thorough understanding of cloud data security practices. Experience in optimizing cloud-based data platforms for high performance and reliability, especially those related to data engineering. Responsibilities: Leverage Azure Data Factory to orchestrate and automate data workflows, ensuring seamless ETL processes. Utilize Databricks to perform complex data analytics.
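As a sketch of the Azure Data Factory orchestration work described, the snippet below triggers a pipeline run through the azure-mgmt-datafactory SDK. The subscription ID, resource group, factory, pipeline name, and parameter are all placeholders I have assumed for illustration.

```python
# Minimal sketch: trigger an ADF pipeline run via the Azure SDK for Python.
# Subscription, resource group, factory, and pipeline names are hypothetical.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

credential = DefaultAzureCredential()
adf = DataFactoryManagementClient(credential, "00000000-0000-0000-0000-000000000000")

run = adf.pipelines.create_run(
    resource_group_name="example-rg",
    factory_name="example-factory",
    pipeline_name="nightly_etl",             # hypothetical pipeline
    parameters={"load_date": "2024-01-01"},  # hypothetical parameter
)
print("Started run:", run.run_id)
```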
Our client requires the services of a Data Engineer/Scientist – as soon as possible. ROLE: Data Engineers are responsible for building and maintaining Big Data Pipelines using GROUP Data Platforms. Data Engineers are custodians of data and must ensure that data is shared in line with the information classification requirements on a need-to-know basis. Degree in Computer Science, Data Engineering, or a comparable field of study with a focus on data-intensive applications. Experience in architecting and implementing scalable data pipelines in cloud environments, preferably Azure. ESSENTIAL SKILLS: Programming skills in data-related programming languages and frameworks.
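To make the "scalable data pipelines in cloud environments, preferably Azure" requirement concrete, here is a minimal incremental-load step in PySpark against ADLS Gen2 storage; the storage account, containers, and watermark column are assumptions of mine, not advert details.

```python
# Minimal sketch of one incremental pipeline step on Azure storage.
# Storage account, containers, and the event_ts watermark are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("incremental_load").getOrCreate()

src = "abfss://raw@examplestorage.dfs.core.windows.net/events/"
dst = "abfss://curated@examplestorage.dfs.core.windows.net/events/"

events = spark.read.json(src)
# Keep only rows newer than the last processed watermark (hard-coded for brevity).
fresh = events.filter(F.col("event_ts") > F.lit("2024-01-01"))
fresh.write.mode("append").parquet(dst)
```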
Knowledge across all SAP modules, with an initial focus on master data. Ability to assist with problem identification and resolution. ADVANTAGEOUS TECHNICAL SKILLS: SAP FIORI (advantageous); SAP MDG Master Data Governance (advantageous). Flexibility to work some weekends/shifts or longer hours if required. Experienced in Agile.
Organize and support test case creation. Coordinate test data creation with the developers and test analysts. Willing and able to travel internationally and to work shifts, after hours, and on public holidays.
Writing test cases, test execution, and defect capture. Planning and effort estimation for test case execution, and confirmation of completion from a testing perspective. Coordinate test data creation with the developers. Track new/changed requirements.
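One common way to coordinate test data creation is to encode it in shared fixtures that both developers and testers run. The sketch below is a minimal example under assumptions of my own: the orders table and its columns are invented for illustration.

```python
# Minimal sketch: shared test data via a pytest fixture (SQLite for portability).
# The orders table and its columns are hypothetical.
import sqlite3
import pytest

@pytest.fixture
def orders_db(tmp_path):
    db = sqlite3.connect(str(tmp_path / "test.db"))
    db.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
    db.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 10.0), (2, 25.5)])
    db.commit()
    yield db
    db.close()

def test_total_amount(orders_db):
    (total,) = orders_db.execute("SELECT SUM(amount) FROM orders").fetchone()
    assert total == 35.5
```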