COMMENCEMENT: As soon as possible
ROLE: Responsible for CA Data Management of the central sector and all plants, per guidelines:
Consult users on how to store CA data and permanent data archives, as well as run reports
Assign (Windows only)
Create CA data structure as per GROUP guidelines
Rearrange CA data structure as per project
Remove obsolete CA data using the web application CA Cleanup Tool (Unix only)
Backup and restore CA data
Allow users to work on weekends (changes)
Serve as contact for data management related questions in the cloud
Our client requires the services of a Data Scientist/Engineer (Entry) – Midrand/Menlyn/Rosslyn/Home Office
… and managing applications on Kubernetes clusters; data modelling and database technologies (relational …
Our client requires the services of a Data Engineer/Scientist (Senior) - Midrand/Menlyn/Rosslyn/Home Office
COMMENCEMENT: As soon as possible
ROLE: Data Engineers are responsible for building and maintaining Big Data pipelines using GROUP data platforms. Data Engineers are custodians of data and must ensure that data is shared in line with …
Boto3
ETL
Docker
Linux / Unix
Big Data
Powershell / Bash
GROUP Cloud Data Hub (CDH)
GROUP CDEC Blueprint
Business Intelligence (BI) experience
Technical data modelling and schema design ("not drag and drop")
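As a rough sketch of the Python ETL work this role describes (the CSV layout, field names, and S3 wiring below are illustrative assumptions, not taken from the posting):

```python
import csv
import io

def transform(rows):
    """Keep only rows with a positive 'amount' and normalise plant codes."""
    return [
        {"plant": r["plant"].strip().upper(), "amount": float(r["amount"])}
        for r in rows
        if float(r["amount"]) > 0
    ]

def run_etl(raw_csv):
    """Extract records from a CSV string, transform them, and return
    load-ready dictionaries."""
    rows = list(csv.DictReader(io.StringIO(raw_csv)))
    return transform(rows)

# In a real pipeline the extract/load ends would typically use boto3, e.g.:
#   s3 = boto3.client("s3")
#   body = s3.get_object(Bucket="...", Key="...")["Body"].read().decode()
#   run_etl(body)
```

Keeping the transform step pure (plain dicts in, plain dicts out) makes the pipeline unit-testable without any AWS credentials.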
… equipment. You will also be reimbursed for your monthly data. Apart from the above, you will also be required … effectively. Strong attention to detail. Accurate typing skills. Excellent written abilities. A minimum …
com) for job seekers and employers. Do not make any payment to get a job or to hire someone. We aggregate listings from multiple sources and are not responsible for any scams. Candidates who qualify will be contacted.
Ability to develop in data-driven programming languages such as Python, and Big Data pipelines such as ETL pipeline development
Knowledge of multiple database types, such as structured and unstructured databases
Linux
… Working Model (AWM) Charter
ADVANTAGEOUS SKILLS
Data and API mining
Knowledge of security best practices
Setting up alerting pipelines
Comfortable with data structures and algorithms
Understanding of integration
Docker container creation and usage
Familiar with data streaming services such as Apache Kafka
Coordination …
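The advantageous skills above combine data streaming (Apache Kafka) with alerting pipelines. A minimal sketch of what that pairing can look like, with the topic name, broker address, and event schema all hypothetical:

```python
import json

def process_events(messages, threshold=100):
    """Consume raw JSON event payloads (as a Kafka consumer would yield
    them) and emit an alert for every reading above the threshold."""
    alerts = []
    for payload in messages:
        event = json.loads(payload)
        if event.get("value", 0) > threshold:
            alerts.append({"sensor": event["sensor"], "value": event["value"]})
    return alerts

# With the kafka-python library, the same function would wrap a live
# consumer instead of a list (broker and topic are placeholders):
#   from kafka import KafkaConsumer
#   consumer = KafkaConsumer("plant-telemetry",
#                            bootstrap_servers="broker:9092")
#   alerts = process_events(m.value for m in consumer)
```

Separating the alert logic from the consumer loop means the pipeline can be tested with plain lists, with no broker running.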