A large financial institution has a vacancy for a Data Engineer on a contract basis.
The Company
The incumbent will be responsible for delivering central data management, governance and reporting within the organisation.
Designing and implementing data strategies and systems, in collaboration with Management and Business teams, to create and maintain the data architecture that will drive various initiatives, automate extremely high volumes of data delivery, and creatively solve data volume and scaling challenges.
Our client in Pretoria is recruiting for a Data Scientist-AI Platform (Entry) to join their Cloud team.
The client delivers organisational capabilities with agile, modern, data-driven solutions and services built according to and beyond expectations, and is recruiting for an AWS Data Engineer to join an environment with cutting-edge expertise in data modelling. The role involves developing technical documentation and artefacts, and calls for knowledge of data formats such as Parquet, AVRO, JSON, XML and CSV; working with data quality tools such as Great Expectations; building data pipelines using AWS Glue, AWS Data Pipeline, or similar platforms; and familiarity with data stores.
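As a rough illustration of the pipeline work described above, here is a minimal AWS Glue job skeleton; the database, table and bucket path are assumptions for the sketch, not details from the posting.

```python
# Minimal AWS Glue job sketch: read a catalogued table, write it out as
# Parquet. All source and destination names below are hypothetical.
import sys
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glueContext = GlueContext(SparkContext())
job = Job(glueContext)
job.init(args["JOB_NAME"], args)

# Read a source table registered in the Glue Data Catalog (placeholder names).
orders = glueContext.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="orders")

# Write the frame back out to S3 as Parquet, one of the formats listed above.
glueContext.write_dynamic_frame.from_options(
    frame=orders,
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/curated/orders/"},
    format="parquet",
)
job.commit()
```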
Ride the Data Wave: Become Our Next Streaming Platform Engineer. Our client is looking for a Data Streaming Platform Engineer with knowledge of event streaming (e.g., Apache Kafka), data modelling, and database technologies (relational).
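By way of illustration, a minimal event-streaming sketch using the kafka-python client; the broker address, topic name and payload are assumptions for the example.

```python
# Publish one JSON event to a Kafka topic; a consumer elsewhere would
# subscribe to the same topic to react to the stream.
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",  # hypothetical broker address
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

producer.send("orders-events", value={"order_id": 1, "status": "created"})
producer.flush()  # ensure the message is actually sent before exiting
```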
Our client in Pretoria is recruiting for an AWS Data Engineer (Chief Expert) to join their team. Data Engineers are responsible for maintaining Big Data pipelines using the client's data platforms. Data Engineers are custodians of data and must ensure that data is shared in line with information classification requirements, on a need-to-know basis. Key skills: Boto3, ETL, Docker, Linux/Unix, Big Data, PowerShell/Bash, Cloud Data Hub (CDH), CDEC Blueprint, basic Business Intelligence (BI) experience, and technical data modelling and schema design (“not drag and drop”).
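As a sketch of the scripted data movement such a role automates, here is a minimal Boto3 example; the bucket name and object keys are hypothetical.

```python
# Upload a local extract to S3, then list what the prefix now contains.
import boto3

s3 = boto3.client("s3")

s3.upload_file(
    "daily_extract.parquet",           # local file (hypothetical)
    "example-data-lake",               # bucket (hypothetical)
    "raw/daily_extract.parquet",       # destination key
)

response = s3.list_objects_v2(Bucket="example-data-lake", Prefix="raw/")
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])
```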
Responsibilities include identification of data sources both internal and external to the industry, daily data collection, and data clean-up (facilitation of data clean-up, importation of data sets where applicable, and data quality assurance).
Develop an application programming interface (API) to enable the solutions to source data from other internal systems.
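One minimal shape such an API could take, sketched here with FastAPI; the endpoint, dataset name and records are hypothetical stand-ins for data held by an internal system.

```python
# A small HTTP service that lets other solutions source data over the network.
from fastapi import FastAPI, HTTPException

app = FastAPI(title="Internal Data Service")

# Hypothetical stand-in for data pulled from an internal system.
DATASETS = {"sales": [{"order_id": 1, "amount": 120.50}]}

@app.get("/datasets/{name}")
def get_dataset(name: str):
    """Return the named dataset so downstream solutions can consume it."""
    if name not in DATASETS:
        raise HTTPException(status_code=404, detail="dataset not found")
    return {"name": name, "records": DATASETS[name]}
```

Run with, for example, `uvicorn app:app`, and other systems can fetch `/datasets/sales` over HTTP.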
Ability to develop in data-driven programming languages such as Python, and to build Big Data pipelines such as ETL.
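A minimal Python ETL sketch along those lines: extract a CSV, transform it with pandas, and load it into SQLite; all file and table names are hypothetical.

```python
# Extract -> Transform -> Load, in its simplest form.
import sqlite3
import pandas as pd

# Extract: read a raw CSV export (hypothetical file).
df = pd.read_csv("raw_orders.csv")

# Transform: normalise column names and drop rows missing the key field.
df.columns = [c.strip().lower() for c in df.columns]
df = df.dropna(subset=["order_id"])

# Load: write the cleaned table into a local SQLite database.
with sqlite3.connect("warehouse.db") as conn:
    df.to_sql("orders", conn, if_exists="replace", index=False)
```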
Experience managing complex IT infrastructure projects, including data centres, networks, servers, storage, and cloud technologies.