Our client requires the services of a Data Scientist/Engineer (Entry) – Midrand/Menlyn/Rosslyn/Home Office
ROLE: Data Engineers are responsible for building and maintaining Big Data pipelines using GROUP Data Platforms. Data Engineers are custodians of data and must ensure that data is shared in line with the relevant requirements.
• Boto3
• ETL (see the sketch below)
• Docker
• Linux / Unix
• Big Data
• PowerShell / Bash
• GROUP Cloud Data Hub (CDH)
• GROUP CDEC Blueprint
• Business Intelligence (BI) experience
• Technical data modelling and schema design ("not drag and drop")
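For orientation only, here is a minimal sketch of the kind of Boto3-based ETL step such a pipeline might contain; the bucket names, object keys and the transformation itself are hypothetical placeholders, not part of the role description.

```python
# Hypothetical Boto3 ETL sketch: read a CSV object from S3, transform it, write it back.
# Bucket names, keys and the transformation rule are illustrative placeholders.
import csv
import io

import boto3

s3 = boto3.client("s3")  # credentials come from the environment / AWS config


def extract(bucket: str, key: str) -> list[dict]:
    """Read a CSV object from S3 into a list of row dicts."""
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
    return list(csv.DictReader(io.StringIO(body)))


def transform(rows: list[dict]) -> list[dict]:
    """Example transformation: keep rows with a status and normalise its casing."""
    return [{**row, "status": row["status"].lower()} for row in rows if row.get("status")]


def load(rows: list[dict], bucket: str, key: str) -> None:
    """Write the transformed rows back to S3 as CSV."""
    if not rows:
        return
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
    s3.put_object(Bucket=bucket, Key=key, Body=out.getvalue().encode("utf-8"))


if __name__ == "__main__":
    raw = extract("raw-data-bucket", "input/records.csv")        # placeholder source
    load(transform(raw), "curated-data-bucket", "output/records.csv")  # placeholder target
```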
• Dialog Programming
• SapScript and Smartforms
• Batch Data Capture (BDC)
• Function Modules and BAPIs
• Enhancements
• Support test case creation and coordinate test data creation with the developers and test analysts
• Technical test case creation
• Clear defect capturing and defect workflow adherence
• Identification, creation and sanitisation of test data
• Manual and automated test execution (see the sketch below)
• Maintenance
• Writing test cases, test execution and defect capture
• Planning and effort estimation for test case execution; confirming completion from a testing perspective
• Track new/changed requirements
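As an illustration of the automated side of test execution, a minimal pytest sketch is shown below; the function under test (validate_vin) and the sample test data are hypothetical stand-ins for a real system under test.

```python
# Minimal pytest sketch of automated test execution against prepared test data.
# validate_vin and the sample VINs are hypothetical, used only for illustration.
import pytest


def validate_vin(vin: str) -> bool:
    """Toy stand-in for the system under test: a VIN must be 17 alphanumeric characters."""
    return len(vin) == 17 and vin.isalnum()


# Test data prepared up front, mirroring "identification, creation and sanitisation of test data".
TEST_DATA = [
    ("WBA3A5C58DF123456", True),    # well-formed 17-character VIN
    ("SHORTVIN", False),            # too short
    ("WBA3A5C58DF12345!", False),   # invalid character
]


@pytest.mark.parametrize("vin,expected", TEST_DATA)
def test_validate_vin(vin, expected):
    assert validate_vin(vin) is expected
```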
Ability to develop in data-driven programming languages such as Python and Big Data pipelines such as ETL. Software development.
ESSENTIAL SKILLS:
• Expertise in Data Intelligence and Business Intelligence (knowledge and experience)
• At least 3 years' experience building big data pipelines (ETL, SQL, etc.)
• Working Model (AWM) Charter
ADVANTAGEOUS SKILLS:
• Data and API mining
• Knowledge of security best practices
• Setting up alerting pipelines
• Comfortable with data structures and algorithms
• Understanding of integration
• Docker container creation and usage
• Familiarity with data streaming services such as Apache Kafka (see the consumer sketch below)
• Coordination
• Knowledge of Jira, Confluence and Agile methodologies
• Data analysis
• ITSM knowledge
• User support ticket management
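Since familiarity with Apache Kafka is listed as advantageous, a minimal consumer sketch using the kafka-python package follows; the topic name, broker address and consumer group are placeholders, not details from the posting.

```python
# Minimal Kafka consumer sketch using the kafka-python package.
# Topic name, broker address and group id are illustrative placeholders.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "vehicle-telemetry",                   # hypothetical topic
    bootstrap_servers=["localhost:9092"],  # placeholder broker address
    group_id="data-engineering-demo",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    record = message.value
    # In a real pipeline this is where validation, enrichment and loading would happen.
    print(message.topic, message.partition, message.offset, record)
```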
Artifacts and activities. Maintenance of customer base data. Engage with stakeholders of TLM. Crisis Management. Practical experience of IT infrastructure, i.e. data centres, networks, servers, storage, platform and middleware. IT process governance. Data analysis: the ability to analyse and visualise data sets (Excel, Power BI).
Environments to build a database access monitoring solution for PostgreSQL databases (see the sketch below). In-depth knowledge and presentation skills. Knowledge of data modelling and data visualisation tools.
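As a rough illustration of one building block of such a database access monitoring solution, the sketch below polls PostgreSQL's pg_stat_activity view with psycopg2; the connection string is a placeholder, and a real solution would also need persistence, scheduling and alerting.

```python
# Hedged sketch: poll pg_stat_activity to see who is connected to a PostgreSQL
# database and what they are running, as one piece of an access-monitoring solution.
import psycopg2

DSN = "host=localhost dbname=appdb user=monitor password=secret"  # placeholder connection string

QUERY = """
    SELECT usename, client_addr, state, query_start, query
    FROM pg_stat_activity
    WHERE datname = current_database();
"""

with psycopg2.connect(DSN) as conn:
    with conn.cursor() as cur:
        cur.execute(QUERY)
        for usename, client_addr, state, query_start, query in cur.fetchall():
            # Print one line per session; a real monitor would store or alert on this.
            print(f"{query_start} {usename}@{client_addr} [{state}]: {(query or '')[:80]}")
```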
Cloud Experience:
• Ansible
• Experience with Business Intelligence and data visualisation tools (e.g. AWS QuickSight)
• Experience with workflows for database systems
• Experience with common data formats, e.g. YAML, JSON
• Experience in managing the integration of database systems, including data flow management
• Understanding of various database systems
• Scripting, preferably in Python (e.g. to transform and share data between databases); see the sketch below
• Experience with RESTful APIs and external interface partners
• Plan the integration and data flows between multiple CMDBs and network management systems
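To illustrate the combination of RESTful APIs, Python and common data formats mentioned above, here is a hedged sketch that pulls records from a hypothetical CMDB REST endpoint and re-shares them as YAML; the URL, token and field names are assumptions for illustration only.

```python
# Sketch of pulling configuration items from a hypothetical CMDB REST API and
# re-sharing them as YAML for a downstream system. Endpoint, token and field
# names are assumptions, not details from the posting.
import json

import requests
import yaml

CMDB_URL = "https://cmdb.example.com/api/v1/servers"  # placeholder endpoint
TOKEN = "<api-token>"                                 # placeholder credential

response = requests.get(
    CMDB_URL,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
response.raise_for_status()
items = response.json()  # assumed to be a list of configuration-item dicts

# Keep only the fields the downstream system cares about (illustrative mapping).
normalised = [
    {"name": item["hostname"], "ip": item["ip_address"], "env": item["environment"]}
    for item in items
]

# Share the result as YAML, one of the common data formats named above.
with open("servers.yaml", "w", encoding="utf-8") as fh:
    yaml.safe_dump(normalised, fh, sort_keys=False)

print(json.dumps(normalised[:3], indent=2))  # quick JSON preview of the first few records
```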