Our client requires the services of a Data Scientist/Engineer (Entry) – Midrand/Menlyn/Rosslyn/Home Office as possible.

ROLE: Data Engineers are responsible for building and maintaining Big Data Pipelines using GROUP Data Platforms. Data Engineers are custodians of data and must ensure that data is shared in line with solution architect expertise. Be able to execute any department-related task as and when required by the Delivery.

- Boto3
- ETL
- Docker
- Linux / Unix
- Big Data
- Powershell / Bash
- GROUP Cloud Data Hub (CDH)
- GROUP CDEC Blueprint
- Business Intelligence (BI) experience
- Technical data modelling and schema design ("not drag and drop")
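To make the pipeline-building duty above concrete, here is a minimal, hedged sketch of an extract-transform-load job in plain Python. It uses only the standard library, with sqlite3 standing in for a real GROUP data platform; the table and field names are hypothetical illustrations, not the client's actual schema.

```python
# Toy ETL job: extract CSV rows, transform (clean/typify), load into SQLite.
# sqlite3 stands in for a real warehouse target; all names are hypothetical.
import csv
import io
import sqlite3

RAW_CSV = """id,name,amount
1,alice,10.5
2,bob,
3,carol,7.25
"""

def extract(text):
    """Extract: parse CSV text into dict rows."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: drop rows with missing amounts, normalise names and types."""
    out = []
    for r in rows:
        if r["amount"]:
            out.append((int(r["id"]), r["name"].title(), float(r["amount"])))
    return out

def load(rows, conn):
    """Load: write the cleaned rows into the target table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS payments (id INTEGER, name TEXT, amount REAL)"
    )
    conn.executemany("INSERT INTO payments VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_CSV)), conn)
total = conn.execute("SELECT COUNT(*), SUM(amount) FROM payments").fetchone()
# The row with the missing amount is dropped, so two rows survive.
```

In a production pipeline each stage would typically be scheduled, monitored, and pointed at real sources and sinks, but the extract/transform/load separation sketched here is the shape such jobs share.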
- Support test case creation; coordinate test data creation with the developers and test analysts.
- Assess feasibility.
- Technical test case creation.
- Clear defect capturing; defect workflow adherence.
- Managing and communicating.
- Identification, creation & sanitation of test data.
- Manual & automatic test execution and maintenance.
- Writing test cases, test execution, and defect capture.
- Planning and effort estimation for test case execution completion from a testing perspective.
- Track new/changed requirements.
- From hardware to software, to business or department-specific applications (incl. "Critical Applications"): analyse and troubleshoot potential solutions.
- Data analytics and trend analysis.
- Presentation skills.
- Understanding of business requirements; create test data reflecting various scenarios.
- Develop systems solutions; responsibilities as per AWM Charter or as decided by the department manager.

Important: A clear criminal record is required.
Ability to develop in data-driven programming languages such as Python, and Big Data pipelines such as ETL software development.

ESSENTIAL SKILLS:
- Expertise in Data Intelligence and Business Intelligence knowledge/experience.
- At least 3 years' experience building big data pipelines (ETL, SQL, etc.).
- Knowledge of the Agile Working Model (AWM) Charter.

ADVANTAGEOUS SKILLS:
- Data and API mining.
- Knowledge of security best practices; setting up alerting pipelines.
- Comfortable with data structures and algorithms.
- Understanding of integration.
- Docker container creation and usage.
- Familiarity with data streaming services such as Apache Kafka.
- Coordination.
- Knowledge of Jira, Confluence and Agile methodologies.
- Data analysis.
- ITSM knowledge.
- User support ticket management.
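As a sketch of the Kafka streaming familiarity asked for above, the following shows a minimal consumer loop using the kafka-python package. The broker address and topic name are hypothetical, and the message-handling logic is kept in a pure function so it can be exercised without a running broker; this is an illustration of the pattern, not the client's actual setup.

```python
# Hedged sketch of consuming a data stream with kafka-python.
# Broker/topic names are hypothetical; handle_event is pure and testable offline.
import json

def handle_event(raw_bytes):
    """Decode one JSON event and keep only well-formed records."""
    try:
        event = json.loads(raw_bytes)
    except json.JSONDecodeError:
        return None
    if "user_id" not in event:
        return None
    return {"user_id": event["user_id"], "action": event.get("action", "unknown")}

def consume_forever():
    """Poll a topic and print valid records. Requires a reachable broker."""
    from kafka import KafkaConsumer  # pip install kafka-python
    consumer = KafkaConsumer(
        "user-events",                       # hypothetical topic
        bootstrap_servers="localhost:9092",  # hypothetical broker
        value_deserializer=lambda b: b,      # keep raw bytes; decoded in handle_event
    )
    for msg in consumer:
        record = handle_event(msg.value)
        if record is not None:
            print(record)
```

Separating decode/validate logic from the consumer loop is a common design choice: the streaming plumbing stays thin, and the business logic can be unit-tested on plain byte strings.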
- Maintenance of artifacts and activities.
- Maintenance of customer base data.
- Engage with stakeholders of TLM.
- Crisis management.
- Practical experience of IT infrastructure, i.e., data centres, networks, servers, storage, platform, middleware.
- IT process governance.
- Data analysis – the ability to analyse and visualise data sets (Excel, Power BI).
- Environments to build a database access monitoring solution for PostgreSQL databases.
- In-depth knowledge and presentation skills.
- Knowledge of data modelling and data visualisation tools.

Cloud Experience:
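One plausible starting point for the PostgreSQL access-monitoring work mentioned above is polling the standard pg_stat_activity view and flagging sessions from unexpected users. This is a hedged sketch, not the client's actual solution: the DSN and allow-list are hypothetical, and the filtering logic is kept pure so it can be exercised without a live server.

```python
# Hedged sketch: flag PostgreSQL sessions whose user is not on an allow-list,
# based on rows from the standard pg_stat_activity view. Connection details
# are hypothetical; the filtering logic is pure and testable offline.

ACTIVITY_QUERY = """
    SELECT usename, datname, client_addr, state
    FROM pg_stat_activity
    WHERE state IS NOT NULL
"""

def suspicious_sessions(rows, allowed_users):
    """Return (user, database, client_addr) for sessions from unexpected users."""
    allowed = set(allowed_users)
    return [(u, db, addr) for (u, db, addr, state) in rows if u not in allowed]

def poll_once(dsn, allowed_users):
    """Run one monitoring pass. Requires a reachable PostgreSQL server."""
    import psycopg2  # pip install psycopg2-binary
    with psycopg2.connect(dsn) as conn:
        with conn.cursor() as cur:
            cur.execute(ACTIVITY_QUERY)
            return suspicious_sessions(cur.fetchall(), allowed_users)
```

A real solution would add scheduling, alert delivery (the "alerting pipelines" listed above), and persistence of findings, but querying pg_stat_activity is the documented way to see who is connected to what.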