Leverage your skills in building and maintaining Big Data Pipelines using advanced cloud platforms.

Skills:
- PySpark
- Boto3
- ETL
- Docker
- Linux / Unix
- Big Data
- PowerShell / Bash
- Cloud Data Hub (CDH)

Engineers are responsible for building and maintaining Big Data Pipelines using Data Platforms. They are custodians
- Degree
- 5-8 years' progressive experience within a big data environment (Banking, Finance, Mobile Network
- Artificial intelligence, machine learning and big data technologies and architectures
- Management awareness
- Ability to understand long-term (big picture) and short-term perspectives of events and
Engineers are responsible for building and maintaining Big Data Pipelines using GROUP Data Platforms. Data Engineers

Skills:
- Oracle/PostgreSQL
- PySpark
- Boto3
- ETL
- Docker
- Linux / Unix
- Big Data
- PowerShell / Bash
- GROUP Cloud Data Hub (CDH)
- Glue
- CloudWatch
- SNS
- Athena
Skills: General AWS experience in the data science / big data context; Docker and Kubernetes experience
a friend who is a technology specialist? We pay BIG CASH if we place a friend you referred to us
Apache/NGINX, MySQL/MariaDB, Bash, Python). 4. Big Data / Analytics / Monitoring experience using Elastic
organization levels
- Communication methodology skills in big organisations/initiatives
- IT Infrastructure Operations