The ideal candidate will have a strong background in Big Data and a proven track record of successful IT management.

Big Data Experience:
Create and support data pipelines from ingestion to consumption within a big data architecture, connecting a wide variety of data sources using Java, PySpark, Scala, SQL, AWS big data technologies, and Kafka CC. Continually research the latest big data and visualization technologies to provide new capabilities across the development lifecycle. Experience with big data tools is a must: Delta.io, PySpark, Kafka, etc.
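For illustration, a minimal sketch of the kind of ingestion-to-consumption pipeline described above, assuming Spark Structured Streaming with the Kafka source and the delta-spark package; the broker address, topic name, and storage paths are hypothetical placeholders, not details from this role:

```python
# Hypothetical sketch: Kafka ingestion -> PySpark transform -> Delta table.
# Broker, topic, and paths below are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("kafka-to-delta-sketch").getOrCreate()

# Ingest: subscribe to a Kafka topic as a streaming source.
raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "events")                     # placeholder topic
    .load()
)

# Transform: Kafka delivers key/value as binary, so cast to strings.
events = raw.select(
    col("key").cast("string").alias("key"),
    col("value").cast("string").alias("value"),
    col("timestamp"),
)

# Consume: persist to a Delta table for downstream SQL access.
(
    events.writeStream
    .format("delta")
    .option("checkpointLocation", "/tmp/checkpoints/events")  # placeholder path
    .start("/tmp/delta/events")                               # placeholder path
    .awaitTermination()
)
```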
The client is looking for someone with ML and Big Data experience. We require a candidate with a BSc/BCom in engineering.
- ETL
- Docker
- Linux / Unix
- Big Data
- PowerShell / Bash
- Enterprise Collaboration
Members of the Hub are responsible for building and maintaining Big Data pipelines using data platforms. They ensure data
- Experience in data engineering
- Proficiency in SQL, Python, and big data tools
- Strong problem-solving skills and a passion