on the lookout for an AWS Data Cloud Engineer (Big Data Engineer)
Knowledge / Qualifications
- Work with the department to help develop the strategy for the long-term Big Data platform architecture and document it effectively.
- High Performance Computing, Data Warehousing, Big Data Processing.
- Strong experience working with Kubernetes, Hadoop, Kafka, NiFi or Spark, or cloud-based big data processing environments such as Amazon Redshift, BigQuery and Azure Synapse Analytics.
- Experience with Big Data technologies such as Hadoop, Spark and Hive.
- Julia, T-SQL, PowerShell.
- Experience working with cloud-based Big Data technologies (AWS, Azure, etc.) is advantageous.
Location: South Africa.
Responsibilities:
- Design, implement, and optimize Big Data Pipelines using AWS services.
- Ensure data integrity across ETL processes.
- Experience with Docker, Linux/Unix, and Big Data technologies.
- Excellent communication and problem-solving skills.
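For illustration only, here is a minimal PySpark sketch of the kind of pipeline step these responsibilities describe: reading raw data from S3, applying basic integrity checks, and writing curated output back to S3. The bucket, paths and column names are hypothetical and are not taken from any of the roles above.

# Minimal sketch of one AWS Big Data pipeline step, assuming PySpark on AWS
# (e.g. EMR or Glue). All S3 paths and column names are illustrative only.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-etl").getOrCreate()

# Extract: read raw CSV files landed in S3 (hypothetical bucket/prefix).
raw = spark.read.option("header", True).csv("s3://example-bucket/raw/orders/")

# Transform: basic data-integrity checks - drop duplicates and rows missing the key.
clean = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("order_id").isNotNull())
)

# Load: write partitioned Parquet back to S3 for downstream analytics.
clean.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/orders/"
)

spark.stop()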
Experience with cloud platforms (preferably Azure), programming skills, and big data technologies is essential. Implement best practices for big data processing, storage, and analysis. Strong knowledge of big data processing, storage, and analysis technologies, preferably Azure.
The individual must be comfortable working with Big Data and able to clean, analyse, interpret and report back on it. Consulting services are provided to customers in the form of Big Data Analytics and Supply Chain Opportunity Assessments.
Programming languages such as Python and Big Data pipeline skills such as ETL and SQL. Strong working experience: at least 3 years' experience building big data pipelines (ETL, SQL, etc.). Salary: Market related.
Leverage your skills in building and maintaining Big Data Pipelines using advanced cloud platforms. We seek experience with: PySpark, Boto3, ETL, Docker, Linux/Unix, Big Data, PowerShell/Bash, Cloud Data Hub (CDH), CDEC. Engineers are responsible for building and maintaining Big Data Pipelines using Data Platforms; they are custodians of these pipelines.
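As an illustration of the Boto3 part of that stack, the following sketch stages a local extract in S3 and lists what has landed. The bucket, prefix and file names are hypothetical, and credentials are assumed to come from the environment or an IAM role.

# Minimal sketch of a Boto3 staging task a pipeline like this might include.
# Bucket, key and file names are hypothetical; not from any listing above.
import boto3

s3 = boto3.client("s3")

# Upload a locally produced extract to a raw landing prefix.
s3.upload_file("daily_extract.csv", "example-bucket", "raw/daily_extract.csv")

# List what has landed so far under that prefix.
response = s3.list_objects_v2(Bucket="example-bucket", Prefix="raw/")
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])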
Data Engineer. Our client is seeking a skilled Big Data Architect to design, implement, and maintain robust data pipelines. Experience with Big Data frameworks; certification is advantageous. Proficiency in programming languages: Python, PySpark, Scala. Big Data processing frameworks: Apache Hadoop, Spark, Flink.