on the lookout for an AWS Data Cloud Engineer (Big Data Engineer)
Knowledge /Qualifications
Work with the department to help develop the strategy for long-term Big Data platform architecture.
- High Performance Computing, Data Warehousing, and Big Data Processing
- Strong experience working with Kubernetes, Hadoop, Kafka, NiFi, or Spark, or with cloud-based big data processing environments such as Amazon Redshift, BigQuery, and Azure Synapse Analytics
- Experience with Big Data technologies such as Hadoop, Spark, and Hive
- Julia, T-SQL, PowerShell
- Experience working with cloud-based Big Data technologies (AWS, Azure, etc.) is advantageous
(Expert) You'll design and manage cutting-edge Big Data pipelines, ensuring secure and efficient data flows using AWS technologies. You will implement and manage Big Data solutions and collaborate within Agile teams to deliver. Experience with ETL processes, Docker, Linux/Unix, and Big Data tools is required, as is familiarity with AWS services such as Glue.
Responsibilities:
- Design, implement, and optimize Big Data pipelines using AWS services.
- Ensure data pipelines remain secure and efficient.
- Experience with Docker, Linux/Unix, and Big Data technologies.
- Excellent communication and collaboration skills.
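As an illustration only, the extract-transform-load responsibilities listed above can be sketched in plain Python. This is a minimal, self-contained example: the CSV source and in-memory SQLite database stand in for the AWS services named in the posting (e.g. S3/Glue as the source, Redshift as the target), and all function and column names here are assumptions, not part of any real pipeline.

```python
import csv
import io
import sqlite3


def extract(csv_text):
    """Extract: parse raw CSV rows (stand-in for reading from S3/Glue)."""
    return list(csv.DictReader(io.StringIO(csv_text)))


def transform(rows):
    """Transform: normalise names and drop rows with invalid amounts."""
    out = []
    for r in rows:
        try:
            amount = float(r["amount"])
        except (KeyError, ValueError):
            continue  # discard malformed rows rather than fail the batch
        out.append({"name": r["name"].strip().title(), "amount": amount})
    return out


def load(rows, conn):
    """Load: write transformed rows into a SQL table (stand-in for Redshift)."""
    conn.execute("CREATE TABLE IF NOT EXISTS sales (name TEXT, amount REAL)")
    conn.executemany("INSERT INTO sales VALUES (:name, :amount)", rows)
    conn.commit()


if __name__ == "__main__":
    raw = "name,amount\n alice ,10.5\nbob,oops\n carol ,2\n"
    conn = sqlite3.connect(":memory:")
    load(transform(extract(raw)), conn)
    print(conn.execute("SELECT name, amount FROM sales").fetchall())
```

A production pipeline would swap each stage for the managed service (Glue jobs, Spark transforms, Redshift loads) while keeping the same three-stage shape.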
Experience with cloud platforms (preferably Azure), programming skills, and big data technologies is essential. You will implement best practices for big data processing, storage, and analysis, and bring strong knowledge of big data processing, storage, and analysis technologies, preferably on Azure.
Strong working experience with programming languages such as Python and with building Big Data pipelines (ETL, SQL, etc.). At least 3 years' experience building big data pipelines. Salary: Market related.
Leverage your skills in building and maintaining Big Data pipelines using advanced cloud platforms. We seek experience with: PySpark, Boto3, ETL, Docker, Linux/Unix, Big Data, PowerShell/Bash, Cloud Data Hub (CDH), CDEC. Engineers are responsible for building and maintaining Big Data pipelines using Data Platforms; they are custodians of these pipelines.
Programming Languages, Machine Learning Frameworks, Big Data Tools, and Data Storage Solutions.