is on the lookout for an AWS Data Cloud Engineer (Big Data Engineer)
Knowledge / Qualifications
- Work with the department to help develop the strategy for the long-term Big Data platform architecture, and document it effectively.
- Background in Engineering, High Performance Computing, Data Warehousing, and Big Data Processing.
- Strong experience working with Kubernetes, Hadoop, Kafka, NiFi, or Spark, or with cloud-based big data processing environments such as Amazon Redshift, BigQuery, and Azure Synapse Analytics.
- Experience with Big Data technologies such as Hadoop, Spark, and Hive.
- Julia, T-SQL, PowerShell.
- Experience working with cloud-based Big Data technologies (AWS, Azure, etc.).
Engineer (Expert)
You'll design and manage cutting-edge Big Data pipelines using AWS technologies, ensuring they are secure and efficient. Implement and manage Big Data solutions. Collaborate within Agile teams. Experience with ETL processes, Docker, Linux/Unix, and Big Data tools. Familiarity with AWS services like Glue.
Responsibilities:
- Design, implement, and optimize Big Data pipelines using AWS services.
- Ensure data pipelines are secure and efficient.
Requirements:
- Experience with Docker, Linux/Unix, and Big Data technologies.
- Excellent communication skills.
Mobile Digital solutions 5. Business Intelligence, Big Data and Data Analytics 6. Workflow and Robotics
• Qualification in Systems/Informatics/Computer Science, etc.
• At least four years of broad technical experience with extensive
Experience with cloud platforms (preferably Azure), programming skills, and big data technologies is essential. Implement best practices for big data processing, storage, and analysis. Strong knowledge of big data processing, storage, and analysis technologies, preferably Azure.
Strong working experience with programming languages such as Python and with Big Data pipelines (ETL, SQL, etc.). At least 3 years' experience building big data pipelines.
Salary: Market related