Kafka Engineer Job Profile
A Kafka engineer is a big data engineer who specializes in developing and managing Kafka-based data pipelines. You will also be required to work with other big data technologies such as Hadoop, Spark, and Storm.
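At the heart of those pipelines is Kafka's core abstraction: a partitioned, append-only log that producers write keyed records to and consumers read from at their own offsets. Below is a toy, in-memory sketch of that model in plain Python — an illustration of the concept only, not the real Kafka client API; the class and method names are invented for this example.

```python
class ToyTopic:
    """A toy stand-in for a Kafka topic: one append-only log per
    partition, with consumers tracking their own read offsets."""

    def __init__(self, partitions=2):
        self.partitions = [[] for _ in range(partitions)]

    def produce(self, key, value):
        # Like Kafka, route a keyed record to a partition by hashing the
        # key, so records with the same key share one ordered log.
        p = hash(key) % len(self.partitions)
        self.partitions[p].append((key, value))
        return p

    def consume(self, partition, offset):
        # A consumer reads from its last committed offset onward and
        # commits the new offset itself.
        records = self.partitions[partition][offset:]
        return records, offset + len(records)

topic = ToyTopic(partitions=2)
p = topic.produce("user-1", "login")
topic.produce("user-1", "click")

# Both records have the same key, so they land on the same partition
# and are read back in order.
records, next_offset = topic.consume(p, 0)
```

Because partitioning is by key, per-key ordering is preserved even though different keys may be processed in parallel — the property real Kafka consumers rely on.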
Work with the department to help develop the strategy for long-term Big Data platform architecture, and document it effectively. Requirements:
- High Performance Computing, Data Warehousing, and Big Data Processing.
- Strong experience working with Kubernetes, Hadoop, Kafka, NiFi, or Spark, or with cloud-based big data processing environments such as Amazon Redshift, BigQuery, and Azure Synapse Analytics.
- Experience with Big Data technologies such as Hadoop, Spark, and Hive.
- Julia, T-SQL, PowerShell.
- Experience working with cloud-based Big Data technologies (AWS, Azure, etc.) is advantageous.
Build data pipelines from ingestion to consumption within a big data architecture, using Java, PySpark, Scala, and Kafka. Combine a wide variety of data sources using SQL, AWS big data technologies, and Kafka CC.
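As an illustration of what "from ingestion to consumption" means in practice, here is a toy pipeline in plain Python. The stage functions and sample records are invented for this sketch; in a production pipeline these stages would typically be Kafka topics and PySpark jobs rather than local functions.

```python
import json

def ingest(raw_lines):
    """Ingestion: parse raw JSON events, dropping malformed records."""
    events = []
    for line in raw_lines:
        try:
            events.append(json.loads(line))
        except json.JSONDecodeError:
            # A real pipeline would route these to a dead-letter topic.
            continue
    return events

def transform(events):
    """Transformation: keep completed purchases, normalise the amount."""
    return [
        {"user": e["user"], "amount": float(e["amount"])}
        for e in events
        if e.get("status") == "complete"
    ]

def consume(records):
    """Consumption: aggregate spend per user for a downstream report."""
    totals = {}
    for r in records:
        totals[r["user"]] = totals.get(r["user"], 0.0) + r["amount"]
    return totals

raw = [
    '{"user": "a", "amount": "10.0", "status": "complete"}',
    '{"user": "b", "amount": "5.0", "status": "pending"}',
    'not json',
    '{"user": "a", "amount": "2.5", "status": "complete"}',
]
totals = consume(transform(ingest(raw)))
# totals == {"a": 12.5}
```

Each stage has a single responsibility and a plain data interface, which is what lets the same logic later be re-hosted on Kafka and Spark without redesign.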
The ideal candidate will have a strong background in Big Data and a proven track record of successful IT management.
Big Data Experience:
Attend meetings and manage junior developers. 2 years' management experience required. Candidates from the Big Data, collections, fraud, and consulting industries.
Create and support data pipelines using Kafka CC. Continually research the latest big data and visualization technologies to provide new capabilities across the development lifecycle. Experience with big data tools is a must: Delta.io, PySpark, Kafka, etc.
Our client is looking for someone who has ML and Big Data experience. We require a candidate with: BSc/BCom