Azure, GCP).
- Knowledge of big data technologies (Hadoop, Spark).
Working Conditions:
- Onsite: This role
- Data stakeholders handle big data technologies (Hadoop), streaming (Kafka), and data replication (IBM InfoSphere).
- Experience with big data technologies such as Hadoop, Spark, and Hive.
- Experience with programming languages
- Using Big Data technologies such as Spark, Kafka, Hadoop, Storm, etc.
- Experience with ELK, New Relic or
- Understanding of modern software engineering
computing platforms and big data technologies (e.g. Hadoop, Spark, AWS, Azure) is a plus.
- Ability to work