department to help develop the strategy for the long-term Big Data platform architecture. High Performance Computing, Data Warehousing, Big Data Processing. Strong experience working with Kubernetes, Hadoop, Kafka, NiFi, Spark, or cloud-based big data processing environments such as Amazon Redshift, BigQuery, and Azure Synapse Analytics. Experience with Big Data technologies such as Hadoop, Spark, and Hive. Julia, T-SQL, PowerShell. Experience working with cloud-based Big Data technologies (AWS, Azure, etc.) is advantageous.
Scientist: Dive into cutting-edge projects, analyse Big Data, and craft actionable insights. Design, develop, and maintain projects using Big Data technologies. Utilise machine learning and data mining in technical and client interactions. Technologies: Big Data technologies (e.g., Hadoop, Spark), machine learning.
The individual must be comfortable working with Big Data: cleaning, analysing, interpreting, and reporting back. Consulting services to customers in the form of Big Data Analytics and Supply Chain Opportunity Assessments.
Experience in project management required. Good knowledge of big data and how to use it effectively to streamline processes. Good programming skills, specifically for big data and automation of reports. Great at working under pressure.
Python, R, SQL, etc.
Designing and building scalable, robust, and trustworthy ETL systems for a big data warehouse. Knowledge of emerging trends across Data/Analytics (Big Data, Machine Learning, Deep Learning, AI). Responsible
product environment. Strong analysis skills with big data. Strong Excel a must; Power BI highly advantageous.
such as Python, R, SQL, etc. Big Data Technologies: Knowledge of big data technologies such as Hadoop.
Degree. 5-8 years' progressive experience within a big data environment (Banking, Finance, Mobile Network
experience working with data systems. Familiarity with big data technologies such as Hadoop, Spark, and Kafka