As a Big Data Engineer, you will be responsible for designing, implementing, and maintaining robust Big Data Pipelines using BMW Data Platforms, and for collaborating across teams. The role calls for SQL (Oracle/PostgreSQL), PySpark, Boto3, ETL, Docker, Linux/Unix, and Big Data technologies, plus experience with PowerShell/Bash.
Requirements: strong working experience with programming languages such as Python and with Big Data pipelines (ETL, SQL, etc.), including at least 3 years' experience building big data pipelines. Advantageous skills:
Experience designing and building scalable ETL systems for a big data warehouse, and robust, trustworthy knowledge of emerging trends across Data/Analytics (Big Data, Machine Learning, Deep Learning, AI).
Oracle/PostgreSQL, PySpark, Boto3, ETL, Docker, Linux/Unix, Big Data, PowerShell/Bash, Cloud Data Hub (CDH), CDEC
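The stack above centres on ETL work. As a rough illustration of the extract-transform-load pattern the role describes, here is a minimal pure-Python sketch; the real pipelines would use PySpark and Boto3 against the Cloud Data Hub, and every name and value here is hypothetical.

```python
# Minimal ETL sketch. Hypothetical data and sink: a real pipeline would
# extract from S3 via Boto3 and transform with PySpark DataFrames.

def extract():
    # Stand-in for reading raw rows from a source system (e.g. Oracle/PostgreSQL).
    return [
        {"vin": "WBA123", "mileage_km": "15000"},
        {"vin": "WBA456", "mileage_km": "bad-value"},
        {"vin": "WBA789", "mileage_km": "32000"},
    ]

def transform(rows):
    # Cast fields to their proper types; drop rows that fail validation.
    clean = []
    for row in rows:
        try:
            clean.append({"vin": row["vin"], "mileage_km": int(row["mileage_km"])})
        except ValueError:
            continue  # a real pipeline would quarantine or flag the bad row
    return clean

def load(rows, sink):
    # Stand-in for writing to the warehouse; returns the number of rows loaded.
    sink.extend(rows)
    return len(rows)

sink = []
loaded = load(transform(extract()), sink)
print(loaded)  # 2 valid rows survive the transform
```

Keeping extract, transform, and load as separate functions is what makes each stage independently testable, which is the property interviewers typically probe for in "robust and trustworthy" pipeline design.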
Classification models for supervised learning.
Data Engineers are responsible for building and maintaining Big Data Pipelines using Data Platforms.
querying and manipulation. Familiarity with PySpark for big data processing and analysis. Knowledge of Boto3 and of Linux/Unix operating systems. Understanding of Big Data technologies and concepts. Proficiency in PowerShell/Bash.