Key Responsibilities:
- Develop and maintain data models for BI and analytics solutions.
- Code in SQL and Python on Palantir Foundry.
- Assemble large, complex data sets to meet business requirements and advanced analytics use cases.
- Build robust and automated pipelines to ingest and process structured and unstructured data from various source systems into analytical platforms.
- Collaborate within a cross-functional international team to design and implement data-driven workflow solutions.
Requirements:
- Hands-on experience with PySpark, Python, and SQL.
- Experience with Hadoop ecosystem tools such as Pig, Spark, or HDFS.
- Proficiency with cloud services such as AWS, Azure, or GCP.
- Familiarity with agile methodologies, scrum, and problem-centric solution design.
- Proven knowledge of data engineering best practices.
- Fluent in English; German language skills are a plus.
Additional Assets:
- Previous exposure to Palantir Foundry and AIP.
- Experience with Kubernetes or Kafka.
- Knowledge of the media, advertising, or marketplaces industry.
If this role speaks to you and you are ready for a new challenge, we look forward to receiving your application!