We’re looking for an experienced Data Engineer with hands-on expertise in Databricks and modern cloud data platforms. You’ll join a cross-functional team working on large-scale data initiatives, from pipeline design and optimization to infrastructure setup and systems integration.
What you’ll do
- Design, implement, and maintain scalable data platforms on Databricks, leveraging Unity Catalog
- Develop and optimize PySpark jobs for performance and reliability
- Build and manage end-to-end data pipelines and integration workflows
- Configure and maintain cloud environments (Azure, AWS, or GCP)
- Collaborate with data scientists, analysts, and DevOps engineers to ensure seamless data flow and accessibility
- Contribute to infrastructure setup and automation (IaC experience is a strong plus)
What you bring
- 3+ years of experience as a Data Engineer or similar role
- Proven track record in Databricks implementation and maintenance
- Strong proficiency in Python and PySpark
- Hands-on experience with Spark optimization
- Familiarity with cloud infrastructure management (Azure, AWS, or GCP)
- Solid understanding of data integration, ETL/ELT, and workflow orchestration
- Excellent communication skills in English, and the ability to work both independently and as part of a team
Why join
You’ll work on technically challenging projects in a collaborative, international environment where innovation, autonomy, and growth are valued.