Data Engineer - Databricks
A major organisation is delivering a large-scale digital transformation, rebuilding core front-end and back-end systems and modernising how content metadata is ingested, transformed, and delivered.
About the Role
We’re looking for a hands-on Databricks Engineer to build and maintain scalable data pipelines across a modern Lakehouse architecture. The role requires daily, practical experience with Databricks.
Technical Experience Required
- Hands-on Databricks experience.
- Python, PySpark, Spark Structured Streaming.
- ETL/ELT pipeline development across Lakehouse layers (Bronze, Silver, Gold).
- Delta Tables, Delta Live Tables, and ideally Databricks Lakeflow.
- Strong data quality practices: schema enforcement, validation, and monitoring (see the sketch after this list).
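To give a flavour of the day-to-day work, here is a minimal sketch of a Spark Structured Streaming ingest into a Bronze Delta table with an enforced schema. The paths, table name, and columns are illustrative assumptions, not details of the actual platform.

```python
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("bronze-ingest").getOrCreate()

# Declaring the schema up front enforces it at read time instead of
# relying on inference; non-conforming records surface immediately
# rather than silently corrupting downstream tables.
schema = StructType([
    StructField("content_id", StringType(), nullable=False),
    StructField("title", StringType(), nullable=True),
    StructField("updated_at", TimestampType(), nullable=True),
])

(
    spark.readStream
    .schema(schema)
    .json("/mnt/landing/content_metadata/")  # hypothetical landing zone
    .writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/bronze_content/")
    .trigger(availableNow=True)  # drain the available files, then stop
    .toTable("bronze.content_metadata")  # hypothetical Bronze table
)
```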
Key Responsibilities
- Manage daily loads, batch processes, and streaming pipelines.
- Handle updates, merges, deletes, and transformations across Lakehouse layers (a MERGE sketch follows this list).
- Ensure data accuracy and pipeline reliability.
- Solve complex data movement and optimisation challenges.
- Work directly inside the Databricks platform.
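As one illustration of the update/merge/delete work above, here is a minimal Delta Lake MERGE sketch that upserts a batch of Bronze changes into a Silver table. The table names, join key, and the is_deleted flag are assumptions for the example only.

```python
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()

# Latest batch of changes landed in Bronze (assumed to carry an
# is_deleted flag alongside the business columns).
updates = spark.table("bronze.content_metadata")

silver = DeltaTable.forName(spark, "silver.content_metadata")

(
    silver.alias("t")
    .merge(updates.alias("s"), "t.content_id = s.content_id")
    .whenMatchedDelete(condition="s.is_deleted = true")  # propagate deletes
    .whenMatchedUpdateAll()     # apply updates to existing rows
    .whenNotMatchedInsertAll()  # insert rows seen for the first time
    .execute()
)
```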
Why This Role is Exciting
- Be part of a high-impact, national-scale transformation.
- Build and optimise modern, large-scale Lakehouse pipelines.
- Work with advanced Databricks tools and engineering patterns.
- Gain highly valuable, career-boosting experience.
How to Apply
If this sounds like a role you’d love, please click "APPLY NOW" or send your CV to sara@84recruitment.co.nz for a confidential discussion.