
Build and scale the global data backbone for AI and workforce intelligence
Are you a Data Engineer who thrives in a modern Databricks environment and enjoys building scalable, cloud-native data platforms? Do you want to work at the intersection of data engineering, AI enablement, and platform architecture in an international organization?
We are looking for a Data Engineer to join the Global Data & AI Team at HeadFirst & Impellam Group. In this role, you will help design, build, and evolve our global Headless Data Architecture (HDA), which serves as the foundation for analytics, AI agents, and intelligent workforce solutions across Europe, North America, and APAC.
Your impact
As a Data Engineer, you play a key role in shaping and scaling the data platform that powers our global Workforce-as-a-Service (WaaS) strategy. You work on our Headless Data Architecture (HDA), a scalable 9-layer, API-first lakehouse architecture built on Azure Databricks, Unity Catalog, and Delta Lake.
You are responsible for building reliable, governed, and production-ready data pipelines across our global lakehouse ecosystem. From Medallion-layer engineering to data contracts and platform observability, you help ensure that data products are scalable, reusable, and AI-ready.
Working in a highly collaborative international environment, you will contribute directly to the development of our global data platform and support advanced analytics, semantic search, and AI workloads used throughout the company.
A key part of your role is combining strong engineering fundamentals with an automation-first mindset. You work with modern AI-assisted development tools and help build a platform where data engineering, governance, and AI capabilities are fully integrated.
You work closely with ML/AI engineers, platform engineers, AI builders, cloud engineers, and stakeholders across regions to continuously improve the platform’s scalability, reliability, and quality.
At the same time, you help build a future-proof and well-managed data ecosystem that integrates data quality, lineage, security, and compliance standards into every layer of the platform.
What you will do
Design, build, and maintain Bronze → Silver → Gold medallion pipelines on Azure Databricks
Develop deployable and version-controlled data products using Lakeflow, Spark Declarative Pipelines, and Databricks Asset Bundles (DABs)
Implement and maintain scalable data contracts and data product patterns across the platform
Manage and optimize Unity Catalog governance, lineage, and access control
Build and maintain vector and relational storage capabilities that support AI and semantic search workloads
Contribute to Databricks' vector search and AI-powered retrieval capabilities
Collaborate with global teams across Europe, the UK, the US, and the Asia-Pacific region on cross-regional data exchange and platform scalability
Participate in Git-based development workflows, code reviews, and CI/CD practices
Monitor pipeline health, observability, and platform reliability from a DataOps perspective
Work with AI-assisted development tools such as Claude Code and the Databricks AI Dev Kit
About the role
You will be part of a growing Global Data & AI Team that plays a central role in transforming HeadFirst & Impellam Group into a truly AI-enabled and data-driven organization.
Working with technologies such as Azure Databricks, Unity Catalog, Delta Lake, and Lakeflow, you help build and scale a shared global data platform that supports analytics, AI products, and operational intelligence across multiple regions.
This role offers a strong combination of hands-on data engineering, platform ownership, and exposure to modern AI-enabled data architectures.
What you bring
You are a hands-on Data Engineer with a strong technical foundation and a pragmatic approach to building scalable, governed, and production-ready data platforms.
3+ years of hands-on experience with Databricks in a production environment
Extensive experience with Medallion Architecture (Bronze / Silver / Gold)
Solid understanding of Apache Spark and Delta Lake concepts, including optimization, partitioning, and ACID transactions
Experience with Databricks SQL and Unity Catalog governance
Familiarity with Lakeflow Connect, Spark Declarative Pipelines, and Databricks Asset Bundles (DABs)
Strong SQL and Python skills for large-scale data engineering workloads
Experience with Git-based development workflows and CI/CD practices
Understanding of data contracts, versioning, and scalable data product design
Experience with monitoring, observability, and DataOps practices
Experience with vector search, semantic search, or AI-oriented data workloads is a plus
Experience working in a global, multi-lakehouse Databricks environment (ideally a hub-and-spoke architecture) with Delta Sharing across regions
Familiarity with Azure Synapse Analytics; you will work with Synapse as part of the existing stack while the platform transitions to Databricks-native tools (Lakeflow, DABs)
Hands-on experience with SnapLogic, which is used for integration flows across the organization outside of Databricks, including connectors, pipelines, and the SnapLogic Designer
What you offer as a professional
Strong communication skills, with the ability to clearly explain technical concepts and trade-offs
A structured and quality-driven approach to engineering and documentation
A collaborative mindset, working effectively across global technical and business teams
A pragmatic approach to building scalable and maintainable data solutions
Interest in AI-native engineering and modern platform development
A strong desire to continuously learn and contribute to an evolving architecture
Submit application
We will contact you within 48 hours.
Interview
Assessment
Interview
Offer
Welcome to HeadFirst Group!
Vacation days, with the possibility of buying or selling additional days. We work hard, but don't forget to relax.
With more than 450 colleagues, we always have something to celebrate. As one team, we celebrate birthdays, anniversaries, and other successes!
Work on your mental health with access to the OpenUp platform. We learn every day, which is why we offer free training and courses so you can keep developing yourself.
Achieve your goals and see it reflected in a bonus? That's possible with us: hard work is rewarded.
Work at one of our four locations, at home, or wherever you feel comfortable. Of course, you will receive a mobility and home-working allowance from us.
Take advantage of the free sports facilities and fine workstations, and enjoy a delicious lunch buffet. A formidable competitor to working from home!
Mattijs Wassenburg
Managing Director
Christine Koekkoek
Business Analyst
For this assignment you need to place a bid on Striive. Striive is the largest assignment platform in the Benelux, where more than 20,000 assignments are published annually.