Data Engineer
High-growth scale-up
Hybrid - 2 days in office, 3 days work from home
Competitive salary with pension scheme and private healthcare
Are you at your best when you're building data foundations that dozens of teams rely on, and when you can see the impact in real user behaviour, not just dashboards?
Do you want to work on a data platform at scale where reliability, cost, and delivery speed all matter, and where you're trusted to improve the way things are built?
About the company
This is a high-growth, international technology business operating in the travel marketplace space. They're well past the early startup phase: hundreds of employees across multiple locations, strong year-on-year growth, and a product used by millions of customers annually.
The environment blends scale-up pace with the stability of a proven model---meaning there's real complexity to solve, enough data volume to make engineering choices meaningful, and the runway to modernise and iterate rather than just "keep the lights on". Teams are cross-functional, globally distributed, and focused on building products that are measurably used.
About the role
They're hiring a Data Engineer to help evolve a modern data platform that supports analytics, data science, and business decision-making across the organisation.
You'll be working on the core pipeline capabilities while also enabling other teams to build high-quality datasets safely and quickly. This is a role for someone who enjoys pragmatic engineering: solid architecture, great developer experience, and strong operational thinking---without losing sight of cost and velocity.
What you'll do
- Design, build, and optimise robust batch and streaming data pipelines (from ideation through release and ongoing monitoring)
- Improve platform capabilities and development toolchains so other teams can create reliable datasets with confidence
- Drive DataOps practices: CI/CD for pipelines, IaC, automated testing, observability, and data quality approaches
- Evaluate and introduce improvements to the data stack with an eye on cost-efficiency, productivity, and delivery speed
- Support migration work from legacy pipeline patterns to modern dbt-based approaches where relevant (a brief orchestration sketch follows this list)
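To make the orchestration side of this concrete, here is a minimal sketch of an Airflow DAG that runs a daily dbt build. Everything specific in it (the DAG id, schedule, and project paths) is a hypothetical assumption for illustration, not this team's actual configuration.

```python
# Minimal, hypothetical sketch (Airflow 2.x): a DAG that orchestrates a daily
# dbt build. DAG id, schedule, and paths are illustrative assumptions.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_marts_refresh",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # `dbt build` runs models, tests, snapshots, and seeds in dependency order.
    dbt_build = BashOperator(
        task_id="dbt_build",
        bash_command=(
            "dbt build --project-dir /opt/dbt/analytics "  # hypothetical paths
            "--profiles-dir /opt/dbt"
        ),
    )
```

In a setup like the one described, a task like this would typically sit behind CI/CD and feed its logs into the observability stack rather than being run by hand.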
Tech stack
- Data Pipelines: Airflow, dbt-core, Python
- Data Storage & Querying: Redshift, Athena, DuckDB (see the query sketch after this list)
- Cloud & DevOps: AWS, Terraform, Docker, Jenkins, AWS EKS (Kubernetes)
- Monitoring/On-call: ELK, Grafana, Looker, OpsGenie, plus internal tooling
- Ingestion: Kafka-based event systems, Airbyte, Fivetran
- Automation & AI tooling: Claude, Copilot, Codex
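As a small illustration of how the storage and querying layer might be used day to day, the sketch below queries Parquet files in S3 with DuckDB from Python. The bucket name and column layout are invented for the example, and it assumes AWS credentials are already configured in the environment.

```python
# Illustrative only: aggregating Parquet data in S3 with DuckDB. The bucket,
# path, and columns are made up for this sketch.
import duckdb

con = duckdb.connect()
con.execute("INSTALL httpfs")  # extension for reading s3:// paths
con.execute("LOAD httpfs")

daily_bookings = con.execute(
    """
    SELECT booking_date, count(*) AS bookings
    FROM read_parquet('s3://example-lake/bookings/*.parquet')  -- hypothetical path
    GROUP BY booking_date
    ORDER BY booking_date
    """
).df()  # returns a pandas DataFrame
print(daily_bookings.head())
```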
About you
- A degree in Computer Science (or similar) or equivalent practical experience
- Strong programming skills in Python and SQL
- Experience building batch and/or streaming pipelines with tools like Airflow, dbt, Kafka, Redshift, Athena/Presto, Firehose, Spark (or close equivalents)
- Familiarity with Lakehouse-style architectures in AWS or comparable cloud setups
- DataOps mindset: Infrastructure as Code, CI/CD, monitoring/observability, automated testing, and data quality practices (a small example check is sketched after this list)
- Interest in using modern LLM/agent tooling to improve productivity and engineering workflows
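On the data quality point, a check can be as simple as a guard that fails a pipeline run when a freshly written partition looks wrong. The sketch below is one hypothetical shape for that; the path, threshold, and choice of DuckDB are assumptions for illustration.

```python
# Hypothetical data quality guard: fail loudly if a partition is unexpectedly
# small. Path, threshold, and the use of DuckDB are illustrative assumptions.
import duckdb


def assert_min_rows(parquet_path: str, min_rows: int) -> None:
    """Raise if a freshly written dataset has fewer rows than expected."""
    con = duckdb.connect()
    (rows,) = con.execute(
        f"SELECT count(*) FROM read_parquet('{parquet_path}')"  # trusted path only
    ).fetchone()
    if rows < min_rows:
        raise ValueError(
            f"{parquet_path}: expected at least {min_rows} rows, found {rows}"
        )


assert_min_rows("bookings/2024-01-01.parquet", min_rows=1_000)  # hypothetical partition
```

Run as the last step of a load, a guard like this turns a silent data gap into a visible pipeline failure that the on-call tooling can page on.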
For you
- Meaningful scale: work on a platform that supports millions of users and large volumes of data, where good engineering decisions have visible impact
- Strong engineering focus: high standards on quality, velocity, monitoring, and cost-awareness (not just "build it once")
- Hybrid flexibility: Munich-based hybrid setup with in-office collaboration and flexibility built into how work gets done
- Modern tooling: a stack that includes cloud-native infrastructure, strong observability, and practical AI tooling as part of day-to-day work
