Position Overview
We are looking for a Senior Data Engineer to help transform our cloud data platform, lead the move from batch processing to real-time streaming, and implement modern lakehouse architectures for analytics and AI-powered products. You will work with cutting-edge Azure and streaming technologies, influence architecture decisions, and help shape a strong engineering culture.
About the Project
The platform processes massive datasets for a global product, integrating Azure Synapse, Databricks, ADLS Gen2, and Confluent Cloud. The goal is to improve performance, scalability, and reliability, enable real-time processing, and enhance data availability for analytics, automation, and AI.
Your Role
- Optimize and refactor big data pipelines in Azure Synapse for performance, scalability, and maintainability
- Lead the migration of the data warehouse to a private network (Azure Synapse, Azure SQL, related services)
- Drive the transition from batch ETL to real-time streaming (Kafka, Confluent Cloud)
- Design and implement lakehouse architectures for structured and semi-structured datasets
- Write and optimize complex SQL queries for large-scale data transformations
- Mentor team members and conduct code reviews
- Evaluate and implement modern data tools and architectures
What We Expect
- 6+ years of experience in Data Engineering / Big Data
- Strong expertise in Azure Data Factory, Synapse, Databricks, ADLS Gen2
- Hands-on experience with Kafka / Confluent Cloud
- Excellent SQL and data modeling skills
- Knowledge of data security, governance, and cloud best practices
Nice to Have
- Microsoft Fabric, Delta Lake, Apache Iceberg, Data Mesh, Terraform, GitHub Actions, dbt, Airflow
What We Offer
- Fully remote work
- Minimal bureaucracy, maximum impact on architecture decisions
- Work with a global team of skilled professionals
- Company-funded training and certifications
- Paid vacation, sick leave, and public holidays