Project description
We are seeking an experienced engineer with a strong background in Azure Databricks. The role combines hands-on development with responsibility for the performance and scalability of enterprise data platforms.
You will design, develop, deploy, and maintain solutions on the Databricks platform for big data analytics, while ensuring best practices in security, automation, and DevOps.
Responsibilities
Design, build, and manage large-scale batch data pipelines using Azure Databricks (Spark, PySpark).
Optimize Spark jobs for performance and cost efficiency.
Work with Azure services such as Azure SQL Server, Key Vault, and Blob Storage.
Monitor platform health, troubleshoot performance issues, and implement alerts and dashboards via Azure Monitor and Log Analytics.
Skills
Must have
6+ years of hands-on experience in software/data engineering roles, with exposure to data platforms using Python/Java.
Strong hands-on experience with Apache Spark using PySpark.
Solid SQL programming skills for query tuning, indexing strategies, and complex joins.
Hands-on with DevOps practices for data platforms—CI/CD pipelines for database and Databricks code, Infrastructure as Code (Terraform/ARM).
Strong troubleshooting and debugging skills across data, compute, and networking layers in Azure.
Nice to have
Azure SQL Server with Hyperscale
Kafka
Banking domain experience
Other
Languages
English: B2 Upper Intermediate
Seniority
Senior
Wroclaw, Poland
Req. VR-114268
SQL (PL/SQL - Oracle)
BCM Industry
16/06/2025