Join us, and solve real-world challenges with smart data solutions!
Kraków-based opportunity with a hybrid work model (2 days/week in the office).
As a Data Engineer, you will work for our client, a global financial services organization, on a high-impact data platform supporting business decision-making and regulatory compliance. You will contribute to the design and development of scalable data pipelines using big data technologies in an Agile environment. The project involves close collaboration with data analysts, engineers, and business stakeholders to ensure data integrity, system performance, and timely delivery of insights. You will also take part in architectural decisions, mentor peers, and ensure data engineering best practices are followed.
Your main responsibilities:
- Building and optimizing scalable data pipelines using PySpark and Hadoop components (a minimal sketch follows this list)
- Designing and developing data processing workflows using Spark, Hive, and SQL
- Collaborating with Business Analysts to interpret and implement data requirements
- Participating in Agile ceremonies including planning, sprint reviews, and retrospectives
- Conducting code reviews and promoting development standards across the team
- Monitoring and troubleshooting production data jobs and workflows
- Integrating scheduling tools such as Airflow to orchestrate workflows
- Implementing automated testing frameworks for data components
- Supporting DevOps processes including CI/CD using Jenkins and Ansible
- Contributing to architectural discussions and technical strategy
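To give a flavor of the day-to-day work, here is a minimal PySpark sketch of the kind of pipeline described above. It is illustrative only: the table and column names (raw.transactions, customer_id, amount, ds) are assumptions, not the client's actual schema.

```python
# Minimal, illustrative PySpark pipeline: read a Hive partition,
# aggregate, and write the result back as a partitioned Hive table.
# All table/column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("daily-transaction-aggregates")
    .enableHiveSupport()  # read/write Hive tables on the Hadoop cluster
    .getOrCreate()
)

# Read one day's partition of raw transactions from Hive
raw = spark.table("raw.transactions").where(F.col("ds") == "2024-01-01")

# Aggregate per customer: total amount and transaction count
daily = (
    raw.groupBy("customer_id")
       .agg(
           F.sum("amount").alias("total_amount"),
           F.count("*").alias("txn_count"),
       )
)

# Persist results as a partitioned Hive table for downstream analysts
(
    daily.withColumn("ds", F.lit("2024-01-01"))
         .write.mode("overwrite")
         .partitionBy("ds")
         .saveAsTable("curated.daily_customer_totals")
)
```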
You’re ideal for this role if you have:
- 5+ years’ experience in data engineering within Agile and DevOps environments
- Strong proficiency in PySpark or Scala development
- Hands-on experience with Apache Spark, Hive, Hadoop, and YARN
- Solid knowledge of SQL and ETL frameworks
- Experience using version control tools like Git/GitHub
- Proficiency with workflow orchestration tools such as Airflow (a sample DAG sketch follows this list)
- Strong understanding of big data modeling with relational and non-relational databases
- Familiarity with RESTful APIs and integration techniques
- Ability to work on Unix/Linux platforms
- Strong problem-solving skills with experience in debugging data pipelines
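For the orchestration point above, here is a minimal Airflow DAG sketch showing how a Spark job might be scheduled daily. The DAG id, file paths, and spark-submit arguments are illustrative assumptions, not the project's actual setup.

```python
# Minimal, illustrative Airflow DAG that submits a PySpark job to YARN daily.
# DAG id, paths, and arguments are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_transaction_aggregates",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    run_spark_job = BashOperator(
        task_id="run_spark_job",
        # spark-submit against YARN, as on a typical Hadoop cluster;
        # {{ ds }} is Airflow's templated execution date
        bash_command=(
            "spark-submit --master yarn --deploy-mode cluster "
            "pipelines/daily_aggregates.py --ds {{ ds }}"
        ),
    )
```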
It is a strong plus if you have:
- Experience with Elasticsearch
- Knowledge of Java APIs and backend integration
- Exposure to data ingestion frameworks and practices
- Familiarity with cloud architecture and design patterns
- Understanding of Agile methodologies such as Scrum and Kanban
#GETREADY to meet with us!
We would like to meet you. If you are interested, please apply and attach your CV in English or Polish, including a statement that you agree to our processing and storing of your personal data. You can also apply by emailing us at recruitment@itds.pl.
Internal number #6839
Address:
SKYLIGHT BUILDING | ZŁOTA 59 | 00-120 WARSZAWA
BUSINESS LINK GREEN2DAY BUILDING | SZCZYTNICKA 11 | 50-382 WROCŁAW
Contact:
INFO@ITDS.PL
+48 883 373 832