Join us, and transform how data drives global finance!
Kraków-based opportunity with a hybrid work model (2 days/week in the office).
As a Cloudera Full Stack Engineer, you will work for our client, a global financial services leader undergoing a large-scale data transformation program. You will join a cross-functional Agile team that designs and builds enterprise-grade Big Data platforms using Cloudera technologies. You will contribute to the full development lifecycle, from system setup and configuration through deployment, monitoring, and continuous optimization, while ensuring 24/7 environment availability. You will also help the client evolve its data ecosystem by integrating scalable storage solutions that enable faster processing and analytics at massive scale.
Your main responsibilities:
- Developing and testing Big Data components in a Cloudera-based environment
- Creating and configuring new Hadoop users and managing Kerberos authentication
- Setting up and maintaining Spark, HDFS, and MapReduce access
- Monitoring and tuning performance of Apache Spark jobs
- Executing end-to-end Cloudera cluster installation and capacity planning
- Automating deployments using Jenkins and Ansible
- Managing platform security using Apache Ranger, Knox, and Kerberos
- Troubleshooting and resolving system-level issues and incidents
- Collaborating with data engineers to optimize processing pipelines
- Incorporating centralized S3-based storage across the processing stack
You’re ideal for this role if you have:
- 5+ years of experience with Big Data solutions in on-prem or cloud environments
- Hands-on expertise with Cloudera tools including Hive, Spark, HDFS, and Kafka
- Proficiency in Linux system administration and shell scripting
- Strong understanding of Hadoop platform security and encryption practices
- Experience implementing CI/CD pipelines using Jenkins and Ansible
- Ability to troubleshoot performance and infrastructure issues
- Knowledge of Agile and DevOps principles
- Strong problem-solving skills and independent thinking
- Excellent communication and collaboration skills
- Familiarity with capacity planning and operational support in data platforms
It is a strong plus if you have:
- Exposure to centralized S3 data storage systems like VAST
- Experience optimizing Spark workloads for large-scale processing
- Background in working with Apache Ranger and Knox for advanced security
- Familiarity with Agile and Kanban project methodologies
#GETREADY to meet with us!
We would like to meet you. If you are interested, please apply and attach your CV in English or Polish, including a statement that you agree to our processing and storing of your personal data. You can also apply by sending us an email at recruitment@itds.pl.
Internal number #7403
Address:
SKYLIGHT BUILDING | ZŁOTA 59 | 00-120 WARSZAWA
BUSINESS LINK GREEN2DAY BUILDING | SZCZYTNICKA 11 | 50-382 WROCŁAW
Contact:
INFO@ITDS.PL
+48 883 373 832