Job Description
AgileEngine is an Inc. 5000 company that creates award-winning software for Fortune 500 brands and trailblazing startups across 17+ industries. We rank among the leaders in areas like application development and AI/ML, and our people-first culture has earned us multiple Best Place to Work awards.
WHY JOIN US
If you're looking for a place to grow, make an impact, and work with people who care, we'd love to meet you!
ABOUT THE ROLE
As a Middle Java Engineer, you’ll contribute to building and maintaining high-throughput, event-driven systems that power the world’s largest job platform. Your work will directly impact millions of users by ensuring reliable, scalable, and efficient data processing pipelines. This role offers the opportunity to collaborate with a talented global team, tackle complex backend challenges, and grow your expertise in Java, AWS, and modern infrastructure practices within a supportive and innovative environment.
WHAT YOU WILL DO
- Drive operational excellence by proactively monitoring and optimizing system performance at scale;
- Ensure system stability and reliability by integrating modern testing techniques throughout the development lifecycle;
- Design, implement, and maintain backend services supporting large-scale event streaming systems using Java and Kafka;
- Establish and maintain SLAs/SLOs, track system health, and build tooling to measure and improve reliability;
- Define and maintain infrastructure using Terraform for reproducible, scalable deployments in AWS;
- Collaborate with engineers and SREs to design event-driven solutions that meet functional and non-functional requirements;
- Automate environment provisioning and configuration through GitLab CI/CD pipelines.
MUST HAVES
- At least 3 years of experience with Java;
- Experience with event-driven architectures handling high-throughput data streams (Kafka, SQS/SNS);
- Experience with Terraform;
- Hands-on experience with AWS;
- Proven ability to collaborate across roles and teams to design solutions that meet product requirements;
- Solid understanding of measuring reliability through SLAs/SLOs and fostering a culture driven by operational metrics;
- Upper-Intermediate English level.
NICE TO HAVES
- Experience with Rust and/or Golang;
- Familiarity with big data processing technologies (e.g., Apache Spark);
- Experience building change data capture (CDC) pipelines;
- Experience with Kubernetes and ArgoCD.
PERKS AND BENEFITS
- Professional growth: Accelerate your professional journey with mentorship, TechTalks, and personalized growth roadmaps.
- Competitive compensation: We match your ever-growing skills, talent, and contributions with competitive USD-based compensation and budgets for education, fitness, and team activities.
- A selection of exciting projects: Join projects built on modern solutions for top-tier clients, including Fortune 500 enterprises and leading product brands.
- Flextime: Tailor your schedule for an optimal work-life balance by choosing to work from home or go to the office – whatever makes you the happiest and most productive.