Aark Connect is a technology company focused on providing innovative solutions for digital communication and connectivity.
Hyderabad, Telangana, India
Not disclosed
On-site
Full Time
Company Overview
At Aark Connect, it's not just about building software; it's about creating real solutions that make an impact. We help businesses build, scale, and optimize their technology. Whether it's developing custom software, building web and mobile apps, or setting up cloud and DevOps solutions, we make tech simple and effective. Our team specializes in IT consulting and services, making sure our clients have the right technology and people to move forward. From startups to enterprises, we work closely with our partners to deliver software solutions that fit their needs and help them grow.

Position Summary
The Senior Software Engineer will be responsible for designing, developing, and maintaining backend applications using Java and Spring technologies. The role requires working closely with cross-functional teams to build scalable and efficient software solutions.

Location: Hyderabad, India
Experience Required: 3+ years
Salary Range: ₹5 LPA - ₹10 LPA

Key Responsibilities
Develop and maintain SOA-based and RESTful applications using Java, Spring Framework, and Spring Boot.
Identify and resolve performance bottlenecks, debug system issues, and optimize application performance.
Collaborate with product managers, designers, and frontend engineers to deliver well-integrated features.
Follow coding standards and best practices to ensure maintainability and security.
Mentor junior engineers and contribute to team discussions.
Document system designs, workflows, and code structures.

Required Qualifications
Technical Skills: Proficiency in Java, Spring Framework, and Spring Boot.
Database Management: Experience with MySQL or PostgreSQL and ORM tools like Hibernate.
Version Control & CI/CD: Familiarity with Git, CI/CD pipelines, and deployment processes.
Problem-Solving: Strong understanding of data structures, algorithms, and system design.
Communication & Collaboration: Ability to work independently and interact effectively with cross-functional teams.

First 30 Days
Complete onboarding and gain access to internal tools and repositories.
Review existing documentation, projects, and team workflows.
Work on minor bug fixes or enhancements to understand the codebase.

Next 60 Days
Take ownership of a feature or module within an ongoing project.
Optimize system performance and address any identified issues.
Participate in technical discussions and provide recommendations.

Next 90 Days and Beyond
Lead the development of new features and propose system improvements.
Contribute to architectural decisions and performance enhancements.
Mentor junior engineers and support team-wide initiatives.

Why Join Aark?
Aark provides a work environment that promotes technical growth, collaboration, and problem-solving. Engineers work on challenging projects and contribute to building scalable solutions while following best software development practices.
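The RESTful Java/Spring Boot work described in this posting can be pictured with a minimal controller sketch. Everything below is illustrative only: the package, class names, and /api/greeting endpoint are assumptions, not part of the role.

```java
package com.example.demo; // hypothetical package

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
public class DemoApplication {
    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }
}

@RestController
class GreetingController {
    // GET /api/greeting?name=Aark -> "Hello, Aark"
    @GetMapping("/api/greeting")
    String greeting(@RequestParam(defaultValue = "world") String name) {
        return "Hello, " + name;
    }
}
```

In a project built on the standard spring-boot-starter-web dependency, this compiles as-is and serves the endpoint on port 8080 by default.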
Hyderabad, Telangana
INR Not disclosed
Remote
Full Time
A US IT Bench Recruiter is primarily responsible for marketing candidates (consultants) who are already on a company's payroll ("on the bench") to find them contract job opportunities with clients or through third-party vendors. Their main goal is to place these consultants as quickly as possible to generate revenue for the company.

Key Responsibilities
Consultant Marketing: Proactively market bench candidates (W2, 1099, and C2C consultants) to staffing companies and direct clients. Develop relationships with vendors, clients, and implementation partners. Submit candidates to job openings shared by vendors, direct clients, or through job portals.
Vendor and Client Relationship Management: Build and maintain relationships with preferred vendors and direct clients. Negotiate rates and contract terms for candidates. Coordinate interviews and follow-ups until the consultant gets placed.
Candidate Support: Prepare consultants for interviews by sharing job requirements and company profiles. Assist with resume formatting and tailoring to specific job descriptions. Provide updates and career advice to consultants on the bench.
Database and Job Portal Management: Regularly update the company's internal database with consultant profiles, submissions, and interview schedules. Utilize portals like Dice, Monster, CareerBuilder, Indeed, LinkedIn, and JobDiva for new requirements and marketing.
Documentation and Compliance: Ensure all documentation (e.g., immigration status such as H1B, CPT, OPT, GC, or US citizenship) is complete and compliant with US employment laws.

Skills and Qualifications
Experience: Typically 1-5+ years in IT staffing, especially in marketing H1B, OPT, CPT, GC, and US citizen consultants.
Technical Knowledge: Understanding of various IT technologies (such as Java, .NET, DevOps, Cloud, and Data Science) to match consultants to job roles.
Communication Skills: Excellent English communication (both verbal and written) to interact with vendors, clients, and candidates.
Negotiation Skills: Ability to negotiate rates and contract terms effectively.
Sales Attitude: Strong persuasion and relationship-building skills, with a goal-driven approach.

Tools Used
ATS (Applicant Tracking Systems)
CRM tools
Job boards like Dice, Monster, CareerBuilder, and Indeed
LinkedIn Recruiter
Email marketing platforms (for mass resume submission)

Work Environment and Shift Timing
Typically US time zones (EST, CST, MST, PST), as the role deals with US clients.
Often remote or hybrid working models.
Fast-paced, target-driven environment.

Job Types: Full-time, Contractual / Temporary
Contract length: 12 months
Pay: ₹15,000.00 - ₹30,000.00 per month
Benefits: Paid sick time
Schedule: Evening shift, Monday to Friday, Night shift, US shift
Supplemental Pay: Performance bonus
Experience: Bench sales: 1 year (Required)
Location: Hyderabad, Telangana (Required)
Work Location: In person
Musheerabad, Hyderabad, Telangana
INR Not disclosed
On-site
Full Time
As the Senior DevOps Engineer focused on Observability, you will set observability standards, lead automation efforts, and mentor engineers, ensuring all monitoring and Datadog configuration changes are implemented as Infrastructure-as-Code (IaC). You will lead the design and management of a code-driven Datadog observability platform, providing end-to-end visibility into Java applications, Kubernetes workloads, and containerized infrastructure. This role emphasizes cost-effective observability at scale and requires deep expertise in Datadog monitoring, logging, tracing, and optimization techniques. You'll collaborate closely with SRE, DevOps, and Software Engineering teams to standardize monitoring and logging practices and deliver scalable, reliable, and cost-efficient observability solutions.

This is a hands-on engineering role focused on observability-as-code. All monitoring, logging, alerting, and Datadog configurations are defined and managed through Terraform, APIs, and CI/CD workflows rather than manual configuration in the Datadog UI.

PRIMARY RESPONSIBILITIES:
Own and define observability standards for Java applications, Kubernetes workloads, and cloud infrastructure.
Configure and manage the Datadog platform using Terraform and Infrastructure-as-Code (IaC) best practices.
Drive adoption of structured JSON logging, distributed tracing, and custom metrics across Java and Python services.
Optimize Datadog usage through cost governance, log filtering, sampling strategies, and automated reporting.
Collaborate closely with Java developers and platform engineers to standardize instrumentation and alerting.
Troubleshoot and resolve issues with missing or misconfigured logs, metrics, and traces, working with developers to ensure proper instrumentation and data flow into Datadog.
Participate in incident response efforts, using Datadog insights for actionable alerting, root cause analysis (RCA), and reliability improvements.
Serve as the primary point of contact for Datadog-related requests, supporting internal teams with onboarding, integration, and usage questions.
Continuously audit and tune monitors for alert quality, reducing false positives and improving actionable signal detection.
Maintain clear internal documentation on Datadog usage, standards, integrations, and IaC workflows.
Evaluate and propose improvements to the observability stack, including new Datadog features, OpenTelemetry adoption, and future architecture changes.
Mentor engineers and develop internal training programs on Datadog, observability-as-code, and modern log pipeline architecture.

QUALIFICATIONS:
Bachelor's degree in Computer Science, Engineering, Mathematics, Physics, or a related technical field.
5+ years of experience in DevOps, Site Reliability Engineering, or related roles with a strong focus on observability and infrastructure as code.
Hands-on experience managing and scaling Datadog programmatically using code-based workflows (e.g., Terraform, APIs, CI/CD).
Deep expertise in Datadog, including APM, logs, metrics, tracing, dashboards, and audit trails.
Proven experience integrating Datadog observability into CI/CD pipelines (e.g., GitLab CI, AWS CodePipeline, GitHub Actions).
Solid understanding of AWS services and best practices for monitoring services on Kubernetes infrastructure.
A strong background in Java application development is preferred.

Job Types: Full-time, Permanent, Contractual / Temporary
Contract length: 12 months
Pay: ₹700,000.00 - ₹1,500,000.00 per year
Benefits: Paid sick time
Schedule: Monday to Friday, Night shift, US shift
Ability to commute/relocate: Musheerabad, Hyderabad, Telangana: Reliably commute or planning to relocate before starting work (Preferred)
Education: Bachelor's (Preferred)
Language: English (Required)
Location: Musheerabad, Hyderabad, Telangana (Preferred)
Shift availability: Night Shift (Required)
Work Location: In person
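As a concrete illustration of the "custom metrics" instrumentation mentioned in the posting above, applications commonly push metrics to a local Datadog Agent over the DogStatsD protocol. This is a minimal sketch under stated assumptions: a DogStatsD listener on 127.0.0.1:8125, and a metric name and tags that are purely illustrative; in practice the official DogStatsD client library would usually be used instead of raw UDP.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

// Minimal DogStatsD sketch: sends one counter metric over UDP to a local Datadog Agent.
// Assumes the Agent's DogStatsD listener is on 127.0.0.1:8125; metric and tag names are illustrative.
public class DogStatsdSketch {
    public static void main(String[] args) throws Exception {
        // DogStatsD wire format: <metric.name>:<value>|<type>|#<tag1:value1>,<tag2:value2>
        String payload = "orders.processed:1|c|#service:checkout,env:staging";
        byte[] bytes = payload.getBytes(StandardCharsets.UTF_8);
        try (DatagramSocket socket = new DatagramSocket()) {
            DatagramPacket packet = new DatagramPacket(
                    bytes, bytes.length, InetAddress.getByName("127.0.0.1"), 8125);
            socket.send(packet); // fire-and-forget: UDP, no acknowledgement
        }
    }
}
```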
Musheerabad, Hyderabad, Telangana
INR Not disclosed
On-site
Full Time
As the Senior DevOps Engineer focused on Observability, you will set observability standards, lead automation efforts, and mentor engineers, ensuring all monitoring and Datadog configuration changes are implemented as Infrastructure-as-Code (IaC). You will lead the design and management of a code-driven Datadog observability platform, providing end-to-end visibility into Java applications, Kubernetes workloads, and containerized infrastructure. This role emphasizes cost-effective observability at scale and requires deep expertise in Datadog monitoring, logging, tracing, and optimization techniques. You'll collaborate closely with SRE, DevOps, and Software Engineering teams to standardize monitoring and logging practices and deliver scalable, reliable, and cost-efficient observability solutions.

This is a hands-on engineering role focused on observability-as-code. All monitoring, logging, alerting, and Datadog configurations are defined and managed through Terraform, APIs, and CI/CD workflows rather than manual configuration in the Datadog UI.

PRIMARY RESPONSIBILITIES:
Own and define observability standards for Java applications, Kubernetes workloads, and cloud infrastructure.
Configure and manage the Datadog platform using Terraform and Infrastructure-as-Code (IaC) best practices.
Drive adoption of structured JSON logging, distributed tracing, and custom metrics across Java and Python services.
Optimize Datadog usage through cost governance, log filtering, sampling strategies, and automated reporting.
Collaborate closely with Java developers and platform engineers to standardize instrumentation and alerting.
Troubleshoot and resolve issues with missing or misconfigured logs, metrics, and traces, working with developers to ensure proper instrumentation and data flow into Datadog.
Participate in incident response efforts, using Datadog insights for actionable alerting, root cause analysis (RCA), and reliability improvements.
Serve as the primary point of contact for Datadog-related requests, supporting internal teams with onboarding, integration, and usage questions.
Continuously audit and tune monitors for alert quality, reducing false positives and improving actionable signal detection.
Maintain clear internal documentation on Datadog usage, standards, integrations, and IaC workflows.
Evaluate and propose improvements to the observability stack, including new Datadog features, OpenTelemetry adoption, and future architecture changes.
Mentor engineers and develop internal training programs on Datadog, observability-as-code, and modern log pipeline architecture.

QUALIFICATIONS:
Bachelor's degree in Computer Science, Engineering, Mathematics, Physics, or a related technical field.
5+ years of experience in DevOps, Site Reliability Engineering, or related roles with a strong focus on observability and infrastructure as code.
Hands-on experience managing and scaling Datadog programmatically using code-based workflows (e.g., Terraform, APIs, CI/CD).
Deep expertise in Datadog, including APM, logs, metrics, tracing, dashboards, and audit trails.
Proven experience integrating Datadog observability into CI/CD pipelines (e.g., GitLab CI, AWS CodePipeline, GitHub Actions).
Solid understanding of AWS services and best practices for monitoring services on Kubernetes infrastructure.
A strong background in Java application development is preferred.

Job Types: Full-time, Permanent, Contractual / Temporary
Contract length: 12 months
Pay: ₹700,000.00 - ₹1,500,000.00 per year
Benefits: Paid sick time
Schedule: Monday to Friday, Night shift, US shift
Ability to commute/relocate: Musheerabad, Hyderabad, Telangana: Reliably commute or planning to relocate before starting work (Preferred)
Education: Bachelor's (Preferred)
Experience: DevOps: 5 years (Required)
Language: English (Required)
Location: Musheerabad, Hyderabad, Telangana (Preferred)
Shift availability: Night Shift (Required)
Work Location: In person
Expected Start Date: 01/06/2025
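The "structured JSON logging" this posting asks engineers to drive means emitting one JSON object per log event rather than free-form text, so Datadog can index fields without custom parsing rules. A minimal, library-agnostic sketch follows; it uses Jackson for serialization, and the field names (timestamp, level, service, message, trace_id) are an assumed convention, not a requirement stated in the posting.

```java
import com.fasterxml.jackson.databind.ObjectMapper;
import java.time.Instant;
import java.util.LinkedHashMap;
import java.util.Map;

// Emits one JSON object per log event so fields can be parsed without grok rules.
// Field names are an assumed convention; real services would use a logging framework's JSON encoder.
public class JsonLogSketch {
    private static final ObjectMapper MAPPER = new ObjectMapper();

    public static void main(String[] args) throws Exception {
        Map<String, Object> event = new LinkedHashMap<>();
        event.put("timestamp", Instant.now().toString());
        event.put("level", "INFO");
        event.put("service", "payments-api");   // illustrative service name
        event.put("message", "payment authorized");
        event.put("trace_id", "abc123");        // would come from the tracer in practice
        System.out.println(MAPPER.writeValueAsString(event));
    }
}
```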
Hyderabad
INR 8.0 - 18.0 Lacs P.A.
Work from Office
Full Time
Role & responsibilities
We are looking for a Senior Full Stack Developer with strong experience in backend development, particularly in Node.js and Java. The ideal candidate will have hands-on expertise in AWS, JavaScript, and TypeScript, and be capable of building scalable, high-performance systems. You'll work in a collaborative environment, partnering with front-end developers, product managers, and other stakeholders.

Key Responsibilities
* Backend Development: Design, develop, and maintain scalable back-end systems using Node.js, Java, and GraphQL.
* Server-Side Logic: Implement and manage server-side logic, ensuring high performance and responsiveness.
* API Development: Develop and maintain GraphQL schemas and resolvers in Node.js.
* Cloud Deployment: Deploy, monitor, and manage applications on AWS.
* Collaboration: Work closely with front-end developers, product managers, and other stakeholders to define API requirements and functionalities.
* Optimization: Optimize applications for speed and scalability.
* Debugging & Troubleshooting: Troubleshoot, debug, and perform code reviews.
* Documentation: Create and maintain comprehensive documentation for new and existing features.
* Team Leadership: Mentor and manage associate developers.

Required Skills & Qualifications
* Professional Experience: 7+ years of back-end development experience with strong expertise in Node.js, Java, and GraphQL.
* Programming Languages: Proficiency in Java and JavaScript, with experience in TypeScript.
* Cloud Technologies: Solid experience with AWS services.
* Version Control: Experience with Git for version control.
* Containerization: Understanding of Docker and containerization technologies.
* CI/CD: Familiarity with Jenkins CI/CD pipelines and tools.
* Security: Experience with authentication and authorization mechanisms (e.g., OAuth).
* Problem-Solving: Excellent troubleshooting skills and attention to detail.
* Collaboration: Strong communication skills and the ability to work effectively within a team.

Preferred Skills
* Architecture: Knowledge of microservices architecture and design patterns.
* Mentorship: Experience in managing and mentoring associate developers.
Hyderabad
INR 3.0 - 4.5 Lacs P.A.
Work from Office
Full Time
Role & responsibilities
We're seeking an in-office DSA Coding Assessment & POC Specialist to work the evening shift, solving algorithmic challenges on platforms like Codility, LeetCode, and HackerRank, and crafting concise proof-of-concept demos to support our placement team.

Key Responsibilities
• Evening shift (5 PM–2 AM IST), on-site at our Hyderabad office.
• Tackle timed coding assessments (DSA/algorithms) on Codility, LeetCode, HackerRank, etc.
• Develop small proof-of-concepts (mini-apps, scripts) per client/candidate briefs.
• Deliver clean, well-documented code with quick turnaround.
• Work closely with our recruitment team to clarify requirements and ensure alignment.
• Maintain confidentiality of all test and POC materials.

Must-Have Skills
• Proficiency in Java, Python, or JavaScript (or similar).
• Strong grasp of data structures, algorithms, and OOP principles.
• Prior experience with timed coding platforms and take-home assessments.
• Excellent problem-analysis skills and the ability to prototype rapidly.
• Good communication and attention to detail.

Nice-to-Have
• Bachelor's degree in Computer Science, Engineering, or a related field.
• Familiarity with Git and modern development workflows.
• Previous freelance/contract work experience.

What We Offer
• On-site evening role with performance-based pay.
• Steady evening hours, well suited to students, second-shift professionals, or anyone seeking non-traditional hours.
• Exposure to varied, real-world technical challenges.
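For a sense of the timed DSA work this role involves, here is a generic warm-up problem solved in Java (a two-sum lookup using a single-pass hash map). It is an illustrative sample only, not an actual question from Codility, LeetCode, or HackerRank.

```java
import java.util.HashMap;
import java.util.Map;

// Classic timed-assessment warm-up: find indices of two numbers summing to a target.
// O(n) time, O(n) space using a single-pass hash map.
public class TwoSum {
    public static int[] twoSum(int[] nums, int target) {
        Map<Integer, Integer> seen = new HashMap<>(); // value -> index
        for (int i = 0; i < nums.length; i++) {
            int complement = target - nums[i];
            if (seen.containsKey(complement)) {
                return new int[] { seen.get(complement), i };
            }
            seen.put(nums[i], i);
        }
        return new int[0]; // no pair found
    }

    public static void main(String[] args) {
        int[] result = twoSum(new int[] {2, 7, 11, 15}, 9);
        System.out.println(result[0] + "," + result[1]); // prints 0,1
    }
}
```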
Hyderabad
INR 12.0 - 18.0 Lacs P.A.
Work from Office
Full Time
Role Description
Job Title: Senior Full Stack Developer - Node.js and Java
Location: Hyderabad
Experience Level: 5+ years
Package: 12 to 18 LPA
Mandatory Skills: Node.js, Express, Java Spring Boot, GraphQL, JavaScript, AWS, Git, Oracle, Kafka
Good-to-have Skills: Microservices architecture and design patterns.

Job Summary
We are looking for a Senior Full Stack Developer with strong experience in backend development, particularly in Node.js and Java. The ideal candidate will have hands-on expertise in AWS, JavaScript, and TypeScript, and be capable of building scalable, high-performance systems. You'll work in a collaborative environment, partnering with front-end developers, product managers, and other stakeholders.

Key Responsibilities
* Backend Development: Design, develop, and maintain scalable back-end systems using Node.js, Java, and GraphQL.
* Server-Side Logic: Implement and manage server-side logic, ensuring high performance and responsiveness.
* API Development: Develop and maintain GraphQL schemas and resolvers in Node.js.
* Cloud Deployment: Deploy, monitor, and manage applications on AWS.
* Collaboration: Work closely with front-end developers, product managers, and other stakeholders to define API requirements and functionalities.
* Optimization: Optimize applications for speed and scalability.
* Debugging & Troubleshooting: Troubleshoot, debug, and perform code reviews.
* Documentation: Create and maintain comprehensive documentation for new and existing features.
* Team Leadership: Mentor and manage associate developers.

Required Skills & Qualifications
* Professional Experience: 5+ years of back-end development experience with strong expertise in Node.js, Java, and GraphQL.
* Programming Languages: Proficiency in Java and JavaScript, with experience in TypeScript.
* Cloud Technologies: Solid experience with AWS services.
* Version Control: Experience with Git for version control.
* Containerization: Understanding of Docker and containerization technologies.
* CI/CD: Familiarity with Jenkins CI/CD pipelines and tools.
* Security: Experience with authentication and authorization mechanisms (e.g., OAuth).
* Problem-Solving: Excellent troubleshooting skills and attention to detail.
* Collaboration: Strong communication skills and the ability to work effectively within a team.

Preferred Skills
* Architecture: Knowledge of microservices architecture and design patterns.
* Mentorship: Experience in managing and mentoring associate developers.
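Since Kafka is listed among the mandatory skills, a minimal producer sketch gives a feel for the backend integration involved. This is a sketch only: the broker address, topic name, key, and payload below are placeholders, not details from the posting.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

// Minimal Kafka producer sketch using the standard kafka-clients library.
// Broker address, topic, key, and payload are placeholders.
public class OrderEventProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // The key routes related events to the same partition; the value is the event payload.
            producer.send(new ProducerRecord<>("orders", "order-42", "{\"status\":\"CREATED\"}"));
            producer.flush();
        }
    }
}
```

A corresponding consumer would subscribe to the same topic; both sides depend only on the org.apache.kafka:kafka-clients artifact.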
Musheerabad, Hyderabad, Telangana
INR 10.0 - 16.0 Lacs P.A.
On-site
Full Time
As the Senior DevOps Engineer focused on Observability, you will set observability standards, lead automation efforts, and mentor engineers, ensuring all monitoring and Datadog configuration changes are implemented as Infrastructure-as-Code (IaC). You will lead the design and management of a code-driven Datadog observability platform, providing end-to-end visibility into Java applications, Kubernetes workloads, and containerized infrastructure. This role emphasizes cost-effective observability at scale and requires deep expertise in Datadog monitoring, logging, tracing, and optimization techniques. You'll collaborate closely with SRE, DevOps, and Software Engineering teams to standardize monitoring and logging practices and deliver scalable, reliable, and cost-efficient observability solutions.

This is a hands-on engineering role focused on observability-as-code. All monitoring, logging, alerting, and Datadog configurations are defined and managed through Terraform, APIs, and CI/CD workflows rather than manual configuration in the Datadog UI.

PRIMARY RESPONSIBILITIES:
Own and define observability standards for Java applications, Kubernetes workloads, and cloud infrastructure.
Configure and manage the Datadog platform using Terraform and Infrastructure-as-Code (IaC) best practices.
Drive adoption of structured JSON logging, distributed tracing, and custom metrics across Java and Python services.
Optimize Datadog usage through cost governance, log filtering, sampling strategies, and automated reporting.
Collaborate closely with Java developers and platform engineers to standardize instrumentation and alerting.
Troubleshoot and resolve issues with missing or misconfigured logs, metrics, and traces, working with developers to ensure proper instrumentation and data flow into Datadog.
Participate in incident response efforts, using Datadog insights for actionable alerting, root cause analysis (RCA), and reliability improvements.
Serve as the primary point of contact for Datadog-related requests, supporting internal teams with onboarding, integration, and usage questions.
Continuously audit and tune monitors for alert quality, reducing false positives and improving actionable signal detection.
Maintain clear internal documentation on Datadog usage, standards, integrations, and IaC workflows.
Evaluate and propose improvements to the observability stack, including new Datadog features, OpenTelemetry adoption, and future architecture changes.
Mentor engineers and develop internal training programs on Datadog, observability-as-code, and modern log pipeline architecture.

QUALIFICATIONS:
Bachelor's degree in Computer Science, Engineering, Mathematics, Physics, or a related technical field.
5+ years of experience in DevOps, Site Reliability Engineering, or related roles with a strong focus on observability and infrastructure as code.
Hands-on experience managing and scaling Datadog programmatically using code-based workflows (e.g., Terraform, APIs, CI/CD).
Deep expertise in Datadog, including APM, logs, metrics, tracing, dashboards, and audit trails.
Proven experience integrating Datadog observability into CI/CD pipelines (e.g., GitLab CI, AWS CodePipeline, GitHub Actions).
Solid understanding of AWS services and best practices for monitoring services on Kubernetes infrastructure.
A strong background in Java application development is preferred.

Job Types: Full-time, Permanent, Contractual / Temporary
Contract length: 12 months
Pay: ₹1,000,000.00 - ₹1,600,000.00 per year
Benefits: Paid sick time
Schedule: Monday to Friday, Night shift, US shift
Ability to commute/relocate: Musheerabad, Hyderabad, Telangana: Reliably commute or planning to relocate before starting work (Preferred)
Education: Bachelor's (Preferred)
Experience: DevOps: 5 years (Required)
Language: English (Required)
Location: Musheerabad, Hyderabad, Telangana (Preferred)
Shift availability: Night Shift (Required)
Work Location: In person
Expected Start Date: 01/07/2025
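One concrete way to keep monitors out of the Datadog UI, as this posting requires, is to create them through Datadog's HTTP API (the Terraform provider is the more common IaC route). The sketch below assumes the v1 monitors endpoint with API and application key headers; the query, thresholds, message, tags, and environment variable names are illustrative assumptions, not values taken from the posting.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Sketch: create a Datadog metric monitor via the HTTP API instead of the UI.
// Assumes the v1 monitors endpoint and DD-API-KEY / DD-APPLICATION-KEY headers;
// expects DD_API_KEY and DD_APP_KEY environment variables (names are assumptions).
public class CreateMonitorSketch {
    public static void main(String[] args) throws Exception {
        String body = """
            {
              "name": "High CPU on staging",
              "type": "metric alert",
              "query": "avg(last_5m):avg:system.cpu.user{env:staging} > 90",
              "message": "CPU above 90% for 5 minutes",
              "tags": ["managed-by:code"]
            }""";

        HttpRequest request = HttpRequest.newBuilder(URI.create("https://api.datadoghq.com/api/v1/monitor"))
                .header("Content-Type", "application/json")
                .header("DD-API-KEY", System.getenv("DD_API_KEY"))
                .header("DD-APPLICATION-KEY", System.getenv("DD_APP_KEY"))
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```

Wrapping such a call (or the equivalent Terraform resource) in a CI/CD job is what keeps every monitor change reviewed and versioned.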
Hyderabad
INR 12.0 - 15.0 Lacs P.A.
Work from Office
Full Time
Tech stack: AWS, Terraform, Kubernetes, Docker, Linux, .NET Core, Selenium, Postgres, SQL Server, ECS, CloudWatch, CI/CD, YAML, Git, Entity Framework, Jenkins, PowerShell
Hyderabad
INR 9.6 - 15.0 Lacs P.A.
Work from Office
Full Time
Responsibilities:
* Design, develop, test, and maintain Scala applications using Akka, Cats, and the Play Framework with SQL/NoSQL databases.
* Collaborate with cross-functional teams on the night shift to deliver projects.
Hyderabad
INR 10.0 - 20.0 Lacs P.A.
Work from Office
Full Time
URGENT HIRING (WORK FROM OFFICE - US SHIFT TIMING)
Bachelor's degree in Computer Science, Information Technology, or a related field.
At least 5 years of hands-on experience in cloud engineering, with a focus on AWS.
Expertise in cloud architecture and design.
Hyderabad
INR 10.0 - 20.0 Lacs P.A.
Work from Office
Full Time
Role & responsibilities
As a Senior Spark Engineer (Scala), you'll partner in a team of experienced software engineers, removing impediments and enabling the teams to deliver business value. Ensure team ownership of legacy systems with an emphasis on maintaining operational stability. Be a passionate leader committed to the development and mentorship of your teams. Partner with business and IT stakeholders to ensure alignment with key corporate priorities. Share ideas and work to bring people together to help solve sophisticated problems. Create a positive and collaborative environment by championing open communication and soliciting continuous feedback. Stay current with new technology trends.

Additional Responsibilities
Participate in the discussion and documentation of best practices and standards for application development.
Comply with all company policies and procedures.
Remain current in profession and industry trends.
Successfully complete regulatory and job training requirements.

Required Experience
6+ years of hands-on software engineering experience with an object-oriented language, ideally Scala.
5+ years of experience using Spark, EMR, Glue, or other serverless compute technology in the cloud.
5+ years of experience architecting and enhancing data platforms and service-oriented architectures.
Experience working within Agile/DevSecOps development environments.
Excellent communication, collaboration, and mentoring skills.
More recent experience in cloud development preferred.
Experience working with modern, web-based architectures, including REST APIs, serverless, and event-driven microservices.
Bachelor's degree or equivalent in Computer Science, Information Technology, or a related discipline.

Desired Experience
Experience working with financial management stakeholders.
Experience with Workday or other large ERP platforms desired.
Life insurance or financial services industry experience a plus.
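To ground the Spark experience this role asks for, here is a minimal batch-aggregation sketch. The posting targets Scala; this sketch uses Spark's equivalent Java Dataset API, and the input path and column names are placeholders rather than details from the posting.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import static org.apache.spark.sql.functions.col;
import static org.apache.spark.sql.functions.sum;

// Minimal Spark batch-aggregation sketch (Java Dataset API; the Scala API is equivalent).
// Input path and column names are placeholders.
public class PolicyTotalsSketch {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("policy-totals-sketch")
                .getOrCreate();

        Dataset<Row> premiums = spark.read()
                .option("header", "true")
                .csv("s3://example-bucket/premiums/"); // placeholder path

        // Total premium per policy, as a simple illustration of a groupBy/agg job.
        premiums.groupBy(col("policy_id"))
                .agg(sum(col("amount").cast("double")).alias("total_premium"))
                .show(10);

        spark.stop();
    }
}
```

On EMR or Glue the same job would typically be submitted via spark-submit or defined as a Glue job, with the output written back to S3 instead of shown on the console.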
Hyderabad
INR 8.5 - 17.0 Lacs P.A.
Work from Office
Full Time
Role & responsibilities
Tech Stack: Spark, Scala, EMR, Glue, Agile, SQL

Required Skills
As a Senior Spark Engineer (Scala), you'll partner in a team of experienced software engineers, removing impediments and enabling the teams to deliver business value. Ensure team ownership of legacy systems with an emphasis on maintaining operational stability. Be a passionate leader committed to the development and mentorship of your teams. Partner with business and IT stakeholders to ensure alignment with key corporate priorities. Share ideas and work to bring people together to help solve sophisticated problems. Create a positive and collaborative environment by championing open communication and soliciting continuous feedback. Stay current with new technology trends.

Additional Responsibilities
Participate in the discussion and documentation of best practices and standards for application development.
Comply with all company policies and procedures.
Remain current in profession and industry trends.
Successfully complete regulatory and job training requirements.

Required Experience
5+ years of hands-on software engineering experience with an object-oriented language, ideally Scala.
5+ years of experience using Spark, EMR, Glue, or other serverless compute technology in the cloud.
5+ years of experience architecting and enhancing data platforms and service-oriented architectures.
Experience working within Agile/DevSecOps development environments.
Excellent communication, collaboration, and mentoring skills.
More recent experience in cloud development preferred.
Experience working with modern, web-based architectures, including REST APIs, serverless, and event-driven microservices.
Bachelor's degree or equivalent in Computer Science, Information Technology, or a related discipline.

Desired Experience
Experience working with financial management stakeholders.
Experience with Workday or other large ERP platforms desired.
Life insurance or financial services industry experience a plus.
Hyderabad
INR 10.0 - 20.0 Lacs P.A.
Work from Office
Full Time
Role & responsibilities
As a Senior Spark Engineer (Scala), you'll partner in a team of experienced software engineers, removing impediments and enabling the teams to deliver business value. Ensure team ownership of legacy systems with an emphasis on maintaining operational stability. Be a passionate leader committed to the development and mentorship of your teams. Partner with business and IT stakeholders to ensure alignment with key corporate priorities. Share ideas and work to bring people together to help solve sophisticated problems. Create a positive and collaborative environment by championing open communication and soliciting continuous feedback. Stay current with new technology trends.

Additional Responsibilities
Participate in the discussion and documentation of best practices and standards for application development.
Comply with all company policies and procedures.
Remain current in profession and industry trends.
Successfully complete regulatory and job training requirements.

Required Experience
6+ years of hands-on software engineering experience with an object-oriented language, ideally Scala.
5+ years of experience using Spark, EMR, Glue, or other serverless compute technology in the cloud.
5+ years of experience architecting and enhancing data platforms and service-oriented architectures.
Experience working within Agile/DevSecOps development environments.
Excellent communication, collaboration, and mentoring skills.
More recent experience in cloud development preferred.
Experience working with modern, web-based architectures, including REST APIs, serverless, and event-driven microservices.
Bachelor's degree or equivalent in Computer Science, Information Technology, or a related discipline.

Desired Experience
Experience working with financial management stakeholders.
Experience with Workday or other large ERP platforms desired.
Life insurance or financial services industry experience a plus.
Musheerabad, Hyderabad, Telangana
INR 10.0 - 20.0 Lacs P.A.
On-site
Full Time
As the Senior DevOps Engineer focused on Observability, you will set observability standards, lead automation efforts, and mentor engineers, ensuring all monitoring and Datadog configuration changes are implemented as Infrastructure-as-Code (IaC). You will lead the design and management of a code-driven Datadog observability platform, providing end-to-end visibility into Java applications, Kubernetes workloads, and containerized infrastructure. This role emphasizes cost-effective observability at scale and requires deep expertise in Datadog monitoring, logging, tracing, and optimization techniques. You'll collaborate closely with SRE, DevOps, and Software Engineering teams to standardize monitoring and logging practices and deliver scalable, reliable, and cost-efficient observability solutions.

This is a hands-on engineering role focused on observability-as-code. All monitoring, logging, alerting, and Datadog configurations are defined and managed through Terraform, APIs, and CI/CD workflows rather than manual configuration in the Datadog UI.

PRIMARY RESPONSIBILITIES:
Own and define observability standards for Java applications, Kubernetes workloads, and cloud infrastructure.
Configure and manage the Datadog platform using Terraform and Infrastructure-as-Code (IaC) best practices.
Drive adoption of structured JSON logging, distributed tracing, and custom metrics across Java and Python services.
Optimize Datadog usage through cost governance, log filtering, sampling strategies, and automated reporting.
Collaborate closely with Java developers and platform engineers to standardize instrumentation and alerting.
Troubleshoot and resolve issues with missing or misconfigured logs, metrics, and traces, working with developers to ensure proper instrumentation and data flow into Datadog.
Participate in incident response efforts, using Datadog insights for actionable alerting, root cause analysis (RCA), and reliability improvements.
Serve as the primary point of contact for Datadog-related requests, supporting internal teams with onboarding, integration, and usage questions.
Continuously audit and tune monitors for alert quality, reducing false positives and improving actionable signal detection.
Maintain clear internal documentation on Datadog usage, standards, integrations, and IaC workflows.
Evaluate and propose improvements to the observability stack, including new Datadog features, OpenTelemetry adoption, and future architecture changes.
Mentor engineers and develop internal training programs on Datadog, observability-as-code, and modern log pipeline architecture.

QUALIFICATIONS:
Bachelor's degree in Computer Science, Engineering, Mathematics, Physics, or a related technical field.
5+ years of experience in DevOps, Site Reliability Engineering, or related roles with a strong focus on observability and infrastructure as code.
Hands-on experience managing and scaling Datadog programmatically using code-based workflows (e.g., Terraform, APIs, CI/CD).
Deep expertise in Datadog, including APM, logs, metrics, tracing, dashboards, and audit trails.
Proven experience integrating Datadog observability into CI/CD pipelines (e.g., GitLab CI, AWS CodePipeline, GitHub Actions).
Solid understanding of AWS services and best practices for monitoring services on Kubernetes infrastructure.
A strong background in Java application development is preferred.

Job Types: Full-time, Permanent, Contractual / Temporary
Contract length: 12 months
Pay: ₹1,000,000.00 - ₹2,000,000.00 per year
Benefits: Paid sick time
Schedule: Monday to Friday, Night shift, US shift
Ability to commute/relocate: Musheerabad, Hyderabad, Telangana: Reliably commute or planning to relocate before starting work (Preferred)
Education: Bachelor's (Preferred)
Experience: DevOps: 5 years (Required)
Language: English (Required)
Location: Musheerabad, Hyderabad, Telangana (Preferred)
Shift availability: Night Shift (Required)
Work Location: In person
Expected Start Date: 21/07/2025