Get alerts for new jobs matching your selected skills, preferred locations, and experience range. Manage Job Alerts
2.0 - 5.0 years
4 - 7 Lacs
Mumbai
Work from Office
Are you someone who jumps into action when something breaks? Do you enjoy digging into dashboards, logs, and alerts to find out what went wrong? We're looking for a Product Support Engineer who loves solving problems and working closely with teams to keep systems running smoothly. In this role, you'll be the first point of contact for production issues. You'll monitor system health, investigate alerts, and work with Engineering, DevOps, Product, and Customer Support teams to fix problems fast. From resolving customer support tickets to setting up alerts and dashboards, your work will directly impact the stability and reliability of our services. If you're hands-on with monitoring tools, understand cloud basics, can dig into logs, and are eager to take ownership of production support, we'd love to connect with you.

What You'll Do:
- Monitor system health and performance using tools like Grafana, New Relic, Datadog, Sumo Logic, and Dynatrace.
- Create and maintain dashboards, alerts, and log queries to improve visibility and issue detection.
- Respond to and resolve support tickets by working closely with customer support and engineering teams.
- Use Jira to track issues, bugs, and tasks; keep them updated with clear status and progress.
- Document processes, known issues, and solutions in Confluence, and maintain operational playbooks.
- Troubleshoot and analyze production issues using logs and monitoring data.
- Support root cause analysis and contribute to post-incident reviews.
- Assist in automating routine tasks and improving support workflows.
- Communicate effectively with both technical and non-technical stakeholders.
- Apply basic SQL and programming knowledge for debugging and data checks.
- Collaborate with engineering, DevOps, product, and customer support teams to ensure fast resolution and continuous improvement.
What We're Looking For:
- B.Tech / MCA in Computer Science, IT, or a related field
- 3+ years of experience in a technical support, product support, or site operations role
- Hands-on experience with monitoring and observability tools like Grafana, New Relic, Datadog, Sumo Logic, and Dynatrace
- Experience creating dashboards, setting up alerts, and analyzing logs
- Working knowledge of Jira (issue tracking) and Confluence (documentation)
- Basic understanding of cloud platforms such as AWS or GCP
- Strong problem-solving skills with a proactive mindset
- Familiarity with SQL for basic querying and troubleshooting
- Basic programming or scripting experience (e.g., Python, Bash)
- Good communication skills and the ability to collaborate across teams (engineering, DevOps, product, and support)
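The posting above asks for basic programming knowledge applied to log digging and data checks. As a hedged illustration only (the log format, service names, and messages below are invented for the example, not taken from any real system), a first-pass log triage might look like this in Python:

```python
import re
from collections import Counter

# Hypothetical log format assumed for this sketch:
#   2024-05-01T10:00:00 ERROR payments Timeout calling upstream
LOG_PATTERN = re.compile(
    r"^(?P<ts>\S+)\s+(?P<level>[A-Z]+)\s+(?P<service>\S+)\s+(?P<msg>.*)$"
)

def error_counts(lines):
    """Count ERROR lines per service -- a first triage step before
    opening a ticket or escalating to engineering."""
    counts = Counter()
    for line in lines:
        m = LOG_PATTERN.match(line)
        if m and m.group("level") == "ERROR":
            counts[m.group("service")] += 1
    return counts

sample = [
    "2024-05-01T10:00:00 ERROR payments Timeout calling upstream",
    "2024-05-01T10:00:01 INFO  payments Retry succeeded",
    "2024-05-01T10:00:02 ERROR payments Timeout calling upstream",
    "2024-05-01T10:00:03 ERROR auth Invalid token signature",
]
print(error_counts(sample))  # Counter({'payments': 2, 'auth': 1})
```

The same counting idea translates directly into the SQL and log-query tools (Sumo Logic, Datadog) named in the posting.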
Posted 1 month ago
6.0 - 10.0 years
7 - 11 Lacs
Mumbai
Work from Office
We are looking for an experienced DevOps Engineer (Level 2/3) to design, automate, and optimize cloud infrastructure. You will play a key role in CI/CD automation, cloud management, observability, and security, ensuring scalable and reliable systems.

Key Responsibilities:
- Design and manage AWS environments using Terraform/Ansible.
- Build and optimize deployment pipelines (Jenkins, ArgoCD, AWS CodePipeline).
- Deploy and maintain EKS and ECS clusters.
- Implement OpenTelemetry, Prometheus, and Grafana for logs, metrics, and tracing.
- Manage and scale cloud-native microservices efficiently.

Required Skills:
- Proven experience in DevOps, system administration, or software development.
- Strong knowledge of AWS.
- Programming languages such as Python, Go, and Bash are good to have.
- Experience with IaC tools like Terraform and Ansible.
- Solid understanding of CI/CD tools (Jenkins, ArgoCD, AWS CodePipeline).
- Experience with containers and orchestration tools like Kubernetes (EKS).
- Understanding of the OpenTelemetry observability stack (logs, metrics, traces).

Good to have:
- Experience with container orchestration platforms (e.g., EKS, ECS).
- Familiarity with serverless architecture and tools (e.g., AWS Lambda).
- Experience using monitoring tools like Datadog/New Relic, CloudWatch, and Prometheus/Grafana.
- Experience managing more than 20 cloud-native microservices.
- Previous experience working in a startup.

Education & Experience:
- Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent work experience).
- Relevant years of experience in DevOps or a similar role.
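Several of the responsibilities above revolve around deployment pipelines. As an illustrative sketch only (the function name, thresholds, and traffic numbers are assumptions, not any team's actual policy), here is the kind of canary-promotion gate a CI/CD pipeline might call between rollout stages:

```python
def canary_verdict(baseline_errors, baseline_total, canary_errors, canary_total,
                   max_ratio=2.0, min_requests=100):
    """Decide whether a canary deployment may be promoted.

    Hold until enough canary traffic has been observed, then reject if
    the canary's error rate exceeds `max_ratio` times the baseline's.
    """
    if canary_total < min_requests:
        return "wait"  # not enough data to judge yet
    baseline_rate = baseline_errors / max(baseline_total, 1)
    canary_rate = canary_errors / max(canary_total, 1)
    # Floor the baseline at 0.1% so a perfectly clean baseline
    # doesn't make any single canary error fatal.
    if canary_rate > max_ratio * max(baseline_rate, 0.001):
        return "rollback"
    return "promote"

print(canary_verdict(10, 10_000, 1, 500))   # promote: 0.2% vs 0.1% baseline
print(canary_verdict(10, 10_000, 50, 500))  # rollback: 10% vs 0.1% baseline
```

In practice a pipeline stage (ArgoCD analysis, a Jenkins step) would feed this from Prometheus or Datadog metrics rather than literals.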
Posted 1 month ago
0.0 - 5.0 years
10 - 20 Lacs
Musheerabad, Hyderabad, Telangana
On-site
As the Senior DevOps Engineer focused on Observability, you will set observability standards, lead automation efforts, and mentor engineers, ensuring all monitoring and Datadog configuration changes are implemented as Infrastructure-as-Code (IaC). You will lead the design and management of a code-driven Datadog observability platform, providing end-to-end visibility into Java applications, Kubernetes workloads, and containerized infrastructure. This role emphasizes cost-effective observability at scale, requiring deep expertise in Datadog monitoring, logging, tracing, and optimization techniques. You'll collaborate closely with SRE, DevOps, and Software Engineering teams to standardize monitoring and logging practices and to deliver scalable, reliable, and cost-efficient observability solutions. This is a hands-on engineering role focused on observability-as-code. All monitoring, logging, alerting, and Datadog configurations are defined and managed through Terraform, APIs, and CI/CD workflows — not manual configuration in the Datadog UI.
PRIMARY RESPONSIBILITIES:
- Own and define observability standards for Java applications, Kubernetes workloads, and cloud infrastructure
- Configure and manage the Datadog platform using Terraform and Infrastructure-as-Code (IaC) best practices
- Drive adoption of structured JSON logging, distributed tracing, and custom metrics across Java and Python services
- Optimize Datadog usage through cost governance, log filtering, sampling strategies, and automated reporting
- Collaborate closely with Java developers and platform engineers to standardize instrumentation and alerting
- Troubleshoot and resolve issues with missing or misconfigured logs, metrics, and traces, working with developers to ensure proper instrumentation and data flow into Datadog
- Participate in incident response efforts, using Datadog insights for actionable alerting, root cause analysis (RCA), and reliability improvements
- Serve as the primary point of contact for Datadog-related requests, supporting internal teams with onboarding, integration, and usage questions
- Continuously audit and tune monitors for alert quality, reducing false positives and improving actionable signal detection
- Maintain clear internal documentation on Datadog usage, standards, integrations, and IaC workflows
- Evaluate and propose improvements to the observability stack, including new Datadog features, OpenTelemetry adoption, and future architecture changes
- Mentor engineers and develop internal training programs on Datadog, observability-as-code, and modern log pipeline architecture

QUALIFICATIONS:
- Bachelor's degree in Computer Science, Engineering, Mathematics, Physics, or a related technical field
- 5+ years of experience in DevOps, Site Reliability Engineering, or related roles with a strong focus on observability and infrastructure as code
- Hands-on experience managing and scaling Datadog programmatically using code-based workflows (e.g., Terraform, APIs, CI/CD)
- Deep expertise in Datadog, including APM, logs, metrics, tracing, dashboards, and audit trails
- Proven experience integrating Datadog observability into CI/CD pipelines (e.g., GitLab CI, AWS CodePipeline, GitHub Actions)
- Solid understanding of AWS services and best practices for monitoring services on Kubernetes infrastructure
- Strong background in Java application development is preferred

Job Types: Full-time, Permanent, Contractual / Temporary
Contract length: 12 months
Pay: ₹1,000,000.00 - ₹2,000,000.00 per year
Benefits: Paid sick time
Schedule: Monday to Friday, night shift (US shift)
Ability to commute/relocate: Musheerabad, Hyderabad, Telangana: Reliably commute or planning to relocate before starting work (Preferred)
Education: Bachelor's (Preferred)
Experience: DevOps: 5 years (Required)
Language: English (Required)
Location: Musheerabad, Hyderabad, Telangana (Preferred)
Shift availability: Night Shift (Required)
Work Location: In person
Expected Start Date: 21/07/2025
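The role above calls for driving structured JSON logging across Java and Python services. On the Python side, a minimal stdlib-only sketch of what that looks like (the field names are illustrative choices for this example, not Datadog's reserved attributes):

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Emit one JSON object per log line, which log pipelines
    (Datadog included) can parse without custom grok rules."""
    def format(self, record):
        payload = {
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        return json.dumps(payload)

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
log = logging.getLogger("checkout")
log.addHandler(handler)
log.setLevel(logging.INFO)

# Emits: {"level": "INFO", "logger": "checkout", "message": "order placed"}
log.info("order placed")
```

A real rollout would standardize additional fields (trace IDs, service, environment) so logs correlate with traces and metrics.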
Posted 1 month ago
0 years
0 Lacs
Hyderābād
On-site
Job description

Some careers shine brighter than others. If you're looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further. HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions. We are currently seeking an experienced professional to join our team in the role of Senior Consultant Specialist.

In this role, you will be responsible for developing automations across the Technology delivery lifecycle, including construction, testing, release, ongoing service management, and monitoring of a product or service within a Technology team. You will be required to continually enhance your skills across a number of specialisms, including CI/CD, automation, pipeline development, security, testing, and operational support. This role will carry out some or all of the following activities:
- Facilitate application teams across the Bank in deploying their applications across GCP services such as GKE containers, BigQuery, Dataflow, Pub/Sub, and Kafka
- Be the go-to person whenever an application team faces issues during platform adoption, onboarding, deployment, or environment troubleshooting
- Ensure service resilience, service sustainability, and recovery time objectives are met for all the software solutions delivered
- Automate the continuous integration / continuous delivery pipeline within a DevOps Product/Service team, driving a culture of continuous improvement
- Keep up to date and maintain expertise on current tools, technologies, and applicable areas such as cyber security and regulations pertaining to data privacy, consent, data residency, etc.
- Take end-to-end accountability for a product or service, identifying and developing the most appropriate Technology solutions to meet customer needs as part of the Customer Journey
- Liaise with other engineers, architects, and business stakeholders to understand and drive the product or service's direction
- Analyze production errors to define and create tools that help mitigate problems at the system design stage, applying user-defined integrations and improving the user experience

Requirements

To be successful in this role, you should meet the following requirements:
- Bachelor's degree in Computer Science or related disciplines
- 6 or more years of hands-on development experience building fully self-serve, observable solutions using Infrastructure and Policy as Code
- Proficiency developing with modern programming languages and the ability to rapidly develop proof-of-concepts
- Ability to work with geographically distributed and cross-functional teams
- Expert in code deployment tools (Jenkins, Puppet, Ansible, Git, Selenium, and Chef)
- Expert in automation tools (CloudFormation, Terraform, shell script, Helm, Ansible)
- Familiar with containers (Docker, Docker Compose, Kubernetes, GKE)
- Familiar with monitoring (Datadog, Grafana, Prometheus, AppDynamics, New Relic, Splunk)

The successful candidate will also meet the following requirements:
- Good understanding of GCP Cloud or hybrid-cloud approach implementations
- Good understanding of and experience with MuleSoft / PCF / any gateway server implementations
- Hands-on experience with the Kong API Gateway platform
- Good understanding of and experience with middleware and MQ areas
- Familiar with infrastructure support: Apache Gateway, runtime server configurations, SSL cert setup, etc.

You'll achieve more when you join HSBC.
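The responsibilities above include ensuring service resilience and recovery time objectives. One client-side building block for that is retry with exponential backoff; a minimal sketch follows (function names and delay values are assumptions for illustration, and production code would normally add jitter and an overall deadline):

```python
import time

def call_with_backoff(fn, attempts=4, base_delay=0.1, sleep=time.sleep):
    """Retry a flaky call, doubling the wait after each failure.
    Re-raises the last exception once attempts are exhausted."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            sleep(base_delay * (2 ** attempt))

# Simulate a dependency that fails twice before succeeding.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("upstream unavailable")
    return "ok"

# Pass a no-op sleep so the demo runs instantly.
print(call_with_backoff(flaky, sleep=lambda s: None))  # ok
```

The `sleep` parameter is injected so the behavior is testable without real delays, a common pattern for this kind of utility.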
www.hsbc.com/careers

HSBC is committed to building a culture where all employees are valued and respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website. Issued by – HSBC Software Development India
Posted 1 month ago
10.0 years
1 - 10 Lacs
Hyderābād
On-site
JOB DESCRIPTION

If you are a software engineering leader ready to take the reins and drive impact, we've got an opportunity just for you. As a Director of Software Engineering at JPMorgan Chase within Consumer and Community Banking, you will guide product teams to deploy infrastructure optimally as part of their modernization journey, assess applications for readiness to move to the public cloud, and enable application teams to effectively perform run functions such as upgrades, incident support, and self-serve. Your leadership and experience in public cloud migrations of complex systems, anticipating problems, and finding ways to mitigate risks and issues will be key in leading numerous public cloud initiatives from ideation to production by collaborating with cross-functional teams. Some of the key pillars you will drive are Solution Engineering, Technology Lifecycle Management, Problem Management, Resiliency, and Automation.

Job responsibilities:
- Collaborate with product and engineering teams to deliver robust cloud-based solutions that drive enhanced customer experiences.
- Guide various product teams on the standards and best practices related to the public cloud process, and help them mitigate production cloud issues with minimal downtime.
- Lead a team to develop, enhance, and maintain established standards and best practices; drive self-service; and deliver on a strategy built around broad use of Amazon's utility computing web services (e.g., AWS EC2, AWS S3, AWS RDS, AWS CloudFront, AWS EFS, CloudWatch, EKS).
- Own end-to-end platform issues and problem management, help provide solutions to platform production issues on the AWS cloud, and ensure applications are available as expected.
- Identify opportunities to improve the resiliency, availability, security, and performance of platforms in the public cloud using JPMC best practices.
- Improve reliability and quality, and reduce the time to resolve production incidents in software applications.
- Implement continuous process improvement, including but not limited to policy, procedures, and production monitoring, and reduce time to resolution.
- Identify, coordinate, and implement initiatives, projects, and activities that create efficiencies and optimize technical processing.
- Analyze upcoming platform-level changes into production and ensure communication of relevant impact.
- Measure and optimize system performance, with an eye toward pushing our capabilities forward, getting ahead of customer needs, and innovating to continually improve.
- Provide primary operational support and engineering for the public cloud platform.
- Show leadership on any production issue and manage all the corresponding teams in working towards a fix, while ensuring minimal customer impact.
- Debug and optimize systems and automate routine tasks.
- Collaborate with a cross-functional team to identify potential risks in production and opportunities to improve user experiences at every interaction.
- Drive work streams to ensure applications meet strict operational readiness for public cloud onboarding.
- Evaluate production readiness through game days, resiliency tests, and chaos engineering exercises.
- Utilize programming languages like Java, Python, SQL, Node, Go, and Scala; open-source RDBMS and NoSQL databases; container orchestration services including Docker and Kubernetes; and a variety of AWS tools and services.

Required qualifications, capabilities, and skills:
- Formal training or certification in software engineering concepts and 10+ years of applied experience. In addition, 5+ years of experience building or supporting environments on AWS using Terraform, including working with services like EC2, ELB, RDS, and S3.
- Strong understanding of business technology drivers and their impact on architecture design, performance, monitoring, and best practices.
- Dynamic individual with excellent communication skills, who can adapt verbiage and style to the audience at hand and deliver critical information in a clear and concise message.
- Strong experience in managing stakeholders at all levels.
- Strong analytical thinker, with business acumen and the ability to assimilate information quickly, with a solution-based focus on incident and problem management.
- Expertise using DevOps tools in a cloud environment, such as Ansible, Artifactory, Docker, GitHub, and Jenkins.
- Expertise using monitoring solutions like CloudWatch, Prometheus, and Datadog.
- Experience writing Infrastructure-as-Code (IaC) using tools like CloudFormation or Terraform.
- Experience with one or more public cloud platforms like AWS, GCP, or Azure.
- Experience with one or more automation tools like Terraform, Puppet, or Ansible.
- Experience with high-volume, mission-critical applications and their interdependencies with other applications and databases.
- Ability to leverage Splunk and Dynatrace to identify and troubleshoot issues.
- Experience with ITIL processes such as incident, problem, and lifecycle management.
- Experience with high-volume, mission-critical applications, and with building on messaging and/or event-driven architectures.
- Knowledge of container platforms such as Docker and Kubernetes.
- Strong understanding of architecture, design, and business processes.
- Keen understanding of financial and budget management, and control and optimization of public cloud expenses.
- Experience working in large, collaborative teams to achieve organizational goals, and a passion for building an innovative culture.
- Experience with production/non-production support of highly available applications.
- Experience with system performance monitoring and operational capacity management.
- Strong communication and collaboration skills.

Preferred qualifications, capabilities and skills:
- Bachelor's degree in computer science or another technical or scientific discipline.
- A proactive approach to spotting problems, areas for improvement, and performance bottlenecks.
- AWS certification.
- An SRE mindset and culture/approach: running better production systems by creating engineering solutions to operational problems.
- Ability to program (structured and OO) in one or more high-level languages, such as Python, Java, C/C++, Ruby, and JavaScript.
- Experience with infrastructure budgeting and finances, and infrastructure cost optimization.

ABOUT US

J.P. Morgan is a global leader in financial services, providing strategic advice and products to the world's most prominent corporations, governments, wealthy individuals and institutional investors. Our first-class business in a first-class way approach to serving clients drives everything we do. We strive to build trusted, long-term partnerships to help our clients achieve their business objectives. We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. We also make reasonable accommodations for applicants' and employees' religious practices and beliefs, as well as mental health or physical disability needs. Visit our FAQs for more information about requesting an accommodation.
ABOUT THE TEAM Our Consumer & Community Banking division serves our Chase customers through a range of financial services, including personal banking, credit cards, mortgages, auto financing, investment advice, small business loans and payment processing. We’re proud to lead the U.S. in credit card sales and deposit growth and have the most-used digital solutions – all while ranking first in customer satisfaction.
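Among the responsibilities listed for this role is evaluating production readiness through game days and chaos engineering exercises. A toy sketch of the underlying idea, fault injection wrapped around a dependency call, is shown below; every name and rate here is invented for illustration, not a real tool or system:

```python
import random

def chaos_wrap(fn, failure_rate, rng=random.random):
    """Wrap a dependency call so it fails a configurable fraction of
    the time -- a miniature version of the fault injection used in
    game days to check that callers degrade gracefully."""
    def wrapped(*args, **kwargs):
        if rng() < failure_rate:
            raise TimeoutError("injected fault")
        return fn(*args, **kwargs)
    return wrapped

def fetch_balance():
    # Placeholder dependency standing in for a real service call.
    return 100

# Deterministic demo: force a fault on the first call only.
faults = iter([0.0, 1.0])  # first draw is below the rate, second above
flaky_fetch = chaos_wrap(fetch_balance, failure_rate=0.5,
                         rng=lambda: next(faults))

try:
    flaky_fetch()
except TimeoutError as e:
    print("caller saw:", e)  # caller saw: injected fault
print(flaky_fetch())         # 100
```

Real chaos tooling injects faults at the network or platform layer rather than in code, but the readiness question it answers is the same.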
Posted 1 month ago
3.0 years
1 - 5 Lacs
Hyderābād
On-site
JOB DESCRIPTION

There's nothing more exciting than being at the center of a rapidly growing field in technology and applying your skillsets to drive innovation and modernize the world's most complex and mission-critical systems. As a Site Reliability Engineer III at JPMorgan Chase within the Employee Platforms team, you will solve complex and broad business problems with simple and straightforward solutions. Through code and cloud infrastructure, you will configure, maintain, monitor, and optimize applications and their associated infrastructure to independently decompose and iteratively improve on existing solutions. You are a significant contributor to your team by sharing your knowledge of end-to-end operations, availability, reliability, and scalability of your application or platform.

Job responsibilities:
- Guides and assists others in building appropriately leveled designs and gaining consensus from peers where appropriate
- Collaborates with other software engineers and teams to design and implement deployment approaches using automated continuous integration and continuous delivery pipelines
- Collaborates with other software engineers and teams to design, develop, test, and implement availability, reliability, and scalability solutions in their applications
- Implements infrastructure, configuration, and network as code for the applications and platforms in your remit
- Collaborates with technical experts, key stakeholders, and team members to resolve complex problems
- Understands service level indicators and utilizes service level objectives to proactively resolve issues before they impact customers
- Supports the adoption of site reliability engineering best practices within your team
- Contributes to large and collaborative teams by presenting information in a logical and timely manner with compelling language and limited supervision

Required qualifications, capabilities, and skills:
- Formal training or certification in software engineering concepts and 3+ years of applied experience
- Proficient in site reliability culture and principles, and familiar with how to implement site reliability within an application or platform
- Proficient in at least one programming language such as Python, Java/Spring Boot, or .NET
- Proficient knowledge of software applications and technical processes within a given technical discipline (e.g., cloud, artificial intelligence, Android, etc.)
- Experience in observability, such as white- and black-box monitoring, service level objective alerting, and telemetry collection using tools such as Grafana, Dynatrace, Prometheus, Datadog, Splunk, and others
- Experience with continuous integration and continuous delivery tools like Jenkins, GitLab, or Terraform
- Familiarity with containers and container orchestration such as ECS, Kubernetes, and Docker
- Familiarity with troubleshooting common networking technologies and issues
- Ability to proactively recognize roadblocks, and demonstrated interest in learning technology that facilitates innovation
- Ability to identify new technologies and relevant solutions to ensure design constraints are met by the software team

Preferred qualifications, capabilities, and skills:
- Ability to initiate and implement ideas to solve business problems
- Passion for learning new technologies and driving innovative solutions

ABOUT US

JPMorgan Chase, one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses and many of the world's most prominent corporate, institutional and government clients under the J.P. Morgan and Chase brands. Our history spans over 200 years and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing and asset management. We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success.
We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. We also make reasonable accommodations for applicants’ and employees’ religious practices and beliefs, as well as mental health or physical disability needs. Visit our FAQs for more information about requesting an accommodation. ABOUT THE TEAM Our professionals in our Corporate Functions cover a diverse range of areas from finance and risk to human resources and marketing. Our corporate teams are an essential part of our company, ensuring that we’re setting our businesses, clients, customers and employees up for success.
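The SRE role above mentions using service level indicators and objectives to act before customers are impacted. One common formulation of that is the error budget; here is a hedged sketch, where the SLO target and request counts are example numbers only:

```python
def error_budget_remaining(slo_target, good_events, total_events):
    """Fraction of the error budget still unspent for an
    availability SLO. With a 99.9% target the budget is 0.1% of
    events; each bad event spends part of it. A negative result
    means the SLO is breached -- a common trigger to slow releases."""
    budget = (1.0 - slo_target) * total_events  # allowed bad events
    bad = total_events - good_events            # observed bad events
    return (budget - bad) / budget if budget else 0.0

# 1,000,000 requests at a 99.9% SLO allows 1,000 failures;
# 250 failures leaves 75% of the budget.
print(round(error_budget_remaining(0.999, 999_750, 1_000_000), 4))  # 0.75
```

In practice the good/total counts come straight from the telemetry tools the posting lists (Prometheus, Datadog, Splunk), and the remaining budget drives alerting thresholds.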
Posted 1 month ago
7.0 years
0 Lacs
Chennai
On-site
SRE Tool Evaluation & Deployment (7+ Years)

Job Description

Join AWAC's engineering team to support the transition from Datadog and LogicMonitor to next-generation SRE tools. You'll contribute to tool evaluation, POCs, and deployment across AWS, Azure, and Databricks environments. This role requires hands-on experience with observability platforms and a strong understanding of cloud-native monitoring practices.

Key Responsibilities:
- Assist in evaluating and testing SRE tool alternatives
- Support implementation and configuration of selected tools
- Integrate monitoring with cloud and data platforms
- Develop dashboards and alerting mechanisms

Key Skills: SRE Tools (Prometheus, Grafana, etc.), AWS, Azure, Databricks, SQL Server, SSIS, Monitoring Setup, Cloud Observability

About Virtusa

Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
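Tool evaluation like the one described above is often structured as a weighted scoring matrix. Below is a minimal sketch; every weight, criterion, and rating is a made-up placeholder for the method, not a real comparison of any products:

```python
def score_tools(criteria_weights, ratings):
    """Weighted scoring matrix: each tool's score is the sum of
    (criterion weight x rating), rounded for readability."""
    return {
        tool: round(sum(criteria_weights[c] * r for c, r in scores.items()), 2)
        for tool, scores in ratings.items()
    }

# Placeholder weights (sum to 1.0) and 1-5 ratings per candidate tool.
weights = {"cost": 0.4, "coverage": 0.35, "migration_effort": 0.25}
ratings = {
    "tool_a": {"cost": 4, "coverage": 5, "migration_effort": 2},
    "tool_b": {"cost": 3, "coverage": 4, "migration_effort": 4},
}
print(score_tools(weights, ratings))  # {'tool_a': 3.85, 'tool_b': 3.6}
```

Keeping the matrix in code (or a spreadsheet exported from it) makes the POC findings auditable when the recommendation is challenged later.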
Posted 1 month ago
0 years
0 Lacs
Chennai
On-site
Job Description: We're looking for an Engineer to support AWAC's SRE tool migration initiative. You'll help configure and maintain new monitoring tools, replacing Datadog and LogicMonitor, and ensure visibility across AWS, Azure, and Databricks environments. Ideal for someone with hands-on experience in observability and a passion for modern tooling.

Key Responsibilities:
- Support setup and configuration of new SRE tools
- Assist in dashboard creation and alert tuning
- Collaborate with teams to ensure coverage across systems

Key Skills: SRE Tools, AWS, Azure, Databricks, SQL Server, SSIS, Monitoring Support, Tool Configuration
Posted 1 month ago
10.0 years
0 Lacs
Chennai
On-site
Implementation (10+ Years)

Job Description

We are seeking a strategic Observability Lead to spearhead AWAC's SRE tooling transformation. This role involves evaluating and recommending modern alternatives to Datadog and LogicMonitor, conducting gap analysis, and leading the implementation of selected tools across AWS, Azure, and Databricks environments. The ideal candidate will bring deep monitoring expertise and leadership in tool selection, integration, and rollout.

Key Responsibilities:
- Lead analysis of current monitoring tools (Datadog, LogicMonitor)
- Identify and evaluate SRE tool alternatives (e.g., Prometheus, Grafana, New Relic, Dynatrace)
- Architect and implement chosen solutions across cloud and data platforms
- Collaborate with engineering and data teams to ensure seamless integration

Key Skills: SRE Tooling Strategy, AWS, Azure, Databricks, SQL Server, SSIS, Monitoring Architecture, Tool Evaluation, Implementation Leadership
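The gap analysis mentioned above can be reduced, at its simplest, to a coverage diff between what the legacy stack monitors and what the replacement covers so far. A sketch with invented service names:

```python
def coverage_gaps(legacy_monitored, new_monitored):
    """Gap analysis for a monitoring migration: which services the
    legacy stack watches that the replacement does not yet cover,
    and which are newly covered. Names here are placeholders."""
    legacy, new = set(legacy_monitored), set(new_monitored)
    return {
        "missing_in_new": sorted(legacy - new),
        "only_in_new": sorted(new - legacy),
    }

legacy = ["checkout", "search", "billing", "etl-jobs"]
replacement = ["checkout", "search", "billing", "reporting"]
print(coverage_gaps(legacy, replacement))
# {'missing_in_new': ['etl-jobs'], 'only_in_new': ['reporting']}
```

In a real migration the two inventories would be pulled from the tools' APIs (monitor lists, check configurations) rather than typed in, but the diff is the decision-driving artifact either way.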
Posted 1 month ago
3.0 years
0 Lacs
India
Remote
About Us: Turing is one of the world's fastest-growing AI companies, pushing the boundaries of AI-assisted software development. Our mission is to empower the next generation of AI systems to reason about and work with real-world software repositories. You'll be working at the intersection of software engineering, open-source ecosystems, and frontier AI.

Project Overview: We're building high-quality evaluation and training datasets to improve how Large Language Models (LLMs) interact with realistic software consultancy tasks. A key focus of this project is curating verifiable software engineering challenges from public GitHub repository histories using a human-in-the-loop process.

Why This Role Is Unique:
- Collaborate directly with AI researchers shaping the future of AI-powered software development.
- Work with high-impact open-source projects and evaluate how LLMs perform on real bugs, issues, and developer tasks.
- Influence dataset design that will train and benchmark next-gen LLMs.

What the day-to-day looks like:
- Review and compare 3–4 model-generated code responses for each task using a structured ranking system.
- Evaluate code diffs for correctness, code quality, style, and efficiency.
- Provide clear, detailed rationales explaining the reasoning behind each ranking decision.
- Maintain high consistency and objectivity across evaluations.
- Collaborate with the team to identify edge cases and ambiguities in model behavior.

Required Skills:
- At least 3 years of experience at top-tier product or research companies (e.g., Stripe, Datadog, Snowflake, Dropbox, Canva, Shopify, Intuit, PayPal, or research roles at IBM, GE, Honeywell, Schneider, etc.), with 7+ years of overall professional software engineering experience.
- Strong fundamentals in software design, coding best practices, and debugging.
- Excellent ability to assess code quality, correctness, and maintainability.
- Proficiency with code review processes and reading diffs in real-world repositories.
- Exceptional written communication skills to articulate evaluation rationale clearly.
- Prior experience with LLM-generated code or evaluation work is a plus.

Bonus Points:
- Experience in LLM research, developer agents, or AI evaluation projects.
- Background in building or scaling developer tools or automation systems.

Engagement Details:
- Commitment: ~20 hours/week (partial PST overlap required)
- Type: Contractor (no medical/paid leave)
- Duration: 1 month (starting next week; potential extensions based on performance and fit)
- Rates: $40–$100/hour, based on experience and skill level.
Posted 1 month ago
10.0 years
1 - 7 Lacs
Noida
On-site
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. A seasoned and forward-thinking Infrastructure & DevOps Manager with deep expertise in designing scalable, resilient, and automated infrastructure solutions. This role leads a high-performing team of engineers responsible for building and maintaining modern DevOps ecosystems, leveraging Infrastructure as Code (IaC), container orchestration, CI/CD pipelines, and cloud-native technologies. The ideal candidate combines solid technical acumen with leadership capabilities to drive innovation, operational excellence, and continuous improvement across the infrastructure landscape in a 24x7 Rotational Shift Model. 
Primary Responsibilities: Design and implement Infrastructure as Code (IaC) using tools like Terraform Automate operational tasks using Python and other scripting languages Provide expert-level support for Apache, nginx, and Envoy, with working knowledge of HAProxy Lead development efforts using Go, and manage service discovery with Consul and Consul Template Leverage AI tools and AIOps to optimize operations and reduce manual overhead Build and maintain CI/CD pipelines using Jenkins, GitHub Actions, GitLab CI, or Azure DevOps Implement Agentic DevOps practices to enhance automation and decision-making Monitor infrastructure using Prometheus, Grafana, and Datadog; lead incident response and root cause analysis Deploy and manage containerized applications using Docker and Kubernetes Use Ansible or Chef for configuration management and provisioning Manage cloud infrastructure across AWS, Azure, and Google Cloud, ensuring cost-efficiency and scalability Automate machine image creation using Packer for consistent environments Collaborate with development and QA teams to ensure seamless software delivery Mentor junior engineers and promote best practices across the team Lead system integration projects and consult with stakeholders to align infrastructure with business goals Provide training and support on new systems and technologies Stay current with industry trends and emerging technologies Participate in rotational on-call shifts to provide 24/7 support for critical systems and infrastructure, promptly addressing any issues that arise Adhere to company policies and demonstrate flexibility in adapting to evolving business needs Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards 
to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so Required Qualifications: Bachelor’s degree in Computer Science, Engineering, or a related field Certifications in Kubernetes, Terraform, or any public cloud platform like AWS, Azure, or GCP 10+ years of experience in a DevOps role or similar Experience with distributed systems and microservices architecture Programming Languages: Proficiency in Python and experience with other scripting languages (e.g., Bash) Cloud Platforms: Familiarity with AWS, Azure, or GCP services. Proven experience implementing Public Cloud Services using Terraform within Terraform Enterprise or HCP Terraform Infrastructure as Code: Proficiency in tools like Terraform and Ansible. Proven experience in authoring Terraform and shared Terraform Modules DevOps Tools: Solid experience with tools like Terraform, Kubernetes, Docker, Packer, and Consul CI/CD Pipelines: Hands-on experience with CI/CD tools (Jenkins, GitLab CI, GitHub Actions, etc.) Monitoring & Logging: Knowledge of monitoring tools (e.g., Prometheus, Grafana, Datadog) and logging tools (e.g., ELK Stack) Version Control: Experience with Git and branching strategies System Implementation & Integration: Proven experience in system implementation and integration projects Consulting Skills: Ability to consult with clients and stakeholders to understand their needs and provide expert advice Soft Skills: Solid analytical and problem-solving skills Proven excellent communication and collaboration abilities Ability to work in an agile and fast-paced environment At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. 
We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
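As a flavor of the "automate operational tasks using Python" responsibility described above: a minimal, stdlib-only sketch of batching a fleet for rolling patching so that only a bounded fraction of hosts is ever down at once. The host names and the 25% unavailability ceiling are invented for illustration, not Optum tooling.

```python
def rolling_batches(hosts, max_unavailable_pct=25):
    """Split a fleet into patch batches so that at most
    max_unavailable_pct percent of hosts are offline at once."""
    if not hosts:
        return []
    # Integer floor of pct% of the fleet, but never an empty batch.
    batch_size = max(1, len(hosts) * max_unavailable_pct // 100)
    return [hosts[i:i + batch_size] for i in range(0, len(hosts), batch_size)]

# Hypothetical 10-host web fleet: with a 25% ceiling we get batches of 2.
fleet = [f"web-{n:02d}" for n in range(1, 11)]
batches = rolling_batches(fleet)
```

A real patch orchestrator would drain, patch, and health-check each batch before moving to the next; the batching math above is the piece worth getting right first.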
Posted 1 month ago
5.0 years
0 Lacs
Andhra Pradesh
On-site
About the Role We are seeking a highly skilled and proactive Senior Cloud Infrastructure Engineer to join our dynamic team. This role is ideal for someone with deep expertise in Infrastructure as Code (IaC) using Terraform, strong troubleshooting capabilities, and a passion for building scalable, secure, and cost-effective cloud infrastructure in AWS. You will work closely with our engineering teams to provision and optimize infrastructure, support production environments, and drive automation and performance improvements across our cloud ecosystem. Key Responsibilities Infrastructure Development & Automation Design, implement, and maintain cloud infrastructure using Terraform. Build and manage reusable Terraform modules to streamline provisioning of AWS resources (EKS, RDS, ElastiCache, EC2). Develop and maintain CI/CD pipelines using GitHub Actions and ArgoCD. Cloud Optimization & Monitoring Analyze and optimize AWS resource usage for performance and cost efficiency. Create and manage Datadog dashboards and alerts to monitor production systems. Collaborate with cybersecurity teams (e.g., CrowdStrike) to ensure secure and performant clusters. Support & Collaboration Assist engineering teams with infrastructure provisioning and maintenance. Support EC2 patching and integrate servers into disaster recovery plans. Facilitate migrations (e.g., Cognito, Autoloader) and troubleshoot cross-account issues. Configure CDN between portal and mobile applications. Required Skills & Experience 5+ years of experience in cloud infrastructure engineering, preferably in AWS. Strong proficiency in Terraform and Infrastructure as Code principles. Experience with EKS, RDS, ElastiCache, and other AWS services. Hands-on experience with GitHub Actions, ArgoCD, and CI/CD pipelines. Familiarity with Datadog for monitoring and alerting. Solid understanding of cloud security and disaster recovery practices. Excellent troubleshooting and communication skills. 
About Virtusa Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state of the art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
Posted 1 month ago
2.0 years
0 Lacs
Coimbatore, Tamil Nadu, India
Remote
Role: Performance Test Engineer (2+ Years Experience) Location: Coimbatore (In Office) Job Summary: We are seeking a skilled Performance Test Engineer with hands-on experience in testing both serverless and traditional server-based systems, as well as mobile applications. The ideal candidate will have a strong understanding of performance testing tools, cloud platforms (AWS/Azure/GCP), CI/CD pipelines, and mobile environments. You will be responsible for identifying bottlenecks, simulating load, and ensuring the scalability, reliability, and efficiency of applications under varying loads and network conditions. Key Responsibilities: Design, develop, and execute performance, load, and stress tests for applications built on serverless (e.g., AWS Lambda) and server-based (e.g., Node.js, Java) architectures. Plan and conduct mobile performance testing across different devices and network conditions to simulate real-world usage. Collaborate with development, DevOps, and mobile teams to define test scenarios based on real-world workloads, SLAs, and user behaviour patterns. Analyze test results to identify system bottlenecks, CPU/memory utilization issues, and latency problems across both web and mobile platforms. Monitor and benchmark API performance, infrastructure scalability, third-party system integrations, and mobile responsiveness. Use cloud-native tools and third-party solutions (e.g., AWS X-Ray, CloudWatch, k6, JMeter, Gatling, Artillery) to simulate and monitor traffic. Automate performance tests and integrate them into CI/CD pipelines. Generate detailed test reports with actionable insights and optimization recommendations for both web and mobile systems. Continuously refine performance testing strategies for scalability, cost-efficiency, mobile performance, and test coverage. Required Skills & Experience: 2+ years of hands-on experience in performance and load testing. 
Practical experience with tools such as k6, Apache JMeter, Artillery, Gatling, or similar. Solid understanding of serverless services (AWS Lambda, Step Functions, API Gateway) and server-based systems (e.g., EC2, containerized APIs). Experience monitoring performance using tools like CloudWatch, X-Ray, or equivalent. Familiarity with distributed tracing tools such as OpenTelemetry, Jaeger, or AWS X-Ray. Proficiency in JavaScript/Node.js, Java, or Python for scripting and automation. Familiarity with CI/CD pipelines (e.g., GitHub Actions, GitLab CI, Jenkins) and experience embedding performance tests into workflows. Experience with mobile performance testing using tools like Charles Proxy, Firebase Performance Monitoring, Xcode Instruments, or Android Profiler. Knowledge of API protocols (REST, WebSockets), authentication mechanisms, and latency-related factors. Experience in cloud environments, preferably AWS. Strong understanding of auto-scaling mechanisms in both serverless and traditional environments. Nice to Have: Proficiency with IaC tools like Terraform, AWS CloudFormation, or Serverless Framework. Knowledge of event-driven architectures and message queues like Amazon SQS, Kafka, or RabbitMQ. Awareness of security and compliance considerations in performance testing (e.g., rate limiting, HIPAA, GDPR). Basic understanding of front-end performance testing using Lighthouse, WebPageTest, or Sitespeed.io. Experience with Real User Monitoring (RUM) tools like New Relic Browser, Datadog RUM, or Google Analytics. Mobile performance testing exposure across platforms (iOS/Android) and networks (3G/4G/5G) including battery usage, cold start time, and memory profiling. Test data generation using Mockaroo, Faker.js, or custom scripts. 
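Much of the "analyze test results" work listed above reduces to latency percentiles (p95/p99). Here is a tool-agnostic, stdlib-only sketch using the nearest-rank method; the sample latencies are made up for illustration.

```python
def percentile(samples, pct):
    """Nearest-rank percentile of a list of latency samples (ms)."""
    if not samples:
        raise ValueError("no samples")
    ordered = sorted(samples)
    # Nearest-rank: ceil(pct/100 * N), 1-indexed; -(-a // b) is ceiling division.
    rank = max(1, -(-len(ordered) * pct // 100))
    return ordered[rank - 1]

# Hypothetical response times from a load run, in milliseconds.
latencies = [120, 135, 110, 480, 150, 140, 125, 900, 130, 145]
p95 = percentile(latencies, 95)  # the tail is what SLAs usually care about
```

Note how the p95 (900 ms here) tells a very different story from the median: averaging these samples would hide the two slow outliers that a real user at the tail actually experiences.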
To apply: If you believe you have all of the above, please share the following with gayathri@steam-a.com and preeti@steam-a.com: phone number; email ID; total number of years of experience as a Performance Test Engineer; current CTC; expected CTC; whether you are an immediate joiner; notice period/availability; current location; and an updated CV. This is a work-from-office role based in Coimbatore (not remote or hybrid). Please acknowledge.
Posted 1 month ago
10.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
About the Role As a Full Stack Engineering Manager, you'll lead a cross-functional development team responsible for building secure, scalable, and high-performance financial intelligence tools. You will oversee the architecture, execution, and delivery of full stack web applications powered by AWS serverless infrastructure, modern React frontends, and AI-enhanced features using services like Amazon Bedrock. Responsibilities 🔹 Leadership & Engineering Strategy • Lead a team of full-stack engineers across frontend, backend, and DevOps. • Own technical delivery, timelines, and solution architecture across multiple product modules. • Facilitate Agile/Scrum ceremonies and ensure iterative, high-impact product development. • Mentor, coach, and support team members to improve their skills and career growth. • Collaborate closely with Product, Design, and Data Science teams to shape product roadmap. 🔹 Backend & Cloud Architecture • Design and implement Python-based, event-driven, serverless architectures using AWS Lambda. • Build secure, high-performance RESTful APIs using AWS API Gateway, Lambda, DynamoDB, PostgreSQL (pgvector), and Step Functions. • Integrate with third-party services like Zoho, Gmail APIs, Netsuite, and IMAP using secure OAuth flows. • Drive CI/CD pipelines using GitHub Actions and AWS SAM/CloudFormation. • Apply best practices in observability, monitoring (CloudWatch), and security (IAM, Secrets Manager). 🔹 Frontend Engineering • Oversee the development of React-based frontend apps using TypeScript and modern libraries like MUI, Tailwind CSS, Zustand, and react-chatbotify. • Architect scalable, reusable UI components with a strong UX focus and dynamic AI-driven interactivity. • Manage frontend deployments via AWS CloudFront and Route 53. • Enable secure user authentication and authorization workflows using Auth0 or Cognito. 🔹 DevOps & Platform Engineering • Define and manage infrastructure as code (IaC) using AWS SAM or CDK. 
• Optimize performance, logging, and monitoring across distributed systems. • Establish and enforce coding, deployment, and security standards across environments (Dev, PreProd, Prod). • Ensure cost-effective cloud resource usage and enforce budget-aware scalability strategies. What We’re Looking For ✅ Must-Have Skills • 10+ years in web development; 3+ years in a leadership role. • Expertise in Python, React, and AWS Serverless (Lambda, API Gateway, DynamoDB). • Strong experience in CI/CD, GitHub Actions, IaC (CloudFormation/SAM), and containerization (Docker). • Deep understanding of RESTful APIs, asynchronous event-driven systems, and security best practices. • Proven success in delivering production systems in regulated or financial domains. 🌟 Nice to Have • AWS Certifications (Developer Associate / Solutions Architect). • Experience with pgvector, LLM-based RAG systems, or Bedrock/Amazon Titan. • Familiarity with monorepo, micro-frontend, or modular codebase strategies. • Exposure to tools like Datadog, New Relic, or Sentry for observability. • Background in building AI-powered features (chatbots, email reply suggestions, etc.). Why Join Us? • Work at the cutting edge of finance, AI, and automation. • Solve meaningful problems that help businesses grow smarter. • Build and lead high-impact products from the ground up. • Flexible remote/hybrid work culture with a focus on output and ownership.
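The API Gateway + Lambda pattern named in the backend responsibilities above can be sketched as a proxy-style handler. The route, payload shape, and names below are illustrative assumptions, not this company's code; one nice property of the pattern is that the handler can be exercised locally with a fake proxy event, no AWS account needed.

```python
import json

def handler(event, context):
    """Minimal API Gateway proxy-style Lambda handler (hypothetical route)."""
    if event.get("httpMethod") != "GET":
        return {"statusCode": 405,
                "body": json.dumps({"error": "method not allowed"})}
    # Query-string params may be absent entirely, hence the `or {}` guard.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Local smoke test with a fabricated proxy event:
resp = handler({"httpMethod": "GET", "queryStringParameters": {"name": "ops"}}, None)
```

Keeping the handler a pure function of the event dict is what makes the unit-testing and CI/CD story (GitHub Actions + SAM, per the posting) straightforward.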
Posted 1 month ago
7.0 - 12.0 years
15 - 25 Lacs
Bengaluru
Work from Office
Preferred candidate profile: DevOps & Cloud Infrastructure Engineer to lead the design, implementation, and optimization of scalable, secure, and cost-effective infrastructure in Microsoft Azure. The ideal candidate will have deep expertise in Kubernetes, Docker, Terraform, CI/CD, and monitoring tools like Datadog, Grafana, and Prometheus, along with experience in SonarQube setup, Azure AD integration, and multi-tenancy architecture. Key Responsibilities: Design and implement scalable, secure, and cost-efficient infrastructure on Azure. Set up and manage Kubernetes clusters and containerized applications using Docker. Automate infrastructure provisioning using Terraform. Build and maintain robust CI/CD pipelines for continuous integration and deployment. Design and implement multi-tenancy architecture to support multiple clients or business units securely and efficiently.
Posted 1 month ago
8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Title: Senior Performance Engineer Experience: 8+ years Location: Pune Key Responsibilities: • Design and develop automated suites/tools, identify performance bottlenecks in design and implementation, and be involved in deployment, troubleshooting/analysis, and preparing performance engineering reports • Work within the Agile framework, executing performance tasks along with the sprint team • Interact and coordinate between different stakeholders across Engineering/Architect/PO/Doc functions • Prioritize tasks and communicate to all stakeholders; adopt, enable, and ensure timely decision-making across the stakeholders Desired Skills & Experience: • Professional degree (Bachelor's / Master's) in engineering with a consistent academic record • Professional hands-on experience of 5 to 8 years in Performance Engineering activities and developing performance tools Must have skills: • Good knowledge of Java, JMeter, Python • Good knowledge of databases and DB queries in Postgres & MongoDB • Good knowledge of performance monitoring tools like New Relic, Datadog, Grafana • Good knowledge of performance profiling tools like YourKit, JProfiler, VisualVM • Exposure to virtualization, K8s, AWS, Azure and Unix flavors • Exposure to microservices benchmarking and horizontal scalability • Confident in complex analysis, managing trade-offs between technical benefit, risk, and efficiency • Very strong communicator and critical thinker, natural at articulating complex technical topics and ideas to technical and non-technical stakeholders • Demonstrated success in providing clarity and delivering the sprint goals amid ambiguity Nice to have skills: • Exposure to working with auto-scaling and redundant technologies is a huge plus • Familiarity with message queues like Kafka, large storage systems like S3, and NoSQL DBs is a plus
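Capacity questions of the kind this role handles ("how many in-flight requests must a service sustain at a given throughput and latency, and how far can a worker pool scale?") follow directly from Little's Law, L = λ × W. A small sketch with purely illustrative numbers:

```python
def required_concurrency(throughput_rps, avg_latency_s):
    """Little's Law: in-flight requests L = arrival rate lambda x latency W."""
    return throughput_rps * avg_latency_s

def max_throughput(concurrency, avg_latency_s):
    """Invert Little's Law: the ceiling a fixed worker pool can serve."""
    return concurrency / avg_latency_s

# To hold 200 req/s at 250 ms average latency you need 50 requests in flight;
# a pool of 100 workers at that latency tops out at 400 req/s.
needed = required_concurrency(200, 0.250)
ceiling = max_throughput(100, 0.250)
```

The same relation explains why a latency regression silently halves a service's throughput ceiling even when no code path "broke", which is often the first clue in horizontal-scalability benchmarking.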
Posted 1 month ago
0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
Company Description Jitterbit is a leading data, application, and process workflow automation solution. Rooted in iPaaS and fueled by an ambitious vision, we integrate critical business processes to deliver the experiences and insights needed by enterprises of all sizes to accelerate their digital journey and future-proof their business. Simply put, we power people to perform their best. Jitterbit empowers business transformation by automating critical business processes for faster, more informed decision-making. Jitterbit is the only provider to seamlessly combine and simplify the power of integration, APIM, and no-code app creation to amplify the value of your tech stack and speed up your digital journey. Organizations worldwide rely on Jitterbit’s experience and expertise to help them save time and money, while creating exceptional experiences, now and into the future. Job Description About the Role: We are seeking a talented and passionate DevOps Engineer to join our growing team. You will play a crucial role in building and maintaining our infrastructure, automating our deployment pipelines, and ensuring the reliability and scalability of our applications. You will work closely with development and operations teams to streamline our processes and contribute to a culture of continuous improvement. Responsibilities: Design, build, and maintain our cloud infrastructure (AWS, Azure, GCP). Automate infrastructure provisioning and configuration using Infrastructure as Code (IaC) tools (e.g., Terraform, CloudFormation). Develop and maintain CI/CD pipelines using tools like Jenkins, GitLab CI/CD, and GitHub Actions. Implement and manage containerization and orchestration technologies (Docker, Kubernetes). Monitor system performance and availability using tools like Zabbix, Grafana, Datadog, or the ELK stack. Troubleshoot and resolve infrastructure and application issues. Implement and maintain security best practices throughout the development and deployment lifecycle. 
Collaborate with development and operations teams to improve processes and workflows. Participate in on-call rotations to ensure system availability. Contribute to documentation and knowledge sharing. Qualifications Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience). Proven experience as a DevOps Engineer or in a similar role. Strong understanding of cloud computing concepts and experience with AWS, Azure, or GCP. Proficiency in scripting languages (e.g., Python, Bash). Experience with IaC tools (e.g., Terraform, CloudFormation). Experience with CI/CD pipelines and tools. Experience with containerization and orchestration technologies (Docker, Kubernetes). Experience with monitoring and logging tools. Strong understanding of networking concepts. Excellent problem-solving and troubleshooting skills. Strong communication and collaboration skills. Ability to work in a fast-paced environment. Knowledge of security best practices. Preferred Qualifications: Relevant certifications (e.g., AWS Certified DevOps Engineer, Certified Kubernetes Administrator). Experience with configuration management tools (e.g., Ansible, Chef, Puppet). Experience with database administration. Experience with serverless technologies. Additional Information What You’ll Get: Work for a growing leader within the Integration Platform as a Service (iPaaS) tech space Join a mission-driven company that is transforming the industry by changing the way customers use API creation within business-critical processes Career development and mentorship A flexible, remote-friendly company with personality and heart Jitterbit is an Equal Opportunity Employer. Employment decisions are made without regard to race, color, religion, national origin, gender, sexual orientation, gender identity, age, physical or mental disability, genetic factors, military/veteran status or other characteristics protected by law.
Posted 1 month ago
20.0 years
0 Lacs
Noida, Uttar Pradesh, India
Remote
Who we are Brightly, a Siemens company, is the global leader in intelligent asset management solutions. Brightly enables organizations to transform the performance of their assets with a sophisticated cloud-based platform that leverages more than 20 years of data to deliver predictive insights that help users through the key phases of the entire asset lifecycle. More than 12,000 clients of every size worldwide depend on Brightly’s complete suite of intuitive software – including CMMS, EAM, Strategic Asset Management, IoT Remote Monitoring, Sustainability and Community Engagement. Paired with award-winning training, support and consulting services, Brightly helps light the way to a bright future with smarter assets and sustainable operations. About The Job Brightly continues to grow and needs amazing engineers. This is an excellent fit for talented engineers who thrive in a fast-paced environment. New hires will work alongside our top-notch engineers and product team to design, implement, deliver, and support our highly ambitious products and integrations. We care deeply about your passion and dedication to the craft of software. The Test Manager supports the application platforms in the Engineering department. This is a great opportunity for a QA leader to bring their experience, business acumen, and architectural philosophies to the forefront of the day-to-day management of our projects and sprints. What you’ll be doing Lead and mentor a team of test engineers focused on web, mobile, and cloud-based applications. Define and implement robust test strategies including functional, regression, performance, and accessibility testing. Drive automation initiatives using modern frameworks for web and mobile, with tools like Selenium/Appium, etc. Oversee performance testing efforts to ensure scalability and responsiveness using tools like JMeter, k6, etc. Integrate testing into CI/CD pipelines. Manage test environments in containerized and cloud-native setups (Kubernetes, Docker, AWS). 
Ensure compliance with accessibility standards (WCAG 2.1+) and collaborate with design and development teams to improve inclusive user experiences. Monitor test execution, analyze results, and report quality metrics to stakeholders. Continuously improve testing processes, tools, and team capabilities. Create career development programs for the team and develop short-term plans to ensure the skills and performance of employees meet current and future needs. Collaborate in the product lifecycle with senior engineers, development managers, product managers, and scrum masters in an agile environment, with scrum implemented at scale globally. Ability to multitask and stay organized in a dynamic work environment. Be part of continuous improvement processes. Welcome change and complexity. Learn quickly and adapt fast. Be a change leader! What you need Bachelor’s or Master’s degree in Computer Science, Engineering, or related field. 10+ years of experience in software testing, with at least 3 years in a leadership role. Strong ability to understand technical product architecture. Strong communication, partnership, and teamwork skills required. Hands-on experience with test automation for web and mobile platforms. Strong experience in performance testing and analysis. Experience with performance monitoring and analysis tools like Datadog, Dynatrace, New Relic, etc. Proficiency with cloud platforms (AWS, Azure, GCP) and container orchestration (Docker, Kubernetes). Experience with CI/CD tools (e.g., Jenkins, TeamCity). Familiarity with accessibility testing tools and WCAG guidelines. Excellent leadership, communication, and organizational skills. Solid understanding of Agile/Scrum methodologies. Experience with Blue/Green or Canary deployments. Great to have: experience with application migration from monolith to microservices. The Brightly culture Service. Ingenuity. Integrity. Together. 
These values are core to who we are and help us make the best decisions, manage change, and provide the foundations for our future. These guiding principles help us innovate, flourish and make a real impact in the businesses and communities we help to thrive. We are committed to the great experiences that nurture our employees and the people we serve while protecting the environments in which we live. Together we are Brightly
Posted 1 month ago
5.0 years
0 Lacs
India
Remote
At TechBiz Global, we provide recruitment services to the top clients in our portfolio. We are currently seeking four DevOps Support Engineers to join one of our clients' teams in India, starting by the 20th of July. If you're looking for an exciting opportunity to grow in an innovative environment, this could be the perfect fit for you. Key Responsibilities Monitor and troubleshoot AWS and/or Azure environments to ensure optimal performance and availability Respond promptly to incidents and alerts, investigating and resolving issues efficiently Perform basic scripting and automation tasks to streamline cloud operations (e.g., Bash, Python) Communicate clearly and fluently in English with customers and internal teams Collaborate closely with the Team Lead, following Standard Operating Procedures (SOPs) and escalation workflows Work in a rotating shift schedule, including weekends and nights, ensuring continuous support coverage Shift Detail Each engineer works about 4–5 shifts per week, rotating through morning, evening, and night shifts (including weekends) to cover 24/7 support evenly among the team Rotation ensures no single engineer is always working nights or weekends; the load is shared fairly among the team Qualifications 2–5 years of experience in DevOps or cloud support roles Strong familiarity with AWS and/or Azure cloud environments Experience with CI/CD tools such as GitHub Actions or Jenkins Proficiency with monitoring tools like Datadog, CloudWatch, or similar Basic scripting skills in Bash, Python, or a comparable language Excellent communication skills in English Comfortable and willing to work in a shift-based support role, including night and weekend shifts Prior experience in a shift-based support environment is preferred What We Offer Remote work opportunity: work from anywhere in India with a stable internet connection Comprehensive training program including shadowing 
existing processes to gain hands-on experience, and learning internal tools, Standard Operating Procedures (SOPs), ticketing systems, and escalation paths to ensure smooth onboarding and ongoing success
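The fair rotation described in the shift detail above is easy to prototype: round-robin engineers through morning/evening/night slots so nobody is pinned permanently to nights. A toy sketch with invented names and a four-day window:

```python
from itertools import cycle

def build_rota(engineers, days, shifts=("morning", "evening", "night")):
    """Assign one engineer per shift per day, rotating fairly through the pool."""
    pool = cycle(engineers)  # continuous rotation across days, not reset daily
    rota = []
    for day in range(1, days + 1):
        for shift in shifts:
            rota.append((day, shift, next(pool)))
    return rota

# Four engineers, three shifts a day: over four days everyone covers a night.
rota = build_rota(["asha", "ben", "chen", "dev"], days=4)
night_workers = {who for day, shift, who in rota if shift == "night"}
```

Because the pool cycles continuously rather than restarting each day, the night slot drifts through the whole team, which is exactly the "no single engineer is always working nights" property the posting describes.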
Posted 1 month ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
About AiSensy AiSensy is a WhatsApp-based marketing and engagement platform helping businesses like Skullcandy, Vivo, Rentomojo, Physicswallah, and Cosco grow their revenues via WhatsApp. Enabling 100,000+ businesses with WhatsApp engagement and marketing 400+ crore WhatsApp messages exchanged between businesses and users via AiSensy per year Working with top brands like Delhi Transport Corporation, Vivo, Physicswallah and more High impact, as businesses drive 25–80% of revenues using the AiSensy platform Mission-driven, growth-stage startup backed by Marsshot.vc, Bluelotus.vc and 50+ angel investors Now, we’re looking for a DevOps Engineer to help scale our infrastructure and optimize performance for millions of users. 🚀 What You’ll Do (Key Responsibilities) 🔹 CI/CD & Automation: Implement, manage, and optimize CI/CD pipelines using AWS CodePipeline, GitHub Actions, or Jenkins. Automate deployment processes to improve efficiency and reduce downtime. 🔹 Infrastructure Management: Use Terraform, Ansible, Chef, Puppet, or Pulumi to manage infrastructure as code. Deploy and maintain Dockerized applications on Kubernetes clusters for scalability. 🔹 Cloud & Security: Work extensively with AWS (preferred) or other cloud platforms to build and maintain cloud infrastructure. Optimize cloud costs and ensure security best practices are in place. 🔹 Monitoring & Troubleshooting: Set up and manage monitoring tools like CloudWatch, Prometheus, Datadog, New Relic, or Grafana to track system performance and uptime. Proactively identify and resolve infrastructure-related issues. 🔹 Scripting & Automation: Use Python or Bash scripting to automate repetitive DevOps tasks. Build internal tools for system health monitoring, logging, and debugging. 
What We’re Looking For (Must-Have Skills)
✅ Version Control: Proficiency in Git (GitLab / GitHub / Bitbucket)
✅ CI/CD Tools: Hands-on experience with AWS CodePipeline, GitHub Actions, or Jenkins
✅ Infrastructure as Code: Strong knowledge of Terraform, Ansible, Chef, or Pulumi
✅ Containerization & Orchestration: Experience with Docker & Kubernetes
✅ Cloud Expertise: Hands-on experience with AWS (preferred) or other cloud providers
✅ Monitoring & Alerting: Familiarity with CloudWatch, Prometheus, Datadog, or Grafana
✅ Scripting Knowledge: Python or Bash for automation
Bonus Skills (Good to Have, Not Mandatory)
➕ AWS Certifications: Solutions Architect, DevOps Engineer, Security, Networking
➕ Experience with Microsoft/Linux/F5 technologies
➕ Hands-on knowledge of database servers
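To give a flavor of the "Scripting & Automation" and "Monitoring & Troubleshooting" duties above, here is a minimal, self-contained sketch of the kind of internal monitoring helper such a role might build. Everything in it is hypothetical: the log line format, the service names, and the 20% alert threshold are illustrative assumptions, not AiSensy's actual tooling.

```python
# Hypothetical internal tool: scan application log lines and flag services
# whose error rate crosses an alert threshold. Log format ("LEVEL service
# message") and the 0.2 threshold are illustrative assumptions.
from collections import Counter

def error_rates(log_lines):
    """Return {service: error_fraction} computed from 'LEVEL service message' lines."""
    totals, errors = Counter(), Counter()
    for line in log_lines:
        parts = line.split(maxsplit=2)
        if len(parts) < 2:
            continue  # skip malformed lines rather than crash the monitor
        level, service = parts[0], parts[1]
        totals[service] += 1
        if level == "ERROR":
            errors[service] += 1
    return {svc: errors[svc] / totals[svc] for svc in totals}

def services_to_alert(log_lines, threshold=0.2):
    """Sorted list of services whose error rate exceeds the threshold."""
    return sorted(s for s, r in error_rates(log_lines).items() if r > threshold)

logs = [
    "INFO api request handled",
    "ERROR api upstream timeout",
    "ERROR api upstream timeout",
    "INFO worker job done",
]
print(services_to_alert(logs))  # api errors on 2 of 3 lines -> exceeds 0.2
```

In a real setup the `print` would be replaced by a pager or chat-webhook call, and the thresholds would live in configuration rather than code.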
Posted 1 month ago
9.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Client: Our client is a global IT services company headquartered in Southborough, Massachusetts, USA. Founded in 1996, with revenue of $1.8B and 35,000+ associates worldwide, it specializes in digital engineering and IT services, helping clients modernize their technology infrastructure, adopt cloud and AI solutions, and accelerate innovation. It partners with major firms in banking, healthcare, telecom, and media. Our client is known for combining deep industry expertise with agile development practices, enabling scalable and cost-effective digital transformation. The company operates in over 50 locations across more than 25 countries, has delivery centers in Asia, Europe, and North America, and is backed by Baring Private Equity Asia. Job Description: Key Skills: Java, Microservices, Spring Boot, Kafka, AWS (ECS Fargate containers, S3, Lambda, Route 53), Terraform. Experience in Java, J2EE, and Spring Boot. Experience in design, Kubernetes, and AWS (EKS, EC2) is needed. Experience with AWS cloud monitoring tools like Datadog, CloudWatch, and Lambda is needed. Experience with XACML authorization policies. Experience with NoSQL and SQL databases such as Cassandra, Aurora, and Oracle. Experience with SOA web services (SOAP as well as RESTful with JSON formats) and with messaging (Kafka). Job Title: Java Developer (Contract) Key Skills: Java, Spring Boot, Kubernetes, AWS (EKS, EC2) Job Locations: Hyderabad, Bengaluru Experience: 6–9 Years Education Qualification: Any Graduation Work Mode: Hybrid Employment Type: Contract Notice Period: Immediate – 10 Days
Posted 1 month ago
5.0 years
0 Lacs
Andhra Pradesh, India
On-site
Required Skills & Experience 5+ years of experience in cloud infrastructure engineering, preferably in AWS. Strong proficiency in Terraform and Infrastructure as Code principles. Experience with EKS, RDS, ElastiCache, and other AWS services. Hands-on experience with GitHub Actions, ArgoCD, and CI/CD pipelines. Familiarity with Datadog for monitoring and alerting. Solid understanding of cloud security and disaster recovery practices. Excellent troubleshooting and communication skills.
Posted 1 month ago
0.0 - 6.0 years
0 Lacs
Pune, Maharashtra
Remote
Job Description
Job summary: Zendesk is seeking a highly skilled Software Engineer (Frontend or Full Stack) to join our Custom Objects, Triggers & Automation team. As part of the Custom Data & Logic group, you will play a pivotal role in developing innovative functionality to support our new Employee Service initiative within the Support domain. The ideal candidate is an accomplished engineer dedicated to creating consistent, usable, reliable, and high-performance user experiences for enterprise customers. What you’ll be doing Contribute to the design, development, testing, and deployment of high-quality, efficient software solutions. Collaborate with team members and cross-functional partners to architect solutions that are reliable, secure, performant, extensible, and scalable. Communicate technical decisions and their implications clearly to team members and stakeholders. Identify and help mitigate potential issues during development, testing, and delivery stages. Prioritize tasks to deliver value to users, the team, and the organization. Participate in application improvement discussions, project initiatives, and feature design processes. Develop reusable code and components for future use, ensuring maintainability and scalability. Review and provide feedback on pull requests to help enhance code quality. Maintain accurate and up-to-date technical documentation. Participate in on-call rotations following an initial training period. Engage actively in agile development ceremonies and contribute to a collaborative work environment. Required Qualifications: 3–6 years of professional experience in designing, developing, testing, and deploying frontend features to production in a stable and reliable manner. Strong experience with JavaScript/TypeScript, with a strong emphasis on React. In-depth knowledge of JavaScript and TypeScript fundamentals, as well as frameworks like React and Redux, and testing libraries including Cypress, Jest, and React Testing Library.
Good knowledge of GraphQL and REST APIs. Proficiency with version control tools and continuous integration/continuous delivery (CI/CD) pipelines. Experience mentoring software engineers and collaborating with Design, Product, and Engineering teams. Strong problem-solving, critical thinking, and collaboration skills. Strong verbal, written, and interpersonal communication skills in English. Preferred Qualifications Knowledge of Ruby on Rails and relational databases such as MySQL/Aurora. Familiarity with micro frontends and federation architectures. Knowledge of AI and machine learning integration in applications is a plus. Tech stack: Backend: Ruby on Rails, Aurora/MySQL, S3, REST, GraphQL. Frontend: JavaScript/TypeScript, React, Redux, React Testing Library, Cypress/Playwright, Jest. DevOps & Monitoring: Datadog, GitHub Actions, Jenkins, CI/CD tools. Cloud & Infrastructure: AWS, Spinnaker, Kubernetes. Please note that Zendesk can only hire candidates who are physically located and plan to work from Karnataka or Maharashtra. Please refer to the location posted on the requisition for where this role is based. Hybrid: In this role, our hybrid experience is designed at the team level to give you a rich onsite experience packed with connection, collaboration, learning, and celebration - while also giving you flexibility to work remotely for part of the week. This role must attend our local office for part of the week. The specific in-office schedule is to be determined by the hiring manager. The intelligent heart of customer experience Zendesk software was built to bring a sense of calm to the chaotic world of customer service. Today we power billions of conversations with brands you know and love. Zendesk believes in offering our people a fulfilling and inclusive experience.
Our hybrid way of working enables us to purposefully come together in person, at one of our many Zendesk offices around the world, to connect, collaborate and learn, whilst also giving our people the flexibility to work remotely for part of the week. Zendesk is an equal opportunity employer, and we’re proud of our ongoing efforts to foster global diversity, equity, & inclusion in the workplace. Individuals seeking employment and employees at Zendesk are considered without regard to race, color, religion, national origin, age, sex, gender, gender identity, gender expression, sexual orientation, marital status, medical condition, ancestry, disability, military or veteran status, or any other characteristic protected by applicable law. We are an AA/EEO/Veterans/Disabled employer. If you are based in the United States and would like more information about your EEO rights under the law, please click here. Zendesk endeavors to make reasonable accommodations for applicants with disabilities and disabled veterans pursuant to applicable federal and state law. If you are an individual with a disability and require a reasonable accommodation to submit this application, complete any pre-employment testing, or otherwise participate in the employee selection process, please send an e-mail to peopleandplaces@zendesk.com with your specific accommodation request.
Posted 1 month ago
6.0 years
0 Lacs
Delhi Cantonment, Delhi, India
On-site
What Makes Us a Great Place To Work We are proud to be consistently recognized as one of the world's best places to work, a champion of diversity and a model of social responsibility. We are a Glassdoor Best Place to Work and we have maintained a spot in the top four since its founding in 2009. We believe that diversity, inclusion and collaboration are key to building extraordinary teams. We hire people with exceptional talents, abilities and potential, then create an environment where you can become the best version of yourself and thrive both professionally and personally. Who You’ll Work With Working alongside our generalist consultants, Bain's Artificial Intelligence, Insights and Solutions (AIS) helps clients across industries solve their biggest problems using our expertise in data science and engineering. Stationed in our global offices, AAG team members hold advanced degrees in computer science, engineering, AI, data science, physics, statistics, mathematics, and other quantitative disciplines, with backgrounds in a variety of fields including tech, data science, and academia. What You’ll Do As a Lead Cloud Engineer you will design and build cloud-based distributed systems that solve complex business challenges for some of the world’s largest companies. You will draw on your deep software engineering, cloud engineering, and DevOps expertise to design and build technology stacks and platform components that enable cross functional AI Engineering teams to create robust, observable and scalable solutions. As a member of a diverse and globally distributed engineering team, you will participate in the full engineering life cycle which includes designing, developing, optimizing, and deploying solutions and infrastructure at the scale of the world’s largest companies. 
Core Responsibilities Cloud solution and distributed systems architecture for full stack AI software and data solutions Implementation, testing and management of Infrastructure as Code (IAC) of cloud-based solutions that may include CI/CD, data integrations, APIs, web and mobile apps, and AI solutions Defining and implementing scalable, observable, manageable, and self-healing cloud-based solutions across AWS, Google Cloud and Azure Collaborate with cross-functional teams, including product managers, data scientists, and other engineers, to define and implement analytics and AI features and functionality that meet business requirements and user needs. Utilize Kubernetes and containerization technologies to deploy, manage, and scale analytics applications in cloud environments, ensuring optimal performance and availability. Develop and maintain APIs and microservices to expose analytics functionality to internal and external consumers, adhering to best practices for API design and documentation. Implement robust security measures to protect sensitive data and ensure compliance with data privacy regulations and organizational policies. Continuously monitor and troubleshoot application performance, identifying and resolving issues that impact system reliability, latency, and user experience. Participate in code reviews and contribute to the establishment and enforcement of coding standards and best practices to ensure high-quality, maintainable code. Stay current with emerging trends and technologies in cloud computing, data analytics, and software engineering, and proactively identify opportunities to enhance the capabilities of the analytics platform. Collaborate closely with and influence business consulting staff and leaders as part of multi-disciplinary teams to assess opportunities and develop analytics solutions for Bain clients across a variety of sectors. 
Influence, educate and directly support the analytics application engineering capabilities of our clients About You Master’s degree in Computer Science, Engineering, or a related technical field. 6+ years of experience, with at least 3 years at Staff level or equivalent. Proven experience as a cloud engineer and software engineer within product engineering or professional services organisations. Experience designing and delivering cloud-based distributed solutions; GCP, AWS, or Azure certifications are a benefit. Experience building infrastructure as code with tools such as Terraform (preferred), CloudFormation, Pulumi, AWS CDK, CDKTF, etc. Deep familiarity with the nuances of the software development lifecycle. One or more configuration management tools: Ansible, Salt, Puppet, or Chef. One or more monitoring and analytics platforms: Grafana, Prometheus, Splunk, SumoLogic, NewRelic, Datadog, CloudWatch, Nagios/Icinga. CI/CD deployment pipelines (e.g. GitHub Actions, Jenkins, Travis CI, GitLab CI, Circle CI). Experience building backend APIs, services and/or integrations with Python. Practitioner experience with Kubernetes through services like GKE, EKS or AKS is a benefit. Ability to work closely with internal and client teams and stakeholders. Use of Git as your main tool for versioning and collaborating. Exposure to LLMs, prompt engineering, LangChain a plus. Experience with workflow orchestration - it doesn’t matter if it’s dbt, Beam, Airflow, Luigi, Metaflow, Kubeflow, or any other. Experience implementing large-scale structured or unstructured databases, orchestration, and container technologies such as Docker or Kubernetes. Strong interpersonal and communication skills, including the ability to explain and discuss complex engineering technicalities with colleagues and clients from other disciplines at their level of cognition. Curiosity, proactivity and critical thinking. Strong computer science fundamentals in data structures, algorithms, automated testing, object-oriented
programming, performance complexity, and implications of computer architecture on software performance. Strong knowledge of designing API interfaces. Knowledge of data architecture, database schema design and database scalability. Agile development methodologies. About Us Bain & Company is a global consultancy that helps the world’s most ambitious change makers define the future. Across 65 cities in 40 countries, we work alongside our clients as one team with a shared ambition to achieve extraordinary results, outperform the competition, and redefine industries. We complement our tailored, integrated expertise with a vibrant ecosystem of digital innovators to deliver better, faster, and more enduring outcomes. Our 10-year commitment to invest more than $1 billion in pro bono services brings our talent, expertise, and insight to organizations tackling today’s urgent challenges in education, racial equity, social justice, economic development, and the environment. We earned a platinum rating from EcoVadis, the leading platform for environmental, social, and ethical performance ratings for global supply chains, putting us in the top 1% of all companies. Since our founding in 1973, we have measured our success by the success of our clients, and we proudly maintain the highest level of client advocacy in the industry.
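The "backend APIs, services and/or integrations with Python" requirement in this role can be illustrated with a minimal sketch. This is a toy WSGI application built only from the standard library; the `/health` route and response shapes are hypothetical, and a production service would of course sit behind a real server and framework.

```python
# Minimal sketch of a Python backend API as a plain WSGI application.
# The route and JSON payloads are hypothetical, for illustration only.
import json
from wsgiref.util import setup_testing_defaults

def app(environ, start_response):
    """Tiny JSON API: GET /health -> {"status": "ok"}; anything else -> 404."""
    path = environ.get("PATH_INFO", "/")
    if path == "/health":
        body = json.dumps({"status": "ok"}).encode()
        start_response("200 OK", [("Content-Type", "application/json")])
    else:
        body = json.dumps({"error": "not found"}).encode()
        start_response("404 Not Found", [("Content-Type", "application/json")])
    return [body]

# Exercise the app in-process with a synthetic WSGI environ (no server needed).
environ = {}
setup_testing_defaults(environ)  # fills in SERVER_NAME, wsgi.* keys, etc.
environ["PATH_INFO"] = "/health"
captured = {}
def start_response(status, headers):
    captured["status"] = status  # record the status line for inspection
print(app(environ, start_response)[0].decode())  # {"status": "ok"}
```

Because WSGI is just a calling convention, the same `app` callable could be served by `wsgiref.simple_server`, Gunicorn, or wrapped by a framework, which is why it makes a compact interview-style sketch of API design.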
Posted 1 month ago
4.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
If you are looking for a career at a dynamic company with a people-first mindset and a deep culture of growth and autonomy, ACV is the right place for you! With competitive compensation packages and learning and development opportunities, ACV has what you need to advance to the next level in your career. We will continue to raise the bar every day by investing in our people and technology to help our customers succeed. We hire people who share our passion, bring innovative ideas to the table, and enjoy a collaborative atmosphere. Who We Are ACV is a technology company that has revolutionized how dealers buy and sell cars online. We are transforming the automotive industry. ACV Auctions Inc. (ACV) has applied innovation and user-designed, data-driven applications and solutions. We are building the most trusted and efficient digital marketplace with data solutions for sourcing, selling and managing used vehicles with transparency and comprehensive insights that were once unimaginable. We are disruptors of the industry and we want you to join us on our journey. Our network of brands includes ACV Auctions, ACV Transportation, ClearCar, MAX Digital and ACV Capital within its Marketplace Products, as well as True360 and Data Services. ACV Auctions in Chennai, India is looking for talented individuals to join our team. As we expand our platform, we're offering a wide range of exciting opportunities across various roles in corporate, operations, and product and technology. Our global product and technology organization spans product management, engineering, data science, machine learning, DevOps and program leadership. What unites us is a deep sense of customer centricity, calm persistence in solving hard problems, and a shared passion for innovation. If you're looking to grow, lead, and contribute to something larger than yourself, we'd love to have you on this journey. Let's build something extraordinary together. Join us in shaping the future of automotive!
At ACV we focus on the Health, Physical, Financial, Social and Emotional Wellness of our Teammates, and to support this we offer industry-leading benefits and wellness programs. Who We Are Looking For The data engineering team's mission is to provide high availability and high resiliency as a core service to our ACV applications. The team is responsible for ETLs using different ingestion and transformation techniques. We are responsible for a range of critical tasks aimed at ensuring smooth and efficient functioning and high availability of ACV's data platforms. We are a crucial bridge between Infrastructure Operations, Data Infrastructure, Analytics, and Development teams, providing valuable feedback and insights to continuously improve platform reliability, functionality, and overall performance. We are seeking a talented data professional as a Senior Data Engineer to join our Data Engineering team. This role requires a strong focus and experience in software development, multi-cloud based technologies, and in-memory data stores, and a strong desire to learn complex systems and new technologies. It requires a sound foundation in database and infrastructure architecture, deep technical knowledge, software development, excellent communication skills, and an action-based philosophy to solve hard software engineering problems. What You Will Do As a Data Engineer at ACV Auctions you HAVE FUN!! You will design, develop, write, and modify code. You will be responsible for development of ETLs, application architecture, and optimizing databases & SQL queries. You will work alongside other data engineers and data scientists in the design and development of solutions to ACV's most complex software problems. It is expected that you will be able to operate in a high-performing team, that you can balance high-quality delivery with customer focus, and that you will have a record of delivering and guiding team members in a fast-paced environment.
Design, develop, and maintain scalable ETL pipelines using Python and SQL to ingest, process, and transform data from diverse sources. Write clean, efficient, and well-documented code in Python and SQL. Utilize Git for version control and collaborate effectively with other engineers. Implement and manage data orchestration workflows using industry-standard orchestration tools (e.g., Apache Airflow, Prefect). Apply a strong understanding of major data structures (arrays, dictionaries, strings, trees, nodes, graphs, linked lists) to optimize data processing and storage. Support multi-cloud application development. Contribute, influence, and set standards for all technical aspects of a product or service including, but not limited to, testing, debugging, performance, and languages. Support development stages for application development and data science teams, with an emphasis on MySQL and Postgres database development. Influence company-wide engineering standards for tooling, languages, and build systems. Leverage monitoring tools to ensure high performance and availability; work with operations and engineering to improve as required. Ensure that data development meets company standards for readability, reliability, and performance. Collaborate with internal teams on transactional and analytical schema design. Conduct code reviews, develop high-quality documentation, and build robust test suites. Respond to and troubleshoot highly complex problems quickly, efficiently, and effectively. Mentor junior data engineers. Assist/lead technical discussions and innovation, including engineering tech talks. Assist in engineering innovations including discovery of new technologies, implementation strategies, and architectural improvements. Participate in on-call rotation. What You Will Need Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent work experience). Ability to read, write, speak, and understand English.
4+ years of experience programming in Python. 3+ years of experience with ETL workflow implementation (Airflow, Python). 3+ years of work with continuous integration and build tools. 3+ years of experience with cloud platforms, preferably AWS or GCP. Knowledge of database architecture, infrastructure, performance tuning, and optimization techniques. Deep knowledge of day-to-day tools and how they work, including deployments, k8s, monitoring systems, and testing tools. Proficient in databases (RDB) and SQL, and able to contribute to schema definitions. Self-sufficient debugger who can identify and solve complex problems in code. Deep understanding of major data structures (arrays, dictionaries, strings). Experience with Domain Driven Design. Experience with containers and Kubernetes. Experience with database monitoring and diagnostic tools, preferably Datadog. Hands-on skills and the ability to drill deep into complex system design and implementation. Proficiency in SQL query writing and optimization. Experience with database security principles and best practices. Experience with in-memory data processing. Experience working with data warehousing concepts and technologies, including dimensional modeling and ETL frameworks. Strong communication and collaboration skills, with the ability to work effectively in a fast-paced global team environment. Experience working with: SQL data-layer development and OLTP schema design. Using and integrating with cloud services, specifically: AWS RDS, Aurora, S3, GCP. GitHub, Jenkins, Python, Docker, Kubernetes. Nice To Have Qualifications Experience with Airflow, Docker, Visual Studio, PyCharm, Redis, Kubernetes, Fivetran, Spark, Dataflow, Dataproc, EMR. Experience with database monitoring and diagnostic tools, preferably Datadog. Hands-on experience with Kafka or other event streaming technologies.
Hands-on experience with micro-service architecture Our Values Trust & Transparency | People First | Positive Experiences | Calm Persistence | Never Settling At ACV, we are committed to an inclusive culture in which every individual is welcomed and empowered to celebrate their true selves. We achieve this by fostering a work environment of acceptance and understanding that is free from discrimination. ACV is committed to being an equal opportunity employer regardless of sex, race, creed, color, religion, marital status, national origin, age, pregnancy, sexual orientation, gender, gender identity, gender expression, genetic information, disability, military status, status as a veteran, or any other protected characteristic. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. If you have a disability or special need that requires reasonable accommodation, please let us know. Data Processing Consent When you apply to a job on this site, the personal data contained in your application will be collected by ACV Auctions Inc. and/or one of its subsidiaries ("ACV Auctions"). By clicking "apply", you hereby provide your consent to ACV Auctions and/or its authorized agents to collect and process your personal data for purpose of your recruitment at ACV Auctions and processing your job application. ACV Auctions may use services provided by a third party service provider to help manage its recruitment and hiring process. For more information about how your personal data will be processed by ACV Auctions and any rights you may have, please review ACV Auctions' candidate privacy notice here. If you have any questions about our privacy practices, please contact datasubjectrights@acvauctions.com.
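The ETL responsibilities this posting describes (ingest, transform, load with Python and SQL) can be sketched in miniature. This hedged example uses the stdlib sqlite3 module in place of MySQL/Postgres; the table names, columns, and cleaning rules are hypothetical and stand in for whatever a real pipeline would define.

```python
# Illustrative extract-transform-load step. Uses stdlib sqlite3 as a stand-in
# for MySQL/Postgres; the vehicles_raw/vehicles_clean schema is hypothetical.
import sqlite3

def run_etl(conn):
    """Copy raw vehicle rows into a cleaned table, normalizing VINs and makes."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS vehicles_clean "
        "(vin TEXT PRIMARY KEY, make TEXT, price REAL)"
    )
    rows = conn.execute("SELECT vin, make, price FROM vehicles_raw").fetchall()
    # Transform: trim/uppercase VINs, title-case makes, drop rows without a price.
    cleaned = [
        (vin.strip().upper(), make.strip().title(), float(price))
        for vin, make, price in rows
        if price is not None
    ]
    # INSERT OR REPLACE keys on the VIN primary key, so re-runs are idempotent.
    conn.executemany("INSERT OR REPLACE INTO vehicles_clean VALUES (?, ?, ?)", cleaned)
    conn.commit()
    return len(cleaned)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE vehicles_raw (vin TEXT, make TEXT, price REAL)")
conn.executemany(
    "INSERT INTO vehicles_raw VALUES (?, ?, ?)",
    [(" 1hgcm82633a ", " honda ", 8500.0), ("5xyz123", "KIA", None)],
)
print(run_etl(conn))  # 1 row loaded; the NULL-price row is dropped
```

In an orchestrated pipeline of the kind the posting mentions, a function like `run_etl` would be one Airflow task, with extraction and loading pointed at real source and warehouse connections instead of an in-memory database.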
Posted 1 month ago