5.0 - 7.0 years
15 - 27 Lacs
Bangalore Rural, Bengaluru
Work from Office
DevOps, Site Reliability Engineering, cloud platforms, GCP, Infrastructure as Code tools (Terraform, Ansible, CloudFormation), Prometheus, Grafana, ELK stack, Python, Bash, Go, Istio, Linkerd
Posted 2 months ago
1.0 - 5.0 years
8 - 15 Lacs
Bengaluru
Work from Office
Junior DevOps Engineer / DevOps Engineer
Location: Bengaluru South, Karnataka, India
Experience: 1.5–3 Years
Compensation: 8–15 LPA
Employment Type: Full-Time | Work From Office Only

Are you an aspiring DevOps professional ready to work on a transformative platform? Join a purpose-led team building India's most disruptive ecosystem at the intersection of technology, property, and sustainability. This role is ideal for engineers who are eager to learn, automate, and contribute to building reliable, scalable, and secure infrastructure.

Key Responsibilities
- Assist in designing, implementing, and managing CI/CD pipelines using tools like Jenkins or GitLab CI to automate build, test, and deployment processes.
- Support the deployment and management of cloud infrastructure, primarily on AWS, with exposure to Azure or GCP.
- Contribute to infrastructure-as-code practices using Terraform, CloudFormation, or Ansible.
- Participate in maintaining and operating containerized applications using Docker and Kubernetes.
- Implement and manage monitoring and logging solutions using Grafana, Loki, Prometheus, or the ELK stack.
- Collaborate with engineering and QA teams to streamline release pipelines, ensuring high availability and performance.
- Develop basic automation scripts in Python or Bash to optimize and streamline operational tasks (see the sketch below).
- Gain exposure to serverless and event-driven architectures under guidance from senior engineers.
- Troubleshoot infrastructure issues and contribute to system security and performance optimization.

Requirements
- 1.5 to 3 years of experience in DevOps, SRE, or related infrastructure roles.
- Solid understanding of cloud environments (AWS preferred; Azure/GCP a plus).
- Basic to intermediate scripting knowledge in Python or Bash.
- Familiarity with CI/CD concepts and tools such as Jenkins, GitLab CI, etc.
- Working knowledge of Docker and introductory experience with Kubernetes.
- Exposure to monitoring and logging stacks (Grafana, Loki, Prometheus, ELK).
- Understanding of infrastructure as code using tools like Terraform or Ansible.
- Familiarity with networking, DNS, firewalls, and system security practices.
- Strong problem-solving skills and a learning mindset.

Preferred Qualifications
- Certifications in AWS, Azure, or GCP.
- Exposure to serverless architectures and event-driven systems.
- Experience with additional monitoring tools or scripting languages.
- Familiarity with geospatial systems, virtual mapping, or sustainability-oriented platforms.
- Passion for eco-conscious technology and impact-driven development.

Why You Should Join
- Contribute to a next-gen PropTech platform promoting sustainable and inclusive land ownership.
- Work closely with senior engineers committed to mentorship and ecosystem building.
- Join a team where your ideas are valued, your skills are sharpened, and your work has real-world impact.
- Be part of a vibrant, office-first culture that encourages innovation, collaboration, and growth.
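The posting above mentions basic Python or Bash automation scripts for operational tasks. As a hedged illustration only (the endpoint URL, retry count, and exit-code convention are assumptions, not details from the posting), a post-deployment health check of that kind might look like:

```python
"""Minimal post-deployment health check, the kind of basic automation
script a junior DevOps role like this typically owns. The URL, retry
count, and exit-code convention are illustrative assumptions."""
import sys
import time
import requests

HEALTH_URL = "https://staging.example.com/healthz"  # hypothetical endpoint
RETRIES = 5
DELAY_SECONDS = 10

def service_is_healthy(url: str) -> bool:
    for attempt in range(1, RETRIES + 1):
        try:
            resp = requests.get(url, timeout=5)
            if resp.status_code == 200:
                print(f"attempt {attempt}: healthy ({resp.status_code})")
                return True
            print(f"attempt {attempt}: got HTTP {resp.status_code}, retrying")
        except requests.RequestException as exc:
            print(f"attempt {attempt}: request failed ({exc}), retrying")
        time.sleep(DELAY_SECONDS)
    return False

if __name__ == "__main__":
    # A non-zero exit fails whichever CI/CD stage calls this script.
    sys.exit(0 if service_is_healthy(HEALTH_URL) else 1)
```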
Posted 2 months ago
8.0 - 13.0 years
10 - 20 Lacs
Hyderabad, Bengaluru, Thiruvananthapuram
Work from Office
Job Requirements
Software development and enhancements on applications primarily using Java/JEE and the ELK Stack (Elasticsearch, Logstash, and Kibana). The team will be making changes to the core product related to data analysis and reporting.

Key Responsibilities
- Software Enhancement: Engage in application enhancement using Java/J2EE and the ELK stack (Elasticsearch, Logstash, and Kibana) to support data analysis and reporting. Understand the current architecture and design of various modules, and identify modules/functions to be modified. Implement solutions focusing on reuse and industry standards at a program, enterprise, or operational scope.
- Leadership: Lead design/development efforts across multiple functions, ensuring adherence to established architecture, design patterns, policies, standards, and best practices.
- Design: Generate detailed designs of enhancements and participate in code reviews. Expected to be a self-starter who can implement very complex systems with no supervision.
- Team Working: Work closely with the core development teams, coordinate with them to understand the requirements, and take guidance to complete development tasks. Communicate and work effectively with all team members.

Core Tasks
- Perform software development work on applications
- Participate in and lead efforts in requirements gathering, estimating, and system analysis
- Generate system designs, both at high and low levels
- Participate in code reviews
- Provide the required support to post-development phases of projects, such as acceptance testing and integration with other software applications
- Liaise with members of other teams, both internal and external
- Provide technical leadership

What you need to succeed
- Degree in Software Engineering, Computer Science, or an equivalent Engineering degree
- Substantial experience in development of Warehouse Management Systems
- 8+ years of experience in Java application development
- Experience in design and integration of applications across multiple enterprise and third-party software systems
- Proficiency in Core Java, J2EE, OO design, and Java architectures
- Experience in Elasticsearch, Logstash, and Kibana
- Experience in writing unit tests and integration tests
- Experience with DevOps tools like Git, Maven, SSH
- Experience in Agile development practices
- Strong verbal and written communication skills, and ability to work well across teams
- Strong organizational skills; ability to work with all levels of management
- Good to have: Experience in data analysis
Posted 2 months ago
5.0 - 8.0 years
15 - 20 Lacs
Chennai, Bengaluru
Work from Office
Key Responsibilities:
- Design, implementation, and maintenance of technology infrastructure, including the software development platform, servers, and applications.
- Design continuous integration and delivery (CI/CD) pipelines to allow teams to collaborate on building, testing, and releasing new features.
- Participate in the day-to-day handling of incidents, problems, and issues relating to the Platform Engineering tooling and processes.
- Execute the maintenance schedule for platform engineering tooling and processes, aligning with the company goals and objectives.
- Be part of the team covering the required support hours for tools and services.
- Collaborate with cross-functional teams when issues arise on tooling and processes.
- Enforce best practices for automation, deployment, monitoring, and operations to improve efficiency and reliability.
- Drive innovation and continuous improvement within the platform engineering team, fostering a culture of learning, experimentation, and knowledge sharing.
- Foster a collaborative and inclusive work environment where team members feel empowered to contribute ideas, challenge assumptions, and drive positive change.
- Keep abreast of the latest industry trends and best practices in software delivery.

Requirements:
- Knowledge of modern, end-to-end systems development life cycles
- 5+ years of experience as a DevOps Engineer
- Good understanding of CI/CD concepts across the SDLC
- Good understanding of Infrastructure as Code (Terraform Cloud)
- Good understanding of building and maintaining an internal developer platform (IDP)
- Experience working with containerization (Docker/Kubernetes), including Kubernetes on-premise and/or in the cloud
- Good understanding of: JFrog Platform, GitHub Enterprise, TeamCity, Octopus Deploy, Chef, SonarCloud, ELK, Linux OS (Red Hat, CentOS), identity management (AD, ADFS, AAD), storage platforms (Pure Flash Array, Flash Blades, Cohesity, MDS Fabric switches, HP tape libraries), enterprise server virtualization (VMware 6.7/7.0, vCenter, host profiles, auto deployment), Cisco UCS platforms (blade and standalone C-Series), HP blade, Citrix NetScaler and cloud apps, cloud IaaS and PaaS (Azure Compute, Azure VMs, AAD, Exchange Online, SharePoint Online & M365), application delivery (Citrix, Wide Area Optimizers), monitoring solutions (AppDynamics, Grafana, ControlUp, SolarWinds, SCOM), and .NET, Java, and MSSQL database development
- Excellent written and verbal communication skills
- Broad familiarity with delivery practices and management methodologies, including ITIL, and proficiency in ITIL tools such as ServiceNow
- Hands-on experience leveraging Agile methodologies to achieve goals and outcomes, including collaborative work in Agile teams
- Proven experience in development, infrastructure, IT operations, platform engineering, DevOps, or a related role, with a track record of successfully leading high-performing teams
- Proven track record of driving innovation and continuous improvement, with a passion for staying current with emerging technologies and industry trends
Posted 2 months ago
8.0 - 12.0 years
35 - 60 Lacs
Pune
Work from Office
About the Role: We are seeking a skilled Site Reliability Engineer (SRE) / DevOps Engineer to join our infrastructure team. In this role, you will design, build, and maintain scalable infrastructure, CI/CD pipelines, and observability systems to ensure high availability, reliability, and security of our services. You will work cross-functionally with development, QA, and security teams to automate operations, reduce toil, and enforce best practices in cloud-native environments.

Key Responsibilities:
- Design, implement, and manage cloud infrastructure (GCP/AWS/Azure) using Infrastructure as Code (Terraform).
- Maintain and improve CI/CD pipelines using tools like CircleCI, GitLab CI, or ArgoCD.
- Ensure high availability and performance of services using Kubernetes (GKE/EKS/AKS) and container orchestration.
- Implement monitoring, logging, and alerting using Prometheus, Grafana, ELK, or similar tools (see the sketch below).
- Collaborate with developers to optimize application performance and deployment processes.
- Manage and automate security controls such as IAM, RBAC, network policies, and vulnerability scanning.

Basic Qualifications:
- Strong knowledge of Linux.
- Experience with scripting languages such as Python, Bash, or Go.
- Experience with cloud platforms (GCP preferred; AWS or Azure acceptable).
- Proficient in Kubernetes operations, including Helm, operators, and service meshes.
- Experience with Infrastructure as Code (Terraform).
- Solid experience with CI/CD pipelines (GitLab CI, CircleCI, ArgoCD, or similar).
- Familiarity with monitoring and observability tools (Prometheus, Grafana, ELK, etc.).
- Knowledge of networking concepts (TCP/IP, DNS, load balancers, firewalls).

Preferred Qualifications
- Experience with advanced networking solutions.
- Familiarity with SRE principles such as SLOs, SLIs, and error budgets.
- Exposure to multi-cluster or hybrid-cloud environments.
- Knowledge of service meshes (Istio).
- Experience participating in incident management and postmortem processes.
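The monitoring responsibilities above mention Prometheus. Purely as an illustrative sketch (the metric names and the queue-depth source are assumptions, not details from the posting), exposing a custom application metric for Prometheus to scrape with the official prometheus_client library could look like this:

```python
"""Expose custom metrics on an HTTP endpoint that Prometheus can scrape.
Metric names and the queue-depth stand-in are illustrative assumptions."""
import random
import time
from prometheus_client import Counter, Gauge, start_http_server

QUEUE_DEPTH = Gauge("worker_queue_depth", "Jobs currently waiting in the queue")
JOBS_PROCESSED = Counter("worker_jobs_processed_total", "Jobs processed since start")

def poll_queue_depth() -> int:
    # Stand-in for a real check (e.g., a Redis LLEN or an SQS attribute call).
    return random.randint(0, 50)

if __name__ == "__main__":
    start_http_server(8000)  # metrics served at http://localhost:8000/metrics
    while True:
        QUEUE_DEPTH.set(poll_queue_depth())
        JOBS_PROCESSED.inc()
        time.sleep(15)
```

A Prometheus server would then scrape port 8000 and alert rules or Grafana panels could be built on the resulting series.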
Posted 2 months ago
3.0 - 5.0 years
7 - 10 Lacs
Kolkata
Hybrid
Intermediate understanding of Docker & Kubernetes. Fundamental understanding of Python & Java. Experience working with Ansible. Good knowledge of shell scripting. Experience working on Linux-based architecture, RDBMS, Spark, Elasticsearch, NoSQL.
Posted 2 months ago
5.0 - 10.0 years
6 - 16 Lacs
Chennai
Work from Office
Roles and responsibilities:

Design & Implementation:
- Understand the customer requirement; architect, design, and implement scalable ELK solutions.
- Develop design documentation (HLD and LLD).
- Install ELK components and configure them as per best practices.

ELK Operations:
- Lead log onboarding activities.
- Configure Logstash, Filebeat, Metricbeat, Elastic Agent, etc., to collect and process data efficiently.
- Configure Elasticsearch components to efficiently store various kinds of data, optimizing performance and ensuring high availability (see the sketch below).
- Configure Kibana visualizations as per requirements.
- Configuration management and user management activities.
- Build integrations with upstream and downstream applications as necessary.
- Platform troubleshooting activities; work with the OEM to fix product-level issues.
- Continuously document lessons learnt as part of troubleshooting activities.
- Health monitoring.

Preferred Qualifications
- 5+ years of experience deploying and managing large-scale ELK solutions for enterprise customers.
- Experience working in SOC analysis / incident response teams.
- Strong understanding of cybersecurity technologies, protocols, and applications.
- ELK certifications.
- Knowledge of Python scripting, Docker, Kubernetes, and Ansible for runbook automation.
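The storage-configuration work described above is commonly done with index templates and index lifecycle management. A minimal sketch, assuming a local unauthenticated cluster and made-up index names, rollover sizes, and retention periods, of creating both through Elasticsearch's REST API:

```python
"""Create an ILM policy and a matching index template via the Elasticsearch
REST API. Cluster URL, names, and thresholds are placeholder assumptions."""
import requests

ES = "http://localhost:9200"  # assumed local cluster, no auth

ilm_policy = {
    "policy": {
        "phases": {
            "hot": {"actions": {"rollover": {"max_size": "30gb", "max_age": "1d"}}},
            "delete": {"min_age": "30d", "actions": {"delete": {}}},
        }
    }
}
requests.put(f"{ES}/_ilm/policy/app-logs-policy", json=ilm_policy).raise_for_status()

index_template = {
    "index_patterns": ["app-logs-*"],
    "template": {
        "settings": {
            "number_of_shards": 1,
            "index.lifecycle.name": "app-logs-policy",
            "index.lifecycle.rollover_alias": "app-logs",
        },
        "mappings": {
            "properties": {
                "@timestamp": {"type": "date"},
                "level": {"type": "keyword"},
            }
        },
    },
}
requests.put(f"{ES}/_index_template/app-logs-template", json=index_template).raise_for_status()
print("ILM policy and index template created")
```

Real rollover and retention values depend on data volume and compliance requirements; the ones above are purely illustrative.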
Posted 2 months ago
8.0 - 12.0 years
25 - 40 Lacs
Kolkata, Hyderabad, Bengaluru
Hybrid
Job Title: ELK Developer
Experience Required: 8–12 Years
Location: Hyderabad, Bangalore (preferred); also open to Chennai, Mumbai, Pune, Kolkata, Gurgaon
Work Mode: On-site / Hybrid

Job Summary:
We are seeking a highly experienced ELK Developer with a strong background in designing and implementing monitoring, logging, and visualization solutions using the ELK Stack (Elasticsearch, Logstash, Kibana). The ideal candidate should also have hands-on expertise with Linux/Solaris administration, scripting for automation, and performance testing. Additional experience with modern DevOps tools and monitoring platforms like Grafana and Prometheus is a plus.

Primary Responsibilities:
- Design, implement, and maintain solutions using the ELK Stack: Elasticsearch, Logstash, Kibana, and Beats.
- Create dashboards and visualizations in Kibana to support real-time data analysis and operational monitoring.
- Define and apply indexing strategies, configure log forwarding, and manage log parsing with regex (see the sketch below).
- Set up and manage data aggregation, pipeline testing, and performance evaluation.
- Develop and maintain custom rules for alerting, anomaly detection, and reporting.
- Troubleshoot log ingestion, parsing, and query performance issues.
- Automate jobs and notifications through scripts (Bash, PowerShell, Python, etc.).
- Perform Linux/Solaris system administration tasks: monitor services and system health, manage memory and disk usage, schedule jobs, update packages, and maintain uptime.
- Work closely with DevOps, Infrastructure, and Application teams to ensure system integrity and availability.

Must-Have Skills:
- Strong hands-on experience with the ELK Stack (Elasticsearch, Logstash, Kibana).
- Proficient in regex, SQL, JSON, YAML, XML.
- Deep understanding of indexing, aggregation, and log parsing.
- Experience with AppDynamics and related observability platforms.
- Proven skills in Linux/Solaris system administration.
- Proficiency in scripting (Shell, Python, PowerShell, Bash) for log handling, jobs, and notifications.
- Experience in performance testing and optimization.

Good-to-Have / Secondary Skills:
- Experience with Grafana and Prometheus for metrics and visualization.
- Knowledge of web and middleware components: HTTP server, HAProxy, Keepalived, Tomcat, NGINX.
- Familiarity with DevOps tools: Git, Bitbucket, GitHub, Helm charts, Terraform, JMeter.
- Programming/scripting experience in Perl, Java, JavaScript.
- Hands-on with CI/CD tools: TeamCity, Octopus, Nexus.
- Working knowledge of Agile methodologies and JIRA.

Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
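The role above calls out log parsing with regex, which is essentially what a Logstash grok filter does. A small Python sketch of the same idea, using a hypothetical application log format rather than any format named in the posting:

```python
"""Parse a log line into structured fields with named regex groups,
mirroring what a Logstash grok pattern would produce. The log format
shown here is a made-up example."""
import re
from typing import Optional

LOG_PATTERN = re.compile(
    r"(?P<timestamp>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2})\s+"
    r"(?P<level>INFO|WARN|ERROR)\s+"
    r"\[(?P<service>[\w.-]+)\]\s+"
    r"(?P<message>.*)"
)

def parse_line(line: str) -> Optional[dict]:
    match = LOG_PATTERN.match(line)
    return match.groupdict() if match else None

sample = "2024-05-01T10:15:32 ERROR [payments-api] timeout calling card gateway"
print(parse_line(sample))
# {'timestamp': '2024-05-01T10:15:32', 'level': 'ERROR',
#  'service': 'payments-api', 'message': 'timeout calling card gateway'}
```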
Posted 2 months ago
10.0 - 13.0 years
35 - 50 Lacs
Chennai
Work from Office
Cognizant Hiring Payments BA!!!
Location: Chennai, Bangalore, Hyderabad

Job Summary
At least 10 years of experience in the BA role, including a couple of years in a BA lead role; good domain knowledge of SWIFT/ISO 20022 payments; stakeholder management; Java, Microservices, and Spring Boot.

Technical Knowledge: Java / Spring Boot, Kafka Streams, REST, JSON, Netflix microservices suite (Zuul, Eureka, Hystrix, etc.), 12-Factor Apps, Oracle, PostgreSQL, Cassandra & ELK. Ability to work with geographically dispersed and highly varied stakeholders.

Responsibilities

Strategy
- Develop the strategic direction and roadmap for our flagship payments platform, aligning with the business strategy, tech and ops strategy, and investment priorities.
- Tap into the latest industry trends and innovative products & solutions to deliver effective and faster product capabilities.
- Support Cash Management Operations, leveraging technology to streamline processes, enhance productivity, reduce risk, and improve controls.

Business
- Work hand in hand with the Payments business, taking product programs from investment decisions into design specifications, solutioning, development, implementation, and hand-over to operations, securing support and collaboration from other teams.
- Ensure delivery to the business within time, cost, and high-quality constraints.
- Support respective businesses in growing return on investment, commercialization of capabilities, bid teams, monitoring of usage, improving client experience, enhancing operations, and addressing defects & continuous improvement of systems.
- Thrive in an ecosystem of innovation and enable business through technology.

Processes
- Responsible for the end-to-end delivery of the technology portfolio comprising key business product areas such as Payments, Clearing, etc.
- Own technology delivery of projects and programs across global markets that (a) develop/enhance core product capabilities, (b) ensure compliance with regulatory mandates, (c) support operational improvements, process efficiencies, and the zero-touch agenda, and (d) build the payments platform to align with the latest technology and architecture trends, with improved stability and scale.
- Interface with business & technology leaders of other systems for collaborative delivery.
Posted 2 months ago
6.0 - 11.0 years
8 - 13 Lacs
Hyderabad
Work from Office
Veeva Systems is a mission-driven organization and pioneer in industry cloud, helping life sciences companies bring therapies to patients faster. As one of the fastest-growing SaaS companies in history, we surpassed $2B in revenue in our last fiscal year, with extensive growth potential ahead.

At the heart of Veeva are our values: Do the Right Thing, Customer Success, Employee Success, and Speed. We're not just any public company: we made history in 2021 by becoming a public benefit corporation (PBC), legally bound to balancing the interests of customers, employees, society, and investors. As a Work Anywhere company, we support your flexibility to work from home or in the office, so you can thrive in your ideal environment. Join us in transforming the life sciences industry, committed to making a positive impact on its customers, employees, and communities.

The Role
Do you want to be part of an engineering team that strives to build simple solutions to complex problems? Veeva is looking for a passionate engineering manager for the Vault Automation Platform & Tools team. This is a great opportunity to put your creativity and problem-solving skills to the test. You would be working as part of a team that constantly strives to turn innovative ideas into reality using bleeding-edge technology and a bouquet of programming languages.

What You'll Do
- Responsible for the timely and quality delivery of projects related to the Automation platform.
- Contribute to the operational excellence of Veeva Hyderabad.
- Manage a team of engineers and ensure successful and timely deliverables.
- Act as the single point of contact for QA management across Veeva's global offices with respect to tools and frameworks.
- Ensure communication is quick and timely across time zones; help the engineering team in Hyderabad integrate well into the global team.
- Facilitate collaboration between various automation teams and key stakeholders to ensure effective issue resolution and technical solutions.
- Work closely with the engineers to identify opportunities to simplify and scale up the test automation architecture.
- Support engineering sprints for product releases (planning, grooming, etc.).
- Contribute to the hiring and onboarding efforts of Veeva Hyderabad.
- Collaborate and contribute to a state-of-the-art automation framework and cloud-based test infrastructure that can operate at scale with 24/7 availability.
- Participate in code reviews and promote good coding practices.

Requirements
- Experience (at least 2+ years) managing a team of engineers and leads involved in test automation and development projects.
- Total experience of 12+ years.
- Lead by example: be a hands-on and technical leader with effective communication skills.
- Experience in agile and scrum processes and understanding of the scrum master role.
- Work with a team of 5 to 10 members to ensure quality deliverables.
- Experience with KPIs that measure the success of the team and the projects.
- Experience in creating, documenting, and refining the software engineering process.
- Bachelor's or Master's degree in Computer Science or a related field, or relevant experience building tools and/or test automation frameworks.
- Solid programming skills in Java.
- Curious to learn and adapt to a fast-paced environment.
- Excellent written and verbal communication skills.

Nice to Have
- Experience with the following tools/technologies: Test Automation: TestNG/Cucumber; Infrastructure: AWS; Reporting: ELK Stack; Orchestration: Jenkins; Build: Maven; Other Tools: GitLab/Jira.

Veeva's headquarters is located in the San Francisco Bay Area, with offices in more than 15 countries around the world. Veeva is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, sex, sexual orientation, gender identity or expression, religion, national origin or ancestry, age, disability, marital status, pregnancy, protected veteran status, protected genetic information, political affiliation, or any other characteristics protected by local laws, regulations, or ordinances.
Posted 2 months ago
2.0 - 6.0 years
4 - 8 Lacs
Pune
Work from Office
About The Role
The Ad Server and RTB Production Infrastructure is pivotal to ensuring our software applications' reliability, availability, and overall excellence. As an SRE Engineer, you will be responsible for the Ad Server and RTB Production Infrastructure. Your essential duties encompass ensuring the seamless operation and optimal performance of large-scale distributed software applications. Your role revolves around maintaining a robust and high-performing environment, contributing to the reliability of our services, and innovating solutions to guarantee 24/7 availability. By leveraging your technical expertise and dedication, you contribute to maintaining a seamless experience for our users while upholding the highest standards of operational excellence. Your specific responsibilities include:

What You'll Do

Operational Support
- Be a primary point of contact for operational support of multiple large-scale distributed software applications in the Ad Server environment.
- Monitor availability of applications, promptly detect anomalies, analyze the impact, debug problems in production, and follow up for resolution by working closely with the engineering team.
- Maintain services once they are live by measuring and monitoring availability, latency, and overall system health.
- Diligently work with the engineering team to expedite the resolution of incidents and ensure a swift return to normal operations.
- Be innovative in building dashboards, adding metrics, writing automation scripts to reduce operational toil, and streamlining processes to enhance system reliability and stability.
- Design and construct software and systems to effectively manage the Ad Serving platform, its underlying infrastructure, and applications.

On-Call Availability and Support
- Work in shifts to provide continuous on-call support for the production systems and resolve issues on your own using predefined handbooks.
- Show a sense of urgency for high-priority issues and arrange war rooms to resolve the problems.
- Provide timely updates for high-priority issues and do handovers when a problem needs to be worked 24/7.
- Conduct post-incident reviews to identify root causes, recommend preventive measures, and contribute to a culture of learning and improvement.

We'd Love for You to Have
- Three-plus years' experience in software development.
- Ability to program in languages like C or C++ and scripting languages like Shell or Python.
- Good to have: prior experience in technical engineering.
- A proactive approach to identifying problems, performance bottlenecks, and areas of improvement.
- Must-know areas: networking, databases (MySQL), and Linux system concepts; debugging and analyzing core dumps.
- Hands-on experience with monitoring and observability tools like Grafana, Nagios, Influx, ELK, etc.
- Familiarity with orchestration tools like Docker and Grafana, and incident management systems like Zenduty.
- Excellent communication and collaboration skills, with the ability to work effectively across teams.
- A self-motivated and positive mindset when examining incidents.
- Excellent interpersonal, written, and verbal communication skills.
- Bachelor's degree in Engineering (CS/IT) or an equivalent degree from a well-known institute/university.

Additional Information
Return to Office: PubMatic employees throughout the globe have returned to our offices via a hybrid work schedule (3 days in office and 2 days working remotely) that is intended to maximize collaboration, innovation, and productivity among teams and across functions.

Benefits: Our benefits package includes the best of what leading organizations provide, such as paternity/maternity leave, healthcare insurance, and broadband reimbursement. As well, when we're back in the office, we all benefit from a kitchen loaded with healthy snacks and drinks, catered lunches, and much more!

Diversity and Inclusion: PubMatic is proud to be an equal opportunity employer; we don't just value diversity, we promote and celebrate it. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.

About PubMatic
PubMatic is one of the world's leading scaled digital advertising platforms, offering more transparent advertising solutions to publishers, media buyers, commerce companies, and data owners, allowing them to harness the power and potential of the open internet to drive better business outcomes. Founded in 2006 with the vision that data-driven decisioning would be the future of digital advertising, we enable content creators to run a more profitable advertising business, which in turn allows them to invest back into the multi-screen and multi-format content that consumers demand.
Posted 2 months ago
4.0 - 9.0 years
3 - 8 Lacs
Noida, Gurugram, Delhi / NCR
Work from Office
Role & responsibilities

Site Reliability Engineer

Requirements: We are seeking a proactive and technically strong Site Reliability Engineer (SRE) to ensure the stability, performance, and scalability of our Data Engineering Platform. You will work on cutting-edge technologies including Cloudera Hadoop, Spark, Airflow, NiFi, and Kubernetes, ensuring high availability and driving automation to support massive-scale data workloads, especially in the telecom domain.

Key Responsibilities
• Ensure platform uptime and application health as per SLOs/KPIs
• Monitor infrastructure and applications using ELK, Prometheus, Zabbix, etc.
• Debug and resolve complex production issues, performing root cause analysis
• Automate routine tasks and implement self-healing systems (see the sketch below)
• Design and maintain dashboards, alerts, and operational playbooks
• Participate in incident management, problem resolution, and RCA documentation
• Own and update SOPs for repeatable processes
• Collaborate with L3 and Product teams for deeper issue resolution
• Support and guide the L1 operations team
• Conduct periodic system maintenance and performance tuning
• Respond to user data requests and ensure timely resolution
• Address and mitigate security vulnerabilities and compliance issues

Technical Skillset
• Hands-on with Spark, Hive, Cloudera Hadoop, Kafka, Ranger
• Strong Linux fundamentals and scripting (Python, Shell)
• Experience with Apache NiFi, Airflow, Yarn, and Zookeeper
• Proficient in monitoring and observability tools: ELK Stack, Prometheus, Loki
• Working knowledge of Kubernetes, Docker, Jenkins CI/CD pipelines
• Strong SQL skills (Oracle/Exadata preferred)
• Familiarity with DataHub, DataMesh, and security best practices is a plus
• Strong problem-solving and debugging mindset
• Ability to work under pressure in a fast-paced environment
• Excellent communication and collaboration skills
• Ownership, customer orientation, and a bias for action

Preferred candidate profile: Immediate joiner
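The responsibilities above include automating routine tasks and building self-healing systems. A hedged sketch of one such automation, with a placeholder systemd unit name and health endpoint (a real setup would also alert someone and cap restart attempts):

```python
"""Probe a service endpoint and restart the systemd unit if it is down.
Unit name and URL are placeholder assumptions, not details from the posting."""
import subprocess
import requests

SERVICE_UNIT = "nifi.service"              # assumed unit name
HEALTH_URL = "http://localhost:8080/nifi"  # assumed health endpoint

def is_healthy() -> bool:
    try:
        return requests.get(HEALTH_URL, timeout=5).status_code == 200
    except requests.RequestException:
        return False

if __name__ == "__main__":
    if not is_healthy():
        print(f"{SERVICE_UNIT} unhealthy, attempting restart")
        subprocess.run(["sudo", "systemctl", "restart", SERVICE_UNIT], check=True)
    else:
        print(f"{SERVICE_UNIT} healthy")
```

Run from cron or a systemd timer, this covers the simplest "self-healing" case; anything more involved usually belongs in a proper orchestrator or operator.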
Posted 2 months ago
3.0 - 5.0 years
15 - 17 Lacs
Bengaluru
Work from Office
About the Role Own the deployment, scaling and hardening of our Kubernetes-based infrastructure. Automate end-to-end provisioning, ensure security and high availability, and troubleshoot production incidents. Key Responsibilities Kubernetes: Deploy, manage & optimize clusters (on-prem, EKS/GKE/AKS) IaC & GitOps: Automate with Terraform, Helm charts & Argo CD (or similar) CI/CD: Build/maintain pipelines (Jenkins, GitHub Actions, etc.) Monitoring: Implement Prometheus, Grafana & ELK for metrics, logs & alerts Troubleshooting: Diagnose container networking, storage & performance issues Security: Enforce RBAC, network policies & image-scanning best practices DR & Optimization: Define backup/restore strategies and cost-control measures Collaboration: Partner with dev teams on containerization and CI/CD workflows Required Qualifications 3-5 yrs in infrastructure, SRE or DevOps roles Hands-on Kubernetes (cluster lifecycle, Helm, CRDs) Linux administration & Bash scripting; networking tools (ip, netstat, tcpdump) IaC with Terraform/Ansible; deep Docker knowledge Monitoring with Prometheus/Grafana & ELK Automation scripting in Bash, Python or Go; Git proficiency; production debugging Preferred Skills Managed K8s services (EKS/GKE/AKS) Advanced IaC/GitOps (Argo CD, Terraform, Helm) Service mesh (Istio, Linkerd) Container security (Trivy, Clair) Custom tooling via Bash/Python automation
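This role centres on troubleshooting Kubernetes workloads. Purely as an illustration, and assuming kubeconfig access to a cluster, a small helper using the official kubernetes Python client to list pods that are not Running or Succeeded might look like:

```python
"""List unhealthy pods across all namespaces with the official kubernetes
Python client. Assumes a reachable cluster and a local kubeconfig."""
from kubernetes import client, config

def unhealthy_pods() -> list[tuple[str, str, str]]:
    config.load_kube_config()  # inside a cluster, use config.load_incluster_config()
    v1 = client.CoreV1Api()
    bad = []
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        phase = pod.status.phase
        if phase not in ("Running", "Succeeded"):
            bad.append((pod.metadata.namespace, pod.metadata.name, phase))
    return bad

if __name__ == "__main__":
    for namespace, name, phase in unhealthy_pods():
        print(f"{namespace}/{name}: {phase}")
```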
Posted 2 months ago
21 - 31 years
50 - 70 Lacs
Bengaluru
Work from Office
What we’re looking for As a member of the infrastructure team at Survey Monkey, you will have a direct impact in designing, engineering and maintaining our Cloud, Messaging and Observability Platform. Solutioning with best practices, deployment processes, architecture, and support the ongoing operation of our multi-tenant AWS environments. This role presents a prime opportunity for building world-class infrastructure, solving complex problems at scale, learning new technologies and offering mentorship to other engineers. What you’ll be working on Architect, build, and operate AWS environments at scale with well-established industry best practices. Automating infrastructure provisioning, DevOps, and/or continuous integration/delivery. Provide Technical Leadership & Mentorship Mentor and guide senior engineers to build technical expertise and drive a culture of excellence in software development. Foster collaboration within the engineering team, ensuring the adoption of best practices in coding, testing, and deployment. Review code and provide constructive feedback to ensure code quality and adherence to architectural principles. Collaboration & Cross-Functional Leadership Collaborate with cross-functional teams (Product, Security, and other Engineering teams) to drive the roadmap and ensure alignment with business objectives. Provide technical leadership in meetings and discussions, influencing key decisions on architecture, design, and implementation. Innovation & Continuous Improvement Propose, evaluate, and integrate new tools and technologies to improve the performance, security, and scalability of the cloud platform. Drive initiatives for optimizing cloud resource usage and reducing operational costs without compromising performance. Write libraries and APIs that provide a simple, unified interface to other developers when they use our monitoring, logging, and event-processing systems. Participate in on-call rotation. Support and partner with other teams on improving our observability systems to monitor site stability and performance We’d love to hear from people with: 12+ years of relevant professional experience with cloud platforms such as AWS, Heroku. Extensive experience leading design sessions and evolving well-architected environments in AWS at scale. Extensive experience with Terraform, Docker, Kubernetes, scripting (Bash/Python/Yaml), and helm. Experience with Splunk, OpenTelemetry, CloudWatch, or tools like New Relic, Datadog, or Grafana/Prometheus, ELK (Elasticsearch/Logstash/Kibana). Experience with metrics and logging libraries and aggregators, data analysis and visualization tools – Specifically Splunk and Otel. Experience instrumenting PHP, Python, Java and Node.js applications to send metrics, traces, and logs to third-party Observability tooling. Experience with GitOps and tools like ArgoCD/fluxcd. Interest in Instrumentation and Optimization of Kubernetes Clusters. Ability to listen and partner to understand requirements, troubleshoot problems, or promote the adoption of platforms. Experience with GitHub/GitHub Actions/Jenkins/Gitlab in either a software engineering or DevOps environment. Familiarity with databases and caching technologies, including PostgreSQL, MongoDB, Elasticsearch, Memcached, Redis, Kafka and Debezium. Preferably experience with secrets management, for example Hashicorp Vault. Preferably experience in an agile environment and JIRA. 
SurveyMonkey believes in-person collaboration is valuable for building relationships, fostering community, and enhancing our speed and execution in problem-solving and decision-making. As such, this opportunity is hybrid and requires you to work from the SurveyMonkey office in Bengaluru 3 days per week. #LI - Hybrid
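The posting above asks about instrumenting applications to send traces to observability tooling. A minimal, hedged OpenTelemetry sketch in Python, using the console exporter so it stays self-contained; a real deployment would configure an OTLP exporter pointed at the team's collector instead:

```python
"""Create a tracer and wrap a unit of work in a span, the basic pattern
behind OpenTelemetry instrumentation. Exporter choice here is for illustration."""
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer(__name__)

def handle_request(user_id: str) -> str:
    # Each request handled inside a span, with attributes for later filtering.
    with tracer.start_as_current_span("handle_request") as span:
        span.set_attribute("user.id", user_id)
        return f"hello {user_id}"

if __name__ == "__main__":
    print(handle_request("42"))
```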
Posted 2 months ago
21 - 31 years
35 - 42 Lacs
Bengaluru
Work from Office
What we’re looking for As a member of the Infrastructure team at Survey Monkey, you will have a direct impact in designing, engineering and maintaining our Cloud, Messaging and Observability Platform. Solutioning with best practices, deployment processes, architecture, and support the ongoing operation of our multi-tenant AWS environments. This role presents a prime opportunity for building world-class infrastructure, solving complex problems at scale, learning new technologies and offering mentorship to other engineers. What you’ll be working on Architect, build, and operate AWS environments at scale with well-established industry best practices Automating infrastructure provisioning, DevOps, and/or continuous integration/delivery Support and maintain AWS services, such as EKS, Heroku Write libraries and APIs that provide a simple, unified interface to other developers when they use our monitoring, logging, and event-processing systems Support and partner with other teams on improving our observability systems to monitor site stability and performance Work closely with developers in supporting new features and services. Work in a highly collaborative team environment. Participate in on-call rotation We’d love to hear from people with 8+ years of relevant professional experience with cloud platforms such as AWS, Heroku. Extensive experience with Terraform, Docker, Kubernetes, scripting (Bash/Python/Yaml), and helm. Experience with Splunk, Open Telemetry, CloudWatch, or tools like New Relic, Datadog, or Grafana/Prometheus, ELK (Elasticsearch/Logstash/Kibana). Experience with metrics and logging libraries and aggregators, data analysis and visualization tools – Specifically Splunk and Otel. Experience instrumenting PHP, Python, Java and Node.js applications to send metrics, traces, and logs to third-party Observability tooling. Experience with GitOps and tools like ArgoCD/fluxcd. Interest in Instrumentation and Optimization of Kubernetes Clusters. Ability to listen and partner to understand requirements, troubleshoot problems, or promote the adoption of platforms. Experience with GitHub/GitHub Actions/Jenkins/Gitlab in either a software engineering or DevOps environment. Familiarity with databases and caching technologies, including PostgreSQL, MongoDB, Elasticsearch, Memcached, Redis, Kafka and Debezium. Preferably experience with secrets management, for example Hashicorp Vault. Preferably experience in an agile environment and JIRA. SurveyMonkey believes in-person collaboration is valuable for building relationships, fostering community, and enhancing our speed and execution in problem-solving and decision-making. As such, this opportunity is hybrid and requires you to work from the SurveyMonkey office in Bengaluru 3 days per week. #LI - Hybrid
Posted 2 months ago
3 - 8 years
10 - 20 Lacs
Bengaluru, Mumbai (All Areas)
Work from Office
Job Description: As an ELK (Elasticsearch, Logstash & Kibana) Data Engineer, you would be responsible for developing, implementing, and maintaining ELK stack-based solutions for Kyndryl's clients. The role covers efficient and effective data and log ingestion, processing, indexing, and visualization for monitoring, troubleshooting, and analysis purposes.

Key Responsibilities:
- Configure Logstash to receive, filter, and transform logs from diverse sources (e.g., servers, applications, AppDynamics, storage, databases, and so on) before sending them to Elasticsearch.
- Configure ILM policies, index templates, etc.
- Develop Logstash configuration files to parse, enrich, and filter log data from various input sources (e.g., APM tools, databases, storage, and so on).
- Implement techniques like grok patterns, regular expressions, and plugins to handle complex log formats and structures.
- Ensure efficient and reliable data ingestion by optimizing Logstash performance, handling high data volumes, and managing throughput.
- Utilize Kibana to create visually appealing dashboards, reports, and custom visualizations.
- Collaborate with business users to understand their data integration and visualization needs and translate them into technical solutions.
- Establish correlations within the data and develop visualizations to detect the root cause of issues (see the sketch below).
- Integrate with ticketing tools such as ServiceNow.
- Hands-on with ML and Watcher functionalities.
- Monitor Elasticsearch clusters for health, performance, and resource utilization.
- Create and maintain technical documentation, including system diagrams, deployment procedures, and troubleshooting guides.

Education, Experience, and Certification Requirements:
- BS or MS degree in Computer Science or a related technical field
- 5+ years of overall IT industry experience
- 3+ years of development experience with Elasticsearch, Logstash, and Kibana in designing, building, and maintaining log and data processing systems
- 3+ years of Python or Java development experience
- 4+ years of SQL experience (NoSQL experience is a plus)
- 4+ years of experience with schema design and dimensional data modelling
- Experience working with machine learning models is a plus
- Knowledge of cloud platforms (e.g., AWS, Azure, GCP) and containerization technologies (e.g., Docker, Kubernetes) is a plus
- "Elastic Certified Engineer" certification is preferable
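The correlation and root-cause work mentioned above usually starts with aggregation queries. As a hedged example only, with an assumed index pattern and field names rather than anything from the posting, an Elasticsearch aggregation counting errors per service over the last hour:

```python
"""Run a terms aggregation over error logs via the Elasticsearch _search API.
Cluster URL, index pattern, and field names are placeholder assumptions."""
import requests

ES = "http://localhost:9200"  # assumed cluster, no auth

query = {
    "size": 0,
    "query": {
        "bool": {
            "filter": [
                {"term": {"level": "ERROR"}},
                {"range": {"@timestamp": {"gte": "now-1h"}}},
            ]
        }
    },
    "aggs": {"by_service": {"terms": {"field": "service.keyword", "size": 10}}},
}

resp = requests.post(f"{ES}/app-logs-*/_search", json=query)
resp.raise_for_status()
for bucket in resp.json()["aggregations"]["by_service"]["buckets"]:
    print(f"{bucket['key']}: {bucket['doc_count']} errors in the last hour")
```

The same aggregation backs a Kibana bar chart; running it from a script is handy for alert rules and scheduled reports.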
Posted 2 months ago
5 - 6 years
7 - 8 Lacs
Gurugram
Work from Office
Site Reliability Engineer

Job Description / Requirements: We are seeking a proactive and technically strong Site Reliability Engineer (SRE) to ensure the stability, performance, and scalability of our Data Engineering Platform. You will work on cutting-edge technologies including Cloudera Hadoop, Spark, Airflow, NiFi, and Kubernetes, ensuring high availability and driving automation to support massive-scale data workloads, especially in the telecom domain.

Key Responsibilities
- Ensure platform uptime and application health as per SLOs/KPIs
- Monitor infrastructure and applications using ELK, Prometheus, Zabbix, etc.
- Debug and resolve complex production issues, performing root cause analysis
- Automate routine tasks and implement self-healing systems
- Design and maintain dashboards, alerts, and operational playbooks
- Participate in incident management, problem resolution, and RCA documentation
- Own and update SOPs for repeatable processes
- Collaborate with L3 and Product teams for deeper issue resolution
- Support and guide the L1 operations team
- Conduct periodic system maintenance and performance tuning
- Respond to user data requests and ensure timely resolution
- Address and mitigate security vulnerabilities and compliance issues

Technical Skillset
- Hands-on with Spark, Hive, Cloudera Hadoop, Kafka, Ranger
- Strong Linux fundamentals and scripting (Python, Shell)
- Experience with Apache NiFi, Airflow, Yarn, and Zookeeper
- Proficient in monitoring and observability tools: ELK Stack, Prometheus, Loki
- Working knowledge of Kubernetes, Docker, Jenkins CI/CD pipelines
- Strong SQL skills (Oracle/Exadata preferred)
- Familiarity with DataHub, DataMesh, and security best practices is a plus
- Strong problem-solving and debugging mindset
- Ability to work under pressure in a fast-paced environment
- Excellent communication and collaboration skills
- Ownership, customer orientation, and a bias for action
Posted 2 months ago
8 - 10 years
25 - 40 Lacs
Bengaluru
Remote
Job description: Technical Lead (Enterprise AI Systems)

We are seeking a Technical Lead to drive the design and development of our enterprise-grade AI product. This role requires a deep understanding of AI/ML systems, cloud-native architectures, and scalable software design. The applicant must have worked on an enterprise-grade AI product or project for at least 2-3 years. This role includes at least 50% hands-on development, 30% LLD, and the remaining 20% HLD/architecture. The role is on the technical career path and is NOT suitable for project managers, technical managers, engineering managers, etc.

Key Responsibilities:
- Develop scalable, modular, and secure AI-driven software systems.
- Design APIs, microservices, and cloud infrastructure for AI workloads.
- Ensure seamless AI model integration and efficient scaling.
- Optimize performance and scalability of systems; implement caching, load balancing, and distributed computing.
- Ensure security compliance (GDPR, SOC 2) and secure data handling.
- Enforce best practices in authentication, encryption, and API security.

Mandatory Skills & Qualifications:
- Experience: 8-10 years in software architecture and design, with at least 3+ years focused on AI/ML product design and development. The last 1-2 years should include deep Gen AI experience in enterprise-grade AI projects or applications.
- Experience designing and developing applications with LLMs, RAG applications, and Gen AI agents (see the sketch below); fine-tuning open-source LLMs.
- Experience with deploying and monitoring LLMs.
- Experience with LLM evaluation frameworks or tools like DeepEval, Langfuse, or similar.
- Experience with frameworks like LangChain, LlamaIndex, Pydantic, etc.
- Proficiency in RDBMS like PostgreSQL and knowledge of vector databases (e.g., ChromaDB, Pinecone, FAISS, Weaviate).
- Strong background in microservices, RESTful and GraphQL APIs, and event-driven architectures; working experience with Kafka, RabbitMQ, etc.
- Proficient in Python and Java for backend development.
- Experience with TensorFlow, PyTorch, Hugging Face, etc.
- Skilled in data processing (NumPy, Pandas, Dask, NLTK) and front-end (TypeScript, React.js).
- Experience in scalable AI systems, data pipelines, MLOps, and real-time inference.
- Strong understanding of security protocols, IAM, OAuth2, JWT, RBAC.
- Expertise in cloud platforms (AWS, GCP, Azure), Kubernetes, Docker.
- Hands-on experience with CI/CD (e.g., GitHub, Jenkins).
- Hands-on with system monitoring (e.g., ELK Stack, OpenTelemetry, Prometheus, Grafana).

Nice-to-Have Skills:
- Ensuring AI model interpretability, fairness, and bias-mitigation strategies.
- Optimizing model inference using quantization, distillation, and pruning techniques.
- Experience deploying AI models in production at scale, including MLOps best practices.
- Designing AI systems with privacy-preserving techniques (differential privacy, homomorphic encryption, etc.).
- Experience with knowledge-graph-based AI applications.
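The role above involves building RAG applications over vector databases such as ChromaDB. A minimal retrieval-only sketch, with made-up documents and collection name, and the LLM generation step deliberately omitted:

```python
"""Retrieval half of a RAG flow using an in-memory Chroma collection.
Collection name and documents are invented for illustration only."""
import chromadb

client = chromadb.Client()  # in-memory instance; persistent clients also exist
collection = client.create_collection(name="product_docs")

collection.add(
    ids=["doc1", "doc2"],
    documents=[
        "Refunds are processed within 5 business days of approval.",
        "API keys can be rotated from the account security settings page.",
    ],
)

results = collection.query(query_texts=["how long do refunds take?"], n_results=1)
# In a full RAG pipeline, the retrieved passage is placed into the LLM prompt
# as context before generating the final answer.
print(results["documents"][0][0])
```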
Posted 2 months ago
6 - 8 years
16 - 20 Lacs
Bengaluru
Work from Office
Senior DevOps Engineer
Location: Bengaluru South, Karnataka, India
Experience: 6–8 Years
Compensation: 16–20 LPA
Industry: PropTech | AgriTech | Cloud Infrastructure | Platform Engineering
Employment Type: Full-Time | On-Site/Hybrid

Are you a DevOps Engineer passionate about building scalable and efficient infrastructure for innovative platforms? If you're excited by the challenge of automating and optimizing cloud infrastructure for a mission-driven PropTech platform, this opportunity is for you. We are seeking a seasoned DevOps Engineer to be a key player in scaling a pioneering property-tech ecosystem that reimagines how people discover, trust, and own their dream land or property. Our ideal candidate thrives in dynamic environments, embraces automation, and values security, performance, and reliability. You'll be working alongside a passionate and agile team that blends technology with sustainability, enabling seamless experiences for both property buyers and developers.

Key Responsibilities
- Architect, deploy, and maintain highly available, scalable, and secure cloud infrastructure, preferably on AWS.
- Design, develop, and optimize CI/CD pipelines for automated software build, test, and deployment.
- Implement and manage Infrastructure as Code (IaC) using Terraform, CloudFormation, or similar tools.
- Set up and manage robust monitoring, logging, and alerting systems (Prometheus, Grafana, ELK, etc.).
- Proactively monitor and improve system performance, availability, and resilience.
- Ensure compliance, access control, and secrets management across environments using best-in-class DevSecOps practices.
- Collaborate closely with development, QA, and product teams to streamline software delivery lifecycles.
- Troubleshoot production issues, identify root causes, and implement long-term solutions.
- Optimize infrastructure costs while maintaining performance SLAs (see the sketch below).
- Build and maintain internal tools and automation scripts to support development workflows.
- Stay updated with the latest in DevOps practices, cloud technologies, and infrastructure design.
- Participate in the on-call support rotation for critical incidents and infrastructure health.

Preferred Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 6–8 years of hands-on experience in DevOps, SRE, or infrastructure roles.
- Strong proficiency in AWS (EC2, S3, RDS, Lambda, ECS/EKS).
- Expert-level scripting skills in Python, Bash, or Go.
- Solid experience with CI/CD tools such as Jenkins, GitLab CI, CircleCI, etc.
- Expertise in Docker, Kubernetes, and container orchestration at scale.
- Experience with configuration management tools like Ansible, Chef, or Puppet.
- Solid understanding of networking, DNS, SSL, firewalls, and load balancing.
- Familiarity with relational and non-relational databases (PostgreSQL, MySQL, etc.) is a plus.
- Excellent troubleshooting and analytical skills with a performance- and security-first mindset.
- Experience working in agile, fast-paced startup environments is a strong plus.

Nice to Have
- Experience working in PropTech, AgriTech, or sustainability-focused platforms.
- Exposure to geospatial mapping systems, virtual land visualization, or real-time data platforms.
- Prior work with DevSecOps, service meshes like Istio, or secrets management with Vault.
- Passion for building tech that positively impacts people and the planet.

Why Join Us?
- Join India's first revolutionary PropTech platform, blending human-centric design with cutting-edge technology to empower property discovery and ownership.
- Be part of a company that doesn't just build products; it builds ecosystems for urban buyers, rural farmers, and the environment.
- Work with a forward-thinking leadership team from one of India's most respected sustainability and land stewardship organizations.
- Collaborate across cross-disciplinary teams solving real-world challenges at the intersection of tech, land, and sustainability.
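One responsibility above is optimizing infrastructure costs. As a loose illustration only (the tag key, region, and the idea of tag hygiene as a cost lever are assumptions rather than details from the posting), a boto3 helper that flags running EC2 instances missing an owner tag:

```python
"""Flag running EC2 instances without a required cost-allocation tag.
Region and tag key are placeholder assumptions."""
import boto3

REQUIRED_TAG = "owner"

def untagged_running_instances(region: str = "ap-south-1") -> list[str]:
    ec2 = boto3.client("ec2", region_name=region)
    paginator = ec2.get_paginator("describe_instances")
    filters = [{"Name": "instance-state-name", "Values": ["running"]}]
    missing = []
    for page in paginator.paginate(Filters=filters):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"] for t in instance.get("Tags", [])}
                if REQUIRED_TAG not in tags:
                    missing.append(instance["InstanceId"])
    return missing

if __name__ == "__main__":
    for instance_id in untagged_running_instances():
        print(f"{instance_id} is running without an '{REQUIRED_TAG}' tag")
```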
Posted 2 months ago
3 - 5 years
5 - 7 Lacs
Pune
Work from Office
We are looking for a DevOps Engineer.

How do you craft the future of Smart Buildings? We're looking for the makers of tomorrow, the hardworking individuals ready to help Siemens transform entire industries, cities and even countries. Get to know us from the inside, develop your skills on the job.

You'll make a difference by
- Designing, deploying, and managing AWS cloud infrastructure, including compute, storage, networking, and security services.
- Implementing and maintaining CI/CD pipelines using tools like GitLab CI, Jenkins, or similar technologies to automate build, test, and deployment processes.
- Collaborating with development teams to streamline development workflows and improve release cycles.
- Monitoring and troubleshooting infrastructure and application issues, ensuring high availability and performance.
- Implementing infrastructure as code (IaC) using tools like Terraform or CloudFormation to automate provisioning and configuration management.
- Maintaining version control systems and Git repositories for codebase management and collaboration.
- Implementing and enforcing security best practices and compliance standards in cloud environments.
- Continuously evaluating and embracing new technologies, tools, and best practices to improve efficiency and reliability.
There are a lot of learning opportunities for our new team member. An openness to learn more about data analytics (including AI) offerings is part of your motivation.

Your defining qualities
- A university degree in Computer Science or a comparable education; we are flexible if a high quality of code is ensured.
- Proven experience (3-5 years) with common DevOps practices such as CI/CD pipelines (GitLab), containers and orchestration (Docker, ECS, EKS, Helm), and infrastructure as code (Terraform).
- Working knowledge of TypeScript, JavaScript, and Node.js.
- Good exposure to AWS cloud.
- Thriving in working independently, i.e., able to break down high-level objectives into concrete key results and implement those.
- Able to work with AWS from day one; familiarity with AWS services beyond EC2 (e.g., Fargate, RDS, IAM, Lambda) is something we expect from applicants.
- Good knowledge of configuring logging and monitoring infrastructure with ELK, Prometheus, CloudWatch, Grafana.
- When it comes to methodologies, knowledge of agile software development processes would be highly valued.
- The right demeanor, allowing you to navigate within a complex global organization and get things done. We need a person with an absolute willingness to support the team, and a proactive and stress-resistant personality.
- Business fluency in English.
Posted 2 months ago
5 - 7 years
14 - 17 Lacs
Bengaluru
Work from Office
We are seeking a highly skilled and experienced Senior DevOps Engineer to join our team in Bangalore. The ideal candidate will have a strong background in DevOps practices, with a proven track record of managing and optimizing infrastructure and deployment processes. As a Senior DevOps Engineer, you will be responsible for ensuring the reliability, scalability, and performance of our systems.

Roles and Responsibilities
- Design, implement, and maintain CI/CD pipelines to automate the deployment process.
- Collaborate with development and operations teams to ensure smooth and reliable software releases.
- Monitor and optimize system performance, reliability, and security.
- Implement and manage infrastructure as code using tools like Terraform or CloudFormation.
- Troubleshoot and resolve issues in development, test, and production environments.
- Ensure high availability and disaster recovery of critical systems.
- Participate in code reviews and provide constructive feedback to team members.
- Stay up to date with emerging technologies and industry trends in DevOps.

Required Skills:
- 6 to 7 years of experience in DevOps or related roles.
- Proficiency in CI/CD tools such as Jenkins, GitLab CI, or CircleCI.
- Strong understanding of cloud platforms like AWS, Azure, or Google Cloud.
- Experience with containerization technologies like Docker and Kubernetes.
- Familiarity with infrastructure as code tools like Terraform or CloudFormation.
- Excellent problem-solving skills and attention to detail.
- Strong communication and teamwork skills.
- Expertise in scripting languages such as Python, Bash, or PowerShell.

Good to Have:
- Experience with monitoring and logging tools like Prometheus, Grafana, or the ELK stack.
- Understanding of Agile methodologies.
Posted 2 months ago
8 - 12 years
11 - 16 Lacs
Bengaluru
Work from Office
The candidate is required to have 8+ years of experience in the IT field and should have played, or aspired to play, the role of Team Lead. The candidate is expected to have:
- Good understanding of the DevOps platform and its associated tool chain, such as Jira, GitHub, CI/CD tools (e.g., Jenkins, TeamCity), Nexus/Artifactory, and CD tools
- Expertise in the Docker platform: writing Dockerfiles, Docker Swarm, Docker Trusted Registry, Helm charts, etc.
- Expert knowledge of orchestration tools like Kubernetes and debugging of environments such as Azure Kubernetes Service
- Extensive knowledge of cloud platforms like Azure, AWS, and GCP
- Exposure to MLOps
- Thorough understanding of Azure DevOps and its implementation for end-to-end deployments
- Skills with automation tools like Ansible and Terraform
- Knowledge of microservice architecture and event-streaming platforms like Kafka
- Knowledge of monitoring tools like Prometheus, New Relic, ELK
- Good working and debugging knowledge of the Linux environment
- Understanding of networking topology basics
- Experience with scripting tools like Python
Posted 2 months ago
7 - 12 years
45 - 65 Lacs
Pune
Work from Office
We're hiring a Senior Backend Python Developer (7+ yrs) to build scalable, AI-powered systems using Django/Flask/FastAPI, GCP, Kubernetes & GraphQL. Design APIs, drive architecture, mentor teams & integrate ML for high-performance platforms.
Posted 2 months ago
8 - 13 years
18 - 30 Lacs
Coimbatore
Remote
We are seeking a highly skilled and experienced Senior DevOps Engineer to join our growing team. The ideal candidate will have a strong background in cloud infrastructure, CI/CD pipelines, automation, containerization, and a passion for delivering scalable and reliable DevOps solutions in a dynamic environment. Key Responsibilities: Design, implement, and manage CI/CD pipelines for code deployment and automation. Manage infrastructure using tools like Terraform, Ansible, or CloudFormation. Deploy and manage applications in cloud environments (AWS, Azure, GCP). Monitor systems and resolve issues to ensure high availability and performance. Work closely with development, QA, and operations teams to ensure smooth software delivery. Implement security best practices and compliance standards in DevOps processes. Automate manual processes and identify areas for continuous improvement. Containerize applications using Docker and orchestrate using Kubernetes. Maintain and improve logging, monitoring, and alerting systems (e.g., Prometheus, Grafana, ELK). Required Skills & Experience: 8+ years of experience in DevOps, SRE, or related roles. Strong knowledge of cloud platforms: AWS, Azure, or GCP. Proficiency in scripting languages like Bash, Python, or Go. Hands-on experience with Docker, Kubernetes, and Helm. Experience with configuration management tools like Ansible, Puppet, or Chef. Strong understanding of CI/CD tools: Jenkins, GitLab CI, CircleCI, etc. Familiarity with infrastructure as code tools (Terraform/CloudFormation). Good knowledge of Git, GitOps, and version control practices. Strong problem-solving skills and attention to detail. Preferred Qualifications: Relevant certifications (AWS Certified DevOps Engineer, Kubernetes Administrator, etc.) Experience in Agile environments Exposure to microservices architecture
Posted 2 months ago
4 - 8 years
15 - 25 Lacs
Noida, Pune, Gurugram
Hybrid
We are looking for passionate DevOps Engineers for our team. The DevOps Engineer will work closely with architects, data engineers, and operations to design, build, deploy, manage, and operate our development, test, and production infrastructure. You will build and maintain tools to ensure our applications meet our stringent SLAs in a fast-paced culture, with a passion to learn and contribute. We are looking for a strong engineer with a can-do attitude.

What You'll Need:
- 4+ years of industry experience in design, implementation, and maintenance of IT infrastructure and DevOps solutions, data centres, and greenfield infrastructure projects, both on-premise and cloud.
- Strong experience in Terraform, Oracle Cloud Infrastructure, Ansible, Puppet.
- Strong experience in server installation, maintenance, monitoring, troubleshooting, data backup, recovery, security, and administration of Linux operating systems like Red Hat Enterprise, Ubuntu, and CentOS.
- Strong programming ability in Shell/Perl/Python with automation experience.
- Experience automating public cloud deployments.
- Experience using and optimizing monitoring and trending systems (Prometheus, Grafana), log aggregation systems (ELK, Splunk), and their agents.
- Experience with container-based technologies like Docker and OpenShift; using Docker and Ansible to automate the creation of Kubernetes pods.
- Experience with Ansible for provisioning and configuration of servers.
- Experience with Jenkins to automate the build and deploy process across all environments.
- Experience with build tools like Apache Maven and Gradle.
- Monitoring and troubleshooting of Kubernetes clusters using Grafana and Prometheus.
- Experience working closely with the development team to avoid manual intervention and ensure the timely delivery of deliverables.
- Experience in virtualization technologies (VMware).
- Database administration, maintenance, backup, and restoration.
- Experience with various SQL and NoSQL databases like MySQL, Postgres, MongoDB, HBase, Elasticsearch, etc.
- Experience in handling production deployments.

Our perfect candidate is someone who:
- Is proactive and an independent problem solver
- Is a constant learner. We are a fast-growing company and want you to grow with us!
- Is a team player and good communicator

Notice Period: 30 days or less
Mode of Work: Hybrid (3 days work from office)
Posted 2 months ago