
233 Logstash Jobs - Page 2


0 years

0 Lacs

Hyderabad, Telangana, India

Remote


When you join Verizon
You want more out of a career. A place to share your ideas freely — even if they’re daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love — driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together — lifting our communities and building trust in how we show up, everywhere & always. Want in? Join the #VTeamLife.

Verizon’s Cloud Engineering & Enablement Team is looking for a subject matter expert with high technical proficiency in the monitoring and SRE domain and deep expertise in synthetic monitoring, specifically with Catchpoint, to join our SRE/Operations team. You will be part of the Monitoring-as-a-Service product team that is accountable for designing, implementing, and optimizing our synthetic monitoring strategies to proactively identify performance bottlenecks and ensure an exceptional digital experience for our users.

Responsibilities - What You Will Be Doing…
- Designing, developing, and maintaining comprehensive synthetic monitoring solutions across our digital ecosystem.
- Administering, configuring, and optimizing the Catchpoint platform, including managing nodes, tests, alerts, dashboards, and reporting.
- Developing and maintaining complex synthetic test scripts (e.g., Browser, API, Transaction, DNS, FTP) within Catchpoint to simulate critical user journeys and business transactions.
- Analyzing synthetic monitoring data to identify performance trends, anomalies, and root causes of issues impacting user experience, application availability, and network health.
- Configuring and fine-tuning Catchpoint alerts and integrations with incident management systems (e.g., ServiceNow, PagerDuty, Opsgenie) to ensure timely notification and resolution of critical issues.
- Creating insightful dashboards and reports in Catchpoint to visualize key performance indicators (KPIs), Service Level Objectives (SLOs), and Service Level Indicators (SLIs) for various stakeholders.
- Working closely with SRE, DevOps, Development, and Network teams to understand application architecture, new features, and potential performance impacts, feeding insights back into the development lifecycle.
- Establishing and enforcing best practices for synthetic monitoring, test creation, naming conventions, and data utilization within the organization.
- Providing guidance and training to junior engineers and other teams on Catchpoint usage, synthetic monitoring principles, and performance analysis.

What we are looking for...
We are looking for a subject matter expert with strong problem-solving skills for synthetic monitoring using Catchpoint. Candidates should also possess good interpersonal and communication skills to clearly articulate ideas and influence stakeholders.

You'll Need To Have
- Bachelor’s degree or four or more years of work experience.
- Four or more years of relevant, in-depth experience with Catchpoint Synthetic Monitoring.
- Expertise in test creation (browser, API, transaction), alert configuration, dashboarding, and report generation.
- Strong understanding of web performance metrics and how synthetic monitoring impacts them.
- Proficiency with browser automation frameworks such as Selenium and Playwright for developing advanced synthetic tests and integrating Catchpoint with other tools.
- Knowledge of network protocols and web technologies (DNS, SSL/TLS, CDNs, APIs, etc.).
- Excellent analytical, problem-solving, and troubleshooting skills with a keen eye for detail.
- Experience in analyzing performance issues across all tiers (client, network, database).
- Familiarity with Real User Monitoring (RUM) concepts and how it complements synthetic monitoring.
- Experience integrating Catchpoint with other APM (Application Performance Monitoring) tools like Dynatrace, New Relic, Datadog, or Splunk.

Even Better If You Have
- Catchpoint Certified Professional or an equivalent certification.
- Experience with infrastructure-as-code (IaC) tools like Terraform or CloudFormation for deploying monitoring infrastructure.
- Knowledge of CI/CD pipelines and integrating synthetic tests into automated deployment workflows.
- Knowledge of cloud platforms: AWS, Azure, GCP, Cloud Foundry, EKS/Kubernetes.
- Knowledge of logging infrastructure such as Fluentd, Logstash, Elasticsearch, Kibana, or Splunk.
- Knowledge of DevOps tools like Jenkins, Ansible, Apigee, GitLab, etc.

Where you’ll be working
In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager.

Scheduled Weekly Hours: 40

Equal Employment Opportunity
Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics.

Posted 5 days ago


0 years

0 Lacs

Gurugram, Haryana, India

On-site


About noon
noon is the region's leading consumer commerce platform. On December 12th, 2017, noon launched its consumer platform in Saudi Arabia and the UAE, expanding to Egypt in February 2019. The noon ecosystem of services now includes marketplaces for food delivery, quick-commerce, fintech, and fashion. noon is a work in progress; we’re six years in, but only 5% done. noon’s mission: every door, everyday.

What we are looking for
noon’s Cybersecurity department’s Security Operations team is looking for a talented, experienced, and enthusiastic Senior Threat Detection Engineer to help build and scale the Detection & Threat Hunting program at noon. The ideal candidate will have a diverse security skill set (IR, TI, SOC, etc.) and specialize in detection engineering and threat hunting. The focus of this role will be designing and implementing advanced detection mechanisms based on known and emerging attacks and pivoting techniques. The Senior Threat Detection Engineer will take proactive approaches to stay steps ahead of attackers and help build detections that identify advanced, current, and emerging threats. They will be responsible for the design and implementation of security intelligence and detection capabilities across our applications and networks. This role will assist in building the strategy and the team for our Detection and Threat Hunting program, and will be the focal point for planning and executing security investigations, the response process, and coordination of relevant parties when an information security incident occurs. In addition, documentation, analytical and critical thinking skills, investigation and forensics, and the ability to identify needs and take the initiative are key requirements of this position.

About the role
- Help build and scale the Detection & Threat Hunting program at noon.
- Drive improvements in detection and response capabilities and operations for the internal SOC/TI.
- Write detection signatures, tune security monitoring systems and tools, and develop automation scripts and correlation rules.
- Work closely with other security team members to strengthen our detection and defence mechanisms for web applications, cloud, and network.
- Exhibit knowledge of the attacker lifecycle, TTPs, and indicators of compromise (IOCs), and proactively implement countermeasures to neutralize threats.
- Identify opportunities to enhance the development and implementation of new methods for detecting attacks and malicious activities.
- Participate as a member of the CSIRT during major incidents and contribute to post-incident review and continuous improvement.
- Proactively threat hunt anomalies to identify IOCs and derive custom Snort signatures for them.
- Identify and manage a wide range of intelligence sources to provide a holistic view of the threat landscape (OSINT aggregation).
- Work closely with the Red Team and Blue Team to implement custom detection of new and emerging threats and develop monitoring use cases.
- Coordinate red teaming activities such as table-top and adversarial simulation exercises.
- Own all confirmed incidents, including publishing the incident report, documenting lessons learnt, and updating the knowledge base.

Required Expertise
- Required: Senior-level experience in threat intel, detection, IR, or similar cybersecurity roles for medium to large organizations.
- Required: Technical professional security certifications in Incident Response, Digital Forensics, Offensive Security, or Malware Analysis, such as GCIH, GCFA, GNFA, GCTI, OSCP, or similar.
- Bachelor’s degree in Computing, Information Technology, Engineering, or a related field, with a strong security component.
- Hands-on experience in detection engineering, advanced cyber threat intelligence activities, intrusion detection, incident response, and security content development (e.g., signatures, rules, etc.).
- A broad and diverse security skill set with an advanced understanding of modern network security technologies (e.g., firewalls, intrusion detection/prevention systems, access control lists, network segmentation, SIEMs, auditing/logging, identity & access management solutions, DDoS protection, etc.).
- Knowledge of at least one common scripting language (Python, Ruby, Go).
- Experience handling and building a SOAR such as Chronicle SOAR, Demisto, Phantom, or similar tools.
- Experience conducting and leading incident response investigations for organizations, including targeted threats such as advanced persistent threats, insider threats, etc.
- Understanding of log collection and aggregation techniques: Elasticsearch, Logstash, Kibana (ELK), Syslog-NG, Windows Event Forwarding (WEF), etc.
- Experience with endpoint security agents (Carbon Black, CrowdStrike, etc.).

Preferred Qualifications
- Hands-on experience with Chronicle SIEM/SOAR and Google SecOps.
- Expertise in threat hunting in one or more public cloud solutions such as AWS and GCP.
- Ability to work with a team or independently with minimal direction/leadership.
- Hands-on experience in offensive/defensive web application security is a big plus for this role.
- Highly motivated and self-directed with a passion for solving complex problems.
- Established industry expertise through writing, speaking, or online presence.

Who will excel?
We’re looking for people with high standards, who understand that hard work matters. You need to be relentlessly resourceful and operate with a deep bias for action. We need people with the courage to be fiercely original. noon is not for everyone; readiness to adapt, pivot, and learn is essential.
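The "detection signatures and correlation rules" work described in this posting often starts with matching raw log events against threat-intel watchlists. A minimal sketch of that idea in Python (the IOC values, function name, and log format here are purely illustrative assumptions, not anything from the posting or a specific product):

```python
import re

# Hypothetical IOC watchlists; in practice these would be fed by the
# intelligence sources the posting mentions (OSINT aggregation, TI feeds).
KNOWN_BAD_IPS = {"203.0.113.7", "198.51.100.23"}
KNOWN_BAD_DOMAINS = {"evil.example.net"}

IP_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def match_iocs(log_line):
    """Return the watchlisted IOCs that appear in one raw log line."""
    hits = [ip for ip in IP_RE.findall(log_line) if ip in KNOWN_BAD_IPS]
    hits += [d for d in sorted(KNOWN_BAD_DOMAINS) if d in log_line]
    return hits

print(match_iocs("2024-05-01 GET http://evil.example.net/ from 203.0.113.7"))
```

A real detection pipeline would run logic like this inside a SIEM rule engine (e.g., as a correlation rule) rather than as a standalone script, but the matching step is the same shape.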

Posted 5 days ago


0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


About this opportunity:
This position plays a crucial role in the development of Python-based solutions, their deployment within a Kubernetes-based environment, and ensuring smooth data flow for our machine learning and data science initiatives. The ideal candidate will possess a strong foundation in Python programming, hands-on experience with Elasticsearch, Logstash, and Kibana (ELK), a solid grasp of fundamental Spark concepts, and familiarity with visualization tools such as Grafana and Kibana. Furthermore, a background in MLOps and expertise in both machine learning model development and deployment will be highly advantageous.

What you will do:
- Python Development: Write clean, efficient, and maintainable Python code to support data engineering tasks, including data collection, transformation, and integration with machine learning models.
- Data Pipeline Development: Design, develop, and maintain robust data pipelines that efficiently gather, process, and transform data from various sources into a format suitable for machine learning and data science tasks, using the ELK stack, Python, and other leading technologies.
- Spark Knowledge: Apply basic Spark concepts for distributed data processing when necessary, optimizing data workflows for performance and scalability.
- ELK Integration: Utilize Elasticsearch, Logstash, and Kibana (ELK) for data management, data indexing, and real-time data visualization. Knowledge of OpenSearch and its related stack would be beneficial.
- Grafana and Kibana: Create and manage dashboards and visualizations using Grafana and Kibana to provide real-time insights into data and system performance.
- Kubernetes Deployment: Deploy data engineering solutions and machine learning models to a Kubernetes-based environment, ensuring security, scalability, reliability, and high availability.

What you will bring:
- Machine Learning Model Development: Collaborate with data scientists to develop and implement machine learning models, ensuring they meet performance and accuracy requirements.
- Model Deployment and Monitoring: Deploy machine learning models and implement monitoring solutions to track model performance, drift, and health.
- Data Quality and Governance: Implement data quality checks and data governance practices to ensure data accuracy, consistency, and compliance with data privacy regulations.
- MLOps (added advantage): Contribute to the implementation of MLOps practices, including model deployment, monitoring, and automation of machine learning workflows.
- Documentation: Maintain clear and comprehensive documentation for data engineering processes, ELK configurations, machine learning models, visualizations, and deployments.

Why join Ericsson?
At Ericsson, you'll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what's possible. To build solutions never seen before to some of the world’s toughest problems. You'll be challenged, but you won’t be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next.

What happens once you apply?
Click Here to find all you need to know about what our typical hiring process looks like.

Encouraging a diverse and inclusive organization is core to our values at Ericsson; that's why we champion it in everything we do. We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer.

Primary country and city: India (IN) || Bangalore
Req ID: 766745
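The Logstash-style pipeline work described above boils down to parsing raw log lines into structured documents before indexing them into Elasticsearch. A minimal sketch in Python (the log format, field names, and function name are illustrative assumptions; a production pipeline would use Logstash's own grok filters or an ingest pipeline instead):

```python
import re
from datetime import datetime

# Assumed log format for illustration: "<ISO timestamp> <LEVEL> <message>"
LOG_PATTERN = re.compile(
    r"(?P<ts>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2})\s+"
    r"(?P<level>[A-Z]+)\s+(?P<msg>.*)"
)

def parse_line(line):
    """Turn one raw line into a document shaped for Elasticsearch indexing."""
    m = LOG_PATTERN.match(line)
    if m is None:
        return None  # a real pipeline would route this to a dead-letter queue
    doc = m.groupdict()
    # Normalize the timestamp into the conventional "@timestamp" field.
    doc["@timestamp"] = datetime.fromisoformat(doc.pop("ts")).isoformat()
    return doc

print(parse_line("2024-05-01T10:15:00 ERROR disk usage above threshold"))
```

In Logstash itself, the equivalent step would be a `grok` filter plus a `date` filter in the pipeline configuration; the Python version just makes the transformation explicit.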

Posted 5 days ago


5.0 - 12.0 years

0 Lacs

Hyderabad

On-site

EPAM is a leading global provider of digital platform engineering and development services. We are committed to having a positive impact on our customers, our employees, and our communities. We embrace a dynamic and inclusive culture. Here you will collaborate with multi-national teams, contribute to a myriad of innovative projects that deliver the most creative and cutting-edge solutions, and have an opportunity to continuously learn and grow. No matter where you are located, you will join a dedicated, creative, and diverse community that will help you discover your fullest potential. We are seeking a talented Lead Software Engineer with expertise in AWS and Java to join our dynamic team. This role involves working on critical application modernization projects, transforming legacy systems into cloud-native solutions, and driving innovation in security, observability, and governance. You'll collaborate with self-governing engineering teams to deliver high-impact, scalable software solutions. 
Responsibilities
- Lead end-to-end development in Java and AWS services, ensuring high-quality deliverables
- Design, develop, and implement REST APIs using AWS Lambda/API Gateway, JBoss, or Spring Boot
- Utilize the AWS Java SDK to interact with various AWS services effectively
- Drive deployment automation through AWS Java CDK, CloudFormation, or Terraform
- Architect containerized applications and manage orchestration via Kubernetes on AWS EKS or AWS ECS
- Apply advanced microservices concepts and adhere to best practices during development
- Build, test, and debug code while addressing technical setbacks effectively
- Expose application functionality via APIs using Lambda and Spring Boot
- Manage data formats (JSON, YAML) and handle diverse data types (strings, numbers, arrays)
- Implement robust unit test cases with JUnit or equivalent testing frameworks
- Oversee source code management through platforms like GitLab, GitHub, or Bitbucket
- Ensure efficient application builds using Maven or Gradle
- Coordinate development requirements, schedules, and other dependencies with multiple stakeholders

Requirements
- 5 to 12 years of experience in Java development and AWS services
- Expertise in AWS services including Lambda, SQS, SNS, DynamoDB, Step Functions, and API Gateway
- Proficiency in using Docker and managing container orchestration through Kubernetes on AWS EKS or ECS
- Strong understanding of AWS core services such as EC2, VPC, RDS, EBS, and EFS
- Competency in deployment tools like AWS CDK, Terraform, or CloudFormation
- Knowledge of NoSQL databases, storage solutions, AWS ElastiCache, and DynamoDB
- Understanding of AWS orchestration tools for automation and data processing
- Capability to handle production workloads, automate tasks, and manage logs effectively
- Experience in writing scalable applications employing microservices principles

Nice to have
- Proficiency with AWS core services such as Auto Scaling, load balancers, Route 53, and IAM
- Skills in scripting with Linux shell, Python, or Windows PowerShell, or using Ansible/Chef/Puppet
- Experience with build automation tools like Jenkins, AWS CodeBuild/CodeDeploy, or GitLab CI
- Familiarity with collaborative tools like Jira and Confluence
- Knowledge of deployment strategies, including blue-green and canary deployments
- Experience in ELK (Elasticsearch, Logstash, Kibana) stack development

We offer
- Opportunity to work on technical challenges that may impact across geographies
- Vast opportunities for self-development: online university, knowledge sharing opportunities globally, learning opportunities through external certifications
- Opportunity to share your ideas on international platforms
- Sponsored Tech Talks & Hackathons
- Unlimited access to LinkedIn Learning solutions
- Possibility to relocate to any EPAM office for short- and long-term projects
- Focused individual development
- Benefit package: health benefits, retirement benefits, paid time off, flexible benefits
- Forums to explore passions beyond work (CSR, photography, painting, sports, etc.)

Posted 5 days ago


3.0 years

2 - 6 Lacs

Noida

On-site

Site Reliability Engineer I

Job Summary
Site Reliability Engineers (SREs) cover the intersection of software engineer and systems administrator. In other words, they can both create code and manage the infrastructure on which the code runs. This is a very wide skill set, but the end goal of an SRE is always the same: to ensure that all SLAs are met, but not exceeded, so as to balance performance and reliability with operational costs. As a Site Reliability Engineer I, you will be learning our systems, improving your craft as an engineer, and taking on tasks that improve the overall reliability of the VP platform.

Key Responsibilities:
- Design, implement, and maintain robust monitoring and alerting systems.
- Lead observability initiatives by improving metrics, logging, and tracing across services and infrastructure.
- Collaborate with development and infrastructure teams to instrument applications and ensure visibility into system health and performance.
- Write Python scripts and tools for automation, infrastructure management, and incident response.
- Participate in and improve the incident management and on-call process, driving down Mean Time to Resolution (MTTR).
- Conduct root cause analysis and postmortems following incidents, and champion efforts to prevent recurrence.
- Optimize systems for scalability, performance, and cost-efficiency in cloud and containerized environments.
- Advocate and implement SRE best practices, including SLOs/SLIs, capacity planning, and reliability reviews.

Required Skills & Qualifications:
- 3+ years of experience in a Site Reliability Engineer or similar role.
- Proficiency in Python for automation and tooling.
- Hands-on experience with monitoring and observability tools such as Prometheus, Grafana, Datadog, New Relic, OpenTelemetry, etc.
- Experience with log aggregation and analysis tools like the ELK Stack (Elasticsearch, Logstash, Kibana) or Fluentd.
- Good understanding of cloud platforms (AWS, GCP, or Azure) and container orchestration (Kubernetes).
- Familiarity with infrastructure-as-code (Terraform, Ansible, or similar).
- Strong debugging and incident response skills.
- Knowledge of CI/CD pipelines and release engineering practices.
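The SLO/SLI practice mentioned in postings like this one rests on simple error-budget arithmetic: an SLO of 99.9% success leaves 0.1% of requests as the budget for failures. A minimal sketch in Python (the function name, target, and request counts are illustrative assumptions, not figures from the posting):

```python
def error_budget_remaining(slo_target, total_requests, failed_requests):
    """Fraction of the error budget still unspent for a request-based SLI.

    slo_target is e.g. 0.999 for "99.9% of requests succeed"; the error
    budget is the allowed fraction of failures, (1 - slo_target).
    """
    allowed_failures = (1.0 - slo_target) * total_requests
    if allowed_failures == 0:
        return 0.0  # a 100% SLO leaves no budget at all
    return max(0.0, 1.0 - failed_requests / allowed_failures)

# A 99.9% SLO over 1,000,000 requests allows 1,000 failures;
# 250 failures leaves 75% of the budget.
print(error_budget_remaining(0.999, 1_000_000, 250))  # 0.75
```

Teams typically alert on the budget *burn rate* (how fast the remaining fraction is shrinking) rather than on individual failures, which is why MTTR and SLOs appear together in these role descriptions.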

Posted 5 days ago


5.0 - 6.0 years

0 Lacs

Puducherry, India

On-site


Java Software Engineer

We are looking for an experienced and results-driven Java Software Engineer to join our growing development team. You will be responsible for designing, developing, and maintaining scalable Java-based applications that drive core business operations. The ideal candidate will have a strong foundation in Java, object-oriented programming, backend technologies, and system design, with a proactive mindset toward clean, testable, and maintainable code.

Key Responsibilities
- Design and develop high-performance, secure, and scalable backend solutions using Java
- Collaborate with cross-functional teams including product managers, QA engineers, DevOps, and other developers
- Translate business requirements into technical specifications and software design
- Write clean, reusable, efficient, and testable code following best practices and coding standards
- Implement RESTful APIs and integrate third-party services and data sources
- Participate in code reviews and provide constructive feedback
- Troubleshoot, debug, and resolve production issues as needed
- Contribute to continuous integration and continuous delivery (CI/CD) pipelines
- Maintain thorough documentation of software architecture and processes

Requirements
- Bachelor's degree in Computer Science, Engineering, or a related field
- 5-6 years of hands-on experience in Java development
- Strong understanding of object-oriented programming and software engineering principles
- Experience working in Agile/Scrum environments
- Knowledge of API security, performance tuning, and scalability
- Excellent problem-solving, analytical, and debugging skills
- Strong verbal and written communication skills

Nice to Have
- Experience with Kubernetes, Helm, or container orchestration
- Exposure to Kafka, RabbitMQ, or any message brokers
- Knowledge of the ELK stack (Elasticsearch, Logstash, Kibana) for logging/monitoring
- Contributions to open-source projects or technical blogs

(ref:hirist.tech)

Posted 6 days ago


8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

The opportunity
We are the only professional services organization with a separate business dedicated exclusively to the financial services marketplace. Join the Digital Engineering Team and you will work with multi-disciplinary teams from around the world to deliver a global perspective. Aligned to key industry groups including Asset Management, Banking and Capital Markets, Insurance and Private Equity, Health, Government, Power and Utilities, we provide integrated advisory, assurance, tax, and transaction services. Through diverse experiences, world-class learning and individually tailored coaching you will experience ongoing professional development. That’s how we develop outstanding leaders who team to deliver on our promises to all of our stakeholders, and in so doing, play a critical role in building a better working world for our people, for our clients and for our communities. Sound interesting? Well, this is just the beginning. Because whenever you join, however long you stay, the exceptional EY experience lasts a lifetime.

We’re seeking a versatile Full Stack Architect with hands-on experience in Python (including multithreading and popular libraries), GenAI, and AWS cloud services. The ideal candidate should be proficient in backend development using NodeJS, ExpressJS, Python Flask/FastAPI, and RESTful API design, with strong frontend skills in Angular, ReactJS, and TypeScript. EY Digital Engineering is a unique, industry-focused business unit that provides a broad range of integrated services that leverage deep industry experience with strong functional capability and product knowledge.
The Digital Engineering (DE) practice works with clients to analyse, formulate, design, mobilize and drive digital transformation initiatives. We advise clients on their most pressing digital challenges and opportunities surrounding business strategy, customer, growth, profit optimization, innovation, technology strategy, and digital transformation. We also have a unique ability to help our clients translate strategy into actionable technical design and transformation planning/mobilization. Through our unique combination of competencies and solutions, EY’s DE team helps our clients sustain competitive advantage and profitability by developing strategies to stay ahead of the rapid pace of change and disruption and supporting the execution of complex transformations.

Your Key Responsibilities
- Application Development: Design and develop cloud-native applications and services using AWS services such as Lambda, API Gateway, ECS, EKS, DynamoDB, Glue, Redshift, and EMR.
- Deployment and Automation: Implement CI/CD pipelines using AWS CodePipeline, CodeBuild, and CodeDeploy to automate application deployment and updates.
- Architecture Design: Collaborate with architects and other engineers to design scalable and secure application architectures on AWS.
- Performance Tuning: Monitor application performance and implement optimizations to enhance reliability, scalability, and efficiency.
- Security: Implement security best practices for AWS applications, including identity and access management (IAM), encryption, and secure coding practices.
- Container Services Management: Design and deploy containerized applications using AWS services such as Amazon ECS (Elastic Container Service), Amazon EKS (Elastic Kubernetes Service), and AWS Fargate. Configure and manage container orchestration, scaling, and deployment strategies. Optimize container performance and resource utilization by tuning settings and configurations.
- Application Observability: Implement and manage application observability tools such as AWS CloudWatch, AWS X-Ray, Prometheus, Grafana, and the ELK Stack (Elasticsearch, Logstash, Kibana). Develop and configure monitoring, logging, and alerting systems to provide insights into application performance and health. Create dashboards and reports to visualize application metrics and logs for proactive monitoring and troubleshooting.
- Integration: Integrate AWS services with application components and external systems, ensuring smooth and efficient data flow.
- Troubleshooting: Diagnose and resolve issues related to application performance, availability, and reliability.
- Documentation: Create and maintain comprehensive documentation for application design, deployment processes, and configuration.

Skills And Attributes For Success

Required Skills:
- AWS Services: Proficiency in AWS services such as Lambda, API Gateway, ECS, EKS, DynamoDB, S3, RDS, Glue, Redshift, and EMR.
- Backend: Python (multithreading, Flask, FastAPI), NodeJS, ExpressJS, REST APIs.
- Frontend: Angular, ReactJS, TypeScript.
- Cloud Engineering: Development with AWS (Lambda, EC2, S3, API Gateway, DynamoDB), Docker, Git, etc.
- Proven experience in developing and deploying AI solutions with Python and JavaScript.
- Strong background in machine learning, deep learning, and data modelling.
- Good to have: CI/CD pipelines, full-stack architecture, unit testing, API integration.
- Security: Understanding of AWS security best practices, including IAM, KMS, and encryption.
- Observability Tools: Proficiency in using observability tools like AWS CloudWatch, AWS X-Ray, Prometheus, Grafana, and the ELK Stack.
- Container Orchestration: Knowledge of container orchestration concepts and tools, including Kubernetes and Docker Swarm.
- Monitoring: Experience with monitoring and logging tools such as AWS CloudWatch, CloudTrail, or the ELK Stack.
- Collaboration: Strong teamwork and communication skills with the ability to work effectively with cross-functional teams.

Preferred Qualifications:
- Certifications: AWS Certified Solutions Architect – Associate or Professional, AWS Certified Developer – Associate, or similar certifications.
- Experience: At least 8 years of experience in an application engineering role with a focus on AWS technologies.
- Agile Methodologies: Familiarity with Agile development practices and methodologies.
- Problem-Solving: Strong analytical skills with the ability to troubleshoot and resolve complex issues.

Education:
- Bachelor’s degree in Computer Science, Engineering, Information Technology, or a related field, or equivalent practical experience.

What We Offer
EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career.
- Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next.
- Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way.
- Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs.
- Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs.
EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 6 days ago


8.0 years

0 Lacs

Indore

On-site

Indore, Madhya Pradesh, India;Noida, Uttar Pradesh, India;Gurugram, Haryana, India Qualification : Job Description DevOps Lead with 8-12 years of experience Expert on setting up K8s clusters for large scale infrastructure Expert or at least aware of Ansible, Prometheus, Open Telemetry, Logstash, Kafka, ElasticSearch setup and administration perspective (if not aware of any particular thing, should be able to learn quickly) Having hands on experience on infrastructure, security, monitoring for enterprise applications and knowledge of what options are appropriate for different scenarios will be needed. Hands on experience on setting up CICD pipelines. Must have extensive experience on deploying the microservices/web-application on Kubernetes platform. Should be capable to design CICD and release management process. Must be familiar with security and DevOps best practices on K8s platform. Good concept on Docker and orchestration tools. Ability to explore DevOps tools/technologies and guide in taking decision on it. Must have exposure to python or shell scripting and familiar with Linux OS. Must have exposure to observability tools. Ability to analyze logs for error and exceptions – Ability to drill down errors at application level etc. Should be familiar with various monitoring tools – Splunk/Kibana/Grafana/Prometheus etc. General operational exposure such as good troubleshooting skills, understanding of system’s capacity, bottlenecks, basics of memory, CPU, OS, storage, and networks. Strong verbal and written communication skills are mandatory. Excellent analytical and problem-solving skills are mandatory. Good knowledge of Agile or Scrum methodologies Should be self-motivated and able to lead Devops team. Skills Required : Docker, Kubernetes, CI/CD, ansible, prometheus, shell scripting, linux Role : Roles & Responsibilities Good aptitude and attitude, Flexible to upskill and cross-train. Willing to provide onsite/night overlaps. 
- Able to lead and guide the team on technical challenges; manage a team of 5+ engineers and keep high-level track of their work and deliverables.
- Stay current with industry trends and developments and share DevOps culture to improve software-delivery practice at scale.
- Develop scripts for provisioning cloud resources.
- Assist in operational enablement across different environments.
- Assist use-case teams in deploying artifacts to cloud environments.
- Automate the creation of CI/CD pipelines to build and deploy from Dev into UAT and then on to production.
- Create and customize Docker images on the Kubernetes cluster.
- Work with infra, security, and networking teams to resolve firewall and port issues in the cloud.
- Monitor daily operations: service restoration and debugging of job failures; assist use-case teams in troubleshooting failures.
- Identify manual processes and activities and automate them using shell, Python, etc.
- Continuously monitor, troubleshoot, and debug issues in the ecosystem.
- Prepare knowledge-base articles and documents on environment configuration, deployment, etc.
- Contribute to improving the efficiency of the assignment through quality improvements and innovative suggestions.

Experience: 8 to 12 years
Job Reference Number: 12801
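The log-analysis expectation in the listing above (analyzing logs for errors and exceptions, and drilling down to errors at the application level) can be sketched with a short script. This is an illustrative sketch only, not part of the listing; the sample lines and the "DATE TIME LEVEL logger - message" layout are hypothetical.

```python
import re
from collections import Counter

# Hypothetical sample lines in a common "DATE TIME LEVEL logger - message" layout.
SAMPLE_LOGS = [
    "2024-05-01 10:00:01 INFO app.web - request served",
    "2024-05-01 10:00:02 ERROR app.db - connection refused",
    "2024-05-01 10:00:03 ERROR app.db - connection refused",
    "2024-05-01 10:00:04 WARN app.cache - miss ratio high",
    "2024-05-01 10:00:05 ERROR app.web - NullPointerException in handler",
]

LINE_RE = re.compile(r"^\S+ \S+ (?P<level>\w+)\s+(?P<logger>\S+) - (?P<msg>.*)$")

def error_summary(lines):
    """Count ERROR lines per logger so the noisiest component surfaces first."""
    counts = Counter()
    for line in lines:
        m = LINE_RE.match(line)
        if m and m.group("level") == "ERROR":
            counts[m.group("logger")] += 1
    return counts.most_common()

print(error_summary(SAMPLE_LOGS))
```

In practice the same drill-down is done in Kibana or Splunk with a level/logger aggregation; the script shows the underlying idea.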

Posted 6 days ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka

Remote


Location: Bangalore - Karnataka, India - EOIZ Industrial Area
Worker Type Reference: Regular - Permanent
Pay Rate Type: Salary
Career Level: T3(B)
Job ID: R-44982-2025

Description & Requirements

Job Title: Data Engineer

About Us: Samsung Ads is an advanced Advertising Technology Company that focuses on enabling brands to connect with Samsung TV audiences as they are exposed to digital media across all devices. Being part of an international company such as Samsung and doing business around the world means that we get to work on big, complex projects with stakeholders and teams located around the globe. Samsung Ads is an advanced advertising platform where advertisers find and connect with audiences across over 100M Samsung Households around the world. Samsung Ads delivers high-quality audience targeting powered by three key components: first-party audience data at scale, world-class data science, and brand-safe cross-device ad inventory. Using our data, insights, and scale, we help advertisers reach consumers across CTV, our native apps, mobile and desktop. With Samsung Ads, advertisers can buy the way they want, reach who they need, and prove business results. Our purpose is to deliver unparalleled results for our customers. By using the industry's most comprehensive data to build the world's smartest connected audience platform, Samsung Ads is uniquely positioned to transform the advertising landscape. We deliver on Samsung Electronics' 51-year commitment to excellence through smart, easy, effective advertising solutions to make advanced video advertising work.

The Samsung Finance+ team is looking for a highly skilled and motivated data engineer. The successful candidate should have a deep understanding of building efficient data pipelines, data-processing techniques, and statistical modelling, and a passion for utilizing data to drive business growth.
About the Role
- Strong analytical thinking and problem-solving skills, with the ability to translate complex data into actionable insights.
- Excellent communication skills, with the ability to effectively convey complex findings to both technical and non-technical stakeholders.
- The candidate will work from SRIB Bangalore; working from the office 3 days a week is mandatory.

What You Will Do
- Python/Spark development with AWS EMR, Lambda, and other AWS data-oriented services.

What You Need to Be Successful
- Development experience with Elasticsearch, OpenSearch, or SOLR. The engineer will be part of the Search team for e-commerce products (in the vein of Amazon/Flipkart/Myntra), developing the search service (ranking, search APIs, query optimization, etc.). Please note that ELK (Elasticsearch, Logstash, Kibana) operational experience is different and does not match this expectation.
- Good to have: ML experience.

Bonus Points if You Have
- Bachelor's or Master's degree in Computer Science, Statistics, Mathematics, or a related field.
- Proven experience in data processing, ETL, statistical modelling, and machine learning, preferably in an offline-retail or e-commerce setting.
- 5 years of industry experience working with large volumes of data, evaluating patterns and trends, and developing models.

What We Offer
- Flexible work environment, allowing full-time remote work globally for positions that can be performed outside a HARMAN or customer location.
- Access to employee discounts on world-class Harman and Samsung products (JBL, HARMAN Kardon, AKG, etc.).
- Extensive training opportunities through our own HARMAN University.
- Competitive wellness benefits.
- Tuition reimbursement.
- "Be Brilliant" employee recognition and rewards program.
- An inclusive and diverse work environment that fosters and encourages professional and personal development.

You Belong Here
HARMAN is committed to making every employee feel welcomed, valued, and empowered.
No matter what role you play, we encourage you to share your ideas, voice your distinct perspective, and bring your whole self with you – all within a support-minded culture that celebrates what makes each of us unique. We also recognize that learning is a lifelong pursuit and want you to flourish. We proudly offer added opportunities for training, development, and continuing education, further empowering you to live the career you want. About HARMAN: Where Innovation Unleashes Next-Level Technology Ever since the 1920s, we’ve been amplifying the sense of sound. Today, that legacy endures, with integrated technology platforms that make the world smarter, safer, and more connected. Across automotive, lifestyle, and digital transformation solutions, we create innovative technologies that turn ordinary moments into extraordinary experiences. Our renowned automotive and lifestyle solutions can be found everywhere, from the music we play in our cars and homes to venues that feature today’s most sought-after performers, while our digital transformation solutions serve humanity by addressing the world’s ever-evolving needs and demands. Marketing our award-winning portfolio under 16 iconic brands, such as JBL, Mark Levinson, and Revel, we set ourselves apart by exceeding the highest engineering and design standards for our customers, our partners and each other. If you’re ready to innovate and do work that makes a lasting impact, join our talent community today ! HARMAN is an Equal Opportunity/Affirmative Action employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or Protected Veterans status. HARMAN offers a great work environment, challenging career opportunities, professional training, and competitive compensation. 
( www.harman.com ) Important Notice: Recruitment Scams Please be aware that HARMAN recruiters will always communicate with you from an '@harman.com' email address. We will never ask for payments, banking, credit card, personal financial information or access to your LinkedIn/email account during the screening, interview, or recruitment process. If you are asked for such information or receive communication from an email address not ending in '@harman.com' about a job with HARMAN, please cease communication immediately and report the incident to us through: harmancareers@harman.com. HARMAN is proud to be an Equal Opportunity / Affirmative Action employer. All qualified applicants will receive consideration for employment without regard to race, religion, color, national origin, gender (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics.
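The search-service work described in the listing above (ranking, search APIs, query optimization on Elasticsearch/OpenSearch) centers on composing query DSL. Below is a minimal sketch of a query builder that blends text relevance with a popularity signal; the index field names, boosts, and the `popularity` field are hypothetical, not from the listing.

```python
def build_product_query(text, size=10):
    """Build a hypothetical e-commerce search query (Elasticsearch/OpenSearch
    query DSL): a multi_match over boosted fields wrapped in a function_score
    that folds item popularity into the ranking."""
    return {
        "size": size,
        "query": {
            "function_score": {
                "query": {
                    "multi_match": {
                        "query": text,
                        # Boost title matches over brand and description.
                        "fields": ["title^3", "brand^2", "description"],
                        "fuzziness": "AUTO",
                    }
                },
                # Blend text relevance with item popularity.
                "field_value_factor": {
                    "field": "popularity",
                    "modifier": "log1p",
                    "missing": 0,
                },
                "boost_mode": "sum",
            }
        },
    }

q = build_product_query("running shoes", size=5)
print(q["size"])
```

The resulting dict would be POSTed as JSON to the index's `_search` endpoint; tuning the boosts and the score function is the query-optimization work the listing refers to.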

Posted 1 week ago

Apply

0.0 - 5.0 years

0 Lacs

Chennai, Tamil Nadu

On-site


Comcast brings together the best in media and technology. We drive innovation to create the world's best entertainment and online experiences. As a Fortune 50 leader, we set the pace in a variety of innovative and fascinating businesses and create career opportunities across a wide range of locations and disciplines. We are at the forefront of change and move at an amazing pace, thanks to our remarkable people, who bring cutting-edge products and services to life for millions of customers every day. If you share in our passion for teamwork, our vision to revolutionize industries and our goal to lead the future in media and technology, we want you to fast-forward your career at Comcast. Job Summary Responsible for designing, building and overseeing the deployment and operation of technology architecture, solutions and software to capture, manage, store and utilize structured and unstructured data from internal and external sources. Establishes and builds processes and structures based on business and technical requirements to channel data from multiple inputs, route appropriately and store using any combination of distributed (cloud) structures, local databases, and other applicable storage forms as required. Develops technical tools and programming that leverage artificial intelligence, machine learning and big-data techniques to cleanse, organize and transform data and to maintain, defend and update data structures and integrity on an automated basis. Creates and establishes design standards and assurance processes for software, systems and applications development to ensure compatibility and operability of data connections, flows and storage requirements. Reviews internal and external business and product requirements for data operations and activity and suggests changes and upgrades to systems and storage to accommodate ongoing needs. 
Work with data modelers/analysts to understand the business problems they are trying to solve, then create or augment data assets to feed their analysis. Integrates knowledge of business and functional priorities. Acts as a key contributor in a complex and crucial environment. May lead teams or projects and shares expertise.

Job Description
Position: Data Engineer 4
Experience: 8 to 11.5 years
Job Location: Chennai, Tamil Nadu

Requirements
- Databases: deep knowledge of relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB, Cassandra, Couchbase). (MUST)
- Big data technologies: experience with Spark, Kafka, and other big-data ecosystem tools. (NICE TO HAVE)
- Cloud platforms: experience with cloud services such as AWS, Azure, or Google Cloud Platform, with a particular focus on data-engineering services. (NICE TO HAVE)
- Version control: experience with version-control systems like Git. (MUST)
- CI/CD: knowledge of CI/CD pipelines for automating development and deployment processes. (MUST)
- Proficiency in Elasticsearch and experience managing large-scale clusters. (MUST)
- Hands-on experience with containerization technologies like Docker and Kubernetes. (MUST for Docker)
- Strong programming skills in scripting languages such as Python, Bash, or similar. (NICE TO HAVE)

Key Responsibilities
- Design, develop, and maintain scalable data pipelines and infrastructure.
- Ensure compliance with security regulations and implement advanced security measures to protect company data.
- Implement and manage CI/CD pipelines for data applications.
- Work with containerization technologies (Docker, Kubernetes) to deploy and manage data services.
- Optimize and manage Elasticsearch clusters for log ingestion, and tools such as Logstash, Fluentd, and Promtail used to forward logs to an Elasticsearch instance or another log-ingestion tool (Loki + Grafana).
Collaborate with other departments (e.g., Data Science, IT, DevOps) to integrate data solutions with existing business systems. Optimize the performance of data pipelines and resolve data integrity and quality issues. Document data processes and architectures to ensure transparency and facilitate maintenance. Monitor industry trends and adopt best practices to continuously improve our data engineering solutions. Core Responsibilities Develops data structures and pipelines aligned to established standards and guidelines to organize, collect, standardize and transform data that helps generate insights and address reporting needs. Focuses on ensuring data quality during ingest, processing as well as final load to the target tables. Creates standard ingestion frameworks for structured and unstructured data as well as checking and reporting on the quality of the data being processed. Creates standard methods for end users / downstream applications to consume data including but not limited to database views, extracts and Application Programming Interfaces. Develops and maintains information systems (e.g., data warehouses, data lakes) including data access Application Programming Interfaces. Participates in the implementation of solutions via data architecture, data engineering, or data manipulation on both on-prem platforms like Kubernetes and Teradata as well as Cloud platforms like Databricks. Determines the appropriate storage platform across different on-prem (minIO and Teradata) and Cloud (AWS S3, Redshift) depending on the privacy, access and sensitivity requirements. Understands the data lineage from source to the final semantic layer along with the transformation rules applied to enable faster troubleshooting and impact analysis during changes. Collaborates with technology and platform management partners to optimize data sourcing and processing rules to ensure appropriate data quality as well as process optimization. 
Creates and establishes design standards and assurance processes for software, systems and applications development to ensure compatibility and operability of data connections, flows and storage requirements. Reviews internal and external business and product requirements for data operations and activity and suggests changes and upgrades to systems and storage to accommodate ongoing needs. Develops strategies for data acquisition, archive recovery, and database implementation. Manages data migrations/conversions and troubleshoots data-processing issues. Understands data sensitivity and customer data-privacy rules and regulations and applies them consistently in all Information Lifecycle Management activities. Identifies and reacts to system notifications and logs to ensure quality standards for databases and applications. Solves abstract problems beyond a single development language or situation by reusing data files and flags already set. Solves critical issues and shares knowledge, such as trends and aggregate volumes, regarding specific data sources. Consistent exercise of independent judgment and discretion in matters of significance. Regular, consistent and punctual attendance. Must be able to work nights and weekends, variable schedule(s) as necessary. Other duties and responsibilities as assigned. Employees at all levels are expected to: Understand our Operating Principles; make them the guidelines for how you do your job. Own the customer experience - think and act in ways that put our customers first, give them seamless digital options at every touchpoint, and make them promoters of our products and services. Know your stuff - be enthusiastic learners, users and advocates of our game-changing technology, products and services, especially our digital tools and experiences. Win as a team - make big things happen by working together and being open to new ideas. 
Be an active part of the Net Promoter System - a way of working that brings more employee and customer feedback into the company - by joining huddles, making call backs and helping us elevate opportunities to do better for our customers. Drive results and growth. Respect and promote inclusion & diversity. Do what's right for each other, our customers, investors and our communities. Disclaimer:This information has been designed to indicate the general nature and level of work performed by employees in this role. It is not designed to contain or be interpreted as a comprehensive inventory of all duties, responsibilities and qualifications. Comcast is proud to be an equal opportunity workplace. We will consider all qualified applicants for employment without regard to race, color, religion, age, sex, sexual orientation, gender identity, national origin, disability, veteran status, genetic information, or any other basis protected by applicable law. Base pay is one part of the Total Rewards that Comcast provides to compensate and recognize employees for their work. Most sales positions are eligible for a Commission under the terms of an applicable plan, while most non-sales positions are eligible for a Bonus. Additionally, Comcast provides best-in-class Benefits to eligible employees. We believe that benefits should connect you to the support you need when it matters most, and should help you care for those who matter most. That’s why we provide an array of options, expert guidance and always-on tools, that are personalized to meet the needs of your reality – to help support you physically, financially and emotionally through the big milestones and in your everyday life. Please visit the compensation and benefits summary on our careers site for more details. 
Education Bachelor's Degree While possessing the stated degree is preferred, Comcast also may consider applicants who hold some combination of coursework and experience, or who have extensive related professional experience. Relevant Work Experience 7-10 Years
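The log-ingestion responsibility described in the listing above (forwarders such as Logstash, Fluentd, or Promtail shipping logs into an Elasticsearch instance) ultimately produces bulk index requests. The sketch below builds the NDJSON body that the Elasticsearch/OpenSearch `_bulk` API expects; the index name and the sample documents are hypothetical.

```python
import json

def to_bulk_ndjson(index, docs):
    """Render documents as an NDJSON body for the Elasticsearch/OpenSearch
    _bulk API: one action line followed by one source line per document.
    The trailing newline is required by the API."""
    lines = []
    for doc in docs:
        lines.append(json.dumps({"index": {"_index": index}}))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"

docs = [
    {"ts": "2024-05-01T10:00:00Z", "level": "ERROR", "msg": "connection refused"},
    {"ts": "2024-05-01T10:00:01Z", "level": "INFO", "msg": "retry succeeded"},
]
body = to_bulk_ndjson("app-logs-2024.05.01", docs)
# body would be POSTed to the cluster's /_bulk endpoint with
# Content-Type: application/x-ndjson; forwarders like Logstash or Fluentd
# do the equivalent batching under the hood.
print(body)
```

Batching many documents per `_bulk` call, rather than indexing one at a time, is the main lever for ingestion throughput on large clusters.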

Posted 1 week ago

Apply

0.0 - 5.0 years

0 Lacs

Chennai, Tamil Nadu

On-site


Overview Our analysts transform data into meaningful insights that drive strategic decision making. They analyze trends, interpret data, and discover opportunities. Working cross-functionally, they craft narratives from the numbers - directly contributing to our success. Their work influences key business decisions and shapes the direction of Comcast. Success Profile What makes a successful Data Engineer 4 at Comcast? Check out these top traits and explore role-specific skills in the job description below. Good Listener Problem Solver Organized Collaborative Perceptive Analytical Benefits We’re proud to offer comprehensive benefits to help support you physically, financially and emotionally through the big milestones and in your everyday life. Paid Time off We know how important it can be to spend time away from work to relax, recover from illness, or take time to care for others needs. Physical Wellbeing We offer a range of benefits and support programs to ensure that you and your loved ones get the care you need. Financial Wellbeing These benefits give you personalized support designed entirely around your unique needs today and for the future. Emotional Wellbeing No matter how you’re feeling or what you’re dealing with, there are benefits to help when you need it, in the way that works for you. Life Events + Family Support Benefits that support you no matter where you are in life’s journey.

Data Engineer 4
Location: Chennai, India
Req ID: R412866
Job Type: Full Time
Category: Analytics
Date posted: 06/10/2025

Comcast brings together the best in media and technology. We drive innovation to create the world's best entertainment and online experiences. As a Fortune 50 leader, we set the pace in a variety of innovative and fascinating businesses and create career opportunities across a wide range of locations and disciplines. 
We are at the forefront of change and move at an amazing pace, thanks to our remarkable people, who bring cutting-edge products and services to life for millions of customers every day. If you share in our passion for teamwork, our vision to revolutionize industries and our goal to lead the future in media and technology, we want you to fast-forward your career at Comcast. Job Summary Responsible for designing, building and overseeing the deployment and operation of technology architecture, solutions and software to capture, manage, store and utilize structured and unstructured data from internal and external sources. Establishes and builds processes and structures based on business and technical requirements to channel data from multiple inputs, route appropriately and store using any combination of distributed (cloud) structures, local databases, and other applicable storage forms as required. Develops technical tools and programming that leverage artificial intelligence, machine learning and big-data techniques to cleanse, organize and transform data and to maintain, defend and update data structures and integrity on an automated basis. Creates and establishes design standards and assurance processes for software, systems and applications development to ensure compatibility and operability of data connections, flows and storage requirements. Reviews internal and external business and product requirements for data operations and activity and suggests changes and upgrades to systems and storage to accommodate ongoing needs. Work with data modelers/analysts to understand the business problems they are trying to solve then create or augment data assets to feed their analysis. Integrates knowledge of business and functional priorities. Acts as a key contributor in a complex and crucial environment. May lead teams or projects and shares expertise. 
Job Description
Position: Data Engineer 4
Experience: 8 to 11.5 years
Job Location: Chennai, Tamil Nadu

Requirements
- Databases: deep knowledge of relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB, Cassandra, Couchbase). (MUST)
- Big data technologies: experience with Spark, Kafka, and other big-data ecosystem tools. (NICE TO HAVE)
- Cloud platforms: experience with cloud services such as AWS, Azure, or Google Cloud Platform, with a particular focus on data-engineering services. (NICE TO HAVE)
- Version control: experience with version-control systems like Git. (MUST)
- CI/CD: knowledge of CI/CD pipelines for automating development and deployment processes. (MUST)
- Proficiency in Elasticsearch and experience managing large-scale clusters. (MUST)
- Hands-on experience with containerization technologies like Docker and Kubernetes. (MUST for Docker)
- Strong programming skills in scripting languages such as Python, Bash, or similar. (NICE TO HAVE)

Key Responsibilities
- Design, develop, and maintain scalable data pipelines and infrastructure.
- Ensure compliance with security regulations and implement advanced security measures to protect company data.
- Implement and manage CI/CD pipelines for data applications.
- Work with containerization technologies (Docker, Kubernetes) to deploy and manage data services.
- Optimize and manage Elasticsearch clusters for log ingestion, and tools such as Logstash, Fluentd, and Promtail used to forward logs to an Elasticsearch instance or another log-ingestion tool (Loki + Grafana).
- Collaborate with other departments (e.g., Data Science, IT, DevOps) to integrate data solutions with existing business systems.
- Optimize the performance of data pipelines and resolve data-integrity and quality issues.
- Document data processes and architectures to ensure transparency and facilitate maintenance.
- Monitor industry trends and adopt best practices to continuously improve our data-engineering solutions.
Core Responsibilities Develops data structures and pipelines aligned to established standards and guidelines to organize, collect, standardize and transform data that helps generate insights and address reporting needs. Focuses on ensuring data quality during ingest, processing as well as final load to the target tables. Creates standard ingestion frameworks for structured and unstructured data as well as checking and reporting on the quality of the data being processed. Creates standard methods for end users / downstream applications to consume data including but not limited to database views, extracts and Application Programming Interfaces. Develops and maintains information systems (e.g., data warehouses, data lakes) including data access Application Programming Interfaces. Participates in the implementation of solutions via data architecture, data engineering, or data manipulation on both on-prem platforms like Kubernetes and Teradata as well as Cloud platforms like Databricks. Determines the appropriate storage platform across different on-prem (minIO and Teradata) and Cloud (AWS S3, Redshift) depending on the privacy, access and sensitivity requirements. Understands the data lineage from source to the final semantic layer along with the transformation rules applied to enable faster troubleshooting and impact analysis during changes. Collaborates with technology and platform management partners to optimize data sourcing and processing rules to ensure appropriate data quality as well as process optimization. Creates and establishes design standards and assurance processes for software, systems and applications development to ensure compatibility and operability of data connections, flows and storage requirements. Reviews internal and external business and product requirements for data operations and activity and suggests changes and upgrades to systems and storage to accommodate ongoing needs. 
Develops strategies for data acquisition, archive recovery, and database implementation. Manages data migrations/conversions and troubleshoots data-processing issues. Understands data sensitivity and customer data-privacy rules and regulations and applies them consistently in all Information Lifecycle Management activities. Identifies and reacts to system notifications and logs to ensure quality standards for databases and applications. Solves abstract problems beyond a single development language or situation by reusing data files and flags already set. Solves critical issues and shares knowledge, such as trends and aggregate volumes, regarding specific data sources. Consistent exercise of independent judgment and discretion in matters of significance. Regular, consistent and punctual attendance. Must be able to work nights and weekends, variable schedule(s) as necessary. Other duties and responsibilities as assigned. Employees at all levels are expected to: Understand our Operating Principles; make them the guidelines for how you do your job. Own the customer experience - think and act in ways that put our customers first, give them seamless digital options at every touchpoint, and make them promoters of our products and services. Know your stuff - be enthusiastic learners, users and advocates of our game-changing technology, products and services, especially our digital tools and experiences. Win as a team - make big things happen by working together and being open to new ideas. Be an active part of the Net Promoter System - a way of working that brings more employee and customer feedback into the company - by joining huddles, making call backs and helping us elevate opportunities to do better for our customers. Drive results and growth. Respect and promote inclusion & diversity. Do what's right for each other, our customers, investors and our communities. 
Disclaimer:This information has been designed to indicate the general nature and level of work performed by employees in this role. It is not designed to contain or be interpreted as a comprehensive inventory of all duties, responsibilities and qualifications. Comcast is proud to be an equal opportunity workplace. We will consider all qualified applicants for employment without regard to race, color, religion, age, sex, sexual orientation, gender identity, national origin, disability, veteran status, genetic information, or any other basis protected by applicable law. Base pay is one part of the Total Rewards that Comcast provides to compensate and recognize employees for their work. Most sales positions are eligible for a Commission under the terms of an applicable plan, while most non-sales positions are eligible for a Bonus. Additionally, Comcast provides best-in-class Benefits to eligible employees. We believe that benefits should connect you to the support you need when it matters most, and should help you care for those who matter most. That’s why we provide an array of options, expert guidance and always-on tools, that are personalized to meet the needs of your reality – to help support you physically, financially and emotionally through the big milestones and in your everyday life. Please visit the compensation and benefits summary on our careers site for more details. Education Bachelor's Degree While possessing the stated degree is preferred, Comcast also may consider applicants who hold some combination of coursework and experience, or who have extensive related professional experience. Relevant Work Experience 7-10 Years

Posted 1 week ago

Apply

55.0 years

0 Lacs

Pune, Maharashtra, India

Remote


Capgemini is actively seeking a skilled Elasticsearch Developer to join our team!

Responsibilities:
- Maintain and optimize existing applications built with Spring Boot and Elasticsearch.
- Monitor and troubleshoot application performance issues, ensuring high availability and reliability.
- Implement and manage Elasticsearch clusters, including indexing, querying, and data management.
- Develop and maintain RESTful APIs using Spring Boot to interact with Elasticsearch.
- Collaborate with development teams to integrate new features and enhancements.
- Perform regular updates and patches to ensure security and compliance.
- Document processes, configurations, and troubleshooting steps.
- Provide support and guidance to other team members on Elasticsearch and Spring Boot best practices.
- Excellent problem-solving skills and attention to detail.

Requirements
Skills required:
- Elasticsearch: in-depth knowledge.
- Logstash: pipeline creation.
- Spring Boot (Java): integrating with Elasticsearch and running queries.
- AWS: hosting applications on EC2, Lambda, CloudWatch.
- Able to understand different applications.

The skills below are good to have:
- AWS Step Functions
- Python
- IBM ACE
- Unix scripting
- CI/CD pipelines and DevOps

Benefits
Competitive compensation and benefits package:
- Competitive salary and performance-based bonuses
- Comprehensive benefits package
- Career development and training opportunities
- Flexible work arrangements (remote and/or office-based)
- Dynamic and inclusive work culture within a globally renowned group
- Private health insurance
- Pension plan
- Paid time off
- Training & development
- Performance bonus
Note: Benefits differ based on employee level.

About Capgemini
Capgemini is a global leader in partnering with companies to transform and manage their business by harnessing the power of technology. The Group is guided every day by its purpose of unleashing human energy through technology for an inclusive and sustainable future. 
It is a responsible and diverse organization of over 340,000 team members in more than 50 countries. With its strong 55-year heritage and deep industry expertise, Capgemini is trusted by its clients to address the entire breadth of their business needs, from strategy and design to operations, fueled by the fast-evolving and innovative world of cloud, data, AI, connectivity, software, digital engineering and platforms. The Group reported €22.5 billion in revenues in 2023. https://www.capgemini.com/us-en/about-us/who-we-are/
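The "Logstash - Pipeline creation" skill named in the listing above usually means authoring pipeline configs like the following minimal sketch: tail a log file, parse it with grok, and ship it to Elasticsearch. Paths, the log layout, and the Elasticsearch host are placeholders, not details from the listing.

```conf
# Minimal Logstash pipeline: read app logs, parse them, ship to Elasticsearch.
input {
  file {
    path => "/var/log/app/*.log"      # placeholder path
    start_position => "beginning"
  }
}

filter {
  grok {
    # Parse "TIMESTAMP LEVEL message" style lines (assumed layout).
    match => { "message" => "%{TIMESTAMP_ISO8601:ts} %{LOGLEVEL:level} %{GREEDYDATA:msg}" }
  }
  date {
    match => ["ts", "ISO8601"]        # use the parsed timestamp as @timestamp
  }
}

output {
  elasticsearch {
    hosts => ["http://elasticsearch.example.internal:9200"]  # placeholder host
    index => "app-logs-%{+YYYY.MM.dd}"
  }
}
```

Daily index names like the one above keep retention manageable; the grok pattern is the part tuned per application log format.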

Posted 1 week ago

Apply

6.0 - 10.0 years

8 - 12 Lacs

Bengaluru

Work from Office


About The Role: Job Title: ELK & Grafana Architect. Design, implement, and optimize ELK solutions to meet data analytics and search requirements. Collaborate with development and operations teams to enhance logging capabilities. Implement and configure components of the Elastic Stack, including Filebeat, Metricbeat, Winlogbeat, Logstash, and Kibana. Create and maintain comprehensive documentation for Elastic Stack configurations and processes. Ensure seamless integration between the various Elastic Stack components. Develop and maintain advanced Kibana dashboards and visualizations. Design and implement solutions for centralized logs, infrastructure health metrics, and distributed tracing across different applications. Implement Grafana for visualization and monitoring, including Prometheus and Loki for metrics and logs management. Build detailed technical designs related to monitoring as part of complex projects. Ensure engagement with customers and deliver business value.

Requirements: 6+ years of experience as an ELK/Elasticsearch Architect. Hands-on experience with Prometheus, Loki, OpenTelemetry, and Azure Monitor. Experience with data pipelines and redirecting Prometheus metrics. Proficiency in scripting and automation tools such as Python, Ansible, and Bash. Familiarity with CI/CD deployment pipelines (Ansible, Git). Strong knowledge of performance monitoring, metrics, capacity planning, and management. Excellent communication skills with the ability to articulate technical details to different audiences. Experience with application onboarding, capturing requirements, understanding data sources, and architecture diagrams. Experience with OpenTelemetry monitoring and logging solutions.

Competency Building and Branding: Ensure completion of necessary trainings and certifications. Develop Proofs of Concept (POCs), case studies, demos, etc. for new growth areas based on market and customer research. Develop and present a Wipro point of view on solution design and architecture by writing white papers, blogs, etc. Attain market referenceability and recognition through top analyst rankings, client testimonials and partner credits. Be the voice of Wipro's thought leadership by speaking in internal and external forums. Mentor developers, designers and junior architects in the project for their further career development and enhancement. Contribute to the architecture practice by conducting selection interviews, etc.
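As a rough illustration of the Elasticsearch search work this role covers, the sketch below composes the kind of query body behind a Kibana-style error dashboard. The field names (`service.keyword`, `level.keyword`, `@timestamp`) and index conventions are illustrative assumptions, not a schema from the posting.

```python
# Sketch: an Elasticsearch bool query of the kind an ELK architect reviews
# when tuning dashboards. Field names here are hypothetical examples.

def error_logs_query(service: str, minutes: int = 15) -> dict:
    """Return a query body for recent ERROR logs of one service."""
    return {
        "query": {
            "bool": {
                "filter": [
                    {"term": {"service.keyword": service}},
                    {"term": {"level.keyword": "ERROR"}},
                    {"range": {"@timestamp": {"gte": f"now-{minutes}m"}}},
                ]
            }
        },
        # Top error messages, as a Kibana data table panel would show them.
        "aggs": {
            "top_messages": {"terms": {"field": "message.keyword", "size": 10}}
        },
        "size": 0,  # aggregations only; skip returning raw hits
    }

body = error_logs_query("checkout", minutes=30)
```

A body like this would be sent to the `_search` endpoint of the target index; only the aggregation buckets come back, which keeps dashboard queries cheap.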

Posted 1 week ago

Apply

5.0 - 8.0 years

7 - 10 Lacs

Pune

Work from Office


Role Purpose: The purpose of the role is to support process delivery by ensuring daily performance of the Production Specialists, resolving technical escalations, and developing technical capability within the Production Specialists. At least 3 years of experience in Java/J2EE, REST APIs, and RDBMS. At least 2 years of work experience in ReactJS. Ready for an individual contributor role, having done something similar in the last 6 months. Fair understanding of microservices and cloud (Azure preferred). Practical knowledge of object-oriented programming concepts and design patterns. Experience in implementation of microservices, service-oriented architecture, and multi-tier application platforms. Good knowledge of JPA and SQL (preferably Oracle SQL). Experience working with RESTful web services. Hands-on experience in tracing applications in distributed/microservices environments using modern tools (Grafana, Prometheus, Splunk, Zipkin, Elasticsearch, Kibana, Logstash or similar). Build people capability to ensure operational excellence and maintain superior customer service levels for the existing account/client. Mentor and guide Production Specialists on improving technical knowledge. Collate trainings to be conducted as triages to bridge the skill gaps identified through interviews with the Production Specialists. Develop and conduct trainings (triages) within products for Production Specialists as per target. Inform the client about the triages being conducted. Undertake product trainings to stay current with product features, changes and updates. Enroll in product-specific and any other trainings per client requirements/recommendations. Identify and document the most common problems and recommend appropriate resolutions to the team. Update job knowledge by participating in self-learning opportunities and maintaining personal networks.

Performance parameters and measures: 1. Process: number of cases resolved per day, compliance to process and quality standards, meeting process-level SLAs, Pulse score, customer feedback, NSAT/ESAT. 2. Team Management: productivity, efficiency, absenteeism. 3. Capability Development: triages completed, technical test performance. Mandatory Skills: .NET.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Bengaluru East, Karnataka, India

On-site


03/18/2020: Carmatec is looking for passionate DevOps Engineers to be a part of our InstaCarma team. Not only will you have the chance to make your mark as an established DevOps Engineer, but you will also get to work and interact with seasoned professionals deeply committed to revolutionizing the Cloud scenario. Job Responsibilities: Work on infrastructure provisioning/configuration management tools; we use Packer, Terraform and Chef. Develop automation tools/scripts; we use Bash/Python/Ruby. Responsible for continuous integration and artefact management; we use Jenkins and Artifactory. Set up automated deployment pipelines for microservices running as Docker containers. Set up monitoring, alerting and metrics scraping for Java/Scala/Play applications using Prometheus and Graylog2 integrated with PagerDuty and Hipchat for alerting, reporting and monitoring. Will be doing on-call production support and related incident management, reporting and postmortems. Create runbooks and wikis for incidents, troubleshooting performed, etc. Be a proactive member of your team by sharing knowledge. Resource scheduling and orchestration using Mesos/Marathon. Work closely with development teams to ensure that platforms are designed with operability in mind. Function well in a fast-paced, rapidly changing environment. Required Skills: A basic understanding of DevOps tools and automation frameworks. Outstanding organization, documentation, and communication skills. Must be skilled in Linux system administration (Ubuntu/CentOS). Knowledge of AWS is a must (EC2, EBS, S3, Route53, CloudFront, SG, IAM, RDS, etc.). Strong foundation in Docker internals and troubleshooting. Should know at least one configuration management tool: Chef/Ansible/Puppet. Good to have experience in at least one scripting language: Bash/Python/Ruby. Experience in at least one NoSQL database system is a plus: Elasticsearch/MongoDB/Redis/Cassandra. Experience in a CI tool like Jenkins is preferred. Good understanding of how a 3-tier architecture works. Basic knowledge of revision control tools like Git/Subversion. Should have experience working with monitoring tools like Nagios, New Relic, etc. Should be proficient in log management using tools like rsyslog, logstash, etc. Working knowledge of the following: cron, haproxy/nginx, LVM, MySQL, BIND (DNS), iptables. Experience with Atlassian tools (Jira, Hipchat, Confluence) will be a plus. Experience: 5+ years. Location: Bangalore. If the above description is of interest, please reply with your updated resume to teamhr@carmatec.com. Apply now
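A minimal sketch of the log-triage side of the rsyslog/logstash proficiency asked for above: parse a severity level out of application log lines and tally them. The log format used here is an assumption for illustration, not Carmatec's actual format.

```python
import re
from collections import Counter

# Assumed line format: "<date> <time> <LEVEL> <message>".
LINE_RE = re.compile(r"^\S+ \S+ (?P<level>[A-Z]+) (?P<msg>.*)$")

def severity_counts(lines):
    """Count log lines per severity level; skip lines that don't match."""
    counts = Counter()
    for line in lines:
        m = LINE_RE.match(line)
        if m:
            counts[m.group("level")] += 1
    return counts

sample = [
    "2020-03-18 10:00:01 INFO request served",
    "2020-03-18 10:00:02 ERROR upstream timeout",
    "2020-03-18 10:00:03 ERROR upstream timeout",
]
counts = severity_counts(sample)
```

In practice Logstash's grok filters or rsyslog templates do this parsing in the pipeline itself; the same pattern-then-aggregate idea applies.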

Posted 1 week ago

Apply

5.0 - 7.0 years

4 - 9 Lacs

Gurgaon

On-site

About Payoneer: Founded in 2005, Payoneer is the global financial platform that removes friction from doing business across borders, with a mission to connect the world's underserved businesses to a rising global economy. We're a community of over 2,500 colleagues all over the world, working to serve customers and partners in over 190 markets. By taking the complexity out of financial workflows, including everything from global payments and compliance, to multi-currency and workforce management, to providing working capital and business intelligence, we give businesses the tools they need to work efficiently worldwide and grow with confidence. What will you do? Translate requirements and implement product features to perfection. Work directly with developers as a team lead and manage products to conceptualise, build, test and realise products. Deliver best-in-class code across a broad array of interactive web and mobile products. Work on continuous improvement of the products through innovation and learning; a knack for benchmarking and optimization. Develop features for highly complex, distributed transaction processing systems. Implement functionality for automated tests that will successfully pass and meet coding standards. Debug production issues and create subsequent mitigation plans. Optimize the performance of existing implementations. Stay abreast of new innovations and the latest technology trends and explore ways of leveraging these to improve the product in alignment with the business. What makes you a great match for us? 5-7 years of experience as a backend developer. Experience in Node.js and NestJS is a must, and experience in any of these is good to have: JavaScript, Java, TypeScript. Database architecture and design on SQL (like Postgres) and NoSQL (like MongoDB) systems. DOM manipulation and new CSS functionalities and processors. Memory management, multithreaded programming and background processing. Unit testing and a strong emphasis on TDD. Ability to debug moderately complex problems, analyze logs in production systems, and read existing code. Various data storage options, such as relational and NoSQL. Object-oriented design, data structures, and complexity analysis. CI/CD environments with Jenkins/CircleCI. Microservices. Agile development: SCRUM methodology, JIRA. Code versioning tools such as Git, Bitbucket, Mercurial, SVN, etc. WebSocket, Redis, Memcached, and cloud messaging frameworks (PUSH notifications). The ELK stack (Elasticsearch, Kibana, and Logstash) and REST API integration. The ability to deal with ambiguity. Critical thinker, problem solver and team player. #LI-KC1 The Payoneer Ways of Working: Act as our customer's partner on the inside, learning what they need and creating what will help them go further. Continuously improve, always striving for a higher standard than our last. Do it. Own it. Being fearlessly accountable in everything we do. Build each other up, helping each other grow, as professionals and people. If this sounds like a business, a community, and a mission you want to be part of, click now to apply. We are committed to providing a diverse and inclusive workplace. Payoneer is an equal opportunity employer, and all qualified applicants will receive consideration for employment regardless of race, color, ancestry, religion, sex, sexual orientation, gender identity, national origin, age, disability status, protected veteran status, or any other characteristic protected by law. If you require reasonable accommodation at any stage of the hiring process, please speak to the recruiter managing the role about any adjustments. Decisions about requests for reasonable accommodation are made on a case-by-case basis.

Posted 1 week ago

Apply

7.0 years

1 - 2 Lacs

Delhi

On-site

Job Description. Job Title: DevOps Engineer. Role Type: Fixed-term direct contract with Talpro. Duration: 6 months. Years of Experience: 7+ years. CTC Offered: INR 200K per month. Notice Period: Only immediate joiners. Work Mode: Hybrid (3 days from office weekly). Location: Delhi/NCR. Mandatory Skills: CI/CD & Automation Tools: Jenkins, GitHub Actions, GitLab CI, Azure DevOps, ArgoCD. Scripting: Python, Bash, PowerShell, Go. Automation Tools: Ansible, Puppet, Chef, SaltStack. Infrastructure as Code (IaC): Terraform, Pulumi. Containerization & Orchestration: Docker, Kubernetes (EKS, AKS, GKE), Helm. Monitoring Tools: Prometheus, Grafana. Logging Tools: ELK Stack (Elasticsearch, Logstash, Kibana), Splunk, Graylog. Security & Compliance: IAM, RBAC, firewalls, TLS/SSL, VPN; ISO 27001, SOC 2, GDPR. Networking & Load Balancing: TCP/IP, DNS, HTTP/S, VPN; Nginx, HAProxy, ALB/ELB. Databases: MySQL, PostgreSQL, MongoDB, Redis. Storage Solutions: SAN, NAS. Good to Have Skills: Experience with hybrid cloud and multi-cloud architectures. Familiarity with serverless frameworks. Knowledge of DevSecOps integrations. Cloud platform certifications (AWS, Azure, GCP).

Role Overview / Job Summary: We are looking for a highly skilled DevOps Engineer to design, implement, and maintain robust CI/CD pipelines, automation workflows, and infrastructure solutions across cloud-native and containerized environments. The ideal candidate will have deep expertise in infrastructure as code, automation, security compliance, and cloud orchestration technologies. You will work closely with development, QA, and security teams to enable seamless software delivery and reliable operations.

Key Responsibilities / Job Responsibilities: Design, implement, and manage robust CI/CD pipelines using industry-standard tools. Automate provisioning, configuration, and deployment using tools like Ansible, Terraform, and Pulumi. Manage containerization and orchestration with Docker and Kubernetes (EKS/AKS/GKE). Implement monitoring and alerting systems using Prometheus, Grafana, and the ELK stack. Enforce security best practices including IAM, firewall rules, and data encryption. Ensure compliance with ISO 27001, SOC 2, and GDPR standards. Troubleshoot system-level issues and optimize application performance. Collaborate with cross-functional teams to support Agile and DevOps delivery practices. Manage database configurations, backups, and storage integrations. Job Types: Full-time, Contractual/Temporary. Contract length: 6 months. Pay: ₹150,000.00 - ₹200,000.00 per month. Benefits: Commuter assistance, health insurance, Provident Fund. Schedule: Day shift, morning shift, weekend availability. Experience: DevOps: 7 years (required). Work Location: In person. Speak with the employer: +91 9840916415. Application Deadline: 12/06/2025
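The alerting work described above usually hinges on sustained-breach semantics, as in a Prometheus rule's `for:` clause: an alert fires only when a metric stays over its threshold for a sustained window, not on a single spike. The sketch below is a simplified model of that behaviour, not Prometheus's implementation; all numbers are illustrative.

```python
def alert_fires(samples, threshold, for_samples):
    """True if `samples` ends with at least `for_samples` consecutive
    values above `threshold` (a toy model of a sustained-breach alert)."""
    streak = 0
    for value in samples:
        # A value back under threshold resets the breach streak.
        streak = streak + 1 if value > threshold else 0
    return streak >= for_samples

cpu = [0.62, 0.95, 0.40, 0.91, 0.93, 0.97]   # hypothetical CPU utilisation
fires = alert_fires(cpu, threshold=0.9, for_samples=3)
```

Here the single 0.95 spike does not trigger anything; only the final run of three consecutive breaches does, which is exactly why `for:` windows cut alert noise.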

Posted 1 week ago

Apply

0 years

4 - 10 Lacs

Chennai

Remote

What you'll be doing... You will be playing a prominent role in providing system engineering solutions. You will be involved in engineering activities to build and provide reliable, scalable and highly available software systems and infrastructure. You will focus on developing automated self-healing solutions, IaC, capacity planning, performance optimization, and continuous improvement to make the application more resilient. The role also requires a focus on the ELK stack, with emphasis on the design, implementation and maintenance of log management and search solutions using Elasticsearch, Logstash and Kibana. What we're looking for... Optimizing data pipelines and troubleshooting issues to ensure system reliability and scalability. Designing, developing, and maintaining ELK Stack (Elasticsearch, Logstash, Kibana) solutions for log management, monitoring, and search. Building and maintaining dashboards, alerts, and other observability tools to gain insights into system performance and identify potential issues. Tuning Elasticsearch indexing and storage to optimize system performance and ensure scalability. Diagnosing and resolving issues related to the ELK Stack, data pipelines, and system performance. Working closely with development, operations, and other engineering teams to ensure system reliability and availability. Coordinating with multiple stakeholders for onboarding new applications into the ELK stack. Troubleshooting critical issues and performing root cause analysis. Performing stack upgrades as part of maintaining security standards. Providing technical recommendations to improve stack performance. Remediating product vulnerabilities and staying security compliant. Guiding and supporting fellow team members to ensure tasks/activities/projects are tracked and completed on time. You'll need to have: Bachelor's degree or four or more years of work experience. Four or more years of relevant experience required, demonstrated through work experience and/or military experience. Strong understanding of Elasticsearch, Logstash, and Kibana. Strong debugging and troubleshooting skills. Good experience with data pipeline tools like Fluent Bit, Logstash, MSK, etc. Strong skills in scripting languages like Python, Shell, etc. Experience with Ansible, Jenkins, and CloudFormation templates as part of Infrastructure as Code (IaC). Good experience with AWS cloud services (OpenSearch, MSK, EC2, ELB, Auto Scaling, CloudFormation templates, S3, CloudWatch, IAM, etc.). Even better if you have one or more of the following: Good experience with technologies including but not limited to WebLogic, Apache HTTPD, Apache Tomcat, Java, etc. If Verizon and this role sound like a fit for you, we encourage you to apply even if you don't meet every "even better" qualification listed above. Where you'll be working: In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager. Scheduled Weekly Hours: 40. Equal Employment Opportunity: Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics.
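Tuning Elasticsearch indexing and storage, as described above, starts with shard sizing arithmetic. The sketch below picks a primary shard count so shards stay near a target size; the 30 GB target reflects common guidance to keep shards in the tens of gigabytes, but every number here is an assumption to adjust per cluster.

```python
import math

def primary_shards(daily_gb: float, retention_days: int,
                   target_shard_gb: float = 30.0) -> int:
    """Rough primary shard count for an index holding `retention_days`
    of data ingested at `daily_gb` per day, near `target_shard_gb` each."""
    total_gb = daily_gb * retention_days
    return max(1, math.ceil(total_gb / target_shard_gb))

# Hypothetical workload: 50 GB/day retained for 30 days = 1500 GB total.
shards = primary_shards(daily_gb=50, retention_days=30)
```

In a daily-rollover or ILM setup the same arithmetic applies per index rather than to one large index, which is usually the better layout for retention-based deletion.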

Posted 1 week ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site


CACI India, RMZ Nexity, Tower 30, 4th Floor, Survey No. 83/1, Knowledge City, Raidurg Village, Silpa Gram Craft Village, Madhapur, Serilingampalle (M), Hyderabad, Telangana 500081, India. Req #1071. 23 April 2025. CACI International Inc is an American multinational professional services and information technology company headquartered in Northern Virginia. CACI provides expertise and technology to enterprise and mission customers in support of national security missions and government transformation for defense, intelligence, and civilian customers. CACI has approximately 23,000 employees worldwide. Headquartered in London, CACI Ltd is a wholly owned subsidiary of CACI International Inc, a publicly listed company on the NYSE with annual revenue in excess of US $6.2bn. Founded in 2022, CACI India is an exciting, growing and progressive business unit of CACI Ltd. CACI Ltd currently has over 2,000 intelligent professionals and is now adding many more from our Hyderabad and Pune offices. Through a rigorous emphasis on quality, CACI India has grown considerably to become one of the UK's most well-respected technology centres. We are seeking a Sr. DevOps Engineer to take ownership of our CI/CD pipelines, infrastructure automation, and cloud-native deployment strategies. This role is crucial in ensuring that our platform, and the applications deployed using it, are highly available, secure, and efficiently managed. Key Responsibilities: To work with the Architect and other DevOps staff on the following: CI/CD Pipeline Management: Develop and optimize GitLab CI/CD pipelines, ensuring efficient automated build, test, and deployment processes. Infrastructure as Code (IaC): Manage infrastructure using Terraform, ensuring reproducibility, scalability, and automation. Container Orchestration & Management: Deploy and maintain Docker containers on Kubernetes (Red Hat OpenShift), ensuring scalability and resilience.
Monitoring & Logging: Implement ELK stack (Elasticsearch, Logstash, Kibana) for centralised logging and performance monitoring. Security & Compliance: Enforce best practices for secure deployments, data encryption, and access controls. Cloud & On-Prem Hybrid Management: Support both cloud-native and on-premises deployments, optimising infrastructure costs and performance. Automation & Scripting: Develop scripts and automation tools to improve deployment efficiency, system resilience, and performance monitoring. Collaboration & Support: Work closely with developers, architects, and security teams to ensure seamless integration of DevOps best practices. Required Skills & Experience Hands-on experience with GitLab CI/CD for automated builds, testing, and deployments. Expertise in containerization (Docker, Kubernetes, OpenShift) and managing production workloads. Strong knowledge of Terraform for defining and maintaining infrastructure as code. Experience with monitoring/logging solutions (ELK stack, Prometheus, Grafana, or similar). Solid understanding of security best practices, including access management, encryption, and vulnerability scanning. Familiarity with Redis caching strategies and optimisation techniques. Ability to diagnose and resolve infrastructure performance issues. Experience with database management in cloud environments, particularly PostgreSQL and Snowflake. Strong scripting skills in Bash, Python, or similar languages for automation. More About The Opportunity The Sr. DevOps Engineer is an excellent opportunity, and CACI Services India reward their staff well with a competitive salary and impressive benefits package which includes: Learning: Budget for conferences, training courses and other materials Health Benefits: Family plan with 4 children and parents covered Future You: Matched pension and health care package We understand the importance of getting to know your colleagues. 
Company meetings are held every quarter, and a training/work brief weekend is held once a year, amongst many other social events. CACI is an equal opportunities employer. Therefore, we embrace diversity and are committed to a working environment where no one will be treated less favourably on the grounds of their sex, race, disability, sexual orientation, religion, belief or age. We have a Diversity & Inclusion Steering Group, and we always welcome new people with fresh perspectives from any background to join the group. An inclusive and equitable environment enables us to draw on expertise and unique experiences and bring out the best in each other. We champion diversity, inclusion and wellbeing, and we are supportive of veterans and people from a military background. We believe that by embracing diverse experiences and backgrounds, we can collaborate to create better outcomes for our people, our customers and our society. Other details: Pay Type: Salary. Apply Now
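The centralised ELK logging responsibility in this listing usually starts with a structured-logging convention: one JSON object per line, which shippers like Filebeat can forward without grok parsing. The sketch below is a minimal emitter under that assumption; the field names follow common ELK practice but are not CACI's actual schema.

```python
import json
from datetime import datetime, timezone

def log_record(level: str, message: str, **fields) -> str:
    """Serialise one log event as a single-line JSON record, using the
    @timestamp field name Elasticsearch time-based indices expect."""
    record = {
        "@timestamp": datetime.now(timezone.utc).isoformat(),
        "level": level,
        "message": message,
        **fields,  # arbitrary structured context, e.g. service, request id
    }
    return json.dumps(record)

line = log_record("ERROR", "payment failed", service="billing", order_id=42)
parsed = json.loads(line)
```

Because every event is already structured, Kibana can filter and aggregate on `service` or `order_id` directly, with no Logstash parsing rules to maintain.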

Posted 1 week ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site


We're looking for passionate technologists who want to lead client engagements and take responsibility for delivering complex technical projects. Responsibilities: Set up infrastructure automation and CI/CD pipelines on AWS and GCP. Implement microservice and container strategies using Kubernetes platforms. Design continuous integration and delivery pipelines using the latest tools and platforms. Write lots of automation code and leverage the latest tooling. Consult with customers to deploy best practices in their development, QA, DevOps, and operations teams. Proactively look for ways to make the architecture, code, and operations better. Qualifications: Experience with a few of these languages: Shell, Ruby/JRuby, Python, PowerShell, Java, Go. Experience with the implementation of container technologies like Docker and Kubernetes. Infrastructure automation experience with knowledge of at least a few of these tools: Chef, Puppet, Ansible, CloudFormation, Terraform, Packer. Experience with continuous integration (Jenkins, CircleCI), continuous delivery, and enterprise DevOps concepts. Experience with AWS, GCP, or Azure. SCM tools: Git, SVN, Mercurial. Knowledge of multi-tier application architectures. Logging, monitoring, and alerting tools like Splunk, Kibana, Logstash, Nagios, New Relic. Bachelor's Degree in Computer Science or Engineering
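Designing the delivery pipelines described above is, at its core, a dependency-ordering problem: stages may only run after the stages they depend on. A minimal sketch with the standard library's `graphlib` (Python 3.9+); the stage names are illustrative, not from any particular pipeline.

```python
from graphlib import TopologicalSorter

# Hypothetical stage dependency graph: each key lists the stages that
# must finish before it may start (as Jenkins/GitLab CI express with
# stage ordering or `needs:` relationships).
deps = {
    "test": {"build"},
    "scan": {"build"},
    "deploy-staging": {"test", "scan"},
    "deploy-prod": {"deploy-staging"},
}

# static_order() yields a valid sequential execution order; stages with
# no unmet dependencies (test/scan here) could also run in parallel.
order = list(TopologicalSorter(deps).static_order())
```

`TopologicalSorter` also raises `CycleError` on circular dependencies, which is a useful validation step before a generated pipeline ever reaches the CI server.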

Posted 1 week ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Experience: 8 to 15 years. Job Description: As a Grafana Developer, you will be responsible for designing, developing, and maintaining monitoring and visualization solutions using Grafana. You will collaborate with cross-functional teams to create custom dashboards, implement data sources, and integrate Grafana with various monitoring tools and systems. Key Responsibilities: 1. Designing and developing Grafana dashboards to visualize metrics and data from multiple sources. 2. Collaborating with DevOps engineers, system administrators, and software developers to understand monitoring requirements and design appropriate Grafana solutions. 3. Integrating Grafana with data sources such as Prometheus, InfluxDB, Elasticsearch, and other databases or APIs. 4. Customizing and extending Grafana functionalities through plugins and scripting to meet specific monitoring needs. 5. Optimizing dashboard performance and usability by tuning queries, caching data, and optimizing visualization settings. 6. Troubleshooting and resolving issues related to Grafana configuration, data ingestion, and visualization. 7. Providing guidance and support to teams on best practices for Grafana usage, dashboard design, and data visualization techniques. 8. Staying updated with the latest Grafana features, plugins, and integrations, and evaluating their potential impact on monitoring solutions. 9. Collaborating with stakeholders to gather requirements, prioritize tasks, and deliver Grafana solutions that meet business needs. Required Skills and Qualifications: 1. Proficiency in Grafana dashboard development, including layout design, panel configuration, and templating. 2. Strong understanding of data visualization principles and best practices. 3. Experience with Grafana data sources and plugins, such as Prometheus, InfluxDB, Elasticsearch, Graphite, and others. 4. Solid knowledge of SQL and NoSQL databases, query optimization, and data manipulation. 5. Familiarity with time-series data and metrics monitoring concepts. 6. Proficiency in scripting languages such as JavaScript, Python, or Go for customizing Grafana functionalities. 7. Understanding of web technologies such as HTML, CSS, and JavaScript frameworks (e.g., React, Angular) for building interactive dashboards. 8. Strong problem-solving and analytical skills, with the ability to troubleshoot complex issues in Grafana configurations and data visualization. 9. Excellent communication and collaboration skills, with the ability to work effectively in a team environment and interact with stakeholders. Preferred Qualifications: 1. Certification in Grafana or related technologies. 2. Experience with Grafana Enterprise features and advanced functionalities. 3. Knowledge of containerization technologies such as Docker and Kubernetes. 4. Experience with logging and monitoring solutions such as the ELK (Elasticsearch, Logstash, Kibana) stack. 5. Familiarity with cloud platforms such as AWS, Azure, or Google Cloud Platform and their monitoring services. 6. Understanding of infrastructure as code (IaC) tools such as Terraform or Ansible for automated deployment of Grafana configurations. 7. Knowledge of security best practices for Grafana deployment and access control mechanisms. This job description outlines the responsibilities, required skills, and qualifications for a Grafana Developer role. The specific requirements may vary depending on the organization and the complexity of the monitoring and visualization environment. Additional expectations: (1) adhere to quality standards, regulatory requirements and company policies; (2) ensure a positive customer experience and CSAT through first-call resolution and minimal rejected resolutions/reopened cases; (3) participate in or contribute, for the EN business, to the creation of proposals that drive service improvement plans.
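The dashboard-performance tuning this role describes often comes down to downsampling: aggregating raw time-series samples into fixed-width buckets, the way a Grafana panel's group-by-time interval does, so wide time ranges stay cheap to render. A self-contained sketch; the data are illustrative.

```python
def downsample(points, bucket_size):
    """Average consecutive buckets of `bucket_size` samples (a simple
    mean-based downsampling, as a coarse dashboard interval would apply)."""
    return [
        sum(chunk) / len(chunk)
        for chunk in (points[i:i + bucket_size]
                      for i in range(0, len(points), bucket_size))
    ]

raw = [1.0, 3.0, 2.0, 4.0, 10.0, 12.0]     # hypothetical raw samples
coarse = downsample(raw, bucket_size=2)    # one point per pair of samples
```

The trade-off the developer tunes is exactly this: a larger bucket means fewer points and faster panels, but short spikes get averaged away, which is why max or percentile aggregations are sometimes chosen over the mean.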

Posted 1 week ago

Apply

0 years

0 Lacs

India

Remote


Job Title: DevOps Engineer (Kubernetes Specialist). Experience: 6+ years. Location: Remote. Looking for immediate joiners or candidates who can join within a month. Skills: DevOps (6+ years), CI/CD pipelines, Kubernetes (3+ years). Mandatory skills: DevOps (6+ years); Kubernetes (3+ years); Golang (important); Elasticsearch, Logstash, Kibana (ELK); Prometheus. Job Description: Develop and maintain custom CRDs (Custom Resource Definitions) and controllers to extend Kubernetes functionality for our platform. Utilise Golang to build services, controllers, and Crossplane functions. Ensure the availability and performance of the platform itself. Automate deployment, monitoring, and management processes using CI/CD pipelines and GitOps principles. Work with Elasticsearch, Logstash, Kibana (ELK) and Prometheus. Should be a Kubernetes expert. Interested candidates, kindly share your updated CV by email: sarah@r2rconsults.com
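The posting asks for Golang controllers; as a language-neutral illustration only, the Python sketch below shows the reconcile pattern such a controller implements: diff desired state (what a CRD spec declares) against observed state and emit the actions that converge them. Resource names and the replica-count model are hypothetical.

```python
def reconcile(desired: dict, observed: dict) -> list[str]:
    """Compare desired vs observed replica counts per resource and
    return the converging actions, as a controller's reconcile loop does."""
    actions = []
    for name, replicas in desired.items():
        have = observed.get(name)
        if have is None:
            actions.append(f"create {name} replicas={replicas}")
        elif have != replicas:
            actions.append(f"scale {name} {have}->{replicas}")
    for name in observed:
        if name not in desired:       # present in the cluster but not declared
            actions.append(f"delete {name}")
    return actions

actions = reconcile(
    desired={"web": 3, "worker": 2},
    observed={"web": 1, "old-job": 1},
)
```

In a real Go controller this diff runs inside a watch-driven loop (via controller-runtime or client-go informers) and must be idempotent, since the same event can be reconciled more than once.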

Posted 1 week ago

Apply

0.0 - 7.0 years

0 Lacs

Delhi, Delhi

On-site


Job Description. Job Title: DevOps Engineer. Role Type: Fixed-term direct contract with Talpro. Duration: 6 months. Years of Experience: 7+ years. CTC Offered: INR 200K per month. Notice Period: Only immediate joiners. Work Mode: Hybrid (3 days from office weekly). Location: Delhi/NCR. Mandatory Skills: CI/CD & Automation Tools: Jenkins, GitHub Actions, GitLab CI, Azure DevOps, ArgoCD. Scripting: Python, Bash, PowerShell, Go. Automation Tools: Ansible, Puppet, Chef, SaltStack. Infrastructure as Code (IaC): Terraform, Pulumi. Containerization & Orchestration: Docker, Kubernetes (EKS, AKS, GKE), Helm. Monitoring Tools: Prometheus, Grafana. Logging Tools: ELK Stack (Elasticsearch, Logstash, Kibana), Splunk, Graylog. Security & Compliance: IAM, RBAC, firewalls, TLS/SSL, VPN; ISO 27001, SOC 2, GDPR. Networking & Load Balancing: TCP/IP, DNS, HTTP/S, VPN; Nginx, HAProxy, ALB/ELB. Databases: MySQL, PostgreSQL, MongoDB, Redis. Storage Solutions: SAN, NAS. Good to Have Skills: Experience with hybrid cloud and multi-cloud architectures. Familiarity with serverless frameworks. Knowledge of DevSecOps integrations. Cloud platform certifications (AWS, Azure, GCP).

Role Overview / Job Summary: We are looking for a highly skilled DevOps Engineer to design, implement, and maintain robust CI/CD pipelines, automation workflows, and infrastructure solutions across cloud-native and containerized environments. The ideal candidate will have deep expertise in infrastructure as code, automation, security compliance, and cloud orchestration technologies. You will work closely with development, QA, and security teams to enable seamless software delivery and reliable operations.

Key Responsibilities / Job Responsibilities: Design, implement, and manage robust CI/CD pipelines using industry-standard tools. Automate provisioning, configuration, and deployment using tools like Ansible, Terraform, and Pulumi. Manage containerization and orchestration with Docker and Kubernetes (EKS/AKS/GKE). Implement monitoring and alerting systems using Prometheus, Grafana, and the ELK stack. Enforce security best practices including IAM, firewall rules, and data encryption. Ensure compliance with ISO 27001, SOC 2, and GDPR standards. Troubleshoot system-level issues and optimize application performance. Collaborate with cross-functional teams to support Agile and DevOps delivery practices. Manage database configurations, backups, and storage integrations. Job Types: Full-time, Contractual/Temporary. Contract length: 6 months. Pay: ₹150,000.00 - ₹200,000.00 per month. Benefits: Commuter assistance, health insurance, Provident Fund. Schedule: Day shift, morning shift, weekend availability. Experience: DevOps: 7 years (required). Work Location: In person. Speak with the employer: +91 9840916415. Application Deadline: 12/06/2025

Posted 1 week ago

Apply

0.0 - 12.0 years

0 Lacs

Indore, Madhya Pradesh

On-site


Indore, Madhya Pradesh, India; Noida, Uttar Pradesh, India; Gurugram, Haryana, India

Qualification : Job Description
- DevOps Lead with 8-12 years of experience
- Expert in setting up Kubernetes (K8s) clusters for large-scale infrastructure
- Expert in, or at least familiar with, Ansible, Prometheus, OpenTelemetry, Logstash, Kafka, and Elasticsearch from a setup and administration perspective (should be able to learn any unfamiliar tool quickly)
- Hands-on experience with infrastructure, security, and monitoring for enterprise applications, and knowledge of which options are appropriate for different scenarios
- Hands-on experience setting up CI/CD pipelines
- Extensive experience deploying microservices/web applications on the Kubernetes platform
- Capable of designing CI/CD and release management processes
- Familiar with security and DevOps best practices on the K8s platform
- Good grasp of Docker and orchestration tools
- Ability to explore DevOps tools/technologies and guide decisions on them
- Exposure to Python or shell scripting, and familiarity with Linux
- Exposure to observability tools
- Ability to analyze logs for errors and exceptions, and to drill down to errors at the application level
- Familiar with monitoring tools such as Splunk, Kibana, Grafana, and Prometheus
- General operational exposure: good troubleshooting skills and an understanding of system capacity, bottlenecks, and the basics of memory, CPU, OS, storage, and networking
- Strong verbal and written communication skills
- Excellent analytical and problem-solving skills
- Good knowledge of Agile or Scrum methodologies
- Self-motivated and able to lead a DevOps team

Skills Required : Docker, Kubernetes, CI/CD, Ansible, Prometheus, shell scripting, Linux

Role : Roles & Responsibilities
- Good aptitude and attitude; flexible to upskill and cross-train; willing to provide onsite/night overlaps
- Lead and guide the team on technical challenges; manage a team of 5+ engineers and keep high-level track of their work and deliverables
- Champion a DevOps culture and track industry trends and developments to improve software delivery practice at scale
- Develop scripts for provisioning cloud resources
- Assist in operational enablement across different environments
- Assist use-case teams in deploying artifacts in cloud environments
- Automate the creation of CI/CD pipelines to build and deploy from Dev into UAT, and then on to Production
- Create and customize Docker images on the Kubernetes cluster
- Work with infrastructure, security, and networking teams to resolve firewall and port issues in the cloud
- Monitor daily operations: service restoration and debugging of job failures; assist use-case teams in troubleshooting failures
- Identify manual processes and activities and automate them using shell, Python, etc.
- Continuously monitor, troubleshoot, and debug issues across the ecosystem
- Prepare knowledge-base articles and documentation on environment configuration, deployment, etc.
- Contribute to the efficiency of the assignment through quality improvements and innovative suggestions

Experience : 8 to 12 years
Job Reference Number : 12801
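The log-analysis skill the role calls for (drilling down to application-level errors and exceptions) can be sketched with a short Python script. This is a minimal illustration, not part of the job posting: the log format, logger names, and sample lines below are all hypothetical.

```python
import re
from collections import Counter

# Hypothetical log lines; in practice these might come from a Logstash/Kibana export.
SAMPLE_LOGS = [
    "2024-05-01 10:00:01 INFO  app.Main - started",
    "2024-05-01 10:00:02 ERROR app.Db - connection refused",
    "2024-05-01 10:00:03 ERROR app.Db - connection refused",
    "2024-05-01 10:00:04 WARN  app.Cache - miss",
    "2024-05-01 10:00:05 ERROR app.Api - timeout calling upstream",
]

# Assumed "timestamp LEVEL logger - message" layout; real formats vary per app.
LOG_PATTERN = re.compile(
    r"^(?P<ts>\S+ \S+)\s+(?P<level>[A-Z]+)\s+(?P<logger>\S+)\s+-\s+(?P<msg>.*)$"
)

def summarize_errors(lines):
    """Count ERROR entries per logger so the noisiest component surfaces first."""
    counts = Counter()
    for line in lines:
        m = LOG_PATTERN.match(line)
        if m and m.group("level") == "ERROR":
            counts[m.group("logger")] += 1
    return counts.most_common()

if __name__ == "__main__":
    for logger, n in summarize_errors(SAMPLE_LOGS):
        print(f"{logger}: {n}")
```

Grouping by logger rather than raw message is a deliberate choice here: it points at the failing component first, after which you can drill into the individual messages.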

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

DevOps Engineer - JD
Location : Chennai, Bangalore, or Hyderabad
Experience : 5 to 9 years
- Deep understanding of the SDLC, including CI/CD pipeline architecture and process, and of industry-standard service management principles (such as ITIL)
- Deep expertise implementing CI/CD with DevOps tools in one or more on-prem or cloud environments (Microsoft, Amazon, or Google) for enterprise web/desktop software
- Deep technical expertise with the orchestration of DevOps toolkits such as Ansible, Jenkins, Artifactory, Jira, Terraform, Git/version control, or similar tool stacks
- Proven experience with DevOps automation and IaC tools (Terraform, GitHub, GitHub Actions, AWS CloudFormation, Azure Resource Manager), and with performing incremental testing of code, processes, and deployments to streamline execution and minimize errors
- Proven experience with at least one scripting language, such as Python or PowerShell
- Proven work experience as a DevOps engineer for on-premises, cloud (AWS, Azure, or GCP), and hybrid environments; experience in at least one cloud technology
- Experience with CI, CT, and CD, and with collaborating across teams for successful continuous deployments
- Experience with Windows/Linux scripting
- Proven work experience building and maintaining Dev, Staging, and Production environments
- Familiarity with integrating security test tools into pipelines
- Experience with logging and monitoring tools (Grafana, the ELK Stack of Elasticsearch, Logstash, and Kibana, Splunk) to collect, analyze, visualize, and alert on metrics, logs, and events for monitoring, troubleshooting, and optimizing system performance and availability
- Deep experience working in agile frameworks such as SAFe, delivering and deploying within pods, plus knowledge of or experience with product-centric delivery models
A consulting mindset is expected when reviewing and analyzing application development and enhancement situations, and when resolving complex technology infrastructure, security, or development problems.
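The continuous-deployment work described above usually involves gating promotion between environments (Dev, Staging, Production). A minimal Python sketch of such a gate follows; the stage names, fields, and 1% error-budget threshold are all illustrative assumptions, not anything specified in the posting.

```python
from dataclasses import dataclass

@dataclass
class StageResult:
    """Outcome of one pipeline stage (names and fields are hypothetical)."""
    name: str
    tests_passed: bool
    error_rate: float  # fraction of failed requests observed during the stage

def can_promote(results, max_error_rate=0.01):
    """Allow promotion only if every stage passed and stayed under the error budget."""
    return all(
        r.tests_passed and r.error_rate <= max_error_rate
        for r in results
    )

if __name__ == "__main__":
    staging = [
        StageResult("unit", True, 0.0),
        StageResult("integration", True, 0.004),
        StageResult("smoke", True, 0.002),
    ]
    print("promote" if can_promote(staging) else "hold")
```

In a real pipeline, a script like this would typically run as a final stage, consuming results emitted by earlier jobs and failing the build (non-zero exit) to block promotion.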

Posted 1 week ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.


Featured Companies