
452 Logstash Jobs - Page 5

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

12.0 - 17.0 years

14 - 18 Lacs

Hyderabad

Work from Office

Overview
The Grafana and Elastic Architect will maintain and optimize the observability platform, ensure cost-effective operations, define guardrails, and promote best practices. This role oversees the platform's BAU support, manages vendors and partners, and collaborates closely with application owners to onboard applications. The Architect will also lead the deployment of AI Ops and other advanced features within Grafana and Elastic while working with other observability, ITSM, and platform architects. The position includes people-management responsibilities and involves leading a team to achieve operational excellence.

Key Responsibilities
1. Platform Ownership & Cost Optimization: Maintain and enhance the Grafana and Elastic platforms to ensure high availability and performance. Implement cost-control mechanisms to optimize resource utilization across observability platforms. Establish platform guardrails, best practices, and governance models.
2. BAU Support & Vendor/Partner Management: Manage day-to-day operations, troubleshooting, and platform improvements. Engage and manage third-party vendors and partners to ensure SLA adherence and platform reliability. Work closely with procurement and finance teams to manage vendor contracts and renewals.
3. Application Onboarding & Collaboration: Partner with application owners and engineering teams to onboard applications onto the observability platform. Define standardized onboarding frameworks and processes for application teams. Ensure seamless integration with existing observability solutions such as AppDynamics, ServiceNow ITOM, and other monitoring tools.
4. AI Ops & Advanced Features Implementation: Deploy AI Ops capabilities within Grafana and Elastic to enhance proactive monitoring and anomaly detection. Implement automation and intelligent alerting to reduce MTTR and operational overhead. Stay current with industry trends and recommend innovative AI-driven observability enhancements.
5. Cross-Functional Collaboration: Work closely with architects of AppDynamics, ServiceNow, and other observability platforms to ensure an integrated monitoring strategy. Align with ITSM, DevOps, and cloud teams to create a holistic observability roadmap. Lead knowledge-sharing sessions and create technical documentation for the team.
6. People & Team Management: Lead and manage a team responsible for Grafana and Elastic observability operations. Provide mentorship, coaching, and career development opportunities for team members. Define team goals, monitor performance, and drive continuous improvement in observability practices. Foster a culture of collaboration, innovation, and accountability within the team.

Qualifications
Technical Expertise: 12+ years of experience in IT operations, observability, or related fields. Strong expertise in Grafana and the Elastic Stack (Elasticsearch, Logstash, Kibana). Experience implementing AI Ops, machine learning, or automation within observability platforms. Proficiency in scripting and automation (Python, Ansible, Terraform) for observability workloads. Hands-on experience with cloud-based observability solutions, particularly in Azure environments. Familiarity with additional monitoring tools such as AppDynamics, ServiceNow ITOM, SevOne, and ThousandEyes.
Leadership & Collaboration: Experience managing vendors, contracts, and external partnerships. Strong stakeholder-management skills and the ability to work cross-functionally. Excellent communication and presentation skills. Ability to lead and mentor junior engineers in observability best practices.
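The intelligent-alerting and anomaly-detection duties described in this role can be illustrated with a minimal sketch: a rolling-mean detector of the kind an AI Ops pipeline might apply to a metric stream before raising an alert. The window size, sigma threshold, and latency series are illustrative assumptions, not values from the posting.

```python
from collections import deque
from statistics import mean, stdev

def detect_anomalies(samples, window=5, n_sigmas=3.0):
    """Flag points deviating more than n_sigmas standard deviations
    from the rolling mean of the previous `window` samples.
    Returns the indices of anomalous points."""
    history = deque(maxlen=window)
    anomalies = []
    for i, value in enumerate(samples):
        if len(history) == window:
            mu = mean(history)
            sigma = stdev(history)
            # Guard against a flat window (sigma == 0).
            if sigma > 0 and abs(value - mu) > n_sigmas * sigma:
                anomalies.append(i)
        history.append(value)
    return anomalies

# A steady latency series (ms) with one spike at index 8:
latencies = [100, 102, 99, 101, 100, 98, 101, 100, 250, 101]
print(detect_anomalies(latencies))  # → [8]
```

In production this logic would run inside the platform's alerting layer (e.g. Grafana alert rules or an Elastic ML job) rather than as a standalone script.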

Posted 2 weeks ago

Apply

0 years

3 - 7 Lacs

Noida

On-site

Description
QA automation engineers design automated tests to validate applications by creating scripts that run testing functions automatically. This includes prioritizing test scenarios and creating execution plans to implement them.

Skills
Experience/interest must include a combination of: testing life cycle and QA activities, build systems, and regression testing; test automation scripting with Python and shell scripts; UI and API/web-services testing; UI and API test automation; Linux; virtualization and containers (VMware, Hyper-V, Docker, OpenStack); cloud knowledge (AWS, Azure); logging systems (Logstash, Elasticsearch, Kibana); Git, Jenkins, Jira, Selenium, Postman; strong analytical and problem-solving skills.

Responsibilities
Understand, test, and automate key test cases for the software-defined converged-infrastructure product. Work with test leads, development/deployment teams, and product managers to understand use cases and derive test cases and test scenarios. Own automation execution and regression testing on the product to ensure the quality of solutions. Run system tests, performance tests, and stress tests.
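The API test-automation work described above can be sketched with Python's built-in unittest. The response payload and the validate_response helper are hypothetical, invented for illustration; real tests would exercise the product's actual API (for example via requests or Postman collections).

```python
import unittest

def validate_response(payload):
    """Hypothetical helper: check that an API response dict carries
    the fields an automated regression test would assert on."""
    required = {"status", "id"}
    missing = required - payload.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return payload["status"] == "ok"

class TestApiContract(unittest.TestCase):
    def test_valid_payload_passes(self):
        self.assertTrue(validate_response({"status": "ok", "id": 42}))

    def test_missing_field_is_rejected(self):
        with self.assertRaises(ValueError):
            validate_response({"status": "ok"})  # no "id" field

if __name__ == "__main__":
    unittest.main()
```

In a CI pipeline the same suite would run under Jenkins on every build as part of the regression stage.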

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

Remote

As a global leader in cybersecurity, CrowdStrike protects the people, processes and technologies that drive modern organizations. Since 2011, our mission hasn’t changed — we’re here to stop breaches, and we’ve redefined modern security with the world’s most advanced AI-native platform. Our customers span all industries, and they count on CrowdStrike to keep their businesses running, their communities safe and their lives moving forward. We’re also a mission-driven company. We cultivate a culture that gives every CrowdStriker both the flexibility and autonomy to own their careers. We’re always looking to add talented CrowdStrikers to the team who have limitless passion, a relentless focus on innovation and a fanatical commitment to our customers, our community and each other. Ready to join a mission that matters? The future of cybersecurity starts with you.

About The Team
The Customer Success Automation team offers opportunities to create a meaningful impact and expand your skill set through a variety of experiences building Customer Success workflows, applications and infrastructure within CrowdStrike’s Falcon platform and its ecosystem. The Automation team is a strategic enabler for the Customer Success business (Technical Support, Technical Account Management and Provisioning teams). As part of this team, you will have the opportunity to be a trailblazer, leading with technology as a solution to accelerate digital transformation and self-service capabilities both for CrowdStrike’s customers and for CrowdStrikers (internal customers).

About The Role
As an Automation Engineer you will perform a hybrid Software Engineer / DevOps Engineer role within our engineering team and help build the tools, integrations and infrastructure that enable our Customer Success teams to deliver world-class Technical Support and Customer Success at scale.
What You’ll Do
Serve as a Go and Python software engineer on the Customer Success Automation team. Design and implement Kubernetes clusters in AWS for scalable and resilient application infrastructure. Design, implement, deploy, and maintain scalable solutions (applications, microservices, and infrastructure) supporting the business needs of our Technical Support and Customer Success teams. Design, implement, and maintain integration flows across an array of platforms, APIs, databases, protocols, and data formats. Integrate UI/UX frameworks for custom-built tools. Handle system integration to ensure the tools are well integrated within the Customer Success ecosystem. Operate CI/CD pipelines for the Support Automation team. Collaborate with technical peers within the Customer Success, IT, and Product teams. Train and enable teams within the CrowdStrike Customer Success organisation on the tools and technologies built and deployed by the Support Automation team.

What You’ll Need
5+ years of solid hands-on experience as a software engineer on production-grade projects, with proficiency in Go (Golang) and Python. Basic UI development, with the ability to integrate an existing UI/UX framework (any frontend stack). Experience working with CI/CD (Jenkins, GitLab, Bitbucket, or similar platforms). Experience with Kubernetes deployments in AWS (EKS or self-managed clusters). Familiarity with cloud infrastructure components (VPCs, IAM, EC2, RDS, etc.) in AWS. Strong understanding of containerization technologies (Docker) and orchestration. Strong problem-solving, team collaboration, and communication skills, and the ability to thrive in fast-paced environments.
Bonus Points
Experience with infrastructure-as-code (IaC) practices and tools. Full-stack development experience. Prior Technical Support/Customer Success tool-development experience. Experience building web services with data-processing pipelines. Frontend experience (Ember, React, Bootstrap). Hands-on experience with Python libraries for text data and log parsing, and with Python frameworks (Django/FastAPI preferred). Strong working knowledge of SIEM/log-analysis tools (LogScale, Logstash, Datadog, Dynatrace, or Splunk). Integration experience with Customer Success platforms such as Salesforce Service Cloud and Gainsight. Strong understanding of the day-to-day operations and challenges of enterprise software/SaaS Customer Success and Technical Support functions.

Benefits Of Working At CrowdStrike
Remote-friendly and flexible work culture. Market leader in compensation and equity awards. Comprehensive physical and mental wellness programs. Competitive vacation and holidays for recharge. Paid parental and adoption leaves. Professional development opportunities for all employees regardless of level or role. Employee networks, geographic neighborhood groups, and volunteer opportunities to build connections. Vibrant office culture with world-class amenities. Great Place to Work Certified™ across the globe.

CrowdStrike is proud to be an equal opportunity employer. We are committed to fostering a culture of belonging where everyone is valued for who they are and empowered to succeed. We support veterans and individuals with disabilities through our affirmative action program. CrowdStrike is committed to providing equal employment opportunity for all employees and applicants for employment.
The Company does not discriminate in employment opportunities or practices on the basis of race, color, creed, ethnicity, religion, sex (including pregnancy or pregnancy-related medical conditions), sexual orientation, gender identity, marital or family status, veteran status, age, national origin, ancestry, physical disability (including HIV and AIDS), mental disability, medical condition, genetic information, membership or activity in a local human rights commission, status with regard to public assistance, or any other characteristic protected by law. We base all employment decisions--including recruitment, selection, training, compensation, benefits, discipline, promotions, transfers, lay-offs, return from lay-off, terminations and social/recreational programs--on valid job requirements. If you need assistance accessing or reviewing the information on this website or need help submitting an application for employment or requesting an accommodation, please contact us at recruiting@crowdstrike.com for further assistance.
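The text-data and log-parsing skills this posting calls out can be sketched in a few lines of Python using named regex groups. The log-line format below is a made-up example for illustration, not any product's actual log schema.

```python
import re

# Hypothetical log format: "<timestamp> <LEVEL> <free-text message>"
LOG_PATTERN = re.compile(r"(?P<ts>\S+)\s+(?P<level>[A-Z]+)\s+(?P<message>.*)")

def parse_log_line(line):
    """Parse one log line into a dict of fields, or None if it doesn't match."""
    m = LOG_PATTERN.match(line)
    return m.groupdict() if m else None

line = "2024-05-01T12:00:00Z ERROR auth failed for user=alice"
record = parse_log_line(line)
print(record["level"], "-", record["message"])  # → ERROR - auth failed for user=alice
```

The same parse-into-structured-fields pattern underlies Logstash grok filters and LogScale parsers, just expressed in their own configuration languages.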

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Mumbai Metropolitan Region

Remote

As a global leader in cybersecurity, CrowdStrike protects the people, processes and technologies that drive modern organizations. Since 2011, our mission hasn’t changed — we’re here to stop breaches, and we’ve redefined modern security with the world’s most advanced AI-native platform. Our customers span all industries, and they count on CrowdStrike to keep their businesses running, their communities safe and their lives moving forward. We’re also a mission-driven company. We cultivate a culture that gives every CrowdStriker both the flexibility and autonomy to own their careers. We’re always looking to add talented CrowdStrikers to the team who have limitless passion, a relentless focus on innovation and a fanatical commitment to our customers, our community and each other. Ready to join a mission that matters? The future of cybersecurity starts with you.

About The Team
The Customer Success Automation team offers opportunities to create a meaningful impact and expand your skill set through a variety of experiences building Customer Success workflows, applications and infrastructure within CrowdStrike’s Falcon platform and its ecosystem. The Automation team is a strategic enabler for the Customer Success business (Technical Support, Technical Account Management and Provisioning teams). As part of this team, you will have the opportunity to be a trailblazer, leading with technology as a solution to accelerate digital transformation and self-service capabilities both for CrowdStrike’s customers and for CrowdStrikers (internal customers).

About The Role
As an Automation Engineer you will perform a hybrid Software Engineer / DevOps Engineer role within our engineering team and help build the tools, integrations and infrastructure that enable our Customer Success teams to deliver world-class Technical Support and Customer Success at scale.
What You’ll Do
Serve as a Go and Python software engineer on the Customer Success Automation team. Design and implement Kubernetes clusters in AWS for scalable and resilient application infrastructure. Design, implement, deploy, and maintain scalable solutions (applications, microservices, and infrastructure) supporting the business needs of our Technical Support and Customer Success teams. Design, implement, and maintain integration flows across an array of platforms, APIs, databases, protocols, and data formats. Integrate UI/UX frameworks for custom-built tools. Handle system integration to ensure the tools are well integrated within the Customer Success ecosystem. Operate CI/CD pipelines for the Support Automation team. Collaborate with technical peers within the Customer Success, IT, and Product teams. Train and enable teams within the CrowdStrike Customer Success organisation on the tools and technologies built and deployed by the Support Automation team.

What You’ll Need
5+ years of solid hands-on experience as a software engineer on production-grade projects, with proficiency in Go (Golang) and Python. Basic UI development, with the ability to integrate an existing UI/UX framework (any frontend stack). Experience working with CI/CD (Jenkins, GitLab, Bitbucket, or similar platforms). Experience with Kubernetes deployments in AWS (EKS or self-managed clusters). Familiarity with cloud infrastructure components (VPCs, IAM, EC2, RDS, etc.) in AWS. Strong understanding of containerization technologies (Docker) and orchestration. Strong problem-solving, team collaboration, and communication skills, and the ability to thrive in fast-paced environments.
Bonus Points
Experience with infrastructure-as-code (IaC) practices and tools. Full-stack development experience. Prior Technical Support/Customer Success tool-development experience. Experience building web services with data-processing pipelines. Frontend experience (Ember, React, Bootstrap). Hands-on experience with Python libraries for text data and log parsing, and with Python frameworks (Django/FastAPI preferred). Strong working knowledge of SIEM/log-analysis tools (LogScale, Logstash, Datadog, Dynatrace, or Splunk). Integration experience with Customer Success platforms such as Salesforce Service Cloud and Gainsight. Strong understanding of the day-to-day operations and challenges of enterprise software/SaaS Customer Success and Technical Support functions.

Benefits Of Working At CrowdStrike
Remote-friendly and flexible work culture. Market leader in compensation and equity awards. Comprehensive physical and mental wellness programs. Competitive vacation and holidays for recharge. Paid parental and adoption leaves. Professional development opportunities for all employees regardless of level or role. Employee networks, geographic neighborhood groups, and volunteer opportunities to build connections. Vibrant office culture with world-class amenities. Great Place to Work Certified™ across the globe.

CrowdStrike is proud to be an equal opportunity employer. We are committed to fostering a culture of belonging where everyone is valued for who they are and empowered to succeed. We support veterans and individuals with disabilities through our affirmative action program. CrowdStrike is committed to providing equal employment opportunity for all employees and applicants for employment.
The Company does not discriminate in employment opportunities or practices on the basis of race, color, creed, ethnicity, religion, sex (including pregnancy or pregnancy-related medical conditions), sexual orientation, gender identity, marital or family status, veteran status, age, national origin, ancestry, physical disability (including HIV and AIDS), mental disability, medical condition, genetic information, membership or activity in a local human rights commission, status with regard to public assistance, or any other characteristic protected by law. We base all employment decisions--including recruitment, selection, training, compensation, benefits, discipline, promotions, transfers, lay-offs, return from lay-off, terminations and social/recreational programs--on valid job requirements. If you need assistance accessing or reviewing the information on this website or need help submitting an application for employment or requesting an accommodation, please contact us at recruiting@crowdstrike.com for further assistance.

Posted 2 weeks ago

Apply

8.0 - 12.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

TCS Hiring for Observability Tools Tech Lead - PAN India
Experience: 8 to 12 years only
Job Location: PAN India

Core Responsibilities:
Designing and implementing observability solutions: selecting, configuring, and deploying tools and platforms for collecting, processing, and analyzing telemetry data (logs, metrics, traces).
Developing and maintaining monitoring and alerting systems: creating dashboards, setting up alerts based on key performance indicators (KPIs), and ensuring timely notification of issues.
Instrumenting applications and infrastructure: working with development teams to add instrumentation code to applications to generate meaningful telemetry data, often using open standards such as OpenTelemetry.
Analyzing and troubleshooting system performance: investigating performance bottlenecks, identifying root causes of issues, and collaborating with development teams to resolve them.
Defining and tracking Service Level Objectives (SLOs) and Service Level Indicators (SLIs): working with stakeholders to define acceptable levels of performance and reliability and tracking these metrics.
Improving incident response and post-mortem processes: using observability data to understand incidents, identify contributing factors, and implement preventative measures.
Collaborating with development, operations, and SRE teams: ensuring observability practices are integrated throughout the software development lifecycle.
Educating and mentoring teams on observability best practices: promoting a culture of observability within the organization.
Managing and optimizing observability infrastructure costs: ensuring the cost-effectiveness of observability tools and platforms.
Staying up to date with observability trends and technologies: continuously learning about new tools, techniques, and best practices.
Key Skills:
Strong understanding of observability principles: deep knowledge of logs, metrics, and traces and how they contribute to understanding system behavior.
Proficiency with observability tools and platforms: logging (Elasticsearch, Splunk, Fluentd, Logstash, etc.), metrics (Prometheus, Grafana, InfluxDB, Graphite, etc.), tracing (OpenTelemetry, Datadog APM, etc.), and APM (Datadog, New Relic, AppDynamics, etc.).
Programming and scripting skills: proficiency in languages such as Python, Go, or Java, or scripting languages such as Bash, for automation and tool integration.
Experience with cloud platforms: familiarity with AWS, Azure, or GCP and their monitoring and logging services.
Understanding of distributed systems: knowledge of how distributed systems work and the challenges of monitoring and troubleshooting them.
Troubleshooting and problem-solving skills: strong analytical skills to identify and resolve complex issues.
Communication and collaboration skills: ability to communicate technical concepts to different audiences and work collaboratively with other teams.
Knowledge of DevOps and SRE practices: understanding of continuous integration/continuous delivery (CI/CD), infrastructure as code, and site reliability engineering principles.
Data analysis and visualization skills: ability to analyze telemetry data and create meaningful dashboards and reports.
Experience with containerization and orchestration: familiarity with Docker, Kubernetes, and related technologies.

Kind regards,
Priyankha M
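The SLO/SLI tracking responsibility in this role can be made concrete with a small sketch: computing an availability SLI and the remaining error budget against a target. The 99.9% target and the request counts are illustrative assumptions, not figures from the posting.

```python
def error_budget_remaining(total_requests, failed_requests, slo_target=0.999):
    """Return (sli, budget_remaining_fraction).

    sli: fraction of successful requests (the indicator).
    budget_remaining_fraction: share of the allowed error budget still
    unspent (1.0 = untouched, 0.0 = exhausted, negative = SLO breached).
    """
    sli = (total_requests - failed_requests) / total_requests
    allowed_error_rate = 1.0 - slo_target       # 0.1% for a 99.9% SLO
    actual_error_rate = failed_requests / total_requests
    budget_remaining = 1.0 - actual_error_rate / allowed_error_rate
    return sli, budget_remaining

# 1,000,000 requests with 400 failures against a 99.9% availability SLO:
sli, budget = error_budget_remaining(1_000_000, 400)
print(f"SLI={sli:.4%}, error budget remaining={budget:.0%}")
```

In practice the counts would come from the metrics backend (for example a Prometheus query over a rolling window) and the budget figure would drive dashboards and alert thresholds.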

Posted 2 weeks ago

Apply

8.0 - 12.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

TCS Hiring for Observability Tools Tech Lead - PAN India
Experience: 8 to 12 years only
Job Location: PAN India

Core Responsibilities:
Designing and implementing observability solutions: selecting, configuring, and deploying tools and platforms for collecting, processing, and analyzing telemetry data (logs, metrics, traces).
Developing and maintaining monitoring and alerting systems: creating dashboards, setting up alerts based on key performance indicators (KPIs), and ensuring timely notification of issues.
Instrumenting applications and infrastructure: working with development teams to add instrumentation code to applications to generate meaningful telemetry data, often using open standards such as OpenTelemetry.
Analyzing and troubleshooting system performance: investigating performance bottlenecks, identifying root causes of issues, and collaborating with development teams to resolve them.
Defining and tracking Service Level Objectives (SLOs) and Service Level Indicators (SLIs): working with stakeholders to define acceptable levels of performance and reliability and tracking these metrics.
Improving incident response and post-mortem processes: using observability data to understand incidents, identify contributing factors, and implement preventative measures.
Collaborating with development, operations, and SRE teams: ensuring observability practices are integrated throughout the software development lifecycle.
Educating and mentoring teams on observability best practices: promoting a culture of observability within the organization.
Managing and optimizing observability infrastructure costs: ensuring the cost-effectiveness of observability tools and platforms.
Staying up to date with observability trends and technologies: continuously learning about new tools, techniques, and best practices.
Key Skills:
Strong understanding of observability principles: deep knowledge of logs, metrics, and traces and how they contribute to understanding system behavior.
Proficiency with observability tools and platforms: logging (Elasticsearch, Splunk, Fluentd, Logstash, etc.), metrics (Prometheus, Grafana, InfluxDB, Graphite, etc.), tracing (OpenTelemetry, Datadog APM, etc.), and APM (Datadog, New Relic, AppDynamics, etc.).
Programming and scripting skills: proficiency in languages such as Python, Go, or Java, or scripting languages such as Bash, for automation and tool integration.
Experience with cloud platforms: familiarity with AWS, Azure, or GCP and their monitoring and logging services.
Understanding of distributed systems: knowledge of how distributed systems work and the challenges of monitoring and troubleshooting them.
Troubleshooting and problem-solving skills: strong analytical skills to identify and resolve complex issues.
Communication and collaboration skills: ability to communicate technical concepts to different audiences and work collaboratively with other teams.
Knowledge of DevOps and SRE practices: understanding of continuous integration/continuous delivery (CI/CD), infrastructure as code, and site reliability engineering principles.
Data analysis and visualization skills: ability to analyze telemetry data and create meaningful dashboards and reports.
Experience with containerization and orchestration: familiarity with Docker, Kubernetes, and related technologies.

Kind regards,
Priyankha M

Posted 2 weeks ago

Apply

8.0 - 12.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

TCS Hiring for Observability Tools Tech Lead - PAN India
Experience: 8 to 12 years only
Job Location: PAN India

Core Responsibilities:
Designing and implementing observability solutions: selecting, configuring, and deploying tools and platforms for collecting, processing, and analyzing telemetry data (logs, metrics, traces).
Developing and maintaining monitoring and alerting systems: creating dashboards, setting up alerts based on key performance indicators (KPIs), and ensuring timely notification of issues.
Instrumenting applications and infrastructure: working with development teams to add instrumentation code to applications to generate meaningful telemetry data, often using open standards such as OpenTelemetry.
Analyzing and troubleshooting system performance: investigating performance bottlenecks, identifying root causes of issues, and collaborating with development teams to resolve them.
Defining and tracking Service Level Objectives (SLOs) and Service Level Indicators (SLIs): working with stakeholders to define acceptable levels of performance and reliability and tracking these metrics.
Improving incident response and post-mortem processes: using observability data to understand incidents, identify contributing factors, and implement preventative measures.
Collaborating with development, operations, and SRE teams: ensuring observability practices are integrated throughout the software development lifecycle.
Educating and mentoring teams on observability best practices: promoting a culture of observability within the organization.
Managing and optimizing observability infrastructure costs: ensuring the cost-effectiveness of observability tools and platforms.
Staying up to date with observability trends and technologies: continuously learning about new tools, techniques, and best practices.
Key Skills:
Strong understanding of observability principles: deep knowledge of logs, metrics, and traces and how they contribute to understanding system behavior.
Proficiency with observability tools and platforms: logging (Elasticsearch, Splunk, Fluentd, Logstash, etc.), metrics (Prometheus, Grafana, InfluxDB, Graphite, etc.), tracing (OpenTelemetry, Datadog APM, etc.), and APM (Datadog, New Relic, AppDynamics, etc.).
Programming and scripting skills: proficiency in languages such as Python, Go, or Java, or scripting languages such as Bash, for automation and tool integration.
Experience with cloud platforms: familiarity with AWS, Azure, or GCP and their monitoring and logging services.
Understanding of distributed systems: knowledge of how distributed systems work and the challenges of monitoring and troubleshooting them.
Troubleshooting and problem-solving skills: strong analytical skills to identify and resolve complex issues.
Communication and collaboration skills: ability to communicate technical concepts to different audiences and work collaboratively with other teams.
Knowledge of DevOps and SRE practices: understanding of continuous integration/continuous delivery (CI/CD), infrastructure as code, and site reliability engineering principles.
Data analysis and visualization skills: ability to analyze telemetry data and create meaningful dashboards and reports.
Experience with containerization and orchestration: familiarity with Docker, Kubernetes, and related technologies.

Kind regards,
Priyankha M

Posted 2 weeks ago

Apply

8.0 - 12.0 years

0 Lacs

Kochi, Kerala, India

On-site

TCS Hiring for Observability Tools Tech Lead - PAN India
Experience: 8 to 12 years only
Job Location: PAN India

Core Responsibilities:
Designing and implementing observability solutions: selecting, configuring, and deploying tools and platforms for collecting, processing, and analyzing telemetry data (logs, metrics, traces).
Developing and maintaining monitoring and alerting systems: creating dashboards, setting up alerts based on key performance indicators (KPIs), and ensuring timely notification of issues.
Instrumenting applications and infrastructure: working with development teams to add instrumentation code to applications to generate meaningful telemetry data, often using open standards such as OpenTelemetry.
Analyzing and troubleshooting system performance: investigating performance bottlenecks, identifying root causes of issues, and collaborating with development teams to resolve them.
Defining and tracking Service Level Objectives (SLOs) and Service Level Indicators (SLIs): working with stakeholders to define acceptable levels of performance and reliability and tracking these metrics.
Improving incident response and post-mortem processes: using observability data to understand incidents, identify contributing factors, and implement preventative measures.
Collaborating with development, operations, and SRE teams: ensuring observability practices are integrated throughout the software development lifecycle.
Educating and mentoring teams on observability best practices: promoting a culture of observability within the organization.
Managing and optimizing observability infrastructure costs: ensuring the cost-effectiveness of observability tools and platforms.
Staying up to date with observability trends and technologies: continuously learning about new tools, techniques, and best practices.
Key Skills:
Strong understanding of observability principles: deep knowledge of logs, metrics, and traces and how they contribute to understanding system behavior.
Proficiency with observability tools and platforms: logging (Elasticsearch, Splunk, Fluentd, Logstash, etc.), metrics (Prometheus, Grafana, InfluxDB, Graphite, etc.), tracing (OpenTelemetry, Datadog APM, etc.), and APM (Datadog, New Relic, AppDynamics, etc.).
Programming and scripting skills: proficiency in languages such as Python, Go, or Java, or scripting languages such as Bash, for automation and tool integration.
Experience with cloud platforms: familiarity with AWS, Azure, or GCP and their monitoring and logging services.
Understanding of distributed systems: knowledge of how distributed systems work and the challenges of monitoring and troubleshooting them.
Troubleshooting and problem-solving skills: strong analytical skills to identify and resolve complex issues.
Communication and collaboration skills: ability to communicate technical concepts to different audiences and work collaboratively with other teams.
Knowledge of DevOps and SRE practices: understanding of continuous integration/continuous delivery (CI/CD), infrastructure as code, and site reliability engineering principles.
Data analysis and visualization skills: ability to analyze telemetry data and create meaningful dashboards and reports.
Experience with containerization and orchestration: familiarity with Docker, Kubernetes, and related technologies.

Kind regards,
Priyankha M

Posted 2 weeks ago

Apply

8.0 - 12.0 years

0 Lacs

Pune, Maharashtra, India

On-site

TCS Hiring for Observability Tools Tech Lead_PAN India
Experience: 8 to 12 Years Only
Job Location: PAN India

Core Responsibilities:
- Designing and Implementing Observability Solutions: Selecting, configuring, and deploying tools and platforms for collecting, processing, and analyzing telemetry data (logs, metrics, traces).
- Developing and Maintaining Monitoring and Alerting Systems: Creating dashboards, setting up alerts based on key performance indicators (KPIs), and ensuring timely notification of issues.
- Instrumenting Applications and Infrastructure: Working with development teams to add instrumentation code to applications to generate meaningful telemetry data, often using open standards like OpenTelemetry.
- Analyzing and Troubleshooting System Performance: Investigating performance bottlenecks, identifying root causes of issues, and collaborating with development teams to resolve them.
- Defining and Tracking Service Level Objectives (SLOs) and Service Level Indicators (SLIs): Working with stakeholders to define acceptable levels of performance and reliability, and tracking these metrics.
- Improving Incident Response and Post-Mortem Processes: Using observability data to understand incidents, identify contributing factors, and implement preventative measures.
- Collaborating with Development, Operations, and SRE Teams: Working closely with other teams to ensure observability practices are integrated throughout the software development lifecycle.
- Educating and Mentoring Teams on Observability Best Practices: Promoting a culture of observability within the organization.
- Managing and Optimizing Observability Infrastructure Costs: Ensuring the cost-effectiveness of observability tools and platforms.
- Staying Up to Date with Observability Trends and Technologies: Continuously learning about new tools, techniques, and best practices.
Key Skills:
- Strong Understanding of Observability Principles: Deep knowledge of logs, metrics, and traces and how they contribute to understanding system behavior.
- Proficiency with Observability Tools and Platforms: Logging: Elasticsearch, Splunk, Fluentd, Logstash, etc.; Metrics: Prometheus, Grafana, InfluxDB, Graphite, etc.; Tracing: OpenTelemetry, Datadog APM, etc.; APM (Application Performance Monitoring): Datadog, New Relic, AppDynamics, etc.
- Programming and Scripting Skills: Proficiency in languages like Python, Go, Java, or scripting languages like Bash for automation and tool integration.
- Experience with Cloud Platforms: Familiarity with cloud providers like AWS, Azure, or GCP and their monitoring and logging services.
- Understanding of Distributed Systems: Knowledge of how distributed systems work and the challenges of monitoring and troubleshooting them.
- Troubleshooting and Problem-Solving Skills: Strong analytical skills to identify and resolve complex issues.
- Communication and Collaboration Skills: Ability to effectively communicate technical concepts to different audiences and work collaboratively with other teams.
- Knowledge of DevOps and SRE Practices: Understanding of continuous integration/continuous delivery (CI/CD), infrastructure as code, and site reliability engineering principles.
- Data Analysis and Visualization Skills: Ability to analyze telemetry data and create meaningful dashboards and reports.
- Experience with Containerization and Orchestration: Familiarity with Docker, Kubernetes, and related technologies.

Kind Regards, Priyankha M

Posted 2 weeks ago

Apply

8.0 - 12.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

TCS Hiring for Observability Tools Tech Lead_PAN India
Experience: 8 to 12 Years Only
Job Location: PAN India

Core Responsibilities:
- Designing and Implementing Observability Solutions: Selecting, configuring, and deploying tools and platforms for collecting, processing, and analyzing telemetry data (logs, metrics, traces).
- Developing and Maintaining Monitoring and Alerting Systems: Creating dashboards, setting up alerts based on key performance indicators (KPIs), and ensuring timely notification of issues.
- Instrumenting Applications and Infrastructure: Working with development teams to add instrumentation code to applications to generate meaningful telemetry data, often using open standards like OpenTelemetry.
- Analyzing and Troubleshooting System Performance: Investigating performance bottlenecks, identifying root causes of issues, and collaborating with development teams to resolve them.
- Defining and Tracking Service Level Objectives (SLOs) and Service Level Indicators (SLIs): Working with stakeholders to define acceptable levels of performance and reliability, and tracking these metrics.
- Improving Incident Response and Post-Mortem Processes: Using observability data to understand incidents, identify contributing factors, and implement preventative measures.
- Collaborating with Development, Operations, and SRE Teams: Working closely with other teams to ensure observability practices are integrated throughout the software development lifecycle.
- Educating and Mentoring Teams on Observability Best Practices: Promoting a culture of observability within the organization.
- Managing and Optimizing Observability Infrastructure Costs: Ensuring the cost-effectiveness of observability tools and platforms.
- Staying Up to Date with Observability Trends and Technologies: Continuously learning about new tools, techniques, and best practices.
Key Skills:
- Strong Understanding of Observability Principles: Deep knowledge of logs, metrics, and traces and how they contribute to understanding system behavior.
- Proficiency with Observability Tools and Platforms: Logging: Elasticsearch, Splunk, Fluentd, Logstash, etc.; Metrics: Prometheus, Grafana, InfluxDB, Graphite, etc.; Tracing: OpenTelemetry, Datadog APM, etc.; APM (Application Performance Monitoring): Datadog, New Relic, AppDynamics, etc.
- Programming and Scripting Skills: Proficiency in languages like Python, Go, Java, or scripting languages like Bash for automation and tool integration.
- Experience with Cloud Platforms: Familiarity with cloud providers like AWS, Azure, or GCP and their monitoring and logging services.
- Understanding of Distributed Systems: Knowledge of how distributed systems work and the challenges of monitoring and troubleshooting them.
- Troubleshooting and Problem-Solving Skills: Strong analytical skills to identify and resolve complex issues.
- Communication and Collaboration Skills: Ability to effectively communicate technical concepts to different audiences and work collaboratively with other teams.
- Knowledge of DevOps and SRE Practices: Understanding of continuous integration/continuous delivery (CI/CD), infrastructure as code, and site reliability engineering principles.
- Data Analysis and Visualization Skills: Ability to analyze telemetry data and create meaningful dashboards and reports.
- Experience with Containerization and Orchestration: Familiarity with Docker, Kubernetes, and related technologies.

Kind Regards, Priyankha M

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

chennai, tamil nadu

On-site

You will be responsible for contributing to the development of our SDWAN and Managed WiFi network monitoring platform as a skilled and versatile Full-Stack Developer. Your expertise in either Java or Python, along with experience in modern backend and frontend technologies, containerization, and observability tools, will be crucial for this role.

Backend development: You will develop and maintain microservices using Java (Spring Boot) or Python (Flask), optimize Kafka consumers and producers for efficient data processing, utilize Redis for caching and data storage, and design and consume RESTful APIs to facilitate seamless communication across services.

Frontend development: Your responsibilities include building and enhancing UI components using Angular and Node.js, ensuring smooth integration between frontend and backend services. You will also collaborate on data ingestion workflows using Logstash, support alarm processing mechanisms and ticketing integration with SNOW (ServiceNow), utilize the Elastic Stack (Elasticsearch, Logstash, Kibana) for data storage, monitoring, and visualization, and work with Docker and Kubernetes for DevOps and containerization if required. You will collaborate with cross-functional teams to develop solutions based on the design, and adhere to best practices in coding, testing, and deployment to ensure high-quality deliverables.

Required Skills:
- Strong experience in either Java (Spring Boot with Kafka consumer/producer, Redis, and REST API development) or Python (Flask with Pandas, REST API development, and API integrations).
- Hands-on experience with Angular and Node.js for frontend development.
- Knowledge of data ingestion pipelines and integration with Elasticsearch and Kafka.

Preferred Skills:
- Experience with Docker and Kubernetes for containerization and orchestration.
- Familiarity with the Elastic Stack (Elasticsearch, Logstash, Kibana) for data monitoring and visualization.
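The posting mentions data ingestion workflows that tie Kafka, Logstash, and Elasticsearch together; a minimal sketch of such a pipeline is shown below. The broker address, topic name, and index name are hypothetical, not from the posting:

```conf
# Sketch: Kafka topic -> Logstash -> Elasticsearch (hypothetical names)
input {
  kafka {
    bootstrap_servers => "localhost:9092"
    topics => ["app-events"]            # hypothetical topic
    codec => json                       # events arrive JSON-encoded
  }
}

filter {
  mutate {
    # tag each event with the pipeline that ingested it
    add_field => { "pipeline" => "kafka-ingest" }
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "app-events-%{+YYYY.MM.dd}"
  }
}
```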

Posted 2 weeks ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Role Overview
We are looking for a highly experienced and proactive Senior Java Developer with strong expertise in Spring Boot, Java multithreading, and ELK stack integration. The ideal candidate should have experience in building robust, scalable applications and working with real-time log analytics and monitoring frameworks. A strong foundation in CI/CD and team collaboration is essential, along with the ability to engage directly with clients and stakeholders.

Responsibilities:
- Design, develop, and maintain scalable microservices using Java and Spring Boot frameworks.
- Write efficient, high-performance, concurrent Java code using multithreading and synchronization best practices.
- Integrate and maintain the ELK Stack (Elasticsearch, Logstash, Kibana) for real-time logging, analytics, and system monitoring.
- Leverage ELK APIs to customize dashboards, automate log parsing, and build intelligent monitoring tools.
- Work with CI/CD tools like Concourse or similar (e.g., Jenkins, GitLab CI) for seamless build and deployment pipelines.
- Participate in code reviews, unit testing, and performance tuning to ensure code quality and application responsiveness.
- Collaborate with cross-functional teams including QA, DevOps, and Product Management for end-to-end delivery.
- Interface directly with clients for requirement gathering, sprint planning, and technical presentations.
- Lead and mentor junior developers, perform technical grooming, and ensure adherence to coding and design standards.
- Contribute to architectural discussions and help evaluate emerging tools and technologies.

Required Skills:
- Core Java: Strong OOP concepts, exception handling, collections, concurrency, and memory management.
- Java Multithreading: Deep understanding of thread lifecycle, synchronization, parallelism, and performance optimization.
- Spring Boot: REST APIs, Spring MVC, Spring Data JPA, Spring Security.
- ELK Stack: Hands-on experience with Elasticsearch, Logstash, Kibana, and integrating them with Java applications.
- CI/CD: Practical knowledge of pipelines, preferably using Concourse; experience with Jenkins, GitHub Actions, etc. is also acceptable.
- Build Tools: Maven or Gradle.
- Version Control: Git.
- Database: Strong in SQL (PostgreSQL, MySQL, or Oracle); knowledge of NoSQL (MongoDB/Redis) is a plus.
- API Integration: RESTful API design and consumption; Swagger/OpenAPI.
- Exposure to containerization and orchestration tools (Docker, Kubernetes) is a plus.
- Logging & Monitoring: Fluentd, Prometheus, Grafana knowledge is a plus.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Certifications in Java or Spring technologies are a plus.

Skills & Attributes:
- Strong analytical, problem-solving, and debugging skills.
- Excellent communication skills, both verbal and written.
- Ability to work independently as well as in a team environment.
- Self-driven, proactive, and capable of handling client communications effectively.
- Experience in agile/scrum delivery.

Good to Have:
- Experience working in a product or platform-based company.
- Exposure to cloud platforms like AWS, Azure, or GCP.
- Prior experience leading a team or mentoring junior developers. (ref:hirist.tech)
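For the ELK-integration skill above, a common practical task is shipping Java application logs — including multi-line stack traces — into Elasticsearch. A minimal sketch, assuming a hypothetical log path and a typical timestamp-first log format:

```conf
# Sketch: Java app log (with stack traces) -> Elasticsearch
input {
  file {
    path => "/var/log/app/application.log"  # hypothetical path
    codec => multiline {
      # lines NOT starting with a timestamp belong to the previous event,
      # so a stack trace stays attached to its log line
      pattern => "^%{TIMESTAMP_ISO8601} "
      negate => true
      what => "previous"
    }
  }
}

filter {
  grok {
    # typical "timestamp LEVEL [thread] logger - message" layout
    match => { "message" => "%{TIMESTAMP_ISO8601:ts} +%{LOGLEVEL:level} +\[%{DATA:thread}\] +%{JAVACLASS:logger} - %{GREEDYDATA:msg}" }
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "java-app-%{+YYYY.MM.dd}"
  }
}
```

The multiline codec is what makes stack traces searchable as part of the triggering event rather than as dozens of orphaned lines.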

Posted 2 weeks ago

Apply

3.0 - 7.0 years

9 - 13 Lacs

Pune

Work from Office

As a Site Reliability Engineer, you will work in an agile, collaborative environment to build, deploy, configure, and maintain systems for the IBM client business. In this role, you will lead the problem resolution process for our clients, from analysis and troubleshooting to deploying the latest software updates and fixes.

Your primary responsibilities include:
- 24x7 Observability: Be part of a worldwide team that monitors the health of production systems and services around the clock, ensuring continuous reliability and optimal customer experience.
- Cross-Functional Troubleshooting: Collaborate with engineering teams to provide initial assessments and possible workarounds for production issues. Troubleshoot and resolve production issues effectively.
- Deployment and Configuration: Leverage Continuous Delivery (CI/CD) tools to deploy services and configuration changes at enterprise scale.
- Security and Compliance Implementation: Implement security measures that meet or exceed industry standards for regulations such as GDPR, SOC2, ISO 27001, PCI, HIPAA, and FBA.
- Maintenance and Support: Apply Couchbase security patches and upgrades, support Cassandra and MongoDB on the pager-duty rotation, and collaborate with Couchbase product support for issue resolution.

Required education: Bachelor's Degree

Required technical and professional expertise:
- System Monitoring and Troubleshooting: Strong skills in monitoring/observability, issue response, and troubleshooting for optimal system performance.
- Automation Proficiency: Proficiency in automating production environment changes, streamlining processes for efficiency, and reducing toil.
- Linux Proficiency: Strong knowledge of Linux operating systems.
- Operation and Support Experience: Demonstrated experience in handling day-to-day operations, alert management, incident support, migration tasks, and break-fix support.
- Experience with Infrastructure as Code (Terraform/OpenTofu)
- Experience with the ELK/EFK stack (Elasticsearch, Logstash/Fluentd, and Kibana)

Preferred technical and professional experience:
- Kubernetes/OpenShift: Experience working with production Kubernetes/OpenShift environments is strongly preferred.
- Automation/Scripting: In-depth experience with Ansible, Python, Terraform, and CI/CD tools such as Jenkins, IBM Continuous Delivery, and ArgoCD.
- Monitoring/Observability: Hands-on experience crafting alerts and dashboards using tools such as Instana and Grafana/Prometheus.
- Experience working in an agile team, e.g., Kanban.

Posted 2 weeks ago

Apply

6.0 - 10.0 years

5 - 8 Lacs

Bengaluru

On-site

As an employee at Thomson Reuters, you will play a role in shaping and leading the global knowledge economy. Our technology drives global markets and helps professionals around the world make decisions that matter. As the world's leading provider of intelligent information, we want your unique perspective to create the solutions that advance our business and your career. Our Service Management function is transforming into a truly global, data- and standards-driven organization, employing best-in-class tools and practices across all disciplines of Technology Operations.

About the Role: In this opportunity as Senior Service Reliability Engineer - Global Command Center, you will:
- Run the production environment by monitoring availability and taking a holistic view of system health.
- Build software and systems to manage platform infrastructure and applications.
- Improve reliability, quality, and time-to-market of our suite of software solutions.
- Measure and optimize system performance, with an eye toward pushing our capabilities forward, getting ahead of customer needs, and innovating to continually improve.
- Provide primary operational support and engineering for multiple large distributed software applications.

About You: You're fit for the role of Senior Service Reliability Engineer - Global Command Center if your background includes:
- 6-10 years of experience.
- Good understanding of Unix/Linux and Windows administration.
- Experience working on a public cloud such as AWS or Azure.
- Proficiency in the following general areas: Java (1.7/1.8), JavaScript, Python, Jenkins, and MS TFS/ADO and/or GitHub; experience providing technical support to enterprise networks.
- Good understanding of database technologies.
- Programming/scripting languages such as Python, Perl, PowerShell, or Java/J2EE.
- Experience working with logging tools (e.g., Logstash and/or Kibana) and monitoring tools such as Datadog.
Hands-on experience implementing a DevOps pipeline using Jenkins and the AWS CI/CD tool sets. A proactive approach to spotting problems, areas for improvement, and performance bottlenecks. What's in it For You? Hybrid Work Model: We've adopted a flexible hybrid working environment (2-3 days a week in the office depending on the role) for our office-based roles while delivering a seamless experience that is digitally and physically connected. Flexibility & Work-Life Balance: Flex My Way is a set of supportive workplace policies designed to help manage personal and professional responsibilities, whether caring for family, giving back to the community, or finding time to refresh and reset. This builds upon our flexible work arrangements, including work from anywhere for up to 8 weeks per year, empowering employees to achieve a better work-life balance. Career Development and Growth: By fostering a culture of continuous learning and skill development, we prepare our talent to tackle tomorrow's challenges and deliver real-world solutions. Our Grow My Way programming and skills-first approach ensures you have the tools and knowledge to grow, lead, and thrive in an AI-enabled future. Industry Competitive Benefits: We offer comprehensive benefit plans to include flexible vacation, two company-wide Mental Health Days off, access to the Headspace app, retirement savings, tuition reimbursement, employee incentive programs, and resources for mental, physical, and financial wellbeing. Culture: Globally recognized, award-winning reputation for inclusion and belonging, flexibility, work-life balance, and more. We live by our values: Obsess over our Customers, Compete to Win, Challenge (Y)our Thinking, Act Fast / Learn Fast, and Stronger Together. Social Impact: Make an impact in your community with our Social Impact Institute.
We offer employees two paid volunteer days off annually and opportunities to get involved with pro-bono consulting projects and Environmental, Social, and Governance (ESG) initiatives. Making a Real-World Impact: We are one of the few companies globally that helps its customers pursue justice, truth, and transparency. Together, with the professionals and institutions we serve, we help uphold the rule of law, turn the wheels of commerce, catch bad actors, report the facts, and provide trusted, unbiased information to people all over the world. About Us Thomson Reuters informs the way forward by bringing together the trusted content and technology that people and organizations need to make the right decisions. We serve professionals across legal, tax, accounting, compliance, government, and media. Our products combine highly specialized software and insights to empower professionals with the data, intelligence, and solutions needed to make informed decisions, and to help institutions in their pursuit of justice, truth, and transparency. Reuters, part of Thomson Reuters, is a world leading provider of trusted journalism and news. We are powered by the talents of 26,000 employees across more than 70 countries, where everyone has a chance to contribute and grow professionally in flexible work environments. At a time when objectivity, accuracy, fairness, and transparency are under attack, we consider it our duty to pursue them. Sound exciting? Join us and help shape the industries that move society forward. As a global business, we rely on the unique backgrounds, perspectives, and experiences of all employees to deliver on our business goals. 
To ensure we can do that, we seek talented, qualified employees in all our operations around the world regardless of race, color, sex/gender, including pregnancy, gender identity and expression, national origin, religion, sexual orientation, disability, age, marital status, citizen status, veteran status, or any other protected classification under applicable law. Thomson Reuters is proud to be an Equal Employment Opportunity Employer providing a drug-free workplace. We also make reasonable accommodations for qualified individuals with disabilities and for sincerely held religious beliefs in accordance with applicable law. More information on requesting an accommodation here. Learn more on how to protect yourself from fraudulent job postings here. More information about Thomson Reuters can be found on thomsonreuters.com.

Posted 2 weeks ago

Apply

8.0 years

5 - 25 Lacs

Bengaluru, Karnataka, India

On-site

Location: Bangalore
Experience: 3-8 Years
Work Mode: Hybrid
Mandatory Skills: Linux, AWS, build & release tools, scripting (Shell & Python), Docker, Kubernetes, configuration management and databases, Hadoop, Spark, ELK (Elasticsearch, Logstash, Kibana), a big data store such as InfluxDB, Elasticsearch, or Cassandra, Ansible, Chef, Puppet

Qualifications:
- 4-7 years track record of relevant work experience; a Computer Science or related technical degree is required
- Proven track record of building and shipping large-scale engineering products and/or knowledge of cloud infrastructure such as Azure/AWS preferred
- Experience in Shell, Python, or any scripting language
- Experience in managing Linux systems and build and release tools like Jenkins
- Effective communication skills (both written and verbal)
- Ability to collaborate with a diverse set of engineers, data scientists, and product managers
- Comfort in a fast-paced start-up environment

Preferred Qualifications:
- Support experience in the Big Data domain
- Architecting, implementing, and maintaining Big Data solutions
- Experience with the Hadoop ecosystem (HDFS, MapReduce, Oozie, Hive, Impala, Spark, Kerberos, Kafka, etc.)
- Experience in container technologies like Docker and Kubernetes, and configuration management systems

Roles & Responsibilities:
- Align with key client initiatives
- Interface daily with customers across leading Fortune 500 companies to understand strategic requirements
- Connect with VP- and Director-level clients on a regular basis; travel to client locations
- Understand business requirements and tie them to technology solutions
- Strategically support technical initiatives
- Design, manage, and deploy highly scalable and fault-tolerant distributed components using Big Data technologies
- Evaluate and choose technology stacks that best fit client data strategy and constraints
- Drive automation and massive deployments
- Drive good engineering practices from the bottom up
- Develop industry-leading CI/CD, monitoring, and support practices inside the team
- Develop scripts to automate DevOps processes and reduce team effort
- Work with the team to develop automation and resolve issues
- Support TB-scale pipelines; perform root cause analysis for production errors
- Support developers in day-to-day DevOps operations
- Excellent experience in application support, integration development, and data management
- Design the roster and escalation matrix for the team
- Provide technical leadership and manage the team on a day-to-day basis
- Guide DevOps engineers in day-to-day design, automation, and support tasks
- Play a key role in hiring technical talent to build the future of the company
- Conduct training on the technology stack for developers in-house and outside

Culture:
- Must be a strategic thinker with the ability to think unconventionally / out of the box
- Analytical and data-driven orientation; raw intellect, talent, and energy are critical
- Entrepreneurial and agile: understands the demands of a private, high-growth company
- Ability to be both a leader and a hands-on "doer"

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana

On-site

General information Country India State Telangana City Hyderabad Job ID 45301 Department Development Description & Requirements As a software lead, you will play a critical role in defining and driving the architectural vision of our RPA product. You will ensure technical excellence, mentor engineering teams, and collaborate across departments to deliver innovative automation solutions. This is a unique opportunity to influence the future of RPA technology and make a significant impact on the industry. RESPONSIBILITIES: Define and lead the architectural design and development of the RPA product, ensuring solutions are scalable, maintainable, and aligned with organizational strategic goals. Provide technical leadership and mentor team members on architectural best practices. Analyze and resolve complex technical challenges, including performance bottlenecks, scalability issues, and integration challenges, to ensure high system reliability and performance. Collaborate with cross-functional stakeholders, including product managers, QA, and engineering teams, to define system requirements, prioritize technical objectives, and design cohesive solutions. Provide architectural insights during sprint planning and agile processes. Establish and enforce coding standards, best practices, and guidelines across the engineering team, conducting code reviews with a focus on architecture, maintainability, and future scalability. Develop and maintain comprehensive documentation for system architecture, design decisions, and implementation details, ensuring knowledge transfer and facilitating team collaboration. Architect and oversee robust testing strategies, including automated unit, integration, and regression tests, to ensure adherence to quality standards and efficient system validation. Research and integrate emerging technologies, particularly advancements in RPA and automation, to continually enhance the product’s capabilities and technical stack. 
Drive innovation and implement best practices within the team. Serve as a technical mentor and advisor to engineering teams, fostering professional growth and ensuring alignment with the overall architectural vision. Ensure that the RPA product adheres to security and compliance standards by incorporating secure design principles, conducting regular security reviews, and implementing necessary safeguards to protect data integrity, confidentiality, and availability. EDUCATION & EXPERIENCE: Bachelor’s or Master’s degree in Computer Science, Software Engineering, or a related field. 8+ years of professional experience in software development. REQUIRED SKILLS: Expertise in object-oriented programming languages such as Java, C#, or similar, with a strong understanding of design patterns and principles. Deep familiarity with software development best practices, version control systems (e.g., Git), and continuous integration/continuous delivery (CI/CD) workflows. Proven experience deploying and managing infrastructure on cloud platforms such as AWS, Azure, or Google Cloud, including knowledge of containerization technologies like Docker and orchestration tools like Kubernetes. Strong proficiency in architecting, building, and optimizing RESTful APIs and microservices, with familiarity in tools like Swagger/OpenAPI and Postman for design and testing Comprehensive knowledge of SQL databases (e.g., PostgreSQL, SQLServer) with expertise in designing scalable and reliable data models, including creating detailed Entity-Relationship Diagrams (ERDs) and optimizing database schemas for performance and maintainability. Demonstrated experience in building and maintaining robust CI/CD pipelines using tools such as Jenkins or GitLab CI. Demonstrated ability to lead teams in identifying and resolving complex software and infrastructure issues using advanced troubleshooting techniques and tools. 
Exceptional communication and leadership skills, with the ability to guide and collaborate with cross-functional teams, bridging technical and non-technical stakeholders. Excellent written and verbal communication skills, with a focus on documenting technical designs, code, and system processes clearly and concisely. Comfortable and experienced in agile development environments, demonstrating adaptability to evolving requirements and timelines while maintaining high productivity and focus on deliverables. Familiarity with security best practices in software development, such as OWASP guidelines, secure coding principles, and implementing authentication/authorization frameworks (e.g., OAuth, SAML, JWT). Experience with microservices architecture, message brokers (e.g., RabbitMQ, Kafka), and event-driven design. Extensive experience in performance optimization and scalability, with a focus on designing high-performance systems and utilizing profiling tools and techniques to optimize both code and infrastructure for maximum efficiency. PREFERRED SKILLS: Experience with serverless architecture, including deploying and managing serverless applications using platforms such as AWS Lambda, Azure Functions, or Google Cloud Functions, to build scalable, cost-effective solutions. Experience with RPA tools or frameworks (e.g., UiPath, Automation Anywhere, Blue Prism) is a plus. Experience with Generative AI technologies, including working with frameworks like TensorFlow, PyTorch, or Hugging Face, and integrating AI/ML models into software applications. Hands-on experience with data analytics or logging tools like ELK Stack (Elasticsearch, Logstash, Kibana) or Splunk for monitoring and troubleshooting application performance About Infor Infor is a global leader in business cloud software products for companies in industry specific markets. 
Infor builds complete industry suites in the cloud and efficiently deploys technology that puts the user experience first, leverages data science, and integrates easily into existing systems. Over 60,000 organizations worldwide rely on Infor to help overcome market disruptions and achieve business-wide digital transformation. For more information visit www.infor.com Our Values At Infor, we strive for an environment that is founded on a business philosophy called Principle Based Management™ (PBM™) and eight Guiding Principles: integrity, stewardship & compliance, transformation, principled entrepreneurship, knowledge, humility, respect, self-actualization. Increasing diversity is important to reflect our markets, customers, partners, and communities we serve in now and in the future. We have a relentless commitment to a culture based on PBM. Informed by the principles that allow a free and open society to flourish, PBM™ prepares individuals to innovate, improve, and transform while fostering a healthy, growing organization that creates long-term value for its clients and supporters and fulfillment for its employees. Infor is an Equal Opportunity Employer. We are committed to creating a diverse and inclusive work environment. Infor does not discriminate against candidates or employees because of their sex, race, gender identity, disability, age, sexual orientation, religion, national origin, veteran status, or any other protected status under the law. If you require accommodation or assistance at any time during the application or selection processes, please submit a request by following the directions located in the FAQ section at the bottom of the infor.com/about/careers webpage.

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Who We Are

At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities.

The Role

Job Description: As an ELK (Elasticsearch, Logstash & Kibana) Data Engineer, you will be responsible for developing, implementing, and maintaining ELK stack-based solutions for Kyndryl's clients. This role involves developing efficient and effective data and log ingestion, processing, indexing, and visualization for monitoring, troubleshooting, and analysis purposes.

Key Responsibilities: Configure Logstash to receive, filter, and transform logs from diverse sources (e.g., servers, applications, AppDynamics, storage, databases, and so on) before sending them to Elasticsearch. Configure ILM policies, index templates, etc. Develop Logstash configuration files to parse, enrich, and filter log data from various input sources (e.g., APM tools, databases, storage, and so on). Implement techniques like grok patterns, regular expressions, and plugins to handle complex log formats and structures. Ensure efficient and reliable data ingestion by optimizing Logstash performance, handling high data volumes, and managing throughput. Utilize Kibana to create visually appealing dashboards, reports, and custom visualizations. Collaborate with business users to understand their data integration and visualization needs and translate them into technical solutions. Establish correlations within the data and develop visualizations to detect the root cause of issues.
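The grok-and-regex parsing work described above can be prototyped in plain Python before being ported to a Logstash grok filter. Below is a minimal sketch; the named groups and the sample log line are illustrative, not a fixed schema:

```python
import re

# Named-group regex in the spirit of a COMBINEDAPACHELOG-style grok pattern.
# Field names here are illustrative choices, not a mandated schema.
ACCESS_LOG = re.compile(
    r'(?P<client_ip>\S+) \S+ \S+ \[(?P<timestamp>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d{3}) (?P<bytes>\d+)'
)

def parse_line(line):
    """Return structured fields for one access-log line, or None on no match."""
    m = ACCESS_LOG.match(line)
    if not m:
        return None
    event = m.groupdict()
    event["status"] = int(event["status"])  # coerce numerics, as a Logstash
    event["bytes"] = int(event["bytes"])    # mutate/convert filter would
    return event

sample = '203.0.113.7 - - [10/Oct/2024:13:55:36 +0000] "GET /health HTTP/1.1" 200 512'
print(parse_line(sample))
```

The same named groups map directly onto grok semantics (for example, `%{IP:client_ip}`), which makes this a cheap way to validate a pattern against real log samples before deploying it.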
Integrate with ticketing tools such as ServiceNow. Hands-on experience with ML and Watcher functionalities. Monitor Elasticsearch clusters for health, performance, and resource utilization. Create and maintain technical documentation, including system diagrams, deployment procedures, and troubleshooting guides.

Who You Are

Education, Experience, and Certification Requirements: BS or MS degree in Computer Science or a related technical field. 5+ years of overall IT industry experience. 3+ years of development experience with Elastic, Logstash and Kibana in designing, building, and maintaining log and data processing systems. 3+ years of Python or Java development experience. 4+ years of SQL experience (NoSQL experience is a plus). 4+ years of experience with schema design and dimensional data modelling. Experience working with Machine Learning models is a plus. Knowledge of cloud platforms (e.g., AWS, Azure, GCP) and containerization technologies (e.g., Docker, Kubernetes) is a plus. An "Elastic Certified Engineer" certification is preferable.

Being You

Diversity is a whole lot more than what we look like or where we come from; it's how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we're not doing it single-handedly: our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That's the Kyndryl Way.

What You Can Expect

With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value.
Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you; we want you to succeed so that together, we will all succeed.

Get Referred! If you know someone who works at Kyndryl, when asked 'How Did You Hear About Us' during the application process, select 'Employee Referral' and enter your contact's Kyndryl email address.

Posted 2 weeks ago

Apply

3.0 years

8 - 10 Lacs

Mohali

On-site

Apply Here - https://beyondroot.keka.com/careers/jobdetails/31270

We are seeking a highly skilled DevSecOps Engineer to join our team and enhance the security posture of our development and deployment processes. You will be responsible for embedding security throughout the DevOps pipeline and across the infrastructure, ensuring best practices are implemented in CI/CD, infrastructure automation, container security, and monitoring. The ideal candidate is experienced with AWS, Kubernetes, Jenkins, and a suite of security and monitoring tools.

Key Responsibilities: Design, implement, and manage CI/CD pipelines for automated builds, testing, and deployments. Use Infrastructure as Code (IaC) tools (e.g., Terraform, CloudFormation) to provision and manage infrastructure. Automate manual operational tasks through scripting and configuration management tools (e.g., Ansible, Bash, Python). Deploy, monitor, and maintain applications in cloud environments such as AWS. Set up and manage monitoring, alerting, and logging systems using tools like Prometheus, Grafana, ELK, or Datadog. Collaborate with development and QA teams to optimize the software development lifecycle. Implement DevSecOps practices to integrate security into CI/CD and cloud workflows. Perform routine system maintenance, upgrades, and troubleshooting.

Required Skills and Qualifications: 3–6+ years in a DevSecOps, DevOps, or Security Engineer role. Strong hands-on experience with AWS services and security configurations. Proficient in Jenkins and GitLab CI for pipeline automation. Deep understanding of Docker and container orchestration with Kubernetes. Experience with SonarQube, CodeQL, and OWASP security practices. Familiarity with monitoring and observability tools like Datadog, ELK (Elasticsearch, Logstash, Kibana), and New Relic. Proficient in Git workflows and secure development practices. Strong scripting experience (e.g., Bash, Python).
Knowledge of secure coding practices, threat modeling, and compliance frameworks (e.g., CIS, NIST). Job Types: Full-time, Permanent. Pay: ₹70,000.00 - ₹90,000.00 per month. Benefits: Flexible schedule, paid sick time. Schedule: Day shift, Monday to Friday, morning shift. Ability to commute/relocate: Mohali, Punjab: Reliably commute or planning to relocate before starting work (Required). Experience: DevOps: 3 years (Required). Location: Mohali, Punjab (Required). Work Location: In person. Speak with the employer: +91 9817558892
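As one flavor of the DevSecOps gating described above, a CI stage can run lightweight policy checks before a build proceeds. The sketch below lints a Dockerfile for two common findings; the rules are illustrative examples, and real pipelines lean on dedicated scanners such as the tools named in the posting:

```python
# Minimal sketch of a pipeline security gate: lint a Dockerfile for an
# unpinned base image and a missing USER instruction. These two rules are
# illustrative only; they are not a substitute for a real scanner.
def lint_dockerfile(text: str) -> list:
    """Return a list of findings; an empty list means the checks passed."""
    findings = []
    lines = [l.strip() for l in text.splitlines() if l.strip()]
    for line in lines:
        if line.upper().startswith("FROM "):
            image = line.split()[1]
            if ":" not in image or image.endswith(":latest"):
                findings.append(f"unpinned base image: {image}")
    if not any(l.upper().startswith("USER ") for l in lines):
        findings.append("container runs as root (no USER instruction)")
    return findings

bad = 'FROM ubuntu:latest\nRUN apt-get update\nCMD ["app"]'
good = 'FROM python:3.12-slim\nUSER app\nCMD ["app"]'
print(lint_dockerfile(bad))   # two findings
print(lint_dockerfile(good))  # no findings
```

In a Jenkins or GitLab CI job, a non-empty findings list would typically fail the stage, which is the "shift security left" pattern the role describes.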

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Navi Mumbai, Maharashtra

On-site

As a skilled ELK (ElasticSearch, Logstash, and Kibana) Stack Engineer, you will be responsible for designing and implementing ELK stack solutions to create and manage large-scale elastic search clusters on Production and DR environments. Your primary focus will be on designing end-to-end solutions that emphasize performance, reliability, scalability, and maintainability. You will collaborate with subject matter experts (SMEs) to create prototypes and adopt agile and DevOps practices to align with the product delivery lifecycle. Automation of processes using relevant tools and frameworks will be a key aspect of your role. Additionally, you will work closely with Infrastructure and development teams for capacity planning and deployment strategy to achieve a highly available and scalable architecture. Your proficiency in developing ELK stack solutions, including Elasticsearch, Logstash, and Kibana, will be crucial. Experience in upgrading Elasticsearch across major versions, managing large applications in production environments, and proficiency in Python is required. Familiarity with Elasticsearch Painless scripting language, Linux/Unix operating systems (preferably CentOS/RHEL), Oracle PL/SQL, scripting technologies, Git, Jenkins, Ansible, Docker, ITIL, Agile, Jira, Confluence, and security best practices will be advantageous. You should be well versed in applications/infrastructure logging and monitoring tools like SolarWinds, Splunk, Grafana, and Prometheus. Your skills should include configuring, maintaining, tuning, administering, and troubleshooting Elasticsearch clusters in a cloud environment, understanding Elastic cluster architecture, design, and deployment, and handling JSON data ingest proficiently. Agile development experience, proficiency in source control using Git, and excellent communication skills to collaborate with DevOps, Product, and Project Management teams are essential. 
Your initiative and problem-solving abilities will be crucial in this role, along with the ability to work in a dynamic, fast-moving environment, prioritize tasks effectively, and manage time optimally.
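As an illustration of the end-to-end pipeline design this role describes, a minimal Logstash pipeline wires an input, a filter stage, and an Elasticsearch output together. The port, hosts, and index pattern below are assumptions for the sketch, not a real deployment:

```conf
# Illustrative Logstash pipeline: receive events from Beats, parse web-server
# logs with a standard grok pattern, and index them into Elasticsearch.
input {
  beats { port => 5044 }
}
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  date {
    match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "weblogs-%{+YYYY.MM.dd}"
  }
}
```

Daily indices like this are the usual starting point for the ILM and retention-policy work the posting mentions.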

Posted 3 weeks ago

Apply

4.0 - 8.0 years

0 Lacs

Maharashtra

On-site

You should have hands-on experience working with Elasticsearch, Logstash, Kibana, Prometheus, and Grafana monitoring systems. Your responsibilities will include installation, upgrade, and management of ELK, Prometheus, and Grafana systems. You should be proficient in ELK, Prometheus, and Grafana administration, configuration, performance tuning, and troubleshooting. Additionally, you must have knowledge of various clustering topologies, such as redundant assignments and active-passive setups, and experience in deploying clusters on multiple cloud platforms like AWS EC2 and Azure. Experience in Logstash pipeline design, search index optimization, and tuning is required. You will be responsible for implementing security measures and ensuring compliance with security policies and procedures such as the CIS benchmark. Collaboration with other teams to ensure seamless integration of the environment with other systems is essential. Creating and maintaining documentation related to the environment is also part of the role. Key skills required for this position include certification in monitoring systems like ELK, RHCSA/RHCE, experience on the Linux platform, and knowledge of monitoring tools such as Prometheus, Grafana, the ELK stack, ManageEngine, or any APM tool. Educational qualifications should include a Bachelor's degree in Computer Science, Information Technology, or a related field. The ideal candidate should have 4-7 years of relevant experience, and the work location for this position is Mumbai.

Posted 3 weeks ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

About Us: At Dabster, we specialize in connecting top talent with leading global companies. We are currently seeking a skilled and dedicated QA Lead to join our client's team in Chennai, Tamil Nadu. Our mission is to be the foremost recruitment specialist in securing exceptional talent for a diverse range of global clients.

Who You Will Work With: Our client is a globally recognized technology company delivering IT services, consulting, and business solutions. They partner with leading organizations worldwide to drive digital transformation, leveraging innovation and deep industry expertise to solve complex business challenges.

Job Description: The core responsibilities will cover: Lead and mentor a team of QA engineers, fostering their professional growth and ensuring adherence to best practices. Define, develop, and oversee comprehensive test strategies, plans, and test cases for complex systems, encompassing manual and automated testing. Drive the QA automation strategy, including the architecture, design, development, maintenance, and execution of robust automated test suites across various layers (frontend, backend, APIs) within CI/CD pipelines. Champion automation best practices and ensure the team's proficiency in automating tests using Python and relevant frameworks. Collaborate closely with development, product, and other cross-functional teams to define clear quality standards, identify potential risks, and ensure the timely delivery of high-quality software. Oversee defect management processes, ensuring defects are accurately identified, documented, prioritised, tracked, and resolved promptly. Utilise advanced technical skills to investigate and troubleshoot complex test defects, providing detailed insights and collaborating with developers on solutions. Ensure testing strategies are effectively adapted for cloud-based environments (preferably AWS), leveraging cloud services for optimal test execution and coverage.
Manage and optimise test cases and defect management using tools like Xray and JIRA, ensuring comprehensive test coverage and a clear understanding of user story lifecycles. Build strong relationships with internal and external stakeholders, fostering effective communication and collaboration to achieve quality goals. Champion quality assurance best practices, advocate for innovative ideas, and drive continuous improvement in testing methodologies and processes across the team and organisation.

Tech Stack

We work with an exciting, modern stack built for scale, reliability, and productivity. To be successful in this role, you'll need solid experience with some of the core tools, platforms and technologies we work with: Linux; MySQL; margin and collateral intelligence for derivatives; AWS (EC2, IAM, S3, CloudFormation, Lambda, ELB/ALB, API Gateway, ACM); Python (Pip, Flask, SciPy, QuantLib, Jinja2); Playwright; Docker; Git & GitLab; Elastic stack (Elasticsearch, Logstash, Kibana); Serverless (framework); Jira, Confluence, Xray; Generative AI tools.

Required Qualifications & Experience

Proven experience as a QA Lead, demonstrating leadership in quality assurance and a track record of successfully leading testing initiatives. Demonstrable and significant expertise in designing, implementing, and leading QA Automation strategies, including extensive experience executing automation tests on CI/CD pipelines. Strong experience working with cloud-based environments (preferably AWS), including understanding how to test applications deployed in such infrastructures. Exceptional proficiency in automating frontend, backend, and API testing using Python and related frameworks (e.g., Playwright, Postman/Bruno, Cucumber/Gherkin, Rest Assured). In-depth knowledge of Agile methodologies and proven ability to lead and thrive in a fast-paced, dynamic environment. Hands-on experience with industry-standard tools such as Playwright, Postman/Bruno, Cucumber/Gherkin, and Rest Assured.
Expertise with test management tools like Xray and version control systems like GitLab, including the ability to define and enforce usage best practices. Comprehensive experience with JIRA, including a deep understanding of the full lifecycle of user stories, epics, and testing processes within an Agile framework. Solid working knowledge of SQL for data validation and backend testing. Comprehensive understanding of the entire software development lifecycle and its impact on quality assurance.

Skills & Competencies

Experience in the financial industry, with exposure to derivatives, prime brokerage, and margin/collateral management preferred. Prior experience in performance testing. Excellent teamwork skills. Professional fluency in English (both written and spoken). Excellent interpersonal and communication skills, with the ability to collaborate effectively across global teams and time zones. Exceptional leadership, communication, and analytical skills, with the ability to articulate complex technical concepts to both technical and non-technical stakeholders. Excellent internal and external rapport-building and people skills, with a proven ability to establish strong working relationships and mentor team members. The ability to inspire and advocate for good ideas and solutions, irrespective of their source, and to drive their implementation.

How to Apply

Apply by submitting your resume today, showcasing your relevant experience and passion for the position, via LinkedIn Easy Apply or directly to james.a@dabster.net
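Much of the backend/API automation described above comes down to asserting response payloads against an expected schema. A minimal, framework-free sketch follows; the payload fields and required schema are hypothetical, not a real client API:

```python
# Sketch of a lightweight API-response check of the kind an automated
# backend test might run. The schema and payloads are illustrative.
def validate_response(payload: dict, required: dict) -> list:
    """Return a list of problems; an empty list means the payload passed."""
    problems = []
    for field, expected_type in required.items():
        if field not in payload:
            problems.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected_type):
            problems.append(f"wrong type for {field}: {type(payload[field]).__name__}")
    return problems

schema = {"id": int, "status": str, "margin": float}
ok = {"id": 42, "status": "SETTLED", "margin": 1250.0}
bad = {"id": "42", "status": "SETTLED"}

print(validate_response(ok, schema))
print(validate_response(bad, schema))
```

In practice the same assertions would live inside pytest tests driven by Playwright or an HTTP client, so a failing check surfaces as a failing CI pipeline stage.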

Posted 3 weeks ago

Apply

11.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Description:

About the Company: At AT&T, we’re connecting the world through the latest tech, top-of-the-line communications and the best in connectivity. Our groundbreaking digital solutions provide intuitive and integrated experiences for millions of customers across online, retail and care channels. Join our mission to deliver compelling communication and entertainment experiences to customers around the world as we continue to evolve as a technology-powered, human-centered organization. As part of our team, you’ll transform the way we deliver a seamless customer experience with digital at the center of all you do. In our world, digital is much larger than just an eCommerce channel; we are transforming all channels to digitally perform as one team to create a better customer experience. As we move into 2024, the digital transformation will revolutionize the digital space and you can build a career that will propel your future.

About the Job: This is a Lead Cyber Security position, responsible for designing, implementing, and operating/administering the Elastic Stack within the Dynamic Defense product development portfolio in the Chief Security Office at AT&T.

Experience Level: 11+ years

Location: Hyderabad or Bengaluru

Roles and Responsibilities: Design, implementation and operation/administration of the Elastic Stack. Designing and implementing Elastic Stack scalability and availability/redundancy. Design, implement and troubleshoot Logstash, Metricbeat and Filebeat. Implement and troubleshoot log forwarding and ingestion into the Elastic Stack with performance, scalability and availability as requirements. Create and update scripts to enhance automation, operations and management of the system with Python, shell scripts or PowerShell. Manage and troubleshoot Elastic Cloud on Kubernetes instances.
Providing thought leadership and direction on program improvements and optimizations. Collaborating with team members to determine best practices and client requirements for needed software products. Ability to adapt to an evolving process and application. Willingness to experiment and try new approaches to solve old and new problems. Will work with onshore leaders to discuss staffing and resource issues and strategies. Supports innovation, strategic planning, technical proofs of concept, testing, lab work, and various other technical program management.

Primary/Mandatory skills: Overall 12+ years of IT experience. 8+ years of proven experience working across the Elastic Stack. Cloud experience with Elastic, primarily Elastic Cloud on Kubernetes (ECK). Understands how to deploy nodes in Azure and can manage/support pipelines in Azure. Able to create indexes/data streams and define ILM policies. Able to parse data from different raw sources and enrich data. Ability to troubleshoot Elastic indexes, shards, and errors. Able to work with the free version of Elastic and build tools to assist in its operation. Understands how Logstash, Metricbeat, and Filebeat work and how to integrate them as forwarders to Elastic and Kafka. Able to manage/support multiple Elastic clusters. Able to architect ILM policies with node resources in mind. Has experience with Elastic Agents/Fleet. Experience with design, implementation and support of Azure components, including databases and networking.

Additional information (if any): Flexible to provide coverage in US morning hours upon need.
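Architecting ILM policies, as mentioned above, amounts to tiering indices by age across hot/warm/cold phases before deletion. The sketch below models that decision; the thresholds are made-up examples, and real ILM policies are JSON documents that Elasticsearch evaluates and applies itself:

```python
# Illustrative hot/warm/cold/delete tiering: given an index's age in days,
# decide which ILM-style phase it belongs to. Thresholds are invented for
# the sketch; actual policies live in Elasticsearch as JSON documents.
PHASES = [("hot", 7), ("warm", 30), ("cold", 90)]  # (phase, max age in days)

def phase_for_age(age_days: float) -> str:
    for phase, max_age in PHASES:
        if age_days < max_age:
            return phase
    return "delete"

for age in (2, 10, 45, 365):
    print(age, "->", phase_for_age(age))
```

The "with node resources in mind" requirement maps onto choosing these thresholds so that the hot tier's fast nodes hold only the indices still receiving writes and frequent queries.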
Certification : CISSP or equivalent #Cybersecurity Weekly Hours: 40 Time Type: Regular Location: Hyderabad, Andhra Pradesh, India It is the policy of AT&T to provide equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by federal, state or local law. In addition, AT&T will provide reasonable accommodations for qualified individuals with disabilities. AT&T is a fair chance employer and does not initiate a background check until an offer is made.

Posted 3 weeks ago

Apply

5.0 - 10.0 years

15 - 20 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

Project description

We are seeking a skilled professional to work on a robust technical platform for analyzing, developing, and maintaining business-critical software systems.

Responsibilities

This position will be responsible for the development and maintenance of multiple high-availability applications for the client. This includes software design and build for product enhancements, resolution of defects, and rapid response to critical issues, as well as ongoing monitoring of system health. Due to the high-performance nature of the product and the criticality of these applications, high proficiency is required in troubleshooting, expert skills in Java, and well-rounded skills across the overall software stack. This is a Java developer role. The candidate is expected to deliver quality software as part of a development team.

Skills

Must have the following years of industry experience: 5+ years of Agile/Scrum with Java development in a Linux environment. 2+ years of RDBMS. 2+ years of CI/CD and DevOps. 2+ years of JBoss. 2+ years of development experience in Azure (or an equivalent cloud). 1+ years of experience in migrating on-prem applications to the cloud. Experience in troubleshooting, identifying root causes of production incidents, and implementing performance improvements is considered a plus. Programming experience in the following is considered a plus: Logstash, analytics, Kafka, LDAP, microservices, ETL. Airline experience preferred but not required.

Personal Traits: Creative problem solver. Strong written and verbal communication skills in English. Dependability and accountability.

Nice to have: -

Posted 3 weeks ago

Apply

8.0 - 12.0 years

20 - 25 Lacs

Bengaluru

Work from Office

Project description

We're seeking a solid and creative .NET Developer eager to solve scale problems and work on cutting-edge and open-source technologies. In this project, you will have the opportunity to write code that will impact thousands of users. You'll apply your critical thinking and technical skills to develop cutting-edge software, and you'll have the opportunity to interact with teams across disciplines. At Luxoft, our culture strives to solve challenging problems through product engineering based on hypothesis testing, empowering people to come up with ideas. We do it with a truly flexible environment, high-impact projects in Agile environments, a culture focused on results, training, and strong support to grow your career. In this project, you will be a member of the Information Technology Team, within the Information Technology Division. This position supports and transforms existing and new mission-critical and highly visible operational website(s) and applications - spanning multiple technology stacks - through all phases of the SDLC, while working collaboratively across IT, business, and third-party suppliers from around the globe in a 24x7, fast-paced, and Agile-based environment.
Responsibilities

Experience in .NET (backend) development.

Skills

Must have: 8-12 years of experience in .NET technologies. Hands-on service design, schema design and application integration design. Hands-on software development using C# and .NET Core. Use of multiple cloud-native database platforms including DynamoDB, SQL, ElastiCache, and others. Conduct code reviews and peer reviews. Unit testing and unit test automation, defect resolution and software optimization. Code deployment using CI/CD processes. Understand business requirements and technical limitations. Ability to learn new technologies and influence the team and leadership to constantly implement modern solutions. Experience in using the Elasticsearch, Logstash, Kibana (ELK) stack for logging and analytics. Experience in container orchestration using Kubernetes. Knowledge and experience working with public cloud AWS services. Knowledge of cloud architecture and design patterns. Ability to prepare documentation for microservices. Monitoring tools such as Datadog and Logstash. Excellent communication skills.

Nice to have: Airline industry knowledge is preferred but not required.

Posted 3 weeks ago

Apply

3.0 years

4 - 7 Lacs

Kazhakuttam

On-site

Job Title: Data Engineer
Location: Trivandrum
Type: Full-time
Experience Level: Mid-Senior (3+ years)

About the Role

We are seeking a skilled Data Engineer to design and implement robust, scalable data pipelines for processing and transforming log data stored in Elasticsearch. You will play a key role in building the data pipeline for our advanced ML-powered behavioural anomaly detection platform. This role also involves designing and maintaining the feature engineering pipeline, including integration with a feature store like Feast, and ensuring high-quality, low-latency data delivery for ML models. If you have strong experience in the ELK stack, Python, and modern data architectures, and are excited by the intersection of AI and cybersecurity, this is for you.

Key Responsibilities

ETL Pipeline Development: Build scalable ETL workflows to extract raw logs from Elasticsearch. Clean, normalize, and transform logs into structured features for ML use cases. Maintain data freshness with either batch or near real-time workflows.

Feature Store Integration: Design schemas for storing derived features in a feature store (e.g., Feast). Collaborate with ML engineers to ensure features are aligned with model requirements. Manage historical feature backfills and real-time lookups.

Data Infrastructure and Architecture: Optimize Elasticsearch queries and index management for performance and cost. Design data schemas, partitioning, and retention policies for long-term storage. Ensure data integrity, versioning, and reproducibility of transformed data.

Monitoring and Scaling: Implement monitoring for pipeline performance and failures. Scale pipelines to support growing log data (on the scale of 100s of GBs per day).

Collaboration: Work closely with security analysts and AI engineers to translate behavioural insights into engineered features. Document data lineage, transformation logic, and data dictionaries.
Minimum Qualifications

Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field. 3+ years of experience in data engineering roles with Python and Elasticsearch. Strong experience building data pipelines using Python (pandas, elasticsearch-py; PySpark is a bonus) and orchestration tools (e.g., Apache Airflow, Prefect). Familiarity with log processing, especially NGINX and Apache logs, HTTP protocols, and cybersecurity-relevant fields (IPs, headers, user agents). Experience with feature stores such as Feast, Tecton, or custom-built systems. Solid understanding of data modeling, versioning, and time-series data handling. Knowledge of DevOps practices (Docker, Git, CI/CD workflows).

Nice to Have

Experience with Kafka, Fluentd or Logstash pipelines. Experience deploying data workloads on cloud environments (AWS/GCP/Azure). Exposure to anomaly detection or cybersecurity ML systems. Familiarity with ML workflows, model deployment, and MLOps.

Job Type: Full-time
Pay: ₹35,000.00 - ₹60,000.00 per month
Benefits: Health insurance, Provident Fund
Schedule: Day shift, Monday to Friday
Experience: Data engineering: 3 years (Required)
Work Location: In person
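The log-to-feature transformation described above can be sketched with pandas; the columns and aggregated features below are illustrative, not the platform's actual schema:

```python
import pandas as pd

# Toy version of the log-to-feature step: aggregate raw access-log events
# into per-IP behavioural features of the kind an anomaly-detection model
# might consume. Column names and features are illustrative assumptions.
logs = pd.DataFrame({
    "ip":     ["10.0.0.1", "10.0.0.1", "10.0.0.2", "10.0.0.1"],
    "status": [200, 404, 200, 404],
    "bytes":  [512, 0, 2048, 0],
})

features = logs.groupby("ip").agg(
    request_count=("status", "size"),
    error_rate=("status", lambda s: (s >= 400).mean()),
    total_bytes=("bytes", "sum"),
).reset_index()

print(features)
```

In the real pipeline, the input frame would come from an Elasticsearch extract and the output rows would be written to the feature store, with the same aggregation logic reused for backfills.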

Posted 3 weeks ago

Apply
cta

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies