
25 Logging Tools Jobs

JobPe aggregates job listings for easy access; applications are submitted directly on the original job portal.

3.0 - 7.0 years

0 Lacs

Hyderabad, Telangana

On-site

You will be joining Postman, the world's leading collaboration platform for API development. Postman is dedicated to simplifying each step of building an API and streamlining collaboration to facilitate the creation of better APIs faster. With over 35 million developers and 500,000 organizations worldwide already utilizing Postman, the company is on a mission to connect 100 million developers and assist companies in innovating in an API-first world. Postman's customer base continues to grow rapidly, fueling the need for talented individuals to join the team.

As a Software Engineer II on the Identity and Access Management (IAM) team at Postman, you will play a crucial role in developing secure, reliable, performant, and scalable IAM services and components. Working collaboratively with a cross-functional team, including product managers, UX designers, and quality, security, and platform engineers, you will contribute to building APIs and products. Your responsibilities will include creating foundational tools, frameworks, and systems to support the organization's requirements and implementing standardized formats for authentication and authorization needs.

To excel in this role, you should have 3-6 years of experience in developing complex and distributed software applications at scale. Proficiency in JavaScript and any server-side programming language is essential, as is a deep understanding of web fundamentals, the web application development lifecycle, and microservices architecture. Strong knowledge of database fundamentals, especially query optimization, indexing, and caching, will be beneficial. A self-motivated and innovative mindset, coupled with a proactive approach to challenges, will be key to your success in this position.

While not mandatory, experience with Node.js and React, familiarity with the IAM domain, and an understanding of authentication products, protocols, and methods such as OAuth, SAML, and OIDC would be advantageous. Exposure to container orchestration, CI/CD, monitoring and logging tools, and cloud infrastructure is also desirable.

At Postman, transparency, honest communication, and a focus on specific goals under a larger vision are core values that guide our work culture. We prioritize inclusivity, ensuring that every individual's contribution is valued equally. As part of our team, you will enjoy a competitive salary and benefits, flexible working hours, full medical coverage, unlimited paid time off, and a monthly lunch stipend. Our wellness program promotes a healthy lifestyle, and virtual team-building events foster connection among team members. Furthermore, our donation-matching program allows you to support causes that are important to you.

Join us in building a long-term company with an inclusive culture, where everyone can thrive and be their best selves. Your presence will be required at our Bangalore office on Mondays, Wednesdays, and Fridays, contributing to the collaborative and dynamic environment at Postman.
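The OAuth, SAML, and OIDC protocols named above ultimately rest on issuing and verifying signed tokens. As a purely illustrative, stdlib-only sketch (not Postman's implementation and not a production-grade JWT library), the example below issues and checks an HMAC-signed bearer-style token; the secret, claim names, and TTL are all invented for the demo:

```python
import base64
import hashlib
import hmac
import json
import time

# Hypothetical shared secret -- a real IAM service would load this from
# a secrets manager, never hard-code it.
SECRET = b"demo-secret"

def issue_token(subject: str, ttl_seconds: int = 3600) -> str:
    """Issue a signed token: base64(claims) + "." + base64(HMAC signature)."""
    claims = {"sub": subject, "exp": time.time() + ttl_seconds}
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = base64.urlsafe_b64encode(hmac.new(SECRET, payload, hashlib.sha256).digest())
    return (payload + b"." + sig).decode()

def verify_token(token: str):
    """Return the claims dict if the signature and expiry check out, else None."""
    try:
        payload_b64, sig_b64 = token.encode().split(b".")
    except ValueError:
        return None  # malformed token
    expected = base64.urlsafe_b64encode(
        hmac.new(SECRET, payload_b64, hashlib.sha256).digest()
    )
    if not hmac.compare_digest(sig_b64, expected):
        return None  # signature mismatch (tampered or wrong key)
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    return claims if claims["exp"] > time.time() else None
```

Real OAuth/OIDC tokens add standardized headers, asymmetric signatures, and audience/issuer claims, but the verify-before-trust structure is the same.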

Posted 3 days ago

Apply

1.0 - 5.0 years

0 Lacs

Udaipur, Rajasthan

On-site

YoCharge, an Electric Vehicle Charging & Energy Management SaaS startup, is seeking a dedicated Backend Engineer (SDE-I) to join its team in Udaipur. As a full-time on-site team member, you will play a crucial role in scaling YoCharge's back-end platform and services to facilitate smart charging for electric vehicles.

To qualify for this position, you should hold a Bachelor's degree in Computer Science or a related field and have proven experience as a Backend Developer or in a similar role. Proficiency in Python with the Django and FastAPI frameworks is essential, along with prior experience in scaling a product. Knowledge of database systems such as SQL and NoSQL, as well as ORM frameworks, is required. Experience with cloud platforms like AWS and Azure, as well as containerization technologies including Docker and Kubernetes, is preferred. Familiarity with deployment pipelines, CI/CD, and DevOps practices is also beneficial.

Proficiency in designing and developing RESTful APIs, an understanding of systems architecture, and experience in developing for scale are necessary skills for this role. Expertise in microservices architecture and asynchronous programming, along with familiarity with monitoring and logging tools like Prometheus, Grafana, and the ELK Stack, are desired qualifications. Strong communication and teamwork abilities are essential, as is the capability to thrive in a fast-paced environment and meet deadlines. The ideal candidate should have 1-2+ years of experience in a relevant field.

If you have a fondness for Electric Vehicles & Energy, a passion for building products, and an excitement to work in a startup environment, this opportunity at YoCharge may be the perfect fit for you. Join the energetic team at YoCharge and contribute to the development of a global product in the EV & Energy domain.
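The asynchronous-programming requirement above can be illustrated with a small stdlib-only sketch: polling several simulated charge points concurrently with asyncio, so total latency tracks the slowest poll rather than the sum. The charger IDs, delays, and status payload are invented for the demo; real charge-point I/O (e.g., over OCPP) would replace the sleep:

```python
import asyncio

async def poll_charger(charger_id: str, delay: float) -> dict:
    # Simulate a slow network call to one charge point (no real I/O here).
    await asyncio.sleep(delay)
    return {"charger_id": charger_id, "status": "available"}

async def poll_all(chargers: dict) -> list:
    # gather() runs all polls concurrently, so wall-clock time is roughly
    # max(delay) instead of sum(delay) -- the core win of async fan-out.
    tasks = [poll_charger(cid, d) for cid, d in chargers.items()]
    return await asyncio.gather(*tasks)

# Results come back in the same order the tasks were created.
results = asyncio.run(poll_all({"cp-1": 0.01, "cp-2": 0.02}))
```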

Posted 3 days ago

Apply

1.0 - 5.0 years

0 Lacs

Udaipur, Rajasthan

On-site

You will be joining YoCharge, an Electric Vehicle Charging & Energy Management SaaS startup that supports Charge Point Operators in efficiently launching, operating, and expanding their EV charging business. YoCharge operates globally in over 20 countries and is seeking enthusiastic team members who are passionate about developing innovative products in the EV & Energy sector. As a Backend Engineer (SDE-I) at YoCharge, located in Udaipur, you will play a key role in scaling YoCharge's back-end platform and services to facilitate smart charging for electric vehicles.

To qualify for this position, you should hold a Bachelor's degree in Computer Science or a related field and have proven experience as a Backend Developer or in a similar role. Proficiency in Python with the Django and FastAPI frameworks is essential, along with previous experience in scaling a product. Additionally, knowledge of database systems (SQL and NoSQL), ORM frameworks, cloud platforms (e.g., AWS, Azure), containerization technologies (e.g., Docker, Kubernetes), deployment pipelines, CI/CD, DevOps practices, microservices architecture, and asynchronous programming is required.

You should also be adept at designing and developing RESTful APIs, understanding systems architecture for scalability, and utilizing monitoring and logging tools (e.g., Prometheus, Grafana, ELK Stack), and possess excellent communication and teamwork skills. The ability to thrive in a fast-paced environment and meet deadlines, along with 1-2+ years of experience, is also necessary.

If you have a passion for Electric Vehicles & Energy, enjoy building products, and are excited about working in a startup environment, you might be an instant match for this role at YoCharge.

Posted 4 days ago

Apply

8.0 - 12.0 years

0 Lacs

Karnataka

On-site

DISH Network Technologies India Pvt. Ltd, a technology subsidiary of EchoStar Corporation, is a pioneering organization driving innovation and value for its customers through cutting-edge technology solutions. Our diverse product portfolio includes Boost Mobile, DISH TV, Sling TV, OnTech, Hughes, and Hughesnet, offering a wide range of services from consumer wireless to global satellite connectivity solutions. As one of EchoStar's largest development centers outside the U.S., our facilities in India are at the forefront of technological convergence, and our talented engineering team is dedicated to catalyzing innovation in multimedia network and communications development.

Join our Technology teams that challenge the status quo and redefine industry capabilities. Through research, technology innovation, and solution engineering, you will be instrumental in shaping the products and platforms of tomorrow, connecting consumers with innovative solutions.

In the System Reliability & Performance role, you will be responsible for designing, implementing, and maintaining monitoring solutions for platforms such as webMethods, GemFire, AWS services, and Kubernetes clusters. Your tasks will include automating operational tasks, incident response, and system provisioning. Additionally, you will participate in on-call rotations, conduct performance tuning, and ensure platform reliability. Your expertise will be key in managing and optimizing platforms like webMethods, GemFire, AWS Cloud, and Rancher Kubernetes. Collaborating closely with development teams, you will champion SRE best practices and ensure system documentation and maintenance. As a mentor, you will contribute to a culture of continuous learning and improvement.

To excel in this role, you should hold a Bachelor's degree in Computer Science or a related field and have at least 8 years of experience in SRE, DevOps, or technical operations. Proficiency in webMethods, GemFire, AWS cloud services, Kubernetes administration, and scripting languages like Python and Java is essential. Experience with CI/CD pipelines, monitoring tools, and networking concepts is required. Nice-to-have skills include familiarity with other integration platforms and distributed databases, AWS and Kubernetes certifications, and chaos engineering principles. An agile mindset and effective problem-solving abilities are crucial for success in this fast-paced, collaborative environment.

At DISH Network Technologies India Pvt. Ltd, we offer a range of benefits including insurance, financial programs, mental wellbeing support, an employee stock purchase program, professional development reimbursement, and team outings. Join us in driving technological innovation and redefining the future of connectivity.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

Job Description: We are looking for a skilled and experienced DevOps Engineer specializing in AWS cloud services. Your primary responsibilities will include designing, deploying, and managing AWS cloud infrastructure. Using tools like Terraform and Packer, you will automate provisioning processes to ensure efficient resource allocation. Collaboration with cross-functional teams to implement Infrastructure as Code (IaC) principles will be essential for maintaining the infrastructure.

As a DevOps Engineer, you will play a key role in optimizing system performance, reliability, and scalability. Your expertise in CI/CD pipelines and version control systems will be crucial for automating software delivery processes. Monitoring system performance, troubleshooting issues, and identifying opportunities for improvement will also be part of your responsibilities.

Your mandatory skills should include proven experience in DevOps engineering with a focus on AWS, proficiency in automated provisioning, and familiarity with Infrastructure as Code tools like Terraform and Packer. Strong scripting skills in Python and Bash, along with excellent problem-solving abilities and communication skills, are essential for effective collaboration with team members and stakeholders. Secondary skills such as AWS certifications, experience with containerization and orchestration tools like Docker and Kubernetes, knowledge of microservices architecture, and familiarity with monitoring and logging tools will be advantageous.

Education Qualifications:
- Bachelor's degree in Computer Science, Information Technology, Bachelor of Engineering (BE), or Bachelor of Technology (B.Tech)
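One way to picture the Infrastructure-as-Code idea mentioned above: Terraform also accepts JSON-formatted configuration (`.tf.json`), so a provisioning script can emit resource blocks programmatically instead of hand-editing HCL. The sketch below is illustrative only; the AMI ID, resource name, and tags are placeholders, not a tested module:

```python
import json

def make_instance_config(name: str, instance_type: str) -> dict:
    """Build a minimal Terraform-JSON (.tf.json) resource block.

    All values here are hypothetical placeholders for the demo; a real
    module would parameterize the AMI, networking, and IAM settings.
    """
    return {
        "resource": {
            "aws_instance": {
                name: {
                    "ami": "ami-12345678",  # placeholder AMI id
                    "instance_type": instance_type,
                    "tags": {"Name": name, "ManagedBy": "terraform"},
                }
            }
        }
    }

config = make_instance_config("web_server", "t3.micro")
# Written to a file named e.g. main.tf.json, this is loadable by Terraform.
rendered = json.dumps(config, indent=2)
```

Generating config as data like this is what makes provisioning reviewable and repeatable, which is the point of IaC.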

Posted 1 week ago

Apply

7.0 - 11.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Big Data Engineer based in Hyderabad, you will be responsible for developing and maintaining performance-critical applications using your expertise in C programming and other languages such as C++, Java, Python, and Scala. Your strong knowledge of AWS services, scripting languages like Python and Bash, and experience with CI/CD tools will be key assets in this role.

You will have the opportunity to work on large-scale database applications using MySQL, Oracle, or DB2, and leverage your problem-solving skills to think critically and creatively. Familiarity with infrastructure-as-code tools like Terraform and CloudFormation, containerization and orchestration tools such as Docker and Kubernetes, and monitoring and logging tools like CloudWatch, Dynatrace, New Relic, and the ELK Stack will be advantageous. Experience with Big Data solutions like Hadoop and Spark will further enhance your contributions to the team.

This is a full-time role with health insurance benefits. Relocation to Hyderabad, Telangana is required for this in-person position. We are seeking candidates with 7+ years of experience; candidates with less experience are requested not to apply. The role is initially a 6-month contract, with the possibility of extension at the client's discretion. Are you comfortable with these terms? We look forward to receiving your application and learning more about your current and expected salary.

Posted 1 week ago

Apply

1.0 - 5.0 years

0 Lacs

Punjab

On-site

ABOUT XENONSTACK
XenonStack is the fastest-growing data and AI foundry for agentic systems, enabling people and organizations to gain real-time and intelligent business insights. We are dedicated to building Agentic Systems for AI Agents with Akira.ai, developing the Vision AI Platform with XenonStack.ai, and providing Inference AI Infrastructure for Agentic Systems through Nexastack.ai.

THE OPPORTUNITY
We are seeking an experienced Associate DevOps Engineer with 1-3 years of experience in implementing and reviewing CI/CD pipelines, cloud deployments, and automation tasks. If you have a strong foundation in cloud technologies, containerization, and DevOps best practices, we would love to have you on our team.

JOB ROLES AND RESPONSIBILITIES
- Develop and maintain CI/CD pipelines to automate the deployment and testing of applications across multiple cloud platforms (AWS, Azure, GCP).
- Assist in deploying applications and services to cloud environments while ensuring optimal configuration and security practices.
- Implement monitoring solutions to ensure infrastructure health and performance; troubleshoot issues as they arise in production environments.
- Automate repetitive tasks and manage cloud infrastructure using tools like Terraform, CloudFormation, and scripting languages (Python, Bash).
- Work closely with software engineers to integrate deployment pipelines with application codebases and streamline workflows.
- Ensure efficient resource management in the cloud, monitor costs, and optimize usage to reduce waste.
- Create detailed documentation for DevOps processes, deployment procedures, and troubleshooting steps to ensure clarity and consistency across the team.

SKILLS REQUIREMENTS
- 1-3 years of hands-on experience in a DevOps or cloud infrastructure engineering role.
- Proficiency in cloud platforms like AWS, Azure, or GCP and hands-on experience with their core services (EC2, S3, RDS, Lambda, etc.).
- Advanced knowledge of CI/CD tools such as Jenkins, GitLab CI, or CircleCI, and hands-on experience implementing and managing CI/CD pipelines.
- Experience with containerization technologies like Docker and Kubernetes for deploying applications at scale.
- Strong knowledge of Infrastructure-as-Code (IaC) using tools like Terraform or CloudFormation.
- Proficiency in scripting languages such as Python and Bash for automating infrastructure tasks and deployments.
- Understanding of monitoring and logging tools like Prometheus, Grafana, ELK Stack, or CloudWatch to ensure system performance and uptime.
- Strong understanding of Linux-based operating systems and cloud-based infrastructure management.
- Bachelor's degree in Computer Science, Information Technology, or a related field.

CAREER GROWTH AND BENEFITS
- Continuous Learning & Growth: access to training, certifications, and hands-on sessions to enhance your DevOps and cloud engineering skills, with opportunities for career advancement and leadership roles in DevOps engineering.
- Recognition & Rewards: performance-based incentives and regular feedback to help you grow in your career, plus special recognition for contributions towards streamlining and improving DevOps practices.
- Work Benefits & Well-Being: comprehensive health insurance and wellness programs to ensure a healthy work-life balance, cab facilities for women employees, and additional allowances for project-based tasks.

XENONSTACK CULTURE - JOIN US & MAKE AN IMPACT
Here at XenonStack, we have a culture of cultivation with bold, courageous, and human-centric leadership principles. We value obsession and deep work in everything we do. We are on a mission to disrupt and reshape the category and welcome people with that mindset and ambition. If you are energised by the idea of shaping the future of AI in business processes and enterprise systems, there's nowhere better for you than XenonStack.

Product Value and Outcome: simplifying the user experience with AI Agents and Agentic AI.
- Obsessed with Adoption: We design everything with the goal of making AI more accessible and simplifying the business processes and enterprise systems essential to adoption.
- Obsessed with Simplicity: We simplify even the most complex challenges to create seamless, intuitive experiences with AI agents and Agentic AI.

Be a part of XenonStack's Vision and Mission for accelerating the world's transition to AI + Human Intelligence.
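As a small illustration of the monitoring-and-troubleshooting responsibilities described above, the stdlib-only sketch below retries a health probe with exponential backoff, a common pattern behind alerting checks and self-healing automation. The probe, delays, and attempt budget are invented for the demo:

```python
import time

def check_with_backoff(probe, max_attempts: int = 4, base_delay: float = 0.01) -> int:
    """Call `probe` until it returns True, doubling the wait between tries.

    Returns the number of attempts used; raises RuntimeError if the
    service never becomes healthy within the attempt budget.
    """
    for attempt in range(1, max_attempts + 1):
        if probe():
            return attempt
        if attempt < max_attempts:
            # Exponential backoff: base, 2*base, 4*base, ...
            time.sleep(base_delay * (2 ** (attempt - 1)))
    raise RuntimeError("service unhealthy after %d attempts" % max_attempts)

# Simulated flaky service: fails twice, then recovers.
calls = {"n": 0}
def flaky_probe() -> bool:
    calls["n"] += 1
    return calls["n"] >= 3

attempts = check_with_backoff(flaky_probe)
```

Real monitoring stacks (Prometheus alert rules, CloudWatch alarms) externalize this loop, but the retry-with-backoff idea is the same.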

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Senior Engineer at our fast-growing eCommerce unicorn company, you will play a crucial role in spearheading the development of microservices-based applications on AWS. You will collaborate closely with business stakeholders and product teams to create secure, reliable, and maintainable solutions that contribute to our business success.

Your responsibilities will include leading the design and implementation of microservices-based applications on AWS using services like Lambda, SQS, ECS, API Gateway, and DynamoDB. You will also design and develop RESTful APIs, integrate with third-party services including AI services, and work on solutions that leverage AI models for automation, personalization, and enhanced user experiences. Additionally, you will collaborate with DevOps teams to implement CI/CD pipelines, monitoring, and logging for seamless deployment and system reliability. Ensuring code quality by writing clean, testable, and maintainable code, and staying updated with the latest industry trends and technologies, will also be part of your role.

To excel in this position, you should have at least 3 years of software engineering experience, with a focus on building cloud-native and AI-enhanced applications. Expertise in AWS services such as Lambda, ECS, S3, DynamoDB, API Gateway, and CloudFormation is required, as is strong experience with Python, message queues, and asynchronous processing. Knowledge of microservices architecture, RESTful API design, event-driven systems, containerization technologies, and relational and NoSQL databases, along with excellent communication and leadership skills, is essential. Experience integrating AI/ML models via APIs, familiarity with vector databases and text embedding techniques for AI applications, and knowledge of monitoring and logging tools will be advantageous.

Join our team at Razor Group and be part of our mission to revolutionize the eCommerce world with a diverse portfolio of products across multiple continents. With strong backing from renowned investors, we are expanding rapidly and have a significant presence across various locations globally. This is a full-time onsite position based in Hyderabad, India within the Tech & Analytics department.
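As a hedged sketch of the Lambda + SQS pattern mentioned above (not this company's actual code), the handler below processes a batch shaped like an AWS SQS event, where each record carries its message as a JSON string in a `body` field. The order payload and the "processed" status logic are invented for the demo:

```python
import json

def handler(event, context=None):
    """Process a batch of SQS-style records.

    The event shape mirrors the AWS SQS -> Lambda integration: a top-level
    "Records" list whose entries hold the message body as a JSON string.
    """
    processed = []
    for record in event.get("Records", []):
        payload = json.loads(record["body"])
        # Hypothetical business logic: mark each order as processed.
        payload["status"] = "processed"
        processed.append(payload)
    return {"batch_size": len(processed), "items": processed}

# Invoke locally with a hand-built sample event (no AWS account needed).
sample_event = {"Records": [{"body": json.dumps({"order_id": 42})}]}
result = handler(sample_event)
```

The same function, deployed as a Lambda with an SQS trigger, would receive this event shape from the platform; locally it is just a plain Python call, which is what makes the pattern easy to unit-test.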

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Kochi, Kerala

On-site

As a DevOps Engineer, you will be responsible for designing, implementing, and maintaining cloud infrastructure on AWS. You will work closely with development and operations teams to ensure that systems are scalable, reliable, and secure.

Your responsibilities will include designing, deploying, and managing systems on AWS; implementing and managing CI/CD pipelines; automating infrastructure provisioning and application deployment; monitoring and optimizing system performance; collaborating with development teams; troubleshooting and resolving infrastructure issues; ensuring security compliance; and staying up to date with the latest AWS services, tools, and best practices. Additionally, you will participate in on-call rotations.

To qualify for this role, you should have at least 5 years of experience as a DevOps Engineer, expertise in AWS services, experience with CI/CD and GitOps tools, proficiency in IaC tools and experience building modules with them, proficiency in AWS Identity and Access Management, strong scripting skills, experience with Docker, Kubernetes, and service meshes, an understanding of networking concepts and security best practices, excellent problem-solving skills, strong communication and collaboration skills, and experience with monitoring and logging tools.

Preferred qualifications include AWS certifications, experience with ELK or OpenSearch, experience with agile methodologies and DevOps culture, and familiarity with other cloud providers and hybrid cloud environments.

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Delhi

On-site

The company is looking for an experienced and motivated DevOps Lead to join their team. As a DevOps Lead, you will be responsible for driving the DevOps strategy, optimizing infrastructure, and leading a team of DevOps engineers. Your role will involve managing infrastructure on cloud platforms, overseeing Linux systems administration, implementing monitoring solutions, ensuring security and compliance, automating workflows, managing containerized applications, optimizing CI/CD pipelines, mentoring team members, handling incident management, and maintaining comprehensive documentation.

You should have a minimum of 3 years of experience in DevOps or related roles with leadership responsibilities. Strong proficiency in Linux systems administration, experience with cloud platforms (AWS, GCP, Azure), networking knowledge, proficiency in automation tools and scripting languages, expertise in Docker and Kubernetes, experience with version control, familiarity with monitoring tools, a proven ability to build CI/CD pipelines, and excellent problem-solving skills are required.

Preferred qualifications include relevant certifications in AWS, Google Cloud, or Azure DevOps, knowledge of security best practices and compliance frameworks, demonstrated leadership skills, and a passion for promoting DevOps methodologies. A Bachelor's or Master's degree in Computer Science, Information Technology, Engineering, or a related field is preferred.

The company offers competitive compensation with performance-based bonuses, opportunities for professional growth through training and certifications, flexible working hours with remote work options, a collaborative environment working with industry experts, and the chance to contribute to high-impact projects using cutting-edge technologies.

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Haryana

On-site

About Delhivery: Delhivery is India's leading fulfillment platform for digital commerce. Spanning 18,000+ pin codes and 2,500 cities, Delhivery offers a wide range of services including express parcel transportation, freight solutions, reverse logistics, cross-border commerce, warehousing, and cutting-edge technology services. Since 2011, Delhivery has successfully completed over 550 million transactions, empowering more than 10,000 businesses ranging from startups to large enterprises.

Vision: Delhivery aims to become the operating system for e-commerce in India by integrating world-class infrastructure, robust logistics operations, and technological excellence.

Technical Expertise Required:
- Ability to independently manage and execute client integrations with precision.
- In-depth understanding of REST APIs and their practical applications.
- Fundamental knowledge of SQL for efficient data handling and query optimization.
- Strong analytical skills for interpreting and managing data effectively.
- Proficiency in tracking and diagnosing issues using logging tools like Coralogix and Sentry.
- Hands-on experience in making and troubleshooting HTTP calls using tools like cURL and Postman.
- Advanced proficiency in Excel, Google Sheets, and other productivity tools for data processing and reporting.
- Comprehensive awareness of the technology stack, particularly Go (Golang), used within Delhivery.
- Experience in SAP integrations, including configuring, troubleshooting, and optimizing ERP modules for seamless logistics and supply chain operations.
- Understanding of TMS projects, with expertise in workflow automation, system integration, and operational enhancements.

Problem-Solving & Analytical Skills:
- Expertise in conducting root cause analysis to swiftly identify and resolve system issues.
- Ability to assess and classify system issues as bugs or feature enhancements.
- Strong business and product knowledge to deliver effective, solution-driven outcomes.
- Clear and impactful communication skills for effective stakeholder collaboration.
- Proactive approach to managing daily tasks with structured planning.
- Timely identification and escalation of critical issues for swift resolution.
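The SQL fundamentals called out above can be sketched with Python's stdlib sqlite3 module: create a table, add an index on the filter column (the usual first step in query optimization), and aggregate with GROUP BY. Table and column names are invented for the demo, not Delhivery's schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE shipments (awb TEXT PRIMARY KEY, pincode TEXT, status TEXT)")
conn.executemany(
    "INSERT INTO shipments VALUES (?, ?, ?)",
    [
        ("AWB1", "110001", "in_transit"),
        ("AWB2", "122002", "delivered"),
        ("AWB3", "110001", "delivered"),
    ],
)
# An index on the filtered column turns a full-table scan into a lookup
# as the table grows -- the bread and butter of query optimization.
conn.execute("CREATE INDEX idx_pincode ON shipments (pincode)")

rows = conn.execute(
    "SELECT status, COUNT(*) FROM shipments WHERE pincode = ? GROUP BY status ORDER BY status",
    ("110001",),
).fetchall()
```

Running `EXPLAIN QUERY PLAN` on the SELECT would show SQLite using `idx_pincode` instead of scanning the table.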

Posted 2 weeks ago

Apply

7.0 - 11.0 years

0 Lacs

Coimbatore, Tamil Nadu

On-site

As a DevOps Architect at our Coimbatore onsite location, with over 7 years of experience, you will play a crucial role in designing, implementing, and optimizing scalable and reliable DevOps processes for continuous integration, continuous deployment (CI/CD), and infrastructure as code (IaC). You will lead the architecture and implementation of cloud-based infrastructure solutions using AWS, Azure, or GCP based on project requirements. Collaboration with software development teams to ensure smooth integration of development, testing, and production environments will be a key responsibility.

Your role will involve implementing and managing automation, monitoring, and alerting tools across development and production environments, such as Jenkins, GitLab CI, Ansible, Terraform, Docker, and Kubernetes. Additionally, you will oversee version control, release pipelines, and deployment processes for various applications while designing and implementing infrastructure monitoring solutions to maintain high availability and performance of systems.

A significant aspect of your role will involve fostering a culture of continuous improvement by working closely with development and operations teams to enhance automation, testing, and release pipelines. You will ensure that security best practices are followed in the development and deployment pipeline, including secret management and vulnerability scanning. You will also lead efforts to address performance bottlenecks, scaling challenges, and infrastructure optimization, along with mentoring and guiding junior engineers in the DevOps space.

To excel in this role, you are required to have a Bachelor's degree in Computer Science, Information Technology, or a related field, or equivalent work experience, along with a minimum of 7 years of experience in DevOps, cloud infrastructure, and automation tools. Proficiency in cloud platforms (AWS, Azure, GCP), containerization technologies like Docker and Kubernetes, orchestration tools, automation tools like Jenkins, Ansible, Chef, Puppet, and Terraform, scripting languages (Bash, Python, Go), version control systems (Git, SVN), and monitoring and logging tools is essential. Strong troubleshooting skills, communication and leadership abilities, and an understanding of Agile and Scrum methodologies are also vital for this role.

Preferred qualifications include certifications in DevOps tools, cloud technologies, or Kubernetes, experience with serverless architecture, familiarity with security best practices in a DevOps environment, and knowledge of database management and backup strategies. If you are passionate about your career and possess the required skills and experience, we invite you to be a part of our rapidly growing team. Reach out to us at careers@hashagile.com to explore exciting opportunities with us.

Posted 3 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As a Google Cloud Platform (GCP) Cloud Engineer, your role involves designing, implementing, and managing cloud-based solutions on the Google Cloud Platform. Your primary focus will be to ensure the performance, security, and scalability of cloud infrastructure while collaborating with various teams to meet business needs and optimize cloud resources.

Your key responsibilities will include designing GCP architecture, automating infrastructure using tools like Terraform and Ansible, managing security and compliance measures, monitoring system performance, optimizing resource utilization, and supporting application deployments. Additionally, you will be responsible for implementing disaster recovery plans and data backup strategies.

To excel in this role, you should have a strong understanding of Google Cloud Platform services, proficiency in Infrastructure as Code tools, experience with containerization and orchestration technologies like Docker and Kubernetes, strong scripting skills in Bash and Python, knowledge of networking concepts and security best practices, familiarity with monitoring and logging tools, and excellent problem-solving and troubleshooting skills. Your ability to work independently as well as part of a team, along with strong communication and collaboration skills, will be crucial for success in this role.

As you progress in your career as a GCP Cloud Engineer, you may explore opportunities to advance into roles such as Cloud Solutions Architect, Lead Cloud Engineer, or Cloud DevOps Engineer. Specializing in areas like data engineering or security can also be a potential career path for you.

Posted 3 weeks ago

Apply

4.0 - 8.0 years

0 Lacs

Kerala

On-site

As a Senior AI Engineer (Tech Lead) at EY, you will have the opportunity to leverage your technical expertise to develop and implement cutting-edge AI solutions. With a minimum of 4 years of experience in Data Science and Machine Learning, including NLP, Generative AI, LLMs, MLOps, Optimization techniques, and AI solution Architecture, you will play a crucial role in driving innovation and creating impactful solutions for enterprise industry use cases. Your responsibilities will include contributing to the design and implementation of state-of-the-art AI solutions, leading a team of 4-6 developers, and collaborating with stakeholders to identify business opportunities and define AI project goals. You will stay updated with the latest advancements in generative AI techniques, such as LLMs, and evaluate their potential applications in solving enterprise challenges. By utilizing generative AI techniques, such as LLMs and Agentic Framework, you will develop innovative solutions tailored to specific business requirements. Your role will also involve integrating with relevant APIs and libraries, such as Azure Open AI GPT models and Hugging Face Transformers, to enhance generative AI capabilities. Additionally, you will be responsible for implementing and optimizing end-to-end pipelines for generative AI projects, ensuring seamless data processing and model deployment. Your expertise in data engineering, DevOps, and MLOps practices will be valuable in curating, cleaning, and preprocessing large-scale datasets for generative AI applications. To excel in this role, you should hold a Bachelor's or Master's degree in Computer Science, Engineering, or a related field, and demonstrate proficiency in Python, Data Science, Machine Learning, OCR, and document intelligence. Strong collaboration with software engineering and operations teams, along with excellent problem-solving and analytical skills, will be essential in translating business requirements into technical solutions. 
Moreover, your familiarity with trusted AI practices, data privacy, security, and ethical considerations will ensure the fairness, transparency, and accountability of AI models and systems. Additionally, having a solid understanding of NLP techniques, frameworks like TensorFlow or PyTorch, and cloud platforms such as Azure, AWS, or GCP will be beneficial in deploying AI solutions in a cloud environment. Proficiency in designing or interacting with agent-based AI architectures, implementing optimization tools and techniques, and driving DevOps and MLOps practices will also be advantageous in enhancing the performance and efficiency of AI models. Join EY to build an exceptional experience for yourself and contribute to creating a better working world for all through the power of AI and technology.

Posted 3 weeks ago

Apply

6.0 - 10.0 years

0 Lacs

karnataka

On-site

As a DevOps Engineer at Wabtec Corporation, you will play a crucial role in CI/CD and automation design and validation activities. Reporting to the Technical Project Manager and working closely with the software architect, you will adhere to internal processes, including coding rules, and document implementations accurately. Your focus will be on meeting the Quality, Cost, and Time objectives set by the Technical Project Manager.

To qualify for this role, you should hold a Bachelor's or Master's degree in engineering in Computer Science, IT, or a related field. You should have 6 to 10 years of hands-on experience as a DevOps Engineer and possess the following abilities:

- A good understanding of Linux systems and networking
- Proficiency in CI/CD tools like GitLab
- Knowledge of containerization technologies such as Docker
- Experience with scripting languages like Bash and Python
- Hands-on experience in setting up CI/CD pipelines and configuring virtual machines
- Familiarity with C/C++ build tools like CMake and Conan
- Expertise in setting up pipelines in GitLab for build, unit testing, and static analysis
- Experience with infrastructure-as-code tools like Terraform or Ansible
- Proficiency in monitoring and logging tools such as the ELK Stack or Prometheus/Grafana
- Strong problem-solving skills and the ability to troubleshoot production issues
- A passion for continuous learning and staying up to date with modern technologies and trends in the DevOps field
- Familiarity with project management and workflow tools like Jira, SPIRA, Teams Planner, and Polarion

In addition to technical skills, soft skills are crucial for this role. You should have a good level of English proficiency, be autonomous, possess good interpersonal and communication skills, have strong synthesis skills, be a solid team player, and be able to handle multiple tasks efficiently.
At Wabtec, we are committed to embracing diversity and inclusion. We value the variety of experiences, expertise, and backgrounds that our employees bring and aim to create an inclusive environment where everyone belongs. By fostering a culture of leadership, diversity, and inclusion, we believe that we can harness the brightest minds to drive innovation and create limitless opportunities. If you are ready to join a global company that is revolutionizing the transportation industry and are passionate about driving exceptional results through continuous improvement, then we invite you to apply for the role of Lead/Engineer DevOps at Wabtec Corporation.
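The pipeline expertise the listing asks for (GitLab pipelines covering build, unit testing, and static analysis) boils down to ordered, fail-fast stage execution. The sketch below models that behavior in plain Python; the stage names and pass/fail callables are illustrative examples, not Wabtec's actual configuration.

```python
# Minimal sketch of the fail-fast stage ordering a CI pipeline enforces
# (build -> unit tests -> static analysis). Stage names are illustrative.

def run_pipeline(stages):
    """Run stages in order; stop at the first failure (fail-fast).

    `stages` is a list of (name, callable) pairs where the callable
    returns True on success. Returns (passed, executed_stage_names).
    """
    executed = []
    for name, job in stages:
        executed.append(name)
        if not job():
            return False, executed  # fail fast: later stages never run
    return True, executed

if __name__ == "__main__":
    ok, ran = run_pipeline([
        ("build", lambda: True),
        ("unit-test", lambda: False),  # simulated test failure
        ("static-analysis", lambda: True),
    ])
    print(ok, ran)  # False ['build', 'unit-test']
```

In a real GitLab setup the same ordering is declared in `.gitlab-ci.yml` stages, and a failing job in one stage prevents the next stage from starting.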

Posted 3 weeks ago

Apply

0.0 - 3.0 years

0 Lacs

hyderabad, telangana

On-site

You are a C#/.NET Developer with 0-2 years of experience and a strong foundation in API development, database management, and optionally full-stack skills. Your role will involve building scalable applications and enterprise-grade solutions.

Your responsibilities will include designing, developing, and maintaining RESTful APIs using C# and .NET Core. You will work with SQL Server or other relational databases to ensure robust data handling, and participate in system architecture discussions and design reviews. Optionally, you may contribute to front-end development using Angular or React. Your tasks will also involve debugging, testing, and deploying applications to production environments. Collaboration with QA, DevOps, and product teams is essential to ensure on-time delivery.

To qualify for this role, you should have 0-2 years of professional experience in .NET development. Proficiency in C#, .NET Core, and API development is required, along with a solid understanding of relational databases such as SQL Server, PostgreSQL, or MySQL. Familiarity with Git, CI/CD, and Agile/Scrum methodologies is preferred, and experience with front-end technologies like Angular or React is an added advantage. Preferred qualifications include experience with cloud platforms like Azure or AWS and exposure to microservices, containerization (Docker/Kubernetes), and logging tools.

Joining this team will give you the opportunity to work collaboratively on high-impact projects. You will be part of a flexible working culture that promotes continuous learning, offering you the chance to grow both technically and professionally. This is a full-time position with benefits such as health insurance and Provident Fund. The work schedule includes evening shifts from Monday to Friday, as well as night shifts during US shift timings. Performance bonuses and yearly bonuses are part of the package. The work location is in person at Hyderabad.
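The core of the RESTful API work described (mapping HTTP methods and paths to handlers, as an ASP.NET Core controller does) is language-agnostic. The Python sketch below illustrates the routing idea only; the routes, payloads, and handler names are invented for illustration and are not part of any real codebase.

```python
# Sketch of RESTful routing: map (HTTP method, path) pairs to handlers,
# analogous to ASP.NET Core attribute routing. All routes are examples.
import json

ROUTES = {}

def route(method, path):
    """Decorator that registers a handler for a (method, path) pair."""
    def register(handler):
        ROUTES[(method, path)] = handler
        return handler
    return register

@route("GET", "/orders")
def list_orders():
    return 200, json.dumps([{"id": 1, "total": 250.0}])

@route("POST", "/orders")
def create_order():
    return 201, json.dumps({"id": 2})

def dispatch(method, path):
    """Look up and invoke the handler; 404 if no route matches."""
    handler = ROUTES.get((method, path))
    if handler is None:
        return 404, json.dumps({"error": "not found"})
    return handler()
```

In C#/.NET Core the same mapping is expressed with `[HttpGet]`/`[HttpPost]` attributes on controller actions, with the framework doing the dispatch.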

Posted 3 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

kozhikode, kerala

On-site

You have an exciting opportunity to join our team as a Senior Cloud Site Reliability Engineer (SRE). In this role, you will design, implement, and manage cloud infrastructure with a focus on high availability, scalability, and performance. You will also handle server administration, managing and optimizing systems at the OS level for both Linux and Windows environments, and you will tune app-server-level performance by enhancing web servers, databases, and caching systems.

One of your key tasks will be to develop automation tools and frameworks to streamline deployments, monitoring, and incident response. You will monitor system health using tools such as CloudWatch, Prometheus, Grafana, and Nagios, troubleshoot issues, and proactively enhance system performance. Furthermore, you will establish and refine SRE best practices, implement CI/CD pipelines for seamless application deployment, and conduct root cause analysis for incidents.

To excel in this role, you should have at least 3 years of experience as a Cloud Engineer or in a similar role. Strong expertise in Linux administration is required, and knowledge of Windows is a plus. Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud is essential, as is proficiency in infrastructure-as-code (IaC) tools such as Terraform or CloudFormation. You should possess scripting and automation skills in languages like Python or Bash, expertise in Kubernetes and container orchestration tools, and experience with monitoring and logging tools.
If you are a problem-solver who thrives in a fast-paced environment and possesses excellent communication and collaboration skills, we encourage you to apply. You will have the opportunity to mentor junior SREs, collaborate with DevOps and security teams, and document best practices to optimize cloud security and application performance. Join us and contribute to a culture of continuous learning and improvement.
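The "reliability and availability" focus of an SRE role is usually made concrete with SLOs and error budgets. The sketch below shows the standard arithmetic; the 99.9% target and the downtime figures are assumed example values, not an SLO stated in the listing.

```python
# Sketch of availability / error-budget arithmetic an SRE tracks.
# The 99.9% SLO below is an assumed example, not a stated target.

def error_budget_minutes(slo, period_minutes):
    """Allowed downtime for a given availability SLO over a period."""
    return (1.0 - slo) * period_minutes

def budget_remaining(slo, period_minutes, downtime_minutes):
    """Fraction of the error budget still unspent (can go negative)."""
    budget = error_budget_minutes(slo, period_minutes)
    return (budget - downtime_minutes) / budget

if __name__ == "__main__":
    month = 30 * 24 * 60  # 43,200 minutes
    print(round(error_budget_minutes(0.999, month), 1))    # 43.2
    print(round(budget_remaining(0.999, month, 21.6), 2))  # 0.5
```

A team that has spent half its monthly budget by mid-month is on pace; burning it faster is the usual trigger to pause feature rollouts and prioritize reliability work.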

Posted 3 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

karnataka

On-site

As an IT specialist at Infineon, you will establish and monitor key performance indicators (KPIs) to track the performance and health of IT operations, and collaborate with cross-functional teams to develop and maintain dashboards that provide real-time insights into IT operational performance.

This role requires a working knowledge of the major DevOps tools and methodologies relevant to IT operations: continuous integration/continuous delivery (CI/CD) tools such as Jenkins, GitLab CI, and Azure DevOps Pipelines, including their operational aspects; containerization technologies like Docker and Kubernetes, including their monitoring and management in operations; GitOps principles and tools for managing infrastructure and applications declaratively through Git; and infrastructure-as-code (IaC) tools and their impact on operational stability and repeatability. Familiarity with monitoring and logging tools such as Prometheus, Grafana, and the ELK stack (Elasticsearch, Logstash, Kibana) for proactive issue detection and analysis is also required.

To be best equipped for this task, you should hold a Master's degree in computer science, information technology, or a related field, along with a minimum of 5-7 years of experience in IT operations management, focusing on data analysis, KPI monitoring, and process optimization within an IT environment. Proficiency in data analysis tools and techniques like SQL, Python, or R, with a focus on IT-specific data sets and metrics, is expected. Experience with dashboarding and data visualization tools like Tableau, Power BI, or similar platforms, specifically focused on IT operational metrics, will be beneficial. A strong understanding of IT key performance indicators (KPIs) and their importance in driving operational excellence within the IT department is crucial.
Excellent communication and presentation skills are also required, enabling you to convey complex IT operational insights to non-technical stakeholders effectively. If you are interested in this opportunity, please contact Chowta.external@infineon.com. Infineon is committed to driving decarbonization and digitalization as a global leader in semiconductor solutions for power systems and IoT. Join us in creating innovative solutions for green and efficient energy, clean and safe mobility, and smart and secure IoT. At Infineon, we value diversity and inclusion, offering a working environment characterized by trust, openness, respect, and tolerance. We provide equal opportunities to all applicants and employees based on their experience and skills. Be a part of our journey to make life easier, safer, and greener with Infineon.

Posted 4 weeks ago

Apply

4.0 - 8.0 years

0 Lacs

haryana

On-site

The Senior Software Engineer (Python) position in Gurugram requires 4-6 years of experience. The ideal candidate should possess the following technical expertise:

- Strong experience with OpenStack architecture and services, including Nova, Neutron, Cinder, Keystone, and Glance
- Knowledge of NFV architecture, ETSI standards, and VIM (Virtualized Infrastructure Manager)
- Hands-on experience with containerization platforms such as Kubernetes or OpenShift
- Familiarity with SDN solutions like OpenDaylight or Tungsten Fabric
- Experience with Linux-based systems and proficiency in scripting languages such as Python and Bash
- Understanding of networking protocols and technologies such as VXLAN, BGP, OVS, and SR-IOV
- Knowledge of distributed storage solutions like Ceph

In terms of tools, the candidate should have experience with monitoring and logging tools like Prometheus, Grafana, and the ELK Stack; proficiency in configuration management tools such as Ansible, Puppet, or Chef; and knowledge of CI/CD tools like Jenkins and GitLab CI.

Preferred certifications for this role include OpenStack Certified Administrator (COA), Red Hat Certified Engineer (RHCE), VMware Certified Professional (VCP), and Kubernetes certifications such as CKA or CKAD.

Posted 4 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

haryana

On-site

As a Kafka Administrator at our Merchant Ecommerce platform in Noida Sector 62, you will manage, maintain, and optimize our distributed, multi-cluster Kafka infrastructure in an on-premise environment. You should have a deep understanding of Kafka internals, Zookeeper administration, and performance tuning to ensure operational excellence in high-throughput, low-latency production systems. Experience with API gateway operations (specifically Kong) and observability tooling is a plus.

Your key responsibilities will include:

- Managing multiple Kafka clusters with high-availability Zookeeper setups
- Providing end-to-end operational support, including deployment, configuration, and health monitoring of Kafka brokers and Zookeeper nodes
- Conducting capacity planning, partition strategy optimization, and topic lifecycle management
- Implementing backup and disaster recovery processes with defined RPO/RTO targets
- Enforcing security configurations such as TLS encryption, authentication (SASL, mTLS), and ACL management
- Optimizing Kafka producer and consumer performance to meet low-latency, high-throughput requirements
- Planning and executing Kafka and Zookeeper upgrades and patching with minimal or zero downtime
- Integrating Kafka with monitoring platforms like Prometheus, Grafana, or similar tools
- Defining and enforcing log retention and archival policies in line with compliance requirements

Additionally, you will integrate Kafka metrics and logs with centralized observability and logging tools, create dashboards and alerts to monitor Kafka consumer lag, partition health, and broker performance, and collaborate with DevOps/SRE teams to ensure visibility into Kafka services. You will also apply CIS benchmarks, perform automated security scans across Kafka nodes, manage secret and certificate rotation, support regular vulnerability assessments, and ensure timely remediation.
To be successful in this role, you should have:

- 3+ years of hands-on Kafka administration experience in production environments
- A strong understanding of Kafka internals and Zookeeper management
- Experience in Kafka performance tuning, troubleshooting, and security mechanisms
- Proficiency in monitoring and logging tools, plus scripting skills for operational automation

Preferred qualifications include experience with API gateways, Kubernetes-based environments, compliance standards, security hardening practices, and infrastructure-as-code (IaC) tools. In return, we offer a mission-critical role in managing large-scale real-time data infrastructure, a flexible work environment, opportunities for growth, and access to modern observability and automation tools.
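The consumer-lag dashboards mentioned above rest on one simple computation: per-partition lag is the log-end offset minus the consumer group's committed offset. The sketch below shows that arithmetic in isolation; the partition numbers and offsets are illustrative, not values from a live cluster.

```python
# Sketch of the consumer-lag computation behind Kafka dashboards:
# lag per partition = log-end offset - committed offset. Offsets
# here are illustrative examples only.

def consumer_lag(end_offsets, committed_offsets):
    """Total and per-partition lag for one consumer group on one topic.

    Both arguments map partition -> offset. A partition with no
    committed offset is treated as fully lagged from offset 0.
    """
    per_partition = {}
    for partition, end in end_offsets.items():
        committed = committed_offsets.get(partition, 0)
        per_partition[partition] = max(0, end - committed)
    return sum(per_partition.values()), per_partition

if __name__ == "__main__":
    total, lag = consumer_lag({0: 1500, 1: 900}, {0: 1450, 1: 900})
    print(total, lag)  # 50 {0: 50, 1: 0}
```

In practice the same numbers come from `kafka-consumer-groups.sh --describe` or from exported broker metrics; alerting on sustained growth in total lag is what catches a stalled consumer.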

Posted 4 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

pune, maharashtra

On-site

Join us as a TCW Rates Application Support Specialist - Macro Trade Capture Next Gen at Barclays, where you will spearhead the evolution of the digital landscape, driving innovation and excellence. You will harness cutting-edge technology to revolutionize our digital offerings, ensuring unparalleled customer experiences. You will be assessed on critical skills relevant for success in the role, as well as job-specific skill sets.

To be successful in this role, you should have experience with the following.

Basic/essential qualifications:
- Application support: providing expert-level L3 support for Next Gen systems in the macro trade capture division
- Issue investigation: conducting deep hands-on investigation and debugging of complex production issues
- Problem resolution: stepping through code, analyzing logs, and performing root cause analysis for system incidents
- Knowledge transfer: working closely with RTB (Real-Time Business) teams to ensure smooth transition and knowledge sharing
- Process improvement: identifying and implementing improvements to reduce the support burden during rapid application changes
- Proven application support experience in enterprise environments
- Strong debugging and problem-solving skills, with the ability to investigate complex technical issues
- SQL proficiency for database investigation and analysis
- Interest in expanding technical skills, including Python and code analysis
- Motivation to work primarily from a support perspective rather than a development focus

Desirable skill sets / good to have:
- Experience with trading systems or financial applications
- Knowledge of Java applications and troubleshooting
- Familiarity with monitoring and logging tools
- Understanding of distributed systems architecture

This role will be based out of Pune.
Purpose of the role: to design, develop, and improve software, utilizing various engineering methodologies, that provides business, platform, and technology capabilities for our customers and colleagues.

Accountabilities:
- Development and delivery of high-quality software solutions using industry-aligned programming languages, frameworks, and tools
- Cross-functional collaboration with product managers, designers, and other engineers to define software requirements and devise solution strategies
- Collaboration with peers, participation in code reviews, and promotion of a culture of code quality and knowledge sharing
- Staying informed of industry technology trends and innovations, and actively contributing to the organization's technology communities
- Adherence to secure coding practices and implementation of effective unit testing practices

Assistant Vice President expectations:
- Advise and influence decision-making, contribute to policy development, and take responsibility for operational effectiveness
- Lead a team performing complex tasks, set objectives, and coach employees in pursuit of those objectives
- Demonstrate a clear set of leadership behaviours to create an environment in which colleagues thrive and deliver to a consistently excellent standard
- Consult on complex issues, identify ways to mitigate risk, and take ownership of managing risk and strengthening controls
- Engage in complex analysis of data from multiple sources and communicate complex information effectively

All colleagues are expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence, and Stewardship, as well as the Barclays Mindset to Empower, Challenge, and Drive.
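The log-analysis side of L3 support ("analyzing logs, and performing root cause analysis") often starts with narrowing an incident to a component by counting error lines. The sketch below shows that triage step on an invented log format (timestamp, component, level, message); real trade-capture log layouts will differ.

```python
# Sketch of L3 log triage: count ERROR lines per component to localise
# an incident. The log format and component names are assumed examples.
from collections import Counter

def error_counts(log_lines):
    """Return a Counter of components that logged at ERROR level."""
    counts = Counter()
    for line in log_lines:
        parts = line.split(maxsplit=3)  # timestamp, component, level, msg
        if len(parts) >= 3 and parts[2] == "ERROR":
            counts[parts[1]] += 1
    return counts

if __name__ == "__main__":
    sample = [
        "09:00:01 trade-capture INFO booked trade 42",
        "09:00:02 trade-capture ERROR schema validation failed",
        "09:00:03 pricing ERROR timeout calling curve service",
        "09:00:04 trade-capture ERROR schema validation failed",
    ]
    print(error_counts(sample).most_common(1))  # [('trade-capture', 2)]
```

The same idea scales up in the monitoring and logging tools the listing mentions, where a query groups error events by service over a time window.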

Posted 4 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

pune, maharashtra

On-site

Join us as a TCW Rates Application Support Specialist - Macro Trade Capture Next Gen at Barclays, where you will be at the forefront of advancing our digital landscape, driving innovation, and ensuring exceptional customer experiences through cutting-edge technology.

Your key responsibilities will include:
- Providing expert-level L3 support for Next Gen systems in the macro trade capture division
- Conducting in-depth investigation and debugging of complex production issues
- Stepping through code, analyzing logs, and performing root cause analysis for system incidents
- Collaborating closely with Real-Time Business teams for knowledge transfer and smooth transition
- Identifying and implementing process improvements to enhance support efficiency during rapid application changes

You should have proven experience in application support in enterprise environments, strong debugging and problem-solving skills, proficiency in SQL for database investigation and analysis, an interest in expanding technical skills including Python and code analysis, and a motivation to work primarily from a support perspective rather than a development focus. It would also be beneficial to have experience with trading systems or financial applications, knowledge of Java applications and troubleshooting, familiarity with monitoring and logging tools, and an understanding of distributed systems architecture.

This role will be based in Pune and aims to design, develop, and enhance software using various engineering methodologies to provide business, platform, and technology capabilities for customers and colleagues.
Your responsibilities will include developing high-quality software solutions, collaborating with cross-functional teams to define software requirements and ensure alignment with business objectives, engaging in code reviews and promoting a culture of code quality and knowledge sharing, staying informed of industry technology trends, adhering to secure coding practices, and implementing effective unit testing practices. As an Assistant Vice President, you are expected to advise on decision-making, contribute to policy development, ensure operational effectiveness, lead a team in performing complex tasks, and demonstrate leadership behaviours such as listening, inspiring, aligning, and developing others. For individual contributors, guiding team members through assignments, identifying new directions for projects, and consulting on complex issues are key expectations. All colleagues at Barclays are expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence, and Stewardship, and embody the Barclays Mindset of Empower, Challenge, and Drive.

Posted 1 month ago

Apply

10.0 - 14.0 years

0 Lacs

noida, uttar pradesh

On-site

As an experienced Azure DevOps Architect at our leading pharmacy client, your primary responsibility will be to lead the implementation of DevOps solutions on the Azure DevOps platform. You will leverage your deep understanding of DevOps practices, infrastructure automation, continuous integration (CI), continuous deployment (CD), and cloud-native application development to ensure scalability, security, and efficiency across all projects.

With a minimum of 10-12 years of overall experience in DevOps and cloud automation, you must have hands-on experience in DevOps, cloud architecture, and/or software development. Specifically, you should have a strong background in Azure DevOps, including building and managing CI/CD pipelines, ARM Templates or Terraform, cloud deployment, and CI/CD processes. Proficiency in creating build and release pipelines, as well as administering Azure DevOps, is essential for this role. Your expertise should extend to Docker and Azure Kubernetes Service for deployment, along with Helm charts for containerization and orchestration. Experience deploying and managing Azure Cloud, with in-depth knowledge of Azure services such as Storage Accounts, Virtual Networks, Managed Identities, Service Principals, Key Vault, Event Hubs, and different deployment strategies, will be highly valued. Familiarity with monitoring and logging tools, Agile methodologies, and integrating DevOps with Agile workflows will also be advantageous.

As an Azure DevOps Architect, you will play a pivotal role in defining and implementing scalable, secure, and high-performance DevOps pipelines using Azure DevOps. Your responsibilities will include devising DevOps strategies encompassing CI/CD, infrastructure as code (IaC), release management, and environment management. You will collaborate with stakeholders to understand technical requirements and translate them into effective DevOps solutions.
Additionally, you will be responsible for building, maintaining, and scaling automated CI/CD pipelines, defining branching strategies, implementing release management, and continuous deployment pipelines for multiple environments. Your role will also involve leading the automation of infrastructure provisioning, designing secure and scalable cloud architectures on Microsoft Azure, and implementing robust monitoring and logging solutions across cloud environments. Moreover, you will work closely with development, QA, operations, and IT teams to promote a DevOps culture of collaboration and shared responsibility. Mentoring junior DevOps engineers and providing ongoing support to team members will also be part of your responsibilities. Your expertise in Azure DevOps, CI/CD pipelines, cloud architecture, automation, security, and compliance, as well as monitoring, will be crucial in driving the success of projects and ensuring the highest standards of quality and efficiency. At our organization, you will have the opportunity to work on exciting projects for leading global brands across various industries. You will collaborate with a diverse team of highly talented individuals in a supportive environment that values work-life balance, professional development, and employee well-being. Join us at GlobalLogic, where we redefine digital engineering and help our clients innovate and thrive in the modern world.
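Among the "different deployment strategies" this role covers, a rolling update is the simplest: replace instances in fixed-size batches so capacity never drops below a floor. The sketch below shows only the batching step; batch size and instance names are illustrative assumptions, and a real pipeline would drain, update, and health-check each batch before proceeding.

```python
# Sketch of one deployment strategy: a rolling update that replaces
# instances in fixed-size batches. Batch size and names are examples.

def rolling_batches(instances, batch_size):
    """Split instances into update batches, preserving order."""
    if batch_size < 1:
        raise ValueError("batch_size must be >= 1")
    return [instances[i:i + batch_size]
            for i in range(0, len(instances), batch_size)]

if __name__ == "__main__":
    vms = [f"vm-{n}" for n in range(5)]
    for batch in rolling_batches(vms, 2):
        # In a real pipeline: drain, update, then health-check this batch
        print(batch)
    # ['vm-0', 'vm-1'] then ['vm-2', 'vm-3'] then ['vm-4']
```

Blue-green and canary strategies trade this batching for a parallel environment or a small traffic slice, respectively; which one fits depends on rollback requirements and infrastructure cost.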

Posted 1 month ago

Apply

3.0 - 7.0 years

0 Lacs

haryana

On-site

As a DevOps/SRE Engineer at Optum, a global organization dedicated to improving health outcomes through technology, you will play a pivotal role in building and maintaining cloud infrastructure to ensure the reliability, scalability, and security of applications and services. You will work with cross-functional teams to implement and automate CI/CD pipelines, manage cloud resources, and enhance development and deployment processes.

Your responsibilities will include designing, implementing, and maintaining CI/CD pipelines using tools like Azure DevOps, Jenkins, and Git. Automating infrastructure provisioning and management with Ansible, Terraform, and Azure cloud-native services will be a key aspect of your role, as will collaborating with development teams to optimize application performance and scalability. You will build monitoring and alerting solutions using tools such as Dynatrace, Splunk, and Kibana, conduct security testing and vulnerability assessments using SAST and DAST tools, troubleshoot and resolve production issues to ensure high availability and reliability of systems, and continuously enhance processes, tools, and infrastructure for greater efficiency. Staying abreast of industry trends and best practices in DevOps, SRE, and cloud technologies, identifying issues, recommending solutions, and complying with company policies and directives will be essential.

The role requires a Bachelor's degree in Computer Science, Engineering, or a related field, along with 3-6 years of experience in a similar role. Solid experience with Azure DevOps, Jenkins, Ansible, Terraform, Azure cloud-native services, SAST and DAST tools, Git, GitHub, monitoring and logging tools, CI/CD pipelines, and automation workflows is necessary. Proven troubleshooting, communication, collaboration, and problem-solving skills are crucial.
The ability to work both independently and in a team environment is also required. In this role, you will have the opportunity to make a significant impact on the communities served by advancing health equity on a global scale. Join us at Optum to contribute to caring, connecting, and growing together.
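Troubleshooting transient production failures, as this role requires, commonly leans on retries with exponential backoff so a recovering dependency is not hammered. The sketch below computes only the delay schedule; the base delay and cap are assumed example values, and production code would add jitter and a retry limit.

```python
# Sketch of an exponential-backoff retry schedule for transient
# production failures. Base delay and cap are assumed example values.

def backoff_delays(attempts, base=1.0, cap=30.0):
    """Delays (seconds) before each retry: base * 2^n, capped at `cap`."""
    return [min(cap, base * (2 ** n)) for n in range(attempts)]

if __name__ == "__main__":
    print(backoff_delays(6))  # [1.0, 2.0, 4.0, 8.0, 16.0, 30.0]
```

Adding random jitter to each delay spreads out retries from many clients and avoids the synchronized "thundering herd" that fixed delays can cause.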

Posted 1 month ago

Apply

8.0 - 10.0 years

10 - 15 Lacs

Pune

Work from Office

LPM internals; logging, auditing, or similar; ability to analyze and interpret system-level logs and command-line process data; strong communication and collaboration; C++, Bazel, and Python; Linux systems programming and debugging

Posted 2 months ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies