
4124 Logging Jobs - Page 41

JobPe aggregates listings so they are easy to find in one place; applications are submitted directly on the original job portal.

2.0 years

3 Lacs

India

Remote


Job Title: DevOps/QA Specialist (4 Positions)
Location: Remote (India)
Contract: 6-month @ INR 25,000/month (Apply only if fine with this)

Role & Responsibilities
Automate deployment pipelines (Netlify, GitHub Actions or equivalent)
QA testing of React front-ends, payment flows (Stripe Connect, Shopify), forms and analytics
Monitor uptime and alerts; maintain basic dashboards
Standardize logging and data capture (e.g. push events to MailerLite/Airtable)
Write clear deployment and troubleshooting documentation

Required Experience
2+ years in DevOps or QA for web applications
Hands-on with Netlify (or similar), CI/CD tooling, serverless functions
Experience integrating Stripe Connect and/or Shopify via Zapier or APIs
Proficient in both automated and manual testing practices
Strong problem-solving and independent communication

Compensation & Next Steps
Initial: 1-month contract at INR 15,000
Extension: Top performers may be offered a 6-month contract at INR 25,000/month
Immediate start required
To Apply: Email careers@alatreeventures.com with your resume and two links to relevant DevOps/QA projects
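The logging-standardization duty in this posting (pushing structured events to MailerLite/Airtable) typically reduces to one small shared helper. Below is a minimal Python sketch, not taken from the posting; the endpoint URL, API key, and event fields are placeholder assumptions.

```python
import json
import logging
import requests  # pip install requests

# Hypothetical capture endpoint and key -- swap in the real MailerLite/Airtable integration.
CAPTURE_URL = "https://example.invalid/events"
API_KEY = "REPLACE_ME"

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
logger = logging.getLogger("events")

def push_event(name: str, payload: dict) -> bool:
    """Log a structured event locally and forward it to the capture endpoint."""
    event = {"event": name, **payload}
    logger.info(json.dumps(event))  # one schema everywhere: easy to grep and aggregate
    try:
        resp = requests.post(
            CAPTURE_URL,
            json=event,
            headers={"Authorization": f"Bearer {API_KEY}"},
            timeout=5,
        )
        resp.raise_for_status()
        return True
    except requests.RequestException as exc:
        logger.error("event %s not delivered: %s", name, exc)
        return False

if __name__ == "__main__":
    push_event("checkout_completed", {"order_id": "A-123", "amount_inr": 2500})
```

Keeping a single helper like this behind every form and payment flow is usually what "standardized logging and data capture" means in practice.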

Posted 6 days ago

Apply

8.0 years

0 Lacs

Pune, Maharashtra, India

Remote


At NICE, we don’t limit our challenges. We challenge our limits. Always. We’re ambitious. We’re game changers. And we play to win. We set the highest standards and execute beyond them. And if you’re like us, we can offer you the ultimate career opportunity that will light a fire within you.

So, what’s the role all about?
A .NET Developer is a software engineer who specializes in developing applications using Microsoft's .NET framework, including technologies like C# and .NET Core. They build web, desktop, cloud, and mobile applications for various industries.

How will you make an impact?
Application Development: Develop, test, and maintain applications using C# and .NET. Build web applications using MVC, Web API, and Blazor.
Database Management: Work with SQL Server, MySQL, or PostgreSQL for data storage and management. Use Entity Framework (EF) Core for Object-Relational Mapping (ORM).
Cloud & DevOps Integration: Deploy applications on AWS. Implement CI/CD pipelines.
API & Microservices Development: Design and develop RESTful APIs and gRPC services. Work with microservices architecture using Docker and Kubernetes.
Security & Performance Optimization: Implement OAuth, JWT, and Identity Server for authentication and authorization. Optimize application performance through caching, logging, and debugging tools.

Have you got what it takes?
8+ years of experience in software engineering.
Proven track record of managing the development of enterprise-grade software products that can perform, scale, and integrate into a broad enterprise ecosystem.
Experience developing and supporting multi-tenant cloud-native software delivered as-a-Service (SaaS).
Good exposure to Service Oriented Architecture and associated design patterns for development, deployment, and maintenance.
Familiar with DevOps processes and tools employed in SaaS architectures to support CI/CD and monitoring.
Familiar with quality targets and SLAs for SaaS applications.
Experience of product development using .NET and web technologies.
Good to have experience in JavaScript and Angular.
Familiarity and/or experience with public cloud infrastructures and technologies such as Amazon Web Services (AWS).
Experience working in a global product software company for enterprise customers (Fortune 100 companies).
Experience working abroad or with global teams is preferred.
Demonstrated ability to deftly influence others, especially in sensitive or complex situations.
Deep experience with agile software development techniques and pitfalls.
Excellent communication, problem-solving and decision-making skills.
Experience with Contact Center as a Service or Platform as a Service type of products.

What’s in it for you?
Join an ever-growing, market-disrupting, global company where the teams – comprised of the best of the best – work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NICE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NICEr!

Enjoy NICE-FLEX!
At NICE, we work according to the NICE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.
Requisition ID: 6221
Reporting into: Tech Manager
Role Type: Individual Contributor

About NICE
NICE Ltd. (NASDAQ: NICE) software products are used by 25,000+ global businesses, including 85 of the Fortune 100 corporations, to deliver extraordinary customer experiences, fight financial crime and ensure public safety. Every day, NICE software manages more than 120 million customer interactions and monitors 3+ billion financial transactions. Known as an innovation powerhouse that excels in AI, cloud and digital, NICE is consistently recognized as the market leader in its domains, with over 8,500 employees across 30+ countries.

NICE is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, age, sex, marital status, ancestry, neurotype, physical or mental disability, veteran status, gender identity, sexual orientation or any other category protected by law.
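The security bullet in this role mentions OAuth/JWT-based authentication. As a rough, language-neutral illustration only (the role itself is C#/.NET), here is a minimal Python sketch of issuing and validating a JWT with the PyJWT library; the signing key, claims, and expiry are placeholder assumptions.

```python
import datetime

import jwt  # PyJWT; pip install PyJWT -- stand-in here for the .NET equivalents

SECRET = "replace-with-a-real-signing-key"  # assumption: symmetric HS256 signing

def issue_token(user_id: str, minutes: int = 30) -> str:
    """Create a short-lived access token with standard claims."""
    now = datetime.datetime.now(datetime.timezone.utc)
    claims = {"sub": user_id, "iat": now, "exp": now + datetime.timedelta(minutes=minutes)}
    return jwt.encode(claims, SECRET, algorithm="HS256")

def validate_token(token: str) -> dict:
    """Return the claims if the signature and expiry check out; raise otherwise."""
    return jwt.decode(token, SECRET, algorithms=["HS256"])

if __name__ == "__main__":
    token = issue_token("user-42")
    print(validate_token(token)["sub"])  # -> user-42
```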

Posted 6 days ago

Apply

5.0 years

0 Lacs

Delhi, India

On-site


Key Responsibilities:
Design and implement secure, scalable, and highly available cloud infrastructure on AWS.
Migrate on-premises workloads to AWS.
Automate infrastructure provisioning using Infrastructure-as-Code tools like Terraform or CloudFormation.
Monitor system performance, identify issues, and implement improvements.
Collaborate with development and DevOps teams to integrate cloud solutions with CI/CD pipelines.
Ensure cloud infrastructure aligns with security and compliance requirements (e.g., IAM policies, encryption, auditing).
Implement cost optimization strategies for AWS usage.
Troubleshoot complex infrastructure and application issues.
Mentor junior engineers and participate in technical design discussions.

Required Skills & Qualifications:
Bachelor’s degree in Computer Science, Engineering, or a related field.
5+ years of experience in infrastructure/DevOps, with at least 3 years hands-on with AWS.
Strong expertise in core AWS services: EC2, S3, VPC, RDS, Lambda, IAM, ECS/EKS, CloudWatch, Route 53, etc.
Hands-on experience with automation tools: Terraform, CloudFormation, Ansible.
Strong understanding of networking, security, and identity management in cloud environments.
Familiarity with containers and orchestration tools: Docker, Kubernetes.
Experience with monitoring and logging tools (e.g., CloudWatch, ELK, Prometheus).
Proficient in scripting (Bash, Python, or similar).
AWS Certification (Solutions Architect – Associate/Professional or DevOps Engineer) is a strong plus.

Preferred Qualifications:
Experience with multi-account AWS environments and organizations.
Knowledge of DevSecOps principles.
Experience with serverless architecture and microservices.
Familiarity with Agile/Scrum methodologies.
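Monitoring responsibilities like the ones listed here are usually scripted rather than clicked through the console. As a small illustrative sketch (not part of the posting), the Python/boto3 snippet below creates a CloudWatch CPU alarm for an EC2 instance; the instance ID, SNS topic, and thresholds are assumptions to adapt.

```python
import boto3  # AWS SDK for Python; credentials and region come from the environment

cloudwatch = boto3.client("cloudwatch")

# Placeholder identifiers -- substitute real resources before running.
INSTANCE_ID = "i-0123456789abcdef0"
SNS_TOPIC_ARN = "arn:aws:sns:ap-south-1:111111111111:ops-alerts"

cloudwatch.put_metric_alarm(
    AlarmName=f"high-cpu-{INSTANCE_ID}",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": INSTANCE_ID}],
    Statistic="Average",
    Period=300,              # 5-minute datapoints
    EvaluationPeriods=3,     # sustained for 15 minutes
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[SNS_TOPIC_ARN],
    AlarmDescription="CPU above 80% for 15 minutes",
)
print("alarm created/updated")
```

In practice the same definition would normally live in Terraform or CloudFormation so it is versioned with the rest of the infrastructure.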

Posted 6 days ago

Apply

0 years

0 Lacs

Mumbai, Maharashtra, India

On-site


Make an impact with NTT DATA
Join a company that is pushing the boundaries of what is possible. We are renowned for our technical excellence and leading innovations, and for making a difference to our clients and society. Our workplace embraces diversity and inclusion – it’s a place where you can grow, belong and thrive.

Your day at NTT DATA
The Managed Services Cross Technology Engineer (L2) is a developing engineering role, responsible for providing a managed service to clients to ensure that their IT infrastructure and systems remain operational. Through proactive monitoring, identifying, investigating, and resolving of technical incidents and problems, the Managed Services Cross Technology Engineer (L2) is able to restore service to clients. The primary objective of this role is to proactively review client requests or tickets and apply technical/process knowledge to resolve them without breaching the service level agreement (SLA). The role focuses on second-line support for incidents and requests with a medium level of complexity across two or more technology domains – Cloud, Security, Networking, Applications and/or Collaboration. This role may also contribute to or support project work as and when required.

What You'll Be Doing
The role covers routing, switching, management of LAN, WAN and wireless networks, and knowledge of SD-WAN (Cisco Viptela), while proactively reviewing client requests or tickets and applying technical/process knowledge to resolve them without breaching the SLA.

Key Responsibilities:
Proactively monitors the work queues.
Hands-on experience: routing, switching and wireless technology.
Performs operational tasks to resolve all incidents/requests in a timely manner and within the agreed SLA.
Updates tickets with resolution tasks performed.
Identifies, investigates and analyses issues and errors prior to or when they occur, and logs all such incidents in a timely manner. Captures all required and relevant information for immediate resolution.
Provides second-level support for all incidents and requests and identifies the root cause of incidents and problems.
Communicates with other teams and clients for extending support.
Executes changes with clear identification of risks and mitigation plans to be captured in the change record.
Follows the shift handover process, highlighting any key tickets to be focused on along with a handover of upcoming critical tasks to be carried out in the next shift.
Escalates all tickets to seek the right focus from CoE and other teams; if needed, continues the escalation to management.
Works with automation teams for effort optimization and automating routine tasks.
Works across various other resolver groups (internal and external) such as Service Providers, TAC, etc.
Identifies problems and errors before they impact a client’s service.
Provides assistance to L1 Security Engineers for better initial triage or troubleshooting.
Leads and manages all initial client escalations for operational issues.
Contributes to the change management process by logging all change requests with complete details for standard and non-standard changes, including patching and any other changes to Configuration Items. Ensures all changes are carried out with proper change approvals.
Plans and executes approved maintenance activities.
Audits and analyses incident and request tickets for quality and recommends improvements with updates to knowledge articles.
Produces trend analysis reports for identifying tasks for automation, leading to a reduction in tickets and optimization of effort.
May also contribute to or support project work as and when required.
May work on implementing and delivering Disaster Recovery functions and tests.
Performs any other related task as required.

Knowledge and Attributes:
Ability to communicate and work across different cultures and social groups.
Ability to plan activities and projects well in advance, taking into account possible changing circumstances.
Ability to maintain a positive outlook at work.
Ability to work well in a pressurized environment.
Ability to work hard and put in longer hours when necessary.
Ability to apply active listening techniques such as paraphrasing the message to confirm understanding, probing for further relevant information, and refraining from interrupting.
Ability to adapt to changing circumstances.
Ability to place clients at the forefront of all interactions, understanding their requirements, and creating a positive client experience throughout the total client journey.

Academic Qualifications and Certifications:
Bachelor's degree or equivalent qualification in IT/Computing (or demonstrated equivalent work experience).
Certifications relevant to the services provided (certifications carry additional weightage on a candidate’s qualification for the role). Relevant certifications include (but are not limited to):
CCNP or equivalent certification; CCNA certification is a must; CCNP in Security or PCNSE certification is good to have
Microsoft Certified: Azure Administrator Associate
AWS Certified: Solutions Architect Associate
Veeam Certified Engineer
VMware Certified Professional: Data Centre Virtualization
Zerto, Pure, VxRail
Google Cloud Platform (GCP)
Oracle Cloud Infrastructure (OCI)
SAP Certified Technology Associate - OS DB Migration for SAP NetWeaver 7.4
SAP Technology Consultant
SAP Certified Technology Associate - SAP HANA 2.0
Oracle Cloud Infrastructure Architect Professional
IBM Certified System Administrator - WebSphere Application Server Network

Required Experience:
Moderate level of relevant managed services experience handling cross-technology infrastructure.
Moderate level of knowledge of ticketing tools, preferably ServiceNow.
Moderate working knowledge of ITIL processes.
Moderate level of experience working with vendors and/or third parties.

Workplace type: On-site Working

About NTT DATA
NTT DATA is a $30+ billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. We invest over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies.
Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure, and connectivity. We are also one of the leading providers of digital and AI infrastructure in the world. NTT DATA is part of NTT Group and headquartered in Tokyo.

Equal Opportunity Employer
NTT DATA is proud to be an Equal Opportunity Employer with a global culture that embraces diversity. We are committed to providing an environment free of unfair discrimination and harassment. We do not discriminate based on age, race, colour, gender, sexual orientation, religion, nationality, disability, pregnancy, marital status, veteran status, or any other protected category. Join our growing global team and accelerate your career with us. Apply today.
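Proactive monitoring of the kind this listing describes often starts with a very small script that checks reachability of key devices and logs the result before a ticket is raised. The sketch below is an illustration, not NTT DATA tooling; the hostnames and ports are placeholder assumptions.

```python
import logging
import socket

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("netcheck")

# Placeholder inventory -- in real operations this would come from the CMDB or monitoring tool.
TARGETS = [
    ("core-sw1.example.invalid", 22),    # management SSH
    ("edge-rtr1.example.invalid", 443),  # SD-WAN controller reachability
]

def reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for host, port in TARGETS:
        if reachable(host, port):
            log.info("%s:%s reachable", host, port)
        else:
            log.warning("%s:%s UNREACHABLE - raise or annotate an incident per the SLA", host, port)
```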

Posted 6 days ago

Apply

0 years

0 Lacs

Mumbai Metropolitan Region

On-site


Make an impact with NTT DATA
Join a company that is pushing the boundaries of what is possible. We are renowned for our technical excellence and leading innovations, and for making a difference to our clients and society. Our workplace embraces diversity and inclusion – it’s a place where you can grow, belong and thrive.

Your day at NTT DATA
The Managed Services Cross Technology Engineer (L2) is a developing engineering role, responsible for providing a managed service to clients to ensure that their IT infrastructure and systems remain operational. Through proactive monitoring, identifying, investigating, and resolving of technical incidents and problems, the Managed Services Cross Technology Engineer (L2) is able to restore service to clients. The primary objective of this role is to proactively review client requests or tickets and apply technical/process knowledge to resolve them without breaching the service level agreement (SLA). The role focuses on second-line support for incidents and requests with a medium level of complexity across two or more technology domains – Cloud, Security, Networking, Applications and/or Collaboration. This role may also contribute to or support project work as and when required.

What You'll Be Doing
The role covers routing, switching, management of LAN, WAN and wireless networks, and knowledge of SD-WAN (Cisco Viptela), while proactively reviewing client requests or tickets and applying technical/process knowledge to resolve them without breaching the SLA.

Key Responsibilities:
Proactively monitors the work queues.
Hands-on experience: routing, switching and wireless technology.
Performs operational tasks to resolve all incidents/requests in a timely manner and within the agreed SLA.
Updates tickets with resolution tasks performed.
Identifies, investigates and analyses issues and errors prior to or when they occur, and logs all such incidents in a timely manner. Captures all required and relevant information for immediate resolution.
Provides second-level support for all incidents and requests and identifies the root cause of incidents and problems.
Communicates with other teams and clients for extending support.
Executes changes with clear identification of risks and mitigation plans to be captured in the change record.
Follows the shift handover process, highlighting any key tickets to be focused on along with a handover of upcoming critical tasks to be carried out in the next shift.
Escalates all tickets to seek the right focus from CoE and other teams; if needed, continues the escalation to management.
Works with automation teams for effort optimization and automating routine tasks.
Works across various other resolver groups (internal and external) such as Service Providers, TAC, etc.
Identifies problems and errors before they impact a client’s service.
Provides assistance to L1 Security Engineers for better initial triage or troubleshooting.
Leads and manages all initial client escalations for operational issues.
Contributes to the change management process by logging all change requests with complete details for standard and non-standard changes, including patching and any other changes to Configuration Items. Ensures all changes are carried out with proper change approvals.
Plans and executes approved maintenance activities.
Audits and analyses incident and request tickets for quality and recommends improvements with updates to knowledge articles.
Produces trend analysis reports for identifying tasks for automation, leading to a reduction in tickets and optimization of effort.
May also contribute to or support project work as and when required.
May work on implementing and delivering Disaster Recovery functions and tests.
Performs any other related task as required.

Knowledge and Attributes:
Ability to communicate and work across different cultures and social groups.
Ability to plan activities and projects well in advance, taking into account possible changing circumstances.
Ability to maintain a positive outlook at work.
Ability to work well in a pressurized environment.
Ability to work hard and put in longer hours when necessary.
Ability to apply active listening techniques such as paraphrasing the message to confirm understanding, probing for further relevant information, and refraining from interrupting.
Ability to adapt to changing circumstances.
Ability to place clients at the forefront of all interactions, understanding their requirements, and creating a positive client experience throughout the total client journey.

Academic Qualifications and Certifications:
Bachelor's degree or equivalent qualification in IT/Computing (or demonstrated equivalent work experience).
Certifications relevant to the services provided (certifications carry additional weightage on a candidate’s qualification for the role). Relevant certifications include (but are not limited to):
CCNP or equivalent certification; CCNA certification is a must; CCNP in Security or PCNSE certification is good to have
Microsoft Certified: Azure Administrator Associate
AWS Certified: Solutions Architect Associate
Veeam Certified Engineer
VMware Certified Professional: Data Centre Virtualization
Zerto, Pure, VxRail
Google Cloud Platform (GCP)
Oracle Cloud Infrastructure (OCI)
SAP Certified Technology Associate - OS DB Migration for SAP NetWeaver 7.4
SAP Technology Consultant
SAP Certified Technology Associate - SAP HANA 2.0
Oracle Cloud Infrastructure Architect Professional
IBM Certified System Administrator - WebSphere Application Server Network

Required Experience:
Moderate level of relevant managed services experience handling cross-technology infrastructure.
Moderate level of knowledge of ticketing tools, preferably ServiceNow.
Moderate working knowledge of ITIL processes.
Moderate level of experience working with vendors and/or third parties.

Workplace type: On-site Working

About NTT DATA
NTT DATA is a $30+ billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. We invest over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies.
Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure, and connectivity. We are also one of the leading providers of digital and AI infrastructure in the world. NTT DATA is part of NTT Group and headquartered in Tokyo.

Equal Opportunity Employer
NTT DATA is proud to be an Equal Opportunity Employer with a global culture that embraces diversity. We are committed to providing an environment free of unfair discrimination and harassment. We do not discriminate based on age, race, colour, gender, sexual orientation, religion, nationality, disability, pregnancy, marital status, veteran status, or any other protected category. Join our growing global team and accelerate your career with us. Apply today.

Posted 6 days ago

Apply

25.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site


eInfochips (An Arrow Company):
eInfochips, an Arrow company (a $27.9B, NASDAQ-listed (ARW) company ranked #133 on the Fortune list), is a leading global provider of product engineering and semiconductor design services. With a 25+ year proven track record and a team of over 2,500 engineers, the company has been instrumental in developing over 500 products with 40M deployments in 140 countries. The company’s service offerings include Silicon Engineering, Embedded Engineering, Hardware Engineering and Digital Engineering services. eInfochips services 7 of the top 10 semiconductor companies and is recognized by NASSCOM, Zinnov and Gartner as a leading semiconductor service provider.

Job Role:
As an Azure engineer, you will be responsible for the deployment, management, and optimization of Azure cloud infrastructure. You will leverage your expertise in Ansible, Docker, Linux server administration, shell scripting, Kubernetes, Helm charts, Prometheus, Grafana, Azure Monitor, and various Azure services to drive operational excellence.

Key Responsibilities:
Azure Cloud Management: Design, deploy, and manage Azure cloud environments. Ensure optimal performance, scalability, and security of cloud resources using services like Azure Virtual Machines, Azure Kubernetes Service (AKS), Azure App Services, Azure Functions, Azure Storage, and Azure SQL Database.
Automation & Configuration Management: Use Ansible for configuration management and automation of infrastructure tasks. Implement Infrastructure as Code (IaC) using Azure Resource Manager (ARM) templates or Terraform.
Containerization: Implement and manage Docker containers. Develop and maintain Dockerfiles and container orchestration strategies with Azure Kubernetes Service (AKS) or Azure Container Instances.
Server Administration: Administer and manage Linux servers. Perform routine maintenance, updates, and troubleshooting.
Scripting: Develop and maintain shell scripts to automate routine tasks and processes.
Helm Charts: Create and manage Helm charts for deploying and managing applications on Kubernetes clusters.
Monitoring & Alerting: Implement and configure Prometheus and Grafana for monitoring and visualization of metrics. Use Azure Monitor and Azure Application Insights for comprehensive monitoring, logging, and diagnostics.
Networking: Configure and manage Azure networking components such as Virtual Networks, Network Security Groups (NSGs), Azure Load Balancer, and Azure Application Gateway.
Security & Compliance: Implement and manage Azure Security Center and Azure Policy to ensure compliance and security best practices.

Required Skills and Qualifications:
Experience: 5-8 years of experience in cloud operations, with a focus on Azure.
Azure Expertise: In-depth knowledge of Azure services, including Azure Virtual Machines, Azure Kubernetes Service, Azure App Services, Azure Functions, Azure Storage, Azure SQL Database, Azure Monitor, Azure Application Insights, and Azure Security Center.
Automation Tools: Proficiency in Ansible for configuration management and automation. Experience with Infrastructure as Code (IaC) tools like ARM templates or Terraform.
Containerization: Hands-on experience with Docker for containerization and container management.
Linux Administration: Solid experience in Linux server administration, including installation, configuration, and troubleshooting.
Scripting: Strong shell scripting skills for automation and task management.
Helm Charts: Experience with Helm charts for Kubernetes deployments.
Monitoring Tools: Familiarity with Prometheus and Grafana for metrics collection and visualization.
Networking: Experience with Azure networking components and configurations.
Problem-Solving: Strong analytical and problem-solving skills, with the ability to troubleshoot complex issues.
Communication: Excellent communication skills, both written and verbal, with the ability to work effectively in a team environment.

Preferred Qualifications:
Certifications: Azure certifications (e.g., Azure Administrator Associate, Azure Solutions Architect) are a plus.
Additional Tools: Experience with other cloud platforms (AWS, GCP) or tools (Kubernetes, Terraform) is beneficial.

Why Join Us?
Opportunity to work on cutting-edge technologies. Lead a high-performing team in a fast-paced, dynamic environment.

Location: Ahmedabad/Pune/Indore
Interested candidates can share their resume at arti.bhimani1@einfochips.com
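The monitoring and alerting duties in this role centre on Prometheus and Grafana. As a small, hedged illustration (not eInfochips code), the Python snippet below uses the prometheus_client library to expose request counters and latencies that Prometheus can scrape and Grafana can chart; the metric names and port are arbitrary choices, not a required convention.

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server  # pip install prometheus-client

# Illustrative metric names.
REQUESTS = Counter("app_requests_total", "Total requests handled", ["outcome"])
LATENCY = Histogram("app_request_seconds", "Request latency in seconds")

@LATENCY.time()
def handle_request() -> None:
    """Pretend to do some work and record whether it succeeded."""
    time.sleep(random.uniform(0.01, 0.2))
    outcome = "ok" if random.random() > 0.05 else "error"
    REQUESTS.labels(outcome=outcome).inc()

if __name__ == "__main__":
    start_http_server(8000)  # metrics served at http://localhost:8000/metrics
    while True:
        handle_request()
```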

Posted 6 days ago

Apply

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


Location: Noida Berger Tower, India

Thales people architect identity management and data protection solutions at the heart of digital security. Business and governments rely on us to bring trust to the billions of digital interactions they have with people. Our technologies and services help banks exchange funds, people cross borders, energy become smarter and much more. More than 30,000 organizations already rely on us to verify the identities of people and things, grant access to digital services, analyze vast quantities of information and encrypt data to make the connected world more secure.

Present in India since 1953, Thales is headquartered in Noida, Uttar Pradesh, and has operational offices and sites spread across Bengaluru, Delhi, Gurugram, Hyderabad, Mumbai and Pune, among others. Over 1,800 employees are working with Thales and its joint ventures in India. Since the beginning, Thales has been playing an essential role in India’s growth story by sharing its technologies and expertise in the Defence, Transport, Aerospace and Digital Identity and Security markets.

We are seeking a skilled and motivated Full Stack Developer to join our team in building scalable, high-performance solutions. The ideal candidate will have solid experience in Node.js for UI development and Java (J2EE) for backend services, with a strong understanding of REST APIs, microservices, and modern DevOps practices. Familiarity with Karate for IVVQ and hands-on knowledge of Kubernetes, especially in an AWS EKS environment, is essential. You will be working in a Thales Adaptive Connect (TAC) Agile team within the Thales DIS/MCS Business Line, attached to DES (Digital Engineering & Services). As a member of this Agile team, you will participate in the design, implementation and maintenance of the TAC product.

Key Responsibilities
Design, develop, and maintain frontend components using Node.js.
Build robust backend services and business logic using Java (J2EE).
Develop and consume RESTful APIs in a microservices architecture.
Collaborate with cross-functional teams to define, design, and ship new features.
Write and maintain automated test scripts using Karate for functional and integration testing (IVVQ).
Deploy and manage applications in a Kubernetes cluster (EKS on AWS).
Ensure code quality, performance, and responsiveness of applications.
Participate in code reviews, agile ceremonies, and continuous improvement efforts.

Required Skills And Qualifications
Strong proficiency in Node.js with experience building UI/front-end components.
Advanced Java/J2EE development experience.
Hands-on experience designing and consuming REST APIs.
Deep understanding of microservices-based architecture and best practices.
Experience with Karate or similar tools for test automation in IVVQ environments.
Working knowledge of Kubernetes, particularly within AWS EKS.
Familiarity with CI/CD pipelines, containerization (Docker), and cloud-native development.
Excellent problem-solving skills and ability to work independently and in a team.
At least 5 years of professional full stack development experience.

Preferred Qualifications
Experience with front-end frameworks (e.g., React, Angular) is a plus.
Knowledge of monitoring tools (e.g., Prometheus, Grafana) and logging solutions (e.g., ELK, CloudWatch).
Experience in Agile development environments.

Why Join Us?
Opportunity to work on modern, cloud-native solutions. Collaborative team environment with strong engineering culture.
Flexible work arrangements and competitive compensation.

Soft Skills
Humble, hungry and people smart (emotional intelligence).
Agile mindset and an understanding of the importance of validation and DoD.
Autonomous, curious and a team player; able to work within a team.
Good communication and interpersonal skills.
Problem-solving mindset.
Capable of listening to and interacting with users (worldwide) and either solving their immediate problems or proposing a new feature to fit their needs.
A quick learner, able to adapt to new tools and technologies.

At Thales we provide CAREERS and not only jobs. With Thales employing 80,000 employees in 68 countries, our mobility policy enables thousands of employees each year to develop their careers at home and abroad, in their existing areas of expertise or by branching out into new fields. Together we believe that embracing flexibility is a smarter way of working. Great journeys start here, apply now!
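This posting pairs REST API development with automated functional tests (Karate) and Kubernetes health checks. The real stack is Node.js/Java; purely as a compact, language-neutral illustration of the pattern, here is a Python/FastAPI sketch of a health endpoint, a trivial API route, and an automated test against them. The route names and payloads are assumptions.

```python
# Minimal sketch of the REST-plus-automated-test pattern described in the posting.
# pip install fastapi httpx
from fastapi import FastAPI
from fastapi.testclient import TestClient

app = FastAPI()

@app.get("/health")
def health() -> dict:
    """Liveness/readiness style endpoint that Kubernetes probes can call."""
    return {"status": "ok"}

@app.get("/api/v1/items/{item_id}")
def get_item(item_id: int) -> dict:
    # Placeholder business logic -- a real service would query a database here.
    return {"id": item_id, "name": f"item-{item_id}"}

def test_health_and_items() -> None:
    client = TestClient(app)
    assert client.get("/health").json() == {"status": "ok"}
    assert client.get("/api/v1/items/7").json()["id"] == 7

if __name__ == "__main__":
    test_health_and_items()
    print("all checks passed")
```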

Posted 6 days ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Fortanix is a dynamic start-up solving some of the world's most demanding data protection challenges for companies and governments around the world. Our disruptive technology maintains data privacy across its entire lifecycle -- at rest, in motion, and in use across any enterprise IT infrastructure -- public cloud, on-premise, hybrid cloud, and SaaS. With key strategic partners like Microsoft, Intel, ServiceNow, and Snowflake, Fortanix customers like PayPal, Google & Adidas are reaping the benefits. Recognized by Gartner as a "Cool Vendor", Fortanix is revolutionizing cyber security. Join the revolution!

At Fortanix we are redefining what cloud security means. Our customers use our software platform to build and run software much more securely than was previously possible. We are seeking software engineers to extend the capability and performance of our cloud security solutions.

As a Software Engineer at Fortanix, you will play a critical role in designing, building, and maintaining our observability platform. You will work closely with cross-functional teams to enhance and optimize the performance and scalability of our cloud security solutions.

In this role, you will:
Collaborate with product managers and other engineers to determine customer requirements and translate them into technical solutions
Design, develop, and deploy observability features and functionality for our cloud security platform
Optimize and scale our observability infrastructure to handle large volumes of data efficiently
Participate in code reviews and provide constructive feedback to ensure the overall quality and stability of the codebase
Contribute to the continuous improvement of software development processes and practices

We are looking for someone who:
Has a deep understanding of observability concepts, tools, and techniques, including monitoring, logging, and distributed tracing
Has strong software engineering skills and experience with backend development
Is proficient in at least one programming language, such as Rust, Go, Java or C++
Has experience with cloud-based technologies, preferably AWS, Azure and GCP
Is proficient with database architecture, scaling, and optimization
Has competence with CI/CD procedures and microservice architecture
Is familiar with containerization technologies like Docker and Kubernetes
Has excellent problem-solving and analytical skills
Is self-motivated and can work effectively both independently and as part of a team
Communicates effectively and enjoys collaborating with others

If you are passionate about observability and want to make a meaningful impact in the field of cloud security, we would love to hear from you. Join us at Fortanix and be part of our mission to redefine what cloud security means.
Requirements
Minimum of 5 years of professional experience as a software engineer
Bachelor's degree in Computer Science, Engineering, or a related field
Strong experience in backend development and building distributed systems
Proficiency in at least one programming language, such as Rust, Go, Java or C++
Experience with cloud-based technologies, preferably AWS, Azure and GCP
Familiarity with containerization technologies like Docker and Kubernetes
Strong problem-solving and analytical skills
Excellent communication and collaboration skills

Benefits
Mediclaim Insurance - employees and their eligible dependents, including dental coverage
Personal Accident Insurance
Internet Reimbursement
Employee Stock Options

Fortanix is an equal opportunity employer that celebrates diversity and is committed to creating an inclusive workplace with equal opportunity for all applicants and teammates. Our goal is to recruit the most talented people from a diverse candidate pool regardless of race, color, religion, age, gender, gender identity, sexual orientation, or any other status. If you're interested in working in a fast-growing, exciting working environment - we encourage you to apply!
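Distributed tracing and logging, called out in both the role description and the requirements, usually hinge on propagating a correlation ID through every log line. The sketch below is illustrative only (not Fortanix code) and shows one common Python pattern: a logging filter that injects a per-request trace ID so logs from different services can be joined later.

```python
import logging
import uuid
from contextvars import ContextVar

# Holds the ID of the request currently being processed (works across async tasks too).
trace_id_var: ContextVar[str] = ContextVar("trace_id", default="-")

class TraceIdFilter(logging.Filter):
    """Copy the current trace ID into every log record."""
    def filter(self, record: logging.LogRecord) -> bool:
        record.trace_id = trace_id_var.get()
        return True

handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s trace=%(trace_id)s %(message)s"))
handler.addFilter(TraceIdFilter())
logging.basicConfig(level=logging.INFO, handlers=[handler])
log = logging.getLogger("svc")

def handle_request(payload: dict) -> None:
    # In a real service the trace ID would come from an incoming header (e.g. W3C traceparent).
    trace_id_var.set(payload.get("trace_id", uuid.uuid4().hex))
    log.info("request received")
    log.info("request finished")

if __name__ == "__main__":
    handle_request({"user": "demo"})
```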

Posted 6 days ago

Apply

10.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Minimum qualifications:
Bachelor's degree or equivalent practical experience.
10 years of experience as a product manager in the data or AI/ML domain.
5 years of experience in people management, with technical leadership.

Preferred qualifications:
Experience working in a fast-paced and agile environment with the ability to build and mentor high-performing teams.
Experience with cloud-based data platforms (e.g., Cloud Computing Platform, GCP).
Experience with building and scaling data products in a global context.
Knowledge of data architectures, data warehousing, data lakes, and data pipelines.
Knowledge of AI/ML concepts, including machine learning algorithms, model training, and deployment.
Ability to think strategically and translate business needs into technical requirements.

About The Job
At Google, we put our users first. The world is always changing, so we need Product Managers who are continuously adapting and excited to work on products that affect millions of people every day. In this role, you will work cross-functionally to guide products from conception to launch by connecting the technical and business worlds. You can break down complex problems into steps that drive product development.

One of the many reasons Google consistently brings innovative, world-changing products to market is because of the collaborative work we do in Product Management. Our team works closely with creative engineers, designers, marketers, etc. to help design and develop technologies that improve access to the world's information. We're responsible for guiding products throughout the execution cycle, focusing specifically on analyzing, positioning, packaging, promoting, and tailoring our solutions to our users.

In this role, you will drive product leadership for logging, feature storage, and machine learning engineering data journeys that impact strategic efforts across Google. This requires defining the goal, strategy, and core product requirements across these areas. In addition, you will be responsible for managing a team of product managers, leveraging their strengths and skills while also helping them grow as product leaders.

The Core team builds the technical foundation behind Google’s flagship products. We are owners and advocates for the underlying design elements, developer platforms, product components, and infrastructure at Google. These are the essential building blocks for excellent, safe, and coherent experiences for our users and drive the pace of innovation for every developer. We look across Google’s products to build central solutions, break down technical barriers and strengthen existing systems. As the Core team, we have a mandate and a unique opportunity to impact important technical decisions across the company.

Responsibilities
Define and deliver a compelling product goal and roadmap for data platforms that empower AI/ML innovation, and lead the discovery and development of new data products and features that address the evolving needs of data scientists, machine learning engineers, and researchers.
Collaborate with engineering, data science, and research teams to translate business needs into technical requirements and prioritize initiatives.
Conduct market research and engaged analysis to identify emerging trends and opportunities in the data and AI/ML space.
Build, mentor, and lead a high-performing team of Product Managers, fostering a culture of innovation, collaboration, and ownership, and drive cross-functional collaboration across teams to ensure successful product launches and adoption.
Communicate with stakeholders at all levels, including engineers, data scientists, executives, and customers.

Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.

Posted 6 days ago

Apply

3.0 years

0 Lacs

Kanpur, Uttar Pradesh, India

On-site


Role: DevOps Engineer with GCP
Location: Kanpur, Goa
Experience: 3+ Years
No. of Positions: 1
Job Type: Full-time
Posting Date: 05 June 2025
Start Date: ASAP

Job Description:
We are seeking a multifaceted professional who excels in both DevOps engineering and project management. The ideal candidate will play a crucial role in optimizing our development processes, ensuring seamless project delivery, and fostering collaboration across cross-functional teams. If you are a dynamic individual with a strong technical background and excellent project management skills, we invite you to join our innovative team.

Responsibilities
Implement and manage CI/CD pipelines to streamline the software delivery process.
Utilize Infrastructure as Code (IaC) to maintain scalable and reliable infrastructure.
Collaborate with development and operations teams to enhance communication and workflow.
Implement and manage monitoring, logging, and security practices for system health.

Project Management:
Lead and manage end-to-end project lifecycles, ensuring deliverables meet quality standards and deadlines.
Develop project plans, schedules, and budgets, and track progress against milestones.
Coordinate with cross-functional teams to gather requirements and provide regular project updates.
Identify and mitigate project risks, ensuring successful project delivery.
Facilitate effective communication between technical and non-technical stakeholders.

Collaboration:
Bridge the gap between development and operations teams, fostering a culture of collaboration.
Conduct regular meetings to ensure project goals align with business objectives.
Facilitate knowledge sharing and cross-training within the team.

Requirements and Skills
Bachelor’s degree in Computer Science, Information Technology, or a related field (preferred).
Proven experience as a DevOps Engineer and Project Manager.
Strong understanding of CI/CD pipelines, IaC, and infrastructure management.
Experience with project management methodologies and tools.
Proficiency in scripting languages (e.g., Python, Shell) for automation tasks.
Knowledge of containerization and orchestration tools (e.g., Docker, Kubernetes).
Excellent communication and collaboration skills.
Ability to manage multiple projects simultaneously.
Certifications in relevant technologies (e.g., AWS Certified DevOps Engineer, GCP) are a plus.

We offer a competitive salary, performance-based incentives, and a supportive work environment that encourages professional growth and development. If you are a dynamic and results-driven professional with expertise in both DevOps engineering and project management and this all sounds exciting, please apply using “Apply Now” or send your application to jobs@barytech.com as soon as possible. Thanks for your interest. We look forward to getting to know you.
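CI/CD responsibilities like these frequently include a post-deploy health gate: the pipeline fails (and can roll back) if the freshly deployed service does not answer. A minimal, generic Python sketch follows; the URL, retry counts, and exit-code convention are assumptions, not the employer's tooling.

```python
import sys
import time

import requests  # pip install requests

HEALTH_URL = "https://staging.example.invalid/health"  # placeholder deployment URL
ATTEMPTS = 10
DELAY_SECONDS = 6

def healthy(url: str) -> bool:
    try:
        return requests.get(url, timeout=5).status_code == 200
    except requests.RequestException:
        return False

def main() -> int:
    for attempt in range(1, ATTEMPTS + 1):
        if healthy(HEALTH_URL):
            print(f"deployment healthy after {attempt} attempt(s)")
            return 0
        print(f"attempt {attempt}/{ATTEMPTS}: not healthy yet, retrying...")
        time.sleep(DELAY_SECONDS)
    print("deployment never became healthy - failing the pipeline step")
    return 1  # non-zero exit makes the CI job (and any rollback step) trigger

if __name__ == "__main__":
    sys.exit(main())
```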

Posted 6 days ago

Apply

3.0 - 5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Job Description
About the Role:
We are seeking a highly skilled and experienced Software Engineer III to join our growing engineering team. As a Software Engineer III, you will be a key contributor to the design, development, and maintenance of our core applications. You will work independently and collaboratively on complex projects, leveraging your expertise in Go and Java to build robust and scalable systems. This role requires a strong understanding of software engineering principles, experience with various technologies, and a passion for building high-quality, maintainable software.

Responsibilities
Develop & Maintain: Develop, test, deploy, and maintain high-quality, scalable, and reliable software applications using Go and Java, adhering to best practices and coding standards. Implement complex features with guidance, demonstrating a good understanding of design patterns and principles. Develop and maintain RESTful APIs. Participate in code reviews, providing constructive feedback and ensuring code quality.
Database Management: Design and implement efficient and robust database solutions leveraging relational (e.g., PostgreSQL, MySQL) and/or NoSQL databases (e.g., MongoDB). Write efficient queries and understand database performance optimization techniques.
Cloud Infrastructure: Work with AWS (or other cloud providers) utilizing services like EC2, S3, Lambda, etc. Contribute to the design and implementation of CI/CD pipelines. Understand cloud security best practices.
Problem Solving & Troubleshooting: Effectively debug and resolve software issues. Identify and communicate potential risks in projects. Participate in Root Cause Analysis (RCA) sessions.
Collaboration & Communication: Collaborate effectively with other engineers, product managers, and designers. Clearly communicate technical information to both technical and non-technical audiences.
Process Improvement: Contribute to improving team processes and suggest improvements to development workflows. Actively participate in knowledge sharing within the team.
Testing & Instrumentation: Write unit and integration tests to ensure code quality and reliability. Understand and implement application monitoring and logging strategies.

Qualifications
Bachelor's degree in Computer Science or a related field, or equivalent experience.
3-5 years of professional software development experience, with experience in Go and Java.
Experience designing and developing RESTful APIs.
Experience working with relational and/or NoSQL databases.
Familiarity with AWS (or other major cloud provider) services.
Understanding of software design patterns and principles.
Experience with version control systems (Git).
Good problem-solving and debugging skills.
Excellent communication and collaboration skills.
Experience with Agile development methodologies is a plus.

About Us
Fanatics is building a leading global digital sports platform. We ignite the passions of global sports fans and maximize the presence and reach for our hundreds of sports partners globally by offering products and services across Fanatics Commerce, Fanatics Collectibles, and Fanatics Betting & Gaming, allowing sports fans to Buy, Collect, and Bet. Through the Fanatics platform, sports fans can buy licensed fan gear, jerseys, lifestyle and streetwear products, headwear, and hardgoods; collect physical and digital trading cards, sports memorabilia, and other digital assets; and bet as the company builds its Sportsbook and iGaming platform.
Fanatics has an established database of over 100 million global sports fans; a global partner network with approximately 900 sports properties, including major national and international professional sports leagues, players associations, teams, colleges, college conferences and retail partners, 2,500 athletes and celebrities, and 200 exclusive athletes; and over 2,000 retail locations, including its Lids retail stores. Our more than 22,000 employees are committed to relentlessly enhancing the fan experience and delighting sports fans globally.

About The Team
Fanatics Commerce is a leading designer, manufacturer, and seller of licensed fan gear, jerseys, lifestyle and streetwear products, headwear, and hardgoods. It operates a vertically-integrated platform of digital and physical capabilities for leading sports leagues, teams, colleges, and associations globally – as well as its flagship site, www.fanatics.com.

Fanatics Commerce has a broad range of online, sports venue, and vertical apparel partnerships worldwide, including comprehensive partnerships with leading leagues, teams, colleges, and sports organizations across the world, including the NFL, NBA, MLB, NHL, MLS, Formula 1, and Australian Football League (AFL); the Dallas Cowboys, Golden State Warriors, Paris Saint-Germain, Manchester United, Chelsea FC, and Tokyo Giants; the University of Notre Dame, University of Alabama, and University of Texas; the International Olympic Committee (IOC), England Rugby, and the Union of European Football Associations (UEFA).

At Fanatics Commerce, we infuse our BOLD Leadership Principles in everything we do:
Build Championship Teams
Obsessed with Fans
Limitless Entrepreneurial Spirit
Determined and Relentless Mindset

Posted 6 days ago

Apply

2.0 years

0 Lacs

India

On-site


Role Overview
Build and deploy an end-to-end AI-driven SEO agent platform. You will own everything from prompt-engineering and RAG pipelines to CMS integration, staging environments, CI/CD, and a minimal frontend or plugin for demos.

Key Responsibilities
Design and implement LLM-powered agents (prompt flows, retrieval-augmented generation, embedding stores) using frameworks like LangChain, LlamaIndex (formerly GPT Index), or similar.
Prototype and integrate vector-based retrieval with Pinecone, Weaviate, or Milvus.
Build and secure REST APIs for WordPress, Shopify, headless CMS or React/Node.js platforms.
Containerize services with Docker; orchestrate staging via Kubernetes or Terraform.
Implement CI/CD pipelines (GitHub Actions, GitLab CI) with rollback and health-check gates.
Scaffold lightweight frontend components or CMS plugins (React/Vue) to showcase agent capabilities.
Automate on-page SEO tasks (metadata, schema injection), internal linking, and 404 recovery via code.
Collaborate with prompt engineers and QA to validate agent outputs before production push.

Required Experience
2+ years of software engineering with 2+ years in AI/ML or NLP projects.
1-3 years Python (FastAPI, Django) for API and agent development.
Hands-on with LLM models (OpenAI GPT-4, Claude, or self-hosted LLaMA/Meta’s models) and fine-tuning pipelines.
Experience with prompt frameworks: LangChain, LlamaIndex, Haystack, or equivalent.
Proven RAG (Retrieval-Augmented Generation) implementation experience.
Proficiency in Node.js and React (or Vue) for plugin/UI development.
Strong background in Docker, Kubernetes (or Terraform), and CI/CD tools.
Familiarity with CMS integration: WordPress REST API, Shopify Admin API, or headless CMS APIs.
Solid prompt-engineering skills: few-shot, chain-of-thought, retrieval-augmented prompts.
Knowledge of SEO best practices: metadata, JSON-LD schema, internal linking, and redirect management.
Experience writing automated tests and health-check scripts for staging validations.

Nice-to-Have
Prior work on SEO automation, site-audit tooling, or custom CMS plugins.
Experience deploying or fine-tuning open-source LLMs (e.g., LLaMA, Falcon, MPT).
Exposure to Haystack, Retrieval QA stacks, or vector pipeline orchestration.
Familiarity with monitoring/logging stacks (Prometheus, ELK, Grafana).
Basic understanding of frontend styling (Tailwind, Material UI) for rapid prototyping.

Apply if you can take full ownership of AI agent development and deployment, leveraging LangChain, style frameworks, and a range of LLMs, to deliver a working MVP in 4–6 weeks.
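The core of the RAG pipeline this listing asks for is: embed documents, retrieve the closest ones for a query, and place them in the prompt. The framework-agnostic Python sketch below shows just that retrieval step with cosine similarity; the embed() function is a stand-in (an assumption) for whatever embedding model or API the real system would use, and no specific LangChain/LlamaIndex call is implied.

```python
import math

def embed(text: str) -> list[float]:
    """Placeholder embedding: a real pipeline would call an embedding model/API here.
    This toy version hashes character trigrams into a small fixed-size vector."""
    vec = [0.0] * 64
    for i in range(len(text) - 2):
        vec[hash(text[i:i + 3]) % 64] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

DOCS = [
    "Add JSON-LD Product schema to every product page.",
    "Internal links should use descriptive anchor text.",
    "Return 301 redirects for retired URLs to recover 404 traffic.",
]
INDEX = [(doc, embed(doc)) for doc in DOCS]  # in production: Pinecone, Weaviate, or Milvus

def retrieve(query: str, k: int = 2) -> list[str]:
    qv = embed(query)
    ranked = sorted(INDEX, key=lambda item: cosine(qv, item[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

if __name__ == "__main__":
    context = retrieve("How do I fix broken 404 pages for SEO?")
    prompt = "Answer using this context:\n" + "\n".join(context) + "\nQuestion: ..."
    print(prompt)  # the assembled prompt would then be sent to the chosen LLM
```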

Posted 6 days ago

Apply

1.0 years

0 Lacs

Kochi, Kerala, India

On-site


Company Description
CyberDisti is a next-generation cyber security value-added distributor offering comprehensive cyber security solutions tailored to meet clients' specific needs. Our vision is to provide advanced cyber security solutions to facilitate secure digital transformation for our customers. By partnering with leading vendors in the industry, CyberDisti holds a unique competitive advantage in delivering cutting-edge cyber security products and solutions.

Job Description
Analyze and investigate security events from various sources.
Manage security incidents through all phases of the incident response process through to closure.
Use SIEM, full packet capture, intrusion detection, vulnerability scanning and malware analysis technologies for event detection and analysis.
Update tickets, write incident reports and document actions for false-positive reduction.
Develop knowledge of attack types and fine-tune detection capabilities, such as writing Snort/Sourcefire signatures.
Incident validation, detailed analysis of attacks and incident response.
Solution recommendation for issues.
Manage security devices; risk analysis for change management of security devices; act as escalation point for device issue resolution.
Resolve escalations, identify missed incidents, maintain the knowledge base, define security breaches.
Follow up with the concerned departments/vendor on the remediation steps taken.
Resolve queries from the client's stakeholders.
Coordinate and be present to discuss with client stakeholders in person.

Qualifications & Skills
1+ years of experience in working with SIEMs/SEMs and other log analysis technologies.
Bachelor's in Computer Science or Computer Engineering.
Detailed understanding of the TCP and IP protocol suites and ability to dissect and explain the contents of traffic and packets.
Demonstrated ability to work well independently with little input, and as a part of a team.
Experience with configuration of debug, event generation and logging functionality within applications and operating systems, using Syslog or flat file generation.
Operating systems and system administration skills in at least one of the following (Windows, Solaris, Linux), including good command-line skills.
3-5 years of experience in SIEM, log monitoring, event correlation and analysis.
Experience in vulnerability assessments and penetration testing.
Experience in handling events, patch management, configuration management.
Understanding of TCP/IP, networking concepts and internet protocols.
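Event correlation of the kind a SOC analyst does in a SIEM can be illustrated with a few lines of Python: parse authentication log lines, count failures per source IP, and flag likely brute-force sources. The log format, threshold, and sample lines below are assumptions for the sketch, not any specific SIEM's rules.

```python
import re
from collections import Counter

# Pattern roughly matching OpenSSH "Failed password" syslog lines (assumed format).
FAILED_LOGIN = re.compile(r"Failed password for (?:invalid user )?(\S+) from (\d+\.\d+\.\d+\.\d+)")
THRESHOLD = 3  # alert once an IP accumulates this many failures

SAMPLE_LOG = """\
Jun 10 10:01:02 host sshd[101]: Failed password for root from 203.0.113.7 port 522 ssh2
Jun 10 10:01:05 host sshd[101]: Failed password for root from 203.0.113.7 port 523 ssh2
Jun 10 10:01:09 host sshd[102]: Failed password for invalid user admin from 203.0.113.7 port 524 ssh2
Jun 10 10:02:00 host sshd[103]: Failed password for alice from 198.51.100.9 port 610 ssh2
"""

def correlate(lines: list[str]) -> dict[str, int]:
    """Count failed logins per source IP."""
    hits = Counter()
    for line in lines:
        match = FAILED_LOGIN.search(line)
        if match:
            hits[match.group(2)] += 1
    return dict(hits)

if __name__ == "__main__":
    for ip, count in correlate(SAMPLE_LOG.splitlines()).items():
        if count >= THRESHOLD:
            print(f"ALERT: {count} failed logins from {ip} - candidate incident")
```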

Posted 6 days ago

Apply

2.0 years

0 Lacs

Pune, Maharashtra, India

On-site


At Swarovski, where innovation meets inspiration, our people desire to explore, experience and create. We are looking for an MLOps Engineer (internally called AI DevOps), where you will get a chance to work in a rewarding role within a diverse team that is pushing boundaries. Be part of a truly iconic global brand, learn and grow with us. We’re bold and inventive, revealing astonishing things like no one else can. A world of wonder awaits you.

About The Job
Engineer ML models to production in GCP, owning the model maintenance, monitoring and support activities.
Enhance existing quantitative and generative AI solutions or develop new ones by translating business questions and requirements into actionable insights.
Align with state-of-the-art AI development and deployment standards.
Deliver efficient solutions and identify possible opportunities for AI adoption.
Leverage the standards, the Data Lake and the existing cloud infrastructure as well as the current technology landscape to design the solutions adopted by the AI Center of Excellence.
Test the solutions and run pilot activities (e.g. A/B testing), in close coordination with the business partners, needed to validate business outcomes.

About You
We are looking for a unique and amazing talent who brings along the following:
University degree, preferably in Mathematics, Statistics, Computer Science, or similar.
Minimum 2 years of professional experience in a similar role within an international setting.
Excellent analytical and problem-solving skills.
Excellent English proficiency, presentation, and communication skills.
Proven experience in AI solutions development and deployment in GCP, preferably Vertex AI.
Proven experience in Python, CI/CD, pipeline frameworks (Kubeflow, MLflow, TFX), database languages (e.g., SQL) and relevant Docker/containerization basics.
Good experience on Google Cloud, especially Vertex AI, Cloud Build, Monitoring and Logging (certifications are a plus).

About Swarovski
Masters of Light Since 1895. Swarovski creates beautiful crystal-based products of impeccable quality and craftsmanship that bring joy and celebrate individuality. Founded in 1895 in Austria, the company designs, manufactures and sells the world's finest crystals, gemstones, Swarovski Created Diamonds and zirconia, jewelry, and accessories, as well as objects and home accessories. Swarovski Crystal Business has a global reach with approximately 2,400 stores and 6,700 points of sale in over 150 countries and employs more than 18,000 people. Together with its sister companies Swarovski Optik (optical devices) and Tyrolit (abrasives), Swarovski Crystal Business forms the Swarovski Group.

A responsible relationship with people and the planet is part of Swarovski’s heritage. Today this legacy is rooted in sustainability measures across the value chain, with an emphasis on circular innovation, championing diversity, inclusion and self-expression, and in the philanthropic work of the Swarovski Foundation, which supports charitable organizations bringing positive environmental and social impact.
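Owning "model maintenance and monitoring" in production usually includes checking whether live input or prediction distributions have drifted from training. The Python sketch below computes a simple Population Stability Index (PSI) between two samples; it is a generic, framework-agnostic illustration (not Swarovski or Vertex AI tooling), and the bin count and alert threshold are conventional assumptions.

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a baseline sample and a live sample."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_counts, _ = np.histogram(expected, bins=edges)
    act_counts, _ = np.histogram(actual, bins=edges)
    # Convert to proportions; a small epsilon avoids division by zero and log(0).
    eps = 1e-6
    exp_pct = exp_counts / max(exp_counts.sum(), 1) + eps
    act_pct = act_counts / max(act_counts.sum(), 1) + eps
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    training_scores = rng.normal(0.0, 1.0, 5_000)  # baseline (e.g. scores at training time)
    live_scores = rng.normal(0.4, 1.2, 5_000)      # shifted live traffic
    value = psi(training_scores, live_scores)
    # Common rule of thumb (an assumption): above 0.2 signals drift worth investigating.
    print(f"PSI = {value:.3f}", "-> investigate drift" if value > 0.2 else "-> stable")
```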

Posted 6 days ago

Apply

0.0 - 3.0 years

0 Lacs

Gurugram, Haryana, India

Remote

Linkedin logo

We are seeking a motivated Junior Full Stack Developer to join our dynamic development team. This role is perfect for developers with 0-3 years of experience who are passionate about modern web technologies and eager to grow their skills in a collaborative environment. Ideal Candidate You're a curious problem-solver who enjoys working with modern technologies and isn't afraid to learn new tools. You have a solid foundation in web development fundamentals and are excited about leveraging AI tools to enhance your productivity. You thrive in collaborative environments and are eager to contribute to innovative projects while growing your technical expertise. Responsibilities ● Develop and maintain responsive web applications using Next.js and modern React patterns ● Build robust backend services using Python or Node.js ● Implement and manage cloud infrastructure on GCP or Azure platforms ● Design and optimize database schemas and API integrations using Supabase or similar backend-as-a-service platforms ● Collaborate with cross-functional teams using Git for version control and code collaboration ● Participate in code reviews and contribute to technical documentation ● Debug and troubleshoot issues across the full application stack ● Stay current with emerging technologies and best practices Skills & Experience ● Experience Level: 0-3 years in full-stack development ● Frontend: Proficiency in Next.js, React, HTML5, CSS3, and JavaScript/TypeScript ● Backend: Working knowledge of Python or Node.js for server-side development ● Cloud Infrastructure: Basic understanding of GCP or Azure services (Compute Engine, App Service, Storage, etc.) ● Backend-as-a-Service: Experience with Supabase, Firebase, or similar serverless backend solutions ● Version Control: Proficient with Git workflows and collaborative development practices ● AI Tools: Familiarity with AI-powered development tools (GitHub Copilot, ChatGPT, Claude, etc.) for code assistance and productivity Qualifications ● Understanding of RESTful APIs and GraphQL ● Experience with containerization (Docker) and basic DevOps practices ● Knowledge of database design and SQL ● Familiarity with testing frameworks (Jest, Pytest, Cypress) ● Understanding of responsive design principles and CSS frameworks ● Experience with CI/CD pipelines ● Basic knowledge of authentication and authorization patterns ● Exposure to monitoring and logging tools What We Offer ● Competitive salary commensurate with experience ● Comprehensive health, dental, and vision insurance ● Professional development opportunities and learning budget ● Flexible work arrangements (hybrid/remote options) ● Modern tech stack and cutting-edge projects ● Mentorship from senior developers ● Collaborative and inclusive work environment ● Career growth opportunities Show more Show less

Posted 6 days ago

Apply

0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site

Linkedin logo

Bosch, Coimbatore, Tamil Nadu, India. Posted on Jun 10, 2025.

Company Description
Bosch Global Software Technologies Private Limited is a 100% owned subsidiary of Robert Bosch GmbH, one of the world's leading global suppliers of technology and services, offering end-to-end Engineering, IT and Business Solutions. With over 28,200 associates, it is the largest software development center of Bosch outside Germany, making it the technology powerhouse of Bosch in India, with a global footprint and presence in the US, Europe and the Asia Pacific region.

Job Description (Main Activities / Duties)
Provide up to Level 2 support for Bosch infrastructure services, with a focus on SD-WAN, our Global Backbone, and our central hubs in core locations worldwide.
Build up and maintain monitoring and logging tools.
Monitor performance, availability, and overall health of the network.
Document and log issues and resolution steps.
Operate global IT services: solve problems, incidents, configurations, alerts and service requests, and monitor network services and solutions in Bosch datacenter networks worldwide, together with engineering teams, partners, and vendors.
Escalate issues to the appropriate teams.
Availability to work in shift hours, including weekends and holidays.
Support projects such as the rollout and implementation of the SD-WAN stack at Bosch locations.
Work closely with Service Managers, Operation Managers and Engineering teams on opportunities for improvement.
Work and collaborate in an international team operating, supporting, monitoring, implementing, replacing, extending, upgrading, and optimizing network solutions globally.
Execute and optimize operational processes, reviewing procedures and documents related to monitoring, supporting, and operating the network solutions and technologies.

Qualification
Required skills:
Degree in Computer Science, Network Analyst or equivalent.
Knowledge of networking technologies (e.g. OSPF, BGP, MPLS, QoS).
Knowledge of VPN technologies (e.g. IPsec, SSL, DMVPN, GetVPN).
Knowledge of SD-WAN technologies and products (preferably Cisco Viptela).
Broad knowledge of basic network and security concepts and implementations (NAT, DNS, proxies, load balancers, ACLs, etc.).
Experience with major hardware and software platforms from Cisco (IOS, NX-OS).
Preferable: fundamental knowledge of software-driven networking (Python, Ansible, CI/CD, Git).
Language: English.
Previous experience in the implementation, operation, monitoring and support of network technologies and solutions.
Knowledge of the configuration and administration of network solutions and equipment, of further networking protocols (IPsec, spanning tree, MAC, ARP) and of Cisco ACI technology is welcome.
Experience in network environments, supporting troubleshooting, scalability issues, automation, and operation.
Desirable: knowledge of scripting (PowerShell, VBA, etc.) and programming languages (Python, Ansible, SQL, etc.).
Knowledge of virtualization technologies (on premises and in the cloud).
Previous experience in the monitoring and operation of IT infrastructure (cloud and on-premises).
Certifications will be a differentiator (e.g. Cisco, ITIL, CCNA).

Personal Profile
Initiative, dynamism, empathy, proactivity in proposing process improvements or problem solutions, organization, teamwork and collaboration, and commitment (to the team, tasks, deliveries, deadlines and schedules) are the main desired characteristics.
Assertive communication and good English proficiency for interacting with international teams and answering calls from customers in different countries.

Qualifications
BE/B Tech or equivalent.

Posted 6 days ago

Apply

5.0 years

0 Lacs

Chandigarh, India

On-site

Linkedin logo

Responsibilities We are seeking two experienced Senior Python Developers to play a key role in the design, development, and maintenance of our integration framework and our core platform. You will be responsible for building robust and scalable microservices, implementing complex data ingestion and control/orchestration integrations, and ensuring the overall quality and performance of the systems. This role requires strong expertise in Python, FastAPI, and a deep understanding of distributed systems and API development. Location : Zirakpur ,Chandigarh Experience: 5+ Years Key Responsibilities: • Design, develop, test, deploy, and maintain high-quality, scalable, and performant microservices for our integration framework using FastAPI and Python. • Contribute to the development and maintenance of our core platform (potentially involving Django). • Implement data ingestion (monitoring, data collection) and control/orchestration (configuration, provisioning) integrations with diverse external systems and data sources, following our defined integration plans. • Develop and manage RESTful APIs, ensuring they are well-documented and follow best practices. • Work extensively with asynchronous programming. • Define and utilize Pydantic models for data validation, serialization, and clear API contracts. • Integrate with and utilize message brokers for event-driven communication and task queuing. • Implement and manage caching strategies using Redis to optimize performance. Work with various data storage and processing solutions, including time-series data, and leverage libraries like pandas, Polars for efficient data manipulation. • Develop and maintain modular adapter components for different external systems, enabling both data polling and control/orchestration actions. • Contribute to the design and implementation of features like a Request Tracking System and Orchestration Transaction Management for system interaction workflows. • Write comprehensive unit, integration, and end-to-end tests to ensure code quality and reliability. • Collaborate effectively with cross-functional teams, including product managers, QA engineers, and other developers. • Participate in code reviews, architectural discussions, and contribute to improving development processes. • Troubleshoot, debug, and resolve complex technical issues in development and production environments. • Stay updated with emerging technologies and industry best practices. Required Skills and Experience: • Bachelor's or Master's degree in Computer Science, Engineering, or a related field. • 5+ years of professional software development experience with a strong focus on Python. • Proven experience in building and scaling applications using FastAPI. • Experience with Django is highly desirable. • Solid understanding and hands-on experience with RESTful API design, development, and security. • Proficiency in asynchronous programming with Python's asyncio. • Extensive experience with Pydantic for data modeling and validation. • Demonstrable experience with message broker technologies (NATS preferred; RabbitMQ, Kafka, or similar are also valuable). • Practical experience with caching mechanisms (Redis preferred). • Experience with SQL and NoSQL databases. Familiarity with Apache Parquet format is a significant plus. • Strong understanding of microservices architecture, distributed systems, and inter-service communication patterns. 
• Proficiency with software development best practices, including version control (Git), automated testing (unit, integration), and CI/CD concepts. • Excellent problem-solving, analytical, and debugging skills. • Strong communication and collaboration abilities. • Ability to work independently and as part of a team in a fast-paced environment. Desirable Skills (Nice-to-Haves): • Experience in developing large-scale system integration platforms or frameworks for managing distributed components. • Knowledge of common data exchange protocols and formats relevant to system integration. • Experience with data processing and analysis libraries like Pandas, Polars. • Basic knowledge of containerization (Docker) and orchestration (Kubernetes). • Experience with system monitoring, logging, and alerting tools (e.g., Prometheus, Grafana, ELK stack, Structlog). • Understanding of API security best practices (OAuth2, JWT, etc.). • Experience with dependency injection frameworks/patterns. Show more Show less
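As context for candidates reading the stack above, here is a minimal, hypothetical sketch of the kind of service this role describes: an asynchronous FastAPI endpoint that validates an ingestion payload with a Pydantic model and caches the latest value in Redis. The endpoint name, the IngestEvent model and the Redis key layout are illustrative assumptions, not details of the actual platform.

```python
# Hypothetical FastAPI ingestion endpoint with Pydantic validation and a
# Redis cache; names and key layout are illustrative assumptions.
from datetime import datetime

import redis.asyncio as redis
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
cache = redis.Redis(host="localhost", port=6379, decode_responses=True)


class IngestEvent(BaseModel):
    source_id: str      # external system that produced the reading
    metric: str         # e.g. "temperature"
    value: float
    recorded_at: datetime


@app.post("/ingest")
async def ingest(event: IngestEvent):
    # Pydantic has already validated and parsed the request body here.
    key = f"latest:{event.source_id}:{event.metric}"
    # Cache the most recent value for five minutes so dashboards can poll cheaply.
    await cache.set(key, event.value, ex=300)
    return {"status": "accepted", "key": key}
```

Run locally with an ASGI server such as uvicorn (for example, `uvicorn app:app --reload`, assuming the file is named app.py); a production setup would add authentication, error handling and the message-broker integration described in the posting.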

Posted 6 days ago

Apply

8.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

Role: Python + microservices Experience range: 8-10 years Location: Current location must be Bangalore NOTE: Candidate interested for Walk-in drive in Bangalore must apply Job description: Preferred Qualifications: Experience with cloud platforms is a plus. Familiarity with Python frameworks (Flask, FastAPI, Django). Understanding of DevOps practices and tools (Terraform, Jenkins). Knowledge of monitoring and logging tools (Prometheus, Grafana, Stackdriver). Requirements: Proven experience as a Python developer, specifically in developing microservices. Strong understanding of containerization and orchestration (Docker, Kubernetes). Experience with Google Cloud Platform, specifically Cloud Run, Cloud Functions, and other related services. Familiarity with RESTful APIs and microservices architecture. Knowledge of database technologies (SQL and NoSQL) and data modelling. Proficiency in version control systems (Git). Experience with CI/CD tools and practices. Strong problem-solving skills and the ability to work independently and collaboratively. Excellent communication skills, both verbal and written. Show more Show less

Posted 6 days ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

This role is for one of our clients Industry: Technology, Information and Media Seniority level: Associate level Min Experience: 2 years Location: Bangalore JobType: full-time About The Role We're building the technical backbone of an AI-first product, and we're looking for a Backend Software Engineer to help architect and scale it. This is a high-impact role for an engineer who enjoys solving infrastructure-level challenges, building robust systems from scratch, and shipping production-grade code in a fast-paced startup environment. You’ll collaborate closely with frontend, product, and AI teams to power real-time, large-scale applications that are redefining how users communicate with AI. What You’ll Be Doing Design and implement backend systems using GoLang and Node.js, ensuring modularity, scalability, and high availability. Develop and optimize RESTful APIs, integrating securely with identity protocols like OAuth, Auth0, and SAML. Build cloud-native microservices, implement caching strategies (e.g., Redis), and design infrastructure that supports geo-distributed users. Create and maintain CI/CD pipelines to support rapid feature releases and safe deployment cycles. Manage system performance, identify bottlenecks, and optimize code and infrastructure for scale. Contribute to architectural decisions and drive backend standards across the team. Own feature delivery end-to-end—from requirement scoping to production monitoring. What We’re Looking For 2–4 years of experience in backend development, preferably in fast-paced startups or product-first teams. Hands-on expertise in GoLang and Node.js , and building microservices-based architectures. Familiarity with MongoDB , Redis , and cloud platforms (AWS/GCP/Azure). Strong grasp of system design, distributed systems, data modeling, and API architecture. Experience with modern dev workflows—CI/CD, Git, Docker, logging & monitoring tools. A bias for action—you ship code, solve problems, and continuously improve. Bachelor’s degree in Computer Science or a related field. Candidates from Tier-1 institutes (IITs, NITs, BITS, IIITs) preferred. Bonus: Contributions to open-source projects or side projects that demonstrate backend expertise. Why Join Us? Opportunity to work on infrastructure powering AI-based video and communication tools . Fast-track your growth into senior technical roles or engineering leadership. Be part of foundational engineering decisions—shape our systems, not just maintain them. Collaborate with a world-class team building multi-agent AI systems and complex integrations. Tech Stack GoLang Node.js MongoDB Redis REST APIs Auth0 OAuth SAML CI/CD Docker Cloud Infrastructure Show more Show less

Posted 6 days ago

Apply

3.0 years

0 Lacs

India

Remote

Linkedin logo

Welcome to Veradigm! Our Mission is to be the most trusted provider of innovative solutions that empower all stakeholders across the healthcare continuum to deliver world-class outcomes. Our Vision is a Connected Community of Health that spans continents and borders. With the largest community of clients in healthcare, Veradigm is able to deliver an integrated platform of clinical, financial, connectivity and information solutions to facilitate enhanced collaboration and exchange of critical patient information. 🚀 We’re Hiring! Associate Support Consultant Join us in delivering exceptional client support and driving impactful healthcare solutions. What You’ll Do As an Associate Support Consultant , you’ll be the first line of support for our clients — solving issues, answering questions, and ensuring smooth product usage via phone, email, or chat. Your role will involve: ✅ Resolving client issues related to product functionality and system setup ✅ Logging, tracking, and documenting support cases ✅ Collaborating with product teams to escalate bugs or feedback ✅ Reproducing client issues and assisting in root cause identification ✅ Learning continuously through mentorship and knowledge base use Who You Are 🎓 Bachelors degree in any stream – Technical/Science/Humanities 🗣 Strong verbal and written communication skills (C1 English proficiency) 💡Curious, tech-friendly, and eager to learn troubleshooting basics 💻Comfortable working in a hybrid or remote support environment Bonus If You Have 🔹 2–3 years of experience in a customer or product support role 🔹 Basic familiarity with cloud, app support, or healthcare software Working Style 🕘 Standard weekday hours, with occasional after-hours or holiday support when needed 🌍 Enjoy a flexible work setup — from home and a professional office space 🌴 We appreciate your time off! Just a heads-up: during key business times, PTO might be limited to keep things running smoothly Ready to support world-class products and build your career in tech support? Apply Now! Let’s grow together. We are an Equal Opportunity Employer. No job applicant or employee shall receive less favorable treatment or be disadvantaged because of their gender, marital or family status, color, race, ethnic origin, religion, disability or age; nor be subject to less favorable treatment or be disadvantaged on any other basis prohibited by applicable law. Veradigm is proud to be an equal opportunity workplace dedicated to pursuing and hiring a diverse and inclusive workforce. Thank you for reviewing this opportunity! Does this look like a great match for your skill set? If so, please scroll down and tell us more about yourself! Show more Show less

Posted 6 days ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Linkedin logo

Job Description PayPay is looking for an experienced Cloud-Based AI and ML Engineer. This role involves leveraging cloud-based AI/ML Services to build infrastructure as well as developing, deploying, and maintaining ML models, collaborating with cross-functional teams, and ensuring scalable and efficient AI solutions particularly on Amazon Web Services (AWS). Main Responsibilities 1. Cloud Infrastructure Management : - Architect and maintain cloud infrastructure for AI/ML projects using AWS tools. - Implement best practices for security, cost management, and high-availability. - Monitor and manage cloud resources to ensure seamless operation of ML services. 2. Model Development and Deployment : - Design, develop, and deploy machine learning models using AWS services such as SageMaker. - Collaborate with data scientists and data engineers to create scalable ML workflows. - Optimize models for performance and scalability on AWS infrastructure. - Implement CI/CD pipelines to streamline and accelerate the model development and deployment process. - Set up a cloud-based development environment for data engineers and data scientists to facilitate model development and exploratory data analysis - Implement monitoring, logging, and observability to streamline operations and ensure efficient management of models deployed in production. 3. Data Management : - Work with structured and unstructured data to train robust ML models. - Use AWS data storage and processing services like S3, RDS, Redshift, or DynamoDB. - Ensure data integrity and compliance with set Security regulations and standards. 4. Collaboration and Communication : - Collaborate with cross-functional teams including DevOps, Data Engineering, and Product Management teams. - Communicate technical concepts effectively to non-technical stakeholders. 5. Continuous Improvement and Innovation : - Stay updated with the latest advancements in AI/ML technologies and AWS services. - Provide through Automations means for developers to easily develop and deploy their AI/ML models on AWS. Tech Stack - AWS: - VPC, EC2, ECS, EKS, Lambda, MWAA, RDS, ElastiCache, DynamoDB, Opensearch, S3, CloudWatch, Cognito, SQS, KMS, Secret Manager, KMS, MSK,Amazon Kinesis, CodeCommit, CodeBuild, CodeDeploy, CodePipeline, AWS Lake Formation, AWS Glue, SageMaker and other AI Services. - Terraform, Github Actions, Prometheus, Grafana, Atlantis - OSS (Administration experience on these tools) - Jupyter, MLFlow, Argo Workflows, Airflow Required Skills and Experiences - More than 5+ years of technical experience in cloud-based infrastructure with a focus on AI and ML platforms - Extensive technical hands-on experience with computing, storage, and analytical services on AWS. - Demonstrated skill in programming and scripting languages, including Python, Shell Scripting, Go, and Rust. - Experience with infrastructure as code (IAC) tools in AWS, such as Terraform, CloudFormation, and CDK. - Proficient in Linux internals and system administration. - Experience in production level infrastructure change management and releases for business-critical systems. - Experience in Cloud infrastructure and platform systems availability, performance and cost management. - Strong understanding of cloud security best practices and payment industry compliance standards. - Experience with cloud services monitoring, detection, and response, as well as performance tuning and cost control. - Familiarity with cloud infrastructure service patching and upgrades. 
- Excellent oral, written, and interpersonal communication skills. Preferred Qualifications - Bachelor’s degree and above in a technology related field - Experience with other cloud service providers (e.g GCP, Azure) - Experience with Kubernetes - Experience with Event-Driven Architecture (Kafka preferred) - Experience using and contributing to Open Source tools - Experience in managing IT compliance and security risk - Published papers / blogs / articles - Relevant and verifiable certifications Remarks *Please note that you cannot apply for PayPay (Japan-based jobs) or other positions in parallel or in duplicate. PayPay 5 senses Please refer PayPay 5 senses to learn what we value at work. Working Conditions Employment Status Full Time Office Location Gurugram (Wework) ※The development center requires you to work in the Gurugram office to establish the strong core team. Show more Show less

Posted 6 days ago

Apply

15.0 years

0 Lacs

India

Remote

Linkedin logo

Job Title: Data Engineer Lead - AEP Location: Remote Experience Required: 12–15 years overall experience 8+ years in Data Engineering 5+ years leading Data Engineering teams Cloud migration & consulting experience (GCP preferred) Job Summary: We are seeking a highly experienced and strategic Lead Data Engineer with a strong background in leading data engineering teams, modernizing data platforms, and migrating ETL pipelines and data warehouses to Google Cloud Platform (GCP) . You will work directly with enterprise clients, architecting scalable data solutions, and ensuring successful delivery in high-impact environments. Key Responsibilities: Lead end-to-end data engineering projects including cloud migration of legacy ETL pipelines and Data Warehouses to GCP (BigQuery) . Design and implement modern ELT/ETL architectures using Dataform , Dataplex , and other GCP-native services. Provide strategic consulting to clients on data platform modernization, governance, and data quality frameworks. Collaborate with cross-functional teams including data scientists, analysts, and business stakeholders. Define and enforce data engineering best practices , coding standards, and CI/CD processes. Mentor and manage a team of data engineers; foster a high-performance, collaborative team culture. Monitor project progress, ensure delivery timelines, and manage client expectations. Engage in technical pre-sales and solutioning , driving excellence in consulting delivery. Technical Skills & Tools: Cloud Platforms: Strong experience with Google Cloud Platform (GCP) – particularly BigQuery , Dataform , Dataplex , Cloud Composer , Cloud Storage , Pub/Sub . ETL/ELT Tools: Apache Airflow, Dataform, dbt (if applicable). Languages: Python, SQL, Shell scripting. Data Warehousing: BigQuery, Snowflake (optional), traditional DWs (e.g., Teradata, Oracle). DevOps: Git, CI/CD pipelines, Docker. Data Modeling: Dimensional modeling, Data Vault, star/snowflake schemas. Data Governance & Lineage: Dataplex, Collibra, or equivalent tools. Monitoring & Logging: Stackdriver, DataDog, or similar. Preferred Qualifications: Proven consulting experience with premium clients or Tier 1 consulting firms. Hands-on experience leading large-scale cloud migration projects . GCP Certification(s) (e.g., Professional Data Engineer, Cloud Architect). Strong client communication, stakeholder management, and leadership skills. Experience with agile methodologies and project management tools like JIRA. Show more Show less

Posted 6 days ago

Apply

0 years

0 Lacs

India

On-site

Linkedin logo

Scope of Work: Visual Basic Script to Python

1. Planning
1.1 Objective: Recreate the functionality of the original script using Python to perform normality checks, automate email reporting, and log processing.
1.2 Requirement Analysis
1.2.1 Input/Output Requirements. Input: CSV files, system configurations, or other data sources. Output: generated reports, emails, and logs.
1.2.2 System Requirements: Python 3.x. Dependencies: pandas, openpyxl, smtplib, os, etc.
1.3 Milestones: script conversion to Python; functional testing of email integration and report generation; deployment and maintenance.

2. Design
2.1 System Architecture. Core functionality divided into modules:
Data Processing Module: parses input files and validates data.
Report Generation Module: generates normality check reports.
Email Automation Module: sends reports via email.
Error Logging Module: logs errors for troubleshooting.
2.2 User Interface: console-based interaction for configuration, with optional integration into a web-based dashboard.

3. Technology Stack
3.1 Python Libraries. Data handling: pandas, os, csv. Email: smtplib, email. File processing: openpyxl, xlsxwriter.

4. Development
4.1 Setup and Configuration: initialize the Python project with an appropriate file structure; create a configuration file (config.json or .env) for constants such as email credentials and paths.
4.2 Script Development
4.2.1 Data Handling: read and validate input files (e.g. CSV/Excel); handle missing or malformed data.
4.2.2 Report Generation: use pandas to create data summaries or normality checks; save output as .xlsx or .pdf.
4.2.3 Email Automation: integrate smtplib to send emails with attachments; ensure secure authentication (e.g. TLS/SSL).
4.2.4 Error Handling and Logging: implement structured logging using the logging module.

5. Testing
5.1 Unit Testing: test individual functions for data processing, report generation, and email sending.
5.2 Integration Testing: verify end-to-end functionality, ensuring compatibility between modules.
5.3 Environment Compatibility Testing: validate functionality across different OS environments (Windows, Linux).
5.4 Performance Testing: ensure the script handles large datasets efficiently.

6. Deployment
6.1 Documentation: create a comprehensive user manual with steps for installation and execution; document code using inline comments and docstrings.
6.2 Environment Setup: provide a requirements file (requirements.txt) for dependencies; package the script for easy deployment using PyInstaller or similar tools.

7. Maintenance and Support
7.1 Version Control: use Git for version tracking and collaborative development.
7.2 Updates and Bug Fixes: periodically review the script for improvements and new features.
7.3 Support: provide ongoing support for troubleshooting and issue resolution.
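To make the outline above concrete, here is a minimal, hypothetical Python sketch of how the four modules could fit together: data handling with pandas, a normality check written to Excel, email automation with smtplib, and logging via the standard logging module. File paths, column names, SMTP settings and the use of SciPy's Shapiro-Wilk test are illustrative assumptions rather than part of the actual specification.

```python
# Minimal sketch of the modules described in the scope of work above.
# Paths, column names and SMTP settings are placeholder assumptions.
import logging
import smtplib
import ssl
from email.message import EmailMessage

import pandas as pd
from scipy import stats  # assumption: SciPy used only for the example normality test

logging.basicConfig(
    filename="conversion.log",
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s %(message)s",
)
log = logging.getLogger("vbs_port")


def load_data(path: str) -> pd.DataFrame:
    """Data Processing Module: read and validate the input CSV."""
    df = pd.read_csv(path)
    df = df.dropna()  # crude handling of missing or malformed rows
    log.info("Loaded %d rows from %s", len(df), path)
    return df


def normality_report(df: pd.DataFrame, column: str, out_path: str) -> str:
    """Report Generation Module: Shapiro-Wilk check written to Excel."""
    stat, p_value = stats.shapiro(df[column])
    summary = pd.DataFrame(
        {"statistic": [stat], "p_value": [p_value], "normal_at_5pct": [p_value > 0.05]}
    )
    summary.to_excel(out_path, index=False)  # requires openpyxl
    log.info("Report written to %s (p=%.4f)", out_path, p_value)
    return out_path


def email_report(report_path: str, recipient: str) -> None:
    """Email Automation Module: send the report over SMTP with TLS."""
    msg = EmailMessage()
    msg["Subject"] = "Normality check report"
    msg["From"] = "reports@example.com"  # placeholder sender
    msg["To"] = recipient
    msg.set_content("Please find the latest normality report attached.")
    with open(report_path, "rb") as fh:
        msg.add_attachment(fh.read(), maintype="application",
                           subtype="octet-stream", filename=report_path)
    with smtplib.SMTP("smtp.example.com", 587) as server:  # placeholder host
        server.starttls(context=ssl.create_default_context())
        server.login("reports@example.com", "app-password")  # use real secret management
        server.send_message(msg)
    log.info("Report emailed to %s", recipient)


if __name__ == "__main__":
    try:
        data = load_data("measurements.csv")
        report = normality_report(data, column="value", out_path="report.xlsx")
        email_report(report, "qa-team@example.com")
    except Exception:
        log.exception("Conversion run failed")
        raise
```

In a real port, the hard-coded values would move into the config.json or .env file called for in section 4.1, and the logging setup would be extended with rotation and a structured format as needed.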

Posted 6 days ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

Join us as a " Full stack developer " at Barclays, responsible for supporting the successful delivery of Location Strategy projects to plan, budget, agreed quality and governance standards. You'll spearhead the evolution of our digital landscape, driving innovation and excellence. You will harness cutting-edge technology to revolutionise our digital offerings, ensuring unparalleled customer experiences. To be successful as a " Full stack developer " you should have experience with: Core Programming Skills Java (v8–17): Strong understanding of OOP, functional programming, and concurrency Spring Framework (Core, Boot, MVC, AOP, Security) Spring Boot: Microservices architecture, auto-configuration, starters Spring Data JPA & Hibernate RESTful Web Services (Design, development, documentation using RAML/OpenAPI) Exception handling, validation, logging (SLF4J, Logback, Log4j) Microservices & Cloud-Native Development Microservices design patterns (Circuit Breaker, API Gateway, Service Discovery, etc.) Service orchestration & inter-service communication (REST, gRPC, Kafka) API Gateway (e.g., Zuul, Spring Cloud Gateway) Configuration management (Spring Cloud Config, Consul) Observability: Actuator, Micrometer, Prometheus, Grafana DevOps & CI/CD Exposure to OpenShift Container Platform Deployment, scaling, and management of Spring Boot apps on OpenShift. Understanding of OpenShift templates, Routes, ConfigMaps, Secrets Integration with Jenkins/GitLab for CI/CD pipelines Experience with oc CLI and OpenShift Web Console Docker: Creating & managing Docker images for Spring Boot apps Kubernetes (basic to intermediate knowledge, esp. with OpenShift) Helm (basic understanding) CI/CD tools: Jenkins, GitHub Actions, GitLab CI Database & Persistence: (Good to have) SQL (Oracle, PostgreSQL, MySQL) NoSQL (MongoDB, Redis) Query optimization, indexing, performance tuning Liquibase / Flyway for DB versioning Testing & Quality Assurance: (Good to have) Unit Testing: JUnit, Mockito Integration Testing: TestContainers, Spring Test Contract Testing: Pact Performance Testing: JMeter (basic) Tools & IDEs IntelliJ IDEA / Eclipse Postman / Swagger UI Git, GitHub / GitLab / Bitbucket Maven / Gradle Soft Skills & Experience Agile / Scrum methodologies Code review & mentoring junior developers Client interaction & requirement gathering Troubleshooting in production (logs, metrics, APM tools) Clear documentation and reporting Bonus Skills Kafka or RabbitMQ for messaging ELK Stack / Splunk for centralized logging Experience with API Management tools (Apigee, Kong) Knowledge of Security practices (JWT, OAuth2, SSO) Hands-on with monitoring tools (New Relic, AppDynamics). You may be assessed on the key critical skills relevant for success in role, such as risk and controls, change and transformation, business acumen strategic thinking and digital and technology, as well as job-specific technical skills. This role is based in Pune. Purpose of the role To design, develop and improve software, utilising various engineering methodologies, that provides business, platform, and technology capabilities for our customers and colleagues. Accountabilities Development and delivery of high-quality software solutions by using industry aligned programming languages, frameworks, and tools. Ensuring that code is scalable, maintainable, and optimized for performance. 
Cross-functional collaboration with product managers, designers, and other engineers to define software requirements, devise solution strategies, and ensure seamless integration and alignment with business objectives. Collaboration with peers, participate in code reviews, and promote a culture of code quality and knowledge sharing. Stay informed of industry technology trends and innovations and actively contribute to the organization’s technology communities to foster a culture of technical excellence and growth. Adherence to secure coding practices to mitigate vulnerabilities, protect sensitive data, and ensure secure software solutions. Implementation of effective unit testing practices to ensure proper code design, readability, and reliability. Analyst Expectations To perform prescribed activities in a timely manner and to a high standard consistently driving continuous improvement. Requires in-depth technical knowledge and experience in their assigned area of expertise Thorough understanding of the underlying principles and concepts within the area of expertise They lead and supervise a team, guiding and supporting professional development, allocating work requirements and coordinating team resources. If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard. The four LEAD behaviours are: L – Listen and be authentic, E – Energise and inspire, A – Align across the enterprise, D – Develop others. OR for an individual contributor, they develop technical expertise in work area, acting as an advisor where appropriate. Will have an impact on the work of related teams within the area. Partner with other functions and business areas. Takes responsibility for end results of a team’s operational processing and activities. Escalate breaches of policies / procedure appropriately. Take responsibility for embedding new policies/ procedures adopted due to risk mitigation. Advise and influence decision making within own area of expertise. Take ownership for managing risk and strengthening controls in relation to the work you own or contribute to. Deliver your work and areas of responsibility in line with relevant rules, regulation and codes of conduct. Maintain and continually build an understanding of how own sub-function integrates with function, alongside knowledge of the organisations products, services and processes within the function. Demonstrate understanding of how areas coordinate and contribute to the achievement of the objectives of the organisation sub-function. Make evaluative judgements based on the analysis of factual information, paying attention to detail. Resolve problems by identifying and selecting solutions through the application of acquired technical experience and will be guided by precedents. Guide and persuade team members and communicate complex / sensitive information. Act as contact point for stakeholders outside of the immediate function, while building a network of contacts outside team and external to the organisation. All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence and Stewardship – our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset – to Empower, Challenge and Drive – the operating manual for how we behave. Show more Show less

Posted 6 days ago

Apply

7.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

Job Description We are seeking a highly skilled WebLogic Administrator with deep expertise in WLST scripting, DevOps practices, and containerization technologies. The ideal candidate will be responsible for administering and modernizing WebLogic environments across on-premises and Oracle Cloud Infrastructure (OCI) setups. This includes scripting with WLST, containerizing applications with Docker, managing deployments via Kubernetes, and integrating with CI/CD pipelines. Key Responsibilities Administer and optimize Oracle WebLogic Server environments in both on-prem and cloud (OCI) contexts. Perform WebLogic upgrades to the latest supported versions (e.g., 14.x). Automate WebLogic domain creation, configuration, and deployments using WLST (WebLogic Scripting Tool). Containerize WebLogic applications using Docker and orchestrate them via Kubernetes. Manage WebLogic domains using the WebLogic Kubernetes Operator, including domain resource configuration and lifecycle events. Design and implement secure, scalable Docker networking for clustered WebLogic environments. Deploy and manage infrastructure on OCI and/or on-prem, including use of Kubernetes (OKE preferred). Build and maintain CI/CD pipelines using Jenkins, GitLab CI or GitHub Actions, or OCI DevOps for seamless deployment and updates. Implement monitoring, logging, and alerting solutions to support operational excellence. Maintain documentation and provide knowledge transfer to teams as needed. Required Skills Mandatory: 7+ years of hands-on experience with Oracle WebLogic Server administration. Mandatory: Proven expertise in WLST scripting for automating WebLogic tasks (domain creation, deployments, configurations). Mandatory: Experience with WebLogic version upgrades (e.g., 11g/12c to 14.x). Proficiency with Docker, container networking, and Kubernetes orchestration. Hands-on experience managing WebLogic domains via WebLogic Kubernetes Operator. Strong knowledge of DevOps tools and practices, including CI/CD, automation, and configuration management. Scripting skills (WLST, Shell, Python) and experience with Infrastructure-as-Code tools (Terraform, Ansible). Familiarity with both on-prem infrastructure and OCI platforms. Preferred Qualifications Experience deploying and managing workloads on Oracle Cloud Infrastructure (OCI), especially using OKE. OCI certifications (e.g., Architect Associate, DevOps Professional) and Weblogic Administration Certification Experience with Helm, ingress controllers, and Kubernetes networking. Familiarity with observability tools like Prometheus, Grafana, OCI Monitoring, or ELK. Understanding of security practices for WebLogic, containers, and hybrid environments. Show more Show less
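For readers new to WLST, the snippet below is a minimal, hypothetical example of the kind of automation this role calls for: connecting to an Admin Server and changing a managed server's listen port through the edit tree. The host, credentials and server name are placeholders, and the script runs under WebLogic's wlst.sh (Jython) rather than a plain Python interpreter.

```python
# Minimal WLST sketch (run via wlst.sh). Host, credentials and server
# names are placeholders, not values taken from this posting.
connect('weblogic', 'welcome1', 't3://admin-host:7001')  # attach to the Admin Server

edit()           # switch to the edit tree
startEdit()      # open an edit session

# Navigate to a managed server's MBean and change its listen port.
cd('/Servers/managed_server_1')
cmo.setListenPort(7011)

save()                   # save the pending changes
activate(block='true')   # activate them and wait for completion

disconnect()
```

The same pattern (connect, edit, change MBean attributes, activate) extends to the domain creation and deployment automation mentioned above, typically parameterized through properties files so the scripts can run unattended in a CI/CD pipeline.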

Posted 6 days ago

Apply

Exploring Logging Jobs in India

The logging job market in India is vibrant and offers a wide range of opportunities for job seekers interested in this field. Logging professionals are in demand across various industries such as IT, construction, forestry, and environmental management. If you are considering a career in logging, this article will provide you with valuable insights into the job market, salary range, career progression, related skills, and common interview questions.

Top Hiring Locations in India

  1. Bangalore
  2. Mumbai
  3. Delhi
  4. Hyderabad
  5. Chennai

These cities are known for their thriving industries where logging professionals are actively recruited.

Average Salary Range

The average salary range for logging professionals in India varies based on experience and expertise. Entry-level positions typically start at INR 3-5 lakhs per annum, while experienced professionals can earn upwards of INR 10-15 lakhs per annum.

Career Path

A typical career path in logging may include roles such as Logging Engineer, Logging Supervisor, Logging Manager, and Logging Director. Professionals may progress from entry-level positions to more senior roles such as Lead Logging Engineer or Logging Consultant.

Related Skills

In addition to logging expertise, employers often look for professionals with skills such as data analysis, problem-solving, project management, and communication skills. Knowledge of industry-specific software and tools may also be beneficial.

Interview Questions

  • What is logging and why is it important in software development? (basic)
  • Can you explain the difference between logging levels such as INFO, DEBUG, and ERROR? (medium) (illustrated in the Python sketch after this list)
  • How do you handle log rotation in a large-scale application? (advanced)
  • Have you worked with any logging frameworks like Log4j or Logback? (basic)
  • Describe a challenging logging issue you faced in a previous project and how you resolved it. (medium)
  • How do you ensure that log files are secure and comply with data protection regulations? (advanced)
  • What are the benefits of structured logging over traditional logging methods? (medium)
  • How would you optimize logging performance in a high-traffic application? (advanced)
  • Can you explain the concept of log correlation and how it is useful in troubleshooting? (medium)
  • Have you used any monitoring tools for real-time log analysis? (basic)
  • How do you handle log aggregation from distributed systems? (advanced)
  • What are the common pitfalls to avoid when implementing logging in a microservices architecture? (medium)
  • How do you troubleshoot a situation where logs are not being generated as expected? (medium)
  • Have you worked with log parsing tools to extract meaningful insights from log data? (medium)
  • How do you handle sensitive information in log files, such as passwords or personal data? (advanced)
  • What is the role of logging in compliance with industry standards such as GDPR or HIPAA? (medium)
  • Can you explain the concept of log enrichment and how it improves log analysis? (medium)
  • How do you handle logging in a multi-threaded application to ensure thread safety? (advanced)
  • Have you implemented any custom log formats or log patterns in your projects? (medium)
  • How do you perform log monitoring and alerting to detect anomalies or errors in real-time? (medium)
  • What are the best practices for logging in cloud-based environments like AWS or Azure? (medium)
  • How do you integrate logging with other monitoring and alerting tools in a DevOps environment? (medium)
  • Can you discuss the role of logging in performance tuning and optimization of applications? (medium)
  • What are the key metrics and KPIs you track through log analysis to improve system performance? (medium)
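To make a few of these questions concrete, here is a small, illustrative Python sketch showing log levels, size-based log rotation, and a simple structured (JSON) formatter. It is a teaching aid with assumed names rather than production code; in practice a library such as structlog or python-json-logger would usually handle the JSON formatting.

```python
# Illustrative example for the questions above: levels, rotation,
# and a simple structured (JSON) log format.
import json
import logging
from logging.handlers import RotatingFileHandler


class JsonFormatter(logging.Formatter):
    """Emit each record as one JSON object per line (structured logging)."""
    def format(self, record):
        payload = {
            "time": self.formatTime(record),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        return json.dumps(payload)


logger = logging.getLogger("orders")
logger.setLevel(logging.DEBUG)   # let each handler decide what to keep

# Rotate at roughly 1 MB, keeping five old files, so the disk does not fill up.
handler = RotatingFileHandler("orders.log", maxBytes=1_000_000, backupCount=5)
handler.setLevel(logging.INFO)   # DEBUG records are filtered out at this handler
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)

logger.debug("cache miss for order 42")   # dropped by the INFO-level handler
logger.info("order 42 accepted")          # written as a JSON line
logger.error("payment gateway timeout")   # written, and easy to alert on
```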

Closing Remark

As you embark on your journey to explore logging jobs in India, remember to prepare thoroughly for interviews by honing your technical skills and understanding industry best practices. With the right preparation and confidence, you can land a rewarding career in logging that aligns with your professional goals. Good luck!
