
4124 Logging Jobs - Page 38

JobPe aggregates job listings for easy access, but applications are submitted directly on the original job portal.

0 years

0 Lacs

Andhra Pradesh, India

On-site

At PwC, our people in software and product innovation focus on developing cutting-edge software solutions and driving product innovation to meet the evolving needs of clients. These individuals combine technical experience with creative thinking to deliver innovative software products and solutions. Those in software engineering at PwC will focus on developing innovative software solutions to drive digital transformation and enhance business performance. In this field, you will use your knowledge to design, code, and test cutting-edge applications that revolutionise industries and deliver exceptional user experiences.

Focused on relationships, you are building meaningful client connections and learning how to manage and inspire others. Navigating increasingly complex situations, you are growing your personal brand, deepening technical expertise and awareness of your strengths. You are expected to anticipate the needs of your teams and clients, and to deliver quality. Embracing increased ambiguity, you are comfortable when the path forward isn't clear, you ask questions, and you use these moments as opportunities to grow.

Skills
Examples of the skills, knowledge, and experiences you need to lead and deliver value at this level include but are not limited to:
- Respond effectively to the diverse perspectives, needs, and feelings of others.
- Use a broad range of tools, methodologies and techniques to generate new ideas and solve problems.
- Use critical thinking to break down complex concepts.
- Understand the broader objectives of your project or role and how your work fits into the overall strategy.
- Develop a deeper understanding of the business context and how it is changing.
- Use reflection to develop self-awareness, enhance strengths and address development areas.
- Interpret data to inform insights and recommendations.
- Uphold and reinforce professional and technical standards (e.g. refer to specific PwC tax and audit guidance), the Firm's code of conduct, and independence requirements.

Angular
- Expert in HTML and CSS; proficiency in TypeScript and JavaScript.
- Expert knowledge of Angular architecture, including advanced component communication and state management (e.g., NgRx).
- Extensive experience with Angular CLI, with the ability to customize and optimize build configurations.
- Advanced understanding of RxJS for complex reactive programming scenarios.
- Experience with performance optimization techniques for large-scale Angular applications.
- Proficiency in lazy loading modules and optimizing change detection strategies.
- Expertise in setting up and maintaining comprehensive test suites using Jasmine, Karma, and Protractor.
- Ability to implement and enforce code quality standards and best practices.
- Strong experience with responsive design and accessibility standards.
- Ability to design and implement custom UI components using Angular Material or other libraries.

Java
- In-depth knowledge of Java concurrency, collections, and design patterns.
- Extensive experience with Spring Framework, especially Spring Boot, Spring Security, and Spring Cloud for developing microservices.
- Ability to implement scalable, secure, and high-performance RESTful APIs.
- Proficiency in advanced testing techniques and frameworks, including BDD/TDD with JUnit, TestNG, and Mockito.

Cloud (AWS/Azure)
- Understanding of cloud computing concepts, benefits, and deployment models.
- Experience with core AWS services such as EC2, S3, and IAM.
- Strong understanding of cloud security best practices, including identity and access management, encryption, and network security.
- Understanding of monitoring, logging, and alerting for cloud applications using AWS CloudWatch, Azure Monitor, or third-party tools (see the sketch below).
- Experience with containerization technologies like Docker and orchestration with Kubernetes.

General
- Experience with Git for version control.
- Ability to lead technical teams, mentor junior developers, and drive technical discussions and decisions.
- Experience in implementing architectural solutions that align with business goals and technical requirements.
- Commitment to continuous learning and staying updated with industry trends, tools, and technologies.
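As a hedged illustration of the CloudWatch monitoring and S3 skills named above, here is a minimal boto3 sketch that publishes a custom metric and archives a log file; the bucket name, namespace, and metric name are hypothetical placeholders, not part of the posting.

```python
# Minimal sketch, assuming boto3 is installed and configured with valid
# AWS credentials. Bucket, namespace, and metric names are hypothetical.
import boto3

cloudwatch = boto3.client("cloudwatch")
s3 = boto3.client("s3")

def report_error_rate(error_count: int, total_requests: int) -> None:
    """Publish a custom error-rate metric so CloudWatch alarms can fire on it."""
    rate = (error_count / total_requests) * 100 if total_requests else 0.0
    cloudwatch.put_metric_data(
        Namespace="MyApp/Health",  # hypothetical namespace
        MetricData=[{
            "MetricName": "ErrorRatePercent",
            "Value": rate,
            "Unit": "Percent",
        }],
    )

def archive_log(local_path: str, key: str) -> None:
    """Copy an application log to S3 for long-term retention."""
    with open(local_path, "rb") as fh:
        s3.put_object(Bucket="my-app-logs", Key=key, Body=fh)  # hypothetical bucket
```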

Posted 6 days ago

Apply

0 years

0 Lacs

Andhra Pradesh, India

On-site

At PwC, our people in managed services focus on a variety of outsourced solutions and support clients across numerous functions. These individuals help organisations streamline their operations, reduce costs, and improve efficiency by managing key processes and functions on their behalf. They are skilled in project management, technology, and process optimization to deliver high-quality services to clients. Those in managed service management and strategy at PwC will focus on transitioning and running services, along with managing delivery teams, programmes, commercials, performance and delivery risk. Your work will involve continuous improvement and optimisation of the managed services process, tools and services.

Focused on relationships, you are building meaningful client connections and learning how to manage and inspire others. Navigating increasingly complex situations, you are growing your personal brand, deepening technical expertise and awareness of your strengths. You are expected to anticipate the needs of your teams and clients, and to deliver quality. Embracing increased ambiguity, you are comfortable when the path forward isn't clear, you ask questions, and you use these moments as opportunities to grow.

Skills
Examples of the skills, knowledge, and experiences you need to lead and deliver value at this level include but are not limited to:
- Respond effectively to the diverse perspectives, needs, and feelings of others.
- Use a broad range of tools, methodologies and techniques to generate new ideas and solve problems.
- Use critical thinking to break down complex concepts.
- Understand the broader objectives of your project or role and how your work fits into the overall strategy.
- Develop a deeper understanding of the business context and how it is changing.
- Use reflection to develop self-awareness, enhance strengths and address development areas.
- Interpret data to inform insights and recommendations.
- Uphold and reinforce professional and technical standards (e.g. refer to specific PwC tax and audit guidance), the Firm's code of conduct, and independence requirements.

Role Overview
The Java Support Analyst is responsible for maintaining, troubleshooting, and optimizing enterprise Java applications. This role involves incident resolution, performance tuning, API troubleshooting, database optimization, and CI/CD deployment support. The analyst will work in an Agile, DevOps-driven environment and support legacy modernization, application enhancements, stabilization, and performance improvements for mission-critical applications in the freight, rail, and logistics industries.

Required Technical Skills
🔹 Java, Spring Boot, Hibernate, JPA, REST APIs, microservices
🔹 Database performance tuning (Oracle, MySQL, PostgreSQL, SQL Server, MongoDB)
🔹 CI/CD pipelines (Jenkins, GitHub Actions, GitLab CI/CD, Azure DevOps)
🔹 Cloud platforms (AWS, Azure, GCP) and containerized deployments (Docker, Kubernetes)
🔹 Monitoring tools (Splunk, ELK, Dynatrace, AppDynamics, New Relic)
🔹 Security frameworks (OAuth, JWT, SAML, SSL/TLS, LDAP, Active Directory)

Key Responsibilities

1️⃣ Incident & Problem Management
✅ Provide Level 2/3 support for Java applications, resolving production issues, API failures, and backend errors.
✅ Diagnose and troubleshoot Java-based application crashes, memory leaks, and performance bottlenecks.
✅ Analyze logs using Splunk, ELK Stack, Dynatrace, AppDynamics, or New Relic (see the sketch below).
✅ Work with ITIL-based Incident, Problem, and Change Management processes.
✅ Perform root cause analysis (RCA) for recurring production issues and implement permanent fixes.

2️⃣ Java Application Debugging & Optimization
✅ Debug and analyze Java applications built on Spring Boot, Hibernate, and microservices.
✅ Fix issues related to RESTful APIs, SOAP web services, JSON/XML parsing, and data serialization.
✅ Optimize garbage collection (GC), CPU, and memory utilization for Java applications.
✅ Work with Java profiling tools (JVisualVM, YourKit, JProfiler) to identify slow processes.
✅ Assist developers in resolving code-level defects and SQL performance issues.

3️⃣ API & Integration Support
✅ Troubleshoot REST APIs, SOAP services, and microservices connectivity issues.
✅ Monitor and debug API gateway traffic (Kong, Apigee, AWS API Gateway, or Azure API Management).
✅ Handle authentication and security for APIs using OAuth 2.0, JWT, SAML, and LDAP.
✅ Work on third-party system integrations with SAP, Salesforce, ServiceNow, or Workday.

4️⃣ Database Support & SQL Performance Tuning
✅ Analyze and optimize SQL queries, stored procedures, and indexing strategies.
✅ Troubleshoot deadlocks, connection pooling, and slow DB transactions in Oracle, PostgreSQL, MySQL, or SQL Server.
✅ Work with NoSQL databases like MongoDB, Cassandra, or DynamoDB for cloud-based applications.
✅ Manage ORM (Hibernate, JPA) configurations for efficient database transactions.

5️⃣ CI/CD & Deployment Support
✅ Support CI/CD pipelines using Jenkins, GitHub Actions, GitLab CI/CD, or Azure DevOps.
✅ Work on Docker- and Kubernetes-based deployments for Java applications.
✅ Assist in automated testing and validation before production releases.
✅ Troubleshoot deployment failures, rollback strategies, and hotfix releases.

6️⃣ Cloud & DevOps Support
✅ Monitor Java applications deployed on AWS, Azure, or GCP using CloudWatch, Azure Monitor, or Stackdriver.
✅ Support containerized deployments using Kubernetes, OpenShift, or ECS.
✅ Manage logging, monitoring, and alerting for cloud-native Java applications.
✅ Assist in configuring Infrastructure as Code (Terraform, Ansible, or CloudFormation) for DevOps automation.

7️⃣ Security & Compliance Management
✅ Ensure Java applications comply with security standards (GDPR, HIPAA, SOC 2, ISO 27001).
✅ Monitor and mitigate security vulnerabilities using SonarQube, Veracode, or Fortify.
✅ Implement SSL/TLS security measures and API rate limiting to prevent abuse.

8️⃣ Collaboration & Documentation
✅ Work in Agile (Scrum/Kanban) environments for application support and bug fixes.
✅ Maintain technical documentation, troubleshooting guides, and runbooks.
✅ Conduct knowledge transfer sessions for junior support engineers.
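To illustrate the log-analysis and RCA responsibilities above, here is a minimal, hedged Python sketch that tallies recurring error signatures from an application log so repeat offenders stand out; the log path and line format are assumptions for illustration, not details from the posting.

```python
# Minimal triage sketch using only the standard library; the log path and
# "ERROR logger - message" line format are assumptions, not from the posting.
import re
from collections import Counter

ERROR_LINE = re.compile(r"ERROR\s+(?P<logger>\S+)\s+-\s+(?P<message>.+)")

def top_error_signatures(log_path: str, limit: int = 10) -> list[tuple[str, int]]:
    """Count ERROR lines by logger and normalized message to spot repeat offenders."""
    counts: Counter = Counter()
    with open(log_path, encoding="utf-8") as fh:
        for line in fh:
            match = ERROR_LINE.search(line)
            if match:
                # Mask digits so "timeout after 31s" and "timeout after 32s" group together.
                normalized = re.sub(r"\d+", "N", match["message"])
                counts[f"{match['logger']}: {normalized}"] += 1
    return counts.most_common(limit)

if __name__ == "__main__":
    for signature, count in top_error_signatures("app.log"):  # hypothetical path
        print(f"{count:6d}  {signature}")
```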

Posted 6 days ago

Apply

0 years

0 Lacs

Andhra Pradesh, India

On-site

At PwC, our people in software and product innovation focus on developing cutting-edge software solutions and driving product innovation to meet the evolving needs of clients. These individuals combine technical experience with creative thinking to deliver innovative software products and solutions. Those in software engineering at PwC will focus on developing innovative software solutions to drive digital transformation and enhance business performance. In this field, you will use your knowledge to design, code, and test cutting-edge applications that revolutionise industries and deliver exceptional user experiences.

React
- Strong experience with JavaScript, HTML, and CSS.
- Expertise in React design patterns (HOCs, render props, compound components).
- Strong understanding of React performance optimization techniques.
- In-depth experience with state management tools (Redux Saga, Zustand, or similar).
- Knowledge of advanced React concepts like server-side rendering (Next.js) or static site generation.
- Familiarity with TypeScript in React projects.
- Proficient in writing maintainable CSS (CSS-in-JS, SCSS, styled-components).

Java
- In-depth knowledge of Java concurrency, collections, and design patterns.
- Extensive experience with Spring Framework, especially Spring Boot, Spring Security, and Spring Cloud for developing microservices.
- Ability to implement scalable, secure, and high-performance RESTful APIs.
- Proficiency in advanced testing techniques and frameworks, including BDD/TDD with JUnit, TestNG, and Mockito.

Others
- Knowledge of Agile development processes and team collaboration tools (JIRA, Confluence).
- Exposure to cloud-native architectures and serverless computing.
- Code versioning: version control systems (Git).
- Familiarity with unit testing frameworks like Jest, Mocha, and Enzyme.
- Hands-on experience with monitoring and logging tools (see the sketch below).
- Commitment to continuous learning and staying updated with industry trends, tools, and technologies.
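For the monitoring and logging item above, here is a minimal, hedged sketch of structured logging in Python (the single language used for sketches on this page); the logger name is a hypothetical example, not part of the posting.

```python
# Minimal structured-logging sketch using only the Python standard library;
# the service name below is a hypothetical example.
import json
import logging

class JsonFormatter(logging.Formatter):
    """Emit each record as one JSON object per line, easy for log shippers to parse."""
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "time": self.formatTime(record),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logging.basicConfig(level=logging.INFO, handlers=[handler])

log = logging.getLogger("checkout-service")  # hypothetical service name
log.info("order placed")  # printed as a single JSON line
```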

Posted 6 days ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

CACI Network Services is a rapidly expanding specialist IT and networks consultancy offering a wide variety of opportunities to work within challenging and exciting environments with our major clients in global media, banking, government, telecoms and utilities.

Project Overview:
Our customer is on a transformative journey to modernize their network infrastructure. This project aims to enhance security, scalability, and agility while ensuring compliance with regulatory requirements. The Compliance Service Team will play a pivotal role in this initiative, focusing on device onboarding, certification, configuration management, and compliance reporting. We're looking for talented developers to join our team and contribute to its success, driving innovation and excellence in our network infrastructure.

Essential Technical Skills:
- Front-end: Angular, TypeScript
- Back-end: Java, Python, Spring Boot
- Database: MongoDB, PL/SQL, NoSQL
- API Development: RESTful APIs (see the sketch below)
- Version Control: Git
- CI/CD: TeamCity

Desirable Skills:
- Docker and containerization
- Monitoring and logging (e.g., Prometheus, Grafana, ELK Stack)
- Cloud platforms (AWS, Azure, Google Cloud)
- Security and compliance
- API documentation (e.g., Swagger, OpenAPI)
- Code quality tools (e.g., SonarQube)
- Agile methodologies (Scrum or Kanban)

Soft Skills:
- Team Collaboration: Ability to work effectively with cross-functional teams, sharing knowledge and expertise.
- Proactive Approach: Anticipate challenges, identify opportunities, and take initiative to drive progress.
- Ownership and Accountability: Take ownership of tasks and projects, driving them to completion without needing constant guidance.
- End-to-End Understanding: Possess a holistic view of the project, understanding how individual components fit into the larger picture.
- Problem-Solving and Resilience: Drive issues to resolution, navigating complexities without getting bogged down.

Training
CACI Network Services develops individuals through a portfolio of training and development options such as certified training courses, workshops, technical conferences, boot camps, online training and much more. You will have the opportunity to work on some of the most advanced networking hardware in the industry, as well as to develop your abilities and talents to become one of the best in the field.

Rewards and Benefits
In return you will be rewarded with a competitive salary, excellent benefits and the opportunity to develop your career and skills within a growing company.

Equal Opportunities:
CACI is proud to be an equal opportunities employer. Embracing the diversity of our people, we are on a journey to build a truly inclusive work environment where no one is treated less favourably due to ethnic origin, age, gender, veteran status, religion or belief, sexual orientation, marital status, and disability or health condition, actively working to prevent discrimination.
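Since the stack above lists Python and RESTful API development, here is a hedged, minimal Flask sketch in the spirit of the compliance team's device-onboarding work; the route, fields, and in-memory store are hypothetical illustrations, not the project's actual API.

```python
# Minimal RESTful endpoint sketch, assuming Flask 2.0+ is installed
# (pip install flask). The /devices resource and its fields are hypothetical.
from flask import Flask, jsonify, request

app = Flask(__name__)
devices: dict[str, dict] = {}  # stand-in for MongoDB in this sketch

@app.post("/devices")
def onboard_device():
    """Register a network device for compliance tracking."""
    payload = request.get_json(force=True)
    device_id = payload["id"]
    devices[device_id] = {"id": device_id, "status": "onboarded"}
    return jsonify(devices[device_id]), 201

@app.get("/devices/<device_id>")
def get_device(device_id: str):
    device = devices.get(device_id)
    if device is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(device)

if __name__ == "__main__":
    app.run(debug=True)
```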

Posted 6 days ago

Apply

3.0 years

0 Lacs

Greater Nashik Area

On-site

Dreaming big is in our DNA. It's who we are as a company. It's our culture. It's our heritage. And more than ever, it's our future. A future where we're always looking forward. Always serving up new ways to meet life's moments. A future where we keep dreaming bigger. We look for people with passion, talent, and curiosity, and provide them with the teammates, resources and opportunities to unleash their full potential. The power we create together, when we combine your strengths with ours, is unstoppable. Are you ready to join a team that dreams as big as you do?

AB InBev GCC was incorporated in 2014 as a strategic partner for Anheuser-Busch InBev. The center leverages the power of data and analytics to drive growth for critical business functions such as operations, finance, people, and technology. The teams are transforming operations through tech and analytics. Do You Dream Big? We Need You.

Job Description
Job Title: Data Scientist
Location: Bangalore
Reporting to: Senior Manager Analytics

Purpose of the role
Anheuser-Busch InBev (AB InBev)'s Supply Analytics is responsible for building competitive, differentiated solutions that enhance brewery efficiency through data-driven insights. We optimize processes, reduce waste, and improve productivity by leveraging advanced analytics and AI-driven solutions. As a Data Scientist, you will work at the intersection of:
- Conceptualizing the analytical solution for the business problem by implementing statistical models and programming techniques.
- Applying machine learning solutions.
- Best-in-class cloud technology and microservices architecture.
- DevOps best practices, including model serving and data and code versioning.

Key tasks & accountabilities
- Develop and fine-tune Gen AI models to solve business problems, leveraging LLMs and other advanced AI techniques.
- Design, implement, and optimize AI-driven solutions that enhance automation, efficiency, and decision-making.
- Work with cloud-based architectures to deploy and scale AI models efficiently using best-in-class microservices.
- Apply DevOps and MLOps best practices for model serving, data and code versioning, and continuous integration/deployment.
- Collaborate with cross-functional teams (engineering, business, and product teams) to translate business needs into AI-driven solutions.
- Ensure model interpretability, reliability, and performance, continuously improving accuracy and reducing biases.
- Develop internal tools and utilities to enhance the productivity of the team and streamline workflows.
- Maintain best coding practices, including proper documentation, testing, logging, and performance monitoring.
- Stay up to date with the latest advancements in Gen AI, LLMs, and deep learning to incorporate innovative approaches into projects.

Qualifications, Experience, Skills

Level of Educational Attainment Required
- Academic degree in, but not limited to, a Bachelor's or Master's in Computer Application, Computer Science, or any engineering discipline.

Previous Work Experience
- Minimum 3 years of relevant experience.

Technical Skills Required
- Programming Languages: Proficiency in Python.
- Mathematics and Statistics: Strong understanding of linear algebra, calculus, probability, and statistics.
- Machine Learning Algorithms: Knowledge of supervised, unsupervised, and reinforcement learning techniques.
- Natural Language Processing (NLP): Understanding of techniques such as tokenization, POS tagging, named entity recognition, and machine translation.
- LLMs: Experience with LangChain, inference with LLMs, fine-tuning LLMs for specific tasks, and prompt engineering.
- Data Preprocessing: Skills in data cleaning, normalization, augmentation, and handling imbalanced datasets.
- Database Management: Experience with SQL and NoSQL databases like MongoDB and Redis.
- Cloud Platforms: Familiarity with Azure and Google Cloud Platform.
- DevOps: Knowledge of CI/CD pipelines, Docker, Kubernetes.

Other Skills Required
- APIs: Experience with FastAPI or Flask (see the sketch below).
- Software Development: Understanding of the software development lifecycle (SDLC) and Agile methodologies.

And above all of this, an undying love for beer! We dream big to create a future with more cheers.
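Because the posting names FastAPI and serving LLM-backed solutions, here is a hedged, minimal FastAPI sketch of a prediction endpoint; the model call is a stub, and all route and field names are hypothetical, not AB InBev's actual service.

```python
# Minimal model-serving sketch, assuming fastapi, pydantic, and uvicorn are
# installed. The generate() stub stands in for a real LLM call (e.g., via
# LangChain); endpoint and field names are hypothetical.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class PromptRequest(BaseModel):
    prompt: str

class PromptResponse(BaseModel):
    completion: str

def generate(prompt: str) -> str:
    """Stub for an LLM call; swap in a real client in production."""
    return f"(model output for: {prompt})"

@app.post("/generate", response_model=PromptResponse)
def generate_endpoint(req: PromptRequest) -> PromptResponse:
    return PromptResponse(completion=generate(req.prompt))

# Run with: uvicorn main:app --reload  (assuming this file is main.py)
```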

Posted 6 days ago

Apply

7.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Role Overview
We are seeking a backend engineer with strong fundamentals, system-level thinking, and production-grade implementation experience. The ideal candidate is deeply technical and can make architectural and performance-related decisions with clarity. You will be responsible for building scalable, fault-tolerant systems and APIs that power core business functions.

Key Responsibilities
- Design, develop, and maintain backend services using any of Golang, Python, C++, or Java within a scalable microservices architecture.
- Build and document reliable APIs (REST/gRPC) with attention to versioning, observability, and fault tolerance.
- Develop and optimize data pipelines and event-driven architectures using Kafka, SQS, or NATS.
- Design and tune PostgreSQL schemas and queries for performance, reliability, and scalability.
- Implement and optimize concurrency primitives (goroutines/threads, mutexes, context, rate limiters); see the sketch after this description.
- Own your code in production, including debugging, monitoring (Prometheus, Grafana), and incident resolution.
- Participate in architecture discussions and code reviews; uphold high code quality and design standards.

Required Skills and Qualifications

Core Engineering
- 3-7 years of backend engineering experience with Golang, Python, C++, or Java in production environments.
- Strong understanding of concurrency, memory management, goroutine/thread scheduling, and synchronization primitives.
- Ability to design and debug high-throughput, low-latency systems with attention to memory and CPU efficiency.

System Design and Infrastructure
- Experience with distributed messaging systems (Kafka/SQS/NATS): offset management, retries, ordering, delivery guarantees.
- Familiarity with rate limiting, circuit breaking, retry logic, and backpressure in API and message systems.
- Practical knowledge of containerized development (Docker), CI/CD, and cloud infrastructure (AWS preferred).

Database Expertise
- Strong SQL skills: schema design, indexing, query optimization, ACID properties, migration strategies.
- Experience with PostgreSQL or a similar RDBMS handling large datasets and complex queries.

Production-Readiness
- Exposure to an observability stack: structured logging, metrics (Prometheus/Grafana), alerts, and debugging tools.
- Experience writing clean, testable code with unit/integration tests and version control workflows.

Preferred Qualifications (Good to Have)
- Experience with Redis, Elasticsearch, or time-series databases.
- Background in fintech, trading systems, or high-throughput transactional systems.
- Active contributor to design docs or architecture reviews.
- Strong problem-solving record in system-level root cause analysis.

Culture Fit
- You reason about why, not just how.
- You take ownership end to end, from design to on-call.
- You value clarity, performance, and maintainability over short-term hacks.
- You communicate precisely and work well in high-performance teams.

Interview Process
1. Resume and project review
2. DSA + problem-solving round
3. System design + infrastructure deep dive
4. Final culture + ownership round
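The posting asks for hands-on work with concurrency primitives and rate limiting. As a hedged illustration only, here is a minimal thread-safe token-bucket limiter in Python (the posting also accepts Go, C++, and Java; this is just one possible sketch, not the company's implementation).

```python
# Minimal thread-safe token-bucket rate limiter; a sketch, not a
# production implementation (no fairness, no async support).
import threading
import time

class TokenBucket:
    def __init__(self, rate: float, capacity: float):
        self.rate = rate              # tokens added per second
        self.capacity = capacity      # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()
        self.lock = threading.Lock()  # mutex guarding the bucket state

    def allow(self) -> bool:
        """Return True if a request may proceed, consuming one token."""
        with self.lock:
            now = time.monotonic()
            # Refill proportionally to elapsed time, capped at capacity.
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False

# Usage: allow roughly 5 requests/second with bursts of up to 10.
limiter = TokenBucket(rate=5, capacity=10)
if not limiter.allow():
    print("429 Too Many Requests")  # caller should back off or queue
```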

Posted 6 days ago

Apply

6.0 years

0 Lacs

Kolkata, West Bengal, India

Remote

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

The opportunity
We are seeking a highly skilled and motivated Senior DataOps Engineer with strong expertise in the Azure data ecosystem. You will play a crucial role in managing and optimizing data workflows across Azure platforms such as Azure Data Factory, Data Lake, Databricks, and Synapse. Your primary focus will be on building, maintaining, and monitoring data pipelines, ensuring high data quality, and supporting critical data operations. You'll also support visualization, automation, and CI/CD processes to streamline data delivery and reporting.

Your Key Responsibilities
- Data Pipeline Management: Build, monitor, and optimize data pipelines using Azure Data Factory (ADF), Databricks, and Azure Synapse for efficient data ingestion, transformation, and storage.
- ETL Operations: Design and maintain robust ETL processes for batch and real-time data processing across cloud and on-premise sources.
- Data Lake Management: Organize and manage structured and unstructured data in Azure Data Lake, ensuring performance and security best practices.
- Data Quality & Validation: Perform data profiling, validation, and transformation using SQL, PySpark, and Python to ensure data integrity (see the sketch below).
- Monitoring & Troubleshooting: Use logging and monitoring tools to troubleshoot failures in pipelines and address data latency or quality issues.
- Reporting & Visualization: Work with Power BI or Tableau teams to support dashboard development, ensuring the availability of clean and reliable data.
- DevOps & CI/CD: Support data deployment pipelines using Azure DevOps, Git, and CI/CD practices for version control and automation.
- Tool Integration: Collaborate with cross-functional teams to integrate Informatica CDI or similar ETL tools with Azure components for seamless data flow.
- Collaboration & Documentation: Partner with data analysts, engineers, and business stakeholders, while maintaining SOPs and technical documentation for operational efficiency.

Skills and attributes for success
- Strong hands-on experience in Azure Data Factory, Azure Data Lake, Azure Synapse, and Databricks
- Solid understanding of ETL/ELT design and implementation principles
- Strong SQL and PySpark skills for data transformation and validation
- Exposure to Python for automation and scripting
- Familiarity with DevOps concepts, CI/CD workflows, and source control systems (Azure DevOps preferred)
- Experience in working with Power BI or Tableau for data visualization and reporting support
- Strong problem-solving skills, attention to detail, and commitment to data quality
- Excellent communication and documentation skills to interface with technical and business teams
- Strong knowledge of asset management business operations, especially in data domains like securities, holdings, benchmarks, and pricing

To qualify for the role, you must have
- 4-6 years of experience in DataOps or Data Engineering roles
- Proven expertise in managing and troubleshooting data workflows within the Azure ecosystem
- Experience working with Informatica CDI or similar data integration tools
- Scripting and automation experience in Python/PySpark
- Ability to support data pipelines in a rotational on-call or production support environment
- Comfort working in a remote/hybrid and cross-functional team setup

Technologies and Tools

Must haves
- Azure Databricks: Experience in data transformation and processing using notebooks and Spark.
- Azure Data Lake: Experience working with hierarchical data storage in Data Lake.
- Azure Synapse: Familiarity with distributed data querying and data warehousing.
- Azure Data Factory: Hands-on experience in orchestrating and monitoring data pipelines.
- ETL Process Understanding: Knowledge of data extraction, transformation, and loading workflows, including data cleansing, mapping, and integration techniques.

Good to have
- Power BI or Tableau for reporting support
- Monitoring/logging using Azure Monitor or Log Analytics
- Azure DevOps and Git for CI/CD and version control
- Python and/or PySpark for scripting and data handling
- Informatica Cloud Data Integration (CDI) or similar ETL tools
- Shell scripting or command-line data handling
- SQL (across distributed and relational databases)

What We Look For
- Enthusiastic learners with a passion for data ops and practices.
- Problem solvers with a proactive approach to troubleshooting and optimization.
- Team players who can collaborate effectively in a remote or hybrid work environment.
- Detail-oriented professionals with strong documentation skills.

What we offer
EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations (Argentina, China, India, the Philippines, Poland and the UK) and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We'll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career.

- Continuous learning: You'll develop the mindset and skills to navigate whatever comes next.
- Success as defined by you: We'll provide the tools and flexibility, so you can make a meaningful impact, your way.
- Transformative leadership: We'll give you the insights, coaching and confidence to be the leader the world needs.
- Diverse and inclusive culture: You'll be embraced for who you are and empowered to use your voice to help others find theirs.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
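As a hedged illustration of the data-profiling and validation work described above, here is a minimal PySpark sketch that counts nulls per column and fails on duplicate keys; the path, table, and column names are hypothetical, not from the posting.

```python
# Minimal data-quality sketch for PySpark (e.g., in an Azure Databricks
# notebook, where `spark` already exists). Path and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.read.parquet("/mnt/datalake/holdings/")  # hypothetical path

# Null count per column: a quick profile of completeness.
null_counts = df.select(
    [F.count(F.when(F.col(c).isNull(), c)).alias(c) for c in df.columns]
)
null_counts.show()

# Duplicate business keys violate an assumed uniqueness rule on security_id.
dupes = df.groupBy("security_id").count().filter(F.col("count") > 1)
if dupes.limit(1).count() > 0:
    raise ValueError("Duplicate security_id values found; failing the pipeline run.")
```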

Posted 6 days ago

Apply

2.0 - 3.0 years

0 Lacs

Navi Mumbai, Maharashtra, India

On-site

Our Company
Teradata is the connected multi-cloud data platform for enterprise analytics. Our enterprise analytics solve business challenges from start to scale. Only Teradata gives you the flexibility to handle the massive and mixed data workloads of the future, today. The Teradata Vantage architecture is cloud native, delivered as-a-service, and built on an open ecosystem. These design features make Vantage the ideal platform to optimize price performance in a multi-cloud environment.

What You'll Do
This role will mainly be working as part of Change Ops for the Cloud Ops L2 team, which is ultimately responsible for all Changes across the AWS, Azure and Google Cloud platforms. A few of the core responsibilities, though not limited to these, are listed below.
- The Cloud Ops Administrator is responsible for managing Teradata's as-a-Service offering on public cloud (AWS/Azure/GCP).
- Delivery responsibilities in the areas of cloud network administration, security administration, instantiation, provisioning, optimizing the environment, and third-party software support.
- Supporting the onsite teams with migration from on-premise to cloud for customers.
- Implementing security best practices and analyzing partner compatibility.
- Manages and coordinates all activities necessary to implement Changes in the environment.
- Ensures Change status, progress and issues are communicated to the appropriate groups.
- Views and implements the process lifecycle and reports to upper management.
- Evaluates performance metrics against the critical success factors and ensures actions to streamline the process.
- Perform Change-related activities documented in the Change Request to ensure the Change is implemented according to plan.
- Document closure activities in the Change record and complete the Change record.
- Escalate any deviations from plans to the appropriate TLs/Managers.
- Provide input for the ongoing improvement of the Change Management process.
- Manage and support 24x7 VaaS environments for multiple customers.
- Devise and implement security and operations best practices.
- Implement development and production environments for the data warehousing cloud environment.
- Backup, archive and recovery planning and execution of the cloud-based data warehouses across all platforms (AWS/Azure/GCP resources).
- Ensuring SLAs are met while implementing the change.
- Ensure all scheduled changes are implemented within the prescribed window.
- First level of escalation for team members.
- First level of help/support for team members.

Who You'll Work With
This role will mainly be working as part of Change Ops for the Cloud Ops L2 team, which is ultimately responsible for all Cases, Incidents and Changes across the Azure and Google Cloud platforms. This role reports to the Delivery Manager for Change Ops.

What Makes You a Qualified Candidate
- Minimum 2-3 years of IT experience in a Systems Administrator/Engineer role.
- Minimum 1 year of hands-on cloud experience (Azure/AWS/GCP).
- Cloud certification; ITIL or other relevant certifications are desirable.
- Day-to-day operations experience with ServiceNow or another ITSM tool.
- Must be willing to provide 24x7 on-call support on a rotational basis with the team.
- Must be willing to travel, both short-term and long-term.

What You'll Bring
- 4-year engineering degree or 3-year Master of Computer Applications.
- Excellent oral and written communication skills in the English language.
- Teradata/DBMS experience.
- Hands-on experience with Teradata administration and a strong understanding of cloud capabilities and limitations.
- Thorough understanding of cloud computing: virtualization technologies; Infrastructure as a Service, Platform as a Service and Software as a Service cloud delivery models; and the current competitive landscape.
- Implement and support new and existing customers on VaaS infrastructure.
- Thorough understanding of infrastructure (firewalls, load balancers, hypervisors, storage, monitoring, security, etc.) and experience with orchestration to develop a cloud solution.
- Good knowledge of cloud services for compute, storage, network and OS for at least one of the following cloud platforms: Azure.
- Experience managing responsibilities as a shift lead.
- Experience with enterprise VPN and Azure virtual LAN with a data center.
- Knowledge of monitoring, logging and cost management tools.
- Hands-on experience with database architecture/modeling, RDBMS and NoSQL.
- Good understanding of data archive/restore policies.
- Teradata basics; VMware certification or skills are an added advantage.
- Working experience in Linux administration and shell scripting.
- Working experience with any RDBMS such as Oracle, DB2, Netezza, Teradata, SQL Server or MySQL.

Why We Think You'll Love Teradata
We prioritize a people-first culture because we know our people are at the very heart of our success. We embrace a flexible work model because we trust our people to make decisions about how, when, and where they work. We focus on well-being because we care about our people and their ability to thrive both personally and professionally. We are an anti-racist company because our dedication to Diversity, Equity, and Inclusion is more than a statement. It is a deep commitment to doing the work to foster an equitable environment that celebrates people for all of who they are.

Teradata invites all identities and backgrounds in the workplace. We work with deliberation and intent to ensure we are cultivating collaboration and inclusivity across our global organization. We are proud to be an equal opportunity and affirmative action employer. We do not discriminate based upon race, color, ancestry, religion, creed, sex (including pregnancy, childbirth, breastfeeding, or related conditions), national origin, sexual orientation, age, citizenship, marital status, disability, medical condition, genetic information, gender identity or expression, military and veteran status, or any other legally protected status.

Posted 6 days ago

Apply

2.0 - 5.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Highlight of the engagement opportunity
- Nature of role: Graduate with technical background
- Number of years of experience expected: 2-5 years
- Areas of experience preferred: Providing product support for GRC solutions, implementation of GRC solutions, support on no-code platforms, customer handling experience
- Educational qualification expected: Bachelor's degree
- Additional qualifications/certifications required: NA
- Preferable additional certifications: Experience with a ticketing system
- Preferred geography of previous work experience: India, Middle East, APAC
- Language requirements: Ability to write and speak fluently in English. Excellent written and verbal communication skills, with the ability to articulate complex technical issues to both technical and non-technical stakeholders
- Application experience: Development/implementation experience with GRC tools such as Pentana, MetricStream, OpenPages, SAP Audit, etc. is preferred

Acies is seeking a highly motivated and customer-focused Support Engineer to join our Customer Support team. The successful candidate will be the first point of contact for our B2B clients, providing initial technical assistance and issue resolution for our software products across the treasury, risk, finance, and regulatory compliance domains. This role requires a blend of technical aptitude, problem-solving skills, and excellent communication to ensure client satisfaction and efficient support delivery.

Key responsibility areas:
- First-Line Technical Support: Provide timely and effective initial technical support to clients via phone, email, and the ticketing system for inquiries related to Acies' software products.
- Issue Triage & Resolution: Accurately identify, diagnose, and resolve basic technical issues, common user errors, and configuration problems. Follow documented troubleshooting steps and solutions.
- Incident Logging & Tracking: Meticulously log all client interactions, incidents, and requests in the ticketing system, ensuring clear and comprehensive documentation.
- Escalation Management: Efficiently escalate complex or unresolved issues to L2 Support Engineers, Product Managers, or Development teams, ensuring all relevant information is provided for swift resolution.
- Client Communication: Maintain professional communication with clients, providing regular updates on issue status and estimated resolution times.
- Knowledge Base Contribution: Contribute to and utilize the internal knowledge base, creating new articles and updating existing ones to improve self-service options and support efficiency.
- Monitoring & Reporting: Monitor system health, alert dashboards, and common support trends. Contribute regular reports on support metrics and common issues.
- Product Understanding: Develop a strong understanding of Acies' software products, their functionalities, and common use cases within treasury, risk, finance, and regulatory compliance.
- Continuous Improvement: Proactively identify opportunities for process improvements within the support function to enhance client satisfaction and operational efficiency. Provide feedback to product teams on recurring issues and frequent user problems in order to improve the base system.

Other important information:
- Work permit requirements: Either an Indian citizen or holding a valid work permit to work in India.
- Period of engagement: Full-time position.
- Probation period: 6 months.
- Compensation: Compensation varies depending on the skill, fitment and role played by the person. Compensation discussions will take place post the selection process.
- Performance incentives: Typically, all roles at Acies have a performance incentive. Specific aspects will be discussed during the compensation discussion.
- Leave: 22 working days a year. Additional leaves for national holidays, sick leaves, maternity and paternity, bereavement and studies vary based on the city and country of engagement.
- Other benefits: Other employment benefits, including medical insurance, will be covered during the compensation discussion.
- Career growth for full-time roles: Acies believes in a transparent and data-based performance evaluation system. You are encouraged to clarify any questions you have with respect to career growth with the Acies personnel you interact with during the selection process.
- Career opportunities for part-time roles: Conversion of part-time roles to full-time roles depends on both the performance of the individual and business needs. You are encouraged to ask about the prospects as you interact with Acies personnel during the selection process.
- Global mobility: Acies encourages mobility across our offices. Such mobility is, however, subject to business needs and regulations governing immigration and employment in various countries.

Selection process:
We seek to be transparent during the selection process. While the actual process may vary from the process indicated below, the key steps involved are as follows:
- Interview: There are expected to be at least 2 rounds of interviews. The number of interview rounds may increase depending on the criticality and seniority of the role involved.
- Final discussion on career and compensation: Post final selection, a separate discussion will be set up to discuss compensation and career growth. You are encouraged to seek any clarifications.

Preparation required: It is recommended that you prepare on the following aspects before the selection process:
- Understanding of the support process and ticketing process
- Good spoken and written English
- Ability to handle difficult conversations; you may face a stress interview

For any additional queries you may have, you can send a LinkedIn InMail to us, connect with us at https://www.acies.consulting/contact-us.php or e-mail us at careers@acies.holdings.

How to reach us: Should you wish to apply for this job, please reach out to us directly through LinkedIn or apply on our website career page: https://www.acies.consulting/careers-apply.html

Posted 6 days ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Data Engineer - ETL
Bangalore, India

AXA XL recognizes data and information as critical business assets, both in terms of managing risk and enabling new business opportunities. This data should not only be high quality, but also actionable, enabling AXA XL's executive leadership team to maximize benefits and facilitate sustained competitive advantage.

Our Chief Data Office, also known as our Innovation, Data Intelligence & Analytics team (IDA), is focused on driving innovation by optimizing how we leverage data to drive strategy and create a new business model, disrupting the insurance market. As we develop an enterprise-wide data and digital strategy that moves us toward a greater focus on the use of data and data-driven insights, we are seeking a Data Engineer. The role will support the team's efforts towards creating, enhancing, and stabilizing the enterprise data lake through the development of data pipelines. This role requires a person who is a team player and can work well with team members from other disciplines to deliver data in an efficient and strategic manner.

What You'll Be Doing
What will your essential responsibilities include?
- Act as a data engineering expert and partner to Global Technology and data consumers in controlling complexity and cost of the data platform, while enabling performance, governance, and maintainability of the estate.
- Understand current and future data consumption patterns and architecture (granular level); partner with Architects to ensure optimal design of data layers.
- Apply best practices in data architecture: for example, the balance between materialization and virtualization, the optimal level of de-normalization, caching and partitioning strategies, choice of storage and querying technology, and performance tuning.
- Lead and execute hands-on research into new technologies, formulating frameworks for assessing new technology against business benefit and implications for data consumers.
- Act as a best-practice expert and blueprint creator for ways of working such as testing, logging, CI/CD, observability, and release, enabling rapid growth in data inventory and utilization of the Data Science Platform.
- Design prototypes and work in a fast-paced, iterative solution delivery model.
- Design, develop and maintain ETL pipelines using PySpark in Azure Databricks using Delta tables (see the sketch below). Use Harness for the deployment pipeline.
- Monitor performance of ETL jobs, resolve any issues that arise, and improve performance metrics as needed.
- Diagnose system performance issues related to data processing and implement solutions to address them.
- Collaborate with other teams to ensure successful integration of data pipelines into the larger system architecture.
- Maintain integrity and quality across all pipelines and environments.
- Understand and follow secure coding practices to ensure code is not vulnerable.

You will report to the Application Manager.

What You Will Bring
We're looking for someone who has these abilities and skills:

Required Skills and Abilities
- Effective communication skills.
- Bachelor's degree in computer science, mathematics, statistics, finance, a related technical field, or equivalent work experience.
- Relevant years of extensive work experience in various data engineering and modeling techniques (relational, data warehouse, semi-structured, etc.), application development, and advanced data querying skills.
- Relevant years of programming experience using Databricks.
- Relevant years of experience using the Microsoft Azure suite of products (ADF, Synapse and ADLS).
- Solid knowledge of network and firewall concepts.
- Solid experience writing, optimizing and analyzing SQL.
- Relevant years of experience with Python.
- Ability to break down complex data requirements and architect solutions into achievable targets.
- Robust familiarity with Software Development Life Cycle (SDLC) processes and workflow, especially Agile.
- Experience using Harness.
- Technical lead responsible for both individual and team deliveries.

Desired Skills and Abilities
- Experience in big data migration projects.
- Experience with performance tuning both at the database level and on big data platforms.
- Ability to interpret complex data requirements and architect solutions.
- Distinctive problem-solving and analytical skills combined with robust business acumen.
- Excellent understanding of Parquet and Delta file formats.
- Effective knowledge of the Azure cloud computing platform.
- Familiarity with reporting software; Power BI is a plus.
- Familiarity with DBT is a plus.
- Passion for data and experience working within a data-driven organization.
- You care about what you do, and what we do.

Who We Are
AXA XL, the P&C and specialty risk division of AXA, is known for solving complex risks. For mid-sized companies, multinationals and even some inspirational individuals we don't just provide re/insurance, we reinvent it. How? By combining a comprehensive and efficient capital platform, data-driven insights, leading technology, and the best talent in an agile and inclusive workspace, empowered to deliver top client service across all our lines of business: property, casualty, professional, financial lines and specialty. With an innovative and flexible approach to risk solutions, we partner with those who move the world forward. Learn more at axaxl.com

What We Offer

Inclusion
AXA XL is committed to equal employment opportunity and will consider applicants regardless of gender, sexual orientation, age, ethnicity and origins, marital status, religion, disability, or any other protected characteristic. At AXA XL, we know that an inclusive culture and a diverse workforce enable business growth and are critical to our success. That's why we have made a strategic commitment to attract, develop, advance and retain the most diverse workforce possible, and create an inclusive culture where everyone can bring their full selves to work and can reach their highest potential. It's about helping one another, and our business, to move forward and succeed.
- Five Business Resource Groups focused on gender, LGBTQ+, ethnicity and origins, disability and inclusion, with 20 chapters around the globe
- Robust support for flexible working arrangements
- Enhanced family-friendly leave benefits
- Named to the Diversity Best Practices Index
- Signatory to the UK Women in Finance Charter

Learn more at axaxl.com/about-us/inclusion-and-diversity. AXA XL is an Equal Opportunity Employer.

Total Rewards
AXA XL's Reward program is designed to take care of what matters most to you, covering the full picture of your health, wellbeing, lifestyle and financial security. It provides dynamic compensation and personalized, inclusive benefits that evolve as you do. We're committed to rewarding your contribution for the long term, so you can be your best self today and look forward to the future with confidence.

Sustainability
At AXA XL, sustainability is integral to our business strategy. In an ever-changing world, AXA XL protects what matters most for our clients and communities. We know that sustainability is at the root of a more resilient future. Our 2023-26 sustainability strategy, called "Roots of resilience", focuses on protecting natural ecosystems, addressing climate change, and embedding sustainable practices across our operations.

Our Pillars
- Valuing nature: How we impact nature affects how nature impacts us. Resilient ecosystems, the foundation of a sustainable planet and society, are essential to our future. We're committed to protecting and restoring nature, from mangrove forests to the bees in our backyard, by increasing biodiversity awareness and inspiring clients and colleagues to put nature at the heart of their plans.
- Addressing climate change: The effects of a changing climate are far-reaching and significant. Unpredictable weather, increasing temperatures, and rising sea levels cause both social inequalities and environmental disruption. We're building a net-zero strategy, developing insurance products and services, and mobilizing to advance thought leadership and investment in societal-led solutions.
- Integrating ESG: All companies have a role to play in building a more resilient future. Incorporating ESG considerations into our internal processes and practices builds resilience from the roots of our business. We're training our colleagues, engaging our external partners, and evolving our sustainability governance and reporting.
- AXA Hearts in Action: We have established volunteering and charitable giving programs to help colleagues support causes that matter most to them, known as AXA XL's "Hearts in Action" programs. These include our Matching Gifts program, Volunteering Leave, and our annual volunteering day, the Global Day of Giving.

For more information, please see axaxl.com/sustainability.
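The responsibilities above center on ETL pipelines built with PySpark and Delta tables in Azure Databricks. The following hedged sketch shows one common pattern, an idempotent merge (upsert) into a Delta table; the paths, the merge key, and the delta-spark dependency are assumptions for illustration, not AXA XL's actual pipeline.

```python
# Minimal Delta upsert sketch for a Databricks-style environment, assuming
# the delta-spark package is available and the session is Delta-enabled.
# Paths and the policy_id merge key are hypothetical.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

updates = spark.read.parquet("/mnt/raw/policies/")  # hypothetical source extract
target_path = "/mnt/curated/policies_delta"         # hypothetical Delta location

if DeltaTable.isDeltaTable(spark, target_path):
    target = DeltaTable.forPath(spark, target_path)
    # Upsert keyed on policy_id: update matching rows, insert new ones.
    (target.alias("t")
           .merge(updates.alias("s"), "t.policy_id = s.policy_id")
           .whenMatchedUpdateAll()
           .whenNotMatchedInsertAll()
           .execute())
else:
    # First run: create the Delta table from the initial extract.
    updates.write.format("delta").save(target_path)
```

Because the merge is keyed, re-running the job with the same extract leaves the table unchanged, which is what makes the pattern safe for scheduled or retried pipeline runs.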

Posted 6 days ago

Apply

2.0 - 3.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Our Company
Teradata is the connected multi-cloud data platform for enterprise analytics. Our enterprise analytics solve business challenges from start to scale. Only Teradata gives you the flexibility to handle the massive and mixed data workloads of the future, today. The Teradata Vantage architecture is cloud native, delivered as-a-service, and built on an open ecosystem. These design features make Vantage the ideal platform to optimize price performance in a multi-cloud environment.

What You'll Do
This role will mainly be working as part of Change Ops for the Cloud Ops L2 team, which is ultimately responsible for all Changes across the AWS, Azure and Google Cloud platforms. A few of the core responsibilities, though not limited to these, are listed below.
- The Cloud Ops Administrator is responsible for managing Teradata's as-a-Service offering on public cloud (AWS/Azure/GCP).
- Delivery responsibilities in the areas of cloud network administration, security administration, instantiation, provisioning, optimizing the environment, and third-party software support.
- Supporting the onsite teams with migration from on-premise to cloud for customers.
- Implementing security best practices and analyzing partner compatibility.
- Manages and coordinates all activities necessary to implement Changes in the environment.
- Ensures Change status, progress and issues are communicated to the appropriate groups.
- Views and implements the process lifecycle and reports to upper management.
- Evaluates performance metrics against the critical success factors and ensures actions to streamline the process.
- Perform Change-related activities documented in the Change Request to ensure the Change is implemented according to plan.
- Document closure activities in the Change record and complete the Change record.
- Escalate any deviations from plans to the appropriate TLs/Managers.
- Provide input for the ongoing improvement of the Change Management process.
- Manage and support 24x7 VaaS environments for multiple customers.
- Devise and implement security and operations best practices.
- Implement development and production environments for the data warehousing cloud environment.
- Backup, archive and recovery planning and execution of the cloud-based data warehouses across all platforms (AWS/Azure/GCP resources).
- Ensuring SLAs are met while implementing the change.
- Ensure all scheduled changes are implemented within the prescribed window.
- First level of escalation for team members.
- First level of help/support for team members.

Who You'll Work With
This role will mainly be working as part of Change Ops for the Cloud Ops L2 team, which is ultimately responsible for all Cases, Incidents and Changes across the Azure and Google Cloud platforms. This role reports to the Delivery Manager for Change Ops.

What Makes You a Qualified Candidate
- Minimum 2-3 years of IT experience in a Systems Administrator/Engineer role.
- Minimum 1 year of hands-on cloud experience (Azure/AWS/GCP).
- Cloud certification; ITIL or other relevant certifications are desirable.
- Day-to-day operations experience with ServiceNow or another ITSM tool.
- Must be willing to provide 24x7 on-call support on a rotational basis with the team.
- Must be willing to travel, both short-term and long-term.

What You'll Bring
- 4-year engineering degree or 3-year Master of Computer Applications.
- Excellent oral and written communication skills in the English language.
- Teradata/DBMS experience.
- Hands-on experience with Teradata administration and a strong understanding of cloud capabilities and limitations.
- Thorough understanding of cloud computing: virtualization technologies; Infrastructure as a Service, Platform as a Service and Software as a Service cloud delivery models; and the current competitive landscape.
- Implement and support new and existing customers on VaaS infrastructure.
- Thorough understanding of infrastructure (firewalls, load balancers, hypervisors, storage, monitoring, security, etc.) and experience with orchestration to develop a cloud solution.
- Good knowledge of cloud services for compute, storage, network and OS for at least one of the following cloud platforms: Azure.
- Experience managing responsibilities as a shift lead.
- Experience with enterprise VPN and Azure virtual LAN with a data center.
- Knowledge of monitoring, logging and cost management tools.
- Hands-on experience with database architecture/modeling, RDBMS and NoSQL.
- Good understanding of data archive/restore policies.
- Teradata basics; VMware certification or skills are an added advantage.
- Working experience in Linux administration and shell scripting.
- Working experience with any RDBMS such as Oracle, DB2, Netezza, Teradata, SQL Server or MySQL.

Why We Think You'll Love Teradata
We prioritize a people-first culture because we know our people are at the very heart of our success. We embrace a flexible work model because we trust our people to make decisions about how, when, and where they work. We focus on well-being because we care about our people and their ability to thrive both personally and professionally. We are an anti-racist company because our dedication to Diversity, Equity, and Inclusion is more than a statement. It is a deep commitment to doing the work to foster an equitable environment that celebrates people for all of who they are.

Teradata invites all identities and backgrounds in the workplace. We work with deliberation and intent to ensure we are cultivating collaboration and inclusivity across our global organization. We are proud to be an equal opportunity and affirmative action employer. We do not discriminate based upon race, color, ancestry, religion, creed, sex (including pregnancy, childbirth, breastfeeding, or related conditions), national origin, sexual orientation, age, citizenship, marital status, disability, medical condition, genetic information, gender identity or expression, military and veteran status, or any other legally protected status.

Posted 6 days ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site


Data Engineer - ETL
Bangalore, India

AXA XL recognizes data and information as critical business assets, both in terms of managing risk and enabling new business opportunities. This data should not only be high quality, but also actionable, enabling AXA XL's executive leadership team to maximize benefits and facilitate sustained competitive advantage. Our Chief Data Office, also known as our Innovation, Data Intelligence & Analytics (IDA) team, is focused on driving innovation by optimizing how we leverage data to drive strategy and create a new business model, disrupting the insurance market.

As we develop an enterprise-wide data and digital strategy that moves us toward a greater focus on the use of data and data-driven insights, we are seeking a Data Engineer. The role will support the team's efforts toward creating, enhancing, and stabilizing the enterprise data lake through the development of data pipelines. It requires a team player who can work well with members of other disciplines to deliver data in an efficient and strategic manner.

What You'll Be Doing
What will your essential responsibilities include?
• Act as a data engineering expert and partner to Global Technology and data consumers in controlling the complexity and cost of the data platform, while enabling performance, governance, and maintainability of the estate.
• Understand current and future data consumption patterns and architecture (at a granular level), and partner with architects to ensure optimal design of data layers.
• Apply best practices in data architecture: for example, the balance between materialization and virtualization, optimal levels of de-normalization, caching and partitioning strategies, choice of storage and querying technology, and performance tuning.
• Lead and execute hands-on research into new technologies, formulating frameworks for assessing new technology against business benefit and implications for data consumers.
• Act as a best-practice expert and blueprint creator for ways of working such as testing, logging, CI/CD, observability, and releases, enabling rapid growth in data inventory and utilization of the Data Science Platform.
• Design prototypes and work in a fast-paced, iterative solution-delivery model.
• Design, develop, and maintain ETL pipelines using PySpark in Azure Databricks with Delta tables; use Harness for the deployment pipeline (a minimal sketch follows at the end of this posting).
• Monitor the performance of ETL jobs, resolve any issues that arise, and improve performance metrics as needed.
• Diagnose system performance issues related to data processing and implement solutions to address them.
• Collaborate with other teams to ensure successful integration of data pipelines into the larger system architecture.
• Maintain integrity and quality across all pipelines and environments.
• Understand and follow secure coding practices to ensure code is not vulnerable.

You will report to the Application Manager.

What You Will Bring
We're looking for someone who has these abilities and skills:

Required Skills and Abilities
• Effective communication skills.
• Bachelor's degree in computer science, mathematics, statistics, finance, a related technical field, or equivalent work experience.
• Relevant years of extensive work experience in various data engineering and modeling techniques (relational, data warehouse, semi-structured, etc.), application development, and advanced data querying.
• Relevant years of programming experience using Databricks.
• Relevant years of experience using the Microsoft Azure suite of products (ADF, Synapse, and ADLS).
• Solid knowledge of network and firewall concepts.
• Solid experience writing, optimizing, and analyzing SQL.
• Relevant years of experience with Python.
• Ability to break down complex data requirements and architect solutions into achievable targets.
• Robust familiarity with Software Development Life Cycle (SDLC) processes and workflows, especially Agile.
• Experience using Harness.
• Technical lead responsible for both individual and team deliveries.

Desired Skills and Abilities
• Experience in big data migration projects.
• Experience with performance tuning at both the database and big data platform levels.
• Ability to interpret complex data requirements and architect solutions.
• Distinctive problem-solving and analytical skills combined with robust business acumen.
• Strong fundamentals in Parquet and Delta file formats.
• Effective knowledge of the Azure cloud computing platform.
• Familiarity with reporting software; Power BI is a plus.
• Familiarity with DBT is a plus.
• Passion for data and experience working within a data-driven organization.
• You care about what you do, and what we do.

Who We Are
AXA XL, the P&C and specialty risk division of AXA, is known for solving complex risks. For mid-sized companies, multinationals, and even some inspirational individuals, we don't just provide re/insurance, we reinvent it. How? By combining a comprehensive and efficient capital platform, data-driven insights, leading technology, and the best talent in an agile and inclusive workspace, empowered to deliver top client service across all our lines of business: property, casualty, professional, financial lines, and specialty. With an innovative and flexible approach to risk solutions, we partner with those who move the world forward. Learn more at axaxl.com

What We Offer

Inclusion
AXA XL is committed to equal employment opportunity and will consider applicants regardless of gender, sexual orientation, age, ethnicity and origins, marital status, religion, disability, or any other protected characteristic. At AXA XL, we know that an inclusive culture and a diverse workforce enable business growth and are critical to our success. That's why we have made a strategic commitment to attract, develop, advance, and retain the most diverse workforce possible, and create an inclusive culture where everyone can bring their full selves to work and can reach their highest potential. It's about helping one another, and our business, to move forward and succeed.
• Five Business Resource Groups focused on gender, LGBTQ+, ethnicity and origins, disability, and inclusion, with 20 chapters around the globe
• Robust support for flexible working arrangements
• Enhanced family-friendly leave benefits
• Named to the Diversity Best Practices Index
• Signatory to the UK Women in Finance Charter
Learn more at axaxl.com/about-us/inclusion-and-diversity. AXA XL is an Equal Opportunity Employer.

Total Rewards
AXA XL's Reward program is designed to take care of what matters most to you, covering the full picture of your health, wellbeing, lifestyle, and financial security. It provides dynamic compensation and personalized, inclusive benefits that evolve as you do. We're committed to rewarding your contribution for the long term, so you can be your best self today and look forward to the future with confidence.

Sustainability
At AXA XL, sustainability is integral to our business strategy. In an ever-changing world, AXA XL protects what matters most for our clients and communities. We know that sustainability is at the root of a more resilient future.
Our 2023-26 Sustainability strategy, called "Roots of resilience", focuses on protecting natural ecosystems, addressing climate change, and embedding sustainable practices across our operations.

Our Pillars
• Valuing nature: How we impact nature affects how nature impacts us. Resilient ecosystems - the foundation of a sustainable planet and society - are essential to our future. We're committed to protecting and restoring nature - from mangrove forests to the bees in our backyard - by increasing biodiversity awareness and inspiring clients and colleagues to put nature at the heart of their plans.
• Addressing climate change: The effects of a changing climate are far-reaching and significant. Unpredictable weather, increasing temperatures, and rising sea levels cause both social inequalities and environmental disruption. We're building a net-zero strategy, developing insurance products and services, and mobilizing to advance thought leadership and investment in societal-led solutions.
• Integrating ESG: All companies have a role to play in building a more resilient future. Incorporating ESG considerations into our internal processes and practices builds resilience from the roots of our business. We're training our colleagues, engaging our external partners, and evolving our sustainability governance and reporting.
• AXA Hearts in Action: We have established volunteering and charitable giving programs to help colleagues support causes that matter most to them, known as AXA XL's "Hearts in Action" programs. These include our Matching Gifts program, Volunteering Leave, and our annual volunteering day - the Global Day of Giving.

For more information, please see axaxl.com/sustainability.
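The PySpark/Delta responsibilities listed in this posting can be pictured with a minimal sketch. The paths, column handling, and table layout below are hypothetical illustrations, not AXA XL's actual pipeline; the only assumption is a Databricks-style runtime where Delta Lake support is available.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("policy_etl").getOrCreate()

# Read raw source data (the path is a placeholder).
raw = spark.read.parquet("/mnt/raw/policies")

# Light transformation: normalise column names and add a load timestamp.
cleaned = (
    raw.select([F.col(c).alias(c.lower()) for c in raw.columns])
       .withColumn("load_ts", F.current_timestamp())
)

# Append into a Delta table (Databricks runtimes ship with Delta support).
cleaned.write.format("delta").mode("append").save("/mnt/curated/policies")
```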

Posted 6 days ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Test Engineers for Banking Project (Experience in the Finacle application mandatory)

Responsibilities
• Experience in the Finacle application is mandatory.
• Participate in business walkthroughs and understand the documents shared by the bank.
• Understand the business requirements, functionality, workflow, and screen navigation, and acquire good knowledge of the application under test.
• Raise functional/business clarifications.
• Design the test case and test data documents.
• Incorporate review comments on the prepared testware.
• Log test execution results: pass logs, defect logs, re-raise logs, and closure logs (a minimal sketch follows this posting).

Essential Skills
• Experience in the Finacle application is mandatory.

Experience
• 2+ years
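To illustrate the result-logging duty above, here is a small hedged Python sketch that routes pass and defect outcomes to separate log files. The file names and test ids are hypothetical; real projects would normally log through their test-management tool instead.

```python
import logging

# Separate handlers so pass logs and defect logs land in different files.
pass_log = logging.getLogger("execution.pass")
defect_log = logging.getLogger("execution.defect")
pass_log.addHandler(logging.FileHandler("pass.log"))
defect_log.addHandler(logging.FileHandler("defect.log"))
pass_log.setLevel(logging.INFO)
defect_log.setLevel(logging.INFO)

def record_result(test_id: str, passed: bool, detail: str = "") -> None:
    """Route a test outcome to the appropriate execution log."""
    if passed:
        pass_log.info("%s PASSED %s", test_id, detail)
    else:
        defect_log.info("%s FAILED %s", test_id, detail)

record_result("TC-101", True)
record_result("TC-102", False, "balance mismatch on Finacle screen")
```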

Posted 6 days ago

Apply

5.0 - 10.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Job Responsibilities
• Manage and maintain Azure Kubernetes Services.
• Establish, deploy, and maintain CI/CD pipelines to automate the build, test, and deployment processes.
• Investigate and resolve issues related to the application infrastructure, continuous integration, and deployment pipelines.

Desired Skills and Experience

Candidate Profile
◼ 5 to 10 years of experience in a DevOps role, preferably in investment banking.
◼ Certified Kubernetes Administrator with the certification in active status.
◼ Experience managing and working with Kubernetes environments and observability tools.
◼ Strong knowledge of containerization and orchestration of microservices.
◼ Experience with Docker/Podman, Helm, the ArgoCD GitOps tool, and Terraform.
◼ Experience with Azure infrastructure, including Entra ID, Azure Kubernetes Service, Azure Storage, Azure Redis, and other Azure cloud technologies.
◼ Experience with Prometheus, Grafana, Loki, Tempo, Grafana Agent, and Azure Monitor logging and observability tools.
◼ Good exposure to Bamboo CI/CD tools, Bitbucket, and Git.
◼ Experience with production environment troubleshooting and debugging.
◼ Automation scripting (Bash, PowerShell, Python).
◼ Good exposure to Git branching strategies.
◼ A high level of professionalism, organisation, self-motivation, and a desire for self-improvement.
◼ Self-driven and proactive, with the ability to plan, schedule, and manage a demanding workload.

Nice-to-Have Skills
◼ Implement backup and disaster recovery strategies, participate in annual DR tests, and assist with executing the DR test plan.
◼ Develop and utilize cost-tracking tools and methodologies to provide transparent and accurate financial reporting for all projects; identify areas where cloud spend can be optimized to reduce wastage and costs.
◼ Good knowledge of scheduling jobs via Apache Airflow.
◼ Good knowledge of Azure Landing Zones and Azure networking concepts such as private links.
◼ Good knowledge of, or experience in, deploying and maintaining Azure Databricks infrastructure.
◼ Good Java and NodeJS skills.
◼ Good understanding of Kafka streaming and MongoDB.
◼ Knowledge of DevSecOps practices.

Key Responsibilities
◼ Implement and maintain infrastructure-as-code (IaC) using tools such as Terraform.
◼ Utilize containerization technologies like Azure Kubernetes Service to orchestrate and manage containerized applications in a production environment.
◼ Manage and maintain the lifecycle of the core application suite that provides common capabilities such as continuous deployment, observability, and Kafka streaming.
◼ Monitor and troubleshoot infrastructure and application issues using monitoring tools.
◼ Collaborate with infrastructure teams to provision and manage the infrastructure resources required by FO IT development teams in the Azure cloud.
◼ Establish, deploy, and maintain CI/CD pipelines to automate the build, test, and deployment processes.
◼ Investigate and resolve issues related to the application infrastructure, continuous integration, and deployment pipelines.
◼ Identify areas that benefit from automation and build automated processes wherever possible.
◼ Design and develop application health dashboards and alerting and notification delivery systems to help with observability of the application stack in the Azure cloud.
◼ Collaborate with development, testing, and operations teams to gather, understand, and analyze functional requirements.
◼ Implement and enforce security best practices throughout the infrastructure, including identity and access management (RBAC), encryption, and secure network configurations.

Posted 6 days ago

Apply

1.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


About Us
Yubi stands for ubiquitous. But Yubi will also stand for transparency, collaboration, and the power of possibility. From being a disruptor in India's debt market to marching towards global corporate markets, from one product to one holistic product suite with seven products, Yubi is the place to unleash potential. Freedom, not fear. Avenues, not roadblocks. Opportunity, not obstacles.

About Yubi
Yubi, formerly known as CredAvenue, is re-defining global debt markets by freeing the flow of finance between borrowers, lenders, and investors. We are the world's possibility platform for the discovery, investment, fulfilment, and collection of any debt solution. At Yubi, opportunities are plenty and we equip you with tools to seize them.

In March 2022, we became India's fastest fintech and most impactful startup to join the unicorn club with a Series B fundraising round of $137 million. In 2020, we began our journey with a vision of transforming and deepening the global institutional debt market through technology. Our two-sided debt marketplace helps institutional and HNI investors find the widest network of corporate borrowers and debt products on one side, and helps corporates discover investors and access debt capital efficiently on the other. Switching between platforms is easy, which means investors can lend, invest, and trade bonds - all in one place. All five of our platforms shake up the traditional debt ecosystem and offer new ways of digital finance:
• Yubi Loans - Term loans and working capital solutions for enterprises.
• Yubi Invest - Bond issuance and investments for institutional and retail participants.
• Yubi Pool - End-to-end securitisations and portfolio buyouts.
• Yubi Flow - A supply chain platform that offers trade financing solutions.
• Yubi Co.Lend - For banks and NBFCs for co-lending partnerships.

Currently, we have onboarded over 4,000 corporates and 350+ investors, and have facilitated debt volumes of over INR 40,000 crore. Backed by marquee investors like Insight Partners, B Capital Group, Dragoneer, Sequoia Capital, LightSpeed, and Lightrock, we are the only debt platform of its kind globally, revolutionising the segment. At Yubi, people are at the core of the business and our most valuable assets. Yubi is constantly growing, with 650+ like-minded individuals today who are changing the way people perceive debt. We are a fun bunch who are highly motivated and driven to create a purposeful impact. Come, join the club to be a part of our epic growth story.

Responsibilities
• Develop and enhance the ML platform to standardise model development and deployment workflows.
• Create reusable components to streamline the Data Science team's efforts and expedite the model lifecycle.
• Integrate models seamlessly with various products and systems.
• Implement robust logging and instrumentation for monitoring scoring requests for models in production (a minimal sketch follows this posting).
• Establish systems for continuous model monitoring and trigger mechanisms for retraining based on performance metrics.
• Design and build A/B testing frameworks with support for canary deployments and shadow models to evaluate different model versions.
• Integrate the data pipelines necessary for model retraining and update activities in production.
• Scale training and inference capabilities using standardised environment setups and deployment strategies.
• Incorporate open-source frameworks and proprietary tools into the MLOps pipeline to achieve development goals.
• Prototype and evaluate different open-source frameworks to identify optimal technology stacks for the pipeline.
• Focus on CI/CD pipeline integration for models and ensure seamless deployments across environments.

Requirements

Overview
We are seeking a DevOps Engineer who excels in managing and automating cloud infrastructure, container orchestration, and deployment pipelines, while possessing a solid understanding of software development practices. In this role, you will focus on creating robust, scalable infrastructure solutions, automating environments, and supporting application deployments.

Key Responsibilities
• Design, build, and maintain containerized environments using Docker and Kubernetes.
• Develop, deploy, and monitor applications and services within cloud environments (AWS preferred; Azure, GCP).
• Automate infrastructure provisioning and configuration management using popular scripting languages (Python, Bash, etc.).
• Collaborate with development teams to support application deployment pipelines and integrate CI/CD practices.
• Debug and troubleshoot issues in production systems, ensuring high availability and performance.
• Implement monitoring, logging, and alerting mechanisms to proactively manage system health.
• Evaluate open-source and commercial tooling to optimize infrastructure and deployment workflows.
• Assist Data Science and development teams with setting up environments for experimentation and production deployment.

Required Experience & Expertise
• 1-3+ years of experience in DevOps, infrastructure management, or related fields, with a strong emphasis on automation and containerization.
• Extensive hands-on experience with Docker and Kubernetes for building and managing containerized applications.
• Proficient programming and scripting skills, preferably in Python, with the ability to develop automation scripts and tools.
• Solid understanding of public cloud infrastructure (AWS preferred; Azure, GCP) and associated services.
• Experience setting up and managing CI/CD pipelines and integrating configuration management tools.
• Strong problem-solving skills, with the ability to analyse complex issues and provide effective solutions.
• Exposure to deploying and monitoring applications in production environments.
• Familiarity with infrastructure-as-code (IaC) frameworks such as Terraform or CloudFormation is a plus.

Preferred Qualifications
• Background or exposure to Data Science related deployments or applications.
• Ability to work collaboratively in a fast-paced, cross-functional team environment.
• Experience evaluating and adopting new technologies and methodologies to streamline DevOps processes.
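As one way to picture the "logging and instrumentation for scoring requests" responsibility, here is a hedged Python sketch: a decorator that tags each scoring call with a correlation id and logs its latency. The function and field names are illustrative assumptions, not Yubi's actual platform code.

```python
import functools
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
logger = logging.getLogger("scoring")

def log_scoring(func):
    """Log each scoring request with a correlation id and its latency."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        request_id = uuid.uuid4().hex  # correlates request and response lines
        start = time.perf_counter()
        logger.info("request_id=%s model=%s start", request_id, func.__name__)
        try:
            return func(*args, **kwargs)
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            logger.info("request_id=%s latency_ms=%.1f", request_id, elapsed_ms)
    return wrapper

@log_scoring
def score(features: dict) -> float:
    return 0.42  # placeholder model output

score({"amount": 1200})
```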

Posted 6 days ago

Apply

4.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Job Title: Tools Administrator (Zabbix and ServiceNow)

Purpose: The Tools Administrator will be responsible for administering the Zabbix and ServiceNow tools, ensuring seamless monitoring and incident management processes.

Main Priorities:
  1. Zabbix administration
  2. ServiceNow administration
  3. Sound understanding of monitoring metrics and ITSM concepts
  4. Familiarity with various operating systems and platforms
  5. Incident management and response
  6. Collaboration with Infra & Application teams
  7. Incident logging and resolution tracking

Key Outputs: Complete tasks and tickets within SLAs in Zabbix and ServiceNow.

Relationships (Internal): Team Leads, Project Managers, Transition Managers, Operation Managers.

Qualifications: Bachelor's/Master's degree in Computer Science, Software Engineering, or a related area.

Skills/Knowledge:
  1. Experience with Business Rules in ServiceNow
  2. Customization of consoles and dashboards
  3. Hands-on experience with Zabbix and ServiceNow
  4. Hands-on experience with Linux/Unix and Windows
  5. Basic scripting knowledge (Python/Shell); a minimal sketch follows this posting
  6. Basic knowledge of networks and databases
  7. Strong knowledge of ITSM tools
  8. Good knowledge of CMDB and Asset Management
  9. Strong troubleshooting skills
  10. Strong reporting skills
  11. Excellent communication skills

Experience: Minimum 4 years of relevant experience administering Zabbix and ServiceNow.
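The basic scripting expectation above often amounts to small checks against the Zabbix JSON-RPC API. A minimal sketch, assuming a reachable Zabbix frontend at a placeholder URL; apiinfo.version is an unauthenticated method that simply returns the API version, which makes it a convenient connectivity probe.

```python
import requests

ZABBIX_URL = "https://zabbix.example.com/api_jsonrpc.php"  # placeholder host

payload = {
    "jsonrpc": "2.0",
    "method": "apiinfo.version",  # unauthenticated method returning API version
    "params": [],
    "id": 1,
}
resp = requests.post(ZABBIX_URL, json=payload, timeout=10)
resp.raise_for_status()
print(resp.json().get("result"))
```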

Posted 6 days ago

Apply

6.0 years

0 Lacs

Greater Chennai Area

On-site


Redefine the future of customer experiences. One conversation at a time.

We're changing the game with a first-of-its-kind, conversation-centric platform that unifies team collaboration and customer experience in one place. Powered by AI, built by amazing humans. Our culture is forward-thinking, customer-obsessed, and built on an unwavering belief that connection fuels business and life: connections to our customers with our signature Amazing Service®, to our products and services, and most importantly, to each other. Since 2008, 100,000+ companies and 1M+ users have relied on Nextiva for customer and team communication. If you're ready to collaborate and create with amazing people, let your personality shine, and be on the frontlines of helping businesses deliver amazing experiences, you're in the right place.

Build Amazing - Deliver Amazing - Live Amazing - Be Amazing

We are looking for a Senior Site Reliability Engineer to enhance, support, and troubleshoot our SaaS platform. We're looking for someone with a wide breadth of knowledge, experience, and interest in a range of technology domains. The skillset sits somewhere between a web developer and a system administrator: a bit of a generalist with the ability to dig deep when necessary. We deal with many different technologies; a desire to learn and a hunger to work on challenging projects is a must.

Key Responsibilities
• Triage, troubleshoot, and fix production problems in every layer of the stack
• Design, develop, improve, and tune logging, monitoring, and alerting
• Identify manual work, document the fix in the form of a runbook, then automate it away
• Write software to improve the reliability and recoverability of production systems
• Perform and automate system administration tasks
• Participate in the on-call rotation supporting production systems
• Mentor junior and mid-level members of the team
• Drive large projects from a technical perspective

Qualifications
• Bachelor's degree in Computer Science or a related field, or equivalent work experience

Competencies
• 6+ years of software development experience
• 6+ years of Linux system administration experience
• 6+ years of performance engineering experience
• Strong understanding of SRE concepts and DevOps principles
• Strong understanding of microservice environments and distributed systems
• Experience with containerization and container orchestration
• Experience troubleshooting complex systems
• Experience with application performance monitoring
• Experience with relational databases and SQL
• Familiarity with front-end technologies
• Ability to clearly communicate technical concepts

Nice to have: Datadog, Opsgenie, Atlassian Suite (Jira, Confluence, Bitbucket), Java/Spring, Python, JavaScript/React, SQL, Ansible, Jenkins, Tomcat, Git, Redis, RabbitMQ, Splunk/Kibana, Terraform

Typical Office Environment: Requires extensive sitting with periodic standing and walking. May be required to lift up to 35 pounds unassisted. May be required to lift over 35 pounds using an assistive device and/or team lift. Requires significant use of a personal computer, phone, and general office equipment. Needs adequate visual acuity and the ability to grasp and handle objects. Needs the ability to communicate effectively through reading, writing, and speaking in person or on the telephone.

Nextiva DNA (Core Competencies)
Nextiva's most successful team members share common traits and behaviors:
• Drives Results: Action-oriented with a passion for solving problems. They bring clarity and simplicity to ambiguous situations, challenge the status quo, and ask what can be done differently. They lead and drive change, celebrating success to build more success.
• Critical Thinker: Understands the "why" and identifies key drivers, learning from the past. They are fact-based and data-driven, forward-thinking, and see problems a few steps ahead. They provide options, recommendations, and actions, understanding risks and dependencies.
• Right Attitude: They are team-oriented, collaborative, competitive, and hate losing. They are resilient, able to bounce back from setbacks, zoom in and out, and get in the trenches to help solve important problems. They cultivate a culture of service, learning, support, and respect, caring for customers and teams.

Total Rewards
Our Total Rewards offerings are designed to allow our employees to take care of themselves and their families so they can be their best, in and out of the office. Our compensation packages are tailored to each role and candidate's qualifications. We consider a wide range of factors, including skills, experience, training, and certifications, when determining compensation. We aim to offer competitive salaries or wages that reflect the value you bring to our team. Depending on the position, compensation may include base salary and/or hourly wages, incentives, or bonuses.
• Medical 🩺 - Medical insurance coverage is available for employees, their spouse, and up to two dependent children with a limit of 500,000 INR, as well as their parents or in-laws for up to 300,000 INR. This comprehensive coverage ensures that essential healthcare needs are met for the entire family unit, providing peace of mind and security in times of medical necessity.
• Group Term & Group Personal Accident Insurance 💼 - Provides insurance coverage against the risk of death/injury during the policy period sustained due to an accident caused by violent, visible, and external means. Coverage Type: Employee Only. Sum Insured: 3 times annual CTC with a minimum cap of INR 10,00,000. Free Cover Limit: 1.5 Crore.
• Work-Life Balance ⚖️ - 15 days of privilege leave per calendar year, 6 days of paid sick leave per calendar year, 6 days of casual leave per calendar year, 26 weeks of paid maternity leave, 1 week of paternity leave, a day off on your birthday, and paid holidays.
• Financial Security 💰 - Provident Fund & Gratuity.
• Wellness 🤸 - Employee Assistance Program and comprehensive wellness initiatives.
• Growth 🌱 - Access to ongoing learning and development opportunities and career advancement.

At Nextiva, we're committed to supporting our employees' health, well-being, and professional growth. Join us and build a rewarding career!

Established in 2008 and headquartered in Scottsdale, Arizona, Nextiva secured $200M from Goldman Sachs in late 2021, valuing the company at $2.7B. To check out what's going on at Nextiva, find us on Instagram, Instagram (MX), YouTube, LinkedIn, and the Nextiva blog.

Posted 6 days ago

Apply

8.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Note: Only candidates with up to a 30-day official notice period will be considered. If shortlisted, we will reach out via WhatsApp and email; please respond promptly.

Work Type: Full-time | On-site
Compensation (Yearly): INR(₹) 1,200,000 to 2,400,000
Working Hours: Standard Business Hours
Location: Bengaluru / Gurugram / Nagpur
Notice Period: Max 30 days

About The Client
A technology-driven product engineering company focused on embedded systems, connected devices, and Android platform development. Known for working with top-tier OEMs on innovative, mission-critical projects.

About The Role
We are hiring a skilled Data Engineer (FME) to develop, automate, and support data transformation pipelines that handle complex spatial and non-spatial datasets. This role requires hands-on expertise in FME workflows, spatial data validation, PostGIS, and Python scripting, with the ability to support dashboards and collaborate across tech and ops teams.

Must-Have Qualifications
• Bachelor's degree in Engineering (B.E./B.Tech.)
• 4-8 years of experience in data integration or ETL development
• Proficiency in building FME workflows for data transformation
• Strong skills in PostgreSQL/PostGIS and spatial data querying
• Ability to write validation and transformation logic in Python or SQL
• Experience handling formats like GML, Shapefile, GeoJSON, and GPKG
• Familiarity with coordinate systems and geometry validation (e.g., EPSG:27700)
• Working knowledge of cron jobs, logging, and scheduling automation

Preferred Tools & Technologies
• ETL/Integration: FME, Python, Talend (optional)
• Spatial DB: PostGIS, Oracle Spatial
• GIS Tools: QGIS, ArcGIS
• Scripting: Python, SQL
• Formats: CSV, JSON, GPKG, XML, Shapefiles
• Workflow Tools: Jira, Git, Confluence

Key Responsibilities
The role involves designing and automating ETL pipelines using FME, applying custom transformers, and scripting in Python for data validation and transformation. It requires working with spatial data in PostGIS, fixing geometry issues, and ensuring alignment with the required coordinate systems (a minimal validation sketch follows below). The engineer will also support dashboard integrations by creating SQL views and tracking processing metadata. Additional responsibilities include implementing automation through FME Server, cron jobs, and CI/CD pipelines, as well as collaborating with analysts and operations teams to translate business rules, interpret validation reports, and ensure compliance with LA and HMLR specifications.
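A minimal sketch of the geometry-validation work described above, assuming a PostGIS database with a hypothetical parcels table: it flags invalid geometries with ST_IsValidReason and repairs them with ST_MakeValid. Connection settings and table/column names are placeholders, not part of the posting.

```python
import psycopg2

# Placeholder connection settings and table name.
conn = psycopg2.connect("dbname=spatial user=etl")

CHECK_SQL = """
SELECT id, ST_IsValidReason(geom)
FROM parcels
WHERE NOT ST_IsValid(geom);
"""

FIX_SQL = """
UPDATE parcels
SET geom = ST_MakeValid(geom)
WHERE NOT ST_IsValid(geom);
"""

with conn, conn.cursor() as cur:
    cur.execute(CHECK_SQL)
    for row_id, reason in cur.fetchall():
        print(f"invalid geometry {row_id}: {reason}")
    cur.execute(FIX_SQL)  # repair invalid geometries in place
```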

Posted 6 days ago

Apply

7.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Job Summary
We are seeking a skilled Data Engineer to join our dynamic team. In this role, you will be responsible for implementing and maintaining scalable data pipelines and infrastructure on the AWS cloud platform. The ideal candidate will have experience with AWS services, particularly in the realm of big data processing and analytics. The role involves working closely with cross-functional teams to support data-driven decision-making, with a focus on delivering business objectives while improving efficiency and ensuring high service quality.

Key Responsibilities
• Design, develop, and maintain large-scale data pipelines that can handle large datasets from multiple sources.
• Apply knowledge of real-time data replication and batch processing using distributed computing platforms like Spark, Kafka, etc.
• Optimize the performance of data processing jobs and ensure system scalability and reliability.
• Collaborate with DevOps teams to manage infrastructure, including cloud environments like AWS.
• Collaborate with data scientists, analysts, and business stakeholders to develop tools and platforms that enable advanced analytics and reporting.
• Lead and mentor junior data engineers, providing guidance on best practices, code reviews, and technical solutions.
• Evaluate and implement new frameworks and tools for data engineering.
• Apply strong analytical and problem-solving skills with attention to detail.
• Maintain a healthy working relationship with business partners/users and other MLI departments.
• Take responsibility for the overall performance, cost, and delivery of technology solutions.

Key Technical Competencies/Skills Required
• Hands-on experience with AWS services such as S3, DMS, Lambda, EMR, Glue, Redshift, RDS (Postgres), Athena, Kinesis, etc.
• Expertise in data modelling and knowledge of modern file and table formats.
• Proficiency in programming languages such as Python, PySpark, and SQL/PLSQL for implementing data pipelines and ETL processes.
• Experience in data architecting or deploying cloud/virtualization solutions (like data lakes, EDWs, and marts) in an enterprise.
• Knowledge of the modern data stack and keeping the technology stack refreshed.
• Knowledge of DevOps to perform CI/CD for data pipelines.
• Knowledge of data observability, automated data lineage, and metadata management would be an added advantage.
• Cloud/hybrid cloud (preferably AWS) solutions for a data strategy covering data lakes, BI, and analytics.
• Set up logging, monitoring, alerting, and dashboards for the cloud and data solutions (a hedged example follows this posting).
• Experience with data warehousing concepts.

Desired Qualifications and Experience
• Bachelor's degree in Computer Science, Engineering, or a related field (Master's preferred).
• Proven experience of 7+ years as a Data Engineer or in a similar role with a strong focus on the AWS cloud.
• Strong analytical and problem-solving skills with attention to detail.
• Excellent communication and collaboration skills.
• AWS certifications (e.g., AWS Certified Big Data - Specialty) are a plus.
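To picture the "logging, monitoring, alerting, dashboards" item, here is a hedged boto3 sketch that creates a CloudWatch alarm on a Glue job metric. The metric name, dimensions, and SNS topic ARN are illustrative placeholders rather than a known-good configuration; consult the CloudWatch and Glue documentation for the exact metric dimensions your jobs emit.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm, job, and topic names are placeholders for illustration.
cloudwatch.put_metric_alarm(
    AlarmName="etl-glue-job-failures",
    Namespace="Glue",
    MetricName="glue.driver.aggregate.numFailedTasks",
    Dimensions=[{"Name": "JobName", "Value": "nightly-load"}],
    Statistic="Sum",
    Period=300,                # evaluate over five-minute windows
    EvaluationPeriods=1,
    Threshold=0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:data-alerts"],
)
```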

Posted 6 days ago

Apply


0 years

0 Lacs

Gurugram, Haryana, India

On-site


About AiSensy
AiSensy is a WhatsApp-based Marketing & Engagement platform helping businesses like Skullcandy, Vivo, Rentomojo, Physicswallah, and Cosco grow their revenues via WhatsApp.
• Enabling 100,000+ businesses with WhatsApp engagement and marketing
• 400+ crore WhatsApp messages exchanged between businesses and users via AiSensy per year
• Working with top brands like Delhi Transport Corporation, Vivo, Physicswallah, and more
• High impact, as businesses drive 25-80% of revenues using the AiSensy platform
• Mission-driven, growth-stage startup backed by Marsshot.vc, Bluelotus.vc, and 50+ angel investors

Now, we're looking for a DevOps Engineer to help scale our infrastructure and optimize performance for millions of users. 🚀

What You'll Do (Key Responsibilities)
🔹 CI/CD & Automation: Implement, manage, and optimize CI/CD pipelines using AWS CodePipeline, GitHub Actions, or Jenkins. Automate deployment processes to improve efficiency and reduce downtime.
🔹 Infrastructure Management: Use Terraform, Ansible, Chef, Puppet, or Pulumi to manage infrastructure as code. Deploy and maintain Dockerized applications on Kubernetes clusters for scalability.
🔹 Cloud & Security: Work extensively with AWS (preferred) or other cloud platforms to build and maintain cloud infrastructure. Optimize cloud costs and ensure security best practices are in place.
🔹 Monitoring & Troubleshooting: Set up and manage monitoring tools like CloudWatch, Prometheus, Datadog, New Relic, or Grafana to track system performance and uptime. Proactively identify and resolve infrastructure-related issues.
🔹 Scripting & Automation: Use Python or Bash scripting to automate repetitive DevOps tasks. Build internal tools for system health monitoring, logging, and debugging (a minimal sketch follows this posting).

What We're Looking For (Must-Have Skills)
✅ Version Control: Proficiency in Git (GitLab / GitHub / Bitbucket)
✅ CI/CD Tools: Hands-on experience with AWS CodePipeline, GitHub Actions, or Jenkins
✅ Infrastructure as Code: Strong knowledge of Terraform, Ansible, Chef, or Pulumi
✅ Containerization & Orchestration: Experience with Docker & Kubernetes
✅ Cloud Expertise: Hands-on experience with AWS (preferred) or other cloud providers
✅ Monitoring & Alerting: Familiarity with CloudWatch, Prometheus, Datadog, or Grafana
✅ Scripting Knowledge: Python or Bash for automation

Bonus Skills (Good to Have, Not Mandatory)
➕ AWS Certifications: Solutions Architect, DevOps Engineer, Security, Networking
➕ Experience with Microsoft/Linux/F5 technologies
➕ Hands-on knowledge of database servers
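A minimal sketch of the kind of internal health-monitoring tool mentioned above, assuming hypothetical health endpoints: it probes each URL and records the outcome with Python's standard logging module. The endpoints and log file name are placeholders.

```python
import logging
import requests

logging.basicConfig(
    filename="healthcheck.log",
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)

# Placeholder endpoints for services you want to watch.
ENDPOINTS = ["https://api.example.com/health", "https://app.example.com/health"]

for url in ENDPOINTS:
    try:
        resp = requests.get(url, timeout=5)
        logging.info("%s -> %s", url, resp.status_code)
    except requests.RequestException as exc:
        logging.error("%s unreachable: %s", url, exc)
```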

Posted 6 days ago

Apply

1.0 - 3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Position Summary
QA - Analyst/Senior Analyst (ITS-Belgium) - Deloitte Support Services India Pvt. Ltd. (USI Supporting BE T&L DAF)

The USI Supporting Belgium Technology team develops and maintains products built on varied technologies. USI Belgium ITS has various groups which provide best-of-breed solutions to clients by following an agile system development methodology. This position is specifically for the Tax & Legal Digital Asset Factory team, which is an integral part of the USI BSO NSE (BE) ITS team. If you are a technology enthusiast looking forward to working on the latest and in-demand technology stack, then this role is for you.

Your Role
• Possess good knowledge of working with automation tools such as Selenium (a minimal sketch follows at the end of this posting).
• Set up, configure, and maintain automated testing environments with a focus on enabling continuous testing.
• Create testing plans, documentation, and processes, and execute tests.
• Demonstrate knowledge of various testing methodologies (Agile and Scrum) and of logging results.
• Familiarize yourself with the business functionality and technology used for the assigned applications (under test).
• Support, train, and mentor others on the various aspects of test automation.

Your Profile
• Engineering/Master's/Bachelor's degree in computer science with 1-3 years of experience.
• Good automation knowledge in one or more tools (Selenium, Tosca, JMeter, Postman, etc.).
• Good knowledge of API testing.
• Knowledge of performance testing.
• Good knowledge of C# .NET.
• Basic knowledge of ALM/Azure DevOps/CI&CD.
• Experience in defining, designing, creating, and executing tests for cross-browser compatibility.
• Knowledge of test automation infrastructure setup.
• Excellent interpersonal and communication skills.

Work Location: Hyderabad/Bengaluru
Shift Timings: 11 AM to 8 PM

The Team
Do you want technical challenges and fun? Do you want to make an impact in digital transformation? Join the Tax and Legal Digit@l Asset Factory! The vision of the Digit@l Asset Factory is to enable the Tax & Legal practice to become the undisputed digital leader in Belgium, regionally and globally. The team supports this vision by leveraging (cloud) technology to create innovative solutions that differentiate us from the competition through increased efficiency and an end-to-end, globally consistent digital experience.

Qualifications
Required: Bachelor's or Master's degree in a technical field.

Recruiting Tips
From developing a stand-out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Benefits
At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.

Our People and Culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Our Purpose
Deloitte's purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Professional Development
From entry-level employees to senior leaders, we believe there's always room to learn. We offer opportunities to build new skills, take on leadership opportunities, and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career.

Requisition code: 304425
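To make the Selenium expectation concrete, here is a hedged sketch of a basic browser test. The URL, element ids, and credentials are hypothetical, and it assumes a local ChromeDriver is available; it is an illustration of the tool, not Deloitte's test suite.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()  # assumes a local ChromeDriver is available
try:
    driver.get("https://app.example.com/login")  # placeholder URL
    driver.find_element(By.ID, "username").send_keys("qa_user")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "submit").click()
    assert "Dashboard" in driver.title  # simple post-login verification
finally:
    driver.quit()
```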

Posted 6 days ago

Apply

3.0 years

0 Lacs

India

Remote


Client: UK-Based Client
Location: Remote
Type: Freelancing / Remote
Availability: 8 hours/day
Shift Timing: 2 PM IST - 11 PM IST (UK Shift)
Experience: 3 to 9 years
Skills: SAP BO Admin

If you're interested, please share your CV at:
📧 thara.dhanaraj@excelenciaconsulting.com
📞 WhatsApp/Call: +91 73584 52333

Job Description
• 3 to 5 years of experience in SAP BusinessObjects.
• Install, configure, and administer SAP BusinessObjects BI 4.x in a secure, compliant environment.
• Maintain system availability and performance to support mission-critical compliance and audit reports.
• Collaborate with compliance officers, auditors, and data governance teams to define report requirements and ensure accuracy.
• Manage the security architecture, including user roles, group policies, and integration with LDAP/Active Directory, in accordance with compliance regulations (e.g., SOX, HIPAA, GDPR).
• Create, maintain, and monitor BO Universes and Web Intelligence reports, and schedule publications for compliance and regulatory purposes.
• Ensure auditability of the platform: implement and manage logging, versioning, and activity tracking.
• Perform regular system patching, upgrades, and security hardening as required by internal IT security standards.
• Support the development and automation of compliance-related KPIs and dashboards.
• Participate in audits, provide evidence of controls, and assist with data lineage and traceability.
• Document procedures, change logs, and system configurations to support audit readiness.
• Strong understanding of BusinessObjects components (CMC, BI LaunchPad, IDT/UDT, Web Intelligence, Crystal Reports).
• Proficient in SQL and working with data sources like SAP HANA, Oracle, or MS SQL Server.

Posted 6 days ago

Apply

5.0 years

0 Lacs

India

On-site


About the Job
We are looking for a hybrid DevOps & QA Automation Engineer to own the integrity, scalability, and reliability of both our infrastructure and our automated testing framework. You will ensure that our deployments are fast, secure, and compliant, and that our financial applications meet the highest standards of quality and performance. As the successful candidate, you will be expected to bring significant hands-on experience with Google Cloud Platform (GCP), leveraging its full suite of services to build and maintain a secure, scalable, and compliant infrastructure for our financial products.

Core Responsibilities

DevOps Responsibilities
• Manage and scale Kubernetes environments on Google Kubernetes Engine (GKE)
• Architect and maintain GCP-native infrastructure using Terraform, Helm, and GCP-specific modules
• Automate CI/CD pipelines using GitLab CI/CD, integrating seamlessly with GCP services
• Implement robust monitoring, alerting, and logging via Prometheus, Grafana, ELK/EFK, and GCP Logging
• Enforce security best practices, including IAM policies, GCP Secret Manager, firewall configurations, and TLS/mTLS
• Design and execute backup and disaster recovery strategies using Velero and GCP-native snapshot tools
• Manage GCP DNS, ingress via NGINX, and the certificate lifecycle using cert-manager on GCP

QA Automation Responsibilities
• Design, develop, and maintain automated test scripts for web, mobile, and backend systems
• Integrate automated tests into CI/CD pipelines
• Conduct REST API testing (e.g., Postman, Rest Assured); a minimal sketch follows this posting
• Perform regression testing and participate in code/test plan reviews
• Track and log defects; support resolution with developers
• Ensure compliance with security and regulatory standards in testing

You Will Be a Good Fit If You
• Are aligned with our values: Belief, Accountability & Ownership, Positivity, Execution, Speed
• Have 5+ years of experience in DevOps, QA, or both, ideally within fintech or banking
• Are proficient in CI/CD tools and automated testing frameworks (Selenium, Playwright, Cypress)
• Have experience in Java and scripting (Bash, Python, Go)
• Have a strong grasp of Kubernetes, GCP, Terraform, and secure infrastructure design
• Have worked in regulated environments and understand compliance and data security
• Have deep expertise in GCP services, architecture, and security best practices
• Have managed production workloads using GKE, Cloud Logging, GCP Secret Manager, IAM, and other GCP tools
• Are confident working in a GCP-first DevOps environment, optimizing both cost and performance across its service offerings
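A minimal pytest-style sketch of the REST API testing mentioned above, using the requests library; the base URL, endpoint, and field names are assumptions for illustration, not the client's real contract.

```python
import requests

BASE_URL = "https://api.example.com"  # placeholder service under test

def test_get_account_returns_expected_fields():
    resp = requests.get(f"{BASE_URL}/accounts/123", timeout=10)
    assert resp.status_code == 200
    body = resp.json()
    # Field names are illustrative; adapt to the real contract.
    assert {"id", "balance", "currency"} <= body.keys()
```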

Posted 6 days ago

Apply

5.0 years

6 - 9 Lacs

Hyderābād

On-site

Company Profile
LSEG (London Stock Exchange Group) is a world-leading financial markets infrastructure and data business. We are dedicated, open-access partners with a commitment to excellence in delivering services across Data & Analytics, Capital Markets, and Post Trade. Backed by three hundred years of experience, innovative technologies, and a team of over 23,000 people in 70 countries, our purpose is driving financial stability, empowering economies, and enabling customers to create sustainable growth.

We are evolving our Cloud Site Reliability Engineering team to move beyond support and operations. As a Lead Cloud SRE Engineer, you will form part of a diverse and inclusive organization that has full ownership of the availability, performance, and scalability of one of the most impactful platforms at LSEG.

Role Profile
In this role, you will join our Cloud SRE team within Cloud & Productivity Engineering as a Lead SRE Engineer. This team focuses on applying software engineering practices to IT operations tasks to maintain and improve the availability, scalability, and reliability of our cloud platform hosting LSEG applications. We strive to improve automation and increase the systems' self-healing capabilities. We monitor, measure, and optimize the platform's performance, pushing our capabilities forward and exceeding our customers' needs. We also work alongside architects, developers, and engineers to ensure efficient enterprise-scale AWS Landing Zone platforms and products, while being actively involved in decision-making areas such as automation, scalability, capacity, reliability, business continuity, disaster recovery, and governance.

Tech Profile/Essential Skills
• BS/MS degree in Computer Science, Software Engineering, or a related STEM degree, or meaningful professional experience.
• Proven 5 years' experience in Site Reliability Engineering with a focus on cloud platform landing zones and services.
• Proven leadership skills, with experience in mentoring and guiding engineering teams.
• Relevant cloud certifications such as AWS Solutions Architect Professional or AWS DevOps Professional.
• Ability to work in a fast-paced, dynamic environment and adapt to changing priorities.
• Experience with DevSecOps practices, including automation, continuous integration, continuous delivery, and infrastructure as code using tools such as Terraform and GitLab.
• 5 years' demonstrable experience creating and maintaining CI/CD pipelines and repositories.
• Experience working in Agile environments, with demonstrable experience of Agile principles, ceremonies, and practices.
• Experience implementing and managing platform and product observability, including dashboarding, logging, monitoring, alerting, and tracing with Datadog or cloud-native tooling.
• Strong problem-solving skills, root cause analysis, and incident/service management.
• Excellent verbal and written communication skills, with the ability to collaborate effectively with multi-functional teams.

Preferred Skills and Experience
• Proven experience deploying AWS Landing Zones in accordance with the AWS Well-Architected Framework.
• Solid working knowledge of setting up enterprise-scale Azure Landing Zones and hands-on experience with Microsoft's Cloud Adoption Framework.
• Proficiency in programming languages such as Python, Java, Go, etc.
• Sound understanding of financial institutions and markets.

Education and Professional Skills
• Relevant professional qualifications.
• BS/MS degree in Computer Science, Software Engineering, or a related STEM degree.
Detailed Responsibilities
• Lead, engineer, maintain, and optimize hybrid cloud platforms and services, focusing on automation, reliability, scalability, and performance.
• Lead, supervise, and mentor peers, providing guidance and support to ensure high performance and professional growth within the team.
• Be accountable for the team's work, ensuring high standards and successful project outcomes.
• Collaborate with cloud platform engineering teams, architects, and other cross-functional teams to enhance reliability in the build and release stages for the cloud platform and products.
• Develop and deploy automation tools and frameworks to reduce toil.
• Provide multi-functional teams with guidance and mentorship on best practices for cloud products and services.
• Adhere to DevSecOps best practices and industry standards to optimize the platform release strategy.
• Develop and maintain observability dashboards and self-healing capabilities.
• Continuously seek opportunities for automation and customer self-service to solve technical issues, reduce toil, and provide innovative solutions.
• Participate in Agile ceremonies and activities to meet engineering and business goals.
• Create and maintain up-to-date, comprehensive documentation for landing zone components, processes, and procedures.
• Foster a culture of customer excellence and continuous improvement for the SRE function.
• Follow and adhere to established ITSM processes and procedures (Incident, Request, Change, and Problem Management).

LSEG is a leading global financial markets infrastructure and data provider. Our purpose is driving financial stability, empowering economies, and enabling customers to create sustainable growth. Our purpose is the foundation on which our culture is built. Our values of Integrity, Partnership, Excellence, and Change underpin our purpose and set the standard for everything we do, every day. They go to the heart of who we are and guide our decision-making and everyday actions.

Working with us means that you will be part of a dynamic organisation of 25,000 people across 65 countries. However, we will value your individuality and enable you to bring your true self to work so you can help enrich our diverse workforce. You will be part of a collaborative and creative culture where we encourage new ideas and are committed to sustainability across our global business. You will experience the critical role we have in helping to re-engineer the financial ecosystem to support and drive sustainable economic growth. Together, we are aiming to achieve this growth by accelerating the just transition to net zero, enabling growth of the green economy, and creating inclusive economic opportunity.

LSEG offers a range of tailored benefits and support, including healthcare, retirement planning, paid volunteering days, and wellbeing initiatives. We are proud to be an equal opportunities employer. This means that we do not discriminate on the basis of anyone's race, religion, colour, national origin, gender, sexual orientation, gender identity, gender expression, age, marital status, veteran status, pregnancy or disability, or any other basis protected under applicable law. Conforming with applicable law, we can reasonably accommodate applicants' and employees' religious practices and beliefs, as well as mental health or physical disability needs.
Please take a moment to read this privacy notice carefully, as it describes what personal information London Stock Exchange Group (LSEG) (we) may hold about you, what it's used for, how it's obtained, your rights, and how to contact us as a data subject. If you are submitting as a Recruitment Agency Partner, it is essential and your responsibility to ensure that candidates applying to LSEG are aware of this privacy notice.

Posted 6 days ago

Apply

Exploring Logging Jobs in India

The logging job market in India is vibrant and offers a wide range of opportunities for job seekers interested in this field. Logging professionals are in demand across various industries such as IT, construction, forestry, and environmental management. If you are considering a career in logging, this article will provide you with valuable insights into the job market, salary range, career progression, related skills, and common interview questions.

Top Hiring Locations in India

  1. Bangalore
  2. Mumbai
  3. Delhi
  4. Hyderabad
  5. Chennai

These cities are known for their thriving industries where logging professionals are actively recruited.

Average Salary Range

The average salary range for logging professionals in India varies based on experience and expertise. Entry-level positions typically start at INR 3-5 lakhs per annum, while experienced professionals can earn upwards of INR 10-15 lakhs per annum.

Career Path

A typical career path in logging may include roles such as Logging Engineer, Logging Supervisor, Logging Manager, and Logging Director. Professionals may progress from entry-level positions to more senior roles such as Lead Logging Engineer or Logging Consultant.

Related Skills

In addition to logging expertise, employers often look for professionals with skills such as data analysis, problem-solving, project management, and communication skills. Knowledge of industry-specific software and tools may also be beneficial.

Interview Questions

  • What is logging and why is it important in software development? (basic)
  • Can you explain the difference between logging levels such as INFO, DEBUG, and ERROR? (medium) (see the first sketch after this list)
  • How do you handle log rotation in a large-scale application? (advanced) (see the rotation sketch after this list)
  • Have you worked with any logging frameworks like Log4j or Logback? (basic)
  • Describe a challenging logging issue you faced in a previous project and how you resolved it. (medium)
  • How do you ensure that log files are secure and comply with data protection regulations? (advanced)
  • What are the benefits of structured logging over traditional logging methods? (medium)
  • How would you optimize logging performance in a high-traffic application? (advanced)
  • Can you explain the concept of log correlation and how it is useful in troubleshooting? (medium)
  • Have you used any monitoring tools for real-time log analysis? (basic)
  • How do you handle log aggregation from distributed systems? (advanced)
  • What are the common pitfalls to avoid when implementing logging in a microservices architecture? (medium)
  • How do you troubleshoot a situation where logs are not being generated as expected? (medium)
  • Have you worked with log parsing tools to extract meaningful insights from log data? (medium)
  • How do you handle sensitive information in log files, such as passwords or personal data? (advanced) (see the redaction sketch after this list)
  • What is the role of logging in compliance with industry standards such as GDPR or HIPAA? (medium)
  • Can you explain the concept of log enrichment and how it improves log analysis? (medium)
  • How do you handle logging in a multi-threaded application to ensure thread safety? (advanced)
  • Have you implemented any custom log formats or log patterns in your projects? (medium)
  • How do you perform log monitoring and alerting to detect anomalies or errors in real-time? (medium)
  • What are the best practices for logging in cloud-based environments like AWS or Azure? (medium)
  • How do you integrate logging with other monitoring and alerting tools in a DevOps environment? (medium)
  • Can you discuss the role of logging in performance tuning and optimization of applications? (medium)
  • What are the key metrics and KPIs you track through log analysis to improve system performance? (medium)
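To help with preparation, here is a minimal Python sketch of the standard library's logging levels, referenced by the levels question above. Logger names and messages are illustrative only; the level hierarchy itself (DEBUG < INFO < WARNING < ERROR < CRITICAL) is standard.

```python
import logging

# Show all levels by configuring the root logger at DEBUG.
logging.basicConfig(level=logging.DEBUG, format="%(levelname)s %(message)s")
log = logging.getLogger(__name__)

log.debug("diagnostic detail, usually disabled in production")
log.info("routine application event")
log.warning("unexpected but recoverable condition")
log.error("an operation failed")
log.critical("the application cannot continue")
# Raising the level to INFO would suppress the DEBUG line above.
```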
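For the log rotation question, one common in-process approach is the standard library's RotatingFileHandler; the file name and size threshold below are arbitrary choices for illustration. In large-scale deployments, rotation is often delegated to logrotate or a log-shipping agent instead.

```python
import logging
from logging.handlers import RotatingFileHandler

log = logging.getLogger("app")
# Roll over at roughly 10 MB, keeping five old files (app.log.1 ... app.log.5).
handler = RotatingFileHandler("app.log", maxBytes=10_000_000, backupCount=5)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
log.addHandler(handler)
log.setLevel(logging.INFO)

for i in range(1000):
    log.info("event %d", i)
```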
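For the sensitive-information question, a hedged sketch of one possible approach: a logging.Filter that masks card-number-like digit runs before records are emitted. The regex and logger name are illustrative; real deployments usually combine this with field-level controls and avoid logging secrets in the first place.

```python
import logging
import re

class RedactFilter(logging.Filter):
    """Mask card-number-like digit runs before a record is emitted."""
    PATTERN = re.compile(r"\b\d{12,19}\b")

    def filter(self, record: logging.LogRecord) -> bool:
        record.msg = self.PATTERN.sub("[REDACTED]", str(record.msg))
        return True  # keep the (now redacted) record

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("payments")
log.addFilter(RedactFilter())

log.info("charge attempted for card 4111111111111111")
# prints: charge attempted for card [REDACTED]
```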

Closing Remark

As you embark on your journey to explore logging jobs in India, remember to prepare thoroughly for interviews by honing your technical skills and understanding industry best practices. With the right preparation and confidence, you can land a rewarding career in logging that aligns with your professional goals. Good luck!
