130.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description

Manager, Product Analyst – Quality

The Opportunity

Based in Hyderabad, join a global healthcare biopharma company and be part of a 130-year legacy of success backed by ethical integrity, forward momentum, and an inspiring mission to achieve new milestones in global healthcare. Be part of an organisation driven by digital technology and data-backed approaches that support a diversified portfolio of prescription medicines, vaccines, and animal health products. Drive innovation and execution excellence. Be part of a team with a passion for using data, analytics, and insights to drive decision-making, and which creates custom software, allowing us to tackle some of the world's greatest health threats.

Our Technology centers focus on creating a space where teams can come together to deliver business solutions that save and improve lives. An integral part of the company's IT operating model, Tech centers are globally distributed locations where each IT division has employees to enable our digital transformation journey and drive business outcomes. These locations, in addition to the other sites, are essential to supporting our business and strategy. A focused group of leaders in each tech center helps to ensure we can manage and improve each location: investing in the growth, success, and well-being of our people, making sure colleagues from each IT division feel a sense of belonging, and managing critical emergencies. Together, we leverage the strength of our team to collaborate globally, optimize connections, and share best practices across the Tech Centers.

Role Overview

As a Sr. Specialist, Product Analyst – Quality, you will be responsible for driving solution design, implementation, and continuous improvement of the different QMS tools with specific alignment to the Quality business processes.
Essential skills include a strong technical as well as business background, proficiency in project management methodologies (Agile, Scrum), and excellent organizational abilities. This role sits within the Quality Value Team and requires advanced experience in the life sciences industry, specifically with Quality Management Systems and their technology landscapes (in particular Veeva Vault Quality), as well as knowledge of GxP. The role plays a critical part during solution design in satisfying business needs and ensuring adaptability to future system scalability.

What Will You Do In This Role

- Apply a structured approach to discover, document, and manage business processes, user and stakeholder needs, including opportunity statements, use cases, insights, and requirements.
- Design, develop, and deploy UiPath RPA solutions to automate Quality Management System processes, including document routing, compliance reporting, deviation management, and audit preparation workflows.
- Build and maintain low-code applications on the Appian platform to support Quality business processes, translating complex QMS requirements into process models, SAIL interfaces, CDTs, records, and reports.
- Integrate Appian Quality solutions with external systems, including Veeva Vault Quality, TrackWise, and enterprise databases, via APIs, web services, and other integration methods.
- Gather insight into user journeys, behavior, motivation, and pain points; expose unarticulated problems and unmet needs.
- Document business process, business, and user needs in the form of problem statements to make up the backlog; facilitate the "how" with the development team.
- Gain expertise in the business area.
- Manage business analysis per agreed priority backlog items in JIRA.
- Participate in impact assessment activities, reviewing proposed changes and ensuring impact is understood.
- Deliver product enhancements through the agreed backlog process to ensure Quality solutions evolve to meet business needs.
- Ensure Quality solutions remain compliant as a validated solution through verification testing, documentation, and validation efforts.
- Provide overall leadership, guidance, and management of all aspects of a given solution, including requirements gathering, the enhancements delivery plan, and implementation.
- Initiate projects, including defining a scope/charter, identifying stakeholders, and establishing governance.
- Act as a bridge between business SMEs, technical teams, and non-technical stakeholders. Communicate delivery status, solution health, risks, and issues to all parties involved and ensure that everyone is aligned and informed.
- Conduct product status meetings and present updates to stakeholders and senior management.
- Evaluate delivery performance and implement continuous improvement practices.
- Understand the technical aspects as well as business process impacts to make informed decisions, provide guidance, and communicate effectively with the development team. This includes a deep understanding of the QMS business processes, technology stack, architecture, and potential technical challenges.
- Work closely with the Product Owner to prioritize and refine the product backlog, ensuring that the team focuses on delivering the most valuable features.
- Identify potential risks and develop mitigation strategies; proactively address issues that could impact project success.

What Should You Have

Minimum level of education required: Bachelor's degree in Computer Science, Engineering, MIS, Science, or a related field. The job requires a solid academic grounding in how information technology supports the delivery of business objectives.

Preferred level of education: Veeva certifications (Veeva Vault / Vault Quality Suite / QMS); the role holder has completed the Certified Vault Training and is up to date.

3+ years of experience in technical project management, with a strong understanding of project management methodologies (Agile, Scrum, Waterfall).
- Understanding of Quality Management System capabilities (audit/inspection management, CAPA management, deviations management, complaint management).
- Experience in solution delivery with GMP systems.
- Experience with architecture, integration, interfaces, portals, and/or analytics.
- Experience with UiPath RPA development, including bot creation, workflow orchestration, exception handling, and deployment in GxP-regulated environments with validation documentation requirements.
- Proficiency in Appian low-code platform development, including process modeling, SAIL interface design, data integration with external databases (SQL, Oracle), and API connectivity for Quality system implementations.
- Understanding of the Systems Development Life Cycle (SDLC) and current Good Manufacturing Practice (cGMP) processes.
- Knowledge of and experience with QMS-relevant tools such as Veeva Vault Quality and TrackWise.
- Proven experience leading complex technical projects in a fast-paced environment.
- Strong technical background with knowledge of software development, systems integration, or related areas.
- Excellent organizational, leadership, and decision-making skills.
- Strong analytical and problem-solving abilities.
- Effective communication and interpersonal skills to liaise with cross-functional teams.
- Ability to manage multiple projects simultaneously and adapt to changing priorities.

Who We Are

We are known as Merck & Co., Inc., Rahway, New Jersey, USA in the United States and Canada, and MSD everywhere else. For more than a century, we have been inventing for life, bringing forward medicines and vaccines for many of the world's most challenging diseases. Today, our company continues to be at the forefront of research to deliver innovative health solutions and advance the prevention and treatment of diseases that threaten people and animals around the world.

What We Look For

Imagine getting up in the morning for a job as important as helping to save and improve lives around the world.
Here, you have that opportunity. You can put your empathy, creativity, digital mastery, or scientific genius to work in collaboration with a diverse group of colleagues who pursue and bring hope to countless people who are battling some of the most challenging diseases of our time. Our team is constantly evolving, so if you are intellectually curious, join us—and start making your impact today.

#HYDIT2025

Current Employees apply HERE

Current Contingent Workers apply HERE

Search Firm Representatives, Please Read Carefully

Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs/resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific. Please, no phone calls or emails.

Employee Status: Regular
Flexible Work Arrangements: Hybrid

Required Skills: Asset Management, Benefits Management, Management System Development, Product Management, Requirements Management, Stakeholder Relationship Management, Strategic Planning, System Designs

Job Posting End Date: 09/8/2025. A job posting is effective until 11:59:59 PM on the day before the listed job posting end date. Please ensure you apply to a job posting no later than the day before the job posting end date.

Requisition ID: R352363
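QMS workflows such as the deviation management mentioned in this role move records through controlled lifecycle states with an audit trail. A minimal, illustrative sketch of that idea in plain Python; the state names and transitions here are hypothetical and are not Veeva Vault's actual lifecycle configuration:

```python
# Illustrative deviation-record lifecycle as a small state machine.
# States and transitions are hypothetical examples, not a real Veeva
# Vault QMS configuration.
ALLOWED = {
    "Open": {"Under Investigation"},
    "Under Investigation": {"CAPA Required", "Closed"},
    "CAPA Required": {"Closed"},
    "Closed": set(),
}

class Deviation:
    def __init__(self, record_id: str):
        self.record_id = record_id
        self.state = "Open"
        self.history = ["Open"]  # audit trail of every state reached

    def transition(self, new_state: str) -> None:
        # Reject any transition the lifecycle does not permit.
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"{self.state} -> {new_state} is not permitted")
        self.state = new_state
        self.history.append(new_state)

d = Deviation("DEV-001")
d.transition("Under Investigation")
d.transition("CAPA Required")
d.transition("Closed")
```

The point of the sketch is that every move is validated against an explicit transition table and recorded, which is the behavior a validated QMS solution has to preserve.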
Posted 2 days ago
9.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Role Description

Tech Lead – Azure/Snowflake & AWS Migration

Key Responsibilities

- Design and develop scalable data pipelines using Snowflake as the primary data platform, integrating with tools like Azure Data Factory, Synapse Analytics, and AWS services.
- Build robust, efficient SQL- and Python-based data transformations for cleansing, enrichment, and integration of large-scale datasets.
- Lead migration initiatives from AWS-based data platforms to a Snowflake-centered architecture, including:
  - Rebuilding AWS Glue pipelines in Azure Data Factory or using Snowflake-native ELT approaches.
  - Migrating EMR Spark jobs to Snowflake SQL or Python-based pipelines.
  - Migrating Redshift workloads to Snowflake with schema conversion and performance optimization.
  - Transitioning S3-based data lakes (Hudi, Hive) to Snowflake external tables via ADLS Gen2 or Azure Blob Storage.
  - Redirecting Kinesis/MSK streaming data to Azure Event Hubs, followed by ingestion into Snowflake using Streams & Tasks or Snowpipe.
- Support database migrations from AWS RDS (Aurora PostgreSQL, MySQL, Oracle) to Snowflake, focusing on schema translation, compatibility handling, and data movement at scale.
- Design modern Snowflake lakehouse-style architectures that incorporate raw, staging, and curated zones, with support for time travel, cloning, zero-copy restore, and data sharing.
- Integrate Azure Functions or Logic Apps with Snowflake for orchestration and trigger-based automation.
- Implement security best practices, including Azure Key Vault integration and Snowflake role-based access control, data masking, and network policies.
- Optimize Snowflake performance and costs using clustering, multi-cluster warehouses, materialized views, and result caching.
- Support CI/CD processes for Snowflake pipelines using Git, Azure DevOps or GitHub Actions, and SQL code versioning.
- Maintain well-documented data engineering workflows, architecture diagrams, and technical documentation to support collaboration and long-term platform maintainability.

Required Qualifications

- 9+ years of data engineering experience, with 3+ years on the Microsoft Azure stack and hands-on Snowflake expertise.
- Proficiency in:
  - Python for scripting and ETL orchestration
  - SQL for complex data transformation and performance tuning in Snowflake
  - Azure Data Factory and Synapse Analytics (SQL Pools)
- Experience in migrating workloads from AWS to Azure/Snowflake, including services such as Glue, EMR, Redshift, Lambda, Kinesis, S3, and MSK.
- Strong understanding of cloud architecture and hybrid data environments across AWS and Azure.
- Hands-on experience with database migration, schema conversion, and tuning in PostgreSQL, MySQL, and Oracle RDS.
- Familiarity with Azure Event Hubs, Logic Apps, and Key Vault.
- Working knowledge of CI/CD, version control (Git), and DevOps principles applied to data engineering workloads.

Preferred Qualifications

- Extensive experience with Snowflake Streams, Tasks, Snowpipe, external tables, and data sharing.
- Exposure to MSK-to-Event Hubs migration and streaming data integration into Snowflake.
- Familiarity with Terraform or ARM templates for Infrastructure-as-Code (IaC) in Azure environments.
- Certification such as SnowPro Core, Azure Data Engineer Associate, or equivalent.

Skills: Azure, AWS Redshift, Athena, Azure Data Lake
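Migrating Redshift workloads to Snowflake involves the schema conversion this role describes: each source column type needs a Snowflake equivalent. A hedged sketch of a type-translation helper; the mapping below is a small illustrative subset, not a complete or authoritative conversion table:

```python
# Illustrative subset of a Redshift -> Snowflake column-type mapping
# for schema conversion. Real migrations need a fuller table (IDENTITY
# columns, encodings, DECIMAL precision, etc.) and tooling validation.
TYPE_MAP = {
    "SMALLINT": "NUMBER(5,0)",
    "INTEGER": "NUMBER(10,0)",
    "BIGINT": "NUMBER(19,0)",
    "REAL": "FLOAT",
    "DOUBLE PRECISION": "FLOAT",
    "VARCHAR": "VARCHAR",        # length suffix is preserved below
    "TIMESTAMP": "TIMESTAMP_NTZ",
    "TIMESTAMPTZ": "TIMESTAMP_TZ",
    "SUPER": "VARIANT",          # semi-structured data maps to VARIANT
}

def translate_column(redshift_type: str) -> str:
    """Translate a Redshift type name to an illustrative Snowflake equivalent."""
    base = redshift_type.split("(")[0].strip().upper()
    mapped = TYPE_MAP.get(base)
    if mapped is None:
        raise KeyError(f"no mapping for Redshift type {redshift_type!r}")
    # Preserve any length suffix when the base name is unchanged, e.g. VARCHAR(256).
    if "(" in redshift_type and mapped == base:
        return redshift_type.upper()
    return mapped
```

In practice this kind of mapping is one input to a larger conversion pipeline that also handles constraints, defaults, and data validation after load.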
Posted 2 days ago
9.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Role Description

Senior Data Engineer – Azure/Snowflake Migration

Key Responsibilities

- Design and develop scalable data pipelines using Snowflake as the primary data platform, integrating with tools like Azure Data Factory, Synapse Analytics, and AWS services.
- Build robust, efficient SQL- and Python-based data transformations for cleansing, enrichment, and integration of large-scale datasets.
- Lead migration initiatives from AWS-based data platforms to a Snowflake-centered architecture, including:
  - Rebuilding AWS Glue pipelines in Azure Data Factory or using Snowflake-native ELT approaches.
  - Migrating EMR Spark jobs to Snowflake SQL or Python-based pipelines.
  - Migrating Redshift workloads to Snowflake with schema conversion and performance optimization.
  - Transitioning S3-based data lakes (Hudi, Hive) to Snowflake external tables via ADLS Gen2 or Azure Blob Storage.
  - Redirecting Kinesis/MSK streaming data to Azure Event Hubs, followed by ingestion into Snowflake using Streams & Tasks or Snowpipe.
- Support database migrations from AWS RDS (Aurora PostgreSQL, MySQL, Oracle) to Snowflake, focusing on schema translation, compatibility handling, and data movement at scale.
- Design modern Snowflake lakehouse-style architectures that incorporate raw, staging, and curated zones, with support for time travel, cloning, zero-copy restore, and data sharing.
- Integrate Azure Functions or Logic Apps with Snowflake for orchestration and trigger-based automation.
- Implement security best practices, including Azure Key Vault integration and Snowflake role-based access control, data masking, and network policies.
- Optimize Snowflake performance and costs using clustering, multi-cluster warehouses, materialized views, and result caching.
- Support CI/CD processes for Snowflake pipelines using Git, Azure DevOps or GitHub Actions, and SQL code versioning.
- Maintain well-documented data engineering workflows, architecture diagrams, and technical documentation to support collaboration and long-term platform maintainability.

Required Qualifications

- 7+ years of data engineering experience, with 3+ years on the Microsoft Azure stack and hands-on Snowflake expertise.
- Proficiency in:
  - Python for scripting and ETL orchestration
  - SQL for complex data transformation and performance tuning in Snowflake
  - Azure Data Factory and Synapse Analytics (SQL Pools)
- Experience in migrating workloads from AWS to Azure/Snowflake, including services such as Glue, EMR, Redshift, Lambda, Kinesis, S3, and MSK.
- Strong understanding of cloud architecture and hybrid data environments across AWS and Azure.
- Hands-on experience with database migration, schema conversion, and tuning in PostgreSQL, MySQL, and Oracle RDS.
- Familiarity with Azure Event Hubs, Logic Apps, and Key Vault.
- Working knowledge of CI/CD, version control (Git), and DevOps principles applied to data engineering workloads.

Preferred Qualifications

- Extensive experience with Snowflake Streams, Tasks, Snowpipe, external tables, and data sharing.
- Exposure to MSK-to-Event Hubs migration and streaming data integration into Snowflake.
- Familiarity with Terraform or ARM templates for Infrastructure-as-Code (IaC) in Azure environments.
- Certification such as SnowPro Core, Azure Data Engineer Associate, or equivalent.

Skills: AWS, Azure Data Lake, Python
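Ingestion via Snowflake Streams & Tasks, called out in these roles, typically pairs a change stream on a landing table with a scheduled task that loads new rows when the stream has data. A hedged sketch that generates that DDL as strings; the object names and warehouse are placeholders, and in practice this SQL lives in version control and is deployed through CI/CD:

```python
# Sketch: generate Snowflake DDL for one Streams & Tasks ingestion hop
# (e.g. Event Hubs -> landing table -> curated table). Names and the
# warehouse are illustrative placeholders.
def stream_and_task_ddl(landing: str, curated: str, warehouse: str) -> list:
    stream = f"{landing}_STREAM"
    return [
        # Change-tracking stream over the landing table.
        f"CREATE OR REPLACE STREAM {stream} ON TABLE {landing};",
        # Task fires on a schedule, but only does work when the stream
        # actually has new rows.
        (
            f"CREATE OR REPLACE TASK {curated}_LOAD "
            f"WAREHOUSE = {warehouse} SCHEDULE = '1 MINUTE' "
            f"WHEN SYSTEM$STREAM_HAS_DATA('{stream}') "
            f"AS INSERT INTO {curated} SELECT * FROM {stream};"
        ),
        # Tasks are created suspended and must be resumed explicitly.
        f"ALTER TASK {curated}_LOAD RESUME;",
    ]

ddl = stream_and_task_ddl("RAW_EVENTS", "CURATED_EVENTS", "INGEST_WH")
```

Consuming the stream inside the task's INSERT advances the stream offset, so each batch of changes is processed exactly once.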
Posted 2 days ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
As a Site Reliability Engineer at Rapid Circle, you will have the opportunity to make a significant impact and drive positive change through your work. Our Cloud Pioneers are dedicated to assisting clients in their digital transformation journey, and if you are someone who thrives on constant improvement and positive change, then this role is perfect for you.

In this position, you will collaborate with customers on various projects across different sectors, such as healthcare, manufacturing, and energy. Your contributions may involve ensuring the secure availability of research data in the healthcare industry or working on challenging projects in the manufacturing and energy markets.

At Rapid Circle, we foster a culture of curiosity and continuous improvement. We are dedicated to enhancing our expertise to assist customers in navigating a rapidly evolving landscape. Through knowledge sharing and exploration of new learning avenues, we aim to stay at the forefront of technological advancements. As our company experiences rapid growth, we are seeking the right individual to join our team.

In this role, you will have the autonomy to pursue personal development opportunities. With a wealth of in-house expertise (MVPs) spanning the Netherlands, Australia, and India, you will have the chance to collaborate closely with international colleagues. This collaborative environment will enable you to challenge yourself, carve out your growth path, and embrace freedom, entrepreneurship, and continuous development, all key values at Rapid Circle.

Your responsibilities as a Site Reliability Engineer will include:
- Collaborating with development partners to shape system architecture, design, and implementations for improved reliability, performance, efficiency, and scalability.
- Monitoring and measuring key services, raising alerts as necessary.
- Automating deployment and configuration processes.
- Developing reliability tools and frameworks for use by all engineers.
- Participating in on-call duties for critical systems, leading incident response, and conducting post-mortem analysis and reviews.

Desired Skills:
- Good understanding of IAM (Identity and Access Management) in the cloud.
- Familiarity with DevOps and SAFe/Scrum methodologies.

Certifications:
- Must have: Active Kubernetes certification.
- Good to have: Cloud certification.

Key Skills:
- Expertise or deep working knowledge in Kubernetes and image building for cloud platform (AWS & Azure) security services.
- Proficiency in HashiCorp products such as Vault and Terraform.
- Expertise in coding infrastructure, automation, and orchestration.
- Working knowledge of Kubernetes, Terraform, Prometheus, Elastic, Jenkins, or similar tools.
- Proficiency in multiple cloud platforms, including AWS and Azure.
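Monitoring key services and raising alerts, as described above, is often framed in SRE practice as error-budget burn-rate alerting: page only when the error budget is being consumed much faster than planned, over more than one window. A minimal sketch; the 14.4x threshold is the commonly cited fast-burn rule for a 99.9% SLO, and both the thresholds and windows are assumptions to tune per service:

```python
# Sketch of multi-window error-budget burn-rate alerting, a common SRE
# pattern. Thresholds and SLO are illustrative and should be tuned.
def burn_rate(error_ratio: float, slo: float = 0.999) -> float:
    """How fast the error budget is being consumed (1.0 = exactly on budget)."""
    budget = 1.0 - slo  # allowed error ratio, e.g. 0.001 for a 99.9% SLO
    return error_ratio / budget

def should_page(short_window_errors: float, long_window_errors: float) -> bool:
    # Require the fast-burn threshold in BOTH a short and a long window,
    # so a brief spike alone does not page anyone.
    return (burn_rate(short_window_errors) > 14.4
            and burn_rate(long_window_errors) > 14.4)
```

The same ratios are usually computed in the monitoring system itself (e.g. as Prometheus recording rules); the Python here just makes the arithmetic explicit.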
Posted 2 days ago
0.0 years
0 Lacs
Varthur, Bengaluru, Karnataka
On-site
Outer Ring Road, Devarabisanahalli Vlg, Varthur Hobli, Bldg 2A, Twr 3, Phs 1, Bangalore, IN, 560103
Information Technology | 4230 | Band B | Satyanarayana Ambati

Job Description

Application Developer, Bangalore, Karnataka, India

AXA XL offers risk transfer and risk management solutions to clients globally. We offer worldwide capacity, flexible underwriting solutions, a wide variety of client-focused loss prevention services, and a team-based account management approach. AXA XL recognizes data and information as critical business assets, both in terms of managing risk and enabling new business opportunities. This data should not only be high quality, but also actionable – enabling AXA XL's executive leadership team to maximize benefits and facilitate sustained advantage.

What you'll be DOING

What will your essential responsibilities include?

We are seeking an experienced ETL Developer to support and evolve our enterprise data integration workflows. The ideal candidate will have deep expertise in Informatica PowerCenter, strong hands-on experience with Azure Data Factory and Databricks, and a passion for building scalable, reliable ETL pipelines. This role is critical for both day-to-day operational reliability and long-term modernization of our data engineering stack in the Azure cloud.

Key Responsibilities:

- Maintain, monitor, and troubleshoot existing Informatica PowerCenter ETL workflows to ensure operational reliability and data accuracy.
- Enhance and extend ETL processes to support new data sources, updated business logic, and scalability improvements.
- Develop and orchestrate PySpark notebooks in Azure Databricks for data transformation, cleansing, and enrichment.
- Configure and manage Databricks clusters for performance optimization and cost efficiency.
- Implement Delta Lake solutions that support ACID compliance, versioning, and time travel for reliable data lake operations.
- Automate data workflows using Databricks Jobs and Azure Data Factory (ADF) pipelines.
- Design and manage scalable ADF pipelines, including parameterized workflows and reusable integration patterns.
- Integrate with Azure Blob Storage and ADLS Gen2 using Spark APIs for high-performance data ingestion and output.
- Ensure data quality, consistency, and governance across legacy and cloud-based pipelines.
- Collaborate with data analysts, engineers, and business teams to deliver clean, validated data for reporting and analytics.
- Participate in the full Software Development Life Cycle (SDLC) from design through deployment, with an emphasis on maintainability and audit readiness.
- Develop maintainable and efficient ETL logic and scripts following best practices in security and performance.
- Troubleshoot pipeline issues across data infrastructure layers, identifying and resolving root causes to maintain reliability.
- Create and maintain clear documentation of technical designs, workflows, and data processing logic for long-term maintainability and knowledge sharing.
- Stay informed on emerging cloud and data engineering technologies to recommend improvements and drive innovation.
- Follow internal controls, audit protocols, and secure data handling procedures to support compliance and operational standards.
- Provide accurate time and effort estimates for assigned development tasks, accounting for complexity and risk.
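Several of the responsibilities above (parameterized ADF pipelines, Delta Lake, data ingestion) revolve around loading only the data that changed since the last run. A minimal, library-free sketch of the high-watermark pattern commonly used for such incremental loads; the row shape and the `modified_at` column name are illustrative:

```python
# Sketch of the high-watermark pattern behind incremental loads:
# pull only rows modified after the last successful run, and advance
# the watermark only once the batch commits. Row shape is illustrative.
from datetime import datetime

def incremental_batch(rows, last_watermark: datetime):
    """Return (rows newer than the watermark, new watermark value)."""
    fresh = [r for r in rows if r["modified_at"] > last_watermark]
    # If nothing is new, the watermark stays put rather than regressing.
    new_watermark = max((r["modified_at"] for r in fresh), default=last_watermark)
    return fresh, new_watermark
```

In an ADF or Databricks implementation the watermark is typically persisted in a control table and passed into the pipeline as a parameter; the logic is the same.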
What you will BRING

We're looking for someone who has these abilities and skills:

- Advanced experience with Informatica PowerCenter, including mappings, workflows, session tuning, and parameterization
- Expertise in Azure Databricks + PySpark, including:
  - Notebook development
  - Cluster configuration and tuning
  - Delta Lake (ACID, versioning, time travel)
  - Job orchestration via Databricks Jobs or ADF
  - Integration with Azure Blob Storage and ADLS Gen2 using Spark APIs
- Strong hands-on experience with Azure Data Factory:
  - Building and managing pipelines
  - Parameterization and dynamic datasets
  - Notebook integration and pipeline monitoring
- Proficiency in SQL, PL/SQL, and scripting languages such as Python, Bash, or PowerShell
- Strong understanding of data warehousing, dimensional modeling, and data profiling
- Familiarity with Git, CI/CD pipelines, and modern DevOps practices
- Working knowledge of data governance, audit trails, metadata management, and compliance standards such as HIPAA and GDPR
- Effective problem-solving and troubleshooting skills with the ability to resolve performance bottlenecks and job failures
- Awareness of Azure Functions, App Services, API Management, and Application Insights
- Understanding of Azure Key Vault for secrets and credential management
- Familiarity with Spark-based big data ecosystems (e.g., Hive, Kafka) is a plus

Who WE are

AXA XL, the P&C and specialty risk division of AXA, is known for solving complex risks. For mid-sized companies, multinationals, and even some inspirational individuals, we don't just provide re/insurance, we reinvent it. How? By combining a comprehensive and efficient capital platform, data-driven insights, leading technology, and the best talent in an agile and inclusive workspace, empowered to deliver top client service across all our lines of business: property, casualty, professional, financial lines, and specialty. With an innovative and flexible approach to risk solutions, we partner with those who move the world forward.
Learn more at axaxl.com

What we OFFER

Inclusion

AXA XL is committed to equal employment opportunity and will consider applicants regardless of gender, sexual orientation, age, ethnicity and origins, marital status, religion, disability, or any other protected characteristic. At AXA XL, we know that an inclusive culture enables business growth and is critical to our success. That's why we have made a strategic commitment to attract, develop, advance, and retain the most inclusive workforce possible, and to create a culture where everyone can bring their full selves to work and reach their highest potential. It's about helping one another — and our business — to move forward and succeed.

- Five Business Resource Groups focused on gender, LGBTQ+, ethnicity and origins, disability, and inclusion, with 20 chapters around the globe
- Robust support for Flexible Working Arrangements
- Enhanced family-friendly leave benefits
- Named to the Diversity Best Practices Index
- Signatory to the UK Women in Finance Charter

Learn more at axaxl.com/about-us/inclusion-and-diversity. AXA XL is an Equal Opportunity Employer.

Total Rewards

AXA XL's Reward program is designed to take care of what matters most to you, covering the full picture of your health, wellbeing, lifestyle, and financial security. It provides competitive compensation and personalized, inclusive benefits that evolve as you do. We're committed to rewarding your contribution for the long term, so you can be your best self today and look forward to the future with confidence.

Sustainability

At AXA XL, Sustainability is integral to our business strategy. In an ever-changing world, AXA XL protects what matters most for our clients and communities. We know that sustainability is at the root of a more resilient future. Our 2023-26 Sustainability strategy, called "Roots of resilience", focuses on protecting natural ecosystems, addressing climate change, and embedding sustainable practices across our operations.
Our Pillars:

Valuing nature: How we impact nature affects how nature impacts us. Resilient ecosystems - the foundation of a sustainable planet and society - are essential to our future. We're committed to protecting and restoring nature - from mangrove forests to the bees in our backyard - by increasing biodiversity awareness and inspiring clients and colleagues to put nature at the heart of their plans.

Addressing climate change: The effects of a changing climate are far-reaching and significant. Unpredictable weather, increasing temperatures, and rising sea levels cause both social inequalities and environmental disruption. We're building a net zero strategy, developing insurance products and services, and mobilizing to advance thought leadership and investment in societal-led solutions.

Integrating ESG: All companies have a role to play in building a more resilient future. Incorporating ESG considerations into our internal processes and practices builds resilience from the roots of our business. We're training our colleagues, engaging our external partners, and evolving our sustainability governance and reporting.

AXA Hearts in Action: We have established volunteering and charitable giving programs to help colleagues support causes that matter most to them, known as AXA XL's "Hearts in Action" programs. These include our Matching Gifts program, Volunteering Leave, and our annual volunteering day, the Global Day of Giving.

For more information, please see axaxl.com/sustainability.
Posted 2 days ago
0.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Job Information

Date Opened: 07/30/2025
Job Type: Full time
Industry: IT Services
City: Bangalore North
State/Province: Karnataka
Country: India
Zip/Postal Code: 560001

Job Description

Sr. AWS DevOps Engineer

- Cloud: AWS (expert) + Azure (familiar)
  - AWS expertise, hands-on (VPC, EKS, storage, networking, RDS, Redis, ELK, ACM, Vault, KMS, RabbitMQ …)
  - Azure familiarity (AKS, storage, Cosmos DB, Postgres, Redis, Key Vault), RabbitMQ, KeyCloak …
- OS: Windows, Linux
- Programming: PowerShell, Python, Shell
- Understanding of C# and JavaScript languages, build and deployment process
- SCM and Build Orchestrator: GitLab CI/CD, GitHub Actions
- Artifact Management: Artifactory
- Quality: SonarQube, MegaLinter, MSTest
- Automation Tools: Terraform, Ansible, Chef, Vault
- Containers/Virtualization: Docker / Docker Compose, K8s
- Process: GitOps, GitFlow, Branching, Versioning, Tagging, Release
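The "Versioning, Tagging, Release" items in the process list above usually rest on semantic-version tags. A small illustrative helper for bumping such tags; it assumes a `v<major>.<minor>.<patch>` tag format, which is a convention rather than anything mandated by Git or GitFlow:

```python
# Sketch: bump a semantic-version release tag of the form vX.Y.Z,
# as used in GitFlow-style release processes. The tag format is an
# assumed convention.
import re

def bump_tag(tag: str, part: str = "patch") -> str:
    m = re.fullmatch(r"v(\d+)\.(\d+)\.(\d+)", tag)
    if not m:
        raise ValueError(f"not a release tag: {tag!r}")
    major, minor, patch = map(int, m.groups())
    if part == "major":
        major, minor, patch = major + 1, 0, 0
    elif part == "minor":
        minor, patch = minor + 1, 0
    else:
        patch += 1  # default: patch release
    return f"v{major}.{minor}.{patch}"
```

A CI job would typically read the latest tag, call something like this, and push the new tag as part of the release stage.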
Posted 2 days ago
0.0 - 12.0 years
0 Lacs
Delhi, Delhi
On-site
About us Bain & Company is a global management consulting firm that helps the world’s most ambitious change makers define the future. Across 65 offices in 40 countries, we work alongside our clients as one team with a shared ambition to achieve extraordinary results, outperform the competition and redefine industries. Since our founding in 1973, we have measured our success by the success of our clients, and we proudly maintain the highest level of client advocacy in the industry. In 2004, the firm established its presence in the Indian market by opening the Bain Capability Center (BCC) in New Delhi. The BCC is now known as BCN (Bain Capability Network), with its nodes across various geographies. BCN is the largest and an integral unit of Expert Client Delivery (ECD). ECD plays a critical role as it adds value to Bain's case teams globally by supporting them with analytics and research solutioning across all industries, specific domains for corporate cases, client development, private equity diligence or Bain intellectual property. The BCN comprises Consulting Services, Knowledge Services and Shared Services. Who you will work with Pyxis leverages a broad portfolio of 50+ alternative datasets to provide real-time market intelligence and customer insights through a unique business model that enables us to provide our clients with competitive intelligence unrivaled in the market today. We provide insights and data via custom one-time projects or ongoing subscriptions to data feeds and visualization tools. We also offer custom data and analytics projects to suit our clients’ needs. Pyxis can help teams answer core questions about market dynamics, products, customer behavior, and ad spending on Amazon, with a focus on providing our data and insights to clients in the way that best suits their needs.
Refer to: www.pyxisbybain.com What you’ll do Setting up tools and required infrastructure Defining and setting development, test, release, update, and support processes for DevOps operations Reviewing, verifying, and validating the software code developed in the project Troubleshooting and fixing code bugs Monitoring processes during the entire lifecycle for adherence, and updating or creating new processes for improvement and minimizing waste Encouraging and building automated processes wherever possible Identifying and deploying cybersecurity measures by continuously performing vulnerability assessment and risk management Incident management and root cause analysis Selecting and deploying appropriate CI/CD tools Striving for continuous improvement and building a continuous integration, continuous delivery, and continuous deployment (CI/CD) pipeline Mentoring and guiding team members Managing periodic reporting on progress to management About you A Bachelor’s or Master’s degree in Computer Science or a related field 4+ years of software development experience, with 3+ years as a DevOps engineer High proficiency in cloud management (AWS heavily preferred), including networking, API gateways, infra deployment automation, and cloud ops Knowledge of DevOps/code/infra management tools (GitHub, SonarQube, Snyk, AWS X-Ray, Docker, Datadog and containerization) Infra automation using Terraform, environment creation and management, containerization using Docker Proficiency with Python Disaster recovery, implementation of high-availability apps/infra, business continuity planning What makes us a great place to work We are proud to be consistently recognized as one of the world's best places to work, a champion of diversity and a model of social responsibility.
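As a rough illustration of the CI/CD pipeline-building responsibility described above, here is a toy stage-sequencing sketch; the stage names are invented for illustration and do not describe any actual pipeline:

```python
# Toy CI/CD pipeline runner: stages execute in order, first failure halts.
def run_pipeline(stages):
    """Run (name, step) pairs in order; return (completed, failed_stage)."""
    completed = []
    for name, step in stages:
        if not step():
            return completed, name  # a failed stage stops the pipeline
        completed.append(name)
    return completed, None

stages = [("lint", lambda: True), ("test", lambda: True), ("deploy", lambda: False)]
print(run_pipeline(stages))  # (['lint', 'test'], 'deploy')
```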
We are currently ranked the #1 consulting firm on Glassdoor’s Best Places to Work list, and we have maintained a spot in the top four on Glassdoor's list for the last 12 years. We believe that diversity, inclusion and collaboration are key to building extraordinary teams. We hire people with exceptional talents, abilities and potential, then create an environment where you can become the best version of yourself and thrive both professionally and personally. We are publicly recognized by external parties such as Fortune, Vault, Mogul, Working Mother, Glassdoor and the Human Rights Campaign for being a great place to work for diversity and inclusion, women, LGBTQ and parents.
Posted 2 days ago
0.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Job Description: Application Developer Bangalore, Karnataka, India AXA XL offers risk transfer and risk management solutions to clients globally. We offer worldwide capacity, flexible underwriting solutions, a wide variety of client-focused loss prevention services and a team-based account management approach. AXA XL recognizes data and information as critical business assets, both in terms of managing risk and enabling new business opportunities. This data should not only be high quality, but also actionable – enabling AXA XL’s executive leadership team to maximize benefits and facilitate sustained advantage. What you’ll be DOING What will your essential responsibilities include? We are seeking an experienced ETL Developer to support and evolve our enterprise data integration workflows. The ideal candidate will have deep expertise in Informatica PowerCenter, strong hands-on experience with Azure Data Factory and Databricks, and a passion for building scalable, reliable ETL pipelines. This role is critical for both day-to-day operational reliability and long-term modernization of our data engineering stack in the Azure cloud. Key Responsibilities: Maintain, monitor, and troubleshoot existing Informatica PowerCenter ETL workflows to ensure operational reliability and data accuracy. Enhance and extend ETL processes to support new data sources, updated business logic, and scalability improvements. Develop and orchestrate PySpark notebooks in Azure Databricks for data transformation, cleansing, and enrichment. Configure and manage Databricks clusters for performance optimization and cost efficiency. Implement Delta Lake solutions that support ACID compliance, versioning, and time travel for reliable data lake operations. Automate data workflows using Databricks Jobs and Azure Data Factory (ADF) pipelines. Design and manage scalable ADF pipelines, including parameterized workflows and reusable integration patterns. 
Integrate with Azure Blob Storage and ADLS Gen2 using Spark APIs for high-performance data ingestion and output. Ensure data quality, consistency, and governance across legacy and cloud-based pipelines. Collaborate with data analysts, engineers, and business teams to deliver clean, validated data for reporting and analytics. Participate in the full Software Development Life Cycle (SDLC) from design through deployment, with an emphasis on maintainability and audit readiness. Develop maintainable and efficient ETL logic and scripts following best practices in security and performance. Troubleshoot pipeline issues across data infrastructure layers, identifying and resolving root causes to maintain reliability. Create and maintain clear documentation of technical designs, workflows, and data processing logic for long-term maintainability and knowledge sharing. Stay informed on emerging cloud and data engineering technologies to recommend improvements and drive innovation. Follow internal controls, audit protocols, and secure data handling procedures to support compliance and operational standards. Provide accurate time and effort estimates for assigned development tasks, accounting for complexity and risk. 
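The cleansing and enrichment work described above would normally be written as PySpark DataFrame transformations in Databricks. The following plain-Python sketch shows only the kind of row-level logic involved; field names and validation rules are hypothetical:

```python
# Illustrative ETL cleansing step (in Databricks this would be PySpark).
from datetime import datetime

def cleanse(records):
    """Drop rows missing the business key, trim strings, normalise dates."""
    out = []
    for r in records:
        if not r.get("customer_id"):
            continue  # reject rows without a business key
        out.append({
            "customer_id": r["customer_id"].strip(),
            "amount": round(float(r.get("amount", 0)), 2),
            # source dates arrive as DD/MM/YYYY; emit ISO-8601
            "load_date": datetime.strptime(r["load_date"], "%d/%m/%Y").date().isoformat(),
        })
    return out

rows = [{"customer_id": " C42 ", "amount": "19.999", "load_date": "01/07/2025"},
        {"customer_id": "", "amount": "5", "load_date": "02/07/2025"}]
print(cleanse(rows))
```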
What you will BRING We’re looking for someone who has these abilities and skills: Advanced experience with Informatica PowerCenter, including mappings, workflows, session tuning, and parameterization Expertise in Azure Databricks + PySpark, including: Notebook development Cluster configuration and tuning Delta Lake (ACID, versioning, time travel) Job orchestration via Databricks Jobs or ADF Integration with Azure Blob Storage and ADLS Gen2 using Spark APIs Strong hands-on experience with Azure Data Factory: Building and managing pipelines Parameterization and dynamic datasets Notebook integration and pipeline monitoring Proficiency in SQL, PL/SQL, and scripting languages such as Python, Bash, or PowerShell Strong understanding of data warehousing, dimensional modeling, and data profiling Familiarity with Git, CI/CD pipelines, and modern DevOps practices Working knowledge of data governance, audit trails, metadata management, and compliance standards such as HIPAA and GDPR Effective problem-solving and troubleshooting skills with the ability to resolve performance bottlenecks and job failures Awareness of Azure Functions, App Services, API Management, and Application Insights Understanding of Azure Key Vault for secrets and credential management Familiarity with Spark-based big data ecosystems (e.g., Hive, Kafka) is a plus Who WE are AXA XL, the P&C and specialty risk division of AXA, is known for solving complex risks. For mid-sized companies, multinationals and even some inspirational individuals, we don’t just provide re/insurance, we reinvent it. How? By combining a comprehensive and efficient capital platform, data-driven insights, leading technology, and the best talent in an agile and inclusive workspace, empowered to deliver top client service across all our lines of business: property, casualty, professional, financial lines and specialty. With an innovative and flexible approach to risk solutions, we partner with those who move the world forward.
Learn more at axaxl.com What we OFFER Inclusion AXA XL is committed to equal employment opportunity and will consider applicants regardless of gender, sexual orientation, age, ethnicity and origins, marital status, religion, disability, or any other protected characteristic. At AXA XL, we know that an inclusive culture enables business growth and is critical to our success. That’s why we have made a strategic commitment to attract, develop, advance and retain the most inclusive workforce possible, and create a culture where everyone can bring their full selves to work and reach their highest potential. It’s about helping one another — and our business — to move forward and succeed. Five Business Resource Groups focused on gender, LGBTQ+, ethnicity and origins, disability and inclusion with 20 Chapters around the globe. Robust support for Flexible Working Arrangements Enhanced family-friendly leave benefits Named to the Diversity Best Practices Index Signatory to the UK Women in Finance Charter Learn more at axaxl.com/about-us/inclusion-and-diversity. AXA XL is an Equal Opportunity Employer. Total Rewards AXA XL’s Reward program is designed to take care of what matters most to you, covering the full picture of your health, wellbeing, lifestyle and financial security. It provides competitive compensation and personalized, inclusive benefits that evolve as you do. We’re committed to rewarding your contribution for the long term, so you can be your best self today and look forward to the future with confidence. Sustainability At AXA XL, Sustainability is integral to our business strategy. In an ever-changing world, AXA XL protects what matters most for our clients and communities. We know that sustainability is at the root of a more resilient future. Our 2023-26 Sustainability strategy, called “Roots of resilience”, focuses on protecting natural ecosystems, addressing climate change, and embedding sustainable practices across our operations.
Our Pillars: Valuing nature: How we impact nature affects how nature impacts us. Resilient ecosystems - the foundation of a sustainable planet and society – are essential to our future. We’re committed to protecting and restoring nature – from mangrove forests to the bees in our backyard – by increasing biodiversity awareness and inspiring clients and colleagues to put nature at the heart of their plans. Addressing climate change: The effects of a changing climate are far-reaching and significant. Unpredictable weather, increasing temperatures, and rising sea levels cause both social inequalities and environmental disruption. We're building a net zero strategy, developing insurance products and services, and mobilizing to advance thought leadership and investment in societal-led solutions. Integrating ESG: All companies have a role to play in building a more resilient future. Incorporating ESG considerations into our internal processes and practices builds resilience from the roots of our business. We’re training our colleagues, engaging our external partners, and evolving our sustainability governance and reporting. AXA Hearts in Action : We have established volunteering and charitable giving programs to help colleagues support causes that matter most to them, known as AXA XL’s “Hearts in Action” programs. These include our Matching Gifts program, Volunteering Leave, and our annual volunteering day – the Global Day of Giving. For more information, please see axaxl.com/sustainability.
Posted 2 days ago
0.0 - 5.0 years
0 Lacs
Dumad, Vadodara, Gujarat
On-site
Associate Configuration Engineer GEA is one of the largest suppliers for the food and beverage processing industry and a wide range of other process industries. Approximately 18,000 employees in more than 60 countries contribute significantly to GEA’s success – come and join them! We offer interesting and challenging tasks, a positive working environment in international teams and opportunities for personal development and growth in a global company. Why join GEA Job information Reference Number JR-0034496 Job function Engineering Position type Full time Site Block No 8, P.O. Dumad, Savli Road, Vadodara-391740, Gujarat Your responsibilities and tasks: You will be responsible for building configurators that generate automatic 3D models and drawings. This job requires individuals to be focused, structured and independent. You will work with internal stakeholders globally and with colleagues from other departments, so a proactive approach and excellent communication skills are required. You must be patient and thorough in your work. Primary tasks include: Build configurators using iLogic, Inventor and related supporting tools. Understand and interpret requirements both technically and from a CAD configuration point of view. Suggest different CAD automation projects to automate repeated processes and to improve the quality of design. Continually search for new ways to meet or exceed expectations to create value for stakeholders. Propose value-adding solutions and constantly seek out diverse thinking to maximize use of stakeholders’ experience, backgrounds, and perspectives. Coordinate with other team members to understand assigned work and plan for delivery. Complete projects on time with the right quality. Collaborate and coordinate with internal global colleagues. Manage communication with global stakeholders in different time zones as and when required. Provide end-to-end solutions to the engineering value chain.
Create and develop various tools to automate repetitive engineering processes and reduce engineering hours through innovation. Create, maintain, and build relationships with internal stakeholders. Secondary tasks include: Contribute to and support design projects for powder technology products (Fluid Bed, Solid-Feed or Powder Handling) Create 3D models and 2D drawings as per project requirements Coordinate with design and projects teams located in Europe. Your profile and qualifications: You hold a Degree/Diploma or equivalent in Engineering and have a minimum of 3-5 years of experience in your respective field. 3-5 years of experience working with Autodesk Inventor (advanced skill required) Good knowledge of iLogic and parametric/skeleton modelling. Understand and create user design interface forms in Inventor to create 3D models for various product configurations. Must have knowledge of the workflow and integration between Autodesk Vault (or other PLM and PDM tools) and Autodesk Inventor. Self-driven; learns continuously, extracts learning from experience, shares and helps the team Excellent communication and collaboration skills; must have good English skills – verbal and written Takes on tough challenges with a sense of ownership. Approaches work individually and with teams with optimism and a solution-oriented, Agile mindset. Knowledge of Tacton configuration, VBA or any other programming language. Familiarity with Agile methodologies and working in Agile development environments. Added advantage: Experience with Tacton Design Automation Experience with Agile project management Experience with or knowledge of a programming language. Knowledge of different foreign languages
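A configurator of the kind described above encodes rules that derive dependent geometry from driving parameters (in practice written as iLogic rules inside Autodesk Inventor). The sketch below mimics that idea in plain Python; the product rules and parameter names are invented for illustration:

```python
# Hypothetical parametric rules: derive dependent dimensions from one input.
def configure_vessel(diameter_mm: int) -> dict:
    """Derive dependent model parameters from a driving dimension."""
    wall = 3 if diameter_mm <= 500 else 5   # rule: thicker wall for large shells
    flange = diameter_mm + 2 * 25           # rule: 25 mm flange on each side
    return {"diameter": diameter_mm, "wall_thickness": wall, "flange_od": flange}

print(configure_vessel(400))  # {'diameter': 400, 'wall_thickness': 3, 'flange_od': 450}
```

In Inventor the same rules would drive named model parameters, and the configurator UI form would collect the driving dimension.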
Posted 2 days ago
8.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
We are seeking a candidate with 8+ years of experience in the IT industry and strong .NET/.NET Core/Azure Cloud Services/Azure DevOps skills. This is a client-facing role and hence requires strong communication skills. This is for a US client, and the resource should have hands-on experience in coding and Azure Cloud. Working hours: 8 hours, with a 4-hour overlap during the EST time zone (12 PM – 9 PM). This overlap is mandatory, as meetings happen during this window. Design, develop, enhance, document, and maintain robust applications using .NET Core 6/8+, C#, REST APIs, T-SQL, and modern JavaScript/jQuery. Integrate and support third-party APIs and external services. Collaborate across cross-functional teams to deliver scalable solutions across the full technology stack. Identify, prioritize, and execute tasks throughout the Software Development Life Cycle (SDLC). Participate in Agile/Scrum ceremonies and manage tasks using Jira. Understand technical priorities, architectural dependencies, risks, and implementation challenges. Troubleshoot, debug, and optimize existing solutions with a strong focus on performance. SKILLS: 8+ years of hands-on development experience with: C#, .NET Core 6/8+, Entity Framework / EF Core. JavaScript, jQuery, REST APIs. Expertise in MS SQL Server, including: complex SQL queries, stored procedures, views, functions, packages, cursors, tables, and object types. Skilled in unit testing with xUnit, MSTest. Strong in software design patterns, system architecture, and scalable solution design. Ability to lead and inspire teams through clear communication, technical mentorship, and ownership. Strong problem-solving and debugging capabilities. Ability to write reusable, testable, and efficient code. Develop and maintain frameworks and shared libraries to support large-scale applications. Excellent technical documentation, communication, and leadership skills. Microservices and Service-Oriented Architecture (SOA). Experience in API integrations.
2+ years of hands-on experience with Azure Cloud Services, including: Azure Functions. Azure Durable Functions. Azure Service Bus, Event Grid, Storage Queues. Blob Storage, Azure Key Vault, SQL Azure. Application Insights. SKILLS (GOOD TO HAVE): Familiarity with AngularJS, ReactJS, and other front-end frameworks. Experience with Azure API Management (APIM). Knowledge of Azure containerization and orchestration (e.g., AKS/Kubernetes). Experience with Azure Data Factory (ADF) and Logic Apps. Exposure to application support and operational monitoring. Azure DevOps CI/CD pipelines (Classic / YAML). (ref:hirist.tech)
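Queue-driven services like the Service Bus and Storage Queue work listed above generally need idempotent handlers, because these queues deliver messages at least once and may redeliver. A minimal plain-Python sketch of the pattern (the message shape is assumed, not an Azure SDK API):

```python
# Idempotent queue-message handling: process each message id at most once.
processed_ids = set()

def handle(message: dict, results: list):
    """Skip duplicate deliveries; at-least-once queues require this guard."""
    if message["id"] in processed_ids:
        return
    processed_ids.add(message["id"])
    results.append(message["body"].upper())  # stand-in for real processing

out = []
for msg in [{"id": 1, "body": "a"}, {"id": 2, "body": "b"}, {"id": 1, "body": "a"}]:
    handle(msg, out)
print(out)  # ['A', 'B']
```

In production the seen-id set would live in durable storage (or you would rely on the queue service's duplicate-detection features) rather than in process memory.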
Posted 2 days ago
4.0 - 11.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Hello, Greetings from Quess Corp!! Hope you are doing well. We have a job opportunity with one of our clients. Designation – Data Engineer Location – Gurugram Experience – 4 to 11 years Qualification – Graduate / PG (IT) Skill Set – Data Engineer, Python, AWS, SQL Essential capabilities Enthusiasm for technology, keeping up with the latest trends Ability to articulate complex technical issues and desired outcomes of system enhancements Proven analytical skills and evidence-based decision making Excellent problem solving, troubleshooting & documentation skills Strong written and verbal communication skills Excellent collaboration and interpersonal skills Strong delivery focus with an active approach to quality and auditability Ability to work under pressure and excel within a fast-paced environment Ability to self-manage tasks Agile software development practices Desired Experience Hands-on in SQL and its Big Data variants (HiveQL, Snowflake ANSI, Redshift SQL) Python and Spark and one or more of its APIs (PySpark, Spark SQL, Scala), Bash/shell scripting Experience with source code control – GitHub, VSTS, etc. Knowledge of and exposure to Big Data technologies in the Hadoop stack, such as HDFS, Hive, Impala, Spark, etc., and cloud Big Data warehouses – Redshift, Snowflake, etc. Experience with UNIX command-line tools. Exposure to AWS technologies including EMR, Glue, Athena, Data Pipeline, Lambda, etc. Understanding and ability to translate/physicalise data models (Star Schema, Data Vault 2.0, etc.) Essential Experience It is expected that the role holder will most likely have the following qualifications and experience: 4-11 years of technical experience (within the financial services industry preferred) Technical domain experience (subject matter expertise in technology or tools) Solid experience, knowledge and skills in data engineering, BI/software development such as ELT/ETL, data extraction and manipulation in Data Lake/Data Warehouse/Lakehouse environments.
Hands-on programming experience writing Python, SQL, Unix shell scripts and PySpark scripts in a complex enterprise environment Experience in configuration management using Ansible/Jenkins/Git Hands-on cloud-based solution design, configuration and development experience with Azure and AWS Hands-on experience using AWS services – S3, EC2, EMR, SNS, SQS, Lambda functions, Redshift Hands-on experience building data pipelines to ingest and transform on the Databricks Delta Lake platform from a range of data sources – databases, flat files, streaming, etc. Knowledge of data modelling techniques and practices used for a Data Warehouse/Data Mart application. Quality engineering development experience (CI/CD – Jenkins, Docker) Experience in Terraform, Kubernetes and Docker Experience with source control tools – GitHub or Bitbucket Exposure to relational databases – Oracle, MS SQL or DB2 (SQL/PLSQL, database design, normalisation, execution plan analysis, index creation and maintenance, stored procedures), Postgres/MySQL Skilled in querying data from a range of data sources that store structured and unstructured data Knowledge or understanding of Power BI (recommended) Key Accountabilities Design, develop, test, deploy, maintain and improve software Develop flowcharts, layouts and documentation to identify requirements & solutions Write well-designed & high-quality testable code Produce specifications and determine operational feasibility Integrate software components into a fully functional platform Proactively apply and perform hands-on design and implementation of best-practice CI/CD Coaching & mentoring of other service team members Develop/contribute to software verification plans and quality assurance procedures Document and maintain software functionality Troubleshoot, debug and upgrade existing systems, including participating in DR tests Deploy programs and evaluate customer feedback Contribute to team estimation for delivery and expectation management for scope.
Comply with industry standards and regulatory requirements
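The dimensional-modelling and stored-procedure skills this posting asks for often come down to patterns like a Type 1 dimension upsert (overwrite in place, no history). A minimal sketch using SQLite; the table and column names are invented for illustration:

```python
# Type 1 slowly-changing-dimension upsert, sketched with SQLite.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE dim_customer (customer_id TEXT PRIMARY KEY, name TEXT)")

def upsert(customer_id, name):
    """Insert a dimension row, or overwrite the attribute if the key exists."""
    con.execute(
        "INSERT INTO dim_customer VALUES (?, ?) "
        "ON CONFLICT(customer_id) DO UPDATE SET name = excluded.name",
        (customer_id, name),
    )

upsert("C42", "Acme Ltd")
upsert("C42", "Acme Limited")   # Type 1: update in place, no history kept
print(con.execute("SELECT name FROM dim_customer").fetchall())  # [('Acme Limited',)]
```

A Type 2 dimension would instead close the old row (effective-date columns) and insert a new version, preserving history.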
Posted 3 days ago
2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Join Inito's DevOps team, playing a crucial role in building, maintaining, and scaling our cloud infrastructure and operational excellence. This role offers a unique opportunity to contribute across development and operations, streamlining processes, enhancing system reliability, and strengthening our security posture. You will work closely with engineering, data science, and other cross-functional teams in a fast-paced, growth-oriented environment. Responsibilities Assist in managing and maintaining cloud infrastructure on AWS, GCP, and on-premise compute (including bare-metal servers). Support and improve CI/CD pipelines, contributing to automated deployment processes. Contribute to automation efforts through scripting, reducing manual toil, and improving efficiency. Monitor system health and logs, assisting in troubleshooting and resolving operational issues. Develop a deep understanding of how applications behave, including memory & disk usage patterns, database interactions, and overall resource consumption, to ensure performance and stability. Participate in incident response and post-mortem analysis, contributing to faster resolution and preventing recurrence. Support the implementation of and adherence to cloud security best practices (e.g., IAM, network policies). Assist in maintaining and evolving Infrastructure as Code (IaC) solutions. Requirements Cloud Platforms: At least 2 years of hands-on experience with Amazon Web Services (AWS) and/or Google Cloud Platform (GCP), including core compute, storage, networking, and database services (e.g., EC2, S3, VPC, RDS, GCE, GCS, Cloud SQL). On-Premise Infrastructure: Setup, automation, and management. Operating Systems: Proficiency in Linux environments and shell scripting (Bash). Scripting/Programming: Foundational knowledge and practical experience with Python for automation. Containerization: Familiarity with Docker concepts and practical usage. Basic understanding of container orchestration concepts (e.g., Kubernetes). CI/CD: Understanding of Continuous Integration/Continuous Delivery principles and experience with at least one CI/CD tool (e.g., Jenkins, GitLab CI, CircleCI, GitHub Actions). Familiarity with build and release automation concepts. Version Control: Solid experience with Git for code management. Monitoring: Experience with basic monitoring and alerting tools (e.g., AWS CloudWatch, Grafana). Familiarity with log management concepts. Networking: Basic understanding of networking fundamentals (DNS, Load Balancers, VPCs). Infrastructure as Code (IaC): Basic understanding of Infrastructure as Code (IaC) principles. Good To Have Skills & Qualifications Cloud Platforms: Hands-on experience with both AWS and GCP. Hybrid & On-Premise Cloud Architectures: Hands-on experience with VMware vSphere / Oracle OCI or any on-premises infrastructure platform. Infrastructure as Code (IaC): Hands-on experience with Terraform or AWS CloudFormation. Container Orchestration: Hands-on experience with Kubernetes (EKS, GKE). Databases: Familiarity with PostgreSQL and Redis administration and optimization. Security Practices: Exposure to security practices like SAST/SCA, or familiarity with IAM best practices beyond the basics. Awareness of secrets management concepts (e.g., HashiCorp Vault, AWS Secrets Manager) and vulnerability management processes. Observability Stacks: Experience with centralized logging (e.g., ELK Stack, Loki) or distributed tracing (e.g., Jaeger, Zipkin, Tempo). Serverless: Familiarity with serverless technologies (e.g., AWS Lambda, Google Cloud Functions). On-call/Incident Management Tools: Familiarity with on-call rotation and incident management tools (e.g., PagerDuty). DevOps Culture: A strong passion for automation, continuous improvement, and knowledge sharing.
Configuration Management: Experience with tools like Ansible for automating software provisioning, configuration management, and application deployment, especially in on-premise environments. Soft Skills Strong verbal and written communication skills, with an ability to collaborate effectively across technical and non-technical teams. Excellent problem-solving abilities and a proactive, inquisitive mindset. Eagerness to learn new technologies and adapt to evolving environments. Ability to work independently and contribute effectively as part of a cross-functional team. This job was posted by Ronald J from Inito.
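Monitoring tools such as CloudWatch or Grafana typically alert only after a threshold is breached for several consecutive evaluation periods, to avoid flapping alerts. A plain-Python sketch of that evaluation logic; the limits and metric samples are made up for illustration:

```python
# Alert evaluation: fire only after N consecutive samples exceed the limit.
def breaches(samples, limit=90.0, consecutive=3):
    """Return True if `consecutive` samples in a row exceed `limit`."""
    run = 0
    for s in samples:
        run = run + 1 if s > limit else 0  # reset the streak on any good sample
        if run >= consecutive:
            return True
    return False

print(breaches([85, 95, 96, 97]))  # True: three consecutive samples above 90
print(breaches([95, 80, 95, 80]))  # False: spikes never sustained
```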
Posted 3 days ago
0 years
0 Lacs
Sonipat, Haryana, India
On-site
Job Requirements Role/Job Title: Relationship Manager - Current Account Function/Department: Branch Banking Job Purpose The role includes managing the assigned client portfolio to ensure superior service delivery leading to cross-sell. It would include CASA build-up as per branch targets, improving product holding per customer through cross-sell of all banking products, acquiring new clients and managing walk-in clients. The role entails managing all cash and routine transactions for bank customers including fund transfers, accepting deposits & withdrawals, and managing deliverables. Roles & Responsibilities CASA value build-up and new client acquisition. Ensure effective client engagement leading to cross-sell. Increase 'product holding per customer' within the mapped portfolio. Ensure all engaged clients are profiled and presented with suitable banking products. Be solution-oriented and ensure effective on-boarding on Mobile/Net Banking, Bill Pay, SIP, Insurance & Investment solutions, Retail and SME Loans and relevant banking programs. Ensure the monthly operating plan is met to improve scorecard and decile rankings. Coordinate with respective relationship managers for closure of business loans, working capital, POS, CMS and trade transactions generated through client engagement. Responsible for creating a customer-focused approach for quick resolution of all queries and complaints to achieve NPS benchmarks. Custodian of the branch vault; manage vault limits, cash, and non-cash transactions. Ensure nil instances of cash shortage or excess at the teller counter. Update the key registers regularly and review branch reports like the end-of-day (EOD) cash position report, LTR, instruments issued, etc. Monitor dummy accounts, suspense accounts, deferred accounts, accounts payable/receivable, reconciliation and maintenance of the suspense accounts register as per the required format. Ensure strict adherence to bank policies and compliance.
Secondary Responsibilities Perform audit and ensure compliance to internal and external regulations and guidelines. Provide best in-class customer service to all clients to become their primary banker. Education Qualification Graduation: Any Post-graduation: Any
Posted 3 days ago
9.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
Summary Position Summary Job title: DevSecOps - Manager About At Deloitte, we do not offer you just a job, but a career in the highly sought-after risk management field. We are one of the business leaders in the risk market. We work with a vision to make the world more prosperous, trustworthy, and safe. Deloitte’s clients, primarily based outside of India, are large, complex organizations that constantly evolve and innovate to build better products and services. In the process, they encounter various risks, and the work we do to help them address these risks is increasingly important to their success—and to the strength of the economy and public security. By joining us, you will get to work with diverse teams of professionals who design, manage, and implement risk-centric solutions across a variety of domains. In the process, you will gain exposure to the risk-centric challenges faced in today’s world by organizations across a range of industry sectors and become subject matter experts in those areas. Our Risk and Financial Advisory services professionals help organizations effectively navigate business risks and opportunities—from strategic, reputation, and financial risks to operational, cyber, and regulatory risks—to gain competitive advantage. We apply our experience in ongoing business operations and corporate lifecycle events to help clients become stronger and more resilient. Our market-leading teams help clients embrace complexity to accelerate performance, disrupt through innovation, and lead in their industries. We use cutting-edge technology like AI/ML techniques, analytics, and RPA to solve Deloitte’s clients’ most complex issues. Working in Risk and Financial Advisory at Deloitte US-India offices has the power to redefine your ambitions.
The Team Cyber & Strategic Risk Deloitte’s DevSecOps CI/CD Security Transformation and Secure Software Development Lifecycle engagement archetypes provide frameworks, templates, and leading practices for integrating security into software delivery pipelines. These resources include step-by-step workflows, staffing guidance, and project management tools to support DevSecOps roles and responsibilities. The Cyber Risk Services Identity & Access Management (IAM) practice helps organizations design, develop, and implement industry-leading IAM solutions to protect their information and confidential data, and helps them build their businesses and supporting technologies to be more secure, vigilant, and resilient. The IAM team delivers services to clients through the following key areas: user provisioning, access certification, access management and federation, and entitlements management. Work you’ll do Roles & Responsibilities: As a DevSecOps Manager, your core responsibility will be leading the implementation and ongoing management of DevSecOps practices across the client's cloud and on-premises environments, which includes the following: Conduct interviews and assessments to understand client requirements, current state, and DevSecOps practice maturity. Define the strategy for, and drive adoption of, security automation, continuous integration/continuous delivery (CI/CD), and compliance within the software development lifecycle of the client's environment. Understand and comply with the Service Level Agreements defined for the DevSecOps services. Oversee the development and integration of security tools and automation for services such as threat modeling, security architecture reviews, secure development practices, code analysis, vulnerability scanning, API security, configuration management, etc. Manage and mentor the DevSecOps team and the client's cross-functional teams, setting goals and tracking performance. 
Report on DevSecOps metrics, security posture, and process improvements to leadership and client stakeholders. Stay current with emerging DevSecOps tools, security threats, and regulatory requirements. Facilitate the use of technology-based tools or methodologies to continuously improve the monitoring, management, and reliability of the services provided to the client. Required Skills 9+ years of experience across the application security development, security testing, security tool integration, deployment, and security management phases, with at least 2+ years leading DevSecOps projects. Strong understanding of security frameworks (e.g., NIST 800-53, PCI DSS, ISO 27001, CIS Controls) and regulatory requirements (e.g., GDPR, HIPAA, PCI DSS). Investigative and analytical problem-solving skills along with excellent communication, project management, and stakeholder engagement skills. Experience in collecting, analyzing, and interpreting qualitative and quantitative data from defined application security sources (tools, monitoring techniques, etc.). Understanding of solution designs and technical architectures to identify potential security risks and recommend mitigation strategies. Exposure to threat modeling exercises, zero trust architecture principles, and secure-by-design practices. 
Knowledge and experience of the OWASP Top 10, SANS Secure Programming, and Security Engineering Principles; hands-on experience in performing secure code reviews and penetration testing. Hands-on experience in running, installing, and managing SAST, DAST, SCA, and IAST solutions, such as Checkmarx, Fortify, and Contrast, in large enterprises. Understanding of leading vulnerability scoring standards, such as CVSS, and the ability to translate vulnerability severity into security risk. Strong knowledge of CI/CD tools and hands-on experience with at least one CI/CD tool set and building pipelines (including in the cloud) using TeamCity, Bamboo, Jenkins, Chef, Puppet, Selenium, AWS, and Azure DevOps. Hands-on experience with container technology such as Kubernetes, Docker, AKS, and EKS. Knowledge of cloud environments and deployment solutions such as serverless computing. Must have a cloud security specialization. Certifications such as EC-Council CEH (Certified Ethical Hacker), Certified DevSecOps Professional (CDP), ISC2 Certified Cloud Security Professional (CCSP), Certified API Security Professional (CASP), CTMP (Certified Threat Modeling Professional), etc. are preferred. Qualification Bachelor's degree or higher in Computer Science, IT, or equivalent experience. Experience with cloud service providers such as AWS, GCP, Azure, and Oracle, and multi-cloud DevSecOps implementations. Background in Agile or Scrum methodologies. Solid and demonstrable comprehension of Information Security, including OWASP/SANS and Security Test Case development (or misuse cases). 
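The vulnerability-scoring requirement above (translating CVSS severity into security risk) can be illustrated with a minimal Python sketch. The bands below follow the qualitative severity ratings published in the CVSS v3.1 specification; the function name itself is illustrative only:

```python
def cvss_severity(score: float) -> str:
    """Map a CVSS v3.1 base score to its qualitative severity rating
    (bands as published in the CVSS v3.1 specification)."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"
```

In practice this translation is the first step; a real risk rating would also weigh exploitability, asset criticality, and compensating controls.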
Understanding of security essentials, including networking concepts, defense strategies, and current security technologies. Experience with securing IaC templates (e.g., Terraform, CloudFormation) and integrating IaC scanning tools into pipelines to detect misconfigurations and vulnerabilities early in the provisioning process. Experience in implementing and managing security measures within Kubernetes environments, designing and enforcing advanced security protocols for API infrastructure, managing and optimizing containerized applications using Docker, automating and managing infrastructure as code using Terraform, automating IT processes and configurations using Ansible, and identifying and mitigating potential security threats through comprehensive threat modeling practices. Familiarity with container security best practices, including image scanning, runtime protection, and orchestration security (e.g., Docker, Kubernetes). Experience with secrets management tools (e.g., HashiCorp Vault, AWS Secrets Manager). Ability to research and characterize security threats, including identification and classification of application-related threat indicators. Good to have: Skills in scripting languages (e.g., Groovy for Jenkins, Bash, Python) to customize pipeline steps and automate repetitive tasks. How You’ll Grow At Deloitte, we’ve invested a great deal to create a rich environment in which our professionals can grow. We want all our people to develop in their own way, playing to their own strengths as they hone their leadership skills. And, as a part of our efforts, we provide our professionals with a variety of learning and networking opportunities—including exposure to leaders, sponsors, coaches, and challenging assignments—to help accelerate their careers along the way. No two people learn in the same way. So, we provide a range of resources including live classrooms, team-based learning, and eLearning. 
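The IaC-scanning requirement above is usually met with dedicated tools such as Checkov or tfsec. As a rough illustration of what such scanners look for, here is a hypothetical, greatly simplified pattern check; the regexes and function name are invented for this sketch and are not a real scanner's rule set:

```python
import re

# Illustrative patterns only; real IaC scanners ship far more
# comprehensive and context-aware rule sets.
SECRET_PATTERNS = [
    re.compile(r'(password|secret|access_key)\s*=\s*"[^"]+"', re.IGNORECASE),
    re.compile(r'AKIA[0-9A-Z]{16}'),  # shape of an AWS access key ID
]

def scan_iac_text(text: str) -> list[str]:
    """Return lines of a Terraform/CloudFormation snippet that look
    like hardcoded credentials."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            findings.append(f"line {lineno}: {line.strip()}")
    return findings
```

Wired into a pipeline stage, a non-empty findings list would fail the build before the template is ever applied.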
DU: The Leadership Center in India, our state-of-the-art, world-class learning center in the Hyderabad offices, is an extension of Deloitte University (DU) in Westlake, Texas, and represents a tangible symbol of our commitment to our people’s growth and development. Explore DU: The Leadership Center in India. Deloitte’s culture Our positive and supportive culture encourages our people to do their best work every day. We celebrate individuals by recognizing their uniqueness and offering them the flexibility to make daily choices that can help them to be healthy, centered, confident, and aware. Deloitte is committed to achieving diversity within its workforce, and encourages all qualified applicants to apply, irrespective of gender, age, sexual orientation, disability, culture, or religious and ethnic background. We offer well-being programs and are continuously looking for new ways to maintain a culture that is inclusive, invites authenticity, leverages our diversity, and where our people excel and lead healthy, happy lives. Learn more about Life at Deloitte. Corporate citizenship Deloitte is led by a purpose: to make an impact that matters. This purpose defines who we are and extends to relationships with Deloitte’s clients, our people, and our communities. We believe that business has the power to inspire and transform. We focus on education, giving, skill-based volunteerism, and leadership to help drive positive social impact in our communities. Learn more about Deloitte’s impact on the world. Recruiting tips Finding the right job and preparing for the recruitment process can be tricky. Check out tips from our Deloitte recruiting professionals to set yourself up for success. Check out recruiting tips from Deloitte recruiters. Benefits We believe that to be an undisputed leader in professional services, we should equip you with the resources that can make a positive impact on your well-being journey. 
Our vision is to create a leadership culture focused on the development and well-being of our people. Here are some of our benefits and programs to support you and your family’s well-being needs. Eligibility requirements may be based on role, tenure, type of employment, and/or other criteria. Learn more about what working at Deloitte can mean for you. Our people and culture Our people and our culture make Deloitte a place where leaders thrive. Get an inside look at the rich diversity of background, education, and experiences of our people. What impact will you make? Check out our professionals’ career journeys and be inspired by their stories. Professional development You want to make an impact. And we want you to make it. We can help you do that by providing you the culture, training, resources, and opportunities to help you grow and succeed as a professional. Learn more about our commitment to developing our people. © 2023. See Terms of Use for more information. Deloitte refers to one or more of Deloitte Touche Tohmatsu Limited, a UK private company limited by guarantee ("DTTL"), its network of member firms, and their related entities. DTTL and each of its member firms are legally separate and independent entities. DTTL (also referred to as "Deloitte Global") does not provide services to clients. In the United States, Deloitte refers to one or more of the US member firms of DTTL, their related entities that operate using the "Deloitte" name in the United States, and their respective affiliates. Certain services may not be available to attest clients under the rules and regulations of public accounting. Please see www.deloitte.com/about to learn more about our global network of member firms. Our purpose Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. 
Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities. Our people and culture Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work. Professional development At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India. Benefits To Help You Thrive At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially—and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment, and/or other criteria. Learn more about what working at Deloitte can mean for you. Recruiting tips From developing a stand-out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Requisition code: 306776
Posted 3 days ago
10.0 years
0 Lacs
Gurugram, Haryana, India
On-site
ABOUT US Bain & Company is a global consultancy that helps the world’s most ambitious change makers define the future. Across 59 offices in 37 countries, we work alongside our clients as one team with a shared ambition to achieve extraordinary results, outperform the competition and redefine industries. We complement our tailored, integrated expertise with a vibrant ecosystem of digital innovators to deliver better, faster and more enduring outcomes. Our 10-year commitment to invest more than $1 billion in pro bono services brings our talent, expertise and insight to organizations tackling today’s urgent challenges in education, racial equity, social justice, economic development and the environment. Since our founding in 1973, we have measured our success by the success of our clients, and we proudly maintain the highest level of client advocacy in the industry. WHO YOU’LL WORK WITH: You’ll join the Product, Practice and Knowledge (PPK) department as part of the Knowledge Management team that supports a range of industry and capability practice areas. The global PPK group is a key function, which helps to identify, create, and leverage “best of Bain” content and expertise, and also helps Bain practice areas develop commercial strategies. The Knowledge Management team is critical to harnessing the best of our consulting staff’s individual and collective expertise, making it possible for us to deliver extraordinary results for our clients. 
WHAT YOU’LL DO Associates, Knowledge Management support global knowledge management within an industry or capability Practice by: Managing and preparing content contributions to the global knowledge base Removing confidential information from client engagement materials and standardizing them per Bain standards (sanitizing & disguising) Writing abstracts and tagging materials to ensure Bain case teams can find the right content easily within Bain’s internal knowledge base Posting content on Bain’s internal knowledge base so that the materials can be leveraged by global teams working on similar topics Ensuring case teams follow compliance guidelines when submitting case summaries, proposals, etc. Overseeing the sanitizing & disguising efforts performed by the Junior Knowledge Associate team for the practice, coaching on practice-specific requirements and ensuring quality requirements are met Managing the quality of content by identifying duplicative content, storylining content, and archiving lower-usage content from the knowledge base Supporting the creation and periodic refresh of select practice content, credentials, and the Practice area pages overall Supporting Senior Knowledge Specialists with answering straightforward requests and knowledge capture tasks, like taking and uploading notes from calls with consulting teams Performing practice analytics using tools including Alteryx/Tableau and Excel to provide insight for Practice operational activities Supporting Senior Knowledge Specialists to create and distribute regular newsletters to Practice affiliates on the latest cases, proposals, practice knowledge and IP developments Maintaining Practice trackers, databases and affiliate lists/profiles ABOUT YOU Candidates should be post-graduates with a strong academic record and 1-2 years of relevant experience in a consulting or research background Strong Microsoft Excel and PowerPoint skills are required; hands-on experience with tools such as Alteryx and Tableau is a plus Excellent analytical, communication, and teamwork skills Ability to handle multiple tasks and work under pressure WHAT MAKES US A GREAT PLACE TO WORK We are proud to be consistently recognized as one of the world's best places to work, a champion of diversity and a model of social responsibility. We are currently ranked the #1 consulting firm on Glassdoor’s Best Places to Work list, and we have maintained a spot in the top four on Glassdoor's list for the last 12 years. We believe that diversity, inclusion and collaboration are key to building extraordinary teams. We hire people with exceptional talents, abilities and potential, then create an environment where you can become the best version of yourself and thrive both professionally and personally. We are publicly recognized by external parties such as Fortune, Vault, Mogul, Working Mother, Glassdoor and the Human Rights Campaign for being a great place to work for diversity and inclusion, women, LGBTQ and parents.
Posted 3 days ago
3.0 - 6.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
About SuperOps: SuperOps is a SaaS startup empowering IT service providers and IT teams around the world with technology that is cutting-edge, future-ready, and powered by AI. We are backed by marquee investors like Addition, March Capital, Matrix Partners India, Elevation Capital, and Tanglin Venture Partners. Founded by Arvind Parthiban, a serial entrepreneur, and Jayakumar Karumbasalam, a veteran in the IT space, SuperOps is built on the back of a team of engineers, product architects, designers, and AI experts, who want to reshape the world of IT. Now we have taken on a market that is plagued by legacy solutions and subpar experiences. The potential to do something great is immense. So if you love to grow, be part of a kickass team that inspires you to do more, and make an everlasting mark in the world of IT, SuperOps is the place to be. We also believe that the journey is as important as the destination. We want to build the best products out there and have fun while doing so. So come, be part of our A-star team of superheroes. About the role: Do you live and breathe infrastructure as code? Are you excited by orchestration services in a multi-cloud environment (AWS and Google Cloud)? Do you want to work closely with the AI team with the possibility of growing your skills in AI Ops? As an early member of the DevOps team, you have the opportunity to help us scale our infrastructure, ensure minimum downtime, and implement the best cloud security practices. You'll work directly with our Tech Lead/DevOps Manager, collaborating on technical decisions. You will be responsible for the following (not exhaustive): Orchestrate all services Work independently in a constantly changing and growing environment Implementing CI/CD pipelines and automation Implement security best practices You can find the right balance of speed and accuracy, prioritize your tasks, take responsibility, and get things done while maintaining high standards. 
Key skills: 3-6 years of experience working as a DevOps Engineer. Hands-on experience with container orchestration systems such as Kubernetes. Expertise in Docker and Kubernetes. Experience in AWS and Google Cloud. Experience delivering infrastructure as code (IaC) using Terraform/CloudFormation. Experience with logging, monitoring, and alerting systems and tools. Working knowledge of CI/CD tools like Jenkins, Spinnaker, and Argo. Nice to have: Hands-on experience with Helm. Experience with build, deploy, operate, and monitoring tools. Experience with security best practices and HashiCorp Vault. Spinnaker/Argo would be a big plus. Why SuperOps High growth: Come and be part of a global rocketship. We are an international company primed for growth, with customers worldwide and offices in the US, New Zealand, and India. Dynamic and growing environment: Step into a dynamic, adaptive, and interdisciplinary work environment where you can learn and grow your career. Collaborative culture: We have a fun and informal culture with low bureaucracy. We foster collaboration and an approach where you can voice your ideas in any forum.
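The monitoring and alerting experience listed above normally lives in tools such as Prometheus and Alertmanager. As a tool-agnostic sketch of the underlying idea (the rule shape, names, and thresholds are invented for illustration), an error-rate alert reduces to a threshold check over a sampling window:

```python
from dataclasses import dataclass

@dataclass
class ErrorRateRule:
    """Hypothetical alert rule: fire when errors/total over a sampling
    window exceeds the threshold, which is what Prometheus-style
    alerting evaluates against real time-series data."""
    name: str
    threshold: float  # fraction of failed requests, e.g. 0.05 = 5%

def should_alert(rule: ErrorRateRule, samples: list[tuple[int, int]]) -> bool:
    """samples: (errors, total) request counts collected over the window."""
    errors = sum(e for e, _ in samples)
    total = sum(t for _, t in samples)
    return total > 0 and errors / total > rule.threshold
```

Aggregating over the whole window, rather than alerting on any single bad sample, is what keeps a rule like this from paging on transient blips.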
Posted 3 days ago
5.0 years
0 Lacs
Cannanore
On-site
1. About the Role At Summit Solutions, we deliver enterprise-grade platforms that require reliable, scalable, and secure infrastructure. We are looking for an experienced DevOps Engineer (Azure) to design, automate, and optimize CI/CD pipelines, cloud environments, and containerized deployments across all our engineering teams (Backend, Frontend, Mobile, and Database). This role involves Azure cloud expertise, infrastructure-as-code, and strong collaboration with development and database teams to ensure smooth application delivery and operations. 2. What You’ll Do Design, build, and maintain CI/CD pipelines for multiple projects (Backend APIs, React apps, Flutter mobile apps, .NET services). Manage Azure cloud infrastructure (App Services, Azure Kubernetes Service (AKS), Azure SQL, Storage, Functions). Automate infrastructure provisioning using Infrastructure-as-Code (IaC) tools like Terraform or Bicep. Set up and manage containerized environments (Docker, Kubernetes). Implement application and infrastructure monitoring (Azure Monitor, Prometheus, Grafana). Collaborate with developers to streamline build and deployment processes. Ensure high availability, disaster recovery, and performance optimization of deployed systems. Manage secrets, configurations, and security policies using Azure Key Vault and best practices. Conduct cost optimization and scaling strategies for cloud resources. Support database backup, migration, and performance monitoring with DBA teams. Introduce and maintain DevSecOps practices for secure deployments. 3. What You’ll Need 5+ years of experience as a DevOps or Cloud Engineer, with hands-on expertise in Azure. Strong knowledge of CI/CD pipelines (Azure DevOps, GitHub Actions, or Jenkins). Experience with Kubernetes (AKS), Docker, and container orchestration. Expertise in Terraform/Bicep for infrastructure automation. Familiarity with microservices deployment and service meshes (Istio/Linkerd). 
Knowledge of security best practices, including role-based access control (RBAC), firewalls, and network configurations. Experience with logging and monitoring tools (Azure Monitor, ELK Stack, or equivalent). Strong scripting skills (PowerShell, Bash, or Python). Bonus: Experience with multi-cloud or hybrid environments (AWS/GCP). Job Type: Full-time Schedule: Day shift Experience: DevOps: 5 years (Required) Azure: 5 years (Required) Work Location: In person Speak with the employer +91 8943669038
Posted 3 days ago
4.0 years
4 - 5 Lacs
Cochin
On-site
Minimum Required Experience: 4 years Full Time Skills: Python, Git, API, SQL, CI/CD, Tableau, SDLC, MuleSoft, Informatica, Databricks, Azure, IICS, AWS, Power BI, DataStage Description Overall 4+ years’ experience in implementing solutions for the integration of applications Role and Responsibilities: Perform development and testing activities as per the SDLC framework. Constantly think scale, think automation. Measure everything. Optimize proactively. Identify technical risks to the sprint commitments early on and escalate accordingly. Will have to learn the Model N application and adapt at the earliest Onshore-facing - take project requirements, come up with a technical design, and perform the required documentation Should be able to adapt to any role based on the project situation and ensure project success Skills and Requirements: At least 2 years of experience building and scaling APIs Working experience in Python. Additionally, has working knowledge of other integration technologies like Informatica/IICS/DataStage/MuleSoft. Should have strong experience working with SQL and related technologies Experience in building pipelines from scratch as part of data migration/conversion projects Experience in basic database administrative activities like creating tenant, clusters, score, key vault, etc. Experience with Git and CI/CD. Should have experience with performance tuning and query tuning, generating and explaining plans for SQL queries. Knowledge of any reporting tool (Tableau, Power BI) would be an added advantage. Eagerness to learn new technology and solve problems. Add-ons (certifications, not course completions): Any one of the Informatica/MuleSoft/Databricks certifications Cloud certifications (AWS/Azure) Python certifications are an added advantage
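The query-tuning requirement above ("generating and explaining plans for SQL queries") can be sketched with Python's bundled sqlite3 module. SQLite's EXPLAIN QUERY PLAN stands in here for the equivalent facilities of the engines this role would actually use, and the schema is invented for illustration:

```python
import sqlite3

# Minimal illustration of query tuning via explain plans, using SQLite's
# EXPLAIN QUERY PLAN (other engines expose similar facilities).
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)"
)
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 100, i * 1.5) for i in range(1000)],
)

def plan(sql: str) -> str:
    """Return SQLite's query-plan detail text for a statement."""
    rows = conn.execute("EXPLAIN QUERY PLAN " + sql).fetchall()
    return " ".join(r[-1] for r in rows)

query = "SELECT total FROM orders WHERE customer_id = 42"
before = plan(query)  # a full table scan: no usable index yet
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
after = plan(query)   # the plan now searches via idx_orders_customer
```

Comparing the plan before and after adding the index is exactly the feedback loop the posting describes, just at toy scale.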
Posted 3 days ago
3.0 - 5.0 years
0 Lacs
Telangana
On-site
About Chubb Chubb is a world leader in insurance. With operations in 54 countries and territories, Chubb provides commercial and personal property and casualty insurance, personal accident and supplemental health insurance, reinsurance, and life insurance to a diverse group of clients. The company is defined by its extensive product and service offerings, broad distribution capabilities, exceptional financial strength, and local operations globally. Parent company Chubb Limited is listed on the New York Stock Exchange (NYSE: CB) and is a component of the S&P 500 index. Chubb employs approximately 40,000 people worldwide. Additional information can be found at: www.chubb.com. About Chubb India At Chubb India, we are on an exciting journey of digital transformation driven by a commitment to engineering excellence and analytics. We are proud to share that we have been officially certified as a Great Place to Work® for the third consecutive year, a reflection of the culture at Chubb, where we believe in fostering an environment where everyone can thrive, innovate, and grow. With a team of over 2,500 talented professionals, we encourage a start-up mindset that promotes collaboration, diverse perspectives, and a solution-driven attitude. We are dedicated to building expertise in engineering, analytics, and automation, empowering our teams to excel in a dynamic digital landscape. We offer an environment where you will be part of an organization that is dedicated to solving real-world challenges in the insurance industry. Together, we will work to shape the future through innovation and continuous learning. Position Details Job Title: Senior Software Engineer Function/Department: Technology Location: Hyderabad – Work From Office Employment Type: Full-time Reports To: Pavan Kumar Vemuri Job Summary: As a developer of .NET applications in Azure, you will be responsible for the design, development, unit testing, deployment, and support of .NET API applications. 
Expertise in .NET, Azure, SQL, and API integration will be essential in ensuring quality project delivery. Responsibilities: Work as part of a project team designing, developing, unit testing, and maintaining .NET API applications using .NET, SQL, and API integration. Deploy applications in Azure AKS using CI/CD pipelines. Work with the technical lead and architects to understand business and technical requirements and implement code as per the solution designed by leads. Suggest technical designs and solutions for given business requirements and review them with technical leads. Conduct peer code reviews to ensure adherence to coding standards, best practices, and performance optimization. Collaborate with the Quality Assurance team to ensure high-quality application releases. Complete assigned tasks in a timely manner. Stay updated with industry trends and best practices to foster innovation and improvements. Requirements: Strong experience in Object-Oriented Programming (OOP) and software development Strong proficiency in .NET, SQL, and API integration. Experience in Azure Kubernetes, Storage, App Services, Azure SQL, Key Vault, and Managed Identities Experience in Agile software development methodologies. Familiarity with DevOps practices and CI/CD pipelines. Preferred Qualifications: Bachelor's degree in Computer Science, Software Engineering, or a related field. 3-5 years of relevant experience Certification in relevant technologies or frameworks. Knowledge of front-end frameworks, such as Angular, React.js, etc. Excellent analytical and problem-solving skills. Strong communication and interpersonal skills. Ability to thrive in a fast-paced and collaborative environment. Why Chubb? Join Chubb to be part of a leading global insurance company! Our constant focus on employee experience, along with a start-up-like culture, empowers you to achieve impactful results. 
Industry leader: Chubb is a world leader in the insurance industry, powered by underwriting and engineering excellence A Great Place to work: Chubb India has been recognized as a Great Place to Work® for the years 2023-2024, 2024-2025 and 2025-2026 Laser focus on excellence: At Chubb we pride ourselves on our culture of greatness where excellence is a mindset and a way of being. We constantly seek new and innovative ways to excel at work and deliver outstanding results Start-Up Culture: Embracing the spirit of a start-up, our focus on speed and agility enables us to respond swiftly to market requirements, while a culture of ownership empowers employees to drive results that matter Growth and success: As we continue to grow, we are steadfast in our commitment to provide our employees with the best work experience, enabling them to advance their careers in a conducive environment Employee Benefits Our company offers a comprehensive benefits package designed to support our employees’ health, well-being, and professional growth. Employees enjoy flexible work options, generous paid time off, and robust health coverage, including treatment for dental and vision related requirements. We invest in the future of our employees through continuous learning opportunities and career advancement programs, while fostering a supportive and inclusive work environment. Our benefits include: Savings and Investment plans: We provide specialized benefits like Corporate NPS (National Pension Scheme), Employee Stock Purchase Plan (ESPP), Long-Term Incentive Plan (LTIP), Retiral Benefits and Car Lease that help employees optimally plan their finances Upskilling and career growth opportunities: With a focus on continuous learning, we offer customized programs that support upskilling like Education Reimbursement Programs, Certification programs and access to global learning programs. 
Health and Welfare Benefits: We care about our employees’ well-being in and out of work and have benefits like an Employee Assistance Program (EAP), yearly free health campaigns, and comprehensive insurance benefits. Application Process Our recruitment process is designed to be transparent and inclusive. Step 1: Submit your application via the Chubb Careers Portal. Step 2: Engage with our recruitment team for an initial discussion. Step 3: Participate in HackerRank assessments/technical/functional interviews and assessments (if applicable). Step 4: Final interaction with Chubb leadership. Join Us With you, Chubb is better. Whether you are solving challenges on a global stage or creating innovative solutions for local markets, your contributions will help shape the future. If you value integrity, innovation, and inclusion, and are ready to make a difference, we invite you to be part of Chubb India’s journey. Apply Now: Chubb External Careers
Posted 3 days ago
0 years
0 Lacs
Telangana
On-site
About Chubb Chubb is a world leader in insurance. With operations in 54 countries and territories, Chubb provides commercial and personal property and casualty insurance, personal accident and supplemental health insurance, reinsurance and life insurance to a diverse group of clients. The company is defined by its extensive product and service offerings, broad distribution capabilities, exceptional financial strength and local operations globally. Parent company Chubb Limited is listed on the New York Stock Exchange (NYSE: CB) and is a component of the S&P 500 index. Chubb employs approximately 40,000 people worldwide. Additional information can be found at: www.chubb.com . About Chubb India At Chubb India, we are on an exciting journey of digital transformation driven by a commitment to engineering excellence and analytics. We are proud to share that we have been officially certified as a Great Place to Work® for the third consecutive year, a reflection of the culture at Chubb where we believe in fostering an environment where everyone can thrive, innovate, and grow With a team of over 2500 talented professionals, we encourage a start-up mindset that promotes collaboration, diverse perspectives, and a solution-driven attitude. We are dedicated to building expertise in engineering, analytics, and automation, empowering our teams to excel in a dynamic digital landscape. We offer an environment where you will be part of an organization that is dedicated to solving real-world challenges in the insurance industry. Together, we will work to shape the future through innovation and continuous learning. Position Details Job Title: Senior Software Engineer Function/Department: Technology Location: Hyderabad – Work From Office Employment Type: Full-time Reports To: Pavan Kumar Vemuri Job Summary: As a developer for .Net Applications in Azure, you will be responsible for design, development, unit testing, deployment, and support of .Net API applications. 
Expertise in .NET, Azure, SQL, and API integration will be essential in ensuring quality project delivery. Responsibilities: Work as part of a project team designing, developing, unit testing, and maintaining .NET API applications using .NET, SQL, and API integration. Deploy applications in Azure AKS using CI/CD pipelines. Work with the technical lead and architects to understand business and technical requirements and implement code per the solution designed by the leads. Suggest technical designs and solutions for given business requirements and review them with technical leads. Conduct peer code reviews to ensure adherence to coding standards, best practices, and performance optimization. Collaborate with the Quality Assurance team to ensure high-quality application releases. Complete assigned tasks in a timely manner. Stay updated with industry trends and best practices to foster innovation and improvements. Requirements: Strong experience in Object-Oriented Programming (OOP) and software development. Strong proficiency in .NET, SQL, and API integration. Experience with Azure Kubernetes Service (AKS), Storage, App Services, Azure SQL, Key Vault, and Managed Identities. Experience in Agile software development methodologies. Familiarity with DevOps practices and CI/CD pipelines.
Posted 3 days ago
0 years
10 Lacs
Hyderabad
On-site
Company Description Entain India is the engineering and delivery powerhouse for Entain, one of the world’s leading global sports and gaming groups. Established in Hyderabad in 2001, we’ve grown from a small tech hub into a dynamic force, delivering cutting-edge software solutions and support services that power billions of transactions for millions of users worldwide. Our focus on quality at scale drives us to create innovative technology that supports Entain’s mission to lead the change in the global sports and gaming sector. At Entain India, we make the impossible possible, together. Job Description We are seeking a talented and motivated SRE Engineer III to join our dynamic team. In this role, you will execute a range of site reliability activities, ensuring optimal service performance, reliability, and availability. You will collaborate with cross-functional engineering teams to develop scalable, fault-tolerant, and cost-effective cloud services. If you are passionate about site reliability engineering and ready to make a significant impact, we would love to hear from you! Key Responsibilities: Lead a team of SRE engineers. Implement automation tools, frameworks, and CI/CD pipelines, promoting best practices and code reusability. Enhance site reliability through process automation, reducing mean time to detection, resolution, and repair. Identify and manage risks through regular assessments and proactive mitigation strategies. Develop and troubleshoot large-scale distributed systems in both on-prem and cloud environments. Deliver infrastructure as code to improve service availability, scalability, latency, and efficiency. Monitor support processing for early detection of issues and share knowledge on emerging site reliability trends. Analyze data to identify improvement areas and optimize system performance through scale testing.
Take ownership of production issues within assigned domains, performing initial triage and collaborating closely with engineering teams to ensure timely resolution. Qualifications For Site Reliability Engineering (SRE), the following skills and tools are essential for maintaining system reliability, scalability, and efficiency: Key SRE Skills Infrastructure as Code (IaC) – Automating provisioning with Terraform, Ansible, or Kubernetes. Observability & Monitoring – Implementing distributed tracing, logging, and metrics for proactive issue detection. Security & Compliance – Ensuring privileged access controls, audit logging, and encryption. Incident Management & MTTR Optimization – Reducing downtime with automated recovery mechanisms. Performance Engineering – Optimizing API latency, P99 response times, and resource utilization. Dependency Management – Ensuring resilience in microservices with circuit breakers and retries. CI/CD & Release Engineering – Automating deployments while maintaining rollback strategies. Capacity Planning & Scalability – Forecasting traffic patterns and optimizing resource allocation. Chaos Engineering – Validating system robustness through fault injection testing. Cross-Team Collaboration – Aligning SRE practices with DevOps, security, and compliance teams. Essential SRE Tools Monitoring & Observability: Datadog, Prometheus, Grafana, New Relic. Incident Response: PagerDuty, OpsGenie. Configuration & Automation: Terraform, Ansible, Puppet. CI/CD Pipelines: Jenkins, GitHub Actions, ArgoCD. Logging & Tracing: ELK Stack, OpenTelemetry, Jaeger. Security & Compliance: Vault, AWS IAM, Snyk. Additional Information We know that signing top players requires a great starting package, and plenty of support to inspire peak performance. Join us, and a competitive salary is just the beginning.
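The "circuit breakers and retries" pattern named above under Dependency Management can be sketched in a few lines of Python. This is a minimal illustration only: the class name, thresholds, and timing values are assumptions for the sketch, not part of any specific Entain system.

```python
import time


class CircuitBreaker:
    """Illustrative circuit breaker: opens after `max_failures`
    consecutive errors, then rejects calls to the downstream
    dependency until `reset_after` seconds have elapsed."""

    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None  # monotonic timestamp when circuit opened

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                # Fail fast instead of hammering an unhealthy dependency.
                raise RuntimeError("circuit open: dependency unavailable")
            # Cooldown elapsed: half-open, allow one trial call through.
            self.opened_at = None
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # any success resets the failure count
        return result
```

In practice this sits in front of each remote call in a microservice, often combined with a bounded retry (a few attempts with exponential backoff) before the failure is counted against the breaker.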
Working for us in India, you can expect to receive great benefits like: Safe home pickup and home drop (Hyderabad office only) Group Mediclaim policy Group Critical Illness policy Communication & relocation allowance Annual health check And outside of this, you’ll have the chance to turn recognition from leaders and colleagues into amazing prizes. Join a winning team of talented people and be part of an inclusive and supportive community where everyone is celebrated for being themselves. At Entain India, we do what’s right. It’s one of our core values, and that’s why we're taking the lead when it comes to creating a diverse, equitable and inclusive future - for our people, and the wider global sports betting and gaming sector. However you identify, across any protected characteristic, our ambition is to ensure our people across the globe feel valued and respected and their individuality celebrated. We comply with all applicable recruitment regulations and employment laws in the jurisdictions where we operate, ensuring ethical and compliant hiring practices globally. Should you need any adjustments or accommodations to the recruitment process, at either application or interview, please contact us.
Posted 3 days ago
12.0 years
34 - 45 Lacs
India
Remote
**Need to be a Databricks SME** Location: Offshore (anywhere in India – remote); must work in EST (US shift). Requires 12+ years of experience. 5 Must-Haves: 1. Data expertise – has worked with Azure Databricks pipelines and cluster shutdown – 2 or more years' experience. 2. Unity Catalog migration – well versed – has done Terraform scripting in DevOps – able to write and understand the code, understand the logic behind the scenes, and automate functionality. 3. Terraform expertise – code building – 3 or more years. 4. Understanding of data mesh architecture – decoupling applications – ability to have things run in parallel – clear understanding – 2-plus years of experience with the Microsoft Azure cloud platform. 5. Great problem solver. Key Responsibilities: Architect, configure, and optimize Databricks pipelines for large-scale data processing within an Azure Data Lakehouse environment. Set up and manage Azure infrastructure components including Databricks Workspaces, Azure containers (AKS/ACI), storage accounts, and networking. Design and implement a monitoring and observability framework using tools like Azure Monitor, Log Analytics, and Prometheus/Grafana. Collaborate with platform and data engineering teams to enable a microservices-based architecture for scalable and modular data solutions. Drive automation and CI/CD practices using Terraform, ARM templates, and GitHub Actions/Azure DevOps. Required Skills & Experience: Strong hands-on experience with Azure Databricks, Delta Lake, and Apache Spark. Deep understanding of Azure services: Resource Manager, AKS, ACR, Key Vault, and networking. Proven experience in microservices architecture and container orchestration. Expertise in infrastructure-as-code, scripting (Python, Bash), and DevOps tooling. Familiarity with data governance, security, and cost optimization in cloud environments. Bonus: Experience with event-driven architectures (Kafka/Event Grid). Knowledge of data mesh principles and distributed data ownership.
Interview: Two rounds of interviews (1st with the manager, 2nd with the team) Job Type: Full-time Pay: ₹3,400,000.00 - ₹4,500,000.00 per year Schedule: US shift
Posted 3 days ago
5.0 - 7.0 years
0 Lacs
Delhi
On-site
COMPANY PROFILE Bain & Company is one of the top management consulting firms in the world that helps the world’s most ambitious change makers define the future. Across 65 cities in 40 countries, we work alongside our clients as one team with a shared ambition to achieve extraordinary results, outperform the competition, and redefine industries. We complement our tailored, integrated expertise with a vibrant ecosystem of digital innovators to deliver better, faster, and more enduring outcomes. The firm established several functions in the Indian market in the early 2000s, and its remit across functions has expanded over time. Since 2019, these functions have become part of Global Business Services (GBS). Global Business Services (GBS) is a network of five interconnected business-function hubs across India, Poland, Malaysia, Mexico and Portugal, serving Bain globally to run our business, support other functions, and help drive innovation internally. We are over 1000 business professionals – serving functions in operations, HR, finance, legal, tech, marketing, research, and data analytics – who support our offices globally. Our mantra of “shared innovation, seamless execution,” underpinned by a passion for results, teamwork, and creativity, helps Bain stay at the top of our game operationally. POSITION SUMMARY We are seeking a Senior Administrator to join our Infrastructure Operations team. In this role, you will be responsible for designing, implementing, and maintaining secure, scalable, and reliable infrastructure and cloud platforms. You will work across public cloud environments (Azure, AWS, or GCP), private data centers, and modern automation ecosystems to support both operational and project-based initiatives. This individual will lead infrastructure automation efforts, support internal service requests through ServiceNow, drive cloud modernization projects via Jira, and ensure systems are performant, secure, and aligned with industry best practices.
The ideal candidate is proactive, collaborative, and passionate about operational excellence and infrastructure innovation. RESPONSIBILITIES & DUTIES Participate in managing day-to-day operations of cloud infrastructure environments, including access management, performance management, monitoring, and assessment of metrics. Hands-on work with deployments, provisioning, templates, networking, configuration, upgrades, App Gateway services, APIs, CI/CD pipelines, ExpressRoute configuration, and subscription management. Collaborate with cross-functional teams including architects, developers, and security teams on new deployments, issues, and improvements to existing services: assist with migrations, integrations, and identity enablement for SaaS or custom products; deploy IaaS and PaaS infrastructure in AWS, Azure, or GCP using Terraform, Ansible, or other Infrastructure-as-Code tooling. Serve as a technical escalation point for complex infrastructure and cloud-related issues, handling advanced troubleshooting and resolution. Design and deploy scalable cloud infrastructure (AWS, Azure, GCP) using Infrastructure as Code (IaC) tools such as Terraform and Ansible. Develop reusable modules, templates, and scripts to automate the provisioning and maintenance of infrastructure and services. Support and maintain cloud and on-prem environments, ensuring uptime, availability, and security; work across Windows/Linux operating systems, storage, backup, DR, and patching processes. Execute infrastructure projects and initiatives tracked via Jira. Build and maintain tools for monitoring, alerting, and observability (e.g., Datadog, Prometheus, Grafana). Create and maintain Standard Operating Procedures (SOPs), technical documentation, and runbooks. Mentor junior engineers and contribute to team knowledge-sharing and process improvement initiatives. Participate in a rotating on-call schedule, performing after-hours implementations or incident response as needed.
Ensure adherence to cloud security best practices, identity and access management, and compliance standards. REQUIRED QUALIFICATIONS Bachelor’s degree with a demonstrated interest in technology, technology issues, and analytical work. Vendor certifications a plus: Azure, AWS, Terraform, GCP, Python. EXPERIENCE & TECHNICAL SKILLS: 5–7 years of relevant experience in Cloud Administration, DevOps, or related roles. Strong expertise in at least one public cloud platform (Azure, AWS, or GCP); experience with hybrid or private data centers is a plus. Proficiency with Infrastructure as Code and automation tools: Terraform, Ansible, GitHub Actions, PowerShell, and Python. Hands-on experience with Docker and Kubernetes for containerization and orchestration. Experience with CI/CD pipelines, deployment automation, and version control practices. Familiarity with monitoring and logging stacks such as Datadog, Prometheus, Grafana, etc. Practical knowledge of Go or Python for platform automation and API integration. Understanding of cloud landing zones and environment provisioning best practices. Experience with HashiCorp tools including Vault and Terraform Enterprise. Knowledge of identity and access management systems such as Active Directory, Azure AD, Okta, or LDAP. Strong grasp of cloud security principles, network security, and compliance frameworks. PREFERRED CHARACTERISTICS: Excellent communication and interpersonal skills; able to effectively collaborate across teams and levels of the organization. Strong analytical mindset with the ability to identify and resolve infrastructure and performance bottlenecks. Passion for continuous learning, innovation, and driving operational excellence. Self-motivated, organized, and capable of managing multiple priorities in a fast-paced environment. ADDITIONAL INFORMATION: This role may require occasional work outside of standard business hours for system maintenance or incident response.
This is a hybrid position with flexibility depending on organizational needs.
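The responsibilities above include building tools for monitoring and alerting. As a minimal illustration of the kind of rule such tools encode, here is a rolling-average threshold check in Python; the class name, window size, and threshold are hypothetical choices for the sketch, not tied to Datadog, Prometheus, or any Bain system.

```python
from collections import deque


class MetricAlert:
    """Illustrative alert rule: fires when the rolling average of the
    last `window` samples exceeds `threshold`. Averaging over a window
    avoids paging on a single noisy spike."""

    def __init__(self, threshold, window=5):
        self.threshold = threshold
        self.samples = deque(maxlen=window)  # old samples fall off the left

    def observe(self, value):
        """Record one metric sample; return True if the alert should fire."""
        self.samples.append(value)
        avg = sum(self.samples) / len(self.samples)
        return avg > self.threshold
```

Real monitoring stacks layer more on top of this shape (per-series state, "for" durations so a condition must hold for several evaluations, and notification routing), but the core evaluation is the same comparison of an aggregated value against a threshold.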
Posted 3 days ago
15.0 years
0 Lacs
Gurgaon
On-site
Project Role: Security Architect Project Role Description: Define the cloud security framework and architecture, ensuring it meets the business requirements and performance goals. Document the implementation of the cloud security controls and transition to cloud security-managed operations. Must have skills: CyberArk Privileged Access Management Good to have skills: NA Minimum 5 year(s) of experience is required Educational Qualification: 15 years full time education Summary: We are looking for an experienced CyberArk PAM Specialist to design, implement, and support the CyberArk Privileged Access Management (PAM) solution. Roles and Responsibilities: 1. Define, design, and implement CyberArk Privilege Cloud (SaaS). 2. Install and configure cloud connectors. 3. Configure MFA, SAML, LDAP, and SIEM integration. 4. Troubleshoot and resolve CyberArk-related technical issues. 5. Work closely with application teams to onboard different types of systems to CyberArk. 6. Develop custom CPM/PSM plugins if required. 7. Support application onboarding, including access policies, group assignments, and role management. 8. Communicate effectively with business teams, external clients, and solution providers. 9. Document technical designs, solutions, and implementation plans. 10. Work independently and take ownership of technical deliverables. Professional & Technical Skills: Must Have: 1. Strong experience in CyberArk Privilege Cloud, Conjur Secrets Management, and CyberArk PAM (Vault, CPM, PSM, PVWA, AAM). 2. Solid understanding of security standards and protocols including SSO, MFA, SAML, OAuth, OIDC, LDAP, RADIUS, and Kerberos. 3. Proficient in CyberArk and related technologies; experience in system administration, scripting (UNIX/Linux), REST APIs, LDAP directories, and Active Directory. 4. Experience providing guidance on CyberArk strategy; must have PAM deep-dive experience. 5. Strong understanding of PAM architecture, deployment methodologies, and best practices. 6.
Effective at presenting information to different audiences at the correct level of detail (e.g., from engineering teams to executive management). 7. Be a product and domain expert in the PAM domain, experienced in conducting environment assessments and health checks in line with best practices. 8. Strong troubleshooting and problem-solving skills. 9. Experience in EPM is desirable but not mandatory. 10. Excellent verbal and written communication skills. 11. Ability to work independently on technical tasks and client engagements. 12. Candidate must be an independent self-starter able to perform all deployment activities with oversight and as a member of a project team. 13. Candidate must have Sentry Certification; CyberArk CDE is nice to have. 14. Good-to-Have Skills: Thycotic (Delinea), BeyondTrust, HashiCorp Vault. Additional Information: 1. 9+ years’ experience related to designing, deploying, and configuring PAM solutions, or 6+ years of direct PAM consulting experience. 2. Candidate must have completed 16 years of full-time education. 3. This position is open to Bengaluru, Chennai, Pune, Hyderabad, and Gurugram Accenture locations.
Posted 3 days ago
5.0 years
3 - 13 Lacs
India
On-site
Job Title: Azure DevOps Engineer Experience: 5+ Years Location: [Jaipur] Job Type: Full-Time Notice Period: [Immediate / 15 Days / Negotiable] About the Role: We are looking for an experienced Azure DevOps Engineer with over 5 years of expertise in managing cloud infrastructure, CI/CD pipelines, automation, and deployment processes using Microsoft Azure. The ideal candidate will play a key role in optimizing our software delivery and operational efficiency through modern DevOps practices. Key Responsibilities: Design, implement, and maintain CI/CD pipelines using Azure DevOps. Manage and automate infrastructure provisioning using tools like ARM templates, Terraform, or Bicep. Monitor system performance, availability, and security using Azure Monitor and other logging tools. Work closely with development teams to integrate DevOps solutions and resolve issues in the build and deployment process. Set up and manage Azure services such as AKS, App Services, Azure Functions, Key Vault, and Azure Storage. Automate manual processes using PowerShell, Bash, or other scripting languages. Ensure proper version control, branching strategies, and code review processes. Maintain high availability and disaster recovery strategies for production systems. Implement DevSecOps practices to ensure security is integrated into the DevOps pipeline. Requirements: 5+ years of experience in DevOps engineering with at least 3+ years on Azure. Strong hands-on experience with Azure DevOps (Pipelines, Repos, Boards, Artifacts). Experience with Infrastructure as Code (IaC) tools such as Terraform, ARM templates, or Bicep. Knowledge of containerization and orchestration tools like Docker and Kubernetes (AKS). Proficiency in scripting languages (PowerShell, Bash, Python). Familiarity with Git, branching strategies, and version control best practices. Experience with monitoring and logging tools: Azure Monitor, App Insights, Log Analytics, etc.
Strong understanding of networking, security, and identity management in Azure. Preferred Qualifications: Microsoft Certified: Azure DevOps Engineer Expert (AZ-400) or relevant Azure certifications. Experience with Agile/Scrum methodologies. Exposure to hybrid cloud or multi-cloud environments. Job Type: Full-time Pay: ₹329,525.90 - ₹1,341,704.05 per year Benefits: Provident Fund Work Location: In person
Posted 3 days ago