
2588 Vault Jobs - Page 42

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 years

8 - 30 Lacs

Chennai, Tamil Nadu, India

On-site

Azure Databricks Engineer

Industry & Sector: We are a fast-growing cloud data and analytics consultancy serving global enterprises across finance, retail, and manufacturing. Our teams design high-throughput lakehouse platforms, predictive analytics, and AI services on Microsoft Azure, unlocking data-driven decisions at scale.

Role & Responsibilities:
- Design, develop, and optimise end-to-end data pipelines on Azure Databricks using PySpark/Scala and Delta Lake.
- Build scalable ETL workflows to ingest structured and semi-structured data from Azure Data Lake, SQL, and API sources.
- Implement lakehouse architectures, partitioning, and performance tuning to ensure sub-second query response.
- Collaborate with data scientists to prepare feature stores and accelerate model training and inference.
- Automate deployment with Azure DevOps, ARM/Bicep, and the Databricks CLI for secure, repeatable releases.
- Monitor pipeline health, cost, and governance, applying best practices for security, lineage, and data quality.

Skills & Qualifications
Must-have:
- 3+ years building large-scale Spark or Databricks workloads in production.
- Expert hands-on experience with PySpark/Scala, Delta Lake, and SQL optimisation.
- Deep knowledge of Azure services: Data Lake Storage Gen2, Data Factory/Synapse, Key Vault, and Event Hub.
- Proficiency in CI/CD, Git, and automated testing for data engineering.
- Understanding of data modelling, partitioning, and performance-tuning strategies.
Preferred:
- Exposure to MLflow, feature store design, or predictive model serving.
- Experience implementing role-based access controls and GDPR/PCI compliance on Azure.
- Certification: Microsoft DP-203 or Databricks Data Engineer Professional.

Benefits & Culture:
- Work on cutting-edge Azure Databricks projects with Fortune 500 clients.
- Flat, learning-centric culture that funds certifications and conference passes.
- Hybrid leave policy, comprehensive health cover, and performance bonuses.
Skills: performance tuning, PySpark, Event Hub, SQL, CI/CD, Data Factory, automated testing, Key Vault, Azure Data Lake Storage Gen2, data modelling, Azure Databricks, Git, Delta Lake, Scala, DevOps, SQL optimisation, Spark, Synapse
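The ETL work described above centres on incremental ingestion: load only source rows newer than a stored watermark, then advance the watermark. A real Databricks pipeline would implement this with PySpark and a Delta Lake MERGE; the following is only a minimal plain-Python sketch of the pattern, with all names and data invented for illustration.

```python
# Hypothetical sketch of watermark-based incremental ingestion, the pattern
# behind "build scalable ETL workflows" above. A production pipeline on
# Databricks would use PySpark DataFrames and Delta Lake MERGE instead.

def incremental_load(source_rows, target, watermark):
    """Upsert rows newer than the stored watermark; return the new watermark."""
    new_rows = [r for r in source_rows if r["updated_at"] > watermark]
    for row in new_rows:
        target[row["id"]] = row  # upsert keyed on the primary key
    if new_rows:
        watermark = max(r["updated_at"] for r in new_rows)
    return watermark

rows = [
    {"id": 1, "updated_at": "2024-01-01T00:00:00", "amount": 10},
    {"id": 2, "updated_at": "2024-01-02T00:00:00", "amount": 20},
]
target, wm = {}, "2024-01-01T12:00:00"
wm = incremental_load(rows, target, wm)
print(len(target), wm)  # only the row newer than the watermark is loaded
```

Because ISO-8601 timestamps sort lexicographically, string comparison suffices here; a real pipeline would compare typed timestamp columns and partition the target by date for pruning.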

Posted 1 month ago

Apply

14.0 - 19.0 years

11 - 16 Lacs

Hyderabad

Work from Office

10 years of hands-on experience with Thought Machine Vault, Kubernetes, Terraform, GCP/AWS, PostgreSQL, CI/CD, REST APIs, Docker, and microservices.

Responsibilities:
- Architect and manage enterprise-level databases with 24/7 availability.
- Lead efforts on optimization, backup, and disaster recovery planning.
- Ensure compliance; implement monitoring and automation.
- Guide developers on schema design and query optimization.
- Conduct DB health audits and capacity planning.
- Collaborate with cross-functional teams to define, design, and ship new features.
- Work on the entire software development lifecycle, from concept and design to testing and deployment.
- Implement and maintain AWS cloud-based solutions, ensuring high performance, security, and scalability.
- Integrate microservices with Kafka for real-time data streaming and event-driven architecture.
- Troubleshoot and resolve issues in a timely manner, ensuring optimal system performance.
- Keep up to date with industry trends and advancements, incorporating best practices into our development processes.

Requirements:
- Bachelor's or Master's degree in Computer Science or a related field.
- Solid understanding of AWS services, including but not limited to EC2, Lambda, S3, and RDS.
- Experience with Kafka for building event-driven architectures.
- Strong database skills, including SQL and NoSQL databases.
- Familiarity with containerization and orchestration tools (Docker, Kubernetes).
- Excellent problem-solving and troubleshooting skills.
- Strong communication and collaboration skills.
- Good to have: TM Vault core banking knowledge.
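The "guide developers on schema design and query optimization" duty usually means showing how an index turns a full table scan into an index search. Thought Machine Vault runs on PostgreSQL, where you would use `EXPLAIN`; the sketch below demonstrates the same idea generically with Python's built-in `sqlite3`, using an invented `txn` table.

```python
import sqlite3

# Hypothetical sketch of query optimization guidance: compare the query
# plan for a filtered aggregate before and after adding an index.
# Production work on Vault would inspect PostgreSQL EXPLAIN output instead.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE txn (id INTEGER PRIMARY KEY, account TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO txn (account, amount) VALUES (?, ?)",
    [(f"acct-{i % 100}", float(i)) for i in range(1000)],
)

def plan(sql):
    # EXPLAIN QUERY PLAN rows are (id, parent, notused, detail)
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT SUM(amount) FROM txn WHERE account = 'acct-7'"
before = plan(query)  # full table scan
conn.execute("CREATE INDEX idx_txn_account ON txn (account)")
after = plan(query)   # index search on account
print(before)
print(after)
```

The same before/after discipline (measure the plan, add the index, re-measure) carries over directly to PostgreSQL capacity planning and DB health audits.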

Posted 1 month ago

Apply

9.0 - 14.0 years

11 - 16 Lacs

Hyderabad

Work from Office

8 years of hands-on experience with Thought Machine Vault, Kubernetes, Terraform, GCP/AWS, PostgreSQL, CI/CD, REST APIs, Docker, and microservices.

Responsibilities:
- Architect and manage enterprise-level databases with 24/7 availability.
- Lead efforts on optimization, backup, and disaster recovery planning.
- Ensure compliance; implement monitoring and automation.
- Guide developers on schema design and query optimization.
- Conduct DB health audits and capacity planning.
- Collaborate with cross-functional teams to define, design, and ship new features.
- Work on the entire software development lifecycle, from concept and design to testing and deployment.
- Implement and maintain AWS cloud-based solutions, ensuring high performance, security, and scalability.
- Integrate microservices with Kafka for real-time data streaming and event-driven architecture.
- Troubleshoot and resolve issues in a timely manner, ensuring optimal system performance.
- Keep up to date with industry trends and advancements, incorporating best practices into our development processes.

Requirements:
- Bachelor's or Master's degree in Computer Science or a related field.
- Solid understanding of AWS services, including but not limited to EC2, Lambda, S3, and RDS.
- Experience with Kafka for building event-driven architectures.
- Strong database skills, including SQL and NoSQL databases.
- Familiarity with containerization and orchestration tools (Docker, Kubernetes).
- Excellent problem-solving and troubleshooting skills.
- Strong communication and collaboration skills.
- Good to have: TM Vault core banking knowledge.

Posted 1 month ago

Apply

8.0 - 13.0 years

12 - 16 Lacs

Hyderabad

Work from Office

8 years of hands-on experience with Thought Machine Vault, Kubernetes, Terraform, GCP/AWS, PostgreSQL, CI/CD, REST APIs, Docker, and microservices.

Responsibilities:
- Architect and manage enterprise-level databases with 24/7 availability.
- Lead efforts on optimization, backup, and disaster recovery planning.
- Design and manage scalable CI/CD pipelines for cloud-native apps.
- Automate infrastructure using Terraform/CloudFormation.
- Implement container orchestration using Kubernetes and ECS.
- Ensure cloud security, compliance, and cost optimization.
- Monitor performance and implement high-availability setups.
- Collaborate with dev, QA, and security teams; drive architecture decisions.
- Mentor team members and contribute to DevOps best practices.
- Integrate microservices with Kafka for real-time data streaming and event-driven architecture.
- Troubleshoot and resolve issues in a timely manner, ensuring optimal system performance.
- Keep up to date with industry trends and advancements, incorporating best practices into our development processes.

Requirements:
- Bachelor's or Master's degree in Computer Science or a related field.
- Solid understanding of AWS services, including but not limited to EC2, Lambda, S3, and RDS.
- Experience with Kafka for building event-driven architectures.
- Strong database skills, including SQL and NoSQL databases.
- Familiarity with containerization and orchestration tools (Docker, Kubernetes).
- Excellent problem-solving and troubleshooting skills.
- Strong communication and collaboration skills.
- Good to have: TM Vault core banking knowledge.

Posted 1 month ago

Apply

170.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Summary
The primary responsibility of this position is to manage the day-to-day operations of the recertification function: conducting data integrity checks within IAM systems (OneCert, AMC, SAM, One Vault, Snow, etc.) and native systems to ensure processes and procedures are followed, without deviation, as documented. Perform the application recertification and access recertification function, and help BOs/ISOs fix gaps between systems and complete certifications.

Key Responsibilities
- Conduct data integrity checks within IAM systems (OneCert, AMC, SAM, One Vault, Snow, etc.) and native systems to ensure the IAM operations team follows documented processes and procedures.
- Conduct root cause analysis on gaps identified during data integrity and validation checks.
- Help BOs/ISOs complete recertification after data synchronization.
- Prepare data for user access recertification before launch, reviewing all attributes of each application and access before the recertification campaign.
- Perform reporting using database and SQL queries in SailPoint to check timestamps.
- Adhere strictly to established processes to ensure all reports are generated and reconciled as documented.
- Accountable for Identity Access Management (IAM) process adherence, enforcement, and implementation, ensuring compliance with Bank policy.
- Accountable for manual administrative tasks related to IAM service request fulfilment and other BAU activities.
- Establish and participate in active collaboration with external teams to fix reconciliation gaps.
- Work with support teams to understand application limitations and opportunities for improvement.
- Highlight and mitigate risks pertaining to the IAM process and the Bank.
- Support IAM leaders in maintaining and enhancing the IAM process.
- Demonstrate good working etiquette, strong teamwork, and clear communication.
- Undertake periodic reviews and other application-security exercises for compliance with current procedures/processes, and implement enhancements to address non-compliance and security requirements.

Regulatory & Business Conduct
- Display exemplary conduct and live by the Group's Values and Code of Conduct.
- Take personal responsibility for embedding the highest standards of ethics, including regulatory and business conduct, across Standard Chartered Bank. This includes understanding and ensuring compliance with, in letter and spirit, all applicable laws, regulations, guidelines and the Group Code of Conduct.
- Effectively and collaboratively identify, escalate, mitigate and resolve risk, conduct and compliance matters.
- Lead to achieve the outcomes set out in the Bank's Conduct Principles: Fair Outcomes for Clients; Effective Financial Markets; Financial Crime Compliance; The Right Environment.
- Serve as a Director of the Board; exercise authorities delegated by the Board of Directors and act in accordance with the Articles of Association (or equivalent).

Key Stakeholders
- Application business owners
- CISOs and line managers
- Risk and compliance teams
- ICS/IAM domain leads/heads
- Technology teams
- Application support services

Skills and Experience
- Proficiency in spreadsheet programs (MS Excel), including macro recording/editing and error handling
- Database and SQL queries

Qualifications
- Education: Bachelor's in Computer Science or a related field
- Certifications: Excel VBA and macros; SQL and database queries
- Languages: English

About Standard Chartered
We're an international bank, nimble enough to act, big enough for impact. For more than 170 years, we've worked to make a positive difference for our clients, communities, and each other. We question the status quo, love a challenge and enjoy finding new opportunities to grow and do better than before. If you're looking for a career with purpose and you want to work for a bank making a difference, we want to hear from you.
You can count on us to celebrate your unique talents, and we can't wait to see the talents you can bring us. Our purpose, to drive commerce and prosperity through our unique diversity, together with our brand promise, to be here for good, are achieved by how we each live our valued behaviours. When you work with us, you'll see how we value difference and advocate inclusion.

Together we:
- Do the right thing and are assertive, challenge one another, and live with integrity, while putting the client at the heart of what we do
- Never settle, continuously striving to improve and innovate, keeping things simple and learning from doing well, and not so well
- Are better together: we can be ourselves, be inclusive, see more good in others, and work collectively to build for the long term

What We Offer
In line with our Fair Pay Charter, we offer a competitive salary and benefits to support your mental, physical, financial and social wellbeing:
- Core bank funding for retirement savings, medical and life insurance, with flexible and voluntary benefits available in some locations.
- Time off including annual leave, parental/maternity (20 weeks), sabbatical (12 months maximum) and volunteering leave (3 days), along with minimum global standards for annual and public holidays, which combine to a minimum of 30 days.
- Flexible working options based around home and office locations, with flexible working patterns.
- Proactive wellbeing support through Unmind, a market-leading digital wellbeing platform, development courses for resilience and other human skills, a global Employee Assistance Programme, sick leave, mental health first-aiders and a range of self-help toolkits.
- A continuous learning culture to support your growth, with opportunities to reskill and upskill and access to physical, virtual and digital learning.
- Being part of an inclusive and values-driven organisation, one that embraces and celebrates our unique diversity across our teams, business functions and geographies, where everyone feels respected and can realise their full potential.

Recruitment Assessments
Some of our roles use assessments to help us understand how suitable you are for the role you've applied to. If you are invited to take an assessment, this is great news: your application has progressed to an important stage of our recruitment process. Visit our careers website: www.sc.com/careers
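The data-integrity checks this posting describes amount to reconciling entitlement records between IAM systems and the applications' native systems, then reporting gaps in either direction. Here is a hedged, hypothetical sketch of that reconciliation with set arithmetic; the user, application, and role names are invented, and real work would pull these records from SailPoint and application databases via SQL.

```python
# Hypothetical reconciliation gap check: compare (user, app, role)
# entitlements recorded in an IAM system against a native system.
# All records below are invented for illustration.

iam_records = {
    ("alice", "app1", "reader"),
    ("bob", "app1", "admin"),
    ("carol", "app2", "reader"),
}
native_records = {
    ("alice", "app1", "reader"),
    ("carol", "app2", "reader"),
    ("dave", "app2", "writer"),
}

# In IAM but not in the native system: stale entries to clean up.
missing_in_native = sorted(iam_records - native_records)
# In the native system but not in IAM: unmanaged access to investigate.
missing_in_iam = sorted(native_records - iam_records)

print(missing_in_native)  # [('bob', 'app1', 'admin')]
print(missing_in_iam)     # [('dave', 'app2', 'writer')]
```

Both gap lists would feed the root-cause analysis and the BO/ISO fix-up loop described in the responsibilities above.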

Posted 1 month ago

Apply

6.0 - 11.0 years

12 - 16 Lacs

Hyderabad

Work from Office

The SRE is part of an application team matrixed to the Cloud Services Team to perform a specialized function focused on the automation of availability, performance, maintainability and optimization of business applications on the platform. To be effective in the position, an SRE must have strong AWS, Terraform and GitHub skills, as the platform is 100% automated: all changes applied to the environment must be automated with Terraform and checked into GitHub version control. A matrixed SRE will be granted the Reliability Engineering role in the accounts they are responsible for; this role includes the rights to perform all functions required to support the applications in the IaaS environment. An SRE is required to adhere to all enterprise processes and controls (e.g. change, incident and problem management) and ensure alignment to cloud standards and best practices.

Responsibilities:
- Write and implement infrastructure as code and platform automation.
- Implement infrastructure as code with Terraform.
- Collaborate with Cloud Services and application teams to deliver projects.
- Deploy infrastructure-as-code (IaC) releases to QA, staging, and production environments.
- Build the automation for any account customizations required by the application (custom roles, policies, security groups, etc.).

Requirements (DevOps Engineer):
- Minimum 5 years of working experience, especially as a DevOps Engineer/SRE.
- Comfortable working in an IC role, with very good verbal and written communication skills.
- OS: 3+ years of hands-on working experience with Linux.
- SCM: 3+ years of hands-on working experience with Git, preferably GitHub Enterprise.
- Cloud: thorough knowledge of AWS; certification preferred.
- CI/CD: 4+ years of hands-on working experience with Jenkins (or another CI/CD tool), including CI/CD for EKS.
- Jenkins: hands-on experience with pipeline scripts preferred.
- Containers: minimum 1 year of hands-on working experience with Docker/Kubernetes; CKA (Certified Kubernetes Administrator) certification preferred.
- MuleSoft Runtime Fabric: install and configure the Anypoint Runtime Fabric environment and deploy applications on Runtime Fabric.
- Cloud infrastructure provisioning: 2+ years of hands-on working experience with Terraform/Terraform Enterprise/CloudFormation.
- Application provisioning: 2+ years of hands-on working experience with Puppet/Ansible/Chef.
- Data components: good knowledge and at least 1 year of working experience with ELK, Kafka, and ZooKeeper; HDF knowledge is an added advantage.
- Tools: Consul and Vault knowledge is an added advantage.
- Scripting: 3+ years of hands-on working experience with any scripting language (Shell/Python/Ruby, etc.).
- Very good troubleshooting skills and hands-on working experience with production deployments and incidents.
- MuleSoft knowledge is an added advantage.
- Java Spring Boot knowledge is an added advantage.
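The scripting and production-deployment experience this role asks for often shows up as small automation utilities, for example gating a rollout on a health check. Below is a hedged Python sketch of such a bounded-retry health gate; the probe and its behaviour are simulated, and a real script would call a service endpoint and use exponential backoff.

```python
import time

# Hypothetical sketch of deployment scripting (Shell/Python/Ruby per the
# posting): poll a health probe with bounded retries before declaring a
# rollout healthy. check_fn is an injected probe returning True/False.

def wait_healthy(check_fn, attempts=5, delay=0.01):
    """Return True once check_fn() succeeds, False after all attempts fail."""
    for _ in range(attempts):
        if check_fn():
            return True
        time.sleep(delay)  # a real script would back off exponentially
    return False

# Simulated probe: unhealthy twice, then healthy on the third attempt.
responses = iter([False, False, True])
result = wait_healthy(lambda: next(responses))
print(result)  # True
```

Wiring this into a Jenkins pipeline stage (fail the build when it returns False) is the usual pattern for the "production deployments and incidents" experience listed above.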

Posted 1 month ago

Apply

8.0 - 13.0 years

8 - 12 Lacs

Hyderabad

Work from Office

8 years of hands-on experience with Thought Machine Vault, Kubernetes, Terraform, GCP/AWS, PostgreSQL, CI/CD, REST APIs, Docker, and microservices.

Responsibilities:
- Architect and manage enterprise-level databases with 24/7 availability.
- Lead efforts on optimization, backup, and disaster recovery planning.
- Ensure compliance; implement monitoring and automation.
- Guide developers on schema design and query optimization.
- Conduct DB health audits and capacity planning.
- Collaborate with cross-functional teams to define, design, and ship new features.
- Work on the entire software development lifecycle, from concept and design to testing and deployment.
- Implement and maintain AWS cloud-based solutions, ensuring high performance, security, and scalability.
- Integrate microservices with Kafka for real-time data streaming and event-driven architecture.
- Troubleshoot and resolve issues in a timely manner, ensuring optimal system performance.
- Keep up to date with industry trends and advancements, incorporating best practices into our development processes.

Requirements:
- Bachelor's or Master's degree in Computer Science or a related field.
- Solid understanding of AWS services, including but not limited to EC2, Lambda, S3, and RDS.
- Experience with Kafka for building event-driven architectures.
- Strong database skills, including SQL and NoSQL databases.
- Familiarity with containerization and orchestration tools (Docker, Kubernetes).
- Excellent problem-solving and troubleshooting skills.
- Strong communication and collaboration skills.
- Good to have: TM Vault core banking knowledge.

Posted 1 month ago

Apply

3.0 - 6.0 years

5 - 9 Lacs

Bengaluru

Work from Office

We’re looking for Senior Engineers with a deep backend focus to join our team. In this role, you can expect to:
- Design, prototype and implement features and tools while ensuring stability and usability
- Collaborate closely with Product Design and Product Management partners, as well as engineers on your team and others
- Act as a subject matter expert on quality development with an emphasis on Golang development
- Lead and execute large-scale projects, ensuring the reliable delivery of key features from design through full implementation and troubleshooting
- Drive the end-to-end project lifecycle, including architecture design, implementation, and issue resolution, with a focus on quality and efficiency
- Evaluate project tradeoffs and propose solutions, proactively removing blockers and keeping stakeholders informed on progress, issues, and milestones
- Collaborate with internal teams, customers, and external stakeholders to design solutions that align with requirements and customer needs
- Advocate for strategic technical roadmap initiatives that enhance the system’s overall effectiveness across teams and the organization
- Debug and resolve complex issues to improve the quality and stability of products or solutions
- Review and assess code for quality, design patterns, and optimization opportunities, ensuring best practices are followed
- Mentor and guide software engineers, sharing technical knowledge and promoting best practices in development processes
- Facilitate collaborative team activities, such as code pairing and group troubleshooting, to foster a productive and cohesive team environment
- Support reliable production environments, including participating in an on-call rotation
- Strive for quality through maintainable code and comprehensive testing from development to deployment

Required education: Bachelor's degree

Required technical and professional expertise:
- Typically, at least six or more years of experience as an engineer
- Professional experience developing with modern programming languages and frameworks, and an interest in working in Golang and Ruby specifically
- Experience working with distributed systems, particularly on a cloud provider such as AWS, Azure or GCP, with a focus on scalability, resilience and security
- Emerging ability to direct work and influence others, with a strategic approach to problem-solving and decision-making in a collaborative environment
- Demonstrated business acumen and customer focus, with a readiness for change and adaptability in dynamic situations
- A cloud-native mindset and solid understanding of DevOps principles in a cloud environment
- Familiarity with cloud monitoring tools to implement robust observability practices that prioritize metrics, logging and tracing for high reliability and performance
- Intentional focus on stakeholder management and effective communication, fostering trust and relationship-building across diverse teams
- Integrated skills in critical thinking and data-driven analysis, promoting a growth mindset and continuous improvement to support high-quality outcomes

Preferred technical and professional experience:
- Experience using HashiCorp products (Terraform, Packer, Waypoint, Nomad, Vault, Boundary, Consul)
- Prior experience working in cloud platform engineering teams

Posted 1 month ago

Apply

3.0 years

8 - 30 Lacs

Hyderabad, Telangana, India

On-site

Azure Databricks Engineer

Industry & Sector: We are a fast-growing cloud data and analytics consultancy serving global enterprises across finance, retail, and manufacturing. Our teams design high-throughput lakehouse platforms, predictive analytics, and AI services on Microsoft Azure, unlocking data-driven decisions at scale.

Role & Responsibilities:
- Design, develop, and optimise end-to-end data pipelines on Azure Databricks using PySpark/Scala and Delta Lake.
- Build scalable ETL workflows to ingest structured and semi-structured data from Azure Data Lake, SQL, and API sources.
- Implement lakehouse architectures, partitioning, and performance tuning to ensure sub-second query response.
- Collaborate with data scientists to prepare feature stores and accelerate model training and inference.
- Automate deployment with Azure DevOps, ARM/Bicep, and the Databricks CLI for secure, repeatable releases.
- Monitor pipeline health, cost, and governance, applying best practices for security, lineage, and data quality.

Skills & Qualifications
Must-have:
- 3+ years building large-scale Spark or Databricks workloads in production.
- Expert hands-on experience with PySpark/Scala, Delta Lake, and SQL optimisation.
- Deep knowledge of Azure services: Data Lake Storage Gen2, Data Factory/Synapse, Key Vault, and Event Hub.
- Proficiency in CI/CD, Git, and automated testing for data engineering.
- Understanding of data modelling, partitioning, and performance-tuning strategies.
Preferred:
- Exposure to MLflow, feature store design, or predictive model serving.
- Experience implementing role-based access controls and GDPR/PCI compliance on Azure.
- Certification: Microsoft DP-203 or Databricks Data Engineer Professional.

Benefits & Culture:
- Work on cutting-edge Azure Databricks projects with Fortune 500 clients.
- Flat, learning-centric culture that funds certifications and conference passes.
- Hybrid leave policy, comprehensive health cover, and performance bonuses.
Skills: performance tuning, PySpark, Event Hub, SQL, CI/CD, Data Factory, automated testing, Key Vault, Azure Data Lake Storage Gen2, data modelling, Azure Databricks, Git, Delta Lake, Scala, DevOps, SQL optimisation, Spark, Synapse

Posted 1 month ago

Apply

3.0 years

8 - 30 Lacs

Pune, Maharashtra, India

On-site

Azure Databricks Engineer

Industry & Sector: We are a fast-growing cloud data and analytics consultancy serving global enterprises across finance, retail, and manufacturing. Our teams design high-throughput lakehouse platforms, predictive analytics, and AI services on Microsoft Azure, unlocking data-driven decisions at scale.

Role & Responsibilities:
- Design, develop, and optimise end-to-end data pipelines on Azure Databricks using PySpark/Scala and Delta Lake.
- Build scalable ETL workflows to ingest structured and semi-structured data from Azure Data Lake, SQL, and API sources.
- Implement lakehouse architectures, partitioning, and performance tuning to ensure sub-second query response.
- Collaborate with data scientists to prepare feature stores and accelerate model training and inference.
- Automate deployment with Azure DevOps, ARM/Bicep, and the Databricks CLI for secure, repeatable releases.
- Monitor pipeline health, cost, and governance, applying best practices for security, lineage, and data quality.

Skills & Qualifications
Must-have:
- 3+ years building large-scale Spark or Databricks workloads in production.
- Expert hands-on experience with PySpark/Scala, Delta Lake, and SQL optimisation.
- Deep knowledge of Azure services: Data Lake Storage Gen2, Data Factory/Synapse, Key Vault, and Event Hub.
- Proficiency in CI/CD, Git, and automated testing for data engineering.
- Understanding of data modelling, partitioning, and performance-tuning strategies.
Preferred:
- Exposure to MLflow, feature store design, or predictive model serving.
- Experience implementing role-based access controls and GDPR/PCI compliance on Azure.
- Certification: Microsoft DP-203 or Databricks Data Engineer Professional.

Benefits & Culture:
- Work on cutting-edge Azure Databricks projects with Fortune 500 clients.
- Flat, learning-centric culture that funds certifications and conference passes.
- Hybrid leave policy, comprehensive health cover, and performance bonuses.
Skills: performance tuning, PySpark, Event Hub, SQL, CI/CD, Data Factory, automated testing, Key Vault, Azure Data Lake Storage Gen2, data modelling, Azure Databricks, Git, Delta Lake, Scala, DevOps, SQL optimisation, Spark, Synapse

Posted 1 month ago

Apply

10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Role Overview: We’re looking for a seasoned Technical Lead to architect and drive the development of enterprise-grade applications using .NET Core, ReactJS, and Azure Cloud. This role combines hands-on engineering with leadership responsibilities, making it ideal for someone who thrives in full-stack environments and cloud-native ecosystems.

Key Responsibilities:
- Lead the design and development of scalable web applications using .NET Core, C#, and ReactJS
- Architect and implement cloud-native solutions on Microsoft Azure
- Collaborate with cross-functional teams to define technical requirements and delivery timelines
- Mentor developers, conduct code reviews, and enforce best practices
- Drive DevOps practices using Azure DevOps, CI/CD pipelines, and infrastructure automation
- Ensure application performance, security, and maintainability
- Participate in sprint planning, architecture reviews, and stakeholder presentations

Required Skills:
- 10+ years of experience in software development with a strong focus on .NET technologies
- Proficiency in ReactJS, JavaScript/TypeScript, and modern front-end tooling
- Hands-on experience with Azure services (App Services, Functions, Cosmos DB, Key Vault, etc.)
- Strong understanding of RESTful APIs, microservices, and cloud architecture
- Familiarity with Azure DevOps, Git, and Agile methodologies
- Excellent communication and leadership skills

Posted 1 month ago

Apply

3.0 - 5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

About us
Bain & Company is a global management consultancy that helps the world’s most ambitious change makers define the future. Across 65 offices in 40 countries, we work alongside our clients as one team with a shared ambition to achieve extraordinary results, outperform the competition and redefine industries. Since our founding in 1973, we have measured our success by the success of our clients, and we proudly maintain the highest level of client advocacy in the industry. In 2004, the firm established its presence in the Indian market by opening the Bain Capability Center (BCC) in New Delhi. The BCC is now known as the BCN (Bain Capability Network), with nodes across various geographies. The BCN is an integral part, and the largest unit, of Expert Client Delivery (ECD). ECD plays a critical role by adding value to Bain's case teams globally, supporting them with analytics and research solutioning across all industries, specific domains for corporate cases, client development, private equity diligence, and Bain intellectual property. The BCN comprises Consulting Services, Knowledge Services and Shared Services.

Who you will work with
The Bain Capability Network (BCN) collaborates with global case teams to address clients' pressing business challenges. It is integrated with Bain's diverse capabilities and industry practices, leveraging sector expertise, data, research, and analytics to enhance intellectual property and deliver impactful client solutions. As part of the BCN Data Engineering team, you will play a pivotal role in supporting Bain & Company’s client engagements (case work) and the development of innovative, data-driven products. This role requires a blend of technical expertise, problem-solving, and collaboration, as you’ll work closely with Bain consultants, product teams, and global stakeholders to deliver impactful data solutions.

What you’ll do
- Write complex code to develop scalable, flexible, user-friendly applications across a robust technology stack.
- Evaluate potential technologies for adoption, including open-source frameworks, libraries, and tools.
- Construct, test, install, and maintain software applications.
- Contribute to planning for acceptance testing and implementation of new software, performing supporting activities to ensure that customers have the information and assistance they need for a successful implementation.
- Develop secure and highly performant services and APIs.
- Ensure the maintainability and quality of code.

About you
- A Bachelor’s or Master’s degree in Computer Science or a related field
- 3 to 5 years of experience in full-stack development
- Proficiency in back-end technologies such as Node.js and Python (Django/Flask)
- Experience working with relational and non-relational databases (e.g., MySQL, PostgreSQL, MongoDB)
- Strong proficiency in JavaScript, TypeScript, or similar programming languages
- Familiarity with modern development tools like Git, Docker, and CI/CD pipelines
- Experience with front-end frameworks (e.g., React.js, Angular, or Vue.js)
- Knowledge of RESTful APIs and/or GraphQL
- Understanding of front-end and back-end architecture and design principles
- Basic knowledge of cloud platforms (e.g., AWS, Azure, or Google Cloud) and containerization tools like Docker or Kubernetes
- Sound SDLC skills, preferably with experience in an agile environment
- Excellent English communication skills, with the ability to effectively interface across cross-functional technology teams and the business

What makes us a great place to work
We are proud to be consistently recognized as one of the world's best places to work, a champion of diversity and a model of social responsibility. We are currently ranked the #1 consulting firm on Glassdoor’s Best Places to Work list, and we have maintained a spot in the top four on Glassdoor's list for the last 12 years. We believe that diversity, inclusion and collaboration are key to building extraordinary teams.
We hire people with exceptional talents, abilities and potential, then create an environment where you can become the best version of yourself and thrive both professionally and personally. We are publicly recognized by external parties such as Fortune, Vault, Mogul, Working Mother, Glassdoor and the Human Rights Campaign for being a great place to work for diversity and inclusion, women, LGBTQ and parents.

Posted 1 month ago

Apply

5.0 - 10.0 years

4 - 7 Lacs

Hyderabad

Work from Office

Roles and Responsibilities
- Collaborate with cross-functional teams to implement Veeva Vault solutions.
- Provide technical support and training to end-users on Veeva Vault functionality.
- Develop and maintain documentation of Veeva Vault configurations and workflows.
- Troubleshoot and resolve issues related to Veeva Vault implementation.
- Work closely with stakeholders to understand business requirements and develop solutions.
- Ensure data quality and integrity by implementing best practices for data management.

Job Requirements
- Strong understanding of Veeva Vault concepts and principles.
- Excellent communication and interpersonal skills.
- Ability to work effectively in a team environment.
- Strong problem-solving and analytical skills.
- Experience working with employment firms or recruitment services firms is preferred.
- Familiarity with industry-specific regulations and standards is an asset.

Posted 1 month ago

Apply

7.0 years

5 - 20 Lacs

Hyderabad

Remote

Job Summary: We are looking for a highly skilled Senior .NET Developer with strong expertise in Angular (v10+) and Microsoft Azure cloud services. The ideal candidate will be responsible for designing, developing, and maintaining scalable web applications and cloud-based solutions. This role requires a deep understanding of both backend (.NET Core, C#) and frontend (Angular) technologies, along with experience in deploying and managing applications on Azure.

Key Responsibilities:
- Design, develop, and maintain web applications using .NET Core / .NET 6+ and C#.
- Develop rich UI components and SPAs using Angular 10+, TypeScript, and RxJS.
- Implement RESTful APIs and integrate them with frontend components.
- Deploy, monitor, and manage cloud applications using Microsoft Azure services.
- Work with Azure DevOps for CI/CD, pipelines, and release management.
- Utilize Azure services like App Services, Functions, Key Vault, Blob Storage, Azure SQL, etc.
- Write unit, integration, and end-to-end tests using tools like xUnit, Jasmine, and Karma.
- Participate in code reviews, sprint planning, and team collaboration activities.
- Optimize performance and scalability of applications.
- Ensure application security and compliance with best practices.

Required Skills:
- 7+ years of experience with .NET / .NET Core / C# development.
- 4+ years of strong experience with Angular 10+, TypeScript, HTML5, and CSS3.
- Hands-on experience with Azure PaaS services and serverless architecture.
- Solid understanding of RESTful APIs, JSON, and integration patterns.
- Proficiency with Entity Framework Core, LINQ, and SQL Server.
- Experience with Azure DevOps, Git, and CI/CD pipelines.
- Familiarity with agile methodologies (Scrum/Kanban).

Preferred Skills:
- Experience with Azure B2C, Azure API Management, and App Insights.
- Understanding of containerization using Docker and orchestration via Kubernetes (AKS).
- Experience in microservices architecture.
- Background in performance tuning and cloud cost optimization.
Education: Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
Nice to Have: Microsoft certifications (e.g., AZ-204, AZ-400, or AZ-900); experience integrating third-party services or legacy systems; strong communication and mentoring skills.
Job Type: Contractual / Temporary
Contract length: 12 months
Pay: ₹596,847.08 - ₹2,003,671.14 per year
Schedule: Monday to Friday, night shift
Work Location: Remote
Speak with the employer: +91 9381007947
Expected Start Date: 10/07/2025

Posted 1 month ago

Apply

9.0 years

0 Lacs

Gurgaon

On-site

**What the role is all about:** We are looking for an Engineering Manager with 9-16 years of experience to lead the Audience Data team within the Personalization & Privacy division of our Consumer Group. This team is tasked with managing a significant data asset comprising terabytes of user interactions from our website and apps. This data is crucial for our personalization efforts, machine learning, product insights and analytics, as well as customer reporting functions. You will collaborate closely with our AI team, providing rich datasets essential for developing predictive models and personalized recommenders to enhance the accuracy and effectiveness of our machine learning solutions. Additionally, these comprehensive datasets facilitate in-depth analysis and understanding of user behaviour, supporting data-driven decision-making across our business.

**While no two days are likely to be the same, your typical responsibilities will include:**
+ End-to-end technical delivery of complex initiatives under our product management pipeline using agile methods and frameworks, working with cross-disciplinary teams.
+ Provide technical leadership and guidance to the team, serving as a subject matter expert in data engineering and related technologies.
+ Contribute to the design and architecture of scalable, efficient, and secure data solutions, considering long-term scalability and maintainability.
+ Provide guidance, support, and leadership to the team.
+ Establish effective ways of working within and across teams.
+ Embrace continuous learning, leveraging latest development trends to solve complex challenges.
+ Drive delivery practices with the Delivery Lead, running agile scrum ceremonies and producing agile artefacts.
+ Contribute to the adoption of best practices, coding standards, and engineering principles across the team to ensure a high-quality and maintainable codebase.
+ Collaborate with the development team to implement shift-left testing practices, ensuring early and frequent testing throughout the development lifecycle.
+ Conduct performance analysis, optimization, and tuning of data processing workflows and systems to enhance efficiency and meet performance targets.
+ Support the team’s iterations, scope, capacity, risks, issues, and timelines.
+ Participate in technical discussions, code reviews, and architectural reviews to maintain code quality, identify improvement opportunities, and ensure adherence to standards.
+ Mentor and coach engineers, fostering their professional growth, assisting them in overcoming technical challenges, and creating a culture of quality and efficiency, leading to reduced time-to-market and enhanced product quality.
+ Collaborate with stakeholders to define data governance policies, data quality standards, and data management processes.
+ Drive continuous improvement initiatives, such as automation, tooling enhancements, and process optimizations, to increase productivity and operational efficiency.
+ Act as a strong advocate for data-driven decision-making, promoting a data-driven culture within the organization.

**Who we’re looking for:**
+ 9-16 years of experience working with platform and data engineering environments.
+ Proven people leadership and mentoring experience.
+ Extensive experience in designing, coding, and testing data platform / management tools and systems.
+ Excellent knowledge of software development principles and best practices.
+ Proficiency in programming languages commonly used in platform and data engineering, such as Python, Java, or Go.
+ Strong skills in analytical SQL.
+ Experience with data engineering and any associated technologies such as dbt, Airflow, BigQuery / Snowflake, data lakes, Hive for ETL/ELT.
+ Experience with data modelling methodologies like Kimball or Data Vault 2.0 preferred.
+ Experience with Data Observability (Data Quality Monitoring) preferred.
+ Exposure to, or knowledge of, Kafka, Google Pub/Sub, Apache Flink (or Spark) and streaming SQL is preferred.
+ Exposure to Linux and shell scripting.
+ Experience with DevOps practices and techniques, such as Docker and CI/CD tools.
+ Exposure to data management practices (data catalogues, data security).
+ Excellent communication skills and the ability to collaborate effectively with business stakeholders and cross-functional teams.
+ Ability to manage the competing demands of multiple projects in a timely manner.
+ Effectively communicate complex solutions to audiences with varying technical backgrounds, fostering consensus and collaboration.
+ Ability to work collaboratively and autonomously in a fast-paced environment.
+ Willingness to learn new and complex technologies, and ability to share knowledge with the team.

**Bonus points for:**
+ Experience in using and managing Cloud infrastructure in AWS and / or GCP.
+ Experience with any Infrastructure as Code techniques, particularly Terraform.
+ Exposure to platform engineering concepts or developer experience & tooling.

**What we offer:**
+ A hybrid and flexible approach to working.
+ Transport options to help you get to and from work, including home pick-up and drop-off.
+ Meals provided on site in our office.
+ Flexible leave options including parental leave, family care leave and celebration leave.
+ Insurances for you and your immediate family members.
+ Programs to support mental, emotional, financial and physical health & wellbeing.
+ Continuous learning and development opportunities to further your technical expertise.

**The values we live by:** Our values are at the core of how we operate, treat each other, and make decisions. We believe that how we work is equally important as what we do to achieve our goals. This commitment is at the heart of everything we do, from the way we interact with colleagues to the way we serve our customers and communities.
**Our commitment to Diversity, Equity, and Inclusion:** We are committed to providing a working environment that embraces and values diversity, equity and inclusion. We believe teams with diverse ideas and experiences are more creative, more effective and fuel disruptive thinking – be it cultural and ethnic backgrounds, gender identity, disability, age, sexual orientation, or any other identity or lived experience. We know diverse teams are critical to maintaining our success and driving new business opportunities. If you’ve got the skills, dedication and enthusiasm to learn but don’t necessarily meet every single point on the job description, please still get in touch. **REA Group in India** You might already recognize our logo. The REA brand does have an existing presence in India. In fact, we set up our new tech hub in Gurugram to be their neighbours! REA Group holds a controlling interest in REA India Pte. Ltd., operator of established brands Housing.com, Makaan.com and PropTiger.com, three of the country’s leading digital property marketplaces. Through our close connection to REA India, we’ve seen first-hand the incredible talent the country has to offer, and the huge opportunity to expand our global workforce. Our Cyber City Tech Center is an extension of REA Group; a satellite office working directly with our Australia HQ on local projects and tech delivery. All our brands, across the globe, connect regularly, learn from each other and collaborate on shared value initiatives.
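As a sketch of the "analytical SQL" skill this listing calls out, the following uses Python's built-in sqlite3 module to run a window-function rollup over a tiny, illustrative interaction table (table and column names are made up for the example):

```python
import sqlite3

# In-memory database with a small, illustrative user-interaction table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id TEXT, ts INTEGER, clicks INTEGER)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [("u1", 1, 2), ("u1", 2, 3), ("u2", 1, 5), ("u2", 2, 1)],
)

# Running total of clicks per user, ordered by time: a typical
# analytical-SQL pattern (window function with PARTITION BY).
rows = conn.execute(
    """
    SELECT user_id, ts,
           SUM(clicks) OVER (PARTITION BY user_id ORDER BY ts) AS running_clicks
    FROM events
    ORDER BY user_id, ts
    """
).fetchall()

for row in rows:
    print(row)
# prints ('u1', 1, 2), then ('u1', 2, 5), ('u2', 1, 5), ('u2', 2, 6)
```

The same pattern carries over to BigQuery, Snowflake, or dbt models; only the dialect details change.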

Posted 1 month ago

Apply

1.0 years

2 - 5 Lacs

Gurgaon

Remote

Job Title: Performance Marketing Analyst Location: Gurgaon (Hybrid / Remote) Type: Full-Time Experience: 1–5 years Compensation: Competitive + Performance-based incentives About Us : IGV Vault Pvt Ltd. We are driving innovation through ventures like IG Vault (fintech), GoWashXpress (car care), VURS (beauty tech), Feliz Health (telehealth), and more. We’re on a mission to grow performance-led brands with cutting-edge technology and data-driven marketing. What You’ll Do As a Performance Marketing Analyst, you’ll take ownership of driving ROI-focused digital campaigns across multiple channels. You’ll plan, execute, analyze, and optimize performance marketing strategies to generate qualified leads, drive conversions, and scale business growth. Key Responsibilities : Plan and execute campaigns on Google Ads, Meta, LinkedIn, and programmatic platforms Manage budgets, bids, and targeting to meet CPA, ROAS, and funnel goals Track campaign performance via Google Analytics, Meta Insights, and custom dashboards Perform audience segmentation, A/B testing, and funnel optimisation Collaborate with creative teams for landing pages, ad creatives, and messaging Provide weekly/monthly reports with actionable insights Research market trends, competitor activities, and customer behaviour Recommend and test new platforms or strategies (YouTube, WhatsApp, influencer performance, etc.) 
You’ll Excel If You Have:
- 1–5 years of hands-on experience in performance marketing or digital analytics
- Proficiency with tools like Google Ads, Facebook Ads Manager, Google Analytics (GA4), Tag Manager, and Hotjar
- Strong analytical mindset and ROI-first thinking
- Experience with attribution models and conversion tracking
- Familiarity with Excel/Google Sheets for reporting
- Bonus: Experience in fintech, SaaS, consumer tech, or early-stage startups

What We Offer:
- Hybrid or remote flexibility
- Transparent performance-linked incentives
- Fast career growth in a dynamic team
- Opportunity to work across multiple brands and industries
- Access to modern tools, training, and creative freedom

Apply if you are willing to work in a start-up environment.
Job Type: Full-time
Pay: ₹250,000.00 - ₹500,000.00 per year
Benefits: Commuter assistance, flexible schedule, paid sick time, paid time off, work from home
Schedule: Day shift
Supplemental Pay: Performance bonus
Experience: Performance marketing: 1 year (Required)
Location: Gurgaon, Haryana (Required)
Work Location: Remote

Posted 1 month ago

Apply

8.0 years

0 Lacs

Andhra Pradesh

On-site

Responsibilities:
- Design, develop, test, and deploy scalable and resilient microservices using Java and Spring Boot.
- Collaborate with cross-functional teams to define, design, and ship new features.
- Work on the entire software development lifecycle, from concept and design to testing and deployment.
- Implement and maintain AWS cloud-based solutions, ensuring high performance, security, and scalability.
- Integrate microservices with Kafka for real-time data streaming and event-driven architecture.
- Troubleshoot and resolve issues in a timely manner, ensuring optimal system performance.
- Keep up to date with industry trends and advancements, incorporating best practices into our development processes.

Requirements:
- Must be a Java full stack developer.
- Bachelor's or Master's degree in Computer Science or related field.
- 8+ years of hands-on experience in Java full stack development (Java 11+, Spring Boot, Angular/React, REST APIs, Docker, Kubernetes, microservices).
- Proficiency in Spring Boot and other Spring Framework components.
- Extensive experience in designing and developing RESTful APIs.
- Solid understanding of AWS services, including but not limited to EC2, Lambda, S3, and RDS.
- Experience with Kafka for building event-driven architectures.
- Strong database skills, including SQL and NoSQL databases.
- Familiarity with containerization and orchestration tools (Docker, Kubernetes).
- Excellent problem-solving and troubleshooting skills.
- Good to have: TM Vault core banking knowledge.
- Strong communication and collaboration skills.

About Virtusa
Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally who care about your growth — one that seeks to provide you with exciting projects, opportunities and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa.
We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.

Posted 1 month ago

Apply

6.0 - 11.0 years

7 - 11 Lacs

Noida, Hyderabad, Gurugram

Work from Office

Experience: 6+ years
Role: Automation Anywhere 360 Developer
Location: Gurugram/Noida
Notice Period: Immediate
Type: Contract
Description:
- Must have hands-on RPA development experience and be well versed in both AA V11.x and AA 360 versions.
- Working knowledge of at least one programming language and coding best practices.
- Experienced in feasibility studies to check technical fitment of specific process steps for RPA automation.
- Must have hands-on experience in solution design (SDD/TDD) involving Web, Mainframe (Terminal Emulator), API, PDF, SAP, and SF automations.
- Awareness of and experience in Citrix, Java-based, Web Service, and REST API automations.
- Knowledge and experience in IQ Bot / Document Automation, WLM, queue management, multi-bot architecture, and AARI.
- Experienced in preparing PDD, SDD, AOH, test scenarios, and UAT, and performing code reviews.
- Must have knowledge of the A360 Control Room, bot deployment, and Credential Vault management.
- Must have knowledge of reusable components.
- Good knowledge of DB queries and DB connectivity handling.
- Mentor and groom junior team members/developers in overcoming technical roadblocks.
- Suggest mechanisms for best performance of the RPA solution; knowledge of best practices and coding standards.
- Capable of identifying technical exceptions and proposing handling and debugging techniques.
- Experience with debug logs and reporting features.
- Must have completed at least the A360 Advanced certification (Masters certification preferred).
- Experience working with one or more automation tools (BP, UiPath, Power Automate, WF, etc.) preferred.

Posted 1 month ago

Apply

10.0 - 15.0 years

0 Lacs

India

Remote

Job Title: Senior M365 Solutions Architect
Location: Pan India (Remote)
Experience: 10-15 Years

Job Description: The Microsoft Office 365 Messaging Solution Architect will utilize expert-level experience and knowledge of Office 365 architecture, administration, and best-practice recommendations, combined with strong, current, and deep experience with Microsoft Exchange Server, Microsoft Office Communications Server, Microsoft Office SharePoint Server, Active Directory, and Forefront Identity Manager, to assist True Tandem’s customers with the design, development, and deployment of messaging and communication solutions.

Key Functional Areas of Expertise
- Architect and consulting roles in projects; design and implementation; hands-on experience
- Technical specialization / external certifications
- Build the vital competency centers
- Excellent quality of delivery
- Build a portfolio of successful projects, references, and credentials
- Market research

Technical Expertise
- 10 to 15 or more years of experience in architecting and designing solutions, migrating on-prem Exchange to O365/Exchange Online.
- Expertise in migration tools like BitTitan, Quest, etc.
- Experience working in a transitional multi-supplier environment within a large-scale organization.
- Experience in leading significant technical solution design and development, leveraging existing tool suites, and proposing best-of-breed solutions.
- Must have strong hands-on experience working on Exchange Online
- Experience implementing M365 DLP, Cloud App Security, Defender, and Conditional Access
- Maintain Exchange Hybrid co-existence in a multi-Active Directory Forest / Exchange Org topology
- Maintain data sovereignty compliance (GDPR) when Multi-Geo must be used
- Migrations from Lotus Notes and GroupWise, along with inter-org (Exchange to Exchange) and GSuite/O365 tenant-to-tenant migrations
- Migrations from SharePoint on-premises, GSuite, and other CMS tools to SharePoint Online and OneDrive for Business
- Must have an architectural understanding of how third-party archive and journaling email data (Enterprise Vault, Mimecast) on legal hold is migrated appropriately (to maintain data immutability)
- Must have architectural proficiency in integrating Microsoft and non-Microsoft Mobile Device/Mobile Application Management and Unified Endpoint Management solutions with Exchange/Office 365 (Intune, AirWatch, Workspace ONE, etc.) as well as other third-party email-aware apps (unified communications/messaging, fax, printers/scanners)
- Must have architectural proficiency (design, build, migrate) with Office 365 email protection mechanisms: Microsoft and non-Microsoft email hygiene/gateway products (Proofpoint, Exchange Online Protection, Mimecast, etc.); SPF, DKIM, and DMARC (along with ARC); Office 365 Advanced Threat Protection
- Must have an architectural understanding of Microsoft 365 protections: CAS, DLP, ATP, AIP; Data Loss Prevention, RMS/IRM, Azure Information Protection (classification/labelling and sensitive information types); Office 365 Message Encryption and transport rules; legal/litigation holds, retention, deletion, and data immutability
- Understanding of message retention and data compliance requirements
- Complete knowledge of Microsoft 365 and the collaboration technology stack
- Proficiency in all technical aspects of M365 implementation and Azure Active Directory (AAD) services is required.
Strong communication skills – express key ideas and obtain tangible feedback from cross-functional team members and stakeholders. Hands-on experience with implementation, deployment, migration, and support of core M365 services, including (but not limited to): Exchange Online – including mailbox migration, EOP, and Exchange administration.
Preferred Qualifications:
- Microsoft 365 Certified: Enterprise Administrator Expert
- Microsoft 365 Certified: Messaging Administrator Associate
- Experience with Azure Cloud, scripting, and automation
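To illustrate the SPF/DKIM/DMARC item in the requirements above, here is roughly what the corresponding DNS TXT records look like for a hypothetical domain (domain, selector, and key are placeholders, not a recommended configuration):

```
example.com.                  TXT  "v=spf1 include:spf.protection.outlook.com -all"
selector1._domainkey.example.com.  TXT  "v=DKIM1; k=rsa; p=<public-key>"
_dmarc.example.com.           TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc@example.com"
```

SPF lists the servers allowed to send for the domain, DKIM publishes the signing key, and DMARC tells receivers what to do when either check fails and where to send aggregate reports.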

Posted 1 month ago

Apply

7.0 - 10.0 years

5 - 9 Lacs

Bengaluru

Work from Office

Urgent requirement for a DevOps Engineer.
Location: Bangalore.
Mandatory Skills: AWS services, Ansible, Grafana, Terraform, Jenkins, Kubernetes, GitHub, Vault, JFrog, Linux services.
Shift Timings: General shift.
POSITION GENERAL DUTIES AND TASKS
Knowledge of and exposure to the following tools is preferred (but not limited to):
1. AWS services (RDS, EC2, S3, Kubernetes, SNS, IAM, AWS CLI, Route 53, Lambda, IoT, Greengrass, Kafka, VPC, security groups)
2. Ansible
3. Grafana
4. Terraform
5. Terragrunt
6. Jenkins
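As a flavor of the Terraform work this role involves, a minimal sketch of an AWS resource definition (region, bucket name, and tags are hypothetical placeholders):

```hcl
provider "aws" {
  region = "ap-south-1"
}

# Hypothetical S3 bucket for build artifacts; names are placeholders.
resource "aws_s3_bucket" "artifacts" {
  bucket = "example-devops-artifacts"

  tags = {
    Team      = "devops"
    ManagedBy = "terraform"
  }
}
```

In practice such modules are layered with Terragrunt for per-environment configuration and applied from a Jenkins pipeline.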

Posted 1 month ago

Apply

7.0 years

0 Lacs

India

Remote

Senior Azure Developer (Remote / WFH) Summary: As a Senior Azure Developer, you will lead the design, development, and implementation of complex cloud-based applications on the Microsoft Azure platform. You will provide technical leadership and mentor junior and mid-level developers. Responsibilities: Lead the design and development of cloud-based applications. Collaborate with stakeholders to define project requirements. Write high-quality, scalable, and maintainable code. Conduct code reviews and provide technical guidance. Implement and manage CI/CD pipelines. Ensure the security and performance of applications. Troubleshoot and resolve advanced technical issues. Optimize application architecture and performance. Create and maintain detailed documentation. Stay updated with the latest Azure technologies and industry trends. Qualifications: Bachelor’s degree in Computer Science, Information Technology, or related field. 7+ years of experience in cloud development. Expert understanding of Microsoft Azure services. Proficiency in programming languages such as C#, JavaScript, or Python. Excellent problem-solving and analytical skills. Strong communication and leadership abilities. Experience with Agile methodologies. Preferred Certifications: Microsoft Certified - Azure DevOps Engineer Expert and Microsoft Certified - Azure Solutions Architect Expert Required Knowledge and Skills: Expert knowledge of Azure services like Azure App Service, Azure Functions, and Azure Storage. Leading the design and architecture of Azure-based applications, ensuring scalability, security, and performance. Proficiency in RESTful APIs and web services. Experience with version control systems like Git. Strong knowledge of SQL and NoSQL databases. In-depth understanding of DevOps practices. Experience with CI/CD pipelines. Strong understanding of networking concepts. Knowledge of security best practices in cloud environments. Ability to write clean, maintainable code. 
Experience with performance optimization. Hands-on writing automated test cases in NUnit/xUnit/MSTest frameworks. Hands-on with Azure containerization services. Hands-on with ADF or Synapse.
Technologies, Coding Languages, and Methodologies:
- Microsoft Azure (Key Vault, Service Bus queues, storage queues, topics, Blob Storage, Azure container services (Kubernetes, Docker), App Services [Web Apps, Logic Apps, Function Apps], Azure Functions (time-triggered, durable), Azure AI services)
- Azure SQL, Cosmos DB
- .NET Core (latest version)
- APIs, APIM
- Angular / React
- JavaScript, Python
- SQL, Azure SQL, Cosmos DB
- Azure containerization services (Docker, Kubernetes)
- ADF or Synapse
- NUnit/xUnit/MSTest frameworks
- Git
- Agile methodologies
- CI/CD pipelines
- IaC (Infrastructure as Code): ARM/Bicep/Terraform
- Azure DevOps
Outcomes: Lead the design and development of complex cloud-based applications. Collaborate effectively with stakeholders. Write high-quality and scalable code. Provide technical leadership and mentorship. Implement and manage CI/CD pipelines. Ensure application security and performance. Troubleshoot and resolve advanced technical issues. Optimize application architecture and performance. Create and maintain detailed documentation. Stay updated with Azure technologies and industry trends.

Posted 1 month ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Title: SSE – DevOps Engineer Mode of work: Work from Office Experience: 4 - 10 Years of Experience Know your team At ValueMomentum’s Engineering Center, we are a team of passionate engineers who thrive on tackling complex business challenges with innovative solutions while transforming the P&C insurance value chain. We achieve this through strong engineering foundation and continuously refining our processes, methodologies, tools, agile delivery teams, and core engineering archetypes. Our core expertise lies in six key areas: Cloud Engineering, Application Engineering, Data Engineering, Core Engineering, Quality Engineering, and Domain expertise. Join a team that invests in your growth. Our Infinity Program empowers you to build your career with role-specific skill development leveraging immersive learning platforms. You'll have the opportunity to showcase your talents by contributing to impactful projects. Requirements - Must Have: 5+ years in DevOps with strong data pipeline experience Build and maintain CI/CD pipelines for Azure Data Factory and Databricks notebooks The role demands deep expertise in Databricks, including the automation of unit, integration, and QA testing workflows. Additionally, strong data architecture skills are essential, as the position involves implementing CI/CD pipelines for schema updates. Strong experience with Azure DevOps Pipelines, YAML builds, and release workflows. Proficiency in scripting languages like Python, PowerShell, Terraform Working knowledge of Azure services: ADF, Databricks, DABs, ADLS Gen2, Key Vault, ADO . Maintain infrastructure-as-code practices Collaborate with Data Engineers and Platform teams to maintain development, staging, and production environments. Monitor and troubleshoot pipeline failures and deployment inconsistencies. About ValueMomentum ValueMomentum is a leading solutions provider for the global property & casualty insurance industry, supported by deep domain and technology capabilities. 
We offer a comprehensive suite of advisory, development, implementation, and maintenance services across the entire P&C insurance value chain. This includes Underwriting, Claims, Distribution, and more, empowering insurers to stay ahead with sustained growth, high performance, and enhanced stakeholder value. Trusted by over 75 insurers, ValueMomentum is one of the largest standalone insurance-focused solutions providers to the US insurance industry. Our culture – Our fuel At ValueMomentum, we believe in making employees win by nurturing them from within, collaborating and looking out for each other. People first - Empower employees to succeed. Nurture leaders - Nurture from within. Enjoy wins – Recognize and celebrate wins. Collaboration – Foster a culture of collaboration and people-centricity. Diversity – Committed to diversity, equity, and inclusion. Fun – Create a fun and engaging work environment. Warm welcome – Provide a personalized onboarding experience. Company Benefits Compensation - Competitive compensation package comparable to the best in the industry. Career Growth - Career development, comprehensive training & certification programs, and fast track growth for high potential associates. Benefits: Comprehensive health benefits and life insurance.
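The "Azure DevOps Pipelines, YAML builds" requirement in this listing could look roughly like the following minimal pipeline sketch (stage names, paths, and the bundle target are hypothetical; `databricks bundle deploy` assumes the Databricks CLI with Asset Bundles, which the listing's "DABs" refers to):

```yaml
# azure-pipelines.yml - minimal sketch: test then deploy Databricks assets
trigger:
  branches:
    include:
      - main

pool:
  vmImage: ubuntu-latest

steps:
  - task: UsePythonVersion@0
    inputs:
      versionSpec: '3.10'

  - script: |
      pip install -r requirements.txt
      pytest tests/
    displayName: Run notebook unit tests

  - script: databricks bundle deploy -t staging
    displayName: Deploy Databricks Asset Bundle to staging
    env:
      DATABRICKS_HOST: $(DATABRICKS_HOST)
      DATABRICKS_TOKEN: $(DATABRICKS_TOKEN)
```

A production setup would typically split this into build and release stages with approvals between staging and production targets.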

Posted 1 month ago

Apply

4.0 years

0 Lacs

India

Remote

Immediate joining (WFH). InfraSingularity aims to revolutionize the Web3 ecosystem as a pioneering investor and builder. Our long-term vision is to establish ourselves as the first-of-its-kind in this domain, spearheading the investment and infrastructure development for top web3 protocols. At IS, we recognize the immense potential of web3 technologies to reshape industries and empower individuals. By investing in top web3 protocols, we aim to fuel their growth and support their journey towards decentralization. Additionally, our plan to actively build infrastructure with these protocols sets us apart, ensuring that they have the necessary foundations to operate in a decentralized manner effectively. We embrace collaboration and partnership as key drivers of success. By working alongside esteemed web3 VCs like WAGMI and more, we can leverage their expertise and collective insights to maximize our impact. Together, we are shaping the future of the Web3 ecosystem, co-investing, and co-building infrastructure that accelerates the adoption and growth of decentralized technologies. Together with our portfolio of top web3 protocols (Lava, Sei, and Anoma) and our collaborative partnerships with top protocols (EigenLayer, Avail, PolyMesh, and Connext), we are creating a transformative impact on industries, society, and the global economy. Join us on this groundbreaking journey as we reshape the future of finance, governance, and technology. About the Role We are looking for a Senior Site Reliability Engineer (SRE) to take ownership of our multi-cloud blockchain infrastructure and validator node operations. This role is critical in ensuring high performance, availability, and resilience across a range of L1/L2 blockchain protocols. If you're passionate about infrastructure automation, system reliability, and emerging Web3 technologies, we’d love to talk.
What You’ll Do Own and operate validator nodes across multiple blockchain networks, ensuring uptime, security, and cost-efficiency. Architect, deploy, and maintain infrastructure on AWS, GCP, and bare-metal for protocol scalability and performance. Implement Kubernetes-native tooling (Helm, FluxCD, Prometheus, Thanos) to manage deployments and observability. Collaborate with our Protocol R&D team to onboard new blockchains and participate in testnets, mainnets, and governance. Ensure secure infrastructure with best-in-class secrets management (HashiCorp Vault, KMS) and incident response protocols. Contribute to a robust monitoring and alerting stack to detect anomalies, performance drops, or protocol-level issues. Act as a bridge between software, protocol, and product teams to communicate infra constraints or deployment risks clearly. Continuously improve deployment pipelines using Terraform, Terragrunt, GitOps practices. Participate in on-call rotations and incident retrospectives, driving post-mortem analysis and long-term fixes. Our Stack Cloud & Infra: AWS, GCP, bare-metal Containerization: Kubernetes, Helm, FluxCD IaC: Terraform, Terragrunt Monitoring: Prometheus, Thanos, Grafana, Loki Secrets & Security: HashiCorp Vault, AWS KMS Languages: Go, Bash, Python, Typescript Blockchain: Ethereum, Polygon, Cosmos, Solana, Foundry, OpenZeppelin What You Bring 4+ years of experience in SRE/DevOps/Infra roles—ideally within FinTech, Cloud, or high-reliability environments. Proven expertise managing Kubernetes in production at scale. Strong hands-on experience with Terraform, Helm, GitOps workflows . Deep understanding of system reliability, incident management, fault tolerance, and monitoring best practices. Proficiency with Prometheus and PromQL for custom dashboards, metrics, and alerting. Experience operating secure infrastructure and implementing SOC2/ISO27001-aligned practices . Solid scripting in Bash, Python, or Go . 
- Clear and confident communicator, capable of interfacing with both technical and non-technical stakeholders.

Nice-to-Have
- First-hand experience in Web3/blockchain/crypto environments.
- Understanding of staking, validator economics, slashing conditions, or L1/L2 governance mechanisms.
- Exposure to smart contract deployments or working with Solidity, Foundry, or similar toolchains.
- Experience with compliance-heavy or security-certified environments (SOC 2, ISO 27001, HIPAA).

Why Join Us?
- Work at the bleeding edge of Web3 infrastructure and validator tech.
- Join a fast-moving team that values ownership, performance, and reliability.
- Collaborate with protocol engineers, researchers, and crypto-native teams.
- Get exposure to some of the most interesting blockchain ecosystems in the world.
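To give a concrete sense of the Prometheus/PromQL proficiency the role asks for, here is a minimal sketch, not part of the job description, that builds an instant-query URL for the documented Prometheus HTTP API endpoint `/api/v1/query`. The server address and the metric expression are illustrative assumptions:

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen


def build_instant_query_url(base_url: str, promql: str) -> str:
    """Build an instant-query URL for the Prometheus HTTP API (/api/v1/query)."""
    return f"{base_url}/api/v1/query?{urlencode({'query': promql})}"


# Example PromQL: per-job fraction of scrape targets currently down —
# a typical input for a validator-uptime dashboard panel or alert rule.
DOWN_TARGETS = "sum by (job) (up == bool 0) / count by (job) (up)"


def fetch_result(base_url: str, promql: str) -> list:
    """Run the query and return the result vector (requires a reachable server)."""
    with urlopen(build_instant_query_url(base_url, promql)) as resp:
        return json.load(resp)["data"]["result"]


if __name__ == "__main__":
    # Assumes a hypothetical Prometheus server at localhost:9090.
    print(build_instant_query_url("http://localhost:9090", DOWN_TARGETS))
```

In practice an expression like `DOWN_TARGETS` would live in a Prometheus alerting rule rather than ad-hoc scripts, but the same query text works in both places.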

Posted 1 month ago


0.0 - 1.0 years

2 - 5 Lacs

Gurugram, Haryana

Remote

Job Title: Performance Marketing Analyst
Location: Gurgaon (Hybrid / Remote)
Type: Full-Time
Experience: 1–5 years
Compensation: Competitive + performance-based incentives

About Us
IGV Vault Pvt Ltd is driving innovation through ventures like IG Vault (fintech), GoWashXpress (car care), VURS (beauty tech), Feliz Health (telehealth), and more. We're on a mission to grow performance-led brands with cutting-edge technology and data-driven marketing.

What You'll Do
As a Performance Marketing Analyst, you'll take ownership of driving ROI-focused digital campaigns across multiple channels. You'll plan, execute, analyze, and optimize performance marketing strategies to generate qualified leads, drive conversions, and scale business growth.

Key Responsibilities
- Plan and execute campaigns on Google Ads, Meta, LinkedIn, and programmatic platforms
- Manage budgets, bids, and targeting to meet CPA, ROAS, and funnel goals
- Track campaign performance via Google Analytics, Meta Insights, and custom dashboards
- Perform audience segmentation, A/B testing, and funnel optimisation
- Collaborate with creative teams for landing pages, ad creatives, and messaging
- Provide weekly/monthly reports with actionable insights
- Research market trends, competitor activities, and customer behaviour
- Recommend and test new platforms or strategies (YouTube, WhatsApp, influencer performance, etc.)
You'll Excel If You Have
- 1–5 years of hands-on experience in performance marketing or digital analytics
- Proficiency with tools like Google Ads, Facebook Ads Manager, Google Analytics (GA4), Tag Manager, and Hotjar
- A strong analytical mindset and ROI-first thinking
- Experience with attribution models and conversion tracking
- Familiarity with Excel/Google Sheets for reporting
- Bonus: experience in fintech, SaaS, consumer tech, or early-stage startups

What We Offer
- Hybrid or remote flexibility
- Transparent performance-linked incentives
- Fast career growth in a dynamic team
- Opportunity to work across multiple brands and industries
- Access to modern tools, training, and creative freedom

Apply if you are willing to work in a start-up environment.

Job Type: Full-time
Pay: ₹250,000.00 - ₹500,000.00 per year
Benefits: Commuter assistance, flexible schedule, paid sick time, paid time off, work from home
Schedule: Day shift
Supplemental Pay: Performance bonus
Experience: Performance marketing: 1 year (Required)
Location: Gurgaon, Haryana (Required)
Work Location: Remote

Posted 1 month ago


4.0 - 10.0 years

0 Lacs

India

On-site

Hiring a Data Modeller for a global consulting firm; pan-India location.
Experience: 4–10 years

This role works with business analysts, business stakeholders, and technical personnel to translate business data requirements into data models and related data artefacts, which form the basis of solutions that meet the business requirements. This role reports to the Lead Data Modeller.

Tasks include:
- Producing Conceptual, Logical and Physical Data Models and related artefacts to specified standards
- Producing Entity Relationship Diagrams, and specifying and documenting the relationships between entities and the data attributes for each entity
- Applying Data Vault modelling knowledge and experience
- Agreeing entity and attribute definitions with the respective stakeholders
- Producing Data Ontologies and Class Diagrams
- Generating data model schema scripts from Physical Data Models using data modelling tools
- Being part of several project teams simultaneously so that multiple projects can proceed at pace concurrently
- Contributing to data modelling framework design discussions, for example the design and use of Reference Data
- Investigating and providing estimates for data modelling work, and reporting progress to date
- Working with business SMEs and technical staff to identify the data sources to be used to meet the business requirements
- Data mapping: identifying, specifying and documenting data mappings and the associated transformations needed to move data from the source systems, through the data warehouse layers where appropriate, and present the data to authorised end users
- Clarifying and documenting business rules and data transformations with technical colleagues and business users (who may be members of senior management)
- Working with Database Administrators (DBAs) and information service designers to ensure the physical implementation meets the functional and non-functional requirements

Skills & Experience:
- Build data flows and develop conceptual data models
- Create logical and physical data models using best practices to ensure high data quality and reduced redundancy
- Optimise and update logical and physical data models to support new and existing projects
- Maintain conceptual, logical and physical data models along with corresponding metadata
- Develop and maintain best practices for standard naming conventions and coding practices to ensure consistency of data models
- Recommend opportunities for reuse of data models in new environments
- Perform reverse engineering of physical data models from databases (Oracle, Microsoft Azure) and SQL scripts
- Good knowledge and experience of metadata management, data modelling, and related tools (Erwin, ER/Studio, or others) required
- Good understanding of graph modelling (Neo4j, Stardog, or similar)
- Good SQL scripting knowledge to perform reverse engineering activities
- Examine new application designs and recommend corrections wherever applicable
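The Data Vault modelling and schema-script skills listed above can be sketched with a minimal, hypothetical example: a Hub carrying the business key and one Satellite carrying descriptive attributes, each with load-date and record-source metadata. It uses SQLite for portability; every table and column name is an illustrative assumption, not a client schema:

```python
import hashlib
import sqlite3
from datetime import datetime, timezone

# Minimal Data Vault sketch: the Hub holds the business key, the Satellite
# holds descriptive attributes keyed by (hash key, load timestamp).
DDL = """
CREATE TABLE hub_customer (
    customer_hk   TEXT PRIMARY KEY,        -- hash of the business key
    customer_id   TEXT NOT NULL UNIQUE,    -- the business key itself
    load_dts      TEXT NOT NULL,
    record_source TEXT NOT NULL
);
CREATE TABLE sat_customer_details (
    customer_hk   TEXT NOT NULL REFERENCES hub_customer(customer_hk),
    load_dts      TEXT NOT NULL,
    name          TEXT,
    email         TEXT,
    PRIMARY KEY (customer_hk, load_dts)    -- history kept per load
);
"""


def hash_key(business_key: str) -> str:
    """Deterministic surrogate key: hash of the upper-cased business key."""
    return hashlib.md5(business_key.upper().encode()).hexdigest()


conn = sqlite3.connect(":memory:")
conn.executescript(DDL)

now = datetime.now(timezone.utc).isoformat()
hk = hash_key("CUST-001")
conn.execute("INSERT INTO hub_customer VALUES (?, ?, ?, ?)",
             (hk, "CUST-001", now, "crm"))
conn.execute("INSERT INTO sat_customer_details VALUES (?, ?, ?, ?)",
             (hk, now, "Asha", "asha@example.com"))

row = conn.execute(
    "SELECT h.customer_id, s.name "
    "FROM hub_customer h JOIN sat_customer_details s USING (customer_hk)"
).fetchone()
print(row)  # ('CUST-001', 'Asha')
```

In a real warehouse the DDL would be generated from the Physical Data Model by a tool such as Erwin, and a stronger hash (e.g. SHA-256) would typically be used; the hub-plus-satellite pattern with load metadata is the part that carries over.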

Posted 1 month ago
