7.0 years
40 Lacs
Jaipur, Rajasthan, India
Remote
Experience: 7+ years | Salary: INR 4,000,000 (40 lakh) per year, based on experience | Expected notice period: 15 days | Shift: (GMT+05:30) Asia/Kolkata (IST) | Opportunity type: Remote | Placement type: Full-time permanent position (payroll and compliance to be managed by MatchMove)
(Note: This is a requirement for one of Uplers' clients - MatchMove.)

Must-have skills: GenAI, AWS data stack, Kinesis, open table formats, PySpark, stream processing, Kafka, MySQL, Python.

MatchMove is looking for a Technical Lead - Data Platform. In this role, you will architect, implement, and scale an end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight.

You will contribute to:
- Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services.
- Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark.
- Structuring and evolving data into open table formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services.
- Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases.
- Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment.
- Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM).
- Using generative AI tools to enhance developer productivity, including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights.
- Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines.

Responsibilities:
- Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR.
- Manage ingestion from transactional sources using AWS DMS, with a focus on schema drift handling and low-latency replication.
- Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3 and cataloged with Glue and Lake Formation (an illustrative PySpark sketch follows this listing).
- Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g. fraud engines) and human-readable interfaces (e.g. dashboards).
- Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations.
- Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership.
- Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform.

Requirements:
- At least 7 years of experience in data engineering.
- Deep hands-on experience with the AWS data stack: Glue (Jobs & Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum.
- Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs.
- Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation.
- Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale.
- Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions.
- Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments.
- Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains, with strong engineering hygiene.
- Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders.

Brownie points:
- Experience working in a PCI DSS or other central-bank-regulated environment with audit logging and data retention requirements.
- Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection.
- Familiarity with data contracts, data mesh patterns, and data-as-a-product principles.
- Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases.
- Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3.
- Experience building data platforms for ML/AI teams or integrating with model feature stores.

Engagement model: Direct placement with the client. This is a remote role. Shift timings: 10 AM to 7 PM.

How to apply for this opportunity? Step 1: Click Apply and register or log in on our portal. Step 2: Complete the screening form and upload an updated resume. Step 3: Increase your chances of being shortlisted and meet the client for the interview.

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role is to help talent find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
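To make the open-table-format requirement concrete, here is a minimal, illustrative PySpark sketch of the kind of job the listing describes: upserting change records into an Apache Iceberg table stored in S3 and cataloged in AWS Glue. This is not MatchMove's actual pipeline; the bucket, database, table, and key names are hypothetical, and it assumes the Iceberg Spark runtime and Iceberg AWS bundle jars are on the classpath.

```python
# Illustrative sketch only: hypothetical bucket/database/table names; assumes
# iceberg-spark-runtime and the Iceberg AWS bundle are available to Spark.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("iceberg-upsert-sketch")
    # Enable Iceberg SQL extensions (needed for MERGE INTO).
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    # Register a Glue-backed Iceberg catalog named "glue".
    .config("spark.sql.catalog.glue", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.glue.catalog-impl",
            "org.apache.iceberg.aws.glue.GlueCatalog")
    .config("spark.sql.catalog.glue.io-impl", "org.apache.iceberg.aws.s3.S3FileIO")
    .config("spark.sql.catalog.glue.warehouse", "s3://example-data-lake/warehouse")
    .getOrCreate()
)

# Change records landed by an upstream CDC tool (e.g. DMS) as Parquet; path is illustrative.
changes = spark.read.parquet("s3://example-data-lake/landing/transactions/")
changes.createOrReplaceTempView("changes")

# Upsert into the curated Iceberg table; Iceberg snapshots also enable time-travel reads.
spark.sql("""
    MERGE INTO glue.analytics.transactions AS t
    USING changes AS c
    ON t.txn_id = c.txn_id
    WHEN MATCHED THEN UPDATE SET *
    WHEN NOT MATCHED THEN INSERT *
""")
```

A time-travel query over the same table would then look like `SELECT * FROM glue.analytics.transactions VERSION AS OF <snapshot_id>` (supported in recent Spark/Iceberg versions), which is the property the listing alludes to for audit and reconciliation use cases.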
Posted 2 days ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Position Summary

The Centre for Effective Governance of Indian States (CEGIS) aims to help state governments strengthen their capacity and public systems to improve governance, service delivery, and the effectiveness of public expenditure. As CEGIS completes its fifth year, we are looking to hire multiple economic policy analysts to join the Economics and Statistics Unit at CEGIS to conduct economic analysis focused on state-level policy issues in India. In this pivotal role, you will help drive analytical work and conduct impactful research on policies and programmes that can boost the effectiveness of thousands of crores of public spending, and thereby improve the translation of public expenditure into development outcomes for millions of people. You will work closely with a diverse team of analysts and economists, in close collaboration with senior government officials, and with technical guidance and inputs from CEGIS Co-Founder and Scientific Director, Prof. Karthik Muralidharan, as well as other leading economists.

As a CEGIS Economic Policy Analyst, you will assist the team in the following activities: (a) conducting original research and technical analysis to evaluate newly proposed expenditure items; (b) conducting economic analysis of government policies and programs and evaluating key programs; (c) staying abreast of and synthesising relevant research for answering policy questions; and (d) identifying and liaising with academic and other researchers to obtain expert inputs into policy decisions. This position offers an exciting opportunity to apply your analytical skills and communicate impactful ideas, making a tangible impact on governance and public policy in India.

Role and Responsibilities

Economic Research and Technical Analysis
- Conduct comprehensive economic research and analysis on various policy and programmatic issues relevant to state governments.
- Develop data-driven insights and recommendations to support effective policy implementation and governance reforms.
- Support CEGIS field projects, including sampling design, data analytical frameworks, and analytical tools.
- Curate and update datasets (international, national, and state) for rapid analysis.

Policy Development and Collaboration
- Engage with senior government officials to identify research, analysis, and knowledge gaps that can be filled by CEGIS.
- Engage with a range of stakeholders, including government officials, researchers, and think tanks, to foster effective policy dialogues and knowledge sharing across Indian states and beyond.
- Support CEGIS teams and projects in developing and implementing evidence-based policy solutions, providing critical economic insights and analyses.
- Translate economic research findings into practical policy ideas and reforms that can be presented to state governments for consideration.

Knowledge Creation and Dissemination
- Draft high-quality notes, reports, policy briefs, and academic papers, applying economic concepts and analytical methods effectively.
- Create and present accessible content to communicate complex economic findings and insights to both academic and non-academic audiences.

Education
A Master's degree in Economics, Public Policy, or a related field is strongly preferred. At least 3 years of relevant work experience in empirical research in the domain of public policy is an additional asset, although not a strict requirement. Applicants without work experience must be able to demonstrate the requisite skills and inclination through a strong academic record.

Skills
- Proficiency in data science and experience working with large datasets.
- Knowledge of at least one statistical analysis package (Stata, R, etc.) is essential; proficiency in these will be a strong advantage.
- Knowledge of other programming languages (such as Python) and GIS software packages will give candidates a strong advantage.
- Familiarity with major research datasets covering India and experience in compiling and using complex datasets.
- Strong writing and communication skills in English; fluency in any other Indian language is a plus.
- Capability in preparing high-quality policy briefs, research papers, and notes.
- Demonstrated interest in improving government functioning and in using research and evidence to inform policy.
- Exposure to project design and implementation, particularly in collaboration with government officials or on large-scale projects, is advantageous.

Personal Characteristics and Desired Qualities
- Strong quantitative, analytical, and conceptual skills in economics.
- Ability to work effectively across a range of projects at any given time.
- Adaptability to work independently and as part of a small, dynamic team.
- Creative thinking, willingness to experiment with new ideas, and ability to translate ideas into action plans and execute them.
- Intellectual curiosity and commitment to continuous learning.
- Passion for working with governments to enhance state effectiveness.

Location: Lucknow/Raipur/Tamil Nadu/Telangana/Karnataka (please note that for training purposes you will need to be present in Chennai/Delhi for the first 2 months)

Pre-reads
- Concept note on CEGIS
- A glimpse into life at CEGIS - CEGIS Retreat 2024
- CEGIS Snapshot 2023-24
- Podcast episodes with Prof. Karthik Muralidharan, one each on education and healthcare in India.
You are also encouraged to read more of Prof. Karthik Muralidharan's work here and through his book Accelerating India's Development: A State-Led Roadmap for Effective Governance.
Posted 2 days ago
15.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
About the Role
As the Head - Procurement at Neysa.ai, you will build and lead a high-impact, modern procurement function from the ground up, crafting sourcing strategies, vendor partnerships, and operational systems that power our AI-first and cloud-native future. This role is both strategic and operational. You'll be responsible for shaping Neysa's sourcing roadmap across complex categories like compute infrastructure (GPUs, servers), cloud credits, SaaS tools, professional services, and specialized AI partnerships. In a fast-moving ecosystem where scale and agility are paramount, you'll bring clarity, control, and commercial advantage to every procurement decision. You'll work closely with business, engineering, and finance leaders to forecast needs, optimize cost structures, ensure compliance, and deliver long-term value through data-driven sourcing and negotiation strategies. If you're excited by the challenge of building procurement in a high-growth AI tech company, this is your opportunity to lead from the front.

Key Responsibilities

1. Strategic Sourcing & Category Ownership
- Build and execute Neysa's end-to-end procurement strategy across CapEx and OpEx.
- Lead sourcing for high-priority categories: GPUs, cloud credits, servers, networking equipment, SaaS platforms, and consulting services.
- Define procurement policies and build lightweight, scalable governance frameworks that balance structure with agility.
- Forecast category demand in partnership with all functional teams.
- Manage and mentor procurement teams, fostering a culture of innovation and continuous improvement.

2. Vendor & Ecosystem Management
- Identify, evaluate, and engage with high-value strategic partners, global suppliers, and niche vendors.
- Lead commercial negotiations, pricing discussions, and contract finalisation with a focus on TCO and flexibility.
- Establish vendor scorecards and feedback loops for ongoing performance management.
- Develop alternate supplier strategies to mitigate risks and ensure continuity.

3. Procurement Intelligence, Digitisation & Compliance
- Implement procurement automation through ERP or lightweight tools; explore AI-led platforms to drive efficiencies.
- Build real-time dashboards and spend visibility systems for Finance and leadership.
- Partner with Legal and Finance to ensure regulatory compliance, risk mitigation, and audit readiness.
- Drive data-backed insights into sourcing decisions and identify cost-saving opportunities.

4. Build-Mode Leadership
- As the first procurement leader at Neysa, you'll operate as a strategic individual contributor with the mandate to build a scalable function.
- Design scalable SOPs, documentation processes, and vendor onboarding playbooks.
- Responsible for scaling procurement operations across geographies and verticals.

Reporting to: Executive Vice President - Finance (EVP - Finance)
Functionally aligned with: Leadership team

What You Bring
- 15+ years of procurement experience in tech, cloud, data center, or high-growth enterprise environments.
- Proven ability to handle strategic sourcing in areas like cloud infrastructure, SaaS, hardware, and AI/ML services.
- Strong commercial acumen with expertise in negotiation, contract structuring, and vendor governance.
- Excellent analytical, organisational, and communication skills to articulate complex technical and procurement strategies.
- Experience in scaling procurement systems using ERP tools (SAP, Oracle, Coupa, etc.) or modern procurement stacks.
- Deep understanding of the tech vendor ecosystem and regulatory frameworks around high-value sourcing.
- Bias for action, a builder's mindset, and a pragmatic approach to working in evolving environments.

Why Join Neysa?
Neysa.ai is reimagining how businesses build and deploy AI at scale, and your role will directly impact how efficiently we operate, partner, and scale in this journey. This is your chance to build a procurement function that's not just operational but transformational. You'll have the runway to make critical sourcing decisions, influence long-term vendor strategy, and eventually build a procurement team from scratch. If you want to create tangible impact at the intersection of AI, cloud, and commercial excellence, this is the role for you. Build the backbone of AI acceleration. Build it at Neysa. www.neysa.ai
Posted 2 days ago
4.0 - 5.0 years
0 Lacs
Jaipur, Rajasthan, India
On-site
The primary role is to establish and manage the Treasury & Resource Management function for an emerging Non-Bank Finance Company based in GIFT City (head office in Singapore, with subsidiaries in the UAE and Jaipur). The role is expected to grow in stature and scope as the company aims to expand its balance sheet manifold over the next few years, given the potential of its unique business proposition.

ALM and Cash Management
- Monitor asset-liability positions and suggest adjustments when needed.
- Manage daily cash operations, including cash positioning, reporting, and forecasting.
- Ensure that Operations has funding available in the business account to meet daily disbursement requirements.
- Manage money market investments, liquidity management, and short-term funding strategies.
- Ensure optimal utilization of surplus funds.

Resource Raising
- Develop resource-raising strategies with Senior Management.
- Introduce effective resource-raising instruments.
- Manage relationships with banks, FIs, AIFs, and MFs for resource raising.
- Ensure an optimal cost of resources raised.
- Complete all documentary/legal formalities with funding institutions.
- Coordinate with rating agencies to obtain credit ratings and thereafter raise funds.

Investment
- Determine investment strategies in consultation with Senior Management / Investment Committee / ALCO members.
- Recommend portfolio changes.
- Ensure efficient deployment of funds, maximizing returns within the specified risk parameters.
- Monitor the investment portfolio and investment limits.

Foreign Exchange Management
- Monitor forex/forward contract positions taken for Treasury / Operations.
- Monitor forex market trends and provide insights for decision-making.
- Monitor the operations in the Nostro account.
- Review foreign exchange risk management strategies, including hedging and currency risk mitigation.

Treasury Function Overview
- Optimize net interest margins and spreads for the businesses.
- Establish and conduct/support regular reviews of Treasury policies and procedures.
- Continuously improve processes to enhance efficiency and effectiveness.
- Provide inputs to Senior Management on all aspects of Treasury Management.
- Prepare detailed financial reports and presentations for Senior Management.
- Oversee management information (MIS) for Treasury-related activities.
- Conduct in-depth financial analysis to support decision-making, including scenario planning, sensitivity analysis, and stress testing.
- Participate in and provide inputs on strategic business initiatives and the budgeting process to align Treasury and company objectives.
- Ensure strong compliance with all relevant regulatory requirements and high standards of governance.
- Provide training and support to team members on Treasury-related matters.

Miscellaneous
- Conduct performance reviews and provide ongoing feedback and development opportunities.
- Interact with banks, FIs, AIFs, MFs, and shareholders, as well as rating agencies, regulators, and auditors.
- Support regular internal and third-party financial reviews and audits.
- Provide support on compliance and governance issues.
- Stay updated with industry trends and best practices.
- Participate in weekly/monthly calls with the team.
- Participate in lender calls as required.

Academic Qualifications & Experience
Candidates with experience in the Treasury function of a bank or NBFC, with 4 to 5 years of post-qualification experience, would be preferred.
Academic Qualifications: Graduation Degree / Post Graduate Degree (Financial Management)
Posted 2 days ago
7.0 years
40 Lacs
Thane, Maharashtra, India
Remote
Technical Lead - Data Platform (MatchMove, via Uplers). Remote, full-time permanent position, 7+ years of experience, INR 4,000,000 (40 lakh) per year. The role details, requirements, and application steps are identical to the Jaipur listing above.
Posted 2 days ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: GRC Consultant
Location: Chennai
Experience: 3+ years
Availability: Immediate joiners preferred
Language Requirement: Proficiency in Tamil (mandatory)

Job Description: We are hiring a GRC Consultant in Chennai who will be responsible for governance, risk, and compliance-related activities. The role involves working closely with internal teams and clients to assess and improve the risk posture of the organization.

Key Responsibilities:
- Implement and maintain GRC frameworks, policies, and controls
- Conduct risk assessments, gap analyses, and internal audits
- Assist in preparing compliance documentation for ISO 27001, SOC 2, GDPR, etc.
- Coordinate with audit teams and facilitate external assessments
- Monitor regulatory changes and ensure timely updates to policies and controls
- Develop and deliver training sessions and awareness programs in Tamil and English

Requirements:
- Minimum 3 years of experience in GRC, IT risk, or compliance
- Proficient in Tamil (both spoken and written)
- Sound understanding of risk management frameworks and standards
- Good communication and documentation skills
- Preferred certifications: ISO 27001 LA, CISA, CRISC, etc.
Posted 2 days ago
7.0 years
40 Lacs
Nagpur, Maharashtra, India
Remote
Technical Lead - Data Platform (MatchMove, via Uplers). Remote, full-time permanent position, 7+ years of experience, INR 4,000,000 (40 lakh) per year. The role details, requirements, and application steps are identical to the Jaipur listing above.
Posted 2 days ago
7.0 years
40 Lacs
Greater Lucknow Area
Remote
Technical Lead - Data Platform (MatchMove, via Uplers). Remote, full-time permanent position, 7+ years of experience, INR 4,000,000 (40 lakh) per year. The role details, requirements, and application steps are identical to the Jaipur listing above.
Posted 2 days ago
7.0 years
40 Lacs
Nashik, Maharashtra, India
Remote
Technical Lead - Data Platform (MatchMove, via Uplers). Remote, full-time permanent position, 7+ years of experience, INR 4,000,000 (40 lakh) per year. The role details, requirements, and application steps are identical to the Jaipur listing above.
Posted 2 days ago
7.0 years
40 Lacs
Kanpur, Uttar Pradesh, India
Remote
Technical Lead - Data Platform (MatchMove, via Uplers). Remote, full-time permanent position, 7+ years of experience, INR 4,000,000 (40 lakh) per year. The role details, requirements, and application steps are identical to the Jaipur listing above.
Posted 2 days ago
0.0 - 5.0 years
0 Lacs
Delhi, Delhi
On-site
Position: Company Secretary
Designation: AVP
Department: Secretarial and Compliance
Location: New Delhi
Educational Qualification: CS
Experience: 5 years and above (candidates with work experience in public sector undertakings/financial sector entities would be preferred)

Job Description:
- Assist in compliance with the Companies Act, 2013 and the rules made thereunder, Listing Regulations, insider trading laws, RBI guidelines, etc.
- Ensure timely filing of returns and forms with regulatory authorities (e.g., RBI, MCA).
- Handle corporate actions.
- Assist in convening and conducting Board/Committee/AGM meetings of the Company, from preparation of the notice and agenda through finalization of minutes and distribution of action points.
- Maintain records related to board meetings, general meetings, and regulatory compliances.
- Draft internal policies, governance documents, and SOPs.
- Liaise with share transfer agents, bankers, depositories, regulators, the parent bank, exchanges, etc.
- Handle investor correspondence, dividend payments, and related issues.
- Maintain the Investor Relations page and other disclosures on the website.
- Maintain the statutory records of the company, including registers of members, directors and secretaries, charges, contracts, etc.
- Assist the secretarial and compliance function of the Company.
- Should have excellent drafting and communication skills.
- Should be familiar with the NSE/BSE/SEBI/MCA/RBI/NSDL/CDSL websites and their reporting portals for reporting on behalf of the Company.
- Handle secretarial audits and applicable due diligence processes.
- Keep abreast of changes in corporate laws and governance practices.
- Handle the annual CAG and RBI inspections.
- Will act as Deputy Nodal Officer - IEPFA.

Remarks: Candidates having work experience with public sector undertakings/financial sector entities would be preferred.
Remuneration: Up to 15-20 LPA. Kindly share your CV at Sapna@shelbyglobal.com or reach out at 7406291116.

Job Type: Full-time
Pay: ₹1,500,000.00 - ₹2,000,000.00 per year
Schedule: Day shift
Language: Hindi (Preferred), English (Preferred)
Work Location: In person
Posted 2 days ago
7.0 years
40 Lacs
Kolkata, West Bengal, India
Remote
Technical Lead - Data Platform (MatchMove, via Uplers). Remote, full-time permanent position, 7+ years of experience, INR 4,000,000 (40 lakh) per year. The role details, requirements, and application steps are identical to the Jaipur listing above.
Posted 2 days ago
7.0 years
40 Lacs
Bhubaneswar, Odisha, India
Remote
Posted 2 days ago
7.0 years
40 Lacs
Cuttack, Odisha, India
Remote
Posted 2 days ago
7.0 years
40 Lacs
Guwahati, Assam, India
Remote
Posted 2 days ago
7.0 years
40 Lacs
Jamshedpur, Jharkhand, India
Remote
Posted 2 days ago
7.0 years
40 Lacs
Raipur, Chhattisgarh, India
Remote
Posted 2 days ago
7.0 years
40 Lacs
Ranchi, Jharkhand, India
Remote
Posted 2 days ago
2.0 - 7.0 years
0 Lacs
Guwahati, Assam, India
On-site
About Us:
- We are India's leading political consulting organization dedicated to providing high-quality professional support for political campaigns. We strongly believe that the nation will best benefit from an enlightened political leadership in the form of Prime Minister Narendra Modi, and we are proud to have previously contributed in a similar capacity to the momentous election campaigns of 2014, 2019, and 2024, as well as various subsequent state elections.
- Our work includes envisioning and executing innovative electioneering campaigns, facilitating capacity building of grassroots cadre, and shaping governance. We add professional rigor to the strengths of the scores of grassroots workers supporting the Prime Minister and ensure optimal electoral results, not as an end in itself but to further the Prime Minister's vision for a developed India. Our work leverages on-ground activities, data analytics, research, and new-age media as a force multiplier for the Prime Minister's messages and actions.
- We comprise a diverse group of dedicated individuals, including former management consultants, lawyers, engineers, political theorists, public policy professionals, and people from other varied sectors, drawn from premier institutes and corporates, with the unified objective of meaningfully contributing to the polity of the nation.
Roles and Responsibilities:
1. General Administration & Facility Management
2. Real Estate Solutions & Project Management - Setting up new offices / shifting existing offices, office space sourcing, negotiations, liaison with landlords, interior/fit-out work, agreement execution/renewal, renovation and refurbishment within the given time frame and budget, and procurement of assets and leased-line, broadband, and telephone connections, etc.
3. Vendor Management & Development
4. Travel Desk - PAN-India flight, hotel, and cab bookings
5. Liaison & Compliances
6. Budgeting
7. Guest House Setup and Management
This position requires extensive travelling and longer stays at project sites. The person should be comfortable with a six-day working week.
Location - Guwahati
Experience - 2 to 7 years
Role: Executive / Senior Executive
Language Proficiency - Hindi, English, and Assamese
Local candidates preferred.
P.S. This is a contractual role till April 2026.
Posted 2 days ago
7.0 years
40 Lacs
Amritsar, Punjab, India
Remote
Posted 2 days ago
14.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Role Overview: As the Inside Delivery Head at Kreedo, you will spearhead the end-to-end delivery of our academic training programs and preschool owner enablement. You will lead a team of master trainers, drive operational excellence, and ensure our partner preschools thrive academically and operationally. This role demands strategic leadership, exceptional stakeholder management, and a passion for early education.
Key Responsibilities
1. Academic Training Delivery: Oversee the seamless execution of academic training programs to ensure high-quality implementation of the Kreedo system in classrooms. Lead and mentor a team of master trainers to deliver consistent, impactful teacher training and classroom enablement. Align academic vision with execution across all partner centers to maintain Kreedo's standards of excellence. Monitor and enhance key metrics, including training effectiveness, classroom quality, and teacher performance. Collaborate with the Academic R&D team to incorporate field insights and update training content for continuous improvement.
2. Preschool Owner Enablement & Success: Act as a strategic partner to preschool owners, supporting them in academics, people management, and business operations. Conduct structured business review meetings to evaluate center performance and identify growth opportunities. Provide tailored support to boost parent engagement, admissions, staff effectiveness, and adherence to Kreedo standards. Work closely with Sales, Operations, Academic, and Customer Success teams to deliver a cohesive experience for preschool owners. Drive partner retention and satisfaction through proactive engagement and effective problem-solving.
3. Process, Governance & Reporting: Develop and refine Standard Operating Procedures (SOPs) for training delivery and business enablement. Own and track key delivery metrics, including training completion rates, center satisfaction scores, and center health indicators. Provide actionable insights and delivery dashboards to leadership to inform strategic decisions. Continuously identify improvement areas using field feedback and performance data.
What We're Looking For
Must-Haves: 10-14 years of experience in training delivery, business operations, or partner enablement, preferably in education, franchising, or SME-focused industries. Proven leadership experience managing teams and building scalable delivery processes. Exceptional communication, stakeholder management, and problem-solving skills. Deep understanding of small business operations and the support required for their success. Passion for early childhood education and strong alignment with Kreedo's mission and values.
Nice-to-Haves: Experience with digital training platforms, learning management systems, or analytics tools. Familiarity with early childhood education models or ed-tech environments.
Why Join Kreedo? At Kreedo, you'll play a pivotal role in redefining early education for students, teachers, and preschool entrepreneurs in underserved markets. Be part of a dynamic team committed to delivering excellence and creating lasting impact in India's preschool ecosystem. Join Kreedo and help us build a brighter future for young learners and preschool owners across India!
Posted 2 days ago
10.0 years
0 Lacs
Bangalore Urban, Karnataka, India
On-site
Reports To: VP, Head of HR Technology and Processes
Department: HR Transformation / Digital HR
We are looking for a dynamic Global PMO Lead to drive the successful delivery of enterprise-wide digital HR transformation initiatives, with a core focus on SAP SuccessFactors. This role will lead the PMO function supporting the global rollout and optimization of digital HR platforms, driving governance, visibility, and consistency across a complex program landscape. The ideal candidate brings deep program management expertise, understands global HR technology ecosystems, and has a proven record of delivering business value through digital transformation.
Key Responsibilities:
Global Program Governance & PMO Leadership: Lead the PMO for the Digital HR Transformation Program, establishing frameworks for governance, project delivery, risk management, and reporting. Define and manage program plans, integrated roadmaps, interdependencies, and key milestones for SAP SuccessFactors implementation and related digital HR solutions. Ensure consistent program methodologies, stage gates, and quality standards across regions and workstreams.
Portfolio & Project Oversight: Monitor execution of a global HR technology portfolio including Employee Central, Onboarding, Compensation, Performance, Succession, and Recruiting modules. Drive integration with enabling platforms such as ServiceNow, e-signature tools, and analytics/reporting tools. Oversee vendor and system integrator performance, budgets, timelines, and deliverables.
Strategic Stakeholder Engagement: Act as the key liaison between global HR, IT, business units, and regional transformation leads. Prepare and present high-impact executive reports and dashboards for senior leadership and steering committees. Facilitate effective decision-making across a federated HR environment.
Change Management & Adoption: Partner with Change & Communications leads to ensure adoption, process alignment, and stakeholder readiness. Support execution of global rollout strategies and local deployment waves.
Team Leadership & Capability Uplift: Build and lead a high-performing global team of PMO analysts and project managers. Promote knowledge sharing, continuous improvement, and capability building within the HR function.
Qualifications: Bachelor's degree in Business, Human Resources, or a related field; MBA or equivalent preferred. PMP, PRINCE2, Agile, or equivalent program management certification. 10+ years of experience in global program/project management roles, with 5+ years specifically in HR Technology or Digital HR. Proven experience managing large-scale SAP SuccessFactors implementations (Employee Central essential; other modules a plus). Strong knowledge of HR operating models, process transformation, and digital enablement. Demonstrated ability to lead across regions, functions, and vendor ecosystems in a matrixed environment.
Preferred Skills: Hands-on familiarity with tools such as SuccessFactors Provisioning, ServiceNow, LXP, LMS, document management, etc. Experience managing shared services set-up, global design/localization, and post-go-live optimization. Expertise in business case tracking, benefit realization, and continuous improvement in a digital HR environment.
It's an exciting time to be part of our team. At the Adecco Group, our purpose, making the future work for everyone, inspires and connects us all. 
Through our three global business units (GBUs), Adecco, Akkodis, and LHH, we deliver expertise in talent and technology, enabling organizations to succeed and people to thrive. We're proud to be a global thought leader and care about doing the best job we can to ensure better futures for everyone. We do this by building our Future@Work strategy as a united team of 40,000+ colleagues with a collective spirit working in over 60 countries globally. We embody our core values: Courage, Collaboration, Customer at the Heart, Inclusion, and Passion in everything we do.
Growth and Development: You will have the opportunity to grow across a variety of interesting jobs and careers over our extensive portfolio of global brands. We empower our colleagues to work in the smartest, most efficient ways, achieving total balance between their jobs and their lives. We offer world-class resources for upskilling and development, satisfying your curiosity while sharing skills, knowledge, and expertise to grow together. Here, you can be yourself, and we aim to build on the attributes that make you, you.
A journey to bring out the best in you: We believe that having an understanding of the hiring process helps you prepare, feel, and be at your best. As a global, multi-brand organization with multiple different roles, our application process can vary. On our career site, you will find some of the key steps you can expect to guide you along the way.
Inclusion: We believe in talent, not labels. We focus on the diverse and unique skills our people bring. Our culture of belonging and purpose ensures everyone can thrive and feel engaged. We are proud to be an Equal Opportunity Employer, committed to equity, equal opportunity, inclusion, and diversity.
Interview Process: Our interview process includes an initial phone screening, followed by a virtual round of interviews with the Hiring Manager, the HR team, and senior leaders. This process helps us understand your fit within our team and allows you to ask questions about the role and our company. If you are a visionary leader with a passion for learning and development, we invite you to join us in making the future work for everyone.
Accommodations: We are committed to providing an inclusive and accessible recruitment process for all candidates. If you require any additional accommodations or support due to a disability or other special circumstances, please let us know by contacting us. We will work with you to ensure your needs are met throughout the hiring process.
Posted 2 days ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Alert from 4S Advisory (www.4sadvisory.com)
***Urgent requirement for the manufacturing industry in Hyderabad
Job Title: Company Secretary
Location: Hyderabad
Department: Legal & Compliance
Reports To: CFO / Managing Director / Board of Directors
Experience Required: 3+ years (including IPO experience)
Industry Preference: Manufacturing (preferred)
Working days: Monday to Saturday. Timings: 9 AM to 6 PM.
Job Summary: We are seeking an experienced and qualified Company Secretary to join our organization. The ideal candidate will play a critical role in ensuring corporate governance and statutory compliance, and will lead and manage the Initial Public Offering (IPO) process. Experience in a manufacturing setup will be considered an advantage.
Key Responsibilities:
Secretarial & Compliance: Ensure compliance with the Companies Act, SEBI regulations, FEMA, and other applicable laws. Organize and manage Board, Committee, and General meetings, including preparing agendas, notices, and minutes. Maintain statutory registers, records, and filings (MCA, ROC, SEBI, NSE/BSE). Draft and vet legal and corporate documents.
IPO Management: End-to-end handling of the IPO process in coordination with investment bankers, legal advisors, auditors, and regulators. Prepare the DRHP, RHP, and prospectus, and liaise with SEBI, stock exchanges, and other regulatory bodies. Ensure due diligence, compliance, and documentation required for listing. Assist in roadshows, investor communications, and disclosures.
Corporate Governance: Act as a bridge between the Board and management, ensuring transparency and compliance. Support the Board in implementing best corporate governance practices.
Legal & Regulatory Affairs: Liaise with external regulators, auditors, and consultants. Provide legal and regulatory advice to internal stakeholders. Ensure compliance with labor laws, environmental laws, and industry-specific regulations (especially for manufacturing setups).
Key Requirements: Qualified Company Secretary (ACS); membership of ICSI is mandatory. LLB or an equivalent legal qualification will be an added advantage. Proven track record of handling IPO or public listing processes. Minimum 3 years of relevant post-qualification experience. Experience working in a manufacturing company is desirable. Strong knowledge of corporate laws, SEBI regulations, and stock exchange requirements. Excellent communication, drafting, and stakeholder management skills.
Interested candidates may send their resume to sreevalli@4sadvisory.com, mentioning current CTC, expected CTC, and notice period.
Posted 2 days ago
5.0 - 8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Role: Clinical Operations Support
Required Technical Skill Set: Highly skilled with the Veeva eTMF application
Experience: 5 to 8 Years
Work Location: Mumbai, New Delhi, Indore, Bangalore, Hyderabad, Pune, Lucknow, Chennai, Kolkata
Desired Competencies (Technical/Behavioral Competency)
Must-Have: Experience with R&D-specific IT application systems for document management, trial management, data management and/or pharmacovigilance. Experience with data visualization and/or analytics tools and the ability to build, program and modify new reports and visualizations. Experience with clinical trial related systems, e.g. Clinical Trial Management System (CTMS), electronic Trial Master File (eTMF, preferably Veeva Vault), Electronic Data Capture (EDC, preferably Medidata RAVE) and Large Scale Data Analytics (such as SpotFire, CluePoints, JReview or similar). Highly proficient in Information Technology systems, including the Microsoft Office suite. Proven record of working inside a team with different colleagues in any position, including being in the lead. Able to tailor feedback on compliance to different levels in the organization, from Assistant roles to executive VP levels and in between. Knowledge of statistical methods and the ability to apply them to detect outliers in data sets and/or to create thresholds such as Quality Tolerance Limits (QTLs) as required by ICH E6 (GCP) is a strong preference. Strong preference for candidates with a Project Management certificate and/or proven experience in project management for R&D-related projects. Strong analytic skills for large quantities of compliance, risk management and clinical data. Strong interest in pharmaceutical development, mainly in the clinical research (R&D) aspects of drug development.
Good-to-Have: Veeva Vault admin certification is required. Experience with Veeva RIM Connectors is preferred.
Regulatory Information Management: Manage the lifecycle of regulatory submissions, including the preparation, tracking, and filing of documents in Veeva Vault RIM. Ensure that regulatory submissions comply with applicable local and international regulations and guidelines. Maintain and update regulatory documentation, including registration dossiers, variation applications, and compliance documents.
System Configuration and Maintenance: Configure and customize Veeva Vault RIM to meet organizational needs, ensuring it aligns with regulatory requirements and workflows. Monitor system performance, troubleshoot issues, and coordinate with Veeva support for resolution.
Data Management: Ensure the integrity and accuracy of regulatory data within Veeva Vault, including product information, submission statuses, and regulatory milestones. Implement data governance practices to maintain compliance and quality of regulatory data.
Cross-Functional Collaboration: Collaborate with various teams, including Regulatory Affairs, Quality Assurance, Clinical Development, and Pharmacovigilance, to ensure timely and accurate regulatory submissions. Act as a point of contact for regulatory queries and provide training to internal teams on Veeva RIM functionalities.
Regulatory Compliance: Stay updated on regulatory changes and ensure that the organization's processes and systems comply with current regulations and industry best practices. Participate in audits and inspections as necessary, providing documentation and system access as required.
Responsibility of / Expectations from the Role:
Should be able to manage regulatory information management in Veeva RIM and eTMF.
Demonstrated practical working experience in both processes (e.g. xEVMPD) and utilisation of regulatory systems (e.g. RIMS, Veeva).
Handle Regulatory Affairs business processes and regulatory information management in Veeva Vault RIM.
Align with the Support Team on current issues and initiate problem management.
Prepare and update application-related documentation (Operational Instructions, User Manuals) covering processes (e.g. xEVMPD) and systems (e.g. RIMS, Veeva).
Results-driven and pragmatic approach to work.
Good organizational skills, self-motivated and proactive.
Meticulous working style and high attention to quality.
Posted 2 days ago
6.0 years
0 Lacs
India
On-site
We are seeking a highly skilled and experienced SAP Security Specialist to join our SAP Implementation team. The ideal candidate will have 6 to 9 years of experience in SAP security, including hands-on involvement in SAP implementations and security configurations across SAP landscapes. The candidate will be responsible for designing, implementing, and supporting SAP security solutions, focusing on securing SAP applications, authorizations, roles, and data access control. This role requires strong technical and functional expertise in SAP Security, risk management, and compliance standards.
Key Responsibilities:
SAP Security Design & Implementation: Implement and manage SAP security roles, profiles, and authorizations within SAP S/4HANA, SAP ECC, BTP, SAP Fiori, and other SAP applications. Design and configure security models for various SAP modules, ensuring secure access to sensitive business processes and data. Manage role design, segregation of duties (SoD) analysis, and user provisioning in SAP environments. Work on securing interfaces, data transfers, and other integration points in SAP landscapes.
User & Role Management: Perform user access management activities, including creating, modifying, and deactivating user accounts and roles. Implement role-based access control (RBAC) to ensure that users have the right level of access based on their job responsibilities. Conduct regular reviews of roles, user access, and security settings to maintain compliance and minimize security risks.
Compliance & Risk Management: Ensure SAP systems comply with internal and external security policies, regulations, and standards (e.g. SOX, GDPR). Work closely with auditors and compliance teams to address security gaps, review audit logs, and ensure adherence to security policies. Perform periodic SAP security assessments, including vulnerability assessments and remediation actions, to mitigate potential risks.
SAP GRC & Authorization Management: Configure and maintain SAP Governance, Risk, and Compliance (GRC) tools to manage security roles, risk management, and audit compliance. Implement SAP GRC Access Control for user access management, role design, and SoD conflict remediation. Utilize SAP GRC for risk assessment and ensure the system provides transparent security processes.
Security Incident Management & Troubleshooting: Investigate and resolve security incidents or violations, ensuring proper remediation and preventive actions. Provide security support for SAP systems, investigating access issues, troubleshooting authorization errors, and ensuring proper resolution.
Collaboration & Stakeholder Communication: Collaborate with functional and technical teams to understand security requirements and ensure that security measures align with business needs. Provide guidance and support to SAP functional teams on security aspects of SAP modules and integration points. Communicate effectively with business and IT teams regarding security concerns, risks, and mitigation strategies.
Documentation & Reporting: Document security configurations, processes, and controls for audit and compliance purposes. Provide regular reports on security activities, user access reviews, risk assessments, and compliance status. Ensure the accuracy and completeness of all security-related documentation, including policies, procedures, and guidelines.
Key Skills & Qualifications:
SAP Security Expertise: Strong hands-on experience in SAP security administration, including role design, user administration, and authorization management within SAP S/4HANA, SAP ECC, SAP Fiori, and other SAP applications. Proficient in SAP GRC Access Control, SAP Security Audit Log, and SAP Identity Management. Deep understanding of SAP authorization concepts, user roles, profile management, and securing critical SAP modules.
Compliance & Risk Management: Knowledge of compliance regulations and industry standards (e.g. SOX, GDPR, HIPAA) and their application within SAP environments. Familiarity with segregation of duties (SoD) analysis and conflict management, ensuring compliance with internal controls and security policies.
Experience & Project Delivery: 12+ years of experience in SAP security and implementations, with a focus on SAP security architecture and role management. Proven experience in managing security during SAP system implementations, upgrades, and migrations.
Technical Skills: Experience with SAP security tools such as SAP GRC, SAP IDM, and SAP Security Audit Log. Familiarity with SAP NetWeaver, SAP HANA, SAP S/4HANA, SAP Fiori, and other SAP solutions.
Posted 2 days ago
Upload Resume
Drag or click to upload
Your data is secure with us, protected by advanced encryption.
The governance job market in India is thriving, with a growing demand for professionals who can navigate the complex landscape of policies, regulations, and compliance. As the country continues to focus on strengthening its governance frameworks, job seekers with expertise in governance are in high demand across various industries.
The average salary range for governance professionals in India varies based on experience and expertise. Entry-level positions can expect to earn around INR 3-5 lakhs per annum, while experienced professionals can command salaries ranging from INR 10-20 lakhs per annum.
A typical career path in governance may involve starting as an Associate or Analyst, moving up to a Manager or Consultant role, and eventually progressing to a Director or Head of Governance position.
In addition to expertise in governance, professionals in this field may benefit from having skills in policy analysis, risk management, project management, and regulatory compliance.
As you explore governance jobs in India, remember to showcase your expertise, experience, and passion for promoting good governance practices. Prepare thoroughly for interviews, demonstrate your understanding of key concepts, and apply with confidence. Your skills are in demand, and your contributions can make a significant impact in shaping the governance landscape of the country. Good luck!