
4392 Spectrum Jobs - Page 10

JobPe aggregates job listings for easy access; applications are submitted directly on the original job portal.

4.0 - 6.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Job Description

Job Summary: If you aspire to be a Finance Project Cost Controller for a business that implements global projects and is growing rapidly, we have a role for you! If you like to work closely with project managers and other stakeholders to track actual costs, compare them to the planned budget, and implement corrective actions whenever necessary, and you can manage internal and external partners well with effective communication skills, we have a role for you!

In This Role, Your Responsibilities Will Be:
- Perform revenue recognition exercises in the system on a percentage-of-completion (POC) basis (see the sketch at the end of this listing).
- Consolidate and maintain a central repository of project data obtained from Project Managers.
- Perform reconciliations between PA and GL and resolve discrepancies.
- Carry out monthly closing activities and conduct month-end checks on EAC movements, cost corrections, etc.
- Conduct POR vs. Actual analysis for reporting purposes.
- Perform pre/post analysis of project EAC.
- Inventory analysis and reconciliation.
- UBR and UER analysis and reconciliation.
- Ensure timely closure of projects in the Oracle PA module.
- Ensure closure of audit points.
- Prepare various reports, such as segment-wise sales and BTS sales, for management reporting.
- Pass manual journal entries for project provisions and ENO provisions on a monthly/quarterly basis.

Who You Are:
You act quickly and decisively in constantly evolving, unexpected situations. You adjust communication content and style to meet the needs of diverse partners. You always keep the end in sight and put in extra effort to meet deadlines. You analyze multiple and diverse sources of information to define problems accurately before moving to solutions. You observe situational and group dynamics and select the best-fit approach.

For This Role, You Will Need:
- Ability to take accountability and own accounting quality for financial reporting purposes.
- Working knowledge of Oracle, with skill in adopting new technologies and applications.
- Hands-on experience with ERP, MS Office, and reporting tools.
- Proficiency in MS Excel, Power BI, and related presentation tools.
- A self-starter attitude, suggesting and implementing process improvements.
- Ability to work in a matrix organization with complex processes, systems, and tools.
- Strong numerical and analytical skills with accuracy, along with communication skills.
- Ability to handle large volumes of data and create dynamic management reports.
- A great teammate who builds and maintains positive relationships with team members.
- Ability to manage and schedule multiple priorities and meet deadlines.

Preferred Qualifications That Set You Apart:
Chartered Accountant / Inter CA / MBA (Finance) with at least 4 to 6 years of experience in project accounting. MNC experience preferred.

Our Culture & Commitment to You
At Emerson, we prioritize a workplace where every employee is valued, respected, and empowered to grow. We foster an environment that encourages innovation, collaboration, and diverse perspectives, because we know that great ideas come from great teams. Our commitment to ongoing career development and growing an inclusive culture ensures you have the support to thrive. Whether through mentorship, training, or leadership opportunities, we invest in your success so you can make a lasting impact. We believe diverse teams working together are key to driving growth and delivering business results.

We recognize the importance of employee wellbeing. We prioritize providing competitive benefits plans, a variety of medical insurance plans, an Employee Assistance Program, employee resource groups, recognition, and much more. Our culture offers flexible time-off plans, including paid parental leave (maternal and paternal), vacation, and holiday leave.

About Us

WHY EMERSON

Our Commitment to Our People
At Emerson, we are motivated by a spirit of collaboration that helps our diverse, multicultural teams across the world drive innovation that makes the world healthier, safer, smarter, and more sustainable. And we want you to join us in our bold aspiration. We have built an engaged community of inquisitive, dedicated people who thrive knowing they are welcomed, trusted, celebrated, and empowered to solve the world's most complex problems, for our customers, our communities, and the planet. You'll contribute to this vital work while further developing your skills through our award-winning employee development programs.

We are a proud corporate citizen in every city where we operate and are committed to our people, our communities, and the world at large. We take this responsibility seriously and strive to make a positive impact through every endeavor. At Emerson, you'll see firsthand that our people are at the center of everything we do. So, let's go. Let's think differently. Learn, collaborate, and grow. Seek opportunity. Push boundaries. Be empowered to make things better. Speed up to break through. Let's go, together.

Accessibility Assistance or Accommodation
If you have a disability and are having difficulty accessing or using this website to apply for a position, please contact: idisability.administrator@emerson.com

About Emerson
Emerson is a global leader in automation technology and software. Through our deep domain expertise and legacy of flawless execution, Emerson helps customers in critical industries such as life sciences, energy, power and renewables, chemicals, and advanced factory automation operate more sustainably while improving productivity, energy security, and reliability. With global operations and a comprehensive portfolio of software and technology, we are helping companies implement digital transformation to measurably improve their operations, conserve valuable resources, and enhance their safety.

We offer equitable opportunities, celebrate diversity, and embrace challenges with confidence that, together, we can make an impact across a broad spectrum of countries and industries. Whether you're an established professional looking for a career change, an undergraduate student exploring possibilities, or a recent graduate with an advanced degree, you'll find your chance to make a difference with Emerson. Join our team - let's go!

No calls or agencies, please.
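For readers unfamiliar with the POC method named in the first responsibility above, here is a minimal sketch of the common cost-to-cost convention. The function name, field names, and figures are hypothetical illustrations, not Emerson's or Oracle PA's actual logic.

def poc_revenue_this_period(cost_to_date: float,
                            estimated_total_cost: float,  # the EAC
                            contract_value: float,
                            recognized_to_date: float) -> float:
    """Revenue to recognize now = cumulative POC revenue - revenue already booked."""
    percent_complete = min(cost_to_date / estimated_total_cost, 1.0)
    cumulative_revenue = contract_value * percent_complete
    return cumulative_revenue - recognized_to_date

# Example: 60 spent of an estimated 100 total cost on a 150 contract,
# with 75 already recognized -> book 15 more this period.
print(poc_revenue_this_period(60.0, 100.0, 150.0, 75.0))  # 15.0

Because the estimate-at-completion (EAC) sits in the denominator, the EAC movements the role monitors at month-end flow straight into recognized revenue, which is why the posting pairs the two activities.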

Posted 2 days ago

Apply

7.0 years

40 Lacs

Thane, Maharashtra, India

Remote


Experience: 7+ years
Salary: INR 4,000,000 per year (based on experience)
Expected Notice Period: 15 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by MatchMove)
(Note: This is a requirement for one of Uplers' clients, MatchMove.)

What do you need for this opportunity?
Must-have skills: GenAI, AWS data stack, Kinesis, open table formats, PySpark, stream processing, Kafka, MySQL, Python

MatchMove is looking for: Technical Lead - Data Platform

As Technical Lead - Data Platform, you will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight.

You will contribute to:
- Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services.
- Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark.
- Structuring and evolving data into open table formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services.
- Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases.
- Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment.
- Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM).
- Using generative AI tools to enhance developer productivity, including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights.
- Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines.

Responsibilities:
- Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR.
- Manage ingestion from transactional sources using AWS DMS, with a focus on schema drift handling and low-latency replication.
- Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3 and cataloged with Glue and Lake Formation (a sketch of this pattern follows this list).
- Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g. fraud engines) and human-readable interfaces (e.g. dashboards).
- Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations.
- Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership.
- Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform.
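To make the open-table-format and partitioning responsibilities above concrete, here is a minimal PySpark sketch of the DMS-to-Iceberg pattern. All bucket paths, catalog and table names, and columns are placeholder assumptions, not MatchMove's actual pipeline.

from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("cdc-to-iceberg")
    # Iceberg SQL extensions plus a Glue-backed catalog (indicative settings).
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    .config("spark.sql.catalog.glue_catalog",
            "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.glue_catalog.catalog-impl",
            "org.apache.iceberg.aws.glue.GlueCatalog")
    .config("spark.sql.catalog.glue_catalog.warehouse",
            "s3://example-bucket/warehouse/")
    .getOrCreate()
)

# Target table, partitioned by day so queries can prune on txn_date.
spark.sql("""
    CREATE TABLE IF NOT EXISTS glue_catalog.datalake.transactions (
        txn_id BIGINT, amount DECIMAL(18, 2),
        created_at TIMESTAMP, txn_date DATE
    ) USING iceberg
    PARTITIONED BY (txn_date)
""")

# A batch of DMS change records landed in S3 (path is a placeholder).
changes = (
    spark.read.parquet("s3://example-bucket/dms/transactions/")
    .withColumn("txn_date", F.to_date("created_at"))
)
changes.createOrReplaceTempView("changes")

# MERGE upserts on the primary key; each commit is an Iceberg snapshot,
# which is what enables time-travel queries downstream.
spark.sql("""
    MERGE INTO glue_catalog.datalake.transactions t
    USING changes c ON t.txn_id = c.txn_id
    WHEN MATCHED THEN UPDATE SET *
    WHEN NOT MATCHED THEN INSERT *
""")

A downstream time-travel read over the resulting snapshots would then look something like SELECT * FROM glue_catalog.datalake.transactions TIMESTAMP AS OF '2024-01-01 00:00:00' (Spark SQL syntax on recent Spark/Iceberg versions).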
Requirements:
- At least 7 years of experience in data engineering.
- Deep hands-on experience with the AWS data stack: Glue (Jobs and Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum.
- Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs (a minimal data-quality gate sketch appears at the end of this listing).
- Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation.
- Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale.
- Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions.
- Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments.
- Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains, with strong engineering hygiene.
- Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders.

Brownie points:
- Experience working in a PCI DSS or other central-bank-regulated environment with audit logging and data retention requirements.
- Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection.
- Familiarity with data contracts, data mesh patterns, and data-as-a-product principles.
- Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases.
- Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3.
- Experience building data platforms for ML/AI teams or integrating with model feature stores.

Engagement model: Direct placement with the client. This is a remote role. Shift timings: 10 AM to 7 PM.

How to apply for this opportunity?
Step 1: Click "Apply" and register or log in on our portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Increase your chances of being shortlisted and meeting the client for an interview.

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talent find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for those as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
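As a companion to the SLA and observability requirements above, here is a minimal data-quality/SLO gate sketch. It assumes naive UTC timestamps from Spark and a table like the one in the earlier sketch; the thresholds are illustrative assumptions only.

from datetime import datetime, timedelta, timezone
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-gate").getOrCreate()
df = spark.table("glue_catalog.datalake.transactions")

row_count = df.count()
null_keys = df.filter(F.col("txn_id").isNull()).count()
latest = df.agg(F.max("created_at")).first()[0]  # naive datetime from Spark

now = datetime.now(timezone.utc).replace(tzinfo=None)  # compare naive-to-naive
failures = []
if row_count == 0:
    failures.append("table is empty")
elif null_keys / row_count > 0.001:                     # >0.1% null primary keys
    failures.append(f"null txn_id rate too high ({null_keys}/{row_count})")
if latest is None or latest < now - timedelta(hours=2):  # 2-hour freshness SLO
    failures.append(f"data is stale; latest record at {latest}")

if failures:
    # Failing the job keeps bad data out of downstream marts; in production
    # this would also emit metrics and alerts via the team's observability stack.
    raise RuntimeError("DQ gate failed: " + "; ".join(failures))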

Posted 2 days ago

Apply

7.0 years

40 Lacs

Nagpur, Maharashtra, India

Remote


Technical Lead - Data Platform (MatchMove, via Uplers). The description is identical to the Thane listing above; this posting differs only in location.

Posted 2 days ago

Apply

7.0 years

40 Lacs

Greater Lucknow Area

Remote


Technical Lead - Data Platform (MatchMove, via Uplers). The description is identical to the Thane listing above; this posting differs only in location.

Posted 2 days ago

Apply

7.0 years

40 Lacs

Nashik, Maharashtra, India

Remote


Technical Lead - Data Platform (MatchMove, via Uplers). The description is identical to the Thane listing above; this posting differs only in location.

Posted 2 days ago

Apply

7.0 years

40 Lacs

Kanpur, Uttar Pradesh, India

Remote


Technical Lead - Data Platform (MatchMove, via Uplers). The description is identical to the Thane listing above; this posting differs only in location.

Posted 2 days ago

Apply

7.0 years

40 Lacs

Kolkata, West Bengal, India

Remote


Technical Lead - Data Platform (MatchMove, via Uplers). The description is identical to the Thane listing above; this posting differs only in location.

Posted 2 days ago

Apply

7.0 years

40 Lacs

Bhubaneswar, Odisha, India

Remote


Technical Lead - Data Platform (MatchMove, via Uplers). The description is identical to the Thane listing above; this posting differs only in location.

Posted 2 days ago

Apply

7.0 years

40 Lacs

Cuttack, Odisha, India

Remote


Technical Lead - Data Platform (MatchMove, via Uplers). The description is identical to the Thane listing above; this posting differs only in location.

Posted 2 days ago

Apply

7.0 years

40 Lacs

Guwahati, Assam, India

Remote


Experience: 7.00+ years
Salary: INR 4,000,000.00 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by MatchMove)
(*Note: This is a requirement for one of Uplers' clients - MatchMove)

What do you need for this opportunity?
Must-have skills: Gen AI, AWS data stack, Kinesis, open table formats, PySpark, stream processing, Kafka, MySQL, Python

MatchMove is looking for: Technical Lead - Data Platform

As Technical Lead for the Data Platform, you will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight.

You will contribute to:
Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services.
Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark.
Structuring and evolving data into open table formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services.
Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases.
Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment.
Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM).
Using Generative AI tools to enhance developer productivity, including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights.
Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines.

Responsibilities:
Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR.
Manage ingestion from transactional sources using AWS DMS, with a focus on schema drift handling and low-latency replication.
Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3 and cataloged with Glue and Lake Formation.
Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g. fraud engines) and human-readable interfaces (e.g. dashboards).
Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations.
Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership.
Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform.

Requirements:
At least 7 years of experience in data engineering.
Deep hands-on experience with the AWS data stack: Glue (Jobs & Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum.
Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs.
Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation.
Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale.
Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions.
Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments.
Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains, with strong engineering hygiene.
Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders.

Brownie Points:
Experience working in a PCI DSS or other central-bank-regulated environment with audit logging and data retention requirements.
Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection.
Familiarity with data contracts, data mesh patterns, and data-as-a-product principles.
Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases.
Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3.
Experience building data platforms for ML/AI teams or integrating with model feature stores.

Engagement Model: Direct placement with the client. This is a remote role. Shift timings: 10 AM to 7 PM IST.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meet the client for the interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
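To make the stack described above concrete, here is a minimal sketch of the kind of pipeline this role owns: a DMS-landed batch in S3, transformed with PySpark and published to a day-partitioned Apache Iceberg table in the Glue catalog. This is an illustration, not MatchMove's actual codebase; every bucket, database, table, and column name is hypothetical, and it assumes a Spark runtime packaged with the Iceberg AWS integration and a Glue-backed catalog named glue_catalog.

# Hypothetical sketch: publish a DMS-landed batch to an Iceberg table.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder.appName("txn-publish")
    # Iceberg SQL extensions plus a Glue-backed Iceberg catalog; the
    # warehouse path and catalog name are assumptions for this sketch.
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    .config("spark.sql.catalog.glue_catalog",
            "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.glue_catalog.catalog-impl",
            "org.apache.iceberg.aws.glue.GlueCatalog")
    .config("spark.sql.catalog.glue_catalog.warehouse",
            "s3://example-data-lake/warehouse/")
    .getOrCreate()
)

# Read the raw drop (Parquet is a common DMS target format).
raw = spark.read.parquet("s3://example-data-lake/raw/dms/transactions/")

# Light transformation: normalise types and derive the partition column.
txns = (
    raw.withColumn("txn_ts", F.to_timestamp("txn_ts"))
       .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
)

# Cheap data-quality gate before publishing: fail the job rather than
# write rows that would break downstream reconciliation.
null_keys = txns.filter(F.col("txn_id").isNull()).count()
if null_keys:
    raise ValueError(f"{null_keys} rows missing txn_id; aborting publish")

# Publish as an Iceberg table partitioned by day for pruning. A real
# incremental pipeline would append or MERGE instead of replacing.
(
    txns.writeTo("glue_catalog.payments.transactions")
        .partitionedBy(F.days(F.col("txn_ts")))
        .createOrReplace()
)

Because Iceberg keeps snapshot history, the time-travel queries the listing mentions become ordinary reads against the same table, for example via the "snapshot-id" read option or SQL's VERSION AS OF clause.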

Posted 2 days ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Company Description
IQ-EQ is a preeminent service provider to the alternative asset industry. IQ-EQ works with managers in multiple capacities ranging from hedge fund, private equity fund, and mutual fund launches; private equity fund administration; advisory firm set-up, regulatory registration and infrastructure design; ongoing regulatory compliance (SEC, CFTC, and 40 Act); financial controls and operational support services; compliance and operational related projects and reviews; and outsourced CFO/controller and administration services to private equity fund investments - portfolio companies, real estate assets, and energy assets. Our client base is growing, and our existing clients are engaging the firm across the spectrum of our service offerings.

Job Description
Job Summary
To provide and ensure timely, high-quality service and deliverables to cluster clients for the funds allocated to you and your team. The role holder is the main person accountable for, and owner of, deliverables, and concentrates on engaging and developing teams and individuals. They are expected to perform reviews of core processes, complex ad hoc work, and all other client requirements, and to provide consistent feedback on the accuracy and timeliness of outputs produced by their team.
Assists in the review and/or preparation and completion of NAV and financials of private equity funds, including recording journal entries, monthly/quarterly/annual financials, processing payments, investor notices, and various client reporting.
Assists in the review and/or preparation and completion of capital call and distribution workings, along with notices and their release to the respective investors.
Facilitates and assists with the conversion of private equity funds from other accounting applications to Investran.
Facilitates and assists with onboarding new private equity funds in the accounting platform (Investran/Paxus). This entails assisting in the review and/or preparation of all complex reporting requirements, such as, but not limited to, financial statements, working papers/management accounts, partner statements, and ad hoc client deliverables.
Spends most of their time driving results based on the KPIs (e.g. quality and timeliness, error reports, increased productivity and lower overtime, among others) and optimizing the work performance of the team.
Facilitates and assists various accounting and administration processes of complex clients and conducts all work in accordance with IQ-EQ India Global Client Delivery's policies and procedures.
As a supervisor, the Assistant Manager leads, organizes, coordinates, develops, and monitors the performance of team members, and delegates tasks and responsibilities to them.

Core Responsibilities
Responsible for the review of financial statements with disclosures, NAV reporting, and other ad hoc service requests for complex funds and/or corporate clients.
Responsible for the preparation of complex reporting requirements, when necessary.
Manages and ensures the timely and accurate deliverables of the team.
Acquires the knowledge to become a specialist in their designated clients' requirements and deliverables.
Ensures and champions compliance by following procedures and checklists, in line with SSAE and other similar statutory requirements of the clients they handle.
Provides guidance and coaching to the team in both the technical and non-technical aspects of their role.
Seeks knowledge and expertise for their own professional development.
Initiates the development of the technical competencies of their teams by providing feedback on their deliverables and endorsing them to the appropriate learning & development activities.
Directly liaises with cluster counterparts on upcoming deliverables and the progress thereof, queries, and other dependencies to carry out the work.
Effectively communicates and relates with the team's various stakeholders.
Submits cluster client requirements after they have been thoroughly reviewed.
Acts as an alternate for Managers based on business needs, which may include cluster client responsibilities, administrative tasks, and team management.

Tasks & Duties

Cluster Client Delivery
Acquires and shares specialized knowledge and understanding of the clients' agreements, Scope of Work (SOW), SLAs, and other information needed to review and/or prepare deliverables.
Confirms that complete information, data, and supporting documents are received for the review/preparation of the needed requirements, then escalates and requests any missing information from the cluster counterparts.
Ensures the delivery of consistent and quality information within the team's agreed timeframes.
Coordinates deadlines (and any changes) for deliverables with the cluster, then plans and assigns the workload within the team. Facilitates requests for support from peers, as needed.
Prepares (if needed), completely reviews, and ensures the quality of the assigned clients' deliverables within the agreed timelines.
Is knowledgeable about the components of the reports generated by the appropriate platform(s), when necessary. Reviews the reports generated by the team, as well as outputs and all other deliverables prepared.
Monitors that all review comments for all clients have been addressed.
Monitors that the team takes full ownership of, and replies to, cluster/client/investor queries within 24 hours.
Addresses any job-related issues and concerns in a timely manner and escalates to the appropriate authorities, as needed.
Communicates review comments to the team and then follows up with preparers to address pending comments.
Monitors and oversees the interpretation of the client's requirements and, in some instances, does the research and validation in the absence of an Accountant.
Drafts error reports/compliance cases by collecting information on the root cause, then proposes corrective actions and preventive measures in a timely manner.
Gathers and organizes the information needed for the quarterly debrief meeting with the cluster to review the team's performance. This includes consolidating review comments and action points to watch out for in the following quarter.
Decides the proper treatment for transactions when differences in points of view arise, and escalates recommendations to the appropriate authorities, as needed.
Decides on escalated recommendations and, if necessary, escalates to SMEs/the Manager or the technical team.
Standardizes and optimizes the efficiency of their funds' processes.

Workflow Management
Ensures timesheets are completed daily and accurately filed for all hours worked.
Reviews and approves the timesheets of their team (their Senior Accountants) and ensures they are completed daily and accurately filed for all hours worked in a timely manner.
Reviews and pre-approves the filed overtime of their team (SAs).
Sets up, monitors, and updates all deliverables via the workflow planner in a timely manner. Oversees the accuracy and completeness of the workflow planner for the team.

Systems
Understands and uses best practice on the accounting platform(s).
Understands and becomes knowledgeable in generating reports using the reporting platform(s).

Risks
Champions compliance by ensuring relevant procedures, checklists, and SSAE requirements are adhered to and completed to mitigate errors.
Reports any breaches, complaints, or errors to the appropriate authorities in a timely manner.
Acquires knowledge of risk factors and potential breaches.
Monitors and oversees draft error reports and compliance cases with root-cause information. In the absence of a Senior Accountant, drafts the error reports/compliance cases, corrective actions, and preventive measures based on collected root-cause information in a timely manner.

Other
May undertake additional tasks and/or responsibilities as part of their professional development, which may or may not be related to their specific function.
Escalates any other work-related issues and concerns to the appropriate authorities in a timely manner.
Participates in interviews, as needed.

Key behaviours we expect to see
In addition to demonstrating our Group Values (Authentic, Bold, and Collaborative), the role holder will be expected to demonstrate the following:
Drives Results - Consistently achieving results, even under tough circumstances.
Optimises Work Processes - Knowing the most effective and efficient processes to get things done, with a focus on continuous improvement.
Directs Work - Providing direction, delegating, and removing obstacles to get work done.
Builds Effective Teams - Building strong-identity teams that apply their diverse skills and perspectives to achieve common goals.
Resourcefulness - Securing and deploying resources effectively and efficiently.
Collaborates - Building partnerships and working collaboratively with others to meet shared objectives.
Courage - Stepping up to address difficult issues, saying what needs to be said.
Develops Talent - Developing people to meet both their career goals and the organisation's goals.
Persuades - Using compelling arguments to gain the support and commitment of others.
Business Insight - Applying knowledge of business and the marketplace to advance the organisation's goals.
Builds Networks - Effectively building formal and informal relationship networks inside and outside the organisation.
Balances Stakeholders - Anticipating and balancing the needs of multiple stakeholders.
Decision Quality - Making good and timely decisions that keep the organisation moving forward.

Qualifications
Education/professional qualifications
Graduate of accounting or any business-related course with 6+ years of relevant accounting experience, including at least two (2) years of supervisory experience.

Background Experience
Experience with managing service operations and client deliverables.
Experience in managing staff and/or conducting appraisals.
Sound knowledge of Ind AS, IFRS, and the GAAPs of different jurisdictions (US, UK, and Lux).
Preparation and review of financial statements under applicable laws and regulations.
Fluency in English and an additional foreign language.

Technical
Actual work experience in the preparation and/or review of working paper files, financial statements with disclosures, and other financial information.
Operational experience in fund accounting services is required, preferably in handling private equity.

Computer / program knowledge
Intermediate Excel skills such as pivot tables, lookups, "IF" and other similar functions.
Experience with e-mail, word processing, presentation, and video conferencing applications such as Microsoft Office.
Effective written and advanced verbal communication skills.
Experience in using accounting software (Investran/Paxus).

Desired
Completed Certified Public Accountant (CPA)/ACCA qualification.
Experience working in a Financial Services or Shared Services office environment.
Experience working in a multinational office environment.
Open to travel and other secondment opportunities abroad.
University degree in Accountancy or other accounting-related courses.

Additional Information
At IQ-EQ we want you to reach your full potential. We offer an inclusive and diverse environment to support your career aspirations, with a strong emphasis on continuous learning and a holistic approach to your professional and personal development. We also offer opportunities across our service lines and our international network of offices.
For further information, and to apply, please visit our website via the "Apply" button below.

Posted 2 days ago

Apply

0.0 - 1.0 years

0 Lacs

Jaipur, Rajasthan

On-site


Job Title: Relationship Manager (RM)
Location: Jaipur (Client-Facing, Tier-2 Market Deployment)
Experience: 1–3 years (fresh MBA graduates with a strong learning curve are welcome)

Role Overview:
We are looking for a proactive and client-focused Relationship Manager to serve as the primary point of contact for our customers across a wide spectrum of financial services, including insurance, working capital, business loans, solar funding, and other debt-related solutions. The RM will be responsible for onboarding, documentation, client servicing, kit delivery, and conversion tracking, ensuring high client satisfaction and business growth.

Key Responsibilities:
Facilitate client onboarding, KYC, and documentation formalities.
Manage the delivery of starter kits and maintain accurate CRM records.
Collaborate with internal analysts, partners, and management for end-to-end execution of financial solutions.
Drive retainer renewals and cross-sell and upsell conversations.
Resolve client issues promptly and ensure a high Net Promoter Score (NPS).
Maintain strong relationships and identify upselling or refinancing opportunities across all financial product categories.

Key Performance Indicators (KPIs):
CRM Accuracy: 90%+ data hygiene and real-time updates
Kit Fulfilment: 6–8 successful kit deliveries per month
Conversion Ratio: 25% kit-to-retainer conversion from new client onboarding
Client Satisfaction: NPS ≥ 8 across the assigned client base

Job Type: Full-time
Pay: ₹360,000.00 - ₹600,000.00 per year
Schedule: Day shift
Ability to commute/relocate: Jaipur, Rajasthan: Reliably commute or planning to relocate before starting work (Preferred)
Application Question(s):
What is your current in-hand salary?
What is your expected salary?
Do you have experience in the finance sector (NBFCs, banks, etc.)?
Education: Master's (Preferred)
Experience:
Banking: 1 year (Preferred)
Debits & credits: 1 year (Preferred)
Customer relationship management: 1 year (Preferred)
Language: English (Preferred)
Work Location: In person

Posted 2 days ago

Apply

2.0 years

3 Lacs

Noida, Uttar Pradesh, India

On-site


Company Name: Rock My Sales - Noida Office
Job Title: Influencer Marketing Executive
Location: Noida
Salary: INR 15,000 - INR 25,000/month

Role Description:
Rock My Sales is looking for a creative and experienced Influencer Marketing Executive to join our dynamic team in Noida. This role is ideal for someone who excels in ideating content for influencer shoutouts and can craft bespoke influencer marketing strategies tailored to our clients' needs and products. A background in organizing bloggers' meets and basic editing skills, particularly with tools like Canva, are highly desirable.

Key Responsibilities:
Develop and manage influencer outreach programs, targeting the lifestyle, fashion, and beauty sectors.
Collaborate with influencers to create engaging and brand-aligned content for shoutouts.
Strategize and implement influencer marketing campaigns that resonate with the client's product and audience.
Organize and coordinate bloggers' meets to enhance brand visibility and engagement.
Utilize tools like Canva for basic graphic editing to support content creation.
Build and maintain relationships with a network of influencers and bloggers.
Work closely with clients to understand and meet their marketing objectives.
Analyze campaign performance, providing insights and strategies for optimization.
Stay updated with the latest trends in influencer marketing and relevant industry sectors.

Qualifications:
Bachelor's degree or higher in Marketing, Business Administration, or a related field.
At least 2 years of experience working in the field of marketing, preferably in influencer marketing.
An impressive portfolio with a successful track record of executing creative and effective influencer campaigns.
Proven experience in influencer marketing within an agency setting.
Demonstrated ability to ideate and strategize content for influencer campaigns.
Experience in organizing successful bloggers' meets.
Familiarity with graphic design tools, particularly Canva, for basic editing tasks.
Strong existing relationships with influencers, especially in lifestyle, fashion, and beauty.
Excellent communication, negotiation, and project management skills.
Creative mindset with the ability to align influencer strategies with client needs.
Knowledge of social media platforms and influencer marketing trends.
Experience with creating marketing budgets, setting goals, and tracking progress.

Company Description
Rock My Sales (RMS) is a creative boutique that empowers brands to establish a potent place in their industry via brand management and digital marketing. With extensive experience serving some of the most renowned groups and MNCs around the world, we have wide expertise in handling accounts across a wide spectrum of industries, including but not limited to hospitality, e-commerce, health, real estate, entertainment, FMCG, consumer electronics, apparel, financial consultancy, publishers, authors, and education.

We Offer:
● A dynamic and creative work environment with opportunities for growth.
● Exposure to diverse clients and projects.

Rock My Sales Services LLP is an Equal Opportunity Employer.

Posted 2 days ago

Apply

5.0 - 8.0 years

0 Lacs

Mumbai Metropolitan Region

On-site


Job Summary:
We are launching a premium Executive Education initiative designed for India's most accomplished business leaders (CXOs, entrepreneurs, and senior professionals) who are preparing for their next career chapter. The program is thoughtfully designed, academically rigorous, and positioned as a first-of-its-kind leadership transition platform.
As Program Lead, you will be entrusted with full-spectrum responsibility for brand activation, stakeholder engagement, sales leadership, and program excellence. You will serve as the driving force behind a high-trust learning experience tailored for a senior, discerning audience. This is a business leadership role, requiring strategic thinking, operational ownership, and a flair for premium positioning.

Key Responsibilities:

Program Launch & Brand Activation:
Lead the go-to-market strategy for a flagship executive education offering.
Craft the brand voice, messaging architecture, and positioning across platforms.
Oversee the development of high-end marketing assets, including pitch decks, brochures, and digital content.

Sales & Stakeholder Engagement:
Drive consultative sales to senior business leaders, CXOs, and high-impact professionals.
Represent the program in corporate boardrooms, HR forums, and strategic partnerships.
Build and manage high-quality lead pipelines through targeted outreach and referrals.

Program Delivery & Experience:
Ensure seamless delivery across online and offline components such as residential immersions, live faculty sessions, and mentorship engagements.
Liaise with academic institutions, guest faculty, coaches, and operations teams to uphold excellence.
Serve as the primary relationship custodian for participants from enrolment through alumni engagement.

Operational & Business Ownership:
Track sales, feedback, and engagement metrics to ensure impact, ROI, and continuous program evolution.
Work cross-functionally with creative, academic, and leadership teams.
Recruit and manage support teams as the program scales.

Qualifications:
MBA from a top-tier institution with 5-8 years of experience in executive education, consulting, premium brand management, or high-touch service industries.
Strong storytelling, communication, and stakeholder management skills.
High executive presence and the maturity to engage a CXO audience.
Entrepreneurial, self-driven, and comfortable owning a business vertical.
Prior exposure to leadership programs, high-value clients, or institutional partnerships is a plus.

Benefits:
Be the face and force behind one of India's most premium executive learning brands.
Shape a nationally recognized program that enables senior leaders to reimagine their careers.
Collaborate with globally respected academic partners, facilitators, and mentors.
Work directly with visionary leadership in a high-autonomy, high-impact role.

Posted 2 days ago

Apply

3.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site


SDE2 – .NET Developer (C# + Angular)
Experience: 3-6 years
Salary: Competitive
Preferred Notice Period: Within 15 days
Opportunity Type: Office (Ahmedabad)
Placement Type: Permanent
(*Note: This is a requirement for one of Uplers' clients.)
Must-have skills: C#, .NET, Angular

TatvaCare (one of Uplers' clients) is looking for:

About TatvaCare
TatvaCare is transforming care practices to deliver positive health outcomes. A startup in the Indian health tech landscape, TatvaCare is catalyzing the transformation of care practices through digitisation. Our product portfolio includes TatvaPractice, an advanced EMR and knowledge platform for healthcare professionals, and MyTatva, a digital therapeutics application designed to manage chronic diseases like fatty liver, COPD, and asthma. Through these initial solutions and more to come, we aim to bridge the gap in healthcare, connecting professionals and patients. We are committed to revolutionizing healthcare in India, promoting efficient, patient-centric care, and optimizing outcomes across the healthcare spectrum.
MyTatva: a DTx app that aids adherence to doctor-recommended lifestyle changes.
TatvaPractice: an ABDM-certified EMR platform to enhance a doctor's practice.
Our vision is not just about digitizing records; it's about fostering a healthcare ecosystem where efficiency and empathy converge, ultimately leading to a health continuum.

Job Description:
We are looking for a passionate and skilled SDE2 to join our engineering team at TatvaCare. The ideal candidate will have hands-on experience in developing full-stack applications using .NET (C#) and Angular, with a solid understanding of DevOps practices and project delivery management.

Key Responsibilities:
Design, develop, and maintain scalable .NET (C#) applications with an Angular frontend.
Collaborate closely with cross-functional teams to define, build, and deliver key features.
Implement CI/CD pipelines and manage deployment on cloud or on-prem environments.
Lead modules independently and ensure timely delivery with quality.

How to apply for this opportunity (easy 3-step process):
1. Click on Apply and register or log in on our portal.
2. Upload your updated resume and complete the screening form.
3. Increase your chances of getting shortlisted and meet the client for the interview!

About Uplers: Our goal is to make hiring and getting hired reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant product and engineering job opportunities and progress in their career. (Note: There are many more opportunities apart from this on the portal.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 2 days ago

Apply

10.0 - 12.0 years

0 Lacs

Aurangabad, Maharashtra, India

On-site


At Johnson & Johnson, we believe health is everything. Our strength in healthcare innovation empowers us to build a world where complex diseases are prevented, treated, and cured, where treatments are smarter and less invasive, and solutions are personal. Through our expertise in Innovative Medicine and MedTech, we are uniquely positioned to innovate across the full spectrum of healthcare solutions today to deliver the breakthroughs of tomorrow, and profoundly impact health for humanity. Learn more at https://www.jnj.com Job Function Supply Chain Engineering Job Sub Function Manufacturing Engineering Job Category Scientific/Technology All Job Posting Locations: Aurangabad, Maharashtra, India Job Description SECTION 1: JOB SUMMARY* Design, execution and documentation of process characterization, process development, equipment qualification, validations, Manufacturing Equipment and Sterilization Validation Master Plan, and test methods used in manufacturing processes in the context of the applicable standards and regulations. Lead- Process Engineering is responsible to execute process engineering deliverables as per the engineering and base business strategies and objectives at the Aurangabad site. Ensures flawless execution of Manufacturing process validations or qualifications. Provides leadership to the Process team and ensures the support from all engineering functions and also other required cross functions. Responsible to ensure the integration with the regional and WW process engineering organization. Responsible to develop and execute the Sterilization strategies and objectives at ETHICON, Aurangabad site. Provides the leadership & support to engineering function for execution of EO and GAMMA sterilization projects and regular maintenance. Provide Technical knowledge to the Manufacturing engineering, Process engineering and Engineering Projects team. Designs, develops, tests, and evaluates new and existing manufacturing systems for industrial production processes including human work factors, material flow, cost analysis, and process optimization in both production and packaging operations which includes but not limited to Technology Roadmap projects. Is responsible for the Process Safety Management activities for the site. SECTION 2: DUTIES & RESPONSIBILITIES* Ensures coordination and execution of the recurring validation activities according to validation master plan. Leads cross functional teams (internal and external to engineering) and resolves inter- functional issues. Establishes and communicates process and program schedules, objectives, priorities, and targets. Ensures the documentation of project and program activities and deliverables. Planning, coordination, monitoring and evaluation of manufacturing equipment & process validation activities. Training of employees in validation specifications, test plans, test methods, etc. Supporting the selection of processes / machines, taking into account the requirements of project and production as well as investment and cost aspects. Ensuring an effective process risk management (FMEA) carrying out process risk analysis process participation in product risk analysis. Developing manufacturing processes under application / taking into account methodological concepts (Six Sigma, Lean). Supporting the root cause identification and implementation/documentation of corrective measures during the stabilization phase of a product / process development project. 
Taking over of co-ordination and project management tasks for the Aurangabad site if no project manager is associated with the project. Ensuring process optimization within the stabilization phase of a assigned project Engages to comply with ISO and FDA requirements. Ensuring compliance to the Quality system requirements. Utilizing problem solving skills and statistical techniques to support product / processes controls that are aligned with the overall quality and business vision. Assuring that engineering department is appropriately run in a safe, clean, and environmentally sound manner. Developing and analyzing statistical data and machine specification to determine present standards and establish proposed quality and reliability expectancy of finished product. Assisting in engineering budget preparation, goal tracking and in the business planning process. Supporting for NCR’s investigation and performs trend analysis and report to Management. Ensuring for training / compliance of GMP as per FDA guidelines and site procedures and Policies and on the job training. Participating in audits and gap assessments in support of the internal audit program and FDA readiness. Partnering with operations in the investigation / correction of process failure Developing safety culture in the engineering function. Prepares capital expenditure proposals starting from preparation of draft proposals, vendor selection, price negotiations, planning, and execution. Leads projects end-to-end. Lead engineering projects, New Technology or Technology Transfer projects to improve existing technology with respect to Quality, Compliance, Capacity and Cost. Ensure that projects are completed on time and within budget with no impact to quality or customer service. Develop and execute detailed project plans using standard project management tools (charter, Gantt chart, etc.). Prepare/review capital & expense forecasts for assigned projects. Prepare capital authorization requests (CAR). Sterilization: Under limited supervision & general direction and in accordance with all applicable federal, state and local laws/ regulations and Corporate Johnson & Johnson, procedures and guidelines, this position: Prioritize, assign and coordinate for EO & Gamma sterilization, UDI, serialization and related work. Establish, review and communicate plans (work/project scope, cost, schedule, resource requirements, and risks) for sterilization related activities required to meet system demands and business objectives. Lead & prepare a sterilization road map and prepare a strategy to implement them. Support in preparing technology road map and prepare a strategy to implement them. Lead the various sterilization related projects in tandem with WW sterilization experts. Lead & deliver projects like sterilizer replacement, new cycle creation, existing cycle improvement (as needed) and new practices development. Monitor progress to ensure final deliverable meet lifecycle boundaries and customer acceptance criteria. Engage to comply with ISO and local FDA requirements. Responsible for revision I creation of documents like SOP, Forms etc. required for manufacturing /Sterilization/UDI/Serialization & as required. AUTHORITIES Create PR in emp/Ariba. Create & Approve PR in emp/Ariba Create/Approve Gate Pass for material. Change Assessment creation in ADAPTIV/ PLM system. CO/CP creation in ADAPTIV/PLM system. QMS document Approval creation/Approval in ADAPTIV/ PLM system. 
SECTION 3: EXPERIENCE AND EDUCATION* Bachelor’s degree with minimum 10 to 12 years of experience in Engineering/Industrial/Electrical/Mechanical Engineering (related stream). Experience in the Medical Device industry or pharmaceutical or consumer or similar industry with experience in aseptic handling. Demonstrated knowledge of manufacturing principles and practices and procedures. Knowledge of specific business practices and software and software applications. Experience using medical device equipment. Ability to communicate effectively with a diverse client/stakeholder base. Ability to work cooperatively with coworkers, peers and required stakeholders. Ability to perform duties in accordance with policies and procedures. SECTION 4: REQUIRED KNOWLEDGE, SKILLS, ABILITIES, CERTIFICATIONS/LICENSES and AFFILIATIONS * Functional Competencies - Engineering Basics: Ability to use knowledge of technical designs, Understanding and creating Engineering drawings and leads key site projects/base business automation initiatives. Manufacturing processes Knowledge: Displays in-depth knowledge of manufacturing methods and standards of process control. Leverages the knowledge and leads practices to implement sustainable process improvements in assigned areas. Product knowledge: Understanding of Product functionalities and identifies improvement opportunities. Understand product requirements and translate into product characteristics and procedures. Ability to define, measure, improve product characteristics and their co-relation with product performance. Process Excellence: Displays in-depth understanding of Lean tools & techniques, Value stream mapping, Six Sigma as a certified Black Belt, Statistical Data Analysis & process controls. Technical Quality and Compliance: Displays in-depth knowledge of J&J quality standards to implement cross-functional corrective action related to quality issues. Use in-depth knowledge of EHS policies to guide others when implementing EHS initiatives. Standard Cost generation: Displays in-depth Knowledge within Financial Systems and Budget Preparation, ROI calculation, standard cost planning and leverages the skills and leading Practices to implement long range business plans that meet J&J strategic Goals & creates new opportunities. Image/Signal Processing and Robotics: Displays in-depth Knowledge within image analysis, image processing, image algorithms, robotic systems, software and verification systems development and leverages the skills and leading Practices to implement processing strategy. Mechanical Equipment & Systems: Displays in-depth knowledge of commissioning of mechanical equipment, Safety equipment and leverages best practices. Process Validations: Ability to develop/plan qualification strategy for overall process, execution of strategy and plans in a diligent manner; ability to react/resolve issues that occur during qualifications; basic knowledge of statistical techniques. Packaging Equipment and Operation: Consistently applies Knowledge and experience to a wide variety of activities and situations associated with Automation & Robotics, Vacuum Technology, Programmable Logic Controllers (PLC), Mechanical Engineering, Sterilization/Cleanroom Technology. Packaging Process: Consistently applies Knowledge and experience to a wide variety of activities and situations associated with Packaging Process, Packaging Materials, Test Methods, Packaging Classification & Types. 
Packaging Design: Displays in-depth knowledge of developing and implementing new technology concepts and methods; leverages these skills and leading practices to implement long-range business plans that meet J&J strategic goals and create new opportunities.

Professional Competencies:
Analytical Problem Solving: Applies analytic techniques to interpret data, identify issues, analyze their causes, and provide well-reasoned conclusions and solutions; uses in-depth knowledge of analysis and problem-solving techniques to study reports, identify underlying issues or trends, and assess the broader implications of findings based on quantifiable data in order to recommend appropriate solutions.
Technology & Data Management: Uses in-depth knowledge of system and technology capabilities, architecture, and leading practices to interface effectively with IT professionals to identify, select, and implement tools that enable business processes.
Quality Mindset: Participates in quality processes as appropriate, including validation and compliance-related issues (for example, FDA regulations, holds, customs); demonstrates an understanding of the critical importance of traceability and the ability to apply supporting approaches or technologies (e.g., lot coding, expiration dating); applies knowledge of validation strategies and/or continuous improvement concepts to proactively identify process deficiencies or improvements.
Project Management: Demonstrates expertise in project management tools and techniques and in interactions with project stakeholders and sponsors; identifies innovative ways to improve cost or lead time and to maximize resources to achieve project outcomes; leverages understanding of FPX and other project management methodologies to perform root-cause analysis on project failures.
Business Case Development: Uses in-depth knowledge of business case development to articulate the case for broad, cross-departmental change to decision-makers; mentors others by providing direction and context for change, outlining linkages between functional activities and J&J's bottom line.
Knowledge of project management methodologies (e.g., PMP).
Knowledge of continuous improvement tools, Lean Manufacturing, and Six Sigma.
Ability to lead a team of professionals with diverse skills and competencies spanning business and technical areas.
Knowledge of SAP-based MRP, Visio, MS Project, Minitab, ADAPTIV, ETQ Audit, ETQ CAPA, ETQ NC, and ComplianceWire.

Posted 2 days ago

Apply

2.0 - 3.0 years

0 Lacs

Mumbai Metropolitan Region

On-site


At Johnson & Johnson, we believe health is everything. Our strength in healthcare innovation empowers us to build a world where complex diseases are prevented, treated, and cured, where treatments are smarter and less invasive, and solutions are personal. Through our expertise in Innovative Medicine and MedTech, we are uniquely positioned to innovate across the full spectrum of healthcare solutions today to deliver the breakthroughs of tomorrow, and profoundly impact health for humanity. Learn more at https://www.jnj.com

Job Function: Human Resources
Job Sub Function: Talent Acquisition
Job Category: Professional
All Job Posting Locations: Mumbai, India

Job Description

Johnson & Johnson is recruiting for a Lead Talent Acquisition Partner - Early in Career, Asia Pacific, located in Mumbai, India. The incumbent will partner closely with business leaders, HR, and Talent Acquisition to build and implement highly differentiated strategies that ensure J&J attracts, assesses, and acquires the top Early in Career talent in the marketplace.

Key Responsibilities

Lead the end-to-end recruiting process, adopting new technologies so that the sourcing, recruiting, assessment, offer, onboarding, and communication processes run efficiently, contributing to a positive candidate experience and employer equity.
Own recruitment and selection projects and recommend changes to the process to increase attraction and retention of highly qualified applicants.
Proactively source (e.g., networking, internet research, university events and conferences) and recruit university hires corporate-wide.
Establish external networks with universities, including career centers, student groups, and professors.
Provide ongoing advisory value to clients to improve search and recruitment efforts.
Serve as one of the APAC EiC team members driving local recruitment, engagement, and project implementation.
Be the key driver in India and collaborate with other APAC team members to deliver strong performance.
Get involved with and collaborate on EiC regional/global projects as needed.
Be responsible for Metaverse India operations.

Qualifications

Education: MBA from a Tier 1 business school (preferably from the batch of 2024), or an MBA in HR from any business school with at least 2-3 years of experience in campus recruitment.

Experience and Skills

Excellent communication and data analysis skills.
Ability to influence internal and external stakeholders.
A creative thinker, digitally savvy, with strong problem-solving abilities.
Campus strategy and management.
Responsibility for campus budgeting.
Employer branding.

Johnson & Johnson is an Affirmative Action and Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, age, national origin, or protected veteran status and will not be discriminated against on the basis of disability. For more information on how we support the whole health of our employees throughout their wellness, career and life journey, please visit www.careers.jnj.com.

Posted 2 days ago

Apply

3.0 years

0 Lacs

Ambala, Haryana, India

On-site


Established with a vision to meet the varied demands of customers, Osaw Udyog Pvt Ltd, based in Ambala Cantt, Haryana, India, has been engaged in this domain since 2005 as a highly rated manufacturer, supplier, and exporter of quality-certified agricultural implements and machinery. Our comprehensive product range includes Rotary Tillers ranging from 2 feet to 10 feet (multi-speed and single-speed), Power Harrows, Rotary Disc Harrows, Super Seeders, Straw Reapers, Laser Land Levellers, Mulchers, Reversible Ploughs, and other agricultural implements. The Osaw Group was founded in 1919, initially manufacturing laboratory equipment. In 1998 we ventured into agricultural equipment, and over the decades we have concentrated on the agricultural industry, becoming a benchmark in this field with a strong presence worldwide. Today, Osaw Udyog is well known in farm and agricultural equipment. The main production plant comprises approximately 70,000 sq ft of covered area within a 2,80,000 sq ft total area, encompassing all the phases needed to develop a new project. The company's aim is to reduce India's imports of agricultural implements. We are a fully autonomous organisation able to manage the entire production cycle: storage and metal cutting, MIG welding, CNC bending, special SPMs for welding, a paint shop with oven baking, shot blasting for product cleaning, a well-equipped design section with the latest Solid Edge software, and a well-equipped assembly line. The quality of the materials used, our production process, quality checks at various stages of production, pre-dispatch inspection, and fair dealings assure the excellent quality of our machines.

The Role

You will be responsible for:
Providing support across the full spectrum of HR functions, including talent acquisition, learning and development, and compensation and benefits.
Employee onboarding and offboarding.
Creating and maintaining employee personnel files and ensuring employee information is up to date in the internal system.
Application and renewal of work visas.
Maintaining employee leave and training records.
Administering medical and other insurance as per company policy.
Preparing monthly HR reports for management.
Ensuring timely and accurate processing of payroll.
Working with internal stakeholders to handle payroll-related inquiries and resolving any issues or errors promptly.
Maintaining all statutory compliance with respect to payroll.
Other ad hoc work, such as preparing work certificates.

Ideal Profile

You have at least 3 years of experience in an HR Administrator or Payroll Accountant role, ideally within the agribusiness/agritech, real estate, or manufacturing industries.
You have strong knowledge of legal and statutory requirements pertaining to HR practices.
You have working knowledge of TA/DA and compliance.
You pay strong attention to detail and deliver work of a high standard.
You are a strong team player who can manage multiple stakeholders.
You are a strong networker and relationship builder.

What's on Offer?

An opportunity within a company with a solid track record of performance.
A role that offers a breadth of learning opportunities.

Posted 2 days ago

Apply

10.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Through bold discovery and cutting-edge innovation, we lead an industry that is vital for the future of our planet: lighting. Through our leadership in connected lighting and the Internet of Things, we're breaking new ground in data analytics, AI, and smart solutions for homes, offices, cities, and beyond. At Signify, you can shape tomorrow by building on our incredible 125+ year legacy while working toward even bolder sustainability goals. Our culture of continuous learning, creativity, and commitment to diversity and inclusion empowers you to grow your skills and career. Join us, and together, we'll transform our industry, making a lasting difference for brighter lives and a better world. You light the way.

More About The Role

What you'll do

Develop cloud-native microservices for Connected Lighting solutions.
Implement modular code to bring software designs to life.
Work with an Agile team to realise product features.
Interact with product management to understand requirements and translate them into implementations.

Your Qualifications

10 years of experience building data-intensive applications and pipelines involving concepts like ETL (see the illustrative sketch after this posting).
Proficient with core Java.
Good to have: practical experience with frameworks like Spring Boot.
Demonstrated capability in API design.
Demonstrated ownership of features and modules in past projects, including feature and module design and practices such as code reviews, unit tests, code coverage, and build sanity.
Experience integrating with databases such as Postgres/MySQL. Good to have: practical experience with NoSQL databases.
Demonstrated capability in building cloud-native solutions on platforms such as AWS/GCP/Azure.
Practical understanding of application security.

Everything we'll do for you

You can grow a lasting career here. We'll encourage you, support you, and challenge you. We'll help you learn and progress in a way that's right for you, with coaching and mentoring along the way. We'll listen to you too, because we see and value every one of our 30,000+ people. We believe that a diverse and inclusive workplace fosters creativity, innovation, and a full spectrum of bright ideas. With a global workforce representing 99 nationalities, we are dedicated to creating an inclusive environment where every voice is heard and valued, helping us all achieve more together.
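Since the qualifications above mention ETL pipelines, here is a minimal, self-contained sketch of the extract-transform-load pattern. It is illustrative only and assumes nothing about Signify's actual stack; the sample data, table name, and column names are invented for the demo.

```python
# Minimal ETL sketch (illustrative, not any employer's stack): extract rows
# from a CSV, transform them, and load them into SQLite using only the stdlib.
import csv
import io
import sqlite3

raw = io.StringIO("device_id,lumens\nlamp-1,800\nlamp-2,1200\n")  # assumed input

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (device_id TEXT, lumens INTEGER)")

rows = ((r["device_id"], int(r["lumens"]))       # transform: cast lumens to int
        for r in csv.DictReader(raw))            # extract: parse CSV rows
conn.executemany("INSERT INTO readings VALUES (?, ?)", rows)  # load

print(conn.execute("SELECT AVG(lumens) FROM readings").fetchone()[0])  # 1000.0
```

Real pipelines add batching, schema validation, and retry logic, but the three stages stay the same.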

Posted 2 days ago

Apply

3.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site


Saint-Gobain group, through its group company Grindwell Norton Limited, has established INDEC - an International Delivery Center in Mumbai - to provide IT solutions and services to the group's businesses globally. INDEC is currently organized into INDEC Application Development, INDEC Infrastructure Management, and Cyber Security Management. While INDEC Apps specializes in software application development and maintenance services (ADM), INDEC Infra specializes in monitoring and managing the key IT infrastructure assets of the group deployed globally across 70 countries. INDEC provides IT services and solutions to the Saint-Gobain group through its state-of-the-art delivery centers based at Andheri East in Mumbai. There are approximately 1200+ associates working in INDEC currently. INDEC Apps provides software application development and maintenance services across a wide spectrum covering SAP, Java, PHP, .Net, CRM, Mobility, Digital, Artificial Intelligence (AI), and Robotic Automation. INDEC Infra operates the following service lines: Network Coordination Center (NCC/NOC), Data Center Infrastructure Support, IT Standards, Tools Engineering, and Reporting Automation. INDEC Cybersecurity provides 24/7 security monitoring to detect and react to any suspicious activity in Saint-Gobain, covering vulnerability scanning, web application firewall, endpoint protection, strong authentication, digital certificates, Win 10 MBAM, and SFTS support.

Responsibilities:
Project Work: Participate in SAP PM implementation/rollout and support projects.
Configuration: Configure SAP PM modules to meet business requirements.
Data Migration: Perform data upload and migration activities.
Enhancements: Develop and implement enhancements and custom developments.
Interfaces: Work on interfaces with third-party systems.
Test Scripts: Create and execute test scripts for various scenarios.
Defect Resolution: Identify, analyze, and resolve defects in the system.
Training: Conduct training sessions for end-users and stakeholders.
Documentation: Prepare and maintain project documentation.
Collaboration: Work closely with other SAP consultants and project teams.
Calibration and Refurbishment: Manage calibration and refurbishment processes for equipment and assets.

Key Performance Indicators:
Project Delivery: Successful completion of implementation, rollout, and support projects within deadlines.
Technical Accuracy: High level of accuracy in technical solutions and configurations.
Client Satisfaction: Positive feedback from clients and stakeholders.
Problem Solving: Effective resolution of issues and challenges.
Team Collaboration: Active participation and contribution to team efforts.

Qualifications:
Education: Bachelor's degree in Mechanical Engineering, Computer Science, Information Technology, or a related field.
Experience: 3-5 years of experience in SAP PM with knowledge of PP or QM.
Certifications: SAP PM certification is preferred.
S4 Experience: Minimum 1 year of experience in SAP S/4HANA.
SSAM: Hands-on experience with SSAM.

Functional Skills/Competencies:
Technical Expertise: Proficient in SAP PM preventive maintenance, corrective maintenance, maintenance planning, work order management, equipment management, reporting, calibration and refurbishment, measuring points, classification, and notifications. Good understanding of the PP module and its integration with other modules such as MM, FI, CO, and SD.
Project Experience: Minimum of 1 end-to-end implementation or 1 rollout, plus 1-2 support projects.
S4 Experience: At least 1 year of experience in SAP S/4HANA.
SSAM Hands-on: Practical experience with SSAM.
Data Migration: Proficiency in data upload and migration activities. Good to have: knowledge of tools like Syniti.
Enhancements/Custom Developments: Experience preparing functional specifications and working with ABAP for enhancements and custom developments.
Third-party Interfaces: Working experience with third-party system interfaces, including IDocs/APIs.
Test Scripts and Defect Resolution: Experience creating and executing test scripts and resolving defects.
Domain Knowledge: Experience in relevant industry domains.

Behavioral Skills/Competencies:
Proactive: Takes initiative and anticipates needs.
Communicative: Excellent verbal and written communication skills.
Professional: Maintains a high level of professionalism in all interactions.
Team Player: Works well in a team environment and collaborates effectively.
Problem Solver: Strong analytical and problem-solving skills.

SELECTION PROCESS:
Interested candidates are required to apply through this listing on Jigya. Only applications received through Jigya will be evaluated further.
Shortlisted candidates may be required to appear in an online assessment administered by Jigya on behalf of Saint-Gobain INDEC.
Candidates selected after the screening test will be interviewed by Saint-Gobain INDEC.

Posted 2 days ago

Apply

0 years

0 Lacs

Mumbai, Maharashtra, India

On-site


Saint-Gobain group, through its group company Grindwell Norton Limited, has established INDEC - an International Delivery Center in Mumbai - to provide IT solutions and services to the group's businesses globally. INDEC is currently organized into INDEC Application Development, INDEC Infrastructure Management, and Cyber Security Management. While INDEC Apps specializes in software application development and maintenance services (ADM), INDEC Infra specializes in monitoring and managing the key IT infrastructure assets of the group deployed globally across 70 countries. INDEC provides IT services and solutions to the Saint-Gobain group through its state-of-the-art delivery centers based at Andheri East in Mumbai. There are approximately 1200+ associates working in INDEC currently. INDEC Apps provides software application development and maintenance services across a wide spectrum covering SAP, Java, PHP, .Net, CRM, Mobility, Digital, Artificial Intelligence (AI), and Robotic Automation. INDEC Infra operates the following service lines: Network Coordination Center (NCC/NOC), Data Center Infrastructure Support, IT Standards, Tools Engineering, and Reporting Automation. INDEC Cybersecurity provides 24/7 security monitoring to detect and react to any suspicious activity in Saint-Gobain, covering vulnerability scanning, web application firewall, endpoint protection, strong authentication, digital certificates, Win 10 MBAM, and SFTS support.

Job Responsibilities:
Experience implementing SAP S/4HANA Controlling, with a minimum of 2 end-to-end implementations along with global rollouts.
Adept at handling business requirements and mapping them to SAP processes, especially the global template design process, working with the technical team on various RICEF developments.
Strong product costing skills, including Material Ledger, COPA, integration with the Material Management, Sales and Distribution, and Production Planning modules, and interfaces.
Conversant with SAP Fiori apps, workflows, etc.
Experience delivering projects using Agile methodology.
Experience with cutover and data migration activities.
Hands-on experience with SAP S/4HANA processes.
Analysis, ticket resolution, and dealing with business stakeholders.

Qualification: Any graduate/postgraduate (preferably CMA or CA).

Functional Skills/Competencies: SAP CO - product costing, COPA, integration with MM, FI, and SD, variance analysis.

Behavioral Skills/Competencies: Strong oral and written communication; strong analytical skills.

SELECTION PROCESS:
Interested candidates are required to apply through this listing on Jigya. Only applications received through Jigya will be evaluated further.
Shortlisted candidates may be required to appear in an online assessment administered by Jigya on behalf of Saint-Gobain INDEC.
Candidates selected after the screening test will be interviewed by Saint-Gobain INDEC.

Posted 2 days ago

Apply

Exploring Spectrum Jobs in India

The spectrum job market in India is rapidly growing with the increasing demand for professionals with expertise in this field. Spectrum roles encompass a wide range of job opportunities in various industries such as telecommunications, technology, and research. Job seekers looking to explore spectrum jobs in India have a plethora of opportunities to choose from.

Top Hiring Locations in India

  1. Bangalore
  2. Hyderabad
  3. Pune
  4. Mumbai
  5. Chennai

Average Salary Range

The average salary range for spectrum professionals in India varies based on experience and expertise. Entry-level positions may start around INR 3-5 lakhs per annum, while experienced professionals can earn upwards of INR 15-20 lakhs per annum.

Career Path

A typical career path in spectrum jobs may progress as follows:

  1. Junior Spectrum Analyst
  2. Spectrum Engineer
  3. Senior Spectrum Manager
  4. Spectrum Lead

Related Skills

In addition to expertise in spectrum, professionals in this field are often expected to have knowledge or experience in the following areas (a short signal-processing sketch follows this list):

  • RF Engineering
  • Signal Processing
  • Network Optimization
  • Spectrum Analysis Tools
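As a concrete taste of the Signal Processing and Spectrum Analysis Tools skills above, the sketch below estimates a power spectrum with NumPy's FFT and reports the strongest emission. The sample rate, tone frequencies, and noise level are arbitrary assumptions chosen for the demo, not values from any particular system.

```python
import numpy as np

# Assumed demo parameters: 1 MHz sample rate, two tones buried in noise.
fs = 1_000_000                        # sample rate in Hz (arbitrary choice)
t = np.arange(0, 0.01, 1 / fs)        # 10 ms of samples
x = (np.sin(2 * np.pi * 100_000 * t)          # 100 kHz carrier
     + 0.5 * np.sin(2 * np.pi * 250_000 * t)  # weaker 250 kHz signal
     + 0.2 * np.random.randn(t.size))         # additive receiver noise

# Power spectrum estimate: window to reduce leakage, FFT, squared magnitude.
windowed = x * np.hanning(t.size)
power_db = 10 * np.log10(np.abs(np.fft.rfft(windowed)) ** 2 + 1e-12)
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

peak = np.argmax(power_db)            # strongest occupied frequency
noise_floor = np.median(power_db)     # robust noise-floor estimate
print(f"strongest emission near {freqs[peak] / 1e3:.0f} kHz, "
      f"{power_db[peak] - noise_floor:.0f} dB above the noise floor")
```

Dedicated spectrum analyzers automate exactly this loop, adding calibrated front-ends and averaging, but the FFT-based estimate is the core idea.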

Interview Questions

  • What is spectrum analysis and how is it used in telecommunications? (basic)
  • Can you explain the difference between narrowband and broadband spectrum? (medium)
  • How do you ensure regulatory compliance while managing spectrum allocations? (advanced)
  • What experience do you have with spectrum monitoring tools? (basic)
  • Describe a project where you had to optimize spectrum efficiency. (medium)
  • How do you handle interference issues in spectrum management? (advanced; a toy channel-selection sketch follows this list)
  • What is the role of spectrum management in 5G network deployment? (medium)
  • Have you worked on spectrum allocation for IoT devices? (medium)
  • Explain the concept of spectrum sharing and its benefits. (medium)
  • How do you stay updated with the latest trends in spectrum management? (basic)
  • Can you discuss the challenges of spectrum fragmentation in wireless networks? (advanced)
  • What tools or software do you use for spectrum monitoring and analysis? (basic)
  • Describe a scenario where you had to resolve spectrum interference in a live network. (advanced)
  • How do you prioritize spectrum allocations based on network requirements? (medium)
  • What are the key factors to consider when planning spectrum allocation for a new project? (medium)
  • How do you ensure spectrum efficiency while minimizing interference? (advanced)
  • Have you worked on spectrum auctions or licensing processes? (medium)
  • What measures do you take to prevent unauthorized spectrum access? (medium)
  • Can you explain the concept of dynamic spectrum sharing? (medium)
  • How do you handle spectrum congestion in high-traffic areas? (advanced)
  • Describe a time when you had to troubleshoot spectrum-related issues in a network. (medium)
  • What strategies do you use for spectrum planning and optimization? (medium)
  • How do you collaborate with other teams to ensure efficient spectrum management? (basic)
  • What are the key performance indicators you track in spectrum management? (medium)
  • How do you ensure compliance with spectrum regulations and policies? (basic)
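Several of the interference and congestion questions above (including the interference-handling one flagged earlier) reduce, in toy form, to choosing the least-occupied channel from measured power levels. The sketch below illustrates that idea; the channel names, dBm readings, and the -85 dBm limit are invented assumptions, and real spectrum management adds regulatory and propagation constraints that this ignores.

```python
# Toy channel-selection sketch: pick the least-interfered channel from
# hypothetical interference readings (dBm). Illustrative only.
from statistics import mean

# Assumed measurements: sampled interference power per channel, in dBm.
readings_dbm = {
    "ch1": [-95.0, -93.5, -96.2],
    "ch6": [-70.1, -68.9, -72.3],   # congested: strong interferers nearby
    "ch11": [-88.4, -90.0, -87.7],
}

def least_interfered(readings, limit_dbm=-85.0):
    """Return the channel with the lowest mean interference that also
    satisfies the interference limit, or None if every channel is too busy."""
    averages = {ch: mean(vals) for ch, vals in readings.items()}
    ch, level = min(averages.items(), key=lambda kv: kv[1])
    return ch if level <= limit_dbm else None

print(least_interfered(readings_dbm))   # -> 'ch1' under these assumed readings
```

Being able to walk through a simple model like this, then name what it omits (duty cycles, adjacent-channel leakage, licensing rules), is usually what interviewers are probing for.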

Closing Remark

As you explore spectrum jobs in India, remember to showcase your expertise, experience, and passion for the field during interviews. Prepare thoroughly, stay updated on industry trends, and approach each opportunity with confidence. Best of luck in your job search!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.
