15.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Location: Bangalore, India
Experience: 15+ years
Role Type: Onsite

We’re hiring a strategic research leader to head the AI and GCC intelligence division for a leading media and market insights firm. This role is ideal for someone with a strong background in market research, thought leadership, and productized research offerings.

Key Responsibilities
- Lead and Grow a Research Team: Hire, mentor, and manage analysts, data scientists, and editors across AI, GCCs, and emerging tech domains. Set annual goals and ensure high-quality output.
- Drive Business Growth: Own the P&L for the research vertical. Build and manage budgets, identify new revenue streams, and partner with Sales & Marketing for go-to-market success.
- Own Research Products: Oversee core research assets such as vendor quadrants, rankings, and data platforms. Launch new offerings aligned with market needs.
- Engage with the Ecosystem: Lead vendor briefings and RFPs, maintain evaluation integrity, and cultivate relationships with startups and tech providers.
- Be a Thought Leader: Guide the creation of signature reports, whitepapers, and custom research. Represent the firm at conferences, webinars, and in the media.
- Stakeholder Management: Serve as the key contact for clients and partners. Deliver insights that inform enterprise decisions and drive engagement.
- Ensure Methodology Excellence: Maintain rigorous standards in research design, data integrity, and tool use (Tableau, Python, R, etc.).
Ideal Candidate
- 15+ years in research, advisory, or consulting roles, ideally with a market intelligence or tech focus
- Deep understanding of AI ecosystems, GCC trends, and vendor landscapes
- Strong P&L ownership experience and a track record of product launches
- Skilled in cross-functional leadership, strategic planning, and analytical storytelling
- Excellent communication skills and comfort engaging with C-suite stakeholders
- Bachelor’s degree required; Master’s or MBA preferred

Success Metrics
- Research revenue growth and P&L health
- Launch of new research products and adoption rates
- Client satisfaction and renewal rates
- Vendor ecosystem coverage and turnaround speed
- Team engagement and retention
- External impact via citations, media, and event presence
Posted 2 days ago
2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
You’ll be our: Escalations Specialist
You’ll be based at: IBC Knowledge Park, Bengaluru / Ather Hosur Factory / Other city name
You’ll be aligned with: Customer Success Manager
You’ll be joining our: Escalations Team

Roles and Responsibilities
- De-escalation & Rapport Building: Effectively manage customer concerns, use appropriate retention strategies, and foster positive relationships during the resolution process.
- Cross-Functional Collaboration: Work closely with internal teams to ensure quick and effective issue resolution.
- Resolution Ownership: Take full ownership of customer concerns, track their progress, and follow up until a satisfactory resolution is achieved.

Here’s what we’re looking for
- Excellent Communication Skills (mandatory): Ability to communicate clearly and effectively with customers, internal teams, and stakeholders. Strong written and verbal communication is essential.
- Past Experience: Prior experience in handling customer escalations is mandatory.
- Good Product Knowledge: Strong understanding of our products and services to provide accurate and effective solutions to customers.
- Good knowledge of Salesforce (optional)

What you bring to Ather
- Experience: Minimum 2 years in customer support and escalations handling
- Education: A graduate in any field with a strong acumen for customer service
Posted 2 days ago
2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About NxtWave
NxtWave is one of India’s fastest-growing ed-tech startups. With its CCBP 4.0 programs, NxtWave is revolutionizing the 21st-century job market by transforming youth into highly skilled tech professionals irrespective of their educational background. NxtWave was founded by Rahul Attuluri (ex-Amazon, IIIT Hyderabad), Sashank Reddy (IIT Bombay), and Anupam Pedarla (IIT Kharagpur). The startup is backed by Orios Ventures, Better Capital, and marquee angels, including founders of some of India’s unicorns.

NxtWave is an official partner of NSDC, under the Ministry of Skill Development & Entrepreneurship, Govt. of India, and is recognized by NASSCOM, the Ministry of Commerce and Industry, Govt. of India, and Startup India. The startup has been named ‘The Greatest Brand in Education’ in a research-based listing by URS Media, a leading international media house.

By offering vernacular content and interactive learning, NxtWave is breaking the entry barrier for learning tech skills. Learning in their mother tongue helps learners achieve higher comprehension, deeper attention, longer retention, and greater outcomes. NxtWave now has paid subscribers from 450+ districts across India. In just 2 years, CCBP 4.0 learners have been hired by 1,000+ companies including Google, Amazon, Nvidia, Goldman Sachs, Oracle, Deloitte, and more.

Scale at which we operate (as of July 2022):
- 50 Cr+ learning minutes spent
- 12 Cr+ code runs
- 2 Bn+ API requests handled by our servers

Know more about NxtWave: https://www.ccbp.in

Job Summary
We are looking for a proactive and detail-oriented Associate Project Manager to support the smooth planning and execution of assessments across learner cohorts. The role involves coordinating assessment logistics, facilitating communication with stakeholders, handling student feedback processes, and supporting data accuracy and reporting.
Key Responsibilities
- Coordinate the creation and distribution of assessment links in alignment with internal timelines and academic plans.
- Maintain centralized records for assessment activities and ensure timely communication with relevant stakeholders.
- Review student feedback related to assessments and support the resolution of actionable concerns.
- Collaborate with internal teams to ensure assessments are approved, shared, and conducted in a structured and efficient manner.
- Facilitate the sharing of performance results and feedback summaries with learners.
- Support the setup and implementation of processes to address and act upon genuine feedback-related score updates.

Preferred Skills & Qualifications
- 1–2 years of experience in operations, academic support, or program coordination roles
- Strong communication, coordination, and documentation skills
- Familiarity with tools such as Google Workspace and basic project-tracking systems
- Ability to work independently, manage multiple priorities, and adapt in a dynamic environment

What We Offer
- Opportunity to impact the lives of learners and contribute to their success.
- Collaborative and dynamic work environment.

Location: Hyderabad
Posted 2 days ago
2.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Objectives of this Role
- Work with data to solve business problems, building and maintaining the infrastructure to answer questions and improve processes
- Help streamline our data science workflows, adding value to our product offering and building out our customer lifecycle and retention models
- Work closely with the data science and business intelligence teams to develop data models and pipelines for research, reporting, and machine learning
- Be an advocate for best practices and continued learning

What You’ll Do
- Develop BI dashboards and data visualizations to drive business decisions
- Work with cross-functional teams to analyze user behavior and product performance
- Build and optimize data models and reporting systems
- Translate data into actionable insights that support growth, retention, and customer experience

What We’re Looking For
- 2+ years of experience in BI, SQL, data modeling, ETL, and reporting tools (MicroStrategy preferred)
- Strong analytical mindset and business acumen
- Ability to present complex data in a clear, concise manner to non-technical teams
- Experience in any B2C domain like Gaming, Fintech, EdTech, or E-commerce is a strong plus

Preferred Qualifications
- Experience creating and managing reports on BI tools
- Hands-on experience with SQL database design
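As a sketch of the retention-style reporting this role describes, the snippet below computes a day-1 retention rate in plain SQL using Python's built-in sqlite3. The table, columns, and dates are invented for illustration; they are not from the posting.

```python
import sqlite3

# Hypothetical sessions table: one row per user per active day.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sessions (user_id INTEGER, day TEXT);
INSERT INTO sessions VALUES
  (1, '2024-01-01'), (1, '2024-01-02'),
  (2, '2024-01-01'),
  (3, '2024-01-02');
""")

# Day-1 retention: of users active on Jan 1, what share returned on Jan 2?
row = conn.execute("""
SELECT
  COUNT(DISTINCT d2.user_id) * 1.0 / COUNT(DISTINCT d1.user_id) AS retention
FROM sessions d1
LEFT JOIN sessions d2
  ON d1.user_id = d2.user_id AND d2.day = '2024-01-02'
WHERE d1.day = '2024-01-01'
""").fetchone()

print(round(row[0], 2))  # 0.5 -- user 1 returned, user 2 did not
```

The same shape of query scales to cohort-by-cohort retention in a BI tool by grouping on the first-seen date instead of hard-coding it.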
Posted 2 days ago
7.0 years
40 Lacs
India
Remote
Experience: 7.00+ years
Salary: INR 4,000,000 / year (based on experience)
Expected Notice Period: 15 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by MatchMove)
(*Note: This is a requirement for one of Uplers' clients - MatchMove)

What do you need for this opportunity?
Must-have skills: Gen AI, AWS data stack, Kinesis, open table formats, PySpark, stream processing, Kafka, MySQL, Python

MatchMove is looking for: Technical Lead - Data Platform

As Technical Lead - Data Platform, you will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight.

You will contribute to:
- Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services.
- Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark.
- Structuring and evolving data into open table formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services.
- Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases.
- Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment.
- Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM).
- Using Generative AI tools to enhance developer productivity, including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights.
- Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines.

Responsibilities:
- Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR.
- Manage ingestion from transactional sources using AWS DMS, with a focus on schema-drift handling and low-latency replication.
- Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3 and cataloged with Glue and Lake Formation.
- Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g., fraud engines) and human-readable interfaces (e.g., dashboards).
- Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations.
- Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership.
- Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform.

Requirements:
- At least 7 years of experience in data engineering.
- Deep hands-on experience with the AWS data stack: Glue (Jobs & Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum.
- Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs.
- Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation.
- Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale.
- Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions.
- Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments.
- Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains, with strong engineering hygiene.
- Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders.

Brownie Points:
- Experience working in a PCI DSS or other central-bank-regulated environment with audit logging and data retention requirements.
- Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection.
- Familiarity with data contracts, data mesh patterns, and data-as-a-product principles.
- Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases.
- Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3.
- Experience building data platforms for ML/AI teams or integrating with model feature stores.

Engagement Model: Direct placement with the client
This is a remote role.
Shift timings: 10 AM to 7 PM

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal apart from this one. Depending on the assessments you clear, you can apply for them as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
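The responsibilities above call out schema-drift handling for DMS-fed ingestion. As a minimal sketch of that idea in plain Python (a stand-in for the Glue/PySpark and Iceberg add-column mechanics the role actually involves), the snippet below additively evolves a column-to-type mapping and back-fills missing columns so downstream readers see a consistent row shape. All names here are hypothetical, not MatchMove's code.

```python
def evolve_schema(schema, record):
    """Additively evolve a column->type mapping when a record carries
    columns the current schema has not yet seen (schema drift)."""
    evolved = dict(schema)
    for col, value in record.items():
        if col not in evolved:
            evolved[col] = type(value).__name__  # register new column
    return evolved

def ingest(batch, schema):
    """Normalize each record against the (possibly evolving) schema,
    filling absent columns with None -- similar in spirit to Iceberg's
    additive column evolution."""
    for record in batch:
        schema = evolve_schema(schema, record)
    rows = [{col: rec.get(col) for col in schema} for rec in batch]
    return rows, schema

schema = {"txn_id": "int", "amount": "float"}
batch = [
    {"txn_id": 1, "amount": 10.5},
    {"txn_id": 2, "amount": 3.0, "currency": "SGD"},  # drifted column
]
rows, schema = ingest(batch, schema)
print(schema)   # {'txn_id': 'int', 'amount': 'float', 'currency': 'str'}
print(rows[0])  # {'txn_id': 1, 'amount': 10.5, 'currency': None}
```

In a real pipeline the evolved schema would be committed to the Glue catalog (or the Iceberg table metadata) before the batch is written, so replication stays low-latency without manual intervention.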
Posted 2 days ago
3.0 years
0 Lacs
India
On-site
Video Editor – Long-Form Celebrity & Entertainment News

About Us:
We’re a digital-first news agency covering trending celebrity and entertainment stories for a global audience. From long-form deep dives to fast-moving viral reels, we tell stories that matter and spark conversations. Our content reaches millions across YouTube, Instagram, Facebook, TikTok, and beyond.

Position Overview:
We’re looking for a Video Editor to join our social media team and take charge of editing up to 2 long-form videos per day (typically 2–10 minutes each). You'll work closely with a lead scriptwriter and content strategist to bring celebrity and entertainment news stories to life through sharp edits, strong pacing, and engaging visuals. This role is ideal for someone with a pulse on pop culture, who understands storytelling for digital platforms and thrives in a fast-paced, deadline-driven environment.

Key Responsibilities:
- Edit long-form videos (2–10 minutes) from raw footage, voiceovers, visuals, and stock/media assets
- Enhance storytelling with the right pacing, transitions, background music, captions, and motion graphics
- Source relevant b-roll, images, and social media clips to enrich the narrative
- Maintain consistent brand style, tone, and formatting across videos
- Work closely with the scriptwriter to interpret the vision and optimize for viewer retention
- Export and deliver videos optimized for YouTube, Facebook, and other long-form platforms
- Maintain an efficient editing workflow to meet daily deadlines (up to 2 videos per day)
- Stay updated on editing trends and formats used by top creators in the entertainment space

Requirements:
- 1–3 years of experience in video editing (digital news, YouTube, or entertainment preferred)
- Proficiency with editing software (Adobe Premiere Pro, Final Cut Pro, or similar)
- Strong understanding of pacing, narrative flow, and visual storytelling
- Basic knowledge of motion graphics, text animation, and sound design
- Ability to work fast without sacrificing quality
- Familiarity with copyright-safe content sourcing (stock media, UGC clips, fair use)
- Excellent time management and communication skills

Bonus Points:
- Experience editing celebrity/entertainment content or documentaries
- Knowledge of YouTube best practices (thumbnails, metadata, retention hooks)
- Comfort working with templates or style guides for faster production
- Passion for pop culture, social media trends, and digital storytelling

Shift Timings: 10 am–6 pm, 6 days per week
CTC: 2.4–5 LPA
Posted 2 days ago
0.0 - 5.0 years
0 Lacs
Pune, Maharashtra
On-site
Company Description
Metro Global Solution Center (MGSC) is the internal solution partner for METRO, a €31 billion international wholesaler with operations in more than 30 countries. The store network comprises a total of 623 stores in 21 countries, of which 522 offer out-of-store delivery (OOS), plus 94 dedicated depots. In 12 countries, METRO runs only the delivery business through its delivery companies (Food Service Distribution, FSD). HoReCa and Traders are METRO's core customer groups. The HoReCa section includes hotels, restaurants, and catering companies as well as bars, cafés, and canteen operators. The Traders section includes small grocery stores and kiosks. The majority of all customer groups are small and medium-sized enterprises as well as sole traders. METRO helps them manage their business challenges more effectively.

MGSC is present in Pune (India), Düsseldorf (Germany), and Szczecin (Poland). We provide HR, Finance, IT & Business operations support to 31 countries, speak 24+ languages, and process over 18,000 transactions a day. We are setting tomorrow’s standards for customer focus, digital solutions, and sustainable business models. For over 10 years, we have been providing services and solutions from our two locations in Pune and Szczecin. This has allowed us to gain extensive experience in how we can best serve our internal customers with high quality and passion. We believe that we can add value, drive efficiency, and satisfy our customers.

Website: https://www.metro-gsc.in
Company Size: 500-1000
Headquarters: Pune, Maharashtra, India
Type: Privately Held
Inception: 2011

Job Description

1. Role Overview
The IT Lead will be responsible for scaling our offshore technology hub in Pune, India, and possibly additional locations in India. This role is critical to our global IT strategy and will oversee the recruitment, operational setup, and delivery of services primarily across software engineering, DevOps, infrastructure, and IT support.
The successful candidate will be a seasoned technology leader with experience in building high-performing offshore organizations and integrating them into a global, platform-based delivery model.

2. Key Responsibilities

A. Strategic Leadership
- Develop and execute a 3–5 year strategic plan for the Pune IT hub, aligned with global IT and business objectives, building on the strong foundation provided by METRO GSC as the hosting entity.
- Define the hub’s operating model, service portfolio, and integration strategy with global teams.
- Represent the Pune hub in global leadership forums and ensure alignment with enterprise priorities and key METRO stakeholders.
- Actively drive the identification of IT offshoring opportunities throughout METRO and plan their transition to the Pune hub.

B. Hub Setup & Governance
- Lead the physical and digital setup of the hub, including office space, IT infrastructure, security, and connectivity, building on the existing foundation provided by METRO GSC and in alignment with Group IT (METRO.digital) practices.
- Establish legal, compliance, and operational frameworks in collaboration with HR, Legal, and Finance, likewise building on the METRO GSC foundation and aligned with Group IT (METRO.digital) practices.
- Define and implement governance structures for delivery, risk management, and reporting in alignment with the existing foundation provided by METRO GSC.

C. Talent Acquisition & Organizational Development
- Build a scalable organizational structure to support rapid growth from 0 to 100 employees within a year, doubling in the mid-term, covering both newly established positions and replacements for existing staff coming from Group IT (METRO.digital) or the country IT functions.
Ensure talent acquisition and hiring practices that are efficient, quick, and effective, supporting rapid growth and securing the most suitable candidates in the market for METRO.
Engage directly in hiring senior and key personnel, especially in the early stages of the hub's development.
Ensure high retention of talent through targeted measures and best practices.
Develop onboarding, training, and career development programs to build a strong engineering culture in alignment with Group IT practices.
Ensure a quick ramp-up and the attainment of critical mass for local resources.

D. Delivery & Operational Excellence
Oversee on-scope, on-quality delivery of off-shored IT services while acting with "wholesale-typical" efficiency, including a cost-leadership mindset.
Implement market-leading and Group IT best practices (e.g., Agile) to ensure high-quality, scalable, and secure operations.
Define and secure KPIs and SLAs for delivery, quality, uptime, and customer satisfaction.
Ensure strong alignment and integration with any existing IT delivery teams and resources within the GSC organization.

E. Stakeholder Engagement
Act as the primary point of contact for global IT and business stakeholders regarding the Pune hub.
Build strong relationships with internal customers, vendors, and local partners.
Actively promote the IT hub throughout the organization, ensuring high acceptance and up-skilling METRO staff to engage effectively with the hub.

Qualifications

Education
Bachelor's or Master's degree in Computer Science, Engineering, Information Technology, or a related field.

Experience
15+ years of managing large-scale offshore organizations (must-have); ideally 5+ years of consulting experience.
Proven track record of building and scaling technology teams (200+ people).
Experience in enterprise IT environments with global delivery models.

Technical Expertise
Strong understanding of modern software development (Agile, CI/CD, microservices).
Experience with cloud platforms (AWS, Azure, GCP), DevOps toolchains, and ITSM systems.
Familiarity with cybersecurity, data privacy, and compliance frameworks.

Additional Information

Organizational Context
The Pune IT hub will be operated as part of the Group IT (METRO.digital) organization but hosted by METRO GSC, and will therefore share infrastructure, services, and some governance elements with the already established entity.

WHAT DO WE HAVE FOR YOU
A career-transforming role that can step up your game
A true immersion into the leadership space, with a true 'head, heart & hand' approach
Diverse yet cohesive global teams to learn from
A true sense of purpose in what you do
A learning culture where we learn each day
A place where you can truly belong: we are inclusive and warm
The ability to define your hybrid work structure, split between home and office
Posted 2 days ago
7.0 years
40 Lacs
Indore, Madhya Pradesh, India
Remote
Experience: 7.00+ years
Salary: INR 4000000.00 / year (based on experience)
Expected Notice Period: 15 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by MatchMove)
(Note: This is a requirement for one of Uplers' clients - MatchMove)

What do you need for this opportunity?
Must-have skills: Gen AI, AWS data stack, Kinesis, open table format, PySpark, stream processing, Kafka, MySQL, Python

MatchMove is looking for: Technical Lead - Data Platform

As Technical Lead - Data Platform, you will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight.

You will contribute to:
Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services.
Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark.
Structuring and evolving data into open table formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services.
Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases.
Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment.
Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM).
Using Generative AI tools to enhance developer productivity, including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights.
Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines.

Responsibilities:
Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR.
Manage ingestion from transactional sources using AWS DMS, with a focus on schema drift handling and low-latency replication.
Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3 and cataloged with Glue and Lake Formation.
Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g., fraud engines) and human-readable interfaces (e.g., dashboards).
Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations.
Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership.
Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform.

Requirements
At least 7 years of experience in data engineering.
Deep hands-on experience with the AWS data stack: Glue (Jobs & Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum.
Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs.
Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation.
Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale.
Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions.
Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments.
Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains, with strong engineering hygiene.
Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders.

Brownie Points:
Experience working in a PCI DSS or other central-bank-regulated environment with audit logging and data retention requirements.
Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection.
Familiarity with data contracts, data mesh patterns, and data-as-a-product principles.
Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases.
Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3.
Experience building data platforms for ML/AI teams or integrating with model feature stores.

Engagement Model: Direct placement with the client
This is a remote role.
Shift timings: 10 AM to 7 PM

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
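The schema-drift handling called out in the responsibilities above can be illustrated with a minimal, hypothetical sketch in plain Python: comparing an expected column-to-type map against the columns actually observed from a replicated source. This is illustrative only, not MatchMove's implementation; a real pipeline would compare Glue Data Catalog table versions or DMS control-table metadata.

```python
def detect_schema_drift(expected, observed):
    """Compare expected vs. observed column->type maps and classify drift."""
    added = {c: t for c, t in observed.items() if c not in expected}
    removed = {c: t for c, t in expected.items() if c not in observed}
    type_changed = {
        c: (expected[c], observed[c])
        for c in expected.keys() & observed.keys()
        if expected[c] != observed[c]
    }
    return {"added": added, "removed": removed, "type_changed": type_changed}

# Hypothetical example: a new "channel" column appeared upstream,
# and "amount" widened its precision.
expected = {"txn_id": "string", "amount": "decimal(18,2)", "created_at": "timestamp"}
observed = {"txn_id": "string", "amount": "decimal(20,4)",
            "created_at": "timestamp", "channel": "string"}
drift = detect_schema_drift(expected, observed)
print(drift["added"])         # {'channel': 'string'}
print(drift["type_changed"])  # {'amount': ('decimal(18,2)', 'decimal(20,4)')}
```

A classification like this lets a pipeline decide whether to auto-evolve the target table (additive columns are usually safe in Iceberg/Delta) or quarantine the batch for review (type changes and drops typically need human sign-off).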
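The SLA/SLO tracking the role describes can be sketched as a simple data-freshness check: given the timestamp of a table's latest commit, decide whether a freshness SLO is met. The 30-minute threshold and function names are hypothetical, chosen for illustration; production systems would emit this as a metric to an alerting stack rather than return a dict.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical SLO: curated tables must be refreshed within 30 minutes.
FRESHNESS_SLO = timedelta(minutes=30)

def check_freshness(last_commit_utc, now):
    """Evaluate one pipeline's latest commit against the freshness SLO."""
    lag = now - last_commit_utc
    return {
        "lag_minutes": round(lag.total_seconds() / 60, 1),
        "slo_met": lag <= FRESHNESS_SLO,
    }

now = datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc)
fresh = check_freshness(now - timedelta(minutes=12), now)  # updated 12 min ago
stale = check_freshness(now - timedelta(minutes=45), now)  # updated 45 min ago
print(fresh)  # {'lag_minutes': 12.0, 'slo_met': True}
print(stale)  # {'lag_minutes': 45.0, 'slo_met': False}
```

Keeping the check pure (time passed in, no clock reads inside) makes the SLO logic trivially testable, which matters when breaches trigger paging in a regulated fintech environment.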
Posted 2 days ago
7.0 years
40 Lacs
Dehradun, Uttarakhand, India
Remote
Experience : 7.00 + years Salary : INR 4000000.00 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: MatchMove) (*Note: This is a requirement for one of Uplers' client - MatchMove) What do you need for this opportunity? Must have skills required: Gen AI, AWS data stack, Kinesis, open table format, Pyspark, stream processing, Kafka, MySQL, Python MatchMove is Looking for: Technical Lead - Data Platform Data, you will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight. You will contribute to Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services. Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark. Structuring and evolving data into OTF formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services. Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases. Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment. Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM). 
Using Generative AI tools to enhance developer productivity — including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights. Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines. Responsibilities:: Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR. Manage ingestion from transactional sources using AWS DMS, with a focus on schema drift handling and low-latency replication. Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3, and cataloged with Glue and Lake Formation. Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g. fraud engines) and human-readable interfaces (e.g. dashboards). Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations. Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership. Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform. Requirements At-least 7 years of experience in data engineering. Deep hands-on experience with AWS data stack: Glue (Jobs & Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs. Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation. Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions. 
Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments. Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains — with strong engineering hygiene. Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders. Brownie Points:: Experience working in a PCI DSS or any other central bank regulated environment with audit logging and data retention requirements. Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection. Familiarity with data contracts, data mesh patterns, and data as a product principles. Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases. Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3. Experience building data platforms for ML/AI teams or integrating with model feature stores. Engagement Model: : Direct placement with client This is remote role Shift timings ::10 AM to 7 PM How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you! 
Posted 2 days ago
7.0 years
40 Lacs
Vijayawada, Andhra Pradesh, India
Remote
Experience: 7.00+ years
Salary: INR 4,000,000 per year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by MatchMove)
(Note: This is a requirement for one of Uplers' clients - MatchMove)

What do you need for this opportunity?
Must-have skills: Gen AI, AWS data stack, Kinesis, open table formats, PySpark, stream processing, Kafka, MySQL, Python

MatchMove is looking for: Technical Lead - Data Platform

You will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight.

You will contribute to:
- Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services.
- Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark.
- Structuring and evolving data into open table formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services.
- Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases.
- Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment.
- Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM).
Using Generative AI tools to enhance developer productivity, including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights. Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines.

Responsibilities:
- Architect scalable, cost-optimized pipelines across real-time and batch paradigms using tools such as AWS Glue, Step Functions, Airflow, or EMR.
- Manage ingestion from transactional sources using AWS DMS, with a focus on schema-drift handling and low-latency replication.
- Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3 and cataloged with Glue and Lake Formation.
- Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g., fraud engines) and human-readable interfaces (e.g., dashboards).
- Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations.
- Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership.
- Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform.

Requirements:
- At least 7 years of experience in data engineering.
- Deep hands-on experience with the AWS data stack: Glue (Jobs & Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum.
- Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs.
- Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation.
- Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale.
- Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions.
- Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments.
- Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains, with strong engineering hygiene.
- Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders.

Brownie Points:
- Experience working in a PCI DSS or other central-bank-regulated environment with audit logging and data retention requirements.
- Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection.
- Familiarity with data contracts, data mesh patterns, and data-as-a-product principles.
- Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases.
- Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3.
- Experience building data platforms for ML/AI teams or integrating with model feature stores.

Engagement Model: Direct placement with client
This is a remote role.
Shift timings: 10 AM to 7 PM

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role is to help talent find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for those as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
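The data-observability responsibility in this listing refers to expectation-style checks of the kind tools like Great Expectations provide. A hand-rolled sketch makes the idea concrete; the check names, column names, and sample batch below are illustrative assumptions, not any client's actual suite:

```python
# Illustrative batch-quality checks (stand-in for a tool like Great
# Expectations): null checks on required columns, key uniqueness, and
# value-set membership, returning human-readable failure messages.

def check_batch(rows, key, required, allowed_status):
    """Validate a batch of dict rows; return a list of failure messages."""
    failures = []
    seen_keys = set()
    for i, row in enumerate(rows):
        for col in required:
            if row.get(col) is None:
                failures.append(f"row {i}: null in required column '{col}'")
        k = row.get(key)
        if k in seen_keys:
            failures.append(f"row {i}: duplicate key {k!r}")
        seen_keys.add(k)
        if row.get("status") not in allowed_status:
            failures.append(f"row {i}: unexpected status {row.get('status')!r}")
    return failures

batch = [
    {"txn_id": 1, "amount": 100, "status": "SETTLED"},
    {"txn_id": 2, "amount": None, "status": "PENDING"},
    {"txn_id": 2, "amount": 50, "status": "REFUNDED"},
]
failures = check_batch(batch, key="txn_id",
                       required=["txn_id", "amount"],
                       allowed_status={"SETTLED", "PENDING"})
for f in failures:
    print(f)
# row 1: null in required column 'amount'
# row 2: duplicate key 2
# row 2: unexpected status 'REFUNDED'
```

In a real pipeline, a non-empty failure list would typically fail the pipeline run and feed the alerting and lineage systems mentioned above rather than just printing.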
Posted 2 days ago
7.0 years
40 Lacs
Mysore, Karnataka, India
Remote
Experience: 7.00+ years
Salary: INR 4,000,000 per year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by MatchMove)
(Note: This is a requirement for one of Uplers' clients - MatchMove)

What do you need for this opportunity?
Must-have skills: Gen AI, AWS data stack, Kinesis, open table formats, PySpark, stream processing, Kafka, MySQL, Python

MatchMove is looking for: Technical Lead - Data Platform

You will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight.

You will contribute to:
- Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services.
- Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark.
- Structuring and evolving data into open table formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services.
- Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases.
- Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment.
- Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM).
Using Generative AI tools to enhance developer productivity, including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights. Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines.

Responsibilities:
- Architect scalable, cost-optimized pipelines across real-time and batch paradigms using tools such as AWS Glue, Step Functions, Airflow, or EMR.
- Manage ingestion from transactional sources using AWS DMS, with a focus on schema-drift handling and low-latency replication.
- Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3 and cataloged with Glue and Lake Formation.
- Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g., fraud engines) and human-readable interfaces (e.g., dashboards).
- Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations.
- Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership.
- Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform.

Requirements:
- At least 7 years of experience in data engineering.
- Deep hands-on experience with the AWS data stack: Glue (Jobs & Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum.
- Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs.
- Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation.
- Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale.
- Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions.
- Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments.
- Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains, with strong engineering hygiene.
- Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders.

Brownie Points:
- Experience working in a PCI DSS or other central-bank-regulated environment with audit logging and data retention requirements.
- Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection.
- Familiarity with data contracts, data mesh patterns, and data-as-a-product principles.
- Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases.
- Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3.
- Experience building data platforms for ML/AI teams or integrating with model feature stores.

Engagement Model: Direct placement with client
This is a remote role.
Shift timings: 10 AM to 7 PM

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role is to help talent find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for those as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
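The partitioning strategy this listing asks about usually comes down to choosing Hive-style key=value prefixes in S3 so query engines such as Athena or Spark can prune files by partition. A small sketch, where the bucket name, table name, and partition keys are all hypothetical:

```python
# Illustrative Hive-style partition layout for transaction data in S3.
# Partitioning by event date and country lets engines skip whole prefixes
# when a query filters on those columns (partition pruning).
from datetime import datetime

def partition_path(base: str, table: str, event_time: datetime, country: str) -> str:
    """Build an S3 key prefix with Hive-style key=value partition directories."""
    return (
        f"{base}/{table}/"
        f"dt={event_time:%Y-%m-%d}/country={country.upper()}/"
    )

p = partition_path(
    "s3://example-datalake/curated",   # hypothetical bucket/prefix
    "transactions",
    datetime(2025, 3, 14, 9, 30),
    "sg",
)
print(p)  # s3://example-datalake/curated/transactions/dt=2025-03-14/country=SG/
```

With table formats like Iceberg the layout is managed by the table metadata rather than hand-built paths, but the design question is the same: pick partition keys that match the dominant query filters without creating many tiny files.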
Posted 2 days ago
7.0 years
40 Lacs
Patna, Bihar, India
Remote
Experience: 7.00+ years
Salary: INR 4,000,000 per year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by MatchMove)
(Note: This is a requirement for one of Uplers' clients - MatchMove)

What do you need for this opportunity?
Must-have skills: Gen AI, AWS data stack, Kinesis, open table formats, PySpark, stream processing, Kafka, MySQL, Python

MatchMove is looking for: Technical Lead - Data Platform

You will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight.

You will contribute to:
- Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services.
- Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark.
- Structuring and evolving data into open table formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services.
- Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases.
- Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment.
- Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM).
Using Generative AI tools to enhance developer productivity, including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights. Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines.

Responsibilities:
- Architect scalable, cost-optimized pipelines across real-time and batch paradigms using tools such as AWS Glue, Step Functions, Airflow, or EMR.
- Manage ingestion from transactional sources using AWS DMS, with a focus on schema-drift handling and low-latency replication.
- Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3 and cataloged with Glue and Lake Formation.
- Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g., fraud engines) and human-readable interfaces (e.g., dashboards).
- Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations.
- Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership.
- Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform.

Requirements:
- At least 7 years of experience in data engineering.
- Deep hands-on experience with the AWS data stack: Glue (Jobs & Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum.
- Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs.
- Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation.
- Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale.
- Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions.
- Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments.
- Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains, with strong engineering hygiene.
- Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders.

Brownie Points:
- Experience working in a PCI DSS or other central-bank-regulated environment with audit logging and data retention requirements.
- Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection.
- Familiarity with data contracts, data mesh patterns, and data-as-a-product principles.
- Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases.
- Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3.
- Experience building data platforms for ML/AI teams or integrating with model feature stores.

Engagement Model: Direct placement with client
This is a remote role.
Shift timings: 10 AM to 7 PM

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role is to help talent find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for those as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 2 days ago
0 years
0 Lacs
Patel Nagar, Delhi, India
Remote
The landscape of work has transformed dramatically in recent years, with remote work becoming a cornerstone of modern employment. As of 2025, many U.S. companies are embracing remote and hybrid work models, offering professionals the flexibility to work from home or anywhere else. This shift has been driven by advancements in technology, evolving employee expectations, and the proven benefits of remote work, such as increased productivity and better work-life balance. If you're a job seeker looking for remote opportunities, this guide highlights top companies hiring for remote roles in the U.S., key industries offering these positions, and actionable tips to land your dream remote job. Below, we explore the current remote work scene, spotlight companies actively hiring, and provide insights to help you navigate the job market.

Why Remote Work Continues to Thrive in 2025

Remote work is no longer a temporary trend but a permanent fixture in the U.S. job market. According to recent data, 9% of U.S. jobs on platforms like LinkedIn are fully remote, yet they attract nearly 40% of applications, underscoring the high demand for these roles. Companies are adopting remote work to reduce overhead costs, access a global talent pool, and meet employee preferences for flexibility. Industries such as technology, healthcare, finance, and customer service are leading the charge, offering roles that range from entry-level to senior positions. Remote work also supports digital nomadism, allowing professionals to work from anywhere in the U.S. without geographic restrictions, provided they meet legal and time zone requirements.

Benefits of Remote Work for Employees
- Flexibility: Work from home or any location, balancing personal and professional commitments.
- Cost Savings: Eliminate commuting expenses and reduce costs for professional attire or meals.
- Improved Productivity: Create a personalized work environment to enhance focus and efficiency.
- Work-Life Balance: Spend more time with family or pursue hobbies without long commutes.

Benefits for Employers
- Wider Talent Pool: Hire skilled professionals regardless of their location in the U.S.
- Reduced Overhead: Save on office space, utilities, and other operational costs.
- Higher Retention: Flexible work arrangements boost employee satisfaction and loyalty.
- Increased Productivity: Studies show remote workers often outperform office-based employees in certain roles.

Top Industries Hiring for Remote Work in 2025

Several industries are at the forefront of the remote work revolution, offering diverse opportunities for job seekers. Here's a look at the key sectors driving remote hiring in the U.S.:
- Technology: Software development, data science, and IT roles dominate remote job listings, with companies leveraging tools like Slack, Zoom, and GitHub for collaboration.
- Healthcare: Telehealth, medical billing, and administrative roles are increasingly remote, with organizations like CVS Health leading the way.
- Finance: Financial analysts, accountants, and customer service roles in banking and insurance are shifting to remote settings.
- Customer Service: Call center and support roles are highly flexible, with companies like UnitedHealth Group hiring remote representatives.
- Marketing and Content Creation: Content writers, social media managers, and digital marketers thrive in remote environments, supported by platforms like HubSpot and Buffer.
- Education: Online tutoring, instructional design, and administrative roles in education are growing, with companies like Stride, Inc. offering remote positions.

Also Read: Top Remote Call Center Jobs Hiring Now – Work From Home Guide

Top Companies Hiring for Remote Work in the U.S.

Based on recent job market analyses and posts on platforms like X, the following companies are actively hiring for remote roles in 2025. These employers span various industries and offer a mix of full-time, part-time, and contract positions.
Below is a curated list of 20 companies, their remote work policies, and the types of roles they're hiring for:

Zapier
- Industry: Technology
- Remote Policy: Fully remote, work-from-anywhere model with no geographic restrictions.
- Open Roles: Software engineers, customer support specialists, and marketing managers.
- Benefits: Home office budget, flexible PTO, and equity options.
- Why Join: Zapier's asynchronous work culture supports flexibility and autonomy, ideal for digital nomads.

GitLab
- Industry: Technology
- Remote Policy: Remote-first, with employees in over 65 countries.
- Open Roles: DevOps engineers, product managers, and data analysts.
- Benefits: Transparent culture, stock options, and learning stipends.
- Why Join: GitLab's handbook-driven approach ensures clarity and inclusivity for remote workers.

CVS Health
- Industry: Healthcare
- Remote Policy: Offers hybrid and fully remote roles, with a focus on telehealth and customer service.
- Open Roles: Prevention coordinators, customer service representatives, and IT specialists.
- Benefits: Comprehensive health insurance, wellness programs, and tuition reimbursement.
- Why Join: CVS Health combines healthcare impact with flexible work arrangements.

UnitedHealth Group
- Industry: Healthcare
- Remote Policy: Hybrid and fully remote options, depending on the role.
- Open Roles: Medical coders, customer service agents, and data analysts.
- Benefits: Competitive pay, parental leave, and employee assistance programs.
- Why Join: UnitedHealth Group is a leader in remote healthcare roles with robust benefits.

Buffer
- Industry: Marketing/Social Media
- Remote Policy: Fully remote with a focus on asynchronous communication.
- Open Roles: Social media managers, content writers, and customer advocates.
- Benefits: Four-day workweeks, free Kindle with books, and transparent salaries.
- Why Join: Buffer's open culture and emphasis on work-life balance make it a top choice.

Affirm
- Industry: Financial Services
- Remote Policy: Remote-first, with optional office spaces for hybrid work.
- Open Roles: Financial analysts, software engineers, and customer success managers.
- Benefits: No late fees for employees, wellness programs, and stock options.
- Why Join: Affirm's mission-driven approach resonates with professionals seeking impactful roles.

Atlassian
- Industry: Technology
- Remote Policy: Team Anywhere model, allowing work from any U.S. location.
- Open Roles: Software developers, UX designers, and marketing specialists.
- Benefits: Flexible PTO, team offsites, and professional development programs.
- Why Join: Atlassian's tools like Jira and Confluence are built for remote collaboration.

HubSpot
- Industry: Marketing/Sales
- Remote Policy: Hybrid model with significant remote flexibility.
- Open Roles: Inbound marketing specialists, sales representatives, and content strategists.
- Benefits: Paid parental leave, remote work stipends, and employee resource groups.
- Why Join: HubSpot's focus on employee growth makes it ideal for career-driven professionals.

Reddit
- Industry: Social Media
- Remote Policy: Flexible model allowing permanent remote work or hybrid options.
- Open Roles: Community managers, software engineers, and data scientists.
- Benefits: Flexible work locations, wellness stipends, and inclusive culture.
- Why Join: Reddit's community-driven platform offers creative and engaging remote roles.

Intuit
- Industry: Financial Services
- Remote Policy: Remote-friendly with roles in multiple U.S. states.
- Open Roles: Accountants, customer support specialists, and UX designers.
- Benefits: Pay equity, fertility benefits, and volunteer time off.
- Why Join: Intuit's products like TurboTax make it a trusted name in remote finance roles.

Dropbox
- Industry: Technology
- Remote Policy: Fully remote with a focus on digital-first collaboration.
- Open Roles: Cloud engineers, product managers, and marketing specialists.
- Benefits: Flexible PTO, wellness reimbursements, and remote work stipends.
- Why Join: Dropbox's innovative tools support seamless remote work.

Shopify
- Industry: eCommerce
- Remote Policy: Digital by Design, fully remote across the U.S.
- Open Roles: eCommerce specialists, software developers, and customer support agents.
- Benefits: Stock options, learning budgets, and health insurance.
- Why Join: Shopify empowers entrepreneurs, offering dynamic remote opportunities.

Spotify
- Industry: Entertainment/Technology
- Remote Policy: Work-from-anywhere model with time zone flexibility.
- Open Roles: Audio engineers, marketing managers, and data analysts.
- Benefits: Parental leave, wellness allowances, and learning programs.
- Why Join: Spotify's creative culture appeals to tech and music enthusiasts.

Pinterest
- Industry: Social Media
- Remote Policy: PinFlex model, allowing remote or hybrid work.
- Open Roles: Content curators, software engineers, and ad specialists.
- Benefits: Flexible work locations, health insurance, and employee resource groups.
- Why Join: Pinterest's visual platform offers unique remote marketing roles.

Okta
- Industry: Technology/Cybersecurity
- Remote Policy: Dynamic Work model, fully remote with office access.
- Open Roles: Cybersecurity analysts, software engineers, and sales representatives.
- Benefits: Stock options, wellness programs, and professional development.
- Why Join: Okta's focus on security makes it a leader in remote tech roles.

Coursera
- Industry: Education
- Remote Policy: Remote-friendly with global reach.
- Open Roles: Instructional designers, content developers, and customer success managers.
- Benefits: Learning stipends, flexible PTO, and health insurance.
- Why Join: Coursera's mission to transform education aligns with remote learning trends.

Zillow
- Industry: Real Estate/Technology
- Remote Policy: Hybrid and fully remote options.
- Open Roles: Data analysts, software developers, and customer support specialists.
- Benefits: Home office stipends, wellness programs, and equity awards.
- Why Join: Zillow's innovative approach to real estate offers diverse remote roles.

Slack
- Industry: Technology
- Remote Policy: Remote-first with asynchronous collaboration.
- Open Roles: Software engineers, customer experience specialists, and product managers.
- Benefits: Flexible PTO, wellness reimbursements, and team offsites.
- Why Join: Slack's communication tools are built for remote team success.

NVIDIA
- Industry: Technology/AI
- Remote Policy: Remote-friendly with roles across the U.S.
- Open Roles: AI researchers, software engineers, and sales specialists.
- Benefits: Tuition reimbursement, parental leave, and health insurance.
- Why Join: NVIDIA's cutting-edge AI work offers exciting remote opportunities.

Twilio
- Industry: Technology/Communications
- Remote Policy: Remote-first with global flexibility.
- Open Roles: Cloud engineers, customer success managers, and marketing specialists.
- Benefits: Stock options, wellness stipends, and learning programs.
- Why Join: Twilio's communication solutions power remote collaboration.

Note: Job openings and remote policies may vary. Check company career pages or platforms like FlexJobs, We Work Remotely, or LinkedIn for the latest listings.

Also Read: Remote Pharmacy Technician Jobs: Work From Home Roles You Can Apply For

How to Find Remote Jobs in 2025

Securing a remote job requires a strategic approach, especially in a competitive market. Here are actionable tips to help you stand out:
- Use Specialized Job Boards: Platforms like FlexJobs, We Work Remotely, and Remote.co curate verified remote job listings, reducing the risk of scams. Set up alerts for roles matching your skills and preferences.
- Tailor Your Application: Highlight remote work skills like self-motivation, time management, and proficiency with tools like Zoom, Slack, or Asana. Customize your resume and cover letter with keywords from job descriptions.
- Leverage Networking: Connect with professionals on LinkedIn or join remote work communities to discover hidden opportunities. Attend virtual industry events to meet hiring managers directly.
- Showcase Remote Readiness: Demonstrate familiarity with remote tools and asynchronous communication. Include past remote work experience or transferable skills in your portfolio.
- Research Company Policies: Understand each company's remote work policy (e.g., fully remote vs. hybrid) to ensure alignment with your needs. Check for time zone or state-specific requirements.
- Upskill for In-Demand Roles: Learn tools like Adobe Creative Suite for design roles or Python for tech positions. Pursue certifications in project management, digital marketing, or data analysis to boost your qualifications.

Challenges of Remote Work and How to Overcome Them

While remote work offers numerous benefits, it comes with challenges. Here's how to address common hurdles:
- Isolation: Combat loneliness by joining virtual coworking spaces or scheduling regular check-ins with colleagues.
- Distractions: Create a dedicated workspace and set boundaries with family or housemates.
- Communication Gaps: Use tools like Slack or Microsoft Teams for clear, asynchronous communication.
- Career Growth: Seek companies with transparent promotion paths and invest in continuous learning to stay competitive.

Conclusion

Remote work continues to redefine the U.S. job market in 2025, offering unparalleled flexibility and opportunities across industries like technology, healthcare, finance, and education. Companies like Zapier, GitLab, CVS Health, and Buffer are leading the way, hiring for diverse roles with competitive benefits. By leveraging specialized job boards, tailoring your applications, and upskilling, you can secure a remote job that aligns with your career goals and lifestyle. Whether you're a seasoned professional or new to the workforce, the remote job market is brimming with possibilities.
Start your search today and take the first step toward a flexible, fulfilling career. Frequently Asked Questions (FAQs) – Remote Work Hiring Now Which industries offer the most remote jobs in 2025? Technology, healthcare, finance, customer service, and marketing are among the top industries offering remote roles, with positions like software developers, telehealth professionals, and content writers in high demand. How can I find legitimate remote jobs? Use trusted platforms like FlexJobs, We Work Remotely, or LinkedIn to find verified listings. Avoid generic job boards to reduce the risk of scams. What skills are essential for remote work? Key skills include self-motivation, time management, communication, and proficiency with remote tools like Zoom, Slack, and project management software. Are remote jobs available for entry-level candidates? Yes, roles like virtual assistants, customer service representatives, and content writers are ideal for beginners and often require minimal experience. Do remote jobs pay as well as in-office roles? Many remote jobs, especially in tech and finance, offer competitive salaries, sometimes exceeding industry averages due to cost savings on office spaces. What are the benefits of working for a remote-first company? Remote-first companies offer flexibility, asynchronous work options, and benefits like home office stipends, wellness programs, and learning budgets. Can I work remotely from anywhere in the U.S.? Some companies allow work from anywhere, while others have state-specific requirements due to tax or legal regulations. Check job descriptions for details. How do I stand out when applying for remote jobs? Tailor your resume to highlight remote work skills, showcase measurable achievements, and demonstrate familiarity with collaboration tools in your cover letter. Are there part-time remote jobs available? Yes, companies like Amplify Education and Yoko Co. 
offer part-time remote roles in fields like education, customer service, and content creation.

What tools should I learn for remote work?
Familiarize yourself with tools like Slack, Zoom, Trello, Asana, and Google Workspace to enhance your remote work efficiency and appeal to employers.
Posted 2 days ago
0 years
0 Lacs
Delhi, India
On-site
Work Level: Senior Leadership
Core: Team Player, Result Driven, Communication Skills, Self Motivated
Leadership: Purpose Driven, Delivering Results
Industry Type: Banking
Function: Regional Manager
Key Skills: SME Banking, Working Capital, Team Leader, SME, Team Handling, CC/OD, SME Loans, Term Loan, Teamwork
Education: PG/Master

Note: This is a requirement for one of the Workassist Hiring Partners.

Job Role:
Leading a team of RMs to achieve budgeted business levels
Initial scrutiny of loan proposals, ensuring they are in line with the underwriting guidelines
Monitoring performance versus desired results for the month/quarter and ensuring corrective actions where necessary
Recruiting, hiring, and training new employees, and retaining performing team members
Developing and managing alternate sales channels in the catchment area
Ensuring productivity of team members by bringing a daily rhythm to sales activities
Ensuring compliance with regulatory and internal guidelines within the team
Running local campaigns and events to ensure visibility of the bank in the business segment
Serving as the key contact person for preferred customers of the location
Ensuring quality of loans acquired and acting as the first line of defence on asset quality

Company Description
Workassist is an online recruitment and employment solution platform based in Lucknow, India. We provide relevant profiles to employers and connect job seekers with the best opportunities across various industries. With a network of over 10,000 recruiters, we help employers recruit talented individuals from sectors such as Banking & Finance, Consulting, Sales & Marketing, HR, IT, Operations, and Legal. We have adapted to the new normal and strive to provide a seamless job search experience for job seekers worldwide. Our goal is to enhance the job seeking experience by leveraging technology and matching job seekers with the right employers.
For a seamless job search experience, visit our website: https://bit.ly/3QBfBU2 (Note: There are many more opportunities apart from this on the portal. Depending on the skills, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
12.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Position Title: Operations and Compliance
Designation: AVP / DVP

Objective: The job holder will be responsible for ensuring regulatory compliance and enhancing operational efficiency to support the company's strategic goals. Act as a key point of contact for directors and stakeholders on governance matters. This position will align daily business practices with strategic goals, manage regulatory requirements effectively, and maintain a robust operational framework that supports the company's growth.

Key Responsibilities:
1) Develop, implement, and maintain robust compliance frameworks in line with applicable laws and regulations (e.g., SEBI, RBI, Companies Act).
2) Set up processes for customer onboarding (KYC), depository, clearing and settlement of trades, and implement policies and procedures for risk management (market and operational), surveillance, trade settlements, and reconciliations.
3) Monitor regulatory changes and advise management on the implications and necessary adjustments.
4) Ensure process controls and adherence to the guidelines of regulatory bodies governing the business segment.
5) Monitor performance metrics and prepare reports to ensure operational goals are met.
6) Lead initiatives to adopt technology and automation for operational improvements.
7) Conduct periodic compliance audits and risk assessments to ensure adherence to legal requirements. Ensure timely and accurate filings with regulatory authorities.
8) Oversee internal audits, prepare for regulatory audits, and maintain clear documentation and reporting processes.
9) Organize and manage board and committee meetings, including scheduling and preparation of agendas, notices, and minutes. Maintain statutory registers and records as required under the Companies Act.
10) Collaborate with leadership to build an environment of collective responsibility and accountability. Manage multiple relationships with banks, financial institutions, etc.
11) Ensure timely filing of annual returns, resolutions, and other statutory documents with the Ministry of Corporate Affairs (MCA).
12) Provide legal support on corporate and compliance matters, including drafting and reviewing policies, contracts, and agreements. Offer strategic advice to management on legal risks and implications.
13) Establish internal controls to mitigate compliance and regulatory risks. Investigate and address compliance breaches, recommending corrective actions.
14) Work closely with internal teams such as Finance, HR, and Operations to ensure regulatory alignment. Liaise with auditors, consultants, and regulators on behalf of the organization.
15) Provide regular financial and operational updates to the executive team and stakeholders, demonstrating alignment with the organization’s financial goals.
16) Develop and manage budgets for operations, ensuring cost efficiency without compromising quality.

Qualification: Graduate in Law (LLB) or equivalent. Qualified Company Secretary (CS) from the Institute of Company Secretaries of India (ICSI).

Experience: 8–12 years of experience in compliance and corporate governance, preferably in the financial services/fintech/banking sector. Proven track record in managing operational processes and secretarial compliance.
a. Previous experience working in the fintech, banking, or financial services sectors.
b. Familiarity with RBI and SEBI compliance frameworks.
c. Strong understanding of corporate laws, secretarial practices, and governance frameworks.

Competencies: Ethical judgment and integrity. Strategic thinking and a proactive mindset. Ability to work independently and as part of a team. Adaptability to fast-paced and dynamic environments. Ability to multitask and manage priorities in a dynamic environment. High attention to detail and ability to work under tight deadlines.
Job Interactions
1) Cross-Functional Collaboration - Align operations with organizational goals and support other departments, such as Product, Technology, Compliance, and Customer Support, to meet business targets.
2) Support and Enablement - Facilitate efficient workflows and remove operational barriers for other teams.
3) Risk Management and Compliance Oversight - Ensure that operational processes align with regulatory requirements and risk management standards.
4) Data-Driven Decision Making - Support internal teams by providing insights and analytics that can improve decision-making.
5) Continuous Improvement and Training - Promote a culture of continuous improvement within operations to boost productivity and efficiency across the organization.
6) Escalation and Conflict Resolution - Address operational issues and conflicts promptly to maintain smooth interdepartmental interactions.

Nature of Interaction
Service Delivery and Client Satisfaction - Ensure operational processes align with client expectations and regulatory standards. This might involve gathering customer feedback to assess service quality and identify areas for improvement.
Customer Success and Retention - Build and maintain relationships with high-value or long-term clients. This helps foster trust and loyalty, contributing to client retention and overall customer satisfaction.
Onboarding and Compliance - Manage the onboarding process to ensure that clients meet regulatory requirements (like KYC and AML) and that the process is efficient.
Partnerships and Vendor Management - Work with third-party vendors or partners to ensure external collaborations align with company standards and serve customer needs effectively.
Strategic Development and Advocacy - Act as client advocates in internal discussions, representing customer perspectives when refining products, developing new features, or improving service models.
2.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Company Overview:
Tring is India’s largest tech-enabled celebrity engagement platform with 15,000+ celebrities on board, including MS Dhoni, Rajkummar Rao, Shilpa Shetty, Shivam Dube, Sonali Bendre, and Ali Fazal, to name a few. Tring helps brands connect with celebrities for brand ambassador deals, endorsements, event appearances, image rights, influencer marketing, and more. Having worked with over 1,500 brands across industries, Tring makes celebrity marketing accessible and cost-effective for businesses of all sizes.

Why Join Us?
At Tring, you will be part of a fast-growing company revolutionizing the celebrity engagement industry. Work alongside a vibrant team, collaborate with some of the biggest brands and personalities, and help shape the future of marketing with direct access to A-list celebrities. If you thrive in dynamic environments and want to be part of a game-changing platform, this is the place for you!

Your Role:
- Conduct cold outreach via calls, networking, and industry events to generate new leads.
- Drive B2B sales by acquiring and managing business accounts.
- Conduct video meetings to present Tring’s service offerings and close deals.
- Collaborate with the celebrity team to propose tailored celebrity solutions for clients.
- Manage leads and sales activities in Zoho CRM.
- Build strong client relationships to ensure retention and satisfaction.

What We’re Looking For:
- 6 months to 2 years of B2B sales experience with a track record of exceeding sales targets.
- Strong negotiation and communication skills.
- Ability to conduct engaging presentations and explain complex solutions.
- Skilled in cold outreach, prospecting, and relationship management.
- Analytical and problem-solving abilities.

Perks and Benefits:
- Competitive salary with performance-based bonuses.
- Career growth opportunities in a fast-paced, innovative environment.
- Continuous learning and development.
- A chance to work with top celebrities and leading brands.
4.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Job Description
Location: Mumbai, Maharashtra

Job Responsibilities
Critically examine the execution of projects and troubleshoot problems on the go.
Lead client review meetings and ensure customer delight.
Drive client servicing and relations to ensure a CSAT of 9 and above for all clients.
Map customer expectations at a high level and evolve the SEO practice to ensure customer delight.
Work closely with the Content and SEO teams to ensure there are no errors in the work being sent out.
Mentor managers and junior team members.
Ensure client retention.
Ensure timely execution of projects.
Own client status calls and communication regarding SEO, and effectively communicate ongoing efforts to clients.

Requirements
At least 4-7 years of client and team management experience
Experience in managing multiple projects
Strong data analytics skills to find actionable insights from data
Strong verbal and written communication skills
Well-versed in Microsoft Excel, Google Sheets, and Looker Studio
Working knowledge of GA4, Google Search Console, and other SEO tools such as Ahrefs, Semrush, Screaming Frog, etc.

About Social Beat
Founded in 2012, Social Beat is a digital growth partner for leading brands and hyperscaling startups in India. With a 300+ strong team of digital experts across Bengaluru, Mumbai, NCR and Chennai, they are India's fastest-growing independent digital marketing solutions company and manage 3% of digital media investment in India. Social Beat is a Google Premier Partner and Facebook Business Partner, and works closely with ecosystem partners like Hotstar, Amazon and LinkedIn.
They work as extended growth teams at startups including boAt, Upgrad Campus, Global Bees, Blackbuck, Jupiter, Khatabook, Scaler, Whitehat Jr, Pharmeasy, Pinelabs, Wakefit and Juicy Chemistry, and with top brands including Bharat Matrimony, Jaquar, Tata Cliq, Indiabulls Dhani, Tata Consumer Products, Mahindra Finance, Go Colors, Hotstar, Himalaya Wellness, Quess Corp, Sundaram Mutual, Brigade Group, Give India and Isuzu, driving innovation through a combination of creativity and performance. They have bagged numerous awards from Google, ET Brand Equity, Foxglove, Digies, the Advertising Club of Madras and the Advertising Club of Bangalore, and have been adjudged among the fastest-growing agencies by Agency Reporter.
7.0 years
40 Lacs
Pune/Pimpri-Chinchwad Area
Remote
Experience: 7+ years
Salary: INR 4,000,000 per year (based on experience)
Expected Notice Period: 15 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by MatchMove)
(Note: This is a requirement for one of Uplers' clients - MatchMove)

What do you need for this opportunity?
Must-have skills: Gen AI, AWS data stack, Kinesis, open table formats, PySpark, stream processing, Kafka, MySQL, Python

MatchMove is looking for: Technical Lead - Data Platform

As Technical Lead - Data Platform, you will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight.

You will contribute to:
Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services.
Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark.
Structuring and evolving data into open table formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services.
Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases.
Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment.
Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM).
Using Generative AI tools to enhance developer productivity, including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights.
Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines.

Responsibilities:
Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR.
Manage ingestion from transactional sources using AWS DMS, with a focus on schema drift handling and low-latency replication.
Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3 and cataloged with Glue and Lake Formation.
Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g., fraud engines) and human-readable interfaces (e.g., dashboards).
Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations.
Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership.
Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform.

Requirements
At least 7 years of experience in data engineering.
Deep hands-on experience with the AWS data stack: Glue (Jobs & Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum.
Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs.
Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation.
Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale.
Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions.
Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments.
Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains, with strong engineering hygiene.
Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders.

Brownie Points:
Experience working in a PCI DSS or other central-bank-regulated environment with audit logging and data retention requirements.
Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection.
Familiarity with data contracts, data mesh patterns, and data-as-a-product principles.
Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases.
Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3.
Experience building data platforms for ML/AI teams or integrating with model feature stores.

Engagement Model: Direct placement with client
This is a remote role.
Shift timings: 10 AM to 7 PM

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
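The schema-drift handling called out in this role's ingestion responsibilities can be illustrated with a minimal sketch. This is plain Python with hypothetical column-to-type maps, not MatchMove's implementation; a real pipeline would read schemas from the Glue Data Catalog or Iceberg table metadata rather than hand-built dicts:

```python
# Minimal, illustrative schema-drift check between two table versions.
# The column maps and drift categories are hypothetical examples; in a
# production AWS pipeline the schemas would come from Glue or Iceberg.

def detect_schema_drift(old: dict, new: dict) -> dict:
    """Compare {column: type} maps and classify the differences."""
    added = {c: t for c, t in new.items() if c not in old}
    removed = {c: t for c, t in old.items() if c not in new}
    changed = {c: (old[c], new[c]) for c in old.keys() & new.keys()
               if old[c] != new[c]}
    return {"added": added, "removed": removed, "changed": changed}

if __name__ == "__main__":
    # Hypothetical transaction-table schemas before and after a source change.
    v1 = {"txn_id": "string", "amount": "decimal(10,2)", "created_at": "timestamp"}
    v2 = {"txn_id": "string", "amount": "decimal(18,4)",
          "created_at": "timestamp", "channel": "string"}
    drift = detect_schema_drift(v1, v2)
    print(drift["added"])    # a new column appeared at the source
    print(drift["changed"])  # precision widened on the amount column
```

A check like this, run before applying replicated DDL downstream, lets a pipeline distinguish additive changes (usually safe to auto-apply) from type changes or drops (usually requiring review).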
15.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Job Context:
The Head of Marketing will lead the marketing efforts for pharmaceutical products of the CVD division within the domestic market. This role involves developing and executing marketing strategies and collaborating with cross-functional teams to drive brand growth and market share.

Challenges: Aligning marketing with business goals, keeping up with rapid digital evolution, data overload and insight extraction, customer-centric strategy, cross-functional collaboration, talent acquisition and retention, budget constraints, brand differentiation, global vs. local strategy, and crisis management.

KEY ACCOUNTABILITIES

Strategic Planning:
Develop and implement comprehensive marketing strategies to achieve business objectives. Conduct market analysis to identify opportunities and threats. Define target markets and positioning strategies for pharmaceutical products of the CVD division.

Brand Management:
Oversee the development and execution of brand plans. Ensure consistent brand messaging across all marketing channels. Monitor brand performance and make data-driven adjustments to strategies.

Marketing Campaigns:
Plan and execute multi-channel marketing campaigns, including digital, print, and events. Collaborate with creative agencies to develop promotional materials. Track and analyse campaign performance to optimize ROI.

Stakeholder Collaboration:
Work closely with sales, CMOs, regulatory, and other departments to align marketing strategies with business goals. Build and maintain relationships with key opinion leaders and industry influencers. Represent the company at industry conferences and events.

Budget Management:
Develop and manage the marketing budget. Ensure efficient allocation of resources to maximize marketing impact. Monitor expenditures and provide regular financial reports.

Compliance:
Ensure all marketing activities comply with relevant regulations and industry standards.
EDUCATION & EXPERIENCE:
Pharm/M.Pharm/MBA with 10–15 years of experience in product management within the pharmaceutical industry, specifically in the cardiac and diabetic segments. A minimum of 1 year of experience as a Marketing Manager is mandatory.

Functional Skills: Strategic thinking, digital marketing expertise, brand management, market research and consumer insights, campaign planning and execution, data analysis and ROI measurement, leadership and team management, communication and presentation, budgeting and financial acumen, innovation and adaptability.

Behavioral Skills: Leadership and vision, emotional intelligence, collaboration and influence, adaptability and resilience, creative thinking, decision-making, communication, customer-centric mindset, accountability, ethical judgment.
3.0 - 6.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
The ideal candidate will have experience in all areas related to the human resources field. They should be comfortable onboarding new candidates and collecting necessary background information, as well as assisting employees while at work. This candidate should work closely with management to provide training for employees and establish ways to increase employee engagement.

Responsibilities
Employee Relations & Support: Act as the first point of contact for employees regarding HR-related inquiries, providing timely and accurate information.
Recruitment & Onboarding: Assist in the end-to-end recruitment process, including job postings, resume screening, conducting interviews, and facilitating onboarding for new hires.
Performance Management: Support the performance appraisal process by coordinating with managers and employees, ensuring timely feedback and development plans.
Training & Development: Assist in identifying training needs and organizing development programs to enhance employee skills and performance.
HR Administration: Maintain accurate employee records, process HR documentation, and ensure compliance with company policies and legal requirements.
Employee Engagement: Support initiatives aimed at improving employee engagement, retention, and overall workplace culture.
HR Projects: Contribute to various HR projects and initiatives, providing administrative support and ensuring successful implementation.

Qualifications
Bachelor's degree/MBA in HR or a related field
3-6 years of experience in HR or a related field
Strong organization, communication, and conflict resolution skills
Demonstrated ability to onboard new employees and manage HR tasks
Proficient in the Microsoft Office suite
The job market for retention roles in India is growing rapidly as companies recognize the importance of retaining customers and employees. Retention professionals play a crucial role in developing strategies to keep customers engaged and satisfied, ultimately leading to increased loyalty and revenue for the company.
The average salary range for retention professionals in India varies based on experience level. Entry-level positions typically start at around INR 3-4 lakhs per annum, while experienced professionals can earn upwards of INR 10-15 lakhs per annum.
In the field of retention, career progression often follows a path from Retention Executive to Retention Manager to Retention Director. Along the way, professionals may also specialize in areas such as customer retention, employee retention, or membership retention.
In addition to expertise in retention strategies, professionals in this field often benefit from skills in data analysis, customer relationship management (CRM) software, communication, and problem-solving.
As you explore opportunities in the retention job market in India, remember to showcase your expertise in developing effective strategies that keep customers and employees engaged. By mastering the skills and knowledge required for retention roles, you can confidently prepare for interviews and excel in your career growth. Good luck!