
14137 Scalability Jobs - Page 50

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

2.0 - 4.0 years

6 - 10 Lacs

Pune

Hybrid

Source: Naukri

So, what's the role all about?
The Software Engineer designs, develops, tests, and maintains the NICE CXone software platform. Key responsibilities span all tiers of the NICE CXone technology, including (but not limited to) design and development of NICE CXone products and features, unit testing, code reviews, resolving defects encountered during the QA cycle, supporting the production environment, and assisting other developers on an agile team. The Software Engineer will also mentor junior staff members and may be asked to propose cross-disciplinary architectural solutions to difficult problems.

How will you make an impact?
- Articulate and demonstrate awareness of software design principles and patterns for new features by defining an implementation plan that includes schedule, priorities, dependencies, and deliverables.
- Write efficient code for handling inter-process communication.
- Develop to specific requirements with awareness of scalability, hardware capabilities, and cross-environment and cross-platform implications.
- Contribute to the creation and review of HLD and LLD documents.
- Work as a member of an agile team to enhance and improve software written in Java.
- Solve routine feature-related problems using company tools and processes.
- Deliver high-quality software on time.
- Develop, optimize, and maintain SaaS applications with a multi-tenant architecture.
- Troubleshoot and debug complex software issues efficiently.
- Ensure scalability, security, and reliability of applications.
- Attend meetings and training as required.
- Enjoy working on a team and sharing knowledge with peers.

Have you got what it takes?
- BE or ME in Computer Science or a related degree.
- 8+ years' experience in software development.
- Experience using multiple LLM models.
- Strong knowledge of Python.
- Strong knowledge of working with and developing microservices.
- Good hands-on experience with SQL.
- Excellent communication skills.
- Excellent problem-solving skills.
- Hands-on experience with AWS services.
- Openness to learning new tech stacks as needed.
- Working knowledge of unit testing and object-oriented software design.
- Desire to work in a fast-paced environment, with a drive for self-growth and personal improvement.

What's in it for you?
Join an ever-growing, market-disrupting global company where the teams, comprised of the best of the best, work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NICE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NICEr!

Enjoy NICE-FLEX!
At NICE, we work according to the NICE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.

Reporting into: Tech Manager
Role Type: Individual Contributor
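Illustration (not part of the posting): a minimal sketch of the kind of unit-tested, multi-tenant-aware Python service code this role describes. The TenantQuota class and its limit are hypothetical stand-ins, not NICE CXone code.

```python
# Illustrative only: a tiny per-tenant rate-limit check with a unit test,
# the kind of testable multi-tenant service code this role describes.
import unittest


class TenantQuota:
    """Tracks per-tenant request counts against a fixed limit."""

    def __init__(self, limit: int):
        self.limit = limit
        self.counts: dict[str, int] = {}

    def allow(self, tenant_id: str) -> bool:
        """Return True and record the request if the tenant is under its limit."""
        used = self.counts.get(tenant_id, 0)
        if used >= self.limit:
            return False
        self.counts[tenant_id] = used + 1
        return True


class TenantQuotaTest(unittest.TestCase):
    def test_isolates_tenants(self):
        quota = TenantQuota(limit=1)
        self.assertTrue(quota.allow("tenant-a"))
        self.assertFalse(quota.allow("tenant-a"))  # tenant-a exhausted
        self.assertTrue(quota.allow("tenant-b"))   # tenant-b unaffected


if __name__ == "__main__":
    unittest.main()
```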

Posted 2 days ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Title/Position: Power Platform Developer
Job Location: Pune
Experience: 5+ Years
Employment Type: Full Time
Shift Timings: Rotational Shift

Job Summary:
We're looking for a Power Platform Developer with Microsoft administration expertise who knows how to design, develop, and optimize business solutions using Power Apps, Power Automate, and Power BI, along with working knowledge of Microsoft 365 administration. The ideal applicant has experience creating custom applications, automation workflows, and data visualization tools that increase corporate productivity.

Key Responsibilities:
- Design, develop, and deploy Power Apps (canvas and model-driven apps) for business process automation.
- Create and optimize Power Automate workflows for process automation and integration with external systems.
- Develop interactive dashboards and reports using Power BI for data visualization and insights.
- Ensure performance, scalability, and security compliance of Power Platform solutions.
- Collaborate with business stakeholders to gather requirements and provide innovative solutions.
- Troubleshoot and resolve issues related to Power Platform applications and automation.
- Monitor and troubleshoot Microsoft 365 services and Power Platform issues.
- Manage and configure Power Platform admin centre settings and governance policies.

Required Skills & Qualifications:
- Proven experience with Microsoft Power Platform (Power Apps, Power Automate, Power BI, Power Virtual Agents).
- Strong understanding of Dataverse, SharePoint, SQL, and other data sources.
- Expertise in writing Power Fx expressions, JavaScript, and Power BI DAX queries.
- Familiarity with the Microsoft 365 Admin Centre, Exchange, SharePoint Online, and Teams administration.
- Microsoft Power Platform and Microsoft 365 certifications are a plus.

About Stratacent
Stratacent is an IT consulting and services firm headquartered in Jersey City, NJ, with two global delivery centres in the New York City and New Delhi areas, plus offices in London, Canada, and Pune, India. We are a leading IT services provider focusing on Financial Services, Insurance, Healthcare, and Life Sciences. We help our customers in their digital transformation journey and provide services and solutions around cloud infrastructure, data and analytics, automation, application development, and ITSM. We have partnerships with SAS, Automation Anywhere, Snowflake, Azure, AWS, and GCP. (To learn more: www.stratacent.com)

Employee Benefits:
- Group Medical Insurance
- Cab facility
- Meals/snacks
- Continuous Learning Program

Stratacent India Private Limited is an equal opportunity employer and will not discriminate against any employee or applicant for employment on the basis of race, color, creed, religion, age, sex, national origin, ancestry, handicap, or any other factor.
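Illustration (not part of the posting): the Microsoft 365 administration side of this role often involves scripting against the Microsoft Graph REST API. Below is a minimal hedged Python sketch that lists users and follows Graph's standard @odata.nextLink paging; token acquisition (e.g., via an MSAL client-credentials flow) is deliberately stubbed out, and the permission scope is an assumption.

```python
# Illustrative sketch only: list Microsoft 365 users via Microsoft Graph.
# Assumes you already hold an OAuth2 access token with User.Read.All;
# obtaining one through your tenant's app registration is out of scope here.
import requests

GRAPH_USERS_URL = "https://graph.microsoft.com/v1.0/users"


def list_user_names(access_token: str) -> list[str]:
    """Return display names of users, following @odata.nextLink paging."""
    names: list[str] = []
    url = GRAPH_USERS_URL
    headers = {"Authorization": f"Bearer {access_token}"}
    while url:
        resp = requests.get(url, headers=headers, timeout=30)
        resp.raise_for_status()
        payload = resp.json()
        names.extend(u.get("displayName", "") for u in payload.get("value", []))
        url = payload.get("@odata.nextLink")  # None when no more pages remain
    return names


if __name__ == "__main__":
    # Replace with a real token before running.
    print(list_user_names("YOUR_ACCESS_TOKEN"))
```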

Posted 2 days ago

Apply

6.0 years

60 - 65 Lacs

Ghaziabad, Uttar Pradesh, India

Remote

Source: LinkedIn

Experience: 6.00+ years
Salary: INR 6,000,000 - 6,500,000 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-Time Permanent position (payroll and compliance to be managed by Crop.Photo)
(*Note: This is a requirement for one of Uplers' clients - Crop.Photo)

What do you need for this opportunity?
Must-have skills: ML, Python

Crop.Photo is looking for:
We're looking for a hands-on engineering lead to own the delivery of our GenAI-centric product from the backend up to the UI, while integrating visual AI pipelines built by ML engineers. You'll be both a builder and a leader: writing clean Python, Java, and TypeScript, scaling AWS-based systems, mentoring engineers, and making architectural decisions that stand the test of scale. You won't be working in a silo; this is a role for someone who thrives in fast-paced, high-context environments where product, design, and AI are deeply intertwined. (Note: This role requires both technical mastery and leadership skills; we're looking for someone who can write production code, make architectural decisions, and lead a team to success.)

What You'll Do
- Lead development of our Java, Python (FastAPI), and Node.js backend services on AWS.
- Deploy ML pipelines (built by the ML team) into containerized inference workflows using FastAPI, Docker, and GPU-enabled ECS on EC2.
- Deploy and manage services on AWS ECS/Fargate, Lambda, API Gateway, and GPU-powered EC2.
- Contribute to the React/TypeScript frontend when needed to accelerate product delivery.
- Work closely with the founder, product, and UX teams to translate business needs into working product.
- Make architecture and infrastructure decisions, from media processing to task queues to storage.
- Own the performance, reliability, and cost-efficiency of our core services.
- Hire and mentor junior/mid engineers over time.
- Drive technical planning, sprint prioritization, and trade-off decisions.
- Bring a customer-centric approach: you think about how your work affects end users and the product experience, not just model performance.
- Bring a quest for high-quality deliverables: you write clean, tested code and debug edge cases until they're truly fixed.
- Frame problems from scratch and work without strict handoffs: you build from a goal, not a ticket.

Skills & Experience We Expect

Core Engineering Experience
- 6-8 years of professional software engineering experience in production environments.
- 2-3 years of experience leading engineering teams of 5+ engineers.

Cloud Infrastructure & AWS Expertise (5+ years)
- Deep experience with AWS Lambda, ECS, and container orchestration tools.
- Familiarity with API Gateway and microservices architecture best practices.
- Proficiency with S3, DynamoDB, and other AWS-native data services.
- CloudWatch, X-Ray, or similar tools for monitoring and debugging distributed systems.
- Strong grasp of IAM, roles, and security best practices in cloud environments.

Backend Development (5-7 years)
- Java: advanced concurrency, scalability, and microservice design.
- Python: experience with FastAPI and building production-grade MLOps pipelines.
- Node.js & TypeScript: strong backend engineering and API development.
- Deep understanding of RESTful API design and implementation.
- Docker: 3+ years of containerization experience for building and deploying services.
- Hands-on experience (2+ years) deploying ML inference pipelines (built by the ML team) using Docker, FastAPI, and GPU-based AWS infrastructure (e.g., ECS, EC2).

System Optimization & Middleware (3-5 years)
- Application performance optimization and AWS cloud cost optimization.
- Use of background job frameworks (e.g., Celery, BullMQ, AWS Step Functions).
- Media/image processing using tools like Sharp, PIL, ImageMagick, or OpenCV.
- Database design and optimization for low-latency, high-availability systems.

Frontend Development (2-3 years)
- Hands-on experience with React and TypeScript in modern web apps.
- Familiarity with Redux, Context API, and modern state management patterns.
- Comfort with modern build tools, CI/CD, and frontend deployment practices.

System Design & Architecture (4-6 years)
- Designing and implementing microservices-based systems.
- Experience with event-driven architectures using queues or pub/sub.
- Implementing caching strategies (e.g., Redis, CDN edge caching).
- Architecting high-performance image/media pipelines.

Leadership & Communication (2-3 years)
- Proven ability to lead engineering teams and drive project delivery.
- Skill at writing clear and concise technical documentation.
- Experience mentoring engineers, conducting code reviews, and fostering growth.
- Track record of shipping high-impact products in fast-paced environments.
- Strong customer-centric, growth-oriented mindset, especially in startup settings: able to take high-level goals and independently drive toward outcomes without constant handoffs or back-and-forth with the founder.
- Proactive use of tools like ChatGPT, GitHub Copilot, or similar AI copilots to improve personal and team efficiency, remove blockers, and iterate faster.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal apart from this one. Depending on the assessments you clear, you can apply for them as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
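Illustration (not part of the posting): a minimal sketch of the containerized FastAPI inference-endpoint pattern this role describes. The model loading is stubbed and all names (load_model, ImageRequest, /predict) are hypothetical.

```python
# Illustrative sketch: a FastAPI wrapper around an ML inference function,
# the pattern this role names for containerized GPU inference workflows.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


def load_model():
    """Stand-in for loading a real vision model (e.g., from S3) at startup."""
    return lambda url: {"label": "product", "confidence": 0.97}


model = load_model()


class ImageRequest(BaseModel):
    image_url: str


@app.post("/predict")
def predict(req: ImageRequest) -> dict:
    """Run inference on one image URL and return the prediction."""
    return model(req.image_url)

# Run locally with: uvicorn main:app --port 8000
# In the deployment described above, this app would be built into a Docker
# image and run on GPU-enabled ECS/EC2 behind API Gateway.
```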

Posted 2 days ago

Apply

7.0 years

40 Lacs

Noida, Uttar Pradesh, India

Remote

Source: LinkedIn

Experience: 7.00+ years
Salary: INR 4,000,000 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-Time Permanent position (payroll and compliance to be managed by MatchMove)
(*Note: This is a requirement for one of Uplers' clients - MatchMove)

What do you need for this opportunity?
Must-have skills: Gen AI, AWS data stack, Kinesis, open table formats, PySpark, stream processing, Kafka, MySQL, Python

MatchMove is looking for:
Technical Lead - Data Platform. You will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight.

You will contribute to:
- Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services.
- Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark.
- Structuring and evolving data into open table formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services.
- Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases.
- Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment.
- Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM).
- Using generative AI tools to enhance developer productivity, including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights.
- Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines.

Responsibilities:
- Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR.
- Manage ingestion from transactional sources using AWS DMS, with a focus on schema drift handling and low-latency replication.
- Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3 and cataloged with Glue and Lake Formation.
- Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g., fraud engines) and human-readable interfaces (e.g., dashboards).
- Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations.
- Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership.
- Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform.

Requirements
- At least 7 years of experience in data engineering.
- Deep hands-on experience with the AWS data stack: Glue (Jobs & Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum.
- Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs.
- Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation.
- Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale.
- Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions.
- Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments.
- Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains, with strong engineering hygiene.
- Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders.

Brownie Points:
- Experience working in a PCI DSS or other central-bank-regulated environment with audit logging and data retention requirements.
- Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection.
- Familiarity with data contracts, data mesh patterns, and data-as-a-product principles.
- Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases.
- Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3.
- Experience building data platforms for ML/AI teams or integrating with model feature stores.

Engagement Model:
- Direct placement with the client
- This is a remote role
- Shift timings: 10 AM to 7 PM

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal apart from this one. Depending on the assessments you clear, you can apply for them as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
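Illustration (not part of the posting): a minimal PySpark sketch of the open-table-format work described above, appending a batch to an Apache Iceberg table. The catalog, database, and table names are hypothetical, and an Iceberg catalog (e.g., backed by AWS Glue) is assumed to be configured in the Spark session.

```python
# Illustrative sketch: append a small batch to an Apache Iceberg table.
# Assumes the "demo" catalog is already configured as an Iceberg catalog
# in the Spark session config; all table names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("iceberg-append-demo").getOrCreate()

# A toy batch of "transactions"; a real pipeline would read from DMS/Kinesis.
batch = spark.createDataFrame(
    [("txn-001", 125.50, "2025-06-01"), ("txn-002", 87.25, "2025-06-01")],
    ["txn_id", "amount", "txn_date"],
)

# writeTo(...).append() uses Spark's DataFrameWriterV2 API, which Iceberg supports.
batch.writeTo("demo.fintech.transactions").append()

# Iceberg time travel (the "time-travel queries" the posting mentions) can then
# read the table as of an earlier snapshot, e.g.:
# spark.read.option("snapshot-id", <id>).format("iceberg").load("demo.fintech.transactions")
```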

Posted 2 days ago

Apply

5.0 years

0 - 0 Lacs

Noida, Uttar Pradesh, India

Remote

Source: LinkedIn

Experience: 5.00+ years
Salary: USD 2,962 - 3,111 / month (based on experience)
Expected Notice Period: 7 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-Time Contract for 12 months (40 hrs a week / 160 hrs a month)
(*Note: This is a requirement for one of Uplers' clients - a global leader in data integrity)

What do you need for this opportunity?
Must-have skills: Core Java (Collections, Multithreading), Spring Framework (Spring Boot), RESTful API development, Microservices architecture, JWT or OAuth2, SQL/NoSQL databases, Git/Jenkins

The global leader in data integrity is looking for: Java Developer (Mid/Senior)

Key Skills:
- Core Java (Collections, Multithreading, Exception Handling)
- Spring Framework (Spring Boot, Spring MVC, Spring Security)
- Hibernate / JPA
- RESTful API development
- Microservices architecture
- Maven/Gradle, Git, Jenkins, Docker
- Authentication: JWT, OAuth2
- SQL / NoSQL databases
- Agile/Scrum methodologies
- Unit testing (JUnit, Mockito)
- Cloud services (AWS / Azure), optional if applicable

Responsibilities:
- Design and implement backend services using Java, Spring Boot, and RESTful APIs.
- Optimize application performance and scalability through efficient code and design.
- Collaborate with frontend developers, QA, and product managers on feature integration.
- Participate in code reviews, unit testing, and continuous integration processes.
- Document technical solutions and contribute to knowledge sharing.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal apart from this one. Depending on the assessments you clear, you can apply for them as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
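Illustration (not part of the posting): the role names JWT-based authentication, so here is a minimal sketch of issuing and verifying a JWT. Python (with the PyJWT library) is used for consistency with the other examples on this page; the secret, claims, and lifetimes are hypothetical.

```python
# Illustrative sketch of JWT issue/verify, the auth pattern this role names.
# Uses the PyJWT library (pip install PyJWT); secret and claims are hypothetical.
import datetime

import jwt

SECRET = "change-me"  # in production, load from a secrets manager


def issue_token(user_id: str) -> str:
    """Sign a short-lived token carrying the user id as the subject."""
    claims = {
        "sub": user_id,
        "exp": datetime.datetime.now(datetime.timezone.utc)
               + datetime.timedelta(minutes=15),
    }
    return jwt.encode(claims, SECRET, algorithm="HS256")


def verify_token(token: str) -> str:
    """Validate signature and expiry; return the subject or raise a jwt error."""
    claims = jwt.decode(token, SECRET, algorithms=["HS256"])
    return claims["sub"]


if __name__ == "__main__":
    t = issue_token("user-42")
    print(verify_token(t))  # -> user-42
```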

Posted 2 days ago

Apply

6.0 years

60 - 65 Lacs

Gurugram, Haryana, India

Remote

Source: LinkedIn

Experience: 6.00+ years
Salary: INR 6,000,000 - 6,500,000 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-Time Permanent position (payroll and compliance to be managed by Crop.Photo)
(*Note: This is a requirement for one of Uplers' clients - Crop.Photo)

What do you need for this opportunity?
Must-have skills: ML, Python

Crop.Photo is looking for:
We're looking for a hands-on engineering lead to own the delivery of our GenAI-centric product from the backend up to the UI, while integrating visual AI pipelines built by ML engineers. You'll be both a builder and a leader: writing clean Python, Java, and TypeScript, scaling AWS-based systems, mentoring engineers, and making architectural decisions that stand the test of scale. You won't be working in a silo; this is a role for someone who thrives in fast-paced, high-context environments where product, design, and AI are deeply intertwined. (Note: This role requires both technical mastery and leadership skills; we're looking for someone who can write production code, make architectural decisions, and lead a team to success.)

What You'll Do
- Lead development of our Java, Python (FastAPI), and Node.js backend services on AWS.
- Deploy ML pipelines (built by the ML team) into containerized inference workflows using FastAPI, Docker, and GPU-enabled ECS on EC2.
- Deploy and manage services on AWS ECS/Fargate, Lambda, API Gateway, and GPU-powered EC2.
- Contribute to the React/TypeScript frontend when needed to accelerate product delivery.
- Work closely with the founder, product, and UX teams to translate business needs into working product.
- Make architecture and infrastructure decisions, from media processing to task queues to storage.
- Own the performance, reliability, and cost-efficiency of our core services.
- Hire and mentor junior/mid engineers over time.
- Drive technical planning, sprint prioritization, and trade-off decisions.
- Bring a customer-centric approach: you think about how your work affects end users and the product experience, not just model performance.
- Bring a quest for high-quality deliverables: you write clean, tested code and debug edge cases until they're truly fixed.
- Frame problems from scratch and work without strict handoffs: you build from a goal, not a ticket.

Skills & Experience We Expect

Core Engineering Experience
- 6-8 years of professional software engineering experience in production environments.
- 2-3 years of experience leading engineering teams of 5+ engineers.

Cloud Infrastructure & AWS Expertise (5+ years)
- Deep experience with AWS Lambda, ECS, and container orchestration tools.
- Familiarity with API Gateway and microservices architecture best practices.
- Proficiency with S3, DynamoDB, and other AWS-native data services.
- CloudWatch, X-Ray, or similar tools for monitoring and debugging distributed systems.
- Strong grasp of IAM, roles, and security best practices in cloud environments.

Backend Development (5-7 years)
- Java: advanced concurrency, scalability, and microservice design.
- Python: experience with FastAPI and building production-grade MLOps pipelines.
- Node.js & TypeScript: strong backend engineering and API development.
- Deep understanding of RESTful API design and implementation.
- Docker: 3+ years of containerization experience for building and deploying services.
- Hands-on experience (2+ years) deploying ML inference pipelines (built by the ML team) using Docker, FastAPI, and GPU-based AWS infrastructure (e.g., ECS, EC2).

System Optimization & Middleware (3-5 years)
- Application performance optimization and AWS cloud cost optimization.
- Use of background job frameworks (e.g., Celery, BullMQ, AWS Step Functions).
- Media/image processing using tools like Sharp, PIL, ImageMagick, or OpenCV.
- Database design and optimization for low-latency, high-availability systems.

Frontend Development (2-3 years)
- Hands-on experience with React and TypeScript in modern web apps.
- Familiarity with Redux, Context API, and modern state management patterns.
- Comfort with modern build tools, CI/CD, and frontend deployment practices.

System Design & Architecture (4-6 years)
- Designing and implementing microservices-based systems.
- Experience with event-driven architectures using queues or pub/sub.
- Implementing caching strategies (e.g., Redis, CDN edge caching).
- Architecting high-performance image/media pipelines.

Leadership & Communication (2-3 years)
- Proven ability to lead engineering teams and drive project delivery.
- Skill at writing clear and concise technical documentation.
- Experience mentoring engineers, conducting code reviews, and fostering growth.
- Track record of shipping high-impact products in fast-paced environments.
- Strong customer-centric, growth-oriented mindset, especially in startup settings: able to take high-level goals and independently drive toward outcomes without constant handoffs or back-and-forth with the founder.
- Proactive use of tools like ChatGPT, GitHub Copilot, or similar AI copilots to improve personal and team efficiency, remove blockers, and iterate faster.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal apart from this one. Depending on the assessments you clear, you can apply for them as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 2 days ago

Apply

7.0 years

40 Lacs

Ghaziabad, Uttar Pradesh, India

Remote

Source: LinkedIn

Experience: 7.00+ years
Salary: INR 4,000,000 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-Time Permanent position (payroll and compliance to be managed by MatchMove)
(*Note: This is a requirement for one of Uplers' clients - MatchMove)

What do you need for this opportunity?
Must-have skills: Gen AI, AWS data stack, Kinesis, open table formats, PySpark, stream processing, Kafka, MySQL, Python

MatchMove is looking for:
Technical Lead - Data Platform. You will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight.

You will contribute to:
- Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services.
- Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark.
- Structuring and evolving data into open table formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services.
- Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases.
- Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment.
- Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM).
- Using generative AI tools to enhance developer productivity, including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights.
- Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines.

Responsibilities:
- Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR.
- Manage ingestion from transactional sources using AWS DMS, with a focus on schema drift handling and low-latency replication.
- Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3 and cataloged with Glue and Lake Formation.
- Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g., fraud engines) and human-readable interfaces (e.g., dashboards).
- Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations.
- Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership.
- Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform.

Requirements
- At least 7 years of experience in data engineering.
- Deep hands-on experience with the AWS data stack: Glue (Jobs & Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum.
- Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs.
- Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation.
- Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale.
- Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions.
- Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments.
- Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains, with strong engineering hygiene.
- Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders.

Brownie Points:
- Experience working in a PCI DSS or other central-bank-regulated environment with audit logging and data retention requirements.
- Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection.
- Familiarity with data contracts, data mesh patterns, and data-as-a-product principles.
- Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases.
- Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3.
- Experience building data platforms for ML/AI teams or integrating with model feature stores.

Engagement Model:
- Direct placement with the client
- This is a remote role
- Shift timings: 10 AM to 7 PM

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal apart from this one. Depending on the assessments you clear, you can apply for them as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 2 days ago

Apply

6.0 years

60 - 65 Lacs

Noida, Uttar Pradesh, India

Remote

Source: LinkedIn

Experience: 6.00+ years
Salary: INR 6,000,000 - 6,500,000 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-Time Permanent position (payroll and compliance to be managed by Crop.Photo)
(*Note: This is a requirement for one of Uplers' clients - Crop.Photo)

What do you need for this opportunity?
Must-have skills: ML, Python

Crop.Photo is looking for:
We're looking for a hands-on engineering lead to own the delivery of our GenAI-centric product from the backend up to the UI, while integrating visual AI pipelines built by ML engineers. You'll be both a builder and a leader: writing clean Python, Java, and TypeScript, scaling AWS-based systems, mentoring engineers, and making architectural decisions that stand the test of scale. You won't be working in a silo; this is a role for someone who thrives in fast-paced, high-context environments where product, design, and AI are deeply intertwined. (Note: This role requires both technical mastery and leadership skills; we're looking for someone who can write production code, make architectural decisions, and lead a team to success.)

What You'll Do
- Lead development of our Java, Python (FastAPI), and Node.js backend services on AWS.
- Deploy ML pipelines (built by the ML team) into containerized inference workflows using FastAPI, Docker, and GPU-enabled ECS on EC2.
- Deploy and manage services on AWS ECS/Fargate, Lambda, API Gateway, and GPU-powered EC2.
- Contribute to the React/TypeScript frontend when needed to accelerate product delivery.
- Work closely with the founder, product, and UX teams to translate business needs into working product.
- Make architecture and infrastructure decisions, from media processing to task queues to storage.
- Own the performance, reliability, and cost-efficiency of our core services.
- Hire and mentor junior/mid engineers over time.
- Drive technical planning, sprint prioritization, and trade-off decisions.
- Bring a customer-centric approach: you think about how your work affects end users and the product experience, not just model performance.
- Bring a quest for high-quality deliverables: you write clean, tested code and debug edge cases until they're truly fixed.
- Frame problems from scratch and work without strict handoffs: you build from a goal, not a ticket.

Skills & Experience We Expect

Core Engineering Experience
- 6-8 years of professional software engineering experience in production environments.
- 2-3 years of experience leading engineering teams of 5+ engineers.

Cloud Infrastructure & AWS Expertise (5+ years)
- Deep experience with AWS Lambda, ECS, and container orchestration tools.
- Familiarity with API Gateway and microservices architecture best practices.
- Proficiency with S3, DynamoDB, and other AWS-native data services.
- CloudWatch, X-Ray, or similar tools for monitoring and debugging distributed systems.
- Strong grasp of IAM, roles, and security best practices in cloud environments.

Backend Development (5-7 years)
- Java: advanced concurrency, scalability, and microservice design.
- Python: experience with FastAPI and building production-grade MLOps pipelines.
- Node.js & TypeScript: strong backend engineering and API development.
- Deep understanding of RESTful API design and implementation.
- Docker: 3+ years of containerization experience for building and deploying services.
- Hands-on experience (2+ years) deploying ML inference pipelines (built by the ML team) using Docker, FastAPI, and GPU-based AWS infrastructure (e.g., ECS, EC2).

System Optimization & Middleware (3-5 years)
- Application performance optimization and AWS cloud cost optimization.
- Use of background job frameworks (e.g., Celery, BullMQ, AWS Step Functions).
- Media/image processing using tools like Sharp, PIL, ImageMagick, or OpenCV.
- Database design and optimization for low-latency, high-availability systems.

Frontend Development (2-3 years)
- Hands-on experience with React and TypeScript in modern web apps.
- Familiarity with Redux, Context API, and modern state management patterns.
- Comfort with modern build tools, CI/CD, and frontend deployment practices.

System Design & Architecture (4-6 years)
- Designing and implementing microservices-based systems.
- Experience with event-driven architectures using queues or pub/sub.
- Implementing caching strategies (e.g., Redis, CDN edge caching).
- Architecting high-performance image/media pipelines.

Leadership & Communication (2-3 years)
- Proven ability to lead engineering teams and drive project delivery.
- Skill at writing clear and concise technical documentation.
- Experience mentoring engineers, conducting code reviews, and fostering growth.
- Track record of shipping high-impact products in fast-paced environments.
- Strong customer-centric, growth-oriented mindset, especially in startup settings: able to take high-level goals and independently drive toward outcomes without constant handoffs or back-and-forth with the founder.
- Proactive use of tools like ChatGPT, GitHub Copilot, or similar AI copilots to improve personal and team efficiency, remove blockers, and iterate faster.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal apart from this one. Depending on the assessments you clear, you can apply for them as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 2 days ago

Apply

7.0 years

40 Lacs

Agra, Uttar Pradesh, India

Remote

Source: LinkedIn

Experience: 7.00+ years
Salary: INR 4,000,000 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-Time Permanent position (payroll and compliance to be managed by MatchMove)
(*Note: This is a requirement for one of Uplers' clients - MatchMove)

What do you need for this opportunity?
Must-have skills: Gen AI, AWS data stack, Kinesis, open table formats, PySpark, stream processing, Kafka, MySQL, Python

MatchMove is looking for:
Technical Lead - Data Platform. You will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight.

You will contribute to:
- Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services.
- Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark.
- Structuring and evolving data into open table formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services.
- Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases.
- Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment.
- Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM).
- Using generative AI tools to enhance developer productivity, including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights.
- Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines.

Responsibilities:
- Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR.
- Manage ingestion from transactional sources using AWS DMS, with a focus on schema drift handling and low-latency replication.
- Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3 and cataloged with Glue and Lake Formation.
- Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g., fraud engines) and human-readable interfaces (e.g., dashboards).
- Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations.
- Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership.
- Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform.

Requirements
- At least 7 years of experience in data engineering.
- Deep hands-on experience with the AWS data stack: Glue (Jobs & Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum.
- Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs.
- Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation.
- Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale.
- Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions.
- Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments.
- Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains, with strong engineering hygiene.
- Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders.

Brownie Points:
- Experience working in a PCI DSS or other central-bank-regulated environment with audit logging and data retention requirements.
- Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection.
- Familiarity with data contracts, data mesh patterns, and data-as-a-product principles.
- Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases.
- Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3.
- Experience building data platforms for ML/AI teams or integrating with model feature stores.

Engagement Model:
- Direct placement with the client
- This is a remote role
- Shift timings: 10 AM to 7 PM

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal apart from this one. Depending on the assessments you clear, you can apply for them as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 2 days ago

Apply

5.0 years

0 - 0 Lacs

Agra, Uttar Pradesh, India

Remote

Source: LinkedIn

Experience: 5.00+ years
Salary: USD 2,962 - 3,111 / month (based on experience)
Expected Notice Period: 7 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-Time Contract for 12 months (40 hrs a week / 160 hrs a month)
(*Note: This is a requirement for one of Uplers' clients - a global leader in data integrity)

What do you need for this opportunity?
Must-have skills: Core Java (Collections, Multithreading), Spring Framework (Spring Boot), RESTful API development, Microservices architecture, JWT or OAuth2, SQL/NoSQL databases, Git/Jenkins

The global leader in data integrity is looking for: Java Developer (Mid/Senior)

Key Skills:
- Core Java (Collections, Multithreading, Exception Handling)
- Spring Framework (Spring Boot, Spring MVC, Spring Security)
- Hibernate / JPA
- RESTful API development
- Microservices architecture
- Maven/Gradle, Git, Jenkins, Docker
- Authentication: JWT, OAuth2
- SQL / NoSQL databases
- Agile/Scrum methodologies
- Unit testing (JUnit, Mockito)
- Cloud services (AWS / Azure), optional if applicable

Responsibilities:
- Design and implement backend services using Java, Spring Boot, and RESTful APIs.
- Optimize application performance and scalability through efficient code and design.
- Collaborate with frontend developers, QA, and product managers on feature integration.
- Participate in code reviews, unit testing, and continuous integration processes.
- Document technical solutions and contribute to knowledge sharing.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal apart from this one. Depending on the assessments you clear, you can apply for them as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 2 days ago

Apply

6.0 years

60 - 65 Lacs

Agra, Uttar Pradesh, India

Remote

Source: LinkedIn

Experience: 6.00+ years
Salary: INR 6,000,000 - 6,500,000 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-Time Permanent position (payroll and compliance to be managed by Crop.Photo)
(*Note: This is a requirement for one of Uplers' clients - Crop.Photo)

What do you need for this opportunity?
Must-have skills: ML, Python

Crop.Photo is looking for:
We're looking for a hands-on engineering lead to own the delivery of our GenAI-centric product from the backend up to the UI, while integrating visual AI pipelines built by ML engineers. You'll be both a builder and a leader: writing clean Python, Java, and TypeScript, scaling AWS-based systems, mentoring engineers, and making architectural decisions that stand the test of scale. You won't be working in a silo; this is a role for someone who thrives in fast-paced, high-context environments where product, design, and AI are deeply intertwined. (Note: This role requires both technical mastery and leadership skills; we're looking for someone who can write production code, make architectural decisions, and lead a team to success.)

What You'll Do
- Lead development of our Java, Python (FastAPI), and Node.js backend services on AWS.
- Deploy ML pipelines (built by the ML team) into containerized inference workflows using FastAPI, Docker, and GPU-enabled ECS on EC2.
- Deploy and manage services on AWS ECS/Fargate, Lambda, API Gateway, and GPU-powered EC2.
- Contribute to the React/TypeScript frontend when needed to accelerate product delivery.
- Work closely with the founder, product, and UX teams to translate business needs into working product.
- Make architecture and infrastructure decisions, from media processing to task queues to storage.
- Own the performance, reliability, and cost-efficiency of our core services.
- Hire and mentor junior/mid engineers over time.
- Drive technical planning, sprint prioritization, and trade-off decisions.
- Bring a customer-centric approach: you think about how your work affects end users and the product experience, not just model performance.
- Bring a quest for high-quality deliverables: you write clean, tested code and debug edge cases until they're truly fixed.
- Frame problems from scratch and work without strict handoffs: you build from a goal, not a ticket.

Skills & Experience We Expect

Core Engineering Experience
- 6-8 years of professional software engineering experience in production environments.
- 2-3 years of experience leading engineering teams of 5+ engineers.

Cloud Infrastructure & AWS Expertise (5+ years)
- Deep experience with AWS Lambda, ECS, and container orchestration tools.
- Familiarity with API Gateway and microservices architecture best practices.
- Proficiency with S3, DynamoDB, and other AWS-native data services.
- CloudWatch, X-Ray, or similar tools for monitoring and debugging distributed systems.
- Strong grasp of IAM, roles, and security best practices in cloud environments.

Backend Development (5-7 years)
- Java: advanced concurrency, scalability, and microservice design.
- Python: experience with FastAPI and building production-grade MLOps pipelines.
- Node.js & TypeScript: strong backend engineering and API development.
- Deep understanding of RESTful API design and implementation.
- Docker: 3+ years of containerization experience for building and deploying services.
- Hands-on experience (2+ years) deploying ML inference pipelines (built by the ML team) using Docker, FastAPI, and GPU-based AWS infrastructure (e.g., ECS, EC2).

System Optimization & Middleware (3-5 years)
- Application performance optimization and AWS cloud cost optimization.
- Use of background job frameworks (e.g., Celery, BullMQ, AWS Step Functions).
- Media/image processing using tools like Sharp, PIL, ImageMagick, or OpenCV.
- Database design and optimization for low-latency, high-availability systems.

Frontend Development (2-3 years)
- Hands-on experience with React and TypeScript in modern web apps.
- Familiarity with Redux, Context API, and modern state management patterns.
- Comfort with modern build tools, CI/CD, and frontend deployment practices.

System Design & Architecture (4-6 years)
- Designing and implementing microservices-based systems.
- Experience with event-driven architectures using queues or pub/sub.
- Implementing caching strategies (e.g., Redis, CDN edge caching).
- Architecting high-performance image/media pipelines.

Leadership & Communication (2-3 years)
- Proven ability to lead engineering teams and drive project delivery.
- Skill at writing clear and concise technical documentation.
- Experience mentoring engineers, conducting code reviews, and fostering growth.
- Track record of shipping high-impact products in fast-paced environments.
- Strong customer-centric, growth-oriented mindset, especially in startup settings: able to take high-level goals and independently drive toward outcomes without constant handoffs or back-and-forth with the founder.
- Proactive use of tools like ChatGPT, GitHub Copilot, or similar AI copilots to improve personal and team efficiency, remove blockers, and iterate faster.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal apart from this one. Depending on the assessments you clear, you can apply for them as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 2 days ago

Apply

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Source: LinkedIn

🚀 We're Hiring! Java Full Stack Developers

GrayOpus Technologies is growing, and we're on the lookout for passionate Java Full Stack Developers to join our dynamic team!

Location: Noida Sec-62
Employment Type: Full-Time
Experience: 5-9 Years
Working Days: 5 days a week
Salary: As per market standards, negotiable

About the Role
We are seeking an experienced Full Stack Java Developer with 5 to 9 years of experience to join our dynamic engineering team. The ideal candidate has strong backend development skills in Java and Spring Boot, along with frontend expertise in frameworks like Angular or React. You will play a key role in designing and delivering scalable, high-performance web applications.

Key Responsibilities
- Design, develop, and maintain enterprise-grade web applications.
- Build secure and scalable RESTful APIs and microservices.
- Implement responsive and dynamic UIs using modern frontend frameworks.
- Optimize applications for maximum speed and scalability.
- Collaborate closely with product managers, designers, and QA teams.
- Review and improve code quality through code reviews and mentoring.
- Ensure adherence to software engineering best practices and standards.
- Bring prior experience working as a team lead, not only as an individual contributor.

Technical Skills Required
Backend:
- Strong command of Java (8+), Spring Boot, JPA/Hibernate.
- Expertise in REST API development and microservices architecture.
Frontend:
- Experience with Angular 8+, ReactJS, or similar frameworks.
- Solid understanding of HTML5, CSS3, JavaScript, TypeScript.
Databases:
- Hands-on experience with MySQL, PostgreSQL, or MongoDB.
DevOps & Tools:
- Familiarity with Git, Maven/Gradle, Jenkins, Docker.
- Basic understanding of Kubernetes and CI/CD pipelines.
Cloud (Preferred):
- Exposure to AWS / Azure / GCP services.

Preferred Qualifications
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Experience working in Agile/Scrum environments.
- Strong analytical and problem-solving skills.
- Ability to take ownership and deliver independently.
- Good communication and collaboration skills.

Nice to Have (Especially for the 5-9 Years Range)
- Team lead or mentoring experience.
- Experience in system design and architecture.
- Exposure to performance optimization and security best practices.
- Working knowledge of message queues like RabbitMQ or Kafka.
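Illustration (not part of the posting): the role lists message queues like RabbitMQ, so here is a minimal publish/consume sketch against a RabbitMQ broker. Python (with the pika client) is used for consistency with the other examples on this page; the queue name, payload, and local broker are hypothetical.

```python
# Illustrative sketch: publish and consume one message via RabbitMQ,
# one of the message queues this posting names. Requires: pip install pika,
# plus a RabbitMQ broker reachable on localhost.
import pika

params = pika.ConnectionParameters(host="localhost")  # assumes a local broker

# Producer: declare a durable queue and publish one persistent message.
with pika.BlockingConnection(params) as conn:
    channel = conn.channel()
    channel.queue_declare(queue="orders", durable=True)
    channel.basic_publish(
        exchange="",
        routing_key="orders",
        body=b'{"order_id": 1, "status": "created"}',
        properties=pika.BasicProperties(delivery_mode=2),  # persist the message
    )

# Consumer: fetch one message, if any, and acknowledge it.
with pika.BlockingConnection(params) as conn:
    channel = conn.channel()
    method, _props, body = channel.basic_get(queue="orders", auto_ack=False)
    if method:
        print("received:", body)
        channel.basic_ack(method.delivery_tag)
```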

Posted 2 days ago

Apply

0.0 - 3.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

Source: Indeed

We are seeking a skilled Backend/API Developer to join our dynamic team. The ideal candidate will have a strong background in backend development using Java, Maven, Spring Boot, and Azure SQL, along with experience integrating external APIs. You will be responsible for developing and maintaining robust backend systems that support our applications and services.

Key Responsibilities:
- Backend Development: Design, develop, and maintain backend systems using Java, Maven, Spring Boot, Node.js, integration middleware, Enterprise Service Bus, etc. Develop and consume SOAP and REST APIs supporting various formats, including but not limited to XML and JSON.
- Database Management: Use Azure SQL for database management, ensuring data integrity and performance.
- API Integration: Integrate external APIs to enhance application functionality and ensure seamless data exchange.
- Performance Optimization: Optimize backend performance for speed and scalability, ensuring compatibility across various systems.
- Testing and Debugging: Conduct thorough testing and debugging to ensure system stability and performance.
- Stay Updated: Keep up to date with the latest trends and advancements in backend development and API integration.

Requirements:
- Experience: Minimum of 3 to 5 years of proven experience as a Backend/API Developer or in a similar role, with a strong portfolio of backend systems developed using Java, Maven, Spring Boot, and Azure SQL.
- Technical Skills: Proficiency in the Java programming language, the Maven build automation tool, the Spring Boot framework, and Azure SQL database management. Experience integrating external APIs. Familiarity with cloud platforms (e.g., Microsoft Azure, AWS, Google Cloud). Knowledge of microservices architecture and containerization (e.g., Docker, Kubernetes). Understanding of CI/CD pipelines and tools (e.g., Jenkins, GitLab CI). Experience with version control systems (e.g., Git). Knowledge of security best practices in backend development.
- Soft Skills: Strong problem-solving and analytical skills. Excellent communication and teamwork abilities. Ability to work in an agile development environment. Attention to detail and a proactive approach to identifying and resolving issues.
- Education: Degree in Computer Science, Engineering, or a related field (preferred).

Job Type: Full-time
Pay: ₹20,000.00 - ₹40,000.00 per month
Benefits: Provident Fund
Schedule: Day shift
Supplemental Pay: Yearly bonus
Education: Bachelor's (Preferred)
Experience: 3 years (Preferred)
Language: Hindi (Preferred)
Location: Chennai, Tamil Nadu (Preferred)
Shift availability: Day Shift (Preferred)
Work Location: In person
Application Deadline: 01/07/2025
Expected Start Date: 17/06/2025
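Illustration (not part of the posting): the role centers on consuming APIs in both JSON and XML, so here is a minimal Python sketch of each. The endpoint URL and payload shapes are hypothetical; Python is used for consistency with the other examples on this page.

```python
# Illustrative sketch: consume an API in both JSON and XML, the two formats
# this posting names. The base URL and field names are hypothetical.
import requests
import xml.etree.ElementTree as ET

BASE = "https://api.example.com"  # hypothetical service


def fetch_order_json(order_id: int) -> dict:
    """REST/JSON: GET a resource and decode the JSON body."""
    resp = requests.get(f"{BASE}/orders/{order_id}", timeout=30)
    resp.raise_for_status()
    return resp.json()


def parse_order_xml(xml_text: str) -> dict:
    """XML (e.g., a SOAP-style body): pull fields from a response like
    <order><id>7</id><status>shipped</status></order>"""
    root = ET.fromstring(xml_text)
    return {"id": root.findtext("id"), "status": root.findtext("status")}


if __name__ == "__main__":
    print(parse_order_xml("<order><id>7</id><status>shipped</status></order>"))
```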

Posted 2 days ago

Apply

7.0 years

40 Lacs

Chennai, Tamil Nadu, India

Remote

Linkedin logo

Experience: 7.00+ years
Salary: INR 4000000.00 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-Time Permanent position (Payroll and Compliance to be managed by: MatchMove)
(*Note: This is a requirement for one of Uplers' clients - MatchMove)

What do you need for this opportunity?
Must have skills required: Gen AI, AWS data stack, Kinesis, open table format, PySpark, stream processing, Kafka, MySQL, Python

MatchMove is Looking for: Technical Lead - Data Platform
In this role, you will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight.

You will contribute to:
Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services.
Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark.
Structuring and evolving data into open table formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services.
Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases.
Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment.
Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM).
Using Generative AI tools to enhance developer productivity, including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights.
Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines.

Responsibilities:
Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR.
Manage ingestion from transactional sources using AWS DMS, with a focus on schema drift handling and low-latency replication.
Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3 and cataloged with Glue and Lake Formation.
Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g., fraud engines) and human-readable interfaces (e.g., dashboards).
Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations.
Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership.
Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform.

Requirements:
At least 7 years of experience in data engineering.
Deep hands-on experience with the AWS data stack: Glue (Jobs & Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum.
Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs.
Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation.
Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale.
Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions.
Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments.
Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains, with strong engineering hygiene.
Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders.

Brownie Points:
Experience working in a PCI DSS or other central-bank-regulated environment with audit logging and data retention requirements.
Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection.
Familiarity with data contracts, data mesh patterns, and data-as-a-product principles.
Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases.
Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3.
Experience building data platforms for ML/AI teams or integrating with model feature stores.

Engagement Model: Direct placement with the client. This is a remote role. Shift timings: 10 AM to 7 PM.

How to apply for this opportunity?
Step 1: Click On Apply! And Register or Login on our portal.
Step 2: Complete the Screening Form & Upload updated Resume.
Step 3: Increase your chances to get shortlisted & meet the client for the Interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.)
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
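For readers unfamiliar with the stack this listing names, here is a minimal PySpark sketch of the kind of pipeline step it describes: writing a batch DataFrame into an Apache Iceberg table registered in the AWS Glue catalog. The bucket, database, table, and raw-zone path are invented, and the cluster is assumed to have the Iceberg Spark runtime and AWS bundle jars on its classpath.

from pyspark.sql import SparkSession
from pyspark.sql.functions import col, current_date

# Assumes the Iceberg Spark runtime + AWS bundle jars are on the classpath.
spark = (
    SparkSession.builder.appName("iceberg-batch-ingest")
    # Register an Iceberg catalog named "glue" backed by the AWS Glue Data Catalog.
    .config("spark.sql.catalog.glue", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.glue.catalog-impl", "org.apache.iceberg.aws.glue.GlueCatalog")
    .config("spark.sql.catalog.glue.warehouse", "s3://example-data-lake/warehouse")  # hypothetical bucket
    .getOrCreate()
)

# Hypothetical raw zone produced by DMS replication or a Kinesis consumer.
df = (
    spark.read.json("s3://example-data-lake/raw/transactions/")
    .withColumn("ingest_date", current_date())
)

# Write to an Iceberg table; date partitioning keeps Athena scans cheap
# and Iceberg snapshots enable the time-travel queries the posting mentions.
(
    df.writeTo("glue.analytics.transactions")
    .partitionedBy(col("ingest_date"))
    .createOrReplace()
)

Once written, past snapshots can be read back via Iceberg's as-of-timestamp read option, which is the time-travel capability the listing refers to.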

Posted 2 days ago

Apply

6.0 years

60 - 65 Lacs

Coimbatore, Tamil Nadu, India

Remote

Linkedin logo

Experience: 6.00+ years
Salary: INR 6000000-6500000 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-Time Permanent position (Payroll and Compliance to be managed by: Crop.Photo)
(*Note: This is a requirement for one of Uplers' clients - Crop.Photo)

What do you need for this opportunity?
Must have skills required: ML, Python

Crop.Photo is Looking for:
We're looking for a hands-on engineering lead to own the delivery of our GenAI-centric product from the backend up to the UI, while integrating visual AI pipelines built by ML engineers. You'll be both a builder and a leader: writing clean Python, Java, and TypeScript, scaling AWS-based systems, mentoring engineers, and making architectural decisions that stand the test of scale. You won't be working in a silo; this is a role for someone who thrives in fast-paced, high-context environments with product, design, and AI deeply intertwined.
(Note: This role requires both technical mastery and leadership skills - we're looking for someone who can write production code, make architectural decisions, and lead a team to success.)

What You'll Do
Lead development of our Java, Python (FastAPI), and Node.js backend services on AWS.
Deploy ML pipelines (built by the ML team) into containerized inference workflows using FastAPI, Docker, and GPU-enabled ECS on EC2.
Deploy and manage services on AWS ECS/Fargate, Lambda, API Gateway, and GPU-powered EC2.
Contribute to the React/TypeScript frontend when needed to accelerate product delivery.
Work closely with the founder, product, and UX team to translate business needs into working product.
Make architecture and infrastructure decisions, from media processing to task queues to storage.
Own the performance, reliability, and cost-efficiency of our core services.
Hire and mentor junior/mid engineers over time.
Drive technical planning, sprint prioritization, and trade-off decisions.

What We Value
A customer-centric approach: you think about how your work affects end users and product experience, not just model performance.
A quest for high-quality deliverables: you write clean, tested code and debug edge cases until they're truly fixed.
The ability to frame problems from scratch and work without strict handoffs: you build from a goal, not a ticket.

Skills & Experience We Expect
Core Engineering Experience
6–8 years of professional software engineering experience in production environments.
2–3 years of experience leading engineering teams of 5+ engineers.
Cloud Infrastructure & AWS Expertise (5+ years)
Deep experience with AWS Lambda, ECS, and container orchestration tools.
Familiarity with API Gateway and microservices architecture best practices.
Proficient with S3, DynamoDB, and other AWS-native data services.
CloudWatch, X-Ray, or similar tools for monitoring and debugging distributed systems.
Strong grasp of IAM, roles, and security best practices in cloud environments.
Backend Development (5–7 years)
Java: advanced concurrency, scalability, and microservice design.
Python: experience with FastAPI and building production-grade MLOps pipelines.
Node.js & TypeScript: strong backend engineering and API development.
Deep understanding of RESTful API design and implementation.
Docker: 3+ years of containerization experience for building/deploying services.
Hands-on experience (2+ years) deploying ML inference pipelines (built by the ML team) using Docker, FastAPI, and GPU-based AWS infrastructure (e.g., ECS, EC2).
System Optimization & Middleware (3–5 years)
Application performance optimization and AWS cloud cost optimization.
Use of background job frameworks (e.g., Celery, BullMQ, AWS Step Functions).
Media/image processing using tools like Sharp, PIL, ImageMagick, or OpenCV.
Database design and optimization for low-latency and high-availability systems.
Frontend Development (2–3 years)
Hands-on experience with React and TypeScript in modern web apps.
Familiarity with Redux, Context API, and modern state management patterns.
Comfortable with modern build tools, CI/CD, and frontend deployment practices.
System Design & Architecture (4–6 years)
Designing and implementing microservices-based systems.
Experience with event-driven architectures using queues or pub/sub.
Implementing caching strategies (e.g., Redis, CDN edge caching).
Architecting high-performance image/media pipelines.
Leadership & Communication (2–3 years)
Proven ability to lead engineering teams and drive project delivery.
Skilled at writing clear and concise technical documentation.
Experience mentoring engineers, conducting code reviews, and fostering growth.
Track record of shipping high-impact products in fast-paced environments.
Strong customer-centric and growth-oriented mindset, especially in startup settings: able to take high-level goals and independently drive toward outcomes without requiring constant handoffs or back-and-forth with the founder.
Proactive in using tools like ChatGPT, GitHub Copilot, or similar AI copilots to improve personal and team efficiency, remove blockers, and iterate faster.

How to apply for this opportunity?
Step 1: Click On Apply! And Register or Login on our portal.
Step 2: Complete the Screening Form & Upload updated Resume.
Step 3: Increase your chances to get shortlisted & meet the client for the Interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.)
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
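As a concrete illustration of the inference-deployment work this listing describes, here is a minimal FastAPI sketch of a containerized prediction endpoint. The model object is a stand-in for whatever artifact the ML team would actually ship; the route, names, and response schema are invented for illustration.

from fastapi import FastAPI, File, UploadFile
from pydantic import BaseModel

app = FastAPI(title="visual-ai-inference")

class Prediction(BaseModel):
    label: str
    score: float

class StubModel:
    """Stand-in for the ML team's real model artifact (e.g., weights loaded from S3)."""
    def predict(self, payload: bytes) -> Prediction:
        return Prediction(label="placeholder", score=0.99)

model = StubModel()  # in production, load the model once at startup

@app.post("/predict", response_model=Prediction)
async def predict(file: UploadFile = File(...)) -> Prediction:
    payload = await file.read()
    return model.predict(payload)

# Run locally with: uvicorn app:app --host 0.0.0.0 --port 8000

Packaged into a Docker image, this is the shape of service one would place behind API Gateway on GPU-backed ECS, as the posting outlines.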

Posted 2 days ago

Apply

6.0 years

60 - 65 Lacs

Chennai, Tamil Nadu, India

Remote

Linkedin logo

Experience: 6.00+ years
Salary: INR 6000000-6500000 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-Time Permanent position (Payroll and Compliance to be managed by: Crop.Photo)
(*Note: This is a requirement for one of Uplers' clients - Crop.Photo)

What do you need for this opportunity?
Must have skills required: ML, Python

Crop.Photo is Looking for:
We're looking for a hands-on engineering lead to own the delivery of our GenAI-centric product from the backend up to the UI, while integrating visual AI pipelines built by ML engineers. You'll be both a builder and a leader: writing clean Python, Java, and TypeScript, scaling AWS-based systems, mentoring engineers, and making architectural decisions that stand the test of scale. You won't be working in a silo; this is a role for someone who thrives in fast-paced, high-context environments with product, design, and AI deeply intertwined.
(Note: This role requires both technical mastery and leadership skills - we're looking for someone who can write production code, make architectural decisions, and lead a team to success.)

What You'll Do
Lead development of our Java, Python (FastAPI), and Node.js backend services on AWS.
Deploy ML pipelines (built by the ML team) into containerized inference workflows using FastAPI, Docker, and GPU-enabled ECS on EC2.
Deploy and manage services on AWS ECS/Fargate, Lambda, API Gateway, and GPU-powered EC2.
Contribute to the React/TypeScript frontend when needed to accelerate product delivery.
Work closely with the founder, product, and UX team to translate business needs into working product.
Make architecture and infrastructure decisions, from media processing to task queues to storage.
Own the performance, reliability, and cost-efficiency of our core services.
Hire and mentor junior/mid engineers over time.
Drive technical planning, sprint prioritization, and trade-off decisions.

What We Value
A customer-centric approach: you think about how your work affects end users and product experience, not just model performance.
A quest for high-quality deliverables: you write clean, tested code and debug edge cases until they're truly fixed.
The ability to frame problems from scratch and work without strict handoffs: you build from a goal, not a ticket.

Skills & Experience We Expect
Core Engineering Experience
6–8 years of professional software engineering experience in production environments.
2–3 years of experience leading engineering teams of 5+ engineers.
Cloud Infrastructure & AWS Expertise (5+ years)
Deep experience with AWS Lambda, ECS, and container orchestration tools.
Familiarity with API Gateway and microservices architecture best practices.
Proficient with S3, DynamoDB, and other AWS-native data services.
CloudWatch, X-Ray, or similar tools for monitoring and debugging distributed systems.
Strong grasp of IAM, roles, and security best practices in cloud environments.
Backend Development (5–7 years)
Java: advanced concurrency, scalability, and microservice design.
Python: experience with FastAPI and building production-grade MLOps pipelines.
Node.js & TypeScript: strong backend engineering and API development.
Deep understanding of RESTful API design and implementation.
Docker: 3+ years of containerization experience for building/deploying services.
Hands-on experience (2+ years) deploying ML inference pipelines (built by the ML team) using Docker, FastAPI, and GPU-based AWS infrastructure (e.g., ECS, EC2).
System Optimization & Middleware (3–5 years)
Application performance optimization and AWS cloud cost optimization.
Use of background job frameworks (e.g., Celery, BullMQ, AWS Step Functions).
Media/image processing using tools like Sharp, PIL, ImageMagick, or OpenCV.
Database design and optimization for low-latency and high-availability systems.
Frontend Development (2–3 years)
Hands-on experience with React and TypeScript in modern web apps.
Familiarity with Redux, Context API, and modern state management patterns.
Comfortable with modern build tools, CI/CD, and frontend deployment practices.
System Design & Architecture (4–6 years)
Designing and implementing microservices-based systems.
Experience with event-driven architectures using queues or pub/sub.
Implementing caching strategies (e.g., Redis, CDN edge caching).
Architecting high-performance image/media pipelines.
Leadership & Communication (2–3 years)
Proven ability to lead engineering teams and drive project delivery.
Skilled at writing clear and concise technical documentation.
Experience mentoring engineers, conducting code reviews, and fostering growth.
Track record of shipping high-impact products in fast-paced environments.
Strong customer-centric and growth-oriented mindset, especially in startup settings: able to take high-level goals and independently drive toward outcomes without requiring constant handoffs or back-and-forth with the founder.
Proactive in using tools like ChatGPT, GitHub Copilot, or similar AI copilots to improve personal and team efficiency, remove blockers, and iterate faster.

How to apply for this opportunity?
Step 1: Click On Apply! And Register or Login on our portal.
Step 2: Complete the Screening Form & Upload updated Resume.
Step 3: Increase your chances to get shortlisted & meet the client for the Interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.)
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 2 days ago

Apply

6.0 years

60 - 65 Lacs

Vellore, Tamil Nadu, India

Remote

Linkedin logo

Experience: 6.00+ years
Salary: INR 6000000-6500000 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-Time Permanent position (Payroll and Compliance to be managed by: Crop.Photo)
(*Note: This is a requirement for one of Uplers' clients - Crop.Photo)

What do you need for this opportunity?
Must have skills required: ML, Python

Crop.Photo is Looking for:
We're looking for a hands-on engineering lead to own the delivery of our GenAI-centric product from the backend up to the UI, while integrating visual AI pipelines built by ML engineers. You'll be both a builder and a leader: writing clean Python, Java, and TypeScript, scaling AWS-based systems, mentoring engineers, and making architectural decisions that stand the test of scale. You won't be working in a silo; this is a role for someone who thrives in fast-paced, high-context environments with product, design, and AI deeply intertwined.
(Note: This role requires both technical mastery and leadership skills - we're looking for someone who can write production code, make architectural decisions, and lead a team to success.)

What You'll Do
Lead development of our Java, Python (FastAPI), and Node.js backend services on AWS.
Deploy ML pipelines (built by the ML team) into containerized inference workflows using FastAPI, Docker, and GPU-enabled ECS on EC2.
Deploy and manage services on AWS ECS/Fargate, Lambda, API Gateway, and GPU-powered EC2.
Contribute to the React/TypeScript frontend when needed to accelerate product delivery.
Work closely with the founder, product, and UX team to translate business needs into working product.
Make architecture and infrastructure decisions, from media processing to task queues to storage.
Own the performance, reliability, and cost-efficiency of our core services.
Hire and mentor junior/mid engineers over time.
Drive technical planning, sprint prioritization, and trade-off decisions.

What We Value
A customer-centric approach: you think about how your work affects end users and product experience, not just model performance.
A quest for high-quality deliverables: you write clean, tested code and debug edge cases until they're truly fixed.
The ability to frame problems from scratch and work without strict handoffs: you build from a goal, not a ticket.

Skills & Experience We Expect
Core Engineering Experience
6–8 years of professional software engineering experience in production environments.
2–3 years of experience leading engineering teams of 5+ engineers.
Cloud Infrastructure & AWS Expertise (5+ years)
Deep experience with AWS Lambda, ECS, and container orchestration tools.
Familiarity with API Gateway and microservices architecture best practices.
Proficient with S3, DynamoDB, and other AWS-native data services.
CloudWatch, X-Ray, or similar tools for monitoring and debugging distributed systems.
Strong grasp of IAM, roles, and security best practices in cloud environments.
Backend Development (5–7 years)
Java: advanced concurrency, scalability, and microservice design.
Python: experience with FastAPI and building production-grade MLOps pipelines.
Node.js & TypeScript: strong backend engineering and API development.
Deep understanding of RESTful API design and implementation.
Docker: 3+ years of containerization experience for building/deploying services.
Hands-on experience (2+ years) deploying ML inference pipelines (built by the ML team) using Docker, FastAPI, and GPU-based AWS infrastructure (e.g., ECS, EC2).
System Optimization & Middleware (3–5 years)
Application performance optimization and AWS cloud cost optimization.
Use of background job frameworks (e.g., Celery, BullMQ, AWS Step Functions).
Media/image processing using tools like Sharp, PIL, ImageMagick, or OpenCV.
Database design and optimization for low-latency and high-availability systems.
Frontend Development (2–3 years)
Hands-on experience with React and TypeScript in modern web apps.
Familiarity with Redux, Context API, and modern state management patterns.
Comfortable with modern build tools, CI/CD, and frontend deployment practices.
System Design & Architecture (4–6 years)
Designing and implementing microservices-based systems.
Experience with event-driven architectures using queues or pub/sub.
Implementing caching strategies (e.g., Redis, CDN edge caching).
Architecting high-performance image/media pipelines.
Leadership & Communication (2–3 years)
Proven ability to lead engineering teams and drive project delivery.
Skilled at writing clear and concise technical documentation.
Experience mentoring engineers, conducting code reviews, and fostering growth.
Track record of shipping high-impact products in fast-paced environments.
Strong customer-centric and growth-oriented mindset, especially in startup settings: able to take high-level goals and independently drive toward outcomes without requiring constant handoffs or back-and-forth with the founder.
Proactive in using tools like ChatGPT, GitHub Copilot, or similar AI copilots to improve personal and team efficiency, remove blockers, and iterate faster.

How to apply for this opportunity?
Step 1: Click On Apply! And Register or Login on our portal.
Step 2: Complete the Screening Form & Upload updated Resume.
Step 3: Increase your chances to get shortlisted & meet the client for the Interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.)
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 2 days ago

Apply

6.0 years

60 - 65 Lacs

Madurai, Tamil Nadu, India

Remote

Linkedin logo

Experience: 6.00+ years
Salary: INR 6000000-6500000 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-Time Permanent position (Payroll and Compliance to be managed by: Crop.Photo)
(*Note: This is a requirement for one of Uplers' clients - Crop.Photo)

What do you need for this opportunity?
Must have skills required: ML, Python

Crop.Photo is Looking for:
We're looking for a hands-on engineering lead to own the delivery of our GenAI-centric product from the backend up to the UI, while integrating visual AI pipelines built by ML engineers. You'll be both a builder and a leader: writing clean Python, Java, and TypeScript, scaling AWS-based systems, mentoring engineers, and making architectural decisions that stand the test of scale. You won't be working in a silo; this is a role for someone who thrives in fast-paced, high-context environments with product, design, and AI deeply intertwined.
(Note: This role requires both technical mastery and leadership skills - we're looking for someone who can write production code, make architectural decisions, and lead a team to success.)

What You'll Do
Lead development of our Java, Python (FastAPI), and Node.js backend services on AWS.
Deploy ML pipelines (built by the ML team) into containerized inference workflows using FastAPI, Docker, and GPU-enabled ECS on EC2.
Deploy and manage services on AWS ECS/Fargate, Lambda, API Gateway, and GPU-powered EC2.
Contribute to the React/TypeScript frontend when needed to accelerate product delivery.
Work closely with the founder, product, and UX team to translate business needs into working product.
Make architecture and infrastructure decisions, from media processing to task queues to storage.
Own the performance, reliability, and cost-efficiency of our core services.
Hire and mentor junior/mid engineers over time.
Drive technical planning, sprint prioritization, and trade-off decisions.

What We Value
A customer-centric approach: you think about how your work affects end users and product experience, not just model performance.
A quest for high-quality deliverables: you write clean, tested code and debug edge cases until they're truly fixed.
The ability to frame problems from scratch and work without strict handoffs: you build from a goal, not a ticket.

Skills & Experience We Expect
Core Engineering Experience
6–8 years of professional software engineering experience in production environments.
2–3 years of experience leading engineering teams of 5+ engineers.
Cloud Infrastructure & AWS Expertise (5+ years)
Deep experience with AWS Lambda, ECS, and container orchestration tools.
Familiarity with API Gateway and microservices architecture best practices.
Proficient with S3, DynamoDB, and other AWS-native data services.
CloudWatch, X-Ray, or similar tools for monitoring and debugging distributed systems.
Strong grasp of IAM, roles, and security best practices in cloud environments.
Backend Development (5–7 years)
Java: advanced concurrency, scalability, and microservice design.
Python: experience with FastAPI and building production-grade MLOps pipelines.
Node.js & TypeScript: strong backend engineering and API development.
Deep understanding of RESTful API design and implementation.
Docker: 3+ years of containerization experience for building/deploying services.
Hands-on experience (2+ years) deploying ML inference pipelines (built by the ML team) using Docker, FastAPI, and GPU-based AWS infrastructure (e.g., ECS, EC2).
System Optimization & Middleware (3–5 years)
Application performance optimization and AWS cloud cost optimization.
Use of background job frameworks (e.g., Celery, BullMQ, AWS Step Functions).
Media/image processing using tools like Sharp, PIL, ImageMagick, or OpenCV.
Database design and optimization for low-latency and high-availability systems.
Frontend Development (2–3 years)
Hands-on experience with React and TypeScript in modern web apps.
Familiarity with Redux, Context API, and modern state management patterns.
Comfortable with modern build tools, CI/CD, and frontend deployment practices.
System Design & Architecture (4–6 years)
Designing and implementing microservices-based systems.
Experience with event-driven architectures using queues or pub/sub.
Implementing caching strategies (e.g., Redis, CDN edge caching).
Architecting high-performance image/media pipelines.
Leadership & Communication (2–3 years)
Proven ability to lead engineering teams and drive project delivery.
Skilled at writing clear and concise technical documentation.
Experience mentoring engineers, conducting code reviews, and fostering growth.
Track record of shipping high-impact products in fast-paced environments.
Strong customer-centric and growth-oriented mindset, especially in startup settings: able to take high-level goals and independently drive toward outcomes without requiring constant handoffs or back-and-forth with the founder.
Proactive in using tools like ChatGPT, GitHub Copilot, or similar AI copilots to improve personal and team efficiency, remove blockers, and iterate faster.

How to apply for this opportunity?
Step 1: Click On Apply! And Register or Login on our portal.
Step 2: Complete the Screening Form & Upload updated Resume.
Step 3: Increase your chances to get shortlisted & meet the client for the Interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.)
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 2 days ago

Apply

5.0 - 10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

Company: Global Technology organization
Key Skills: Azure, DevOps, Bicep, ARM, Terraform, PowerShell, Azure CLI, Git, Networking, CI/CD, IaC, Azure Monitor, AKS, Security & Compliance, Automation, Cost Optimization.

Roles and Responsibilities:
Architect and manage Azure cloud infrastructure for scalability, high availability, and cost efficiency.
Deploy and maintain Azure services such as Virtual Machines, App Services, Kubernetes (AKS), Storage, and Databases.
Implement networking solutions like Virtual Networks, VPN Gateways, NSGs, and Private Endpoints.
Design, build, and maintain Azure DevOps pipelines for automated deployments.
Implement GitOps and branching strategies to streamline development workflows.
Ensure efficient release management and deployment automation using Azure DevOps, GitHub Actions, or Jenkins.
Write, maintain, and optimize Bicep / ARM / Terraform templates for infrastructure provisioning.
Automate resource deployment and configuration management using Azure CLI, PowerShell, etc.
Implement Azure security best practices, including RBAC, Managed Identities, Key Vault, and Azure Policy.
Monitor and enforce network security with NSGs, Azure Firewall, and DDoS protection.
Ensure compliance with security frameworks such as CIS, NIST, and ASB.
Conduct security audits and vulnerability assessments, and enforce least-privilege access controls.
Set up Azure Monitor, Log Analytics, and Application Insights for performance tracking and alerting.
Optimize infrastructure for cost efficiency and performance using Azure Advisor and Cost Management.
Troubleshoot and resolve infrastructure-related incidents in production and staging environments.

Experience Requirement:
5-10 years of experience with Azure services including Compute, Networking, Storage, and Security.
Expertise in Infrastructure as Code (IaC) using Bicep / ARM / Terraform (Bicep / ARM templates experience is a plus).
Proficiency in managing and optimizing CI/CD pipelines in Azure DevOps.
In-depth knowledge of networking concepts (VNETs, Subnets, DNS, Load Balancers, VPNs).
Proficiency in scripting with PowerShell, Azure CLI, or Python for automation.
Strong knowledge of Git and version control best practices.
Experience in implementing security and compliance frameworks (CIS, NIST, ASB) in cloud environments.
Experience in monitoring and cost optimization using Azure-native tools.
Previous experience in troubleshooting and managing staging/production infrastructure incidents.

Education: Bachelor's degree.
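The provisioning work this listing describes would normally live in Bicep or Terraform templates, but since Python is named as an accepted automation language, here is a hedged sketch using the Azure SDK for Python to create a tagged resource group. The subscription ID, resource group name, region, and tags are all placeholders.

# Requires: pip install azure-identity azure-mgmt-resource
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

# DefaultAzureCredential resolves environment variables, a managed
# identity, or a local `az login` session, in that spirit of RBAC-first auth.
credential = DefaultAzureCredential()
subscription_id = "00000000-0000-0000-0000-000000000000"  # placeholder

client = ResourceManagementClient(credential, subscription_id)

# Idempotent create-or-update; tags like these feed the cost-optimization
# reporting the role calls out.
rg = client.resource_groups.create_or_update(
    "rg-demo-dev",  # hypothetical resource group name
    {"location": "eastus", "tags": {"env": "dev", "costCenter": "demo"}},
)
print(rg.name, rg.location)

In practice this same resource would more likely be declared once in a Bicep or Terraform module and applied through an Azure DevOps pipeline, with the SDK reserved for ad hoc automation.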

Posted 2 days ago

Apply

8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

Job Overview:
You'll design and implement scalable solutions for award-winning platforms like LMX and MAX, automating media transactions and bridging media buyers and sellers. Work in an Agile, POD-based model to revolutionize the role of data and technology in OOH advertising.

What You'll Do:
Architect scalable solutions aligned with business goals and market needs.
Lead Agile POD teams to deliver iterative, high-impact solutions.
Enhance products with advanced features like dynamic rate cards and inventory mapping.
Ensure best practices in security, scalability, and performance.

What You Bring:
Strong expertise in cloud-based architectures, API integrations, and data analytics.
Proven experience in Agile environments and POD-based execution.
Technical proficiency in Java, Angular, Python, and AWS.

Required Skills:
8+ years of experience as a Solution Architect.
Bachelor's/Master's in Computer Science or a related field.
Proficiency in Java, Angular, Python, MongoDB, SQL, NoSQL, and AWS.
Strong understanding of Agile methodologies and POD-based execution.

Tech Stack:
Languages: Java, Python
Frontend: Angular
Databases: MongoDB, SQL, NoSQL
Cloud: AWS
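To make the "dynamic rate cards" feature concrete, here is a small hedged sketch in Python with PyMongo, one of the stack components the listing names. The collection name, document schema, and pricing fields are invented for illustration and do not reflect LMX/MAX internals.

from datetime import datetime, timezone
from typing import Optional
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder connection string
rates = client["ooh"]["rate_cards"]                # hypothetical database/collection

def current_rate(screen_id: str, when: datetime) -> Optional[float]:
    """Return the price for a screen at a point in time, using validity windows."""
    doc = rates.find_one(
        {
            "screen_id": screen_id,
            "valid_from": {"$lte": when},
            "valid_to": {"$gte": when},
        },
        sort=[("valid_from", -1)],  # prefer the most recently effective card
    )
    return doc["price_per_slot"] if doc else None

if __name__ == "__main__":
    print(current_rate("SCR-001", datetime.now(timezone.utc)))

A compound index on (screen_id, valid_from, valid_to) would keep such lookups cheap as the inventory catalog grows.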

Posted 2 days ago

Apply

7.0 years

40 Lacs

Coimbatore, Tamil Nadu, India

Remote

Linkedin logo

Experience: 7.00+ years
Salary: INR 4000000.00 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-Time Permanent position (Payroll and Compliance to be managed by: MatchMove)
(*Note: This is a requirement for one of Uplers' clients - MatchMove)

What do you need for this opportunity?
Must have skills required: Gen AI, AWS data stack, Kinesis, open table format, PySpark, stream processing, Kafka, MySQL, Python

MatchMove is Looking for: Technical Lead - Data Platform
In this role, you will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight.

You will contribute to:
Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services.
Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark.
Structuring and evolving data into open table formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services.
Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases.
Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment.
Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM).
Using Generative AI tools to enhance developer productivity, including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights.
Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines.

Responsibilities:
Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR.
Manage ingestion from transactional sources using AWS DMS, with a focus on schema drift handling and low-latency replication.
Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3 and cataloged with Glue and Lake Formation.
Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g., fraud engines) and human-readable interfaces (e.g., dashboards).
Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations.
Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership.
Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform.

Requirements:
At least 7 years of experience in data engineering.
Deep hands-on experience with the AWS data stack: Glue (Jobs & Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum.
Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs.
Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation.
Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale.
Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions.
Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments.
Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains, with strong engineering hygiene.
Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders.

Brownie Points:
Experience working in a PCI DSS or other central-bank-regulated environment with audit logging and data retention requirements.
Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection.
Familiarity with data contracts, data mesh patterns, and data-as-a-product principles.
Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases.
Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3.
Experience building data platforms for ML/AI teams or integrating with model feature stores.

Engagement Model: Direct placement with the client. This is a remote role. Shift timings: 10 AM to 7 PM.

How to apply for this opportunity?
Step 1: Click On Apply! And Register or Login on our portal.
Step 2: Complete the Screening Form & Upload updated Resume.
Step 3: Increase your chances to get shortlisted & meet the client for the Interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.)
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 2 days ago

Apply

7.0 years

40 Lacs

Vellore, Tamil Nadu, India

Remote

Linkedin logo

Experience: 7.00+ years
Salary: INR 4000000.00 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-Time Permanent position (Payroll and Compliance to be managed by: MatchMove)
(*Note: This is a requirement for one of Uplers' clients - MatchMove)

What do you need for this opportunity?
Must have skills required: Gen AI, AWS data stack, Kinesis, open table format, PySpark, stream processing, Kafka, MySQL, Python

MatchMove is Looking for: Technical Lead - Data Platform
In this role, you will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight.

You will contribute to:
Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services.
Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark.
Structuring and evolving data into open table formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services.
Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases.
Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment.
Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM).
Using Generative AI tools to enhance developer productivity, including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights.
Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines.

Responsibilities:
Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR.
Manage ingestion from transactional sources using AWS DMS, with a focus on schema drift handling and low-latency replication.
Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3 and cataloged with Glue and Lake Formation.
Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g., fraud engines) and human-readable interfaces (e.g., dashboards).
Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations.
Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership.
Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform.

Requirements:
At least 7 years of experience in data engineering.
Deep hands-on experience with the AWS data stack: Glue (Jobs & Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum.
Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs.
Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation.
Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale.
Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions.
Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments.
Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains, with strong engineering hygiene.
Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders.

Brownie Points:
Experience working in a PCI DSS or other central-bank-regulated environment with audit logging and data retention requirements.
Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection.
Familiarity with data contracts, data mesh patterns, and data-as-a-product principles.
Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases.
Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3.
Experience building data platforms for ML/AI teams or integrating with model feature stores.

Engagement Model: Direct placement with the client. This is a remote role. Shift timings: 10 AM to 7 PM.

How to apply for this opportunity?
Step 1: Click On Apply! And Register or Login on our portal.
Step 2: Complete the Screening Form & Upload updated Resume.
Step 3: Increase your chances to get shortlisted & meet the client for the Interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.)
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 2 days ago

Apply

7.0 years

40 Lacs

Madurai, Tamil Nadu, India

Remote

Linkedin logo

Experience: 7.00+ years
Salary: INR 4000000.00 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-Time Permanent position (Payroll and Compliance to be managed by: MatchMove)
(*Note: This is a requirement for one of Uplers' clients - MatchMove)

What do you need for this opportunity?
Must have skills required: Gen AI, AWS data stack, Kinesis, open table format, PySpark, stream processing, Kafka, MySQL, Python

MatchMove is Looking for: Technical Lead - Data Platform
In this role, you will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight.

You will contribute to:
Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services.
Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark.
Structuring and evolving data into open table formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services.
Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases.
Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment.
Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM).
Using Generative AI tools to enhance developer productivity, including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights.
Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines.

Responsibilities:
Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR.
Manage ingestion from transactional sources using AWS DMS, with a focus on schema drift handling and low-latency replication.
Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3 and cataloged with Glue and Lake Formation.
Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g., fraud engines) and human-readable interfaces (e.g., dashboards).
Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations.
Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership.
Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform.

Requirements:
At least 7 years of experience in data engineering.
Deep hands-on experience with the AWS data stack: Glue (Jobs & Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum.
Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs.
Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation.
Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale.
Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions.
Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments.
Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains, with strong engineering hygiene.
Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders.

Brownie Points:
Experience working in a PCI DSS or other central-bank-regulated environment with audit logging and data retention requirements.
Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection.
Familiarity with data contracts, data mesh patterns, and data-as-a-product principles.
Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases.
Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3.
Experience building data platforms for ML/AI teams or integrating with model feature stores.

Engagement Model: Direct placement with the client. This is a remote role. Shift timings: 10 AM to 7 PM.

How to apply for this opportunity?
Step 1: Click On Apply! And Register or Login on our portal.
Step 2: Complete the Screening Form & Upload updated Resume.
Step 3: Increase your chances to get shortlisted & meet the client for the Interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.)
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 2 days ago

Apply

6.0 years

60 - 65 Lacs

Surat, Gujarat, India

Remote

Linkedin logo

Experience: 6.00+ years
Salary: INR 6000000-6500000 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-Time Permanent position (Payroll and Compliance to be managed by: Crop.Photo)
(*Note: This is a requirement for one of Uplers' clients - Crop.Photo)

What do you need for this opportunity?
Must have skills required: ML, Python

Crop.Photo is Looking for:
We're looking for a hands-on engineering lead to own the delivery of our GenAI-centric product from the backend up to the UI, while integrating visual AI pipelines built by ML engineers. You'll be both a builder and a leader: writing clean Python, Java, and TypeScript, scaling AWS-based systems, mentoring engineers, and making architectural decisions that stand the test of scale. You won't be working in a silo; this is a role for someone who thrives in fast-paced, high-context environments with product, design, and AI deeply intertwined.
(Note: This role requires both technical mastery and leadership skills - we're looking for someone who can write production code, make architectural decisions, and lead a team to success.)

What You'll Do
Lead development of our Java, Python (FastAPI), and Node.js backend services on AWS.
Deploy ML pipelines (built by the ML team) into containerized inference workflows using FastAPI, Docker, and GPU-enabled ECS on EC2.
Deploy and manage services on AWS ECS/Fargate, Lambda, API Gateway, and GPU-powered EC2.
Contribute to the React/TypeScript frontend when needed to accelerate product delivery.
Work closely with the founder, product, and UX team to translate business needs into working product.
Make architecture and infrastructure decisions, from media processing to task queues to storage.
Own the performance, reliability, and cost-efficiency of our core services.
Hire and mentor junior/mid engineers over time.
Drive technical planning, sprint prioritization, and trade-off decisions.

What We Value
A customer-centric approach: you think about how your work affects end users and product experience, not just model performance.
A quest for high-quality deliverables: you write clean, tested code and debug edge cases until they're truly fixed.
The ability to frame problems from scratch and work without strict handoffs: you build from a goal, not a ticket.

Skills & Experience We Expect
Core Engineering Experience
6–8 years of professional software engineering experience in production environments.
2–3 years of experience leading engineering teams of 5+ engineers.
Cloud Infrastructure & AWS Expertise (5+ years)
Deep experience with AWS Lambda, ECS, and container orchestration tools.
Familiarity with API Gateway and microservices architecture best practices.
Proficient with S3, DynamoDB, and other AWS-native data services.
CloudWatch, X-Ray, or similar tools for monitoring and debugging distributed systems.
Strong grasp of IAM, roles, and security best practices in cloud environments.
Backend Development (5–7 years)
Java: advanced concurrency, scalability, and microservice design.
Python: experience with FastAPI and building production-grade MLOps pipelines.
Node.js & TypeScript: strong backend engineering and API development.
Deep understanding of RESTful API design and implementation.
Docker: 3+ years of containerization experience for building/deploying services.
Hands-on experience (2+ years) deploying ML inference pipelines (built by the ML team) using Docker, FastAPI, and GPU-based AWS infrastructure (e.g., ECS, EC2).
System Optimization & Middleware (3–5 years)
Application performance optimization and AWS cloud cost optimization.
Use of background job frameworks (e.g., Celery, BullMQ, AWS Step Functions).
Media/image processing using tools like Sharp, PIL, ImageMagick, or OpenCV.
Database design and optimization for low-latency and high-availability systems.
Frontend Development (2–3 years)
Hands-on experience with React and TypeScript in modern web apps.
Familiarity with Redux, Context API, and modern state management patterns.
Comfortable with modern build tools, CI/CD, and frontend deployment practices.
System Design & Architecture (4–6 years)
Designing and implementing microservices-based systems.
Experience with event-driven architectures using queues or pub/sub.
Implementing caching strategies (e.g., Redis, CDN edge caching).
Architecting high-performance image/media pipelines.
Leadership & Communication (2–3 years)
Proven ability to lead engineering teams and drive project delivery.
Skilled at writing clear and concise technical documentation.
Experience mentoring engineers, conducting code reviews, and fostering growth.
Track record of shipping high-impact products in fast-paced environments.
Strong customer-centric and growth-oriented mindset, especially in startup settings: able to take high-level goals and independently drive toward outcomes without requiring constant handoffs or back-and-forth with the founder.
Proactive in using tools like ChatGPT, GitHub Copilot, or similar AI copilots to improve personal and team efficiency, remove blockers, and iterate faster.

How to apply for this opportunity?
Step 1: Click On Apply! And Register or Login on our portal.
Step 2: Complete the Screening Form & Upload updated Resume.
Step 3: Increase your chances to get shortlisted & meet the client for the Interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.)
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 2 days ago

Apply

7.0 years

40 Lacs

Surat, Gujarat, India

Remote

Experience: 7.00+ years
Salary: INR 4000000.00 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full Time Permanent position (Payroll and Compliance to be managed by: MatchMove)
(*Note: This is a requirement for one of Uplers' clients - MatchMove)

What do you need for this opportunity?

Must have skills required: Gen AI, AWS data stack, Kinesis, open table format, Pyspark, stream processing, Kafka, MySQL, Python

MatchMove is Looking for:

As Technical Lead - Data Platform, you will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight.

You will contribute to:
- Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services
- Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark
- Structuring and evolving data into open table formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services (an illustrative PySpark-to-Iceberg sketch follows this listing)
- Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases
- Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment
- Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM)
- Using Generative AI tools to enhance developer productivity, including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights
- Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines

Responsibilities:
- Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR
- Manage ingestion from transactional sources using AWS DMS, with a focus on schema drift handling and low-latency replication
- Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3 and cataloged with Glue and Lake Formation
- Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g., fraud engines) and human-readable interfaces (e.g., dashboards)
- Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations
- Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership
- Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform

Requirements:
- At least 7 years of experience in data engineering
- Deep hands-on experience with the AWS data stack: Glue (Jobs & Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum
- Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs
- Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation
- Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale
- Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions
- Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments
- Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains, with strong engineering hygiene
- Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders

Brownie Points:
- Experience working in a PCI DSS or other central-bank-regulated environment with audit logging and data retention requirements
- Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection
- Familiarity with data contracts, data mesh patterns, and data-as-a-product principles
- Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases
- Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3
- Experience building data platforms for ML/AI teams or integrating with model feature stores

Engagement Model:
- Direct placement with client
- This is a remote role
- Shift timings: 10 AM to 7 PM

How to apply for this opportunity?

Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meet the client for the interview!

About Uplers:

Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
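
As a companion to the open-table-format work described above, here is an illustrative PySpark sketch that appends records to an Apache Iceberg table registered in the AWS Glue catalog. The catalog name, warehouse path, and table name are assumptions for the example; it also presumes the Iceberg Spark runtime and AWS bundle are on the classpath and that the target table already exists.

    # Illustrative only: writing to a Glue-cataloged Iceberg table from PySpark.
    # Catalog/table names and the S3 warehouse path are hypothetical.
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .appName("iceberg-append-sketch")
        # Register an Iceberg catalog named "glue" backed by AWS Glue.
        .config("spark.sql.catalog.glue",
                "org.apache.iceberg.spark.SparkCatalog")
        .config("spark.sql.catalog.glue.catalog-impl",
                "org.apache.iceberg.aws.glue.GlueCatalog")
        .config("spark.sql.catalog.glue.warehouse",
                "s3://example-bucket/warehouse")
        .getOrCreate()
    )

    df = spark.createDataFrame(
        [("txn-1", 125.50), ("txn-2", 80.00)],
        ["txn_id", "amount"],
    )

    # Each append creates a new Iceberg snapshot, which is what enables the
    # time-travel queries mentioned in the role.
    df.writeTo("glue.payments.transactions").append()

Because every append produces a snapshot, a downstream job can read an earlier table state (in recent Spark/Iceberg versions, via a VERSION AS OF clause), which is the time-travel behavior the listing refers to.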

Posted 2 days ago

Apply

Exploring Scalability Jobs in India

The scalability job market in India is growing steadily, with a wide range of opportunities for skilled professionals. Businesses are constantly looking for engineers who can help them scale their operations efficiently. If you are a job seeker interested in scalability roles, India is a great place to start your career.

Top Hiring Locations in India

Here are 5 major cities in India actively hiring for scalability roles:
1. Bangalore
2. Hyderabad
3. Pune
4. Mumbai
5. Delhi

Average Salary Range

The salary range for scalability professionals in India can vary based on experience and location. On average, entry-level professionals can expect to earn between INR 6-10 lakhs per annum, while experienced professionals can earn upwards of INR 15 lakhs per annum.

Career Path

A typical career path in scalability roles may progress through the following stages:
- Junior Developer
- Senior Developer
- Tech Lead
- Architect

Related Skills

In addition to scalability expertise, professionals in this field are often expected to have skills in:
- Cloud computing
- Distributed systems
- Performance optimization
- Automation

Interview Questions

  • What is scalability and why is it important? (basic)
  • Can you explain the difference between vertical and horizontal scaling? (medium)
  • How would you handle a sudden increase in traffic on a website? (medium)
  • What is load balancing and how does it work? (basic)
  • Can you discuss the pros and cons of microservices architecture for scalability? (advanced)
  • How do you determine the scalability requirements for a new application? (medium)
  • What is the CAP theorem and how does it relate to scalability? (advanced)
  • Have you worked with any specific tools or technologies to improve scalability in your previous projects? (medium)
  • Explain the concept of sharding in database scalability. (medium; a short code sketch follows this list)
  • How do you monitor and measure the performance of a scalable system? (medium)
  • What are the common challenges faced when scaling a system horizontally? (advanced)
  • Can you explain the concept of eventual consistency in distributed systems? (medium)
  • How do you ensure data consistency when scaling a distributed system? (medium)
  • Have you implemented any caching strategies to improve scalability in your projects? (medium)
  • What is the difference between scaling up and scaling out? (basic)
  • How do you handle database sharding in a scalable system? (advanced)
  • Can you discuss the role of CDNs in improving scalability? (medium)
  • How do you approach capacity planning for a scalable system? (medium)
  • What are some common bottlenecks that can affect the scalability of a system? (medium)
  • How do you handle fault tolerance in a scalable system? (medium)
  • Can you discuss the impact of latency on scalability? (medium)
  • How do you design APIs for scalability and performance? (medium)
  • What are some best practices for optimizing the performance of a scalable system? (medium)
  • How do you ensure security while scaling a system? (medium)
  • Can you explain the concept of auto-scaling in cloud computing? (medium)
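
Several of these questions, the sharding one in particular, are easier to discuss with a concrete example in hand. Below is a minimal, self-contained Python sketch of consistent hashing, a common sharding technique: keys are mapped onto a hash ring so that adding or removing a shard remaps only a small fraction of keys. The shard names and virtual-node count are arbitrary illustration values.

    # Minimal consistent-hashing sketch for the sharding interview question.
    import bisect
    import hashlib

    class ConsistentHashRing:
        def __init__(self, shards, vnodes=100):
            # Place each shard at many virtual points to even out the load.
            self._ring = sorted(
                (self._hash(f"{s}-{i}"), s)
                for s in shards
                for i in range(vnodes)
            )
            self._keys = [h for h, _ in self._ring]

        @staticmethod
        def _hash(value: str) -> int:
            return int(hashlib.md5(value.encode()).hexdigest(), 16)

        def shard_for(self, key: str) -> str:
            # First ring position at or past the key's hash, wrapping around.
            idx = bisect.bisect(self._keys, self._hash(key)) % len(self._keys)
            return self._ring[idx][1]

    ring = ConsistentHashRing(["db-shard-0", "db-shard-1", "db-shard-2"])
    print(ring.shard_for("user:12345"))  # same key always maps to the same shard

Contrast this with naive modulo sharding (hash(key) % N), where changing N remaps almost every key; that difference is usually what interviewers are probing for with this question.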

Closing Remark

As you prepare for scalability roles in India, make sure to brush up on your technical skills and be ready to showcase your knowledge during interviews. With the right preparation and confidence, you can land a rewarding career in the field of scalability. Good luck!

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
