
24,278 ETL Jobs - Page 31

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

No. of Positions: 100 | Relevant Experience: 4+ years | Experience: 1+/2+/3+ years | Job Location: Hyderabad | Job Type: Full-time | Work Timing: General shift

Company Overview
About Estuate: At Estuate, more than 400 uniquely talented people work together to provide the world with next-generation product engineering and IT enterprise services. We help companies reimagine their business for the digital age. Incorporated in 2005 in Milpitas, CA, we have grown into a global organization with a truly global vision. At Estuate, we bring together talent, experience, and technology to meet our customers' needs. Our 'Extreme Service' culture helps us deliver extraordinary results.

Our key to success: We are an ISO-certified organization present across four distinct global geographies. We cater to industry verticals such as BFSI, Healthcare & Pharma, Retail & E-Commerce, and ISVs/Startups, and have more than 2,000 projects in our portfolio. Our solution-oriented mindset fuels our offerings, including Digital Platform Engineering, Digital Business Apps, and Digital Security & GRC.

Our culture of oneness: At Estuate, we are committed to fostering an inclusive workplace that welcomes people from diverse social circumstances. Our diverse culture shapes our success stories, our values unite us, and our curiosity inspires our creativity. If that sounds like the place you'd like to be, we look forward to hearing from you. We are currently looking for Technology & Business Function specialists to join our team.

Job Description
We are hiring experienced professionals to join global Fortune 500 companies across diverse technology and business domains.

Open Categories & Skill Sets
- Data Engineering & Analytics: Data Engineering, ETL, Python, Google BigQuery (GBQ), Tableau
- Cybersecurity & IT Infrastructure: Cybersecurity, Forensics, IAM, Vulnerability Management
- Product/Project Management: Project Management, Product Ownership, RTE, Incident Handling
- eCommerce Development: eCommerce Platforms and Solutions
- Software Engineering / DevOps / GCP: Cloud (GCP), DevOps, CI/CD Pipelines
- PLM (Product Lifecycle Management): PLM, FlexPLM, ThingWorx, Agile PLM
- Anaplan & Demand Planning: Anaplan, Demand Planning
- Salesforce & CRM: Salesforce
- Supply Chain / Logistics / Operations: Supply Chain, Logistics, WMS, ESP
- HR & People Functions: HR, People & Culture, TA Lead

Our commitment to Equal Opportunity Employment (EOE)
Estuate, Inc. is an equal opportunity employer. Our employment decisions are made on the basis of merit and business needs. We comply with all applicable state and local laws governing non-discrimination in employment at each of our facilities and locations.

Disclaimer
The above job definition is intended to indicate the general nature and level of work performed by employees within this classification. It is not designed to contain, or to be interpreted as, a comprehensive inventory of all duties, responsibilities, and qualifications required of employees assigned to this job. Job duties and responsibilities are subject to change based on changing business needs and conditions.

Posted 6 days ago

Apply

5.0 - 8.0 years

0 Lacs

India

Remote

Mandatory skills: Azure Databricks, Azure Data Factory, PySpark, SQL
Experience: 5 to 8 years
Location: Remote

Key Responsibilities:
- Design and build data pipelines and ETL/ELT workflows using Azure Databricks and Azure Data Factory
- Ingest, clean, transform, and process large datasets from diverse sources (structured and unstructured)
- Implement Delta Lake solutions and optimize Spark jobs for performance and reliability
- Integrate Azure Databricks with other Azure services, including Data Lake Storage, Synapse Analytics, and Event Hubs
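For illustration only (assumed, not part of the posting): a minimal PySpark sketch of the kind of Delta Lake ETL step this role describes, assuming a Databricks runtime where Delta Lake is available; the storage path, column names, and table name are hypothetical placeholders.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Ingest raw JSON events from a hypothetical ADLS container
raw = spark.read.json("abfss://raw@examplestorage.dfs.core.windows.net/events/")

# Clean and transform: drop rows missing key fields, derive a date column
clean = (
    raw.dropna(subset=["event_id", "event_ts"])
       .withColumn("event_date", F.to_date("event_ts"))
)

# Persist as a partitioned Delta table for downstream consumers
(clean.write.format("delta")
      .mode("append")
      .partitionBy("event_date")
      .saveAsTable("bronze.events"))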

Posted 6 days ago

Apply

0.0 - 3.0 years

5 - 7 Lacs

Gurugram, Haryana

On-site

Key Responsibilities
- Manage and maintain Microsoft SQL Server databases (2016 and later) across development, UAT, and production environments.
- Monitor and improve database performance using Query Store, Extended Events, and Dynamic Management Views (DMVs).
- Design and maintain indexes, partitioning strategies, and statistics to ensure optimal performance.
- Develop and maintain T-SQL scripts, views, stored procedures, and triggers.
- Implement robust backup and recovery solutions using native SQL Server tools and third-party backup tools (if applicable).
- Ensure business continuity through high-availability configurations such as Always On Availability Groups, Log Shipping, or Failover Clustering.
- Perform database capacity planning and forecast growth requirements.
- Ensure SQL Server security by managing logins, roles, permissions, and encryption features like TDE.
- Collaborate with application developers on schema design, indexing strategies, and performance optimization.
- Handle deployments, patching, and version upgrades in a controlled and documented manner.
- Maintain clear documentation of database processes, configurations, and security policies.

Required Skills & Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 3-5 years of solid experience with Microsoft SQL Server (2016 or later).
- Strong command of T-SQL, including query optimization, joins, CTEs, window functions, and error handling.
- Proficient in interpreting execution plans, optimizing long-running queries, and using indexing effectively.
- Understanding of SQL Server internals such as page allocation, the buffer pool, and lock escalation.
- Hands-on experience with backup/restore strategies and consistency checks (DBCC CHECKDB).
- Experience with SQL Server Agent jobs, alerts, and automation scripts (PowerShell or T-SQL).
- Ability to configure and manage SQL Server high-availability features.
- Exposure to tools like Redgate SQL Monitor, SolarWinds DPA, or similar is a plus.

Nice to Have
- Exposure to Azure SQL Database or cloud-hosted SQL Server infrastructure.
- Basic understanding of ETL workflows using SSIS.
- Microsoft certification: MCSA / Azure Database Administrator Associate or equivalent.
- Experience with database deployments in CI/CD pipelines.

Job Types: Full-time, Permanent
Pay: ₹500,000.00 - ₹700,000.00 per year
Benefits: Provident Fund
Education: Bachelor's (Required)
Experience: Microsoft SQL Server: 3 years (Required)
Location: Gurugram, Haryana (Required)
Work Location: In person
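For illustration only (assumed, not from the posting): a small Python sketch of the kind of DMV-based performance check the role mentions, using pyodbc; the connection string and thresholds are placeholders for your environment.

import pyodbc

# Hypothetical connection string; adjust driver, server, and auth as needed
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};SERVER=localhost;"
    "DATABASE=master;Trusted_Connection=yes;TrustServerCertificate=yes;"
)

# Top 5 statements by average CPU time, from the plan-cache DMVs
sql = """
SELECT TOP 5
    qs.total_worker_time / qs.execution_count AS avg_cpu_us,
    qs.execution_count,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1, 200) AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY avg_cpu_us DESC;
"""

for row in conn.cursor().execute(sql):
    print(row.avg_cpu_us, row.execution_count, row.query_text)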

Posted 6 days ago

Apply

2.0 - 6.0 years

0 - 0 Lacs

Bangalore, Pune

On-site

Key Responsibilities:
- Design, develop, and maintain SAP BI reports and dashboards
- Work on data modeling, data extraction, and ETL processes using SAP BW
- Collaborate with business users to gather reporting requirements
- Create and manage InfoCubes, DSOs, MultiProviders, and BEx Queries
- Ensure data accuracy and optimize report performance
- Integrate SAP BI with front-end tools like SAP BO, Lumira, or Analytics Cloud
- Support testing, documentation, and end-user training

Skills Required:
- 2-3 years of hands-on experience in SAP BI/BW development and support
- Strong knowledge of SAP BW data modeling, BEx Queries, and ETL
- Experience with data extraction from SAP and non-SAP sources
- Good understanding of BEx Analyzer, BO tools, and data flow architecture
- Familiarity with SAP HANA, S/4HANA, or SAP BW on HANA is an advantage
- Excellent analytical and problem-solving skills
- Strong communication and stakeholder management abilities

To Apply: Walk in or contact us at White Horse Manpower, #12, Office 156, 3rd Floor, Jumma Masjid Golden Complex, Jumma Masjid Road, Bangalore 560051. Contact Number: 9739002621

Posted 6 days ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Candidates should have implemented S/4 data migration projects and possess a good understanding of the data migration life cycle. Expertise in SAP BusinessObjects Data Services, DMC, Syniti, or Winshuttle is mandatory, including sound knowledge of RDBMS and SQL concepts. Knowledge of ETL admin activities, LTMC/LTMON, IDoc, or BAPI would be an added advantage. The ability to participate across mock and test cycles, including test case preparation, defect fixing, client communication, and updates to project plans, is expected. Knowledge of the cutover plan and the activities involved is required.

A day in the life of an Infoscion: As part of the Infosys consulting team, your primary role would be to get to the heart of customer issues, diagnose problem areas, design innovative solutions, and facilitate deployment resulting in client delight. You will develop proposals by owning parts of the proposal document and by giving inputs on solution design based on your areas of expertise. You will plan configuration activities, configure the product as per the design, conduct conference room pilots, and assist in resolving any queries related to requirements and solution design. You will conduct solution/product demonstrations and POC/Proof of Technology workshops, and prepare effort estimates that suit the customer's budgetary requirements and are in line with the organization's financial guidelines. You will actively lead small projects and contribute to unit-level and organizational initiatives, with the objective of providing high-quality, value-adding solutions to customers. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!

- Ability to develop value-creating strategies and models that enable clients to innovate, drive growth, and increase business profitability
- Good knowledge of software configuration management systems
- Awareness of the latest technologies and industry trends
- Logical thinking and problem-solving skills, along with an ability to collaborate
- Understanding of financial processes for various types of projects and the various pricing models available
- Ability to assess current processes, identify improvement areas, and suggest technology solutions
- Knowledge of one or two industry domains
- Client interfacing skills
- Project and team management

Location of posting: Infosys Ltd. is committed to ensuring you have the best experience throughout your journey with us. We currently have open positions in a number of locations across India: Bangalore, Pune, Hyderabad, Chennai, Chandigarh, Trivandrum, Indore, Nagpur, Mangalore, Noida, Bhubaneswar, Kolkata, Coimbatore, Jaipur. While we work in accordance with business requirements, we shall strive to offer you the location of your choice, where possible.

Posted 6 days ago

Apply

2.0 - 5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

At EY, we're all in to shape your future with confidence. We'll help you succeed in a globally connected powerhouse of diverse teams and take your career wherever you want it to go. Join EY and help to build a better working world.

Job Title: Azure Data Engineer
Experience: 2-5 years

About the Company: EY is a leading global professional services firm offering a broad range of services in assurance, tax, transactions, and advisory. We're looking for candidates with a strong understanding of technology and data in the big data engineering space, with proven delivery capability.

Your Key Responsibilities
- Develop and deploy Azure Databricks solutions in a cloud environment using Azure Cloud services
- Design, develop, and deploy ETL processes to a cloud service
- Interact with onshore teams, understand their business goals, and contribute to the delivery of the workstreams
- Design and optimize model code for faster execution

Skills and Attributes for Success
- 3 to 5 years of experience developing data ingestion, data processing, and analytical pipelines for big data, relational databases, NoSQL, and data warehouse solutions
- Extensive hands-on experience implementing data migration and data processing using Azure services: Databricks, ADLS, Azure Data Factory, Azure Functions, Synapse/DW, Azure SQL DB, Azure Data Catalog, Cosmos DB, etc.
- Hands-on experience with Spark
- Hands-on programming experience in Python or Scala
- Well versed in DevOps and CI/CD deployments
- Must have hands-on experience in SQL and procedural SQL languages
- Strong analytical skills and enjoyment of solving complex technical problems

To Qualify for the Role, You Must Have
- Working experience in an Agile-based delivery methodology (preferable)
- A flexible, proactive, self-motivated working style with strong personal ownership of problem resolution
- Strong analytical skills and an enjoyment of solving complex technical problems
- Excellent debugging and optimization skills
- Experience in enterprise-grade solution implementations and in converting business problems/challenges into technical solutions, considering security, performance, scalability, etc.
- Excellent communication skills (written and verbal, formal and informal)
- Participation in all aspects of the solution delivery life cycle, including analysis, design, development, testing, production deployment, and support
- Client management skills

Education: BS/MS degree in Computer Science, Engineering, or a related subject is required.

EY is committed to providing equal opportunities to all candidates. We welcome and encourage applications from candidates with diverse experiences and backgrounds.

EY | Building a better working world
EY is building a better working world by creating new value for clients, people, society and the planet, while building trust in capital markets. Enabled by data, AI and advanced technology, EY teams help clients shape the future with confidence and develop answers for the most pressing issues of today and tomorrow. EY teams work across a full spectrum of services in assurance, consulting, tax, strategy and transactions. Fueled by sector insights, a globally connected, multi-disciplinary network and diverse ecosystem partners, EY teams can provide services in more than 150 countries and territories.
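A hedged sketch (assumed, not EY's code) of triggering an Azure Data Factory pipeline run from Python with the azure-mgmt-datafactory SDK; the subscription, resource group, factory, pipeline, and parameter names are hypothetical placeholders.

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

# Placeholder identifiers; substitute your own
SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"
RESOURCE_GROUP = "rg-data-platform"
FACTORY = "adf-ingestion"
PIPELINE = "pl_daily_load"

client = DataFactoryManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Kick off a pipeline run, passing a runtime parameter
run = client.pipelines.create_run(
    RESOURCE_GROUP, FACTORY, PIPELINE, parameters={"load_date": "2024-01-01"}
)
print("Started ADF run:", run.run_id)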

Posted 6 days ago

Apply

6.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Sr. Software Engineer (SDE-2)

DViO is one of the largest independent, highly awarded, tech-first marketing companies, with a team of 175+ people operating across India, the Middle East, and Southeast Asia. We are looking for a Senior Software Engineer (SDE-2) to join our team. The ideal candidate will have a strong background in software development and experience with both frontend and backend technologies. We are looking for someone who is passionate about solving challenging problems through code and is looking to grow in this field.

Responsibilities
- Lead technical design sessions, establish coding standards, and conduct code reviews.
- Contribute hands-on to feature development, refactoring, and performance improvements.
- Mentor and upskill junior engineers through pair programming, feedback, and structured learning plans.
- Maintain and evolve our deployment pipelines on our cloud stack.
- Oversee ETL workflows and ensure data freshness, integrity, and observability.
- Integrate and optimize AI capabilities within the product.
- Collaborate closely with Product and Design to translate requirements into robust technical solutions.
- Champion best practices in testing, security, and documentation.

Qualifications
- Bachelor's/Master's degree in Computer Science, Engineering, or a related field.
- 4-6 years of professional software development experience, including ≥1 year mentoring or leading engineers.
- Strong computer-science fundamentals in data structures, algorithms, and system design.
- Deep understanding of object-oriented/functional design and relational databases.
- Proficiency in one or more modern web stacks and comfort navigating both backend and frontend codebases.
- Proven ability to balance delivery speed with long-term maintainability; strong written and verbal communication skills.

Must-Have Skills
- Backend: Proficiency in at least one of Python, Node.js, PHP, Go, or Java; experience with an MVC or equivalent framework.
- Frontend: Proficiency in React, Next.js, Vue, or Angular; deep knowledge of HTML5, CSS3, and modern JavaScript/TypeScript.
- APIs & Data: Designing and consuming RESTful APIs; working with relational databases (MySQL, PostgreSQL, etc.).
- Containers & Cloud: Docker-based development and deployment; basic familiarity with AWS, GCP, or Azure services; CI/CD using GitHub Actions, GitLab CI, or similar.
- Quality & DevEx: Unit/integration testing, Git-based workflows, and peer code reviews.

Good-to-Have Skills
- Practical experience integrating LLM APIs (OpenAI, Anthropic) into applications, including prompt design and cost/performance considerations.
- Hands-on experience with data engineering, ETL pipelines, and warehouse querying; comfort debugging data issues.
- UI component libraries (shadcn/ui, Chakra UI, Radix UI) and CSS frameworks (TailwindCSS, Bootstrap).
- Data-visualization libraries (D3.js, Chart.js, Recharts).
- Caching (Redis, Memcached) and search systems (Elasticsearch, Meilisearch, Typesense).
- Data warehouses or lakes (Snowflake, BigQuery, Redshift) and SQL performance tuning.
- Bash scripting and strong Linux system knowledge.
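As an illustration of the REST-plus-caching pattern this posting lists (a sketch under assumed names, not DViO's code): a minimal FastAPI endpoint with a Redis read-through cache; the endpoint, key scheme, and fetch_report stub are hypothetical.

import json

import redis
from fastapi import FastAPI

app = FastAPI()
cache = redis.Redis(host="localhost", port=6379, decode_responses=True)

def fetch_report(campaign_id: str) -> dict:
    # Stand-in for an expensive warehouse/ETL query
    return {"campaign_id": campaign_id, "impressions": 12345}

@app.get("/reports/{campaign_id}")
def get_report(campaign_id: str):
    key = f"report:{campaign_id}"
    cached = cache.get(key)
    if cached is not None:            # cache hit: skip the expensive query
        return json.loads(cached)
    report = fetch_report(campaign_id)
    cache.setex(key, 300, json.dumps(report))  # 5-minute TTL
    return report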

Posted 6 days ago

Apply

6.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Position Title: Software Engineer Consultant / Expert 34326
Location: Chennai (Onsite/Hybrid)
Employment Type: Contract
Budget: Up to ₹24 LPA (starting at ₹21 LPA)
Notice Period: Immediate joiners preferred
Assessment: Full Stack Backend Java (via Hacker platform)

Position Overview
We are seeking a highly experienced Full Stack Java Developer with strong expertise in backend development, cloud technologies, and data solutions. This role involves building and maintaining a global logistics data warehouse on Google Cloud Platform (GCP), supporting key supply chain operations and enhancing visibility from production to final delivery. The ideal candidate will have a minimum of 6+ years of relevant experience and hands-on skills in BigQuery, microservices, and REST APIs, with exposure to tools like Pub/Sub, Kafka, and Terraform.

Key Responsibilities
- Collaborate closely with product managers, architects, and engineers to design and implement technical solutions
- Develop and maintain full-stack applications using Java, Spring Boot, and GCP Cloud Run
- Build and optimize ETL/data pipelines to apply business logic and transformation rules
- Monitor and enhance data warehouse performance on BigQuery
- Support end-to-end testing: unit, functional, integration, and user acceptance
- Conduct peer reviews and code refactoring, and ensure adherence to best coding practices
- Implement infrastructure as code and CI/CD using tools like Terraform

Required Skills
- Java, Spring Boot
- Full stack development (backend-focused)
- Google Cloud Platform (GCP), with a minimum of 1 year hands-on with BigQuery
- Cloud Run, microservices, REST APIs
- Messaging: Pub/Sub, Kafka
- DevOps and infrastructure: Terraform
- Exposure to AI/ML integration is a plus

Experience Requirements
- Minimum 6+ years of experience in Java/Spring Boot development
- Strong hands-on experience with GCP services, particularly BigQuery
- Experience developing enterprise-grade microservices and backend systems
- Familiarity with ETL pipelines, data orchestration, and performance tuning
- Agile team collaboration and modern development practices

Preferred Experience
- Exposure to AI agents or AI-driven application features
- Experience in large-scale logistics or supply chain data systems

Education Requirements
- Bachelor's degree in Computer Science, Information Technology, or a related field (mandatory)

Skills: REST APIs, Terraform, full stack development, data, Google Cloud Platform (GCP), microservices, Kafka, BigQuery, Pub/Sub, Java, Cloud Run, Spring Boot
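For context, a minimal sketch of querying the kind of BigQuery logistics warehouse this role describes, shown in Python for brevity (the posting's stack is Java/Spring Boot); the project, dataset, and table names are hypothetical placeholders.

from google.cloud import bigquery

client = bigquery.Client()  # uses Application Default Credentials

# Hypothetical shipment-events rollup
sql = """
    SELECT origin, COUNT(*) AS shipments
    FROM `example-project.logistics.shipment_events`
    WHERE event_date = CURRENT_DATE()
    GROUP BY origin
    ORDER BY shipments DESC
"""

for row in client.query(sql).result():
    print(row.origin, row.shipments)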

Posted 6 days ago

Apply

4.0 years

15 - 30 Lacs

Gurugram, Haryana, India

Remote

Experience: 4.00+ years
Salary: INR 1,500,000 - 3,000,000 / year (based on experience)
Expected Notice Period: 15 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by NuStudio.AI)
(Note: This is a requirement for one of Uplers' clients, an AI-first, API-powered data platform.)

What do you need for this opportunity?
Must-have skills: GCP (BigQuery, Pub/Sub, Dataflow, Cloud Storage, Cloud Functions), PySpark, Databricks (Delta Lake, Spark, dbt, Unity Catalog), Databricks Workflows, Airflow, ETL/ELT, AI, AWS, Hadoop

An AI-first, API-powered data platform is looking for: We're scaling our platform and seeking Data Engineers who are passionate about building high-performance data pipelines, data products, and analytical pipelines in the cloud to power real-time AI systems.

As a Data Engineer, you'll:
- Build scalable ETL/ELT and streaming data pipelines using GCP (BigQuery, Pub/Sub, PySpark, Dataflow, Cloud Storage, Functions)
- Orchestrate data workflows with Airflow, Cloud Functions, or Databricks Workflows
- Work across batch and real-time architectures that feed LLMs and AI/ML systems
- Own feature engineering pipelines that power production models and intelligent agents
- Collaborate with platform and ML teams to design observability, lineage, and cost-aware, performant solutions
- Bonus: experience with AWS, Databricks, or Hadoop (Delta Lake, Spark, dbt, Unity Catalog), or interest in building on them

Why us?
- Building production-grade data and AI solutions
- Your pipelines directly impact mission-critical and client-facing interactions
- Lean team, no red tape: build, own, ship
- Remote-first with an async culture that respects your time
- Competitive compensation and benefits

Our stack: Python, SQL, GCP/Azure/AWS, Spark, Kafka, Airflow, Databricks, dbt, Kubernetes, LangChain, LLMs

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
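To make the streaming side concrete, here is a hedged sketch (assumed names, not the client's code) of a Pub/Sub subscriber that streams JSON events into BigQuery; the project, subscription, and table IDs are hypothetical placeholders.

import json

from google.cloud import bigquery, pubsub_v1

PROJECT = "example-project"                 # placeholder
SUBSCRIPTION = "events-sub"                 # placeholder
TABLE = "example-project.analytics.events"  # placeholder

bq = bigquery.Client()
subscriber = pubsub_v1.SubscriberClient()
sub_path = subscriber.subscription_path(PROJECT, SUBSCRIPTION)

def handle(message):
    # Each Pub/Sub message carries one JSON event
    row = json.loads(message.data)
    errors = bq.insert_rows_json(TABLE, [row])  # streaming insert
    if errors:
        message.nack()  # redeliver on failure
    else:
        message.ack()

# Blocks and dispatches messages to the callback as they arrive
subscriber.subscribe(sub_path, callback=handle).result()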

Posted 6 days ago

Apply

3.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-Have Skills: Informatica MDM
Good-to-Have Skills: NA
Minimum Experience Required: 3 years
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to effectively migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand their data needs and provide innovative solutions that enhance data accessibility and usability.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge.
- Continuously evaluate and improve data processes to ensure efficiency and effectiveness.

Professional & Technical Skills:
- Must-have skills: Proficiency in Informatica MDM.
- Good-to-have skills: Experience with data warehousing concepts and practices.
- Strong understanding of data modeling techniques.
- Familiarity with SQL and database management systems.
- Experience implementing data governance and data quality frameworks.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Informatica MDM.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.

Posted 6 days ago

Apply

2.0 - 5.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

At EY, we're all in to shape your future with confidence. We'll help you succeed in a globally connected powerhouse of diverse teams and take your career wherever you want it to go. Join EY and help to build a better working world.

Job Title: Azure Data Engineer
Experience: 2-5 years

About the Company: EY is a leading global professional services firm offering a broad range of services in assurance, tax, transactions, and advisory. We're looking for candidates with a strong understanding of technology and data in the big data engineering space, with proven delivery capability.

Your Key Responsibilities
- Develop and deploy Azure Databricks solutions in a cloud environment using Azure Cloud services
- Design, develop, and deploy ETL processes to a cloud service
- Interact with onshore teams, understand their business goals, and contribute to the delivery of the workstreams
- Design and optimize model code for faster execution

Skills and Attributes for Success
- 3 to 5 years of experience developing data ingestion, data processing, and analytical pipelines for big data, relational databases, NoSQL, and data warehouse solutions
- Extensive hands-on experience implementing data migration and data processing using Azure services: Databricks, ADLS, Azure Data Factory, Azure Functions, Synapse/DW, Azure SQL DB, Azure Data Catalog, Cosmos DB, etc.
- Hands-on experience with Spark
- Hands-on programming experience in Python or Scala
- Well versed in DevOps and CI/CD deployments
- Must have hands-on experience in SQL and procedural SQL languages
- Strong analytical skills and enjoyment of solving complex technical problems

To Qualify for the Role, You Must Have
- Working experience in an Agile-based delivery methodology (preferable)
- A flexible, proactive, self-motivated working style with strong personal ownership of problem resolution
- Strong analytical skills and an enjoyment of solving complex technical problems
- Excellent debugging and optimization skills
- Experience in enterprise-grade solution implementations and in converting business problems/challenges into technical solutions, considering security, performance, scalability, etc.
- Excellent communication skills (written and verbal, formal and informal)
- Participation in all aspects of the solution delivery life cycle, including analysis, design, development, testing, production deployment, and support
- Client management skills

Education: BS/MS degree in Computer Science, Engineering, or a related subject is required.

EY is committed to providing equal opportunities to all candidates. We welcome and encourage applications from candidates with diverse experiences and backgrounds.

EY | Building a better working world
EY is building a better working world by creating new value for clients, people, society and the planet, while building trust in capital markets. Enabled by data, AI and advanced technology, EY teams help clients shape the future with confidence and develop answers for the most pressing issues of today and tomorrow. EY teams work across a full spectrum of services in assurance, consulting, tax, strategy and transactions. Fueled by sector insights, a globally connected, multi-disciplinary network and diverse ecosystem partners, EY teams can provide services in more than 150 countries and territories.

Posted 6 days ago

Apply

18.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

About The Company
e.l.f. Beauty, Inc. stands with every eye, lip, face and paw. Our deep commitment to clean, cruelty-free beauty at an incredible value has fueled the success of our flagship brand e.l.f. Cosmetics since 2004 and driven our portfolio expansion. Today, our multi-brand portfolio includes e.l.f. Cosmetics, e.l.f. SKIN, pioneering clean beauty brand Well People, Keys Soulcare, a groundbreaking lifestyle beauty brand created with Alicia Keys, and Naturium, a line of high-performance, biocompatible, clinically effective, and accessible skincare. In our fiscal year 2024, we had net sales of $1 billion, and our business performance has been nothing short of extraordinary, with 24 consecutive quarters of net sales growth. We are the #2 mass cosmetics brand in the US and the fastest-growing mass cosmetics brand among the top 5. Our total compensation philosophy offers every full-time new hire competitive pay and benefits, bonus eligibility (200% of target over the last four fiscal years), equity, and a hybrid work environment (three days in office, two days at home). We believe the combination of our unique culture, total compensation, workplace flexibility, and care for the team is unmatched across not just beauty but any industry. Visit our Career Page to learn more about our team: https://www.elfbeauty.com/work-with-us

Job Summary:
We're looking for a strategic and technically strong Senior Data Architect to join our high-growth digital team. The selected candidate will play a critical role in shaping the company's global data architecture and vision, will lead enterprise-level architecture initiatives, collaborate with engineering and business teams, and guide a growing team of engineers and QA professionals. This role involves deep engagement across domains including Marketing, Product, Finance, and Supply Chain, with a special focus on marketing technology and commercial analytics relevant to the CPG/FMCG industry. The candidate should bring a hands-on mindset, a proven track record in designing scalable data platforms, and the ability to lead through influence. An understanding of industry-standard frameworks (e.g., TOGAF) and tools such as CDPs, MMM platforms, and AI-based insights generation will be a strong plus. Curiosity, communication, and architectural leadership are essential to succeed in this role.

Key Responsibilities
Enterprise Data Strategy: Design, define, and maintain a holistic data strategy and roadmap that aligns with corporate objectives and fuels digital transformation. Ensure data architecture and products align with enterprise standards and best practices.
Data Governance & Quality: Establish scalable governance frameworks to ensure data accuracy, privacy, security, and compliance (e.g., GDPR, CCPA). Oversee quality, security, and compliance initiatives.
Data Architecture & Platforms: Oversee modern data infrastructure (e.g., data lakes, warehouses, streaming) with technologies like Snowflake, Databricks, AWS, and Kafka (see the sketch after this listing).
Marketing Technology Integration: Ensure the data architecture supports marketing technologies and commercial analytics platforms (e.g., CDP, MMM, ProfitSphere) tailored to the CPG/FMCG industry.
Architectural Leadership: Act as a hands-on architect with the ability to lead through influence. Guide design decisions aligned with industry best practices and e.l.f.'s evolving architecture roadmap.
Cross-Functional Collaboration: Partner with Marketing, Supply Chain, Finance, R&D, and IT to embed data-driven practices and deliver business impact. Lead the integration of data from multiple sources into a unified data warehouse.
Cloud Optimization: Optimize data flows and storage for performance and scalability. Lead data migration priorities, manage metadata repositories and data dictionaries, and optimize databases and pipelines for efficiency. Manage and track data quality, cataloging, and observability.
AI/ML Enablement: Drive initiatives to operationalize predictive analytics, personalization, demand forecasting, and more using AI/ML models. Evaluate emerging data technologies and tools to improve the data architecture.
Team Leadership: Lead, mentor, and enable a high-performing team of data engineers, analysts, and partners through influence and thought leadership.
Vendor & Tooling Strategy: Manage relationships with external partners and drive evaluations of data and analytics tools.
Executive Reporting: Provide regular updates and strategic recommendations to executive leadership and key stakeholders.
Data Enablement: Design data models, database structures, and data integration solutions to support large volumes of data.

Qualifications And Requirements
Bachelor's or Master's degree in Computer Science, Information Systems, or a related field
18+ years of experience in Information Technology
8+ years of experience in data architecture, data engineering, or a related field, with a focus on large-scale, distributed systems
Strong understanding of data use cases in the CPG/FMCG sector; experience with tools such as MMM (Marketing Mix Modeling), CDPs, ProfitSphere, or inventory analytics preferred
Awareness of architecture frameworks like TOGAF; certifications are not mandatory, but candidates must demonstrate clear thinking and experience in applying architecture principles
Excellent communication skills and a proven ability to work cross-functionally across global teams; capable of leading with influence, not just execution
Knowledge of data warehousing, ETL/ELT processes, and data modeling
Deep understanding of data modeling principles, including schema design and dimensional data modeling
Strong SQL development experience, including SQL queries and stored procedures
Ability to architect and develop scalable data solutions, staying ahead of industry trends and integrating best practices in data engineering
Familiarity with data security and governance best practices
Experience with cloud computing platforms such as Snowflake, AWS, Azure, or GCP
Excellent problem-solving abilities with a focus on data analysis and interpretation
Strong communication and collaboration skills
Ability to translate complex technical concepts into actionable business strategies
Proficiency in one or more programming languages such as Python, Java, or Scala

This job description is intended to describe the general nature and level of work being performed in this position. It reflects the general details considered necessary to describe the principal functions of the job identified and shall not be considered a detailed description of all the work inherent in the job. It is not an exhaustive list of responsibilities, and it is subject to changes and exceptions at the supervisor's discretion. e.l.f. Beauty respects your privacy. Please see our Job Applicant Privacy Notice (www.elfbeauty.com/us-job-applicant-privacy-notice) for how your personal information is used and shared.
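To make the streaming-platform responsibility above concrete, here is a minimal PySpark Structured Streaming sketch that ingests a Kafka topic into a Delta Lake bronze table, the kind of Databricks/Kafka pipeline this role would oversee. The broker address, topic name, event schema, and storage paths are all hypothetical placeholders, not details from the posting, and the sketch assumes a Delta-enabled Spark runtime:

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, from_json
    from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

    spark = SparkSession.builder.appName("orders-stream").getOrCreate()

    # Hypothetical schema for an incoming order event; real topics and fields will differ.
    order_schema = StructType([
        StructField("order_id", StringType()),
        StructField("sku", StringType()),
        StructField("amount", DoubleType()),
        StructField("event_time", TimestampType()),
    ])

    raw = (spark.readStream
           .format("kafka")
           .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
           .option("subscribe", "orders")                     # hypothetical topic
           .load())

    # Kafka delivers bytes; parse the JSON payload into typed columns.
    orders = (raw.select(from_json(col("value").cast("string"), order_schema).alias("o"))
                 .select("o.*"))

    # Append into a Delta "bronze" table; checkpoint makes the stream restartable.
    (orders.writeStream
           .format("delta")
           .option("checkpointLocation", "/lake/_checkpoints/orders")  # placeholder path
           .outputMode("append")
           .start("/lake/bronze/orders"))                              # placeholder path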

Posted 6 days ago

Apply

4.0 years

15 - 30 Lacs

Jaipur, Rajasthan, India

Remote

Experience: 4.00+ years
Salary: INR 1,500,000 - 3,000,000 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-Time Permanent position (Payroll and Compliance to be managed by: NuStudio.AI)
(*Note: This is a requirement for one of Uplers' clients - an AI-first, API-powered Data Platform)

What do you need for this opportunity?
Must-have skills required: Databricks, dbt, Delta Lake, Spark, Unity Catalog, AI, Airflow, Cloud Functions, Cloud Storage, Databricks Workflows, Dataflow, ETL/ELT, GCP (BigQuery, Pub/Sub, Cloud Functions), PySpark, AWS, Hadoop

The AI-first, API-powered Data Platform is looking for:
We're scaling our platform and seeking Data Engineers who are passionate about building high-performance data pipelines, products, and analytical pipelines in the cloud to power real-time AI systems.

As a Data Engineer, you'll:
Build scalable ETL/ELT and streaming data pipelines using GCP (BigQuery, Pub/Sub, PySpark, Dataflow, Cloud Storage, Cloud Functions)
Orchestrate data workflows with Airflow, Cloud Functions, or Databricks Workflows (see the sketch after this listing)
Work across batch and real-time architectures that feed LLMs and AI/ML systems
Own feature engineering pipelines that power production models and intelligent agents
Collaborate with platform and ML teams to design observability, lineage, and cost-aware, performant solutions
Bonus: Experience with AWS, Databricks, or Hadoop (Delta Lake, Spark, dbt, Unity Catalog), or interest in building on them

Why Us?
Building production-grade data and AI solutions
Your pipelines directly impact mission-critical and client-facing interactions
Lean team, no red tape: build, own, ship
Remote-first with an async culture that respects your time
Competitive comp and benefits

Our Stack: Python, SQL, GCP/Azure/AWS, Spark, Kafka, Airflow, Databricks, dbt, Kubernetes, LangChain, LLMs

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal apart from this one; depending on the assessments you clear, you can apply for them as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
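For a sense of the orchestration work named in this listing, here is a minimal Airflow DAG sketch that chains a Python extract step with a BigQuery load. It assumes Airflow 2.x with the Google Cloud provider package installed; the DAG ID, project, dataset, table names, and query are hypothetical placeholders, not part of the listing:

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator
    from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

    def extract_to_gcs(**context):
        # Placeholder extract step: a real pipeline would pull from an upstream
        # API or a Pub/Sub export and land raw files in Cloud Storage.
        pass

    with DAG(
        dag_id="daily_events_elt",          # hypothetical pipeline name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",                  # Airflow 2.4+; older versions use schedule_interval
        catchup=False,
    ) as dag:
        extract = PythonOperator(task_id="extract_to_gcs", python_callable=extract_to_gcs)

        load = BigQueryInsertJobOperator(
            task_id="load_to_bigquery",
            configuration={
                "query": {
                    # Hypothetical project, dataset, and table names.
                    "query": "SELECT * FROM `raw.events` WHERE event_date = '{{ ds }}'",
                    "destinationTable": {"projectId": "my-project",
                                         "datasetId": "curated",
                                         "tableId": "events"},
                    "writeDisposition": "WRITE_APPEND",
                    "useLegacySql": False,
                }
            },
        )

        extract >> load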

Posted 6 days ago

Apply

2.0 - 5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

At EY, we’re all in to shape your future with confidence. We’ll help you succeed in a globally connected powerhouse of diverse teams and take your career wherever you want it to go. Join EY and help to build a better working world.

Job Title: Azure Data Engineer
Experience: 2-5 Years

About the Company: EY is a leading global professional services firm offering a broad range of services in assurance, tax, transaction, and advisory services. We’re looking for candidates with a strong technology and data understanding in the big data engineering space and proven delivery capability.

Your Key Responsibilities
Develop and deploy Azure Databricks solutions in a cloud environment using Azure Cloud services
Design, develop, and deploy ETL processes to the cloud service (see the sketch after this listing)
Interact with onshore teams, understand their business goals, and contribute to the delivery of the workstreams
Design and optimize model code for faster execution

Skills And Attributes For Success
3 to 5 years of experience in developing data ingestion, data processing, and analytical pipelines for big data, relational databases, NoSQL, and data warehouse solutions
Extensive hands-on experience implementing data migration and data processing using Azure services: Databricks, ADLS, Azure Data Factory, Azure Functions, Synapse/DW, Azure SQL DB, Azure Data Catalog, Cosmos DB, etc.
Hands-on experience with Spark
Hands-on experience in programming languages such as Python or Scala
Well versed in DevOps and CI/CD deployments
Hands-on experience in SQL and procedural SQL languages
Strong analytical skills and enjoyment of solving complex technical problems

To qualify for the role, you must have
Working experience in an Agile-based delivery methodology (preferable)
A flexible and proactive/self-motivated working style with strong personal ownership of problem resolution
Excellent debugging and optimization skills
Experience in enterprise-grade solution implementations and in converting business problems/challenges to technical solutions considering security, performance, scalability, etc.
Excellent communication skills (written and verbal, formal and informal)
Participation in all aspects of the solution delivery life cycle, including analysis, design, development, testing, production deployment, and support
Client management skills

Education: A BS/MS degree in Computer Science, Engineering, or a related subject is required.

EY is committed to providing equal opportunities to all candidates. We welcome and encourage applications from candidates with diverse experiences and backgrounds.

EY | Building a better working world
EY is building a better working world by creating new value for clients, people, society, and the planet, while building trust in capital markets. Enabled by data, AI, and advanced technology, EY teams help clients shape the future with confidence and develop answers for the most pressing issues of today and tomorrow. EY teams work across a full spectrum of services in assurance, consulting, tax, strategy, and transactions. Fueled by sector insights, a globally connected, multi-disciplinary network, and diverse ecosystem partners, EY teams can provide services in more than 150 countries and territories.
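To illustrate the Databricks/ADLS ETL work this listing describes, here is a minimal PySpark batch sketch that reads raw CSVs from Azure Data Lake Storage Gen2, applies basic typing and a quality filter, and writes a partitioned Delta table. The storage account, container, and column names are hypothetical placeholders; a real job would typically receive these as Azure Data Factory pipeline parameters:

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, to_date

    spark = SparkSession.builder.appName("sales-etl").getOrCreate()

    # Placeholder ADLS Gen2 paths; substitute your own storage account and containers.
    source = "abfss://raw@mystorageacct.dfs.core.windows.net/sales/2024/"
    target = "abfss://curated@mystorageacct.dfs.core.windows.net/sales_delta/"

    df = (spark.read.option("header", True).csv(source)
              .withColumn("sale_date", to_date(col("sale_date")))     # normalize types
              .withColumn("amount", col("amount").cast("double"))
              .filter(col("amount") > 0))                             # basic quality gate

    # Partitioning by date keeps downstream incremental reads cheap.
    df.write.format("delta").mode("overwrite").partitionBy("sale_date").save(target)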

Posted 6 days ago

Apply

0 years

0 Lacs

India

On-site

At impress.ai, our mission is to make accurate hiring easier. We combine I/O psychology with AI to create application screening processes that allow each and every candidate to undergo a structured interview. While candidates benefit from the enhanced experience, recruiters benefit from the AI-enabled automation. Launched in 2017, impress.ai is a no-code, self-service platform that is highly focused on simplifying and accelerating various parts of the recruitment workflow. Our co-founders observed problems in hiring processes at several companies before building impress.ai. They noticed challenges in the candidate experience, as well as recruiters having a tough time with large-scale hiring, the variety of roles, and handling various systems. After immense research, they found a solution in the power of AI and intelligent automation.

The Job:
We are looking for a Senior Data Analyst. At impress.ai, you will be responsible for all aspects of data and analytics on the impress.ai platform, ranging from providing analytics support to maintaining the data pipeline, as well as research and development of AI/ML algorithms to be implemented in the platform.

Responsibilities:
Work closely with stakeholders to identify issues related to the business and use data to propose solutions for effective decision-making
Build algorithms and design experiments
Write well-designed, maintainable, and performant code that adheres to impress.ai coding styles, conventions, and standards
Use machine learning and statistical techniques to provide solutions to problems
Develop interactive dashboards and visualizations (Metabase, Looker Studio, Power BI)
Manage ETL pipelines using PySpark, AWS Glue, and Step Functions (see the sketch after this listing)
Process, cleanse, and verify the integrity of data used for analysis
Enhance data collection procedures to include information that is relevant for building analytic systems
Communicate actionable insights using data, often to a non-technical audience
Work in cross-functional teams with product managers, software engineers, designers, QA, and Ops teams to achieve business objectives
Recruit and train a team of junior data analysts

You Bring to the Table:
Proficiency in Python and SQL for data manipulation and analysis
Experience in multi-page dashboard building (Metabase, Looker Studio, or Power BI) and data storytelling
Strength in advanced SQL, cross-dialect querying, stored procedures, and data privacy best practices
Experience with Jupyter notebooks for data exploration and documentation
Experience with NLP tasks such as text and sentiment analysis
Strong understanding of statistical techniques (e.g., regression, distributions, statistical tests) and their application
Knowledge of PySpark, Pandas, and AWS services like Glue, Athena, S3, Step Functions, and DMS for large-scale ETL workflows
Knowledge of machine learning and deep learning techniques and their practical trade-offs
Skill in prompt engineering for LLMs (e.g., ChatGPT, Claude), with experience in RAG, agentic AI, fine-tuning, and building scalable, secure GenAI applications
Excellent problem-solving and analytical skills
Effectiveness in communicating your data as a story and the ability to influence stakeholders
Effective written and verbal communication; experience in cross-functional collaboration
Ability to document and communicate technical requirements clearly
Familiarity with Agile methodology, Jira, Git, and version control systems
Curiosity and self-drive, with a passion for exploring new algorithms and tools
Proficiency in using software engineering tools for scalable and maintainable development

Our Benefits:
Work with cutting-edge technologies like machine learning, AI, and NLP, and learn from experts in their fields in a fast-growing international SaaS startup.
As a young business, we have a strong culture of learning and development. Join our discussions, brown bag sessions, and research-oriented sessions.
A work environment where you are given the freedom to develop to your full potential and become a trusted member of the team.
Opportunity to contribute to the success of a fast-growing, market-leading product.
Work is important, and so is your personal well-being. The work culture at impress.ai is designed to ensure a healthy balance between the two.

Diversity and inclusion are more than just words for us. We are committed to providing a respectful, safe, and inclusive workplace. Diversity at impress.ai means fostering a workplace in which individual differences are recognized, appreciated, and respected in ways that fully develop and utilize each person's talents and strengths. We pride ourselves on working with the best, and we know our company runs on the hard work and dedication of our talented team members. Besides having employee-friendly policies and benefit schemes, impress.ai assures unbiased pay purely based on performance.
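As a concrete illustration of the Glue-based ETL responsibility in this listing, here is a minimal AWS Glue PySpark job sketch that reads a cataloged table, remaps and retypes columns, and writes Parquet to S3. The database, table, bucket, and column names are hypothetical placeholders, and the script assumes it runs inside the Glue job runtime (where the awsglue libraries are available):

    import sys

    from awsglue.context import GlueContext
    from awsglue.job import Job
    from awsglue.transforms import ApplyMapping
    from awsglue.utils import getResolvedOptions
    from pyspark.context import SparkContext

    # Standard Glue job setup; JOB_NAME is passed in by the Glue console or Step Functions.
    args = getResolvedOptions(sys.argv, ["JOB_NAME"])
    glueContext = GlueContext(SparkContext())
    job = Job(glueContext)
    job.init(args["JOB_NAME"], args)

    # Hypothetical catalog database/table; real names depend on the crawler setup.
    responses = glueContext.create_dynamic_frame.from_catalog(
        database="recruiting", table_name="interview_responses")

    # Rename and retype columns as they move from raw to curated.
    cleaned = ApplyMapping.apply(
        frame=responses,
        mappings=[("candidate_id", "string", "candidate_id", "string"),
                  ("score", "string", "score", "double")])

    glueContext.write_dynamic_frame.from_options(
        frame=cleaned,
        connection_type="s3",
        connection_options={"path": "s3://example-bucket/curated/responses/"},  # placeholder
        format="parquet")

    job.commit()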

Posted 6 days ago

Apply