
745 Amazon Redshift Jobs - Page 11

Set up a Job Alert
JobPe aggregates results for easy access, but you apply directly on the original job portal.

5.0 - 10.0 years

4 - 8 Lacs

Gurugram

Work from Office

Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must have skills: Python (Programming Language)
Good to have skills: MySQL, Cloud Data Migration
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full-time education

Summary: As a Data Engineer, you will be responsible for designing, developing, and maintaining data solutions for data generation, collection, and processing. Your role will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across systems.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Develop and maintain data pipelines.
- Ensure data quality throughout the data lifecycle.
- Implement ETL processes for data migration and deployment.

Professional & Technical Skills:
- Must Have Skills: Proficiency in Python (Programming Language).
- Good To Have Skills: Experience with MySQL, Cloud Data Migration.
- Strong understanding of data engineering principles.
- Experience in designing and implementing data solutions.
- Knowledge of ETL processes and tools.
- Familiarity with data quality management.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Python (Programming Language).
- This position is based at our Gurugram office.
- A 15 years full-time education is required.

Posted 3 weeks ago

Apply

3.0 - 8.0 years

9 - 13 Lacs

Ahmedabad

Work from Office

Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must have skills: Databricks Unified Data Analytics Platform
Good to have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full-time education

Summary: As a Data Platform Engineer, you will assist with the data platform blueprint and design, collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models. You will play a crucial role in shaping the data platform components.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation and contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Collaborate with cross-functional teams to design and implement data platform solutions.
- Develop and maintain data pipelines for efficient data processing.
- Implement data security and privacy measures to protect sensitive information.
- Optimize data storage and retrieval processes for improved performance.
- Conduct regular data platform performance monitoring and troubleshooting.

Professional & Technical Skills:
- Must Have Skills: Proficiency in Databricks Unified Data Analytics Platform.
- Strong understanding of cloud-based data platforms.
- Experience with data modeling and database design.
- Hands-on experience with ETL processes and tools.
- Knowledge of data governance and compliance standards.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Ahmedabad office.
- A 15 years full-time education is required.

Posted 3 weeks ago

Apply

15.0 - 20.0 years

4 - 8 Lacs

Bengaluru

Work from Office

Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must have skills: Snowflake Data Warehouse
Good to have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand their data needs and provide effective solutions, ensuring that the data infrastructure is robust and scalable to meet the demands of the organization.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge in data engineering.
- Continuously evaluate and improve data processes to enhance efficiency and effectiveness.

Professional & Technical Skills:
- Must Have Skills: Proficiency in Snowflake Data Warehouse.
- Good To Have Skills: Experience with data modeling and database design.
- Strong understanding of ETL processes and data integration techniques.
- Familiarity with cloud platforms such as AWS or Azure.
- Experience in performance tuning and optimization of data queries.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Snowflake Data Warehouse.
- This position is based at our Bengaluru office.
- A 15 years full-time education is required.

Posted 3 weeks ago

Apply

7.0 - 12.0 years

10 - 14 Lacs

Gurugram

Work from Office

Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills: AWS Glue
Good to have skills: NA
Minimum 7.5 year(s) of experience is required
Educational Qualification: 15 years full-time education

Summary: As part of a Data Transformation programme, you will join the Data Marketplace team. In this team you will be responsible for the design and implementation of dashboards for assessing compliance with controls and policies at various stages of the data product lifecycle, with centralised compliance scoring. Experience with the data product lifecycle is preferable. Example skills: Data Visualisation, Amazon QuickSight, Tableau, Power BI, Qlik, Data Analysis & Interpretation. As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. You will oversee the development process and ensure successful project delivery.

Roles & Responsibilities:
- Expected to be an SME
- Collaborate with and manage the team to perform
- Responsible for team decisions
- Engage with multiple teams and contribute to key decisions
- Provide solutions to problems for their immediate team and across multiple teams
- Lead the application development process
- Ensure timely project delivery
- Provide technical guidance and support to the team

Professional & Technical Skills:
- Must Have Skills: Proficiency in AWS Glue
- Strong understanding of cloud computing principles
- Experience with data integration and ETL processes
- Hands-on experience in designing and implementing scalable applications
- Knowledge of data warehousing concepts

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in AWS Glue
- This position is based at our Gurugram office
- A 15 years full-time education is required

Posted 3 weeks ago

Apply

4.0 - 9.0 years

7 - 11 Lacs

Pune

Work from Office

What You'll Do
The Global Analytics & Insights (GAI) team is looking for a Data Engineer to help build the data infrastructure for Avalara's core data assets, empowering the organization with accurate, timely data to drive data-backed decisions. As a Data Engineer, you will help implement and maintain our data infrastructure using Snowflake, dbt (Data Build Tool), Python, Terraform, and Airflow. You will learn the ins and outs of Avalara's financial, sales, and marketing data to become a go-to resource of Avalara knowledge. You will have foundational SQL experience, an understanding of modern data stacks and technology, a desire to build things the right way using modern software principles, and experience with data and all things data-related.

What Your Responsibilities Will Be
- Design functional data models by demonstrating understanding of business use cases and different data sources
- Develop scalable, reliable, and efficient data pipelines using dbt, Python, or other ELT tools
- Build scalable, complex dbt models to support a variety of data products
- Implement and maintain scalable data orchestration and transformation, ensuring data accuracy, consistency, and timeliness
- Collaborate with cross-functional teams to understand complex requirements and translate them into technical solutions
You will report to the Senior Manager, Data & Analytics Engineering.

What You'll Need to be Successful
- Bachelor's degree in Computer Science or Engineering, or a related field
- 4+ years of experience in the data engineering field, with deep SQL knowledge
- 3+ years of working with Git, and demonstrated experience collaborating with other engineers across repositories
- 2+ years of working with Snowflake
- 2+ years working with dbt (dbt Core preferred)
- Experience working with complex Salesforce data
- Functional experience with AWS
- Functional experience with Infrastructure as Code, preferably Terraform
- Functional experience with CI/CD and DevOps concepts

Posted 3 weeks ago

Apply

5.0 - 7.0 years

12 - 16 Lacs

Bengaluru

Work from Office

- Proven experience in business and data analytics
- Mentor team-mates on various business-critical projects
- Solid experience in data analysis and reporting; exposure to BFSI customer and business data is a plus
- Able to communicate with the various stakeholders, manage tasks and issues, and monitor progress to ensure the project is on track
- Proficient in SQL (data prep, procedures, etc.) and advanced Excel (pivots, data models, advanced formulas, etc.)
- Experience working on MS SQL, Redshift, Databricks and business intelligence tools (e.g. Tableau)
- Problem-solving skills; methodical and logical approach
- Willingness to learn and adapt to new technologies
- Excellent written and verbal communication skills

Roles and Responsibilities
- Effective data crunching and data analysis. Analyse all complex data, business logic and processes, and help the business take data-driven decisions.
- Act as a liaison between various teams and stakeholders, ensuring the project runs smoothly and is completed and delivered within the stipulated time.
- Work alongside teams to establish business needs.
- Provide recommendations to optimize current systems and processes.

Posted 3 weeks ago

Apply

12.0 - 17.0 years

30 - 45 Lacs

Bengaluru

Work from Office

Work Location: Bangalore
Experience: 10+ years

Required Skills:
- Experience with AWS cloud and AWS services such as S3 buckets, Lambda, API Gateway, and SQS queues
- Experience with batch job scheduling and identifying data/job dependencies
- Experience with data engineering using the AWS platform and Python
- Familiar with AWS services like EC2, S3, Redshift/Spectrum, Glue, Athena, RDS, Lambda, and API Gateway
- Familiar with software DevOps CI/CD tools, such as Git, Jenkins, Linux, and shell scripting

Thanks & Regards
Suganya R
suganya@spstaffing.in

Posted 4 weeks ago

Apply

4.0 - 9.0 years

6 - 11 Lacs

Kochi

Work from Office

As a Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform.

Responsibilities:
- Experienced in building data pipelines to ingest, process, and transform data from files, streams and databases
- Process data with Spark, Python, PySpark, Scala, and Hive, HBase or other NoSQL databases on Cloud Data Platforms (AWS) or HDFS
- Experienced in developing efficient software code for multiple use cases leveraging the Spark framework, using Python or Scala and Big Data technologies for various use cases built on the platform
- Experience in developing streaming pipelines
- Experience working with Hadoop / AWS ecosystem components to implement scalable solutions to meet ever-increasing data volumes, using big data and cloud technologies such as Apache Spark and Kafka

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Minimum 4+ years of experience in Big Data technologies with extensive data engineering experience in Spark with Python or Scala
- Minimum 3 years of experience on Cloud Data Platforms on AWS
- Experience in AWS EMR / AWS Glue / Databricks, AWS Redshift, DynamoDB
- Good to excellent SQL skills
- Exposure to streaming solutions and message brokers like Kafka

Preferred technical and professional experience:
- Certification in AWS and Databricks, or Cloudera Spark Certified Developer

Posted 4 weeks ago

Apply

5.0 - 10.0 years

20 - 27 Lacs

Pune

Hybrid

Job Description
Job Duties and Responsibilities:
We are looking for a self-starter to join our Data Engineering team. You will work in a fast-paced environment where you will get an opportunity to build and contribute to the full lifecycle development and maintenance of the data engineering platform. With the Data Engineering team you will get an opportunity to:
- Design and implement data engineering solutions that are scalable, reliable and secure in the cloud environment
- Understand and translate business needs into data engineering solutions
- Build large-scale data pipelines that can handle big data sets using distributed data processing techniques, supporting the efforts of the data science and data application teams
- Partner with cross-functional stakeholders including product managers, architects, data quality engineers, and application and quantitative science end users to deliver engineering solutions
- Contribute to defining data governance across the data platform

Basic Requirements:
- A minimum of a BS degree in computer science, software engineering, or a related scientific discipline is desired
- 3+ years of work experience in building scalable and robust data engineering solutions
- Strong understanding of object-oriented programming and proficiency in Python (TDD) and PySpark to build scalable algorithms
- 3+ years of experience in distributed computing and big data processing using the Apache Spark framework, including Spark optimization techniques
- 2+ years of experience with Databricks, Delta tables, Unity Catalog, Delta Sharing, Delta Live Tables (DLT) and incremental data processing
- Experience with Delta Lake and Unity Catalog
- Advanced SQL coding and query optimization experience, including the ability to write analytical and nested queries
- 3+ years of experience in building scalable ETL/ELT data pipelines on Databricks and AWS (EMR)
- 2+ years of experience orchestrating data pipelines using Apache Airflow / MWAA
- Understanding and experience of AWS services including ADX, EC2, and S3
- 3+ years of experience with data modeling techniques for structured and unstructured datasets
- Experience with relational/columnar databases (Redshift, RDS) and interactive querying services (Athena / Redshift Spectrum)
- Passion for healthcare and improving patient outcomes
- Analytical thinking with strong problem-solving skills
- Stay on top of emerging technologies and possess a willingness to learn

Posted 4 weeks ago

Apply

5.0 - 7.0 years

6 - 10 Lacs

Bengaluru

Work from Office

Educational Qualification: Bachelor of Engineering, BTech, BCom, BSc, MTech, MSc
Service Line: Cloud & Infrastructure Services

Responsibilities
A day in the life of an Infoscion: As part of the Infosys delivery team, your primary role would be to interface with the client for quality assurance, issue resolution and ensuring high customer satisfaction. You will understand requirements, create and review designs, validate the architecture and ensure high levels of service offerings to clients in the technology domain. You will participate in project estimation, provide inputs for solution delivery, conduct technical risk planning, and perform code reviews and unit test plan reviews. You will lead and guide your teams towards developing optimized, high-quality code deliverables, continual knowledge management and adherence to the organizational guidelines and processes. You would be a key contributor to building efficient programs and systems. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!

Technical and Professional Requirements:
- Technologies: AWS Redshift DBA or AWS Aurora
- Middleware Admin: WebLogic

Preferred Skills:
- Technology - Cloud Platform - AWS App Development - Amazon Redshift
- Technology - Cloud Platform - AWS Core Services - Amazon Aurora
- Technology - Middleware Administration - WebLogic Application Server Admin - WebLogic

Posted 1 month ago

Apply

5.0 - 10.0 years

10 - 18 Lacs

Bengaluru, Mumbai (All Areas)

Hybrid

About the Role:
We are seeking a passionate and experienced Subject Matter Expert and Trainer to deliver our comprehensive Data Engineering with AWS program. This role combines deep technical expertise with the ability to coach, mentor, and empower learners to build strong capabilities in data engineering, cloud services, and modern analytics tools. If you have a strong background in data engineering and love to teach, this is your opportunity to create impact by shaping the next generation of cloud data professionals.

Key Responsibilities:
Deliver end-to-end training on the Data Engineering with AWS curriculum, including:
- Oracle SQL and ANSI SQL
- Data Warehousing Concepts, ETL & ELT
- Data Modeling and Data Vault
- Python programming for data engineering
- AWS Fundamentals (EC2, S3, Glue, Redshift, Athena, Kinesis, etc.)
- Apache Spark and Databricks
- Data Ingestion, Processing, and Migration Utilities
- Real-time Analytics and Compute Services (Airflow, Step Functions)
Facilitate engaging sessions, virtual and in-person, and adapt instructional methods to suit diverse learning styles. Guide learners through hands-on labs, coding exercises, and real-world projects. Assess learner progress through evaluations, assignments, and practical assessments. Provide mentorship, resolve doubts, and inspire confidence in learners. Collaborate with the program management team to continuously improve course delivery and learner experience. Maintain up-to-date knowledge of AWS and data engineering best practices.

Ideal Candidate Profile:
- Experience: Minimum 5-8 years in Data Engineering, Big Data, or Cloud Data Solutions. Prior experience delivering technical training or conducting workshops is strongly preferred.
- Technical Expertise: Proficiency in SQL, Python, and Spark. Hands-on experience with AWS services: Glue, Redshift, Athena, S3, EC2, Kinesis, and related tools. Familiarity with Databricks, Airflow, Step Functions, and modern data pipelines.
- Certifications: AWS certifications (e.g., AWS Certified Data Analytics - Specialty) are a plus.
- Soft Skills: Excellent communication, facilitation, and interpersonal skills. Ability to break down complex concepts into simple, relatable examples. Strong commitment to learner success and outcomes.

Email your application to: careers@edubridgeindia.in.

Posted 1 month ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Hyderabad

Work from Office

About the Role:
We are seeking a skilled and detail-oriented Data Engineer with deep expertise in PostgreSQL and SQL to design, maintain, and optimize our database systems. As a key member of our data infrastructure team, you will work closely with developers, DevOps, and analysts to ensure the data integrity, performance, and scalability of our applications.

Key Responsibilities:
- Design, implement, and maintain PostgreSQL database systems for high availability and performance.
- Write efficient, well-documented SQL queries, stored procedures, and database functions.
- Analyze and optimize slow-performing queries and database structures.
- Collaborate with software engineers to support schema design, indexing, and query optimization.
- Perform database migrations, backup strategies, and disaster recovery planning.
- Ensure data security and compliance with internal and regulatory standards.
- Monitor database performance and proactively address bottlenecks and anomalies.
- Automate routine database tasks using scripts and monitoring tools.
- Contribute to data modeling and architecture discussions for new and existing systems.
- Support ETL pipelines and data integration processes as needed.

Required Qualifications:
- Bachelor's degree in Computer Science, Information Systems, or a related field.
- 5 years of professional experience in a database engineering role.
- Proven expertise with PostgreSQL (version 12+ preferred).
- Strong SQL skills with the ability to write complex queries and optimize them.
- Experience with performance tuning, indexing, query plans, and execution analysis.
- Familiarity with database design best practices and normalization techniques.
- Solid understanding of ACID principles and transaction management.

Preferred Qualifications:
- Experience with cloud platforms (e.g., AWS RDS, GCP Cloud SQL, or Azure PostgreSQL).
- Familiarity with other database technologies (e.g., MySQL, NoSQL, MongoDB, Redis).
- Knowledge of scripting languages (e.g., Python, Bash) for automation.
- Experience with monitoring tools (e.g., pgBadger, pg_stat_statements, Prometheus/Grafana).
- Understanding of CI/CD processes and infrastructure as code (e.g., Terraform).
- Exposure to data warehousing or analytics platforms (e.g., Redshift, BigQuery).

Posted 1 month ago

Apply

7.0 - 12.0 years

9 - 14 Lacs

Kolkata, Mumbai, New Delhi

Work from Office

Job_Description":" This is a remote position. Overview : 7-12 years of experience using AWS Data Landscape and data Ingestion pipeline. Able to understand and explain the data ingestion from different sources like file, database, applications etc. Build and enhance the Python, PySpark Based Framework for ingestion.Data engineering experience in AWS Data Services like Glue, EMR, Airflow, CloudWatch, Lambda, Step functions, Event triggers. Able to work as a senior engineer with sole interaction point with different business functional teams. Requirements 7\u201312 years of experience in ETL and Data Engineering roles. AWS Glue, PySpark, and Amazon Redshift. Strong command of SQL and procedural programming in cloud or enterprise databases. Deep understanding of data warehousing concepts and data modeling. Proven ability to deliver efficient, well-documented, and scalable data pipelines on AWS. Familiarity with Airflow, AWS Lambda, and other orchestration tools is a plus. AWS Certification (e.g., AWS Data Analytics Specialty) is an advantage. Benefits At Exavalu, we are committed to building a diverse and inclusive workforce. We welcome applications for employment from all qualified candidates, regardless of race, color, gender, national or ethnic origin, age, disability, religion, sexual orientation, gender identity or any other status protected by applicable law. We nurture a culture that embraces all individuals and promotes diverse perspectives, where you can make an impact and grow your career. Exavalu also promotes flexibility depending on the needs of employees, customers and the business. It might be part-time work, working outside normal 9-5 business hours or working remotely. We also have a welcome back program to help people get back to the mainstream after a long break due to health or family reasons. ","Job_Type":"Full time","

Posted 1 month ago

Apply

10.0 - 15.0 years

30 - 35 Lacs

Hyderabad

Work from Office

- Define, design, and build an optimal data pipeline architecture to collect data from a variety of sources, and cleanse and organize data in SQL & NoSQL destinations (ELT & ETL processes).
- Define and build business use case-specific data models that can be consumed by Data Scientists and Data Analysts to conduct discovery and drive business insights and patterns.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS big data technologies.
- Build and deploy analytical models and tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
- Work with stakeholders including the Executive, Product, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs.
- Define, design, and build executive dashboards and report catalogs to serve decision-making and insight generation needs.
- Provide inputs to help keep data separated and secure across data centers, on-prem and in private and public cloud environments.
- Create data tools for analytics and data science team members that assist them in building and optimizing our product into an innovative industry leader.
- Work with data and analytics experts to strive for greater functionality in our data systems.
- Implement scheduled data load processes and maintain and manage the data pipelines.
- Troubleshoot, investigate, and fix failed data pipelines and prepare RCA.

Experience with a mix of the following Data Engineering technologies:
- Python, Spark, Snowflake, Databricks, Hadoop (CDH), Hive, Sqoop, Oozie
- SQL: Postgres, MySQL, MS SQL Server
- Azure: ADF, Synapse Analytics, SQL Server, ADLS Gen2
- AWS: Redshift, EMR cluster, S3

Experience with a mix of the following Data Analytics and Visualization toolsets:
- SQL, Power BI, Tableau, Looker, Python, R
- Python libraries: Pandas, Scikit-learn, Seaborn, Matplotlib, TF, Stat-Models, PySpark, Spark-SQL
- R, SAS, Julia, SPSS
- Azure: Synapse Analytics, Azure ML Studio, Azure AutoML

Posted 1 month ago

Apply

6.0 - 9.0 years

9 - 14 Lacs

Bengaluru

Work from Office

Your Role
- Knowledge in cloud computing using AWS services like Glue, Lambda, Athena, Step Functions, S3, etc.
- Knowledge of a programming language: Python or Scala.
- Knowledge of Spark/PySpark (Core and Streaming), with hands-on experience building streaming transformations.
- Knowledge of building real-time or batch ingestion and transformation pipelines.
- Works in the area of Software Engineering, which encompasses the development, maintenance and optimization of software solutions/applications.
1. Applies scientific methods to analyse and solve software engineering problems.
2. He/she is responsible for the development and application of software engineering practice and knowledge, in research, design, development and maintenance.
3. His/her work requires the exercise of original thought and judgement and the ability to supervise the technical and administrative work of other software engineers.
4. The software engineer builds skills and expertise of his/her software engineering discipline to reach standard software engineer skills expectations for the applicable role, as defined in Professional Communities.
5. The software engineer collaborates and acts as a team player with other software engineers and stakeholders.

Your Profile
- Working experience and strong knowledge in Databricks is a plus.
- Analyze existing queries for performance improvements.
- Develop procedures and scripts for data migration.
- Provide timely scheduled management reporting.
- Investigate exceptions regarding asset movements.

What you will love about working at Capgemini
We're committed to ensuring that people of all backgrounds feel encouraged and have a sense of belonging at Capgemini. You are valued for who you are, and you can bring your original self to work. Every Monday, kick off the week with a musical performance by our in-house band, The Rubber Band. Also get to participate in internal sports events, yoga challenges, or marathons. Capgemini serves clients across industries, so you may get to work on varied data engineering projects involving real-time data pipelines, big data processing, and analytics. You'll work extensively with AWS services like S3, Redshift, Glue, Lambda, and more.

Posted 1 month ago

Apply

3.0 - 6.0 years

6 - 10 Lacs

Bengaluru

Work from Office

Capgemini Invent
Capgemini Invent is the digital innovation, consulting and transformation brand of the Capgemini Group, a global business line that combines market-leading expertise in strategy, technology, data science and creative design to help CxOs envision and build what's next for their businesses.

Your Role
- Should have developed or worked on at least one Gen AI project.
- Has data pipeline implementation experience with any of these cloud providers: AWS, Azure, GCP.
- Experience with cloud storage, cloud databases, cloud data warehousing and data lake solutions like Snowflake, BigQuery, AWS Redshift, ADLS, S3.
- Has good knowledge of cloud compute services and load balancing.
- Has good knowledge of cloud identity management, authentication and authorization.
- Proficiency in using cloud utility functions such as AWS Lambda, AWS Step Functions, Cloud Run, Cloud Functions, Azure Functions.
- Experience in using cloud data integration services for structured, semi-structured and unstructured data, such as Azure Databricks, Azure Data Factory, Azure Synapse Analytics, AWS Glue, AWS EMR, Dataflow, Dataproc.

Your Profile
- Good knowledge of infra capacity sizing and costing of cloud services to drive optimized solution architecture, leading to optimal infra investment vs performance and scaling.
- Able to contribute to making architectural choices using various cloud services and solution methodologies.
- Expertise in programming using Python.
- Very good knowledge of cloud DevOps practices such as infrastructure as code, CI/CD components, and automated deployments on cloud.
- Must understand networking, security, design principles and best practices in cloud.

What you will love about working here
We recognize the significance of flexible work arrangements to provide support. Be it remote work or flexible work hours, you will get an environment to maintain a healthy work-life balance. At the heart of our mission is your career growth. Our array of career growth programs and diverse professions are crafted to support you in exploring a world of opportunities. Equip yourself with valuable certifications in the latest technologies such as Generative AI.

About Capgemini
Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, cloud and data, combined with its deep industry expertise and partner ecosystem. The Group reported 2023 global revenues of 22.5 billion.

Posted 1 month ago

Apply

1.0 - 2.0 years

3 - 6 Lacs

Hyderabad

Work from Office

- Define, design, and build an optimal data pipeline architecture to collect data from a variety of sources, and cleanse and organize data in SQL & NoSQL destinations (ELT & ETL processes).
- Define and build business use case-specific data models that can be consumed by Data Scientists and Data Analysts to conduct discovery and drive business insights and patterns.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS big data technologies.
- Build and deploy analytical models and tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
- Work with stakeholders including the Executive, Product, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs.
- Define, design, and build executive dashboards and report catalogs to serve decision-making and insight generation needs.
- Provide inputs to help keep data separated and secure across data centers, on-prem and in private and public cloud environments.
- Create data tools for analytics and data science team members that assist them in building and optimizing our product into an innovative industry leader.
- Work with data and analytics experts to strive for greater functionality in our data systems.
- Implement scheduled data load processes and maintain and manage the data pipelines.
- Troubleshoot, investigate, and fix failed data pipelines and prepare RCA.

Experience with a mix of the following Data Engineering technologies:
- Python, Spark, Snowflake, Databricks, Hadoop (CDH), Hive, Sqoop, Oozie
- SQL: Postgres, MySQL, MS SQL Server
- Azure: ADF, Synapse Analytics, SQL Server, ADLS Gen2
- AWS: Redshift, EMR cluster, S3

Experience with a mix of the following Data Analytics and Visualization toolsets:
- SQL, Power BI, Tableau, Looker, Python, R
- Python libraries: Pandas, Scikit-learn, Seaborn, Matplotlib, TF, Stat-Models, PySpark, Spark-SQL
- R, SAS, Julia, SPSS
- Azure: Synapse Analytics, Azure ML Studio, Azure AutoML

Posted 1 month ago

Apply

15.0 - 20.0 years

10 - 14 Lacs

Noida

Work from Office

Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills: AWS BigData
Good to have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full-time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure project milestones are met, facilitating discussions to address challenges, and guiding your team in implementing effective solutions. You will also engage in strategic planning sessions to align project goals with organizational objectives, ensuring that all stakeholders are informed and involved in the development process. Your role will be pivotal in driving innovation and efficiency within the application development lifecycle, fostering a collaborative environment that encourages creativity and problem-solving.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Monitor project progress and implement necessary adjustments to meet deadlines.

Professional & Technical Skills:
- Must Have Skills: Proficiency in AWS BigData.
- Strong understanding of data processing frameworks such as Apache Hadoop and Apache Spark.
- Experience with cloud services and architecture, particularly in AWS environments.
- Familiarity with data warehousing solutions and ETL processes.
- Ability to design and implement scalable data pipelines.

Additional Information:
- The candidate should have a minimum of 5 years of experience in AWS BigData.
- This position is based at our Noida office.
- A 15 years full-time education is required.

Posted 1 month ago

Apply

5.0 - 10.0 years

5 - 9 Lacs

Pune

Work from Office

Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: Google BigQuery
Good to have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years or more of full-time education

Summary: As an Application Developer, you will be responsible for designing, building, and configuring applications to meet business process and application requirements using Google BigQuery. Your typical day will involve collaborating with cross-functional teams, analyzing business requirements, and developing scalable solutions to meet the needs of our clients.

Roles & Responsibilities:
- Design, build, and configure applications to meet business process and application requirements using Google BigQuery.
- Collaborate with cross-functional teams to analyze business requirements and develop scalable solutions to meet the needs of our clients.
- Develop and maintain technical documentation, including design documents, test plans, and user manuals.
- Ensure the quality of deliverables by conducting thorough testing and debugging of applications.

Professional & Technical Skills:
- Must Have Skills: Proficiency in Google BigQuery.
- Good To Have Skills: Experience with other cloud-based data warehousing solutions such as Amazon Redshift or Snowflake.
- Strong understanding of SQL and database design principles.
- Experience with ETL tools and processes.
- Experience with programming languages such as Python or Java.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Google BigQuery.
- The ideal candidate will possess a strong educational background in computer science or a related field, along with a proven track record of delivering impactful data-driven solutions.
- This position is based at our Bengaluru office.

Posted 1 month ago

Apply

7.0 - 8.0 years

9 - 10 Lacs

Mumbai, New Delhi, Bengaluru

Work from Office

Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)

What do you need for this opportunity?
Must-have skills: Gen AI, AWS data stack, Kinesis, open table formats, PySpark, stream processing, Kafka, MySQL, Python

MatchMove is looking for a Technical Lead - Data Platform. You will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight.

You will contribute to:
- Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services.
- Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark.
- Structuring and evolving data into open table formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services.
- Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases.
- Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment.
- Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM).
- Using Generative AI tools to enhance developer productivity, including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights.
- Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines.

Responsibilities:
- Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR.
- Manage ingestion from transactional sources using AWS DMS, with a focus on schema drift handling and low-latency replication.
- Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3 and cataloged with Glue and Lake Formation.
- Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g. fraud engines) and human-readable interfaces (e.g. dashboards).
- Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations.
- Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership.
- Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform.

Requirements
- At least 7 years of experience in data engineering.
- Deep hands-on experience with the AWS data stack: Glue (Jobs & Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum.
- Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs.
- Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation.
- Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale.
- Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions.
- Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments.
- Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains with strong engineering hygiene.
- Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders.

Brownie Points:
- Experience working in a PCI DSS or other central-bank-regulated environment with audit logging and data retention requirements.
- Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection.
- Familiarity with data contracts, data mesh patterns, and data-as-a-product principles.
- Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases.
- Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3.
- Experience building data platforms for ML/AI teams or integrating with model feature stores.

Engagement Model: Direct placement with client. This is a remote role.
Shift timings: 10 AM to 7 PM

Posted 1 month ago

Apply

4.0 - 7.0 years

15 - 30 Lacs

Bengaluru

Work from Office

About the Team
Our Data Science team is the Avengers to Meesho's S.H.I.E.L.D. And why not? We are the ones who assemble during the toughest challenges and devise creative solutions, building intelligent systems for millions of our users looking at a thousand different categories of products. We've barely scratched the surface, and have amazing challenges in charting the future of commerce for Bharat. Our typical day involves dealing with fraud detection, inventory optimisation, and platform vernacularisation. As a Data Scientist, you will navigate uncharted territories with us, discovering new paths to creating solutions for our users. You will be at the forefront of interesting challenges and solve unique customer problems in an untapped market. But wait, there's more to us. Our team is huge on having a well-rounded personal and professional life. When we aren't nose-deep in data, you will most likely find us belting Summer of 69 at the nearest karaoke bar, or debating who the best Spider-Man is: Maguire, Garfield, or Holland? You tell us.

About the Role
Love deep data? Love discussing solutions instead of problems? Then you could be our next Data Scientist. In a nutshell, your primary responsibility will be enhancing the productivity and utilisation of the generated data. Other things you will do include working closely with the business stakeholders, transforming scattered pieces of information into valuable data, and sharing and presenting your valuable insights with peers.

What you will do
- Develop models and run experiments to infer insights from hard data
- Improve our product usability and identify new growth opportunities
- Understand reseller preferences to provide them with the most relevant products
- Design discount programs to help our resellers sell more
- Help resellers better recognise end-customer preferences to improve their revenue
- Use data to identify bottlenecks that will help our suppliers meet their SLA requirements
- Model seasonal demand to predict key organisational metrics
- Mentor junior data scientists in the team

What you will need
- Bachelor's/Master's degree in computer science (or similar degrees)
- 4-7 years of experience as a Data Scientist in a fast-paced organization, preferably B2C
- Familiarity with Neural Networks, Machine Learning, etc.
- Familiarity with tools like SQL, R, Python, etc.
- Strong understanding of Statistics and Linear Algebra
- Strong understanding of hypothesis/model testing and ability to identify common model testing errors
- Experience designing and running A/B tests and drawing insights from them
- Proficiency in machine learning algorithms
- Excellent analytical skills to fetch data from reliable sources to generate accurate insights
- Experience in tech and product teams is a plus

Bonus points for:
- Experience in working on personalization or other ML problems
- Familiarity with Big Data tech stacks like Apache Spark, Hadoop, Redshift

Posted 1 month ago

Apply

5.0 - 7.0 years

6 - 10 Lacs

Mumbai, Bengaluru, Delhi

Work from Office

Must-have skills: Java, Groovy, SQL, AWS, Data Engineering, Agile, Database
Good-to-have skills: Machine Learning, Python, CI/CD, Microservices, Problem Solving

Intro and job overview:
As a Senior Software Engineer II, you will join a team working with next-gen technologies on geospatial solutions in order to identify areas for future growth, new customers and new markets in the Geocoding data integrity space. You will be working on the distributed computing platform to migrate existing geospatial dataset creation processes, bring more value to Precisely's customers and grow market share.

Responsibilities and Duties:
- Work on the distributed computing platform to migrate the existing geospatial data processes, including SQL scripts and Groovy scripts.
- Apply strong concepts in object-oriented programming and development languages, including Java, SQL, and Groovy/Gradle/Maven.
- Work closely with domain and technical experts and drive the overall modernization of the existing processes.
- Drive and maintain the AWS infrastructure and other DevOps processes.
- Participate in design and code reviews within a team environment to eliminate errors early in the development process.
- Participate in problem determination and debugging of software product issues by using technical skills and tools to isolate the cause of the problem in an efficient and timely manner.
- Provide documentation needed to thoroughly communicate software functionality.
- Present technical features of the product to customers and stakeholders as required.
- Ensure timelines and deliverables are met.
- Participate in the Agile development process.

Requirements and Qualifications:
- UG: B.Tech/B.E. or PG: M.S./M.Tech in Computer Science, Engineering or a related discipline
- At least 5-7 years of experience implementing and managing geospatial solutions
- Expert level in the programming languages Java and Python; Groovy experience is preferred
- Expert level in writing optimized SQL queries, procedures, or database objects to support data extraction and manipulation in a data environment
- Strong concepts in object-oriented programming and development languages: Java, SQL, Groovy/Gradle/Maven
- Expert in script automation in Gradle and Maven
- Problem solving and troubleshooting: proven ability to analyze and solve complex data problems and troubleshoot data pipeline issues effectively
- Experience in SQL, data warehousing and data engineering concepts
- Experience with AWS platform-provided Big Data technologies (IAM, EC2, S3, EMR, Redshift, Lambda, Aurora, SNS, etc.)
- Strong analytical, problem-solving, data analysis and research skills
- Good knowledge of continuous build integration (Jenkins and GitLab pipelines)
- Experience with agile development and working with agile engineering teams
- Excellent interpersonal skills
- Knowledge of microservices and cloud-native frameworks
- Knowledge of Machine Learning / AI
- Knowledge of the Python programming language

Posted 1 month ago

Apply

2.0 - 4.0 years

15 - 30 Lacs

Bengaluru

Work from Office

About the Role
Love deep data? Love discussing solutions instead of problems? Then you could be our next Data Scientist. In a nutshell, your primary responsibility will be enhancing the productivity and utilisation of the generated data. Other things you will do include working closely with the business stakeholders, transforming scattered pieces of information into valuable data, and sharing and presenting your valuable insights with peers.

What you will do
- Develop models and run experiments to infer insights from hard data
- Improve our product usability and identify new growth opportunities
- Understand reseller preferences to provide them with the most relevant products
- Design discount programs to help our resellers sell more
- Help resellers better recognise end-customer preferences to improve their revenue
- Use data to identify bottlenecks that will help our suppliers meet their SLA requirements
- Model seasonal demand to predict key organisational metrics
- Mentor junior data scientists in the team

What you will need
- Bachelor's/Master's degree in computer science (or similar degrees)
- 2-4 years of experience as a Data Scientist in a fast-paced organization, preferably B2C
- Familiarity with Neural Networks, Machine Learning, etc.
- Familiarity with tools like SQL, R, Python, etc.
- Strong understanding of Statistics and Linear Algebra
- Strong understanding of hypothesis/model testing and ability to identify common model testing errors
- Experience designing and running A/B tests and drawing insights from them
- Proficiency in machine learning algorithms
- Excellent analytical skills to fetch data from reliable sources to generate accurate insights
- Experience in tech and product teams is a plus

Bonus points for:
- Experience in working on personalization or other ML problems
- Familiarity with Big Data tech stacks like Apache Spark, Hadoop, Redshift

Posted 1 month ago

Apply

4.0 - 6.0 years

9 - 13 Lacs

Solapur

Work from Office

Role Overview:
EssentiallySports is seeking a Growth Product Manager who can scale our web platform's reach, engagement, and impact. This is not a traditional marketing role: your job is to engineer growth through product innovation, user journey optimization, and experimentation. You'll be the bridge between editorial, tech, and analytics, turning insights into actions that drive sustainable audience and revenue growth.

Key Responsibilities
- Own the entire web user journey from page discovery to conversion to retention.
- Identify product-led growth opportunities using scroll depth, CTRs, bounce rates, and cohort behavior.
- Optimize high-traffic areas of the site (landing pages, article CTAs, newsletter modules) for conversion and time-on-page.
- Set up and scale A/B testing and experimentation pipelines for UI/UX, headlines, engagement surfaces, and signup flows.
- Collaborate with SEO and Performance Marketing teams to translate high-ranking traffic into engaged, loyal users.
- Partner with content and tech teams to develop recommendation engines, personalization strategies, and feedback loops.
- Monitor analytics pipelines from GA4 to Athena to dashboards to derive insights and drive decision-making.
- Introduce AI-driven features (LLM prompts, content auto-summaries, etc.) that personalize or simplify the user experience.
- Use tools like Jupyter, Google Analytics, Glue, and others to synthesize data into growth opportunities.

Who you are
- 4+ years of experience in product growth, web engagement, or analytics-heavy roles.
- Deep understanding of web traffic behavior, engagement funnels, bounce/exit analysis, and retention loops.
- Hands-on experience running product experiments, growth sprints, and interpreting funnel analytics.
- Strong proficiency in SQL, GA4, marketing analytics, and campaign management.
- Understand customer segmentation, LTV analysis, cohort behavior, and user funnel optimization.
- Thrive in ambiguity and love building things from scratch.
- Passionate about AI, automation, and building sustainable growth engines.
- Think like a founder: drive initiatives independently, hunt for insights, move fast.
- A team player who collaborates across engineering, growth, and editorial teams.
- Proactive and solution-oriented, always spotting opportunities for real growth.
- Thrive in a fast-moving environment, taking ownership and driving impact.

Posted 1 month ago

Apply

5.0 - 10.0 years

20 - 25 Lacs

Gurugram

Work from Office

Role & responsibilities Key Responsibilities Design, build, and maintain scalable and efficient data pipelines to move data between cloud-native databases (e.g., Snowflake) and SaaS providers using AWS Glue and Python Implement and manage ETL/ELT processes to ensure seamless data integration and transformation Ensure information security and compliance with data governance standards Maintain and enhance data environments, including data lakes, warehouses, and distributed processing systems Utilize version control systems (e.g., GitHub) to manage code and collaborate effectively with the team Primary Skills: Enhancements, new development, defect resolution, and production support of ETL development using AWS native services Integration of data sets using AWS services such as Glue and Lambda functions. Utilization of AWS SNS to send emails and alerts Authoring ETL processes using Python and PySpark ETL process monitoring using CloudWatch events Connecting with different data sources like S3 and validating data using Athena. Experience in CI/CD using GitHub Actions Proficiency in Agile methodology Extensive working experience with Advanced SQL and a complex understanding of SQL. Secondary Skills: Experience working with Snowflake and understanding of Snowflake architecture, including concepts like internal and external tables, stages, and masking policies. Competencies / Experience: Deep technical skills in AWS Glue (Crawler, Data Catalog): 5 years. Hands-on experience with Python and PySpark: 3 years. PL/SQL experience: 3 years CloudFormation and Terraform: 2 years CI/CD GitHub actions: 1 year Experience with BI systems (PowerBI, Tableau): 1 year Good understanding of AWS services like S3, SNS, Secret Manager, Athena, and Lambda: 2 years Additionally, familiarity with any of the following is highly desirable: Jira, GitHub, Snowflake

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
