
152 AWS Redshift Jobs - Page 3

JobPe aggregates job listings for easy access, but you apply directly on the original job portal.

6.0 - 11.0 years

9 - 14 Lacs

Noida

Work from Office

Responsibilities:
* Design, develop & maintain data pipelines using AWS, Python & SQL.
* Optimize performance with Apache Spark & Amazon Redshift.
* Collaborate on cloud architecture with cross-functional teams.
Skills: Redshift

Posted 2 months ago

Apply

10.0 - 15.0 years

30 - 40 Lacs

Bengaluru

Hybrid

We are looking for a Cloud Data Engineer with strong hands-on experience in data pipelines, cloud-native services (AWS), and modern data platforms like Snowflake or Databricks. Alternatively, we're open to Data Visualization Analysts with strong BI experience and exposure to data engineering or pipelines. You will collaborate with technology and business leads to build scalable data solutions, including data lakes, data marts, and virtualization layers using tools like Starburst. This is an exciting opportunity to work with modern cloud tech in a dynamic, enterprise-scale financial services environment.

Key Responsibilities:
- Design and develop data pipelines for structured/unstructured data in AWS.
- Build semantic layers and virtualization layers using Starburst or similar tools.
- Create intuitive dashboards and reports using Power BI/Tableau.
- Collaborate on ETL designs and support testing (SIT/UAT).
- Optimize Spark jobs and ETL performance.
- Implement data quality checks and validation frameworks.
- Translate business requirements into scalable technical solutions.
- Participate in design reviews and documentation.

Skills & Qualifications:

Must-Have:
- 10+ years in Data Engineering or related roles.
- Hands-on with AWS Glue, Redshift, Athena, EMR, Lambda, S3, Kinesis.
- Proficient in HiveQL, Spark, Python, Scala.
- Experience with modern data platforms (Snowflake/Databricks).
- 3+ years in ETL tools (Informatica, SSIS) and recent experience in cloud-based ETL.
- Strong understanding of Data Warehousing, Data Lakes, and Data Mesh.

Preferred:
- Exposure to data virtualization tools like Starburst or Denodo.
- Experience in the financial services or banking domain.
- AWS Certification (Data specialty) is a plus.

Posted 2 months ago

Apply

12.0 - 15.0 years

0 - 3 Lacs

Bengaluru

Hybrid

Role & responsibilities
- Design, develop, and maintain scalable enterprise data architecture incorporating data warehouse, data lake, and data mesh concepts.
- Create and maintain data models, schemas, and mappings that support reporting, business intelligence, analytics, and AI/ML initiatives.
- Establish data integration patterns for batch and real-time processing using AWS services (Glue, DMS, Lambda), Redshift, Snowflake, or Databricks.
- Define technical specifications for data storage, data processing, and data access patterns.
- Develop data models and enforce data architecture standards, policies, and best practices.
- Partner with business stakeholders to translate requirements into architectural solutions.
- Lead data modernization initiatives, including legacy system migrations.
- Create roadmaps for evolving data architecture to support future business needs.
- Provide expert guidance on complex data problems and architectural decisions.

Preferred candidate profile
- Bachelor's degree in Computer Science, Information Systems, or related field; Master's degree preferred.
- 8+ years of experience in data architecture, database design, data modelling, or related roles.
- 5+ years of experience with cloud data platforms, particularly AWS data services.
- 3+ years of experience architecting MPP database solutions (Redshift, Snowflake, etc.).
- Expert knowledge of data warehouse architecture and dimensional modelling.
- Strong understanding of the AWS data services ecosystem (Redshift, S3, Glue, DMS, Lambda).
- Experience with SQL Server and migration to cloud data platforms.
- Proficiency in data modelling, entity relationship diagrams, and schema design.
- Working knowledge of data integration patterns and technologies (ETL/ELT, CDC).
- Experience with one or more programming/scripting languages (Python, SQL, Shell).
- Familiarity with data lake architectures and technologies (Parquet, Delta Lake, Athena).
- Excellent verbal and written communication skills, with the ability to translate complex technical concepts to varied audiences.
- Strong stakeholder management and influencing skills.
- Experience implementing data warehouse, data lake, and data mesh architectures.
- Good to have: knowledge of machine learning workflows and feature engineering.
- Understanding of regulatory requirements related to data (FedRAMP, GDPR, CCPA, etc.).
- Experience with big data technologies (Spark, Hadoop).

Posted 2 months ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Bengaluru

Work from Office

Hiring for a FAANG company. Note: This position is open only for women professionals returning to the workforce after a career break (9+ months career gap, e.g., last working day prior to NOV 2024). We encourage you to apply only if you meet these criteria.

Position Overview
This is a Level 5 Data Engineer role within a leading e-commerce organization's Selling Partner Services division in India. The position focuses on building and scaling API authorization and customization systems that serve thousands of global selling partners. This is a senior-level position requiring significant technical expertise and leadership capabilities.

Team Context & Mission
- Organization: Selling Partner Services division
- Focus: API authorization and customization systems for global selling partners
- Mission: Create flexible, reliable, and extensible API solutions to help businesses thrive on the platform
- Culture: Startup excitement with enterprise-level resources and scale
- Impact: Direct influence on thousands of global selling partners

Key Responsibilities

Technical Leadership
- Lead design and implementation of complex data pipelines and ETL processes
- Architect scalable, high-performance data systems using cloud technologies and big data platforms
- Evaluate and recommend new technologies and tools for data infrastructure enhancement
- Troubleshoot and resolve complex data-related issues in production environments

Collaboration & Stakeholder Management
- Work closely with data scientists, analysts, and business stakeholders
- Understand data requirements and implement appropriate solutions
- Contribute to data governance policies and procedures development

Performance & Quality Optimization
- Optimize data storage and retrieval systems for performance and cost-effectiveness
- Implement data quality checks and monitoring systems
- Ensure data integrity and reliability across all systems

Mentorship & Leadership
- Mentor junior engineers on the team
- Provide technical leadership on data engineering best practices and methodologies
- Drive adoption of industry standards and innovative approaches

Required Qualifications (Must-Have)

Experience Requirements
- 5+ years of data engineering experience (senior-level expertise expected)
- 5+ years of SQL experience (advanced SQL skills for complex data manipulation)
- Data modeling, warehousing, and ETL pipeline building as core competencies
- Distributed systems knowledge: understanding of data storage and computing in distributed environments

Technical Skills
- Advanced proficiency in designing and implementing data solutions
- Strong understanding of data architecture principles
- Experience with production-level data systems
- Knowledge of data governance and quality assurance practices

Preferred Qualifications

Cloud Technology Stack
- Data Warehousing: Redshift, Snowflake, BigQuery
- Object Storage: S3, Azure Blob, Google Cloud Storage
- ETL Services: AWS Glue, Azure Data Factory, Google Dataflow
- Big Data Processing: EMR, Databricks, Apache Spark
- Real-time Streaming: Kinesis, Kafka, Apache Storm
- Data Delivery: Firehose, Apache NiFi
- Serverless Computing: Lambda, Azure Functions, Google Cloud Functions
- Identity Management: IAM, Active Directory, role-based access control

Non-Relational Database Experience
- Object Storage: S3, blob storage systems
- Document Stores: MongoDB, CouchDB
- Key-Value Stores: Redis, DynamoDB
- Graph Databases: Neo4j, ArangoDB
- Column-Family: Cassandra, HBase

Key Success Factors
- Scalability Focus: Building systems that can handle massive enterprise scale
- Performance Optimization: Continuous improvement of system efficiency
- Quality Assurance: Maintaining high data quality and reliability standards
- Innovation: Staying current with emerging technologies and best practices
- Collaboration: Effective partnership with stakeholders across the organization

This role represents a significant opportunity for a senior data engineer to make a substantial impact on a global e-commerce seller ecosystem while working with cutting-edge technologies and leading a team of talented professionals.

Posted 2 months ago

Apply

7.0 - 9.0 years

7 - 17 Lacs

Pune

Remote

Requirements for the candidate:
- The role requires deep knowledge of data engineering techniques to create data pipelines and build data assets.
- At least 4+ years of strong hands-on programming experience with PySpark / Python / Boto3, including Python frameworks and libraries, following Python best practices.
- Strong experience in code optimization using Spark SQL and PySpark (see the sketch after this listing).
- Understanding of code versioning, Git repositories, and JFrog Artifactory.
- AWS architecture knowledge, especially S3, EC2, Lambda, Redshift, CloudFormation, etc., and the ability to explain the benefits of each.
- Code refactoring of legacy codebases: clean, modernize, and improve readability and maintainability.
- Unit tests/TDD: write tests before code, ensure functionality, catch bugs early.
- Fixing difficult bugs: debug complex code, isolate issues, resolve performance, concurrency, or logic flaws.
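A minimal sketch of the kind of PySpark/Spark SQL optimization this listing asks about: replacing a row-at-a-time Python UDF with built-in column expressions so the work stays inside the Catalyst optimizer. The bucket path, table, and column names are hypothetical, not taken from the posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("udf_to_builtin_example").getOrCreate()

orders = spark.read.parquet("s3://example-bucket/orders/")  # hypothetical path

# Slow version (row-at-a-time Python UDF, serialization overhead):
# normalize = F.udf(lambda s: s.strip().upper() if s else None)
# orders = orders.withColumn("country", normalize("country"))

# Faster version: built-in expressions are evaluated inside the JVM.
orders = orders.withColumn("country", F.upper(F.trim(F.col("country"))))

# Spark SQL for the aggregation instead of collecting rows to the driver.
orders.createOrReplaceTempView("orders")
daily = spark.sql("""
    SELECT order_date, country, SUM(amount) AS total_amount
    FROM orders
    GROUP BY order_date, country
""")
daily.write.mode("overwrite").parquet("s3://example-bucket/daily_orders/")
```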

Posted 2 months ago

Apply

4.0 - 7.0 years

24 - 40 Lacs

Hyderabad

Work from Office

Design and optimize scalable data pipelines using Python, Scala, and SQL. Work with AWS services, Redshift, Terraform, Docker, and Jenkins. Implement CI/CD, manage infrastructure as code, and ensure efficient data flow across systems.

Posted 2 months ago

Apply

7.0 - 12.0 years

5 - 15 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

Role & responsibilities
Total Years of Experience: 7 to 9. Relevant Years of Experience: 5 to 6.

Mandatory Skills:
- Overall 6+ years of experience as a DBA, including 3 years on AWS. Minimum 2 years of experience with Amazon Redshift (traditional or serverless).
- Proficient in Redshift backup and snapshot recovery processes.
- Experience with AWS account design and management.
- AWS services such as Lambda, Glue, and Airflow.

Alternate Skills (Good to have, not mandatory):
- Expertise in day-to-day DBA activities.
- Experience automating DBA tasks using Python.
- Ability to create and manage Redshift maintenance tasks, such as vacuuming and analyzing tables (see the sketch after this listing).
- Exposure to data governance and compliance frameworks (e.g., GDPR, HIPAA) in Redshift environments.

Detailed Job Description:
- Overall 6+ years of experience as a DBA, including 3 years on AWS. Minimum 2 years of experience with Amazon Redshift (traditional or serverless).
- Proficient in Redshift backup and snapshot recovery processes.
- Expertise in day-to-day DBA activities, including:
  - Managing user access and permissions.
  - Monitoring database performance and optimizing queries.
  - Ensuring database security and compliance.
  - Handling cluster resizing and scaling operations.
  - Strong understanding of database design and schema management in Redshift.
  - Experience with query monitoring and analyzing workload management (WLM) to optimize cluster performance.
  - Familiarity with data distribution strategies and tuning table designs for performance.
  - Proficiency in troubleshooting ETL/ELT processes and optimizing data ingestion workflows.
  - Understanding of Amazon S3 integration and external table usage in Redshift Spectrum and Athena.
- Experience automating DBA tasks using Python.
- Experience with AWS account design and management.
- Familiarity with AWS services like:
  - Lambda: for serverless data processing.
  - Glue: for ETL workflows and data cataloging.
  - Airflow: for orchestration and automation of data workflows.
  - Knowledge of Redshift Serverless design and best practices.
  - Hands-on experience with CloudWatch for monitoring and setting up alarms for Redshift clusters.
  - Exposure to Data Lake architecture and integration with Redshift.
  - Understanding of Redshift security configurations, including IAM roles and VPC security.
  - Ability to debug and resolve Redshift query performance bottlenecks.
- Familiarity with cross-region disaster recovery strategies for Redshift.
- Knowledge of data retention policies and partition management.
- Ability to create and manage Redshift maintenance tasks, such as vacuuming and analyzing tables.
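A minimal sketch of automating the Redshift maintenance tasks mentioned above (VACUUM and ANALYZE) from Python, assuming the psycopg2 driver; the cluster endpoint, credentials, and table names are placeholders.

```python
import psycopg2

conn = psycopg2.connect(
    host="my-cluster.xxxxxx.ap-south-1.redshift.amazonaws.com",  # placeholder endpoint
    port=5439, dbname="analytics", user="admin_user", password="***",
)
conn.autocommit = True  # VACUUM cannot run inside an open transaction block

tables = ["sales.orders", "sales.order_items"]  # hypothetical tables
with conn.cursor() as cur:
    for table in tables:
        cur.execute(f"VACUUM DELETE ONLY {table};")  # reclaim space from deleted rows
        cur.execute(f"ANALYZE {table};")             # refresh planner statistics
conn.close()
```

In practice a script like this would typically be scheduled from Airflow or a Lambda function rather than run by hand.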

Posted 2 months ago

Apply

10.0 - 15.0 years

25 - 40 Lacs

Noida

Remote

Job Summary: We are seeking a seasoned Confluent & Oracle EBS Cloud Engineer with over 10 years of experience to lead the design and implementation of scalable, cloud-native data solutions. This role focuses on modernizing enterprise data infrastructure, driving real-time data streaming initiatives, and migrating legacy ERP systems to AWS-based platforms.

Key Responsibilities:
- Architect and implement cloud-based data platforms using AWS services including Redshift, Glue, DMS, and Data Lake solutions.
- Lead the migration of Oracle E-Business Suite or similar ERP systems to AWS, ensuring data integrity and performance.
- Design and drive the implementation of Confluent Kafka for real-time data streaming across enterprise systems (a minimal producer sketch follows this listing).
- Define and enforce data architecture standards, governance policies, and best practices.
- Collaborate with engineering, data, and business teams to align architecture with strategic goals.
- Optimize data pipelines and storage for scalability, reliability, and cost-efficiency.

Required Qualifications:
- 10+ years of experience in data architecture, cloud engineering, or enterprise systems design.
- Deep expertise in AWS services including Redshift, Glue, DMS, and Data Lake architectures.
- Proven experience with Confluent Kafka for real-time data streaming and event-driven architectures.
- Hands-on experience migrating large-scale ERP systems (e.g., Oracle EBS) to cloud platforms.
- Strong understanding of data governance, security, and compliance in cloud environments.
- Proficiency in designing scalable, fault-tolerant data systems.

Preferred Qualifications:
- Experience with data modeling, metadata management, and lineage tracking.
- Familiarity with infrastructure-as-code and CI/CD practices.
- Strong communication and leadership skills to guide cross-functional teams.
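A minimal sketch of the streaming piece of this role: publishing a change event to a Kafka topic from Python with the confluent-kafka client. The broker address, topic name, and payload are hypothetical and not taken from the posting.

```python
import json
from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "broker-1.example.com:9092"})  # placeholder broker

def on_delivery(err, msg):
    # Called once per message after the broker acknowledges (or rejects) it.
    if err is not None:
        print(f"Delivery failed for key {msg.key()}: {err}")

event = {"order_id": 12345, "status": "SHIPPED", "source": "oracle_ebs"}
producer.produce(
    topic="ebs.order.events",          # hypothetical topic
    key=str(event["order_id"]),
    value=json.dumps(event),
    callback=on_delivery,
)
producer.flush()  # block until all queued messages are delivered
```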

Posted 2 months ago

Apply

6.0 - 11.0 years

10 - 18 Lacs

Bengaluru

Remote

We are looking for experienced DBAs who have worked on multiple database technologies and cloud migration projects.
- 6+ years of experience working on SQL/NoSQL/data warehouse platforms, on-premise and in the cloud (AWS, Azure & GCP).
- Provide expert-level guidance on cloud adoption, data migration strategies, and digital transformation projects.
- Strong understanding of RDBMS, NoSQL, data warehouse, in-memory, and data lake architectures, features, and functionalities.
- Proficiency in SQL and data manipulation techniques.
- Experience with data loading and unloading tools and techniques.
- Expertise in data access management and database reliability & scalability; administer, configure, and optimize database resources and services across the organization.
- Ensure high availability, replication, and failover strategies.
- Implement serverless database architectures for cost-effective, scalable storage.

Key Responsibilities
- Strong proficiency in database administration of one or more databases (Snowflake, BigQuery, Amazon Redshift, Teradata, SAP HANA, Oracle, PostgreSQL, MySQL, SQL Server, Cassandra, MongoDB, Neo4j, Cloudera, Micro Focus, IBM DB2, Elasticsearch, DynamoDB, Azure Synapse).
- Plan and execute the on-prem database/Analysis Services/Reporting Services/Integration Services migration to AWS/Azure/GCP.
- Develop automation scripts using Python, shell scripting, or Terraform for streamlined database operations.
- Provide technical guidance and mentoring to junior DBAs and data engineers.
- Hands-on experience with data modelling, ETL/ELT processes, and data integration tools.
- Monitor and optimize the performance of virtual warehouses, queries, and overall system performance.
- Optimize database performance through query tuning, indexing, and configuration.
- Manage replication, backups, and disaster recovery for high availability.
- Troubleshoot and resolve database issues, including performance bottlenecks, errors, and downtime.
- Collaborate with the infrastructure team to configure, manage, and monitor PostgreSQL in cloud environments (AWS, GCP, or Azure).
- Provide on-call support for critical database operations and incidents.
- Provide Level 3 and 4 technical support, troubleshooting complex issues.
- Participate in cross-functional teams for database design and optimization.

Posted 2 months ago

Apply

6.0 - 11.0 years

10 - 18 Lacs

Bengaluru

Remote

We are looking for experienced DBAs who have worked on multiple database technologies and cloud migration projects for our clients worldwide.
- 6+ years of experience working on SQL/NoSQL/data warehouse platforms, on-premise and in the cloud (AWS, Azure & GCP).
- Provide expert-level guidance on cloud adoption, data migration strategies, and digital transformation projects.
- Strong understanding of RDBMS, NoSQL, data warehouse, in-memory, and data lake architectures, features, and functionalities.
- Proficiency in SQL and data manipulation techniques.
- Experience with data loading and unloading tools and techniques.
- Expertise in data access management and database reliability & scalability; administer, configure, and optimize database resources and services across the organization.
- Ensure high availability, replication, and failover strategies.
- Implement serverless database architectures for cost-effective, scalable storage.

Key Responsibilities
- Strong proficiency in database administration of one or more databases (Snowflake, BigQuery, Amazon Redshift, Teradata, SAP HANA, Oracle, PostgreSQL, MySQL, SQL Server, Cassandra, MongoDB, Neo4j, Cloudera, Micro Focus, IBM DB2, Elasticsearch, DynamoDB, Azure Synapse).
- Plan and execute the on-prem database/Analysis Services/Reporting Services/Integration Services migration to AWS/Azure/GCP.
- Develop automation scripts using Python, shell scripting, or Terraform for streamlined database operations.
- Provide technical guidance and mentoring to junior DBAs and data engineers.
- Hands-on experience with data modelling, ETL/ELT processes, and data integration tools.
- Monitor and optimize the performance of virtual warehouses, queries, and overall system performance.
- Optimize database performance through query tuning, indexing, and configuration.
- Manage replication, backups, and disaster recovery for high availability.
- Troubleshoot and resolve database issues, including performance bottlenecks, errors, and downtime.
- Collaborate with the infrastructure team to configure, manage, and monitor PostgreSQL in cloud environments (AWS, GCP, or Azure).
- Provide on-call support for critical database operations and incidents.
- Provide Level 3 and 4 technical support, troubleshooting complex issues.
- Participate in cross-functional teams for database design and optimization.

Posted 2 months ago

Apply

4.0 - 8.0 years

20 - 25 Lacs

Bengaluru

Hybrid

Job Title: AWS Engineer
Experience: 4 - 8 Years
Location: Bengaluru (Hybrid, 2-3 days onsite per week)
Employment Type: Full-Time
Notice Period: Only immediate to 15-day joiners preferred

Job Description: We are looking for an experienced AWS Engineer to join our dynamic data engineering team. The ideal candidate will have hands-on experience building and maintaining robust, scalable data pipelines and cloud-based architectures on AWS.

Key Responsibilities:
- Design, develop, and maintain scalable data pipelines using AWS services such as Glue, Lambda, S3, Redshift, and EMR
- Collaborate with data scientists and ML engineers to operationalize machine learning models using AWS SageMaker
- Implement efficient data transformation and feature engineering workflows
- Optimize ETL/ELT processes and enforce best practices for data quality and governance
- Work with structured and unstructured data using Amazon Athena, DynamoDB, RDS, and similar services
- Build and manage CI/CD pipelines for data and ML workflows using AWS CodePipeline, CodeBuild, and Step Functions (see the orchestration sketch after this listing)
- Monitor data infrastructure for performance, reliability, and cost-effectiveness
- Ensure data security and compliance with organizational and regulatory standards

Required Skills:
- Strong experience with AWS data and ML services
- Solid knowledge of ETL/ELT frameworks and data modeling
- Proficiency in Python, SQL, and scripting for data engineering
- Experience with CI/CD and DevOps practices on AWS
- Good understanding of data governance and compliance standards
- Excellent collaboration and problem-solving skills
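A minimal sketch of the Step Functions orchestration mentioned above: starting a state machine execution (for example, one that chains a Glue ETL job and a SageMaker training step) from Python with boto3 and polling its status. The ARN, execution name, and input payload are placeholders.

```python
import json
import time
import boto3

sfn = boto3.client("stepfunctions", region_name="ap-south-1")

response = sfn.start_execution(
    stateMachineArn="arn:aws:states:ap-south-1:123456789012:stateMachine:nightly-etl",  # placeholder
    name="nightly-etl-2024-01-01",  # execution names must be unique per state machine
    input=json.dumps({"run_date": "2024-01-01", "full_refresh": False}),
)
print("Started execution:", response["executionArn"])

# Simple polling loop; in a real pipeline an EventBridge rule or the CI/CD
# system would react to the completion event instead.
while True:
    status = sfn.describe_execution(executionArn=response["executionArn"])["status"]
    if status != "RUNNING":
        break
    time.sleep(10)
print("Final status:", status)
```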

Posted 2 months ago

Apply

8.0 - 13.0 years

27 - 35 Lacs

Kochi, Bengaluru

Work from Office

About Us: DBiz Solution is a Transformational Partner. Digital transformation is intense. We'd like for you to have something to hold on to whilst you set out bringing your ideas into existence. Beyond anything, we put humans first. This means solving real problems with real people and providing needs with real, working solutions. DBiz leverages a wealth of experience building a variety of software to improve our clients' ability to respond to change and build tomorrow's digital business. We're quite proud of our record of accomplishment. Having delivered over 150 projects for over 100 clients, we can honestly say we leave our clients happy and wanting more. Using data, we aim to unlock value and create platforms/products at scale that can evolve with business strategies using our innovative Rapid Application Development methodologies. Our passion for creating an impact drives everything we do. We believe that technology has the power to transform businesses and improve lives, and it is our mission to harness this power to make a difference. We constantly strive to innovate and deliver solutions that not only meet our clients' needs but exceed their expectations, allowing them to achieve their goals and drive sustainable growth. Through our world-leading digital transformation strategies, we are always growing and improving. That means creating an environment where every one of us can strive together for excellence.

Senior Data Engineer - AWS (Glue, Data Warehousing, Optimization & Security)
We need an experienced Senior Data Engineer (8+ years) with deep expertise in AWS cloud data services, particularly AWS Glue, to design, build, and optimize scalable data solutions. The ideal candidate will drive end-to-end data engineering initiatives from ingestion to consumption, with a strong focus on data warehousing, performance optimization, self-service enablement, and data security. The candidate needs to have experience in consulting and troubleshooting exercises to design best-fit solutions.

Key Responsibilities:
- Consult with business and technology stakeholders to understand data requirements, troubleshoot, and advise on best-fit AWS data solutions
- Design and implement scalable ETL pipelines using AWS Glue, handling structured and semi-structured data (a minimal Glue job skeleton follows this listing)
- Architect and manage modern cloud data warehouses (e.g., Amazon Redshift, Snowflake, or equivalent)
- Optimize data pipelines and queries for performance, cost-efficiency, and scalability
- Develop solutions that enable self-service analytics for business and data science teams
- Implement data security, governance, and access controls
- Collaborate with data scientists, analysts, and business stakeholders to understand data needs
- Monitor, troubleshoot, and improve existing data solutions, ensuring high availability and reliability

Required Skills & Experience:
- 8+ years of experience in data engineering on the AWS platform
- Strong hands-on experience with AWS Glue, Lambda, S3, Athena, Redshift, IAM
- Proven expertise in data modelling, data warehousing concepts, and SQL optimization
- Experience designing self-service data platforms for business users
- Solid understanding of data security, encryption, and access management
- Proficiency in Python
- Familiarity with DevOps practices & CI/CD
- Strong problem-solving skills
- Exposure to BI tools (e.g., QuickSight, Power BI, Tableau) for self-service enablement

Preferred Qualifications:
- AWS Certified Data Analytics - Specialty or Solutions Architect - Associate
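A minimal skeleton of an AWS Glue PySpark job of the kind this role describes: read a catalogued table, remap a few columns, and write Parquet to S3. The database, table, and bucket names are placeholders, and the awsglue library is only available inside the Glue job runtime, so treat this as a sketch rather than a drop-in script.

```python
import sys
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Source: a catalogued raw table (hypothetical names).
raw = glue_context.create_dynamic_frame.from_catalog(
    database="raw_zone", table_name="customer_orders"
)

# Rename/cast a few columns on the way through.
mapped = ApplyMapping.apply(
    frame=raw,
    mappings=[
        ("order_id", "string", "order_id", "string"),
        ("order_date", "string", "order_date", "date"),
        ("amount", "double", "order_amount", "double"),
    ],
)

# Sink: Parquet in a curated zone bucket.
glue_context.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="s3",
    connection_options={"path": "s3://example-curated-zone/orders/"},
    format="parquet",
)
job.commit()
```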

Posted 2 months ago

Apply

12.0 - 17.0 years

30 - 45 Lacs

Bengaluru

Work from Office

Work Location: Bangalore
Experience: 10+ years

Required Skills:
- Experience with AWS cloud and AWS services such as S3 buckets, Lambda, API Gateway, and SQS queues;
- Experience with batch job scheduling and identifying data/job dependencies;
- Experience with data engineering using the AWS platform and Python;
- Familiar with AWS services like EC2, S3, Redshift/Spectrum, Glue, Athena, RDS, Lambda, and API Gateway;
- Familiar with software DevOps CI/CD tools, such as Git, Jenkins, Linux, and shell scripting.

Thanks & Regards,
Suganya R
suganya@spstaffing.in

Posted 2 months ago

Apply

7.0 - 12.0 years

0 - 1 Lacs

Chennai

Hybrid

Role & responsibilities

Detailed job description - Skill Set:
- Design, develop, and maintain ETL (Extract, Transform, Load) workflows using ETL tools.
- Optimize ETL processes for performance, scalability, and reliability.
- Ensure data is accurately extracted from source systems, transformed according to business rules, and loaded into target systems (data warehouses, data lakes, etc.); a minimal load-step sketch follows this listing.
- Work closely with data analysts, data scientists, business stakeholders, and other developers to understand data requirements.
- Document ETL processes, data mappings, and workflows for maintainability and knowledge sharing.
- Participate in code reviews and contribute to best practices in data engineering.

Mandatory Skills: AWS Redshift, PL/SQL, Python, ETL, DWH
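A minimal sketch of the "load" step of such an ETL workflow: bulk-loading a transformed file from S3 into Redshift with COPY (much faster than row-by-row INSERTs) and then merging into the warehouse table. The cluster endpoint, IAM role, and table/bucket names are placeholders.

```python
import psycopg2

conn = psycopg2.connect(
    host="my-cluster.xxxxxx.ap-south-1.redshift.amazonaws.com",  # placeholder
    port=5439, dbname="dwh", user="etl_user", password="***",
)

copy_sql = """
    COPY staging.orders
    FROM 's3://example-etl-bucket/transformed/orders/2024-01-01/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role'
    FORMAT AS PARQUET;
"""

with conn:  # commit on success, roll back on error
    with conn.cursor() as cur:
        cur.execute(copy_sql)
        # Insert only rows not already present in the warehouse table.
        cur.execute("""
            INSERT INTO dwh.orders
            SELECT * FROM staging.orders s
            WHERE NOT EXISTS (
                SELECT 1 FROM dwh.orders d WHERE d.order_id = s.order_id
            );
        """)
conn.close()
```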

Posted 2 months ago

Apply

5.0 - 10.0 years

10 - 18 Lacs

Bengaluru, Mumbai (All Areas)

Hybrid

About the Role: We are seeking a passionate and experienced Subject Matter Expert and Trainer to deliver our comprehensive Data Engineering with AWS program. This role combines deep technical expertise with the ability to coach, mentor, and empower learners to build strong capabilities in data engineering, cloud services, and modern analytics tools. If you have a strong background in data engineering and love to teach, this is your opportunity to create impact by shaping the next generation of cloud data professionals.

Key Responsibilities:
Deliver end-to-end training on the Data Engineering with AWS curriculum, including:
- Oracle SQL and ANSI SQL
- Data Warehousing Concepts, ETL & ELT
- Data Modeling and Data Vault
- Python programming for data engineering
- AWS Fundamentals (EC2, S3, Glue, Redshift, Athena, Kinesis, etc.)
- Apache Spark and Databricks
- Data Ingestion, Processing, and Migration Utilities
- Real-time Analytics and Compute Services (Airflow, Step Functions)
Additional responsibilities:
- Facilitate engaging sessions, virtual and in-person, and adapt instructional methods to suit diverse learning styles.
- Guide learners through hands-on labs, coding exercises, and real-world projects.
- Assess learner progress through evaluations, assignments, and practical assessments.
- Provide mentorship, resolve doubts, and inspire confidence in learners.
- Collaborate with the program management team to continuously improve course delivery and learner experience.
- Maintain up-to-date knowledge of AWS and data engineering best practices.

Ideal Candidate Profile:
- Experience: Minimum 5-8 years in Data Engineering, Big Data, or Cloud Data Solutions. Prior experience delivering technical training or conducting workshops is strongly preferred.
- Technical Expertise: Proficiency in SQL, Python, and Spark. Hands-on experience with AWS services: Glue, Redshift, Athena, S3, EC2, Kinesis, and related tools. Familiarity with Databricks, Airflow, Step Functions, and modern data pipelines.
- Certifications: AWS certifications (e.g., AWS Certified Data Analytics - Specialty) are a plus.
- Soft Skills: Excellent communication, facilitation, and interpersonal skills. Ability to break down complex concepts into simple, relatable examples. Strong commitment to learner success and outcomes.

Email your application to: careers@edubridgeindia.in.

Posted 2 months ago

Apply

5.0 - 9.0 years

15 - 22 Lacs

Chennai

Work from Office

We are looking for a skilled and motivated Senior Data Engineer to join our data integration and analytics team. The ideal candidate will have hands-on experience with Informatica IICS, AWS Redshift, Python scripting, and Unix/Linux systems. You will be responsible for building and maintaining scalable ETL pipelines to support business intelligence and analytics needs. We value individuals who are passionate about continuous learning, problem-solving, and enabling data-driven decision-making.

Years of Experience: Min. 5 years (with 3+ years of hands-on experience in Informatica IICS - Cloud Data Integration, Application Integration)
Primary Skills: Informatica IICS, AWS (especially Redshift)
Secondary Skills: Python, Unix/Linux

Role Description: As a Senior Data Engineer, you will lead the design, development, and management of scalable data platforms and pipelines. This role demands a strong technical foundation in data architecture, big data technologies, and database systems (both SQL and NoSQL), along with the ability to collaborate across functional teams to deliver robust, secure, and high-performing data solutions.

Key Responsibilities:
- Design, develop, and maintain end-to-end data pipelines and infrastructure.
- Translate business and functional requirements into scalable, well-documented technical solutions.
- Build and manage data flows across structured and unstructured data sources, including streaming and batch integrations.
- Ensure data integrity and quality through automated validations, unit testing, and comprehensive documentation.
- Optimize data processing performance and manage large datasets efficiently.
- Collaborate closely with stakeholders and project teams to align data solutions with business objectives.
- Implement and maintain security and privacy protocols to ensure safe data handling.
- Set up development environments and configure tools and services.
- Mentor junior data engineers and contribute to continuous improvement and automation initiatives.
- Coordinate with QA and UAT teams during testing and release phases.

Role Requirements:
- Strong proficiency in SQL, including procedures, performance tuning, and analytical functions.
- Solid understanding of data warehousing concepts, including dimensional modeling and slowly changing dimensions (SCDs).
- Hands-on experience with scripting languages (Shell / PowerShell).
- Proficiency in data profiling, validation, and testing practices.
- Excellent problem-solving, communication (written and verbal), and documentation skills.
- Exposure to Agile methodologies and CI/CD practices.

Additional Requirements:
- Overall 5+ years of experience, with 3+ years of hands-on experience in Informatica IICS (Cloud Data Integration, Application Integration).
- Strong proficiency in AWS Redshift and writing complex SQL queries.
- Solid programming experience in Python for scripting, data wrangling, and automation.

Posted 2 months ago

Apply

6.0 - 11.0 years

0 - 2 Lacs

Chennai

Work from Office

Requirement 1:
Skills: AWS Redshift developer with Apache Airflow
Location: Chennai
Experience: 8+ Years
Work Mode: Hybrid

Role & responsibilities: Senior Data Engineer - AWS Redshift & Apache Airflow

Job Summary: We are seeking a highly experienced Senior Data Engineer to lead the design, development, and optimization of scalable data pipelines using AWS Redshift and Apache Airflow. The ideal candidate will have deep expertise in cloud-based data warehousing, workflow orchestration, and ETL processes, with a strong background in SQL and Python.

Key Responsibilities:
- Design, build, and maintain robust ETL/ELT pipelines using Apache Airflow.
- Integrate data from various sources into AWS Redshift for analytics and reporting.
- Develop and optimize Redshift schemas, tables, and queries.
- Monitor and tune Redshift performance for large-scale data operations.
- Implement and manage DAGs in Airflow for scheduling and monitoring data workflows (a minimal DAG sketch follows this listing).
- Ensure reliability and fault tolerance in data pipelines.
- Work closely with data scientists, analysts, and business stakeholders to understand data requirements.
- Translate business needs into technical solutions.
- Enforce data quality, integrity, and security best practices.
- Implement access controls and audit mechanisms using AWS IAM and related tools.
- Mentor junior engineers and promote best practices in data engineering.
- Stay updated with emerging technologies and recommend improvements.

Required Skills & Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- 8+ years of experience in data engineering, with a focus on AWS Redshift and Apache Airflow.
- Strong proficiency in AWS services: Redshift, S3, Lambda, Glue, IAM.
- Proficient in programming languages: SQL, Python.
- Experience with ETL tools & frameworks: Apache Airflow, DBT (preferred).
- Experience with data modeling, performance tuning, and large-scale data processing.
- Familiarity with CI/CD pipelines and version control (Git).
- Excellent problem-solving and communication skills.

Preferred Skills:
- Experience with big data technologies (Spark, Hadoop).
- Knowledge of NoSQL databases (DynamoDB, Cassandra).
- AWS certification (e.g., AWS Certified Data Analytics - Specialty).

Regards,
Bhavani Challa
Sr. Talent Acquisition
E: bhavani.challa@arisetg.com
M: 9063067791
www.arisetg.com
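A minimal sketch of the kind of Airflow DAG this role describes: a daily Redshift load with two dependent tasks. It assumes Airflow 2.x with the Postgres provider installed (Redshift speaks the Postgres wire protocol); the connection ID, bucket path, IAM role, and stored procedure are placeholders.

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.providers.postgres.operators.postgres import PostgresOperator

default_args = {"owner": "data-eng", "retries": 2, "retry_delay": timedelta(minutes=10)}

with DAG(
    dag_id="daily_orders_to_redshift",
    start_date=datetime(2024, 1, 1),
    schedule_interval="0 2 * * *",   # 02:00 every day
    catchup=False,
    default_args=default_args,
) as dag:

    load_staging = PostgresOperator(
        task_id="copy_orders_to_staging",
        postgres_conn_id="redshift_default",   # hypothetical Airflow connection
        sql="""
            COPY staging.orders
            FROM 's3://example-bucket/orders/{{ ds }}/'
            IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role'
            FORMAT AS PARQUET;
        """,
    )

    merge_to_mart = PostgresOperator(
        task_id="merge_orders_into_mart",
        postgres_conn_id="redshift_default",
        sql="CALL dwh.sp_merge_orders();",     # hypothetical stored procedure
    )

    load_staging >> merge_to_mart
```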

Posted 2 months ago

Apply

7.0 - 12.0 years

15 - 30 Lacs

Gurugram, Delhi / NCR

Work from Office

Job Description: We are seeking a highly skilled Senior Data Engineer with deep expertise in AWS data services, data wrangling using Python & PySpark, and a solid understanding of data governance, lineage, and quality frameworks. The ideal candidate will have a proven track record of delivering end-to-end data pipelines for logistics, supply chain, enterprise finance, or B2B analytics use cases.

Role & responsibilities:
- Design, build, and optimize ETL pipelines using AWS Glue 3.0+ and PySpark.
- Implement scalable and secure data lakes using Amazon S3, following bronze/silver/gold zoning.
- Write performant SQL using AWS Athena (Presto) with CTEs, window functions, and aggregations.
- Take full ownership from ingestion to transformation, validation, metadata documentation, and dashboard-ready output.
- Build pipelines that are not just performant, but audit-ready and metadata-rich from the first version.
- Integrate classification tags and ownership metadata into all columns using AWS Glue Catalog tagging conventions.
- Ensure no pipeline moves to the QA or BI team without validation logs and field-level metadata completed.
- Develop job orchestration workflows using AWS Step Functions integrated with EventBridge or CloudWatch.
- Manage schemas and metadata using the AWS Glue Data Catalog.
- Enforce data quality using Great Expectations, with checks for null %, ranges, and referential rules (a minimal sketch follows this listing).
- Ensure data lineage with OpenMetadata or Amundsen and add metadata classifications (e.g., PII, KPIs).
- Collaborate with data scientists on ML pipelines, handling JSON/Parquet I/O and feature engineering.
- Must understand how to prepare flattened, filterable datasets for BI tools like Sigma, Power BI, or Tableau.
- Interpret business metrics such as forecasted revenue, margin trends, occupancy/utilization, and volatility.
- Work with consultants, QA, and business teams to finalize KPIs and logic.

Preferred candidate profile:
- Strong hands-on experience with AWS: Glue, S3, Athena, Step Functions, EventBridge, CloudWatch, Glue Data Catalog.
- Programming skills in Python 3.x, PySpark, and SQL (Athena/Presto).
- Proficient with Pandas and NumPy for data wrangling, feature extraction, and time series slicing.
- Strong command of data governance tools like Great Expectations and OpenMetadata/Amundsen.
- Familiarity with tagging sensitive metadata (PII, KPIs, model inputs).
- Capable of creating audit logs for QA and rejected data.
- Experience in feature engineering: rolling averages, deltas, and time-window tagging.
- BI-readiness with Sigma, with exposure to Power BI / Tableau (nice to have).
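A minimal sketch of the data-quality gate described above, using the classic Great Expectations pandas API. The dataset path, column names, and thresholds are hypothetical, and newer Great Expectations releases favour a context-based API, so treat this as illustrative only.

```python
import pandas as pd
import great_expectations as ge

df = pd.read_parquet("s3://example-silver-zone/shipments/2024-01-01/")  # placeholder path
batch = ge.from_pandas(df)

# Null-percentage check: at least 99% of shipment_id values must be present.
batch.expect_column_values_to_not_be_null("shipment_id", mostly=0.99)

# Range check on a numeric KPI input.
batch.expect_column_values_to_be_between("transit_days", min_value=0, max_value=60)

# A lightweight referential-style check against an allowed set of status codes.
batch.expect_column_values_to_be_in_set("status", ["CREATED", "IN_TRANSIT", "DELIVERED"])

results = batch.validate()
if not results["success"]:
    raise ValueError("Data quality gate failed; see validation results for details")
```

A pipeline would typically write these validation results to an audit log before handing the dataset to QA or BI.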

Posted 2 months ago

Apply

7.0 - 12.0 years

15 - 30 Lacs

Gurugram

Hybrid

Job Description: We are seeking a highly skilled Senior Data Engineer with deep expertise in AWS data services, data wrangling using Python & PySpark, and a solid understanding of data governance, lineage, and quality frameworks. The ideal candidate will have a proven track record of delivering end-to-end data pipelines for logistics, supply chain, enterprise finance, or B2B analytics use cases.

Role & responsibilities:
- Design, build, and optimize ETL pipelines using AWS Glue 3.0+ and PySpark.
- Implement scalable and secure data lakes using Amazon S3, following bronze/silver/gold zoning.
- Write performant SQL using AWS Athena (Presto) with CTEs, window functions, and aggregations (a windowed-query sketch follows this listing).
- Take full ownership from ingestion to transformation, validation, metadata documentation, and dashboard-ready output.
- Build pipelines that are not just performant, but audit-ready and metadata-rich from the first version.
- Integrate classification tags and ownership metadata into all columns using AWS Glue Catalog tagging conventions.
- Ensure no pipeline moves to the QA or BI team without validation logs and field-level metadata completed.
- Develop job orchestration workflows using AWS Step Functions integrated with EventBridge or CloudWatch.
- Manage schemas and metadata using the AWS Glue Data Catalog.
- Enforce data quality using Great Expectations, with checks for null %, ranges, and referential rules.
- Ensure data lineage with OpenMetadata or Amundsen and add metadata classifications (e.g., PII, KPIs).
- Collaborate with data scientists on ML pipelines, handling JSON/Parquet I/O and feature engineering.
- Must understand how to prepare flattened, filterable datasets for BI tools like Sigma, Power BI, or Tableau.
- Interpret business metrics such as forecasted revenue, margin trends, occupancy/utilization, and volatility.
- Work with consultants, QA, and business teams to finalize KPIs and logic.

Preferred candidate profile:
- Strong hands-on experience with AWS: Glue, S3, Athena, Step Functions, EventBridge, CloudWatch, Glue Data Catalog.
- Programming skills in Python 3.x, PySpark, and SQL (Athena/Presto).
- Proficient with Pandas and NumPy for data wrangling, feature extraction, and time series slicing.
- Strong command of data governance tools like Great Expectations and OpenMetadata/Amundsen.
- Familiarity with tagging sensitive metadata (PII, KPIs, model inputs).
- Capable of creating audit logs for QA and rejected data.
- Experience in feature engineering: rolling averages, deltas, and time-window tagging.
- BI-readiness with Sigma, with exposure to Power BI / Tableau (nice to have).
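A minimal sketch of running a CTE plus window-function query on Athena (Presto) from Python with boto3, as the listing above calls for. The database, table, and S3 results location are placeholders.

```python
import time
import boto3

athena = boto3.client("athena", region_name="ap-south-1")

query = """
WITH daily AS (
    SELECT shipment_date, carrier, SUM(cost) AS daily_cost
    FROM logistics.shipments
    GROUP BY shipment_date, carrier
)
SELECT shipment_date, carrier, daily_cost,
       AVG(daily_cost) OVER (
           PARTITION BY carrier
           ORDER BY shipment_date
           ROWS BETWEEN 6 PRECEDING AND CURRENT ROW
       ) AS rolling_7d_avg_cost
FROM daily
"""

run = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "logistics"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)

# Simple polling loop; production pipelines would use Step Functions or EventBridge.
while True:
    state = athena.get_query_execution(
        QueryExecutionId=run["QueryExecutionId"]
    )["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=run["QueryExecutionId"])["ResultSet"]["Rows"]
    print(f"Fetched {len(rows) - 1} result rows")  # first row is the header
```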

Posted 2 months ago

Apply

3.0 - 6.0 years

15 - 20 Lacs

Hyderabad

Hybrid

Hello,

Urgent job openings for the Data Engineer role @ GlobalData (Hyd). The job description is given below; please go through it to understand the requirement. If the requirement matches your profile and you are interested in applying, please share your updated resume to m.salim@globaldata.com.

Mention subject line: Applying for Data Engineer @ GlobalData (Hyd)

Share your details in the mail:
- Full Name:
- Mobile #:
- Qualification:
- Company Name:
- Designation:
- Total Work Experience (Years):
- How many years of experience working on Snowflake/Google BigQuery:
- Current CTC:
- Expected CTC:
- Notice Period:
- Current Location / willing to relocate to Hyderabad?:

Office Address: 3rd Floor, Jyoti Pinnacle Building, Opp. Prestige IVY League Apt., Kondapur Road, Hyderabad, Telangana 500081.

Job Description: We are looking for a skilled and experienced Data Delivery Specification (DDS) Engineer to join our data team. The DDS Engineer will be responsible for designing, developing, and maintaining robust data pipelines and delivery mechanisms, ensuring timely and accurate data delivery to various stakeholders. This role requires strong expertise in cloud data platforms such as AWS, Snowflake, and Google BigQuery, along with a deep understanding of data warehousing concepts.

Key Responsibilities:
- Design, develop, and optimize data pipelines for efficient data ingestion, transformation, and delivery from various sources to target systems.
- Implement and manage data delivery solutions using cloud platforms like AWS (S3, Glue, Lambda, Redshift), Snowflake, and Google BigQuery.
- Collaborate with data architects, data scientists, and business analysts to understand data requirements and translate them into technical specifications.
- Develop and maintain DDS documents, outlining data sources, transformations, quality checks, and delivery schedules.
- Ensure data quality, integrity, and security throughout the data lifecycle.
- Monitor data pipelines, troubleshoot issues, and implement solutions to ensure continuous data flow.
- Optimize data storage and query performance on cloud data warehouses.
- Implement automation for data delivery processes and monitoring.
- Stay current with new data technologies and best practices in data engineering and cloud platforms.

Required Skills & Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related quantitative field.
- 4+ years of experience in data engineering, with a focus on data delivery and warehousing.
- Proven experience with cloud data platforms, specifically:
  - AWS: S3, Glue, Lambda, Redshift, or other relevant data services.
  - Snowflake: strong experience with data warehousing, SQL, and performance optimization.
  - Google BigQuery: experience with data warehousing, SQL, and data manipulation.
- Proficient in SQL for complex data querying, manipulation, and optimization.
- Experience with scripting languages (e.g., Python) for data pipeline automation.
- Solid understanding of data warehousing concepts, ETL/ELT processes, and data modeling.
- Experience with version control systems (e.g., Git).
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills, with the ability to work effectively with cross-functional teams.

Thanks & Regards,
Salim (Human Resources)

Posted 2 months ago

Apply

5.0 - 10.0 years

14 - 24 Lacs

Mumbai, Hyderabad, Bengaluru

Hybrid

Primary Skills: AWS, Redshift, Python, PySpark
Location: Hyderabad, Bangalore, Pune, Mumbai, Chennai

Posted 2 months ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Bengaluru

Work from Office

Hiring for a FAANG company. Note: This position is part of a program designed to support women professionals returning to the workforce after a career break (9+ months career gap).

About the Role
Join a high-impact global business team that is building cutting-edge B2B technology solutions. As part of a structured returnship program, this role is ideal for experienced professionals re-entering the workforce after a career break. You'll work on mission-critical data infrastructure in one of the world's largest cloud-based environments, helping transform enterprise procurement through intelligent architecture and scalable analytics. This role merges consumer-grade experience with enterprise-grade features to serve businesses worldwide. You'll collaborate across engineering, sales, marketing, and product teams to deliver scalable solutions that drive measurable value.

Key Responsibilities:
- Design, build, and manage scalable data infrastructure using modern cloud technologies
- Develop and maintain robust ETL pipelines and data warehouse solutions
- Partner with stakeholders to define data needs and translate them into actionable solutions
- Curate and manage large-scale datasets from multiple platforms and systems
- Ensure high standards for data quality, lineage, security, and governance
- Enable data access for internal and external users through secure infrastructure
- Drive insights and decision-making by supporting sales, marketing, and outreach teams with real-time and historical data
- Work in a high-energy, fast-paced environment that values curiosity, autonomy, and impact

Who You Are:
- 5+ years of experience in data engineering or related technical roles
- Proficient in SQL and familiar with relational database management
- Skilled in building and optimizing ETL pipelines
- Strong understanding of data modeling and warehousing
- Comfortable working with large-scale data systems and distributed computing
- Able to work independently, collaborate with cross-functional teams, and communicate clearly
- Passionate about solving complex problems through data

Preferred Qualifications:
- Hands-on experience with cloud technologies including Redshift, S3, AWS Glue, EMR, Lambda, Kinesis, and Firehose (a minimal streaming-ingestion sketch follows this listing)
- Familiarity with non-relational databases (e.g., object storage, document stores, key-value stores, column-family DBs)
- Understanding of cloud access control systems such as IAM roles and permissions

Returnship Benefits:
- Dedicated onboarding and mentorship support
- Flexible work arrangements
- Opportunity to work on meaningful, global-scale projects while rebuilding your career momentum
- Supportive team culture that encourages continuous learning and professional development

Top 10 Must-Have Skills:
- SQL
- ETL Development
- Data Modeling
- Cloud Data Warehousing (e.g., Redshift or equivalent)
- Experience with AWS or similar cloud platforms
- Working with Large-Scale Datasets
- Data Governance & Security Awareness
- Business Communication & Stakeholder Collaboration
- Automation with Python/Scala (for ETL pipelines)
- Familiarity with Non-Relational Databases
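A minimal sketch of the Kinesis/Firehose ingestion path mentioned above: pushing an event into a Firehose delivery stream (which could in turn land data in S3 or Redshift) from Python with boto3. The stream name and payload are hypothetical.

```python
import json
import boto3

firehose = boto3.client("firehose", region_name="ap-south-1")

event = {"seller_id": "SLR-1001", "event_type": "catalog_update", "items_changed": 12}

firehose.put_record(
    DeliveryStreamName="seller-activity-stream",           # placeholder stream
    Record={"Data": (json.dumps(event) + "\n").encode()},  # newline-delimited JSON
)
```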

Posted 2 months ago

Apply

8.0 - 10.0 years

7 - 16 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Mandatory skills: AWS, Python

Posted 2 months ago

Apply

7.0 - 12.0 years

5 - 15 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Role & responsibilities
- Managing incidents
- Troubleshooting issues
- Contributing to development
- Collaborating with other teams
- Suggesting improvements
- Enhancing system performance
- Training new employees

Posted 2 months ago

Apply

7.0 - 12.0 years

0 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

- Managing incidents
- Troubleshooting issues
- Contributing to development
- Collaborating with other teams
- Suggesting improvements
- Enhancing system performance
- Training new employees

Skills: AWS Redshift, PLSQL, 1A, Unix

Posted 2 months ago

Apply