
95 AWS Redshift Jobs

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

7.0 - 12.0 years

7 - 17 Lacs

Hyderabad, Bengaluru

Hybrid

Hexaware Technologies is hiring AWS Redshift developers.

Primary Skill Set: AWS Redshift, Glue, Lambda, PySpark
Total Experience Required: 6 to 12 years
Location: Bangalore and Hyderabad only
Work Mode: Hybrid

Job Description:

Mandatory Skills:
- 8 to 10 years of experience with most major cloud products, such as Amazon AWS.
- Experience as a developer in multiple cloud technologies, including AWS EC2, S3, Amazon API Gateway, AWS Lambda, AWS Glue, AWS RDS, and AWS Step Functions.
- Good knowledge of the AWS environment and its services, with a solid understanding of S3 storage.
- Must have good knowledge of AWS Glue and serverless architecture.
- Must have good knowledge of PySpark.
- Must have good knowledge of SQL.

Nice-to-have skills:
- Collaborate with data analysts and stakeholders to meet data requirements.
- AWS Postgres experience for DB design.
- Must have worked with DynamoDB.

Interested candidates, kindly share your updated resume to ramyar2@hexaware.com with the details below.
Full Name:
Contact No:
Total Exp:
Rel Exp in AWS:
Current & Joining Location:
Notice Period (If serving, mention LWD):
Current CTC:
Expected CTC:
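For candidates sizing up the stack above, here is a minimal sketch of a Glue-style PySpark job that reads a cataloged S3 dataset, filters it, and loads it into Redshift. The database, table, bucket, and connection names are hypothetical placeholders, not details from this posting.

```python
import sys

from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a raw dataset registered in the Glue Data Catalog (hypothetical names).
orders = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="orders_raw"
)

# Filter malformed rows with plain PySpark, then convert back to a DynamicFrame.
clean_df = orders.toDF().filter("order_id IS NOT NULL")
clean = DynamicFrame.fromDF(clean_df, glue_context, "clean")

# Load into Redshift through a pre-created Glue catalog connection; the COPY
# is staged through the temporary S3 directory.
glue_context.write_dynamic_frame.from_jdbc_conf(
    frame=clean,
    catalog_connection="redshift-conn",
    connection_options={"dbtable": "public.orders", "database": "dev"},
    redshift_tmp_dir="s3://example-bucket/tmp/",
)
job.commit()
```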

Posted 1 week ago

Apply

5.0 - 9.0 years

1 - 5 Lacs

Bengaluru

Work from Office

Role & responsibilities: Outline the day-to-day responsibilities for this role.
Preferred candidate profile: Specify required role expertise, previous job experience, or relevant certifications.

Posted 1 week ago

Apply

0.0 - 1.0 years

0 Lacs

Noida

Work from Office

We are excited to invite fresh B.Tech graduates to our Walk-In Drive for Trainee Roles at our Noida office. This is a great opportunity for recent graduates to kickstart their careers in one of the following domains:

Available Domains: Python, Java, Frontend Development, DevOps, Software Testing, Data Warehouse

Walk-In Dates: Wednesday, July 23, 2025 and Thursday, July 24, 2025

Important: Only 20 walk-in candidates will be shortlisted.

Eligibility Criteria:
- B.Tech degree completed (2022–2025 pass-outs)
- Basic knowledge in at least one of the mentioned domains
- Good communication skills
- Eagerness to learn and grow in the tech field

How to Apply: Interested candidates must register using the form below. Only shortlisted candidates will be contacted with interview location details.

Apply Here: https://forms.gle/a9LesdmF7g1MM2PW7

Stipend/CTC: As per industry standards (to be discussed during the interview)

Posted 1 week ago

Apply

8.0 - 13.0 years

15 - 27 Lacs

Bengaluru

Hybrid

Job Description: We are seeking an experienced and visionary Senior Data Architect to lead the design and implementation of scalable enterprise data solutions. This is a strategic leadership role for someone who thrives in cloud-first, data-driven environments and is passionate about building future-ready data architectures.

Key Responsibilities:
- Define and implement an enterprise-wide data architecture strategy aligned with business goals.
- Design and lead scalable, secure, and resilient data platforms for both structured and unstructured data.
- Architect data lake/warehouse ecosystems and cloud-native solutions (Snowflake, Databricks, Redshift, BigQuery).
- Collaborate with business and tech stakeholders to capture data requirements and translate them into scalable designs.
- Mentor data engineers, analysts, and other architects in data best practices.
- Establish standards for data modeling, integration, and management.
- Drive governance across data quality, security, metadata, and compliance.
- Lead modernization and cloud migration efforts.
- Evaluate new technologies and recommend adoption strategies.
- Support data cataloging, lineage, and MDM initiatives.
- Ensure compliance with privacy standards (e.g., GDPR, HIPAA, CCPA).

Required Qualifications:
- Bachelor's/Master's degree in Computer Science, Data Science, or a related field.
- 10+ years of experience in data architecture; 3+ years in a senior/lead capacity.
- Hands-on experience with modern cloud data platforms: Snowflake, Azure Synapse, AWS Redshift, BigQuery, etc.
- Strong skills in data modeling tools (e.g., Erwin, ER/Studio).
- Deep understanding of ETL/ELT, APIs, and data integration.
- Expertise in SQL, Python, and data-centric languages.
- Experience with data governance, RBAC, encryption, and compliance frameworks.
- DevOps/CI-CD experience in data pipelines is a plus.
- Excellent communication and leadership skills.

Posted 1 week ago

Apply

5.0 - 10.0 years

8 - 18 Lacs

Hyderabad

Work from Office

Job Title: Data Engineer
Client: Amazon
Employment Type: Full-time (On-site)
Payroll: BCT Consulting Pvt Ltd
Work Location: Hyderabad (Work from Office, Monday to Friday, General Shift)
Experience Required: 5+ Years
Joining Mode: Permanent with BCT Consulting Pvt Ltd, deployed at Amazon

About the Role: We are seeking a highly skilled and motivated Data Engineer with strong expertise in SQL, Python, Big Data technologies, AWS, Airflow, and Redshift. The ideal candidate will play a key role in building and optimizing data pipelines, ensuring data integrity, and enabling scalable data solutions across the organization.

Key Responsibilities:
- Design, develop, and maintain scalable data pipelines using Python and SQL.
- Work with Big Data technologies to process and manage large datasets efficiently.
- Implement and manage workflows using Apache Airflow (see the sketch below).
- Develop and optimize data models and queries in Amazon Redshift.
- Collaborate with cross-functional teams to understand data requirements and deliver solutions.
- Ensure data quality, consistency, and security across all data platforms.
- Monitor and troubleshoot data pipeline performance and reliability.
- Leverage AWS services (S3, Lambda, Glue, EMR, etc.) for cloud-native data engineering solutions.

Required Skills & Qualifications:
- 5+ years of experience in Data Engineering.
- Strong proficiency in SQL and Python.
- Hands-on experience with Big Data tools (e.g., Spark, Hadoop).
- Expertise in AWS cloud services related to data engineering.
- Experience with Apache Airflow for workflow orchestration.
- Solid understanding of Amazon Redshift and data warehousing concepts.
- Excellent problem-solving and communication skills.
- Ability to work in a fast-paced, collaborative environment.

Nice to Have:
- Experience with CI/CD pipelines and DevOps practices.
- Familiarity with data governance and compliance standards.

Perks & Benefits: Opportunity to work on cutting-edge data technologies in a collaborative and innovative work culture. Immediate joining preferred.
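As an illustration of the Airflow-with-Redshift orchestration this role describes, below is a minimal DAG sketch using the Amazon provider's S3-to-Redshift transfer operator. The bucket, schema, and connection IDs are assumptions for the example, not details from the posting.

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.amazon.aws.transfers.s3_to_redshift import S3ToRedshiftOperator

# Daily load of date-partitioned S3 data into a Redshift table (names are placeholders).
with DAG(
    dag_id="orders_s3_to_redshift",
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    copy_orders = S3ToRedshiftOperator(
        task_id="copy_orders",
        schema="analytics",
        table="orders",
        s3_bucket="example-data-lake",
        s3_key="orders/{{ ds }}/",          # templated with the run date
        copy_options=["FORMAT AS PARQUET"],
        redshift_conn_id="redshift_default",
        aws_conn_id="aws_default",
    )
```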

Posted 1 week ago

Apply

5.0 - 10.0 years

10 - 15 Lacs

Chennai, Bengaluru

Work from Office

Job Description:
Job Title: ETL Testing
Experience: 5-8 Years
Location: Chennai, Bangalore
Employment Type: Full Time
Job Type: Work from Office (Monday - Friday)
Shift Timing: 12:30 PM to 9:30 PM

Required Skills: Analytical skills to understand requirements and develop test cases; ability to understand and manage data; strong SQL skills. Hands-on testing of data pipelines built using Glue, S3, Redshift, and Lambda; collaboration with developers to build automated testing where appropriate; understanding of data concepts like data lineage, data integrity, and data quality. Experience testing financial data is a plus.

Posted 1 week ago

Apply

5.0 - 7.0 years

15 - 30 Lacs

Gurugram

Remote

Responsibilities:
- Design, develop, and maintain robust data pipelines and ETL/ELT processes on AWS.
- Leverage AWS services such as S3, Glue, Lambda, Redshift, Athena, EMR, and others to build scalable data solutions.
- Write efficient and reusable code using Python for data ingestion, transformation, and automation tasks.
- Collaborate with cross-functional teams including data analysts, data scientists, and software engineers to support data needs.
- Monitor, troubleshoot, and optimize data workflows for performance, reliability, and cost efficiency.
- Ensure data quality, security, and governance across all systems.
- Communicate technical solutions clearly and effectively with both technical and non-technical stakeholders.

Required Skills & Qualifications:
- 5+ years of experience in data engineering roles.
- Strong hands-on experience with Amazon Web Services (AWS), particularly data-related services (e.g., S3, Glue, Lambda, Redshift, EMR, Athena).
- Proficiency in Python for scripting and data processing.
- Experience with SQL and working with relational databases.
- Solid understanding of data architecture, data modeling, and data warehousing concepts.
- Experience with CI/CD pipelines and version control tools (e.g., Git).
- Excellent verbal and written communication skills.
- Proven ability to work independently in a fully remote environment.

Preferred Qualifications:
- Experience with workflow orchestration tools like Apache Airflow or AWS Step Functions.
- Familiarity with big data technologies such as Apache Spark or Hadoop.
- Exposure to infrastructure-as-code tools like Terraform or CloudFormation.
- Knowledge of data privacy and compliance standards.

Posted 1 week ago

Apply

5.0 - 10.0 years

15 - 20 Lacs

Chennai, Bengaluru

Work from Office

Job Description:
Job Title: Data Engineer
Experience: 5-8 Years
Location: Chennai, Bangalore
Employment Type: Full Time
Job Type: Work from Office (Monday - Friday)
Shift Timing: 12:30 PM to 9:30 PM

Required Skills:
- 5-8 years' experience as a back-end data engineer.
- Strong experience in SQL.
- Strong knowledge of and experience with Python and PySpark.
- Experience in AWS.
- Experience in Docker and OpenShift.
- Hands-on experience with REST concepts.
- Design and develop business solutions on the data front.
- Experience implementing new enhancements and handling defect triage.
- Strong analytical abilities.

Additionally Preferred Skills/Competencies:
- Jira, Bitbucket
- Experience with Kafka
- Experience with Snowflake
- Domain knowledge in Banking
- Analytical skills
- Excellent communication skills
- Working knowledge of Agile

Thanks & Regards,
Suresh Kumar Raja, CGI.

Posted 1 week ago

Apply

5.0 - 9.0 years

10 - 12 Lacs

Bengaluru

Remote

Sr. Data Engineer
Tenure: Min. 3 months (potential for extension). Contract. Remote.

We are seeking a Sr. Data Engineer to join our technology team. This is a hands-on position responsible for the build and continued evolution of the data platform, business applications, and integration tools. We are looking for a hands-on engineer who can recommend best practices for working with enterprise data, is very strong with AWS products, and can help build out data pipelines, create jobs, and manage the quality of the data warehouse and integrated tools.

Responsibilities:
- Design, develop, and maintain scalable, efficient data pipelines to support ETL/ELT processes across multiple sources and systems.
- Partner with Data Science, Analytics, and Business teams to understand data needs, prioritize use cases, and deliver reliable datasets and models.
- Monitor, optimize, and troubleshoot data jobs, ensuring high availability and performance of data infrastructure.
- Build and manage data models and schemas in Redshift and other data technologies, enabling self-service analytics.
- Implement data quality checks, validation rules, and alerting mechanisms to ensure trust in data (see the sketch below).
- Leverage AWS services like Glue, Lambda, S3, Athena, and EMR to build modular, reusable data solutions.
- Drive improvements in data lineage, cataloging, and documentation to ensure transparency and reusability of data assets.
- Create and maintain technical documentation and version-controlled workflows (e.g., Git, dbt).
- Contribute to and promote a culture of continuous improvement, mentoring peers and advocating for scalable and modern data practices.
- Participate in sprint planning, code reviews, and team retrospectives as part of an Agile development process.
- Stay current on industry trends and emerging technologies to identify opportunities for innovation and automation.

Requirements:
- Advanced Python, including experience building APIs, scripting ETL processes, and automating workflows.
- Expert in SQL, with the ability to write complex queries, optimize performance, and work across large datasets.
- Hands-on experience with the AWS data ecosystem, including Redshift, S3, Glue, Athena, EMR, EC2, DynamoDB, Lambda, and Redis.
- Strong understanding of data warehousing and data modeling principles (e.g., star/snowflake schema, dimensional modeling).
- Familiarity with dbt Labs and modern ELT/analytics engineering practices.
- Experience working with structured, semi-structured, and unstructured data.
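A minimal sketch of the kind of data quality gate mentioned above: each SQL rule counts violations in Redshift, and any non-zero result fails the run so the orchestrator can alert. The driver choice, table names, and rules are hypothetical.

```python
import sys

import psycopg2  # assumed driver; redshift_connector offers a similar API

# Each query returns a violation count; any non-zero result fails the run.
# Table names and rules are placeholders for illustration.
CHECKS = {
    "orders.order_id must not be null":
        "SELECT COUNT(*) FROM analytics.orders WHERE order_id IS NULL",
    "orders must have rows loaded today":
        "SELECT CASE WHEN COUNT(*) = 0 THEN 1 ELSE 0 END "
        "FROM analytics.orders WHERE load_date = CURRENT_DATE",
}

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="dev", user="etl_user", password="***",
)
failures = []
with conn.cursor() as cur:
    for name, sql in CHECKS.items():
        cur.execute(sql)
        if cur.fetchone()[0] > 0:
            failures.append(name)
conn.close()

if failures:
    print("Data quality checks failed:", failures)
    sys.exit(1)  # non-zero exit lets the orchestrator mark the task failed and alert
```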

Posted 2 weeks ago

Apply

6.0 - 11.0 years

9 - 14 Lacs

Noida

Work from Office

Responsibilities:
- Design, develop, and maintain data pipelines using AWS, Python, and SQL.
- Optimize performance with Apache Spark and Amazon Redshift.
- Collaborate on cloud architecture with cross-functional teams.

Key skill: Redshift

Posted 2 weeks ago

Apply

10.0 - 15.0 years

30 - 40 Lacs

Bengaluru

Hybrid

We are looking for a Cloud Data Engineer with strong hands-on experience in data pipelines, cloud-native services (AWS), and modern data platforms like Snowflake or Databricks. Alternatively, we're open to Data Visualization Analysts with strong BI experience and exposure to data engineering or pipelines. You will collaborate with technology and business leads to build scalable data solutions, including data lakes, data marts, and virtualization layers using tools like Starburst. This is an exciting opportunity to work with modern cloud tech in a dynamic, enterprise-scale financial services environment.

Key Responsibilities:
- Design and develop data pipelines for structured/unstructured data in AWS.
- Build semantic layers and virtualization layers using Starburst or similar tools.
- Create intuitive dashboards and reports using Power BI/Tableau.
- Collaborate on ETL designs and support testing (SIT/UAT).
- Optimize Spark jobs and ETL performance.
- Implement data quality checks and validation frameworks.
- Translate business requirements into scalable technical solutions.
- Participate in design reviews and documentation.

Skills & Qualifications:

Must-Have:
- 10+ years in Data Engineering or related roles.
- Hands-on with AWS Glue, Redshift, Athena, EMR, Lambda, S3, Kinesis.
- Proficient in HiveQL, Spark, Python, Scala.
- Experience with modern data platforms (Snowflake/Databricks).
- 3+ years in ETL tools (Informatica, SSIS) and recent experience in cloud-based ETL.
- Strong understanding of Data Warehousing, Data Lakes, and Data Mesh.

Preferred:
- Exposure to data virtualization tools like Starburst or Denodo.
- Experience in the financial services or banking domain.
- AWS Certification (Data specialty) is a plus.

Posted 2 weeks ago

Apply

12.0 - 15.0 years

0 - 3 Lacs

Bengaluru

Hybrid

Role & responsibilities:
- Design, develop, and maintain scalable enterprise data architecture incorporating data warehouse, data lake, and data mesh concepts.
- Create and maintain data models, schemas, and mappings that support reporting, business intelligence, analytics, and AI/ML initiatives.
- Establish data integration patterns for batch and real-time processing using AWS services (Glue, DMS, Lambda) and Redshift, Snowflake, or Databricks.
- Define technical specifications for data storage, data processing, and data access patterns.
- Develop data models and enforce data architecture standards, policies, and best practices.
- Partner with business stakeholders to translate requirements into architectural solutions.
- Lead data modernization initiatives, including legacy system migrations.
- Create roadmaps for evolving data architecture to support future business needs.
- Provide expert guidance on complex data problems and architectural decisions.

Preferred candidate profile:
- Bachelor's degree in Computer Science, Information Systems, or a related field; Master's degree preferred.
- 8+ years of experience in data architecture, database design, data modelling, or related roles.
- 5+ years of experience with cloud data platforms, particularly AWS data services.
- 3+ years of experience architecting MPP database solutions (Redshift, Snowflake, etc.).
- Expert knowledge of data warehouse architecture and dimensional modelling.
- Strong understanding of the AWS data services ecosystem (Redshift, S3, Glue, DMS, Lambda).
- Experience with SQL Server and migration to cloud data platforms.
- Proficiency in data modelling, entity relationship diagrams, and schema design.
- Working knowledge of data integration patterns and technologies (ETL/ELT, CDC).
- Experience with one or more programming/scripting languages (Python, SQL, Shell).
- Familiarity with data lake architectures and technologies (Parquet, Delta Lake, Athena).
- Excellent verbal and written communication skills, with the ability to translate complex technical concepts for varied audiences.
- Strong stakeholder management and influencing skills.
- Experience implementing data warehouse, data lake, and data mesh architectures.
- Good to have: knowledge of machine learning workflows and feature engineering.
- Understanding of regulatory requirements related to data (FedRAMP, GDPR, CCPA, etc.).
- Experience with big data technologies (Spark, Hadoop).

Posted 2 weeks ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Bengaluru

Work from Office

Hiring for a FAANG company.

Note: This position is open only to women professionals returning to the workforce after a career break (9+ months career gap, e.g., last working day prior to Nov 2024). We encourage you to apply only if you fit this criterion.

Position Overview: This is a Level 5 Data Engineer role within a leading e-commerce organization's Selling Partner Services division in India. The position focuses on building and scaling API authorization and customization systems that serve thousands of global selling partners. This is a senior-level position requiring significant technical expertise and leadership capabilities.

Team Context & Mission:
- Organization: Selling Partner Services division
- Focus: API authorization and customization systems for global selling partners
- Mission: Create flexible, reliable, and extensible API solutions to help businesses thrive on the platform
- Culture: Startup excitement with enterprise-level resources and scale
- Impact: Direct influence on thousands of global selling partners

Key Responsibilities:

Technical Leadership:
- Lead design and implementation of complex data pipelines and ETL processes
- Architect scalable, high-performance data systems using cloud technologies and big data platforms
- Evaluate and recommend new technologies and tools for data infrastructure enhancement
- Troubleshoot and resolve complex data-related issues in production environments

Collaboration & Stakeholder Management:
- Work closely with data scientists, analysts, and business stakeholders
- Understand data requirements and implement appropriate solutions
- Contribute to the development of data governance policies and procedures

Performance & Quality Optimization:
- Optimize data storage and retrieval systems for performance and cost-effectiveness
- Implement data quality checks and monitoring systems
- Ensure data integrity and reliability across all systems

Mentorship & Leadership:
- Mentor junior engineers on the team
- Provide technical leadership on data engineering best practices and methodologies
- Drive adoption of industry standards and innovative approaches

Required Qualifications (Must-Have):

Experience Requirements:
- 5+ years of data engineering experience (senior-level expertise expected)
- 5+ years of SQL experience, with advanced SQL skills for complex data manipulation
- Data modeling, warehousing, and ETL pipeline building as core competencies
- Distributed systems knowledge: understanding of data storage and computing in distributed environments

Technical Skills:
- Advanced proficiency in designing and implementing data solutions
- Strong understanding of data architecture principles
- Experience with production-level data systems
- Knowledge of data governance and quality assurance practices

Preferred Qualifications:

Cloud Technology Stack:
- Data Warehousing: Redshift, Snowflake, BigQuery
- Object Storage: S3, Azure Blob, Google Cloud Storage
- ETL Services: AWS Glue, Azure Data Factory, Google Dataflow
- Big Data Processing: EMR, Databricks, Apache Spark
- Real-time Streaming: Kinesis, Kafka, Apache Storm
- Data Delivery: Firehose, Apache NiFi
- Serverless Computing: Lambda, Azure Functions, Google Cloud Functions
- Identity Management: IAM, Active Directory, role-based access control

Non-Relational Database Experience:
- Object Storage: S3, blob storage systems
- Document Stores: MongoDB, CouchDB
- Key-Value Stores: Redis, DynamoDB
- Graph Databases: Neo4j, ArangoDB
- Column-Family: Cassandra, HBase

Key Success Factors:
- Scalability Focus: Building systems that can handle massive enterprise scale
- Performance Optimization: Continuous improvement of system efficiency
- Quality Assurance: Maintaining high data quality and reliability standards
- Innovation: Staying current with emerging technologies and best practices
- Collaboration: Effective partnership with stakeholders across the organization

This role represents a significant opportunity for a senior data engineer to make a substantial impact on a global e-commerce seller ecosystem while working with cutting-edge technologies and leading a team of talented professionals.

Posted 2 weeks ago

Apply

7.0 - 9.0 years

7 - 17 Lacs

Pune

Remote

Requirements for the candidate:
- The role requires deep knowledge of data engineering techniques to create data pipelines and build data assets.
- At least 4+ years of strong hands-on programming experience with PySpark / Python / Boto3, including Python frameworks and libraries, following Python best practices.
- Strong experience in code optimization using Spark SQL and PySpark (see the sketch below).
- Understanding of code versioning, Git repositories, and JFrog Artifactory.
- AWS architecture knowledge, especially S3, EC2, Lambda, Redshift, CloudFormation, etc., and the ability to explain the benefits of each.
- Refactoring of legacy codebases: clean, modernize, and improve readability and maintainability.
- Unit tests/TDD: write tests before code, ensure functionality, catch bugs early.
- Fixing difficult bugs: debug complex code, isolate issues, resolve performance, concurrency, or logic flaws.
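To make the "code optimization using Spark SQL and PySpark" requirement concrete, here is a small sketch of one common optimization: broadcasting a small dimension table to avoid shuffling a large fact table, with early column pruning. The S3 paths and column names are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("join-optimization-demo").getOrCreate()

orders = spark.read.parquet("s3://example-bucket/orders/")        # large fact table
customers = spark.read.parquet("s3://example-bucket/customers/")  # small dimension

# Prune columns early so the Parquet reader scans only what is needed,
# and broadcast the small dimension so the large side avoids a shuffle.
result = (
    orders.select("order_id", "customer_id", "amount")
    .join(F.broadcast(customers.select("customer_id", "segment")), "customer_id")
    .groupBy("segment")
    .agg(F.sum("amount").alias("total_amount"))
)

result.write.mode("overwrite").parquet("s3://example-bucket/marts/segment_totals/")
```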

Posted 3 weeks ago

Apply

4.0 - 7.0 years

24 - 40 Lacs

Hyderabad

Work from Office

Design and optimize scalable data pipelines using Python, Scala, and SQL. Work with AWS services, Redshift, Terraform, Docker, and Jenkins. Implement CI/CD, manage infrastructure as code, and ensure efficient data flow across systems.

Posted 3 weeks ago

Apply

7.0 - 12.0 years

5 - 15 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

Role & responsibilities

Total Years of Experience: 7 to 9
Relevant Years of Experience: 5 to 6

Mandatory Skills:
• Overall 6+ years of experience as a DBA, including 3 years on AWS and a minimum of 2 years with Amazon Redshift (traditional or serverless).
• Proficient in Redshift backup and snapshot recovery processes.
• Experience with AWS account design and management.
• AWS services such as Lambda, Glue, and Airflow.

Alternate Skills, Good to Have (Not Mandatory):
• Expertise in day-to-day DBA activities.
• Experience automating DBA tasks using Python.
• Ability to create and manage Redshift maintenance tasks, such as vacuuming and analyzing tables (see the sketch below).
• Exposure to data governance and compliance frameworks (e.g., GDPR, HIPAA) in Redshift environments.

Detailed Job Description:
• Overall 6+ years of experience as a DBA, including 3 years on AWS and a minimum of 2 years with Amazon Redshift (traditional or serverless).
• Proficient in Redshift backup and snapshot recovery processes.
• Expertise in day-to-day DBA activities, including:
  - Managing user access and permissions.
  - Monitoring database performance and optimizing queries.
  - Ensuring database security and compliance.
  - Handling cluster resizing and scaling operations.
  - Strong understanding of database design and schema management in Redshift.
  - Experience with query monitoring and analyzing workload management (WLM) to optimize cluster performance.
  - Familiarity with data distribution strategies and tuning table designs for performance.
  - Proficient in troubleshooting ETL/ELT processes and optimizing data ingestion workflows.
  - Understanding of Amazon S3 integration and external table usage in Redshift Spectrum and Athena.
• Experience automating DBA tasks using Python.
• Experience with AWS account design and management.
• Familiarity with AWS services like:
  - Lambda: for serverless data processing.
  - Glue: for ETL workflows and data cataloging.
  - Airflow: for orchestration and automation of data workflows.
  - Knowledge of Redshift Serverless design and best practices.
  - Hands-on experience with CloudWatch for monitoring and setting up alarms for Redshift clusters.
  - Exposure to Data Lake architecture and integration with Redshift.
  - Understanding of Redshift security configurations, including IAM roles and VPC security.
  - Ability to debug and resolve Redshift query performance bottlenecks.
• Familiarity with cross-region disaster recovery strategies for Redshift.
• Knowledge of data retention policies and partition management.
• Ability to create and manage Redshift maintenance tasks, such as vacuuming and analyzing tables.
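For the maintenance tasks named above, here is a minimal sketch of automating VACUUM and ANALYZE from Python, in the spirit of the posting's "automating DBA tasks using Python" line. The endpoint, credentials, and table list are placeholders, and psycopg2 is just one of several drivers that work with Redshift.

```python
import psycopg2  # assumed driver; any Redshift-compatible client works

TABLES = ["public.orders", "public.customers"]  # hypothetical maintenance list

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="dev", user="dba_user", password="***",
)
conn.autocommit = True  # Redshift will not run VACUUM inside a transaction block

with conn.cursor() as cur:
    for table in TABLES:
        cur.execute(f"VACUUM FULL {table};")  # reclaim space and re-sort rows
        cur.execute(f"ANALYZE {table};")      # refresh the planner's statistics

conn.close()
```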

Posted 3 weeks ago

Apply

10.0 - 15.0 years

25 - 40 Lacs

Noida

Remote

Job Summary: We are seeking a seasoned Confluent & Oracle EBS Cloud Engineer with over 10 years of experience to lead the design and implementation of scalable, cloud-native data solutions. This role focuses on modernizing enterprise data infrastructure, driving real-time data streaming initiatives, and migrating legacy ERP systems to AWS-based platforms.

Key Responsibilities:
• Architect and implement cloud-based data platforms using AWS services including Redshift, Glue, DMS, and Data Lake solutions.
• Lead the migration of Oracle E-Business Suite or similar ERP systems to AWS, ensuring data integrity and performance.
• Design and drive the implementation of Confluent Kafka for real-time data streaming across enterprise systems (see the sketch below).
• Define and enforce data architecture standards, governance policies, and best practices.
• Collaborate with engineering, data, and business teams to align architecture with strategic goals.
• Optimize data pipelines and storage for scalability, reliability, and cost-efficiency.

Required Qualifications:
• 10+ years of experience in data architecture, cloud engineering, or enterprise systems design.
• Deep expertise in AWS services including Redshift, Glue, DMS, and Data Lake architectures.
• Proven experience with Confluent Kafka for real-time data streaming and event-driven architectures.
• Hands-on experience migrating large-scale ERP systems (e.g., Oracle EBS) to cloud platforms.
• Strong understanding of data governance, security, and compliance in cloud environments.
• Proficiency in designing scalable, fault-tolerant data systems.

Preferred Qualifications:
• Experience with data modeling, metadata management, and lineage tracking.
• Familiarity with infrastructure-as-code and CI/CD practices.
• Strong communication and leadership skills to guide cross-functional teams.
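To illustrate the Confluent Kafka streaming work described above, here is a minimal producer sketch using Confluent's Python client. The broker address, topic, and payload are hypothetical; a real deployment would add schema management and security configuration.

```python
import json

from confluent_kafka import Producer  # Confluent's official Python client

producer = Producer({"bootstrap.servers": "broker1:9092"})  # placeholder broker

def on_delivery(err, msg):
    # Invoked once per message after the broker acknowledges (or rejects) it.
    if err is not None:
        print(f"Delivery failed for {msg.key()}: {err}")

# Publish a hypothetical ERP event; keying by invoice ID preserves per-key ordering.
producer.produce(
    "erp.invoices",
    key="INV-1001",
    value=json.dumps({"amount": 250.0, "currency": "INR"}),
    callback=on_delivery,
)
producer.flush()  # block until all queued messages are delivered
```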

Posted 3 weeks ago

Apply

6.0 - 11.0 years

10 - 18 Lacs

Bengaluru

Remote

We are looking for experienced DBAs who have worked on multiple database technologies and cloud migration projects.

- 6+ years of experience working on SQL/NoSQL/data warehouse platforms on-premise and in the cloud (AWS, Azure & GCP)
- Provide expert-level guidance on cloud adoption, data migration strategies, and digital transformation projects
- Strong understanding of RDBMS, NoSQL, data warehouse, in-memory, and data lake architectures, features, and functionalities
- Proficiency in SQL and data manipulation techniques
- Experience with data loading and unloading tools and techniques
- Expertise in data access management and database reliability and scalability; administer, configure, and optimize database resources and services across the organization
- Ensure high availability, replication, and failover strategies
- Implement serverless database architectures for cost-effective, scalable storage

Key Responsibilities:
- Strong proficiency in database administration of one or more databases (Snowflake, BigQuery, Amazon Redshift, Teradata, SAP HANA, Oracle, PostgreSQL, MySQL, SQL Server, Cassandra, MongoDB, Neo4j, Cloudera, Micro Focus, IBM DB2, Elasticsearch, DynamoDB, Azure Synapse)
- Plan and execute on-prem Database/Analysis Services/Reporting Services/Integration Services migrations to AWS/Azure/GCP
- Develop automation scripts using Python, shell scripting, or Terraform for streamlined database operations
- Provide technical guidance and mentoring to junior DBAs and data engineers
- Hands-on experience with data modelling, ETL/ELT processes, and data integration tools
- Monitor and optimize the performance of virtual warehouses, queries, and the overall system
- Optimize database performance through query tuning, indexing, and configuration
- Manage replication, backups, and disaster recovery for high availability
- Troubleshoot and resolve database issues, including performance bottlenecks, errors, and downtime
- Collaborate with the infrastructure team to configure, manage, and monitor PostgreSQL in cloud environments (AWS, GCP, or Azure)
- Provide on-call support for critical database operations and incidents
- Provide Level 3 and 4 technical support, troubleshooting complex issues
- Participate in cross-functional teams for database design and optimization

Posted 3 weeks ago

Apply

4.0 - 8.0 years

20 - 25 Lacs

Bengaluru

Hybrid

Job Title: AWS Engineer
Experience: 4 - 8 Years
Location: Bengaluru (Hybrid, 2-3 Days Onsite per Week)
Employment Type: Full-Time
Notice Period: Only Immediate to 15 Days Joiners Preferred

Job Description: We are looking for an experienced AWS Engineer to join our dynamic data engineering team. The ideal candidate will have hands-on experience building and maintaining robust, scalable data pipelines and cloud-based architectures on AWS.

Key Responsibilities:
- Design, develop, and maintain scalable data pipelines using AWS services such as Glue, Lambda, S3, Redshift, and EMR
- Collaborate with data scientists and ML engineers to operationalize machine learning models using AWS SageMaker
- Implement efficient data transformation and feature engineering workflows
- Optimize ETL/ELT processes and enforce best practices for data quality and governance
- Work with structured and unstructured data using Amazon Athena, DynamoDB, RDS, and similar services
- Build and manage CI/CD pipelines for data and ML workflows using AWS CodePipeline, CodeBuild, and Step Functions
- Monitor data infrastructure for performance, reliability, and cost-effectiveness
- Ensure data security and compliance with organizational and regulatory standards

Required Skills:
- Strong experience with AWS data and ML services
- Solid knowledge of ETL/ELT frameworks and data modeling
- Proficiency in Python, SQL, and scripting for data engineering
- Experience with CI/CD and DevOps practices on AWS
- Good understanding of data governance and compliance standards
- Excellent collaboration and problem-solving skills

Posted 3 weeks ago

Apply

8.0 - 13.0 years

27 - 35 Lacs

Kochi, Bengaluru

Work from Office

About Us: DBiz Solution is a transformational partner. Digital transformation is intense, and we'd like you to have something to hold on to while you set out bringing your ideas into existence. Beyond anything, we put humans first. This means solving real problems with real people and meeting real needs with real, working solutions. DBiz leverages a wealth of experience building a variety of software to improve our clients' ability to respond to change and build tomorrow's digital business. We're quite proud of our record of accomplishment: having delivered over 150 projects for over 100 clients, we can honestly say we leave our clients happy and wanting more. Using data, we aim to unlock value and create platforms/products at scale that can evolve with business strategies using our innovative Rapid Application Development methodologies.

The passion for creating an impact: Our passion for creating an impact drives everything we do. We believe that technology has the power to transform businesses and improve lives, and it is our mission to harness this power to make a difference. We constantly strive to innovate and deliver solutions that not only meet our clients' needs but exceed their expectations, allowing them to achieve their goals and drive sustainable growth. Through our world-leading digital transformation strategies, we are always growing and improving. That means creating an environment where every one of us can strive together for excellence.

Senior Data Engineer - AWS (Glue, Data Warehousing, Optimization & Security)

We are looking for an experienced Senior Data Engineer (8+ years) with deep expertise in AWS cloud data services, particularly AWS Glue, to design, build, and optimize scalable data solutions. The ideal candidate will drive end-to-end data engineering initiatives, from ingestion to consumption, with a strong focus on data warehousing, performance optimization, self-service enablement, and data security. The candidate needs experience in consulting and troubleshooting engagements to design best-fit solutions.

Key Responsibilities:
- Consult with business and technology stakeholders to understand data requirements; troubleshoot and advise on best-fit AWS data solutions
- Design and implement scalable ETL pipelines using AWS Glue, handling structured and semi-structured data
- Architect and manage modern cloud data warehouses (e.g., Amazon Redshift, Snowflake, or equivalent)
- Optimize data pipelines and queries for performance, cost-efficiency, and scalability
- Develop solutions that enable self-service analytics for business and data science teams
- Implement data security, governance, and access controls
- Collaborate with data scientists, analysts, and business stakeholders to understand data needs
- Monitor, troubleshoot, and improve existing data solutions, ensuring high availability and reliability

Required Skills & Experience:
- 8+ years of experience in data engineering on the AWS platform
- Strong hands-on experience with AWS Glue, Lambda, S3, Athena, Redshift, and IAM
- Proven expertise in data modelling, data warehousing concepts, and SQL optimization
- Experience designing self-service data platforms for business users
- Solid understanding of data security, encryption, and access management
- Proficiency in Python
- Familiarity with DevOps practices and CI/CD
- Strong problem-solving skills
- Exposure to BI tools (e.g., QuickSight, Power BI, Tableau) for self-service enablement

Preferred Qualifications: AWS Certified Data Analytics - Specialty or Solutions Architect - Associate

Posted 3 weeks ago

Apply

12.0 - 17.0 years

30 - 45 Lacs

Bengaluru

Work from Office

Work Location: Bangalore
Experience: 10+ years

Required Skills:
- Experience with AWS cloud and AWS services such as S3 buckets, Lambda, API Gateway, and SQS queues
- Experience with batch job scheduling and identifying data/job dependencies
- Experience with data engineering using the AWS platform and Python
- Familiar with AWS services like EC2, S3, Redshift/Spectrum, Glue, Athena, RDS, Lambda, and API Gateway
- Familiar with software DevOps CI/CD tools, such as Git, Jenkins, Linux, and shell scripting

Thanks & Regards,
Suganya R
suganya@spstaffing.in

Posted 3 weeks ago

Apply

7.0 - 12.0 years

0 - 1 Lacs

Chennai

Hybrid

Role & responsibilities

Detailed job description / skill set:
- Design, develop, and maintain ETL (Extract, Transform, Load) workflows using ETL tools
- Optimize ETL processes for performance, scalability, and reliability
- Ensure data is accurately extracted from source systems, transformed according to business rules, and loaded into target systems (data warehouses, data lakes, etc.)
- Work closely with data analysts, data scientists, business stakeholders, and other developers to understand data requirements
- Document ETL processes, data mappings, and workflows for maintainability and knowledge sharing
- Participate in code reviews and contribute to best practices in data engineering

Mandatory Skills: AWS Redshift, PL/SQL, Python, ETL, DWH

Posted 4 weeks ago

Apply

5.0 - 10.0 years

10 - 18 Lacs

Bengaluru, Mumbai (All Areas)

Hybrid

About the Role: We are seeking a passionate and experienced Subject Matter Expert and Trainer to deliver our comprehensive Data Engineering with AWS program. This role combines deep technical expertise with the ability to coach, mentor, and empower learners to build strong capabilities in data engineering, cloud services, and modern analytics tools. If you have a strong background in data engineering and love to teach, this is your opportunity to create impact by shaping the next generation of cloud data professionals.

Key Responsibilities:
Deliver end-to-end training on the Data Engineering with AWS curriculum, including:
- Oracle SQL and ANSI SQL
- Data Warehousing Concepts, ETL & ELT
- Data Modeling and Data Vault
- Python programming for data engineering
- AWS Fundamentals (EC2, S3, Glue, Redshift, Athena, Kinesis, etc.)
- Apache Spark and Databricks
- Data Ingestion, Processing, and Migration Utilities
- Real-time Analytics and Compute Services (Airflow, Step Functions)

Additional responsibilities:
- Facilitate engaging sessions, virtual and in-person, and adapt instructional methods to suit diverse learning styles.
- Guide learners through hands-on labs, coding exercises, and real-world projects.
- Assess learner progress through evaluations, assignments, and practical assessments.
- Provide mentorship, resolve doubts, and inspire confidence in learners.
- Collaborate with the program management team to continuously improve course delivery and learner experience.
- Maintain up-to-date knowledge of AWS and data engineering best practices.

Ideal Candidate Profile:
- Experience: Minimum 5-8 years in Data Engineering, Big Data, or Cloud Data Solutions. Prior experience delivering technical training or conducting workshops is strongly preferred.
- Technical Expertise: Proficiency in SQL, Python, and Spark. Hands-on experience with AWS services: Glue, Redshift, Athena, S3, EC2, Kinesis, and related tools. Familiarity with Databricks, Airflow, Step Functions, and modern data pipelines.
- Certifications: AWS certifications (e.g., AWS Certified Data Analytics - Specialty) are a plus.
- Soft Skills: Excellent communication, facilitation, and interpersonal skills. Ability to break down complex concepts into simple, relatable examples. Strong commitment to learner success and outcomes.

Email your application to: careers@edubridgeindia.in

Posted 4 weeks ago

Apply

5.0 - 9.0 years

15 - 22 Lacs

Chennai

Work from Office

We are looking for a skilled and motivated Senior Data Engineer to join our data integration and analytics team. The ideal candidate will have hands-on experience with Informatica IICS, AWS Redshift, Python scripting, and Unix/Linux systems. You will be responsible for building and maintaining scalable ETL pipelines to support business intelligence and analytics needs. We value individuals who are passionate about continuous learning, problem-solving, and enabling data-driven decision-making.

Years of Experience: Min. 5 years (with 3+ years of hands-on experience in Informatica IICS: Cloud Data Integration, Application Integration)
Primary Skills: Informatica IICS, AWS (especially Redshift)
Secondary Skills: Python, Unix/Linux

Role Description: As a Senior Data Engineer, you will lead the design, development, and management of scalable data platforms and pipelines. This role demands a strong technical foundation in data architecture, big data technologies, and database systems (both SQL and NoSQL), along with the ability to collaborate across functional teams to deliver robust, secure, and high-performing data solutions.

Key Responsibilities:
- Design, develop, and maintain end-to-end data pipelines and infrastructure.
- Translate business and functional requirements into scalable, well-documented technical solutions.
- Build and manage data flows across structured and unstructured data sources, including streaming and batch integrations.
- Ensure data integrity and quality through automated validations, unit testing, and comprehensive documentation.
- Optimize data processing performance and manage large datasets efficiently.
- Collaborate closely with stakeholders and project teams to align data solutions with business objectives.
- Implement and maintain security and privacy protocols to ensure safe data handling.
- Set up development environments and configure tools and services.
- Mentor junior data engineers and contribute to continuous improvement and automation initiatives.
- Coordinate with QA and UAT teams during testing and release phases.

Role Requirements:
- Strong proficiency in SQL, including procedures, performance tuning, and analytical functions.
- Solid understanding of data warehousing concepts, including dimensional modeling and slowly changing dimensions (SCDs).
- Hands-on experience with scripting languages (Shell/PowerShell).
- Proficiency in data profiling, validation, and testing practices.
- Excellent problem-solving, communication (written and verbal), and documentation skills.
- Exposure to Agile methodologies and CI/CD practices.

Additional Requirements:
- Overall 5+ years of experience, with 3+ years of hands-on experience in Informatica IICS (Cloud Data Integration, Application Integration).
- Strong proficiency in AWS Redshift and writing complex SQL queries.
- Solid programming experience in Python for scripting, data wrangling, and automation.

Posted 4 weeks ago

Apply
Page 1 of 4

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
