316 Redshift AWS Jobs - Page 5


3.0 - 5.0 years

16 - 25 Lacs

Pune

Remote

About Velotio: Velotio Technologies is a product engineering company working with innovative startups and enterprises. We are a certified Great Place to Work and recognized as one of the best companies to work for in India. We have provided full-stack product development for 110+ startups across the globe, building products in the cloud-native, data engineering, B2B SaaS, IoT & Machine Learning space. Our team of 400+ elite software engineers solves hard technical problems while transforming customer ideas into successful products. We are looking for a highly skilled Data Engineer with strong expertise in building data pipelines, managing cloud-based data platforms, and deploying scalable dat...

Posted 1 month ago


7.0 - 12.0 years

10 - 15 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

Skill: Senior Database Administrator. Experience: 7+ years. Location: Pan India. Notice Period: Immediate to 15 days. Job Summary: Enforced security administration and performed system monitoring by adding and removing users, configuring quotas, auditing, and analyzing security problems. 7+ years of experience in the relevant domain. Managed Postgres RDS and AWS Oracle RDS instances, ensuring the latest features and bug fixes were implemented. Worked extensively on Oracle version upgrades, migrations, and patching. Configured and implemented different types of advanced replication models for distributed databases. Knowledge of GoldenGate. Dealing with database security and auditing. Worked as a team member t...

Posted 1 month ago


5.0 - 10.0 years

15 - 25 Lacs

Hyderabad

Work from Office

Role: AWS Data Engineer. Exp: 5 to 10 years. Location: Hyderabad. Mode: Hybrid. Job Overview: We are seeking a highly skilled Data Engineer with strong expertise in AWS cloud services and Python programming. The ideal candidate will be responsible for designing, building, and maintaining scalable data pipelines, ensuring data availability, quality, and performance across enterprise systems. You will collaborate closely with data analysts, data scientists, and business stakeholders to deliver reliable, high-quality data solutions. Key Responsibilities: Design, develop, and maintain ETL/ELT data pipelines using Python and AWS-native services (Glue, Lambda, EMR, Step Functions, etc.). Build and ...

Posted 1 month ago


10.0 - 15.0 years

20 - 30 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Job Title: Solution Architect - Technology. Job Description: We are seeking a skilled Solution Architect to join our dynamic team. The ideal candidate will be responsible for designing and implementing data warehouse and data lake processes to support our digital platforms' analytics needs. The Solution Architect will collaborate closely with analysts, database administrators, and business users to understand requirements and deliver efficient solutions. Responsibilities: Design, develop, and maintain data management processes using industry best practices and tools such as AWS and Microsoft Fabric Lakehouses, Amazon Redshift, Amazon QuickSight, AWS Glue, Kafka, Data Factory, Synapse, Power BI. ...

Posted 1 month ago


5.0 - 8.0 years

13 - 18 Lacs

Bengaluru

Hybrid

Skills: AWS Redshift DBA or AWS Aurora PostgreSQL/MSSQL & RDS. Experience: 5-8 years. Location: Bangalore (relocation preferred). Education: B.Tech, BCA, B.Sc. Educational Requirements: Bachelor of Engineering, BTech, BSc, BCA. Service Line: Cloud & Infrastructure Services. Responsibilities - A day in the life of an Infoscion: • As part of the Infosys delivery team, your primary role would be to interface with the client for quality assurance, issue resolution, and ensuring high customer satisfaction. • You will understand requirements, create and review designs, validate the architecture, and ensure high levels of service offerings to clients in the technology domain. • You will participate in project e...

Posted 1 month ago


2.0 - 7.0 years

4 - 9 Lacs

Mumbai Suburban, Pune

Work from Office

Role and Responsibilities Design, develop, and maintain ETL pipelines across Zoho applications and SaaS platforms. Implement Deluge scripts to automate workflows and data transformations within Zoho. Configure and manage Zoho Analytics for data modeling, cleansing, dashboards, and BI reporting. Integrate Zoho with external SaaS tools and cloud platforms using Zoho Flow, REST APIs, and third-party connectors (Zapier, Make, etc.). Perform data migration, validation, and quality checks to ensure accuracy and consistency across systems. Collaborate with product, finance, sales, and operations teams to deliver data-driven insights. Ensure data governance, security, and compliance within SaaS and ...

Posted 1 month ago


7.0 - 9.0 years

15 - 25 Lacs

Bengaluru

Work from Office

Job Description: Strong knowledge of SQL, Spark, Python, and AWS data integration and processing technologies (Glue, Athena, Redshift). Proficient in data analysis and data pipeline development. Well versed in functional and automated testing of data pipelines. Good understanding of performance tuning and debugging of data pipelines. Some understanding of lakehouse development.

Posted 1 month ago


5.0 - 9.0 years

8 - 18 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Dear Candidate, this is with reference to your profile on the job portal and Screen Select. Deloitte India Consulting has an immediate requirement for the following role. Notice period: immediate to 4 weeks (max). Location: any location. Job Summary: We are seeking a highly skilled and experienced Data Engineer with strong expertise in AWS services, PySpark, EMR, Snowflake, and PostgreSQL. The ideal candidate will be responsible for designing, developing, and optimizing data pipelines and analytics solutions in a cloud-native environment. Key Responsibilities: • Design and implement scalable data pipelines using AWS services (EMR, S3, Lambda, Glue, etc.) • ...

Posted 1 month ago


8.0 - 11.0 years

10 - 20 Lacs

Chennai

Hybrid

Lead Data Engineer - AWS Redshift, ETL concepts, data integration, Python, DataStage, Informatica IICS/SSIS, SQL, cloud databases. 8+ years in ETL (Informatica IICS/PowerCenter). Experience with version control (Git) and CI/CD workflows. Location: Chennai. Required candidate profile: Proficiency in AWS Redshift and complex SQL queries; Python for scripting, data wrangling, and automation; experience with version control (Git) and CI/CD workflows; data modeling and data warehousing concepts. Perks and benefits: great learning and career progression.

Posted 1 month ago


5.0 - 7.0 years

25 - 35 Lacs

Udaipur, Gurugram

Work from Office

Looking for a Senior Data Engineer (5–7 yrs relevant experience) in Python, SQL, ETL, Airflow, Spark & AWS to build enterprise-grade data pipelines for high-scale platforms, with a focus on automation and cloud-first architecture. Apply: www.nurimtech.ai/career.html Required candidate profile: Experienced Data Engineer (5–7 yrs) with strong Python, SQL, ETL, Airflow, Spark & AWS skills. Proven ability to build scalable pipelines and ensure data quality. Ownership and strong problem-solving. Perks and benefits: refer to the JD on the company careers page.

Posted 1 month ago


6.0 - 11.0 years

15 - 30 Lacs

Hyderabad, Bengaluru, Mumbai (All Areas)

Hybrid

3+ years of hands-on experience with AWS services, including Lambda, API Gateway, CloudFront, EC2, S3, and Elastic Beanstalk. Build and deployment automation (continuous integration/delivery) experience using industry-standard platforms (VSTS, Jenkins, Bamboo, TeamCity). Hands-on experience with infrastructure-as-code automation (CloudFormation, Terraform). Experience with modern scripting languages; Python and PowerShell preferred. In-depth knowledge of high-availability (HA) design and implementation across AWS services. Understanding of general database technologies and maintenance (RDS, DynamoDB, Redshift, Aurora). Familiarity with AWS networking and routing technologies (VPC, Securi...

Posted 1 month ago


3.0 - 5.0 years

10 - 15 Lacs

Bengaluru

Hybrid

Key Responsibilities: Develop complex SQL queries for data extraction, transformation, and loading (ETL) processes within the data warehouse. Document SQL development standards and procedures. Design and optimize SQL scripts to improve data retrieval performance and data quality. Implement and maintain dimensional data models (star schema, snowflake schema) using SQL. Write stored procedures, functions, views, and triggers to automate data workflows. Collaborate with BI teams and business stakeholders to gather reporting requirements and translate them into SQL-based solutions. Optimize existing SQL queries and database objects for better performance. Work with large datasets to validate and e...

Posted 1 month ago


6.0 - 8.0 years

4 - 9 Lacs

Kolkata, Hyderabad, Chennai

Work from Office

Dear Candidate, greetings from Tata Consultancy Services. Job openings at TCS. Skill: AWS and Redshift. Exp range: 6 to 8 yrs. Location: Chennai / Hyderabad / Kolkata. Notice period: 30 days. Please find the job description below. Hands-on experience in AWS Redshift, DBA, SQL. Good to have experience with AWS S3 and IAM. CI/CD build and deployment support. Monitoring and performance tuning of queries. Educate data engineers and recommend improvements. Set up office hours and interact with architects. Implement data quality best practices and optimize data workflows. If you are interested in the above opportunity, kindly share your updated resume to sivabhavani.t@tcs.com immediately with the details below (Mandato...

Posted 1 month ago


8.0 - 13.0 years

18 - 36 Lacs

Madurai

Work from Office

Design and maintain scalable schemas in MSSQL, Redshift, and Snowflake. Optimize queries, build data pipelines, ensure data reliability, enforce SLAs, and collaborate across teams to enhance performance, security, and compliance.

Posted 1 month ago


6.0 - 11.0 years

15 - 30 Lacs

Noida, Hyderabad, Chennai

Hybrid

Immediate joiner: AWS Data Engineer with SCD, SQL, Python, PySpark, AWS @ Hyderabad, Chennai, Noida, Bangalore, Pune. Work mode: Hybrid. Key skills: SCD, SQL, PySpark, Python, AWS. Experience: 4 to 9 years. Job location: Hyderabad, Chennai, Noida. Annual offered salary: 15 lacs to 27.5 lacs. Job description: Immediate joiner for AWS Data Engineer at Hyderabad, Chennai, and Noida locations. Skill rating: Communication, DWH/ETL, PySpark, SQL, AWS. Experience: total 4-9 yrs. Data engineering on AWS (S3, Lambda, Glue, Athena, DMS, Airflow, IAM, SNS-SQS), PySpark, SQL, Python. Questions/Topics: SCD Type 1 vs SCD Type 2, key components of Airflow, boto3, S3 lifecycle policy, AWS Secrets Manager, schema evolution handli...

Posted 1 month ago


7.0 - 11.0 years

14 - 24 Lacs

Pune

Hybrid

We are seeking a highly skilled and motivated Data Engineer to join our dynamic team. The ideal candidate will have a strong background in designing, developing, and managing data pipelines, working with cloud technologies, and optimizing data workflows. Note: Only apply if you are willing to attend one round of F2F interview. We are looking for November joiners only. Experience: 7 to 10 years. Work mode: Hybrid. If interested, please refer to the JD below and share your resume with the required details: Total Exp; Exp as a Data Engineer; Relevant Exp in AWS Services; Relevant Exp in Python; Relevant Exp in PySpark; Notice period. Must-have skills: 1. Data Engineer with a strong foundation in data w...

Posted 1 month ago


8.0 - 13.0 years

0 - 0 Lacs

Pune, Chennai, Bengaluru

Work from Office

Minimum 8+ years of experience working as a Snowflake, AWS, and Big Data developer. Key skills required: Snowflake, Snowpark, Apache Iceberg; AWS services – Glue, CloudFormation, Lake Formation, IAM, Lambda, Redshift; coding languages – Python, SQL; big data technologies – PySpark, Spark. Work in an Agile environment and participate in scrum daily standups, sprint planning, reviews, and retrospectives. • Understand project requirements and translate them into technical solutions that meet the project quality standards. • Ability to work in a team in a diverse, multi-stakeholder environment and collaborate with upstream/downstream functional teams to identify, troubleshoot, and resolve data issues. • St...

Posted 1 month ago


6.0 - 10.0 years

20 - 25 Lacs

Gurugram

Hybrid

Roles and Responsibilities: Collaborate with data product owners, data engineers, and business stakeholders to gather and document detailed business and technical requirements for data products and reporting needs Capture and maintain use cases, business rules, and logic for assigned data domains (e.g., fan data, marketing), ensuring alignment across teams and platforms Lead the effort in gathering technical specifications to support the integration of various internal and external data sources, including understanding integration methods and patterns Translate business needs into clear, structured documentation outlining data transformations, validations, standardizations, and reporting log...

Posted 1 month ago


5.0 - 8.0 years

8 - 16 Lacs

Gurugram, Bengaluru, Delhi / NCR

Work from Office

Role & responsibilities Job Title: Big Data Engineer Experience: 5+ Years Location: Gurugram / Bangalore Budget: Up to 16.5 LPA Job Description: We are seeking an experienced Big Data Engineer with strong expertise in data processing, analytics, and cloud-based data platforms. The ideal candidate will be responsible for designing, developing, and maintaining scalable data pipelines and architectures to support business intelligence, analytics, and reporting needs. Key Responsibilities: Design and develop data pipelines and ETL processes for large-scale data ingestion and transformation. Work extensively with Big Data technologies such as Spark and Hadoop ecosystems. Build and optimize data m...

Posted 1 month ago


3.0 - 6.0 years

6 - 10 Lacs

Chennai, Bengaluru

Work from Office

Job Title: SQL Developer - Cloud Data Platforms (Redshift / BigQuery). Job Description: Design, develop, and optimize SQL queries, procedures, and ETL workflows for large-scale datasets. Work with cloud-based data warehouses such as Amazon Redshift, Google BigQuery, or Snowflake. Build and maintain data pipelines ensuring high performance and reliability. Collaborate with data engineers and analysts to support reporting and analytics requirements. Implement data modeling, partitioning, and performance tuning strategies. Manage data integration from multiple sources (APIs, flat files, and databases). Ensure data quality, accuracy, and compliance with best practices. Support migration of on-pr...

Posted 1 month ago


8.0 - 12.0 years

20 - 22 Lacs

Chennai

Work from Office

Role: Support Executive. Experience: 8+. Qualification: Engineering / Engineering equivalent. Skills: AWS Redshift Ops, PL/SQL, Unix, Python, Airflow. Job Description: Incident management, troubleshooting issues, contributing to development, collaborating with other teams, suggesting improvements, enhancing system performance, and training new employees.

Posted 1 month ago


8.0 - 13.0 years

15 - 25 Lacs

Noida, Hyderabad

Work from Office

Job Title: Data Engineer Employment Type: Full Time, Permanent Position Location: Noida/Hyderabad Qualifications: BE/B.Tech/MCA Degree in Computer Science, Engineering, or similar relevant field Total Experience: 8+ years Working Model: Work from Office ABOUT THE ROLE We are seeking a skilled Senior Data Engineer/Lead Engineer to design, build, and maintain robust data infrastructure and pipelines that enable our organization to leverage data for strategic decision-making. The ideal candidate will have strong technical expertise in data engineering, cloud technologies, and data architecture, with a passion for building scalable and efficient data solutions. PRIMARY RESPONSIBILITIES: • Tran...

Posted 2 months ago


3.0 - 5.0 years

15 - 20 Lacs

Noida

Work from Office

We are seeking a hands-on data engineer with a strong understanding of both data engineering and reporting. The ideal candidate will design and maintain data pipelines, curate analytical datasets, and build reports and dashboards that empower business stakeholders with insights. You'll work across ETL development, data modelling, and business reporting, ensuring our data ecosystem is accurate, scalable, and reliable. Key Responsibilities: Design, develop, and maintain ETL/ELT pipelines using AWS Glue, Python, and SQL to integrate data from multiple sources (MySQL, MongoDB, Postgres, Redshift, APIs, and S3). Build and manage Power BI dashboards and reports for business teams (Sales, Marketing, ...

Posted 2 months ago


5.0 - 10.0 years

17 - 32 Lacs

Bengaluru

Remote

Job Title: AWS Data Engineer. Location: Hyderabad/Bangalore/Gurugram (Remote). Job Type: Full time. Shift time: 1 PM - 10 PM. About Straive: Straive is a market-leading content and data technology company providing data services, subject matter expertise, and technology solutions to multiple domains. Data Analytics & AI Solutions, Data AI Powered Operations, and Education & Learning form the core pillars of the company's long-term vision. The company is a specialized solutions provider to business information providers in finance, insurance, legal, real estate, life sciences, and logistics. Straive continues to be the leading content services provider to research and education publishers. Data Analytics ...

Posted 2 months ago


5.0 - 8.0 years

4 - 9 Lacs

Pune, Bengaluru, Mumbai (All Areas)

Work from Office

Role & responsibilities - JD for Databricks/PySpark Engineer: Candidate should have 5-8 years of experience in data engineering. Must have knowledge of and working experience in Databricks, PySpark, Redshift, AWS, Apache Airflow, ETL, and SQL. Nice to have: life science domain experience (pharma), BI tools (Tableau/Power BI). Designing, creating, testing, and maintaining complete data management and processing systems. Working closely with stakeholders and the solution architect. Ensuring the architecture meets the business requirements. Building highly scalable, robust, and fault-tolerant systems. Discovering data acquisition opportunities. Finding ways and methods to extract value from existing data. Impr...

Posted 2 months ago
