
152 Redshift AWS Jobs

JobPe aggregates job listings for easy access, but you apply directly on the original job portal.

7.0 - 12.0 years

25 - 32 Lacs

Kochi, Bengaluru

Work from Office

Data marts using AWS services (Redshift, Aurora, RDS); data modelling (star schema, snowflake); performance tuning (compression, materialised views); architecting scalable data warehouses using AWS Redshift, Athena, Glue; batch analytics using Kinesis, Lambda
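The posting above centres on Redshift data-mart design and performance tuning. As a hedged illustration only (not the employer's actual schema), the sketch below creates a star-schema fact table with a distribution and sort key and a pre-aggregated materialized view through the redshift_connector Python driver; the cluster endpoint, credentials, and table names are placeholder assumptions.

```python
# Hedged sketch: star-schema fact table plus a materialized view in Amazon Redshift.
# Endpoint, credentials, and table names are placeholders, not a real environment.
import redshift_connector

conn = redshift_connector.connect(
    host="example-cluster.abc123.ap-south-1.redshift.amazonaws.com",  # placeholder endpoint
    database="analytics",
    user="etl_user",
    password="***",
)
cur = conn.cursor()

# Fact table with distribution and sort keys chosen for common join/filter patterns
cur.execute("""
    CREATE TABLE IF NOT EXISTS sales_fact (
        sale_id      BIGINT,
        customer_id  BIGINT,
        product_id   BIGINT,
        sale_date    DATE,
        amount       DECIMAL(12,2)
    )
    DISTKEY (customer_id)
    SORTKEY (sale_date);
""")

# Pre-aggregated materialized view to speed up dashboard-style queries
cur.execute("""
    CREATE MATERIALIZED VIEW daily_sales_mv AS
    SELECT sale_date, SUM(amount) AS total_amount, COUNT(*) AS order_count
    FROM sales_fact
    GROUP BY sale_date;
""")
conn.commit()
cur.close()
conn.close()
```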

Posted 18 hours ago

Apply

2.0 - 3.0 years

6 - 7 Lacs

Coimbatore

Work from Office

ETL Developer
Job Title: ETL Developer (FTE). Location: Coimbatore. Start Date: ASAP.
Job Summary: We are looking for an experienced ETL Developer with strong expertise in Apache Airflow, Redshift, and SQL-based data pipelines, with an upcoming transition to Snowflake. This is a contract role based in Coimbatore, ideal for professionals who can independently deliver high-quality ETL solutions in a cloud-native, fast-paced environment.
Key Responsibilities:
1. ETL Design and Development: Design and develop scalable and modular ETL pipelines using Apache Airflow, with orchestration and monitoring capabilities. Translate business requirements into robust data transformation pipelines across cloud data platforms. Develop reusable ETL components to support a configuration-driven architecture.
2. Data Integration and Transformation: Integrate data from multiple sources: Redshift, flat files, APIs, Excel, and relational databases. Implement transformation logic such as cleansing, standardization, enrichment, and deduplication. Manage incremental and full loads, along with SCD handling strategies.
3. SQL and Database Development: Write performant SQL queries for data staging and transformation within Redshift and Snowflake. Utilize joins, window functions, and aggregations effectively. Ensure indexing and query tuning for high-performance workloads.
4. Performance Tuning: Optimize data pipelines and orchestrations for large-scale data volumes. Tune SQL queries and monitor execution plans. Implement best practices in distributed data processing and cloud-native optimizations.
5. Error Handling and Logging: Implement robust error handling and logging in Airflow DAGs. Enable retry logic, alerting mechanisms, and failure notifications (a minimal Airflow sketch of this pattern follows this posting).
6. Testing and Quality Assurance: Conduct unit and integration testing of ETL jobs. Validate data outputs against business rules and source systems. Support QA during UAT cycles and help resolve data defects.
7. Deployment and Scheduling: Deploy pipelines using Git-based CI/CD practices. Schedule and monitor DAGs using Apache Airflow and integrated tools. Troubleshoot failures and ensure data pipeline reliability.
8. Documentation and Maintenance: Document data flows, DAG configurations, transformation logic, and operational procedures. Maintain change logs and update job dependency charts.
9. Collaboration and Communication: Work closely with data architects, analysts, and BI teams to define and fulfill data needs. Participate in stand-ups, sprint planning, and post-deployment reviews.
10. Compliance and Best Practices: Ensure ETL processes adhere to data security, governance, and privacy regulations (HIPAA, GDPR, etc.). Follow naming conventions, version control standards, and deployment protocols.
Required Skills & Experience: 3–6 years of hands-on experience in ETL development. Proven experience with Apache Airflow, Amazon Redshift, and strong SQL. Strong understanding of data warehousing concepts and cloud-based data ecosystems. Familiarity with handling flat files, APIs, and external sources. Experience with job orchestration, error handling, and scalable transformation patterns. Ability to work independently and meet deadlines.
Preferred Skills: Exposure to Snowflake or plans to migrate to Snowflake platforms. Experience in healthcare, life sciences, or regulated environments is a plus. Familiarity with Azure Data Factory, Power BI, or other cloud BI tools. Knowledge of Git, Azure DevOps, or other version control and CI/CD platforms.
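Since the responsibilities above call out retry logic, alerting, and failure notifications in Airflow DAGs, here is a minimal, hedged sketch of that pattern. The DAG id, schedule, and callables are illustrative assumptions, not this employer's actual pipeline.

```python
# Minimal Airflow DAG sketch with retries and a failure callback (placeholder logic).
from datetime import datetime, timedelta
from airflow import DAG
from airflow.operators.python import PythonOperator

def notify_failure(context):
    # Placeholder alerting hook: log the failed task; swap in email/Slack as needed.
    print(f"Task failed: {context['task_instance'].task_id}")

def extract_and_load():
    # Placeholder for the actual Redshift extract/transform/load step.
    print("running ETL step")

default_args = {
    "owner": "data-eng",
    "retries": 3,                           # retry transient failures
    "retry_delay": timedelta(minutes=5),
    "on_failure_callback": notify_failure,  # alert once retries are exhausted
}

with DAG(
    dag_id="redshift_daily_load",
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args=default_args,
) as dag:
    load_task = PythonOperator(task_id="extract_and_load", python_callable=extract_and_load)
```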

Posted 1 day ago

Apply

4.0 - 8.0 years

20 - 30 Lacs

Bengaluru

Work from Office

NOTE: Only candidates currently working with a product-based company/client are eligible for this position. Senior Engineer - Data. Albert's mission is to power a cloud-native platform that accelerates innovation in R&D through intelligent, data-driven workflows. We combine big data, AI, and secure infrastructure to help scientific teams formulate new materials and products faster and more reliably. We are looking for a Sr Engineer with strong expertise in designing and scaling data systems that support high-throughput, low-latency applications. This role is ideal for someone who enjoys working at the intersection of data architecture, distributed systems, and performance optimization across a diverse set of use cases including Data Warehouse, AI pipelines, SQL-based analytics, NoSQL storage such as DynamoDB, and GraphQL-based APIs. You'll lead the design and evolution of our core data platforms and infrastructure that powers our AI models, analytical queries, and operational workloads. Responsibilities: Develop and maintain SQL and NoSQL databases, ensuring high performance, scalability, and reliability. Collaborate with the API team and Data Science team to build robust data pipelines and automations. Work closely with stakeholders to understand database requirements and provide technical solutions. Optimize database queries and performance tuning to enhance overall system efficiency. Implement and maintain data security measures, including access controls and encryption. Monitor database systems and troubleshoot issues proactively to ensure uninterrupted service. Develop and enforce data quality standards and processes to maintain data integrity. Create and maintain documentation for database architecture, processes, and procedures. Stay updated with the latest database technologies and best practices to drive continuous improvement. Expertise in SQL queries and stored procedures, with the ability to optimize and fine-tune complex queries for performance and efficiency. Experience with monitoring and visualization tools such as Grafana to monitor database performance and health. Requirements: 4+ years of experience in data engineering, with a focus on large-scale data systems. Proven experience designing data models and access patterns across SQL and NoSQL ecosystems. Hands-on experience with technologies like PostgreSQL, DynamoDB, S3, GraphQL, or vector databases. Proficient in SQL stored procedures, with extensive expertise in MySQL schema design, query optimization, and resolvers, along with hands-on experience in building and maintaining data warehouses. Strong programming skills in Python or JavaScript, with the ability to write efficient, maintainable code. Familiarity with distributed systems, data partitioning, and consistency models. Familiarity with observability stacks (Prometheus, Grafana, OpenTelemetry) and debugging production bottlenecks. Deep understanding of cloud infrastructure (preferably AWS), including networking, IAM, and cost optimization. Prior experience building multi-tenant systems with strict performance and isolation guarantees. Excellent communication and collaboration skills to influence cross-functional technical decisions. Culture: The Albert team uses an iterative/agile development methodology, and you will be a key contributor in the entire development cycle.
At Albert, we put a great deal of emphasis on collaboration and maintaining an open working environment - having great coworkers is one of the biggest determinants for enjoying your work, and we take our enjoyment of work very seriously. Your opinions matter. We are driven by technology and innovation, and we look to the smartest, most passionate people on the team as the source of ideas. Albert Invent Private Limited WTC Annex, Block A ,1st Floor, Office No. 3, Brigade Gateway Campus 26/1 Dr. Rajkumar Road, Malleswaram Rajajinagar, Bengaluru -560055 www.albertinvent.com
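The role above spans SQL and NoSQL access patterns, including DynamoDB. As a hedged sketch under assumed table and key names (not Albert's actual data model), the snippet below writes and queries items keyed by a partition/sort key pair with boto3.

```python
# Hedged DynamoDB access-pattern sketch; table name, keys, and values are placeholders.
import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("experiments")  # placeholder table name

# Item keyed by an entity id (partition key) and a timestamped sort key
table.put_item(Item={
    "pk": "PROJECT#123",
    "sk": "RUN#2025-01-15T10:00:00Z",
    "status": "completed",
    "metric": "42.7",
})

# Access pattern: fetch all runs for one project, newest first
resp = table.query(
    KeyConditionExpression=Key("pk").eq("PROJECT#123") & Key("sk").begins_with("RUN#"),
    ScanIndexForward=False,
)
print(resp["Items"])
```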

Posted 4 days ago

Apply

6.0 - 11.0 years

25 - 37 Lacs

Gurugram, Bengaluru

Work from Office

Key Responsibilities: Lead and mentor a team of data engineers, providing technical guidance, performance feedback, and career development. Architect, develop, and maintain scalable ETL/ELT pipelines using AWS services, PySpark, SQL, and Python. Drive the design and implementation of robust data models and data warehouses/lakes to support analytics and reporting. Ensure data quality, security, and governance across all stages of the data lifecycle. Collaborate with product and engineering teams to integrate data solutions into production environments. Optimize data workflows and performance across large, complex datasets. Manage stakeholder relationships and translate business needs into technical solutions. Stay up to date with the latest technologies and recommend best practices for data engineering. Technical Skills & Qualifications: 7+ years of experience in Data Engineering or related roles. 2+ years in a leadership or managerial capacity. Deep expertise in the AWS ecosystem, including services like S3, Glue, Redshift, Lambda, EMR, Athena, DynamoDB, and IAM. Proficient in PySpark, SQL, and Python for large-scale data processing and transformation. Strong understanding of ETL/ELT development, data modeling (star/snowflake schemas), and data architecture. Experience in managing data infrastructure, CI/CD pipelines, and workflow orchestration tools (e.g., Airflow, Step Functions). Knowledge of data governance, security, and compliance best practices. Excellent communication and leadership skills. Preferred Qualifications: AWS Certified Data Analytics, Solutions Architect, or equivalent certifications. Experience working in Agile environments. Familiarity with BI tools like QuickSight, Tableau, or Power BI. Exposure to machine learning workflows or MLOps is a plus. Why Join Decision Point? At Decision Point, we don't just build data solutions; we build decision intelligence. Joining us means being part of a fast-growing, innovation-driven company where your contributions directly impact business outcomes for global clients. We foster a culture of learning, ownership, and growth. As a Data Engineering Manager, you'll have the autonomy to drive strategic projects while working with some of the brightest minds in data and analytics. If you're looking to scale your career in a dynamic environment that values deep technical expertise and leadership, Decision Point is the place for you.
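Given the PySpark/ELT emphasis above, the following is a small, hedged PySpark sketch of a typical transformation: deduplicate raw records, then build a daily aggregate suitable for a star-schema summary table. Bucket paths and column names are assumptions for illustration only.

```python
# Hedged PySpark sketch: dedupe raw S3 data and write a partitioned daily aggregate.
from pyspark.sql import SparkSession, Window, functions as F

spark = SparkSession.builder.appName("orders_daily_agg").getOrCreate()

orders = spark.read.parquet("s3://example-raw-bucket/orders/")  # placeholder path

# Keep only the latest record per order_id (simple deduplication)
w = Window.partitionBy("order_id").orderBy(F.col("updated_at").desc())
latest = (orders
          .withColumn("rn", F.row_number().over(w))
          .filter(F.col("rn") == 1)
          .drop("rn"))

# Daily aggregate feeding a star-schema summary/fact table
daily = (latest
         .groupBy(F.to_date("order_ts").alias("order_date"))
         .agg(F.sum("amount").alias("total_amount"),
              F.countDistinct("customer_id").alias("customers")))

daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-curated-bucket/orders_daily/")  # placeholder path
```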

Posted 5 days ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Hyderabad

Work from Office

Roles and Responsibilities Design, develop, and maintain large-scale data pipelines using Amazon Redshift, EMR, and AWS Glue. Collaborate with cross-functional teams to gather requirements and deliver high-quality solutions on time. Develop complex SQL queries to optimize database performance and troubleshoot issues. Implement data security measures to ensure compliance with company policies. Participate in code reviews to improve overall quality of the software. Desired Candidate Profile 5-10 years of experience as a Data Engineer with expertise in Amazon Redshift, EMR, and AWS Glue. Strong understanding of Lambda functions and their integration with other AWS services. Proficiency in writing efficient SQL queries for large datasets. Experience working with big data technologies such as Hadoop or Spark is an added advantage.
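For the Redshift-centred work described above, one common pattern is to run SQL through the Redshift Data API rather than a persistent JDBC connection, which fits Lambda- and Glue-driven pipelines. This is a hedged sketch; the cluster identifier, database, secret ARN, and query are placeholders.

```python
# Hedged sketch: run a query via the Redshift Data API (boto3 "redshift-data").
import time
import boto3

client = boto3.client("redshift-data", region_name="ap-south-1")

resp = client.execute_statement(
    ClusterIdentifier="example-cluster",  # placeholder
    Database="analytics",                 # placeholder
    SecretArn="arn:aws:secretsmanager:ap-south-1:123456789012:secret:redshift-creds",  # placeholder
    Sql="SELECT event_date, COUNT(*) FROM events GROUP BY event_date ORDER BY 1 DESC LIMIT 7;",
)

# Poll until the statement finishes, then fetch the result set
while True:
    desc = client.describe_statement(Id=resp["Id"])
    if desc["Status"] in ("FINISHED", "FAILED", "ABORTED"):
        break
    time.sleep(1)

if desc["Status"] == "FINISHED":
    rows = client.get_statement_result(Id=resp["Id"])["Records"]
    print(rows)
```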

Posted 5 days ago

Apply

4.0 - 8.0 years

10 - 20 Lacs

Navi Mumbai

Work from Office

Hi, Greetings from HR Central! Job Title: AWS Data Engineer Location: Airoli, Navi Mumbai (Work from Office) Client: A leading global partner in sustainable construction Experience: 4 to 8 years Education: BE / B.Tech Certification: AWS (preferred) Job Description: Strong hands-on experience in Python programming (mandatory) Expertise in Data Engineering with exposure to large-scale projects/accounts Hands-on experience with AWS Big Data platforms Redshift, Glue, Lambda, Data Lakes, Data Warehouses Strong skills in SQL, Spark, PySpark Experience in building data pipelines and data integration workflows Exposure to orchestration tools like Airflow, Luigi, Azkaban Apply Now: Interested candidates can share their updated CV to: rajalakshmi@hr-central.in Thanks & Regards, Rajalakshmi HR Central rajalakshmi@hr-central.in
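As a hedged illustration of the Redshift/Glue/Lambda pipeline stack listed above (the Glue job name and argument keys are assumptions), a Lambda function can react to a new file landing in S3 and start a Glue ETL job run:

```python
# Hedged sketch: S3 "object created" event triggers a Glue job run from Lambda.
import boto3

glue = boto3.client("glue")

def lambda_handler(event, context):
    # Pull the bucket/key of the newly arrived file from the S3 event payload
    record = event["Records"][0]["s3"]
    bucket, key = record["bucket"]["name"], record["object"]["key"]

    # Kick off the (placeholder) Glue ETL job, passing the file location as job arguments
    run = glue.start_job_run(
        JobName="example-raw-to-redshift-job",
        Arguments={"--source_bucket": bucket, "--source_key": key},
    )
    return {"job_run_id": run["JobRunId"]}
```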

Posted 6 days ago

Apply

4.0 - 8.0 years

10 - 17 Lacs

Mumbai, Navi Mumbai

Work from Office

Greetings from HR Central! A leading global partner for innovative and sustainable construction, offering building materials and solutions, is looking for you. We are excited to share a job opening for Data Engineer - AWS with you for our client. Experience: 4-8 years. Job Location: Mumbai (Airoli, Navi Mumbai). Mode: WFO (Mon-Fri). Please find the job description below. About The Role: The Data Engineer will play an important role in enabling the business for data-driven operations and decision making in an Agile and product-centric IT environment. Education / Qualification: BE / B.Tech from IIT or Tier I / II colleges. Certification in cloud platforms (AWS or GCP). Experience: Total experience of 4-8 years. Hands-on experience in Python coding is a must. Experience in data engineering, including exposure to large-scale projects/accounts. Hands-on experience in Big Data cloud platforms like AWS (Redshift, Glue, Lambda), Data Lakes, Data Warehouses, data integration, and data pipelines. Experience in SQL and in writing code on the Spark engine using Python/PySpark. Experience in data pipeline and workflow management tools (such as Azkaban, Luigi, Airflow, etc.). Key Personal Attributes: Business focused, customer & service minded. Strong consultative and management skills. Good communication and interpersonal skills. We are an equal opportunity employer and consider all qualified applicants without regard to race, color, religion, gender, sexual orientation, gender identity, age, disability, or any other characteristic protected by law. If you find this position suitable, kindly send your updated CV to tina.sapra@hr-central.in with the below details: 1. Current CTC 2. Expected CTC 3. Notice Period 4. Current Location with area 5. Years of experience as a Data Engineer 6. Years of experience in AWS 7. Years of experience in a) Redshift b) Glue c) Lambda d) Python e) PySpark f) SQL 8. Rate yourself on the below skills from 1-10 (10 being highest): a) Redshift b) Glue c) Lambda d) Python e) PySpark f) SQL 9. Are you writing Python code, or modifying and maintaining code? 10. Are you writing code on the Spark engine using Python/PySpark? 11. Mention a 3-line profile summary of your overall relevant experience (based on skills, kind of projects, industry, product-based experience). Thanks and Regards, Tina Sapra, HR Central, tina.sapra@hr-central.in, https://www.linkedin.com/in/tina-sapra-331954241

Posted 6 days ago

Apply

5.0 - 8.0 years

13 - 14 Lacs

Pune

Work from Office

Skills: Strong SQL (Redshift + Snowflake). ETL/data pipeline development. Data migration (S3 to Snowflake, Snowpipe, bulk copy). Performance optimization (clustering, caching, scaling warehouses). Rewriting queries and transformations in Snowflake.
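The migration skills listed above (S3 to Snowflake, bulk copy, warehouse scaling) can be sketched roughly as below with the Snowflake Python connector; the account, stage, table, and warehouse names are placeholder assumptions, not the employer's environment.

```python
# Hedged sketch: bulk-load staged S3 files into Snowflake and resize a warehouse.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",  # placeholder
    user="etl_user",
    password="***",
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)
cur = conn.cursor()

# Bulk copy of Parquet files from an external S3 stage into a Snowflake table
cur.execute("""
    COPY INTO sales_raw
    FROM @s3_landing_stage/sales/
    FILE_FORMAT = (TYPE = PARQUET)
    MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE;
""")

# Scale the warehouse up for a heavy backfill, then back down to control cost
cur.execute("ALTER WAREHOUSE LOAD_WH SET WAREHOUSE_SIZE = 'LARGE';")
cur.execute("ALTER WAREHOUSE LOAD_WH SET WAREHOUSE_SIZE = 'XSMALL';")

cur.close()
conn.close()
```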

Posted 6 days ago

Apply

5.0 - 6.0 years

10 - 11 Lacs

Coimbatore

Work from Office

Sr ETL Developer
Job Title: Sr ETL Developer. Location: Coimbatore. Start Date: ASAP.
Job Summary: We are looking for an experienced Sr ETL Developer with strong expertise in Apache Airflow, Redshift, and SQL-based data pipelines, with an upcoming transition to Snowflake. This is a contract role based in Coimbatore, ideal for professionals who can independently deliver high-quality ETL solutions in a cloud-native, fast-paced environment.
Key Responsibilities:
1. ETL Design and Development: Design and develop scalable and modular ETL pipelines using Apache Airflow, with orchestration and monitoring capabilities. Translate business requirements into robust data transformation pipelines across cloud data platforms. Develop reusable ETL components to support a configuration-driven architecture.
2. Data Integration and Transformation: Integrate data from multiple sources: Redshift, flat files, APIs, Excel, and relational databases. Implement transformation logic such as cleansing, standardization, enrichment, and deduplication. Manage incremental and full loads, along with SCD handling strategies (a simplified SCD Type 2 sketch follows this posting).
3. SQL and Database Development: Write performant SQL queries for data staging and transformation within Redshift and Snowflake. Utilize joins, window functions, and aggregations effectively. Ensure indexing and query tuning for high-performance workloads.
4. Performance Tuning: Optimize data pipelines and orchestrations for large-scale data volumes. Tune SQL queries and monitor execution plans. Implement best practices in distributed data processing and cloud-native optimizations.
5. Error Handling and Logging: Implement robust error handling and logging in Airflow DAGs. Enable retry logic, alerting mechanisms, and failure notifications.
6. Testing and Quality Assurance: Conduct unit and integration testing of ETL jobs. Validate data outputs against business rules and source systems. Support QA during UAT cycles and help resolve data defects.
7. Deployment and Scheduling: Deploy pipelines using Git-based CI/CD practices. Schedule and monitor DAGs using Apache Airflow and integrated tools. Troubleshoot failures and ensure data pipeline reliability.
8. Documentation and Maintenance: Document data flows, DAG configurations, transformation logic, and operational procedures. Maintain change logs and update job dependency charts.
9. Collaboration and Communication: Work closely with data architects, analysts, and BI teams to define and fulfill data needs. Participate in stand-ups, sprint planning, and post-deployment reviews.
10. Compliance and Best Practices: Ensure ETL processes adhere to data security, governance, and privacy regulations (HIPAA, GDPR, etc.). Follow naming conventions, version control standards, and deployment protocols.
Required Skills & Experience: 6+ years of hands-on experience in ETL development. Proven experience with Apache Airflow, Amazon Redshift, and strong SQL. Strong understanding of data warehousing concepts and cloud-based data ecosystems. Familiarity with handling flat files, APIs, and external sources. Experience with job orchestration, error handling, and scalable transformation patterns. Ability to work independently and meet deadlines.
Preferred Skills: Exposure to Snowflake or plans to migrate to Snowflake platforms. Experience in healthcare, life sciences, or regulated environments is a plus. Familiarity with Azure Data Factory, Power BI, or other cloud BI tools. Knowledge of Git, Azure DevOps, or other version control and CI/CD platforms.
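Because the responsibilities above include SCD handling during incremental loads, here is a hedged, simplified sketch of a classic SCD Type 2 load against Redshift (close out changed rows, then insert new versions). Table and column names are assumptions; a real pipeline would wrap this in an Airflow task and a transaction.

```python
# Hedged SCD Type 2 sketch: expire changed dimension rows, then insert new versions.
# Table/column names (dim_customer, stg_customer) are illustrative assumptions.
import redshift_connector

conn = redshift_connector.connect(
    host="example-cluster.abc123.ap-south-1.redshift.amazonaws.com",  # placeholder
    database="analytics", user="etl_user", password="***",
)
cur = conn.cursor()

# 1) Close out current rows whose attributes changed in the staging load
cur.execute("""
    UPDATE dim_customer
    SET is_current = FALSE, valid_to = CURRENT_DATE
    FROM stg_customer s
    WHERE dim_customer.customer_id = s.customer_id
      AND dim_customer.is_current = TRUE
      AND dim_customer.email <> s.email;
""")

# 2) Insert a new current version for every changed or brand-new customer
cur.execute("""
    INSERT INTO dim_customer (customer_id, email, valid_from, valid_to, is_current)
    SELECT s.customer_id, s.email, CURRENT_DATE, NULL, TRUE
    FROM stg_customer s
    LEFT JOIN dim_customer d
      ON d.customer_id = s.customer_id AND d.is_current = TRUE
    WHERE d.customer_id IS NULL;
""")

conn.commit()
cur.close()
conn.close()
```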

Posted 1 week ago

Apply

5.0 - 10.0 years

15 - 30 Lacs

Hyderabad

Work from Office

Job Summary: We are seeking a skilled and motivated Data Engineer to join our team and help build scalable data pipelines and infrastructure. The ideal candidate will have strong expertise in Python and SQL scripting , a solid understanding of data engineering concepts , and hands-on experience with Big Data technologies and AWS cloud services . Required Skills & Qualifications: Proficiency in Python and SQL scripting Strong understanding of data engineering principles and ETL processes Experience with Big Data technologies (e.g., Hadoop, Spark, etc.) Hands-on experience with AWS services including S3, Glue, Redshift, and Lambda Familiarity with Apache Airflow or equivalent orchestration tools Knowledge of data warehousing concepts and performance optimization Excellent problem-solving and communication skills

Posted 1 week ago

Apply

4.0 - 9.0 years

14 - 24 Lacs

Hyderabad

Work from Office

As a Database Engineer supporting the bank's Analytics platforms, you will be part of a centralized team of database engineers who are responsible for the maintenance and support of Citizens' most critical databases. A Database Engineer will be responsible for: Conceptual knowledge of database practices and procedures such as DDL, DML and DCL. Basic SQL skills, including SELECT, FROM, WHERE and ORDER BY. Ability to code SQL joins, subqueries, aggregate functions (AVG, SUM, COUNT), and use data manipulation techniques (UPDATE, DELETE). Understanding of basic data relationships and schemas. Developing basic Entity-Relationship diagrams. Conceptual understanding of cloud computing. Can solve routine problems using existing procedures and standard practices. Can look up error codes and open tickets with vendors. Ability to execute, explain and identify poorly written queries. Review data structures to ensure they adhere to database design best practices. Develop a comprehensive backup plan. Understanding of the different cloud models (IaaS, PaaS, SaaS), service models, and deployment options (public, private, hybrid). Solves standard problems by analyzing possible solutions using experience, judgment and precedents. Troubleshoot database issues, such as integrity issues, blocking/deadlocking issues, log shipping issues, connectivity issues, security issues, memory issues, disk space, etc. Understanding of cloud security concepts, including data protection, access control, and compliance. Manages risks that are associated with the use of information technology. Identifies, assesses, and treats risks that might affect the confidentiality, integrity, and availability of the organization's assets. Ability to design and implement a highly performing database using partitioning & indexing that meets or exceeds the business requirements. Documents a complex software system design as an easily understood diagram, using text and symbols to represent the way data needs to flow. Ability to code complex SQL. Performs effective backup management and periodic database restoration testing. General DB cloud networking skills – VPCs, SGs, KMS keys, private links. Ability to develop stored procedures and use at least one scripting language for reusable code and improved performance. Knows how to import and export data into and out of databases using ETL tools, code, or migration tools like DMS or scripts. Knowledge of DevOps principles and tools, such as CI/CD. Attention to detail and a customer-centric approach. Solves complex problems by taking a new perspective on existing solutions; exercises judgment based on the analysis of multiple sources of information. Ability to optimize queries for performance and resource efficiency. Review database metrics to identify performance issues. Required Qualifications: 5+ years of experience with database management/administration, Redshift, Snowflake or Neo4j. 5+ years of experience working with incident, change and problem management processes and procedures. Experience maintaining and supporting large-scale critical database systems in the cloud. 3+ years of experience working with AWS cloud hosted databases. An understanding of one programming language, including at least one front end framework (Angular/React/Vue), such as Python3, Java, JavaScript, Ruby, Golang, C, C++, etc.
Experience with cloud computing, ETL and streaming technologies – OpenShift, DataStage, Kafka Experience with agile development methodology Strong SQL performance & tuning skills Excellent communication and client interfacing skills Strong team collaboration skills and capacity to prioritize tasks efficiently. Desired Qualifications Experience working in an agile development environment Experience working in the banking industry Experience working in cloud environments such as AWS, Azure or Google Experience with CI/CD pipeline (Jenkins, Liquibase or equivalent) Education and Certifications Bachelor’s degree in computer science or related discipline

Posted 1 week ago

Apply

4.0 - 7.0 years

16 - 25 Lacs

Pune

Remote

About Velotio: Velotio Technologies is a product engineering company working with innovative startups and enterprises. We are a certified Great Place to Work and recognized as one of the best companies to work for in India. We have provided full-stack product development for 110+ startups across the globe building products in the cloud-native, data engineering, B2B SaaS, IoT & Machine Learning space. Our team of 400+ elite software engineers solves hard technical problems while transforming customer ideas into successful products. We are looking for a highly skilled Data Engineer with strong expertise in building data pipelines, managing cloud-based data platforms, and deploying scalable data architectures on AWS. The ideal candidate should have hands-on experience with AWS services and must hold a valid AWS certification. Requirements Design, build, and maintain robust, scalable, and efficient data pipelines and ETL/ELT processes on AWS. Work closely with data scientists, analysts, and business teams to understand data requirements and deliver solutions. Integrate data from multiple internal and third-party sources into unified data platforms. Optimize data lake and data warehouse performance (e.g., S3, Redshift, Glue, Athena). Ensure data quality, governance, and lineage using appropriate tools and frameworks. Implement CI/CD practices for data pipelines and workflows. Monitor and troubleshoot production data pipelines to ensure reliability and accuracy. Ensure compliance with data privacy and information security policies. Qualifications 5 years of experience in data engineering or a related role. Strong programming skills in Python, PySpark, or Scala. Proficient in SQL and working with structured, semi-structured, and unstructured data. Solid experience with AWS services such as: S3, Glue, Lambda, Redshift, Athena, Kinesis, Step Functions, CloudFormation, CodeBuild Hands-on experience with workflow orchestration tools like Apache Airflow or AWS-native alternatives. Must have an active AWS certification (e.g., AWS Certified Data Analytics, AWS Certified Solutions Architect). Experience with infrastructure as code (IaC) and DevOps practices is a plus. Desired Skills & Experience: Experience with Delta Lake, Parquet, Apache Hudi, or similar formats. Exposure to data cataloging and metadata management tools. Familiarity with data security frameworks and GDPR/data privacy considerations. Experience in client-facing roles and agile delivery environments. Our Culture : We have an autonomous and empowered work culture encouraging individuals to take ownership and grow quickly Flat hierarchy with fast decision making and a startup-oriented get things done culture A strong, fun & positive environment with regular celebrations of our success. We pride ourselves in creating an inclusive, diverse & authentic environment
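For the S3/Glue/Athena lake stack mentioned above, a hedged example of kicking off an Athena query from Python is shown below; the database, table, and results bucket are placeholder assumptions.

```python
# Hedged sketch: start an Athena query against the data lake with boto3.
import boto3

athena = boto3.client("athena", region_name="ap-south-1")

resp = athena.start_query_execution(
    QueryString="SELECT event_type, COUNT(*) AS cnt FROM events GROUP BY event_type",
    QueryExecutionContext={"Database": "analytics_lake"},                     # placeholder
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},   # placeholder
)

# The returned execution id can be polled with get_query_execution and the
# result set fetched with get_query_results once the state is SUCCEEDED.
print(resp["QueryExecutionId"])
```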

Posted 1 week ago

Apply

4.0 - 9.0 years

5 - 13 Lacs

Hyderabad

Hybrid

Offers/Pipelines Job Description: Required Skills & Experience: Cloud Technologies (AWS): Strong technical expertise with AWS services, specifically Redshift and Airflow for data engineering and orchestration. Hands-on experience in Redshift for data warehousing and optimization. Familiarity with AWS services like S3, EC2, Lambda, and Glue. Data Engineering & ETL/ELT Processes: Solid understanding of ETL (Extract, Transform, Load) and ELT (Extract, Load, Transform) processes. Experience in building scalable and efficient data pipelines to handle large-scale data processing, using Airflow for orchestration. Proficiency in SQL for querying and manipulating data in data warehouses, including Redshift. Data Warehousing & Data Modeling: Strong background in data warehousing concepts and best practices. Experience in data modeling, including designing and implementing schemas for data warehousing solutions. Data Vault Knowledge (Good to Have): Data Vault methodology knowledge is a plus. Understanding of how to build scalable and flexible data models using the Data Vault approach. Telecom Domain Expertise: Prior experience working in the Telecom industry is a significant advantage. Familiarity with Telecom data, network operations, and relevant business processes. Incident Resolution & Production Support: Ability to work in production support environments, resolve incidents, and troubleshoot data pipeline failures or data quality issues. Flexibility to work during European shift timings (till 9:30 PM/10:30 PM IST). Programming & Scripting: Strong Python skills for automation, data processing, and integration tasks. Experience with Airflow core concepts, operators, and building workflows for data pipeline orchestration. Agile Methodologies: Comfortable working in an Agile environment, participating in sprint cycles, and contributing to iterative development. Communication & Customer Skills: Excellent communication skills with the ability to interact with stakeholders and customers. Ability to translate technical concepts into business-friendly language and manage customer expectations.

Posted 1 week ago

Apply

8.0 - 12.0 years

30 - 35 Lacs

Bengaluru

Work from Office

Expertise in designing zoned data lakes, federated models, dimensional modeling, and data vault. Experience in Redshift, Snowflake, BigQuery, Delta Lake. ETL: Airflow, Kafka, Spark, Glue. Cloud: AWS (Glue, Athena, Lake Formation). Languages: SQL, Python, Spark, Bash.

Posted 1 week ago

Apply

3.0 - 8.0 years

0 - 2 Lacs

Hyderabad

Work from Office

We are conducting a Walk-In Interview in Hyderabad for the position of Data Engineer on 5th/6th/7th September 2025. Note: Candidates who attended an interview with us in the last 6 months are not eligible. Position: Data Engineer. Job description: Expert knowledge in AWS Data Lake implementation and support (S3, Glue, DMS, Athena, Lambda, API Gateway, Redshift). Handling of data-related activities such as data parsing, cleansing, quality definition, data pipelines, storage, and ETL scripts. Experience in programming languages: Python/PySpark/SQL. Hands-on experience with data migration. Experience in consuming REST APIs using various authentication options within an AWS Lambda architecture. Ability to orchestrate triggers, debug, and schedule batch jobs using AWS Glue, Lambda, and Step Functions. Understanding of AWS security features such as IAM roles and policies. Exposure to DevOps tools. AWS certification is highly preferred. Mandatory skills for Data Engineer: Python/PySpark, AWS Glue, Lambda, Redshift. Date: 5th/6th/7th September 2025. Time: 9.00 AM to 6.00 PM. Eligibility: Any Graduate. Experience: 2-10 Years. Gender: Any. Interested candidates can walk in directly. For any queries, please contact us at +91 8555079906/7644967546. Interview Venue Details: Selectify Analytics, Capital Park (Jain Sadguru Capital Park), Ayyappa Society, Silicon Valley, Madhapur, Hyderabad, Telangana 500081. Contact Person: Mr. Deepak/Saqeeb. Interview Time: 9.00 AM to 6.00 PM. Contact Number: +91 8555079906
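One of the activities above is consuming REST APIs from Lambda and landing the data for Glue to process. A rough, hedged sketch of that step (the endpoint, token variable, bucket, and key are assumptions) could look like this:

```python
# Hedged sketch: Lambda pulls a REST API with a bearer token and lands raw JSON in S3.
import os
import urllib.request
import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    # Placeholder endpoint and token; a real setup might read the token from Secrets Manager
    req = urllib.request.Request(
        "https://api.example.com/v1/orders",
        headers={"Authorization": f"Bearer {os.environ['API_TOKEN']}"},
    )
    with urllib.request.urlopen(req) as r:
        payload = r.read()

    # Land the raw response in the data lake for a Glue job to pick up later
    s3.put_object(Bucket="example-raw-bucket", Key="orders/latest.json", Body=payload)
    return {"bytes_written": len(payload)}
```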

Posted 1 week ago

Apply

8.0 - 13.0 years

12 - 22 Lacs

Noida

Remote

Job Title: Sr. Data Engineer Location: Noida/Remote Profile Summary: We are looking for a skilled data professional with hands-on experience in SQL, AWS Redshift, and PostgreSQL for efficient data management, transformation, and optimization. Proficient in developing insightful dashboards and reports using Power BI to support data-driven decision-making. Strong understanding of cloud-based data warehousing, ETL pipelines, and relational database systems. Ideal candidate will demonstrate the ability to translate complex data into actionable business intelligence. Key Responsibilities Develop, optimize, and maintain complex SQL queries, stored procedures, and data models to support analytics and reporting. Design, build, and enhance dashboards, visualizations, and self-service BI solutions in platforms such as Qlik, Tableau, Power BI, Domo, or MicroStrategy Ensure data quality, consistency, and performance across reporting layers through unit testing and source reconciliation. Provide analytical insights that drive decision-making across functional areas (claims, premiums, sales, and operations). Collaborate with data engineering and business stakeholders in both agile and ad-hoc project work. Required Qualifications 7+ years of hands-on experience with SQL (advanced query writing, optimization, performance tuning). Strong expertise with at least one major BI / dashboarding tool (Qlik, Tableau, Power BI, Domo, or MicroStrategy). Solid understanding of data modeling, ETL concepts, and BI best practices . Experience working with large datasets and relational databases (e.g., SQL Server, Oracle, Postgres, Redshift, Snowflake). Strong analytical and problem-solving skills with a business-oriented mindset. Excellent communication skills with the ability to work independently in an offshore/remote setup. Preferred Qualifications Property & Casualty insurance experience . Experience with cloud data platforms (AWS Redshift, Snowflake, etc). Familiarity with data governance, performance monitoring, and self-service BI adoption strategies.

Posted 1 week ago

Apply

3.0 - 8.0 years

12 - 22 Lacs

Kolkata

Work from Office

Experienced in cloud operations with AWS services like Lambda, Glue, S3, Redshift, SNS, and SQS. Skilled in planning, designing, and developing cloud applications. Collaborates with engineering teams to design and deploy scalable enterprise-wide solutions.
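As a hedged illustration of the SNS/SQS piece of the stack above (the topic ARN and queue URL are placeholders), a pipeline step can publish a completion event to SNS while a downstream consumer polls a subscribed SQS queue:

```python
# Hedged sketch: publish a pipeline event to SNS and consume it from a subscribed SQS queue.
import json
import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

QUEUE_URL = "https://sqs.ap-south-1.amazonaws.com/123456789012/etl-events-queue"  # placeholder

# Publish a notification when a load finishes (placeholder topic ARN)
sns.publish(
    TopicArn="arn:aws:sns:ap-south-1:123456789012:etl-events",
    Message=json.dumps({"table": "sales_fact", "status": "loaded"}),
)

# A downstream consumer polls the subscribed queue and deletes processed messages
msgs = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=5)
for m in msgs.get("Messages", []):
    print(m["Body"])
    sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=m["ReceiptHandle"])
```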

Posted 1 week ago

Apply

3.0 - 8.0 years

12 - 22 Lacs

Chennai

Work from Office

Experienced in cloud operations with AWS services like Lambda, Glue, S3, Redshift, SNS, and SQS. Skilled in planning, designing, and developing cloud applications. Collaborates with engineering teams to design and deploy scalable enterprise-wide solutions.

Posted 1 week ago

Apply

3.0 - 8.0 years

12 - 22 Lacs

Pune

Work from Office

Experienced in cloud operations with AWS services like Lambda, Glue, S3, Redshift, SNS, and SQS. Skilled in planning, designing, and developing cloud applications. Collaborates with engineering teams to design and deploy scalable enterprise-wide solutions.

Posted 1 week ago

Apply

3.0 - 8.0 years

12 - 22 Lacs

Mumbai

Work from Office

Experienced in cloud operations with AWS services like Lambda, Glue, S3, Redshift, SNS, and SQS. Skilled in planning, designing, and developing cloud applications. Collaborates with engineering teams to design and deploy scalable enterprise-wide solutions.

Posted 1 week ago

Apply

3.0 - 8.0 years

12 - 22 Lacs

Bengaluru

Work from Office

Experienced in cloud operations with AWS services like Lambda, Glue, S3, Redshift, SNS, and SQS. Skilled in planning, designing, and developing cloud applications. Collaborates with engineering teams to design and deploy scalable enterprise-wide solutions.

Posted 1 week ago

Apply

3.0 - 8.0 years

12 - 22 Lacs

Gurugram

Work from Office

Experienced in cloud operations with AWS services like Lambda, Glue, S3, Redshift, SNS, and SQS. Skilled in planning, designing, and developing cloud applications. Collaborates with engineering teams to design and deploy scalable enterprise-wide solutions.

Posted 1 week ago

Apply

3.0 - 8.0 years

12 - 22 Lacs

Hyderabad

Work from Office

Experienced in cloud operations with AWS services like Lambda, Glue, S3, Redshift, SNS, and SQS. Skilled in planning, designing, and developing cloud applications. Collaborates with engineering teams to design and deploy scalable enterprise-wide solutions.

Posted 1 week ago

Apply

13.0 - 16.0 years

35 - 60 Lacs

Bengaluru

Hybrid

Role: Engineering Manager Job Summary : Lead and manage the Avis Budget Group India Data Platform Engineering team, providing platform development and operations support within a new centralized global data architecture and engineering framework. You will enforce best practices, governance, and drive automation, efficiency, and performance of the enterprise data platform. This role requires a resourceful individual, a persistent problem solver, people leader and a strong hands-on engineer who can move around various technology stacks with ease. What Youll Do: Lead configuration and design of database systems and data integration services, ensuring PII, GDPR, encryption, and security best practices. Develop and maintain robust data pipelines on AWS and OCI clouds for ingestion, transformation and storage. Implement and maintain IBM CDC target environments and Confluent Kafka clusters to meet strong performance requirements. Handle data migrations using AWS DMS, ensuring data encryption at rest and in motion with IBM Voltage. Drive POCs and rollout of cloud-based solutions with infrastructure-as-code (Terraform/CloudFormation) and CI/CD pipelines. Oversee data modelling standards, metadata management, and data quality practices across the platform. Provide L3 support for critical production systems, enhance observability stacks (Dynatrace, AWS CloudWatch) and enforce operational excellence. Manage, mentor and coach a team of Data Platform Engineers, fostering a collaborative environment for innovation and growth. Engage with global data teams and stakeholders to deliver secure, cost-optimized, and high-impact platform solutions. Skills you should bring to the table: 12+ years of experience in data platform engineering, including 5+ years in a people-management role. Advanced expertise in AWS services (RDS, Redshift, DMS, Lambda), OCI cloud data services and infrastructure-as-code. Proficiency with IBM CDC replication technologies, Confluent Kafka and IBM Voltage encryption solutions. Strong background in data modelling, ETL design, database tuning and big data solutions (NoSQL, streaming). Strong expertise in NoSQL database (preferably Couchbase) cluster management, including version upgrades, security patching, performance tuning, and ensuring high availability through node scaling and rebalancing. Hands-on experience with Python/Shell scripting, Terraform, CloudFormation and Jenkins-based CI/CD. Solid understanding of Linux/Unix environments, networking protocols and production system ownership. Excellent communication skills and ability to collaborate across IST and US East time zones.

Posted 2 weeks ago

Apply

6.0 - 11.0 years

0 - 2 Lacs

Chennai

Work from Office

Requirement 1: Skills: AWS Redshift dev with Apache Airflow Location: Chennai Experience: 8+ Years Work Mode: Hybrid. Role & responsibilities: Senior Data Engineer AWS Redshift & Apache Airflow Location: Chennai Experience Required: 8+ Years Job Summary We are seeking a highly experienced Senior Data Engineer to lead the design, development, and optimization of scalable data pipelines using AWS Redshift and Apache Airflow. The ideal candidate will have deep expertise in cloud-based data warehousing, workflow orchestration, and ETL processes, with a strong background in SQL and Python. Key Responsibilities Design, build, and maintain robust ETL/ELT pipelines using Apache Airflow. Integrate data from various sources into AWS Redshift for analytics and reporting. Develop and optimize Redshift schemas, tables, and queries. Monitor and tune Redshift performance for large-scale data operations. Implement and manage DAGs in Airflow for scheduling and monitoring data workflows. Ensure reliability and fault tolerance in data pipelines. Work closely with data scientists, analysts, and business stakeholders to understand data requirements. Translate business needs into technical solutions. Enforce data quality, integrity, and security best practices. Implement access controls and audit mechanisms using AWS IAM and related tools. Mentor junior engineers and promote best practices in data engineering. Stay updated with emerging technologies and recommend improvements. Required Skills & Qualifications Bachelors or Master’s degree in Computer Science, Information Technology, or related field. 8+ years of experience in data engineering, with a focus on AWS Redshift and Apache Airflow. Strong proficiency in AWS Services: Redshift, S3, Lambda, Glue, IAM. Proficient in Programming Languages: SQL, Python. Experience with ETL Tools & Frameworks: Apache Airflow, DBT (preferred). Experience with data modeling, performance tuning, and large-scale data processing. Familiarity with CI/CD pipelines and version control (Git). Excellent problem-solving and communication skills. Preferred Skills Experience with big data technologies (Spark, Hadoop). Knowledge of NoSQL databases (DynamoDB, Cassandra). AWS certification (e.g., AWS Certified Data Analytics – Specialty). Preferred candidate profile Regards, Bhavani Challa Sr. Talent Acquisition E: bhavani.challa@arisetg.com M: 9063067791 www.arisetg.com

Posted 2 weeks ago

Apply