
152 AWS Redshift Jobs - Page 6

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 - 5.0 years

12 - 14 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Role & responsibilities
Key Responsibilities:
Design, develop, and maintain data pipelines and ETL workflows on the AWS platform
Work with AWS services such as S3, Glue, Lambda, Redshift, EMR, and Athena for data ingestion, transformation, and analytics
Collaborate with data scientists, analysts, and business teams to understand data requirements
Optimize data workflows for performance, scalability, and reliability
Troubleshoot data issues, monitor jobs, and ensure data quality and integrity
Write efficient SQL queries and automate data processing tasks
Implement data security and compliance best practices
Maintain technical documentation and data pipeline monitoring dashboards
Required Skills:
3 to 5 years of hands-on experience as a Data Engineer on AWS Cloud
Strong expertise with AWS data services: S3, Glue, Redshift, Athena, EMR, Lambda
Proficient in SQL, Python, or Scala for data processing and scripting
Experience with ETL tools and frameworks on AWS
Understanding of data warehousing concepts and architecture
Familiarity with CI/CD for data pipelines is a plus
Strong problem-solving and communication skills
Ability to work in an Agile environment and handle multiple priorities
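For context, a minimal PySpark sketch of the kind of S3-based ETL step described in this listing; bucket names, paths, and column names are hypothetical placeholders, not the employer's actual pipeline:

```python
from pyspark.sql import SparkSession, functions as F

# Minimal illustrative ETL step: read raw CSV from S3, clean it, and write
# partitioned Parquet back to S3 for downstream Athena/Redshift Spectrum queries.
# Bucket names, paths, and columns are hypothetical placeholders.
spark = SparkSession.builder.appName("orders-etl").getOrCreate()

raw = (
    spark.read
    .option("header", "true")
    .csv("s3://example-raw-bucket/orders/")
)

clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("amount").cast("double") > 0)
)

(
    clean.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-curated-bucket/orders/")
)
```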

Posted 3 months ago

Apply

5.0 - 9.0 years

0 Lacs

Nagpur

Work from Office

Role & responsibilities
Job Role: AWS Data Engineer (L3)
Experience: 7+ years
Location: Nagpur
5+ years of microservices development experience in two of these: Python, Java, Scala
5+ years of experience building data pipelines, CI/CD pipelines, and fit-for-purpose data stores
5+ years of experience with Big Data technologies: Apache Spark, Hadoop, or Kafka
3+ years of experience with relational & non-relational databases: Postgres, MySQL, NoSQL (DynamoDB or MongoDB)
3+ years of experience working with data consumption patterns
3+ years of experience working with automated build and continuous integration systems
2+ years of experience with search and analytics platforms: OpenSearch or Elasticsearch
2+ years of experience in cloud technologies: AWS (Terraform, S3, EMR, EKS, EC2, Glue, Athena)
Exposure to data-warehousing products: Snowflake or Redshift
Exposure to relational data modelling, dimensional data modelling, and NoSQL data modelling concepts.

Posted 3 months ago

Apply

12.0 - 22.0 years

25 - 40 Lacs

Bangalore Rural, Bengaluru

Work from Office

Role & responsibilities
Requirements:
Data Modeling (Conceptual, Logical, Physical) - Minimum 5 years
Database Technologies (SQL Server, Oracle, PostgreSQL, NoSQL) - Minimum 5 years
Cloud Platforms (AWS, Azure, GCP) - Minimum 3 years
ETL Tools (Informatica, Talend, Apache NiFi) - Minimum 3 years
Big Data Technologies (Hadoop, Spark, Kafka) - Minimum 5 years
Data Governance & Compliance (GDPR, HIPAA) - Minimum 3 years
Master Data Management (MDM) - Minimum 3 years
Data Warehousing (Snowflake, Redshift, BigQuery) - Minimum 3 years
API Integration & Data Pipelines - Good to have
Performance Tuning & Optimization - Minimum 3 years
Business Intelligence (Power BI, Tableau) - Minimum 3 years
Job Description: We are seeking experienced Data Architects to design and implement enterprise data solutions, ensuring data governance, quality, and advanced analytics capabilities. The ideal candidate will have expertise in defining data policies, managing metadata, and leading data migrations from legacy systems to Microsoft Fabric/Databricks/… Experience and deep knowledge of at least one of these platforms is critical. Additionally, they will play a key role in identifying use cases for advanced analytics and developing machine learning models to drive business insights.
Key Responsibilities:
1. Data Governance & Management
Establish and maintain a data usage hierarchy to ensure structured data access.
Define data policies, standards, and governance frameworks to ensure consistency and compliance.
Implement data quality management practices to improve accuracy, completeness, and reliability.
Oversee metadata and Master Data Management (MDM) to enable seamless data integration across platforms.
2. Data Architecture & Migration
Lead the migration of data systems from legacy infrastructure to Microsoft Fabric.
Design scalable, high-performance data architectures that support business intelligence and analytics.
Collaborate with IT and engineering teams to ensure efficient data pipeline development.
3. Advanced Analytics & Machine Learning
Identify and define use cases for advanced analytics that align with business objectives.
Design and develop machine learning models to drive data-driven decision-making.
Work with data scientists to operationalize ML models and ensure real-world applicability.
Required Qualifications:
Proven experience as a Data Architect or similar role in data management and analytics.
Strong knowledge of data governance frameworks, data quality management, and metadata management.
Hands-on experience with Microsoft Fabric and data migration from legacy systems.
Expertise in advanced analytics, machine learning models, and AI-driven insights.
Familiarity with data modelling, ETL processes, and cloud-based data solutions (Azure, AWS, or GCP).
Strong communication skills with the ability to translate complex data concepts into business insights.
Preferred candidate profile: Immediate joiner

Posted 3 months ago

Apply

5 - 10 years

15 - 20 Lacs

Hyderabad

Work from Office

DBA Role:
Expertise in writing and optimizing queries for performance, including but not limited to Redshift/Postgres/SQL/BigQuery, and using query plans.
Expertise in database permissions, including but not limited to Redshift/BigQuery/Postgres/SQL/Windows AD.
Knowledge of database design; ability to work with data architects and other IT specialists to set up, maintain, and monitor data networks, storage, and metrics.
Expertise in backup and recovery, including AWS Redshift snapshot restores.
Redshift (provisioned and serverless) configuration and creation.
Redshift Workload Management and Redshift table statistics.
Experience working with third-party vendors; able to coordinate with third parties and internal stakeholders to troubleshoot issues.
Experience working with internal stakeholders and business partners on both long- and short-term projects related to efficiency, optimization, and cost reduction.
Expertise in database management best practices and IT security best practices.
Experience with the following is a plus:
Harness
Git
CloudWatch
Cloudability
Other monitoring dashboard configurations
Experience with a variety of computer information systems
Excellent attention to detail
Problem-solving and critical thinking
Ability to explain complex ideas in simple terms
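As an illustration of the Redshift snapshot-restore task mentioned above, a minimal boto3 sketch; the cluster, snapshot, and subnet group identifiers are hypothetical:

```python
import boto3

# Minimal sketch: restore a new Redshift cluster from an existing snapshot.
# Identifiers, region, and subnet group are hypothetical placeholders.
redshift = boto3.client("redshift", region_name="ap-south-1")

response = redshift.restore_from_cluster_snapshot(
    ClusterIdentifier="analytics-restored",
    SnapshotIdentifier="analytics-prod-2024-01-01",
    ClusterSubnetGroupName="analytics-subnet-group",
    PubliclyAccessible=False,
)

# Block until the restored cluster reports as available.
waiter = redshift.get_waiter("cluster_available")
waiter.wait(ClusterIdentifier="analytics-restored")
print(response["Cluster"]["ClusterStatus"])
```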

Posted 3 months ago

Apply

5 - 9 years

12 - 22 Lacs

Hyderabad, Pune, Bengaluru

Hybrid

AWS Data Engineer
To apply, use the link below: https://career.infosys.com/jobdesc?jobReferenceCode=INFSYS-EXTERNAL-210775&rc=0
Job Profile:
5 to 9 years of experience in designing and implementing scalable data engineering solutions on AWS.
Strong proficiency in the Python programming language.
Expertise in serverless architecture and AWS services such as Lambda, Glue, Redshift, Kinesis, SNS, SQS, and CloudFormation.
Experience with Infrastructure as Code (IaC) using AWS CDK for defining and provisioning AWS resources.
Proven leadership skills with the ability to mentor and guide junior team members.
Excellent understanding of data modeling concepts and experience with tools like ER/Studio.
Strong communication and collaboration skills, with the ability to work effectively in a cross-functional team environment.
Experience with Apache Airflow for orchestrating data pipelines is a plus.
Knowledge of Data Lakehouse, dbt, or the Apache Hudi data format is a plus.
Roles and Responsibilities:
Design, develop, test, deploy, and maintain large-scale data pipelines using AWS services such as S3, Glue, Lambda, and Redshift.
Collaborate with cross-functional teams to gather requirements and design solutions that meet business needs.
Desired Candidate Profile:
5-9 years of experience in an IT industry setting with expertise in the Python programming language (PySpark).
Strong understanding of the AWS ecosystem, including S3, Glue, Lambda, and Redshift.
Bachelor's degree in any specialization (B.Tech/B.E.).
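For the Infrastructure-as-Code point above, a minimal AWS CDK (Python) sketch that provisions a bucket and a Lambda function; the stack name, resources, and asset path are hypothetical, not the team's actual setup:

```python
from aws_cdk import App, Stack, aws_s3 as s3, aws_lambda as _lambda
from constructs import Construct

# Minimal CDK sketch: an S3 landing bucket plus a Lambda that could process new objects.
# Stack name, resource ids, and the handler code path are hypothetical placeholders.
class DataPipelineStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        landing_bucket = s3.Bucket(self, "LandingBucket", versioned=True)

        _lambda.Function(
            self,
            "IngestFunction",
            runtime=_lambda.Runtime.PYTHON_3_11,
            handler="handler.main",
            code=_lambda.Code.from_asset("lambda_src"),
            environment={"BUCKET_NAME": landing_bucket.bucket_name},
        )

app = App()
DataPipelineStack(app, "DataPipelineStack")
app.synth()
```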

Posted 4 months ago

Apply

5 - 10 years

20 - 35 Lacs

Hyderabad, Pune, Bengaluru

Hybrid

To complement the existing cross-functional team, Zensar is looking for a Data Engineer who will assist in designing and implementing scalable and robust processes to support the data engineering capability. This role will be responsible for implementing and supporting large-scale data ecosystems across the Group. The incumbent will use best practices in cloud engineering, data management, and data storage to continue our drive to optimize the way that data is stored, consumed, and ultimately democratized. The incumbent will also engage with stakeholders across the organization, using data engineering practices to improve the way that data is stored and consumed.
Role & responsibilities
Assist in designing and implementing scalable and robust processes for ingesting and transforming complex datasets.
Design, develop, construct, maintain, and support data pipelines for ETL from a multitude of sources.
Create blueprints for data management systems to centralize, protect, and maintain data sources.
Focus on data stewardship and curation, enabling data scientists to run their models and analyses to achieve the desired business outcomes.
Ingest large, complex data sets that meet functional and non-functional requirements.
Enable the business to work with large volumes of data in diverse formats, and in doing so, enable innovative solutions.
Design and build bulk and delta data lift patterns for optimal extraction, transformation, and loading of data (see the sketch after this listing).
Support the organisation's cloud strategy and align with the data architecture and governance, including the implementation of these data governance practices.
Engineer data in the appropriate formats for downstream customers, risk and product analytics, or enterprise applications.
Assist in identifying, designing, and implementing robust process improvement activities to drive efficiency and automation for greater scalability, including evaluating new solutions and new ways of working and staying at the forefront of emerging technologies.
Work with various stakeholders across the organization to understand data requirements and apply technical knowledge of data management to solve key business problems.
Provide support in the operational environment with all relevant support teams for data services.
Provide input into the management of demand across the various data streams and use cases.
Create and maintain functional requirements and system specifications in support of data architecture and detailed design specifications for current and future designs.
Support the test and deployment of new services and features.
Provide technical leadership to junior data engineers in the team.
Preferred candidate profile
A degree in Computer Science, Business Informatics, Mathematics, Statistics, Physics, or Engineering.
3+ years of data engineering experience.
3+ years of experience with data warehouse technical architectures, ETL/ELT, and reporting/analytics tools, including but not limited to any of the following combinations: (1) SSIS, SSRS, or similar; (2) ETL frameworks; (3) Spark; (4) AWS data builds.
Proficiency in at least one of Python or Java.
Some experience with R, AWS, XML, JSON, and cron will be beneficial.
Experience with designing and implementing cloud (AWS) solutions, including use of the available APIs.
Knowledge of engineering and operational excellence using standard methodologies.
Best practices in software engineering, data management, data storage, data computing, and distributed systems to solve business problems with data.
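A minimal PySpark sketch contrasting a bulk load with a delta (incremental) load keyed on an updated-at column, as referenced above; table paths, the watermark column, and the hard-coded timestamp are hypothetical:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bulk-vs-delta-load").getOrCreate()

# Bulk load: replace the curated copy wholesale (simple, but expensive at scale).
full = spark.read.parquet("s3://example-raw/customers/")
full.write.mode("overwrite").parquet("s3://example-curated/customers/")

# Delta load: pick up only rows changed since the last successful run.
# The watermark would normally come from a control table; hard-coded here.
last_run_ts = "2024-01-01 00:00:00"
changed = (
    spark.read.parquet("s3://example-raw/customers/")
         .filter(F.col("updated_at") > F.lit(last_run_ts))
)
changed.write.mode("append").parquet("s3://example-curated/customers_changes/")
```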

Posted 4 months ago

Apply

6 - 10 years

15 - 27 Lacs

Noida, Hyderabad, Bengaluru

Work from Office

Job Description:
1. The candidate should have good experience in all the functionalities of Dataiku.
2. Should have previous exposure to handling large data sets using Dataiku, and to preparing and calculating data.
3. Should be able to write queries to extract and connect from RDBMS/data lake and any other manual datasets.
4. Most importantly, should be able to understand existing developments and take over with minimal handover.
5. Must be an expert in Excel as well, given that most of the information produced is furnished in Excel at the right level of detail to stakeholders for validation and discussion.
6. Must have an eye for accuracy, ensuring the flows are robust.
7. Banking process knowledge is good to have.
Note: Kindly go through the JD and apply accordingly; this is for PAN-India hiring.

Posted 4 months ago

Apply

6 - 10 years

15 - 22 Lacs

Noida, Hyderabad, Bengaluru

Work from Office

AWS Data Engineer with hands-on experience in Amazon Redshift and EMR, responsible for building scalable data pipelines and managing big data processing workloads. The role requires strong skills in Spark, Hive, and S3 on AWS cloud infrastructure.
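A minimal PySpark sketch of the kind of EMR workload described, reading events from S3 and registering a Hive table for downstream querying; the paths, database, and columns are hypothetical:

```python
from pyspark.sql import SparkSession, functions as F

# Minimal sketch for an EMR-style job: aggregate raw events from S3 and
# register the result as a Hive table. Paths, database, and column names
# are hypothetical placeholders.
spark = (
    SparkSession.builder
    .appName("events-daily-agg")
    .enableHiveSupport()
    .getOrCreate()
)

events = spark.read.parquet("s3://example-datalake/raw/events/")

daily = (
    events.groupBy(F.to_date("event_ts").alias("event_date"), "event_type")
          .agg(F.count("*").alias("event_count"))
)

(
    daily.write
    .mode("overwrite")
    .format("parquet")
    .saveAsTable("analytics.events_daily")
)
```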

Posted 4 months ago

Apply

3 - 8 years

3 - 8 Lacs

Hyderabad

Work from Office

Name of Organization: Jarus Technologies (India) Pvt. Ltd.
Organization Website: www.jarustech.com
Position: Senior Software Engineer - Data Warehouse
Domain Knowledge: Insurance (Mandatory)
Job Type: Permanent
Location: Hyderabad - IDA Cherlapally, ECIL and Divyasree Trinity, Hi-Tech City
Experience: 3+ years
Education: B.E. / B.Tech. / M.C.A.
Resource Availability: Immediately, or within a maximum period of 30 days
Technical Skills:
• Strong knowledge of data warehousing concepts and technologies.
• Proficiency in SQL and other database languages.
• Experience with ETL tools (e.g., Informatica, Talend, SSIS).
• Familiarity with data modelling techniques.
• Experience in building dimensional data modelling objects, dimensions, and facts.
• Experience with cloud-based data warehouse platforms (e.g., AWS Redshift, Azure Synapse, Google BigQuery).
• Familiarity with optimizing SQL queries and improving ETL processes for better performance.
• Knowledge of data transformation, cleansing, and validation techniques.
• Experience with incremental loads, change data capture (CDC), and data scheduling.
• Comfortable with version control systems like Git.
• Familiar with BI tools like Power BI for visualization and reporting.
Responsibilities:
• Design, develop, and maintain data warehouse systems and ETL (Extract, Transform, Load) processes.
• Develop and optimize data models and schemas to support business needs.
• Design and implement data warehouse architectures, including physical and logical designs.
• Design and develop dimensions, facts, and bridges.
• Ensure data quality and integrity throughout the ETL process.
• Design and implement relational and multidimensional database structures.
• Understand data structures and fundamental design principles of data warehouses.
• Analyze and modify data structures to adapt them to business needs.
• Identify and resolve data quality issues and data warehouse problems.
• Debug ETL processes and data warehouse queries.
Communication skills:
• Good communication skills to interact with customers.
• Ability to understand requirements for implementing an insurance warehouse system.

Posted 4 months ago

Apply

6 - 10 years

15 - 20 Lacs

Gurugram

Remote

Title: Looker Developer
Team: Data Engineering
Work Mode: Remote
Shift Time: 3:00 PM - 12:00 AM IST
Contract: 12 months
Key Responsibilities
Collaborate closely with engineers, architects, business analysts, product owners, and other team members to understand requirements and develop test strategies.
LookML Proficiency: LookML is Looker's proprietary language for defining data models. Looker developers need to be able to write, debug, and maintain LookML code to create and manage data models, explores, and dashboards.
Data Modeling Expertise: Understanding how to structure and organize data within Looker is essential. This involves mapping database schemas to LookML, creating views, and defining measures and dimensions.
SQL Knowledge: Looker generates SQL queries under the hood. Developers need to be able to write SQL to understand the data, debug queries, and potentially extend LookML with custom SQL.
Looker Environment: Familiarity with the Looker interface, including the IDE, LookML Validator, and SQL Runner, is necessary for efficient development.
Education and/or Experience
Bachelor's degree in MIS, Computer Science, Information Technology, or equivalent required
6+ years of IT industry experience in the data management field.

Posted 4 months ago

Apply

8 - 13 years

30 - 40 Lacs

Bengaluru

Hybrid

Key Responsibilities:
Develop & Optimize Data Pipelines: Architect, build, and enhance scalable data pipelines for high-performance processing.
Troubleshoot & Sustain: Identify, diagnose, and resolve data pipeline issues to ensure operational efficiency.
Data Architecture & Storage: Design efficient data storage and retrieval strategies using Postgres, Redshift, and other databases.
CI/CD Pipeline Management: Implement and maintain continuous integration and deployment strategies for smooth workflow automation.
Scalability & Performance Tuning: Ensure the robustness of data solutions while optimizing performance at scale.
Collaboration & Leadership: Work closely with cross-functional teams to ensure seamless data flow and lead engineering best practices.
Security & Reliability: Establish governance protocols and ensure data integrity across all pipelines.
Technical Skills Required:
Programming: Expert in Python and Scala
Big Data Technologies: Proficient in Spark and Kafka (see the sketch after this listing)
DevOps & Cloud Infrastructure: Strong understanding of Kubernetes
SQL & Database Management: Skilled in SQL administration, Postgres, Redshift
CI/CD Implementation: Experience in automating deployment processes for efficient workflow
Job Location: Bangalore
Notice Period: Immediate to 15 days
Interested candidates can share their profiles to marygracy.antony@ilink-systems.com
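A minimal PySpark Structured Streaming sketch of consuming a Kafka topic and landing it as Parquet, in the spirit of the Spark/Kafka skills above; broker addresses, the topic name, and output paths are hypothetical:

```python
from pyspark.sql import SparkSession, functions as F

# Minimal sketch: stream events from Kafka and land them as Parquet files.
# Broker list, topic name, and sink/checkpoint paths are hypothetical placeholders.
spark = SparkSession.builder.appName("kafka-to-parquet").getOrCreate()

raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker-1:9092,broker-2:9092")
    .option("subscribe", "orders")
    .option("startingOffsets", "latest")
    .load()
)

events = raw.select(
    F.col("key").cast("string"),
    F.col("value").cast("string").alias("payload"),
    "timestamp",
)

query = (
    events.writeStream
    .format("parquet")
    .option("path", "s3://example-stream-sink/orders/")
    .option("checkpointLocation", "s3://example-stream-sink/_checkpoints/orders/")
    .outputMode("append")
    .start()
)

query.awaitTermination()
```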

Posted 4 months ago

Apply

4.0 - 9.0 years

6 - 16 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

Overview: As a Data Engineer, you will work with multiple teams to deliver solutions on the AWS Cloud using core cloud data engineering tools such as Databricks on AWS, AWS Glue, Amazon Redshift, Athena, and other Big Data-related technologies. This role focuses on building the next generation of application-level data platforms and improving recent implementations. Hands-on experience with Apache Spark (PySpark, Spark SQL), Delta Lake, Iceberg, and Databricks is essential.
Responsibilities:
• Design and develop data lakes; manage data flows that integrate information from various sources into a common data lake platform through an ETL tool
• Code and manage delta lake implementations on S3 using technologies like Databricks or Apache Hudi (see the sketch after this listing)
• Triage, debug, and fix technical issues related to data lakes
• Design and develop data warehouses for scale
• Design and evaluate data models (Star, Snowflake, and flattened)
• Design data access patterns for OLTP- and OLAP-based transactions
• Coordinate with business and technical teams through all phases of the software development life cycle
• Participate in making major technical and architectural decisions
• Maintain and manage code repositories like Git
Must Have:
• 5+ years of experience operating on AWS Cloud, building data lake architectures
• 3+ years of experience with AWS data services such as S3, Glue, Lake Formation, EMR, Kinesis, RDS, DMS, and Redshift
• 3+ years of experience building data warehouses on Snowflake, Redshift, HANA, Teradata, Exasol, etc.
• 3+ years of working knowledge of Spark
• 3+ years of experience building delta lakes using technologies like Apache Hudi or Databricks
• 3+ years of experience working with ETL tools and technologies
• 3+ years of experience in a programming language (Python, R, Scala, Java)
• Bachelor's degree in computer science, information technology, data science, data analytics, or a related field
• Experience working on Agile projects and with Agile methodology in general
Good To Have:
• Strong understanding of RDBMS principles and advanced data modelling techniques.
• AWS cloud certification (e.g., AWS Certified Data Analytics - Specialty) is a strong plus.
Key Skills:
• Languages: Python, SQL, PySpark
• Big Data Tools: Apache Spark, Databricks, Apache Hudi
• Databricks on AWS
• AWS Services: S3, Glue, Lake Formation, EMR, Kinesis, RDS, DMS, Redshift
• Data Warehouses: Snowflake, Redshift, HANA, Teradata, Exasol
• Data Modelling: Star Schema, Snowflake Schema, Flattened Models
• DevOps & CI/CD: Git, Agile Methodology, ETL Methodology
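A minimal Delta Lake (PySpark) sketch of the kind of upsert into an S3 delta table mentioned above; it assumes a Spark session already configured with the delta-spark extensions, and the paths and key columns are hypothetical:

```python
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

# Minimal sketch: upsert (merge) incoming change records into a Delta table on S3.
# Assumes Delta Lake is configured on the cluster; paths and key columns are
# hypothetical placeholders.
spark = SparkSession.builder.appName("delta-upsert").getOrCreate()

updates = spark.read.parquet("s3://example-staging/customers_changes/")

target = DeltaTable.forPath(spark, "s3://example-lake/delta/customers")

(
    target.alias("t")
    .merge(updates.alias("u"), "t.customer_id = u.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```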

Posted Date not available

Apply

6.0 - 11.0 years

13 - 18 Lacs

Pune

Hybrid

Who are we looking for?
We are looking for a Databricks engineer (developer) with strong software development experience of 6 to 10 years in Apache Spark and Scala.
Technical Skills:
Strong knowledge and hands-on experience in Apache Spark and Scala
Experience with AWS S3, Redshift, EC2, and Lambda services
Extensive experience in developing and deploying big data pipelines
Experience with Azure Data Lake
Strong hands-on SQL development (including Azure SQL) and an in-depth understanding of optimization and tuning techniques in SQL with Redshift
Development in notebooks (such as Jupyter, Databricks, Zeppelin)
Development experience in Spark
Experience in a scripting language like Python and any other programming language
Roles and Responsibilities:
Candidate must have hands-on experience in AWS Databricks
Good development experience using Python/Scala, Spark SQL, and DataFrames
Hands-on experience with Databricks and Data Lake, plus SQL knowledge, is a must
Performance tuning, troubleshooting, and debugging of Spark
Process Skills: Agile (Scrum)
Qualification: Bachelor of Engineering (computer background preferred)

Posted Date not available

Apply

4.0 - 9.0 years

7 - 12 Lacs

Hyderabad

Work from Office

Key responsibilities include:
• Maintain and refine straightforward ETL; write secure, stable, testable, maintainable code with minimal defects; and automate manual processes.
• Proficiency in one or more industry analytics visualization tools (e.g., Excel, Tableau, QuickSight, Power BI) and, as needed, statistical methods (e.g., t-test, Chi-squared) to deliver actionable insights to stakeholders (see the sketch after this listing).
• Build and own small to mid-size BI solutions with high accuracy and on-time delivery, using data sets, queries, reports, dashboards, analyses, or components of larger solutions to answer straightforward business questions with data, incorporating business intelligence best practices, data management fundamentals, and analysis principles.
• Maintain a good understanding of the relevant data lineage: the sources of data, how metrics are aggregated, and how the resulting business intelligence is consumed, interpreted, and acted upon by the business, so that the end product enables effective, data-driven business decisions.
• Take responsibility for the code, queries, reports, and analyses that are inherited or produced, and have analyses and code reviewed periodically.
• Partner effectively with peer BIEs and others in your team to troubleshoot, research root causes, and propose solutions, either taking ownership of their resolution or ensuring a clear hand-off to the right owner.
About the team: The Global Operations Artificial Intelligence (GO-AI) team is an initiative which remotely handles exceptions in Amazon Robotic Fulfillment Centers globally. GO-AI seeks to complement automated vision-based decision-making technologies by providing remote human support for the subset of tasks which require higher cognitive ability and cannot be processed through automated decision making with high confidence. This team provides end-to-end solutions through inbuilt competencies of Operations and strong central specialized teams to deliver programs at Amazon scale. It is operating multiple programs including Nike IDS, Proteus, Sparrow, and other new initiatives in partnership with global technology and operations teams.
Basic qualifications:
4+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
Experience building and maintaining basic data artifacts (e.g., ETL, data models, queries)
Experience with one or more industry analytics visualization tools (e.g., Excel, Tableau, QuickSight, MicroStrategy, Power BI) and statistical methods (e.g., t-test, Chi-squared)
Experience with a scripting language (e.g., Python, Java, or R)
Experience applying basic statistical methods (e.g., regression) to difficult business problems
Preferred qualifications:
Master's degree or advanced technical degree
Experience with statistical and correlation analysis
Knowledge of how to improve code quality and optimize BI processes (e.g., speed, cost, reliability)
Excellence in technical communication with peers, partners, and non-technical cohorts
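As a small illustration of the statistical methods mentioned (t-test, Chi-squared), a sketch using pandas and SciPy; all the data below is made up for demonstration:

```python
import numpy as np
import pandas as pd
from scipy import stats

# Hypothetical example: compare handling times between two fulfillment sites
# (two-sample t-test) and test whether defect category depends on shift
# (Chi-squared test of independence). All data below is synthetic.
rng = np.random.default_rng(42)
site_a = rng.normal(loc=120, scale=15, size=200)   # seconds per task
site_b = rng.normal(loc=126, scale=15, size=200)

t_stat, t_pvalue = stats.ttest_ind(site_a, site_b, equal_var=False)
print(f"t-test: t={t_stat:.2f}, p={t_pvalue:.4f}")

defects = pd.DataFrame(
    {"day_shift": [34, 12, 9], "night_shift": [28, 21, 15]},
    index=["mislabel", "damage", "missing_item"],
)
chi2, chi_pvalue, dof, _ = stats.chi2_contingency(defects)
print(f"chi-squared: chi2={chi2:.2f}, dof={dof}, p={chi_pvalue:.4f}")
```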

Posted Date not available

Apply

8.0 - 12.0 years

5 - 15 Lacs

Pune

Work from Office

Architect and Design: AWS services and ETL processes. Proficiency in SQL and experience with relational databases (e.g., MySQL, PostgreSQL). Strong programming skills in Python. Data modeling, data warehousing, and big data technologies.

Posted Date not available

Apply

8.0 - 13.0 years

15 - 30 Lacs

Mangaluru, Mysuru, Bengaluru

Hybrid

Analysis of the Snaplogic Lead Role
1. What does the work look like for someone in this job?
As a Snaplogic Lead, your primary responsibilities will include:
Designing and developing ETL pipelines using Snaplogic to integrate data across multiple systems.
Collaborating with cross-functional teams (technical & non-technical) to ensure smooth data flow between upstream/downstream systems.
Optimizing Snaplogic workflows by leveraging its features for performance and efficiency.
Working with databases (Redshift, Oracle, or similar) in BI/DW environments.
Troubleshooting and resolving technical issues related to data integration.
Scripting and automation (Python, Unix) to enhance Snaplogic workflows.
Managing AWS and Big Data technologies for data processing and transformation.
Ensuring best practices in ETL development while maintaining scalability and reliability.
2. What are the main problems or pain points they are looking to solve with this hire?
The company is likely facing:
Complex data integration challenges across multiple systems (SAP, Oracle ERP, APIs, etc.).
Performance bottlenecks in ETL processes that need optimization.
Lack of standardization in Snaplogic implementations, requiring an expert to enforce best practices.
Cross-team collaboration issues, needing someone who can bridge technical and business teams.
Scalability concerns with growing data volumes, requiring AWS/Big Data expertise.
3. Are there any unwritten requirements they will most likely have?
While not explicitly stated, they likely expect:
Leadership experience guiding junior developers or leading Snaplogic projects.
Strong problem-solving skills: the ability to debug complex data pipelines.
Proactive attitude: identifying inefficiencies and suggesting improvements.
Business acumen: understanding how data flows impact business decisions.
Cloud expertise beyond AWS: familiarity with Azure/GCP could be a plus.
4. What skills do I need to have already to do well in this role?
Must-have skills:
7+ years of Snaplogic development (pipelines, optimization, troubleshooting).
Strong ETL knowledge (data mapping, transformations, error handling).
Database expertise (Redshift, Oracle, or similar in BI/DW contexts).
Cross-functional collaboration (working with tech & business teams).
AWS & Big Data exposure (data processing, transformations).
Good-to-have skills (will give you an edge):
Python/Unix scripting (1+ year experience).
Knowledge of other ETL tools (Informatica, Talend, etc.).
SAP/Oracle ERP integrations with Snaplogic.
API connectivity experience.
5. What key skills should be highlighted in the CV for this role?
To stand out, structure your CV to emphasize:
Snaplogic Expertise: highlight complex Snaplogic projects, optimizations, and integrations.
ETL & Data Integration: showcase experience in designing efficient ETL workflows.
Database & BI/DW Knowledge: mention Redshift/Oracle deployments in data warehousing.
Cloud & Big Data: AWS, data processing, and transformation flows.
Scripting & Automation: Python/Unix scripting experience (if applicable).
Leadership & Collaboration: instances where you led teams or worked across functions.
Problem-Solving: examples of resolving Snaplogic performance issues or system integrations.
Final Recommendation: This role is for a Snaplogic expert who can design, optimize, and troubleshoot data pipelines while collaborating across teams. If you have strong Snaplogic + ETL + database experience, along with AWS/Big Data exposure, you're a great fit.

Posted Date not available

Apply

5.0 - 10.0 years

6 - 16 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Dear Candidate,
This is with reference to your profile on the job portal, which was shortlisted in RD1. Deloitte India Consulting has an immediate requirement for the following role.
Notice period: Looking for immediate joiners, 4 weeks maximum
Location: Any
Job Description
Skill: AWS Data Engineer
In case you are interested, please share your updated resume along with the following details (mandatory) to smouni@deloitte.com:
Candidate Name
Mobile No.
Email ID
Skill
Total Experience
Education Details
Current Location
Requested Location
Current Firm
Current CTC
Expected CTC
Notice Period/LWD
Feedback
Mounika S
Consultant | Talent Team

Posted Date not available

Apply

5.0 - 10.0 years

8 - 18 Lacs

Hyderabad

Work from Office

Job Title: Data Engineer
Employment Type: Full-time (On-site)
Payroll: BCT Consulting Pvt Ltd
Work Location: Hyderabad (Work from Office, Monday to Friday, General Shift)
Experience Required: 5+ Years
Joining Mode: Permanent with BCT Consulting Pvt Ltd, deployed at Amazon
About the Role:
We are seeking a highly skilled and motivated Data Engineer with strong expertise in SQL, Python, Big Data technologies, AWS, Airflow, and Redshift. The ideal candidate will play a key role in building and optimizing data pipelines, ensuring data integrity, and enabling scalable data solutions across the organization.
Key Responsibilities:
Design, develop, and maintain scalable data pipelines using Python and SQL.
Work with Big Data technologies to process and manage large datasets efficiently.
Implement and manage workflows using Apache Airflow (see the sketch after this listing).
Develop and optimize data models and queries in Amazon Redshift.
Collaborate with cross-functional teams to understand data requirements and deliver solutions.
Ensure data quality, consistency, and security across all data platforms.
Monitor and troubleshoot data pipeline performance and reliability.
Leverage AWS services (S3, Lambda, Glue, EMR, etc.) for cloud-native data engineering solutions.
Required Skills & Qualifications:
5+ years of experience in Data Engineering.
Strong proficiency in SQL and Python.
Hands-on experience with Big Data tools (e.g., Spark, Hadoop).
Expertise in AWS cloud services related to data engineering.
Experience with Apache Airflow for workflow orchestration.
Solid understanding of Amazon Redshift and data warehousing concepts.
Excellent problem-solving and communication skills.
Ability to work in a fast-paced, collaborative environment.
Nice to Have:
Experience with CI/CD pipelines and DevOps practices.
Familiarity with data governance and compliance standards.
Perks & Benefits:
Opportunity to work on cutting-edge data technologies.
Collaborative and innovative work culture.
Immediate joining preferred.
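A minimal Apache Airflow sketch of the kind of workflow orchestration mentioned above; the DAG id, schedule, and task bodies are hypothetical placeholders:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

# Minimal sketch of a daily pipeline: extract to S3, then load into Redshift.
# DAG id, schedule, and the function bodies are hypothetical placeholders.
def extract_to_s3(**context):
    print("extract source data and land it in S3")

def load_to_redshift(**context):
    print("COPY the landed files into Redshift staging tables")

with DAG(
    dag_id="daily_orders_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_to_s3", python_callable=extract_to_s3)
    load = PythonOperator(task_id="load_to_redshift", python_callable=load_to_redshift)

    extract >> load
```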

Posted Date not available

Apply

7.0 - 11.0 years

7 - 17 Lacs

Pune

Remote

Requirements for the candidate:
Data Engineer with a minimum of 7+ years of data engineering experience. The role will require deep knowledge of data engineering techniques to create data pipelines and build data assets.
At least 4+ years of strong hands-on programming experience with PySpark / Python / Boto3, including Python frameworks and libraries, following Python best practices.
Strong experience in code optimization using Spark SQL and PySpark.
Understanding of code versioning, Git repositories, and JFrog Artifactory.
AWS architecture knowledge, especially of S3, EC2, Lambda, Redshift, CloudFormation, etc., and the ability to explain the benefits of each.
Code refactoring of a legacy codebase: clean, modernize, and improve readability and maintainability.
Unit tests/TDD: write tests before code, ensure functionality, and catch bugs early.
Fixing difficult bugs: debug complex code, isolate issues, and resolve performance, concurrency, or logic flaws.
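For the unit-testing/TDD point above, a small pytest-style sketch; the transformation helper and its rules are hypothetical, purely to show the test-first shape:

```python
# test_transforms.py -- a pytest-style sketch for the TDD point above.
# The transform function and its rules are hypothetical placeholders.
import pytest

def normalize_amount(raw: str) -> float:
    """Strip commas and currency symbols and return a float; reject empty input."""
    cleaned = raw.replace(",", "").replace("₹", "").strip()
    if not cleaned:
        raise ValueError("empty amount")
    return float(cleaned)

def test_normalize_amount_strips_formatting():
    assert normalize_amount("₹1,250.50") == 1250.50

def test_normalize_amount_rejects_empty():
    with pytest.raises(ValueError):
        normalize_amount("   ")
```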

Posted Date not available

Apply

6.0 - 11.0 years

0 - 1 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Role & responsibilities

Posted Date not available

Apply

9.0 - 14.0 years

30 - 40 Lacs

Pune, Bengaluru, Mumbai (All Areas)

Hybrid

Data Engineer / Data Analyst (Hybrid Role), 10+ Yrs
Data Analysis: Pandas, NumPy, SQL, EDA techniques
Data Engineering: ETL pipelines, Airflow, Spark
Cloud: GCP BigQuery, AWS Redshift
Statistical Insight: Ability to assess data quality and completeness
Good to have:
Metadata & Standards: DCAT, schema.org, Croissant
Zero-Copy Architecture: Real-time data linking without duplication
Keywords: data engineer, data analyst, Pandas, NumPy, SQL, EDA, Airflow, Spark, GCP, BigQuery, AWS, Redshift
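A small pandas sketch of the kind of EDA and data-quality assessment implied above; the file path and column names are hypothetical:

```python
import pandas as pd

# Minimal EDA / data-quality sketch: profile a hypothetical orders extract.
# File path and column names are made-up placeholders.
df = pd.read_csv("orders_extract.csv", parse_dates=["order_date"])

# Completeness: share of missing values per column.
print(df.isna().mean().sort_values(ascending=False))

# Basic distribution and a simple IQR-based outlier check on the order amount.
print(df["amount"].describe())
iqr = df["amount"].quantile(0.75) - df["amount"].quantile(0.25)
upper_fence = df["amount"].quantile(0.75) + 1.5 * iqr
print("potential outliers:", int((df["amount"] > upper_fence).sum()))

# Duplicate keys and a simple consistency check.
print("duplicate order ids:", int(df["order_id"].duplicated().sum()))
print("negative amounts:", int((df["amount"] < 0).sum()))
```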

Posted Date not available

Apply

4.0 - 9.0 years

20 - 35 Lacs

Bengaluru

Hybrid

Role: Sr. Data Engineer
Location: Bangalore
Experience: 4-12 Years
About Company
The organisation is a global technology consulting and software development firm headquartered in Bangalore, India. It specialises in providing end-to-end solutions in custom software development, enterprise applications, cloud services, and product engineering. It offers a collaborative and growth-oriented work culture, encouraging continuous learning, innovation, and opportunities to work on international projects using the latest technologies.
About the role
The Data Engineering Lead will be responsible for managing and leading a team of engineers to design, build, and maintain data pipelines, data warehouses, and ETL solutions. They will work closely with various teams to create scalable solutions, incubate data-driven ideas, and provide technical guidance to the teams. As a subject matter expert, they will provide training, coaching, and mentoring for the team.
Qualifications:
At least 4 years of experience in Data Engineering, with exposure to big data technologies like Hadoop, Spark, and NoSQL databases
Hands-on experience in designing and developing large-scale data processing systems, including ETL pipelines, data warehousing, and business intelligence reports
Experience in managing and leading a team and working with cross-functional and geographically distributed teams
Experience in Agile methodologies and the ability to evangelize best practices in Data Engineering
Experience in programming languages such as Python, Java, and SQL
Experience in PySpark, Airflow, Redshift, Pandas, NumPy, Databricks, Lakehouse, etc.
Experience with cloud infrastructure and/or containerization technologies is preferred
Excellent written and verbal communication skills with proficiency in English

Posted Date not available

Apply

9.0 - 14.0 years

0 - 1 Lacs

Chennai, Coimbatore, Bengaluru

Work from Office

Role & responsibilities
Design, develop, and maintain scalable, reliable, and high-performance data pipelines and ETL processes.
Work extensively with AWS cloud services to build secure and optimized data solutions.
Leverage PySpark for large-scale data processing and transformation.
Collaborate with Data Architects, Analysts, and Business stakeholders to define data requirements and implement effective solutions.
Ensure data quality, governance, and compliance across all data systems.
Optimize data workflows for performance, scalability, and cost efficiency.
Troubleshoot and resolve data pipeline and system issues.
Mentor junior engineers and contribute to best practices, code reviews, and knowledge sharing within the team.
Primary Skills
Strong expertise in Data Engineering concepts, architecture, and frameworks.
Hands-on experience with AWS services (S3, Redshift, Glue, EMR, Lambda, etc.).
Proficiency in PySpark and distributed data processing.
Solid understanding of data modeling, warehousing, and ETL design.
Experience working with structured and unstructured data.
Strong problem-solving skills with a focus on performance and scalability.
Excellent communication and collaboration skills.
Preferred Qualifications
Experience with Databricks or similar cloud-based big data platforms.
Knowledge of SQL and database optimization techniques.
Exposure to CI/CD pipelines and DevOps practices for data engineering.

Posted Date not available

Apply

5.0 - 8.0 years

5 - 15 Lacs

Hyderabad

Work from Office

Role & responsibilities
Key Responsibilities:
Coding proficiency in SQL and Python/PySpark
AWS Glue, Redshift, Lambda (see the sketch after this listing)
Data engineering (designing ETL workflows, understanding of DB concepts/architecture, handling big data volumes)
Performance (measurement, indexes, partitioning, and tuning)
Data modeling and design
Preferred candidate profile
Job Description:
Job Title: Data Engineer
Experience: 5+ Years
Location: Hyderabad
Job Type: Full-Time
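A minimal AWS Glue (PySpark) job skeleton in the spirit of the Glue/ETL skills above; the catalog database, table, and output path are hypothetical placeholders:

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Minimal AWS Glue job skeleton: read from the Glue Data Catalog, deduplicate,
# and write Parquet to S3. Database, table, and path names are hypothetical.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

source = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders"
)

orders = source.toDF().dropDuplicates(["order_id"])

orders.write.mode("overwrite").parquet("s3://example-curated/orders/")

job.commit()
```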

Posted Date not available

Apply

5.0 - 10.0 years

17 - 25 Lacs

Noida, Hyderabad, Chennai

Hybrid

Strong experience using AWS: AppFlow, S3, Athena, Lambda, RDS, EventBridge, Lake Formation, Apache, SNS, CloudFormation, Secrets Manager, Glue, and Glue (PySpark).
Proficiency with SQL, Python, and Snowflake.
Strong technical skills in services such as AppFlow, S3, Athena, Lambda, RDS, EventBridge, Lake Formation, Apache, SNS, CloudFormation, Secrets Manager, Glue and Glue (PySpark), plus SQL, data warehousing, Informatica, and Oracle.
Knowledge of data warehousing concepts is essential, and prior experience with Informatica PowerCenter and Oracle Exadata will prove useful.
The ability to clearly summarize the methodology and key points of a program/report in technical documentation/specifications is also required.

Posted Date not available

Apply