3.0 - 6.0 years
12 - 22 Lacs
Pune
Work from Office
We are looking for data engineers who have the right attitude, aptitude, skills, empathy, compassion, and hunger for learning to build products in the data analytics space. You should bring a passion for shipping high-quality data products, interest in the data products space, and curiosity about the bigger picture of building a company, its products, and its people.

Roles and Responsibilities:
- Develop and manage robust ETL pipelines using Apache Spark (Scala) (see the sketch below)
- Understand Spark concepts, performance optimization techniques, and governance tools
- Develop highly scalable, reliable, and high-performance data processing pipelines to extract, transform, and load data from various systems to the Enterprise Data Warehouse / Data Lake / Data Mesh
- Collaborate cross-functionally to design effective data solutions
- Implement data workflows using AWS Step Functions for efficient orchestration
- Leverage AWS Glue and Glue Crawlers for seamless data cataloging and automation
- Monitor, troubleshoot, and optimize pipeline performance and data quality
- Maintain high coding standards and produce thorough documentation
- Contribute to high-level design (HLD) and low-level design (LLD) discussions

Technical Skills:
- Minimum 3 years of progressive experience building solutions in Big Data environments
- Strong ability to build robust, resilient data pipelines that are scalable, fault tolerant, and reliable in terms of data movement
- 3+ years of hands-on expertise in Python, Spark, and Kafka
- Strong command of AWS services such as EMR, Redshift, Step Functions, AWS Glue, and Glue Crawlers
- Strong hands-on capabilities in SQL and NoSQL technologies
- Sound understanding of data warehousing, modeling, and ETL concepts
- Familiarity with High-Level Design (HLD) and Low-Level Design (LLD) principles
- Excellent written and verbal communication skills
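For flavor, a minimal PySpark sketch of the kind of extract-transform-load step this role describes; the bucket paths, column names, and app name are hypothetical placeholders, not part of the posting.

```python
# Minimal PySpark ETL sketch: extract raw events from S3, clean them,
# and load partitioned Parquet for the warehouse. Paths are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: read raw JSON events from a landing bucket.
raw = spark.read.json("s3://example-landing-bucket/orders/")

# Transform: drop malformed rows, cast types, derive a partition key.
clean = (
    raw.dropna(subset=["order_id", "amount"])
       .withColumn("amount", F.col("amount").cast("double"))
       .withColumn("order_date", F.to_date("created_at"))
)

# Load: append partitioned Parquet to the curated zone.
(clean.write.mode("append")
      .partitionBy("order_date")
      .parquet("s3://example-curated-bucket/orders/"))
```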
Posted 2 weeks ago
7.0 - 12.0 years
30 - 45 Lacs
Chennai
Remote
Job Description

Requirements:
- 5+ years of experience in the software industry
- Experience working with AWS cloud services
- Exposure to at least three AWS services, including Glue, Lambda, EMR, Elastic Beanstalk, DocumentDB, ElastiCache (Redis), S3, CloudFormation, Kinesis, Firehose, Redshift, Neptune, EKS*, ECS*, MCS*
- Strong experience in relational databases (SQL/PostgreSQL) and data warehousing
- Strong experience in design/architectural patterns
- Experience working with Node.js

Mandatory Skills: AWS, SQL, Data Warehousing, Node.js

Nice to have:
- Experience in Apache Spark and Hadoop
- Experience in React.js/Angular.js
- Knowledge of Android Studio, BlueStacks Emulator/Simulator
- Knowledge of NoSQL databases (MongoDB, Cassandra, etc.)

Contact: Dinesh (dinesh.m@softcrylic.com)
Posted 3 weeks ago
10.0 - 16.0 years
17 - 25 Lacs
Gurugram
Hybrid
Role & responsibilities:
- Administer and maintain scalable AWS cloud environments and applications for the data organization
- Install and maintain software, services, and applications by identifying system requirements, installing upgrades, and monitoring system performance
- Extensive database experience with AWS Redshift, AWS RDS, and MySQL
- Hands-on AWS services, database, and server troubleshooting experience
- Knowledge of day-to-day database operations, deployments, and development
- Experience in Snowflake
- Knowledge of SQL and performance tuning
- Knowledge of Linux shell scripting or Python
- Migrate systems from one AWS account to another
- Maintain system performance through monitoring, analysis, and performance tuning; troubleshoot system hardware, software, operating systems, and systems-management tooling
- Incident and change management
- Release management and code deployment
- After-hours on-call support
- Test disaster recovery policies and procedures, complete backups, and maintain documentation
- Upgrade systems and services; develop, test, evaluate, and install enhancements and new software
- Communicate with stakeholders

Preferred candidate profile:
- Bachelor's degree in computer science or engineering
- 10 years of experience in system, platform, and AWS cloud administration
- 7 years of database administration using the latest AWS technologies: AWS Redshift, RDS, S3, EC2, NLB, VPC, etc.
- 2 years of Snowflake administration
- Experience with Agile software development using JIRA
- Experience across multiple OS platforms, with strong emphasis on Linux and Windows systems
- Experience with OS-level scripting environments such as KSH or PowerShell
- Experience with version management tools and CI/CD pipelines
- In-depth knowledge of the TCP/IP protocol suite, security architecture, and securing and hardening operating systems, networks, databases, and applications
- Advanced SQL knowledge and experience working with relational databases, query authoring, and query performance tuning
- Experience supporting and optimizing data pipelines and data sets
- Knowledge of the incident response life cycle
- AWS Solutions Architect certifications preferred
- Strong written and verbal communication skills
Posted 3 weeks ago
5.0 - 10.0 years
0 - 2 Lacs
Hyderabad
Work from Office
Role: Sr. Data Engineer | Experience: 5+ years | Location: Hyderabad | Skills: Python, SQL scripting, Big Data, AWS, Airflow, Redshift
Posted 3 weeks ago
5.0 - 10.0 years
12 - 20 Lacs
Bengaluru
Remote
Role & responsibilities:
- Build and deploy data platforms, data warehouses, and big data solutions across industries (BFSI, Manufacturing, Healthcare, eCommerce, IoT, Digital Twin, etc.)
- Integrate, transform, and consolidate data from various structured and unstructured data systems
- Expertise in data ingestion, transformation, storage, and analysis, often using Azure services and migration from legacy on-premise services
- Essential skills include SQL, Python, and R, plus knowledge of ETL/ELT processes and big data technologies such as Apache Spark, Scala, and PySpark
- Maintain data integrity, resolve data-related issues, and ensure the reliability and performance of data solutions
- Work with stakeholders to provide real-time data analytics, monitor data pipelines, and optimize performance and scalability
- Strong understanding of data management fundamentals, data warehousing, and data modeling
- Big Data technologies: HDFS, Spark, HBase, Hive, Sqoop, Kafka, RabbitMQ, Flink
- Implement seamless data integration solutions between Azure/AWS/GCP and Snowflake platforms
- Identify and resolve performance bottlenecks, optimize queries, and ensure the overall efficiency of data pipelines
- Lead the development and management of data infrastructure, including tools, dashboards, queries, reports, and scripts, automating recurring tasks while maintaining data quality and integrity
- Implement and maintain data security measures, ensuring compliance with industry standards and regulations
- Ensure data architecture aligns with business requirements and best practices
- Experience in Power BI / Tableau / Looker
- Management, administration, and maintenance of data streaming tools such as Kafka/Confluent Kafka and Flink
- Experience in test-driven development and building libraries; proficiency in Pandas, NumPy, Elasticsearch, and Apache Beam
- Familiarity with CI/CD pipelines, monitoring, and infrastructure-as-code (e.g., Terraform, CloudFormation)
- Proficient in query optimization, data partitioning, indexing strategies, and caching mechanisms
- Ensure GDPR, SOX, and other regulatory compliance across data workflows
- Exposure to event/file/table formats such as Avro, Parquet, Iceberg, and Delta

Must-have skills (at least one or two):
- Azure: Data Factory (ADF), Databricks, Synapse, Data Lake Storage, Time Series Insights, Azure SQL Database, SQL Server, Presto, SSIS
- AWS: S3, Glue Studio, Redshift, Athena, EMR, Airflow, IAM, DBT, Lambda, RDS, DynamoDB, Neo4j, Amazon Neptune
- GCP: BigQuery, SQL, Composer, Dataflow, Dataform, DBT, Python, Cloud Functions, Dataproc + PySpark, Cloud Storage, Pub/Sub, Vertex AI, GKE
- OCI: Object Storage, OCI Data Integration, Oracle Database, Oracle Analytics Cloud (OAC), Autonomous Data Warehouse (ADW), NetSuite Analytics Warehouse (NSAW), PL/SQL, Exadata
Posted 3 weeks ago
5.0 - 10.0 years
20 - 35 Lacs
Bengaluru
Work from Office
Seikor is hiring for Tricon Infotech Pvt. Ltd. (https://www.triconinfotech.com/)

We are seeking full-stack Python developers. We are offering INR 1500 if you clear the round 1 interview and are selected for the round 2 interview. Apply, earn during the process, and find your next awesome job at Tricon, powered by Seikor.

Job Title: Python Full-Stack Developer
Location: Bengaluru, India
Experience: 4 - 10 years
Team Size: 500-1,000 employees globally
Function: Software Development

Job Summary: We are looking for a skilled and experienced Python full-stack developer with hands-on experience in AWS. The ideal candidate should have a strong foundation in backend development using Python and frameworks like Django or Flask. This role offers an exciting opportunity to work on dynamic, scalable applications in a collaborative and fast-paced environment.

Key Responsibilities:
- Lead and mentor a team of engineers, especially data engineers
- Architect scalable, secure backend systems using Python, FastAPI, and AWS (see the sketch below)
- Drive data infrastructure decisions with PostgreSQL, Redshift, and advanced data pipelines
- Collaborate cross-functionally to integrate AI-first features and stay ahead of emerging AI trends
- Ensure delivery of high-quality, maintainable code and manage technical debt

Required Skills & Qualifications:
- Strong leadership and communication skills
- Deep understanding of AWS services (EC2, Lambda, S3, IAM, Redshift)
- Advanced proficiency in Python and FastAPI
- Expertise in relational databases (PostgreSQL) and data warehousing (Redshift)
- Proven experience in ETL pipelines, data modeling, and optimization
- Ability to thrive in fast-paced, iterative environments

Nice to Have:
- Experience with AI/ML pipelines or data science platforms
- Familiarity with Airflow or similar orchestration tools
- Exposure to DevOps practices and CI/CD pipelines

Soft Skills: engineer-first mindset, team-oriented culture, growth mindset, strong problem-solving skills

Educational Qualification: Bachelor's or Master's degree in Computer Science, Engineering, or a related field
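For flavor, a minimal FastAPI sketch of the kind of backend service this role names; the resource, model, and in-memory store are hypothetical stand-ins for the PostgreSQL/Redshift-backed persistence described above.

```python
# Minimal FastAPI sketch: one resource with create/read endpoints.
# The in-memory dict stands in for real PostgreSQL/Redshift persistence.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="orders-service")

class Order(BaseModel):
    order_id: str
    amount: float

ORDERS: dict[str, Order] = {}

@app.post("/orders")
def create_order(order: Order) -> Order:
    ORDERS[order.order_id] = order
    return order

@app.get("/orders/{order_id}")
def get_order(order_id: str) -> Order:
    if order_id not in ORDERS:
        raise HTTPException(status_code=404, detail="order not found")
    return ORDERS[order_id]
```

Runnable locally with `uvicorn main:app --reload`, assuming the file is saved as main.py.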
Posted 3 weeks ago
9.0 - 14.0 years
20 - 35 Lacs
Chennai, Bengaluru
Work from Office
Dear Candidate,

We are hiring for an "AWS Data Engineer" role for a leading MNC.

Work Mode: Hybrid
Experience: 7 to 14 years
Location: Chennai / Bangalore
Interview Date: 23rd Aug 2025 (Saturday)
Interview Mode: Face to Face
Primary Skills: PySpark (optimization), Redshift, Glue, SQL (queries), ETL (pipelines), Python (coding)

Detailed JD:
Seeking a developer with good experience in Athena, Python, Glue, Lambda, DMS, RDS, Redshift, CloudFormation, and other AWS serverless resources, who can:
- Optimize data models for performance and efficiency
- Write SQL queries to support data analysis and reporting
- Design, implement, and maintain the data architecture for all AWS data services
- Work with stakeholders to identify business needs and requirements for data-related projects
- Design and implement ETL processes to load data into the data warehouse

Responsibility:
We are seeking a highly skilled senior/junior AWS developer to join our team. With a primary focus on SQL, the ideal candidate will also have experience with Agile methodologies. As a Senior AWS Developer, you will be responsible for optimizing data models for performance and efficiency, writing SQL queries to support data analysis and reporting, and designing and implementing ETL processes to load data into the data warehouse. You will also work with stakeholders to identify business needs and requirements for data-related projects, and design and maintain the data architecture for all AWS data services. The ideal candidate will have at least 5 years of work experience and be comfortable working in a hybrid setting.
Posted 3 weeks ago
6.0 - 9.0 years
17 - 22 Lacs
Pune
Work from Office
Role & responsibilities: AWS Redshift, AWS Glue
Posted 3 weeks ago
6.0 - 9.0 years
17 - 22 Lacs
Hyderabad
Work from Office
Role & responsibilities: AWS Redshift, AWS Glue
Posted 3 weeks ago
6.0 - 9.0 years
17 - 22 Lacs
Bengaluru
Work from Office
Role & responsibilities: AWS Redshift, AWS Glue
Posted 3 weeks ago
5.0 - 10.0 years
15 - 27 Lacs
Bengaluru
Work from Office
Hi, Greetings from Preludesys India Pvt Ltd!! We are hiring for one of our prestigious clients for the below position!!!

Job Posting: Data Modeler - SA
Notice Period: Immediate - 30 days

Role Overview:
We are looking for an experienced Data Modeler with a strong foundation in dimensional data modeling and a proven ability to design and maintain conceptual, logical, and physical data models. The ideal candidate will have a minimum of 5+ years of experience in data modeling and architecture, preferably within the banking or financial services industry.

Key Responsibilities:
- Design, develop, and maintain dimensional data models to support analytics and reporting (see the schema sketch below)
- Design conceptual, logical, and physical data models
- Utilize AWS services for scalable data model design
- Collaborate with business stakeholders, data architects, and engineers to ensure data models align with business rules and data governance standards
- Translate business requirements into scalable and efficient data models
- Maintain comprehensive documentation for data models, metadata, and data dictionaries
- Ensure consistency and integrity of data models across systems and platforms
- Partner with data engineering teams to implement models in AWS-based environments, including Redshift, Glue, and Lake Formation

Required Skills and Qualifications:
- 5+ years of experience in data modeling, with a focus on dimensional modeling and data warehouse design
- Proficiency in developing conceptual, logical, and physical data models
- Strong understanding of data governance, data quality, and metadata management
- Hands-on experience with AWS services such as Redshift, Glue, and Lake Formation
- Familiarity with data modeling tools (e.g., ER/Studio, ERwin, or similar)
- Excellent communication skills and ability to work with cross-functional teams

Preferred Qualifications:
- Experience in the banking or financial services sector
- Knowledge of data lake architecture and modern data stack tools
- AWS or data modeling certifications are a plus
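To illustrate the dimensional modeling this role centers on, a small star-schema sketch executed from Python; every table, column, and connection detail is invented for illustration, and the DDL uses Redshift's IDENTITY syntax as an assumption about the target platform.

```python
# Star-schema sketch: one fact table keyed to two dimensions.
# All identifiers are hypothetical. Redshift speaks the Postgres wire
# protocol, so psycopg2 works; credentials below are placeholders.
import psycopg2

DDL = """
CREATE TABLE IF NOT EXISTS dim_customer (
    customer_key BIGINT IDENTITY(1,1) PRIMARY KEY,  -- Redshift identity syntax
    customer_id  VARCHAR(64) NOT NULL,
    segment      VARCHAR(32)
);

CREATE TABLE IF NOT EXISTS dim_date (
    date_key  INT PRIMARY KEY,   -- e.g. 20250823
    full_date DATE NOT NULL
);

CREATE TABLE IF NOT EXISTS fact_transactions (
    customer_key BIGINT REFERENCES dim_customer (customer_key),
    date_key     INT REFERENCES dim_date (date_key),
    amount       DECIMAL(18, 2) NOT NULL
);
"""

with psycopg2.connect(host="example-host", dbname="analytics",
                      user="etl_user", password="...") as conn:
    with conn.cursor() as cur:
        cur.execute(DDL)
```

Facts hold measures (amount) at the grain of one transaction; dimensions hold the descriptive attributes analysts slice by, which is the core trade-off of dimensional design.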
Posted 3 weeks ago
7.0 - 12.0 years
22 - 27 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Role & responsibilities:
- Data Engineering & Analytics: strong background in building scalable data pipelines and analytics platforms
- Databricks (AWS preferred): mandatory hands-on expertise in Databricks, including cluster management, notebooks, job orchestration, and optimization
- AWS Cloud Services: proficiency in the AWS ecosystem (S3, Glue, EMR, Lambda, Redshift, IAM, CloudWatch)
- Programming: expertise in PySpark and Python for ETL, transformations, and analytics
- GenAI & LLMs: experience with Large Language Models (LLMs), fine-tuning, and enterprise integration
- CI/CD & DevOps: familiarity with Git-based workflows, deployment pipelines, and automation

Preferred candidate profile:
- 8-12 years of IT experience with a strong focus on Data Engineering & Cloud Analytics
- Minimum 4-5 years of hands-on Databricks experience (preferably on AWS)
- Strong expertise in PySpark, Python, SQL, and AWS data services
- Experience in LLM fine-tuning, GenAI automation, and enterprise integration
- Proven ability to lead teams, deliver projects, and engage stakeholders
- Strong problem-solving, communication, and analytical skills
Posted 3 weeks ago
4.0 - 8.0 years
10 - 18 Lacs
Kolkata, Pune, Delhi / NCR
Work from Office
• Design, develop, and implement robust microservices-based applications on AWS using Java.
• Lead the architecture and design of EKS-based solutions, ensuring seamless deployment and scalability.
• Collaborate with cross-functional teams to gather and analyze functional requirements, translating them into technical specifications.
• Define and enforce best practices for software development, including coding standards, code reviews, and documentation.
• Identify non-functional requirements such as performance, scalability, security, and reliability; ensure these are met throughout the development lifecycle.
• Conduct architectural assessments and provide recommendations for improvements to existing systems.
• Mentor and guide junior developers in best practices and architectural principles.
• Proficiency in Java programming language with experience in frameworks such as Spring Boot.
• Strong understanding of RESTful APIs and microservices architecture.
• Experience with AWS services, especially EKS, Lambda, S3, RDS, DynamoDB, and CloudFormation.
• Familiarity with CI/CD pipelines and tools like Jenkins or GitLab CI.
• Ability to design data models for relational and NoSQL databases.
• Experience in designing applications for high availability, fault tolerance, and disaster recovery.
• Knowledge of security best practices in cloud environments.
• Strong analytical skills to troubleshoot performance issues and optimize system efficiency.
• Excellent communication skills to articulate complex concepts to technical and non-technical stakeholders.
Posted 4 weeks ago
6.0 - 11.0 years
0 - 0 Lacs
Chennai
Hybrid
Job details:
Title: AWS Data Engineer
Type: Hybrid
Location: Chennai
Key Skills: AWS Glue, Redshift, S3, Lambda, Athena

- Hands-on experience as a Data Engineer with AWS: Glue, Lambda, SQL, Python, Redshift
- Must have working knowledge of designing and implementing data pipelines on any of the cloud providers (AWS is preferred)
- Must be able to work with large volumes of data coming from various sources; perform data cleansing, data validation, etc.
- Hands-on ETL developer who is good at Python and SQL
- AWS services: Glue, Glue Crawlers, Lambda, Redshift, Athena, S3, EC2, IAM; monitoring and logging mechanisms: AWS CloudWatch, setting up alerts
- Deployment knowledge on cloud; integrate CI/CD pipelines to build artifacts and deploy changes to higher environments
- Scheduling frameworks: Airflow, AWS Step Functions (see the sketch below)
- Excellent communication skills; should be able to work collaboratively with other teams
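As a sketch of the Airflow-scheduled Glue work this posting describes; the DAG id, Glue job name, region, and schedule are all hypothetical.

```python
# Airflow DAG sketch (Airflow 2.4+): nightly trigger of a hypothetical
# Glue job via boto3 from a PythonOperator task.
from datetime import datetime

import boto3
from airflow import DAG
from airflow.operators.python import PythonOperator

def start_glue_job():
    glue = boto3.client("glue", region_name="ap-south-1")
    run = glue.start_job_run(JobName="example-curation-job")  # hypothetical job
    print("started run:", run["JobRunId"])

with DAG(
    dag_id="nightly_curation",
    start_date=datetime(2025, 1, 1),
    schedule="0 1 * * *",   # 01:00 daily
    catchup=False,
) as dag:
    PythonOperator(task_id="start_glue_job", python_callable=start_glue_job)
```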
Posted 4 weeks ago
8.0 - 10.0 years
18 - 33 Lacs
Pune
Hybrid
Roles and Responsibilities:
- Design, develop, and maintain large-scale data pipelines using AWS services such as S3, Lambda, and Step Functions (see the sketch below)
- Develop ETL processes using PySpark and Redshift to extract insights from NoSQL databases like DynamoDB
- Ensure high availability and scalability of the data warehousing infrastructure on AWS
- Troubleshoot complex issues related to data processing, storage, and retrieval
- Collaborate with cross-functional teams to identify business requirements and design solutions that meet those needs
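A small boto3 sketch of kicking off a Step Functions execution of the sort mentioned above; the state machine ARN, account id, and input payload are hypothetical.

```python
# boto3 sketch: start a hypothetical Step Functions execution that
# orchestrates the PySpark/Redshift pipeline stages described above.
import json
import boto3

sfn = boto3.client("stepfunctions", region_name="ap-south-1")

response = sfn.start_execution(
    stateMachineArn=(
        "arn:aws:states:ap-south-1:123456789012:"
        "stateMachine:example-etl-pipeline"   # hypothetical ARN
    ),
    input=json.dumps({"run_date": "2025-08-01", "source": "dynamodb-export"}),
)
print("execution ARN:", response["executionArn"])
```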
Posted 4 weeks ago
3.0 - 7.0 years
11 - 18 Lacs
Gurugram
Hybrid
Job Title: AWS Data Engineer
Location: Gurugram, India
No of openings:
Experience Required:
Company: PwC India

Job Description:
PwC India is seeking a talented AWS Data Engineer to join our team in Gurgaon. The ideal candidate will have 2-7 years of experience with a strong focus on AWS services, data engineering, and analytics. This role offers an exciting opportunity to work on cutting-edge projects for global clients while leveraging your expertise in cloud technologies and data management.

Key Responsibilities:
1. AWS Service Implementation: Design, develop, and maintain data solutions using AWS services, with a particular focus on S3, Athena, Glue, EMR, and Redshift. Implement and optimize data lakes and data warehouses on AWS platforms. (See the Athena sketch below.)
2. Data Pipeline Development: Create and maintain efficient ETL processes using PySpark and other relevant tools. Develop scalable and performant data pipelines to process large volumes of data. Implement data quality checks and monitoring systems to ensure data integrity.
3. Database Management: Work proficiently with SQL and NoSQL databases, optimizing queries and database structures for performance. Design and implement database schemas that align with business requirements and data models.
4. Performance Optimization: Continuously monitor and optimize the performance of data processing jobs and queries. Implement best practices for cost optimization in AWS environments. Troubleshoot and resolve performance bottlenecks in data pipelines and analytics processes.
5. Collaboration and Documentation: Work closely with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions. Develop and maintain comprehensive documentation for data architectures, processes, and best practices. Participate in code reviews and contribute to the team's knowledge base.

Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field
- 2-7 years of experience in data engineering, with a focus on AWS technologies
- Strong hands-on experience with AWS services, particularly S3, Athena, Glue, EMR, and Redshift
- Proficiency in Python and PySpark for data processing and analysis
- Very strong SQL/PL SQL skills
- Demonstrated ability to optimize data pipelines and queries for performance
- Strong problem-solving skills and attention to detail

Preferred Skills:
- AWS certifications (e.g., AWS Certified Data Analytics - Specialty, AWS Certified Solutions Architect)
- Familiarity with data visualization tools (e.g., Tableau, Power BI)
- Experience with data modeling and data warehouse concepts
- Innovative thinking and creativity in solution delivery
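For illustration, a boto3 sketch of submitting an Athena query over a Glue-cataloged table, the kind of task this role involves; the database, table, region, and results bucket are hypothetical.

```python
# boto3 sketch: run an Athena query against a hypothetical Glue-cataloged
# table and poll until it finishes. Database/bucket names are placeholders.
import time
import boto3

athena = boto3.client("athena", region_name="ap-south-1")

qid = athena.start_query_execution(
    QueryString="SELECT order_date, COUNT(*) FROM orders GROUP BY 1",
    QueryExecutionContext={"Database": "example_lake"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)["QueryExecutionId"]

while True:
    status = athena.get_query_execution(QueryExecutionId=qid)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)  # Athena is asynchronous, so poll for completion
print("query finished with state:", state)
```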
Posted 4 weeks ago
7.0 - 12.0 years
7 - 17 Lacs
Hyderabad, Bengaluru
Hybrid
Hexaware Technologies is hiring AWS Redshift developers.

Primary Skill Set: AWS Redshift, Glue, Lambda, PySpark
Total Experience Required: 6+ to 12 years
Location: Bangalore & Hyderabad only
Work Mode: Hybrid

Job Description - Mandatory Skills:
- 8 to 10 years of experience with major cloud products such as Amazon AWS
- Experience as a developer in multiple cloud technologies, including AWS EC2, S3, Amazon API Gateway, AWS Lambda, AWS Glue, AWS RDS, and AWS Step Functions
- Good knowledge of the AWS environment and services, with an understanding of S3 storage
- Must have good knowledge of AWS Glue and serverless architecture
- Must have good knowledge of PySpark
- Must have good knowledge of SQL

Nice to have:
- Collaborate with data analysts and stakeholders to meet data requirements
- AWS Postgres experience for DB design
- Must have worked with DynamoDB

Interested candidates, kindly share your updated resume to ramyar2@hexaware.com with the details below:
Full Name:
Contact No:
Total Exp:
Rel Exp in AWS:
Current & Joining Location:
Notice Period (if serving, mention LWD):
Current CTC:
Expected CTC:
Posted 1 month ago
5.0 - 9.0 years
1 - 5 Lacs
Bengaluru
Work from Office
Role & responsibilities: Outline the day-to-day responsibilities for this role. Preferred candidate profile: Specify required role expertise, previous job experience, or relevant certifications.
Posted 1 month ago
0.0 - 1.0 years
0 Lacs
Noida
Work from Office
We are excited to invite fresh BTech graduates to our Walk-In Drive for Trainee Roles at our Noida office. This is a great opportunity for recent graduates to kickstart their careers in one of the following domains:

Available Domains: Python, Java, Frontend Development, DevOps, Software Testing, Data Warehouse

Walk-In Dates: Wednesday, July 23, 2025 and Thursday, July 24, 2025

Important: Only 20 walk-in candidates will be shortlisted.

Eligibility Criteria:
- BTech degree completed (2022-2025 pass-outs)
- Basic knowledge in at least one of the mentioned domains
- Good communication skills
- Eagerness to learn and grow in the tech field

How to Apply: Interested candidates must register using the form below. Only shortlisted candidates will be contacted with interview location details.
Apply Here: https://forms.gle/a9LesdmF7g1MM2PW7

Stipend/CTC: As per industry standards (to be discussed during the interview)
Posted 1 month ago
8.0 - 13.0 years
15 - 27 Lacs
Bengaluru
Hybrid
Job Description:
We are seeking an experienced and visionary Senior Data Architect to lead the design and implementation of scalable enterprise data solutions. This is a strategic leadership role for someone who thrives in cloud-first, data-driven environments and is passionate about building future-ready data architectures.

Key Responsibilities:
- Define and implement an enterprise-wide data architecture strategy aligned with business goals
- Design and lead scalable, secure, and resilient data platforms for both structured and unstructured data
- Architect data lake/warehouse ecosystems and cloud-native solutions (Snowflake, Databricks, Redshift, BigQuery)
- Collaborate with business and tech stakeholders to capture data requirements and translate them into scalable designs
- Mentor data engineers, analysts, and other architects in data best practices
- Establish standards for data modeling, integration, and management
- Drive governance across data quality, security, metadata, and compliance
- Lead modernization and cloud migration efforts
- Evaluate new technologies and recommend adoption strategies
- Support data cataloging, lineage, and MDM initiatives
- Ensure compliance with privacy standards (e.g., GDPR, HIPAA, CCPA)

Required Qualifications:
- Bachelor's/Master's degree in Computer Science, Data Science, or a related field
- 10+ years of experience in data architecture; 3+ years in a senior/lead capacity
- Hands-on experience with modern cloud data platforms: Snowflake, Azure Synapse, AWS Redshift, BigQuery, etc.
- Strong skills in data modeling tools (e.g., Erwin, ER/Studio)
- Deep understanding of ETL/ELT, APIs, and data integration
- Expertise in SQL, Python, and data-centric languages
- Experience with data governance, RBAC, encryption, and compliance frameworks
- DevOps/CI-CD experience in data pipelines is a plus
- Excellent communication and leadership skills
Posted 1 month ago
5.0 - 10.0 years
8 - 18 Lacs
Hyderabad
Work from Office
Job Title: Data Engineer
Client: Amazon
Employment Type: Full-time (On-site)
Payroll: BCT Consulting Pvt Ltd
Work Location: Hyderabad (Work from Office, Monday to Friday, General Shift)
Experience Required: 5+ years
Joining Mode: Permanent with BCT Consulting Pvt Ltd, deployed at Amazon

About the Role:
We are seeking a highly skilled and motivated Data Engineer with strong expertise in SQL, Python, Big Data technologies, AWS, Airflow, and Redshift. The ideal candidate will play a key role in building and optimizing data pipelines, ensuring data integrity, and enabling scalable data solutions across the organization.

Key Responsibilities:
- Design, develop, and maintain scalable data pipelines using Python and SQL
- Work with Big Data technologies to process and manage large datasets efficiently
- Implement and manage workflows using Apache Airflow
- Develop and optimize data models and queries in Amazon Redshift (see the load sketch below)
- Collaborate with cross-functional teams to understand data requirements and deliver solutions
- Ensure data quality, consistency, and security across all data platforms
- Monitor and troubleshoot data pipeline performance and reliability
- Leverage AWS services (S3, Lambda, Glue, EMR, etc.) for cloud-native data engineering solutions

Required Skills & Qualifications:
- 5+ years of experience in Data Engineering
- Strong proficiency in SQL and Python
- Hands-on experience with Big Data tools (e.g., Spark, Hadoop)
- Expertise in AWS cloud services related to data engineering
- Experience with Apache Airflow for workflow orchestration
- Solid understanding of Amazon Redshift and data warehousing concepts
- Excellent problem-solving and communication skills
- Ability to work in a fast-paced, collaborative environment

Nice to Have:
- Experience with CI/CD pipelines and DevOps practices
- Familiarity with data governance and compliance standards

Perks & Benefits:
- Opportunity to work on cutting-edge data technologies
- Collaborative and innovative work culture
- Immediate joining preferred
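A sketch of the classic S3-to-Redshift load pattern implied above; the table, bucket, cluster endpoint, and IAM role are hypothetical placeholders.

```python
# Sketch: load S3 Parquet into Redshift with COPY. Redshift speaks the
# Postgres protocol, so psycopg2 works; all identifiers are hypothetical.
import psycopg2

COPY_SQL = """
COPY analytics.orders
FROM 's3://example-curated-bucket/orders/'
IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-load'
FORMAT AS PARQUET;
"""

with psycopg2.connect(
    host="example-cluster.abc123.ap-south-1.redshift.amazonaws.com",
    port=5439, dbname="analytics",
    user="etl_user", password="...",
) as conn:
    with conn.cursor() as cur:
        cur.execute(COPY_SQL)  # Redshift pulls directly from S3 in parallel
```

COPY is preferred over row-by-row INSERTs because the cluster ingests the S3 files in parallel across slices.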
Posted 1 month ago
5.0 - 10.0 years
10 - 15 Lacs
Chennai, Bengaluru
Work from Office
Job Description:
Job Title: ETL Testing
Experience: 5-8 years
Location: Chennai, Bangalore
Employment Type: Full Time
Job Type: Work from Office (Monday - Friday)
Shift Timing: 12:30 PM to 9:30 PM

Required Skills:
- Analytical skills to understand requirements, develop test cases, and understand and manage data; strong SQL skills
- Hands-on testing of data pipelines built using Glue, S3, Redshift, and Lambda
- Collaborate with developers to build automated testing where appropriate (see the sketch below)
- Understanding of data concepts like data lineage, data integrity, and quality
- Experience testing financial data is a plus
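As an example of the automated pipeline testing this role asks for, a small pytest sketch; the `run_query` helper, module path, and table names are hypothetical.

```python
# pytest sketch of post-load data quality checks: row-count reconciliation
# between a staging extract and the warehouse target, plus a null check.
# run_query() is a hypothetical helper that executes SQL against Redshift
# and returns rows as tuples.
from mypipeline.db import run_query  # hypothetical helper module

def test_row_counts_reconcile():
    source = run_query("SELECT COUNT(*) FROM staging.orders")[0][0]
    target = run_query("SELECT COUNT(*) FROM analytics.orders")[0][0]
    assert source == target, f"row-count mismatch: {source} vs {target}"

def test_no_null_business_keys():
    nulls = run_query(
        "SELECT COUNT(*) FROM analytics.orders WHERE order_id IS NULL"
    )[0][0]
    assert nulls == 0, f"{nulls} rows with NULL order_id"
```

Tests like these run after each load so regressions in the Glue/Lambda pipeline surface before analysts see bad data.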
Posted 1 month ago
5.0 - 7.0 years
15 - 30 Lacs
Gurugram
Remote
- Design, develop, and maintain robust data pipelines and ETL/ELT processes on AWS
- Leverage AWS services such as S3, Glue, Lambda, Redshift, Athena, EMR, and others to build scalable data solutions (see the sketch below)
- Write efficient and reusable code using Python for data ingestion, transformation, and automation tasks
- Collaborate with cross-functional teams including data analysts, data scientists, and software engineers to support data needs
- Monitor, troubleshoot, and optimize data workflows for performance, reliability, and cost efficiency
- Ensure data quality, security, and governance across all systems
- Communicate technical solutions clearly and effectively with both technical and non-technical stakeholders

Required Skills & Qualifications:
- 5+ years of experience in data engineering roles
- Strong hands-on experience with Amazon Web Services (AWS), particularly data-related services (e.g., S3, Glue, Lambda, Redshift, EMR, Athena)
- Proficiency in Python for scripting and data processing
- Experience with SQL and working with relational databases
- Solid understanding of data architecture, data modeling, and data warehousing concepts
- Experience with CI/CD pipelines and version control tools (e.g., Git)
- Excellent verbal and written communication skills
- Proven ability to work independently in a fully remote environment

Preferred Qualifications:
- Experience with workflow orchestration tools like Apache Airflow or AWS Step Functions
- Familiarity with big data technologies such as Apache Spark or Hadoop
- Exposure to infrastructure-as-code tools like Terraform or CloudFormation
- Knowledge of data privacy and compliance standards
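A minimal sketch of the S3-triggered Lambda ingestion pattern this role describes; what the handler does with the object is a hypothetical stand-in for real transformation logic.

```python
# Lambda handler sketch: triggered by S3 object-created events, it reads the
# new object and logs basic facts; a real job would transform and forward
# the data. The event shape follows the standard S3 notification format.
import json
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        obj = s3.get_object(Bucket=bucket, Key=key)
        body = obj["Body"].read()
        # Placeholder for parsing/validation/forwarding of the payload.
        print(json.dumps({"bucket": bucket, "key": key, "bytes": len(body)}))
    return {"processed": len(event["Records"])}
```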
Posted 1 month ago
5.0 - 10.0 years
15 - 20 Lacs
Chennai, Bengaluru
Work from Office
Job Description:
Job Title: Data Engineer
Experience: 5-8 years
Location: Chennai, Bangalore
Employment Type: Full Time
Job Type: Work from Office (Monday - Friday)
Shift Timing: 12:30 PM to 9:30 PM

Required Skills:
- 5-8 years' experience as a back-end data engineer
- Strong experience in SQL
- Strong knowledge and experience in Python and PySpark
- Experience in AWS
- Experience in Docker and OpenShift
- Hands-on experience with REST concepts
- Design and develop business solutions on the data front
- Experience in implementing new enhancements and handling defect triage
- Strong analytical abilities

Additionally Preferred:
- Jira, Bitbucket
- Experience with Kafka
- Experience with Snowflake
- Domain knowledge in Banking
- Analytical skills
- Excellent communication skills
- Working knowledge of Agile

Thanks & Regards,
Suresh Kumar Raja, CGI.
Posted 1 month ago
5.0 - 9.0 years
10 - 12 Lacs
Bengaluru
Remote
Sr Data Engineer
Tenure: Min. 3 months (potential for extension). Contract. Remote.

We are seeking a Sr. Data Engineer to join our technology team. This is a hands-on position responsible for the build and continued evolution of the data platform, business applications, and integration tools. We are looking for a hands-on engineer who can recommend best practices for working with enterprise data, and who is very strong with AWS products, to help build out data pipelines, create jobs, and manage the quality of the data warehouse and integrated tools.

Responsibilities:
- Design, develop, and maintain scalable, efficient data pipelines to support ETL/ELT processes across multiple sources and systems
- Partner with Data Science, Analytics, and Business teams to understand data needs, prioritize use cases, and deliver reliable datasets and models
- Monitor, optimize, and troubleshoot data jobs, ensuring high availability and performance of data infrastructure
- Build and manage data models and schemas in Redshift and other data technologies, enabling self-service analytics
- Implement data quality checks, validation rules, and alerting mechanisms to ensure trust in data
- Leverage AWS services like Glue, Lambda, S3, Athena, and EMR to build modular, reusable data solutions
- Drive improvements in data lineage, cataloging, and documentation to ensure transparency and reusability of data assets
- Create and maintain technical documentation and version-controlled workflows (e.g., Git, dbt)
- Contribute to and promote a culture of continuous improvement, mentoring peers and advocating for scalable and modern data practices
- Participate in sprint planning, code reviews, and team retrospectives as part of an Agile development process
- Stay current on industry trends and emerging technologies to identify opportunities for innovation and automation

Requirements:
- Advanced Python, including experience building APIs, scripting ETL processes, and automating workflows
- Expert in SQL, with the ability to write complex queries, optimize performance, and work across large datasets
- Hands-on experience with the AWS data ecosystem, including Redshift, S3, Glue, Athena, EMR, EC2, DynamoDB, Lambda, and Redis
- Strong understanding of data warehousing and data modeling principles (e.g., star/snowflake schema, dimensional modeling)
- Familiarity with dbt Labs and modern ELT/analytics engineering practices
- Experience working with structured, semi-structured, and unstructured data
Posted 2 months ago