5.0 - 8.0 years
0 - 1 Lacs
Pune, Chennai, Bengaluru
Hybrid
Candidates must be well experienced in AWS Glue, Lambda, Athena, and Redshift. The candidate must have worked as a lead for at least a year.
Posted 3 weeks ago
3.0 - 5.0 years
4 - 8 Lacs
Hyderabad
Work from Office
THE TEAM
This is an opportunity to join a team of highly skilled engineers who are redesigning the Creditsafe data platform with high throughput and scalability as the primary goals. The data delivery platform is built on AWS Redshift and S3 cloud storage. The platform is expected to manage over a billion objects, with a daily increment of more than 10 million objects, while handling addition, deletion, and correction of our data and indexes in an auditable manner. Our data processing application is written entirely in Python and designed to efficiently transform incoming raw data into an API-consumable schema. The team is also building highly available, low-latency APIs to deliver data to our clients faster.
JOB PROFILE
Join us to take this redesign of the Creditsafe platform into the cloud. You will be expected to work with technologies such as Python, Airflow, Redshift, DynamoDB, AWS Glue, and S3.
KEY DUTIES AND RESPONSIBILITIES
You will actively contribute to the codebase and participate in peer reviews. Design and build a metadata-driven, event-based distributed data processing platform using technologies such as Python, Airflow, Redshift, DynamoDB, AWS Glue, and S3 (a sketch follows this listing). As an experienced engineer, you will play a critical role in the design, development, and deployment of our business-critical systems. You will be building and scaling Creditsafe APIs to securely support over 1,000 transactions per second using serverless technologies. Execute practices such as continuous integration and test-driven development to enable the rapid delivery of working code. Understand the company's domain data to make recommendations for improving existing products. The responsibilities detailed above are not exhaustive, and you may be asked to take on additional responsibilities deemed reasonable by your direct line manager.
SKILLS AND QUALIFICATIONS
Demonstrated ability to write clean, efficient code and knit it together with the cloud environment for best performance. Proven experience developing production-grade APIs and data pipelines in Python within a commercial environment. You are looking to grow your skills through daily technical challenges and enjoy problem solving and whiteboarding in collaboration with a team. You have excellent communication skills, can explain your views clearly to the team, and are open to understanding theirs. You have a proven track record of drawing on deep and broad technical expertise to mentor engineers, complete hands-on technical work, and provide leadership on complex technology issues. You share ideas collaboratively via wikis, discussion boards, etc., and share any decisions made for the benefit of others.
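For illustration, here is a minimal Airflow DAG sketch of the kind of metadata-driven, event-based pipeline this role describes: it waits for a raw S3 drop, then triggers a Glue transform job. It assumes the apache-airflow-providers-amazon package; the bucket, prefix, and job name are hypothetical, not Creditsafe's actual setup.

```python
# A minimal sketch, assuming a Glue job named "transform_raw_to_api_schema"
# and an S3 landing prefix; all names here are illustrative.
from datetime import datetime

from airflow import DAG
from airflow.providers.amazon.aws.operators.glue import GlueJobOperator
from airflow.providers.amazon.aws.sensors.s3 import S3KeySensor

with DAG(
    dag_id="daily_raw_ingest",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Wait for the day's raw drop to land in S3 (hypothetical bucket/prefix).
    wait_for_raw = S3KeySensor(
        task_id="wait_for_raw",
        bucket_name="example-raw-bucket",
        bucket_key="incoming/{{ ds }}/_SUCCESS",
    )

    # Run the Glue job that transforms raw data into the API-consumable schema.
    transform = GlueJobOperator(
        task_id="transform",
        job_name="transform_raw_to_api_schema",
        script_args={"--run_date": "{{ ds }}"},
        wait_for_completion=True,
    )

    wait_for_raw >> transform
```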
Posted 3 weeks ago
5.0 - 10.0 years
2 - 6 Lacs
Gurugram
Work from Office
Skills:
Primary Skills: Enhancements, new development, defect resolution, and production support of ETL development using AWS native services. Integration of data sets using AWS services such as Glue and Lambda functions. Utilization of AWS SNS to send emails and alerts. Authoring ETL processes using Python and PySpark. ETL process monitoring using CloudWatch events. Connecting to different data sources such as S3 and validating data using Athena. Experience in CI/CD using GitHub Actions. Proficiency in Agile methodology. Extensive working experience with advanced SQL and a deep understanding of complex SQL.
Competencies / Experience:
Deep technical skills in AWS Glue (Crawler, Data Catalog): 5 years.
Hands-on experience with Python and PySpark: 3 years.
PL/SQL experience: 3 years.
CloudFormation and Terraform: 2 years.
CI/CD with GitHub Actions: 1 year.
Experience with BI systems (Power BI, Tableau): 1 year.
Good understanding of AWS services like S3, SNS, Secrets Manager, Athena, and Lambda: 2 years.
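As a sketch of the SNS alerting pattern this posting describes, the Lambda handler below reacts to the standard EventBridge "Glue Job State Change" event and publishes failures to an SNS topic. The topic ARN and environment variable name are illustrative.

```python
# A minimal sketch of a Lambda handler that forwards Glue job failures to SNS.
# The event shape follows the standard "Glue Job State Change" EventBridge
# event; the topic ARN below is a placeholder, not a real resource.
import json
import os

import boto3

sns = boto3.client("sns")
TOPIC_ARN = os.environ.get(
    "ALERT_TOPIC_ARN", "arn:aws:sns:us-east-1:123456789012:etl-alerts"
)

def handler(event, context):
    detail = event.get("detail", {})
    if detail.get("state") != "FAILED":
        return {"skipped": True}  # only alert on failed runs
    message = {
        "jobName": detail.get("jobName"),
        "jobRunId": detail.get("jobRunId"),
        "errorMessage": detail.get("message"),
    }
    sns.publish(
        TopicArn=TOPIC_ARN,
        Subject=f"Glue job failed: {message['jobName']}",
        Message=json.dumps(message, indent=2),
    )
    return {"alerted": True}
```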
Posted 3 weeks ago
5.0 - 7.0 years
10 - 18 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Data Engineer: Mandatory skills: AWS, Kafka, ETL, Glue, Lambda, Python, SQL
Posted 3 weeks ago
7.0 - 12.0 years
20 - 22 Lacs
Pune, Bangalore Rural, Bengaluru
Hybrid
Job Title: AWS Data Engineer
Exp: 7+ years
- Overall experience of 4-8 years.
- Proven experience with SQL, Python, Amazon Redshift, Apache Spark (PySpark), AWS IAM, Amazon S3, and AWS Glue ETL is mandatory.
- Good to have: data modelling skills.
- Strong communication and collaboration skills, with the ability to work effectively in a team.
The resource needs to be very strong in SQL and PySpark/Python. AWS knowledge can be compromised if the candidate is strong in SQL/PySpark. Good communication skills.
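A minimal PySpark sketch of the kind of SQL-adjacent transform work this role emphasizes: reading raw events from S3, keeping the latest record per key, and writing a partitioned dataset back. The bucket paths and column names are hypothetical.

```python
# A minimal sketch (paths and columns are illustrative): read raw events
# from S3, deduplicate by key, and write a partitioned parquet dataset back.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("dedupe_events").getOrCreate()

events = spark.read.parquet("s3://example-bucket/raw/events/")

# Keep only the latest record per event_id, a common CDC-style dedup pattern.
latest = Window.partitionBy("event_id").orderBy(F.col("updated_at").desc())
deduped = (
    events.withColumn("rn", F.row_number().over(latest))
    .filter(F.col("rn") == 1)
    .drop("rn")
)

deduped.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/events/"
)
```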
Posted 3 weeks ago
4.0 - 9.0 years
22 - 30 Lacs
Noida, Hyderabad
Hybrid
Hiring Alert: Data Engineer | Xebia | Hyderabad & Noida
We're on the lookout for skilled Data Engineers with 4+ years of experience to join our dynamic team at Xebia! If you thrive on solving complex data problems and have solid hands-on experience in Python, PySpark, and AWS, we'd love to hear from you.
Location: Hyderabad / Noida
Work Mode: 3 days Work From Office (WFO) per week
Timings: 2:30 PM – 10:30 PM IST
Notice Period: Immediate to 15 days max
Required Skills:
Programming: Strong in Python, Spark, and PySpark
SQL: Proficient in writing and optimizing complex queries
AWS Services: Experience with S3, SNS, SQS, EMR, Lambda, Athena, Glue, RDS (PostgreSQL), CloudWatch, EventBridge, CloudFormation
CI/CD: Exposure to Jenkins pipelines
Analytical Thinking: Strong problem-solving capabilities
Communication: Ability to explain technical topics to non-technical audiences
Preferred Skills:
Jenkins for CI/CD
Familiarity with big data tools and frameworks
Interested? Apply now! Send your updated CV along with the following details to vijay.s@xebia.com:
Full Name:
Total Experience:
Current CTC:
Expected CTC:
Current Location:
Preferred Location:
Notice Period / Last Working Day (if serving notice):
Primary Skill Set (choose from above or mention any other relevant expertise):
LinkedIn Profile URL:
Please apply only if you have not applied recently and are not currently in the interview process for any open role at Xebia. Let's build the future of data together!
#DataEngineer #Xebia #AWS #Python #PySpark #BigData #HiringNow #HyderabadJobs #NoidaJobs #ImmediateJoiners #DataJobs
Posted 3 weeks ago
3.0 - 8.0 years
10 - 14 Lacs
Bengaluru
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills: AWS Glue
Good to have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various stakeholders to gather requirements, overseeing the development process, and ensuring that the applications meet the specified needs. You will also engage in problem-solving discussions with your team, providing guidance and support to ensure successful project outcomes. Additionally, you will monitor project progress, address any challenges that arise, and facilitate communication among team members to maintain alignment and efficiency throughout the project lifecycle.
Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation/contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Mentor junior team members to foster their professional growth.
Professional & Technical Skills:
- Must To Have Skills: Proficiency in AWS Glue.
- Strong understanding of data integration and ETL processes.
- Experience with cloud computing platforms, particularly AWS.
- Familiarity with data warehousing concepts and best practices.
- Ability to troubleshoot and optimize data pipelines.
Additional Information:
- The candidate should have a minimum of 3 years of experience in AWS Glue.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.
Qualification: 15 years full time education
Posted 3 weeks ago
15.0 - 20.0 years
10 - 14 Lacs
Bengaluru
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills: AWS Glue
Good to have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure project milestones are met, facilitating discussions to address challenges, and guiding your team in implementing effective solutions. You will also engage in strategic planning sessions to align project goals with organizational objectives, ensuring that all stakeholders are informed and involved in the development process. Your role will require you to balance technical oversight with team management, fostering an environment of innovation and collaboration.
Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute on key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge.
- Facilitate regular team meetings to track progress and address any roadblocks.
Professional & Technical Skills:
- Must To Have Skills: Proficiency in AWS Glue.
- Good To Have Skills: Experience with data integration and ETL processes.
- Strong understanding of cloud computing concepts and services.
- Familiarity with data warehousing solutions and best practices.
- Experience in scripting languages such as Python or SQL.
Additional Information:
- The candidate should have a minimum of 5 years of experience in AWS Glue.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.
Qualification: 15 years full time education
Posted 3 weeks ago
5.0 - 9.0 years
20 - 30 Lacs
Pune
Hybrid
Job Summary:
We are looking for a highly skilled AWS Data Engineer with over 5 years of experience in designing, developing, and maintaining scalable data pipelines on AWS. The ideal candidate will be proficient in data engineering best practices and cloud-native technologies, with hands-on experience building ETL/ELT pipelines, working with large datasets, and optimizing data architecture for analytics and business intelligence.
Key Responsibilities:
Design, build, and maintain scalable and robust data pipelines and ETL processes using AWS services (e.g., Glue, Lambda, EMR, Redshift, S3, Athena); a sketch of the orchestration side follows this listing.
Collaborate with data analysts, data scientists, and stakeholders to understand data requirements and deliver high-quality solutions.
Implement data lake and data warehouse architectures, ensuring data governance, data quality, and compliance.
Optimize data pipelines for performance, reliability, scalability, and cost.
Automate data ingestion and transformation workflows using Python, PySpark, or Scala.
Manage and monitor data infrastructure, including logging, error handling, alerting, and performance metrics.
Leverage infrastructure-as-code tools like Terraform or AWS CloudFormation for infrastructure deployment.
Ensure security best practices are implemented for data access and storage (IAM, KMS, encryption, etc.).
Document data processes, architectures, and standards.
Required Qualifications:
Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
Minimum 5 years of experience as a Data Engineer with a focus on AWS cloud services.
Strong experience in building ETL/ELT pipelines using AWS Glue, EMR, Lambda, and Step Functions.
Proficiency in SQL, Python, PySpark, and data modeling techniques.
Experience working with data lakes (S3) and data warehouses (Redshift, Snowflake, etc.).
Experience with Athena, Kinesis, Kafka, or similar streaming data tools is a plus.
Familiarity with DevOps and CI/CD processes, using tools like Git, Jenkins, or GitHub Actions.
Understanding of data privacy, governance, and compliance standards such as GDPR and HIPAA.
Strong problem-solving and analytical skills, with the ability to work in a fast-paced environment.
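To illustrate the orchestration side of these responsibilities, here is a minimal boto3 sketch that starts a Glue job and polls it to a terminal state. The job name and argument are hypothetical; in practice this control flow would usually live in Step Functions or Airflow, as the posting suggests.

```python
# A minimal sketch of driving a Glue job from Python with boto3; the job
# name and run argument are illustrative placeholders.
import time

import boto3

glue = boto3.client("glue")

def run_glue_job(job_name: str, run_date: str) -> str:
    run_id = glue.start_job_run(
        JobName=job_name,
        Arguments={"--run_date": run_date},
    )["JobRunId"]

    # Poll until the run reaches a terminal state.
    while True:
        run = glue.get_job_run(JobName=job_name, RunId=run_id)
        state = run["JobRun"]["JobRunState"]
        if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
            return state
        time.sleep(30)

if __name__ == "__main__":
    print(run_glue_job("nightly_orders_etl", "2024-01-01"))
```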
Posted 3 weeks ago
5.0 - 10.0 years
15 - 30 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
EPAM has a presence across 40+ countries globally, with 55,000+ professionals and numerous delivery centers. Key locations are North America, Eastern Europe, Central Europe, Western Europe, APAC, and the Middle East, with development centers in India (Hyderabad, Pune & Bangalore).
Location: Gurgaon/Pune/Hyderabad/Bengaluru/Chennai
Work Mode: Hybrid (2-3 days office in a week)
Job Description:
5-14 years of experience in Big Data & related technologies
Expert-level understanding of distributed computing principles
Expert-level knowledge of and experience in Apache Spark
Hands-on programming with Python
Proficiency with Hadoop v2, MapReduce, HDFS, Sqoop
Experience building stream-processing systems using technologies such as Apache Storm or Spark Streaming
Good understanding of Big Data querying tools such as Hive and Impala
Experience with integration of data from multiple data sources such as RDBMS (SQL Server, Oracle), ERP, and files
Good understanding of SQL queries, joins, stored procedures, and relational schemas
Experience with NoSQL databases such as HBase, Cassandra, and MongoDB
Knowledge of ETL techniques and frameworks
Performance tuning of Spark jobs
Experience with native cloud data services (AWS/Azure/GCP)
Ability to lead a team efficiently
Experience with designing and implementing Big Data solutions
Practitioner of Agile methodology
WE OFFER
Opportunity to work on technical challenges that may impact across geographies
Vast opportunities for self-development: online university, knowledge sharing opportunities globally, learning opportunities through external certifications
Opportunity to share your ideas on international platforms
Sponsored Tech Talks & Hackathons
Possibility to relocate to any EPAM office for short- and long-term projects
Focused individual development
Benefit package: health benefits, medical benefits, retirement benefits, paid time off, flexible benefits
Forums to explore passions beyond work (CSR, photography, painting, sports, etc.)
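A minimal sketch of the stream-processing experience this role asks for, using Spark Structured Streaming to consume JSON events from Kafka and append them to parquet. The broker address, topic, schema, and S3 paths are illustrative, and the job assumes the spark-sql-kafka connector is on the classpath.

```python
# A minimal Structured Streaming sketch: Kafka JSON events -> parquet.
# Broker, topic, schema, and paths are illustrative placeholders.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("kafka_ingest").getOrCreate()

schema = StructType([
    StructField("order_id", StringType()),
    StructField("status", StringType()),
    StructField("event_time", TimestampType()),
])

stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "orders")
    .load()
    # Kafka delivers bytes; decode and parse the JSON payload.
    .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

query = (
    stream.writeStream.format("parquet")
    .option("path", "s3://example-bucket/streams/orders/")
    .option("checkpointLocation", "s3://example-bucket/checkpoints/orders/")
    .outputMode("append")
    .start()
)
query.awaitTermination()
```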
Posted 3 weeks ago
4.0 - 7.0 years
6 - 15 Lacs
Kolkata, Gurugram, Chennai
Hybrid
Sr Data Engineer (SQL, Python, Snowflake, AWS)
Exp: 4 to 7 years
We are looking for a Data Engineer with expertise in SQL and Python, along with foundational knowledge of AWS Glue ETL. The ideal candidate should have experience with access roles, policies, and RBAC/PBAC implementation, especially within Snowflake. Experience working in agile product teams is essential, and certifications in AWS Cloud and Snowflake Engineering are highly preferred.
Roles and Responsibilities:
Write efficient SQL queries to process and manipulate large datasets.
Develop and optimize Python-based data processing scripts and workflows.
Design, develop, and maintain scalable ETL pipelines using AWS Glue.
Participate in agile development processes, including sprint planning, stand-ups, and retrospectives.
Work on cloud-based data solutions, leveraging AWS and Snowflake best practices.
Apply access control mechanisms, including RBAC (Role-Based Access Control) and PBAC (Policy-Based Access Control), within Snowflake.
Configure and monitor Snowflake access roles, policies, and user permissions (see the sketch after this listing).
Required Qualifications:
3-5 years of experience in data engineering or a related field.
Strong proficiency in SQL and Python.
Basic understanding of AWS Glue ETL and its functionalities.
Experience working in an agile product development environment.
Basic knowledge of access roles, policies, RBAC, PBAC, and Snowflake access role management.
Strong understanding of cloud-based data solutions (AWS, Snowflake).
Excellent problem-solving and analytical skills.
Preferred Qualifications:
AWS Cloud certification (AWS Certified Data Analytics, AWS Certified Solutions Architect, etc.)
Snowflake Engineering certification.
Experience with big data processing frameworks (Spark, Hadoop).
Knowledge of data governance and security best practices.
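As a sketch of the Snowflake RBAC configuration work mentioned above, the snippet below uses the snowflake-connector-python package to create a read-only role and grant it to a service user. Role, warehouse, database, and user names are hypothetical, and credentials would normally come from a secrets manager rather than being inlined.

```python
# A minimal RBAC sketch; every identifier below is an illustrative placeholder.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",
    user="SECURITY_ADMIN_USER",
    password="***",          # in practice, fetch from a secrets manager
    role="SECURITYADMIN",
)

statements = [
    "CREATE ROLE IF NOT EXISTS ANALYTICS_READER",
    "GRANT USAGE ON WAREHOUSE ANALYTICS_WH TO ROLE ANALYTICS_READER",
    "GRANT USAGE ON DATABASE ANALYTICS TO ROLE ANALYTICS_READER",
    "GRANT USAGE ON SCHEMA ANALYTICS.PUBLIC TO ROLE ANALYTICS_READER",
    "GRANT SELECT ON ALL TABLES IN SCHEMA ANALYTICS.PUBLIC TO ROLE ANALYTICS_READER",
    "GRANT ROLE ANALYTICS_READER TO USER REPORTING_SVC",
]

cur = conn.cursor()
try:
    for stmt in statements:
        cur.execute(stmt)  # each GRANT is idempotent and safe to re-run
finally:
    cur.close()
    conn.close()
```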
Posted 3 weeks ago
8.0 - 13.0 years
15 - 25 Lacs
Bengaluru
Hybrid
We are looking for a seasoned Data Architect (Senior Engineer) to lead the design and implementation of scalable, secure, and privacy-compliant data architectures that support feedback-driven AI systems.
Posted 3 weeks ago
4.0 - 8.0 years
9 - 11 Lacs
Hyderabad
Remote
Role: Data Engineer (ETL Processes, SSIS, AWS)
Duration: Full-time
Location: Remote
Working hours: 4:30 AM to 10:30 AM IST shift timings.
Note: We need an ETL engineer for MS SQL Server Integration Services, working a 4:30 AM to 10:30 AM IST shift.
Roles & Responsibilities:
Design, develop, and maintain ETL processes using SQL Server Integration Services (SSIS).
Create and optimize complex SQL queries, stored procedures, and data transformation logic on Oracle and SQL Server databases.
Build scalable and reliable data pipelines using AWS services (e.g., S3, Glue, Lambda, RDS, Redshift).
Develop and maintain Linux shell scripts to automate data workflows and perform system-level tasks.
Schedule, monitor, and troubleshoot batch jobs using tools like Control-M, AutoSys, or cron.
Collaborate with stakeholders to understand data requirements and deliver high-quality integration solutions.
Ensure data quality, consistency, and security across systems.
Maintain detailed documentation of ETL processes, job flows, and technical specifications.
Experience with job scheduling tools such as Control-M and/or AutoSys.
Exposure to version control tools (e.g., Git) and CI/CD pipelines.
Posted 3 weeks ago
4.0 - 6.0 years
15 - 25 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
Warm welcome from SP Staffing Services! Reaching out to you regarding a permanent opportunity!
Job Description:
Exp: 4-6 yrs
Location: Chennai/Hyderabad/Bangalore/Pune/Bhubaneshwar/Kochi
Skill: PySpark
Implementing data ingestion pipelines from different types of data sources, i.e., databases, S3, files, etc. (see the sketch after this listing).
Experience in building ETL/data warehouse transformation processes.
Developing Big Data and non-Big Data cloud-based enterprise solutions in PySpark and Spark SQL and related frameworks/libraries.
Developing scalable, reusable, self-service frameworks for data ingestion and processing.
Integrating end-to-end data pipelines to take data from source to target data repositories, ensuring the quality and consistency of data.
Processing performance analysis and optimization.
Bringing best practices in the following areas: design & analysis, automation (pipelining, IaC), testing, monitoring, documentation.
Experience working with structured and unstructured data.
Good to have (knowledge):
1. Experience in cloud-based solutions.
2. Knowledge of data management principles.
Interested candidates can share a resume to sangeetha.spstaffing@gmail.com with the below details inline:
Full Name as per PAN:
Mobile No:
Alt No/WhatsApp No:
Total Exp:
Relevant Exp in PySpark:
Rel Exp in Python:
Rel Exp in ETL/Big Data:
Current CTC:
Expected CTC:
Notice Period (Official):
Notice Period (Negotiable)/Reason:
Date of Birth:
PAN number:
Reason for Job Change:
Offer in Pipeline (Current Status):
Availability for virtual interview on weekdays between 10 AM - 4 PM (please mention time):
Current Res Location:
Preferred Job Location:
Whether educational % in 10th std, 12th std, UG is all above 50%?
Do you have any gaps in between your education or career? If so, please mention the duration in months/years:
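A minimal sketch of the PySpark ingestion pattern referenced in this description: a reusable helper that pulls a table over JDBC and lands it in S3 as parquet. The connection string, table, and bucket are hypothetical, and the appropriate JDBC driver must be on the Spark classpath.

```python
# A minimal ingestion helper sketch; connection details, table names, and
# the target bucket are illustrative placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("jdbc_ingest").getOrCreate()

def ingest_table(jdbc_url: str, table: str, target_prefix: str) -> None:
    df = (
        spark.read.format("jdbc")
        .option("url", jdbc_url)
        .option("dbtable", table)
        .option("fetchsize", "10000")  # stream rows instead of one big fetch
        .load()
    )
    df.write.mode("overwrite").parquet(f"{target_prefix}/{table}")

ingest_table(
    "jdbc:postgresql://db.example.com:5432/sales?user=ingest&password=***",
    "public.orders",
    "s3://example-bucket/landing",
)
```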
Posted 3 weeks ago
5.0 - 10.0 years
4 - 8 Lacs
Chennai, Guindy
Work from Office
Data ELT Engineer
Chennai - Guindy, India | Information Technology | 17075
Overview
We are looking for a highly skilled Data ELT Engineer to architect and implement data solutions that support our enterprise analytics and real-time decision-making capabilities. This role combines data modeling expertise with hands-on experience building and managing ELT pipelines across diverse data sources. You will work with Snowflake, AWS Glue, and Apache Kafka to ingest, transform, and stream both batch and real-time data, ensuring high data quality and performance across systems. If you have a passion for data architecture and scalable engineering, we want to hear from you.
Responsibilities
Design, build, and maintain scalable ELT pipelines into Snowflake from diverse sources, including relational databases (SQL Server, MySQL, Oracle) and SaaS platforms.
Utilize AWS Glue for data extraction and transformation, and Kafka for real-time streaming ingestion.
Model data using dimensional and normalized techniques to support analytics and business intelligence workloads.
Handle large-scale batch processing jobs and implement real-time streaming solutions.
Ensure data quality, consistency, and governance across pipelines.
Collaborate with data analysts, data scientists, and business stakeholders to align models with organizational needs.
Monitor, troubleshoot, and optimize pipeline performance and reliability.
Requirements
5+ years of experience in data engineering and data modeling.
Strong proficiency with SQL and data modeling techniques (star, snowflake schemas).
Hands-on experience with the Snowflake data platform.
Proficiency with AWS Glue (ETL jobs, crawlers, workflows).
Experience using Apache Kafka for streaming data integration.
Experience with batch and streaming data processing.
Familiarity with orchestration tools (e.g., Airflow, Step Functions) is a plus.
Strong understanding of data governance and best practices in data architecture.
Excellent problem-solving skills and communication abilities.
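To illustrate the Kafka-to-Snowflake ingestion path this role describes, here is a minimal consumer sketch that micro-batches events into S3 as a staging area, from which Snowpipe or COPY INTO would load Snowflake. It assumes the kafka-python and boto3 packages; the topic, bucket, and batch thresholds are illustrative.

```python
# A minimal staging-consumer sketch; topic, broker, bucket, and thresholds
# are illustrative placeholders.
import json
import time
import uuid

import boto3
from kafka import KafkaConsumer

s3 = boto3.client("s3")
consumer = KafkaConsumer(
    "orders",
    bootstrap_servers="broker:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

batch, last_flush = [], time.time()
for message in consumer:
    batch.append(message.value)
    # Flush every 500 records or 60 seconds, whichever comes first.
    if len(batch) >= 500 or time.time() - last_flush > 60:
        key = f"staging/orders/{uuid.uuid4()}.json"
        body = "\n".join(json.dumps(r) for r in batch)
        s3.put_object(Bucket="example-bucket", Key=key, Body=body.encode("utf-8"))
        batch, last_flush = [], time.time()
```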
Posted 4 weeks ago
15.0 - 20.0 years
5 - 9 Lacs
Chennai
Work from Office
Project Role: Advanced Application Engineer
Project Role Description: Utilize modular architectures, next-generation integration techniques, and a cloud-first, mobile-first mindset to provide vision to application development teams. Work with an Agile mindset to create value across projects of multiple scopes and scales.
Must have skills: BlueYonder Enterprise Supply Planning
Good to have skills: NA
Minimum 12 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary: We are looking for an experienced Integration Architect to lead the design and execution of integration strategies for Blue Yonder (BY) implementations across cloud-native environments. The ideal candidate will possess strong expertise in integrating supply chain platforms with enterprise cloud systems, data lakes, and Snowflake, along with working knowledge of Generative AI (Gen AI) to enhance automation and intelligence in integration and data workflows.
Roles & Responsibilities:
- Architect and implement end-to-end integration solutions for Blue Yonder (WMS, TMS, ESP, etc.) with enterprise systems (ERP, CRM, legacy).
- Design integration flows using cloud-native middleware platforms (Azure Integration Services, AWS Glue, GCP Dataflow, etc.).
- Enable real-time and batch data ingestion into cloud-based data lakes (e.g., AWS S3, Azure Data Lake, Google Cloud Storage) and downstream to Snowflake.
- Develop scalable data pipelines to support analytics, reporting, and operational insights from Blue Yonder and other systems.
- Integrate Snowflake as an enterprise data platform for unified reporting and machine learning use cases.
Professional & Technical Skills:
- Leverage Generative AI (e.g., OpenAI, Azure OpenAI) for auto-generating integration mapping specs and documentation.
- Enhance data quality and reconciliation with intelligent agents.
- Develop copilots for integration teams to speed up development and troubleshooting.
- Ensure integration architecture adheres to security, performance, and compliance standards.
- Collaborate with enterprise architects, functional consultants, data engineers, and business stakeholders.
- Lead troubleshooting, performance tuning, and hypercare support post-deployment.
Additional Information:
- The candidate should have a minimum of 5 years of experience in BlueYonder Enterprise Supply Planning.
- This position is based at our Chennai office.
- 15 years of full-time education is required.
Qualification: 15 years full time education
Posted 4 weeks ago
15.0 - 20.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: Pytest
Good to have skills: DevOps, AWS Glue
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education
Job Title: Data QA Engineer
Key Responsibilities:
Ensure the quality and reliability of data pipelines and workflows within the AWS ecosystem.
Design and implement comprehensive test strategies for data validation, transformation, and integration processes.
Collaborate with development teams to modernize applications and ensure data quality across strategic platforms and technologies, such as Amazon Web Services.
Develop automated testing frameworks and scripts using Python to validate medium to complex data processing logic.
Perform rigorous testing of ETL pipelines, ensuring scalability, efficiency, and adherence to data quality standards.
Maintain detailed documentation for test cases, data validation logic, and quality assurance processes.
Follow agile methodologies and CI/CD practices to integrate testing seamlessly into development workflows.
Technical Experience:
Expertise in testing scalable ETL pipelines developed using AWS Glue (PySpark) for large-scale data processing.
Proficiency in validating data integration from diverse sources, including Amazon S3, Redshift, RDS, APIs, and on-prem systems.
Experience in testing data ingestion, validation, transformation, and enrichment processes to ensure high data quality and consistency.
Advanced skills in data cleansing, deduplication, transformation, and enrichment testing.
Familiarity with job monitoring, error handling, and alerting mechanisms using AWS CloudWatch and SNS.
Experience in maintaining technical documentation for data workflows, schema logic, and business transformations.
Proficiency in agile methodologies and CI/CD practices with tools like GitLab and Docker.
Good to have:
Experience in Power BI for data visualization and reporting.
Familiarity with building CI/CD pipelines using Git.
Professional Attributes:
Excellent communication, collaboration, and analytical skills.
Flexibility to work shifts if necessary.
Qualification: 15 years full time education
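A minimal pytest sketch of the post-load data-validation work this role centers on: asserting uniqueness, completeness, and range constraints on a curated dataset. The parquet path and column names are hypothetical, and reading directly from S3 with pandas assumes s3fs is installed.

```python
# A minimal data-quality test sketch; dataset location and columns are
# illustrative placeholders for the pipeline under test.
import pandas as pd
import pytest

@pytest.fixture(scope="module")
def orders():
    # Reading s3:// paths with pandas requires the s3fs package.
    return pd.read_parquet("s3://example-bucket/curated/orders/")

def test_primary_key_is_unique(orders):
    assert orders["order_id"].is_unique

def test_no_nulls_in_required_columns(orders):
    required = ["order_id", "customer_id", "order_date"]
    assert orders[required].notna().all().all()

def test_amounts_are_non_negative(orders):
    assert (orders["amount"] >= 0).all()
```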
Posted 4 weeks ago
15.0 - 20.0 years
10 - 14 Lacs
Bengaluru
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills: AWS Glue
Good to have skills: NA
Minimum 7.5 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure project milestones are met, facilitating discussions to address challenges, and guiding your team in implementing effective solutions. You will also engage in strategic planning sessions to align project goals with organizational objectives, ensuring that all stakeholders are informed and involved in the development process. Your role will require you to balance technical oversight with team management, fostering an environment of innovation and collaboration.
Roles & Responsibilities:
- Expected to be an SME.
- Collaborate and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute on key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge.
- Facilitate regular team meetings to discuss progress and address any roadblocks.
Professional & Technical Skills:
- Must To Have Skills: Proficiency in AWS Glue.
- Good To Have Skills: Experience with AWS Lambda, AWS S3, and AWS Redshift.
- Strong understanding of data integration and ETL processes.
- Experience with cloud-based data warehousing solutions.
- Familiarity with data governance and security best practices.
Additional Information:
- The candidate should have minimum 7.5 years of experience in AWS Glue.
- This position is based at our Bengaluru office.
- A 15 years full time education is required.
Qualification: 15 years full time education
Posted 4 weeks ago
15.0 - 20.0 years
10 - 14 Lacs
Bengaluru
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills: AWS BigData
Good to have skills: NA
Minimum 12 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your day will involve overseeing the application development process and ensuring project success.
Roles & Responsibilities:
- Expected to be an SME.
- Collaborate and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute on key decisions.
- Expected to provide solutions to problems that apply across multiple teams.
- Lead the application development process effectively.
- Ensure timely delivery of projects.
- Provide guidance and mentorship to team members.
Professional & Technical Skills:
- Must To Have Skills: Proficiency in AWS BigData.
- Strong understanding of cloud computing and AWS services.
- Experience in designing and implementing Big Data solutions.
- Knowledge of data warehousing and data lake concepts.
- Hands-on experience with big data technologies such as Hadoop and Spark.
Additional Information:
- The candidate should have a minimum of 12 years of experience in AWS BigData.
- This position is based at our Gurugram office.
- A 15 years full-time education is required.
Qualification: 15 years full time education
Posted 4 weeks ago
15.0 - 20.0 years
10 - 14 Lacs
Gurugram
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills: AWS Glue
Good to have skills: NA
Minimum 12 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your day will involve overseeing the application development process, coordinating with team members, and ensuring project milestones are met.
Roles & Responsibilities:
- Expected to be an SME.
- Collaborate and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute on key decisions.
- Lead the effort to design, build, and configure applications.
- Act as the primary point of contact for the application development process.
- Coordinate with team members to ensure project milestones are met.
- Provide guidance and support to team members throughout the development lifecycle.
Professional & Technical Skills:
- Must To Have Skills: Proficiency in AWS Glue.
- Strong understanding of cloud computing principles.
- Experience in designing and implementing scalable applications.
- Knowledge of data integration and ETL processes.
- Hands-on experience with AWS services such as S3, Lambda, and Redshift.
Additional Information:
- The candidate should have a minimum of 12 years of experience in AWS Glue.
- This position is based at our Gurugram office.
- A 15 years full-time education is required.
Qualification: 15 years full time education
Posted 4 weeks ago
4.0 - 8.0 years
7 - 11 Lacs
Bengaluru
Work from Office
About Us
Capco, a Wipro company, is a global technology and management consulting firm. Awarded Consultancy of the Year in the British Bank Awards and ranked among the Top 100 Best Companies for Women in India 2022 by Avtar & Seramount. With a presence across 32 cities globally, we support 100+ clients across the banking, financial, and energy sectors. We are recognized for our deep transformation execution and delivery.
WHY JOIN CAPCO
You will work on engaging projects with the largest international and local banks, insurance companies, payment service providers, and other key players in the industry. These projects will transform the financial services industry.
MAKE AN IMPACT
Innovative thinking, delivery excellence, and thought leadership to help our clients transform their business. Together with our clients and industry partners, we deliver disruptive work that is changing energy and financial services.
#BEYOURSELFATWORK
Capco has a tolerant, open culture that values diversity, inclusivity, and creativity.
CAREER ADVANCEMENT
With no forced hierarchy at Capco, everyone has the opportunity to grow as we grow, taking their career into their own hands.
DIVERSITY & INCLUSION
We believe that diversity of people and perspective gives us a competitive advantage.
Job Title: QA Test Automation
Exp: 8+ years
Location: Pune
We are seeking a Senior QA Data Engineer with expertise in AWS Cloud and proficiency in Jenkins, AWS Glue, Lambda, Python, S3, and test automation. The ideal candidate will ensure the quality and reliability of data pipelines through robust testing and collaboration with DevOps teams on CI/CD pipelines.
Key Responsibilities:
Develop and maintain automated test cases for ETL pipelines, APIs, and data workflows using Python.
Build and validate Jenkins CI/CD pipelines for automated testing and deployment.
Test and troubleshoot AWS services, including Glue, Lambda, RDS, S3, and Iceberg, ensuring seamless integration and scalability.
Collaborate closely with DevOps to enhance deployment processes and pipeline efficiency.
Design data validation frameworks and monitor pipeline performance for data quality assurance.
Validate NoSQL and relational database integrations in data workflows.
Document test strategies, results, and best practices for cross-team alignment.
Required Skills:
Strong expertise in CI/CD tools like Jenkins and test automation with Python.
Proficiency in SQL and NoSQL databases for data validation and analysis.
Hands-on experience with AWS services, including Glue, Lambda, RDS, Iceberg, S3, and CloudWatch.
Proven track record in testing and validating ETL workflows and data pipelines.
Strong problem-solving skills and a team-oriented mindset for effective collaboration with DevOps and engineering teams.
Preferred:
AWS certifications (e.g., AWS Developer Associate).
Experience with tools like Airflow or Spark.
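As a sketch of the AWS-focused test automation this role calls for, the pytest case below uses moto's mocked AWS backend (the mock_aws decorator in moto >= 5) to exercise an S3-backed transform without touching real infrastructure. The transform_object function and bucket names are hypothetical stand-ins for the code under test.

```python
# A minimal sketch of unit-testing S3-backed logic with moto; the function
# and bucket names are illustrative placeholders.
import json

import boto3
from moto import mock_aws

def transform_object(bucket: str, key: str) -> dict:
    # Hypothetical function under test: read JSON, uppercase the status field.
    s3 = boto3.client("s3", region_name="us-east-1")
    record = json.loads(s3.get_object(Bucket=bucket, Key=key)["Body"].read())
    record["status"] = record["status"].upper()
    return record

@mock_aws
def test_transform_object_uppercases_status():
    # Arrange: create a fake bucket and seed it with a small JSON object.
    s3 = boto3.client("s3", region_name="us-east-1")
    s3.create_bucket(Bucket="test-bucket")
    s3.put_object(Bucket="test-bucket", Key="in.json", Body=json.dumps({"status": "ok"}))

    # Act + assert: the transform reads from the mocked bucket.
    assert transform_object("test-bucket", "in.json")["status"] == "OK"
```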
Posted 4 weeks ago
5.0 - 8.0 years
17 - 20 Lacs
Pune
Remote
At Codvo, software and people transformations go together. We are a global empathy-led technology services company with a core DNA of product innovation and mature software engineering. We uphold the values of Respect, Fairness, Growth, Agility, and Inclusiveness in everything we do.
About the Role:
We are looking for a Data & BI Solution Architect to lead data analytics initiatives in the retail domain. The candidate should be skilled in data modeling, ETL, visualization, and big data technologies.
Responsibilities:
Architect end-to-end data and BI solutions for retail analytics.
Define data governance, security, and compliance frameworks.
Work with stakeholders to design dashboards and reports for business insights.
Implement data pipelines and integrate with cloud platforms.
Skills Required:
Proficiency in SQL, Python, and Spark.
Experience with ETL tools (Informatica, Talend, AWS Glue).
Knowledge of Power BI, Tableau, and Looker.
Hands-on experience with cloud data platforms (Snowflake, Redshift, BigQuery).
Posted 4 weeks ago
5.0 - 10.0 years
15 - 27 Lacs
Gurugram, Bengaluru
Work from Office
Data Engineer
Exp: 4 years of experience (minimum relevant: 3+ years in data engineering)
Location: Gurgaon
Role Summary: The Data Engineer will develop and maintain AWS-based data pipelines, ensuring optimal ingestion, transformation, and storage of clinical trial data. The role requires expertise in ETL, AWS Glue, Lambda functions, and Redshift optimization.
Must have:
AWS (Glue, Lambda, Redshift, Step Functions)
Python, SQL, API-based ingestion
PySpark
Redshift, SQL/PostgreSQL, Snowflake (optional)
Redshift query optimization, indexing
IAM, encryption, row-level security
Posted 4 weeks ago
5.0 - 9.0 years
5 - 9 Lacs
Hyderabad
Work from Office
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: Snowflake Data Warehouse
Good to have skills: Python (Programming Language), AWS Glue, AWS Lambda Administration
Minimum 5 year(s) of experience is required
Educational Qualification: Graduate
Summary: As a Snowflake Data Warehouse Architect, you will be responsible for leading the implementation of Infrastructure Services projects, leveraging our global delivery capability. Your typical day will involve working with Snowflake Data Warehouse, AWS Glue, AWS Lambda Administration, and the Python programming language.
Roles & Responsibilities:
Lead the design and implementation of Snowflake Data Warehouse solutions for Infrastructure Services projects.
Collaborate with cross-functional teams to ensure successful delivery of projects, leveraging AWS Glue and AWS Lambda Administration.
Provide technical guidance and mentorship to junior team members.
Stay updated with the latest advancements in Snowflake Data Warehouse and related technologies, integrating innovative approaches for sustained competitive advantage.
Professional & Technical Skills:
Must To Have Skills: Strong experience in Snowflake Data Warehouse.
Good To Have Skills: Proficiency in the Python programming language, AWS Glue, and AWS Lambda Administration.
Experience in leading the design and implementation of Snowflake Data Warehouse solutions.
Strong understanding of data architecture principles and best practices.
Experience in data modeling, data integration, and data warehousing.
Experience in performance tuning and optimization of Snowflake Data Warehouse solutions.
Additional Information:
The candidate should have a minimum of 5 years of experience in Snowflake Data Warehouse.
The ideal candidate will possess a strong educational background in computer science, information technology, or a related field, along with a proven track record of delivering impactful data-driven solutions.
This position is based at our Bengaluru office.
Qualifications: Graduate
Posted 4 weeks ago
12.0 - 17.0 years
14 - 19 Lacs
Bengaluru
Work from Office
Project Role: Cloud Migration Engineer
Project Role Description: Provides assessment of existing solutions and infrastructure to migrate to the cloud. Plan, deliver, and implement application and data migration with scalable, high-performance solutions using private and public cloud technologies, driving next-generation business outcomes.
Must have skills: AWS CloudFormation
Good to have skills: NA
Minimum 12 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary: As a Cloud Migration Engineer, you will provide assessment of existing solutions and infrastructure to migrate to the cloud. You will plan, deliver, and implement application and data migration with scalable, high-performance solutions using private and public cloud technologies, driving next-generation business outcomes.
Roles & Responsibilities:
Expected to be an SME; collaborate and manage the team to perform.
Responsible for team decisions.
Engage with multiple teams and contribute on key decisions.
Expected to provide solutions to problems that apply across multiple teams.
Collaborate with stakeholders to understand business requirements and develop migration strategies.
Assess existing infrastructure and applications to identify opportunities for cloud migration.
Design and implement scalable and high-performance cloud solutions.
Manage and monitor cloud migration projects to ensure successful delivery.
Professional & Technical Skills:
Must To Have Skills: Proficiency in AWS CloudFormation.
Experience with cloud migration tools and services such as AWS Database Migration Service and AWS Server Migration Service.
Strong understanding of cloud computing concepts and architectures.
Knowledge of cloud security best practices and compliance standards.
Experience with scripting languages such as Python or PowerShell.
Additional Information:
The candidate should have a minimum of 12 years of experience in AWS CloudFormation.
This position is based at our Bengaluru office.
A 15 years full time education is required.
Qualifications: 15 years full time education
Posted 4 weeks ago