2.0 - 7.0 years
4 - 7 Lacs
Hyderabad
Work from Office
Design, develop, and deploy ETL workflows and mappings using Informatica PowerCenter. Extract data from various source systems and transform/load it into target systems. Troubleshoot ETL job failures and resolve data issues promptly. Optimize and tune complex SQL queries.

Required candidate profile: Maintain detailed documentation of ETL design, mapping logic, and processes. Ensure data quality and integrity through validation and testing. Experience with Informatica PowerCenter and strong SQL knowledge required.
Posted 2 weeks ago
5.0 - 10.0 years
11 - 21 Lacs
Kochi, Bengaluru
Work from Office
Hiring for Senior Python Developer (Data Engineering) with Mage.ai; Mage.ai experience is mandatory.
Location: Remote/Bangalore/Kochi | Experience: 6+ years | Role Type: Full-time

About the Role
We are looking for a Senior Python Developer with strong data engineering expertise to help us build and optimize data workflows, manage large-scale pipelines, and enable efficient data operations across the organization. This role requires hands-on experience with Mage.AI, PySpark, and cloud-based data engineering workflows, and will play a critical part in our data infrastructure modernization efforts.

Required Skills & Experience
• 6+ years of hands-on Python development with a strong data engineering focus.
• Deep experience in Mage.AI for building and managing data workflows.
• Advanced proficiency in PySpark for distributed data processing and pipeline orchestration.
• Strong understanding of ETL/ELT best practices, data architecture, and pipeline design patterns.
• Familiarity with data warehouse technologies (PostgreSQL, Redshift, Snowflake, etc.).
• Experience integrating APIs, databases, and file-based sources into scalable pipelines.
• Strong problem-solving, debugging, and performance-tuning skills.

Preferred Qualifications
• Experience with cloud platforms (AWS, GCP, Azure) and deploying pipelines on EMR, EKS, or GKE.
• Exposure to streaming data workflows (e.g., Kafka, Spark Streaming).
• Experience working in Agile teams, contributing to sprint planning and code reviews.
• Contributions to open-source projects or community engagement in the data engineering space.

If interested, apply for this role.
With regards,
Rathna (rathna@trinityconsulting.asia)
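For readers unfamiliar with Mage.ai, a minimal sketch of the kind of data-loader block the tool uses is shown below. The decorator import follows the pattern used in Mage's standard block templates (presented here as an assumption to verify against your Mage version); the endpoint URL and CSV handling are hypothetical, not from this posting.

    import io

    import pandas as pd
    import requests

    # Mage injects block decorators at runtime; block templates import them
    # explicitly so the module also runs outside a pipeline.
    if 'data_loader' not in globals():
        from mage_ai.data_preparation.decorators import data_loader

    @data_loader
    def load_orders_from_api(*args, **kwargs):
        # Fetch a CSV export from a hypothetical orders API into a DataFrame,
        # which Mage then hands to downstream transformer blocks.
        url = 'https://example.com/exports/orders.csv'  # hypothetical endpoint
        response = requests.get(url, timeout=30)
        response.raise_for_status()
        return pd.read_csv(io.StringIO(response.text))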
Posted 2 weeks ago
5.0 - 10.0 years
22 - 35 Lacs
Kochi, Bengaluru
Hybrid
Greetings from Trinity! We are looking for a Python Developer with proficiency in Mage.AI.

Senior Python Developer (Data Engineering Focus)
Location: Bangalore/Kochi/Remote
Budget: 15-25 LPA for 5-7 years; 24-32 LPA for 7-9 years
Mode of hiring: FTE

About the Role
We are looking for a Senior Python Developer with strong data engineering expertise to help us build and optimize data workflows, manage large-scale pipelines, and enable efficient data operations across the organization. This role requires hands-on experience with Mage.AI and PySpark.

Required Skills & Experience
• 6+ years of hands-on Python development with a strong data engineering focus.
• Deep experience in Mage.AI for building and managing data workflows.
• Advanced proficiency in PySpark for distributed data processing and pipeline orchestration.
• Strong understanding of ETL/ELT best practices, data architecture, and pipeline design patterns.
• Familiarity with data warehouse technologies (PostgreSQL, Redshift, Snowflake, etc.).
Posted 2 weeks ago
6.0 - 11.0 years
5 - 15 Lacs
Hyderabad
Hybrid
Dear Candidates,

We are conducting a face-to-face drive on 7th June 2025. If you are interested, kindly share your updated resume as soon as possible.

JD details:
Role: Data Engineer with Python, Apache Spark, HDFS
Experience: 6 to 12 years
Location: Hyderabad
Shift Timings: General shift

Key Responsibilities:
• Design, develop, and maintain scalable data pipelines using Python and Spark.
• Ingest, process, and transform large datasets from various sources into usable formats.
• Manage and optimize data storage using HDFS and MongoDB.
• Ensure high availability and performance of data infrastructure.
• Implement data quality checks, validations, and monitoring processes.
• Collaborate with cross-functional teams to understand data needs and deliver solutions.
• Write reusable and maintainable code with strong documentation practices.
• Optimize performance of data workflows and troubleshoot bottlenecks.
• Maintain data governance, privacy, and security best practices.

Required qualifications:
• Minimum 6 years of experience as a Data Engineer or in a similar role.
• Strong proficiency in Python for data manipulation and pipeline development.
• Hands-on experience with Apache Spark for large-scale data processing.
• Experience with HDFS and distributed data storage systems.
• Strong understanding of data architecture, data modeling, and performance tuning.
• Familiarity with version control tools like Git.
• Experience with workflow orchestration tools (e.g., Airflow, Luigi) is a plus.
• Knowledge of cloud services (AWS, GCP, or Azure) is preferred.
• Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.

Preferred Skills:
• Experience with containerization (Docker, Kubernetes).
• Knowledge of real-time data streaming tools like Kafka.
• Familiarity with data visualization tools (e.g., Power BI, Tableau).
• Exposure to Agile/Scrum methodologies.

Note: If interested, please share your updated resume to jamshira@srinav.net with the details below as soon as possible.

Details needed: Full Name; Mail ID; Contact Number; Current Exp; Relevant Exp; CTC; Expected CTC/month; Current Location; Relocation (Yes/No); Notice Period (official); LWD; Holding offer in hand; Tentative DOJ; PAN ID; DOB (DD/MM/YYYY); LinkedIn profile link.
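As a point of reference for the pipeline work this drive targets, here is a minimal PySpark sketch of an HDFS read-transform-write step; all paths and column names are illustrative assumptions, not details from the posting.

    from pyspark.sql import SparkSession, functions as F

    # Read raw events from HDFS, apply basic quality rules, and write a
    # curated parquet layer. Paths and columns are hypothetical.
    spark = SparkSession.builder.appName("events-pipeline").getOrCreate()

    raw = spark.read.json("hdfs:///data/raw/events/")
    clean = (
        raw.dropDuplicates(["event_id"])              # data-quality check
           .filter(F.col("event_ts").isNotNull())
           .withColumn("event_date", F.to_date("event_ts"))
    )
    clean.write.mode("overwrite").partitionBy("event_date").parquet(
        "hdfs:///data/curated/events/"
    )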
Posted 2 weeks ago
3.0 - 8.0 years
8 - 16 Lacs
Bengaluru
Work from Office
Role & responsibilities

Qualifications
Experience: 3-6 years
Education: B.E/B.Tech/MCA/M.Tech

Minimum Qualifications
• Bachelor's degree in Computer Science, CIS, or a related field (or equivalent work experience in a related field)
• 3 years of experience in software development or a related field
• 2 years of experience working on project(s) involving the implementation of solutions applying development life cycles (SDLC)

You will be responsible for designing, building, and maintaining our data infrastructure, ensuring data quality, and enabling data-driven decision-making across the organization. The ideal candidate will have a strong background in data engineering, excellent problem-solving skills, and a passion for working with data.

Responsibilities:
• Design, build, and maintain our data infrastructure, including data pipelines, warehouses, and databases
• Ensure data quality and integrity by implementing data validation, testing, and monitoring processes
• Collaborate with cross-functional teams to understand data needs and translate them into technical requirements
• Develop and implement data security and privacy policies and procedures
• Optimize data processing and storage performance, ensuring scalability and reliability
• Stay up-to-date with the latest data engineering trends and technologies
• Provide mentorship and guidance to junior data engineers and analysts
• Contribute to the development of data-driven solutions and products

Requirements:
• 3+ years of experience in data engineering, with a Bachelor's degree in Computer Science, Engineering, or a related field
• Strong knowledge of data engineering tools and technologies, including SQL and GCP
• Experience with big data processing frameworks such as Spark or Hadoop, or with Python
• Experience with data warehousing solutions: BigQuery
• Strong problem-solving skills, with the ability to analyze complex data sets and identify trends and insights
• Excellent communication and collaboration skills, with the ability to work with cross-functional teams and stakeholders
• Strong data security and privacy knowledge and experience
• Experience with agile development methodologies is a plus

Preferred candidate profile
3-4 yrs: max 12 LPA budget
4-6 yrs: max 14-16 LPA budget
Posted 2 weeks ago
8.0 - 13.0 years
25 - 40 Lacs
Bengaluru
Work from Office
Job Title: Data Engineer (Java + Hadoop/Spark)
Location: Bangalore (WFO)
Type: Full Time
Experience: 8-12 years
Notice Period: Immediate joiners to 30 days
Virtual drive on 1st June '25

Job Description:
We are looking for a skilled Data Engineer with strong expertise in Java and hands-on experience with Hadoop or Spark. The ideal candidate will be responsible for designing, building, and maintaining scalable data pipelines and processing systems.

Key Responsibilities:
• Develop and maintain data pipelines using Java.
• Work with big data technologies such as Hadoop or Spark to process large datasets.
• Optimize data workflows and ensure high performance and reliability.
• Collaborate with data scientists, analysts, and other engineers on data-related initiatives.

Requirements:
• Strong programming skills in Java.
• Hands-on experience with Hadoop or Spark.
• Experience with data ingestion, transformation, and storage solutions.
• Familiarity with distributed systems and big data architecture.

If interested, send your updated resume to rosalin.m@genxhire.in or 8976791986, and share the following details: Current CTC; Expected CTC; Notice Period; Age; Reason for leaving last job.
Posted 2 weeks ago
3.0 - 5.0 years
10 - 12 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
Data Pipelines: Proven experience in building scalable and reliable data pipelines
BigQuery: Expertise in writing complex SQL transformations; hands-on with indexing and performance optimization
Ingestion: Skilled in data scraping and ingestion through RESTful APIs and file-based sources
Orchestration: Familiarity with orchestration tools like Prefect, Apache Airflow (nice to have)
Tech Stack: Proficient in Python, FastAPI, and PostgreSQL
End-to-End Workflows: Capable of owning ingestion, transformation, and delivery processes

Location: Remote, Delhi NCR, Bangalore, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad
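To illustrate the end-to-end ingestion ownership this role asks for, a small sketch follows that pulls rows from a hypothetical REST endpoint and upserts them into PostgreSQL with psycopg2; the endpoint, table, and connection string are assumptions, not details from the posting.

    import psycopg2
    import requests
    from psycopg2.extras import execute_values

    # Fetch records from a hypothetical API and upsert them into Postgres.
    resp = requests.get("https://api.example.com/v1/items", timeout=30)
    resp.raise_for_status()
    rows = [(r["id"], r["name"], r["price"]) for r in resp.json()]

    conn = psycopg2.connect("dbname=analytics user=etl")  # hypothetical DSN
    with conn, conn.cursor() as cur:                      # commits on success
        execute_values(
            cur,
            "INSERT INTO items (id, name, price) VALUES %s "
            "ON CONFLICT (id) DO UPDATE SET name = EXCLUDED.name, "
            "price = EXCLUDED.price",
            rows,
        )
    conn.close()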
Posted 2 weeks ago
5.0 - 10.0 years
30 - 35 Lacs
Pune, Bengaluru, Mumbai (All Areas)
Work from Office
Good hands-on experience working as a GCP Data Engineer, with very strong experience in SQL and PySpark as well as BigQuery, Dataform, Dataplex, etc. Looking only for immediate joiners or currently serving candidates.
Posted 2 weeks ago
2.0 - 7.0 years
40 - 45 Lacs
Chandigarh
Work from Office
Responsibilities:
• Design and develop complex data processes in coordination with business stakeholders to solve critical financial and operational processes.
• Design and develop ETL/ELT pipelines against traditional databases and distributed systems, and flexibly produce data back to the business and analytics teams for analysis.
• Work in an agile, fail-fast environment directly with business stakeholders and analysts, while recognising data reconciliation and validation requirements.
• Develop data solutions in coordination with development teams across a variety of products and technologies.
• Build processes that analyse and monitor data to help maintain controls: correctness, completeness, and latency.
• Participate in design reviews and code reviews.
• Work with colleagues across global locations.
• Troubleshoot and resolve production issues.
• Performance enhancements.

Required Skills & Qualifications
• Programming skills: Python / PySpark / Scala
• Database skills: analytical databases like Snowflake / SQL
• Good to have: Elasticsearch, Kafka, NiFi, Jupyter Notebooks
• Good to have: knowledge of AWS services like S3 / Glue / Athena / EMR / Lambda
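The "correctness, completeness and latency" controls mentioned above can be made concrete with a small sketch; the batch-metadata structure here is a hypothetical assumption, not this employer's schema.

    from datetime import datetime, timedelta, timezone

    # Express the three controls as simple checks over (hypothetical)
    # metadata collected for each pipeline batch.
    def check_batch(batch, expected_rows, max_lag):
        issues = []
        if batch["row_count"] != expected_rows:                   # completeness
            issues.append("row count %s != expected %s"
                          % (batch["row_count"], expected_rows))
        if batch["checksum_source"] != batch["checksum_target"]:  # correctness
            issues.append("source/target checksums differ")
        lag = datetime.now(timezone.utc) - batch["loaded_at"]     # latency
        if lag > max_lag:
            issues.append("data is %s old, SLA is %s" % (lag, max_lag))
        return issues

    batch = {"row_count": 1000, "checksum_source": "abc",
             "checksum_target": "abc",
             "loaded_at": datetime.now(timezone.utc) - timedelta(minutes=5)}
    print(check_batch(batch, expected_rows=1000, max_lag=timedelta(hours=1)))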
Posted 2 weeks ago
2.0 - 7.0 years
3 - 7 Lacs
Thane, Navi Mumbai, Mumbai (All Areas)
Work from Office
Job Title: Data Analyst/Engineer
Location: Mumbai
Experience: 3-4 years

Job Summary:
We are seeking a skilled Data Analyst/Engineer with expertise in AWS S3 and Python to manage and process large datasets in a cloud environment. The ideal candidate will be responsible for developing efficient data pipelines, managing data storage, and optimizing data workflows in AWS. Your role will involve using your Python skills to automate data tasks.

Key Responsibilities:

Python Scripting and Automation:
• Develop Python scripts for automating data collection, transformation, and loading into cloud storage systems.
• Create robust ETL pipelines to move data between systems and perform data transformation.
• Use Python for interacting with AWS services, including S3 and other AWS resources.

Data Workflow Optimization:
• Design and implement efficient data workflows and pipelines in the AWS cloud environment.
• Monitor and optimize data processing to ensure quick and accurate delivery of datasets.
• Work closely with other teams to integrate data from various sources into S3 for analysis and reporting.

Cloud Services & Data Integration:
• Leverage other AWS services (e.g., Lambda, EC2, RDS) to manage and process data in a scalable manner.
• Integrate data sources through APIs, ensuring real-time availability of critical data.

Required Skills & Qualifications:
• Technical expertise: strong experience managing and working with AWS S3 buckets and other AWS services; advanced proficiency in Python, including experience with libraries such as boto3 and Pandas; hands-on experience building and maintaining ETL pipelines for large datasets.
• Cloud technologies: solid understanding of AWS cloud architecture, including S3, Lambda, and EC2; experience with AWS IAM (Identity and Access Management) for securing S3 buckets.
• Problem solving & automation: proven ability to automate data workflows using Python; strong analytical and problem-solving skills, with a focus on optimizing data storage and processing.

Preferred Qualifications:
• Bachelor's degree in Computer Science or Data Engineering.
• Experience with other AWS services, such as Glue, Redshift, or Athena.
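Since the posting names boto3 and Pandas explicitly, here is a minimal sketch of the S3 read-transform-write loop it describes; bucket names, keys, and columns are hypothetical.

    import io

    import boto3
    import pandas as pd

    # Read a raw CSV from S3, aggregate it with pandas, and write the
    # result back as parquet. All names are hypothetical placeholders.
    s3 = boto3.client("s3")

    obj = s3.get_object(Bucket="raw-data-bucket", Key="sales/2025/06/sales.csv")
    df = pd.read_csv(io.BytesIO(obj["Body"].read()))

    df["order_date"] = pd.to_datetime(df["order_date"])
    daily = df.groupby(df["order_date"].dt.date)["amount"].sum().reset_index()

    buf = io.BytesIO()
    daily.to_parquet(buf, index=False)   # requires pyarrow or fastparquet
    s3.put_object(Bucket="processed-data-bucket",
                  Key="sales/daily_totals.parquet", Body=buf.getvalue())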
Posted 2 weeks ago
3.0 - 5.0 years
0 - 0 Lacs
Hyderabad, Pune, Bangalore Rural
Work from Office
Role & responsibilities

A day in the life of an Infoscion: As part of the Infosys delivery team, your primary role would be to interface with the client for quality assurance, issue resolution, and ensuring high customer satisfaction. You will understand requirements, create and review designs, validate the architecture, and ensure high levels of service offerings to clients in the technology domain. You will participate in project estimation, provide inputs for solution delivery, conduct technical risk planning, and perform code reviews and unit test plan reviews. You will lead and guide your teams towards developing optimized, high-quality code deliverables, continual knowledge management, and adherence to organizational guidelines and processes. You would be a key contributor to building efficient programs/systems. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!
Posted 2 weeks ago
4.0 - 9.0 years
5 - 15 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
Senior Data Engineer - Python:
• Bachelor's or Master's degree in Computer Science, Information Technology, Data Science, or a related field.
• Minimum 4 years of relevant experience.
• Proficient in Python, with hands-on experience building ETL pipelines for data extraction, transformation, and validation.
• Strong SQL skills for working with structured data.
• Familiar with Grafana or Kibana for data visualization and monitoring/dashboards.
• Experience with databases such as MongoDB, Elasticsearch, and MySQL.
• Comfortable working in Linux environments using common Unix tools.
• Hands-on experience with Git, Docker, and virtual machines.
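As a rough illustration of the extract-and-validate work described, a short pymongo sketch follows; the connection string, collection, and required fields are assumptions, not details from the posting.

    from pymongo import MongoClient

    # Extract documents from a hypothetical MongoDB collection and split
    # them with a simple required-field validation rule.
    client = MongoClient("mongodb://localhost:27017")
    orders = client["shop"]["orders"]

    REQUIRED = {"order_id", "customer_id", "amount"}

    valid, invalid = [], []
    for doc in orders.find({}, {"_id": 0}):
        (valid if REQUIRED <= doc.keys() else invalid).append(doc)

    print(f"{len(valid)} valid, {len(invalid)} failed validation")
    # Valid rows would then be transformed and loaded into MySQL or
    # Elasticsearch in later pipeline stages.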
Posted 2 weeks ago
7.0 - 12.0 years
5 - 15 Lacs
Hyderabad
Remote
• 5+ years of experience with strong SQL query/development proficiency
• Develop ETL routines that manipulate and transfer large volumes of data and perform quality checks
• Hands-on experience with ETL tools
Posted 2 weeks ago
7.0 - 12.0 years
5 - 15 Lacs
Bengaluru
Remote
Role & responsibilities

Responsibilities:
• Design, develop, and maintain Collibra workflows tailored to our project's specific needs.
• Collaborate with cross-functional teams to ensure seamless integration of Collibra with other systems.
• Educate team members on Collibra's features and best practices (or educate oneself on them).
• Engage with customers to gather requirements and provide solutions that meet their needs.
• Stay updated with the latest developments in Collibra and data engineering technologies.

Must-Haves:
• Excellent communication skills in English (reading, writing, and speaking).
• Background in Data Engineering or related disciplines.
• Eagerness to learn and become proficient in Collibra and its features.
• Ability to understand and apply Collibra's use cases within the project scope.

Nice-to-Haves:
• Previous experience with Collibra or similar data cataloguing software.
• Familiarity with workflow design and optimization.
• Experience in requirement engineering, particularly in customer-facing roles.
• Knowledge of other cataloguing software and their integration with Collibra.
Posted 2 weeks ago
10.0 - 20.0 years
15 - 30 Lacs
Noida
Remote
Dear Interviewer,

We invite you to apply to become a part-time remote interviewer on India's largest interview-as-a-service platform. We provide on-demand video interview services to employers and job-seekers. If you are interested in becoming a part-time remote interviewer on our platform, please review the details below and apply using this link: Azure Data Engineer Interviewer Application link.

Please note that only applications submitted through the above link will be considered for next steps. You are advised to strictly use the link given above to become part of our platform.

About Risebird:
• Leading interview-as-a-service platform: specializes in connecting companies with expert interviewers for technical and non-technical hiring needs.
• Opportunities for experts: ideal for professionals exploring part-time, freelance, or moonlighting opportunities in interviewing.
• Monetize idle time: enables skilled individuals to earn by conducting interviews during their free hours.
• Extensive interviewer network: over 30,000 interviewers from 2,600+ companies, who have conducted 5 lakh+ interviews.
• Trusted by Fortune 500 companies: the preferred platform for many leading enterprises.
• High earnings for interviewers: over 25 crores paid to part-time interviewers in the last 5 years.
More details at https://risebird.io/

About being an interviewer:
1. Confidentiality and data privacy: interviewer profiles are never shared with customers, never mapped to recruiters at the interviewer's current company, and never mapped to candidates from the interviewer's current company.
2. Payment flexibility: payments on specific dates, with the payment for every interview displayed upfront; you can accept only those interviews that suit your payment expectations.
3. 100% remote: all interviews are conducted online, and we will never request offline visits.
4. No unfair deductions: only 10% TDS is deducted, and the remainder is transferred to your preferred account.
5. A TDS certificate is provided too.
6. Time flexibility: 6 AM to 12 AM on weekdays and weekends; there is no forced schedule, and you decide which interviews you want to take.
7. Easy to use: one-time setup, a 15-minute call to share expectations, and 5-10 minutes at most to watch the portal video.
8. Employment opportunities: interviewers on our platform receive both part-time and full-time job offers based on the quality of the interviews they conduct, while maintaining confidentiality. Offers are shared with interviewers, and only after their approval do we connect them back to the requester.
9. ROI: continuous part-time income in a highly confidential manner for the lifetime of your career, along with opportunities for part-time/full-time employment.
Posted 2 weeks ago
10.0 - 20.0 years
15 - 30 Lacs
Noida
Remote
Dear Interviewer,

We invite you to apply to become a part-time remote interviewer on India's largest interview-as-a-service platform. We provide on-demand video interview services to employers and job-seekers. If you are interested in becoming a part-time remote interviewer on our platform, please review the details below and apply using this link: AWS Data Engineer Interviewer Application Link.

Please note that only applications submitted through the above link will be considered for next steps. You are advised to strictly use the link given above to become part of our platform.

About Risebird:
• Leading interview-as-a-service platform: specializes in connecting companies with expert interviewers for technical and non-technical hiring needs.
• Opportunities for experts: ideal for professionals exploring part-time, freelance, or moonlighting opportunities in interviewing.
• Monetize idle time: enables skilled individuals to earn by conducting interviews during their free hours.
• Extensive interviewer network: over 30,000 interviewers from 2,600+ companies, who have conducted 5 lakh+ interviews.
• Trusted by Fortune 500 companies: the preferred platform for many leading enterprises.
• High earnings for interviewers: over 25 crores paid to part-time interviewers in the last 5 years.
More details at https://risebird.io/

About being an interviewer:
1. Confidentiality and data privacy: interviewer profiles are never shared with customers, never mapped to recruiters at the interviewer's current company, and never mapped to candidates from the interviewer's current company.
2. Payment flexibility: payments on specific dates, with the payment for every interview displayed upfront; you can accept only those interviews that suit your payment expectations.
3. 100% remote: all interviews are conducted online, and we will never request offline visits.
4. No unfair deductions: only 10% TDS is deducted, and the remainder is transferred to your preferred account.
5. A TDS certificate is provided too.
6. Time flexibility: 6 AM to 12 AM on weekdays and weekends; there is no forced schedule, and you decide which interviews you want to take.
7. Easy to use: one-time setup, a 15-minute call to share expectations, and 5-10 minutes at most to watch the portal video.
8. Employment opportunities: interviewers on our platform receive both part-time and full-time job offers based on the quality of the interviews they conduct, while maintaining confidentiality. Offers are shared with interviewers, and only after their approval do we connect them back to the requester.
9. ROI: continuous part-time income in a highly confidential manner for the lifetime of your career, along with opportunities for part-time/full-time employment.
Posted 2 weeks ago
6.0 - 9.0 years
25 - 35 Lacs
Kochi, Chennai, Bengaluru
Work from Office
Experienced Data Engineer (Python, PySpark, Snowflake)
Posted 2 weeks ago
6.0 - 11.0 years
15 - 25 Lacs
Bengaluru
Remote
Location: Remote
Experience: 6-12 years
Immediate joiners preferred

Required Qualifications:
• Bachelor's degree in Computer Science, Information Systems, or a related field.
• 3-5 years of experience in data engineering, cloud architecture, or Snowflake administration.
• Hands-on experience with Snowflake features: Snowpipe, Streams, Tasks, External Tables, and Secure Data Sharing.
• Proficiency in SQL, Python, and data movement tools (e.g., AWS CLI, Azure Data Factory, Google Cloud Storage Transfer).
• Experience with data pipeline orchestration tools such as Apache Airflow, dbt, or Informatica.
• Strong understanding of cloud storage services (S3, Azure Blob, GCS) and working with external stages.
• Familiarity with network security, encryption, and data compliance best practices.
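One of the listed Snowflake features, Snowpipe, can be sketched briefly. This is a hedged example using the Snowflake Python connector with hypothetical account, stage, and table names; it is illustrative, not a definitive setup.

    import snowflake.connector

    # Create an auto-ingest pipe that copies staged JSON files into a table.
    # All credentials and object names are hypothetical placeholders.
    conn = snowflake.connector.connect(
        account="my_account", user="etl_user", password="...",
        warehouse="LOAD_WH", database="ANALYTICS", schema="RAW",
    )
    conn.cursor().execute("""
        CREATE OR REPLACE PIPE raw_events_pipe
          AUTO_INGEST = TRUE
        AS
          COPY INTO raw.events
          FROM @raw.events_stage
          FILE_FORMAT = (TYPE = 'JSON')
    """)
    conn.close()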
Posted 2 weeks ago
4.0 - 8.0 years
6 - 10 Lacs
Kolkata, Mumbai, New Delhi
Work from Office
Artify Talent Studio is looking for a Data Engineer to join our dynamic team and embark on a rewarding career journey.
• Liaising with coworkers and clients to elucidate the requirements for each task.
• Conceptualizing and generating infrastructure that allows big data to be accessed and analyzed.
• Reformulating existing frameworks to optimize their functioning.
• Testing such structures to ensure that they are fit for use.
• Preparing raw data for manipulation by data scientists.
• Detecting and correcting errors in your work.
• Ensuring that your work remains backed up and readily accessible to relevant coworkers.
• Remaining up-to-date with industry standards and technological advancements that will improve the quality of your outputs.
Posted 2 weeks ago
1.0 - 5.0 years
4 - 8 Lacs
Gurugram
Work from Office
Job Requirements
• 3-6 years of experience running medium- to large-scale production environments
• Proven programming/scripting skills in at least one language (e.g., Python, Java, Scala, JavaScript)
• Experience with any one of the major cloud services and infrastructure providers (AWS, GCP, Azure)
• Proficiency in writing analytical SQL queries
• Experience in building analytical tools that utilize data pipelines to provide key, actionable insights
• Knowledge of big-data tools like Hadoop, Kafka, and Spark would be a plus
• A proactive approach to spotting problems, areas for improvement, and performance bottlenecks
Posted 2 weeks ago
1.0 - 5.0 years
3 - 7 Lacs
Chandigarh
Work from Office
Key Responsibilities
• Assist in building and maintaining data pipelines on GCP using services like BigQuery, Dataflow, Pub/Sub, Cloud Storage, etc.
• Support data ingestion, transformation, and storage processes for structured and unstructured datasets.
• Participate in performance tuning and optimization of existing data workflows.
• Collaborate with data analysts, engineers, and stakeholders to ensure reliable data delivery.
• Document code, processes, and architecture for reproducibility and future reference.
• Debug issues in data pipelines and contribute to their resolution.
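A typical GCS-to-BigQuery ingestion step of the kind listed above might look like the following sketch using the official google-cloud-bigquery client; the bucket, dataset, and table names are hypothetical.

    from google.cloud import bigquery

    # Load a CSV landed in Cloud Storage into a BigQuery table.
    client = bigquery.Client()

    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,
        autodetect=True,                  # infer schema from the file
        write_disposition="WRITE_APPEND",
    )
    load_job = client.load_table_from_uri(
        "gs://landing-bucket/events/2025-06-01.csv",
        "my-project.analytics.events",
        job_config=job_config,
    )
    load_job.result()                     # block until the load completes
    print(f"Loaded {load_job.output_rows} rows")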
Posted 2 weeks ago
8.0 - 13.0 years
25 - 30 Lacs
Kolkata, Mumbai, New Delhi
Work from Office
Position Overview
We are looking for an experienced Data Engineer to join our dynamic team. If you are passionate about building scalable software solutions, have expertise in system design and data structures, and are familiar with various databases, we would love to hear from you.

ShyftLabs is a growing data product company that was founded in early 2020 and works primarily with Fortune 500 companies. We deliver digital solutions built to help accelerate the growth of businesses in various industries by focusing on creating value through innovation.

Job Description
• Act as the first point of contact for data issues in the Master Data Management (MDM) system.
• Investigate and resolve data-related issues, such as duplicate data or missing records, ensuring timely and accurate updates.
• Coordinate with the Product Manager, QA Lead, and Technology Lead to prioritize and address tickets effectively.
• Work on data-related issues, ensuring compliance with regulations.
• Build and optimize data models to ensure efficient storage and query performance, including work with Snowflake tables.
• Write complex SQL queries for data manipulation and retrieval.
• Collaborate with other teams to diagnose and fix more complex issues that may require code changes or system updates.
• Utilize AWS resources like CloudWatch, Lambda, SQS, and Kinesis Streams for data storage, transformation, and analysis.
• Update and maintain the knowledge base to document common issues and their solutions.
• Monitor system logs and alerts to proactively identify potential issues before they affect customers.
• Participate in team meetings to provide updates on ongoing issues and contribute to process improvements.
• Maintain documentation of data engineering processes, data models, and system configurations.

Basic Qualifications
• Bachelor's degree in Computer Science, Information Technology, or a related field.
• Minimum of 3 years of experience in data engineering, preferably related to MDM systems.
• Strong expertise in SQL and other database query languages.
• Hands-on experience with data warehousing solutions and relational database management systems (RDBMS).
• Proficiency in ETL tools and data pipeline construction.
• Familiarity with AWS services.
• Excellent programming skills, preferably in Python.
• Strong understanding of data privacy regulations like DSAR and CCPA.
• Good communication skills, both written and verbal, with the ability to articulate complex data concepts to non-technical stakeholders.
• Strong problem-solving skills and attention to detail.

We are proud to offer a competitive salary alongside a strong healthcare insurance and benefits package. The role is preferably hybrid, with 3 days per week spent in the office. We pride ourselves on the growth of our employees, offering extensive learning and development resources.
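The duplicate-record resolution mentioned above is often handled with a window-function query; below is an illustrative Snowflake-style statement (QUALIFY with ROW_NUMBER) using hypothetical table and column names, not this employer's schema.

    # Snowflake-style deduplication: keep the latest record per business key.
    # Table and column names are hypothetical placeholders.
    DEDUP_CUSTOMERS = """
        CREATE OR REPLACE TABLE mdm.customers_clean AS
        SELECT *
        FROM mdm.customers
        QUALIFY ROW_NUMBER() OVER (
            PARTITION BY customer_id       -- business key
            ORDER BY updated_at DESC       -- most recent version wins
        ) = 1
    """
    # The statement would be run through the Snowflake Python connector
    # or scheduled as a Snowflake task.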
Posted 2 weeks ago
1.0 - 5.0 years
3 - 7 Lacs
Gurugram
Work from Office
Key Responsibilities Assist in building and maintaining data pipelines on GCP using services like BigQuery, Dataflow, Pub/Sub, Cloud Storage, etc. Support data ingestion, transformation, and storage processes for structured and unstructured datasets. Participate in performance tuning and optimization of existing data workflows. Collaborate with data analysts, engineers, and stakeholders to ensure reliable data delivery. Document code, processes, and architecture for reproducibility and future reference. Debug issues in data pipelines and contribute to their resolution.
Posted 2 weeks ago
8.0 - 13.0 years
20 - 35 Lacs
Kolkata, Hyderabad, Bengaluru
Hybrid
With a startup spirit and 115,000+ curious and courageous minds, we have the expertise to go deep with the world's biggest brands, and we have fun doing it. We dream in digital, dare in reality, and reinvent the ways companies work to make an impact far bigger than just our bottom line. We're harnessing the power of technology and humanity to create meaningful transformation that moves us forward in our pursuit of a world that works better for people. Now, we're calling upon the thinkers and doers, those with a natural curiosity and a hunger to keep learning and keep growing; people who thrive on fearlessly experimenting, seizing opportunities, and pushing boundaries to turn our vision into reality. And as you help us create a better world, we will help you build your own intellectual firepower. Welcome to the relentless pursuit of better.

Inviting applications for the role of Lead Consultant, AWS DataLake!

Responsibilities
• Knowledge of Data Lake on AWS services, with exposure to creating External Tables and Spark programming; the person should also be able to work on Python programming.
• Writing effective and scalable Python code for automations, data wrangling, and ETL.
• Designing and implementing robust applications, and working on automations using Python code.
• Debugging applications to ensure low latency and high availability.
• Writing optimized custom SQL queries.
• Experienced in team and client handling.
• Skilled in documentation related to systems, design, and delivery.
• Integrate user-facing elements into applications.
• Knowledge of External Tables and Data Lake concepts.
• Able to allocate tasks, collaborate on status exchanges, and bring things to successful closure.
• Implement security and data protection solutions.
• Must be capable of writing SQL queries for validating dashboard outputs.
• Must be able to translate visual requirements into detailed technical specifications.
• Well versed in handling Excel, CSV, text, JSON, and other unstructured file formats using Python.
• Expertise in at least one popular Python framework (like Django, Flask, or Pyramid).
• Good understanding of and exposure to Git, Bamboo, Confluence, and Jira.
• Good with DataFrames and ANSI SQL using pandas.
• Team player with a collaborative approach and excellent communication skills.

Qualifications we seek in you!
Minimum Qualifications
• BE/B.Tech/MCA
• Excellent written and verbal communication skills
• Good knowledge of Python and PySpark

Preferred Qualifications/Skills
• Strong ETL knowledge of any ETL tool is good to have.
• Knowledge of the AWS cloud and Snowflake is good to have.
• Knowledge of PySpark is a plus.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values diversity and inclusion, respect and integrity, customer focus, and innovation. For more information, visit www.genpact.com. Follow us on Twitter, Facebook, LinkedIn, and YouTube. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way.
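As a small illustration of the multi-format file handling and pandas DataFrame work listed above, here is a hedged sketch; the file paths and columns are hypothetical, and read_excel assumes openpyxl is installed.

    import pandas as pd

    # Combine CSV, JSON-lines, and Excel extracts into one frame, then
    # produce an aggregate suitable for dashboard validation.
    csv_df = pd.read_csv("input/transactions.csv")
    json_df = pd.read_json("input/transactions.json", lines=True)
    xlsx_df = pd.read_excel("input/transactions.xlsx")

    combined = pd.concat([csv_df, json_df, xlsx_df], ignore_index=True)
    combined = combined.drop_duplicates(subset=["txn_id"])

    daily_totals = combined.groupby("txn_date", as_index=False)["amount"].sum()
    daily_totals.to_parquet("output/daily_totals.parquet", index=False)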
Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
Posted 2 weeks ago
3.0 - 8.0 years
20 - 30 Lacs
Chennai
Hybrid
Job Title: Senior Data Engineer, Data Products
Location: Chennai, India
Open Roles: 2
Mode: Hybrid

About the Role
Are you a hands-on data engineer who thrives on solving complex data challenges and building modern cloud-native solutions? We're looking for two experienced Senior Data Engineers to join our growing Data Engineering team. This is an exciting opportunity to work on cutting-edge data platform initiatives that power advanced analytics, AI solutions, and digital transformation across a global enterprise. In this role, you'll help design and build reusable, scalable, and secure data pipelines on a multi-cloud infrastructure, while collaborating with cross-functional teams in a highly agile environment.

What You'll Do
• Design and build robust data pipelines and ETL frameworks using modern tools and cloud platforms.
• Implement lakehouse architecture (Bronze/Silver/Gold layers) and support data product publishing via Unity Catalog.
• Work with structured and unstructured enterprise data, including ERP, CRM, and product data systems.
• Optimize pipeline performance, reliability, and security across AWS and Azure environments.
• Automate infrastructure using IaC tools like Terraform and AWS CDK.
• Collaborate closely with data scientists, analysts, and platform teams to deliver actionable data products.
• Participate in agile ceremonies, conduct code reviews, and contribute to team knowledge sharing.
• Ensure compliance with data privacy, cybersecurity, and governance policies.

What You Bring
• 3+ years of hands-on experience in data engineering roles.
• Strong command of SQL and Python; experience with Scala is a plus.
• Proficiency in cloud platforms (AWS, Azure), Databricks, DBT, Airflow, and version control tools like GitLab.
• Hands-on experience implementing lakehouse architectures and multi-hop data flows using Delta Lake.
• Background in working with enterprise data systems like SAP, Salesforce, and other business-critical platforms.
• Familiarity with DevOps, DataOps, and agile delivery methods (Jira, Confluence).
• Strong understanding of data security, privacy compliance, and production-grade pipeline management.
• Excellent communication skills and the ability to work in global, multicultural teams.

Why Join Us?
• Opportunity to work with modern data technologies in a complex, enterprise-scale environment.
• Be part of a collaborative, forward-thinking team that values innovation and continuous learning.
• A hybrid work model that offers both flexibility and team engagement.
• A role where you can make a real impact by contributing to digital transformation and data-driven decision-making.
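The Bronze/Silver/Gold lakehouse layering mentioned above can be sketched as a single hop; this is an illustrative Delta Lake example with hypothetical paths and columns, assuming a Databricks or delta-spark-configured session.

    from pyspark.sql import SparkSession, functions as F

    # One Bronze-to-Silver hop: dedupe and standardize a raw Delta table,
    # then publish the cleaned layer. All names are hypothetical.
    spark = SparkSession.builder.getOrCreate()

    bronze = spark.read.format("delta").load("/mnt/lake/bronze/crm_accounts")
    silver = (
        bronze.dropDuplicates(["account_id"])
              .filter(F.col("account_id").isNotNull())
              .withColumn("ingested_date", F.to_date("ingestion_ts"))
    )
    silver.write.format("delta").mode("overwrite").save(
        "/mnt/lake/silver/crm_accounts"
    )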
Posted 2 weeks ago