
453 Data Engineer Jobs - Page 14

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 - 7.0 years

15 - 25 Lacs

Pune, Bengaluru

Work from Office

Job Role & Responsibilities:
• Architect, build, and deploy data systems, pipelines, etc.
• Design and implement agile, scalable, and cost-efficient solutions on cloud data services.
• Own design, implementation, development, and migration work; migrate data from traditional database systems to the cloud environment.
• Architect and implement ETL and data movement solutions.
Technical Skills, Qualifications & Experience Required:
• 4.5-7 years of experience in Data Engineering and Azure cloud data engineering: Azure Databricks, Data Factory, PySpark, SQL, Python.
• Hands-on experience with Azure Databricks, Data Factory, PySpark, and SQL.
• Proficient in cloud services (Azure).
• Strong hands-on experience working with streaming datasets.
• Hands-on expertise in data refinement using PySpark and Spark SQL.
• Familiarity with building datasets using Scala.
• Familiarity with tools such as Jira and GitHub.
• Experience leading agile scrum, sprint planning, and review sessions.
• Good communication and interpersonal skills.
• Comfortable working in a multidisciplinary team within a fast-paced environment.
• Immediate joiners preferred.
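For illustration only (not part of the posting): a minimal PySpark Structured Streaming sketch of the kind of streaming-dataset refinement with Spark SQL this role describes. The input path, event schema, column names, and sink are all hypothetical.

```python
# Hypothetical sketch: refine a stream of JSON events with Spark SQL.
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("streaming-refinement-demo").getOrCreate()

# Assumed schema for incoming JSON events (placeholder fields).
schema = StructType([
    StructField("device_id", StringType()),
    StructField("reading", DoubleType()),
    StructField("event_time", TimestampType()),
])

# Read a stream of JSON files landing in a hypothetical input directory.
raw = spark.readStream.schema(schema).json("/data/incoming/events")

# Refine with Spark SQL: drop malformed rows, aggregate per device per window.
raw.createOrReplaceTempView("events")
refined = spark.sql("""
    SELECT device_id,
           window(event_time, '5 minutes') AS win,
           avg(reading) AS avg_reading
    FROM events
    WHERE reading IS NOT NULL
    GROUP BY device_id, window(event_time, '5 minutes')
""")

# Write the refined aggregates out; checkpointing is required for streaming sinks.
query = (refined.writeStream
         .outputMode("update")
         .format("console")  # swap for a Delta/Parquet sink in a real pipeline
         .option("checkpointLocation", "/tmp/checkpoints/refinement-demo")
         .start())
query.awaitTermination()
```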

Posted 3 months ago

Apply

4.0 - 9.0 years

3 - 8 Lacs

Pune

Work from Office

Design, develop, and maintain ETL pipelines using Informatica PowerCenter or Talend to extract, transform, and load data into EDW systems and the data lake. Optimize and troubleshoot complex SQL queries and ETL jobs to ensure efficient data processing and high performance. Technologies: SQL, Informatica PowerCenter, Talend, Big Data, Hive.

Posted 3 months ago

Apply

7.0 - 12.0 years

15 - 30 Lacs

Hyderabad

Remote

Lead Data Engineer - Health Care Domain
Position: Lead Data Engineer | Experience: 7+ Years | Location: Hyderabad | Chennai | Remote
Summary: The Data Engineer will be responsible for ETL and documentation in building data warehouse and analytics capabilities. Additionally, maintain existing systems/processes, develop new features, and review, present, and implement performance improvements.
Duties and Responsibilities:
• Build ETL (extract, transform, load) jobs using Fivetran and dbt for internal projects and for customers on platforms such as Azure, Salesforce, and AWS.
• Monitor active ETL jobs in production.
• Build out data lineage artifacts to ensure all current and future systems are properly documented.
• Assist with design/mapping documentation to ensure development is clear and testable for QA and UAT purposes.
• Assess current and future data transformation needs to recommend, develop, and train on new data integration tools.
• Discover efficiencies in shared data processes and batch schedules to eliminate redundancy and ensure smooth operations.
• Assist the Data Quality Analyst in implementing checks and balances across all jobs to ensure data quality throughout the environment for current and future batch jobs.
• Hands-on experience developing and implementing large-scale data warehouses, Business Intelligence, and MDM solutions, including Data Lakes/Data Vaults.
Required Skills:
• This job has no supervisory responsibilities.
• Strong experience with Snowflake and Azure Data Factory (ADF) required.
• Bachelor's degree in Computer Science, Math, Software Engineering, Computer Engineering, or a related field, and 6+ years of experience in business analytics, data science, software development, data modeling, or data engineering.
• 5+ years of experience with strong SQL query/development skills.
• Develop ETL routines that manipulate and transfer large volumes of data and perform quality checks.
• Hands-on experience with ETL tools (e.g., Informatica, Talend, dbt, Azure Data Factory).
• Experience working in the healthcare industry with PHI/PII.
• Creative, lateral, and critical thinker; excellent communicator with well-developed interpersonal skills.
• Good at prioritizing tasks and time management.
• Ability to describe, create, and implement new solutions.
• Experience with related or complementary open-source software platforms and languages (e.g., Java, Linux, Apache, Perl/Python/PHP, Chef).
• Knowledge of / hands-on experience with BI tools and reporting software (e.g., Cognos, Power BI, Tableau).
• Big Data stack (e.g., Snowflake (Snowpark), Spark, MapReduce, Hadoop, Sqoop, Pig, HBase, Hive, Flume).

Posted 3 months ago

Apply

6.0 - 9.0 years

15 - 20 Lacs

Chennai

Work from Office

Skills Required:
• Minimum 6+ years in Data Engineering / data analytics platforms.
• Strong hands-on design and engineering background in AWS, across a wide range of AWS services, with demonstrated work on large engagements.
• Involved in requirements gathering and transforming requirements into functional and technical designs.
• Maintain and optimize the data infrastructure required for accurate extraction, transformation, and loading of data from a wide variety of data sources.
• Design, build, and maintain batch or real-time data pipelines in production.
• Develop ETL/ELT data pipelines (extract, transform, load) to extract and manipulate data from multiple sources.
• Automate data workflows such as data ingestion, aggregation, and ETL processing; good experience with different data ingestion techniques: file-based, API-based, streaming data sources (OLTP, OLAP, ODS, etc.), and heterogeneous databases.
• Prepare raw data in data warehouses into consumable datasets for both technical and non-technical stakeholders.
• Strong experience implementing data lake, data warehouse, and data lakehouse architectures.
• Ensure data accuracy, integrity, privacy, security, and compliance through quality control procedures.
• Monitor data systems performance and implement optimization strategies.
• Leverage data controls to maintain data privacy, security, compliance, and quality for allocated areas of ownership.
• Experience with AWS tools (S3, EC2, Athena, Redshift, Glue, EMR, Lambda, RDS, Kinesis, DynamoDB, QuickSight, etc.).
• Strong experience with Python, SQL, PySpark, Scala, shell scripting, etc.
• Strong experience with workflow management and orchestration tools (e.g., Airflow); a minimal example DAG appears below.
• Solid experience with and understanding of data manipulation/wrangling techniques.
• Demonstrable knowledge of data engineering best practices (coding practices, unit testing, version control, code review).
• Big data ecosystems: Cloudera/Hortonworks, AWS EMR, etc.
• Snowflake data warehouse/platform.
• Streaming technologies and processing engines: Kinesis, Kafka, Pub/Sub, and Spark Streaming.
• Experience working with CI/CD technologies: Git, Jenkins, Spinnaker, Ansible, etc.
• Experience building and deploying solutions on AWS Cloud.
• Good experience with NoSQL databases such as DynamoDB, Redis, Cassandra, MongoDB, or Neo4j.
• Experience working with large data sets and distributed computing (e.g., Hive/Hadoop/Spark/Presto/MapReduce).
• Good to have: working knowledge of data visualization tools such as Tableau, Amazon QuickSight, Power BI, QlikView.
• Experience in the insurance domain preferred.
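As referenced in the orchestration bullet above, here is a minimal, hypothetical Airflow DAG sketch showing the ingest/transform/load shape such roles describe. The DAG id, schedule, and task bodies are placeholders; the `schedule` argument assumes Airflow 2.4+.

```python
# Hypothetical sketch of a daily ETL DAG; task bodies are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def ingest():
    print("ingesting raw data")  # e.g., pull files from an API or S3 prefix


def transform():
    print("transforming data")  # e.g., run a PySpark or SQL transformation


def load():
    print("loading to warehouse")  # e.g., load curated data into Redshift


with DAG(
    dag_id="daily_etl_demo",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",  # use schedule_interval on Airflow < 2.4
    catchup=False,
) as dag:
    t_ingest = PythonOperator(task_id="ingest", python_callable=ingest)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_ingest >> t_transform >> t_load
```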

Posted 3 months ago

Apply

3.0 - 5.0 years

4 - 9 Lacs

Chennai

Work from Office

Skills Required:
• Minimum 3+ years in Data Engineering / data analytics platforms.
• Strong hands-on design and engineering background in AWS, across a wide range of AWS services, with demonstrated work on large engagements.
• Involved in requirements gathering and transforming requirements into functional and technical designs.
• Maintain and optimize the data infrastructure required for accurate extraction, transformation, and loading of data from a wide variety of data sources.
• Design, build, and maintain batch or real-time data pipelines in production.
• Develop ETL/ELT data pipelines (extract, transform, load) to extract and manipulate data from multiple sources.
• Automate data workflows such as data ingestion, aggregation, and ETL processing; good experience with different data ingestion techniques: file-based, API-based, streaming data sources (OLTP, OLAP, ODS, etc.), and heterogeneous databases.
• Prepare raw data in data warehouses into consumable datasets for both technical and non-technical stakeholders.
• Strong experience implementing data lake, data warehouse, and data lakehouse architectures.
• Ensure data accuracy, integrity, privacy, security, and compliance through quality control procedures.
• Monitor data systems performance and implement optimization strategies.
• Leverage data controls to maintain data privacy, security, compliance, and quality for allocated areas of ownership.
• Experience with AWS tools (S3, EC2, Athena, Redshift, Glue, EMR, Lambda, RDS, Kinesis, DynamoDB, QuickSight, etc.).
• Strong experience with Python, SQL, PySpark, Scala, shell scripting, etc.
• Strong experience with workflow management and orchestration tools (e.g., Airflow).
• Solid experience with and understanding of data manipulation/wrangling techniques.
• Demonstrable knowledge of data engineering best practices (coding practices, unit testing, version control, code review).
• Big data ecosystems: Cloudera/Hortonworks, AWS EMR, etc.
• Snowflake data warehouse/platform.
• Streaming technologies and processing engines: Kinesis, Kafka, Pub/Sub, and Spark Streaming.
• Experience working with CI/CD technologies: Git, Jenkins, Spinnaker, Ansible, etc.
• Experience building and deploying solutions on AWS Cloud.
• Good experience with NoSQL databases such as DynamoDB, Redis, Cassandra, MongoDB, or Neo4j.
• Experience working with large data sets and distributed computing (e.g., Hive/Hadoop/Spark/Presto/MapReduce).
• Good to have: working knowledge of data visualization tools such as Tableau, Amazon QuickSight, Power BI, QlikView.
• Experience in the insurance domain preferred.

Posted 3 months ago

Apply

8.0 - 13.0 years

15 - 25 Lacs

Hyderabad, Bengaluru

Hybrid

Looking for a Snowflake developer for a US client. The candidate should be strong with Snowflake and dbt, should be able to do impact analysis on the current ETLs (Informatica/DataStage), and should provide solutions based on that analysis. Experience: 7-12 years.

Posted 3 months ago

Apply

2.0 - 7.0 years

4 - 7 Lacs

Hyderabad

Work from Office

Design, develop, and deploy ETL workflows and mappings using Informatica PowerCenter. Extract data from various source systems and transform/load it into target systems. Troubleshoot ETL job failures and resolve data issues promptly. Optimize and tune complex SQL queries. Required candidate profile: Maintain detailed documentation of ETL design, mapping logic, and processes. Ensure data quality and integrity through validation and testing. Experience with Informatica PowerCenter and strong SQL knowledge required.

Posted 3 months ago

Apply

5.0 - 10.0 years

11 - 21 Lacs

Kochi, Bengaluru

Work from Office

Hiring: Senior Python Developer (Data Engineering) with Mage.ai — Mage.ai experience mandatory.
Location: Remote/Bangalore/Kochi | Experience: 6+ Years | Role Type: Full-time
About the Role: We are looking for a Senior Python Developer with strong data engineering expertise to help us build and optimize data workflows, manage large-scale pipelines, and enable efficient data operations across the organization. This role requires hands-on experience with Mage.AI, PySpark, and cloud-based data engineering workflows, and will play a critical part in our data infrastructure modernization efforts.
Required Skills & Experience:
• 6+ years of hands-on Python development with a strong data engineering focus.
• Deep experience with Mage.AI for building and managing data workflows.
• Advanced proficiency in PySpark for distributed data processing and pipeline orchestration (see the sketch below).
• Strong understanding of ETL/ELT best practices, data architecture, and pipeline design patterns.
• Familiarity with data warehouse technologies (PostgreSQL, Redshift, Snowflake, etc.).
• Experience integrating APIs, databases, and file-based sources into scalable pipelines.
• Strong problem-solving, debugging, and performance-tuning skills.
Preferred Qualifications:
• Experience with cloud platforms (AWS, GCP, Azure) and deploying pipelines on EMR, EKS, or GKE.
• Exposure to streaming data workflows (e.g., Kafka, Spark Streaming).
• Experience working in Agile teams, contributing to sprint planning and code reviews.
• Contributions to open-source projects or community engagement in the data engineering space.
If interested, apply for this role. With regards, Rathna (rathna@trinityconsulting.asia)
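For illustration only (this is generic PySpark, not Mage-specific API): a minimal batch transformation of the kind a Mage.AI pipeline block might orchestrate. Source/target paths and column names are hypothetical.

```python
# Hypothetical sketch: batch extract -> transform -> load with PySpark.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("batch-transform-demo").getOrCreate()

# Extract: read a hypothetical raw orders dataset.
orders = spark.read.parquet("/data/raw/orders")

# Transform: deduplicate, standardize, and derive a revenue column.
curated = (orders
           .dropDuplicates(["order_id"])
           .withColumn("order_date", F.to_date("order_ts"))
           .withColumn("revenue", F.col("quantity") * F.col("unit_price"))
           .filter(F.col("revenue") >= 0))

# Load: write partitioned output for downstream consumers.
(curated.write
 .mode("overwrite")
 .partitionBy("order_date")
 .parquet("/data/curated/orders"))
```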

Posted 3 months ago

Apply

5.0 - 10.0 years

22 - 35 Lacs

Kochi, Bengaluru

Hybrid

Greetings from Trinity! We are looking for a Python Developer with proficiency in Mage.AI.
Senior Python Developer - Data Engineering Focus
Location: Bangalore/Kochi/Remote | Budget: 15-25 LPA for 5-7 years and 24-32 LPA for 7-9 years | Mode of hiring: FTE
About the Role: We are looking for a Senior Python Developer with strong data engineering expertise to help us build and optimize data workflows, manage large-scale pipelines, and enable efficient data operations across the organization. This role requires hands-on experience with Mage.AI and PySpark.
Required Skills & Experience:
• 6+ years of hands-on Python development with a strong data engineering focus.
• Deep experience with Mage.AI for building and managing data workflows.
• Advanced proficiency in PySpark for distributed data processing and pipeline orchestration.
• Strong understanding of ETL/ELT best practices, data architecture, and pipeline design patterns.
• Familiarity with data warehouse technologies (PostgreSQL, Redshift, Snowflake, etc.).

Posted 3 months ago

Apply

6.0 - 11.0 years

5 - 15 Lacs

Hyderabad

Hybrid

Dear Candidates, we are conducting a face-to-face drive on 7th June 2025. If you are interested, kindly share your updated resume ASAP. JD details:
Role: Data Engineer with Python, Apache Spark, HDFS | Experience: 6 to 12 Years | Location: Hyderabad | Shift Timings: General Shift
Key Responsibilities:
• Design, develop, and maintain scalable data pipelines using Python and Spark.
• Ingest, process, and transform large datasets from various sources into usable formats.
• Manage and optimize data storage using HDFS and MongoDB.
• Ensure high availability and performance of data infrastructure.
• Implement data quality checks, validations, and monitoring processes (see the sketch below).
• Collaborate with cross-functional teams to understand data needs and deliver solutions.
• Write reusable and maintainable code with strong documentation practices.
• Optimize performance of data workflows and troubleshoot bottlenecks.
• Maintain data governance, privacy, and security best practices.
Required Qualifications:
• Minimum 6 years of experience as a Data Engineer or in a similar role.
• Strong proficiency in Python for data manipulation and pipeline development.
• Hands-on experience with Apache Spark for large-scale data processing.
• Experience with HDFS and distributed data storage systems.
• Strong understanding of data architecture, data modeling, and performance tuning.
• Familiarity with version control tools like Git.
• Experience with workflow orchestration tools (e.g., Airflow, Luigi) is a plus.
• Knowledge of cloud services (AWS, GCP, or Azure) is preferred.
• Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
Preferred Skills:
• Experience with containerization (Docker, Kubernetes).
• Knowledge of real-time data streaming tools like Kafka.
• Familiarity with data visualization tools (e.g., Power BI, Tableau).
• Exposure to Agile/Scrum methodologies.
Note: If interested, please share your updated resume to jamshira@srinav.net with the following details: Full Name, Mail ID, Contact Number, Current Experience, Relevant Experience, CTC, Expected CTC/Month, Current Location, Relocation (Yes/No), Official Notice Period, LWD, Holding Offer in Hand, Tentative DOJ, PAN ID, DOB (DD/MM/YYYY), LinkedIn Profile Link.
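As referenced in the data-quality bullet above, a minimal sketch of simple completeness/uniqueness/validity checks with PySpark. The HDFS path and column names are hypothetical.

```python
# Hypothetical sketch: basic data-quality checks over an HDFS dataset.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-checks-demo").getOrCreate()

df = spark.read.parquet("hdfs:///data/curated/customers")  # placeholder path

total = df.count()
checks = {
    # Completeness: required key must not be null.
    "null_customer_id": df.filter(F.col("customer_id").isNull()).count(),
    # Uniqueness: primary key must not repeat.
    "duplicate_customer_id": total - df.select("customer_id").distinct().count(),
    # Validity: email must match a basic pattern.
    "invalid_email": df.filter(
        ~F.col("email").rlike(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")
    ).count(),
}

failed = {name: n for name, n in checks.items() if n > 0}
if failed:
    # In a real pipeline this would alert or fail the orchestration task.
    raise ValueError(f"Data quality checks failed: {failed}")
print(f"All checks passed over {total} rows")
```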

Posted 3 months ago

Apply

3.0 - 8.0 years

8 - 16 Lacs

Bengaluru

Work from Office

Role & Responsibilities
Qualifications: Experience 3-6 years | Education: B.E/B.Tech/MCA/M.Tech
Minimum Qualifications:
• Bachelor's degree in Computer Science, CIS, or a related field (or equivalent work experience in a related field).
• 3 years of experience in software development or a related field.
• 2 years of experience working on project(s) involving the implementation of solutions applying development life cycles (SDLC).
You will be responsible for designing, building, and maintaining our data infrastructure, ensuring data quality, and enabling data-driven decision-making across the organization. The ideal candidate will have a strong background in data engineering, excellent problem-solving skills, and a passion for working with data.
Responsibilities:
• Design, build, and maintain our data infrastructure, including data pipelines, warehouses, and databases.
• Ensure data quality and integrity by implementing data validation, testing, and monitoring processes.
• Collaborate with cross-functional teams to understand data needs and translate them into technical requirements.
• Develop and implement data security and privacy policies and procedures.
• Optimize data processing and storage performance, ensuring scalability and reliability.
• Stay up to date with the latest data engineering trends and technologies.
• Provide mentorship and guidance to junior data engineers and analysts.
• Contribute to the development of data-driven solutions and products.
Requirements:
• 3+ years of experience in data engineering, with a Bachelor's degree in Computer Science, Engineering, or a related field.
• Strong knowledge of data engineering tools and technologies, including SQL and GCP.
• Experience with big data processing frameworks such as Spark or Hadoop, or with Python.
• Experience with data warehousing solutions: BigQuery (see the sketch below).
• Strong problem-solving skills, with the ability to analyze complex data sets and identify trends and insights.
• Excellent communication and collaboration skills, with the ability to work with cross-functional teams and stakeholders.
• Strong data security and privacy knowledge and experience.
• Experience with agile development methodologies is a plus.
Preferred candidate profile: 3-4 yrs, max 12 LPA budget; 4-6 yrs, max 14-16 LPA budget.
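As referenced in the BigQuery requirement above, a minimal sketch using the google-cloud-bigquery Python client. The project, dataset, table, and column names are hypothetical; the client assumes application-default credentials.

```python
# Hypothetical sketch: run an aggregation query against BigQuery.
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

query = """
    SELECT user_id, COUNT(*) AS events
    FROM `my-project.analytics.events`   -- placeholder table
    WHERE event_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY)
    GROUP BY user_id
    ORDER BY events DESC
    LIMIT 10
"""

# result() blocks until the job finishes, then yields rows.
for row in client.query(query).result():
    print(row.user_id, row.events)
```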

Posted 3 months ago

Apply

8.0 - 13.0 years

25 - 40 Lacs

Bengaluru

Work from Office

Job Title: Data Engineer (Java + Hadoop/Spark) | Location: Bangalore, WFO | Type: Full Time | Experience: 8-12 years | Notice Period: Immediate joiners to 30 days | Virtual drive on 1st June '25
Job Description: We are looking for a skilled Data Engineer with strong expertise in Java and hands-on experience with Hadoop or Spark. The ideal candidate will be responsible for designing, building, and maintaining scalable data pipelines and processing systems.
Key Responsibilities:
• Develop and maintain data pipelines using Java.
• Work with big data technologies such as Hadoop or Spark to process large datasets.
• Optimize data workflows and ensure high performance and reliability.
• Collaborate with data scientists, analysts, and other engineers on data-related initiatives.
Requirements:
• Strong programming skills in Java.
• Hands-on experience with Hadoop or Spark.
• Experience with data ingestion, transformation, and storage solutions.
• Familiarity with distributed systems and big data architecture.
If interested, send your updated resume to rosalin.m@genxhire.in or 8976791986, and share the following details: current CTC, expected CTC, notice period, age, reason for leaving last job.

Posted 3 months ago

Apply

3.0 - 5.0 years

10 - 12 Lacs

Mumbai, Delhi / NCR, Bengaluru

Work from Office

• Data Pipelines: Proven experience in building scalable and reliable data pipelines.
• BigQuery: Expertise in writing complex SQL transformations; hands-on with indexing and performance optimization.
• Ingestion: Skilled in data scraping and ingestion through RESTful APIs and file-based sources (see the sketch below).
• Orchestration: Familiarity with orchestration tools like Prefect and Apache Airflow (nice to have).
• Tech Stack: Proficient in Python, FastAPI, and PostgreSQL.
• End-to-End Workflows: Capable of owning ingestion, transformation, and delivery processes.
Location: Remote, Delhi NCR, Bangalore, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad
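As referenced in the ingestion bullet above, a minimal, hypothetical sketch of API-based ingestion into PostgreSQL using requests and psycopg2. The endpoint URL, response shape, connection DSN, and staging table are all placeholders.

```python
# Hypothetical sketch: pull records from a REST endpoint, upsert into Postgres.
import requests
import psycopg2

API_URL = "https://api.example.com/v1/records"  # placeholder endpoint

resp = requests.get(API_URL, params={"page_size": 100}, timeout=30)
resp.raise_for_status()
records = resp.json()["results"]  # assumed response shape

conn = psycopg2.connect("dbname=analytics user=etl")  # placeholder DSN
with conn, conn.cursor() as cur:  # commits the transaction on success
    for rec in records:
        cur.execute(
            """
            INSERT INTO staging.records (id, name, updated_at)
            VALUES (%s, %s, %s)
            ON CONFLICT (id) DO UPDATE
            SET name = EXCLUDED.name, updated_at = EXCLUDED.updated_at
            """,
            (rec["id"], rec["name"], rec["updated_at"]),
        )
print(f"Ingested {len(records)} records")
```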

Posted 3 months ago

Apply

5.0 - 10.0 years

30 - 35 Lacs

Pune, Bengaluru, Mumbai (All Areas)

Work from Office

Good hands-on experience working as a GCP Data Engineer, with very strong experience in SQL and PySpark, as well as BigQuery, Dataform, Dataplex, etc. Looking only for immediate joiners or candidates currently serving notice.

Posted 3 months ago

Apply

2.0 - 7.0 years

40 - 45 Lacs

Chandigarh

Work from Office

Responsibilities:
• Design and develop complex data processes in coordination with business stakeholders to solve critical financial and operational processes.
• Design and develop ETL/ELT pipelines against traditional databases and distributed systems, and flexibly produce data back to the business and analytics teams for analysis.
• Work in an agile, fail-fast environment directly with business stakeholders and analysts, while recognising data reconciliation and validation requirements.
• Develop data solutions in coordination with development teams across a variety of products and technologies.
• Build processes that analyse and monitor data to help maintain controls: correctness, completeness, and latency.
• Participate in design reviews and code reviews.
• Work with colleagues across global locations.
• Troubleshoot and resolve production issues.
• Performance enhancements.
Required Skills & Qualifications:
• Programming skills: Python / PySpark / Scala.
• Database skills: analytical databases like Snowflake / SQL.
• Good to have: Elasticsearch, Kafka, NiFi, Jupyter Notebooks.
• Good to have: knowledge of AWS services like S3 / Glue / Athena / EMR / Lambda.

Posted 3 months ago

Apply

2.0 - 7.0 years

3 - 7 Lacs

Thane, Navi Mumbai, Mumbai (All Areas)

Work from Office

Job Title: Data Analyst/Engineer | Location: Mumbai | Experience: 3-4 Years
Job Summary: We are seeking a skilled Data Analyst/Engineer with expertise in AWS S3 and Python to manage and process large datasets in a cloud environment. The ideal candidate will be responsible for developing efficient data pipelines, managing data storage, and optimizing data workflows in AWS. The role involves using Python to automate data tasks.
Key Responsibilities:
Python Scripting and Automation:
• Develop Python scripts for automating data collection, transformation, and loading into cloud storage systems (see the sketch below).
• Create robust ETL pipelines to move data between systems and perform data transformation.
• Use Python for interacting with AWS services, including S3 and other AWS resources.
Data Workflow Optimization:
• Design and implement efficient data workflows and pipelines in the AWS cloud environment.
• Monitor and optimize data processing to ensure quick and accurate delivery of datasets.
• Work closely with other teams to integrate data from various sources into S3 for analysis and reporting.
Cloud Services & Data Integration:
• Leverage other AWS services (e.g., Lambda, EC2, RDS) to manage and process data in a scalable way.
• Integrate data sources through APIs, ensuring real-time availability of critical data.
Required Skills & Qualifications:
• Technical expertise: strong experience managing and working with AWS S3 buckets and other AWS services; advanced proficiency in Python, including experience with libraries such as boto3 and Pandas; hands-on experience building and maintaining ETL pipelines for large datasets.
• Cloud technologies: solid understanding of AWS cloud architecture, including S3, Lambda, and EC2; experience with AWS IAM (Identity and Access Management) for securing S3 buckets.
• Problem solving & automation: proven ability to automate data workflows using Python; strong analytical and problem-solving skills, with a focus on optimizing data storage and processing.
Preferred Qualifications:
• Bachelor's degree in Computer Science or Data Engineering.
• Experience with other AWS services, such as Glue, Redshift, or Athena.
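As referenced above, a minimal boto3/pandas sketch of an S3-based extract-transform-load step. Bucket names, keys, and column names are hypothetical; writing Parquet via pandas assumes pyarrow is installed.

```python
# Hypothetical sketch: read a raw CSV from S3, clean it, write Parquet back.
import io

import boto3
import pandas as pd

s3 = boto3.client("s3")

# Extract: download a raw CSV from a placeholder landing bucket.
obj = s3.get_object(Bucket="raw-landing-bucket", Key="sales/2025/06/sales.csv")
df = pd.read_csv(io.BytesIO(obj["Body"].read()))

# Transform: basic cleanup and a derived column.
df = df.dropna(subset=["order_id"])
df["revenue"] = df["quantity"] * df["unit_price"]

# Load: write the curated file to a processed prefix as Parquet.
buf = io.BytesIO()
df.to_parquet(buf, index=False)  # requires pyarrow (or fastparquet)
s3.put_object(
    Bucket="curated-bucket",
    Key="sales/2025/06/sales.parquet",
    Body=buf.getvalue(),
)
print(f"Wrote {len(df)} curated rows")
```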

Posted 3 months ago

Apply

3.0 - 5.0 years

0 - 0 Lacs

Hyderabad, Pune, Bangalore Rural

Work from Office

Role & Responsibilities: A day in the life of an Infoscion. As part of the Infosys delivery team, your primary role would be to interface with the client for quality assurance and issue resolution, and to ensure high customer satisfaction. You will understand requirements, create and review designs, validate the architecture, and ensure high levels of service offerings to clients in the technology domain. You will participate in project estimation, provide inputs for solution delivery, conduct technical risk planning, and perform code reviews and unit test plan reviews. You will lead and guide your teams towards developing optimized, high-quality code deliverables, continual knowledge management, and adherence to organizational guidelines and processes. You would be a key contributor to building efficient programs and systems. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!

Posted 3 months ago

Apply

4.0 - 9.0 years

5 - 15 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

Senior Data Engineer - Python:
• Bachelor's or Master's degree in Computer Science, Information Technology, Data Science, or a related field.
• Minimum 4 years of relevant experience.
• Proficient in Python, with hands-on experience building ETL pipelines for data extraction, transformation, and validation (see the sketch below).
• Strong SQL skills for working with structured data.
• Familiar with Grafana or Kibana for data visualization and monitoring/dashboards.
• Experience with databases such as MongoDB, Elasticsearch, and MySQL.
• Comfortable working in Linux environments using common Unix tools.
• Hands-on experience with Git, Docker, and virtual machines.
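As referenced above, a minimal extract-and-validate sketch against MongoDB with pymongo, of the kind such ETL pipelines start with. The connection URI, database, collection, and field names are hypothetical.

```python
# Hypothetical sketch: extract documents from MongoDB and validate them
# before loading them elsewhere (e.g., MySQL or Elasticsearch).
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
events = client["analytics"]["events"]

valid, invalid = [], 0
for doc in events.find({}, {"_id": 0, "user_id": 1, "amount": 1}).limit(1000):
    # Validation: require a user_id and a non-negative numeric amount.
    amount = doc.get("amount")
    if doc.get("user_id") and isinstance(amount, (int, float)) and amount >= 0:
        valid.append(doc)
    else:
        invalid += 1

print(f"{len(valid)} valid documents, {invalid} rejected")
# A real pipeline would now load `valid` into the target store and emit
# counts as metrics for a Grafana/Kibana dashboard.
```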

Posted 3 months ago

Apply

7.0 - 12.0 years

5 - 15 Lacs

Hyderabad

Remote

• 5+ years' experience with strong SQL query/development skills.
• Develop ETL routines that manipulate and transfer large volumes of data and perform quality checks.
• Hands-on experience with ETL tools.

Posted 3 months ago

Apply

7.0 - 12.0 years

5 - 15 Lacs

Bengaluru

Remote

Role & Responsibilities:
• Design, develop, and maintain Collibra workflows tailored to our project's specific needs.
• Collaborate with cross-functional teams to ensure seamless integration of Collibra with other systems.
• Educate team members on Collibra's features and best practices (or educate oneself on them).
• Engage with customers to gather requirements and provide solutions that meet their needs.
• Stay updated with the latest developments in Collibra and data engineering technologies.
Must-Haves:
• Excellent communication skills in English (reading, writing, and speaking).
• Background in Data Engineering or related disciplines.
• Eagerness to learn and become proficient in Collibra and its features.
• Ability to understand and apply Collibra's use cases within the project scope.
Nice-to-Haves:
• Previous experience with Collibra or similar data cataloguing software.
• Familiarity with workflow design and optimization.
• Experience in requirement engineering, particularly in customer-facing roles.
• Knowledge of other cataloguing software and their integration with Collibra.

Posted 3 months ago

Apply

10.0 - 20.0 years

15 - 30 Lacs

Noida

Remote

Dear Interviewer, we invite applications to become a part-time remote interviewer on India's largest interview-as-a-service platform. We provide on-demand video interview services to employers and job-seekers. If you are interested in becoming a part-time remote interviewer on our platform, please review the details below and apply using the Azure Data Engineer Interviewer application link. Please note that only applications submitted through that link will be considered for next steps.
About Risebird:
• Leading interview-as-a-service platform: specializes in connecting companies with expert interviewers for technical and non-technical hiring needs.
• Opportunities for experts: ideal for professionals exploring part-time, freelance, or moonlighting opportunities in interviewing.
• Monetize idle time: enables skilled individuals to earn by conducting interviews during their free hours.
• Extensive interviewer network: over 30,000 interviewers from 2,600+ companies, who have conducted 5 lakh+ interviews.
• Trusted by Fortune 500 companies: the preferred platform for many leading enterprises.
• High earnings for interviewers: over 25+ crores paid to part-time interviewers in the last 5 years. More details at https://risebird.io/
About being an interviewer:
1. Confidentiality and data privacy: interviewer profiles are never shared with customers, never mapped to your current company's recruiters, and never mapped to candidates from your current company.
2. Payment flexibility: payments on specific dates, with upfront display of the payment for every interview; you can accept only those interviews that suit your payment expectations.
3. 100% remote: all interviews are conducted online, and we will never request offline visits.
4. No unfair deductions: only 10% TDS is deducted and the remainder is transferred to your preferred account.
5. A TDS certificate is provided too.
6. Time flexibility: 6 AM to 12 AM on weekdays and weekends; there is no forced schedule, and you decide which interviews you want to take.
7. Easy to use: one-time setup, a 15-minute call to share expectations, and 5-10 minutes to watch the portal video.
8. Employment opportunities: interviewers on our platform receive both part-time and full-time job offers based on the quality of interviews they conduct, while maintaining confidentiality. Offers are shared with interviewers, and only after their approval do we connect them back to the requester.
9. ROI: continuous part-time income in a highly confidential manner for the lifetime of your career, along with opportunities for part-time/full-time employment.

Posted 3 months ago

Apply

10.0 - 20.0 years

15 - 30 Lacs

Noida

Remote

Dear Interviewer, we invite applications to become a part-time remote interviewer on India's largest interview-as-a-service platform. We provide on-demand video interview services to employers and job-seekers. If you are interested in becoming a part-time remote interviewer on our platform, please review the details below and apply using the AWS Data Engineer Interviewer application link. Please note that only applications submitted through that link will be considered for next steps.
About Risebird:
• Leading interview-as-a-service platform: specializes in connecting companies with expert interviewers for technical and non-technical hiring needs.
• Opportunities for experts: ideal for professionals exploring part-time, freelance, or moonlighting opportunities in interviewing.
• Monetize idle time: enables skilled individuals to earn by conducting interviews during their free hours.
• Extensive interviewer network: over 30,000 interviewers from 2,600+ companies, who have conducted 5 lakh+ interviews.
• Trusted by Fortune 500 companies: the preferred platform for many leading enterprises.
• High earnings for interviewers: over 25+ crores paid to part-time interviewers in the last 5 years. More details at https://risebird.io/
About being an interviewer:
1. Confidentiality and data privacy: interviewer profiles are never shared with customers, never mapped to your current company's recruiters, and never mapped to candidates from your current company.
2. Payment flexibility: payments on specific dates, with upfront display of the payment for every interview; you can accept only those interviews that suit your payment expectations.
3. 100% remote: all interviews are conducted online, and we will never request offline visits.
4. No unfair deductions: only 10% TDS is deducted and the remainder is transferred to your preferred account.
5. A TDS certificate is provided too.
6. Time flexibility: 6 AM to 12 AM on weekdays and weekends; there is no forced schedule, and you decide which interviews you want to take.
7. Easy to use: one-time setup, a 15-minute call to share expectations, and 5-10 minutes to watch the portal video.
8. Employment opportunities: interviewers on our platform receive both part-time and full-time job offers based on the quality of interviews they conduct, while maintaining confidentiality. Offers are shared with interviewers, and only after their approval do we connect them back to the requester.
9. ROI: continuous part-time income in a highly confidential manner for the lifetime of your career, along with opportunities for part-time/full-time employment.

Posted 3 months ago

Apply

6.0 - 9.0 years

25 - 35 Lacs

Kochi, Chennai, Bengaluru

Work from Office

Experienced Data Engineer (Python, PySpark, Snowflake)

Posted 3 months ago

Apply

6.0 - 11.0 years

15 - 25 Lacs

Bengaluru

Remote

Location: Remote | Experience: 6-12 years | Immediate joiners preferred
Required Qualifications:
• Bachelor's degree in Computer Science, Information Systems, or a related field.
• 3-5 years of experience in data engineering, cloud architecture, or Snowflake administration.
• Hands-on experience with Snowflake features: Snowpipe, Streams, Tasks, External Tables, and Secure Data Sharing (see the sketch below).
• Proficiency in SQL, Python, and data movement tools (e.g., AWS CLI, Azure Data Factory, Google Cloud Storage Transfer).
• Experience with data pipeline orchestration tools such as Apache Airflow, dbt, or Informatica.
• Strong understanding of cloud storage services (S3, Azure Blob, GCS) and working with external stages.
• Familiarity with network security, encryption, and data compliance best practices.
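As referenced above, a minimal snowflake-connector-python sketch that lists the Streams and Tasks in a schema (SHOW STREAMS / SHOW TASKS are standard Snowflake SQL). Account, credentials, and object names are placeholders; use a secrets manager rather than a literal password in practice.

```python
# Hypothetical sketch: connect to Snowflake and list streams and tasks.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",   # placeholder
    user="etl_user",        # placeholder
    password="***",         # placeholder; use a secrets manager in practice
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)

cur = conn.cursor()
try:
    cur.execute("SHOW STREAMS")
    for row in cur.fetchall():
        print("stream:", row[1])  # column 1 of SHOW output is the object name
    cur.execute("SHOW TASKS")
    for row in cur.fetchall():
        print("task:", row[1])
finally:
    cur.close()
    conn.close()
```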

Posted 3 months ago

Apply

4.0 - 8.0 years

6 - 10 Lacs

Kolkata, Mumbai, New Delhi

Work from Office

Artify Talent Studio is looking for a Data Engineer to join our dynamic team and embark on a rewarding career journey:
• Liaising with coworkers and clients to elucidate the requirements for each task.
• Conceptualizing and generating infrastructure that allows big data to be accessed and analyzed.
• Reformulating existing frameworks to optimize their functioning.
• Testing such structures to ensure that they are fit for use.
• Preparing raw data for manipulation by data scientists.
• Detecting and correcting errors in your work.
• Ensuring that your work remains backed up and readily accessible to relevant coworkers.
• Remaining up to date with industry standards and technological advancements that will improve the quality of your outputs.

Posted 3 months ago

Apply