5.0 - 10.0 years
20 - 35 Lacs
Pune
Hybrid
Our client is a global IT service and consulting organization.
Role: Data Software Engineer - Spark, Python, (AWS, Kafka or Azure Databricks or GCP)
Location: Pune
Notice period: Immediate to 60 days
F2F interview on Sunday, 27th July, in Pune
Experience: 5-12 years
Skills: Python, Spark, Azure Databricks/GCP/AWS
Job Description:
- 5-12 years of experience in Big Data and data-related technologies
- Expert-level understanding of distributed computing principles
- Expert-level knowledge of and experience with Apache Spark
- Hands-on programming with Python
- Proficiency with Hadoop v2, MapReduce, HDFS, Sqoop
- Experience building stream-processing systems using technologies such as Apache Storm or Spark Streaming
- Experience with messaging systems such as Kafka or RabbitMQ
- Good understanding of Big Data querying tools such as Hive and Impala
- Experience integrating data from multiple sources such as RDBMS (SQL Server, Oracle), ERP, and files
- Good understanding of SQL queries, joins, stored procedures, and relational schemas
- Experience with NoSQL databases such as HBase, Cassandra, MongoDB
- Knowledge of ETL techniques and frameworks
- Performance tuning of Spark jobs
- Experience with native cloud data services (AWS or Azure Databricks)
- Ability to lead a team efficiently
- Experience designing and implementing Big Data solutions
- Practitioner of Agile methodology
Posted 1 week ago
8.0 - 10.0 years
6 - 10 Lacs
Bengaluru
Work from Office
Distributed data technologies (Hadoop, MapReduce, Spark, Kafka, Flink, etc.) for building efficient, large-scale ‘big data’ pipelines; Java, Scala, Python or equivalent; stream-processing applications using Apache Flink and Kafka; AWS, Azure, Google Cloud.
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Ahmedabad, Gujarat
On-site
Solvios Technology is seeking a Data Engineer / Power BI Developer with 3 to 5 years of experience. Your role will involve designing, constructing, and optimizing data pipelines to ensure the seamless flow of data across various systems. Your responsibilities will include:
- Creating and implementing ETL operations using Azure Data Factory or similar tools
- Utilizing different REST APIs for data collection
- Developing interactive dashboards and reports using Power BI
- Crafting efficient SQL queries for data extraction and manipulation from relational databases
- Establishing and managing data models, DAX calculations, and Power BI datasets
- Conducting data analysis and validation, and ensuring report quality and accuracy
- Connecting Power BI reports to various data sources such as SQL Server, Azure SQL, Excel, APIs, Snowflake, and Databricks
- Optimizing Power BI dashboards, SQL queries, and dataflows for enhanced performance and scalability
- Collaborating with business stakeholders to gather reporting requirements and translate them into technical solutions
- Troubleshooting data-related and report performance issues, ensuring timely resolution
- Documenting report specifications, data models, business logic, and technical processes, and staying updated with new features and best practices in the Power BI, Azure, Snowflake, and AWS ecosystems
Requirements for this role include experience in ETL operations, working knowledge of APIs for data collection, familiarity with data visualization best practices and UI/UX design principles, exposure to data warehouse concepts such as Star Schema and Snowflake Schema, and experience implementing Row-Level Security (RLS) in Power BI reports.
Qualifications:
- Minimum 3 years of experience with Power BI
- Minimum 3 years of experience in data warehousing and ETL setup
- Experience working with SQL Server, Azure SQL, and the Microsoft technology stack
If you are a dynamic professional with a passion for data engineering and Power BI development and meet the above requirements, we welcome you to join our team at Solvios Technology.
Posted 1 week ago
6.0 - 8.0 years
20 - 35 Lacs
Bengaluru
Work from Office
Candidates should have 6+ years of experience with Azure Cloud, a background in data engineering and architecture, and hands-on experience with Azure services such as Azure Data Factory, Azure Functions, Azure SQL, Azure Databricks, Azure Data Lake, and Synapse Analytics.
Posted 1 week ago
3.0 - 8.0 years
1 - 3 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Immediate joiners only (notice period of 0-15 days); a minimum of 3+ years of experience is mandatory.
Work mode: Hybrid
Work location: Hyderabad, Bengaluru, Chennai, Pune
Mandatory skills: Azure, ADF, Spark, Astronomer
Data engineering topics:
- Kafka-based ingestion
- API-based ingestion
- Orchestration tools such as Astronomer, Apache Airflow, Dagster, etc.
- Familiarity with Apache Iceberg, Delta, and Hudi table designs: when to use them, why, and how
- Spark architecture, optimization techniques, performance issues and mitigation techniques
Data quality topics (data engineering without quality provides no value):
- Great Expectations (https://docs.greatexpectations.io/docs/core/introduction/try_gx/)
- Pydeequ (https://pydeequ.readthedocs.io/en/latest/index.html)
- Databricks DLT expectations (Spark based)
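For illustration only (not part of the posting): a minimal sketch of the kind of data-quality check the Great Expectations tooling listed above supports, using its classic pandas-based API. The column names and rules are hypothetical, and the exact method surface varies across Great Expectations versions.

```python
# Minimal data-quality sketch in the style of Great Expectations' classic pandas API.
# Column names and thresholds are hypothetical examples.
import pandas as pd
import great_expectations as ge

# Hypothetical ingested batch (in practice this might arrive via Kafka or an API).
batch = pd.DataFrame({
    "order_id": [101, 102, 103, None],
    "amount": [250.0, 125.5, -10.0, 80.0],
})

gdf = ge.from_pandas(batch)  # wrap the DataFrame so expectation methods are available

checks = [
    gdf.expect_column_values_to_not_be_null("order_id"),
    gdf.expect_column_values_to_be_between("amount", min_value=0),
]

# A failed expectation flags a data-quality issue before the data moves downstream.
for result in checks:
    print(result.success)
```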
Posted 1 week ago
7.0 - 12.0 years
20 - 27 Lacs
Pune
Remote
We are seeking a highly skilled Senior Data Engineer 2 with extensive experience in analysing existing data and databases and in designing, building, and optimizing high-volume data pipelines. The ideal candidate will have strong expertise in Python, databases, Databricks on Azure cloud services, DevOps, and CI/CD tools, along with a solid understanding of AI/ML techniques and big data processing frameworks such as Apache Spark and PySpark.
Responsibilities
- Adhere to coding and Numerator technology standards
- Build suitable automation test suites within Azure DevOps
- Maintain and update automation test suites as required
- Carry out manual testing, load testing, and exploratory testing as required
- Perform technical analysis and work closely with Business Analysts and Senior Data Developers to consistently deliver sprint goals
- Assist in estimation of sprint-by-sprint stories and tasks
- Proactively take a responsible approach to product delivery
Requirements
- 7-10 years of experience in data engineering roles, handling large databases
- Good C# and Python skills
- Experience working with Microsoft Azure Cloud
- Experience in Agile methodologies (Scrum/Kanban)
- Experience with Apache Spark, PySpark, Databricks
- Experience working with DevOps pipelines, preferably Azure DevOps
Preferred Qualifications
- Bachelor's or master's degree in Computer Science, Information Technology, Data Science, or a related field
- Experience working in a technical development/support focused role
- Knowledge of or experience with AI/ML techniques
- Knowledge of or experience with Visual Basic 6
- Certification in a relevant data engineering discipline or related fields
Posted 1 week ago
5.0 - 15.0 years
13 - 37 Lacs
Kolkata, Pune, Bengaluru
Work from Office
Roles and Responsibilities:
- Design, develop, test, deploy, and maintain large-scale data pipelines using Azure Data Factory (ADF) to extract, transform, and load data from various sources into Azure storage solutions such as Blob Storage, Cosmos DB, etc.
- Collaborate with cross-functional teams to gather requirements for new data engineering projects and ensure successful implementation of ADF workflows.
- Troubleshoot ADF pipeline failures by analyzing logs, applying debugging techniques, and working closely with stakeholders to resolve problems efficiently.
- Develop automated testing frameworks for ADF pipelines using PySpark or other tools to ensure high-quality delivery of projects.
Job Requirements:
- 5-15 years of experience in data engineering with expertise in Azure Data Factory (ADF).
- Strong understanding of big data technologies in the Hadoop ecosystem, including Hive, Pig, Spark, etc.
- Proficiency in writing complex SQL queries on the Azure Databricks platform.
Posted 1 week ago
5.0 - 8.0 years
10 - 20 Lacs
Bengaluru
Work from Office
We are looking for a Senior Data Engineer with 5-8 years of experience.
Posted 1 week ago
9.0 - 14.0 years
25 - 37 Lacs
Pune, Chennai, Bengaluru
Hybrid
Role & responsibilities Job Overview: We are looking for a Senior Data Engineer with strong expertise in SQL, Python, Azure Synapse, Azure Data Factory, Snowflake, and Databricks . The ideal candidate should have a solid understanding of SQL (DDL, DML, query optimization) and ETL pipelines while demonstrating a learning mindset to adapt to evolving technologies. Key Responsibilities: Collaborate with business and IT stakeholders to define business and functional requirements for data solutions. Design and implement scalable ETL/ELT pipelines using Azure Data Factory, Databricks, and Snowflake . Develop detailed technical designs, data flow diagrams, and future-state data architecture . Evangelize modern data modelling practices , including entity-relationship models, star schema, and Kimball methodology . Ensure data governance, quality, and validation by working closely with quality engineering teams . Write, optimize, and troubleshoot complex SQL queries , including DDL, DML, and performance tuning . Work with Azure Synapse, Azure Data Lake, and Snowflake for large-scale data processing . Implement DevOps and CI/CD best practices for automated data pipeline deployments. Support real-time streaming data processing with Spark, Kafka, or similar technologies . Provide technical mentorship and guide team members on best practices in SQL, ETL, and cloud data solutions . Stay up to date with emerging cloud and data engineering technologies and demonstrate a continuous learning mindset . Required Skills & Qualifications: Primary Requirements: SQL Expertise Strong hands-on experience with DDL, DML, query optimization, and performance tuning . Programming Languages – Proficiency in Python or Java for data processing and automation. Data Modelling – Good understanding of entity-relationship modelling, star schema, and Kimball methodology . Cloud Data Engineering – Hands-on experience with Azure Synapse, Azure Data Factory, Azure Data Lake, Databricks and Snowflake ETL Development – Experience building scalable ETL/ELT pipelines and data ingestion workflows. Ability to learn and apply Snowflake concepts as needed. Communication Skills : Strong presentation and communication skills to engage both technical and business stakeholders in strategic discussions. Financial Services Domain (Optional) : Knowledge of financial services. Good to Have Skills: DevOps & CI/CD – Experience with Git, Jenkins, Docker, and automated deployments . Streaming Data Processing – Experience with Spark, Kafka, or real-time event-driven architectures . Data Governance & Security – Understanding of data security, compliance, and governance frameworks . Experience in AWS – Knowledge of AWS cloud data solutions (Glue, Redshift, Athena, etc.) is a plus.
Posted 1 week ago
2.0 - 7.0 years
10 - 20 Lacs
Bengaluru
Work from Office
Job Title: Data Engineer
Dear Candidates,
Greetings from ExxonMobil! Please copy and paste the below link into your browser to apply for the position on the company website.
Link to apply: https://jobs.exxonmobil.com/job-invite/80614/
Please find the JD below.
What role you will play in our team:
Design, build, and maintain data systems, architectures, and pipelines to extract insights and drive business decisions. Collaborate with stakeholders to ensure data quality, integrity, and availability.
What you will do:
- Support the development and ownership of ETL pipelines within cloud data platforms
- Automate data extraction and transformation pipelines using Python, Airflow, Azure Data Factory, Qlik, or Fivetran
- Deliver task monitoring and notification systems for data pipeline status
- Support data cleansing, enrichment, and curation activities to enable ongoing business use cases
- Develop and deliver data pipelines through a CI/CD delivery methodology
- Develop monitoring around pipelines to ensure uptime of data flows
- Optimize and refine current queries against Snowflake
- Work with Snowflake, MSSQL, Postgres, Oracle, Azure SQL, and other relational databases
- Work with different cloud databases such as Azure SQL, Azure PostgreSQL, etc.
- Work with change-data-capture ETL software such as Qlik and Fivetran to populate Snowflake
- Identify and remediate failed and long-running queries
- Develop large aggregate queries across a multitude of schemas
About You - Skills and Qualifications:
- Experience with data processing/analytics and ETL data transformation
- Proficient in ingesting data to/from Snowflake and Azure storage accounts
- Proficiency in at least one of the following languages: Python, C#, C++, F#, Java
- Proficiency in SQL and NoSQL databases
- Knowledge of SQL query development and optimization
- Demonstrated experience with Snowflake, Qlik Replicate, Fivetran, Azure Data Explorer
- Azure cloud experience (current/future) with ADX, ADF, Databricks
- Expertise with Airflow, Qlik, Fivetran, Azure Data Factory
- Management of Snowflake through dbt scripting
- Solid understanding of data strategies, including data management, data curation, and data governance
- Ability to quickly build relationships and credibility with business customers and agile teams
- A passion for learning about and experimenting with new technologies
- Confidence in creating and delivering technical presentations and training
- Excellent organization and planning skills
Thanks & Regards,
Anita
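As an aside (not part of the ExxonMobil posting): a minimal sketch of the kind of Python/Airflow pipeline automation the role describes. The DAG id, task names, and callables are hypothetical placeholders, and the arguments follow Airflow 2.x conventions.

```python
# Minimal Airflow 2.x DAG sketch for an extract -> transform -> load flow.
# dag_id, task names, and callables are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull raw data from a source system")


def transform():
    print("cleanse and enrich the extracted data")


def load():
    print("load curated data into the warehouse (e.g. Snowflake)")


with DAG(
    dag_id="example_daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> transform_task >> load_task
```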
Posted 1 week ago
5.0 - 10.0 years
15 - 25 Lacs
Hyderabad
Work from Office
We are seeking a highly skilled Quality Engineer - Data to ensure the reliability, accuracy, and performance of data pipelines and AI/ML models within our SmartFM platform. This role is critical to delivering trusted data and actionable insights that drive smart building optimization and operational efficiency.
Key Responsibilities:
- Design and implement robust QA strategies for data pipelines, ML models, and agentic workflows.
- Test and validate data ingestion and streaming systems (e.g., StreamSets, Kafka) for accuracy, completeness, and resilience.
- Ensure data integrity and schema validation within MongoDB and other data stores.
- Collaborate with data engineers to proactively identify and resolve data quality issues.
- Partner with data scientists to validate ML/DL/LLM model performance, fairness, and robustness.
- Automate testing processes using frameworks such as Pytest, Great Expectations, and Deepchecks.
- Monitor production pipelines for anomalies, data drift, and model degradation.
- Participate in code reviews and QA audits, and maintain comprehensive documentation of test plans and results.
- Continuously evaluate and improve QA processes based on industry best practices and emerging trends.
Required Technical Skills:
- 5-10 years of QA experience with a focus on data validation and ML model testing.
- Strong command of SQL for complex data queries and integrity checks.
- Practical experience with StreamSets, Kafka, and MongoDB.
- Proficiency in Python scripting for automation and testing.
- Familiarity with ML testing metrics, model validation techniques, and bias detection.
- Exposure to cloud platforms such as Azure, AWS, or GCP.
- Working knowledge of QA tools such as Pytest, Great Expectations, and Deepchecks.
- Understanding of Node.js and React-based applications is an added advantage.
Additional Qualifications:
- Excellent communication, documentation, and cross-functional collaboration skills.
- Strong analytical mindset and high attention to detail.
- Ability to work with cross-disciplinary teams including Engineering, Data Science, and Product.
- Passion for continuous learning and adoption of new QA tools and methodologies.
- Domain knowledge in facility management, IoT, or building automation systems is a strong plus.
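For context only (not part of the posting): a minimal Pytest sketch of the sort of automated data-validation checks described above. The table, columns, and rules are hypothetical.

```python
# Hypothetical Pytest checks for a pipeline output table.
import pandas as pd
import pytest


@pytest.fixture
def orders():
    # Stand-in for data read from MongoDB or another pipeline sink.
    return pd.DataFrame({
        "order_id": [1, 2, 3],
        "status": ["open", "closed", "open"],
        "amount": [10.0, 20.0, 15.0],
    })


def test_primary_key_is_unique_and_not_null(orders):
    assert orders["order_id"].notna().all()
    assert orders["order_id"].is_unique


def test_amounts_are_non_negative(orders):
    assert (orders["amount"] >= 0).all()


def test_status_values_are_known(orders):
    assert set(orders["status"]) <= {"open", "closed", "cancelled"}
```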
Posted 1 week ago
4.0 - 6.0 years
12 - 16 Lacs
Bangalore Rural, Bengaluru
Work from Office
Data Engineer (Microsoft Fabric & Lakehouse): PySpark, Data Lakehouse architectures, cloud platforms (Azure, AWS), on-prem databases, SaaS platforms (Salesforce, Workday), REST/OpenAPI-based APIs, data governance, lineage, RBAC principles, SQL.
Posted 1 week ago
3.0 - 8.0 years
10 - 20 Lacs
Noida, New Delhi, Gurugram
Hybrid
Role & responsibilities Strategically partner with the Customer Cloud Sales Team to identify and qualify business opportunities and identify key customer technical objections. Develop strategies to resolve technical obstacles and architect client solutions to meet complex business and technical requirements Lead the technical aspects of the sales cycle, including technical trainings, client presentations, technical bid responses, product and solution briefings, and proof-of-concept technical work Identify and respond to key technical objections from client, providing prescriptive guidance for successful resolutions tailored to specific client needs May directly work with Customer's Cloud products to demonstrate, design and prototype integrations in customer/partner environments Develop and deliver thorough product messaging to highlight advanced technical value propositions, using techniques such as: whiteboard and slide presentations, technical product demonstrations, white papers, trial management and RFI response documents Assess technical challenges to develop and deliver recommendations on integration strategies, enterprise architectures, platforms and application infrastructure required to successfully implement a complete solution Leverage technical expertise to provide best practice counsel to optimize advanced technical products effectiveness THER CRITICAL FUNCTIONS AND RESPONSIBILTIES Ensure customer data is accurate and actionable using Salesforce.com (SFDC) systems Leverage 3rd party prospect and account intelligence tools to extract meaningful insights and support varying client needs Navigate, analyse and interpret technical documentation for technical products, often including Customer Cloud products Enhance skills and knowledge by using a Learning Management Solution (LMS) for training and certification Serve as a technical and subject matter expert to support advanced trainings for team members on moderate to highly complex technical subjects Offer thought leadership in the advanced technical solutions, such as cloud computing Coach and mentor team members and advise managers on creating business and process efficiencies in internal workflows and training materials Collect and codify best practices between sales, marketing, and sales engineers Preferred candidate profile Required Qualifications Bachelors degree in Computer Science or other technical field, or equivalent practical experience (preferred) 3-5 years of experience serving in a technical Sales Engineer in an advanced technical environment Prior experience with advanced technologies, such as: Big Data, PaaS, and IaaS technologies, etc. Proven strong communication skills with a proactive and positive approach to task management (written and verbal)Confident presenter with excellent presentation and persuasion skills Strong work ethic and ability to work independently Perks and benefits
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Pune, Maharashtra
On-site
You are an experienced Data Engineer who will be responsible for leading the end-to-end migration of the data analytics and reporting environment to Looker at Frequence. Your role will involve designing scalable data models, translating business logic into LookML, and empowering teams across the organization with self-service analytics and actionable insights. You will collaborate closely with stakeholders from data, engineering, and business teams to ensure a smooth transition to Looker and to establish best practices for data modeling, governance, and dashboard development.
Your responsibilities will include:
- Leading the migration of existing BI tools, dashboards, and reporting infrastructure to Looker
- Designing, developing, and maintaining scalable LookML data models, dimensions, measures, and explores
- Creating intuitive, actionable, and visually compelling Looker dashboards and reports
- Collaborating with data engineers and analysts to ensure consistency across data sources
- Translating business requirements into technical specifications and LookML implementations
- Optimizing SQL queries and LookML models for performance and scalability
- Implementing and managing Looker's security settings, permissions, and user roles in alignment with data governance standards
- Troubleshooting issues and supporting end users in their Looker adoption
- Maintaining version control of LookML projects using Git
- Advocating for best practices in BI development, testing, and documentation
You should have:
- Proven experience with Looker and deep expertise in LookML syntax and functionality
- Hands-on experience building and maintaining LookML data models, explores, dimensions, and measures
- Strong SQL skills, including complex joins, aggregations, and performance tuning
- Experience working with semantic layers and data modeling for analytics
- A solid understanding of data analysis and visualization best practices
- The ability to create clear, concise, and impactful dashboards and visualizations
- Strong problem-solving skills and attention to detail in debugging Looker models and queries
- Familiarity with Looker's security features and data governance principles
- Experience using version control systems, preferably Git
- Excellent communication skills and the ability to work cross-functionally
- Familiarity with modern data warehousing platforms (e.g., Snowflake, BigQuery, Redshift)
- Experience migrating from legacy BI tools (e.g., Tableau, Power BI) to Looker
- Experience working in agile data teams and managing BI projects
- Familiarity with dbt or other data transformation frameworks
At Frequence, you will be part of a dynamic, diverse, innovative, and friendly work environment that values creativity and collaboration. The company embraces differences and believes they drive creativity and innovation. The team consists of individuals from varied backgrounds who are all trail-blazing team players, thinking big and aiming to make a significant impact.
Please note that third-party recruiting agencies will not be involved in this search.
Posted 1 week ago
4.0 - 7.0 years
6 - 9 Lacs
Ahmedabad
Work from Office
Job Overview
Designation: Software Engineer
Location: Ahmedabad
Work Mode: Work from Office
Vacancy: 1
Experience: 4.0 to 7.0 years
ManekTech is looking for a Data Engineer to join our dynamic team and embark on a rewarding career journey.
- Liaising with coworkers and clients to elucidate the requirements for each task.
- Conceptualizing and generating infrastructure that allows big data to be accessed and analyzed.
- Reformulating existing frameworks to optimize their functioning.
- Testing such structures to ensure that they are fit for use.
- Preparing raw data for manipulation by data scientists.
- Detecting and correcting errors in your work.
- Ensuring that your work remains backed up and readily accessible to relevant coworkers.
- Remaining up to date with industry standards and technological advancements that will improve the quality of your outputs.
Posted 1 week ago
6.0 - 11.0 years
9 - 19 Lacs
Bengaluru
Hybrid
Lead: 6-8 years
- Focus on production cost for the techniques and features
- Mentoring the team on benchmarking costs and performance KPIs
- Guarding the team's focus towards objectives
- Advanced proficiency in Python and/or Scala for data engineering tasks
- Proficiency in PySpark and Scala Spark for distributed data processing, with hands-on experience in Azure Databricks
- Expertise in Azure Databricks for data engineering, including Delta Lake, MLflow, and cluster management
- Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and their big data and data warehousing services (e.g., Azure Data Factory, AWS Redshift)
- Expertise in data warehousing platforms such as Snowflake, Azure Synapse Analytics, or Redshift, including schema design, ETL/ELT processes, and query optimization
- Experience with the Hadoop ecosystem (HDFS, Hive, HBase, etc.) and Apache Airflow for workflow orchestration and scheduling
- Advanced knowledge of SQL for data warehousing and analytics, with experience in NoSQL databases (e.g., MongoDB) as a plus
- Experience with version control systems (e.g., Git) and CI/CD pipelines
- Familiarity with Java or other programming languages is a plus
Posted 2 weeks ago
4.0 - 8.0 years
10 - 20 Lacs
Pune, Gurugram, Bengaluru
Work from Office
We are looking for a Data Engineer with experience working as a Quantexa developer.
Posted 2 weeks ago
5.0 - 10.0 years
30 - 35 Lacs
Kolkata, New Delhi, Bengaluru
Work from Office
Strong hands-on experience working as a GCP Data Engineer, with very strong skills in SQL and PySpark as well as BigQuery, Dataform, Dataplex, etc. Only immediate joiners or candidates currently serving notice will be considered.
Posted 2 weeks ago
2.0 - 4.0 years
7 - 9 Lacs
Mangaluru
Work from Office
About the Role
We're seeking a Data Engineering expert with a passion for teaching and building impactful learning experiences. This role goes beyond traditional instruction: it's about designing engaging, industry-relevant content and delivering it in a way that sparks curiosity and problem-solving among young professionals. If you're someone who thrives in a startup-like, hands-on learning environment and loves to simplify complex technical concepts, we want you on our team.
Role & responsibilities
- Design and deliver an industry-relevant Data Engineering curriculum with a focus on solving complex, real-world problems.
- Mentor students through the process of building product-grade data solutions, from identifying the problem to deploying a prototype.
- Conduct hands-on sessions, coding labs, and data engineering workshops.
- Assess student progress through assignments, evaluations, and project reviews.
- Encourage innovation and entrepreneurship by helping students transform ideas into structured products.
- Continuously improve content based on student outcomes and industry trends.
- Be a role model who inspires, supports, and challenges learners to grow into capable tech professionals.
Preferred candidate profile (Key Skills & Expertise)
- Strong practical experience with data engineering tools and frameworks (e.g., SQL, Python, Spark, Kafka, Airflow, Hadoop).
- Ability to design course modules that emphasize application, scalability, and problem-solving.
- Demonstrated experience in mentoring, teaching, or conducting technical workshops.
- Passion for product thinking: guiding students to go beyond code and build real solutions.
- Excellent communication and leadership skills.
- Adaptability and a growth mindset.
Contact: 91 97041 22348 / hr@singhtechservices.com
Posted 2 weeks ago
3.0 - 8.0 years
5 - 12 Lacs
Chennai
Work from Office
Minimum 3+ years as a Data Engineer (GenAI platform).
ETL/ELT workflows using AWS, Azure Databricks, Airflow, Azure Data Factory.
Experience in Azure Databricks, Snowflake, Airflow, Python, SQL, Spark, Spark Streaming, AWS EKS, CI/CD (Jenkins), Elasticsearch, SOLR, OpenSearch, Vespa.
Posted 2 weeks ago
6.0 - 11.0 years
1 - 6 Lacs
Bengaluru
Work from Office
Location: Bengaluru (Hybrid)
Key Responsibilities:
- Design, develop, and maintain data pipelines using Azure Data Factory, Databricks (PySpark), and Synapse Analytics.
- Implement and manage real-time data streaming solutions using Kafka.
- Build and optimize data lake architectures and ensure best practices for scalability and security.
- Develop efficient SQL and Python scripts for data transformation and cleansing.
- Collaborate with data scientists, analysts, and other stakeholders to ensure data is readily available and reliable.
- Utilize Azure DevOps and Git workflows for CI/CD and version control of data pipelines and scripts.
- Monitor and troubleshoot data ingestion, transformation, and loading issues.
Must-Have Skills:
- Azure Data Factory (ADF)
- Azure Databricks (with PySpark)
- Azure Synapse Analytics
- Apache Kafka
- Strong proficiency in SQL, Python, and Spark
- Experience with Azure DevOps and Git workflows
- Strong understanding of data lake architectures and cloud data engineering best practices
Good to Have:
- Experience with data governance tools or frameworks
- Exposure to Delta Lake, Parquet, or other data formats
- Knowledge of performance tuning in distributed data environments
- Familiarity with infrastructure-as-code (e.g., Terraform or ARM templates)
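Illustrative only (not part of the posting): a minimal PySpark sketch of the kind of transformation-and-cleansing step such a pipeline might perform on Databricks. Paths, column names, and the storage account are hypothetical placeholders.

```python
# Hypothetical PySpark cleansing step writing curated Delta data.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("cleanse_events").getOrCreate()

# Raw events landed in the lake by ADF or Kafka ingestion (placeholder path).
raw = spark.read.json("abfss://raw@<storage-account>.dfs.core.windows.net/events/")

cleansed = (
    raw
    .dropDuplicates(["event_id"])                     # de-duplicate on the business key
    .filter(F.col("event_ts").isNotNull())            # drop records missing a timestamp
    .withColumn("event_date", F.to_date("event_ts"))  # derive a partition column
)

(
    cleansed.write
    .format("delta")                                  # Delta Lake output on Databricks
    .mode("overwrite")
    .partitionBy("event_date")
    .save("abfss://curated@<storage-account>.dfs.core.windows.net/events/")
)
```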
Posted 2 weeks ago
5.0 - 10.0 years
10 - 20 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
Who We Are:
We are a digitally native company that helps organizations reinvent themselves and unleash their potential. We are the place where innovation, design, and engineering meet scale. Globant is 20 years old, a NYSE-listed public organization with more than 33,000+ employees worldwide working out of 35 countries globally. www.globant.com
Job location: Pune/Hyderabad/Bangalore
Work Mode: Hybrid
Experience: 5 to 10 Years
Must-have skills:
1) AWS (EC2, EMR, EKS)
2) Redshift
3) Lambda Functions
4) Glue
5) Python
6) PySpark
7) SQL
8) CloudWatch
9) NoSQL database - DynamoDB/MongoDB or any
We are seeking a highly skilled and motivated Data Engineer to join our dynamic team. The ideal candidate will have a strong background in designing, developing, and managing data pipelines, working with cloud technologies, and optimizing data workflows. You will play a key role in supporting our data-driven initiatives and ensuring the seamless integration and analysis of large datasets.
- Design scalable data models: develop and maintain conceptual, logical, and physical data models for structured and semi-structured data in AWS environments.
- Optimize data pipelines: work closely with data engineers to align data models with AWS-native data pipeline design and ETL best practices.
- AWS cloud data services: design and implement data solutions leveraging AWS Redshift, Athena, Glue, S3, Lake Formation, and AWS-native ETL workflows.
- Design, develop, and maintain scalable data pipelines and ETL processes using AWS services (Glue, Lambda, Redshift).
- Write efficient, reusable, and maintainable Python and PySpark scripts for data processing and transformation.
- Optimize SQL queries for performance and scalability; expertise in writing complex SQL queries and optimizing them for performance.
- Monitor, troubleshoot, and improve data pipelines for reliability and performance.
- Focus on ETL automation using Python and PySpark: design, build, and maintain efficient data pipelines, ensuring data quality and integrity for various applications.
Posted 2 weeks ago
3.0 - 6.0 years
3 - 5 Lacs
Noida, Gurugram, Delhi / NCR
Work from Office
Job Title: Data Engineer - Snowflake & ETL Specialist
Experience: 3-6 years
Employment Type: Full-time
Joining: Immediate
Location: Gurgaon
Department: Data Engineering / Analytics
Job Summary: We are seeking a skilled Data Engineer with strong hands-on experience in Snowflake, ETL development, and AWS Glue. The ideal candidate will be responsible for designing, building, and optimizing scalable data pipelines and data warehouse solutions that support enterprise-level analytics and reporting needs.
Key Responsibilities:
- Develop, optimize, and manage ETL pipelines using AWS Glue, Python, and Snowflake.
- Design and implement data warehouse solutions and data models based on business requirements.
- Work closely with data analysts, BI developers, and stakeholders to ensure clean, consistent, and reliable data delivery.
- Monitor and troubleshoot performance issues related to data pipelines and queries in Snowflake.
- Participate in code reviews, documentation, and knowledge-sharing activities.
- Ensure data security, governance, and compliance with organizational standards.
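For illustration only (not part of the posting): a minimal sketch of loading staged data into Snowflake from Python with the Snowflake connector. Connection parameters, the stage, and the table name are hypothetical placeholders.

```python
# Hypothetical load of staged files into a Snowflake table via the Python connector.
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account_identifier>",   # placeholder credentials
    user="<user>",
    password="<password>",
    warehouse="ANALYTICS_WH",
    database="ANALYTICS",
    schema="RAW",
)

try:
    cur = conn.cursor()
    # COPY INTO pulls files already staged (e.g. by an AWS Glue job) into the target table.
    cur.execute("""
        COPY INTO RAW.ORDERS
        FROM @RAW.ORDERS_STAGE
        FILE_FORMAT = (TYPE = 'PARQUET')
        MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
    """)
    print(cur.fetchall())  # per-file load results
finally:
    conn.close()
```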
Posted 2 weeks ago
3.0 - 8.0 years
9 - 19 Lacs
Bengaluru
Hybrid
Data engineers: Help optimize workflows handling millions of images at minimum cost.
- Every bit of optimization, when scaled to millions of images, translates into significant cost savings.
- You must be extremely good at programming for cloud workflows, with a strong eye on optimization considering the scale.
Competencies:
- First, a belief that "the right cost is everything": optimized but fast on cloud.
- An attitude of making data flow through cloud workflows most efficiently.
- The right cost can even mean finding the best storage solution to retrieve data cheaply and fast.
- Extremely good at software development: not bragging about language proficiency, but willing to learn even 'assembly language' if required (for now, mostly Python).
Posted 2 weeks ago