5.0 - 10.0 years
10 - 20 Lacs
Pune
Remote
Role & responsibilities: At least 5 years of experience in data engineering, with a strong background in Azure Databricks and Scala/Python. Databricks experience with working knowledge of PySpark. Database: Oracle or any other relational database. Programming: Python, with awareness of Streamlit.
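For illustration, a minimal sketch of the kind of pipeline this role involves: a PySpark job on Azure Databricks reading an Oracle table over JDBC and writing a curated Delta table. All connection details, table, and column names below are hypothetical.

```python
# Hypothetical sketch: read an Oracle table via JDBC on Azure Databricks and
# write a curated Delta table. Connection details and names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("oracle_to_delta").getOrCreate()

# Read a source table from Oracle (the JDBC driver must be on the cluster classpath).
orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:oracle:thin:@//db-host:1521/ORCLPDB")  # placeholder
    .option("dbtable", "SALES.ORDERS")                          # placeholder
    .option("user", "etl_user")                                 # placeholder
    .option("password", "***")
    .load()
)

# A simple transformation: daily order totals.
daily = (
    orders.withColumn("order_date", F.to_date("ORDER_TS"))
    .groupBy("order_date")
    .agg(F.sum("AMOUNT").alias("total_amount"))
)

# Persist as a Delta table for downstream consumers (e.g., a Streamlit app).
daily.write.format("delta").mode("overwrite").saveAsTable("curated.daily_orders")
```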
Posted 1 month ago
6.0 - 11.0 years
25 - 35 Lacs
Gurugram, Chennai, Bengaluru
Hybrid
Greetings from BCforward INDIA TECHNOLOGIES PRIVATE LIMITED. Contract-to-Hire (C2H) role. Location: Bengaluru, IN; Gurgaon, IN; Chennai, IN. Payroll: BCforward. Work Mode: Hybrid. JD: GCP; PySpark; ETL - Big Data / Data Warehousing; SQL; Python. Experienced data engineer with hands-on experience with GCP offerings. Experienced in BigQuery/BigTable/PySpark. Worked on prior data engineering projects leveraging GCP product offerings. Strong SQL background. Prior Big Data experience. Please share your updated resume, PAN card soft copy, passport-size photo & UAN history. Interested applicants can share an updated resume to g.sreekanth@bcforward.com Note: Looking for candidates who can join immediately or within 30 days at most. All the best
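As an illustration of the BigQuery/PySpark combination listed above, a minimal sketch using the spark-bigquery-connector; project, dataset, and bucket names are placeholders.

```python
# Hypothetical sketch: reading a BigQuery table into PySpark and writing an
# aggregate back. Project, dataset, and bucket names are placeholders.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("bq_ingest")
    # The connector jar must be available, e.g. via --packages
    # com.google.cloud.spark:spark-bigquery-with-dependencies_2.12:<version>
    .getOrCreate()
)

events = (
    spark.read.format("bigquery")
    .option("table", "my-project.analytics.events")  # placeholder table
    .load()
)

# Push a simple aggregate back to BigQuery.
summary = events.groupBy("event_type").count()
(
    summary.write.format("bigquery")
    .option("table", "my-project.analytics.event_counts")  # placeholder
    .option("temporaryGcsBucket", "my-staging-bucket")     # needed for indirect writes
    .mode("overwrite")
    .save()
)
```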
Posted 1 month ago
4.0 - 9.0 years
11 - 17 Lacs
Bengaluru
Work from Office
Greetings from TSIT Digital!! This is with regard to an excellent opportunity with us: if you have a unique and unlimited passion for building world-class enterprise software products that turn into actionable intelligence, then we have the right opportunity for you and your career. This is an opportunity for permanent employment with TSIT Digital. What we are looking for: Data Engineer. Experience: 4+ years (relevant experience 2-5 years). Location: Bangalore. Notice period: Immediate to 15 days. Job Description: Work location - Manyata Tech Park, Bengaluru, Karnataka, India. Work mode - Hybrid model. Client - Lowes. Mandatory skills: Data Engineer with Scala/Python, SQL, and scripting; knowledge of BigQuery, PySpark, Airflow, serverless cloud-native services, and Kafka streaming. If you are interested, please share your updated CV: kousalya.v@tsit.co.in
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
hyderabad, telangana
On-site
As a GCP Data Engineer with Tableau expertise, you will be responsible for designing, implementing, and maintaining data pipelines on Google Cloud Platform (GCP) to support various data analytics initiatives. Your role will involve working closely with stakeholders to understand their data requirements, developing scalable solutions to extract, transform, and load data from different sources into GCP, and ensuring the integrity and quality of the data. In this role, you will leverage your expertise in GCP services such as BigQuery, Dataflow, Pub/Sub, and Data Studio to build and optimize data pipelines for efficient data processing and analysis. You will also be required to create visualizations and dashboards using Tableau to present insights derived from the data to business users. The ideal candidate for this position should have a strong background in data engineering, with hands-on experience in building and optimizing data pipelines on GCP. Proficiency in SQL, Python, or Java for data processing and transformation is essential. Additionally, experience with Tableau for creating interactive visualizations and dashboards is highly preferred. If you are a data engineering professional with expertise in GCP and Tableau and are passionate about leveraging data to drive business decisions, this role offers an exciting opportunity to contribute to the success of data-driven initiatives within the organization.
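For a rough sense of the GCP-plus-Tableau workflow described, a minimal sketch that runs a BigQuery query from Python and exports the result as a CSV that Tableau can use as a data source; the project, table, and column names are assumptions.

```python
# Hypothetical sketch: query BigQuery from Python and export a Tableau-ready
# CSV. Project, table, and column names are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # uses application default credentials

sql = """
    SELECT region, DATE(order_ts) AS order_date, SUM(amount) AS revenue
    FROM `my-project.sales.orders`
    GROUP BY region, order_date
"""

# to_dataframe() requires pandas (and on recent versions, db-dtypes).
df = client.query(sql).result().to_dataframe()

# Write a CSV that a Tableau workbook can connect to.
df.to_csv("daily_revenue_by_region.csv", index=False)
```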
Posted 1 month ago
2.0 - 6.0 years
0 Lacs
haryana
On-site
As a Data Analyst with expertise in Market Research and Web Scraping, you will be responsible for analyzing large datasets to uncover trends and insights related to market dynamics and competitor performance. Your role will involve conducting thorough market research to track competitor activities, identify emerging trends, and understand customer preferences. Additionally, you will design and implement data scraping solutions to extract competitor data from various online sources while ensuring compliance with legal standards and website terms of service. Your key responsibilities will include developing dashboards, reports, and visualizations to communicate key insights effectively to stakeholders. You will collaborate with cross-functional teams to align data-driven insights with company objectives and support strategic decision-making in product development and marketing strategies. Furthermore, you will be involved in database management, data cleaning, and maintaining organized databases with accurate and consistent information for easy access and retrieval. To excel in this role, you should have a Bachelor's degree in Data Science, Computer Science, Statistics, Business Analytics, or a related field. Advanced degrees or certifications in data analytics or market research will be advantageous. Proficiency in SQL, Python, or R for data analysis, along with experience in data visualization tools like Tableau, Power BI, or D3.js, is essential. Strong analytical skills, the ability to interpret data effectively, and knowledge of statistical analysis techniques are key requirements for this position. Experience with data scraping tools such as BeautifulSoup, Scrapy, or Selenium, as well as familiarity with web analytics and SEO tools like Google Analytics or SEMrush, will be beneficial. Preferred skills include experience with e-commerce data analysis, knowledge of retail or consumer behavior analytics, and an understanding of machine learning techniques for data classification and prediction. Ethical data scraping practices and adherence to data privacy laws are essential considerations for this role. If you meet these qualifications and are excited about the opportunity to work in a dynamic environment where your analytical skills and market research expertise will be valued, we encourage you to apply by sending your updated resume along with your current salary details to jobs@glansolutions.com. For any inquiries, feel free to contact Satish at 8802749743 or visit our website at www.glansolutions.com to explore more job opportunities. Join us at Glan Solutions and leverage your data analysis skills to drive strategic decisions and contribute to our success in the fashion/garment/apparel industry! Note: This job was posted on 14th November 2024.
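For illustration, a minimal scraping sketch in the spirit of this role, using requests and BeautifulSoup; the URL and CSS selectors are placeholders, and any real scraper must respect robots.txt, site terms of service, and rate limits.

```python
# Hypothetical sketch: a polite product-listing scraper. The URL and selectors
# are illustrative only; adjust them to the target page's actual markup.
import time
import requests
from bs4 import BeautifulSoup

HEADERS = {"User-Agent": "market-research-bot/0.1 (contact: analyst@example.com)"}

def scrape_prices(url: str) -> list[dict]:
    resp = requests.get(url, headers=HEADERS, timeout=10)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    rows = []
    # Selectors below are placeholders for the target site's markup.
    for card in soup.select("div.product-card"):
        rows.append({
            "name": card.select_one("h2.title").get_text(strip=True),
            "price": card.select_one("span.price").get_text(strip=True),
        })
    time.sleep(2)  # throttle between requests
    return rows

# competitor_rows = scrape_prices("https://example.com/category/shirts")
```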
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
As an Azure Data Engineer, you will be responsible for designing, implementing, and maintaining data pipelines that enable data analytics and machine learning solutions on the Azure platform. You will work closely with data scientists, analysts, and other stakeholders to understand their data requirements and develop efficient data processing solutions. Your primary focus will be on building and optimizing data pipelines using Azure data services such as Azure Data Factory, Azure Databricks, Azure Synapse Analytics, and Azure HDInsight. You will also be responsible for integrating data from various sources, ensuring data quality and consistency, and implementing data security and compliance measures. In this role, you will leverage your expertise in SQL, Python, and other programming languages to transform and analyze large volumes of data. You will also collaborate with cross-functional teams to troubleshoot data issues, optimize performance, and implement best practices for data management and governance. The ideal candidate for this position has a strong background in data engineering, experience working with cloud-based data technologies, and a passion for driving insights from data. Strong communication skills, problem-solving abilities, and the ability to work in a fast-paced environment are also essential for success in this role.
Posted 1 month ago
1.0 - 4.0 years
3 - 7 Lacs
Bengaluru
Work from Office
GLOINNT is looking for a Data Engineer to join our dynamic team and embark on a rewarding career journey. Design, develop, and maintain data infrastructure, databases, and data pipelines. Develop and implement ETL processes to extract, transform, and load data from various sources. Ensure data accuracy, quality, and accessibility, and resolve data-related issues. Collaborate with data analysts, data scientists, and other stakeholders to understand data needs and requirements. Develop and maintain data models and data dictionaries. Design and implement data warehousing solutions to enable efficient and effective data analysis and reporting. Implement and manage data security and access controls to protect data privacy and confidentiality. Strong understanding of data architecture, data modeling, ETL processes, and data warehousing. Excellent communication and collaboration skills.
Posted 1 month ago
4.0 - 9.0 years
0 - 0 Lacs
Hyderabad, Chennai
Hybrid
Job Description: Design, develop, and maintain data pipelines and ETL processes using AWS and Snowflake. Implement data transformation workflows using DBT (Data Build Tool). Write efficient, reusable, and reliable code in Python. Optimize and tune data solutions for performance and scalability. Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions. Ensure data quality and integrity through rigorous testing and validation. Stay updated with the latest industry trends and technologies in data engineering. Bachelor's or Master's degree in Computer Science, Engineering, or a related field. Proven experience as a Data Engineer or similar role. Strong proficiency in AWS and Snowflake. Expertise in DBT and Python programming. Experience with data modeling, ETL processes, and data warehousing. Familiarity with cloud platforms and services. Excellent problem-solving skills and attention to detail. Strong communication and teamwork abilities.
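A minimal sketch of the Python-plus-Snowflake side of this role: loading staged files and running a simple quality gate with the snowflake-connector-python package. Account, stage, and table names are placeholders; in practice the transformations themselves would live in dbt models.

```python
# Hypothetical sketch: stage-to-table load plus a cheap data-quality gate in
# Snowflake. All connection parameters and object names are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="myorg-myaccount",   # placeholder
    user="etl_user",             # placeholder
    password="***",
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="RAW",
)

try:
    cur = conn.cursor()
    # The named stage and file format are assumed to exist already.
    cur.execute(
        "COPY INTO raw.orders FROM @raw_stage/orders/ "
        "FILE_FORMAT = (FORMAT_NAME = 'csv_fmt')"
    )
    # Fail fast before downstream dbt models run.
    cur.execute("SELECT COUNT(*) FROM raw.orders WHERE order_id IS NULL")
    null_ids = cur.fetchone()[0]
    if null_ids:
        raise ValueError(f"{null_ids} rows with NULL order_id in raw.orders")
finally:
    conn.close()
```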
Posted 1 month ago
5.0 - 10.0 years
18 - 33 Lacs
Noida
Work from Office
Senior Data Engineer. Experience: 5+ years. Location: Noida, 5 days work from office. Shift: 1 pm to 10 pm. Job Summary: We are seeking a highly skilled Senior Data Engineer / BI Developer with deep expertise in SQL Server database development and performance tuning, along with experience in ETL pipelines (SSIS), cloud-based data engineering (Azure Databricks), and data visualization (Power BI/Sigma). This role is critical in designing, optimizing, and maintaining enterprise-grade data solutions that power analytics and business intelligence across the organization. Key Responsibilities: Design, develop, and optimize SQL Server databases in Azure Cloud, including schema design, indexing strategies, and stored procedures. Perform advanced SQL performance tuning, query optimization, and troubleshooting of slow-running queries. Develop and maintain SSIS packages for complex ETL workflows, including error handling and logging. Build scalable data pipelines in Azure Databricks. Create and maintain Power BI and Sigma dashboards, ensuring data accuracy, usability, and performance. Implement and enforce data governance, security, and compliance best practices. Collaborate with cross-functional teams including data analysts, data scientists, and business stakeholders. Participate in code reviews, data modeling, and architecture planning for new and existing systems. Experience with backup and recovery strategies, high availability, and disaster recovery. Required Skills & Experience: 5 to 8 years of hands-on experience with Microsoft SQL Server (2016/2022 or later). Strong expertise in T-SQL, stored procedures, functions, views, indexing, and query optimization. Proven experience with SSIS for ETL development and deployment. Experience with Azure Databricks, Spark, and Delta Lake for big data processing. Proficiency in Power BI and/or Sigma for data visualization and reporting. Solid understanding of data warehousing, star/snowflake schemas, and dimensional modeling. Familiarity with CI/CD pipelines, Git, and DevOps for data. Strong communication and documentation skills. Preferred Qualifications: Experience with Azure Data Factory, Synapse Analytics, or Azure SQL Database. Knowledge of NoSQL databases (e.g., MongoDB, Cosmos DB) is a plus. Familiarity with data lake architecture and cloud storage (e.g., ADLS Gen2). Experience in agile environments and working with JIRA or Azure DevOps.
Posted 1 month ago
2.0 - 6.0 years
12 - 24 Lacs
Jaipur
Work from Office
Responsibilities: * Develop data pipelines using PySpark and SQL. * Collaborate with cross-functional teams on ML projects. * Optimize database performance through data modeling and visualization.
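For illustration, a minimal sketch of a pipeline mixing PySpark with Spark SQL, as the responsibilities above describe; paths and column names are placeholders.

```python
# Hypothetical sketch: SQL for the aggregation, DataFrame API for I/O,
# a common split in PySpark pipelines. Paths and columns are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("pyspark_sql_pipeline").getOrCreate()

sales = spark.read.parquet("/data/raw/sales")  # placeholder path
sales.createOrReplaceTempView("sales")

monthly = spark.sql("""
    SELECT date_trunc('month', order_ts) AS month, SUM(amount) AS revenue
    FROM sales
    GROUP BY 1
""")

monthly.write.mode("overwrite").parquet("/data/curated/monthly_revenue")
```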
Posted 1 month ago
4.0 - 8.0 years
25 - 30 Lacs
Pune
Hybrid
Hi, Greetings!!! Role: Data Engineer. Experience: 4+ years. Location: Pune (Hybrid, 3 days in office per week). Work Model: Hybrid. Main Skills: Data Engineer with Java, ETL, Apache Airflow, SQL. Key Responsibilities: Design, implement, and optimize ETL/ELT pipelines using DBT for data modeling and transformation. Develop backend components and data processing logic using Java. Build and maintain DAGs in Apache Airflow for orchestration and automation of data workflows. Ensure the reliability, scalability, and efficiency of data pipelines for ingestion, transformation, and storage. Work with cross-functional teams to understand data needs and deliver high-quality solutions. Troubleshoot and resolve data pipeline issues in production environments. Apply data quality and governance best practices, including validation, logging, and monitoring. Collaborate on CI/CD deployment pipelines for data infrastructure. Required Skills & Qualifications: 4+ years of hands-on experience in data engineering roles. Strong experience with DBT for modular, testable, and version-controlled data transformation. Proficient in Java, especially for building custom data connectors or processing frameworks. Deep understanding of Apache Airflow and the ability to design and manage complex DAGs. Solid SQL skills and familiarity with data warehouse platforms (e.g., Snowflake, Redshift, BigQuery). Familiarity with version control tools (Git), CI/CD pipelines, and Agile methodologies. Exposure to cloud environments like AWS, GCP, or Azure.
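A minimal sketch of the Airflow orchestration this role describes: a DAG that runs a Java ingestion step and then dbt transformations. Paths, schedule, and project names are placeholders, and the `schedule` argument assumes Airflow 2.4+ (older versions use `schedule_interval`).

```python
# Hypothetical sketch: a two-step daily DAG. Commands and paths are placeholders.
from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_ingest_and_transform",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    ingest = BashOperator(
        task_id="run_java_connector",
        bash_command="java -jar /opt/pipelines/ingest-connector.jar --date {{ ds }}",
    )

    transform = BashOperator(
        task_id="run_dbt_models",
        bash_command="cd /opt/dbt/analytics && dbt run --select staging+",
    )

    # Transform only after ingestion succeeds.
    ingest >> transform
```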
Posted 1 month ago
5.0 - 10.0 years
20 - 35 Lacs
Pune
Hybrid
Our client is a global IT service & consulting organization. Data Software Engineer. Location: Pune. Notice period: Immediate to 60 days. F2F interview on 27th July, Sunday, in Pune. Experience: 5-12 years. Skills: Python, Spark, Azure Databricks/GCP/AWS. Data Software Engineer - Spark, Python, (AWS, Kafka or Azure Databricks or GCP). Job Description: 5-12 years of experience in Big Data and data-related technologies. Expert-level understanding of distributed computing principles. Expert-level knowledge of and experience with Apache Spark. Hands-on programming with Python. Proficiency with Hadoop v2, MapReduce, HDFS, Sqoop. Experience with building stream-processing systems using technologies such as Apache Storm or Spark Streaming. Experience with messaging systems such as Kafka or RabbitMQ. Good understanding of Big Data querying tools such as Hive and Impala. Experience with integration of data from multiple data sources such as RDBMS (SQL Server, Oracle), ERP, and files. Good understanding of SQL queries, joins, stored procedures, and relational schemas. Experience with NoSQL databases such as HBase, Cassandra, and MongoDB. Knowledge of ETL techniques and frameworks. Performance tuning of Spark jobs. Experience with native cloud data services: AWS or Azure Databricks. Ability to lead a team efficiently. Experience with designing and implementing Big Data solutions. Practitioner of Agile methodology.
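For illustration, a minimal Spark Structured Streaming sketch consuming from Kafka, matching the stream-processing experience this JD asks for; broker addresses, topic, schema, and paths are placeholders.

```python
# Hypothetical sketch: consume JSON events from Kafka and land them as Parquet.
# Broker, topic, schema, and paths are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("kafka_stream").getOrCreate()

schema = StructType([
    StructField("device_id", StringType()),
    StructField("temperature", DoubleType()),
])

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")  # placeholder
    .option("subscribe", "sensor-events")               # placeholder topic
    .load()
)

# Kafka values arrive as bytes; parse the JSON payload into columns.
events = (
    raw.select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

query = (
    events.writeStream.format("parquet")
    .option("path", "/data/sensor_events")
    .option("checkpointLocation", "/chk/sensor_events")
    .start()
)
query.awaitTermination()
```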
Posted 1 month ago
8.0 - 10.0 years
6 - 10 Lacs
Bengaluru
Work from Office
Distributed data technologies (Hadoop, MapReduce, Spark, Kafka, Flink, etc.) for building efficient, large-scale 'big data' pipelines; Java, Scala, Python, or equivalent; stream-processing applications using Apache Flink and Kafka; AWS, Azure, Google Cloud.
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
ahmedabad, gujarat
On-site
Solvios Technology is seeking a Data Engineer / Power BI Developer with 3 to 5 years of experience. Your role will involve designing, constructing, and optimizing data pipelines to ensure the seamless flow of data across various systems. Your responsibilities will include: - Creating and implementing ETL operations using Azure Data Factory or similar tools - Utilizing various REST APIs for data collection - Developing interactive dashboards and reports using Power BI - Crafting efficient SQL queries for data extraction and manipulation from relational databases - Establishing and managing data models, DAX calculations, and Power BI datasets - Conducting data analysis and validation, and ensuring report quality and accuracy - Connecting Power BI reports to various data sources such as SQL Server, Azure SQL, Excel, APIs, Snowflake, and Databricks - Optimizing Power BI dashboards, SQL queries, and dataflows for performance and scalability - Collaborating with business stakeholders to gather reporting requirements and translate them into technical solutions - Troubleshooting data-related and report performance issues and ensuring timely resolution - Documenting report specifications, data models, business logic, and technical processes, and staying updated with new features and best practices in the Power BI, Azure, Snowflake, and AWS ecosystems. Requirements for this role include experience in ETL operations, working knowledge of APIs for data collection, familiarity with data visualization best practices and UI/UX design principles, exposure to data warehouse concepts like star schema and snowflake schema, and experience implementing row-level security (RLS) in Power BI reports. Qualifications: - Minimum 3 years of experience with Power BI - Minimum 3 years of experience in data warehousing and ETL setup - Experience working with SQL Server, Azure SQL, and the Microsoft technology stack. If you are a dynamic professional with a passion for data engineering and Power BI development and meet the above requirements, we welcome you to join our team at Solvios Technology.
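A minimal sketch of the API-ingestion path this posting mentions: pulling a paginated REST API into pandas and landing it in a SQL table that a Power BI report can connect to. The endpoint, auth token, pagination scheme, and connection string are all assumptions.

```python
# Hypothetical sketch: paginated REST API to Azure SQL staging table.
# Endpoint, auth, pagination, and connection string are placeholders.
import requests
import pandas as pd
from sqlalchemy import create_engine

BASE_URL = "https://api.example.com/v1/orders"  # placeholder endpoint
HEADERS = {"Authorization": "Bearer <token>"}   # placeholder auth

def fetch_all(url: str) -> list[dict]:
    records, page = [], 1
    while True:
        resp = requests.get(url, headers=HEADERS, params={"page": page}, timeout=30)
        resp.raise_for_status()
        batch = resp.json().get("results", [])  # assumed response shape
        if not batch:
            return records
        records.extend(batch)
        page += 1

df = pd.DataFrame(fetch_all(BASE_URL))

# Land the data in Azure SQL; a Power BI report then reads this table directly.
engine = create_engine(
    "mssql+pyodbc://user:pass@server/db?driver=ODBC+Driver+17+for+SQL+Server"
)
df.to_sql("stg_orders", engine, if_exists="replace", index=False)
```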
Posted 1 month ago
6.0 - 8.0 years
20 - 35 Lacs
Bengaluru
Work from Office
Candidates should have 6+ years of experience in Azure Cloud, including experience in data engineering and architecture. Experience working with Azure services such as Azure Data Factory, Azure Functions, Azure SQL, Azure Databricks, Azure Data Lake, Synapse Analytics, etc.
Posted 1 month ago
3.0 - 8.0 years
1 - 3 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Immediate joiners only (0-15 days notice considered). 3+ years mandatory. Work mode: Hybrid. Work location: Hyderabad, Bengaluru, Chennai, Pune. Mandatory skills: Azure, ADF, Spark, Astronomer. Data engineering topics: Kafka-based ingestion; API-based ingestion; orchestration tools such as Astronomer, Apache Airflow, and Dagster; familiarity with Apache Iceberg, Delta, and Hudi table designs (when to use, why to use, and how to use them); Spark architecture; optimization techniques; performance issues and mitigation techniques. Data quality topics (data engineering without quality provides no value): Great Expectations (https://docs.greatexpectations.io/docs/core/introduction/try_gx/), Pydeequ (https://pydeequ.readthedocs.io/en/latest/index.html), Databricks DLT expectations (Spark-based).
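For illustration, a minimal data-quality check following the Great Expectations quickstart linked above; note the exact API differs across GX versions, and the dataframe and column here are made up.

```python
# Hypothetical sketch in the style of the GX Core quickstart; API names vary
# by Great Expectations version. The dataframe and column are illustrative.
import great_expectations as gx
import pandas as pd

df = pd.DataFrame({"order_id": [1, 2, 3], "amount": [10.0, None, 25.5]})

context = gx.get_context()
source = context.data_sources.add_pandas("pandas")
asset = source.add_dataframe_asset(name="orders")
batch_def = asset.add_batch_definition_whole_dataframe("all_rows")
batch = batch_def.get_batch(batch_parameters={"dataframe": df})

# Declare an expectation and validate the batch against it.
expectation = gx.expectations.ExpectColumnValuesToNotBeNull(column="amount")
result = batch.validate(expectation)
print(result.success)  # False here: one NULL amount
```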
Posted 1 month ago
7.0 - 12.0 years
20 - 27 Lacs
Pune
Remote
We are seeking a highly skilled Senior Data Engineer 2 with extensive experience in analysing existing data/databases and in designing, building, and optimizing high-volume data pipelines. The ideal candidate will have strong expertise in Python, databases, Databricks on Azure Cloud services, DevOps, and CI/CD tools, along with a solid understanding of AI/ML techniques and big data processing frameworks like Apache Spark and PySpark. Responsibilities: Adhere to coding and Numerator technology standards. Build suitable automation test suites within Azure DevOps. Maintain and update automation test suites as required. Carry out manual testing, load testing, and exploratory testing as required. Perform technical analysis and work closely with business analysts and senior data developers to consistently deliver sprint goals. Assist in estimation of sprint-by-sprint stories and tasks. Proactively take a responsible approach to product delivery. Requirements: 7-10 years of experience in data engineering roles, handling large databases. Good C# and Python skills. Experience working with Microsoft Azure Cloud. Experience in Agile methodologies (Scrum/Kanban). Experience with Apache Spark, PySpark, and Databricks. Experience working with DevOps pipelines, preferably Azure DevOps. Preferred Qualifications: Bachelor's or Master's degree in Computer Science, Information Technology, Data Science, or a related field. Experience working in a technical development/support-focused role. Knowledge/experience in AI/ML techniques. Knowledge/experience in Visual Basic 6. Certification in a relevant data engineering discipline or related fields.
Posted 1 month ago
5.0 - 15.0 years
13 - 37 Lacs
Kolkata, Pune, Bengaluru
Work from Office
Roles and Responsibilities: Design, develop, test, deploy, and maintain large-scale data pipelines using Azure Data Factory (ADF) to extract, transform, and load data from various sources into Azure storage solutions such as Blob Storage, Cosmos DB, etc. Collaborate with cross-functional teams to gather requirements for new data engineering projects and ensure successful implementation of ADF workflows. Troubleshoot issues related to ADF pipeline failures by analyzing logs, applying debugging techniques, and working closely with stakeholders to resolve problems efficiently. Develop automated testing frameworks for ADF pipelines using PySpark or other tools to ensure high-quality delivery of projects. Job Requirements: 5-15 years of experience in Data Engineering with expertise in Azure Data Factory (ADF). Strong understanding of big data technologies like Hadoop ecosystem components, including Hive, Pig, Spark, etc. Proficiency in writing complex SQL queries on the Azure Databricks platform.
Posted 1 month ago
5.0 - 8.0 years
10 - 20 Lacs
Bengaluru
Work from Office
We are looking for a senior data engineer with 5-8 years of experience.
Posted 1 month ago
9.0 - 14.0 years
25 - 37 Lacs
Pune, Chennai, Bengaluru
Hybrid
Role & responsibilities Job Overview: We are looking for a Senior Data Engineer with strong expertise in SQL, Python, Azure Synapse, Azure Data Factory, Snowflake, and Databricks. The ideal candidate should have a solid understanding of SQL (DDL, DML, query optimization) and ETL pipelines while demonstrating a learning mindset to adapt to evolving technologies. Key Responsibilities: Collaborate with business and IT stakeholders to define business and functional requirements for data solutions. Design and implement scalable ETL/ELT pipelines using Azure Data Factory, Databricks, and Snowflake. Develop detailed technical designs, data flow diagrams, and future-state data architecture. Evangelize modern data modelling practices, including entity-relationship models, star schema, and Kimball methodology. Ensure data governance, quality, and validation by working closely with quality engineering teams. Write, optimize, and troubleshoot complex SQL queries, including DDL, DML, and performance tuning. Work with Azure Synapse, Azure Data Lake, and Snowflake for large-scale data processing. Implement DevOps and CI/CD best practices for automated data pipeline deployments. Support real-time streaming data processing with Spark, Kafka, or similar technologies. Provide technical mentorship and guide team members on best practices in SQL, ETL, and cloud data solutions. Stay up to date with emerging cloud and data engineering technologies and demonstrate a continuous learning mindset. Required Skills & Qualifications: Primary Requirements: SQL Expertise – Strong hands-on experience with DDL, DML, query optimization, and performance tuning. Programming Languages – Proficiency in Python or Java for data processing and automation. Data Modelling – Good understanding of entity-relationship modelling, star schema, and Kimball methodology. Cloud Data Engineering – Hands-on experience with Azure Synapse, Azure Data Factory, Azure Data Lake, Databricks, and Snowflake. ETL Development – Experience building scalable ETL/ELT pipelines and data ingestion workflows, with the ability to learn and apply Snowflake concepts as needed. Communication Skills – Strong presentation and communication skills to engage both technical and business stakeholders in strategic discussions. Financial Services Domain (Optional) – Knowledge of financial services. Good to Have Skills: DevOps & CI/CD – Experience with Git, Jenkins, Docker, and automated deployments. Streaming Data Processing – Experience with Spark, Kafka, or real-time event-driven architectures. Data Governance & Security – Understanding of data security, compliance, and governance frameworks. Experience in AWS – Knowledge of AWS cloud data solutions (Glue, Redshift, Athena, etc.) is a plus.
Posted 1 month ago
2.0 - 7.0 years
10 - 20 Lacs
Bengaluru
Work from Office
Job Title: Data Engineer. Dear Candidates, Greetings from ExxonMobil! Please copy and paste the below link into your browser to apply for the position on the company website. Link to apply: https://jobs.exxonmobil.com/job-invite/80614/ Please find the JD below. What role you will play in our team: Design, build, and maintain data systems, architectures, and pipelines to extract insights and drive business decisions. Collaborate with stakeholders to ensure data quality, integrity, and availability. What you will do: Support development and ownership of ETL pipelines within cloud data platforms. Automate data extraction and transformation pipelines using Python/Airflow/Azure Data Factory/Qlik/Fivetran. Deliver a task monitoring and notification system for data pipeline status. Support data cleansing, enrichment, and curation activities to enable ongoing business use cases. Develop and deliver data pipelines through a CI/CD delivery methodology. Develop monitoring around pipelines to ensure uptime of data flows. Optimize and refine current queries against Snowflake. Work with Snowflake, MSSQL, Postgres, Oracle, Azure SQL, and other relational databases. Work with different cloud databases such as Azure SQL, Azure PostgreSQL, etc. Work with Change-Data-Capture ETL software such as Qlik and Fivetran to populate Snowflake. Identify and remediate failed and long-running queries. Develop large aggregate queries across a multitude of schemas. About you, skills and qualifications: Experience with data processing/analytics and ETL data transformation. Proficient in ingesting data to/from Snowflake and Azure storage accounts. Proficiency in at least one of the following languages: Python, C#, C++, F#, Java. Proficiency in SQL and NoSQL databases. Knowledge of SQL query development and optimization. Demonstrated experience with Snowflake, Qlik Replicate, Fivetran, and Azure Data Explorer. Azure cloud experience with ADX, ADF, and Databricks. Expertise with Airflow, Qlik, Fivetran, and Azure Data Factory. Management of Snowflake through DBT scripting. Solid understanding of data strategies, including data management, data curation, and data governance. Ability to quickly build relationships and credibility with business customers and agile teams. A passion for learning about and experimenting with new technologies. Confidence in creating and delivering technical presentations and training. Excellent organization and planning skills. Thanks & Regards, Anita
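As a sketch of the "identify and remediate failed and long-running queries" responsibility above, a small script querying Snowflake's ACCOUNT_USAGE.QUERY_HISTORY view via the Python connector; connection parameters and thresholds are placeholders.

```python
# Hypothetical sketch: flag failed or >5-minute Snowflake queries from the
# last day. Connection parameters are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="myorg-myaccount", user="monitor_user", password="***",  # placeholders
    warehouse="ADMIN_WH", database="SNOWFLAKE", schema="ACCOUNT_USAGE",
)

SQL = """
    SELECT query_id, user_name, total_elapsed_time, execution_status
    FROM snowflake.account_usage.query_history
    WHERE start_time > DATEADD('day', -1, CURRENT_TIMESTAMP())
      AND (execution_status = 'FAILED' OR total_elapsed_time > 300000)
    ORDER BY total_elapsed_time DESC
"""

try:
    for qid, user, elapsed_ms, status in conn.cursor().execute(SQL):
        # total_elapsed_time is reported in milliseconds.
        print(f"{status:>9} {float(elapsed_ms) / 1000:8.1f}s {user} {qid}")
finally:
    conn.close()
```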
Posted 1 month ago
5.0 - 10.0 years
15 - 25 Lacs
Hyderabad
Work from Office
We are seeking a highly skilled Quality Engineer (Data) to ensure the reliability, accuracy, and performance of data pipelines and AI/ML models within our SmartFM platform. This role is critical to delivering trusted data and actionable insights that drive smart building optimization and operational efficiency. Key Responsibilities: Design and implement robust QA strategies for data pipelines, ML models, and agentic workflows. Test and validate data ingestion and streaming systems (e.g., StreamSets, Kafka) for accuracy, completeness, and resilience. Ensure data integrity and schema validation within MongoDB and other data stores. Collaborate with data engineers to proactively identify and resolve data quality issues. Partner with data scientists to validate ML/DL/LLM model performance, fairness, and robustness. Automate testing processes using frameworks such as Pytest, Great Expectations, and Deepchecks. Monitor production pipelines for anomalies, data drift, and model degradation. Participate in code reviews and QA audits, and maintain comprehensive documentation of test plans and results. Continuously evaluate and improve QA processes based on industry best practices and emerging trends. Required Technical Skills: 5-10 years of QA experience with a focus on data validation and ML model testing. Strong command of SQL for complex data queries and integrity checks. Practical experience with StreamSets, Kafka, and MongoDB. Proficient in Python scripting for automation and testing. Familiarity with ML testing metrics, model validation techniques, and bias detection. Exposure to cloud platforms such as Azure, AWS, or GCP. Working knowledge of QA tools like Pytest, Great Expectations, and Deepchecks. Understanding of Node.js and React-based applications is an added advantage. Additional Qualifications: Excellent communication, documentation, and cross-functional collaboration skills. Strong analytical mindset and high attention to detail. Ability to work with cross-disciplinary teams including Engineering, Data Science, and Product. Passion for continuous learning and adoption of new QA tools and methodologies. Domain knowledge in facility management, IoT, or building automation systems is a strong plus.
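For illustration, a minimal set of pytest-style data-quality checks of the kind this QA role automates; the fixture stands in for real pipeline output (e.g., a MongoDB collection or StreamSets sink), and the schema and thresholds are assumptions.

```python
# Hypothetical sketch: pytest data-quality checks over pipeline output.
# The fixture data, columns, and thresholds are illustrative.
import pandas as pd
import pytest

@pytest.fixture
def readings() -> pd.DataFrame:
    # Stand-in for reading pipeline output from MongoDB or a staging table.
    return pd.DataFrame({
        "sensor_id": ["a1", "a2", "a3"],
        "temperature": [21.5, 22.0, 20.9],
        "ts": pd.to_datetime(["2024-01-01", "2024-01-01", "2024-01-01"]),
    })

def test_no_missing_keys(readings):
    assert readings["sensor_id"].notna().all()

def test_temperature_in_plausible_range(readings):
    assert readings["temperature"].between(-40, 85).all()

def test_no_duplicate_sensor_timestamps(readings):
    assert not readings.duplicated(subset=["sensor_id", "ts"]).any()
```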
Posted 1 month ago
4.0 - 6.0 years
12 - 16 Lacs
Bangalore Rural, Bengaluru
Work from Office
Data Engineer (Microsoft Fabric & Lakehouse): PySpark, Data Lakehouse architectures, cloud platforms (Azure, AWS), on-prem databases, SaaS platforms (Salesforce, Workday), REST/OpenAPI-based APIs, data governance, lineage, RBAC principles, SQL.
Posted 1 month ago
3.0 - 8.0 years
10 - 20 Lacs
Noida, New Delhi, Gurugram
Hybrid
Role & responsibilities: Strategically partner with the customer cloud sales team to identify and qualify business opportunities and identify key customer technical objections. Develop strategies to resolve technical obstacles and architect client solutions to meet complex business and technical requirements. Lead the technical aspects of the sales cycle, including technical trainings, client presentations, technical bid responses, product and solution briefings, and proof-of-concept technical work. Identify and respond to key technical objections from clients, providing prescriptive guidance for successful resolutions tailored to specific client needs. May directly work with the customer's cloud products to demonstrate, design, and prototype integrations in customer/partner environments. Develop and deliver thorough product messaging to highlight advanced technical value propositions, using techniques such as whiteboard and slide presentations, technical product demonstrations, white papers, trial management, and RFI response documents. Assess technical challenges to develop and deliver recommendations on integration strategies, enterprise architectures, platforms, and application infrastructure required to successfully implement a complete solution. Leverage technical expertise to provide best-practice counsel to optimize the effectiveness of advanced technical products. OTHER CRITICAL FUNCTIONS AND RESPONSIBILITIES: Ensure customer data is accurate and actionable using Salesforce.com (SFDC) systems. Leverage third-party prospect and account intelligence tools to extract meaningful insights and support varying client needs. Navigate, analyse, and interpret technical documentation for technical products, often including the customer's cloud products. Enhance skills and knowledge by using a Learning Management Solution (LMS) for training and certification. Serve as a technical and subject matter expert to support advanced trainings for team members on moderately to highly complex technical subjects. Offer thought leadership in advanced technical solutions, such as cloud computing. Coach and mentor team members and advise managers on creating business and process efficiencies in internal workflows and training materials. Collect and codify best practices between sales, marketing, and sales engineers. Preferred candidate profile, required qualifications: Bachelor's degree in Computer Science or other technical field, or equivalent practical experience (preferred). 3-5 years of experience serving as a technical Sales Engineer in an advanced technical environment. Prior experience with advanced technologies such as Big Data, PaaS, IaaS, etc. Proven strong communication skills with a proactive and positive approach to task management (written and verbal). Confident presenter with excellent presentation and persuasion skills. Strong work ethic and ability to work independently.
Posted 1 month ago
12683 Jobs |