3.0 - 8.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Power BI and AAS expert (Strong SC or Specialist Senior). Should have hands-on experience of data modelling in Azure SQL Data Warehouse and Azure Analysis Services. Should be able to write and test DAX queries. Should be able to generate paginated reports in Power BI. Should have a minimum of 3 years' working experience delivering projects in Power BI.
Must Have: 3 to 8 years of experience designing, developing, and deploying ETL processes on Databricks to support data integration and transformation. Optimize and tune Databricks jobs for performance and scalability. Experience with Scala and/or Python programming languages. Proficiency in SQL for querying and managing data. Expertise in ETL (Extract, Transform, Load) processes. Knowledge of data modeling and data warehousing concepts. Implement best practices for data pipelines, including monitoring, logging, and error handling. Excellent problem-solving skills and attention to detail. Excellent written and verbal communication skills. Strong analytical and problem-solving abilities. Experience with version control systems (e.g., Git) to manage and track changes to the codebase. Document technical designs, processes, and procedures related to Databricks development. Stay current with Databricks platform updates and recommend improvements to existing processes.
Good to Have: Agile delivery experience. Experience with cloud services, particularly Azure (Azure Databricks), AWS (AWS Glue, EMR), or Google Cloud Platform (GCP). Knowledge of Agile and Scrum software development methodologies. Understanding of data lake architectures. Familiarity with tools like Apache NiFi, Talend, or Informatica. Skills in designing and implementing data models.
Skills: azure, data modelling, power bi, aas, azure sql data warehouse, azure analysis services, dax queries, data warehouse, paginated reports
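To make the Databricks ETL requirement concrete, here is a minimal PySpark sketch of the kind of extract-transform-load step this posting describes; the table and column names (raw.orders, curated.orders, order_id, and so on) are hypothetical, not from the posting:

```python
# Minimal Databricks-style ETL sketch: read a raw table, clean and derive columns,
# then write a curated table. All table/column names are illustrative.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

raw = spark.read.table("raw.orders")                      # Extract

curated = (                                               # Transform
    raw.dropDuplicates(["order_id"])
       .filter(F.col("order_date").isNotNull())           # basic error handling: drop bad rows
       .withColumn("order_month", F.date_format("order_date", "yyyy-MM"))
       .withColumn("net_amount", F.col("gross_amount") - F.col("discount"))
)

curated.write.mode("overwrite").saveAsTable("curated.orders")   # Load
```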
Posted 1 month ago
3.0 - 7.0 years
5 - 9 Lacs
Hyderabad
Work from Office
Ability to take full ownership and deliver a component or functionality. Supporting the team to deliver project features with high quality and providing technical guidance. Responsible for working effectively individually and with team members toward customer satisfaction and success.
Required education: Bachelor's Degree. Preferred education: Master's Degree.
Required technical and professional expertise: SQL, ADF, Azure Databricks.
Preferred technical and professional experience: PostgreSQL, MSSQL; Eureka, Hystrix, Zuul/API Gateway; in-memory storage.
Posted 1 month ago
8.0 - 12.0 years
35 - 50 Lacs
Bengaluru
Work from Office
Role: MLOps Engineer. Location: PAN India. Notice: Immediate to 60 days.
Keywords/Skillset: AWS SageMaker, Azure ML Studio, GCP Vertex AI; PySpark, Azure Databricks; MLflow, Kubeflow, Airflow, GitHub Actions, AWS CodePipeline; Kubernetes, AKS, Terraform, FastAPI.
Responsibilities: Model deployment, model monitoring, model retraining. Deployment pipeline, inference pipeline, monitoring pipeline, retraining pipeline. Drift detection (data drift, model drift). Experiment tracking. MLOps architecture. REST API publishing.
Job Responsibilities: Research and implement MLOps tools, frameworks, and platforms for our Data Science projects. Work on a backlog of activities to raise MLOps maturity in the organization. Proactively introduce a modern, agile, and automated approach to Data Science. Conduct internal training and presentations about MLOps tools' benefits and usage.
Required experience and qualifications: Wide experience with Kubernetes. Experience in operationalization of Data Science projects (MLOps) using at least one of the popular frameworks or platforms (e.g., Kubeflow, AWS SageMaker, Google AI Platform, Azure Machine Learning, DataRobot, DKube). Good understanding of ML and AI concepts. Hands-on experience in ML model development. Proficiency in Python for both ML and automation tasks. Good knowledge of Bash and the Unix command-line toolkit. Experience in CI/CD/CT pipeline implementation. Experience with cloud platforms, preferably AWS, would be an advantage.
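As an illustration of the experiment-tracking skill listed above, a small MLflow sketch; the experiment name, model choice, and parameters are assumptions for the example, not part of the posting:

```python
# Hedged MLflow sketch: train a model, log params/metrics, and store the artifact.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("churn-model")         # hypothetical experiment name
with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_train, y_train)
    accuracy = accuracy_score(y_test, model.predict(X_test))
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("accuracy", accuracy)  # tracked metrics feed monitoring/drift comparisons
    mlflow.sklearn.log_model(model, "model")
```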
Posted 1 month ago
3.0 - 8.0 years
5 - 10 Lacs
Hyderabad
Work from Office
One Azure backend expert (Strong SC or Specialist Senior). Should have hands-on experience working with ADLS, ADF, and Azure SQL DW. Should have a minimum of 3 years' working experience delivering Azure projects.
Must Have: 3 to 8 years of experience designing, developing, and deploying ETL processes on Databricks to support data integration and transformation. Optimize and tune Databricks jobs for performance and scalability. Experience with Scala and/or Python programming languages. Proficiency in SQL for querying and managing data. Expertise in ETL (Extract, Transform, Load) processes. Knowledge of data modeling and data warehousing concepts. Implement best practices for data pipelines, including monitoring, logging, and error handling. Excellent problem-solving skills and attention to detail. Excellent written and verbal communication skills. Strong analytical and problem-solving abilities. Experience with version control systems (e.g., Git) to manage and track changes to the codebase. Document technical designs, processes, and procedures related to Databricks development. Stay current with Databricks platform updates and recommend improvements to existing processes.
Good to Have: Agile delivery experience. Experience with cloud services, particularly Azure (Azure Databricks), AWS (AWS Glue, EMR), or Google Cloud Platform (GCP). Knowledge of Agile and Scrum software development methodologies. Understanding of data lake architectures. Familiarity with tools like Apache NiFi, Talend, or Informatica. Skills in designing and implementing data models.
Skills: adf, sql, adls, azure, azure sql dw
Posted 1 month ago
4.0 - 8.0 years
5 - 15 Lacs
Hyderabad
Hybrid
Role: Azure Data Engineer. Job Type: Full Time. Job Location: Hyderabad. Level of Experience: 5-8 Years.
Job Description:
• Experience with Azure Databricks, Azure Synapse, Azure SQL, and Azure Data Lake is required.
• Experience in creating, designing, and developing data models for scalable, multi-terabyte data marts.
• Experience in designing and hands-on development of cloud-based analytics solutions.
• Should be able to analyze and understand complex data.
• Thorough understanding of Azure cloud infrastructure.
• Designing and building data pipelines and streaming ingestion methods.
• Knowledge of DevOps processes (including CI/CD) and Infrastructure as Code is essential.
• Strong experience in common data warehouse modelling principles.
• Knowledge of Power BI is desirable.
• Knowledge of PowerShell and work experience in Python or an equivalent programming language is desirable.
• Exposure to or knowledge of Kusto (KQL) is an added advantage.
• Exposure to or knowledge of LLM models is an added advantage.
Technical Soft Skills:
• Strong customer engagement skills to fully understand customer needs for analytics solutions.
• Experience working in a fast-paced agile environment.
• Ability to grasp new technologies quickly and start delivering projects.
• Strong problem-solving and troubleshooting skills.
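For the streaming-ingestion point, one common pattern on Azure Databricks is Auto Loader feeding a Delta table; a sketch, assuming a Databricks runtime (which provides `spark`) and placeholder storage paths:

```python
# Streaming ingestion sketch with Databricks Auto Loader; paths/names are placeholders.
stream = (
    spark.readStream.format("cloudFiles")                  # Auto Loader source
         .option("cloudFiles.format", "json")
         .option("cloudFiles.schemaLocation", "/checkpoints/events/schema")
         .load("abfss://landing@<account>.dfs.core.windows.net/events/")
)

query = (
    stream.writeStream.format("delta")
          .option("checkpointLocation", "/checkpoints/events")  # enables restart/recovery
          .toTable("bronze.events")                             # continuous append
)
```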
Posted 1 month ago
4.0 - 7.0 years
7 - 14 Lacs
Pune, Mumbai (All Areas)
Work from Office
Job Profile Description: Create and maintain highly scalable data pipelines across Azure Data Lake Storage and Azure Synapse using Data Factory, Databricks, and Apache Spark/Scala. Responsible for managing a growing cloud-based data ecosystem and the reliability of our corporate data lake and analytics data mart. Contribute to the continued evolution of the Corporate Analytics Platform and integrated data model. Be part of the Data Engineering team in all phases of work, including analysis, design, and architecture, to develop and implement cutting-edge solutions. Negotiate and influence changes outside of the team that continuously shape and improve the data strategy.
4+ years of experience implementing analytics data solutions leveraging Azure Data Factory, Databricks, Logic Apps, ML Studio, Data Lake, and Synapse. Working experience with Scala, Python, or R. Bachelor's degree or equivalent experience in Computer Science, Information Systems, or related disciplines.
Posted 1 month ago
3.0 - 5.0 years
9 - 17 Lacs
Bengaluru
Remote
Role Overview: We are looking for a highly skilled Azure Data Engineer or Power BI Analyst with 3 to 5 years of experience in building end-to-end data solutions on the Microsoft Azure platform. The ideal candidate should be proficient in data ingestion, transformation, modeling, and visualization using tools such as Azure Data Factory, Azure Databricks, SQL, Power BI, and Fabric.
Role & responsibilities: Design, develop, and maintain robust ETL/ELT pipelines using Azure Data Factory (ADF) and Azure Databricks. Perform data ingestion from various on-prem/cloud sources to Azure Data Lake / Synapse / SQL. Implement transformation logic using PySpark, SQL, and DataFrames. Create Power BI dashboards and reports using DAX and advanced visualization techniques. Develop and manage tabular models, semantic layers, and data schemas (star/snowflake), as sketched below. Optimize Power BI datasets and performance tuning (e.g., dataset refresh time, PLT). Collaborate with stakeholders to gather reporting requirements and deliver insights. Ensure data accuracy, security, and compliance across all stages. Leverage Azure DevOps for version control and CI/CD pipelines. Participate in Agile ceremonies (scrum, sprint reviews, demos).
Preferred candidate profile: 3+ years of experience with Azure Data Factory, Databricks, Data Lake. Proficient in Power BI, DAX, SQL, and Python. Experience in building and optimizing tabular models and semantic layers. Hands-on with Azure Synapse, Fabric, and DevOps. Solid understanding of data modeling, ETL, data pipelines, and business logic implementation. Strong communication skills and ability to work in Agile teams.
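A brief PySpark sketch of the star-schema modeling mentioned above, splitting a flat staging extract into a dimension and a fact table; all table and column names are invented for the example:

```python
# Star-schema sketch: derive a customer dimension and a sales fact from a flat table.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("star-schema").getOrCreate()
flat = spark.read.table("staging.sales_flat")   # hypothetical flat extract

# Dimension: one row per customer, with a surrogate key
dim_customer = (
    flat.select("customer_id", "customer_name", "segment")
        .dropDuplicates(["customer_id"])
        .withColumn("customer_key", F.monotonically_increasing_id())
)

# Fact: measures plus a foreign key into the dimension
fact_sales = (
    flat.join(dim_customer.select("customer_id", "customer_key"), "customer_id")
        .select("customer_key", "product_id", "order_date", "quantity", "amount")
)

dim_customer.write.mode("overwrite").saveAsTable("dw.dim_customer")
fact_sales.write.mode("overwrite").saveAsTable("dw.fact_sales")
```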
Posted 1 month ago
7.0 - 12.0 years
15 - 30 Lacs
Pune
Work from Office
Azure Cloud Data Lead
Job Title: Azure Cloud Data Lead. Location: Pune, India. Experience: 7 - 12 Years. Work Mode: Full-time, Office-based.
Company Overview: Smartavya Analytica is a niche Data and AI company based in Mumbai, established in 2017. We specialize in data-driven innovation, transforming enterprise data into strategic insights. With expertise spanning 25+ data modernization projects and large datasets up to 24 PB in a single implementation, we have successfully delivered data and AI projects across multiple industries, including retail, finance, telecom, manufacturing, insurance, and capital markets. We are specialists in Cloud, Hadoop, Big Data, AI, and Analytics, with a strong focus on data modernization for on-premises, private, and public cloud platforms. Visit us at: https://smart-analytica.com
Job Summary: We are looking for a highly experienced Azure Cloud Data Lead to oversee the architecture, design, and delivery of enterprise-scale cloud data solutions. This role demands deep expertise in Azure Data Services, strong hands-on experience with data engineering and governance, and a strategic mindset to guide cloud modernization initiatives across complex environments.
Key Responsibilities: Architect and design data lakehouses, data warehouses, and analytics platforms using Azure Data Services. Lead implementations using Azure Data Factory (ADF), Azure Synapse Analytics, and Azure Fabric (OneLake ecosystem). Define and implement data governance frameworks, including cataloguing, lineage, security, and quality controls. Collaborate with business stakeholders, data engineers, and developers to translate business requirements into scalable Azure architectures. Ensure platform design meets performance, scalability, security, and regulatory compliance needs. Guide migration of on-premises data platforms to Azure cloud environments. Create architectural artifacts: solution blueprints, reference architectures, governance models, and best-practice guidelines. Collaborate with Sales/presales in customer meetings to understand the business requirements and scope of work, and propose relevant solutions. Drive MVPs/PoCs and capability demos for prospective customers and opportunities.
Must-Have Skills: 7 - 12 years of experience in data architecture, data engineering, or analytics solutions. Hands-on expertise in Azure Cloud services: ADF, Synapse, Azure Fabric (OneLake), and Databricks (good to have). Strong understanding of data governance, metadata management, and compliance frameworks (e.g., GDPR, HIPAA). Deep knowledge of relational and non-relational databases (SQL, NoSQL) on Azure. Experience with security practices (IAM, RBAC, encryption, data masking) in cloud environments. Strong client-facing skills with the ability to present complex solutions clearly.
Preferred Certifications: Microsoft Certified: Azure Solutions Architect Expert. Microsoft Certified: Azure Data Engineer Associate.
Posted 1 month ago
6.0 - 10.0 years
30 - 35 Lacs
Hyderabad, Coimbatore, Bengaluru
Work from Office
Job Overview: We are seeking an experienced Senior Developer with strong expertise in Python, Node.js, Azure, and a proven track record of migrating cloud applications from AWS to Azure. This role requires hands-on experience in PySpark, Databricks, and Azure data services such as ADF and Synapse Spark. The ideal candidate will lead end-to-end modernization and migration initiatives, code remediation, and deployment of serverless and microservices-based applications in Azure. Key Responsibilities: Lead the migration of Python and Node.js applications from AWS to Azure. Analyze legacy AWS architecture, source code, and cloud service dependencies to identify and implement code refactoring and remediation. Develop and modernize applications using PySpark (Python API), Databricks, ADF Mapping Data Flows, and Synapse Spark. Implement and deploy serverless solutions using Azure Functions, replacing AWS Lambda where applicable. Handle migration of storage and data connectors (e.g., S3 to Azure Blob, Confluent Kafka AWS S3 Sync Connector). Convert AWS SDK usage to corresponding Azure SDK implementations. Design and implement CI/CD pipelines, deployment scripts, and configuration for containerized applications using Kubernetes, Helm charts, App Services, APIM, and AKS. Perform unit testing, application troubleshooting, and support within Azure environments. Technical Skills: Must-Have: Python and Node.js development (8+ years total experience) PySpark (Python API) Azure Functions, AKS, App Services, Azure Blob Storage AWS Lambda to Azure Functions migration (Serverless architecture) AWS to Azure SDK conversion ADF (Azure Data Factory): Mapping Data Flows Synapse Spark, Azure Databricks Containerization: Docker, Kubernetes, Helm charts CI/CD Pipelines and deployment scripting Unit testing and application debugging on Azure Proven AWS to Azure application migration experience Nice-to-Have: Confluent Kafka AWS S3 Sync Connector APIM (Azure API Management) Experience working with both PaaS and Serverless Azure infrastructures Tech Stack Highlights: Programming: Python, Node.js, PySpark Cloud Platforms: AWS, Azure Data Services: Azure Blob Storage, ADF, Synapse Spark, Databricks Serverless: AWS Lambda, Azure Functions Migration Tools: AWS SDK to Azure SDK conversion DevOps: CI/CD, Azure DevOps, Helm, Kubernetes Other: App Services, APIM, Confluent Kafka Location: Hyderabad/ Bangalore/ Coimbatore/ Pune
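To make the AWS-SDK-to-Azure-SDK conversion concrete, a small before/after sketch for a single object download; the bucket, container, and connection string are placeholders, not values from the posting:

```python
# Before (AWS): download an object from S3 with boto3.
import boto3

s3 = boto3.client("s3")
s3.download_file("my-bucket", "data/input.csv", "/tmp/input.csv")

# After (Azure): the equivalent read from Blob Storage with azure-storage-blob.
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<connection-string>")
blob = service.get_blob_client(container="my-container", blob="data/input.csv")
with open("/tmp/input.csv", "wb") as f:
    f.write(blob.download_blob().readall())
```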
Posted 1 month ago
8.0 - 12.0 years
10 - 14 Lacs
Mumbai
Hybrid
About the role: We are seeking an experienced Senior Data Developer to join our data engineering team responsible for building and maintaining complex data solutions using Azure Data Factory (ADF), Azure Databricks, and Cosmos DB. The role involves designing and developing scalable data pipelines, implementing data transformations, and ensuring high data quality and performance. The Senior Data Developer will work closely with data architects, testers, and analysts to deliver robust data solutions that support strategic business initiatives. The ideal candidate should possess deep expertise in big data technologies, data integration, and cloud-native data engineering solutions on Microsoft Azure. This role also involves coaching junior developers, conducting code reviews, and driving strategic improvements in data architecture and design patterns.
Key Responsibilities
Data Solution Design and Development: Design and develop scalable and high-performance data pipelines using Azure Data Factory (ADF). Implement data transformations and processing using Azure Databricks. Develop and maintain NoSQL data models and queries in Cosmos DB. Optimize data pipelines for performance, scalability, and cost efficiency.
Data Integration and Architecture: Integrate structured and unstructured data from diverse data sources. Collaborate with data architects to design end-to-end data flows and system integrations. Implement data security, governance, and compliance standards.
Performance Tuning and Optimization: Monitor and tune data pipelines and processing jobs for performance and cost efficiency. Optimize data storage and retrieval strategies for Azure SQL and Cosmos DB.
Collaboration and Mentoring: Collaborate with cross-functional teams including data testers, architects, and business analysts. Conduct code reviews and provide constructive feedback to improve code quality. Mentor junior developers, fostering best practices in data engineering and cloud development.
Primary Skills
Data Engineering: Azure Data Factory (ADF), Azure Databricks.
Cloud Platform: Microsoft Azure (Data Lake Storage, Cosmos DB).
Data Modeling: NoSQL data modeling, data warehousing concepts.
Performance Optimization: Data pipeline performance tuning and cost optimization.
Programming Languages: Python, SQL, PySpark.
Secondary Skills
DevOps and CI/CD: Azure DevOps, CI/CD pipeline design and automation.
Security and Compliance: Implementing data security and governance standards.
Agile Methodologies: Experience in Agile/Scrum environments.
Leadership and Mentoring: Strong communication and coaching skills for team collaboration.
Soft Skills: Strong problem-solving abilities and attention to detail. Excellent communication skills, both verbal and written. Effective time management and organizational capabilities. Ability to work independently and within a collaborative team environment. Strong interpersonal skills to engage with cross-functional teams.
Educational Qualifications: Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field. Relevant certifications in Azure and Data Engineering, such as: Microsoft Certified: Azure Data Engineer Associate; Microsoft Certified: Azure Solutions Architect Expert; Databricks Certified Data Engineer Associate or Professional.
About the Team: As a Senior Data Developer, you will be working with a dynamic, cross-functional team that includes developers, product managers, and other quality engineers. You will be a key player in the quality assurance process, helping shape testing strategies and ensuring the delivery of high-quality web applications.
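For the Cosmos DB responsibilities above, a minimal sketch with the azure-cosmos Python SDK; the endpoint, key, database/container names, and partition key are all assumptions for illustration:

```python
# Cosmos DB sketch: upsert a document and run a parameterized query.
from azure.cosmos import CosmosClient

client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
container = client.get_database_client("salesdb").get_container_client("orders")

container.upsert_item({
    "id": "order-1001",
    "customerId": "c-42",   # assumed partition key path: /customerId
    "amount": 250.0,
})

for item in container.query_items(
    query="SELECT c.id, c.amount FROM c WHERE c.customerId = @cid",
    parameters=[{"name": "@cid", "value": "c-42"}],
    enable_cross_partition_query=True,
):
    print(item)
```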
Posted 1 month ago
4.0 - 8.0 years
6 - 10 Lacs
Gurugram
Work from Office
The ability to be a team player; the ability and skill to train other people in procedural and technical topics; strong communication and collaboration skills.
Required education: Bachelor's Degree. Preferred education: Master's Degree.
Required technical and professional expertise: Able to write complex SQL queries; experience with Azure Databricks.
Preferred technical and professional experience: Excellent communication and stakeholder management skills.
Posted 1 month ago
6.0 - 10.0 years
10 - 15 Lacs
Mumbai
Work from Office
Data Scientist (Cloud Management, SQL, Building Cloud Data Pipelines, Python, Power BI, GCP)
Job Summary: The UPS Marketing team is looking for a talented and driven Data Scientist to drive its strategic objectives in the areas of pricing, revenue management, market analysis, and evidence/data-based decision making. This role will work across multiple channels and teams to drive tangible results in the organization. You will focus on developing metrics for multiple channels and markets, applying advanced statistical modeling where appropriate and pioneering new analytical methods in a variety of fast-paced and rapidly evolving consumer channels. This high-visibility position will work with multiple levels of the organization, including senior leadership, to bring analytical capabilities to the forefront of pricing, rate setting, and optimization of our go-to-market offers. You will contribute to rapidly evolving UPS Marketing analytical capabilities by working within a collaborative team of Data Scientists, Analysts, and multiple business stakeholders.
Responsibilities: Become a subject matter expert on UPS business processes, data, and analytical capabilities to help define and solve business needs using data and advanced statistical methods. Analyze and extract insights from large-scale structured and unstructured data utilizing multiple platforms and tools. Understand and apply appropriate methods for cleaning and transforming data. Work across multiple stakeholders to develop, maintain, and improve models in production. Take the initiative to create and execute analyses in a proactive manner. Deliver complex analyses and visualizations to broader audiences, including upper management and executives. Deliver analytics and insights to support strategic decision making. Understand the application of AI/ML, when appropriate, to solve complex business problems.
Qualifications: Expertise in R, SQL, Python. Strong analytical skills and attention to detail. Able to engage key business and executive-level stakeholders to translate business problems into a high-level analytics solution approach. Expertise with statistical techniques, machine learning, or operations research, and their application in business. Deep understanding of data management pipelines and experience launching moderate-scale advanced analytics projects in production at scale. Proficient in Azure and Google Cloud environments. Experience implementing open-source technologies and cloud services, with or without the use of enterprise data science platforms. Solid oral and written communication skills, especially around analytical concepts and methods. Ability to communicate data through a story framework to convey data-driven results to technical and non-technical audiences. Master's degree in a quantitative field such as mathematics, computer science, physics, economics, engineering, or statistics (operations research, quantitative social science, etc.), an international equivalent, or equivalent job experience.
Bonus Qualifications: Experience with pricing methodologies and revenue management. Experience using PySpark, Azure Databricks, Google BigQuery, and Vertex AI. Creating and implementing NLP/LLM projects. Experience utilizing and applying neural networks and other AI methodologies. Familiarity with data architecture and engineering.
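As a toy illustration of the statistical-modeling side of this role, a scikit-learn sketch fitting a simple revenue model; the data is synthetic and the feature names are invented, so this indicates the workflow only, not UPS's methodology:

```python
# Toy pricing/revenue model on synthetic data.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "discount": rng.uniform(0.0, 0.3, 500),
    "volume":   rng.uniform(10, 100, 500),
})
# Synthetic ground truth: revenue rises with volume, falls with discount, plus noise.
df["revenue"] = 120 * df["volume"] * (1 - df["discount"]) + rng.normal(0, 50, 500)

model = LinearRegression().fit(df[["discount", "volume"]], df["revenue"])
print(dict(zip(["discount", "volume"], model.coef_)))
```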
Posted 1 month ago
4.0 - 8.0 years
6 - 10 Lacs
Kochi
Work from Office
The ability to be a team player; the ability and skill to train other people in procedural and technical topics; strong communication and collaboration skills.
Required education: Bachelor's Degree. Preferred education: Master's Degree.
Required technical and professional expertise: Able to write complex SQL queries; experience with Azure Databricks.
Preferred technical and professional experience: Excellent communication and stakeholder management skills.
Posted 1 month ago
9.0 - 12.0 years
15 - 25 Lacs
Pune, Chennai, Bengaluru
Hybrid
Job Opportunity with Hexaware Technologies Ltd! I am hiring for Big Data Lead (Azure Databricks with PySpark, SQL); if interested, please share your details to manojkumark2@hexaware.com
Exp: 9 to 12 years
Total IT Exp:
Exp in Databricks:
Exp in PySpark:
CCTC & ECTC:
Notice Period / LWD:
Location:
Posted 1 month ago
6.0 - 11.0 years
12 - 17 Lacs
Gurugram
Work from Office
Job Responsibilities / Skill set needed from the resource:
Data Architecture and Management: Understanding of Azure SQL technology, including SQL databases, operational data stores, and data transformation processes.
Azure Data Factory: Expertise in using Azure Data Factory for ETL processes, including creating and managing pipelines.
Python Programming: Proficiency in writing Python scripts, particularly using the pandas library, for data cleaning and transformation tasks.
Azure Functions: Experience with Azure Functions for handling and processing Excel files, making them suitable for database import.
API Integration: Skills in integrating various data sources, including APIs, into the data warehouse.
BPO experience mandatory.
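A short pandas sketch of the Excel-cleaning task described above; the file path and column names are hypothetical:

```python
# Clean an Excel extract so it is suitable for database import (e.g., via ADF).
import pandas as pd

df = pd.read_excel("/data/incoming/customers.xlsx")   # requires openpyxl

# Normalise headers, trim whitespace, coerce types, and drop unusable rows.
df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
df["customer_name"] = df["customer_name"].str.strip()
df["signup_date"] = pd.to_datetime(df["signup_date"], errors="coerce")
df = df.dropna(subset=["customer_id"]).drop_duplicates(subset=["customer_id"])

df.to_csv("/data/staging/customers_clean.csv", index=False)
```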
Posted 1 month ago
12.0 - 20.0 years
35 - 40 Lacs
Navi Mumbai
Work from Office
Position Overview: We are seeking a skilled Big Data Developer to join our growing delivery team, with a dual focus on hands-on project support and mentoring junior engineers. This role is ideal for a developer who not only thrives in a technical, fast-paced environment but is also passionate about coaching and developing the next generation of talent. You will work on live client projects, provide technical support, contribute to solution delivery, and serve as a go-to technical mentor for less experienced team members.
Key Responsibilities: Perform hands-on Big Data development work, including coding, testing, troubleshooting, and deploying solutions. Support ongoing client projects, addressing technical challenges and ensuring smooth delivery. Collaborate with junior engineers to guide them on coding standards, best practices, debugging, and project execution. Review code and provide feedback to junior engineers to maintain high-quality, scalable solutions. Assist in designing and implementing solutions using Hadoop, Spark, Hive, HDFS, and Kafka (see the sketch below). Lead by example in object-oriented development, particularly using Scala and Java. Translate complex requirements into clear, actionable technical tasks for the team. Contribute to the development of ETL processes for integrating data from various sources. Document technical approaches, best practices, and workflows for knowledge sharing within the team.
Required Skills and Qualifications: 8+ years of professional experience in Big Data development and engineering. Strong hands-on expertise with Hadoop, Hive, HDFS, Apache Spark, and Kafka. Solid object-oriented development experience with Scala and Java. Strong SQL skills with experience working with large data sets. Practical experience designing, installing, configuring, and supporting Big Data clusters. Deep understanding of ETL processes and data integration strategies. Proven experience mentoring or supporting junior engineers in a team setting. Strong problem-solving, troubleshooting, and analytical skills. Excellent communication and interpersonal skills.
Preferred Qualifications: Professional certifications in Big Data technologies (Cloudera, Databricks, AWS Big Data Specialty, etc.). Experience with cloud Big Data platforms (AWS EMR, Azure HDInsight, or GCP Dataproc). Exposure to Agile or DevOps practices in Big Data project environments.
What We Offer: Opportunity to work on challenging, high-impact Big Data projects. Leadership role in shaping and mentoring the next generation of engineers. Supportive and collaborative team culture. Flexible working environment. Competitive compensation and professional growth opportunities.
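The Kafka sketch referenced above: reading a topic with Spark Structured Streaming and landing it in the data lake. The broker address, topic, and paths are placeholders, and the job assumes the spark-sql-kafka connector is on the classpath:

```python
# Kafka ingestion sketch with Spark Structured Streaming; names are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "clickstream")
         .load()
         .select(F.col("key").cast("string"), F.col("value").cast("string"))
)

query = (
    events.writeStream.format("parquet")
          .option("path", "/data/bronze/clickstream")
          .option("checkpointLocation", "/checkpoints/clickstream")  # fault tolerance
          .start()
)
```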
Posted 1 month ago
4.0 - 6.0 years
15 - 22 Lacs
Pune, Chennai
Hybrid
NIQ is the world's leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights, delivered with advanced analytics through state-of-the-art platforms, NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world's population. For more information, visit NIQ.com.
Job Description: YOU'LL BUILD TECH THAT EMPOWERS GLOBAL BUSINESSES. Our Connect Technology teams are working on our new Connect platform, a unified, global, open data ecosystem powered by Microsoft Azure. Our clients around the world rely on Connect data and insights to innovate and grow. As a Data Engineer, you'll be part of a team of smart, highly skilled technologists who are passionate about learning and supporting cutting-edge technologies such as Spark, Scala, PySpark, Databricks, Airflow, SQL, Docker, Kubernetes, and other data engineering tools. These technologies are deployed using DevOps pipelines leveraging Azure, Kubernetes, Jenkins, and Bitbucket/GitHub.
WHAT YOU'LL DO: Develop, test, troubleshoot, debug, and make application enhancements leveraging Spark, PySpark, Scala, Pandas, Databricks, Airflow, and SQL as the core development technologies (an illustrative Airflow sketch follows this posting). Deploy application components using CI/CD pipelines. Build utilities for monitoring and automating repetitive functions. Collaborate with Agile cross-functional teams, internal and external clients, including Operations, Infrastructure, and Tech Ops. Collaborate with the Data Science team to productionize ML models. Participate in a rotational support schedule to provide responses to customer queries and deploy bug fixes in a timely and accurate manner.
Qualifications: WE'RE LOOKING FOR PEOPLE WHO HAVE: 4-6 years of applicable software engineering experience. Strong fundamentals with experience in Big Data technologies: Spark, PySpark, Scala, Pandas, Databricks, Airflow, SQL. Must have experience in cloud technologies, preferably Microsoft Azure. Must have experience in performance optimization of Spark workloads. Good to have experience with DevOps technologies such as GitHub, Kubernetes, Jenkins, Docker. Good to have knowledge of relational databases, preferably PostgreSQL. Excellent English communication skills, with the ability to effectively interface across cross-functional technology teams and the business. Minimum B.S. degree in Computer Science, Computer Engineering, or a related field.
Additional Information / Our Benefits: Flexible working environment. Volunteer time off. LinkedIn Learning. Employee Assistance Program (EAP).
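The illustrative Airflow sketch mentioned above: a two-task DAG of the kind used to orchestrate such pipelines. The dag_id, schedule, and task bodies are assumptions (Airflow 2.x API):

```python
# Minimal Airflow 2.x DAG sketch; dag_id, schedule, and task logic are invented.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull source data")

def transform():
    print("run Spark/pandas transformations")

with DAG(
    dag_id="connect_daily_load",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_extract >> t_transform      # run transform after extract
```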
Posted 1 month ago
8.0 - 12.0 years
12 - 16 Lacs
Pune
Work from Office
Roles & Responsibilities: Design and develop end-to-end data solutions using PySpark, Python, SQL, and Kafka, leveraging Microsoft Fabric's capabilities.
Requirements: Hands-on experience with Microsoft Fabric, including Lakehouse, Data Factory, and Synapse. Strong expertise in PySpark and Python for large-scale data processing and transformation. Deep knowledge of Azure data services (ADLS Gen2, Azure Databricks, Synapse, ADF, Azure SQL, etc.). Experience in designing, implementing, and optimizing end-to-end data pipelines on Azure. Understanding of Azure infrastructure setup (networking, security, and access management) is good to have. Healthcare domain knowledge is a plus but not mandatory.
Posted 1 month ago
5.0 - 9.0 years
1 - 1 Lacs
Visakhapatnam, Hyderabad, Vizianagaram
Work from Office
Role & responsibilities: 5+ years of experience in data engineering or a related field. Strong hands-on experience with Azure Synapse Analytics and Azure Data Factory (ADF). Proven experience with Databricks, including development in PySpark or Scala. Proficiency in DBT for data modeling and transformation. Expert in analytics and reporting: a Power BI expert who can develop Power BI models, build interactive BI reports, and set up RLS (row-level security) in Power BI reports. Expertise in SQL and performance-tuning techniques. Strong understanding of data warehousing concepts and ETL/ELT design patterns. Experience working in Agile environments and familiarity with Git-based version control. Strong communication and collaboration skills.
Preferred candidate profile: Experience with CI/CD tools and DevOps for data engineering. Familiarity with Delta Lake and Lakehouse architecture. Exposure to other Azure services such as Azure Data Lake Storage (ADLS), Azure Key Vault, and Azure DevOps. Experience with data quality frameworks or tools.
Posted 1 month ago
4.0 - 9.0 years
6 - 10 Lacs
Hyderabad
Work from Office
Job Summary: We are seeking a skilled Azure Data Engineer with 4 years of overall experience, including at least 2 years of hands-on experience with Azure Databricks (must). The ideal candidate will have strong expertise in building and maintaining scalable data pipelines and working across cloud-based data platforms.
Key Responsibilities: Design, develop, and optimize large-scale data pipelines using Azure Data Factory, Azure Databricks, and Azure Synapse. Implement data lake solutions and work with structured and unstructured datasets in Azure Data Lake Storage (ADLS). Collaborate with data scientists, analysts, and engineering teams to design and deliver end-to-end data solutions. Develop ETL/ELT processes and integrate data from multiple sources. Monitor, debug, and optimize workflows for performance and cost-efficiency. Ensure data governance, quality, and security best practices are maintained.
Must-Have Skills: 4+ years of total experience in data engineering. 2+ years of experience with Azure Databricks (PySpark, Notebooks, Delta Lake). Strong experience with Azure Data Factory, Azure SQL, and ADLS. Proficient in writing SQL queries and Python/Scala scripting. Understanding of CI/CD pipelines and version control systems (e.g., Git). Solid grasp of data modeling and warehousing concepts.
Skills: azure synapse, data modeling, data engineering, azure, azure databricks, azure data lake storage (adls), ci/cd, etl, elt, data warehousing, sql, scala, git, azure data factory, python
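Since Delta Lake is called out in the must-have skills, a brief sketch of writing a Delta table and using time travel; `spark` is assumed to come from a Delta-enabled (e.g., Databricks) session, and the paths and table names are placeholders:

```python
# Delta Lake sketch: write a managed Delta table, then read an earlier version.
df = (
    spark.read.option("header", True)
         .csv("abfss://raw@<account>.dfs.core.windows.net/sales/")
)

# Delta brings ACID transactions and versioning to the data lake.
df.write.format("delta").mode("overwrite").saveAsTable("lake.sales")

# Time travel: read the table as of an earlier version.
previous = spark.read.option("versionAsOf", 0).table("lake.sales")
```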
Posted 1 month ago
9.0 - 14.0 years
8 - 18 Lacs
Pune, Chennai, Bengaluru
Hybrid
Role & responsibilities (ADF/ADB): Overall 9-12 yrs of IT experience, preferably in cloud. Min 4 years in Azure Databricks on development projects. Should be 100% hands-on in PySpark coding. Should have strong SQL expertise in writing advanced/complex SQL queries. DWH experience is a must for this role. Experience in programming using Python is an advantage. Experience in data ingestion, preparation, integration, and operationalization techniques to optimally address the data requirements. Should be able to understand system architecture involving Data Lakes, Data Warehouses, and Data Marts. Experience owning end-to-end development, including coding, testing, debugging, and deployment. Excellent communication is required for this role.
Posted 1 month ago
7.0 - 12.0 years
9 - 14 Lacs
Bengaluru
Work from Office
Graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field. 7+ yrs total experience in Data Engineering projects and 4+ years of relevant experience with Azure technology services and Python.
Azure: Azure Data Factory, ADLS (Azure Data Lake Store), Azure Databricks. Mandatory programming languages: PySpark, PL/SQL, Spark SQL. Database: SQL DB.
Experience with Azure: ADLS, Databricks, Stream Analytics, SQL DW, Cosmos DB, Analysis Services, Azure Functions, serverless architecture, ARM templates. Experience with relational SQL and NoSQL databases, including Postgres and Cassandra. Experience with object-oriented/object-function scripting languages: Python, SQL, Scala, Spark SQL, etc. Data warehousing experience with strong domain knowledge.
Required education: Bachelor's Degree. Preferred education: Master's Degree.
Required technical and professional expertise: Intuitive individual with an ability to manage change and proven time management. Proven interpersonal skills while contributing to team effort by accomplishing related results as needed. Up-to-date technical knowledge from attending educational workshops and reviewing publications.
Preferred technical and professional experience: Experience with Azure: ADLS, Databricks, Stream Analytics, SQL DW, Cosmos DB, Analysis Services, Azure Functions, serverless architecture, ARM templates. Experience with relational SQL and NoSQL databases, including Postgres and Cassandra. Experience with object-oriented/object-function scripting languages: Python, SQL, Scala, Spark SQL, etc.
Posted 1 month ago
6.0 - 7.0 years
8 - 9 Lacs
Bengaluru
Work from Office
As an Associate Software Developer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.
In this role, your responsibilities may include: Implementing and validating predictive models, as well as creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques. Designing and implementing various enterprise search applications such as Elasticsearch and Splunk for client requirements. Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants, and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviors. Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modeling results.
Required education: Bachelor's Degree. Preferred education: Master's Degree.
Required technical and professional expertise: Total Exp: 6-7 yrs (Relevant: 4-5 yrs). Mandatory skills: Azure Databricks, Python/PySpark, SQL, GitHub, Azure DevOps, Azure Blob. Ability to use programming languages like Java, Python, Scala, etc., to build pipelines to extract and transform data from a repository to a data consumer. Ability to use Extract, Transform, and Load (ETL) tools and/or data integration or federation tools to prepare and transform data as needed. Ability to use leading-edge tools such as Linux, SQL, Python, Spark, Hadoop, and Java.
Preferred technical and professional experience: You thrive on teamwork and have excellent verbal and written communication skills. Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions. Ability to communicate results to technical and non-technical audiences.
Posted 1 month ago
6.0 - 7.0 years
8 - 9 Lacs
Pune
Work from Office
As an Associate Software Developer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.
In this role, your responsibilities may include: Implementing and validating predictive models, as well as creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques. Designing and implementing various enterprise search applications such as Elasticsearch and Splunk for client requirements. Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants, and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviors.
Required education: Bachelor's Degree. Preferred education: Master's Degree.
Required technical and professional expertise: Total Exp: 6-7 yrs (Relevant: 4-5 yrs). Mandatory skills: Azure Databricks, Python/PySpark, SQL, GitHub, Azure DevOps, Azure Blob. Ability to use programming languages like Java, Python, Scala, etc., to build pipelines to extract and transform data from a repository to a data consumer. Ability to use Extract, Transform, and Load (ETL) tools and/or data integration or federation tools to prepare and transform data as needed. Ability to use leading-edge tools such as Linux, SQL, Python, Spark, Hadoop, and Java.
Preferred technical and professional experience: You thrive on teamwork and have excellent verbal and written communication skills. Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions. Ability to communicate results to technical and non-technical audiences.
Posted 1 month ago