8.0 - 10.0 years
10 - 12 Lacs
Hyderabad
Work from Office
Overview: The Data Analyst will partner closely with business and S&T teams to prepare final analysis reports for stakeholders, enabling them to make important decisions based on facts and trends, and will lead data requirement, source analysis, data analysis, data transformation, and reconciliation activities. This role interacts with the DG, DPM, EA, DE, EDF, PO, and D&AI teams to source historical data for the Mosaic AI program and scale the solution to new markets.

Responsibilities: Lead data requirement, source analysis, data analysis, data transformation, and reconciliation activities. Partner with the FP&A Product Owner and associated business SMEs to understand and document business requirements and associated needs. Analyse business data requirements and translate them into a data design that satisfies local, sector, and global requirements. Use automated tools to extract data from primary and secondary sources, and statistical tools to identify, analyse, and interpret patterns and trends in complex data sets for diagnosis and prediction. Work with engineers and business teams to identify process-improvement opportunities and propose system modifications. Proactively identify impediments and look for pragmatic, constructive solutions to mitigate risk. Champion continuous improvement and drive efficiency. Preference will be given to candidates with a functional understanding of financial concepts (P&L, Balance Sheet, Cash Flow, Operating Expense) and experience modelling data and designing data flows.

Qualifications: Bachelor of Technology from a reputed college. Minimum 8-10 years of relevant work experience in data modelling/analytics. Minimum 5-6 years of experience navigating data in Azure Databricks, Synapse, Teradata, or similar database technologies. Expertise in Azure (Databricks, Data Factory, Data Lake Store Gen2). Proficiency in SQL and PySpark to analyse data for both development validation and operational support is critical. Exposure to GenAI. Good communication and presentation skills are a must for this role.
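Much of the reconciliation work described above reduces to comparing a source extract against its curated copy at an agreed grain. A minimal PySpark sketch of that kind of check follows; the table, column, and grain names are hypothetical placeholders, not taken from the posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("finance-data-reconciliation").getOrCreate()

# Hypothetical source and curated tables; real names would come from the data owners.
source = spark.table("raw.market_sales")
target = spark.table("curated.market_sales")

def summarize(df):
    """Aggregate each side to the grain being reconciled."""
    return (df.groupBy("market", "fiscal_period")
              .agg(F.count("*").alias("row_count"),
                   F.sum("net_revenue").alias("net_revenue")))

# A full outer join keeps periods that exist on only one side; diffs of 0 mean the sides agree.
recon = (summarize(source).alias("s")
         .join(summarize(target).alias("t"), ["market", "fiscal_period"], "full_outer")
         .withColumn("count_diff", F.col("s.row_count") - F.col("t.row_count"))
         .withColumn("revenue_diff", F.col("s.net_revenue") - F.col("t.net_revenue")))

recon.filter((F.col("count_diff") != 0) | (F.col("revenue_diff") != 0)).show()
```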
Posted 2 months ago
7.0 - 9.0 years
25 - 35 Lacs
Pune
Hybrid
Warm greetings from Dataceria Software Solutions Pvt Ltd. We are looking for a Senior Azure Data Engineer (Domain: BFSI, immediate joiners). Send your resume to careers@dataceria.com.

As a Senior Azure Data Engineer, you will play a pivotal role in bridging data engineering with front-end development. You will work closely with Data Scientists and UI Developers (React.js) to design, build, and secure data services that power a next-generation platform. This is a hands-on, collaborative role requiring deep experience across the Azure data ecosystem, API development, and modern DevOps practices.

Your responsibilities will include: Building and maintaining scalable Azure data pipelines (ADF, Synapse, Databricks, DBT) to serve dynamic front-end interfaces. Creating API access layers to expose data to front-end applications and external services. Collaborating with the Data Science team to operationalize models and insights. Working directly with React.js developers to support UI data integration. Ensuring data security, integrity, and monitoring across systems. Implementing and maintaining CI/CD pipelines for seamless deployment. Automating and managing cloud infrastructure using Terraform, Kubernetes, and Azure App Services. Supporting data migration initiatives from legacy infrastructure to modern platforms such as Data Mesh. Refactoring legacy pipelines with code reuse, version control, and infrastructure-as-code best practices. Analyzing, mapping, and documenting financial data models across various systems.

What we're looking for: 8+ years of experience in data engineering, with a strong focus on the Azure ecosystem (ADF, Synapse, Databricks, App Services). Proven ability to develop and host secure, scalable REST APIs. Experience supporting cross-functional teams, especially front-end/UI and data science groups, is a plus. Hands-on experience with Terraform, Kubernetes (Azure AKS), CI/CD, and cloud automation. Strong expertise in ETL/ELT design, performance tuning, and pipeline monitoring. Solid command of Python and SQL, and optionally Scala, Java, or PowerShell. Knowledge of data security practices, governance, and compliance (e.g., GDPR). Familiarity with big data tools (e.g., Spark, Kafka), version control (Git), and testing frameworks for data pipelines. Excellent communication skills and the ability to explain technical concepts to diverse stakeholders.

Joining: Immediate. Work location: Pune (hybrid). Open position: Senior Azure Data Engineer. If interested, please share your updated resume to careers@dataceria.com. We welcome applications from skilled candidates who are open to working in a hybrid model; candidates with less experience but strong technical abilities are also encouraged to apply.

Dataceria Software Solutions Pvt Ltd. Follow our LinkedIn for more job openings: https://www.linkedin.com/company/dataceria/ Email: careers@dataceria.com
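The "API access layer" responsibility above usually means a thin, read-only REST service between the curated data store and the React front end. The sketch below is purely illustrative and assumes a FastAPI service over an Azure SQL/Synapse serving table; the endpoint, connection string, table, and column names are hypothetical, not part of the job description.

```python
from fastapi import FastAPI, HTTPException
import pyodbc  # assumes an ODBC driver for the Azure SQL / Synapse serving layer

app = FastAPI(title="Curated data access layer")

# Placeholder connection string; auth settings would normally come from Key Vault / app config.
CONN_STR = "DRIVER={ODBC Driver 18 for SQL Server};SERVER=<server>;DATABASE=<db>;"

@app.get("/portfolios/{portfolio_id}/positions")
def get_positions(portfolio_id: str):
    """Return current positions for one portfolio from a hypothetical curated table."""
    with pyodbc.connect(CONN_STR) as conn:
        rows = conn.execute(
            "SELECT instrument, quantity, market_value "
            "FROM curated.positions WHERE portfolio_id = ?",
            portfolio_id,
        ).fetchall()
    if not rows:
        raise HTTPException(status_code=404, detail="portfolio not found")
    return [
        {"instrument": r.instrument,
         "quantity": float(r.quantity),
         "market_value": float(r.market_value)}
        for r in rows
    ]
```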
Posted 2 months ago
3.0 - 7.0 years
3 - 8 Lacs
Tirupati
Work from Office
Job Summary: We are looking for a Senior Data Engineer who is creative, collaborative, and adaptable to join our agile team of data scientists, engineers, and UX developers. The role focuses on building and maintaining robust data pipelines to support advanced analytics, data science, and BI solutions. As a Senior Data Engineer, you will work with internal and external data, collaborate with data scientists, and contribute to the design, development, and deployment of innovative solutions.

Key Responsibilities: Design, develop, test, and maintain optimal data pipeline and ETL architectures. Map out data systems and define/design required integrations, ETL, BI, and AI systems/processes. Prepare and optimize data for predictive and prescriptive modeling. Collaborate with teams to integrate ERP data into the enterprise data lake, ensuring seamless flow and quality. Enhance cloud data infrastructure on AWS or Azure for scalability and performance. Utilize big data tools and frameworks to optimize data acquisition and preparation. Build architectures to move data to/from data lakes and data warehouses for advanced analytics. Develop and curate data models for analytics, dashboards, and reports. Conduct code reviews, maintain production-level code, and implement testing approaches. Monitor, troubleshoot, and resolve data ingestion workflows to maintain reliability and uptime. Drive innovation and implement efficient new approaches to data engineering tasks.

Required Skills and Experience: Bachelor's degree in Computer Science, Mathematics, Engineering, or a related field. 5+ years of experience working with enterprise data platforms, including building and managing data lakes. 3-5 years of experience designing and implementing data warehouse solutions. Expertise in SQL, developing stored procedures (SPs), and advanced data design concepts. Proficiency in Spark (Python/Scala) and Spark Streaming for real-time data pipelines. Experience with AWS or Azure services (e.g., AWS Glue, Azure Data Factory, Redshift, Snowflake, etc.). Familiarity with big data tools such as Apache Kafka, Apache Spark, or Flink. Hands-on experience with orchestration tools (e.g., Apache Airflow, Prefect). Knowledge of CI/CD processes, version control (e.g., Git, Jenkins), and deployment automation. Experience integrating ERP data into data lakes is a plus. Experience with traditional ETL tools (e.g., Talend, Pentaho) is an advantage. Strong problem-solving, communication, and collaboration skills.

Why Join Us? Be part of a collaborative and agile team driving cutting-edge AI and data engineering solutions. Work on impactful projects that make a difference across industries. Opportunities for professional growth and continuous learning. Competitive salary and benefits package.
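As one concrete, purely illustrative example of the real-time pipelines mentioned above, a Spark Structured Streaming job might read events from Kafka and land them in the data lake. The broker, topic, schema, and paths below are assumptions, not details from the posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("orders-stream").getOrCreate()

# Hypothetical event schema for an ERP order feed.
order_schema = StructType([
    StructField("order_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_time", TimestampType()),
])

orders = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")  # placeholder brokers
          .option("subscribe", "erp.orders")                  # placeholder topic
          .load()
          .select(F.from_json(F.col("value").cast("string"), order_schema).alias("o"))
          .select("o.*"))

# Write the parsed events to the lake; the checkpoint makes the sink restartable.
query = (orders.writeStream
         .format("parquet")
         .option("path", "/mnt/datalake/curated/orders")
         .option("checkpointLocation", "/mnt/datalake/_checkpoints/orders")
         .outputMode("append")
         .start())
query.awaitTermination()
```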
Posted 2 months ago
8.0 - 12.0 years
20 - 27 Lacs
Hyderabad, Bengaluru
Hybrid
Role: Data Engineer. Years of Experience: 8-12 yrs. Preferred Location: HYD 1. Shift Timing (IST): 11:00 AM - 08:30 PM. Short Description: Data Engineer with ADF, Databricks. Anticipated Onboarding Date: 1-Apr-2025.

Engagement & Project Overview: We have multiple data applications under the Financial Accounting span that are planned to migrate from on-prem (Mainframe or DataStage) to Azure Cloud.

Primary Responsibilities: At least 8 to 12 years of relevant experience in Data Engineering technologies. Able to design, implement, and maintain data applications across all phases of software development. Able to interact with the business for requirements and convert them into design documents. Healthcare domain knowledge is good to have. Strong analytical and problem-solving skills. Well versed with agile processes and open to working on application support and flexible working hours.

Must-Have Skills: Azure Data Factory (ADF), Databricks & PySpark. Experience in any database. Excellent communication.

Nice-to-Have Skills: Snowflake. Healthcare domain knowledge. Any cloud platform, preferably Azure.
Posted 2 months ago
10.0 - 14.0 years
8 - 12 Lacs
Pune, Chennai, Bengaluru
Work from Office
We are seeking an experienced AI Lead with a minimum of 10 years of industry experience in the field of Artificial Intelligence. The ideal candidate will have a proven track record of successfully leading and delivering multiple projects in Machine Learning, Deep Learning, and Natural Language Processing (NLP). Additionally, they should possess sound knowledge of Generative AI and Large Language Models (LLMs).

Responsibilities: Project Leadership: Lead end-to-end AI projects, from conception to delivery, ensuring alignment with business objectives and stakeholder requirements. Technical Expertise: Serve as a subject matter expert in AI, providing guidance and support to the team in the development and implementation of advanced machine learning and deep learning models. Research and Development: Stay abreast of the latest advancements in AI technologies and methodologies, conducting research and experiments to drive innovation and enhance project outcomes. Team Management: Manage and mentor a team of junior data scientists and machine learning engineers, providing technical guidance, coaching, and professional development opportunities. Collaboration: Collaborate closely with cross-functional teams, including product management, engineering, and business development, to identify opportunities for leveraging AI to solve business challenges and drive growth. Quality Assurance: Ensure the quality and accuracy of AI solutions through rigorous testing, validation, and performance monitoring. Strategic Planning: Contribute to the development of AI strategies and roadmaps, outlining key initiatives and milestones to support business objectives.

Qualifications: Bachelor's degree in Computer Science, Engineering, Mathematics, or a related field; an advanced degree (Master's or Ph.D.) is preferred. Minimum of 10 years of industry experience in artificial intelligence, with a focus on machine learning, deep learning, and NLP. Proven track record of leading and delivering multiple AI projects from conception to completion. Expertise in machine learning frameworks and libraries such as TensorFlow, PyTorch, or scikit-learn. Expertise in MLOps tools, Databricks, and AI services offered by cloud platforms such as Azure, AWS, and GCP. Experience with Generative AI and Large Language Models (LLMs) such as GPT, BERT, or Transformer models. Excellent leadership and communication skills, with the ability to effectively manage and mentor a team of junior data scientists and ML engineers. Strong analytical and problem-solving skills, with keen attention to detail and a commitment to delivering high-quality results. Ability to thrive in a fast-paced, dynamic environment and adapt to changing priorities and requirements.

Location: Noida, Mumbai, Hyderabad, Kochi.
Posted 2 months ago
6.0 - 11.0 years
0 Lacs
Navi Mumbai, Pune, Bengaluru
Hybrid
Hello Folks, we are hiring an "ADB Data Engineer" for one of our service-based client companies. Job Description: Job type: Permanent / Full Time. Experience: 6 to 16 yrs. Location: Mumbai/Bangalore/Pune/Chennai (Hybrid). Notice period: immediate joiners or candidates currently serving notice (up to June) only. Key Skills: ADB + Python/PySpark + Database. Any references would be greatly appreciated! Cheers & warm regards, Kajal Gupta, Lead - Talent Acquisition Specialist, kajal.gupta@prodcon.com
Posted 2 months ago
7.0 - 11.0 years
20 - 27 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Description: Automate the provisioning, modification, and maintenance of cloud infrastructure services using Terraform or GitHub Actions to enable automated and repeatable deployments. Automate the software deployments of cloud-based applications and services using Ansible, GitHub Actions, or internal corporate CI/CD tools. Participate in resolving security vulnerabilities by making the required modifications to the infrastructure and deployment automation. Identify and drive improvements in infrastructure and system reliability, performance, monitoring, and the overall stability of applications and platforms. Deploy and support critical cloud infrastructure, systems, and applications. Build and maintain tools and automation that eliminate repetitive tasks and prevent incident occurrences. Create and maintain documentation for infrastructure and software deployments and automations.

Qualifications: Undergraduate degree or equivalent experience. At least 5 years of production application and systems support, preferably in a Site Reliability or Operations context. 3+ years of experience with continuous integration and deployment automation tools such as GitLab/GitHub, Jenkins, Harness, AWS CloudFormation, Salt, Puppet, Chef, or Ansible. 2+ years implementing Infrastructure as a Service using cloud tools such as AWS CloudFormation and HashiCorp Terraform. 3+ years configuring and using Azure cloud services like Storage Accounts, Azure Databricks, and Azure Pipelines in a highly available and scalable production environment. Similar experience with Amazon Web Services (AWS) services like EC2, EBS, ELB, S3, Route 53, RDS, Lambda, and EMR in a highly available and scalable production environment is a big plus. Experience with relational RDBMS (MySQL, PostgreSQL) and SQL. Scripting experience with Shell and Python preferred. Experience with source control tools such as Git/GitHub/GitLab. Experience supporting, analyzing, and troubleshooting large-scale distributed mission-critical systems. Experience configuring, managing, and supporting both Azure and AWS environments. Systematic problem-solving approach and a strong sense of ownership to drive problems to resolution. Experience translating business requirements into reports and data visualizations and supporting Business and Product teams with their analytics needs. Excellent verbal and written communication skills. Network knowledge (TCP/IP, UDP, DNS, load balancing) and prior network administration experience is a plus. Experience with container-based architectures (Kubernetes, AWS ECS) is a plus.

Careers with Optum. Here's the idea. We built an entire organization around one giant objective: make the health system work better for everyone. So when it comes to how we use the world's large accumulation of health-related information, or guide health and lifestyle choices, or manage pharmacy benefits for millions, our first goal is to leap beyond the status quo and uncover new ways to serve. Optum, part of the UnitedHealth Group family of businesses, brings together some of the greatest minds and most advanced ideas on where health care has to go in order to reach its fullest potential. For you, that means working on high performance teams against sophisticated challenges that matter.
Optum, incredible ideas in one incredible company and a singular opportunity to do your life's best work.SM Diversity creates a healthier atmosphere: UnitedHealth Group is an Equal Employment Opportunity/Affirmative Action employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, age, national origin, protected veteran status, disability status, sexual orientation, gender identity or expression, marital status, genetic information, or any other characteristic protected by law. UnitedHealth Group is a drug-free workplace. Candidates are required to pass a drug test before beginning employment.
Posted 2 months ago
6.0 - 10.0 years
5 - 15 Lacs
Bengaluru
Hybrid
Job Position: Azure Databricks Architect. Experience: 6-10 years. Notice Period: only candidates currently serving notice can apply (last working day no later than 20 June 2025). Working Mode: Hybrid (2 days a week, rotational shift). Employment Type: Permanent with Brainworks Business Solution. Mandatory Skills: Azure Databricks, Azure Cloud Architecture, PoC development, Cloud Migrations, Infrastructure Design, Application Services, Security Integration, Azure Networking. Interested candidates, please share your CVs to rutuja.s@bwbsol.com / 9850368787.
Posted 2 months ago
2.0 - 7.0 years
8 - 17 Lacs
Bengaluru
Hybrid
Project Role: Big Data Testing. Work Experience: 2 to 8 years. Work Location: Bengaluru. Work Mode: Hybrid. Must-Have Skills: Big Data testing, Azure, SQL.

Job Overview: Designs/develops software solutions requiring general domain knowledge and developing business experience. Analyzes user requirements/needs and makes decisions within limited parameters under regular supervision.

KNOWLEDGE AND EXPERIENCE: 2 to 8 years of relevant work experience in software testing, primarily on Database/ETL, with exposure to Big Data testing. Hands-on experience in testing Big Data applications on Azure and Cloudera. Understanding of query languages such as Pig, HiveQL, etc. Excellent skills in writing SQL queries and good knowledge of databases (Oracle/Netezza/SQL). Hands-on experience with at least one scripting language: Java, Scala, Python, etc. Good to have experience in Unix shell scripting. Experience in Agile development and knowledge of Jira. Good analytical and communication skills. Prior healthcare industry experience is a plus. Flexible and quick to adapt to different technologies and tools.

Educational Qualification: BTech/BE/BCA/BSc/MTech/MSc/MCA
Posted 2 months ago
5.0 - 8.0 years
6 - 24 Lacs
Hyderabad
Work from Office
Notice: 30 to 45 days. * Design, develop & maintain data pipelines using PySpark, Databricks, Unity Catalog & cloud. * Collaborate with cross-functional teams on ETL processes & report development. Share resume: garima.arora@anetcorp.com
Posted 2 months ago
6.0 - 8.0 years
18 - 20 Lacs
Bengaluru
Work from Office
The Development Lead will oversee the design, development, and delivery of advanced data solutions using Azure Databricks, SQL, and data visualization tools like Power BI. The role involves leading a team of developers, managing data pipelines, and creating insightful dashboards and reports to drive data-driven decision-making across the organization. The individual will ensure best practices are followed in data architecture, development, and reporting while maintaining alignment with business objectives.

Key Responsibilities: Data Integration & ETL Processes: Design, build, and optimize ETL pipelines to manage the flow of data from various sources into data lakes, data warehouses, and reporting platforms. Data Visualization & Reporting: Lead the development of interactive dashboards and reports using Power BI, ensuring that business users have access to actionable insights and performance metrics. SQL Development & Optimization: Write, optimize, and review complex SQL queries for data extraction, transformation, and reporting, ensuring high performance and scalability across large datasets. Azure Cloud Solutions: Implement and manage cloud-based solutions using Azure services (Azure Databricks, Azure SQL Database, Data Lake) to support business intelligence and reporting initiatives. Collaboration with Stakeholders: Work closely with business leaders and cross-functional teams to understand reporting and analytics needs, translating them into technical requirements and actionable data solutions. Quality Assurance & Best Practices: Implement and maintain best practices in development, ensuring code quality, version control, and adherence to data governance standards. Performance Monitoring & Tuning: Continuously monitor the performance of data systems, reporting tools, and dashboards to ensure they meet SLAs and business requirements. Documentation & Training: Create and maintain comprehensive documentation for all data solutions, including architecture diagrams, ETL workflows, and data models. Provide training and support to end users on Power BI reports and dashboards.

Required Qualifications: Bachelor's or Master's degree in Computer Science, Information Systems, or a related field. Proven experience as a Development Lead or Senior Data Engineer with expertise in Azure Databricks, SQL, Power BI, and data reporting/visualization. Hands-on experience in Azure Databricks for large-scale data processing and analytics, including Delta Lake, Spark SQL, and integration with Azure Data Lake. Strong expertise in SQL for querying, data transformation, and database management. Proficiency in Power BI for developing advanced dashboards, data models, and reporting solutions. Experience in ETL design and data integration across multiple systems, with a focus on performance optimization. Knowledge of Azure cloud architecture, including Azure SQL Database, Data Lake, and other relevant services. Experience leading agile development teams, with a strong focus on delivering high-quality, scalable solutions. Strong problem-solving skills, with the ability to troubleshoot and resolve complex data and reporting issues. Excellent communication skills, with the ability to interact with both technical and non-technical stakeholders.

Preferred Qualifications: Knowledge of additional Azure services (e.g., Azure Synapse, Data Factory, Logic Apps) is a plus. Experience in Power BI for data visualization and custom calculations.
Keywords / Mandatory Key Skills: Data Factory, Power BI*, Spark SQL, Logic Apps, Azure Databricks*, ETL design, agile development, SQL*, Synapse, data reporting*, Delta Lake, Azure Data Lake, Azure cloud architecture
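A typical Databricks step behind dashboards like those described above is an incremental upsert into a Delta table that Power BI then queries. The sketch below is illustrative only; the landing path, table, and column names are assumptions, not from the posting.

```python
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.appName("daily-sales-load").getOrCreate()

# Hypothetical daily extract landed by an ADF copy activity.
updates = spark.read.parquet("/mnt/raw/sales/latest/")

# Upsert into the curated Delta table that reports are built on.
target = DeltaTable.forName(spark, "reporting.sales")
(target.alias("t")
 .merge(updates.alias("u"), "t.sale_id = u.sale_id")
 .whenMatchedUpdateAll()
 .whenNotMatchedInsertAll()
 .execute())

# Power BI (or an ad-hoc Spark SQL check) then reads the curated table.
spark.sql("SELECT region, SUM(amount) AS revenue FROM reporting.sales GROUP BY region").show()
```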
Posted 2 months ago
5.0 - 10.0 years
10 - 20 Lacs
Pune, Chennai, Bengaluru
Work from Office
Greetings from Cognitud. We have job openings for permanent roles with a top CMMI Level 5 company across India. We are looking for an Azure Data Engineer. Location: Chennai, Coimbatore, Bangalore, Pune. If you are interested, please reply to us with your updated CV and the following details; if you are not interested, we request you to refer us to your contacts.

Information needed along with your updated resume: Passport-size photo (mandatory in portal); Name; Mobile number; Email; Current skill set you are looking for; Current company; Total experience; Relevant experience; Current CTC; Expected CTC; Offer in hand; Current location; Preferred location; Notice period (please mention last working day); Any educational/career gap (mention duration); PAN (mandatory in portal); Highest qualification; DOB.

Please reach out to me at aishwarya@cognitud.in for a more detailed conversation and upcoming opportunities, or connect with me through LinkedIn: linkedin.com/in/aishwarya-m-b12021233
Posted 2 months ago
5.0 - 8.0 years
15 - 20 Lacs
Mohali
Remote
In this Role, Your Responsibilities Will Be: Collaborate with cross-functional teams, including data analysts, data scientists, and business stakeholders, to understand their data requirements and deliver effective solutions. Leverage Fabric Lakehouse for data storage, governance, and processing to support Power BI and automation initiatives. Expertise in data modeling, with a strong focus on data warehouse and lakehouse design. Design and implement data models, warehouses, and databases using MS Fabric, Azure Synapse Analytics, Azure Data Lake Storage, and other Azure services. Develop ETL (Extract, Transform, Load) processes using SQL Server Integration Services (SSIS), Azure Synapse Pipelines, or similar tools to prepare data for analysis and reporting. Implement data quality checks and governance practices to ensure accuracy, consistency, and security of data assets. Monitor and optimize data pipelines and workflows for performance, scalability, and cost efficiency, utilizing Microsoft Fabric for real-time analytics and AI-powered workloads. Strong proficiency in Business Intelligence (BI) tools such as Power BI, Tableau, and other analytics platforms. Experience with data integration and ETL tools like Azure Data Factory. Proven expertise in Microsoft Fabric or similar data platforms. In-depth knowledge of the Azure Cloud Platform, particularly in data warehousing and storage solutions. Strong problem-solving skills with a track record of resolving complex technical challenges. Excellent communication skills, with the ability to convey technical concepts to both technical and non-technical stakeholders. Ability to work independently and collaboratively within a team environment. Microsoft certifications in data-related fields are preferred. DP-700 (Microsoft Certified: Fabric Data Engineer Associate) is a plus.
Posted 2 months ago
14.0 - 24.0 years
35 - 55 Lacs
Hyderabad, Bengaluru, Delhi / NCR
Hybrid
About the role: We are seeking a Sr. Practice Manager. With Insight, you will be involved in different phases of the Software Development Lifecycle, including Analysis, Design, Development, and Deployment. We will count on you to be proficient in software design and development, data modelling, data processing, and data visualization.

Along the way, you will get to: Help customers leverage existing data resources and implement new technologies and tooling to enable data science and data analytics. Track the performance of our resources and related capabilities. Mentor and manage other data engineers and ensure data engineering best practices are being followed. Constantly evolve and scale our capabilities along with the growth of the business and the needs of our customers.

Be ambitious: This opportunity is not just about what you do today but also about where you can go tomorrow. As a Practice Manager, you are positioned for swift advancement within our organization through a structured career path. When you bring your hunger, heart, and harmony to Insight, your potential will be met with continuous opportunities to upskill, earn promotions, and elevate your career.

What we're looking for in a Sr. Practice Manager: A total of 14+ years of relevant experience, with at least 5-6 years in people management, managing teams of 20+. Minimum 12 years of experience in data technology. Experience in data warehousing and an excellent command of SQL, data modelling, and ETL development. Hands-on experience in SQL Server and Microsoft Azure (Data Factory, Data Lake, Databricks). Experience in MSBI (SSRS, SSIS, SSAS), writing queries and stored procedures (good to have). Experience using Power BI, MDX, DAX, MDS, DQS (good to have). Experience developing designs for predictive analytics models. Ability to handle performance-improvement tasks and data archiving. Proficient in provisioning the relevant Azure resources, forecasting hardware usage, and managing to a budget.
Posted 2 months ago
5.0 - 9.0 years
10 - 14 Lacs
Mumbai, Chennai, Bengaluru
Work from Office
Required skills: Classic pipelines, PowerShell, YAML, Bicep, ARM templates, Terraform, and CI/CD. Experience with data lake and analytics technologies in Azure (e.g., Azure Data Lake Storage, Azure Data Factory, Azure Databricks) is most important, along with a data background in Azure and PowerShell. Location: Chennai, Hyderabad, Kolkata, Pune, Ahmedabad, Remote.
Posted 2 months ago
4.0 - 5.0 years
11 - 12 Lacs
Pune, Chennai, Bengaluru
Hybrid
Azure Data Engineer with QA. Salary Range: 11-12 LPA. Experience Level: 4-5 yrs. Job Location: Chennai, Bangalore, Pune, Mumbai. Work Mode: WFO or Hybrid. Shift Timings: 3:30 PM IST to 12:30 AM IST.

Must Have: Azure Databricks, Azure Data Factory, Spark SQL. 4-5 years of development experience in Azure Databricks. Strong experience in SQL, along with performing Azure Databricks quality assurance. Understand complex data systems by working closely with engineering and product teams. Develop scalable and maintainable applications to extract, transform, and load data in various formats to SQL Server, Hadoop Data Lake, or other data storage locations.

Must-Required Skills: Strong experience in SQL, along with performing Azure Databricks testing.

Please share the following details along with your most updated resume to geeta.negi@compunnel.com if you are interested in the opportunity: Total experience, relevant experience, current CTC, expected CTC, notice period (last working day if you are serving notice), current location, and a self-rating out of 5 for each of your three key skills (mention the skill).
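The "Databricks quality assurance" called for above is often a small set of PySpark assertions run against the loaded tables after each load. A minimal sketch, assuming a hypothetical table and rules:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
df = spark.table("curated.customer_orders")  # hypothetical table under test

# Each check evaluates to True when the rule holds for the whole table.
checks = {
    "no_null_keys": df.filter(F.col("order_id").isNull()).count() == 0,
    "no_duplicate_keys": df.count() == df.select("order_id").distinct().count(),
    "amounts_non_negative": df.filter(F.col("order_amount") < 0).count() == 0,
}

failed = [name for name, passed in checks.items() if not passed]
assert not failed, f"Data quality checks failed: {failed}"
```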
Posted 2 months ago
3.0 - 7.0 years
7 - 11 Lacs
Bengaluru
Work from Office
Ability to take full ownership and deliver a component or functionality. Support the team to deliver project features with high quality and provide technical guidance. Responsible for working effectively both individually and with team members toward customer satisfaction and success. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: SQL, ADF, Azure Databricks. Preferred technical and professional experience: PostgreSQL, MSSQL, Eureka, Hystrix, Zuul/API Gateway, in-memory storage.
Posted 2 months ago
6.0 - 9.0 years
10 - 18 Lacs
Bengaluru
Work from Office
Job Position: Azure Databricks Architect. Experience: 6-9 years. Location: Bangalore. Notice Period: only candidates currently serving notice can apply (last working day no later than 20 June 2025). Employment Type: Permanent with Brainworks Business Solution. Mandatory Skills: Azure Databricks, Azure Cloud Architecture, PoC development, Cloud Migrations, Infrastructure Design, Application Services, Security Integration, Azure Networking. Interested candidates, please share your CVs to rutuja.s@bwbsol.com / 9850368787.
Posted 2 months ago
5.0 - 10.0 years
20 - 35 Lacs
Bengaluru
Work from Office
Job Title: Senior Data Engineer (ML & Azure Platform). Location: Bangalore. Experience: 5-10 years. Joining Timeframe: Only candidates who can join within 1 month will be considered.

Job Description: We are seeking a skilled Senior Data Engineer to work on end-to-end data engineering and data science use cases. The ideal candidate will have strong expertise in Python or Scala, Spark (Databricks), and SQL, and experience building scalable and efficient data pipelines on Azure.

Primary Skills: Azure Data Platform (Data Factory, Databricks). Strong experience in SQL and Python or Scala. Experience with ETL/ELT pipelines and transformations. Knowledge of Spark, Delta Lake, Parquet, and Big Data technologies. Familiarity with MLOps, CI/CD pipelines, model monitoring, and versioning. Performance tuning and pipeline optimization. Data quality checks and feature engineering.

Nice-to-Have Skills: Exposure to NLP, time-series forecasting, anomaly detection. Knowledge of data governance frameworks. Understanding of retail or workforce analytics domains.

Note: Please apply only if you're available to join within 1 month. To Apply: Kindly share your updated resume, current CTC, expected CTC, and notice period to vijay.s@xebia.com.
Posted 2 months ago
6.0 - 11.0 years
8 - 12 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
Job Opening: Senior Data Engineer (Remote, Contract, 6 Months). Remote | Contract Duration: 6 Months | Experience: 6-8 Years. We are hiring a Senior Data Engineer for a 6-month remote contract position. The ideal candidate is highly skilled in building scalable data pipelines and working within the Azure cloud ecosystem, especially Databricks, ADF, and PySpark. You'll work closely with cross-functional teams to deliver enterprise-level data engineering solutions.

#KeyResponsibilities Build scalable ETL pipelines and implement robust data solutions in Azure. Manage and orchestrate workflows using ADF, Databricks, ADLS Gen2, and Key Vaults. Design and maintain secure and efficient data lake architecture. Work with stakeholders to gather data requirements and translate them into technical specs. Implement CI/CD pipelines for seamless data deployment using Azure DevOps. Monitor data quality, performance bottlenecks, and scalability issues. Write clean, organized, reusable PySpark code in an Agile environment. Document pipelines, architectures, and best practices for reuse.

#MustHaveSkills Experience: 6+ years in Data Engineering. Tech Stack: SQL, Python, PySpark, Spark, Azure Databricks, ADF, ADLS Gen2, Azure DevOps, Key Vaults. Core Expertise: Data Warehousing, ETL, Data Pipelines, Data Modelling, Data Governance. Agile, SDLC, Containerization (Docker), clean coding practices.

#GoodToHaveSkills Event Hubs, Logic Apps. Power BI. Strong logic building and competitive programming background.

Location: Mumbai, Delhi/NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote
Posted 2 months ago
12.0 - 20.0 years
22 - 37 Lacs
Bengaluru
Hybrid
12+ yrs of experience in Data Architecture. Strong in Azure Data Services & Databricks, including Delta Lake & Unity Catalog. Experience in Azure Synapse, Purview, ADF, DBT, Apache Spark, DWH, Data Lakes, NoSQL, OLTP. NP: Immediate. Contact: sachin@assertivebs.com
Posted 2 months ago
10.0 - 12.0 years
25 - 27 Lacs
Indore, Hyderabad, Pune
Work from Office
We are seeking a skilled Lead Data Engineer with extensive experience in Snowflake, ADF, SQL, and other relevant data technologies to join our team. As a key member of our data engineering team, you will play an instrumental role in designing, developing, and managing data pipelines, working closely with cross-functional teams to drive the success of our data initiatives.

Key Responsibilities: Design, implement, and maintain data solutions using Snowflake, ADF, and SQL Server to ensure data integrity, scalability, and high performance. Lead and contribute to the development of data pipelines, ETL processes, and data integration solutions, ensuring the smooth extraction, transformation, and loading of data from diverse sources. Work with MSBI, SSIS, and Azure Data Lake Storage to optimize data flows and storage solutions. Collaborate with business and technical teams to identify project needs, estimate tasks, and set intermediate milestones to achieve final outcomes. Implement industry best practices related to Business Intelligence and Data Management, ensuring adherence to usability, design, and development standards. Perform in-depth data analysis to resolve data issues and improve overall data quality. Mentor and guide junior data engineers, providing technical expertise and supporting the development of their skills. Collaborate effectively with geographically distributed teams to ensure project goals are met in a timely manner.

Required Technical Skills: T-SQL, SQL Server, MSBI (SQL Server Integration Services, Reporting Services), Snowflake, Azure Data Factory (ADF), SSIS, Azure Data Lake Storage. Proficient in designing and developing data pipelines, data integration, and data management workflows. Strong understanding of cloud data solutions, with a focus on Azure-based tools and technologies.

Nice to Have: Experience with Power BI for data visualization and reporting. Familiarity with Azure Databricks for data processing and advanced analytics.

Mandatory Key Skills: Azure Data Lake Storage, Business Intelligence, Data Management, T-SQL, Power BI, Azure Databricks, Cloud Data Solutions, Snowflake*, ADF*, SQL Server*, MSBI*, SSIS*
Posted 2 months ago
7.0 - 12.0 years
18 - 30 Lacs
Chennai
Hybrid
Hi, we have a vacancy for a Sr. Data Engineer. We are seeking an experienced Senior Data Engineer to join our dynamic team. The ideal candidate will be responsible for designing and implementing the data engineering framework.

Responsibilities: Strong skills in BigQuery, GCP Cloud Data Fusion (for ETL/ELT), and Power BI. Strong skills in data pipelines are required. Able to work with Power BI and Power BI reporting. Design and implement the data engineering framework and data pipelines using Databricks and Azure Data Factory. Document the high-level design components of the Databricks data pipeline framework. Evaluate and document the current dependencies on the existing DEI toolset and agree a migration plan. Lead the design and implementation of an MVP Databricks framework. Document and agree an aligned set of standards to support the implementation of a candidate pipeline under the new framework. Support integrating a test automation approach into the Databricks framework, in conjunction with the test engineering function, to support CI/CD and automated testing. Support the development team's capability building by establishing an L&D and knowledge-transition approach. Support the implementation of data pipelines against the new framework in line with the agreed migration plan. Ensure data quality management, including profiling, cleansing, and deduplication, to support the build of data products for clients.

Skill Set: Experience working in Azure Cloud using Azure SQL, Azure Databricks, Azure Data Lake, Delta Lake, and Azure DevOps. Proficient Python, PySpark, and SQL coding skills. Data profiling and data modelling experience on large data transformation projects, creating data products and data pipelines. Creating data management frameworks and data pipelines that are metadata- and business-rules-driven using Databricks. Experience reviewing datasets for data products in terms of data quality management and populating data schemas set by data modellers. Experience with data profiling, data quality management, and data cleansing tools.

Immediate joining or short notice is required. Please call Hemanth (9715166618) for more info. Thanks, Hemanth, 9715166618
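For the profiling, cleansing, and deduplication work mentioned above, a common Databricks pattern is to keep the latest record per business key and normalise obvious defects on the way through. A minimal sketch under assumed table and column names:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("dedupe-customers").getOrCreate()
raw = spark.table("staging.customers")  # hypothetical staging table

# Keep only the most recent version of each customer record.
latest_first = Window.partitionBy("customer_id").orderBy(F.col("updated_at").desc())

deduped = (raw
           .withColumn("rn", F.row_number().over(latest_first))
           .filter(F.col("rn") == 1)
           .drop("rn")
           # Basic cleansing: trim names and map empty-string emails to null.
           .withColumn("customer_name", F.trim(F.col("customer_name")))
           .withColumn("email", F.when(F.col("email") == "", None).otherwise(F.col("email"))))

deduped.write.mode("overwrite").saveAsTable("curated.customers")
```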
Posted 2 months ago
3.0 - 6.0 years
12 - 20 Lacs
Pune
Hybrid
Data Engineer required for our client company, a product-based IT company. Job location: Pune. Salary: up to 20 LPA. Experience: 3 to 6 years. Work mode: hybrid. Immediate joiners only (within 15 days). Apply here. Required candidate profile: the candidate should have excellent communication skills.
Posted 2 months ago