3.0 - 6.0 years
5 - 8 Lacs
Hyderabad, Bengaluru, Delhi / NCR
Work from Office
We are seeking an experienced Azure Data Engineer with 3-6 years of experience for a 6-month remote contract. The candidate will be responsible for developing and supporting IT solutions using technologies such as Azure Data Factory, Azure Databricks, Azure Synapse, Python, PySpark, Teradata, and Snowflake. The role involves designing ETL pipelines, developing Databricks notebooks, handling CI/CD pipelines via Azure DevOps, and working on data warehouse modeling and integration. Strong skills in SQL, data lake storage, and deployment/monitoring are required. Prior experience in Power BI and the DP-203 certification is a plus. Location: Remote - Bengaluru, Hyderabad, Delhi / NCR, Chennai, Pune, Kolkata, Ahmedabad, Mumbai
Posted 1 month ago
7.0 - 12.0 years
24 - 30 Lacs
Bengaluru
Work from Office
Expertise in Azure Data Factory (ADF) and Azure Synapse
Strong proficiency in SQL and data modeling
Experience with data lakes; Power BI preferred
MS SQL Server experience
Familiarity with CI/CD deployment for data pipelines
Posted 1 month ago
9.0 - 14.0 years
8 - 13 Lacs
Bengaluru
Work from Office
Utilizes software engineering principles to deploy and maintain fully automated data transformation pipelines that combine a wide variety of storage and computation technologies to handle a range of data types and volumes in support of data architecture design. A Senior Data Engineer designs and oversees the entire data infrastructure, data products, and data pipelines so that they are resilient to change, modular, flexible, scalable, reusable, and cost-effective.
Key Responsibilities:
- Oversee the entire data infrastructure to ensure scalability, operational efficiency, and resiliency.
- Mentor junior data engineers within the organization.
- Design, develop, and maintain data pipelines and ETL processes using Microsoft Azure services (e.g., Azure Data Factory, Azure Synapse, Azure Databricks, Microsoft Fabric).
- Utilize Azure data storage accounts for organizing and maintaining data pipeline outputs (e.g., Azure Data Lake Storage Gen2 and Azure Blob Storage).
- Collaborate with data scientists, data analysts, data architects, and other stakeholders to understand data requirements and deliver high-quality data solutions.
- Optimize data pipelines in the Azure environment for performance, scalability, and reliability.
- Ensure data quality and integrity through data validation techniques and frameworks.
- Develop and maintain documentation for data processes, configurations, and best practices.
- Monitor and troubleshoot data pipeline issues to ensure timely resolution.
- Stay current with industry trends and emerging technologies to ensure our data solutions remain cutting-edge.
- Manage the CI/CD process for deploying and maintaining data solutions.
Posted 1 month ago
6.0 - 8.0 years
6 - 15 Lacs
Hyderabad, Pune, Chennai
Work from Office
Experience: 6-8 years. Notice period: 15 days (max).
Posted 1 month ago
5.0 - 10.0 years
20 - 30 Lacs
Pune, Bengaluru
Hybrid
AI/ML & GenAI Development
• Proven experience in building and deploying AI/ML and Generative AI applications
• Hands-on experience with RAG and agentic AI based applications (see the sketch after this list)
• Solid understanding of deep learning and Natural Language Processing (NLP)
• Strong experience with Large Language Models (LLMs) and frameworks (e.g., LangChain, LlamaIndex)
• Expertise in machine learning frameworks such as TensorFlow, PyTorch, scikit-learn, or similar
• Expertise in building enterprise-grade, secure data ingestion for structured/unstructured data, including indexing, search, and advanced retrieval
Software Development & Cloud Platforms
• Strong proficiency in the Python programming language
• Hands-on experience with cloud platforms such as Azure and Databricks (preferred)
• Strong experience in building CI/CD pipeline templates for the AI software development lifecycle (SDLC), including MLOps, RAG pipelines, and data ingestion pipelines
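To make the RAG retrieval step above concrete, here is a minimal, hedged Python sketch. It uses TF-IDF cosine similarity (scikit-learn) as a stand-in for the embedding model and vector store that a production LangChain/LlamaIndex stack would provide, and the document snippets are invented for illustration.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical knowledge-base chunks; a real RAG app would index many documents.
documents = [
    "Azure Databricks runs Apache Spark workloads in the cloud.",
    "RAG augments an LLM prompt with retrieved context documents.",
    "CI/CD pipelines automate deployment of ML models.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)

def retrieve(query: str, top_k: int = 2) -> list[str]:
    # Rank documents by similarity to the query and return the best matches.
    scores = cosine_similarity(vectorizer.transform([query]), doc_vectors)[0]
    return [documents[i] for i in scores.argsort()[::-1][:top_k]]

# The retrieved chunks would then be prepended to the LLM prompt.
print(retrieve("How does RAG work with an LLM?"))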
Posted 1 month ago
2.0 - 7.0 years
8 - 13 Lacs
Bengaluru
Work from Office
Azure Platform Engineer (Databricks)
Platform Design
Define best practices for the end-to-end Databricks platform. Work with Databricks and internal teams on evaluation of new Databricks features (private preview/public preview). Hold ongoing discussions with Databricks product teams on product features.
Platform Infra
Create new Databricks workspaces (premium, standard, serverless) and clusters, including right-sizing. Drop unused workspaces.
Delta Sharing
Work with enterprise teams on connected data (data sharing).
User Management
Create new security groups and add/delete users. Assign Unity Catalog permissions to respective groups/teams. Manage the Quantum Collaboration platform sandbox for enterprise teams for ideation and innovation.
Troubleshooting Issues in Databricks
Investigate and diagnose performance issues or errors within Databricks. Review and analyze Databricks logs and error messages. Identify and address problems related to cluster configuration or job failures. Optimize Databricks notebooks and jobs for performance. Coordinate with Databricks support for unresolved or complex issues. Document common troubleshooting steps and solutions. Develop and test Databricks clusters to ensure stability and scalability.
Governance
Create dashboards to monitor job performance, cluster utilization, and cost. Design dashboards to cater to various user roles (e.g., data scientists, admins). Use Databricks APIs or integration with monitoring tools for up-to-date metrics (see the sketch below).
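As a small illustration of the governance duties above, the sketch below pulls cluster metadata with the official databricks-sdk package, the kind of feed a utilization or cost dashboard would consume. It assumes credentials are available via environment variables or a .databrickscfg profile; treat it as a sketch rather than a drop-in tool, since field names can vary by SDK version.

from databricks.sdk import WorkspaceClient

# WorkspaceClient picks up DATABRICKS_HOST/DATABRICKS_TOKEN from the environment.
w = WorkspaceClient()

# List every cluster with its state and size; feed this into a dashboard or report.
for cluster in w.clusters.list():
    print(cluster.cluster_name, cluster.state, cluster.num_workers)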
Posted 1 month ago
4.0 - 9.0 years
13 - 23 Lacs
Pune, Chennai, Bengaluru
Hybrid
Primary: Azure, Databricks, ADF, PySpark/Python
Secondary: Data warehouse, SAS/Alteryx
Must Have
• 4-15 years of IT experience in data warehousing and ETL
• Hands-on data experience with cloud technologies on Azure: ADF, Synapse, PySpark/Python
• Ability to understand design and source-to-target mapping (STTM) and create specification documents
• Flexibility to operate from client office locations
• Able to mentor and guide junior resources, as needed
Nice to Have
• Any relevant certifications
Posted 1 month ago
5.0 - 8.0 years
9 - 14 Lacs
Pune
Work from Office
Role Purpose
The purpose of the role is to support process delivery by ensuring daily performance of the Production Specialists, resolving technical escalations, and developing technical capability within the Production Specialists.
Do
- Oversee and support the process by reviewing daily transactions on performance parameters
- Review the performance dashboard and the scores for the team
- Support the team in improving performance parameters by providing technical support and process guidance
- Record, track, and document all queries received, problem-solving steps taken, and total successful and unsuccessful resolutions
- Ensure standard processes and procedures are followed to resolve all client queries
- Resolve client queries as per the SLAs defined in the contract
- Develop understanding of the process/product for the team members to facilitate better client interaction and troubleshooting
- Document and analyze call logs to spot the most frequent trends to prevent future problems
- Identify red flags and escalate serious client issues to the team leader in cases of untimely resolution
- Ensure all product information and disclosures are given to clients before and after the call/email requests
- Avoid legal challenges by monitoring compliance with service agreements
- Handle technical escalations through effective diagnosis and troubleshooting of client queries
- Manage and resolve technical roadblocks/escalations as per SLA and quality requirements; if unable to resolve an issue, escalate it to TA & SES in a timely manner
- Provide product support and resolution to clients by performing question diagnosis while guiding users through step-by-step solutions
- Troubleshoot all client queries in a user-friendly, courteous, and professional manner
- Offer alternative solutions to clients (where appropriate) with the objective of retaining customers' and clients' business
- Organize ideas and effectively communicate oral messages appropriate to listeners and situations
- Follow up and make scheduled callbacks to customers to record feedback and ensure compliance with contract SLAs
- Build people capability to ensure operational excellence and maintain superior customer service levels for the existing account/client
- Mentor and guide Production Specialists on improving technical knowledge
- Collate trainings to be conducted as triage to bridge the skill gaps identified through interviews with the Production Specialists
- Develop and conduct trainings (triages) within products for Production Specialists as per target; inform the client about the triages being conducted
- Undertake product trainings to stay current with product features, changes, and updates
- Enroll in product-specific and any other trainings per client requirements/recommendations
- Identify and document the most common problems and recommend appropriate resolutions to the team
- Update job knowledge by participating in self-learning opportunities and maintaining personal networks
Deliver
No. | Performance Parameter | Measure
1 | Process | No. of cases resolved per day, compliance to process and quality standards, meeting process-level SLAs, Pulse score, customer feedback, NSAT/ESAT
2 | Team Management | Productivity, efficiency, absenteeism
3 | Capability Development | Triages completed, technical test performance
Mandatory Skills: Azure Synapse Analytics. Experience: 5-8 years.
Posted 1 month ago
3.0 - 5.0 years
5 - 9 Lacs
Hyderabad
Work from Office
Role Purpose
The purpose of this role is to design, test, and maintain software programs for operating systems or applications that need to be deployed at a client end, and to ensure they meet 100% quality assurance parameters.
Do
1. Be instrumental in understanding the requirements and design of the product/software
- Develop software solutions by studying information needs, systems flow, data usage, and work processes
- Investigate problem areas throughout the software development life cycle
- Facilitate root cause analysis of system issues and the problem statement
- Identify ideas to improve system performance and impact availability
- Analyze client requirements and convert requirements to feasible design
- Collaborate with functional teams or systems analysts who carry out the detailed investigation into software requirements
- Confer with project managers to obtain information on software capabilities
2. Perform coding and ensure optimal software/module development
- Determine operational feasibility by evaluating analysis, problem definition, requirements, software development, and proposed software
- Develop and automate processes for software validation by setting up and designing test cases/scenarios/usage cases and executing them
- Modify software to fix errors, adapt it to new hardware, improve its performance, or upgrade interfaces
- Analyze information to recommend and plan the installation of new systems or modifications of an existing system
- Ensure that code is error-free, with no bugs or test failures
- Prepare reports on programming project specifications, activities, and status
- Ensure all issues are raised as per the norms defined for the project/program/account, with clear descriptions and replication patterns
- Compile timely, comprehensive, and accurate documentation and reports as requested
- Coordinate with the team on daily project status and progress and document it
- Provide feedback on usability and serviceability, trace results to quality risk, and report to concerned stakeholders
3. Status reporting and customer focus on an ongoing basis with respect to the project and its execution
- Capture all requirements and clarifications from the client for better-quality work
- Take feedback on a regular basis to ensure smooth and on-time delivery
- Participate in continuing education and training to remain current on best practices, learn new programming languages, and better assist other team members
- Consult with engineering staff to evaluate software-hardware interfaces and develop specifications and performance requirements
- Document and demonstrate solutions by developing documentation, flowcharts, layouts, diagrams, charts, code comments, and clear code
- Document all necessary details and reports formally for proper understanding of the software, from client proposal to implementation
- Ensure good quality of interaction with the customer w.r.t. e-mail content, fault report tracking, voice calls, business etiquette, etc.
- Respond to customer requests in a timely manner, with no instances of complaints either internally or externally
Deliver
No. | Performance Parameter | Measure
1 | Continuous integration, deployment & monitoring of software | 100% error-free onboarding & implementation, throughput %, adherence to the schedule/release plan
2 | Quality & CSAT | On-time delivery, manage software, troubleshoot queries, customer experience, completion of assigned certifications for skill upgradation
3 | MIS & Reporting | 100% on-time MIS & report generation
Mandatory Skills: Azure Data Factory. Experience: 3-5 years.
Posted 1 month ago
8.0 - 12.0 years
15 - 30 Lacs
Bengaluru
Work from Office
Purpose
As a Staff Infrastructure Engineer at LogixHealth, you will work with a globally distributed team of engineers to design and build cutting-edge solutions that directly improve the healthcare industry. You'll contribute to our fast-paced, collaborative environment and bring your expertise to continue delivering innovative technology solutions, while mentoring others.
Duties and Responsibilities
1. Lead and contribute to the creation of a cloud platform for delivering complex and challenging projects across software and data
2. Design and build cloud-native infrastructure using industry-leading practices and tools
3. Establish CI/CD processes, test frameworks, infrastructure-as-code tools, and monitoring/alerting (Git, Terraform, Azure DevOps / GitHub Actions / Jenkins, Azure Monitor / Datadog)
4. Ensure compute, network, storage, and security best practices for a highly available, reliable, secure, and cost-efficient solution
5. Adhere to the Code of Conduct and be familiar with all compliance policies and procedures stored in LogixGarden relevant to this position
Qualifications
To perform this job successfully, an individual must be able to perform each duty satisfactorily. The requirements listed below are representative of the knowledge, skills, and/or ability required. Reasonable accommodation may be made to enable individuals with disabilities to perform the duties.
Experience
1. 8+ years of infrastructure engineering experience
2. 3+ years in a senior, staff, or principal engineer role
3. Experience designing and building cloud-native solutions (Azure, AWS, Google Cloud Platform)
4. Experience with Infrastructure as Code tools (Terraform, Pulumi, cloud-specific IaC tools)
5. Experience with configuration management tools (Ansible, Chef, SaltStack)
6. Experience with containerization and orchestration technologies (Docker, Kubernetes)
7. Experience leading projects within a team and across teams
8. Azure experience preferred
9. Azure Databricks implementation experience preferred
10. Experience with one or more CNCF projects preferred
11. Experience designing and implementing infrastructure security and governance platforms adhering to compliance standards (HIPAA, SOC 2) preferred
Specific Job Knowledge, Skill and Ability
1. Possess a passion for mentoring and guiding others
2. Strong programming skills in Python, TypeScript, Golang, C#, or other languages (Bash, PowerShell)
3. Strong written and verbal communication skills
4. Expert knowledge in architecting, designing, and implementing infrastructure solutions to serve the needs of our data processes and software products
5. Ability to keep security, maintainability, and scalability in mind with the solutions built
6. Possess excellent interpersonal communication skills and an aptitude for continued learning
Posted 1 month ago
8.0 - 10.0 years
20 - 25 Lacs
Hyderabad, Pune, Chennai
Hybrid
Please note: notice period should be 0-15 days.
Primary Responsibilities
- Create and maintain data storage solutions including Azure SQL Database, Azure Data Lake, and Azure Blob Storage
- Design, implement, and maintain data pipelines for data ingestion, processing, and transformation in Azure
- Create data models for analytics purposes
- Create and maintain ETL (Extract, Transform, Load) operations using Azure Data Factory or comparable technologies
- Use Azure Data Factory and Databricks to assemble large, complex data sets
- Implement data validation and cleansing procedures to ensure the quality, integrity, and dependability of the data (a sketch follows after the skills list)
- Ensure data security and compliance
- Collaborate with data engineers and other stakeholders to understand requirements and translate them into scalable and reliable data platform architectures
Required skills: a blend of technical expertise, analytical problem-solving, and collaboration with cross-functional teams
- Azure DevOps
- Apache Spark, Python
- SQL proficiency
- Azure Databricks knowledge
- Big data technologies
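A minimal sketch of the validation and cleansing bullet above, in PySpark: drop duplicate keys, reject rows missing a required field, and trim string columns. The storage paths and column names are hypothetical.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("cleanse").getOrCreate()

# Hypothetical ADLS paths and schema.
raw = spark.read.parquet("abfss://bronze@account.dfs.core.windows.net/orders")

cleaned = (
    raw.dropDuplicates(["order_id"])            # remove duplicate keys
       .filter(F.col("order_id").isNotNull())   # enforce the required field
       .withColumn("customer_name", F.trim("customer_name"))
)

cleaned.write.mode("overwrite").parquet("abfss://silver@account.dfs.core.windows.net/orders")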
Posted 1 month ago
8.0 - 13.0 years
15 - 25 Lacs
Hyderabad
Hybrid
Role: Data Engineer
Location: Hyderabad
Role Overview: The Azure Big Data Engineer is a hands-on technical role focused on designing, implementing, and maintaining robust, scalable data pipelines and storage solutions on the Azure platform. The engineer plays a critical role in enabling data-driven decision-making across the organization by building and managing high-performance data infrastructure.
Key Responsibilities:
Data Engineering & Pipeline Development
- Develop, test, and maintain ETL/ELT pipelines using tools such as Azure Data Factory, Azure Databricks, Synapse Analytics, and Microsoft Fabric.
- Ingest, wrangle, transform, and join data from various sources, ensuring data quality and consistency.
- Implement data storage solutions using Azure Data Lake Storage, Lakehouse, Delta Lake, and data warehousing technologies (e.g., Synapse, Azure SQL Database).
Performance Optimization & Monitoring
- Optimize data pipelines for cost, performance, scalability, and reliability.
- Monitor data flows and proactively troubleshoot pipeline and performance issues.
- Apply best practices for performance tuning and infrastructure alignment.
Security & Compliance
- Implement data security measures including RBAC, data encryption, and compliance with organizational policies and regulatory standards.
- Address data governance requirements and audit readiness.
Collaboration & Stakeholder Communication
- Work closely with data scientists, architects, and analysts to gather requirements and deliver solutions aligned with business needs.
- Translate business requirements into technical designs and documentation.
- Present architecture and design options to stakeholders and conduct technical demos.
Expected Outcomes:
Delivery & Execution
- Code pipelines and solutions following best practices in scalability, performance, and maintainability.
- Complete documentation for architecture, source-target mappings, test cases, and performance benchmarks.
- Perform unit testing, debugging, and performance validation of data pipelines.
Quality & Process Adherence
- Ensure adherence to coding standards, project timelines, SLAs, and compliance protocols.
- Quickly resolve production bugs and reduce recurring issues through RCA.
- Improve pipeline efficiency (e.g., faster run times, reduced costs).
Knowledge & Certifications
- Maintain up-to-date technical certifications and training.
- Contribute to reusable documentation, knowledge bases, and process improvements.
Skills & Expertise:
Technical Skills
- Programming: Proficiency in SQL, Python, and PySpark; familiarity with Scala.
- ETL Tools: Azure Data Factory, Azure Databricks, Microsoft Fabric, Synapse Analytics, Informatica, Glue, DataProc.
- Cloud Platforms: Expertise in Azure (including ADLS, Key Vault, Azure SQL, Synapse); familiarity with AWS or GCP a plus.
- Data Warehousing: Snowflake, BigQuery, Delta Lake, Lakehouse.
- Data Modeling: Strong understanding of dimensional modeling, schema design, and optimization for large datasets.
- Security: Knowledge of data security and compliance best practices.
Soft Skills
- Strong analytical and troubleshooting skills.
- Excellent communication and collaboration capabilities.
- Ability to estimate and manage workload effectively.
- Customer-oriented approach with a focus on value delivery.
Preferred Qualifications:
- Experience with Microsoft Fabric and modern data lakehouse architectures.
- Exposure to ML/AI concepts and integration with data pipelines.
- Azure certifications (e.g., Azure Data Engineer Associate, Azure Solutions Architect).
Performance Metrics (KPIs):
- Adherence to engineering standards and timelines
- Pipeline performance (run time, resource usage)
- Reduction in post-release defects and production incidents
- Time to resolution for pipeline issues
- Completion of certifications and mandatory training
- Number of reusable assets created/shared
- Compliance with data governance and security policies
Certifications:
- Azure Data Engineer Associate (preferred)
- Relevant domain certifications in data engineering, cloud, or big data technologies
Tools & Technologies:
- ETL & Orchestration: Azure Data Factory, Databricks, Apache Airflow, Glue, Talend
- Cloud Services: Azure Synapse, ADLS, Azure SQL, Key Vault, Microsoft Fabric
- Programming: Python, SQL, PySpark, Scala (optional)
- Data Platforms: Snowflake, BigQuery, Delta Lake, Azure Lakehouse
Posted 1 month ago
5.0 - 9.0 years
10 - 20 Lacs
Kolkata, Hyderabad, Bengaluru
Hybrid
Role: Azure Databricks Data Engineer
Location: Hyderabad, Bangalore, Chennai, Mumbai, Pune, Kolkata, Gurgaon
Experience: 5-8 years
Work Mode: Hybrid
Job Summary: We are seeking an experienced Azure Databricks Data Engineer who will play a key role in designing, building, and maintaining scalable data solutions using Databricks on Azure. This role demands expertise in cloud services, big data engineering, and data architecture to drive business insights through advanced analytics solutions.
Key Responsibilities:
- Design and implement scalable, high-performance data solutions using Databricks on Azure to support business analytics and reporting needs.
- Collaborate with cross-functional teams (data science, BI, product, infrastructure) to integrate big data solutions with the existing IT ecosystem.
- Develop, optimize, and manage ETL/ELT pipelines, data lakes, and data warehouses, ensuring efficiency and robustness.
- Perform data modeling and validation, and ensure data accuracy, integrity, and reliability across platforms.
- Provide subject matter expertise on data storage solutions and manage large-scale data ingestion and transformation workflows.
- Implement CI/CD practices using tools such as Azure DevOps, Jenkins, TFS, and PowerShell, and automate deployment pipelines.
- Ensure adherence to data security, privacy policies, and compliance requirements.
- Mentor and guide junior engineers, lead project segments, and contribute to architectural reviews and standards.
Must-Have Skills:
- Strong hands-on experience with Azure Databricks and associated Azure cloud services (e.g., Data Lake, Synapse, Blob Storage)
- Proficiency in Spark, PySpark, and SQL for data transformation and analytics
- Solid understanding of big data architecture and distributed computing principles
- Experience in CI/CD automation using Azure DevOps, Jenkins, TFS, or equivalent tools
- Ability to design and manage large-scale data pipelines and complex data workflows
- Strong communication, collaboration, and problem-solving skills
Good to Have:
- Exposure to machine learning workflows on Databricks
- Familiarity with Delta Lake, Azure Machine Learning, or Power BI integration
- Certifications in Azure or Databricks
Posted 1 month ago
4.0 - 6.0 years
12 - 14 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
- Strong hands-on experience with Azure Databricks, PySpark, and ADF
- Advanced expertise in Azure SQL DB, Synapse Analytics, and Azure Data Lake
- Familiar with Azure Analysis Services, Azure SQL, and CI/CD (Azure DevOps)
- Proficient in data modeling, SQL Server best practices, and BI/Data Warehousing architecture
- Agile methodologies: ADO, Scrum, Kanban, Lean
- Collaborate with business/technical teams to design scalable data solutions
- Architect and implement data pipelines and models
- Provide technical leadership, code reviews, and best practice guidance
- Support the end-to-end lifecycle: estimation, design, development, deployment
- Risk/issue management and solution recommendation
Location: Remote, Delhi NCR, Bangalore, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad
Posted 1 month ago
3.0 - 8.0 years
11 - 21 Lacs
Hyderabad
Work from Office
About Position: We are conducting an in-person hiring drive on 28th June 2025 for Azure Data Engineers in Hyderabad.
In-Person Drive Location: Persistent Systems (6th Floor), Gate 11, Salarpuria Sattva Argus, Salarpuria Sattva Knowledge City, beside T-Hub, Shilpa Gram Craft Village, Madhapur, Rai Durg, Hyderabad, Telangana 500081
We are hiring Azure Data Engineers with skills in Azure Databricks, Azure Data Factory, PySpark, and SQL.
Role: Azure Data Engineer
Location: Hyderabad
Experience: 3-8 years
Job Type: Full-Time Employment
What You'll Do:
- Design and implement robust ETL/ELT pipelines using PySpark on Databricks (a minimal sketch appears at the end of this posting).
- Collaborate with data scientists, analysts, and business stakeholders to understand data requirements.
- Optimize data workflows for performance and scalability.
- Manage and monitor data pipelines in production environments.
- Ensure data quality, integrity, and security across all stages of data processing.
- Integrate data from various sources including APIs, databases, and cloud storage.
- Develop reusable components and frameworks for data processing.
- Document technical solutions and maintain code repositories.
Expertise You'll Bring:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 2+ years of experience in data engineering or software development.
- Strong proficiency in PySpark and Apache Spark.
- Hands-on experience with the Databricks platform.
- Proficiency in SQL and working with relational databases.
- Experience with cloud platforms (Azure, AWS, or GCP).
- Familiarity with Delta Lake, MLflow, and other Databricks ecosystem tools.
- Strong problem-solving and communication skills.
Benefits:
- Competitive salary and benefits package
- Culture focused on talent development, with quarterly promotion cycles and company-sponsored higher education and certifications
- Opportunity to work with cutting-edge technologies
- Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards
- Annual health check-ups
- Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents
Inclusive Environment: Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds. We offer hybrid work options and flexible working hours to accommodate various needs and preferences. Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities. If you are a person with disabilities and have specific requirements, please inform us during the application process or at any time during your employment. We are committed to creating an inclusive environment where all employees can thrive.
Our company fosters a values-driven and people-centric work environment that enables our employees to:
- Accelerate growth, both professionally and personally
- Impact the world in powerful, positive ways, using the latest technologies
- Enjoy collaborative innovation, with diversity and work-life wellbeing at the core
- Unlock global opportunities to work and learn with the industry's best
Let's unleash your full potential at Persistent. "Persistent is an Equal Opportunity Employer and prohibits discrimination and harassment of any kind."
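The sketch referenced in the first bullet above: a minimal, hedged example of the ETL/ELT pattern on Databricks, reading CSV from cloud storage, applying simple transformations, and loading a Delta table. All paths and table names are illustrative only.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl_sketch").getOrCreate()

# Extract: hypothetical landing-zone CSV files.
source = (spark.read.option("header", True)
          .csv("abfss://landing@account.dfs.core.windows.net/sales/"))

# Transform: type the columns and drop bad records.
transformed = (source
               .withColumn("sale_date", F.to_date("sale_date"))
               .withColumn("amount", F.col("amount").cast("double"))
               .filter(F.col("amount") > 0))

# Load: append into a governed Delta table.
transformed.write.format("delta").mode("append").saveAsTable("curated.sales")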
Posted 1 month ago
6.0 - 10.0 years
1 - 1 Lacs
Bengaluru
Remote
We are looking for a highly skilled Senior ETL Consultant with strong expertise in Informatica Intelligent Data Management Cloud (IDMC) components such as IICS, CDI, CDQ, IDQ, CAI, along with proven experience in Databricks.
Posted 1 month ago
6.0 - 10.0 years
15 - 25 Lacs
Kolkata, Pune, Delhi / NCR
Work from Office
About Client: Hiring for one of the most prestigious multinational corporations.
Job Title: Azure Databricks
Qualification: Any Graduate or above
Experience: 6 to 10 years
Location: PAN India
Key Responsibilities:
- Design and build scalable and robust data pipelines using Azure Databricks, PySpark, and Spark SQL.
- Integrate data from various structured and unstructured data sources using Azure Data Factory, ADLS, Azure Synapse, etc.
- Develop and maintain ETL/ELT processes for ingestion, transformation, and storage of data.
- Collaborate with data scientists, analysts, and other engineers to deliver data products and solutions.
- Monitor, troubleshoot, and optimize existing pipelines for performance and reliability.
- Ensure data quality, governance, and security compliance in all solutions.
- Participate in architectural decisions and cloud data solutioning.
Required Skills:
- 5+ years of experience in data engineering or related fields.
- Strong hands-on experience with Azure Databricks and Apache Spark.
- Proficiency in Python (PySpark), SQL, and performance tuning techniques.
- Experience with Azure Data Factory, Azure Data Lake Storage (ADLS), and Azure Synapse Analytics.
- Solid understanding of data modeling, data warehousing, and data lakes.
- Familiarity with DevOps practices, CI/CD pipelines, and version control (e.g., Git).
Notice period: 30, 60, or 90 days
Mode of Work: WFO (Work From Office)
Thanks & Regards,
SWETHA
Black and White Business Solutions Pvt. Ltd.
Bangalore, Karnataka, India.
Contact Number: 8067432433
rathy@blackwhite.in | www.blackwhite.in
Posted 1 month ago
6.0 - 8.0 years
9 - 18 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
Role & responsibilities
Mandatory skill: MLOps with Azure Databricks / DevOps
We are looking for someone with exposure to the Models/MLOps ecosystem, including Model Life Cycle Management. The primary responsibilities are engaging with stakeholders on requirements elaboration, breaking requirements into stories for the pods by engaging with architects/leads on designs, participating in UAT and creating user-scenario testing, and creating product documentation describing features, capabilities, etc. We are not looking for a Project Manager who will simply track things.
Posted 1 month ago
0.0 - 2.0 years
5 - 15 Lacs
Bengaluru
Work from Office
Job Title: Data Engineer
Experience: 12 to 20 months
Work Mode: Work from Office
Locations: Bangalore, Chennai, Kolkata, Pune, Gurgaon
About Tredence
Tredence focuses on last-mile delivery of powerful insights into profitable actions by uniting its strengths in business analytics, data science, and software engineering. The largest companies across industries are engaging with us and deploying their prediction and optimization solutions at scale. Headquartered in the San Francisco Bay Area, we serve clients in the US, Canada, Europe, and Southeast Asia. Tredence is an equal opportunity employer. We celebrate and support diversity and are committed to creating an inclusive environment for all employees. Visit our website for more details:
Role Overview
We are seeking a driven and hands-on Data Engineer with 12 to 20 months of experience to support modern data pipeline development and transformation initiatives. The role requires solid technical skills in SQL, Python, and PySpark, with exposure to cloud platforms such as Azure or GCP. As a Data Engineer at Tredence, you will work on ingesting, processing, and modeling large-scale data, implementing scalable data pipelines, and applying foundational data warehousing principles. This role also includes direct collaboration with cross-functional teams and client stakeholders.
Key Responsibilities
- Develop robust and scalable data pipelines using PySpark on cloud platforms like Azure Databricks or GCP Dataflow.
- Write optimized SQL queries for data transformation, analysis, and validation.
- Implement and support data warehouse models and principles, including fact and dimension modeling, star and snowflake schemas, slowly changing dimensions (SCD), change data capture (CDC), and the medallion architecture (a CDC-style sketch appears at the end of this posting).
- Monitor, troubleshoot, and improve pipeline performance and data quality.
- Work with teams across analytics, business, and IT functions to deliver data-driven solutions.
- Communicate technical updates and contribute to sprint-level delivery.
Mandatory Skills
- Strong hands-on experience with SQL and Python
- Working knowledge of PySpark for data transformation
- Exposure to at least one cloud platform: Azure or GCP
- Good understanding of data engineering and warehousing fundamentals
- Excellent debugging and problem-solving skills
- Strong written and verbal communication skills
Preferred Skills
- Experience working with Databricks Community Edition or the enterprise version
- Familiarity with data orchestration tools like Airflow or Azure Data Factory
- Exposure to CI/CD processes and version control (e.g., Git)
- Understanding of Agile/Scrum methodology and collaborative development
- Basic knowledge of handling structured and semi-structured data (JSON, Parquet, etc.)
Required Skills: Azure Databricks / GCP, Python, SQL, PySpark
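To illustrate the CDC principle listed in the responsibilities above, here is a hedged sketch of applying change records to a Delta table with MERGE (an upsert). It assumes a Databricks or delta-spark environment, and the table and column names are invented; a full SCD Type 2 flow would additionally close out prior rows with effective dates rather than update in place.

from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.appName("cdc_merge").getOrCreate()

target = DeltaTable.forName(spark, "silver.customers")    # hypothetical dimension table
changes = spark.read.table("bronze.customer_changes")     # hypothetical CDC feed

(target.alias("t")
    .merge(changes.alias("c"), "t.customer_id = c.customer_id")
    .whenMatchedUpdateAll()       # apply updates from the change feed
    .whenNotMatchedInsertAll()    # insert brand-new customers
    .execute())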
Posted 1 month ago
5.0 - 10.0 years
15 - 27 Lacs
Bengaluru
Work from Office
Job Summary: We are seeking a skilled Azure Databricks Developer with strong Terraform expertise to join our data engineering / cloud team. This role involves building, automating, and maintaining scalable data pipelines and infrastructure in the Azure cloud environment using Databricks and Infrastructure as Code (IaC) practices. The ideal candidate has hands-on experience with data processing in Databricks and cloud provisioning using Terraform.
Key Responsibilities:
- Develop and optimize data pipelines using Azure Databricks (Spark, Delta Lake, notebooks, jobs)
- Design and automate infrastructure provisioning on Azure using Terraform
- Collaborate with data engineers, analysts, and cloud architects to integrate Databricks with other Azure services (e.g., Data Lake, Synapse, Key Vault)
- Maintain CI/CD pipelines for deploying Databricks and Terraform configurations
- Apply best practices for security, scalability, cost optimization, and performance
- Monitor and troubleshoot jobs and infrastructure components
- Document architecture, processes, and configuration standards
Required Skills & Experience:
- 5+ years of experience in Azure Databricks, including PySpark, notebooks, cluster management, and Delta Lake
- Strong hands-on experience in Terraform for managing cloud infrastructure (especially Azure)
- Proficiency in Python and SQL
- Experience with Azure services: Azure Data Lake, Azure Data Factory, Azure Key Vault, Azure DevOps
- Familiarity with CI/CD pipelines and version control (e.g., Git)
- Good understanding of data engineering concepts and cloud-native architecture
Preferred Qualifications:
- Azure certifications (e.g., DP-203, AZ-104, or AZ-400)
- Knowledge of the Databricks CLI, REST API, and workspace automation
- Experience with monitoring and alerting for data pipelines and cloud resources
- Understanding of cost management for Databricks and Azure services
Posted 1 month ago
6.0 - 9.0 years
7 - 11 Lacs
Pune
Work from Office
Job Title: Azure Data Factory Engineer
Location State: Maharashtra
Location City: Pune
Experience Required: 6 to 8 year(s)
CTC Range: 7 to 11 LPA
Shift: Day Shift
Work Mode: Onsite
Position Type: C2H
Openings: 2
Company Name: VARITE INDIA PRIVATE LIMITED
About The Client: The client is an Indian multinational technology company specializing in information technology services and consulting. Headquartered in Mumbai, it is part of the Tata Group and operates in 150 locations across 46 countries.
About The Job: A minimum of 5 years' experience with large SQL data marts and expert relational database experience. The candidate should demonstrate the ability to navigate massive volumes of data to deliver effective and efficient data extraction, design, load, and reporting solutions to business partners. Experience in troubleshooting and supporting large databases and testing activities; identifying, reporting, and managing database security issues and user access/management; designing database backup, archiving, and storage; performance tuning; ETL importing of large volumes of data extracted from multiple systems; and capacity planning.
Essential Job Functions: Strong knowledge of Extraction, Transformation, and Loading (ETL) processes using frameworks like Azure Data Factory, Synapse, or Databricks; establishing cloud connectivity between different systems like ADLS, ADF, Synapse, Databricks, etc.
Qualifications:
Skill Required: Digital: PySpark, Azure Data Factory
How to Apply: Interested candidates are invited to submit their resume using the apply online button on this job post.
About VARITE: VARITE is a global staffing and IT consulting company providing technical consulting and team augmentation services to Fortune 500 companies in the USA, UK, Canada, and India. VARITE is currently a primary and direct vendor to leading corporations in the verticals of networking, cloud infrastructure, hardware and software, digital marketing and media solutions, clinical diagnostics, utilities, gaming and entertainment, and financial services.
Equal Opportunity Employer: VARITE is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. We do not discriminate on the basis of race, color, religion, sex, sexual orientation, gender identity or expression, national origin, age, marital status, veteran status, or disability status.
Unlock Rewards: Refer candidates and earn. If you're not available or interested in this opportunity, please pass this along to anyone in your network who might be a good fit and interested in our open positions. VARITE offers a candidate referral program, where you'll receive a one-time referral bonus based on the following scale if the referred candidate completes a three-month assignment with VARITE.
Experience Required - Referral Bonus
0-2 yrs - INR 5,000
2-6 yrs - INR 7,500
6+ yrs - INR 10,000
Posted 1 month ago
10.0 - 12.0 years
15 - 18 Lacs
Pune, Mumbai (All Areas)
Hybrid
Please note: notice period should be 0-15 days.
- High proficiency and 8-10 years of experience in designing/developing data analytics and data warehouse solutions with Azure Data Factory (ADF) and Azure Databricks
- Experience in designing large data distribution, integration with service-oriented architecture and/or data warehouse solutions, and Data Lake solutions using Azure Databricks with large, multi-format data
- Ability to translate a working solution into an implementable working package using the Azure platform
- Good understanding of Azure Storage Gen2
- Hands-on experience with the Azure stack (minimum 5 years): Azure Databricks, Azure Data Factory, Azure DevOps (candidate needs a basic understanding of CI/CD and Kubernetes)
- Proficient coding experience using Spark (Scala/Python) and T-SQL
- Understanding of the services related to Azure Analytics, Azure SQL, Azure Function App, and Logic App
- Good to have: knowledge of Kafka streaming and Azure infrastructure
- A proven track record of work experience in cloud deployments (MS Azure preferred) in an agile SDLC environment, leveraging modern programming languages, DevOps, SQL and NoSQL databases, and test-driven development
- Understanding of the full stack (backend, frontend, network, OS, middleware, etc.)
- You hold a relevant bachelor's degree or equivalent
- You are a strong communicator, fluent in English, from making presentations to technical writing, able to understand and explain requirements and solutions clearly, and to help document solutions, programming changes, problems, and resolutions
- Should be able to demonstrate a constant and quick learning ability and handle pressure situations without compromising on quality
- Well organized and able to manage multiple projects in a fast-paced, demanding environment
- Attention to detail and quality; excellent problem-solving and communication skills
- Ability and willingness to learn new tools and applications
Posted 1 month ago
5.0 - 6.0 years
13 - 17 Lacs
Mumbai, Hyderabad
Work from Office
Project description: The project is intended to migrate a global application covering multiple workflows of a top insurance company into Azure, developing a cloud-native application from scratch. The application serves global and North American markets.
Responsibilities
- Drive the development team towards the goal by integrating skills and experiences.
- Design, develop, test, deploy, maintain, and improve the software.
- Work with QA, product management, and operations in an Agile environment.
- Develop and support data-driven product decisions in a high-energy, high-impact team.
- Develop features that will drive our business through real-time feedback loops.
Skills
Must have: 5 to 6 years of hands-on Azure development expertise in Azure App Services, Azure WebJobs, Azure Functions (see the sketch below), Azure Logic Apps, ADF, Key Vault, and Azure Connectors
Nice to have: .NET experience
Languages: English, C1 Advanced
Seniority: Senior
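Since the stack above centers on Azure Functions, here is a minimal, hedged sketch of an HTTP-triggered function using the Python v2 programming model (the azure-functions package); the route name and the policy-lookup logic are invented for illustration.

import azure.functions as func

app = func.FunctionApp()

@app.route(route="policy-status", auth_level=func.AuthLevel.FUNCTION)
def policy_status(req: func.HttpRequest) -> func.HttpResponse:
    policy_id = req.params.get("id")
    if not policy_id:
        return func.HttpResponse("missing id", status_code=400)
    # A real function would look the policy up in a data store.
    return func.HttpResponse(f"policy {policy_id}: active", status_code=200)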
Posted 1 month ago
6.0 - 9.0 years
8 - 11 Lacs
Chennai
Work from Office
About the job:
Role: Microsoft Fabric Data Engineer
Experience: 6+ years as an Azure Data Engineer, including at least one end-to-end implementation in Microsoft Fabric.
Responsibilities:
- Lead the design and implementation of Microsoft Fabric-centric data platforms and data warehouses.
- Develop and optimize ETL/ELT processes within the Microsoft Azure ecosystem, effectively utilizing relevant Fabric solutions.
- Ensure data integrity, quality, and governance throughout the Microsoft Fabric environment.
- Collaborate with stakeholders to translate business needs into actionable data solutions.
- Troubleshoot and optimize existing Fabric implementations for enhanced performance.
Skills:
- Solid foundational knowledge of data warehousing, ETL/ELT processes, and data modeling (dimensional, normalized).
- Design and implement scalable and efficient data pipelines using Data Factory (Data Pipeline, Dataflow Gen2, etc.) in Fabric, PySpark notebooks, Spark SQL, and Python (see the Spark SQL sketch following this posting). This includes data ingestion, data transformation, and data loading processes.
- Experience ingesting data from SAP systems like SAP ECC/S4HANA/SAP BW will be a plus.
- Nice to have: the ability to develop dashboards or reports using tools like Power BI.
Coding Fluency:
- Proficiency in SQL, Python, or other languages for data scripting, transformation, and automation.
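The Spark SQL sketch referenced above: a hedged example of the kind of aggregation cell one might run in a Fabric or Databricks notebook, writing the result to a Delta table. The table names are invented.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("fabric_sql").getOrCreate()

# Aggregate a curated table with Spark SQL; names are illustrative.
daily = spark.sql("""
    SELECT order_date, SUM(amount) AS total_amount
    FROM curated.sales
    GROUP BY order_date
""")

daily.write.format("delta").mode("overwrite").saveAsTable("gold.daily_sales")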
Posted 1 month ago
6.0 - 10.0 years
8 - 12 Lacs
Bengaluru
Work from Office
The Azure Databricks Engineer plays a critical role in establishing and maintaining an efficient data ecosystem within an organization. This position is integral to the development of data solutions leveraging the capabilities of Microsoft Azure Databricks. The engineer will work closely with data scientists and analytics teams to facilitate the transformation of raw data into actionable insights. With increasing reliance on big data technologies and cloud-based solutions, having an expert on board is vital for driving data-driven decision-making processes. The Azure Databricks Engineer will also be responsible for optimizing data workflows, ensuring data quality, and deploying scalable data solutions that align with organizational goals. This role requires not only technical expertise in handling large volumes of data but also the ability to collaborate across various functional teams to enhance operational efficiency.
Responsibilities:
- Design and implement scalable data pipelines using Azure Databricks.
- Develop ETL processes to efficiently extract, transform, and load data.
- Collaborate with data scientists and analysts to define and refine data requirements.
- Optimize Spark jobs for performance and efficiency.
- Monitor and troubleshoot production workflows and jobs.
- Implement data quality checks and validation processes.
- Create and maintain technical documentation related to data architecture.
- Conduct code reviews to ensure best practices are followed.
- Work on integrating data from various sources including databases, APIs, and third-party services.
- Utilize SQL and Python for data manipulation and analysis.
- Collaborate with DevOps teams to deploy and maintain data solutions.
- Stay updated with the latest trends and updates in Azure Databricks and related technologies.
- Facilitate data visualization initiatives for better data-driven insights.
- Provide training and support to team members on data tools and practices.
- Participate in cross-functional projects to enhance data sharing and access.
Requirements:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Minimum of 6 years of experience in data engineering or a related domain.
- Strong expertise in Azure Databricks and data lake concepts.
- Proficiency with SQL, Python, and Spark.
- Solid understanding of data warehousing concepts.
- Experience with ETL tools and frameworks.
- Familiarity with cloud platforms such as Azure, AWS, or Google Cloud.
- Excellent problem-solving and analytical skills.
- Ability to work collaboratively in a diverse team environment.
- Experience with data visualization tools such as Power BI or Tableau.
- Strong communication skills with the ability to convey technical concepts to non-technical stakeholders.
- Knowledge of data governance and data quality best practices.
- Hands-on experience with big data technologies and frameworks.
- A relevant certification in Azure is a plus.
- Ability to adapt to changing technologies and evolving business requirements.
Posted 1 month ago