9.0 - 12.0 years
15 - 30 Lacs
Pune, Bengaluru
Hybrid
Role & responsibilities

Azure Data Engineer with Databricks (9+ Years)
Experience: 9+ Years
Location: Pune, Hyderabad (Preferred)

Job Description:
Experience in performing design, development & deployment using Azure services (Data Factory, Azure Data Lake Storage, Databricks, PySpark, SQL).
Develop and maintain scalable data pipelines and build out new data source integrations to support continuing increases in data volume and complexity.
Experience in creating the Technical Specification Design and Application Interface Design.
File processing across XML, CSV, Excel, ORC, and Parquet formats (a short ingestion sketch follows below).
Develop batch processing, streaming, and integration solutions, and process structured and non-structured data.
Good to have: experience with ETL development both on-premises and in the cloud using SSIS, Data Factory, and related Microsoft and other ETL technologies (Informatica preferred).
Demonstrated in-depth skills with Azure Data Factory, Azure Databricks, PySpark, and ADLS (must have), with the ability to configure and administer all aspects of Azure SQL DB.
Collaborate and engage with BI & analytics and business teams.
Deep understanding of the operational dependencies of applications, networks, systems, security, and policy, both on-premise and in the cloud: VMs, networking, VPN (ExpressRoute), Active Directory, storage (Blob, etc.).

If interested, kindly share your updated CV at Himanshu.mehra@thehrsolutions.in
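To make the file-format requirement concrete, here is a minimal PySpark sketch of multi-format ingestion; the ADLS container names, paths, and columns are hypothetical, and only the CSV and Parquet cases from the list are shown (XML and Excel need extra connector libraries):

```python
# Hedged sketch: read CSV and Parquet feeds from ADLS Gen2, align schemas, and land
# the combined result as partitioned Parquet. All paths and column names are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("multi-format-ingest").getOrCreate()

# CSV source with header; schema inference is fine for a sketch, pin schemas in production
csv_df = (spark.read
          .option("header", "true")
          .option("inferSchema", "true")
          .csv("abfss://raw@mydatalake.dfs.core.windows.net/sales/csv/"))

# Parquet source: the schema travels with the files
parquet_df = spark.read.parquet("abfss://raw@mydatalake.dfs.core.windows.net/sales/parquet/")

# Align the two feeds on a common column set and union them by name
combined = csv_df.select("order_id", "amount", "order_date").unionByName(
    parquet_df.select("order_id", "amount", "order_date"))

# Stamp each row with a load timestamp and write out, partitioned by date
(combined.withColumn("_loaded_at", F.current_timestamp())
 .write.mode("append")
 .partitionBy("order_date")
 .parquet("abfss://curated@mydatalake.dfs.core.windows.net/sales/"))
```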
Posted 1 week ago
3.0 - 8.0 years
6 - 15 Lacs
Ahmedabad
Work from Office
Job Description: As an ETL Developer, you will be responsible for designing, building, and maintaining ETL pipelines using the MSBI stack, Azure Data Factory (ADF), and Fabric. You will work closely with data engineers, analysts, and other stakeholders to ensure data is accessible, reliable, and processed efficiently.

Key Responsibilities:
Design, develop, and deploy ETL pipelines using ADF and Fabric.
Collaborate with data engineers and analysts to understand data requirements and translate them into efficient ETL processes.
Optimize data pipelines for performance, scalability, and robustness.
Integrate data from various sources, including S3, relational databases, and APIs.
Implement data validation and error handling mechanisms to ensure data quality (see the sketch below).
Monitor and troubleshoot ETL jobs to ensure data accuracy and pipeline reliability.
Maintain and update existing data pipelines as data sources and requirements evolve.
Document ETL processes, data models, and pipeline configurations.

Qualifications:
Experience: 3+ years of experience in ETL development, with a focus on ADF, the MSBI stack, SQL, Power BI, and Fabric.
Technical Skills: Strong expertise in ADF, the MSBI stack, SQL, and Power BI. Proficiency in programming languages such as Python or Scala. Hands-on experience with ADF, Fabric, Power BI, and MSBI. Solid understanding of data warehousing concepts, data modeling, and ETL best practices. Familiarity with orchestration tools like Apache Airflow is a plus.
Data Integration: Experience with integrating data from diverse sources, including relational databases, APIs, and flat files.
Problem-Solving: Strong analytical and problem-solving skills with the ability to troubleshoot complex ETL issues.
Communication: Excellent communication skills, with the ability to work collaboratively with cross-functional teams.
Education: Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent work experience.

Nice to Have: Experience with data lakes and big data processing. Knowledge of data governance and security practices in a cloud environment.
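As a concrete illustration of the data validation and error handling item, here is a hedged PySpark sketch; the paths, the quality rules, and the 5% reject threshold are assumptions, not taken from the posting:

```python
# Hedged sketch: split incoming rows into valid and rejected sets, quarantine the bad
# rows, and fail the load if the reject rate crosses a threshold.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-validation").getOrCreate()
df = spark.read.parquet("/mnt/staging/orders/")   # illustrative staging path

# Simple row-level quality rules: required keys present, non-negative amounts
rules = (F.col("order_id").isNotNull()
         & (F.col("amount") >= 0)
         & F.col("customer_id").isNotNull())

valid = df.filter(rules)
rejected = df.filter(~rules)

# Fail fast if too much of the batch is bad, otherwise quarantine the rejects
total, bad = df.count(), rejected.count()
if total > 0 and bad / total > 0.05:
    raise ValueError(f"Reject rate {bad}/{total} exceeds 5% threshold; aborting load")

rejected.write.mode("append").parquet("/mnt/quarantine/orders/")
valid.write.mode("append").parquet("/mnt/curated/orders/")
```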
Posted 1 week ago
5.0 - 10.0 years
14 - 24 Lacs
Hyderabad, Bengaluru
Hybrid
Role & responsibilities
• 5-12 years of experience in Databricks and cloud infrastructure engineering, preferably with Azure.
• Strong hands-on experience writing Infrastructure-as-Code using Terraform; experience with ARM templates or CloudFormation is a plus.
• Practical knowledge of provisioning and managing Databricks environments and associated cloud resources.
• Familiarity with Medallion architecture and data lakehouse concepts.
• Experience with CI/CD pipeline creation and automation tools such as Azure DevOps, Jenkins, or GitHub Actions.
• Solid understanding of cloud networking, storage, security, and identity management.
• Proficiency in scripting languages such as Python, Bash, or PowerShell.
• Strong collaboration and communication skills to work across cross-functional teams.
Posted 1 week ago
4.0 - 7.0 years
8 - 12 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Role & responsibilities: We are looking for an immediate joiner who can join within 30 days.
Posted 1 week ago
5.0 - 10.0 years
20 - 35 Lacs
Pune
Hybrid
Our client is a global IT service & consulting organization.

Data Software Engineer
Location: Pune
Notice period: Immediate to 60 days
F2F interview on Sunday, 27th July, at the Pune location
Experience: 5-12 years
Skills: Python, Spark, Azure Databricks/GCP/AWS

Data Software Engineer - Spark, Python, (AWS, Kafka or Azure Databricks or GCP)

Job Description:
5-12 years of experience in Big Data & data-related technology
Expert-level understanding of distributed computing principles
Expert-level knowledge of and experience in Apache Spark
Hands-on programming with Python
Proficiency with Hadoop v2, MapReduce, HDFS, Sqoop
Experience building stream-processing systems using technologies such as Apache Storm or Spark Streaming (see the sketch below)
Experience with messaging systems such as Kafka or RabbitMQ
Good understanding of Big Data querying tools such as Hive and Impala
Experience with integration of data from multiple data sources such as RDBMS (SQL Server, Oracle), ERP, and files
Good understanding of SQL queries, joins, stored procedures, and relational schemas
Experience with NoSQL databases such as HBase, Cassandra, MongoDB
Knowledge of ETL techniques and frameworks
Performance tuning of Spark jobs
Experience with native cloud data services on AWS or Azure (Databricks)
Ability to lead a team efficiently
Experience with designing and implementing Big Data solutions
Practitioner of Agile methodology
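For the stream-processing item, here is a minimal Spark Structured Streaming sketch reading from Kafka; the broker address, topic name, and event schema are illustrative assumptions, and the job needs the spark-sql-kafka connector package on the classpath:

```python
# Hedged sketch of the Kafka-to-Spark Structured Streaming pattern: parse JSON events
# from a topic and maintain a running aggregation.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("kafka-stream-sketch").getOrCreate()

event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("amount", DoubleType()),
])

# Kafka delivers key/value as bytes; cast the value to string and parse it as JSON
events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "orders")
          .load()
          .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
          .select("e.*"))

# Running aggregation over the stream; checkpointing lets restarts resume cleanly
query = (events.groupBy("event_id").agg(F.sum("amount").alias("total"))
         .writeStream
         .outputMode("complete")
         .format("console")          # console for the sketch; Delta or Kafka in practice
         .option("checkpointLocation", "/tmp/checkpoints/order_totals")
         .start())
query.awaitTermination()
```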
Posted 1 week ago
4.0 - 8.0 years
0 - 0 Lacs
Noida
Hybrid
Design and implement data migration pipelines using Azure Data Factory, Azure SQL Database, Azure Synapse Analytics, and other Azure data services. Develop comprehensive data migration plans, ensuring the integrity, security, and compliance of data with regulatory requirements throughout the migration process.

Requirements: 8+ years of experience in data engineering, data integration, or database administration, focusing on Microsoft data platforms (MSBI, SQL Server). Proficiency in Azure services, including Azure Data Factory, Azure SQL Database, Azure Synapse Analytics, and Azure Blob Storage. Microsoft Azure Data Engineer certified.
Posted 1 week ago
2.0 - 6.0 years
0 Lacs
Nagpur, Maharashtra
On-site
About the Company: HCL Technologies is a global technology company that helps enterprises reimagine their businesses for the digital age. The company's mission is to deliver innovative solutions that drive business transformation and enhance customer experiences.

About the Role: We are looking for a skilled professional to join our team, with technical expertise across the following technologies and platforms.

Key Skills:
- Python
- Azure ADF
- API Development
- Azure Databricks
- CI/CD
- DevOps
- Terraform
- AWS
- Big Data

To apply for this position, please share your resume at sanchita.mitra@hcltech.com.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Hyderabad, Telangana
On-site
We are looking for a highly experienced Senior Data Engineer to lead our data migration projects from on-premise systems to Azure Cloud, utilizing Azure Databricks, PySpark, SQL, and Python. The successful candidate will be responsible for designing and implementing robust, scalable cloud data solutions to enhance business operations and decision-making processes.

Responsibilities:
Design and implement end-to-end data solutions using Azure Databricks, PySpark, MS SQL Server, and Python for data migration from on-premise to Azure Cloud (see the sketch below).
Develop architectural blueprints and detailed documentation for data migration strategies and execution plans.
Construct, test, and maintain optimal data pipeline architectures across multiple sources and destinations within Azure Cloud environments.
Leverage PySpark within Azure Databricks to perform complex data transformations, aggregations, and optimizations.
Ensure seamless migration of large-scale databases from on-premise systems to Azure Cloud, maintaining data integrity and compliance.
Handle technical escalations through effective diagnosis and troubleshooting of client queries.
Manage and resolve technical roadblocks/escalations as per SLA and quality requirements; if unable to resolve an issue, escalate it to TA & SES in a timely manner.
Provide product support and resolution to clients by performing question diagnosis while guiding users through step-by-step solutions.
Troubleshoot all client queries in a user-friendly, courteous, and professional manner.
Offer alternative solutions to clients (where appropriate) with the objective of retaining the customer's and client's business.
Organize ideas and effectively communicate oral messages appropriate to listeners and situations.
Follow up and make scheduled call-backs to customers to record feedback and ensure compliance with contract SLAs.

Performance Parameters:
1. Process: No. of cases resolved per day, compliance to process and quality standards, meeting process-level SLAs, Pulse score, customer feedback, NSAT/ESAT.
2. Team Management: Productivity, efficiency, absenteeism.
3. Capability Development: Triages completed, technical test performance.

Mandatory Skills: Talend Big Data
Experience: 5-8 Years

Join us at Wipro as we reinvent our world together. We are an end-to-end digital transformation partner with the boldest ambitions, seeking individuals inspired by reinvention of themselves, their careers, and their skills. Be a part of a business powered by purpose and a place that empowers you to design your reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.
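One plausible shape for the on-premise-to-Azure migration step is sketched below; the JDBC connection details and storage paths are placeholders, and a real job would pull credentials from a Key Vault-backed secret scope rather than hard-coding them:

```python
# Hedged sketch: lift a SQL Server table over JDBC and land it as Delta in ADLS,
# with a quick row-count integrity check. All connection values are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("onprem-migration").getOrCreate()

# Read the source table over JDBC; Databricks runtimes ship the SQL Server driver
source = (spark.read.format("jdbc")
          .option("url", "jdbc:sqlserver://onprem-host:1433;databaseName=sales")
          .option("dbtable", "dbo.orders")
          .option("user", "etl_user")
          .option("password", "***")   # in practice, read from a secret scope
          .option("fetchsize", "10000")
          .load())

# Land as Delta so downstream consumers get ACID reads during the migration window
target_path = "abfss://migrated@mydatalake.dfs.core.windows.net/sales/orders/"
source.write.format("delta").mode("overwrite").save(target_path)

# Integrity check: compare row counts between source and target
target_count = spark.read.format("delta").load(target_path).count()
assert source.count() == target_count, "row count mismatch after migration"
```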
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Pune, Maharashtra
On-site
At Capgemini Invent, we believe that difference drives change. As inventive transformation consultants, we combine our strategic, creative, and scientific capabilities to collaborate closely with clients in delivering cutting-edge solutions. Join our team to lead transformation customized to address our clients' challenges of today and tomorrow, informed and validated by science and data, superpowered by creativity and design, all underpinned by purpose-driven technology.

What you will appreciate about working with us: We acknowledge the importance of flexible work arrangements to provide support. Whether it's remote work or flexible work hours, you will find an environment that fosters a healthy work-life balance. At the core of our mission lies your career growth. Our array of career growth programs and diverse professions are designed to assist you in exploring a world of opportunities. Equip yourself with valuable certifications in the latest technologies such as generative AI.

Your Role: We are seeking a skilled PySpark Developer with expertise in Azure Databricks (ADB) and Azure Data Factory (ADF) to join our team. The ideal candidate will play a pivotal role in designing, developing, and implementing data solutions using PySpark for large-scale data processing and analytics.

Your Profile:
- Design, develop, and deploy PySpark applications and workflows on Azure Databricks for data transformation, cleansing, and aggregation.
- Implement data pipelines using Azure Data Factory (ADF) to orchestrate ETL/ELT processes across heterogeneous data sources.
- Conduct regular financial risk assessments to identify potential vulnerabilities in data processing workflows.
- Collaborate with Data Engineers and Data Scientists to integrate and process structured and unstructured data sets into actionable insights.

Capgemini is a global business and technology transformation partner, aiding organizations in accelerating their dual transition to a digital and sustainable world while making a tangible impact for enterprises and society. With a responsible and diverse group of 340,000 team members in more than 50 countries, Capgemini, with its strong over 55-year heritage, is trusted by clients to unlock the value of technology to address the entire breadth of their business needs. It provides end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, generative AI, cloud, and data, combined with its deep industry expertise and partner ecosystem.
Posted 1 week ago
10.0 - 15.0 years
17 - 30 Lacs
Pune
Work from Office
Dear Candidate,

This is with reference to an opportunity for Senior Tech Lead (Databricks) professionals. Please find below the job description.

Responsibilities:
Lead the design and implementation of Databricks-based data solutions.
Architect and optimize data pipelines for batch and streaming data.
Provide technical leadership and mentorship to a team of data engineers.
Collaborate with stakeholders to define project requirements and deliverables.
Ensure best practices in data security, governance, and compliance.
Troubleshoot and resolve complex technical issues in Databricks environments.
Stay updated on the latest Databricks features and industry trends.

Key Technical Skills & Responsibilities:
Experience in data engineering using Databricks or Apache Spark-based platforms.
Proven track record of building and optimizing ETL/ELT pipelines for batch and streaming data ingestion.
Hands-on experience with Azure services such as Azure Data Factory, Azure Data Lake Storage, Azure Databricks, Azure Synapse Analytics, or Azure SQL Data Warehouse.
Proficiency in programming languages such as Python, Scala, and SQL for data processing and transformation.
Expertise in Spark (PySpark, Spark SQL, or Scala) and Databricks notebooks for large-scale data processing.
Familiarity with Delta Lake, Delta Live Tables, and medallion architecture for data lakehouse implementations (see the sketch below).
Experience with orchestration tools like Azure Data Factory or Databricks Jobs for scheduling and automation.
Design and implement Azure Key Vault and scoped credentials.
Knowledge of Git for source control and CI/CD integration for Databricks workflows, cost optimization, and performance tuning.
Familiarity with Unity Catalog, RBAC, or enterprise-level Databricks setups.
Ability to create reusable components, templates, and documentation to standardize data engineering workflows is a plus.
Ability to define best practices, support multiple projects, and sometimes mentor junior engineers is a plus.
Must have experience working with streaming data sources and Kafka (preferred).

Eligibility Criteria:
Bachelor's degree in Computer Science, Data Engineering, or a related field
Extensive experience with Databricks, Delta Lake, PySpark, and SQL
Databricks certification (e.g., Certified Data Engineer Professional)
Experience with machine learning and AI integration in Databricks
Strong understanding of cloud platforms (AWS, Azure, or GCP)
Proven leadership experience in managing technical teams
Excellent problem-solving and communication skills
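As a small illustration of the medallion pattern named above, here is a hedged bronze-to-silver sketch in PySpark with Delta Lake; paths and column names are illustrative (Delta is available by default on Databricks and needs the delta-spark package on plain Spark):

```python
# Hedged medallion-layer sketch: raw events land in bronze as-is with an audit stamp,
# and silver applies typing, deduplication, and basic cleansing.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

# Bronze: ingest raw JSON without transformation, keeping an ingestion timestamp
bronze = (spark.read.json("/mnt/landing/events/")
          .withColumn("_ingested_at", F.current_timestamp()))
bronze.write.format("delta").mode("append").save("/mnt/bronze/events/")

# Silver: enforce types, drop duplicates on the business key, filter malformed rows
silver = (spark.read.format("delta").load("/mnt/bronze/events/")
          .withColumn("amount", F.col("amount").cast("double"))
          .dropDuplicates(["event_id"])
          .filter(F.col("event_id").isNotNull()))
silver.write.format("delta").mode("overwrite").save("/mnt/silver/events/")
```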
Posted 1 week ago
6.0 - 10.0 years
35 - 37 Lacs
Pune
Work from Office
Expertise in Java/Python, Scala & Spark architecture
Ability to comprehend business requirements & translate them into technical requirements
Familiar with the development life cycle, including CI/CD pipelines
Familiarity working with agile methodology

Required Candidate profile
Experience with big-data technologies Spark/Databricks and Hadoop/ADLS is a must
Experience in any one of the cloud platforms: Azure (preferred), AWS, or Google Cloud
Posted 1 week ago
8.0 - 13.0 years
37 - 65 Lacs
Pune
Work from Office
About Position: We are looking for a Spark/Scala developer with hands-on experience in ETL and Azure Databricks.

Role: Spark Scala Developer
Location: All Persistent Locations
Experience: 8+ Years
Job Type: Full Time Employment

What You'll Do:
Design and Development: Design and develop scalable data processing applications using Apache Spark and Scala.
Data Processing: Develop Spark jobs to process large datasets, including data ingestion, transformation, and aggregation.
Data Pipeline Development: Build and maintain data pipelines using Spark, Scala, and other related technologies.
Data Quality: Ensure data quality and integrity by implementing data validation, data cleansing, and data normalization techniques.
Performance Optimization: Optimize Spark jobs for performance, scalability, and reliability.
Troubleshooting: Troubleshoot and resolve technical issues related to Spark and Scala applications.

Expertise You'll Bring:
Minimum 8 years of data engineering experience
Experience in ETL/pipeline development using tools such as Azure Databricks/Apache Spark and Azure Data Factory, with development expertise in batch and real-time data integration
Experience in programming using Scala or Python
Experience in writing stored procedures
Experience in data ingestion, preparation, integration, and operationalization techniques to optimally address data requirements
Experience with cloud data warehouses like Azure Synapse and Snowflake
Experience with orchestration tools, Azure DevOps, and GitHub
Experience in building end-to-end architecture for data lakes, data warehouses, and data marts
Experience in relational data processing technologies like MS SQL, Delta Lake, and Spark SQL
Experience owning end-to-end development, including coding, testing, debugging, and deployment
Extensive knowledge of ETL and data warehousing concepts, strategies, and methodologies
Ability to provide forward-thinking solutions in data and analytics
Must be team-oriented, with strong collaboration, prioritization, and adaptability skills
Excellent written and verbal communication skills, including presentation skills
Familiarity with Azure services like Azure Functions, Azure Data Lake Store, and Azure Cosmos DB
Familiarity with healthcare business data models

Benefits:
Competitive salary and benefits package
Culture focused on talent development with quarterly promotion cycles and company-sponsored higher education and certifications
Opportunity to work with cutting-edge technologies
Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards
Annual health check-ups
Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents

Inclusive Environment: Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds. We offer hybrid work options and flexible working hours to accommodate various needs and preferences. Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities. If you are a person with disabilities and have specific requirements, please inform us during the application process or at any time during your employment.
We are committed to creating an inclusive environment where all employees can thrive. Our company fosters a values-driven and people-centric work environment that enables our employees to: Accelerate growth, both professionally and personally Impact the world in powerful, positive ways, using the latest technologies Enjoy collaborative innovation, with diversity and work-life wellbeing at the core Unlock global opportunities to work and learn with the industry's best Let's unleash your full potential at Persistent "Persistent is an Equal Opportunity Employer and prohibits discrimination and harassment of any kind."
Posted 1 week ago
6.0 - 8.0 years
20 - 35 Lacs
Bengaluru
Work from Office
The candidate should have 6+ years in Azure Cloud, with experience in data engineering and architecture, and experience working with Azure services such as Azure Data Factory, Azure Functions, Azure SQL, Azure Databricks, Azure Data Lake, Synapse Analytics, etc.
Posted 1 week ago
15.0 - 20.0 years
10 - 14 Lacs
Pune
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must-have skills: Microsoft Azure DevOps
Good-to-have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure project milestones are met, facilitating discussions to address challenges, and guiding the team in implementing effective solutions. You will also engage in strategic planning sessions to align project goals with organizational objectives, ensuring that all stakeholders are informed and involved in the development process. Your role will be pivotal in driving the success of application projects and fostering a collaborative environment among team members.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Monitor project progress and ensure adherence to timelines and quality standards.

Professional & Technical Skills:
- Must-have skills: Proficiency in Microsoft Azure DevOps.
- Should have demonstrable experience of DevOps processes and practices; experience with GitLab CI/CD pipelines is an advantage.
- Should have strong experience with shell scripting.
- Should have in-depth knowledge of Linux and Kubernetes.
- Should have experience in the Azure ecosystem with a focus on key Azure product offerings such as Azure Kubernetes Service, Azure Databricks, Azure Container Registry, and Azure Storage offerings.
- Knowledge of the Python programming language; knowledge of the R and SAS languages is an advantage.
- Should have a technical understanding of distributed and parallel computing technologies such as Apache Spark, in-memory data grids, or grid computing.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Microsoft Azure DevOps.
- A 15 years full time education is required.
Posted 1 week ago
15.0 - 20.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must-have skills: Microsoft Azure Databricks
Good-to-have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with various teams to understand their needs, developing innovative solutions, and ensuring that applications are aligned with business objectives. You will engage in problem-solving activities, participate in team meetings, and contribute to the overall success of projects by leveraging your expertise in application development.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Monitor project progress and ensure timely delivery of application features.

Professional & Technical Skills:
- Must-have skills: Proficiency in Microsoft Azure Databricks.
- Good-to-have skills: Experience with cloud computing platforms.
- Strong understanding of application development methodologies.
- Familiarity with data integration and ETL processes.
- Experience in developing scalable applications.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Microsoft Azure Databricks.
- This position is based at our Bengaluru office.
- A 15 years full time education is required.
Posted 1 week ago
4.0 - 6.0 years
15 - 30 Lacs
Bengaluru, Mumbai (All Areas)
Work from Office
Role & responsibilities:
Design Scalable Data Solutions: Contribute to the design of secure and scalable data engineering frameworks for analytics and BI.
Build and Maintain Data Pipelines: Develop and maintain robust pipelines for data ingestion, transformation, and integration across diverse systems.
Leverage Azure Platform Tools: Utilize services such as Azure Data Factory, Azure Databricks, and Azure Synapse Analytics to implement enterprise-grade data solutions.
Support BI at Scale: Enable high-performance reporting and dashboarding by optimizing Synapse SQL Dedicated Pools and integrating with BI platforms like Power BI.
Ensure Data Quality & Governance: Implement validation checks, lineage tracking, and quality monitoring to ensure data integrity and trust.
Optimize for Scale and Efficiency: Tune data solutions for cost-effectiveness and performance in large-scale environments.
Collaborate Across Functions: Work alongside Data Scientists, Analysts, Product Managers, and Business Leaders to align data work with business needs and strategy.
Support Knowledge Sharing: Encourage best practices, continuous learning, and knowledge exchange within the team.
Grow Commercial Acumen: Develop a strong understanding of FedEx's business operations, goals, and customer value drivers.

Preferred candidate profile:
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
4-6 years of experience in data engineering, ETL development, or related roles.
Proficiency in SQL, Python, and PySpark/Spark SQL.
Experience with Azure Data Factory, Azure Databricks, and Azure Synapse Analytics.
Understanding of data warehousing, data lakes, and BI-driven architecture.
Ability to work with diverse data sources (APIs, flat files, Excel, PDFs, relational databases).
Familiarity with DevOps, Git, and CI/CD for data workflows.
Strong analytical, communication, and collaboration skills.
Interest in learning the business context behind the data and its strategic applications.
Posted 1 week ago
3.0 - 5.0 years
15 - 25 Lacs
Noida
Work from Office
We are looking for an experienced Data Engineer with strong expertise in Databricks and Azure Data Factory (ADF) to design, build, and manage scalable data pipelines and integration solutions. The ideal candidate will have a solid background in big data technologies, cloud platforms, and data processing frameworks to support enterprise-level data transformation and analytics initiatives.

Roles and Responsibilities:
Design, develop, and maintain robust data pipelines using Azure Data Factory and Databricks.
Build and optimize data flows and transformations for structured and unstructured data.
Develop scalable ETL/ELT processes to extract data from various sources including SQL, APIs, and flat files.
Implement data quality checks, error handling, and performance tuning of data pipelines.
Collaborate with data scientists, analysts, and business stakeholders to understand data requirements.
Work with Azure services such as Azure Data Lake Storage (ADLS), Azure Synapse Analytics, and Azure SQL.
Participate in code reviews, version control, and CI/CD processes.
Ensure data security, privacy, and compliance with governance standards.

Required Skills:
Strong hands-on experience with Azure Data Factory and Azure Databricks (Spark-based development).
Proficiency in Python, SQL, and PySpark for data manipulation.
Experience with Delta Lake, data versioning, and streaming/batch data processing (see the upsert sketch below).
Working knowledge of Azure services such as ADLS, Azure Blob Storage, and Azure Key Vault.
Familiarity with DevOps, Git, and CI/CD pipelines in data engineering workflows.
Strong understanding of data modeling, data warehousing, and performance tuning.
Excellent analytical, communication, and problem-solving skills.
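For the Delta Lake and data-versioning item, here is a hedged sketch of the standard MERGE upsert plus time travel; the table paths and join key are assumptions (the delta-spark package is needed on plain Spark, while Databricks ships it out of the box):

```python
# Hedged sketch: upsert a staged batch into a Delta table and show version-based reads.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-upsert-sketch").getOrCreate()

updates = spark.read.parquet("/mnt/staging/customers_delta/")   # illustrative staging path
target = DeltaTable.forPath(spark, "/mnt/silver/customers/")

# MERGE: update matched keys, insert new ones; Delta logs the operation in its history
(target.alias("t")
 .merge(updates.alias("s"), "t.customer_id = s.customer_id")
 .whenMatchedUpdateAll()
 .whenNotMatchedInsertAll()
 .execute())

# Time travel: read an earlier version to inspect or roll back a problematic load
previous = (spark.read.format("delta")
            .option("versionAsOf", 0)
            .load("/mnt/silver/customers/"))
previous.show()
```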
Posted 1 week ago
8.0 years
2 - 3 Lacs
Noida, Kolkata, Bengaluru
Work from Office
PySpark, Python, Azure Databricks, Azure Data Factory, SQL
Posted 1 week ago
6.0 - 11.0 years
2 - 6 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Location: Pune, Mumbai, Nagpur, Goa, Noida, Gurgaon, Ahmedabad, Jaipur, Indore, Kolkata, Kochi, Hyderabad, Bangalore, Chennai

Minimum 6-7 years of experience in designing, implementing, and supporting Data Warehousing and Business Intelligence solutions on Microsoft Fabric data pipelines.

Role & responsibilities:
Design and implement scalable and efficient data pipelines using Azure Data Factory, PySpark notebooks, Spark SQL, and Python, including data ingestion, data transformation, and data loading processes.
Implement ETL processes to extract data from diverse sources, transform it into suitable formats, and load it into the data warehouse or analytical systems.
Hands-on experience in design, development, and implementation of Microsoft Fabric and Azure data analytics services (Azure Data Factory - ADF, Data Lake, Azure Synapse, Azure SQL, and Databricks).
Experience in writing optimized SQL queries on MS Azure Synapse Analytics (dedicated and serverless resources in queries, etc.); see the join-tuning sketch below.
Troubleshoot, resolve, and suggest deep code-level analysis of Spark to address complex customer issues related to Spark core internals, Spark SQL, Structured Streaming, and Delta.
Continuously monitor and fine-tune data pipelines and processing workflows to enhance overall performance and efficiency, considering large-scale data sets.
Experience with hybrid cloud deployments and integration between on-premises and cloud environments.
Ensure data security and compliance with data privacy regulations throughout the data engineering process.
Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources.
Conceptual knowledge of data and analytics, such as dimensional modeling, ETL, reporting tools, data governance, data warehousing, and structured and unstructured data.
Understanding of data engineering best practices like code modularity, documentation, and version control.
Collaborate with business stakeholders to gather requirements and create comprehensive technical solutions and documentation.
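As one concrete example of the Spark SQL tuning theme, here is a short sketch of an explicit broadcast join hint; the table names and paths are illustrative:

```python
# Hedged sketch: broadcast a small dimension table so the join avoids a shuffle.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("join-tuning-sketch").getOrCreate()

facts = spark.read.parquet("/mnt/silver/sales/")    # large fact table (assumed path)
dims = spark.read.parquet("/mnt/silver/stores/")    # small dimension table (assumed path)

# Broadcasting replicates the small side to every executor, skipping the shuffle
joined = facts.join(F.broadcast(dims), "store_id")

# Inspect the physical plan to confirm a BroadcastHashJoin was chosen
joined.explain()
```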
Posted 1 week ago
5.0 - 10.0 years
0 Lacs
Karnataka
On-site
As an Azure Architect at Fractal, you will be part of a dynamic team in the Artificial Intelligence space dedicated to empowering human decision-making in the enterprise. You will contribute to the development and delivery of innovative solutions that assist Fortune 500 companies in making strategic and tactical decisions. Fractal is renowned for its cutting-edge products such as Qure.ai, Cuddle.ai, Theremin.ai, and Eugenie.ai, which leverage AI to drive business success.

Joining our Technology team, you will play a pivotal role in designing, building, and maintaining technology services for our global clientele. Your responsibilities will include actively participating in client engagements, developing end-to-end solutions for large projects, and utilizing software engineering principles to deliver scalable solutions. You will be instrumental in enhancing our technology capabilities to ensure successful project delivery.

To excel in this role, you should hold a bachelor's degree in Computer Science or a related field and possess 5-10 years of experience in technology. Your expertise should span System Integration, Application Development, and Data Warehouse projects, encompassing a variety of enterprise technologies. Proficiency in object-oriented languages like Python and PySpark, as well as relational and dimensional modeling, is essential. Additionally, a strong command of Microsoft Azure components such as Azure Databricks, Azure Data Factory, and Azure SQL is mandatory.

We are seeking individuals with a forward-thinking mindset and a proactive approach to problem-solving. If you are passionate about leveraging cloud technology and machine learning to drive business innovation, and if you thrive in a collaborative environment with like-minded professionals, we invite you to explore a rewarding career at Fractal.

If you are ready to embrace challenges, foster growth, and collaborate with a team of high-performing individuals, we look forward to discussing how you can contribute to our mission of transforming decision-making processes with AI technology. Join us on this exciting journey towards shaping the future of enterprise decision-making!
Posted 1 week ago
8.0 - 12.0 years
15 - 30 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Role: Data Engineer
Location: Chennai, Bangalore, Hyderabad
Experience: 8+
Work Mode: 5 days Work from Office only

Responsibilities:
Ability to ingest, cleanse, transform, and load data from varied data sources using the Azure services listed below.
Strong knowledge of Medallion architecture.
Consume data from sources with different file formats such as XML, CSV, Excel, Parquet, and JSON.
Create Linked Services for different types of sources.
Create automated pipeline flows that can consume data received, for example, via email or SharePoint.
Strong problem-solving skills, such as backtracking of datasets and data analysis.
Strong knowledge of advanced SQL techniques for carrying out data analysis as per client requirements.

Skills:
The candidate needs to understand different data architecture patterns and parallel data processing, and should be proficient in using the following services to create data processing solutions:
Azure Data Factory
Azure Data Lake Storage
Azure Databricks
Strong knowledge of PySpark and SQL
Good programming skill in Python

Desired Skills:
Ability to query data from the serverless SQL pool in Azure Synapse Analytics.
Knowledge of Azure DevOps.
Knowledge of configuring datasets with VNet and subnet networks.
Knowledge of Microsoft Entra ID, to create app registrations (single- and multi-tenant) for security purposes (see the sketch below).
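For the Entra ID app registration item, here is a hedged sketch of how a Databricks notebook typically authenticates to ADLS Gen2 with a service principal whose secret sits in a Key Vault-backed secret scope; the scope, key, storage account, and tenant values are all placeholders, and `dbutils` exists only inside a Databricks notebook or job context:

```python
# Hedged sketch (Databricks notebook context assumed): OAuth access to ADLS Gen2 via a
# service principal (Entra ID app registration). All identifiers below are placeholders.
service_credential = dbutils.secrets.get(scope="kv-scope", key="sp-client-secret")

account = "mydatalake.dfs.core.windows.net"
spark.conf.set(f"fs.azure.account.auth.type.{account}", "OAuth")
spark.conf.set(f"fs.azure.account.oauth.provider.type.{account}",
               "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider")
spark.conf.set(f"fs.azure.account.oauth2.client.id.{account}", "<application-id>")
spark.conf.set(f"fs.azure.account.oauth2.client.secret.{account}", service_credential)
spark.conf.set(f"fs.azure.account.oauth2.client.endpoint.{account}",
               "https://login.microsoftonline.com/<tenant-id>/oauth2/token")

# With the account configured, abfss:// paths resolve through the service principal
df = spark.read.parquet("abfss://raw@mydatalake.dfs.core.windows.net/input/")
```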
Posted 1 week ago
4.0 - 9.0 years
15 - 27 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Job Title: Data Engineer (Azure DE)
Location: Hyderabad
Experience: 4-12 Years

Job Description: We are seeking a skilled and motivated Data Engineer to join our team in Hyderabad. The ideal candidate will have hands-on experience in designing, developing, and maintaining data pipelines and solutions using Azure services and open-source technologies.

Key Responsibilities:
Design, develop, and optimize data workflows using Azure Data Factory (ADF).
Build and maintain scalable data processing solutions using Azure Databricks (ADB) with PySpark.
Write efficient, reusable, and reliable Python and SQL scripts for data transformation and processing.
Collaborate with data scientists and analysts to understand data requirements and deliver solutions.
Monitor, troubleshoot, and improve data pipeline performance and reliability.
Ensure data quality, security, and compliance standards are met.

Required Skills & Qualifications:
4+ years of experience in Data Engineering or relevant roles.
Hands-on experience with Azure Data Factory (ADF) and Azure Databricks (ADB).
Strong proficiency in PySpark, Python, and SQL.
Experience in designing and implementing ETL/ELT processes.
Good understanding of data warehousing concepts and data modeling.
Familiarity with cloud-based data solutions and architecture best practices.
Excellent problem-solving and communication skills.

Preferred Skills:
Experience with other Azure services like Azure Data Lake and Azure SQL Database.
Knowledge of DevOps practices related to data pipelines.
Ability to work in a dynamic, collaborative environment.
Posted 1 week ago
7.0 - 12.0 years
15 - 27 Lacs
Hyderabad, Bengaluru
Work from Office
Must Have:
Python & PySpark: Proficient in both, with a strong understanding of data engineering best practices. (High expertise)
Data Exploration & Troubleshooting: Ability to investigate data quality issues, debug pipelines, and explore datasets beyond surface-level analysis. (High expertise)
CI/CD & GitHub: Experience with GitHub and GitHub Actions for version control and automation; familiarity with CI/CD practices for testing and deployment. (High expertise)

Cloud & Platform Experience (High expertise):
Azure & Databricks: Hands-on experience with Azure cloud services and Databricks, including Databricks Jobs, Clusters, and Unity Catalog. (Medium expertise)

Data Pipeline Development:
Batch: Design, build, and maintain robust, scalable, and automated data pipelines for batch data ingestion using Databricks Workflows. (High expertise)
Streaming: Design, build, and maintain robust, scalable, and automated data pipelines for streaming data ingestion using Databricks Workflows. (Medium expertise)
Implement data quality checks, profiling, validation, and root-cause analysis to ensure data accuracy and consistency; a profiling sketch follows below. (High expertise)
Design and implement data models and architectures that align with business needs and support efficient processing, analysis, and reporting. (Low/Medium expertise)

Orchestration & Monitoring (High expertise)
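A brief hedged sketch of the data-profiling side of the quality checks mentioned above; the input path is illustrative:

```python
# Lightweight profiling pass: per-column null rates over a dataset in one aggregation,
# the kind of quick check used for validation and root-cause analysis.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("profiling-sketch").getOrCreate()
df = spark.read.parquet("/mnt/bronze/events/")   # assumed input path

total = df.count()
if total == 0:
    raise ValueError("empty dataset; nothing to profile")

# One global aggregation computing the null rate of every column in a single pass
null_rates = df.select([
    (F.sum(F.col(c).isNull().cast("int")) / F.lit(total)).alias(f"{c}_null_rate")
    for c in df.columns
])
null_rates.show(truncate=False)
```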
Posted 1 week ago
7.0 - 12.0 years
0 Lacs
Hyderabad
Work from Office
Job Summary: We are seeking a highly skilled Data Engineer with expertise in leveraging Data Lake architecture and the Azure cloud platform to develop, deploy, and optimise data-driven solutions. You will play a pivotal role in transforming raw data into actionable insights, supporting strategic decision-making across the organisation.

Responsibilities:
Design and implement scalable data science solutions using Azure Data Lake, Azure Databricks, Azure Data Factory, and related Azure services.
Develop, train, and deploy machine learning models to address business challenges.
Collaborate with data engineering teams to optimise data pipelines and ensure seamless data integration within Azure cloud infrastructure.
Conduct exploratory data analysis (EDA) to identify trends, patterns, and insights.
Build predictive and prescriptive models to support decision-making processes.
Expertise in developing the end-to-end machine learning lifecycle utilizing CRISP-DM, comprising data collection, cleansing, visualization, preprocessing, model development, model validation, and model retraining (a minimal sketch follows below).
Proficient in building and implementing RAG systems that enhance the accuracy and relevance of model outputs by integrating retrieval mechanisms with generative models.
Ensure data security, compliance, and governance within the Azure cloud ecosystem.
Monitor and optimise model performance and scalability in production environments.
Prepare clear and concise documentation for developed models and workflows.

Skills Required:
Good experience in using PySpark, Python, MLOps (optional), MLflow (optional), Azure Data Lake Storage, and Unity Catalog.
Worked with and utilized data from various RDBMS like MySQL, SQL Server, and Postgres; NoSQL databases like MongoDB, Cassandra, and Redis; and graph DBs like Neo4j and Grakn.
Proven experience as a Data Engineer with a strong focus on the Azure cloud platform and Data Lake architecture.
Proficiency in Python and PySpark.
Hands-on experience with Azure services such as Azure Data Lake, Azure Synapse Analytics, Azure Machine Learning, Azure Databricks, and Azure Functions.
Strong knowledge of SQL and experience in querying large datasets from Data Lakes.
Familiarity with data engineering tools and frameworks for data ingestion and transformation in Azure.
Experience with version control systems (e.g., Git) and CI/CD pipelines for machine learning projects.
Excellent problem-solving skills and the ability to work collaboratively in a team environment.
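As a compact, generic illustration of the train/validate slice of that lifecycle, here is a runnable scikit-learn sketch; the dataset and model choice are purely illustrative:

```python
# Hedged sketch of the CRISP-DM train/validate step: preprocessing and model in one
# pipeline so the same transform is applied at inference, with hold-out metrics that
# would feed a retraining decision. Dataset and model are illustrative only.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X_train, y_train)

# Validation: hold-out metrics summarize how the model generalizes
print(classification_report(y_test, model.predict(X_test)))
```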
Posted 1 week ago