Get alerts for new jobs matching your selected skills, preferred locations, and experience range. Manage Job Alerts
6.0 - 7.0 years
8 - 9 Lacs
Pune
Work from Office
As an Associate Software Developer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.

In this role, your responsibilities may include:
- Implementing and validating predictive models, and creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques
- Designing and implementing enterprise search applications such as Elasticsearch and Splunk to meet client requirements
- Working in an Agile, collaborative environment, partnering with scientists, engineers, consultants, and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviors

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Total experience: 6-7 years (relevant: 4-5 years)
- Mandatory skills: Azure Databricks, Python/PySpark, SQL, GitHub, Azure DevOps, Azure Blob
- Ability to use programming languages such as Java, Python, and Scala to build pipelines that extract and transform data from a repository to a data consumer
- Ability to use Extract, Transform, and Load (ETL) tools and/or data integration or federation tools to prepare and transform data as needed
- Ability to use leading-edge tools such as Linux, SQL, Python, Spark, Hadoop, and Java

Preferred technical and professional experience:
- You thrive on teamwork and have excellent verbal and written communication skills
- Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions
- Ability to communicate results to technical and non-technical audiences
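Several of these listings ask for the ability to build pipelines that extract and transform data from a repository to a data consumer. As a rough illustration only, here is a minimal extract-transform-load sketch in plain Python, with in-memory records and SQLite standing in for the real source and target systems; all field and table names are hypothetical:

```python
import sqlite3

# Hypothetical raw records, standing in for rows pulled from a source repository.
RAW_ROWS = [
    {"id": 1, "city": " pune ", "salary_lacs": "8"},
    {"id": 2, "city": "Mumbai", "salary_lacs": "9"},
    {"id": 2, "city": "Mumbai", "salary_lacs": "9"},  # duplicate to be dropped
]

def extract():
    """Extract: in a real pipeline this would read from a database or API."""
    return list(RAW_ROWS)

def transform(rows):
    """Transform: normalize strings, cast types, and de-duplicate on id."""
    seen, out = set(), []
    for row in rows:
        if row["id"] in seen:
            continue
        seen.add(row["id"])
        out.append({
            "id": row["id"],
            "city": row["city"].strip().title(),
            "salary_lacs": int(row["salary_lacs"]),
        })
    return out

def load(rows):
    """Load: write the cleaned rows into a target table (SQLite for the demo)."""
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE jobs (id INTEGER PRIMARY KEY, city TEXT, salary_lacs INTEGER)"
    )
    conn.executemany("INSERT INTO jobs VALUES (:id, :city, :salary_lacs)", rows)
    return conn

conn = load(transform(extract()))
print(conn.execute("SELECT city, salary_lacs FROM jobs ORDER BY id").fetchall())
# → [('Pune', 8), ('Mumbai', 9)]
```

At the scale these roles describe, the same extract/transform/load separation would typically run in PySpark or Azure Data Factory rather than plain Python, but the shape of the work is the same.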
Posted 1 month ago
2.0 - 7.0 years
4 - 9 Lacs
Bengaluru
Work from Office
What this job involves: JLL, an international real estate management company, is seeking a Data Engineer to join our JLL Technologies team. We are seeking self-starters who can work in a diverse, fast-paced environment and join our Enterprise Data team. The candidate will be responsible for designing and developing data solutions that are strategic for the business, using the latest technologies: Azure Databricks, Python, PySpark, Spark SQL, Azure Functions, Delta Lake, and Azure DevOps CI/CD.

Responsibilities:
- Develop solutions leveraging cloud big data technology to ingest, process, and analyze large, disparate data sets to exceed business requirements.
- Develop data lake solutions to store structured and unstructured data from internal and external sources, and provide technical guidance to help migrate colleagues to a modern technology platform.
- Contribute and adhere to CI/CD processes and development best practices, and strengthen the discipline in the Data Engineering org.
- Develop systems that ingest, cleanse, and normalize diverse datasets, develop data pipelines from various internal and external sources, and build structure for previously unstructured data.
- Using PySpark and Spark SQL, extract, manipulate, and transform data from various sources, such as databases, data lakes, APIs, and files, to prepare it for analysis and modeling.
- Perform unit testing, system integration testing, and regression testing, and assist with user acceptance testing.
- Consult with the business to develop documentation and communication materials that ensure accurate usage and interpretation of JLL data.
- Implement data security best practices, including data encryption, access controls, and compliance with data protection regulations. Ensure data privacy, confidentiality, and integrity throughout the data engineering processes.
- Perform the data analysis required to troubleshoot data-related issues and assist in their resolution.
Experience & Education:
- Minimum of 2 years of experience as a data developer using Python, PySpark, Spark SQL, SQL Server, and ETL concepts.
- Bachelor's degree in Information Science, Computer Science, Mathematics, Statistics, or a quantitative discipline in science, business, or social science.
- Experience with the Azure cloud platform, Databricks, and Azure Storage.
- Effective written and verbal communication skills, including technical writing.
- Excellent technical, analytical, and organizational skills.

Technical Skills & Competencies:
- Experience handling unstructured and semi-structured data, working in a data lake environment, leveraging data streaming, and developing data pipelines driven by events/queues.
- Hands-on experience with real-time/near-real-time processing, and readiness to code.
- Hands-on experience in PySpark, Databricks, and Spark SQL.
- Knowledge of JSON, Parquet, and other file formats, and the ability to work effectively with them.
- Knowledge of NoSQL databases such as HBase, MongoDB, and Cosmos DB.
- Preferred: cloud experience on Azure or AWS, including Python/Spark, Spark Streaming, Azure SQL Server, Cosmos DB/MongoDB, Azure Event Hubs, Azure Data Lake Storage, Azure Search, etc.
- Team player: a reliable, self-motivated, and self-disciplined individual capable of executing multiple projects simultaneously in a fast-paced environment with cross-functional teams.
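The listing above calls for handling semi-structured data and pipelines driven by events/queues. A small sketch of the kind of normalization step involved, in plain Python: one nested JSON event (the shape and field names are invented for illustration) is flattened into tabular rows ready for a downstream table.

```python
import json

# Hypothetical event payload, standing in for a message read off a queue.
EVENT = json.dumps({
    "property_id": "IN-001",
    "meta": {"source": "sensor", "ts": "2024-01-01T00:00:00Z"},
    "readings": [
        {"kind": "power_kwh", "value": 12.5},
        {"kind": "temp_c", "value": 21.0},
    ],
})

def flatten(raw):
    """Turn one nested JSON event into flat rows, one per reading."""
    doc = json.loads(raw)
    return [
        {
            "property_id": doc["property_id"],
            "source": doc["meta"]["source"],
            "ts": doc["meta"]["ts"],
            "kind": r["kind"],
            "value": r["value"],
        }
        for r in doc["readings"]
    ]

rows = flatten(EVENT)
print(len(rows))        # → 2
print(rows[0]["kind"])  # → power_kwh
```

In the role described, the same flattening would usually be expressed in PySpark (e.g. over JSON files landed in a data lake), with the flat rows written out as Parquet.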
Posted 1 month ago
5.0 - 10.0 years
7 - 12 Lacs
Mumbai
Work from Office
As a Data Engineer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.

In this role, your responsibilities may include:
- Implementing and validating predictive models, and creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques
- Designing and implementing enterprise search applications such as Elasticsearch and Splunk to meet client requirements
- Working in an Agile, collaborative environment, partnering with scientists, engineers, consultants, and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours
- Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modeling results

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- We are seeking a skilled Azure Data Engineer with 5+ years of experience, including 3+ years of hands-on experience with ADF/Databricks
- The ideal candidate has Databricks, Data Lake, and Python programming skills
- The candidate will also have experience deploying to Databricks
- Familiarity with Azure Data Factory

Preferred technical and professional experience:
- Good communication skills
- 3+ years of experience with ADF, Databricks, and Data Lake
- Ability to communicate results to technical and non-technical audiences
Posted 1 month ago
10.0 - 15.0 years
12 - 17 Lacs
Bengaluru
Work from Office
Consult with clients and propose architectural solutions to help move and improve infrastructure from on-premises to the cloud, or help optimize cloud spend when moving from one public cloud to another. Be the first to experiment with new-age cloud offerings; help define best practices as a thought leader for cloud, automation, and DevOps; and be a solution visionary and technology expert across multiple channels. Good understanding of cloud design principles, sizing, multi-zone/cluster setup, resiliency, and DR design. A Solution Architect or similar Azure certification is a must. Good business judgment, a comfortable, open communication style, and a willingness and ability to work with customers and teams. Strong communication skills and the ability to lead discussions with client technical experts, application teams, and vendors to drive collaboration and a design-thinking model toward the desired objective.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise: Experience participating in technical reviews of requirements, designs, code, and other artifacts, using your multicloud experience to build hybrid-cloud solutions for customers. Provide leadership to project teams and facilitate the definition of project deliverables around core cloud-based technology and methods. Define tracking mechanisms and ensure IT standards and methodology are met; deliver quality results. Sound knowledge of SRE principles and the ability to address performance issues through design or coding is a must. Implement observability, and develop and support a pipeline model to deploy key features and changes.
Security, Risk and Compliance: advise customers on best practices around access management, network setup, regulatory compliance, and related areas.

Preferred technical and professional experience: 10-15 years of experience, with at least 5+ years of hands-on experience in Azure cloud computing and IT operations in a global enterprise environment. Experience in Azure Databricks is preferred. Must have Azure DevOps experience, expertise across Azure services, databases, and operating systems, and good experience with automation tools such as Terraform and Ansible. Should be willing to work on IBM Cloud projects as and when needed.
Posted 1 month ago
5.0 - 10.0 years
7 - 12 Lacs
Bengaluru
Work from Office
Responsibilities:
- Create and manage scalable data pipelines to collect, process, and store large volumes of data from various sources
- Integrate data from multiple sources, ensuring consistency, quality, and reliability
- Design, implement, and optimize database schemas and structures to support data storage and retrieval
- Develop and maintain ETL (Extract, Transform, Load) processes to accurately and efficiently move data between systems
- Build and maintain data warehouses to support business intelligence and analytics needs
- Optimize data processing and storage performance for efficient resource utilization and quick retrieval
- Create and maintain comprehensive documentation for data pipelines, ETL processes, and database schemas
- Monitor data pipelines and systems for performance and reliability, troubleshooting and resolving issues as they arise
- Stay up to date with emerging technologies and best practices in data engineering, evaluating and recommending new tools as appropriate

Requirements:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field (Engineering or Math preferred)
- 5+ years of experience with SQL, Python, .NET, SSIS, and SSAS
- 2+ years of experience with Azure cloud services, particularly SQL Server, ADF, Azure Databricks, ADLS, Key Vault, Azure Functions, and Logic Apps, with an emphasis on Databricks
- 2+ years of experience using Git and deploying code using a CI/CD approach
- Strong analytical and problem-solving skills
- Excellent communication and interpersonal skills
- Ability to work independently and as part of a team
- Attention to detail and a commitment to quality
Posted 1 month ago
4.0 - 9.0 years
10 - 20 Lacs
Pune
Work from Office
As an Azure/SQL Data Analytics Consultant, expect to be:
• Working on projects that utilize products within the Microsoft Azure and SQL Data Analytics stack
• Satisfying the expectations and requirements of customers, both internal and external

Required Candidate profile
Core: Azure Data Platform, SQL Server (T-SQL), Data Analytics (SSIS, SSAS, SSRS), Power BI, Synapse
Supporting: Azure ML, Azure infrastructure, Python, Data Factory
Principles: Data Modelling, Data Warehouse Theory
Posted 1 month ago
6.0 - 11.0 years
17 - 30 Lacs
Hyderabad/Secunderabad, Bangalore/Bengaluru, Delhi / NCR
Hybrid
Inviting applications for the role of Lead Consultant - Data Engineer, Azure + Python!

Responsibilities:
- Hands-on experience with Azure, PySpark, and Python with Kafka
- Monitor and optimize the performance of cloud resources to ensure efficient utilization and cost-effectiveness
- Implement and maintain security measures to protect data and systems within the Azure environment, including IAM policies, security groups, and encryption mechanisms
- Develop application programs using big data technologies like Apache Hadoop, Apache Spark, etc.
- Build data pipelines by building ETL (Extract-Transform-Load) processes
- Implement backup, disaster recovery, and business continuity strategies for cloud-based applications and data
- Analyse business and functional requirements, which involves reviewing existing system configurations and operating methodologies as well as understanding evolving business needs
- Analyse requirements/user stories at business meetings, strategize the impact of requirements on different platforms/applications, and convert business requirements into technical requirements
- Participate in design reviews to provide input on functional requirements, product designs, schedules, and/or potential problems
- Understand the current application infrastructure and suggest cloud-based solutions that reduce operational cost and require minimal maintenance while providing high availability with improved security
- Perform unit testing on modified software to ensure that new functionality works as expected while existing functionality continues to work in the same way
- Coordinate with release management and other supporting teams to deploy changes to the production environment

Qualifications we seek in you!
Minimum Qualifications:
- Experience designing and implementing data pipelines, building data applications, and performing data migration on Azure
- Experience with Databricks is an added advantage
- Strong experience in Python and SQL
- Strong understanding of security principles and best practices for cloud-based environments
- Experience with monitoring tools and implementing proactive measures to ensure system availability and performance
- Excellent problem-solving skills and the ability to troubleshoot complex issues in a distributed, cloud-based environment
- Strong communication and collaboration skills to work effectively with cross-functional teams

Preferred Qualifications/Skills:
- Master's degree in Computer Science, Electronics, or Electrical Engineering
- Azure Data Engineering and cloud certifications, Databricks certifications
- Experience working with Oracle ERP
- Experience with multiple data integration technologies and cloud platforms
- Knowledge of the Change & Incident Management process
Posted 1 month ago
4.0 - 8.0 years
5 - 10 Lacs
Pune
Hybrid
About Client: Hiring for one of the most prestigious multinational corporations!

Job Description
Job Title: Azure Data Engineer
Qualification: Any Graduate or above
Relevant Experience: 4 to 7 years
Required technical skills: Databricks, Python, PySpark, SQL, Azure Cloud, Power BI
Location: Pune
CTC Range: 5 to 10 LPA
Notice Period: immediate / serving notice period
Shift Timing: NA
Mode of Interview: Virtual

Sonali Jena
Staffing Analyst - IT Recruiter
Black and White Business Solutions Pvt Ltd
Bangalore, Karnataka, INDIA
sonali.jena@blackwhite.in | www.blackwhite.in
+91 8067432474
Posted 1 month ago
6.0 - 10.0 years
15 - 22 Lacs
Chennai
Work from Office
Job Title: Data Engineering Lead
Experience: 6-8 years
Location: Chennai
Work Mode: WFO, all 5 days
Shift Timing: General shift
Budget: Max 24 LPA
Immediate joiners required
Mail me at: triveni2@elabsinfotech.com

Mandatory Skills:
- Data Engineer with strong ETL experience
- Azure Data Factory, Azure Synapse & Databricks (all are mandatory)
- Power BI, 1 year of experience
- Azure Cloud
- Must have managed a team of at least 5
- Good communication
Posted 1 month ago
2.0 - 6.0 years
7 - 17 Lacs
Chennai
Hybrid
We are seeking a highly skilled and motivated Azure Data Engineer to join our growing data team. In this role, you will be responsible for designing, developing, and maintaining scalable and robust data pipelines and data solutions within the Microsoft Azure ecosystem. You will work closely with data scientists, analysts, and business stakeholders to understand data requirements and translate them into effective data architectures. The ideal candidate will have a strong background in data warehousing, ETL/ELT processes, and a deep understanding of Azure data services.

Responsibilities:
- Design, build, and maintain scalable and efficient data pipelines using Azure Data Factory, Azure Synapse Analytics, Azure Databricks, or other relevant Azure services.
- Develop and optimize data ingestion processes from various source systems (on-premises, cloud, third-party APIs) into Azure data platforms.
- Implement data warehousing solutions, including dimensional modeling and data lake strategies, using Azure Synapse Analytics, Azure Data Lake Storage Gen2, or Azure SQL Database.
- Write, optimize, and maintain complex SQL queries, stored procedures, and data transformation scripts.
- Develop and manage data quality checks, data validation processes, and data governance policies.
- Monitor and troubleshoot data pipeline issues, ensuring data accuracy and availability.
- Collaborate with data scientists and analysts to support their data needs for reporting, analytics, and machine learning initiatives.
- Implement security best practices for data storage and access within Azure.
- Participate in code reviews, contribute to architectural discussions, and promote best practices in data engineering.
- Stay up-to-date with the latest Azure data technologies and trends, proposing and implementing improvements where applicable.
- Document data flows, architectures, and operational procedures.
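One responsibility above is developing data quality checks and validation processes. A minimal sketch of rule-based row validation in plain Python; the rules and field names are hypothetical examples of the idea, not a prescribed framework:

```python
# Each rule inspects one row and returns an error string, or None if the row passes.
RULES = [
    lambda r: None if r.get("id") is not None else "missing id",
    lambda r: None if isinstance(r.get("amount"), (int, float)) and r["amount"] >= 0
              else "amount must be a non-negative number",
    lambda r: None if r.get("currency") in {"INR", "USD", "EUR"} else "unknown currency",
]

def validate(rows):
    """Split rows into (good, rejected); rejected rows keep their failure reasons."""
    good, rejected = [], []
    for row in rows:
        errors = [e for e in (rule(row) for rule in RULES) if e]
        (rejected if errors else good).append((row, errors))
    return [r for r, _ in good], rejected

good, rejected = validate([
    {"id": 1, "amount": 10.0, "currency": "INR"},
    {"id": 2, "amount": -5, "currency": "INR"},
    {"amount": 3, "currency": "XYZ"},
])
print(len(good), len(rejected))  # → 1 2
```

In an Azure pipeline the same pattern typically runs as a validation step after ingestion, with rejected rows routed to a quarantine table for review rather than dropped.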
Qualifications (required):
- Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field.
- 3 to 5 years of professional experience as a Data Engineer, with a strong focus on Microsoft Azure data platforms.
- Proven experience with Azure Data Factory for orchestration and ETL/ELT.
- Solid understanding and hands-on experience with Azure Synapse Analytics (SQL pool, Spark pool) or Azure SQL Data Warehouse.
- Proficiency in SQL and experience with relational databases.
- Experience with Azure Data Lake Storage Gen2.
- Familiarity with data modeling, data warehousing concepts (e.g., Kimball methodology), and ETL/ELT processes.
- Strong programming skills in Python or Spark (PySpark).
- Experience with version control systems (e.g., Git).
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills.
Posted 1 month ago
5.0 - 10.0 years
18 - 30 Lacs
Noida
Remote
Role Title: Sr. Azure Data Platform Engineer
Location: India (1 remote role and 5 WFO at the Noida location; candidates need to have L2 and L3 support experience with the below)

We are seeking an Azure Data Platform Engineer with a strong focus on administration and hands-on experience with Azure platform engineering services. Ideal candidates should have expertise in administering services such as:
- Azure Key Vault
- Function Apps & Logic Apps
- Event Hub
- App Services
- Azure Data Factory (administration)
- Azure Monitor & Log Analytics
- Azure Databricks (administration)
- ETL processes
- Cosmos DB (administration)
- Azure DevOps & CI/CD pipelines
- Azure Synapse Analytics (administration)
- Python / shell scripting
- Azure Data Lake Storage (ADLS)
- Azure Kubernetes Service (AKS)

Additional knowledge of Tableau and Power BI would be a plus. Candidates should have hands-on experience managing and ensuring the stability, security, and performance of these platforms, with a focus on automation, monitoring, and incident management.
- Proficient in distributed system architectures and Azure data engineering services such as Event Hub, Data Factory, ADLS Gen2, Cosmos DB, Synapse, Databricks, APIM, Function Apps, Logic Apps, and App Services.
- Implement and manage infrastructure using IaC tools such as Azure Resource Manager (ARM) templates and Terraform.
- Manage containerized applications using Docker and orchestrate them with Azure Kubernetes Service (AKS).
- Set up and manage monitoring, logging, and alerting systems using Azure Monitor, Log Analytics, and Application Insights.
- Implement disaster recovery (DR) strategies, backups, and failover mechanisms for critical workloads.
- Automate infrastructure provisioning, scaling, and management for high availability and efficiency.
- Experienced in managing and maintaining clusters across development, test, preproduction, and production environments on Azure.
- Skilled in defining, scheduling, and monitoring job flows, with proactive alert setup.
- Adept at troubleshooting failed jobs in Azure tools such as Databricks and Data Factory, performing root cause analysis, and applying corrective measures.
- Hands-on experience with distributed streaming tools such as Event Hub.
- Expertise in designing and managing backup and disaster recovery solutions using Infrastructure as Code (IaC) with Terraform.
- Strong experience in automating processes using Python and shell scripting, and working with Jenkins and Azure DevOps.
- Proficient in designing and maintaining Azure CI/CD pipelines for seamless code integration, testing, and deployment.
- Experienced in monitoring and troubleshooting VM resources such as memory, CPU, OS, storage, and network.
- Skilled at monitoring applications and advising developers on improving job and workflow performance.
- Capable of reviewing and resolving log file issues for system and application components.
- Adaptable to evolving technologies, with a strong sense of responsibility and accomplishment.
- Knowledgeable in agile methodologies for software delivery.
- 5-15 years of experience with Azure and cloud platforms, leveraging cloud-native tools to build, manage, and optimize secure, scalable solutions.
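To illustrate the job-monitoring and alerting duties this role describes, here is a small Python sketch that scans pipeline logs and flags jobs that fail repeatedly. The log format and job names are invented; a real setup would query Azure Monitor / Log Analytics rather than parse a string:

```python
import re
from collections import Counter

# Hypothetical pipeline log lines, standing in for a Log Analytics export.
LOG = """\
2024-05-01 01:00:02 INFO  job=ingest_daily status=succeeded
2024-05-01 01:05:41 ERROR job=transform_orders status=failed reason=schema_mismatch
2024-05-01 01:10:09 ERROR job=transform_orders status=failed reason=schema_mismatch
2024-05-01 02:00:13 INFO  job=publish_marts status=succeeded
"""

# Match failure lines and capture the job name and failure reason.
FAIL = re.compile(r"ERROR\s+job=(?P<job>\S+).*reason=(?P<reason>\S+)")

def failed_jobs(log, threshold=2):
    """Count failures per job and return those at or above the alert threshold."""
    counts = Counter(m.group("job") for m in FAIL.finditer(log))
    return {job: n for job, n in counts.items() if n >= threshold}

print(failed_jobs(LOG))  # → {'transform_orders': 2}
```

The proactive-alerting idea is the same at scale: aggregate failures per job over a window, and page or ticket only when a threshold is crossed instead of on every single error.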
Posted 1 month ago
5.0 - 10.0 years
9 - 18 Lacs
Coimbatore
Hybrid
Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must-have skills: Microsoft Azure Analytics Services
Good-to-have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: BE/BTech/MTech

Summary: As an Application Lead, you will be responsible for leading the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve working with Microsoft Azure Analytics Services and collaborating with cross-functional teams to deliver high-quality solutions.

Roles & Responsibilities:
- Lead the design, development, and deployment of applications using Microsoft Azure Analytics Services.
- Collaborate with cross-functional teams to ensure the timely delivery of high-quality solutions.
- Act as the primary point of contact for all application-related issues, providing technical guidance and support to team members.
- Ensure adherence to best practices and standards for application development, testing, and deployment.
- Identify and mitigate risks and issues related to application development and deployment.

Professional & Technical Skills:
- Must-have skills: Strong experience in Microsoft Azure Analytics Services.
- Good-to-have skills: Experience in other Microsoft Azure services such as Azure Functions, Azure Logic Apps, and Azure Event Grid.
- Experience in designing, developing, and deploying applications using Microsoft Azure Analytics Services.
- Strong understanding of cloud computing concepts and principles.
- Experience working with Agile methodologies.
- Excellent problem-solving and analytical skills.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Microsoft Azure Analytics Services.
- The ideal candidate will possess a strong educational background in computer science or a related field, along with a proven track record of delivering high-quality solutions.
Posted 1 month ago
5.0 - 10.0 years
0 - 0 Lacs
Gurugram, Bengaluru, Delhi / NCR
Work from Office
Bachelor's or higher degree in Computer Science or a related discipline, or equivalent (minimum 4 years of work experience).
• At least 2+ years of consulting or client service delivery experience on Azure data solutions
• At least 2+ years of experience developing data ingestion, data processing, and analytical pipelines for big data, relational databases such as SQL Server, and data warehouse solutions such as Azure Synapse
• Extensive experience providing practical direction on using Azure native services
• Extensive hands-on experience implementing data ingestion, ETL, and data processing using Azure services: ADLS, Azure Data Factory, Azure Functions, Azure Logic Apps, Synapse/DW, Azure SQL DB, Databricks, etc.
• Experience in data analysis and data debugging, problem-solving skills, and business requirement understanding
• Minimum of 2+ years of hands-on experience with Azure and big data technologies such as Java, Python, SQL, ADLS/Blob, PySpark and Spark SQL, Databricks, and HDInsight
• Well versed in DevSecOps and CI/CD deployments
• Experience using big data file formats and compression techniques
• Experience working with developer tools such as Azure DevOps, Visual Studio Team Server, and Git
• Experience with private and public cloud architecture, pros/cons, and migration considerations
Posted 1 month ago
5.0 - 8.0 years
15 - 22 Lacs
Noida, Bengaluru, Delhi / NCR
Hybrid
Hi candidates, we have an opportunity with one of the leading IT consulting groups for the Data Engineer role. Interested candidates can mail their CVs to Abhishek.saxena@mounttalent.com

Job Description: what we're looking for in a Data Engineer III:
- 5+ years of experience with ETL processes and data warehouse architecture
- 5+ years of experience with Azure data services, i.e. ADF, ADLS Gen2, Azure SQL DB, Synapse, Azure Databricks, Microsoft Fabric
- 5+ years of experience designing business intelligence solutions
- Strong proficiency in SQL and Python/PySpark
- Implementation experience with the Medallion architecture and Delta Lake (or lakehouse)
- Experience with cloud-based data platforms, preferably Azure
- Familiarity with big data technologies and data warehousing concepts
- Working knowledge of Azure DevOps and CI/CD (build and release)
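The listing above asks for implementation experience with the Medallion architecture. A deliberately simplified sketch of the bronze/silver/gold layering in plain Python follows; in Databricks these layers would be Delta tables transformed with PySpark, and the records here are invented:

```python
# Bronze: raw records exactly as landed (strings, duplicates, bad rows included).
bronze = [
    {"order_id": "1", "amount": "100.0"},
    {"order_id": "1", "amount": "100.0"},   # duplicate
    {"order_id": "2", "amount": "bad"},     # unparseable
    {"order_id": "3", "amount": "250.5"},
]

def to_silver(rows):
    """Silver: typed, de-duplicated, with invalid rows filtered out."""
    seen, silver = set(), []
    for r in rows:
        try:
            amount = float(r["amount"])
        except ValueError:
            continue  # in practice this row would go to a quarantine table
        if r["order_id"] in seen:
            continue
        seen.add(r["order_id"])
        silver.append({"order_id": r["order_id"], "amount": amount})
    return silver

def to_gold(rows):
    """Gold: a business-level aggregate ready for BI consumption."""
    return {"orders": len(rows), "revenue": sum(r["amount"] for r in rows)}

silver = to_silver(bronze)
print(to_gold(silver))  # → {'orders': 2, 'revenue': 350.5}
```

The point of the layering is that each stage is reproducible from the one below it: bronze is never mutated, so silver and gold can always be rebuilt after a logic fix.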
Posted 1 month ago
4.0 - 9.0 years
1 - 2 Lacs
Kolkata, Pune, Chennai
Hybrid
Role & responsibilities:
- Develop modern data warehouse solutions using Databricks and the AWS/Azure stack
- Provide forward-thinking solutions in the data engineering and analytics space
- Collaborate with DW/BI leads to understand new ETL pipeline development requirements
- Triage issues to find gaps in existing pipelines and fix them
- Work with the business to understand reporting-layer needs and develop data models to fulfill them
- Help junior team members resolve issues and technical challenges
- Drive technical discussions with the client architect and team members
- Orchestrate the data pipelines in a scheduler via Airflow

Preferred candidate profile:
- Bachelor's and/or master's degree in computer science or equivalent experience
- Must have 3+ years of total IT experience and 3+ years of experience in data warehouse/ETL projects
- Deep understanding of Star and Snowflake dimensional modelling
- Strong knowledge of data management principles
- Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture
- Hands-on experience in SQL, Python, and Spark (PySpark)
- Must have experience with the AWS/Azure stack
- Desirable: ETL with batch and streaming (Kinesis)
- Experience building ETL / data warehouse transformation processes
- Experience with Apache Kafka for streaming / event-based data
- Experience with other open-source big data products, including Hadoop (incl. Hive, Pig, Impala)
- Experience with open-source non-relational / NoSQL data repositories (incl. MongoDB, Cassandra, Neo4j)
- Experience working with structured and unstructured data, including imaging and geospatial data
- Experience working in a DevOps environment with tools such as Terraform, CircleCI, and Git
- Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, and troubleshooting
- Databricks Certified Data Engineer Associate/Professional certification (desirable)
- Comfortable working in a dynamic, fast-paced, innovative environment with several ongoing concurrent projects
- Should have experience working in Agile methodology
- Strong verbal and written communication skills
- Strong analytical and problem-solving skills with a high attention to detail
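The listing above mentions orchestrating data pipelines in a scheduler via Airflow. The core idea, running tasks only after their upstream dependencies finish, can be sketched with Python's standard-library graphlib; the task names are hypothetical, and a real deployment would declare these as an Airflow DAG instead:

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline tasks mapped to their upstream dependencies,
# mirroring what an Airflow DAG definition would declare.
DAG = {
    "extract_sales": set(),
    "extract_customers": set(),
    "transform_join": {"extract_sales", "extract_customers"},
    "load_warehouse": {"transform_join"},
    "refresh_report": {"load_warehouse"},
}

# static_order() yields tasks so that every dependency precedes its dependents.
order = list(TopologicalSorter(DAG).static_order())
print(order[-1])  # → refresh_report
```

A scheduler like Airflow adds retries, backfills, and parallel execution of independent branches (both extracts here could run concurrently), but the dependency ordering is exactly this topological sort.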
Posted 1 month ago
6.0 - 11.0 years
30 - 40 Lacs
Chennai
Work from Office
Role & responsibilities:
- Data Engineer with experience working on data migration projects
- Experience with the Azure data stack, including Data Lake Storage, Synapse Analytics, ADF, Azure Databricks, and Azure ML
- Solid knowledge of Python, PySpark, and other Python packages
- Familiarity with ML workflows and collaboration with data science teams
- Strong understanding of data governance, security, and compliance in financial domains
- Experience with CI/CD tools and version control systems (e.g., Azure DevOps, Git)
- Experience modularizing and migrating ML logic

Note: We encourage interested candidates to submit their updated CVs to mohan.kumar@changepond.com
Posted 1 month ago
5.0 - 10.0 years
7 - 12 Lacs
Hyderabad
Work from Office
We Are Hiring: Senior .NET Backend Developer with Azure Data Engineering Experience
Job Location: Hyderabad, India
Work Mode: Onsite only
Experience: Minimum 6+ years
Qualification: B.Tech, B.E, MCA, M.Tech

Role Overview: We are seeking an experienced .NET backend developer with strong Azure data engineering skills to join our growing team in Hyderabad. You will work closely with cross-functional teams to build scalable backend systems, modern APIs, and data pipelines using cutting-edge tools like Azure Databricks and MS Fabric.

Technical Skills (Must-Have):
- Strong hands-on experience in C#, SQL Server, and OOP concepts
- Proficiency with .NET Core, ASP.NET Core, Web API, and Entity Framework (v6 or above)
- Strong understanding of microservices architecture
- Experience with Azure cloud technologies, including data engineering, Azure Databricks, MS Fabric, Azure SQL, Blob Storage, etc.
- Experience with Snowflake or similar cloud data platforms
- Experience working with NoSQL databases
- Skilled in database performance tuning and design patterns
- Working knowledge of Agile methodologies
- Ability to write reusable libraries and modular, maintainable code
- Excellent verbal and written communication skills (especially with US counterparts)
- Strong troubleshooting and debugging skills

Nice-to-Have Skills:
- Experience with Angular, MongoDB, NPM
- Familiarity with Azure DevOps CI/CD pipelines for build and release configuration
- Self-starter attitude with strong analytical and problem-solving abilities
- Willingness to work extra hours when needed to meet tight deadlines

Why Join Us:
- Work with a passionate, high-performing team
- Opportunity to grow your technical and leadership skills in a dynamic environment
- Be part of global digital transformation initiatives with top-tier clients
- Exposure to real-world enterprise data systems
- Opportunity to work on cutting-edge Azure and cloud technologies
- Performance-based growth and internal mobility opportunities
AzureDataEngineering Databricks MSFabric Snowflake Microservices CSharpJobs HyderabadJobs FullTimeJob HiringNow EntityFramework ASPNetCore CloudEngineering SQLJobs DevOps DotNetCore BackendJobs SuzvaCareers DataPlatformDeveloper SoftwareJobsIndia
Posted 1 month ago
4.0 - 8.0 years
6 - 10 Lacs
Pune, Gurugram
Work from Office
ZS is a place where passion changes lives. As a management consulting and technology firm focused on improving life and how we live it, our most valuable asset is our people. Here you'll work side-by-side with a powerful collective of thinkers and experts shaping life-changing solutions for patients, caregivers and consumers worldwide. ZSers drive impact by bringing a client-first mentality to each and every engagement. We partner collaboratively with our clients to develop custom solutions and technology products that create value and deliver company results across critical areas of their business. Bring your curiosity for learning, bold ideas, courage and passion to drive life-changing impact to ZS.
At ZS we honor the visible and invisible elements of our identities, personal experiences and belief systems: the ones that comprise us as individuals, shape who we are and make us unique. We believe your personal interests, identities, and desire to learn are part of your success here. Learn more about our diversity, equity, and inclusion efforts and the networks ZS supports to assist our ZSers in cultivating community spaces, obtaining the resources they need to thrive, and sharing the messages they are passionate about.
Business Technology
ZS's Technology group focuses on scalable strategies, assets and accelerators that deliver enterprise-wide transformation to our clients via cutting-edge technology. We leverage digital and technology solutions to optimize business processes, enhance decision-making, and drive innovation. Our services include, but are not limited to, Digital and Technology advisory, Product and Platform development, and Data, Analytics and AI implementation.
What you'll do
- Undertake complete ownership in accomplishing activities and assigned responsibilities across all phases of the project lifecycle to solve business problems across one or more client engagements
- Apply appropriate development methodologies (e.g. agile, waterfall) and best practices (e.g. mid-development client reviews, embedded QA procedures, unit testing) to ensure successful and timely completion of assignments
- Collaborate with other team members to leverage expertise and ensure seamless transitions
- Exhibit flexibility in undertaking new and challenging problems and demonstrate excellent task management
- Assist in creating project outputs such as business case development, solution vision and design, user requirements, prototypes, technical architecture (if needed), test cases, and operations management
- Bring transparency in driving assigned tasks to completion and report accurate status
- Bring a consulting mindset to problem solving and innovation by leveraging technical and business knowledge/expertise, and collaborate across other teams
- Assist senior team members and delivery leads in project management responsibilities
What you'll bring
- Big Data Technologies: Proficiency in working with big data technologies, particularly in the context of Azure Databricks, which may include Apache Spark for distributed data processing.
- Azure Databricks: In-depth knowledge of Azure Databricks for data engineering tasks, including data transformations, ETL processes, and job scheduling.
- SQL and Query Optimization: Strong SQL skills for data manipulation and retrieval, along with the ability to optimize queries for performance in Snowflake.
- ETL (Extract, Transform, Load): Expertise in designing and implementing ETL processes to move and transform data between systems, utilizing tools and frameworks available in Azure Databricks.
- Data Integration: Experience with integrating diverse data sources into a cohesive and usable format, ensuring data quality and integrity.
- Python/PySpark: Knowledge of programming languages like Python and PySpark for scripting and extending the functionality of Azure Databricks notebooks.
- Version Control: Familiarity with version control systems, such as Git, for managing code and configurations in a collaborative environment.
- Monitoring and Optimization: Ability to monitor data pipelines, identify bottlenecks, and optimize performance for both Azure Data Factory and Databricks.
- Security and Compliance: Understanding of security best practices and compliance considerations when working with sensitive data in Azure and Snowflake environments.
- Snowflake Data Warehouse: Experience in designing, implementing, and optimizing data warehouses using Snowflake, including schema design, performance tuning, and query optimization.
- Healthcare Domain Knowledge: Familiarity with US health plan terminologies and datasets is essential.
- Programming/Scripting Languages: Proficiency in Python, SQL, and PySpark is required.
- Cloud Platforms: Experience with AWS or Azure, specifically in building data pipelines, is needed.
- Cloud-Based Data Platforms: Working knowledge of Snowflake and Databricks is preferred.
- Data Pipeline Orchestration: Experience with Azure Data Factory and AWS Glue for orchestrating data pipelines is necessary.
- Relational Databases: Competency with relational databases such as PostgreSQL and MySQL is required, while experience with NoSQL databases is a plus.
- BI Tools: Knowledge of BI tools such as Tableau and Power BI is expected.
- Version Control: Proficiency with Git, including branching, merging, and pull requests, is required.
- CI/CD for Data Pipelines: Experience in implementing continuous integration and delivery for data workflows using tools like Azure DevOps is essential.
Additional Skills
- Experience with front-end technologies such as SQL, JavaScript, HTML, CSS, and Angular is advantageous.
- Familiarity with web development frameworks like Flask, Django, and FastAPI is beneficial.
- Basic knowledge of AWS CI/CD practices is a plus.
- Strong verbal and written communication skills with the ability to articulate results and issues to internal and client teams.
- Proven ability to work creatively and analytically in a problem-solving environment.
- Willingness to travel to other global offices as needed to work with client or other internal project teams.
Perks & Benefits
ZS offers a comprehensive total rewards package including health and well-being, financial planning, annual leave, personal growth and professional development. Our robust skills development programs, multiple career progression options, internal mobility paths and collaborative culture empower you to thrive as an individual and global team member.
We are committed to giving our employees a flexible and connected way of working. A flexible and connected ZS allows us to combine work from home and on-site presence at clients/ZS offices for the majority of our week. The magic of ZS culture and innovation thrives in both planned and spontaneous face-to-face connections.
Travel
Travel is a requirement at ZS for client-facing ZSers; business needs of your project and client are the priority. While some projects may be local, all client-facing ZSers should be prepared to travel as needed. Travel provides opportunities to strengthen client relationships, gain diverse experiences, and enhance professional growth by working in different environments and cultures.
Considering applying?
At ZS, we're building a diverse and inclusive company where people bring their passions to inspire life-changing impact and deliver better outcomes for all. We are most interested in finding the best candidate for the job and recognize the value that candidates with all backgrounds, including non-traditional ones, bring. If you are interested in joining us, we encourage you to apply even if you don't meet 100% of the requirements listed above.
To Complete Your Application
Candidates must possess or be able to obtain work authorization for their intended country of employment. An online application, including a full set of transcripts (official or unofficial), is required to be considered.
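The ETL skills this listing emphasizes (extract, transform, load, with data-quality filtering along the way) can be sketched in plain Python. This is a minimal illustration only, using SQLite as a stand-in warehouse; the table, columns, and sample data are invented for the example and do not come from the posting:

```python
# Minimal ETL sketch (hypothetical schema): extract rows from a CSV source,
# transform/validate them, and load the clean rows into a warehouse table.
import csv
import io
import sqlite3

# Extract: in practice this would be a file or API; here, an in-memory CSV.
raw = io.StringIO("id,amount\n1,10.5\n2,oops\n3,4.0\n")
rows = list(csv.DictReader(raw))

# Transform: cast types and reject malformed records.
def clean(row):
    try:
        return int(row["id"]), float(row["amount"])
    except ValueError:
        return None  # e.g. the "oops" amount is dropped

cleaned = [r for r in map(clean, rows) if r is not None]

# Load: insert the surviving rows and verify with an aggregate query.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (id INTEGER PRIMARY KEY, amount REAL)")
con.executemany("INSERT INTO sales VALUES (?, ?)", cleaned)
total = con.execute("SELECT SUM(amount) FROM sales").fetchone()[0]
print(total)  # 14.5 (row 2 was rejected during the transform step)
```

In a Databricks/ADF setting the same extract-validate-load shape applies, just with Spark DataFrames and cloud storage in place of `csv` and SQLite.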
Posted 1 month ago
5.0 - 10.0 years
7 - 12 Lacs
Bengaluru
Work from Office
Job Overview
- Foster effective collaboration with diverse teams across various functions and regions.
- Collect and analyze large datasets to identify patterns, trends, and insights that will inform business strategies and decision-making processes.
- Develop and maintain reports, dashboards, and other tools to monitor and track supply chain performance.
- Assist in the development and implementation of supply chain strategies that align with business objectives.
- Identify and suggest solutions to potential supply chain risks and challenges.
- Build data models and perform data mining to discover new opportunities and areas of improvement.
- Conduct data quality checks to ensure accuracy, completeness, and consistency of data sets.
What your background should look like:
- 5+ years of hands-on experience in Data Engineering within the supply chain domain.
- Proficiency in Azure Data Engineering technologies, including but not limited to ETL processes, Azure Data Warehouse (DW), Azure Databricks, and MS SQL.
- Strong expertise in developing and maintaining scalable data pipelines, data models, and integrations to support analytics and decision-making.
- Experience in optimizing data workflows for performance, scalability, and reliability.
ABOUT TE CONNECTIVITY
TE Connectivity plc (NYSE: TEL) is a global industrial technology leader creating a safer, sustainable, productive, and connected future. Our broad range of connectivity and sensor solutions enable the distribution of power, signal and data to advance next-generation transportation, energy networks, automated factories, data centers, medical technology and more. With more than 85,000 employees, including 9,000 engineers, working alongside customers in approximately 130 countries, TE ensures that EVERY CONNECTION COUNTS. Learn more at www.te.com and on LinkedIn, Facebook, WeChat, Instagram and X (formerly Twitter).
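The data-quality checks this role describes (completeness and consistency of data sets) can be sketched in a few lines of Python. The record fields and validation rules below are hypothetical, chosen only to illustrate the idea for a supply-chain data set:

```python
# Data-quality-check sketch (hypothetical supply-chain records): flag records
# that are incomplete (missing fields) or inconsistent (negative quantity).
records = [
    {"sku": "A1", "qty": 5,  "warehouse": "PUN"},
    {"sku": "B2", "qty": -3, "warehouse": "BLR"},   # inconsistent quantity
    {"sku": "C3", "qty": 7,  "warehouse": None},    # incomplete record
]

def quality_issues(rec):
    """Return a list of issue labels for one record (empty list = clean)."""
    issues = []
    if any(v is None for v in rec.values()):
        issues.append("incomplete")
    if isinstance(rec.get("qty"), int) and rec["qty"] < 0:
        issues.append("negative_qty")
    return issues

report = {r["sku"]: quality_issues(r) for r in records}
print(report)  # {'A1': [], 'B2': ['negative_qty'], 'C3': ['incomplete']}
```

A production pipeline would run the same kind of rule set at scale (e.g. as Databricks expectations or SQL constraints) and route failing records to a quarantine table instead of printing a report.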
Posted 1 month ago
3.0 - 7.0 years
6 - 10 Lacs
Mumbai
Work from Office
Senior Azure Data Engineer - L1 Support
Posted 1 month ago
8.0 - 13.0 years
16 - 22 Lacs
Chennai, Bengaluru, Delhi / NCR
Work from Office
About the job:
Role: Senior Databricks Engineer / Databricks Technical Lead / Data Architect
Experience: 8-15 years
Location: Bangalore, Chennai, Delhi, Pune
Primary Roles and Responsibilities:
- Develop Modern Data Warehouse solutions using Databricks and the AWS/Azure stack
- Provide forward-thinking solutions in the data engineering and analytics space
- Collaborate with DW/BI leads to understand new ETL pipeline development requirements
- Triage issues to find gaps in existing pipelines and fix them
- Work with the business to understand reporting-layer needs and develop data models to fulfill them
- Help junior team members resolve issues and technical challenges
- Drive technical discussions with client architects and team members
- Orchestrate data pipelines in a scheduler via Airflow
Skills and Qualifications:
- Bachelor's and/or master's degree in computer science, or equivalent experience
- Must have 6+ years of total IT experience and 3+ years of experience in Data Warehouse/ETL projects
- Deep understanding of Star and Snowflake dimensional modelling
- Strong knowledge of Data Management principles
- Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture
- Hands-on experience in SQL, Python and Spark (PySpark)
- Must have experience in the AWS/Azure stack
- Desirable: ETL with batch and streaming (Kinesis)
- Experience in building ETL / data warehouse transformation processes
- Experience with Apache Kafka for streaming / event-based data
- Experience with other open-source big data products, including Hadoop (incl. Hive, Pig, Impala)
- Experience with open-source non-relational / NoSQL data repositories (incl. MongoDB, Cassandra, Neo4j)
- Experience working with structured and unstructured data, including imaging and geospatial data
- Experience working in a DevOps environment with tools such as Terraform, CircleCI, Git
- Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning and troubleshooting
- Databricks Certified Data Engineer Associate/Professional certification (desirable)
- Comfortable working in a dynamic, fast-paced, innovative environment with several ongoing concurrent projects
- Experience working in Agile methodology
- Strong verbal and written communication skills
- Strong analytical and problem-solving skills with high attention to detail
Location: Bangalore, Chennai, Delhi / NCR, Pune
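The Star dimensional modelling this listing asks for can be illustrated with a tiny example: a fact table joined to a dimension table and aggregated. This is a hedged sketch using SQLite for portability; the `fact_orders`/`dim_date` tables and their contents are invented for the example:

```python
# Star-schema sketch (hypothetical tables): one fact table of orders keyed to
# a date dimension, queried the way a dimensional model is typically used.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_date (date_key INTEGER PRIMARY KEY, year INTEGER);
CREATE TABLE fact_orders (order_id INTEGER, date_key INTEGER, amount REAL);
INSERT INTO dim_date VALUES (20240101, 2024), (20250101, 2025);
INSERT INTO fact_orders VALUES (1, 20240101, 100.0),
                               (2, 20250101, 50.0),
                               (3, 20250101, 25.0);
""")

# Aggregate the facts along a dimension attribute (revenue per year).
rows = con.execute("""
    SELECT d.year, SUM(f.amount)
    FROM fact_orders AS f
    JOIN dim_date AS d USING (date_key)
    GROUP BY d.year
    ORDER BY d.year
""").fetchall()
print(rows)  # [(2024, 100.0), (2025, 75.0)]
```

A Snowflake schema differs only in that the dimensions are further normalized into sub-dimension tables; the fact-to-dimension join pattern stays the same.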
Posted 1 month ago
8.0 - 13.0 years
25 - 40 Lacs
Bengaluru
Work from Office
Must-Have Skills:
- Azure Databricks / PySpark (hands-on)
- SQL/PL-SQL (advanced level)
- Snowflake (2+ years)
- Spark/data pipeline development (2+ years)
- Azure Repos / GitHub, Azure DevOps
- Unix shell scripting
- Cloud technology experience
Key Responsibilities:
1. Design, build, and manage data pipelines using Azure Databricks, PySpark, and Snowflake.
2. Analyze and resolve production issues (Tier 2 support with weekend/on-call rotation).
3. Write and optimize complex SQL/PL-SQL queries.
4. Collaborate on low-level and high-level design for data solutions.
5. Document all project deliverables and support deployment.
Good to Have:
- Knowledge of Oracle, Qlik Replicate, GoldenGate, Hadoop
- Job scheduler tools like Control-M or Airflow
Behavioral:
- Strong problem-solving and communication skills
Posted 1 month ago
6.0 - 8.0 years
8 - 10 Lacs
Kolkata
Work from Office
Job Summary:
We are seeking an experienced Data Engineer with strong expertise in Databricks, Python, PySpark, and Power BI, along with a solid background in data integration and the modern Azure ecosystem. The ideal candidate will play a critical role in designing, developing, and implementing scalable data engineering solutions and pipelines.
Key Responsibilities:
- Design, develop, and implement robust data solutions using Azure Data Factory, Databricks, and related data engineering tools.
- Build and maintain scalable ETL/ELT pipelines with a focus on performance and reliability.
- Write efficient and reusable code using Python and PySpark.
- Perform data cleansing, transformation, and migration across various platforms.
- Work hands-on with Azure Data Factory (ADF); at least 1.5 to 2 years of ADF experience is expected.
- Develop and optimize SQL queries and stored procedures, and manage large data sets using SQL Server, T-SQL, PL/SQL, etc.
- Collaborate with cross-functional teams to understand business requirements and provide data-driven solutions.
- Engage directly with clients and business stakeholders to gather requirements, suggest optimal solutions, and ensure successful delivery.
- Work with Power BI for basic reporting and data visualization tasks.
- Apply strong knowledge of data warehousing concepts, modern data platforms, and cloud-based analytics.
- Adhere to coding standards and best practices, including thorough documentation and testing (unit, integration, performance).
- Support the operations, maintenance, and enhancement of existing data pipelines and architecture.
- Estimate tasks and plan release cycles effectively.
Required Technical Skills:
- Languages & Frameworks: Python, PySpark
- Cloud & Tools: Azure Data Factory, Databricks, Azure ecosystem
- Databases: SQL Server, T-SQL, PL/SQL
- Reporting & BI Tools: Power BI
- Data Concepts: Data Warehousing, ETL/ELT, Data Cleansing, Data Migration
- Other: Version control, Agile methodologies, good problem-solving skills
Preferred Qualifications:
- Experience with coding in Pysense within Databricks (added advantage)
- Solid understanding of cloud data architecture and analytics processes
- Ability to independently initiate and lead conversations with business stakeholders
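The data cleansing named in this listing often amounts to normalizing values and deduplicating on a business key before loading. A minimal sketch in plain Python, with a hypothetical email/city schema invented for the example:

```python
# Cleansing sketch (hypothetical records): trim whitespace, normalize case,
# and deduplicate on a business key (email), keeping the first occurrence.
raw = [
    {"email": " Alice@Example.com ", "city": "Kolkata"},
    {"email": "alice@example.com",   "city": "Kolkata"},  # duplicate key
    {"email": "bob@example.com",     "city": "Pune"},
]

seen, cleaned = set(), []
for rec in raw:
    key = rec["email"].strip().lower()
    if key in seen:
        continue  # drop records whose normalized key was already seen
    seen.add(key)
    cleaned.append({"email": key, "city": rec["city"]})

print(len(cleaned))  # 2
```

The same normalize-then-dedup pattern maps directly onto PySpark (`lower`/`trim` column expressions followed by `dropDuplicates`) when the data no longer fits in memory.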
Posted 1 month ago
5.0 - 8.0 years
15 - 25 Lacs
Gurugram, Bengaluru
Hybrid
Warm greetings from SP Staffing!
Role: Azure Data Engineer
Experience Required: 5 to 8 yrs
Work Location: Bangalore/Gurgaon
Required Skills: Azure Databricks, ADF, PySpark/SQL
Interested candidates can send resumes to nandhini.spstaffing@gmail.com
Posted 1 month ago