6.0 - 11.0 years
15 - 30 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
Warm Greetings from SP Staffing Services Private Limited!! We have an urgent opening with our CMMI Level 5 client for the below position. Please send your updated profile if you are interested. Relevant Experience : 6 - 15 Yrs Location : Pan India Job Description : - Candidate must be proficient in Databricks. - Understands where to obtain the information needed to make appropriate decisions. - Demonstrates the ability to break down a problem into manageable pieces and implement effective, timely solutions. - Identifies the problem versus the symptoms. - Manages problems that require the involvement of others to solve. - Reaches sound decisions quickly. - Develops solutions to meet business needs that reflect a clear understanding of the objectives, practices, and procedures of the corporation, department, and business unit. Roles & Responsibilities : - Provides innovative and cost-effective solutions using Databricks. - Optimizes the use of all available resources. - Learns and adapts quickly to new technologies as per business needs. - Develops a team of Operations Excellence, building tools and capabilities that development teams leverage to maintain high levels of performance, scalability, security, and availability. Skills : - The candidate must have 7-10 yrs of experience in Databricks / Delta Lake. - Hands-on experience with Azure. - Experience in Python scripting. - Relevant experience with ETL methods and with retrieving data from dimensional data models and data warehouses. - Strong experience with relational databases and data access methods, especially SQL. - Knowledge of Azure architecture and design. Interested candidates can share their resume to sankarspstaffings@gmail.com with the below inline details: Over All Exp : Relevant Exp : Current CTC : Expected CTC : Notice Period :
Posted 2 months ago
5.0 - 10.0 years
8 - 14 Lacs
Hyderabad
Work from Office
Job Title : Azure Synapse Developer Position Type : Permanent Experience : 5+ Years Location : Hyderabad (Work From Office / Hybrid) Shift Timings : 2 PM to 11 PM Mode of Interview : 3 rounds (Virtual/In-person) Notice Period : Immediate to 15 days Job Description : We are looking for an experienced Azure Synapse Developer to join our growing team. The ideal candidate should have a strong background in Azure Synapse Analytics, SSRS, and Azure Data Factory (ADF), with a solid understanding of data modeling, data movement, and integration. As an Azure Synapse Developer, you will work closely with cross-functional teams to design, implement, and manage data pipelines, ensuring the smooth flow of data across platforms. The candidate must have a deep understanding of SQL and ETL processes, and ideally, some exposure to Power BI for reporting and dashboard creation. Key Responsibilities : - Develop and maintain Azure Synapse Analytics solutions, ensuring scalability, security, and performance. - Design and implement data models for efficient storage and retrieval of data in Azure Synapse. - Utilize Azure Data Factory (ADF) for ETL processes, orchestrating data movement, and integrating data from various sources. - Leverage SSIS/SSRS/SSAS to build, deploy, and maintain data integration and reporting solutions. - Write and optimize SQL queries for data manipulation, extraction, and reporting. - Collaborate with business analysts and other stakeholders to understand reporting needs and create actionable insights. - Perform performance tuning on SQL queries, pipelines, and Synapse workloads to ensure high performance. - Provide support for troubleshooting and resolving data integration and performance issues. - Assist in setting up automated data processes and create reusable templates for data integration. - Stay updated on Azure Synapse features and tools, recommending improvements to the data platform as appropriate. 
Required Skills & Qualifications : - 5+ years of experience as a Data Engineer or Azure Synapse Developer. - Strong proficiency in Azure Synapse Analytics (Data Warehouse, Data Lake, and Analytics). - Solid understanding and experience in data modeling for large-scale data architectures. - Expertise in SQL for writing complex queries, optimizing performance, and managing large datasets. - Hands-on experience with Azure Data Factory (ADF) for data integration, ETL processes, and pipeline creation. - SSRS (SQL Server Reporting Services) and SSIS (SQL Server Integration Services) expertise. - Power BI knowledge (basic to intermediate) for reporting and data visualization. - Familiarity with SSAS (SQL Server Analysis Services) and OLAP concepts is a plus. - Experience in troubleshooting and optimizing complex data processing tasks. - Strong communication and collaboration skills to work effectively in a team-oriented environment. - Ability to quickly adapt to new tools and technologies in the Azure ecosystem.
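The ETL and SQL responsibilities described in this posting follow a common extract-transform-load shape. A minimal, illustrative sketch is below, using Python's built-in sqlite3 as a stand-in for an Azure SQL pool; all table and column names are hypothetical:

```python
import sqlite3

# Minimal ETL sketch: extract rows from a staging table, transform
# (aggregate), and load into a reporting table. Names are hypothetical.
def run_etl(conn: sqlite3.Connection) -> int:
    cur = conn.cursor()
    # Extract: raw sales rows from the staging table
    cur.execute("SELECT region, amount FROM staging_sales")
    rows = cur.fetchall()
    # Transform: aggregate amounts per region
    totals: dict[str, float] = {}
    for region, amount in rows:
        totals[region] = totals.get(region, 0.0) + amount
    # Load: replace the contents of the reporting table
    cur.execute("DELETE FROM sales_by_region")
    cur.executemany(
        "INSERT INTO sales_by_region (region, total) VALUES (?, ?)",
        totals.items(),
    )
    conn.commit()
    return len(totals)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staging_sales (region TEXT, amount REAL)")
conn.execute("CREATE TABLE sales_by_region (region TEXT, total REAL)")
conn.executemany(
    "INSERT INTO staging_sales VALUES (?, ?)",
    [("south", 100.0), ("south", 50.0), ("north", 75.0)],
)
regions_loaded = run_etl(conn)
```

In Azure Data Factory or Synapse pipelines, the same extract-transform-load steps would be expressed as pipeline activities rather than inline Python.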
Posted 2 months ago
10 - 12 years
13 - 20 Lacs
Kolkata
Work from Office
Key Responsibilities : - Understand the factories, manufacturing processes, data availability, and avenues for improvement. - Brainstorm, together with engineering, manufacturing, and quality, problems that can be solved using the acquired data in the data lake platform. - Define what data is required to create a solution, and work with connectivity engineers and users to collect the data. - Create and maintain optimal data pipeline architecture. - Assemble large, complex data sets that meet functional / non-functional business requirements. - Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery for greater scalability. - Work on data preparation and data deep dives; help engineering, process, and quality teams understand process/machine behavior more closely using available data. - Deploy and monitor the solution. - Work with data and analytics experts to strive for greater functionality in our data systems. - Work together with Data Architects and data modeling teams. Skills / Competencies : - Good knowledge of the business vertical, with prior experience in solving different use cases in the manufacturing or a similar industry. - Ability to bring cross-industry learning to benefit the use cases aimed at improving the manufacturing process. Problem Scoping / Definition Skills : - Experience in problem scoping, solving, and quantification. - Strong analytic skills related to working with unstructured datasets. - Build processes supporting data transformation, data structures, metadata, dependency, and workload management.
- Working knowledge of message queuing, stream processing, and highly scalable 'big data' data stores. - Ability to foresee and identify all the right data required to solve the problem. Data Wrangling Skills : - Strong skills in data mining and data wrangling techniques for creating the required analytical dataset. - Experience building and optimizing 'big data' data pipelines, architectures, and data sets. - An adaptive mindset to improvise on data challenges and employ techniques to drive desired outcomes. Programming Skills : - Experience with big data tools: Spark, Delta, CDC, NiFi, Kafka, etc. - Experience with relational SQL and NoSQL databases and query languages, including Oracle, Hive, and Spark SQL. - Experience with object-oriented languages: Scala, Java, C++, etc. Visualization Skills : - Know-how of visualization tools such as Power BI and Tableau. - Good storytelling skills to present data in a simple and meaningful manner. Data Engineering Skills : - Strong skills in data analysis techniques to generate findings and insights by means of exploratory data analysis. - Good understanding of how to transform and connect data of various types and forms. - Great numerical and analytical skills. - Identify opportunities for data acquisition. - Explore ways to enhance data quality and reliability. - Build algorithms and prototypes. - Reformulate existing frameworks to optimize their functioning. - Good understanding of optimization techniques to make the system performant for the requirements.
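The exploratory data analysis and data-quality work this role describes typically starts with profiling a batch of records: per-column null counts and distinct counts. A minimal sketch, with hypothetical field names:

```python
# Simple data-profiling step of the kind used in exploratory data
# analysis: per-column null and distinct counts over a batch of records.
# Field names (machine_id, temp_c) are hypothetical.
def profile(records):
    columns = set().union(*(r.keys() for r in records))
    report = {}
    for col in sorted(columns):
        values = [r.get(col) for r in records]
        non_null = [v for v in values if v is not None]
        report[col] = {
            "nulls": len(values) - len(non_null),
            "distinct": len(set(non_null)),
        }
    return report

batch = [
    {"machine_id": "M1", "temp_c": 71.5},
    {"machine_id": "M2", "temp_c": None},
    {"machine_id": "M1", "temp_c": 70.2},
]
report = profile(batch)
```

At data lake scale the same checks would run as Spark aggregations rather than Python loops, but the profiling logic is the same.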
Posted 2 months ago
10 - 12 years
13 - 20 Lacs
Chennai
Work from Office
Key Responsibilities : - Understand the factories, manufacturing processes, data availability, and avenues for improvement. - Brainstorm, together with engineering, manufacturing, and quality, problems that can be solved using the acquired data in the data lake platform. - Define what data is required to create a solution, and work with connectivity engineers and users to collect the data. - Create and maintain optimal data pipeline architecture. - Assemble large, complex data sets that meet functional / non-functional business requirements. - Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery for greater scalability. - Work on data preparation and data deep dives; help engineering, process, and quality teams understand process/machine behavior more closely using available data. - Deploy and monitor the solution. - Work with data and analytics experts to strive for greater functionality in our data systems. - Work together with Data Architects and data modeling teams. Skills / Competencies : - Good knowledge of the business vertical, with prior experience in solving different use cases in the manufacturing or a similar industry. - Ability to bring cross-industry learning to benefit the use cases aimed at improving the manufacturing process. Problem Scoping / Definition Skills : - Experience in problem scoping, solving, and quantification. - Strong analytic skills related to working with unstructured datasets. - Build processes supporting data transformation, data structures, metadata, dependency, and workload management.
- Working knowledge of message queuing, stream processing, and highly scalable 'big data' data stores. - Ability to foresee and identify all the right data required to solve the problem. Data Wrangling Skills : - Strong skills in data mining and data wrangling techniques for creating the required analytical dataset. - Experience building and optimizing 'big data' data pipelines, architectures, and data sets. - An adaptive mindset to improvise on data challenges and employ techniques to drive desired outcomes. Programming Skills : - Experience with big data tools: Spark, Delta, CDC, NiFi, Kafka, etc. - Experience with relational SQL and NoSQL databases and query languages, including Oracle, Hive, and Spark SQL. - Experience with object-oriented languages: Scala, Java, C++, etc. Visualization Skills : - Know-how of visualization tools such as Power BI and Tableau. - Good storytelling skills to present data in a simple and meaningful manner. Data Engineering Skills : - Strong skills in data analysis techniques to generate findings and insights by means of exploratory data analysis. - Good understanding of how to transform and connect data of various types and forms. - Great numerical and analytical skills. - Identify opportunities for data acquisition. - Explore ways to enhance data quality and reliability. - Build algorithms and prototypes. - Reformulate existing frameworks to optimize their functioning. - Good understanding of optimization techniques to make the system performant for the requirements.
Posted 2 months ago
10 - 18 years
12 - 22 Lacs
Pune, Bengaluru
Hybrid
Hi, We are hiring for the role of AWS Data Engineer with one of the leading organizations for Bangalore & Pune. Experience - 10+ Years Location - Bangalore & Pune CTC - Best in the industry Job Description : Technical Skills - PySpark coding skills - Proficiency in AWS Data Engineering Services - Experience in designing Data Pipelines & Data Lakes If interested, kindly share your resume at nupur.tyagi@mounttalent.com
Posted 2 months ago
3 - 7 years
5 - 9 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
About Emperen Technologies : Emperen Technologies is a leading consulting firm committed to delivering tangible results for clients through a relationship-driven approach. With successful implementations for Fortune 500 companies, non-profits, and startups, Emperen Technologies exemplifies a client-centric model that prioritizes values and scalable, flexible solutions. Emperen specializes in navigating complex technological landscapes, empowering clients to achieve growth and success. Role Description : Emperen Technologies is seeking a highly skilled Senior Master Data Management (MDM) Engineer to join our team on a contract basis. This is a remote position where the Senior MDM Engineer will be responsible for a variety of key tasks including data engineering, data modeling, ETL processes, data warehousing, and data analytics. The role demands a strong understanding of MDM platforms, cloud technologies, and data integration, as well as the ability to work collaboratively in a dynamic environment. Key Responsibilities : - Design, implement, and manage Master Data Management (MDM) solutions to ensure data consistency and accuracy across the organization. - Oversee the architecture and operation of data modeling, ETL processes, and data warehousing. - Develop and execute data quality strategies to maintain high-quality data in line with business needs. - Build and integrate data pipelines using Microsoft Azure, DevOps, and GitLab technologies. - Implement data governance policies and ensure compliance with data security and privacy regulations. - Collaborate with cross-functional teams to define and execute business and technical requirements. - Analyze data to support business intelligence and decision-making processes. - Provide ongoing support for data integration, ensuring smooth operation and optimal performance. - Troubleshoot and resolve technical issues related to MDM, data integration, and related processes. 
- Work on continuous improvements of the MDM platform and related data processes. Qualifications : Required Skills & Experience : - Proven experience in Master Data Management (MDM), with hands-on experience on platforms like Profisee MDM and Microsoft Master Data Services (MDS). - Solid experience in Microsoft Azure cloud technologies. - Expertise in DevOps processes and using GitLab for version control and deployment. - Strong background in Data Warehousing, Azure Data Lakes, and Business Intelligence (BI) tools. - Expertise in Data Governance, data architecture, data modeling, and data integration (particularly using REST APIs). - Knowledge and experience in data quality, data security, and privacy best practices. - Experience working with business stakeholders and technical teams to analyze business requirements and translate them into effective data solutions. - Basic business analysis skills: the ability to assess business needs, translate them into technical requirements, and ensure alignment between data management systems and business goals. Preferred Qualifications : - Experience with big data technologies and advanced analytics platforms. - Familiarity with data integration tools such as Talend or Informatica. - Knowledge of data visualization tools such as Power BI or Tableau. - Certifications in relevant MDM, cloud technologies, or data management platforms are a plus. Location : Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote
Posted 2 months ago
12 - 22 years
35 - 65 Lacs
Chennai
Hybrid
Warm Greetings from SP Staffing Services Private Limited!! We have an urgent opening with our CMMI Level 5 client for the below position. Please send your updated profile if you are interested. Relevant Experience : 8 - 24 Yrs Location : Pan India Job Description : - Candidates should have a minimum of 2 years' hands-on experience as an Azure Databricks Architect. If interested, please forward your updated resume to sankarspstaffings@gmail.com / Sankar@spstaffing.in With Regards, Sankar G Sr. Executive - IT Recruitment
Posted 2 months ago
10 - 18 years
35 - 55 Lacs
Hyderabad, Bengaluru, Mumbai (All Areas)
Hybrid
Warm Greetings from SP Staffing Services Private Limited!! We have an urgent opening with our CMMI Level 5 client for the below position. Please send your updated profile if you are interested. Relevant Experience : 8 Yrs - 18 Yrs Location : Pan India Job Description : - Experience in Synapse with PySpark - Knowledge of Big Data pipelines / Data Engineering - Working knowledge of the MSBI stack on Azure - Working knowledge of Azure Data Factory, Azure Data Lake, and Azure Data Lake Storage - Hands-on with visualization tools like Power BI - Implement end-to-end data pipelines using Cosmos / Azure Data Factory - Good analytical thinking and problem solving - Good communication and coordination skills - Able to work as an individual contributor - Requirement analysis - Create, maintain, and enhance Big Data pipelines - Daily status reporting and interacting with leads - Version control (ADO, Git) and CI/CD - Marketing campaign experience - Data platform / product telemetry - Analytical thinking - Data validation of the new streams - Data quality checks of the new streams - Monitoring of data pipelines created in Azure Data Factory - Updating the tech spec and wiki page for each pipeline implementation - Updating ADO on a daily basis If interested, please forward your updated resume to sankarspstaffings@gmail.com / Sankar@spstaffing.in With Regards, Sankar G Sr. Executive - IT Recruitment
Posted 2 months ago
10 - 20 years
35 - 55 Lacs
Hyderabad, Bengaluru, Mumbai (All Areas)
Hybrid
Warm Greetings from SP Staffing Services Private Limited!! We have an urgent opening with our CMMI Level 5 client for the below position. Please send your updated profile if you are interested. Relevant Experience : 8 Yrs - 18 Yrs Location : Pan India Job Description : Mandatory Skill : Azure ADB with Azure Data Lake. - Lead the architecture design and implementation of advanced analytics solutions using Azure Databricks / Fabric. The ideal candidate will have a deep understanding of big data technologies, data engineering, and cloud computing, with a strong focus on Azure Databricks, along with strong SQL. - Work closely with business stakeholders and other IT teams to understand requirements and deliver effective solutions. - Oversee the end-to-end implementation of data solutions, ensuring alignment with business requirements and best practices. - Lead the development of data pipelines and ETL processes using Azure Databricks, PySpark, and other relevant tools. - Integrate Azure Databricks with other Azure services (e.g., Azure Data Lake, Azure Synapse, Azure Data Factory) and on-premise systems. - Provide technical leadership and mentorship to the data engineering team, fostering a culture of continuous learning and improvement. - Ensure proper documentation of architecture, processes, and data flows, while ensuring compliance with security and governance standards. - Ensure best practices are followed in terms of code quality, data security, and scalability. - Stay updated with the latest developments in Databricks and associated technologies to drive innovation. Essential Skills : - Strong experience with Azure Databricks, including cluster management, notebook development, and Delta Lake. - Proficiency in big data technologies (e.g., Hadoop, Spark) and data processing frameworks (e.g., PySpark). - Deep understanding of Azure services like Azure Data Lake, Azure Synapse, and Azure Data Factory. - Experience with ETL/ELT processes, data warehousing, and building data lakes. - Strong SQL skills and familiarity with NoSQL databases. - Experience with CI/CD pipelines and version control systems like Git. - Knowledge of cloud security best practices. Soft Skills : - Excellent communication skills with the ability to explain complex technical concepts to non-technical stakeholders. - Strong problem-solving skills and a proactive approach to identifying and resolving issues. - Leadership skills with the ability to manage and mentor a team of data engineers. Experience : - Demonstrated expertise of 8 years in developing data ingestion and transformation pipelines using Databricks/Synapse notebooks and Azure Data Factory. - Solid understanding and hands-on experience with Delta tables, Delta Lake, and Azure Data Lake Storage Gen2. - Experience in efficiently using Auto Loader and Delta Live Tables for seamless data ingestion and transformation. - Proficiency in building and optimizing query layers using Databricks SQL. - Demonstrated experience integrating Databricks with Azure Synapse, ADLS Gen2, and Power BI for end-to-end analytics solutions. - Prior experience in developing, optimizing, and deploying Power BI reports. - Familiarity with modern CI/CD practices, especially in the context of Databricks and cloud-native solutions. If interested, please forward your updated resume to sankarspstaffings@gmail.com / Sankar@spstaffing.in With Regards, Sankar G Sr. Executive - IT Recruitment
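The Delta table work this posting emphasizes centers on upsert (MERGE) semantics: incoming change records update matching keys and insert new ones. A plain-Python sketch for illustration only; keys and fields are hypothetical, and on Databricks this would be a Delta Lake MERGE rather than dict manipulation:

```python
# Delta-style MERGE (upsert) semantics sketched over plain dicts:
# rows in `updates` overwrite matching keys in `target` and are
# inserted when the key is new. Keys/fields are hypothetical.
def merge_upsert(target, updates, key):
    by_key = {row[key]: dict(row) for row in target}
    for row in updates:
        by_key.setdefault(row[key], {}).update(row)
    return list(by_key.values())

current = [{"id": 1, "status": "open"}, {"id": 2, "status": "open"}]
changes = [{"id": 2, "status": "closed"}, {"id": 3, "status": "open"}]
merged = merge_upsert(current, changes, key="id")
```

Auto Loader and Delta Live Tables automate the ingestion side of this pattern, feeding change records into exactly this kind of keyed merge at scale.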
Posted 2 months ago
11 - 20 years
20 - 35 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
Warm Greetings from SP Staffing Services Private Limited!! We have an urgent opening with our CMMI Level 5 client for the below position. Please send your updated profile if you are interested. Relevant Experience : 11 - 20 Yrs Location : Pan India Job Description : - Minimum of 2 years' hands-on experience as a Solution Architect (AWS Databricks). If interested, please forward your updated resume to sankarspstaffings@gmail.com With Regards, Sankar G Sr. Executive - IT Recruitment
Posted 2 months ago