8.0 - 10.0 years
10 - 12 Lacs
Hyderabad
Work from Office
Overview
The Data Analyst will partner closely with business and S&T teams to prepare final analysis reports for stakeholders, enabling them to make important decisions based on facts and trends, and will lead data requirement, source analysis, data analysis, data transformation and reconciliation activities. This role interacts with the DG, DPM, EA, DE, EDF, PO and D&AI teams on historical data requirements and sources the data for the Mosaic AI program to scale the solution to new markets.

Responsibilities
- Lead data requirement, source analysis, data analysis, data transformation and reconciliation activities.
- Partner with the FP&A Product Owner and associated business SMEs to understand and document business requirements and associated needs.
- Analyse business data requirements and translate them into a data design that satisfies local, sector and global requirements.
- Use automated tools to extract data from primary and secondary sources.
- Use statistical tools to identify, analyse and interpret patterns and trends in complex data sets to support diagnosis and prediction.
- Work with engineers and business teams to identify process-improvement opportunities and propose system modifications.
- Proactively identify impediments and look for pragmatic, constructive solutions to mitigate risk.
- Champion continuous improvement and drive efficiency.
- Preference will be given to candidates with a functional understanding of financial concepts (P&L, Balance Sheet, Cash Flow, Operating Expense) and experience modelling data and designing data flows.

Qualifications
- Bachelor of Technology from a reputed college.
- Minimum 8-10 years of relevant work experience in data modelling/analytics.
- Minimum 5-6 years of experience navigating data in Azure Databricks, Synapse, Teradata or similar database technologies.
- Expertise in Azure (Databricks, Data Factory, Data Lake Storage Gen2).
- Proficiency in SQL and PySpark to analyse data for both development validation and operational support is critical (see the reconciliation sketch below).
- Exposure to GenAI.
- Good communication and presentation skills are a must for this role.
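For illustration only, a source-to-target reconciliation check of the kind this role leads might look like the following PySpark sketch; the table names, the market key, and the net_revenue column are hypothetical:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("reconciliation").getOrCreate()

# Hypothetical source and target tables for a historical-data load
src = spark.table("legacy.sales_history").groupBy("market").agg(
    F.count("*").alias("src_rows"), F.sum("net_revenue").alias("src_amount"))
tgt = spark.table("mosaic.sales_history").groupBy("market").agg(
    F.count("*").alias("tgt_rows"), F.sum("net_revenue").alias("tgt_amount"))

# Full outer join so markets missing on either side also surface;
# eqNullSafe treats NULL vs. value as a mismatch instead of dropping the row
mismatches = (src.join(tgt, on="market", how="full_outer")
              .filter(~F.col("src_rows").eqNullSafe(F.col("tgt_rows")) |
                      ~F.col("src_amount").eqNullSafe(F.col("tgt_amount"))))
mismatches.show()
```

The full outer join is the key design choice here: markets present in only one system appear as discrepancies rather than vanishing from the report.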
Posted 2 weeks ago
1.0 - 5.0 years
7 - 10 Lacs
Kolkata
Work from Office
Job Title: SSIS Developer
Number of Positions: 5
Experience: 4-5 Years
Location: Remote (Preferred: Ahmedabad, Gurgaon, Mumbai, Pune, Bangalore)
Shift Timing: Evening/Night (Start time: 6:30 PM IST onwards)

Job Summary
We are seeking skilled SSIS Developers with 4-5 years of experience in developing and maintaining data integration solutions. The ideal candidate will have strong expertise in SSIS and SQL, a solid understanding of data warehousing concepts, and exposure to Azure data services. This role requires clear communication and the ability to work independently during evening or night hours.

Key Responsibilities
- Design, develop, and maintain SSIS packages for ETL processes.
- Write and optimize complex SQL queries and stored procedures.
- Ensure data accuracy, integrity, and performance across DWH systems.
- Collaborate with team members to gather and understand requirements.
- Work with Azure-based data platforms and services as needed.
- Troubleshoot and resolve data integration issues promptly.
- Document technical specifications and maintain version control.

Required Skills
- Proficient in Microsoft SSIS (SQL Server Integration Services).
- Strong SQL skills, including performance tuning and debugging.
- Good understanding of data warehousing concepts and ETL best practices.
- Exposure to Azure (e.g., Data Factory, SQL Database, Blob Storage).
- Strong communication and collaboration skills.
- Ability to work independently during US-aligned hours.

Preferred Qualifications
- Experience working in a remote, distributed team environment.
- Familiarity with agile methodologies and tools like JIRA, Git.
Posted 2 weeks ago
14.0 - 24.0 years
35 - 55 Lacs
Hyderabad, Bengaluru, Delhi / NCR
Hybrid
About the role
We are seeking a Sr. Practice Manager at Insight. You will be involved in all phases of the Software Development Lifecycle, including analysis, design, development and deployment. We will count on you to be proficient in software design and development, data modelling, data processing and data visualization. Along the way, you will get to:
- Help customers leverage existing data resources and implement new technologies and tooling to enable data science and data analytics.
- Track the performance of our resources and related capabilities.
- Mentor and manage other data engineers, ensuring data engineering best practices are being followed.
- Constantly evolve and scale our capabilities along with the growth of the business and the needs of our customers.

Be Ambitious: This opportunity is not just about what you do today but also about where you can go tomorrow. As a Practice Manager, you are positioned for swift advancement within our organization through a structured career path. When you bring your hunger, heart, and harmony to Insight, your potential will be met with continuous opportunities to upskill, earn promotions, and elevate your career.

What we're looking for
Sr. Practice Manager with:
- A total of 14+ years of relevant experience, with at least 5-6 years in people management, managing a team of 20+.
- Minimum 12 years of experience in data technology.
- Experience in data warehousing and excellent command of SQL, data modeling and ETL development.
- Hands-on experience in SQL Server and Microsoft Azure (Data Factory, Data Lake, Databricks).
- Experience in MSBI (SSRS, SSIS, SSAS), writing queries and stored procedures. (Good to have)
- Experience using Power BI, MDX, DAX, MDS, DQS. (Good to have)
- Experience developing designs for predictive analytics models.
- Ability to handle performance-improvement tasks and data archiving.
- Proficiency in provisioning the relevant Azure resources, forecasting hardware usage, and managing to a budget.
Posted 2 weeks ago
6.0 - 11.0 years
15 - 30 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
Warm Greetings from SP Staffing Services Private Limited!!

We have an urgent opening with our CMMI Level 5 client for the below position. Please send your updated profile if you are interested.

Relevant Experience: 6 - 15 Yrs
Location: Pan India

Job Description:
We are primarily looking for a data engineer with expertise in processing data pipelines using Databricks Spark SQL on Hadoop distributions like AWS EMR, Databricks, Cloudera, etc. (see the sketch below). Other ideal qualifications include:
- Very proficient in large-scale data operations using Databricks and overall very comfortable using Python.
- Familiarity with AWS compute, storage and IAM concepts.
- Experience working with S3 Data Lake as the storage tier.
- Any ETL background (Talend, AWS Glue, etc.) is a plus but not required.
- Cloud warehouse experience (Snowflake, etc.) is a huge plus.
- Carefully evaluates alternative risks and solutions before taking action; optimizes the use of all available resources.
- Develops solutions to meet business needs that reflect a clear understanding of the objectives, practices and procedures of the corporation, department and business unit.

Skills:
- Hands-on experience with Databricks Spark SQL and the AWS Cloud platform, especially S3, EMR, Databricks, Cloudera, etc.
- Experience with shell scripting.
- Exceptionally strong analytical and problem-solving skills.
- Relevant experience with ETL methods and with retrieving data from dimensional data models and data warehouses.
- Strong experience with relational databases and data access methods, especially SQL.
- Excellent collaboration and cross-functional leadership skills.
- Excellent communication skills, both written and verbal.
- Ability to manage multiple initiatives and priorities in a fast-paced, collaborative environment.
- Ability to leverage data assets to respond to complex questions that require timely answers.
- Working knowledge of migrating relational and dimensional databases on the AWS Cloud platform.

If interested, please share your resume to sankarspstaffings@gmail.com with the below details inline:
Over All Exp :
Relevant Exp :
Current CTC :
Expected CTC :
Notice Period :
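For illustration only, the kind of large-scale data operation described above (Databricks Spark SQL over an S3 data lake) might look like this minimal PySpark sketch; bucket paths and column names are hypothetical, and IAM instance-profile credentials are assumed:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("s3-data-ops").getOrCreate()

# Hypothetical S3 data-lake path holding raw parquet files
orders = spark.read.parquet("s3://example-datalake/raw/orders/")
orders.createOrReplaceTempView("orders")

# A large-scale aggregation expressed in Spark SQL
daily = spark.sql("""
    SELECT order_date, COUNT(*) AS order_count, SUM(amount) AS revenue
    FROM orders
    GROUP BY order_date
""")

# Write the curated result back to the lake for downstream consumers
daily.write.mode("overwrite").parquet("s3://example-datalake/curated/daily_orders/")
```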
Posted 3 weeks ago
6.0 - 11.0 years
15 - 30 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
Warm Greetings from SP Staffing Services Private Limited!!

We have an urgent opening with our CMMI Level 5 client for the below position. Please send your updated profile if you are interested.

Relevant Experience: 6 - 15 Yrs
Location: Pan India

Job Description:
The candidate must be proficient in Databricks and must:
- Understand where to obtain the information needed to make appropriate decisions.
- Demonstrate the ability to break a problem down into manageable pieces and implement effective, timely solutions.
- Identify the problem versus the symptoms.
- Manage problems that require the involvement of others to solve.
- Reach sound decisions quickly.
- Develop solutions to meet business needs that reflect a clear understanding of the objectives, practices and procedures of the corporation, department and business unit.

Roles & Responsibilities:
- Provide innovative and cost-effective solutions using Databricks.
- Optimize the use of all available resources.
- Learn and adapt quickly to new technologies as per business need.
- Develop a team of Operations Excellence, building tools and capabilities that the development teams leverage to maintain high levels of performance, scalability, security and availability.

Skills:
- 7-10 yrs of experience in Databricks Delta Lake.
- Hands-on experience with Azure.
- Experience in Python scripting.
- Relevant experience with ETL methods and with retrieving data from dimensional data models and data warehouses.
- Strong experience with relational databases and data access methods, especially SQL.
- Knowledge of Azure architecture and design.

If interested, please share your resume to sankarspstaffings@gmail.com with the below details inline:
Over All Exp :
Relevant Exp :
Current CTC :
Expected CTC :
Notice Period :
Posted 3 weeks ago
4.0 - 9.0 years
6 - 10 Lacs
Hyderabad
Work from Office
Azure Data Migration - Con/AM - HYD - J48933

Roles & Responsibilities
- Work with functional experts to understand data migration requirements and translate them into data engineering and data analytics functionality.
- Design and implement data integrations, pipelines, and algorithms to extract and transform data from various sources into a format ready to load into target systems.
- Design and develop data profiling tools to analyse and understand the structure, quality, and integrity of data prior to migration (a minimal profiling sketch follows this listing).
- Implement reconciliation reports to verify and validate data accuracy post-migration, identifying discrepancies and ensuring consistency between source and target systems.
- Assist in scoping, estimation, and task planning for assigned projects.
- Test ETL processes and data profiling tools, debugging issues and refining operations based on feedback and requirements.
- Document the ETL processes, data profiling methods, and reconciliation procedures to maintain clear and accessible records for future reference and compliance.
- Keep up to date with the latest tools, technologies, and best practices in data engineering to continuously improve the quality and efficiency of work.

Mandatory skills:
- Demonstrated experience converting business requirements and use cases into technical solutions.
- Deep knowledge of how to design and build data pipelines in Data Factory / Azure Synapse.
- Strong skills in programming languages such as Python, SQL, or Java, which are commonly used for data manipulation and ETL processes.
- Hands-on experience working on complex data warehouse implementations using Azure SQL Data Warehouse, Azure Data Factory and Azure SQL Database.
- Good communication skills to work effectively within a team and interact with clients or other stakeholders to gather requirements and present solutions.
- Comfortable in an Agile working environment and using Scrum project management.
- Strong analytical and problem-solving skills to troubleshoot issues during the migration process and optimize data workflows.
- High attention to detail to accurately implement ETL processes and generate precise reconciliation reports.

Desired skills:
- Experience using ETL tools in the context of data migration projects.
- Experience building ETL solutions against COTS or SaaS-based applications, such as SAP, Oracle ERP or Microsoft Dynamics.
- A proven ability to build resilient, tested data pipelines with data quality monitoring embedded (DataOps).
- Knowledge of security best practices for data protection and compliance.
- Azure data engineering certification (DP-203).

Required Candidate Profile
Candidate experience should be: 4 to 10 years
Candidate degree should be: BE-Comp/IT, BE-Other, BTech-Comp/IT, BTech-Other, MBA, MCA
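For illustration only, a minimal pre-migration profiling pass of the kind described above might look like the following PySpark sketch; the file path and dataset are hypothetical:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("profiling").getOrCreate()

# Hypothetical source extract to be profiled before migration
df = spark.read.option("header", True).csv("/mnt/source/customers.csv")

# Row count plus per-column null and distinct counts: a minimal
# structure/quality profile of the data prior to migration
print("rows:", df.count())
nulls = df.select([F.count(F.when(F.col(c).isNull(), c)).alias(c)
                   for c in df.columns])
distincts = df.select([F.countDistinct(c).alias(c) for c in df.columns])
nulls.show()
distincts.show()
```

Null and distinct counts per column are usually the first signal of truncated extracts or duplicated keys before any load into the target system.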
Posted 3 weeks ago
8.0 - 10.0 years
13 - 15 Lacs
Pune
Work from Office
We are seeking a hands-on Lead Data Engineer to drive the design and delivery of scalable, secure data platforms on Google Cloud Platform (GCP). In this role you will own architectural decisions, guide service selection, and embed best practices across data engineering, security, and performance disciplines. You will partner with data modelers, analysts, security teams, and product owners to ensure our pipelines and datasets serve analytical, operational, and AI/ML workloads with reliability and cost efficiency. Familiarity with Microsoft Azure data services (Data Factory, Databricks, Synapse, Fabric) is valuable, as many existing workloads will transition from Azure to GCP.

Key Responsibilities
- Lead end-to-end development of high-throughput, low-latency data pipelines and lakehouse solutions on GCP (BigQuery, Dataflow, Pub/Sub, Dataproc, Cloud Composer, Dataplex, etc.).
- Define reference architectures and technology standards for data ingestion, transformation, and storage.
- Drive service-selection trade-offs (cost, performance, scalability, and security) across streaming and batch workloads.
- Conduct design reviews and performance tuning sessions; ensure adherence to partitioning, clustering, and query-optimization standards in BigQuery (a minimal sketch follows this listing).
- Contribute to long-term cloud data strategy, evaluating emerging GCP features and multi-cloud patterns (Azure Synapse, Data Factory, Purview, etc.) for future adoption.
- Lead code reviews and oversee the development activities delegated to data engineers.
- Implement best practices recommended by Google Cloud.
- Provide effort estimates for data engineering activities.
- Participate in discussions to migrate existing Azure workloads to GCP; provide solutions to migrate the workloads for selected data pipelines.

Must-Have Skills
- 8-10 years in data engineering, with 3+ years leading teams or projects on GCP.
- Expert in GCP data services (BigQuery, Dataflow/Apache Beam, Dataproc/Spark, Pub/Sub, Cloud Storage) and orchestration with Cloud Composer or Airflow.
- Proven track record designing and optimizing large-scale ETL/ELT pipelines (streaming + batch).
- Strong fluency in SQL and one major programming language (Python, Java, or Scala).
- Deep understanding of data lake / lakehouse, dimensional & data-vault modeling, and data governance frameworks.
- Excellent communication and stakeholder-management skills; able to translate complex technical topics to non-technical audiences.

Nice-to-Have Skills
- Hands-on experience with Microsoft Azure data services (Azure Synapse Analytics, Data Factory, Event Hub, Purview).
- Experience integrating ML pipelines (Vertex AI, Dataproc ML) or real-time analytics (BigQuery BI Engine, Looker).
- Familiarity with open-source observability stacks (Prometheus, Grafana) and FinOps tooling for cloud cost optimization.

Preferred Certifications
- Google Professional Data Engineer (strongly preferred) or Google Professional Cloud Architect.
- Microsoft Certified: Azure Data Engineer Associate (nice to have).

Education
Bachelor's or Master's degree in Computer Science, Information Systems, Engineering, or a related technical field. Equivalent professional experience will be considered.
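For illustration only, the BigQuery partitioning and clustering standards mentioned above can be expressed with the google-cloud-bigquery Python client; the project, dataset, and schema below are hypothetical:

```python
from google.cloud import bigquery

client = bigquery.Client(project="example-project")  # hypothetical project

# Partition by event timestamp and cluster on a common filter column so
# queries prune partitions instead of scanning the whole table
schema = [
    bigquery.SchemaField("event_ts", "TIMESTAMP"),
    bigquery.SchemaField("customer_id", "STRING"),
    bigquery.SchemaField("amount", "NUMERIC"),
]
table = bigquery.Table("example-project.analytics.events", schema=schema)
table.time_partitioning = bigquery.TimePartitioning(
    type_=bigquery.TimePartitioningType.DAY, field="event_ts")
table.clustering_fields = ["customer_id"]
client.create_table(table, exists_ok=True)
```

Queries that filter on `event_ts` and `customer_id` then read only the relevant partitions and blocks, which is where most of the cost saving comes from.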
Posted 3 weeks ago
4.0 - 6.0 years
6 - 8 Lacs
Pune
Work from Office
Job Summary
We are seeking an energetic Senior Data Engineer with hands-on expertise in Google Cloud Platform to build, maintain, and migrate data pipelines that power analytics and AI workloads. You will leverage GCP services (BigQuery, Dataflow, Cloud Composer, Pub/Sub, and Cloud Storage) while collaborating with data modelers, analysts, and product teams to deliver highly reliable, well-governed datasets. Familiarity with Microsoft Azure data services (Data Factory, Databricks, Synapse, Fabric) is valuable, as many existing workloads will transition from Azure to GCP.

Key Responsibilities
- Design, develop, and optimize batch and streaming pipelines on GCP using Dataflow / Apache Beam, BigQuery, Cloud Composer (Airflow), and Pub/Sub (a minimal streaming sketch follows this listing).
- Maintain and enhance existing data workflows: monitoring performance, refactoring code, and automating tests to ensure data quality and reliability.
- Migrate data assets and ETL/ELT workloads from Azure (Data Factory, Databricks, Synapse, Fabric) to the corresponding GCP services, ensuring functional parity and cost efficiency.
- Partner with data modelers to implement partitioning, clustering, and materialized-view strategies in BigQuery to meet SLAs for analytics and reporting.
- Conduct root-cause analysis for pipeline failures, implement guardrails for data quality, and document lineage.

Must-Have Skills
- 4-6 years of data-engineering experience, including 2+ years building pipelines on GCP (BigQuery, Dataflow, Pub/Sub, Cloud Composer).
- Proficiency in SQL and one programming language (Python, Java, or Scala).
- Solid understanding of ETL/ELT patterns, data-warehouse modeling (star, snowflake, data vault), and performance-tuning techniques.
- Experience implementing data-quality checks, observability, and cost-optimization practices in cloud environments.

Nice-to-Have Skills
- Practical exposure to Azure data services: Data Factory, Databricks, Synapse Analytics, or Microsoft Fabric.

Preferred Certifications
- Google Professional Data Engineer or Associate Cloud Engineer.
- Microsoft Certified: Azure Data Engineer Associate (nice to have).

Education
Bachelor's or Master's degree in Computer Science, Information Systems, Engineering, or a related technical field. Equivalent professional experience will be considered.
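For illustration only, a minimal streaming pipeline of the kind described above (Pub/Sub into BigQuery via Dataflow / Apache Beam) might look like this Python sketch; the topic and table names are hypothetical, and the target table is assumed to already exist:

```python
import json
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Dataflow runner options (--runner, --project, --region) would be
# supplied at launch time; locally this runs on the DirectRunner.
options = PipelineOptions(streaming=True)

with beam.Pipeline(options=options) as p:
    (p
     | "Read" >> beam.io.ReadFromPubSub(
           topic="projects/example-project/topics/events")
     | "Decode" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
     | "Write" >> beam.io.WriteToBigQuery(
           "example-project:analytics.events",
           create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER,
           write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND))
```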
Posted 3 weeks ago
2.0 - 7.0 years
20 - 30 Lacs
Pune
Work from Office
Work mode: Currently remote, but this is not permanent WFH; once the business asks the candidate to come to office, they must relocate.

Mandatory: Data Engineering, Azure, Synapse, SQL, Python, PySpark, ETL, Fabric
- Experience in Python for scripting or data tasks.

Required Candidate profile
- Hands-on experience in SQL and relational databases (SQL Server, PostgreSQL).
- Knowledge of data warehousing concepts (ETL).
- Hands-on experience in Azure data integration tools like Data Factory, Synapse, Data Lake and Blob Storage.
Posted 3 weeks ago
3.0 - 7.0 years
5 - 10 Lacs
Hyderabad, Bengaluru, Delhi / NCR
Hybrid
Role & Responsibilities

Job Description:
We are seeking a skilled and experienced Microsoft Fabric Engineer to join our data engineering team. The ideal candidate will have a strong background in designing, developing, and maintaining data solutions using Microsoft Fabric, including experience across key workloads such as Data Engineering, Data Factory, Data Science, Real-Time Analytics, and Power BI. The role requires a deep understanding of Synapse Data Warehouse, OneLake, Notebooks, Lakehouse architecture, and Power BI integration within the Microsoft ecosystem.

Key Responsibilities:
- Design and implement scalable and secure data solutions using Microsoft Fabric.
- Build and maintain data pipelines using Dataflows Gen2 and Data Factory.
- Work with Lakehouse architecture and manage datasets in OneLake.
- Develop notebooks (PySpark or T-SQL) for data transformation and processing (see the sketch after this listing).
- Collaborate with data analysts to create interactive dashboards and reports using Power BI (within Fabric).
- Leverage Synapse Data Warehouse and KQL databases for structured, real-time analytics.
- Monitor and optimize the performance of data pipelines and queries.
- Ensure adherence to data quality, security, and governance practices.
- Stay current with Microsoft Fabric updates and roadmap, recommending enhancements.

Required Skills:
- 3+ years of hands-on experience with Microsoft Fabric or similar tools in the Microsoft data stack.
- Strong proficiency with: Data Factory (Fabric), Synapse Data Warehouse / SQL analytics endpoints, Power BI integration and DAX, Notebooks (PySpark, T-SQL), Lakehouse and OneLake.
- Understanding of data modeling, ETL/ELT processes, and real-time data streaming.
- Experience with KQL (Kusto Query Language) is a plus.
- Familiarity with Microsoft Purview, Azure Data Lake, or Azure Synapse Analytics is advantageous.

Qualifications: Microsoft Fabric, OneLake, Data Factory, Data Lake, Data Mesh
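For illustration only, a Fabric notebook transformation of the kind described above might look like this PySpark sketch; file, column, and table names are hypothetical:

```python
from pyspark.sql import functions as F

# In a Fabric notebook the `spark` session is provided, and "Files/..."
# resolves to the attached Lakehouse in OneLake
raw = spark.read.option("header", True).csv("Files/landing/sales.csv")

cleaned = (raw
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .filter(F.col("order_id").isNotNull()))

# A Delta table in the Lakehouse is queryable from the SQL analytics
# endpoint and from Power BI
cleaned.write.format("delta").mode("overwrite").saveAsTable("sales_clean")
```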
Posted 3 weeks ago
8 - 12 years
25 - 30 Lacs
Noida
Hybrid
Role & responsibilities
- Develop and implement data pipelines using Azure Data Factory and Databricks.
- Work with stakeholders to gather requirements and translate them into technical solutions.
- Migrate data from Oracle to Azure Data Lake (a minimal copy-step sketch follows this listing).
- Optimize data processing workflows for performance and scalability.
- Ensure data quality and integrity throughout the data lifecycle.
- Collaborate with data architects and other team members to design and implement data solutions.

Preferred candidate profile
- Strong experience with Azure Data Services, including Azure Data Factory, Synapse Analytics, and Databricks.
- Proficiency in data transformation and ETL processes.
- Hands-on experience with Oracle to Azure Data Lake migrations is a plus.
- Strong problem-solving and analytical skills.
- Optimize performance and cost efficiency for Databricks clusters, data pipelines, and storage systems.
- Monitor and manage cloud resources to ensure high availability, performance and scalability.
- Prepare architecture diagrams, technical documentation, and runbooks for the deployed solutions.
- Excellent communication and teamwork skills.

Preferred Qualifications:
- Azure Data Engineer Associate certification.
- Databricks certification.
- Understanding of ODI, ODS, OAS is a plus.
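For illustration only, one Oracle-to-Data-Lake copy step of the kind described above might look like this PySpark sketch on Databricks (where `spark` and `dbutils` are provided); hostnames, secret-scope names, and storage paths are hypothetical:

```python
# Read one Oracle table over JDBC and land it as Delta in ADLS Gen2
jdbc_url = "jdbc:oracle:thin:@//oracle-host:1521/ORCLPDB"  # hypothetical host

df = (spark.read.format("jdbc")
      .option("url", jdbc_url)
      .option("dbtable", "SALES.ORDERS")
      .option("user", dbutils.secrets.get("migration", "ora-user"))
      .option("password", dbutils.secrets.get("migration", "ora-pass"))
      .option("fetchsize", 10000)  # larger fetches speed up bulk extraction
      .load())

(df.write.format("delta")
   .mode("overwrite")
   .save("abfss://bronze@examplelake.dfs.core.windows.net/orders"))
```

In practice this step would be parameterized per table and orchestrated from Azure Data Factory or a Databricks workflow.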
Posted 1 month ago
8 - 12 years
25 - 30 Lacs
Gurugram
Hybrid
Role & responsibilities
- Develop and implement data pipelines using Azure Data Factory and Databricks.
- Work with stakeholders to gather requirements and translate them into technical solutions.
- Migrate data from Oracle to Azure Data Lake.
- Optimize data processing workflows for performance and scalability.
- Ensure data quality and integrity throughout the data lifecycle.
- Collaborate with data architects and other team members to design and implement data solutions.

Preferred candidate profile
- Strong experience with Azure Data Services, including Azure Data Factory, Synapse Analytics, and Databricks.
- Proficiency in data transformation and ETL processes.
- Hands-on experience with Oracle to Azure Data Lake migrations is a plus.
- Strong problem-solving and analytical skills.
- Optimize performance and cost efficiency for Databricks clusters, data pipelines, and storage systems.
- Monitor and manage cloud resources to ensure high availability, performance and scalability.
- Prepare architecture diagrams, technical documentation, and runbooks for the deployed solutions.
- Excellent communication and teamwork skills.

Preferred Qualifications:
- Azure Data Engineer Associate certification.
- Databricks certification.
- Understanding of ODI, ODS, OAS is a plus.
Posted 1 month ago
5 - 10 years
4 - 8 Lacs
Mysuru
Work from Office
Job Title: Offshore Automation Engineer

Must have: Selenium, Java with Data Factory and Databricks. Excellent communication skills.
Max Budget: 20 LPA
Max NP: 15 days

Minimum Qualifications and Job Requirements:
- 5+ years of experience in automating APIs and web services.
- 3+ years of experience with the Selenium automation tool.
- 1+ years of experience with Data Factory and Databricks.
- Experience with BDD implementations using Cucumber.
- Excellent SQL skills and the ability to write complex queries.
- Highly skilled in at least one programming language; Java is preferred.
- Highly skilled in 2 or more automation test tools; experience in ReadyAPI is preferred.
- 2+ years of experience with Jenkins.
- 2+ years of experience delivering automation solutions using Agile methodology.
- Experience with Eclipse or similar IDEs.
- Experience with source control tools such as Git.
- Ability to work on multiple projects concurrently and meet deadlines.
- Ability to work in a fast-paced team environment; expectations include a high level of initiative and a strong commitment to job knowledge, productivity, and attention to detail.
- Strong verbal and written communication skills.
- Solid software engineering skills; has participated in full-lifecycle development on large projects.
Posted 1 month ago
5 - 10 years
9 - 18 Lacs
Bengaluru
Work from Office
We are seeking a highly skilled Power BI Expert with over 5 years of experience in business intelligence and data analytics. The ideal candidate will have expertise in Azure, Data Factory, Microsoft Fabric, and data warehousing.

Required Candidate profile
- Experience with Power BI, Azure, data warehousing, and related technologies.
- Proficiency in DAX, Power Query, SQL, and data visualization best practices.
- Degree in Computer Science or Data Analytics.
Posted 1 month ago
12 - 22 years
35 - 65 Lacs
Chennai
Hybrid
Warm Greetings from SP Staffing Services Private Limited!!

We have an urgent opening with our CMMI Level 5 client for the below position. Please send your updated profile if you are interested.

Relevant Experience: 8 - 24 Yrs
Location: Pan India

Job Description:
Candidates should have a minimum of 2 years of hands-on experience as an Azure Databricks Architect.

If interested, please forward your updated resume to sankarspstaffings@gmail.com / Sankar@spstaffing.in

With Regards,
Sankar G
Sr. Executive - IT Recruitment
Posted 1 month ago
10 - 18 years
35 - 55 Lacs
Hyderabad, Bengaluru, Mumbai (All Areas)
Hybrid
Warm Greetings from SP Staffing Services Private Limited!!

We have an urgent opening with our CMMI Level 5 client for the below position. Please send your updated profile if you are interested.

Relevant Experience: 8 Yrs - 18 Yrs
Location: Pan India

Job Description:
- Experience in Synapse with PySpark.
- Knowledge of Big Data pipelines / data engineering.
- Working knowledge of the MSBI stack on Azure.
- Working knowledge of Azure Data Factory, Azure Data Lake and Azure Data Lake Storage.
- Hands-on in visualization tools like Power BI.
- Implement end-to-end data pipelines using Cosmos DB / Azure Data Factory.
- Good analytical thinking and problem solving.
- Good communication and coordination skills.
- Able to work as an individual contributor.

Responsibilities include: requirement analysis; creating, maintaining and enhancing the Big Data pipeline; daily status reporting and interacting with leads; version control (ADO, Git) and CI/CD; marketing campaign experience; data platform and product telemetry; analytical thinking; data validation and data quality checks of new streams; monitoring of data pipelines created in Azure Data Factory; updating the tech spec and wiki page for each pipeline implementation; updating ADO on a daily basis.

If interested, please forward your updated resume to sankarspstaffings@gmail.com / Sankar@spstaffing.in

With Regards,
Sankar G
Sr. Executive - IT Recruitment
Posted 1 month ago
10 - 20 years
35 - 55 Lacs
Hyderabad, Bengaluru, Mumbai (All Areas)
Hybrid
Warm Greetings from SP Staffing Services Private Limited!!

We have an urgent opening with our CMMI Level 5 client for the below position. Please send your updated profile if you are interested.

Relevant Experience: 8 Yrs - 18 Yrs
Location: Pan India

Mandatory Skill: Azure ADB with Azure Data Lake

Job Description:
- Lead the architecture design and implementation of advanced analytics solutions using Azure Databricks and Fabric. The ideal candidate will have a deep understanding of big data technologies, data engineering and cloud computing, with a strong focus on Azure Databricks, along with strong SQL.
- Work closely with business stakeholders and other IT teams to understand requirements and deliver effective solutions.
- Oversee the end-to-end implementation of data solutions, ensuring alignment with business requirements and best practices.
- Lead the development of data pipelines and ETL processes using Azure Databricks, PySpark and other relevant tools.
- Integrate Azure Databricks with other Azure services (e.g., Azure Data Lake, Azure Synapse, Azure Data Factory) and on-premise systems.
- Provide technical leadership and mentorship to the data engineering team, fostering a culture of continuous learning and improvement.
- Ensure proper documentation of architecture, processes and data flows, while ensuring compliance with security and governance standards.
- Ensure best practices are followed in terms of code quality, data security and scalability.
- Stay updated with the latest developments in Databricks and associated technologies to drive innovation.

Essential Skills:
- Strong experience with Azure Databricks, including cluster management, notebook development and Delta Lake.
- Proficiency in big data technologies (e.g., Hadoop, Spark) and data processing frameworks (e.g., PySpark).
- Deep understanding of Azure services like Azure Data Lake, Azure Synapse and Azure Data Factory.
- Experience with ETL/ELT processes, data warehousing and building data lakes.
- Strong SQL skills and familiarity with NoSQL databases.
- Experience with CI/CD pipelines and version control systems like Git.
- Knowledge of cloud security best practices.

Soft Skills:
- Excellent communication skills, with the ability to explain complex technical concepts to non-technical stakeholders.
- Strong problem-solving skills and a proactive approach to identifying and resolving issues.
- Leadership skills, with the ability to manage and mentor a team of data engineers.

Experience:
- Demonstrated expertise of 8 years in developing data ingestion and transformation pipelines using Databricks/Synapse notebooks and Azure Data Factory.
- Solid understanding and hands-on experience with Delta tables, Delta Lake and Azure Data Lake Storage Gen2.
- Experience in efficiently using Auto Loader and Delta Live Tables for seamless data ingestion and transformation (a minimal Auto Loader sketch follows this listing).
- Proficiency in building and optimizing query layers using Databricks SQL.
- Demonstrated experience integrating Databricks with Azure Synapse, ADLS Gen2 and Power BI for end-to-end analytics solutions.
- Prior experience in developing, optimizing and deploying Power BI reports.
- Familiarity with modern CI/CD practices, especially in the context of Databricks and cloud-native solutions.

If interested, please forward your updated resume to sankarspstaffings@gmail.com / Sankar@spstaffing.in

With Regards,
Sankar G
Sr. Executive - IT Recruitment
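For illustration only, a minimal Auto Loader ingestion of the kind mentioned above might look like this PySpark sketch on Databricks (where `spark` is provided); storage paths and table names are hypothetical:

```python
# Incrementally ingest JSON files into a bronze Delta table with Auto Loader
stream = (spark.readStream.format("cloudFiles")
          .option("cloudFiles.format", "json")
          .option("cloudFiles.schemaLocation",
                  "abfss://meta@examplelake.dfs.core.windows.net/schemas/events")
          .load("abfss://landing@examplelake.dfs.core.windows.net/events/"))

(stream.writeStream
   .option("checkpointLocation",
           "abfss://meta@examplelake.dfs.core.windows.net/checkpoints/events")
   .trigger(availableNow=True)  # drain the current backlog, then stop
   .toTable("bronze.events"))
```

Auto Loader tracks which files have already been processed, so reruns pick up only new arrivals instead of re-reading the whole landing zone.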
Posted 1 month ago
11 - 20 years
20 - 35 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
Warm Greetings from SP Staffing Services Private Limited!!

We have an urgent opening with our CMMI Level 5 client for the below position. Please send your updated profile if you are interested.

Relevant Experience: 11 - 20 Yrs
Location: Pan India

Job Description:
Minimum 2 years of hands-on experience as a Solution Architect (AWS Databricks).

If interested, please forward your updated resume to sankarspstaffings@gmail.com

With Regards,
Sankar G
Sr. Executive - IT Recruitment
Posted 1 month ago
8 - 13 years
15 - 30 Lacs
Bengaluru
Work from Office
Design, develop, and maintain scalable ETL pipelines, data lakes, and hosting solutions using Azure tools. Ensure data quality, performance optimization, and compliance across hybrid and cloud environments.

Required Candidate profile
Data engineer with experience in Azure data services, ETL workflows, scripting, and data modeling. Strong collaboration with analytics teams and hands-on pipeline deployment using best practices.
Posted 1 month ago
3 - 8 years
10 - 20 Lacs
Gurgaon
Work from Office
Ideal qualifications, skills and experiences we are looking for are:
- We are actively seeking a talented and results-driven Data Scientist to join our team and take on a leadership role in driving business outcomes through the power of data analytics and insights.
- Your contributions will be instrumental in making data-informed decisions, identifying growth opportunities, and propelling our organization to new levels of success.
- Doctorate/Master's/Bachelor's degree in Data Science, Statistics, Computer Science, Mathematics, Economics, Commerce or a related field.
- Minimum of 3 years of experience working as a Data Scientist or in a similar analytical role, with experience leading data science projects and teams. Experience in the healthcare domain with exposure to clinical operations, financial, risk rating, fraud, digital, sales and marketing, and wellness, or in the e-commerce or ed-tech industry, is a plus.
- Proven ability to lead and mentor a team of data scientists, fostering an innovative environment. Strong decision-making and problem-solving skills to guide strategic initiatives.
- Expertise in programming languages such as Python and R, and proficiency with data manipulation, analysis, and visualization libraries (e.g., pandas, NumPy, Matplotlib, seaborn). Very strong Python, and exceptional with pandas, NumPy, and advanced Python (pytest, classes, inheritance, docstrings).
- Deep understanding of machine learning algorithms, model evaluation, and feature engineering. Experience with frameworks like scikit-learn, TensorFlow, or PyTorch (a minimal modelling sketch follows this listing).

1. Above 6 yrs of team leading and handling projects with end-to-end ownership is a must.
2. Deep understanding of ML and Deep Learning is a must.
3. Basic NLP experience is highly valuable.
4. PySpark experience is highly valuable.
5. Competitive coding experience (LeetCode) is highly valuable.

- Strong expertise in statistical modelling techniques such as regression, clustering, time series analysis, and hypothesis testing.
- Experience building and deploying machine learning models in a cloud environment: Microsoft Azure preferred (Databricks, Synapse, Data Factory, etc.).
- Basic MLOps experience with FastAPI, experience with Docker, and AI governance are highly valuable.
- Ability to understand business objectives, market dynamics, and strategic priorities. Demonstrated experience translating data insights into tangible business outcomes and driving data-informed decision-making.
- Excellent verbal and written communication skills.
- Proven experience leading data science projects, managing timelines, and delivering results within deadlines.
- Strong collaboration skills with the ability to work effectively in cross-functional teams, build relationships, and foster a culture of knowledge sharing and continuous learning.
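For illustration only, a baseline modelling workflow of the kind this role leads might look like the following scikit-learn sketch; the claims file, the is_fraud label, and the assumption that all features are numeric are hypothetical:

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Hypothetical healthcare-claims dataset with a binary fraud label
df = pd.read_csv("claims.csv")
X, y = df.drop(columns=["is_fraud"]), df["is_fraud"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# Scaling and the estimator live in one Pipeline so the same
# preprocessing is applied at train and score time
model = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```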
Posted 2 months ago
10 - 15 years
12 - 17 Lacs
Bengaluru
Work from Office
Skills: Azure, AI, Kubernetes, Databricks, Data Factory, Azure ML Service, Azure DevOps, Docker/Containerization, Python, Azure OpenAI, Azure Cognitive Search, Azure Functions, Azure Cosmos DB, Azure public/private cloud platform
Posted 2 months ago
6 - 11 years
15 - 25 Lacs
Kota
Work from Office
Required Experience, Skills & Competencies:
- Strong hands-on experience implementing Data Lakes with technologies like Data Factory (ADF), ADLS, Databricks, Azure Synapse Analytics, Event Hub & Stream Analytics, Cosmos DB and Purview.
- Experience using big data technologies like Hadoop (CDH or HDP), Spark, Airflow, NiFi, Kafka, Hive, HBase or MongoDB, Neo4j, Elasticsearch, Impala, Sqoop, etc.
- Strong programming and debugging skills in either Python or Scala/Java. Experience building REST services is good to have.
- Experience supporting BI and Data Science teams in consuming data in a secure and governed manner.
- Good understanding and experience of using CI/CD with Git, Jenkins or Azure DevOps.
- Experience setting up cloud-computing infrastructure solutions.
- Hands-on experience/exposure to NoSQL databases and data modelling in Hive.
- 9+ years of technical experience with at least 2 years on MS Azure and 2 years on Hadoop (CDH/HDP).
- B.Tech/B.E from a reputed institute preferred.
Posted 2 months ago
5 - 8 years
7 - 11 Lacs
Pune
Work from Office
- 5-8 years of experience in the IT industry.
- 3+ years of experience with the Azure Data Engineering stack (Event Hub, Data Factory, Cosmos DB, Synapse, SQL DB, Databricks, Data Explorer).
- 3+ years of experience with Python/PySpark, Spark, Scala, Hive, Impala.
- Excellent knowledge of SQL and coding skills.
- Good understanding of other Azure services like Azure Data Lake Analytics, U-SQL, Azure SQL DW.
- Good understanding of Modern Data Warehouse / Lambda Architecture and data warehousing concepts.
- Experience with scripting languages such as shell.
- Excellent analytical and organization skills.
- Effective working in a team as well as working independently.
- Experience of working in Agile delivery.
- Knowledge of software development best practices.
- Strong written and verbal communication skills.
- Azure Data Engineer certification is an added advantage.

Required Skills: Azure, Data Engineering, Event Hub, Data Factory, Cosmos DB, Synapse, SQL DB, Databricks, Data Explorer, Python, PySpark, Spark, Scala, Hive, Impala, SQL, Azure Data Lake Analytics, U-SQL, Azure SQL DW, Architecture, Software Development, Senior Data Engineer Azure
Posted 2 months ago
8 - 12 years
20 - 32 Lacs
Bengaluru
Work from Office
Responsibilities:
- Responsible for database modeling, design, metadata, and storage of data in the database (L0, L1 and semantic views for different use cases).
- Understand the business requirements and enable metrics for further analysis and visualization.
- Build optimal data pipeline architecture; assemble large, complex data sets that meet functional and non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Work with data and analytics experts to strive for greater functionality in client data systems.

Requirements:
- 4+ years of exposure to Azure cloud components like Data Factory, Data Lake and Databricks; SQL Database is mandatory.
- Advanced working SQL knowledge and experience working with relational databases and query authoring (SQL), as well as working familiarity with a variety of databases (Leapfrog, TM1, Oracle, SQL Server, etc.).
- Experience building and optimizing cloud data modelling architectures and data sets.
- Analysis of and querying internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Successful history of manipulating, processing and extracting value from large, disconnected datasets.
- Experience supporting and working with cross-functional teams in a dynamic environment.
- Develop end-to-end data mart models based on business requirements.
- Must have ETL mapping design and development experience (source-to-target mapping).
- Perform the design and extension of data marts, metadata, and data models.
- Expertise in advanced SQL, including query tuning, stored procedures, and functions.
- Logical/conceptual data modeling and normalization.
- Slowly changing dimensions, key management, recursive/scalable queries (a minimal SCD sketch follows this listing).
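For illustration only, a minimal Type-2 slowly-changing-dimension upsert might look like the following Delta Lake sketch on Databricks (where `spark` is provided); the table names, the customer_id key, and the tracked address column are hypothetical, and the staging table is assumed to share the dimension's business columns:

```python
from delta.tables import DeltaTable
from pyspark.sql import functions as F

dim = DeltaTable.forName(spark, "mart.dim_customer")
current = spark.table("mart.dim_customer").filter("is_current")
updates = spark.table("staging.customer_changes")

# Keep only rows whose tracked attribute actually changed; localCheckpoint
# materializes the result so it is not re-evaluated after the merge below
changed = (updates.alias("u")
           .join(current.alias("d"), "customer_id")
           .filter(F.col("u.address") != F.col("d.address"))
           .select("u.*")
           .localCheckpoint())

# Expire the superseded current rows...
(dim.alias("d")
    .merge(changed.alias("c"),
           "d.customer_id = c.customer_id AND d.is_current = true")
    .whenMatchedUpdate(set={"is_current": "false", "end_date": "current_date()"})
    .execute())

# ...then append the new current versions with fresh validity dates
(changed
    .withColumn("is_current", F.lit(True))
    .withColumn("start_date", F.current_date())
    .withColumn("end_date", F.lit(None).cast("date"))
    .write.format("delta").mode("append").saveAsTable("mart.dim_customer"))
```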
Posted 2 months ago
8 - 10 years
27 - 32 Lacs
Mumbai
Work from Office
Job Summary:
We are seeking a highly skilled and experienced Senior Developer with a strong background in DAX and SSAS cube development. The ideal candidate should also have a deep understanding of how Synapse data pipelines work and are developed, as well as advanced SQL database expertise. In this role you will work closely with cross-functional teams to design, develop and optimize data models and analytical solutions to meet business needs.

Key Responsibilities:
- Design, develop and maintain SSAS (SQL Server Analysis Services) cubes to support business intelligence and reporting needs.
- Write, optimize and maintain DAX (Data Analysis Expressions) queries for various Power BI and SSAS solutions.
- Develop and manage Synapse data pipelines for efficient ETL processes and data integration across the data warehouse.
- Collaborate with data analysts, business stakeholders and other developers to understand requirements and deliver tailored solutions.
- Optimize and manage large SQL queries.
- Ensure the quality, performance and scalability of deployed data models and solutions.
- Troubleshoot and resolve data-related issues in DAX queries, SSAS cubes and data pipelines.
- Stay up to date with the latest trends and best practices in data modeling, cube development and cloud data platforms like Azure Synapse.
- Contribute to the ongoing improvement of data processes and analytics infrastructure.

Required Skills and Experience:
- 8+ years of experience in SSAS cube development.
- Proficiency in writing advanced DAX queries and optimizing them for performance.
- Strong knowledge of Azure Synapse Analytics and developing Synapse data pipelines.
- Extensive experience with SQL databases, including advanced SQL query writing and performance tuning.
- Familiarity with Power BI and integrating SSAS cubes into Power BI reports.
- Strong analytical thinking and problem-solving skills.
- Experience working in Agile or similar development environments.
- Excellent communication and collaboration skills.

Preferred Qualifications:
- Experience with cloud platforms such as Microsoft Azure and understanding of cloud data warehousing concepts.
- Familiarity with ETL tools, data integration and modelling best practices.
- Knowledge of other Azure data services (Data Factory, Databricks, etc.) is a plus.

Education: Bachelor's degree in Computer Science, Information Technology or a related field.
Posted 2 months ago