5.0 - 10.0 years
7 - 12 Lacs
Mumbai
Work from Office
Job Summary
This position provides input and support for full systems life cycle management activities (e.g., analyses, technical requirements, design, coding, testing, and implementation of systems and applications software). He/She performs tasks within planned durations and established deadlines, collaborates with teams to ensure effective communication and support the achievement of objectives, and provides knowledge, development, maintenance, and support for applications.

Responsibilities:
- Generates application documentation.
- Contributes to systems analysis and design.
- Designs and develops moderately complex applications.
- Contributes to integration builds.
- Contributes to maintenance and support.
- Monitors emerging technologies and products.

Technical Skills:
- Cloud Platforms: Azure (Databricks, Data Factory, Data Lake Storage, Synapse Analytics)
- Data Processing: Databricks (PySpark, Spark SQL), Apache Spark
- Programming Languages: Python, SQL
- Data Engineering Tools: Delta Lake, Azure Data Factory, Apache Airflow
- Other: Git, CI/CD

Professional Experience:
- Design and implementation of a scalable data lakehouse on Azure Databricks, optimizing data ingestion, processing, and analysis for improved business insights.
- Develop and maintain efficient data pipelines using PySpark and Spark SQL for extracting, transforming, and loading (ETL) data from diverse sources (Azure and GCP).
- Develop SQL stored procedures for data integrity; ensure data accuracy and consistency across all layers.
- Implement Delta Lake for ACID transactions and data versioning, ensuring data quality and reliability.
- Create frameworks using Databricks and Data Factory to process incremental data for external vendors and applications.
- Implement Azure Functions to trigger and manage data processing workflows.
- Design and implement data pipelines to integrate various data sources, and manage Databricks workflows for efficient data processing.
- Conduct performance tuning and optimization of data processing workflows.
- Provide technical support and troubleshooting for data processing issues.
- Experience with successful migrations from legacy data infrastructure to Azure Databricks, improving scalability and cost savings.
- Collaborate with data scientists and analysts to build interactive dashboards and visualizations on Databricks for data exploration and analysis.
- Effective oral and written management communication skills.

Qualifications:
- Minimum 5 years of relevant experience.
- Bachelor's Degree or international equivalent in Computer Science, Information Systems, Mathematics, Statistics, or a related field.
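To make the Delta Lake responsibilities above concrete, here is a minimal PySpark sketch of an ETL write into a Delta table with ACID guarantees and version-based time travel. It assumes a Databricks-style environment with Delta support; the paths, schema, and table names are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("delta-etl-sketch").getOrCreate()

# Extract: raw CSV landed in the lake (hypothetical ADLS Gen2 path)
raw = spark.read.option("header", True).csv(
    "abfss://raw@examplelake.dfs.core.windows.net/sales/")

# Transform: basic typing and de-duplication
clean = (raw
         .withColumn("amount", F.col("amount").cast("double"))
         .dropna(subset=["order_id"])
         .dropDuplicates(["order_id"]))

# Load: every Delta write is an ACID transaction
spark.sql("CREATE SCHEMA IF NOT EXISTS silver")
clean.write.format("delta").mode("overwrite").saveAsTable("silver.sales")

# Delta versioning: query the table as of an earlier commit (time travel)
v0 = spark.sql("SELECT * FROM silver.sales VERSION AS OF 0")
```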
Posted 3 months ago
8.0 - 12.0 years
12 - 22 Lacs
Hyderabad, Secunderabad
Work from Office
Proficiency in SQL, Python, and data pipeline frameworks such as Apache Spark, Databricks, or Airflow. Hands-on experience with cloud data platforms (e.g., Azure Synapse, AWS Redshift, Google BigQuery). Strong understanding of data modeling, ETL/ELT, and data lake/warehouse/data mart architectures. Knowledge of Data Factory or AWS Glue. Experience in developing reports and dashboards using tools like Power BI, Tableau, or Looker.
Posted 3 months ago
7.0 - 12.0 years
15 - 30 Lacs
Bengaluru
Work from Office
Position: Senior Azure Data Engineer (Immediate Joiners Only)
Location: Bangalore
Mode of Work: Work from Office
Experience: 7 years relevant experience
Job Type: Full Time (On Roll)

Job Description / Roles and Responsibilities:
The Data Engineer will work on data engineering projects for various business units, focusing on delivery of complex data management solutions by leveraging industry best practices. They work with the project team to build the most efficient data pipelines and data management solutions that make data easily available for consuming applications and analytical solutions. A Data Engineer is expected to possess strong technical skills.

Key Characteristics:
- Technology champion who constantly pursues skill enhancement and has an inherent curiosity to understand work from multiple dimensions.
- Interest and passion in Big Data technologies, and appreciates the value an effective data management solution can bring.
- Has worked on real data challenges and handled high volume, velocity, and variety of data.
- Excellent analytical and problem-solving skills; willingness to take ownership and resolve technical challenges.
- Contributes to community-building initiatives like CoE, CoP.

Mandatory skills:
- Azure - Master
- ELT - Skill
- Data Modeling - Skill
- Data Integration & Ingestion - Skill
- Data Manipulation and Processing - Skill
- GitHub, GitHub Actions, Azure DevOps - Skill
- Data Factory, Databricks, SQL DB, Synapse, Stream Analytics, Glue, Airflow, Kinesis, Redshift, SonarQube, PyTest - Skill

Optional skills:
- Experience in project management, running a scrum team.
- Experience working with BPC, Planning.
- Exposure to working with an external technical ecosystem.
- MkDocs documentation.

Interested candidates, kindly share your CV and the details below to usha.sundar@adecco.com:
1) Present CTC (Fixed + VP)
2) Expected CTC
3) No. of years' experience
4) Notice Period
5) Offer in hand
6) Reason for Change
7) Present Location
Posted 3 months ago
8.0 - 10.0 years
10 - 12 Lacs
Hyderabad
Work from Office
Overview
The Data Analyst will partner closely with business and S&T teams to prepare final analysis reports for stakeholders, enabling them to make important decisions based on facts and trends, and will lead data requirement, source analysis, data analysis, data transformation, and reconciliation activities. This role interacts with DG, DPM, EA, DE, EDF, PO, and D&AI teams on historical data requirements and on sourcing data for the Mosaic AI program to scale the solution to new markets.

Responsibilities
- Lead data requirement, source analysis, data analysis, data transformation, and reconciliation activities.
- Partner with the FP&A Product Owner and associated business SMEs to understand and document business requirements and associated needs.
- Perform analysis of business data requirements and translate them into a data design that satisfies local, sector, and global requirements.
- Use automated tools to extract data from primary and secondary sources.
- Use statistical tools to identify, analyse, and interpret patterns and trends in complex data sets to support diagnosis and prediction.
- Work with engineers and business teams to identify process improvement opportunities and propose system modifications.
- Proactively identify impediments and look for pragmatic and constructive solutions to mitigate risk.
- Be a champion for continuous improvement and drive efficiency.
- Preference will be given to candidates having a functional understanding of financial concepts (P&L, Balance Sheet, Cash Flow, Operating Expense) and experience modelling data and designing data flows.

Qualifications
- Bachelor of Technology from a reputed college.
- Minimum 8-10 years of relevant work experience in data modelling/analytics.
- Minimum 5-6 years of experience navigating data in Azure Databricks, Synapse, Teradata, or similar database technologies.
- Expertise in Azure (Databricks, Data Factory, Data Lake Store Gen2).
- Proficiency in SQL and PySpark to analyse data for both development validation and operational support is critical.
- Exposure to GenAI.
- Good communication and presentation skills are a must for this role.
Posted 3 months ago
1.0 - 5.0 years
7 - 10 Lacs
Kolkata
Work from Office
Job Title: SSIS Developer
Number of Positions: 5
Experience: 4-5 Years
Location: Remote (Preferred: Ahmedabad, Gurgaon, Mumbai, Pune, Bangalore)
Shift Timing: Evening/Night (Start time: 6:30 PM IST onwards)

Job Summary
We are seeking skilled SSIS Developers with 4-5 years of experience in developing and maintaining data integration solutions. The ideal candidate will have strong expertise in SSIS and SQL, a solid understanding of data warehousing concepts, and exposure to Azure data services. This role requires clear communication and the ability to work independently during evening or night hours.

Key Responsibilities
- Design, develop, and maintain SSIS packages for ETL processes.
- Write and optimize complex SQL queries and stored procedures.
- Ensure data accuracy, integrity, and performance across DWH systems.
- Collaborate with team members to gather and understand requirements.
- Work with Azure-based data platforms and services as needed.
- Troubleshoot and resolve data integration issues promptly.
- Document technical specifications and maintain version control.

Required Skills
- Proficient in Microsoft SSIS (SQL Server Integration Services).
- Strong SQL skills, including performance tuning and debugging.
- Good understanding of data warehousing concepts and ETL best practices.
- Exposure to Azure (e.g., Data Factory, SQL Database, Blob Storage).
- Strong communication and collaboration skills.
- Ability to work independently during US-aligned hours.

Preferred Qualifications
- Experience working in a remote, distributed team environment.
- Familiarity with agile methodologies and tools like JIRA, Git.
Posted 3 months ago
14.0 - 24.0 years
35 - 55 Lacs
Hyderabad, Bengaluru, Delhi / NCR
Hybrid
About the role
We are seeking a Sr. Practice Manager. At Insight, you will be involved in different phases of the Software Development Lifecycle, including Analysis, Design, Development, and Deployment. We will count on you to be proficient in software design and development, data modelling, data processing, and data visualization. Along the way, you will get to:
- Help customers leverage existing data resources and implement new technologies and tooling to enable data science and data analytics.
- Track the performance of our resources and related capabilities.
- Mentor and manage other data engineers and ensure data engineering best practices are being followed.
- Constantly evolve and scale our capabilities along with the growth of the business and the needs of our customers.

Be Ambitious: This opportunity is not just about what you do today but also about where you can go tomorrow. As a Practice Manager, you are positioned for swift advancement within our organization through a structured career path. When you bring your hunger, heart, and harmony to Insight, your potential will be met with continuous opportunities to upskill, earn promotions, and elevate your career.

What we're looking for
Sr. Practice Manager with:
- A total of 14+ years of relevant experience, with at least 5-6 years in people management, managing teams of 20+.
- Minimum 12 years of experience in data technology.
- Experience in data warehousing and excellent command of SQL, data modeling, and ETL development.
- Hands-on experience in SQL Server and Microsoft Azure (Data Factory, Data Lake, Databricks).
- Experience in MSBI (SSRS, SSIS, SSAS), writing queries and stored procedures. (Good to have)
- Experience using Power BI, MDX, DAX, MDS, DQS. (Good to have)
- Experience developing designs for predictive analytics models.
- Ability to handle performance improvement tasks and data archiving.
- Proficiency in provisioning Azure resources, forecasting hardware usage, and managing to a budget.
Posted 3 months ago
6.0 - 11.0 years
15 - 30 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
Warm Greetings from SP Staffing Services Private Limited!!
We have an urgent opening with our CMMI Level 5 client for the below position. Please send your updated profile if you are interested.
Relevant Experience: 6 - 15 Yrs
Location: Pan India

Job Description:
We are primarily looking for a data engineer with expertise in processing data pipelines using Databricks Spark SQL on Hadoop distributions such as AWS EMR, Databricks, Cloudera, etc.
- Should be very proficient in large-scale data operations using Databricks and overall very comfortable using Python.
- Familiarity with AWS compute, storage, and IAM concepts.
- Experience working with S3 Data Lake as the storage tier.
- Any ETL background (Talend, AWS Glue, etc.) is a plus but not required.
- Cloud warehouse experience (Snowflake, etc.) is a huge plus.
- Carefully evaluates alternative risks and solutions before taking action; optimizes the use of all available resources.
- Develops solutions to meet business needs that reflect a clear understanding of the objectives, practices, and procedures of the corporation, department, and business unit.

Skills:
- Hands-on experience with Databricks, Spark SQL, and the AWS Cloud platform, especially S3, EMR, Databricks, Cloudera, etc.
- Experience in shell scripting.
- Exceptionally strong analytical and problem-solving skills.
- Relevant experience with ETL methods and with retrieving data from dimensional data models and data warehouses.
- Strong experience with relational databases and data access methods, especially SQL.
- Excellent collaboration and cross-functional leadership skills.
- Excellent communication skills, both written and verbal.
- Ability to manage multiple initiatives and priorities in a fast-paced, collaborative environment.
- Ability to leverage data assets to respond to complex questions that require timely answers.
- Working knowledge of migrating relational and dimensional databases to the AWS Cloud platform.

Interested candidates can share their resume to sankarspstaffings@gmail.com with the below inline details:
Over All Exp:
Relevant Exp:
Current CTC:
Expected CTC:
Notice Period:
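As one illustration of the S3 data-lake processing this role describes, below is a short PySpark sketch of a batch aggregation over a lake bucket. The bucket and paths are hypothetical, and an EMR or Databricks cluster with S3 access is assumed.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("s3-lake-sketch").getOrCreate()

# Read Parquet landed in an S3 data lake (s3a:// URIs on EMR/open-source Spark)
orders = spark.read.parquet("s3a://example-lake/raw/orders/")
orders.createOrReplaceTempView("orders")

# Large-scale aggregation expressed in Spark SQL
daily = spark.sql("""
    SELECT order_date, COUNT(*) AS order_count, SUM(amount) AS revenue
    FROM orders
    GROUP BY order_date
""")

# Write results back to the curated zone, partitioned for downstream reads
daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3a://example-lake/curated/daily_orders/")
```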
Posted 3 months ago
6.0 - 11.0 years
15 - 30 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
Warm Greetings from SP Staffing Services Private Limited!!
We have an urgent opening with our CMMI Level 5 client for the below position. Please send your updated profile if you are interested.
Relevant Experience: 6 - 15 Yrs
Location: Pan India

Job Description:
The candidate must be proficient in Databricks and must:
- Understand where to obtain the information needed to make appropriate decisions.
- Demonstrate the ability to break down a problem into manageable pieces and implement effective, timely solutions.
- Identify the problem versus the symptoms.
- Manage problems that require the involvement of others to solve.
- Reach sound decisions quickly.
- Develop solutions to meet business needs that reflect a clear understanding of the objectives, practices, and procedures of the corporation, department, and business unit.

Roles & Responsibilities:
- Provide innovative and cost-effective solutions using Databricks.
- Optimize the use of all available resources.
- Learn and adapt quickly to new technologies as per business need.
- Develop a team of Operations Excellence, building tools and capabilities that the development teams leverage to maintain high levels of performance, scalability, security, and availability.

Skills:
- 7-10 yrs of experience in Databricks Delta Lake.
- Hands-on experience on Azure.
- Experience in Python scripting.
- Relevant experience with ETL methods and with retrieving data from dimensional data models and data warehouses.
- Strong experience with relational databases and data access methods, especially SQL.
- Knowledge of Azure architecture and design.

Interested candidates can share their resume to sankarspstaffings@gmail.com with the below inline details:
Over All Exp:
Relevant Exp:
Current CTC:
Expected CTC:
Notice Period:
Posted 3 months ago
4.0 - 9.0 years
6 - 10 Lacs
Hyderabad
Work from Office
Azure Data Migration - Con/AM - HYD - J48933

Roles & Responsibilities
- Work with functional experts to understand data migration requirements and translate them into data engineering and data analytics functionality.
- Design and implement data integrations, pipelines, and algorithms to extract and transform data from various sources into a format ready to load into target systems.
- Design and develop data profiling tools to analyse and understand the structure, quality, and integrity of data prior to migration.
- Implement reconciliation reports to verify and validate data accuracy post-migration, identifying discrepancies and ensuring consistency between source and target systems.
- Assist in scoping, estimation, and task planning for assigned projects.
- Perform testing of ETL processes and data profiling tools, debugging issues and refining operations based on feedback and requirements.
- Document the ETL processes, data profiling methods, and reconciliation procedures to maintain clear and accessible records for future reference and compliance.
- Keep up to date with the latest tools, technologies, and best practices in data engineering to continuously improve the quality and efficiency of work.

Mandatory skills:
- Demonstrated experience converting business requirements and use cases into technical solutions.
- Deep knowledge of how to design and build data pipelines in Data Factory / Azure Synapse.
- Strong skills in programming languages such as Python, SQL, or Java, which are commonly used for data manipulation and ETL processes.
- Hands-on experience working on complex data warehouse implementations using Azure SQL Data Warehouse, Azure Data Factory, and Azure SQL Database.
- Good communication skills to work effectively within a team and interact with clients or other stakeholders to gather requirements and present solutions.
- Comfortable in an Agile working environment and using Scrum project management.
- Strong analytical and problem-solving skills to troubleshoot issues during the migration process and optimize data workflows.
- High attention to detail to accurately implement ETL processes and generate precise reconciliation reports.

Desired skills:
- Using ETL tools in the context of data migration projects.
- Experience building ETL solutions against COTS or SaaS-based applications such as SAP, Oracle ERP, or Microsoft Dynamics.
- A proven ability to build resilient, tested data pipelines with data quality monitoring embedded (DataOps).
- Knowledge of security best practices for data protection and compliance.
- Azure data engineering certification (DP-203).

Required Candidate profile
Candidate Experience Should Be: 4 To 10
Candidate Degree Should Be: BE-Comp/IT, BE-Other, BTech-Comp/IT, BTech-Other, MBA, MCA
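For the reconciliation reports mentioned above, a simple pattern is to compare row counts, key sets, and row hashes between source and target. Here is a hedged PySpark sketch; the table names are hypothetical, and concat_ws-based hashing is one simple choice (it skips nulls, so a production version would handle them explicitly).

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("recon-sketch").getOrCreate()

source = spark.table("legacy_staging.customers")   # extract copied from the source system
target = spark.table("migrated.customers")         # table loaded into the target system

# Row-count comparison
print("source rows:", source.count(), "target rows:", target.count())

# Key-set comparison: business keys present in the source but missing in the target
missing = source.select("customer_id").exceptAll(target.select("customer_id"))
print("keys missing from target:", missing.count())

# Row checksums: hash each row's columns, then diff the hash sets to catch value drift
def row_hash(df):
    return df.select(F.sha2(F.concat_ws("||", *sorted(df.columns)), 256).alias("h"))

drift = row_hash(source).exceptAll(row_hash(target))
print("rows whose contents differ:", drift.count())
```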
Posted 3 months ago
8.0 - 10.0 years
13 - 15 Lacs
Pune
Work from Office
We are seeking a hands-on Lead Data Engineer to drive the design and delivery of scalable, secure data platforms on Google Cloud Platform (GCP). In this role you will own architectural decisions, guide service selection, and embed best practices across data engineering, security, and performance disciplines. You will partner with data modelers, analysts, security teams, and product owners to ensure our pipelines and datasets serve analytical, operational, and AI/ML workloads with reliability and cost efficiency. Familiarity with Microsoft Azure data services (Data Factory, Databricks, Synapse, Fabric) is valuable, as many existing workloads will transition from Azure to GCP.

Key Responsibilities
- Lead end-to-end development of high-throughput, low-latency data pipelines and lakehouse solutions on GCP (BigQuery, Dataflow, Pub/Sub, Dataproc, Cloud Composer, Dataplex, etc.).
- Define reference architectures and technology standards for data ingestion, transformation, and storage.
- Drive service-selection trade-offs (cost, performance, scalability, and security) across streaming and batch workloads.
- Conduct design reviews and performance tuning sessions; ensure adherence to partitioning, clustering, and query-optimization standards in BigQuery.
- Contribute to long-term cloud data strategy, evaluating emerging GCP features and multi-cloud patterns (Azure Synapse, Data Factory, Purview, etc.) for future adoption.
- Lead code reviews and oversee the development activities delegated to data engineers.
- Implement best practices recommended by Google Cloud.
- Provide effort estimates for data engineering activities.
- Participate in discussions to migrate existing Azure workloads to GCP; provide solutions to migrate the workloads for selected data pipelines.

Must-Have Skills
- 8-10 years in data engineering, with 3+ years leading teams or projects on GCP.
- Expert in GCP data services (BigQuery, Dataflow/Apache Beam, Dataproc/Spark, Pub/Sub, Cloud Storage) and orchestration with Cloud Composer or Airflow.
- Proven track record designing and optimizing large-scale ETL/ELT pipelines (streaming + batch).
- Strong fluency in SQL and one major programming language (Python, Java, or Scala).
- Deep understanding of data lake / lakehouse, dimensional and data-vault modeling, and data governance frameworks.
- Excellent communication and stakeholder-management skills; able to translate complex technical topics to non-technical audiences.

Nice-to-Have Skills
- Hands-on experience with Microsoft Azure data services (Azure Synapse Analytics, Data Factory, Event Hub, Purview).
- Experience integrating ML pipelines (Vertex AI, Dataproc ML) or real-time analytics (BigQuery BI Engine, Looker).
- Familiarity with open-source observability stacks (Prometheus, Grafana) and FinOps tooling for cloud cost optimization.

Preferred Certifications
- Google Professional Data Engineer (strongly preferred) or Google Professional Cloud Architect
- Microsoft Certified: Azure Data Engineer Associate (nice to have)

Education
Bachelor's or Master's degree in Computer Science, Information Systems, Engineering, or a related technical field. Equivalent professional experience will be considered.
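As a concrete example of the BigQuery partitioning and clustering standards this role enforces, here is a sketch using the google-cloud-bigquery Python client; the project, dataset, and field names are hypothetical.

```python
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

table = bigquery.Table(
    "example-project.analytics.events",
    schema=[
        bigquery.SchemaField("event_ts", "TIMESTAMP"),
        bigquery.SchemaField("customer_id", "STRING"),
        bigquery.SchemaField("event_type", "STRING"),
        bigquery.SchemaField("payload", "STRING"),
    ],
)
# Partition by day on the event timestamp so queries can prune whole partitions
table.time_partitioning = bigquery.TimePartitioning(
    type_=bigquery.TimePartitioningType.DAY, field="event_ts"
)
# Cluster within each partition on the most common filter columns
table.clustering_fields = ["customer_id", "event_type"]

client.create_table(table)

# A query filtering on the partition column scans only the matching partitions
sql = """
SELECT event_type, COUNT(*) AS n
FROM `example-project.analytics.events`
WHERE event_ts >= TIMESTAMP('2024-01-01')
GROUP BY event_type
"""
for row in client.query(sql).result():
    print(row.event_type, row.n)
```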
Posted 3 months ago
4.0 - 6.0 years
6 - 8 Lacs
Pune
Work from Office
Job Summary
We are seeking an energetic Senior Data Engineer with hands-on expertise in Google Cloud Platform to build, maintain, and migrate data pipelines that power analytics and AI workloads. You will leverage GCP services (BigQuery, Dataflow, Cloud Composer, Pub/Sub, and Cloud Storage) while collaborating with data modelers, analysts, and product teams to deliver highly reliable, well-governed datasets. Familiarity with Microsoft Azure data services (Data Factory, Databricks, Synapse, Fabric) is valuable, as many existing workloads will transition from Azure to GCP.

Key Responsibilities
- Design, develop, and optimize batch and streaming pipelines on GCP using Dataflow / Apache Beam, BigQuery, Cloud Composer (Airflow), and Pub/Sub.
- Maintain and enhance existing data workflows: monitor performance, refactor code, and automate tests to ensure data quality and reliability.
- Migrate data assets and ETL/ELT workloads from Azure (Data Factory, Databricks, Synapse, Fabric) to corresponding GCP services, ensuring functional parity and cost efficiency.
- Partner with data modelers to implement partitioning, clustering, and materialized-view strategies in BigQuery to meet SLAs for analytics and reporting.
- Conduct root-cause analysis for pipeline failures, implement guardrails for data quality, and document lineage.

Must-Have Skills
- 4-6 years of data engineering experience, including 2+ years building pipelines on GCP (BigQuery, Dataflow, Pub/Sub, Cloud Composer).
- Proficiency in SQL and one programming language (Python, Java, or Scala).
- Solid understanding of ETL/ELT patterns, data warehouse modeling (star, snowflake, data vault), and performance-tuning techniques.
- Experience implementing data quality checks, observability, and cost-optimization practices in cloud environments.

Nice-to-Have Skills
- Practical exposure to Azure data services (Data Factory, Databricks, Synapse Analytics, or Microsoft Fabric).

Preferred Certifications
- Google Professional Data Engineer or Associate Cloud Engineer
- Microsoft Certified: Azure Data Engineer Associate (nice to have)

Education
Bachelor's or Master's degree in Computer Science, Information Systems, Engineering, or a related technical field. Equivalent professional experience will be considered.
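A minimal Apache Beam (Python SDK) sketch of the Pub/Sub-to-BigQuery streaming pattern described above; the project, topic, and table names are hypothetical placeholders.

```python
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Streaming options; on Dataflow you would also set the runner and temp locations
options = PipelineOptions(streaming=True, project="example-project", region="us-central1")

def parse(msg: bytes) -> dict:
    # Pub/Sub delivers raw bytes; decode the JSON payload into a row dict
    return json.loads(msg.decode("utf-8"))

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadEvents" >> beam.io.ReadFromPubSub(
            topic="projects/example-project/topics/events")
        | "Parse" >> beam.Map(parse)
        | "KeepValid" >> beam.Filter(lambda e: "event_id" in e)
        | "WriteBQ" >> beam.io.WriteToBigQuery(
            "example-project:analytics.events",
            schema="event_id:STRING,event_ts:TIMESTAMP,event_type:STRING",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
        )
    )
```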
Posted 3 months ago
2.0 - 7.0 years
20 - 30 Lacs
Pune
Work from Office
Work mode: Currently remote, but this is not permanent WFH; once the business asks the candidate to come to the office, they must relocate.
Mandatory: DE, Azure, Synapse, SQL, Python, PySpark, ETL, Fabric. Experience in Python for scripting or data tasks.

Required Candidate profile
- Hands-on experience in SQL and relational databases (SQL Server, PostgreSQL).
- Data warehousing concepts (ETL).
- Hands-on experience in Azure data integration tools like Data Factory, Synapse, Data Lake, and Blob Storage.
Posted 3 months ago
3.0 - 7.0 years
5 - 10 Lacs
Hyderabad, Bengaluru, Delhi / NCR
Hybrid
Role & Responsibilities
Job Description: We are seeking a skilled and experienced Microsoft Fabric Engineer to join our data engineering team. The ideal candidate will have a strong background in designing, developing, and maintaining data solutions using Microsoft Fabric, including experience across key workloads such as Data Engineering, Data Factory, Data Science, Real-Time Analytics, and Power BI. The role requires a deep understanding of Synapse Data Warehouse, OneLake, Notebooks, Lakehouse architecture, and Power BI integration within the Microsoft ecosystem.

Key Responsibilities:
- Design and implement scalable and secure data solutions using Microsoft Fabric.
- Build and maintain data pipelines using Dataflows Gen2 and Data Factory.
- Work with Lakehouse architecture and manage datasets in OneLake.
- Develop notebooks (PySpark or T-SQL) for data transformation and processing.
- Collaborate with data analysts to create interactive dashboards and reports using Power BI (within Fabric).
- Leverage Synapse Data Warehouse and KQL databases for structured and real-time analytics.
- Monitor and optimize performance of data pipelines and queries.
- Ensure adherence to data quality, security, and governance practices.
- Stay current with Microsoft Fabric updates and roadmap, recommending enhancements.

Required Skills:
- 3+ years of hands-on experience with Microsoft Fabric or similar tools in the Microsoft data stack.
- Strong proficiency with: Data Factory (Fabric); Synapse Data Warehouse / SQL analytics endpoints; Power BI integration and DAX; Notebooks (PySpark, T-SQL); Lakehouse and OneLake.
- Understanding of data modeling, ETL/ELT processes, and real-time data streaming.
- Experience with KQL (Kusto Query Language) is a plus.
- Familiarity with Microsoft Purview, Azure Data Lake, or Azure Synapse Analytics is advantageous.

Qualifications: Microsoft Fabric, OneLake, Data Factory, Data Lake, Data Mesh
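To illustrate the notebook workload above, here is a sketch of a Fabric-style PySpark cell that turns raw Lakehouse files into a curated Delta table. Paths and names are hypothetical, and it assumes a notebook with a default lakehouse attached, where the spark session is pre-provided.

```python
from pyspark.sql import functions as F

# Read raw CSV from the Lakehouse "Files" area in OneLake (hypothetical path)
raw = spark.read.option("header", True).csv("Files/raw/orders/")

curated = (raw
           .withColumn("amount", F.col("amount").cast("double"))
           .withColumn("order_date", F.to_date("order_ts"))
           .dropDuplicates(["order_id"]))

# Saving as a table creates a Delta table in the Lakehouse "Tables" area,
# which the SQL analytics endpoint and Power BI can query directly
curated.write.mode("overwrite").format("delta").saveAsTable("orders_curated")
```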
Posted 3 months ago
8 - 12 years
25 - 30 Lacs
Noida
Hybrid
Role & responsibilities
- Develop and implement data pipelines using Azure Data Factory and Databricks.
- Work with stakeholders to gather requirements and translate them into technical solutions.
- Migrate data from Oracle to Azure Data Lake.
- Optimize data processing workflows for performance and scalability.
- Ensure data quality and integrity throughout the data lifecycle.
- Collaborate with data architects and other team members to design and implement data solutions.

Preferred candidate profile
- Strong experience with Azure Data Services, including Azure Data Factory, Synapse Analytics, and Databricks.
- Proficiency in data transformation and ETL processes.
- Hands-on experience with Oracle to Azure Data Lake migrations is a plus.
- Strong problem-solving and analytical skills.
- Optimize performance and cost efficiency for Databricks clusters, data pipelines, and storage systems.
- Monitor and manage cloud resources to ensure high availability, performance, and scalability.
- Prepare architecture diagrams, technical documentation, and runbooks for the deployed solutions.
- Excellent communication and teamwork skills.

Preferred Qualifications:
- Azure Data Engineer Associate certification.
- Databricks Certification.
- Understanding of ODI, ODS, OAS is a plus.

Perks and benefits
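As a sketch of one way the Oracle-to-Data-Lake extraction step could look on Databricks: connection details, secret names, and paths below are hypothetical placeholders, and dbutils is the notebook-provided Databricks utility.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("oracle-to-adls-sketch").getOrCreate()

df = (spark.read.format("jdbc")
      .option("url", "jdbc:oracle:thin:@//oracle-host:1521/ORCLPDB")
      .option("dbtable", "SALES.ORDERS")
      .option("user", "etl_user")
      .option("password", dbutils.secrets.get("kv-scope", "oracle-pwd"))  # Databricks secret scope
      .option("fetchsize", 10000)           # larger fetch size speeds bulk reads
      .option("numPartitions", 8)           # parallel JDBC reads across the cluster
      .option("partitionColumn", "ORDER_ID")
      .option("lowerBound", 1)
      .option("upperBound", 10_000_000)
      .load())

# Land the extract as Delta in ADLS Gen2 for downstream transformation
df.write.format("delta").mode("overwrite").save(
    "abfss://bronze@examplelake.dfs.core.windows.net/oracle/orders/")
```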
Posted 4 months ago
8 - 12 years
25 - 30 Lacs
Gurugram
Hybrid
Role & responsibilities
- Develop and implement data pipelines using Azure Data Factory and Databricks.
- Work with stakeholders to gather requirements and translate them into technical solutions.
- Migrate data from Oracle to Azure Data Lake.
- Optimize data processing workflows for performance and scalability.
- Ensure data quality and integrity throughout the data lifecycle.
- Collaborate with data architects and other team members to design and implement data solutions.

Preferred candidate profile
- Strong experience with Azure Data Services, including Azure Data Factory, Synapse Analytics, and Databricks.
- Proficiency in data transformation and ETL processes.
- Hands-on experience with Oracle to Azure Data Lake migrations is a plus.
- Strong problem-solving and analytical skills.
- Optimize performance and cost efficiency for Databricks clusters, data pipelines, and storage systems.
- Monitor and manage cloud resources to ensure high availability, performance, and scalability.
- Prepare architecture diagrams, technical documentation, and runbooks for the deployed solutions.
- Excellent communication and teamwork skills.

Preferred Qualifications:
- Azure Data Engineer Associate certification.
- Databricks Certification.
- Understanding of ODI, ODS, OAS is a plus.

Perks and benefits
Posted 4 months ago
5 - 10 years
4 - 8 Lacs
Mysuru
Work from Office
Must have: Selenium, Java with Data Factory and Databricks. Excellent communication skills.
Max Budget: 20 LPA
Max NP: 15 Days

Job Title: Offshore Automation Engineer

Minimum Qualifications and Job Requirements:
- 5+ years of experience in automating APIs and web services.
- 3+ years of experience with the Selenium automation tool.
- 1+ years of experience with Data Factory and Databricks.
- Experience with BDD implementations using Cucumber.
- Excellent SQL skills and the ability to write complex queries.
- Highly skilled in at least one programming language; Java is preferred.
- Highly skilled in 2 or more automation test tools; experience in ReadyAPI is preferred.
- 2+ years of experience with Jenkins.
- 2+ years of experience delivering automation solutions using Agile methodology.
- Experience with Eclipse or similar IDEs.
- Experience with source control tools such as Git.
- Ability to work on multiple projects concurrently and meet deadlines.
- Ability to work in a fast-paced team environment; expectations include a high level of initiative and a strong commitment to job knowledge, productivity, and attention to detail.
- Strong verbal and written communication skills.
- Solid software engineering skills; has participated in full lifecycle development on large projects.
Posted 4 months ago
5 - 10 years
9 - 18 Lacs
Bengaluru
Work from Office
We are seeking a highly skilled Power BI Expert with over 5 years of experience in business intelligence and data analytics. The ideal candidate will have expertise in Azure, Data Factory, Microsoft Fabric, and data warehousing.

Required Candidate profile
Experience with Power BI, Azure, data warehousing, and related technologies. Proficiency in DAX, Power Query, SQL, and data visualization best practices. Degree in Computer Science, Data Analytics, or a related field.
Posted 4 months ago
12 - 22 years
35 - 65 Lacs
Chennai
Hybrid
Warm Greetings from SP Staffing Services Private Limited!!
We have an urgent opening with our CMMI Level 5 client for the below position. Please send your updated profile if you are interested.
Relevant Experience: 8 - 24 Yrs
Location: Pan India

Job Description:
Candidates should have a minimum of 2 years of hands-on experience as an Azure Databricks Architect.

If interested, please forward your updated resume to sankarspstaffings@gmail.com / Sankar@spstaffing.in

With Regards,
Sankar G
Sr. Executive - IT Recruitment
Posted 4 months ago
10 - 18 years
35 - 55 Lacs
Hyderabad, Bengaluru, Mumbai (All Areas)
Hybrid
Warm Greetings from SP Staffing Services Private Limited!!
We have an urgent opening with our CMMI Level 5 client for the below position. Please send your updated profile if you are interested.
Relevant Experience: 8 Yrs - 18 Yrs
Location: Pan India

Job Description:
- Experience in Synapse with PySpark.
- Knowledge of Big Data pipelines / data engineering.
- Working knowledge of the MSBI stack on Azure.
- Working knowledge of Azure Data Factory, Azure Data Lake, and Azure Data Lake Storage.
- Hands-on in visualization tools like Power BI.
- Implement end-to-end data pipelines using Cosmos / Azure Data Factory.
- Good analytical thinking and problem solving.
- Good communication and coordination skills; able to work as an individual contributor.
- Requirement analysis; create, maintain, and enhance Big Data pipelines; daily status reporting and interacting with leads.
- Version control (ADO, Git), CI/CD.
- Marketing campaign experience; data platform and product telemetry; analytical thinking.
- Data validation and data quality checks of new streams.
- Monitoring of data pipelines created in Azure Data Factory; updating the tech spec and wiki page for each pipeline implementation; updating ADO on a daily basis.

If interested, please forward your updated resume to sankarspstaffings@gmail.com / Sankar@spstaffing.in

With Regards,
Sankar G
Sr. Executive - IT Recruitment
Posted 4 months ago
10 - 20 years
35 - 55 Lacs
Hyderabad, Bengaluru, Mumbai (All Areas)
Hybrid
Warm Greetings from SP Staffing Services Private Limited!!
We have an urgent opening with our CMMI Level 5 client for the below position. Please send your updated profile if you are interested.
Relevant Experience: 8 Yrs - 18 Yrs
Location: Pan India

Job Description:
Mandatory Skill: Azure ADB with Azure Data Lake
- Lead the architecture design and implementation of advanced analytics solutions using Azure Databricks and Fabric. The ideal candidate will have a deep understanding of big data technologies, data engineering, and cloud computing, with a strong focus on Azure Databricks along with strong SQL.
- Work closely with business stakeholders and other IT teams to understand requirements and deliver effective solutions.
- Oversee the end-to-end implementation of data solutions, ensuring alignment with business requirements and best practices.
- Lead the development of data pipelines and ETL processes using Azure Databricks, PySpark, and other relevant tools.
- Integrate Azure Databricks with other Azure services (e.g., Azure Data Lake, Azure Synapse, Azure Data Factory) and on-premise systems.
- Provide technical leadership and mentorship to the data engineering team, fostering a culture of continuous learning and improvement.
- Ensure proper documentation of architecture, processes, and data flows, while ensuring compliance with security and governance standards.
- Ensure best practices are followed in terms of code quality, data security, and scalability.
- Stay updated with the latest developments in Databricks and associated technologies to drive innovation.

Essential Skills:
- Strong experience with Azure Databricks, including cluster management, notebook development, and Delta Lake.
- Proficiency in big data technologies (e.g., Hadoop, Spark) and data processing frameworks (e.g., PySpark).
- Deep understanding of Azure services like Azure Data Lake, Azure Synapse, and Azure Data Factory.
- Experience with ETL/ELT processes, data warehousing, and building data lakes.
- Strong SQL skills and familiarity with NoSQL databases.
- Experience with CI/CD pipelines and version control systems like Git.
- Knowledge of cloud security best practices.

Soft Skills:
- Excellent communication skills with the ability to explain complex technical concepts to non-technical stakeholders.
- Strong problem-solving skills and a proactive approach to identifying and resolving issues.
- Leadership skills with the ability to manage and mentor a team of data engineers.

Experience:
- Demonstrated expertise of 8 years in developing data ingestion and transformation pipelines using Databricks/Synapse notebooks and Azure Data Factory.
- Solid understanding and hands-on experience with Delta tables, Delta Lake, and Azure Data Lake Storage Gen2.
- Experience in efficiently using Auto Loader and Delta Live Tables for seamless data ingestion and transformation.
- Proficiency in building and optimizing query layers using Databricks SQL.
- Demonstrated experience integrating Databricks with Azure Synapse, ADLS Gen2, and Power BI for end-to-end analytics solutions.
- Prior experience in developing, optimizing, and deploying Power BI reports.
- Familiarity with modern CI/CD practices, especially in the context of Databricks and cloud-native solutions.

If interested, please forward your updated resume to sankarspstaffings@gmail.com / Sankar@spstaffing.in

With Regards,
Sankar G
Sr. Executive - IT Recruitment
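For reference, a minimal sketch of the Auto Loader ingestion pattern named in the experience list, using the Databricks cloudFiles source; paths and checkpoint locations are hypothetical, and the notebook-provided spark session is assumed.

```python
# Auto Loader incrementally discovers new files landing in cloud storage
stream = (spark.readStream
          .format("cloudFiles")                     # Databricks Auto Loader source
          .option("cloudFiles.format", "json")
          .option("cloudFiles.schemaLocation",
                  "abfss://meta@examplelake.dfs.core.windows.net/schemas/events/")
          .load("abfss://landing@examplelake.dfs.core.windows.net/events/"))

# Append newly arrived files into a bronze Delta table; the checkpoint
# records which files have already been ingested, so reruns are incremental
(stream.writeStream
 .format("delta")
 .option("checkpointLocation",
         "abfss://meta@examplelake.dfs.core.windows.net/checkpoints/events/")
 .trigger(availableNow=True)                        # process the backlog, then stop
 .toTable("bronze.events"))
```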
Posted 4 months ago
11 - 20 years
20 - 35 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
Warm Greetings from SP Staffing Services Private Limited!!
We have an urgent opening with our CMMI Level 5 client for the below position. Please send your updated profile if you are interested.
Relevant Experience: 11 - 20 Yrs
Location: Pan India

Job Description:
Minimum 2 years of hands-on experience as a Solution Architect (AWS Databricks).

If interested, please forward your updated resume to sankarspstaffings@gmail.com

With Regards,
Sankar G
Sr. Executive - IT Recruitment
Posted 4 months ago
8 - 13 years
15 - 30 Lacs
Bengaluru
Work from Office
Design, develop, and maintain scalable ETL pipelines, data lakes, and hosting solutions using Azure tools. Ensure data quality, performance optimization, and compliance across hybrid and cloud environments.

Required Candidate profile
Data engineer with experience in Azure data services, ETL workflows, scripting, and data modeling. Strong collaboration with analytics teams and hands-on pipeline deployment using best practices.
Posted 4 months ago
2.0 - 5.0 years
4 - 9 Lacs
noida
Work from Office
We are seeking a skilled Data Engineer to design, build, and maintain high-performance data pipelines within the Microsoft Fabric ecosystem. The role involves transforming raw data into analytics-ready assets, optimising data performance across both modern and legacy platforms, and collaborating closely with Data Analysts to deliver reliable, business-ready gold tables. You will also coordinate with external vendors during build projects to ensure adherence to standards.

Key Responsibilities

Pipeline Development & Integration
- Design and develop end-to-end data pipelines using Microsoft Fabric (Data Factory, Synapse, Notebooks).
- Build robust ETL/ELT processes to ingest data from both modern and legacy sources.
- Create and optimise gold tables and semantic models in collaboration with Data Analysts.
- Implement real-time and batch processing with performance optimisation.
- Build automated data validation and quality checks across Fabric and legacy environments.
- Manage integrations with SQL Server (SSIS packages, cube processing).

Data Transformation & Performance Optimisation
- Transform raw datasets into analytics-ready gold tables following dimensional modelling principles (an incremental-load sketch follows this listing).
- Implement complex business logic and calculations within Fabric pipelines.
- Create reusable data assets and standardised metrics with Data Analysts.
- Optimise query performance across Fabric compute engines and SQL Server.
- Implement incremental loading strategies for large datasets.
- Maintain and improve performance across both Fabric and legacy environments.

Business Collaboration & Vendor Support
- Partner with Data Analysts and stakeholders to understand requirements and deliver gold tables.
- Provide technical guidance to vendors during data product development.
- Ensure vendor-built pipelines meet performance and integration standards.
- Collaborate on data model design for both ongoing reporting and new analytics use cases.
- Support legacy reporting systems including Excel, SSRS, and Power BI.
- Resolve data quality issues across internal and vendor-built solutions.

Quality Assurance & Monitoring
- Write unit and integration tests for data pipelines.
- Implement monitoring and alerting for data quality.
- Troubleshoot pipeline failures and data inconsistencies.
- Maintain documentation and operational runbooks.
- Support deployment and change management processes.

Required Skills & Experience

Essential
- 2+ years of data engineering experience with Microsoft Fabric and SQL Server environments.
- Strong SQL expertise for complex transformations in Fabric and SQL Server.
- Proficiency in Python or PySpark for data processing.
- Integration experience with SSIS, SSRS, and cube processing.
- Proven performance optimisation skills across Fabric and SQL Server.
- Experience coordinating with vendors on technical build projects.
- Strong collaboration skills with Data Analysts for gold table creation.

Preferred
- Microsoft Fabric or Azure certifications (DP-600, DP-203).
- Experience with Git and CI/CD for data pipelines.
- Familiarity with streaming technologies and real-time processing.
- Background in BI or analytics engineering.
- Experience with data quality tools and monitoring frameworks.
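One common way to implement the incremental gold-table loads described above is a Delta MERGE keyed on a watermark column. Below is a hedged PySpark sketch; the table names and watermark column are hypothetical, and a notebook-provided spark session with Delta support is assumed.

```python
from delta.tables import DeltaTable
from pyspark.sql import functions as F

# New/changed rows since the last load, based on a modification timestamp
last_load = spark.sql(
    "SELECT MAX(loaded_at) AS w FROM gold.sales_daily").first()["w"]
last_load = last_load or "1900-01-01"   # first run: take everything

updates = (spark.table("silver.sales")
           .where(F.col("updated_at") > F.lit(last_load))
           .withColumn("loaded_at", F.current_timestamp()))

# Upsert into the gold table instead of rewriting it end to end
(DeltaTable.forName(spark, "gold.sales_daily")
 .alias("t")
 .merge(updates.alias("s"), "t.sale_id = s.sale_id")
 .whenMatchedUpdateAll()
 .whenNotMatchedInsertAll()
 .execute())
```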
Posted Date not available
6.0 - 11.0 years
15 - 27 Lacs
pune
Hybrid
Looking for immediate joiners (0-30 days).

Role & responsibilities
Data Analytics Engineer Role - Mid level - IC Role

IT Delivery - Advanced Analytics
The Advanced Analytics team is positioned within IT Delivery. This team serves as the brain of the business, where insights are generated, processes are automated, and solutions are built. Our solutions are mainly focused on Supply Chain and Commercial. We are looking for (senior) data engineers to join our high-performing team. We aim to attract and further develop the best Data Science & Supply Chain talent.

The role and its responsibilities
- Collect business requirements from the various stakeholders (Data Scientists, Data Visualizers, Product Managers, IM).
- Translate business requirements into technical requirements, systems, and solutions.
- Design, develop, and document the technical solution: creation of data pipelines for data transfer between different storage services.
- Maintain the solution: actively monitor and manage the system/solution's performance in close contact with the solution architect.
- DevOps / Agile way of working.

Basic Qualifications
- BSc in Computer Science, Statistics, Mathematics, Physics, or any quantitative field.
- Experience in designing and building advanced ETL pipelines in a big data environment.
- Big data frameworks: Apache Hadoop, Apache Spark, RapidMiner, Cloudera.
- Programming languages: Scala, Python, PySpark, SQL.
- Sound communication skills; team player; curious mind.

Preferred Qualifications
- MSc in Computer Science, Statistics, Mathematics, Physics, or any quantitative field.
- Platform: Azure Data Factory, Databricks, Power BI.
- Experience in supply chain.
- Familiar with Agile.
Posted Date not available