12.0 - 18.0 years
35 - 45 Lacs
Pune, Chennai
Work from Office
Job Summary
We are looking for an experienced Azure Data Architect to lead the architecture, design, and implementation of enterprise data solutions on Microsoft Azure. The ideal candidate has strong experience in cloud data architecture, big data solutions, and modern data platforms, with a focus on scalability, security, and performance.

Key Responsibilities
- Architect, design, and implement end-to-end data solutions on Azure, including Azure Data Factory, Synapse Analytics, Azure SQL, Data Lake Storage, and more.
- Define data architecture strategy, governance models, and frameworks for enterprise-grade solutions.
- Lead data modernization and migration projects from on-premises systems to Azure.
- Design data integration and transformation pipelines for batch and real-time data.
- Collaborate with stakeholders, data engineers, data scientists, and business analysts to align architecture with business goals.
- Ensure data solutions follow security, compliance, and regulatory standards.
- Conduct architecture reviews, performance tuning, and troubleshooting of cloud data environments.
- Mentor and guide data engineering and BI teams on Azure best practices.

Required Skills & Experience
- 12+ years of overall IT experience, with at least 5 years as a Data Architect on Azure.
- Hands-on expertise with:
  - Azure Data Factory, Synapse Analytics, Azure SQL, Data Lake Storage Gen2
  - Azure Databricks, Power BI, Azure Analysis Services
  - CI/CD pipelines for data (Azure DevOps)
- Strong experience with data modeling, ETL/ELT design, and data warehousing concepts.
- Experience building data lakes, data vaults, and lakehouse architectures.
- Proficiency in SQL, Python, PySpark, and other scripting languages for data workflows.
- Good knowledge of data security, encryption, RBAC, and compliance standards.
- Experience with Kafka, Event Hubs, or other streaming technologies is a plus.
- Strong understanding of Agile delivery, DevOps, and automation in data projects.
Please note: we are strictly considering candidates with a short notice period only.
Posted 3 weeks ago
5.0 - 10.0 years
5 - 15 Lacs
Bengaluru
Work from Office
About Client
Hiring for one of the most prestigious multinational corporations.

Job Title: Azure Databricks
Qualification: Any Graduate or above
Experience: 5 to 15 years
Location: Bangalore (Madiwala)

Key Responsibilities:
- Design and build scalable and robust data pipelines using Azure Databricks, PySpark, and Spark SQL.
- Integrate data from various structured and unstructured data sources using Azure Data Factory, ADLS, Azure Synapse, etc.
- Develop and maintain ETL/ELT processes for ingestion, transformation, and storage of data.
- Collaborate with data scientists, analysts, and other engineers to deliver data products and solutions.
- Monitor, troubleshoot, and optimize existing pipelines for performance and reliability.
- Ensure data quality, governance, and security compliance in all solutions.
- Participate in architectural decisions and cloud data solutioning.

Required Skills:
- 5+ years of experience in data engineering or related fields.
- Strong hands-on experience with Azure Databricks and Apache Spark.
- Proficiency in Python (PySpark), SQL, and performance tuning techniques.
- Experience with Azure Data Factory, Azure Data Lake Storage (ADLS), and Azure Synapse Analytics.
- Solid understanding of data modeling, data warehousing, and data lakes.
- Familiarity with DevOps practices, CI/CD pipelines, and version control (e.g., Git).

Notice period: Immediate only
Mode of Work: Hybrid

Thanks & Regards,
Swetha
Black and White Business Solutions Pvt. Ltd.
Bangalore, Karnataka, India
Contact Number: 8067432433
rathy@blackwhite.in | www.blackwhite.in
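Pipelines like the ones this posting describes follow an ingest, transform, load shape. Below is a minimal sketch of that shape using only Python's standard library rather than PySpark; the sample data, column names, and the skip-bad-rows rule are invented for illustration, and a real Databricks job would express the same steps with Spark DataFrames reading from ADLS and writing Delta tables.

```python
import csv
import io
from collections import defaultdict

# Invented sample batch; a real job would read this from cloud storage.
RAW_CSV = """order_id,region,amount
1,south,120.50
2,north,80.00
3,south,not_a_number
4,north,40.25
"""

def extract(raw: str):
    """Ingest: parse raw CSV rows into dicts."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows):
    """Transform: drop malformed rows, aggregate amount per region."""
    totals = defaultdict(float)
    for row in rows:
        try:
            totals[row["region"]] += float(row["amount"])
        except ValueError:
            continue  # simple data-quality rule: skip rows with bad amounts
    return dict(totals)

def load(totals):
    """Load: return sorted records; a real job would write them out."""
    return sorted(totals.items())

result = load(transform(extract(RAW_CSV)))
print(result)  # [('north', 120.25), ('south', 120.5)]
```

The malformed row (order 3) is silently skipped here; in a production pipeline it would more likely be routed to a quarantine table for review.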
Posted 3 weeks ago
6.0 - 11.0 years
25 - 35 Lacs
Bengaluru
Hybrid
We are hiring Azure Data Engineers for an active project (Bangalore location). Interested candidates can share the following details by mail along with their updated resume:
- Total experience?
- Relevant experience in Azure Data Engineering?
- Current organization?
- Current location?
- Current fixed salary?
- Expected salary?
- Do you have any offers? If yes, mention the offer you have and the reason for looking for more opportunities.
- Open to relocating to Bangalore?
- Notice period? If serving / not working, mention your last working day (LWD).
- Do you have a PF account?
Posted 3 weeks ago
5.0 - 10.0 years
15 - 25 Lacs
Bengaluru
Work from Office
About Client
Hiring for one of the most prestigious multinational corporations.

Job Title: Azure Databricks
Qualification: Any Graduate or above
Experience: 5 to 15 years
Location: Bangalore

Key Responsibilities:
- Design and build scalable and robust data pipelines using Azure Databricks, PySpark, and Spark SQL.
- Integrate data from various structured and unstructured data sources using Azure Data Factory, ADLS, Azure Synapse, etc.
- Develop and maintain ETL/ELT processes for ingestion, transformation, and storage of data.
- Collaborate with data scientists, analysts, and other engineers to deliver data products and solutions.
- Monitor, troubleshoot, and optimize existing pipelines for performance and reliability.
- Ensure data quality, governance, and security compliance in all solutions.
- Participate in architectural decisions and cloud data solutioning.

Required Skills:
- 5+ years of experience in data engineering or related fields.
- Strong hands-on experience with Azure Databricks and Apache Spark.
- Proficiency in Python (PySpark), SQL, and performance tuning techniques.
- Experience with Azure Data Factory, Azure Data Lake Storage (ADLS), and Azure Synapse Analytics.
- Solid understanding of data modeling, data warehousing, and data lakes.
- Familiarity with DevOps practices, CI/CD pipelines, and version control (e.g., Git).

Notice period: Immediate / serving notice
Mode of Work: Hybrid
Mode of Interview: Face to face

Thanks & Regards,
Bhavana B
Black and White Business Solutions Pvt. Ltd.
Bangalore, Karnataka, India
Direct Number: 8067432454
bhavana.b@blackwhite.in | www.blackwhite.in
Posted 3 weeks ago
5.0 - 10.0 years
7 - 12 Lacs
Bengaluru
Work from Office
Role: Relational Database - Database Administrator (DBA)
Experience: 5+ years
Notice period: Immediate to 30 days
Location: Bangalore
Employment Type: Full Time, Permanent
Working mode: Regular

We are seeking an experienced Database Administrator (DBA) to join our dynamic team. The ideal candidate will have extensive knowledge of relational database management systems, with a strong focus on optimization, administration, and support.

Key Responsibilities:
- Administer and maintain both standalone and clustered database environments.
- Optimize database performance through efficient indexing, partitioning strategies, and query optimization techniques.
- Manage and support relational databases (e.g., MySQL, MS SQL).
- Ensure data integrity, availability, security, and scalability of databases.
- Implement and manage data storage solutions such as S3/Parquet and Delta tables.
- Monitor database performance, troubleshoot issues, and implement solutions.
- Collaborate with development teams to design and implement database solutions.
- Implement and manage database backups, restores, and recovery models.
- Perform routine database maintenance tasks such as upgrades, patches, and migrations.
- Document database processes, procedures, and configurations.

Requirements:

Required:
- 5-10 years of proven experience as a Database Administrator (DBA).
- Strong understanding of database management principles.
- Proficiency in relational databases (e.g., MySQL, MS SQL).
- Experience in optimizing database performance and implementing efficient query processing.
- Familiarity with Linux environments and basic administration tasks.
- Knowledge of Parquet file structures as relational data stores.

Preferred:
- Experience with RDS (AWS) and Databricks (AWS).
- Understanding of Databricks and Unity Catalog.
- Experience with S3/Parquet and Delta tables.
- Knowledge of Apache Drill and Trino DB connectors.
- Prior experience with Hadoop, Parquet file formats, Impala, and Hive.
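The "efficient indexing and query optimization" responsibility above can be illustrated in a few lines. This sketch uses SQLite from Python's standard library purely as a stand-in for MySQL / MS SQL Server (table and index names are invented): the query plan before the index shows a full table scan, and after the index is created the planner switches to an index search.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL)"
)
conn.executemany(
    "INSERT INTO orders (customer_id, amount) VALUES (?, ?)",
    [(i % 100, i * 1.5) for i in range(1000)],
)

def plan(sql):
    # EXPLAIN QUERY PLAN reports whether the query scans or uses an index;
    # the detail string is the last column of each plan row.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT SUM(amount) FROM orders WHERE customer_id = 42"
before = plan(query)  # full table scan, e.g. "SCAN orders"
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
after = plan(query)   # e.g. "SEARCH orders USING INDEX idx_orders_customer (customer_id=?)"

print(before)
print(after)
```

The same before/after comparison (with the engine's own EXPLAIN tooling) is the usual first step when tuning a slow query on MySQL or SQL Server.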
Posted 3 weeks ago
2.0 - 5.0 years
4 - 5 Lacs
Dhule
Work from Office
The Development Lead will oversee the design, development, and delivery of advanced data solutions using Azure Databricks, SQL, and data visualization tools like Power BI. The role involves leading a team of developers, managing data pipelines, and creating insightful dashboards and reports to drive data-driven decision-making across the organization. The individual will ensure best practices are followed in data architecture, development, and reporting while maintaining alignment with business objectives.

Key Responsibilities:
- Data Integration & ETL Processes: Design, build, and optimize ETL pipelines to manage the flow of data from various sources into data lakes, data warehouses, and reporting platforms.
- Data Visualization & Reporting: Lead the development of interactive dashboards and reports using Power BI, ensuring that business users have access to actionable insights and performance metrics.
- SQL Development & Optimization: Write, optimize, and review complex SQL queries for data extraction, transformation, and reporting, ensuring high performance and scalability across large datasets.
- Azure Cloud Solutions: Implement and manage cloud-based solutions using Azure services (Azure Databricks, Azure SQL Database, Data Lake) to support business intelligence and reporting initiatives.
- Collaboration with Stakeholders: Work closely with business leaders and cross-functional teams to understand reporting and analytics needs, translating them into technical requirements and actionable data solutions.
- Quality Assurance & Best Practices: Implement and maintain best practices in development, ensuring code quality, version control, and adherence to data governance standards.
- Performance Monitoring & Tuning: Continuously monitor the performance of data systems, reporting tools, and dashboards to ensure they meet SLAs and business requirements.
- Documentation & Training: Create and maintain comprehensive documentation for all data solutions, including architecture diagrams, ETL workflows, and data models. Provide training and support to end-users on Power BI reports and dashboards.

Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
- Proven experience as a Development Lead or Senior Data Engineer with expertise in Azure Databricks, SQL, Power BI, and data reporting/visualization.
- Hands-on experience in Azure Databricks for large-scale data processing and analytics, including Delta Lake, Spark SQL, and integration with Azure Data Lake.
- Strong expertise in SQL for querying, data transformation, and database management.
- Proficiency in Power BI for developing advanced dashboards, data models, and reporting solutions.
- Experience in ETL design and data integration across multiple systems, with a focus on performance optimization.
- Knowledge of Azure cloud architecture, including Azure SQL Database, Data Lake, and other relevant services.
- Experience leading agile development teams, with a strong focus on delivering high-quality, scalable solutions.
- Strong problem-solving skills, with the ability to troubleshoot and resolve complex data and reporting issues.
- Excellent communication skills, with the ability to interact with both technical and non-technical stakeholders.

Preferred Qualifications:
- Knowledge of additional Azure services (e.g., Azure Synapse, Data Factory, Logic Apps) is a plus.
- Experience in Power BI for data visualization and custom calculations.
Posted 3 weeks ago
3.0 - 5.0 years
3 - 7 Lacs
Hyderabad
Work from Office
Total Years of Experience: 3-5 years
Relevant Years of Experience: 3+ years

Detailed JD (Roles and Responsibilities)

Must-Have Skills:
- Proficient in Data Engineering
- Hands-on experience in Python, Azure Data Factory, Azure Databricks (PySpark), and ETL
- Knowledge of Data Lake storage (storage containers) and MS SQL
- A quick and enthusiastic learner (must) who is willing to work on new technologies as requirements demand
- Configuring and deploying using Azure DevOps pipelines
- Airflow

Good-to-Have Skills:
- SQL knowledge and experience working with relational databases
- Understanding of banking domain concepts
- Understanding of project lifecycles: waterfall and agile

Work Experience: 3-5 years of experience in Data Engineering

Mandatory Skills: Azure Databricks, Azure Data Factory, and Python coding skills
Posted 3 weeks ago
3.0 - 5.0 years
8 - 9 Lacs
Hyderabad, Pune, Chennai
Work from Office
Total Years of Experience: 3-5 years
Relevant Years of Experience: 3+ years

Detailed JD (Roles and Responsibilities)

Must-Have Skills:
- Proficient in Data Engineering
- Hands-on experience in Python, Azure Data Factory, Azure Databricks (PySpark), and ETL
- Knowledge of Data Lake storage (storage containers) and MS SQL
- A quick and enthusiastic learner (must) who is willing to work on new technologies as requirements demand
- Configuring and deploying using Azure DevOps pipelines
- Airflow

Good-to-Have Skills:
- SQL knowledge and experience working with relational databases
- Understanding of banking domain concepts
- Understanding of project lifecycles: waterfall and agile

Work Experience: 3-5 years of experience in Data Engineering
Posted 3 weeks ago
12.0 - 14.0 years
25 - 30 Lacs
Chennai
Work from Office
The Solution Architect / Data Engineer will design, implement, and manage data solutions for the insurance business, leveraging expertise in Cognos, DB2, Azure Databricks, ETL processes, and SQL. The role involves working with cross-functional teams to design scalable data architectures and enable advanced analytics and reporting, supporting the company's finance, underwriting, claims, and customer service operations.

Key Responsibilities:
- Data Architecture & Design: Design and implement robust, scalable data architectures and solutions in the insurance domain using Azure Databricks, DB2, and other data platforms.
- Data Integration & ETL Processes: Lead the development and optimization of ETL pipelines to extract, transform, and load data from multiple sources, ensuring data integrity and performance.
- Cognos Reporting: Oversee the design and maintenance of Cognos reporting systems, developing custom reports and dashboards to support business users in finance, claims, underwriting, and operations.
- Data Engineering: Design, build, and maintain data models, data pipelines, and databases to enable business intelligence and advanced analytics across the organization.
- Cloud Infrastructure: Develop and manage data solutions on Azure, including Databricks for data processing, ensuring seamless integration with existing systems (e.g., DB2, legacy platforms).
- SQL Development: Write and optimize complex SQL queries for data extraction, manipulation, and reporting purposes, with a focus on performance and scalability.
- Data Governance & Quality: Ensure data quality, consistency, and governance across all data solutions, implementing best practices and adhering to industry standards (e.g., GDPR, insurance regulations).
- Collaboration: Work closely with business stakeholders, data scientists, and analysts to understand business needs and translate them into technical solutions that drive actionable insights.
- Solution Architecture: Provide architectural leadership in designing data platforms, ensuring that solutions meet business requirements, are cost-effective, and can scale for future growth.
- Performance Optimization: Continuously monitor and tune the performance of databases, ETL processes, and reporting tools to meet service level agreements (SLAs).
- Documentation: Create and maintain comprehensive technical documentation, including architecture diagrams, ETL process flows, and data dictionaries.

Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
- Proven experience as a Solution Architect or Data Engineer in the insurance industry, with a strong focus on data solutions.
- Hands-on experience with Cognos (for reporting and dashboarding) and DB2 (for database management).
- Proficiency in Azure Databricks for data processing, machine learning, and real-time analytics.
- Extensive experience in ETL development, data integration, and data transformation processes.
- Strong knowledge of Python and SQL (advanced query writing, optimization, and troubleshooting).
- Experience with cloud platforms (Azure preferred) and hybrid data environments (on-premises and cloud).
- Familiarity with data governance and regulatory requirements in the insurance industry (e.g., Solvency II, IFRS 17).
- Strong problem-solving skills, with the ability to troubleshoot and resolve complex technical issues related to data architecture and performance.
- Excellent verbal and written communication skills, with the ability to work effectively with both technical and non-technical stakeholders.

Preferred Qualifications:
- Experience with other cloud-based data platforms (e.g., Azure Data Lake, Azure Synapse, AWS Redshift).
- Knowledge of machine learning workflows, leveraging Databricks for model training and deployment.
- Familiarity with insurance-specific data models and their use in finance, claims, and underwriting operations.
- Certifications in Azure Databricks, Microsoft Azure, DB2, or related technologies.
- Knowledge of additional reporting tools (e.g., Power BI, Tableau) is a plus.

Key Competencies:
- Technical Leadership: Ability to guide and mentor development teams in implementing best practices for data architecture and engineering.
- Analytical Skills: Strong analytical and problem-solving skills, with a focus on optimizing data systems for performance and scalability.
- Collaborative Mindset: Ability to work effectively in a cross-functional team, communicating complex technical solutions in simple terms to business stakeholders.
- Attention to Detail: Meticulous attention to detail, ensuring high-quality data output and system performance.
Posted 3 weeks ago
0.0 - 1.0 years
1 - 3 Lacs
Bengaluru
Work from Office
Required skills:
- Classic pipelines
- PowerShell
- YAML
- ARM templates / Terraform / Bicep
- CI/CD
- Experience with data lake and analytics technologies in Azure (e.g., Azure Data Lake Storage, Azure Data Factory, Azure Databricks) - most important
- Data background with Azure & PowerShell

Location: Chennai, Hyderabad, Kolkata, Pune, Ahmedabad, Remote
Posted 3 weeks ago
5.0 - 10.0 years
22 - 30 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Role & Responsibilities
- 5+ years of experience using Azure.
- Strong proficiency in Databricks.
- Experience with PySpark.
- Proficiency in SQL or T-SQL.
- Experience with Azure service components such as Azure Data Factory, Azure Data Lake, Databricks, and SQL DB / SQL Server.
- Databricks Jobs for efficient data processing, ETL tasks, and report generation.
- Hands-on experience with scripting languages such as Python for data processing and manipulation.

Key Responsibilities:
- Leverage Databricks to set up scalable data pipelines that integrate with a variety of data sources and cloud platforms.
- Participate in code and design reviews to maintain high development standards.
- Optimize data querying layers to enhance performance and support analytical requirements.
- Develop end-to-end automations in the Azure stack for ETL workflows and data quality validations.
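The "data quality validations" mentioned in the last responsibility are typically a set of rules run against each batch after it lands. Here is a minimal pure-Python sketch of that idea; the rule names (`email_not_null`, `age_in_range`), thresholds, and sample records are all invented for the example, and a real Azure pipeline would run equivalent checks in Databricks or a dedicated quality framework.

```python
# Invented sample batch with two deliberately bad records.
records = [
    {"id": 1, "email": "a@example.com", "age": 34},
    {"id": 2, "email": None,            "age": 29},
    {"id": 3, "email": "c@example.com", "age": -5},
]

def validate(rows):
    """Return a list of (rule, row_id) failures for a batch."""
    failures = []
    for row in rows:
        if not row["email"]:                 # rule: email must be present
            failures.append(("email_not_null", row["id"]))
        if not 0 <= row["age"] <= 120:       # rule: age within plausible range
            failures.append(("age_in_range", row["id"]))
    return failures

failures = validate(records)
print(failures)  # [('email_not_null', 2), ('age_in_range', 3)]
```

An automated workflow would usually fail (or quarantine) the batch when `failures` is non-empty and emit the list to monitoring.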
Posted 3 weeks ago
6.0 - 11.0 years
15 - 30 Lacs
Hyderabad, Bengaluru, Delhi / NCR
Work from Office
- Experience in software development and Agile methodologies.
- Excellent experience in developing ETL routines, processes, and structures.
- Expert in SSIS and Azure Synapse ETL development.
- Significant experience developing in Python or PySpark.
Posted 3 weeks ago
8.0 - 13.0 years
5 - 10 Lacs
Chennai, Bengaluru, Delhi / NCR
Work from Office
Primary Roles and Responsibilities:
- Develop Modern Data Warehouse solutions using Databricks and the AWS/Azure stack.
- Provide forward-thinking solutions in the data engineering and analytics space.
- Collaborate with DW/BI leads to understand new ETL pipeline development requirements.
- Triage issues to find gaps in existing pipelines and fix them.
- Work with the business to understand reporting-layer needs and develop data models to fulfill them.
- Help junior team members resolve issues and technical challenges.
- Drive technical discussions with the client architect and team members.
- Orchestrate the data pipelines in a scheduler via Airflow.

Skills and Qualifications:
- Bachelor's and/or master's degree in computer science, or equivalent experience.
- Must have 6+ years of total IT experience and 3+ years of experience in data warehouse/ETL projects.
- Deep understanding of Star and Snowflake dimensional modelling.
- Strong knowledge of data management principles.
- Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture.
- Hands-on experience in SQL, Python, and Spark (PySpark).
- Must have experience with the AWS/Azure stack.
- Desirable to have ETL experience with batch and streaming (Kinesis).
- Experience in building ETL / data warehouse transformation processes.
- Experience with Apache Kafka for streaming / event-based data.
- Experience with other open-source big data products, e.g., Hadoop (incl. Hive, Pig, Impala).
- Experience with open-source non-relational / NoSQL data repositories (incl. MongoDB, Cassandra, Neo4j).
- Experience working with structured and unstructured data, including imaging and geospatial data.
- Experience working in a DevOps environment with tools such as Terraform, CircleCI, and Git.
- Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, and troubleshooting.
- Databricks Certified Data Engineer Associate/Professional certification (desirable).
- Comfortable working in a dynamic, fast-paced, innovative environment with several concurrent projects.
- Experience working in Agile methodology.
- Strong verbal and written communication skills.
- Strong analytical and problem-solving skills with high attention to detail.

Location: Bangalore, Chennai, Delhi, Pune
Posted 3 weeks ago
6.0 - 11.0 years
32 - 40 Lacs
Hyderabad, Pune, Chennai
Hybrid
Data Software Engineer - Spark, Python, (AWS, Kafka or Azure Databricks or GCP)

Job Description:
1. 5-12 years of experience in Big Data and data-related technologies
2. Expert-level understanding of distributed computing principles
3. Expert-level knowledge of and experience with Apache Spark
4. Hands-on programming with Python
5. Proficiency with Hadoop v2, MapReduce, HDFS, Sqoop
6. Experience building stream-processing systems using technologies such as Apache Storm or Spark Streaming
7. Experience with messaging systems such as Kafka or RabbitMQ
8. Good understanding of Big Data querying tools such as Hive and Impala
9. Experience integrating data from multiple sources such as RDBMS (SQL Server, Oracle), ERP, and files
10. Good understanding of SQL queries, joins, stored procedures, and relational schemas
11. Experience with NoSQL databases such as HBase, Cassandra, MongoDB
12. Knowledge of ETL techniques and frameworks
13. Performance tuning of Spark jobs
14. Experience with native cloud data services: AWS, Azure Databricks, or GCP
15. Ability to lead a team efficiently
16. Experience designing and implementing Big Data solutions
17. Practitioner of Agile methodology
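The stream-processing systems named in item 6 commonly group events into fixed time windows before aggregating. As a toy illustration of that tumbling-window idea (the event data, the 10-second window size, and the `window_counts` helper are all invented; Spark Streaming or Storm would handle arrival order, state, and lateness for real):

```python
from collections import defaultdict

WINDOW_SECONDS = 10  # assumed window size for this sketch

# Invented event stream: each event carries a timestamp (seconds) and a key.
events = [
    {"ts": 3,  "key": "click"},
    {"ts": 7,  "key": "click"},
    {"ts": 12, "key": "view"},
    {"ts": 15, "key": "click"},
    {"ts": 21, "key": "view"},
]

def window_counts(stream):
    """Count events per (window_start, key) using tumbling windows."""
    counts = defaultdict(int)
    for event in stream:
        # Floor the timestamp to the start of its window.
        window_start = (event["ts"] // WINDOW_SECONDS) * WINDOW_SECONDS
        counts[(window_start, event["key"])] += 1
    return dict(counts)

print(window_counts(events))
# {(0, 'click'): 2, (10, 'view'): 1, (10, 'click'): 1, (20, 'view'): 1}
```

The same grouping is what a Spark Structured Streaming `groupBy(window(...), key).count()` expresses declaratively, with the engine maintaining the window state incrementally as events arrive.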
Posted 3 weeks ago
5.0 - 10.0 years
20 - 35 Lacs
Hyderabad, Chennai, Coimbatore
Hybrid
Our client is a global IT service & consulting organization.

Role: Data Software Engineer
Experience: 5-12 years
Skills: Python, Spark, Azure Databricks/GCP/AWS
Location: Hyderabad, Chennai, Coimbatore
Notice period: Immediate to 60 days
F2F interview on 12th July, Saturday
Posted 3 weeks ago
4.0 - 9.0 years
9 - 14 Lacs
Hyderabad
Work from Office
As part of our strategic initiative to build a centralized capability around data and cloud engineering, we are establishing a dedicated Azure Cloud Data Engineering practice. This team will be at the forefront of designing, developing, and deploying scalable data solutions on the cloud, primarily using the Microsoft Azure platform. The practice will serve as a centralized team, driving innovation, standardization, and best practices across cloud-based data initiatives. New hires will play a pivotal role in shaping the future of our data landscape, collaborating with cross-functional teams, clients, and stakeholders to deliver impactful, end-to-end solutions.

Primary Responsibilities:
- Ingest data from multiple on-prem and cloud data sources using various tools and capabilities in Azure
- Design and develop Azure Databricks processes using PySpark/Spark SQL
- Design and develop orchestration jobs using ADF and Databricks Workflows
- Analyze data engineering processes in development and act as an SME to troubleshoot performance issues and suggest improvements
- Develop and maintain CI/CD processes using Jenkins, GitHub, GitHub Actions, etc.
- Build a test framework for Databricks notebook jobs for automated testing before code deployment
- Design and build POCs to validate new ideas, tools, and architectures in Azure
- Continuously explore new Azure services and capabilities; assess their applicability to business needs
- Create detailed documentation for cloud processes, architecture, and implementation patterns
- Work with the data & analytics team to build and deploy efficient data engineering processes and jobs on the Azure cloud
- Prepare case studies and technical write-ups to showcase successful implementations and lessons learned
- Work closely with clients, business stakeholders, and internal teams to gather requirements and translate them into technical solutions using best practices and appropriate architecture
- Contribute to full-lifecycle project implementations, from design and development to deployment and monitoring
- Ensure solutions adhere to security, compliance, and governance standards
- Monitor and optimize data pipelines and cloud resources for cost and performance efficiency
- Identify solutions to non-standard requests and problems
- Support and maintain the self-service BI warehouse
- Mentor and support existing on-prem developers in the cloud environment
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies regarding flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary, or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications:
- Undergraduate degree or equivalent experience
- 4+ years of overall experience in data & analytics engineering
- 4+ years of experience working with Azure, Databricks, ADF, and Data Lake
- Solid experience working with data platforms and products using PySpark and Spark SQL
- Solid experience with CI/CD tools such as Jenkins, GitHub, GitHub Actions, Maven, etc.
- In-depth understanding of Azure architecture and the ability to come up with efficient designs and solutions
- Highly proficient in Python and SQL
- Proven excellent communication skills

Preferred Qualifications:
- Snowflake, Airflow experience
- Power BI development experience
- Experience or knowledge of health care concepts: E&I, M&R, C&S LOBs, Claims, Members, Provider, Payers, Underwriting

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.
Posted 3 weeks ago
8.0 - 13.0 years
20 - 35 Lacs
Chennai
Work from Office
Warm greetings from SP Staffing Services Pvt Ltd!

Experience: 8-15 years
Work Location: Chennai

Job Description:
Required Technical Skill Set: Azure native technologies, Synapse, and Databricks; Python
Desired Experience Range: 8+ years
Location of Requirement: Chennai

Required Skills:
- Previous experience as a data engineer or in a similar role
- Must have experience with MS Azure services such as Data Lake Storage, Data Factory, Databricks, Azure SQL Database, Azure Synapse Analytics, and Azure Functions
- Technical expertise with data models, data mining, analytics, and segmentation techniques
- Knowledge of programming languages and environments such as Python, Java, Scala, R, .NET/C#
- Hands-on experience with SQL database design
- Great numerical and analytical skills
- Degree in Computer Science, IT, or a similar field; a master's is a plus
- Experience integrating Azure PaaS services

Interested candidates, kindly share your updated resume with ramya.r@spstaffing.in or contact 8667784354 (WhatsApp: 9597467601) to proceed further.
Posted 3 weeks ago
7.0 - 12.0 years
15 - 30 Lacs
Noida, Hyderabad
Work from Office
Roles and Responsibilities
- Design, develop, test, deploy, and maintain large-scale data pipelines using Azure Databricks.
- Collaborate with cross-functional teams to gather requirements and deliver high-quality solutions.
- Develop complex ETL processes using Python, PySpark, PL/SQL, SQL, Oracle ADF, and other tools.
- Ensure scalability, performance, security, and reliability of the developed solutions.

Desired Candidate Profile
- 7-12 years of experience in designing and developing large-scale data integration projects on the Azure platform (Azure Databricks).
- Strong expertise in ETL tools such as Azure Data Factory (ADF), Oracle ADF, or similar technologies.
- Proficiency in programming languages like Python, or equivalent scripting languages like PowerShell/Bash.
- Experience working with relational databases like SQL Server/PostgreSQL/MySQL/Oracle; knowledge of database modeling concepts preferred.
Posted 3 weeks ago
8.0 - 13.0 years
25 - 30 Lacs
Pune, Chennai, Bengaluru
Work from Office
Role: Senior Databricks Engineer / Databricks Technical Lead / Data Architect
Experience: 8-15 years
Location: Bangalore, Chennai, Delhi, Pune
Primary Roles And Responsibilities:
- Develop Modern Data Warehouse solutions using Databricks and the AWS/Azure stack.
- Provide forward-thinking solutions in the data engineering and analytics space.
- Collaborate with DW/BI leads to understand new ETL pipeline development requirements.
- Triage issues to find gaps in existing pipelines and fix them.
- Work with the business to understand reporting-layer needs and develop data models to fulfil them.
- Help junior team members resolve issues and technical challenges.
- Drive technical discussions with the client architect and team members.
- Orchestrate data pipelines via the Airflow scheduler.
Skills And Qualifications:
- Bachelor's and/or master's degree in computer science, or equivalent experience.
- 6+ years of total IT experience, with 3+ years in data warehouse/ETL projects.
- Deep understanding of Star and Snowflake dimensional modelling.
- Strong knowledge of data management principles.
- Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture.
- Hands-on experience in SQL, Python, and Spark (PySpark).
- Experience with the AWS/Azure stack.
- Desirable: ETL with batch and streaming (Kinesis).
- Experience building ETL / data warehouse transformation processes.
- Experience with Apache Kafka for streaming / event-based data.
- Experience with other open-source big data products such as Hadoop (incl. Hive, Pig, Impala).
- Experience with open-source non-relational / NoSQL data stores (e.g., MongoDB, Cassandra, Neo4j).
- Experience working with structured and unstructured data, including imaging and geospatial data.
- Experience working in a DevOps environment with tools such as Terraform, CircleCI, and Git.
- Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, and troubleshooting.
- Databricks Certified Data Engineer Associate/Professional certification (desirable).
- Comfortable working in a dynamic, fast-paced, innovative environment with several concurrent projects.
- Experience working in Agile methodology.
- Strong verbal and written communication skills.
- Strong analytical and problem-solving skills with high attention to detail.
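The Star-schema modelling called for above boils down to a central fact table joined to descriptive dimension tables. A minimal sketch using Python's stdlib sqlite3 (table and column names are illustrative, not from any specific warehouse):

```python
import sqlite3

# Minimal star schema: one fact table keyed to two dimension tables.
# All names here are hypothetical placeholders for illustration.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE dim_customer (customer_key INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE dim_product  (product_key  INTEGER PRIMARY KEY, category TEXT);
CREATE TABLE fact_sales (
    customer_key INTEGER REFERENCES dim_customer(customer_key),
    product_key  INTEGER REFERENCES dim_product(product_key),
    amount       REAL
);
INSERT INTO dim_customer VALUES (1, 'Asha'), (2, 'Ravi');
INSERT INTO dim_product  VALUES (10, 'Books'), (20, 'Toys');
INSERT INTO fact_sales   VALUES (1, 10, 250.0), (2, 10, 100.0), (2, 20, 75.0);
""")

# The typical query shape: join the fact to a dimension and aggregate.
rows = cur.execute("""
SELECT p.category, SUM(f.amount)
FROM fact_sales f
JOIN dim_product p ON f.product_key = p.product_key
GROUP BY p.category
ORDER BY p.category
""").fetchall()
print(rows)  # → [('Books', 350.0), ('Toys', 75.0)]
```

A Snowflake schema would further normalize the dimensions (e.g., splitting product category into its own table); the fact table and join pattern stay the same.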
Posted 3 weeks ago
5.0 - 10.0 years
10 - 18 Lacs
Hyderabad, Bengaluru
Hybrid
Job Title: Azure Data Engineer
Experience: 5+ years
Main location: Bangalore
Employment Type: Full Time
Position Description:
- Strong experience in Azure Functions, Azure Data Factory, Azure SQL, and Azure Key Vault.
- Strong experience in Azure Databricks, notebooks, ADLS, and Delta Lake.
- Experience building data pipelines and analysis tools using Python, PySpark, Scala, and SQL.
- Proficient in integrating, transforming, and consolidating data from various structured and unstructured data sources.
- Knowledge of Autosys jobs and batch scripts.
- Hands-on experience with Agile methodology and Azure DevOps (CI/CD pipelines).
- Knowledge of streaming platforms (Kafka, Solace queues), Azure App Services, Azure Stream Analytics, and Power BI.
- Ability to work independently as well as in a team.
Roles & Responsibilities:
- Design and implement storage solutions using Azure services such as Azure SQL and Azure Data Lake Storage.
- Design and implement ETL using ADF data pipelines to move and transform data from various sources into cloud-based data warehouses.
- Implement business solutions using Azure Databricks.
- Align with the DevOps strategy to promote code to production.
- Think outside the box to create reusable Databricks components.
- Working experience in migration and modernization projects.
- Involvement in due-diligence phases to translate requirements into implementation.
- Good communication skills.
Behavioural Competencies:
- Proven experience delivering process efficiencies and improvements.
- Clear and fluent English (both verbal and written).
- Ability to build and maintain efficient working relationships with remote teams.
- Demonstrated ability to take ownership of and accountability for relevant products and services.
- Ability to plan, prioritise, and complete your own work, while remaining a team player.
- Willingness to engage with and work in other technologies.
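The ETL work described above (ingest, clean, consolidate) follows the same pattern whether written as an ADF pipeline or a Databricks job. A minimal batch-transform sketch in plain Python (a PySpark job would express the same steps as DataFrame operations; the input columns and cleaning rules below are hypothetical):

```python
import csv
import io

# Hypothetical raw extract: one row has a missing amount and should be dropped.
raw = io.StringIO("id,amount,region\n1,100,IN\n2,,IN\n3,250,US\n")

cleaned = []
for row in csv.DictReader(raw):
    if not row["amount"]:          # drop rows with missing amounts
        continue
    row["amount"] = float(row["amount"])   # cast to a numeric type
    row["region"] = row["region"].lower()  # normalize categorical values
    cleaned.append(row)

total = sum(r["amount"] for r in cleaned)
print(len(cleaned), total)  # → 2 350.0
```

In PySpark the same logic would be a `filter` on null amounts followed by `withColumn` casts, with the result written to Delta Lake.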
Posted 3 weeks ago
5.0 - 6.0 years
10 - 20 Lacs
Chennai
Remote
Job Title: Developer (strong technical candidate)
No. of Positions: 2
Role: Application Development
Job Location: Chennai (work from home - permanent remote)
Experience: 5-6 years
Employment Type: Full Time (direct payroll)
Work Timing: 3 rotational shifts (rotated every 2 weeks): 4:30 am to 1 pm, 10 am to 7 pm, 6:30 pm to 2:30 am.
Work Days: 5 days a week, Tuesday to Saturday (the weekend is Sunday and Monday)
Department: .NET team with Azure and DevOps
Domain/Vertical: Banking
Notice Period: Immediate joiners preferred
We are looking for passionate and skilled Full Stack Developers proficient in .NET, ASP.NET, C#, REST API, and Entity Framework, with Azure Data Factory, Data Lake Storage, Databricks, SQL, and Autosys experience.
Technical Skills Required (please brush up on these):
- .NET full stack
- Azure Data Factory, Azure Databricks, Azure Data Lake Storage
- REST API
- SQL
- DevOps (CI/CD pipelines)
Resumes can be shared with devika.raju@kumaran.com or via WhatsApp: 8122770798.
Posted 3 weeks ago
5.0 - 8.0 years
20 - 25 Lacs
Pune, Mumbai (All Areas)
Hybrid
Job Title: Azure Platform Engineer / Data Engineer
Location: Mumbai / Pune
Experience: 5-8 years
Work Mode: Hybrid (minimum 3 days onsite)
Shift Timing: 12:30 PM to 9:30 PM IST
Joining: Immediate joiners preferred
What to Expect from the Role:
As an Azure Platform Engineer, you will be responsible for designing, building, automating, and managing cloud infrastructure and data pipelines. You'll collaborate closely with engineering teams to optimize platform services and ensure secure, scalable, and efficient solutions.
Key Responsibilities:
- Engineer platform services using container orchestration, CI/CD pipelines, and automation frameworks.
- Implement data ingestion, curation, and transformation using Azure-native tools.
- Provision storage, enforce security policies, and manage large-scale enterprise data solutions.
- Leverage tools for Infrastructure as Code, logging, monitoring, and cloud operations.
- Integrate structured and unstructured data into unified, analytics-ready formats.
- Build and maintain Infrastructure as Code (IaC) scripts using Terraform, Bash, YAML, and JSON.
- Work in a global, cross-functional environment to deliver cloud solutions.
Required Skills and Experience:
- 5-8 years of experience in cloud development, architecture, and engineering.
- Strong hands-on experience with:
  - Azure Cloud Services: App Gateway, Azure Functions, Azure Key Vault, ADLS
  - Azure Data Factory (ADF): pipelines, datasets, linked services, activities
  - Databricks: data processing and analytics
  - Terraform: building, upgrading, and deploying IaC templates
- Experience with:
  - CI/CD automation, container orchestration, and cloud-native deployments
  - REST APIs for platform and service integration
  - Data transformation and integration from various sources
- Excellent problem-solving and communication skills.
- Experience working collaboratively in distributed global teams.
Preferred Qualifications (Good to Have):
- Integration experience with Power BI or Power Apps
- Strong commitment to DevOps best practices and an automation-first mindset
- Experience designing and operationalizing enterprise-scale Azure data solutions
- Knowledge of Azure security and compliance practices
Work Environment and Expectations:
- Hybrid model: minimum 3 days per week in the office
- Flexible work culture, supportive team environment
- Shift timing: 12:30 PM to 9:30 PM IST
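The IaC scripting mix mentioned above (Terraform, JSON) works because Terraform also accepts JSON-syntax configuration files (`*.tf.json`), which can be generated programmatically. A minimal sketch with Python's stdlib json module; the resource attributes below are illustrative placeholders, not a vetted azurerm configuration:

```python
import json

# Emit a Terraform JSON-syntax resource block. The resource name,
# storage account name, and location are hypothetical examples.
config = {
    "resource": {
        "azurerm_storage_account": {
            "example": {
                "name": "examplestorageacct",
                "resource_group_name": "example-rg",
                "location": "centralindia",
                "account_tier": "Standard",
                "account_replication_type": "LRS",
            }
        }
    }
}

rendered = json.dumps(config, indent=2)
print(rendered)  # could be written to main.tf.json and fed to `terraform plan`
```

Generating configs this way keeps environment-specific values in one place; teams more commonly reach for native HCL with variables, but JSON output is handy when the config is itself computed.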
Posted 3 weeks ago
3.0 - 8.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: Microsoft Azure Databricks
Good-to-have skills: NA
Minimum 3 year(s) of experience is required.
Educational Qualification: 15 years of full-time education
Summary:
As an Application Developer, you will engage in the design, construction, and configuration of applications tailored to fulfill specific business processes and application requirements. Your typical day will involve collaborating with team members to understand project needs, developing innovative solutions, and ensuring that applications are optimized for performance and usability. You will also participate in testing and debugging to deliver high-quality applications that meet user expectations and business goals.
Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation/contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Assist in the documentation of application specifications and user guides.
- Collaborate with cross-functional teams to gather requirements and provide technical insights.
Professional & Technical Skills:
- Must-Have Skills: Proficiency in Microsoft Azure Databricks.
- Good-To-Have Skills: Experience with cloud computing platforms.
- Strong understanding of application development methodologies.
- Familiarity with database management and data integration techniques.
- Experience troubleshooting and optimizing application performance.
Additional Information:
- The candidate should have a minimum of 3 years of experience in Microsoft Azure Databricks.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.
Posted 3 weeks ago
15.0 - 20.0 years
9 - 13 Lacs
Bengaluru
Work from Office
Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: Microsoft Power BI, Microsoft Azure Databricks
Minimum 7.5 year(s) of experience is required.
Educational Qualification: 15 years of full-time education
Summary:
As a Data Platform Engineer, you will assist with the data platform blueprint and design, encompassing the relevant data platform components. Your typical day will involve collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models, while engaging in discussions to refine and enhance the overall data architecture. You will be involved in various stages of the data platform lifecycle, ensuring that all components work seamlessly together to support the organization's data needs and objectives. You will analyze requirements, propose solutions, and contribute to the continuous improvement of the data platform.
Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Monitor and evaluate team performance to ensure alignment with project goals.
Professional & Technical Skills:
- Must-Have Skills: Proficiency in Databricks Unified Data Analytics Platform.
- Good-To-Have Skills: Experience with Microsoft Power BI and Microsoft Azure Databricks.
- Strong understanding of data integration techniques and best practices.
- Experience with data modeling and database design.
- Familiarity with cloud-based data solutions and architectures.
Additional Information:
- The candidate should have a minimum of 7.5 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.
Posted 3 weeks ago
15.0 - 20.0 years
10 - 14 Lacs
Hyderabad
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must-have skills: Microsoft Azure Databricks
Good-to-have skills: NA
Minimum 5 year(s) of experience is required.
Educational Qualification: 15 years of full-time education
Summary:
As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure that application development aligns with business objectives, overseeing project timelines, and facilitating communication among stakeholders to drive project success. You will also engage in problem-solving, providing guidance and support to your team while ensuring that best practices are followed throughout the development process.
Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Facilitate training and development opportunities for team members to enhance their skills.
- Monitor project progress and implement necessary adjustments to meet deadlines.
Professional & Technical Skills:
- Must-Have Skills: Proficiency in Microsoft Azure Databricks.
- Strong understanding of cloud computing concepts and architecture.
- Experience with application design and development methodologies.
- Familiarity with data integration and ETL processes.
- Ability to troubleshoot and resolve technical issues efficiently.
Additional Information:
- The candidate should have a minimum of 5 years of experience in Microsoft Azure Databricks.
- This position is based at our Hyderabad office.
- 15 years of full-time education is required.
Posted 3 weeks ago