3.0 - 7.0 years
12 - 18 Lacs
Pune
Work from Office
Roles and Responsibilities
- Design, develop, and execute automated tests using PySpark and Azure Databricks for data warehouse testing (a minimal test sketch follows below).
- Develop ETL processes using SQL to extract, transform, and load data from various sources into a centralized repository.
- Troubleshoot issues related to data quality, performance tuning, and scalability in the data warehouse environment.
- Collaborate with cross-functional teams to identify requirements and create test plans that meet business needs.

Desired Candidate Profile
- 3-7 years of experience in QA automation testing, with expertise in Azure Databricks and PySpark.
- Strong understanding of data warehouse testing principles and methodologies.
- Proficiency in writing complex SQL queries for extract, transform, load (ETL) operations.
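To illustrate the kind of automated warehouse test this role calls for, here is a minimal sketch, assuming pytest plus a local PySpark session; the DataFrames stand in for real staging and warehouse tables, which are not named in the posting:

```python
import pytest
from pyspark.sql import SparkSession

@pytest.fixture(scope="session")
def spark():
    # Local session for CI runs; on Azure Databricks the notebook-provided
    # `spark` session would be used instead.
    return SparkSession.builder.master("local[2]").appName("dw-tests").getOrCreate()

def test_load_preserves_row_count(spark):
    # Stand-in DataFrames; a real test would read the staging and target tables.
    source = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "val"])
    target = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "val"])
    assert source.count() == target.count()

def test_no_duplicate_business_keys(spark):
    # The warehouse table should have exactly one row per business key.
    target = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "val"])
    duplicates = target.groupBy("id").count().filter("count > 1")
    assert duplicates.count() == 0
```

In practice such checks are parameterized over table metadata and run in a CI pipeline against the Databricks workspace.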
Posted 1 month ago
6.0 - 11.0 years
4 - 7 Lacs
Ghaziabad
Remote
Dear Candidate,

We are looking for a Data Engineer Trainer with Databricks and Snowflake expertise on a part-time basis who can provide training to our US students. Please find the job description below; if it seems a good fit for you, please reply with your updated resume.

Job Summary
We are looking for a skilled and experienced Data Engineer Trainer to join our team! In this role, you will deliver training content to our US-based students in data engineering with Snowflake and Databricks. You will have an opportunity to combine a passion for teaching with enthusiasm for technology to drive learning and establish positive customer relationships. You should have excellent communication skills and proven technology training experience.

Key Job Responsibilities
In this role, you will be at the heart of the world-class programs delivered by Synergistic Compusoft Pvt Ltd. Your responsibilities will include:
1. Training working professionals on in-demand skills like Databricks, Snowflake, and Azure Data Lake.
2. Proficiency in basic and advanced SQL programming concepts (procedures, analytical functions, etc.).
3. Good knowledge and understanding of data warehouse concepts.
4. Experience designing and implementing modern data platforms (data fabrics, data mesh, data hubs, etc.).
5. Experience with the design of data catalogs/dictionaries driven by active metadata.
6. Delivering highly interactive online lectures in line with Synergistic Compusoft's teaching methodology.
7. Developing cutting-edge, innovative content that helps classes stay engaging.
8. Strong programming skills in languages such as SQL, Python, or Scala.
9. Knowledge of data integration patterns, data lakes, and data warehouses.
10. Experience with data quality, data governance, and data security best practices.

Note: Trainers must prepare our students to earn a global Azure certification and deliver content accordingly. Excellent communication and collaboration skills are expected.

Primary Skills: Databricks, Snowflake
Secondary Skills: ADF, Databricks, Python

Perks and Benefits
- Remuneration: best in the industry (55-60k per month)
- 5 days working (Mon-Fri)
- Part time: 2.5 to 3 hours, remote (night shift, 10:30 PM onwards)
- The curriculum and syllabus should be provided by the trainer and should align with the Azure certification requirements.
- The duration of a single batch depends on the trainer, but it cannot exceed 3 months.

Company website: www.synergisticit.com
Company's LinkedIn profile links to: https://synergisticit.com/
Posted 1 month ago
6.0 - 8.0 years
9 - 13 Lacs
Kolkata
Remote
Role Description: This is a remote contract role for a Data Engineer (Ontology, 5+ years) at Zorba AI. We are seeking a highly skilled Data Engineer with expertise in ontology development and knowledge graph implementation. This role will be pivotal in shaping our data infrastructure and ensuring the accurate representation and integration of complex data sets. You will leverage industry best practices, including the Basic Formal Ontology (BFO) and Common Core Ontologies (CCO), to design, develop, and maintain ontologies, semantic and syntactic data models, and knowledge graphs on the Databricks Data Intelligence Platform that drive data-driven decision-making and innovation within the company.

Responsibilities:

Ontology Development:
- Design and implement ontologies based on BFO and CCO principles, ensuring alignment with business requirements and industry standards.
- Collaborate with domain experts to capture and formalize domain knowledge into ontological structures.
- Develop and maintain comprehensive ontologies to model various business entities, relationships, and processes.

Data Modeling:
- Design and implement semantic and syntactic data models that adhere to ontological principles.
- Create data models that are scalable, flexible, and adaptable to changing business needs.
- Integrate data models with existing data infrastructure and applications.

Knowledge Graph Implementation:
- Design and build knowledge graphs based on ontologies and data models.
- Develop algorithms and tools for knowledge graph population, enrichment, and maintenance.
- Utilize knowledge graphs to enable advanced analytics, search, and recommendation systems.

Data Quality and Governance:
- Ensure the quality, accuracy, and consistency of ontologies, data models, and knowledge graphs.
- Define and implement data governance processes and standards for ontology development and maintenance.

Collaboration and Communication:
- Work closely with data scientists, software engineers, and business stakeholders to understand their data requirements and provide tailored solutions.
- Communicate complex technical concepts clearly and effectively to diverse audiences.

Qualifications:

Education: Bachelor's or master's degree in Computer Science, Data Science, or a related field.

Experience:
- 5+ years of experience in data engineering or a related role.
- Proven experience in ontology development using BFO and CCO or similar ontological frameworks.
- Strong knowledge of semantic web technologies, including RDF, OWL, SPARQL, and SHACL (a minimal sketch follows this posting).
- Proficiency in Python, SQL, and other programming languages used for data engineering.
- Experience with graph databases (e.g., TigerGraph, JanusGraph) and triple stores (e.g., GraphDB, Stardog) is a plus.

Desired Skills:
- Familiarity with machine learning and natural language processing techniques.
- Experience with cloud-based data platforms (e.g., AWS, Azure, GCP).
- Experience with Databricks technologies including Spark, Delta Lake, Iceberg, Unity Catalog, UniForm, and Photon.
- Strong problem-solving and analytical skills.
- Excellent communication and interpersonal skills.
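For flavour, here is a minimal sketch of the ontology-plus-SPARQL work described above, assuming the open-source rdflib library; the namespace, classes, and instance are invented for illustration and are not actual BFO/CCO terms:

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

# Hypothetical company namespace; a real ontology would align to BFO/CCO.
EX = Namespace("http://example.com/ontology/")
g = Graph()
g.bind("ex", EX)

# A tiny class hierarchy and one instance.
g.add((EX.Organization, RDF.type, RDFS.Class))
g.add((EX.Supplier, RDF.type, RDFS.Class))
g.add((EX.Supplier, RDFS.subClassOf, EX.Organization))
g.add((EX.acme, RDF.type, EX.Supplier))
g.add((EX.acme, RDFS.label, Literal("Acme Corp")))

# SPARQL with a property path: find members of Organization or any subclass.
query = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?org ?label WHERE {
    ?cls rdfs:subClassOf* <http://example.com/ontology/Organization> .
    ?org a ?cls ; rdfs:label ?label .
}
"""
for row in g.query(query):
    print(row.org, row.label)
```

A production setup would persist triples in a store such as GraphDB or Stardog and validate shapes with SHACL, rather than holding the graph in memory.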
Posted 1 month ago
6.0 - 9.0 years
9 - 13 Lacs
Mumbai
Work from Office
Experience: 6+ years as an Azure Data Engineer, including at least one end-to-end (E2E) implementation in Microsoft Fabric.

Responsibilities:
- Lead the design and implementation of Microsoft Fabric-centric data platforms and data warehouses.
- Develop and optimize ETL/ELT processes within the Microsoft Azure ecosystem, effectively utilizing relevant Fabric solutions.
- Ensure data integrity, quality, and governance throughout the Microsoft Fabric environment.
- Collaborate with stakeholders to translate business needs into actionable data solutions.
- Troubleshoot and optimize existing Fabric implementations for enhanced performance.

Skills:
- Solid foundational knowledge of data warehousing, ETL/ELT processes, and data modeling (dimensional, normalized).
- Design and implement scalable, efficient data pipelines using Data Factory in Fabric (Data Pipeline, Dataflow Gen2, etc.), PySpark notebooks, Spark SQL, and Python, covering data ingestion, transformation, and loading (a minimal pipeline sketch follows below).
- Experience ingesting data from SAP systems (SAP ECC, S/4HANA, SAP BW, etc.) is a plus.
- Nice to have: ability to develop dashboards or reports using tools like Power BI.

Coding Fluency:
- Proficiency in SQL, Python, or other languages for data scripting, transformation, and automation.
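As a rough sketch of the pipeline work named in the skills list, assuming a Fabric or Databricks notebook where a Spark session is available; the paths, table name, and columns are placeholders:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # pre-created in Fabric/Databricks notebooks

# Hypothetical lakehouse locations; real ones come from the workspace.
raw_path = "Files/raw/sales/*.csv"
silver_table = "silver_sales"

# Bronze -> Silver: ingest raw CSV, standardize types, drop bad rows.
raw = spark.read.option("header", True).csv(raw_path)
silver = (
    raw.withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
       .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
       .dropna(subset=["order_id", "order_date"])
       .dropDuplicates(["order_id"])
)
silver.write.mode("overwrite").format("delta").saveAsTable(silver_table)
```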
Posted 1 month ago
4.0 - 9.0 years
15 - 30 Lacs
Kochi
Work from Office
About Neudesic
Passion for technology drives us, but it's innovation that defines us. From design to development and support to management, Neudesic offers decades of experience, proven frameworks, and a disciplined approach to quickly deliver reliable, quality solutions that help our customers go to market faster. What sets us apart from the rest is an amazing collection of people who live and lead with our core values. We believe that everyone should be Passionate about what they do, Disciplined to the core, Innovative by nature, committed to a Team, and conduct themselves with Integrity. If these attributes mean something to you, we'd like to hear from you.

We are currently looking for Azure Data Engineers to become members of Neudesic's Data & AI team.

Must-Have Skills:
- Prior experience in ETL, data pipelines, and data flow techniques using Azure Data Services
- Working experience in Python, Scala, PySpark, Azure Data Factory, Azure Data Lake Gen2, Databricks, Azure Synapse, and file formats like JSON and Parquet
- Experience creating ADF pipelines to source and process data sets
- Experience creating Databricks notebooks to cleanse, transform, and enrich data sets (see the sketch after this posting)
- Good understanding of SQL, databases, NoSQL databases, data warehouses, Hadoop, and various data storage options on the cloud
- Development experience in orchestration of pipelines
- Experience in deployment and monitoring techniques
- Working experience with Azure DevOps CI/CD pipelines to deploy Azure resources
- Experience in handling operations/integration with a source repository
- Good knowledge of data warehouse concepts and data warehouse modelling

Good-to-Have Skills:
- Familiarity with DevOps, Agile Scrum methodologies, and CI/CD
- Domain-driven development exposure
- Analytical/problem-solving skills
- Strong communication skills
- Good experience with unit, integration, and UAT support
- Able to design and code reusable components and functions
- Able to review design and code, and provide review comments with justification
- Zeal to learn and adopt new tools/technologies
- Power BI and Data Catalog experience

* Be aware of phishing scams involving fraudulent career recruiting and fictitious job postings; visit our Phishing Scams page to learn more.

Neudesic is an Equal Opportunity Employer. Neudesic provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by local laws.

Neudesic is an IBM subsidiary which has been acquired by IBM and will be integrated into the IBM organization. Neudesic will be the hiring entity. By proceeding with this application, you understand that Neudesic will share your personal information with other IBM companies involved in your recruitment process, wherever these are located. More information on how IBM protects your personal information, including the safeguards in case of cross-border data transfer, is available here: https://www.ibm.com/us-en/privacy?lnk=flg-priv-usen
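For the cleanse/transform/enrich requirement, a minimal Databricks notebook sketch follows; the ADLS Gen2 paths and column names are assumptions, and in practice ADF would pass them in as pipeline parameters:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided by the Databricks runtime

# Hypothetical ADLS Gen2 locations, typically injected via ADF pipeline parameters.
events = spark.read.json("abfss://raw@mylake.dfs.core.windows.net/events/")
customers = spark.read.parquet("abfss://curated@mylake.dfs.core.windows.net/customers/")

# Cleanse, enrich with a reference join, and stamp ingestion time.
enriched = (
    events.filter(F.col("event_type").isNotNull())
          .join(customers, "customer_id", "left")
          .withColumn("ingested_at", F.current_timestamp())
)
enriched.write.mode("append").partitionBy("event_type").parquet(
    "abfss://curated@mylake.dfs.core.windows.net/events_enriched/"
)
```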
Posted 1 month ago
14.0 - 23.0 years
18 - 33 Lacs
Navi Mumbai
Work from Office
Job Purpose:
To work as Data Engineering / Data Platform Support for a data lakehouse project. Establish an understanding of the business, develop a deep understanding of business needs, and build and shape delivery plans. Represent the role in line with the Bank's culture, ensuring full transparency, auditability, and value-for-money objectives are met and upheld.

- Understand the data requirements of projects from various internal departments and regulators, i.e. RBI, compliance officers, and operations users.
- Review and improve solution designs; write and review test scenarios and test plans.
- Prepare the test bed for release deployment and perform testing as per the test plan and strategy.
- Maintain and track the defect tracker sheet.
- Understand the source-system field mapping for CDC ingestion and downstream pipelines.
- Work with various teams across the bank to define, design, and deliver the bank's regulatory and internal data needs.
- Prepare and maintain a data dictionary that can serve as a ready reckoner for future data servicing, using recommended tools.
- Identify, analyze, and automate repetitive data needs.
- Responsible for tracking/managing technology obsolescence and OS/DB/middleware baseline management.
- Responsible for tracking/managing VAPT/IA observations to closure, and for infrastructure capacity management and projection.
- Responsible for deployment management, DevOps CI/CD processes, purging/backup policy automation, and managing cloud infrastructure scalability, system stability, and optimization.
- Responsible for budget, FPN, and PO release activities, and vendor management.
- Highly organized, self-motivated, proactive, and able to plan and execute plans.

Job Responsibilities:
1. Project management - Review volume estimations for new solutions, on the basis of which infrastructure requirements and performance tests are planned and executed. Conduct periodic application project reviews. Review project plans to ensure that an adequate timeframe is committed for IT activities and that all aspects of change management are covered.
2. Performance and support management - Identify potential system performance bottlenecks, have the system architecture reviewed, and change it wherever required to resolve the bottlenecks.
3. Change release management and audit compliance - Follow the change and release management processes defined for the projects and tasks initiated as part of project management, performance management, and application obsolescence management.
Posted 1 month ago
4.0 - 9.0 years
0 - 3 Lacs
Pune, Chennai, Bengaluru
Work from Office
Experience: 4 to 12 years
Location: Pune / Mumbai / Chennai / Bangalore
Notice Period: Immediate to 45 days

- Proven experience as a Databricks Developer or in a similar role, with a strong focus on PySpark programming.
- Expertise in the Databricks platform, including Databricks Delta, Spark, and Unity Catalog (a minimal upsert sketch follows below).
- Proficiency in Python for data manipulation, analysis, and automation.
- Strong understanding of data engineering concepts, data modeling, ETL processes, and data integration techniques.
- Excellent communication skills and ability to collaborate effectively with cross-functional teams.
- Stay updated with the latest advancements in Databricks, Python, and SQL technologies to continuously improve data solutions.
- Familiarity with Azure cloud platforms and their data services.
- Ability to work in an agile development environment and adapt to changing requirements.
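A minimal sketch of the Databricks Delta expertise mentioned above, assuming a cluster where Delta Lake is available (as on Databricks); the Unity Catalog table name and columns are hypothetical:

```python
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()  # provided by the Databricks runtime

# Incoming changes; a real job would read these from a staging location.
updates = spark.createDataFrame([(1, "active"), (3, "new")], ["id", "status"])

# Idempotent upsert into a hypothetical Unity Catalog table via Delta MERGE.
target = DeltaTable.forName(spark, "main.sales.customers")
(target.alias("t")
       .merge(updates.alias("u"), "t.id = u.id")
       .whenMatchedUpdateAll()
       .whenNotMatchedInsertAll()
       .execute())
```

MERGE keeps reruns safe: matched keys are updated in place and new keys are inserted, which is why it is the usual pattern for CDC-style loads on Delta.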
Posted 1 month ago
11.0 - 18.0 years
10 - 20 Lacs
Kochi
Remote
Test automation frameworks for data, built with Pytest, SQL, PySpark, Azure Data Lake, and Databricks; metadata-driven testing, data profiling tools, and reconciliation frameworks; CI/CD pipelines (e.g., Azure DevOps, GitHub Actions, Jenkins). A minimal metadata-driven reconciliation sketch follows below.
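A minimal sketch of what metadata-driven reconciliation with Pytest and PySpark can look like; the table names and keys below are placeholders for the metadata a real framework would load from configuration:

```python
import pytest
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[2]").getOrCreate()

# Metadata: (source, target, business key). In a real framework this would
# be read from a config store rather than hard-coded.
RECON_CASES = [
    ("stg.orders", "dw.fact_orders", "order_id"),
    ("stg.customers", "dw.dim_customer", "customer_id"),
]

@pytest.mark.parametrize("source,target,key", RECON_CASES)
def test_source_target_reconcile(source, target, key):
    src = spark.table(source)
    tgt = spark.table(target)
    assert src.count() == tgt.count(), f"row count mismatch {source} -> {target}"
    missing = src.select(key).subtract(tgt.select(key))
    assert missing.count() == 0, f"keys missing in {target}"
```

Adding a new table pair then means adding one metadata row rather than writing a new test, and the suite slots directly into an Azure DevOps, GitHub Actions, or Jenkins stage.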
Posted 1 month ago
5.0 - 10.0 years
16 - 30 Lacs
Gurugram
Remote
Dear Candidate,

Experience: 6+ years
Location: Remote, US shift

Preferred Qualifications:
- Experience developing solutions on a big data platform utilizing tools such as Impala and Spark
- Advanced knowledge of and experience with Azure Databricks, PySpark, and Teradata/Databricks SQL
- Advanced knowledge of and experience in Python, along with associated development environments (e.g. JupyterHub, PyCharm, etc.)
- Advanced knowledge of and experience in building Tableau dashboards
- Experience with contact center data: customer complaints, call data, NPS, etc.

Skills, Licenses & Certifications:
- Strong project management skills
- Proficient with Microsoft Office applications (MS Excel, Access, and PowerPoint); advanced knowledge of Microsoft Excel
- Advanced aptitude in problem-solving, including the ability to logically structure an appropriate analytical framework

PLEASE NOTE: Confirmed interviews are available on weekdays and weekends.

Thanks & Regards,
Evangelin S.
IT Recruiter
7825822856
evangelin@sightspectrum.in
Posted 1 month ago
9.0 - 14.0 years
27 - 40 Lacs
Hyderabad
Remote
Experience Required: 8+ years
Mode of Work: Remote
Skills Required: Azure Databricks, Event Hubs, Kafka, architecture, Azure Data Factory, PySpark, Python, SQL, Spark
Notice Period: Immediate joiners; permanent or contract role (must be able to join by July 4th, 2025)

Responsibilities:
- Design, develop, and maintain scalable and robust data solutions in the cloud using Apache Spark and Databricks.
- Gather and analyse data requirements from business stakeholders and identify opportunities for data-driven insights.
- Build and optimize data pipelines for data ingestion, processing, and integration using Spark and Databricks (a minimal streaming-ingestion sketch follows below).
- Ensure data quality, integrity, and security throughout all stages of the data lifecycle.
- Collaborate with cross-functional teams to design and implement data models, schemas, and storage solutions.
- Optimize data processing and analytics performance by tuning Spark jobs and leveraging Databricks features.
- Provide technical guidance and expertise to junior data engineers and developers.
- Stay up to date with emerging trends and technologies in cloud computing, big data, and data engineering.
- Contribute to the continuous improvement of data engineering processes, tools, and best practices.

Requirements:
- Bachelor's or master's degree in computer science, engineering, or a related field.
- 10+ years of experience as a Data Engineer with a focus on building cloud-based data solutions.
- Mandatory skills: Azure Databricks, Event Hubs, Kafka, architecture, Azure Data Factory, PySpark, Python, SQL, Spark.
- Strong experience with cloud platforms such as Azure or AWS.
- Proficiency in Apache Spark and Databricks for large-scale data processing and analytics.
- Experience in designing and implementing data processing pipelines using Spark and Databricks.
- Strong knowledge of SQL and experience with relational and NoSQL databases.
- Experience with data integration and ETL processes using tools like Apache Airflow or cloud-native orchestration services.
- Good understanding of data modelling and schema design principles.
- Experience with data governance and compliance frameworks.
- Excellent problem-solving and troubleshooting skills.
- Strong communication and collaboration skills to work effectively in a cross-functional team.

Interested candidates can share their resume with Pavithra.tr@enabledata.com, or refer a friend, for a quick response.
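To illustrate the Event Hubs/Kafka ingestion skills above, here is a minimal Structured Streaming sketch, assuming Event Hubs' Kafka-compatible endpoint; the namespace, topic, schema, and paths are invented, and the SASL authentication options are omitted for brevity:

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import DoubleType, StringType, StructField, StructType

spark = SparkSession.builder.getOrCreate()  # provided by the Databricks runtime

schema = StructType([
    StructField("device_id", StringType()),
    StructField("reading", DoubleType()),
])

# Event Hubs exposes a Kafka-compatible endpoint on port 9093; the namespace
# and topic are placeholders, and SASL_SSL auth options are omitted here.
raw = (spark.readStream.format("kafka")
       .option("kafka.bootstrap.servers", "mynamespace.servicebus.windows.net:9093")
       .option("subscribe", "telemetry")
       .load())

# Kafka delivers bytes; parse the JSON payload into typed columns.
parsed = (raw.selectExpr("CAST(value AS STRING) AS json")
             .select(F.from_json("json", schema).alias("v"))
             .select("v.*"))

query = (parsed.writeStream.format("delta")
         .option("checkpointLocation", "/mnt/checkpoints/telemetry")
         .outputMode("append")
         .toTable("bronze_telemetry"))
```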
Posted 1 month ago
6.0 - 8.0 years
9 - 13 Lacs
Chennai
Work from Office
This is a remote contract role for a Data Engineer (Ontology, 5+ years) at Zorba AI. We are seeking a highly skilled Data Engineer with expertise in ontology development and knowledge graph implementation. This role will be pivotal in shaping our data infrastructure and ensuring the accurate representation and integration of complex data sets. You will leverage industry best practices, including the Basic Formal Ontology (BFO) and Common Core Ontologies (CCO), to design, develop, and maintain ontologies, semantic and syntactic data models, and knowledge graphs on the Databricks Data Intelligence Platform that drive data-driven decision-making and innovation within the company.

Responsibilities:

Ontology Development:
- Design and implement ontologies based on BFO and CCO principles, ensuring alignment with business requirements and industry standards.
- Collaborate with domain experts to capture and formalize domain knowledge into ontological structures.
- Develop and maintain comprehensive ontologies to model various business entities, relationships, and processes.

Data Modeling:
- Design and implement semantic and syntactic data models that adhere to ontological principles.
- Create data models that are scalable, flexible, and adaptable to changing business needs.
- Integrate data models with existing data infrastructure and applications.

Knowledge Graph Implementation:
- Design and build knowledge graphs based on ontologies and data models.
- Develop algorithms and tools for knowledge graph population, enrichment, and maintenance.
- Utilize knowledge graphs to enable advanced analytics, search, and recommendation systems.

Data Quality and Governance:
- Ensure the quality, accuracy, and consistency of ontologies, data models, and knowledge graphs.
- Define and implement data governance processes and standards for ontology development and maintenance.

Collaboration and Communication:
- Work closely with data scientists, software engineers, and business stakeholders to understand their data requirements and provide tailored solutions.
- Communicate complex technical concepts clearly and effectively to diverse audiences.

Qualifications:

Education: Bachelor's or master's degree in Computer Science, Data Science, or a related field.

Experience:
- 5+ years of experience in data engineering or a related role.
- Proven experience in ontology development using BFO and CCO or similar ontological frameworks.
- Strong knowledge of semantic web technologies, including RDF, OWL, SPARQL, and SHACL.
- Proficiency in Python, SQL, and other programming languages used for data engineering.
- Experience with graph databases (e.g., TigerGraph, JanusGraph) and triple stores (e.g., GraphDB, Stardog) is a plus.

Desired Skills:
- Familiarity with machine learning and natural language processing techniques.
- Experience with cloud-based data platforms (e.g., AWS, Azure, GCP).
- Experience with Databricks technologies including Spark, Delta Lake, Iceberg, Unity Catalog, UniForm, and Photon.
- Strong problem-solving and analytical skills.
- Excellent communication and interpersonal skills.
Posted 1 month ago
15.0 - 20.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: Microsoft Azure Databricks
Good to have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with various teams to understand their needs, developing innovative solutions, and ensuring that applications are aligned with business objectives. You will engage in problem-solving activities, participate in team meetings, and contribute to the overall success of projects by leveraging your expertise in application development.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute on key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Facilitate knowledge sharing sessions to enhance team capabilities.
- Monitor project progress and ensure timely delivery of application features.

Professional & Technical Skills:
- Must To Have Skills: Proficiency in Microsoft Azure Databricks.
- Strong understanding of cloud computing concepts and services.
- Experience with application development frameworks and methodologies.
- Familiarity with data integration and ETL processes.
- Ability to troubleshoot and optimize application performance.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Microsoft Azure Databricks.
- This position is based at our Bengaluru office.
- A 15 years full time education is required.
Posted 1 month ago
5.0 - 10.0 years
11 - 15 Lacs
Ahmedabad
Work from Office
Project Role: Business Process Architect
Project Role Description: Design business processes, including characteristics and key performance indicators (KPIs), to meet process and functional requirements. Work closely with the Application Architect to create the process blueprint and establish business process requirements to drive out application requirements and metrics. Assist in quality management reviews, ensuring all business and design requirements are met. Educate stakeholders to ensure a complete understanding of the designs.
Must have skills: Data Analytics, Data Warehouse ETL Testing, Big Data Analysis Tools and Techniques, Hadoop Administration
Good to have skills: NA
Minimum 7.5 year(s) of experience is required
Educational Qualification: Specific undergraduate qualifications, i.e. engineering or computer science

Summary: Experienced Data Engineer with a strong background in Azure data services and broadcast supply chain ecosystems. Skilled in OTT streaming protocols, cloud technologies, and project management.

Roles & Responsibilities:
- Proven experience as a Data Engineer or in a similar role.
- Lead and provide expert guidance to the Principal - Solutions & Integration.
- Track and report on project progress using internal applications.
- Transition customer requirements to on-air operations with proper documentation.
- Scope projects and ensure adherence to budgets and timelines.
- Generate design and integration documentation.

Professional & Technical Skills:
- Strong proficiency in Azure data services (Azure Data Factory, Azure Databricks, Azure SQL Database).
- Experience with SQL, Python, and big data tools (Hadoop, Spark, Kafka).
- Familiarity with data warehousing, ETL techniques, and microservices in a cloud environment.
- Knowledge of broadcast supply chain ecosystems (BMS, RMS, MAM, Playout, MCR/PCR, NLE, Traffic).
- Experience with OTT streaming protocols, DRM, and content delivery networks.
- Working knowledge of cloud technologies (Azure, Docker, Kubernetes, AWS basics, GCP basics).
- Basic understanding of AWS Media Services (MediaConnect, Elemental, MediaLive, MediaStore, Media2Cloud, S3, Glacier).

Additional Information:
- Minimum of 5 years' experience in Data Analytics disciplines.
- Good presentation and documentation skills.
- Excellent interpersonal skills.
- Undergraduate qualifications in engineering or computer science.

Networking: Apply basic networking knowledge including TCP/IP, UDP/IP, IGMP, DHCP, DNS, and LAN/WAN technologies to support video delivery systems.

Highly Desirable:
- Experience in defining technical solutions with over 99.999% reliability.
Posted 1 month ago
4.0 - 5.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: Microsoft Azure DevOps
Good to have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Architecture and Design:
- Propose solutions that may shape the strategic technology direction of our data platform.
- Architect scalable and efficient Databricks and OpenAI platforms for an increasing number of users.

Infrastructure Automation:
- Develop and maintain Terraform Infrastructure as Code (IaC) scripts to automate Databricks and OpenAI environment deployments.

Required Qualifications:
- 3-5 years of experience as a Platform Engineer or in a similar role, focusing on deploying, optimizing, and maintaining large-scale analytics platforms.
- Hands-on expertise in Azure Databricks and OpenAI.
- Experience with Infrastructure as Code (IaC), preferably Terraform.
- Experience with CI/CD, preferably in Azure DevOps.
- Proficiency in writing clear and maintainable code, with Python experience being a significant plus.
- Proactive attitude towards picking up problems that challenge your knowledge.
- Team player with excellent problem-solving skills.
- Experience with Kusto queries (the Azure Monitor query language) is a plus.

Preferred Qualifications:
- Certification in Microsoft Azure or related technologies.
Posted 1 month ago
15.0 - 20.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: Microsoft Azure Databricks
Good to have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with various teams to understand their needs, developing innovative solutions, and ensuring that applications are aligned with business objectives. You will engage in problem-solving activities, participate in team meetings, and contribute to the overall success of projects by leveraging your expertise in application development.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute on key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Facilitate knowledge sharing sessions to enhance team capabilities.
- Monitor project progress and ensure timely delivery of application features.

Professional & Technical Skills:
- Must To Have Skills: Proficiency in Microsoft Azure Databricks.
- Strong understanding of cloud computing concepts and services.
- Experience with data integration and ETL processes.
- Familiarity with application development frameworks and methodologies.
- Ability to troubleshoot and optimize application performance.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Microsoft Azure Databricks.
- This position is based at our Bengaluru office.
- A 15 years full time education is required.
Posted 1 month ago
5.0 - 10.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: Databricks Unified Data Analytics Platform
Good to have skills: Microsoft Azure Databricks, PySpark, Core Java
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Developer, you will be responsible for designing, building, and configuring applications to meet business process and application requirements. Your typical day will involve collaborating with teams to develop innovative solutions and streamline processes.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute on key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Lead the development and implementation of new applications.
- Conduct code reviews and ensure coding standards are met.
- Stay updated on industry trends and best practices.

Professional & Technical Skills:
- Must To Have Skills: Proficiency in Databricks Unified Data Analytics Platform.
- Good To Have Skills: Experience with PySpark.
- Strong understanding of data engineering concepts.
- Experience in building and optimizing data pipelines.
- Knowledge of cloud platforms like Microsoft Azure.
- Familiarity with data governance and security practices.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Bengaluru office.
- A 15 years full-time education is required.
Posted 1 month ago
15.0 - 20.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: Databricks Unified Data Analytics Platform
Good to have skills: Python (Programming Language), Microsoft Azure Databricks, Microsoft Azure Data Services
Minimum 12 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. You will be responsible for creating efficient and scalable applications that align with the organization's goals and objectives. Your typical day will involve collaborating with cross-functional teams, analyzing business requirements, and developing innovative solutions to meet customer needs. You will also be involved in testing, debugging, and troubleshooting applications to ensure their smooth functioning and optimal performance.

Roles & Responsibilities:
- Expected to be an SME; collaborate and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute on key decisions.
- Expected to provide solutions to problems that apply across multiple teams.
- Develop and maintain high-quality software applications.
- Collaborate with business analysts and stakeholders to gather and analyze requirements.
- Design and implement application features and enhancements.
- Perform code reviews and ensure adherence to coding standards.
- Troubleshoot and debug application issues.
- Optimize application performance and scalability.
- Conduct unit testing and integration testing.
- Document application design, functionality, and processes.
- Stay updated with emerging technologies and industry trends.
- Provide technical guidance and mentorship to junior team members.

Professional & Technical Skills:
- Must To Have Skills: Proficiency in Databricks Unified Data Analytics Platform, Python (Programming Language), Microsoft Azure Databricks, Microsoft Azure Data Services.
- Strong understanding of statistical analysis and machine learning algorithms.
- Experience with data visualization tools such as Tableau or Power BI.
- Hands-on experience implementing various machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering algorithms (a minimal tracked-training sketch follows this posting).
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization to ensure data quality and integrity.

Additional Information:
- The candidate should have a minimum of 12 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Bengaluru office.
- A 15 years full time education is required.
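For the machine-learning bullet above, here is a small tracked-training sketch on synthetic data, assuming scikit-learn and MLflow (the experiment tracker bundled with Databricks); nothing here reflects an actual project model:

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Synthetic regression data standing in for a real feature table.
X, y = make_regression(n_samples=500, n_features=5, noise=0.3, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run(run_name="lr-baseline"):
    model = LinearRegression().fit(X_train, y_train)
    score = r2_score(y_test, model.predict(X_test))
    mlflow.log_metric("r2", score)             # metric lands in the tracking UI
    mlflow.sklearn.log_model(model, "model")   # model artifact for later serving
```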
Posted 1 month ago
3.0 - 8.0 years
12 - 19 Lacs
Pune
Hybrid
This role is only for Pune local candidates (not for relocation candidates).

Role: Data Engineer (C2H)
Experience: 3-8 years
Location: Kharadi, Pune
Excellent communication skills required
Notice Period: Immediate joiner to 1 month

Primary Skills: Python, document intelligence, NLP, unstructured data extraction (OpenAI and prompt engineering desirable)
Secondary Skills: Azure infrastructure experience and Databricks

Mandatory Skills:
1. Data Infrastructure & Engineering: Designing, building, productionizing, and maintaining scalable and reliable data infrastructure and data products. Experience with data modeling, pipeline idempotency, and operational observability.
2. Programming Languages: Proficiency in one or more object-oriented programming languages, such as Python, Scala, Java, or C#.
3. Database Technology: Strong experience with SQL and NoSQL databases; query structures and design best practices; scalability, readability, and reliability in database design.
4. Distributed Systems: Experience implementing large-scale distributed systems in collaboration with senior team members.
5. Software Engineering Best Practices: Technical design and reviews; unit testing, monitoring, and alerting; code versioning, code reviews, and documentation; CI/CD pipeline development and maintenance.
6. Security & Compliance: Deploying secure and well-tested software and data assets; meeting privacy and compliance requirements.
7. Site Reliability Engineering: Service reliability, on-call rotations, defining and maintaining SLAs; infrastructure as code and containerized deployments.

Job Description:
- Able to enrich data through data transformation and joining with other datasets.
- Able to analyze data and derive statistical insights.
- Able to convey a story through data visualization.
- Ability to build data pipelines for diverse interfaces.
- Good understanding of API workflows.

Technical Skills: AWS Data Lake, AWS Data Hub, and the AWS cloud platform.

Interested candidates, share your resume at dipti.bhaisare@in.experis.com
Posted 1 month ago
8.0 - 13.0 years
20 - 30 Lacs
Chennai
Remote
Job Summary: We are seeking a highly skilled Azure Solution Architect to design, implement, and oversee cloud-based solutions on Microsoft Azure. The ideal candidate will have a deep understanding of cloud architecture, a strong technical background, and the ability to align Azure capabilities with business needs. You will lead the architecture and design of scalable, secure, and resilient Azure solutions across multiple projects.

Role & Responsibilities:
- Design end-to-end data architectures on Azure using Microsoft Fabric, Data Lake (ADLS Gen2), Azure SQL/Synapse, and Power BI.
- Lead the implementation of data integration and orchestration pipelines using Azure Data Factory and Fabric data pipelines.
- Architect lakehouse/data warehouse solutions for both batch and real-time processing, ensuring performance, scalability, and cost optimization.
- Establish data governance, lineage, and cataloging frameworks using Microsoft Purview and other observability tools.
- Enable data quality, classification, and privacy controls aligned with compliance and regulatory standards.
- Drive adoption of event-driven data ingestion patterns using Event Hubs, Event Grid, or Stream Analytics.
- Provide architectural oversight on reporting and visualization solutions using Power BI integrated with Fabric datasets and models.
- Define architecture standards, data models, and reusable components to accelerate project delivery.
- Collaborate with data stewards, business stakeholders, and engineering teams to define functional and non-functional requirements.
- Support CI/CD, infrastructure as code, and DevOps for data pipelines using Azure DevOps or GitHub Actions.
- Lead proofs of concept (PoCs) and performance evaluations for emerging Azure data services and tools.
- Monitor system performance, data flow, and health using Azure Monitor and Fabric observability capabilities.

Required Qualifications:
- Bachelor's degree in Computer Science, Data Engineering, or a related field.
- 5+ years of experience as a data architect or solution architect in cloud data environments.
- 3+ years of hands-on experience designing and implementing data solutions on Microsoft Azure.
- Strong hands-on expertise with: Azure Data Factory; Microsoft Fabric (Data Engineering, Data Warehouse, Real-Time Analytics, Power BI); Azure Data Lake (ADLS Gen2), Azure SQL, and Synapse Analytics; Power BI for enterprise reporting and data modeling.
- Experience with data governance and cataloging tools, ideally Microsoft Purview.
- Proficiency in data modeling techniques (dimensional, normalized, or data vault).
- Strong understanding of security, RBAC, data encryption, Key Vault, and privacy requirements in Azure.

Preferred Qualifications:
- Microsoft Certified: Azure Solutions Architect Expert (AZ-305) or Azure Enterprise Data Analyst Associate (DP-500).
- Hands-on experience with end-to-end Microsoft Fabric implementation.
- Familiarity with medallion architecture, Delta Lake, and modern lakehouse principles.
- Experience in Agile/Scrum environments and stakeholder engagement across business and IT.
- Strong communication skills, with the ability to explain complex concepts to both technical and non-technical audiences.
Posted 1 month ago
7.0 - 10.0 years
22 - 30 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
Experience: 6 to 9 years
Role: Azure Data Engineer
Position: Permanent FTE
Company: Data Analytics MNC
Locations: Pune, Hyderabad, Bengaluru
Mode: Hybrid (2-3 days from office)

MUST HAVE:
- Very strong Python coding skills, with at least 3 years hands-on
- Excellent SQL skills, with at least 3 years hands-on
- Strong PySpark skills
- In-depth hands-on experience in Azure Databricks and Data Factory
- Strong knowledge of data warehousing

Important Note: Candidates must have PF in all companies worked for throughout their career, under one UAN.
Posted 1 month ago
8.0 - 12.0 years
5 - 10 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Teradata to Snowflake and Databricks migration on Azure Cloud; data migration projects, including complex migrations to Databricks; strong expertise in ETL pipeline design and optimization, particularly for cloud environments and large-scale data migration. A minimal extraction sketch follows below.
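A minimal sketch of the extraction step such a migration typically starts with: a parallel JDBC read from Teradata landed into a Delta table. The host, credentials, table, and partition bounds are placeholders, and the Teradata JDBC driver is assumed to be installed on the cluster:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # provided by the Databricks runtime

# Partitioned JDBC read: Spark issues 8 parallel range queries over order_id.
src = (spark.read.format("jdbc")
       .option("url", "jdbc:teradata://td-host/DATABASE=sales")
       .option("driver", "com.teradata.jdbc.TeraDriver")
       .option("dbtable", "sales.orders")
       .option("user", "etl_user")
       .option("password", "***")
       .option("partitionColumn", "order_id")
       .option("lowerBound", 1)
       .option("upperBound", 10000000)
       .option("numPartitions", 8)
       .load())

# Land raw data as a bronze Delta table for downstream transformation.
src.write.format("delta").mode("overwrite").saveAsTable("bronze.orders_td")
```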
Posted 1 month ago
4.0 - 6.0 years
6 - 8 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
We are seeking a skilled Business Analyst with 4-6 years of experience, including at least 2 years on Azure Data Engineering projects, for a 6-month remote full-time role. The ideal candidate will work closely with stakeholders to gather and analyze business and technical requirements, collaborate with Azure Data Engineers, and support design decisions across data integration, transformation, and storage layers. Strong SQL skills, an understanding of data governance, and experience with data platforms are essential. Excellent communication and stakeholder management skills are required. Locations: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote.
Posted 1 month ago
5.0 - 10.0 years
9 - 19 Lacs
Noida, Pune, Bengaluru
Work from Office
Hi All,

Happiest Minds Technologies is looking for Azure Databricks Developers and Architects. The mode of interview will be face to face on 28th June. Interview locations: Pune, Bangalore, Noida.

1. Azure Databricks (5 to 10 years)
As a Senior Azure Data Engineer, you will leverage Azure technologies to drive data transformation, analytics, and machine learning. You will design scalable Databricks data pipelines using PySpark, transforming raw data into actionable insights. Your role includes building, deploying, and maintaining machine learning models using MLlib or TensorFlow (a minimal MLlib sketch follows below) while optimizing cloud data integration from Azure Blob Storage, Data Lake, and SQL/NoSQL sources. You will execute large-scale data processing using Spark pools, fine-tuning configurations for efficiency. The ideal candidate holds a Bachelor's or Master's in Computer Science, Data Science, or a related field, with 7+ years in data engineering and 3+ years specializing in Azure Databricks, PySpark, and Spark pools. Proficiency in Python (PySpark, Pandas, NumPy, SciPy), Spark SQL, DataFrames, RDDs, Delta Lake, Databricks notebooks, and MLflow is required, along with hands-on experience in Azure Data Lake, Blob Storage, and Synapse Analytics.

2. Azure Databricks Architect (10 to 15 years)
Key Responsibilities:
- Architect and design end-to-end data solutions on Azure, with a focus on Databricks.
- Lead data architecture initiatives, ensuring alignment with best practices and business objectives.
- Collaborate with stakeholders to define data strategies, architectures, and roadmaps.
- Migrate and transform data from Oracle to Azure Data Lake.
- Ensure data solutions are secure, reliable, and scalable.
- Provide technical leadership and mentorship to junior team members.

Required Skills:
- Extensive experience with Azure data services, including Azure Data Factory and Azure SQL Data Warehouse.
- Deep expertise in Databricks, including Spark and Delta Lake.
- Strong understanding of data architecture principles and best practices.
- Proven track record of leading large-scale data projects and initiatives.
- Design data integration strategies, ensuring seamless integration between Azure services and on-premises/cloud applications.
- Optimize performance and cost efficiency for Databricks clusters, data pipelines, and storage systems.
- Monitor and manage cloud resources to ensure high availability, performance, and scalability.
- Experience setting up and configuring Azure DevOps.
- Excellent communication and collaboration skills.

Interested candidates can send their resume to sreemoy.p.das@happiestminds.com
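A minimal sketch of the MLlib model-building mentioned in the first role, assuming a Databricks or local Spark session; the toy rows stand in for a curated feature table:

```python
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Toy training data standing in for a curated feature table.
df = spark.createDataFrame(
    [(1.0, 2.0, 10.0), (2.0, 1.0, 12.0), (3.0, 4.0, 20.0)],
    ["f1", "f2", "label"],
)

# Assemble feature columns into a vector, then fit a linear model.
pipeline = Pipeline(stages=[
    VectorAssembler(inputCols=["f1", "f2"], outputCol="features"),
    LinearRegression(featuresCol="features", labelCol="label"),
])
model = pipeline.fit(df)
model.transform(df).select("label", "prediction").show()
```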
Posted 1 month ago
7.0 - 12.0 years
20 - 27 Lacs
Bengaluru
Work from Office
TECHNICAL SKILLS AND EXPERIENCE

Most important:
- 7+ years professional experience as a data engineer, with at least 4 utilizing cloud technologies.
- Proven experience building ETL or ELT data pipelines with Databricks, either in Azure or AWS, using the PySpark language.
- Strong experience with the Microsoft Azure data stack (Databricks, Data Lake Gen2, ADF, etc.).
- Strong SQL skills and proficiency in Python, adhering to standards such as PEP 8.
- Proven experience with unit testing and applying appropriate testing methodologies using libraries such as Pytest, Great Expectations, or similar (a minimal unit-test sketch follows below).
- Demonstrable experience with CI/CD, including release and test automation tools and processes such as Azure DevOps, Terraform, PowerShell, and Bash scripting, or similar.
- Strong understanding of data modeling, data warehousing, and OLAP concepts.
- Excellent technical documentation skills.
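As an example of the unit-testing practice named above, here is a minimal Pytest sketch that tests a small PySpark transformation, assuming PySpark 3.5+ for the built-in DataFrame assertion; the function and rows are illustrative:

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window
from pyspark.testing import assertDataFrameEqual  # PySpark 3.5+

spark = SparkSession.builder.master("local[2]").getOrCreate()

def dedupe_latest(df, key, ts):
    """Keep only the most recent row per key: the unit under test."""
    w = Window.partitionBy(key).orderBy(F.col(ts).desc())
    return (df.withColumn("_rn", F.row_number().over(w))
              .filter("_rn = 1")
              .drop("_rn"))

def test_dedupe_latest():
    data = spark.createDataFrame(
        [("a", 1, "old"), ("a", 2, "new"), ("b", 1, "only")],
        ["id", "ts", "val"],
    )
    expected = spark.createDataFrame(
        [("a", 2, "new"), ("b", 1, "only")],
        ["id", "ts", "val"],
    )
    assertDataFrameEqual(dedupe_latest(data, "id", "ts"), expected)
```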
Posted 1 month ago
5.0 - 10.0 years
18 - 24 Lacs
Bengaluru
Work from Office
Hiring a Senior Data Engineer (5+ years) with expertise in Azure Data Factory, Databricks, PySpark, and AWS to build scalable ETL pipelines. Location: Bangalore/Hyderabad/Chennai. Immediate to 30-day joiners. Share your resume to vadiraj@vtrickstech.com. Benefits: health insurance, provident fund.
Posted 1 month ago
7672 Jobs | Paris,France