4.0 - 9.0 years
6 - 11 Lacs
Mumbai
Work from Office
Role: Senior Databricks Engineer

As a Senior Databricks Engineer, you will play a pivotal role in designing, implementing, and optimizing data processing pipelines and analytics solutions on the Databricks platform. You will collaborate closely with cross-functional teams to understand business requirements, architect scalable solutions, and ensure the reliability and performance of our data infrastructure. This role requires deep expertise in Databricks, strong programming skills, and a passion for solving complex engineering challenges.

What you'll do:
- Design and develop data processing pipelines and analytics solutions using Databricks (a sketch follows this posting).
- Architect scalable and efficient data models and storage solutions on the Databricks platform.
- Collaborate with architects and other teams to migrate current solutions to Databricks.
- Optimize the performance and reliability of Databricks clusters and jobs to meet SLAs and business requirements.
- Apply best practices for data governance, security, and compliance on the Databricks platform.
- Mentor junior engineers and provide technical guidance.
- Stay current with emerging technologies and trends in data engineering and analytics to drive continuous improvement.

You'll be expected to have:
- Bachelor's or master's degree in Computer Science, Engineering, or a related field.
- 5 to 8 years of overall experience, with 2+ years designing and implementing data solutions on the Databricks platform.
- Proficiency in programming languages such as Python, Scala, or SQL.
- Strong understanding of distributed computing principles and experience with big data technologies such as Apache Spark.
- Experience with cloud platforms such as AWS, Azure, or GCP, and their associated data services.
- Proven track record of delivering scalable and reliable data solutions in a fast-paced environment.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills, with the ability to work effectively in cross-functional teams.
- Good to have: experience with containerization technologies such as Docker and Kubernetes.
- Knowledge of DevOps practices for automated deployment and monitoring of data pipelines.
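For illustration, here is a minimal PySpark sketch of the kind of pipeline work described above; the paths and table names are hypothetical, not taken from the posting.

```python
# Minimal PySpark ETL sketch: read raw events, clean them, write a Delta table.
# All paths and table names are illustrative placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily_events_etl").getOrCreate()

raw = spark.read.json("/mnt/raw/events/2024-01-01/")  # hypothetical landing path

cleaned = (
    raw
    .filter(F.col("event_id").isNotNull())            # drop malformed rows
    .withColumn("event_date", F.to_date("event_ts"))  # derive a partition column
    .dropDuplicates(["event_id"])                     # basic idempotency
)

(cleaned.write
    .format("delta")
    .mode("overwrite")
    .partitionBy("event_date")
    .saveAsTable("analytics.daily_events"))           # hypothetical target table
```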
Posted 1 month ago
5.0 - 10.0 years
0 - 3 Lacs
Pune, Chennai, Bengaluru
Work from Office
Hi All, Wipro is hiring for Databricks.

Location: Chennai, Coimbatore, Bangalore, Pune

Key skills: Azure Data Factory (primary), Azure Databricks, Spark (PySpark), SQL

Experience: 7 to 10 years

Must-have skills:
- Cloud certified in one of these categories: Azure Data Engineer.
- Azure Data Factory, Azure Databricks, Spark (PySpark or Scala), SQL, data ingestion, curation.
- Semantic modelling / optimization of the data model to work within Rahona.
- Experience in Azure ingestion from on-prem sources, e.g. mainframe, SQL Server, Oracle (see the sketch after this posting).
- Experience in Sqoop / Hadoop.
- Microsoft Excel (for metadata files with requirements for ingestion).
- Any other certificate in Azure/AWS/GCP and hands-on data engineering experience in the cloud.
- Strong SQL skills (T-SQL or PL/SQL).
- Orchestration tools, e.g. Autosys, Oozie.
- Source-code versioning with Git.

Nice-to-have skills:
- Experience working with mainframe files.
- Experience in Agile environments, JIRA/Confluence tools.
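As a hedged example of the "Azure ingestion from on-prem sources" skill above, the sketch below pulls a SQL Server table into the lake over JDBC with a partitioned read. Host, credentials, and table names are placeholders.

```python
# Sketch of a JDBC ingestion from an on-prem SQL Server into the lake.
# Host, credentials, and table names are placeholders, not from the posting.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("onprem_ingest").getOrCreate()

orders = (spark.read.format("jdbc")
    .option("url", "jdbc:sqlserver://onprem-host:1433;databaseName=sales")
    .option("dbtable", "dbo.orders")
    .option("user", "svc_ingest")          # in practice, pull from a secret scope
    .option("password", "***")
    .option("numPartitions", 8)            # parallelize the read
    .option("partitionColumn", "order_id")
    .option("lowerBound", 1)
    .option("upperBound", 10_000_000)
    .load())

orders.write.format("delta").mode("append").save("/mnt/curated/orders")
```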
Posted 1 month ago
10.0 - 14.0 years
25 - 37 Lacs
Noida
Hybrid
Description:
Accountable for the data engineering lifecycle, including research, proofs of concept, architecture, design, development, test, deployment, and maintenance. Design, develop, implement, and run cross-domain, modular, flexible, scalable, secure, reliable, and quality data solutions that transform data for meaningful analyses and analytics while ensuring operability. Layer instrumentation into the development process so that data pipelines can be monitored to detect internal problems before they result in user-visible outages or data quality issues (see the sketch below). Build processes and diagnostic tools to troubleshoot, maintain, and optimize solutions and respond to customer and production issues. Embrace continuous learning of engineering practices to ensure adoption of industry best practices and technology, including DevOps, Cloud, and Agile thinking. Drive tech-debt reduction and tech transformation, including open-source adoption, cloud adoption, and HCP assessment and adoption. Maintain high-quality documentation of data definitions, transformations, and processes to ensure data governance and security.

Functions may include database architecture, engineering, design, optimization, security, and administration, as well as data modeling, big data development, Extract, Transform, and Load (ETL) development, storage engineering, data warehousing, data provisioning, and other similar roles. Responsibilities may include Platform-as-a-Service and Cloud solutions with a focus on data stores and associated ecosystems. Duties may include management of design services, providing sizing and configuration assistance, ensuring strict data quality, and performing needs assessments. Analyzes current business practices, processes, and procedures, and identifies future business opportunities for leveraging data storage and retrieval system capabilities. Manages relationships with software and hardware vendors to understand the potential architectural impact of different vendor strategies and data acquisition. May design schemas, write SQL or other data scripting, and help support development of analytics and applications that build on top of data. Selects, develops, and evaluates personnel to ensure the efficient operation of the function.
- Generally, work is self-directed and not prescribed.
- Works with less structured, more complex issues.
- Serves as a resource to others.

Qualifications:
- Undergraduate degree or equivalent experience.
- Proficient in the design and documentation of data exchanges across various channels, including APIs, streams, and batch feeds.
- Proficient in source-to-target mapping and gap analysis; applies data transformation rules based on an understanding of business rules and data structures.
- Develops and implements scripts to maintain and monitor performance tuning.
- Designs scalable job scheduler solutions and advises on appropriate tools/technologies to use.
- Works across multiple domains to define and build data models.
- Understands all the connected technology services and their impacts; assesses designs and proposes options to ensure the solution meets business needs in terms of security, scalability, reliability, and feasibility.
- Understanding of healthcare data, including Electronic Health Records (EHR), claims data, and regulatory compliance such as HIPAA.
- Familiarity with healthcare regulations and data exchange standards (e.g., HL7, FHIR).
- Experience with data analytics tools like Tableau, Power BI, or similar.
- Familiarity with automation tools and scripting languages (e.g., Bash, PowerShell) to automate repetitive tasks.
- Experience in optimizing data processing workflows for performance and cost-efficiency.
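The instrumentation idea above (detecting problems before they become user-visible) can start as simply as logging batch metrics and failing fast on regressions. A minimal sketch, with the thresholds and column name as assumptions:

```python
# Sketch of pipeline instrumentation: emit batch metrics and fail fast on
# data-quality regressions before downstream users see bad data.
# Thresholds and the checked column are illustrative assumptions.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline.orders")

def check_batch(df, min_rows=1_000, max_null_rate=0.01):
    """Validate a Spark DataFrame batch; raise before the publish step on failure."""
    total = df.count()
    nulls = df.filter(df["customer_id"].isNull()).count()
    null_rate = nulls / total if total else 1.0
    # A stable, scrape-friendly log line that a monitor can alert on.
    log.info("metrics rows=%d null_rate=%.4f", total, null_rate)
    if total < min_rows or null_rate > max_null_rate:
        raise ValueError(f"batch failed checks: rows={total}, null_rate={null_rate:.4f}")

# check_batch(transformed_df)  # call before the write/publish step
```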
Posted 1 month ago
8.0 - 13.0 years
8 - 18 Lacs
Hyderabad, Pune, Chennai
Hybrid
Role & responsibilities:
- Hands-on expertise in building solutions leveraging Azure Big Data technologies, Azure Data Factory, Azure Databricks, Azure platform management, and Azure DevOps.
- Preferred: a background in Finance / Capital Markets, demonstrating a basic understanding of industry practice and terminology, including financial data concepts (client, product, account, transaction, settlement, payments, tax, balances, and valuation).
- Experience in Kafka.
- Prior experience in data analytics.
- Hands-on experience in ADB & ADF, Python & Spark.
- Experience in reporting tools: Power BI / Tableau.
- Azure Data Engineer with a minimum of 5 years of experience.
- Prior experience in big data implementations (batch/stream/real-time processing), ELT/ETL, and Hadoop.
- Proficient with writing SQL queries, stored procedures, and views.
- Experience executing development methodologies such as Agile.
- Proven experience with GitHub.
Posted 1 month ago
8.0 - 13.0 years
15 - 27 Lacs
Hyderabad
Remote
Relevant years of experience: ADF - 5+ years; ADB - 2 years

Mandatory skills:
- Hands-on expertise in building solutions leveraging Azure Big Data technologies, Azure Data Factory, Azure Databricks, Azure platform management, and Azure DevOps.

Secondary skills:
- Preferred: a background in Finance / Capital Markets, demonstrating a basic understanding of industry practice and terminology, including financial data concepts (client, product, account, transaction, settlement, payments, tax, balances, and valuation).
- Experience in Kafka.
- Prior experience in data analytics.

Detailed JD:
- Hands-on experience in ADB & ADF, Python & Spark.
- Experience in reporting tools: Power BI / Tableau.
- Azure Data Engineer with a minimum of 5 years of experience.
- Prior experience in big data implementations (batch/stream/real-time processing), ELT/ETL, and Hadoop.
- Proficient with writing SQL queries, stored procedures, and views.
- Experience executing development methodologies such as Agile.
- Proven experience with GitHub.
Posted 1 month ago
3.0 - 6.0 years
4 - 8 Lacs
Bengaluru
Work from Office
Skill required: Tech for Operations - Automation Anywhere
Designation: App Automation Eng Analyst
Qualifications: Any Graduation, BE
Years of Experience: 3-6 years

About Accenture: Combining unmatched experience and specialized skills across more than 40 industries, we offer Strategy and Consulting, Technology and Operations services, and Accenture Song, all powered by the world's largest network of Advanced Technology and Intelligent Operations centers. Our 699,000 people deliver on the promise of technology and human ingenuity every day, serving clients in more than 120 countries. Visit us at www.accenture.com

What would you do: As an RPA Senior Developer, you will be responsible for the design and development of end-to-end RPA automation leveraging A360 tools and technologies. This will include working with clients and stakeholders to understand requirements, preparing technical specification documents and unit test cases, and developing the automation while adhering to client requirements and policies.

What are we looking for:
- Minimum 5-8 years of strong software design and development experience.
- Minimum 3-5 years of programming experience in Automation Anywhere A360, Document Automation, Co-pilot, and Python.
- Effective GenAI prompt creation for data extraction using GenAI OCR.
- Experience with APIs, data integration, and automation best practices.
- Experience in VBA/VB or Python script programming.
- Good knowledge of GenAI and machine learning.
- Good hands-on knowledge of core .NET concepts and OOP programming; understands OO concepts and consistently applies them in client engagements.
- Hands-on experience in SQL and T-SQL queries, and in creating complex stored procedures.
- Exposure to performing unit testing.
- Experience with virtualization and VDI technologies is a mandate.
- Exceptional presentation, written, and verbal communication skills (English).
- Able to prioritize work, complete multiple tasks, and work under deadlines.
- Extensive customer-facing experience with excellent business communication skills. Must be self-motivated with an excellent attitude.
- Automation Anywhere A360 Master/Advanced certification.
- Exposure to SAP automation is preferred.
- Azure Machine Learning, Azure Databricks, and other Azure AI services.
- Exposure to A360 Control Room features.
- Exposure to the Pharma domain is preferred.
- Exposure to GDPR compliance is preferred.
- Agile development methodologies are an added advantage.

Roles and Responsibilities:
- Design and build end-to-end automation leveraging the A360 tool.
- Design and develop reusable components; support building automation capability.
- Anticipate, identify, track, and resolve technical issues and risks affecting delivery.
- Develop automation bots and processes using the A360 platform.
- Utilize A360's advanced features (AARI, WLM, API consumption, Document Automation, Co-pilot) to automate complex tasks, streamline processes, and optimize efficiency.
- Integrate A360 with various APIs, databases, and third-party tools to ensure seamless data flow and interaction between systems.
- Perform rigorous testing of automation solutions to identify and address issues; debug and troubleshoot bots to ensure flawless execution of automation processes.
- Collaborate with cross-functional teams, including business analysts and process architects, to deliver holistic automation solutions that cater to various stakeholder needs.

Qualification: Any Graduation, BE
Posted 1 month ago
15.0 - 20.0 years
5 - 9 Lacs
Hyderabad
Work from Office
Project Role: Integration Engineer
Project Role Description: Provide consultative Business and System Integration services to help clients implement effective solutions. Understand and translate customer needs into business and technology solutions. Drive discussions and consult on transformation, the customer journey, and functional/application designs, and ensure technology and business solutions represent business requirements.
Must have skills: Infrastructure as Code (IaC)
Good to have skills: Hitachi Data Systems (HDS), Google Cloud Storage, Microsoft Azure Databricks
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Integration Engineer, you will provide consultative Business and System Integration services to assist clients in implementing effective solutions. Your typical day will involve engaging with clients to understand their needs, facilitating discussions on transformation, and ensuring that the technology and business solutions align with their requirements. You will work collaboratively with various teams to translate customer needs into actionable plans, driving the customer journey and application designs to achieve optimal outcomes.

Roles & Responsibilities:
- Expected to be an SME; collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Facilitate workshops and meetings to gather requirements and feedback from stakeholders.
- Develop and maintain documentation related to integration processes and solutions.
- Infrastructure as Code (IaC): knowledge of tools like Terraform, Helm, and Ansible (including Ansible Tower), plus dependency and package management.
- Broad knowledge of operating systems.
- Network management knowledge and understanding of network protocols, configuration, and troubleshooting; proficiency in configuring and managing network settings within cloud platforms.
- Security: knowledge of cybersecurity principles and practices, implementing security frameworks that ensure secure workloads and data protection.
- Expert proficiency in the Linux CLI.
- Monitoring of the environment from a technical perspective.
- Monitoring the costs of the development environment.

Professional & Technical Skills:
- Must have: proficiency in Infrastructure as Code (IaC).
- Good to have: experience with Hitachi Data Systems (HDS), Google Cloud Storage, Microsoft Azure Databricks.
- Strong understanding of cloud infrastructure and deployment strategies.
- Experience with automation tools and frameworks for infrastructure management.
- Familiarity with version control systems and CI/CD pipelines.
- Solid understanding of data modelling, data warehousing, and data platform design.
- Working knowledge of databases and SQL.
- Proficient with version control such as Git, GitHub, or GitLab.
- Experience supporting BAT teams and BAT test environments.
- Experience with workflow and batch scheduling; Control-M and Informatica experience is an added advantage.
- Good know-how of financial markets; know-how of clearing, trading, and risk business processes is an added advantage.
- Know-how of Java, Spark, and BI reporting is an added advantage.
- Know-how of cloud platforms and an affinity for modern technology is an added advantage.
- Experience with CI/CD pipelines and exposure to DevOps methodologies is an added advantage.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Infrastructure as Code (IaC).
- This position is based in Hyderabad.
- 15 years of full-time education is required.

Qualification: 15 years full time education
Posted 1 month ago
2.0 - 7.0 years
5 - 9 Lacs
Pune
Work from Office
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: Microsoft Azure Databricks
Good to have skills: NA
Minimum 2 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Developer, you will be responsible for designing, building, and configuring applications to meet business process and application requirements in a dynamic work environment.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Required active participation/contribution in team discussions.
- Contribute to providing solutions to work-related problems.
- Develop and implement software solutions to meet business requirements.
- Collaborate with cross-functional teams to analyze user needs and design efficient applications.
- Troubleshoot and debug applications to ensure optimal performance.
- Stay updated with industry trends and technologies to enhance application development processes.
- Provide technical guidance and support to junior team members.

Professional & Technical Skills:
- Must have: proficiency in Microsoft Azure Databricks.
- Strong understanding of cloud computing principles and services.
- Experience with data processing and analytics using Azure services.
- Hands-on experience in developing and deploying applications on the Azure cloud platform.
- Knowledge of programming languages such as Python, Scala, or SQL.

Additional Information:
- The candidate should have a minimum of 2 years of experience in Microsoft Azure Databricks.
- This position is based at our Pune office.
- 15 years of full-time education is required.

Qualification: 15 years full time education
Posted 1 month ago
15.0 - 20.0 years
10 - 14 Lacs
Hyderabad
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills: Data Engineering
Good to have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure project milestones are met, facilitating discussions to address challenges, and guiding your team in implementing effective solutions. You will also engage in strategic planning sessions to align project goals with organizational objectives, ensuring that all stakeholders are informed and involved throughout the development process. Your role will be pivotal in driving innovation and efficiency within the application development lifecycle, fostering a collaborative environment that encourages creativity and problem-solving.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Monitor project progress and implement necessary adjustments to meet deadlines.

Professional & Technical Skills:
- Must have: proficiency in Microsoft Azure Databricks.
- Experience with cloud computing platforms and services.
- Strong understanding of application development methodologies.
- Ability to design and implement scalable solutions.
- Familiarity with data integration and ETL processes.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Microsoft Azure Databricks.
- This position is based at our Hyderabad office.
- 15 years of full-time education is required.

Qualification: 15 years full time education
Posted 1 month ago
15.0 - 20.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: Data Engineering
Good to have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As a Senior Data Engineer, you bring over 10 years of experience in applying advanced concepts and technologies in production environments. Your expertise and skills make you an ideal candidate to lead and deliver cutting-edge data solutions.

Expertise:
- Extensive hands-on experience with Azure Databricks and modern data architecture principles.
- In-depth understanding of Lakehouse and Medallion architectures and their practical applications.
- Advanced knowledge of Delta Lake, including data storage, schema evolution, and ACID transactions (see the sketch after this posting).
- Comprehensive expertise in working with Parquet files, including handling challenges and designing effective solutions.
- Working experience with Unity Catalog.
- Knowledge of Azure cloud services.
- Working experience with Azure DevOps.

Skills:
- Proficiency in writing clear, maintainable, and modular code using Python and PySpark.
- Advanced SQL expertise, including query optimization and performance tuning.

Tools:
- Experience with Infrastructure as Code (IaC), preferably using Terraform.
- Proficiency in CI/CD pipelines, with a strong preference for Azure DevOps.
- Familiarity with Azure Data Factory for seamless data integration and orchestration.
- Hands-on experience with Apache Airflow for workflow automation.
- Automation skills using PowerShell.
- Nice to have: basic knowledge of Lakehouse Apps and frameworks like Angular.js, Node.js, or React.js.

You also:
- Possess excellent communication skills, making technical concepts accessible to non-technical stakeholders.
- Nice to have: API knowledge.
- Nice to have: data modelling knowledge.
- Are open to discussing and tackling challenges with a collaborative mindset.
- Enjoy teaching, sharing knowledge, and mentoring team members to foster growth.
- Thrive in multidisciplinary Scrum teams, collaborating effectively to achieve shared goals.
- Have a solid foundation in software development, enabling you to bridge gaps between development and data engineering.
- Demonstrate a strong drive for continuous improvement and learning new technologies.
- Take full ownership of the build, run, and change processes, ensuring solutions are reliable and scalable.
- Embody a positive, proactive mindset, fostering teamwork and mutual support within your team.

Qualification: 15 years full time education
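A small sketch of the Delta Lake schema evolution called out above: appending a frame with a new column succeeds only when mergeSchema is enabled, and the write is transactional either way. Paths are placeholders, and the snippet assumes a Delta-enabled Spark session such as a Databricks cluster.

```python
# Sketch of Delta Lake schema evolution. Table path is a placeholder;
# assumes a Spark session with Delta Lake available (e.g. Databricks).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta_schema_demo").getOrCreate()

base = spark.createDataFrame([(1, "alice")], ["id", "name"])
base.write.format("delta").mode("overwrite").save("/tmp/demo/users")

# The appended frame carries a new 'country' column; without mergeSchema
# this append would fail schema enforcement.
extended = spark.createDataFrame([(2, "bob", "nl")], ["id", "name", "country"])
(extended.write
    .format("delta")
    .mode("append")
    .option("mergeSchema", "true")  # allow the new column to evolve the schema
    .save("/tmp/demo/users"))

spark.read.format("delta").load("/tmp/demo/users").show()
```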
Posted 1 month ago
8.0 - 13.0 years
20 - 25 Lacs
Chennai, Bengaluru, Delhi / NCR
Work from Office
Role: Senior Databricks Engineer / Databricks Technical Lead / Data Architect
Experience: 8-15 years
Location: Bangalore, Chennai, Delhi NCR, Pune

Primary Roles and Responsibilities:
- Develop Modern Data Warehouse solutions using Databricks and the AWS/Azure stack.
- Provide forward-thinking solutions in the data engineering and analytics space.
- Collaborate with DW/BI leads to understand new ETL pipeline development requirements.
- Triage issues to find gaps in existing pipelines and fix them.
- Work with the business to understand reporting-layer needs and develop data models to fulfill them.
- Help junior team members resolve issues and technical challenges.
- Drive technical discussions with client architects and team members.
- Orchestrate the data pipelines in a scheduler via Airflow (see the sketch after this posting).

Skills and Qualifications:
- Bachelor's and/or master's degree in computer science, or equivalent experience.
- Must have 6+ years of total IT experience and 3+ years of experience in data warehouse/ETL projects.
- Deep understanding of Star and Snowflake dimensional modelling.
- Strong knowledge of Data Management principles.
- Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture.
- Hands-on experience in SQL, Python, and Spark (PySpark).
- Experience with the AWS/Azure stack.
- Desirable: ETL with batch and streaming (Kinesis).
- Experience in building ETL / data warehouse transformation processes.
- Experience with Apache Kafka for streaming / event-based data.
- Experience with other open-source big data products, including Hadoop (Hive, Pig, Impala).
- Experience with open-source non-relational / NoSQL data repositories (MongoDB, Cassandra, Neo4j).
- Experience working with structured and unstructured data, including imaging and geospatial data.
- Experience working in a DevOps environment with tools such as Terraform, CircleCI, and Git.
- Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, and troubleshooting.
- Databricks Certified Data Engineer Associate/Professional certification (desirable).
- Comfortable working in a dynamic, fast-paced, innovative environment with several concurrent projects.
- Experience working in Agile methodology.
- Strong verbal and written communication skills.
- Strong analytical and problem-solving skills with high attention to detail.
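A minimal sketch of the Airflow orchestration mentioned above. The DAG id, schedule, and tasks are hypothetical stand-ins for real pipeline steps.

```python
# Minimal Airflow 2.x DAG sketch for orchestrating a nightly pipeline.
# Task commands are placeholders; real tasks would trigger Spark/Databricks jobs.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="nightly_warehouse_load",
    start_date=datetime(2024, 1, 1),
    schedule_interval="0 2 * * *",  # 02:00 daily
    catchup=False,
) as dag:
    extract = BashOperator(task_id="extract", bash_command="echo extract")
    transform = BashOperator(task_id="transform", bash_command="echo transform")
    load = BashOperator(task_id="load", bash_command="echo load")

    extract >> transform >> load  # linear dependency chain
```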
Posted 1 month ago
5.0 - 7.0 years
10 - 14 Lacs
Mumbai
Work from Office
Summary: We are seeking a highly skilled Data Engineer with expertise in ontology development and knowledge graph implementation. This role will be pivotal in shaping our data infrastructure and ensuring the accurate representation and integration of complex data sets. You will leverage industry best practices, including the Basic Formal Ontology (BFO) and Common Core Ontologies (CCO), to design, develop, and maintain ontologies, semantic and syntactic data models, and knowledge graphs on the Databricks Data Intelligence Platform that drive data-driven decision-making and innovation within the company.

Responsibilities:

Ontology Development:
- Design and implement ontologies based on BFO and CCO principles, ensuring alignment with business requirements and industry standards.
- Collaborate with domain experts to capture and formalize domain knowledge into ontological structures.
- Develop and maintain comprehensive ontologies to model various business entities, relationships, and processes.

Data Modeling:
- Design and implement semantic and syntactic data models that adhere to ontological principles.
- Create data models that are scalable, flexible, and adaptable to changing business needs.
- Integrate data models with existing data infrastructure and applications.

Knowledge Graph Implementation:
- Design and build knowledge graphs based on ontologies and data models (a small sketch follows this posting).
- Develop algorithms and tools for knowledge graph population, enrichment, and maintenance.
- Utilize knowledge graphs to enable advanced analytics, search, and recommendation systems.

Data Quality and Governance:
- Ensure the quality, accuracy, and consistency of ontologies, data models, and knowledge graphs.
- Define and implement data governance processes and standards for ontology development and maintenance.

Collaboration and Communication:
- Work closely with data scientists, software engineers, and business stakeholders to understand their data requirements and provide tailored solutions.
- Communicate complex technical concepts clearly and effectively to diverse audiences.

Qualifications:
- Education: Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
- Experience: 5+ years of experience in data engineering or a related role.
- Proven experience in ontology development using BFO and CCO or similar ontological frameworks.
- Strong knowledge of semantic web technologies, including RDF, OWL, SPARQL, and SHACL.
- Proficiency in Python, SQL, and other programming languages used for data engineering.
- Experience with graph databases (e.g., TigerGraph, JanusGraph) and triple stores (e.g., GraphDB, Stardog) is a plus.

Desired Skills:
- Familiarity with machine learning and natural language processing techniques.
- Experience with cloud-based data platforms (e.g., AWS, Azure, GCP).
- Experience with Databricks technologies including Spark, Delta Lake, Iceberg, Unity Catalog, UniForm, and Photon.
- Strong problem-solving and analytical skills.
- Excellent communication and interpersonal skills.
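To make the RDF/SPARQL requirement concrete, here is a tiny sketch using Python's rdflib: assert a few triples, then query them with SPARQL. The namespace and entities are illustrative, not drawn from any real ontology.

```python
# Tiny knowledge-graph sketch with rdflib: assert triples, query with SPARQL.
# The namespace and entities are illustrative placeholders.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.com/ontology#")
g = Graph()
g.bind("ex", EX)

# Assert a few facts: an organization and a person who works for it.
g.add((EX.acme, RDF.type, EX.Organization))
g.add((EX.acme, EX.hasName, Literal("Acme Corp")))
g.add((EX.alice, RDF.type, EX.Person))
g.add((EX.alice, EX.worksFor, EX.acme))

# SPARQL: find every person and the organization they work for.
q = """
SELECT ?person ?org WHERE {
    ?person a ex:Person ;
            ex:worksFor ?org .
    ?org a ex:Organization .
}
"""
for person, org in g.query(q, initNs={"ex": EX}):
    print(person, org)
```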
Posted 1 month ago
6.0 - 9.0 years
9 - 13 Lacs
Kolkata
Work from Office
Experience: 6+ years as an Azure Data Engineer, including at least one end-to-end implementation in Microsoft Fabric.

Responsibilities:
- Lead the design and implementation of Microsoft Fabric-centric data platforms and data warehouses.
- Develop and optimize ETL/ELT processes within the Microsoft Azure ecosystem, effectively utilizing relevant Fabric solutions.
- Ensure data integrity, quality, and governance throughout the Microsoft Fabric environment.
- Collaborate with stakeholders to translate business needs into actionable data solutions.
- Troubleshoot and optimize existing Fabric implementations for enhanced performance.

Skills:
- Solid foundational knowledge of data warehousing, ETL/ELT processes, and data modeling (dimensional, normalized).
- Design and implement scalable and efficient data pipelines using Data Factory in Fabric (Data Pipeline, Dataflow Gen2, etc.), PySpark notebooks, Spark SQL, and Python. This includes data ingestion, data transformation, and data loading processes (a sketch follows this posting).
- Experience ingesting data from SAP systems such as SAP ECC/S4HANA/SAP BW is a plus.
- Nice to have: the ability to develop dashboards or reports using tools like Power BI.

Coding Fluency:
- Proficiency in SQL, Python, or other languages for data scripting, transformation, and automation.
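One common shape for the data loading step in a Fabric PySpark notebook is a Delta MERGE upsert. A hedged sketch follows; the lakehouse path and table names are assumptions, not from the posting.

```python
# Sketch of an upsert step as it might run in a Fabric PySpark notebook:
# MERGE incoming records into a lakehouse Delta table.
# The landing path and table names are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

incoming = spark.read.parquet("Files/landing/customers/")  # hypothetical path
incoming.createOrReplaceTempView("incoming_customers")

# Delta MERGE: update rows that already exist, insert the rest.
spark.sql("""
    MERGE INTO customers AS t
    USING incoming_customers AS s
    ON t.customer_id = s.customer_id
    WHEN MATCHED THEN UPDATE SET *
    WHEN NOT MATCHED THEN INSERT *
""")
```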
Posted 1 month ago
6.0 - 10.0 years
9 - 13 Lacs
Mumbai
Work from Office
Job Description: The Azure Databricks Engineer plays a critical role in establishing and maintaining an efficient data ecosystem within an organization. This position is integral to the development of data solutions leveraging the capabilities of Microsoft Azure Databricks. The engineer will work closely with data scientists and analytics teams to facilitate the transformation of raw data into actionable insights. With increasing reliance on big data technologies and cloud-based solutions, having an expert on board is vital for driving data-driven decision-making processes. The Azure Databricks Engineer will also be responsible for optimizing data workflows, ensuring data quality, and deploying scalable data solutions that align with organizational goals. This role requires not only technical expertise in handling large volumes of data but also the ability to collaborate across various functional teams to enhance operational efficiency.

Responsibilities:
- Design and implement scalable data pipelines using Azure Databricks.
- Develop ETL processes to efficiently extract, transform, and load data.
- Collaborate with data scientists and analysts to define and refine data requirements.
- Optimize Spark jobs for performance and efficiency (see the sketch after this posting).
- Monitor and troubleshoot production workflows and jobs.
- Implement data quality checks and validation processes.
- Create and maintain technical documentation related to data architecture.
- Conduct code reviews to ensure best practices are followed.
- Work on integrating data from various sources, including databases, APIs, and third-party services.
- Utilize SQL and Python for data manipulation and analysis.
- Collaborate with DevOps teams to deploy and maintain data solutions.
- Stay updated with the latest trends and updates in Azure Databricks and related technologies.
- Facilitate data visualization initiatives for better data-driven insights.
- Provide training and support to team members on data tools and practices.
- Participate in cross-functional projects to enhance data sharing and access.

Requirements:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Minimum of 6 years of experience in data engineering or a related domain.
- Strong expertise in Azure Databricks and data lake concepts.
- Proficiency with SQL, Python, and Spark.
- Solid understanding of data warehousing concepts.
- Experience with ETL tools and frameworks.
- Familiarity with cloud platforms such as Azure, AWS, or Google Cloud.
- Excellent problem-solving and analytical skills.
- Ability to work collaboratively in a diverse team environment.
- Experience with data visualization tools such as Power BI or Tableau.
- Strong communication skills with the ability to convey technical concepts to non-technical stakeholders.
- Knowledge of data governance and data quality best practices.
- Hands-on experience with big data technologies and frameworks.
- A relevant certification in Azure is a plus.
- Ability to adapt to changing technologies and evolving business requirements.
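As a concrete instance of "Optimize Spark jobs", the sketch below broadcasts a small dimension table so the join avoids a shuffle. Table names are placeholders, not from the posting.

```python
# Sketch of a common Spark optimization: broadcasting a small dimension
# table turns a shuffle join into a map-side join. Names are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

facts = spark.read.table("sales.transactions")  # large fact table
dims = spark.read.table("sales.stores")         # small dimension table

# broadcast() hints Spark to ship the small table to every executor,
# so the large table is never shuffled for the join.
joined = facts.join(F.broadcast(dims), "store_id")

joined.groupBy("region").agg(F.sum("amount").alias("revenue")).show()
```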
Posted 1 month ago
5.0 - 10.0 years
12 - 22 Lacs
Gurugram
Remote
Requirements:
- Bachelor's degree in Computer Science, Information Technology, or a related field; a master's degree is a plus.
- Proven experience as a Data Engineer or in a similar role, with a focus on ETL processes and database management.
- Proficiency in the Microsoft Azure data management suite (MSSQL, Azure Databricks, Power BI, Data Factory, Azure cloud monitoring, etc.) and Python scripting.
- Strong knowledge of SQL and experience with database management systems.
- Strong development skills in Python and PySpark.
- Experience with data warehousing solutions and data mart creation.
- Familiarity with big data technologies (e.g., Hadoop, Spark) is a plus.
- Strong problem-solving skills and attention to detail.
- Excellent communication and collaboration skills.

Preferred Qualifications:
- Databricks Certified Data Engineer Associate or Professional certification.
- Understanding of data modeling and data architecture principles.
- Experience with data governance and data security best practices.
Posted 1 month ago
7.0 - 12.0 years
30 - 45 Lacs
Bengaluru
Hybrid
Roles and Responsibilities: The Senior Data Engineer designs and builds data foundations and end-to-end solutions for the Shell business to maximize value from data. The role helps create data-driven thinking within the organization, not just within IT teams, but also in the wider business stakeholder community. A Senior Data Engineer is expected to be a subject matter expert who designs and builds data solutions and mentors junior engineers. They are also key drivers in converting the vision and data strategy into IT solutions and delivering them.

Key Characteristics:
- Technology expert who constantly pursues knowledge enhancement and has an inherent curiosity to understand work from multiple dimensions.
- Deep data focus with expertise in the technology domain.
- A skilled communicator capable of speaking to both technical developers and business managers; respected and trusted by leaders and staff.
- Actively delivers the roll-out and embedding of Data Foundation initiatives in support of the key business programmes.
- Coordinates the change management, incident management, and problem management processes.
- Presents reports and findings to key stakeholders and acts as the subject matter expert on data analysis and design.
- Drives implementation efficiency and effectiveness across pilots and future projects to minimize cost, increase speed of implementation, and maximize value delivery.
- Contributes to community-building initiatives like CoE and CoP.

Mandatory skills:
- AWS/Azure/SAP - Master
- ELT - Master
- Data Modeling - Master
- Data Integration & Ingestion - Skill
- Data Manipulation and Processing - Skill
- GitHub, GitHub Actions, Azure DevOps - Skill
- Data Factory, Databricks, SQL DB, Synapse, Stream Analytics, Glue, Airflow, Kinesis, Redshift, SonarQube, PyTest - Skill

Optional skills:
- Experience in project management, running a scrum team.
- Experience working with BPC, Planning.
- Exposure to working with an external technical ecosystem.
- MkDocs documentation.
Posted 1 month ago
6.0 - 8.0 years
9 - 13 Lacs
Bengaluru
Work from Office
Department: Platform Engineering

Role Description: This is a remote contract role for a Data Engineer (Ontology, 5+ years) at Zorba AI. We are seeking a highly skilled Data Engineer with expertise in ontology development and knowledge graph implementation. This role will be pivotal in shaping our data infrastructure and ensuring the accurate representation and integration of complex data sets. You will leverage industry best practices, including the Basic Formal Ontology (BFO) and Common Core Ontologies (CCO), to design, develop, and maintain ontologies, semantic and syntactic data models, and knowledge graphs on the Databricks Data Intelligence Platform that drive data-driven decision-making and innovation within the company.

Responsibilities:

Ontology Development:
- Design and implement ontologies based on BFO and CCO principles, ensuring alignment with business requirements and industry standards.
- Collaborate with domain experts to capture and formalize domain knowledge into ontological structures.
- Develop and maintain comprehensive ontologies to model various business entities, relationships, and processes.

Data Modeling:
- Design and implement semantic and syntactic data models that adhere to ontological principles.
- Create data models that are scalable, flexible, and adaptable to changing business needs.
- Integrate data models with existing data infrastructure and applications.

Knowledge Graph Implementation:
- Design and build knowledge graphs based on ontologies and data models.
- Develop algorithms and tools for knowledge graph population, enrichment, and maintenance.
- Utilize knowledge graphs to enable advanced analytics, search, and recommendation systems.

Data Quality and Governance:
- Ensure the quality, accuracy, and consistency of ontologies, data models, and knowledge graphs.
- Define and implement data governance processes and standards for ontology development and maintenance.

Collaboration and Communication:
- Work closely with data scientists, software engineers, and business stakeholders to understand their data requirements and provide tailored solutions.
- Communicate complex technical concepts clearly and effectively to diverse audiences.

Qualifications:
- Education: Bachelor's or master's degree in computer science, Data Science, or a related field.
- Experience: 5+ years of experience in data engineering or a related role.
- Proven experience in ontology development using BFO and CCO or similar ontological frameworks.
- Strong knowledge of semantic web technologies, including RDF, OWL, SPARQL, and SHACL.
- Proficiency in Python, SQL, and other programming languages used for data engineering.
- Experience with graph databases (e.g., TigerGraph, JanusGraph) and triple stores (e.g., GraphDB, Stardog) is a plus.

Desired Skills:
- Familiarity with machine learning and natural language processing techniques.
- Experience with cloud-based data platforms (e.g., AWS, Azure, GCP).
- Experience with Databricks technologies including Spark, Delta Lake, Iceberg, Unity Catalog, UniForm, and Photon.
- Strong problem-solving and analytical skills.
- Excellent communication and interpersonal skills.
Posted 1 month ago
5.0 - 7.0 years
10 - 14 Lacs
Kolkata
Work from Office
Summary: We are seeking a highly skilled Data Engineer with expertise in ontology development and knowledge graph implementation. This role will be pivotal in shaping our data infrastructure and ensuring the accurate representation and integration of complex data sets. You will leverage industry best practices, including the Basic Formal Ontology (BFO) and Common Core Ontologies (CCO), to design, develop, and maintain ontologies, semantic and syntactic data models, and knowledge graphs on the Databricks Data Intelligence Platform that drive data-driven decision-making and innovation within the company.

Responsibilities:

Ontology Development:
- Design and implement ontologies based on BFO and CCO principles, ensuring alignment with business requirements and industry standards.
- Collaborate with domain experts to capture and formalize domain knowledge into ontological structures.
- Develop and maintain comprehensive ontologies to model various business entities, relationships, and processes.

Data Modeling:
- Design and implement semantic and syntactic data models that adhere to ontological principles.
- Create data models that are scalable, flexible, and adaptable to changing business needs.
- Integrate data models with existing data infrastructure and applications.

Knowledge Graph Implementation:
- Design and build knowledge graphs based on ontologies and data models.
- Develop algorithms and tools for knowledge graph population, enrichment, and maintenance.
- Utilize knowledge graphs to enable advanced analytics, search, and recommendation systems.

Data Quality and Governance:
- Ensure the quality, accuracy, and consistency of ontologies, data models, and knowledge graphs.
- Define and implement data governance processes and standards for ontology development and maintenance.

Collaboration and Communication:
- Work closely with data scientists, software engineers, and business stakeholders to understand their data requirements and provide tailored solutions.
- Communicate complex technical concepts clearly and effectively to diverse audiences.

Qualifications:
- Education: Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
- Experience: 5+ years of experience in data engineering or a related role.
- Proven experience in ontology development using BFO and CCO or similar ontological frameworks.
- Strong knowledge of semantic web technologies, including RDF, OWL, SPARQL, and SHACL.
- Proficiency in Python, SQL, and other programming languages used for data engineering.
- Experience with graph databases (e.g., TigerGraph, JanusGraph) and triple stores (e.g., GraphDB, Stardog) is a plus.

Desired Skills:
- Familiarity with machine learning and natural language processing techniques.
- Experience with cloud-based data platforms (e.g., AWS, Azure, GCP).
- Experience with Databricks technologies including Spark, Delta Lake, Iceberg, Unity Catalog, UniForm, and Photon.
- Strong problem-solving and analytical skills.
- Excellent communication and interpersonal skills.
Posted 1 month ago
6.0 - 10.0 years
9 - 13 Lacs
Kolkata
Work from Office
The Azure Databricks Engineer plays a critical role in establishing and maintaining an efficient data ecosystem within an organization. This position is integral to the development of data solutions leveraging the capabilities of Microsoft Azure Databricks. The engineer will work closely with data scientists and analytics teams to facilitate the transformation of raw data into actionable insights. With increasing reliance on big data technologies and cloud-based solutions, having an expert on board is vital for driving data-driven decision-making processes. The Azure Databricks Engineer will also be responsible for optimizing data workflows, ensuring data quality, and deploying scalable data solutions that align with organizational goals. This role requires not only technical expertise in handling large volumes of data but also the ability to collaborate across various functional teams to enhance operational efficiency.

Responsibilities:
- Design and implement scalable data pipelines using Azure Databricks.
- Develop ETL processes to efficiently extract, transform, and load data.
- Collaborate with data scientists and analysts to define and refine data requirements.
- Optimize Spark jobs for performance and efficiency.
- Monitor and troubleshoot production workflows and jobs.
- Implement data quality checks and validation processes.
- Create and maintain technical documentation related to data architecture.
- Conduct code reviews to ensure best practices are followed.
- Work on integrating data from various sources, including databases, APIs, and third-party services.
- Utilize SQL and Python for data manipulation and analysis.
- Collaborate with DevOps teams to deploy and maintain data solutions.
- Stay updated with the latest trends and updates in Azure Databricks and related technologies.
- Facilitate data visualization initiatives for better data-driven insights.
- Provide training and support to team members on data tools and practices.
- Participate in cross-functional projects to enhance data sharing and access.

Requirements:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Minimum of 6 years of experience in data engineering or a related domain.
- Strong expertise in Azure Databricks and data lake concepts.
- Proficiency with SQL, Python, and Spark.
- Solid understanding of data warehousing concepts.
- Experience with ETL tools and frameworks.
- Familiarity with cloud platforms such as Azure, AWS, or Google Cloud.
- Excellent problem-solving and analytical skills.
- Ability to work collaboratively in a diverse team environment.
- Experience with data visualization tools such as Power BI or Tableau.
- Strong communication skills with the ability to convey technical concepts to non-technical stakeholders.
- Knowledge of data governance and data quality best practices.
- Hands-on experience with big data technologies and frameworks.
- A relevant certification in Azure is a plus.
- Ability to adapt to changing technologies and evolving business requirements.
Posted 1 month ago
6.0 - 8.0 years
9 - 13 Lacs
Mumbai
Work from Office
Job Title: Sr. Data Engineer - Ontology & Knowledge Graph Specialist
Department: Platform Engineering

Role Description: This is a remote contract role for a Data Engineer (Ontology, 5+ years) at Zorba AI. We are seeking a highly skilled Data Engineer with expertise in ontology development and knowledge graph implementation. This role will be pivotal in shaping our data infrastructure and ensuring the accurate representation and integration of complex data sets. You will leverage industry best practices, including the Basic Formal Ontology (BFO) and Common Core Ontologies (CCO), to design, develop, and maintain ontologies, semantic and syntactic data models, and knowledge graphs on the Databricks Data Intelligence Platform that drive data-driven decision-making and innovation within the company.

Responsibilities:

Ontology Development:
- Design and implement ontologies based on BFO and CCO principles, ensuring alignment with business requirements and industry standards.
- Collaborate with domain experts to capture and formalize domain knowledge into ontological structures.
- Develop and maintain comprehensive ontologies to model various business entities, relationships, and processes.

Data Modeling:
- Design and implement semantic and syntactic data models that adhere to ontological principles.
- Create data models that are scalable, flexible, and adaptable to changing business needs.
- Integrate data models with existing data infrastructure and applications.

Knowledge Graph Implementation:
- Design and build knowledge graphs based on ontologies and data models.
- Develop algorithms and tools for knowledge graph population, enrichment, and maintenance.
- Utilize knowledge graphs to enable advanced analytics, search, and recommendation systems.

Data Quality and Governance:
- Ensure the quality, accuracy, and consistency of ontologies, data models, and knowledge graphs.
- Define and implement data governance processes and standards for ontology development and maintenance.

Collaboration and Communication:
- Work closely with data scientists, software engineers, and business stakeholders to understand their data requirements and provide tailored solutions.
- Communicate complex technical concepts clearly and effectively to diverse audiences.

Qualifications:
- Education: Bachelor's or master's degree in computer science, Data Science, or a related field.
- Experience: 5+ years of experience in data engineering or a related role.
- Proven experience in ontology development using BFO and CCO or similar ontological frameworks.
- Strong knowledge of semantic web technologies, including RDF, OWL, SPARQL, and SHACL.
- Proficiency in Python, SQL, and other programming languages used for data engineering.
- Experience with graph databases (e.g., TigerGraph, JanusGraph) and triple stores (e.g., GraphDB, Stardog) is a plus.

Desired Skills:
- Familiarity with machine learning and natural language processing techniques.
- Experience with cloud-based data platforms (e.g., AWS, Azure, GCP).
- Experience with Databricks technologies including Spark, Delta Lake, Iceberg, Unity Catalog, UniForm, and Photon.
- Strong problem-solving and analytical skills.
- Excellent communication and interpersonal skills.
Posted 1 month ago
5.0 - 7.0 years
10 - 14 Lacs
Chennai
Work from Office
Job Title: Sr. Data Engineer - Ontology & Knowledge Graph Specialist
Department: Platform Engineering

Summary: We are seeking a highly skilled Data Engineer with expertise in ontology development and knowledge graph implementation. This role will be pivotal in shaping our data infrastructure and ensuring the accurate representation and integration of complex data sets. You will leverage industry best practices, including the Basic Formal Ontology (BFO) and Common Core Ontologies (CCO), to design, develop, and maintain ontologies, semantic and syntactic data models, and knowledge graphs on the Databricks Data Intelligence Platform that drive data-driven decision-making and innovation within the company.

Responsibilities:

Ontology Development:
- Design and implement ontologies based on BFO and CCO principles, ensuring alignment with business requirements and industry standards.
- Collaborate with domain experts to capture and formalize domain knowledge into ontological structures.
- Develop and maintain comprehensive ontologies to model various business entities, relationships, and processes.

Data Modeling:
- Design and implement semantic and syntactic data models that adhere to ontological principles.
- Create data models that are scalable, flexible, and adaptable to changing business needs.
- Integrate data models with existing data infrastructure and applications.

Knowledge Graph Implementation:
- Design and build knowledge graphs based on ontologies and data models.
- Develop algorithms and tools for knowledge graph population, enrichment, and maintenance.
- Utilize knowledge graphs to enable advanced analytics, search, and recommendation systems.

Data Quality and Governance:
- Ensure the quality, accuracy, and consistency of ontologies, data models, and knowledge graphs.
- Define and implement data governance processes and standards for ontology development and maintenance.

Collaboration and Communication:
- Work closely with data scientists, software engineers, and business stakeholders to understand their data requirements and provide tailored solutions.
- Communicate complex technical concepts clearly and effectively to diverse audiences.

Qualifications:
- Education: Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
- Experience: 5+ years of experience in data engineering or a related role.
- Proven experience in ontology development using BFO and CCO or similar ontological frameworks.
- Strong knowledge of semantic web technologies, including RDF, OWL, SPARQL, and SHACL.
- Proficiency in Python, SQL, and other programming languages used for data engineering.
- Experience with graph databases (e.g., TigerGraph, JanusGraph) and triple stores (e.g., GraphDB, Stardog) is a plus.

Desired Skills:
- Familiarity with machine learning and natural language processing techniques.
- Experience with cloud-based data platforms (e.g., AWS, Azure, GCP).
- Experience with Databricks technologies including Spark, Delta Lake, Iceberg, Unity Catalog, UniForm, and Photon.
- Strong problem-solving and analytical skills.
- Excellent communication and interpersonal skills.
Posted 1 month ago
4.0 - 9.0 years
8 - 13 Lacs
Kolkata
Work from Office
Role: Senior Databricks Engineer

As a Senior Databricks Engineer, you will play a pivotal role in designing, implementing, and optimizing data processing pipelines and analytics solutions on the Databricks platform. You will collaborate closely with cross-functional teams to understand business requirements, architect scalable solutions, and ensure the reliability and performance of our data infrastructure. This role requires deep expertise in Databricks, strong programming skills, and a passion for solving complex engineering challenges.

What you'll do:
- Design and develop data processing pipelines and analytics solutions using Databricks.
- Architect scalable and efficient data models and storage solutions on the Databricks platform.
- Collaborate with architects and other teams to migrate current solutions to Databricks.
- Optimize the performance and reliability of Databricks clusters and jobs to meet SLAs and business requirements.
- Apply best practices for data governance, security, and compliance on the Databricks platform.
- Mentor junior engineers and provide technical guidance.
- Stay current with emerging technologies and trends in data engineering and analytics to drive continuous improvement.

You'll be expected to have:
- Bachelor's or master's degree in Computer Science, Engineering, or a related field.
- 5 to 8 years of overall experience, with 2+ years designing and implementing data solutions on the Databricks platform.
- Proficiency in programming languages such as Python, Scala, or SQL.
- Strong understanding of distributed computing principles and experience with big data technologies such as Apache Spark.
- Experience with cloud platforms such as AWS, Azure, or GCP, and their associated data services.
- Proven track record of delivering scalable and reliable data solutions in a fast-paced environment.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills, with the ability to work effectively in cross-functional teams.
- Good to have: experience with containerization technologies such as Docker and Kubernetes.
- Knowledge of DevOps practices for automated deployment and monitoring of data pipelines.
Posted 1 month ago
6.0 - 10.0 years
9 - 13 Lacs
Chennai
Work from Office
The Azure Databricks Engineer plays a critical role in establishing and maintaining an efficient data ecosystem within an organization. This position is integral to the development of data solutions leveraging the capabilities of Microsoft Azure Databricks. The engineer will work closely with data scientists and analytics teams to facilitate the transformation of raw data into actionable insights. With increasing reliance on big data technologies and cloud-based solutions, having an expert on board is vital for driving data-driven decision-making processes. The Azure Databricks Engineer will also be responsible for optimizing data workflows, ensuring data quality, and deploying scalable data solutions that align with organizational goals. This role requires not only technical expertise in handling large volumes of data but also the ability to collaborate across various functional teams to enhance operational efficiency.

Responsibilities:
- Design and implement scalable data pipelines using Azure Databricks.
- Develop ETL processes to efficiently extract, transform, and load data.
- Collaborate with data scientists and analysts to define and refine data requirements.
- Optimize Spark jobs for performance and efficiency.
- Monitor and troubleshoot production workflows and jobs.
- Implement data quality checks and validation processes.
- Create and maintain technical documentation related to data architecture.
- Conduct code reviews to ensure best practices are followed.
- Work on integrating data from various sources, including databases, APIs, and third-party services.
- Utilize SQL and Python for data manipulation and analysis.
- Collaborate with DevOps teams to deploy and maintain data solutions.
- Stay updated with the latest trends and updates in Azure Databricks and related technologies.
- Facilitate data visualization initiatives for better data-driven insights.
- Provide training and support to team members on data tools and practices.
- Participate in cross-functional projects to enhance data sharing and access.

Requirements:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Minimum of 6 years of experience in data engineering or a related domain.
- Strong expertise in Azure Databricks and data lake concepts.
- Proficiency with SQL, Python, and Spark.
- Solid understanding of data warehousing concepts.
- Experience with ETL tools and frameworks.
- Familiarity with cloud platforms such as Azure, AWS, or Google Cloud.
- Excellent problem-solving and analytical skills.
- Ability to work collaboratively in a diverse team environment.
- Experience with data visualization tools such as Power BI or Tableau.
- Strong communication skills with the ability to convey technical concepts to non-technical stakeholders.
- Knowledge of data governance and data quality best practices.
- Hands-on experience with big data technologies and frameworks.
- A relevant certification in Azure is a plus.
- Ability to adapt to changing technologies and evolving business requirements.
Posted 1 month ago
8.0 - 13.0 years
5 - 10 Lacs
Pune, Chennai, Bengaluru
Work from Office
Location : Bangalore, Chennai, Delhi, Pune
Primary Roles And Responsibilities :
- Develop modern data warehouse solutions using Databricks and the AWS/Azure stack.
- Provide forward-thinking solutions in the data engineering and analytics space.
- Collaborate with DW/BI leads to understand new ETL pipeline development requirements.
- Triage issues to find gaps in existing pipelines and fix them.
- Work with the business to understand reporting-layer needs and develop data models to fulfill them.
- Help junior team members resolve issues and technical challenges.
- Drive technical discussions with client architects and team members.
- Orchestrate the data pipelines in a scheduler via Airflow (see the sketch after this listing).
Skills And Qualifications :
- Bachelor's and/or master's degree in computer science, or equivalent experience.
- Must have 6+ years of total IT experience and 3+ years of experience in data warehouse/ETL projects.
- Deep understanding of Star and Snowflake dimensional modelling.
- Strong knowledge of data management principles.
- Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture.
- Hands-on experience in SQL, Python, and Spark (PySpark).
- Must have experience with the AWS/Azure stack.
- Desirable to have experience with both batch and streaming ETL (e.g., Kinesis).
- Experience in building ETL / data warehouse transformation processes.
- Experience with Apache Kafka for streaming / event-based data.
- Experience with other open-source big data products such as Hadoop (incl. Hive, Pig, Impala).
- Experience with open-source non-relational / NoSQL data repositories (incl. MongoDB, Cassandra, Neo4j).
- Experience working with structured and unstructured data, including imaging and geospatial data.
- Experience working in a DevOps environment with tools such as Terraform, CircleCI, and Git.
- Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, and troubleshooting.
- Databricks Certified Data Engineer Associate/Professional certification (desirable).
- Comfortable working in a dynamic, fast-paced, innovative environment with several concurrent projects.
- Experience working in Agile methodology.
- Strong verbal and written communication skills.
- Strong analytical and problem-solving skills with high attention to detail.
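A minimal sketch of the Airflow orchestration the responsibilities mention, assuming the Databricks provider package (apache-airflow-providers-databricks) and Airflow 2.4+; the connection ID, cluster spec, and notebook path are hypothetical:

```python
# Sketch of an Airflow DAG triggering a Databricks notebook run.
# Connection ID, cluster spec, and notebook path are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.databricks.operators.databricks import DatabricksSubmitRunOperator

with DAG(
    dag_id="daily_dw_load",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow 2.4+ keyword; older versions use schedule_interval
    catchup=False,
) as dag:
    run_transform = DatabricksSubmitRunOperator(
        task_id="run_transform",
        databricks_conn_id="databricks_default",
        new_cluster={
            "spark_version": "13.3.x-scala2.12",
            "node_type_id": "i3.xlarge",  # AWS node type; use Azure VM SKUs on Azure
            "num_workers": 2,
        },
        notebook_task={"notebook_path": "/Repos/etl/daily_transform"},
    )
```

When the job is already defined in Databricks Jobs, DatabricksRunNowOperator is the usual alternative to submitting an ad-hoc run.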
Posted 1 month ago
2.0 - 4.0 years
2 - 7 Lacs
Chennai, Bengaluru
Work from Office
Role & responsibilities:
- Provision and manage Azure resources such as Virtual Machines, Storage Accounts, networking components, Databricks workspaces, and Azure SQL Managed Instance (a brief sketch follows this listing).
- Deploy and configure Azure Databricks privately, including setting up clusters and managing workspaces, libraries, and permissions.
- Configure and debug private endpoint setups and firewall rules for secure access to Azure Databricks.
- Optimize cluster sizing and autoscaling configurations based on workload characteristics and cost considerations.
- Analyze job run logs and cluster event logs to identify and remediate root causes of failures.
- Troubleshoot and resolve errors and service interruptions across the Azure ecosystem, especially issues related to Databricks, Azure Data Factory, and APIs.
- Monitor health and performance of services using Azure Monitor, Log Analytics, and Application Insights.
- Ensure optimal configuration of Azure networking, including Load Balancer, Application Gateway, VNets, NSGs, firewalls, and ExpressRoute or VPNs if required.
- Implement and maintain RBAC, IAM policies, and resource tagging for access control and cost management.
- Coordinate with engineering and data teams to support infrastructure needs and resolve platform-level issues.
- Maintain proper backup, disaster recovery, and patch management across services.
- Work with Azure DevOps for resource deployment automation and release management.
- Design and implement CI/CD pipelines using GitHub Actions for automated build, test, and deployment workflows across multiple environments.
- Keep documentation updated for architecture, configurations, and operational processes.
- Design, deploy, and manage scalable Kubernetes clusters using Azure Kubernetes Service (AKS).
- Configure node pools, autoscaling, and workload balancing in AKS.
- Implement and maintain AKS cluster upgrade and versioning strategies.
- Integrate AKS with Azure services including Azure Monitor, Azure Key Vault, and Azure Container Registry (ACR).
- Manage network configurations including VNets, subnets, NSGs, and private cluster setups for AKS.
Required Skills & Experience:
- 2-6 years of experience as an Azure Administrator or in a similar role.
- Strong hands-on experience managing: Azure Databricks (clusters, workspaces, permissions); Azure VMs, networking, storage, and backup; AKS, Key Vault, and Private Endpoints; Azure Monitor and diagnostics; Azure Resource Manager (ARM) templates or Bicep.
- Proficient in identifying and resolving Azure connectivity errors and performance issues, especially in Databricks pipelines and integrations.
- Working knowledge of PowerShell, the Azure CLI, and portal-based operations.
- Familiarity with Azure Data Factory, APIM, and SQL MI is a plus.
- Strong troubleshooting and communication skills for working across teams.
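A minimal sketch of day-to-day Azure administration from Python, assuming the azure-identity and azure-mgmt-compute SDK packages; the subscription ID and resource group name are placeholders:

```python
# Sketch: inventory VMs in a resource group and report their provisioning state.
# Subscription ID and resource group are hypothetical placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

credential = DefaultAzureCredential()  # picks up CLI or managed-identity auth
compute = ComputeManagementClient(credential, subscription_id="<subscription-id>")

for vm in compute.virtual_machines.list(resource_group_name="rg-data-platform"):
    print(vm.name, vm.location, vm.provisioning_state)
```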
Posted 1 month ago