
3233 Databricks Jobs - Page 25

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

8.0 years

0 Lacs

Bengaluru, Karnataka, India

Remote

Source: LinkedIn

Role Title: Data Scientist
Location: India
Worker Type: Full-Time Employee (FTE)
Years of Experience: 8+ years
Start Date: Within 2 weeks
Engagement Type: Full-time
Salary Range: Flexible
Remote/Onsite: Hybrid (India-based candidates)

Job Overview: We are looking for an experienced Data Scientist to join our team and contribute to developing AI-driven data conversion tools. You will work closely with engineers and business stakeholders to build intelligent systems for data mapping, validation, and transformation.

Required Skills and Experience:
• Bachelor’s or Master’s in Data Science, Computer Science, AI, or a related field
• Strong programming skills in Python and SQL
• Experience with ML frameworks such as TensorFlow or PyTorch
• Solid understanding of AI-based data mapping, code generation, and validation
• Familiarity with databases such as SQL Server and MongoDB
• Excellent collaboration, problem-solving, and communication skills
• At least 8 years of relevant experience in data science
• Open mindset with a willingness to experiment and learn from failures

Preferred Qualifications:
• Experience in the financial services domain
• Certifications in Data Science or AI/ML
• Background in data wrangling, ETL, or master data management
• Exposure to DevOps tools such as Jira, Confluence, and Bitbucket
• Knowledge of cloud and AI/ML tools such as Azure Synapse, Azure ML, Cognitive Services, and Databricks
• Prior experience delivering AI solutions for data conversion or transformation
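For illustration only: a minimal pandas sketch of the rule-based column mapping and validation that an AI-driven data conversion tool of the kind described above would automate. The column names and rules are hypothetical assumptions, not taken from the posting.

```python
# Minimal sketch, assuming a simple source->target column map; names are illustrative.
import pandas as pd

COLUMN_MAP = {"cust_nm": "customer_name", "acct_no": "account_id"}  # source -> target

def map_and_validate(src: pd.DataFrame) -> pd.DataFrame:
    out = src.rename(columns=COLUMN_MAP)
    # Validation: every mapped target column must exist after renaming.
    missing = [c for c in COLUMN_MAP.values() if c not in out.columns]
    if missing:
        raise ValueError(f"Mapping produced missing target columns: {missing}")
    # Validation: the business key must be populated.
    if out["account_id"].isna().any():
        raise ValueError("Validation failed: null account_id values found")
    return out

df = pd.DataFrame({"cust_nm": ["Asha", "Ravi"], "acct_no": ["A1", "A2"]})
print(map_and_validate(df))
```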

Posted 1 week ago

Apply

0 years

0 Lacs

India

On-site

Source: LinkedIn

Job Title: Automation Engineer – Databricks
Job Type: Full-time, Contractor
Location: Hybrid – Hyderabad | Pune | Delhi

About Us: Our mission at micro1 is to match the most talented people in the world with their dream jobs. If you are looking to be at the forefront of AI innovation and work with some of the fastest-growing companies in Silicon Valley, we invite you to apply for a role. By joining the micro1 community, your resume will become visible to top industry leaders, unlocking access to the best career opportunities on the market.

Job Summary: We are seeking a detail-oriented and innovative Automation Engineer – Databricks to join our customer's team. In this critical role, you will design, develop, and execute automated tests to ensure the quality, reliability, and integrity of data within Databricks environments. If you are passionate about data quality, thrive in collaborative environments, and excel at both written and verbal communication, we'd love to meet you.

Key Responsibilities:
• Design, develop, and maintain robust automated test scripts using Python, Selenium, and SQL to validate data integrity within Databricks environments.
• Execute comprehensive data validation and verification activities to ensure accuracy and consistency across multiple systems, data warehouses, and data lakes.
• Create detailed and effective test plans and test cases based on technical requirements and business specifications.
• Integrate automated tests with CI/CD pipelines to facilitate seamless and efficient testing and deployment processes.
• Work collaboratively with data engineers, developers, and other stakeholders to gather data requirements and achieve comprehensive test coverage.
• Document test cases, results, and identified defects; communicate findings clearly to the team.
• Conduct performance testing to ensure data processing and retrieval meet established benchmarks.
• Provide mentorship and guidance to junior team members, promoting best practices in test automation and data validation.

Required Skills and Qualifications:
• Strong proficiency in Python, Selenium, and SQL for developing test automation solutions.
• Hands-on experience with Databricks, data warehouse, and data lake architectures.
• Proven expertise in automated testing of data pipelines, preferably with tools such as Apache Airflow, dbt test, or similar.
• Proficient in integrating automated tests within CI/CD pipelines on cloud platforms (AWS, Azure preferred).
• Excellent written and verbal communication skills with the ability to translate technical concepts to diverse audiences.
• Bachelor’s degree in Computer Science, Information Technology, or a related discipline.
• Demonstrated problem-solving skills and a collaborative approach to teamwork.

Preferred Qualifications:
• Experience with implementing security and data protection measures in data-driven applications.
• Ability to integrate user-facing elements with server-side logic for seamless data experiences.
• Demonstrated passion for continuous improvement in test automation processes, tools, and methodologies.
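For illustration only: a hedged sketch of the kind of automated data-integrity test this role describes, using pytest and PySpark. It assumes a Databricks/Spark environment; the table name silver.orders and the order_id column are hypothetical.

```python
# Minimal pytest data-quality checks against a Spark/Databricks table (names hypothetical).
import pytest
from pyspark.sql import SparkSession

@pytest.fixture(scope="session")
def spark():
    # On Databricks a session already exists; getOrCreate() reuses it.
    return SparkSession.builder.appName("dq-tests").getOrCreate()

def test_no_null_keys(spark):
    df = spark.table("silver.orders")  # hypothetical table
    assert df.filter(df.order_id.isNull()).count() == 0

def test_no_duplicate_keys(spark):
    df = spark.table("silver.orders")
    # Row count equals distinct-key count when the key is unique.
    assert df.count() == df.select("order_id").distinct().count()
```

Tests like these are typically wired into a CI/CD pipeline so every deployment gates on data integrity.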

Posted 1 week ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Data Analyst 2

We are seeking a highly skilled Technical Data Analyst to join our team and play a key role in building a single source of truth for our high-volume, direct-to-consumer accounting and financial data warehouse. The ideal candidate will have a strong background in data analysis, SQL, and data transformation, with experience in financial data warehousing and reporting. This role will involve working closely with finance and accounting teams to gather requirements, build dashboards, and transform data to support month-end accounting, tax reporting, and financial forecasting. The financial data warehouse is currently built in Snowflake and will be migrated to Databricks. The candidate will be responsible for transitioning reporting and transformation processes to Databricks while ensuring data accuracy and consistency.

Key Responsibilities

Data Analysis & Reporting:
• Build and maintain month-end accounting and tax dashboards using SQL and Snowsight in Snowflake.
• Transition reporting processes to Databricks, creating dashboards and reports to support finance and accounting teams.
• Gather requirements from finance and accounting stakeholders to design and deliver actionable insights.

Data Transformation & Aggregation:
• Develop and implement data transformation pipelines in Databricks to aggregate financial data and create balance sheet look-forward views.
• Ensure data accuracy and consistency during the migration from Snowflake to Databricks.
• Collaborate with the data engineering team to optimize data ingestion and transformation processes.

Data Integration & ERP Collaboration:
• Support the integration of financial data from the data warehouse into NetSuite ERP by ensuring data is properly transformed and validated.
• Work with cross-functional teams to ensure seamless data flow between systems.

Data Ingestion & Tools:
• Understand and work with Fivetran for data ingestion (familiarity is required, not expertise).
• Troubleshoot and resolve data-related issues in collaboration with the data engineering team.

Additional Qualifications
• 3+ years of experience as a Data Analyst or similar role, preferably in a financial or accounting context.
• Strong proficiency in SQL and experience with Snowflake and Databricks.
• Experience building dashboards and reports for financial data (e.g., month-end close, tax reporting, balance sheets).
• Familiarity with Fivetran or similar data ingestion tools.
• Understanding of financial data concepts (e.g., general ledger, journals, balance sheets, income statements).
• Experience with data transformation and aggregation in a cloud-based environment.
• Strong communication skills to collaborate with finance and accounting teams.
• Nice-to-have: Experience with NetSuite ERP or similar financial systems.
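For illustration only: a minimal pandas sketch of a migration reconciliation check of the sort this role implies, comparing row counts and control totals per period between a Snowflake extract and a Databricks extract. Function, key, and column names are hypothetical.

```python
# Hedged sketch: reconcile per-period counts and sums between two warehouse extracts.
import pandas as pd

def reconcile(snowflake_df: pd.DataFrame, databricks_df: pd.DataFrame,
              key: str, amount: str) -> pd.DataFrame:
    a = snowflake_df.groupby(key)[amount].agg(["count", "sum"]).add_prefix("sf_")
    b = databricks_df.groupby(key)[amount].agg(["count", "sum"]).add_prefix("dbx_")
    cmp = a.join(b, how="outer").fillna(0)
    cmp["count_match"] = cmp["sf_count"] == cmp["dbx_count"]
    # Tolerance absorbs floating-point noise in control totals.
    cmp["sum_match"] = (cmp["sf_sum"] - cmp["dbx_sum"]).abs() < 0.01
    return cmp

sf = pd.DataFrame({"period": ["2024-01", "2024-02"], "amount": [100.0, 250.0]})
dbx = pd.DataFrame({"period": ["2024-01", "2024-02"], "amount": [100.0, 250.0]})
print(reconcile(sf, dbx, key="period", amount="amount"))
```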

Posted 1 week ago

Apply

0 years

0 Lacs

Greater Nashik Area

On-site

Source: LinkedIn

Dreaming big is in our DNA. It’s who we are as a company. It’s our culture. It’s our heritage. And more than ever, it’s our future. A future where we’re always looking forward. Always serving up new ways to meet life’s moments. A future where we keep dreaming bigger. We look for people with passion, talent, and curiosity, and provide them with the teammates, resources and opportunities to unleash their full potential. The power we create together – when we combine your strengths with ours – is unstoppable. Are you ready to join a team that dreams as big as you do?

AB InBev GCC was incorporated in 2014 as a strategic partner for Anheuser-Busch InBev. The center leverages the power of data and analytics to drive growth for critical business functions such as operations, finance, people, and technology. The teams are transforming Operations through Tech and Analytics.

Do You Dream Big? We Need You.

Job Title: Data Scientist
Location: Bangalore
Reporting to: Senior Manager Analytics

Purpose of the role
We are looking for a highly skilled and motivated Data Scientist with 3+ years of professional experience to join our dynamic team. The ideal candidate will excel in data analytics, working with complex datasets, and applying machine learning and deep learning techniques to solve real-world problems. If you are passionate about leveraging data to drive insights and innovation, this is the role for you.

Key tasks & accountabilities
• Analyze and interpret complex datasets to uncover actionable insights.
• Design, develop, and implement machine learning and deep learning models using tools and frameworks such as Pandas, Scikit-learn, TensorFlow, Keras, and PyTorch.
• Collaborate with cross-functional teams to understand business requirements and provide data-driven solutions.
• Create and maintain scalable data pipelines and workflows.
• Use statistical techniques to test hypotheses and validate models.
• Optimize machine learning algorithms for efficiency and scalability.
• Communicate insights and model performance to stakeholders via data visualization and presentations.
• Stay up to date with the latest advancements in data science, machine learning, and big data technologies.

Qualifications, Experience, Skills

Level of Educational Attainment Required:
• B.Tech in Computer Science, or a background in Statistics, Economics, or Mathematics.

Previous Work Experience & Skills Required:
• Data Analytics: Proficiency in statistical analysis and deriving insights from data.
• Business Exposure: Experience in building optimization models and marketing mix models.
• Machine Learning & Deep Learning Frameworks: Strong working knowledge of libraries like Pandas, Scikit-learn, TensorFlow, Keras, and PyTorch.
• Programming: Proficiency in Python and experience using GitHub for version control.
• Databases: Expertise in working with structured and unstructured data using databases.
• Cloud Platforms: Hands-on experience with Azure infrastructure for data storage, Azure Databricks for processing, and deployment.
• Data Visualization: Ability to create compelling visualizations using tools like Matplotlib, Seaborn, or Power BI.
• Complex Datasets: Exposure to working with large and intricate datasets in various domains.
• Version Control: Experience using GitHub for version control, collaboration, and managing repositories effectively.

And above all of this, an undying love for beer!

We dream big to create a future with more cheer.
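For illustration only: a minimal scikit-learn sketch of the Pandas/Scikit-learn modeling workflow the role lists. The data here is synthetic; a real engagement would use business datasets and proper evaluation.

```python
# Hedged sketch: a small end-to-end train/evaluate loop on synthetic data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
X = rng.random((200, 4))                     # synthetic features
y = (X[:, 0] + X[:, 1] > 1).astype(int)      # synthetic target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=42)
model = Pipeline([("scale", StandardScaler()), ("clf", LogisticRegression())])
model.fit(X_tr, y_tr)
print(f"holdout accuracy: {model.score(X_te, y_te):.2f}")
```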

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Our Purpose
Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.

Title and Summary: Lead, Data Engineer

Who is Mastercard?
Mastercard is a global technology company in the payments industry. Our mission is to connect and power an inclusive, digital economy that benefits everyone, everywhere by making transactions safe, simple, smart, and accessible. Using secure data and networks, partnerships and passion, our innovations and solutions help individuals, financial institutions, governments, and businesses realize their greatest potential. Our decency quotient, or DQ, drives our culture and everything we do inside and outside of our company. With connections across more than 210 countries and territories, we are building a sustainable world that unlocks priceless possibilities for all.

Overview
The Mastercard Services Technology team is looking for a Lead in Data Engineering to drive our mission to unlock the potential of data assets by consistently innovating, eliminating friction in how we manage, store, and provide access to big data assets, and enforcing standards and principles in the Big Data space, both on public cloud and in on-premises setups. We are looking for a hands-on, passionate Data Engineer who is not only technically strong in PySpark, cloud platforms, and building modern data architectures, but also deeply committed to learning, growing, and lifting others. The person will play a key role in designing and building scalable data solutions, shaping our engineering culture, and mentoring team members. This is a role for builders and collaborators: engineers who love clean data pipelines, cloud-native design, and helping teammates succeed.

Role
• Design and build scalable, cloud-native data platforms using PySpark, Python, and modern data engineering practices.
• Mentor and guide other engineers, sharing knowledge, reviewing code, and fostering a culture of curiosity, growth, and continuous improvement.
• Create robust, maintainable ETL/ELT pipelines that integrate with diverse systems and serve business-critical use cases.
• Lead by example: write high-quality, testable code and participate in architecture and design discussions with a long-term view in mind.
• Decompose complex problems into modular, efficient, and scalable components that align with platform and product goals.
• Champion best practices in data engineering, including testing, version control, documentation, and performance tuning.
• Drive collaboration across teams, working closely with product managers, data scientists, and other engineers to deliver high-impact solutions.
• Support data governance and quality efforts, ensuring data lineage, cataloging, and access management are built into the platform.
• Continuously learn and apply new technologies, frameworks, and tools to improve team productivity and platform reliability.
• Own and optimize cloud infrastructure components related to data engineering workflows, storage, processing, and orchestration.
• Participate in architectural discussions, iteration planning, and feature sizing meetings.
• Adhere to Agile processes and participate actively in agile ceremonies.
• Apply stakeholder management skills.

All About You
• 5+ years of hands-on experience in data engineering with strong PySpark and Python skills.
• Solid experience designing and implementing data models, pipelines, and batch/stream processing systems.
• Proven ability to work with cloud platforms (AWS, Azure, or GCP), especially data-related services like S3, Glue, Data Factory, and Databricks.
• Strong foundation in data modeling, database design, and performance optimization.
• Understanding of modern data architectures (e.g., lakehouse, medallion) and data lifecycle management.
• Comfortable with CI/CD practices, version control (e.g., Git), and automated testing.
• Demonstrated ability to mentor and uplift junior engineers; strong communication and collaboration skills.
• Bachelor’s degree in Computer Science, Engineering, or a related field, or equivalent hands-on experience.
• Comfortable working in Agile/Scrum development environments.
• Curious, adaptable, and driven by problem-solving and continuous improvement.

Good To Have
• Experience integrating heterogeneous systems and building resilient data pipelines across cloud environments.
• Familiarity with orchestration tools (e.g., Airflow, dbt, Step Functions).
• Exposure to data governance tools and practices (e.g., Lake Formation, Purview, or Atlan).
• Experience with containerization and infrastructure automation (e.g., Docker, Terraform).
• A master’s degree, relevant certifications (e.g., AWS Certified Data Analytics, Azure Data Engineer), or demonstrable contributions to open-source/data engineering communities.
• Exposure to machine learning data pipelines or MLOps.

Corporate Security Responsibility
All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization. Therefore, every person working for, or on behalf of, Mastercard is responsible for information security and must:
• Abide by Mastercard’s security policies and practices;
• Ensure the confidentiality and integrity of the information being accessed;
• Report any suspected information security violation or breach; and
• Complete all periodic mandatory security trainings in accordance with Mastercard’s guidelines.

R-251380
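For illustration only: a hedged PySpark sketch of the medallion-style (bronze to silver) pipeline work mentioned above. It assumes an environment where Delta Lake is available (e.g., Databricks); paths, table, and column names are hypothetical.

```python
# Minimal bronze -> silver refinement sketch; names and paths are illustrative.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-demo").getOrCreate()

bronze = spark.read.format("delta").load("/mnt/lake/bronze/transactions")  # raw ingest

silver = (
    bronze
    .dropDuplicates(["txn_id"])                       # idempotent re-processing
    .withColumn("txn_ts", F.to_timestamp("txn_ts"))   # enforce typed columns
    .filter(F.col("amount").isNotNull())              # basic quality gate
)

silver.write.format("delta").mode("overwrite").save("/mnt/lake/silver/transactions")
```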

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Position: Azure Data Engineer
Location: Pune
Mandatory Skills: Azure Databricks, PySpark
Experience: 5 to 9 years
Notice Period: 0 to 30 days / immediate joiner / serving notice period

Must-have experience:
• Strong design and data solutioning skills
• Hands-on PySpark experience with complex transformations and handling large datasets
• Good command of, and hands-on experience in, Python, including the following concepts, packages, and tools: object-oriented and functional programming; NumPy, Pandas, Matplotlib, requests, pytest; Jupyter, PyCharm, and IDLE; Conda and virtual environments
• Working experience with Hive, HBase, or similar

Azure skills:
• Working experience with Azure Data Lake, Azure Data Factory, Azure Databricks, and Azure SQL databases
• Azure DevOps
• Azure AD integration, service principals, pass-through login, etc.
• Networking: VNet, private links, service connections, etc.
• Integrations: Event Grid, Service Bus, etc.

Database skills:
• Experience with at least one of Oracle, Postgres, or SQL Server
• Oracle PL/SQL or T-SQL experience
• Data modelling

Thank you.
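For illustration only: a small PySpark sketch of the "complex transformation" skill the posting asks for, keeping the latest record per key with a window function. The data and column names are illustrative.

```python
# Hedged sketch: window-function deduplication (latest row per account).
from pyspark.sql import SparkSession, Window, functions as F

spark = SparkSession.builder.getOrCreate()

df = spark.createDataFrame(
    [("A1", "2024-01-01", 100.0), ("A1", "2024-02-01", 120.0), ("B2", "2024-01-15", 80.0)],
    ["account_id", "updated_at", "balance"],
)

w = Window.partitionBy("account_id").orderBy(F.col("updated_at").desc())
latest = df.withColumn("rn", F.row_number().over(w)).filter("rn = 1").drop("rn")
latest.show()
```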

Posted 1 week ago

Apply

2.0 - 5.0 years

4 - 7 Lacs

Hyderabad

Work from Office

Source: Naukri

We are seeking an MDM Associate Data Engineer with 2-5 years of experience to support and enhance our enterprise MDM (Master Data Management) platforms using Informatica/Reltio. This role is critical in delivering high-quality master data solutions across the organization, utilizing modern tools like Databricks and AWS to drive insights and ensure data reliability. The ideal candidate will have strong SQL and data profiling skills, and experience working with cross-functional teams in a pharma environment.

To succeed in this role, the candidate must have strong data engineering experience along with MDM knowledge; candidates with only MDM experience are not eligible. The candidate must have data engineering experience with technologies such as SQL, Python, PySpark, Databricks, and AWS, along with knowledge of MDM (Master Data Management).

Roles & Responsibilities:
• Analyze and manage customer master data using Reltio or Informatica MDM solutions.
• Perform advanced SQL queries and data analysis to validate and ensure master data integrity.
• Leverage Python, PySpark, and Databricks for scalable data processing and automation.
• Collaborate with business and data engineering teams for continuous improvement in MDM solutions.
• Implement data stewardship processes and workflows, including approval and DCR mechanisms.
• Utilize AWS cloud services for data storage and compute processes related to MDM.
• Contribute to metadata and data modeling activities.
• Track and manage data issues using tools such as Jira, and document processes in Confluence.
• Apply Life Sciences/Pharma industry context to ensure data standards and compliance.

Basic Qualifications and Experience:
• Master’s degree with 1-3 years of experience in Business, Engineering, IT, or a related field, OR
• Bachelor’s degree with 2-5 years of experience in Business, Engineering, IT, or a related field, OR
• Diploma with 6-8 years of experience in Business, Engineering, IT, or a related field

Functional Skills:

Must-Have Skills:
• Advanced SQL expertise and data wrangling.
• Strong experience in Python and PySpark for data transformation workflows.
• Strong experience with Databricks and AWS architecture.
• Knowledge of MDM, data governance, stewardship, and profiling practices.
• In addition to the above, candidates having experience with Informatica or Reltio MDM platforms will be preferred.

Good-to-Have Skills:
• Experience with IDQ, data modeling, and approval workflows/DCR.
• Background in Life Sciences/Pharma industries.
• Familiarity with project tools like Jira and Confluence.
• Strong grip on data engineering concepts.

Professional Certifications:
• Any ETL certification (e.g., Informatica)
• Any data analysis certification (SQL, Python, Databricks)
• Any cloud certification (AWS or Azure)

Soft Skills:
• Strong analytical abilities to assess and improve master data processes and solutions.
• Excellent verbal and written communication skills, with the ability to convey complex data concepts clearly to technical and non-technical stakeholders.
• Effective problem-solving skills to address data-related issues and implement scalable solutions.
• Ability to work effectively with global, virtual teams.

We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
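For illustration only: a hedged PySpark sketch of a master-data profiling query of the sort used in MDM stewardship, flagging candidate duplicate customer records by a normalized name key. The table and column names are hypothetical; this is generic Spark, not an Informatica or Reltio API.

```python
# Minimal duplicate-candidate profiling sketch; names are illustrative.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
customers = spark.table("mdm.customer_master")  # hypothetical table

dupes = (
    customers
    .withColumn("name_key", F.lower(F.trim(F.regexp_replace(F.col("customer_name"), r"\s+", " "))))
    .groupBy("name_key")
    .agg(F.count("*").alias("records"), F.collect_set("customer_id").alias("ids"))
    .filter("records > 1")   # groups with more than one record are merge candidates
)
dupes.show(truncate=False)
```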

Posted 1 week ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Data Analyst 2

We are seeking a highly skilled Technical Data Analyst to join our team and play a key role in building a single source of truth for our high-volume, direct-to-consumer accounting and financial data warehouse. The ideal candidate will have a strong background in data analysis, SQL, and data transformation, with experience in financial data warehousing and reporting. This role will involve working closely with finance and accounting teams to gather requirements, build dashboards, and transform data to support month-end accounting, tax reporting, and financial forecasting. The financial data warehouse is currently built in Snowflake and will be migrated to Databricks. The candidate will be responsible for transitioning reporting and transformation processes to Databricks while ensuring data accuracy and consistency.

Key Responsibilities

Data Analysis & Reporting:
• Build and maintain month-end accounting and tax dashboards using SQL and Snowsight in Snowflake.
• Transition reporting processes to Databricks, creating dashboards and reports to support finance and accounting teams.
• Gather requirements from finance and accounting stakeholders to design and deliver actionable insights.

Data Transformation & Aggregation:
• Develop and implement data transformation pipelines in Databricks to aggregate financial data and create balance sheet look-forward views.
• Ensure data accuracy and consistency during the migration from Snowflake to Databricks.
• Collaborate with the data engineering team to optimize data ingestion and transformation processes.

Data Integration & ERP Collaboration:
• Support the integration of financial data from the data warehouse into NetSuite ERP by ensuring data is properly transformed and validated.
• Work with cross-functional teams to ensure seamless data flow between systems.

Data Ingestion & Tools:
• Understand and work with Fivetran for data ingestion (familiarity is required, not expertise).
• Troubleshoot and resolve data-related issues in collaboration with the data engineering team.

Additional Qualifications
• 3+ years of experience as a Data Analyst or similar role, preferably in a financial or accounting context.
• Strong proficiency in SQL and experience with Snowflake and Databricks.
• Experience building dashboards and reports for financial data (e.g., month-end close, tax reporting, balance sheets).
• Familiarity with Fivetran or similar data ingestion tools.
• Understanding of financial data concepts (e.g., general ledger, journals, balance sheets, income statements).
• Experience with data transformation and aggregation in a cloud-based environment.
• Strong communication skills to collaborate with finance and accounting teams.
• Nice-to-have: Experience with NetSuite ERP or similar financial systems.

Posted 1 week ago

Apply

0 years

0 Lacs

New Delhi, Delhi, India

Remote

Source: LinkedIn

Company Description
Muoro.io partners with organizations to build dedicated engineering teams and offshore development centers with top talent curated for specific objectives. The company focuses on addressing talent shortages and managing remote technology teams, particularly in emerging technologies.

Role Description
This is a remote contract role for an Azure Data Engineer at Muoro. The Azure Data Engineer will be responsible for designing and implementing data solutions using Azure services, building and maintaining data pipelines, and optimizing data workflows for efficiency.

Qualifications
• Experience with Azure services, including Azure Data Factory, Azure Databricks, and Azure Synapse Analytics
• Proficiency in SQL and NoSQL databases
• Expertise in data modeling and ETL processes
• Strong analytical and problem-solving skills
• Experience with data visualization tools like Power BI or Tableau
• Bachelor's degree in Computer Science, Engineering, or a related field

Posted 1 week ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

We're Hiring: API Developer (AI/ML Deployment & Cloud Integration)
📍 Location: Hyderabad, India (Telangana & Andhra Pradesh candidates only)
💼 Type: Full-Time | Permanent | Offshore
🕛 Working Hours: US shift (night shift)

What You’ll Work On:
🔹 RESTful & GraphQL APIs for AI/ML services
🔹 AWS (API Gateway, Lambda, EKS, CodePipeline)
🔹 Databricks ML tools (Model Serving, Registry, Unity Catalog)
🔹 Deploying batch & streaming ML pipelines
🔹 Collaborating with cross-functional teams

You Bring:
✅ 6+ years in API development
✅ Hands-on experience with AWS & Databricks
✅ Docker + Kubernetes experience
✅ Experience in ML model deployment at scale

Interview Process:
1️⃣ Real-world code task (24 hours)
2️⃣ Technical interview
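For illustration only: a minimal sketch of a REST endpoint fronting a model, of the kind this role describes. FastAPI is used here as one common Python choice, not necessarily this team's stack; the route, schema, and scoring logic are placeholder assumptions.

```python
# Hedged sketch: a placeholder model-scoring endpoint (FastAPI assumed for illustration).
from typing import List

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ScoreRequest(BaseModel):
    features: List[float]

@app.post("/v1/score")
def score(req: ScoreRequest) -> dict:
    # Placeholder "model": in production this would call a model loaded from a
    # registry (e.g., MLflow / Databricks Model Serving) rather than an average.
    prediction = sum(req.features) / max(len(req.features), 1)
    return {"prediction": prediction}
```

Run locally with, for example, `uvicorn app:app --reload`, then POST a JSON body like `{"features": [0.2, 0.8]}` to /v1/score.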

Posted 1 week ago

Apply

7.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Source: LinkedIn

Job Purpose
Lead client calls and guide clients toward optimized, cloud-native architectures, covering the future state of their data platform, strategic recommendations, and Microsoft Fabric integration.

Desired Skills and Experience
• B.E./B.Tech/MCA/MBA in Finance, Information Systems, Computer Science, or a related field
• 7+ years of experience in data and cloud architecture, working with client stakeholders
• Azure data platform expertise: Synapse, Databricks, Azure Data Factory (ADF), Azure SQL (DW/DB), Power BI (PBI)
• Ability to define modernization roadmaps and target architecture
• Strong understanding of data governance best practices for data quality, cataloguing, and lineage
• Proven ability to lead client engagements and present complex findings
• Excellent communication skills, both written and verbal
• Extremely strong organizational and analytical skills with strong attention to detail
• Strong track record of excellent results delivered to internal and external clients
• Able to work independently without the need for close supervision, and collaboratively as part of cross-team efforts
• Experience with delivering projects within an agile environment
• Experience in project management and team management

Key responsibilities include:
• Lead all interviews and workshops to capture current and future needs.
• Direct the technical review of Azure infrastructure (Databricks, Synapse Analytics, Power BI) and critical on-premises systems.
• Produce architecture designs, focusing on refined processing strategies and Microsoft Fabric.
• Understand and refine the data governance roadmap, including data cataloguing, lineage, and quality.
• Lead project deliverables, ensuring actionable and strategic outputs.
• Evaluate and ensure quality of deliverables within project timelines.
• Develop a strong understanding of equity market domain knowledge.
• Collaborate with domain experts and business stakeholders to understand business rules and logic.
• Ensure effective, efficient, and continuous communication (written and verbal) with global stakeholders.
• Independently troubleshoot difficult and complex issues in dev, test, UAT, and production environments.
• Take responsibility for end-to-end delivery of projects, coordination between the client and internal offshore teams, and management of client queries.
• Demonstrate high attention to detail, work in a dynamic environment while maintaining high quality standards, and bring a natural aptitude for developing good internal working relationships and a flexible work ethic.
• Be responsible for quality checks and adherence to the agreed Service Level Agreement (SLA) / Turn Around Time (TAT).

Posted 1 week ago

Apply

6.0 - 8.0 years

8 - 12 Lacs

Hyderabad

Work from Office

Source: Naukri

What you will do
In this vital role, we are looking for a creative and technically skilled Senior Software Engineer / AI Engineer – Search to help design and build cutting-edge AI-powered search solutions for the pharmaceutical industry. You'll develop intelligent systems that surface the most relevant insights from clinical trials, scientific literature, regulatory documents, and internal knowledge assets. Your work will empower researchers, clinicians, and decision-makers with faster, smarter access to the right information.

• Design and implement search algorithms using NLP, machine learning, semantic understanding, and deep learning models
• Build and fine-tune models for information retrieval, query expansion, document ranking, summarization, and Q&A systems
• Support integration of LLMs (e.g., GPT, BERT, BioBERT) for semantic and generative search
• Train and evaluate custom models for biomedical named entity recognition, relevance ranking, and similarity search
• Build and deploy vector-based search systems using embeddings and vector databases
• Work closely with platform engineers to integrate AI models into scalable cloud-based infrastructures (AWS, Azure, GCP)
• Package and deploy search services using containerization (Docker, Kubernetes) and modern MLOps pipelines
• Preprocess and structure unstructured content such as clinical trial reports, research articles, and regulatory documents
• Apply knowledge graphs, taxonomies, and ontologies (e.g., MeSH, UMLS, SNOMED) to enhance search results
• Build and deploy recommendation system models, utilize AI/ML infrastructure, and contribute to model optimization and data processing
• Apply generative AI on search engines, including integrating generative AI capabilities and vision models to enrich content quality and user engagement
• Perform generative AI tasks such as content summarization, deduping, and metadata quality
• Research and develop advanced AI algorithms, including vision models for visual content analysis
• Implement KPI measurement frameworks to evaluate the quality and performance of delivered models, including those utilizing generative AI
• Develop and maintain deep learning models for data quality checks, visual similarity scoring, and content tagging
• Continually research current and emerging technologies and propose changes where needed
• Implement GenAI solutions, utilize ML infrastructure, and contribute to data preparation, optimization, and performance enhancements

Basic Qualifications:
• Degree in computer science & engineering preferred, with 6-8 years of software development experience
• 2-4 years of experience building AI/ML models, ideally in search, NLP, or biomedical domains
• Proficiency in Python and frameworks such as PyTorch, TensorFlow, and Hugging Face Transformers
• Experience with search technologies like Elasticsearch, OpenSearch, or vector search tools
• Solid understanding of NLP techniques: embeddings, transformers, entity recognition, text classification
• Hands-on experience with various AI models, GCP search engines, and GCP cloud services
• Proficient in AI/ML programming with Python and Java, plus crawlers, JavaScript, SQL/NoSQL, Databricks/RDS, data engineering, S3 buckets, and DynamoDB
• Strong problem-solving and analytical skills; ability to learn quickly; excellent communication and interpersonal skills

Preferred Qualifications:
• Experience in AI/ML, Java, and Python
• Experience with FastAPI and GraphQL
• Experience with design patterns, data structures, data modelling, and data algorithms
• Experience with AWS/Azure platforms, building and deploying code
• Experience with PostgreSQL/MongoDB databases, vector databases for large language models, Databricks or RDS, and S3 buckets
• Knowledge of LLMs, generative AI, and their use in enterprise search
• Experience with Google Cloud Search and Google Cloud Storage
• Experience with popular large language models
• Experience with the LangChain or LlamaIndex frameworks for language models
• Experience with prompt engineering and model fine-tuning
• Knowledge of NLP techniques for text analysis and sentiment analysis
• Experience in Agile software development methodologies
• Experience in end-to-end testing as part of test-driven development

Good-to-Have Skills:
• Willingness to work on full-stack applications
• Exposure to MLOps tools like MLflow, Airflow, or SageMaker

Soft Skills:
• Excellent analytical and troubleshooting skills.
• Strong verbal and written communication skills.
• Ability to work effectively with global, remote teams.
• High degree of initiative and self-motivation.
• Ability to manage multiple priorities successfully.
• Team-oriented, with a focus on achieving team goals.
• Strong presentation and public speaking skills.
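For illustration only: a minimal NumPy sketch of the retrieval core behind the vector-based search systems described above, scoring documents by cosine similarity over embeddings. The embeddings here are random stand-ins for real encoder output (e.g., a BioBERT-style model); dimensions and names are assumptions.

```python
# Hedged sketch: cosine-similarity top-k retrieval over an embedding matrix.
import numpy as np

def cosine_top_k(query: np.ndarray, docs: np.ndarray, k: int = 3) -> np.ndarray:
    q = query / np.linalg.norm(query)
    d = docs / np.linalg.norm(docs, axis=1, keepdims=True)
    scores = d @ q                      # cosine similarity of each doc to the query
    return np.argsort(-scores)[:k]      # indices of the k most similar documents

docs = np.random.rand(100, 384)         # stand-in document embeddings
query = np.random.rand(384)             # stand-in query embedding
print(cosine_top_k(query, docs))
```

Production systems typically delegate this step to a vector database or index (the posting mentions Elasticsearch/OpenSearch-style tooling) rather than brute-force NumPy, but the ranking principle is the same.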

Posted 1 week ago

Apply

8.0 - 10.0 years

27 - 32 Lacs

Hyderabad

Work from Office

Source: Naukri

What you will do
In this vital role you will lead the engagement model between Amgen's Technology organization and our global business partners in Commercial Data & Analytics. We seek a technology leader with a passion for innovation and a collaborative working style that partners effectively with business and technology leaders. Are you interested in building a team that consistently delivers business value in an agile model using technologies such as AWS, Databricks, Airflow, and Tableau? Come join our team!

Roles & Responsibilities:
• Establish an effective engagement model to collaborate with senior leaders on the Sales Insights product team within the Commercial Data & Analytics organization, focused on operations within the United States
• Serve as the technology product owner for an agile product team committed to delivering business value to Commercial stakeholders via data pipeline buildout for sales data
• Lead and mentor junior team members to deliver on the needs of the business
• Interact with business clients and technology management to create technology roadmaps, build cases, and drive DevOps to achieve the roadmaps
• Help mature Agile operating principles through deployment of creative and consistent practices for user story development, robust testing and quality oversight, and focus on user experience
• Connect and translate our vast array of Commercial and other functional data sources, including Sales, Activity, and Digital data, into consumable and user-friendly modes (e.g., dashboards, reports, mobile) for key decision makers such as executives, brand leads, account managers, and field representatives
• Become the lead subject matter expert in reporting technology capabilities by researching and implementing new tools and features, and internal and external methodologies

Basic Qualifications:
• Master’s degree with 8-10 years of experience in Information Systems, OR
• Bachelor’s degree with 10-14 years of experience in Information Systems, OR
• Diploma with 14-18 years of experience in Information Systems

Must-Have Skills:
• Excellent problem-solving skills and a passion for tackling complex challenges in data and analytics with technology
• Experience leading data and analytics teams in a Scaled Agile Framework (SAFe)
• Excellent interpersonal skills, strong attention to detail, and ability to influence based on data and business value
• Ability to build compelling business cases with accurate cost and effort estimations
• Experience writing user requirements and acceptance criteria in agile project management systems such as Jira
• Ability to explain sophisticated technical concepts to non-technical clients
• Strong understanding of sales and incentive compensation value streams

Preferred Qualifications:
• Jira Align and Confluence experience
• Experience with DevOps, Continuous Integration, and Continuous Delivery methodology
• Understanding of software systems strategy, governance, and infrastructure
• Experience managing product features for PI planning and developing product roadmaps and user journeys
• Familiarity with low-code/no-code test automation software
• Technical thought leadership

Soft Skills:
• Able to work effectively across multiple geographies (primarily India, Portugal, and the United States) under minimal supervision
• Demonstrated proficiency in written and verbal communication in the English language
• Skilled in providing oversight and mentoring team members; demonstrated ability to effectively delegate work
• Intellectual curiosity and the ability to question partners across functions
• Ability to prioritize successfully based on business value
• High degree of initiative and self-motivation
• Ability to manage multiple priorities successfully across virtual teams
• Team-oriented, with a focus on achieving team goals
• Strong presentation and public speaking skills

Technical Skills:
• ETL tools: experience with ETL tools such as Databricks, Redshift, or equivalent cloud-based databases
• Big Data, Analytics, Reporting, Data Lake, and Data Integration technologies
• S3 or an equivalent storage system
• AWS (or similar cloud-based platforms)
• BI tools (Tableau and Power BI preferred)

Posted 1 week ago

Apply

3.0 - 6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Position Summary
Senior Analyst – Data Engineer – Deloitte Technology – Deloitte Support Services India Private Limited

Do you thrive on developing creative and innovative insights to solve complex challenges? Want to work on next-generation, cutting-edge products and services that deliver outstanding value and that are global in vision and scope? Work with premier thought leaders in your field? Work for a world-class organization that provides an exceptional career experience with an inclusive and collaborative culture?

Work you’ll do
We are seeking a candidate with extensive experience in designing, delivering, and maintaining cloud solutions, specifically on Microsoft Azure. This candidate should also possess strong cross-discipline communication skills, strong analytical aptitude with critical thinking, a solid understanding of how data translates into reporting and dashboarding capabilities, and the tools and platforms that support them.

Responsibilities

Role specific:
• Design well-structured data models using methodologies (e.g., Kimball or Inmon) that accurately represent the business requirements, ensure data integrity, and minimize redundancies.
• Develop and implement data pipelines to extract, transform, and load (ETL) data from various sources into Azure data services, using Azure Data Factory, Azure Databricks, or other tools to orchestrate data workflows and data movement.
• Build, test, and run data assets tied to tasks and user stories from the Azure DevOps instance of Enterprise Data & Analytics.
• Bring a level of technical expertise in the Big Data space that contributes to the strategic roadmaps for Enterprise Data Architecture, Global Data Cloud Architecture, and Global Business Intelligence Architecture, as well as to the development of the broader Enterprise Data & Analytics Engineering community.
• Actively participate in regularly scheduled contact calls to transparently review the status of in-flight projects and the priorities of backlog projects, and to review adoption of previous deliveries from Enterprise Data & Analytics with the Data Insights team.
• Handle break fixes and participate in a rotational on-call schedule, which includes monitoring of scheduled jobs and ETL pipelines.
• Actively participate in team meetings to transparently review the status of in-flight projects and their progress.
• Follow standard practice and frameworks on each project, from development through testing and productionizing, each within the appropriate environment laid out by Data Architecture.
• Challenge self and others to make an impact that matters, and help the team connect their contributions with the broader purpose.
• Set expectations for the team, align the work based on strengths and competencies, and challenge team members to raise the bar while providing support.
• Apply extensive knowledge of multiple technologies, tools, and processes to improve the design and architecture of the assigned applications.

Knowledge sharing / documentation:
• Contribute to, produce, and maintain processes, procedures, and operational and architectural documentation.
• Change control: ensure compliance with processes and adherence to standards and documentation.
• Work with Deloitte Technology leadership and service teams in reviewing documentation and aligning KPIs to critical steps in our service operations.
• Actively participate in ongoing training within the BI space.

The team
At Deloitte, we’re all about collaboration. And nowhere is this more apparent than among our 2,000-strong internal services team. With our combined specialist skills, we provide all the essential support and advice our client-facing colleagues need, right across the firm. This enables them to focus all of their efforts on delivering the best service possible to their clients. Covering seven distinct areas: Human Resources; Clients & Industries; Finance & Legal; Practice Support Services; Quality & Risk Services; IT Services; and Workplace Services & Real Estate, together we live, breathe and deliver the Deloitte experience.

Location: Hyderabad
Work shift timings: 11 AM to 8 PM

Qualifications
• Bachelor of Engineering / Bachelor of Technology
• 3-6 years of broad-based IT experience with technical knowledge of Microsoft SQL Server, Azure SQL Data Warehouse, Azure Data Lake Store, and Azure Data Factory
• Demonstrated experience with the Apache framework (Spark, Scala, etc.)
• Well versed in SQL and comfortable scripting in Python or a similar language

First-month critical outcomes:
• Absorb strategic projects from the backlog and complete the related Azure SQL Data Warehouse development work.
• Inspect existing run-state SQL Server databases and Azure SQL Data Warehouses and identify optimizations for potential development.
• Deliver new databases assigned as needed.
• Integrate into the on-call rotation (first 90 days).
• Contribute to legacy content and architecture migration to the data lake (first 90 days).
• Deliver the first two data ingestion pipelines, including ingestion, QA, and automation using Azure Big Data tools (first 90 days).
• Document all work following standard documentation practices set forth by Data Governance (first 90 days).

How you’ll grow
At Deloitte, we’ve invested a great deal to create a rich environment in which our professionals can grow. We want all our people to develop in their own way, playing to their own strengths as they hone their leadership skills. And, as a part of our efforts, we provide our professionals with a variety of learning and networking opportunities, including exposure to leaders, sponsors, coaches, and challenging assignments, to help accelerate their careers along the way. No two people learn in exactly the same way. So, we provide a range of resources, including live classrooms, team-based learning, and eLearning. DU: The Leadership Center in India, our state-of-the-art, world-class learning center in the Hyderabad offices, is an extension of the Deloitte University (DU) in Westlake, Texas, and represents a tangible symbol of our commitment to our people’s growth and development. Explore DU: The Leadership Center in India.

Benefits
At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.

Deloitte’s culture
Our positive and supportive culture encourages our people to do their best work every day. We celebrate individuals by recognizing their uniqueness and offering them the flexibility to make daily choices that can help them to be healthy, centered, confident, and aware. We offer well-being programs and are continuously looking for new ways to maintain a culture that is inclusive, invites authenticity, leverages our diversity, and where our people excel and lead healthy, happy lives. Learn more about Life at Deloitte.

Corporate citizenship
Deloitte is led by a purpose: to make an impact that matters. This purpose defines who we are and extends to relationships with our clients, our people, and our communities. We believe that business has the power to inspire and transform. We focus on education, giving, skill-based volunteerism, and leadership to help drive positive social impact in our communities. Learn more about Deloitte’s impact on the world.

#EAG-Technology

Recruiting tips
From developing a stand-out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Our purpose
Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Professional development
From entry-level employees to senior leaders, we believe there’s always room to learn. We offer opportunities to build new skills, take on leadership opportunities, and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career.

Requisition code: 304653

Posted 1 week ago

Apply

40.0 years

0 Lacs

Hyderabad

On-site

Source: Glassdoor

India - Hyderabad
JOB ID: R-216330
ADDITIONAL LOCATIONS: India - Hyderabad
WORK LOCATION TYPE: On Site
DATE POSTED: Jun. 12, 2025
CATEGORY: Engineering

ABOUT AMGEN
Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what’s known today.

ABOUT THE ROLE

Role Description:
We are seeking a seasoned and passionate Principal Architect (Enterprise Architect – Data Platform Engineering) in our Data Architecture & Engineering group to drive the architecture, development, and implementation of our strategy spanning the Data Fabric, Data Management, and Data Analytics platform stack. The ideal candidate possesses deep technical expertise and understanding of the data and analytics landscape, current tools and technology trends, and data engineering principles, coupled with strong leadership and data-driven problem-solving skills. As a Principal Architect, you will play a crucial role in building the strategy and driving the implementation of best practices across data and analytics platforms.

Roles & Responsibilities:
• Be passionate about data, content, and AI technologies, with the ability to quickly evaluate and assess new technology and market trends, keeping enterprise architecture in mind
• Drive the strategy and implementation of enterprise data platform and technical roadmaps that align with the Amgen data strategy
• Maintain the pulse of current market trends in the data & AI space and be able to quickly perform hands-on experimentation and evaluations
• Provide expert guidance and influence management and peers from functional groups with an enterprise mindset and goals
• Take responsibility for the design, development, optimization, delivery, and support of the Enterprise Data platform on AWS and Databricks architecture
• Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions
• Advise and support application teams (product managers, architects, business analysts, and developers) on tools, technology, and methodology related to the design and development of applications that handle large data volumes and a variety of data types
• Collaborate and align with EARB, Cloud Infrastructure, Security, and other technology leaders on Enterprise Data Architecture changes
• Ensure scalability, reliability, and performance of data platforms by implementing best practices for architecture, cloud resource optimization, and system tuning
• Collaborate with RunOps engineers to continuously increase our ability to push changes into production with as little manual overhead and as much speed as possible

Basic Qualifications and Experience:
• Master’s degree with 8-10 years of experience in Computer Science, IT, or a related field, OR
• Bachelor’s degree with 10-14 years of experience in Computer Science, IT, or a related field

Functional Skills:

Must-Have Skills:
• 8+ years of experience in data architecture and engineering or related roles, with hands-on experience building enterprise data platforms in a cloud environment (AWS, Azure, GCP)
• 5+ years of experience leading enterprise-scale data platforms and solutions
• Expert-level proficiency with Databricks and experience in optimizing data pipelines and workflows in Databricks environments
• Deep understanding of distributed computing, data architecture, and performance optimization in cloud-based environments
• Enterprise-architecture certifications such as TOGAF are a plus
• Big Tech or Big Consulting experience is highly preferred
• Solid knowledge of data security, governance, and compliance practices in cloud environments
• Exceptional communication skills to engage and influence architects and leaders in the organization

Good-to-Have Skills:
• Experience with Gen AI tools in Databricks
• Experience with unstructured data architecture and pipelines
• Experience working with agile development methodologies such as Scaled Agile

Professional Certifications:
• AWS Certified Data Engineer preferred
• Databricks certification preferred

Soft Skills:
• Excellent analytical and troubleshooting skills
• Strong verbal and written communication skills
• Ability to work effectively with global, virtual teams
• High degree of initiative and self-motivation
• Ability to manage multiple priorities successfully
• Team-oriented, with a focus on achieving team goals
• Strong presentation and public speaking skills

EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.

Posted 1 week ago

Apply

40.0 years

0 Lacs

Hyderabad

On-site

Source: Glassdoor

India - Hyderabad JOB ID: R-216678 ADDITIONAL LOCATIONS: India - Hyderabad WORK LOCATION TYPE: On Site DATE POSTED: Jun. 12, 2025 CATEGORY: Engineering ABOUT AMGEN Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting-edge of innovation, using technology and human genetic data to push beyond what’s known today. ABOUT THE ROLE Role Description: We are seeking a seasoned Principal Architect – Solutions to drive the architecture, development and implementation of data solutions to Amgen functional groups. The ideal candidate able to work in large scale Data Analytic initiatives, engage and work along with Business, Program Management, Data Engineering and Analytic Engineering teams. Be champions of enterprise data analytic strategy, data architecture blueprints and architectural guidelines. As a Principal Architect, you will play a crucial role in designing, building, and optimizing data solutions to Amgen functional groups such as R&D, Operations and GCO. Roles & Responsibilities: Implement and manage large scale data analytic solutions to Amgen functional groups that align with the Amgen Data strategy Collaborate with Business, Program Management, Data Engineering and Analytic Engineering teams to deliver data solutions Responsible for design, develop, optimize, delivery and support of Data solutions on AWS and Databricks architecture Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions. Provide expert guidance and mentorship to the team members, fostering a culture of innovation and best practices. Be passionate and hands-on to quickly experiment with new data related technologies Define guidelines, standards, strategies, security policies and change management policies to support the Enterprise Data platform. Collaborate and align with EARB, Cloud Infrastructure, Security and other technology leaders on Enterprise Data Architecture changes Work with different project and application groups to drive growth of the Enterprise Data Platform using effective written/verbal communication skills, and lead demos at different roadmap sessions Overall management of the Enterprise Data Platform on AWS environment to ensure that the service delivery is cost effective and business SLAs around uptime, performance and capacity are met Ensure scalability, reliability, and performance of data platforms by implementing best practices for architecture, cloud resource optimization, and system tuning. Collaboration with RunOps engineers to continuously increase our ability to push changes into production with as little manual overhead and as much speed as possible. Maintain knowledge of market trends and developments in data integration, data management and analytics software/tools Work as part of team in a SAFe Agile/Scrum model Basic Qualifications and Experience: Master’s degree with 12 - 15 years of experience in Computer Science, IT or related field OR Bachelor’s degree with 14 - 17 years of experience in Computer Science, IT or related field Functional Skills: Must-Have Skills: 8+ years of hands-on experience in Data integrations, Data Management and BI technology stack. 
Strong experience with one or more data management tools such as AWS data lake, Snowflake or Azure Data Fabric Expert-level proficiency with Databricks and experience in optimizing data pipelines and workflows in Databricks environments. Strong experience with Python, PySpark, and SQL for building scalable data workflows and pipelines. Experience with Apache Spark, Delta Lake, and other relevant technologies for large-scale data processing. Familiarity with BI tools including Tableau and Power BI Demonstrated ability to enhance cost-efficiency, scalability, and performance for data solutions Strong analytical and problem-solving skills to address complex data challenges Good-to-Have Skills: Experience in life sciences, tech, or consultative solution architecture roles is preferred Experience working with agile development methodologies such as Scaled Agile. Professional Certifications: AWS Certified Data Engineer (preferred); Databricks certification (preferred). Soft Skills: Excellent analytical and troubleshooting skills. Strong verbal and written communication skills. Ability to work effectively with global, virtual teams. High degree of initiative and self-motivation. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals. Strong presentation and public speaking skills. EQUAL OPPORTUNITY STATEMENT Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
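To make the must-have stack above concrete, here is a minimal sketch of the kind of optimized Databricks pipeline such a role involves; the table and column names (raw.sales_events, curated.daily_sales) are hypothetical, and a Databricks runtime is assumed. This is an illustration, not Amgen's actual pipeline.

    # Minimal sketch of an optimized Databricks pipeline (PySpark + Delta Lake).
    # Table and column names are hypothetical; `spark` is provided by the runtime.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()

    # Read a raw Delta table and aggregate it to a daily grain.
    raw = spark.read.table("raw.sales_events")
    daily = (
        raw.where(F.col("amount") > 0)                      # drop bad records
           .withColumn("sale_date", F.to_date("event_ts"))
           .groupBy("sale_date", "region")
           .agg(F.sum("amount").alias("total_amount"),
                F.countDistinct("order_id").alias("orders"))
    )

    # Partition by date so downstream reads can prune files.
    (daily.write.format("delta")
          .mode("overwrite")
          .partitionBy("sale_date")
          .saveAsTable("curated.daily_sales"))

    # Compact small files and co-locate rows for faster lookups.
    spark.sql("OPTIMIZE curated.daily_sales ZORDER BY (region)")

Partitioning plus OPTIMIZE/ZORDER is one common lever for the cost-efficiency and performance tuning this listing emphasizes.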

Posted 1 week ago

Apply

8.0 years

0 Lacs

Hyderābād

On-site


India - Hyderabad JOB ID: R-216648 ADDITIONAL LOCATIONS: India - Hyderabad WORK LOCATION TYPE: On Site DATE POSTED: Jun. 12, 2025 CATEGORY: Engineering Join Amgen’s Mission of Serving Patients At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas – Oncology, Inflammation, General Medicine, and Rare Disease – we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career. What you will do Let’s do this. Let’s change the world. In this vital role you will manage and oversee the development of robust data architectures, frameworks, and data product solutions, while mentoring and guiding a small team of data engineers. You will be responsible for leading the development, implementation, and management of enterprise-level data engineering frameworks and solutions that support the organization's data-driven strategic initiatives. You will continuously strive for innovation in the technologies and practices used for data engineering and build enterprise-scale data frameworks and expert data engineers. This role will closely collaborate with counterparts in the US and EU. You will collaborate with cross-functional teams, including platform, functional IT, and business stakeholders, to ensure that the solutions that are built align with business goals and are scalable, secure, and efficient. Roles & Responsibilities: Architect and implement scalable, high-performance modern data engineering solutions (applications) that include data analysis, data ingestion, storage, data transformation (data pipelines), and analytics. Evaluate new trends in the data engineering area and build rapid prototypes Build data solution architectures and frameworks to accelerate data engineering processes Build frameworks to improve re-usability and reduce the development time and cost of data management & governance Integrate AI into data engineering practices to bring efficiency through automation Build best practices in data engineering capability and ensure their adoption across the product teams Build and nurture strong relationships with stakeholders, emphasizing value-focused engagement and partnership to align data initiatives with broader business goals. Lead and motivate a high-performing data engineering team to deliver exceptional results. Provide expert guidance and mentorship to the data engineering team, fostering a culture of innovation and best practices. Collaborate with counterparts in the US and EU and work with business functions, functional IT teams, and others to understand their data needs and ensure the solutions meet the requirements. Engage with business stakeholders to understand their needs and priorities, ensuring that the data and analytics solutions built deliver real value and meet business objectives.
Drive adoption of the data and analytics solutions by partnering with business stakeholders and functional IT teams in rolling out change management, training, communications, etc. Talent Growth & People Leadership: Lead, mentor, and manage a high-performing team of engineers, fostering an environment that encourages learning, collaboration, and innovation. Focus on nurturing future leaders and providing growth opportunities through coaching, training, and mentorship. Recruitment & Team Expansion: Develop a comprehensive talent strategy that includes recruitment, retention, onboarding, and career development, and build a diverse and inclusive team that drives innovation, aligns with Amgen's culture and values, and delivers business priorities Organizational Leadership: Work closely with senior leaders within the function and across the Amgen India site to align engineering goals with broader organizational objectives and demonstrate leadership by contributing to strategic discussions What we expect of you We are all different, yet we all use our unique contributions to serve patients. The professional we seek has these qualifications. Basic Qualifications: Master’s degree and 8 to 10 years of experience in computer science and engineering preferred (other engineering fields will be considered) OR Bachelor’s degree and 12 to 14 years of experience in computer science and engineering preferred (other engineering fields will be considered) OR Diploma and 16 to 18 years of experience in computer science and engineering preferred (other engineering fields will be considered) 10+ years of experience in data engineering, working in COE development or product building 5+ years of experience in leading enterprise-scale data engineering solution development. Experience building enterprise-scale data lake and data fabric solutions on cloud, leveraging modern approaches like Data Mesh Demonstrated proficiency in leveraging cloud platforms (AWS, Azure, GCP) for data engineering solutions. Strong understanding of cloud architecture principles and cost optimization strategies. Hands-on experience using Databricks, Snowflake, PySpark, Python, SQL Proven ability to lead and develop high-performing data engineering teams. Strong problem-solving, analytical, and critical thinking skills to address complex data challenges. Preferred Qualifications: Experience in integrating AI with data engineering and building AI-ready data lakes Prior experience in data modeling, especially star-schema modeling concepts. Familiarity with ontologies, information modeling, and graph databases. Experience working with agile development methodologies such as Scaled Agile. Experience with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps. Education and Professional Certifications: SAFe for Teams certification (preferred); Databricks certifications; AWS cloud certification Soft Skills: Excellent analytical and troubleshooting skills. Strong verbal and written communication skills. Ability to work effectively with global, virtual teams. High degree of initiative and self-motivation. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals. Strong presentation and public speaking skills. What you can expect of us As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being.
From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. For a career that defies imagination: objects in your future are closer than they appear. Join us. careers.amgen.com As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
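Illustrating the framework-building responsibility named in this listing, the sketch below shows one way a reusable ingestion helper might look on Databricks; the paths and table names are assumptions, not Amgen's actual framework.

    # Sketch of a reusable bronze-layer ingestion helper: one function, many sources.
    # All paths and table names are illustrative.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()

    def ingest_to_bronze(source_path: str, fmt: str, target_table: str) -> None:
        """Land a raw source into a bronze Delta table with audit columns."""
        df = (spark.read.format(fmt)
                  .option("header", "true")          # ignored by non-CSV readers
                  .option("inferSchema", "true")
                  .load(source_path))
        (df.withColumn("_ingested_at", F.current_timestamp())
           .withColumn("_source_file", F.input_file_name())
           .write.format("delta").mode("append").saveAsTable(target_table))

    # Each new source is one call, which is the re-usability a framework buys:
    ingest_to_bronze("/mnt/landing/orders/", "csv", "bronze.orders")
    ingest_to_bronze("/mnt/landing/shipments/", "json", "bronze.shipments")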

Posted 1 week ago

Apply

6.0 - 10.0 years

2 - 7 Lacs

Hyderābād

On-site


Country/Region: IN Requisition ID: 26435 Work Model: Position Type: Salary Range: Location: INDIA - HYDERABAD - BIRLASOFT OFFICE Title: Technical Lead-Data Engg Description: Area(s) of responsibility Job Description: Years of experience: 6 to 10. Experience performing design, development, and deployment using Azure services (Data Factory, Databricks, PySpark, SQL). Develop and maintain scalable data pipelines and build new data source integrations to support increasing data volume and complexity. Experience in creating Technical Specification Design and Application Interface Design documents. Develop modern data warehouse solutions using the Azure stack (Azure Data Lake, Azure Databricks) and PySpark. Develop batch processing and integration solutions and process structured and non-structured data. Demonstrated in-depth skills with Azure Databricks, PySpark, and SQL. Collaborate and engage with the BI & analytics and business teams. Minimum 2 years of project experience in Azure Databricks.
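As a hedged illustration of the batch processing of structured and non-structured data described above, here is a short PySpark sketch that flattens nested JSON into a Delta table; the mount path and field names are assumptions.

    # Flattening semi-structured JSON into a queryable Delta table (names illustrative).
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()

    # Read nested JSON landed in the data lake (hypothetical ADLS mount path).
    events = spark.read.json("/mnt/datalake/raw/events/")

    # Promote nested struct fields to columns and explode an array of items.
    flat = (events
            .select("event_id",
                    F.col("user.id").alias("user_id"),   # nested struct field
                    F.explode("items").alias("item"))    # one row per array element
            .select("event_id", "user_id",
                    F.col("item.sku").alias("sku"),
                    F.col("item.qty").cast("int").alias("qty")))

    flat.write.format("delta").mode("overwrite").saveAsTable("staging.events_flat")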

Posted 1 week ago

Apply

5.0 years

0 Lacs

India

On-site


Description VP, AI and Engineering Syneos Health® is a leading fully integrated biopharmaceutical solutions organization built to accelerate customer success. We translate unique clinical, medical affairs and commercial insights into outcomes to address modern market realities. Every day we perform better because of how we work together, as one team, each the best at what we do. We bring talented experts together across a wide range of business-critical services that support our business. Every role within Corporate is vital to furthering our vision of Shortening the Distance from Lab to Life®. Discover what our 29,000 employees across 110 countries already know. WORK HERE MATTERS EVERYWHERE Why Syneos Health We are passionate about developing our people, through career development and progression; supportive and engaged line management; technical and therapeutic area training; peer recognition and total rewards program. We are committed to our Total Self culture – where you can authentically be yourself. Our Total Self culture is what unites us globally, and we are dedicated to taking care of our people. We are continuously building the company we all want to work for and our customers want to work with. Why? Because when we bring together diversity of thoughts, backgrounds, cultures, and perspectives – we’re able to create a place where everyone feels like they belong. Job Responsibilities Job Summary: This role is responsible for leading the AI, software, data, and quality engineering organization. Accountable for delivering best-in-class AI, data, and applications used by thousands of users worldwide. The engineering organization will partner with digital and technology product teams to create solutions and value for customers, solutions that help accelerate medicines and vaccines development and enable patient access. The role drives and delivers AI-infused applications at scale that will support Syneos Health’s efficient growth. The engineering organization also develops technology products that supercharge internal capabilities across corporate functions. Builds and develops the engineering organization based in India and leads a network of engineering nodes in other locations around the world. Participates in customer meetings, conferences, and technology incubators with a focus on building relationships, tracking trends, and engaging with peers in the industry. As a senior digital and technology leader in India, the role is responsible for overseeing daily operations (technology and people) for the delivery team, technology development, and strategic growth of the company’s regional offices. This leadership role focuses on driving the implementation of global technology initiatives, ensuring operational alignment with global standards, and fostering a high-performance culture within the team. Plays a key role in talent management, project delivery, stakeholder communication, and driving innovation to support the company's global goals. Core Responsibilities: Develop a Best-in-Class, Cost-Effective Engineering Organization: Attract, develop, and retain engineering talent across all disciplines, including AI, software, data analytics, quality, testing, and agile facilitation. Manage and scale a team of technology professionals, ensuring the right mix of talent to meet business demands. Continuously upskill the organization on new technologies in alignment with enterprise technology decisions.
Manage strategic 3rd parties to access engineering talent and source capacity when internal capabilities are fully utilized. Assess the maturity of the organization, set a path to implement best practices and standards for engineering disciplines, and lead communities of practice. Oversee and Manage High-Performing Technology Delivery: Partner with digital and tech product leaders to understand priorities, manage demand, provide work estimates, and maintain product roadmaps. Staff engineering resources on product and project teams to deliver prioritized initiatives, ensuring utilization of the organization. Deliver coding, configuration, and testing in product-centric and agile ways and measure performance quarterly across value, flow, and quality metrics. Where needed, staff and deliver projects. Drive DevOps, DataOps, and MLOps platforms and engineering productivity (AI automation, automated code and test) in partnership with Core Technology. Regional Tech Leadership: Lead and manage the day-to-day operations of the site-based team, ensuring alignment with the global strategic objectives. Provide site leadership across technology projects end to end, including software development, product delivery, infrastructure management, and IT services. Monitor industry trends, emerging technologies, and best practices to ensure the site remains competitive and innovative. Foster a culture of collaboration, innovation, and continuous improvement within the site. Build, mentor, and inspire a high-performing team, ensuring the growth and development of employees. Drive employee engagement and retention initiatives to ensure a motivated and committed workforce. Partner with HR and Talent Acquisition in support of these initiatives for an engaged and sought-after employee experience. Stakeholder Communication: Maintain strong relationships with key stakeholders in the CDIO LT, including senior leadership, product, and engineering teams. Provide regular updates on performance, delivery progress, risks, and opportunities to CDIO executives. Act as a cultural ambassador, ensuring that the team’s work aligns with the company’s global vision and values. Risk Management and Compliance: Ensure the organization complies with relevant legal, regulatory, and company policies. Identify risks related to technology, operations, and talent management, and implement mitigation strategies. Innovation and Continuous Improvement: Promote and drive innovation within the team, encouraging the use of new technologies and approaches. Continuously assess and improve site processes to enhance efficiency, reduce costs, and drive value. Qualifications: Experience in technology or operations leadership roles, with experience managing a tech team in a region or a similar market. Experience leading a pharma services technology organization (CRO, professional services, biotech/biopharma, and healthcare technology) focused on life sciences. Proven track record in leading cross-functional teams and delivering complex end-to-end technology projects at a global scale. Experience leveraging data, analytics, and AI to develop new products and services. Ability to transform legacy technology and digital teams into a highly efficient, disciplined, delivery-oriented organization with strong alignment to business strategy. Experience managing both technical and operational aspects of a global business, particularly with teams in different geographic locations. P&L experience is a plus.
Proven ability to lead a high-performing team and attract talent, sustaining a continuous cycle of diversity of thought tied to employee growth and the achievement of business objectives. Experience leading a technology organization providing both product development and SaaS software solutions across a broad range of technologies such as Python, Java, Apex, Databricks, Workday, Oracle Fusion, ServiceNow, Salesforce, Veeva CRM, and Veeva Vault Clinical, as well as cloud and analytical services provided by Microsoft Azure, AWS, and Oracle OCI, is preferred. Strong leadership and team-building abilities. Excellent communication and interpersonal skills, with the ability to effectively interact with senior management, technical teams, and global stakeholders. Deep understanding of modern software, AI, and data development methodologies, including Agile methodologies, DevOps, DataOps, and MLOps. Proficiency in technology management, project delivery, and risk mitigation. Strong business acumen, including the ability to manage budgets, resources, and operational performance. Strong problem-solving skills and a proactive approach to resolving challenges. Ability to work in a fast-paced, dynamic environment. Experience in a global or multi-site organization is highly desirable. Get to know Syneos Health Over the past 5 years, we have worked with 94% of all novel FDA-approved drugs, 95% of EMA-authorized products, and over 200 studies across 73,000 sites and 675,000+ trial patients. No matter what your role is, you’ll take the initiative and challenge the status quo with us in a highly competitive and ever-changing environment. Learn more about Syneos Health. http://www.syneoshealth.com Additional Information Tasks, duties, and responsibilities as listed in this job description are not exhaustive. The Company, at its sole discretion and with no prior notice, may assign other tasks, duties, and job responsibilities. Equivalent experience, skills, and/or education will also be considered, so qualifications of incumbents may differ from those listed in the Job Description. The Company, at its sole discretion, will determine what constitutes as equivalent to the qualifications described above. Further, nothing contained herein should be construed to create an employment contract. Occasionally, required skills/experiences for jobs are expressed in brief terms. Any language contained herein is intended to fully comply with all obligations imposed by the legislation of each country in which it operates, including the implementation of the EU Equality Directive, in relation to the recruitment and employment of its employees. The Company is committed to compliance with the Americans with Disabilities Act, including the provision of reasonable accommodations, when appropriate, to assist employees or applicants to perform the essential functions of the job.

Posted 1 week ago

Apply

40.0 years

0 Lacs

Hyderābād

On-site


India - Hyderabad JOB ID: R-213468 ADDITIONAL LOCATIONS: India - Hyderabad WORK LOCATION TYPE: On Site DATE POSTED: Jun. 12, 2025 CATEGORY: Information Systems ABOUT AMGEN Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting-edge of innovation, using technology and human genetic data to push beyond what’s known today. ABOUT THE ROLE Role Description: We are seeking an MDM Associate Data Engineer with 2–5 years of experience to support and enhance our enterprise MDM (Master Data Management) platforms using Informatica/Reltio. This role is critical in delivering high-quality master data solutions across the organization, utilizing modern tools like Databricks and AWS to drive insights and ensure data reliability. The ideal candidate will have strong SQL and data profiling skills, and experience working with cross-functional teams in a pharma environment. To succeed in this role, the candidate must have strong data engineering experience along with MDM knowledge; hence candidates with only MDM experience are not eligible for this role. The candidate must have data engineering experience on technologies like SQL, Python, PySpark, Databricks, and AWS, along with knowledge of MDM (Master Data Management). Roles & Responsibilities: Analyze and manage customer master data using Reltio or Informatica MDM solutions. Perform advanced SQL queries and data analysis to validate and ensure master data integrity. Leverage Python, PySpark, and Databricks for scalable data processing and automation. Collaborate with business and data engineering teams for continuous improvement in MDM solutions. Implement data stewardship processes and workflows, including approval and DCR mechanisms. Utilize AWS cloud services for data storage and compute processes related to MDM. Contribute to metadata and data modeling activities. Track and manage data issues using tools such as JIRA and document processes in Confluence. Apply Life Sciences/Pharma industry context to ensure data standards and compliance. Basic Qualifications and Experience: Master’s degree with 1 - 3 years of experience in Business, Engineering, IT or related field OR Bachelor’s degree with 2 - 5 years of experience in Business, Engineering, IT or related field OR Diploma with 6 - 8 years of experience in Business, Engineering, IT or related field Functional Skills: Must-Have Skills: Advanced SQL expertise and data wrangling. Strong experience in Python and PySpark for data transformation workflows. Strong experience with Databricks and AWS architecture. Must have knowledge of MDM, data governance, stewardship, and profiling practices. In addition to the above, candidates with experience on Informatica or Reltio MDM platforms will be preferred. Good-to-Have Skills: Experience with IDQ, data modeling and approval workflow/DCR. Background in Life Sciences/Pharma industries. Familiarity with project tools like JIRA and Confluence. Strong grip on data engineering concepts. Professional Certifications: Any ETL certification (e.g., Informatica) Any Data Analysis certification (SQL, Python, Databricks) Any cloud certification (AWS or Azure) Soft Skills: Strong analytical abilities to assess and improve master data processes and solutions.
Excellent verbal and written communication skills, with the ability to convey complex data concepts clearly to technical and non-technical stakeholders. Effective problem-solving skills to address data-related issues and implement scalable solutions. Ability to work effectively with global, virtual teams EQUAL OPPORTUNITY STATEMENT Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
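As a sketch of the SQL and data-profiling skills this role calls for, the snippet below computes a completeness profile and duplicate-key counts for a hypothetical customer master table; it illustrates profiling in PySpark, not the Informatica/Reltio workflow itself.

    # Quick profiling of a customer master table: null rates and duplicate keys.
    # Table and column names are hypothetical.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()
    cust = spark.read.table("mdm.customer_master")
    total = cust.count()

    # Null rate per column: a basic completeness profile.
    cust.select([
        (F.sum(F.col(c).isNull().cast("int")) / total).alias(c)
        for c in cust.columns
    ]).show(truncate=False)

    # Duplicate business keys that a stewardship/DCR process would review.
    dupes = cust.groupBy("customer_id").count().where(F.col("count") > 1)
    print(f"{dupes.count()} customer_id values with more than one record")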

Posted 1 week ago

Apply

15.0 years

0 Lacs

Hyderābād

On-site


Project Role : Data Engineer Project Role Description : Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems. Must-have skills : Databricks Unified Data Analytics Platform Good-to-have skills : NA Minimum 5 year(s) of experience is required Educational Qualification : 15 years of full-time education Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand their data needs and provide effective solutions, ensuring that the data infrastructure is robust and scalable to meet the demands of the organization. Roles & Responsibilities: - Expected to be an SME. - Collaborate and manage the team to perform. - Responsible for team decisions. - Engage with multiple teams and contribute on key decisions. - Provide solutions to problems for their immediate team and across multiple teams. - Mentor junior team members to enhance their skills and knowledge in data engineering. - Continuously evaluate and improve data processes to enhance efficiency and effectiveness. Professional & Technical Skills: - Must-Have Skills: Proficiency in Databricks Unified Data Analytics Platform. - Experience with data pipeline development and management. - Strong understanding of ETL processes and data integration techniques. - Familiarity with data quality frameworks and best practices. - Knowledge of cloud data storage solutions and architectures. Additional Information: - The candidate should have a minimum of 5 years of experience in Databricks Unified Data Analytics Platform. - This position is based in Hyderabad. - A 15-year full-time education is required.
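A hedged sketch of the ETL-with-data-quality pattern this role describes: extract, transform, gate on a simple quality check, then load. Table names and the quality threshold are assumptions.

    # ETL step with a minimal data-quality gate before loading (names illustrative).
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()

    # Extract
    orders = spark.read.table("bronze.orders")

    # Transform
    clean = (orders.dropDuplicates(["order_id"])
                   .where(F.col("order_ts").isNotNull()))

    # Quality gate: fail the run rather than load bad data downstream.
    bad = clean.where(F.col("amount") < 0).count()
    if bad > 0:
        raise ValueError(f"Quality check failed: {bad} negative order amounts")

    # Load
    clean.write.format("delta").mode("overwrite").saveAsTable("silver.orders")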

Posted 1 week ago

Apply

5.0 years

4 - 5 Lacs

Hyderābād

On-site


Job Description: At least 5 years of relevant hands-on development experience in an Azure Data Engineering role Proficient in Azure technologies such as ADB, ADF, SQL (capable of writing complex SQL queries), PySpark, Python, Synapse, Delta Tables, and Unity Catalog Hands-on in Python, PySpark, or Spark SQL Hands-on in Azure Analytics and DevOps Taking part in proofs of concept (POCs) and pilot solution preparation Ability to conduct data profiling, cataloguing, and mapping for the technical design and construction of technical data flows Experience in business process mapping of data and analytics solutions Recruitment fraud is a scheme in which fictitious job opportunities are offered to job seekers typically through online services, such as false websites, or through unsolicited emails claiming to be from the company. These emails may request recipients to provide personal information or to make payments as part of their illegitimate recruiting process. DXC does not make offers of employment via social media networks and DXC never asks for any money or payments from applicants at any point in the recruitment process, nor ask a job seeker to purchase IT or other equipment on our behalf. More information on employment scams is available here.
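To illustrate the complex-SQL and Unity Catalog items above, here is a hedged example of a windowed query against a three-level Unity Catalog name (catalog.schema.table, all hypothetical):

    # Windowed SQL against a Unity Catalog table: latest order per customer.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    latest_orders = spark.sql("""
        SELECT *
        FROM (
            SELECT o.*,
                   ROW_NUMBER() OVER (
                       PARTITION BY customer_id
                       ORDER BY order_ts DESC
                   ) AS rn
            FROM main.sales.orders o   -- catalog.schema.table (hypothetical)
        )
        WHERE rn = 1                   -- keep only the most recent order
    """)
    latest_orders.show(5)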

Posted 1 week ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Explore your next opportunity at a Fortune Global 500 organization. Envision innovative possibilities, experience our rewarding culture, and work with talented teams that help you become better every day. We know what it takes to lead UPS into tomorrow—people with a unique combination of skill + passion. If you have the qualities and drive to lead yourself or teams, there are roles ready to cultivate your skills and take you to the next level. Job Description: Job Summary This position provides input and support for full systems life cycle management activities (e.g., analyses, technical requirements, design, coding, testing, implementation of systems and applications software, etc.). He/She performs tasks within planned durations and established deadlines. This position collaborates with teams to ensure effective communication and support the achievement of objectives. He/She provides knowledge, development, maintenance, and support for applications. Responsibilities: Generates application documentation. Contributes to systems analysis and design. Designs and develops moderately complex applications. Contributes to integration builds. Contributes to maintenance and support. Monitors emerging technologies and products. Technology: Python, Azure Databricks Qualifications: Bachelor’s Degree or international equivalent in Computer Science, Information Systems, Mathematics, Statistics, or a related field Employee Type: Permanent UPS is committed to providing a workplace free of discrimination, harassment, and retaliation.

Posted 1 week ago

Apply

2.0 - 3.0 years

5 - 8 Lacs

Cochin

On-site


Support/services analysts Location: Cochin Availability: Immediate Exp: 3-5 yrs Budget: Max 8 LPA The core technical skills we are looking for are: Azure Platform Familiarity Automated Logging Managed Identities and Roles Databricks Platform Familiarity Databricks Jobs Databricks Logging Python - there will be a custom Python UI to interact with ML models Required Skills: 2 to 3 years of relevant experience Willing to work in a product-based organization on post-implementation services/managed services teams Willingness to work in rotational shifts is a must (till 12:30 AM IST maximum at the moment) Strong analytical skills Client-facing, with effective stakeholder management, interpersonal, and communication skills A confident and good communicator, able to engage stakeholders independently when needed Complete internal product certifications as needed; a quick self-learner who can rapidly understand the healthcare domain with an exploring mindset Incident management, unblocking technical problems, request fulfillment Ability to write knowledge articles Should be technically sound enough to handle end-to-end support activities independently Quickly ramp up on the existing product/process and provide solutions/support at speed Participate in project handover activities, understand the BRD and SDD documents, and manage post-production rollout activities independently after the hyper-care period Mail Id: sakthi.sankar@extendotech.com Contact No: 86105 69663 Job Type: Full-time Pay: ₹543,005.70 - ₹806,641.35 per year Schedule: Rotational shift Work Location: In person
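For the Automated Logging and Databricks Jobs skills listed above, here is a hedged sketch of structured logging inside a Databricks job task using only the Python standard library; shipping the logs onward (for example to Azure Log Analytics) is environment-specific and not shown.

    # Structured logging inside a Databricks job task (standard library only).
    # Output like this lands in the task's driver log in the Jobs UI.
    import logging

    logging.basicConfig(
        level=logging.INFO,
        format="%(asctime)s %(levelname)s %(name)s %(message)s",
    )
    log = logging.getLogger("support.pipeline")

    def run_task() -> None:
        log.info("task started")
        try:
            rows_processed = 1234  # placeholder for the real work
            log.info("processed %d rows", rows_processed)
        except Exception:
            log.exception("task failed")  # full traceback for incident triage
            raise                         # re-raise so the job run is marked failed

    run_task()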

Posted 1 week ago

Apply

1.0 - 6.0 years

3 - 8 Lacs

Hyderabad

Work from Office


We are seeking an MDM Associate Data Engineer with 2–5 years of experience to support and enhance our enterprise MDM (Master Data Management) platforms using Informatica/Reltio. This role is critical in delivering high-quality master data solutions across the organization, utilizing modern tools like Databricks and AWS to drive insights and ensure data reliability. The ideal candidate will have strong SQL and data profiling skills, and experience working with cross-functional teams in a pharma environment. To succeed in this role, the candidate must have strong data engineering experience along with MDM knowledge; hence candidates with only MDM experience are not eligible for this role. The candidate must have data engineering experience on technologies like SQL, Python, PySpark, Databricks, and AWS, along with knowledge of MDM (Master Data Management). Roles & Responsibilities: Analyze and manage customer master data using Reltio or Informatica MDM solutions. Perform advanced SQL queries and data analysis to validate and ensure master data integrity. Leverage Python, PySpark, and Databricks for scalable data processing and automation. Collaborate with business and data engineering teams for continuous improvement in MDM solutions. Implement data stewardship processes and workflows, including approval and DCR mechanisms. Utilize AWS cloud services for data storage and compute processes related to MDM. Contribute to metadata and data modeling activities. Track and manage data issues using tools such as JIRA and document processes in Confluence. Apply Life Sciences/Pharma industry context to ensure data standards and compliance. Basic Qualifications and Experience: Master’s degree with 1 - 3 years of experience in Business, Engineering, IT or related field OR Bachelor’s degree with 2 - 5 years of experience in Business, Engineering, IT or related field OR Diploma with 6 - 8 years of experience in Business, Engineering, IT or related field Functional Skills: Must-Have Skills: Advanced SQL expertise and data wrangling. Strong experience in Python and PySpark for data transformation workflows. Strong experience with Databricks and AWS architecture. Must have knowledge of MDM, data governance, stewardship, and profiling practices. In addition to the above, candidates with experience on Informatica or Reltio MDM platforms will be preferred. Good-to-Have Skills: Experience with IDQ, data modeling and approval workflow/DCR. Background in Life Sciences/Pharma industries. Familiarity with project tools like JIRA and Confluence. Strong grip on data engineering concepts. Professional Certifications: Any ETL certification (e.g., Informatica) Any Data Analysis certification (SQL, Python, Databricks) Any cloud certification (AWS or Azure) Soft Skills: Strong analytical abilities to assess and improve master data processes and solutions. Excellent verbal and written communication skills, with the ability to convey complex data concepts clearly to technical and non-technical stakeholders. Effective problem-solving skills to address data-related issues and implement scalable solutions. Ability to work effectively with global, virtual teams
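Complementing the stewardship workflow described above, here is a hedged sketch of a Delta MERGE upsert, the kind of operation that applies approved changes back to a master table; the table names and match key are hypothetical.

    # Upserting approved changes into a master Delta table with MERGE.
    # Requires a Databricks/Delta runtime; names are hypothetical.
    from pyspark.sql import SparkSession
    from delta.tables import DeltaTable

    spark = SparkSession.builder.getOrCreate()

    updates = spark.read.table("mdm.approved_changes")   # records past DCR approval
    master = DeltaTable.forName(spark, "mdm.customer_master")

    (master.alias("m")
           .merge(updates.alias("u"), "m.customer_id = u.customer_id")
           .whenMatchedUpdateAll()       # apply approved changes to existing records
           .whenNotMatchedInsertAll()    # insert net-new customers
           .execute())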

Posted 1 week ago

Apply

Exploring Databricks Jobs in India

Databricks is a popular technology in the field of big data and analytics, and the job market for Databricks professionals in India is growing rapidly. Companies across various industries are actively looking for skilled individuals with expertise in Databricks to help them harness the power of data. If you are considering a career in Databricks, here is a detailed guide to help you navigate the job market in India.

Top Hiring Locations in India

  1. Bangalore
  2. Hyderabad
  3. Pune
  4. Chennai
  5. Mumbai

Average Salary Range

The average salary range for Databricks professionals in India varies based on experience level:

  • Entry-level: INR 4-6 lakhs per annum
  • Mid-level: INR 8-12 lakhs per annum
  • Experienced: INR 15-25 lakhs per annum

Career Path

In the field of Databricks, a typical career path may include:

  • Junior Developer
  • Senior Developer
  • Tech Lead
  • Architect

Related Skills

In addition to Databricks expertise, other skills that are often expected or helpful alongside Databricks include:

  • Apache Spark
  • Python/Scala programming
  • Data modeling
  • SQL
  • Data visualization tools

Interview Questions

  • What is Databricks and how is it different from Apache Spark? (basic)
  • Explain the concept of lazy evaluation in Databricks. (medium; a demonstration sketch follows this list)
  • How do you optimize performance in Databricks? (advanced)
  • What are the different cluster modes in Databricks? (basic)
  • How do you handle data skewness in Databricks? (medium)
  • Explain how you can schedule jobs in Databricks. (medium)
  • What is the significance of Delta Lake in Databricks? (advanced)
  • How do you handle schema evolution in Databricks? (medium)
  • What are the different file formats supported by Databricks for reading and writing data? (basic)
  • Explain the concept of checkpointing in Databricks. (medium)
  • How do you troubleshoot performance issues in Databricks? (advanced)
  • What are the key components of Databricks Runtime? (basic)
  • How can you secure your data in Databricks? (medium)
  • Explain the role of MLflow in Databricks. (advanced)
  • How do you handle streaming data in Databricks? (medium)
  • What is the difference between Databricks Community Edition and Databricks Workspace? (basic)
  • How do you set up monitoring and alerting in Databricks? (medium)
  • Explain the concept of Delta caching in Databricks. (advanced)
  • How do you handle schema enforcement in Databricks? (medium)
  • What are the common challenges faced in Databricks projects and how do you overcome them? (advanced)
  • How do you perform ETL operations in Databricks? (medium)
  • Explain the concept of MLflow Tracking in Databricks. (advanced)
  • How do you handle data lineage in Databricks? (medium)
  • What are the best practices for data governance in Databricks? (advanced)
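For instance, the lazy-evaluation question above can be answered with a short demonstration: transformations only build a logical plan, and nothing executes until an action such as count() runs. A generic PySpark sketch:

    # Lazy evaluation: transformations are deferred, actions trigger execution.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()

    df = spark.range(1_000_000)                     # no work happens yet
    doubled = df.withColumn("x2", F.col("id") * 2)  # still just a logical plan
    filtered = doubled.where(F.col("x2") % 3 == 0)  # plan grows, nothing runs

    filtered.explain()       # inspect the plan Spark has built so far
    print(filtered.count())  # count() is an action: only now does Spark execute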

Closing Remark

As you prepare for Databricks job interviews, make sure to brush up on your technical skills, stay updated with the latest trends in the field, and showcase your problem-solving abilities. With the right preparation and confidence, you can land your dream job in the exciting world of Databricks in India. Good luck!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies