3324 Databricks Jobs - Page 2

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

2.0 - 3.0 years

0 Lacs

Kochi, Kerala, India

On-site

Hiring Alert! Looking for a Support/Services Analyst.

Location: Cochin
Availability: Immediate
Experience: 3-5 yrs
Budget: Max 8 LPA

Core Technical Skills:
- Azure platform familiarity: automated logging, managed identities and roles
- Databricks platform familiarity: Databricks Jobs, Databricks logging
- Python (there will be a custom Python UI to interact with ML models)

Required Skills:
- 2 to 3 years of relevant experience
- Willing to work in a product-based organization in post-implementation services/managed services teams
- Willingness to work rotational shifts is a must (currently until 12:30 AM IST at the latest)
- Strong analytical skills
- Client-facing role: effective stakeholder management, interpersonal, and communication skills
- Confident and good communicator, able to engage stakeholders independently when needed
- Complete internal product certifications as needed; quick self-learner who can rapidly pick up the healthcare domain with an exploring mindset
- Incident management, unblocking technical problems, request fulfillment
- Ability to write knowledge articles
- Technically sound enough to handle end-to-end support activities independently
- Quick ramp-up on the existing product/process to provide solutions/support at speed
- Participate in project handover activities, understand BRD and SDD documents, and manage post-production rollout activities independently after the hyper-care period

Interested candidates, please share your resumes to reubenpeter@extendotech.com / nivashini@extendotech.com
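
Since the role centers on monitoring Databricks Jobs and handling incidents, here is a minimal sketch of how a support analyst might poll recent job runs for failures via the Databricks Jobs API 2.1. The workspace URL, token, and job ID are placeholders, not values from this posting.

```python
# Minimal sketch: poll recent Databricks job runs and flag failures.
# Host, token, and job ID are placeholders; the call is the standard
# Jobs API 2.1 "runs/list" endpoint.
import requests

DATABRICKS_HOST = "https://adb-1234567890123456.7.azuredatabricks.net"  # placeholder
TOKEN = "dapi-..."  # personal access token (placeholder)
JOB_ID = 123        # hypothetical job ID

resp = requests.get(
    f"{DATABRICKS_HOST}/api/2.1/jobs/runs/list",
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"job_id": JOB_ID, "limit": 25},
    timeout=30,
)
resp.raise_for_status()

for run in resp.json().get("runs", []):
    state = run.get("state", {})
    if state.get("result_state") == "FAILED":
        # In a real support workflow this would open an incident/alert.
        print(f"Run {run['run_id']} failed: {state.get('state_message', '')}")
```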

Posted 22 hours ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

About the Role:
We are looking for a highly skilled Senior Platform Engineer (Azure) who brings hands-on experience with distributed systems and a deep understanding of Azure data engineering and DevOps practices. The ideal candidate will be responsible for architecting and managing scalable, secure, and highly available cloud solutions on Microsoft Azure.

Key Responsibilities:
- Design and manage distributed system architectures using Azure services such as Event Hub, Data Factory, ADLS Gen2, Cosmos DB, Synapse, Databricks, APIM, Function App, Logic App, and App Services.
- Implement infrastructure as code (IaC) using ARM templates and Terraform for consistent, automated environment provisioning.
- Deploy and manage containerized applications using Docker and orchestrate them with Azure Kubernetes Service (AKS).
- Monitor and troubleshoot infrastructure and applications using Azure Monitor, Log Analytics, and Application Insights.
- Design and implement disaster recovery strategies, backups, and failover mechanisms to ensure business continuity.
- Automate provisioning, scaling, and infrastructure management to maintain system reliability and performance.
- Manage Azure environments across development, test, pre-production, and production stages.
- Monitor and define job flows, set up proactive alerts, and ensure smooth ETL operations in Azure Data Factory and Databricks.
- Conduct root cause analysis and implement fixes for job failures.
- Work with Jenkins and Azure DevOps to automate CI/CD pipelines and deployment workflows.
- Write automation scripts in Python and shell for various operational tasks.
- Monitor VM performance metrics (CPU, memory, OS, network, storage) and recommend optimizations.
- Collaborate with development teams to improve application reliability and performance.
- Work in Agile environments with a proactive and results-driven mindset.

Required Skills:
- Expertise in Azure services for data engineering and application deployment.
- Strong knowledge of Terraform, ARM templates, and CI/CD tools.
- Hands-on experience with Databricks, Data Factory, and Event Hub.
- Familiarity with Python, shell scripting, Jenkins, and Azure DevOps.
- Deep understanding of container orchestration using AKS.
- Experience in monitoring, alerting, and log analysis for cloud-native applications.
- Ability to troubleshoot and support distributed systems in production environments.
- Excellent problem-solving and communication skills.
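
The role calls for Python automation around ADF job flows and root-cause analysis of failures. Below is a minimal sketch, assuming the azure-identity and requests packages, that checks the status of a single ADF pipeline run through the documented ARM REST endpoint (api-version 2018-06-01). Subscription, resource group, factory, and run ID are placeholders.

```python
# Minimal sketch: check the status of an Azure Data Factory pipeline run
# via the ARM REST API. All identifiers are placeholders.
import requests
from azure.identity import DefaultAzureCredential

SUB, RG, FACTORY = "<subscription-id>", "<resource-group>", "<factory-name>"
RUN_ID = "<pipeline-run-id>"

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = (
    f"https://management.azure.com/subscriptions/{SUB}/resourceGroups/{RG}"
    f"/providers/Microsoft.DataFactory/factories/{FACTORY}"
    f"/pipelineruns/{RUN_ID}?api-version=2018-06-01"
)
run = requests.get(url, headers={"Authorization": f"Bearer {token}"}, timeout=30).json()
print(run.get("status"), run.get("message"))  # e.g. "Failed" plus the error message
```

In an operational setting, a script like this would feed a proactive alert rather than a print statement.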

Posted 22 hours ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Title: Data Governance & MDM Specialist – Azure Purview and Profisee (POC Implementation)

Job Overview:
We are seeking a skilled Data Governance and Master Data Management (MDM) Specialist to lead the setup and validation of a POC environment using Azure Purview and Profisee. The goal is to establish an integrated framework that ensures high standards in data quality, lineage tracking, security, and master data management across source systems.

Key Responsibilities:
- Set up and configure Azure Purview based on the defined architecture
- Deploy and configure the Profisee MDM platform
- Provision necessary Azure resources, including storage, compute, and access controls
- Configure: data glossary, data classifications, domains and metadata schemas, security and access policies
- Set up data lineage tracking across integrated systems
- Define and implement match/merge rules, workflow processes, and data quality logic in Profisee
- Integrate Azure Purview and Profisee with multiple source systems for the POC
- Build data ingestion and transformation pipelines using tools such as Azure Data Factory (ADF)
- Ensure accurate lineage visualization, data quality validation, and matching logic verification
- Provide support for orchestration, testing, and ongoing validation during the POC

Required Skills & Experience:
- Hands-on experience with Azure Purview configuration and integration
- Strong expertise in Profisee or similar MDM platforms
- Experience with data cataloging, metadata management, and data lineage
- Familiarity with data governance frameworks and data stewardship
- Proficient in Azure Data Factory (ADF) or other data pipeline tools
- Good understanding of Azure cloud architecture, RBAC, and resource provisioning
- Strong SQL skills for data profiling and validation
- Experience with match/merge logic, workflow configuration, and business rules in MDM tools
- Ability to troubleshoot and resolve integration issues across systems

Nice to Have:
- Familiarity with Databricks, Azure Functions, or Logic Apps
- Knowledge of Infrastructure as Code (IaC) – ARM, Bicep, or Terraform
- Prior experience with API-based integrations for real-time data sync
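
To illustrate the match/merge logic this role configures, here is a plain-Python sketch of a simple matching and survivorship rule. This is illustrative only, not the Profisee API; in practice such rules are configured inside the MDM platform. Field names and the threshold are hypothetical.

```python
# Illustrative match/merge sketch of the kind of logic an MDM tool applies.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    # Normalized fuzzy similarity between two strings.
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

def is_match(rec_a: dict, rec_b: dict, threshold: float = 0.85) -> bool:
    # Exact match on a strong key wins outright.
    if rec_a.get("email") and rec_a["email"] == rec_b.get("email"):
        return True
    # Otherwise require fuzzy name agreement plus an exact city match.
    return (
        similarity(rec_a["name"], rec_b["name"]) >= threshold
        and rec_a.get("city", "").lower() == rec_b.get("city", "").lower()
    )

def merge(rec_a: dict, rec_b: dict) -> dict:
    # Survivorship rule: prefer the most recently updated non-empty value.
    newer, older = sorted([rec_a, rec_b], key=lambda r: r["updated_at"], reverse=True)
    return {k: newer.get(k) or older.get(k) for k in {**older, **newer}}
```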

Posted 23 hours ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Qualification: B.E./B.Tech in Computer Science, Electrical Engineering, or Mechanical Engineering
Experience Range: 3 to 6 Years

Roles & Responsibilities:
As a C++ developer in the Simulation Environment unit, you will work on:
- Enabling advanced large-scale simulation testing for the development of autonomous vehicles
- Integrating third-party state-of-the-art tools into our simulation platform
- Supporting software developers and testers throughout the organization with the simulation tools
- Contributing to the team's planning and roadmaps

To succeed in your tasks, we believe you have:
- A degree in Computer Science, Electrical Engineering, or Mechanical Engineering
- More than 5 years of experience working with C++ on Linux
- Experience using debugging tools
- Git experience

Good to have:
- Real-time operating system experience
- Basic CI/CD experience
- Basic containerized tool experience
- Cloud experience (AWS, Databricks, etc.)
- Basic knowledge of robotics or autonomous technologies
- Working experience on a product with a large code base

In your role, you will learn how our autonomous vehicle software stack works. Your contributions will be one of the keys to securing the efficient and safe deployment of Scania's next-generation autonomous software. We place high value on cooperation, mutual support, and knowledge sharing; spontaneous mob sessions happen frequently.

Posted 23 hours ago

Apply

4.0 - 12.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

Greetings from TCS!!!

TCS is Hiring for Azure Data Engineer

Walk-in Interview for Azure Data Engineer in Kolkata
Walk-in Interview Date: 21st June 2025 (Saturday)
Time: 9:30 AM to 12:30 PM
Role: Azure Data Engineer
Desired Experience: 4-12 Years
Job Location: Kolkata

Job Description:
Must-Have:
1. Azure Data Factory
2. Azure Databricks
3. Python
4. SQL query writing

Good-to-Have:
1. PySpark
2. SQL query optimization
3. PowerShell

Responsibilities:
- Developing/designing solutions from detailed design specifications.
- Playing an active role in defining standards in coding, system design, and architecture.
- Revising, refactoring, updating, and debugging code.
- Customer interaction.
- Must have a strong technical background and hands-on coding experience in Azure Data Factory, Azure Databricks, and SQL.

Venue: TCS Candor Tech Space, DH Block (Newtown), Action Area I, Newtown, Chakpachuria, New Town, West Bengal 700135

Posted 23 hours ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Title: Senior Automation Engineer
Job Type: Full-time, Contractor

About Us:
Our mission at micro1 is to match the most talented people in the world with their dream jobs. If you are looking to be at the forefront of AI innovation and work with some of the fastest-growing companies in Silicon Valley, we invite you to apply for a role. By joining the micro1 community, your resume will become visible to top industry leaders, unlocking access to the best career opportunities on the market.

Job Summary:
We are seeking a detail-oriented and innovative Senior Automation Engineer to join our customer's team. In this critical role, you will design, develop, and execute automated tests to ensure the quality, reliability, and integrity of data within Databricks environments. If you are passionate about data quality, thrive in collaborative environments, and excel at both written and verbal communication, we'd love to meet you.

Key Responsibilities:
- Design, develop, and maintain robust automated test scripts using Python, Selenium, and SQL to validate data integrity within Databricks environments.
- Execute comprehensive data validation and verification activities to ensure accuracy and consistency across multiple systems, data warehouses, and data lakes.
- Create detailed and effective test plans and test cases based on technical requirements and business specifications.
- Integrate automated tests with CI/CD pipelines to facilitate seamless and efficient testing and deployment processes.
- Work collaboratively with data engineers, developers, and other stakeholders to gather data requirements and achieve comprehensive test coverage.
- Document test cases, results, and identified defects; communicate findings clearly to the team.
- Conduct performance testing to ensure data processing and retrieval meet established benchmarks.
- Provide mentorship and guidance to junior team members, promoting best practices in test automation and data validation.

Required Skills and Qualifications:
- Strong proficiency in Python, Selenium, and SQL for developing test automation solutions.
- Hands-on experience with Databricks, data warehouse, and data lake architectures.
- Proven expertise in automated testing of data pipelines, preferably with tools such as Apache Airflow, dbt test, or similar.
- Proficient in integrating automated tests within CI/CD pipelines on cloud platforms (AWS, Azure preferred).
- Excellent written and verbal communication skills with the ability to translate technical concepts to diverse audiences.
- Bachelor's degree in Computer Science, Information Technology, or a related discipline.
- Demonstrated problem-solving skills and a collaborative approach to teamwork.

Preferred Qualifications:
- Experience with implementing security and data protection measures in data-driven applications.
- Ability to integrate user-facing elements with server-side logic for seamless data experiences.
- Demonstrated passion for continuous improvement in test automation processes, tools, and methodologies.
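
As an illustration of the automated data-validation tests this role describes, here is a minimal pytest + PySpark sketch. The table and column names are hypothetical, and a Spark-capable environment (e.g., a Databricks cluster) is assumed.

```python
# Minimal sketch of automated data-quality tests with pytest and PySpark.
# Table/column names are placeholders.
import pytest
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

@pytest.fixture(scope="session")
def spark():
    # On Databricks a SparkSession already exists; locally this creates one.
    return SparkSession.builder.appName("dq-tests").getOrCreate()

def test_row_counts_match(spark):
    # The curated table should not lose rows relative to the raw source.
    src = spark.table("raw.orders").count()
    tgt = spark.table("curated.orders").count()
    assert tgt == src, f"row count mismatch: source={src}, target={tgt}"

def test_no_null_keys(spark):
    # Primary keys must never be NULL after transformation.
    nulls = spark.table("curated.orders").filter(F.col("order_id").isNull()).count()
    assert nulls == 0, f"{nulls} rows have a NULL order_id"
```

Tests like these slot directly into a CI/CD pipeline stage, which is the integration point the posting emphasizes.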

Posted 23 hours ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana, India

Remote

Job Summary:
We are seeking a highly skilled Lead Data Engineer/Associate Architect to lead the design, implementation, and optimization of scalable data architectures. The ideal candidate will have a deep understanding of data modeling, ETL processes, cloud data solutions, and big data technologies. You will work closely with cross-functional teams to build robust, high-performance data pipelines and infrastructure to enable data-driven decision-making.

Experience: 7-12 years
Work Location: Hyderabad (Hybrid) / Remote
Mandatory skills: AWS, Python, SQL, Airflow, DBT
Must have delivered one or two projects in the clinical domain/clinical industry.

Responsibilities:
- Architecture: Design and develop scalable, resilient data architectures that support business needs, analytics, and AI/ML workloads.
- Data Pipeline Development: Design and implement robust ETL/ELT processes to ensure efficient data ingestion, transformation, and storage.
- Big Data & Cloud Solutions: Architect data solutions using cloud platforms like AWS, Azure, or GCP, leveraging services such as Snowflake, Redshift, BigQuery, and Databricks.
- Database Optimization: Ensure performance tuning, indexing strategies, and query optimization for relational and NoSQL databases.
- Data Governance & Security: Implement best practices for data quality, metadata management, compliance (GDPR, CCPA), and security.
- Collaboration & Leadership: Work closely with data engineers, analysts, and business stakeholders to translate business requirements into scalable solutions.
- Technology Evaluation: Stay updated on emerging trends, assess new tools and frameworks, and drive innovation in data engineering.

Required Skills:
- Education: Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
- Experience: 7-12+ years of experience in data engineering.
- Cloud Platforms: Strong expertise in AWS data services.
- Databases: Hands-on experience with SQL, NoSQL, and columnar databases such as PostgreSQL, MongoDB, Cassandra, and Snowflake.
- Programming: Proficiency in Python, Scala, or Java for data processing and automation.
- ETL Tools: Experience with tools like Apache Airflow, Talend, DBT, or Informatica.
- Machine Learning & AI Integration (Preferred): Understanding of how to architect data solutions for AI/ML applications.
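
Given the mandatory AWS/Python/SQL/Airflow/DBT stack, here is a minimal sketch of a daily Airflow DAG that chains an ingestion task into a dbt build. The DAG name, task bodies, and dbt project paths are placeholders; the `schedule` parameter assumes Airflow 2.4+.

```python
# Minimal sketch: a daily Airflow DAG chaining ingestion into dbt.
# All names and paths are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.python import PythonOperator

def ingest_to_s3():
    ...  # placeholder: pull from the source system and land files in S3

with DAG(
    dag_id="clinical_daily_load",  # hypothetical name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    ingest = PythonOperator(task_id="ingest_to_s3", python_callable=ingest_to_s3)
    transform = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/clinical --profiles-dir /opt/dbt",
    )
    ingest >> transform  # dbt runs only after ingestion succeeds
```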

Posted 23 hours ago

Apply

20.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Description:
We are seeking a skilled Cloud Architect with expertise in Azure, Databricks, and Snowflake to join our team. The ideal candidate will be responsible for designing and implementing cloud solutions to meet our organization's needs.

Responsibilities:
- Designing and implementing cloud architecture solutions using Azure, Databricks, and Snowflake
- Collaborating with cross-functional teams to ensure successful implementation of cloud projects
- Providing technical guidance and support to team members on cloud-related issues

Qualifications:
- Proven experience as a Cloud Architect
- Strong knowledge of Azure, Databricks, and Snowflake
- Excellent communication and teamwork skills
- Bachelor's degree in Computer Science or a related field

Experience: 20 to 25 years
Salary: 1,850,000 to 2,500,000

Posted 1 day ago

Apply

10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

We are seeking a highly skilled Senior Technical Architect with expertise in Databricks, Apache Spark, and modern data engineering architectures. The ideal candidate will have a strong grasp of Generative AI and RAG pipelines and a keen interest (or working knowledge) in Agentic AI systems. This individual will lead the architecture, design, and implementation of scalable data platforms and AI-powered applications for our global clients. This high-impact role requires technical leadership, cross-functional collaboration, and a passion for solving complex business challenges with data and AI.

Responsibilities:
- Lead architecture, design, and deployment of scalable data solutions using Databricks and the medallion architecture.
- Guide technical teams in building batch and streaming data pipelines using Spark, Delta Lake, and MLflow.
- Collaborate with clients and internal stakeholders to understand business needs and translate them into robust data and AI architectures.
- Design and prototype Generative AI applications using LLMs, RAG pipelines, and vector stores.
- Provide thought leadership on the adoption of Agentic AI systems in enterprise environments.
- Mentor data engineers and solution architects across multiple projects.
- Ensure adherence to security, governance, performance, and reliability best practices.
- Stay current with emerging trends in data engineering, MLOps, GenAI, and agent-based systems.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related technical discipline.
- 10+ years of experience in data architecture, data engineering, or software architecture roles.
- 5+ years of hands-on experience with Databricks, including Spark SQL, Delta Lake, Unity Catalog, and MLflow.
- Proven experience in designing and delivering production-grade data platforms and pipelines.
- Exposure to LLM frameworks (OpenAI, Hugging Face, LangChain, etc.) and vector databases (FAISS, Weaviate, etc.).
- Strong understanding of cloud platforms (Azure, AWS, or GCP), particularly in the context of Databricks deployment.
- Knowledge of or interest in Agentic AI frameworks and multi-agent system design is highly desirable.

Technical Skills:
- Databricks (incl. Spark, Delta Lake, MLflow, Unity Catalog)
- Python, SQL, PySpark
- GenAI tools and libraries (LangChain, OpenAI, etc.)
- CI/CD and DevOps for data
- REST APIs, JSON, data serialization formats
- Cloud services (Azure/AWS/GCP)

Soft Skills:
- Strong communication and stakeholder management skills
- Ability to lead and mentor diverse technical teams
- Strategic thinking with a bias for action
- Comfortable with ambiguity and iterative development
- Client-first mindset and consultative approach
- Excellent problem-solving and analytical skills

Preferred Certifications:
- Databricks Certified Data Engineer / Architect
- Cloud certifications (Azure/AWS/GCP)
- Any certifications in AI/ML, NLP, or GenAI frameworks are a plus
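
For context on the medallion architecture this role leads, here is a minimal PySpark sketch of a bronze-to-silver hop on Delta Lake. Schema and table names are hypothetical, and a Databricks-style environment with Delta support is assumed.

```python
# Minimal sketch: bronze-to-silver refinement in a medallion architecture.
# Table and column names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()  # provided on Databricks

bronze = spark.read.table("bronze.events")  # raw, append-only ingestion layer

silver = (
    bronze.dropDuplicates(["event_id"])               # de-duplicate
    .filter(F.col("event_ts").isNotNull())            # basic quality gate
    .withColumn("event_date", F.to_date("event_ts"))  # conformed column
)

(silver.write.format("delta")
    .mode("overwrite")
    .partitionBy("event_date")
    .saveAsTable("silver.events"))  # cleaned layer for analytics/gold
```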

Posted 1 day ago

Apply


9.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Title: Data Engineer
Location: Bengaluru

L&T Technology Services is seeking a Data Engineer (experience range: 9+ years) proficient in:
- 9+ years of relevant hands-on data engineering experience: data ingestion, processing, and exploratory analysis to build solutions that deliver value through data as an asset.
- Building, testing, and deploying data pipelines that move data across systems efficiently and reliably, while staying on top of the latest architectural trends on the Azure cloud.
- Parallel and distributed processing, storage, concurrency, and fault-tolerant systems.
- Thriving on new technologies, adapting and learning easily to meet the needs of next-generation engineering challenges.

Technical Skills (Must-Have):
- Applied experience with distributed data processing frameworks: Spark and Databricks with Python and SQL.
- At least two end-to-end data analytics projects covering Databricks configuration, Unity Catalog, Delta Sharing, and the medallion architecture.
- Applied experience with Azure data services: ADLS and Delta.

Required Skills: Azure Data Lake Storage (ADLS), advanced SQL and Python programming, Databricks expertise with the medallion architecture, data governance and security.

Posted 1 day ago

Apply

14.0 years

0 Lacs

Greater Hyderabad Area

On-site

About Birlasoft:
Birlasoft is a powerhouse where domain expertise, enterprise solutions, and digital technologies converge to redefine business processes. We take pride in our consultative and design-thinking approach, driving societal progress by enabling our customers to run businesses with unmatched efficiency and innovation. As part of the CKA Birla Group, a multibillion-dollar enterprise, we boast a 12,500+ professional team committed to upholding the Group's 162-year legacy. Our core values prioritize Diversity, Equity, and Inclusion (DEI) initiatives, along with Corporate Sustainable Responsibility (CSR) activities, demonstrating our dedication to building inclusive and sustainable communities. Join us in shaping a future where technology seamlessly aligns with purpose.

About the Job:
The Azure Data Architect is responsible for designing, implementing, and managing scalable and secure data solutions on the Microsoft Azure cloud platform. This role requires a deep understanding of data transformation, data cleansing, data profiling, data architecture principles, cloud technologies, and data engineering practices, helping build a data ecosystem optimized for performance, scalability, and cost efficiency.

Job Title: Azure Data Factory Architect
Location: Noida/Pune
Educational Background: Bachelor's degree in Computer Science, Information Technology, or a related field.
Mode of Work: Hybrid
Experience Required: 14+ years

Key Responsibilities:
- Solution Design: Design and implement robust, scalable, and secure data architectures on Azure. Define end-to-end data solutions, including data ingestion, storage, transformation, processing, and analytics. Understand business requirements and translate them into technical solutions.
- Azure Platform Expertise: Leverage Azure services like Azure Data Factory, Azure Synapse Analytics, Azure Data Lake, Azure Databricks, Azure Cosmos DB, and Azure SQL Database. Knowledge of optimization and cost management for Azure data solutions.
- Data Integration and ETL/ELT Pipelines: Design and implement data pipelines for real-time and batch processing. SQL skills to write complex queries. Must have knowledge of establishing one-way or two-way communication channels when integrating various systems.
- Data Governance and Security: Implement data security in line with organizational and regulatory requirements. Implement data quality assurance. Should have knowledge of the different authentication methods used in cloud solutions.
- Performance Optimization: Monitor, troubleshoot, and improve data solution performance. Implement best practices for data solutions.
- Collaboration and Leadership: Provide technical leadership and mentorship to team members.

Mandatory Skills:
- Hands-on experience with Azure services like Azure Data Factory, Azure Synapse Analytics, Azure Data Lake Storage Gen2, Azure Key Vault, Azure SQL Database, and Azure Databricks.
- Hands-on experience in data migration/data transformation, data cleansing, and data profiling.
- Experience with Logic Apps.

Soft Skills:
- Communicates effectively
- Problem-solving and analytical skills
- Adapts to evolving technologies

Posted 1 day ago

Apply

5.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site

Experience: 5+ years
Notice Period: Immediate to 15 days
Rounds: 3 (virtual)
Mandatory Skills: Apache Spark, Hive, Hadoop, Scala, Databricks

Job Description

The Role:
- Designing and building optimized data pipelines using cutting-edge technologies in a cloud environment to drive analytical insights.
- Constructing infrastructure for efficient ETL processes from various sources and storage systems.
- Leading the implementation of algorithms and prototypes to transform raw data into useful information.
- Architecting, designing, and maintaining database pipeline architectures, ensuring readiness for AI/ML transformations.
- Creating innovative data validation methods and data analysis tools.
- Ensuring compliance with data governance and security policies.
- Interpreting data trends and patterns to establish operational alerts.
- Developing analytical tools, programs, and reporting mechanisms.
- Conducting complex data analysis and presenting results effectively.
- Preparing data for prescriptive and predictive modeling.
- Continuously exploring opportunities to enhance data quality and reliability.
- Applying strong programming and problem-solving skills to develop scalable solutions.

Requirements:
- Experience with Big Data technologies (Hadoop, Spark, NiFi, Impala).
- 5+ years of hands-on experience designing, building, deploying, testing, maintaining, monitoring, and owning scalable, resilient, and distributed data pipelines.
- High proficiency in Scala/Java and Spark for applied large-scale data processing.
- Expertise with big data technologies, including Spark, Data Lake, and Hive.

Posted 1 day ago

Apply

12.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About Pocket Entertainment:
Pocket Entertainment is revolutionizing entertainment through immersive storytelling. With millions of users worldwide, we're building the future of entertainment, blending human creativity with cutting-edge AI. Now, we're expanding our global footprint. Let's reimagine global entertainment, powered by AI and fueled by imagination. Work on category-defining AI products reshaping entertainment that operate at massive scale: millions of users, billions of minutes consumed. Collaborate with some of the best engineers, AI researchers, and creators globally.

Mission of the Role:
As a Senior Principal Data Scientist at Pocket FM, you will play a pivotal role in driving innovation in recommendation systems, user personalization, and content strategy. You will lead high-impact initiatives to understand user behavior, predict preferences, and enhance engagement through intelligent content delivery. This is a hands-on leadership role where you will set the technical vision, guide the roadmap, influence strategy, and mentor a high-performing data science team.

Key Responsibilities:
- Design and deploy machine learning models for recommendations, personalization, ranking, and user behavior prediction.
- Tackle high-impact challenges across personalization, churn prediction, LTV modeling, creator analytics, and platform growth.
- Partner with engineering, product, and content teams to align modeling efforts with business impact.
- Lead the full ML lifecycle: prototyping, experimentation, A/B testing, deployment, and monitoring.
- Translate large-scale user behavior data into actionable insights and strategies for content discovery and retention.
- Mentor and guide junior and senior data scientists, fostering a culture of technical excellence, innovation, and continuous learning.
- Conduct in-depth data analysis and exploratory research to uncover actionable insights, understand user trends, and identify new opportunities for product improvement and growth.
- Drive MLOps best practices, including model monitoring, versioning, and lifecycle management.
- Stay current with advances in ML/NLP and apply them to improve recommendation quality and user satisfaction.

Qualifications:
- Advanced degree (PhD or Master's) in CS, ML, Stats, or a related field.
- 12+ years of experience building and deploying ML models in production.
- Expertise in recommender systems, personalization, ranking, or user modeling.
- Strong Python skills and deep experience with ML frameworks (e.g., PyTorch, TensorFlow, XGBoost).
- Solid grounding in experimentation, A/B testing, and statistical inference.
- Experience with big data tools (Spark, Databricks) and cloud platforms (AWS/GCP).
- Strong analytical and creative problem-solving skills.
- Strong communication and cross-functional collaboration skills.

Bonus Points:
- Experience with NLP techniques applied to text or audio data.
- Contributions to open-source ML projects or research publications.
- Familiarity with generative AI models and their applications.

Posted 1 day ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Title: Data Engineer
Job Type: Full-Time
Location: On-site, Hyderabad, Telangana, India

Job Summary:
We are seeking an accomplished Data Engineer to join one of our top customer's dynamic teams in Hyderabad. You will be instrumental in designing, implementing, and optimizing data pipelines that drive our business insights and analytics. If you are passionate about harnessing the power of big data, possess a strong technical skill set, and thrive in a collaborative environment, we would love to hear from you.

Key Responsibilities:
- Develop and maintain scalable data pipelines using Python, PySpark, and SQL.
- Implement robust data warehousing and data lake architectures.
- Leverage the Databricks platform to enhance data processing and analytics capabilities.
- Model, design, and optimize complex database schemas.
- Collaborate with cross-functional teams to understand data requirements and deliver actionable insights.
- Lead and mentor junior data engineers and establish best practices.
- Troubleshoot and resolve data processing issues promptly.

Required Skills and Qualifications:
- Strong proficiency in Python and PySpark.
- Extensive experience with the Databricks platform.
- Advanced SQL and data modeling skills.
- Demonstrated experience in data warehousing and data lake architectures.
- Exceptional problem-solving and analytical skills.
- Strong written and verbal communication skills.

Preferred Qualifications:
- Experience with graph databases, particularly MarkLogic.
- Proven track record of leading data engineering teams.
- Understanding of data governance and best practices in data management.

Posted 1 day ago

Apply

9.0 - 14.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Greetings from TCS!!!

TCS is hiring for Big Data Architect
Location: PAN India
Years of Experience: 9-14 years

Job Description:
- Experience with Python, Spark, and Hive data pipelines using ETL processes.
- Apache Hadoop development and implementation.
- Experience with streaming frameworks such as Kafka.
- Hands-on experience with Azure/AWS/Google data services.
- Work with big data technologies (Spark, Hadoop, BigQuery, Databricks) for data preprocessing and feature engineering.
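
As an illustration of the Kafka streaming work this role covers, here is a minimal Spark Structured Streaming sketch that reads from a Kafka topic into a Delta table. Broker addresses, the topic, and paths are placeholders, and the spark-sql-kafka connector is assumed to be on the cluster.

```python
# Minimal sketch: Structured Streaming from Kafka into Delta.
# Brokers, topic, and paths are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")  # placeholder
    .option("subscribe", "events")                      # placeholder topic
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers key/value as binary; cast before downstream processing.
parsed = stream.select(
    F.col("key").cast("string"),
    F.col("value").cast("string"),
    "timestamp",
)

query = (
    parsed.writeStream.format("delta")
    .option("checkpointLocation", "/chk/events")  # placeholder path
    .outputMode("append")
    .start("/delta/events")                       # placeholder path
)
```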

Posted 1 day ago

Apply

7.0 years

0 Lacs

India

Remote

Job Title: Senior Data Engineer – Azure & API Development
Location: Remote
Experience Required: 7+ Years

Job Summary:
We are looking for an experienced Data Engineer with strong expertise in Azure cloud architecture, API development, and modern data engineering tools. The ideal candidate will have in-depth experience in building and maintaining scalable data pipelines and API integrations using Azure services like Azure Data Factory (ADF), Databricks, Azure Functions, and Service Bus, along with infrastructure provisioning using Terraform.

Key Responsibilities:
- Design and implement scalable, secure, and high-performance data solutions on Azure.
- Develop, deploy, and manage RESTful APIs to support data access and integration.
- Build and maintain ETL/ELT data pipelines using Azure Data Factory, Databricks, and Azure Functions.
- Integrate data workflows with Azure Service Bus and other messaging services.
- Define and implement cloud infrastructure using Terraform and Infrastructure-as-Code (IaC) best practices.
- Collaborate with stakeholders to understand data requirements and develop technical solutions.
- Ensure best practices for data governance, security, monitoring, and performance optimization.
- Work closely with DevOps and Data Architects to implement CI/CD pipelines and production-grade deployments.

Must-Have Skills:
- 7+ years of professional experience in Data Engineering or related roles.
- Strong hands-on experience with Azure services, particularly Azure Data Factory (ADF), Databricks (Spark-based processing), Azure Functions, and Azure Service Bus.
- Proficient in API development (RESTful APIs using Python, .NET, or Node.js).
- Good command of SQL, Spark SQL, and data transformation techniques.
- Experience with Terraform for IaC and provisioning Azure resources.
- Excellent understanding of data architecture, cloud security, and governance models.
- Strong problem-solving skills and experience working in Agile environments.

Preferred Skills:
- Familiarity with CI/CD tools like Azure DevOps, GitHub Actions, or Jenkins.
- Exposure to event-driven architecture and real-time data streaming.
- Knowledge of containerization (Docker/Kubernetes) is a plus.
- Experience in performance tuning and cost optimization in Azure environments.
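
Since the role pairs data engineering with RESTful API development in Python, here is a minimal FastAPI sketch of a data-access endpoint. The route, model, and in-memory store are hypothetical; in practice the handler would query a serving database or the lakehouse.

```python
# Minimal sketch of a REST data-access API with FastAPI.
# Route, model, and backing store are placeholders.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="data-access-api")

class Order(BaseModel):
    order_id: str
    status: str
    amount: float

# Stand-in for a real database or lakehouse query layer.
_FAKE_STORE = {"o-1": Order(order_id="o-1", status="shipped", amount=42.0)}

@app.get("/orders/{order_id}", response_model=Order)
def get_order(order_id: str) -> Order:
    order = _FAKE_STORE.get(order_id)
    if order is None:
        raise HTTPException(status_code=404, detail="order not found")
    return order
```

Run locally with `uvicorn main:app --reload` and the endpoint serves validated JSON with an automatically generated OpenAPI schema.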

Posted 1 day ago

Apply

0 years

0 Lacs

India

Remote

Years of Experience: 8+
Mode of Work: Remote

Design, develop, modify, and test software applications for the healthcare industry in an agile environment. Duties include:
- Develop, support/maintain, and deploy software to support a variety of business needs.
- Provide technical leadership in the design, development, testing, deployment, and maintenance of software solutions.
- Design and implement platform and application security for applications.
- Perform advanced query analysis and performance troubleshooting.
- Coordinate with senior-level stakeholders to ensure the development of innovative software solutions to complex technical and creative issues.
- Re-design software applications to improve maintenance cost, testing functionality, platform independence, and performance.
- Manage user stories and project commitments in an agile framework to rapidly deliver value to customers.
- Deploy and operate software solutions using a DevOps model.

Required skills: Azure Delta Lake, ADF, Databricks, PySpark, Oozie, Airflow, Big Data technologies (HBase, Hive), CI/CD (GitHub/Jenkins)

Posted 1 day ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

The role involves building and managing data pipelines, troubleshooting issues, and ensuring data accuracy across platforms such as Azure Synapse Analytics, Azure Data Lake Gen2, and SQL environments. This position requires extensive SQL experience and a strong background in PySpark development.

Responsibilities:
- Data Engineering: Work with Azure Synapse Pipelines and PySpark for data transformation and pipeline management. Perform data integration and schema updates in Delta Lake environments, ensuring smooth data flow and accurate reporting (see the sketch after this listing). Work with our Azure DevOps team on CI/CD processes for deployment of Infrastructure as Code (IaC) and workspace artifacts. Develop custom solutions for our customers as defined by our Data Architect and assist in improving our data solution patterns over time.
- Documentation: Document ticket resolutions, testing protocols, and data validation processes. Collaborate with other stakeholders to provide specifications and quotations for enhancements requested by customers.
- Ticket Management: Monitor the Jira ticket queue and respond to tickets as they are raised. Understand ticket issues, utilizing extensive SQL, Synapse Analytics, and other tools to troubleshoot them. Communicate effectively with the customer users who raised the tickets and collaborate with other teams (e.g., FinOps, Databricks) as needed to resolve issues.
- Troubleshooting and Support: Handle issues related to ETL pipeline failures, Delta Lake processing, or data inconsistencies in Synapse Analytics. Provide prompt resolution of data pipeline and validation issues, ensuring data integrity and performance.

Desired Skills & Requirements:
Seeking a candidate with 5+ years of Dynamics 365 ecosystem experience and a strong PySpark development background. While various profiles may apply, we highly value a strong person-organization fit. Our ideal candidate possesses the following attributes and qualifications:
- Extensive experience with SQL, including query writing and troubleshooting in Azure SQL, Synapse Analytics, and Delta Lake environments.
- Strong understanding of and experience in implementing and supporting ETL processes, data lakes, and data engineering solutions.
- Proficiency in using Azure Synapse Analytics, including workspace management, pipeline creation, and data flow management.
- Hands-on experience with PySpark for data processing and automation.
- Ability to use VPNs, MFA, RDP, jump boxes/jump hosts, etc., to operate within the customers' secure environments.
- Some experience with Azure DevOps CI/CD IaC and release pipelines.
- Ability to communicate effectively both verbally and in writing, with strong problem-solving and analytical skills.
- Understanding of the operation and underlying data structure of D365 Finance and Operations, Business Central, and Customer Engagement.
- Experience with data engineering in Microsoft Fabric.
- Experience with Delta Lake and Azure data engineering concepts (e.g., ADLS, ADF, Synapse, AAD, Databricks).
- Certifications in Azure data engineering.

Why Join Us?
- Work with innovative technologies in a dynamic environment: a progressive work culture with a global perspective where your ideas truly matter and growth opportunities are endless.
- Work with the latest Microsoft technologies alongside Dynamics professionals committed to driving customer success.
- Enjoy the flexibility to work from anywhere, with a work-life balance that suits your lifestyle.
- Competitive salary and comprehensive benefits package.
- Career growth and professional development opportunities.
- A collaborative and inclusive work culture.
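
Referenced above: a minimal sketch of the kind of Delta Lake upsert (MERGE) used when integrating incoming data and applying schema updates. Table names and the landing path are hypothetical, and the delta-spark package is assumed to be configured (as it is on Synapse Spark pools and Databricks).

```python
# Minimal sketch: upsert incoming records into an existing Delta table.
# Table names and paths are placeholders.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # provided in Synapse/Databricks

updates = spark.read.parquet("/landing/customers/2025-06-20")  # placeholder path

(DeltaTable.forName(spark, "silver.customers")
    .alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()      # refresh existing customers
    .whenNotMatchedInsertAll()   # insert new customers
    .execute())
```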

Posted 1 day ago

Apply

0 years

0 Lacs

Gurgaon, Haryana, India

On-site

At Aramya, we're redefining fashion for India's underserved Gen X/Y women, offering size-inclusive, comfortable, and stylish ethnic wear at affordable prices. Launched in 2024, we've already achieved ₹40 Cr in revenue in our first year, driven by a unique blend of data-driven design, in-house manufacturing, and a proprietary supply chain. Today, with an ARR of ₹100 Cr, we're scaling rapidly with ambitious growth plans for the future. Our vision is bold: to build the most loved fashion and lifestyle brands across the world while empowering individuals to express themselves effortlessly. Backed by marquee investors like Accel and Z47, we're on a mission to make high-quality ethnic wear accessible to every woman. We've built a community of loyal customers who love our weekly design launches, impeccable quality, and value-for-money offerings. With a fast-moving team driven by creativity, technology, and customer obsession, Aramya is more than a fashion brand; it's a movement to celebrate every woman's unique journey.

We're looking for a passionate Data Engineer with a strong foundation. The ideal candidate should have a solid understanding of D2C or e-commerce platforms and be able to work across the stack to build high-performing, user-centric digital experiences.

Key Responsibilities:
- Design, build, and maintain scalable ETL/ELT pipelines using tools like Apache Airflow, Databricks, and Spark.
- Own and manage data lakes/warehouses on AWS Redshift (or Snowflake/BigQuery).
- Optimize SQL queries and data models for analytics, performance, and reliability.
- Develop and maintain backend APIs using Python (FastAPI/Django/Flask) or Node.js.
- Integrate external data sources (APIs, SFTP, third-party connectors) and ensure data quality and validation.
- Implement monitoring, logging, and alerting for data pipeline health.
- Collaborate with stakeholders to gather requirements and define data contracts.
- Maintain infrastructure-as-code (Terraform/CDK) for data workflows and services.

Must-Have Skills:
- Strong in SQL and data modeling (OLTP and OLAP).
- Solid programming experience in Python, preferably for both ETL and backend.
- Hands-on experience with Databricks, Redshift, or Spark.
- Experience building and managing ETL pipelines using tools like Airflow, dbt, or similar.
- Deep understanding of REST APIs, microservices architecture, and backend design patterns.
- Familiarity with Docker, Git, and CI/CD pipelines.
- Good grasp of cloud platforms (preferably AWS) and services like S3, Lambda, ECS/Fargate, and CloudWatch.

Nice-to-Have Skills:
- Exposure to streaming platforms like Kafka, Kinesis, or Flink.
- Experience with Snowflake, BigQuery, or Delta Lake.
- Proficiency in data governance, security best practices, and PII handling.
- Familiarity with GraphQL, gRPC, or event-driven systems.
- Knowledge of data observability tools (Monte Carlo, Great Expectations, Datafold).
- Experience working in a D2C/e-commerce or analytics-heavy product environment.

Posted 1 day ago

Apply

10.0 years

0 Lacs

Greater Kolkata Area

On-site

We are looking for a Senior Data Lead to lead enterprise-level data modernization and innovation. In this highly strategic role, you will design scalable, secure, and future-ready data architectures, modernize legacy systems, and provide trusted technical leadership across both technology and business teams. This is a unique opportunity to make a company-wide impact by influencing data strategy and enabling smarter, faster decision-making through data.

Key Responsibilities:
- Architect & Design: Lead the development of robust, scalable data models, data management systems, and integration frameworks to ensure enterprise-wide data accuracy, consistency, and security.
- Domain Expertise: Act as a subject matter expert across key business functions such as Supply Chain, Product Engineering, Sales & Marketing, Manufacturing, Finance, and Legal.
- Modernization Leadership: Drive the transformation of legacy systems and manage end-to-end cloud migrations with minimal business disruption.
- Collaboration: Partner with data engineers, scientists, analysts, and IT leaders to build high-performance, scalable data pipelines and transformation solutions.
- Governance & Compliance: Establish and maintain data governance frameworks, including metadata repositories, data dictionaries, and data lineage documentation.
- Strategic Advisory: Provide guidance on data architecture best practices, technology selection, and roadmap alignment to senior leadership and cross-functional teams.
- Mentorship: Serve as a mentor and thought leader to junior data professionals, fostering a culture of innovation, knowledge sharing, and technical excellence.
- Innovation & Trends: Stay abreast of emerging technologies in cloud, data platforms, and AI/ML to identify and implement innovative solutions.
- Communication: Translate complex technical concepts into clear, actionable insights for technical and non-technical audiences alike.

Required Qualifications:
- 10+ years of experience in data architecture, engineering, or enterprise data management roles.
- Demonstrated success leading large-scale data initiatives in life sciences or other highly regulated industries.
- Deep expertise in modern data architecture paradigms such as Data Lakehouse, Data Mesh, or Data Fabric.
- Strong hands-on experience with cloud platforms like AWS, Azure, or Google Cloud Platform (GCP).
- Proficiency in data modeling, ETL/ELT frameworks, and enterprise integration patterns.
- Deep understanding of data governance, metadata management, master data management (MDM), and data quality practices.
- Experience with tools and platforms including but not limited to: data integration (Informatica, Talend); data governance (Collibra); modeling/transformation (dbt); cloud platforms (Snowflake, Databricks).
- Excellent problem-solving skills with the ability to translate business requirements into scalable data solutions.
- Exceptional communication skills and experience engaging with both executive stakeholders and engineering teams.

Preferred Qualifications (Nice to Have):
- Experience with AI/ML data pipelines or real-time streaming architectures.
- Certifications in cloud technologies (e.g., AWS Certified Solutions Architect, Azure Data Engineer).
- Familiarity with regulatory frameworks such as GxP, HIPAA, or GDPR.

Posted 1 day ago

Apply

10.0 - 12.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Position Summary: IT – Lead Architect/Associate Principal – Azure Lake – D&IT DATA
This profile will lead a team of architects and engineers focused on strategic Azure architecture and AI projects.

Job Responsibilities:
- Strategic Data Architecture and Roadmap: Develop and maintain the company's data architecture strategy aligned with business objectives. Lead design and/or architecture validation reviews with all stakeholders, assess projects against architectural principles and target solutions, and organize and lead architecture committees. Select new solutions that meet business needs, aligned with existing recommendations and solutions, and broadly with IS strategy. Model the company's information systems and business processes. Define a clear roadmap to modernize and optimize data management practices and technologies.
- Emerging Technologies and Innovation: Drive the adoption of new technologies (AI/ML) and assess their impact on the organization's data strategy. Conduct technology watch in both company activity domains and IT technologies, promoting innovative solutions adapted to the company. Define principles, standards, and tools for system modeling and process management.
- Platform Design and Implementation: Architect scalable data flows, storage solutions, and analytics platforms in cloud and hybrid environments. Ensure secure, high-performing, and cost-effective data solutions.
- Data Governance and Quality: Establish data governance frameworks ensuring data accuracy, availability, and security. Promote and enforce best practices for data quality management. Ensure compliance with enterprise architecture standards and principles.
- Technical Leadership: Act as a technical advisor on complex data-related projects and proofs of concept.
- Stakeholder Collaboration: Collaborate with business stakeholders to understand data needs and translate them into architectural solutions. Work with relevant stakeholders in defining project scope, planning development, and validating milestones throughout project execution.

Technology exposure: a wide range of technologies related to data lakes (SQL, Synapse, Databricks, Power BI, Fabric, Python). Tools: Visual Studio & TFS, Git. Databases: SQL Server, NoSQL. Methodologies: Agile (SCRUM). SAP BW / SAC.

Required Skills:
- Expert in Azure, Databricks, and Synapse.
- Proven experience leading technical teams and strategic projects.
- Deep knowledge of cloud data platforms (Microsoft Azure, Fabric, Databricks, or AWS).
- Proven experience in designing and implementing AI solutions within data architectures.
- Understanding of SAP-based technologies (SAP BW, SAP DataSphere, HANA, S/4, ECC).
- Experience with analytics, visualization, reporting, and self-service tools (Power BI, Tableau, SAP Analytics Cloud).
- Expert understanding of data modeling, ETL/ELT technologies, and big data.
- Experience with relational and NoSQL databases.
- Deep knowledge of data security and compliance best practices.
- Strong experience in solution architecture.
- Proven ability to lead AI/ML projects from conception to deployment.
- Familiarity with data mesh and data fabric architectural approaches.

Qualifications and Experience:
- Experience: 10-12 years.
- Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
- Experience in data architecture, with at least 5 years in a leadership role.
- Experience with AI/ML projects, including 5 years in AI model design and deployment.
- Certifications in data architecture or cloud technologies, and project management.
- Excellent communication and presentation skills for both technical and non-technical audiences.
- Strong problem-solving skills, stakeholder management, and the ability to navigate complexity.

Posted 1 day ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Title: Backend Developer – Python
Job Type: Full-time
Location: On-site, Hyderabad, Telangana, India

Job Summary:
Join one of our top customer's teams as a Backend Developer and help drive scalable, high-performance solutions at the intersection of machine learning and data engineering. You'll collaborate with skilled professionals to design, implement, and maintain backend systems powering advanced AI/ML applications in a dynamic, on-site environment.

Key Responsibilities:
- Develop, test, and deploy robust backend components and microservices using Python and PySpark.
- Implement and optimize data pipelines leveraging Databricks and distributed computing frameworks.
- Design and maintain efficient databases with MySQL, ensuring data integrity and high availability.
- Integrate machine learning models into production-ready backend systems supporting AI-driven features.
- Collaborate closely with data scientists and engineers to deliver end-to-end solutions aligned with business goals.
- Monitor, troubleshoot, and enhance system performance, utilizing Redis for caching and improved scalability.
- Write clear and maintainable documentation, and communicate effectively with team members both verbally and in writing.

Required Skills and Qualifications:
- Proficiency in Python programming for backend development.
- Hands-on experience with Databricks and PySpark in a production environment.
- Strong understanding of MySQL database design, querying, and performance tuning.
- Practical background in machine learning concepts and deploying ML models.
- Experience with Redis for caching and state management.
- Excellent written and verbal communication skills, with keen attention to detail.
- Demonstrated ability to work effectively in an on-site, collaborative setting in Hyderabad.

Preferred Qualifications:
- Previous experience in high-growth AI/ML or data engineering projects.
- Familiarity with additional backend technologies or cloud platforms.
- Demonstrated leadership or mentorship in technical teams.
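
Since the role pairs MySQL with Redis for caching, here is a minimal sketch of the cache-aside pattern using the redis-py client. Connection details, key naming, and the TTL are placeholder choices; the MySQL loader is stubbed.

```python
# Minimal sketch: cache-aside pattern with Redis in front of MySQL.
# Connection details, key scheme, and TTL are placeholders.
import json
import redis

r = redis.Redis(host="localhost", port=6379, db=0)  # placeholder connection

def get_user_profile(user_id: int) -> dict:
    key = f"user:{user_id}"
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)           # cache hit
    profile = _load_from_mysql(user_id)     # cache miss: go to the database
    r.setex(key, 300, json.dumps(profile))  # cache for 5 minutes
    return profile

def _load_from_mysql(user_id: int) -> dict:
    ...  # placeholder: SELECT from MySQL via your driver/ORM of choice
    return {"id": user_id}
```

The TTL bounds staleness; writes would either update or invalidate the key to keep the cache consistent.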

Posted 1 day ago

Apply

4.0 years

0 Lacs

India

On-site

Job Title: Azure Databricks Engineer
Experience: 4+ Years

Required Skills:
- 4+ years of experience in Data Engineering.
- Strong hands-on experience with Azure Databricks and PySpark.
- Good understanding of Azure Data Factory (ADF), Azure Data Lake (ADLS), and Azure Synapse.
- Strong SQL skills and experience with large-scale data processing.
- Experience with version control systems (Git), CI/CD pipelines, and Agile methodology.
- Knowledge of Delta Lake, Lakehouse architecture, and distributed computing concepts.

Preferred Skills:
- Experience with Airflow, Power BI, or machine learning pipelines.
- Familiarity with DevOps tools for automation and deployment in Azure.
- Azure certifications (e.g., DP-203) are a plus.

Posted 1 day ago

Apply

3.0 years

0 Lacs

India

Remote

Title: Data Engineer
Location: Remote
Employment type: Full-time with BayOne

We're looking for a skilled and motivated Data Engineer to join our growing team and help us build scalable data pipelines, optimize data platforms, and enable real-time analytics.

What You'll Do:
- Design, develop, and maintain robust data pipelines using tools like Databricks, PySpark, SQL, Fabric, and Azure Data Factory.
- Collaborate with data scientists, analysts, and business teams to ensure data is accessible, clean, and actionable.
- Work on modern data lakehouse architectures and contribute to data governance and quality frameworks.

Tech Stack: Azure | Databricks | PySpark | SQL

What We're Looking For:
- 3+ years of experience in data engineering or analytics engineering.
- Hands-on experience with cloud data platforms and large-scale data processing.
- Strong problem-solving mindset and a passion for clean, efficient data design.

Job Description:
- Minimum 3 years of experience with modern data engineering/data warehousing/data lake technologies on cloud platforms like Azure, AWS, GCP, Databricks, etc. Azure experience is preferred over other cloud platforms.
- 5 years of proven experience with SQL, schema design, and dimensional data modeling.
- Solid knowledge of data warehouse best practices, development standards, and methodologies.
- Experience with ETL/ELT tools like ADF, Informatica, Talend, etc., and data warehousing technologies like Azure Synapse, Microsoft Fabric, Azure SQL, Amazon Redshift, Snowflake, Google BigQuery, etc.
- Strong experience with big data tools (Databricks, Spark, etc.) and programming skills in PySpark and Spark SQL.
- An independent self-learner with a "let's get this done" approach and the ability to work in a fast-paced, dynamic environment.
- Excellent communication and teamwork abilities.

Nice-to-Have Skills:
- Event Hub, IoT Hub, Azure Stream Analytics, Azure Analysis Services, and Cosmos DB knowledge.
- SAP ECC/S/4 and HANA knowledge.
- Intermediate knowledge of Power BI.
- Azure DevOps and CI/CD deployments; cloud migration methodologies and processes.

BayOne is an Equal Opportunity Employer and does not discriminate against any employee or applicant for employment because of race, color, sex, age, religion, sexual orientation, gender identity, status as a veteran, or on the basis of disability or any federal, state, or local protected class. This job posting represents the general duties and requirements necessary to perform this position and is not an exhaustive statement of all responsibilities, duties, and skills required. Management reserves the right to revise or alter this job description.

Posted 1 day ago

Apply