
20 Databricks Jobs

JobPe aggregates listings for easy access, but applications are submitted directly on each employer's job portal.

2.0 - 6.0 years

0 Lacs

Kerala

On-site

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture, and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

As part of our EY-GDS D&A (Data and Analytics) team, we help our clients solve complex business challenges with the help of data and technology. We dive deep into data to extract the greatest value and discover opportunities across key businesses and functions such as Banking, Insurance, Manufacturing and Auto, Healthcare, Retail, Supply Chain, and Finance.

**The Opportunity**

We're looking for candidates with a strong understanding of technology and data in the big data engineering space and proven delivery capability. This is a fantastic opportunity to be part of a leading firm as well as a growing Data and Analytics team.

**API Developer:**

- Design, develop, and maintain RESTful APIs that provide secure and efficient data exchange.
- Participate in developing proofs of concept by integrating REST APIs with external services to ensure scalability and responsiveness.
- Good understanding of XML, JSON, and data-parsing techniques.
- Write unit, integration, and end-to-end tests for APIs, and troubleshoot any API-related issues.
- Knowledge of API authentication and authorization methods (OAuth, JWT, etc.); a worked sketch follows this description.
- Working experience in Python, Scala, and database operations, with experience designing and developing API frameworks.
- SQL and database knowledge is an added value.

**Technology Stack:** Understanding of API lifecycle release management; hands-on experience with Python, Spark, Java, or PHP; and experience managing database interactions with MySQL and PostgreSQL. SQL skills, exposure to a Databricks environment, and familiarity with Git and other version control systems are added values.

**Ideally, you'll also have:**

- Client management skills.

**What We Look For:**

- Minimum 5 years of experience as an Architect on analytics solutions and around 2 years of experience with Snowflake.
- People with technical experience and enthusiasm to learn new things in this fast-moving environment.

**What Working At EY Offers:**

At EY, we're dedicated to helping our clients, from startups to Fortune 500 companies, and the work we do with them is as varied as they are. You get to work on inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees, and you will be able to control your development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer:

- Support, coaching, and feedback from some of the most engaging colleagues around.
- Opportunities to develop new skills and progress your career.
- The freedom and flexibility to handle your role in a way that's right for you.

EY | Building a better working world

EY exists to build a better working world, helping to create long-term value for clients, people, and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform, and operate. Working across assurance, consulting, law, strategy, tax, and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
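A minimal sketch of the kind of JWT-protected REST endpoint this role describes, assuming Flask and PyJWT; the secret key, route, and claims are illustrative placeholders, not part of the posting.

```python
from functools import wraps

import jwt  # PyJWT
from flask import Flask, jsonify, request

app = Flask(__name__)
SECRET_KEY = "change-me"  # in practice, load from a secrets manager

def require_jwt(view):
    """Reject requests without a valid HS256 bearer token."""
    @wraps(view)
    def wrapper(*args, **kwargs):
        auth = request.headers.get("Authorization", "")
        if not auth.startswith("Bearer "):
            return jsonify(error="missing bearer token"), 401
        try:
            claims = jwt.decode(auth[7:], SECRET_KEY, algorithms=["HS256"])
        except jwt.InvalidTokenError:
            return jsonify(error="invalid or expired token"), 401
        return view(claims, *args, **kwargs)
    return wrapper

@app.route("/api/orders")  # hypothetical resource
@require_jwt
def list_orders(claims):
    # Return data scoped to the authenticated subject.
    return jsonify(user=claims.get("sub"), orders=[])

if __name__ == "__main__":
    app.run(debug=True)
```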

Posted 2 weeks ago

Apply

4.0 - 8.0 years

0 Lacs

Karnataka

On-site

As an Azure Data Engineer at our company, you will be responsible for designing, developing, and deploying scalable data pipelines using Azure Data Factory, Databricks, and PySpark. Your role will involve working with large datasets (500 GB+) to optimize data processing workflows. Collaborating with cross-functional teams to prioritize project requirements and ensuring data quality, security, and compliance with organizational standards will be essential tasks. Troubleshooting data pipeline issues, optimizing performance, and effectively communicating technical solutions to non-technical stakeholders are key aspects of the role.

To excel in this position, you should have at least 4 years of experience in data engineering and hold an Azure certification. Proficiency in Azure, Databricks, PySpark, SQL, data modeling, data integration, and workflow orchestration is crucial. Strong communication and interpersonal skills are necessary, along with the ability to work a 2nd shift. Experience in product engineering would be a valuable asset. Additionally, a minimum of 2-3 projects working with large datasets is preferred.

Key Responsibilities:
- Design, develop, and deploy scalable data pipelines using Azure Data Factory, Databricks, and PySpark (see the sketch after this description)
- Work with large datasets (500 GB+) to develop and optimize data processing workflows
- Collaborate with cross-functional teams to identify and prioritize project requirements
- Develop and maintain data models, data integration, and workflow orchestration
- Ensure data quality, security, and compliance with organizational standards
- Troubleshoot data pipeline issues and optimize performance
- Communicate technical solutions to non-technical stakeholders

Requirements:
- 4+ years of experience in data engineering
- Azure certification is mandatory
- Strong proficiency in Azure, Databricks, PySpark, SQL, data modeling, data integration, and workflow orchestration
- Experience working with large datasets (500 GB+)
- Strong communication and interpersonal skills
- Ability to work a 2nd shift
- Experience in product engineering is a plus
- Minimum 2-3 projects working with large datasets

Skills: Azure, SQL, integration, orchestration, communication skills, data security, Databricks, workflow, modeling, data quality, data integration, data modeling, advanced SQL, workflow orchestration, data compliance, PySpark
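A hedged sketch of the kind of PySpark transformation step such a pipeline might run inside Databricks; the mount paths and column names are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-ingest").getOrCreate()

# Illustrative path standing in for the 500 GB+ raw source.
raw = spark.read.parquet("/mnt/raw/events/")

cleaned = (
    raw.dropDuplicates(["event_id"])                  # basic data-quality step
       .withColumn("event_date", F.to_date("event_ts"))
       .filter(F.col("event_date").isNotNull())
)

(cleaned.write
        .format("delta")                              # Delta Lake output
        .mode("overwrite")
        .partitionBy("event_date")                    # lets readers prune partitions
        .save("/mnt/curated/events/"))
```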

Posted 2 weeks ago

Apply

7.0 - 10.0 years

7 - 10 Lacs

Coimbatore, Tamil Nadu, India

On-site

Sr Data Engineer:
- Very strong Python knowledge
- Experience with PySpark and big data technologies
- Proficiency in Databricks
- Knowledge of AWS cloud preferred
- Strong debugging skills
- Good SQL and higher-level programming language skills, with solid knowledge of data mining, machine learning algorithms, and tools
- Curiosity, creativity, and excitement for technology and innovation
- Demonstrated quantitative and problem-solving abilities
- Expert proficiency in using Python, Spark (tuning jobs), SQL, and Hadoop platforms to build big data solutions (see the tuning sketch after this list)
- Comfortable developing shell scripts for automation
- Proficient in standard software development practices, such as version control, testing, and deployment
- Extensive data warehousing/data lake development experience, with strong data modelling and data integration experience
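An illustrative set of Spark tuning knobs for the large joins this posting alludes to; the configuration values and input paths are placeholders to be sized against the actual cluster and data.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = (
    SparkSession.builder.appName("tuned-batch")
    .config("spark.sql.shuffle.partitions", "400")       # roughly 2-3x total cores
    .config("spark.sql.adaptive.enabled", "true")        # AQE coalesces skewed shuffles
    .config("spark.sql.autoBroadcastJoinThreshold", str(64 * 1024 * 1024))
    .getOrCreate()
)

big = spark.read.parquet("/data/transactions/")    # hypothetical large fact table
dim = spark.read.parquet("/data/customers/")       # hypothetical small dimension

# Broadcasting the small dimension avoids shuffling the large side of the join.
joined = big.join(broadcast(dim), "customer_id")
joined.write.mode("overwrite").parquet("/data/enriched/")
```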

Posted 3 weeks ago

Apply

7.0 - 10.0 years

7 - 10 Lacs

Hyderabad, Telangana, India

On-site

Sr Data Engineer:
- Very strong Python knowledge
- Experience with PySpark and big data technologies
- Proficiency in Databricks
- Knowledge of AWS cloud preferred
- Strong debugging skills
- Good SQL and higher-level programming language skills, with solid knowledge of data mining, machine learning algorithms, and tools
- Curiosity, creativity, and excitement for technology and innovation
- Demonstrated quantitative and problem-solving abilities
- Expert proficiency in using Python, Spark (tuning jobs), SQL, and Hadoop platforms to build big data solutions
- Comfortable developing shell scripts for automation
- Proficient in standard software development practices, such as version control, testing, and deployment
- Extensive data warehousing/data lake development experience, with strong data modelling and data integration experience

Posted 3 weeks ago

Apply

2.0 - 6.0 years

8 - 12 Lacs

Bengaluru

Work from Office

Role & responsibilities:
- Develop and maintain scalable ETL/ELT pipelines using Databricks (PySpark, Delta Lake).
- Design and optimize data models in AWS Redshift for performance and scalability.
- Manage Redshift clusters and EC2-based deployments, ensuring reliability and cost efficiency.
- Integrate data from diverse sources (structured/unstructured) into centralized data platforms.
- Implement data quality checks, monitoring, and logging across pipelines (a sketch follows this description).
- Collaborate with data scientists, analysts, and business stakeholders to deliver high-quality datasets.

Required Skills & Experience:
- 3-6 years of experience in data engineering.
- Strong expertise in Databricks (Spark, Delta Lake, notebooks, job orchestration).
- Hands-on experience with AWS Redshift (cluster management, performance tuning, workload optimization).
- Proficiency with AWS EC2, S3, and related AWS services.
- Strong SQL and Python skills.
- Experience with CI/CD and version control (Git).

Preferred candidate profile: We are seeking a skilled Data Engineer with hands-on experience in Databricks and AWS Redshift (including EC2 deployments) to design, build, and optimize data pipelines that support analytics and business intelligence initiatives.
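A minimal sketch of the data-quality gate such a pipeline might run before staging data for Redshift; the thresholds, paths, and bucket name are assumptions.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-gate").getOrCreate()

df = spark.read.format("delta").load("/mnt/curated/orders/")  # hypothetical Delta table

total = df.count()
null_ids = df.filter(F.col("order_id").isNull()).count()

# Fail fast instead of propagating bad data downstream.
if total == 0 or null_ids / total > 0.01:
    raise ValueError(f"DQ check failed: {null_ids}/{total} null order_id rows")

# The validated frame can then be staged to S3 and COPYed into Redshift.
df.write.mode("overwrite").parquet("s3://example-bucket/stage/orders/")
```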

Posted 3 weeks ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Pune

Remote

Role & responsibilities:
- At least 5 years of experience in data engineering, with a strong background in Azure Databricks and Scala/Python.
- Databricks, with knowledge of PySpark.
- Database: Oracle or any other database.
- Programming: Python, with awareness of Streamlit.

Posted 1 month ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Bengaluru

Remote

Role & responsibilities: Looking for a skilled Data Engineer with expertise in Python and Azure Databricks for building scalable data pipelines. Must have strong SQL skills for designing, querying, and optimizing relational databases. Responsible for data ingestion, transformation, and orchestration across cloud platforms. Experience with coding best practices, performance tuning, and CI/CD in the Azure ecosystem is essential. Streamlit experience is required (a small sketch follows this description).
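A tiny sketch of the Streamlit reporting surface the posting calls for; the SQLite database, table, and columns are stand-ins for the real warehouse.

```python
import sqlite3

import pandas as pd
import streamlit as st

st.title("Pipeline throughput")

# Placeholder local database; a production app would query the warehouse instead.
conn = sqlite3.connect("metrics.db")
df = pd.read_sql_query(
    "SELECT run_date, rows_loaded FROM pipeline_runs ORDER BY run_date", conn
)

threshold = st.slider("Alert threshold (rows)", 0, 1_000_000, 100_000)
st.line_chart(df.set_index("run_date")["rows_loaded"])
st.dataframe(df[df["rows_loaded"] < threshold])  # runs below the threshold
```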

Posted 1 month ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Bengaluru

Remote

Role & responsibilities: Looking for a skilled Data Engineer with expertise in Python and Azure Databricks for building scalable data pipelines. Must have strong SQL skills for designing, querying, and optimizing relational databases. Responsible for data ingestion, transformation, and orchestration across cloud platforms. Experience with coding best practices, performance tuning, and CI/CD in the Azure ecosystem is essential.

Posted 1 month ago

Apply

7.0 - 12.0 years

30 - 45 Lacs

Hyderabad, Gurugram, Bengaluru

Work from Office

Senior Data Modeller - Telecom Domain

Job Location: Anywhere in India (preferred locations: Gurugram, Noida, Hyderabad, Bangalore)
Experience: 7+ Years
Domain: Telecommunications

Job Summary: We are hiring a Senior Data Modeller with strong telecom domain expertise. You will design and standardize enterprise-wide data models across domains like Customer, Product, Billing, and Network, ensuring alignment with TM Forum standards (SID, eTOM). You'll collaborate with cross-functional teams to translate business needs into scalable, governed data structures, supporting analytics, ML, and digital transformation.

Key Responsibilities:
- Design logical/physical data models for telecom domains
- Align models with TM Forum SID, eTOM, ODA, and data mesh principles
- Develop schemas (normalized, star, snowflake) based on business needs
- Maintain data lineage, metadata, and version control
- Collaborate with engineering teams on Azure and Databricks implementations
- Tag data for privacy, compliance (GDPR), and data quality

Required Skills:
- 7+ years in data modelling, 3+ years in the telecom domain
- Proficient in TM Forum standards and telecom business processes
- Hands-on with data modeling tools (SSAS, dbt, Informatica)
- Expertise in SQL, metadata documentation, and schema design
- Cloud experience: Azure Synapse, Databricks, Snowflake
- Experience in CRM, billing, network usage, and campaign data models
- Familiar with data mesh, domain-driven design, and regulatory frameworks

Education: Bachelor's or Master's in CS, Telecom Engineering, or a related field

Please go through the JD and, if you are interested, kindly share your updated resume along with the following details:
- Current CTC (fixed plus variable)
- Offer in hand (fixed plus variable)
- Expected CTC
- Notice period
- A few points on relevant skills and experience

Email: sp@intellisearchonline.net

Posted 2 months ago

Apply

2.0 - 6.0 years

10 - 18 Lacs

Kochi, Coimbatore, Thiruvananthapuram

Work from Office

We are seeking a highly skilled and motivated Senior DataOps Engineer with strong expertise in the Azure/AWS data ecosystem. You will play a crucial role in managing and optimizing data workflows across cloud platforms such as Azure/AWS Data Factory, Data Lake, Databricks, and Synapse.

Your key responsibilities:
- Data Pipeline Management: Build, monitor, and optimize data pipelines using Azure/AWS Data Factory (ADF), Databricks, and Azure/AWS Synapse for efficient data ingestion, transformation, and storage.
- ETL Operations: Design and maintain robust ETL processes for batch and real-time data processing across cloud and on-premise sources.
- Data Lake Management: Organize and manage structured and unstructured data in Azure/AWS Data Lake, ensuring performance and security best practices.

Location: Kerala and Coimbatore

To qualify for the role, you must have:
- 2-6 years of experience in DataOps or Data Engineering roles
- Proven expertise in managing and troubleshooting data workflows within the Azure/AWS ecosystem
- Experience working with Informatica CDI or similar data integration tools
- Scripting and automation experience in Python/PySpark
- Ability to support data pipelines in a rotational on-call or production support environment

To learn more about the opportunity, click a link below and apply for the role:
Azure: https://careers.ey.com/job-invite/1609635/
AWS: https://careers.ey.com/job-invite/1609586/

Posted 2 months ago

Apply

3.0 - 5.0 years

6 - 16 Lacs

Pune

Work from Office

Primary Job Responsibilities:
- Collaborate with team members to maintain, monitor, and improve data ingestion pipelines on the Data & AI platform.
- Attend the office 3 times a week for collaborative sessions and team alignment.
- Drive innovation in the ingestion and analytics domains to enhance performance and scalability.
- Work closely with the domain architect to implement and evolve data engineering strategies.

Required Skills:
- Minimum 5 years of experience in Python development focused on Data Engineering.
- Hands-on experience with Databricks and the Delta Lake format.
- Strong proficiency in SQL, data structures, and robust coding practices.
- Solid understanding of scalable data pipelines and performance optimization.

Preferred / Nice to Have:
- Familiarity with monitoring tools like Prometheus and Grafana.
- Experience using Copilot or AI-based tools for code enhancement and efficiency.

Posted 3 months ago

Apply

3.0 - 5.0 years

6 - 16 Lacs

Pune

Work from Office

Primary Job Responsibilities:
- Collaborate with team members to maintain, monitor, and improve data ingestion pipelines on the Data & AI platform.
- Attend the office 3 times a week for collaborative sessions and team alignment.
- Drive innovation in the ingestion and analytics domains to enhance performance and scalability.
- Work closely with the domain architect to implement and evolve data engineering strategies.

Required Skills:
- Minimum 5 years of experience in Python development focused on Data Engineering.
- Hands-on experience with Databricks and the Delta Lake format.
- Strong proficiency in SQL, data structures, and robust coding practices.
- Solid understanding of scalable data pipelines and performance optimization.

Preferred / Nice to Have:
- Familiarity with monitoring tools like Prometheus and Grafana.
- Experience using Copilot or AI-based tools for code enhancement and efficiency.

Posted 3 months ago

Apply

10.0 - 16.0 years

60 - 75 Lacs

Pune

Hybrid

Position Summary: As a Software Architect, you will be responsible for providing technical leadership and architectural guidance to development teams, ensuring the design and implementation of scalable, robust, and maintainable software solutions. You will collaborate with stakeholders, including business leaders, project managers, and developers, to understand requirements, define architectural goals, and make informed decisions on technology selection, system design, and implementation strategies. Additionally, you will mentor and coach team members, promote best practices, and foster a culture of innovation and excellence within the organization. This role is based in Redaptive's Pune, India office.

Responsibilities and Duties (time spent performing duty):

System Design and Architecture: 40%
- Identify and propose technical solutions for complex problem statements.
- Provide an application-level perspective during design and implementation that accounts for cost constraints, testability, complexity, scalability, performance, migrations, etc.
- Provide technical leadership and guidance to development teams, mentoring engineers and fostering a culture of excellence and innovation.
- Review code and architectural designs to ensure adherence to coding standards, best practices, and architectural principles.
- Create and maintain architectural documentation, including architectural diagrams, design documents, and technical specifications, to ensure clarity and facilitate collaboration.

Software Design and Development: 50%
- Gather and analyze requirements from stakeholders, understanding business needs and translating them into technical specifications.
- Work alongside teams at all stages of design and development, augmenting and supporting teams as needed.
- Collaborate with product managers, stakeholders, and cross-functional teams to define project scope, requirements, and timelines, and ensure successful project execution.

Knowledge Sharing and Continuous Improvement: 10%
- Conduct presentations, workshops, and training sessions to educate stakeholders and development teams on architectural concepts, best practices, and technologies.
- Stay updated with emerging technologies, industry trends, and best practices in software architecture and development.
- Identify opportunities for process improvement, automation, and optimization in software development processes and methodologies.
- Share knowledge and expertise with team members through mentorship, training sessions, and community involvement.

Required Abilities and Skills:
- Strong analytical and troubleshooting skills.
- Excellent verbal and written communication skills; ability to effectively communicate with stakeholders, including business leaders and project managers, to understand requirements and constraints.
- Works effectively with cross-functional teams, including developers, QA, product managers, and operations.
- Capability to understand the bigger picture and design systems that align with business goals, scalability requirements, and future growth.
- Ability to make tough decisions and take ownership of architectural choices, considering both short-term and long-term implications.
- Mastery of one or more programming languages commonly used in software development, such as Java, Python, or JavaScript.
- Expertise in SQL and NoSQL databases, including database design and optimization.
- Ability to quickly learn new technologies and adapt to changing requirements.
- Knowledge of techniques for designing scalable and high-performance web services, including load balancing, caching, and horizontal scaling (an illustrative caching sketch follows this description).
- Knowledge of software design principles (e.g., object-oriented principles, data structures, and algorithms).
- Possesses a security mindset; drives adoption of best practices to design systems that are secure and resilient to security threats.
- Continuously learning and staying up to date with emerging technologies and best practices.
- Domain knowledge in energy efficiency, solar/storage, or electric utilities is a plus.

Education and Experience:
- 10+ years of software development experience.
- Proven track record of delivering high-quality software solutions within deadlines.
- Demonstrated technical leadership experience.
- Experience with data-heavy systems like Databricks and DataOps.
- Experience with cloud (AWS) application development.
- Experience with Java and the Spring framework strongly preferred.
- Experience with distributed architectures, SOA, microservices, and containerization technologies (e.g., Docker, Kubernetes).
- Experience designing and developing web-based applications and backend services.

Travel: This role may require 1-2 annual international work visits to the US.

The Perks!
- Equity plan participation
- Medical and Personal Accident Insurance
- Support on hybrid working and relocation
- Flexible Time Off
- Continuous learning
- Annual bonus, subject to company and individual performance

The company is an Equal Opportunity Employer, a drug-free workplace, and complies with applicable labor laws. All duties and responsibilities are essential functions and requirements and are subject to possible modification to reasonably accommodate individuals with disabilities. The requirements listed in this document are the minimum levels of knowledge, skills, or abilities.
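An illustrative in-process TTL cache of the kind the caching bullet refers to; in a horizontally scaled service this state would typically live in Redis or a similar shared store, and the function and values below are hypothetical.

```python
import time
from functools import wraps

def ttl_cache(seconds: float):
    """Cache a function's results, expiring entries after `seconds`."""
    def decorator(fn):
        store = {}  # args tuple -> (expires_at, value)

        @wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit and hit[0] > now:
                return hit[1]          # fresh cache hit
            value = fn(*args)
            store[args] = (now + seconds, value)
            return value
        return wrapper
    return decorator

@ttl_cache(seconds=30)
def load_rates(region: str) -> dict:
    # Stand-in for an expensive database or API call.
    return {"region": region, "rate": 0.12}

print(load_rates("us-east"))  # miss: computed
print(load_rates("us-east"))  # hit: served from cache for 30s
```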

Posted 3 months ago

Apply

6.0 - 11.0 years

8 - 13 Lacs

Hyderabad

Work from Office

Primary Responsibilities:
- Come up with architecture and design covering aspects like extensibility, scalability, security, and design patterns against a predefined checklist, and ensure that all relevant best practices are followed
- Execute POCs to make sure that the suggested design/technologies meet the requirements
- Architect with a modern technology stack and design public cloud applications leveraging Azure
- Possess/acquire solid troubleshooting skills and be interested in performing troubleshooting of issues across disparate technologies and environments
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies regarding flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary, or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications:
- Graduate degree or equivalent (6+ years of experience)
- Solid foundational skills in data engineering (using on-premise and cloud tools)
- Data management
- Data engineering on Azure: Azure Databricks, Azure Data Flow, Azure Functions
- ANSI SQL, PySpark, and Spark SQL; ETL processes; Databricks DLT (Delta Live Tables), as sketched after this description
- Unity Catalog integration
- Orchestration with Databricks Workflows
- CI/CD and version control
- Knowledge of Snowflake and RDBMS
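A hedged sketch of a Delta Live Tables (DLT) definition like the one this role mentions. It only runs inside a Databricks DLT pipeline (where `spark` is provided by the runtime), and the source path, table names, and expectation are illustrative.

```python
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw clickstream ingested as-is")
def clickstream_raw():
    # Hypothetical landing-zone path.
    return spark.read.format("json").load("/mnt/landing/clickstream/")

@dlt.table(comment="Cleaned clickstream with basic expectations")
@dlt.expect_or_drop("valid_user", "user_id IS NOT NULL")  # drop rows failing the check
def clickstream_clean():
    return (
        dlt.read("clickstream_raw")
           .withColumn("event_date", F.to_date("event_ts"))
    )
```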

Posted 3 months ago

Apply

12 - 22 years

35 - 65 Lacs

Chennai

Hybrid

Warm Greetings from SP Staffing Services Private Limited!!

We have an urgent opening with our CMMI Level 5 client for the below position. Please send your updated profile if you are interested.

Relevant Experience: 8 - 24 Yrs
Location: Pan India

Job Description: Candidates should have a minimum of 2 years of hands-on experience as an Azure Databricks Architect.

If interested, please forward your updated resume to sankarspstaffings@gmail.com / Sankar@spstaffing.in

With Regards,
Sankar G
Sr. Executive - IT Recruitment

Posted 4 months ago

Apply

10 - 18 years

35 - 55 Lacs

Hyderabad, Bengaluru, Mumbai (All Areas)

Hybrid

Warm Greetings from SP Staffing Services Private Limited!!

We have an urgent opening with our CMMI Level 5 client for the below position. Please send your updated profile if you are interested.

Relevant Experience: 8 Yrs - 18 Yrs
Location: Pan India

Job Description:
- Experience in Synapse with PySpark
- Knowledge of Big Data pipelines / Data Engineering
- Working knowledge of the MSBI stack on Azure
- Working knowledge of Azure Data Factory, Azure Data Lake, and Azure Data Lake Storage
- Hands-on in visualization tools like Power BI
- Implement end-to-end data pipelines using Cosmos DB and Azure Data Factory
- Good analytical thinking and problem solving
- Good communication and coordination skills
- Able to work as an individual contributor
- Requirement analysis; create, maintain, and enhance Big Data pipelines
- Daily status reporting and interaction with leads
- Version control (ADO, Git), CI/CD
- Marketing campaign experience; Data Platform product telemetry
- Data validation and data quality checks of new streams
- Monitoring of data pipelines created in Azure Data Factory; updating the tech spec and wiki page for each pipeline implementation; updating ADO on a daily basis

If interested, please forward your updated resume to sankarspstaffings@gmail.com / Sankar@spstaffing.in

With Regards,
Sankar G
Sr. Executive - IT Recruitment

Posted 4 months ago

Apply

10 - 20 years

35 - 55 Lacs

Hyderabad, Bengaluru, Mumbai (All Areas)

Hybrid

Warm Greetings from SP Staffing Services Private Limited!!

We have an urgent opening with our CMMI Level 5 client for the below position. Please send your updated profile if you are interested.

Relevant Experience: 8 Yrs - 18 Yrs
Location: Pan India

Job Description:

Mandatory Skill: Azure ADB with Azure Data Lake

- Lead the architecture design and implementation of advanced analytics solutions using Azure Databricks and Fabric. The ideal candidate will have a deep understanding of big data technologies, data engineering, and cloud computing, with a strong focus on Azure Databricks, along with strong SQL.
- Work closely with business stakeholders and other IT teams to understand requirements and deliver effective solutions.
- Oversee the end-to-end implementation of data solutions, ensuring alignment with business requirements and best practices.
- Lead the development of data pipelines and ETL processes using Azure Databricks, PySpark, and other relevant tools.
- Integrate Azure Databricks with other Azure services (e.g., Azure Data Lake, Azure Synapse, Azure Data Factory) and on-premise systems.
- Provide technical leadership and mentorship to the data engineering team, fostering a culture of continuous learning and improvement.
- Ensure proper documentation of architecture, processes, and data flows, while ensuring compliance with security and governance standards.
- Ensure best practices are followed in terms of code quality, data security, and scalability.
- Stay updated with the latest developments in Databricks and associated technologies to drive innovation.

Essential Skills:
- Strong experience with Azure Databricks, including cluster management, notebook development, and Delta Lake.
- Proficiency in big data technologies (e.g., Hadoop, Spark) and data processing frameworks (e.g., PySpark).
- Deep understanding of Azure services like Azure Data Lake, Azure Synapse, and Azure Data Factory.
- Experience with ETL/ELT processes, data warehousing, and building data lakes.
- Strong SQL skills and familiarity with NoSQL databases.
- Experience with CI/CD pipelines and version control systems like Git.
- Knowledge of cloud security best practices.

Soft Skills:
- Excellent communication skills, with the ability to explain complex technical concepts to non-technical stakeholders.
- Strong problem-solving skills and a proactive approach to identifying and resolving issues.
- Leadership skills, with the ability to manage and mentor a team of data engineers.

Experience:
- Demonstrated expertise of 8 years in developing data ingestion and transformation pipelines using Databricks/Synapse notebooks and Azure Data Factory.
- Solid understanding and hands-on experience with Delta tables, Delta Lake, and Azure Data Lake Storage Gen2.
- Experience in efficiently using Auto Loader and Delta Live Tables for seamless data ingestion and transformation (an Auto Loader sketch follows this description).
- Proficiency in building and optimizing query layers using Databricks SQL.
- Demonstrated experience integrating Databricks with Azure Synapse, ADLS Gen2, and Power BI for end-to-end analytics solutions.
- Prior experience in developing, optimizing, and deploying Power BI reports.
- Familiarity with modern CI/CD practices, especially in the context of Databricks and cloud-native solutions.

If interested, please forward your updated resume to sankarspstaffings@gmail.com / Sankar@spstaffing.in

With Regards,
Sankar G
Sr. Executive - IT Recruitment
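A minimal Auto Loader sketch matching the ingestion pattern described above. It is Databricks-only (the "cloudFiles" source), assumes `spark` from a Databricks notebook, and uses placeholder storage paths.

```python
# Incrementally discover and ingest new files with Databricks Auto Loader.
df = (
    spark.readStream
         .format("cloudFiles")
         .option("cloudFiles.format", "json")
         .option("cloudFiles.schemaLocation", "/mnt/chk/schema/")  # schema tracking
         .load("abfss://landing@account.dfs.core.windows.net/events/")  # hypothetical ADLS Gen2 path
)

# Land the stream in a bronze Delta table; availableNow processes the backlog
# once and stops, which suits scheduled batch-style runs.
(df.writeStream
   .format("delta")
   .option("checkpointLocation", "/mnt/chk/events/")
   .trigger(availableNow=True)
   .start("/mnt/bronze/events/"))
```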

Posted 4 months ago

Apply

11 - 20 years

20 - 35 Lacs

Hyderabad, Pune, Bengaluru

Hybrid

Warm Greetings from SP Staffing Services Private Limited!!

We have an urgent opening with our CMMI Level 5 client for the below position. Please send your updated profile if you are interested.

Relevant Experience: 11 - 20 Yrs
Location: Pan India

Job Description: Minimum 2 years of hands-on experience as a Solution Architect (AWS Databricks).

If interested, please forward your updated resume to sankarspstaffings@gmail.com

With Regards,
Sankar G
Sr. Executive - IT Recruitment

Posted 4 months ago

Apply

10.0 - 18.0 years

20 - 35 Lacs

Noida

Remote

Job Title: MLOps Engineer
Work Timing: US EST Hours
Duration: Contract
Experience: 8+ years
Location: India (Remote)

Job Description: We are seeking an experienced Machine Learning Operations (MLOps) Engineer to join our AI & Machine Learning Platform Team. The ideal candidate will have strong expertise in cloud infrastructure, CI/CD, container orchestration, and ML model deployment. You will work at the intersection of Data Science, Data Engineering, and DevOps, enabling scalable ML solutions and production-grade AI applications.

Key Responsibilities:
- Design and deploy scalable ML infrastructure using cloud and containerization technologies (Azure, Kubernetes, AKS, Docker).
- Collaborate with cross-functional teams to build cloud-hosted, automated pipelines for running, monitoring, and retraining ML models.
- Implement model and pipeline validation procedures with Data Scientists, Data Engineers, and ML Engineers.
- Optimize and refactor development code to ensure seamless production deployments.
- Build and maintain data and feature engineering pipelines to support ML models.
- Define configurations and specifications for automated production environments.
- Develop CI/CD pipelines to enable continuous integration, testing, and deployment of ML models and features (a model-packaging sketch follows this description).
- Support AI/ML applications and MLOps practices for production-grade deployments.

Required Skills:
- Strong experience with Azure Cloud, Azure DevOps (CI/CD pipelines), Kubernetes & AKS, and Databricks (data & ML platform).
- Solid understanding of machine learning model deployment into production.
- Hands-on experience with platform engineering and MLOps practices.

Nice-to-Have Skills:
- Experience with ML infrastructure deployments.
- Exposure to AI applications and advanced MLOps frameworks.

Thanks & Regards,
Kanika Katiyar
Associate Recruiter
Email: kkatiyar@fcsltd.com
FCS Software Solutions Limited
https://www.fcsltd.com
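A hedged sketch of the model-packaging step such CI/CD pipelines automate, using MLflow tracking and registry; the toy model, metric, and registered model name are illustrative, not part of the posting.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy training data standing in for the real feature pipeline output.
X, y = make_classification(n_samples=500, random_state=0)
model = LogisticRegression(max_iter=200).fit(X, y)

with mlflow.start_run():
    # Log a metric a CI gate could assert on before promoting the model.
    mlflow.log_metric("train_accuracy", model.score(X, y))
    # Log and register the model so a deployment stage can pull it by name.
    mlflow.sklearn.log_model(
        model,
        artifact_path="model",
        registered_model_name="churn-classifier",  # hypothetical registry entry
    )
```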

Posted Date not available

Apply

7.0 - 10.0 years

20 - 25 Lacs

Noida

Remote

Focus: Build and maintain data pipelines and infrastructure to feed AI models.

Responsibilities:
- Architect and implement efficient ETL/ELT pipelines to ingest data from diverse internal and external sources (APIs, databases, streaming platforms).
- Optimize data storage, indexing, and retrieval to support high-volume, low-latency AI workloads.
- Ensure data quality by implementing validation, cleansing, and anomaly detection mechanisms (a simple sketch follows this description).
- Manage cloud data infrastructure (AWS Glue, Databricks, Snowflake, Kafka, etc.) or equivalent on-prem tools.
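A simple sketch of the statistical anomaly gate the posting describes; the z-score approach, threshold, and sample feed are assumptions, not a prescribed method.

```python
import statistics

def flag_anomalies(values, z_threshold=2.0):
    """Return indices of values whose z-score exceeds the threshold."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []  # constant series: nothing to flag
    return [i for i, v in enumerate(values) if abs(v - mean) / stdev > z_threshold]

# Hypothetical daily row counts from an ingestion feed; index 4 is a suspicious drop.
daily_row_counts = [10_120, 9_980, 10_050, 10_210, 312, 10_090]
print(flag_anomalies(daily_row_counts))  # -> [4]
```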

Posted Date not available

Apply