3 Job openings at Inferenz
Sr MLOps Engineer

Pune, Maharashtra, India

5 years

Not disclosed

On-site

Full Time

Position: Sr. MLOps Engineer
Location: Ahmedabad, Pune
Required Experience: 5+ years
Preferred: Immediate joiners

Job Overview
Building the machine learning production infrastructure (MLOps) is the biggest challenge most large companies face in becoming AI-driven organizations. We are looking for a highly skilled MLOps Engineer to join our team. As an MLOps Engineer, you will design, implement, and maintain the infrastructure that supports deploying, monitoring, and scaling machine learning models in production. You will work closely with data scientists, software engineers, and DevOps teams to ensure seamless integration of machine learning models into our production systems.

The job is NOT for you if:
· You don't want to build a career in AI/ML. Becoming an expert in this technology and staying current will require significant self-motivation.
· You like the comfort and predictability of working on the same problem or code base for years. The tools, best practices, architectures, and problems are all changing rapidly; you will be expected to learn new skills quickly and adapt.

Key Responsibilities:
· Model Deployment: Design and implement scalable, reliable, and secure pipelines for deploying machine learning models to production.
· Infrastructure Management: Develop and maintain infrastructure as code (IaC) for managing cloud resources, compute environments, and data storage.
· Monitoring and Optimization: Implement monitoring tools to track the performance of models in production, identify issues, and optimize performance.
· Collaboration: Work closely with data scientists to understand model requirements and ensure models are production-ready.
· Automation: Automate the end-to-end process of training, testing, deploying, and monitoring models.
· Continuous Integration/Continuous Deployment (CI/CD): Develop and maintain CI/CD pipelines for machine learning projects.
· Version Control: Implement model versioning to manage different iterations of machine learning models.
· Security and Governance: Ensure that deployed models and data pipelines are secure and comply with industry regulations.
· Documentation: Create and maintain detailed documentation of all processes, tools, and infrastructure.

Qualifications:
· 5+ years of experience in a similar role (DevOps, DataOps, MLOps, etc.)
· Bachelor's or Master's degree in Computer Science, Engineering, or a related field
· Experience with cloud platforms (AWS, GCP, Azure) and containerization (Docker, Kubernetes)
· Strong understanding of the machine learning lifecycle, data pipelines, and model serving
· Proficiency in Python and shell scripting, and familiarity with ML frameworks (TensorFlow, PyTorch, etc.)
· Exposure to deep learning approaches and modeling frameworks (PyTorch, TensorFlow, Keras, etc.)
· Experience with CI/CD tools like Jenkins, GitLab CI, or similar
· Experience building end-to-end systems as a Platform Engineer, ML DevOps Engineer, or Data Engineer (or equivalent)
· Strong software engineering skills in complex, multi-language systems
· Comfort with Linux administration
· Experience working with cloud computing and database systems
· Experience building custom integrations between cloud-based systems using APIs
· Experience developing and maintaining ML systems built with open-source tools
· Experience developing with containers and Kubernetes in cloud computing environments
· Familiarity with one or more data-oriented workflow orchestration frameworks (MLflow, Kubeflow, Airflow, Argo, etc.)
· Ability to translate business needs into technical requirements
· Strong understanding of software testing, benchmarking, and continuous integration
· Exposure to machine learning methodology and best practices
· Understanding of regulatory requirements for data privacy and model governance

Preferred Skills:
· Excellent problem-solving skills and ability to troubleshoot complex production issues
· Strong communication skills and ability to collaborate with cross-functional teams
· Familiarity with monitoring and logging tools (e.g., Prometheus, Grafana, ELK Stack)
· Knowledge of database systems (SQL, NoSQL)
· Experience with Generative AI frameworks
· Cloud or MLOps/DevOps certification (AWS, GCP, or Azure) preferred
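To make the model-versioning responsibility above concrete, here is a minimal sketch using MLflow, one of the orchestration/tracking frameworks named in the qualifications. It logs a training run, registers the model, and promotes the newest version; the experiment name, model name, and stage label are hypothetical placeholders, not details from this posting.

import mlflow
from mlflow.tracking import MlflowClient
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# Train a toy model (stand-in for a real training pipeline).
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=200).fit(X, y)

# Log the run and register the model so each training run
# produces a new, tracked model version.
mlflow.set_experiment("demo-experiment")  # hypothetical name
with mlflow.start_run():
    mlflow.log_param("max_iter", 200)
    mlflow.sklearn.log_model(
        model,
        artifact_path="model",
        registered_model_name="demo-classifier",  # hypothetical name
    )

# Promote the newest registered version to Staging via the registry API.
client = MlflowClient()
latest = client.get_latest_versions("demo-classifier", stages=["None"])[0]
client.transition_model_version_stage(
    name="demo-classifier", version=latest.version, stage="Staging"
)

In a production setup, a CI/CD pipeline would typically run this registration step automatically after tests pass, so every deployable model iteration is versioned and auditable.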

Lead Data Engineer (Databricks)

Ahmedabad, Gujarat, India

7 years

Not disclosed

On-site

Full Time

Position: Lead Data Engineer (Databricks)
Location: Ahmedabad, Pune
Required Experience: 7 to 10 years
Preferred: Immediate joiner

We are looking for an accomplished Lead Data Engineer with expertise in Databricks to join our dynamic team. This role is crucial to enhancing our data engineering capabilities and offers the chance to work with advanced technologies, including Generative AI.

Key Responsibilities:
· Lead the design, development, and optimization of data solutions using Databricks, ensuring they are scalable, efficient, and secure.
· Collaborate with cross-functional teams to gather and analyse data requirements, translating them into robust data architectures and solutions.
· Develop and maintain ETL pipelines, leveraging Databricks and integrating with Azure Data Factory as needed.
· Implement machine learning models and advanced analytics solutions, incorporating Generative AI to drive innovation.
· Ensure data quality, governance, and security practices are adhered to, maintaining the integrity and reliability of data solutions.
· Provide technical leadership and mentorship to junior engineers, fostering an environment of learning and growth.
· Stay current on the latest trends and advancements in data engineering, Databricks, Generative AI, and Azure Data Factory to continually enhance team capabilities.

Required Skills & Qualifications:
· Bachelor's or Master's degree in Computer Science, Information Technology, or a related field
· 7 to 10 years of experience in data engineering, with a focus on Databricks
· Proven expertise in building and optimizing data solutions using Databricks and integrating with Azure Data Factory/AWS Glue
· Proficiency in SQL and programming languages such as Python or Scala
· Strong understanding of data modelling, ETL processes, and Data Warehouse/Data Lakehouse concepts
· Familiarity with cloud platforms, particularly Azure, and containerization technologies such as Docker
· Excellent analytical, problem-solving, and communication skills
· Demonstrated leadership ability, with experience mentoring and guiding junior team members

Preferred Qualifications:
· Experience with Generative AI technologies and their applications
· Familiarity with other cloud platforms, such as AWS or GCP
· Knowledge of data governance frameworks and tools

Perks:
· Flexible Timings
· 5 Days Working
· Healthy Environment
· Celebration
· Learn and Grow
· Build the Community
· Medical Insurance Benefit
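As a small illustration of the Databricks ETL work this posting describes, the PySpark sketch below reads raw JSON, cleanses it, and writes a partitioned Delta table. The paths, column names, and table name are hypothetical, chosen only to show the shape of such a pipeline.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Ingest raw events (path and schema are hypothetical).
raw = spark.read.json("/mnt/raw/orders/")

# Cleanse: deduplicate, derive a date column, drop invalid rows.
clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("amount") > 0)
)

# Persist as a Delta table partitioned by date for efficient reads.
(
    clean.write.format("delta")
         .mode("overwrite")
         .partitionBy("order_date")
         .saveAsTable("analytics.orders_clean")
)

On Databricks, a job like this would typically be scheduled as a workflow task or triggered from Azure Data Factory, per the integration mentioned in the responsibilities.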

Senior Data Engineer (Snowflake+DBT+Airflow)

Ahmedabad

5 years

INR 3.0 - 7.75 Lacs P.A.

On-site

Part Time

Location: Ahmedabad / Pune
Required Experience: 5+ years
Preferred: Immediate joiner

We are looking for a highly skilled Lead Data Engineer (Snowflake) to join our team. The ideal candidate will have extensive experience with Snowflake and cloud platforms, along with a strong understanding of ETL processes, data warehousing concepts, and programming languages. If you have a passion for working with large datasets, designing scalable database schemas, and solving complex data problems, we would love to hear from you.

Key Responsibilities:
· Design, implement, and optimize data pipelines and workflows using Apache Airflow
· Develop incremental and full-load strategies with monitoring, retries, and logging
· Build scalable data models and transformations in dbt, ensuring modularity, documentation, and test coverage
· Develop and maintain data warehouses in Snowflake
· Ensure data quality, integrity, and reliability through validation frameworks and automated testing
· Tune performance through clustering keys, warehouse scaling, materialized views, and query optimization
· Monitor job performance and resolve data pipeline issues proactively
· Build and maintain data quality frameworks (null checks, type checks, threshold alerts)
· Partner with data analysts, data scientists, and business stakeholders to translate reporting and analytics requirements into technical specifications

Required Skills & Qualifications:
· Snowflake (data modeling, performance tuning, access control, external tables, streams & tasks)
· Apache Airflow (DAG design, task dependencies, dynamic tasks, error handling)
· dbt (modular SQL development, Jinja templating, testing, documentation)
· Proficiency in SQL, Spark, and Python
· Experience building data pipelines on cloud platforms such as AWS, GCP, or Azure
· Strong knowledge of data warehousing concepts and ELT best practices
· Familiarity with version control systems (e.g., Git) and CI/CD practices
· Familiarity with infrastructure-as-code tools like Terraform for provisioning Snowflake or Airflow environments
· Excellent problem-solving skills and the ability to work independently

Perks:
· Flexible Timings
· 5 Days Working
· Healthy Environment
· Celebration
· Learn and Grow
· Build the Community
· Medical Insurance Benefit
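To make the Airflow-plus-dbt side of this role concrete, here is a minimal sketch of a daily Airflow DAG that builds dbt models against a Snowflake target and then runs dbt tests, with the retry and dependency handling the posting mentions. The DAG id, owner, profiles directory, and target name are hypothetical.

from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

default_args = {
    "owner": "data-eng",                  # hypothetical owner
    "retries": 2,                         # retry failed tasks twice
    "retry_delay": timedelta(minutes=5),  # wait between retries
}

with DAG(
    dag_id="daily_dbt_snowflake",         # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args=default_args,
) as dag:
    # Build dbt models in the Snowflake target, then test them.
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --profiles-dir /opt/dbt --target prod",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="dbt test --profiles-dir /opt/dbt --target prod",
    )
    dbt_run >> dbt_test  # tests only run after a successful build

Separating the run and test tasks keeps failures observable in the Airflow UI and lets the test stage gate downstream consumers on data quality.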
