Position: Sr. MLOps Engineer
Location: Ahmedabad, Pune
Required Experience: 5+ Years
Preferred: Immediate Joiners

Job Overview:
Building machine learning production infrastructure (MLOps) is the biggest challenge most large companies currently face in becoming AI-driven organizations. We are looking for a highly skilled MLOps Engineer to join our team. As an MLOps Engineer, you will be responsible for designing, implementing, and maintaining the infrastructure that supports the deployment, monitoring, and scaling of machine learning models in production. You will work closely with data scientists, software engineers, and DevOps teams to ensure seamless integration of machine learning models into our production systems.

This job is NOT for you if:
· You don't want to build a career in AI/ML. Becoming an expert in this technology and staying current will require significant self-motivation.
· You prefer the comfort and predictability of working on the same problem or code base for years. The tools, best practices, architectures, and problems are all changing rapidly; you will be expected to learn new skills quickly and adapt.

Key Responsibilities:
· Model Deployment: Design and implement scalable, reliable, and secure pipelines for deploying machine learning models to production.
· Infrastructure Management: Develop and maintain infrastructure as code (IaC) for managing cloud resources, compute environments, and data storage.
· Monitoring and Optimization: Implement monitoring tools to track the performance of models in production, identify issues, and optimize performance.
· Collaboration: Work closely with data scientists to understand model requirements and ensure models are production-ready.
· Automation: Automate the end-to-end process of training, testing, deploying, and monitoring models.
· Continuous Integration/Continuous Deployment (CI/CD): Develop and maintain CI/CD pipelines for machine learning projects.
· Version Control: Implement model versioning to manage different iterations of machine learning models (see the registry sketch after this posting).
· Security and Governance: Ensure that deployed models and data pipelines are secure and comply with industry regulations.
· Documentation: Create and maintain detailed documentation of all processes, tools, and infrastructure.

Qualifications:
· 5+ years of experience in a similar role (DevOps, DataOps, MLOps, etc.)
· Bachelor's or Master's degree in Computer Science, Engineering, or a related field
· Experience with cloud platforms (AWS, GCP, Azure) and containerization (Docker, Kubernetes)
· Strong understanding of the machine learning lifecycle, data pipelines, and model serving
· Proficiency in Python and shell scripting, and familiarity with ML frameworks (TensorFlow, PyTorch, etc.)
· Exposure to deep learning approaches and modeling frameworks (PyTorch, TensorFlow, Keras, etc.)
· Experience with CI/CD tools like Jenkins, GitLab CI, or similar
· Experience building end-to-end systems as a Platform Engineer, ML DevOps Engineer, or Data Engineer (or equivalent)
· Strong software engineering skills in complex, multi-language systems
· Comfort with Linux administration
· Experience working with cloud computing and database systems
· Experience building custom integrations between cloud-based systems using APIs
· Experience developing and maintaining ML systems built with open-source tools
· Experience developing with containers and Kubernetes in cloud computing environments
· Familiarity with one or more data-oriented workflow orchestration frameworks (MLflow, Kubeflow, Airflow, Argo, etc.)
· Ability to translate business needs into technical requirements
· Strong understanding of software testing, benchmarking, and continuous integration
· Exposure to machine learning methodology and best practices
· Understanding of regulatory requirements for data privacy and model governance

Preferred Skills:
· Excellent problem-solving skills and ability to troubleshoot complex production issues
· Strong communication skills and ability to collaborate with cross-functional teams
· Familiarity with monitoring and logging tools (e.g., Prometheus, Grafana, ELK Stack)
· Knowledge of database systems (SQL, NoSQL)
· Experience with Generative AI frameworks
· Cloud or MLOps/DevOps certification preferred (AWS, GCP, or Azure)
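For illustration, here is a minimal sketch of the model-versioning responsibility above using MLflow's model registry (MLflow is one of the frameworks the posting names). The model, synthetic data, and the registry name "churn-classifier" are hypothetical, not from the posting.

```python
# Hypothetical sketch: versioning a model with the MLflow model registry.
import mlflow
from mlflow.tracking import MlflowClient
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=5, random_state=42)
model = LogisticRegression().fit(X, y)

with mlflow.start_run():
    # Logging with registered_model_name creates a new registry version per run
    mlflow.sklearn.log_model(
        model,
        artifact_path="model",
        registered_model_name="churn-classifier",  # illustrative name
    )

client = MlflowClient()
latest = client.get_latest_versions("churn-classifier")[0]
print(f"Registered version {latest.version} of churn-classifier")
```

Each retraining run then yields a new numbered version, which deployment pipelines can promote or roll back independently of the code that produced it.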
Position: Lead Data Engineer (Databricks)
Location: Ahmedabad, Pune
Required Experience: 7 to 10 Years
Preferred: Immediate Joiner

We are looking for an accomplished Lead Data Engineer with expertise in Databricks to join our dynamic team. This role is crucial for enhancing our data engineering capabilities and offers the chance to work with advanced technologies, including Generative AI.

Key Responsibilities:
Lead the design, development, and optimization of data solutions using Databricks, ensuring they are scalable, efficient, and secure.
Collaborate with cross-functional teams to gather and analyze data requirements, translating them into robust data architectures and solutions.
Develop and maintain ETL pipelines, leveraging Databricks and integrating with Azure Data Factory as needed (a short PySpark sketch follows this posting).
Implement machine learning models and advanced analytics solutions, incorporating Generative AI to drive innovation.
Ensure data quality, governance, and security practices are adhered to, maintaining the integrity and reliability of data solutions.
Provide technical leadership and mentorship to junior engineers, fostering an environment of learning and growth.
Stay updated on the latest trends and advancements in data engineering, Databricks, Generative AI, and Azure Data Factory to continually enhance team capabilities.

Required Skills & Qualifications:
Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
7 to 10 years of experience in data engineering, with a focus on Databricks.
Proven expertise in building and optimizing data solutions using Databricks and integrating with Azure Data Factory/AWS Glue.
Proficiency in SQL and programming languages such as Python or Scala.
Strong understanding of data modeling, ETL processes, and Data Warehousing/Data Lakehouse concepts.
Familiarity with cloud platforms, particularly Azure, and containerization technologies such as Docker.
Excellent analytical, problem-solving, and communication skills.
Demonstrated leadership ability with experience mentoring and guiding junior team members.

Preferred Qualifications:
Experience with Generative AI technologies and their applications.
Familiarity with other cloud platforms, such as AWS or GCP.
Knowledge of data governance frameworks and tools.

Perks:
Flexible Timings
5 Days Working
Healthy Environment
Celebration
Learn and Grow
Build the Community
Medical Insurance Benefit
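As a rough illustration of the ETL work described above, here is a minimal PySpark sketch of a Databricks-style pipeline step. The landing path and table names ("raw_events", "analytics.daily_events") are placeholders, not details from the posting.

```python
# Hypothetical Databricks ETL step: read raw JSON, aggregate, write a Delta table.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

raw = spark.read.json("/mnt/landing/raw_events/")  # assumed landing path

daily = (
    raw.withColumn("event_date", F.to_date("event_ts"))
       .groupBy("event_date", "event_type")
       .agg(F.count("*").alias("event_count"))
)

# Delta is the default table format on Databricks
daily.write.format("delta").mode("overwrite").saveAsTable("analytics.daily_events")
```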
Location: Ahmedabad / Pune
Required Experience: 5+ Years
Preferred: Immediate Joiner

We are looking for a highly skilled Lead Data Engineer (Snowflake) to join our team. The ideal candidate will have extensive experience with Snowflake and cloud platforms, along with a strong understanding of ETL processes, data warehousing concepts, and programming languages. If you have a passion for working with large datasets, designing scalable database schemas, and solving complex data problems, we'd like to meet you.

Key Responsibilities:
Design, implement, and optimize data pipelines and workflows using Apache Airflow (a minimal DAG sketch follows this posting)
Develop incremental and full-load strategies with monitoring, retries, and logging
Build scalable data models and transformations in dbt, ensuring modularity, documentation, and test coverage
Develop and maintain data warehouses in Snowflake
Ensure data quality, integrity, and reliability through validation frameworks and automated testing
Tune performance through clustering keys, warehouse scaling, materialized views, and query optimization
Monitor job performance and resolve data pipeline issues proactively
Build and maintain data quality frameworks (null checks, type checks, threshold alerts)
Partner with data analysts, scientists, and business stakeholders to translate reporting and analytics requirements into technical specifications

Required Skills & Qualifications:
Snowflake (data modeling, performance tuning, access control, external tables, streams & tasks)
Apache Airflow (DAG design, task dependencies, dynamic tasks, error handling)
dbt (Data Build Tool) (modular SQL development, Jinja templating, testing, documentation)
Proficiency in SQL, Spark, and Python
Experience building data pipelines on cloud platforms like AWS, GCP, or Azure
Strong knowledge of data warehousing concepts and ELT best practices
Familiarity with version control systems (e.g., Git) and CI/CD practices
Familiarity with infrastructure-as-code tools like Terraform for provisioning Snowflake or Airflow environments
Excellent problem-solving skills and the ability to work independently

Perks:
Flexible Timings
5 Days Working
Healthy Environment
Celebration
Learn and Grow
Build the Community
Medical Insurance Benefit
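To make the orchestration responsibility concrete, here is a minimal Airflow DAG sketch with retries and a dbt step, assuming Airflow 2.x and that dbt is invoked via its CLI. The DAG id, script names, and paths are illustrative only.

```python
# Minimal sketch: a daily Airflow DAG that extracts data and runs dbt models.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

default_args = {
    "retries": 2,                          # automatic retries on failure
    "retry_delay": timedelta(minutes=5),
}

with DAG(
    dag_id="snowflake_elt_sketch",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args=default_args,
) as dag:
    extract = BashOperator(
        task_id="extract_to_stage",
        bash_command="python extract.py --date {{ ds }}",  # assumed extract script
    )
    transform = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --profiles-dir /opt/dbt",    # runs dbt models in Snowflake
    )
    extract >> transform
```

The retry settings in default_args and Airflow's own task logging cover the "monitoring, retries, and logging" requirement at a basic level; production DAGs would add alerting callbacks.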
Location: Ahmedabad, Pune
Required Experience: 5+ Years
Preferred: Immediate Joiner

Inferenz is a pioneering AI and Data Native company dedicated to transforming how organizations leverage data and AI to drive innovation and efficiency. As industry leaders, we specialize in delivering cutting-edge AI and data solutions that empower businesses to harness the full potential of their data assets.

We are seeking an experienced Senior Full Stack Developer to join our team building innovative Generative AI (GenAI) based applications. The ideal candidate will have a strong background in developing scalable RESTful APIs using Python, as well as modern frontend applications with Node.js or React. Experience with cloud platforms (Azure or AWS), Kubernetes, microservices architecture, and version control (Git or Azure Repos) is essential. Familiarity with AI/ML/GenAI technologies and agentic AI is highly valued.

Key Responsibilities:
Full-Stack Development: Design, build, and maintain scalable RESTful APIs using Python and frontend applications using Node.js or React (a minimal API sketch follows this posting).
GenAI Integration: Develop and optimize agentic AI components using Python, ensuring seamless integration with backend services.
Cloud Deployment: Manage application deployment, scaling, and monitoring on Azure or AWS using Kubernetes and microservices architecture.
Collaboration: Work with cross-functional teams using Jira and Confluence for project tracking and documentation.
Performance Optimization: Ensure high availability, security, and efficiency of applications through robust coding practices and infrastructure management.

Required Skills & Experience:
Backend: Strong expertise in Python and REST API development.
Frontend: Proficiency in Node.js, React, and modern JavaScript frameworks.
Cloud & DevOps: Hands-on experience with Azure or AWS, Kubernetes, microservices, and Git or Azure Repos for version control.
Tools: Familiarity with Jira, Confluence, and CI/CD pipelines.
Experience: 5+ years in full-stack development with production-grade applications.

Preferred Skills:
AI/ML Knowledge: Understanding of GenAI tools (e.g., LangChain, LLMs, RAG/GraphRAG/MCP architectures) and agentic AI development.
Cloud AI Services: Experience with cloud-based AI platforms (e.g., AWS Bedrock, Azure AI).
Architecture: Proficiency in designing scalable systems and troubleshooting distributed environments.

What We Offer:
Competitive salary and comprehensive benefits package.
Flexible work hours and a hybrid working model to support work-life balance.
Opportunities for professional growth and development in a cutting-edge technology environment.
Exposure to Generative AI and other advanced technologies.

Inferenz is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.

Perks:
Flexible Timings
5 Days Working
Healthy Environment
Celebration
Learn and Grow
Build the Community
Medical Insurance Benefit
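As a sketch of the Python REST work described above, here is a tiny endpoint wrapping a GenAI call. The posting does not name a framework; FastAPI is assumed here as one common choice, and generate_reply() is a placeholder for a real LLM client.

```python
# Hypothetical sketch of a small RESTful endpoint fronting a GenAI component.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ChatRequest(BaseModel):
    message: str

def generate_reply(message: str) -> str:
    # Placeholder for a LangChain / LLM call
    return f"echo: {message}"

@app.post("/chat")
def chat(req: ChatRequest) -> dict:
    """Accept a user message and return a model-generated reply."""
    return {"reply": generate_reply(req.message)}

# Run locally with: uvicorn main:app --reload
```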
You are a highly skilled Lead Data Engineer (Snowflake) with 7 to 10 years of experience, seeking to join a dynamic team in Ahmedabad or Pune. Your expertise includes extensive knowledge of Snowflake, cloud platforms, ETL processes, data warehousing concepts, and various programming languages. If you are passionate about working with large datasets, designing scalable database schemas, and solving complex data problems, we are excited to welcome you aboard!

Your responsibilities will involve designing, developing, and optimizing data pipelines using Snowflake and ELT/ETL tools. You will be tasked with architecting, implementing, and maintaining data warehouse solutions, ensuring high performance and scalability. Additionally, you will design efficient database schemas and data models to support business needs, write and optimize complex SQL queries, and develop data transformation scripts using Python, C#, or Java for process automation (a small connector sketch follows this posting). Ensuring data integrity, security, and governance throughout the data lifecycle will be paramount, as you analyze, troubleshoot, and resolve data-related issues at various strategic levels. Collaboration with cross-functional teams to understand business requirements and deliver data-driven solutions will be a key aspect of your role.

**Qualifications (Must Have):**
- Strong experience with Snowflake.
- Deep understanding of transactional databases, OLAP, and data warehousing concepts.
- Experience in designing database schemas and data models.
- Proficiency in one programming language (Python, C#, or Java).
- Strong problem-solving and analytical skills.

**Good to Have:**
- SnowPro Core or SnowPro Advanced certification.
- Experience with cost/performance optimization.
- Client-facing experience with the ability to understand business needs.
- Ability to work collaboratively in a team environment.

**Perks:**
- Flexible Timings
- 5 Days Working
- Healthy Environment
- Celebration
- Learn and Grow
- Build the Community
- Medical Insurance Benefit
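For illustration, here is a minimal Python sketch of the kind of transformation script the role mentions, using the snowflake-connector-python package. The account identifier, credentials, and table names are placeholders.

```python
# Hypothetical sketch: run a transformation query against Snowflake from Python.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345.us-east-1",   # placeholder account identifier
    user="ETL_USER",
    password="...",                # in practice, pull from a secrets manager
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)

try:
    cur = conn.cursor()
    # Example transformation: materialize a daily aggregate table
    cur.execute("""
        CREATE OR REPLACE TABLE daily_orders AS
        SELECT order_date, COUNT(*) AS order_count, SUM(amount) AS revenue
        FROM raw_orders
        GROUP BY order_date
    """)
finally:
    conn.close()
```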
As a Senior Full Stack Developer at Inferenz, a pioneering AI and Data Native company based in Ahmedabad and Pune, you will play a crucial role in developing innovative Generative AI (GenAI) based applications. With over 7 years of experience, you will be responsible for designing, building, and maintaining scalable RESTful APIs using Python, as well as modern frontend applications with Node.js or React.

Your key responsibilities will include integrating GenAI components using Python, ensuring seamless backend services integration, managing application deployment on Azure or AWS using Kubernetes and microservices architecture, and collaborating with cross-functional teams using Jira and Confluence for project tracking and documentation. Additionally, you will focus on performance optimization to ensure high availability, security, and efficiency of applications through robust coding practices and infrastructure management.

To excel in this role, you should have strong expertise in Python and REST API development on the backend; proficiency in Node.js, React, and modern JavaScript frameworks on the frontend; hands-on experience with Azure or AWS, Kubernetes, microservices, and Git or Azure Repos for cloud and DevOps work; familiarity with Jira, Confluence, and CI/CD pipelines; and at least 5 years of experience in full-stack development with production-grade applications. Preferred skills include an understanding of GenAI tools such as LangChain, LLMs, and RAG/GraphRAG/MCP architectures; experience with cloud-based AI platforms like AWS Bedrock and Azure AI; and proficiency in designing scalable systems and troubleshooting distributed environments.

In return, Inferenz offers a competitive salary, comprehensive benefits package, flexible work hours, and a hybrid working model to support work-life balance. You will have opportunities for professional growth and development in a cutting-edge technology environment, with exposure to Generative AI and other advanced technologies. Inferenz is an equal opportunity employer that celebrates diversity and is committed to creating an inclusive environment for all employees. Join Inferenz to enjoy perks like flexible timings, a 5-day work week, a healthy work environment, celebrations, learning and growth opportunities, community building, and medical insurance benefits.
You are a highly skilled and experienced Solution Architect specializing in Data & AI, with over 8 years of experience. In this role, you will lead and drive data-driven transformation within the organization. Your main responsibility is to design and implement cutting-edge AI and data solutions that align with business objectives. Collaborating closely with cross-functional teams, you will create scalable, high-performance architectures using modern technologies in data engineering, machine learning, and cloud computing.

Your key responsibilities include architecting and designing end-to-end data and AI solutions to address business challenges and optimize decision-making. You will define and implement best practices for data architecture, data governance, and AI model deployment. Collaborating with data engineers, data scientists, and business stakeholders, you will deliver scalable and high-impact AI-driven applications. Additionally, you will lead the integration of AI models with enterprise applications, ensuring seamless deployment and operational efficiency. You will evaluate and recommend the latest technologies in data platforms, AI frameworks, and cloud-based analytics solutions while ensuring data security, compliance, and ethical AI implementation. Guiding teams in adopting advanced analytics, AI, and machine learning models for predictive insights and automation is also a crucial aspect, as is driving innovation by identifying new opportunities for AI and data-driven improvements within the organization.

To excel in this position, you must have over 8 years of experience designing and implementing data and AI solutions. Strong expertise in cloud platforms such as AWS, Azure, or Google Cloud is essential, along with hands-on experience with big data technologies like Spark, Databricks, and Snowflake. Proficiency in frameworks such as TensorFlow, PyTorch, and scikit-learn is a must (a minimal training-pipeline sketch follows this posting), as is a deep understanding of data modeling, ETL processes, and data governance frameworks. Experience in MLOps, model deployment, and automation is expected, together with proficiency in Generative AI frameworks and strong programming skills in Python, SQL, and Java/Scala (preferred). Familiarity with containerization and orchestration (Docker, Kubernetes) is a plus. Excellent problem-solving skills, the ability to work in a fast-paced environment, and strong communication and leadership skills that let you drive technical conversations are all highly valuable.

Preferred qualifications include certifications in cloud architecture, data engineering, or AI/ML; experience with generative AI; a background in developing AI-driven analytics solutions for enterprises; experience with Graph RAG, building AI agents, and multi-agent systems; and additional AI/GenAI certifications. Proven leadership skills are expected in this position.

This role offers perks such as flexible timings, a 5-day work week, a healthy environment, celebrations, opportunities for learning and growth, community building, and medical insurance benefits.
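As a small illustration of the predictive-modeling work the role references, here is a scikit-learn pipeline sketch (scikit-learn being one of the frameworks listed). The data is synthetic and the feature set is illustrative.

```python
# Minimal predictive-modeling sketch with scikit-learn on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Bundling preprocessing and the model keeps training and serving consistent
pipe = Pipeline([
    ("scale", StandardScaler()),
    ("clf", GradientBoostingClassifier()),
])
pipe.fit(X_train, y_train)

print("AUC:", roc_auc_score(y_test, pipe.predict_proba(X_test)[:, 1]))
```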
Location: Ahmedabad, Pune
Required Experience: 6+ Years
Preferred: Immediate Joiner

We are seeking a highly skilled and motivated Lead Cloud Engineer with over 6 years of experience in designing, implementing, and managing scalable, secure, and highly available cloud solutions. The ideal candidate will have deep knowledge of cloud platforms (AWS, Azure, or GCP), strong leadership capabilities, and a hands-on approach to infrastructure automation and DevOps practices.

Key Responsibilities:
Lead the architecture, design, and implementation of enterprise-grade cloud infrastructure solutions.
Collaborate with DevOps, Security, and Development teams to ensure robust CI/CD pipelines and cloud-native application deployment.
Evaluate and select appropriate cloud services and tools based on business and technical requirements.
Drive automation across infrastructure provisioning, configuration management, and application deployment.
Monitor system performance, ensure high availability, and proactively resolve any issues.
Implement cloud cost optimization strategies and maintain operational budgets.
Enforce security best practices and compliance standards across cloud environments.
Mentor junior engineers and provide technical leadership across cross-functional teams.
Maintain documentation and architectural diagrams for cloud environments and processes.
Stay current with emerging technologies and propose innovations that improve business outcomes.

Qualifications:
Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
6+ years of experience in cloud engineering, including at least 3 years in a leadership or senior technical role.
Expertise in at least one major cloud platform: AWS, Azure, or Google Cloud Platform (GCP).
Proficiency in IaC tools such as Terraform, CloudFormation, or Pulumi (see the sketch after this posting).
Strong scripting skills using Python, Bash, or PowerShell.
Experience with CI/CD tools like Jenkins, GitLab CI/CD, GitHub Actions, or Azure DevOps.
Deep understanding of networking, security, and identity & access management in the cloud.
Familiarity with containerization and orchestration technologies such as Docker and Kubernetes.
Strong analytical, problem-solving, and troubleshooting skills.
Excellent verbal and written communication skills.

Preferred Qualifications (Nice to Have):
Cloud certifications (e.g., AWS Certified Solutions Architect, Azure Solutions Architect, GCP Professional Cloud Architect).
Experience with serverless architectures and event-driven systems.
Familiarity with monitoring tools like Prometheus, Grafana, Datadog, or CloudWatch.
Experience leading cloud migration or modernization projects.

Perks:
Flexible Timings
5 Days Working
Healthy Environment
Celebration
Learn and Grow
Build the Community
Medical Insurance Benefit
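To illustrate the IaC requirement in Python, here is a minimal sketch using Pulumi's Python SDK (one of the tools the posting lists). Resource names and tags are illustrative; running it requires a configured Pulumi project and AWS credentials.

```python
# Minimal infrastructure-as-code sketch with Pulumi (Python SDK).
import pulumi
import pulumi_aws as aws

# Versioned S3 bucket for application artifacts (names are placeholders)
artifact_bucket = aws.s3.Bucket(
    "artifact-bucket",
    versioning=aws.s3.BucketVersioningArgs(enabled=True),
    tags={"team": "platform", "env": "dev"},
)

# Export the bucket name so other stacks or scripts can reference it
pulumi.export("bucket_name", artifact_bucket.id)
```

Declaring resources this way keeps provisioning reviewable in version control and repeatable across environments, which is the core of the automation responsibility above.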
We are looking for an accomplished Lead Data Engineer with 7 to 10 years of experience in data engineering to join our dynamic team in either Ahmedabad or Pune. Your expertise in Databricks will play a crucial role in enhancing our data engineering capabilities and working with advanced technologies, including Generative AI.

Your key responsibilities will include leading the design, development, and optimization of data solutions using Databricks to ensure scalability, efficiency, and security. You will collaborate with cross-functional teams to gather and analyze data requirements, translating them into robust data architectures and solutions. Developing and maintaining ETL pipelines, leveraging Databricks, and integrating with Azure Data Factory when necessary will also be part of your role.

Furthermore, you will implement machine learning models and advanced analytics solutions, incorporating Generative AI to drive innovation. Ensuring adherence to data quality, governance, and security practices will be essential to maintain the integrity and reliability of data solutions. Providing technical leadership and mentorship to junior engineers to foster an environment of learning and growth will also be a key aspect of your role, as will staying updated on the latest trends and advancements in data engineering, Databricks, Generative AI, and Azure Data Factory.

To qualify for this role, you should hold a Bachelor's or Master's degree in Computer Science, Information Technology, or a related field. Proven expertise in building and optimizing data solutions using Databricks, integrating with Azure Data Factory/AWS Glue, proficiency in SQL, and programming languages like Python or Scala are essential. A strong understanding of data modeling, ETL processes, Data Warehousing/Data Lakehouse concepts, cloud platforms (particularly Azure), and containerization technologies such as Docker is required. Excellent analytical, problem-solving, and communication skills are a must, along with demonstrated leadership ability and experience mentoring junior team members.

Preferred qualifications include experience with Generative AI technologies and applications, familiarity with other cloud platforms like AWS or GCP, and knowledge of data governance frameworks and tools.

In return, we offer flexible timings, a 5-day work week, a healthy environment, celebrations, opportunities to learn and grow, community building, and medical insurance benefits. Join us and be part of a team that values innovation, collaboration, and professional development.
Location: Ahmedabad, Pune
Required Experience: 10+ Years
Preferred: Immediate Joiner

We are looking for a seasoned Solutions Architect with strong expertise in Snowflake, dbt (Data Build Tool), and Apache Airflow to lead our data architecture strategy, design scalable data pipelines, and optimize our cloud data platform. The ideal candidate will have a deep understanding of modern data stack technologies and a proven track record of delivering enterprise-grade data solutions.

Key Responsibilities:
Design, architect, and oversee implementation of scalable, secure, and high-performing data solutions using Snowflake, dbt, and Airflow.
Collaborate with business stakeholders, data engineers, and analysts to understand data requirements and translate them into technical solutions.
Define best practices and governance for data modeling, ELT pipelines, and metadata management.
Guide and mentor data engineering teams on architecture standards and coding practices.
Evaluate and recommend tools, frameworks, and strategies to enhance the performance and reliability of data infrastructure.
Lead architecture reviews, data quality audits, and technical design sessions.
Ensure compliance with data security, privacy, and regulatory requirements.
Monitor and troubleshoot performance issues related to Snowflake, Airflow DAGs, and dbt transformations.

Required Skills & Qualifications:
10+ years of overall experience in data engineering and architecture roles.
Strong hands-on experience with:
Snowflake: data warehouse design, performance tuning, role-based access controls.
dbt: model creation, version control, testing, and documentation (a programmatic dbt invocation sketch follows this posting).
Apache Airflow: DAG development, orchestration, scheduling, monitoring.
Strong experience in SQL, data modeling (star/snowflake schemas), and ETL/ELT frameworks.
Proficient in Python and scripting for data pipelines and automation.
Solid understanding of cloud platforms (preferably AWS, Azure, or GCP).
Experience with CI/CD practices for data deployments.
Excellent problem-solving, communication, and leadership skills.

Good To Have:
Snowflake or dbt certification(s) preferred.
Experience integrating with BI tools (e.g., Tableau, Power BI).
Familiarity with data catalog and lineage tools like Alation, Collibra, or Atlan.
Prior experience working in Agile environments.

Perks:
Flexible Timings
5 Days Working
Healthy Environment
Celebration
Learn and Grow
Build the Community
Medical Insurance Benefit
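As an illustration of folding dbt runs and tests into Python-driven automation, here is a sketch using the dbtRunner API available in dbt-core 1.5+. The project directory and model selector are placeholders.

```python
# Hypothetical sketch: invoke dbt programmatically from an orchestration script.
from dbt.cli.main import dbtRunner

runner = dbtRunner()

# Run only the staging models, then their tests
for args in (["run", "--select", "staging"], ["test", "--select", "staging"]):
    result = runner.invoke(args + ["--project-dir", "/opt/dbt/project"])
    if not result.success:
        raise RuntimeError(f"dbt {args[0]} failed: {result.exception}")
```

Pairing each run with its tests this way bakes the data quality audits the role describes directly into the deployment path.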
You are a highly skilled Lead Data Quality Engineer responsible for driving data accuracy, consistency, and integrity across the data ecosystem. Your role involves designing, implementing, and overseeing data quality frameworks, ensuring compliance with best practices, and collaborating with cross-functional teams to maintain high data standards.

Your key responsibilities include developing and implementing data quality frameworks, policies, and best practices to enhance data governance and integrity. You will conduct data profiling, anomaly detection, and root cause analysis to identify and resolve data quality issues, and implement automated and manual data validation techniques to ensure completeness, consistency, and accuracy (a small validation sketch follows this posting). You will ensure adherence to data governance principles, regulatory requirements, and industry standards, and work closely with data engineering teams to maintain and enhance data pipelines with embedded quality checks. You will develop automated data quality tests, monitoring dashboards, and alerts using SQL, Python, or other data tools; partner with data engineers, analysts, and business teams to establish quality metrics and ensure alignment on data quality objectives; and track and report data quality KPIs, create dashboards, and provide insights to leadership.

You should have at least 7 years of experience in data quality, data governance, or data engineering roles, with a minimum of 2 years in a leadership capacity. Your technical skills should include expertise in SQL for data querying, validation, and analysis; experience with ETL processes, data pipelines, and data integration tools; proficiency in Python, PySpark, or other scripting languages for data automation; and hands-on experience with data quality and governance tools, along with knowledge of cloud platforms and modern data architectures. Familiarity with big data technologies is a plus.

Required soft skills include strong problem-solving and analytical skills, excellent communication and stakeholder management abilities, and the ability to lead and mentor a team of data engineers or analysts. Preferred skills include experience in regulated industries with data compliance knowledge and exposure to machine learning data quality frameworks. A data certification is also a plus.

In addition to a challenging role, you will enjoy perks such as flexible timings, a 5-day work week, a healthy environment, celebrations, opportunities to learn and grow, community building, and medical insurance benefits.
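Here is a hypothetical sketch of the kind of validation check the role describes: null checks, type checks, and a threshold alert over a pandas DataFrame. Column names and thresholds are illustrative only.

```python
# Hypothetical data quality checks over a pandas DataFrame.
import pandas as pd

def validate(df: pd.DataFrame, max_null_ratio: float = 0.05) -> list[str]:
    """Return a list of data quality violations found in df."""
    issues = []

    # Null check: flag columns whose null ratio exceeds the threshold
    for col, ratio in df.isna().mean().items():
        if ratio > max_null_ratio:
            issues.append(f"{col}: {ratio:.1%} nulls exceeds {max_null_ratio:.0%}")

    # Type check: amounts must be numeric
    if "amount" in df and not pd.api.types.is_numeric_dtype(df["amount"]):
        issues.append("amount: expected numeric dtype")

    # Threshold alert: negative amounts are suspicious
    if "amount" in df and (df["amount"] < 0).any():
        issues.append("amount: negative values present")

    return issues

sample = pd.DataFrame({"amount": [10.0, None, -3.5], "customer_id": [1, 2, 3]})
print(validate(sample))  # surfaces the null-ratio and negative-value issues
```

In practice checks like these run inside the pipeline (or via a framework such as Great Expectations) and feed the dashboards and alerts mentioned above.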
Location: Ahmedabad, Pune
Required Experience: 5+ Years
Preferred: Immediate Joiner

We are seeking a Senior Data Engineer with deep expertise in Databricks to design, develop, and optimize scalable data solutions. This role will be central to advancing our data engineering capabilities, with opportunities to work on cutting-edge technologies including Generative AI, advanced analytics, and cloud-native architectures. You will collaborate with cross-functional teams, mentor junior engineers, and help shape the future of our data platform.

Key Responsibilities:
Data Architecture & Development: Lead the design, development, and optimization of scalable, secure, and high-performance data solutions in Databricks.
ETL/ELT Pipeline Engineering: Build and maintain robust pipelines, integrating Databricks with Azure Data Factory (or AWS Glue), ensuring reliability and efficiency (a Delta Lake upsert sketch follows this posting).
Advanced Analytics & AI Integration: Implement machine learning models and Generative AI-powered solutions to deliver business innovation.
Collaboration: Partner with data scientists, analysts, and business teams to translate requirements into technical designs and deliverables.
Data Quality & Governance: Enforce best practices for data validation, governance, and security to ensure trust and compliance.
Technical Leadership: Mentor junior engineers, conduct code reviews, and foster a culture of continuous learning.
Innovation & Research: Stay current with the latest trends in Databricks, Azure Data Factory, cloud platforms, and AI/ML to recommend and implement improvements.

Required Skills & Qualifications:
Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
5+ years in data engineering, with at least 3 years of hands-on experience in Databricks.
Proven expertise in:
Data modeling, Data Lakehouse architectures, and ELT/ETL processes
SQL and at least one programming language (Python or Scala)
Integration with Azure Data Factory or AWS Glue
Strong understanding of cloud platforms (Azure preferred) and containerization (Docker).
Excellent analytical, problem-solving, and communication skills.
Demonstrated experience mentoring or leading technical teams.

Good To Have:
Experience with Generative AI technologies and their applications.
Familiarity with other cloud platforms, such as AWS or GCP.
Knowledge of data governance frameworks and tools.

Perks:
Flexible Timings
5 Days Working
Healthy Environment
Celebration
Learn and Grow
Build the Community
Medical Insurance Benefit
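As an illustration of a common Databricks Lakehouse pattern behind the pipeline work above, here is a sketch of an incremental upsert into a Delta table. Paths and table names are placeholders, and a Databricks (or delta-spark) runtime is assumed.

```python
# Hypothetical incremental upsert (merge) into a Delta table.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

updates = spark.read.parquet("/mnt/landing/customers_delta/")  # new batch

target = DeltaTable.forName(spark, "silver.customers")
(
    target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()      # refresh changed rows
    .whenNotMatchedInsertAll()   # insert new customers
    .execute()
)
```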
Location: Ahmedabad, Pune
Required Experience: 7 to 10 Years
Preferred: Immediate Joiner

We are seeking a Lead Data Engineer with deep expertise in Snowflake, dbt, and Apache Airflow to design, implement, and optimize scalable data solutions. This role involves working on complex datasets, building robust data pipelines, ensuring data quality, and collaborating closely with analytics and business teams to deliver actionable insights. If you are passionate about data architecture, ELT best practices, and the modern cloud data stack, we'd like to meet you.

Key Responsibilities:
Pipeline Design & Orchestration: Build and maintain robust, scalable data pipelines using Apache Airflow, including incremental and full-load strategies, retries, and logging.
Data Modeling & Transformation: Develop modular, tested, and documented transformations in dbt, ensuring scalability and maintainability.
Snowflake Development: Design and maintain the warehouse in Snowflake, optimize schemas, implement performance tuning (clustering keys, warehouse scaling, materialized views), manage access control, and utilize streams and tasks for automation.
Data Quality & Monitoring: Implement validation frameworks (null checks, type checks, threshold alerts) and automated testing for data integrity and reliability.
Collaboration: Partner with analysts, data scientists, and business stakeholders to translate requirements into scalable technical solutions.
Performance Optimization: Develop incremental and full-load strategies with continuous monitoring, retries, and logging, and tune query performance and job execution efficiency (an incremental-load sketch follows this posting).
Infrastructure Automation: Use Terraform or similar IaC tools to provision and manage Snowflake, Airflow, and related environments.

Required Skills & Qualifications:
Bachelor's or Master's degree in Computer Science, Information Technology, Data Engineering, or a related field.
7 to 10 years of experience in data engineering, with strong hands-on expertise in:
Snowflake (data modeling, performance tuning, access control, streams & tasks, external tables)
Apache Airflow (DAG design, task dependencies, dynamic tasks, error handling)
dbt (modular SQL development, Jinja templating, testing, documentation)
Proficiency in SQL and Python (Spark experience is a plus).
Experience building and managing pipelines on AWS, GCP, or Azure.
Strong understanding of data warehousing concepts and ELT best practices.
Familiarity with version control (Git) and CI/CD.
Exposure to infrastructure-as-code tools like Terraform for provisioning Snowflake or Airflow environments.
Excellent problem-solving, collaboration, and communication skills, with the ability to lead technical projects.

Good To Have:
Experience with streaming data pipelines (Kafka, Kinesis, Pub/Sub).
Exposure to BI/analytics tools (Looker, Tableau, Power BI).
Knowledge of data governance and security best practices.

Perks:
Flexible Timings
5 Days Working
Healthy Environment
Celebration
Learn and Grow
Build the Community
Medical Insurance Benefit
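Here is a sketch of the incremental (watermark-based) load pattern with retries and logging that the posting describes. The fetch_rows() helper and the watermark format are hypothetical stand-ins for a real source query and state store.

```python
# Hypothetical incremental extract with retries, backoff, and logging.
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("incremental_load")

def fetch_rows(since: str) -> list[dict]:
    """Placeholder for a source query like:
    SELECT * FROM orders WHERE updated_at > :since"""
    return []

def incremental_load(watermark: str, max_retries: int = 3) -> str:
    for attempt in range(1, max_retries + 1):
        try:
            rows = fetch_rows(since=watermark)
            log.info("fetched %d rows since %s", len(rows), watermark)
            # ...load rows into the warehouse, then advance the watermark...
            return max((r["updated_at"] for r in rows), default=watermark)
        except Exception:
            log.exception("attempt %d/%d failed", attempt, max_retries)
            time.sleep(2 ** attempt)  # exponential backoff before retrying
    raise RuntimeError("incremental load failed after retries")

print(incremental_load("2024-01-01T00:00:00"))
```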
You are an experienced and results-driven Sr. Data Scientist with over 5 years of professional experience, looking to join our dynamic data team. Your main responsibilities will include analyzing large datasets, building predictive models, and providing actionable insights to drive strategic decision-making. This role offers an exciting opportunity to apply your expertise in machine learning, statistical analysis, and business intelligence in a fast-paced environment.

Your key responsibilities will involve extracting insights and building models to understand and predict key metrics. You will collaborate closely with stakeholders to identify business challenges and translate them into data science problems. Additionally, you will collect, clean, and preprocess structured and unstructured data from various sources, focusing on research analysis and model development. Validating and deploying machine learning models to solve real-world business problems will be a crucial part of your role.

To excel in this position, you should possess a Bachelor's or Master's degree in Computer Science, Data Science, Statistics, Mathematics, or a related field. Along with at least 5 years of experience in a data science or machine learning role, you should have strong proficiency in Python (including PySpark, pandas, scikit-learn, and TensorFlow/PyTorch) and expertise in MLflow for model tracking and deployment. Familiarity with Databricks for scalable ML workflows and handling data issues is essential. Your skill set should also include solid SQL skills, experience working with large databases, and a good understanding of statistical modeling, A/B testing, and predictive analytics (a minimal A/B test sketch follows this posting). Knowledge of supervised/unsupervised learning, deep learning, and optimization techniques is important. Experience with model performance monitoring, cloud platforms (AWS, GCP, or Azure), and MLOps will be advantageous.

In addition, you should have excellent problem-solving skills, strong communication abilities, and meticulous attention to detail. Experience deploying models to production using CI/CD pipelines is desired. Preferred skills that would be advantageous for this role include experience with MosaicML, Azure ML, or AWS SageMaker, and a background in the healthcare industry.

This position offers perks such as flexible timings, a 5-day work week, a healthy work environment, opportunities for learning and growth, community building, and medical insurance benefits.
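For illustration, here is a minimal A/B testing sketch: a two-sample Welch t-test on simulated per-variant metrics using scipy, one standard way to compare experiment arms. All data here is synthetic.

```python
# Minimal A/B test sketch on simulated metrics (Welch's t-test).
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=7)
control = rng.normal(loc=10.0, scale=2.0, size=500)    # variant A metric
treatment = rng.normal(loc=10.4, scale=2.0, size=500)  # variant B metric

t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)

print(f"lift: {treatment.mean() - control.mean():.3f}")
print(f"t={t_stat:.2f}, p={p_value:.4f}")
if p_value < 0.05:
    print("Difference is statistically significant at the 5% level")
```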
The company is looking for a Lead AI Engineer with over 8 years of experience to drive AI initiatives, innovate, and create scalable AI/ML solutions. As a Lead AI Engineer, your responsibilities will include developing enterprise-scale AI/ML solutions, leading the adoption of cutting-edge AI methodologies, collaborating with stakeholders to align AI strategy with business goals, optimizing model performance, establishing best practices for AI development, exploring new AI techniques, managing and mentoring a team of AI engineers, and presenting insights to leadership teams.

You should have a Master's or Ph.D. in Computer Science, AI, Machine Learning, or a related field, along with at least 8 years of experience in AI, ML, and deep learning. Proficiency in LLMs, Generative AI, NLP, and multimodal AI is required, as well as expertise in Python, R, Java, and AI/ML frameworks like TensorFlow, PyTorch, or Hugging Face (a minimal Hugging Face sketch follows this posting). Experience with cloud-based AI solutions, MLOps, and scalable architectures is essential. Strong leadership, communication, and stakeholder management skills are also necessary.

Preferred skills include experience in multi-agent systems, federated learning, AI ethics, computer vision, reinforcement learning, and AI automation. Contributions to AI research, patents, or publications are a plus.

The company offers perks such as flexible timings, a 5-day work week, a healthy work environment, celebrations, opportunities to learn and grow, community building, and medical insurance benefits.
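As a small illustration of the Hugging Face experience mentioned above, here is a text-generation sketch. "distilgpt2" is chosen only because it is a small open model; any causal LM would work the same way, and the prompt is made up.

```python
# Illustrative Hugging Face sketch: text generation with a small open model.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")

out = generator(
    "Our AI roadmap for next quarter focuses on",
    max_new_tokens=30,
    num_return_sequences=1,
)
print(out[0]["generated_text"])
```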
Location: Ahmedabad, Pune
Required Experience: 3 to 6 Years
Preferred: Immediate Joiner

We are seeking an experienced RPA Engineer/Senior RPA Engineer to join our team. In this role, you will be responsible for designing, developing, and optimizing RPA solutions using UiPath and other automation tools and technologies. If you have hands-on experience with RPA tools and strong programming skills, this is a great opportunity for you.

Key Responsibilities:
Design, develop, and deploy RPA bots using UiPath
Collaborate with stakeholders to gather and analyze automation requirements
Build reusable automation components and workflows for scalability and performance
Troubleshoot, test, and optimize automation workflows for error-free execution
Document technical designs, processes, and bot deployments
Ensure efficient integration of RPA solutions with business systems and processes (an Orchestrator API sketch follows this posting)

Required Skills & Qualifications:
3+ years of hands-on experience in RPA development, primarily with UiPath
Bachelor's or Master's degree in Computer Science or a related field (or equivalent practical experience)
Proficiency in at least one of C#, VBScript, or .NET
Strong knowledge of UiPath Orchestrator, REFramework, and other UiPath components
Understanding of RPA architecture, queue management, and exception handling
Experience with APIs, SQL, and system integration
Excellent problem-solving, debugging, and optimization skills
Strong communication skills and ability to document technical solutions clearly
Knowledge of JIRA for task tracking and project management
Proficiency with GitHub, TFS, or another tool for version control and collaboration

Good To Have:
Exposure to the healthcare domain (not necessary)
Experience with other RPA tools (e.g., Automation Anywhere, Blue Prism, or Power Automate)
Exposure to Agentic AI, Gen AI, Action Center, and Document Understanding
UiPath Certifications (Developer/Advanced Developer)
Knowledge of AI/ML integration for automation
Familiarity with Agile methodologies and DevOps practices

Perks:
Flexible Timings
5 Days Working
Healthy Environment
Celebration
Learn and Grow
Build the Community
Medical Insurance Benefit
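To illustrate the API/system-integration side of the role, here is a hedged sketch of triggering a UiPath job over the Orchestrator REST API. It assumes the classic on-premises endpoints (cloud tenants authenticate via OAuth instead), and the URL, tenant, credentials, and release key are all placeholders.

```python
# Hedged sketch: start a UiPath job via the on-prem Orchestrator REST API.
import requests

BASE = "https://orchestrator.example.com"  # placeholder Orchestrator URL

auth = requests.post(f"{BASE}/api/Account/Authenticate", json={
    "tenancyName": "Default",
    "usernameOrEmailAddress": "rpa_user",
    "password": "...",  # in practice, pull from a secrets manager
})
token = auth.json()["result"]

resp = requests.post(
    f"{BASE}/odata/Jobs/UiPath.Server.Configuration.OData.StartJobs",
    headers={"Authorization": f"Bearer {token}"},
    json={"startInfo": {
        "ReleaseKey": "00000000-0000-0000-0000-000000000000",  # placeholder
        "Strategy": "JobsCount",
        "JobsCount": 1,
    }},
)
resp.raise_for_status()
print("Job queued:", resp.json())
```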