Lead Full Stack Developer (GenAI Applications)
Location: Ahmedabad, Pune
Required Experience: 7+ Years
Preferred: Immediate Joiner
Inferenz is a pioneering AI and Data Native company dedicated to transforming how organizations leverage data and AI to drive innovation and efficiency. As industry leaders, we specialize in delivering cutting-edge AI and data solutions that empower businesses to harness the full potential of their data assets.

We are seeking an experienced Senior Full Stack Developer to join our team building innovative Generative AI (GenAI) applications. The ideal candidate will have a strong background in developing scalable RESTful APIs using Python, as well as modern frontend applications with Node.js or React. Experience with cloud platforms (Azure or AWS), Kubernetes, microservices architecture, and version control (Git or Azure Repos) is essential. Familiarity with AI/ML/GenAI technologies and Agentic AI is highly valued.

Key Responsibilities:
- Full-Stack Development: Design, build, and maintain scalable RESTful APIs using Python and frontend applications using Node.js or React.
- GenAI Integration: Develop and optimize Agentic AI components using Python, ensuring seamless integration with backend services (see the API sketch after this posting).
- Cloud Deployment: Manage application deployment, scaling, and monitoring on Azure or AWS using Kubernetes and a microservices architecture.
- Collaboration: Work with cross-functional teams using Jira and Confluence for project tracking and documentation.
- Performance Optimization: Ensure high availability, security, and efficiency of applications through robust coding practices and infrastructure management.

Required Skills & Experience:
- Backend: Strong expertise in Python and REST API development.
- Frontend: Proficiency in Node.js, React, and modern JavaScript frameworks.
- Cloud & DevOps: Hands-on experience with Azure or AWS, Kubernetes, microservices, and Git or Azure Repos for version control.
- Tools: Familiarity with Jira, Confluence, and CI/CD pipelines.
- Experience: 5+ years in full-stack development with production-grade applications.

Preferred Skills:
- AI/ML Knowledge: Understanding of GenAI tools (e.g., LangChain, LLMs, RAG/GraphRAG/MCP architectures) and Agentic AI development.
- Cloud AI Services: Experience with cloud-based AI platforms (e.g., AWS Bedrock, Azure AI).
- Architecture: Proficiency in designing scalable systems and troubleshooting distributed environments.

What We Offer:
- Competitive salary and comprehensive benefits package.
- Flexible work hours and a hybrid working model to support work-life balance.
- Opportunities for professional growth and development in a cutting-edge technology environment.
- Exposure to Generative AI and other advanced technologies.

Inferenz is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.

Perks:
- Flexible Timings
- 5 Days Working
- Healthy Environment
- Celebration
- Learn and Grow
- Build the Community
- Medical Insurance Benefit
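For candidates wondering what a "scalable RESTful API using Python" fronting a GenAI component can look like, here is a minimal sketch. FastAPI, the endpoint path, the request/response schemas, and the stubbed generate_answer helper are illustrative assumptions, not Inferenz's actual stack.

```python
# Minimal sketch of a Python REST endpoint fronting a GenAI component.
# FastAPI and all names here are illustrative assumptions.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="GenAI demo API")

class PromptRequest(BaseModel):
    prompt: str

class PromptResponse(BaseModel):
    answer: str

def generate_answer(prompt: str) -> str:
    # Hypothetical stand-in for an LLM/agent call (e.g., via LangChain or a
    # cloud AI service); replace with a real client in production.
    return f"echo: {prompt}"

@app.post("/v1/generate", response_model=PromptResponse)
def generate(req: PromptRequest) -> PromptResponse:
    # Delegate to the GenAI component and wrap the result in the API schema.
    return PromptResponse(answer=generate_answer(req.prompt))
```

Run locally with `uvicorn main:app --reload` (assuming the file is named main.py); Kubernetes deployment would then wrap this service in a container image.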
Lead Cloud Engineer

We are seeking a highly skilled and motivated Lead Cloud Engineer with over 6 years of experience in designing, implementing, and managing scalable, secure, and highly available cloud solutions. The ideal candidate will have deep knowledge of cloud platforms (AWS, Azure, or GCP), strong leadership capabilities, and a hands-on approach to infrastructure automation and DevOps practices.

Key Responsibilities:
- Lead the architecture, design, and implementation of enterprise-grade cloud infrastructure solutions.
- Collaborate with DevOps, Security, and Development teams to ensure robust CI/CD pipelines and cloud-native application deployment.
- Evaluate and select appropriate cloud services and tools based on business and technical requirements.
- Drive automation across infrastructure provisioning, configuration management, and application deployment (see the IaC sketch after this posting).
- Monitor system performance, ensure high availability, and proactively resolve issues.
- Implement cloud cost optimization strategies and maintain operational budgets.
- Enforce security best practices and compliance standards across cloud environments.
- Mentor junior engineers and provide technical leadership across cross-functional teams.
- Maintain documentation and architectural diagrams for cloud environments and processes.
- Stay current with emerging technologies and propose innovations that improve business outcomes.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- 6+ years of experience in cloud engineering, including at least 3 years in a leadership or senior technical role.
- Expertise in at least one major cloud platform: AWS, Azure, or Google Cloud Platform (GCP).
- Proficiency in IaC tools such as Terraform, CloudFormation, or Pulumi.
- Strong scripting skills using Python, Bash, or PowerShell.
- Experience with CI/CD tools like Jenkins, GitLab CI/CD, GitHub Actions, or Azure DevOps.
- Deep understanding of networking, security, and identity & access management in the cloud.
- Familiarity with containerization and orchestration technologies such as Docker and Kubernetes.
- Strong analytical, problem-solving, and troubleshooting skills.
- Excellent verbal and written communication skills.

Preferred Qualifications (Nice to Have):
- Cloud certifications (e.g., AWS Certified Solutions Architect, Azure Solutions Architect, GCP Professional Cloud Architect).
- Experience with serverless architectures and event-driven systems.
- Familiarity with monitoring tools like Prometheus, Grafana, Datadog, or CloudWatch.
- Experience leading cloud migration or modernization projects.
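To make the infrastructure-as-code requirement concrete, here is a minimal Pulumi sketch in Python (Pulumi is one of the IaC tools the posting names). The resource name, tags, and AWS as the target cloud are illustrative assumptions.

```python
# Minimal Pulumi (Python) infrastructure-as-code sketch: provision a
# versioned S3 bucket. The logical name and tags are illustrative only.
import pulumi
from pulumi_aws import s3

bucket = s3.Bucket(
    "app-artifacts",  # hypothetical logical resource name
    versioning=s3.BucketVersioningArgs(enabled=True),
    tags={"team": "platform", "managed-by": "pulumi"},
)

# Export the bucket name so other stacks or deployment scripts can consume it.
pulumi.export("bucket_name", bucket.id)
```

Running this requires an initialized Pulumi project and AWS credentials; `pulumi up` would then plan and apply the change, analogous to `terraform apply`.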
Lead Data Engineer

We are seeking a Lead Data Engineer with deep expertise in Snowflake, dbt, and Apache Airflow to design, implement, and optimize scalable data solutions. This role involves working on complex datasets, building robust data pipelines, ensuring data quality, and collaborating closely with analytics and business teams to deliver actionable insights. If you are passionate about data architecture, ELT best practices, and the modern cloud data stack, we'd like to meet you.

Key Responsibilities:
- Pipeline Design & Orchestration: Build and maintain robust, scalable data pipelines using Apache Airflow, including incremental and full-load strategies, retries, and logging (see the DAG sketch after this posting).
- Data Modelling & Transformation: Develop modular, tested, and documented transformations in dbt, ensuring scalability and maintainability.
- Snowflake Development: Design and maintain the warehouse in Snowflake, optimize Snowflake schemas, implement performance tuning (clustering keys, warehouse scaling, materialized views), manage access control, and utilize streams and tasks for automation.
- Data Quality & Monitoring: Implement validation frameworks (null checks, type checks, threshold alerts) and automated testing for data integrity and reliability.
- Collaboration: Partner with analysts, data scientists, and business stakeholders to translate reporting and analytics requirements into scalable technical solutions.
- Performance Optimization: Develop incremental and full-load strategies with continuous monitoring, retries, and logging, and tune query performance and job execution efficiency.
- Infrastructure Automation: Use Terraform or similar IaC tools to provision and manage Snowflake, Airflow, and related environments.

Required Skills & Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, Data Engineering, or a related field.
- 7-10 years of experience in data engineering, with strong hands-on expertise in:
  - Snowflake (data modelling, performance tuning, access control, streams and tasks, external tables)
  - Apache Airflow (DAG design, task dependencies, dynamic tasks, error handling)
  - dbt (modular SQL development, Jinja templating, testing, documentation)
- Proficiency in SQL and Python (Spark experience is a plus).
- Experience building and managing pipelines on AWS, GCP, or Azure.
- Strong understanding of data warehousing concepts and ELT best practices.
- Familiarity with version control (Git) and CI/CD.
- Exposure to infrastructure-as-code tools like Terraform for provisioning Snowflake or Airflow environments.
- Excellent problem-solving, collaboration, and communication skills, with the ability to lead technical projects.

Good To Have:
- Experience with streaming data pipelines (Kafka, Kinesis, Pub/Sub).
- Exposure to BI/analytics tools (Looker, Tableau, Power BI).
- Knowledge of data governance and security best practices.
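As a minimal sketch of the Airflow pattern the responsibilities describe (retries, scheduling, and a dbt transformation step): the DAG id, schedule, script paths, and dbt project directory below are illustrative assumptions.

```python
# Minimal Airflow DAG sketch: retries, a daily schedule, and a dbt run
# step. All ids and paths are hypothetical.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

default_args = {
    "owner": "data-eng",
    "retries": 3,                          # retry transient failures
    "retry_delay": timedelta(minutes=5),
}

with DAG(
    dag_id="elt_daily",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    default_args=default_args,
    catchup=False,
) as dag:
    extract = BashOperator(
        task_id="extract_load",
        bash_command="python /opt/pipelines/extract_load.py",  # hypothetical script
    )
    transform = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt",  # hypothetical project path
    )
    extract >> transform  # run dbt only after the load succeeds
```

Task-level logs land in Airflow's per-task log store automatically; incremental versus full-load behavior would typically be parameterized on the extract task.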
Solutions Architect (Snowflake, dbt, Airflow)
Required Experience: 10+ Years
Preferred: Immediate Joiner

We are looking for a seasoned Solutions Architect with strong expertise in Snowflake, dbt (Data Build Tool), and Apache Airflow to lead our data architecture strategy, design scalable data pipelines, and optimize our cloud data platform. The ideal candidate will have a deep understanding of modern data stack technologies and a proven track record of delivering enterprise-grade data solutions.

Key Responsibilities:
- Design, architect, and oversee implementation of scalable, secure, and high-performing data solutions using Snowflake, dbt, and Airflow.
- Collaborate with business stakeholders, data engineers, and analysts to understand data requirements and translate them into technical solutions.
- Define best practices and governance for data modelling, ELT pipelines, and metadata management.
- Guide and mentor data engineering teams on architecture standards and coding practices.
- Evaluate and recommend tools, frameworks, and strategies to enhance the performance and reliability of data infrastructure.
- Lead architecture reviews, data quality audits, and technical design sessions.
- Ensure compliance with data security, privacy, and regulatory requirements.
- Monitor and troubleshoot performance issues related to Snowflake, Airflow DAGs, and dbt transformations.

Required Skills & Qualifications:
- 10+ years of overall experience in data engineering and architecture roles.
- Strong hands-on experience with:
  - Snowflake: data warehouse design, performance tuning, role-based access controls.
  - dbt: model creation, version control, testing, and documentation.
  - Apache Airflow: DAG development, orchestration, scheduling, monitoring.
- Strong experience in SQL, data modelling (star/snowflake schemas), and ETL/ELT frameworks.
- Proficient in Python and scripting for data pipelines and automation (see the connector sketch after this posting).
- Solid understanding of cloud platforms (preferably AWS, Azure, or GCP).
- Experience with CI/CD practices for data deployments.
- Excellent problem-solving, communication, and leadership skills.

Good To Have:
- Snowflake or dbt certification(s).
- Experience integrating with BI tools (e.g., Tableau, Power BI).
- Familiarity with data catalog and lineage tools like Alation, Collibra, or Atlan.
- Prior experience working in Agile environments.

Perks:
- Flexible Timings
- 5 Days Working
- Healthy Environment
- Celebration
- Learn and Grow
- Build the Community
- Medical Insurance Benefit
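As a minimal illustration of the "Python and scripting for data pipelines" requirement, here is a sketch using the official Snowflake Python connector (snowflake-connector-python). The account, credentials, warehouse, database, and query are placeholders, not real values.

```python
# Minimal Snowflake Python connector sketch: connect and run one query.
# All identifiers and credentials below are placeholder assumptions.
import os

import snowflake.connector

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="COMPUTE_WH",   # hypothetical warehouse
    database="ANALYTICS",     # hypothetical database
)
try:
    cur = conn.cursor()
    # Example health check: count rows in a hypothetical orders table.
    cur.execute("SELECT COUNT(*) FROM PUBLIC.ORDERS")
    print(cur.fetchone()[0])
finally:
    conn.close()
```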
Senior Data Engineer (Databricks)

We are seeking a Senior Data Engineer with deep expertise in Databricks to design, develop, and optimize scalable data solutions. This role will be central to advancing our data engineering capabilities, with opportunities to work on cutting-edge technologies including Generative AI, advanced analytics, and cloud-native architectures. You will collaborate with cross-functional teams, mentor junior engineers, and help shape the future of our data platform.

Key Responsibilities:
- Data Architecture & Development: Lead the design, development, and optimization of scalable, secure, and high-performance data solutions in Databricks.
- ETL/ELT Pipeline Engineering: Build and maintain robust pipelines, integrating Databricks with Azure Data Factory (or AWS Glue), ensuring reliability and efficiency (see the sketch after this posting).
- Advanced Analytics & AI Integration: Implement machine learning models and Generative AI-powered solutions to deliver business innovation.
- Collaboration: Partner with data scientists, analysts, and business teams to translate requirements into technical designs and deliverables.
- Data Quality & Governance: Enforce best practices for data validation, governance, and security to ensure trust and compliance.
- Technical Leadership: Mentor junior engineers, conduct code reviews, and foster a culture of continuous learning.
- Innovation & Research: Stay current with the latest trends in Databricks, Azure Data Factory, cloud platforms, and AI/ML to recommend and implement improvements.

Required Skills & Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- 5+ years in data engineering, with at least 3 years of hands-on experience in Databricks.
- Proven expertise in:
  - Data modelling, Data Lakehouse architectures, and ELT/ETL processes
  - SQL and at least one programming language (Python or Scala)
  - Integration with Azure Data Factory or AWS Glue
- Strong understanding of cloud platforms (Azure preferred) and containerization (Docker).
- Excellent analytical, problem-solving, and communication skills.
- Demonstrated experience mentoring or leading technical teams.

Good To Have:
- Experience with Generative AI technologies and their applications.
- Familiarity with other cloud platforms, such as AWS or GCP.
- Knowledge of data governance frameworks and tools.
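As a minimal PySpark sketch of the kind of lakehouse ELT step a Databricks pipeline might contain: the paths, column names, and target table are illustrative assumptions, not a prescribed design.

```python
# Minimal PySpark sketch of a lakehouse ELT step on Databricks: read raw
# files, apply light cleaning, and write a curated Delta table. Paths,
# columns, and the table name are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically on Databricks

raw = spark.read.json("/mnt/raw/orders/")   # hypothetical landing path

cleaned = (
    raw.dropDuplicates(["order_id"])                     # basic deduplication
       .withColumn("order_date", F.to_date("order_ts"))  # derive a date column
       .filter(F.col("amount") > 0)                      # simple validity check
)

# Overwrite the curated Delta table used by downstream consumers.
cleaned.write.format("delta").mode("overwrite").saveAsTable("curated.orders")
```

In practice this job would be triggered by an orchestrator such as Azure Data Factory, which handles scheduling and dependency management around the Databricks run.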
Solution Architect (AI)

We are looking for a Solution Architect (AI) with deep expertise in designing, deploying, and scaling intelligent systems, preferably in the healthcare or hi-tech domain. You will lead architecture efforts for AI solutions involving Generative AI, Agentic AI, LLMs, and NLP. The ideal candidate brings strong hands-on skills along with enterprise deployment experience and a firm grasp of compliance and ethics in healthcare and similarly regulated domains.

Key Responsibilities:
- Architect and scale AI solutions using Agentic AI, NLP, and LLMs for healthcare workflows.
- Develop documentation, architecture diagrams, and technical specifications.
- Lead end-to-end development: data ingestion, model training, deployment (cloud/on-prem), and monitoring (MLOps).
- Collaborate with cross-functional teams including data scientists, clinicians, and software engineers.
- Ensure regulatory compliance (HIPAA, GDPR) across all AI solutions.
- Explore and integrate emerging technologies such as multimodal models, vector search, and autonomous agents (see the retrieval sketch after this posting).
- Define reference architectures, reusable patterns, and best practices for AI delivery.
- Provide technical leadership and mentorship across teams.
- Optimize AI models for performance, scalability, and interpretability in clinical environments.

Required Skills & Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, AI, Data Science, or a related field.
- Minimum 4 years of experience in AI, 10+ years total in software/data engineering.
- Proven experience building and deploying production-grade systems with LLMs, NLP, and Agentic AI.
- Cloud-native AI deployment (AWS, Azure, GCP) using tools like SageMaker, Vertex AI, etc.
- Proficiency in Python, TensorFlow/PyTorch, and cloud platforms (AWS, Azure, GCP).
- Familiarity with healthcare data standards (HL7, FHIR, DICOM).
- Familiarity with data privacy, security, and compliance (HIPAA, GDPR).
- Excellent communication and stakeholder management skills.
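As a minimal sketch of the vector-search step at the core of RAG-style retrieval: the embedding model and documents below are illustrative assumptions, and production systems would use a vector database rather than an in-memory array.

```python
# Minimal vector-search sketch: embed documents and retrieve the best match
# for a query by cosine similarity. The model name and documents are
# illustrative assumptions, not a specific product stack.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

docs = [
    "Patients must fast for 8 hours before the procedure.",
    "Prior authorization is required for MRI referrals.",
    "Discharge summaries are sent to the primary care provider.",
]
doc_vecs = model.encode(docs, normalize_embeddings=True)   # unit vectors

query_vec = model.encode(
    ["What should patients do before the procedure?"],
    normalize_embeddings=True,
)[0]

scores = doc_vecs @ query_vec      # cosine similarity via dot product
best = int(np.argmax(scores))
print(docs[best])                  # most relevant passage for the query
```

The retrieved passage would then be injected into an LLM prompt as grounding context; compliance controls (PHI handling under HIPAA/GDPR) apply to both the document store and the model calls.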
Responsibilities:
- Design, develop & maintain data pipelines using Snowflake, Airflow & dbt (Data Build Tool).
- Collaborate with cross-functional teams on project requirements & deliverables.