We’re Hiring: MLOps Engineer (Azure)
Contact: harshita.panchariya@tecblic.com
Location: Ahmedabad, Gujarat
Experience: 3–5 Years
Employment Type: Full-Time
* Immediate joiners preferred.

Job Summary:
We are seeking a skilled and proactive MLOps/DataOps Engineer with strong experience in the Azure ecosystem to join our team. You will be responsible for streamlining and automating machine learning and data pipelines, supporting scalable deployment of AI/ML models, and ensuring robust monitoring, governance, and CI/CD practices across the data and ML lifecycle.

Key Responsibilities:

MLOps:
● Design and implement CI/CD pipelines for machine learning workflows using Azure DevOps, GitHub Actions, or Jenkins.
● Automate model training, validation, deployment, and monitoring using tools such as Azure ML, MLflow, or Kubeflow.
● Manage model versioning, performance tracking, and rollback strategies.
● Integrate machine learning models with APIs or web services using Azure Functions, Azure Kubernetes Service (AKS), or Azure App Service.

DataOps:
● Design, build, and maintain scalable data ingestion, transformation, and orchestration pipelines using Azure Data Factory, Synapse Pipelines, or Apache Airflow.
● Ensure data quality, lineage, and governance using Azure Purview or other metadata management tools.
● Monitor and optimize data workflows for performance and cost efficiency.
● Support batch and real-time data processing using Azure Stream Analytics, Event Hubs, Databricks, or Kafka.

DevOps & Infrastructure:
● Provision and manage infrastructure using Infrastructure-as-Code tools such as Terraform, ARM Templates, or Bicep.
● Set up and manage compute environments (VMs, AKS, AML Compute), storage (Blob, Data Lake Gen2), and networking in Azure.
● Implement observability using Azure Monitor, Log Analytics, and Application Insights.

Required Skills:
● Strong hands-on experience with Azure Machine Learning, Azure Data Factory, Azure DevOps, and Azure Storage solutions.
● Proficiency in Python, Bash, and scripting for automation.
● Experience with Docker, Kubernetes, and containerized deployments in Azure.
● Good understanding of CI/CD principles, testing strategies, and ML lifecycle management.
● Familiarity with monitoring, logging, and alerting in cloud environments.
● Knowledge of data modeling, data warehousing, and SQL.

Preferred Qualifications:
● Azure certifications (e.g., Azure Data Engineer Associate, Azure AI Engineer Associate, or Azure DevOps Engineer Expert).
● Experience with Databricks, Delta Lake, or Apache Spark on Azure.
● Exposure to security best practices in ML and data environments (e.g., identity management, network security).

Soft Skills:
● Strong problem-solving and communication skills.
● Ability to work independently and collaboratively with data scientists, ML engineers, and platform teams.
● Passion for automation, optimization, and driving operational excellence.
We are seeking an experienced Senior Data Engineer to join our data team. You will work on a range of data engineering tasks, including designing and optimizing data pipelines, data modeling, and troubleshooting data issues, and will collaborate with other data team members, stakeholders, and data scientists to deliver data-driven insights and solutions to the organization. 3+ years of experience required.

Responsibilities:
● Design and optimize data pipelines for various data sources.
● Design and implement efficient data storage and retrieval mechanisms.
● Develop data modeling solutions and data validation mechanisms.
● Troubleshoot data-related issues and recommend process improvements.
● Collaborate with data scientists and stakeholders to provide data-driven insights and solutions.
● Coach and mentor junior data engineers in the team.

Requirements:
● 3+ years of experience in data engineering or a related field.
● Strong experience in designing and optimizing data pipelines and in data modeling.
● Strong proficiency in programming languages, particularly Python.
● Experience with big data technologies such as Hadoop, Spark, and Hive.
● Experience with cloud data services such as AWS, Azure, and GCP.
● Strong experience with database technologies: SQL, NoSQL, and data warehousing.
● Knowledge of distributed computing and storage systems.
● Understanding of DevOps, Power Automate, and Microsoft Fabric is an added advantage.
● Strong analytical and problem-solving skills.
● Excellent communication and collaboration skills.
● Bachelor's degree in Computer Science, Data Science, or a related field (Master's degree preferred).

This job was posted by Himanshu Chavla from Tecblic.
Job Title: Data Engineer
Location: Ahmedabad
Work Mode: On-Site Opportunity
Experience: 4+ Years
Employment Type: Full-Time
Availability: Immediate Joiner Preferred

Join Our Team as a Data Engineer
We are seeking a passionate and experienced Data Engineer to join our dynamic and forward-thinking team in Ahmedabad. This is an exciting opportunity for someone who thrives on transforming raw data into powerful insights and building scalable, high-performance data infrastructure. As a Data Engineer, you will work closely with data scientists, analysts, and cross-functional teams to design robust data pipelines, optimize data systems, and enable data-driven decision-making across the organization.

Your Key Responsibilities:
● Architect, build, and maintain scalable and reliable data pipelines from diverse data sources.
● Design effective data storage and retrieval mechanisms and data models to support analytics and business needs.
● Implement data validation, transformation, and quality monitoring processes.
● Collaborate with cross-functional teams to deliver impactful, data-driven solutions.
● Proactively identify bottlenecks and optimize existing workflows and processes.
● Provide guidance and mentorship to junior engineers in the team.

Skills & Expertise We're Looking For:
● 3+ years of hands-on experience in Data Engineering or related roles.
● Strong expertise in Python and data pipeline design.
● Experience with big data tools such as Hadoop, Spark, and Hive.
● Proficiency with SQL, NoSQL databases, and data warehousing solutions.
● Solid experience with cloud platforms, especially Azure.
● Familiarity with distributed computing, data modeling, and performance tuning.
● Understanding of DevOps, Power Automate, and Microsoft Fabric is a plus.
● Strong analytical thinking, collaboration skills, excellent communication skills, and the ability to work independently or as part of a team.

Qualifications: Bachelor's degree in Computer Science, Data Science, or a related field. (ref:hirist.tech)
Job Description: Machine Learning Engineer (LLM, Agentic AI, Computer Vision, and MLOps)
Location: Ahmedabad
Experience: 4 to 6 years
Employment Type: Full-Time

About Us:
Join a forward-thinking team at Tecblic, where innovation meets cutting-edge technology. We specialize in delivering AI-driven solutions that empower businesses to thrive in the digital age. If you're passionate about LLMs, computer vision, MLOps, and pushing the boundaries of Agentic AI, we'd love to have you on board.

Key Responsibilities:
● Research and Development: Design, develop, and fine-tune machine learning models across LLM, computer vision, and Agentic AI use cases.
● Model Optimization: Fine-tune and optimize pre-trained models, ensuring performance, scalability, and minimal latency.
● Computer Vision: Build and deploy vision models for object detection, classification, OCR, and segmentation.
● Integration: Work closely with software and product teams to integrate models into production-ready applications.
● Data Engineering: Develop robust data pipelines for structured, unstructured (text/image/video), and streaming data.
● Production Deployment: Deploy, monitor, and manage ML models in production using DevOps and MLOps practices.
● Experimentation: Prototype and test new AI approaches such as reinforcement learning, few-shot learning, and generative AI.
● DevOps Collaboration: Collaborate with the DevOps team to ensure CI/CD pipelines, infrastructure-as-code, and scalable deployments are in place.
● Technical Mentorship: Support and mentor junior ML and data engineers.

Core Technical Skills:
● Strong Python skills for machine learning and computer vision.
● Hands-on experience with PyTorch, TensorFlow, Hugging Face, scikit-learn, and OpenCV.
● Deep understanding of LLMs (e.g., GPT, BERT, T5) and computer vision architectures (e.g., CNNs, Vision Transformers, YOLO, R-CNN).
● Strong knowledge of NLP tasks, image/video processing, and real-time inference.
● Experience with cloud platforms: AWS, GCP, or Azure.
● Familiarity with Docker, Kubernetes, and serverless deployments.
● Proficiency in SQL, Pandas, NumPy, and data wrangling.

DevOps & MLOps Skills:
● Experience with CI/CD tools such as GitHub Actions, GitLab CI, and Jenkins.
● Knowledge of Infrastructure as Code (IaC) tools like Terraform, CloudFormation, or Pulumi.
● Familiarity with container orchestration and Kubernetes-based ML model deployment.
● Hands-on experience with ML pipeline and monitoring tools: MLflow, Kubeflow, TFX, or Seldon.
● Understanding of model versioning, model registries, and automated testing/validation in ML workflows.
● Exposure to observability and logging frameworks (e.g., Prometheus, Grafana, the ELK stack).

Additional Skills (Good to Have):
● Knowledge of Agentic AI systems and use cases.
● Experience with generative models (e.g., GANs, VAEs) and RL-based architectures.
● Prompt engineering and fine-tuning of LLMs for specialized domains.
● Experience with vector databases (e.g., Pinecone, FAISS, Weaviate).
● Distributed data processing using Apache Spark.

Mathematical & Data Skills:
● Strong foundation in mathematics, including linear algebra, probability, and statistics.
● Deep understanding of data structures and algorithms.
● Comfortable handling large-scale datasets, including images, video, and multi-modal data.

Soft Skills:
● Strong analytical and problem-solving mindset.
● Excellent communication skills for cross-functional collaboration.
● Self-motivated, adaptive, and committed to continuous learning.

(ref:hirist.tech)
Job Title: MERN Stack Developer
Location: Ahmedabad (On-Site), Onshore Opportunity
Experience: 5 to 7 Years
Joining: Immediate Joiners Preferred
Employment Type: Full-Time

About The Role:
We are seeking a talented and experienced MERN Stack Developer to join our on-site team in Ahmedabad. As a key member of our development team, you will be responsible for building and maintaining high-performance web applications using MongoDB, Express.js, React.js, and Node.js.

Key Responsibilities:
● Develop and maintain scalable, responsive web applications using the MERN stack.
● Write clean, efficient, and well-documented code.
● Integrate APIs and third-party services.
● Work closely with UI/UX designers, QA, and product teams.
● Optimize applications for maximum speed and scalability.
● Perform code reviews and provide constructive feedback.
● Debug and resolve technical issues and bugs.
● Ensure cross-platform and cross-browser compatibility.

Required Skills & Qualifications:
● 5+ years of professional experience with the MERN stack.
● Experience in a team lead role.
● Strong proficiency in JavaScript (ES6+), HTML5, and CSS3.
● In-depth experience with React.js, including hooks, Redux, and the component lifecycle.
● Strong backend development skills using Node.js and Express.js.
● Proficiency with MongoDB, including aggregation, indexing, and schema design.
● Experience with RESTful APIs and JSON.
● Familiarity with Git and version control workflows.
● Strong problem-solving and debugging skills.
● Excellent communication and teamwork abilities.

Preferred Qualifications:
● Experience with deployment on AWS, Heroku, or similar cloud platforms.
● Familiarity with containerization tools such as Docker.
● Experience with testing frameworks such as Jest or Mocha.
● Knowledge of agile methodologies.

(ref:hirist.tech)
🌟 We’re Hiring: MERN Stack Developer | 5+ Years Experience 🌟 👨💻 Position: MERN Stack Developer 📍 Location: Ahmedabad (On-Site) ✈️ Onshore Opportunity 🗓️ Experience: 5+ Years ⚡ Joining: Immediate Joiners Preferred 📌 Employment Type: Full-Time We’re looking for a skilled MERN Stack Developer ready to work on exciting and high-impact web applications. If you're passionate about JavaScript and full-stack innovation, we’d love to meet you! 🛠️ What You’ll Do 🔸 Build and maintain scalable web applications using MongoDB, Express.js, React.js, and Node.js 🔸 Write clean, efficient, and well-documented code 🔸 Integrate third-party services and REST APIs 🔸 Collaborate with UI/UX, QA, and product teams 🔸 Review code, fix bugs, and ensure robust performance 🔸 Ensure cross-browser and cross-platform compatibility 🔸 Lead and mentor team members, ensuring high code quality and collaboration 📚 Must-Have Experience 🌐 5+ years working professionally with the MERN stack 🌐 Has experience in a team lead role 🌐 Expertise in JavaScript (ES6+), HTML5, CSS3 🌐 Deep understanding of React.js (Hooks, Redux, lifecycle) 🌐 Backend development with Node.js & Express.js 🌐 Strong hands-on with MongoDB (schemas, indexing, aggregation) 🌐 API integration, Git, version control, and debugging Bonus Skills 🧩 Experience deploying on AWS / Heroku 🧩 Familiar with Docker and container workflows 🧩 Testing: Jest, Mocha 🧩 Experience with Agile/Scrum environments 📩 Interested? Drop me a DM or apply today!
We are hiring a Lead MERN Stack Developer with 5–7 years of experience! Handle full project delivery, client communication, and short-term travel to Nairobi. Bonus + travel perks. Ready to lead and explore global projects? Apply now! Benefits: Provident fund.
As a Machine Learning Engineer at Tecblic, you will be part of a forward-thinking team that specializes in delivering AI-driven solutions to empower businesses in the digital age. If you are passionate about Large Language Models (LLMs), machine learning, and pushing the boundaries of Agentic AI, we invite you to join us on our innovative journey. Your primary responsibilities will include researching, designing, and fine-tuning machine learning models, with a specific focus on LLMs and Agentic AI systems. You will collaborate with software engineers and product teams to integrate AI models into customer-facing applications, perform data preprocessing and exploratory data analysis, and design robust model deployment pipelines. Additionally, you will prototype innovative solutions using cutting-edge techniques like reinforcement learning and generative AI, while also providing technical mentorship to junior team members. To excel in this role, you should have proficiency in Python for machine learning tasks, expertise in ML frameworks like PyTorch and TensorFlow, and a solid understanding of LLMs such as GPT and BERT. Experience with NLP tasks, deep learning architectures, and data manipulation tools is essential. Familiarity with cloud services, deployment tools, and additional skills like Agentic AI exposure and MLOps tools would be advantageous. In addition to technical skills, you should possess a strong analytical and mathematical background, a solid understanding of algorithms and data structures, and the ability to handle large datasets using distributed frameworks. Soft skills such as problem-solving, communication, and collaboration are also crucial for success in this role. If you are a self-motivated individual with excellent critical-thinking abilities and a continuous learning mindset, we look forward to having you contribute to our team at Tecblic.
We are hiring a Sr. Azure Data Engineer with 5 to 6 years of experience! Handle full project delivery, client communication, and short-term travel to Nairobi. Bonus + travel perks. Ready to lead and explore global projects? Apply now! Benefits: Provident fund.
We are hiring a Lead Data Scientist with 5 to 6 years of experience! We are looking for passionate, goal-oriented individuals who are ready to grow in an innovative tech environment. Benefits: Provident fund.
You will be joining a forward-thinking team at Tecblic, a company specializing in delivering AI-driven solutions to empower businesses in the digital age. If you are passionate about Large Language Models (LLMs), machine learning, and pushing the boundaries of Agentic AI, we welcome you to be a part of our innovative team. As a Machine Learning Engineer at Tecblic, your key responsibilities will include conducting research and development to design and fine-tune machine learning models, with a specific focus on LLMs and Agentic AI systems. You will be responsible for optimizing pre-trained LLMs for domain-specific use cases, collaborating with software engineers and product teams to integrate AI models into customer-facing applications, and performing data preprocessing, pipeline creation, feature engineering, and exploratory data analysis for dataset preparation. Additionally, you will design and implement model deployment pipelines, prototype innovative solutions using cutting-edge techniques, and provide technical mentorship to junior team members. In terms of core technical skills, we require proficiency in Python for machine learning and data science tasks, expertise in ML frameworks and libraries like PyTorch, TensorFlow, and Hugging Face, a solid understanding of LLMs such as GPT, T5, BERT, or Bloom, experience in NLP tasks, knowledge of deep learning architectures, strong data manipulation skills, and familiarity with cloud services and ML model deployment tools. Exposure to Agentic AI, MLOps tools, generative AI models, prompt engineering, few-shot/fine-tuned approaches for LLMs, vector databases, version control, and collaborative development practices would be considered advantageous. Furthermore, we are looking for individuals with a strong analytical and mathematical background, including proficiency in linear algebra, statistics, and probability, as well as a solid understanding of algorithms and data structures to solve complex ML problems. 
Soft skills such as excellent problem-solving, critical thinking, communication, collaboration, self-motivation, and a continuous learning mindset are also highly valued. If you are ready to contribute your expertise and passion for machine learning and AI to a dynamic and innovative team, Tecblic is the place for you. Join us in pushing the boundaries of technology and creating impactful AI-driven solutions for businesses in the digital era.
We Are Hiring: Data Engineer | 5+ Years Experience

Job Title: Data Engineer
Location: Ahmedabad
Work Mode: On-Site Opportunity
Experience: 5+ Years
Employment Type: Full-Time
Availability: Immediate Joiner Preferred

Join Our Team as a Data Engineer
We are seeking a passionate and experienced Data Engineer to join our dynamic and forward-thinking team in Ahmedabad. This is an exciting opportunity for someone who thrives on transforming raw data into powerful insights and building scalable, high-performance data infrastructure. As a Data Engineer, you will work closely with data scientists, analysts, and cross-functional teams to design robust data pipelines, optimize data systems, and enable data-driven decision-making across the organization.

Your Key Responsibilities:
● Architect, build, and maintain scalable and reliable data pipelines from diverse data sources.
● Design effective data storage and retrieval mechanisms and data models to support analytics and business needs.
● Implement data validation, transformation, and quality monitoring processes.
● Collaborate with cross-functional teams to deliver impactful, data-driven solutions.
● Proactively identify bottlenecks and optimize existing workflows and processes.
● Provide guidance and mentorship to junior engineers in the team.

Skills & Expertise We're Looking For:
● 4+ years of hands-on experience in Data Engineering or related roles.
● Strong expertise in Python and data pipeline design.
● Experience with big data tools such as Hadoop, Spark, and Hive.
● Proficiency with SQL, NoSQL databases, and data warehousing solutions.
● Solid experience with cloud platforms, especially Azure.
● Familiarity with distributed computing, data modeling, and performance tuning.
● Understanding of DevOps, Power Automate, and Microsoft Fabric is a plus.
● Strong analytical thinking, collaboration skills, excellent communication skills, and the ability to work independently or as part of a team.

Qualifications: Bachelor's degree in Computer Science, Data Science, or a related field. (ref:hirist.tech)
We are looking for a Data Engineer with over 5 years of experience to join our team in Ahmedabad. As a Data Engineer, you will play a key role in transforming raw data into valuable insights and creating scalable data infrastructure. Your responsibilities will include designing data pipelines, optimizing data systems, and supporting data-driven decision-making.

Key responsibilities of the role include:
- Architecting, building, and maintaining scalable data pipelines from various sources.
- Designing effective data storage and retrieval mechanisms and data models for analytics.
- Implementing data validation, transformation, and quality monitoring processes.
- Collaborating with cross-functional teams to deliver data-driven solutions.
- Identifying bottlenecks, optimizing workflows, and providing mentorship to junior engineers.

We are looking for a candidate with:
- 4+ years of hands-on experience in Data Engineering.
- Proficiency in Python and data pipeline design.
- Experience with big data tools like Hadoop, Spark, and Hive.
- Strong skills in SQL, NoSQL databases, and data warehousing solutions.
- Knowledge of cloud platforms, especially Azure.
- Familiarity with distributed computing, data modeling, and performance tuning.
- Understanding of DevOps, Power Automate, and Microsoft Fabric is a plus.
- Strong analytical thinking, collaboration skills, excellent communication skills, and the ability to work independently or as part of a team.

Qualifications required for this position include a Bachelor's degree in Computer Science, Data Science, or a related field. If you are passionate about data engineering and have the necessary expertise, we encourage you to apply and be a part of our innovative team in Ahmedabad.
Responsibilities:
● Oversee the design, development, and implementation of data analysis solutions to meet business needs.
● Work closely with business stakeholders and the Aviation SME to define data requirements, project scope, and deliverables.
● Drive the design and development of analytics data models and data warehouse designs.
● Develop and maintain data quality standards and procedures.
● Manage and prioritize data analysis projects, ensuring timely completion.
● Identify opportunities to improve data analysis processes and tools.
● Collaborate with Data Engineers and Data Architects to ensure data solutions align with the overall data platform architecture.
● Evaluate and recommend new data analysis tools and technologies.
● Contribute to the development of best practices for data analysis.
● Participate in project meetings and provide input on data-related issues, risks, and requirements.

Qualifications:
● 12+ years of experience as a Data Analytics Lead, with experience leading or mentoring a team.
● Extensive experience with cloud-based data modelling and data warehousing solutions using Azure Databricks.
● Proven experience with data technologies and platforms and with ETL processes and tools, preferably Azure Data Factory, Azure Databricks (Spark), and Delta Lake.
● Advanced proficiency in data visualization tools such as Power BI.

Data Analysis and Visualization:
● Experience in data analysis, statistical modelling, and machine learning techniques.
● Proficiency in analytical tools such as Python and R, with libraries such as Pandas and NumPy, for data analysis and modelling.
● Strong expertise in Power BI for data visualization, data modelling, and DAX queries, with knowledge of best practices.
● Experience implementing Row-Level Security in Power BI.
● Ability to work with moderately complex data models and quickly understand application data design and processes.
● Familiarity with industry best practices for Power BI and experience with performance optimization of existing implementations.
● Understanding of machine learning algorithms, including supervised, unsupervised, and deep learning techniques.

Data Handling and Processing:
● Proficiency in SQL Server and query optimization.
● Expertise in application data design and process management.
● Extensive knowledge of data modelling.
● Hands-on experience with Azure Data Factory and Azure Databricks.
● Expertise in data warehouse development, including experience with SSIS (SQL Server Integration Services) and SSAS (SQL Server Analysis Services).
● Proficiency in ETL processes (data extraction, transformation, and loading), including data cleaning and normalization.
● Familiarity with big data technologies (e.g., Hadoop, Spark, Kafka) for large-scale data processing.
● Understanding of data governance, compliance, and security measures within Azure environments.

(ref:hirist.tech)
Data Architecture and Engineering Lead Job location: Ahmedabad (full-time) Responsibilities: Lead Data Architecture: Own the design, evolution, and delivery of enterprise data architecture across cloud and hybrid environments. Develop relational and analytical data models (conceptual, logical, and physical) to support business needs and ensure data integrity. Consolidate Core Systems: Unify data sources across airport systems into a single analytical platform optimized for business value. Build Scalable Infrastructure: Architect cloud-native solutions that support both batch and streaming data workflows using tools like Databricks, Kafka, etc. Implement Microservice Architecture Implement Governance Frameworks: Define and enforce enterprise-wide data standards for access control, privacy, quality, security, and lineage. Data Modeling Enable Metadata & Cataloguing: Deploy metadata management and cataloguing tools to enhance data discoverability and self-service analytics. Operationalize AI/ML Pipelines: Lead data architecture that supports AI/ML initiatives, including forecasting, pricing models, and personalization. Partner Across Functions: Translate business needs into data architecture solutions by collaborating with leaders in Operations, Finance, HR, Legal, and Technology. Optimize Cloud Cost & Performance: Roll out compute and storage systems that balance cost efficiency, performance, and observability across platforms. Qualifications: 12+ years of experience in data architecture, with 3+ years in a senior or leadership role across cloud or hybrid environments Proven ability to design and scale large data platforms supporting analytics, real-time reporting, and AI/ML use cases Hands-on expertise with ingestion, transformation, and orchestration pipelines Extensive experience with Microsoft Azure data services, including Azure Data Lake Storage, Azure Databricks, Azure Data Factory, and related technologies. 
● Strong knowledge of ERP data models, especially SAP and MS Dynamics.
● Experience with data governance, compliance (GDPR/CCPA), metadata cataloguing, and security practices.
● Familiarity with distributed systems and streaming frameworks such as Spark or Flink.
● Strong stakeholder management and communication skills, with the ability to influence both technical and business teams.
Tools & Technologies:
● Warehousing: Azure Databricks Delta, BigQuery
● Big Data: Apache Spark
● Cloud Platforms: Azure (ADLS, AKS, Event Hubs, Service Bus)
● Streaming: Kafka, Pub/Sub
● RDBMS: PostgreSQL, MS SQL, Oracle
● NoSQL/Big Data Stores: MongoDB, Hadoop, ClickHouse
● Monitoring: Azure Monitor, Application Insights, Prometheus, Grafana
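The posting asks for both batch and streaming workflows (Databricks, Kafka, Spark/Flink). As an illustration of the core streaming primitive those frameworks provide, here is a tumbling-window aggregation sketched in plain Python; the event data and the "gate" key are invented for the example and are not taken from the listing:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_sec):
    """Group (timestamp, key) events into fixed-size tumbling windows
    and count occurrences per key -- the operation behind windowed
    aggregations in Kafka Streams or Spark Structured Streaming."""
    counts = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        window_start = ts - (ts % window_sec)   # align to window boundary
        counts[window_start][key] += 1
    return {w: dict(kc) for w, kc in sorted(counts.items())}

# Invented sample: timestamps in seconds, one-minute windows.
events = [(0, "gate_a"), (5, "gate_a"), (12, "gate_b"), (61, "gate_a")]
print(tumbling_window_counts(events, 60))
```

A real deployment would let the streaming engine manage windows, watermarks, and late data; this sketch only shows the windowing arithmetic itself.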
Location: Ahmedabad
Employment Type: Full-time
Experience Level: 4+ Years
Position Overview:
We are seeking a skilled Full Stack Developer with strong expertise in Java, Spring Boot, and React to join our dynamic team. The ideal candidate will have hands-on experience building scalable applications, integrating services, and working in an agile environment. Experience with Apache Camel and ActiveMQ will be considered a significant advantage.
Key Responsibilities:
● Design, develop, and maintain high-quality backend services using Java and Spring Boot.
● Build responsive, user-friendly front-end interfaces using React.
● Collaborate with cross-functional teams to gather and refine requirements.
● Implement and maintain service integrations, preferably using Apache Camel.
● Manage messaging and asynchronous communication using ActiveMQ.
● Write clean, maintainable, and well-documented code.
● Perform unit testing and integration testing, and participate in code reviews.
● Troubleshoot and resolve application issues in a timely manner.
Required Skills & Qualifications:
● Strong proficiency in Java and Spring Boot for backend development.
● Hands-on experience building UI components with React.
● Solid understanding of RESTful API design and development.
● Familiarity with relational databases (e.g., MySQL, PostgreSQL).
● Good problem-solving skills and attention to detail.
● Strong communication and teamwork abilities.
Preferred Skills (Good to Have):
● Experience with Apache Camel for integration.
● Knowledge of ActiveMQ or other message brokers.
● Understanding of microservices architecture.
● Familiarity with cloud platforms (AWS, Azure, GCP).
Education:
● Bachelor's degree in Computer Science, Information Technology, or a related field. (ref:hirist.tech)
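The "messaging and asynchronous communication using ActiveMQ" responsibility boils down to the point-to-point queue pattern: a producer enqueues messages and a separate consumer processes them off the critical path. For illustration only, that pattern can be sketched language-agnostically with Python's standard-library queue (in the role itself this would be JMS/ActiveMQ from Java; the message names here are invented):

```python
import queue
import threading

def worker(q, results):
    """Consume messages until a None sentinel arrives -- mirrors a
    point-to-point queue consumer in a broker like ActiveMQ."""
    while True:
        msg = q.get()
        if msg is None:          # shutdown signal
            break
        results.append(msg.upper())   # stand-in for real processing
        q.task_done()

q = queue.Queue()
results = []
t = threading.Thread(target=worker, args=(q, results))
t.start()

for msg in ["order.created", "order.paid"]:
    q.put(msg)                   # producer side: fire and forget
q.put(None)                      # tell the consumer to stop
t.join()                         # wait for the consumer to drain

print(results)
```

The broker adds durability, acknowledgements, and redelivery on top of this shape, but the decoupling of producer and consumer is the essential idea.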
Job Title: Lead Technical Architect
Location: Ahmedabad
Employment Type: Full-time
Experience Level: 10+ Years
Key Responsibilities
1. Architecture & Design
● Develop end-to-end architecture blueprints for large-scale enterprise applications.
● Define component-based and service-oriented architectures (Microservices, SOA, Event-Driven).
● Create API-first designs using REST, GraphQL, and gRPC with clear versioning strategies.
● Establish integration patterns for internal systems, third-party APIs, and middleware.
● Design cloud-native architectures leveraging AWS, Azure, or GCP services.
● Define coding guidelines, performance benchmarks, and security protocols.
● Participate in POC projects to evaluate new tools and frameworks.
2. Performance, Security & Scalability
● Implement caching strategies (Redis, Memcached, CDN integrations).
● Ensure horizontal and vertical scalability of applications.
● Apply security best practices: OAuth 2.0, JWT, SAML, encryption (TLS/SSL, AES), input validation, and secure API gateways.
● Set up application monitoring and logging using ELK, Prometheus, Grafana, or equivalent.
3. DevOps & Delivery
● Define CI/CD workflows using Jenkins, GitHub Actions, Azure DevOps, or GitLab CI.
● Collaborate with DevOps teams on container orchestration (Docker, Kubernetes).
● Integrate automated testing pipelines (unit, integration, and load testing).
Required Technical Skills
Programming & Frameworks:
● Expertise in one or more enterprise languages: Core, Node.js.
● Strong understanding of front-end technologies (Angular, React) for full-stack integration.
Architecture & Patterns:
● Microservices, Domain-Driven Design (DDD), Event-Driven Architecture (EDA).
● Message brokers and streaming: Kafka, RabbitMQ, Azure Event Hubs, Azure Service Bus.
Databases & Storage:
● Relational DBs: PostgreSQL, MySQL, MS SQL Server.
● NoSQL DBs: MongoDB.
● Caching layers: Redis, Memcached.
Cloud & Infrastructure:
● Azure (App Services, Functions, API Management, Cosmos DB).
Security:
● OAuth 2.0, SAML, OpenID Connect, JWT.
● Secure coding practices, threat modelling, and familiarity with penetration testing.
DevOps & CI/CD:
● Azure DevOps, GitLab CI/CD.
● Docker, Kubernetes.
Testing & Quality Assurance:
● Unit testing (JUnit, NUnit, PyTest, Mocha).
● Performance/load testing (JMeter, Locust).
Monitoring & Observability:
● Azure Monitor, Application Insights, Prometheus, Grafana.
Preferred Skills & Certifications
● Microsoft Certified: Azure Solutions Architect Expert.
● Exposure to AI/ML services and IoT architectures.
KPIs for Success
● Reduced system downtime through robust architecture designs.
● Improved performance metrics and scalability readiness.
● Successful delivery of complex projects without major architectural rework.
● Increased developer productivity through better standards and tools adoption.
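The caching-strategy responsibility above (Redis, Memcached, CDN) usually means the cache-aside pattern: check the cache, fall back to the backing store on a miss, and store the result with a time-to-live. A minimal language-neutral sketch in plain Python follows; the `TTLCache` class, `slow_lookup` function, and `user:1` key are invented for illustration, not part of any stack named in the posting:

```python
import time

class TTLCache:
    """Minimal cache-aside helper: entries expire after ttl seconds,
    mirroring the expiry behaviour of Redis/Memcached."""

    def __init__(self, ttl):
        self.ttl = ttl
        self.store = {}          # key -> (value, expiry timestamp)

    def get_or_load(self, key, loader):
        now = time.monotonic()
        hit = self.store.get(key)
        if hit and hit[1] > now:         # fresh entry: serve from cache
            return hit[0]
        value = loader(key)              # miss or stale: hit backing store
        self.store[key] = (value, now + self.ttl)
        return value

calls = []
def slow_lookup(key):
    calls.append(key)                    # track backing-store hits
    return key.upper()                   # stand-in for a slow DB query

cache = TTLCache(ttl=60)
cache.get_or_load("user:1", slow_lookup)
cache.get_or_load("user:1", slow_lookup)  # second call served from cache
print(calls)                              # backing store was hit only once
```

Production concerns the architect owns on top of this (cache stampedes, invalidation on write, distributed consistency) are exactly why the posting treats caching as a design responsibility rather than a library choice.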