We are looking for an experienced Big Data Developer (immediate joiners only) with a strong background in PySpark, Python/Scala, Spark, SQL, and the Hadoop ecosystem. The ideal candidate should have over 4 years of experience and be ready to join immediately. This role requires hands-on expertise in big data technologies and the ability to design and implement robust data processing solutions. Key Responsibilities: Design, develop, and optimize large-scale data processing pipelines using PySpark. Work with various Apache tools and frameworks (like Hadoop, Hive, HDFS, etc.) to ingest, transform, and manage large datasets. Ensure high performance and reliability of ETL jobs in production. Collaborate with Data Scientists, Analysts, and other stakeholders to understand data needs and deliver robust data solutions. Implement data quality checks and data lineage tracking for transparency and auditability. Work on data ingestion, transformation, and integration from multiple structured and unstructured sources. Leverage Apache NiFi for automated and repeatable data flow management (if applicable). Write clean, efficient, and maintainable code in Python and Java. Contribute to architectural decisions, performance tuning, and scalability planning. Required Skills: 5–7 years of experience. Strong hands-on experience with PySpark for distributed data processing. Deep understanding of the Apache ecosystem (Hadoop, Hive, Spark, HDFS, etc.). Solid grasp of data warehousing, ETL principles, and data modeling. Experience working with large-scale datasets and performance optimization. Familiarity with SQL and NoSQL databases. Proficiency in Python and basic to intermediate knowledge of Java. Experience in using version control tools like Git and CI/CD pipelines. Nice-to-Have Skills: Working experience with Apache NiFi for data flow orchestration. Experience in building real-time streaming data pipelines. Knowledge of cloud platforms like AWS, Azure, or GCP. Familiarity with containerization tools like Docker or orchestration tools like Kubernetes. Soft Skills: Strong analytical and problem-solving skills. Excellent communication and collaboration abilities. Self-driven with the ability to work independently and as part of a team. Education: Bachelor’s or Master’s degree in Computer Science, Information Systems, or a related field. Job Type: Full-time Pay: ₹1,000,000.00 - ₹1,700,000.00 per year Benefits: Health insurance Schedule: Day shift Supplemental Pay: Performance bonus Yearly bonus Ability to commute/relocate: Basavanagudi, Bengaluru, Karnataka: Reliably commute or planning to relocate before starting work (Preferred) Application Question(s): Are you ready to join within 15 days? What is your current CTC? Experience: Python: 4 years (Preferred) PySpark: 4 years (Required) Data warehouse: 4 years (Required) Work Location: In person Application Deadline: 12/06/2025
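For a concrete flavour of the pipeline work described above, here is a minimal PySpark ETL sketch (batch ingest, transform, load into a Hive-backed table). The input path, column names, and target table are hypothetical placeholders rather than details of the actual role or codebase.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical example: ingest raw CSV orders, clean them, and publish a Hive table.
spark = SparkSession.builder.appName("orders_etl").enableHiveSupport().getOrCreate()

# Ingest: read raw data from HDFS (path is a placeholder).
raw = spark.read.option("header", True).csv("hdfs:///data/raw/orders/")

# Transform: cast types, drop records failing basic quality checks, derive a partition column.
clean = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
       .filter(F.col("order_id").isNotNull() & (F.col("amount") > 0))
       .withColumn("order_month", F.date_format("order_date", "yyyy-MM"))
)

# Load: write partitioned Parquet backing a Hive table for downstream analysts.
(clean.write.mode("overwrite")
      .partitionBy("order_month")
      .format("parquet")
      .saveAsTable("analytics.orders_clean"))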
We are looking for an immediate joiner and experienced Big Data Developer with a strong background in Kafka, PySpark, Python/Scala, Spark, SQL, and the Hadoop ecosystem. The ideal candidate should have over 5 years of experience and be ready to join immediately. This role requires hands-on expertise in big data technologies and the ability to design and implement robust data processing solutions. Responsibilities Design, develop, and maintain scalable data processing pipelines using Kafka, PySpark, Python/Scala, and Spark. Work extensively with the Kafka and Hadoop ecosystem, including HDFS, Hive, and other related technologies. Write efficient SQL queries for data extraction, transformation, and analysis. Implement and manage Kafka streams for real-time data processing. Utilize scheduling tools to automate data workflows and processes. Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions. Ensure data quality and integrity by implementing robust data validation processes. Optimize existing data processes for performance and scalability. Requirements Experience with GCP. Knowledge of data warehousing concepts and best practices. Familiarity with machine learning and data analysis tools. Understanding of data governance and compliance standards. This job was posted by Arun Kumar K from krtrimaIQ Cognitive Solutions.
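As an illustration of the Kafka plus PySpark work this role centres on, below is a minimal Structured Streaming sketch. The broker address, topic, and output paths are placeholders, and it assumes the spark-sql-kafka connector package is available on the cluster.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical example: consume JSON events from a Kafka topic and land them in Parquet.
spark = SparkSession.builder.appName("kafka_ingest").getOrCreate()

# Read a stream from Kafka (broker address and topic name are placeholders).
events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "orders")
         .load()
)

# Kafka values arrive as bytes; cast to string and keep the event timestamp.
parsed = events.select(
    F.col("value").cast("string").alias("payload"),
    F.col("timestamp").alias("event_time"),
)

# Write the stream to Parquet with checkpointing so restarts do not duplicate output files.
query = (
    parsed.writeStream.format("parquet")
          .option("path", "hdfs:///data/stream/orders/")
          .option("checkpointLocation", "hdfs:///checkpoints/orders/")
          .start()
)
query.awaitTermination()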
Job Title: Senior Data Engineer (PySpark | GCP | DataProc) Location: Remote (Work from Anywhere – India Preferred) Experience: 5–8 Years Apply at: 📧 nikhil.kumar@krtrimaiq.ai About the Role We at KrtrimaIQ Cognitive Solutions are looking for a highly experienced and results-driven Senior Data Engineer to design and develop scalable, high-performance data pipelines and solutions in a cloud-native, big data environment. This is a fully remote role, ideal for professionals with deep hands-on experience in PySpark, Google Cloud Platform (GCP), and DataProc. Key Responsibilities: Design, build, and maintain scalable ETL/ELT data pipelines using PySpark Develop and optimize data workflows leveraging GCP DataProc, BigQuery, Cloud Storage, and Cloud Composer Ingest, transform, and integrate structured and unstructured data from diverse sources Collaborate with Data Scientists, Analysts, and cross-functional teams to deliver reliable, real-time data solutions Ensure performance, scalability, and reliability of data platforms Implement best practices for data governance, security, and quality Must-Have Skills: Strong hands-on experience in PySpark and the Apache Spark ecosystem Proficiency in working with GCP services, especially DataProc, BigQuery, Cloud Storage, and Cloud Composer Experience with distributed data processing, ETL design, and data warehouse architecture Strong SQL skills and familiarity with NoSQL data stores Knowledge of CI/CD pipelines, version control (Git), and code review processes Ability to work independently in a remote setup with strong communication skills Preferred Skills: Exposure to real-time data processing tools like Kafka or Pub/Sub Familiarity with Airflow, Terraform, or other orchestration/automation tools Experience with data quality frameworks and observability tools Why Join Us? 100% Remote – Work from anywhere High-impact role in a fast-growing AI-driven company Opportunity to work on enterprise-grade, large-scale data systems Collaborative and flexible work culture 📩 Interested candidates, please send your resume to: nikhil.kumar@krtrimaiq.ai #SeniorDataEngineer #RemoteJobs #PySpark #GCPJobs #DataProc #BigQuery #CloudDataEngineer #DataEngineeringJobs #ETLPipelines #ApacheSpark #BigDataJobs #GoogleCloudJobs #CloudDataEngineer #HiringNow #DataPipelineEngineer #WorkFromHome #KrtrimaIQ #AIDataEngineering #DataJobsIndia
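For a flavour of the DataProc/BigQuery workflow described above, a minimal PySpark sketch follows. It assumes the spark-bigquery connector is available on the DataProc cluster; the bucket, dataset, and table names are hypothetical.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical example: a PySpark job intended to run on a DataProc cluster,
# reading raw JSON from Cloud Storage and writing aggregates to BigQuery.
# Assumes the spark-bigquery connector is installed on the cluster.
spark = SparkSession.builder.appName("gcs_to_bq").getOrCreate()

# Ingest from GCS (bucket and path are placeholders).
raw = spark.read.json("gs://example-raw-bucket/events/2024/*.json")

# Transform: daily event counts per user.
daily = (
    raw.withColumn("event_date", F.to_date("event_ts"))
       .groupBy("user_id", "event_date")
       .agg(F.count("*").alias("event_count"))
)

# Load into BigQuery via the connector; temporaryGcsBucket is needed for the
# indirect write method (bucket name is a placeholder).
(daily.write.format("bigquery")
      .option("table", "example_dataset.daily_user_events")
      .option("temporaryGcsBucket", "example-temp-bucket")
      .mode("overwrite")
      .save())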
Job Description: We are seeking a talented and experienced ServiceNow Developer to join our team in Bangalore. The ideal candidate will have a strong background in ServiceNow development and implementation, with a keen eye for detail and a passion for creating efficient, user-friendly solutions. As a ServiceNow Developer, you will play a crucial role in designing, developing, and maintaining ServiceNow applications to meet the needs of our clients. Key Responsibilities: - Develop and customize ServiceNow applications and services. - Implement new ServiceNow modules (FSM) and features related to FSM. - Awareness of the Dispatcher workspace (UI Builder configuration). - Experience in developing UI Pages (including HTML, CSS, and JavaScript). - Strong experience in writing Server scripts, Client scripts, Business rules, and Scheduled Jobs. - Integrate ServiceNow with other systems and applications. - Create and maintain technical documentation. - Troubleshoot and resolve technical issues in the ServiceNow platform. - Collaborate with cross-functional teams to understand requirements and deliver solutions. - Perform regular system upgrades and ensure optimal performance. Qualifications: - Bachelor's degree in Computer Science, Information Technology, or related field. - 4+ years of experience in ServiceNow development and implementation. - Strong knowledge of JavaScript, HTML, CSS, and other web technologies. - Experience with ServiceNow scripting, including Business Rules, Script Includes, Client Scripts, and UI Actions. - Proficient in creating and managing workflows, forms, and reports in ServiceNow. - Familiarity with ITIL processes and best practices. - Excellent problem-solving skills and attention to detail. - Strong communication and interpersonal skills. Preferred Skills: - ServiceNow Certified Application Developer (CAD) or other relevant certifications. - Experience with Agile development methodologies. - Knowledge of REST/SOAP APIs and integration techniques. - Knowledge of FSM Module. - Ability to understand client requirements and experience working with international clients. - Understanding of security and compliance requirements in ServiceNow. Job Type: Full-time Pay: ₹500,000.00 - ₹1,500,000.00 per year Shift: Day shift Morning shift Work Days: Monday to Friday Application Question(s): May I know your current CTC? Experience: FSM: 4 years (Required) ITIL: 4 years (Required) License/Certification: CAD (Required) Location: Basavanagudi, Bengaluru, Karnataka (Required) Work Location: In person
Job Title: MLOps Engineer – AWS SageMaker & MLflow (Remote) Location: Fully Remote (India Preferred) Experience Level: 5+ Years Employment Type: Full-Time Company: KrtrimaIQ Cognitive Solutions 📧 Apply at: nikhil.kumar@krtrimaiq.ai Job Summary: KrtrimaIQ Cognitive Solutions is looking for a Senior MLOps Engineer with deep expertise in AWS SageMaker, MLflow, and end-to-end machine learning lifecycle management. The ideal candidate will have hands-on experience in deploying and managing scalable ML solutions in production environments and a passion for building reliable MLOps systems. This is a fully remote opportunity offering flexibility and the chance to work on innovative AI/ML products for global enterprises. Key Responsibilities: Design, build, and maintain scalable and secure MLOps pipelines using AWS SageMaker and MLflow Automate ML model lifecycle: training, testing, tracking, versioning, and deployment to production Implement CI/CD pipelines and DevOps practices for ML infrastructure Ensure reproducibility, monitoring, and performance optimization of deployed models Collaborate closely with Data Scientists, Data Engineers, and DevOps teams to streamline workflows Contribute to MLOps research and explore new tools and frameworks in the production ML space Ensure compliance with best practices in cloud computing, software development, and data security Must-Have Qualifications: 5+ years of professional experience in Software Engineering or MLOps Expertise in AWS, specifically AWS SageMaker for building and deploying ML models Hands-on experience with MLflow for model tracking, versioning, and deployment Strong programming skills in Python (R, Scala, or Spark is a plus) Solid experience in production-grade development and infrastructure automation Strong problem-solving, analytical, and research skills Preferred Qualifications: Experience with AWS DataZone Familiarity with Docker, Kubernetes, Airflow, Terraform, or other orchestration tools Understanding of data versioning, feature stores, and ML monitoring tools Exposure to MLOps research and experimentation in startup/innovation environments What We Offer: 💻 100% Remote Work Flexibility 🌐 Exposure to enterprise-grade AI and MLOps use cases 🤝 Collaborative work culture focused on innovation and learning 🚀 Fast-paced startup-like environment with global project opportunities How to Apply: Send your updated CV to: nikhil.kumar@krtrimaiq.ai
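A minimal MLflow tracking sketch of the model-lifecycle work mentioned above is shown below. The experiment name, hyperparameters, and synthetic dataset are placeholders; in a SageMaker setup the same logging code would typically run inside a training job pointed at a remote MLflow tracking server.

import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical example: track params, a metric, and a model artifact with MLflow.
mlflow.set_experiment("churn-model")  # experiment name is a placeholder

X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run():
    params = {"n_estimators": 200, "max_depth": 8}
    model = RandomForestClassifier(**params, random_state=42).fit(X_train, y_train)

    # Log hyperparameters, an evaluation metric, and the serialized model so the
    # run can be compared, versioned, and later promoted to deployment.
    mlflow.log_params(params)
    mlflow.log_metric("accuracy", accuracy_score(y_test, model.predict(X_test)))
    mlflow.sklearn.log_model(model, "model")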
🚀 Job Opening: Senior ServiceNow Developer (FSM & SPM Expert) | 5+ Years Experience | Fully Remote 🌐 Company: KrtrimaIQ Cognitive Solutions Location: Remote (Work from Anywhere) Experience: 5+ Years Job Type: Full-Time | Permanent Domain: IT Services / Enterprise Platforms 🔍 About the Role KrtrimaIQ Cognitive Solutions is actively seeking a Senior ServiceNow Developer with strong expertise in Field Service Management (FSM) and Strategic Portfolio Management (SPM) modules. This is a critical role in our growing team, where you’ll work on enterprise-scale ServiceNow implementations to enable digital transformation for global clients. This is a 100% remote position, ideal for experienced professionals who thrive in dynamic and distributed environments. ✅ Key Responsibilities Design, develop, and customize ServiceNow applications with a strong focus on FSM and SPM. Build scalable, secure, and high-performing solutions based on business needs. Create custom workflows, business rules, UI actions, client/server scripts, and integrations. Partner with stakeholders to gather requirements and implement platform-wide enhancements. Mentor junior developers and ensure adherence to best practices. 🎯 Required Skills & Experience 5+ years of hands-on development experience with ServiceNow. Proven work experience in Field Service Management (FSM). Strong hands-on expertise in Strategic Portfolio Management (SPM). Proficient in JavaScript, Flow Designer, Glide APIs, REST/SOAP integrations. Solid experience with ITSM processes (Incident, Change, Problem, CMDB). Familiarity with Agile and Scrum methodologies. Excellent communication and stakeholder management skills. 🌟 Good to Have ServiceNow certifications (CSA, CAD). Experience in ITOM, HRSD, or other advanced ServiceNow modules. Background in enterprise consulting environments. 💼 What We Offer Competitive compensation package Work from anywhere flexibility Exposure to cutting-edge ServiceNow implementations Collaborative and innovative work culture Learning and upskilling opportunities 📩 Interested candidates can apply by sending their resume to: 📧 nikhil.kumar@krtrimaiq.ai
🚨 Job Opening: HEOR / RWE Lead – Senior Scientist / Manager | 5+ Years Experience | Remote/Hybrid Company: KrtrimaIQ Cognitive Solutions Location: Remote / Hybrid (Preferred: Bangalore-based candidates) Experience: 5+ Years Domain: Health Economics & Outcomes Research (HEOR), Real World Evidence (RWE), Life Sciences Analytics 🔍 About the Role KrtrimaIQ Cognitive Solutions is looking for a highly skilled and experienced HEOR / RWE Lead – Senior Scientist / Manager to join our growing team. This role is ideal for someone with a strong background in real-world data analytics, particularly using licensed US claims datasets, and prior experience in a HEOR/RWE vendor environment. You will lead the execution of high-impact projects for global pharmaceutical and life sciences clients, leveraging advanced statistical and epidemiological methodologies to generate actionable insights from real-world evidence. 🧠 Key Responsibilities Data Management: Clean, transform, and prepare analytical datasets using licensed US claims data (e.g., Optum, MarketScan, IQVIA). Advanced Analysis: Conduct robust statistical and epidemiological analyses to derive insights aligned with client objectives. Client-Facing Delivery: Present findings clearly, communicate insights, and support client discussions with high-quality deliverables. Project Leadership: Understand analytical goals, guide junior team members, and manage project timelines and expectations. Cross-Team Collaboration: Work closely with data scientists, statisticians, and domain experts to ensure outcome-focused project execution. ✅ Required Skills & Qualifications Master’s or Ph.D. in Biostatistics, Epidemiology, Health Economics, Data Science, Statistics, or a related quantitative field Minimum 5+ years of experience in a HEOR/RWE vendor environment Hands-on experience working with licensed US claims datasets (e.g., Optum, MarketScan, Truven, IQVIA) Proficiency in SAS, R, Python, or SQL for data analysis Strong understanding of epidemiologic and health economic methodologies Excellent verbal and written communication skills 🌟 Preferred Skills Experience managing or leading HEOR/RWE project teams Exposure to data visualization tools (e.g., Power BI, Tableau) Knowledge of machine learning, GenAI, or automation frameworks applied to healthcare analytics 💼 Why Join KrtrimaIQ Cognitive Solutions? Be part of a disruptive AI & data-driven company in the healthcare domain Work with cutting-edge real-world data technologies and global pharma leaders Collaborative, innovative, and growth-driven work culture Flexible hybrid/remote work model with competitive compensation 📩 Apply now by sending your resume to: nikhil.kumar@krtrimaiq.ai
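By way of illustration, here is a short pandas sketch of the kind of claims-data preparation described above. The file name, column names, and diagnosis codes are entirely hypothetical and do not reflect any particular licensed dataset's schema.

import pandas as pd

# Hypothetical example: prepare a patient-level analytical dataset from a claims extract.
# File name, columns, and codes are placeholders, not any real licensed schema.
claims = pd.read_csv("claims_extract.csv", parse_dates=["service_date"])

# Basic cleaning: drop rows missing key identifiers and de-duplicate claim lines.
claims = (
    claims.dropna(subset=["patient_id", "claim_id"])
          .drop_duplicates(subset=["claim_id", "line_number"])
)

# Flag an index condition from diagnosis codes (codes are illustrative only).
index_codes = {"E11.9", "E11.65"}
claims["has_index_dx"] = claims["diagnosis_code"].isin(index_codes)

# Patient-level dataset: cost and utilization per patient, ready for modelling.
patient_level = (
    claims.groupby("patient_id")
          .agg(total_allowed=("allowed_amount", "sum"),
               n_claims=("claim_id", "nunique"),
               ever_index_dx=("has_index_dx", "max"))
          .reset_index()
)
print(patient_level.head())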
Job Description We are looking for a Data Scientist with strong AI/ML engineering skills to join our high-impact team at KrtrimaIQ Cognitive Solutions. This is not a notebook-only role — you must have production-grade experience deploying and scaling AI/ML models in cloud environments, especially GCP, AWS, or Azure. This role involves building, training, deploying, and maintaining ML models at scale, integrating them with business applications. Basic model prototyping won't qualify — we’re seeking hands-on expertise in building scalable machine learning pipelines. Key Responsibilities Design, train, test, and deploy end-to-end ML models on GCP (or AWS/Azure) to support product innovation and intelligent automation. Implement GenAI use cases using LLMs. Perform complex data mining and apply statistical algorithms and ML techniques to derive actionable insights from large datasets. Drive the development of scalable frameworks for automated insight generation, predictive modeling, and recommendation systems. Work on impactful AI/ML use cases in Search & Personalization, SEO Optimization, Marketing Analytics, Supply Chain Forecasting, and Customer Experience. Implement real-time model deployment and monitoring using tools like Kubeflow, Vertex AI, Airflow, PySpark, etc. Collaborate with business and engineering teams to frame problems, identify data sources, build pipelines, and ensure production-readiness. Maintain deep expertise in cloud ML architecture, model scalability, and performance tuning. Stay up to date with AI trends, LLM integration, and modern practices in machine learning and deep learning. Technical Skills Required Core ML & AI Skills (Must-Have) Strong hands-on ML engineering (70% of the role) — supervised/unsupervised learning, clustering, regression, optimization. Experience with real-world model deployment and scaling, not just notebooks or prototypes. Good understanding of ML Ops, model lifecycle, and pipeline orchestration. Strong with Python 3, Pandas, NumPy, Scikit-learn, TensorFlow, PyTorch, Seaborn, Matplotlib, etc. SQL proficiency and experience querying large datasets. Deep understanding of linear algebra, probability/statistics, Big-O, and scientific experimentation. Cloud experience in GCP (preferred), AWS, or Azure. Cloud & Big Data Stack: Hands-on experience with GCP tools – Vertex AI, Kubeflow, BigQuery, GCS – or equivalent AWS/Azure ML stacks. Familiar with Airflow, PySpark, or other pipeline orchestration tools. Experience reading/writing data from/to cloud services. Qualifications Bachelor's/Master’s/Ph.D. in Computer Science, Mathematics, Engineering, Data Science, Statistics, or related quantitative field. 4+ years of experience in data analytics and machine learning roles. 2+ years of experience in Python or similar programming languages (Java, Scala, Rust). Must have experience deploying and scaling ML models in production. Nice to Have Experience with LLM fine-tuning, Graph Algorithms, or custom deep learning architectures. Background in academic research to production applications. Building APIs and monitoring production ML models. Familiarity with advanced math – Graph Theory, PDEs, Optimization Theory. Communication & Collaboration Strong ability to explain complex models and insights to both technical and non-technical stakeholders. Ask the right questions, clarify objectives, and align analytics with business goals. Comfortable working cross-functionally in agile and collaborative teams.
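As a small illustration of the production-oriented ML engineering emphasised above, here is a scikit-learn sketch that keeps preprocessing and the model in a single serializable pipeline, which is the usual first step before serving the model behind an endpoint. The feature names and the tiny inline dataset are hypothetical.

import joblib
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Hypothetical example: one serializable pipeline (preprocessing + model) so the
# exact same transformations run at training time and behind the serving endpoint.
df = pd.DataFrame({
    "sessions_last_30d": [3, 12, 7, 1, 9, 15],
    "avg_order_value":   [250.0, 900.0, 400.0, 120.0, 640.0, 1100.0],
    "channel":           ["seo", "ads", "seo", "email", "ads", "seo"],
    "converted":         [0, 1, 0, 0, 1, 1],
})

preprocess = ColumnTransformer([
    ("num", StandardScaler(), ["sessions_last_30d", "avg_order_value"]),
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["channel"]),
])
pipeline = Pipeline([
    ("preprocess", preprocess),
    ("model", LogisticRegression(max_iter=1000)),
])

pipeline.fit(df.drop(columns="converted"), df["converted"])

# Persist the whole pipeline as a single artifact; the same file can then be
# loaded behind an API or a managed endpoint (Vertex AI, SageMaker, etc.).
joblib.dump(pipeline, "conversion_model.joblib")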
Important Note This is a Data Science-heavy role — 70% of responsibilities involve building, training, deploying, and scaling AI/ML models. Cloud Experience Is Mandatory (GCP Preferred, AWS/Azure Acceptable). Only candidates with hands-on experience in deploying ML models into production (not just notebooks) will be considered. Skills:- Machine Learning (ML), Production management, Large Language Models (LLM), AIML and Google Cloud Platform (GCP)
Job Description Job Title: AI Engineer Location: Bangalore (Hybrid) Experience: 5+ years Employment Type: Full-time Company Overview Join KrtrimaIQ Cognitive Solutions, a leading AI/ML company enabling intelligent enterprises through advanced technologies like machine learning, NLP, computer vision, and automation. Be part of a dynamic team transforming data into powerful business decisions. Key Responsibilities Design, develop, and deploy AI/ML models using Large Language Models (LLMs) and Multimodal LLMs. Work on building and optimizing Retrieval-Augmented Generation (RAG) pipelines for real-world use cases. Implement and fine-tune transformer models using Hugging Face and PyTorch. Conduct experiments, benchmark models, and continuously improve performance and scalability. Collaborate with data engineers and product teams to integrate AI models into production environments. Stay updated with the latest advancements in generative AI, LLM architectures, and open-source frameworks. Required Skills Strong hands-on experience with LLMs (GPT, LLaMA, Mistral, etc.) and Multimodal models (e.g., CLIP, Flamingo). Expertise in PyTorch and Hugging Face Transformers. Proficiency in building RAG-based solutions, integrating vector databases like FAISS, Pinecone, or Weaviate. Solid understanding of deep learning concepts, NLP, and model evaluation techniques. Experience in deploying and scaling AI models in production environments. Good programming skills in Python and familiarity with APIs and ML Ops tools. Preferred Qualifications Master's or Bachelor's degree in Computer Science, AI, Data Science, or related fields. Experience with distributed computing and cloud platforms (AWS/GCP/Azure). Contributions to open-source projects or research papers in the AI/ML domain. Why Join Us? Cutting-edge AI/ML projects Flexible hybrid work culture Competitive Compensation And Benefits Opportunity to grow in a rapidly scaling AI company #aiengineer #aijobs #RAG #Huggingface #NLP #Pytorch #machinelearning #llm #finetuning #immediatejoiner #hybridlocation #bangalorelocation Skills:- RAG, Large Language Models (LLM), Generative AI, Natural Language Processing (NLP), Huggingface and PyTorch
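To illustrate the retrieval half of a RAG pipeline of the kind described above, here is a minimal sketch using sentence-transformers and FAISS. The documents, embedding model choice, and prompt format are placeholders; a production system would typically swap the in-memory index for a managed vector database such as Pinecone or Weaviate.

import numpy as np
import faiss
from sentence_transformers import SentenceTransformer

# Hypothetical example: the retrieval step of a RAG pipeline over a toy corpus.
documents = [
    "Our warranty covers manufacturing defects for 24 months.",
    "Returns are accepted within 30 days with the original receipt.",
    "Support is available Monday to Friday, 9am to 6pm IST.",
]

# Embed the corpus and index it with FAISS (exact inner-product search here).
encoder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vectors = encoder.encode(documents, normalize_embeddings=True)
index = faiss.IndexFlatIP(doc_vectors.shape[1])
index.add(np.asarray(doc_vectors, dtype="float32"))

# Retrieve the top-k passages for a user query.
query = "How long is the warranty?"
query_vec = encoder.encode([query], normalize_embeddings=True)
scores, ids = index.search(np.asarray(query_vec, dtype="float32"), 2)
context = "\n".join(documents[i] for i in ids[0])

# The retrieved context would then go into the prompt of an LLM
# (GPT, LLaMA, Mistral, etc.) to generate a grounded answer.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)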
As a ServiceNow Developer at our Bangalore office, you will be a key member of our team, responsible for developing and customizing ServiceNow applications and services. Your role will involve implementing new ServiceNow modules and features related to Field Service Management (FSM) and configuring the UI Builder workspace. Your expertise in UI development using HTML, CSS, and JavaScript will be crucial in creating user-friendly solutions. You will be expected to have a strong background in writing Server scripts, Client scripts, Business rules, and Scheduled Jobs, as well as integrating ServiceNow with other systems and applications. Your responsibilities will also include creating and maintaining technical documentation, troubleshooting technical issues, and collaborating with cross-functional teams to deliver solutions that meet our clients' needs. To qualify for this role, you should hold a Bachelor's degree in Computer Science, Information Technology, or a related field, along with at least 4 years of experience in ServiceNow development and implementation. Proficiency in JavaScript, HTML, CSS, and other web technologies is essential, as is experience with ServiceNow scripting and workflow management. Knowledge of ITIL processes and best practices, strong problem-solving skills, and excellent communication abilities are also required. Preferred qualifications for this position include certification as a ServiceNow Certified Application Developer (CAD) or similar credentials, experience with Agile development methodologies, and familiarity with REST/SOAP APIs and integration techniques. Knowledge of the FSM module, understanding client requirements, and awareness of security and compliance standards in ServiceNow will also be beneficial. If you are looking to join a dynamic team and contribute to the development of innovative ServiceNow solutions, we invite you to apply for this exciting opportunity. Salary Range: 10 to 14 LPA (Note: This job description was sourced from hirist.tech).
Job Title: Python Architect Location: Bangalore | Remote | Full Time Experience: 8+ years Company: krtrimaIQ Cognitive Solutions Role Overview: We are seeking an experienced Python Architect to play a critical role in the evolution of our platform. This role will be primarily responsible for designing and implementing core system capabilities, shaping the software architecture, and establishing coding standards across the development team. You will work closely with engineering leadership and cross-functional teams to build scalable, maintainable, and high-quality systems. In addition, you will actively contribute code, perform code reviews, mentor engineers, and occasionally deploy infrastructure/software on Azure using Terraform and Ansible. This position is designed to reduce the hands-on coding load for the VP of R&D while serving as a technical leader across the organization. Key Responsibilities Architecture & Technical Leadership Design and develop core platform features and system capabilities. Define/enforce coding standards, design patterns, and best practices. Ensure architectural integrity and long-term scalability of solutions. Collaborate with the VP of R&D to align technical execution with business goals. Development & Code Quality Contribute hands-on to critical components of the codebase. Ensure efficient, single-core optimized coding practices (performance-focused). Perform rigorous code reviews and enforce maintainable, high-quality standards. Implement and maintain testing, documentation, and CI/CD best practices. Collaboration & Mentorship Serve as a trusted advisor to leadership and product teams. Mentor junior engineers on architecture, design patterns, and advanced problem-solving. Identify risks early and propose mitigation strategies. Infrastructure & Deployment Oversee Infrastructure-as-Code practices using Terraform and Azure. Maintain CI/CD pipelines with GitHub workflows. Ensure smooth, reliable, and automated deployments. Requirements 8+ years of professional software development experience. Expert-level Python skills with strong experience in architecture/design. Hands-on expertise in: Asyncio (concurrent code, pitfalls, best practices). SQLAlchemy and Alembic (database ORM & migrations). Poetry (dependency and environment management). GraphQL API design & development. AWS/Azure cloud services, especially Azure with Terraform & Ansible. CI/CD automation with GitHub workflows. Proven experience in data engineering pipelines and ML workflows. Experience building scalable distributed systems. Strong focus on single-core efficient coding strategies. Excellent communication skills (written & verbal). Nice to Have Exposure to AI agent frameworks / agentic tooling. Background in supply chain, manufacturing, or industrial SaaS. Knowledge of SaaS platform security and compliance best practices. Experience in fast-growth startups or scaling environments. Why Join Us? Opportunity to take ownership of platform-level architecture. Collaborate directly with leadership to influence product direction. Work with a modern tech stack at the intersection of Python, ML, and Cloud. Shape the engineering culture, coding practices, and innovation within the team. ✅ If you’re a Python expert passionate about high-performance architecture, scaling systems, and mentoring engineering teams, we’d love to hear from you! Apply now. #pythonlead #pythonarchitecture #sql #graphql #sqlalchemy #remotemode #bangalorelocation #aiml #immediatejoiner
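Since the role calls out asyncio pitfalls specifically, here is a minimal sketch of one common pattern: fanning out concurrent I/O with asyncio.gather while keeping blocking work off the event loop via asyncio.to_thread. The function names and workloads are hypothetical stand-ins, not part of the actual platform.

import asyncio
import time

# Hypothetical example: concurrent I/O with asyncio.gather, blocking work in threads.

async def fetch_record(record_id: int) -> dict:
    # Stand-in for a non-blocking I/O call (DB query, HTTP request, etc.).
    await asyncio.sleep(0.1)
    return {"id": record_id, "payload": f"record-{record_id}"}

def expensive_transform(record: dict) -> dict:
    # Stand-in for blocking/CPU-heavy work; must not run directly on the event loop.
    time.sleep(0.05)
    return {**record, "payload": record["payload"].upper()}

async def main() -> None:
    # Fetch records concurrently instead of awaiting them one by one.
    records = await asyncio.gather(*(fetch_record(i) for i in range(10)))
    # Delegate blocking transforms to worker threads so the loop stays responsive.
    results = await asyncio.gather(
        *(asyncio.to_thread(expensive_transform, r) for r in records)
    )
    print(len(results), "records processed")

if __name__ == "__main__":
    asyncio.run(main())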
We are seeking an experienced and hands-on Senior Developer to play a critical role in the evolution of our platform. This role will take primary responsibility for designing and implementing a range of core system and feature capabilities and will be expected to contribute to the overall information model, software architecture, and coding standards of the product. The position is designed to alleviate the hands-on coding demands on our current VP of R&D. You will work closely with our existing engineering leadership and cross-functional teams to ensure scalable, maintainable, and high-quality code delivery. You will also actively contribute to development work, assist with critical PR reviews, and mentor team members. From time to time, you may be called upon to deploy infrastructure and software on Azure via Terraform and Ansible. This role requires deep expertise in Python-based software architecture, strong data engineering and machine learning understanding, and a demonstrated ability to align technical execution with business goals. Responsibilities Architecture & Technical Leadership Own and develop a variety of core system capabilities and reporting features. Define and enforce coding standards, design patterns, and best practices across the development team. Perform rigorous code reviews to ensure high-quality, maintainable, and scalable codebases. Work closely with the VP of R&D and cross-functional teams to ensure code developed by the team aligns with Oii architecture, product strategy, and long-term scalability. Development & Code Quality Contribute hands-on to critical components of the codebase. Guide the team in balancing technical debt, scalability, and delivery speed. Maintain high standards in code versioning, testing, documentation, and deployment. Collaboration & Mentorship Act as a technical advisor to the product and leadership teams. Mentor and support the development team, helping to upskill engineers in architecture, design patterns, and problem-solving. Proactively identify potential risks and propose mitigation strategies. Infrastructure & Deployment Oversee infrastructure as code practices using Terraform and Azure. Maintain robust CI/CD pipelines leveraging GitHub workflows. Ensure efficient and reliable deployment processes. Requirements 8+ years of professional software development experience. Expert-level Python programming skills. Deep experience with: GraphQL API design and development. Alembic for database migrations. Poetry for dependency and environment management. GitHub workflows, CI/CD deployment pipelines, and PR processes. Azure cloud services, Terraform, and Ansible for infrastructure deployment. Strong understanding of machine learning workflows, data engineering pipelines, and production ML operations. Experience designing scalable distributed systems. Ability to critically assess and review code to maintain architectural integrity. Excellent written and verbal communication skills. Additional Experience (Nice To Have) Experience with AI agent frameworks or agentic tooling. Prior experience in supply chain, manufacturing, or industrial SaaS platforms. Experience contributing to SaaS platform security and compliance practices. Prior experience in fast-growth startups or scaling technology organizations. Skills:- SQLAlchemy, Python and GraphQL
As a Senior Big Data Developer specializing in PySpark, with a robust 5–7 years of experience, you will be responsible for designing, developing, and optimizing scalable data pipelines. Collaboration with cross-functional teams is essential to deliver reliable, high-performance data solutions in the Apache Big Data ecosystem. Your primary responsibilities will encompass building and enhancing large-scale data processing pipelines using PySpark, working with tools like Apache Hadoop, Hive, HDFS, Spark, and more to manage big data workloads efficiently. Additionally, you will be engaged in ETL development, data ingestion, transformation, integration from diverse sources, and collaborating with Data Scientists, Analysts, and business stakeholders to provide data-driven solutions. Ensuring the availability, performance, and scalability of data jobs will be crucial, along with implementing data quality checks and lineage tracking for auditability. Developing reusable code in Python and Java, contributing to architectural design, performance tuning, tech discussions, and utilizing Apache NiFi for automated data flow management when applicable are integral parts of this role. Your qualifications should include expertise in PySpark, distributed data processing, and a firm grasp of the Apache ecosystem components such as Hadoop, Hive, HDFS, and Spark. Experience with ETL pipelines, data modeling, data warehousing, proficiency in Python, Java, and familiarity with SQL/NoSQL databases are essential. Hands-on experience with Git, CI/CD tools, and code versioning is expected, and knowledge of Apache NiFi, real-time data streaming tools, cloud platforms like AWS, Azure, or GCP, and Docker, Kubernetes, or other orchestration tools would be advantageous. Your soft skills should encompass excellent problem-solving, analytical abilities, strong collaboration, communication skills, and the ability to work independently and within a team. A Bachelor's or Master's degree in Computer Science, Information Systems, or a related field is required. If you are enthusiastic about big data and wish to contribute to impactful projects, please submit your updated resume to nikhil.kumar@krtrimaiq.ai.
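As an example of the data quality checks mentioned above, a short PySpark sketch follows. The table name, key column, and thresholds are hypothetical placeholders; the idea is simply to gate a load before downstream consumers see it.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical example: lightweight data quality gates in a PySpark job.
spark = SparkSession.builder.appName("dq_checks").enableHiveSupport().getOrCreate()
df = spark.table("analytics.orders_clean")

row_count = df.count()
null_keys = df.filter(F.col("order_id").isNull()).count()
dup_keys = row_count - df.dropDuplicates(["order_id"]).count()

metrics = {"row_count": row_count, "null_order_id": null_keys, "duplicate_order_id": dup_keys}
print(metrics)  # in practice these metrics would be logged for lineage/audit dashboards

# Fail fast so a bad load never reaches downstream consumers.
if row_count == 0 or null_keys > 0 or dup_keys > 0:
    raise ValueError(f"Data quality check failed: {metrics}")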
As a Service Delivery Manager at our organization, you will play a crucial role in overseeing and managing service delivery operations in Bangalore/Pune, India. Your primary responsibility will be to ensure the adherence to SLAs and maintain strong client relationships, while driving continuous improvements in service quality and operational efficiency. You will be responsible for overseeing the end-to-end service delivery for application support, which includes enhancements and various levels of support such as L1, L2, and L3. It will be essential for you to strictly adhere to SLAs, particularly the P1-P4 ticketing structure, and fulfill service commitments. You will also lead and scale up L1, L2, and L3 support teams, initially starting with 5 to 6 members. Additionally, collaborating with the onshore Service Delivery Manager (SDM) will be crucial for seamless coordination. Your role will involve developing and implementing best practices for governance, incident management, knowledge management, and performance monitoring. You will utilize tools such as JIRA, Site24x7, and dashboards for effective monitoring and reporting. Furthermore, you will be expected to drive automation and innovation by recommending and implementing appropriate tools and processes to enhance service efficiency. To excel in this role, you should possess around 10 years of experience in IT Service Delivery Management, preferably in the banking domain. Hands-on experience with JIRA Service Management, AWS, and monitoring tools is essential. A strong understanding of microservices, DevOps, and cloud-native applications is required, along with experience in managing both desktop and mobile applications. An ITIL certification would be preferred. If you are a results-driven professional who is passionate about delivering high-quality service and operational excellence, we encourage you to apply and become a valuable part of our dynamic team. Please share your profile at nikhil.kumar@krtrimaiq.ai.
Azure Data Engineer Experience: 6-12 years Location: Bangalore/Pune Employment Type: Full-time Preferred candidates should currently be serving their notice period. Job Summary We are seeking an experienced Azure Data Engineer to design, develop, and maintain scalable data solutions on Microsoft Azure. The ideal candidate will have hands-on expertise with Databricks, Azure Data Factory, PySpark, and cloud platforms including AWS, Azure, and SAP. You will be responsible for managing ETL/ELT pipelines, data modeling, integration, and processing to enable robust data-driven solutions. Key Responsibilities Design, implement, and optimize ELT and ETL pipelines using Azure Data Factory, Databricks, and related tools. Develop and maintain scalable data ingestion, integration, and processing workflows using PySpark and SQL. Build and optimize data models ensuring data integrity, quality, and accessibility. Collaborate with data scientists, analysts, and engineering teams to deliver end-to-end data solutions. Manage data storage layers including Azure SQL DB, Synapse, and Data Lake architectures. Monitor, troubleshoot, and optimize data pipelines for performance and reliability. Implement CI/CD workflows using GitHub Actions and Azure DevOps. Use cloud services like Stream Analytics, Glue, Airflow, Kinesis, Redshift, SonarQube, and PyTest for analytics, orchestration, and quality checks. Ensure security compliance and best practices in data handling and storage. Document architecture, pipeline designs, and processes for knowledge sharing and auditing. Mandatory Skills Mastery of AWS, Azure, and SAP cloud platforms. Expert in ELT techniques and data pipeline design. Advanced skills in Data Modeling for large-scale analytics environments. Proficient in Data Integration & Ingestion methods. Required Skills Hands-on experience with Azure Data Factory, Databricks, SQL DB, Synapse Analytics, Stream Analytics. Experience with AWS Glue, Apache Airflow, AWS Kinesis, Amazon Redshift. Familiarity with quality and testing tools such as SonarQube and PyTest. Proficiency in GitHub, GitHub Actions, Azure DevOps for CI/CD processes. Strong coding skills in PySpark, SQL, and Python. Qualifications Bachelor’s or Master’s degree in Computer Science, Engineering, Information Systems, or related field. 6 to 12 years of relevant experience in cloud data engineering roles. Proven track record of delivering complex data pipelines and solutions on Azure and/or AWS platforms. Job Type: Full-time Pay: ₹1,000,000.08 - ₹2,500,000.00 per year Benefits: Health insurance Provident Fund Ability to commute/relocate: Basavanagudi, Bengaluru, Karnataka: Reliably commute or planning to relocate before starting work (Required) Application Question(s): May I know your current CTC? If you are currently serving your notice period, kindly let us know how many days remain; if you are not, please ignore this opening, as it is an urgent requirement. Experience: Azure: 5 years (Required) SQL: 5 years (Required) Azure Databricks: 5 years (Required) License/Certification: Azure Certification (Required) Work Location: In person
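For context, here is a minimal PySpark sketch of the kind of Databricks ingestion step such a pipeline might run. The storage account, container, and table names are hypothetical, and it assumes the cluster already has access to the ADLS Gen2 container and Delta Lake support, as is standard on Databricks.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical example: an ingestion step of the kind an Azure Data Factory
# pipeline would trigger on Databricks. Storage account, container, and table
# names are placeholders; ADLS access is assumed to be configured on the cluster.
spark = SparkSession.builder.appName("adls_ingest").getOrCreate()

# Ingest raw CSV files from an ADLS Gen2 container (path is a placeholder).
raw = (
    spark.read.option("header", True)
         .csv("abfss://raw@examplestorageacct.dfs.core.windows.net/sales/2024/")
)

# Transform: type the columns and remove duplicate business keys.
curated = (
    raw.withColumn("sale_date", F.to_date("sale_date", "yyyy-MM-dd"))
       .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
       .dropDuplicates(["sale_id"])
)

# On Databricks this would typically land in a Delta table for the curated layer.
(curated.write.format("delta")
        .mode("overwrite")
        .saveAsTable("curated.sales"))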