About the Role
- Own and execute end-to-end development of complex backend systems or AI-powered tools
- Write clean, well-documented, and efficient Python code with minimal supervision
- Rapidly prototype and scale agentic workflows, automation scripts, or backend pipelines
- Collaborate with product or technical leads to align architecture with business needs
- Proactively identify, debug, and optimize performance bottlenecks
- Contribute to technical design and architectural decisions

Required Qualifications
- Expert-level Python skills with a strong grasp of OOP, async patterns, and performance optimization
- Demonstrated ability to work independently and take full ownership of projects
- Strong portfolio (public GitHub or equivalent projects)
- Familiarity with at least one other programming language (e.g., JavaScript, Rust, Go, C++)
- Solid understanding of software engineering principles, testing, and clean code practices

Preferred Skills
- Experience with agentic AI tools (LangChain, CrewAI, AutoGen, LangGraph, etc.)
- Familiarity with deep learning frameworks (PyTorch, TensorFlow) or model serving platforms
- Working knowledge of Docker, FastAPI, or microservices-based architecture
- Exposure to cloud deployment (AWS/GCP), message queues (RabbitMQ/Kafka), or vector databases
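The async patterns called out in the qualifications can be illustrated with a minimal stdlib-only sketch (the `fetch_record` coroutine and its payload are hypothetical, not from the posting): coroutines are launched concurrently with `asyncio.gather` instead of being awaited one at a time.

```python
import asyncio

async def fetch_record(record_id: int) -> dict:
    # Simulate a non-blocking I/O call (e.g., a database or API request).
    await asyncio.sleep(0.01)
    return {"id": record_id, "status": "ok"}

async def fetch_all(ids: list[int]) -> list[dict]:
    # Schedule all coroutines concurrently; total wall time is roughly
    # one sleep, not len(ids) sleeps.
    return await asyncio.gather(*(fetch_record(i) for i in ids))

if __name__ == "__main__":
    results = asyncio.run(fetch_all([1, 2, 3]))
    print(results)
```

The same pattern scales to real backends by swapping the sleep for an async HTTP or database client call.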
Job Title: Data Scientist - NLP & Generative AI
Location: Chennai (Hybrid)
Experience: 3–6 years in NLP, LLMs, and applied GenAI

Job Summary
We are looking for a hands-on Data Scientist with deep expertise in NLP and Generative AI to help build and refine the intelligence behind our agentic AI systems. You will be responsible for fine-tuning, prompt engineering, and evaluating the LLMs that power our digital workers across real-world workflows.

Key Responsibilities
- Fine-tune and evaluate LLMs (e.g., Mistral, LLaMA, Qwen) using frameworks like Unsloth, HuggingFace, and DeepSpeed
- Develop high-quality prompts and RAG pipelines for few-shot and zero-shot performance
- Analyze and curate domain-specific text datasets for training and evaluation
- Conduct performance and safety evaluations of fine-tuned models
- Collaborate with engineering teams to integrate models into agentic workflows
- Stay up to date with the latest open-source LLMs and GenAI tools, and rapidly prototype experiments
- Apply efficient training and inference techniques (LoRA, QLoRA, quantization, etc.)
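As a sketch of the retrieval step in a RAG pipeline, here is a pure-Python stand-in for a vector store such as FAISS or Chroma (the toy bag-of-words "embedding", corpus, and query are all illustrative assumptions): documents are ranked by cosine similarity to the query and the top k are kept as prompt context.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words term-frequency vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query and keep the top k,
    # which would then be injected into the LLM prompt as context.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "invoices are processed nightly by the billing worker",
    "the onboarding flow sends a welcome email",
    "billing disputes are escalated to a human agent",
]
print(retrieve("how are billing invoices handled?", docs, k=2))
```

In a production pipeline the `embed` function would be a dense embedding model and the sort would be an approximate nearest-neighbor lookup, but the retrieve-then-prompt shape is the same.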
Required Skills
- 3+ years of experience in Natural Language Processing (NLP) and machine learning applied to text
- Strong coding skills in Python
- Hands-on experience fine-tuning LLMs (e.g., LLaMA, Mistral, Falcon, Qwen) using frameworks like Unsloth, HuggingFace Transformers, PEFT, LoRA, QLoRA, and bitsandbytes
- Proficient in PyTorch (preferred) or TensorFlow, with experience writing custom training/evaluation loops
- Experience in dataset preparation, tokenization (e.g., Tokenizer, tokenizers), and formatting for instruction tuning (ChatML, Alpaca, ShareGPT formats)
- Familiarity with retrieval-augmented generation (RAG) using FAISS, Chroma, Weaviate, or Qdrant
- Strong knowledge of prompt engineering, few-shot/zero-shot learning, chain-of-thought prompting, and function-calling patterns
- Exposure to agentic AI frameworks like CrewAI, Phidata, LangChain, LlamaIndex, or AutoGen
- Experience with GPU-accelerated training/inference and libraries like DeepSpeed, Accelerate, Flash Attention, etc.
- Solid understanding of LLM evaluation metrics (BLEU, ROUGE, perplexity, pass@k) and safety-related metrics (toxicity, bias)
- Ability to work with open-source checkpoints and formats (e.g., safetensors, GGUF, HF Hub, GPTQ, ExLlama)
- Comfortable with containerized environments (Docker) and scripting for training pipelines, data curation, and evaluation workflows

Nice to Have
- Experience with Linux (Ubuntu)
- Terminal/Bash scripting
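One of the metrics listed above, pass@k, has a standard unbiased estimator used in code-generation evaluation: given n sampled completions of which c pass the tests, the probability that at least one of k drawn samples passes is 1 - C(n-c, k) / C(n, k). A minimal Python version:

```python
import math

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate: n samples generated, c of them correct."""
    if n - c < k:
        # Too few failures to fill a size-k draw, so every draw
        # must contain at least one correct sample.
        return 1.0
    return 1.0 - math.comb(n - c, k) / math.comb(n, k)

print(pass_at_k(10, 3, 1))
```

Computing the complement (probability that all k draws fail) avoids the numerical issues of summing over the success cases directly.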
Overview:
We are seeking an experienced Azure Data Engineer to join our team in a hybrid developer role. This role focuses on enhancing and supporting existing Data & Analytics solutions by leveraging Azure Data Engineering technologies. The engineer will work on developing, maintaining, and deploying IT products and solutions that serve various business users, with a strong emphasis on performance, scalability, and reliability.

Must-Have Skills:
- ADF
- Databricks

Key Responsibilities:
- Incident classification and prioritization
- Log analysis and trend identification
- Coordination with Subject Matter Experts (SMEs)
- Escalation of unresolved or complex issues
- Root cause analysis and permanent resolution implementation
- Stakeholder communication and status updates
- Resolution of complex and major incidents
- Code reviews (two per individual per week) to ensure adherence to standards and optimize performance
- Bug fixing of recurring or critical issues identified during operations
- Gold layer tasks, including enhancements and performance tuning
- Design, develop, and support data pipelines and solutions using Azure data engineering services
- Implement data flows and ETL techniques leveraging Azure Data Factory, Databricks, and Synapse
- Cleanse, transform, and enrich datasets using Databricks notebooks and PySpark
- Orchestrate and automate workflows across services and systems
- Collaborate with business and technical teams to deliver robust and scalable data solutions
- Work in a support role to resolve incidents, handle change/service requests, and monitor performance
- Contribute to CI/CD pipeline implementation using Azure DevOps

Technical Requirements:
- 6 to 8 years of experience in IT and Azure data engineering technologies
- Strong experience in Azure Databricks, Azure Synapse, and ADLS Gen2
- Proficient in Python, PySpark, and SQL
- Experience with file formats such as JSON and Parquet
- Working knowledge of database systems, with a preference for Teradata and Snowflake
- Hands-on experience with Azure DevOps and CI/CD pipeline deployments
- Understanding of data warehousing concepts and data modeling best practices
- Familiarity with SNOW (ServiceNow) for incident and change management

Non-Technical Requirements:
- Ability to work independently and collaboratively in virtual teams across geographies
- Strong analytical and problem-solving skills
- Experience in Agile development practices, including estimation, testing, and deployment
- Effective task and time management with the ability to prioritize under pressure
- Clear communication and documentation skills for project updates and technical processes

Technologies:
- Azure Data Factory
- Azure Databricks
- Azure Synapse Analytics
- Unity Catalog
- PySpark / SQL
- Azure Data Lake Storage (ADLS), Blob Storage
- Azure DevOps (CI/CD pipelines)

Nice-to-Have:
- Experience with Business Intelligence tools, preferably Power BI
- DP-203 certification (Azure Data Engineer Associate)
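The cleanse/transform/enrich responsibility described above can be sketched in plain Python as a stand-in for the equivalent PySpark DataFrame operations (the record shape, column names, and tax rule here are illustrative assumptions, not from the posting): drop malformed rows, then add a derived column, as a gold-layer notebook might.

```python
def cleanse(records: list[dict]) -> list[dict]:
    # Drop rows missing the business key, mirroring a dropna on a key column.
    return [r for r in records if r.get("order_id") is not None]

def enrich(records: list[dict], tax_rate: float = 0.18) -> list[dict]:
    # Add a derived column, as withColumn would in a PySpark pipeline.
    return [
        {**r, "gross_amount": round(r["net_amount"] * (1 + tax_rate), 2)}
        for r in records
    ]

raw = [
    {"order_id": "A1", "net_amount": 100.0},
    {"order_id": None, "net_amount": 50.0},   # malformed row, to be dropped
    {"order_id": "A2", "net_amount": 20.5},
]
gold = enrich(cleanse(raw))
print(gold)
```

In Databricks the same two steps would typically be `df.dropna(subset=["order_id"])` followed by a `withColumn` expression, written against Spark DataFrames instead of lists of dicts.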