About the Role
We are seeking an Agentic AI Developer with 3-5 years of total software/AI experience and proven hands-on work in agentic AI. The ideal candidate has built LLM-powered agents using frameworks such as LangChain, AutoGen, CrewAI, or Semantic Kernel, and can design, deploy, and optimize autonomous AI systems for real-world business use cases.

Key Responsibilities
- Architect, build, and deploy LLM-driven agents that can plan, reason, and execute multi-step workflows.
- Work with agent orchestration frameworks (LangChain, AutoGen, CrewAI, Semantic Kernel, Haystack, etc.).
- Develop and maintain tools, APIs, and connectors that extend agent capabilities.
- Implement RAG pipelines with vector databases (Pinecone, Weaviate, FAISS, Chroma, etc.).
- Optimize prompts, workflows, and decision-making for accuracy, cost, and reliability.
- Collaborate with product and engineering teams to design use-case-specific agents (e.g., copilots, data analysts, support agents).
- Ensure monitoring, security, and ethical compliance of deployed agents.
- Stay ahead of emerging trends in multi-agent systems and autonomous AI research.

Required Skills
- 3-5 years of professional experience in AI/ML, software engineering, or backend development.
- Demonstrated hands-on experience building agentic AI solutions (not just chatbots).
- Proficiency in Python (TypeScript/JavaScript is a plus).
- Direct experience with LLM APIs (OpenAI, Anthropic, Hugging Face, Cohere, etc.).
- Strong knowledge of vector databases and embeddings.
- Experience integrating APIs, external tools, and enterprise data sources into agents.
- Solid understanding of prompt engineering and workflow optimization.
- Strong problem-solving, debugging, and system design skills.

Nice to Have
- Experience with multi-agent systems (agents collaborating on tasks).
- Prior contributions to open-source agentic AI projects.
- Cloud deployment knowledge (AWS/GCP/Azure) and MLOps practices.
- Background in reinforcement learning or agent evaluation.
- Familiarity with AI safety, monitoring, and guardrails.

What We Offer
- Work on cutting-edge AI agent projects with direct real-world impact.
- Collaborative environment with a strong emphasis on innovation and experimentation.
- Competitive salary and growth opportunities.
- Opportunity to specialize in one of the fastest-growing areas of AI.
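The plan-reason-execute loop this role centers on can be sketched without any of the listed frameworks. This is an illustrative toy, not LangChain/AutoGen code: the tool names and the keyword-based "planner" are invented for the example, and a real agent would let the LLM choose tools via function calling rather than string splitting.

```python
# Toy agentic loop: plan -> act -> observe, framework-free.
# TOOLS maps a tool name to a callable; in a real agent the LLM
# selects tools and arguments via function/tool calling.

def search_tool(query: str) -> str:
    # Stand-in for a retrieval or web-search connector.
    return f"results for '{query}'"

def calc_tool(expr: str) -> str:
    # Stand-in for a calculator tool; eval restricted to arithmetic.
    return str(eval(expr, {"__builtins__": {}}, {}))

TOOLS = {"search": search_tool, "calc": calc_tool}

def plan(task: str) -> list[tuple[str, str]]:
    # Trivial "planner": split the task into (tool, argument) steps.
    # A real planner would be an LLM call returning structured steps.
    steps = []
    for part in task.split(";"):
        tool, _, arg = part.strip().partition(" ")
        steps.append((tool, arg))
    return steps

def run_agent(task: str) -> list[str]:
    observations = []
    for tool, arg in plan(task):
        observations.append(TOOLS[tool](arg))  # act, record observation
    return observations

print(run_agent("search vector databases; calc 2+3"))
# -> ["results for 'vector databases'", '5']
```

The point of the sketch is the separation of planning, tool dispatch, and observation collection, which is the structure the orchestration frameworks above formalize.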
**We are currently hiring for a senior-level position and are looking for immediate joiners only. If you are interested, please send your updated resume to [HIDDEN TEXT] along with details of your CTC, ECTC, and notice period. Please also provide a brief summary of your experience in predictive analytics and machine learning, including the total number of years of hands-on experience in these areas.**

About the Role
We are looking for a visionary Senior Data Scientist who excels in predictive analytics and machine learning. You will lead the design, development, deployment, and optimization of data-driven products that drive high business impact, such as forecasting the price/value of stocks, commodities, or consumer goods (e.g., apparel, retail). This role is for someone who has successfully taken machine learning models from concept to production, iteratively improved them based on user feedback and real-world performance, and thrives on delivering measurable results.

Key Responsibilities
- End-to-End Product Ownership: Lead the development of predictive models from exploration and prototype to full-scale production deployment.
- Forecasting & Prediction: Build robust time-series and regression models to predict prices/values of financial assets, oil, apparel, and other commodities.
- Model Optimization: Continuously monitor and fine-tune models for accuracy, performance, and scalability using real-time data feedback.
- ML Ops & Deployment: Collaborate with engineering to ensure successful deployment and monitoring of models in production environments.
- Stakeholder Collaboration: Translate business problems into analytical frameworks, working closely with product, strategy, and business teams.
- Data Strategy: Define and manage pipelines and feature stores using structured and unstructured data sources.
- Mentorship: Guide and mentor junior data scientists and analysts in best practices and advanced modeling techniques.
Required Qualifications
- 8+ years of experience in data science, with a strong background in predictive analytics and machine learning.
- Proven experience building and scaling ML models in production environments (not just notebooks or PoCs).
- Deep expertise in Python, SQL, and ML libraries (e.g., scikit-learn, XGBoost, LightGBM, TensorFlow/PyTorch).
- Strong knowledge of time-series forecasting, regression techniques, and feature engineering.
- Experience in domains such as finance, commodities, retail pricing, or demand prediction is highly preferred.
- Experience working with cloud platforms (AWS, GCP, or Azure) and tools like Airflow, Docker, and MLflow.
- Ability to define success metrics, conduct A/B tests, and iterate based on measurable KPIs.
- Excellent communication and storytelling skills with both technical and non-technical stakeholders.

Preferred
- Experience with LLMs, deep learning, or hybrid modeling approaches.
- Familiarity with data privacy, compliance, and governance in production systems.
- Publications or thought leadership in applied machine learning or forecasting.

Why Join Us
- Work on high-impact projects with massive data volumes.
- Shape predictive products that directly influence strategic business outcomes.
- Join a collaborative, data-first culture with real ownership and innovation.
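As a minimal illustration of the time-series forecasting work this role describes, here is a one-lag autoregressive model fitted with closed-form ordinary least squares. This is a stdlib-only sketch with a made-up price series; a production model would use more lags, exogenous features, a library such as scikit-learn, and proper backtesting.

```python
# Fit y_t = a * y_{t-1} + b by ordinary least squares (closed form),
# then forecast one step ahead. Illustrative sketch only.

def fit_ar1(series):
    x = series[:-1]          # lagged values y_{t-1}
    y = series[1:]           # targets y_t
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    a = cov / var            # slope: covariance over variance
    b = my - a * mx          # intercept through the means
    return a, b

def forecast_next(series):
    a, b = fit_ar1(series)
    return a * series[-1] + b

prices = [100.0, 102.0, 104.0, 106.0, 108.0]   # hypothetical prices
print(round(forecast_next(prices), 2))          # linear trend -> 110.0
```

The same lag-feature idea scales up: stack many lags into a design matrix and the closed form becomes the normal equations that scikit-learn's `LinearRegression` solves.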
**Interested candidates: please send your resumes with your salary expectations to [HIDDEN TEXT]**

About the Role
You'll ship features across multiple stacks while using AI coding assistants (Cursor/Copilot/ChatGPT/Claude) to scaffold code, write tests, refactor, and debug, then verify and harden the output. Syntax can be generated; you focus on problem-solving, design, and correctness.

What You Will Do
- Build/extend APIs and small UIs in PHP/Python/TypeScript (e.g., Laravel/FastAPI/Express/Next.js).
- Work with SQL (MySQL/Postgres), write migrations, optimize queries, and add basic caching (Redis).
- Use AI tools daily for boilerplate/tests/docs; maintain a clean Git history and PRs.
- Add unit/integration tests, set up simple CI, and containerize with Docker.
- Follow basic security and privacy practices (secrets, PII, validation, rate limits).

Must-Have Skills
- Comfortable in at least one of: PHP, Python, or TypeScript/JavaScript.
- Understanding of HTTP/REST, JSON, SQL, Git, and Linux basics.
- Hands-on use of AI coding tools (share examples: prompts, before/after, lessons learned).
- Ability to read unfamiliar code and write small, safe changes with tests.
- Clear written communication; curiosity and a bias to ship.

Nice to Have
- React/Next.js or simple UI skills; Tailwind/shadcn.
- Basics of LLMs (prompting, embeddings/RAG); LangChain/LlamaIndex.
- Docker, GitHub Actions, AWS/GCP fundamentals; observability basics.
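One of the security practices this posting lists, rate limiting, can be sketched as a token bucket: each request spends a token, and tokens refill at a fixed rate. This is a stdlib-only illustration with arbitrary capacity and refill numbers; a real service would keep per-client buckets in Redis rather than in process memory, and take timestamps from the clock instead of as a parameter.

```python
# Token-bucket rate limiter sketch. The timestamp is passed in
# explicitly so behavior is deterministic and testable.

class TokenBucket:
    def __init__(self, capacity: int, refill_per_sec: float):
        self.capacity = capacity
        self.refill_per_sec = refill_per_sec
        self.tokens = float(capacity)
        self.last = 0.0  # timestamp of the last refill

    def allow(self, now: float) -> bool:
        # Refill based on elapsed time, clamped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0   # spend one token on this request
            return True
        return False             # over the limit: reject (HTTP 429)

bucket = TokenBucket(capacity=2, refill_per_sec=1.0)
print([bucket.allow(t) for t in (0.0, 0.1, 0.2, 1.2)])
# -> [True, True, False, True]: burst of 2 allowed, third rejected,
#    then a token has refilled one second later.
```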
About the Role
We are seeking an experienced AWS Data Engineer with 4+ years of experience to design, develop, and maintain scalable data pipelines and transformation frameworks using AWS-native tools and modern data engineering technologies. The ideal candidate will have a strong grasp of large, complex, multi-dimensional datasets, hands-on expertise in AWS data engineering services, and a deep understanding of data modeling and performance optimization. Exposure to Veeva API integration will be considered an added advantage.

Key Responsibilities
- Design, develop, and optimize data ingestion, transformation, and storage pipelines on AWS.
- Manage and process large-scale structured, semi-structured, and unstructured datasets efficiently.
- Build and maintain ETL/ELT workflows using AWS-native tools such as Glue, Lambda, EMR, and Step Functions.
- Design and implement scalable data architectures leveraging Python, PySpark, and Apache Spark.
- Develop and maintain data models aligned with business and analytical requirements.
- Collaborate closely with data scientists, analysts, and business stakeholders to ensure data availability, reliability, and quality.
- Handle on-premises and cloud data warehouse systems, ensuring optimal performance and smooth migration strategies.
- Maintain awareness of emerging trends and best practices in data engineering, analytics, and cloud computing.

Required Skills & Experience
- Proven hands-on experience with the AWS data engineering stack, including AWS Glue, S3, Redshift, EMR, Lambda, Step Functions, Kinesis, Athena, and IAM.
- Proficiency in Python, PySpark, and Apache Spark for data transformation and processing.
- Strong understanding of data modeling principles (conceptual, logical, and physical).
- Experience with modern data platforms such as Snowflake, Dataiku, or Alteryx (good to have, not mandatory).
- Familiarity with on-premises and cloud data warehouses and migration strategies.
- Solid knowledge of ETL design patterns, data governance, and data quality/security best practices.
- Understanding of DevOps for data engineering, including CI/CD pipelines and Infrastructure as Code (IaC) using Terraform or CloudFormation (good to have).
- Excellent problem-solving, analytical, and communication skills.

Qualifications
- Bachelor's or Master's degree in Computer Science, Information Technology, Data Engineering, or a related field.
- Minimum 4 years of experience in data engineering, with a focus on AWS tools and ecosystem.
- Continual-learning mindset with interest in emerging technologies and cloud data innovations.
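The data quality responsibilities above (nulls, schema conformance, duplicates) can be illustrated with a small record-level validator. This is a stdlib-only sketch with an invented three-field schema; on AWS, logic like this would typically run inside a Glue job or a Lambda step of the pipeline rather than as a standalone script.

```python
# Minimal record-level quality checks: required fields, type match,
# and duplicate detection on a key column. Schema is hypothetical.

SCHEMA = {"id": int, "name": str, "amount": float}

def validate(records, key="id"):
    errors, seen = [], set()
    for i, rec in enumerate(records):
        for field, ftype in SCHEMA.items():
            if rec.get(field) is None:
                errors.append(f"row {i}: null/missing '{field}'")
            elif not isinstance(rec[field], ftype):
                errors.append(f"row {i}: '{field}' expected {ftype.__name__}")
        if rec.get(key) in seen:
            errors.append(f"row {i}: duplicate {key}={rec[key]}")
        seen.add(rec.get(key))
    return errors

rows = [
    {"id": 1, "name": "a", "amount": 9.5},
    {"id": 1, "name": "b", "amount": "oops"},   # duplicate id, bad type
    {"id": 2, "name": None, "amount": 1.0},     # null name
]
for e in validate(rows):
    print(e)   # three findings: bad type, duplicate id, null name
```

In a production pipeline, bad rows would be routed to a quarantine location (e.g., an S3 error prefix) instead of just printed.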
About the Role
The ML Developer will design, build, and maintain machine learning models and data pipelines powering core business use cases. The role is hands-on with Python for model development, feature engineering, and pipeline automation, leveraging Azure ML and Azure DevOps. Success means robust, production-grade models with proven business impact, traceable lineage, and operational excellence at scale.

Key Responsibilities

Feature Engineering & Model Development
- Translate model prototypes from data scientists into Azure ML production pipelines, including data ingestion, training, inference, and retraining.
- Build and iterate on ML models (forecasting/classification/regression) using modern ML frameworks (scikit-learn, XGBoost, LightGBM, PyTorch/TensorFlow).
- Develop robust feature pipelines (deterministic code, modular definitions, reusability) using Pandas/PySpark, orchestrated as Azure ML pipeline jobs.
- Design experiments with proper sampling, train-test splits, cross-validation, and metric selection (e.g., RMSE, AUC, MAPE).
- Implement model selection, champion/challenger promotion, and versioning strategies.
- Document experiment results for reproducibility and regulatory compliance.

Model Operationalization & Monitoring
- Productionize models as batch or real-time endpoints via Azure ML.
- Implement model validation gates (drift/shift, prediction distribution checks, champion vs. challenger results).
- Set up model monitoring dashboards for latency, prediction freshness, data drift, and feature importance tracking.
- Integrate model deployment/test harnesses with Azure DevOps pipelines for CI/CD.
- Develop FastAPI services to invoke and consume ML models.

Data Engineering & Quality
- Profile, clean, and transform raw data from Snowflake, SQL, and third-party sources.
- Implement data quality checks (nulls, schema validation, outlier handling, time alignment, duplicate detection).
- Automate feature extraction and maintain feature store consistency.
Collaboration & Quality Ops
- Work with product, data, and QA teams to agree on model acceptance criteria and experiment reviews.
- Contribute to defect taxonomy (data/model/serving), pipeline observability, and SLO dashboards.
- Publish model performance reports and SLI/SLO summaries for stakeholders.

Required Qualifications
- 5+ years developing data-focused solutions (3+ years in ML modeling and operations).
- Advanced proficiency in Python (pandas, NumPy, ML frameworks), SQL, and cloud data tools.
- Solid experience building production ML pipelines (Azure ML, Databricks, or equivalent).
- Understanding of model validation, drift detection, and online monitoring.
- Experience with feature stores, CI/CD (Azure DevOps), and API development (FastAPI/Flask).
- Bachelor's/Master's degree in Computer Science, Statistics, Information Technology, or a related field.
- Azure Data Engineer or Azure ML Engineer Associate certification is a plus.
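The drift detection named in this posting's validation gates is commonly implemented as a Population Stability Index (PSI) between a baseline feature distribution and a live one. Here is a stdlib sketch; the bin edges and the widely used 0.2 alert threshold are illustrative conventions, not something the role specifies.

```python
import math

# Population Stability Index over fixed bins:
#   PSI = sum_bins (actual% - expected%) * ln(actual% / expected%)
# PSI near 0 means stable; above ~0.2 is a common drift alert.

def psi(expected, actual, edges):
    def shares(values):
        counts = [0] * (len(edges) + 1)
        for v in values:
            i = sum(v > e for e in edges)   # bin index from sorted edges
            counts[i] += 1
        # Smooth empty bins to avoid log(0).
        return [max(c / len(values), 1e-6) for c in counts]
    e, a = shares(expected), shares(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6]   # training-time feature values
shifted = [1.1, 1.2, 1.3, 1.4, 1.5, 1.6]    # live values, clearly drifted
edges = [0.25, 0.5, 0.75, 1.0]

print(psi(baseline, list(baseline), edges) < 0.2)  # identical -> True
print(psi(baseline, shifted, edges) > 0.2)         # drifted  -> True
```

A validation gate would compute this per feature on each scoring batch and block promotion (or trigger retraining) when the threshold is crossed.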
**We are considering immediate joiners only. Interested candidates may send their resumes to [HIDDEN TEXT], including details of their CTC, ECTC, notice period, and a short overview of their relevant ML and data engineering work.**

About the Role
We are seeking an experienced Data Engineer (5+ years) with strong hands-on experience in ML/AI data workflows. You will play a critical role in building feature pipelines, model-serving data flows, and end-to-end orchestration that powers our production ML systems. This is a fully remote, ownership-driven position.

Key Responsibilities

ML / AI Data Engineering
- Build and maintain feature engineering pipelines for ML model training and inference.
- Develop and optimize model-serving data pipelines, ensuring low-latency and reliable delivery.
- Design and orchestrate end-to-end ML workflows (Airflow, Prefect, Dagster, Kubeflow, etc.).
- Work closely with data scientists and ML engineers to productionize ML models.
- Implement automated dataset versioning, feature stores, and reproducibility frameworks.
- Build the scalable data foundations required for MLOps: monitoring, retraining triggers, and model data validation.

Data Pipelines & ETL
- Design and build high-performance ETL/ELT pipelines for structured and unstructured data.
- Manage ingestion from APIs, databases, files, event streams, and cloud storage.
- Ensure pipelines are fault-tolerant, well-monitored, and automated.

Data Modelling & Data Warehousing
- Build and maintain data models, marts, and warehouse layers to support analytics and ML pipelines.
- Translate ML feature requirements into clean, optimized data structures.

Data Quality & Governance
- Implement schema validation, data quality checks, and automated monitoring.
- Maintain metadata, lineage, and documentation for all data flows.

Cloud & Infrastructure
- Develop cloud-native data workflows (AWS/Azure/GCP).
- Work with data storage and compute systems such as S3, BigQuery, Snowflake, Databricks, Redshift, etc.
- Ensure performance optimization, scaling, and cost-efficiency.

DevOps, CI/CD & Automation
- Build CI/CD pipelines for data and ML workflows.
- Containerize pipelines using Docker and manage deployments via Git-based workflows.
- Automate scheduling, builds, and monitoring for data and ML systems.

Required Skills & Experience
- 5+ years of experience as a Data Engineer.
- Major requirement (non-negotiable): strong experience on ML/AI projects, including ML feature pipelines, model-serving data workflows, and ML orchestration (Airflow, Prefect, Dagster, Kubeflow, etc.).
- Strong in Python, SQL, and ETL frameworks.
- Experience with big data technologies (Spark, PySpark, Databricks).
- Hands-on with cloud platforms (AWS/Azure/GCP).
- Experience with CI/CD, Docker, Git, and APIs.
- Ability to work independently and in cross-functional remote teams.
- Excellent communication and documentation skills.

Nice-to-Have Skills
- Tools: MLflow, Vertex AI, SageMaker, Azure ML.
- Streaming: Kafka, Kinesis, Pub/Sub.
- Data quality frameworks: Great Expectations, Soda, Pandera.

Why Join Us
- Fully remote with a flexible schedule.
- Work on real-world ML/AI production systems.
- High ownership and direct architectural influence.
- Opportunity to collaborate with advanced Data Science & ML teams.
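The automated dataset versioning this posting asks for can be sketched as a deterministic content hash: identical records always yield the same version ID, making pipeline runs reproducible and cache-friendly. Stdlib-only illustration; real setups usually lean on tools like DVC, Delta Lake, or MLflow artifacts rather than hand-rolled hashing.

```python
import hashlib
import json

# Deterministic dataset fingerprint: serialize each record with sorted
# keys, sort the rows, and hash the canonical string. Record order and
# dict key order therefore do not change the version ID.

def dataset_version(records) -> str:
    canonical = json.dumps(sorted(
        json.dumps(r, sort_keys=True) for r in records
    ))
    return hashlib.sha256(canonical.encode()).hexdigest()[:12]

a = [{"id": 1, "x": 2.0}, {"id": 2, "x": 3.0}]
b = [{"x": 3.0, "id": 2}, {"id": 1, "x": 2.0}]  # same data, reordered
c = [{"id": 1, "x": 2.5}, {"id": 2, "x": 3.0}]  # one value changed

print(dataset_version(a) == dataset_version(b))  # True: order-insensitive
print(dataset_version(a) == dataset_version(c))  # False: content changed
```

A retraining trigger can then be as simple as "retrain when the version ID of the training set changes."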