10.0 years
0 Lacs
India
On-site
✅ Job Title: API & Services Engineer
📍 Locations: Chennai, Hyderabad, Bengaluru, Gurugram, Jaipur, Bhopal, Pune (Hybrid – 3 days/week in office)
🕒 Experience: 5–10 Years
🧑💻 Type: Full-time
📩 Apply: Share your resume with the details listed below to vijay.s@xebia.com
🕐 Availability: Immediate joiners or max 2 weeks' notice period only

🚀 About the Role
Xebia is hiring a passionate API & Services Engineer who thrives on building scalable backend systems, loves clean code, and enjoys solving complex challenges in cloud environments. You'll work closely with cross-functional teams, using modern tools and frameworks to design, develop, and deploy high-performance APIs.

🔧 Key Responsibilities
Develop and maintain RESTful APIs using FastAPI or Flask
Design scalable schemas and manage databases (MySQL/PostgreSQL)
Deploy and manage cloud services on Azure or GCP
Automate deployments using CI/CD pipelines (GitHub Actions, GitLab CI, etc.)
Write unit and integration tests (using Pytest or equivalent)
Containerize apps with Docker and orchestrate via Kubernetes and Helm
Collaborate with React.js frontend teams for seamless API integrations

✅ Must-Have Skills
Python (FastAPI or Flask)
SQL databases (MySQL/PostgreSQL)
Cloud platforms (Azure or GCP)
CI/CD tools (GitHub Actions, GitLab CI, etc.)
Unit testing (Pytest or equivalent)
Docker, Kubernetes, Helm

🌟 Good-to-Have Skills
Frontend knowledge (React.js)
API security best practices
Microservices performance tuning & observability

💼 Why Xebia?
Join a team of top engineers solving real-world problems with scalable architecture, clean code, and the latest tech. Work on global client projects, collaborate with domain experts, and grow in an innovation-driven environment.

📤 To Apply
Please share your updated resume and fill in the following details in your email to vijay.s@xebia.com:
Full Name:
Total Experience:
Current CTC:
Expected CTC:
Current Location:
Preferred Xebia Location (Chennai / Hyderabad / Bengaluru / Gurugram / Jaipur / Bhopal / Pune):
Notice Period / Last Working Day (if serving):
Primary Skills:
LinkedIn Profile URL:

Note: Only candidates who can join within 2 weeks or immediately will be considered.
Let's build something amazing together at Xebia! 🚀
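For candidates preparing, a minimal sketch of the kind of FastAPI service the responsibilities above describe; the service name, routes, and in-memory store are illustrative placeholders, not Xebia's actual stack:

# Minimal FastAPI sketch (illustrative only): a versioned REST endpoint with a
# Pydantic model and a health check, the pattern this role's API work implies.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="orders-service")  # hypothetical service name

class Order(BaseModel):
    id: int
    item: str
    quantity: int

_ORDERS: dict[int, Order] = {}  # stand-in for a MySQL/PostgreSQL layer

@app.get("/health")
def health() -> dict:
    return {"status": "ok"}

@app.post("/v1/orders", status_code=201)
def create_order(order: Order) -> Order:
    _ORDERS[order.id] = order
    return order

@app.get("/v1/orders/{order_id}")
def get_order(order_id: int) -> Order:
    if order_id not in _ORDERS:
        raise HTTPException(status_code=404, detail="order not found")
    return _ORDERS[order_id]

Run locally with "uvicorn main:app --reload" and exercise it with Pytest's TestClient; in a real project the in-memory dict would be replaced by a database layer and the app shipped as a Docker image.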
Posted 5 days ago
0 years
0 Lacs
India
Remote
Job Title: Machine Learning Developer
Company: Lead India
Location: Remote
Job Type: Full-Time
Salary: ₹3.5 LPA

About Lead India:
Lead India is a forward-thinking organization focused on creating social impact through technology, innovation, and data-driven solutions. We believe in empowering individuals and building platforms that make governance more participatory and transparent.

Job Summary:
We are looking for a Machine Learning Developer to join our remote team. You will be responsible for building and deploying predictive models, working with large datasets, and delivering intelligent solutions that enhance our platform's capabilities and user experience.

Key Responsibilities:
Design and implement machine learning models for classification, regression, and clustering tasks
Collect, clean, and preprocess data from various sources
Evaluate model performance using appropriate metrics
Deploy machine learning models into production environments
Collaborate with data engineers, analysts, and software developers
Continuously research and implement state-of-the-art ML techniques
Maintain documentation for models, experiments, and code

Required Skills and Qualifications:
Bachelor's degree in Computer Science, Data Science, or a related field (or equivalent practical experience)
Solid understanding of machine learning algorithms and statistical techniques
Hands-on experience with Python libraries such as scikit-learn, pandas, NumPy, and matplotlib
Familiarity with Jupyter notebooks and experimentation workflows
Experience working with datasets using tools like SQL or Excel
Strong problem-solving skills and attention to detail
Ability to work independently in a remote environment

Nice to Have:
Experience with deep learning frameworks like TensorFlow or PyTorch
Exposure to cloud-based ML platforms (e.g., AWS SageMaker, Google Vertex AI)
Understanding of model deployment using Flask, FastAPI, or Docker
Knowledge of natural language processing or computer vision

What We Offer:
Fixed annual salary of ₹3.5 LPA
100% remote work and flexible hours
Opportunity to work on impactful, mission-driven projects using real-world data
Supportive and collaborative environment for continuous learning and innovation
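As a rough illustration of the classification-and-evaluation loop the responsibilities above describe, a minimal scikit-learn sketch; the synthetic dataset and logistic-regression model are placeholders for a real pipeline:

# Minimal train/evaluate sketch with scikit-learn; the synthetic dataset and
# logistic-regression model stand in for the actual data and model choice.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Evaluate with metrics appropriate to the task (precision/recall/F1 here).
print(classification_report(y_test, model.predict(X_test)))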
Posted 5 days ago
0.0 - 3.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Job Description – AI Developer (Agentic AI Frameworks, Computer Vision & LLMs)
Location: Hybrid - Bangalore

About the Role
We're seeking an AI Developer who specializes in agentic AI frameworks (LangChain, LangGraph, CrewAI, or equivalents) and who can take both vision and language models from prototype to production. You will lead the design of multi-agent systems that coordinate perception (image classification & extraction), reasoning, and action, while owning the end-to-end deep-learning life-cycle (training, scaling, deployment, and monitoring).

Key Responsibilities (Scope / What You'll Do)
Agentic AI Frameworks (Primary Focus): Architect and implement multi-agent workflows using LangChain, LangGraph, CrewAI, or similar. Design role hierarchies, state graphs, and tool integrations that enable autonomous data processing, decision-making, and orchestration. Benchmark and optimize agent performance (cost, latency, reliability).
Image Classification & Extraction: Build and fine-tune CNN/ViT models for classification, detection, OCR, and structured data extraction. Create scalable data-ingestion, labeling, and augmentation pipelines.
LLM Fine-Tuning & Retrieval-Augmented Generation (RAG): Fine-tune open-weight LLMs with LoRA/QLoRA, PEFT; perform SFT, DPO, or RLHF as needed. Implement RAG pipelines using vector databases (FAISS, Weaviate, pgvector) and domain-specific adapters.
Deep Learning at Scale: Develop reproducible training workflows in PyTorch/TensorFlow with experiment tracking (MLflow, W&B). Serve models via TorchServe/Triton/KServe on Kubernetes, SageMaker, or GCP Vertex AI.
MLOps & Production Excellence: Build robust APIs/micro-services (FastAPI, gRPC). Establish CI/CD, monitoring (Prometheus, Grafana), and automated retraining triggers. Optimize inference on CPU/GPU/Edge with ONNX/TensorRT, quantization, and pruning.
Collaboration & Mentorship: Translate product requirements into scalable AI services. Mentor junior engineers, conduct code and experiment reviews, and evangelize best practices.

Minimum Qualifications
B.S./M.S. in Computer Science, Electrical Engineering, Applied Math, or related discipline.
5+ years building production ML/DL systems with strong Python & Git.
Demonstrable expertise in at least one agentic AI framework (LangChain, LangGraph, CrewAI, or comparable).
Proven delivery of computer-vision models for image classification/extraction.
Hands-on experience fine-tuning LLMs and deploying RAG solutions.
Solid understanding of containerization (Docker) and cloud AI stacks (AWS/Azure).
Knowledge of distributed training, GPU acceleration, and performance optimization.

Job Type: Full-time
Pay: Up to ₹1,200,000.00 per year
Experience:
AI, LLM, RAG: 4 years (Preferred)
Vector database, image classification: 4 years (Preferred)
Containerization (Docker): 3 years (Preferred)
ML/DL systems with strong Python & Git: 3 years (Preferred)
LangChain, LangGraph, CrewAI: 3 years (Preferred)
Location: Bangalore, Karnataka (Preferred)
Work Location: In person
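A bare-bones sketch of the retrieval half of the RAG pipelines this role mentions, assuming sentence-transformers for embeddings and FAISS for the vector index; the model name and documents are placeholders:

# Retrieval sketch for a RAG pipeline: embed documents, index them in FAISS,
# and fetch the nearest chunks for a query. Model name and corpus are placeholders.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "Invoices are processed within 3 business days.",
    "OCR extraction runs nightly on uploaded scans.",
    "Refund requests require a signed approval form.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
doc_vectors = encoder.encode(docs, normalize_embeddings=True)

index = faiss.IndexFlatIP(doc_vectors.shape[1])  # inner product equals cosine on normalized vectors
index.add(np.asarray(doc_vectors, dtype="float32"))

query_vec = encoder.encode(["How long does invoice processing take?"], normalize_embeddings=True)
scores, ids = index.search(np.asarray(query_vec, dtype="float32"), k=2)
retrieved = [docs[i] for i in ids[0]]
print(retrieved)  # these chunks would be stuffed into the LLM prompt downstream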
Posted 5 days ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description
We are seeking an experienced Python Developer who will oversee a team of skilled engineers while being actively involved in the development of cutting-edge AI solutions. The role requires a blend of technical expertise, leadership ability, and hands-on development skills. The Tech Lead will guide the team through complex projects using Agile, ensuring that all solutions are scalable, maintainable, and of the highest quality. Through mentorship, effective communication, and a passion for innovation, the Tech Lead will drive the team to achieve its full potential and deliver outstanding results.

Essential Skills:
Proficiency in Python: Strong programming skills in Python.
Asynchronous Programming: Experience with asyncio and RabbitMQ.
API Development: Expertise in building asynchronous APIs with FastAPI.
Database Skills: Proficiency in PostgreSQL or MongoDB.
Container Technologies: Experience with Docker/Kubernetes.
Version Control: Solid understanding of Git.

Preferred Skills:
AI Frameworks: Knowledge of PyTorch or TensorFlow, vector databases, RAG.
Data Manipulation: Experience with Pandas for data processing and Jinja for templating.
AI and Data Analytics: Understanding of AI/ML and data analytics technologies such as Pandas.

Team Management Responsibilities:
Team Management: Lead a team of engineers, providing mentorship and guidance.
Design and Architecture: Oversee design and architecture decisions.
Code Reviews: Conduct code and design reviews.
Individual Contribution: Balance leadership with hands-on development.

Soft Skills:
Problem-Solving: Efficiently resolve technical issues.
Communication: Strong verbal and written communication skills.
Continuous Learning: Passion for learning new technologies and frameworks.

Required Skills: Python, GenAI
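A small sketch of the asyncio-plus-RabbitMQ pattern listed under essential skills, assuming the aio-pika client library; the broker URL and queue name are illustrative only:

# Async RabbitMQ consumer sketch using aio-pika; connection URL and queue name
# are assumptions for illustration, not part of the stack described above.
import asyncio
import aio_pika

async def consume() -> None:
    connection = await aio_pika.connect_robust("amqp://guest:guest@localhost/")
    async with connection:
        channel = await connection.channel()
        queue = await channel.declare_queue("ai_jobs", durable=True)
        async with queue.iterator() as messages:
            async for message in messages:
                async with message.process():  # acks on success, requeues on error
                    print("received:", message.body.decode())

if __name__ == "__main__":
    asyncio.run(consume())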
Posted 5 days ago
6.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Role - Java Developer
Location - Pune (Hybrid)
Type - Permanent

Job Description:

Responsibilities
• Gathering functional requirements, developing technical specifications, and project & test planning
• Write high-quality, testable, maintainable and efficient code in backend languages (e.g., Python, Java, Kotlin) and associated frameworks (e.g., Django, FastAPI, Spring Boot)
• Collaborate with the Product team to align on feature development and overall service functional goals
• Participate in on-call rotations and contribute to postmortems and system hardening
• Experience with Agile Development, SCRUM, or Extreme Programming methodologies

Qualifications
• 6+ years of experience building backend systems, preferably in fintech or financial platforms
• Strong programming skills in languages such as Java, Kotlin, or Python
• Solid grasp of software engineering fundamentals and their practical application
• Experience with designing or maintaining systems with transactional guarantees (ACID)
• Strong experience in leading design and implementation of robust and highly scalable backend services
Posted 5 days ago
8.0 - 12.0 years
15 - 30 Lacs
Bengaluru
Hybrid
Hi all, we have openings for a Senior Database Specialist for the Bangalore location.

Role & Responsibilities
Experience: 8–12 years
Location: Bangalore
Mode: Hybrid
Notice period: 30 days

Mandatory Skills: Database Specialist, Python, FastAPI, end-to-end database design and implementation; designing, implementing, and supporting medium to large scale database systems; AWS database services such as S3, Aurora, and Managed RDS solutions.

Important Note: We are not looking for Database Administrators (DBAs) or generic Database Developers. The key requirement is end-to-end (E2E) design and implementation.

Mandatory Skills:
Strong experience in end-to-end database design and implementation for highly scalable enterprise solutions
Proven track record in designing, implementing, and supporting medium to large scale database systems
Hands-on experience with AWS database services such as S3, Aurora, and other Managed RDS solutions
Proficiency in Python and FastAPI is essential

If interested, drop your CV to sreedharr@xevyte.com.

Thanks & Regards,
Sridhar
Posted 5 days ago
4.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Position Overview
We are seeking an experienced Python Developer with 4+ years of professional experience to join our dynamic development team. The ideal candidate will have strong expertise in backend development, API design, and database management, with excellent communication skills and a collaborative mindset.

Key Responsibilities

Backend Development
Design, develop, and maintain robust backend systems using Python
Write clean, efficient, and maintainable code following best practices
Optimize application performance and ensure scalability
Implement security best practices and maintain code quality standards

API Development
Design and develop RESTful APIs and web services
Create comprehensive API documentation
Ensure API security, versioning, and proper error handling
Integrate third-party APIs and services

Database Management and Design
Design and implement efficient database schemas
Optimize database queries and performance
Manage data migration and database versioning
Ensure data integrity and implement backup strategies

Communication and Collaboration
Collaborate effectively with cross-functional teams, including frontend developers, designers, and product managers
Participate in code reviews and provide constructive feedback
Communicate technical concepts clearly to both technical and non-technical stakeholders
Document technical specifications and system architecture

Team Coordination
Mentor junior developers and provide technical guidance
Participate in agile development processes and sprint planning
Coordinate with team members on project deliverables and timelines
Contribute to technical decision-making and architecture discussions

Required Skills and Experience

Technical Skills
4+ years of professional Python development experience
Django: Extensive experience with the Django framework for web development
FastAPI: Proficiency in building modern, fast APIs with FastAPI
PostgreSQL: Strong knowledge of PostgreSQL database management
Research & Development: Ability to explore new technologies and implement innovative solutions
Frontend Technologies: Working knowledge of JavaScript, HTML, and CSS

Soft Skills
Excellent verbal and written communication skills
Strong problem-solving and analytical abilities
Ability to work independently and as part of a team
Attention to detail and commitment to quality
Adaptability and willingness to learn new technologies

Good to Have

Cloud Services
Experience with AWS (Amazon Web Services) or GCP (Google Cloud Platform)
Knowledge of cloud deployment, scaling, and monitoring
Understanding of serverless architecture and microservices

Additional Technical Skills
NoSQL Databases: Experience with MongoDB, Redis, or other NoSQL solutions
Docker: Containerization and orchestration experience
Experience with CI/CD pipelines
Knowledge of testing frameworks and test-driven development
Understanding of DevOps practices
Posted 5 days ago
8.0 - 15.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
What Data Science contributes to Cardinal Health
The Data & Analytics function oversees the analytics life-cycle in order to identify, analyze, and present relevant insights that drive business decisions and anticipate opportunities to achieve a competitive advantage. This function manages analytic data platforms; the access, design, and implementation of reporting/business intelligence solutions; and the application of advanced quantitative modeling. Data Science applies scientific methodologies, techniques, and tools from various disciplines to extract knowledge and insight from data and solve complex business problems on large data sets, integrating multiple systems.

Qualifications
8-15 years of experience, preferred
Bachelor's degree in related field, or equivalent work experience, preferred
Good understanding of descriptive statistics, time series analysis, regression, and classification algorithms
Experience in running and monitoring CI/CD pipelines
Lead/ideate GenAI and classic ML initiatives in the AI CoE
Design ML/LLM pipelines; good knowledge of RAG and prompt engineering
Monitor system performance, logs, retrieval quality, and prompt effectiveness
Design, develop, and maintain robust and scalable cloud functions using Python, FastAPI or Flask, and SQLModel/SQLAlchemy
Integrate microservices with Apigee X for API management, security (authentication, authorization, rate limiting), monitoring, and analytics

What is expected of you and others at this level
Applies advanced knowledge and understanding of concepts, principles, and technical capabilities to manage a wide variety of projects
Participates in the development of policies and procedures to achieve specific goals
Recommends new practices, processes, metrics, or models
Works on or may lead complex projects of large scope
Projects may have significant and long-term impact
Provides solutions which may set precedent
Independently determines method for completion of new projects
Receives guidance on overall project objectives
Acts as a mentor to less experienced colleagues
Posted 5 days ago
3.0 years
0 Lacs
Mumbai, Maharashtra, India
Remote
Company: Phenomenal AI Pvt. Ltd.
Location: Remote / Mumbai-based (Hybrid options available)
Experience Required: 2–3 years
Type: Full-time

About Us:
Phenomenal AI is a pioneering text-to-video platform building state-of-the-art generative AI systems to convert natural language into dynamic, engaging video content. Our mission is to simplify storytelling at scale, enabling creators, educators, and enterprises to generate high-quality videos directly from text prompts. We are a fast-growing team of engineers, researchers, and designers passionate about shaping the future of content creation using AI.

Job Description:
We are looking for a skilled ML/AI Backend Engineer with 2–3 years of experience to join our core development team. The ideal candidate will be responsible for building the backend infrastructure that supports large generative models and scales the delivery of AI-generated videos to thousands of users.

Key Responsibilities:
• Develop and maintain robust backend systems for large-scale generative AI models.
• Manage and process large datasets, including scraping, cleaning, and versioning.
• Design and optimize inference pipelines for transformer- or diffusion-based models.
• Deploy and monitor AI models using cloud services (AWS, Azure, GCP).
• Collaborate closely with ML researchers to integrate and scale new models in production.
• Ensure security, reliability, and performance of backend services.
• Handle containerization and infrastructure-as-code setups (e.g., Docker, Terraform).

What We're Looking For:
• 2–3 years of experience in backend development with exposure to AI/ML systems.
• Proficiency in Python and backend frameworks (e.g., FastAPI, Flask).
• Experience with cloud platforms like AWS, Azure, or GCP.
• Understanding of deploying and scaling ML models in production.
• Familiarity with MLOps practices is a strong plus.
• Experience working in fast-paced startup environments is a bonus.

Working Hours: 10:30 am – 6:30 pm IST (flexible for the right candidate).

Perks & Benefits:
• Work on cutting-edge generative AI problems in a rapidly growing domain.
• Creative freedom to experiment and implement innovative ideas.
• Collaborative, open-minded, and inclusive team culture.
• Growth-oriented role with performance-linked opportunities.

How to Apply:
Send your resume and a short note about why you're interested to: research@phenomenalai.in
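To ground the "inference pipelines for transformer-based models" responsibility, a minimal Hugging Face transformers sketch; the checkpoint is a small placeholder, since a real text-to-video stack would serve far larger models behind an API:

# Minimal inference sketch with Hugging Face transformers; the checkpoint is a
# tiny placeholder, not the text-to-video models this company actually serves.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")  # assumed demo checkpoint

prompt = "A sweeping drone shot over a neon-lit city at night,"
outputs = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(outputs[0]["generated_text"])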
Posted 5 days ago
0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
The role involves both coding skills and broader engineering concepts like system design, API integration, and building complete workflows across multiple technology layers (frontend, backend, database, AI, and infrastructure). The responsibilities go beyond just writing code to include designing solutions and integrating various technologies into a cohesive system.

Preferred Skills:
Experience with AI frameworks including LLMs (ChatGPT, Claude, etc.), TensorFlow, PyTorch, or Hugging Face
Familiarity with API development and integration, particularly for data extraction and processing
Good experience with web scraping (Selenium, Scrapy, Playwright)
Understanding of modern web development with React.js for frontend applications
Experience with backend frameworks like FastAPI or Node.js
Knowledge of database technologies (PostgreSQL, Elasticsearch, Pinecone, or similar vector databases)
Understanding of cloud infrastructure on Microsoft Azure, AWS, or GCP
Experience building AI pipelines and workflows that connect multiple systems
Knowledge of media metrics, PR reporting, and data visualization
Strong system design thinking to create efficient AI-driven workflows
Prior project experience with automation tools or AI assistants
Ability to prototype quickly while maintaining code quality and documentation

This is a 3-month, on-site internship role only, so the selected candidate will be working from our office in DN Nagar, Andheri West, Mumbai.
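A small sketch of the kind of scraping task listed above, using Playwright's synchronous API; the target URL and CSS selector are placeholders:

# Scraping sketch with Playwright's sync API; URL and CSS selector are placeholders.
# Requires: pip install playwright, then: playwright install chromium
from playwright.sync_api import sync_playwright

def scrape_headlines(url: str) -> list[str]:
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto(url, wait_until="networkidle")
        headlines = page.locator("h2.article-title").all_inner_texts()  # assumed selector
        browser.close()
    return headlines

if __name__ == "__main__":
    print(scrape_headlines("https://example.com/news"))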
Posted 5 days ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Backend & MLOps Engineer – Integration, API, and Infrastructure Expert

1. Role Objective: Responsible for building robust backend infrastructure, managing ML operations, and creating scalable APIs for AI applications. Must excel in deploying and maintaining AI products in production environments with high availability and security standards. The engineer will be expected to build secure, scalable backend systems that integrate AI models into services (REST, gRPC), manage data pipelines, enable model versioning, and deploy containerized applications in secure (air-gapped) Naval infrastructure.

2. Key Responsibilities:
2.1. Create RESTful and/or gRPC APIs for model services.
2.2. Containerize AI applications and maintain Kubernetes-compatible Docker images.
2.3. Develop CI/CD pipelines for model training and deployment.
2.4. Integrate models as microservices using TorchServe, Triton, or FastAPI.
2.5. Implement observability (metrics, logs, alerts) for deployed AI pipelines.
2.6. Build secured data ingestion and processing workflows (ETL/ELT).
2.7. Optimize deployments for CPU/GPU performance, power efficiency, and memory usage.

3. Educational Qualifications
Essential Requirements:
3.1. B.Tech/M.Tech in Computer Science, Information Technology, or Software Engineering.
3.2. Strong foundation in distributed systems, databases, and cloud computing.
3.3. Minimum 70% marks or 7.5 CGPA in relevant disciplines.
Professional Certifications:
3.4. AWS Solutions Architect/DevOps Engineer Professional
3.5. Google Cloud Professional ML Engineer or DevOps Engineer
3.6. Azure AI Engineer or DevOps Engineer Expert
3.7. Kubernetes Administrator (CKA) or Developer (CKAD)
3.8. Docker Certified Associate

Core Skills & Tools

4. Backend Development:
4.1. Languages: Python, Go, Java, Node.js, Rust (for performance-critical components)
4.2. Web Frameworks: FastAPI, Django, Flask, Spring Boot, Express.js
4.3. API Development: RESTful APIs, GraphQL, gRPC, WebSocket connections
4.4. Authentication & Security: OAuth 2.0, JWT, API rate limiting, encryption protocols

5. MLOps & Model Management:
5.1. ML Platforms: MLflow, Kubeflow, Apache Airflow, Prefect
5.2. Model Serving: TensorFlow Serving, TorchServe, ONNX Runtime, NVIDIA Triton, BentoML
5.3. Experiment Tracking: Weights & Biases, Neptune, ClearML
5.4. Feature Stores: Feast, Tecton, Amazon SageMaker Feature Store
5.5. Model Monitoring: Evidently AI, Arize, Fiddler, custom monitoring solutions

6. Infrastructure & DevOps:
6.1. Containerization: Docker, Podman, container optimization
6.2. Orchestration: Kubernetes, Docker Swarm, OpenShift
6.3. Cloud Platforms: AWS, Google Cloud, Azure (multi-cloud expertise preferred)
6.4. Infrastructure as Code: Terraform, CloudFormation, Pulumi, Ansible
6.5. CI/CD: Jenkins, GitLab CI, GitHub Actions, ArgoCD
6.6. DevOps & Infra: Docker, Kubernetes, NGINX, GitHub Actions, Jenkins

7. Database & Storage:
7.1. Relational: PostgreSQL, MySQL, Oracle (for enterprise applications)
7.2. NoSQL: MongoDB, Cassandra, Redis, Elasticsearch
7.3. Vector Databases: Pinecone, Weaviate, Chroma, Milvus
7.4. Data Lakes: Apache Spark, Hadoop, Delta Lake, Apache Iceberg
7.5. Object Storage: AWS S3, Google Cloud Storage, MinIO
7.6. Backend: Python (FastAPI, Flask), Node.js (optional)
7.7. DevOps & Infra: Docker, Kubernetes, NGINX, GitHub Actions, Jenkins

8. Secure Deployment:
8.1. Military-grade security protocols and compliance
8.2. Air-gapped deployment capabilities
8.3. Encrypted data transmission and storage
8.4. Role-based access control (RBAC) & IDAM integration
8.5. Audit logging and compliance reporting

9. Edge Computing:
9.1. Deployment on naval vessels with air-gapped connectivity.
9.2. Optimization of applications for resource-constrained environments.

10. High Availability Systems:
10.1. Mission-critical system design with 99.9% uptime.
10.2. Disaster recovery and backup strategies.
10.3. Load balancing and auto-scaling.
10.4. Failover mechanisms for critical operations.

11. Cross-Compatibility Requirements:
11.1. Define and expose APIs in a documented, frontend-consumable format (Swagger/OpenAPI).
11.2. Develop model loaders for the AI Engineer's ONNX/serialized models.
11.3. Provide UI developers with test environments, mock data, and endpoints.
11.4. Support frontend debugging, edge deployment bundling, and user role enforcement.

12. Experience Requirements
12.1. Production experience with cloud platforms and containerization.
12.2. Experience building and maintaining APIs serving millions of requests.
12.3. Knowledge of database optimization and performance tuning.
12.4. Experience with monitoring and alerting systems.
12.5. Architected and deployed large-scale distributed systems.
12.6. Led infrastructure migration or modernization projects.
12.7. Experience with multi-region deployments and disaster recovery.
12.8. Track record of optimizing system performance and cost.
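A compact sketch of responsibilities 2.4 and 2.5 above, wrapping a model behind a FastAPI microservice with Prometheus metrics; the predict() function is a stub and the metric and route names are illustrative:

# Model-serving sketch with observability: a FastAPI endpoint plus Prometheus
# counters/histograms. The predict() stub stands in for a real TorchServe/Triton
# or in-process model.
import time

from fastapi import FastAPI, Response
from prometheus_client import Counter, Histogram, generate_latest, CONTENT_TYPE_LATEST
from pydantic import BaseModel

app = FastAPI(title="inference-service")

REQUESTS = Counter("inference_requests_total", "Total inference requests")
LATENCY = Histogram("inference_latency_seconds", "Inference latency in seconds")

class Features(BaseModel):
    values: list[float]

def predict(values: list[float]) -> float:
    return sum(values) / max(len(values), 1)  # stub in place of a real model

@app.post("/v1/predict")
def infer(features: Features) -> dict:
    REQUESTS.inc()
    start = time.perf_counter()
    score = predict(features.values)
    LATENCY.observe(time.perf_counter() - start)
    return {"score": score}

@app.get("/metrics")
def metrics() -> Response:
    return Response(generate_latest(), media_type=CONTENT_TYPE_LATEST)

The /metrics route is what a Prometheus scrape job would poll; alerting and dashboards (Grafana) would sit on top of these series.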
Posted 5 days ago
9.0 years
0 Lacs
India
On-site
About Us:
Teknobloom is a dynamic and innovative software development company that specializes in creating simple and effective solutions for our clients. We are committed to delivering top-notch services and products that exceed our customers' expectations, every single time. As we continue to grow, we are seeking a skilled Azure Developer to join our team and help transform data into actionable insights.

Responsibilities:
● Publish, secure, and manage APIs in Azure API Management (APIM), including policies for authentication, rate limiting/throttling, caching, and versioning.
● Build and orchestrate cloud integrations using Azure Logic Apps and related messaging/integration services (e.g., Event Grid, Service Bus where applicable).
● Implement secure secrets and configuration management via Azure Key Vault.
● Integrate with Azure SQL Database, Cosmos DB, and other Azure-hosted or external data sources; write efficient queries and data access layers.
● Design, develop, deploy, and maintain RESTful APIs in Python (FastAPI, Flask, or Django REST) hosted on Azure (Function Apps, Containers, App Service).
● Instrument observability with Azure Monitor, Application Insights, and Log Analytics; configure dashboards, alerts, and diagnostics for APIs and integrations.
● Assist with performance testing, tuning, troubleshooting, and production incident resolution; contribute to on-call rotations (as applicable).
● Write and maintain clear API documentation using OpenAPI/Swagger and internal runbooks/playbooks.
● Collaborate within Agile teams—participate in sprint planning, estimation, demos, and retros; partner closely with Business Analysts, QA, and DevOps.
● Follow best practices in source control (Git), code reviews, branch strategy, and secure coding standards.

Role Requirements/Expectations:
● 4–9 years of professional software development experience with a strong focus on Python and REST API development.
● Hands-on experience deploying and running workloads on Microsoft Azure—especially Function Apps, APIM, Logic Apps, Key Vault, and Azure Monitor/App Insights.
● Solid grasp of API design principles: resource modeling, idempotency, pagination, versioning, error handling, and standardized response formats.
● Working knowledge of both SQL & NoSQL data models; practical experience with Azure SQL and Cosmos DB.
● Understanding of API security (OAuth2, JWT, Azure AD / Entra ID, managed identities), throttling/rate limiting, and encrypted secrets management.
● Experience with Git-based workflows, pull requests, and collaborative code reviews.
● Strong debugging, analytical, and problem-solving skills.
● Effective written and verbal communication skills; comfortable working with distributed teams.

Why Join Us?
Be part of a collaborative and high-impact team where your work drives real business decisions. We offer competitive compensation, a dynamic work culture, and continuous opportunities for learning and growth. As part of our team, you won't just be working with data — you'll be driving insights that directly shape products, influence business decisions, and impact our clients' growth.
Posted 5 days ago
0 years
0 Lacs
India
On-site
Role Overview
We are seeking a motivated Data Integration Engineer to join our engineering team. This individual will play a critical role in integrating and transforming large-scale data to power intelligent decision-making systems.

Key Responsibilities
Design, build, and maintain data pipelines and APIs using Python.
Integrate data from various sources including third-party APIs and internal systems.
Work with large, unstructured datasets and transform them into usable formats.
Collaborate with cross-functional teams to define data requirements and deliver timely solutions.
Leverage cloud-based services, especially AWS (EC2, S3) and Snowflake/Databricks, to scale data infrastructure.
Ensure high performance and responsiveness of applications.
Write clean, maintainable code with a focus on craftsmanship.

Required Skills & Experience
Strong proficiency in Python and data libraries like Pandas.
Experience with web frameworks like Django, FastAPI, or Flask.
Hands-on experience with MongoDB or other NoSQL databases.
Proficiency in working with RESTful APIs and JSON.
Familiarity with AWS services (EC2, S3) and Snowflake/Databricks.
Solid understanding of data mining, data exploration, and troubleshooting data issues.
Real-world experience with large-scale data systems in cloud environments.
Ability to thrive in a fast-paced, high-growth, deadline-driven setting.
Self-starter with a strong sense of ownership and a passion for problem-solving.
Comfortable working with messy or unstructured data.

Preferred Qualifications
Bachelor's or Master's degree in Computer Science.
Exposure to Big Data and Machine Learning technologies is a plus.
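A small sketch of the API-to-S3 ingestion pattern these responsibilities describe, using requests, pandas, and boto3; the endpoint URL, bucket, and key are placeholders, and AWS credentials are assumed to be configured in the environment:

# Ingestion sketch: pull JSON from a third-party API, normalize it with pandas,
# and land it in S3 as Parquet. URL, bucket, and key are placeholders.
import io

import boto3
import pandas as pd
import requests

def ingest(api_url: str, bucket: str, key: str) -> int:
    resp = requests.get(api_url, timeout=30)
    resp.raise_for_status()
    df = pd.json_normalize(resp.json())       # flatten nested JSON records
    df = df.dropna(how="all").drop_duplicates()

    buffer = io.BytesIO()
    df.to_parquet(buffer, index=False)         # requires pyarrow or fastparquet
    boto3.client("s3").put_object(Bucket=bucket, Key=key, Body=buffer.getvalue())
    return len(df)

if __name__ == "__main__":
    rows = ingest("https://api.example.com/v1/records", "raw-data-bucket", "records/latest.parquet")
    print(f"wrote {rows} rows")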
Posted 5 days ago
0 years
0 Lacs
India
On-site
Foyr AI Hackathon – Build the Future of Agentic Design Intelligence

Ready to build the future of AI-native software?
At Foyr AI, we're crafting a new era of intelligent tools — where AI agents don't just assist, they collaborate, create, and take action. This isn't just about building prompts or playing with APIs. This is your chance to help build the next generation of intelligent, agent-powered software — tools that think, collaborate, and assist users in real-time. Win the challenge, land a paid internship, and if you kill it — walk into a full-time role on our core AI team.

About Foyr AI
At Foyr, we're reimagining software from the ground up — moving away from menus and mouse clicks toward AI-native tools that blend reasoning, language, and real-time interaction. We've already launched Foyr Ideate, a generative AI product that creates stunning interior design concepts from simple prompts. Now, we're building something even more ambitious — agent-first SaaS products that work like co-designers.

Imagine telling an AI: "Show me three minimalist layouts using natural wood and hidden lighting — but leave the wall art area blank, I'll pick that myself." And watching it respond, build, and adjust in real time — while inviting your creative tweaks. This is the future we're building. And we want you to be part of it.

What You'll Build
You'll build a traditional application with a UI that can also be controlled by an AI agent through voice, chat, or prompt-based instructions — using MCP (Model Context Protocol). Your app should be usable through direct interaction (clicks, UI actions) and through intelligent agent commands. You'll be evaluated on how well your agent understands, maps instructions to UI actions, responds intelligently, and integrates with the core app logic.

The Problem Statement
Create any traditional application that can be used via a user interface or controlled by an AI agent through MCP (Model Context Protocol). Your task is to build a basic but functional app with a UI — and enable it to be driven by an AI agent via prompts, voice, or chat. The same action should be possible either by the user manually using the UI or by instructing the agent. This shows how classic tools can become agent-friendly, AI-native experiences.

Here are a few example applications you can build:
A shape drawing canvas with standard tools (rectangle, circle, line, move, delete) — but also controllable via agent instructions like "Draw three circles of radius 20 in a row"
A to-do list manager where you can add, complete, or sort tasks via UI or say "Mark all today's tasks as done" to the agent
A layout grid tool where you can add/move blocks manually, or say "Add a 2-column layout with 3 text boxes and 1 image"

The focus is on showing how a traditional, standalone app can be extended to support agentic interactions — making it more accessible and intelligent without removing manual control. Use any tech stack you're comfortable with. Voice input/output is optional; a simple text interface with the agent is also fine.

Internship Offer
Winner: ₹TBD + internship opportunity at Foyr AI. Crack the challenge and score a paid internship with direct mentorship from Foyr's AI team — ace that, and you could lock in a full-time spot. Work directly on building our next-gen AI design co-pilot.

What We're Evaluating
This challenge is designed to test your real-world AI engineering skills across the following areas (skill area: what we expect):
Python + FastAPI: REST endpoints, backend agent logic, or an agent interface API
Prompt Engineering: Clean, composable prompt chains with contextual memory
LLM Integration: Use of OpenAI, Anthropic, or similar (OpenAI Agents SDK a plus)
Agent Design: Can your agent understand instructions, map them to UI actions, and adapt to user intent?
UI + Agent Parity: Same action possible via manual UI and agent control — a seamless dual-mode interface
Voice / Chat I/O: Support for speech input/output or natural language chat
VectorDB / MongoDB: Optional memory/history tracking
AI Pair Programming: Optional — use of tools like Cursor or KiloCode for auto-complete, suggestions, etc.

Submission Guidelines
Submit the following:
Live demo (hosted on Replit, Render, Vercel, etc. — or share setup instructions)
GitHub repo
2–3 min walkthrough video (explain flow, architecture, LLM logic)
README that explains: prompt flow and logic; how voice input/output is handled; tech stack and tools used; optional diagram of agent reasoning or skill modules; known issues or improvements

Timeline
Hackathon duration: 1 month
Submission deadline: 28 August 2025

How to Apply
Step 1: Apply
Step 2: Build your voice agent prototype
Step 3: Submit your solution with the hosted demo link + code + doc + demo video

Tech Stack Guidelines
Backend: Python, FastAPI, MongoDB, Weaviate/Pinecone/FAISS
LLM Tools: OpenAI Agents SDK, LangChain/LangGraph (optional)
Voice Tools: OpenAI Whisper, Web Speech API, ElevenLabs, Azure Speech, etc.
UI Application: React, HTML5 Canvas, Vue, or any frontend framework to build the traditional interface
Agent Integration: REST APIs or WebSocket endpoints to connect agent commands to app actions
Infra (Optional): Vercel, Replit, HuggingFace Spaces, Docker

Why Join?
You'll work with a team that's rethinking human-computer interaction from the ground up — not as tools, but as creative, collaborative agents. If you're excited by real AI reasoning, generative UIs, and building agent-first products — we'd love to work with you.

Note: This is an unpaid internship.
Skills: UI/UX development, FastAPI, prompt engineering, REST APIs, Python, instructions, voice/chat interaction, app, VectorDB, MongoDB, agent design, building, LLM integration
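A minimal sketch, assuming a FastAPI backend, of the dual-mode idea the problem statement describes: the same to-do actions exposed to both the manual UI and an agent that posts structured commands. All names are illustrative, and a real entry would layer an LLM and MCP tooling on top of these functions:

# Dual-mode to-do backend sketch: the UI calls the REST routes directly, while an
# agent posts structured commands to /agent/command that map onto the same actions.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="todo-agent-demo")
TASKS: dict[int, dict] = {}
NEXT_ID = 1

class AgentCommand(BaseModel):
    action: str              # e.g. "add_task", "complete_all"
    payload: dict = {}

def add_task(title: str) -> dict:
    global NEXT_ID
    task = {"id": NEXT_ID, "title": title, "done": False}
    TASKS[NEXT_ID] = task
    NEXT_ID += 1
    return task

def complete_all() -> int:
    for task in TASKS.values():
        task["done"] = True
    return len(TASKS)

@app.post("/tasks")                 # called by the manual UI
def create_task(title: str) -> dict:
    return add_task(title)

@app.post("/agent/command")         # called by the agent layer
def run_command(cmd: AgentCommand) -> dict:
    if cmd.action == "add_task":
        return add_task(cmd.payload.get("title", "untitled"))
    if cmd.action == "complete_all":
        return {"completed": complete_all()}
    raise HTTPException(status_code=400, detail=f"unknown action {cmd.action}")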
Posted 5 days ago
2.0 - 4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: Backend Engineer (AI/ML)
Experience: 2-4 Years
Location: Hyderabad

Responsibilities

What We're Looking For (The Essentials)
• Strong Backend Foundation: 1-3 years of professional experience in backend software development, building and deploying backend systems professionally.
• Python & FastAPI Expertise: Strong knowledge of Python and significant experience with FastAPI.
• Architectural Mindset: A solid grasp of both Low-Level Design (LLD) and High-Level Design (HLD) principles. You can think systematically about trade-offs, scalability, and maintainability.
• API Craftsmanship: Ability to design, build, and maintain clean, efficient, and well-documented APIs.

You'll Really Catch Our Eye If You Have:
• Experience with AI/LLMs: You've built products or applications that integrate with LLMs (e.g., using APIs from OpenAI, Anthropic, or open-source models from Hugging Face).
• Model Finetuning Experience: You have hands-on experience finetuning pre-trained models to improve their performance on specific tasks.
• Familiarity with the AI Ecosystem: You're comfortable with concepts or tools like vector databases (e.g., Pinecone, Chroma), retrieval-augmented generation (RAG), or frameworks like LangChain.

General Expectations for All Engineers
• Full-Stack Awareness: While you'll specialize in backend, you should be comfortable enough to dive into the frontend to build a few things and fix bugs when needed.
• Curious and Experimental: You enjoy figuring things out, tinkering, and experimenting until you get it right.
• Collaborative Communicator: You have great communication skills and thrive in a collaborative, team-oriented environment.
• Proactive and Driven: You're ready to dive in, take initiative, and make things happen. We're not looking for perfection, but for curiosity, motivation, and an excitement to learn.

Why You Should Join Us
• Impact: Be one of the first engineers and have a massive impact on the product and company culture.
• Growth: Work on challenging problems at the intersection of traditional backend engineering and applied AI.
• Great Team: Join a team of smart, ambitious, and friendly people who are passionate about what they do.
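As a rough illustration of the LLM-integration experience described above, a minimal call through the OpenAI Python SDK; the model name and prompt are placeholders, and the same pattern applies to Anthropic or Hugging Face endpoints:

# Minimal LLM integration sketch with the OpenAI Python SDK (v1.x client style).
# Model name and prompt are placeholders; requires OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name for illustration
    messages=[
        {"role": "system", "content": "You answer questions about our product API."},
        {"role": "user", "content": "Summarize what a 429 response means for callers."},
    ],
    temperature=0.2,
)
print(response.choices[0].message.content)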
Posted 5 days ago
0.0 - 4.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Job Title: Technical Project Manager – Full Stack
Location: Bengaluru, India
Experience: 8+ Years in Full Stack Development, 2+ Years in Architecture/Project Management
Employment Type: Full-time

Company Overview:
IAI Solution Pvt Ltd (www.iaisolution.com) operates at the edge of applied AI, where foundational research meets real-world deployment. We craft intelligent systems that think in teams, adapt with context, and deliver actionable insight across domains. We are seeking a Technical Project Manager who thrives in high-velocity environments, enjoys technical problem-solving, and is passionate about building scalable and impactful systems.

Position Summary:
We are hiring a Technical Project Manager (TPM) who began their career as a Full Stack Developer (JavaScript, Java, Python, Spring Boot), progressed to a Technical Lead, and has since grown into a Project and Solution Delivery Leader. This person must have strong technical grounding, cloud architecture expertise, and demonstrated success in managing software projects end to end. Experience in a startup environment is preferred, where agility, ownership, and cross-functional collaboration are key.

Key Responsibilities
Lead software projects from planning through execution and final delivery.
Translate business and product goals into technical implementation roadmaps.
Coordinate delivery across frontend and backend teams working in JavaScript, Java, Python, Spring Boot, and related stacks.
Architect and oversee deployments using Azure/AWS/GCP.
Handle CI/CD pipelines, infrastructure automation, and cloud-native development using Docker, Kubernetes, Terraform, and GitHub Actions.
Manage project timelines, resource planning, and risk mitigation.
Work closely with stakeholders to ensure delivery meets expectations.
Maintain focus on security, scalability, and operational excellence.

Must-Have Qualifications
8+ years of total experience in software engineering.
Experience as a Full Stack Developer with JavaScript, Java, Python, and Spring Boot.
2+ years in a Technical Project Manager or Technical Lead role.
Exposure to Cloud and Solution Architecture (Azure preferred).
Proficiency in managing technical teams and cross-functional delivery.
Strong communication and collaboration skills.
Familiarity with Agile project management using Jira.
Startup experience preferred – ability to manage ambiguity, rapid iterations, and hands-on leadership.

Technical Stack
Frontend: React.js, Next.js
Backend: Python, FastAPI, Django, Spring Boot, Node.js
DevOps & Cloud: Azure (preferred), AWS, Docker, Kubernetes, Terraform
CI/CD: GitHub Actions, Azure DevOps
Databases: PostgreSQL, MongoDB, Redis
Messaging: Kafka, RabbitMQ, Azure Service Bus
Monitoring: Prometheus, Grafana, ELK Stack

Good-to-Have Skills & Certifications
Exposure to AI/ML projects and MLOps tools like MLflow or Kubeflow
Experience with microservices, performance tuning, and frontend optimization
Certifications: PMP, CSM, CSPO, SAP Activate, PRINCE2, AgilePM, ITIL

Perks & Benefits
Competitive compensation with performance incentives
High-impact role in a product-driven, fast-moving environment
Opportunity to lead mission-critical software and AI initiatives
Flexible work culture, learning support, and health benefits

Job Type: Full-time
Pay: Up to ₹3,200,000.00 per year
Benefits: Health insurance, paid sick time, Provident Fund
Schedule: Day shift, fixed shift, Monday to Friday, morning shift
Supplemental Pay: Performance bonus, quarterly bonus, yearly bonus
Ability to commute/relocate: Bengaluru, Karnataka: Reliably commute or planning to relocate before starting work (Required)
Application Question(s):
Current CTC (in Lakhs)?
Expected CTC (in Lakhs)?
Notice Period (in Days)?
Current Location:
Experience: Software development: 4 years (Required)
Location: Bengaluru, Karnataka (Required)
Work Location: In person
Speak with the employer: +91 9003562294
Posted 5 days ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
JD: GenAI (5–8 years)
1. Proficient in Python, FastAPI, Docker, and Transformers.
2. Experience with vector databases like pgvector.
3. Hands-on experience with model serving and optimization.
4. Familiarity with security, privacy, and compliance for AI systems, e.g., PII redaction and hallucination handling.
5. Fine-tune and optimize foundation models, e.g., GPT, Mistral.
6. Implement and evaluate multimodal AI (text, image, audio).
7. Ensure model safety, fairness, and bias mitigation.
8. Manage and develop APIs and conduct testing.
Posted 5 days ago
5.0 years
0 Lacs
India
On-site
Job Title: Full-Stack Engineer (Next.js + FastAPI)
Job Type: Full-Time, Contractor

About Us:
Our mission at micro1 is to match the most talented people in the world with their dream jobs. If you are looking to be at the forefront of AI innovation and work with some of the fastest-growing companies in Silicon Valley, we invite you to apply for a role. By joining the micro1 community, your resume will become visible to top industry leaders, unlocking access to the best career opportunities on the market.

Job Summary:
We are seeking a highly skilled Full-Stack Engineer with strong expertise in FastAPI (Python) and Next.js (React) to join our growing engineering team. In this role, you'll be responsible for building and maintaining modern, scalable applications, from backend services to frontend interfaces. If you enjoy owning features end-to-end, solving real-world problems, and collaborating cross-functionally, we'd love to hear from you.

Key Responsibilities
Design, develop, and maintain robust backend services and APIs using Python and FastAPI.
Build dynamic and performant frontend applications using React and Next.js.
Implement best practices in software architecture, API design, and system performance.
Translate product requirements into clean, testable, and maintainable code across the stack.
Work closely with product managers, designers, and fellow engineers to deliver end-to-end features.
Conduct code reviews, debug production issues, and maintain high code quality standards.
Stay up to date with trends and advancements in both frontend and backend technologies.

Required Skills and Qualifications
5+ years of experience in full-stack development with production-grade applications.
Strong hands-on experience with FastAPI, Python, and asynchronous backend development.
Proficient in Next.js, React, and modern JavaScript/TypeScript.
Solid understanding of REST APIs, microservices, and scalable backend architecture.
Ability to manage and prioritize tasks independently, with clear communication across teams.
Proven debugging, optimization, and performance tuning skills.
Strong verbal and written communication abilities.

Preferred Qualifications
Experience with cloud platforms like AWS, GCP, or Azure.
Familiarity with Docker, CI/CD pipelines, and DevOps best practices.
Past experience in technical leadership, mentoring, or guiding junior developers.
Posted 5 days ago
5.0 years
0 Lacs
India
Remote
About the Role
We are seeking a hands-on AI/ML Engineer with deep expertise in Retrieval-Augmented Generation (RAG) agents, Small Language Model (SLM) fine-tuning, and custom dataset workflows. You'll work closely with our AI research and product teams to build production-grade models, deploy APIs, and enable next-gen AI-powered experiences.

Key Responsibilities
Design and build RAG-based solutions using vector databases and semantic search.
Fine-tune open-source SLMs (e.g., Mistral, LLaMA, Phi, etc.) on custom datasets.
Develop robust training and evaluation pipelines with reproducibility.
Create and expose REST APIs for model inference using FastAPI.
Build lightweight frontends or internal demos with Streamlit for rapid validation.
Analyze model performance and iterate quickly on experiments.
Document processes and contribute to knowledge-sharing within the team.

Must-Have Skills
3–5 years of experience in applied ML/AI engineering roles.
Expert in Python and common AI frameworks (Transformers, PyTorch/TensorFlow).
Deep understanding of RAG architecture and vector stores (FAISS, Pinecone, Weaviate).
Experience with fine-tuning transformer models and instruction-tuned SLMs.
Proficient with FastAPI for backend API deployment and Streamlit for prototyping.
Knowledge of tokenization, embeddings, training loops, and evaluation metrics.

Nice to Have
Familiarity with LangChain, the Hugging Face ecosystem, and OpenAI APIs.
Experience with Docker, GitHub Actions, and cloud model deployment (AWS/GCP/Azure).
Exposure to experiment tracking tools like MLflow or Weights & Biases.

What We Offer
Build core tech for next-gen AI products with real-world impact.
Autonomy and ownership in shaping AI components from research to production.
Competitive salary, flexible remote work policy, and a growth-driven environment.
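A minimal sketch of the SLM fine-tuning setup this role describes, assuming the Hugging Face peft library for LoRA adapters; the checkpoint name and target modules are illustrative and vary by model family:

# LoRA fine-tuning setup sketch using Hugging Face transformers + peft; the base
# checkpoint and target modules are placeholders, not a prescribed configuration.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # assumed small demo checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

lora_config = LoraConfig(
    r=8,                                   # low-rank dimension
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections; model-dependent
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapters train; base weights stay frozen

# From here, a Trainer or SFT loop over the custom dataset would run as usual.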
Posted 5 days ago
7.0 years
0 Lacs
Kochi, Kerala, India
On-site
Key Responsibilities

Technical Leadership
Lead Python development teams on enterprise-grade projects
Own and drive architectural decisions and code quality
Conduct design and code reviews, and ensure adherence to best practices

Backend Development
Build and maintain robust, scalable backend services using Python frameworks (Django, FastAPI, Flask)
Design APIs, background workers, and data pipelines

Team Mentoring
Mentor and guide junior and mid-level developers
Provide training, performance feedback, and career guidance

DevOps and Deployment
Work with DevOps to define CI/CD pipelines and deployment strategies
Collaborate on containerization using Docker and orchestration with Kubernetes

Client and Stakeholder Interaction
Translate business requirements into technical solutions
Participate in client calls for requirement gathering, demos, and feedback sessions

Required Skills
7+ years of Python development experience
Strong command over frameworks like Django, FastAPI, Flask
Proven experience in API development and integration (REST, GraphQL)
Experience with relational (PostgreSQL, MySQL) and NoSQL (MongoDB, Redis) databases
Solid understanding of system architecture, design patterns, and scalability
Familiarity with asynchronous programming (e.g., Celery, asyncio)
Hands-on experience with Docker, Git, and CI/CD pipelines
Exposure to cloud platforms (AWS/GCP/Azure)
Good understanding of security best practices (OWASP, data protection)

Preferred Skills
Experience with AI/ML pipelines, data engineering, or microservices
Prior experience in leading Agile/Scrum teams
Familiarity with front-end technologies (React/Angular) is a plus
Contributions to open-source projects or technical blogs

Soft Skills
Strong problem-solving and decision-making abilities
Excellent communication and stakeholder management skills
Ability to multitask and manage priorities in a fast-paced environment
Team-oriented with a proactive and collaborative approach
Posted 5 days ago
0.0 - 5.0 years
0 - 0 Lacs
Hyderabad, Telangana
On-site
Job Title: Python + AI/ML Developer
Location: Hyderabad (On-Site)
Job Type: Full-Time
Experience: 4 to 7 Years
Notice Period: Immediate to 15 Days

Job Summary
We are looking for a talented and motivated Python Developer with strong experience in building APIs using FastAPI and Flask. The ideal candidate will possess excellent problem-solving and communication skills and a passion for delivering high-quality, scalable backend solutions. You will play a key role in developing robust backend services, integrating APIs, and collaborating with frontend and QA teams to deliver production-ready software.

Key Responsibilities
Design, develop, and maintain backend services using FastAPI and Flask.
Write clean, reusable, and efficient Python code following best practices.
Work with Large Language Models (LLMs) and contribute to building advanced AI-driven solutions.
Collaborate with cross-functional teams to gather requirements and translate them into technical implementations.
Optimize applications for maximum speed, scalability, and reliability.
Implement secure API solutions and ensure compliance with data protection standards.
Develop and maintain unit tests, integration tests, and documentation for code, APIs, and system architecture.
Participate in code reviews and contribute to continuous improvement of development processes.

Required Skills & Qualifications
Strong programming skills in Python with hands-on experience in backend development.
Proficiency in developing RESTful APIs using the FastAPI and Flask frameworks.
Solid understanding of REST principles and asynchronous programming in Python.
Good communication skills and the ability to troubleshoot and solve complex problems effectively.
Experience with version control tools like Git.
Eagerness to learn and work with LLMs, vector databases, and other modern AI technologies.
Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent practical experience.

Nice to Have
Experience with LLMs, prompt engineering, and vector databases.
Understanding of Transformer architecture, embeddings, and Retrieval-Augmented Generation (RAG).
Familiarity with data processing libraries like NumPy and Pandas.
Knowledge of Docker for containerized application development and deployment.

Skills
Python, FastAPI, Flask, REST APIs, Asynchronous Programming, Git, API Security, Data Protection, LLMs, Vector DBs, Transformers, RAG, NumPy, Pandas, Docker.

If you are passionate about backend development and eager to work on innovative AI solutions, we would love to hear from you!

Job Type: Full-time
Pay: ₹10,764.55 - ₹65,865.68 per month
Benefits: Flexible schedule, health insurance, paid time off, Provident Fund
Schedule: Day shift, Monday to Friday
Ability to commute/relocate: Hyderabad, Telangana: Reliably commute or planning to relocate before starting work (Required)
Education: Bachelor's (Preferred)
Experience:
Coding: 5 years (Preferred)
Flask API: 7 years (Required)
REST API: 7 years (Required)
Git: 5 years (Required)
Pandas/NumPy/Docker: 7 years (Required)
Python developer: 7 years (Required)
AI/ML: 5 years (Required)
Machine learning: 5 years (Required)
FastAPI: 5 years (Required)
Generative AI: 5 years (Required)
NLP: 5 years (Required)
LLM: 5 years (Required)
Work Location: In person
Posted 5 days ago
0.0 - 3.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Job Information
Industry: Health Care
Salary: 0 - 3 K
Date Opened: 07/28/2025
Job Type: Full time
Work Experience: 1-3 years
City: Bangalore North
State/Province: Karnataka
Country: India
Zip/Postal Code: 560048

Job Description: AI/ML Engineer

Key Responsibilities
Design, develop, and deploy deep learning and machine learning models, particularly convolutional neural networks (CNNs) for medical image analysis.
Build scalable and efficient training and inference pipelines using frameworks like TensorFlow and PyTorch.
Manage and deploy DL/ML solutions using local virtual machines (VMs); experience with cloud platforms like AWS and Azure is a plus.
Containerize DL/ML workflows using Docker and implement CI/CD pipelines for model delivery.
Optimize models for accuracy and speed; conduct A/B testing and validation with real-world radiology data.
Collaborate with radiologists and software teams to integrate AI models into teleradiology platforms.
Develop and maintain API integrations, including FastAPI-based services and interoperability with legacy systems.
Monitor and maintain deployed models to ensure consistent performance in clinical environments.

Requirements

Must-Have Skills
Proficient in deep learning and machine learning techniques.
Strong experience with TensorFlow and PyTorch.
Solid background in image processing and computer vision, especially with CNN architectures.
Proficiency with Docker and experience with CI/CD pipelines.
Excellent programming skills in Python and experience with relevant ML libraries.
Understanding of data pipelines and handling large-scale medical imaging datasets.

Nice to Have
Experience deploying ML models on cloud platforms such as AWS and Azure.
Experience with Kubernetes or orchestration frameworks.
Familiarity with MLOps tools and workflows.
Experience with distributed training or federated learning.
Contributions to healthcare AI research or open-source projects.

Education & Experience
Bachelor's or Master's degree in Computer Science, Biomedical Engineering, Data Science, or a related field.
1+ years of experience building and deploying machine learning solutions in production, preferably in a healthcare or radiology setting.

Why Join Us?
Opportunity to work on transformative AI applications in healthcare and radiology.
Collaborative and mission-driven team environment.
Access to advanced medical imaging datasets.

Interested candidates can apply to nanda.k@telradsol.com
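To illustrate the CNN-for-image-classification work this posting centers on, a tiny PyTorch sketch; the architecture, input size, and two-class output are placeholders, not a clinically validated model:

# Tiny CNN classifier sketch in PyTorch; layer sizes and the 2-class output are
# placeholders for illustration only.
import torch
from torch import nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # 1-channel grayscale input
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x).flatten(1)
        return self.classifier(x)

model = TinyCNN()
dummy_batch = torch.randn(4, 1, 224, 224)  # e.g. a batch of 4 grayscale scans
logits = model(dummy_batch)
print(logits.shape)                        # torch.Size([4, 2])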
Posted 5 days ago
0.0 - 4.0 years
0 Lacs
Karnataka
On-site
Bengaluru, Karnataka, India
Sub Business Unit: Engineer
Job posted on: Jul 28, 2025
Employee Type: Permanent
Experience range (Years): 2 years - 4 years

Core Qualifications
Experience: 2-4 years of professional experience in the full software development lifecycle.
Python Proficiency: Solid, hands-on coding experience in Python.
Computer Science Fundamentals: A deep and practical understanding of Data Structures, Algorithms, and Software Design Patterns.
Version Control: Proficiency with Git or other distributed version control systems.
Analytical Mindset: Excellent analytical, debugging, and problem-solving abilities.
Java Knowledge: Basic understanding of Java concepts and syntax.
Testing: Familiarity with testing tools and a commitment to Test-Driven Development (TDD) principles (see the sketch after this posting).
Education: Bachelor's degree in Computer Science, Computer Engineering, or a related technical field.

Good to Have
Generative AI: Experience with frameworks like LangChain, Google ADK/Autogen, or direct experience with APIs from OpenAI, Google (Gemini), or other LLM providers.
Web Stacks: Strong knowledge of Python web frameworks like Flask or FastAPI, Celery, etc.
Data Engineering: Hands-on experience with NumPy, Pandas/Polars, data pipeline tools (e.g., Apache Spark, Kafka), and visualization.
Databases: Proficiency with both SQL (e.g., MySQL, PostgreSQL) and NoSQL (e.g., MongoDB, Elasticsearch, Redis) databases.
DevOps & Cloud: Experience with AWS (including EC2, Lambda, EKS/ECS), Docker, and CI/CD best practices and pipelines (e.g., GitLab CI).
Operating Systems: Good working knowledge of programming in a UNIX/Linux environment.
FinTech Domain: Prior experience or interest in the financial technology sector is a plus.

Reporting to: Technical Lead
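Since the posting pairs Python fundamentals with TDD, a small test-first sketch may help set expectations. The function and its pytest test below are hypothetical examples, not part of the role's actual codebase.

```python
# Illustrative TDD-style example: a tiny data-structures function plus its pytest test.
def dedupe_preserve_order(items):
    """Return items with duplicates removed, keeping first occurrences."""
    seen = set()
    out = []
    for item in items:
        if item not in seen:
            seen.add(item)
            out.append(item)
    return out

def test_dedupe_preserve_order():
    # In a TDD workflow this test would be written first and drive the implementation.
    assert dedupe_preserve_order([3, 1, 3, 2, 1]) == [3, 1, 2]
    assert dedupe_preserve_order([]) == []
```

Running `pytest` on the file exercises the test.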
Posted 5 days ago
0 years
0 Lacs
India
Remote
Job Description
SETV GLOBAL
Position: Product Development Intern
Location: Remote
Employment Type: Internship (leading to Part-Time and Full-Time Opportunities)
Duration: 3 Months Unpaid; Performance-Based Stipend from 4th Month (₹5,000 – ₹15,000/month)
Working Hours: 5 hours/day + 1 hour lunch/dinner break
Shift Options:
2:00 PM – 8:00 PM (IST)
7:00 PM – 1:00 AM (IST)

Company Description:
SETV.W, a subsidiary of SETV Global, is pioneering the integration of Artificial Intelligence and Machine Learning in healthcare. Our mission is to create smarter, faster, and more accessible healthcare systems worldwide through AI-driven solutions. We are seeking passionate innovators to join us in building technology that truly makes a difference.

Duties and Responsibilities

Web Developer Intern:
Develop responsive and user-centric interfaces using HTML, CSS, JavaScript, and modern frameworks like React, Angular, Vue, React Native, or Nuxt.js.
Implement backend systems using Python (Django), PHP (Laravel), Node.js (Express.js), or GoLang.
Collaborate with UI/UX and AI teams for seamless integration.
Manage databases such as MySQL, PostgreSQL, MongoDB, and Azure Cosmos DB.
Work with Redis and other caching technologies.
Integrate AI models using Flask, REST, FastAPI, GraphQL, SOAP, gRPC, MQTT, or WebSockets.
Conduct performance tuning, scalability testing, and debugging.
Maintain clean code and technical documentation.
Stay updated with modern frameworks and participate in agile ceremonies.
Ensure cross-browser and responsive design compatibility.

DevOps Engineer Intern:
Assist in building CI/CD pipelines using Jenkins, GitHub Actions, Azure DevOps, or GitLab CI/CD.
Manage cloud infrastructure (AWS or Azure) and automate provisioning.
Gain hands-on experience with Docker and Kubernetes.
Write infrastructure automation scripts in Python, Bash, or PowerShell (see the sketch after this posting).
Monitor systems using Prometheus, Grafana, or the ELK Stack.
Streamline deployments and enforce security best practices.
Maintain clear documentation and support knowledge-sharing sessions.
Conduct security assessments and monitor compliance.

Requirements

Web Developer Intern:
Proficiency in HTML, CSS, JavaScript, and at least three frameworks (React, Angular, Vue, React Native, Nuxt.js).
Knowledge of backend tools (Django, Laravel, Express.js) and server-side languages (Python, PHP, Node.js).
Experience with databases: MySQL, PostgreSQL, MongoDB, Cosmos DB.
API integration and testing skills (REST, Postman, Selenium).
Version control using Git.
Excellent debugging and problem-solving abilities.

DevOps Engineer Intern:
Familiarity with cloud platforms (AWS, Azure).
Hands-on experience with CI/CD tools and containerization (Docker, Kubernetes).
Scripting skills in Python, Bash, or PowerShell.
Understanding of monitoring tools and infrastructure security.
Exposure to IaC tools (Terraform, Ansible) is a plus.
Strong organizational and collaboration skills.

Qualifications:
Education: Pursuing or recently completed a Bachelor's/Master's in Engineering or related fields.
Portfolio: Strong project portfolio in Web/DevOps preferred.
Healthcare Domain Awareness: Knowledge of healthcare systems and standards is an added advantage.
Selection Process (3 Rounds):
Resume Screening – Shortlisting based on technical skills and experience.
Task Round – You will receive an email with a task that needs to be submitted before the given deadline.
Technical Interview + HR – In-depth evaluation of development/DevOps capabilities, assessing cultural fit, vision alignment, and strategic thinking.

What We Offer:
Impactful Experience: Contribute to real-world healthcare innovation.
Mentorship: Learn from experts in AI/ML and health tech.
Growth Path: Conversion to a paid part-time role based on performance.
Flexible & Remote: Work from anywhere with aligned hours.
Incentives: Earn loyalty points and recognition for dedication and performance.
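For the DevOps track's automation-scripting duty, here is a minimal Python health-check sketch using only the standard library. The URL and exit-code convention are assumptions for illustration, not part of any real SETV service.

```python
# Illustrative automation sketch: poll a service's health endpoint and exit non-zero on failure,
# the kind of small script a monitoring or CI step might call.
import sys
import urllib.request

def check_health(url: str, timeout: float = 5.0) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        # Covers connection errors, timeouts, and HTTP errors raised by urllib.
        return False

if __name__ == "__main__":
    url = sys.argv[1] if len(sys.argv) > 1 else "http://localhost:8000/health"  # placeholder URL
    ok = check_health(url)
    print(f"{url} -> {'healthy' if ok else 'unreachable'}")
    sys.exit(0 if ok else 1)
```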
Posted 5 days ago
2.0 - 5.0 years
0 Lacs
India
On-site
We’re a Y Combinator-backed startup building the world's best AI Code Reviewer. Trusted by some of the leading unicorns and backed by top Silicon Valley investors, we’re solving tough problems and pushing boundaries. In just the last 60 days alone, we’ve generated over $1.5M in revenue — and we’re just getting started.

Role Description
Own and develop our entire AI code quality & security platform.
Own initiatives across frontend, backend, and infrastructure.
Work closely with our design and engineering teams throughout the process.
Be hands-on with technical support, customer success, and implementation guidance.

What are we looking for?
2-5 years of full-time work experience in:
Backend: Python (FastAPI, Concurrency) — see the sketch after this posting
ReactJS, Node
Cloud/DevOps: AWS, Docker
A degree in Computer Science or a related field from a Tier 1 college (IITs / NITs)
Experience with LLMs

What’s in it for me?
You’ll be engineer #5 building the next billion-dollar devtools company.
Work side by side with founders and ship real impact from day one.
High ownership, great pay, real equity, zero fluff.
YC-backed, fast-growing, and just getting started.

We want people who:
Genuinely enjoy solving very hard engineering problems
Love high ownership
Learn quickly and do what it takes to deliver results
Work really hard (we mean it)
Move with urgency
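As a loose illustration of the "Python (FastAPI, Concurrency)" requirement, here is a minimal asyncio fan-out sketch. The review_file coroutine is a stand-in stub, not part of the company's product.

```python
# Illustrative concurrency sketch: review several files concurrently with asyncio.gather.
import asyncio

async def review_file(path: str) -> str:
    await asyncio.sleep(0.1)  # stand-in for an LLM or static-analysis call
    return f"{path}: no issues found"

async def main() -> None:
    paths = ["app.py", "models.py", "api.py"]  # hypothetical filenames
    results = await asyncio.gather(*(review_file(p) for p in paths))
    for line in results:
        print(line)

asyncio.run(main())
```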
Posted 5 days ago