
1414 OpenAI Jobs - Page 8

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

2.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Strong AI/ML or Software Developer profile mandatory.
Mandatory (Experience 1) - Must have 3+ years of experience in core software development (SDLC).
Mandatory (Experience 2) - Must have 2+ years of experience in AI/ML, preferably in the conversational AI domain (speech-to-text, text-to-speech, speech emotion recognition) or agentic AI systems.
Mandatory (Experience 3) - Must have hands-on experience in fine-tuning LLMs/SLMs, model optimization (quantization, distillation), and RAG.
Mandatory (Experience 4) - Hands-on programming experience in Python, TensorFlow, PyTorch, and model APIs (Hugging Face, LangChain, OpenAI, etc.).
Preferred (Education) - BTech/MTech in Computer Science, AI/ML, or related fields.
Preferred (Experience 1) - Exposure to agent-based simulation, reinforcement learning, or behavioral modeling.
Preferred (Experience 2) - Publications, patents, or open-source contributions in conversational AI or GenAI systems.
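A hedged sketch of the conversational-AI loop this listing describes (speech-to-text followed by an LLM reply), using the OpenAI Python SDK; the file name and model choices are placeholder assumptions, not requirements from the posting.

```python
# Illustrative only: transcribe a caller's audio, then generate a reply (openai>=1.0).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Speech-to-text: transcribe a caller's audio clip (hypothetical file name).
with open("caller_utterance.wav", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(model="whisper-1", file=audio_file)

# Generate the assistant's reply from the transcript.
reply = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model choice
    messages=[
        {"role": "system", "content": "You are a concise voice assistant."},
        {"role": "user", "content": transcript.text},
    ],
)
print("User said:", transcript.text)
print("Assistant:", reply.choices[0].message.content)
```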

Posted 5 days ago

Apply

10.0 years

0 Lacs

Greater Kolkata Area

Remote


About Illumina Technology Solutions, LLC At Illumina Technology Solutions, we are experiencing rapid growth across the U.S.A., Canada, India, and Pakistan and are proud to be a Microsoft Gold Partner. We are committed to providing technology solutions that help organizations thrive in a fast-changing digital landscape. Our vision is to become a leader in digital transformation for our clients, leveraging the power of the Microsoft digital ecosystem to drive innovation and growth. Visit us at www.illuminatechnology.com About The Role We are seeking a highly skilled AI Lead Engineer—technical Consultant with expertise in designing and developing scalable, high-performance AI solutions. The ideal candidate will have extensive experience in artificial intelligence, machine learning, and generative AI and strong leadership skills to drive innovation and technical excellence. Essential Functions Collaborate with stakeholders, including project managers and technical leads, to define and understand business and technical requirements Design, develop, and optimize AI-powered applications, ensuring scalability, performance, and security Lead discussions on AI architecture, development best practices, and solution design Implement and fine-tune large language models (LLMs) and generative AI applications Apply prompt engineering techniques to enhance AI model interactions and outputs Develop and maintain clean, extensible, and maintainable code using modern programming languages Integrate Azure OpenAI, Copilot Studio, Azure AI Search, and other AI/ML platforms into enterprise solutions Apply debugging tools and analyze system logs to ensure high-quality code and system performance Drive continuous improvements through Agile development, CI/CD, and DevOps best practices Collaborate with cross-functional teams across different geographies to deliver AI-driven solutions that align with business objectives Must Have Skills Hands-on experience with Gen AI development, Prompt engineering, RAG design patterns, and multi-agent workflow automation and integration, preferably via Microsoft AI products like Copilot Exposure and experience working on large-scale data-intensive platforms Hands-on experience with Python and deep learning programming skills using PyTorch, TensorFlow, or similar Exposure to AI performance, consistency, and efficiency improvements Microsoft Specific Certification and Industry Certifications are a must-have Required Qualifications Bachelor’s degree in computer science, Engineering, or a related technical field 10+ years of technical engineering experience with expertise in at least one of the following programming languages: Python, C, C++, C#, Java, JavaScript Experience in Prompt Engineering and LLM fine-tuning Understanding of Azure OpenAI, Copilot Studio, Azure AI Search, and M365 Declarative Agent (Preferred) Experience in large language models (LLM) fine-tuning and prompt engineering Expertise in machine learning frameworks such as PyTorch, TensorFlow, or Scikit-learn Experience with AI-driven applications, generative AI, and NLP models Hands-on experience with Agile methodologies and CI/CD pipelines for AI solutions Strong problem-solving skills, with the ability to handle complex and large-scale AI projects Effective communication and collaboration skills to work with remote and diverse teams Preferred Qualifications Experience with Machine Learning Platforms and Systems Prior experience working with Microsoft Cloud-based AI solutions Prior experience leading AI/ML teams and 
mentoring junior engineers. We are an equal opportunity employer. All applicants will be considered for employment regardless of race, color, religion, sex, sexual orientation, gender identity, national origin, veteran, or disability status.
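Since this role centers on Azure OpenAI integration, here is a minimal, hedged sketch of calling an Azure OpenAI chat deployment with the openai Python SDK; the endpoint, API version, and deployment name are placeholder assumptions.

```python
# Minimal Azure OpenAI chat call (openai>=1.0); values are illustrative, not from the listing.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="my-gpt4o-deployment",  # the Azure *deployment* name, not the model family name
    messages=[
        {"role": "system", "content": "You are an enterprise search assistant."},
        {"role": "user", "content": "Summarize the attached incident report."},
    ],
    temperature=0.2,
)
print(response.choices[0].message.content)
```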

Posted 5 days ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

Remote


About Illumina Technology Solutions, LLC At Illumina Technology Solutions, we are experiencing rapid growth across the U.S.A., Canada, India, and Pakistan and are proud to be a Microsoft Gold Partner. We are committed to providing technology solutions that help organizations thrive in a fast-changing digital landscape. Our vision is to become a leader in digital transformation for our clients, leveraging the power of the Microsoft digital ecosystem to drive innovation and growth. Visit us at www.illuminatechnology.com About The Role We are seeking a highly skilled AI Lead Engineer—technical Consultant with expertise in designing and developing scalable, high-performance AI solutions. The ideal candidate will have extensive experience in artificial intelligence, machine learning, and generative AI and strong leadership skills to drive innovation and technical excellence. Essential Functions Collaborate with stakeholders, including project managers and technical leads, to define and understand business and technical requirements Design, develop, and optimize AI-powered applications, ensuring scalability, performance, and security Lead discussions on AI architecture, development best practices, and solution design Implement and fine-tune large language models (LLMs) and generative AI applications Apply prompt engineering techniques to enhance AI model interactions and outputs Develop and maintain clean, extensible, and maintainable code using modern programming languages Integrate Azure OpenAI, Copilot Studio, Azure AI Search, and other AI/ML platforms into enterprise solutions Apply debugging tools and analyze system logs to ensure high-quality code and system performance Drive continuous improvements through Agile development, CI/CD, and DevOps best practices Collaborate with cross-functional teams across different geographies to deliver AI-driven solutions that align with business objectives Must Have Skills Hands-on experience with Gen AI development, Prompt engineering, RAG design patterns, and multi-agent workflow automation and integration, preferably via Microsoft AI products like Copilot Exposure and experience working on large-scale data-intensive platforms Hands-on experience with Python and deep learning programming skills using PyTorch, TensorFlow, or similar Exposure to AI performance, consistency, and efficiency improvements Microsoft Specific Certification and Industry Certifications are a must-have Required Qualifications Bachelor’s degree in computer science, Engineering, or a related technical field 10+ years of technical engineering experience with expertise in at least one of the following programming languages: Python, C, C++, C#, Java, JavaScript Experience in Prompt Engineering and LLM fine-tuning Understanding of Azure OpenAI, Copilot Studio, Azure AI Search, and M365 Declarative Agent (Preferred) Experience in large language models (LLM) fine-tuning and prompt engineering Expertise in machine learning frameworks such as PyTorch, TensorFlow, or Scikit-learn Experience with AI-driven applications, generative AI, and NLP models Hands-on experience with Agile methodologies and CI/CD pipelines for AI solutions Strong problem-solving skills, with the ability to handle complex and large-scale AI projects Effective communication and collaboration skills to work with remote and diverse teams Preferred Qualifications Experience with Machine Learning Platforms and Systems Prior experience working with Microsoft Cloud-based AI solutions Prior experience leading AI/ML teams and 
mentoring junior engineers. We are an equal opportunity employer. All applicants will be considered for employment regardless of race, color, religion, sex, sexual orientation, gender identity, national origin, veteran, or disability status.

Posted 5 days ago

Apply

0 years

0 Lacs

India

On-site


We're looking for an individual with:
Prior experience building RESTful APIs
Experience with at least one of the backend API frameworks (preferably FastAPI)
Hands-on experience with at least one modern ML/AI framework (PyTorch, TensorFlow)
Experience integrating LLMs into applications (OpenAI, Anthropic, or open-source)
Strong foundation in deep learning concepts and model training workflows
Database expertise: schema design, query optimization, both SQL and NoSQL
Experience with vector databases (Pinecone, Weaviate, Chroma) for RAG applications
Mathematical aptitude: comfortable with statistics, linear algebra, and algorithmic thinking

Your daily work would include:
Working on product features end-to-end (write backend logic, write APIs, connect APIs in the front end, and build a basic functional UI)
Working with RESTful APIs, SQL, and NoSQL databases
Training, fine-tuning, and deploying deep learning models for production use
Building robust LLM integration pipelines with proper error handling and fallback mechanisms
Implementing MLOps practices: model versioning, A/B testing, monitoring, and automated retraining
Working on multiple projects during the tenure, across multiple domains

Note: The working hours are flexible. We are a small team. Most of the work will happen asynchronously. Culture-wise, we are looking for people who are excited to learn new things, think creatively, are capable of figuring things out on their own (most of the time), and have a "getting things done" kind of attitude.
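As a rough illustration of the FastAPI + LLM integration work this posting describes, here is a hedged sketch of an endpoint that wraps an LLM call with basic error handling; the route, model, and payload shape are assumptions for illustration only.

```python
# Sketch of a FastAPI endpoint calling an LLM with a fallback error path.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from openai import OpenAI

app = FastAPI()
client = OpenAI()


class SummarizeRequest(BaseModel):
    text: str


@app.post("/summarize")
def summarize(req: SummarizeRequest) -> dict:
    try:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model
            messages=[{"role": "user", "content": f"Summarize:\n{req.text}"}],
        )
    except Exception as exc:  # fallback: surface a clean upstream error to the caller
        raise HTTPException(status_code=502, detail=str(exc))
    return {"summary": response.choices[0].message.content}
```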

Posted 5 days ago

Apply

8.0 years

0 Lacs

India

Remote


Job Title: AI Agentic Engineer – Remote Company: SEO Scientist AI Industry: AI-Powered SEO & Marketing Automation Location: Remote (India Preferred, Open Globally) Experience Required: 4–8 years Employment Type: Full-Time About Us: SEO Scientist AI is building next-gen AI agents that automate end-to-end SEO and marketing workflows—from crawling and keyword analysis to content briefs and technical execution. Think of us as your AI team member, not just a tool. We're here to challenge platforms like AirOps, Mazzal, AgenticFlow, and Relevance by building smarter, more adaptable agents. --- What You’ll Do: Design, develop, and deploy agentic workflows that solve SEO and marketing tasks end-to-end Use LLMs (OpenAI, Claude, etc.) in combination with AWS services (Lambda, Step Functions, DynamoDB, etc.) Build task-specific tools and integrate third-party APIs (Google Search Console, Ahrefs, Notion, Webflow, etc.) Collaborate with the SEO and content strategy team to automate repetitive, high-effort tasks Ensure scalability, observability, and prompt-tuning of agent-based systems --- Who You Are: 4–8 years of experience in backend or full-stack engineering Solid grasp of Python and cloud-native development (preferably AWS) Worked with vector databases (e.g., Pinecone, FAISS) and embeddings Familiar with LangChain, LlamaIndex, or similar agentic frameworks Understands how marketers and SEOs work and wants to make their jobs easier with AI Bonus if you’ve explored agent runtime frameworks or built personal AI tools --- Tech Stack We Use: Cloud: AWS (Lambda, Step Functions, S3, API Gateway) AI & LLM: OpenAI, Claude, LangChain, LlamaIndex Storage: DynamoDB, Postgres, Pinecone Tools: Notion, Webflow, Google Sheets, Zapier --- Why Join Us? Work remotely and lead innovations in AI workflows Build products used by SEO teams, agencies, and content ops worldwide Be an early part of a fast-growing AI-first company in a highly competitive market Work alongside founders, product managers, and marketing experts to ship fast --- Apply now to help us build the future of SEO with autonomous AI workflows. (Just click “Easy Apply” or drop us a message!)
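In the spirit of the Lambda + LLM workflow this listing describes, here is a hedged sketch of an agent step packaged as an AWS Lambda handler; the event shape, prompt, and model are illustrative assumptions, not the company's code.

```python
# Illustrative Lambda handler: draft an SEO content brief for a keyword from the event.
import json
from openai import OpenAI

client = OpenAI()


def lambda_handler(event, context):
    """Take a keyword from the triggering event and draft a content brief."""
    keyword = event.get("keyword", "")
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[
            {"role": "system", "content": "You draft concise SEO content briefs."},
            {"role": "user", "content": f"Write a content brief for the keyword: {keyword}"},
        ],
    )
    brief = response.choices[0].message.content
    return {"statusCode": 200, "body": json.dumps({"keyword": keyword, "brief": brief})}
```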

Posted 5 days ago

Apply

5.0 years

0 Lacs

India

On-site


We're looking for a Lead AI Engineer who thrives in fast-paced, early-stage product environments. You'll play a critical role in designing, implementing, and evolving our AI-powered infrastructure and DevOps automation platform. This includes leveraging leading LLMs like Claude or GPT to build agents for Infrastructure-as-Code (IaC), CI/CD orchestration, and Ops workflows while also enabling fine-tuned models with a focus on security, cost optimization, and reliability. This is a hands-on, cross-functional role where you will collaborate closely with backend engineers, DevOps experts, and product leadership. Responsibilities Architect and implement intelligent infrastructure agents using Claude/GPT APIs, with support for prompting, function calling, and fine-tuning. Design and develop AI-enabled workflows for provisioning (e.g., EKS, VPC, IAM), CI/CD pipelines, and operational playbooks. Build connectors to integrate LLM outputs with APIs, CLI tools, Terraform/OpenTofu providers, and observability systems. Work with vector stores (e.g., Milvus, Pinecone) and embedding pipelines to enable contextual memory and tool selection for agents. Collaborate on custom model tuning with secure guardrails and agent sandboxing mechanisms. Develop secure and modular APIs to allow frontend orchestration and runtime control of AI agents. Contribute to code reviews, shared libraries, and internal tooling to accelerate AI-driven development. Requirements 5+ years of experience in backend/AI software development with strong fundamentals in microservices and APIs. Deep experience using OpenAI/Anthropic APIs (e.g., GPT-4, Claude), prompt engineering, and advanced function calling mechanisms. Solid experience with Golang, Python, or Node.js for agent workflows, API services, or tooling. Understanding of LLM deployment strategies, prompt lifecycle, and AI-agent chaining concepts. Experience integrating with REST/gRPC APIs, message queues, and asynchronous job processing. Familiarity with vector stores (e.g., Milvus, FAISS), embeddings, and search optimization. Hands-on experience with secure AI workflows managing API tokens, prompt injection defense, and logging. Exposure to containerization (Docker), GitOps (e.g., GitHub Actions), and infrastructure systems. Nice-to-Have Familiarity with OpenTofu/Terraform, Kubernetes, and IaC principles. Experience fine-tuning models using open-source tools (e.g., LoRA, Hugging Face Transformers). Prior work on DevOps tooling, observability pipelines (OpenTelemetry, Prometheus), or infra agents. Interest in building low-code interfaces for orchestrating AI-driven infrastructure tasks. Why Join Ops0 You'll be part of a ground-floor team building a next-gen DevOps automation platform powered by AI agents. If you enjoy blending AI with infrastructure, architecting intelligent workflows, and creating real impact with hands-on ownership, this role is for you.
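A hedged sketch of the function-calling pattern this role mentions, applied to an infrastructure agent: the model returns a structured tool call rather than free text. The create_vpc tool and model name are illustrative assumptions; a real agent would dispatch the call to Terraform/OpenTofu or cloud APIs.

```python
# Sketch of LLM function calling for an infra agent (openai>=1.0); names are placeholders.
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "create_vpc",
        "description": "Provision a VPC with the given CIDR block",
        "parameters": {
            "type": "object",
            "properties": {"cidr_block": {"type": "string"}},
            "required": ["cidr_block"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model
    messages=[{"role": "user", "content": "Set up a VPC for staging using 10.1.0.0/16."}],
    tools=tools,
)

# The model replies with a structured tool call instead of prose.
call = response.choices[0].message.tool_calls[0]
print(call.function.name, call.function.arguments)
```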

Posted 5 days ago

Apply

0 years

0 Lacs

India

On-site


About Us: Hyperbrowser is building the fastest, smartest browser infrastructure for AI agents. We power real‑time web actions - scraping, crawling, extraction, automation - so developers can focus on 🤖 intelligence, not plumbing. We need a growth‑minded engineer to help us hack our way to scale. What you’ll own Growth experiments end‑to‑end: ideate, build, launch, measure. AI‑powered funnels: prototype flows that use OpenAI, Gemini, Claude, etc., to engage signups and convert free users into paid. Automation & tooling: write scripts to identify and reach out to target accounts, track blog/social performance, A/B test landing pages. Data wrangling: scrape usage metrics, assemble dashboards, run cohort analyses to find flywheels. Creative hacking: anything from building a viral Twitter bot to more developer tools - if it moves the needle, you build it. What you bring You’ve shipped code in Python or TypeScript before You’ve played with LLM APIs (OpenAI, Gemini, Claude, etc.) and know rate limits, cost tricks, prompt engineering, tool use, etc. You think in metrics: you can spin up a quick dashboard, interpret a spike/drop, and act quickly You’re hungry: no task is too small or too menial - if it helps us grow, you do it Hacker mentality: you love shipping something in hours, not weeks, and iterating fast Big Bonus if you’ve built bots, scrapers, or growth‑hack experiments before Why Hyperbrowser? High-leverage problems: reshape how AI agents browse the web. Fast feedback: deploy code today, see metrics tomorrow. Lean team: your impact isn’t diluted; your work drives real business outcomes. Learning: get mentorship in AI, product, biz dev, and metrics. Be practical, be scrappy, be the first growth engineer we can’t live without. Let’s go.

Posted 5 days ago

Apply

3.0 years

0 Lacs

India

Remote


🎯 Role : Backend Developer (Python/Django + LLM/RAG) 📍 Remote / Hybrid | 🕒 1–3 Years Experience We're building AI-first tools for the insurance ecosystem — from BrokerBuddy (quote comparison over PDFs + chatbot) to intelligent policy assistants and compliance readers. If you're passionate about LLMs, prompt engineering, and building backend systems that turn messy insurance data into actionable insights — let’s talk. What You’ll Be Doing: Building scalable backends using Python & Django Integrating LLM pipelines (OpenAI, open source) with RAG, embeddings & vector DBs Powering tools for brokers, insurers, and policyholders to auto-generate insights from PDFs, policies, and structured data Crafting prompt logic, chunking flows, and retrieval strategies to serve instant, contextual responses What We’re Looking For: ✅ Strong Python + Django fundamentals ✅ Experience working with LLMs and prompt design ✅ Familiarity with RAG concepts (chunking, embedding, query routing) ✅ Comfortable working with MongoDB/PostgreSQL, REST APIs ✅ Curious mindset and product-first thinking Bonus if you've worked with vector DBs (Pinecone, Chroma), LangChain, or insurance domain data.
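For the chunking flows mentioned above, here is a minimal, hedged sketch of a document chunker of the kind used before embedding; the chunk size, overlap, and input file name are arbitrary assumptions, not the team's actual pipeline.

```python
# Minimal overlapping-window chunker for RAG preprocessing (illustrative only).
def chunk_text(text: str, chunk_size: int = 800, overlap: int = 100) -> list[str]:
    """Split a document into overlapping character windows for embedding."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must be larger than overlap")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks


# Hypothetical input file representing a policy wording PDF already converted to text.
policy_text = open("policy_wording.txt", encoding="utf-8").read()
for i, chunk in enumerate(chunk_text(policy_text)):
    print(i, len(chunk))
```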

Posted 5 days ago

Apply

6.0 - 10.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Hiring: Generative AI Senior Developer Location : Gurgaon (Hybrid Mode) Experience : 6–10 Years (Min. 2+ years in GenAI) Key Responsibilities Design, develop, and optimize Generative AI models using LLMs (e.g., GPT, LLaMA, PaLM, Claude) and diffusion models. Build scalable training and deployment pipelines for custom foundation models and fine-tune existing LLMs on domain-specific datasets. Collaborate cross-functionally with product managers, data scientists, and engineering teams to integrate GenAI into real-world applications. Implement prompt engineering and retrieval-augmented generation (RAG) techniques to improve output quality and relevance. Develop and maintain APIs, tools, and SDKs to enable GenAI-driven features across products. Stay updated with the latest in AI research and translate cutting-edge advancements into production-ready solutions. Ensure responsible and ethical AI usage, actively mitigating model biases and risks. Continuously optimize performance, latency, and scalability of deployed AI services. Required Skills & Experience 6 to 10 years of experience in AI/ML development, including 2+ years in Generative AI. Proficiency in Python and experience with deep learning frameworks such as PyTorch or TensorFlow. Strong knowledge of NLP, transformers, embeddings, and attention mechanisms. Hands-on experience with tools like Hugging Face Transformers, LangChain, and OpenAI APIs. Practical exposure to vector databases (e.g., FAISS, Pinecone, Weaviate). Experience in building and deploying LLM-powered applications, such as intelligent chatbots or content generators. Familiarity with MLOps practices, including model versioning and deployment tools (Docker, Kubernetes, MLflow). Experience with cloud platforms: AWS, Azure, or GCP. Preferred Qualifications Master's or Ph.D. in Computer Science, AI/ML, Data Science, or related field. Publications or active contributions to AI research communities (e.g., NeurIPS, ICML, arXiv). Experience working with multi-modal models (text + image/audio). Knowledge of Responsible AI, hallucination mitigation, and bias detection (ref:hirist.tech)
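A small, hedged illustration of the Hugging Face Transformers tooling named in this listing; the checkpoint and prompt are placeholder choices, not specified by the role.

```python
# Quick text-generation check with a small open checkpoint (illustrative only).
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")  # placeholder checkpoint
out = generator(
    "Write a one-line product description for a smart water bottle:",
    max_new_tokens=40,
    do_sample=True,
)
print(out[0]["generated_text"])
```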

Posted 5 days ago

Apply

5.0 years

0 Lacs

Ahmedabad, Gujarat, India

Remote


E2M is not your regular digital marketing firm. We're an equal opportunity provider, founded on strong business ethics and driven by more than 300 experienced professionals. Our client base is made up of digital agencies that rely on us to solve bandwidth issues, reduce overheads, and boost profitability. We need driven, tech-savvy professionals like you to help us deliver next-gen solutions. If you're someone who dreams big and thrives in innovation, E2M has a place for you. Role Overview As a Python Developer – AI Implementation Specialist/AI Executor, you will be responsible for designing and integrating AI capabilities into production systems using Python and key ML libraries. This role requires a strong backend development foundation and a proven track record of deploying AI use cases using tools like TensorFlow, Keras, or OpenAI APIs. You'll work cross-functionally to deliver scalable AI-driven solutions. Key Responsibilities Design and develop backend solutions using Python, with a focus on AI-driven features. Implement and integrate AI/ML models using tools like OpenAI, Hugging Face, or LangChain. Use core Python libraries (NumPy, Pandas, TensorFlow, Keras) to process data, train, or implement models. Translate business needs into AI use cases and deliver working solutions. Collaborate with product, engineering, and data teams to define integration workflows. Develop REST APIs and microservices to deploy AI components within applications. Maintain and optimize AI systems for scalability, performance, and reliability. Keep pace with advancements in the AI/ML landscape and evaluate tools for continuous improvement. Required Skills & Qualifications Minimum 5+ years of overall experience, including at least 1 year in AI/ML integration and strong hands-on expertise in Python for backend development. Proficiency in libraries such as NumPy, Pandas, TensorFlow, and Keras Practical exposure to AI platforms/APIs (e.g., OpenAI, LangChain, Hugging Face) Solid understanding of REST APIs, microservices, and integration practices Ability to work independently in a remote setup with strong communication and ownership Excellent problem-solving and debugging capabilities

Posted 5 days ago

Apply

3.0 years

0 Lacs

Gurgaon, Haryana, India

Remote


About This Role Want to elevate your career by being a part of the world's largest asset manager? Do you thrive in an environment that fosters positive relationships and recognizes stellar service? Are analyzing complex problems and identifying solutions your passion? Look no further. BlackRock is currently seeking a candidate to become part of our Global Investment Operations Data Engineering team. We recognize that strength comes from diversity, and will embrace your rare skills, eagerness, and passion while giving you the opportunity to grow professionally and as an individual. We know you want to feel valued every single day and be recognized for your contribution. At BlackRock, we strive to empower our employees and actively engage your involvement in our success. With over USD $11.5 trillion of assets under management, we have an extraordinary responsibility: our technology and services empower millions of investors to save for retirement, pay for college, buy a home and improve their financial well-being. Come join our team and experience what it feels like to be part of an organization that makes a difference. Technology & Operations Technology & Operations(T&O) is responsible for the firm's worldwide operations across all asset classes and geographies. The operational functions are aligned with clients, products, fund structures and our Third-party provider networks. Within T&O, Global Investment Operations (GIO) is responsible for the development of the firm's operating infrastructure to support BlackRock's investment businesses worldwide. GIO spans Trading & Market Documentation, Transaction Management, Collateral Management & Payments, Asset Servicing including Corporate Actions and Cash & Asset Operations, and Securities Lending Operations. GIO provides operational service to BlackRock's Portfolio Managers and Traders globally as well as industry leading service to our end clients. GIO Engineering Working in close partnership with GIO business users and other technology teams throughout Blackrock, GIO Engineering is responsible for developing and providing data and software solutions that support GIO business processes globally. GIO Engineering solutions combine technology, data, and domain expertise to drive exception-based, function-agnostic, service-orientated workflows, data pipelines, and management dashboards. The Role – GIO Engineering Data Lead Work to date has been focused on building out robust data pipelines and lakes relevant to specific business functions, along with associated pools and Tableau / PowerBI dashboards for internal BlackRock clients. The next stage in the project involves Azure / Snowflake integration and commercializing the offering so BlackRock’s 150+ Aladdin clients can leverage the same curated data products and dashboards that are available internally. The successful candidate will contribute to the technical design and delivery of a curated line of data products, related pipelines, and visualizations in collaboration with SMEs across GIO, Technology and Operations, and the Aladdin business. Responsibilities Specifically, we expect the role to involve the following core responsibilities and would expect a successful candidate to be able to demonstrate the following (not in order of priority) Design, develop and maintain a Data Analytics Infrastructure Work with a project manager or drive the project management of team deliverables Work with subject matter experts and users to understand the business and their requirements. 
Help determine the optimal dataset and structure to deliver on those user requirements Work within a standard data / technology deployment workflow to ensure that all deliverables and enhancements are provided in a disciplined, repeatable, and robust manner Work with team lead to understand and help prioritize the team’s queue of work Automate periodic (daily/weekly/monthly/quarterly or other) reporting processes to minimize / eliminate associated developer BAU activities. Leverage industry standard and internal tooling whenever possible in order to reduce the amount of custom code that requires maintenance Experience 3+ years of experience in writing ETL, data curation and analytical jobs using Hadoop-based distributed computing technologies: Spark / PySpark, Hive, etc. 3+ years of knowledge and experience of working with large enterprise databases, preferably cloud-based databases / data warehouses like Snowflake on an Azure or AWS setup Knowledge and experience in working with Data Science / Machine Learning / GenAI frameworks in Python (Azure OpenAI, Meta, etc.) Knowledge and experience building reporting and dashboards using BI Tools: Tableau, MS PowerBI, etc. Prior experience working on source code version management tools like GitHub, etc. Prior experience working with and following Agile-based workflow paths and ticket-based development cycles Prior experience setting up infrastructure and working on Big Data analytics Strong analytical skills with the ability to collect, organize, analyse, and disseminate significant amounts of information with attention to detail and accuracy Experience working with SMEs / Business Analysts, and working with Stakeholders for sign-off Our Benefits To help you stay energized, engaged and inspired, we offer a wide range of benefits including a strong retirement plan, tuition reimbursement, comprehensive healthcare, support for working parents and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about. Our hybrid work model BlackRock’s hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock. About BlackRock At BlackRock, we are all connected by one mission: to help more and more people experience financial well-being. Our clients, and the people they serve, are saving for retirement, paying for their children’s educations, buying homes and starting businesses. Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress. This mission would not be possible without our smartest investment – the one we make in our employees. It’s why we’re dedicated to creating an environment where our colleagues feel welcomed, valued and supported with networks, benefits and development opportunities to help them thrive.
For additional information on BlackRock, please visit @blackrock | Twitter: @blackrock | LinkedIn: www.linkedin.com/company/blackrock BlackRock is proud to be an Equal Opportunity Employer. We evaluate qualified applicants without regard to age, disability, family status, gender identity, race, religion, sex, sexual orientation and other protected attributes at law.
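An illustrative PySpark snippet in the spirit of the ETL and data-curation work this listing describes; it is a sketch only, and the table names, columns, and S3 paths are assumptions rather than anything from the posting.

```python
# Sketch: curate settled trades into a daily summary table with PySpark.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("gio-curation-example").getOrCreate()

trades = spark.read.parquet("s3://example-bucket/raw/trades/")  # hypothetical source
daily_summary = (
    trades
    .filter(F.col("status") == "SETTLED")
    .groupBy("trade_date", "asset_class")
    .agg(
        F.count("*").alias("trade_count"),
        F.sum("notional").alias("total_notional"),
    )
)
daily_summary.write.mode("overwrite").parquet("s3://example-bucket/curated/daily_trade_summary/")
```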

Posted 5 days ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site


Selected Intern's Day-to-day Responsibilities Include Assist in building and maintaining web applications using React/Angular and .NET Core Develop REST APIs and backend logic Integrate third-party services (e.g., OpenAI, Firebase, Azure) Optimize frontend components for performance and responsiveness Write clean, maintainable, and scalable code Collaborate with senior developers and participate in code reviews About Company: Microzen is a Pune-based service provider, offering services like application modernization and application development services for web, Windows, iOS, and Android platforms. We provide one-stop IT solutions such as application testing services, cloud migration and other cloud services, software maintenance/enhancement, etc.

Posted 5 days ago

Apply

0 years

0 Lacs

India

Remote


Founding LLM Engineer Location: Remote About the Role: We are seeking an entrepreneurial and visionary Founding LLM Engineer to pioneer our large language model initiatives. You will have autonomy in identifying and building the right products, selecting optimal LLM architectures, and driving technical decisions from concept to launch. Responsibilities: Evaluate business needs and define innovative use-cases for large language models. Research, select, and implement the most suitable LLM architectures (e.g., GPT, LLaMA, Claude). Prototype, test, and deploy AI-driven solutions and iterate based on feedback. Collaborate closely with product, engineering, and data teams to integrate LLM capabilities effectively. Stay current with cutting-edge research and advancements in NLP and generative AI. Qualifications: Strong experience with large language models (LLMs), including fine-tuning, prompting strategies, and deployment. Proficient in Python and familiar with frameworks like Hugging Face, OpenAI APIs, and LangChain. Demonstrated ability to rapidly prototype and validate AI-driven concepts. Excellent problem-solving skills and capacity to make strategic technical choices independently. Prior startup experience or entrepreneurial mindset highly desirable. What We Offer: A key foundational role with significant impact and autonomy. Opportunity to shape the AI strategy and roadmap from inception. Competitive compensation and equity stake in the company. Dynamic and collaborative team culture.

Posted 5 days ago

Apply

0 years

0 Lacs

Delhi, India

Remote


About Us Astra is a cybersecurity SaaS company that makes otherwise chaotic pentests a breeze with its one-of-a-kind AI-led offensive Pentest Platform. Astra's continuous vulnerability scanner emulates hacker behavior to scan applications for 13,000+ security tests. CTOs and CISOs love Astra because it helps them to achieve continuous security at scale, fix vulnerabilities in record time, and seamlessly transition from DevOps to DevSecOps with Astra's powerful CI/CD integrations. Astra is loved by 800+ companies across 70+ countries. In 2024 Astra uncovered 2.5 million+ vulnerabilities for its customers, saving customers $110M+ in potential losses due to security vulnerabilities. We've been awarded by the President of France Mr. François Hollande at the La French Tech program and Prime Minister of India Shri Narendra Modi at the Global Conference on Cyber Security. Loom, MamaEarth, Muthoot Finance, Canara Robeco, Dream 11, OLX Autos etc. are a few of Astra’s customers. Job Description This is a remote position. Role Overview As Astra Security’s first AI Engineer, you will play a pivotal role in introducing and embedding AI into our security products. You will be responsible for designing, developing, and deploying AI applications leveraging both open-source models (Llama, Mistral, DeepSeek, etc.) and proprietary services (OpenAI, Anthropic). Your work will directly impact how AI is used to enhance threat detection, automate security processes, and improve intelligence gathering. This is an opportunity to not only build future AI models but also define Astra Security’s AI strategy, laying the foundation for future AI-driven security solutions. Key Responsibilities Lead the AI integration efforts within Astra Security, shaping the company’s AI roadmap Develop and optimize Retrieval-Augmented Generation (RAG) pipelines with multi-tenant capabilities Build and enhance RAG applications using LangChain, LangGraph, and vector databases (e.g. Milvus, Pinecone, pgvector). Implement efficient document chunking, retrieval, and ranking strategies. Optimize LLM interactions using embeddings, prompt engineering, and memory mechanisms. Work with graph databases (Neo4j or similar) for structuring and querying knowledge bases. Design multi-agent workflows using orchestration platforms like LangGraph or other emerging agent frameworks for AI-driven decision-making and reasoning. Integrate vector search, APIs and external knowledge sources into agent workflows. Exposure to end-to-end AI ecosystems like Hugging Face to accelerate AI development (while initial work won’t involve extensive model training, the candidate should be ready for fine-tuning, domain adaptation, and LLM deployment when needed) Design and develop AI applications using LLMs (Llama, Mistral, OpenAI, Anthropic, etc.) Build APIs and microservices to integrate AI models into backend architectures.
Collaborate with the product and engineering teams to integrate AI into Astra Security’s core offerings Stay up to date with the latest advancements in AI and security, ensuring Astra remains at the cutting edge What We Are Looking For Exceptional Python skills for AI/ML development Hands-on experience with LLMs and AI frameworks (LangChain, Transformers, RAG-based applications) Strong understanding of retrieval-augmented generation (RAG) and knowledge graphs Experience with AI orchestration tools (LangChain, LangGraph) Familiarity with graph databases (Neo4j or similar) Experience in Ollama for efficient AI model deployment for production workloads is a plus Experience deploying AI models using Docker Hands-on experience with Ollama setup and loading DeepSeek/Llama. Strong problem-solving skills and a self-starter mindset—you will be building AI at Astra from the ground up. Nice To Have Experience with AI deployment frameworks (e.g., BentoML, FastAPI, Flask, AWS) Background in cybersecurity or security-focused AI applications What We Offer Software Engineering Mindset: This role requires a strong software engineering mindset to build AI solutions from 0 to 1 and scale them based on business needs. The candidate should be comfortable designing, developing, testing, and deploying production-ready AI systems while ensuring maintainability, performance, and scalability. Why Join Astra Security? Own and drive the AI strategy at Astra Security from day one Fully remote, agile working environment. Good engineering culture with full ownership in design, development, release lifecycle. A wholesome opportunity where you get to build things from scratch, improve and ship code to production in hours, not weeks. Holistic understanding of SaaS and enterprise security business. Annual trips to beaches or mountains (last one was at Wayanad). Open and supportive culture. Health insurance & other benefits for you and your spouse. Maternity benefits included.
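A hedged sketch of the embedding-based retrieval step in a RAG pipeline like the one this role describes; it uses the OpenAI embeddings API and plain NumPy in place of a dedicated vector database, and the model name and documents are illustrative assumptions.

```python
# Sketch: embed documents and a query, then rank by cosine similarity.
import numpy as np
from openai import OpenAI

client = OpenAI()


def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in resp.data])


docs = [
    "SQL injection lets attackers run arbitrary queries through unsanitized input.",
    "Cross-site scripting injects malicious scripts into pages viewed by other users.",
]
doc_vectors = embed(docs)

query_vector = embed(["What is XSS?"])[0]
# Cosine similarity between the query and every document vector.
scores = doc_vectors @ query_vector / (
    np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(query_vector)
)
print(docs[int(np.argmax(scores))])
```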

Posted 5 days ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site


We are looking for a Mid-Level LLM Application Developer with 3–5 years of experience in software development, who is passionate about building intelligent applications using Azure OpenAI, Python, and LLM frameworks. This is a full-time role focused on designing, developing, and deploying scalable LLM-powered solutions, including chatbots, knowledge assistants, and RAG-based systems. You’ll be working closely with cross-functional teams to bring innovative AI solutions to life, leveraging the latest in generative AI and agentic technologies. Job Description: Responsibilities : Design and develop LLM-powered applications using Python and Azure OpenAI services. Extend digital products and platforms, and LLM Apps, with new capabilities Support adoption of digital platforms and LLM Apps, by onboarding new clients Driving automation to expedite and accelerate new client adoption Build end-to-end Retrieval-Augmented Generation (RAG) pipelines, integrating vector databases, semantic search and other related tools. Develop conversational agents and virtual assistants, using frameworks like LangChain or LlamaIndex. Craft effective prompts using advanced prompt engineering and prompt design techniques. Integrate LLMs with external tools, APIs, and business data systems. Apply Agentic AI patterns to RAG and AI Workflows, interacting with LLMs by orchestrating various agents together Deploy and manage applications using Azure Functions, Azure AI services, and serverless components. Ensure performance, scalability, and reliability of AI solutions on Azure. Collaborate across teams and participate in agile development processes. Required Skills (Must Have): Strong proficiency in Python programming language. Expertise in Azure OpenAI Service, including foundation model usage, GPT Model family, and integration. Build and deploy AI solutions leveraging Azure AI services (e.g., Cognitive Services, Azure AI Search). Deep understanding of vector databases and hybrid search capabilities built on top of Azure AI Search Deep experience in prompt engineering, including various prompting strategies (few-shot, chain-of-thought, etc.). Hands-on experience and deep expertise in building RAG pipelines with vector databases and tool integrations. Proven experience developing chatbots or virtual assistants using LLMs. Proficiency in at least one LLM application framework (preferably LangChain). In-depth understanding of LLM models, their capabilities, and applications. Good understanding of LLM evaluations, and how to evaluate LLM model outputs. Experience deploying with Azure Function Apps and broader Azure ecosystem. Solid grasp of API integrations and data workflow design. Solid experience building automation workflows and automation solutions for LLM Apps and Products, to support new client onboarding Solid experience with data and content indexing pipelines to set up new “knowledge bases” for RAG and LLM solutions Strong problem-solving skills and ability to deliver scalable, efficient code. Excellent communication and team collaboration abilities. Preferred Skills (Good to Have): Good understanding of using AI Agents and Agentic AI Patterns to integrate with LLMs Familiarity with Multi-Agent AI orchestration and agentic workflows. Experience building cloud-native services with serverless architectures. Understanding of NLP techniques and data transformation pipelines. Familiarity with LLMOps concepts and AI model lifecycle.
Qualifications : Bachelor’s degree in Computer Science, Computer Engineering, or a related field. 3+ years of experience in software development. Experience with LLM applications and cloud platforms. Strong understanding of software development methodologies (e.g., Agile). Location: DGS India - Pune - Kharadi EON Free Zone Brand: Dentsu Creative Time Type: Full time Contract Type: Consultant

Posted 5 days ago

Apply

12.0 - 18.0 years

0 Lacs

Tamil Nadu, India

Remote


Join us as we work to create a thriving ecosystem that delivers accessible, high-quality, and sustainable healthcare for all. This position requires expertise in designing, developing, debugging, and maintaining AI-powered applications and data engineering workflows for both local and cloud environments. The role involves working on large-scale projects, optimizing AI/ML pipelines, and ensuring scalable data infrastructure. As a PMTS, you will be responsible for integrating Generative AI (GenAI) capabilities, building data pipelines for AI model training, and deploying scalable AI-powered microservices. You will collaborate with AI/ML, Data Engineering, DevOps, and Product teams to deliver impactful solutions that enhance our products and services. Additionally, it would be desirable if the candidate has experience in retrieval-augmented generation (RAG), fine-tuning pre-trained LLMs, AI model evaluation, data pipeline automation, and optimizing cloud-based AI deployments. Responsibilities AI-Powered Software Development & API Integration Develop AI-driven applications, microservices, and automation workflows using FastAPI, Flask, or Django, ensuring cloud-native deployment and performance optimization. Integrate OpenAI APIs (GPT models, Embeddings, Function Calling) and Retrieval-Augmented Generation (RAG) techniques to enhance AI-powered document retrieval, classification, and decision-making. Data Engineering & AI Model Performance Optimization Design, build, and optimize scalable data pipelines for AI/ML workflows using Pandas, PySpark, and Dask, integrating data sources such as Kafka, AWS S3, Azure Data Lake, and Snowflake. Enhance AI model inference efficiency by implementing vector retrieval using FAISS, Pinecone, or ChromaDB, and optimize API latency with tuning techniques (temperature, top-k sampling, max tokens settings). Microservices, APIs & Security Develop scalable RESTful APIs for AI models and data services, ensuring integration with internal and external systems while securing API endpoints using OAuth, JWT, and API Key Authentication. Implement AI-powered logging, observability, and monitoring to track data pipelines, model drift, and inference accuracy, ensuring compliance with AI governance and security best practices. AI & Data Engineering Collaboration Work with AI/ML, Data Engineering, and DevOps teams to optimize AI model deployments, data pipelines, and real-time/batch processing for AI-driven solutions. Engage in Agile ceremonies, backlog refinement, and collaborative problem-solving to scale AI-powered workflows in areas like fraud detection, claims processing, and intelligent automation. Cross-Functional Coordination and Communication Collaborate with Product, UX, and Compliance teams to align AI-powered features with user needs, security policies, and regulatory frameworks (HIPAA, GDPR, SOC2). Ensure seamless integration of structured and unstructured data sources (SQL, NoSQL, vector databases) to improve AI model accuracy and retrieval efficiency. Mentorship & Knowledge Sharing Mentor junior engineers on AI model integration, API development, and scalable data engineering best practices, and conduct knowledge-sharing sessions. Education & Experience Required 12-18 years of experience in software engineering or AI/ML development, preferably in AI-driven solutions. Hands-on experience with Agile development, SDLC, CI/CD pipelines, and AI model deployment lifecycles. 
Bachelor’s Degree or equivalent in Computer Science, Engineering, Data Science, or a related field. Proficiency in full-stack development with expertise in Python (preferred for AI), Java Experience with structured & unstructured data: SQL (PostgreSQL, MySQL, SQL Server) NoSQL (OpenSearch, Redis, Elasticsearch) Vector Databases (FAISS, Pinecone, ChromaDB) Cloud & AI Infrastructure AWS: Lambda, SageMaker, ECS, S3 Azure: Azure OpenAI, ML Studio GenAI Frameworks & Tools: OpenAI API, Hugging Face Transformers, LangChain, LlamaIndex, AutoGPT, CrewAI. Experience in LLM deployment, retrieval-augmented generation (RAG), and AI search optimization. Proficiency in AI model evaluation (BLEU, ROUGE, BERT Score, cosine similarity) and responsible AI deployment. Strong problem-solving skills, AI ethics awareness, and the ability to collaborate across AI, DevOps, and data engineering teams. Curiosity and eagerness to explore new AI models, tools, and best practices for scalable GenAI adoption. About Athenahealth Here’s our vision: To create a thriving ecosystem that delivers accessible, high-quality, and sustainable healthcare for all. What’s unique about our locations? From an historic, 19th century arsenal to a converted, landmark power plant, all of athenahealth’s offices were carefully chosen to represent our innovative spirit and promote the most positive and productive work environment for our teams. Our 10 offices across the United States and India — plus numerous remote employees — all work to modernize the healthcare experience, together. Our Company Culture Might Be Our Best Feature. We don't take ourselves too seriously. But our work? That’s another story. athenahealth develops and implements products and services that support US healthcare: It’s our chance to create healthier futures for ourselves, for our family and friends, for everyone. Our vibrant and talented employees — or athenistas, as we call ourselves — spark the innovation and passion needed to accomplish our goal. We continue to expand our workforce with amazing people who bring diverse backgrounds, experiences, and perspectives at every level, and foster an environment where every athenista feels comfortable bringing their best selves to work. Our size makes a difference, too: We are small enough that your individual contributions will stand out — but large enough to grow your career with our resources and established business stability. Giving back is integral to our culture. Our athenaGives platform strives to support food security, expand access to high-quality healthcare for all, and support STEM education to develop providers and technologists who will provide access to high-quality healthcare for all in the future. As part of the evolution of athenahealth’s Corporate Social Responsibility (CSR) program, we’ve selected nonprofit partners that align with our purpose and let us foster long-term partnerships for charitable giving, employee volunteerism, insight sharing, collaboration, and cross-team engagement. What can we do for you? Along with health and financial benefits, athenistas enjoy perks specific to each location, including commuter support, employee assistance programs, tuition assistance, employee resource groups, and collaborative workspaces — some offices even welcome dogs. In addition to our traditional benefits and perks, we sponsor events throughout the year, including book clubs, external speakers, and hackathons. 
And we provide athenistas with a company culture based on learning, the support of an engaged team, and an inclusive environment where all employees are valued. We also encourage a better work-life balance for athenistas with our flexibility. While we know in-office collaboration is critical to our vision, we recognize that not all work needs to be done within an office environment, full-time. With consistent communication and digital collaboration tools, athenahealth enables employees to find a balance that feels fulfilling and productive for each individual situation.
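As a hedged illustration of the FAISS-based vector retrieval named in this listing's requirements, here is a minimal sketch; the dimensionality and vectors are random placeholders rather than real embeddings.

```python
# Sketch: exact nearest-neighbor search over placeholder vectors with FAISS.
import faiss
import numpy as np

dim = 384
rng = np.random.default_rng(0)
doc_vectors = rng.random((1000, dim), dtype=np.float32)  # stand-ins for document embeddings

index = faiss.IndexFlatL2(dim)  # exact L2 search; IVF/HNSW indexes trade accuracy for speed
index.add(doc_vectors)

query = rng.random((1, dim), dtype=np.float32)
distances, ids = index.search(query, 5)  # top-5 nearest documents
print(ids[0], distances[0])
```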

Posted 5 days ago

Apply

2.0 years

0 Lacs

Pune, Maharashtra, India

On-site


We are seeking a highly skilled Development Lead with expertise in Generative AI and Large Language models, in particular, to join our dynamic team. As a Development Lead, you will play a key role in developing cutting-edge LLM applications and systems for our clients. Your primary focus will be on driving innovation and leveraging LLMs to create impactful solutions. The ideal candidate will have a strong technical background and a passion for pushing the boundaries of LLM apps. Job Description: Responsibilities : Develop and extend digital products and creative applications, leveraging LLM technologies at their core. Lead a product development and product operations team to further develop, enhance and extend existing digital products built on top of LLMs Lead client onboarding, client rollout, and client adoption efforts, maximizing use of the product across multiple clients Lead enhancements and extensions for client specific capabilities and requests Successful leadership and delivery of projects involving Cloud Gen-AI Platforms and Cloud AI Services, Data Pre-processing, Cloud AI PaaS Solutions, LLMs Ability to work with Base Foundation LLM Models, Fine Tuned LLM models, working with a variety of different LLMs and LLM APIs. Conceptualize, Design, build and develop experiences and solutions which demonstrate the minimum required functionality within tight timelines. Collaborate with creative technology leaders and cross-functional teams to test feasibility of new ideas, help refine and validate client requirements and translate them into working prototypes, and from thereon to scalable Gen-AI solutions. Research and explore emerging trends and techniques in the field of generative AI and LLMs to stay at the forefront of innovation. Research and explore new products, platforms, and frameworks in the field of generative AI on an ongoing basis and stay on top of this very dynamic, evolving field Design and optimize Gen-AI Apps for efficient data processing and model leverage. Implement LLMOps processes, and the ability to manage Gen-AI apps and models across the lifecycle from prompt management to results evaluation. Evaluate and fine-tune models to ensure high performance and accuracy. Collaborate with engineers to develop and integrate AI solutions into existing systems. Stay up-to-date with the latest advancements in the field of Gen-AI and contribute to the company's technical knowledge base. Must-Have: Strong Expertise in Python development, and the Python Dev ecosystem, including various frameworks/libraries for front-end and back-end Python dev, data processing, API integration, and AI/ML solution development. Minimum 2 years hands-on experience in working with Large Language Models Hands-on Experience with building production solutions using a variety of different. Experience with multiple LLMs and models - including Azure OpenAI GPT model family primarily, but also Google Gemini, Anthropic Claude, etc. Deep Experience and Expertise in Cloud Gen-AI platforms, services, and APIs, primarily Azure OpenAI. Solid Hands-on, and Deep Experience working with RAG pipelines and Enterprise technologies and solutions / frameworks - including LangChain, Llama Index, etc. Solid Hands-on Experience with developing end-to-end RAG Pipelines. Solid Hands-on Experience with AI and LLM Workflows Experience with LLM model registries (Hugging Face), LLM APIs, embedding models, etc. Experience with vector databases (Azure AI Search, AWS Kendra, FAISS, Milvus etc.). 
Experience with LLM evaluation frameworks such as Ragas, and their use to evaluate / improve LLM model outputs Experience in data preprocessing, and post-processing model / results evaluation. Hands-on Experience with API Integration and orchestration across multiple platforms Good Experience with Workflow Builders and Low-Code Workflow Builder tools such as Azure Logic Apps, or n8n (Nodemation) Good Experience with Serverless Cloud Applications, including Cloud / Serverless Functions with Azure Good Experience with Automation Workflows and building Automation solutions to facilitate rapid onboarding for digital products Ability to lead design and development teams, for Full-Stack Gen-AI Apps and Products/Solutions, built on LLMs and Diffusion models. Ability to lead design and development for Creative Experiences and Campaigns, built on LLMs and Diffusion models. Nice-to-Have Skills (not essential, but useful) : Good understanding of Transformer Models and how they work. Hands-on Experience with Fine-Tuning LLM models at scale. Good Experience with Agent-driven Gen-AI architectures and solutions, and working with AI Agents. Some experience with Single-Agent and Multi-Agent Orchestration solutions Hands-on Experience with Diffusion Models and AI Art models, including SDXL, DALL-E 3, Adobe Firefly, Midjourney, is highly desirable. Hands-on Experience with Image Processing and Creative Automation at scale, using AI models. Hands-on experience with image and media transformation and adaptation at scale, using AI Art and Diffusion models. Hands-on Experience with dynamic creative use cases, using AI Art and Diffusion Models. Hands-on Experience with Fine-Tuning Diffusion models and Fine-tuning techniques such as LoRA for AI Art models as well. Hands-on Experience with AI Speech models and services, including Text-to-Speech and Speech-to-Text. Good Background and Foundation with Machine Learning solutions and algorithms Experience with designing, developing, and deploying production-grade machine learning solutions. Experience with Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs). Experience with custom ML model development and deployment Proficiency in deep learning frameworks such as TensorFlow, PyTorch, or Keras. Strong knowledge of machine learning algorithms and their practical applications. Experience with Cloud ML Platforms such as Azure ML Service, AWS SageMaker, and NVIDIA AI Foundry. Hands-on Experience with Video Generation models. Hands-on Experience with 3D Generation Models. Location: DGS India - Pune - Kharadi EON Free Zone Brand: Dentsu Creative Time Type: Full time Contract Type: Consultant

Posted 5 days ago

Apply

2.0 years

0 Lacs

Gurugram, Haryana, India

On-site


We are seeking a highly skilled Development Lead with expertise in Generative AI and Large Language Models (LLMs) to join our dynamic team. As a Development Lead, you will play a key role in developing cutting-edge LLM applications and systems for our clients. Your primary focus will be on driving innovation and leveraging LLMs to create impactful solutions. The ideal candidate will have a strong technical background and a passion for pushing the boundaries of LLM applications.

Job Description:

Responsibilities:
Develop and extend digital products and creative applications built on LLM technologies.
Lead a product development and product operations team to further develop, enhance, and extend existing digital products built on top of LLMs.
Lead client onboarding, rollout, and adoption efforts, maximizing use of the product across multiple clients.
Lead enhancements and extensions for client-specific capabilities and requests.
Successfully lead and deliver projects involving cloud Gen-AI platforms and cloud AI services, data pre-processing, cloud AI PaaS solutions, and LLMs.
Work with base foundation LLMs and fine-tuned LLMs, across a variety of different LLMs and LLM APIs.
Conceptualize, design, and build experiences and solutions that demonstrate the minimum required functionality within tight timelines.
Collaborate with creative technology leaders and cross-functional teams to test the feasibility of new ideas, refine and validate client requirements, and translate them into working prototypes and, from there, into scalable Gen-AI solutions.
Research emerging trends and techniques in generative AI and LLMs to stay at the forefront of innovation.
Track new products, platforms, and frameworks in generative AI on an ongoing basis and stay on top of this fast-evolving field.
Design and optimize Gen-AI apps for efficient data processing and model usage.
Implement LLMOps processes and manage Gen-AI apps and models across the lifecycle, from prompt management to results evaluation.
Evaluate and fine-tune models to ensure high performance and accuracy.
Collaborate with engineers to develop and integrate AI solutions into existing systems.
Stay up to date with the latest advancements in Gen-AI and contribute to the company's technical knowledge base.

Must-Have:
Strong expertise in Python development and the Python ecosystem, including frameworks/libraries for front-end and back-end development, data processing, API integration, and AI/ML solution development.
Minimum of 2 years of hands-on experience working with Large Language Models.
Hands-on experience building production solutions with a variety of LLMs and models, primarily the Azure OpenAI GPT model family, but also Google Gemini, Anthropic Claude, etc.
Deep experience and expertise in cloud Gen-AI platforms, services, and APIs, primarily Azure OpenAI.
Solid hands-on experience with RAG pipelines and enterprise frameworks such as LangChain and LlamaIndex, including building end-to-end RAG pipelines (see the illustrative sketch after this listing).
Solid hands-on experience with AI and LLM workflows.
Experience with LLM model registries (Hugging Face), LLM APIs, embedding models, etc.
Experience with vector databases and retrieval services (Azure AI Search, AWS Kendra, FAISS, Milvus, etc.).
Experience with LLM evaluation frameworks such as Ragas, and their use to evaluate and improve LLM outputs.
Experience in data preprocessing and in post-processing and evaluating model results.
Hands-on experience with API integration and orchestration across multiple platforms.
Good experience with workflow builders and low-code workflow tools such as Azure Logic Apps or n8n (Nodemation).
Good experience with serverless cloud applications, including cloud/serverless functions on Azure.
Good experience building automation workflows and solutions to facilitate rapid onboarding for digital products.
Ability to lead design and development teams for full-stack Gen-AI apps and products/solutions built on LLMs and diffusion models.
Ability to lead design and development for creative experiences and campaigns built on LLMs and diffusion models.

Nice-to-Have Skills (not essential, but useful):
Good understanding of Transformer models and how they work.
Hands-on experience fine-tuning LLMs at scale.
Good experience with agent-driven Gen-AI architectures and AI agents, including single-agent and multi-agent orchestration solutions.
Hands-on experience with diffusion models and AI art models, including SDXL, DALL-E 3, Adobe Firefly, and Midjourney, is highly desirable.
Hands-on experience with image processing and creative automation at scale using AI models, including image and media transformation and adaptation with AI art and diffusion models.
Hands-on experience with dynamic creative use cases using AI art and diffusion models.
Hands-on experience fine-tuning diffusion models, including techniques such as LoRA for AI art models.
Hands-on experience with AI speech models and services, including text-to-speech and speech-to-text.
Good foundation in machine learning solutions and algorithms, including designing, developing, and deploying production-grade machine learning solutions.
Experience with Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs).
Experience with custom ML model development and deployment.
Proficiency in deep learning frameworks such as TensorFlow, PyTorch, or Keras.
Strong knowledge of machine learning algorithms and their practical applications.
Experience with cloud ML platforms such as Azure ML, AWS SageMaker, and NVIDIA AI Foundry.
Hands-on experience with video generation models.
Hands-on experience with 3D generation models.

Location: DGS India - Pune - Kharadi EON Free Zone
Brand: Dentsu Creative
Time Type: Full time
Contract Type: Consultant
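The sketch below is a minimal, hedged illustration of the kind of RAG pipeline this listing describes. It assumes the `openai` Python SDK (version 1.0 or later) against an Azure OpenAI resource and uses a toy in-memory vector store; the endpoint, key, deployment names, and documents are placeholders, not details from the listing.

```python
# Minimal, illustrative RAG sketch: embed a few documents, retrieve the most
# similar one for a query, and answer with a chat model grounded in that
# context. Endpoint, key, and deployment names are placeholders.
import numpy as np
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key="YOUR_AZURE_OPENAI_KEY",                 # placeholder
    api_version="2024-02-01",
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",
)

DOCS = [
    "Our platform supports batch resizing and smart cropping of images.",
    "Customer refunds are processed within 14 business days.",
]

def embed(texts):
    # One embedding vector per input string; model is the embedding deployment name.
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

DOC_VECS = embed(DOCS)

def retrieve(query, k=1):
    # Rank documents by cosine similarity to the query embedding.
    q = embed([query])[0]
    sims = DOC_VECS @ q / (np.linalg.norm(DOC_VECS, axis=1) * np.linalg.norm(q))
    return [DOCS[i] for i in np.argsort(sims)[::-1][:k]]

def answer(query):
    context = "\n".join(retrieve(query))
    resp = client.chat.completions.create(
        model="gpt-4o-mini",                          # placeholder chat deployment name
        messages=[
            {"role": "system",
             "content": f"Answer using only the following context:\n{context}"},
            {"role": "user", "content": query},
        ],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    print(answer("How long do refunds take?"))
```

In a production system of the kind the listing targets, the in-memory store would typically be replaced by a vector database or retrieval service (Azure AI Search, FAISS, Milvus), and retrieval depth, prompts, and outputs would be evaluated with a framework such as Ragas.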

Posted 5 days ago

Apply

4.0 years

40 - 50 Lacs

Cuttack, Odisha, India

Remote

Linkedin logo

Experience: 4.00+ years
Salary: INR 4,000,000-5,000,000 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-Time Permanent position (payroll and compliance to be managed by Crop.Photo)
(*Note: This is a requirement for one of Uplers' clients - Crop.Photo)

What do you need for this opportunity?
Must-have skills required: Customer-Centric Approach, NumPy, OpenCV, PIL, PyTorch

Crop.Photo is looking for:
Our engineers don’t just write code. They frame product logic, shape UX behavior, and ship features. No PMs handing down tickets. No design handoffs. If you think like an owner and love combining deep ML logic with hard product edges — this role is for you. You’ll be working on systems focused on the transformation and generation of millions of visual assets for small-to-large enterprises at scale.

What You’ll Do
Build and own AI-backed features end to end, from ideation to production — including layout logic, smart cropping, visual enhancement, out-painting, and GenAI workflows for background fills.
Design scalable APIs that wrap vision models such as BiRefNet, YOLOv8, Grounding DINO, SAM, CLIP, and ControlNet into batch and real-time pipelines.
Write production-grade Python code to manipulate and transform image data using NumPy, OpenCV (cv2), PIL, and PyTorch (see the illustrative sketch after this listing).
Handle pixel-level transformations — from custom masks and color space conversions to geometric warps and contour ops — with speed and precision.
Integrate your models into our production web app (AWS-based Python/Java backend) and optimize them for latency, memory, and throughput.
Frame problems when specs are vague — you’ll help define what “good” looks like, and then build it.
Collaborate with product, UX, and other engineers without relying on formal handoffs — you own your domain.

What You’ll Need
2–3 years of hands-on experience with vision and image generation models such as YOLO, Grounding DINO, SAM, CLIP, Stable Diffusion, VITON, or TryOnGAN — including experience with inpainting and outpainting workflows using Stable Diffusion pipelines (e.g., Diffusers, InvokeAI, or custom-built solutions).
Strong hands-on knowledge of NumPy, OpenCV, PIL, PyTorch, and image visualization/debugging techniques.
1–2 years of experience working with popular LLM APIs such as OpenAI, Anthropic, and Gemini, and knowledge of how to compose multi-modal pipelines.
Solid grasp of production model integration — model loading, GPU/CPU optimization, async inference, caching, and batch processing.
Experience solving real-world visual problems like object detection, segmentation, composition, or enhancement.
Ability to debug and diagnose visual output errors — e.g., weird segmentation artifacts, off-center crops, broken masks.
Deep understanding of image processing in Python: array slicing, color formats, augmentation, geometric transforms, contour detection, etc.
Experience building and deploying FastAPI services and containerizing them with Docker for AWS-based infra (ECS, EC2/GPU, Lambda).
A customer-centric approach — you think about how your work affects end users and product experience, not just model performance.
A quest for high-quality deliverables — you write clean, tested code and debug edge cases until they’re truly fixed.
The ability to frame problems from scratch and work without strict handoffs — you build from a goal, not a ticket.

Who You Are
You’ve built systems — not just prototypes.
You care about both ML results and the system’s behavior in production.
You’re comfortable taking a rough business goal and shaping the technical path to get there.
You’re energized by product-focused AI work — things that users feel and rely on.
You’ve worked in, or want to work in, a startup-grade environment: messy, fast, and impactful.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
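As a hedged illustration of the pixel-level Python work described above, here is a minimal sketch of a mask-and-crop step using OpenCV and NumPy. The file names, threshold value, and padding are placeholder assumptions, and a real pipeline would likely obtain the mask from a segmentation model rather than a fixed threshold.

```python
# Illustrative sketch: load an image, build a rough foreground mask, and save
# a tight crop around the largest detected region. File names are placeholders.
import cv2
import numpy as np

img = cv2.imread("input.jpg")                      # BGR uint8 array
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Threshold bright backgrounds to a binary foreground mask. In practice this
# mask would usually come from a segmentation model instead.
_, mask = cv2.threshold(gray, 240, 255, cv2.THRESH_BINARY_INV)

# Find contours and take the bounding box of the largest one.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    pad = 10                                       # small margin around the subject
    y0, y1 = max(y - pad, 0), min(y + h + pad, img.shape[0])
    x0, x1 = max(x - pad, 0), min(x + w + pad, img.shape[1])
    cv2.imwrite("cropped.jpg", img[y0:y1, x0:x1])  # array slicing does the crop
```

The same array-slicing pattern applies when the binary mask comes from a segmentation model; only the source of the mask changes.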

Posted 5 days ago

Apply

Exploring OpenAI Jobs in India

OpenAI is a leading artificial intelligence research laboratory known for its cutting-edge work in the field. Job seekers in India interested in roles built around OpenAI technologies have a wide range of positions to choose from. In this article, we explore this job market, including top hiring locations, the average salary range, career progression, related skills, and common interview questions.

Top Hiring Locations in India

  1. Bangalore
  2. Hyderabad
  3. Pune
  4. Mumbai
  5. Delhi

These cities are known for their thriving tech communities, and companies there are actively hiring for OpenAI-related roles.

Average Salary Range

The salary range for OpenAI professionals in India varies based on experience level. Entry-level positions can expect to earn between INR 6-10 lakhs per annum, while experienced professionals can earn upwards of INR 20 lakhs per annum.

Career Path

A typical career path in this field might include roles such as Junior AI Engineer, AI Engineer, Senior AI Engineer, AI Research Scientist, and AI Research Lead.

Related Skills

In addition to expertise in OpenAI's models and APIs, professionals in this field often benefit from skills in machine learning, natural language processing, neural networks, and programming languages such as Python.

Interview Questions

  • What is OpenAI and what are its main goals? (basic)
  • Can you explain the concept of reinforcement learning? (medium)
  • How do you handle bias and fairness in AI models? (medium)
  • What is the difference between supervised and unsupervised learning? (basic)
  • Can you discuss a project where you implemented OpenAI technologies? (advanced; see the example snippet after this list)
  • How do you evaluate the performance of an AI model? (medium)
  • What is the importance of ethics in AI research? (basic)
  • Explain the concept of transfer learning in AI. (medium)
  • How would you approach a problem where you need to generate human-like text using AI? (advanced)
  • What are some common challenges faced in AI research and development? (medium)
  • How do you stay updated with the latest trends in AI and machine learning? (basic)
  • Can you discuss a time when you had to troubleshoot a complex AI model? (advanced)
  • What is the role of data preprocessing in AI projects? (basic)
  • How do you handle missing data in a dataset for an AI project? (medium)
  • Can you explain the concept of overfitting in machine learning? (basic)
  • Discuss a project where you had to work with large datasets in AI. (medium)
  • How do you ensure the security and privacy of data in AI projects? (medium)
  • What are some limitations of current AI technologies? (basic)
  • How do you approach hyperparameter tuning in AI models? (medium)
  • Can you discuss a project where you collaborated with a cross-functional team in AI development? (advanced)
  • What is the role of explainability in AI models? (medium)
  • How do you handle imbalanced datasets in AI projects? (medium)
  • Can you explain the difference between AI, machine learning, and deep learning? (basic)
  • How do you assess the performance of an AI model in real-world scenarios? (medium)
  • What do you think the future holds for AI research and development? (basic)
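For the question about implementing OpenAI technologies, interviewers often expect candidates to walk through concrete code. The snippet below is a minimal, illustrative call to the chat completions endpoint using the official `openai` Python SDK; the model name is a placeholder and the API key is assumed to be set in the environment.

```python
# Hedged illustration for interview preparation: a minimal chat completion
# request with the official `openai` Python SDK (version 1.0 or later).
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])  # key from environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Explain overfitting in two sentences."},
    ],
)
print(response.choices[0].message.content)
```

Being able to run a small, working call like this and then discuss prompt design, error handling, and cost or latency trade-offs around it is usually more convincing in an interview than theory alone.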

Closing Remark

As you explore opportunities in the OpenAI job market in India, remember to enhance your skills, stay updated with the latest trends, and prepare thoroughly for interviews. With dedication and passion for AI, you can embark on a rewarding career in this exciting field. Good luck!

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies