
4114 Retrieval Jobs - Page 17

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

2.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


About This Role
Wells Fargo is seeking a Software Engineer - Gen AI.

In this role, you will:
• Participate in low to moderately complex initiatives and projects associated with the technology domain, including installation, upgrades, and deployment efforts
• Identify opportunities for service quality and availability improvements within the technology domain environment
• Design, code, test, debug, and document for low to moderately complex projects and programs associated with the technology domain, including upgrades and deployments
• Review and analyze technical assignments or challenges that are related to low to medium risk deliverables and that require research, evaluation, and selection of alternative technology domains
• Present recommendations for resolving issues, or escalate issues as needed to meet established service level agreements
• Exercise some independent judgment while developing an understanding of the given technology domain in reference to security and compliance requirements
• Provide information to technology colleagues, internal partners, and stakeholders

Required Qualifications:
• 2+ years of software engineering experience, or equivalent demonstrated through one or a combination of the following: work experience, training, military experience, education

Desired Qualifications:
• Strong proficiency in Python/Java and LLM orchestration frameworks (LangChain, LangGraph)
• Basic knowledge of model context protocols, RAG architectures, and embedding techniques
• Experience with model evaluation frameworks and metrics for LLM performance
• Proficiency in frontend development with React.js for AI applications
• Experience with UI/UX design patterns specific to AI interfaces
• Experience with vector databases and efficient retrieval methods
• Knowledge of prompt engineering techniques and best practices
• Experience with containerization and microservices architecture
• Strong understanding of semantic search and document retrieval systems
• Working knowledge of both structured and unstructured data processing
• Experience with version control using GitHub and CI/CD pipelines
• Experience working with globally distributed teams in Agile scrums
• Experience with Google ADK (preferred but not mandatory), AutoGen, and OpenAI APIs
• Strong verbal and written communication skills

Job Expectations:
• Understanding of enterprise use cases for Generative AI
• Knowledge of responsible AI practices and ethical considerations
• Ability to optimize AI solutions for performance and cost
• Well versed in MLOps concepts for LLM applications
• Staying current with rapidly evolving Gen AI technologies and best practices
• Experience implementing security best practices for AI applications

Posting End Date: 1 Jul 2025. Job posting may come down early due to volume of applicants.

We Value Equal Opportunity
Wells Fargo is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other legally protected characteristic. Employees support our focus on building strong customer relationships balanced with a strong risk mitigating and compliance-driven culture which firmly establishes those disciplines as critical to the success of our customers and company. They are accountable for execution of all applicable risk programs (Credit, Market, Financial Crimes, Operational, Regulatory Compliance), which includes effectively following and adhering to applicable Wells Fargo policies and procedures, appropriately fulfilling risk and compliance obligations, timely and effective escalation and remediation of issues, and making sound risk decisions. There is emphasis on proactive monitoring, governance, risk identification and escalation, as well as making sound risk decisions commensurate with the business unit's risk appetite and all risk and compliance program requirements.
Candidates applying to job openings posted in Canada: Applications for employment are encouraged from all qualified candidates, including women, persons with disabilities, aboriginal peoples and visible minorities. Accommodation for applicants with disabilities is available upon request in connection with the recruitment process.

Applicants With Disabilities: To request a medical accommodation during the application or interview process, visit Disability Inclusion at Wells Fargo.

Drug and Alcohol Policy: Wells Fargo maintains a drug free workplace. Please see our Drug and Alcohol Policy to learn more.

Wells Fargo Recruitment and Hiring Requirements: Third-party recordings are prohibited unless authorized by Wells Fargo. Wells Fargo requires you to directly represent your own experiences during the recruiting and hiring process.

Reference Number: R-469289
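The semantic search and document retrieval named in the desired qualifications reduce, at their core, to nearest-neighbour lookup over embedding vectors. A minimal pure-Python sketch, with made-up three-dimensional "embeddings" standing in for real model output:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, index, k=2):
    """Return the texts of the k documents most similar to the query."""
    scored = sorted(index, key=lambda d: cosine(query_vec, d["vec"]), reverse=True)
    return [d["text"] for d in scored[:k]]

# Toy "document index"; a real system would store model-produced embeddings
# in a vector database instead of hand-written 3-d vectors.
index = [
    {"text": "Resetting a forgotten password", "vec": [0.9, 0.1, 0.0]},
    {"text": "Wire transfer cutoff times", "vec": [0.1, 0.8, 0.3]},
    {"text": "Updating account contact details", "vec": [0.7, 0.2, 0.1]},
]
top = retrieve([1.0, 0.1, 0.0], index, k=2)
```

Production vector databases apply the same similarity ranking, but over approximate nearest-neighbour indices rather than a brute-force scan.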

Posted 4 days ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site


Who We Are
Bynd is redefining financial intelligence through advanced AI, transforming how leading investment banks, private equity firms, and equity researchers globally analyze and act upon critical information. Our founding team includes a Partner from Apollo ($750B AUM) and AI engineers from UIUC, IIT, and other top-tier institutions. Operating as both a research lab and a product company, we build cutting-edge retrieval systems and AI-driven workflow automation for knowledge-intensive financial tasks.

Role Overview
As an AI Intern at Bynd, you'll work at the intersection of cutting-edge GenAI systems and rigorous classical ML evaluation methodologies. Your primary responsibility will be to build and refine evaluation pipelines for our existing AI-driven financial intelligence systems. You'll collaborate closely with the founding team and top financial domain experts to ensure our models are not only powerful but also measurable, explainable, and reliable. If you're excited by the idea of working hands-on with state-of-the-art LLMs, experimenting with RAG systems, and building frameworks that make AI outputs trustworthy and actionable, this role is made for you.

Responsibilities
• Design, implement, and iterate on evaluation pipelines for existing AI/ML systems, particularly GenAI-based and RAG-based architectures.
• Develop test sets, metrics, and validation frameworks aligned with financial use cases.
• Analyze model performance (both quantitative and qualitative) to uncover insights, gaps, and opportunities for improvement.
• Work alongside full-stack and ML engineers to integrate evaluation systems into CI/CD workflows.
• Assist in data collection, benchmark tasks, and A/B testing setups for LLM responses.
• Stay up to date with academic and industry advancements in evaluation frameworks, prompt testing, and trustworthy AI.

Preferred
• Prior hands-on experience with GenAI systems (e.g., OpenAI, Claude, Mistral), including prompt design and retrieval-augmented generation (RAG).
• Solid understanding of classical ML concepts like training-validation splits, overfitting, data leakage, and cross-validation.
• Familiarity with tools such as Weights & Biases, LangSmith, or custom logging/benchmarking suites.
• Comfort with Python, evaluation libraries (e.g., sklearn, evaluate, bert-score, BLEU/ROUGE), and backend integration.
• Experience working with unstructured financial data (PDFs, tables, earnings reports, etc.) is a massive plus.

What We're Looking For
We're looking for a fast learner with deep intellectual curiosity and strong fundamentals. You should be comfortable reasoning through ambiguity, rapidly testing hypotheses, and communicating technical decisions with clarity. You're someone who thinks not just about building intelligent systems, but about how we measure intelligence meaningfully. This is an opportunity to work closely with a high-caliber founding team and ship impactful systems used by decision-makers at global financial institutions. If you're passionate about building AI that works, and works reliably, come build with us.
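Alongside the BLEU/ROUGE and bert-score libraries the listing names, a common lightweight baseline for scoring an LLM answer against a reference is token-overlap F1. A minimal sketch (the example strings are invented):

```python
from collections import Counter

def token_f1(prediction, reference):
    """Token-overlap F1 between a predicted answer and a reference answer."""
    pred = prediction.lower().split()
    ref = reference.lower().split()
    # Multiset intersection counts each shared token at most min(count) times.
    overlap = sum((Counter(pred) & Counter(ref)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

score = token_f1("revenue grew 12 percent", "revenue grew 12%")
```

An evaluation pipeline would run a metric like this over a test set and track the aggregate score per model or prompt version.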

Posted 4 days ago

Apply

3.0 - 5.0 years

0 Lacs

Bengaluru East, Karnataka, India

On-site


Technical Requirements

Must-Have:
• 3-5 years of hands-on experience in Oracle PL/SQL development.
• Strong proficiency in writing and optimizing complex SQL queries, stored procedures, functions, packages, and triggers.
• Solid understanding of Oracle database concepts, including data types, indexes, constraints, and views.
• Experience with SQL tuning and performance optimization techniques.
• Familiarity with Oracle development tools such as SQL Developer or Toad.
• Experience with version control systems (e.g., Git, SVN).

Good to Have:
• Basic understanding of data modeling principles.
• Exposure to Oracle Forms and Reports is a plus.
• Familiarity with shell scripting (Unix/Linux) for database automation.
• Knowledge of Agile/Scrum development methodologies.
• Experience with large-scale transactional systems.
• Understanding of data warehousing concepts or ETL processes.

Roles and Responsibilities

Development and Implementation:
• Design, develop, and implement efficient and scalable PL/SQL stored procedures, functions, packages, triggers, and views.
• Write and optimize complex SQL queries for data retrieval, manipulation, and reporting.
• Collaborate with business analysts and solution architects to understand requirements and translate them into technical designs.
• Participate in the full software development lifecycle (SDLC), including requirements analysis, design, coding, testing, and deployment.

Code Quality and Standards:
• Ensure adherence to established coding standards, best practices, and architectural guidelines.
• Perform thorough unit testing of developed code and assist with integration testing.
• Identify and debug issues in existing PL/SQL code, providing timely and effective solutions.

Performance Tuning and Optimization:
• Analyze and optimize the performance of existing PL/SQL code and SQL queries.
• Utilize Oracle performance monitoring tools (e.g., Explain Plan, SQL Trace) to identify and resolve performance bottlenecks.
• Contribute to discussions on database design improvements and indexing strategies to enhance application performance.

Documentation and Support:
• Create and maintain clear, concise technical documentation for developed modules and features.
• Provide support for production issues, analyzing and resolving database-related problems.
• Collaborate with DBAs on schema changes, data migrations, and other database-related activities.

Collaboration and Mentorship:
• Work effectively within a team environment, actively participating in team meetings and discussions.
• Potentially mentor junior developers, offering guidance on PL/SQL coding best practices and problem-solving techniques.
• Communicate technical concepts clearly to both technical and non-technical stakeholders.

Additional Information:
• Education: Bachelor's degree in Computer Science, Information Technology, or a related field.
• Soft Skills: Excellent analytical and problem-solving skills; strong attention to detail and commitment to quality; good communication and interpersonal skills; ability to work independently and collaboratively in a team environment; proactive and eager to learn new technologies and concepts; ability to manage time effectively and handle multiple tasks.
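The tuning loop described under Performance Tuning and Optimization (inspect the access path, add an index, re-check the plan) can be sketched with SQLite from Python's standard library standing in for Oracle: EXPLAIN QUERY PLAN plays the role Explain Plan plays in Oracle. Table and column names here are invented:

```python
import sqlite3

# Illustrative only: the posting concerns Oracle PL/SQL; SQLite is used so the
# sketch is runnable anywhere. The tuning idea is the same: check the access
# path, add an index, and confirm the plan moves from a full scan to an index search.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 100, i * 1.5) for i in range(1000)])

query = "SELECT SUM(total) FROM orders WHERE customer_id = 42"

# Without an index on customer_id, the plan is a full table scan.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()

conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")

# With the index, the plan becomes an index search on idx_orders_customer.
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
```

In Oracle, the equivalent check compares the Explain Plan output (full table scan versus index range scan) before and after creating the index.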

Posted 4 days ago

Apply

6.0 years

15 - 17 Lacs

India

Remote


Note: This is a remote role with occasional office visits. Candidates from Mumbai or Pune will be preferred.

About The Company
Operating at the intersection of Artificial Intelligence, Cloud Infrastructure, and Enterprise SaaS, we create data-driven products that power decision-making for Fortune 500 companies and high-growth tech firms. Our multidisciplinary teams ship production-grade generative-AI and Retrieval-Augmented Generation (RAG) solutions that transform telecom, finance, retail, and healthcare workflows without compromising on scale, security, or speed.

Role & Responsibilities
• Build & ship LLM/RAG solutions: design, train, and productionize advanced ML and generative-AI models (GPT-family, T5) that unlock new product capabilities.
• Own data architecture: craft schemas, ETL/ELT pipelines, and governance processes to guarantee high-quality, compliant training data on AWS.
• End-to-end MLOps: implement CI/CD, observability, and automated testing (Robot Framework, JMeter, XRAY) for reliable model releases.
• Optimize retrieval systems: engineer vector indices, semantic search, and knowledge-graph integrations that deliver low-latency, high-relevance results.
• Cross-functional leadership: translate business problems into measurable ML solutions, mentor junior scientists, and drive sprint ceremonies.
• Documentation & knowledge-sharing: publish best practices and lead internal workshops to scale AI literacy across the organization.

Skills & Qualifications
• Must-Have – Technical Depth: 6+ years building ML pipelines in Python; expert in feature engineering, evaluation, and AWS services (SageMaker, Bedrock, Lambda).
• Must-Have – Generative AI & RAG: proven track record shipping LLM apps with LangChain or similar, vector databases, and synthetic-data augmentation.
• Must-Have – Data Governance: hands-on experience with metadata, lineage, data cataloging, and knowledge-graph design (RDF/OWL/SPARQL).
• Must-Have – MLOps & QA: fluency in containerization, CI/CD, and performance testing; ability to embed automation within GitLab-based workflows.
• Preferred – Domain Expertise: background in telecom or large-scale B2B platforms where NLP and retrieval quality are mission-critical.
• Preferred – Full-Stack & Scripting: familiarity with Angular or modern JS for rapid prototyping, plus shell scripting for orchestration.

Benefits & Culture Highlights
• High-impact ownership: green-field autonomy to lead flagship generative-AI initiatives used by millions.
• Flex-first workplace: hybrid schedule, generous learning stipend, and dedicated cloud credits for experimentation.
• Inclusive, data-driven culture: celebrate research publications, OSS contributions, and diverse perspectives while solving hard problems together.

Skills: data, modern JavaScript, cloud, vector databases, Angular, pipelines, CI, containerization, ML, AWS, LangChain, shell scripting, MLOps, performance testing, knowledge-graph design (RDF/OWL/SPARQL), feature engineering, CI/CD, Python, AWS services (SageMaker, Bedrock, Lambda), synthetic-data augmentation, generative AI, data cataloging, metadata management, lineage, data governance
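The knowledge-graph and lineage requirements above reduce to storing and querying subject-predicate-object triples, the data model behind RDF and SPARQL. A minimal sketch of SPARQL-style pattern matching over an in-memory triple set; every identifier here is invented:

```python
# Toy RDF-style triple store. Real systems would use a graph database with a
# SPARQL endpoint; the pattern-matching idea is the same.
triples = {
    ("model:churn_v2", "derivedFrom", "dataset:calls_2024"),
    ("dataset:calls_2024", "ownedBy", "team:telecom"),
    ("model:churn_v2", "deployedOn", "aws:sagemaker"),
}

def match(s=None, p=None, o=None):
    """Return triples matching the pattern; None acts as a SPARQL-like wildcard."""
    return sorted(
        t for t in triples
        if (s is None or t[0] == s)
        and (p is None or t[1] == p)
        and (o is None or t[2] == o)
    )

# Lineage question: what facts do we know about model:churn_v2?
lineage = match(s="model:churn_v2")
```

Data-cataloging tools answer exactly this kind of query, e.g. tracing which dataset a deployed model was trained on.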

Posted 4 days ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site


Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Job Description & Summary
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. Those in artificial intelligence and machine learning at PwC will focus on developing and implementing advanced AI and ML solutions to drive innovation and enhance business processes. Your work will involve designing and optimising algorithms, models, and systems to enable intelligent decision-making and automation.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

A career within Data and Analytics services will provide you with the opportunity to help organizations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organizational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organizations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.

Responsibilities:
· Design and implement agentic AI architectures, including planning, memory, tool use, multi-agent collaboration, and feedback loops.
· Build and integrate with large language models (LLMs), including fine-tuning, prompt engineering, and retrieval-augmented generation (RAG).
· Develop agents capable of autonomous task execution, dynamic decision-making, and long-horizon planning.
· Lead development of tools for self-reflection, memory persistence, and contextual awareness in AI systems.
· Create or improve pipelines for multimodal generative AI, such as text-to-image, code generation, or synthetic media creation.
· Work with APIs, open-source tools (LangChain, AutoGen, OpenAI, Hugging Face), and cloud infrastructure to deploy production-grade agents.
· Collaborate with product, design, and research teams to align capabilities with user needs and ethical AI practices.
· Stay up to date with the latest research and developments in agentic AI, LLMs, and generative AI.

Mandatory skill sets: LangChain, AutoGen
Preferred skill sets: LangGraph, LangChain
Years of experience required: 3-7
Education qualification: B.Tech / M.Tech / MBA / MCA
Degrees/Field of Study required: Master of Engineering, Bachelor of Engineering, Master of Business Administration
Required Skills: Generative AI
Optional Skills: Accepting Feedback, Active Listening, AI Implementation, Analytical Thinking, C++ Programming Language, Communication, Complex Data Analysis, Creativity, Data Analysis, Data Infrastructure, Data Integration, Data Modeling, Data Pipeline, Data Quality, Deep Learning, Embracing Change, Emotional Regulation, Empathy, GPU Programming, Inclusion, Intellectual Curiosity, Java (Programming Language), Learning Agility, Machine Learning {+ 25 more}
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
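The plan-act-observe loop behind the agentic architectures this role describes can be sketched with a stub planner standing in for a real LLM; in production the planner would be a model call (e.g., via LangChain or AutoGen). All tasks, tools, and rules here are toy examples:

```python
# Toy agent loop: plan (stub "LLM"), act (tool call), observe (store in memory).
TOOLS = {
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
    "lookup": lambda key: {"capital_of_france": "Paris"}.get(key, "unknown"),
}

def stub_llm(task):
    """Stand-in planner: maps a task to a list of (tool, argument) steps.
    A real agent would obtain this plan from an LLM."""
    plans = {
        "What is 6 * 7?": [("calculator", "6 * 7")],
        "Capital of France?": [("lookup", "capital_of_france")],
    }
    return plans.get(task, [])

def run_agent(task):
    memory = []                       # simple episodic memory of observations
    for tool, arg in stub_llm(task):  # plan -> act -> observe
        memory.append(TOOLS[tool](arg))
    return memory[-1] if memory else "no plan"

answer = run_agent("What is 6 * 7?")
```

Frameworks like LangGraph add exactly what this sketch lacks: persistent memory, feedback loops, and multi-agent coordination around the same basic loop.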

Posted 4 days ago

Apply

7.0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote


Software AG helps companies to manage and optimize their operations, infrastructure and technology with products that simplify complexity, increase transparency and prepare organizations for change. Trusted by the world's best brands for more than 50 years, Software AG's AI-enabled process intelligence, application development, high-performance database, and strategic portfolio management solutions are used by banks, retailers, manufacturers, governments and more. Software AG's Adabas database & Natural development platform are used by the world's leading organizations to build and deploy high-performance, mission-critical applications for IBM Z®, Linux® and cloud. Governments and businesses (in finance, manufacturing, retail and more) tailor these applications to give their organization a distinct competitive advantage and optimize them to meet the most demanding operational service level agreements. With a pledge to innovate Adabas & Natural to 2050 and beyond, we ensure our customers' mission-critical Adabas & Natural applications are Future ready. Now.

Senior COBOL Developer – IBM z/OS Mainframe (Performance Focus)

Be you, join us. We are seeking a Senior COBOL Developer with strong expertise in developing and maintaining mission-critical, enterprise-grade applications on IBM z/OS Mainframe systems. The successful candidate will have deep experience working with VSAM and Db2 databases, as well as a proven track record in performance optimization within high-volume environments.

Essential Skills
• Design, develop, and maintain mission-critical, enterprise-grade COBOL applications that interact with VSAM datasets and Db2 databases.
• Analyze and improve performance of complex mainframe programs, with a focus on efficient data access and throughput.
• Create and optimize SQL queries and access paths for Db2 to support high-performance transaction and batch processing.
• Maintain and manage VSAM clusters and ensure efficient data retrieval and storage operations.
• Collaborate with system architects, business analysts, and QA teams to deliver reliable, scalable, and maintainable solutions.
• Support and troubleshoot production issues related to data access, performance, and batch execution.
• Document technical designs, workflows, and system configurations.

Minimum Requirements
• 7+ years of experience developing COBOL applications on IBM z/OS Mainframe.
• Strong expertise with VSAM (KSDS, ESDS, RRDS) and Db2 (SQL, DCLGEN, BIND/REBIND).
• Proficient in JCL, utilities (IDCAMS, IEBGENER, SORT, etc.), and integration with batch processes.
• Experience in performance tuning of COBOL applications, especially regarding I/O efficiency, Db2 access paths, and indexing strategies.
• Familiarity with tools such as File-AID, IBM Debug Tool/Expeditor, Endevor, and SPUFI/DSNTEP2.
• Strong analytical and troubleshooting skills.
• Excellent communication skills and the ability to work both independently and in a team environment.

What's in it for you?
• Earn competitive total compensation and receive comprehensive country-specific medical and other benefits.
• Enjoy time and location flexibility with our Hybrid Working Model, which allows a remote workshare of up to 60%. Work anywhere in your country, or abroad for up to 10 days per year.
• Set yourself up for success in your new role by upgrading your home office space using your one-time hybrid work payment.
• Lean on the Employee Assistance Program for support during some of life's most common but difficult challenges.

Posted 4 days ago

Apply

4.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Company Description
S2M Health is a leading provider of risk adjustment solutions for health plans and provider groups. Offering comprehensive risk adjustment analytics, medical record retrieval, coding services, and claims processing, S2M Health empowers healthcare organizations to access holistic risk adjustment solutions. With a commitment to collaboration, quality, and transparency, S2M Health remains a trustworthy partner in risk adjustment services.

We are looking for a dynamic and self-driven Pre-Sales Executive to bridge the gap between client needs and business offerings, both for our core RCM services and emerging healthcare products. This role is crucial in qualifying leads, supporting the sales team with solution proposals, and driving business growth.

Key Responsibilities

🔹 Lead Qualification & Engagement
• Respond to inbound inquiries and qualify potential clients based on fit and business needs
• Conduct discovery calls to understand pain points and propose relevant service/product offerings
• Maintain CRM hygiene and track lead status, conversations, and next steps

🔹 Solutioning & Proposal Development
• Collaborate with delivery and product teams to create tailored service proposals, presentations, and scope documents
• Prepare RFP/RFI responses and client-facing decks with precision and professionalism
• Support demo preparation and assist with solution storytelling during sales conversations

🔹 Product Pre-Sales Support
• Understand product roadmap, features, and client value proposition
• Assist in giving product walkthroughs or coordinating with the product manager for detailed demos
• Act as a bridge between client feedback and internal product improvement discussions

🔹 Collaboration & Reporting
• Work closely with the sales, delivery, and product teams to align go-to-market strategy
• Provide weekly reports on lead funnel, pipeline health, and sales support metrics
• Support in organizing webinars, product launches, and marketing initiatives as needed

Qualifications & Skills
• 2–4 years of experience in pre-sales, business development, or solution consulting in healthcare or IT services
• Knowledge of RCM (Revenue Cycle Management), Risk Adjustment, or HCC Coding is preferred
• Strong communication and presentation skills
• Familiarity with CRM tools like HubSpot or Zoho
• A problem-solver with attention to detail and the ability to multitask
• Bachelor's degree in business, healthcare, or life sciences (MBA is a plus)

What We Offer
• A mission-driven team building solutions that matter in U.S. healthcare
• Exposure to both the product and services sides of healthcare operations
• Opportunities for growth across sales, strategy, and solutioning
• Dynamic team culture with strong leadership support

To apply, write to us at: 📩 hr@s2mhealth.com
📍 Subject: Application for Pre-Sales Executive – RCM & Product

Posted 4 days ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Description
Amazon's Selection Monitoring team is responsible for making the biggest catalog on the planet even bigger. Our systems process billions of products to algorithmically find products not already sold on Amazon and programmatically add them to the Amazon catalog. We apply parallel processing, machine learning and deep learning algorithms to evaluate products and brands in order to identify and prioritize products and brands to be added to Amazon's catalog. The datasets produced by our team are used by teams across Amazon to improve: product information, search and discoverability, pricing, and delivery experience. Our work involves building state-of-the-art Information Retrieval (IR) systems to mine the web and automatically create structured entities from unstructured/semi-structured data. We constantly stretch the boundaries of large scale distributed systems, Elastic Computing, Big Data, and SOA technologies to tackle challenges at Amazon's global scale. Come join us in our journey to make everything, and yes, we do mean *everything*, that anyone wants to buy, available on Amazon!

We are looking for SDEs with strong technical knowledge, an established background in engineering large scale software systems, and a passion for solving challenging problems. The role demands a high-performing and flexible candidate who can take responsibility for the success of the system and drive solutions from design to coding, testing, and deployment, to achieve results in a fast paced environment.

Key job responsibilities
• Work with Sr. SDEs and Principal Engineers to drive the technical and architectural vision of SM systems responsible for generation of structured domain entities from structured/semi-structured data.
• Develop systems and extensible frameworks for complete lifecycle management of domain entities and inter-entity relationships.
• Build scalable platform capabilities for data processing, metadata generation, and guardrails.
• Solve complex problems in automated identity generation, web-to-Amazon namespace translation, and classification of products.
• Design and develop solutions for efficient storage and vending/search of products and related information.
• Utilize serverless and big data technologies to develop efficient algorithms that operate on large datasets.
• Lead and mentor junior engineers, and drive best practices around design, coding, testability, and security.

Basic Qualifications
• 3+ years of non-internship professional software development experience
• 2+ years of non-internship design or architecture (design patterns, reliability, and scaling) experience with new and existing systems
• Experience programming with at least one software programming language
• Bachelor's degree in Computer Science; advanced degrees preferred
• Experience building complex software systems that have been successfully delivered to customers
• Deep technical expertise and hands-on architectural understanding of distributed and service-oriented architectures
• Has delivered large-scale enterprise software systems or large-scale online services
• Solid programming skills in OO languages (Java/Scala/C++/Python, etc.) and a deep understanding of object-oriented design
• Advanced knowledge of data structures and at ease optimizing algorithms

Preferred Qualifications
• 3+ years of full software development life cycle experience, including coding standards, code reviews, source control management, build processes, testing, and operations
• Bachelor's degree in computer science or equivalent
• Master's degree in computer science or equivalent
• A deep understanding of the software development life cycle and a good track record of shipping software on time
• Experience in data mining, machine learning algorithms, rules engines, and workflow systems
• Deep understanding of SOA with proven ability in building highly scalable and fault-tolerant systems using cloud computing technologies
• Deep understanding of the MapReduce paradigm, with experience building solutions using Big Data technologies like Spark, Hive, etc.
• Experience developing efficient algorithms that operate on large datasets
• Exposure to AWS technologies is a big plus

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company: ADCI - Karnataka
Job ID: A3017885
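The MapReduce paradigm named in the preferred qualifications can be illustrated at toy scale in pure Python: each record is mapped to partial results, and the partials are reduced into one aggregate. Frameworks like Spark and Hive distribute this same pattern across machines:

```python
from collections import Counter
from functools import reduce

# Toy word-count: the canonical MapReduce example, run in-process.
records = [
    "books on amazon",
    "books and toys",
    "toys on sale",
]

def mapper(record):
    """Map phase: one record -> partial per-word counts."""
    return Counter(record.split())

def reducer(acc, partial):
    """Reduce phase: merge partial counts into the running aggregate."""
    return acc + partial

word_counts = reduce(reducer, map(mapper, records), Counter())
```

Because the reducer is associative, partial results can be combined in any order, which is what lets a cluster compute the map phase in parallel and merge the results afterwards.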

Posted 4 days ago

Apply

0 years

0 Lacs

Tamil Nadu, India

On-site


About the Role:
V-Accel is actively building Agentic AI into our SaaS products. We are looking for ML Engineers who can fine-tune LLMs, implement RAG pipelines, and build AI agents for automation, chat, and task execution.

Responsibilities:
• Build, fine-tune, and deploy LLMs for custom business use cases
• Create RAG (Retrieval Augmented Generation) pipelines with vector DBs
• Develop AI agents and workflows for intelligent task execution
• Integrate OpenAI, LangChain, Pinecone, or similar stacks into web apps
• Work with backend and frontend teams to deliver AI features

Requirements:
• Strong foundation in Python, LangChain, LLMs, Hugging Face, OpenAI
• Experience with vector databases (e.g., Pinecone, Weaviate, FAISS)
• Applied experience building chatbots, agents, and RAG apps
• Knowledge of prompt engineering, few-shot learning, and embeddings
• Bonus: Experience in AI-based SaaS or automation tools
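In the RAG pipelines this role describes, the step after vector retrieval is assembling the retrieved chunks into a grounded prompt for the LLM. A minimal sketch; the template wording and example data are invented:

```python
def build_rag_prompt(question, retrieved_chunks, max_chunks=3):
    """Assemble retrieved context chunks into a grounded, citable prompt."""
    context = "\n".join(
        f"[{i + 1}] {chunk}" for i, chunk in enumerate(retrieved_chunks[:max_chunks]))
    return (
        "Answer using only the context below. Cite chunk numbers.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\n"
        "Answer:"
    )

prompt = build_rag_prompt(
    "What is the refund window?",
    ["Refunds are accepted within 30 days.",
     "Shipping takes 5 business days."],
)
```

Numbering the chunks lets the model cite its sources, which makes the answer auditable: a useful property when the pipeline feeds chat or automation features.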

Posted 4 days ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Minimum qualifications: PhD degree in Computer Science, a related technical field, or equivalent practical experience. Experience coding in one of the following programming languages including but not limited to: C, C++, Java, or Python. Experience in one or more of the following: architecting or developing distributed systems, concurrency, multi-threading, or synchronization. Preferred qualifications: Experience with performance, reliability, systems data analysis, visualization tools, or debugging. Experience in code and system health, diagnosis and resolution, and software test engineering. Research experience in algorithms, architecture, artificial intelligence, compilers, database, data mining, distributed systems, machine learning, networking, or systems. Experience with performance, reliability, systems data analysis, visualization tools, architecture, compilers, database, data mining, networking or systems. Experience with Unix/Linux, Kernel development, microcontrollers, SoC, device drivers, hardware, power management, ARM processors, performance optimization, file systems, bootloading, firmware, x86 assembly, system BIOS, or hardware/software integration. About The Job Google Cloud's software engineers build the next-generation technologies that transform how billions of users connect, explore, and interact with information and each other. We're looking for engineers who bring fresh ideas across areas like information retrieval, distributed computing, large-scale system design, networking, data storage, security, AI, and natural language processing—the list keeps growing. As a Software Engineer, you’ll work on projects critical to Google Cloud’s evolving needs, with the flexibility to move between teams and initiatives as both you and our business grow. You'll be empowered to think like an owner, proactively identifying customer needs, taking action, and driving innovation. 
We value engineers who are versatile, display leadership, and eagerly handle challenges across the full stack. Within Google Cloud, the Machine Learning, Systems, and Cloud AI (MSCA) organization creates category-defining AI/ML capabilities built on Google’s frameworks, infrastructure, and services. We design and manage the software, hardware, and ML systems infrastructure that power Google services like Search and YouTube, and Google Cloud products. As a PhD Software Engineer in MSCA, your research expertise will help solve real-world problems at a massive scale. You'll collaborate on innovative projects in areas such as AI, ML, and distributed systems, contributing to products used by billions. With thousands of PhDs across Google, your academic background will be part of a strong community of researchers and engineers shaping the future of technology. We prioritize security, efficiency, and reliability in everything we do, from developing TPUs to operating one of the world’s largest networks, while shaping the future of hyperscale computing. Google Cloud accelerates every organization’s ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google’s cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems. Responsibilities Write product or system development code. Participate in, or lead design reviews with peers and stakeholders to decide on available technologies. Review code developed by other developers and provide feedback to ensure best practices (e.g., style guidelines, checking code in, accuracy, testability, and efficiency). Contribute to existing documentation or educational content and adapt content based on product/program updates and user feedback. 
Triage product or system issues and debug/track/resolve by analyzing the sources of issues and the impact on hardware, network, or service operations and quality. Lead and collaborate on team projects to carry out design, analysis, and development across the stack using your research expertise. Study, diagnose and resolve complex technical modeling and systems issues by analyzing the sources of the issues and the impact on quality. Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form .

Posted 4 days ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site


CWX is looking for a dynamic SENIOR AI/ML ENGINEER to become a vital part of our vibrant PROFESSIONAL SERVICES TEAM, working on-site in Hyderabad. Join the energy and be part of the momentum! At CloudWerx, we're looking for a Senior AI/ML Engineer to lead the design, development, and deployment of tailored AI/ML solutions for our clients. In this role, you'll work closely with clients to understand their business challenges and build innovative, scalable, and cost-effective solutions using tools like Google Cloud Platform (GCP), Vertex AI, Python, PyTorch, LangChain, and more. You'll play a key role in translating real-world problems into robust machine learning architectures, with a strong focus on Generative AI, multi-agent systems, and modern MLOps practices. From data preparation and ensuring data integrity to building and optimizing models, you'll be hands-on across the entire ML lifecycle — all while ensuring seamless deployment and scaling using cloud-native infrastructure. Clear communication will be essential as you engage with both technical teams and business stakeholders, making complex AI concepts understandable and actionable. Your deep expertise in model selection, optimization, and deployment will help deliver high-performing solutions tailored to client needs. We're also looking for someone who stays ahead of the curve — someone who's constantly learning and experimenting with the latest developments in generative AI, LLMs, and cloud technologies. Your curiosity and drive will help push the boundaries of what's possible and fuel the success of the solutions we deliver. This is a fantastic opportunity to join a fast-growing, engineering-led cloud consulting company that tackles some of the toughest challenges in the industry. At CloudWerx, every team member brings something unique to the table, and we foster a supportive environment that helps people do their best work. 
Our goal is simple: to be the best at what we do and help our clients accelerate their businesses through world-class cloud solutions. This role is an immediate full-time position. Insight on your impact Conceptualize, Prototype, and Implement AI Solutions: Design and deploy advanced AI solutions using large language models (LLMs), diffusion models, and multimodal AI systems by leveraging Google Cloud tools such as Vertex AI, AutoML, and AI Platform (Agent Builder). Implement Retrieval-Augmented Generation (RAG) pipelines for chatbots and assistants, and create domain-specific transformers for NLP, vision, and cross-modal applications. Utilize Document AI, Translation AI, and Vision AI to develop full-stack, multimodal enterprise applications. Technical Expertise: Fine-tune models via LoRA, QLoRA, RLHF, and Dreambooth. Build multi-agent systems using Agent Development Kit (ADK), Agent-to-Agent (A2A) Protocol, and Model Context Protocol (MCP). Provide thought leadership on best practices, architecture patterns, and technical decisions across LLMs, generative AI, and custom ML pipelines, tailored to each client's unique business needs. Stakeholder Communication: Effectively communicate complex AI/ML concepts, architectures, and solutions to business leaders, technical teams, and non-technical stakeholders. Present project roadmaps, performance metrics, and model validation strategies to C-level executives and guide organizations through AI transformation initiatives. Understand client analytics & modeling needs: Collaborate with clients to extract, analyze, and interpret both internal and external data sources. Design and operationalize data pipelines that support exploratory analysis and model development, enabling business-aligned data insights and AI solutions. Database Management: Work with structured (SQL/BigQuery) and unstructured (NoSQL/Firestore, Cloud Storage) data. 
Apply best practices in data quality, versioning, and integrity across datasets used for training, evaluation, and deployment of AI/ML models. Cloud Expertise: Architect and deploy cloud-native AI/ML solutions using Google Cloud services including Vertex AI, BigQuery ML, Cloud Functions, Cloud Run, and GKE Autopilot. Provide consulting on GCP service selection, infrastructure scaling, and deployment strategies aligned with client requirements. MLOps & DevOps: Lead the implementation of robust MLOps and LLMOps pipelines using TensorFlow Extended (TFX), Kubeflow, and Vertex AI Pipelines. Set up CI/CD workflows using Cloud Build and Artifact Registry, and deploy scalable inference endpoints through Cloud Run and Agent Engine. Establish automated retraining, drift detection, and monitoring strategies for production ML systems. Prompt Engineering and fine tuning: Apply advanced prompt engineering strategies (e.g., few-shot, in-context learning) to optimize LLM outputs. Fine-tune models using state-of-the-art techniques including LoRA, QLoRA, Dreambooth, ControlNet, and RLHF to enhance instruction-following and domain specificity of generative models. LLMs, Chatbots & Text Processing: Develop enterprise-grade chatbots and conversational agents using Retrieval-Augmented Generation (RAG), powered by both open-source and commercial LLMs. Build state-of-the-art generative solutions for tasks such as intelligent document understanding, summarization, and sentiment analysis. Implement LLMOps workflows for lifecycle management of large-scale language applications. Consistently Model and Promote Engineering Best Practices: Promote a culture of technical excellence by adhering to software engineering best practices including version control, reproducibility, structured documentation, Agile retrospectives, and continuous integration. Mentor junior engineers and establish guidelines for scalable, maintainable AI/ML development. 
Our Diversity and Inclusion Commitment At CloudWerx, we are dedicated to creating a workplace that values and celebrates diversity. We believe that a diverse and inclusive environment fosters innovation, collaboration, and mutual respect. We are committed to providing equal employment opportunities for all individuals, regardless of background, and actively promote diversity across all levels of our organization. We welcome all walks of life, as we are committed to building a team that embraces and mirrors a wide range of perspectives and identities. Join us in our journey toward a more inclusive and equitable workplace. Background Check Requirement All candidates for employment will be subject to pre-employment background screening for this position. All offers are contingent upon the successful completion of the background check. For additional information on the background check requirements and process, please reach out to us directly. Our Story CloudWerx is an engineering-focused cloud consulting firm born in Silicon Valley - in the heart of hyper-scale and innovative technology. In a cloud environment we help businesses looking to architect, migrate, optimize, secure or cut costs. Our team has unique experience working in some of the most complex cloud environments at scale and can help businesses accelerate with confidence.

Posted 4 days ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Title: Machine Learning Engineer Primary Location: Hyderabad/Trivandrum (Onsite) Job Type: Contract Secondary Location: Any Infosys Office Location In this position you will: • Design and implement NLP pipelines for document analysis and artifact generation. • Perform data cleaning and transformation on unstructured text using industry-standard techniques. • Develop embeddings and semantic search pipelines using OpenAI, HuggingFace, or custom models. • Integrate vectorized data with retrieval systems such as MongoDB Vector, FAISS, or Pinecone. • Fine-tune and evaluate LLMs for use cases like test case generation, user story summarization, etc. • Monitor model performance and conduct regular evaluations with precision/recall/F1/BLEU. • Collaborate with backend developers to expose ML outputs via APIs. • Participate in architectural design and PoCs for GenAI-based solutions. • Adhere to and implement Responsible AI principles in all ML workflows. • Work closely with product owners and testers to ensure the quality and usability of generated outputs. Required Qualifications: • 5+ years of experience in data science and AI/ML engineering with strong proficiency in Python and applied NLP • Deep expertise in NLP techniques including: text classification, Named Entity Recognition (NER), summarization, sentiment analysis, topic modeling • Strong experience in data preprocessing and cleaning: tokenization, stop-word removal, stemming/lemmatization, normalization. • Strong experience in vectorization methods: TF-IDF, Word2Vec, GloVe, BERT, Sentence Transformers. Demonstrated experience applying vectorization and implementing contextual search solutions is a must • Hands-on experience implementing LangChain, RAG architecture, multi-agent orchestration, Agentic AI, scikit-learn, and Python is a must • Hands-on with embedding models (e.g., OpenAI, Hugging Face Transformers) and chunking strategies • Experience with vector stores: MongoDB Atlas Vector DB, FAISS, Pinecone, Chroma DB. 
• Skilled in building and fine-tuning LLMs; prompt engineering is a must • Experience with MLOps frameworks for model lifecycle, versioning, deployment, and monitoring. • Strong knowledge of LLMOps, NumPy, and PySpark for data wrangling. • Experience deploying models on Azure (preferred), AWS, or GCP. • Understanding of Responsible AI practices including model fairness, transparency, and auditability. • Strong knowledge of machine learning frameworks, deep learning architectures, natural language processing, and generative models (e.g., GANs, transformers). Preferred Qualifications: • 3+ years of experience building, scaling, and optimizing training and inferencing systems for deep neural networks and/or transformer architectures. • Demonstrated experience in research and development teams with a focus on generative AI technologies, suggesting new ideas or opportunities. • Experience managing production-scale pre-training of models (private or public cloud) or setting up GPU clusters for in-house LLM deployments • Familiarity with AI governance, ethics, compliance, and regulatory considerations. Education: • Bachelor’s degree or equivalent work experience in Computer Science, Engineering, Machine Learning, or related discipline. • Master’s degree or PhD preferred. Thanks, Aatmesh aatmesh.singh@ampstek.com
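TF-IDF, the first of the vectorization methods listed above, can be implemented from scratch in a few lines. This sketch ranks documents against a query; the corpus and the +1 IDF smoothing are illustrative choices, and a library like scikit-learn would normally do this:

```python
import math
from collections import Counter

def tf_idf_scores(query: str, docs: list[str]) -> list[float]:
    """Score each document against a query with classic TF-IDF weighting."""
    tokenized = [d.lower().split() for d in docs]
    n = len(docs)
    # Document frequency: in how many documents does each term appear?
    df = Counter(t for doc in tokenized for t in set(doc))
    # Inverse document frequency, +1 so terms in every doc still count.
    idf = {t: math.log(n / df[t]) + 1 for t in df}
    scores = []
    for doc in tokenized:
        tf = Counter(doc)
        # Sum term-frequency * idf over the query's terms.
        s = sum(tf[t] / len(doc) * idf.get(t, 0) for t in query.lower().split())
        scores.append(s)
    return scores

docs = [
    "the cat sat on the mat",
    "the dog chased the cat",
    "stock prices rose sharply today",
]
scores = tf_idf_scores("cat mat", docs)
print(scores.index(max(scores)))  # → 0, the most relevant document
```

Rare terms like "mat" get a higher IDF weight than common ones like "cat", which is why document 0 outranks document 1.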

Posted 4 days ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Hi, Hope you are doing well. Please let me know if you are interested in a job change at this moment. Please find the detailed job description below; it would be appreciated if you could share your updated resume and the best number to reach you. ML Engineer Hyderabad/Trivandrum - Onsite Contract Position Summary: We are seeking a dynamic Senior Machine Learning Engineer focused on advancing our generative AI capabilities. You will contribute to building scalable AI systems that impact real-world enterprise applications, while promoting responsible AI practices and collaborating across teams to accelerate innovation. In this position you will: • Design and implement NLP pipelines for document analysis and artifact generation. • Perform data cleaning and transformation on unstructured text using industry-standard techniques. • Develop embeddings and semantic search pipelines using OpenAI, HuggingFace, or custom models. • Integrate vectorized data with retrieval systems such as MongoDB Vector, FAISS, or Pinecone. • Fine-tune and evaluate LLMs for use cases like test case generation, user story summarization, etc. • Monitor model performance and conduct regular evaluations with precision/recall/F1/BLEU. • Collaborate with backend developers to expose ML outputs via APIs. • Participate in architectural design and PoCs for GenAI-based solutions. • Adhere to and implement Responsible AI principles in all ML workflows. • Work closely with product owners and testers to ensure the quality and usability of generated outputs. Required Qualifications: • 5+ years of experience in data science and AI/ML engineering with strong proficiency in Python and applied NLP • Deep expertise in NLP techniques including: text classification, Named Entity Recognition (NER), summarization, sentiment analysis, topic modeling • Strong experience in data preprocessing and cleaning: tokenization, stop-word removal, stemming/lemmatization, normalization. 
• Strong experience in vectorization methods: TF-IDF, Word2Vec, GloVe, BERT, Sentence Transformers. Demonstrated experience applying vectorization and implementing contextual search solutions is a must • Hands-on experience implementing LangChain, RAG architecture, multi-agent orchestration, Agentic AI, scikit-learn, and Python is a must • Hands-on with embedding models (e.g., OpenAI, Hugging Face Transformers) and chunking strategies • Experience with vector stores: MongoDB Atlas Vector DB, FAISS, Pinecone, Chroma DB. • Skilled in building and fine-tuning LLMs; prompt engineering is a must • Experience with MLOps frameworks for model lifecycle, versioning, deployment, and monitoring. • Strong knowledge of LLMOps, NumPy, and PySpark for data wrangling. • Experience deploying models on Azure (preferred), AWS, or GCP. • Understanding of Responsible AI practices including model fairness, transparency, and auditability. • Strong knowledge of machine learning frameworks, deep learning architectures, natural language processing, and generative models (e.g., GANs, transformers). Preferred Qualifications: • 3+ years of experience building, scaling, and optimizing training and inferencing systems for deep neural networks and/or transformer architectures. • Demonstrated experience in research and development teams with a focus on generative AI technologies, suggesting new ideas or opportunities. • Experience managing production-scale pre-training of models (private or public cloud) or setting up GPU clusters for in-house LLM deployments • Familiarity with AI governance, ethics, compliance, and regulatory considerations. Education: • Bachelor’s degree or equivalent work experience in Computer Science, Engineering, Machine Learning, or related discipline. • Master’s degree or PhD preferred. 
Thanks, and Regards Snehil Mishra snehil@ampstek.com linkedin.com/in/snehil-mishra-1104b2154 Desk: 6093602673, Extension: 125 www.ampstek.com https://www.linkedin.com/company/ampstek/jobs/ Ampstek – Global IT Partner Registered Offices: North America and LATAM: USA|Canada|Costa Rica|Mexico Europe: UK|Germany|France|Sweden|Denmark|Austria|Belgium|Netherlands|Romania|Poland|Czech Republic|Bulgaria|Hungary|Ireland|Norway|Croatia|Slovakia|Portugal|Spain|Italy|Switzerland|Malta APAC: Australia|NZ|Singapore|Malaysia|South Korea|Hong Kong|Taiwan|Philippines|Vietnam|Sri Lanka|India MEA: South Africa|UAE|Turkey|Egypt

Posted 4 days ago

Apply

2.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Job Title: NLP Engineer (Applied AI & Agentic Systems) About Us We’re building the next generation of AI-powered agentic systems — intelligent bots, assistants, and custom AI agents that integrate with APIs, databases, and business logic to drive real value. Our platform focuses on practical, scalable AI applications rather than pure research — and we’re looking for NLP engineers who can help us push the boundaries. What You’ll Do Build and fine-tune LLM-powered agents and assistants using cutting-edge NLP techniques. Design and implement workflows that combine natural language processing with API calls, database queries, and business logic. Evaluate and apply the right LLMs (OpenAI, Claude, Mistral, etc.) and fine-tune when necessary. Collaborate with cross-functional teams to understand product needs and shape AI-based solutions. Help teams integrate AI into their projects by providing technical guidance and best practices. Develop prompt engineering strategies and reusable agent components. Stay up to date with the latest trends in NLP, LLMs, RAG (Retrieval-Augmented Generation), and agentic frameworks like LangChain, AutoGen, etc. Participate actively in Agile/Scrum processes — sprint planning, standups, retrospectives, and continuous delivery. Be a strong team player — communicative, proactive, and focused on shared success. What We’re Looking For Bachelor’s or Master’s degree in Computer Science, Linguistics, Data Science, AI, or a related field. 2+ years of hands-on experience working with NLP or LLMs in a production or applied research setting. Solid understanding of NLP concepts — embeddings, tokenization, entity recognition, summarization, etc. Experience with modern LLMs and foundational models (OpenAI, Hugging Face, etc.). Familiarity with agentic frameworks (LangChain, Semantic Kernel, AutoGen, CrewAI, or similar). Comfortable integrating with REST APIs, SQL/NoSQL databases, and basic backend logic. 
Experience with prompt engineering and chaining LLM calls into workflows. Strong problem-solving skills and the ability to architect solutions without excessive boilerplate code. Bonus: Experience with vector databases (e.g., Pinecone, Weaviate), RAG pipelines, or fine-tuning models. Nice to Have Knowledge of Python and working with AI SDKs. Qualifications: Bachelor’s or Master’s degree in Computer Science, Linguistics, Data Science, or a related field. 2-3+ years of experience in natural language processing or a related field. Proficiency in programming languages such as .NET or Python, with experience in NLP libraries. Experience with text preprocessing techniques and tools. Familiarity with cloud platforms and services for deploying/configuring the Gen AI bot. Excellent problem-solving skills and attention to detail. Strong communication skills and the ability to work collaboratively in a team environment. Location: IND Gurgaon - Bld 14 IT SEZ (GST) Language Requirements: Time Type: Full time If you are a California resident, by submitting your information, you acknowledge that you have read and have access to the Job Applicant Privacy Notice for California Residents R1627766
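Chaining LLM calls into a workflow, as this posting mentions, usually means feeding one call's output into the next prompt. In the sketch below, `call_llm` is a stand-in stub for a real model API (OpenAI, Claude, etc.), and its canned responses are fabricated purely for illustration:

```python
def call_llm(prompt: str) -> str:
    # Stub standing in for a real LLM API call; returns canned
    # responses so the chaining pattern can run offline.
    if prompt.startswith("Extract"):
        return "topic=refund policy"
    return "Drafted reply about: refund policy"

def handle_ticket(ticket: str) -> str:
    # Step 1: a classification prompt pulls structure out of free text.
    topic = call_llm(f"Extract the topic from this ticket: {ticket}")
    # Step 2: the first call's output is chained into the next prompt.
    return call_llm(f"Write a support reply for a ticket with {topic}")

print(handle_ticket("I want my money back for order 1234"))
```

Frameworks like LangChain formalize exactly this pattern, with the intermediate output parsed and validated between steps.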

Posted 4 days ago

Apply

4.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Job Description Role Title - Team Lead and Lead Developer – AI and Data Engineering Role Type - Full time Role Reports to Chief Technology Officer Work Location - Plenome Technologies, 8th floor, E Block, IITM Research Park, Taramani Job Overview The Technical Lead will drive our AI strategy and implementation while managing a team of developers. Key responsibilities include architecting LLM solutions, ensuring scalability, implementing multilingual capabilities, and developing healthcare-specific AI models. You will oversee the development of AI agents that can understand and process medical information, interact naturally with healthcare professionals, and handle complex medical workflows. This includes ensuring data privacy, maintaining medical accuracy, and adapting models for different healthcare contexts. Job Specifications Educational Qualifications - Any UG/PG graduates Professional Experience 4+ years of Data Engineering/ML development experience 2+ years of team leadership experience 2+ years of Scrum/Agile management experience Key Job Responsibilities ML applications & training · Understanding of machine learning concepts and experience with ML frameworks like PyTorch, TensorFlow, or others · Experience with production of ML applications on web or mobile platforms NLP & feature engineering · Experience in developing customized AI-powered features from scratch to production involving NLP and other models · Designing, deploying, and subsequent training of multimodal applications based on clinical requirements LLMs & fine-tuning · Experience with open-source LLMs (preferably Llama models) and fine-tuning through client data and open-source data · Experience with LLM frameworks like LangChain, LlamaIndex, or others, and with any vector databases · Implement RAG architecture to enhance model accuracy with real-time retrieval from clinical databases and medical literature Data pipelines & architecture · Design end-to-end clinical AI applications, from data 
ingestion to deployment in clinical settings with integrations · Experience with Docker and Kubernetes for application serving at large scale, and developing data pipelines and training workflows API development · Experience with deploying LLM models on cloud platforms (AWS, Azure or others) · Experience with backend and API developments for external integrators Documentation & improvements · Version control with Git, and ticketing bugs and features with tools like Jira or Confluence Behavioral competencies Attention to detail · Ability to maintain accuracy and precision in financial records, reports, and analysis, ensuring compliance with accounting standards and regulations. Integrity and Ethics · Commitment to upholding ethical standards, confidentiality, and honesty in financial practices and interactions with stakeholders. Time management · Effective prioritization of tasks, efficient allocation of resources, and timely completion of assignments to meet sprint deadlines and achieve goals. Adaptability and Flexibility · Capacity to adapt to changing business environments, new technologies, and evolving accounting standards, while remaining flexible in response to unexpected challenges. 
Communication & collaboration · Experience presenting to stakeholders and executive teams · Ability to bridge technical and non-technical communication · Excellence in written documentation and process guidelines to work with other frontend teams Leadership competencies Team leadership and team building · Lead and mentor a backend and database development team, including junior developers, and ensure good coding standards · Agile methodology to be followed, Scrum meetings to be conducted for sync-ups Strategic Thinking · Ability to develop and implement long-term goals and strategies aligned with the organization’s vision · Ability to adopt new tech and being able to handle tech debt to bring the team up to speed with client requirements Decision-Making · Capable of making informed and effective decisions, considering both short-term and long-term impacts · Insight into resource allocation and sprint building for various projects Team Building · Ability to foster a collaborative and inclusive team environment, promoting trust and cooperation among team members Code reviews · Troubleshooting, weekly code reviews and feature documentation and versioning, and standards improvement Improving team efficiency · Research and integrate AI-powered development tools (GitHub Copilot, Amazon Code Whisperer) Added advantage points Regulatory compliances · Experience with HIPAA, GDPR compliant software and data storage systems · Experience in working with PII data and analytical data in highly regulated domains (finance, healthcare, and others) · Understanding of healthcare data standards and codes (FHIR, SNOMED) for data engineering AI safety measures · Knowledge of privacy protection and anti-data leakage practices in AI deployments Interested candidates can share the updated resumes to below mentioned ID. Contact Person - Janani Santhosh - Senior HR Professional Email ID - careers@plenome.com
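The RAG architecture this role calls for starts with chunking source documents before they are embedded and indexed; a common baseline is a fixed-size window with overlap, so content cut at a boundary still appears whole in at least one chunk. A minimal sketch (the sizes and the sample note are illustrative, not clinical guidance):

```python
def chunk(text: str, size: int = 40, overlap: int = 10) -> list[str]:
    """Split text into fixed-size character chunks with overlap,
    so text cut at one boundary survives intact in the next chunk."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

note = "Patient reports mild fever. Prescribed rest and fluids for five days."
for c in chunk(note):
    print(repr(c))
```

Production pipelines typically chunk on token counts and sentence boundaries rather than raw characters, but the overlap idea is the same.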

Posted 4 days ago

Apply

6.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Job Purpose - The role will streamline the service roadmap and bring in rigor on par with the new products that Kohler commercializes. The service roadmap will focus on bringing and refining offerings which shall: Engage Kohler customers and prospects and enmesh deeper in their homes and lives (move from bathrooms to kitchens, bedrooms, and common areas) through relevant categories. Plan for customer retention and create opportunities to cross-sell and upsell. Bring methods and processes, calendarize, and create KPIs to track various sales-enabling and confidence-building activities like plumber training, service camps, etc. Present service as a differentiator, create stories/highlights, and be embedded in the regular Kohler content calendar. The incumbent shall be the SPOC for the service function for all commercial functions (Sales, Channel Marketing, Product, and Field Service teams) Roles & Responsibilities Knowledge Management: Category-specific knowledge to be sorted and organized in a repository for easy retrieval and dissemination. This shall include: Regular products Bathroom configurations Training Manuals Build Kohler Care: Logically organize all service offerings under the umbrella of Kohler Care. Create verticals under Kohler Care Accompanying Elements: Pages on Kohler.co.in. Look, feel, and attire of the service agents. Customer Outreach Program: Bring processes, calendarize, and introduce KPIs to track the efficacy of the customer outreach program. This will need to be achieved in sync with the BD, Sales, L&D, and Service teams. The outreach programs include: Plumber trainings Service Camps Specific demos on a need basis for B2B Kohler Key Accounts Service-Specific Communication: Curate content in collaboration with the Marcom team and ensure dissemination across relevant channels. This will include: Finalizing content strategy for the service vertical in line with the entire repositioning plan of the service vertical. 
Create content calendar for social postings (organic/paid) Integrate relevant content in Kohler Catalogue Finalise service Branding elements across Kohler Dealers/Stores Post Sales CRM (Customer Retention Marketing) Create relevant customer clusters from the existing Kohler customer database. The customer clusters shall include Arch/ID, Retail, and B2B customers. Create a by-segment customer engagement plan. The plan to include: Strategies to cross and upsell. Exploit opportunities in the renovation market with existing Kohler customers and prospects. Finish the pilot and build up the Customer Engagement platform ‘Renew plus plan’ to be deployed across channels – Online D2C, Kohler dealers. Skills And Knowledge Should have 6-8 years of relevant experience. Should have managed: Service Marketing Product marketing (Ideation to commercialization) Customer acquisition through multi-channel activations (Online + Offline). Should have done at least two out of the three roles mentioned above. Background in Quick Commerce (Zepto, Blinkit), Daily Grocery delivery services (Big basket, Milk Basket etc.), Customer Service/Acquisition in digital first brands (Noise, Boat etc.)

Posted 4 days ago

Apply

0 years

0 Lacs

Andhra Pradesh, India

On-site


At PwC, our people in managed services focus on a variety of outsourced solutions and support clients across numerous functions. These individuals help organisations streamline their operations, reduce costs, and improve efficiency by managing key processes and functions on their behalf. They are skilled in project management, technology, and process optimization to deliver high-quality services to clients. Those in managed service management and strategy at PwC will focus on transitioning and running services, along with managing delivery teams, programmes, commercials, performance and delivery risk. Your work will involve the process of continuous improvement and optimising of the managed services process, tools and services. Enhancing your leadership style, you motivate, develop and inspire others to deliver quality. You are responsible for coaching, leveraging team member’s unique strengths, and managing performance to deliver on client expectations. With your growing knowledge of how business works, you play an important role in identifying opportunities that contribute to the success of our Firm. You are expected to lead with integrity and authenticity, articulating our purpose and values in a meaningful way. You embrace technology and innovation to enhance your delivery and encourage others to do the same. Skills Examples of the skills, knowledge, and experiences you need to lead and deliver value at this level include but are not limited to: Analyse and identify the linkages and interactions between the component parts of an entire system. Take ownership of projects, ensuring their successful planning, budgeting, execution, and completion. Partner with team leadership to ensure collective ownership of quality, timelines, and deliverables. Develop skills outside your comfort zone, and encourage others to do the same. Effectively mentor others. Use the review of work as an opportunity to deepen the expertise of team members. 
Address conflicts or issues, engaging in difficult conversations with clients, team members and other stakeholders, escalating where appropriate. Uphold and reinforce professional and technical standards (e.g. refer to specific PwC tax and audit guidance), the Firm's code of conduct, and independence requirements.

Role Requirement
Expertise in IBM ECM platforms and related technologies to provide application support, troubleshooting, performance tuning, and integrations.

Core Technical Skills

1️⃣ IBM Content Management Administration & Support
Installing, configuring, and maintaining IBM FileNet P8, CMOD, Case Manager, and Datacap. Managing repository structures, metadata, and object stores. Performance tuning, log analysis, and system monitoring. Troubleshooting and resolving content indexing, retrieval, and access control issues. Ensuring high availability, backups, and disaster recovery planning.

2️⃣ Database Management
Strong understanding of SQL and NoSQL databases used by ECM: DB2, Oracle, Microsoft SQL Server (for IBM FileNet, CMOD); MongoDB, cloud-based DB solutions (for modern ECM integrations). Query optimization and database performance tuning. Managing database schema changes for ECM repositories.

3️⃣ Application & Server Management
Working with WebSphere Application Server (WAS) or WebLogic for ECM deployments. Managing content storage and retrieval via IBM Content Services API. Configuring LDAP/Active Directory for authentication & access control. Integrating ECM systems with third-party applications (SAP, Salesforce, SharePoint, etc.).

4️⃣ Development & Customization
Java/J2EE Development – Customizing IBM Content Navigator (ICN) and FileNet applications. IBM Content Navigator Plugins – Extending ECM functionality. REST and SOAP Web Services – API integrations for content retrieval and indexing. Scripting (Python, PowerShell, Shell Scripting) – Automating ECM tasks and monitoring.

5️⃣ DevOps & CI/CD
CI/CD Pipelines (Jenkins, GitHub Actions, Azure DevOps) for ECM application deployments. Containerization (Docker, Kubernetes) for IBM Cloud Pak ECM solutions. Infrastructure as Code (Terraform, Ansible) for automated deployments.

6️⃣ Cloud & Hybrid Deployments
IBM Cloud, AWS, or Azure for ECM SaaS/hybrid deployments. Managing IBM Cloud Pak for Business Automation. Experience with IBM Watson AI for Cognitive Content Management.

7️⃣ Monitoring & Security
Implementing ECM security policies, role-based access control (RBAC). Monitoring ECM logs using Splunk, ELK Stack, or Dynatrace. Ensuring compliance with records retention policies and regulatory standards (GDPR, HIPAA, ISO 27001).

Posted 5 days ago

Apply

1.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Linkedin logo

Solve organizational information problems and needs through identification and analysis of requirements, creation of project and process specifications, and design and development of the output. Solve complex system and system-related problems, providing timely and accurate documentation and communication.

Key Responsibilities
Gather business information and incorporate it into project plans with the project manager or lead. Provide timely updates and accurate documents and communication to the project team through the life-cycle of a change. Work on process improvements and complex projects by identifying business and system requirements, creating project and process specifications for new and/or enhanced systems, and driving quantifiable results through facilitating interaction with the business unit. Support and solve a variety of complex system and system-related problems for the business unit and liaise with technology peers on business requirements and technology, as well as the design, development, or implementation of systems. Perform and/or support system administration tasks including but not limited to: - Change management - Maintenance and monitoring of the applications - Installing and upgrading software - Back-ups and archives - Configuration and troubleshooting - Maintaining documentation (new user guides, policy and procedure documents, disaster recovery plans, administrative procedures, user access) - Helping and educating users - Baselining and application capacity monitoring

Required Qualifications
Demonstrated excellent hands-on personal computer and organizational skills. Familiarity with advanced features in MS Word and MS PowerPoint. Familiarity with formulas and complex spreadsheets; the ability to write embedded formulas is essential. Exposure to VBA macro development within MS Excel. Understanding of SQL and data sets. Ability to write SQL queries and understand data retrieval, formatting, and integration. Ability to understand database architecture concepts.
Familiarity with Back and Middle Office technology. Solid analytical, quantitative and problem-solving skills, with the ability to interpret data, reach conclusions and take action. Ability to understand technology as it relates to business; may require product or system certifications. Ability to communicate technology-related information clearly to different audiences and clearly detail implementation processes. Strong relationships within the department and across business functions. Bachelor’s degree or equivalent work experience. 1+ years’ experience in the financial services industry. Strong leadership competencies and execution skills by way of cross-collaboration and workflow facilitation with multiple internal business partners. Must be highly responsive and proactive in a fast-paced, changing environment.

Preferred Qualifications
Exposure to Vermilion Reporting Suite (VRS) is preferred; in its absence, Power BI, Tableau, SSRS or any other reporting tool experience will be considered. Knowledge of Java, ETL, SQL, RDBMS (Oracle, SQL Server, etc.). Experience in the Asset Management area. Implementation experience with financial services technology, preferably in a consultative role with an eye towards system design and integration.

About Our Company
Ameriprise India LLP has been providing client-based financial solutions to help clients plan and achieve their financial objectives for 125 years. We are a U.S.-based financial planning company headquartered in Minneapolis with a global presence. The firm’s focus areas include Asset Management and Advice, Retirement Planning and Insurance Protection. Be part of an inclusive, collaborative culture that rewards you for your contributions and work with other talented individuals who share your passion for doing great work. You’ll also have plenty of opportunities to make your mark at the office and a difference in your community.
So if you're talented, driven and want to work for a strong ethical company that cares, take the next step and create a career at Ameriprise India LLP. Ameriprise India LLP is an equal opportunity employer. We consider all qualified applicants without regard to race, color, religion, sex, genetic information, age, sexual orientation, gender identity, disability, veteran status, marital status, family status or any other basis prohibited by law. Full-Time/Part-Time Full time Timings (2:00p-10:30p) India Business Unit AWMPO AWMP&S President's Office Job Family Group Business Support & Operations

Posted 5 days ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

Remote

Linkedin logo

We're looking for a Senior Data Engineer to design and build scalable data solutions using Azure Data Services, Power BI, and modern data engineering best practices. You'll work across teams to create efficient data pipelines, optimise databases, and enable smart, data-driven decisions. If you enjoy solving complex data challenges, collaborating globally, and making a real impact, we'd love to hear from you. Be a part of our Data Management & Reporting team and help us deliver innovative solutions that make a real impact.

At SkyCell, we're on a mission to change the world by revolutionizing the global supply chain. Our cutting-edge temperature-controlled container solutions are designed to ensure the safe and secure delivery of life-saving pharmaceuticals, with sustainability at the core of everything we do. We're a fast-growing, purpose-driven scale-up where you'll make an impact, feel empowered, and thrive in a diverse, innovative environment.

Why SkyCell?
🌱 Purpose-Driven Work: Make a real difference by contributing to a more sustainable future in global logistics and healthcare
🚀 Innovation at Heart: Work with cutting-edge technology and be at the forefront of supply chain innovation
🌎 Stronger Together: Join a supportive team of talented individuals from over 40 countries, where we work together every step of the way
💡 Growth Opportunities: We believe in investing in our people - continuous learning and development are key pillars of SkyCell
🏆 Award-Winning Culture: Join a workplace recognized for its commitment to excellence with a ‘Great Place to Work' award, as well as a Platinum EcoVadis rating highlighting our sustainability and employee well-being

What You'll Do:
Design, build, and maintain scalable data pipelines and databases using Azure Data Services for both structured and unstructured data. Optimise and monitor database performance, ensuring efficient data retrieval and storage. Develop, optimise, and maintain complex SQL queries, stored procedures, and data transformation logic. Develop efficient workflows, automate data processes, and provide analytical support through data extraction, transformation, and interpretation. Create and optimise Power BI dashboards and models, including DAX tuning and data modelling; explore additional reporting tools as needed. Integrate data from multiple sources and ensure accuracy through quality checks, validation, and consistency measures. Implement data security measures and ensure compliance with data governance policies. Investigate and support the business in providing solutions for data issues. Collaborate with cross-functional teams, contribute to code reviews, and uphold coding standards. Continuously evaluate tools, document data flows and system architecture, and improve engineering practices. Provide technical leadership, mentor junior engineers, and support hiring, onboarding, and training.

Requirements
What You'll Bring:
Bachelor's degree in Computer Science or a related field (a Master's degree is a plus). Proven expertise in designing and implementing data solutions using Azure Data Factory, Azure Databricks, and Azure SQL Database; certifications are a plus. Strong proficiency in SQL development and optimization, and good knowledge of NoSQL databases. Extensive experience in developing complex dashboards and reports, including DAX. Knowledge of at least one data analysis language (Python, R, etc.). Knowledge of SAP Analytics Cloud is advantageous. Ability to design and implement data models to ensure efficient storage, retrieval, and analysis of structured and unstructured data.

Benefits
What's In It For You?
⚡ Flexibility & Balance: Flexible working hours and work-life balance allow you to tailor work to fit your life 🌟 Recognition & Growth: Opportunities for career advancement in a company that values your contributions 💼 Hybrid Workplace: Modern workspaces (in Zurich, Zug and Hyderabad as well as our Skyhub in Basel) and a remote-friendly culture to inspire collaboration amongst a globally diverse team 🎉 Company-wide Events: Join us for company events to celebrate successes, build teams, and share our vision. Plus, new joiners experience SkyWeek, our immersive onboarding program 👶 Generous Maternity & Paternity Leave: Support for new parents with competitive maternity and paternity leave 🏖️ Annual Leave & Bank Holidays: Enjoy a generous annual leave package, plus local bank holidays to recharge and unwind Ready to Make an Impact? We're not just offering a job; we're offering a chance to be part of something bigger. At SkyCell, you'll help build a future where pharmaceutical delivery is efficient, sustainable, and transformative. Stay Connected with SkyCell Visit http://www.skycell.ch and explore #WeAreSkyCell on LinkedIn How To Apply Simply click ‘apply for this job' below! We can't wait to meet you and discuss how you can contribute to our mission! Please note, we are unable to consider applications sent to us via email. If you have any questions, you can contact our Talent Team (talent@skycell.ch). SkyCell AG is an equal opportunity employer that values diversity and is committed to creating an inclusive environment for all. We do not discriminate based on race, religion, colour, national origin, gender, sexual orientation, gender identity, age, disability, or any other legally protected characteristic. For this position, if you are not located in, or able to relocate (without sponsorship) to one of the above locations, your application cannot be considered.

Posted 5 days ago

Apply

5.0 - 7.0 years

0 Lacs

Kolkata, West Bengal, India

Remote

Linkedin logo

The Kardex Group is one of the world’s leading manufacturers of dynamic storage, retrieval and distribution systems. With over 2,500 employees worldwide, we develop and manufacture logistics solutions that are used in many different sectors such as industrial manufacturing, retail and administration. Kardex India Pvt Ltd is seeking a motivated self-starter to join our New Business Team in the role of Territory Sales Person, to be based remotely in Kolkata, India.

The purpose of the role is to: - Develop the market for Kardex products and solutions in the East region of India - Reach and exceed sales targets in the territory and relevant segments - Create, qualify and develop leads and close sales according to the Kardex sales process - Fully utilize the Kardex CRM tool to track all leads and opportunities - Actively contribute to the growth of Kardex in the eastern part of the Indian market

Major tasks and responsibilities:

TARGETS
Net Sales Offers (value). Order intake (units/solutions/value). Net Sales. Others to be elaborated during the induction process.

Customers
Give proactive support to existing customers. Identify and develop new customers for Kardex solutions. Follow the Kardex industry segment focus and develop solutions in these targeted segments.

Internal
Forecast precision (Bookings/Net Sales). Deliver tasks within agreed time. Defined reporting delivered on time. Follow the Kardex sales process using the Miller Heiman sales methodology. Report all sales activities via the Kardex CRM tool.

RESPONSIBILITIES
Reach and exceed agreed sales volume. Support and develop the territory in lead generation, qualification, and order intake. Customer visits. Develop solutions and value propositions for customers. Offer making, contract and price negotiations. Initiate and participate in business development projects. Initiate, implement and follow up sales campaigns.

Reporting
Salesforce/CRM updated weekly. Monthly forecasting/weekly forecast updates. Other sales reporting as requested.
REQUIREMENTS
Education: Tertiary education in a related field. Minimum 5-7 years of experience in intralogistics with high exposure to wholesale, retail, e-commerce, 3PL, electronics and/or bio-pharmaceutical industries. Multi-year experience of high-level and complex B2B sales, with solution selling. Commercial background with good technical understanding, or vice versa. Formally trained in sales and key account management. Experience in development and negotiation of complex contracts. Good understanding of logistical processes and software-supported working processes. Experienced in using a strategic selling framework such as Miller Heiman or SPIN Selling, and CRM tools, e.g. Salesforce. Experienced in solution selling at a high level. Creative and solution-oriented. Patient, persistent and enduring working style.

Behaviours required to perform this role: Able to extract diagnostic data in order to ascertain the root cause of a reported fault. Logical and forward thinking. Logical approach to fault analysis/problems. Able to follow laid-down procedures and policies. Able to evaluate situations and respond appropriately. Self-motivated, self-disciplined, and able to maintain a positive attitude. Able to cope with varying levels of stress and pressure. Able to make decisions/judgements. Able to work beyond working hours if required (on weekends and public holidays).

Posted 5 days ago

Apply

0 years

1 - 2 Lacs

Delhi

Remote

GlassDoor logo

Digital Health Associates Pvt. Ltd. is looking for an AI/ML & Backend Developer Intern excited about building intelligent and interactive AI systems. You'll work on real-world use cases involving agentic AI, LLMs, and retrieval-augmented generation (RAG) using tools like LangChain, LangGraph, and FastAPI.

Responsibilities: Build and experiment with agentic AI workflows using LangChain and LangGraph. Integrate open-source LLMs via tools like Ollama, LM Studio, etc. Create backend services and APIs using FastAPI. Work with embedding models and vector search for intelligent retrieval tasks. Collaborate with team members to prototype and deploy AI-driven features.

Requirements: Proficiency in Python and backend development with FastAPI. Familiarity with LangChain, LangGraph, and agent-based AI concepts. Experience using open-source LLMs (e.g., Mistral, LLaMA, Zephyr) locally or through inference tools like Ollama/LM Studio. Basic understanding of RAG (Retrieval-Augmented Generation) and vector databases. Comfortable with Git, Docker, and basic API integrations.

Good to Have: Exposure to prompt engineering and LLM fine-tuning. Knowledge of tools like Weaviate, Qdrant, ChromaDB. Familiarity with DevOps or cloud deployment (AWS/GCP).

Job Type: Internship Contract length: 3 months Pay: ₹15,000.00 - ₹20,000.00 per month Benefits: Paid time off Location Type: Remote Schedule: Day shift Fixed shift Work Location: Remote Speak with the employer +91 9911100774

Posted 5 days ago

Apply

8.0 years

1 - 9 Lacs

Hyderābād

On-site

GlassDoor logo

Minimum qualifications:
Bachelor's degree in Computer Science, a similar technical field of study, or equivalent practical experience. 8 years of experience with software development. Experience with one or more general purpose programming languages (e.g., Java, C/C++, Python or Go).

Preferred qualifications:
Experience with distributed processing and systems engineering. Experience with open source technologies, such as Apache Spark.

About the job
Google Cloud's software engineers develop the next-generation technologies that change how billions of users connect, explore, and interact with information and one another. We're looking for engineers who bring fresh ideas from all areas, including information retrieval, distributed computing, large-scale system design, networking and data storage, security, artificial intelligence, natural language processing, UI design and mobile; the list goes on and is growing every day. As a software engineer, you will work on a specific project critical to Google Cloud's needs with opportunities to switch teams and projects as you and our fast-paced business grow and evolve. You will anticipate our customer needs and be empowered to act like an owner, take action and innovate. We need our engineers to be versatile, display leadership qualities and be enthusiastic to take on new problems across the full-stack as we continue to push technology forward. Google Cloud accelerates every organization’s ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google’s cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems.

Responsibilities
Design and develop software in the Data Integration domain, working on a cloud-native distributed systems stack. Drive the launch of quality new features. Manage individual project priorities, deadlines and deliverables, and participate in design and code review.

Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.

Posted 5 days ago

Apply

3.0 years

2 - 8 Lacs

Hyderābād

On-site

GlassDoor logo

- 3+ years of non-internship professional software development experience - 2+ years of non-internship design or architecture (design patterns, reliability and scaling) of new and existing systems experience - Experience programming with at least one software programming language

Our team is building a low-latency, highly scalable storage layer to support punch-to-pay data across multiple businesses and regions. We use technologies from AWS and continuously challenge ourselves to build the right solution. The Timehub team's charter is to build a world-class product which meets attendance and pay computation needs for over 2 million hourly associates across Amazon businesses. People Technology is the central hub for all Amazon.com people data. Our technology provides the foundation and orchestration for a multitude of key human resource processes, from on-boarding tens of thousands of temporary employees during the peak holiday season to integrating critical employee data with internal and external systems. We implement and build highly secure, global software that allows Amazon.com to effectively manage the workforce, resulting in a better employee experience and a better bottom line. Timehub is looking for talented Software Development Engineers (SDEs) to join the team in Hyderabad, India. Amazon continuously pushes the limit to deliver packages and goods to customers as fast as possible. Gaining efficiencies in tracking productivity, time, and attendance is paramount to achieving this goal. You will get a chance to invent new technologies and build custom solutions to help Amazon track time, attendance, and productivity of employees and impact the employee experience. How hard can it be to pay people for the right number of hours worked, considering the compliance and business policies which vary across country, state, city, and business?
Would you be excited to dive into a surprisingly complicated space that is tangible to all Amazonians, with real-time analytics, surge-traffic handling, fault detection, and data processing, by developing new solutions on serverless platforms? Then you are the person People Technology is looking for.

Key job responsibilities
As a Software Development Engineer, you will contribute to all aspects of an agile software development lifecycle including design, architecture, development, documentation, testing and operations. You will push your design and architecture limits by owning all aspects of solutions end-to-end, through full-stack software development. You have strong verbal and written communication skills, are self-driven, and can deliver high-quality results in a fast-paced environment. As a part of the Timehub team, you will deliver robust feature sets, elegant designs, and intuitive user interfaces that make it easy for Amazonians to excel at performing critical business functions. You are obsessed with delighting customers, and have a demonstrated track record of passion for leveraging technologies to build incredible products. You understand back-end services and know how to conceptualize, design, implement, and maintain them. You go through life thinking about how to use the right technology to solve problems. You understand the untapped power of utilizing Internet technologies to support Amazon’s goal to be the most customer-centric company in the world. Most importantly, you have a passion for learning and are driven to be the best at what you do.

3+ years of full software development life cycle experience, including coding standards, code reviews, source control management, build processes, testing, and operations. Bachelor's degree in computer science or equivalent. Experience in machine learning, data mining, information retrieval, statistics, natural language processing or GenAI.
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Posted 5 days ago

Apply

0 years

1 - 5 Lacs

India

On-site

GlassDoor logo

Key Responsibilities:
Document Preparation: Drafting, formatting, and editing documents such as reports, forms, and correspondence. Record Management: Organizing and maintaining both physical and digital files, ensuring easy retrieval and secure storage. Compliance Checks: Verifying that documents meet company-specific requirements. Data Entry: Inputting information into databases or document management systems with a high degree of accuracy. Coordination: Collaborating with departments like Project, Engineering, and Client Co-ordination.

Required Skills and Qualifications:
Written Communication: Ability to draft, edit, and proofread documents clearly and professionally. Organizational Skills: Efficiently manage and retrieve both digital and physical records. Computer Proficiency: Familiarity with MS Office, document management systems, and data entry tools. Time Management: Prioritize tasks and meet deadlines in fast-paced environments. Educational Background: A diploma or degree in business administration, communications, or a related field is often preferred.

Job Type: Full-time Pay: ₹10,904.10 - ₹41,780.83 per month Benefits: Health insurance Schedule: Day shift Work Location: In person

Posted 5 days ago

Apply

5.0 - 8.0 years

7 Lacs

Srīnagar

On-site

GlassDoor logo

Key Responsibilities

Embryology Procedures
Perform and oversee all laboratory procedures including: oocyte retrieval handling; insemination via conventional IVF and ICSI; embryo grading, culturing, and development monitoring; embryo transfer preparation; vitrification and thawing of embryos and gametes; semen analysis and sperm preparation techniques.

Quality & Compliance
Ensure compliance with national and international ART regulations (ICMR, ESHRE, ASRM). Maintain accurate and detailed documentation of all procedures. Implement and monitor strict quality control (QC) and quality assurance (QA) protocols.

Lab Management & Equipment Handling
Calibrate and maintain lab equipment and instruments. Monitor cryopreservation systems and ensure proper inventory management of gametes and embryos. Maintain a sterile and contamination-free environment.

Team Leadership
Guide and mentor junior embryologists and lab technicians. Collaborate with fertility specialists, counselors, and nursing staff to coordinate patient care. Participate in case discussions, scientific meetings, and training sessions.

Patient Care & Communication
Provide patients with transparent updates (where required) on embryo development and outcomes in coordination with doctors. Support counseling sessions with embryology-related information when necessary.

Qualifications & Experience
Master’s Degree in Clinical Embryology / Biotechnology / Reproductive Biology or equivalent. Minimum 5–8 years of hands-on experience in embryology and IVF lab procedures. Certification from recognized bodies (e.g., ESHRE, ASRM, ICMR registration) is highly desirable.

Key Skills
Expertise in IVF, ICSI, embryo biopsy, vitrification. Strong understanding of embryology lab protocols, SOPs, and documentation practices. Attention to detail, manual dexterity, and high ethical standards. Good communication and leadership skills.
Job Types: Full-time, Permanent Pay: Up to ₹60,000.00 per month Benefits: Internet reimbursement Schedule: Day shift Work Location: In person

Posted 5 days ago

Apply