6.0 years
12 - 20 Lacs
hyderābād
On-site
Position Overview: In-person interview required. We are seeking a highly skilled AWS Systems Engineer to join the MDThink team. The selected candidate will be responsible for designing, implementing, and managing scalable AWS infrastructure with a strong focus on AI/ML services, observability, and automation. This role requires hands-on expertise with AWS services such as DataZone, SageMaker, Bedrock, Lex, and Textract, as well as proven leadership in systems engineering and DevSecOps best practices.
Responsibilities
- Design, implement, and manage AWS infrastructure (EC2, S3, VPC, Lambda, IAM, CloudFormation, Step Functions).
- Lead the configuration and governance of AWS DataZone for secure data cataloging and sharing.
- Build and manage ML environments using Amazon SageMaker and Bedrock for model training, deployment, and inference.
- Deploy and integrate Bedrock foundation models and SageMaker pipelines to support AI/ML teams.
- Configure and support Lex chatbots and Textract for intelligent document processing.
- Collaborate with data scientists and ML engineers to provision and optimize model-serving infrastructure.
- Implement monitoring and observability using CloudWatch, OpenTelemetry, X-Ray, AWS Health Dashboard, and third-party tools (Datadog, Grafana).
- Automate infrastructure provisioning with Terraform, AWS CDK, or CloudFormation (see the sketch after this listing).
- Build and maintain CI/CD pipelines (CodePipeline, CodeBuild, GitHub Actions) for infrastructure and ML workflows.
- Apply DevSecOps best practices to ensure security, compliance, and cost optimization.
- Ensure encryption, audit logging, and compliance across all deployed services.
- Participate in security reviews, architecture assessments, and governance initiatives.
Required Qualifications
- Bachelor's degree in Computer Science, Information Systems, Engineering, or a related field (Master's preferred).
- 6+ years of experience in systems engineering, DevOps, or cloud infrastructure.
- 5+ years of hands-on experience with AWS core services (EC2, IAM, Lambda, S3, VPC).
- Strong expertise in two or more AWS services: SageMaker, Glue, Athena, Bedrock, Lex, Textract, DataZone.
- Proficiency with IaC tools: Terraform, AWS CDK, CloudFormation.
- Experience in observability (monitoring, logging, tracing) with CloudWatch and third-party tools.
- Strong scripting/programming skills (Python, Spark, Iceberg, SQL).
Preferred Qualifications
- 3+ years of experience supervising or leading systems engineering teams.
- Strong leadership, communication, and project management skills.
- Experience on government or public sector projects in Health and Human Services (Child Support, Integrated Eligibility, Child Welfare, Adult Protective Services, Juvenile Justice, MDM, R360, EDMS).
- Background in AI/ML projects with cross-functional team collaboration.
Job Type: Full-time
Pay: ₹1,200,000.00 - ₹2,000,000.00 per year
Benefits: Food provided, Health insurance, Provident Fund
Expected Start Date: 25/09/2025
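As a hedged illustration of the IaC responsibility above, here is a minimal AWS CDK v2 (Python) sketch that provisions an encrypted S3 bucket and a Lambda processor with a least-privilege grant. Stack and construct names, and the local "lambda/" asset path, are invented for illustration.

```python
# Minimal AWS CDK v2 (Python) sketch: an encrypted S3 bucket plus a Lambda
# processor, the kind of building block this role would provision.
# Stack/construct names and the "lambda/" asset path are illustrative only.
from aws_cdk import App, Stack, Duration
from aws_cdk import aws_s3 as s3, aws_lambda as _lambda
from constructs import Construct

class DocPipelineStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Encryption and TLS-only access reflect the posting's audit/compliance bullets.
        bucket = s3.Bucket(
            self, "DocsBucket",
            encryption=s3.BucketEncryption.S3_MANAGED,
            enforce_ssl=True,
        )

        handler = _lambda.Function(
            self, "DocProcessor",
            runtime=_lambda.Runtime.PYTHON_3_11,
            handler="app.handler",
            code=_lambda.Code.from_asset("lambda"),  # hypothetical local asset dir
            timeout=Duration.seconds(30),
        )
        bucket.grant_read(handler)  # least-privilege IAM grant

app = App()
DocPipelineStack(app, "DocPipelineStack")
app.synth()
```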
Posted 23 hours ago
3.0 - 7.0 years
3 - 7 Lacs
cochin, kerala, india
On-site
We are seeking talented AWS AI Engineers / Developers to join our team in building innovative agentic AI solutions. This role focuses on implementing intelligent, automated workflows that enhance clinical data processing and decision-making, leveraging the power of AWS services and cutting-edge open-source AI frameworks. Experience in healthcare or Life Sciences, with a strong focus on regulatory compliance, is highly desirable.
Key Responsibilities:
- Design, develop, and implement agentic AI workflows for clinical source verification, discrepancy detection, and intelligent query generation to enhance data quality and reliability.
- Build and integrate LLM-powered agents using AWS Bedrock in combination with open-source frameworks like LangChain and AutoGen (see the sketch after this listing).
- Create robust, event-driven pipelines leveraging AWS Lambda, Step Functions, and EventBridge for seamless orchestration of data and processes.
- Optimize prompt engineering techniques, implement retrieval-augmented generation (RAG), and facilitate efficient multi-agent communication.
- Integrate AI agents securely with external applications and services through well-defined, secure APIs.
- Collaborate with Data Engineering teams to design and maintain PHI/PII-safe data ingestion pipelines, ensuring compliance with privacy regulations.
- Continuously monitor, test, and fine-tune AI workflows, focusing on improving accuracy, reducing latency, and ensuring compliance with industry standards.
- Document solutions and contribute to the establishment of best practices and governance frameworks.
Required Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related technical field.
- 3–6 years of hands-on experience in AI/ML engineering, with specific expertise in LLM-based or agentic AI development and deployment.
- Strong programming skills in Python and/or TypeScript.
- Practical experience with agentic AI frameworks such as LangChain, LlamaIndex, or AutoGen.
- Solid understanding of AWS AI services including Bedrock, SageMaker, Textract, and Comprehend Medical.
- Proven experience in API development and integration, as well as designing event-driven system architectures.
- Knowledge of healthcare or Life Sciences domains, including regulatory compliance requirements (HIPAA, GDPR, etc.), is preferred.
- Strong problem-solving mindset, with a focus on experimentation, iteration, and delivering innovative solutions rapidly.
Preferred Skills:
- Effective communication and collaboration skills, with the ability to work closely with cross-functional teams.
- Passion for emerging AI technologies and cloud innovations.
- Prior exposure to clinical or life sciences data workflows is a strong advantage.
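To make the "LLM-powered agents using AWS Bedrock" bullet concrete, here is a hedged boto3 sketch of a single model call, the primitive an agent loop would build on. The model ID, region, and prompt are placeholders, not prescriptions.

```python
# Hedged sketch: one Bedrock Converse API call, the primitive an agentic
# workflow would build on. Model ID, region, and prompt are placeholders.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def ask_model(prompt: str) -> str:
    response = bedrock.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
        messages=[{"role": "user", "content": [{"text": prompt}]}],
        inferenceConfig={"maxTokens": 512, "temperature": 0.0},
    )
    # The assistant reply is the first text block of the output message.
    return response["output"]["message"]["content"][0]["text"]

if __name__ == "__main__":
    print(ask_model("Flag any discrepancy between these two visit dates: "
                    "2024-03-01 (source) vs 2024-03-10 (EDC)."))
```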
Posted 1 day ago
3.0 - 6.0 years
2 - 4 Lacs
cochin
On-site
We are seeking talented AWS AI Engineers / Developers to join our team in building innovative agentic AI solutions. This role focuses on implementing intelligent, automated workflows that enhance clinical data processing and decision-making, leveraging the power of AWS services and cutting-edge open-source AI frameworks. Experience in healthcare or Life Sciences, with a strong focus on regulatory compliance, is highly desirable.
Requirements
- Design, develop, and implement agentic AI workflows for clinical source verification, discrepancy detection, and intelligent query generation to enhance data quality and reliability.
- Build and integrate LLM-powered agents using AWS Bedrock in combination with open-source frameworks like LangChain and AutoGen.
- Create robust, event-driven pipelines leveraging AWS Lambda, Step Functions, and EventBridge for seamless orchestration of data and processes (see the sketch after this listing).
- Optimize prompt engineering techniques, implement retrieval-augmented generation (RAG), and facilitate efficient multi-agent communication.
- Integrate AI agents securely with external applications and services through well-defined, secure APIs.
- Collaborate with Data Engineering teams to design and maintain PHI/PII-safe data ingestion pipelines, ensuring compliance with privacy regulations.
- Continuously monitor, test, and fine-tune AI workflows, focusing on improving accuracy, reducing latency, and ensuring compliance with industry standards.
- Document solutions and contribute to the establishment of best practices and governance frameworks.
What we expect from you
- Bachelor's degree in Computer Science, Engineering, or a related technical field.
- 3–6 years of hands-on experience in AI/ML engineering, with specific expertise in LLM-based or agentic AI development and deployment.
- Strong programming skills in Python and/or TypeScript.
- Practical experience with agentic AI frameworks such as LangChain, LlamaIndex, or AutoGen.
- Solid understanding of AWS AI services including Bedrock, SageMaker, Textract, and Comprehend Medical.
- Proven experience in API development and integration, as well as designing event-driven system architectures.
- Knowledge of healthcare or Life Sciences domains, including regulatory compliance requirements (HIPAA, GDPR, etc.), is preferred.
- Strong problem-solving mindset, with a focus on experimentation, iteration, and delivering innovative solutions rapidly.
What you've got
- Effective communication and collaboration skills, with the ability to work closely with cross-functional teams.
- Passion for emerging AI technologies and cloud innovations.
- Prior exposure to clinical or life sciences data workflows is a strong advantage.
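The event-driven pipeline requirement (Lambda + Step Functions + EventBridge) could start roughly like the hedged Lambda handler below. The state machine ARN, the environment variable name, and the event's "detail" fields are all assumptions.

```python
# Hedged sketch: a Lambda handler that receives an EventBridge event for a
# newly ingested document and starts a Step Functions execution.
# The state machine ARN and the event's "detail" fields are assumptions.
import json
import os
import boto3

sfn = boto3.client("stepfunctions")
STATE_MACHINE_ARN = os.environ["STATE_MACHINE_ARN"]  # hypothetical config

def handler(event, context):
    detail = event.get("detail", {})
    execution = sfn.start_execution(
        stateMachineArn=STATE_MACHINE_ARN,
        input=json.dumps({
            "documentId": detail.get("documentId"),
            "s3Uri": detail.get("s3Uri"),
            "workflow": "source-verification",
        }),
    )
    return {"executionArn": execution["executionArn"]}
```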
Posted 3 days ago
8.0 years
0 Lacs
hyderabad, telangana, india
On-site
Greetings from TCS!!! TCS is hiring for Solution Architect - AI/GenAI/ML.
Role
Solution Architect - AI/GenAI/ML (Azure/AWS/Google)
Required Technical Skill Set
- Expertise in designing GenAI architectures including LLM selection, RAG pipelines, vector databases, and integration patterns.
- GenAI frameworks and tools: LangChain, LlamaIndex, Haystack, Hugging Face Transformers, etc.
- Familiarity with prompt engineering, fine-tuning, RLHF, and MLOps workflows.
Desired Experience Range
- Bachelor's or Master's in Computer Science, AI/ML, Engineering, or a related field.
- 8+ years of experience in solution architecture or AI/ML roles, with 1–2 years focused on LLM or GenAI applications.
Desired Competencies (Technical/Behavioral Competency)
Must-Have
- Experience architecting AI solutions on at least one cloud platform: Azure (Azure OpenAI, Azure ML, Cognitive Services, Synapse); AWS (Bedrock, SageMaker, Comprehend, Textract, Kendra); GCP (Vertex AI, PaLM API, LangChain + BigQuery + Looker).
- Hands-on with GenAI frameworks and tools: LangChain, LlamaIndex, Haystack, Hugging Face Transformers, etc.
- Familiarity with prompt engineering, fine-tuning, RLHF, and MLOps workflows.
- Knowledge of cloud-native architecture, REST APIs, containers (Docker, Kubernetes), and CI/CD.
- Experience with data privacy, model safety, bias mitigation, and AI governance principles.
Good-to-Have
- Cloud certifications.
- Experience integrating GenAI into enterprise applications.
- Understanding of vector DBs (e.g., Pinecone, Weaviate, Chroma, Qdrant) and embedding models.
- Familiarity with Guardrails for LLMs, model monitoring, and LLMOps platforms.
Responsibility of / Expectations from the Role
- Architect end-to-end Generative AI solutions for enterprise use cases such as agentic solutions, chatbots and copilots, knowledge assistants (e.g., RAG), document summarization/generation/translation, and vision, speech, or code generation models.
- Lead the design and integration of LLM pipelines with cloud-native services (e.g., serverless, containers, APIs).
- Select and fine-tune foundation models (OpenAI, Claude, Mistral, LLaMA, PaLM, etc.) as needed.
- Implement retrieval-augmented generation (RAG) using vector databases and hybrid search (e.g., FAISS, Pinecone, Weaviate); a retrieval sketch follows this listing.
- Design architectures that ensure scalability, security, and governance for GenAI applications.
- Build reference implementations, proof of concepts (PoCs), and reusable solution templates.
- Collaborate with data engineers, MLOps engineers, UI/UX designers, and product teams.
- Stay current with emerging GenAI trends, tools, models, and patterns.
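As a hedged sketch of the RAG-retrieval skill this role asks for, the snippet below indexes a few document embeddings in FAISS and pulls nearest neighbours. The embed() function is stubbed with random vectors purely for illustration; a real pipeline would call an embedding model.

```python
# Hedged RAG-retrieval sketch: FAISS inner-product index over L2-normalized
# embeddings (i.e., cosine similarity). embed() is a stand-in; a real
# pipeline would call an embedding model (e.g., via Bedrock or Hugging Face).
import numpy as np
import faiss

DIM = 384  # assumed embedding width

def embed(texts: list[str]) -> np.ndarray:
    rng = np.random.default_rng(0)          # placeholder embeddings
    vecs = rng.normal(size=(len(texts), DIM)).astype("float32")
    faiss.normalize_L2(vecs)                # normalize in place for cosine
    return vecs

corpus = ["refund policy", "claims intake SOP", "prior auth checklist"]
index = faiss.IndexFlatIP(DIM)
index.add(embed(corpus))

query_vec = embed(["how do I file a claim?"])
scores, ids = index.search(query_vec, 2)    # top-2 chunks for the prompt
for score, i in zip(scores[0], ids[0]):
    print(f"{score:.3f}  {corpus[i]}")
```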
Posted 4 days ago
6.0 years
0 Lacs
kochi, kerala, india
On-site
AWS AI Engineer / Developer
We are seeking talented AWS AI Engineers / Developers to join our team in building innovative agentic AI solutions. This role focuses on implementing intelligent, automated workflows that enhance clinical data processing and decision-making, leveraging the power of AWS services and cutting-edge open-source AI frameworks. Experience in healthcare or Life Sciences, with a strong focus on regulatory compliance, is highly desirable.
Requirements
- Design, develop, and implement agentic AI workflows for clinical source verification, discrepancy detection, and intelligent query generation to enhance data quality and reliability.
- Build and integrate LLM-powered agents using AWS Bedrock in combination with open-source frameworks like LangChain and AutoGen.
- Create robust, event-driven pipelines leveraging AWS Lambda, Step Functions, and EventBridge for seamless orchestration of data and processes.
- Optimize prompt engineering techniques, implement retrieval-augmented generation (RAG), and facilitate efficient multi-agent communication.
- Integrate AI agents securely with external applications and services through well-defined, secure APIs.
- Collaborate with Data Engineering teams to design and maintain PHI/PII-safe data ingestion pipelines, ensuring compliance with privacy regulations.
- Continuously monitor, test, and fine-tune AI workflows, focusing on improving accuracy, reducing latency, and ensuring compliance with industry standards.
- Document solutions and contribute to the establishment of best practices and governance frameworks.
What we expect from you
- Bachelor's degree in Computer Science, Engineering, or a related technical field.
- 3–6 years of hands-on experience in AI/ML engineering, with specific expertise in LLM-based or agentic AI development and deployment.
- Strong programming skills in Python and/or TypeScript.
- Practical experience with agentic AI frameworks such as LangChain, LlamaIndex, or AutoGen.
- Solid understanding of AWS AI services including Bedrock, SageMaker, Textract, and Comprehend Medical (a Textract sketch follows this listing).
- Proven experience in API development and integration, as well as designing event-driven system architectures.
- Knowledge of healthcare or Life Sciences domains, including regulatory compliance requirements (HIPAA, GDPR, etc.), is preferred.
- Strong problem-solving mindset, with a focus on experimentation, iteration, and delivering innovative solutions rapidly.
What you've got
- Effective communication and collaboration skills, with the ability to work closely with cross-functional teams.
- Passion for emerging AI technologies and cloud innovations.
- Prior exposure to clinical or life sciences data workflows is a strong advantage.
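Since this posting lists Textract for document intelligence, here is a hedged synchronous call extracting form fields. The bucket and object names are placeholders, and multi-page clinical documents would use the asynchronous APIs instead.

```python
# Hedged sketch: synchronous Textract AnalyzeDocument call for form fields.
# Bucket/key are placeholders; large or multi-page documents would use the
# asynchronous StartDocumentAnalysis / GetDocumentAnalysis pair instead.
import boto3

textract = boto3.client("textract")

resp = textract.analyze_document(
    Document={"S3Object": {"Bucket": "example-clinical-docs", "Name": "intake-form.png"}},
    FeatureTypes=["FORMS"],
)

# Count the KEY_VALUE_SET blocks Textract returned; real code would join
# keys to values by walking each block's Relationships graph.
kv_blocks = [b for b in resp["Blocks"] if b["BlockType"] == "KEY_VALUE_SET"]
print(f"Detected {len(kv_blocks)} key/value blocks")
```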
Posted 4 days ago
5.0 years
0 Lacs
hyderabad, telangana, india
On-site
Role: Senior Data Scientist - Agentic AI
Experience: 5+ years in applied data science, with financial-domain AI
Mandatory Skills: Python, NLP, PyTorch, TensorFlow, financial domain
Location: Hyderabad
Role Overview
We are hiring a Senior Data Scientist to develop and deploy advanced AI/ML models powering mortgage automation agents. This role involves designing models for creditworthiness prediction, fraud detection, income verification, document intelligence, and conversational agents. The candidate will ensure models are accurate, explainable, and compliant with regulatory frameworks.
Required Skills & Experience
- Strong experience with Python, PyTorch, TensorFlow, scikit-learn, and Hugging Face Transformers.
- NLP expertise: NER, summarization, Q&A, OCR (Tesseract, Amazon Textract, Azure Form Recognizer).
- Deep understanding of statistical modeling, risk scoring, credit analytics, and fraud detection (see the toy sketch after this listing).
- Experience with vector databases (Pinecone, Weaviate, FAISS, Milvus).
- Knowledge of MLOps tools (MLflow, Kubeflow, Weights & Biases).
- Familiarity with mortgage underwriting, risk modeling, and KYC/AML checks.
- Strong foundation in responsible AI practices (fairness, accountability, transparency).
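A toy, hedged illustration of the risk-scoring work described above: logistic regression on synthetic applicant features. The features, labels, and thresholds are entirely fabricated; a production model would add calibration, explainability, and the fairness audits the posting's responsible-AI bullet implies.

```python
# Hedged toy sketch of creditworthiness scoring: logistic regression on
# synthetic data. Features and labels are fabricated for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 5_000
X = np.column_stack([
    rng.normal(650, 80, n),      # synthetic credit score
    rng.uniform(0.0, 0.6, n),    # synthetic debt-to-income ratio
    rng.integers(0, 30, n),      # synthetic years of credit history
])
# Synthetic label: default risk rises with DTI, falls with score/history.
logit = -0.01 * (X[:, 0] - 650) + 4.0 * X[:, 1] - 0.05 * X[:, 2]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"Held-out AUC: {auc:.3f}")  # explainability/fairness audits would follow
```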
Posted 5 days ago
8.0 - 12.0 years
3 - 7 Lacs
cochin
On-site
We are seeking a highly skilled Senior AWS Solutions / AI Architect to lead the design and implementation of advanced agentic AI solutions within cloud environments, specifically leveraging AWS services. The ideal candidate will combine deep technical expertise in AI/ML, cloud-native architectures, and healthcare domain experience to deliver scalable, compliant, and innovative solutions that drive business outcomes.
Requirements
- Architect and lead the design of agentic AI systems utilizing AWS services such as Bedrock, SageMaker, Lambda, Step Functions, EventBridge, OpenSearch, and DynamoDB.
- Design and implement multi-agent orchestration frameworks for clinical data workflows, including source verification, discrepancy resolution, and data integration.
- Evaluate architectural trade-offs between LLM-based reasoning, rules engines, and hybrid AI solutions to determine the best approach per use case (see the sketch after this listing).
- Provide thought leadership in healthcare and Life Sciences AI solutions, ensuring compliance with relevant regulatory frameworks.
- Mentor and guide AI engineers, developers, and DevOps teams on best practices for building agent-based architectures.
- Collaborate closely with clinical Subject Matter Experts (SMEs), compliance teams, and project sponsors to ensure that technical solutions align with strategic business objectives.
What we expect from you
- Strong problem-solving aptitude with an analytical mindset.
- Effective communication skills, both written and verbal, especially when engaging with cross-functional teams.
- Passion for cutting-edge AI technologies and cloud innovations.
What you've got
- Bachelor's or Master's degree in Computer Science, Artificial Intelligence, or a related field (PhD is a strong advantage).
- 8–12 years of industry experience in AI/ML engineering, including 3–5 years focused on cloud-native AI architecture and solution design.
- Hands-on experience with AWS AI services: Bedrock, SageMaker, Comprehend Medical, Textract, Kendra, and Lex.
- Strong knowledge of agentic AI frameworks such as LangChain, LlamaIndex, Haystack, CrewAI, or AutoGen.
- Proven expertise in workflow orchestration tools like AWS Step Functions, Apache Airflow, or Temporal.
- Deep understanding of multi-agent pipeline design, data processing, and automation in regulated environments.
- Demonstrated experience in delivering AI solutions for healthcare and Life Sciences domains, with a strong focus on regulatory compliance.
- Excellent leadership skills, with the ability to influence stakeholders, provide technical governance, and deliver high-impact solutions.
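A minimal, hedged sketch of the hybrid pattern this role would weigh: deterministic rules handle clear-cut cases cheaply, and only ambiguous records fall through to an LLM reasoner. The rule set, record shape, and llm_reason() stub are invented assumptions, not anyone's actual design.

```python
# Hedged sketch of a hybrid resolver: cheap deterministic rules handle the
# clear-cut cases; only ambiguous records fall through to an LLM reasoner.
# Rules, record shape, and llm_reason() are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Discrepancy:
    field: str
    source_value: str
    edc_value: str

def rules_verdict(d: Discrepancy) -> str | None:
    if d.source_value.strip().lower() == d.edc_value.strip().lower():
        return "match"                      # trivial normalization resolves it
    if d.field == "subject_id":
        return "escalate"                   # identifiers never auto-resolve
    return None                             # rules abstain -> hybrid fallback

def llm_reason(d: Discrepancy) -> str:
    # Placeholder for a Bedrock/LangChain call returning a verdict + rationale.
    return "needs-review"

def resolve(d: Discrepancy) -> str:
    return rules_verdict(d) or llm_reason(d)

print(resolve(Discrepancy("visit_date", "2024-03-01", "2024-03-01 ")))  # match
```

The design rationale is cost and auditability: rules are free and explainable, so the LLM is reserved for the residual ambiguity where its reasoning earns its latency and spend.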
Posted 5 days ago
7.0 years
0 Lacs
hyderābād
On-site
Opportunity Overview:
We are seeking a highly skilled and technically sound Data Science Team Manager to lead and drive our data science initiatives. This is a hands-on leadership role, ideal for someone with strong experience in designing and architecting data solutions, streamlining workflows, and directly contributing to team deliverables. This is not a traditional people management role; instead, we're looking for a technologically proficient leader who thrives on building solutions, mentoring others, and aligning data strategies with business goals.
This role requires hands-on experience building and architecting end-to-end NLP, data science, and machine learning solutions using tools such as Python, SQL, and AWS services, including EC2, Lambda, Glue, SNS, SQS, SageMaker, EMR, and Athena. The candidate should be adept at leveraging these technologies to architect the design and deployment of scalable, production-ready machine learning pipelines and intelligent systems.
Required Qualifications:
- 7+ years of professional experience in data science, machine learning, data engineering, or a related technical discipline.
- Proven experience in managing or leading technical teams, with a strong emphasis on mentoring and delivery.
- Lead by example as a hands-on manager, actively engaging in coding, solution design, and problem-solving alongside the team.
- Demonstrated experience in designing and implementing complex data architectures and pipelines.
- Partner with senior leadership to define the organization's data strategy and ensure alignment with business objectives.
- Develop and enforce data governance frameworks, policies, and procedures to ensure data quality, compliance, and risk mitigation.
- Streamline workflows and optimize processes to improve the efficiency, quality, and output of the data science team.
- Actively mentor and upskill team members, providing knowledge transfer and support for continuous learning and development.
- Foster cross-functional collaboration by working closely with engineering, product, analytics, and business teams to ensure data solutions are aligned with organizational goals and integrated seamlessly across platforms.
- Drive a culture of responsibility, ownership, and accountability within the team.
- Balance multiple concurrent projects while ensuring high-quality outcomes and timely delivery.
- Maintain a deep understanding of current technologies and industry best practices to ensure the team remains innovative and competitive.
- Exceptional analytical, problem-solving, and critical thinking skills.
- Solid understanding of Object-Oriented Programming (OOP) principles, with experience writing clean, modular, and reusable code for scalable data science applications.
- Proficient in Git for version control, including branching strategies, pull requests, and collaborative code reviews.
- Hands-on experience using JIRA (or similar tools) for sprint planning, task tracking, and agile workflow management.
- Strong verbal and written communication skills; capable of interacting effectively with both technical and non-technical stakeholders.
- Self-motivated, proactive, and driven, with a strong sense of ownership and accountability.
Good to Have
- Knowledge of applying state-of-the-art NLP models such as BERT, GPT-x, scispaCy, bidirectional LSTM-CNNs, RNNs, and Amazon Comprehend Medical for clinical Named Entity Recognition (NER).
- Strong leadership skills.
- Deployment of custom-trained and prebuilt NER models using AWS SageMaker.
- Knowledge of setting up an AWS Textract pipeline for large-scale text processing using AWS SNS, AWS SQS, Lambda, and EC2 (see the sketch after this listing).
- Intellectual curiosity to learn new things.
- ISMS responsibilities should be followed as per company policy.
Ability to commute/relocate: Nacharam, Hyderabad, Telangana: reliably commute or plan to relocate before starting work (Preferred).
Interview Process (subject to change):
- Connect with Talent Acquisition for a preliminary phone screening
- Meet your Hiring Manager
- Live design exercise
- Cross-functional interview
About ZignaAI, a Cohere Health Company:
ZignaAI, a Cohere Health company, is focused on delivering innovative solutions that transform healthcare payment operational processes. We empower payers, providers, and patients with AI-powered software solutions that drive transparency in healthcare payment services. Built-in intelligence-enabled machine learning algorithms deliver pre-billing payment accuracy solutions and avoid provider abrasion. We differ from traditional payment services solutions by resolving issues at the root, ensuring accurate payments and automating processes with nudges delivered to billing coders. Our innovative and scalable solutions cover Medicaid, Medicare, and Commercial policies, and deliver results in weeks.
Cohere Health is a fast-growing clinical intelligence company that's improving lives at scale by promoting the best patient-specific care options, using cutting-edge AI combined with deep clinical expertise. In only four years our solutions have been adopted by health plans covering over 15 million lives, while our revenues and company size have quadrupled. That growth, combined with capital raises totaling $106M, positions us extremely well for continued success. Our awards include: 2023 and 2024 BuiltIn Best Place to Work; Top 5 LinkedIn™ Startup; TripleTree iAward; multiple KLAS Research Points of Light awards; and recognition on Fierce Healthcare's Fierce 15 and CB Insights' Digital Health 150 lists.
The Coherenauts, as we call ourselves, who succeed here are empathetic teammates who are candid, kind, caring, and embody our core values and principles. We believe that diverse, inclusive teams make the most impactful work. Cohere is deeply invested in ensuring that we have a supportive, growth-oriented environment that works for everyone. We can't wait to learn more about you and meet you at ZignaAI, a Cohere Health company!
Equal Opportunity Statement: Cohere Health is an Equal Opportunity Employer. We are committed to fostering an environment of mutual respect where equal employment opportunities are available to all. To us, it's personal.
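The "good to have" item about a Textract pipeline over SNS/SQS could start like this hedged sketch: kick off an asynchronous text-detection job with completion notifications published to SNS. The bucket, key, topic ARN, and role ARN are placeholders.

```python
# Hedged sketch: kick off asynchronous Textract text detection with results
# published to SNS (typically fanned out to SQS for Lambda/EC2 consumers).
# Bucket, key, topic ARN, and role ARN are placeholders.
import boto3

textract = boto3.client("textract")

job = textract.start_document_text_detection(
    DocumentLocation={
        "S3Object": {"Bucket": "example-claims-bucket", "Name": "claims/batch-001.pdf"}
    },
    NotificationChannel={
        "SNSTopicArn": "arn:aws:sns:us-east-1:123456789012:textract-done",  # placeholder
        "RoleArn": "arn:aws:iam::123456789012:role/TextractPublishRole",    # placeholder
    },
)
print("Started Textract job:", job["JobId"])
# A downstream SQS consumer receives the completion message and calls
# get_document_text_detection(JobId=...) to page through the results.
```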
Posted 5 days ago
4.0 - 9.0 years
12 - 19 Lacs
ambattur
Work from Office
Looking for an AI Developer (Amazon Bedrock / Agentic AI).
- AWS certification
- Experience with orchestration frameworks
- Knowledge of vector databases (see the sketch after this listing)
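Given the Bedrock-plus-vector-database focus, here is a hedged sketch that fetches a Titan embedding from Bedrock and ranks a tiny in-memory store by cosine similarity. The model ID is an example, and the in-memory store stands in for a real vector database.

```python
# Hedged sketch: get a text embedding from Bedrock (Titan) and rank a tiny
# in-memory store by cosine similarity. A real system would persist vectors
# in a vector database (OpenSearch, Pinecone, etc.). Model ID is an example.
import json
import numpy as np
import boto3

bedrock = boto3.client("bedrock-runtime")

def titan_embed(text: str) -> np.ndarray:
    resp = bedrock.invoke_model(
        modelId="amazon.titan-embed-text-v2:0",      # example embedding model
        body=json.dumps({"inputText": text}),
    )
    return np.array(json.loads(resp["body"].read())["embedding"], dtype="float32")

docs = ["invoice dispute workflow", "agent tool-calling guide"]
doc_vecs = np.stack([titan_embed(d) for d in docs])

q = titan_embed("how do agents call tools?")
scores = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q))
print(docs[int(scores.argmax())])
```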
Posted 5 days ago
12.0 years
0 Lacs
kochi, kerala, india
On-site
Senior AWS Solutions / AI Architect
We are seeking a highly skilled Senior AWS Solutions / AI Architect to lead the design and implementation of advanced agentic AI solutions within cloud environments, specifically leveraging AWS services. The ideal candidate will combine deep technical expertise in AI/ML, cloud-native architectures, and healthcare domain experience to deliver scalable, compliant, and innovative solutions that drive business outcomes.
Requirements
- Architect and lead the design of agentic AI systems utilizing AWS services such as Bedrock, SageMaker, Lambda, Step Functions, EventBridge, OpenSearch, and DynamoDB (an OpenSearch retrieval sketch follows this listing).
- Design and implement multi-agent orchestration frameworks for clinical data workflows, including source verification, discrepancy resolution, and data integration.
- Evaluate architectural trade-offs between LLM-based reasoning, rules engines, and hybrid AI solutions to determine the best approach per use case.
- Provide thought leadership in healthcare and Life Sciences AI solutions, ensuring compliance with relevant regulatory frameworks.
- Mentor and guide AI engineers, developers, and DevOps teams on best practices for building agent-based architectures.
- Collaborate closely with clinical Subject Matter Experts (SMEs), compliance teams, and project sponsors to ensure that technical solutions align with strategic business objectives.
What we expect from you
- Strong problem-solving aptitude with an analytical mindset.
- Effective communication skills, both written and verbal, especially when engaging with cross-functional teams.
- Passion for cutting-edge AI technologies and cloud innovations.
What you've got
- Bachelor's or Master's degree in Computer Science, Artificial Intelligence, or a related field (PhD is a strong advantage).
- 8–12 years of industry experience in AI/ML engineering, including 3–5 years focused on cloud-native AI architecture and solution design.
- Hands-on experience with AWS AI services: Bedrock, SageMaker, Comprehend Medical, Textract, Kendra, and Lex.
- Strong knowledge of agentic AI frameworks such as LangChain, LlamaIndex, Haystack, CrewAI, or AutoGen.
- Proven expertise in workflow orchestration tools like AWS Step Functions, Apache Airflow, or Temporal.
- Deep understanding of multi-agent pipeline design, data processing, and automation in regulated environments.
- Demonstrated experience in delivering AI solutions for healthcare and Life Sciences domains, with a strong focus on regulatory compliance.
- Excellent leadership skills, with the ability to influence stakeholders, provide technical governance, and deliver high-impact solutions.
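One slice of the stack above, sketched with heavy hedging: a k-NN vector query against OpenSearch, the retrieval half of a RAG or agent-memory design. The host, index name, field name, and query vector are placeholders, it assumes an index created with a knn_vector mapping, and real AWS-managed domains would also need signed-request authentication.

```python
# Hedged sketch: k-NN vector query against an OpenSearch index. Assumes the
# index was created with a knn_vector field; host, index, field, and the
# (truncated) query vector are placeholders. Auth is omitted for brevity.
from opensearchpy import OpenSearch

client = OpenSearch(
    hosts=[{"host": "search-example.us-east-1.es.amazonaws.com", "port": 443}],
    use_ssl=True,
)

query = {
    "size": 3,
    "query": {
        "knn": {
            "chunk_embedding": {                 # assumed knn_vector field
                "vector": [0.12, -0.08, 0.33],   # truncated placeholder vector
                "k": 3,
            }
        }
    },
}
hits = client.search(index="clinical-chunks", body=query)["hits"]["hits"]
for h in hits:
    print(h["_score"], h["_source"].get("text", "")[:80])
```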
Posted 6 days ago
6.0 years
0 Lacs
pune, maharashtra, india
On-site
About the Company:
Apexon is a digital-first technology services firm specialising in accelerating business transformation and delivering human-centric digital experiences. We have been meeting customers wherever they are in the digital lifecycle and helping them outperform their competition through speed and innovation. Apexon brings together distinct core competencies – in AI, analytics, app development, cloud, commerce, CX, data, DevOps, IoT, mobile, quality engineering and UX, and our deep expertise in BFSI, healthcare, and life sciences – to help businesses capitalise on the unlimited opportunities digital offers. Our reputation is built on a comprehensive suite of engineering services, a dedication to solving clients' toughest technology problems, and a commitment to continuous improvement. Backed by Goldman Sachs Asset Management and Everstone Capital, Apexon now has a global presence of 15 offices (and 10 delivery centres) across four continents. We enable #HumanFirstDigital
About the Role:
We are seeking a skilled AI/ML Engineer with expertise in agentic AI workflows and LLM-powered applications. The ideal candidate will design, build, and optimise AI-driven solutions for the healthcare and life sciences domain, ensuring accuracy, scalability, and compliance.
Responsibilities:
- Design and implement agentic AI workflows for clinical source verification, discrepancy detection, and intelligent query generation.
- Build and integrate LLM-powered agents using AWS Bedrock and open-source frameworks (LangChain, AutoGen, LlamaIndex).
- Develop event-driven data pipelines leveraging AWS Lambda, Step Functions, and EventBridge.
- Apply prompt engineering, retrieval-augmented generation (RAG), and multi-agent communication techniques to enhance workflow performance.
- Integrate AI agents securely with external systems via APIs.
- Collaborate with data engineers to design PHI/PII-safe ingestion pipelines, ensuring regulatory compliance (see the redaction sketch after this listing).
- Monitor, test, and fine-tune AI workflows for accuracy, latency, scalability, and compliance.
- Work cross-functionally with product and domain experts to deliver regulatory-compliant healthcare/Life Sciences AI solutions.
Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field (Master's preferred).
- 3–6 years of experience in AI/ML engineering with proven hands-on expertise in LLM/agentic AI development.
- Strong programming skills in Python and/or TypeScript.
- Practical experience with frameworks such as LangChain, LlamaIndex, and AutoGen.
- Familiarity with AWS AI/ML services: Bedrock, SageMaker, Textract, Comprehend Medical.
- Proficiency in API integration and event-driven architectures.
- Experience in healthcare/life sciences AI solutions with an understanding of regulatory and compliance standards (HIPAA, GDPR, etc.) preferred.
- Strong problem-solving mindset with the ability to experiment, iterate, and optimise rapidly.
Required Skills:
- Expertise in agentic AI workflows.
- Experience with LLM-powered applications.
- Strong programming skills in Python and/or TypeScript.
- Familiarity with AWS AI/ML services.
Preferred Skills:
- Experience in healthcare/life sciences AI solutions.
- Understanding of regulatory and compliance standards.
Equal Opportunity Statement:
Did you know that Apexon has been Certified™ by Great Place To Work®, the global authority on workplace culture, in each of the three regions in which it operates: USA (for the fourth time in 2023), India (seven consecutive certifications as of 2023), and the UK.
Apexon is committed to being an equal opportunity employer and promoting diversity in the workplace. We take affirmative action to ensure equal employment opportunity for all qualified individuals. Apexon strictly prohibits discrimination and harassment of any kind and provides equal employment opportunities to employees and applicants without regard to gender, race, colour, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law. You can read about our Job Applicant Privacy policy here: Job Applicant Privacy Policy.
Our Commitment to Environment:
Actively contribute to Apexon's commitment to environmental responsibility by following sustainable practices and supporting ESG initiatives.
Our Perks and Benefits:
Our benefits and rewards program has been thoughtfully designed to recognise your skills and contributions, elevate your learning/upskilling experience, and provide care and support for you and your loved ones. As an Apexon Associate, you get continuous skill-based development, opportunities for career advancement, and access to comprehensive health and well-being benefits and assistance.
- Group Health Insurance covering a family of 4
- Term Insurance and Accident Insurance
- Paid Holidays & Earned Leaves
- Paid Parental Leave
- Learning & Career Development
- Employee Wellness
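For the PHI/PII-safe ingestion responsibility, a hedged sketch using Amazon Comprehend Medical to locate and mask PHI spans before a record moves downstream. The detect_phi API is real, but the masking policy shown (replace each detected span with its entity type) is just one illustrative choice.

```python
# Hedged sketch: locate PHI with Amazon Comprehend Medical and mask it before
# a record enters downstream pipelines. The masking policy (replace every
# detected span with its entity type) is an illustrative choice, not a standard.
import boto3

cm = boto3.client("comprehendmedical")

def mask_phi(text: str) -> str:
    entities = cm.detect_phi(Text=text)["Entities"]
    # Replace from the end of the string so earlier offsets stay valid.
    for e in sorted(entities, key=lambda e: e["BeginOffset"], reverse=True):
        text = text[: e["BeginOffset"]] + f"[{e['Type']}]" + text[e["EndOffset"]:]
    return text

print(mask_phi("Pt. Jane Doe, DOB 04/12/1981, seen at Mercy Clinic on 2024-03-01."))
```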
Posted 1 week ago
7.0 years
0 Lacs
hyderabad, telangana
On-site
Opportunity Overview:
We are seeking a highly skilled and technically sound Data Science Team Manager to lead and drive our data science initiatives. This is a hands-on leadership role, ideal for someone with strong experience in designing and architecting data solutions, streamlining workflows, and directly contributing to team deliverables. This is not a traditional people management role; instead, we're looking for a technologically proficient leader who thrives on building solutions, mentoring others, and aligning data strategies with business goals.
This role requires hands-on experience building and architecting end-to-end NLP, data science, and machine learning solutions using tools such as Python, SQL, and AWS services, including EC2, Lambda, Glue, SNS, SQS, SageMaker, EMR, and Athena. The candidate should be adept at leveraging these technologies to architect the design and deployment of scalable, production-ready machine learning pipelines and intelligent systems.
Required Qualifications:
- 7+ years of professional experience in data science, machine learning, data engineering, or a related technical discipline.
- Proven experience in managing or leading technical teams, with a strong emphasis on mentoring and delivery.
- Lead by example as a hands-on manager, actively engaging in coding, solution design, and problem-solving alongside the team.
- Demonstrated experience in designing and implementing complex data architectures and pipelines.
- Partner with senior leadership to define the organization's data strategy and ensure alignment with business objectives.
- Develop and enforce data governance frameworks, policies, and procedures to ensure data quality, compliance, and risk mitigation.
- Streamline workflows and optimize processes to improve the efficiency, quality, and output of the data science team.
- Actively mentor and upskill team members, providing knowledge transfer and support for continuous learning and development.
- Foster cross-functional collaboration by working closely with engineering, product, analytics, and business teams to ensure data solutions are aligned with organizational goals and integrated seamlessly across platforms.
- Drive a culture of responsibility, ownership, and accountability within the team.
- Balance multiple concurrent projects while ensuring high-quality outcomes and timely delivery.
- Maintain a deep understanding of current technologies and industry best practices to ensure the team remains innovative and competitive.
- Exceptional analytical, problem-solving, and critical thinking skills.
- Solid understanding of Object-Oriented Programming (OOP) principles, with experience writing clean, modular, and reusable code for scalable data science applications.
- Proficient in Git for version control, including branching strategies, pull requests, and collaborative code reviews.
- Hands-on experience using JIRA (or similar tools) for sprint planning, task tracking, and agile workflow management.
- Strong verbal and written communication skills; capable of interacting effectively with both technical and non-technical stakeholders.
- Self-motivated, proactive, and driven, with a strong sense of ownership and accountability.
Good to Have
- Knowledge of applying state-of-the-art NLP models such as BERT, GPT-x, scispaCy, bidirectional LSTM-CNNs, RNNs, and Amazon Comprehend Medical for clinical Named Entity Recognition (NER).
- Strong leadership skills.
- Deployment of custom-trained and prebuilt NER models using AWS SageMaker.
- Knowledge of setting up an AWS Textract pipeline for large-scale text processing using AWS SNS, AWS SQS, Lambda, and EC2 (a consumer-side sketch follows this listing).
- Intellectual curiosity to learn new things.
- ISMS responsibilities should be followed as per company policy.
Ability to commute/relocate: Nacharam, Hyderabad, Telangana: reliably commute or plan to relocate before starting work (Preferred).
Interview Process (subject to change):
- Connect with Talent Acquisition for a preliminary phone screening
- Meet your Hiring Manager
- Live design exercise
- Cross-functional interview
About ZignaAI, a Cohere Health Company:
ZignaAI, a Cohere Health company, is focused on delivering innovative solutions that transform healthcare payment operational processes. We empower payers, providers, and patients with AI-powered software solutions that drive transparency in healthcare payment services. Built-in intelligence-enabled machine learning algorithms deliver pre-billing payment accuracy solutions and avoid provider abrasion. We differ from traditional payment services solutions by resolving issues at the root, ensuring accurate payments and automating processes with nudges delivered to billing coders. Our innovative and scalable solutions cover Medicaid, Medicare, and Commercial policies, and deliver results in weeks.
Cohere Health is a fast-growing clinical intelligence company that's improving lives at scale by promoting the best patient-specific care options, using cutting-edge AI combined with deep clinical expertise. In only four years our solutions have been adopted by health plans covering over 15 million lives, while our revenues and company size have quadrupled. That growth, combined with capital raises totaling $106M, positions us extremely well for continued success. Our awards include: 2023 and 2024 BuiltIn Best Place to Work; Top 5 LinkedIn™ Startup; TripleTree iAward; multiple KLAS Research Points of Light awards; and recognition on Fierce Healthcare's Fierce 15 and CB Insights' Digital Health 150 lists.
The Coherenauts, as we call ourselves, who succeed here are empathetic teammates who are candid, kind, caring, and embody our core values and principles. We believe that diverse, inclusive teams make the most impactful work. Cohere is deeply invested in ensuring that we have a supportive, growth-oriented environment that works for everyone. We can't wait to learn more about you and meet you at ZignaAI, a Cohere Health company!
Equal Opportunity Statement: Cohere Health is an Equal Opportunity Employer. We are committed to fostering an environment of mutual respect where equal employment opportunities are available to all. To us, it's personal.
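Complementing the job-start sketch shown with the earlier copy of this listing, here is the hedged consumer side: poll SQS for the Textract completion notification and fetch the first page of results. The queue URL is a placeholder, and the message shape follows Textract's documented SNS payload, but treat the details as assumptions.

```python
# Hedged sketch: consume Textract completion notifications from SQS and pull
# results. Queue URL is a placeholder; the SNS envelope is unwrapped from the
# SQS body before reading Textract's JobId/Status fields.
import json
import boto3

sqs = boto3.client("sqs")
textract = boto3.client("textract")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/textract-done"  # placeholder

msgs = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=5,
                           WaitTimeSeconds=10).get("Messages", [])
for m in msgs:
    note = json.loads(json.loads(m["Body"])["Message"])  # SNS envelope -> Textract note
    if note.get("Status") == "SUCCEEDED":
        page = textract.get_document_text_detection(JobId=note["JobId"])
        lines = [b["Text"] for b in page["Blocks"] if b["BlockType"] == "LINE"]
        print(f"{note['JobId']}: {len(lines)} lines on first result page")
    sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=m["ReceiptHandle"])
```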
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
kochi, kerala
On-site
As a Data Engineer, your main objective will be to build data pipelines for crawling, parsing, and connecting external systems and interfaces. This includes developing crawling and fetching pipelines using an API-first approach and tools like Playwright and requests. You will also be responsible for parsing and normalizing job postings and CVs, implementing deduplication and delta logic (see the sketch after this listing), and working on embeddings and similarity search. Additionally, you will integrate with systems such as HR4YOU, SerpAPI, the BA job board, and email/SMTP.
Your role will also involve batch and stream processing using Azure Functions or container jobs, implementing retry/backoff strategies, and setting up dead-letter queues for error handling. Monitoring data-quality metrics such as freshness, duplicate rate, coverage, and cost per 1,000 items will be crucial. You will collaborate with the frontend team on data exports and admin configuration, ensuring seamless data flow across the system.
The ideal candidate has:
- At least 4 years of experience in backend/data engineering.
- Proficiency in Python, especially with FastAPI, pydantic, httpx/requests, and Playwright/Selenium, plus solid experience in TypeScript for smaller services and SDKs.
- Familiarity with Azure services like Functions/Container Apps, Storage/Blob, Key Vault, and Monitor/Log Analytics.
- Experience with messaging systems like Service Bus/Queues, databases such as PostgreSQL and pgvector, and clean ETL/ELT patterns.
- Knowledge of testability using pytest, observability with OpenTelemetry, and NLP/IE experience with tools like spaCy, regex, and rapidfuzz (advantageous).
- Experience with license/ToS-compliant data retrieval and captcha/anti-bot strategies, and a working method focused on an API-first approach, clean code, and trunk-based development (beneficial).
- Familiarity with tools like GitHub, Docker, GitHub Actions/Azure DevOps, pnpm/Turborepo, Jira/Linear, and Notion/Confluence (a plus).
This role may involve rotating on-call support responsibilities, following the "you build it, you run it" approach to ensure operational efficiency and accountability.
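The deduplication and delta logic mentioned above might start like this hedged sketch: fetch a posting with httpx, hash its canonicalized content, and decide insert/update/skip. The in-memory seen store stands in for a real database, and the canonicalization is deliberately crude.

```python
# Hedged sketch of dedup/delta logic for crawled job postings: hash canonical
# content and compare against what we last stored. The in-memory seen{} store
# stands in for a real database table; canonicalization here is simplistic.
import hashlib
import httpx

seen: dict[str, str] = {}  # url -> content hash (stand-in for a DB table)

def canonical(text: str) -> str:
    return " ".join(text.split()).lower()   # collapse whitespace, lowercase

def upsert(url: str) -> str:
    body = httpx.get(url, timeout=10.0).text
    digest = hashlib.sha256(canonical(body).encode()).hexdigest()
    if url not in seen:
        seen[url] = digest
        return "insert"
    if seen[url] != digest:
        seen[url] = digest
        return "update"          # delta detected -> reprocess downstream
    return "skip"                # unchanged -> no work

print(upsert("https://example.com/jobs/123"))  # placeholder URL
```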
Posted 1 week ago
8.0 - 12.0 years
0 Lacs
cochin
Remote
Responsibilities
- Lead architecture design for agentic AI systems leveraging AWS services (Bedrock, SageMaker, Lambda, Step Functions, EventBridge, OpenSearch, DynamoDB); a DynamoDB session-state sketch follows this listing.
- Define multi-agent orchestration strategies for clinical data workflows, source verification, and discrepancy resolution.
- Evaluate trade-offs between LLM-based reasoning, rules engines, and hybrid AI architectures.
- Experience in healthcare/Life Sciences AI solutions with regulatory compliance preferred.
- Mentor AI engineers, developers, and DevOps teams on agent-based architectures.
- Collaborate with clinical SMEs, compliance teams, and sponsors to align technical design with business outcomes.
Qualifications
- Bachelor's/Master's in Computer Science, AI, or a related field (PhD a plus).
- 8–12 years in AI/ML engineering with 3–5 years in cloud-native AI architecture.
- Hands-on experience with the AWS AI stack (Bedrock, SageMaker, Comprehend Medical, Textract, Kendra, Lex).
- Strong understanding of agentic AI frameworks (LangChain, LlamaIndex, Haystack, CrewAI, AutoGen).
- Proven expertise in workflow orchestration (Step Functions, Airflow, Temporal) and multi-agent pipelines.
- Experience in healthcare/Life Sciences AI solutions with regulatory compliance.
- Strong leadership, stakeholder communication, and solution governance skills.
Job Types: Full-time, Permanent
Benefits: Flexible schedule, Health insurance, Provident Fund, Work from home
Application Question(s): Do you have hands-on expertise in AWS architecture (VPC, ECS, EKS, API Gateway, Lambda, Bedrock, SageMaker)?
Experience: Solution architecture: 8 years (Required)
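One concrete slice of the stack above: keeping per-session agent state in DynamoDB so agents can resume across Lambda invocations. A hedged sketch; the table name, key schema, and attribute names are assumptions.

```python
# Hedged sketch: persist multi-agent session state in DynamoDB so agents can
# resume across Lambda invocations. Table name and key schema are assumed
# (partition key "session_id", numeric sort key "turn").
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("agent-sessions")  # hypothetical table

def save_turn(session_id: str, turn: int, agent: str, verdict: str) -> None:
    table.put_item(Item={
        "session_id": session_id,
        "turn": turn,
        "agent": agent,
        "verdict": verdict,
    })

def last_turns(session_id: str, limit: int = 5):
    resp = table.query(
        KeyConditionExpression=Key("session_id").eq(session_id),
        ScanIndexForward=False,   # newest first
        Limit=limit,
    )
    return resp["Items"]

save_turn("case-42", 1, "verifier", "discrepancy-found")
print(last_turns("case-42"))
```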
Posted 1 week ago
8.0 years
0 Lacs
mysore, karnataka, india
On-site
Enkefalos Technologies LLP is a pioneering AI company that builds cutting-edge AI solutions, leveraging LLMs, Generative AI, and advanced Machine Learning techniques.
Job Title: Technical Team Lead
Experience: 8–10 years (minimum 8 years of hands-on experience)
Location: Mysuru, Karnataka
Employment Type: Full-time
About the Role
We are seeking an experienced Technical Team Lead with 8–10 years of professional experience in designing and leading large-scale web development and AI applications. The ideal candidate will have a strong background in LLM-based solutions (Claude, OpenAI, Gemini, Bedrock), application architecture, and hands-on development expertise. This role requires both technical leadership and team management, as well as strong client-facing communication skills to gather requirements and drive solutions end-to-end.
Key Responsibilities
- Design and deliver scalable application architectures across microservices, APIs, and backend databases.
- Collaborate with cross-functional teams to define solution blueprints combining application engineering and AI/ML requirements.
- Architect and lead implementation strategies for deploying applications on AWS or Azure using services such as ECS, AKS, Lambda, API Gateway, Azure App Services, and Cosmos DB.
- Guide engineering teams in application modernization, including monolith-to-microservices transitions, containerization, and serverless.
- Define and enforce best practices around security, performance, and maintainability of solutions.
- Integrate AI/ML solutions (e.g., inference endpoints, custom LLMs, or MLOps pipelines) within broader enterprise applications.
- Evaluate and recommend third-party tools, frameworks, or platforms for optimizing application performance and AI integration.
- Support pre-sales activities and client engagements with architectural diagrams, PoCs, and strategy sessions.
- Mentor engineering teams and participate in code/design reviews when necessary.
System Architecture & Design
- Define and design scalable, secure, and efficient architectures for web applications and AI-driven solutions.
- Ensure architectural alignment with business goals and technical feasibility.
- Evaluate and integrate LLM-based services (Claude, OpenAI, Gemini, AWS Bedrock) into product ecosystems (a FastAPI-to-Bedrock sketch follows this listing).
AI & ML Development
- Work with fine-tuning of small/medium language models and deep learning models.
- Leverage AI/ML services for intelligent automation, NLP, and document/data intelligence use cases.
Web & Backend Development
- Hands-on expertise with the Django and FastAPI frameworks.
- Proficiency in relational and non-relational databases: PostgreSQL, MySQL, and familiarity with NoSQL.
- Build and maintain APIs and services to support enterprise-grade applications.
Cloud & Infrastructure (AWS)
- Hands-on experience with AWS services including Lambda, RDS, EC2, Batch, Textract, and Bedrock.
- Design, deploy, and maintain cloud-native applications with scalability, cost optimization, and security in mind.
Leadership & Client Engagement
- Lead, mentor, and manage technical teams to deliver high-quality solutions.
- Collaborate with clients to understand requirements, translate them into technical designs, and oversee implementation.
- Ensure project timelines, deliverables, and best practices are followed.
Qualifications & Skills
- Experience: 8–10 years in software development and system architecture.
- Technical expertise: strong knowledge of LLMs, AI services, and fine-tuning workflows.
- Proficiency in Django, FastAPI, PostgreSQL, MySQL, and familiarity with NoSQL databases.
- Deep understanding of AWS cloud services (Lambda, RDS, EC2, Batch, Textract, Bedrock).
- Leadership: proven ability to lead teams, manage resources, and deliver complex projects.
- Communication: excellent communication and interpersonal skills for engaging with clients and stakeholders.
- Problem-solving: strong analytical and solution-oriented mindset.
Nice to Have
- Exposure to multi-tenant architectures and enterprise SaaS platforms.
- Experience with CI/CD, DevOps workflows, and containerization (Docker, Kubernetes).
- Familiarity with security best practices and compliance standards.
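Tying together the FastAPI and Bedrock items above, a hedged sketch of a minimal inference endpoint. The model ID and the request/response schema are illustrative assumptions.

```python
# Hedged sketch: a FastAPI endpoint that proxies a prompt to Bedrock.
# Model ID and the request/response schema are illustrative assumptions.
import boto3
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
bedrock = boto3.client("bedrock-runtime")

class Ask(BaseModel):
    prompt: str

@app.post("/ask")
def ask(req: Ask) -> dict:
    resp = bedrock.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
        messages=[{"role": "user", "content": [{"text": req.prompt}]}],
        inferenceConfig={"maxTokens": 256},
    )
    return {"answer": resp["output"]["message"]["content"][0]["text"]}

# Run locally (assuming uvicorn is installed): uvicorn main:app --reload
```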
Posted 1 week ago
8.0 years
0 Lacs
mumbai, maharashtra, india
On-site
At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In data analysis at PwC, you will focus on utilising advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. You will leverage skills in data manipulation, visualisation, and statistical modelling to support clients in solving complex business problems.
Join PwC US - Acceleration Center as a Manager of GenAI Data Science to lead innovative projects and drive significant advancements in GenAI solutions. We offer a competitive compensation package, a collaborative work environment, and ample opportunities for professional growth and impact.
Years of Experience: candidates with 8+ years of hands-on experience.
Responsibilities
- Lead and mentor a team of data scientists in understanding business requirements and applying GenAI technologies to solve complex problems.
- Oversee the development, implementation, and optimization of machine learning models and algorithms for various GenAI projects.
- Direct the data preparation process, including data cleaning, preprocessing, and feature engineering, to ensure data quality and readiness for analysis.
- Collaborate with data engineers and software developers to streamline data processing and integration into machine learning pipelines.
- Evaluate model performance rigorously using advanced metrics and testing methodologies to ensure robustness and effectiveness.
- Spearhead the deployment of production-ready machine learning applications, ensuring scalability and reliability.
- Apply expert programming skills in Python, R, or Scala to develop high-quality software components for data analysis and machine learning.
- Utilize Kubernetes for efficient container orchestration and deployment of machine learning applications.
- Design and implement innovative data-driven solutions such as chatbots using the latest GenAI technologies.
- Communicate complex data insights and recommendations to senior stakeholders through compelling visualizations, reports, and presentations.
- Lead the adoption of cutting-edge GenAI technologies and methodologies to continuously improve data science practices.
- Champion knowledge sharing and skill development within the team to foster an environment of continuous learning and innovation.
Requirements
- 8-10 years of relevant experience in data science, with significant expertise in GenAI projects.
- Advanced programming skills in Python, R, or Scala, and proficiency in machine learning libraries like TensorFlow, PyTorch, or scikit-learn.
- Extensive experience in data preprocessing, feature engineering, and statistical analysis.
- Strong knowledge of cloud computing platforms such as AWS, Azure, or Google Cloud, and data visualization techniques.
- Demonstrated leadership in managing data science teams and projects.
- Exceptional problem-solving, analytical, and project management skills.
- Excellent communication and interpersonal skills, with the ability to lead and collaborate effectively in a dynamic environment.
Preferred Qualifications
- Experience with object-oriented programming languages such as Java, C++, or C#.
- Proven track record of developing and deploying machine learning applications in production environments.
- Understanding of data privacy and compliance regulations in a corporate setting.
- Relevant advanced certifications in data science or GenAI technologies.
Nice-to-Have Skills
- Experience with specific tools such as Azure AI Search, Azure Document Intelligence, Azure OpenAI, AWS Textract, AWS OpenSearch, and AWS Bedrock (an Azure OpenAI sketch follows this listing).
- Familiarity with LLM-backed agent frameworks like AutoGen, LangChain, and Semantic Kernel, and experience in chatbot development.
Professional and Educational Background
Any graduate / BE / B.Tech / MCA / M.Sc / M.E / M.Tech / Master's Degree / MBA
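Azure OpenAI appears among the nice-to-have tools; here is a hedged minimal chat call with the openai SDK. The endpoint, API version, and deployment name are placeholders to be configured per tenant.

```python
# Hedged sketch: minimal Azure OpenAI chat completion via the openai SDK.
# Endpoint, API version, and deployment name are placeholders.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # e.g. https://<resource>.openai.azure.com
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",                            # example API version
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",   # in Azure this is the *deployment* name, assumed here
    messages=[
        {"role": "system", "content": "You answer questions about internal docs."},
        {"role": "user", "content": "Summarize the Q3 churn analysis in two lines."},
    ],
    temperature=0.2,
)
print(resp.choices[0].message.content)
```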
Posted 1 week ago
3.0 - 5.0 years
5 - 9 Lacs
gurgaon
On-site
Job Title: RPA Developer (UiPath / Power Automate), 3–5 years' experience
Location: Gurugram
Experience: 3–5 years
Job Summary:
We are seeking a skilled and detail-oriented RPA Developer with 3–5 years of experience in automation development, particularly using UiPath and Power Automate. The ideal candidate will have hands-on experience in automating Oracle ERP processes and document-processing workflows. A strong understanding of OCR technologies, file management automation, and enterprise-grade RPA deployment is essential.
Key Responsibilities:
- Design, develop, and deploy RPA solutions using UiPath Studio, Orchestrator, and other UiPath components.
- Build automation workflows using Power Automate for business process optimization.
- Automate Oracle ERP processes, especially invoice processing and document workflows.
- Implement file management and human task automation solutions.
- Automate Excel-based tasks to ensure data accuracy and operational efficiency.
- Develop advanced web automation using OCR, image search, Textract, keystrokes, and mouse-click automation (see the OCR sketch after this listing).
- Configure and maintain UiPath environments across development, testing, and production.
- Extract data from handwritten documents, forms, tables, invoices, receipts, and scanned images using tools like Ephesoft, ABBYY FlexiCapture, and Microsoft Form Recognizer.
- Collaborate with cross-functional teams to identify automation opportunities and deliver scalable solutions.
- Ensure high-quality documentation and adherence to best practices in RPA development.
Required Skills & Qualifications:
- 3–5 years of hands-on experience in RPA development using UiPath and Power Automate.
- Proficiency in Oracle ERP systems, especially in invoice and document processing.
- Strong knowledge of document-processing tools and OCR technologies.
- Experience with Ephesoft, ABBYY FlexiCapture, and Microsoft Form Recognizer.
- Expertise in Excel automation and web-based automation techniques.
- Solid understanding of RPA deployment, monitoring, and maintenance.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration abilities.
Preferred Qualifications:
- UiPath RPA Developer Certification.
- Experience in enterprise-level RPA deployments.
- Familiarity with Agile/Scrum methodologies.
Globally, our policy is to recruit individuals from wide and diverse backgrounds. However, certain positions require access to controlled goods and technologies subject to various export control regulations. Applicants for these positions may be limited (by, for example, their countries of citizenship, country of origin, or immigration status) where required by law or governmental contract, and/or employment may be made contingent upon the issuance of appropriate governmental licensing.
MKS Inc. and its affiliates and subsidiaries ("MKS") is an affirmative action and equal opportunity employer: diverse candidates are encouraged to apply. We win as a team and are committed to recruiting and hiring qualified applicants regardless of race, color, national origin, sex (including pregnancy and pregnancy-related conditions), religion, age, ancestry, physical or mental disability or handicap, marital status, membership in the uniformed services, veteran status, sexual orientation, gender identity or expression, genetic information, or any other category protected by applicable law. Hiring decisions are based on merit, qualifications, and business needs. We conduct background checks and drug screens in accordance with applicable law and company policies.
MKS generally only hires candidates who reside in states where we are registered to do business. MKS is committed to working with and providing reasonable accommodations to qualified individuals with disabilities. If you need a reasonable accommodation during the application or interview process due to a disability, please contact us at accommodationsatMKS@mksinst.com. If applying for a specific job, please include the requisition number (ex: RXXXX), the title, and the location of the role.
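As a hedged taste of the OCR work described, a minimal pytesseract pass over a scanned invoice with a naive regex for the invoice number. The file path and pattern are assumptions; enterprise pipelines here would use Ephesoft, ABBYY, or Form Recognizer instead.

```python
# Hedged sketch: basic OCR over a scanned invoice with pytesseract, plus a
# naive regex for invoice numbers. File path and the invoice-number pattern
# are assumptions; enterprise flows would use ABBYY/Ephesoft/Form Recognizer.
import re
from PIL import Image
import pytesseract

text = pytesseract.image_to_string(Image.open("invoice_scan.png"))  # placeholder path

match = re.search(r"Invoice\s*(?:No\.?|#)\s*([A-Z0-9-]+)", text, re.IGNORECASE)
print("Invoice number:", match.group(1) if match else "not found")
```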
Posted 1 week ago
3.0 years
0 Lacs
pune, maharashtra, india
On-site
About Position: We are hiring for Lead Python engineer with hands on experience with Apache Beam, databricks etc., Role: Lead Python Platform Engineer – Architecture & Performance Location: All PSL Locations Experience: 3 to 7 Years Job Type: Full Time Employment What You'll Do: Architecture and reuse: Design and build a shared component library/SDK for pipelines: ingestion, parsing/OCR, extraction (RegEx now, LLM/SLM later), validation, enrichment, publishing. Define patterns/templates for Apache Beam pipelines and Databricks jobs; standardize configuration, packaging, versioning, CI/CD, and documentation. Create pluggable interfaces so multiple teams can swap extractors (Regex/LLM), OCR providers, and EMR publishers without code rewrites. Define repo strategy - shared/child repos for each use case Performance and reliability: Own end-to-end profiling and tuning: cProfile/py-spy/line_profiler, memory (tracemalloc), CPU vs I/O analysis. Instrument services with Elastic APM and correlate traces/metrics with Splunk logs; build dashboards and runbooks. Implement concurrency best practices: asyncio for I/O-bound, ThreadPool/ProcessPool for CPU-bound, batching, rate limiting, retries, etc. Implement robust LLM API rate limiting/governance: enforce provider TPM and concurrency caps, request queueing/token budgeting, and emit APM/Splunk metrics (throttle rate, queue depth, cost per job) with alerts. Establish SLOs/alerts for throughput, latency, error rates; set up DLQs and recovery patterns. Team enablement: Mentor devs, lead design reviews, codify best practices, write clear docs and examples. Partner with ML engineers on the future LLM/SLM path (evaluation harness, safety/PII, cost/perf). Expertise You'll Bring: 7+ years Python with strong depth in performance and concurrency (asyncio, concurrent.futures, multiprocessing), profiling and memory tuning. Observability expertise: Elastic APM instrumentation and dashboarding; Splunk for logs and correlation; OpenTelemetry familiarity. Must have implemented LLM based solutions and supported them in production API engineering for high-throughput integrations (REST, OAuth2), resilience patterns, and secure handling of sensitive data. Strong architecture/design skills: clean interfaces, packaging shared libs, versioning, CI/CD (GitHub Actions/Azure DevOps), testing. 3+ years building large-scale data pipelines with Apache Beam and/or Spark, including hands-on Databricks experience (Jobs, Delta Lake, cluster tuning). Document processing: OCR (Tesseract, AWS Textract, Azure Form Recognizer), PDF parsing, text normalization. LLM/SLM integration experience (e.g., OpenAI/Azure AI, local SLMs), prompt/eval frameworks, PII redaction/guardrails. Cloud and tooling: AWS/Azure/GCP, Dataflow/Flink, Terraform, Docker; cost/performance tuning on Databricks. Security/compliance mindset (HIPAA), secrets management, least-privilege access. Benefits: Competitive salary and benefits package Culture focused on talent development with quarterly promotion cycles and company-sponsored higher education and certifications Opportunity to work with cutting-edge technologies Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards Annual health check-ups Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents Values-Driven, People-Centric & Inclusive Work Environment: Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. 
We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds. We support hybrid work and flexible hours to fit diverse lifestyles. Our office is accessibility-friendly, with ergonomic setups and assistive technologies to support employees with physical disabilities. If you are a person with disabilities and have specific requirements, please inform us during the application process or at any time during your employment.
Let’s unleash your full potential at Persistent - persistent.com/careers
“Persistent is an Equal Opportunity Employer and prohibits discrimination and harassment of any kind.”
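The LLM rate-limiting/governance item above is the kind of pattern this role would own day to day. A minimal sketch of the idea in Python, assuming a hypothetical call_llm coroutine and illustrative limits (the semaphore caps in-flight requests; retries back off exponentially with jitter):

```python
import asyncio
import random

# Hypothetical limits; real values would come from the provider's quota.
MAX_CONCURRENCY = 5
MAX_RETRIES = 3

semaphore = asyncio.Semaphore(MAX_CONCURRENCY)

async def call_llm(prompt: str) -> str:
    """Placeholder for a real provider call (e.g., an HTTP request)."""
    await asyncio.sleep(0.1)  # simulate network latency
    return f"response to: {prompt}"

async def call_with_governance(prompt: str) -> str:
    # Cap in-flight requests so we stay under the provider's concurrency limit.
    async with semaphore:
        for attempt in range(1, MAX_RETRIES + 1):
            try:
                return await call_llm(prompt)
            except Exception:
                if attempt == MAX_RETRIES:
                    raise
                # Exponential backoff with jitter before retrying.
                await asyncio.sleep(2 ** attempt + random.random())

async def main() -> None:
    prompts = [f"document {i}" for i in range(20)]
    results = await asyncio.gather(*(call_with_governance(p) for p in prompts))
    print(len(results), "responses")

asyncio.run(main())
```

A production version would add token-per-minute budgeting and emit throttle-rate and queue-depth metrics to Elastic APM and Splunk, as the posting describes.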
Posted 1 week ago
0.0 - 8.0 years
10 - 20 Lacs
mysuru, karnataka
On-site
Enkefalos Technologies LLP is a pioneering AI company that builds cutting-edge AI solutions, leveraging LLMs, Generative AI, and advanced Machine Learning techniques.
Job Title: Technical Team Lead
Experience: 8+ years (minimum hands-on experience)
Location: Mysuru, Karnataka
Employment Type: Full-time
Position Overview
We are seeking an experienced Technical Team Lead with 8–10 years of professional experience in designing and leading large-scale web development and AI applications. The ideal candidate will have a strong background in LLM-based solutions (Claude, OpenAI, Gemini, Bedrock), application architecture, and hands-on development expertise. This role requires both technical leadership and team management, as well as strong client-facing communication skills to gather requirements and drive solutions end-to-end.
Key Responsibilities
i. System Architecture & Design
Define and design scalable, secure, and efficient architectures for web applications and AI-driven solutions.
Ensure architectural alignment with business goals and technical feasibility.
Evaluate and integrate LLM-based services (Claude, OpenAI, Gemini, AWS Bedrock) into product ecosystems.
ii. AI & ML Development
Work on fine-tuning of small/medium language models and deep learning models.
Leverage AI/ML services for intelligent automation, NLP, and document/data intelligence use cases.
iii. Web & Backend Development
Hands-on expertise with the Django and FastAPI frameworks (an illustrative sketch follows this posting).
Proficiency in relational and non-relational databases: PostgreSQL, MySQL, and familiarity with NoSQL.
Build and maintain APIs and services to support enterprise-grade applications.
iv. Cloud & Infrastructure (AWS)
Hands-on experience with AWS services including Lambda, RDS, EC2, Batch, Textract, and Bedrock.
Design, deploy, and maintain cloud-native applications with scalability, cost optimization, and security in mind.
v. Leadership & Client Engagement
Lead, mentor, and manage technical teams to deliver high-quality solutions.
Collaborate with clients to understand requirements, translate them into technical designs, and oversee implementation.
Ensure project timelines, deliverables, and best practices are followed.
Qualifications & Skills
Experience: 8–10 years in software development and system architecture.
Technical Expertise:
Strong knowledge of LLMs, AI services, and fine-tuning workflows.
Proficiency in Django, FastAPI, PostgreSQL, MySQL, and familiarity with NoSQL databases.
Deep understanding of AWS cloud services (Lambda, RDS, EC2, Batch, Textract, Bedrock).
Leadership: Proven ability to lead teams, manage resources, and deliver complex projects.
Communication: Excellent communication and interpersonal skills for engaging with clients and stakeholders.
Problem-Solving: Strong analytical and solution-oriented mindset.
Nice to Have
Exposure to multi-tenant architectures and enterprise SaaS platforms.
Experience with CI/CD, DevOps workflows, and containerization (Docker, Kubernetes).
Familiarity with security best practices and compliance standards.
Job Type: Full-time
Pay: ₹1,000,000.00 - ₹2,000,000.00 per year
Benefits:
Flexible schedule
Paid sick time
Paid time off
Provident Fund
Ability to commute/relocate: Mysore, Karnataka: Reliably commute or planning to relocate before starting work (Required)
Experience: Software/Application Engineering: 8 years (Required)
Work Location: In person
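As a small illustration of the FastAPI skill set the posting calls for, here is a minimal endpoint sketch; the /extract route and its request model are hypothetical, standing in for the document-intelligence use cases mentioned above:

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ExtractRequest(BaseModel):
    # Hypothetical payload: raw text pulled from a document (e.g., via Textract).
    document_text: str

class ExtractResponse(BaseModel):
    word_count: int
    preview: str

@app.post("/extract", response_model=ExtractResponse)
def extract(req: ExtractRequest) -> ExtractResponse:
    # Stand-in for a real extraction step (LLM call, regex rules, etc.).
    words = req.document_text.split()
    return ExtractResponse(word_count=len(words), preview=" ".join(words[:10]))

# Run locally with: uvicorn main:app --reload  (assuming this file is main.py)
```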
Posted 1 week ago
7.0 years
0 Lacs
pune, maharashtra, india
On-site
About Position: We are looking for an experienced and talented Python Architect to join our growing data competency team. The ideal candidate will have a strong background in platform-level Python engineering: the Apache Beam + Databricks platform, parsing/OCR, validation, implementation, and cloud (AWS/Azure/GCP). We have built the core features, but need a senior engineer to architect the platform.
Role: Python Architect
Location: All Persistent Locations
Experience: 7 to 14 Years
Job Type: Full Time Employment
What You'll Do:
Architecture and reuse:
Design and build a shared component library/SDK for pipelines: ingestion, parsing/OCR, extraction (RegEx now, LLM/SLM later), validation, enrichment, publishing.
Define patterns/templates for Apache Beam pipelines and Databricks jobs; standardize configuration, packaging, versioning, CI/CD, and documentation.
Create pluggable interfaces so multiple teams can swap extractors (RegEx/LLM), OCR providers, and EMR publishers without code rewrites (an illustrative interface sketch follows this posting).
Define the repo strategy: shared/child repos for each use case.
Performance and reliability:
Own end-to-end profiling and tuning: cProfile/py-spy/line_profiler, memory (tracemalloc), CPU vs I/O analysis.
Instrument services with Elastic APM and correlate traces/metrics with Splunk logs; build dashboards and runbooks.
Implement concurrency best practices: asyncio for I/O-bound work, ThreadPool/ProcessPool for CPU-bound work, batching, rate limiting, and retries.
Implement robust LLM API rate limiting/governance: enforce provider TPM and concurrency caps, request queueing/token budgeting, and emit APM/Splunk metrics (throttle rate, queue depth, cost per job) with alerts.
Establish SLOs/alerts for throughput, latency, and error rates; set up DLQs and recovery patterns.
Team Enablement:
Mentor developers, lead design reviews, codify best practices, and write clear docs and examples.
Partner with ML engineers on the future LLM/SLM path (evaluation harness, safety/PII, cost/perf).
Expertise You'll Bring:
7+ years of Python with strong depth in performance and concurrency (asyncio, concurrent.futures, multiprocessing), profiling, and memory tuning.
Observability expertise: Elastic APM instrumentation and dashboarding; Splunk for logs and correlation; OpenTelemetry familiarity.
Must have implemented LLM-based solutions and supported them in production.
API engineering for high-throughput integrations (REST, OAuth2), resilience patterns, and secure handling of sensitive data.
Strong architecture/design skills: clean interfaces, packaging shared libraries, versioning, CI/CD (GitHub Actions/Azure DevOps), testing.
3+ years building large-scale data pipelines with Apache Beam and/or Spark, including hands-on Databricks experience (Jobs, Delta Lake, cluster tuning).
Document processing: OCR (Tesseract, AWS Textract, Azure Form Recognizer), PDF parsing, text normalization.
LLM/SLM integration experience (e.g., OpenAI/Azure AI, local SLMs), prompt/eval frameworks, PII redaction/guardrails.
Cloud and tooling: AWS/Azure/GCP, Dataflow/Flink, Terraform, Docker; cost/performance tuning on Databricks.
Security/compliance mindset (HIPAA), secrets management, least-privilege access.
Benefits:
Competitive salary and benefits package
Culture focused on talent development with quarterly promotion cycles and company-sponsored higher education and certifications
Opportunity to work with cutting-edge technologies
Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards
Annual health check-ups
Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents
Values-Driven, People-Centric & Inclusive Work Environment:
Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds. We support hybrid work and flexible hours to fit diverse lifestyles. Our office is accessibility-friendly, with ergonomic setups and assistive technologies to support employees with physical disabilities. If you are a person with disabilities and have specific requirements, please inform us during the application process or at any time during your employment.
Let’s unleash your full potential at Persistent - persistent.com/careers
“Persistent is an Equal Opportunity Employer and prohibits discrimination and harassment of any kind.”
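The "pluggable interfaces" responsibility above is essentially a strategy pattern. A minimal sketch of one way to shape it in Python using typing.Protocol; the RegexExtractor/LLMExtractor names and the invoice field are illustrative, not from the posting:

```python
import re
from typing import Protocol

class Extractor(Protocol):
    """Common interface so teams can swap extraction strategies without rewrites."""
    def extract(self, text: str) -> dict[str, str]: ...

class RegexExtractor:
    """Today's approach: deterministic regex rules."""
    def extract(self, text: str) -> dict[str, str]:
        match = re.search(r"Invoice\s*#\s*(\w+)", text)
        return {"invoice_id": match.group(1)} if match else {}

class LLMExtractor:
    """Tomorrow's approach: delegate to an LLM (stubbed here)."""
    def extract(self, text: str) -> dict[str, str]:
        # A real implementation would call a provider API with a prompt.
        return {"invoice_id": "LLM-PLACEHOLDER"}

def run_pipeline(text: str, extractor: Extractor) -> dict[str, str]:
    # The pipeline only depends on the interface, not the implementation.
    return extractor.extract(text)

print(run_pipeline("Invoice # A123 due 2024-01-01", RegexExtractor()))
```

Because run_pipeline depends only on the Extractor protocol, swapping RegEx for an LLM extractor becomes a configuration change rather than a rewrite, which is the reuse goal the posting describes.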
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
vadodara, gujarat
On-site
As a skilled Python AI/ML Developer with expertise in Django, you will be responsible for developing AI/ML models, integrating them with web applications, and deploying scalable solutions. Your role will involve collaborating with cross-functional teams to design, develop, and implement AI-driven applications.
Your key responsibilities will include developing, training, testing, and optimizing AI/ML models for real-world applications such as classification, regression, and natural language processing (NLP). You will integrate machine learning models into Django-based web applications and design APIs for model interaction using Django REST Framework (DRF); an illustrative sketch follows this posting. Additionally, you will work with libraries like TensorFlow, PyTorch, Scikit-learn, Hugging Face, ElasticSearch, and Textract for AI/ML development, optimize model performance, and manage databases that store AI-related data.
Furthermore, you will deploy machine learning models using technologies like Docker, Kubernetes, or cloud platforms (AWS, GCP, or Azure), perform unit testing and debugging, lead a team of engineers, and provide technical leadership and mentorship. Collaboration with front-end developers, data engineers, and product managers to create seamless AI-driven applications is also part of your responsibilities. It is crucial to stay updated with the latest AI/ML advancements and integrate them into projects.
To excel in this role, you should be proficient in AI/ML development libraries such as TensorFlow, PyTorch, Scikit-learn, Hugging Face, ElasticSearch, and Textract, and you must understand NLP, computer vision, deep learning, and predictive analytics, along with having hands-on experience with SQL and NoSQL databases. Knowledge of Docker, Kubernetes, CI/CD pipeline logic, Azure, and cloud services is essential, along with a minimum of 3 years of experience in AI technologies. Strong communication skills are also required for effective collaboration with team members.
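A minimal sketch of the DRF-based model-serving API the posting describes, assuming a Django project with DRF installed and a pre-fitted scikit-learn model serialized to model.joblib (both the file name and the /predict route are hypothetical):

```python
# views.py — urls.py would map e.g. path("predict/", PredictView.as_view())
import joblib
from rest_framework import status
from rest_framework.response import Response
from rest_framework.views import APIView

# Hypothetical artifact: a pre-fitted scikit-learn model saved with joblib.
model = joblib.load("model.joblib")  # loaded once at import time

class PredictView(APIView):
    def post(self, request):
        features = request.data.get("features")
        if not isinstance(features, list):
            return Response({"error": "features must be a list of numbers"},
                            status=status.HTTP_400_BAD_REQUEST)
        # tolist() converts numpy scalars to JSON-serializable Python types.
        prediction = model.predict([features]).tolist()[0]
        return Response({"prediction": prediction})
```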
Posted 1 week ago
2.0 years
8 - 18 Lacs
delhi
On-site
Job description:
We’re looking for a hands-on Data Engineer to manage and scale our data scraping pipelines across 60+ websites. The job involves handling OCR-processed PDFs, ensuring data quality, and building robust, self-healing workflows that fuel AI-driven insights.
You’ll Work On:
Managing and optimizing Airflow scraping DAGs
Implementing validation checks, retry logic & error alerts
Cleaning and normalizing OCR text (Tesseract / AWS Textract); a short illustrative sketch follows this posting
Handling deduplication, formatting, and missing data
Maintaining MySQL/PostgreSQL data integrity
Collaborating with ML engineers on downstream pipelines
What You Bring:
2–5 years of hands-on experience in Python data engineering
Experience with Airflow, Pandas, and OCR tools
Solid SQL skills and schema design (MySQL/PostgreSQL)
Comfort with CSVs and building ETL pipelines
Required:
1. Scrapy or Selenium experience
2. CAPTCHA handling
3. Experience with PyMuPDF and regex
4. AWS S3
5. LangChain, LLMs, FastAPI
6. Streamlit
7. Matplotlib
Job Type: Full-time
Day shift
Pay: ₹70,000.00 - ₹150,000.00 per month
Application Question(s):
Total years of experience in web scraping / data extraction
Have you worked with large-scale data pipelines?
Are you proficient in writing complex regex patterns for data extraction and cleaning?
Have you implemented or managed data pipelines using tools like Apache Airflow?
Years of experience with PDF parsing and OCR tools (e.g., Tesseract, Google Document AI, AWS Textract)
Years of experience handling complex PDF tables with merged rows, rotated layouts, or inconsistent formatting
Are you willing to relocate to Delhi if selected?
Current CTC
Expected CTC
Work Location: In person
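As a flavor of the OCR-cleaning and deduplication work described above, here is a minimal normalization sketch in Python with Pandas; the cleaning rules and sample records are illustrative examples, not the team's actual pipeline:

```python
import re
import pandas as pd

def normalize_ocr_text(text: str) -> str:
    """Apply a few typical OCR clean-ups (illustrative rules only)."""
    text = text.replace("\u00a0", " ")      # replace non-breaking spaces
    text = re.sub(r"-\n(\w)", r"\1", text)  # re-join hyphenated line breaks
    text = re.sub(r"[ \t]+", " ", text)     # collapse runs of whitespace
    return text.strip()

records = pd.DataFrame({
    "doc_id": [1, 2, 2],
    "text": ["Total:\u00a0 500", "Amount  due:\t42", "Amount  due:\t42"],
})
records["text"] = records["text"].map(normalize_ocr_text)
# Drop exact duplicates, a common step after re-scraping the same page.
records = records.drop_duplicates(subset=["doc_id", "text"])
print(records)
```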
Posted 1 week ago
10.0 - 14.0 years
0 Lacs
maharashtra
On-site
As a premier cloud-first consulting company, Avahi is dedicated to delivering exceptional value to customers by redefining the way businesses embrace a cloud-first approach. With a global team spanning North America, Europe, and Southeast Asia, Avahi fosters a collaborative and diverse environment where professional growth, creativity, and mutual respect thrive.
Avahi is currently seeking a Head of AI to lead the company's AI strategy, drive innovation, and oversee AI solution development for customers. This leadership role requires deep expertise in AI/ML, cloud computing, and scalable architectures. The successful candidate will collaborate closely with AWS, customers, and internal teams to deliver transformative AI solutions that enhance business performance and unlock new opportunities.
Key Responsibilities:
Define and execute Avahi's AI strategy in alignment with AWS advancements and market trends.
Lead the development of AI-driven solutions by collaborating with engineering, data scientists, and consulting teams.
Serve as a trusted AI advisor to customers, designing AI solutions that seamlessly integrate with AWS services.
Stay ahead of emerging AI trends and lead research initiatives to integrate state-of-the-art AI/ML techniques into Avahi's offerings.
Oversee the evolution of Avahi's open-source SDK for Bedrock adoption and AI-driven platform solutions.
Build and mentor a high-performing AI team, fostering a culture of collaboration, innovation, and technical excellence.
Required Skills and Qualifications:
10+ years of experience in AI/ML, with at least 5 years in leadership roles.
Deep technical expertise in AWS AI/ML services, including Bedrock, SageMaker, Comprehend, Transcribe, and Textract.
Strong background in GenAI, LLMs, prompt engineering, and responsible AI practices.
Experience in AI solution architecture, MLOps, and scalable AI deployments.
Proven track record of delivering AI-powered business impact at scale.
Ability to engage with C-level executives and translate AI capabilities into real business value.
Passion for mentorship, team-building, and fostering an innovative AI culture.
Prior experience working at AWS or as an AWS Partner is highly preferred.
Experience leading open-source AI initiatives or contributing to AI/ML communities.
Background in AI product development and AI-driven SaaS solutions.
If you are passionate about AI and want to shape the future of AI consultancy at Avahi, we encourage you to apply now and lead the AI revolution. Avahi offers remote-first flexibility, an innovative culture, career development opportunities, and a purpose-driven mission dedicated to diversity and sustainability. Join Avahi and make an impact in a fast-paced, customer-focused environment with abundant opportunities for growth.
Avahi is committed to fostering a workplace that celebrates diversity and inclusivity, welcoming applicants from all backgrounds and experiences.
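Given the posting's emphasis on AWS Bedrock, a minimal invocation sketch via boto3 may help frame the hands-on side of the role; the model ID and request schema shown are assumptions for Anthropic models on Bedrock and should be checked against current AWS documentation:

```python
import json
import boto3

# Assumes AWS credentials and Bedrock model access are already configured.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

body = {
    # Request schema for Anthropic models on Bedrock (verify against AWS docs).
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "Summarize our Q3 priorities."}],
}

response = client.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
    body=json.dumps(body),
)
result = json.loads(response["body"].read())
print(result["content"][0]["text"])
```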
Posted 2 weeks ago
0.0 - 31.0 years
3 - 9 Lacs
work from home
Remote
Required Skill Sets & Qualifications
1. Technical Skills (The Builder's Toolkit)
Core Programming: Expert proficiency in Python (essential for AI/ML libraries).
AI/ML & NLP: Strong hands-on experience with:
Large Language Models (LLMs): Practical experience working with the APIs of OpenAI (GPT-4), Google Gemini, Anthropic Claude, or open-source models (LLaMA 2, Mistral). Prompt engineering is a key skill.
Frameworks: LangChain, LlamaIndex for building sophisticated agentic workflows.
Natural Language Processing (NLP): Libraries like spaCy, NLTK, Hugging Face Transformers.
Optical Character Recognition (OCR): Experience with tools like Adobe Extract API, Google Document AI, Amazon Textract, or open-source options (Tesseract) for Indian documents.
API Integration: Mastery in connecting various systems via RESTful APIs and webhooks (e.g., connecting a chatbot to a CRM and a document database).
Low-Code/No-Code Platforms: Experience leveraging platforms like Zapier, Make.com, n8n, or Microsoft Power Automate to quickly prototype and connect different SaaS tools is a huge plus.
Cloud & DevOps: Experience with cloud platforms (AWS, Google Cloud, Azure) and knowledge of deploying and maintaining AI models (e.g., using AWS SageMaker, Google Vertex AI).
Data Security: Understanding of encryption, secure API protocols, and data anonymization techniques crucial for handling sensitive financial data.
3. Soft Skills & Mindset (The Architect)
Systems Thinking: Ability to see the entire customer and operational journey and build interconnected agents, not isolated bots.
Problem-Scoping & Solutioning: Can break down a complex business problem (e.g., "analyze documents") into a technical workflow (e.g., "trigger -> OCR -> data extraction -> validation -> CRM update"); a sketch of such a workflow follows this posting.
Agility & Learning: The AI field moves fast. A constant desire to learn and experiment with new tools and models is critical.
Communication: Must be able to explain complex AI concepts to non-technical stakeholders (management, loan consultants).
Project Management: Ability to manage this large-scale integration project, prioritize tasks, and deliver functional modules.
How to Apply
Interested candidates should submit their resume along with a cover letter or portfolio link that must include:
Examples of previous AI automation projects you have built.
A brief paragraph on how you would approach integrating any two of the systems mentioned above (e.g., connecting a Document Analysis system to a CRM).
Any experience specific to the Indian financial sector.
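The "trigger -> OCR -> data extraction -> validation -> CRM update" workflow named above is easy to picture as a chain of small functions. A minimal sketch in Python, where every step is a hypothetical stub standing in for a real service call (Textract, an LLM extractor, a CRM API):

```python
def ocr(document_bytes: bytes) -> str:
    # Stand-in for an OCR service such as Amazon Textract.
    return "Applicant: A. Kumar\nPAN: XXXXX1234X\nIncome: 1200000"

def extract(text: str) -> dict[str, str]:
    # Stand-in for regex- or LLM-based field extraction.
    return dict(line.split(": ", 1) for line in text.splitlines())

def validate(fields: dict[str, str]) -> dict[str, str]:
    # Minimal validation: required fields must be present and non-empty.
    required = {"Applicant", "PAN", "Income"}
    missing = required - {k for k, v in fields.items() if v}
    if missing:
        raise ValueError(f"missing fields: {missing}")
    return fields

def update_crm(fields: dict[str, str]) -> None:
    # Stand-in for a CRM API call (e.g., a POST with OAuth2 credentials).
    print("CRM updated:", fields)

def handle_trigger(document_bytes: bytes) -> None:
    """One end-to-end run of the trigger -> OCR -> extract -> validate -> CRM flow."""
    update_crm(validate(extract(ocr(document_bytes))))

handle_trigger(b"...uploaded PDF bytes...")
```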
Posted 2 weeks ago