
109 AWS Bedrock Jobs

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

9.0 - 14.0 years

35 - 55 Lacs

Hyderabad, Pune

Work from Office

Job Application Link: https://app.fabrichq.ai/jobs/865d4815-c031-4216-8e54-462d9b57ea23

Job Summary: The Data Science Architect role focuses on designing and implementing complex Generative AI solutions using AWS technologies. The position requires expertise in Retrieval Augmented Generation (RAG), GraphRAG methodologies, and agentic AI systems. The architect will develop advanced AI architectures incorporating state-of-the-art GenAI technologies while leveraging AWS Bedrock and SageMaker.

Key Responsibilities:
- Design and architect complex Generative AI solutions using AWS technologies
- Develop advanced AI architectures incorporating state-of-the-art GenAI technologies
- Create and implement Retrieval Augmented Generation (RAG) and GraphRAG solutions
- Architect scalable AI systems using AWS Bedrock and SageMaker
- Design and implement agentic AI systems with advanced reasoning capabilities
- Develop custom AI solutions leveraging vector databases and advanced machine learning techniques
- Evaluate and integrate emerging GenAI technologies and methodologies

Skills & Requirements (Must-Have):
- Python programming for AI/ML applications
- AWS AI services (Bedrock, SageMaker)
- Retrieval Augmented Generation (RAG) and GraphRAG
- Vector database architectures
- Large Language Models (LLMs)
- Deep learning frameworks (PyTorch, TensorFlow)
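The RAG pattern this role centers on reduces to: embed the query, retrieve the most similar documents from a vector store, and place them in the model prompt. A minimal, library-free sketch of the retrieval step under loose assumptions: the bag-of-words "embedding" is a toy stand-in for a real embedding model, and the resulting prompt would normally be sent to a Bedrock or SageMaker endpoint rather than returned.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real pipeline would call an embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # Stuff the retrieved context into the prompt the LLM will see.
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Bedrock provides managed access to foundation models.",
    "SageMaker trains and hosts custom machine learning models.",
    "GraphRAG augments retrieval with a knowledge graph.",
]
prompt = build_prompt("What does Bedrock provide?", docs)
```

GraphRAG extends the same skeleton by retrieving from a knowledge graph instead of (or alongside) a flat vector index.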

Posted 3 days ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

As an AI Security Engineer at NTT DATA, you will be part of a dynamic team in Bangalore, Karnataka (IN-KA), India, focused on AI Security Engineering. Your role will involve leveraging your expertise in Java, Python, React/Angular frameworks, and AI/ML training & inference platforms such as AWS Bedrock, AWS SageMaker, and open-source/custom AI/ML models.

**Key Responsibilities:**
- Utilize your 5+ years of experience in building enterprise-grade full-stack applications
- Demonstrate strong hands-on development skills in Java or Python, including unit testing with frameworks like JUnit or Pytest
- Develop APIs based on REST and gRPC methodologies using FastAPI, Spring REST, or similar frameworks
- Work on cloud-native applications using Kubernetes or other container management solutions
- Contribute to the development, deployment, performance tuning, and maintenance of AI models and applications on cloud platforms

**Qualifications Required:**
- 5+ years of experience in building enterprise-grade full-stack applications
- Strong hands-on development experience in Java or Python, including unit testing frameworks such as JUnit or Pytest
- 3+ years of experience in API development based on REST and gRPC methodologies using FastAPI, Spring REST, or similar frameworks
- 3+ years of experience in development and maintenance of cloud-native applications using Kubernetes or other container management solutions
- Experience with development, deployment, performance tuning, and maintenance of AI models and applications on cloud platforms

NTT DATA is a trusted global innovator of business and technology services, serving 75% of the Fortune Global 100. With a commitment to helping clients innovate, optimize, and transform for long-term success, we offer diverse expertise in more than 50 countries and a robust partner ecosystem. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation, and management of applications, infrastructure, and connectivity. As one of the leading providers of digital and AI infrastructure worldwide, NTT DATA is part of the NTT Group, investing over $3.6 billion each year in R&D to propel organizations and society confidently into the digital future. Visit us at us.nttdata.com.

Posted 4 days ago

Apply

5.0 - 10.0 years

20 - 22 Lacs

Bengaluru

Work from Office

Role Overview: We are seeking a skilled and innovative Machine Learning Engineer with expertise in Large Language Models (LLMs) to join our team. The ideal candidate has hands-on experience developing, fine-tuning, and deploying LLMs, alongside a deep understanding of the machine learning lifecycle. This role involves building scalable AI solutions, collaborating with cross-functional teams, and contributing to cutting-edge AI initiatives.

Key Responsibilities:
- Model Development & Optimization: Develop, fine-tune, and deploy LLMs such as OpenAI's GPT, Anthropic's Claude, Google's Gemini, or models on AWS Bedrock. Customize pre-trained models for specific use cases, ensuring high performance and scalability.
- Machine Learning Pipeline Design: Build and maintain end-to-end ML pipelines, from data preprocessing to model deployment. Optimize training workflows for efficiency and accuracy.
- Integration & Deployment: Work closely with software engineering teams to integrate ML solutions into production environments. Ensure APIs and solutions are scalable and robust.
- Experimentation & Research: Experiment with new architectures, frameworks, and approaches to improve model performance. Stay updated with advancements in LLMs and generative AI technologies.
- Collaboration: Collaborate with cross-functional teams, including data scientists, engineers, and product managers, to align ML solutions with business goals. Provide mentorship to junior team members as needed.

Required Qualifications:
- Experience: At least 5 years of professional experience in machine learning or AI development, with proven experience in LLMs and generative AI technologies.
- Technical Skills: Proficiency in Python (required); basic knowledge of Java is needed. Hands-on experience with APIs and tools such as OpenAI, Anthropic's Claude, Google Gemini, or AWS Bedrock. Familiarity with ML frameworks such as TensorFlow, PyTorch, or Hugging Face. Strong understanding of data structures, algorithms, and distributed systems.
- Cloud Expertise: Experience with AWS, GCP, or Azure, including services relevant to ML workloads (e.g., AWS SageMaker, Bedrock).
- Data Engineering: Proficiency in handling large-scale datasets and implementing data pipelines. Experience with ETL tools and platforms for efficient data preprocessing.
- Problem Solving: Strong analytical and problem-solving skills, with the ability to debug and resolve issues quickly.

Preferred Qualifications:
- Experience with multi-modal models and generative AI for images, text, or other modalities.
- Understanding of MLOps principles and tools (e.g., MLflow, Kubeflow).
- Familiarity with reinforcement learning and its applications in AI.
- Knowledge of distributed training techniques and tools such as Horovod or Ray.
- Advanced degree (Master's or Ph.D.) in Computer Science, Machine Learning, or a related field.
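The "end-to-end ML pipeline" responsibility above is, at its core, a chain of stages behind one entry point: preprocess, fit, predict. A minimal plain-Python sketch of that shape; the stage names and the trivial mean-threshold "model" are illustrative, not any particular framework's API.

```python
class Pipeline:
    """Chain preprocessing, training, and prediction into one runnable unit."""

    def __init__(self):
        self.threshold = None

    def preprocess(self, rows):
        # Drop records with missing values; a real stage might also scale/encode.
        return [r for r in rows if r["value"] is not None]

    def fit(self, rows):
        # "Training" here is just computing a mean threshold for a toy classifier.
        values = [r["value"] for r in rows]
        self.threshold = sum(values) / len(values)
        return self

    def predict(self, value):
        return "high" if value > self.threshold else "low"

    def run(self, rows):
        # The single entry point a scheduler or CI job would invoke.
        clean = self.preprocess(rows)
        self.fit(clean)
        return [self.predict(r["value"]) for r in clean]

data = [{"value": 1.0}, {"value": None}, {"value": 3.0}, {"value": 10.0}]
labels = Pipeline().run(data)
```

In a production pipeline each stage would be a tracked, versioned step (e.g., in SageMaker Pipelines or Kubeflow), but the composition idea is the same.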

Posted 4 days ago

Apply

5.0 - 10.0 years

25 - 40 Lacs

Bengaluru

Work from Office

Job Title: Lead AI/ML Engineer (GenAI, LLMs & Automation)
Location: Bangalore (HSR Layout)
Experience: 5+ years (including 3+ in Cloud)

Role Summary: We are building GenAI-driven assistants to transform how professionals learn, buy, and get support. You'll work closely with the CTO and lead the architecture, development, and deployment of LLM-powered solutions, including custom model training on proprietary datasets, RAG pipelines, and multi-agent systems. This is a full-stack AI role: you'll also drive infrastructure, DevOps, and observability across the GenAI stack to ensure production-grade performance, scalability, and automation.

Must-Have Skills:
- LLM fine-tuning and custom model creation using proprietary data (e.g., SFT, LoRA, PEFT)
- Strong expertise in RAG architectures, vector stores (Pinecone, FAISS), and embeddings
- Hands-on with LangChain, LangGraph, and prompt engineering
- Solid cloud experience (AWS: ECS, Lambda, Bedrock, S3, IAM)
- Backend/API engineering (FastAPI or Node.js), Docker, Git, CI/CD (GitHub Actions, CodePipeline)
- Scripting with Python & Bash; n8n or other workflow automation tools
- Observability: Langfuse, LangWatch, OpenTelemetry

Bonus Skills:
- Agentic frameworks (LangFlow, CrewAI, AutoGen)
- Experience integrating chatbots in web or LMS environments
- Infra as Code (Terraform), monitoring tools (Grafana, CloudWatch)

You'll Be Responsible For:
- Designing and scaling AI-first features for real users
- Leading AI engineering practices end to end: model, infra, and app layer
- Rapid experimentation, iteration, and optimization
- Collaborating cross-functionally and shipping production-grade solutions fast
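An assistant stack like the one described is usually wired as a chain: a prompt template feeding a model call, feeding an output parser. A dependency-free sketch of that chaining idea; the `fake_llm` stub stands in for a real LangChain/Bedrock model call, and the template variables are made up for illustration.

```python
def prompt_template(template: str):
    # Returns a step that fills {placeholders} from an input dict.
    return lambda inputs: template.format(**inputs)

def fake_llm(prompt: str) -> str:
    # Stub model: echoes the prompt; a real chain would call an LLM here.
    return f"ANSWER: {prompt}"

def parse(text: str) -> str:
    # Output parser: strip the model's framing down to the payload.
    return text.removeprefix("ANSWER: ").strip()

def chain(*steps):
    # Compose steps left to right, like piping template | llm | parser.
    def run(x):
        for step in steps:
            x = step(x)
        return x
    return run

qa_chain = chain(
    prompt_template("Summarize for a {role}: {text}"),
    fake_llm,
    parse,
)
result = qa_chain({"role": "developer", "text": "LoRA adapts a frozen model."})
```

LangChain's LCEL expresses the same composition with the `|` operator; the point here is only the shape of the data flow.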

Posted 4 days ago

Apply

8.0 - 14.0 years

0 Lacs

karnataka

On-site

As a Platform Development and Machine Learning expert at Adobe, you will play a crucial role in changing the world through digital experiences by building scalable AI platforms and designing ML pipelines. Your responsibilities will include: - Building scalable AI platforms that are customer-facing and evangelizing the platform with customers and internal stakeholders. - Ensuring platform scalability, reliability, and performance to meet business needs. - Designing ML pipelines for experiment management, model management, feature management, and model retraining. - Implementing A/B testing of models and designing APIs for model inferencing at scale. - Demonstrating proven expertise with MLflow, SageMaker, Vertex AI, and Azure AI. - Serving as a subject matter expert in LLM serving paradigms and possessing deep knowledge of GPU architectures. - Expertise in distributed training and serving of large language models and proficiency in model and data parallel training using frameworks like DeepSpeed and service frameworks like vLLM. - Demonstrating proven expertise in model fine-tuning and optimization techniques to achieve better latencies and accuracies in model results. - Reducing training and resource requirements for fine-tuning LLM and LVM models. - Having extensive knowledge of different LLM models and providing insights on the applicability of each model based on use cases. - Delivering end-to-end solutions from engineering to production for specific customer use cases. - Showcasing proficiency in DevOps and LLMOps practices, including knowledge in Kubernetes, Docker, and container orchestration. - Deep understanding of LLM orchestration frameworks like Flowise, Langflow, and Langgraph. 
Your skills matrix should include expertise in LLM such as Hugging Face OSS LLMs, GPT, Gemini, Claude, Mixtral, Llama, LLM Ops such as ML Flow, Langchain, Langraph, LangFlow, Flowise, LLamaIndex, SageMaker, AWS Bedrock, Vertex AI, Azure AI, Databases/Datawarehouse like DynamoDB, Cosmos, MongoDB, RDS, MySQL, PostGreSQL, Aurora, Spanner, Google BigQuery, Cloud Knowledge of AWS/Azure/GCP, Dev Ops knowledge in Kubernetes, Docker, FluentD, Kibana, Grafana, Prometheus, and Cloud Certifications (Bonus) in AWS Professional Solution Architect, AWS Machine Learning Specialty, Azure Solutions Architect Expert. Proficiency in Python, SQL, and Javascript is also required. Adobe is committed to creating exceptional employee experiences and values diversity. If you require accommodations to navigate the website or complete the application process, please contact accommodations@adobe.com or call (408) 536-3015.,

Posted 5 days ago

Apply

4.0 - 6.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Title: Data Scientist (Agentic AI & MLOps)
Location: Bangalore - Hybrid (3 days work from office, 2 days from home)

About Us: Our client delivers next-generation security analytics and operations management. They secure organisations worldwide by staying ahead of cyber threats, leveraging AI-reinforced capabilities for unparalleled protection.

Job Overview: We're seeking a Senior Data Scientist to architect agentic AI solutions and own the full ML lifecycle, from proof-of-concept to production. You'll operationalise LLMs, build agentic workflows, implement MLOps best practices, and design multi-agent systems for cybersecurity tasks.

Key Responsibilities:
- Operationalise large language models and agentic workflows (LangChain, LangGraph, LlamaIndex, Crew.AI) to automate security decision-making and threat response.
- Design, deploy, and maintain multi-agent AI systems for log analysis, anomaly detection, and incident response.
- Build proof-of-concept GenAI solutions and evolve them into production-ready components on AWS (Bedrock, SageMaker, Lambda, EKS/ECS) using reusable best practices.
- Implement CI/CD pipelines for model training, validation, and deployment with GitHub Actions, Jenkins, and AWS CodePipeline.
- Manage model versioning with MLflow and DVC; set up automated testing, rollback procedures, and retraining workflows.
- Automate cloud infrastructure provisioning with Terraform and develop REST APIs and microservices containerised with Docker and Kubernetes.
- Monitor models and infrastructure through CloudWatch, Prometheus, and Grafana; analyse performance and optimise for cost and SLA compliance.
- Collaborate with data scientists, application developers, and security analysts to integrate agentic AI into existing security workflows.

Qualifications:
- Bachelor's or Master's in Computer Science, Data Science, AI, or a related quantitative discipline.
- 4+ years of software development experience, including 3+ years building and deploying LLM-based/agentic AI architectures.
- In-depth knowledge of generative AI fundamentals (LLMs, embeddings, vector databases, prompt engineering, RAG).
- Hands-on experience with LangChain, LangGraph, LlamaIndex, Crew.AI, or equivalent agentic frameworks.
- Strong proficiency in Python and production-grade coding for data pipelines and AI workflows.
- Deep MLOps knowledge: CI/CD for ML, model monitoring, automated retraining, and production-quality best practices.
- Extensive AWS experience with Bedrock, SageMaker, Lambda, EKS/ECS, S3 (Athena, Glue, Snowflake preferred).
- Infrastructure as Code skills with Terraform.
- Experience building REST APIs, microservices, and containerisation with Docker and Kubernetes.
- Solid data science fundamentals: feature engineering, model evaluation, data ingestion.
- Understanding of cybersecurity principles, SIEM data, and incident response.
- Excellent communication skills for both technical and non-technical audiences.

Preferred Qualifications:
- AWS certifications (Solutions Architect, Developer Associate).
- Experience with Model Context Protocol (MCP) and RAG integrations.
- Familiarity with workflow orchestration tools (Apache Airflow).
- Experience with time series analysis, anomaly detection, and machine learning.
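The model-versioning and rollback duties above follow a simple registry pattern: every deployment registers an immutable version, and rollback just repoints an alias to an earlier one. A toy in-memory sketch of that idea; MLflow's actual Model Registry API differs, and the `detector` artifact dicts are purely illustrative.

```python
class ModelRegistry:
    """Minimal in-memory model registry with a production alias and rollback."""

    def __init__(self):
        self.versions = {}    # version number -> immutable model artifact
        self.production = None

    def register(self, model) -> int:
        version = len(self.versions) + 1
        self.versions[version] = model
        return version

    def promote(self, version: int):
        # Point the production alias at a registered version.
        self.production = version

    def rollback(self):
        # Repoint the alias to the previous version, if one exists.
        if self.production and self.production > 1:
            self.production -= 1

    def serve(self):
        return self.versions[self.production]

registry = ModelRegistry()
v1 = registry.register({"name": "detector", "auc": 0.91})
v2 = registry.register({"name": "detector", "auc": 0.87})  # regression
registry.promote(v2)
registry.rollback()  # v2 underperforms, fall back to v1
current = registry.serve()
```

Because versions are never mutated, rollback is instant and auditable; automated retraining just registers and promotes a new version through the same path.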

Posted 5 days ago

Apply

4.0 - 8.0 years

0 Lacs

Karnataka

On-site

As a Senior Data Scientist at our client's organization, your role will involve architecting agentic AI solutions and overseeing the entire ML lifecycle, from proof-of-concept to production. You will play a key role in operationalizing large language models, designing multi-agent AI systems for cybersecurity tasks, and implementing MLOps best practices.

Key Responsibilities:
- Operationalise large language models and agentic workflows (LangChain, LangGraph, LlamaIndex, Crew.AI) to automate security decision-making and threat response.
- Design, deploy, and maintain multi-agent AI systems for log analysis, anomaly detection, and incident response.
- Build proof-of-concept GenAI solutions and evolve them into production-ready components on AWS (Bedrock, SageMaker, Lambda, EKS/ECS) using reusable best practices.
- Implement CI/CD pipelines for model training, validation, and deployment with GitHub Actions, Jenkins, and AWS CodePipeline.
- Manage model versioning with MLflow and DVC; set up automated testing, rollback procedures, and retraining workflows.
- Automate cloud infrastructure provisioning with Terraform and develop REST APIs and microservices containerized with Docker and Kubernetes.
- Monitor models and infrastructure through CloudWatch, Prometheus, and Grafana; analyze performance and optimize for cost and SLA compliance.
- Collaborate with data scientists, application developers, and security analysts to integrate agentic AI into existing security workflows.

Qualifications:
- Bachelor's or Master's in Computer Science, Data Science, AI, or a related quantitative discipline.
- 4+ years of software development experience, including 3+ years building and deploying LLM-based/agentic AI architectures.
- In-depth knowledge of generative AI fundamentals (LLMs, embeddings, vector databases, prompt engineering, RAG).
- Hands-on experience with LangChain, LangGraph, LlamaIndex, Crew.AI, or equivalent agentic frameworks.
- Strong proficiency in Python and production-grade coding for data pipelines and AI workflows.
- Deep MLOps knowledge: CI/CD for ML, model monitoring, automated retraining, and production-quality best practices.
- Extensive AWS experience with Bedrock, SageMaker, Lambda, EKS/ECS, S3 (Athena, Glue, Snowflake preferred).
- Infrastructure as Code skills with Terraform.
- Experience building REST APIs, microservices, and containerization with Docker and Kubernetes.
- Solid data science fundamentals: feature engineering, model evaluation, data ingestion.
- Understanding of cybersecurity principles, SIEM data, and incident response.
- Excellent communication skills for both technical and non-technical audiences.

Preferred Qualifications:
- AWS certifications (Solutions Architect, Developer Associate).
- Experience with Model Context Protocol (MCP) and RAG integrations.
- Familiarity with workflow orchestration tools (Apache Airflow).
- Experience with time series analysis, anomaly detection, and machine learning.

Posted 5 days ago

Apply

5.0 - 7.0 years

35 - 55 Lacs

Hyderabad, Pune

Work from Office

Job Application Link: https://app.fabrichq.ai/jobs/865d4815-c031-4216-8e54-462d9b57ea23

Job Summary: The Data Science Architect role focuses on designing and implementing complex Generative AI solutions using AWS technologies. The position requires expertise in Retrieval Augmented Generation (RAG), GraphRAG methodologies, and agentic AI systems. The architect will develop advanced AI architectures incorporating state-of-the-art GenAI technologies while leveraging AWS Bedrock and SageMaker.

Key Responsibilities:
- Design and architect complex Generative AI solutions using AWS technologies
- Develop advanced AI architectures incorporating state-of-the-art GenAI technologies
- Create and implement Retrieval Augmented Generation (RAG) and GraphRAG solutions
- Architect scalable AI systems using AWS Bedrock and SageMaker
- Design and implement agentic AI systems with advanced reasoning capabilities
- Develop custom AI solutions leveraging vector databases and advanced machine learning techniques
- Evaluate and integrate emerging GenAI technologies and methodologies

Skills & Requirements (Must-Have):
- Python programming for AI/ML applications
- AWS AI services (Bedrock, SageMaker)
- Retrieval Augmented Generation (RAG) and GraphRAG
- Vector database architectures
- Large Language Models (LLMs)
- Deep learning frameworks (PyTorch, TensorFlow)

Posted 5 days ago

Apply

4.0 - 9.0 years

12 - 19 Lacs

Ambattur

Work from Office

Looking for an AI Developer (Amazon Bedrock / Agentic AI):
- AWS certification
- Experience in orchestration frameworks
- Knowledge of vector databases

Posted 5 days ago

Apply

7.0 - 10.0 years

34 - 45 Lacs

Ahmedabad

Work from Office

Key Skills:
- AWS Bedrock & Claude LLM
- AWS Lambda & serverless architecture
- Docker & containerization
- MLOps & CI/CD for ML
- Python & ML frameworks (TensorFlow, PyTorch, etc.)

NOTE: Apply only if you have relevant experience in this domain.
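For the Lambda-plus-Bedrock serverless pattern listed above, the unit of deployment is a handler that parses the event, calls the model, and returns an HTTP-shaped response. A sketch of that handler shape with the model call stubbed out so it runs offline; in a real function, `fake_invoke_claude` would be replaced by a `boto3` call to the Bedrock runtime, and the request field names here are assumptions.

```python
import json

def fake_invoke_claude(prompt: str) -> str:
    # Stub for a Bedrock runtime invocation; returns a canned completion.
    return f"[claude] {prompt[:40]}"

def handler(event, context=None):
    """AWS Lambda-style entry point: parse the request, invoke the model, respond."""
    try:
        body = json.loads(event.get("body") or "{}")
        prompt = body["prompt"]
    except (json.JSONDecodeError, KeyError):
        # Bad input should surface as a 4xx, not an unhandled exception.
        return {"statusCode": 400, "body": json.dumps({"error": "missing prompt"})}

    completion = fake_invoke_claude(prompt)
    return {"statusCode": 200, "body": json.dumps({"completion": completion})}

ok = handler({"body": json.dumps({"prompt": "Summarize this log line."})})
bad = handler({"body": "{}"})
```

Keeping the model call behind a small function like this also makes the handler unit-testable without AWS credentials, which matters for the CI/CD requirement in the same listing.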

Posted 5 days ago

Apply

5.0 - 10.0 years

5 - 10 Lacs

Noida, Uttar Pradesh, India

On-site

Job Description:
- Design and implement advanced solutions utilizing Large Language Models (LLMs).
- Demonstrate self-driven initiative by taking ownership and creating end-to-end solutions.
- Conduct research and stay informed about the latest developments in generative AI and LLMs.
- Develop and maintain code libraries, tools, and frameworks to support generative AI development.
- Participate in code reviews and contribute to maintaining high code quality standards.
- Engage in the entire software development lifecycle, from design and testing to deployment and maintenance.
- Collaborate closely with cross-functional teams to align messaging, contribute to roadmaps, and integrate software into different repositories for core system compatibility.
- Possess strong analytical and problem-solving skills.
- Demonstrate excellent communication skills and the ability to work effectively in a team environment.

Primary Skills:
- Natural Language Processing (NLP): Hands-on experience in use case classification, topic modeling, Q&A and chatbots, search, Document AI, summarization, and content generation. AND/OR
- Computer Vision and Audio: Hands-on experience in image classification, object detection, segmentation, image generation, and audio and video analysis.
- Generative AI: Proficiency with SaaS LLMs, including LangChain, LlamaIndex, vector databases, and prompt engineering (CoT, ToT, ReAct, agents). Experience with Azure OpenAI, Google Vertex AI, and AWS Bedrock for text/audio/image/video modalities. Familiarity with open-source LLMs, including tools like TensorFlow/PyTorch and Hugging Face, and techniques such as quantization, LLM fine-tuning using PEFT, RLHF, data annotation workflows, and GPU utilization.
- Cloud: Hands-on experience with cloud platforms such as Azure, AWS, and GCP. Cloud certification is preferred.
- Application Development: Proficiency in Python, Docker, FastAPI/Django/Flask, and Git.
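Prompt techniques like ReAct (listed under the generative AI skills above) alternate model "thoughts" with tool calls until a final answer appears. A minimal scripted sketch of that loop: the model is replaced by a canned sequence of steps, and the `lookup` tool is a hypothetical stand-in for real search or API calls.

```python
def lookup_tool(query: str) -> str:
    # Hypothetical tool: a tiny knowledge base standing in for search/API calls.
    kb = {"capital of France": "Paris"}
    return kb.get(query, "unknown")

# Canned "model" output: a ReAct trace is Thought -> Action -> Observation cycles.
scripted_steps = [
    {"thought": "I need a fact.", "action": ("lookup", "capital of France")},
    {"thought": "I have the answer.", "final": "The capital of France is {obs}."},
]

def react_loop(steps, tools):
    observation = None
    for step in steps:
        if "action" in step:
            tool_name, tool_input = step["action"]
            observation = tools[tool_name](tool_input)  # run the chosen tool
        if "final" in step:
            # The final step folds the last observation into the answer.
            return step["final"].format(obs=observation)
    return None

answer = react_loop(scripted_steps, {"lookup": lookup_tool})
```

In a real agent the steps come from the LLM at each turn, with observations appended back into the prompt; the control flow, though, is exactly this loop.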

Posted 6 days ago

Apply

5.0 - 7.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

What's the role?

As a Senior Software Engineer you have 5+ years of experience, and your role includes:
- Software Development: Develop, code, test, and deploy software components. Investigate and correct software defects.
- Continuous Learning: Stay updated on new technologies and programming languages.
- Database and Cloud Integration: Work with AWS or other cloud platforms.
- Automation and Configuration: Create and maintain scripts and configurations for software releases. Develop robust automation scripts.
- Agile Collaboration: Participate in agile planning and collaborate with teams to execute test matrices.
- Quality Assurance: Translate user stories into test cases, automate testing, and track defects. Maintain test suites and documentation.
- Troubleshooting and Optimization: Assist in troubleshooting and optimizing workflows. Identify and address gaps in requirements and tools.
- Best Practices: Promote best engineering practices and seek continuous improvement.

Who are you?

Qualifications:
- Bachelor's or master's degree in Computer Science/Information Systems or equivalent.
- 5+ years of software design & development experience.
- Strong programming skills in Python, with proficiency in C++.
- Experience with Model Context Protocol (MCP) and one or more agentic AI frameworks such as LangGraph, AWS Bedrock AgentCore, CrewAI, or Microsoft AutoGen.
- Experience with containerized workloads and orchestration tools such as Kubernetes and Argo.
- Infrastructure experience: Git/GitLab, CI/CD, AWS, Linux, and Maven/SBT/Poetry.
- Able to provide technical leadership and domain expertise to help build the competence of other engineers (e.g., researching new technologies, sharing tutorials with the team).
- Able to translate business and architectural features into quality, consistent software design.
- A strong quality mindset is a must: unit testing, performance testing, writing testable code.
- Eagerness for challenges, the ability to lead and advance teams of developers, and comfort working with a variety of technologies and personnel.
- Strong communication skills in English, both written and spoken.

Preferred Qualifications:
- Experience with spatial data processing, Computer Vision algorithms, and/or Machine Learning frameworks.
- Experience with MLOps frameworks and best practices.
- Knowledge of PostgreSQL, SQL, or other database technologies.
- Strong communication and collaboration skills in a distributed team environment.

HERE is an equal opportunity employer. We evaluate qualified applicants without regard to race, color, age, gender identity, sexual orientation, marital status, parental status, religion, sex, national origin, disability, veteran status, and other legally protected characteristics.

Who are we?

HERE Technologies is a location data and technology platform company. We empower our customers to achieve better outcomes, from helping a city manage its infrastructure or a business optimize its assets to guiding drivers to their destination safely. At HERE we take it upon ourselves to be the change we wish to see. We create solutions that fuel innovation, provide opportunity, and foster inclusion to improve people's lives. If you are inspired by an open world and driven to create positive change, join us. Learn more about us on our YouTube Channel.

Posted 6 days ago

Apply

1.0 - 5.0 years

0 Lacs

Pune, Maharashtra

On-site

As the AVP, Campaign Operations at Synchrony Financial, you will be an integral part of the Growth Marketing team, spearheading the transformation of campaign operations to deliver highly personalized, data-driven consumer journeys. By harnessing advanced automation and cutting-edge Gen/Agentic AI technologies, you will drive the optimization of campaign execution, enhance decision-making processes, and instill a consumer-centric approach across marketing operations. Your role as a transformational leader will involve collaborating across functions to maximize ROI, foster innovation, and fortify Synchrony's competitive standing in the market.

Your key responsibilities will include analyzing current campaign workflows to identify opportunities for AI-driven automation and transformation, ensuring robust and compliant processes before implementation, and leading the design and implementation of digitally enabled consumer-first campaigns powered by generative and agentic AI. You will develop scalable frameworks incorporating AI for targeted segmentation and journey orchestration, guide the team in exploring emerging AI tools, and promote the adoption of modern AI techniques and agile marketing practices to drive continuous transformation in campaign operations. Additionally, you will engage with stakeholders to evaluate AI opportunities, advocate for alignment with ethical and governance frameworks, and mentor the team on AI capabilities and consumer journey design.

To excel in this role, you should possess a Bachelor's Degree with 4+ years of IT experience or a minimum of 6+ years of Marketing experience in the financial domain. You should have a solid background in digital marketing and campaign operations, preferably within financial services, and familiarity with AI tools and platforms like Microsoft Bot Framework or Rasa. Proficiency in AI and automation tools, as well as knowledge of ML lifecycle concepts and data analytics tools, will be essential. Desired skills include hands-on experience with marketing automation platforms, knowledge of Customer Data Platforms, and a track record of driving change within complex marketing ecosystems.

If you meet the eligibility criteria and are excited about driving innovation in campaign operations while ensuring a consumer-first approach, we invite you to apply for this role. Please note that this position requires working from 2 PM to 11 PM IST. For internal applicants, it is important to understand the mandatory skills required for the role, inform your manager and HRM before applying, update your professional profile, and adhere to internal guidelines regarding eligibility and the application process.

Posted 6 days ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

You are an AI Security Engineer joining the Shared Security Services Engineering Team at NTT DATA in Bangalore, Karnataka, India. In this role, you will be responsible for developing, deploying, and maintaining software security solutions to safeguard AI resources within the enterprise.

Key Responsibilities:
- Collaborate with AI/ML and Security architecture teams to understand use case requirements.
- Design and implement robust security measures to protect AI models from attacks.
- Develop data protection mechanisms and create API services for AI security tools.
- Build monitoring solutions for AI security posture assessment.
- Integrate security controls into ML/AI workflows.
- Implement data loss prevention capabilities for sensitive information across communication channels.
- Document security processes, architecture, and implementation details.

Requirements:
- At least 5 years of work experience.
- Proficiency in Java, Python, React/Angular frameworks, AI/ML training & inference platforms (such as AWS Bedrock, AWS SageMaker), open-source & custom AI/ML models, Data Science, Terraform, and Helm charts.

Mandatory Skills:
- 5+ years of experience in building enterprise-grade full-stack applications.
- Strong hands-on development experience in Java or Python.
- Expertise in API development using REST and gRPC methodologies.
- Experience in cloud-native application development using Kubernetes.
- Proficiency in developing, deploying, and maintaining AI models and applications on cloud platforms.

Preferred Skills:
- Good understanding of OWASP Top 10 for AI and CISA guidelines for AI development.
- Familiarity with security risks in AI applications.
- Experience with observability frameworks like OpenTelemetry.

NTT DATA is a trusted global innovator providing business and technology services to Fortune Global 100 companies. As a Global Top Employer, NTT DATA has experts in over 50 countries and a robust partner ecosystem. Their services encompass business and technology consulting, data and artificial intelligence, industry solutions, and digital infrastructure. NTT DATA is part of the NTT Group, investing significantly in R&D to support organizations and society in moving confidently into the digital future. Visit us at us.nttdata.com.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

NTT DATA strives to hire exceptional, innovative, and passionate individuals who want to grow with the organization. If you aspire to be part of an inclusive, adaptable, and forward-thinking team, this opportunity is for you. We are currently seeking an Annuities in Bangalore, Karnataka (IN-KA), India to join our team as an AI Security Engineer. **Technology** - Proficiency in Java, Python, or similar programming languages - Familiarity with React/Angular frameworks - Experience with AI/ML training & inference platforms such as AWS Bedrock, AWS Sagemaker, open-source & custom AI/ML models, Data Science, Terraform, and Helm charts **Mandatory Skills** - 5+ years" experience in building enterprise-grade full stack applications - Strong hands-on development experience in Java or Python, including unit testing with frameworks like Junit or Pytest - 3+ years" experience in API development based on REST, gRPC methodologies using FastAPI, Spring REST, or similar frameworks - Proficiency in developing and maintaining cloud-native applications using Kubernetes or other container management solutions - Experience in developing, deploying, performance tuning, and maintaining AI models and applications on cloud platforms **Preferred Skills** - Good understanding of OWASP top 10 for AI and CISA guidelines for AI development - Preferably holding a cybersecurity certification such as CISSP or equivalent - Understanding of security risks in AI & Gen AI applications related to prompt injection attacks, data leakage, adversarial testing, etc. - Experience with observability frameworks like OpenTelemetry If you are looking to join a trusted global innovator of business and technology services, NTT DATA is the place for you. With a commitment to helping clients innovate, optimize, and transform for long-term success, we are a Global Top Employer with diverse experts in over 50 countries and a robust partner ecosystem. 
Our services range from business and technology consulting to data and artificial intelligence, industry solutions, as well as the development, implementation, and management of applications, infrastructure, and connectivity. Join us to be part of one of the leading providers of digital and AI infrastructure globally. NTT DATA is part of the NTT Group, which invests over $3.6 billion annually in R&D to help organizations and society move confidently and sustainably into the digital future. Learn more about us at us.nttdata.com.

Posted 1 week ago

Apply

5.0 - 23.0 years

0 Lacs

noida, uttar pradesh

On-site

As a Generative AI Architect with 5 to 10+ years of experience, you will be responsible for designing, developing, and deploying enterprise-grade GenAI solutions. Your role will require in-depth expertise in LLMs, RAG, MLOps, cloud platforms, and scalable AI architecture. You will architect and implement secure, scalable GenAI solutions using LLMs such as GPT, Claude, LLaMA, and Mistral. Additionally, you will build RAG pipelines with LangChain, LlamaIndex, FAISS, Weaviate, and ElasticSearch. Your responsibilities will also include leading prompt engineering, setting up evaluation frameworks for accuracy and safety, and developing reusable GenAI modules for function calling, summarization, document chat, and Q&A. Furthermore, you will deploy workloads on AWS Bedrock, Azure OpenAI, and GCP Vertex AI, ensuring monitoring and observability with Grafana, Prometheus, and OpenTelemetry. You will apply MLOps best practices such as CI/CD, model versioning, and rollback. Researching emerging trends like multi-agent systems, autonomous agents, and fine-tuning will also be part of your role, along with implementing data governance and compliance measures like PII masking, audit logs, and encryption. To be successful in this role, you should have 8+ years of experience in AI/ML, including 2-3 years specifically in LLMs/GenAI. Strong coding skills in Python with Hugging Face Transformers, LangChain, and OpenAI SDKs are essential. Expertise in Vector Databases like Pinecone, FAISS, Qdrant, and Weaviate is required, along with hands-on experience with cloud AI platforms such as AWS SageMaker/Bedrock, Azure OpenAI, and GCP Vertex AI. 
Experience in building RAG pipelines and chat-based applications is important, as is familiarity with agents and orchestration frameworks like LangGraph, AutoGen, and CrewAI; knowledge of the MLOps stack, including MLflow, Airflow, Docker, Kubernetes, and FastAPI; and an understanding of prompt security and GenAI evaluation metrics like BERTScore, BLEU, and GPTScore. Excellent communication and leadership skills for architecture discussions and mentoring are expected in this role.
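As an illustration of the retrieve-then-generate flow behind the RAG pipelines this role centers on, here is a minimal, dependency-free Python sketch. It is a toy: the term-frequency scorer stands in for an embedding model, the sorted scan stands in for a vector store such as FAISS, and the template response stands in for an LLM call; all function names are illustrative, not from any listed framework.

```python
import math
from collections import Counter

def embed(text):
    """Toy embedding: a term-frequency vector. A real pipeline would
    call an embedding model (e.g. via LangChain or Bedrock) instead."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, docs, k=1):
    """Rank documents by similarity to the query and return the top k —
    the job a vector database does at scale."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def generate(query, context):
    """Stand-in for an LLM call: a real system would send the retrieved
    context plus the query to a model such as Claude or GPT."""
    return f"Answer to '{query}' based on: {context[0]}"

docs = [
    "FAISS is a library for efficient vector similarity search.",
    "LangGraph builds stateful agent workflows.",
    "PostgreSQL is a relational database.",
]
context = retrieve("vector similarity search library", docs)
print(generate("vector similarity search library", context))
```

Production RAG differs mainly in scale and quality of each stage, not in this overall shape: embed, retrieve, then condition generation on what was retrieved.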

Posted 1 week ago

Apply

5.0 - 7.0 years

0 Lacs

bengaluru, karnataka, india

On-site

Job Title: Senior Data Scientist / Senior Gen AI Developer Location: PAN India Years of Experience: 4-10 years, with at least 3-5 project lifecycle experiences in data science, and a Bachelor's/Master's in Statistics/Economics, or BE/BTech/MTech with the above experience. Position Overview: This role requires strong expertise in AI/ML with a focus on Generative AI, combined with the business acumen to drive client transformation. You will be responsible for the design, development, and optimization of AI/ML and generative AI models, with a focus on leveraging Python for algorithm development, model implementation, and deployment. Key Responsibilities: Solution Development: Design and implement AI and Gen AI powered solutions using LLMs (e.g., OpenAI, Claude, Gemini, Llama 3). Fine-tune AI/generative AI models using Python and machine learning libraries. Leverage agentic AI frameworks such as CrewAI and AutoGen, and orchestrate interactions across vector DBs, RAG pipelines, and APIs. You should also have experience in developing ML models for regression, classification, clustering, and time series forecasting. Data Processing: Manage and preprocess large datasets for training AI/ML models, ensuring clean, efficient data pipelines for optimal performance. Model Evaluation: Evaluate and iterate on model performance, applying techniques to improve accuracy, robustness, and efficiency. Collaboration: Work closely with other developers, data scientists, and product managers to define AI requirements, integrate models into products, and ensure the AI system meets business needs. Optimization: Optimize model performance for production environments, focusing on scalability and computational efficiency. Thought Leadership: Stay updated with the latest trends in generative AI, contributing innovative ideas and applying cutting-edge research to development efforts.
Required skills & experience: Experience: 5+ years of professional experience with a focus on machine learning and deep learning, and 2+ years in generative AI applications. Python Expertise: Advanced proficiency in Python, with solid experience using machine learning and deep learning libraries such as TensorFlow, PyTorch, Keras, scikit-learn, and Pandas. Generative AI Experience: Hands-on experience with tools/platforms like OpenAI, LangChain, Hugging Face, Vertex AI, AWS Bedrock, etc., and frameworks such as LangGraph, CrewAI, and AutoGen. Data Handling: Experience working with large datasets, including preprocessing, cleaning, and augmenting data for AI model training. Model Deployment: Knowledge of deploying machine learning models into production environments, including familiarity with cloud services (AWS, GCP, Azure) and containerization tools (e.g., Docker). ML Algorithms & Statistics: Strong understanding of machine learning algorithms, statistics, and optimization techniques used in AI. Version Control: Proficiency in using version control systems such as Git for managing codebases and collaboration. Problem Solving: Strong analytical skills with the ability to tackle complex problems and devise efficient solutions in a fast-paced environment. Communication: Ability to clearly communicate technical concepts to both technical and non-technical stakeholders. Project experience in Python, PySpark, and SQL. Familiarity with model interpretability and debugging tools. Familiarity with reinforcement learning and its applications to generative models. Experience working in Agile/Scrum environments. Experience with data visualization tools, such as D3.js, GGplot, etc. Experience with other visualization tools like Power BI, Tableau, and QlikView is an added advantage.
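Of the model families the responsibilities list (regression, classification, clustering, time series), time series forecasting is easy to sketch without any library. The following is a naive moving-average baseline in plain Python, a deliberately simple stand-in: a production model (ARIMA, Prophet, gradient boosting) should beat it, which is exactly what makes it a useful benchmark.

```python
def moving_average_forecast(series, window=3, horizon=2):
    """Forecast the next `horizon` points as the mean of the last
    `window` observations, rolling each forecast back into the
    history so multi-step forecasts stay defined."""
    history = list(series)
    forecasts = []
    for _ in range(horizon):
        avg = sum(history[-window:]) / window
        forecasts.append(avg)
        history.append(avg)  # roll the forecast forward
    return forecasts

sales = [10, 12, 11, 13, 12, 14]
print(moving_average_forecast(sales))  # two steps ahead: [13.0, 13.0]
```

Evaluating a candidate model against a baseline like this (e.g. by comparing mean absolute error on a held-out tail of the series) is the "Model Evaluation" step described above in its simplest form.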

Posted 1 week ago

Apply

3.0 - 8.0 years

0 Lacs

pune, maharashtra

On-site

Welcome to the era of Velsera! Seven Bridges, Pierian & UgenTec have combined to become Velsera. Velsera is the precision engine company that empowers researchers, scientists, and clinicians to drive precision R&D, expand access to, and more effectively leverage, analytics at the point of care. We unify technology-enabled solutions and scientific expertise to enable a continuous flow of knowledge across the global healthcare ecosystem, accelerating medical breakthroughs that positively impact human health. With our headquarters in Boston, MA, we are growing and expanding our teams located in different countries! In this role, you will lead and participate in collaborative solutioning sessions with business stakeholders, translating business requirements and challenges into well-defined machine learning/data science use cases and comprehensive AI solution specifications. You will architect robust and scalable AI solutions that enable data-driven decision-making, leveraging a deep understanding of statistical modeling, machine learning, and deep learning techniques to forecast business outcomes and optimize performance. Additionally, you will design and implement data integration strategies, develop efficient methods for synthesizing large volumes of data, and leverage advanced feature engineering techniques to implement, validate, and optimize AI models. Your responsibilities will also include simplifying data presentation for stakeholders, maintaining knowledge of the latest advancements in AI and generative AI, and contributing to project management processes. You should have a bachelor's or master's degree in a quantitative field, 8+ years of experience in AI/ML development, and fluency in Python and SQL. Experience with cloud-based AI/ML platforms and tools is essential for this role. At Velsera, our core values include putting people first, being patient-focused, acting with integrity, fostering curiosity, and striving to be impactful.
We are an Equal Opportunity Employer committed to creating a collaborative, supportive, and inclusive work environment that promotes mental, emotional, and physical health while driving innovation and positive impact in the healthcare industry.

Posted 1 week ago

Apply

5.0 - 7.0 years

0 Lacs

delhi, india

On-site

Who We Are Balbix is the world's leading platform for cybersecurity posture automation. Using Balbix, organizations can discover, prioritize, and mitigate unseen risks and vulnerabilities at high velocity. With seamless data collection and petabyte-scale analysis capabilities, Balbix is deployed and operational within hours, and helps to decrease breach risk immediately. Balbix counts many Global 1000 companies among its rapidly growing customer base. We are backed by John Chambers (the former CEO and Chairman of Cisco), top Silicon Valley VCs, and global investors. We have been called magical, and have received rave reviews, customer testimonials, numerous industry awards, and recognition as a Cool Vendor by Gartner and by Frost & Sullivan. About This Job We're hiring a hands-on Technical Team Lead to join our core LLM engineering team. You'll work directly with the company's leadership to design, build, and scale intelligent reasoning pipelines using LangGraph and AWS Bedrock. This is not a coordination role: you'll lead by example, write production-grade code, solve complex problems, and help grow a high-performing technical team from day one. You Will Architect and implement LangGraph-powered workflows and Bedrock-based inference. Collaborate closely with the founder and the head of AI on system design and product strategy. Build and manage stateful agent flows, tool orchestration, retries, and memory handling. Debug real-world issues across prompts, agent logic, and runtime behavior. Mentor and lead an initial team of 5 engineers, shaping engineering best practices. Own the performance, cost-efficiency, and observability of LLM pipelines. You Have Strong CS fundamentals (B.Tech/M.Tech or equivalent) and 5+ years of backend or systems engineering experience. Experience with LLM orchestration tools like LangGraph, LangChain, or Bedrock agents. Deep Python skills with experience in async and event-driven programming.
Proven track record shipping and maintaining production systems. Ability to work across layers: prompt logic, orchestration, and infrastructure. Bonus to Have Familiarity with Langfuse or tracing/observability tooling. Experience with vector stores, prompt versioning, or RAG architectures. Background in cybersecurity, risk reasoning, or enterprise software. Why Join Us Work directly to shape a core AI capability Be the technical anchor for a small, elite team building from first principles Ship code that runs in production and impacts global enterprise security Competitive salary, meaningful equity, and a fast-moving builder culture Life @ Balbix At Balbix, we have built a culture that aligns to our values of ownership, customer focus, curiosity, tenacity, innovation, judgement, teamwork, communication, honesty, and impact. In joining our team you'll work with very motivated and knowledgeable people, build pioneering products, and utilize cutting-edge technology. Our Balbix team members see rapid career growth opportunities stemming from our culture of alignment, bottom-up innovation, clarity of goals, and unrelenting mission. Last but not least, developing the world's most advanced platform to address one of the most important (and hardest) technology problems facing mankind today is exceptionally rewarding! More information at https://www.balbix.com/company/careers/ Please reach out if you want a seat on our rocket-ship and are passionate about changing the cybersecurity equation.
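The "stateful agent flows, tool orchestration, retries, and memory handling" this role owns can be sketched in plain Python. This is a toy stand-in for what a framework like LangGraph provides (typed state graphs, checkpointing, conditional edges); the tool names and state keys below are invented for illustration.

```python
import time

def with_retries(tool, max_attempts=3, delay=0.0):
    """Wrap a tool call with bounded retries — the kind of policy an
    orchestrator attaches to flaky tool nodes (e.g. network-backed
    search or a rate-limited model endpoint)."""
    def wrapped(state):
        for attempt in range(1, max_attempts + 1):
            try:
                return tool(state)
            except RuntimeError:
                if attempt == max_attempts:
                    raise
                time.sleep(delay)
    return wrapped

def run_flow(state, steps):
    """Run tools in sequence, threading a shared state dict through —
    a minimal stand-in for a stateful agent graph."""
    for step in steps:
        state = step(state)
    return state

calls = {"n": 0}

def flaky_search(state):
    """Hypothetical tool that fails once, then succeeds on retry."""
    calls["n"] += 1
    if calls["n"] < 2:
        raise RuntimeError("transient error")
    return {**state, "docs": ["retrieved context"]}

def answer(state):
    """Hypothetical answering step, conditioned on retrieved docs."""
    return {**state, "answer": f"Based on {state['docs'][0]}"}

result = run_flow({"query": "risk posture"}, [with_retries(flaky_search), answer])
print(result["answer"])
```

The state dict accumulating keys across steps is the "memory" in miniature; real frameworks add persistence and branching on top of this same shape.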

Posted 1 week ago

Apply

8.0 - 13.0 years

20 - 35 Lacs

ahmedabad

Remote

AI Architect Armakuni India Role Summary: Armakuni India is seeking a visionary AI Architect with deep technical expertise in AI/ML systems, Generative AI, and cloud-native architecture. This role demands hands-on leadership to define, design, and scale next-generation AI solutions, including LLMs, computer vision systems, and enterprise-grade ML platforms. You will drive innovation across the AI stack, shaping the future of data-driven product development at Armakuni. Key Responsibilities: Architectural Leadership * Define and drive the AI/ML architectural vision aligned with business objectives. * Lead the design of scalable, secure, and cost-optimized AI platforms on AWS or similar clouds. * Evaluate and select appropriate models, tools, and infrastructure for enterprise deployment. Team & Technical Mentorship * Guide and mentor engineers, data scientists, and MLOps professionals. * Conduct architecture reviews; enforce coding and deployment standards. * Foster a culture of experimentation, ownership, and knowledge sharing. Generative AI & LLMs * Architect and deliver GenAI solutions leveraging LLMs like GPT-4, Claude, Gemini, LLaMA, etc. * Implement agentic workflows, RAG pipelines, and domain-specific LLM fine-tuning. * Use advanced frameworks like LangChain, LangGraph, Langfuse, CrewAI, and LlamaIndex. Machine Learning Engineering * Build robust ML pipelines: preprocessing, model training, evaluation, and drift monitoring. * Apply classical ML, deep learning, and time series forecasting to solve real-world problems. * Deploy and manage models using SageMaker and other MLOps tools. Computer Vision (CV) * Architect CV pipelines for image classification, object detection, segmentation, and more. * Optimize models for cloud and edge deployment using PyTorch/TensorFlow. Cloud & Infrastructure * Lead cloud-native deployments using AWS (SageMaker, Bedrock, Lambda, etc.). * Use containerization tools (Docker, Kubernetes) for scalable infrastructure.
* Integrate with RESTful APIs, vector databases (Pinecone, FAISS), and caching layers. Core Skills & Technologies * Languages & Libraries: Python, scikit-learn, XGBoost, PyTorch, TensorFlow * Generative AI Tools: LangChain, LangGraph, LlamaIndex, Langfuse, CrewAI * Databases: PostgreSQL, DynamoDB, Redis, Chroma * MLOps & Deployment: SageMaker, MLflow, FastAPI, Docker, Uvicorn * Vector Search: Pinecone, FAISS, OpenSearch * Cloud Platforms: AWS (preferred), GCP, Azure Preferred Qualifications * 8+ years in AI/ML, with 4+ years in GenAI and LLMs * Proven track record in leading enterprise-grade AI deployments * Open-source contributions or publications in AI/ML * AWS AI/ML certification or equivalent is a strong plus

Posted 1 week ago

Apply

12.0 - 18.0 years

35 - 55 Lacs

hyderabad, chennai, bengaluru

Hybrid

Job Description: We are looking for a highly skilled Machine Learning Engineer with expertise in Generative AI, AWS Bedrock, Amazon Kendra, and LangChain to join our innovative team. The ideal candidate will have hands-on experience building and deploying advanced ML models, with a strong focus on solving complex business challenges and driving product innovation. Responsibilities: Design, develop, and deploy Generative AI models for creative applications including text, image, and audio generation. Implement and customize AWS Bedrock for faster model development, deployment, and monitoring. Integrate Amazon Kendra to enhance search capabilities and improve user experiences. Build and deploy LangChain-based solutions for secure and scalable language processing. Collaborate with cross-functional teams to define requirements, prepare datasets, and evaluate model performance. Optimize ML pipelines for scalability, cost-efficiency, and performance across cloud platforms (AWS, Azure, GCP). Stay updated on emerging ML trends, tools, and frameworks, and recommend adoption where relevant. Conduct code reviews, provide technical guidance, and mentor junior engineers in ML best practices. Qualifications: Strong experience in machine learning, deep learning, and Generative AI techniques. Hands-on expertise with AWS Bedrock, Amazon Kendra, and LangChain. Proficiency in Python, ML frameworks (TensorFlow, PyTorch), and cloud ML services. Solid understanding of data preprocessing, model evaluation, and deployment pipelines. Experience with cloud infrastructure (AWS, Azure, GCP) and MLOps best practices. Excellent problem-solving, collaboration, and communication skills. Why Trianz? At Trianz, we are transforming digital enterprises with our IP-led platforms in Cloud, Data & AI, Digital Experiences, and Cybersecurity.
Recognized by Forbes as one of the World's Best Management Consulting Firms and certified as a Great Place to Work®, we bring together talent, innovation, and technology to deliver outcomes that matter.

Posted 1 week ago

Apply

9.0 - 14.0 years

16 - 30 Lacs

bengaluru

Work from Office

Roles and Responsibilities Design, develop, test, deploy, and maintain scalable data pipelines using FastAPI, Pandas, NumPy, Matplotlib, scikit-learn, and Generative AI concepts. Collaborate with cross-functional teams to identify business requirements and design solutions that meet those needs. Develop high-quality code in Python with expertise in AWS Bedrock and PostgreSQL database management systems. Troubleshoot issues related to data processing pipelines and provide timely resolutions. Desired Candidate Profile 9-14 years of experience in software development with a focus on data engineering or analytics. Strong understanding of machine learning algorithms (e.g., regression, classification) and their implementation using the scikit-learn library. Proficiency in working with vector databases and Retrieval Augmented Generation (RAG). Experience with AI/ML solution design principles, including generative models like GANs (Generative Adversarial Networks).
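To make the regression skills above concrete, here is ordinary least squares for a single feature, worked out in plain Python from the closed form slope = cov(x, y) / var(x). This is the same fit scikit-learn's LinearRegression computes (more generally, for many features); the sketch assumes noise-free toy data for clarity.

```python
def fit_line(xs, ys):
    """Ordinary least squares for one feature:
    slope = cov(x, y) / var(x), intercept = mean(y) - slope * mean(x)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    slope = cov / var
    intercept = my - slope * mx
    return slope, intercept

xs = [1, 2, 3, 4]
ys = [3, 5, 7, 9]  # generated from y = 2x + 1, no noise
slope, intercept = fit_line(xs, ys)
print(slope, intercept)  # → 2.0 1.0
```

With noisy real data the recovered slope and intercept would only approximate the generating ones, which is where the evaluation and troubleshooting work described in the listing comes in.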

Posted 1 week ago

Apply

8.0 - 12.0 years

25 - 35 Lacs

pune

Work from Office

Company Description Syngenta is one of the world's leading agriculture innovation companies (part of Syngenta Group), dedicated to improving global food security by enabling millions of farmers to make better use of available resources. Through world-class science and innovative crop solutions, our 60,000 people in over 100 countries are working to transform how crops are grown. We are committed to rescuing land from degradation, enhancing biodiversity, and revitalizing rural communities. A diverse workforce and an inclusive workplace environment are enablers of our ambition to be the most collaborative and trusted team in agriculture. Our employees reflect the diversity of our customers, the markets where we operate, and the communities which we serve. No matter what your position, you will have a vital role in safely feeding the world and taking care of our planet. To learn more visit: www.syngenta.com Job Description Role purpose Co-lead operations of the predictive modelling platform and act as a key bridge between R&D IT and R&D Set the strategic direction for the modelling platform and guide further technical development Design and develop models to generate new content using machine learning models in a secure, well-tested, and performant way Confidently ship features and improvements with minimal guidance and support from other team members Establish and promote community standards for data-driven modelling, machine learning, and model life cycle management Define and improve internal standards for style, maintainability, and best practices for a high-scale machine learning environment. Maintain and advocate for these standards through code review.
Support diverse technical modelling communities with governance needs Engage and inspire scientific community as well as R&D IT and promote best practices in modeling Accountabilities Acts as R&D IT co-lead and subject matter expert for the modelling platform, providing strategic direction as well as overseeing technical and scientific governance aspects Works closely with R&D to ensure platform remains fit for purpose for changing scientific needs Engages with modelling communities across R&D to understand applications, recognize opportunities and novel use cases and prioritizes efforts within the platform for maximum impact Develops Python code, scripts and other tooling within the modelling platform to streamline operations and prototype new functionality Provides hands-on support to expert modellers by defining best practices on coding conventions, standards etc. for model deployment and quality control Explores, prototypes and tests new technologies for model building, validation and deployment, e.g. machine learning frameworks, statistical methods, and how they could be integrated into the platform to boost innovation Monitors new developments in the field and maintains awareness of modelling approaches taken by other companies, vendors, and academia. Works with external collaborators in academia and industry to understand and integrate their complementary capabilities Qualifications Critical knowledge, Experience & Capabilities Background in predictive modelling in the physical or life sciences at a postgraduate level Prior wet-lab experience (e.g. biology, chemistry, toxicology, environmental science) is a plus Experience working in an academic or industrial R&D setting Strong Python skills and familiarity with standard data science tooling for data-driven modelling / machine learning Understanding of the model lifecycle and tools to manage it, as well as technical aspects such as deployment, containerization/virtualization, and handling metadata. 
Experience with Dataiku/DSS and AWS Bedrock is a plus, but not essential Strong analytical thinking and problem-solving skills, adaptability to different business challenges, and openness to new solutions and different ways of working Curiosity and the ability to acquire domain knowledge in adjacent scientific areas to work effectively across internal teams and quickly get up to speed with different modelling approaches Understanding of mathematical/mechanistic modelling is a plus Solid understanding of Web APIs and how they can be used to operationalize models Adaptable to different business challenges and data types/sources Able to learn and utilize a range of different analytical tools and methodologies, not fixed on a particular methodology Strong collaborative, networking, and relationship-building skills Uses visualization and storytelling with data to communicate results to parties with varying levels of technical proficiency Enjoys working in a highly diverse environment comprising multiple scientific disciplines, nationalities, and cultural backgrounds Able to manage own time and deal effectively with conflicting workloads, in agreement with key customers. Additional information Note: Syngenta is an Equal Opportunity Employer and does not discriminate in recruitment, hiring, training, promotion, or any other employment practices for reasons of race, color, religion, gender, national origin, age, sexual orientation, gender identity, marital or veteran status, disability, or any other legally protected status. Follow us on: Twitter & LinkedIn https://twitter.com/SyngentaAPAC https://www.linkedin.com/company/syngenta/ India page https://www.linkedin.com/company/70489427/admin/
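The requirement above about using Web APIs to operationalize models can be sketched with nothing but the Python standard library: wrap a model behind an HTTP endpoint that accepts features and returns a prediction. The `predict` function here is a deliberately trivial placeholder for a trained model artifact, and the route and payload shape are invented for illustration; a real deployment would sit behind a serving stack (FastAPI, SageMaker endpoints, or similar).

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict(features):
    """Placeholder model: a real deployment would load a trained
    artifact here (e.g. from MLflow or a model registry)."""
    return sum(features) / len(features)

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Parse the JSON request body, score it, and return JSON.
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        payload = json.dumps({"prediction": predict(body["features"])}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):
        # Keep request logging quiet for this sketch.
        pass

server = HTTPServer(("127.0.0.1", 0), PredictHandler)  # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()
print(f"serving on port {server.server_port}")
```

A client then POSTs `{"features": [...]}` to the server and reads the prediction back; everything else in model operationalization (versioning, monitoring, authentication) layers on top of this request/response core.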

Posted 1 week ago

Apply

0.0 - 15.0 years

0 Lacs

haryana

On-site

As a Senior Data Scientist specializing in Generative AI, you will be a valuable addition to our growing team. Your primary responsibility will be to design, train, and implement Large Language Models (LLMs) and Foundation Models (FMs) for generative AI applications. You will leverage your expertise in Python, TensorFlow, PyTorch, and AWS Bedrock to efficiently manage datasets and machine learning workflows. Collaborating with cross-functional teams, you will deploy and integrate AI applications into business processes, contributing to the enhancement of client offerings. With 10-15 years of experience in data science and machine learning, including over 6 months of experience with LLMs, FMs, and generative AI in a cloud environment, you will demonstrate your ability to translate complex AI concepts into practical business applications. Your day-to-day activities will involve designing and implementing generative AI models, managing and preprocessing large datasets, integrating AI solutions into business processes, conducting research to develop new AI solutions, and participating in code reviews. To excel in this role, you should possess a Degree or Postgraduate qualification in Computer Science or a related field, or equivalent industry experience. Proficiency in Python and machine learning libraries, and experience with AWS services and cloud-based AI solutions, are essential. Strong communication skills, both verbal and written, along with excellent analytical and problem-solving abilities, will be key to your success. You should be comfortable working independently and as part of a team, with a strategic mindset focused on research and development. Your ability to influence multiple teams on technical considerations will be highly valued in this position.

Posted 2 weeks ago

Apply

9.0 - 14.0 years

37 - 45 Lacs

hyderabad, chennai, bengaluru

Hybrid

Location: Gurgaon / Any Xebia Office (Hybrid, 3 days in office per week) Work Shift: 3:30 PM to 12:30 AM IST Job Description We are seeking GenAI Engineers with strong expertise in AWS, Bedrock, and Agentic AI. The role requires building enterprise-grade AI solutions that go beyond API usage, leveraging the comprehensive native functionalities of Bedrock. Key Responsibilities Architect, design, and deploy GenAI applications on AWS. Implement native Bedrock functionalities for advanced AI workflows. Collaborate with global teams across US time zones. Deliver AI-driven solutions in line with the data innovation strategy. Required Skills & Experience 6-8 years of overall experience in Big Data / Data & AI projects. Strong hands-on experience with AWS, GenAI, Bedrock, and Agentic AI. Experience with Bedrock beyond APIs, including native integrations. Ability to work effectively in a late IST shift. Candidate Details to Share (Mandatory): Total Experience Relevant Experience (GenAI + AWS) Current CTC Expected CTC Notice Period (immediate to 2 weeks; only apply if you can join early) Current Location Preferred Location LinkedIn Profile URL Email for Applications: Vijay.S@xebia.com

Posted 2 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies