51 AWS Bedrock Jobs

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 - 9.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Python Backend Engineer specializing in AWS with a focus on GenAI and ML, you will design, develop, and maintain intelligent backend systems and AI-driven applications. Your primary objective will be to build and scale backend systems while integrating AI/ML models using Django or FastAPI. You will deploy machine learning and GenAI models with frameworks such as TensorFlow, PyTorch, or Scikit-learn, and use LangChain for GenAI pipelines; experience with LangGraph is an advantage. Collaboration with data scientists, DevOps, and architects is essential to integrate models into production. You will work with AWS services such as EC2, Lambda, S3, SageMaker, and CloudFormation for infrastructure and deployment, manage CI/CD pipelines for backend and model deployments, and ensure the performance, scalability, and security of applications in cloud environments.

To succeed in this role, you should have at least 5 years of hands-on experience in Python backend development and a strong background in building RESTful APIs with Django or FastAPI. Proficiency in AWS cloud services is crucial, along with a solid understanding of ML/AI concepts and model deployment practices. Familiarity with ML libraries such as TensorFlow, PyTorch, or Scikit-learn is required, as is experience with LangChain for GenAI applications. Experience with DevOps tools such as Docker, Kubernetes, Git, Jenkins, and Terraform will be beneficial, and an understanding of microservices architecture, CI/CD workflows, and agile development practices is desirable. Nice-to-have skills include knowledge of LangGraph, LLMs, embeddings, and vector databases, as well as exposure to OpenAI APIs, AWS Bedrock, or similar GenAI platforms. Familiarity with MLOps tools and practices for model monitoring, versioning, and retraining is advantageous.

This is a full-time, permanent position with benefits such as health insurance and provident fund. The work location is in-person, with day shifts Monday to Friday. If you are interested in this opportunity, please contact the employer at +91 9966550640.
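
The model-integration work this role describes ultimately comes down to sending a well-formed request body to a Bedrock model endpoint. A minimal sketch of that payload-building step, assuming the Anthropic messages schema used by Bedrock's `InvokeModel` API (the `boto3` client call and the model ID are deliberately left out, so only the JSON body is constructed here):

```python
import json

def build_claude_payload(prompt: str, max_tokens: int = 512, temperature: float = 0.2) -> str:
    """Build the JSON request body for an Anthropic model on AWS Bedrock.

    The structure follows the Bedrock Anthropic "messages" schema; in
    production this string would be passed as the `body` argument of
    `bedrock_runtime.invoke_model(modelId=..., body=...)`.
    """
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "temperature": temperature,
        "messages": [
            {"role": "user", "content": [{"type": "text", "text": prompt}]}
        ],
    }
    return json.dumps(body)
```

A Django or FastAPI route would typically parse the request, call `build_claude_payload`, invoke the model, and return the decoded response body.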

Posted 1 day ago

Apply

4.0 - 8.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Line of Service: Advisory. Industry/Sector: Not Applicable. Specialism: SAP. Management Level: Senior Associate.

Job Description & Summary: At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes, and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences, or status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

We are looking for a seasoned AWS Data Engineer.

Responsibilities: Design and implement AI/ML/GenAI models using AWS services such as AWS Bedrock, SageMaker, Comprehend, and Rekognition. Strong programming skills in Python, R, etc. Experience with machine learning frameworks such as TensorFlow, PyTorch, or Scikit-learn. Knowledge of data preprocessing, feature engineering, and model evaluation techniques. Develop and deploy generative AI solutions to solve complex business problems and improve operational efficiency. Collaborate with data scientists, engineers, and product teams to understand requirements and translate them into technical solutions. Optimize and fine-tune machine learning models for performance and scalability. Ensure the security, reliability, and scalability of AI/ML solutions by adhering to best practices. Maintain and update existing AI/ML models to ensure they meet evolving business needs. Stay up to date with the latest advancements in AI/ML and GenAI technologies and integrate relevant innovations into our solutions. Provide technical guidance and mentorship to junior developers and team members. Excellent problem-solving skills and the ability to work in a fast-paced, collaborative environment. Good to have: AWS Certified Machine Learning - Specialty or other relevant AWS certifications.

Mandatory Skill Sets: cloud (AWS, Azure, GCP) services such as GCP BigQuery, Dataform, AWS Redshift; Python. Preferred Skill Sets: DevOps. Years of experience required: 4-8 years. Education Qualification: BE/B.Tech/MBA/MCA/M.Tech. Degrees/Fields of Study required: Master of Business Administration, Bachelor of Engineering, Bachelor of Technology. Required Skills: AWS DevOps, Data Engineering. Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline + 27 more. Travel Requirements: Not Specified. Available for Work Visa Sponsorship: No. Government Clearance Required: No.

Posted 1 day ago

Apply

5.0 - 9.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

You will work as an AI Platform Engineer in Bangalore as part of the GenAI COE team. Your key responsibilities will involve developing and promoting scalable AI platforms for customer-facing applications, evangelizing the platform with customers and internal stakeholders, and ensuring scalability, reliability, and performance to meet business needs. Your role will also entail designing machine learning pipelines for experiment management, model management, feature management, and model retraining. Implementing A/B testing of models and designing APIs for model inferencing at scale will be crucial. You should have proven expertise with MLflow, SageMaker, Vertex AI, and Azure AI.

As an AI Platform Engineer, you will serve as a subject matter expert in LLM serving paradigms, with in-depth knowledge of GPU architectures. Expertise in distributed training and serving of large language models is required, along with proficiency in model- and data-parallel training using frameworks like DeepSpeed and serving frameworks like vLLM. You will be expected to demonstrate proven expertise in model fine-tuning and optimization techniques to achieve better latencies and accuracies in model results, and to reduce training and resource requirements for fine-tuning LLM and LVM models. Extensive knowledge of different LLM models, and the ability to advise on their applicability to specific use cases, is crucial, as is proven experience delivering end-to-end solutions from engineering to production for specific customer use cases. Proficiency in DevOps and LLMOps practices, knowledge of Kubernetes, Docker, and container orchestration, and a deep understanding of LLM orchestration frameworks such as Flowise, Langflow, and LangGraph are also required.

In terms of skills, you should be familiar with LLM models such as Hugging Face OSS LLMs, GPT, Gemini, Claude, Mixtral, and Llama, and with LLMOps tools such as MLflow, LangChain, LangGraph, LangFlow, Flowise, LlamaIndex, SageMaker, AWS Bedrock, Vertex AI, and Azure AI. Knowledge of databases and data warehouses such as DynamoDB, Cosmos DB, MongoDB, RDS, MySQL, PostgreSQL, Aurora, and Google BigQuery, as well as the AWS, Azure, and GCP cloud platforms, is essential. Proficiency with DevOps tools like Kubernetes, Docker, Fluentd, Kibana, Grafana, and Prometheus, along with cloud certifications such as AWS Certified Solutions Architect - Professional or Azure Solutions Architect Expert, will be beneficial. Strong programming skills in Python, SQL, and JavaScript are required for this full-time role, with an in-person work location.
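
The A/B testing of models mentioned above is often implemented with deterministic hash-based routing, so that a given user is pinned to the same model variant across requests. A small sketch (the variant names and traffic share are illustrative, not from the posting):

```python
import hashlib

def route_model(user_id: str, treatment_share: float = 0.2) -> str:
    """Deterministically assign a user to a model variant for A/B testing.

    Hashing the user ID (rather than sampling randomly per request) keeps a
    given user on the same variant across sessions, which makes downstream
    metric attribution clean.
    """
    digest = hashlib.sha256(user_id.encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64  # uniform in [0, 1)
    return "candidate" if bucket < treatment_share else "baseline"
```

An inference API would call `route_model` once per request and dispatch to the corresponding model endpoint; changing `treatment_share` gradually ramps traffic to the candidate model.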

Posted 2 days ago

Apply

5.0 - 9.0 years

0 Lacs

Thiruvananthapuram, Kerala

On-site

As a Senior Machine Learning Engineer Contractor specializing in AWS ML pipelines, your primary responsibility will be to design, develop, and deploy advanced ML pipelines within an AWS environment. You will work on cutting-edge solutions that automate entity matching for master data management, implement fraud detection systems, handle transaction matching, and integrate GenAI capabilities. The ideal candidate possesses extensive hands-on experience with AWS services such as SageMaker, Bedrock, Lambda, Step Functions, and S3, along with a strong command of CI/CD practices to ensure robust, scalable solutions.

Your key responsibilities will include designing and developing end-to-end ML pipelines focused on entity matching, fraud detection, and transaction matching. You will integrate generative AI solutions using AWS Bedrock to enhance data processing and decision-making, and collaborate with cross-functional teams to refine business requirements and develop data-driven solutions tailored to master data management needs.

In terms of AWS ecosystem expertise, you will use SageMaker for model training, deployment, and continuous improvement, and leverage Lambda and Step Functions to orchestrate serverless workflows for data ingestion, preprocessing, and real-time processing. Managing data storage, retrieval, and scalability with AWS S3 will also be within your purview. Furthermore, you will develop and integrate automated CI/CD pipelines to streamline model testing, deployment, and version control, ensuring rapid iteration and robust deployment practices that maintain high availability and performance of ML solutions.

Data security and compliance are critical aspects of the role. You will implement security best practices to safeguard sensitive data, ensure compliance with organizational and regulatory requirements, and incorporate monitoring and alerting mechanisms to maintain the integrity and performance of deployed ML models.

Collaboration and documentation will also play a significant part in your day-to-day work. You will work closely with business stakeholders, data engineers, and data scientists to keep solutions aligned with evolving business needs, document all technical designs, workflows, and deployment processes to support ongoing maintenance and future enhancements, and provide regular progress updates while adapting to changing priorities in a dynamic environment.

To qualify for this role, you should have at least 5 years of professional experience developing and deploying ML models and pipelines, with proven expertise in AWS services including SageMaker, Bedrock, Lambda, Step Functions, and S3. Strong proficiency in Python and/or PySpark, demonstrated experience with CI/CD tools and methodologies, and practical experience building solutions for entity matching, fraud detection, and transaction matching within a master data management context are also required. Familiarity with generative AI models and their application within data processing workflows is an added advantage. Strong analytical and problem-solving skills are essential: you should be able to transform complex business requirements into scalable technical solutions, with a track record of developing models that provide actionable insights.

Excellent verbal and written communication skills, the ability to work independently as a contractor while collaborating effectively with remote teams, and a proven record of quickly adapting to new technologies and agile work environments are preferred. A Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field is a plus, as is experience with additional AWS services such as Kinesis, Firehose, and SQS, prior consulting or contracting experience managing deliverables under tight deadlines, and experience in industries where data security and compliance are critical.
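
The entity-matching work described above typically starts from a weighted fuzzy-similarity score between candidate records. A toy sketch using only the standard library (the field names and weights are hypothetical; a real MDM pipeline would add blocking, normalization, and a tuned decision threshold):

```python
from difflib import SequenceMatcher

def entity_match_score(a: dict, b: dict, weights: dict) -> float:
    """Weighted fuzzy similarity between two records, in [0, 1].

    Each field contributes a SequenceMatcher ratio scaled by its weight;
    missing fields contribute nothing. This is only the scoring core of an
    entity-matching pipeline, not a full matcher.
    """
    total = sum(weights.values())
    score = 0.0
    for field, w in weights.items():
        x, y = a.get(field, ""), b.get(field, "")
        if x and y:
            score += w * SequenceMatcher(None, x.lower(), y.lower()).ratio()
    return score / total
```

Records scoring above a calibrated threshold would be merged or flagged for human review; ones below it are treated as distinct entities.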

Posted 3 days ago

Apply

5.0 - 8.0 years

5 - 15 Lacs

Bengaluru, Karnataka, India

On-site

About the role: In this opportunity as a Senior Software Engineer, you will:
• Actively participate and collaborate in meetings, processes, agile ceremonies, and interactions with other technology groups.
• Work with Lead Engineers and Architects to develop high-performing, scalable software solutions that meet requirement and design specifications.
• Provide technical guidance, mentoring, or coaching to software or systems engineering teams distributed across geographic locations.
• Proactively share knowledge and best practices on new and emerging technologies across the development and testing groups.
• Assist in identifying and correcting software performance bottlenecks.
• Provide regular progress and status updates to management.
• Provide technical support to operations or other development teams by troubleshooting, debugging, and solving critical production issues promptly to minimize user and revenue impact.
• Interpret code and solve problems based on existing standards.
• Create and maintain all required technical documentation and manuals for assigned components to ensure supportability.

About you: You're a fit for the role of Senior Software Engineer if your background includes:
• Bachelor's or master's degree in computer science, engineering, information technology, or equivalent experience
• 5+ years of professional software development experience
• 2+ years of experience with Java and REST-based services
• 2+ years of Python experience
• Ability to debug and diagnose issues
• Experience with version control (Git, GitHub)
• Experience working with various AWS technologies (DynamoDB, S3, EKS)
• Experience with Linux
• Infrastructure as Code, CI/CD pipelines
• Excellent and creative problem-solving skills
• Strong written and oral communication skills
• Knowledge of artificial intelligence: AWS Bedrock, Azure OpenAI, large language models (LLMs), prompt engineering

Posted 3 days ago

Apply

8.0 - 13.0 years

20 - 30 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Hi, we are looking for an AI Lead Engineer.

Position Summary: We are seeking an experienced and visionary AI Lead Engineer to spearhead the design, development, and scaling of an enterprise-grade, automated FAQ Generation Platform powered by Generative AI. This role will be pivotal in architecting intelligent systems that ingest structured and unstructured content and dynamically generate, validate, and optimize FAQs across domains such as travel, accommodation, policy, and support services. The ideal candidate brings deep expertise in LLM operations (LLMOps), retrieval-augmented generation (RAG), prompt engineering, and multi-modal pipeline orchestration, and can lead cross-functional teams from PoC to production.

Key Responsibilities:
• Lead the technical design and architecture of a scalable application using AWS Bedrock, OpenSearch, and DynamoDB.
• Drive LLM integration (e.g., Claude, Titan, or third-party models) to enable contextual, accurate, and diverse application development.
• Develop and optimize semantic search and vector-based retrieval pipelines, supporting real-time and batch generation workflows.
• Design and maintain feedback loops for continuous model and content improvement, integrating business and user feedback.
• Collaborate closely with UX/UI teams, data engineers, and business SMEs to ensure the application is aligned with domain needs.
• Implement automated data ingestion, preprocessing, and metadata tagging from diverse content sources (PDFs, CMS, knowledge bases).
• Define and uphold AI ethics, bias mitigation, and compliance standards in line with enterprise and regulatory frameworks (e.g., the EU AI Act).
• Provide technical mentorship to AI developers and guide sprint-level deliverables using Agile practices.

Qualifications:
• Bachelor's or Master's in Computer Science, AI/ML, Data Science, or a related discipline.
• 5+ years of experience in AI/ML engineering, with 3-4 years in Generative AI or NLP-led product delivery.
• Proven experience with LLMs, Bedrock, LangChain, OpenSearch, and vector databases (e.g., Pinecone, FAISS).
• Strong hands-on skills with Python, AWS Lambda, DynamoDB, and orchestration tools (Airflow, Step Functions).
• Deep understanding of prompt engineering, RAG pipelines, embeddings, and content chunking strategies.
• Experience in security and compliance alignment (PII detection, encryption, audit trails) for AI systems.

What We Offer:
• A pioneering opportunity to lead a next-gen AI product shaping intelligent customer experience.
• A collaborative environment with cross-functional tech and domain teams.
• Access to enterprise AI tooling and LLM platforms (Anthropic, AWS, OpenAI).
• Competitive compensation, benefits, and career growth in AI leadership.

Good to Have:
• Exposure to multilingual NLP, enterprise search, or knowledge graph integration.
• Familiarity with MLOps/LLMOps practices, CI/CD for ML pipelines, and model performance monitoring.
• Experience leading enterprise GenAI use cases such as document QA, virtual assistants, or intelligent search.
• GitHub/portfolio (if any).
• A brief note on your experience with GenAI-powered systems.
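
The "content chunking strategies" listed in the qualifications can be illustrated with the simplest variant: fixed-size character windows with overlap, so that sentences straddling a boundary remain retrievable from at least one chunk. A sketch (sizes are arbitrary; production pipelines usually chunk on sentence or token boundaries instead):

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into fixed-size, overlapping character windows for a RAG index.

    Consecutive chunks share `overlap` characters, trading some index size
    for recall at chunk boundaries.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, max(len(text) - overlap, 1), step)]
```

Each chunk would then be embedded and stored (e.g., in an OpenSearch k-NN index) keyed by its source document for retrieval at FAQ-generation time.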

Posted 4 days ago

Apply

5.0 - 10.0 years

0 Lacs

Karnataka

On-site

Yahoo Finance is the world's leading finance destination, providing investors with news, information, and tools to make confident financial decisions. Trusted by over 150 million visitors globally each month, representing over $20 trillion in investable assets, Yahoo Finance delivers high-quality real-time market data across desktop, mobile, and streaming platforms. With breaking news from thousands of sources, original editorial perspectives, objective analyst ratings and research, analytical charts and technical tools, personalized mobile alerts, and more, Yahoo Finance equips investors with the knowledge and insights to achieve financial freedom and greater prosperity.

Yahoo is a top provider of media and technology brands, reaching over a billion people worldwide. Yahoo's Media Engineering organization utilizes the latest technologies to build brands that members love, including Yahoo, AOL, Engadget, TechCrunch, Autoblog, In The Know, and more. With a focus on building at massive scale to reach hundreds of millions of users, our teams strive to create world-class user experiences, delivering trusted content and data across all brands. We are committed to building and revitalizing this essential, trusted resource for investors and savers under a new leadership team.

As an experienced engineer, you will collaborate closely with Engineering, Product, and Design teams to enhance our product offerings. You will develop applications and tools essential for supporting our business operations and ensuring the quality of our data and services. This role involves architecting, designing, scoping, building, maintaining, and iterating on the systems needed to deliver world-class finance products and features.

Responsibilities:
- Be part of an agile scrum team, demonstrating progress through proof of concept, sandboxing, and prototyping
- Architect and design scalable, maintainable, secure, and reusable strategic solutions
- Deploy, monitor, and manage ML models in production environments using MLOps best practices
- Work closely with data scientists to transition models from development to production efficiently
- Optimize ML models and infrastructure for efficiency, scalability, and cost-effectiveness
- Design and implement frameworks and tools to empower developers and non-technical colleagues
- Lead key team initiatives by managing and improving the software development life cycle
- Seek opportunities to improve quality and efficiency in day-to-day workflow processes
- Present and communicate progress across multiple groups, sharing knowledge and best practices
- Perform code reviews for peers and recommend approaches to solving complex problems
- Own, deploy, monitor, and operate large-scale production systems
- Lead and mentor junior engineers in building production-grade systems and applications
- Act as a technical liaison to translate business needs into technical solutions

Requirements (must have):
- MS or PhD in Computer Science or a related major
- 5 to 10 years of industry experience as a Back End Engineer, ML Engineer, or Research Engineer
- Deep functional knowledge and hands-on experience with AWS or GCP cloud services, RESTful web services, containerization (Docker, ECS, Kubernetes), and modern AI tools
- Experience with AI/ML Ops tools and platforms, basic data science concepts, and version control tools
- Capable of implementing resilient web architecture and building web products end to end
- Familiarity with financial datasets and experience with time series analysis
- Ability to work in a hybrid model, commuting 3 days a week to an office in Bangalore

Important notes:
- All applicants must apply for Yahoo openings directly with Yahoo
- Offer letters and documents will be issued through the system for e-signatures
- Yahoo offers flexibility around employee location and hybrid working

For further inquiries about the role, please discuss with the recruiter.

Posted 5 days ago

Apply

5.0 - 9.0 years

0 Lacs

Hyderabad, Telangana

On-site

You are an experienced Python Backend Engineer with a strong background in AWS and AI/ML. Your primary responsibility will be to design, develop, and maintain Python-based backend systems and AI-powered services. You will build and manage RESTful APIs using Django or FastAPI for AI/ML model integration, and develop and deploy machine learning and GenAI models using frameworks like TensorFlow, PyTorch, or Scikit-learn. Your expertise in implementing GenAI pipelines using LangChain will be crucial, and experience with LangGraph is considered a strong advantage.

You will leverage AWS services such as EC2, Lambda, S3, SageMaker, and CloudFormation for infrastructure and deployment, and collaborate with data scientists, DevOps, and architects to integrate models and workflows into production. You will also build and manage CI/CD pipelines for backend and model deployments, ensure the performance, scalability, and security of applications in cloud environments, monitor production systems, troubleshoot issues, and optimize model and API performance.

To excel in this role, you must possess at least 5 years of hands-on experience in Python backend development, including strong experience building RESTful APIs using Django or FastAPI. Proficiency in AWS cloud services, a solid understanding of ML/AI concepts, and experience with ML libraries are prerequisites. Hands-on experience with LangChain for building GenAI applications and familiarity with DevOps tools and microservices architecture will be beneficial, as will Agile development experience and exposure to tools like Docker, Kubernetes, Git, Jenkins, Terraform, and CI/CD workflows.

Experience with LangGraph, LLMs, embeddings, and vector databases, as well as knowledge of MLOps tools and practices, are nice-to-have qualifications. In summary, as a Python Backend Engineer with expertise in AWS and AI/ML, you will play a vital role in designing, developing, and maintaining intelligent backend systems and GenAI-driven applications, scaling backend systems, and implementing AI/ML applications effectively.
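
The vector-database exposure listed as nice-to-have boils down to nearest-neighbour search over embeddings. A brute-force cosine-similarity sketch, a stand-in for what FAISS, pgvector, or an OpenSearch k-NN index does with approximate indexes at scale (the toy 2-D vectors in the usage are illustrative):

```python
import math

def top_k(query_vec: list[float], index: dict[str, list[float]], k: int = 2) -> list[str]:
    """Return the ids of the k vectors most similar to the query by cosine similarity."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0

    return sorted(index, key=lambda doc_id: cos(query_vec, index[doc_id]), reverse=True)[:k]
```

In a RAG backend, `index` would hold document-chunk embeddings and the returned ids would drive which chunks are stuffed into the LLM prompt.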

Posted 5 days ago

Apply

3.0 - 7.0 years

0 Lacs

Ahmedabad, Gujarat

On-site

As a skilled professional, your primary responsibility will be designing and implementing cutting-edge deep learning models using frameworks like PyTorch and TensorFlow to tackle specific business challenges. You will create conversational AI agents and chatbots that provide seamless, human-like interactions tailored to client needs, and develop and optimize Retrieval-Augmented Generation (RAG) models to enhance the AI's ability to retrieve and synthesize pertinent information for accurate responses. Your expertise will be leveraged in managing data lakes and data warehouses (including Snowflake) and in using Databricks for large-scale data storage and processing. A thorough understanding of Machine Learning Operations (MLOps) practices is expected, along with the ability to manage the complete lifecycle of machine learning projects, from data preprocessing to model deployment.

You will play a crucial role in conducting advanced data analysis to extract actionable insights and support data-driven strategies across the organization. Collaborating with stakeholders from various departments, you will align AI initiatives with business requirements to develop scalable solutions, and you will mentor junior data scientists and engineers, encouraging innovation, skill enhancement, and continuous learning within the team. Staying current with the latest advancements in AI and deep learning, you will experiment with new techniques to enhance model performance and drive business value. Communicating findings effectively to both technical and non-technical audiences through reports, dashboards, and visualizations is part of the role, as is using cloud platforms like AWS Bedrock to deploy and manage AI models at scale with optimal performance and reliability.

Your technical skills should include hands-on experience with PyTorch, TensorFlow, and scikit-learn for deep learning and machine learning tasks. Proficiency in Python or R programming, along with knowledge of big data technologies like Hadoop and Spark, is essential. Familiarity with MLOps, data handling tools such as pandas and dask, and cloud computing platforms like AWS is required. Skills in the LlamaIndex and LangChain frameworks, as well as data visualization tools like Tableau and Power BI, are desirable.

To qualify for this role, you should hold a Bachelor's or Master's degree in Computer Science, Data Science, Statistics, Mathematics, Engineering, or a related field. Specialization in deep learning, significant experience with PyTorch and TensorFlow, and familiarity with reinforcement learning, NLP, and generative models are expected. In addition to challenging work, you will enjoy a friendly work environment, work-life balance, company-sponsored medical insurance, a 5-day work week with flexible timings, frequent team outings, and yearly leave encashment. This exciting opportunity is based in Ahmedabad.

Posted 6 days ago

Apply

5.0 - 8.0 years

4 - 15 Lacs

Pune, Maharashtra, India

On-site

We are seeking experienced GenAI Developers to join our team in Pune. As a key member of our AI innovation group, you will be responsible for designing, architecting, and implementing scalable Generative AI solutions leveraging AWS services. You will lead the transformation of existing LLM-based architectures into modern, cloud-native solutions and collaborate with cross-functional teams to deliver cutting-edge AI applications.

Key Responsibilities: Design and architect scalable GenAI solutions using AWS Bedrock and other AWS services. Lead the transformation of homegrown LLM-based architecture into a managed cloud-native solution. Provide architectural guidance across components: RAG pipelines, LLM orchestration, vector DB integration, inference workflows, and scalable endpoints. Collaborate closely with GenAI developers, MLOps, and data engineering teams to ensure alignment on implementation. Evaluate, integrate, and benchmark frameworks such as LangChain, Autogen, Haystack, or LlamaIndex. Ensure infrastructure and solutions are secure, scalable, and cost-optimized on AWS. Act as a technical SME and hands-on contributor for architecture reviews and POCs.

Must-Have Skills: Deep expertise with AWS Bedrock, S3, Lambda, SageMaker, API Gateway, DynamoDB/Redshift, etc. Proven experience architecting LLM applications with RAG, embeddings, and prompt engineering. Hands-on understanding of frameworks like LangChain, LlamaIndex, or Autogen. Knowledge of LLMs like Anthropic Claude, Mistral, Falcon, or custom models. Strong understanding of API design, containerization (Docker), and serverless architecture. Experience leading cloud-native transformations.

Preferred: Experience with CI/CD and DevOps integration for ML/AI pipelines. Exposure to Azure/GCP GenAI offerings in addition to AWS (bonus).
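
The RAG pipelines this role architects end in a prompt-assembly step that grounds the LLM in retrieved passages before the Bedrock call. A minimal sketch (the instruction wording and the `source`/`text` chunk fields are assumptions, not a fixed API):

```python
def build_rag_prompt(question: str, retrieved: list[dict]) -> str:
    """Assemble a grounded prompt from retrieved chunks for an LLM call.

    Each chunk carries its source so the model can be told to cite passages,
    and a guardrail line asks it to refuse rather than guess when the answer
    is not in the supplied context.
    """
    context = "\n\n".join(
        f"[{i + 1}] (source: {c['source']})\n{c['text']}" for i, c in enumerate(retrieved)
    )
    return (
        "Answer the question using ONLY the numbered context passages below. "
        "Cite passage numbers, and say 'not found in context' if the answer is absent.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
```

The returned string would become the user message of a Bedrock `InvokeModel` request, with the retrieval step (vector search over OpenSearch or similar) supplying `retrieved`.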

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

The role of Data Scientist - Clinical Data Extraction & AI Integration in our healthcare technology team requires an experienced individual with 3-6 years of experience. As a Data Scientist in this role, you will be primarily focused on medical document processing and data extraction systems. You will have the opportunity to work with advanced AI technologies to create solutions that enhance the extraction of crucial information from clinical documents, thereby improving healthcare data workflows and patient care outcomes. Your key responsibilities will include designing and implementing statistical models for medical data quality assessment, and developing predictive algorithms for encounter classification and validation. You will also be responsible for building machine learning pipelines for document pattern recognition, creating data-driven insights from clinical document structures, and implementing feature engineering for medical terminology extraction. Furthermore, you will apply natural language processing (NLP) techniques to clinical text, develop statistical validation frameworks for extracted medical data, and build anomaly detection systems for medical document processing. Additionally, you will create predictive models for discharge date estimation and encounter duration, and implement clustering algorithms for provider and encounter classification. In terms of AI & LLM integration, you will be expected to integrate and optimize Large Language Models via AWS Bedrock and API services, design and refine AI prompts for clinical content extraction with high accuracy, and implement fallback logic and error handling for AI-powered extraction systems. You will also develop pattern matching algorithms for medical terminology and create validation layers for AI-extracted medical information. Expertise in the healthcare domain is crucial for this role.
You will work closely with medical document structures, implement healthcare-specific validation rules, handle medical terminology extraction, and conduct clinical context analysis. Ensuring HIPAA compliance and adhering to data security best practices will also be part of your responsibilities. Proficiency in programming languages such as Python 3.8+, R, SQL, and JSON, along with familiarity with data science tools like pandas, numpy, scipy, scikit-learn, spaCy, and NLTK, is required. Experience with ML frameworks including TensorFlow, PyTorch, and Hugging Face Transformers, and with visualization tools like matplotlib, seaborn, plotly, Tableau, and PowerBI, is desirable. Knowledge of AI platforms such as AWS Bedrock, Anthropic Claude, and the OpenAI APIs, and experience with cloud services like AWS (SageMaker, S3, Lambda, Bedrock), will be advantageous. Familiarity with research tools like Jupyter notebooks, Git, Docker, and MLflow is also beneficial for this role.
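The "pattern matching algorithms for medical terminology" plus "validation layers" described above can be sketched as a regex extraction pass followed by rule checks. Everything here (the term patterns, the dosage range) is a hypothetical illustration, not real clinical vocabulary; a production system would draw on curated terminologies such as SNOMED CT or RxNorm:

```python
import re

# Illustrative patterns only; real systems use curated clinical vocabularies.
TERM_PATTERNS = {
    "dosage": re.compile(r"\b(\d+(?:\.\d+)?)\s*(mg|mcg|ml)\b", re.IGNORECASE),
    "date": re.compile(r"\b(\d{4}-\d{2}-\d{2})\b"),
}

def extract_terms(text):
    """Run each pattern over the text and collect labelled matches."""
    found = []
    for label, pattern in TERM_PATTERNS.items():
        for match in pattern.finditer(text):
            found.append({"label": label, "value": match.group(0)})
    return found

def validate_dosage(value_mg):
    """Toy validation layer: flag dosages outside a plausible range."""
    return 0 < value_mg <= 5000

note = "Discharged 2024-03-15 after metoprolol 50 mg twice daily."
terms = extract_terms(note)
```

The same two-stage shape (extract, then validate) also applies when the extraction stage is an LLM call rather than a regex.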

Posted 1 week ago

Apply

6.0 - 11.0 years

9 - 19 Lacs

Noida

Work from Office

We are looking for a skilled Machine Learning Engineer with strong expertise in Natural Language Processing (NLP) and AWS cloud services to design, develop, and deploy scalable ML models and pipelines. You will play a key role in building innovative NLP solutions for classification, forecasting, and recommendation systems, leveraging cutting-edge technologies to drive data-driven decision-making in the US healthcare domain.

Key Responsibilities:
- Design and deploy scalable machine learning models focused on NLP tasks, classification, forecasting, and recommender systems.
- Build robust, end-to-end ML pipelines encompassing data ingestion, feature engineering, model training, validation, and production deployment.
- Apply advanced NLP techniques including sentiment analysis, named entity recognition (NER), embeddings, and document parsing to extract actionable insights from healthcare data.
- Utilize AWS services such as SageMaker, Lambda, Comprehend, and Bedrock for model training, deployment, monitoring, and optimization.
- Collaborate effectively with cross-functional teams including data scientists, software engineers, and product managers to integrate ML solutions into existing products and workflows.
- Implement MLOps best practices for model versioning, automated evaluation, CI/CD pipelines, and continuous improvement of deployed models.
- Leverage Python and ML/NLP libraries including scikit-learn, PyTorch, Hugging Face Transformers, and spaCy for daily development tasks.
- Research and explore advanced NLP/ML techniques such as Retrieval-Augmented Generation (RAG) pipelines, foundation model fine-tuning, and vector search methods for next-generation solutions.

Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field.
- 6+ years of professional experience in machine learning, with a strong focus on NLP and AWS cloud services.
- Hands-on experience in designing and deploying production-grade ML models and pipelines.
- Strong programming skills in Python and familiarity with ML/NLP frameworks like PyTorch, Hugging Face, spaCy, and scikit-learn.
- Proven experience with the AWS ML ecosystem: SageMaker, Lambda, Comprehend, Bedrock, and related services.
- Solid understanding of MLOps principles including version control, model monitoring, and automated deployment.
- Experience working in the US healthcare domain is a plus.
- Excellent problem-solving skills and ability to work collaboratively in an agile environment.

Preferred Skills:
- Familiarity with advanced NLP techniques such as RAG pipelines and foundation model tuning.
- Knowledge of vector databases and semantic search technologies.
- Experience with containerization (Docker, Kubernetes) and cloud infrastructure automation.
- Strong communication skills with the ability to translate complex technical concepts to non-technical stakeholders.
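At its core, the embeddings and vector-search work mentioned above reduces to nearest-neighbour search over dense vectors. A minimal sketch using toy 3-dimensional vectors in place of real model embeddings; the corpus and query vectors are made up for illustration:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy 3-d vectors standing in for real sentence embeddings.
corpus = {
    "claim denied": [0.9, 0.1, 0.0],
    "claim approved": [0.8, 0.2, 0.1],
    "patient discharged": [0.0, 0.1, 0.9],
}

def search(query_vec, k=1):
    """Return the k corpus documents most similar to the query vector."""
    ranked = sorted(corpus.items(),
                    key=lambda kv: cosine_similarity(query_vec, kv[1]),
                    reverse=True)
    return [doc for doc, _ in ranked[:k]]
```

In production this brute-force scan is replaced by an approximate nearest-neighbour index (FAISS, Pinecone, etc.), but the ranking criterion is the same.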

Posted 1 week ago

Apply

8.0 - 12.0 years

0 Lacs

chennai, tamil nadu

On-site

Wipro Limited is a leading technology services and consulting company dedicated to creating innovative solutions that cater to clients' most intricate digital transformation requirements. With a comprehensive range of consulting, design, engineering, and operational capabilities, Wipro assists clients in achieving their most ambitious goals and establishing sustainable businesses that are prepared for the future. Their global workforce of over 230,000 employees and business partners spread across 65 countries ensures that they fulfill their commitment to helping customers, colleagues, and communities thrive amidst the ever-evolving landscape.

In this role, you will need to possess the following core qualifications:
- A minimum of 12 years of experience in software/data architecture with practical exposure.
- Proficiency in Agentic AI & AWS Bedrock is a must, showcasing hands-on expertise in designing, deploying, and managing Agentic AI solutions utilizing AWS Bedrock and AWS Bedrock Agents.
- Extensive knowledge of cloud-native architectures on AWS encompassing compute, storage, networking, and security aspects.
- Demonstrated ability in defining technology stacks involving microservices, event streaming, and contemporary data platforms such as Snowflake and Databricks.
- Proficiency in Continuous Integration/Continuous Deployment (CI/CD) and Infrastructure as Code (IaC) utilizing tools like Azure DevOps and Terraform.
- Strong understanding of data modeling, API design (REST/GraphQL), and integration patterns (ETL/ELT, CDC, messaging).
- Excellent communication skills and adept at managing stakeholders, capable of translating intricate technical concepts into tangible business value.

Additionally, the following qualifications are preferred for this role:
- Experience in the media or broadcasting industry.
- Familiarity with Salesforce or other enterprise iPaaS solutions.
- Relevant certifications such as AWS/Azure/GCP Architect, Salesforce Integration Architect, and TOGAF.

Moreover, proficiency in Generative AI is a mandatory skill for this position. The ideal candidate should have 8-10 years of relevant experience. Join Wipro in their journey to reinvent the digital landscape. As they strive to build a modern organization, they are seeking individuals who are driven by the spirit of reinvention - be it in their personal growth, career trajectory, or skill development. At Wipro, you will be part of a dynamic environment that encourages constant evolution and innovation, adapting to the changing world around us. Embrace this opportunity to contribute to a purpose-driven business and shape your own reinvention journey. Realize your ambitions at Wipro, where applications from individuals with disabilities are warmly welcomed.

Posted 1 week ago

Apply

3.0 - 6.0 years

5 - 9 Lacs

Ahmedabad, Vadodara

Work from Office

We are hiring an experienced AI Engineer / ML Specialist with deep expertise in Large Language Models (LLMs), who can fine-tune, customize, and integrate state-of-the-art models like OpenAI GPT, Claude, LLaMA, Mistral, and Gemini into real-world business applications. The ideal candidate should have hands-on experience with foundation model customization, prompt engineering, retrieval-augmented generation (RAG), and deployment of AI assistants using public cloud AI platforms like Azure OpenAI, Amazon Bedrock, Google Vertex AI, or Anthropic's Claude.

Key Responsibilities:

LLM Customization & Fine-Tuning
- Fine-tune popular open-source LLMs (e.g., LLaMA, Mistral, Falcon, Mixtral) using business/domain-specific data.
- Customize foundation models via instruction tuning, parameter-efficient fine-tuning (LoRA, QLoRA, PEFT), or prompt tuning.
- Evaluate and optimize the performance, factual accuracy, and tone of LLM responses.

AI Assistant Development
- Build and integrate AI assistants/chatbots for internal tools or customer-facing applications.
- Design and implement Retrieval-Augmented Generation (RAG) pipelines using tools like LangChain, LlamaIndex, Haystack, or the OpenAI Assistants API.
- Use embedding models, vector databases (e.g., Pinecone, FAISS, Weaviate, ChromaDB), and cloud AI services.
- Must have experience fine-tuning and maintaining microservices or LLM-driven databases.

Cloud Integration
- Deploy and manage LLM-based solutions on AWS Bedrock, Azure OpenAI, Google Vertex AI, Anthropic Claude, or the OpenAI API.
- Optimize API usage, performance, latency, and cost.
- Secure integrations with identity/auth systems (OAuth2, API keys) and logging/monitoring.

Evaluation, Guardrails & Compliance
- Implement guardrails, content moderation, and RLHF techniques to ensure safe and useful outputs.
- Benchmark models using human evaluation and standard metrics (e.g., BLEU, ROUGE, perplexity).
- Ensure compliance with privacy, IP, and data governance requirements.

Collaboration & Documentation
- Work closely with product, engineering, and data teams to scope and build AI-based solutions.
- Document custom model behaviors, API usage patterns, prompts, and datasets.
- Stay up-to-date with the latest LLM research and tooling advancements.

Required Skills & Qualifications:
- Bachelor's or Master's in Computer Science, AI/ML, Data Science, or related fields.
- 3-6+ years of experience in AI/ML, with a focus on LLMs, NLP, and GenAI systems.
- Strong Python programming skills and experience with Hugging Face Transformers, LangChain, and LlamaIndex.
- Hands-on with LLM APIs from OpenAI, Azure, AWS Bedrock, Google Vertex AI, Claude, Cohere, etc.
- Knowledge of PEFT techniques like LoRA, QLoRA, Prompt Tuning, and Adapters.
- Familiarity with vector databases and document embedding pipelines.
- Experience deploying LLM-based apps using FastAPI, Flask, Docker, and cloud services.

Preferred Skills:
- Experience with open-source LLMs: Mistral, LLaMA, GPT-J, Falcon, Vicuna, etc.
- Knowledge of AutoGPT, CrewAI, agentic workflows, or multi-agent LLM orchestration.
- Experience with multi-turn conversation modeling and dialogue state tracking.
- Understanding of model quantization, distillation, or fine-tuning in low-resource environments.
- Familiarity with ethical AI practices, hallucination mitigation, and user alignment.

Tools & Technologies:
- LLM Frameworks: Hugging Face Transformers, PEFT, LangChain, LlamaIndex, Haystack
- LLMs & APIs: OpenAI (GPT-4, GPT-3.5), Claude, Mistral, LLaMA, Cohere, Gemini, Azure OpenAI
- Vector Databases: FAISS, Pinecone, Weaviate, ChromaDB
- Serving & DevOps: Docker, FastAPI, Flask, GitHub Actions, Kubernetes
- Deployment Platforms: AWS Bedrock, Azure ML, GCP Vertex AI, Lambda, Streamlit
- Monitoring: Prometheus, MLflow, Langfuse, Weights & Biases
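The RAG pipelines referenced throughout this listing start by chunking source documents before embedding them. A minimal sketch of overlapping character-window chunking; the sizes are illustrative and in practice are tuned per embedding model and document type:

```python
def chunk_text(text, chunk_size=200, overlap=50):
    """Split text into overlapping character windows for embedding.

    chunk_size/overlap values here are illustrative; real pipelines often
    chunk by tokens or sentences instead of raw characters.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks

doc = "".join(chr(97 + i % 26) for i in range(500))  # toy 500-char document
chunks = chunk_text(doc, chunk_size=200, overlap=50)
```

The overlap means the tail of each chunk repeats at the head of the next, so facts that straddle a boundary still land intact in at least one chunk.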

Posted 1 week ago

Apply

8.0 - 12.0 years

0 Lacs

karnataka

On-site

As a Site Reliability Engineering (SRE) Technical Leader on the Network Assurance Data Platform (NADP) team at Cisco ThousandEyes, you will be responsible for ensuring the reliability, scalability, and security of the cloud and big data platforms. Your role will involve representing the NADP SRE team, contributing to the technical roadmap, and collaborating with cross-functional teams to design, build, and maintain SaaS systems operating at multi-region scale. Your efforts will be crucial in supporting machine learning (ML) and AI initiatives by ensuring the platform infrastructure is robust, efficient, and aligned with operational excellence. You will be tasked with designing, building, and optimizing cloud and data infrastructure to guarantee high availability, reliability, and scalability of big-data and ML/AI systems. This will involve implementing SRE principles such as monitoring, alerting, error budgets, and fault analysis. Additionally, you will collaborate with various teams to create secure and scalable solutions, troubleshoot technical problems, lead the architectural vision, and shape the technical strategy and roadmap. Your role will also encompass mentoring and guiding teams, fostering a culture of engineering and operational excellence, engaging with customers and stakeholders to understand use cases and feedback, and utilizing your strong programming skills to integrate software and systems engineering. Furthermore, you will develop strategic roadmaps, processes, plans, and infrastructure to efficiently deploy new software components at an enterprise scale while enforcing engineering best practices. To be successful in this role, you should have relevant experience (8-12 yrs) and a bachelor's engineering degree in computer science or its equivalent. 
You should possess the ability to design and implement scalable solutions, hands-on experience in Cloud (preferably AWS), Infrastructure as Code skills, experience with observability tools, proficiency in programming languages such as Python or Go, and a good understanding of Unix/Linux systems and client-server protocols. Experience in building Cloud, Big Data, and/or ML/AI infrastructure is essential, along with a sense of ownership and accountability in architecting software and infrastructure at scale. Additional qualifications that would be advantageous include experience with the Hadoop ecosystem, certifications in cloud and security domains, and experience in building/managing a cloud-based data platform. Cisco encourages individuals from diverse backgrounds to apply, as the company values perspectives and skills that emerge from employees with varied experiences. Cisco believes in unlocking potential and creating diverse teams that are better equipped to solve problems, innovate, and make a positive impact.
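The error budgets mentioned above are simple arithmetic over the SLO: a 99.9% availability target over 30 days leaves roughly 43 minutes of permissible downtime. A minimal sketch of that calculation:

```python
def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Minutes of allowed unavailability for a given SLO over a window."""
    total_minutes = window_days * 24 * 60
    return total_minutes * (1 - slo)

def budget_remaining(slo, window_days, downtime_minutes):
    """Fraction of the error budget still unspent (negative = budget blown)."""
    budget = error_budget_minutes(slo, window_days)
    return (budget - downtime_minutes) / budget
```

Teams typically gate risky releases on `budget_remaining` staying positive: once the budget is spent, the release cadence slows until reliability recovers.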

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

noida, uttar pradesh

On-site

You should hold a Bachelor's in Engineering or a Master's degree (BE, ME, B.Tech, M.Tech, MCA, MS) with strong communication and reasoning abilities. You must have over 5 years of hands-on technical experience using AWS serverless resources like ECS, Lambda, RDS, API Gateway, S3, CloudFront, and ALB. Additionally, you should possess over 8 years of experience independently developing modules in Python, web development, JavaScript/TypeScript, and containers. Experience in the design and development of web-based applications using Node.js is necessary. Familiarity with modern JavaScript frameworks (Vue.js, Angular, React), UI testing tools (Puppeteer, Playwright, Selenium), CI/CD setup, and managing code and deployments towards faster releases is required. Experience with RDBMS and NoSQL databases, particularly MySQL or PostgreSQL, is essential. It would be advantageous if you have experience in Terraform or similar tools, familiarity with AWS Bedrock, expertise with OCR engines and solutions like AWS Textract and Google Cloud Vision, and an interest in adopting Data Science methodologies and AI/ML technologies to optimize project outcomes. In this role, you will play a pivotal role in driving the success of development projects and achieving business objectives through innovative agile software development practices. You will provide technical guidance to the development team for productionizing Proof of Concepts, drive and execute productionizing activities, identify opportunities for reuse, and efficiently resolve complex technical issues. You will participate in technical design discussions, understand the impact of architecture and hosting strategies on design, apply best practices in software development, prioritize security and performance in all implementations, and conduct detailed code reviews for intricate solutions.
The team you will be joining is composed of driven professionals committed to leveraging technology to make a tangible impact in the field of patent services. You will thrive in a multi-region, cross-cultural environment, collaborating on cutting-edge technologies with a strong emphasis on a user-centric approach. At Clarivate, we are committed to providing equal employment opportunities for all individuals with respect to hiring, compensation, promotion, training, and other terms of employment. We adhere to applicable laws and regulations governing non-discrimination in all locations.
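The serverless stack described above (Lambda behind API Gateway) commonly uses the proxy-integration event shape, which keeps handlers unit-testable without any AWS access. A minimal sketch; the `/health` route and response contract are assumptions for illustration, not a production API:

```python
import json

def handler(event, context=None):
    """Minimal API Gateway (proxy integration) Lambda handler.

    Reads the request path from the event and returns the statusCode/body
    shape that API Gateway expects back from Lambda.
    """
    path = event.get("path", "/")
    if path == "/health":
        body = {"status": "ok"}
        code = 200
    else:
        body = {"error": f"no route for {path}"}
        code = 404
    return {
        "statusCode": code,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(body),
    }
```

Because the handler is a plain function of `event`, it can be exercised directly in tests by passing a dict, with no deployed infrastructure involved.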

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

pune, maharashtra

On-site

The company Flentas specializes in assisting enterprises in maximizing the benefits of the Cloud through their consulting and implementation practice. With a team of experienced Solution Architects and Technology Enthusiasts, Flentas is dedicated to driving significant digital transformation projects and expanding cloud operations for clients worldwide. As a Generative AI Specialist at Flentas, based in Pune, you will be responsible for implementing and optimizing cutting-edge AI solutions. Your role will involve utilizing LangChain, LangGraph, and agentic frameworks, along with hands-on experience in Python and cloud-based AI services such as AWS Bedrock and SageMaker. Additionally, experience in fine-tuning models will be advantageous as you collaborate with the team to develop, deploy, and enhance AI-driven applications.

Key Responsibilities:
- Designing, developing, and implementing AI solutions using LangChain, LangGraph, and agentic frameworks.
- Creating and maintaining scalable AI models utilizing AWS Bedrock and SageMaker.
- Building and deploying AI-driven agents capable of autonomous decision-making and task execution.
- Optimizing machine learning pipelines and workflows for production environments.
- Collaborating with cross-functional teams to understand project requirements and deliver innovative AI solutions.
- Conducting fine-tuning and customization of generative AI models (preferred skill).
- Monitoring, evaluating, and improving the performance of AI systems.
- Staying updated on the latest advancements in generative AI and related technologies.

Required Skills:
- Proficiency in LangChain, LangGraph, and agentic frameworks.
- Strong programming skills in Python.
- Experience with AWS Bedrock and SageMaker for AI model deployment.
- Knowledge of AI and ML workflows, including model optimization and scalability.
- Understanding of APIs, data integration, and AI-driven task automation.

Preferred Skills (Good to Have):
- Experience with fine-tuning generative AI models.
- Familiarity with cloud-based architecture and services beyond AWS (e.g., GCP, Azure).
- Knowledge of advanced NLP and transformer-based architectures.

Qualifications:
- Bachelor's or Master's degree in Computer Science, AI, Machine Learning, or a related field.
- 3-5 years of hands-on experience in AI development, focusing on generative models.
- Demonstrated experience in deploying AI solutions in a cloud environment.
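The autonomous, tool-using agents described above follow a common loop: a planner decides on a tool call, the result is observed, and the loop repeats until a final answer is produced. A framework-agnostic sketch with a stub planner standing in for an LLM (in practice LangChain or LangGraph would fill that role, and the tool set would be real integrations):

```python
# Toy tool registry; real agents would expose APIs, search, databases, etc.
TOOLS = {
    "add": lambda a, b: a + b,
    "upper": lambda s: s.upper(),
}

def fake_model(task, observations):
    """Stub planner: decides the next tool call or finishes.

    A real agent would prompt an LLM with the task and observations and
    parse its structured response into this same decision shape.
    """
    if not observations:
        return {"tool": "add", "args": (2, 3)}
    return {"final": f"result={observations[-1]}"}

def run_agent(task, model, max_steps=5):
    """Plan -> act -> observe loop with a step cap to prevent runaways."""
    observations = []
    for _ in range(max_steps):
        decision = model(task, observations)
        if "final" in decision:
            return decision["final"]
        result = TOOLS[decision["tool"]](*decision["args"])
        observations.append(result)
    return "max steps reached"
```

The `max_steps` cap is the simplest guardrail against an agent looping indefinitely; production frameworks add richer termination and error-handling policies.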

Posted 1 week ago

Apply

8.0 - 12.0 years

0 Lacs

haryana

On-site

We are looking for a versatile Technical Architect with specialized knowledge in Large Language models, Generative AI, as well as Full-Stack Cloud Native Application Development and Cloud Engineering to be a part of our innovative product development team. As a Technical Architect, your main responsibility will be to contribute significantly to the development of strategic platforms, cutting-edge products, and creative solutions for our clients. This will involve leveraging the capabilities of LLMs, Generative AI, and Agentic AI in conjunction with Cloud Solutions and Cloud Native Full Stack Application Development. Your key focus areas will include defining technical architecture and solution design for custom product development to deliver impactful and innovative solutions that enrich our product range. The ideal candidate will possess a robust architecture background, a drive to explore the frontiers of technology, and a willingness to acquire new skills and technologies rapidly. Moreover, the ideal candidate will blend product development and cloud architecture expertise with contemporary Generative AI and LLM Solution Architecture and Design proficiencies. Key Skills Required: - Proficiency in Full Stack MERN App Development, encompassing Front-End and Back-End Development, API Development, and Micro Services. - Experience in Azure Cloud Native App Development and Cloud Solutions is essential. - Familiarity with AWS Cloud Native App Development and AWS Cloud Solutions is desirable. - Expertise in LLM Solutions and LLM App Development. - Knowledge of AI Agents, Agentic AI Workflows, and Enterprise Integrations. - Competency in Generative AI, Multi Modal AI, and Creative AI, including working with Text, Imagery, Speech, Audio, and Video AI. Responsibilities: - Establishing Cloud Architecture for cloud solutions and cloud-native app development. - Defining Technical Architecture for full-stack application development and bespoke product development. 
- Leading the translation of business and product requirements into technical architectures that balance architectural quality, scalability, cost-effectiveness, maintainability, and suitability for the intended purpose. - Collaborating with senior technology and engineering stakeholders to shape, define, refine, validate, review, and optimize proposed architectures. - Working with solution architects, development leads, full-stack developers, engineers, and quality engineers to develop and integrate solutions into existing enterprise products. - Collaborating with cross-functional teams to develop and validate client requirements and quickly translate them into functional solutions. - Utilizing best practices in architecture to shape and validate proposed solutions. - Guiding solution design for scalable AI-enabled products, cloud-native apps, and cloud solutions. - Overseeing the technical delivery of applications involving Cloud Platforms, Cloud Native Apps, and Cloud AI Services. - Leading the solution tech design and promoting the adoption of appropriate design patterns and best practices across all layers of the application stack. - Designing enterprise products and full-stack applications on the MERN stack with a clear separation of concerns across layers. - Creating web apps and solutions that leverage LLM models and Generative AI workflows. - Leveraging Multi Modal AI capabilities to support all content types and modalities such as text, imagery, audio, speech, and video. - Continuously researching and exploring emerging trends and techniques in Generative AI and LLMs to remain at the forefront of innovation. - Leading product development and delivery within demanding timelines. - Engaging hands-on with code and guiding hands-on development Proof of Concepts (PoCs) to validate new ideas, shape new solutions, and provide direction to the development team. 
Skills and Competencies: Must-Have Capabilities: - Expertise in architecting and designing full-stack applications using the MERN stack (JavaScript) for both client-side and server-side applications. - Strong knowledge of architecting and designing full-stack applications using Python-based development, including Python Apps for LLM integration. - Proficiency in programming languages and key design patterns for both front-end and back-end development. - Previous experience leading solution design across front-end and back-end development. - Experience in solution design across Data Processing, Data Integration, API integration, Enterprise Systems Integration, and Workflow Design. - Experience in designing scaled Gen-AI solutions in Production, including the use of LLMs and Multi Modal AI. JavaScript / MERN Stack Competencies: - Minimum 8 years of hands-on experience in designing and building Full-Stack MERN apps using client-side and server-side JavaScript. - Strong solution design experience across Front-End Web Apps and Web Development, especially apps built on React.js. - Strong solution design experience with API Definition and Design, Micro Services Architecture best practices, and microservices solution design. - Hands-on experience with Node.js and related server-side JavaScript frameworks (e.g., Express.js, Nest.js). - Experience in deploying full-stack MERN apps to a cloud platform like Azure. - Working experience with Server-side JavaScript Frameworks for building Domain-driven Micro Services, including Nest.js and Express.js. - Experience with BFF frameworks such as GraphQL. - Experience with API Management and API Gateways. Gen-AI / LLM Apps with Python Competencies: - Minimum 5 years of hands-on experience in designing and building Python-based apps and solutions. - Minimum 2+ years of hands-on experience in designing scaled Gen-AI solutions with LLMs and LLM models. 
- Strong experience in designing custom data solutions in Python, including data processing pipelines, data workflows, and integrating customer and enterprise datasets to enhance LLM knowledge. - Solid experience in designing end-to-end RAG pipelines using advanced pre-processing, chunking, embedding, and ranking techniques to enhance LLM output quality. - Solid experience in designing LLM Pipelines that integrate structured and unstructured datasets from various sources. - Good experience in designing AI-enabled workflows that automate content, creativity, and business processes. - Hands-on experience in Conversational AI solutions and chat-driven experiences. - Hands-on experience with multiple LLMs and models such as GPT-4o, o1, o3-mini, Google Gemini, Claude Sonnet, etc. - Expertise in Cloud Gen-AI platforms, services, and APIs, primarily Azure OpenAI, and preferably AWS Bedrock and/or GCP Vertex AI. - Experience in designing solutions utilizing AI Assistants and threaded messages to orchestrate with LLMs. - Experience working with AI Agents and Agent Services. - Experience in building Agentic AI workflows for iterative improvement of outputs. - Experience in evaluating and enhancing AI outputs.
- Experience in working with container apps and containerized environments. - Working experience with Web Components and Portable UI components. Python / ML LLM / Gen-AI App Development: - Hands-on experience with Single-Agent and Multi-Agent Orchestration solutions and frameworks. - Experience with different Agent communication and chaining patterns. - Ability to utilize LLMs for Reasoning and Planning workflows that enable automated orchestration across multiple applications and tools. - Ability to use Graph Databases and Knowledge Graphs as an alternative or replacement for Vector Databases to enable more relevant semantic querying and outputs via LLM models. - Solid background in Machine Learning solutions. - Fundamental understanding of Transformer Models. - Some experience in custom ML model development and deployment. - Proficiency in deep learning frameworks like PyTorch or Keras. - Experience with Cloud ML Platforms such as Azure ML Service, AWS SageMaker, and NVIDIA AI Foundry. Location: DGS India - Pune - Kharadi EON Free Zone Brand: Dentsu Creative Time Type: Full time Contract Type: Permanent

Posted 2 weeks ago

Apply

4.0 - 8.0 years

0 Lacs

karnataka

On-site

You are a highly skilled Software Engineer / Senior Software Engineer with 3-8 years of hands-on experience in full-stack development. You possess strong expertise in Python, AWS services (Lambda, S3, API Gateway), SQL Server, and a working knowledge of Angular. Any experience with AWS Bedrock and AI/ML integrations will be considered a significant advantage. Your role requires you to be a self-motivated professional who excels in designing scalable solutions, working collaboratively across teams, and staying current with evolving technologies. In this role, you will be responsible for various tasks including full-stack development, cloud & DevOps, database management, system design & architecture, team collaboration & mentorship, and continuous learning & innovation. You will design, develop, and maintain scalable applications using Python and modern web frameworks. Additionally, you will build and consume RESTful APIs integrated with AWS services such as Lambda, S3, and API Gateway. Collaborating closely with frontend and backend teams for seamless system integration is a crucial part of your responsibilities. Your expertise in cloud & DevOps will be essential as you deploy, manage, and monitor cloud-based applications on AWS. Leveraging services such as EC2, RDS, and CloudFormation for infrastructure automation, and implementing CI/CD pipelines using Jenkins, AWS CodePipeline, or similar tools will be part of your daily tasks. You will also ensure high availability and system reliability through performance monitoring and optimization. Database management will be another key aspect of your responsibilities where you will design and optimize database schemas and stored procedures in SQL Server. Enhancing the performance of large datasets and ensuring data integrity will be crucial. You will partner with data teams to support reporting and analytics initiatives.
As part of system design & architecture, you will be expected to architect robust, secure, and scalable software solutions. Documenting design decisions and leading implementation efforts across teams will be integral. Evaluating new tools and technologies to improve system design and efficiency will also be part of your role. Team collaboration & mentorship are essential components where you will work cross-functionally with product managers, QA engineers, and fellow developers. Participating in code reviews and mentoring junior engineers to promote a collaborative and high-performance engineering culture will be key responsibilities. Continuous learning & innovation are encouraged, and you are expected to stay informed about the latest technology trends and best practices. Identifying opportunities to integrate AI and machine learning solutions, including AWS Bedrock, and contributing innovative ideas to enhance product capabilities and user experience will be part of your role. To be considered for this position, you should have a Bachelor's degree in Computer Science, Engineering, or a related field, along with 3-8 years of professional experience in software development. Proficiency in Python with hands-on experience in modern web frameworks, strong expertise in AWS (Lambda, S3, API Gateway), and a working knowledge of Angular for front-end development are required. Advanced SQL Server skills, including query tuning and stored procedures, experience with DevOps tools and CI/CD pipelines, as well as excellent problem-solving, communication, and collaboration skills are necessary. Preferred qualifications include experience with Docker, Kubernetes, or other containerization technologies, familiarity with AI/ML tools and services, especially AWS Bedrock, exposure to multi-cloud environments (Azure, GCP), and AWS Certifications (Developer, Solutions Architect, etc.).

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

delhi

On-site

We are looking for a highly motivated and enthusiastic Senior Data Scientist with 5-8 years of experience to join our dynamic team. The ideal candidate should have a strong background in AI/ML analytics and a passion for utilizing data to drive business insights and innovation. Your main responsibilities will include developing and implementing machine learning models and algorithms, collaborating with project stakeholders to understand requirements and deliverables, analyzing and interpreting complex data sets using statistical and machine learning techniques, staying updated with the latest advancements in AI/ML technologies, and supporting various AI/ML initiatives by working with cross-functional teams. To qualify for this role, you should have a Bachelor's degree in Computer Science, Data Science, or a related field, along with a strong understanding of machine learning, deep learning, and Generative AI concepts. Preferred skills for this position include experience in machine learning techniques such as regression, classification, predictive modeling, clustering, and the deep learning stack using Python. Additionally, expertise in cloud infrastructure for AI/ML on AWS (SageMaker, QuickSight, Athena, Glue), building secure data ingestion pipelines for unstructured data, proficiency in Python, TypeScript, NodeJS, ReactJS, data visualization tools, deep learning frameworks, version control systems, and Generative AI/LLM-based development is desired. Good-to-have skills include knowledge and experience in building knowledge graphs in production and an understanding of multi-agent systems and their applications in complex problem-solving scenarios. Pentair is an Equal Opportunity Employer, valuing cross-cultural insight and competence for ongoing success, with a belief that a diverse workforce enhances perspectives and ideas for continuous improvement.
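The clustering techniques listed above can be illustrated with the simplest possible case: 1-D k-means on toy data. This is a teaching sketch only; production work would reach for scikit-learn's `KMeans` rather than hand-rolling the loop:

```python
def kmeans_1d(points, centers, iters=10):
    """Tiny 1-D k-means: assign each point to its nearest center, then
    move each center to the mean of its assigned points."""
    for _ in range(iters):
        clusters = {c: [] for c in range(len(centers))}
        for p in points:
            nearest = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        centers = [sum(v) / len(v) if v else centers[i]
                   for i, v in clusters.items()]
    return centers

data = [1.0, 1.2, 0.8, 9.0, 9.5, 8.5]  # two obvious groups near 1 and 9
final_centers = sorted(kmeans_1d(data, centers=[0.0, 10.0]))
```

The same assign-then-update iteration generalizes to higher dimensions by swapping the absolute difference for a Euclidean distance.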

Posted 2 weeks ago

Apply

3.0 - 8.0 years

0 Lacs

delhi

On-site

As a Solution Architect/BDM specializing in Hyperscalers at NTT DATA, your role is crucial in assessing client needs, recommending appropriate cloud AI technologies, and collaborating with delivery teams to create end-to-end solutions. You are expected to demonstrate deep expertise in cloud-based AI services and Large Language Models (LLMs) offered by major cloud providers, such as AWS Bedrock, Azure OpenAI Service, and Google Vertex AI. Your responsibilities include translating client business requirements into detailed technical specifications, sizing cloud infrastructure requirements, and optimizing cost models for AI workloads.

In terms of business development, you will be responsible for sizing and qualifying opportunities in the Cloud AI space, developing proposals and solution presentations, nurturing client relationships, and collaborating with sales teams on competitive go-to-market strategies. You will also play a key role in project and delivery leadership by working with delivery teams to develop solution approaches, leading technical discovery sessions, and ensuring technical solutions meet client requirements. Your role will further involve AI agent development: architecting multi-agent systems that leverage cloud platform capabilities, developing frameworks for agent orchestration, and designing cloud-native agent solutions that integrate with existing enterprise systems.

Your knowledge, skills, and attributes should include a strong understanding of cloud infrastructure sizing, optimization, and cost management for AI workloads, as well as the ability to convert business requirements into technical specifications. Basic qualifications for this position include 8+ years of experience in solution architecture or technical consulting roles, 3+ years of specialized experience working with LLMs and private AI solutions, and a bachelor's degree in Computer Science, AI, or a related field. Preferred qualifications include a master's degree or PhD in Computer Science or a related field, cloud certifications from AWS, Azure, or GCP, and experience with autonomous agent development using cloud-based AI services.

This position is based in Delhi or Bangalore and offers a hybrid working environment. NTT DATA, a trusted global innovator of business and technology services, is committed to helping clients innovate, optimize, and transform for long-term success. With a diverse team of experts in more than 50 countries, NTT DATA invests significantly in R&D to support organizations and society in embracing the digital future. As an Equal Opportunity Employer, NTT DATA values diversity and inclusion, making it a place where you can grow, belong, and thrive.

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

delhi

On-site

We are looking for a highly motivated and enthusiastic Senior Data Scientist with 5-8 years of experience to join our dynamic team. The ideal candidate will have a strong background in AI/ML analytics and a passion for leveraging data to drive business insights and innovation.

As a Senior Data Scientist, your key responsibilities will include developing and implementing machine learning models and algorithms. You will work closely with project stakeholders to understand requirements and translate them into deliverables, and utilize statistical and machine learning techniques to analyze and interpret complex data sets. It is essential to stay updated with the latest advancements in AI/ML technologies and methodologies and to collaborate with cross-functional teams to support various AI/ML initiatives.

To qualify for this role, you should have a Bachelor's degree in Computer Science, Data Science, or a related field, and a strong understanding of machine learning, deep learning, and Generative AI concepts. Preferred skills include experience with machine learning techniques such as regression, classification, predictive modeling, clustering, and the deep learning stack in Python. Experience with cloud infrastructure for AI/ML on AWS (SageMaker, QuickSight, Athena, Glue) is highly desirable, and expertise in building enterprise-grade, secure data ingestion pipelines for unstructured data (ETL/ELT) is a plus. Proficiency in Python, TypeScript, NodeJS, and ReactJS; libraries such as pandas, NumPy, scikit-learn, OpenCV, and SciPy; AWS Glue crawlers and ETL; and data visualization tools like Matplotlib, Seaborn, and QuickSight is beneficial. Additionally, knowledge of deep learning frameworks such as TensorFlow, Keras, and PyTorch, experience with version control systems like Git and CodeCommit, and strong knowledge and experience in Generative AI/LLM-based development are essential for this role.

Experience working with key LLM APIs (e.g., AWS Bedrock, Azure OpenAI) and LLM frameworks (e.g., LangChain, LlamaIndex), as well as proficiency in effective text chunking techniques and text embeddings, are also preferred skills. Good-to-have skills include experience building knowledge graphs in production and an understanding of multi-agent systems and their applications in complex problem-solving scenarios. Pentair is an Equal Opportunity Employer that values diversity and believes that a diverse workforce contributes different perspectives and creative ideas, enabling continuous improvement.
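The text-chunking skill this listing asks for can be sketched in plain Python. This is a minimal sketch, assuming a simple word-window strategy; the function name and default sizes are illustrative, and in a real pipeline each chunk would then be sent to an embeddings API (e.g. a Bedrock or OpenAI embedding model) and stored in a vector database.

```python
def chunk_words(text, max_words=100, overlap=20):
    """Split text into overlapping word-based chunks for embedding.

    The overlap keeps sentences that straddle a chunk boundary visible
    in both neighboring chunks, which tends to help retrieval quality
    in RAG-style applications.
    """
    if overlap >= max_words:
        raise ValueError("overlap must be smaller than max_words")
    words = text.split()
    chunks, step = [], max_words - overlap
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + max_words]))
        if start + max_words >= len(words):
            break
    return chunks

# Each chunk would then be embedded (e.g. via an AWS Bedrock or OpenAI
# embeddings model) and indexed in a vector store for retrieval.
```

Frameworks like LangChain ship ready-made splitters (recursive, token-based, etc.); the sketch above just shows the underlying idea.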

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

pune, maharashtra

On-site

As a GenAI Developer at Vipracube Tech Solutions, you will be responsible for developing and optimizing AI models, implementing AI algorithms, collaborating with cross-functional teams, conducting research on emerging AI technologies, and deploying AI solutions. This full-time role requires 5 to 6 years of experience and is based in Pune, with the flexibility of some work from home.

Your key responsibilities will include fine-tuning large language models tailored to marketing and operational use cases and building Generative AI solutions on platforms such as OpenAI (GPT, DALL·E, Whisper) and agentic AI platforms such as LangGraph and AWS Bedrock. You will also build robust pipelines using Python, NumPy, and Pandas, apply traditional ML techniques, handle CI/CD and MLOps, use AWS cloud services, collaborate using tools like Cursor, and communicate effectively with stakeholders and clients.

To excel in this role, you should have 5+ years of relevant AI/ML development experience, a strong portfolio of AI projects in marketing or operations domains, and a proven ability to work independently and meet deadlines. Join our dynamic team and contribute to creating smart, efficient, and future-ready digital products for businesses and startups.
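The "robust pipelines" responsibility above can be sketched as simple function composition, a common pattern for chaining preprocessing steps before they feed a model. The `pipeline` helper and the cleaning steps are hypothetical examples, not code from the posting:

```python
from functools import reduce

def pipeline(*steps):
    """Compose processing steps into one callable, applied left to right.

    Each step takes the output of the previous one; with no steps, the
    input is returned unchanged.
    """
    return lambda record: reduce(lambda acc, step: step(acc), steps, record)

# Hypothetical text-preprocessing stages for an ML pipeline.
normalize = pipeline(str.strip, str.lower, lambda s: " ".join(s.split()))
```

In practice the same shape scales up: steps become Pandas transformations or feature encoders, and the composed pipeline is what gets versioned and deployed through CI/CD.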

Posted 2 weeks ago

Apply

8.0 - 12.0 years

0 Lacs

karnataka

On-site

As a Site Reliability Engineering (SRE) Technical Leader on the Network Assurance Data Platform (NADP) team at ThousandEyes, you will be responsible for ensuring the reliability, scalability, and security of cloud and big data platforms. Your role will involve representing the NADP SRE team, working in a dynamic environment, and providing technical leadership in defining and executing the team's technical roadmap. Collaborating with cross-functional teams, including software development, product management, customers, and security teams, is essential. Your contributions will directly impact the success of machine learning (ML) and AI initiatives by ensuring a robust and efficient platform infrastructure aligned with operational excellence.

In this role, you will design, build, and optimize cloud and data infrastructure to ensure high availability, reliability, and scalability of big-data and ML/AI systems. Collaboration with cross-functional teams will be crucial in creating secure, scalable solutions that support ML/AI workloads and enhance operational efficiency through automation. Troubleshooting complex technical problems, conducting root cause analyses, and contributing to continuous improvement efforts are key responsibilities. You will lead the architectural vision, shape the team's technical strategy and roadmap, and act as a mentor and technical leader to foster a culture of engineering and operational excellence. Engaging with customers and stakeholders to understand use cases and feedback, translating them into actionable insights, and effectively influencing stakeholders at all levels are essential aspects of the role. Strong programming skills are needed to integrate software and systems engineering, building core data platform capabilities and automation to meet enterprise customer needs. You will also develop strategic roadmaps, processes, plans, and infrastructure to efficiently deploy new software components at enterprise scale while enforcing engineering best practices.

Qualifications include 8-12 years of relevant experience and a bachelor's degree in computer science or its equivalent. Candidates should be able to design and implement scalable solutions with a focus on streamlining operations. Strong hands-on cloud experience, preferably AWS, is required, along with Infrastructure as Code skills, ideally with Terraform and EKS or Kubernetes. Proficiency with observability tools such as Prometheus, Grafana, Thanos, CloudWatch, OpenTelemetry, and the ELK stack is necessary, as is the ability to write high-quality code in Python, Go, or an equivalent language and a good understanding of Unix/Linux systems, system libraries, file systems, and client-server protocols. Experience building Cloud, big data, and/or ML/AI infrastructure, architecting software and infrastructure at scale, and certifications in cloud and security domains are beneficial.

Cisco emphasizes diversity and encourages candidates to apply even if they do not meet every single qualification. Diverse perspectives and skills are valued, and Cisco believes that diverse teams are better equipped to solve problems, innovate, and create a positive impact.

Posted 2 weeks ago

Apply

2.0 - 6.0 years

0 Lacs

karnataka

On-site

As an AI Engineer, you will play a crucial role in our innovative startup, which is dedicated to using Artificial Intelligence to create transformative solutions. You will be responsible for designing, developing, and implementing AI applications, with a particular focus on Large Language Models (LLMs) and agentic frameworks. Collaborating closely with cross-functional teams, you will help build robust AI solutions that leverage various models and inference points.

Your key responsibilities will include designing and developing applications that use LLMs and agentic frameworks to address real-world problems, setting up inference points to optimize AI model performance, and integrating AI capabilities into existing products and services in collaboration with software engineers and data scientists. You will use technologies like Autogen, LangChain, and AWS Bedrock to build and deploy AI solutions, and contribute to the development of generative AI applications that enhance user experiences and drive innovation. It is essential to stay updated with the latest advancements in AI and machine learning to continuously enhance our technology stack.

To qualify for this role, you should have 2-3 years of experience in AI engineering or a related field, with an emphasis on LLMs and generative AI. Proficiency in agentic frameworks and designing applications around them is necessary, along with expertise in tools and technologies such as Autogen, LangChain, and AWS Bedrock. Strong programming skills in languages commonly used in AI development, such as Python and Java, are required, as is experience with multiple AI models and their inference setups. Excellent problem-solving skills and the ability to work collaboratively in a fast-paced environment are also highly valued.

If you are passionate about pushing the boundaries of technology and creating impactful AI solutions, this opportunity with our hiring partner is ideal for you. Join us at our startup, where we are committed to leveraging AI to drive innovation and provide unparalleled growth opportunities for talented individuals.
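The agentic applications described in this listing generally follow one loop: the model proposes either a tool call or a final answer, the runtime executes the tool, and the result is fed back into the context. A framework-free sketch of that loop, using a scripted stand-in for the model rather than a real Autogen/LangChain/Bedrock call (all names here are hypothetical):

```python
def run_agent(model, tools, question, max_steps=5):
    """Minimal agent loop.

    `model` maps the conversation history to either ("tool", name, argument)
    or ("final", answer); `tools` maps tool names to callables. Both are
    stand-ins for a real LLM endpoint and tool registry.
    """
    history = [("user", question)]
    for _ in range(max_steps):
        kind, *rest = model(history)
        if kind == "final":
            return rest[0]
        name, argument = rest
        # Execute the requested tool and append its result to the context.
        history.append(("tool", name, tools[name](argument)))
    return None  # step budget exhausted without a final answer
```

Frameworks such as Autogen and LangChain add prompt templating, structured tool schemas, and retries around this same core; the `max_steps` guard is the usual protection against a model that never emits a final answer.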

Posted 2 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies