3.0 - 8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Summary

Job Description: AWS Data Engineer

About the Role
We are looking for a highly technical and experienced AWS Data Engineer to join our team. The successful candidate will be responsible for designing, developing, and deploying machine learning models to solve complex business problems by leveraging large datasets on the AWS platform. This role spans the entire ML lifecycle, from data collection and preprocessing to model training, evaluation, and deployment using AWS AI services. The goal is to create efficient self-learning applications capable of evolving over time. If you are passionate about data engineering and machine learning, possess strong programming skills, and have a deep understanding of statistical methods and various ML algorithms, we want to hear from you.

Responsibilities
- Design, develop, and deploy machine learning models on AWS to address complex business challenges.
- Work across the ML lifecycle, including data collection, preprocessing, model training, evaluation, and deployment using services such as Amazon SageMaker, AWS Glue, and Amazon S3.
- Leverage large datasets to derive insights and create data-driven solutions using AWS analytics tools.
- Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions on AWS.
- Optimize and maintain data pipelines and systems on AWS to ensure efficient data processing and storage.
- Implement and monitor model performance, making adjustments to improve accuracy and efficiency using AWS monitoring tools.
- Keep up to date with the latest advancements in AWS AI and machine learning technologies.
- Document processes and models to ensure transparency and reproducibility.

Preferred Qualifications
- Bachelor's or Master's degree in Computer Science, Data Science, Statistics, or a related field.
- 3 to 8 years of proven experience as a Data Engineer or in a similar role, with a strong focus on machine learning and AWS.
- Proficiency in programming languages such as Python, with experience using the AWS SDKs and APIs.
- Deep understanding of statistical methods and various machine learning algorithms.
- Experience with AWS AI and ML services, such as Amazon SageMaker, AWS Glue, and AWS Lambda.
- Strong analytical and problem-solving skills.
- Excellent communication and collaboration abilities.
- Knowledge of big data technologies and tools such as Hadoop, Spark, or Kafka is a plus.
- Familiarity with AWS services like Amazon EC2, Amazon RDS, and Amazon Redshift is an advantage.
- Ability to work independently and manage multiple projects simultaneously.
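The train-then-evaluate loop at the heart of the ML lifecycle this posting describes can be sketched without any of the AWS services named above. The toy gradient-descent regressor below is purely illustrative (in practice this step would run as a SageMaker training job); the dataset and function names are invented for the example.

```python
# Dependency-free sketch of the "model training, evaluation" steps in the
# ML lifecycle: fit y = w*x + b by gradient descent, then score on held-out
# data. A stand-in for a real SageMaker training job, not production code.

def train(data, lr=0.05, epochs=500):
    """Fit y = w*x + b by gradient descent on mean squared error."""
    w, b = 0.0, 0.0
    n = len(data)
    for _ in range(epochs):
        grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in data) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def evaluate(model, data):
    """Mean squared error of the fitted model on held-out data."""
    w, b = model
    return sum((w * x + b - y) ** 2 for x, y in data) / len(data)

# Toy dataset generated from y = 2x + 1 (no noise), split train/test.
train_set = [(x / 10, 2 * (x / 10) + 1) for x in range(20)]
test_set = [(x / 10, 2 * (x / 10) + 1) for x in range(20, 25)]

model = train(train_set)
mse = evaluate(model, test_set)
```

The same separation (fit on one split, score on another) is what the "model evaluation" bullet above refers to, whatever framework is used.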
Posted 3 weeks ago
3.0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
Job Title: AI/ML Engineer (Python + AWS + REST APIs)
Department: Web
Location: Indore
Job Type: Full-time
Experience: 3-5 years
Notice Period: 0-15 days (immediate joiners preferred)
Work Arrangement: On-site (Work from Office)

Overview:
Advantal Technologies is seeking a passionate AI/ML Engineer to join our team in building the core AI-driven functionality of an intelligent visual data encryption system. The role involves designing, training, and deploying AI models (e.g., CLIP, DCGANs, Decision Trees), integrating them into a secure backend, and operationalizing the solution via AWS cloud services and Python-based APIs.

Key Responsibilities:

AI/ML Development
- Design and train deep learning models for image classification and sensitivity tagging using CLIP, DCGANs, and Decision Trees.
- Build synthetic datasets using DCGANs for class balancing.
- Fine-tune pre-trained models for customized encryption logic.
- Implement explainable classification logic for model outputs.
- Validate model performance using custom metrics and datasets.

API Development
- Design and develop Python RESTful APIs using FastAPI or Flask for image upload and classification, model inference endpoints, and encryption trigger calls.
- Integrate APIs with AWS Lambda and Amazon API Gateway.

AWS Integration
- Deploy and manage AI models on Amazon SageMaker for training and real-time inference.
- Use AWS Lambda for serverless backend compute.
- Store encrypted image data on Amazon S3 and metadata on Amazon RDS (PostgreSQL).
- Use AWS Cognito for secure user authentication and KMS for key management.
- Monitor job status via CloudWatch and enable secure, scalable API access.

Required Skills & Experience:

Must-Have
- 3–5 years of experience in AI/ML (especially vision-based systems).
- Strong experience with PyTorch or TensorFlow for model development.
- Proficient in Python, with experience building RESTful APIs.
- Hands-on experience with Amazon SageMaker, Lambda, API Gateway, and S3.
- Knowledge of OpenSSL/PyCryptodome or basic cryptographic concepts.
- Understanding of model deployment, serialization, and performance tuning.

Nice-to-Have
- Experience with CLIP model fine-tuning.
- Familiarity with Docker, GitHub Actions, or CI/CD pipelines.
- Experience in data classification under compliance regimes (e.g., GDPR, HIPAA).
- Familiarity with multi-tenant SaaS design patterns.

Tools & Technologies: Python, PyTorch, TensorFlow; FastAPI, Flask; AWS: SageMaker, Lambda, S3, RDS, Cognito, API Gateway, KMS; Git, Docker, Postgres, OpenCV, OpenSSL

If interested, please share your resume at kratika.vijaywargiya@advantal.net
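The classify-then-encrypt flow this posting describes can be sketched end to end in a few lines. Everything below is invented for illustration: the helper names (`classify_sensitivity`, `handle_upload`) are hypothetical, and the XOR stream cipher is a deliberately insecure stand-in for the AES/KMS encryption the posting actually names (PyCryptodome or KMS data keys in production).

```python
# Toy sketch of the classify-then-encrypt pipeline. The XOR cipher below is
# NOT secure; it only stands in for real AES-GCM / KMS encryption.
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Derive a deterministic keystream by chained SHA-256 (toy only)."""
    out, block = b"", key
    while len(out) < length:
        block = hashlib.sha256(block).digest()
        out += block
    return out[:length]

def toy_encrypt(data: bytes, key: bytes) -> bytes:
    # XOR with the keystream; applying it twice decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

def classify_sensitivity(image_meta: dict) -> str:
    """Stand-in for the CLIP / Decision-Tree classifier: flag by a tag."""
    return "sensitive" if "face" in image_meta.get("tags", []) else "public"

def handle_upload(image_bytes: bytes, image_meta: dict, key: bytes):
    """Encrypt only when the classifier tags the image as sensitive."""
    label = classify_sensitivity(image_meta)
    payload = toy_encrypt(image_bytes, key) if label == "sensitive" else image_bytes
    return label, payload

label, payload = handle_upload(b"raw-pixels", {"tags": ["face"]}, b"k")
```

In the real system the classifier output would gate a call to KMS-backed encryption before the object lands in S3; the gating logic itself is the part the sketch shows.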
Posted 3 weeks ago
0 years
0 Lacs
Greater Kolkata Area
On-site
Job Summary
We are seeking a forward-thinking AI Architect to design, lead, and scale enterprise-grade AI systems and solutions across domains. This role demands deep expertise in machine learning, generative AI, data engineering, cloud-native architecture, and orchestration frameworks. You will collaborate with cross-functional teams to translate business requirements into intelligent, production-ready AI solutions.

Key Responsibilities

Architecture & Strategy:
- Design end-to-end AI architectures that include data pipelines, model development, MLOps, and inference serving.
- Create scalable, reusable, and modular AI components for different use cases (vision, NLP, time series, etc.).
- Drive architecture decisions across AI solutions, including multi-modal models, LLMs, and agentic workflows.
- Ensure interoperability of AI systems across cloud (AWS/GCP/Azure), edge, and hybrid environments.

Technical Leadership:
- Guide teams in selecting appropriate models (traditional ML, deep learning, transformers, etc.) and technologies.
- Lead architectural reviews and ensure compliance with security, performance, and governance policies.
- Mentor engineering and data science teams in best practices for AI/ML, GenAI, and MLOps.

Model Lifecycle & Engineering:
- Oversee implementation of the model lifecycle using CI/CD for ML (MLOps) and/or LLMOps workflows.
- Define architecture for Retrieval-Augmented Generation (RAG), vector databases, embeddings, prompt engineering, etc.
- Design pipelines for fine-tuning, evaluation, monitoring, and retraining of models.

Data & Infrastructure:
- Collaborate with data engineers to ensure data quality, feature pipelines, and scalable data stores.
- Architect systems for synthetic data generation, augmentation, and real-time streaming inputs.
- Define solutions leveraging data lakes, data warehouses, and graph databases.

Client Engagement / Product Integration:
- Interface with business/product stakeholders to align AI strategy with KPIs.
- Collaborate with DevOps teams to integrate models into products via APIs/microservices.

Required Skills & Experience

Core Skills:
- Strong foundation in AI/ML/DL (Scikit-learn, TensorFlow, PyTorch, Transformers, LangChain, etc.)
- Advanced knowledge of generative AI (LLMs, diffusion models, multimodal models, etc.)
- Proficiency in cloud-native architectures (AWS/GCP/Azure) and containerization (Docker, Kubernetes)
- Experience with orchestration frameworks (Airflow, Ray, LangGraph, or similar)
- Familiarity with vector databases (Weaviate, Pinecone, FAISS), LLMOps platforms, and RAG design

Architecture & Programming:
- Solid experience with architectural patterns (microservices, event-driven, serverless)
- Proficient in Python and optionally Java/Go
- Knowledge of APIs (REST, GraphQL), streaming (Kafka), and observability tooling (Prometheus, ELK, Grafana)

Tools & Platforms:
- ML lifecycle tools: MLflow, Kubeflow, Vertex AI, SageMaker, Hugging Face, etc.
- Prompt orchestration tools: LangChain, CrewAI, Semantic Kernel, DSPy (nice to have)
- Knowledge of security, privacy, and compliance (GDPR, SOC 2, HIPAA, etc.)

(ref:hirist.tech)
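The retrieval half of the RAG design mentioned above reduces to one idea: embed the query, score it against embedded documents, and keep the top k. As a dependency-free sketch, a bag-of-words cosine stands in for learned embeddings, and a plain list stands in for the vector database (Weaviate, Pinecone, FAISS); the sample documents are invented.

```python
# Minimal sketch of RAG retrieval: rank documents by similarity to a query.
# Bag-of-words cosine substitutes for real embeddings + a vector DB.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: term-frequency vector over whitespace tokens."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list, k: int = 1) -> list:
    """Rank documents by similarity to the query and keep the top k."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "invoice processing pipeline runbook",
    "vector database sizing guide for embeddings",
    "holiday rota for the support team",
]
top = retrieve("how do I size a vector database", docs)
```

The retrieved chunks would then be concatenated into the prompt before generation; that augmentation step is where the "prompt engineering" bullet above comes in.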
Posted 3 weeks ago
16.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities:

Business Knowledge:
- Capable of understanding the requirements for the entire project (not just your own features)
- Works closely with PMG during the design phase to drill down into the detailed nuances of the requirements
- Has the ability and confidence to question the motivation behind certain requirements and work with PMG to refine them

Design:
- Designs and implements machine learning models and algorithms
- Articulates and evaluates the pros and cons of different AI/ML approaches
- Generates cost estimates for model training and deployment

Coding/Testing:
- Builds and optimizes machine learning pipelines
- Knows and brings in external ML frameworks and libraries
- Consistently avoids common pitfalls in model development and deployment

Quality:
- Solves cross-functional problems using data-driven approaches
- Identifies impacts and side effects of models outside the immediate scope of work
- Identifies cross-module issues related to data integration and model performance
- Identifies problems predictively using data analysis

Productivity:
- Capable of working on multiple AI/ML projects simultaneously and context switching between them

Process:
- Enforces process standards for model development and deployment

Independence:
- Acts independently to determine methods and procedures on new or special assignments
- Prioritizes large tasks and projects effectively

Agility:
- Release Planning: works with the PO on high-level release commitment and estimation; works with the PO on defining stories of appropriate size for model development
- Agile Maturity: able to drive the team to achieve a high level of accomplishment on the committed stories for each iteration; shows Agile leadership qualities and leads by example

Team Work:
- Capable of working with development teams and identifying the right division of technical responsibility based on skill sets
- Capable of working with external teams (e.g., Support, PO) that have significantly different technical skill sets, and of managing the discussions based on their needs

Initiative:
- Creates innovative AI/ML solutions, which may include changes to requirements to arrive at a better solution
- Thinks outside the box to view the system as it should be, rather than only as it is
- Proactively generates a continual stream of ideas and pushes to review and advance the ones that make sense
- Takes the initiative to learn how AI/ML technology is evolving outside the organization
- Takes the initiative to learn how the system can be improved for the customer
- Turns problems into openings for innovation

Communication:
- Communicates complex AI/ML concepts internally with ease

Accountability:
- Well versed in all areas of the AI/ML stack (data preprocessing, model training, evaluation, deployment, etc.) and aware of all components in play

Leadership:
- Disagrees without being disagreeable
- Uses conflict as a way to drill deeper and arrive at better decisions
- Mentors frequently
- Builds ad-hoc cross-department teams for specific projects or problems
- Can achieve broad "buy-in" across project teams and across departments
- Takes calculated risks

Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment).
The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications
- B.E/B.Tech/MCA/MSc/MTech (minimum 16 years of formal education; correspondence courses are not relevant)
- 8+ years of experience working on multiple layers of technology
- Experience deploying and maintaining ML models in production
- Experience in Agile teams
- Working experience with, or good knowledge of, cloud platforms (e.g., Azure, AWS, OCI)
- Experience with one or more data-oriented workflow orchestration frameworks (Airflow, KubeFlow, etc.)
- Experience designing, implementing, and maintaining CI/CD pipelines for MLOps and DevOps functions
- Familiarity with traditional software monitoring, scaling, and quality management systems (QMS)
- Knowledge of model versioning and deployment using tools like MLflow, DVC, or similar platforms
- Familiarity with data versioning tools (Delta Lake, DVC, LakeFS, etc.)
- Demonstrated hands-on knowledge of open-source adoption and use cases
- Good understanding of data/information security
- Proficient in data structures, ML algorithms, and the ML lifecycle

Product/Project/Program-Related Tech Stack:
- Machine learning frameworks: Scikit-learn, TensorFlow, PyTorch
- Programming languages: Python, R, Java
- Data processing: Pandas, NumPy, Spark
- Visualization: Matplotlib, Seaborn, Plotly
- Model versioning tools: MLflow, etc.
- Cloud services: Azure ML, AWS SageMaker, Google Cloud AI
- GenAI: OpenAI, LangChain, RAG, etc.

- Demonstrates good knowledge of engineering practices
- Demonstrates excellent problem-solving skills
- Proven excellent verbal, written, and interpersonal communication skills

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life.
Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
Posted 3 weeks ago
5.0 - 9.0 years
0 Lacs
Maharashtra
On-site
Data Scientist (5+ Years of Experience)

We are seeking a highly motivated Data Scientist with over 5 years of hands-on experience in data mining, statistical analysis, and developing high-quality machine learning models. The ideal candidate will have a passion for solving real-world problems using data-driven approaches and strong technical expertise across various data science domains.

Key Responsibilities:
- Apply advanced data mining techniques and statistical analysis to extract actionable insights.
- Design, develop, and deploy robust machine learning models to address complex business challenges.
- Conduct A/B and multivariate experiments to evaluate model performance and optimize outcomes.
- Monitor, analyze, and enhance the performance of machine learning models post-deployment.
- Collaborate cross-functionally to build customer cohorts for CRM campaigns and conduct market basket analysis.
- Stay updated with state-of-the-art techniques in NLP, particularly within the e-commerce domain.

Required Skills & Qualifications:
- Programming & Tools: Proficient in Python, PySpark, and SQL for data manipulation and analysis.
- Machine Learning & AI: Strong experience with ML libraries (e.g., Scikit-learn, TensorFlow, PyTorch) and expertise in NLP, computer vision, recommender systems, and optimization techniques.
- Cloud & Big Data: Hands-on experience with AWS services, including Glue, EKS, S3, SageMaker, and Redshift.
- Model Deployment: Experience deploying pre-trained models from platforms like Hugging Face and AWS Bedrock.
- DevOps & MLOps: Understanding of Git, Docker, CI/CD pipelines, and deploying models with frameworks such as FastAPI.
- Advanced NLP: Experience building, retraining, and optimizing NLP models for diverse use cases.

Preferred Qualifications:
- Strong research mindset with a keen interest in exploring new data science methodologies.
- Background in e-commerce analytics is a plus.
If you're passionate about leveraging data to drive impactful business decisions and thrive in a dynamic environment, we'd love to hear from you!
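The A/B experimentation duty in the posting above usually comes down to a two-proportion z-test on conversion counts. The sketch below uses only the standard library (in practice scipy or statsmodels would be used); the conversion numbers are made up for the example.

```python
# Two-proportion z-test for an A/B experiment, standard library only.
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """z statistic and two-sided p-value for conversion rates of A vs B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal CDF via erf.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical experiment: variant B converts 230/2000 vs A's 180/2000.
z, p = two_proportion_ztest(180, 2000, 230, 2000)
```

A positive z with p below the chosen significance level is the signal that variant B genuinely outperforms A rather than fluctuating by chance.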
Posted 3 weeks ago
0 years
0 Lacs
Budaun Sadar, Uttar Pradesh, India
On-site
MinutestoSeconds is a dynamic organization specializing in outsourcing services, digital marketing, IT recruitment, and custom IT projects. We partner with SMEs, mid-sized companies, and niche professionals to deliver tailored solutions. We would love the opportunity to work with you!

Requirements

About the Role:
We are looking for a highly motivated and innovative AI/ML Engineer to join our growing team. You will play a key role in designing, developing, and deploying machine learning models and AI-driven solutions that solve real-world business problems. This is a hands-on role requiring a deep understanding of ML algorithms, data preprocessing, model optimization, and scalable deployment.

Key Responsibilities:
- Design and implement scalable ML solutions for classification, regression, clustering, and recommendation use cases
- Collaborate with data scientists, engineers, and product teams to translate business requirements into ML use cases
- Preprocess large datasets using Python, SQL, and modern ETL tools
- Train, validate, and optimize machine learning and deep learning models
- Deploy models using MLOps best practices (CI/CD, model monitoring, versioning)
- Continuously improve model performance and integrate feedback loops
- Research and experiment with the latest AI/ML trends, including GenAI, LLMs, and transformers
- Document models and solutions for reproducibility and compliance

Required Skills:
- Strong proficiency in Python, with hands-on experience in NumPy, Pandas, Scikit-learn, TensorFlow, PyTorch, etc.
- Solid understanding of supervised and unsupervised learning, NLP, and time-series forecasting
- Experience with cloud platforms such as AWS, GCP, or Azure (preferred: SageMaker, Vertex AI, or Azure ML Studio)
- Familiarity with Docker, Kubernetes, and MLOps practices
- Proficient in writing efficient, production-grade code
- Excellent problem-solving and critical-thinking skills

Good to Have:
- Experience with LLMs, generative AI, or OpenAI APIs
- Exposure to big data frameworks like Spark or Hadoop
- Knowledge of feature stores and data versioning tools (like DVC or MLflow)
- Published work, research papers, or contributions to open-source ML projects
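The "versioning" item in the MLOps bullet above rests on a simple idea that tools like MLflow and DVC implement properly: derive a version id from the content of the artifact, so identical weights always map to the same version. A stdlib-only sketch, with hypothetical model and registry names:

```python
# Content-addressed model versioning sketch: hash canonically serialized
# weights + metadata into a stable short version id (MLflow/DVC do this
# robustly in practice; names below are invented for illustration).
import hashlib
import json

def model_version(params: dict, metadata: dict) -> str:
    """Stable short id derived from the serialized artifact content."""
    blob = json.dumps({"params": params, "meta": metadata}, sort_keys=True)
    return hashlib.sha256(blob.encode()).hexdigest()[:12]

registry = {}

def register(name: str, params: dict, metadata: dict) -> str:
    """Record the artifact under (name, version) and return the version."""
    vid = model_version(params, metadata)
    registry[(name, vid)] = {"params": params, "meta": metadata}
    return vid

v1 = register("churn-model", {"w": [0.1, 0.2]}, {"framework": "toy"})
v2 = register("churn-model", {"w": [0.1, 0.2]}, {"framework": "toy"})
```

Because the id is a pure function of the content (`sort_keys=True` makes the serialization canonical), re-registering identical weights is idempotent, which is what makes rollbacks and audit trails trustworthy.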
Posted 3 weeks ago
3.0 years
0 Lacs
Navi Mumbai, Maharashtra, India
Remote
Job Overview
PitchBook’s Product Owner works collaboratively with key Product stakeholders and teams to deliver the department’s product roadmap. This role takes an active part in aligning our engineering activities with Product objectives across new product capabilities as well as data and scaling improvements to our core technologies, with a focus on AI/ML data extraction, collection, and enrichment capabilities.

Team Overview
The Data Technology team within PitchBook’s Product organization develops solutions to support and accelerate our data operations processes. This domain impacts core workflows of data capture, ingestion, and hygiene across our core private and public capital markets datasets. This role works on our AI/ML Collections Data Extraction & Enrichment teams, closely integrated with Engineering and Product Management to ensure we are delivering against our Product Roadmap. These teams provide backend AI/ML services that power PitchBook’s data collection activities and related internal content management systems.
Outline of Duties and Responsibilities
- Be a domain expert for your product area(s) and understand user workflows and needs
- Actively define backlog priority for your team(s) in collaboration with Product and Engineering
- Manage delivery of features according to the Product Roadmap
- Validate the priority and impact of incoming requirements from Product Management and Engineering
- Break down prioritized requirements into well-structured backlog items for the engineering team to complete
- Create user stories and acceptance criteria that indicate successful implementation of requirements
- Communicate requirements, acceptance criteria, and technical details to stakeholders across multiple PitchBook departments
- Define, create, and manage metrics that represent team performance
- Manage, track, and mitigate risks or blockers to feature delivery
- Support execution of AI/ML collections work, related but not limited to AI/ML data extraction, collection, and enrichment services
- Support PitchBook’s values and vision
- Participate in various company initiatives and projects as requested

Experience, Skills and Qualifications
- Bachelor's degree in Information Systems, Engineering, Data Science, Business Administration, or a related field
- 3+ years of experience as a Product Manager or Product Owner within AI/ML or enterprise SaaS domains
- A proven record of shipping high-impact data pipeline or data collection-related tools and services
- Familiarity with AI/ML workflows, especially within model development, data pipelines, or classification systems
- Experience collaborating with globally distributed product, engineering, and operations teams across time zones
- Excellent communication skills to drive clarity and alignment between business stakeholders and technical teams
- Bias for action and a willingness to roll up your sleeves and do what is necessary to meet team goals
- Experience translating user-centric requirements and specifications into user stories and tasks
- Superior attention to detail, including the ability to manage multiple projects simultaneously
- Strong verbal and written communication skills, including strong audience awareness
- Experience with shared SDLC and workspace tools like JIRA, Confluence, and data reporting platforms

Preferred Qualifications
- Direct experience with applied AI/ML engineering services
- Strong understanding of supervised and unsupervised ML models, including their training data needs and lifecycle impacts
- Background in fintech supporting content collation, management, and engineering implementation
- Experience with data quality measurements, annotation systems, knowledge graphs, and ML model evaluation
- Exposure to cloud-based ML infrastructure and data pipeline orchestration tools such as AWS SageMaker, GCP Vertex AI, Airflow, and dbt
- Certifications related to Agile Product Ownership / Product Management, such as CSPO, PSPO, or POPM, are a plus

Working Conditions
The job conditions for this position are in a standard office setting. Employees in this position use a PC and phone on an ongoing basis throughout the day. This role collaborates with Seattle- and New York-based stakeholders, and typical overlap is between 6:30 and 8:30 AM Pacific. Limited corporate travel may be required to remote offices or other business meetings and events. Morningstar’s hybrid work environment gives you the opportunity to work remotely and collaborate in person each week. We’ve found that we’re at our best when we’re purposely together on a regular basis, at least three days each week. A range of other benefits are also available to enhance flexibility as needs change. No matter where you are, you’ll have tools and resources to engage meaningfully with your global colleagues.

037_PitchBookDataInc
PitchBook Data, Inc Legal Entity
Posted 3 weeks ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
Job Title: AI/ML Architect / Senior AI/ML Engineer (8+ Years Experience)
Location: [Onsite/Remote/Hybrid – Customize as per your need]
Employment Type: Full-time

🔍 About the Role:
We are seeking a seasoned AI/ML Architect / Senior Engineer with 8+ years of hands-on experience in artificial intelligence, machine learning, and data science. The ideal candidate will have worked across various industries (e.g., healthcare, finance, retail, manufacturing) and demonstrated a deep understanding of the end-to-end ML lifecycle, from data ingestion to model deployment and monitoring. You’ll play a strategic and technical leadership role in designing and scaling intelligent systems while staying ahead of evolving market trends in AI, ML, and GenAI.

🎯 Key Responsibilities:
- Architect, design, and implement scalable AI/ML solutions across multiple domains.
- Translate business problems into technical solutions using data-driven methodologies.
- Lead model development, deployment, and operationalization using MLOps best practices.
- Evaluate and incorporate emerging trends such as generative AI (e.g., LLMs), AutoML, federated learning, and responsible AI.
- Mentor and guide junior engineers and data scientists.
- Collaborate with product managers, data engineers, and stakeholders for end-to-end delivery.
- Establish best practices in experimentation, model validation, reproducibility, and monitoring.
- Work with the modern data stack and cloud ecosystems (AWS, Azure, GCP).

🧠 Required Skills and Experience:
- 8+ years of experience in AI/ML, data science, or related roles.
- Proficient in Python, R, SQL, and key libraries (TensorFlow, PyTorch, Scikit-learn, XGBoost, etc.).
- Strong experience with MLOps tools (MLflow, Kubeflow, SageMaker, Vertex AI, etc.).
- Expertise in developing, tuning, and deploying ML/DL models in production environments.
- Experience in NLP, computer vision, time-series forecasting, and/or GenAI.
- Familiarity with model explainability (SHAP, LIME), fairness, and bias mitigation techniques.
- Solid knowledge of cloud-based architectures (Azure, AWS, or GCP).
- Experience across domains such as fintech, healthcare, e-commerce, logistics, or manufacturing.

🌐 Preferred Qualifications:
- Master's or Ph.D. in Computer Science, Data Science, Statistics, or a related field.
- Experience integrating AI with business applications (e.g., ERP, CRM, RPA platforms).
- Knowledge of containerization (Docker, Kubernetes) and CI/CD pipelines.
- Familiarity with data governance, privacy-preserving AI, and compliance standards (GDPR, HIPAA).

🌟 Why Join Us?
- Work with cross-functional, forward-thinking teams on impactful projects.
- Opportunity to lead initiatives in cutting-edge AI and Industry 4.0 innovations.
- Flexible work culture with continuous learning and growth opportunities.
- Access to the latest tools, cloud infrastructure, and high-compute environments.
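The explainability tools named in the posting above (SHAP, LIME) generalize a simpler idea worth knowing cold: permutation importance, where you scramble one feature and measure how much the model's error grows. A dependency-free sketch with a known linear model (all data and names invented; a deterministic reversal stands in for a random shuffle so the example is reproducible):

```python
# Permutation-importance sketch: error increase after permuting one feature.
# Here the model is known exactly (y = 3*x0 + 0.1*x1), so x0 should dominate.

def mse(model, X, y):
    """Mean squared error of model predictions against targets."""
    return sum((model(row) - t) ** 2 for row, t in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature_idx):
    """Error increase after permuting one feature column (a deterministic
    reversal stands in for a random shuffle to keep the sketch reproducible)."""
    col = [row[feature_idx] for row in X][::-1]
    X_perm = [row[:feature_idx] + [v] + row[feature_idx + 1:]
              for row, v in zip(X, col)]
    return mse(model, X_perm, y) - mse(model, X, y)

# Synthetic data from y = 3*x0 + 0.1*x1; the model matches the generator.
X = [[float(i % 7), float(i % 5)] for i in range(50)]
y = [3 * a + 0.1 * b for a, b in X]
model = lambda row: 3 * row[0] + 0.1 * row[1]

imp0 = permutation_importance(model, X, y, 0)  # heavy-weight feature
imp1 = permutation_importance(model, X, y, 1)  # light-weight feature
```

The heavily weighted feature produces the larger error jump, which is the ranking SHAP and LIME produce by more principled (and local) means.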
Posted 3 weeks ago
13.0 - 17.0 years
0 Lacs
Pune, Maharashtra
On-site
You are an experienced professional with over 13 years of experience in engaging with clients and translating their business needs into technical solutions. You have a proven track record of working with cloud services on platforms like AWS, Azure, or GCP. Your expertise lies in utilizing AWS data services such as Redshift, Glue, Athena, and SageMaker. Additionally, you have a strong background in generative AI frameworks like GANs and VAEs and possess advanced skills in Python, including libraries like Pandas, NumPy, Scikit-learn, and TensorFlow. Your role involves designing and implementing advanced AI solutions, focusing on areas like NLP and innovative ML algorithms. You are proficient in developing and deploying NLP models and have experience in enhancing machine learning algorithms. Your knowledge extends to MLOps principles, best practices, and the development and maintenance of CI/CD pipelines. Your problem-solving skills enable you to analyze complex data sets and derive actionable insights. Moreover, your excellent communication skills allow you to effectively convey technical concepts to non-technical stakeholders. In this role, you will be responsible for understanding clients' business use cases and technical requirements, translating them into technical designs that elegantly meet their needs. You will be instrumental in mapping decisions with requirements, identifying optimal solutions, and setting guidelines for NFR considerations during project implementation. Your tasks will include writing and reviewing design documents, reviewing architecture and design aspects, and ensuring adherence to best practices. To excel in this position, you should hold a bachelor's or master's degree in Computer Science, Information Technology, or a related field. Additionally, relevant certifications in AI, cloud technologies, or related areas would be advantageous.
Your ability to innovate, design, and implement cutting-edge solutions will be crucial in this role, as will your skill in technology integration and problem resolution through systematic analysis. Conducting POCs to validate suggested designs and technologies will also be part of your responsibilities.
Posted 3 weeks ago
8.0 - 11.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
JOB DESCRIPTION

Roles & Responsibilities
Here are some of the key responsibilities of a Sr Generative AI Engineer:
- Research and Development: Conduct original research on generative AI models, focusing on model architecture, training methodologies, fine-tuning techniques, and evaluation strategies. Maintain a strong publication record in top-tier conferences and journals, showcasing contributions to the fields of Natural Language Processing (NLP), Deep Learning (DL), and Machine Learning (ML).
- Multimodal Model Development: Design and experiment with multimodal generative models that integrate various data types, including text, images, and other modalities, to enhance AI capabilities.
- Agentic AI Systems: Develop and design autonomous AI systems that exhibit agentic behavior, capable of making independent decisions and adapting to dynamic environments.
- Model Development and Implementation: Lead the design, development, and implementation of generative AI models and systems, ensuring a deep understanding of the problem domain. Select suitable models, train them on large datasets, fine-tune hyperparameters, and optimize overall performance.
- Algorithm Optimization: Optimize generative AI algorithms to enhance their efficiency, scalability, and computational performance through techniques such as parallelization, distributed computing, and hardware acceleration, maximizing the capabilities of modern computing architectures.
- Data Preprocessing and Feature Engineering: Manage large datasets by performing data preprocessing and feature engineering to extract critical information for generative AI models. This includes tasks such as data cleaning, normalization, dimensionality reduction, and feature selection.
- Model Evaluation and Validation: Evaluate the performance of generative AI models using relevant metrics and validation techniques. Conduct experiments, analyze results, and iteratively refine models to meet desired performance benchmarks.
- Technical Leadership: Provide technical leadership and mentorship to junior team members, guiding their development in generative AI through work reviews, skill-building, and knowledge sharing.
- Documentation and Reporting: Document research findings, model architectures, methodologies, and experimental results thoroughly. Prepare technical reports, presentations, and whitepapers to effectively communicate insights and findings to stakeholders.
- Continuous Learning and Innovation: Stay abreast of the latest advancements in generative AI by reading research papers, attending conferences, and engaging with relevant communities. Foster a culture of learning and innovation within the team to drive continuous improvement.

Mandatory Technical & Functional Skills
- Strong programming skills in Python and frameworks like PyTorch or TensorFlow.
- In-depth knowledge of deep learning (CNNs, RNNs, LSTMs, Transformers), LLMs (BERT, GPT, etc.), and NLP algorithms, plus familiarity with frameworks like LangGraph, CrewAI, or AutoGen to develop, deploy, and evaluate AI agents.
- Ability to test and deploy open-source LLMs from Hugging Face, Meta (LLaMA 3.1), BLOOM, Mistral AI, etc.
- Ensure scalability and efficiency, handle data tasks, stay current with AI trends, and contribute to model documentation for internal and external audiences.
- Cloud computing experience, particularly with the Google or Azure cloud platforms, is essential.
A strong foundation in understanding the data analytics services offered by Google or Azure (BigQuery/Synapse). Hands-on experience with ML platforms offered through GCP (Vertex AI), Azure (AI Foundry), or AWS (SageMaker). Large-scale deployment of GenAI/DL/ML projects, with a good understanding of MLOps/LLMOps. Preferred Technical & Functional Skills Strong oral and written communication skills with the ability to communicate technical and non-technical concepts to peers and stakeholders. Ability to work independently with minimal supervision, and escalate when needed. Key behavioral attributes/requirements Ability to mentor junior developers. Ability to own project deliverables, not just individual tasks. Understand business objectives and functions to support data needs. #KGS QUALIFICATIONS This role is for you if you have the below Educational Qualifications PhD or equivalent degree in Computer Science/Applied Mathematics/Applied Statistics/Artificial Intelligence. Preference given to research scholars from IITs, NITs and IIITs (research scholars who have submitted their thesis). Work Experience 8 to 11 years of experience with a strong record of publications in top-tier conferences and journals
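The "Model Evaluation and Validation" duties above (relevant metrics, iterative refinement against benchmarks) can be sketched with a minimal, dependency-free example. The function name, label encoding, and metric set here are illustrative assumptions; a real project would typically use scikit-learn or a comparable library.

```python
from collections import Counter

def classification_metrics(y_true, y_pred, positive=1):
    """Compute accuracy, precision, recall, and F1 for a binary task."""
    c = Counter()
    for t, p in zip(y_true, y_pred):
        if p == positive and t == positive:
            c["tp"] += 1          # predicted positive, actually positive
        elif p == positive:
            c["fp"] += 1          # predicted positive, actually negative
        elif t == positive:
            c["fn"] += 1          # missed a positive
        else:
            c["tn"] += 1          # correctly rejected a negative
    total = c["tp"] + c["fp"] + c["fn"] + c["tn"]
    accuracy = (c["tp"] + c["tn"]) / total
    # "or 1" guards the zero-denominator edge cases
    precision = c["tp"] / ((c["tp"] + c["fp"]) or 1)
    recall = c["tp"] / ((c["tp"] + c["fn"]) or 1)
    f1 = 2 * precision * recall / ((precision + recall) or 1)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}
```

Reporting several metrics together, rather than accuracy alone, is what makes the iterative refinement loop described above meaningful on imbalanced data.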
Posted 3 weeks ago
8.0 - 11.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
JOB DESCRIPTION Roles & responsibilities Here are some of the key responsibilities of Sr Generative AI Engineer : Research and Development: Conduct original research on generative AI models, focusing on model architecture, training methodologies, fine-tuning techniques, and evaluation strategies. Maintain a strong publication record in top-tier conferences and journals, showcasing contributions to the fields of Natural Language Processing (NLP), Deep Learning (DL), and Machine Learning (ML). Multimodal Model Development: Design and experiment with multimodal generative models that integrate various data types, including text, images, and other modalities to enhance AI capabilities. Agentic AI Systems: Develop and design autonomous AI systems that exhibit agentic behavior, capable of making independent decisions and adapting to dynamic environments. Model Development and Implementation: Lead the design, development, and implementation of generative AI models and systems, ensuring a deep understanding of the problem domain. Select suitable models, train them on large datasets, fine-tune hyperparameters, and optimize overall performance. Algorithm Optimization: Optimize generative AI algorithms to enhance their efficiency, scalability, and computational performance through techniques such as parallelization, distributed computing, and hardware acceleration, maximizing the capabilities of modern computing architectures. Data Preprocessing and Feature Engineering: Manage large datasets by performing data preprocessing and feature engineering to extract critical information for generative AI models. This includes tasks such as data cleaning, normalization, dimensionality reduction, and feature selection. Model Evaluation and Validation: Evaluate the performance of generative AI models using relevant metrics and validation techniques. Conduct experiments, analyze results, and iteratively refine models to meet desired performance benchmarks. 
Technical Leadership: Provide technical leadership and mentorship to junior team members, guiding their development in generative AI through work reviews, skill-building, and knowledge sharing. Documentation and Reporting: Document research findings, model architectures, methodologies, and experimental results thoroughly. Prepare technical reports, presentations, and whitepapers to effectively communicate insights and findings to stakeholders. Continuous Learning and Innovation: Stay abreast of the latest advancements in generative AI by reading research papers, attending conferences, and engaging with relevant communities. Foster a culture of learning and innovation within the team to drive continuous improvement. Mandatory technical & functional skills Strong programming skills in Python and frameworks like PyTorch or TensorFlow. In-depth knowledge of deep learning (CNN, RNN, LSTM, Transformers), LLMs (BERT, GPT, etc.) and NLP algorithms. Also, familiarity with frameworks like LangGraph/CrewAI/AutoGen to develop, deploy and evaluate AI agents. Ability to test and deploy open-source LLMs from Hugging Face, such as Meta LLaMA 3.1, BLOOM, Mistral AI, etc. Ensure scalability and efficiency, handle data tasks, stay current with AI trends, and contribute to model documentation for internal and external audiences. Cloud computing experience, particularly with Google Cloud Platform or Azure, is essential. 
A strong foundation in understanding the data analytics services offered by Google or Azure (BigQuery/Synapse). Hands-on experience with ML platforms offered through GCP (Vertex AI), Azure (AI Foundry), or AWS (SageMaker). Large-scale deployment of GenAI/DL/ML projects, with a good understanding of MLOps/LLMOps. Preferred Technical & Functional Skills Strong oral and written communication skills with the ability to communicate technical and non-technical concepts to peers and stakeholders. Ability to work independently with minimal supervision, and escalate when needed. Key behavioral attributes/requirements Ability to mentor junior developers. Ability to own project deliverables, not just individual tasks. Understand business objectives and functions to support data needs. #KGS QUALIFICATIONS This role is for you if you have the below Educational Qualifications PhD or equivalent degree in Computer Science/Applied Mathematics/Applied Statistics/Artificial Intelligence. Preference given to research scholars from IITs, NITs and IIITs (research scholars who have submitted their thesis). Work Experience 8 to 11 years of experience with a strong record of publications in top-tier conferences and journals
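The "Data Preprocessing and Feature Engineering" duties above mention normalization as a routine step. A minimal, dependency-free z-score normalization sketch follows; the function name and the zero-variance handling are assumptions for illustration, and real pipelines would usually reach for NumPy or scikit-learn scalers.

```python
import math

def zscore_normalize(column):
    """Rescale a numeric column to zero mean and unit variance."""
    mean = sum(column) / len(column)
    var = sum((x - mean) ** 2 for x in column) / len(column)
    std = math.sqrt(var)
    if std == 0:
        # Constant column: map everything to 0 rather than divide by zero.
        return [0.0 for _ in column]
    return [(x - mean) / std for x in column]
```

Normalizing features this way keeps gradient-based training stable when input columns have very different scales.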
Posted 3 weeks ago
5.0 - 8.0 years
0 Lacs
Kochi, Kerala, India
On-site
Company Overview Milestone Technologies is a global IT managed services firm that partners with organizations to scale their technology, infrastructure and services to drive specific business outcomes such as digital transformation, innovation, and operational agility. Milestone is focused on building an employee-first, performance-based culture, and for over 25 years we have had a demonstrated history of supporting category-defining enterprise clients that are growing ahead of the market. The company specializes in providing solutions across Application Services and Consulting, Digital Product Engineering, Digital Workplace Services, Private Cloud Services, AI/Automation, and ServiceNow. Milestone's culture is built to provide a collaborative, inclusive environment that supports employees and empowers them to reach their full potential. Our seasoned professionals deliver services based on Milestone's best practices and service delivery framework. By leveraging our vast knowledge base to execute initiatives, we deliver both short-term and long-term value to our clients and apply continuous service improvement to deliver transformational benefits to IT. With Intelligent Automation, Milestone helps businesses further accelerate their IT transformation. The result is a sharper focus on business objectives and a dramatic improvement in employee productivity. Through our key technology partnerships and our people-first approach, Milestone continues to deliver industry-leading innovation to our clients. With more than 3,000 employees serving over 200 companies worldwide, we are following our mission of revolutionizing the way IT is deployed. Job Overview We are seeking a Full Stack Developer with a minimum of 5+ years of experience in Python, React, and AI/ML, who also has hands-on experience with application hosting on cloud platforms (VMs, App Services, Containers). 
This is a lead role where you will guide a team of 5 developers and work on building and deploying modern, intelligent web applications using AWS, Azure, and scalable backend/frontend architecture. Responsibilities Lead a team of 5 engineers across backend, frontend, and AI/ML components. Design and develop scalable full stack solutions using Python (FastAPI/Django/Flask) and React.js. Deploy and host applications using VMs (EC2, Azure VMs), App Services, and Containers (Docker/K8s). Integrate and operationalize ML/LLM models into production systems. Own infrastructure setup for CI/CD, application monitoring, and secure deployments. Collaborate cross-functionally with data scientists, DevOps engineers, and business stakeholders. Conduct code reviews, lead sprint planning, and ensure delivery velocity. Tech Stack & Tools Frontend: React, Redux, Tailwind CSS / Material UI Backend: Python (FastAPI/Django/Flask), REST APIs AI/ML: scikit-learn, TensorFlow, PyTorch, Hugging Face, LangChain LLM: Azure OpenAI, Cohere Cloud: AWS: EC2, Lambda, S3, RDS, SageMaker, EKS, Elastic Beanstalk Azure: App Services, AKS, Azure ML, Azure Functions, Azure VMs LLM: OpenAI / Azure OpenAI (GPT-4, GPT-3.5), Hugging Face Transformers, LangChain / LlamaIndex / Haystack Vector DBs: Chroma, Pinecone, FAISS, Weaviate, Qdrant RAG (Retrieval-Augmented Generation) pipelines App Hosting: VMs (EC2, Azure VMs), Azure App Services, Docker, Kubernetes Database: PostgreSQL, MongoDB, Redis DevOps: GitHub Actions, Jenkins, Terraform (optional), Monitoring (e.g., Prometheus, Azure Monitor) Tools: Git, Jira, Confluence, Slack Key Requirements 5-8 years of experience in full stack development with Python and React Proven experience in deploying and managing applications on VMs, App Services, Docker/Kubernetes Strong cloud experience on both AWS and Azure platforms Solid understanding of AI/ML integration into web apps (end-to-end lifecycle) Experience leading small engineering teams and delivering 
high-quality products Strong communication, collaboration, and mentoring skills LLM and Generative AI exposure (OpenAI, Azure OpenAI, RAG pipelines) Familiarity with vector search engines Microservices architecture and message-driven systems (Kafka/Event Grid) Security-first mindset and hands-on with authentication/authorization flows Lead Full Stack Developer – Python, React, AI/ML Location: Kochi Experience: 5+ years Team Leadership: Yes, team of 5 developers Employment Type: Full-time Compensation Estimated Pay Range: Exact compensation and offers of employment are dependent on circumstances of each case and will be determined based on job-related knowledge, skills, experience, licenses or certifications, and location. Our Commitment to Diversity & Inclusion At Milestone we strive to create a workplace that reflects the communities we serve and work with, where we all feel empowered to bring our full, authentic selves to work. We know creating a diverse and inclusive culture that champions equity and belonging is not only the right thing to do for our employees but is also critical to our continued success. Milestone Technologies provides equal employment opportunity for all applicants and employees. All qualified applicants will receive consideration for employment and will not be discriminated against on the basis of race, color, religion, gender, gender identity, marital status, age, disability, veteran status, sexual orientation, national origin, or any other category protected by applicable federal and state law, or local ordinance. Milestone also makes reasonable accommodations for disabled applicants and employees. We welcome the unique background, culture, experiences, knowledge, innovation, self-expression and perspectives you can bring to our global community. Our recruitment team is looking forward to meeting you.
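The RAG pipelines and vector databases listed in the stack above all rest on the same core idea: embed documents and queries, then rank by similarity. The sketch below is a toy, dependency-free illustration of that retrieval step; the bag-of-words "embedding" and the function names are stand-ins, not how Chroma, Pinecone, or FAISS actually work (those use learned dense embeddings and approximate nearest-neighbor indexes).

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words vector; real systems use learned embedding models."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]
```

In a full RAG pipeline, the retrieved passages would then be packed into the LLM prompt as grounding context before generation.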
Posted 3 weeks ago
3.0 - 5.0 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
Salary: As per experience. Experience: 3-5 years in machine learning or AI engineering. Job Summary: We are seeking a passionate and skilled AI/ML Engineer to join our team to design, develop, and deploy machine learning models and intelligent systems. You will work closely with software developers and product managers to integrate AI solutions into real-world applications. Key Responsibilities: · Design, develop, and train machine learning models (e.g., clustering, NLP). · Basic understanding of AI algorithms and underlying models (RAG/CAG). · Build scalable pipelines for data ingestion, preprocessing, and model deployment. · Implement and fine-tune deep learning models using frameworks like TensorFlow, PyTorch, or Hugging Face. · Collaborate with cross-functional teams to define business problems and develop AI-driven solutions. · Monitor model performance and ensure continuous learning and improvement. · Deploy ML models using Docker, CI/CD, and cloud services like AWS/Azure/GCP. · Stay updated with the latest AI research and apply best practices to business use cases. Requirements: · Bachelor's or Master's degree in Computer Science or a related field. · Strong knowledge of Python and ML libraries (pandas, NumPy, etc.). · Experience with NLP, Computer Vision, or Recommendation Systems is a plus. · Familiarity with model evaluation metrics and handling bias/fairness in ML models. · Good understanding of REST APIs and cloud-based AI solutions. · Experience with Generative AI (e.g., OpenAI, LangChain, LLM fine-tuning). · Experience with any of Azure SQL, Databricks, or Snowflake. Preferred Skills: · Experience with vector databases, semantic search, and agentic frameworks. · Experience using platforms like Azure AI, AWS SageMaker, or Google Vertex AI.
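The "design, develop, and train machine learning models" responsibility above can be made concrete with a tiny, dependency-free classifier. The nearest-centroid approach and all names below are illustrative choices, not the team's actual stack; in practice this would be a few lines of scikit-learn.

```python
def train_nearest_centroid(X, y):
    """Fit a nearest-centroid classifier: one mean vector per class."""
    centroids = {}
    for label in set(y):
        rows = [x for x, lab in zip(X, y) if lab == label]
        # Mean of each feature column for this class.
        centroids[label] = [sum(col) / len(rows) for col in zip(*rows)]
    return centroids

def predict(centroids, x):
    """Assign x to the class whose centroid is closest (squared distance)."""
    def sq_dist(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b))
    return min(centroids, key=lambda lab: sq_dist(centroids[lab], x))
```

The same train-then-predict shape carries over to the deep learning frameworks named in the posting; only the model inside changes.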
Posted 3 weeks ago
0.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Ready to shape the future of work? At Genpact, we don't just adapt to change; we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's industry-first accelerator is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to breakthrough solutions, we tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Inviting applications for the role of Lead Consultant - Data Engineer! In this role, we are looking for candidates who have relevant experience in designing and developing machine learning and deep learning systems, professional software development experience, hands-on experience running machine learning tests and experiments, and the ability to implement appropriate ML algorithms. 
Responsibilities Drive the vision for a modern data and analytics platform to deliver well-architected and engineered data and analytics products leveraging the cloud tech stack and third-party products Close the gap between ML research and production to create ground-breaking new products, features and solve problems for our customers Design, develop, test, and deploy data pipelines, machine learning infrastructure and client-facing products and services Build and implement machine learning models and prototype solutions for proof-of-concept Scale existing ML models into production on a variety of cloud platforms Analyze and resolve architectural problems, working closely with engineering, data science and operations teams. Design and develop data pipelines: Create efficient data pipelines to collect, process, and store large volumes of data from various sources. Implement data solutions: Develop and implement scalable data solutions using technologies like Hadoop, Spark, and SQL databases. Ensure data quality: Monitor and improve data quality by implementing validation processes and error handling. Collaborate with teams: Work closely with data scientists, analysts, and business stakeholders to understand data requirements and deliver solutions. Optimize performance: Continuously optimize data systems for performance, scalability, and cost-effectiveness. Experience in GenAI projects. Qualifications we seek in you! 
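The "collect, process, and store" pipeline responsibility above follows a standard ingest → clean → aggregate shape. Here is a minimal, dependency-free sketch of that shape; the CSV-style input, field names, and skip-on-error policy are assumptions for illustration, where production pipelines would use Spark, Glue, or similar tooling with proper error reporting.

```python
def ingest(raw_rows):
    """Parse raw 'region,amount' strings into records, skipping bad rows."""
    rows = []
    for line in raw_rows:
        parts = line.strip().split(",")
        if len(parts) != 2:
            continue  # malformed row: wrong field count
        try:
            rows.append({"region": parts[0], "amount": float(parts[1])})
        except ValueError:
            continue  # malformed row: non-numeric amount
    return rows

def aggregate(rows):
    """Sum amounts per region (the 'store/serve' end of the pipeline)."""
    totals = {}
    for r in rows:
        totals[r["region"]] = totals.get(r["region"], 0.0) + r["amount"]
    return totals
```

Keeping ingestion and aggregation as separate steps is what makes the validation and data-quality monitoring described above possible between stages.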
Minimum Qualifications / Skills Bachelor's degree in computer science engineering, information technology or BSc in Computer Science, Mathematics or a similar field; Master's degree is a plus. Integration - APIs, microservices and ETL/ELT patterns. DevOps (good to have) - Ansible, Jenkins, ELK. Containerization - Docker, Kubernetes etc. Orchestration - Airflow, Step Functions, Ctrl-M etc. Languages and scripting - Python, Scala, Java etc. Cloud Services - AWS, GCP, Azure and cloud native. Analytics and ML tooling - SageMaker, ML Studio. Execution paradigm - low latency/streaming, batch. Preferred Qualifications/Skills Data platforms - Big Data (Hadoop, Spark, Hive, Kafka etc.) and Data Warehouse (Teradata, Redshift, BigQuery, Snowflake etc.). Visualization tools - Power BI, Tableau. Why join Genpact Be a transformation leader - Work at the cutting edge of AI, automation, and digital innovation Make an impact - Drive change for global enterprises and solve business challenges that matter Accelerate your career - Get hands-on experience, mentorship, and continuous learning opportunities Work with the best - Join 140,000+ bold thinkers and problem-solvers who push boundaries every day Thrive in a values-driven culture - Our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. 
Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please do note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
Posted 3 weeks ago
7.0 - 12.0 years
20 - 30 Lacs
Kolkata, Hyderabad, Bengaluru
Work from Office
Responsibilities Design, develop, and deploy scalable AI/ML solutions using AWS services such as Amazon Bedrock, SageMaker, Amazon Q, Amazon Lex, Amazon Connect, and Lambda. Implement and optimize large language model (LLM) applications using Amazon Bedrock, including prompt engineering, fine-tuning, and orchestration for specific business use cases. Build and maintain end-to-end machine learning pipelines using SageMaker for model training, tuning, deployment, and monitoring. Integrate conversational AI and virtual assistants using Amazon Lex and Amazon Connect, with seamless user experiences and real-time inference. Leverage AWS Lambda for event-driven execution of model inference, data preprocessing, and microservices. Design and maintain scalable and secure data pipelines and AI workflows, ensuring efficient data flow to and from Redshift and other AWS data stores. Implement data ingestion, transformation, and model inference for structured and unstructured data using Python and AWS SDKs. Collaborate with data engineers and scientists to support development and deployment of ML models on AWS. Monitor AI/ML applications in production, ensuring optimal performance, low latency, and cost efficiency across all AI/ML services. Ensure implementation of AWS security best practices, including IAM policies, data encryption, and compliance with industry standards. Drive the integration of Amazon Q for enterprise AI-based assistance and automation across internal processes and systems. Participate in architecture reviews and recommend best-fit AWS AI/ML services for evolving business needs. Stay up to date with the latest advancements in AWS AI services, LLMs, and industry trends to inform technology strategy and innovation. Prepare documentation for ML pipelines, model performance reports, and system architecture. Qualifications we seek in you: Minimum Qualifications Proven hands-on experience with Amazon Bedrock, SageMaker, Lex, Connect, Lambda, and Redshift. 
Strong knowledge and application experience with Large Language Models (LLMs) and prompt engineering techniques. Experience building production-grade AI applications using AWS AI or other generative AI services. Solid programming experience in Python for ML development, data processing, and automation. Proficiency in designing and deploying conversational AI/chatbot solutions using Lex and Connect. Experience with Redshift for data warehousing and analytics integration with ML solutions. Good understanding of AWS architecture, scalability, availability, and security best practices. Familiarity with AWS development, deployment, and monitoring tools (CloudWatch, CodePipeline, etc.). Strong understanding of MLOps practices including model versioning, CI/CD pipelines, and model monitoring. Strong communication and interpersonal skills to collaborate with cross-functional teams and stakeholders. Ability to troubleshoot performance bottlenecks and optimize cloud resources for cost-effectiveness. Preferred Qualifications: AWS Certification in Machine Learning, Solutions Architect, or AI Services. Experience with other AI tools (e.g., Anthropic Claude, OpenAI APIs, or Hugging Face). Knowledge of streaming architectures and services like Kafka or Kinesis.
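The posting above leverages "AWS Lambda for event-driven execution of model inference." The sketch below shows the general shape of a Lambda-style Python handler; the averaging "model" is a deliberate placeholder, and the event fields mirror the common API Gateway proxy format, but a real handler would load a model artifact or call a SageMaker endpoint instead.

```python
import json

def handler(event, context=None):
    """Minimal Lambda-style inference handler.

    Expects an API-Gateway-proxy-shaped event whose "body" is a JSON
    string containing a "features" list. The scoring step is a stand-in.
    """
    try:
        body = json.loads(event.get("body", "{}"))
        features = body["features"]
    except (KeyError, json.JSONDecodeError):
        return {"statusCode": 400,
                "body": json.dumps({"error": "invalid input"})}
    score = sum(features) / len(features)  # placeholder "model"
    return {"statusCode": 200, "body": json.dumps({"score": score})}
```

Returning a statusCode/body dict is what lets API Gateway translate the function's result directly into an HTTP response for real-time inference.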
Posted 3 weeks ago
0.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Ready to shape the future of work? At Genpact, we don't just adapt to change; we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's industry-first accelerator is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to breakthrough solutions, we tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Inviting applications for the role of Principal Consultant - MLOps Engineer! In this role, you will lead the automation and orchestration of our machine learning infrastructure and CI/CD pipelines on public cloud (preferably AWS). This role is essential for enabling scalable, secure, and reproducible deployments of both classical AI/ML models and generative AI solutions in production environments. Responsibilities Develop and maintain CI/CD pipelines for AI/GenAI models on AWS using tools such as GitHub Actions and CodePipeline (not limited to). Automate infrastructure provisioning using IaC (Terraform, Bicep, etc.) on any cloud platform (Azure or AWS). Package and deploy AI/GenAI models on SageMaker, Lambda, and API Gateway. Write Python scripts for automation, deployment, and monitoring. 
Engaging in the design, development, and maintenance of data pipelines for various AI use cases. Active contribution to key deliverables as part of an agile development team. Set up model monitoring, logging, and alerting (e.g., drift, latency, failures). Ensure model governance, versioning, and traceability across environments. Collaborating with others to source, analyze, test, and deploy data processes. Experience in GenAI projects. Qualifications we seek in you! Minimum Qualifications Experience with MLOps practices. Degree/qualification in Computer Science or a related field, or equivalent work experience. Experience developing, testing, and deploying data pipelines. Strong Python programming skills. Hands-on experience in deploying 2-3 AI/GenAI models in AWS. Familiarity with LLM APIs (e.g., OpenAI, Bedrock) and vector databases. Clear and effective communication skills to interact with team members, stakeholders, and end users. Preferred Qualifications/Skills Experience with Docker-based deployments. Exposure to model monitoring tools (Evidently, CloudWatch). Familiarity with RAG stacks or fine-tuning LLMs. Understanding of GitOps practices. Knowledge of governance and compliance policies, standards, and procedures. Why join Genpact? Be a transformation leader: work at the cutting edge of AI, automation, and digital innovation. Make an impact: drive change for global enterprises and solve business challenges that matter. Accelerate your career: get hands-on experience, mentorship, and continuous learning opportunities. Work with the best: join 140,000+ bold thinkers and problem-solvers who push boundaries every day. Thrive in a values-driven culture: our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress. Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: up. Let's build tomorrow together.
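The model-drift monitoring mentioned above is often implemented by comparing a feature's live distribution against its training-time baseline. A minimal, dependency-light sketch using the population stability index (PSI) is shown below; the bin count, smoothing constant, and 0.1 threshold are common conventions, not values prescribed by any particular tool.

```python
import math

def psi(expected: list[float], actual: list[float], bins: int = 5) -> float:
    """Population stability index between a baseline and a live sample.
    PSI below ~0.1 is commonly read as 'no significant drift'."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]
    edges[-1] = float("inf")  # catch live values above the baseline max

    def frac(sample):
        counts = [0] * bins
        for x in sample:
            for i in range(bins):
                if edges[i] <= x < edges[i + 1]:
                    counts[i] += 1
                    break
            else:
                counts[0] += 1  # values below the baseline min
        # small smoothing term avoids log(0) for empty bins
        return [(c + 1e-6) / (len(sample) + 1e-6 * bins) for c in counts]

    e, a = frac(expected), frac(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [i / 100 for i in range(100)]  # training-time feature values
stable = [i / 100 for i in range(100)]    # live sample, same distribution
```

In practice this check would run on a schedule (e.g., a Lambda over recent inference logs) and raise a CloudWatch alarm when the index crosses the threshold.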
Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
Posted 3 weeks ago
5.0 - 10.0 years
20 - 35 Lacs
Kochi, Bengaluru
Work from Office
Job Summary: We are seeking a highly skilled and motivated Machine Learning Engineer with a strong foundation in programming and machine learning, hands-on experience with AWS Machine Learning services (especially SageMaker), and a solid understanding of Data Engineering and MLOps practices. You will be responsible for designing, developing, deploying, and maintaining scalable ML solutions in a cloud-native environment. Key Responsibilities: • Design and implement machine learning models and pipelines using AWS SageMaker and related services. • Develop and maintain robust data pipelines for training and inference workflows. • Collaborate with data scientists, engineers, and product teams to translate business requirements into ML solutions. • Implement MLOps best practices including CI/CD for ML, model versioning, monitoring, and retraining strategies. • Optimize model performance and ensure scalability and reliability in production environments. • Monitor deployed models for drift, performance degradation, and anomalies. • Document processes, architectures, and workflows for reproducibility and compliance. Required Skills & Qualifications: • Strong programming skills in Python and familiarity with ML libraries (e.g., scikit-learn, TensorFlow, PyTorch). • Solid understanding of machine learning algorithms, model evaluation, and tuning. • Hands-on experience with AWS ML services, especially SageMaker, S3, Lambda, Step Functions, and CloudWatch. • Experience with data engineering tools (e.g., Apache Airflow, Spark, Glue) and workflow orchestration. • Proficiency in MLOps tools and practices (e.g., MLflow, Kubeflow, CI/CD pipelines, Docker, Kubernetes). • Familiarity with monitoring tools and logging frameworks for ML systems. • Excellent problem-solving and communication skills. Preferred Qualifications: • AWS Certification (e.g., AWS Certified Machine Learning Specialty).
• Experience with real-time inference and streaming data. • Knowledge of data governance, security, and compliance in ML systems.
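The CI/CD and retraining strategies in the responsibilities above typically include an automated promotion gate: a retrained candidate model is only promoted if it beats the current production model on a held-out metric by some margin. A framework-free sketch follows; the AUC metric, version numbers, and 0.005 margin are illustrative assumptions, not a prescribed standard.

```python
def should_promote(champion_auc: float, challenger_auc: float,
                   min_gain: float = 0.005) -> bool:
    """Promote the challenger only if it beats the champion by min_gain.
    The margin guards against promoting on evaluation noise alone."""
    return challenger_auc >= champion_auc + min_gain

# In a CI pipeline, this decision would gate the deploy step.
# A hypothetical in-memory stand-in for a model registry:
registry = {"champion": {"version": 3, "auc": 0.912}}
candidate = {"version": 4, "auc": 0.921}

if should_promote(registry["champion"]["auc"], candidate["auc"]):
    registry["champion"] = candidate  # old version remains available for rollback
```

In a real setup the registry would be SageMaker Model Registry or MLflow, and the gate would run as a pipeline step before the endpoint update.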
Posted 3 weeks ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Description AWS Infrastructure Services owns the design, planning, delivery, and operation of all AWS global infrastructure. In other words, we’re the people who keep the cloud running. We support all AWS data centers and all of the servers, storage, networking, power, and cooling equipment that ensure our customers have continual access to the innovation they rely on. We work on the most challenging problems, with thousands of variables impacting the supply chain — and we’re looking for talented people who want to help. You’ll join a diverse team of software, hardware, and network engineers, supply chain specialists, security experts, operations managers, and other vital roles. You’ll collaborate with people across AWS to help us deliver the highest standards for safety and security while providing seemingly infinite capacity at the lowest possible cost for our customers. And you’ll experience an inclusive culture that welcomes bold ideas and empowers you to own them to completion. Do you love problem solving? Are you looking for real-world supply chain challenges? Do you have a desire to make a major contribution to the future, in the rapid growth environment of Cloud Computing? Amazon Web Services is looking for a highly motivated Data Scientist to help build scalable, predictive, and prescriptive business analytics solutions that support the AWS Supply Chain and Procurement organization. You will be part of the Supply Chain Analytics team working with Global Stakeholders, Data Engineers, Business Intelligence Engineers, and Business Analysts to achieve our goals. We are seeking an innovative and technically strong data scientist with a background in optimization, machine learning, and statistical modeling/analysis. This role requires a team member to have strong quantitative modeling skills and the ability to apply optimization/statistical/machine learning methods to complex decision-making problems, with data coming from various data sources.
The candidate should have strong communication skills, be able to work closely with stakeholders, and translate data-driven findings into actionable insights. The successful candidate will be a self-starter, comfortable with ambiguity, with strong attention to detail and the ability to work in a fast-paced and ever-changing environment. Key job responsibilities Demonstrate thorough technical knowledge of feature engineering on massive datasets, effective exploratory data analysis, and model building using industry-standard time-series forecasting techniques such as ARIMA, ARIMAX, and Holt-Winters, and formulate ensemble models. Proficiency in both supervised (linear/logistic regression) and unsupervised algorithms (k-means clustering, Principal Component Analysis, market basket analysis). Experience in solving optimization problems like inventory and network optimization; should have hands-on experience in linear programming. Work closely with internal stakeholders like the business teams, engineering teams, and partner teams, and align them with respect to your focus area. Detail-oriented, with an aptitude for solving unstructured problems; you should work in a self-directed environment, own tasks, and drive them to completion. Excellent business and communication skills to be able to work with business owners to develop and define key business questions and to build data sets that answer those questions. Work with distributed machine learning and statistical algorithms to harness enormous volumes of data at scale to serve our customers. About The Team Diverse Experiences Amazon values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let it stop you from applying.
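Of the forecasting techniques named above, Holt's linear trend method (the level-and-trend core of Holt-Winters, without the seasonal component) is simple enough to sketch without a library. The smoothing parameters below are illustrative defaults; on a perfectly linear series the method tracks the trend exactly.

```python
def holt_forecast(series: list[float], alpha: float = 0.5,
                  beta: float = 0.5, horizon: int = 1) -> float:
    """Holt's linear method (double exponential smoothing): h-step forecast.
    alpha smooths the level, beta smooths the trend estimate."""
    level, trend = series[1], series[1] - series[0]  # initialize from first two points
    for y in series[2:]:
        prev_level = level
        level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return level + horizon * trend

# On an exactly linear series y = 3 + 2t, the one-step forecast is exact:
linear = [3.0 + 2.0 * t for t in range(10)]
print(holt_forecast(linear, horizon=1))  # 23.0, the next point of y = 3 + 2t
```

Production work would use a library implementation (e.g., statsmodels' ExponentialSmoothing) with parameters fitted to the data rather than fixed by hand.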
Why AWS Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that’s why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses. Work/Life Balance We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there’s nothing we can’t achieve. Inclusive Team Culture AWS values curiosity and connection. Our employee-led and company-sponsored affinity groups promote inclusion and empower our people to take pride in what makes us unique. Our inclusion events foster stronger, more collaborative teams. Our continual innovation is fueled by the bold ideas, fresh perspectives, and passionate voices our teams bring to everything we do. Mentorship and Career Growth We’re continuously raising our performance bar as we strive to become Earth’s Best Employer. That’s why you’ll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional. Basic Qualifications 5+ years of data scientist experience 4+ years of data querying languages (e.g. SQL), scripting languages (e.g. Python) or statistical/mathematical software (e.g. R, SAS, Matlab, etc.) experience 3+ years of machine learning/statistical modeling data analysis tools and techniques, and parameters that affect their performance experience Experience applying theoretical models in an applied environment Preferred Qualifications Experience in Python, Perl, or another scripting language Experience in a ML or data scientist role with a large technology company Functional knowledge of AWS platforms such as S3, Glue, Athena, Sagemaker, Lambda, EC2, Batch, Step Function. 
Experience in creating powerful data driven visualizations to describe your ML modeling results to stakeholders Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - ADSIPL - Karnataka Job ID: A2954457
Posted 3 weeks ago
14.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description Roles & responsibilities Strategic Leadership & Vision Lead and manage a 100-member AI delivery team to ensure successful project delivery. Develop and implement AI strategies and solutions in collaboration with Product Leads and Solution Architects. Ensure all PODs are aligned with project timelines and organizational objectives, delivering high quality and excellent CSAT. Drive vendor teams to meet project timelines and deliverables Stakeholder Engagement & Communication Collaborate with stakeholders to define project scope, requirements, and deliverables. Communicate project status, updates, and issues to stakeholders regularly. Resolve conflicts and provide solutions to ensure smooth project execution Project Execution & Delivery Oversight Monitor project progress and performance (KPIs), ensuring timely and within-budget delivery. Manage project budgets, resources, and timelines effectively. Identify and mitigate risks to ensure project success. Team Management & Culture Building Provide leadership and guidance to team members. Foster a collaborative and innovative work environment. Ensure compliance with industry standards and regulations. Mandatory technical & functional skills AI & Machine Learning Expertise Understanding of supervised, unsupervised, and reinforcement learning; NLP and Vision Experience with AI/ML platforms (e.g., Azure ML, AWS SageMaker, Google Vertex AI). Data Engineering & Analytics Proficiency in data pipelines, ETL processes, and data governance. Strong grasp of data quality, lineage, and auditability Knowledge of big data tools (e.g., Spark, Hadoop, Databricks) Cloud & Infrastructure Hands-on experience with cloud platforms (Azure preferred in enterprise audit environments). Understanding of containerization (Docker, Kubernetes) and CI/CD pipelines. Audit Domain Knowledge Familiarity with audit workflows, risk assessment models, and compliance frameworks. 
Understanding of regulatory standards (e.g., SOX, GDPR, ISO 27001). Project & Program Management Tools Proficiency in tools like JIRA, Confluence, MS Project, and Azure DevOps. Experience with Agile, Scrum, and SAFe methodologies. Strategic Planning & Execution Ability to translate business goals into an AI project roadmap. Experience in managing multi-disciplinary teams across geographies. Stakeholder Management Strong communication and negotiation skills with internal and external stakeholders. Ability to manage expectations and drive consensus. Risk & Compliance Management Proactive identification and mitigation of project risks. Ensuring compliance with internal audit standards and external regulations. Leadership & Team Development Proven ability to lead large teams, mentor senior leads, and foster innovation. Conflict resolution and performance management capabilities. Key behavioral attributes/requirements Demonstrates ability to think critically and the confidence to solve problems and suggest solutions. A quick learner who demonstrates adaptability to change, with strong stakeholder and negotiation skills. Should be willing to and capable of delivering under tight timelines based on business needs, including working on weekends. Willingness to work based on delivery timelines and flexibility to stretch beyond regular hours depending on project criticality.
Qualifications This role is for you if you have the below: Educational Qualifications: B.Tech or M.Tech in CSE. Work Experience: 14+ years of professional relevant experience.
Posted 3 weeks ago
4.0 years
25 - 35 Lacs
Bengaluru, Karnataka, India
On-site
This role is for one of Weekday's clients Salary range: Rs 2500000 - Rs 3500000 (i.e., INR 25-35 LPA) Min Experience: 4 years Location: Bengaluru Job Type: full-time We are looking for a highly skilled and motivated Machine Learning Engineer with 4-6 years of experience to join our growing team. As a core member of our AI/ML division, you will be responsible for designing, developing, deploying, and maintaining machine learning solutions that power real-world products and systems. You'll collaborate with data scientists, software engineers, and product teams to bring cutting-edge ML models from research into production. The ideal candidate will have a strong foundation in machine learning algorithms, statistical modeling, and data preprocessing, along with proven experience deploying models at scale. This role offers the opportunity to work on a variety of projects involving predictive modeling, recommendation systems, NLP, computer vision, and more. Requirements Key Responsibilities: Design and develop robust, scalable, and efficient machine learning models for various business applications. Collaborate with data scientists and analysts to understand project objectives and translate them into ML solutions. Perform data cleaning, feature engineering, and exploratory data analysis on large structured and unstructured datasets. Evaluate, fine-tune, and optimize model performance using techniques such as cross-validation, hyperparameter tuning, and ensembling. Deploy models into production using tools and frameworks such as Docker, MLflow, Airflow, or Kubernetes. Continuously monitor model performance and retrain/update models as required to maintain accuracy and relevance. Conduct code reviews, maintain proper documentation, and contribute to best practices in ML development and deployment. Work with stakeholders to identify opportunities for leveraging ML to drive business decisions.
Required Skills and Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, Mathematics, Statistics, or a related field. 4-6 years of hands-on experience in designing and implementing machine learning models in real-world applications. Strong understanding of classical ML algorithms (e.g., regression, classification, clustering, ensemble methods); experience with deep learning techniques (CNNs, RNNs, transformers) is a plus. Proficient in programming languages such as Python (preferred) and experienced with ML libraries/frameworks like scikit-learn, TensorFlow, PyTorch, XGBoost, or LightGBM. Experience in data preprocessing, feature selection, and pipeline automation. Familiarity with version control systems (e.g., Git) and collaborative development environments. Ability to interpret model results and communicate findings to technical and non-technical stakeholders. Strong problem-solving skills and a passion for innovation and continuous learning. Preferred Qualifications: Experience with cloud-based ML platforms (AWS SageMaker, Azure ML, or Google Cloud AI Platform). Exposure to big data technologies (e.g., Spark, Hadoop) and data pipeline tools (e.g., Airflow). Prior experience in model deployment and MLOps practices.
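Cross-validation, listed above as a tuning technique, simply partitions the data into k folds and averages a score across hold-outs. A dependency-free sketch of the fold-splitting logic follows (index bookkeeping only; the model and scorer are placeholders, and real projects would use scikit-learn's KFold):

```python
def kfold_indices(n: int, k: int) -> list[tuple[list[int], list[int]]]:
    """Split indices 0..n-1 into k (train, validation) partitions."""
    fold_size, splits = n // k, []
    for f in range(k):
        start = f * fold_size
        stop = n if f == k - 1 else start + fold_size  # last fold takes the remainder
        val = list(range(start, stop))
        train = [i for i in range(n) if i < start or i >= stop]
        splits.append((train, val))
    return splits

# Every sample lands in exactly one validation fold across the k splits:
splits = kfold_indices(10, 3)
```

In use, each (train, val) pair would fit a candidate model on the train indices and score it on the val indices, with the mean validation score driving hyperparameter selection.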
Posted 3 weeks ago
8.0 years
0 Lacs
Hyderābād
On-site
Overview: We are seeking an experienced Cloud Delivery Engineer to work horizontally across our organization, collaborating with Cloud Engineering, Cloud Operations, and cross-platform teams. This role is crucial in ensuring that cloud resources are delivered according to established standards, with a focus on both Azure and AWS platforms. The Cloud Delivery Engineer will be responsible for the delivery of Data and AI platforms. Responsibilities: Seeking a talented AWS artificial intelligence specialist with the following skills. Provision cloud resources, ensuring they adhere to approved architecture and organizational standards on both Azure and AWS. Collaborate closely with Cloud Engineering, Cloud Operations, and cross-platform teams to ensure seamless delivery of cloud resources on both Azure and AWS. Architecting, designing, developing, and implementing AI models and algorithms to address business challenges and improve processes. Experience in implementing security principles and guardrails for AI infrastructure. Identify and mitigate risks associated with cloud deployments and resource management in multi-cloud environments. Collaborating with cross-functional teams of data scientists, software developers, and business stakeholders to understand requirements and translate them into AI solutions. Create and maintain documentation for AI models and algorithms as knowledge base articles. Participate in capacity planning and cost optimization initiatives for multi-cloud resources. Experience working with Vector DB (Datastax HCD). Conduct experiments to test and compare the effectiveness of different AI approaches. Troubleshooting and resolving issues related to AI systems. Deploying AI solutions into production environments and ensuring their integration with existing systems. Monitoring and evaluating the performance of AI systems, adjusting as necessary to improve outcomes. Research and stay updated on the latest AI and machine learning technology advancements.
Present findings and recommendations to stakeholders, including technical and non-technical audiences. Providing technical expertise and guidance on AI-related projects and initiatives. Experience in creating deployments for Intelligent Search, Intelligent Document Processing, Media Intelligence, Forecasting, AI for DevOps, Identity Verification, and Content Moderation. Experience in Amazon Bedrock, SageMaker, and all foundational AWS resources under Compute, Networking, Security, App Runner, and Lambda. Qualifications: Bachelor's degree in Computer Science, Information Technology, or a related field; Master's degree preferred. 8+ years of experience in IT, with at least 4 years focused on cloud technologies, including substantial experience with both AWS and Azure. Strong understanding of AWS and Azure services, architectures, and best practices, particularly in Data and AI platforms. Certifications in both AWS (e.g., AWS Certified Solutions Architect - Professional) and Azure (e.g., Azure Solutions Architect Expert). Experience working with multiple teams and cloud platforms. Demonstrated ability to work horizontally across different teams and platforms. Strong knowledge of cloud security principles and compliance requirements in multi-cloud environments. Working experience with DevOps practices and tools applicable to both Azure and AWS. Experience with infrastructure as code (e.g., ARM templates, CloudFormation, Terraform). Proficiency in scripting languages (e.g., PowerShell, Bash, Python). Solid understanding of networking concepts and their implementation in Azure and AWS. Preferred: Cloud architecture/specialist certifications. Experience with hybrid cloud architectures. Familiarity with containerization technologies (e.g., Docker, Kubernetes) on both Azure and AWS.
Posted 3 weeks ago
0 years
2 - 4 Lacs
Hyderābād
On-site
DESCRIPTION The Amazon Web Services Professional Services (ProServe) team is seeking a skilled Delivery Consultant to join our team at Amazon Web Services (AWS). In this role, you'll work closely with customers to design, implement, and manage AWS solutions that meet their technical requirements and business objectives. You'll be a key player in driving customer success through their cloud journey, providing technical expertise and best practices throughout the project lifecycle. AWS Global Services includes experts from across AWS who help our customers design, build, operate, and secure their cloud environments. Customers innovate with AWS Professional Services, upskill with AWS Training and Certification, optimize with AWS Support and Managed Services, and meet objectives with AWS Security Assurance Services. Our expertise and emerging technologies include AWS Partners, AWS Sovereign Cloud, AWS International Product, and the Generative AI Innovation Center. You’ll join a diverse team of technical experts in dozens of countries who help customers achieve more with the AWS cloud. Possessing a deep understanding of AWS products and services, as a Delivery Consultant you will be proficient in architecting complex, scalable, and secure solutions tailored to meet the specific needs of each customer. You’ll work closely with stakeholders to gather requirements, assess current infrastructure, and propose effective migration strategies to AWS. As trusted advisors to our customers, providing guidance on industry trends, emerging technologies, and innovative solutions, you will be responsible for leading the implementation process, ensuring adherence to best practices, optimizing performance, and managing risks throughout the project. The AWS Professional Services organization is a global team of experts that help customers realize their desired business outcomes when using the AWS Cloud. 
We work together with customer teams and the AWS Partner Network (APN) to execute enterprise cloud computing initiatives. Our team provides assistance through a collection of offerings which help customers achieve specific outcomes related to enterprise cloud adoption. We also deliver focused guidance through our global specialty practices, which cover a variety of solutions, technologies, and industries. Key job responsibilities As an experienced technology professional, you will be responsible for: Designing, implementing, and building complex, scalable, and secure GenAI and ML applications and models built on AWS tailored to customer needs Providing technical guidance and implementation support throughout project delivery, with a focus on using AWS AI/ML services Collaborating with customer stakeholders to gather requirements and propose effective model training, building, and deployment strategies Acting as a trusted advisor to customers on industry trends and emerging technologies Sharing knowledge within the organization through mentoring, training, and creating reusable artifacts About the team Diverse Experiences AWS values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let it stop you from applying. Why AWS? Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that’s why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses. Inclusive Team Culture AWS values curiosity and connection. Our employee-led and company-sponsored affinity groups promote inclusion and empower our people to take pride in what makes us unique.
Our inclusion events foster stronger, more collaborative teams. Our continual innovation is fueled by the bold ideas, fresh perspectives, and passionate voices our teams bring to everything we do. Mentorship & Career Growth We’re continuously raising our performance bar as we strive to become Earth’s Best Employer. That’s why you’ll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional. Work/Life Balance We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there’s nothing we can’t achieve in the cloud. BASIC QUALIFICATIONS Experience in cloud architecture and implementation Bachelor's degree in Computer Science, Engineering, related field, or equivalent experience Proven track record in designing and developing end-to-end Machine Learning and Generative AI solutions, from conception to deployment Experience in applying best practices and evaluating alternative and complementary ML and foundational models suitable for given business contexts Foundational knowledge of data modeling principles, statistical analysis methodologies, and demonstrated ability to extract meaningful insights from complex, large-scale datasets PREFERRED QUALIFICATIONS AWS experience preferred, with proficiency in a wide range of AWS services (e.g., Bedrock, SageMaker, EC2, S3, Lambda, IAM, VPC, CloudFormation) AWS Professional level certifications (e.g., Machine Learning Speciality, Machine Learning Engineer Associate, Solutions Architect Professional) preferred Experience with automation and scripting (e.g., Terraform, Python) Knowledge of security and compliance standards (e.g., HIPAA, GDPR) Strong communication skills with the ability to explain technical concepts to both technical and non-technical audiences Experience in 
developing and optimizing foundation models (LLMs), including fine-tuning, continuous training, small language model development, and implementation of Agentic AI systems Experience in developing and deploying end-to-end machine learning and deep learning solutions Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Job details IND, TS, Hyderabad Software Development
Posted 3 weeks ago
3.0 years
0 Lacs
Gurgaon
On-site
DESCRIPTION
We are part of the India & Emerging Stores Customer Fulfilment Experience Org. The team's mission is to address unique customer requirements and the increasing costs and abuse associated with returns and rejects for Emerging Stores. Our team implements tech solutions that reduce the net cost of concessions/refunds: this includes buyer and seller abuse, costs associated with return/reject transportation, cost of contacts, and operations cost at return centers.

We have a huge opportunity to create a legacy, and our Legacy Statement is to “transform ease and quality of living in India, thereby enabling its potential in the 21st century”. We also believe that we have an additional responsibility to “help Amazon become truly global in its perspective and innovations” by creating global best-in-class products/platforms that can serve our customers worldwide.

This is an opportunity to join our mission to build tech solutions that empower sellers to delight the next billion customers. You will be responsible for building new system capabilities from the ground up for strategic business initiatives. If you feel excited by the challenge of setting the course for large company-wide initiatives and building and launching customer-facing products in IN and other emerging markets, this may be the next big career move for you.

We are building systems that can scale across multiple marketplaces and operate at the state of the art in automated, large-scale e-commerce business. We are looking for an SDE to deliver capabilities across marketplaces. We operate in a high-performance agile ecosystem where SDEs, Product Managers and Principals frequently connect with the end customers of our products. Our SDEs stay connected with customers through seller/FC/Delivery Station visits and customer anecdotes. This allows our engineers to significantly influence the product roadmap, contribute to PRFAQs and create disproportionate impact through the tech they deliver.
We offer technology leaders a once-in-a-lifetime opportunity to transform billions of lives across the planet through their tech innovation. As an engineer, you will help with the design, implementation, and launch of many key product features. You will get an opportunity to work with a wide range of technologies (including Amazon OpenSearch, Lambda, ECS, SQS, DynamoDB, Neptune, etc.) and apply new technologies to solve customer problems. You will have an influence on defining product features, drive operational excellence, and spearhead the best practices that enable a quality product. You will get to work with highly skilled and motivated engineers who are already contributing to building high-scale, highly available systems.

If you are looking for an opportunity to work on world-leading technologies, would like to build creative technology solutions that positively impact hundreds of millions of customers, and relish large ownership and diverse technologies, join our team today!

As an engineer you will be responsible for:
- Ownership of a product/feature end-to-end, through all phases from development to production.
- Ensuring the developed features are scalable and highly available, with no quality concerns.
- Working closely with senior engineers to refine the design and implementation.
- Management of and execution against project plans and delivery commitments.
- Assisting directly and indirectly in the continual hiring and development of technical talent.
- Creating and executing appropriate quality plans, project plans, test strategies and processes for development activities, in concert with business and project management efforts.
- Contributing intellectual property through patents.

The candidate should be an engineer passionate about delivering experiences that delight customers and creating solutions that are robust. He/she should be able to commit to and own deliveries end-to-end.
About the team
Team: IES NCRC Tech
Mission: We own programs to prevent customer abuse for IN & emerging marketplaces. We detect abusive customers for known abuse patterns and apply interventions at different stages of the buyer's journey, such as checkout, pre-fulfillment, shipment and customer contact (customer service). We partner closely with the International machine learning team to build ML-based solutions for the above interventions.
Vision: Our goal is to automate the detection of new abuse patterns and act quickly to minimize financial loss to Amazon. This acts as a deterrent for abusers, while building trust for genuine customers. We use machine-learning-based models to automate abuse detection in a scalable & efficient manner.
Technologies: The ML models leveraged by the team range from regression-based (XGBoost) to deep-learning models (RNN, CNN), and use frameworks like PyTorch, TensorFlow and Keras for training & inference. Productionization of ML models for real-time, low-latency, high-traffic use cases poses unique challenges, which in turn makes the work exciting. In terms of tech stack, multiple AWS technologies are used, e.g. SageMaker, ECS, Lambda, Elasticsearch, Step Functions, AWS Batch, DynamoDB, S3, CDK (for infra) and graph databases, and we are open to adopting new technologies as the use case demands.

BASIC QUALIFICATIONS
- 3+ years of non-internship professional software development experience
- 2+ years of non-internship design or architecture (design patterns, reliability and scaling) experience with new and existing systems
- Experience programming with at least one software programming language

PREFERRED QUALIFICATIONS
- 3+ years of full software development life cycle experience, including coding standards, code reviews, source control management, build processes, testing, and operations
- Bachelor's degree in computer science or equivalent

Our inclusive culture empowers Amazonians to deliver the best results for our customers.
Job details
IND, HR, Gurugram
Amazon.in Software Development
Posted 3 weeks ago
0 years
2 - 3 Lacs
Gurgaon
On-site
DESCRIPTION
The Amazon Web Services Professional Services (ProServe) team is seeking a skilled Delivery Consultant to join our team at Amazon Web Services (AWS). In this role, you'll work closely with customers to design, implement, and manage AWS solutions that meet their technical requirements and business objectives. You'll be a key player in driving customer success throughout their cloud journey, providing technical expertise and best practices across the project lifecycle.

AWS Global Services includes experts from across AWS who help our customers design, build, operate, and secure their cloud environments. Customers innovate with AWS Professional Services, upskill with AWS Training and Certification, optimize with AWS Support and Managed Services, and meet objectives with AWS Security Assurance Services. Our expertise and emerging technologies include AWS Partners, AWS Sovereign Cloud, AWS International Product, and the Generative AI Innovation Center. You’ll join a diverse team of technical experts in dozens of countries who help customers achieve more with the AWS cloud.

Possessing a deep understanding of AWS products and services, as a Delivery Consultant you will be proficient in architecting complex, scalable, and secure solutions tailored to meet the specific needs of each customer. You’ll work closely with stakeholders to gather requirements, assess current infrastructure, and propose effective migration strategies to AWS. As a trusted advisor to our customers, providing guidance on industry trends, emerging technologies, and innovative solutions, you will be responsible for leading the implementation process, ensuring adherence to best practices, optimizing performance, and managing risks throughout the project.

The AWS Professional Services organization is a global team of experts that helps customers realize their desired business outcomes when using the AWS Cloud.
We work together with customer teams and the AWS Partner Network (APN) to execute enterprise cloud computing initiatives. Our team provides assistance through a collection of offerings which help customers achieve specific outcomes related to enterprise cloud adoption. We also deliver focused guidance through our global specialty practices, which cover a variety of solutions, technologies, and industries.

Key job responsibilities
As an experienced technology professional, you will be responsible for:
- Designing and implementing complex, scalable, and secure AWS solutions tailored to customer needs
- Providing technical guidance and troubleshooting support throughout project delivery
- Collaborating with stakeholders to gather requirements and propose effective migration strategies
- Acting as a trusted advisor to customers on industry trends and emerging technologies
- Sharing knowledge within the organization through mentoring, training, and creating reusable artifacts

About the team
Diverse Experiences
AWS values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let it stop you from applying.

Why AWS?
Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that’s why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses.

Inclusive Team Culture
AWS values curiosity and connection. Our employee-led and company-sponsored affinity groups promote inclusion and empower our people to take pride in what makes us unique. Our inclusion events foster stronger, more collaborative teams.
Our continual innovation is fueled by the bold ideas, fresh perspectives, and passionate voices our teams bring to everything we do. The Mentorship & Career Growth, Work/Life Balance, basic and preferred qualifications, and accommodations information for this role are identical to those listed for the AWS Data Engineer posting above.

Job details
IND, HR, Gurugram
Software Development
Posted 3 weeks ago
4.0 - 6.0 years
3 - 6 Lacs
Noida
On-site
We are looking for a driven individual with financial knowledge and an analytical mindset. The candidate should be a motivated team player who can maintain efficiency and accuracy when multitasking. To be a strong candidate for this role, the key will be experience in financial services and a proven understanding of products, along with strong written and verbal communication skills for interacting with CSU/Field RPs.

Key Responsibilities
- Working with Surveillance internal teams and business partners to define and document business requirements
- Engaging business counterparts to ensure solutions are appropriate per the business requirement and level of readiness
- Translating business requirements into solutions
- Performing and delivering on complex ad-hoc business analysis requests
- Translating analytic output into understandable and actionable business knowledge
- Coordinating and prioritizing business needs in a matrix management environment
- Documenting and communicating results and recommendations to external and internal teams

Required Qualifications
- 4-6 years of experience in the analytics industry
- Financial services experience required
- Strong quantitative/analytical/programming and problem-solving skills
- Excellent knowledge of MS Excel, PowerPoint and Word
- Highly motivated self-starter with excellent verbal and written communication skills
- Ability to work effectively in a team environment on multiple projects and drive results through direct and indirect influence
- Willingness to learn tools like Python, SQL, PowerApps & Power BI
- Series 7 or SIE preferred

Preferred Qualifications
- Experience with AWS infrastructure, with experience on and knowledge of tools like SageMaker and Athena
- Python programming, SQL and data manipulation skills

About Our Company
Ameriprise India LLP has been providing client-based financial solutions to help clients plan and achieve their financial objectives for 125 years. We are a U.S.-based financial planning company headquartered in Minneapolis with a global presence. The firm’s focus areas include Asset Management and Advice, Retirement Planning and Insurance Protection.

Be part of an inclusive, collaborative culture that rewards you for your contributions, and work with other talented individuals who share your passion for doing great work. You’ll also have plenty of opportunities to make your mark at the office and a difference in your community. So if you’re talented, driven and want to work for a strong, ethical company that cares, take the next step and create a career at Ameriprise India LLP.

Ameriprise India LLP is an equal opportunity employer. We consider all qualified applicants without regard to race, color, religion, sex, genetic information, age, sexual orientation, gender identity, disability, veteran status, marital status, family status or any other basis prohibited by law.

Full-Time/Part-Time: Full time
Timings: 2:00p-10:30p
India Business Unit: AWMPO AWMP&S President's Office
Job Family Group: Legal Affairs
Posted 3 weeks ago