0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Title: MLOps Engineer
Job Type: Contractor
Location: On-site (Gurugram, Pune or Bangalore)

Job Summary
Join our customer's dynamic team as a hands-on MLOps Engineer and play a pivotal role in driving the development, deployment, and automation of robust machine learning pipelines. Utilize your expertise in AWS and MLOps to help architect, optimize, and scale production-ready ML solutions across diverse projects. We value professionals who excel in both written and verbal communication, collaborating effectively in a high-performing environment.

Key Responsibilities
- Design, automate, and maintain end-to-end ML pipelines for model training, deployment, and monitoring on AWS infrastructure.
- Lead the development and operationalization of machine learning solutions using AWS services such as EKS, ECS, ECR, SageMaker, Step Functions, EventBridge, SNS/SQS, and Model Registry.
- Integrate MLflow to manage experiment tracking, model versioning, and lifecycle management.
- Implement and manage CI/CD pipelines specifically tailored for machine learning code and workflows.
- Collaborate closely with data scientists, engineers, and stakeholders to productionize ML models and ensure reliability, scalability, and security.
- Monitor and troubleshoot ML systems in production, proactively resolving issues and optimizing performance.
- Document workflows, processes, and architectural decisions with clarity and precision.

Required Skills and Qualifications
- Proven experience in MLOps with hands-on expertise in designing and deploying ML pipelines in production environments.
- Strong proficiency with AWS core services, especially EKS, ECS, ECR, SageMaker (jobs, batch transform, hyperparameter tuning), Step Functions, EventBridge, SNS/SQS, and Model Registry.
- Solid understanding of core machine learning concepts and best practices for productionizing ML code.
- Demonstrated experience with MLflow for managing model lifecycle and experiment tracking.
- Expertise in implementing CI/CD pipelines for ML projects.
- Excellent written and verbal communication skills, with a collaborative mindset.
- A passion for automation, optimization, and scalable system design.

Preferred Qualifications
- Experience supporting large-scale, distributed machine learning systems in a cloud environment.
- Familiarity with container orchestration and monitoring tools within AWS.
- Contributions to open-source MLOps or ML engineering projects.
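The registry-gated flow this role owns (train, register a version, promote through stages) can be sketched in plain Python. The `ModelRegistry` class and stage names below are illustrative stand-ins modeled on MLflow Model Registry conventions, not a real MLflow or SageMaker API:

```python
from dataclasses import dataclass

# Hypothetical model registry with staged promotion; stage names mirror
# MLflow Model Registry conventions, but the class itself is a sketch.
STAGES = ["None", "Staging", "Production", "Archived"]

@dataclass
class ModelVersion:
    name: str
    version: int
    stage: str = "None"

class ModelRegistry:
    def __init__(self):
        self._versions = {}  # (name, version) -> ModelVersion

    def register(self, name: str) -> ModelVersion:
        version = sum(1 for n, _ in self._versions if n == name) + 1
        mv = ModelVersion(name, version)
        self._versions[(name, version)] = mv
        return mv

    def get(self, name: str, version: int) -> ModelVersion:
        return self._versions[(name, version)]

    def promote(self, name: str, version: int, stage: str) -> ModelVersion:
        if stage not in STAGES:
            raise ValueError(f"unknown stage: {stage}")
        mv = self._versions[(name, version)]
        if stage == "Production":
            # Keep a single Production version per model: archive the
            # incumbent, as a CI/CD promotion gate typically would.
            for other in self._versions.values():
                if other.name == name and other.stage == "Production":
                    other.stage = "Archived"
        mv.stage = stage
        return mv
```

In a real pipeline the `promote` call would sit behind a CI/CD approval gate rather than being invoked directly.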
Posted 1 month ago
5.0 years
0 Lacs
Bengaluru East, Karnataka, India
Remote
We are seeking a high-impact AI/ML Engineer to lead the design, development, and deployment of machine learning and AI solutions across vision, audio, and language modalities. You'll be part of a fast-paced, outcome-oriented AI & Analytics team, working alongside data scientists, engineers, and product leaders to transform business use cases into real-time, scalable AI systems. This role demands strong technical leadership, a product mindset, and hands-on expertise in Computer Vision, Audio Intelligence, and Deep Learning.

Key Responsibilities
- Architect, develop, and deploy ML models for multimodal problems, including vision (image/video), audio (speech/sound), and NLP tasks.
- Own the complete ML lifecycle: data ingestion, model development, experimentation, evaluation, deployment, and monitoring.
- Leverage transfer learning, foundation models, or self-supervised approaches where suitable.
- Design and implement scalable training pipelines and inference APIs using frameworks like PyTorch or TensorFlow.
- Collaborate with MLOps, data engineering, and DevOps to productionize models using Docker, Kubernetes, or serverless infrastructure.
- Continuously monitor model performance and implement retraining workflows to ensure accuracy over time.
- Stay ahead of the curve on cutting-edge AI research (e.g., generative AI, video understanding, audio embeddings) and incorporate innovations into production systems.
- Write clean, well-documented, and reusable code to support agile experimentation and long-term platform sustainability.

Requirements
- Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related field.
- 5-8+ years of experience in AI/ML Engineering, with at least 3 years in applied deep learning.

Skills
- Languages: Expert in Python; good knowledge of R or Java is a plus.
- ML/DL Frameworks: Proficient with PyTorch, TensorFlow, Scikit-learn, ONNX.
- Computer Vision: Image classification, object detection, OCR, segmentation, tracking (YOLO, Detectron2, OpenCV, MediaPipe).
- Audio AI: Speech recognition (ASR), sound classification, audio embedding models (Wav2Vec2, Whisper, etc.).
- Data Engineering: Strong with Pandas, NumPy, SQL, and preprocessing pipelines for structured and unstructured data.
- NLP/LLMs: Working knowledge of Transformers, BERT/LLaMA, and the Hugging Face ecosystem is preferred.
- Cloud & MLOps: Experience with AWS/GCP/Azure, MLflow, SageMaker, Vertex AI, or Azure ML.
- Deployment & Infrastructure: Experience with Docker, Kubernetes, REST APIs, serverless ML inference.
- CI/CD & Version Control: Git, DVC, ML pipelines, Jenkins, Airflow, etc.

Soft Skills & Competencies
- Strong analytical and systems thinking; able to break down business problems into ML components.
- Excellent communication skills; able to explain models, results, and decisions to non-technical stakeholders.
- Proven ability to work cross-functionally with designers, engineers, product managers, and analysts.
- Demonstrated bias for action, rapid experimentation, and iterative delivery of impact.

Benefits
- Competitive compensation and full-time benefits.
- Opportunities for certification and professional growth.
- Flexible work hours and remote work options.
- Inclusive, innovative, and supportive team culture.
(ref:hirist.tech)
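The retraining workflows this role owns usually hinge on a drift trigger. A minimal sketch using the Population Stability Index over binned feature histograms; the 0.2 threshold is a common rule of thumb, not a standard, and the function names are illustrative:

```python
import math

def psi(expected_fracs, actual_fracs, eps=1e-6):
    """PSI = sum((a - e) * ln(a / e)) over matched histogram bins."""
    total = 0.0
    for e, a in zip(expected_fracs, actual_fracs):
        # Clamp to avoid log(0) on empty bins.
        e = max(e, eps)
        a = max(a, eps)
        total += (a - e) * math.log(a / e)
    return total

def needs_retraining(expected_fracs, actual_fracs, threshold=0.2):
    # Compare training-time bin fractions against live-traffic fractions.
    return psi(expected_fracs, actual_fracs) > threshold
```

Identical distributions score zero; a large shift in one bin trips the trigger, which would then kick off the retraining pipeline.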
Posted 1 month ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: Senior Consultant - AI Engineer Introduction To Role Are you ready to redefine an industry and change lives? As a Senior Consultant - AI Engineer, you'll develop and deploy key AI products that generate business and scientific insights through advanced data science techniques. Dive into building models using both foundational and innovative methods, processing structured and unstructured data, and collaborating closely with internal partners to solve complex problems in drug development, manufacturing, and supply chain. This is your chance to make a direct impact on patients by transforming our ability to develop life-changing medicines! About The Operations IT Team Operations IT is a global capability supporting the Global Operations organization across Pharmaceutical Technology Development, Manufacturing & Global Engineering, Quality Control, Sustainability, Supply Chain, Logistics, and Global External Sourcing and Procurement. We operate from key hubs in the UK, Sweden, the US, China, and our Global Technology Centers in India and Mexico. Our work directly impacts patients by transforming our ability to develop life-changing medicines, combining pioneering science with leading digital technology platforms and data. Accountabilities Drive the implementation of advanced modelling algorithms (e.g., classification, regression, clustering, NLP, image analysis, graph theory, generative AI) to generate actionable business insights. Mentor AI scientists, plan and supervise technical work, and collaborate with stakeholders. Work within an agile framework and in multi-functional teams to align AI solutions with business goals. Engage internal stakeholders and external partners for the successful delivery of AI solutions. Continuously monitor and optimize AI models to improve accuracy and efficiency (scalable, reliable, and well-maintained). Document processes, models, and key learnings & contribute to building internal AI capabilities. 
Ensure AI models adhere to ethical standards, privacy regulations, and fairness guidelines.

Essential Skills/Experience
- Bachelor's in operations research, mathematics, computer science, or a related quantitative field.
- Advanced expertise in Python and familiarity with database systems (e.g. SQL, NoSQL, Graph).
- Proven proficiency in at least 3 of the following domains:
  - Generative AI: advanced expertise working with LLMs/transformer models, AWS Bedrock, SageMaker, LangChain
  - Computer Vision: image classification and object detection
  - MLOps: putting models into production in the AWS ecosystem
  - Optimization: production scheduling, planning
  - Traditional ML: time series analysis, unsupervised anomaly detection, analysis of high-dimensional data
- Proficiency in ML libraries such as sklearn, pandas, TensorFlow/PyTorch.
- Experience productionizing ML/Gen AI services and working with complex datasets.
- Strong understanding of software development, algorithms, optimization, and scaling.
- Excellent communication and business analysis skills.

Desirable Skills/Experience
- Master’s or PhD in a relevant quantitative field.
- Cloud engineering experience (AWS cloud services)
- Snowflake
- Software development experience (e.g. React JS, Node JS)

When we put unexpected teams in the same room, we unleash bold thinking with the power to inspire life-changing medicines. In-person working gives us the platform we need to connect, work at pace and challenge perceptions. That's why we work, on average, a minimum of three days per week from the office. But that doesn't mean we're not flexible. We balance the expectation of being in the office while respecting individual flexibility. Join us in our unique and ambitious world. At AstraZeneca, we leverage technology to impact patients and ultimately save lives. Our global organization is purpose-led, ensuring that we can fulfill our mission to push the boundaries of science and discover life-changing medicines.
We take pride in working close to the cause, opening the locks to save lives, ultimately making a massive difference to the outside world. Here you'll find a dynamic environment where innovation thrives and diverse minds work inclusively together. Ready to make a meaningful impact? Apply now and be part of our journey to revolutionize healthcare! Date Posted 27-Jun-2025 Closing Date 09-Jul-2025 AstraZeneca embraces diversity and equality of opportunity. We are committed to building an inclusive and diverse team representing all backgrounds, with as wide a range of perspectives as possible, and harnessing industry-leading skills. We believe that the more inclusive we are, the better our work will be. We welcome and consider applications to join our team from all qualified candidates, regardless of their characteristics. We comply with all applicable laws and regulations on non-discrimination in employment (and recruitment), as well as work authorization and employment eligibility verification requirements.
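The "Traditional ML" domain called out above (time series analysis, unsupervised anomaly detection) can be illustrated with a rolling z-score detector. This is a generic sketch, not AstraZeneca's actual tooling, and the window and threshold values are arbitrary:

```python
import math
from collections import deque

def rolling_zscore_anomalies(series, window=5, z_thresh=3.0):
    """Flag indices whose value deviates from the trailing-window mean
    by more than z_thresh standard deviations."""
    anomalies = []
    buf = deque(maxlen=window)
    for i, x in enumerate(series):
        if len(buf) == window:
            mean = sum(buf) / window
            var = sum((v - mean) ** 2 for v in buf) / window
            std = math.sqrt(var)
            if std > 0 and abs(x - mean) / std > z_thresh:
                anomalies.append(i)
        buf.append(x)  # the anomalous point still enters the window
    return anomalies
```

On a sensor trace that hovers near 10 and spikes to 50, only the spike is flagged; in production the same idea usually runs with robust statistics (median/MAD) to resist contamination of the window.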
Posted 1 month ago
40.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Role Name: Principal Data Scientist
Department Name: AI & Data Science
Role GCF: 6
Hiring Manager Name: Swaroop Suresh

About Amgen
Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting-edge of innovation, using technology and human genetic data to push beyond what’s known today.

About The Role
Role Description: We are seeking a Principal AI Platform Architect—Amgen’s most senior individual-contributor authority on building and scaling end-to-end machine-learning and generative-AI platforms. Sitting at the intersection of engineering excellence and data-science enablement, you will design the core services, infrastructure and governance controls that allow hundreds of practitioners to prototype, deploy and monitor models—classical ML, deep learning and LLMs—securely and cost-effectively. Acting as a “player-coach,” you will establish platform strategy, define technical standards, and partner with DevOps, Security, Compliance and Product teams to deliver a frictionless, enterprise-grade AI developer experience.

Roles & Responsibilities: Define and evangelise a multi-year AI-platform vision and reference architecture that advances Amgen’s digital-transformation, cloud-modernisation and product-delivery objectives. Design and evolve foundational platform components—feature stores, model registry, experiment tracking, vector databases, real-time inference gateways and evaluation harnesses—using cloud-agnostic, micro-service principles.
Establish modelling and algorithm-selection standards that span classical ML, tree-based ensembles, clustering, time-series, deep-learning architectures (CNNs, RNNs, transformers) and modern LLM/RAG techniques; advise product squads on choosing and operationalising the right algorithm for each use-case. Orchestrate the full delivery pipeline for AI solutions —pilot → regulated validation → production rollout → post-launch monitoring—defining stage-gates, documentation and sign-off criteria that meet GxP/CSV and global privacy requirements. Scale AI workloads globally by engineering autoscaling GPU/CPU clusters, distributed training, low-latency inference and cost-aware load-balancing, maintaining <100 ms P95 latency while optimising spend. Implement robust MLOps and release-management practices (CI/CD for models, blue-green & canary deployments, automated rollback) to ensure zero-downtime releases and auditable traceability. Embed responsible-AI and security-by-design controls —data privacy, lineage tracking, bias monitoring, audit logging—through policy-as-code and automated guardrails. Package reusable solution blueprints and APIs that enable product teams to consume AI capabilities consistently, cutting time-to-production by ≥ 50 %. Provide deep technical mentorship and architecture reviews to product squads, troubleshooting performance bottlenecks and guiding optimisation of cloud resources. Develop TCO models and FinOps practices, negotiate enterprise contracts for cloud/AI infrastructure and deliver continuous cost-efficiency improvements. Establish observability frameworks —metrics, distributed tracing, drift detection, SLA dashboards—to keep models performant, reliable and compliant at scale. Track emerging technologies and regulations (serverless GPUs, confidential compute, EU AI Act) and integrate innovations that maintain Amgen’s leadership in enterprise AI. Must-Have Skills: 5-7 years in AI/ML, data platforms or enterprise software. 
Comprehensive command of machine-learning algorithms—regression, tree-based ensembles, clustering, dimensionality reduction, time-series models, deep-learning architectures (CNNs, RNNs, transformers) and modern LLM/RAG techniques—with the judgment to choose, tune and operationalise the right method for a given business problem. Proven track record selecting and integrating AI SaaS/PaaS offerings and building custom ML services at scale. Expert knowledge of GenAI tooling: vector databases, RAG pipelines, prompt-engineering DSLs and agent frameworks (e.g., LangChain, Semantic Kernel). Proficiency in Python and Java; containerisation (Docker/K8s); cloud (AWS, Azure or GCP) and modern DevOps/MLOps (GitHub Actions, Bedrock/SageMaker Pipelines). Strong business-case skills—able to model TCO vs. NPV and present trade-offs to executives. Exceptional stakeholder management; can translate complex technical concepts into concise, outcome-oriented narratives.

Good-to-Have Skills:
- Experience in the biotechnology or pharma industry is a big plus.
- Published thought-leadership or conference talks on enterprise GenAI adoption.
- Master’s degree in Computer Science and/or Data Science.
- Familiarity with Agile methodologies and Scaled Agile Framework (SAFe) for project delivery.

Education and Professional Certifications
Master’s degree with 10-14+ years of experience in Computer Science, IT or related field OR Bachelor’s degree with 12-17+ years of experience in Computer Science, IT or related field. Certifications on GenAI/ML platforms (AWS AI, Azure AI Engineer, Google Cloud ML, etc.) are a plus.

Soft Skills:
- Excellent analytical and troubleshooting skills.
- Strong verbal and written communication skills.
- Ability to work effectively with global, virtual teams.
- High degree of initiative and self-motivation.
- Ability to manage multiple priorities successfully.
- Team-oriented, with a focus on achieving team goals.
- Ability to learn quickly; organized and detail-oriented.
Strong presentation and public speaking skills. EQUAL OPPORTUNITY STATEMENT Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
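The canary-release practice named in the responsibilities can be sketched with a deterministic traffic splitter. The `route` function below is a hypothetical illustration of hash-based sticky bucketing, not Amgen's or any vendor's API:

```python
import hashlib

def route(user_id: str, canary_fraction: float = 0.1) -> str:
    """Send roughly `canary_fraction` of users to the candidate model,
    with sticky per-user assignment so a user never flip-flops."""
    digest = hashlib.sha256(user_id.encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64  # uniform in [0, 1)
    return "canary" if bucket < canary_fraction else "stable"
```

Because the bucket is derived from a hash of the user id rather than a random draw, the split is reproducible across servers, and ramping the canary is just raising `canary_fraction`; an automated rollback sets it back to zero.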
Posted 1 month ago
40.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Role Name: Principal Data Scientist Department Name: AI & Data Science Role GCF: 6 Hiring Manager Name: Swaroop Suresh About Amgen Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting-edge of innovation, using technology and human genetic data to push beyond what’s known today. About The Role Role Description: We are seeking a Principal AI Platform Architect —Amgen’s most senior individual-contributor authority on building and scaling end-to-end machine-learning and generative-AI platforms. Sitting at the intersection of engineering excellence and data-science enablement, you will design the core services, infrastructure and governance controls that allow hundreds of practitioners to prototype, deploy and monitor models—classical ML, deep learning and LLMs—securely and cost-effectively. Acting as a “player-coach,” you will establish platform strategy, define technical standards, and partner with DevOps, Security, Compliance and Product teams to deliver a frictionless, enterprise-grade AI developer experience. Roles & Responsibilities: Define and evangelise the multi-year AI-platform vision, architecture blueprints and reference implementations that align with Amgen’s digital-transformation and cloud-modernization objectives. Design and evolve foundational platform components—feature stores, model-registry, experiment-tracking, vector databases, real-time inference gateways and evaluation harnesses—using cloud-agnostic, micro-service principles. Implement robust MLOps pipelines (CI/CD for models, automated testing, canary releases, rollback) and enforce reproducibility from data ingestion to model serving. 
Embed responsible-AI and security-by-design controls—data-privacy, lineage tracking, bias monitoring, audit logging—through policy-as-code and automated guardrails. Serve as the ultimate technical advisor to product squads: codify best practices, review architecture/PRs, troubleshoot performance bottlenecks and guide optimisation of cloud resources. Partner with Procurement and Finance to develop TCO models, negotiate enterprise contracts for cloud/AI infrastructure, and continuously optimise spend. Drive platform adoption via self-service tools, documentation, SDKs and internal workshops; measure success through developer NPS, time-to-deploy and model uptime SLAs. Establish observability frameworks—metrics, distributed tracing, drift detection—to ensure models remain performant, reliable and compliant in production. Track emerging technologies (serverless GPUs, AI accelerators, confidential compute, policy frameworks like EU AI Act) and proactively integrate innovations that keep Amgen at the forefront of enterprise AI. Must-Have Skills: 5-7 years in AI/ML, data platforms or enterprise software, including 3+ years leading senior ICs or managers. Proven track record selecting and integrating AI SaaS/PaaS offerings and building custom ML services at scale. Expert knowledge of GenAI tooling: vector databases, RAG pipelines, prompt-engineering DSLs and agent frameworks (e.g., LangChain, Semantic Kernel). Proficiency in Python and Java; containerisation (Docker/K8s); cloud (AWS, Azure or GCP) and modern DevOps/MLOps (GitHub Actions, Bedrock/SageMaker Pipelines). Strong business-case skills—able to model TCO vs. NPV and present trade-offs to executives. Exceptional stakeholder management; can translate complex technical concepts into concise, outcome-oriented narratives. Good-to-Have Skills: Experience in Biotechnology or pharma industry is a big plus Published thought-leadership or conference talks on enterprise GenAI adoption. 
Master’s degree in Computer Science, Data Science or MBA with AI focus. Familiarity with Agile methodologies and Scaled Agile Framework (SAFe) for project delivery. Education and Professional Certifications Master’s degree with 10-14 + years of experience in Computer Science, IT or related field OR Bachelor’s degree with 12-17 + years of experience in Computer Science, IT or related field Certifications on GenAI/ML platforms (AWS AI, Azure AI Engineer, Google Cloud ML, etc.) are a plus. Soft Skills: Excellent analytical and troubleshooting skills. Strong verbal and written communication skills Ability to work effectively with global, virtual teams High degree of initiative and self-motivation. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals. Ability to learn quickly, be organized and detail oriented. Strong presentation and public speaking skills. EQUAL OPPORTUNITY STATEMENT Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
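The GenAI tooling listed above (vector databases, RAG pipelines) rests on embedding similarity search. A toy sketch of the retrieval step, with invented 3-dimensional vectors standing in for real embeddings and a vector database:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def top_k(query_vec, doc_vecs, k=2):
    """Rank documents by similarity to the query and keep the best k."""
    scored = sorted(doc_vecs.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]
```

In a production RAG pipeline the exhaustive `sorted` scan is replaced by an approximate nearest-neighbor index, and the returned passages are stuffed into the LLM prompt.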
Posted 1 month ago
8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Responsibilities:
- Overall 8+ years of experience, of which 4+ years in AI, ML, Gen AI and related technologies
- Proven track record of leading and scaling AI/ML teams and initiatives
- Strong understanding and hands-on experience in AI, ML, Deep Learning, and Generative AI concepts and applications
- Expertise in ML frameworks such as PyTorch and/or TensorFlow
- Experience with ONNX runtime, model optimization and hyperparameter tuning
- Solid experience with DevOps, SDLC, CI/CD, and MLOps practices. DevOps/MLOps tech stack: Docker, Kubernetes, Jenkins, Git, CI/CD, RabbitMQ, Kafka, Spark, Terraform, Ansible, Prometheus, Grafana, ELK stack
- Experience in production-level deployment of AI models at enterprise scale
- Proficiency in data preprocessing, feature engineering, and large-scale data handling
- Expertise in image and video processing, object detection, image segmentation, and related CV tasks
- Proficiency in text analysis, sentiment analysis, language modeling, and other NLP applications
- Experience with speech recognition, audio classification, and general signal processing techniques
- Experience with RAG, VectorDB, GraphDB and Knowledge Graphs
- Extensive experience with major cloud platforms (AWS, Azure, GCP) for AI/ML deployments; proficiency in using and integrating cloud-based AI services and tools (e.g., AWS SageMaker, Azure ML, Google Cloud AI)

Required Skills:
- Strong leadership and team management skills
- Excellent verbal and written communication skills
- Strategic thinking and problem-solving abilities
- Adaptability to the rapidly evolving AI/ML landscape
- Strong collaboration and interpersonal skills
- Strong understanding of industry dynamics and ability to translate market needs into technological solutions
- Demonstrated ability to foster a culture of innovation and creative problem-solving
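Hyperparameter tuning, mentioned above alongside ONNX and model optimization, is often done with random search when each training run is expensive. A minimal sketch, with a made-up quadratic standing in for a real validation loss:

```python
import random

def validation_loss(lr, reg):
    # Hypothetical objective with a known optimum at lr=0.1, reg=0.01;
    # a real implementation would train and evaluate a model here.
    return (lr - 0.1) ** 2 + (reg - 0.01) ** 2

def random_search(trials=200, seed=0):
    """Sample hyperparameters uniformly and keep the best trial."""
    rng = random.Random(seed)
    best = None
    for _ in range(trials):
        params = {"lr": rng.uniform(0.001, 1.0),
                  "reg": rng.uniform(0.0, 0.1)}
        loss = validation_loss(**params)
        if best is None or loss < best[0]:
            best = (loss, params)
    return best
```

Random search is a common baseline before reaching for Bayesian optimizers or SageMaker's built-in tuning jobs, because it parallelizes trivially and needs no surrogate model.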
Posted 1 month ago
10.0 years
0 Lacs
India
Remote
One of our clients is looking for a Cloud Engineer with 5–10 years of experience. This is a remote role open anywhere in India. Candidates should have strong cloud expertise, with experience in Python and AWS Bedrock.

About us: Hire22.ai connects top talent with executive roles anonymously and confidentially, transforming hiring through an AI-first, instant CoNCT model. Companies get interview-ready candidates in just 22 hours. No telecalling, no spam, no manual filtering.

YOUR ROLES AND RESPONSIBILITIES
* Lead the design and development of scalable, secure, and reliable cloud-native applications on AWS.
* Implement and manage containerized environments using Kubernetes, EKS, or OpenShift.
* Develop infrastructure as code (IaC) using Terraform or AWS CDK.
* Set up and maintain CI/CD pipelines using tools such as GitLab CI, GitHub Actions, Jenkins or ArgoCD.
* Collaborate with cross-functional teams to deliver high-quality, production-ready solutions.
* Provide mentorship to junior developers and participate in code and architecture reviews.
* Optimize and monitor applications and infrastructure performance using modern observability tools like Datadog.

QUALIFICATIONS:
* Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
* AWS or other cloud certifications (e.g., Solutions Architect, DevOps Engineer) are a plus.
* Strong communication and leadership skills.

Must-Have Skills:
* Proficiency in programming languages such as TypeScript, Python, Go, or similar.
* Strong hands-on experience with AWS Serverless technologies (e.g., AWS Lambda).
* Deep knowledge of AWS managed databases such as Amazon RDS and DynamoDB.
* Solid understanding of containerization using Kubernetes, AWS EKS/ECS, or OpenShift.
* Proven experience in CI/CD pipelines using GitLab CI, GitHub, Jenkins, ArgoCD, Azure DevOps, or similar.
* Experience with developer tools such as Git, Jira, and Confluence.

Should-Have Skills:
* AWS networking components like Amazon API Gateway.
* AWS storage services including Amazon S3.
* Exposure to AWS AI/ML services such as Amazon SageMaker, Amazon Bedrock, Amazon Q, or Amazon Q Business.
* Infrastructure as Code using Terraform or AWS CDK.

Nice-to-Have Skills:
* Experience with Contact Center solutions (e.g., Amazon Connect).
* Use of HashiCorp Vault for secrets management.
* Kafka messaging platforms (e.g., Apache Kafka, Confluent, or Amazon MSK).
* Exposure to multi-cloud environments (e.g., Azure, GCP).
* Experience with monitoring tools such as Datadog, Dynatrace, or New Relic.
* Understanding of FinOps and cloud cost optimization.
* Experience with Single Sign-On (SSO) technologies like OAuth2, OpenID Connect, JWT, or SAML.
* Working knowledge of Linux operating systems and shell scripting.

This Job CoNCT is Valid for Only 22 Hours. Please Apply Quickly.
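A recurring concern in the reliable cloud-native work described above is retrying throttled AWS API calls. A small sketch of exponential backoff with full jitter; the call and sleep hooks are injected so the logic runs without real AWS access, and the function name and defaults are illustrative:

```python
import random

def call_with_backoff(call, max_attempts=5, base=0.5, cap=8.0,
                      sleep=None, rng=None):
    """Invoke `call` up to max_attempts times, sleeping a jittered,
    exponentially growing delay between failures."""
    sleep = sleep or (lambda s: None)
    rng = rng or random.Random()
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of retries: surface the last error
            # Full jitter: sleep a random amount up to the capped backoff.
            sleep(rng.uniform(0, min(cap, base * 2 ** attempt)))
```

In practice the AWS SDKs ship their own retry modes, so a hand-rolled helper like this is mostly useful around non-SDK calls or for wrapping idempotent business operations.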
Posted 1 month ago
8.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In data analysis at PwC, you will focus on utilising advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. You will leverage skills in data manipulation, visualisation, and statistical modelling to support clients in solving complex business problems.

Years of Experience: Candidates with 8+ years of hands-on experience
Position: Manager
Industry: Telecom / Network Analytics / Customer Analytics

Required Skills
Successful candidates will have demonstrated the following skills and characteristics:

Must Have
- Experience working in the telecom industry across domains such as customer churn prediction, ARPU modeling, pricing optimization, and network performance analytics
- Strong understanding of telecom KPIs and datasets, such as CDRs, service usage data, network QoS, and customer lifecycle metrics
- Proficiency in machine learning techniques, including classification, regression, clustering, and time-series forecasting
- Strong command of statistical techniques (e.g., logistic regression, hypothesis testing, segmentation models)
- Hands-on experience in Python, SQL, and PySpark for large-scale data handling and modeling
- Experience with ML libraries such as scikit-learn, XGBoost, TensorFlow, PyTorch
- Knowledge of data wrangling, feature engineering, and preparing telecom datasets for modeling
- Proficiency in data visualization tools such as Power BI, Tableau, or Looker

Nice To Have
- Exposure to cloud platforms (Azure ML, AWS SageMaker, GCP Vertex AI)
- Experience working with telecom OSS/BSS systems or customer segmentation tools
- Familiarity with network performance analytics, anomaly detection, or real-time data processing
- Strong client communication and presentation skills

Roles And Responsibilities
- Lead and manage analytics projects within the telecom domain, driving design, development, and delivery of data science solutions
- Work closely with US-based clients and consultants to understand telecom-specific use cases and define analytical approaches
- Guide teams in building and validating predictive models that address key business needs like churn, upsell, ARPU, and service quality
- Translate business problems into data-driven solutions, and ensure alignment with KPIs and expected outcomes
- Review and manage data quality, model robustness, and explainability for telecom-specific deployments
- Present insights and analytical recommendations to internal stakeholders and client leadership teams
- Mentor and upskill junior team members while fostering a culture of collaboration and continuous learning
- Support practice development activities including IP creation, proposal support, and client engagement planning

Professional And Educational Background
BE / B.Tech / MCA / M.Sc / M.E / M.Tech / Master’s Degree / MBA from a reputed institute
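Churn prediction, the first use case named above, ultimately produces a per-subscriber probability. A toy logistic scorer with invented feature names and coefficients; a real model would be fitted on labeled churn data rather than hand-set:

```python
import math

# Hypothetical coefficients for illustration only; in practice these come
# from fitting a logistic regression on labeled telecom churn data.
COEFFS = {"intercept": -2.0, "complaints_90d": 0.8,
          "months_tenure": -0.05, "arpu_drop_pct": 0.03}

def churn_probability(features):
    """Logistic score: sigmoid of a linear combination of features."""
    z = COEFFS["intercept"]
    z += COEFFS["complaints_90d"] * features["complaints_90d"]
    z += COEFFS["months_tenure"] * features["months_tenure"]
    z += COEFFS["arpu_drop_pct"] * features["arpu_drop_pct"]
    return 1.0 / (1.0 + math.exp(-z))
```

The sign pattern encodes the usual telecom intuition: recent complaints and a falling ARPU push the score up, while long tenure pulls it down, which keeps the model explainable to client stakeholders.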
Posted 1 month ago
0 years
0 Lacs
Borivali, Maharashtra, India
On-site
Description The Amazon Web Services Professional Services (ProServe) team is seeking a skilled Delivery Consultant to join our team at Amazon Web Services (AWS). In this role, you'll work closely with customers to design, implement, and manage AWS solutions that meet their technical requirements and business objectives. You'll be a key player in driving customer success through their cloud journey, providing technical expertise and best practices throughout the project lifecycle. Possessing a deep understanding of AWS products and services, as a Delivery Consultant you will be proficient in architecting complex, scalable, and secure solutions tailored to meet the specific needs of each customer. You’ll work closely with stakeholders to gather requirements, assess current infrastructure, and propose effective migration strategies to AWS. As trusted advisors to our customers, providing guidance on industry trends, emerging technologies, and innovative solutions, you will be responsible for leading the implementation process, ensuring adherence to best practices, optimizing performance, and managing risks throughout the project. The AWS Professional Services organization is a global team of experts that help customers realize their desired business outcomes when using the AWS Cloud. We work together with customer teams and the AWS Partner Network (APN) to execute enterprise cloud computing initiatives. Our team provides assistance through a collection of offerings which help customers achieve specific outcomes related to enterprise cloud adoption. We also deliver focused guidance through our global specialty practices, which cover a variety of solutions, technologies, and industries. 
Key job responsibilities As an experienced technology professional, you will be responsible for: Designing, implementing, and building complex, scalable, and secure GenAI and ML applications and models built on AWS tailored to customer needs Providing technical guidance and implementation support throughout project delivery, with a focus on using AWS AI/ML services Collaborating with customer stakeholders to gather requirements and propose effective model training, building, and deployment strategies Acting as a trusted advisor to customers on industry trends and emerging technologies Sharing knowledge within the organization through mentoring, training, and creating reusable artifacts About The Team Diverse Experiences: AWS values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed in the job below, we encourage candidates to apply. If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let it stop you from applying. Why AWS? Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that’s why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses. Inclusive Team Culture - Here at AWS, it’s in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empowers us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (diversity) conferences, inspire us to never stop embracing our uniqueness. Mentorship & Career Growth - We’re continuously raising our performance bar as we strive to become Earth’s Best Employer. 
That’s why you’ll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional. Work/Life Balance - We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there’s nothing we can’t achieve in the cloud. Basic Qualifications Experience in cloud architecture and implementation Bachelor's degree in Computer Science, Engineering, related field, or equivalent experience Proven track record in designing and developing end-to-end Machine Learning and Generative AI solutions, from conception to deployment Experience in applying best practices and evaluating alternative and complementary ML and foundational models suitable for given business contexts Foundational knowledge of data modeling principles, statistical analysis methodologies, and demonstrated ability to extract meaningful insights from complex, large-scale datasets Preferred Qualifications AWS experience preferred, with proficiency in a wide range of AWS services (e.g., Bedrock, SageMaker, EC2, S3, Lambda, IAM, VPC, CloudFormation) AWS Professional level certifications (e.g., Machine Learning Speciality, Machine Learning Engineer Associate, Solutions Architect Professional) preferred Experience with automation and scripting (e.g., Terraform, Python) Knowledge of security and compliance standards (e.g., HIPAA, GDPR) Strong communication skills with the ability to explain technical concepts to both technical and non-technical audiences Experience in developing and optimizing foundation models (LLMs), including fine-tuning, continuous training, small language model development, and implementation of Agentic AI systems Experience in developing and deploying end-to-end machine learning and deep learning solutions Our inclusive culture empowers Amazonians to deliver 
the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - AWS ProServe IN - Maharashtra Job ID: A2944592
Posted 1 month ago
4.0 years
0 Lacs
Delhi
On-site
Job Title: Software Engineer – AI/ML Location: Mohali/Chandigarh/Delhi Experience: 4-8 years About the Role: We are seeking a highly experienced and innovative AI & ML engineer to lead the design, development, and deployment of advanced AI/ML solutions, including Large Language Models (LLMs), for enterprise-grade applications. You will work closely with cross-functional teams to drive AI strategy, define architecture, and ensure scalable and efficient implementation of intelligent systems. Key Responsibilities: Design and architect end-to-end AI/ML solutions including data pipelines, model development, training, and deployment. Develop and implement ML models for classification, regression, NLP, computer vision, and recommendation systems. Build, fine-tune, and integrate Large Language Models (LLMs) such as GPT, BERT, LLaMA, etc., into enterprise applications. Evaluate and select appropriate frameworks, tools, and technologies for AI/ML projects. Lead AI experimentation, proof-of-concepts (PoCs), and model performance evaluations. Collaborate with data engineers, product managers, and software developers to integrate models into production environments. Ensure robust MLOps practices, version control, reproducibility, and model monitoring. Stay up to date with advancements in AI/ML, especially in generative AI and LLMs, and apply them innovatively. Requirements : Bachelor’s or Master’s degree in Computer Science, Data Science, AI/ML, or related field. Min 4+ years of experience in AI/ML. Deep understanding of machine learning algorithms, neural networks, and deep learning architectures. Proven experience working with LLMs, transformer models, and prompt engineering. Hands-on experience with ML frameworks such as TensorFlow, PyTorch, Hugging Face, LangChain, etc. Proficiency in Python and experience with cloud platforms (AWS, Azure, or GCP) for ML workloads. Strong knowledge of MLOps tools (MLflow, Kubeflow, SageMaker, etc.) and practices. 
Excellent problem-solving and communication skills. Preferred Qualifications: Experience with vector databases (e.g., Pinecone, FAISS, Weaviate) and embeddings. Exposure to real-time AI systems, streaming data, or edge AI. Contributions to AI research, open-source projects, or publications in AI/ML. Interested candidates can apply here or share their resume at hr@softprodigy.com
Posted 1 month ago
40.0 years
5 - 10 Lacs
Hyderābād
On-site
India - Hyderabad JOB ID: R-216765 ADDITIONAL LOCATIONS: India - Hyderabad WORK LOCATION TYPE: On Site DATE POSTED: Jun. 27, 2025 CATEGORY: Information Systems Principal Data Scientist Role Name: Principal Data Scientist Department Name: AI & Data Science Role GCF: 6 Hiring Manager Name: Swaroop Suresh ABOUT AMGEN Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting-edge of innovation, using technology and human genetic data to push beyond what’s known today. ABOUT THE ROLE Role Description: We are seeking a Principal AI Platform Architect —Amgen’s most senior individual-contributor authority on building and scaling end-to-end machine-learning and generative-AI platforms. Sitting at the intersection of engineering excellence and data-science enablement, you will design the core services, infrastructure and governance controls that allow hundreds of practitioners to prototype, deploy and monitor models—classical ML, deep learning and LLMs—securely and cost-effectively. Acting as a “player-coach,” you will establish platform strategy, define technical standards, and partner with DevOps, Security, Compliance and Product teams to deliver a frictionless, enterprise-grade AI developer experience. Roles & Responsibilities: Define and evangelise the multi-year AI-platform vision, architecture blueprints and reference implementations that align with Amgen’s digital-transformation and cloud-modernization objectives. Design and evolve foundational platform components—feature stores, model-registry, experiment-tracking, vector databases, real-time inference gateways and evaluation harnesses—using cloud-agnostic, micro-service principles. 
Implement robust MLOps pipelines (CI/CD for models, automated testing, canary releases, rollback) and enforce reproducibility from data ingestion to model serving. Embed responsible-AI and security-by-design controls—data-privacy, lineage tracking, bias monitoring, audit logging—through policy-as-code and automated guardrails. Serve as the ultimate technical advisor to product squads: codify best practices, review architecture/PRs, troubleshoot performance bottlenecks and guide optimisation of cloud resources. Partner with Procurement and Finance to develop TCO models, negotiate enterprise contracts for cloud/AI infrastructure, and continuously optimise spend. Drive platform adoption via self-service tools, documentation, SDKs and internal workshops; measure success through developer NPS, time-to-deploy and model uptime SLAs. Establish observability frameworks—metrics, distributed tracing, drift detection—to ensure models remain performant, reliable and compliant in production. Track emerging technologies (serverless GPUs, AI accelerators, confidential compute, policy frameworks like EU AI Act) and proactively integrate innovations that keep Amgen at the forefront of enterprise AI. Must-Have Skills: 5-7 years in AI/ML, data platforms or enterprise software, including 3+ years leading senior ICs or managers. Proven track record selecting and integrating AI SaaS/PaaS offerings and building custom ML services at scale. Expert knowledge of GenAI tooling: vector databases, RAG pipelines, prompt-engineering DSLs and agent frameworks (e.g., LangChain, Semantic Kernel). Proficiency in Python and Java; containerisation (Docker/K8s); cloud (AWS, Azure or GCP) and modern DevOps/MLOps (GitHub Actions, Bedrock/SageMaker Pipelines). Strong business-case skills—able to model TCO vs. NPV and present trade-offs to executives. Exceptional stakeholder management; can translate complex technical concepts into concise, outcome-oriented narratives. 
Good-to-Have Skills: Experience in Biotechnology or pharma industry is a big plus Published thought-leadership or conference talks on enterprise GenAI adoption. Master’s degree in Computer Science, Data Science or MBA with AI focus. Familiarity with Agile methodologies and Scaled Agile Framework (SAFe) for project delivery. Education and Professional Certifications Master’s degree with 10-14 + years of experience in Computer Science, IT or related field OR Bachelor’s degree with 12-17 + years of experience in Computer Science, IT or related field Certifications on GenAI/ML platforms (AWS AI, Azure AI Engineer, Google Cloud ML, etc.) are a plus. Soft Skills: Excellent analytical and troubleshooting skills. Strong verbal and written communication skills Ability to work effectively with global, virtual teams High degree of initiative and self-motivation. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals. Ability to learn quickly, be organized and detail oriented. Strong presentation and public speaking skills. EQUAL OPPORTUNITY STATEMENT Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
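The Amgen role above calls for observability frameworks with drift detection. Purely as a rough, library-free illustration of one common drift metric — the Population Stability Index (PSI) — and not Amgen's actual implementation, a minimal sketch might look like this (the quantile bucketing and the 0.2 rule-of-thumb threshold in the comment are assumptions):

```python
import math

def psi(expected, actual, buckets=10):
    """Population Stability Index between two numeric samples.

    Bins both samples on quantile edges taken from the *expected*
    (reference) sample, then sums (a - e) * ln(a / e) over bucket shares.
    """
    if not expected or not actual:
        raise ValueError("both samples must be non-empty")
    srt = sorted(expected)
    # quantile bucket edges derived from the reference sample
    edges = [srt[int(i * (len(srt) - 1) / buckets)] for i in range(1, buckets)]

    def shares(sample):
        counts = [0] * buckets
        for x in sample:
            b = sum(1 for e in edges if x > e)  # bucket index for x
            counts[b] += 1
        # small floor avoids log(0) for empty buckets
        return [max(c / len(sample), 1e-6) for c in counts]

    e_sh, a_sh = shares(expected), shares(actual)
    return sum((a - e) * math.log(a / e) for e, a in zip(e_sh, a_sh))

# identical distributions give PSI near 0; a commonly cited (assumed)
# rule of thumb flags PSI > 0.2 as significant drift
ref = [float(i) for i in range(1000)]
assert psi(ref, ref) < 0.01
assert psi(ref, [x + 500.0 for x in ref]) > 0.2
```

In a production observability stack this computation would run on a schedule against serving logs, with the reference sample frozen at training time.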
Posted 1 month ago
8.0 years
2 - 4 Lacs
Mohali
On-site
Position: AI/ML Lead Experience: 8+ years Location: Mohali Key Responsibilities: Lead the end-to-end design, architecture, and delivery of complex AI/ML solutions, including scalable data pipelines, advanced model development, training, deployment, and post-deployment support. Strategically develop and implement machine learning models across diverse domains such as natural language processing (NLP), computer vision, recommendation systems, classification, and regression. Drive innovation by integrating and fine-tuning Large Language Models (LLMs) like GPT, BERT, LLaMA, and similar state-of-the-art transformer architectures into enterprise-grade applications. Own the selection and implementation of appropriate ML frameworks, tools, and cloud technologies aligned with business goals and technical requirements. Spearhead AI/ML experimentation, Proof-of-Concepts (PoCs), benchmarking, and model optimization initiatives. Collaborate cross-functionally with data engineering, software development, and product teams to seamlessly integrate ML capabilities into production systems. Establish and enforce robust MLOps pipelines covering CI/CD for ML, model versioning, reproducibility, and monitoring to ensure reliability at scale. Stay at the forefront of AI advancements, particularly in generative AI and LLM ecosystems, and champion their adoption across business use cases. Required Qualifications: Bachelor’s or Master’s degree in Computer Science, Artificial Intelligence, Machine Learning, or a closely related field. Hands-on experience in the AI/ML domain, with a proven track record of delivering production-grade ML systems. Expertise in machine learning algorithms, deep learning architectures, and advanced neural network design. Demonstrated hands-on experience with LLMs, transformer-based models, prompt engineering, and embeddings. Proficiency with ML frameworks and libraries such as TensorFlow, PyTorch, Hugging Face Transformers, LangChain, etc. 
Strong programming skills in Python and familiarity with cloud platforms (AWS, Azure, or GCP) for scalable ML workloads. Solid experience with MLOps tools and practices — including MLflow, Kubeflow, SageMaker, or equivalent. Excellent leadership, analytical thinking, and communication skills. Preferred Qualifications: Experience with vector databases. Exposure to real-time AI systems, edge AI, and streaming data environments. Active contributions to open-source projects, research publications, or thought leadership in AI/ML. Certification in AI/ML is a big plus. Interested candidates can apply directly or share their resume at shubhra_bhugra@softprodigy.com
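The posting above lists MLflow-style tooling for model versioning and monitoring. Purely to illustrate the concepts involved — runs, logged parameters, metrics, and selecting a best model — here is a tiny in-memory sketch; the class and method names are invented for illustration and are not the MLflow API:

```python
from dataclasses import dataclass, field

@dataclass
class Run:
    """One training run: its hyperparameters and logged metrics."""
    params: dict
    metrics: dict = field(default_factory=dict)

class ExperimentTracker:
    """Toy stand-in for an experiment tracker such as MLflow."""

    def __init__(self):
        self.runs = []

    def start_run(self, **params):
        run = Run(params=params)
        self.runs.append(run)
        return run

    def log_metric(self, run, name, value):
        run.metrics[name] = value

    def best_run(self, metric, higher_is_better=True):
        scored = [r for r in self.runs if metric in r.metrics]
        key = lambda r: r.metrics[metric]
        return max(scored, key=key) if higher_is_better else min(scored, key=key)

tracker = ExperimentTracker()
for lr in (0.1, 0.01, 0.001):
    run = tracker.start_run(lr=lr)
    # stand-in "validation accuracy"; a real pipeline would train a model here
    tracker.log_metric(run, "val_acc", 0.9 - abs(lr - 0.01))

best = tracker.best_run("val_acc")
assert best.params["lr"] == 0.01
```

A real tracker adds persistence, artifact storage, and a registry with stage transitions (staging/production), which is what "model versioning" in the posting refers to.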
Posted 1 month ago
0 years
4 - 7 Lacs
Chennai
On-site
Job Title: Senior Consultant - AI Engineer Introduction to role: Are you ready to redefine an industry and change lives? As a Senior Consultant - AI Engineer, you'll develop and deploy key AI products that generate business and scientific insights through advanced data science techniques. Dive into building models using both foundational and innovative methods, processing structured and unstructured data, and collaborating closely with internal partners to solve complex problems in drug development, manufacturing, and supply chain. This is your chance to make a direct impact on patients by transforming our ability to develop life-changing medicines! About the Operations IT team Operations IT is a global capability supporting the Global Operations organization across Pharmaceutical Technology Development, Manufacturing & Global Engineering, Quality Control, Sustainability, Supply Chain, Logistics, and Global External Sourcing and Procurement. We operate from key hubs in the UK, Sweden, the US, China, and our Global Technology Centers in India and Mexico. Our work directly impacts patients by transforming our ability to develop life-changing medicines, combining pioneering science with leading digital technology platforms and data. Accountabilities: Drive the implementation of advanced modelling algorithms (e.g., classification, regression, clustering, NLP, image analysis, graph theory, generative AI) to generate actionable business insights. Mentor AI scientists, plan and supervise technical work, and collaborate with stakeholders. Work within an agile framework and in multi-functional teams to align AI solutions with business goals. Engage internal stakeholders and external partners for the successful delivery of AI solutions. Continuously monitor and optimize AI models to improve accuracy and efficiency (scalable, reliable, and well-maintained). Document processes, models, and key learnings & contribute to building internal AI capabilities. 
Ensure AI models adhere to ethical standards, privacy regulations, and fairness guidelines. Essential Skills/Experience: Bachelor's in operations research, mathematics, computer science, or related quantitative field. Advanced expertise in Python and familiarity with database systems (e.g. SQL, NoSQL, Graph). Proven proficiency in at least 3 of the following domains: Generative AI: advanced expertise working with: LLMs/transformer models, AWS Bedrock, SageMaker, LangChain Computer Vision: image classification and object detection MLOps: putting models into production in the AWS ecosystem Optimization: production scheduling, planning Traditional ML: time series analysis, unsupervised anomaly detection, analysis of high dimensional data Proficiency in ML libraries sklearn, pandas, TensorFlow/PyTorch Experience productionizing ML/ Gen AI services and working with complex datasets. Strong understanding of software development, algorithms, optimization, and scaling. Excellent communication and business analysis skills. Desirable Skills/Experience: Master’s or PhD in a relevant quantitative field. Cloud engineering experience (AWS cloud services) Snowflake Software development experience (e.g. React JS, Node JS) When we put unexpected teams in the same room, we unleash bold thinking with the power to inspire life-changing medicines. In-person working gives us the platform we need to connect, work at pace and challenge perceptions. That's why we work, on average, a minimum of three days per week from the office. But that doesn't mean we're not flexible. We balance the expectation of being in the office while respecting individual flexibility. Join us in our unique and ambitious world. At AstraZeneca, we leverage technology to impact patients and ultimately save lives. Our global organization is purpose-led, ensuring that we can fulfill our mission to push the boundaries of science and discover life-changing medicines. 
We take pride in working close to the cause, opening the locks to save lives, ultimately making a massive difference to the outside world. Here you'll find a dynamic environment where innovation thrives and diverse minds work inclusively together. Ready to make a meaningful impact? Apply now and be part of our journey to revolutionize healthcare!
Posted 1 month ago
5.0 years
0 Lacs
Ahmedabad
On-site
Growexx is looking for a smart and passionate Senior Data Scientist who will help build great AI Agents for different business needs. Key Responsibilities Design and implement LLM-powered video conversation systems to support use cases such as real-time customer service, sales enablement, and personalized product walkthroughs, integrating video streaming systems and leveraging multimodal models for speech, text, and visual understanding. Develop and fine-tune LLM-driven solutions for tasks such as text summarization, customer support automation, personalization, and user journey understanding. Deploy LLM and ML models into production environments for activation across websites, product applications, and sales/marketing channels. Conduct comprehensive evaluation of LLMs, including performance benchmarking (accuracy, latency, token usage, cost), prompt effectiveness testing, fine-tuning impact analysis, and safety/bias assessments. Integrate LLM agents with APIs, internal knowledge bases, retrieval systems (RAG architectures), and external tools to enable autonomous or semi-autonomous decision-making. Build a deep understanding of business models, objectives, challenges, and opportunities by working closely with leadership and key stakeholders. Document model methodologies, evaluation frameworks, agent workflows, deployment architectures, and post-activation performance results in a structured and reproducible format. Stay current with advancements in LLMs, agentic AI, retrieval-augmented generation (RAG), and ML technologies to recommend and implement innovative solutions. Key Skills Experience using Python, SciKit, SQL, Jupyter Notebooks, Amazon SageMaker, GitHub & AWS Bedrock. Experience working with multimodal AI systems for video-based conversation, speech-to-text, text-to-speech, and LLM-driven dialogue orchestration for interactive, real-time user engagement. 
Proven experience designing, fine-tuning, evaluating, and deploying Large Language Models (LLMs) and generative AI applications. Experience designing and deploying agentic systems using frameworks such as LangChain, AutoGen, CrewAI, and custom function-calling pipelines. Expertise integrating LLM agents with APIs, knowledge bases, retrieval systems (RAG architecture), and orchestrating dynamic multi-agent workflows. Strong understanding of evaluation metrics for LLMs, including prompt testing, token optimization, bias/safety analysis, latency, and cost benchmarks. Expertise in designing and executing A/B, multivariate, and lift tests to measure activated ML/LLM model performance across digital and offline channels. Continuous learner, keeping up-to-date with the latest advances in transformers, generative AI models, retrieval-augmented generation (RAG), and agentic AI frameworks. Education and Experience B.Tech or B.E. (Computer Science / Information Technology) 5+ years as a Data Scientist or similar roles. Analytical and Personal skills Must have good logical reasoning and analytical skills. Good communication skills in English – both written and verbal. Demonstrate ownership and accountability of their work. Attention to detail.
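The LLM benchmarking the role above describes (accuracy, latency, token usage, cost) is largely bookkeeping over per-request measurements. A hedged sketch of that aggregation — the per-1K-token price and the nearest-rank p95 are illustrative assumptions, not any real model's pricing or this team's method — could look like:

```python
from statistics import mean

def summarize(samples, price_per_1k_tokens=0.002):
    """Aggregate per-request LLM measurements into benchmark numbers.

    Each sample is a (latency_seconds, total_tokens, correct: bool) tuple.
    The price is an illustrative assumption, not a real model's rate card.
    """
    lat = sorted(s[0] for s in samples)
    tokens = [s[1] for s in samples]
    # simple nearest-rank 95th percentile latency
    p95 = lat[max(0, int(round(0.95 * len(lat))) - 1)]
    return {
        "accuracy": mean(1.0 if s[2] else 0.0 for s in samples),
        "p95_latency_s": p95,
        "mean_tokens": mean(tokens),
        "cost_usd": sum(tokens) / 1000 * price_per_1k_tokens,
    }

runs = [(0.8, 150, True), (1.2, 200, True), (0.9, 120, False), (2.5, 400, True)]
report = summarize(runs)
assert report["accuracy"] == 0.75
assert report["p95_latency_s"] == 2.5
```

The same shape extends naturally to per-prompt-variant grouping when running the prompt-effectiveness tests the posting mentions.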
Posted 1 month ago
3.0 years
0 Lacs
Lucknow, Uttar Pradesh, India
On-site
Job description Technical Experience: 3+ years of deep AWS experience, specifically with Kinesis, Glue, Lambda, S3, and Step Functions. 5-7 years of hands-on experience in data pipeline development and ETL processes. Strong proficiency in NodeJS/JavaScript and Java for serverless and containerized applications. Production experience with Apache Spark, Apache Flink, or similar big data processing frameworks. Data Engineering Expertise: Proven experience with real-time streaming architectures and event-driven systems. Hands-on experience with Parquet, Avro, Delta Lake, and columnar storage optimization. Experience implementing data quality frameworks such as Great Expectations or similar tools. Knowledge of star schema modeling, slowly changing dimensions, and data warehouse design patterns. Experience with medallion architecture or similar progressive data refinement strategies. AWS Skills: Experience with Amazon EMR, Amazon MSK (Kafka), or Amazon Kinesis Analytics. Knowledge of Apache Airflow for workflow orchestration. Experience with DynamoDB, ElastiCache, and Neptune for specialized data storage. Familiarity with machine learning pipelines and Amazon SageMaker integration.
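The posting above asks for knowledge of slowly changing dimensions. As a rough illustration of the Type 2 pattern only — close the current row, append a new versioned row — and not this team's implementation, an in-memory sketch (field names are invented for the example) might be:

```python
def scd2_upsert(dim_rows, key, new_attrs, as_of):
    """Apply an SCD Type 2 change to an in-memory dimension table.

    Rows are dicts with 'key', attribute fields, 'valid_from', and
    'valid_to' (None marks the current version). Unchanged records
    are left alone; changed ones get a new row, preserving history.
    """
    current = next(
        (r for r in dim_rows if r["key"] == key and r["valid_to"] is None), None
    )
    if current is not None:
        if all(current.get(k) == v for k, v in new_attrs.items()):
            return dim_rows  # no-op: attributes identical to current version
        current["valid_to"] = as_of  # close out the old version
    dim_rows.append(
        {"key": key, **new_attrs, "valid_from": as_of, "valid_to": None}
    )
    return dim_rows

dim = [{"key": "cust-1", "city": "Pune", "valid_from": "2024-01-01", "valid_to": None}]
scd2_upsert(dim, "cust-1", {"city": "Delhi"}, as_of="2024-06-01")
assert dim[0]["valid_to"] == "2024-06-01"              # old row closed
assert dim[1]["city"] == "Delhi" and dim[1]["valid_to"] is None
```

In a warehouse this same logic is typically expressed as a MERGE over the dimension table rather than Python, but the row lifecycle is identical.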
Posted 1 month ago
10.0 - 15.0 years
0 Lacs
Punjab, India
On-site
Job Title: Director AI Automation & Data Sciences Experience Required: 10-15 Years Industry: Legal Technology / Cybersecurity / Data Science Department: Technology & Innovation About The Role We are seeking an exceptional Director of AI Automation & Data Sciences to lead the innovation engine behind our Managed Document Review and Cyber Incident Response services. This is a senior leadership role where you'll leverage advanced AI and data science to drive automation, scalability, and differentiation in service delivery. If you are a visionary leader who thrives at the intersection of technology and operations, this is your opportunity to make a global impact. Why Join Us: Cutting-edge AI & Data Science technologies at your fingertips. Globally recognized Cyber Incident Response Team. Prestigious clientele of Fortune 500 companies and industry leaders. Award-winning, inspirational workspaces. Transparent, inclusive, and growth-driven culture. Industry-best compensation. Responsibilities (KRAs): Lead and scale AI & data science initiatives across Document Review and Incident Response programs. Architect intelligent automation workflows to streamline legal review, anomaly detection, and threat analytics. Drive end-to-end deployment of ML and NLP models into production environments. Identify and implement AI use cases that deliver measurable business outcomes. Collaborate with cross-functional teams including Legal Tech, Cybersecurity, Product, and Engineering. Manage and mentor a high-performing team of data scientists, ML engineers, and automation specialists. Evaluate and integrate third-party AI platforms and open-source tools for accelerated innovation. Ensure AI models comply with privacy, compliance, and ethical AI principles. Define and monitor key metrics to track model performance and automation ROI. Stay abreast of emerging trends in generative AI, LLMs, and cybersecurity. Skills & Tools: Proficiency in Python, R, or Scala for data science and automation scripting. 
Expertise in Machine Learning, Deep Learning, and NLP techniques. Hands-on experience with LLMs, Transformer models, and Vector Databases. Strong knowledge of Data Engineering pipelines: ETL, data lakes, and real-time analytics. Familiarity with Cyber Threat Intelligence, anomaly detection, and event correlation. Experience with platforms like AWS SageMaker, Azure ML, Databricks, HuggingFace. Advanced use of TensorFlow, PyTorch, spaCy, Scikit-learn, or similar frameworks. Knowledge of containerization (Docker, Kubernetes) and CI/CD pipelines for MLOps. Strong command of SQL, NoSQL, and big data tools (Spark, etc.). Qualifications: Bachelor's or Master's in Computer Science, Data Science, AI, or a related field. 10-15 years of progressive experience in AI, Data Science, or Automation. Proven leadership of cross-functional technology teams in high-growth environments. Experience working in LegalTech, Cybersecurity, or related high-compliance industries preferred (ref:hirist.tech)
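The vector databases listed above boil down to nearest-neighbour search over embeddings. As a dependency-free sketch of the core operation — cosine-similarity retrieval, with toy 3-dimensional vectors standing in for real embedding-model output — one might write:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def top_k(query, index, k=2):
    """Return the k document ids whose vectors are most similar to query."""
    ranked = sorted(index.items(), key=lambda kv: cosine(query, kv[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

# toy 3-d vectors; real systems use embedding output with hundreds of dimensions
index = {
    "incident-report": [0.9, 0.1, 0.0],
    "privacy-policy":  [0.0, 1.0, 0.2],
    "review-protocol": [0.8, 0.2, 0.1],
}
assert top_k([1.0, 0.0, 0.0], index, k=2) == ["incident-report", "review-protocol"]
```

Production vector stores replace the linear scan with approximate-nearest-neighbour indexes (HNSW, IVF) so retrieval stays fast at millions of documents.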
Posted 1 month ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Join us as a Senior DevOps Engineer at Barclays, where you'll spearhead the evolution of our digital landscape, driving innovation and excellence. You'll harness cutting-edge technology to revolutionise our digital offerings, ensuring unparalleled customer experiences. As part of a team of developers, you will deliver the technology stack, using strong analytical and problem-solving skills to understand the business requirements and deliver quality solutions. To be successful as a Senior DevOps Engineer you should have experience with: Basic/Essential Qualifications Experience working with containers, Kubernetes and related technologies on Cloud platform(s) – AWS preferably Experience working in setting up cloud infrastructure using CloudFormation Experience of DevOps tooling such as Jenkins, Bitbucket, Nexus, Gitlab, Jira etc. Experience in working with and configuring a wide range of AWS services such as API Gateway, Lambda, ECS, SageMaker, Bedrock, EC2, RDS etc. Experience on virtual server hosting (EC2), container management (Kubernetes, ECS or EKS) as well as Windows and Linux Operating Systems Network experience, aware of cloud network patterns such as VPC, network interconnect, subnets, peering, firewalls, etc. Some Other Highly Valued Skills Include Strong programming experience in Python Experience working with ML libraries e.g., scikit-learn, TensorFlow, PyTorch. Proficient with Jenkins, Bitbucket/Gitlab and Git workflows Exposure to working within a controlled environment such as banking and financial services Experience on Docker and at least one Docker container orchestration – Amazon ECS/EKS, Kubernetes. Relevant AWS certification(s) You may be assessed on key critical skills relevant for success in the role, such as risk and controls, change and transformation, business acumen, strategic thinking and digital and technology, as well as job-specific technical skills. This role is based out of Pune. 
Purpose of the role To build and maintain the systems that collect, store, process, and analyse data, such as data pipelines, data warehouses and data lakes, to ensure that all data is accurate, accessible, and secure. Accountabilities Build and maintain data architecture pipelines that enable the transfer and processing of durable, complete and consistent data. Design and implement data warehouses and data lakes that manage the appropriate data volumes and velocity and adhere to the required security measures. Develop processing and analysis algorithms fit for the intended data complexity and volumes. Collaborate with data scientists to build and deploy machine learning models. Assistant Vice President Expectations To advise and influence decision making, contribute to policy development and take responsibility for operational effectiveness. Collaborate closely with other functions/business divisions. Lead a team performing complex tasks, using well-developed professional knowledge and skills to deliver work that impacts the whole business function. Set objectives and coach employees in pursuit of those objectives, appraisal of performance relative to objectives and determination of reward outcomes. If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard. The four LEAD behaviours are: L – Listen and be authentic, E – Energise and inspire, A – Align across the enterprise, D – Develop others. OR for an individual contributor, they will lead collaborative assignments and guide team members through structured assignments, identify the need for the inclusion of other areas of specialisation to complete assignments. 
They will identify new directions for assignments and/or projects, identifying a combination of cross-functional methodologies or practices to meet required outcomes. Consult on complex issues, providing advice to People Leaders to support the resolution of escalated issues. Identify ways to mitigate risk and develop new policies/procedures in support of the control and governance agenda. Take ownership for managing risk and strengthening controls in relation to the work done. Perform work that is closely related to that of other areas, which requires understanding of how areas coordinate and contribute to the achievement of the objectives of the organisation sub-function. Collaborate with other areas of work, for business-aligned support areas, to keep up to speed with business activity and the business strategy. Engage in complex analysis of data from multiple sources of information, internal and external sources such as procedures and practices (in other areas, teams, companies, etc.) to solve problems creatively and effectively. Communicate complex information. 'Complex' information could include sensitive information or information that is difficult to communicate because of its content or its audience. Influence or convince stakeholders to achieve outcomes. All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence and Stewardship – our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset – to Empower, Challenge and Drive – the operating manual for how we behave.
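DevOps pipelines for ML like those described in the Barclays role above usually include a promotion gate between a canary and full rollout. A simplified, assumption-laden sketch of that decision — the error-regression and latency thresholds are invented defaults, not a real deployment policy — might be:

```python
def canary_decision(baseline_err, canary_err, canary_p95_ms,
                    max_err_regression=0.02, max_p95_ms=500.0):
    """Return 'promote', 'hold', or 'rollback' for a canary model release.

    Thresholds are illustrative defaults, not an actual bank policy.
    """
    if canary_err > baseline_err + max_err_regression:
        return "rollback"      # quality regressed beyond tolerance
    if canary_p95_ms > max_p95_ms:
        return "hold"          # accuracy acceptable, latency budget exceeded
    return "promote"

assert canary_decision(0.10, 0.15, 300.0) == "rollback"
assert canary_decision(0.10, 0.11, 800.0) == "hold"
assert canary_decision(0.10, 0.11, 300.0) == "promote"
```

In practice such a gate runs inside the CI/CD system (e.g. as a pipeline stage) against metrics collected from a small slice of live traffic before traffic is shifted fully.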
Posted 1 month ago
2.0 - 31.0 years
2 - 4 Lacs
Sri Nagar Colony, Hyderabad
On-site
Web Developer - Full Stack with AI Integration. Key Responsibilities: Collaborate with cross-functional teams to define, design, and ship new features, including AI-powered capabilities. Develop server-side logic using Node.js, ensuring high performance and responsiveness to requests from front-end components. Build reusable and efficient front-end components using React.js, including AI-driven interfaces and data visualization. Integrate AI/ML APIs (OpenAI, Google AI, AWS AI services) and implement features like chatbots, content generation, and recommendation systems. Implement and maintain API integrations with third-party services and AI platforms. Optimize applications for maximum speed and scalability, including efficient AI model inference and response caching. Collaborate with team members to troubleshoot, debug, and optimize application performance across traditional and AI systems. Stay up-to-date with emerging AI technologies, web development frameworks, and industry trends. Implement security and data protection measures, including secure handling of AI model inputs/outputs. Participate in code reviews and provide constructive feedback on both web development and AI integration practices. Deploy applications on AWS and manage cloud infrastructure, utilizing AI/ML services like SageMaker and Bedrock. Develop prompt engineering strategies and build user interfaces for AI model interactions and monitoring.
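The "efficient AI model inference and response caching" requirement above can be sketched in a few lines of Python: cache responses keyed by a hash of model and prompt, so repeated identical requests skip a paid API round-trip. This is an illustrative stand-in; fake_llm and the helper names are hypothetical, and a real integration would route call_fn to the OpenAI or Bedrock SDK.

```python
import hashlib
import json

class ResponseCache:
    """Cache AI responses keyed by a hash of model + prompt, so repeated
    identical requests skip a paid API round-trip."""

    def __init__(self):
        self._store = {}
        self.hits = 0
        self.misses = 0

    def _key(self, model, prompt):
        payload = json.dumps({"model": model, "prompt": prompt}, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

    def get_or_call(self, model, prompt, call_fn):
        key = self._key(model, prompt)
        if key in self._store:
            self.hits += 1
            return self._store[key]
        self.misses += 1
        result = call_fn(model, prompt)  # in reality: an OpenAI / Bedrock SDK call
        self._store[key] = result
        return result

# Hypothetical stand-in for a real model call
def fake_llm(model, prompt):
    return f"{model}: answer to {prompt!r}"

cache = ResponseCache()
a = cache.get_or_call("gpt-x", "What is MLOps?", fake_llm)
b = cache.get_or_call("gpt-x", "What is MLOps?", fake_llm)  # served from cache
```

A production version would also bound the cache size and expire entries, but the key idea (content-addressed lookup before inference) is the same.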
Posted 1 month ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Join us as a Senior DevOps Engineer at Barclays, where you'll spearhead the evolution of our digital landscape, driving innovation and excellence. You'll harness cutting-edge technology to revolutionise our digital offerings, ensuring unparalleled customer experiences. As part of a team of developers, you will deliver the technology stack, using strong analytical and problem-solving skills to understand the business requirements and deliver quality solutions. To be successful as a Senior DevOps Engineer you should have experience with: Basic/Essential Qualifications Experience working with containers, Kubernetes and related technologies on cloud platform(s) – AWS preferably. Experience setting up cloud infrastructure using CloudFormation. Experience with DevOps tooling such as Jenkins, Bitbucket, Nexus, GitLab, Jira etc. Experience working with and configuring a wide range of AWS services such as API Gateway, Lambda, ECS, SageMaker, Bedrock, EC2, RDS etc. Experience with virtual server hosting (EC2), container management (Kubernetes, ECS or EKS) as well as Windows and Linux operating systems. Network experience, aware of cloud network patterns such as VPC, network interconnect, subnets, peering, firewalls, etc. Some Other Highly Valued Skills Include Strong programming experience in Python. Experience working with ML libraries e.g., scikit-learn, TensorFlow, PyTorch. Proficiency with Jenkins, Bitbucket/GitLab and Git workflows. Exposure to working within a controlled environment such as banking and financial services. Experience with Docker and at least one container orchestration platform – Amazon ECS/EKS, Kubernetes. Relevant AWS certification(s). You may be assessed on key critical skills relevant for success in the role, such as risk and controls, change and transformation, business acumen, strategic thinking and digital and technology, as well as job-specific technical skills. This role is based out of Pune. 
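The cloud network patterns listed above (VPC, subnets, peering) can be illustrated with Python's stdlib ipaddress module. The CIDR below is a hypothetical example, not a prescribed layout; real values would come from CloudFormation parameters.

```python
import ipaddress

# Hypothetical VPC CIDR; in practice this is a CloudFormation parameter.
vpc = ipaddress.ip_network("10.0.0.0/16")

# Carve the /16 into four /18 subnets (e.g. one per availability-zone tier).
subnets = list(vpc.subnets(new_prefix=18))

for net in subnets:
    # Each carved subnet stays inside the parent VPC range.
    assert net.subnet_of(vpc)

first = str(subnets[0])  # the lowest subnet in the range
count = len(subnets)
```

Going from /16 to /18 yields 2^(18-16) = 4 non-overlapping subnets, which is exactly the kind of arithmetic done when laying out subnets across availability zones.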
Posted 1 month ago
12.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: Lead Engineer - Data & AI Career Level: E Introduction To Role Are you ready to redefine an industry and change lives? AstraZeneca is seeking a seasoned AI and Data Engineering manager to join our Data Analytics and AI (DA&AI) organization. In this pivotal role, you'll be instrumental in shaping and delivering next-generation data platforms, data mesh, and AI capabilities that drive our digital transformation. Your expertise will be crucial in building data infrastructure that supports enterprise-scale data platforms and AI Analytics deployment, fueling intelligent operations across the business. Accountabilities Technical & AI Leadership Lead and mentor a multi-functional team of data and AI engineers to deliver scalable, AI-ready data products and pipelines. Define and enforce standard processes for data engineering, data pipeline orchestration, and ELT/ETL development lifecycle management. Guide the development of solutions that integrate data engineering with machine learning, foundational models, and semantic enrichment. AI-Driven Data Engineering Architect and develop data pipelines using tools such as DBT, Apache Airflow, and Snowflake, optimized to support both analytics and AI/ML workloads. Design infrastructures that facilitate automated feature engineering, metadata tracking, and real-time model inference. Enable large-scale data ingestion, preparation, and transformation to support AI use cases such as forecasting, natural language querying/processing (NLQ/P), and intelligent automation. Governance and Metadata Management Adhere to data governance and compliance practices that ensure trust, transparency, and explainability in AI outputs. Manage and scale enterprise metadata frameworks using tools like Collibra, aligning with FAIR data principles and AI ethics guidelines. Establish traceability across data lineage, model lineage, and business outcomes. 
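Airflow-style pipeline orchestration, mentioned above, reduces to executing tasks in dependency order. A minimal pure-Python sketch with hypothetical task names (no Airflow dependency), using the stdlib's topological sorter:

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline: ingest -> transform -> {train_model, load_warehouse}.
# Each key maps a task to the set of tasks that must run before it,
# mirroring how an Airflow DAG declares upstream dependencies.
deps = {
    "transform": {"ingest"},
    "train_model": {"transform"},
    "load_warehouse": {"transform"},
}

order = list(TopologicalSorter(deps).static_order())

# 'ingest' has no upstream tasks, so it runs first; 'transform' must
# precede both of its downstream consumers.
assert order[0] == "ingest"
assert order.index("transform") < order.index("train_model")
assert order.index("transform") < order.index("load_warehouse")
```

A real scheduler adds retries, sensors, and parallel execution of independent tasks, but the dependency-ordering core is the same.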
Stakeholder Engagement Act as a trusted technical advisor to business leaders across enabling functions (e.g., Finance, M&A, GBS), helping translate strategic goals into AI-driven data solutions. Lead delivery across multiple workstreams, ensuring measurable KPIs and adoption of both data and AI capabilities. Essential Skills/Experience 12+ years of hands-on experience in data engineering and AI-enabling infrastructure, with expertise in DBT, Apache Airflow, Snowflake, PostgreSQL, and Amazon Redshift. 2+ years working with or supporting AI/ML teams in building production-ready pipelines and infrastructure. Strong communication skills with a demonstrated ability to influence both technical and non-technical collaborators. Experience in implementing data products by applying data mesh principles. Experience working across enabling business units such as Finance, HR, and M&A. Academic Qualifications Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field with relevant industry experience. Desirable Skills/Experience Proficiency in Python, especially in libraries like Pandas, NumPy, and Scikit-learn for data and ML workflows. Exposure to ML lifecycle tools such as SageMaker, MLflow, Azure ML, or Databricks. Exposure to foundational AI models (e.g., LLMs), vector databases, and retrieval-augmented generation (RAG) methodologies. Knowledge of data cataloguing tools such as Collibra, semantic data models, ontologies, and business glossary tools. When we put unexpected teams in the same room, we unleash bold thinking with the power to inspire life-changing medicines. In-person working gives us the platform we need to connect, work at pace and challenge perceptions. That's why we work, on average, a minimum of three days per week from the office. But that doesn't mean we're not flexible. We balance the expectation of being in the office while respecting individual flexibility. Join us in our unique and ambitious world. 
At AstraZeneca, your work has a direct impact on patients by transforming our ability to develop life-changing medicines. We empower the business to perform at its peak by combining innovative science with leading digital technology platforms. With a passion for impacting lives through data, analytics, AI, machine learning, and more, we are at a crucial stage of our journey to become a digital and data-led enterprise. Here you can innovate, take ownership, experiment with groundbreaking technology, and tackle challenges that have never been addressed before. Our dynamic environment offers countless opportunities to learn and grow while contributing to something far bigger. Ready to make a meaningful impact? Apply now to join our team! Date Posted 27-Jun-2025 Closing Date 03-Jul-2025 AstraZeneca embraces diversity and equality of opportunity. We are committed to building an inclusive and diverse team representing all backgrounds, with as wide a range of perspectives as possible, and harnessing industry-leading skills. We believe that the more inclusive we are, the better our work will be. We welcome and consider applications to join our team from all qualified candidates, regardless of their characteristics. We comply with all applicable laws and regulations on non-discrimination in employment (and recruitment), as well as work authorization and employment eligibility verification requirements.
Posted 1 month ago
3.0 years
0 Lacs
Dehradun, Uttarakhand, India
Remote
About Yogotribe Platform: Yogotribe is building a transformative digital platform dedicated to wellness, connecting seekers with a diverse range of yoga retreats, meditation centers, Ayurveda clinics, and holistic wellness experiences. Our strategic approach involves a robust initial deployment using Odoo as the core platform. The foundational Phase 1 is already established on a scalable and secure AmistacX Odoo and AWS backend infrastructure, fully integrated and stable on Amazon EC2. This setup provides a solid foundation for all Odoo functionalities, setting the stage for future evolution towards a microservices-driven architecture. We are seeking a talented and experienced External Odoo Developer to join us on a project basis. Your primary responsibility will be to rapidly develop professional and high-quality custom Odoo modules to complete all remaining functionalities within our existing, integrated AWS ecosystem. Role Summary: As an Odoo Developer for Yogotribe, you will be responsible for the design, development, and implementation of new custom Odoo modules and enhancements within our established Odoo 17.x environment. While the AWS backend integration is already in place and stable, you will focus on building the Odoo-side functionalities that utilize these existing integrations. This is a project-based assignment focused on delivering specific functionalities. Your ability to work independently, adhere to Odoo best practices, and effectively leverage the established AWS services through Odoo will be paramount to your success. Key Responsibilities: Custom Odoo Module Development: Design, develop, and implement new Odoo modules and features using Python, Odoo ORM, QWeb, XML, and JavaScript, aligned with project requirements to complete all envisioned functionalities. 
Leveraging Existing AWS Integrations: Develop Odoo functionalities that seamlessly interact with our already established AWS backend, utilizing existing integrations for services such as: Data storage (AWS S3 for attachments). Eventing and messaging (AWS SQS, AWS SNS). Email services (AWS SES). Interactions with AWS Lambda for AI/ML processing (e.g., Amazon Comprehend, Rekognition). Code Quality & Best Practices: Write clean, maintainable, well-documented, and efficient code, adhering to Odoo development guidelines and industry best practices. Testing & Debugging: Conduct thorough testing of developed modules, identify and resolve bugs, and ensure module stability and performance within the integrated Odoo-AWS environment. Documentation: Create clear and concise technical documentation for developed Odoo modules, including design specifications, API usage, and deployment notes. Collaboration: Work closely with the core team to understand project requirements, provide technical insights, and deliver solutions that meet business needs. Deployment Support: Assist in the deployment and configuration of developed Odoo modules within the AWS EC2 environment. Required Skills & Experience: Odoo Development Expertise (3+ years): Strong proficiency in Python development within the Odoo framework (ORM, API, XML, QWeb). Extensive experience in developing and customizing Odoo modules (e.g., sales, CRM, accounting, website, custom models). Familiarity with Odoo 17.x development practices is highly desirable. Solid understanding of Odoo architecture and module structure. Understanding of Odoo on AWS: Proven understanding of how Odoo operates within an AWS EC2 environment. Familiarity with the use of existing AWS services integrated with Odoo, particularly S3, SQS/SNS, and SES. Knowledge of AWS IAM, VPC, Security Groups, and general cloud security concepts relevant to understanding the existing Odoo deployment. 
Database Proficiency: Experience with PostgreSQL, including schema design and query optimization. Version Control: Proficient with Git for source code management. Problem-Solving: Excellent analytical and debugging skills to troubleshoot complex Odoo functionalities within an integrated system. Communication: Strong verbal and written communication skills for effective collaboration in a remote, project-based setting. Independent Work Ethic: Proven ability to manage project tasks, deliver on time, and work effectively with minimal supervision. Desirable (Bonus) Skills: Experience with front-end technologies for Odoo website customization (HTML, CSS/Tailwind CSS, JavaScript frameworks). Knowledge of Odoo performance optimization techniques. Familiarity with CI/CD pipelines (e.g., AWS CodePipeline, CodeBuild, CodeDeploy) from an Odoo module deployment perspective. Understanding of microservices architecture concepts and patterns, especially in the context of a future migration from the Odoo monolith. Prior experience with AWS AI/ML services (e.g., Comprehend, Rekognition, Personalize, SageMaker, Lex) is a plus, specifically in how Odoo might interact with them via existing integrations. Assignment Type & Duration: This is a project-based assignment with clearly defined deliverables and timelines for specific Odoo module development. The initial project scope will be discussed during the interview process. The feasibility of support extension or future project engagements will be decided based on the successful outcome and quality of deliverables for the current project. To Apply: Please submit your resume and a brief cover letter outlining your relevant Odoo development experience to hr@yogotribe.com, specifically highlighting your ability to build complete functionalities and work within an established AWS-hosted Odoo environment.
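The eventing integrations above (SQS/SNS) typically exchange JSON envelopes. A minimal sketch of building a FIFO-queue message with a content-derived deduplication id: the field names follow SQS send_message parameters for FIFO queues, but the event type and payload are hypothetical, and real publishing would go through boto3.

```python
import hashlib
import json

def build_event(event_type, payload):
    """Build an SQS-style message envelope with a deduplication id derived
    from the canonicalised content, so the same business event published
    twice collapses to a single message on a FIFO queue."""
    body = json.dumps({"type": event_type, "payload": payload}, sort_keys=True)
    return {
        "MessageBody": body,
        "MessageDeduplicationId": hashlib.sha256(body.encode()).hexdigest(),
        "MessageGroupId": event_type,
    }

# Hypothetical booking event; note the payload key order differs between
# the two calls, yet the dedup id is identical thanks to sort_keys=True.
e1 = build_event("retreat.booked", {"booking_id": 42, "center": "Rishikesh"})
e2 = build_event("retreat.booked", {"center": "Rishikesh", "booking_id": 42})
```

Grouping by event type keeps ordering per event stream while still allowing different streams to be consumed in parallel.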
Posted 1 month ago
5.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Growexx is looking for a smart and passionate Senior Data Scientist, who will help in building great AI Agents for different business needs. Key Responsibilities Design and implement LLM-powered video conversation systems to support use cases such as real-time customer service, sales enablement, and personalized product walkthroughs, integrating video streaming systems and leveraging multimodal models for speech, text, and visual understanding. Develop and fine-tune LLM-driven solutions for tasks such as text summarization, customer support automation, personalization, and user journey understanding. Deploy LLM and ML models into production environments for activation across websites, product applications, and sales/marketing channels. Conduct comprehensive evaluation of LLMs, including performance benchmarking (accuracy, latency, token usage, cost), prompt effectiveness testing, fine-tuning impact analysis, and safety/bias assessments. Integrate LLM agents with APIs, internal knowledge bases, retrieval systems (RAG architectures), and external tools to enable autonomous or semi-autonomous decision-making. Build a deep understanding of business models, objectives, challenges, and opportunities by working closely with leadership and key stakeholders. Document model methodologies, evaluation frameworks, agent workflows, deployment architectures, and post-activation performance results in a structured and reproducible format. Stay current with advancements in LLMs, agentic AI, retrieval-augmented generation (RAG), and ML technologies to recommend and implement innovative solutions. Key Skills Experience using Python, Scikit-learn, SQL, Jupyter Notebooks, Amazon SageMaker, GitHub & AWS Bedrock. Experience working with multimodal AI systems for video-based conversation, speech-to-text, text-to-speech, and LLM-driven dialogue orchestration for interactive, real-time user engagement. 
Proven experience designing, fine-tuning, evaluating, and deploying Large Language Models (LLMs) and generative AI applications. Experience designing and deploying agentic systems using frameworks such as LangChain, AutoGen, CrewAI, and custom function-calling pipelines. Expertise integrating LLM agents with APIs, knowledge bases, retrieval systems (RAG architecture), and orchestrating dynamic multi-agent workflows. Strong understanding of evaluation metrics for LLMs, including prompt testing, token optimization, bias/safety analysis, latency, and cost benchmarks. Expertise in designing and executing A/B, multivariate, and lift tests to measure activated ML/LLM model performance across digital and offline channels. Continuous learner, keeping up-to-date with the latest advances in transformers, generative AI models, retrieval-augmented generation (RAG), and agentic AI frameworks. Education and Experience B. Tech or B.E. (Computer Science / Information Technology). 5+ years as a Data Scientist or in similar roles. Analytical and Personal Skills Must have good logical reasoning and analytical skills. Good communication skills in English, both written and verbal. Demonstrate ownership and accountability of their work. Attention to detail.
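Retrieval in a RAG architecture, referenced throughout this posting, ranks knowledge-base passages by similarity to the query before the LLM ever sees them. A toy bag-of-words sketch with a made-up corpus (a real system would use embeddings and a vector store):

```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    """Return the k passages most similar to the query."""
    q = Counter(query.lower().split())
    scored = [(cosine(q, Counter(d.lower().split())), d) for d in docs]
    scored.sort(key=lambda s: s[0], reverse=True)
    return [d for _, d in scored[:k]]

# Toy knowledge base
docs = [
    "refund policy for cancelled orders",
    "how to reset your account password",
    "shipping times for international orders",
]
top = retrieve("how do I reset my password", docs, k=1)
```

The retrieved passage would then be stuffed into the LLM prompt as grounding context; swapping the term-count vectors for learned embeddings is what makes this production-grade.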
Posted 1 month ago
8.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About us: Netcore Cloud is a MarTech platform helping businesses design, execute, and optimize campaigns across multiple channels. With a strong focus on leveraging data, machine learning, and AI, we empower our clients to make smarter marketing decisions and deliver exceptional customer experiences. Our team is passionate about innovation and collaboration, and we are looking for a talented Lead Data Scientist to guide and grow our data science team. Role Summary: As the Lead Data Scientist, you will head our existing data science team, driving the development of advanced machine learning models and AI solutions. You will play a pivotal role in shaping our ML/AI strategy, leading projects across NLP, deep learning, predictive analytics, and recommendation systems, while ensuring alignment with business goals. Exposure to Agentic AI systems and their evolving applications will be a valuable asset in this role, as we continue to explore autonomous, goal-driven AI workflows. This role combines hands-on technical leadership with strategic decision-making to build impactful solutions for our customers. Key Responsibilities: Leadership and Team Management: Lead and mentor the existing data science engineers, fostering skill development and collaboration. Provide technical guidance, code reviews, and ensure best practices in model development and deployment. Model Development and Innovation: Design and build machine learning models for tasks like NLP, recommendation systems, customer segmentation, and predictive analytics. Research and implement state-of-the-art ML/AI techniques to solve real-world problems. Ensure models are scalable, reliable, and optimized for performance in production environments. We operate in AWS and GCP, so it’s mandatory that you have previous experience setting up MLOps workflows in either cloud service provider. Business Alignment: Collaborate with cross-functional teams (engineering, product, marketing, etc.) 
to identify opportunities where AI/ML can drive value. Translate business problems into data science solutions and communicate findings to stakeholders effectively. Drive data-driven decision-making to improve user engagement, conversions, and campaign outcomes. Technology and Tools: Work with large-scale datasets, ensuring data quality and scalability of solutions. Leverage cloud platforms like AWS and GCP for model training and deployment. Utilize tools and libraries such as Python, TensorFlow, PyTorch, Scikit-learn, and Spark for development. With so much innovation happening around Gen AI and LLMs, we prefer folks who have already exposed themselves to this exciting opportunity via AWS Bedrock or Google Vertex AI. Qualifications: Education: Master’s or PhD in Computer Science, Data Science, Mathematics, or a related field. Experience: More than 8 years of industry experience, with 5+ years in data science and at least 2 years in a leadership role managing a strong technical team. Proven expertise in machine learning, deep learning, NLP, and recommendation systems. Hands-on experience deploying ML models in production at scale. Experience in Martech or working on customer-facing AI solutions is a plus. Technical Skills: Proficiency in Python, SQL, and ML frameworks like TensorFlow or PyTorch. Strong understanding of statistical methods, predictive modeling, and algorithm design. Familiarity with cloud-based solutions (AWS SageMaker, GCP AI Platform, or similar). Soft Skills: Excellent communication skills to present complex ideas to both technical and non-technical stakeholders. Strong problem-solving mindset and the ability to think strategically. A passion for innovation and staying up-to-date with the latest trends in AI/ML. Why Join Us: Opportunity to work on cutting-edge AI/ML projects impacting millions of users. Be part of a collaborative, innovation-driven team in a fast-growing Martech company. 
Competitive salary, benefits, and a culture that values learning and growth. Location Bengaluru
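Customer segmentation, one of the modelling tasks this role covers, can be illustrated with a toy one-dimensional k-means over made-up spend figures (real work would use scikit-learn over multi-feature data):

```python
def kmeans_1d(values, k=2, iters=10):
    """Toy 1-D k-means for customer segmentation, e.g. splitting users by
    monthly spend into 'low' and 'high' segments. Centroids start at the
    extremes and are iteratively refined to cluster means."""
    centroids = [min(values), max(values)][:k]
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            # Assign each value to its nearest centroid.
            idx = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[idx].append(v)
        # Recompute each centroid as its cluster's mean.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Made-up monthly spend figures with an obvious low/high split
spend = [5, 7, 6, 90, 95, 100]
centroids, clusters = kmeans_1d(spend, k=2)
```

The same assign-then-recompute loop underlies the full multi-dimensional algorithm; only the distance metric and centroid averaging change.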
Posted 1 month ago
0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Minimum of 3+ years of experience in AI-based application development. Fine-tune pre-existing models to improve performance and accuracy. Experience with TensorFlow, PyTorch, Scikit-learn, or similar ML frameworks and familiarity with APIs like OpenAI or Vertex AI. Experience with NLP tools and libraries (e.g., NLTK, SpaCy, GPT, BERT). Implement frameworks like LangChain, Anthropic's Constitutional AI, OpenAI's, Hugging Face, and prompt engineering techniques to build robust and scalable AI applications. Evaluate and analyze RAG solutions and utilise best-in-class LLMs to define customer experience solutions (fine-tune Large Language Models (LLMs)). Architect and develop advanced generative AI solutions leveraging state-of-the-art language models (LLMs) such as GPT, LLaMA, PaLM, BLOOM, and others. Strong understanding and experience with open-source multimodal LLM models to customize and create solutions. Explore and implement cutting-edge techniques like Few-Shot Learning, Reinforcement Learning, Multi-Task Learning, and Transfer Learning for AI model training and fine-tuning. Proficiency in data preprocessing, feature engineering, and data visualization using tools like Pandas, NumPy, and Matplotlib. Optimize model performance through experimentation, hyperparameter tuning, and advanced optimization techniques. Proficiency in Python with the ability to get hands-on with coding at a deep level. Develop and maintain APIs using Python's FastAPI, Flask, or Django for integrating AI capabilities into various systems. Ability to write optimized and high-performing scripts on relational databases (e.g., MySQL, PostgreSQL) or non-relational databases (e.g., MongoDB or Cassandra). Enthusiasm for continuous learning and professional development in AI and related technologies. Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions. Knowledge of cloud services like AWS, Google Cloud, or Azure. 
Proficiency with version control systems, especially Git. Familiarity with data pre-processing techniques and pipeline development for AI model training. Experience with deploying models using Docker and Kubernetes. Experience with AWS Bedrock and SageMaker is a plus. Strong problem-solving skills with the ability to translate complex business problems into AI solutions.
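Hyperparameter tuning, listed among the skills above, amounts to searching a parameter grid and keeping the best-scoring combination. A toy sketch with a made-up objective function (in practice this would be scikit-learn's GridSearchCV or a SageMaker tuning job):

```python
from itertools import product

def grid_search(param_grid, score_fn):
    """Exhaustively score every parameter combination, keep the best."""
    names = list(param_grid)
    best_params, best_score = None, float("-inf")
    for combo in product(*(param_grid[n] for n in names)):
        params = dict(zip(names, combo))
        s = score_fn(params)
        if s > best_score:
            best_params, best_score = params, s
    return best_params, best_score

# Made-up objective that peaks at lr=0.1, depth=4 (a real score_fn would
# train and cross-validate a model for each combination).
def score_fn(p):
    return -abs(p["lr"] - 0.1) - abs(p["depth"] - 4)

grid = {"lr": [0.01, 0.1, 1.0], "depth": [2, 4, 8]}
best, score = grid_search(grid, score_fn)
```

Random search and Bayesian optimisation replace the exhaustive loop with smarter sampling, but the interface (a grid and a scoring callback) is the same.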
Posted 1 month ago
5.0 - 10.0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
Introduction: A Career at HARMAN Digital Transformation Solutions (DTS) We’re a global, multi-disciplinary team that’s putting the innovative power of technology to work and transforming tomorrow. At HARMAN DTS, you solve challenges by creating innovative solutions. Combine the physical and digital, making technology a more dynamic force to solve challenges and serve humanity’s needs. Work at the convergence of cross-channel UX, cloud, insightful data, IoT and mobility. Empower companies to create new digital business models, enter new markets, and improve customer experiences. What You Will Do End-to-end machine learning lifecycle management on SageMaker. Model training and tracking using MLflow. Model monitoring. Building SageMaker Pipelines. Experience in using SageMaker Ground Truth for optimised collection of annotated data (plus). Cost and performance optimization. What You Need To Be Successful Experience working in cross-functional teams and collaborating effectively with different stakeholders. Strong problem-solving and analytical skills. Excellent communication skills to document and present technical concepts clearly. What Makes You Eligible Bachelor’s or master’s degree in Computer Science, Artificial Intelligence, or a related field. 5-10 years of relevant, proven experience in developing and deploying generative AI models and agents in a professional setting. What We Offer Flexible work environment, allowing for full-time remote work globally for positions that can be performed outside a HARMAN or customer location. Access to employee discounts on world-class Harman and Samsung products (JBL, HARMAN Kardon, AKG, etc.) 
Extensive training opportunities through our own HARMAN University Competitive wellness benefits Tuition reimbursement “Be Brilliant” employee recognition and rewards program An inclusive and diverse work environment that fosters and encourages professional and personal development You Belong Here HARMAN is committed to making every employee feel welcomed, valued, and empowered. No matter what role you play, we encourage you to share your ideas, voice your distinct perspective, and bring your whole self with you – all within a support-minded culture that celebrates what makes each of us unique. We also recognize that learning is a lifelong pursuit and want you to flourish. We proudly offer added opportunities for training, development, and continuing education, further empowering you to live the career you want. About HARMAN: Where Innovation Unleashes Next-Level Technology Ever since the 1920s, we’ve been amplifying the sense of sound. Today, that legacy endures, with integrated technology platforms that make the world smarter, safer, and more connected. Across automotive, lifestyle, and digital transformation solutions, we create innovative technologies that turn ordinary moments into extraordinary experiences. Our renowned automotive and lifestyle solutions can be found everywhere, from the music we play in our cars and homes to venues that feature today’s most sought-after performers, while our digital transformation solutions serve humanity by addressing the world’s ever-evolving needs and demands. Marketing our award-winning portfolio under 16 iconic brands, such as JBL, Mark Levinson, and Revel, we set ourselves apart by exceeding the highest engineering and design standards for our customers, our partners and each other. If you’re ready to innovate and do work that makes a lasting impact, join our talent community today! HARMAN is an Equal Opportunity/Affirmative Action employer. 
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or Protected Veterans status. HARMAN offers a great work environment, challenging career opportunities, professional training, and competitive compensation. (www.harman.com)
Posted 1 month ago