
661 SageMaker Jobs - Page 16

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

3.0 - 7.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Job Description & Summary
A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.

Creating business intelligence from data requires an understanding of the business, the data, and the technology used to store and analyse that data. Using our Rapid Business Intelligence Solutions, data visualisation and integrated reporting dashboards, we can deliver agile, highly interactive reporting and analytics that help our clients run their business more effectively and understand which business questions can be answered and how to unlock the answers.

What's on offer? Consultant / Senior Consultant / Principal Consultant – Advisory – One Consulting – Data Analytics
Where? PAN India

Must-Have Skills
- Bachelor's/Master's degree in Computer Science or an equivalent engineering discipline
- Previous work experience: 3-7 years (Bachelor's degree holders)
- In-depth experience with AWS data services, including Amazon S3, Amazon Redshift, AWS Glue, AWS Lambda, Amazon EMR, SQS, SNS, Step Functions, and EventBridge
- Ability to design, implement, and maintain scalable data pipelines using AWS services (a short pipeline sketch follows this listing)
- Strong proficiency in big data technologies such as Apache Spark and Apache Hadoop for processing and analyzing large datasets
- Hands-on experience with database management systems such as Amazon RDS and DynamoDB
- Good knowledge of AWS OpenSearch
- Experience with data ingestion services such as AWS AppFlow, DMS, and AWS Glue
- Hands-on experience developing REST APIs using AWS API Gateway
- Experience with real-time and batch data processing in AWS using services such as Kinesis Data Firehose, AWS Glue, and AWS Lambda
- Proficiency in programming languages such as Python and PySpark for building data applications and ETL processes
- Strong scripting skills for automation and orchestration of data workflows
- Solid understanding of data warehousing concepts and best practices
- Experience designing and managing data warehouses on Amazon Redshift or similar platforms
- Proven experience designing and implementing Extract, Transform, Load (ETL) processes
- Knowledge of AWS security best practices and the ability to implement secure data solutions
- Ability to monitor logs and create alerts and dashboards in AWS CloudWatch
- Understanding of version control systems such as Git

Nice-to-Have Skills
- Experience with Agile and DevOps concepts
- Understanding of networking principles, including VPC design, subnets, and security groups
- Experience with containerization tools such as Docker and orchestration tools like Kubernetes
- Ability to deploy and manage data applications using containerized solutions
- Familiarity with integrating machine learning models into data pipelines
- Knowledge of AWS SageMaker or other machine learning platforms
- Experience with AWS Bedrock for generative AI integration
- Knowledge of monitoring tools for tracking the performance and health of data systems
- Ability to optimize and fine-tune data pipelines for efficiency
- Experience with AWS services such as CodePipeline, CodeCommit, CodeDeploy, CodeBuild, and CloudFormation

Mandatory skill sets: AWS Data Engineer
Preferred skill sets: AWS
Years of experience required: 4-7 years
Qualifications: BE / BTech / MCA
Required Skills: AWS DevOps
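
The pipeline skills this listing asks for can be made concrete with a short example. Below is a minimal sketch of a batch ETL step in plain PySpark; the bucket paths, dataset, and column names are hypothetical placeholders, not anything from the posting.

```python
# Minimal sketch of a batch ETL step, assuming hypothetical S3 paths and columns.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-daily-etl").getOrCreate()

# Extract: raw CSV landed in S3 (placeholder path; requires S3 credentials).
orders = spark.read.csv("s3://example-raw-bucket/orders/2025-06-01/",
                        header=True, inferSchema=True)

# Transform: basic cleansing and a derived partition column.
cleaned = (orders
           .dropDuplicates(["order_id"])
           .filter(F.col("amount") > 0)
           .withColumn("order_date", F.to_date("order_ts")))

# Load: partitioned Parquet for downstream query engines (e.g. Redshift Spectrum).
(cleaned.write.mode("overwrite")
        .partitionBy("order_date")
        .parquet("s3://example-curated-bucket/orders/"))
```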

Posted 2 weeks ago

Apply

8+ years

4 - 8 Lacs

Hyderābād

On-site

JOB ID: R-216711 | LOCATION: India - Hyderabad | WORK LOCATION TYPE: On Site | DATE POSTED: May 30, 2025 | CATEGORY: Information Systems

ABOUT AMGEN
Amgen harnesses the best of biology and technology to fight the world's toughest diseases and make people's lives easier, fuller, and longer. We discover, develop, manufacture, and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human data to push beyond what's known today.

ABOUT THE ROLE
Let's do this. Let's change the world. At Amgen, we believe that innovation can and should happen across the entire company. Part of the Artificial Intelligence & Data function of the Amgen Technology and Medical Organizations (ATMOS), the AI & Data Innovation Lab (the Lab) is a center for exploration and innovation, focused on integrating and accelerating new technologies and methods that deliver measurable value and competitive advantage. We've built algorithms that predict bone fractures in patients who haven't even been diagnosed with osteoporosis yet. We've built software to help us select clinical trial sites so we can get medicines to patients faster. We've built AI capabilities to standardize and accelerate the authoring of regulatory documents so we can shorten the drug approval cycle. And that's just the beginning. Join us!

We are seeking a Senior DevOps Software Engineer to join the Lab's software engineering practice. This role is integral to developing top-tier talent, setting engineering best practices, and evangelizing full-stack development capabilities across the organization. The Senior DevOps Software Engineer will design and implement deployment strategies for AI systems using the AWS stack, ensuring high availability, performance, and scalability of applications.

Roles & Responsibilities:
- Design and implement deployment strategies using the AWS stack, including EKS, ECS, Lambda, SageMaker, and DynamoDB (see the deployment sketch after this listing)
- Configure and manage CI/CD pipelines in GitLab to streamline the deployment process
- Develop, deploy, and manage scalable applications on AWS, ensuring they meet high standards for availability and performance
- Implement infrastructure-as-code (IaC) to provision and manage cloud resources consistently and reproducibly
- Collaborate with AI product design and development teams to ensure seamless integration of AI models into the infrastructure
- Monitor and optimize the performance of deployed AI systems, addressing any issues related to scaling, availability, and performance
- Lead and develop standards, processes, and best practices for the team across the AI system deployment lifecycle
- Stay current on emerging technologies and best practices in AI infrastructure and AWS services to continuously improve deployment strategies
- Familiarity with AI concepts such as traditional AI, generative AI, and agentic AI, with the ability to learn and adopt new skills quickly

Functional Skills:
- Deep expertise in designing and maintaining CI/CD pipelines, and in enabling software engineering best practices across the software product development lifecycle
- Ability to implement automated testing, build, deployment, and rollback strategies
- Advanced proficiency managing and deploying infrastructure on the AWS cloud platform, including cost planning, tracking, and optimization
- Proficiency with backend languages and frameworks (Python; FastAPI or Flask preferred)
- Experience with databases (Postgres, DynamoDB)
- Experience with microservices architecture and containerization (Docker, Kubernetes)

Good-to-Have Skills:
- Familiarity with enterprise software systems in life sciences or healthcare domains
- Familiarity with big data platforms and experience in data pipeline development (Databricks, Spark)
- Knowledge of data security, privacy regulations, and scalable software solutions

Soft Skills:
- Excellent communication skills, with the ability to convey complex technical concepts to non-technical stakeholders
- Ability to foster a collaborative and innovative work environment
- Strong problem-solving abilities and attention to detail
- High degree of initiative and self-motivation

Basic Qualifications:
- Bachelor's degree in Computer Science, AI, Software Engineering, or a related field
- 8+ years of experience in full-stack software engineering

EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
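
As a rough illustration of the SageMaker deployment work this role describes, here is a hedged boto3 sketch that registers a model and stands up a real-time endpoint. Every name (model, image URI, role ARN, endpoint) is an invented placeholder; real calls need valid IAM permissions and artifacts.

```python
# Hedged sketch: deploying a packaged model to a SageMaker real-time endpoint.
import boto3

sm = boto3.client("sagemaker", region_name="us-east-1")

# Register the model: container image plus trained artifact (placeholder ARNs/URIs).
sm.create_model(
    ModelName="churn-model-v3",
    PrimaryContainer={
        "Image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/churn:latest",
        "ModelDataUrl": "s3://example-models/churn/v3/model.tar.gz",
    },
    ExecutionRoleArn="arn:aws:iam::123456789012:role/ExampleSageMakerRole",
)

# Endpoint config: two instances behind one variant for availability.
sm.create_endpoint_config(
    EndpointConfigName="churn-config-v3",
    ProductionVariants=[{
        "VariantName": "primary",
        "ModelName": "churn-model-v3",
        "InstanceType": "ml.m5.large",
        "InitialInstanceCount": 2,
    }],
)

sm.create_endpoint(EndpointName="churn-endpoint",
                   EndpointConfigName="churn-config-v3")
```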

Posted 2 weeks ago

Apply

5.0 years

5 - 7 Lacs

Hyderābād

On-site

Visionify is bringing the power of computer vision and AI to everyday use cases. We are looking to hire a strong, motivated, and enthusiastic Sr. Computer Vision Engineer to execute our roadmap. As a Sr. Computer Vision Engineer, you will work on state-of-the-art challenges in the field of computer vision and solve them with novel algorithms and optimizations. The majority of our work is applied computer vision, so we expect a good understanding of the current state of the art in models (classification, object detection, object recognition, OCR, LayoutLM, GANs, and similar networks). You will work primarily in PyTorch, so prior demonstrated knowledge of PyTorch is a must for this position. Experience with Azure and the Azure ML Studio framework is also preferable. Candidates are expected to stay current with the latest features and contribute to the open-source PyTorch project with a focus on performance and accuracy improvements. You should deeply understand the PyTorch framework and its underlying implementations in order to solve customer challenges and provide insights into how key issues affect the product. Many of our models are deployed to the edge, so experience optimizing and pruning models and converting them to NVIDIA TensorRT is preferable (see the export sketch after this listing). You must possess excellent Python coding skills, as Python is used throughout our organization to build training and inference pipelines. Candidates should have excellent communication and presentation skills. The ideal candidate will be passionate about artificial intelligence and stay up to date with the latest developments in the field.

Responsibilities:
- Understand business objectives and develop computer vision solutions that help achieve them; the software could involve training frameworks, inference frameworks, and working with different ML technologies
- Build models and solutions with PyTorch
- Optimize PyTorch models for different runtime environments, including NVIDIA Jetson TensorRT
- Guide the development team in their work, unblock their questions, and help accelerate their deliverables
- Develop ML/computer vision algorithms to solve a given problem
- Explore and visualize data to gain an understanding of it, then identify differences in data distribution that could affect performance when deploying the model in the real world
- Develop processes for the team's common operations: data acquisition, model training, and prototype development
- Find open-source datasets for prototype development
- Develop pipelines for data processing, augmentation, training, inference, and active retraining
- Train models and tune their hyperparameters
- Analyze model errors and design strategies to overcome them
- Deploy models to production

Requirements:
- Bachelor's/Master's degree in Computer Science, Computer Engineering, IT, or related fields
- 5+ years of experience; exceptional candidates with less experience are welcome to apply
- Industry experience in image and video processing (OpenCV, GStreamer, TensorFlow, PyTorch, TensorRT, model training/inference, video processing pipelines, various GStreamer converters, etc.)
- Sound knowledge of deep learning classification models (ResNet, Inception, VGG, etc.) and object detection models (MobileNet-SSD, YOLO, Fast R-CNN, Mask R-CNN, etc.)
- Good knowledge of PyTorch and Torchvision, including writing training routines
- Ability to update models, add/drop features, and visualize how the model is performing
- Experience with Colab and Jupyter Notebook
- Familiarity with CUDA/GPU
- Knowledge of CNN visualization techniques such as CAM and Grad-CAM
- Strong understanding of computer vision and real-time video processing techniques
- Strong experience with Python and writing reusable code
- Experience working with the OpenCV and scikit packages
- Experience with the NVIDIA platform (NVIDIA DeepStream, TensorRT)
- Experience with a Python web framework, e.g. Flask, Django, or FastAPI
- Experience with different ML platforms: PyTorch, TensorFlow
- Proficiency with AWS SageMaker
- Experience with databases (Elasticsearch, SQL, NoSQL, Hive, …)
- Experience in a cloud environment for software development and deployment (AWS preferred)
- Experience with various GPU-based training infrastructures
- Experience with Docker
- Knowledge of DevOps and MLOps best practices for production machine learning systems

Desired Traits:
- Thrives in a collaborative environment
- Flexible with changing requirements
- Comes up with innovative solutions
- Keen focus on work quality and developing robust code
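
To make the edge-deployment requirement concrete, here is a minimal sketch of the usual first step: exporting a PyTorch model to ONNX before building a TensorRT engine. The model, shapes, and file names are illustrative stand-ins, not Visionify's actual pipeline.

```python
# Hedged sketch: PyTorch -> ONNX export as a precursor to TensorRT conversion.
import torch
import torchvision.models as models

model = models.resnet18(weights=None)  # stand-in for a trained model
model.eval()

dummy = torch.randn(1, 3, 224, 224)    # NCHW input the network expects
torch.onnx.export(
    model, dummy, "resnet18.onnx",
    input_names=["images"], output_names=["logits"],
    dynamic_axes={"images": {0: "batch"}},  # allow variable batch size
    opset_version=17,
)
# The ONNX file can then be built into a TensorRT engine, e.g. with:
#   trtexec --onnx=resnet18.onnx --saveEngine=resnet18.plan --fp16
```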

Posted 2 weeks ago

Apply

1.0 - 3.0 years

3 - 9 Lacs

Hyderābād

On-site

JOB ID: R-216265 | LOCATION: India - Hyderabad | WORK LOCATION TYPE: On Site | DATE POSTED: May 30, 2025 | CATEGORY: Information Systems

Join Amgen's Mission of Serving Patients
At Amgen, if you feel like you're part of something bigger, it's because you are. Our shared mission, to serve patients living with serious illnesses, drives all that we do. Since 1980, we've helped pioneer the world of biotech in our fight against the world's toughest diseases. With our focus on four therapeutic areas (Oncology, Inflammation, General Medicine, and Rare Disease), we reach millions of patients each year. As a member of the Amgen team, you'll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science based. If you have a passion for challenges and the opportunities that lie within them, you'll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

Data Engineer

What you will do
Let's do this. Let's change the world. In this vital role you will be responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. The role involves working with large datasets, developing reports, supporting and performing data governance initiatives, and visualizing data to ensure it is accessible, reliable, and efficiently managed. The ideal candidate has deep technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.

Roles & Responsibilities:
- Design, develop, and maintain data solutions for data generation, collection, and processing
- Be a crucial team member that assists in the design and development of the data pipeline
- Build data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems (a data-quality sketch follows this listing)
- Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions
- Take ownership of data pipeline projects from inception to deployment; manage scope, timelines, and risks
- Collaborate with cross-functional teams to understand data requirements and design solutions that meet business needs
- Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency
- Implement data security and privacy measures to protect sensitive data
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions
- Collaborate and communicate effectively with product teams
- Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions
- Identify and resolve complex data-related challenges
- Adhere to best practices for coding, testing, and designing reusable code/components
- Explore new tools and technologies that will help improve ETL platform performance
- Participate in sprint planning meetings and provide estimations on technical implementation

What we expect of you
We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
- Master's degree and 1 to 3 years of Computer Science, IT, or related field experience; OR
- Bachelor's degree and 3 to 5 years of Computer Science, IT, or related field experience; OR
- Diploma and 7 to 9 years of Computer Science, IT, or related field experience

Preferred Qualifications:

Must-Have Skills:
- Hands-on experience with big data technologies and platforms, such as Databricks and Apache Spark (PySpark, SparkSQL), including workflow orchestration and performance tuning on big data processing
- Proficiency in data analysis tools (e.g. SQL) and experience with data visualization tools
- Excellent problem-solving skills and the ability to work with large, complex datasets
- Solid understanding of data governance frameworks, tools, and best practices
- Knowledge of data protection regulations and compliance requirements

Good-to-Have Skills:
- Experience with ETL tools such as Apache Spark, and various Python packages for data processing and machine learning model development
- Good understanding of data modeling, data warehousing, and data integration concepts
- Knowledge of Python/R, Databricks, SageMaker, and cloud data platforms

Professional Certifications:
- Certified Data Engineer / Data Analyst (preferably on Databricks or cloud environments)

Soft Skills:
- Excellent critical-thinking and problem-solving skills
- Good communication and collaboration skills
- Demonstrated awareness of how to function in a team setting
- Demonstrated presentation skills

What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team: careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed, and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
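
As a small illustration of the ETL and data-quality work described above, here is a hedged PySpark sketch of a validation gate a pipeline might run before publishing a table. The paths and column names are invented placeholders.

```python
# Hedged sketch: a simple data-quality gate before publishing a curated table.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-gate").getOrCreate()
df = spark.read.parquet("s3://example-curated-bucket/patient_visits/")

total = df.count()
null_ids = df.filter(F.col("visit_id").isNull()).count()
dupes = total - df.dropDuplicates(["visit_id"]).count()

# Fail the pipeline run if integrity checks do not pass.
assert null_ids == 0, f"{null_ids} rows missing visit_id"
assert dupes == 0, f"{dupes} duplicate visit_id rows"
print(f"DQ gate passed on {total} rows")
```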

Posted 2 weeks ago

Apply

0 years

0 Lacs

Chennai

On-site

Role: AIML Data Scientist
Job Location: Hyderabad
Mode of Interview: Virtual

Job Description:
1. Be a hands-on problem solver with a consultative approach who can apply Machine Learning and Deep Learning algorithms to solve business challenges:
   a. Use knowledge of a wide variety of AI/ML techniques and algorithms to find what combination of techniques best solves the problem
   b. Improve model accuracy to deliver greater business impact
   c. Estimate the business impact of deploying a model
2. Work with domain/customer teams to understand the business context and data dictionaries, and apply a relevant Deep Learning solution to the given business challenge
3. Work with tools and scripts for pre-processing data and feature engineering for model development (Python / R / SQL / cloud data pipelines)
4. Design, develop, and deploy Deep Learning models using TensorFlow / PyTorch
5. Experience using Deep Learning models with text, speech, image, and video data:
   a. Design and develop NLP models for text classification, custom entity recognition, relationship extraction, text summarization, topic modeling, reasoning over knowledge graphs, and semantic search, using NLP tools like spaCy and open-source TensorFlow, PyTorch, etc. (a text-classification sketch follows this listing)
   b. Design and develop image recognition and video analysis models using Deep Learning algorithms and open-source tools like OpenCV
   c. Knowledge of state-of-the-art Deep Learning algorithms
6. Optimize and tune Deep Learning models for the best possible accuracy
7. Use visualization tools/modules to explore and analyze outcomes and for model validation, e.g. Power BI / Tableau
8. Work with application teams to deploy models on the cloud as a service or on-prem:
   a. Deploy models in a test/control framework for tracking
   b. Build CI/CD pipelines for ML model deployment
9. Integrate AI/ML models with other applications using REST APIs and other connector technologies
10. Constantly upskill and stay current with the latest techniques and best practices; write white papers and create demonstrable assets to summarize the AIML work and its impact

Technology/Subject Matter Expertise
- Sufficient expertise in machine learning and the mathematical and statistical sciences
- Use of versioning and collaboration tools like Git / GitHub
- Good understanding of the landscape of AI solutions: cloud, GPU-based compute, data security and privacy, API gateways, microservices-based architecture, big data ingestion, storage and processing, CUDA programming
- Ability to develop prototype-level ideas into solutions that scale to industrial-grade strength
- Ability to quantify and estimate the impact of ML models

Soft Skills Profile
- Curiosity to think in fresh and unique ways with the intent of breaking new ground
- Ability to share, explain, and "sell" their thoughts, processes, ideas, and opinions, even outside their own span of control
- Ability to think ahead and anticipate the needs of the problem
- Ability to communicate key messages effectively and articulate strong opinions in large forums

Desirable Experience:
- Keen contributor to open-source communities and communities like Kaggle
- Ability to process huge volumes of data using PySpark/Hadoop
- Development and application of Reinforcement Learning
- Knowledge of optimization/genetic algorithms
- Operationalizing Deep Learning models for a customer and understanding the nuances of scaling such models in real scenarios
- Understanding of stream data processing, RPA, edge computing, AR/VR, etc.
- Appreciation of digital ethics and data privacy
- Experience with AI and cognitive services platforms like Azure ML, IBM Watson, AWS SageMaker, and Google Cloud is a big plus
- Experience with platforms like DataRobot, CognitiveScale, H2O.ai, etc. is a big plus
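
For a concrete flavor of item 5a above, here is a minimal PyTorch text-classification sketch. The vocabulary size, labels, and toy batch are illustrative, not from the posting.

```python
# Hedged sketch: a bag-of-embeddings text classifier and one toy training step.
import torch
import torch.nn as nn

class TextClassifier(nn.Module):
    """Average token embeddings with EmbeddingBag, then apply a linear head."""
    def __init__(self, vocab_size: int, embed_dim: int, num_classes: int):
        super().__init__()
        self.embedding = nn.EmbeddingBag(vocab_size, embed_dim)
        self.fc = nn.Linear(embed_dim, num_classes)

    def forward(self, token_ids, offsets):
        return self.fc(self.embedding(token_ids, offsets))

model = TextClassifier(vocab_size=5000, embed_dim=64, num_classes=3)
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Toy batch: two documents packed into one flat tensor; offsets mark doc starts.
tokens = torch.tensor([12, 45, 9, 1033, 7, 256])
offsets = torch.tensor([0, 3])   # doc 1 = tokens[0:3], doc 2 = tokens[3:6]
labels = torch.tensor([0, 2])    # hypothetical class ids

opt.zero_grad()
loss = loss_fn(model(tokens, offsets), labels)
loss.backward()
opt.step()
print(f"one toy training step, loss={loss.item():.4f}")
```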

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Summary
At Novartis, innovation is at the heart of everything we do, and AI is a key enabler in how we reimagine medicine. As an Associate Director, Solution Delivery – AI Products, you'll lead the development and delivery of advanced AI solutions that transform commercial operations in the U.S. market. You will sit at the intersection of technology, data science, and business, owning the product lifecycle from discovery to deployment while fostering a culture of experimentation, innovation, and continuous delivery. Working within the DD&T Commercial organization, you'll turn novel ideas into production-ready products that deliver real-world impact.

About The Role

Key Responsibilities

Product Strategy & Innovation
- Shape and own the product roadmap for AI solutions that power next-gen commercial capabilities, including intelligent targeting, omnichannel orchestration, and HCP engagement optimization
- Drive innovation by scouting new AI technologies, validating ideas through MVPs and pilots, and scaling high-value solutions
- Translate commercial objectives into actionable technical requirements and solution architectures

Technical Delivery & Solution Ownership
- Lead cross-functional teams (data science, data engineering, MLOps, digital product) to build, deploy, and operate production-grade AI solutions
- Oversee technical delivery from data ingestion and model development to deployment, monitoring, and retraining
- Ensure models are performant, scalable, secure, and seamlessly integrated into platforms such as CRM and content engines

Stakeholder Management & Business Alignment
- Partner with commercial stakeholders (field teams, marketing, analytics) to define needs, align priorities, and drive adoption
- Act as a translator between business and technical teams, clearly communicating trade-offs, timelines, and value
- Provide thought leadership and influence stakeholders at multiple levels, from working teams to senior leadership

Compliance & Responsible AI
- Ensure solutions align with Novartis standards and external regulations
- Embed ethical AI principles into product design, including fairness, transparency, and accountability

Core Technical Focus Areas
- AI/ML Development: supervised learning, NLP, recommender systems, experimentation frameworks
- Data Infrastructure: scalable pipelines, real-time data ingestion, structured/unstructured healthcare data
- MLOps: model deployment, versioning, monitoring, and retraining workflows (MLflow, Airflow, SageMaker, etc.); an MLflow sketch follows this listing
- Integration: REST/gRPC APIs; integration with CRM (e.g., Veeva), content platforms, and campaign engines
- Cloud Platforms: AWS, Azure, or GCP environments for AI/ML workloads and data operations

What You'll Bring
- 5+ years of product management experience, with 2+ years in AI/ML or data-centric product delivery
- Strong technical foundation and experience working closely with engineering and data science teams
- Demonstrated ability to deliver AI products from concept to production, including service ownership
- Strong stakeholder management skills, with the ability to influence across technical and business domains
- Experience in healthcare, life sciences, or other regulated environments is a plus

Why Novartis: Helping people with disease and their families takes more than innovative science. It takes a community of smart, passionate people like you. Collaborating, supporting and inspiring each other. Combining to achieve breakthroughs that change patients' lives. Ready to create a brighter future together? https://www.novartis.com/about/strategy/people-and-culture

Join our Novartis Network: Not the right Novartis role for you? Sign up to our talent community to stay connected and learn about suitable career opportunities as soon as they come up: https://talentnetwork.novartis.com/network

Benefits and Rewards: Read our handbook to learn about all the ways we'll help you thrive personally and professionally: https://www.novartis.com/careers/benefits-rewards
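
The MLOps focus area above names MLflow among the tooling. As a hedged illustration, this sketch logs and registers a toy model with MLflow; the experiment and model names are hypothetical, and registration assumes a tracking server with a model registry configured.

```python
# Hedged sketch: tracking and registering a model with MLflow.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

mlflow.set_experiment("hcp-engagement-propensity")   # hypothetical experiment
with mlflow.start_run():
    model = LogisticRegression(max_iter=1000).fit(X, y)
    mlflow.log_param("max_iter", 1000)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    # Registering the model makes it available for versioned deployment,
    # monitoring, and retraining workflows downstream.
    mlflow.sklearn.log_model(model, "model",
                             registered_model_name="hcp-engagement-propensity")
```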

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

Remote


We're Hiring: AI DevOps Engineer – ML, LLM & Cloud for Battery & Livestock Intelligence
📍 Hyderabad / Remote | 🧠 3–8 Years Experience | 💼 Full-time | Deep Tech | AI-Driven IoT

At Vanix Technologies, we're solving real-world problems using AI built on top of IoT data, from predicting the health of electric vehicle batteries to monitoring livestock behavior with BLE sensors. We're looking for a hands-on AI DevOps Engineer who understands not just how to build ML/DL models, but also how to turn them into intelligent cloud-native services. If you've worked on battery analytics or sensor-driven ML, and you're excited by the potential of LLMs plus real-time IoT, this role is for you.

What You'll Work On

🔋 EV Battery Intelligence
- Build models for SOH, true SOH, SOC, and RUL prediction, thermal event detection, and high-risk condition classification
- Ingest and process time-series data from BMS, CAN bus, GPS, and environmental sensors
- Deliver analytics that plug into our BatteryTelematicsPro SaaS for OEMs and fleet customers

🐄 Livestock Monitoring AI
- Analyze BLE sensor data from our cattle wearables (motion, temperature, rumination proxies)
- Develop models for health anomaly detection, estrus prediction, movement patterns, and outlier behaviors
- Power actionable insights for farmers via mobile dashboards and alerts

🤖 Agentic AI & LLM Integration
- Chain ML outputs with LLMs (e.g., GPT-4, Claude) using LangChain or similar frameworks
- Build AI assistants that summarize events, auto-generate alerts, and respond to user queries using both structured and ML-derived data
- Support AI-powered explainability and insight-generation layers on top of raw telemetry

☁️ Cloud ML Engineering & DevOps
- Deploy models on AWS (Lambda, SageMaker, EC2, ECS, CloudWatch); an endpoint-invocation sketch follows this listing
- Design and maintain CI/CD pipelines for data, models, and APIs
- Optimize performance, cost, and scalability of cloud workloads

✅ You Must Have
- Solid foundation in ML/DL for time-series / telemetry data
- Hands-on experience with PyTorch / TensorFlow / Scikit-learn / XGBoost
- Experience with battery analytics or sensor-based animal behavior prediction
- Understanding of LangChain / OpenAI APIs / LLM orchestration
- AWS fluency: Lambda, EC2, S3, SageMaker, CloudWatch
- Python APIs

Nice to Have
- MLOps stack (MLflow, DVC, W&B)
- BLE signal processing or CAN bus protocol parsing
- Prompt engineering or fine-tuning experience
- Exposure to edge-to-cloud model deployment

Why Vanix Technologies? Because we're not another AI lab; we're a deep-tech company building production-ready AI platforms that interact with real devices in the field, used by farmers, OEMs, and EV fleets. You'll work at the intersection of IoT + AI + LLMs, hardware + cloud, and mission-critical data + everyday impact.
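
For the cloud-deployment bullet above, here is a hedged sketch of calling an already-deployed SageMaker endpoint from Python with boto3. The endpoint name and payload schema are invented for illustration.

```python
# Hedged sketch: invoking a SageMaker endpoint from a telemetry-facing service.
import json
import boto3

runtime = boto3.client("sagemaker-runtime", region_name="ap-south-1")

payload = {"cell_voltages": [3.91, 3.90, 3.92], "pack_temp_c": 41.5}
resp = runtime.invoke_endpoint(
    EndpointName="battery-soh-endpoint",   # hypothetical endpoint
    ContentType="application/json",
    Body=json.dumps(payload),
)
prediction = json.loads(resp["Body"].read())
print(prediction)  # e.g. {"soh": 0.87, "risk": "normal"} with a matching model
```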

Posted 2 weeks ago

Apply

4.0 - 9.0 years

15 - 30 Lacs

Pune, Bengaluru

Work from Office


Required Skills & Qualifications:
- 4 to 12 years of experience in Data Science, Machine Learning, or AI
- Proficiency in Python and libraries like Pandas, NumPy, Scikit-learn, TensorFlow, and PyTorch
- Experience with ML lifecycle tools such as MLflow, Airflow, or SageMaker
- Solid understanding of statistics, probability, and linear algebra
- Experience working with structured and unstructured data (SQL, NoSQL, text, images, etc.)
- Familiarity with cloud platforms like AWS, Azure, or GCP
- Strong problem-solving and analytical thinking skills

A typical workflow built on these tools is sketched below.
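As a small worked example of the scikit-learn workflow these requirements imply, here is a hedged sketch of model selection on synthetic data; all hyperparameter choices are illustrative.

```python
# Hedged sketch: everyday scikit-learn model selection on synthetic data.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_regression(n_samples=1000, n_features=20, noise=0.3, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

# Cross-validated search over a small, illustrative hyperparameter grid.
search = GridSearchCV(
    RandomForestRegressor(random_state=42),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 10]},
    cv=3, scoring="r2",
)
search.fit(X_tr, y_tr)
print(search.best_params_, round(search.score(X_te, y_te), 3))
```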

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Gurugram, Haryana, India

On-site


About the Role
We're looking for top-tier AI/ML Engineers with 6+ years of experience to join our fast-paced and innovative team. If you thrive at the intersection of GenAI, machine learning, MLOps, and application development, we want to hear from you. You'll have the opportunity to work on high-impact GenAI applications and build scalable systems that solve real business problems.

Key Responsibilities
- Design, develop, and deploy GenAI applications using techniques like RAG (Retrieval-Augmented Generation), prompt engineering, model evaluation, and LLM integration
- Architect and build production-grade Python applications using frameworks such as FastAPI or Flask (a service skeleton follows this listing)
- Implement gRPC services, event-driven systems (Kafka, Pub/Sub), and CI/CD pipelines for scalable deployment
- Collaborate with cross-functional teams to frame business problems as ML use cases: regression, classification, ranking, forecasting, and anomaly detection
- Own end-to-end ML pipeline development: data preprocessing, feature engineering, model training/inference, deployment, and monitoring
- Work with tools such as Airflow, Dagster, SageMaker, and MLflow to operationalize and orchestrate pipelines
- Ensure model evaluation, A/B testing, and hyperparameter tuning are done rigorously for production systems

Must-Have Skills
- Hands-on experience with GenAI/LLM-based applications: RAG, evals, vector stores, embeddings
- Strong backend engineering using Python, FastAPI/Flask, gRPC, and event-driven architectures
- Experience with CI/CD, infrastructure, containerization, and cloud deployment (AWS, GCP, or Azure)
- Proficiency in ML best practices: feature selection, hyperparameter tuning, A/B testing, model explainability
- Proven experience with batch data pipelines and training/inference orchestration
- Familiarity with tools like Airflow/Dagster, SageMaker, and data pipeline architecture
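
To illustrate the FastAPI requirement above, here is a minimal service skeleton. The route, schema, and stubbed RAG logic are placeholders rather than any real product code.

```python
# Hedged sketch: skeleton of a FastAPI service fronting a RAG-style workflow.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="example-genai-service")

class Query(BaseModel):
    question: str
    top_k: int = 4   # how many retrieved chunks to feed the LLM

@app.post("/answer")
def answer(q: Query) -> dict:
    # In a real RAG service: embed q.question, search a vector store for
    # q.top_k chunks, then prompt an LLM with the retrieved context.
    retrieved = ["<chunk-1>", "<chunk-2>"][: q.top_k]
    return {"question": q.question,
            "context_used": retrieved,
            "answer": "<llm output would go here>"}

# Run with: uvicorn app:app --reload   (assuming this file is app.py)
```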

Posted 2 weeks ago

Apply

3.0 - 4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


We're seeking a skilled Data Scientist with expertise in SQL, Python, AWS SageMaker, and commercial analytics to join the team. You'll design predictive models, uncover actionable insights, and deploy scalable solutions to recommend optimal customer interactions. This role is ideal for a problem-solver passionate about turning data into strategic value.

Key Responsibilities
- Model Development: Build, validate, and deploy machine learning models (e.g., recommendation engines, propensity models) using Python and AWS SageMaker to drive next-best-action decisions (a propensity-model sketch follows this listing)
- Data Pipeline Design: Develop efficient SQL queries and ETL pipelines to process large-scale commercial datasets (e.g., customer behavior, transactional data)
- Commercial Analytics: Analyze customer segmentation, lifetime value (CLV), and campaign performance to identify high-impact NBA opportunities
- Cross-functional Collaboration: Partner with marketing, sales, and product teams to align models with business objectives and operational workflows
- Cloud Integration: Optimize model deployment on AWS, ensuring scalability, monitoring, and performance tuning
- Insight Communication: Translate technical outcomes into actionable recommendations for non-technical stakeholders through visualizations and presentations
- Continuous Improvement: Stay updated on advancements in AI/ML, cloud technologies, and commercial analytics trends

Qualifications
- Education: Bachelor's/Master's in Data Science, Computer Science, Statistics, or a related field
- Experience: 3-4 years in data science, with a focus on commercial/customer analytics (e.g., pharma, retail, healthcare, e-commerce, or B2B sectors)
- Technical Skills:
  - Proficiency in SQL (complex queries, optimization) and Python (Pandas, NumPy, Scikit-learn)
  - Hands-on experience with AWS SageMaker (model training, deployment) and cloud services (S3, Lambda, EC2)
  - Familiarity with ML frameworks (XGBoost, TensorFlow/PyTorch) and A/B testing methodologies
- Analytical Mindset: Strong problem-solving skills with the ability to derive insights from ambiguous data
- Communication: Ability to articulate technical concepts to business stakeholders

Preferred Qualifications
- AWS Certified Machine Learning – Specialty or similar certifications
- Experience with big data tools (Spark, Redshift) or MLOps practices
- Knowledge of NLP, reinforcement learning, or real-time recommendation systems
- Exposure to BI tools (Tableau, Power BI) for dashboarding
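
Here is a hedged sketch of the propensity-modeling responsibility above, using XGBoost on synthetic data; the features, class balance, and hyperparameters are illustrative.

```python
# Hedged sketch: a propensity model for next-best-action scoring, on toy data.
import xgboost as xgb
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic imbalanced data standing in for customer-response history.
X, y = make_classification(n_samples=2000, n_features=15, weights=[0.8, 0.2],
                           random_state=7)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=7)

clf = xgb.XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1,
                        eval_metric="auc")
clf.fit(X_tr, y_tr)

scores = clf.predict_proba(X_te)[:, 1]   # propensity to respond
print("holdout AUC:", round(roc_auc_score(y_te, scores), 3))
```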

Posted 2 weeks ago

Apply

13.0 years

0 Lacs

Bangalore Urban, Karnataka, India

On-site


Location: Bangalore, Pune
Notice Period: Immediate to 10 days
Experience: 13+ Years

About the Role: We are looking for an experienced Engineering Manager (Generative AI) to lead client-facing AI/ML initiatives in a dynamic services environment. In this role, you will be the bridge between clients, data science teams, and engineering groups, ensuring the successful delivery of cutting-edge Generative AI solutions aligned to business goals. You will manage multiple concurrent projects involving emerging technologies such as large language models (LLMs), diffusion models, and prompt engineering, while handling client communications, technical planning, and delivery governance.

Key Responsibilities:
- Own and drive end-to-end Generative AI project delivery for clients, from pre-sales support and requirements gathering to deployment and post-launch support
- Coordinate with cross-functional teams (data scientists, MLOps engineers, software developers, UI/UX designers) to translate client needs into scalable AI solutions
- Manage project scope, timelines, budgets, and resource allocation across multiple client engagements
- Conduct client workshops, create solution roadmaps, and define MVPs with measurable KPIs
- Track progress using Agile methodologies and ensure timely delivery within SLA and quality standards
- Guide internal teams on technology choices and delivery best practices in the Generative AI ecosystem (e.g., OpenAI, Hugging Face, LangChain, Pinecone)
- Support proposal development, solution architecture reviews, and pricing during pre-sales activities
- Maintain clear, consistent communication with clients and internal leadership; manage escalations, risks, and delivery blockers proactively
- Stay current with advancements in Generative AI and bring innovation to client engagements

Required Qualifications:
- 13+ years of experience in software development, GenAI, and machine learning, with at least 5 years in technical project management roles
- Solid understanding of AI/ML workflows and hands-on familiarity with Generative AI concepts (LLMs, embeddings, vector search, prompt engineering, etc.)
- Conversant with Python and SQL
- Practical experience designing business solutions in these areas and explaining them to customers during pre-sales and/or delivery
- Excellent analysis and problem-solving skills, with the ability to think deeply about a problem
- Proven knowledge of data structures and algorithms
- Proven track record of strong leadership and decision-making skills to drive consensus on requirements and ensure timely delivery of critical customer-facing AI features
- Strong track record of developing, leading, coaching, and mentoring machine learning engineers and scientists
- Proficiency in Agile project management frameworks and tools (e.g., Jira, Confluence, Azure DevOps)
- Strong communication and stakeholder management skills, especially in client-facing roles
- Ability to cultivate a positive, growth-oriented environment
- Bachelor's degree in a related field such as computer science, software engineering, or data science is recommended; Master's or MBA is a plus

Preferred Skills:
- Experience working with cloud-native AI stacks (AWS SageMaker, Azure ML, Google Vertex AI, etc.)
- Exposure to MLOps and AI model lifecycle management
- Technical certifications in AI/ML, cloud, or project management (e.g., PMP, PMI-ACP, CSM)
- Experience in consulting or delivering multi-region/global enterprise AI projects

What's in it for You:
- Work with a fast-growing team on cutting-edge AI solutions across global clients
- Opportunity to shape and scale next-gen AI delivery practices
- Competitive compensation and flexible work arrangements
- Access to continuous learning, certifications, and professional growth in the AI domain

Posted 2 weeks ago

Apply

10.0 years

0 Lacs

Pune, Maharashtra, India

On-site


We are seeking a highly experienced and strategic Data Scientist to drive impactful data science initiatives across our organization. This role will focus on solving complex business problems by leveraging predictive and generative AI, agentic AI workflows, and large-scale data analytics. The ideal candidate will have a strong background in both data science and engineering, along with hands-on experience deploying AI solutions in production environments.

Job Description:

Key Responsibilities
- Identify high-impact business opportunities through deep data exploration, EDA, and model prototyping
- Translate business problems into scientific formulations and design data science solutions accordingly
- Discover and curate relevant datasets (structured and unstructured) aligned with business goals
- Lead the extraction, transformation, and compilation of data across multiple sources using tools like SQL, R, and Python
- Conduct rigorous statistical analysis and experimentation, driving hypothesis formulation and validation
- Automate analytics and reporting processes to ensure scalability, reduce manual effort, and spot anomalies
- Develop predictive models to support brand, creative, and audience strategies
- Analyze and interpret data trends over time to deliver actionable business insights
- Implement and fine-tune models in production, working closely with ML engineers and developers
- Deploy, monitor, and optimize predictive and generative AI models at scale
- Build agentic AI workflows that use contextual and real-time data to produce relevant and validated insights (a minimal retrieval sketch follows this listing)
- Fine-tune foundation models (e.g., LLMs) using proprietary datasets from multiple sources
- Integrate and orchestrate data flows across APIs and live feeds to centralize access and accelerate insights
- Champion model governance and performance tracking over time
- Collaborate with cross-functional teams (Strategy, Creative, Engineering, Marketing) to align on goals and deliverables
- Mentor junior data scientists and contribute to knowledge sharing and best practices

Qualifications
- Bachelor's or Master's degree in Data Science, Computer Science, Engineering, or a related field
- 10+ years of professional experience in data science, including 3–5 years in a lead or principal role
- 2+ years of domain experience in a marketing, creative advertising, or media agency context (highly preferred)
- Strong programming expertise in Python, with libraries such as pandas, NumPy, Matplotlib, scikit-learn, TensorFlow, PyTorch, and Keras
- Hands-on experience with machine learning and deep learning techniques (classification, regression, clustering, etc.)
- Expertise in both predictive AI (machine learning / deep learning) and generative AI solutions
- Experience with agentic AI workflows and reasoning models for insight generation
- Proficiency with cloud-based platforms like Azure ML, AWS SageMaker, Azure OpenAI, and AWS Bedrock
- Advanced SQL skills and experience working with structured and unstructured data
- Strong communication and leadership skills, with a proven ability to lead data science projects end to end

Preferred Skills
- Experience building production-ready APIs or data pipelines for ML models
- Experience with NLP, LLM fine-tuning, and retrieval-augmented generation (RAG)
- Familiarity with MLOps and CI/CD workflows for model deployment and lifecycle management
- Strong stakeholder management and cross-functional collaboration skills

Location: DGS India - Pune - Kharadi EON Free Zone
Brand: Dentsu Creative
Time Type: Full time
Contract Type: Permanent
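
As a minimal, hedged illustration of the retrieval half of the RAG/agentic work mentioned above, this sketch uses TF-IDF in place of learned embeddings so it runs anywhere; the documents and query are toy examples.

```python
# Hedged sketch: keyword-level retrieval standing in for an embedding-based RAG step.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "Q2 campaign lifted brand recall by 12 points among 18-34 audiences.",
    "Creative variant B outperformed A on click-through in the retail segment.",
    "Churn risk rises sharply after two missed engagement cycles.",
]
query = "Which creative performed best for retail?"

vec = TfidfVectorizer().fit(docs + [query])
sims = cosine_similarity(vec.transform([query]), vec.transform(docs))[0]
best = sims.argmax()
print(f"retrieved context -> {docs[best]!r} (score {sims[best]:.2f})")
# A production system would embed with an LLM encoder, store vectors in a
# vector DB, and pass the retrieved context to the generator model.
```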

Posted 2 weeks ago

Apply

10.0 years

0 Lacs

Gurugram, Haryana, India

On-site


We are seeking a highly experienced and strategic Data Scientist to drive impactful data science initiatives across our organization. This role will focus on solving complex business problems by leveraging predictive and generative AI, agentic AI workflows, and large-scale data analytics. The ideal candidate will have a strong background in both data science and engineering, along with hands-on experience deploying AI solutions in production environments.

Job Description:

Key Responsibilities
- Identify high-impact business opportunities through deep data exploration, EDA, and model prototyping
- Translate business problems into scientific formulations and design data science solutions accordingly
- Discover and curate relevant datasets (structured and unstructured) aligned with business goals
- Lead the extraction, transformation, and compilation of data across multiple sources using tools like SQL, R, and Python
- Conduct rigorous statistical analysis and experimentation, driving hypothesis formulation and validation
- Automate analytics and reporting processes to ensure scalability, reduce manual effort, and spot anomalies
- Develop predictive models to support brand, creative, and audience strategies
- Analyze and interpret data trends over time to deliver actionable business insights
- Implement and fine-tune models in production, working closely with ML engineers and developers
- Deploy, monitor, and optimize predictive and generative AI models at scale
- Build agentic AI workflows that use contextual and real-time data to produce relevant and validated insights
- Fine-tune foundation models (e.g., LLMs) using proprietary datasets from multiple sources
- Integrate and orchestrate data flows across APIs and live feeds to centralize access and accelerate insights
- Champion model governance and performance tracking over time
- Collaborate with cross-functional teams (Strategy, Creative, Engineering, Marketing) to align on goals and deliverables
- Mentor junior data scientists and contribute to knowledge sharing and best practices

Qualifications
- Bachelor's or Master's degree in Data Science, Computer Science, Engineering, or a related field
- 10+ years of professional experience in data science, including 3–5 years in a lead or principal role
- 2+ years of domain experience in a marketing, creative advertising, or media agency context (highly preferred)
- Strong programming expertise in Python, with libraries such as pandas, NumPy, Matplotlib, scikit-learn, TensorFlow, PyTorch, and Keras
- Hands-on experience with machine learning and deep learning techniques (classification, regression, clustering, etc.)
- Expertise in both predictive AI (machine learning / deep learning) and generative AI solutions
- Experience with agentic AI workflows and reasoning models for insight generation
- Proficiency with cloud-based platforms like Azure ML, AWS SageMaker, Azure OpenAI, and AWS Bedrock
- Advanced SQL skills and experience working with structured and unstructured data
- Strong communication and leadership skills, with a proven ability to lead data science projects end to end

Preferred Skills
- Experience building production-ready APIs or data pipelines for ML models
- Experience with NLP, LLM fine-tuning, and retrieval-augmented generation (RAG)
- Familiarity with MLOps and CI/CD workflows for model deployment and lifecycle management
- Strong stakeholder management and cross-functional collaboration skills

Location: DGS India - Pune - Kharadi EON Free Zone
Brand: Dentsu Creative
Time Type: Full time
Contract Type: Permanent

Posted 2 weeks ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Role: AIML Data Scientist
Job Location: Hyderabad
Mode of Interview: Virtual

Job Description
- Be a hands-on problem solver with a consultative approach who can apply Machine Learning and Deep Learning algorithms to solve business challenges
- Use knowledge of a wide variety of AI/ML techniques and algorithms to find what combination of techniques best solves the problem
- Improve model accuracy to deliver greater business impact
- Estimate the business impact of deploying a model
- Work with domain/customer teams to understand the business context and data dictionaries, and apply a relevant Deep Learning solution to the given business challenge
- Work with tools and scripts for pre-processing data and feature engineering for model development (Python / R / SQL / cloud data pipelines)
- Design, develop, and deploy Deep Learning models using TensorFlow / PyTorch
- Experience using Deep Learning models with text, speech, image, and video data
- Design and develop NLP models for text classification, custom entity recognition, relationship extraction, text summarization, topic modeling, reasoning over knowledge graphs, and semantic search, using NLP tools like spaCy and open-source TensorFlow, PyTorch, etc.
- Design and develop image recognition and video analysis models using Deep Learning algorithms and open-source tools like OpenCV
- Knowledge of state-of-the-art Deep Learning algorithms
- Optimize and tune Deep Learning models for the best possible accuracy
- Use visualization tools/modules to explore and analyze outcomes and for model validation, e.g. Power BI / Tableau
- Work with application teams to deploy models on the cloud as a service or on-prem
- Deploy models in a test/control framework for tracking
- Build CI/CD pipelines for ML model deployment
- Integrate AI/ML models with other applications using REST APIs and other connector technologies
- Constantly upskill and stay current with the latest techniques and best practices; write white papers and create demonstrable assets to summarize the AIML work and its impact

Technology/Subject Matter Expertise
- Sufficient expertise in machine learning and the mathematical and statistical sciences
- Use of versioning and collaboration tools like Git / GitHub
- Good understanding of the landscape of AI solutions: cloud, GPU-based compute, data security and privacy, API gateways, microservices-based architecture, big data ingestion, storage and processing, CUDA programming
- Ability to develop prototype-level ideas into solutions that scale to industrial-grade strength
- Ability to quantify and estimate the impact of ML models

Soft Skills Profile
- Curiosity to think in fresh and unique ways with the intent of breaking new ground
- Ability to share, explain, and "sell" their thoughts, processes, ideas, and opinions, even outside their own span of control
- Ability to think ahead and anticipate the needs of the problem
- Ability to communicate key messages effectively and articulate strong opinions in large forums

Desirable Experience:
- Keen contributor to open-source communities and communities like Kaggle
- Ability to process huge volumes of data using PySpark/Hadoop
- Development and application of Reinforcement Learning
- Knowledge of optimization/genetic algorithms
- Operationalizing Deep Learning models for a customer and understanding the nuances of scaling such models in real scenarios
- Understanding of stream data processing, RPA, edge computing, AR/VR, etc.
- Appreciation of digital ethics and data privacy
- Experience with AI and cognitive services platforms like Azure ML, IBM Watson, AWS SageMaker, and Google Cloud is a big plus
- Experience with platforms like DataRobot, CognitiveScale, H2O.ai, etc. is a big plus

Posted 2 weeks ago

Apply

1.0 - 3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Join Amgen's Mission of Serving Patients
At Amgen, if you feel like you're part of something bigger, it's because you are. Our shared mission, to serve patients living with serious illnesses, drives all that we do. Since 1980, we've helped pioneer the world of biotech in our fight against the world's toughest diseases. With our focus on four therapeutic areas (Oncology, Inflammation, General Medicine, and Rare Disease), we reach millions of patients each year. As a member of the Amgen team, you'll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science based. If you have a passion for challenges and the opportunities that lay within them, you'll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

Data Engineer

What You Will Do
Let's do this. Let's change the world. In this vital role you will be responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. The role involves working with large datasets, developing reports, supporting and performing data governance initiatives, and visualizing data to ensure it is accessible, reliable, and efficiently managed. The ideal candidate has deep technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.

Roles & Responsibilities:
- Design, develop, and maintain data solutions for data generation, collection, and processing
- Be a crucial team member that assists in the design and development of the data pipeline
- Build data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems
- Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions
- Take ownership of data pipeline projects from inception to deployment; manage scope, timelines, and risks
- Collaborate with cross-functional teams to understand data requirements and design solutions that meet business needs
- Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency
- Implement data security and privacy measures to protect sensitive data
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions
- Collaborate and communicate effectively with product teams
- Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions
- Identify and resolve complex data-related challenges
- Adhere to best practices for coding, testing, and designing reusable code/components
- Explore new tools and technologies that will help improve ETL platform performance
- Participate in sprint planning meetings and provide estimations on technical implementation

What We Expect Of You
We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
- Master's degree and 1 to 3 years of Computer Science, IT, or related field experience; OR
- Bachelor's degree and 3 to 5 years of Computer Science, IT, or related field experience; OR
- Diploma and 7 to 9 years of Computer Science, IT, or related field experience

Preferred Qualifications:

Must-Have Skills:
- Hands-on experience with big data technologies and platforms, such as Databricks and Apache Spark (PySpark, SparkSQL), including workflow orchestration and performance tuning on big data processing
- Proficiency in data analysis tools (e.g. SQL) and experience with data visualization tools
- Excellent problem-solving skills and the ability to work with large, complex datasets
- Solid understanding of data governance frameworks, tools, and best practices
- Knowledge of data protection regulations and compliance requirements

Good-to-Have Skills:
- Experience with ETL tools such as Apache Spark, and various Python packages for data processing and machine learning model development
- Good understanding of data modeling, data warehousing, and data integration concepts
- Knowledge of Python/R, Databricks, SageMaker, and cloud data platforms

Professional Certifications:
- Certified Data Engineer / Data Analyst (preferably on Databricks or cloud environments)

Soft Skills:
- Excellent critical-thinking and problem-solving skills
- Good communication and collaboration skills
- Demonstrated awareness of how to function in a team setting
- Demonstrated presentation skills

What You Can Expect Of Us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team: careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed, and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Job Description

Job summary:

Our Firmwide Risk Function is focused on cultivating a stronger, unified culture that embraces a sense of personal accountability for developing the highest corporate standards in governance and controls across the firm. Business priorities are built around the need to strengthen and guard the firm from the many risks we face: financial rigor, risk discipline, fostering a transparent culture, and doing the right thing in every situation. We are equally focused on nurturing talent, respecting the diverse experiences that our team of Risk professionals bring, and embracing an inclusive environment.

Chase Consumer & Community Banking serves consumers and small businesses with a broad range of financial services, including personal banking, small business banking and lending, mortgages, credit cards, payments, auto finance, and investment advice. Consumer & Community Banking Risk Management partners with each CCB sub-line of business to identify, assess, prioritize, and remediate risk. Types of risk that occur in consumer businesses include fraud, reputation, operational, credit, market, and regulatory, among others.

Join our Model Insights Team, a Center of Excellence within Consumer & Community Banking (CCB) Risk Modeling, committed to tracking the comprehensive health of machine learning models. We are responsible for the sanity of model inputs and for score performance tracking for CCB risk decision models. The team collaborates with model developers to identify and recommend potential opportunities for model calibration. We are constantly seeking opportunities to enhance the model performance tracking framework, with the aim of providing a feedback loop to risk strategies. We are seeking candidates who possess extensive knowledge of data science techniques, an appreciation for data combined with domain expertise, and a keen eye for detail and logic. It’s an opportunity to make an impact on model performance monitoring and governance practices for CCB risk models.

Job Responsibilities

  • Drive synergy in model performance tracking across different sub-lines of business.
  • Enhance the model performance framework to holistically capture model health, providing actionable insights to model users.
  • Collaborate with model developers to identify potential opportunities for model calibration and conduct preliminary Root Cause Analysis in case of model performance decay.
  • Design and build a robust framework to monitor the quality of model inputs.
  • Explore opportunities to drive efficiency in model inputs and performance tracking through use of Large Language Models (LLMs).
  • Partner with teams across Risk, Technology, Data Governance, and Control to support effective model performance management and insights.
  • Deliver regular updates on model health to senior leadership of the risk organization and the first line of defense.

Required Qualifications, Capabilities, And Skills

  • Advanced degree in Mathematics, Statistics, Computer Science, Operations Research, Econometrics, Physics, or a related quantitative field.
  • Minimum of 3 years of experience in developing and managing predictive risk models in the financial industry.
  • Proficiency in programming languages such as Python, PySpark, and SQL, along with familiarity with cloud services like AWS SageMaker and Amazon EMR.
  • Deep understanding of advanced machine learning algorithms (e.g., Decision Trees, Random Forest, XGBoost, Neural Networks, Clustering, etc.).
  • Strong conceptual understanding of the performance metrics used to monitor the health of machine learning models.
  • Fundamental understanding of the consumer lending business and risk management practices.
  • Experience working with large datasets, with a strong ability to analyze, interpret, and derive insights from data.
  • Advanced problem-solving and analytical skills, with a keen attention to detail.
  • Excellent communication skills, with the ability to convey complex information clearly and effectively to senior management.

Preferred Qualifications, Capabilities, And Skills

  • Experience with data wrangling and model building on a distributed Spark computation environment (with stability, scalability, and efficiency).
  • Proven expertise in designing, building, and deploying production-quality machine learning models.
  • Ability to effectively collaborate with multiple stakeholders on projects of strategic importance, ensuring alignment and successful outcomes.
  • Basic proficiency in Tableau.

ABOUT US

JPMorganChase, one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses and many of the world’s most prominent corporate, institutional and government clients under the J.P. Morgan and Chase brands. Our history spans over 200 years and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing and asset management. We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. We also make reasonable accommodations for applicants’ and employees’ religious practices and beliefs, as well as mental health or physical disability needs. Visit our FAQs for more information about requesting an accommodation.

About The Team

Our Consumer & Community Banking division serves our Chase customers through a range of financial services, including personal banking, credit cards, mortgages, auto financing, investment advice, small business loans and payment processing. We’re proud to lead the U.S. in credit card sales and deposit growth and have the most-used digital solutions – all while ranking first in customer satisfaction. The CCB Data & Analytics team responsibly leverages data across Chase to build competitive advantages for the businesses while providing value and protection for customers. The team encompasses a variety of disciplines from data governance and strategy to reporting, data science and machine learning. We have a strong partnership with Technology, which provides cutting edge data and analytics infrastructure. The team powers Chase with insights to create the best customer and business outcomes.
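Score performance tracking of the kind this team owns often begins with a population stability check comparing a baseline score distribution against recent production scores. A minimal sketch, assuming synthetic score distributions and commonly cited (but team-specific) PSI thresholds:

    # Population Stability Index (PSI) between baseline and recent model scores.
    # Rule of thumb (varies by team): <0.1 stable, 0.1-0.25 moderate, >0.25 significant shift.
    import numpy as np

    def population_stability_index(expected, actual, bins=10):
        cuts = np.percentile(expected, np.linspace(0, 100, bins + 1))
        e_counts, _ = np.histogram(expected, cuts)
        a_counts, _ = np.histogram(np.clip(actual, cuts[0], cuts[-1]), cuts)
        e_pct = np.clip(e_counts / len(expected), 1e-6, None)   # avoid log(0)
        a_pct = np.clip(a_counts / len(actual), 1e-6, None)
        return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

    baseline = np.random.beta(2, 5, 10_000)    # stand-in for development-time scores
    recent = np.random.beta(2.2, 5, 10_000)    # stand-in for recent production scores
    print(f"PSI = {population_stability_index(baseline, recent):.4f}")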

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

India

Remote

Halo believes in innovation by inclusion to solve digital problems. As an international agency of over 200 people specializing in interactive media strategy and development, we embrace equity and empowerment in a serious way. Our interdisciplinary teams of unique designers, developers and entrepreneurial minds with a variety of backgrounds, viewpoints, and skills connect to solve business challenges of every shape and size. We empathize to form deep, meaningful relationships with our clients so they can do the same with their audience. Working at Halo feels like belonging. Learn more about our philosophy, benefits, and team at https://halopowered.com/

As an AI Architect, you will lead the design of scalable, secure, and modern technology solutions, leveraging artificial intelligence, cloud platforms, and microservices—while ensuring alignment with AI governance principles, agile delivery, and platform modernization strategies.

As a Data Scientist, you'll be part of a multidisciplinary team applying advanced analytics, machine learning, and generative AI to solve real-world problems across our consulting, health, wealth, and career businesses. You will collaborate closely with engineering, product, and business stakeholders to develop scalable models, design intelligent pipelines, and influence data-driven decision-making across the enterprise.

Requirements

  • Design, develop, and deploy robust machine learning models and data pipelines that support AI-enabled applications
  • Apply exploratory data analysis (EDA) and feature engineering techniques to extract insights and improve model performance
  • Collaborate with cross-functional teams to translate business problems into analytical use cases
  • Contribute to the full machine learning lifecycle: from data preparation and model experimentation to deployment and monitoring
  • Work with structured and unstructured data, including text, to develop NLP and generative AI solutions
  • Define and enforce best practices in model validation, reproducibility, documentation, and versioning
  • Partner with engineering to integrate models into production systems using CI/CD pipelines and cloud-native services
  • Stay current with industry trends, emerging techniques (e.g., RAG, LLMs, embeddings), and relevant tools

Required Skills & Qualifications

  • 3+ years of experience in Data Science, Machine Learning, or Applied AI roles
  • Proficiency in Python (preferred) and a strong grasp of pandas, NumPy, and scikit-learn
  • Skilled in data querying, manipulation, and pipeline development using SQL and modern ETL frameworks
  • Experience working with Databricks, including notebooks, MLflow, Delta Lake, and job orchestration
  • Experience with Git-based workflows and Agile methodologies
  • Strong analytical thinking, problem-solving skills, and communication abilities
  • Exposure to Generative AI, LLMs, prompt engineering, or vector-based search
  • Hands-on experience with cloud platforms (AWS, Azure, or GCP) and deploying models in scalable environments
  • Knowledge of data versioning, model registry, and ML lifecycle tools (e.g., MLflow, DVC, SageMaker, Databricks, or Vertex AI)
  • Experience working with visualization tools like Tableau, Power BI, or Qlik
  • Degree in Computer Science, Data Science, Applied Mathematics, or a related field

Benefits

  • 100% Remote Work
  • Salary in USD
  • Get to work on challenging projects for the U.S.
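The reproducibility and MLflow experience called for above usually shows up in practice as experiment tracking like the following sketch; the dataset, model, and parameters are placeholders, not a prescribed workflow.

    # Log a baseline experiment to MLflow: parameters, a metric, and the model artifact.
    import mlflow
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=5000, random_state=42)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

    with mlflow.start_run(run_name="rf-baseline"):
        params = {"n_estimators": 200, "max_depth": 8}
        model = RandomForestClassifier(**params, random_state=42).fit(X_tr, y_tr)
        auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
        mlflow.log_params(params)
        mlflow.log_metric("test_auc", auc)
        mlflow.sklearn.log_model(model, "model")   # artifact for later registry/deployment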

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Description

Job Title: MLOps Engineer
Company: Aaizel International Technologies Pvt. Ltd.
Location: Gurugram
Experience Required: 6+ Years
Employment Type: Full-Time

About Aaizeltech

Aaizeltech is a deep-tech company building AI/ML-powered platforms, scalable SaaS applications, and intelligent embedded systems. We are seeking a Senior MLOps Engineer to lead the architecture, deployment, automation, and scaling of infrastructure and ML systems across multiple product lines.

Role Overview

This role requires strong expertise and hands-on MLOps experience. You will architect and manage cloud infrastructure, CI/CD systems, Kubernetes clusters, and full ML pipelines—from data ingestion to deployment and drift monitoring.

Key Responsibilities

MLOps Responsibilities:

  • Collaborate with data scientists to operationalize ML workflows.
  • Build complete ML pipelines with Airflow, Kubeflow Pipelines, or Metaflow.
  • Deploy models using KServe, Seldon Core, BentoML, TorchServe, or TF Serving.
  • Package models into Docker containers using Flask, FastAPI, or Django for APIs.
  • Automate dataset versioning and model tracking via DVC and MLflow.
  • Set up model registries and ensure reproducibility and audit trails.
  • Implement model monitoring for: (i) data drift and schema validation (using tools like Evidently AI, Alibi Detect); (ii) performance metrics (accuracy, precision, recall); (iii) infrastructure metrics (latency, throughput, memory usage).
  • Implement event-driven retraining workflows triggered by drift alerts or data freshness.
  • Schedule GPU workloads on Kubernetes and manage resource utilization for ML jobs.
  • Design and manage secure, scalable infrastructure using AWS, GCP, or Azure.
  • Build and maintain CI/CD pipelines using Jenkins, GitLab CI, GitHub Actions, or AWS DevOps.
  • Write and manage Infrastructure as Code using Terraform, Pulumi, or CloudFormation.
  • Automate configuration management with Ansible, Chef, or SaltStack.
  • Manage Docker containers and advanced Kubernetes resources (Helm, StatefulSets, CRDs, DaemonSets).
  • Implement robust monitoring and alerting stacks: Prometheus, Grafana, CloudWatch, Datadog, ELK, or Loki.

Must-Have Skills

  • Advanced expertise in Linux administration, networking, and shell scripting.
  • Strong knowledge of Docker, Kubernetes, and container security.
  • Hands-on experience with IaC tools like Terraform and configuration management tools like Ansible.
  • Proficiency in cloud-native services: IAM, EC2, EKS/GKE/AKS, S3, VPCs, Load Balancing, Secrets Manager.
  • Mastery of CI/CD tools (e.g., Jenkins, GitLab, GitHub Actions).
  • Familiarity with SaaS architecture, distributed systems, and multi-environment deployments.
  • Proficiency in Python for scripting and ML-related deployments.
  • Experience integrating monitoring, alerting, and incident management workflows.
  • Strong understanding of DevSecOps, security scans (e.g., Trivy, SonarQube, Snyk), and secrets management tools (Vault, SOPS).
  • Experience with GPU orchestration and hybrid on-prem + cloud environments.

Nice-to-Have Skills

  • Knowledge of GitOps workflows (e.g., ArgoCD, FluxCD).
  • Experience with Vertex AI, SageMaker Pipelines, or Triton Inference Server.
  • Familiarity with Knative, Cloud Run, or serverless ML deployments.
  • Exposure to cost estimation, rightsizing, and usage-based autoscaling.
  • Understanding of ISO 27001, SOC2, or GDPR-compliant ML deployments.
  • Knowledge of RBAC for Kubernetes and ML pipelines.
Who You'll Work With

  • AI/ML Engineers, Backend Developers, Frontend Developers, QA Team
  • Product Owners, Project Managers, and external Government or Enterprise Clients

How to Apply

If you are passionate about MLOps and excited to work on next-generation technologies, we would love to hear from you. Please send your resume and a cover letter outlining your relevant experience to hr@aaizeltech.com or bhavik@aaizeltech.com or anju@aaizeltech.com (Contact No: 7302201247)
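Packaging a model behind an API, as the responsibilities above mention, might look roughly like this FastAPI sketch; the model file and request schema are assumptions, and the container entrypoint appears only as a comment.

    # Minimal model-serving sketch with FastAPI; model.joblib is a hypothetical artifact.
    import joblib
    import numpy as np
    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI(title="model-service")
    model = joblib.load("model.joblib")   # hypothetical serialized estimator

    class Features(BaseModel):
        values: list[float]               # flat feature vector; schema is illustrative

    @app.post("/predict")
    def predict(features: Features) -> dict:
        x = np.asarray(features.values).reshape(1, -1)
        return {"prediction": float(model.predict(x)[0])}

    # In a Dockerfile, the entrypoint would be something like:
    #   uvicorn main:app --host 0.0.0.0 --port 8080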

Posted 2 weeks ago

Apply

5.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

We are seeking a highly skilled and experienced L3 AWS Network Engineer to join our dynamic team in Hyderabad. As a key member of our technology organization, you will be responsible for the design, implementation, and management of our critical network infrastructure within the Amazon Web Services (AWS) cloud environment. You will leverage your deep understanding of AWS networking services and cloud networking principles to ensure the reliability, security, and performance of our applications and services. This role requires strong technical expertise, excellent problem-solving abilities, and the capacity to collaborate effectively with cross-functional teams to drive business outcomes. Familiarity with AI/ML AWS services and best practices is a significant plus.

Responsibilities

  • Design, implement, and manage secure and scalable network architectures on AWS, including VPCs, subnets, security groups, NACLs, route tables, Transit Gateway, Direct Connect, and VPNs.
  • Troubleshoot complex network issues across the AWS environment, identifying root causes and implementing effective solutions.
  • Implement and maintain network monitoring and alerting systems to proactively identify and resolve potential issues.
  • Automate network provisioning and configuration tasks using Infrastructure as Code (IaC) tools such as Terraform or CloudFormation.
  • Ensure compliance with security policies and best practices for network configurations in AWS.
  • Collaborate with Security teams to implement and manage network security controls, including firewalls, intrusion detection/prevention systems, and network segmentation.
  • Optimize network performance and cost efficiency within the AWS environment.
  • Participate in the planning and execution of cloud migration projects, ensuring seamless network connectivity and minimal disruption.
  • Document network designs, configurations, and operational procedures.
  • Provide technical guidance and mentorship to junior network engineers.
  • Stay up-to-date with the latest AWS networking services, features, and best practices.
  • Collaborate effectively with Engineering, Operations, and Business Units to understand their requirements and translate them into robust network solutions.
  • Contribute to the development and implementation of network standards and policies.
  • Apply familiarity with AI/ML AWS services (e.g., SageMaker, Comprehend, Rekognition) and best practices to support related infrastructure needs.
  • Communicate technical concepts and solutions clearly and effectively, both verbally and in writing, to both technical and non-technical audiences.
  • Participate in on-call rotation as needed to support critical network infrastructure.

Requirements

  • Bachelor's degree in Computer Science, Engineering, or a related field.
  • Minimum of 5-7 years of experience in network engineering, with a significant focus on AWS cloud environments.
  • Strong understanding of AWS products and services, with in-depth knowledge of core networking services (VPC, EC2, S3, ELB/ALB, Route 53, etc.).
  • Solid knowledge of cloud networking fundamentals and technologies, including TCP/IP, DNS, DHCP, routing protocols (BGP, OSPF), VPN, and firewall concepts.
  • Experience with Infrastructure as Code (IaC) tools such as Terraform or AWS CloudFormation.
  • Proficiency in scripting languages such as Python or Bash for network automation.
  • Experience with network monitoring and management tools (e.g., CloudWatch, SolarWinds, Nagios).
  • Solid understanding of network security principles and best practices in a cloud environment.
  • Experience with implementing and managing network security controls (Security Groups, NACLs, WAF, etc.).
  • Excellent analytical and problem-solving skills with the ability to troubleshoot complex network issues.
  • Ability to collaborate effectively with cross-functional teams, including Engineering, Operations, and Business Units, to achieve business objectives.
  • Familiarity with AI/ML AWS services and best practices.
  • Proven written and verbal communication skills, with the ability to articulate technical information clearly and concisely.
  • AWS Certified Advanced Networking - Specialty certification.
  • Experience with container networking (e.g., ECS, EKS, Kubernetes network policies).
  • Knowledge of hybrid cloud networking architectures and technologies (e.g., AWS Direct Connect, VPN).
  • Experience with network performance optimization and cost management in AWS.
  • Familiarity with DevOps principles and practices.
  • Experience with security compliance frameworks (e.g., SOC 2, PCI DSS, HIPAA).
  • Exposure to serverless networking concepts (e.g., Lambda networking).

This job was posted by Jayanthi C from Aequalis Software Solutions.
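The Python network-automation skill listed above often amounts to small boto3 audit scripts. As a hedged illustration (describe_security_groups is a real EC2 API; the region and the audit rule are arbitrary choices), here is a check for security groups that allow SSH from anywhere:

    # Flag security groups with unrestricted SSH ingress (0.0.0.0/0 on port 22).
    import boto3

    ec2 = boto3.client("ec2", region_name="ap-south-1")   # region is an example

    for page in ec2.get_paginator("describe_security_groups").paginate():
        for sg in page["SecurityGroups"]:
            for perm in sg.get("IpPermissions", []):
                if perm.get("FromPort") == 22 and any(
                    r.get("CidrIp") == "0.0.0.0/0" for r in perm.get("IpRanges", [])
                ):
                    print(f"Open SSH ingress: {sg['GroupId']} ({sg['GroupName']})")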

Posted 2 weeks ago

Apply

3.0 years

0 - 0 Lacs

Chandigarh

On-site

Job Title: Experienced AI Developer
Location: Chandigarh
Job Type: Full-Time
Experience Level: Mid to Senior-Level

Job Summary:

As an AI Developer, you will be responsible for designing, developing, and deploying machine learning and deep learning models. You will work closely with our data science, product, and engineering teams to integrate AI capabilities into our software applications.

Key Responsibilities:

  • Design and implement AI/ML models tailored to business requirements.
  • Train, fine-tune, and evaluate models using datasets from various domains.
  • Integrate AI solutions into web or mobile applications.
  • Collaborate with cross-functional teams to define AI strategies.
  • Optimize models for performance, scalability, and reliability.
  • Stay updated with the latest advancements in AI/ML technologies and frameworks.
  • Deploy models to production environments using tools like Docker, Kubernetes, or cloud services (AWS/GCP/Azure).
  • Write clean, maintainable, and well-documented code.

Required Skills and Qualifications:

  • 3+ years of experience in AI/ML development.
  • Strong proficiency in Python and popular ML libraries (TensorFlow, PyTorch, Scikit-learn).
  • Hands-on experience with NLP, computer vision, or recommendation systems.
  • Experience with data preprocessing, feature engineering, and model evaluation.
  • Familiarity with REST APIs and microservices architecture.
  • Solid understanding of AI ethics, bias mitigation, and responsible AI development.

Preferred Qualifications:

  • Experience with large language models (e.g., GPT, LLaMA, Claude).
  • Knowledge of AI deployment tools like MLflow, SageMaker, or Vertex AI.
  • Experience with prompt engineering or fine-tuning foundation models.

Job Type: Full-time
Pay: ₹35,000.00 - ₹70,000.00 per month
Benefits: Health insurance
Schedule: Day shift
Supplemental Pay: Performance bonus
Work Location: In person
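The train-and-evaluate loop described in the responsibilities can be illustrated with a deliberately tiny scikit-learn text pipeline; the corpus and labels below are placeholders, not real training data.

    # Toy NLP classifier: TF-IDF features + logistic regression.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    texts = ["great product", "terrible support", "love this app", "awful experience"]
    labels = [1, 0, 1, 0]   # 1 = positive, 0 = negative (illustrative)

    clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    clf.fit(texts, labels)
    print(clf.predict(["support was great"]))   # predicts a sentiment label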

Posted 2 weeks ago

Apply

0.0 - 3.0 years

3 - 9 Lacs

Hyderābād

On-site

India - Hyderabad JOB ID: R-210152 LOCATION: India - Hyderabad WORK LOCATION TYPE: On Site DATE POSTED: Mar. 28, 2025 CATEGORY: Information Systems

Join Amgen’s Mission of Serving Patients

At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas – Oncology, Inflammation, General Medicine, and Rare Disease – we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

Associate Data Engineer

What you will do

Let’s do this. Let’s change the world. In this vital role, we are seeking an Associate Data Engineer to design, build, and maintain scalable data solutions that drive business insights. You will work with large datasets, cloud platforms (AWS preferred), and big data technologies to develop ETL pipelines, ensure data quality, and support data governance initiatives.

  • Develop and maintain data pipelines, ETL/ELT processes, and data integration solutions.
  • Design and implement data models, data dictionaries, and documentation for accuracy and consistency.
  • Ensure adherence to standard processes for data security, privacy, and governance.
  • Use Databricks, Apache Spark (PySpark, SparkSQL), AWS, and Redshift for scalable data processing.
  • Collaborate with cross-functional teams to understand data needs and deliver actionable insights.
  • Optimize data pipeline performance and explore new tools for efficiency.
  • Follow best practices in coding, testing, and infrastructure-as-code (CI/CD, version control, automated testing).

What we expect of you

We are all different, yet we all use our unique contributions to serve patients.

  • Strong problem-solving, critical thinking, and communication skills.
  • Ability to collaborate effectively in a team setting.
  • Proficiency in SQL, data analysis tools, and data visualization.
  • Hands-on experience with big data technologies (Databricks, Apache Spark, AWS, Redshift).
  • Experience with ETL tools, workflow orchestration, and performance tuning for big data.

Basic Qualifications:

  • Bachelor’s degree and 0 to 3 years of experience OR Diploma and 4 to 7 years of experience in Computer Science, IT, or a related field.

Preferred Qualifications:

  • Knowledge of data modeling, warehousing, and graph databases.
  • Experience with Python, SageMaker, and cloud data platforms.
  • AWS Certified Data Engineer or Databricks certification preferred.

What you can expect of us

As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team.
careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
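The automated-testing bullet in this posting often translates into assertion-style data-quality gates run inside the pipeline. A minimal sketch, assuming hypothetical paths and rules:

    # Simple data-quality gates for a curated table; path and rules are assumptions.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("dq-checks").getOrCreate()
    df = spark.read.parquet("s3://example-bucket/curated/records/")

    null_ids = df.filter(F.col("record_id").isNull()).count()
    dupes = df.count() - df.dropDuplicates(["record_id"]).count()

    assert null_ids == 0, f"{null_ids} rows with a null record_id"
    assert dupes == 0, f"{dupes} duplicate record_id rows"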

Posted 2 weeks ago

Apply

1.0 - 3.0 years

3 - 9 Lacs

Hyderābād

On-site

India - Hyderabad JOB ID: R-210154 LOCATION: India - Hyderabad WORK LOCATION TYPE: On Site DATE POSTED: Mar. 28, 2025 CATEGORY: Information Systems

Join Amgen’s Mission of Serving Patients

At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas – Oncology, Inflammation, General Medicine, and Rare Disease – we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

Data Engineer

What you will do

Let’s do this. Let’s change the world. In this vital role you will be responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and executing data governance initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.

Roles & Responsibilities:

  • Design, develop, and maintain data solutions for data generation, collection, and processing.
  • Be a key team member that assists in the design and development of the data pipeline.
  • Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems.
  • Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions.
  • Take ownership of data pipeline projects from inception to deployment; manage scope, timelines, and risks.
  • Collaborate with cross-functional teams to understand data requirements and design solutions that meet business needs.
  • Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency.
  • Implement data security and privacy measures to protect sensitive data.
  • Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions.
  • Collaborate and communicate effectively with product teams.

What we expect of you

We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications and Experience

  • Master’s degree and 1 to 3 years of experience in Computer Science, IT, or related field OR
  • Bachelor’s degree and 3 to 5 years of experience in Computer Science, IT, or related field OR
  • Diploma and 7 to 9 years of experience in Computer Science, IT, or related field

Must-Have Skills:

  • Hands-on experience with big data technologies and platforms, such as Databricks, Apache Spark (PySpark, SparkSQL), workflow orchestration, and performance tuning on big data processing.
  • Proficiency in data analysis tools (e.g., SQL) and experience with data visualization tools.
  • Excellent problem-solving skills and the ability to work with large, complex datasets.
Preferred Qualifications:

Good-to-Have Skills:

  • Experience with ETL tools such as Apache Spark, and various Python packages related to data processing and machine learning model development.
  • Strong understanding of data modeling, data warehousing, and data integration concepts.
  • Knowledge of Python/R, Databricks, SageMaker, and cloud data platforms.

Professional Certifications:

  • Certified Data Engineer / Data Analyst (preferred on Databricks or cloud environments).
  • Certified Data Scientist (preferred on Databricks or cloud environments).
  • Machine Learning Certification (preferred on Databricks or cloud environments).

Soft Skills:

  • Excellent critical-thinking and problem-solving skills.
  • Strong communication and collaboration skills.
  • Demonstrated awareness of how to function in a team setting.
  • Demonstrated presentation skills.

Shift Information: This position requires you to work a later shift and may be assigned a second or third shift schedule. Candidates must be willing and able to work during evening or night shifts, as required based on business requirements.

What you can expect of us

As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team.

careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.

Posted 2 weeks ago

Apply

3.0 years

10 - 35 Lacs

India

On-site

Job Summary

We are seeking a results-driven Data Scientist to join our growing analytics and data science team. The ideal candidate will have hands-on experience working with Big Data platforms, designing and deploying AI/ML models, and leveraging Python and AWS services to solve complex business problems.

Key Responsibilities

  • Design, develop, and implement machine learning and deep learning models for predictive analytics and automation.
  • Work with large-scale structured and unstructured datasets using Big Data technologies (e.g., Spark, Hadoop, Hive).
  • Build scalable data pipelines and model deployment workflows in AWS (S3, Lambda, SageMaker, EMR, Glue, Redshift).
  • Perform advanced statistical analysis, hypothesis testing, and A/B testing to support data-driven decisions.
  • Develop, clean, and validate datasets using Python, Pandas, NumPy, and PySpark.

Required Qualifications

  • Bachelor's or Master’s degree in Computer Science, Data Science, Statistics, Engineering, or a related field.
  • 3+ years of experience in a Data Science role, preferably in a cloud-first environment.
  • Proficiency in Python and its data science libraries (e.g., scikit-learn, TensorFlow/PyTorch, pandas, NumPy).
  • Hands-on experience with Big Data tools such as Spark, Hive, or Hadoop.

Job Types: Full-time, Permanent
Pay: ₹1,005,888.28 - ₹3,586,818.50 per year
Benefits: Health insurance
Schedule: Day shift, Monday to Friday, Morning shift
Work Location: On the road
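A SageMaker training-and-deployment workflow like the one listed under responsibilities might be sketched with the SageMaker Python SDK as follows; the image URI, role ARN, and S3 paths are placeholders to replace.

    # Launch a training job, then deploy the result behind a real-time endpoint.
    import sagemaker
    from sagemaker.estimator import Estimator

    session = sagemaker.Session()
    estimator = Estimator(
        image_uri="<training-image-uri>",                      # placeholder
        role="arn:aws:iam::123456789012:role/ExampleRole",     # placeholder
        instance_count=1,
        instance_type="ml.m5.xlarge",
        output_path="s3://example-bucket/models/",             # placeholder
        sagemaker_session=session,
    )
    estimator.fit({"train": "s3://example-bucket/train/"})     # placeholder channel

    predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")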

Posted 2 weeks ago

Apply

12.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About Us:

mavQ is an innovative AI company that provides intelligent business automation solutions, empowering organizations with AI-driven tools to streamline operations, enhance efficiency, and accelerate digital transformation. Headquartered in the U.S., with offices in India, mavQ simplifies complex workflows, automates document processing, and delivers actionable insights. Scalable and customizable, mavQ enables organizations to optimize processes, reduce manual effort, and achieve their business goals with ease.

Role Overview:

We are seeking an experienced and dynamic Technology Leader to lead our talented engineering team. The ideal candidate will have a strong background in building B2B SaaS applications using Java, Spring, and Angular or React for the frontend. This role requires extensive experience across the full stack engineering spectrum, a deep understanding of cloud platforms such as AWS and GCP, and proficiency in system and network architecture. Additionally, the candidate should have hands-on experience with cloud infrastructure for application deployment and an understanding of integrating and hosting machine learning models.

Job Title: VP - Product Development
Location: Hyderabad, India

Key Responsibilities:

Strategic Leadership:

  • Develop and execute the engineering strategy that aligns with the company’s vision, goals, and business objectives.
  • Collaborate with executive leadership to shape the product roadmap and ensure that engineering efforts are in sync with business priorities.
  • Drive innovation within the engineering team, identifying emerging technologies and trends that can create competitive advantages.

Customer Trust & Success:

  • Champion customer-centric development practices, ensuring that all engineering efforts are focused on delivering value and building trust with customers.
  • Collaborate with customer success, product, and sales teams to understand customer needs and feedback, and translate them into actionable engineering strategies.
  • Ensure that engineering teams are equipped to deliver reliable, secure, and scalable products that instill confidence in our customers.

Technical Leadership & Operations:

Cloud & Infrastructure Management:

  • Design and implement robust system and network architectures utilizing AWS and GCP to build scalable, reliable cloud solutions.
  • Deploy and manage applications on Kubernetes, ensuring optimal performance and scalability.
  • Handle traffic routing with Ingress Controllers (Nginx), oversee certificate management using Cert Manager, and manage secrets with Sealed Secrets and Vault.
  • Enhance application performance with caching solutions like Redis and Memcache, and implement comprehensive logging and tracing systems using Loki, Promtail, Tempo, and OpenTelemetry (Otel).
  • Establish and maintain monitoring and alerting systems with Grafana, Prometheus, and BlackBoxExporter.
  • Manage Infrastructure as Code using Terraform, oversee manifest management with GitLab, and lead release management workflows using GitLab and ArgoCD.

Application & Data Management:

  • Manage authentication and authorization services using Keycloak and implement event streaming solutions with Kafka and Pulsar.
  • Oversee database management and optimization utilizing tools such as PgBouncer, Milvus, OpenSearch, and ClickHouse.
  • Implement and manage distributed and real-time systems with Temporal.
  • Leverage advanced data processing tools like Trino, Apache Superset, Livy, and Hive to meet specialized data-specific requirements.
Machine Learning Integration:

  • Collaborate with data scientists to integrate and host machine learning models within applications, implementing MLOps practices to streamline the deployment, monitoring, and management of ML models in production.
  • Utilize tools such as TensorFlow Extended (TFX), Kubeflow, MLflow, or SageMaker for comprehensive ML lifecycle management, ensuring robust model versioning, experimentation, and reproducibility, and optimizing ML pipelines for performance, scalability, and efficiency.

Project Management:

  • Oversee project timelines, deliverables, and resource allocation.
  • Coordinate with cross-functional teams to align on project goals and deliverables.
  • Ensure timely and high-quality delivery of software products.

Qualifications:

Education & Experience:

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
  • Proven experience (12+ years) in software engineering, with a strong focus on B2B SaaS applications.
  • At least 5 years of experience in a senior leadership role, preferably at the VP level.

Strategic & Technical Skills:

  • Demonstrated ability to develop and execute engineering strategies that align with business goals.
  • Expertise in full stack development, cloud platforms (AWS, GCP), and Kubernetes.
  • Strong experience with infrastructure management, MLOps, and integrating machine learning models.
  • Ability to translate customer needs into technical requirements and ensure the delivery of high-quality products.

Leadership & Soft Skills:

  • Visionary leadership with the ability to inspire and guide large engineering teams.
  • Strong business acumen with the ability to align technical efforts with business objectives.
  • Excellent communication and interpersonal skills, with a focus on building strong cross-functional relationships.
  • Proven track record of fostering customer trust and delivering products that drive customer success.

Why Join Us:

  • Leadership: Be a key player in shaping the future of our company and driving its success.
  • Innovation: Lead the charge in adopting cutting-edge technologies and practices.
  • Customer Impact: Play a pivotal role in ensuring our customers’ success and satisfaction.
  • Growth: Opportunities for professional development and career advancement.
  • Culture: A supportive and collaborative work environment where your contributions are valued.

Posted 2 weeks ago

Apply

5.0 - 12.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Build a world-class, high-volume data system from the ground up! We have growing data sets and several exciting Machine Learning systems to create or improve, including automated diagnosis of equipment, network anomaly detection and mitigation, intelligent RF network optimization, and technical support automation using LLMs. We are looking for experienced Data Engineers who can help us make progress on these projects quickly.

In your first year at Tarana, you will design highly scalable and performant systems. You will design and implement a detailed data architecture/data model and solidify our ELT pipelines as part of a fast-moving, disciplined, and methodical data team. These deliverables will focus on:

  • Data modeling
  • Pipeline design, implementation, and deployment
  • Pipeline performance optimization
  • Cluster design and optimization
  • Query performance optimization
  • Data tool analysis and selection
  • Data preparation
  • Data monitoring and visualization
  • System integration

You will work closely with cloud and device engineers, network engineers, DevOps engineers, domain experts, and the Data Science team to implement your plan. This is a hands-on technical role: you will be doing the heavy technical lifting as we grow the team. You must work well with others: you will need to interact with people across the organization to successfully build and maintain a high-volume data architecture.

Required Skills and Experience:

  • BE/ME/M.Tech/MS or higher in Computer Science
  • 5-12 years of experience building large-scale systems or ML/AI pipelines

Knowledge, Skills and Abilities Needed:

  • Strong skills in scripting languages (mainly Python and Bash) required
  • Java programming
  • Experience with the AWS ecosystem
  • Experience with data integration tools, data governance tools, and data quality tools
  • Experience with data warehousing and end-to-end data platforms such as Databricks, Snowflake, BigQuery, and/or Redshift/SageMaker
  • Scala experience a plus
  • Highly collaborative, team-oriented approach
  • Passionate and excited about working in a fast-moving startup with exciting projects and opportunities for growth

Since our founding in 2009, we’ve been on a mission to accelerate the pace of bringing fast and affordable internet access — and all the benefits it provides — to the 90% of the world’s households who can’t get it. Through a decade of R&D and more than $400M of investment, we’ve created an entirely unique next-generation fixed wireless access technology, powering our first commercial platform, Gigabit 1 (G1). It delivers a game-changing advance in broadband economics in both mainstream and underserved markets, using either licensed or unlicensed spectrum. G1 started production in mid-2021 and has now been installed by over 160 service providers globally. We’re headquartered in Milpitas, California, with additional research and development in Pune, India. G1 has been developed by an incredibly talented and pioneering core technical team. We are looking for more world-class problem solvers who can carry on our tradition of customer obsession and ground-breaking innovation. We’re well funded, growing incredibly quickly, maintaining a superb results-focused culture while we’re at it, and all grooving on the positive difference we are making for people all over the planet. If you want to help make a real difference in this world, apply now!
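The network anomaly detection mentioned above could, in its simplest form, be an unsupervised outlier detector over radio telemetry. A sketch with scikit-learn's IsolationForest, where the telemetry features and values are invented for illustration:

    # Fit on "normal" telemetry, then flag intervals that deviate sharply.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    # Stand-in telemetry per interval: [throughput_mbps, latency_ms]
    normal = rng.normal(loc=[300, 8], scale=[40, 2], size=(1000, 2))
    model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

    suspect = np.array([[40.0, 55.0]])   # sudden low throughput, high latency
    print(model.predict(suspect))        # -1 flags an anomaly, 1 means normal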

Posted 2 weeks ago

Apply

Exploring Sagemaker Jobs in India

Sagemaker is a rapidly growing field in India, with many companies looking to hire professionals with expertise in this area. Whether you are a seasoned professional or a newcomer to the tech industry, there are plenty of opportunities waiting for you in the sagemaker job market.

Top Hiring Locations in India

If you are looking to land a sagemaker job in India, here are the top 5 cities where companies are actively hiring for roles in this field:

  • Bangalore
  • Hyderabad
  • Pune
  • Mumbai
  • Chennai

Average Salary Range

The salary range for sagemaker professionals in India can vary based on experience and location. On average, entry-level professionals can expect to earn around INR 6-8 lakhs per annum, while experienced professionals can earn upwards of INR 15 lakhs per annum.

Career Path

In the sagemaker field, a typical career progression may look like this:

  • Junior Sagemaker Developer
  • Sagemaker Developer
  • Senior Sagemaker Developer
  • Sagemaker Tech Lead

Related Skills

In addition to expertise in sagemaker, professionals in this field are often expected to have knowledge of the following skills:

  • Machine Learning
  • Data Science
  • Python programming
  • Cloud computing (AWS)
  • Deep learning

Interview Questions

Here are sample interview questions that you may encounter when applying for sagemaker roles, categorized by difficulty level:

  • Basic:
  • What is Amazon SageMaker?
  • How does SageMaker differ from traditional machine learning?
  • What is a SageMaker notebook instance?

  • Medium:

  • How do you deploy a model in SageMaker? (see the sketch after this list)
  • Can you explain the process of hyperparameter tuning in SageMaker?
  • What is the difference between SageMaker Ground Truth and SageMaker Processing?
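For the deployment question above, one defensible outline is: package the trained artifact, register it as a SageMaker Model, and create an endpoint. Sketched with the SageMaker Python SDK (Model and deploy are real APIs; the artifact path, image URI, and role ARN are placeholders):

    # Deploy an existing model artifact behind a real-time SageMaker endpoint.
    from sagemaker.model import Model

    model = Model(
        image_uri="<inference-image-uri>",                       # placeholder
        model_data="s3://example-bucket/models/model.tar.gz",    # placeholder
        role="arn:aws:iam::123456789012:role/ExampleRole",       # placeholder
    )
    predictor = model.deploy(initial_instance_count=1, instance_type="ml.m5.large")
    # predictor.predict(payload) then serves requests; batch transform and
    # serverless inference are common alternatives to a real-time endpoint.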

  • Advanced:

  • How would you handle model drift in a SageMaker deployment?
  • Can you compare SageMaker with other machine learning platforms in terms of scalability and flexibility?
  • How do you optimize a SageMaker model for cost efficiency?

Closing Remark

As you explore opportunities in the sagemaker job market in India, remember to hone your skills, stay updated with industry trends, and approach interviews with confidence. With the right preparation and mindset, you can land your dream job in this exciting and evolving field. Good luck!
