4.0 years
0 Lacs
Madurai
On-site
Job Location: Madurai | Job Experience: 4-15 Years | Model of Work: Work From Office | Technologies: Artificial Intelligence, Machine Learning | Functional Area: Software Development

Job Summary:
Job Title: ML Engineer – TechMango
Location: TechMango, Madurai
Experience: 4+ Years
Employment Type: Full-Time

Role Overview
We are seeking an experienced Machine Learning Engineer with strong proficiency in Python, time series forecasting, MLOps, and deployment using AWS services. This role involves building scalable machine learning pipelines, optimizing models, and deploying them in production environments.

Key Responsibilities / Core Technical Skills:
Languages & Databases: Python (programming language); SQL (databases)
Time Series & Forecasting: pmdarima, statsmodels, Prophet, GluonTS, NeuralProphet
Machine Learning Models: state-of-the-art ML models, including boosting and ensemble methods
Model Explainability: SHAP, LIME
Deep Learning Frameworks: PyTorch, PyTorch Forecasting
Data Processing Libraries: Pandas, NumPy, PySpark, Polars (optional)
Hyperparameter Tuning Tools: Optuna, Amazon SageMaker Automatic Model Tuning
Model Deployment: batch and real-time with API endpoints
Experiment Tracking: MLflow
Model Serving: TorchServe, SageMaker Endpoints / Batch
Containerization: Docker
Orchestration: AWS Step Functions, SageMaker Pipelines
AWS Cloud Stack: SageMaker (training, inference, tuning), S3 (data storage), CloudWatch (monitoring), Lambda (trigger-based inference), ECR / ECS / Fargate (container hosting)

Candidate Requirements
Strong problem-solving and analytical mindset
Hands-on experience with the end-to-end ML project lifecycle
Familiarity with MLOps workflows in production environments
Excellent communication and documentation skills
Comfortable working in agile, cross-functional teams
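As an illustration of the forecasting stack this posting names, a minimal sketch with Prophet might look like the following; the CSV file and its contents are hypothetical placeholders, not part of the role description.

```python
# Minimal univariate forecasting sketch with Prophet, one of the listed libraries.
# The input file and horizon are illustrative assumptions.
import pandas as pd
from prophet import Prophet

# Prophet expects a daily series with columns ds (date) and y (value).
history = pd.read_csv("daily_demand.csv", parse_dates=["ds"])  # hypothetical file

model = Prophet(weekly_seasonality=True, yearly_seasonality=True)
model.fit(history)

# Forecast 30 days beyond the end of the observed history.
future = model.make_future_dataframe(periods=30)
forecast = model.predict(future)
print(forecast[["ds", "yhat", "yhat_lower", "yhat_upper"]].tail())
```

In a production pipeline of the kind described above, a run like this would typically be wrapped in a SageMaker job and tracked in MLflow.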
Posted 4 weeks ago
4.5 - 6.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Roles & Responsibilities
Hands-on experience with real-time ML models and projects
Coding in Python, machine learning, basic SQL, Git, MS Excel
Experience using IDEs such as Jupyter Notebook, Spyder, PyCharm
Hands-on with AWS services such as S3, EC2, SageMaker, Step Functions
Engage with clients/consultants to understand requirements
Take ownership of delivering ML models with high-precision outcomes
Accountable for high-quality and timely completion of specified work deliverables
Write code that is well-detailed, structured, and compute-efficient

Experience: 4.5-6 Years
Skills
Primary Skill: AI/ML Development
Sub Skill(s): AI/ML Development
Additional Skill(s): AI/ML Development, TensorFlow, NLP, PyTorch

About The Company
Infogain is a human-centered digital platform and software engineering company based out of Silicon Valley. We engineer business outcomes for Fortune 500 companies and digital natives in the technology, healthcare, insurance, travel, telecom, and retail & CPG industries using technologies such as cloud, microservices, automation, IoT, and artificial intelligence. We accelerate experience-led transformation in the delivery of digital platforms. Infogain is also a Microsoft (NASDAQ: MSFT) Gold Partner and Azure Expert Managed Services Provider (MSP). Infogain, an Apax Funds portfolio company, has offices in California, Washington, Texas, the UK, the UAE, and Singapore, with delivery centers in Seattle, Houston, Austin, Kraków, Noida, Gurgaon, Mumbai, Pune, and Bengaluru.
Posted 4 weeks ago
0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibility
Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications
Graduation degree
Experience with cloud platforms, particularly AWS AI services (Bedrock, SageMaker) and/or Azure OpenAI Service
Proven experience in developing and deploying LLM-powered applications in production
Experience with foundation models and generative AI platforms (e.g. OpenAI, Anthropic, open-source models)
Experience in building RAG solutions with vector databases (e.g. pgvector, Pinecone, OpenSearch)
Familiarity with MLOps practices (deployment, monitoring, model lifecycle)
Good understanding of modern NLP techniques (e.g. transformers, embeddings, prompt engineering)
Solid understanding of machine learning frameworks and libraries (e.g. PyTorch, scikit-learn, TensorFlow, MLflow, Keras, XGBoost)
Proven, solid programming skills in Python and PySpark
Proven exposure to AI orchestration frameworks such as LangChain, LangGraph and others
Proven excellent communication and collaboration skills

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.
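To make the RAG requirement above concrete, here is a toy, in-memory sketch of the retrieval step only; in the production setups this posting describes, the embeddings would live in a vector store such as pgvector, Pinecone, or OpenSearch, and the example documents, question, and embedding model are illustrative assumptions.

```python
# Toy retrieval step of a Retrieval-Augmented Generation (RAG) pipeline.
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "Members can refill prescriptions through the pharmacy portal.",
    "Prior authorization is required for certain specialty medications.",
    "Telehealth visits are covered under most commercial plans.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
doc_vecs = encoder.encode(docs, normalize_embeddings=True)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the question (cosine similarity)."""
    q_vec = encoder.encode([question], normalize_embeddings=True)[0]
    scores = doc_vecs @ q_vec
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

question = "Does my plan cover telehealth?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)  # this prompt would then be sent to an LLM, e.g. via Bedrock or Azure OpenAI
```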
Posted 4 weeks ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Syensqo is all about chemistry. We’re not just referring to chemical reactions here, but also to the magic that occurs when the brightest minds get to work together. This is where our true strength lies. In you. In your future colleagues and in all your differences. And of course, in your ideas to improve lives while preserving our planet’s beauty for the generations to come. Join us at Syensqo, where our IT team is gearing up to enhance its capabilities. We play a crucial role in the group's transformation—accelerating growth, reshaping progress, and creating sustainable shared value. IT team is making operational adjustments to supercharge value across the entire organization. Here at Syensqo, we're one strong team! Our commitment to accountability drives us as we work hard to deliver value for our customers and stakeholders. In our dynamic and collaborative work environment, we add a touch of enjoyment while staying true to our motto: reinvent progress. Come be part of our transformation journey and contribute to the change as a future team member. We are looking for: As a Data/ML Engineer, you will play a central role in defining, implementing, and maintaining cloud governance frameworks across the organization. You will collaborate with cross-functional teams to ensure secure, compliant, and efficient use of cloud resources for data and machine learning workloads. Your expertise in full-stack automation, DevOps practices, and Infrastructure as Code (IaC) will drive the standardization and scalability of our cloud-based data and ML platforms. Key requirements are: Ensuring cloud data governance Define and maintain central cloud governance policies, standards, and best practices for data, AI and ML workloads Ensure compliance with security, privacy, and regulatory requirements across all cloud environments Monitor and optimize cloud resource usage, cost, and performance for data, AI and ML workloads Design and Implement Data Pipelines Co-develop, co-construct, test, and maintain highly scalable and reliable data architectures, including ETL processes, data warehouses, and data lakes with the Data Platform Team Build and Deploy ML Systems Co-design, co-develop, and deploy machine learning models and associated services into production environments, ensuring performance, reliability, and scalability Infrastructure Management Manage and optimize cloud-based infrastructure (e.g., AWS, Azure, GCP) for data storage, processing, and ML model serving Collaboration Work collaboratively with data scientists, ML engineers, security and business stakeholders to align cloud governance with organizational needs Provide guidance and support to teams on cloud architecture, data management, and ML operations. Work collaboratively with other teams to transition prototypes and experimental models into robust, production-ready solutions Data Governance and Quality: Implement best practices for data governance, data quality, and data security to ensure the integrity and reliability of our data assets. Performance and Optimisation: Identify and implement performance improvements for data pipelines and ML models, optimizing for speed, cost-efficiency, and resource utilization. 
Monitoring and Alerting Establish and maintain monitoring, logging, and alerting systems for data pipelines and ML models to proactively identify and resolve issues Tooling and Automation Design and implement full-stack automation for data pipelines, ML workflows, and cloud infrastructure Build and manage cloud infrastructure using IaC tools (e.g., Terraform, CloudFormation) Develop and maintain CI/CD pipelines for data and ML projects Promote DevOps culture and best practices within the organization Develop and maintain tools and automation scripts to streamline data operations, model training, and deployment processes Stay Current on new ML / AI trends: Keep abreast of the latest advancements in data engineering, machine learning, and cloud technologies, evaluating and recommending new tools and approach Document processes, architectures, and standards for knowledge sharing and onboarding Education and experience Education: Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related quantitative field. (Relevant work experience may be considered in lieu of a degree). Programming: Strong proficiency in Python (essential) and experience with other relevant languages like Java, Scala, or Go. Data Warehousing/Databases: Solid understanding and experience with relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB, Cassandra). Experience with data warehousing solutions (e.g., Snowflake, Redshift, BigQuery) is highly desirable. Big Data Technologies: Hands-on experience with big data processing frameworks (e.g., Spark, Flink, Hadoop). Cloud Platforms: Experience with at least one major cloud provider (AWS, Azure, or GCP) and their relevant data and ML services (e.g., S3, EC2, Lambda, EMR, SageMaker, Dataflow, BigQuery, Azure Data Factory, Azure ML). ML Concepts: Fundamental understanding of machine learning concepts, algorithms, and workflows. MLOps Principles: Familiarity with MLOps principles and practices for deploying, monitoring, and managing ML models in production. Version Control: Proficiency with Git and collaborative development workflows. Problem-Solving: Excellent analytical and problem-solving skills with a strong attention to detail. Communication: Strong communication skills, able to articulate complex technical concepts to both technical and non-technical stakeholders. Bonus Points (Highly Desirable Skills & Experience): Experience with containerisation technologies (Docker, Kubernetes). Familiarity with CI/CD pipelines for data and ML deployments. Experience with stream processing technologies (e.g., Kafka, Kinesis). Knowledge of data visualization tools (e.g., Tableau, Power BI, Looker). Contributions to open-source projects or a strong portfolio of personal projects. Experience with [specific domain knowledge relevant to your company, e.g., financial data, healthcare data, e-commerce data]. Language skills Fluent English What’s in it for the candidate Be part of a highly motivated team of explorers Help make a difference and thrive in Cloud and AI technology Chart your own course and build a fantastic career Have fun and enjoy life with an industry leading remuneration pack About Us Syensqo is a science company developing groundbreaking solutions that enhance the way we live, work, travel and play. 
Inspired by the scientific councils which Ernest Solvay initiated in 1911, we bring great minds together to push the limits of science and innovation for the benefit of our customers, with a diverse, global team of more than 13,000 associates. Our solutions contribute to safer, cleaner, and more sustainable products found in homes, food and consumer goods, planes, cars, batteries, smart devices and health care applications. Our innovation power enables us to deliver on the ambition of a circular economy and explore breakthrough technologies that advance humanity. At Syensqo, we seek to promote unity and not uniformity. We value the diversity that individuals bring and we invite you to consider a future with us, regardless of background, age, gender, national origin, ethnicity, religion, sexual orientation, ability or identity. We encourage individuals who may require any assistance or accommodations to let us know to ensure a seamless application experience. We are here to support you throughout the application journey and want to ensure all candidates are treated equally. If you are unsure whether you meet all the criteria or qualifications listed in the job description, we still encourage you to apply.
Posted 4 weeks ago
0 years
0 Lacs
Delhi, India
On-site
The Senior Statistical Data Analyst is responsible for designing unique analytic approaches to detect, assess, and recommend the optimal customer treatment to reduce friction and enhance experience while properly managing fraud risk with data-driven and statistical methods. You will analyze large amounts of account and transaction data to build customer-level insights and derive recommendations, methods, and models that reduce friction and enhance the experience around fund availability, transaction/fund hold times, and more, while keeping fraud risk well managed. This role requires critical thinking and analytical savviness to work in a fast-paced environment, but can be a rewarding opportunity to help bring a great banking experience and empower customers to achieve their financial goals.

Responsibilities:
Analyze large amounts of data/transactions to derive business insights and create innovative solutions/models/strategies.
Aggregate and analyze internal and external risk datasets to understand the performance of fraud risk at the customer level.
Analyze customers' banking/transaction behaviors and build predictive models (simple ones like logistic regression or linear regression) to predict churn or negative outcomes, or run correlation analyses to understand relationships.
Develop personalized segmentations and micro-segmentations to identify customers based on their fraud risk, banking behavior, and value.
Conduct analysis for data-driven recommendations, with reporting dashboards, to optimize customer treatment regarding friction reduction and fund availability across the entire banking journey.

Skillset:
Analytics professional, preferably with experience in fraud analytics.
Strong knowledge and working experience in SQL and Python is a must.
Experience analyzing data with statistical approaches in Python (e.g. in Jupyter notebooks): for example, clustering analysis, decision trees, linear regression, logistic regression, correlation analysis.
Knowledge of Tableau and BI tools.
Hands-on use of AWS (e.g. S3, EC2, EMR, Athena, SageMaker and more) is a plus.
Strong communication and interpersonal skills.
Strong knowledge of financial products, including debit cards, credit cards, lending products, and deposit accounts, is a plus.
Experience working at a FinTech or start-up is a plus.
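As a small illustration of the "simple predictive model" work mentioned above, a logistic-regression propensity sketch in scikit-learn could look like this; the data file, feature names, and target column are hypothetical.

```python
# Sketch of a customer-level propensity model (logistic regression), as described above.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

df = pd.read_csv("customer_features.csv")  # hypothetical extract, e.g. pulled via Athena from S3
features = ["avg_deposit_amount", "txn_count_30d", "days_since_open", "prior_fraud_flags"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["churned"], test_size=0.2, random_state=42, stratify=df["churned"]
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# AUC gives a threshold-free view of how well the model separates churners from non-churners.
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```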
Posted 4 weeks ago
13.0 years
0 Lacs
Gurugram, Haryana, India
On-site
We are seeking an experienced Cloud AIOps Architect to lead the design and implementation of advanced AI-driven operational systems across multi-cloud and hybrid cloud environments. This role demands a blend of technical expertise, innovation, and leadership to develop scalable solutions for complex IT systems with a focus on automation, machine learning, and operational efficiency. Responsibilities Architect and design the AIOps solution leveraging AWS, Azure, and Cloud Agnostic services, ensuring portability and scalability Develop an end-to-end automated machine learning (ML) pipeline from data ingestion, DataOps, model training, to inference pipelines across multi-cloud environments Design hybrid architectures leveraging cloud-native services like Amazon SageMaker, Azure Machine Learning, and Kubernetes for development, model deployment, and orchestration Design and implement ChatOps integration, allowing users to interface with the platform through Slack, Microsoft Teams, or similar communication platforms Leverage Jupyter Notebooks in AWS SageMaker, Azure Machine Learning Studio, or cloud-agnostic environments to create model prototypes and experiment with datasets Lead the design of classification models and other ML models using AWS SageMaker training jobs, Azure ML training jobs, or open-source tools in a Kubernetes container Implement automated rule management systems using Python in containers deployed to AWS ECS/EKS, Azure AKS, or Kubernetes for cloud-agnostic solutions Architect the integration of ChatOps backend services using Python containers running in AWS ECS/EKS, Azure AKS, or Kubernetes for real-time interactions and updates Oversee the continuous deployment and retraining of models based on updated data and feedback loops, ensuring models remain efficient and adaptive Design platform-agnostic solutions to ensure that the system can be ported across different cloud environments or run in hybrid clouds (on-premises and cloud) Requirements 13+ years of overall experience and 7+ years of experience in AIOps, Cloud Architecture, or DevOps roles Hands-on experience with AWS services such as SageMaker, S3, Glue, Kinesis, ECS, EKS Strong experience with Azure services such as Azure Machine Learning, Blob Storage, Azure Event Hubs, Azure AKS Hands-on experience working on the design, development, and deployment of contact centre solutions at scale Proficiency in container orchestration (e.g., Kubernetes) and experience with multi-cloud environments Experience with machine learning model training, deployment, and data management across cloud-native and cloud-agnostic environments Expertise in implementing ChatOps solutions using platforms like Microsoft Teams, Slack, and integrating them with AIOps automation Familiarity with data lake architectures, data pipelines, and inference pipelines using event-driven architectures Strong programming skills in Python for rule management, automation, and integration with cloud services Experience in Kafka, Azure DevOps, and AWS DevOps for CI/CD pipelines
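For the SageMaker training jobs mentioned in the responsibilities above, a hedged sketch of launching one with the SageMaker Python SDK follows; the image URI, IAM role, S3 paths, and hyperparameters are placeholders, not details from this posting.

```python
# Hypothetical sketch: launching a SageMaker training job for a classification model.
import sagemaker
from sagemaker.estimator import Estimator

session = sagemaker.Session()
estimator = Estimator(
    image_uri="<account>.dkr.ecr.<region>.amazonaws.com/classifier:latest",  # placeholder image
    role="arn:aws:iam::<account>:role/SageMakerExecutionRole",               # placeholder role
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://my-bucket/models/",                                    # placeholder bucket
    sagemaker_session=session,
    hyperparameters={"max_depth": 6, "n_estimators": 300},
)

# Channels map named inputs to S3 prefixes that the training container reads.
estimator.fit({"train": "s3://my-bucket/train/", "validation": "s3://my-bucket/val/"})
```

In the architecture described here, a call like this would usually be one step inside a Step Functions or SageMaker Pipelines workflow rather than run by hand.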
Posted 4 weeks ago
2.0 - 5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
About the Role: We are seeking a talented and driven Machine Learning Engineer with 2-5 years of experience to join our dynamic team in Chennai. The ideal candidate will have a strong foundation in machine learning principles and extensive hands-on experience in building, deploying, and managing ML models in production environments. A key focus of this role will be on MLOps practices and orchestration, ensuring our ML pipelines are robust, scalable, and automated.

Responsibilities:
ML Model Deployment & Management: Design, develop, and implement end-to-end MLOps pipelines for deploying, monitoring, and managing machine learning models in production.
Orchestration: Utilize orchestration tools (e.g., Apache Airflow, Kubeflow, AWS Step Functions, Azure Data Factory) to automate ML workflows, including data ingestion, feature engineering, model training, validation, and deployment (see the Airflow sketch after this posting).
CI/CD for ML: Implement Continuous Integration/Continuous Deployment (CI/CD) practices for ML code, models, and infrastructure, ensuring rapid and reliable releases.
Monitoring & Alerting: Establish comprehensive monitoring and alerting systems for deployed ML models to track performance, detect data drift and model drift, and ensure operational health.
Infrastructure as Code (IaC): Work with IaC tools (e.g., Terraform, CloudFormation) to manage and provision cloud resources required for ML workflows.
Containerization: Leverage containerization technologies (Docker, Kubernetes) for packaging and deploying ML models and their dependencies.
Collaboration: Collaborate closely with Data Scientists, Data Engineers, and Software Developers to translate research prototypes into production-ready ML solutions.
Performance Optimization: Optimize ML model inference and training performance, focusing on efficiency, scalability, and cost-effectiveness.
Troubleshooting & Debugging: Troubleshoot and debug issues across the entire ML lifecycle, from data pipelines to model serving.
Documentation: Create and maintain clear technical documentation for MLOps processes, pipelines, and infrastructure.

Qualifications:
Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related quantitative field.
2-5 years of professional experience as a Machine Learning Engineer, MLOps Engineer, or a similar role.

Required Skills:
Strong proficiency in Python and its ML ecosystem (e.g., scikit-learn, TensorFlow, PyTorch, Pandas, NumPy).
Hands-on experience with at least one major cloud platform (AWS, Azure, GCP) and their relevant ML/MLOps services (e.g., AWS SageMaker, Azure ML, GCP Vertex AI).
Proven experience with orchestration tools like Apache Airflow, Kubeflow, or similar.
Solid understanding and practical experience with MLOps principles and best practices.
Experience with containerization technologies (Docker, Kubernetes).
Familiarity with CI/CD pipelines and tools (e.g., GitLab CI/CD, Jenkins, Azure DevOps, AWS CodePipeline).
Knowledge of database systems (SQL and NoSQL).
Excellent problem-solving, analytical, and debugging skills.
Strong communication and collaboration abilities, with a capacity to work effectively in an Agile environment.
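A minimal sketch of the orchestration pattern referenced above, written as an Apache Airflow 2.x DAG; the DAG ID, task names, and task bodies are illustrative stubs rather than a prescribed pipeline.

```python
# Minimal Airflow DAG sketching an ML workflow: ingest -> train -> conditional deploy.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest_features(**_):
    print("pull raw data and build features")  # stub

def train_model(**_):
    print("train model and log the run to an experiment tracker")  # stub

def deploy_if_better(**_):
    print("compare against the current model and promote if metrics improve")  # stub

with DAG(
    dag_id="ml_training_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow >= 2.4; older versions use schedule_interval
    catchup=False,
) as dag:
    ingest = PythonOperator(task_id="ingest_features", python_callable=ingest_features)
    train = PythonOperator(task_id="train_model", python_callable=train_model)
    deploy = PythonOperator(task_id="deploy_if_better", python_callable=deploy_if_better)

    ingest >> train >> deploy
```

The same ingest/train/deploy shape maps onto Kubeflow Pipelines or AWS Step Functions if those orchestrators are used instead.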
Posted 1 month ago
2.0 - 3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: AI/GenAI Engineer
Job ID: POS-13731
Primary Skill: Databricks, ADF
Location: Hyderabad
Experience: 3.00
Secondary skills: Python, LLM, LangChain, Vectors, and AWS
Mode of Work: Work from Office
Experience: 2-3 Years

About The Job
We are seeking a highly motivated and innovative Generative AI Engineer to join our team and drive the exploration of cutting-edge AI capabilities. You will be at the forefront of developing solutions using Generative AI technologies, primarily focusing on Large Language Models (LLMs) and foundation models, deployed on either AWS or Azure cloud platforms. This role involves rapid prototyping, experimentation, and collaboration with various stakeholders to assess the feasibility and potential impact of GenAI solutions on our business challenges. If you are passionate about the potential of GenAI and enjoy hands-on building in a fast-paced environment, this is the role for you.

Know Your Team
At ValueMomentum’s Engineering Center, we are a team of passionate engineers who thrive on tackling complex business challenges with innovative solutions while transforming the P&C insurance value chain. We achieve this through a strong engineering foundation and by continuously refining our processes, methodologies, tools, agile delivery teams, and core engineering archetypes. Our core expertise lies in six key areas: Cloud Engineering, Application Engineering, Data Engineering, Core Engineering, Quality Engineering, and Domain expertise. Join a team that invests in your growth. Our Infinity Program empowers you to build your career with role-specific skill development, leveraging immersive learning platforms. You'll have the opportunity to showcase your talents by contributing to impactful projects.

Responsibilities
Develop GenAI Solutions: Develop, and rapidly iterate on, GenAI solutions leveraging LLMs and other foundation models available on AWS and/or Azure platforms.
Cloud Platform Implementation: Utilize relevant cloud services (e.g., AWS SageMaker, Bedrock, Lambda, Step Functions; Azure Machine Learning, Azure OpenAI Service, Azure Functions) for model access, deployment, and data processing.
Explore GenAI Techniques: Experiment with and implement techniques like Retrieval-Augmented Generation (RAG), evaluating the feasibility of model fine-tuning or other adaptation methods for specific PoC requirements.
API Integration: Integrate GenAI models (via APIs from cloud providers, OpenAI, Hugging Face, etc.) into prototype applications and workflows.
Data Handling for AI: Prepare, manage, and process data required for GenAI tasks, such as data for RAG indexes, datasets for evaluating fine-tuning feasibility, or example data for few-shot prompting.
Documentation & Presentation: Clearly document PoC architectures, implementation details, findings, limitations, and results for both technical and non-technical audiences.

Requirements
Overall 2-3 years of experience.
Expert in Python with advanced programming concepts.
Solid understanding of Generative AI concepts, including LLMs, foundation models, prompt engineering, embeddings, and common architectures (e.g., RAG).
Demonstrable experience working with at least one major cloud platform (AWS or Azure).
Hands-on experience using cloud-based AI/ML services relevant to GenAI (e.g., AWS SageMaker, Bedrock; Azure Machine Learning, Azure OpenAI Service).
Experience interacting with APIs, particularly AI/ML model APIs.
Bachelor’s degree in Computer Science, AI, Data Science, or equivalent practical experience.
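As a hedged sketch of the Bedrock-style model access mentioned above, a single call through the Bedrock Converse API could look like this; the region, model ID, and prompt are illustrative assumptions and require an AWS account with Bedrock model access already enabled.

```python
# Minimal sketch: calling a foundation model through Amazon Bedrock's Converse API via boto3.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")  # assumed region

response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # illustrative model ID
    messages=[{"role": "user", "content": [{"text": "Summarize this claims note: ..."}]}],
    inferenceConfig={"maxTokens": 256, "temperature": 0.2},
)

# The Converse API returns the assistant message as a list of content blocks.
print(response["output"]["message"]["content"][0]["text"])
```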
About The Company Headquartered in New Jersey, US, ValueMomentum is the largest standalone provider of IT Services and Solutions to Insurers. Our industry focus, expertise in technology backed by R&D, and our customer-first approach uniquely position us to deliver the value we promise and drive momentum to our customers’ initiatives. ValueMomentum is amongst the top 10 insurance-focused IT services firms in North America by number of customers. Leading Insurance firms trust ValueMomentum with their Digital, Data, Core, and IT Transformation initiatives. Benefits We at ValueMomentum offer you a congenial environment to work and grow in the company of experienced professionals. Some benefits that are available to you are: Competitive compensation package. Career Advancement: Individual Career Development, coaching and mentoring programs for professional and leadership skill development. Comprehensive training and certification programs. Performance Management: Goal Setting, continuous feedback and year-end appraisal. Reward & recognition for the extraordinary performers.
Posted 1 month ago
8.0 - 10.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
As a Senior Product Manager, you will play a pivotal role in defining the strategic direction of our product offerings, collaborating with cross-functional teams, and ensuring the successful execution of our product vision.

Experience
8-10 years of experience in the product domain developing AI-based enterprise solutions
Proven track record of leveraging AI/ML technologies to enhance product capabilities
Successful delivery of innovative solutions in a fast-paced and dynamic environment

Roles and Responsibilities
Lead the development and execution of AI/ML products, from concept to launch
Define the product roadmap, strategy, and vision based on market trends, customer needs, and business goals
Collaborate with cross-functional teams including engineering, design, data science, and marketing to drive product development
Conduct market research and competitive analysis to identify new opportunities and enhance existing products
Manage the product lifecycle, including planning, prioritization, and feature definition
Measure and evaluate product performance, user behavior, and customer satisfaction using qualitative and quantitative methods
Communicate product vision and priorities to stakeholders, team members, and executives
Drive product performance analysis and make data-driven decisions to optimize product features and user experience
Work closely with customers to gather feedback, understand requirements, and address product issues
Stay updated on AI industry trends, technologies, and best practices to drive innovation and maintain competitive advantage
Provide leadership and mentorship to junior product team members

Certification Required: Product Management Certification (e.g., Pragmatic Marketing, Certified Scrum Product Owner)

Behavioral Skills: Excellent leadership and communication skills; strong problem-solving and decision-making abilities; adaptability, creativity, and a passion for innovation

Technical Tools & Frameworks: Python, TensorFlow, PyTorch, OpenAI, Jupyter Notebooks, MLflow, AWS SageMaker / Google Vertex AI / Azure ML

Prospects with proven experience in building AI/ML products can email resumes to hardik.dwivedi@adani.com
Posted 1 month ago
0 years
0 Lacs
India
On-site
Key Responsibilities: Design end-to-end architectures for Generative AI solutions (text, image, video, code generation). Evaluate and integrate foundation models like OpenAI GPT, Google Gemini, Mistral, LLaMA, Claude, etc. Build scalable GenAI pipelines for training, fine-tuning, prompt engineering, and inference. Collaborate with data scientists, ML engineers, and software developers to operationalize AI models. Lead PoCs and MVPs in collaboration with business stakeholders. Implement RAG (Retrieval Augmented Generation) pipelines for context-aware applications. Define best practices for model deployment (API, containerization, CI/CD). Ensure responsible AI compliance and model monitoring in production. Optimize performance and cost of GenAI workloads on cloud platforms (AWS, Azure, GCP). Required Skills: Strong experience with GenAI models (GPT, BERT, LLaMA, Stable Diffusion, DALL·E, etc.) Proficient in Python, PyTorch, TensorFlow, LangChain, Transformers (Hugging Face) Deep understanding of prompt engineering, model fine-tuning (LoRA, PEFT) Experience with RAG pipelines, vector databases (Pinecone, FAISS, Weaviate, Chroma) Cloud-native development on AWS, Azure, or GCP (SageMaker, Vertex AI, Azure ML) Hands-on with APIs, microservices, and orchestration tools (Docker, Kubernetes, Airflow) Familiar with data engineering workflows, MLOps, and security best practices. Nice to Have: Experience with multimodal models (e.g., video/audio + text) Exposure to enterprise LLMs and model hosting platforms (Anthropic, Cohere, Mistral) Understanding of ethical AI, bias detection, and explainability Prior experience in GenAI for specific domains (Healthcare, Finance, Retail, etc.) Educational Background: Bachelor’s or Master’s in Computer Science, AI/ML, Data Science, or related field Certifications in cloud architecture or AI/ML preferred Soft Skills: Strong problem-solving and architectural thinking Ability to communicate complex concepts to non-technical stakeholders Passionate about staying ahead in the AI/ML space
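To illustrate the LoRA/PEFT fine-tuning technique listed above, a minimal setup sketch with Hugging Face Transformers and PEFT follows; the base model name, rank, and target modules are illustrative assumptions that would differ per architecture and task.

```python
# Minimal sketch of attaching LoRA adapters to a causal LM with Hugging Face PEFT.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "mistralai/Mistral-7B-v0.1"  # assumed base model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora_cfg = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections, typical for this family
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # only the small LoRA adapters are trainable

# From here, training would proceed with a standard Trainer / SFT loop on the tuning dataset.
```

The appeal of this approach for the architectures described above is that only the adapter weights need to be stored and deployed per use case, while the base model stays shared.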
Posted 1 month ago
3.0 - 5.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
About Us: Traya is an Indian direct-to-consumer hair care brand whose platform provides holistic treatment for consumers dealing with hair loss. The company provides personalized consultations that help determine the root cause of hair fall among individuals, along with a range of hair care products curated from a combination of Ayurveda, Allopathy, and Nutrition. Traya's secret lies in the power of diagnosis. Our unique platform diagnoses the patient’s hair and health history to identify the root cause behind hair fall and delivers customized hair kits right to their doorstep. We have a strong adherence system in place via medically trained hair coaches and proprietary tech, through which we guide customers across their hair growth journey and help them stay on track. Traya was founded by Saloni Anand, a techie-turned-marketeer, and Altaf Saiyed, a Stanford Business School alumnus.

Our Vision: Traya was created with a global vision to create awareness around hair loss and de-stigmatise it, while empathizing with customers about its emotional and psychological impact. Most importantly, we combine three different sciences (Ayurveda, Allopathy and Nutrition) to create a holistic solution for hair loss patients.

Responsibilities:
Data Analysis and Exploration: Conduct in-depth analysis of large and complex datasets to identify trends, patterns, and anomalies. Perform exploratory data analysis (EDA) to understand data distributions, relationships, and quality.
Machine Learning and Statistical Modeling: Develop and implement machine learning models (e.g., regression, classification, clustering, time series analysis) to solve business problems. Evaluate and optimize model performance using appropriate metrics and techniques. Apply statistical methods to design and analyze experiments and A/B tests. Implement and maintain models in production environments.
Data Engineering and Infrastructure: Collaborate with data engineers to ensure data quality and accessibility. Contribute to the development and maintenance of data pipelines and infrastructure. Work with cloud platforms (e.g., AWS, GCP, Azure) and big data technologies (e.g., Spark, Hadoop).
Communication and Collaboration: Effectively communicate technical findings and recommendations to both technical and non-technical audiences. Collaborate with product managers, engineers, and other stakeholders to define and prioritize projects. Document code, models, and processes for reproducibility and knowledge sharing. Present findings to leadership.
Research and Development: Stay up-to-date with the latest advancements in data science and machine learning. Explore and evaluate new tools and techniques to improve data science capabilities. Contribute to internal research projects.

Qualifications:
Bachelor's or Master's degree in Computer Science, Statistics, Mathematics, or a related field.
3-5 years of experience as a Data Scientist or in a similar role.
Ability to leverage SageMaker's features, including SageMaker Studio, Autopilot, Experiments, Pipelines, and Inference, to optimize model development and deployment workflows.
Proficiency in Python and relevant libraries (e.g., scikit-learn, pandas, NumPy, TensorFlow, PyTorch).
Solid understanding of statistical concepts and machine learning algorithms.
Excellent problem-solving and analytical skills.
Strong communication and collaboration skills.
Experience deploying models to production.
Experience with version control (Git) Preferred Qualifications: Experience with specific industry domains (e.g., e-commerce, finance, healthcare). Experience with natural language processing (NLP) or computer vision. Experience with building recommendation engines. Experience with time series forecasting.
Posted 1 month ago
5.0 years
0 Lacs
Bengaluru East, Karnataka, India
On-site
Company Description Nielsen Sports is the premier provider of analytics and insights within the growing sports industry, offering the most reliable source of independent and holistic market data. We help businesses around the world effectively measure and commercialize their assets in sports. Our technology, data, and insights empower clients to make smarter decisions regarding media valuation, sponsorship, fan engagement, and more, by understanding and connecting with audiences through sports. We leverage cutting-edge image detection, machine learning, and AI to provide a comprehensive view of the sports media landscape. Job Description Role Overview: Are you passionate about leading and nurturing high-performing engineering teams to build groundbreaking AI solutions? Do you thrive on translating cutting-edge research in Computer Vision and Large Language Models into impactful, scalable products? Nielsen Sports is seeking an experienced and dynamic Engineering Manager / Senior Engineering Manager to lead our talented AI/ML engineers in Bengaluru. This role is critical to our mission of delivering innovative solutions that redefine how value is measured and understood in the global sports ecosystem. You will be responsible for guiding your team's technical direction, fostering their growth, and ensuring the successful delivery of projects that leverage sophisticated AI to analyze complex multimedia sports data. Key Responsibilities Team Leadership & Development: Lead, manage, and mentor a team of AI/ML software engineers (MTS 1-5 levels), fostering a culture of innovation, collaboration, ownership, and continuous learning. Drive recruitment, onboarding, performance management, and career development for team members. Champion engineering best practices and a positive, inclusive team environment. Technical Delivery & Execution: Oversee the planning, execution, and delivery of complex AI/ML projects, particularly in computer vision (e.g., object detection, logo recognition) and multi-modal LLM applications. Ensure projects are delivered on time, within scope, and to a high standard of quality. Work closely with product managers, researchers, and other stakeholders to define roadmaps, prioritize features, and translate business requirements into technical solutions. Technical Guidance & Strategy: Provide technical guidance and direction to the team on AI/ML model development, system architecture, MLOps, and software engineering best practices. Stay abreast of the latest advancements in AI, Computer Vision, LLMs, and relevant cloud technologies. Contribute to the broader technical strategy for AI/ML applications within Nielsen Sports. Stakeholder Management & Communication: Effectively communicate project status, risks, and technical details to both technical and non-technical stakeholders. Collaborate with cross-functional teams globally (including product, operations, and other engineering groups) to ensure alignment and successful product integration. Operational Excellence & Process Improvement: Drive improvements in team processes, development methodologies (e.g., Agile/Scrum), and operational efficiency. Ensure the scalability, reliability, and maintainability of the AI/ML systems and services developed by your team. Qualifications Required Qualifications: Bachelor’s or Master’s degree in Computer Science, Engineering, Artificial Intelligence, or a related technical field. 
5-12 years of progressive experience in software development, with a significant portion focused on AI/ML, Data Science, or Computer Vision. For Engineering Manager: At least 2-4 years of direct people management experience, leading and mentoring software engineering teams. For Senior Engineering Manager: At least 5+ years of direct people management experience, potentially including experience managing other leads or managers, and a proven track record of leading multiple or complex projects. Strong understanding of the software development lifecycle (SDLC), Agile methodologies, and CI/CD practices. Technical proficiency and familiarity with AI/ML concepts, machine learning algorithms, deep learning, and statistical modeling. Deep Expertise with computer vision techniques (e.g., object detection, image classification) and an understanding of or keen interest in Large Language Models (LLMs). Experience in finetuning LLMs like Llama 2/3, Mistral, or open-source models available on Hugging Face using libraries such as Hugging Face Transformers, PEFT, or specialized frameworks like Axolotl/Unsloth. Proficiency in programming languages commonly used in AI/ML, such as Python. Excellent leadership, interpersonal, communication, and organizational skills. Proven ability to motivate and grow technical talent. Full Stack Development expertise in any one stack Preferred Qualifications / Bonus Skills Prior hands-on experience as an AI/ML engineer or data scientist before transitioning into management. Experience managing teams working specifically on computer vision and/or LLM-based projects. Experience with MLOps tools and practices (e.g., MLflow, Kubeflow, SageMaker, Vertex AI). Familiarity with cloud platforms (AWS, GCP, or Azure) and their AI/ML services. Experience in the sports analytics, media technology, or ad-tech industries. Proven track record of successfully delivering scalable, high-impact AI/ML products or features. Experience working in a global, distributed team environment. Additional Information What We Offer: An opportunity to lead and shape the future of AI in the exciting world of sports analytics. A leadership role with significant impact on Nielsen Sports' products and technology. A dynamic, innovative, and collaborative work environment. Competitive salary, performance-based bonus, and comprehensive benefits package. Opportunities for professional growth and development in a global organization. The chance to work with a passionate team on cutting-edge technologies. Please be aware that job-seekers may be at risk of targeting by scammers seeking personal data or money. Nielsen recruiters will only contact you through official job boards, LinkedIn, or email with a nielsen.com domain. Be cautious of any outreach claiming to be from Nielsen via other messaging platforms or personal email addresses. Always verify that email communications come from an @nielsen.com address. If you're unsure about the authenticity of a job offer or communication, please contact Nielsen directly through our official website or verified social media channels.
Posted 1 month ago
5.0 years
0 Lacs
India
Remote
Job Title: Senior Data Scientist (Remote – India) – Predictive Modeling & Machine Learning Location: Remote (India) Job Type: Full-time Experience: 5+ Years Job Summary: We are looking for a highly skilled Senior Data Scientist to join our India-based team in a remote capacity. This role focuses on building and deploying advanced predictive models to influence key business decisions. The ideal candidate should have strong experience in machine learning, data engineering, and working in cloud environments, particularly with AWS. You'll be collaborating closely with cross-functional teams to design, develop, and deploy cutting-edge ML models using tools like SageMaker, Bedrock, PyTorch, TensorFlow, Jupyter Notebooks, and AWS Glue. This is a fantastic opportunity to work on impactful AI/ML solutions within a dynamic and innovative team. Key Responsibilities: Predictive Modeling & Machine Learning Develop and deploy machine learning models for forecasting, optimization, and predictive analytics. Use tools such as AWS SageMaker, Bedrock, LLMs, TensorFlow, and PyTorch for model training and deployment. Perform model validation, tuning, and performance monitoring. Deliver actionable insights from complex datasets to support strategic decision-making. Data Engineering & Cloud Computing Design scalable and secure ETL pipelines using AWS Glue. Manage and optimize data infrastructure in the AWS environment. Ensure high data integrity and availability across the pipeline. Integrate AWS services to support the end-to-end machine learning lifecycle. Python Programming Write efficient, reusable Python code for data processing and model development. Work with libraries like pandas, scikit-learn, TensorFlow, and PyTorch. Maintain documentation and ensure best coding practices. Collaboration & Communication Work with engineering, analytics, and business teams to understand and solve business challenges. Present complex models and insights to both technical and non-technical stakeholders. Participate in sprint planning, stand-ups, and reviews in an Agile setup. Preferred Experience (Nice to Have): Experience with applications in the utility industry (e.g., demand forecasting, asset optimization). Exposure to Generative AI technologies. Familiarity with geospatial data and GIS tools for predictive analytics. Qualifications: Master’s in Computer Science, Statistics, Mathematics, or a related field. 5+ years of relevant experience in data science, predictive modeling, and machine learning. Experience working in cloud-based data science environments (AWS preferred).
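For the AWS Glue ETL work this posting calls out, a skeleton PySpark job in the Glue runtime might look like the following; the catalog database, table, and S3 path are placeholders, and the script only runs inside a Glue job, not locally.

```python
# Skeleton of an AWS Glue PySpark job: read from the Data Catalog, clean, write curated Parquet.
import sys

from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Hypothetical source table; in practice this would be whatever feeds the forecasting models.
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="utility_raw", table_name="meter_readings"
)
df = (
    dyf.toDF()
    .dropna(subset=["meter_id", "reading_ts"])
    .dropDuplicates(["meter_id", "reading_ts"])
)
df.write.mode("overwrite").parquet("s3://my-bucket/curated/meter_readings/")  # placeholder path

job.commit()
```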
Posted 1 month ago
5.0 years
0 Lacs
Greater Hyderabad Area
Remote
Job Title: Senior Machine Learning Associate (Computer Vision) – Remote (India)
Location: Remote (India)
Engagement Type: Long-Term Contract
Start Date: ASAP
Work Hours: Overlap with EST time zone
Dual Employment: Not permitted

About the Role: KheyDigit Global Solutions Pvt Ltd is seeking a Senior Machine Learning Associate with deep expertise in Computer Vision to support an international pharmaceutical manufacturing project. This role involves building, optimizing, and deploying AI/ML models that enable automated drug manufacturing line clearance using real-time camera feeds and anomaly detection via AWS infrastructure. You will be part of a global team and collaborate directly with international clients. This is a 100% remote opportunity with long-term growth potential.

Key Responsibilities:
Design, develop, and deploy computer vision models using AWS SageMaker.
Work on edge computing solutions using AWS Greengrass.
Support integration and optimization of models on Nvidia Triton GPU infrastructure.
Analyze HD camera feeds to detect anomalies in drug manufacturing line operations.
Build and train models to understand “normal” production line behavior and identify deviations.
Troubleshoot and enhance real-time AI/ML applications in a live production environment.
Collaborate with global technical and product teams; communicate fluently in English.
Remain adaptable and solution-oriented in a fast-paced, agile setup.

Required Skills & Experience:
5+ years of experience in Machine Learning, with a strong focus on Computer Vision.
Expert-level experience with AWS SageMaker and AWS Greengrass.
Experience deploying models on GPU-based infrastructures such as Nvidia Triton.
Strong problem-solving skills with the ability to debug complex model and deployment issues.
Excellent communication skills in English; able to work independently with international stakeholders.
Prior experience working in regulated or manufacturing environments is a plus.
Exposure to AI/ML use cases in the pharmaceutical or manufacturing sectors.
Familiarity with model versioning, CI/CD pipelines, and MLOps practices.
Understanding of camera-based systems and computer vision libraries like OpenCV, PyTorch, or TensorFlow.

Interview Process:
15-minute HR screening
Technical interview(s) with 1–2 client representatives

Why Join Us:
Global exposure and collaboration with international teams
Long-term remote engagement with flexible working
Opportunity to contribute to impactful AI-driven solutions in drug manufacturing
Transparent and professional work culture
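As a deliberately simple illustration of the "learn normal, flag deviations" idea described above, here is an OpenCV frame-differencing sketch; real line-clearance systems would use trained CV models served on SageMaker/Greengrass, and the video path and threshold are hypothetical.

```python
# Toy anomaly flagging on a camera feed: compare each frame to a clean baseline frame.
import cv2
import numpy as np

cap = cv2.VideoCapture("line_camera_feed.mp4")  # hypothetical recorded feed
ok, first = cap.read()
baseline = cv2.GaussianBlur(cv2.cvtColor(first, cv2.COLOR_BGR2GRAY), (21, 21), 0)

frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame_idx += 1
    gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)
    diff = cv2.absdiff(baseline, gray)
    score = float(np.mean(diff))  # mean pixel change versus the clean baseline
    if score > 12.0:              # hypothetical threshold, tuned per camera in practice
        print(f"frame {frame_idx}: possible residue/foreign object (score={score:.1f})")

cap.release()
```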
Posted 1 month ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
Job Title: Senior Machine Learning Associate (Computer Vision) – Remote (India)
Location: Remote (Bengaluru, Hyderabad, Chennai)
Engagement Type: Long-Term Contract
Start Date: ASAP
Work Hours: Overlap with EST time zone
Dual Employment: Not permitted

About the Role: KheyDigit Global Solutions Pvt Ltd is seeking a Senior Machine Learning Associate with deep expertise in Computer Vision to support an international pharmaceutical manufacturing project. This role involves building, optimizing, and deploying AI/ML models that enable automated drug manufacturing line clearance using real-time camera feeds and anomaly detection via AWS infrastructure. You will be part of a global team and collaborate directly with international clients. This is a 100% remote opportunity with long-term growth potential.

Key Responsibilities:
1. Design, develop, and deploy computer vision models using AWS SageMaker.
2. Work on edge computing solutions using AWS Greengrass.
3. Support integration and optimization of models on Nvidia Triton GPU infrastructure.
4. Analyze HD camera feeds to detect anomalies in drug manufacturing line operations.
5. Build and train models to understand “normal” production line behavior and identify deviations.
6. Troubleshoot and enhance real-time AI/ML applications in a live production environment.
7. Collaborate with global technical and product teams; communicate fluently in English.
8. Remain adaptable and solution-oriented in a fast-paced, agile setup.

Required Skills & Experience:
* 5+ years of experience in Machine Learning, with a strong focus on Computer Vision.
* Expert-level experience with AWS SageMaker and AWS Greengrass.
* Experience deploying models on GPU-based infrastructures such as Nvidia Triton.
* Strong problem-solving skills with the ability to debug complex model and deployment issues.
* Excellent communication skills in English; able to work independently with international stakeholders.
* Prior experience working in regulated or manufacturing environments is a plus.
* Exposure to AI/ML use cases in the pharmaceutical or manufacturing sectors.
* Familiarity with model versioning, CI/CD pipelines, and MLOps practices.
* Understanding of camera-based systems and computer vision libraries like OpenCV, PyTorch, or TensorFlow.

Interview Process:
15-minute HR screening
Technical interview(s) with 1–2 client representatives
Posted 1 month ago
35.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description F-Secure makes every digital moment more secure, for everyone. For over 35 years, we’ve led the cyber security industry, protecting tens of millions of people online together with our 200+ service provider partners. We value our Fellows' individuality, with an inclusive environment where diversity drives innovation and growth. What makes you unique is what we value – be yourself, that is (y)our greatest asset. Founded in Finland, F‑Secure has offices in Europe, North America and Asia Pacific. About The Role We are looking for skilled Machine Learning Engineers to join our Technology team in Bengaluru! At F-Secure, we're developing cutting-edge AI-powered cybersecurity defenses that protect millions of users globally. Our ML models operate in dynamic environments where threat actors continuously evolve their techniques. We're seeking a motivated individual to perform in-depth analysis of data and machine learning models, develop and implement models using both classical and modern approaches, and optimize models for performance and latency. This is a fantastic opportunity to enhance your skills in a real-world cybersecurity context with significant impact. This role will be located in Bengaluru, India. You can choose whether you work at our Bengaluru office, or in a hybrid mode from your home office. We hope you are able to join us for common gatherings at the Bengaluru office when needed. Key Responsibilities To perform in-depth analysis of data and machine learning models to identify insights and areas of improvement. Develop and implement models using both classical machine learning techniques and modern deep learning approaches. Deploy machine learning models into production, ensuring robust MLOps practices including CI/CD pipelines, model monitoring, and drift detection. Conduct fine-tuning and integrate Large Language Models (LLMs) to meet specific business or product requirements. Optimize models for performance and latency, including the implementation of caching strategies where appropriate. Collaborate cross-functionally with data scientists, engineers, and product teams to deliver end-to-end ML solutions. What are we looking for? Prior experience from utilizing various statistical techniques to derive important insights and trends. Proven experience in machine learning model development and analysis using classical and neural networks based approaches. Strong understanding of LLM architecture, usage, and fine-tuning techniques. Solid understanding of statistics, data preprocessing, and feature engineering. Proficient in Python and popular ML libraries (scikit-learn, PyTorch, TensorFlow, etc.). Strong debugging and optimization skills for both training and inference pipelines. Familiarity with data formats and processing tools (Pandas, Spark, Dask). Experience working with transformer-based models (e.g., BERT, GPT) and Hugging Face ecosystem. Additional Nice-to-have's Experience with MLOps tools (e.g., MLflow, Kubeflow, SageMaker, or similar). Experience with monitoring tools (Prometheus, Grafana, or custom solutions for ML metrics). Familiarity with cloud platforms (Sagemaker, AWS, GCP, Azure) and containerization (Docker, Kubernetes). Hands-on experience with MLOps practices and tools for deployment, monitoring, and drift detection. Exposure to distributed training and model parallelism techniques. Prior experience in AB testing ML models in production. What will you get from us? 
You will work together with experienced and enthusiastic colleagues, and within F-Secure you will find some of the best minds in the cyber security industry. We actively encourage our Fellows to grow and develop within F-Secure, and in your career here you can find yourself contributing to any number of our other products and teams. You decide what to make of this role, what your priorities are, and how you organize your work for the best benefit to us all. We offer interesting challenges and a competitive compensation model with wide range of benefits. You get a chance to develop yourself professionally in an international and highly motivated team serving our customers in providing world class security, privacy and uncensored access to information online. You get to work in a flexible, agile, and dynamic working environment that supports individual needs. Giving our people both support and the opportunity to be in charge of their own work is something that is in our DNA. We are in a unique phase in our 30-year history and with curiosity and excitement in the air we see no limits for building a strong and fruitful career with us! A security vetting will possibly be conducted for the selected candidate in accordance to our employment process.
Posted 1 month ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Title: MLOps Engineer
Job Type: Contractor
Location: On-site, Gurugram, Pune, or Bangalore

Job Summary
Join our customer's dynamic team as a hands-on MLOps Engineer and play a pivotal role in driving the development, deployment, and automation of robust machine learning pipelines. Utilize your expertise in AWS and MLOps to help architect, optimize, and scale production-ready ML solutions across diverse projects. We value professionals who excel in both written and verbal communication, collaborating effectively in a high-performing environment.

Key Responsibilities
Design, automate, and maintain end-to-end ML pipelines for model training, deployment, and monitoring on AWS infrastructure.
Lead the development and operationalization of machine learning solutions using AWS services such as EKS, ECS, ECR, SageMaker, Step Functions, EventBridge, SNS/SQS, and Model Registry.
Integrate MLflow to manage experiment tracking, model versioning, and lifecycle management (a minimal tracking sketch follows this posting).
Implement and manage CI/CD pipelines specifically tailored for machine learning code and workflows.
Collaborate closely with data scientists, engineers, and stakeholders to productionize ML models and ensure reliability, scalability, and security.
Monitor and troubleshoot ML systems in production, proactively resolving issues and optimizing performance.
Document workflows, processes, and architectural decisions with clarity and precision.

Required Skills and Qualifications
Proven experience in MLOps with hands-on expertise in designing and deploying ML pipelines in production environments.
Strong proficiency with AWS core services, especially EKS, ECS, ECR, SageMaker (jobs, batch transform, hyperparameter tuning), Step Functions, EventBridge, SNS/SQS, and Model Registry.
Solid understanding of core machine learning concepts and best practices for productionizing ML code.
Demonstrated experience with MLflow for managing the model lifecycle and experiment tracking.
Expertise in implementing CI/CD pipelines for ML projects.
Excellent written and verbal communication skills, with a collaborative mindset.
A passion for automation, optimization, and scalable system design.

Preferred Qualifications
Experience supporting large-scale, distributed machine learning systems in a cloud environment.
Familiarity with container orchestration and monitoring tools within AWS.
Contributions to open-source MLOps or ML engineering projects.
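A minimal MLflow experiment-tracking sketch for the kind of workflow described above; the tracking URI and experiment name are placeholders, and it assumes an MLflow tracking server is reachable (otherwise runs are logged to a local mlruns directory).

```python
# Minimal MLflow tracking sketch: log parameters, a metric, and the trained model.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

mlflow.set_tracking_uri("http://mlflow.internal:5000")  # placeholder tracking server
mlflow.set_experiment("demo-classifier")

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

with mlflow.start_run():
    params = {"n_estimators": 200, "max_depth": 6}
    model = RandomForestClassifier(**params).fit(X_train, y_train)

    mlflow.log_params(params)
    mlflow.log_metric("accuracy", accuracy_score(y_test, model.predict(X_test)))
    mlflow.sklearn.log_model(model, "model")  # can be registered in the Model Registry later
```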
Posted 1 month ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Description
We are part of the India & Emerging Stores Customer Fulfilment Experience Org. The team's mission is to address unique customer requirements and the increasing associated costs/abuse of returns and rejects for Emerging Stores. Our team implements tech solutions that reduce the net cost of concessions/refunds - this includes buyer and seller abuse, costs associated with return/reject transportation, cost of contacts and operations cost at return centers. We have a huge opportunity to create a legacy, and our Legacy Statement is to “transform ease and quality of living in India, thereby enabling its potential in the 21st century”. We also believe that we have an additional responsibility to “help Amazon become truly global in its perspective and innovations” by creating global best-in-class products/platforms that can serve our customers worldwide.
This is an opportunity to join our mission to build tech solutions that empower sellers to delight the next billion customers. You will be responsible for building new system capabilities from the ground up for strategic business initiatives. If you feel excited by the challenge of setting the course for large company-wide initiatives, building and launching customer-facing products in IN and other emerging markets, this may be the next big career move for you.
We are building systems that can scale across multiple marketplaces and are at the state of the art in automated, large-scale e-commerce. We are looking for an SDE to deliver capabilities across marketplaces. We operate in a high-performance agile ecosystem where SDEs, Product Managers and Principals frequently connect with end customers of our products. Our SDEs stay connected with customers through seller/FC/Delivery Station visits and customer anecdotes. This allows our engineers to significantly influence the product roadmap, contribute to PRFAQs and create disproportionate impact through the tech they deliver. We offer technology leaders a once-in-a-lifetime opportunity to transform billions of lives across the planet through their tech innovation.
As an engineer, you will help with the design, implementation, and launch of many key product features. You will get an opportunity to work on a wide range of technologies (including AWS OpenSearch, Lambda, ECS, SQS, DynamoDB, Neptune, etc.) and apply new technologies to solve customer problems. You will have an influence on defining product features, drive operational excellence, and spearhead the best practices that enable a quality product. You will get to work with highly skilled and motivated engineers who are already contributing to building high-scale, highly available systems.
If you are looking for an opportunity to work on world-leading technologies, would like to build creative technology solutions that positively impact hundreds of millions of customers, and relish large ownership and diverse technologies, join our team today!
As An Engineer You Will Be Responsible For
Ownership of the product/feature end-to-end, for all phases from development to production.
Ensuring the developed features are scalable and highly available with no quality concerns.
Working closely with senior engineers to refine the design and implementation.
Management and execution against project plans and delivery commitments.
Assisting directly and indirectly in the continual hiring and development of technical talent.
Creating and executing appropriate quality plans, project plans, test strategies and processes for development activities, in concert with business and project management efforts.
Contributing intellectual property through patents.
The candidate should be an engineer who is passionate about delivering experiences that delight customers and about creating robust solutions. They should be able to commit to and own deliveries end-to-end.
About The Team
Team: IES NCRC Tech
Mission: We own programs to prevent customer abuse for IN & emerging marketplaces. We detect abusive customers for known abuse patterns and apply interventions at different stages of the buyer's journey, such as checkout, pre-fulfillment, shipment and customer contact (customer service). We closely partner with the international machine learning team to build ML-based solutions for the above interventions.
Vision: Our goal is to automate the detection of new abuse patterns and act quickly to minimize financial loss to Amazon. This acts as a deterrent for abusers, while building trust for genuine customers. We use machine-learning-based models to automate abuse detection in a scalable and efficient manner.
Technologies: The ML models leveraged by the team cover a wide variety, ranging from gradient-boosted models (XGBoost) to deep-learning models (RNN, CNN), and use frameworks like PyTorch, TensorFlow and Keras for training and inference. Productionization of ML models for real-time, low-latency, high-traffic use cases poses unique challenges, which in turn makes the work exciting. In terms of tech stack, multiple AWS technologies are used, e.g. SageMaker, ECS, Lambda, Elasticsearch, Step Functions, AWS Batch, DynamoDB, S3, CDK (for infra) and graph databases, and we are open to adopting new technologies as the use case demands.
Basic Qualifications
3+ years of non-internship professional software development experience
2+ years of non-internship experience in design or architecture (design patterns, reliability and scaling) of new and existing systems
Experience programming with at least one software programming language
Preferred Qualifications
3+ years of experience with the full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations
Bachelor's degree in computer science or equivalent
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Company - ADCI - Haryana
Job ID: A3024781
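As a hedged illustration of the real-time, low-latency serving pattern mentioned in the Technologies paragraph above (the endpoint name, feature names, and payload schema are hypothetical, not the team's actual system), a Lambda handler calling a SageMaker endpoint might look like this:

```python
# Hedged sketch: a Lambda handler calling a real-time SageMaker endpoint for
# abuse-risk scoring. The endpoint name and payload schema are hypothetical.
import json
import boto3

runtime = boto3.client("sagemaker-runtime")

ENDPOINT_NAME = "abuse-risk-scorer"  # placeholder, not a real endpoint

def handler(event, context):
    # 'event' is assumed to carry order features, e.g. from an SQS/EventBridge trigger.
    features = {
        "order_value": event["order_value"],
        "return_count_90d": event["return_count_90d"],
        "account_age_days": event["account_age_days"],
    }
    response = runtime.invoke_endpoint(
        EndpointName=ENDPOINT_NAME,
        ContentType="application/json",
        Body=json.dumps(features),
    )
    score = json.loads(response["Body"].read())
    # Downstream, a Step Functions state machine could branch on this score,
    # e.g. holding a refund for manual review above a threshold.
    return {"risk_score": score}
```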
Posted 1 month ago
18.0 years
0 Lacs
Gurugram, Haryana, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. EY Parthenon - Artificial Intelligence (AI) and Generative AI (GenAI) Leader EY-Parthenon team is a multi-disciplinary technology team delivering client projects and solutions across key sectors and functions across the deal life cycle which helps organizations re-imagine and scale up their existing portfolios through the adoption of digital and AI/GenAI capabilities on top of strong data and cloud solution skills. These assignments cover a wide range of countries and industry sectors. The opportunity As the Executive Director of AI & GenAI at EYP, you will spearhead the integration of cutting-edge AI solutions to solve complex client challenges, driving measurable impact across revenue growth, cost optimization, and customer experience enhancement. This leadership role requires a visionary with deep technical expertise in AI/GenAI and a proven track record in consulting, enabling you to collaborate with regional partners to secure high-value engagements and deliver scalable, cross-sector solutions. Your Key Responsibilities Client Engagement & Business Development Partner with regional practice teams to identify AI-driven opportunities, craft tailored proposals and win client engagements. Lead client workshops to diagnose pain points, design AI strategies, and articulate ROI-driven use cases (e.g., GenAI for hyper-personalization, predictive analytics for supply chain optimization). Build trusted advisor relationships with C-suite stakeholders, aligning AI initiatives with business outcomes. AI Solution Development Architect end-to-end AI solutions: ideation, data strategy, model development (ML/GenAI), MLOps, and scaling. Drive cross-sector innovation (e.g., GenAI-powered customer service automation for retail, predictive maintenance in manufacturing). Ensure ethical AI practices, governance, and compliance across deployments. Thought Leadership & Market Presence Publish insights on AI trends (e.g., multimodality, RAG architectures) Shape the AI go-to-market strategy, enhancing its reputation as a leader in transformative AI consulting. Skills And Attributes For Success Technical Expertise: Mastery of AI/GenAI lifecycle: NLP, deep learning (Transformers, GANs), cloud platforms (AWS SageMaker, Azure ML), and tools (LangChain, Hugging Face). Proficiency in Python, TensorFlow/PyTorch, and generative models (GPT, Claude, Stable Diffusion). Consulting Acumen : 18+ years in top-tier consulting, with 8+ years leading AI engagements. Expertise in stakeholder management, value storytelling, and commercial negotiation Leadership : Track record of building high-performing teams and strong AI portfolios. Exceptional communication skills, bridging technical and executive audiences. 
To qualify for the role, you must have
Experience in guiding teams on AI/data science projects and communicating results to clients
Familiarity with implementing solutions in the Azure cloud framework
Excellent presentation skills
18+ years of relevant work experience in developing and implementing AI and machine learning models; experience of deployment in Azure is preferred
Experience in applying statistical techniques such as linear and non-linear regression, classification, optimization, forecasting and text analytics
Familiarity with deep learning and machine learning algorithms and the use of popular AI/ML frameworks
A minimum of 6 years of experience in working with NLG, LLM and DL techniques
A solid understanding of deep learning and neural network techniques
Expertise in implementing applications using open-source and proprietary LLMs
Proficiency in using LangChain-type orchestrators or similar generative AI workflow management tools
A minimum of 6-9 years of programming experience in Python
Experience with the software development life cycle (SDLC) and principles of product development
Willingness to mentor team members
Solid analytical, technical and problem-solving skills
Excellent written and verbal communication skills
Preferred Experience
PhD/MS/MTech/BTech in Computer Science, Data Science, or a related field.
Published research/papers on AI/GenAI applications.
Ideally, you’ll also have
Ability to think strategically and end-to-end with a results-oriented mindset
Ability to build rapport within the firm and win the trust of clients
Willingness to travel extensively and to work at client sites / practice office locations
Why Join Us
Lead AI innovation at scale for Fortune 500 clients, backed by a global brand and multidisciplinary experts.
Thrive in a culture of entrepreneurship, with access to proprietary datasets and emerging tech partnerships.
Accelerate your career through executive visibility and equity in shaping the future of AI consulting.
What We Offer
EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career.
Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next.
Success, as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way.
Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs.
Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs.
EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate.
Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
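Since the role above asks for familiarity with RAG architectures and LangChain-type orchestrators, here is a toy, framework-free sketch of the underlying retrieve-then-prompt pattern; the documents, query, and TF-IDF retriever are stand-ins for a real vector store, embedding model, and LLM call:

```python
# Toy sketch of the retrieval-augmented generation (RAG) pattern: retrieve the
# most relevant documents, then assemble them into a grounded prompt. A
# production system would use a vector store and an LLM client instead of the
# TF-IDF stand-in and the plain string prompt shown here.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Q3 churn rose 4% in the prepaid segment.",
    "The supply chain pilot cut lead times by 12 days.",
    "Hyper-personalisation lifted campaign CTR by 18%.",
]  # placeholder knowledge base

def retrieve(query: str, k: int = 2) -> list[str]:
    vectorizer = TfidfVectorizer()
    doc_vectors = vectorizer.fit_transform(documents)
    query_vector = vectorizer.transform([query])
    scores = cosine_similarity(query_vector, doc_vectors)[0]
    top = scores.argsort()[::-1][:k]
    return [documents[i] for i in top]

def build_prompt(query: str) -> str:
    context = "\n".join(f"- {doc}" for doc in retrieve(query))
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

print(build_prompt("What happened to churn last quarter?"))
```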
Posted 1 month ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Description
We are part of the India & Emerging Stores Customer Fulfilment Experience Org. The team's mission is to address unique customer requirements and the increasing associated costs/abuse of returns and rejects for Emerging Stores. Our team implements tech solutions that reduce the net cost of concessions/refunds - this includes buyer and seller abuse, costs associated with return/reject transportation, cost of contacts and operations cost at return centers. We have a huge opportunity to create a legacy, and our Legacy Statement is to “transform ease and quality of living in India, thereby enabling its potential in the 21st century”. We also believe that we have an additional responsibility to “help Amazon become truly global in its perspective and innovations” by creating global best-in-class products/platforms that can serve our customers worldwide.
This is an opportunity to join our mission to build tech solutions that empower sellers to delight the next billion customers. You will be responsible for building new system capabilities from the ground up for strategic business initiatives. If you feel excited by the challenge of setting the course for large company-wide initiatives, building and launching customer-facing products in IN and other emerging markets, this may be the next big career move for you.
We are building systems that can scale across multiple marketplaces and are at the state of the art in automated, large-scale e-commerce. We are looking for an SDE to deliver capabilities across marketplaces. We operate in a high-performance agile ecosystem where SDEs, Product Managers and Principals frequently connect with end customers of our products. Our SDEs stay connected with customers through seller/FC/Delivery Station visits and customer anecdotes. This allows our engineers to significantly influence the product roadmap, contribute to PRFAQs and create disproportionate impact through the tech they deliver. We offer technology leaders a once-in-a-lifetime opportunity to transform billions of lives across the planet through their tech innovation.
As an engineer, you will help with the design, implementation, and launch of many key product features. You will get an opportunity to work on a wide range of technologies (including AWS OpenSearch, Lambda, ECS, SQS, DynamoDB, Neptune, etc.) and apply new technologies to solve customer problems. You will have an influence on defining product features, drive operational excellence, and spearhead the best practices that enable a quality product. You will get to work with highly skilled and motivated engineers who are already contributing to building high-scale, highly available systems.
If you are looking for an opportunity to work on world-leading technologies, would like to build creative technology solutions that positively impact hundreds of millions of customers, and relish large ownership and diverse technologies, join our team today!
As An Engineer You Will Be Responsible For
Ownership of the product/feature end-to-end, for all phases from development to production.
Ensuring the developed features are scalable and highly available with no quality concerns.
Working closely with senior engineers to refine the design and implementation.
Management and execution against project plans and delivery commitments.
Assisting directly and indirectly in the continual hiring and development of technical talent.
Creating and executing appropriate quality plans, project plans, test strategies and processes for development activities, in concert with business and project management efforts.
Contributing intellectual property through patents.
The candidate should be an engineer who is passionate about delivering experiences that delight customers and about creating robust solutions. They should be able to commit to and own deliveries end-to-end.
About The Team
Team: IES NCRC Tech
Mission: We own programs to prevent customer abuse for IN & emerging marketplaces. We detect abusive customers for known abuse patterns and apply interventions at different stages of the buyer's journey, such as checkout, pre-fulfillment, shipment and customer contact (customer service). We closely partner with the international machine learning team to build ML-based solutions for the above interventions.
Vision: Our goal is to automate the detection of new abuse patterns and act quickly to minimize financial loss to Amazon. This acts as a deterrent for abusers, while building trust for genuine customers. We use machine-learning-based models to automate abuse detection in a scalable and efficient manner.
Technologies: The ML models leveraged by the team cover a wide variety, ranging from gradient-boosted models (XGBoost) to deep-learning models (RNN, CNN), and use frameworks like PyTorch, TensorFlow and Keras for training and inference. Productionization of ML models for real-time, low-latency, high-traffic use cases poses unique challenges, which in turn makes the work exciting. In terms of tech stack, multiple AWS technologies are used, e.g. SageMaker, ECS, Lambda, Elasticsearch, Step Functions, AWS Batch, DynamoDB, S3, CDK (for infra) and graph databases, and we are open to adopting new technologies as the use case demands.
Basic Qualifications
3+ years of non-internship professional software development experience
2+ years of non-internship experience in design or architecture (design patterns, reliability and scaling) of new and existing systems
Experience programming with at least one software programming language
Preferred Qualifications
3+ years of experience with the full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations
Bachelor's degree in computer science or equivalent
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Company - ADCI - Haryana
Job ID: A3024851
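Because this listing names gradient-boosted models (XGBoost) among its abuse-detection approaches, the following is a hedged, self-contained sketch of training such a classifier; the features and labels are synthetic placeholders invented for illustration only:

```python
# Hedged sketch of a gradient-boosted abuse classifier (XGBoost). The signals
# and labels below are synthetic stand-ins, not the team's actual features.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.poisson(1.5, n),        # hypothetical: returns in the last 90 days
    rng.uniform(0, 500, n),     # hypothetical: average order value
    rng.integers(1, 2000, n),   # hypothetical: account age in days
])
# Noisy toy label loosely tied to the first feature.
y = (X[:, 0] > 3).astype(int) ^ (rng.random(n) < 0.05).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

model = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1,
                      eval_metric="auc")
model.fit(X_tr, y_tr)

print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```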
Posted 1 month ago
5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
We are part of Web Experiences and Services Team (WEST) within the Office Online Product Group, focused on building AI Ops solutions for Online Microsoft Word, Excel, PowerPoint, OneNote, and their shared services. Our mission is to leverage AI and ML to enhance live site management, automate incident resolution, accelerate root cause analysis and prevent incidents proactively. We are focusing on anomaly detection, predictive analytics, and error log analysis to enable more scalable and generalized solutions, driving reliability and efficiency across Office Online applications.
We are looking for a Senior Applied Scientist who is passionate about applying AI and ML to incident management and site reliability. You will work as part of a multidisciplinary team, collaborating with engineers, data scientists, and domain experts to develop state-of-the-art solutions that will transform how Office Online handles site reliability and incident prevention.
At Microsoft, we are committed to diversity, inclusion, and innovation, ensuring that we build great workplaces and great products. Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.
Responsibilities
Collaborate with engineers, product teams, and partners to drive innovation in AI-powered site reliability.
Develop and implement AI-driven, scalable solutions for incident management, prevention, and root cause analysis across Office Online applications.
Build AI/ML-powered solutions such as anomaly detection, predictive analytics, and error log analysis for faster mitigation and prevention of incidents.
Ensure end-to-end integration of ML/AI-powered solutions in production, including deployment, monitoring, and refinement, leveraging cloud-based machine learning platforms (e.g., AWS SageMaker, Azure ML Service, Databricks) and MLOps tools (MLflow, Tecton, Pinecone, Feature Stores).
Prototype new approaches in ML/AI, SLMs, AI agents, agentic workflows, etc., to design, run, and analyze experiments for incident detection, mitigation and resolution.
Fundamentals: champion and set an example in customer obsession, data security, performance, observability and reliability.
Optimize AI pipelines, automate responses to incidents, and create a feedback loop for continuous improvement.
Stay ahead of emerging trends in AI/ML.
Qualifications
Required Qualifications:
Bachelor's Degree in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 5+ years related experience (e.g., statistics, predictive analytics, research) OR Master's Degree in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 5+ years related experience (e.g., statistics, predictive analytics, research) OR equivalent experience.
Excellent coding and debugging skills with a deep understanding of ML/AI algorithms and data science problems.
Experience with ML techniques such as deep learning, predictive modelling, and time series data analysis.
Excellent communication skills, including the ability to translate complex AI concepts into actionable insights for product teams.
Preferred Qualifications
Passion for new technologies, learning and adapting quickly, end user quality and customer satisfaction.
Understanding of AI-driven operations, site reliability engineering (SRE), and production ML systems.
Expertise in AI Ops, anomaly detection, error log analysis and incident management solutions.
Awareness and understanding of emerging research and technologies related to live site and incident management, such as agentic workflows for site reliability management.
Effective verbal, visual and written communication skills.
Microsoft is an equal opportunity employer. Consistent with applicable law, all qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
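As a hedged illustration of the anomaly-detection theme in this posting (not Microsoft's implementation), a rolling z-score over a synthetic error-count series can flag incident-like spikes; the window size and threshold are placeholders:

```python
# Illustrative sketch: flag spikes in an error-count time series with a rolling
# z-score. The data is synthetic, with an injected "incident" for demonstration.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
errors = pd.Series(rng.poisson(20, 288),
                   index=pd.date_range("2025-01-01", periods=288, freq="5min"))
errors.iloc[200:205] += 80  # injected incident

rolling_mean = errors.rolling("2h").mean()
rolling_std = errors.rolling("2h").std()
z = (errors - rolling_mean) / rolling_std

anomalies = errors[z > 4]   # placeholder threshold
print(anomalies)
```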
Posted 1 month ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
Make an impact with NTT DATA
Join a company that is pushing the boundaries of what is possible. We are renowned for our technical excellence and leading innovations, and for making a difference to our clients and society. Our workplace embraces diversity and inclusion – it’s a place where you can grow, belong and thrive.
Your day at NTT DATA
Cloud AI/GenAI Engineer (ServiceNow)
We are seeking a talented AI/GenAI Engineer to join our team in delivering cutting-edge AI solutions to clients. The successful candidate will be responsible for implementing, developing, and deploying AI/GenAI models and solutions on cloud platforms. This role requires knowledge of ServiceNow (SN) modules such as CSM, as well as virtual agent development. The candidate should have strong technical aptitude, problem-solving skills, and the ability to work effectively with clients and internal teams.
What You'll Be Doing
Key Responsibilities:
Cloud AI Implementation: Implement and deploy AI/GenAI models and solutions using various cloud platforms (e.g., AWS SageMaker, Azure ML, Google Vertex AI) and frameworks (e.g., TensorFlow, PyTorch, LangChain, Vellum).
Build Virtual Agents in SN: Design, develop and deploy virtual agents using the SN agent builder.
Integrate SN: Design and develop seamless integration of SN with other external AI systems.
Agentic AI: Assist in developing agentic AI systems on cloud platforms, enabling autonomous decision-making and action-taking capabilities in AI solutions.
Cloud-Based Vector Databases: Implement cloud-native vector databases (e.g., Pinecone, Weaviate, Milvus) or cloud-managed services for efficient similarity search and retrieval in AI applications.
Model Evaluation and Fine-tuning: Evaluate and optimize cloud-deployed generative models using metrics like perplexity, BLEU score, and ROUGE score, and fine-tune models using techniques like prompt engineering, instruction tuning, and transfer learning.
Security for Cloud LLMs: Apply security practices for cloud-based LLMs, including data encryption, IAM policies, and network security configurations.
Client Support: Support client engagements by implementing AI requirements and contributing to solution delivery.
Cloud Solution Implementation: Build scalable and efficient cloud-based AI/GenAI solutions according to architectural guidelines.
Cloud Model Development: Develop and fine-tune AI/GenAI models using cloud services for specific use cases, such as natural language processing, computer vision, or predictive analytics.
Testing and Validation: Conduct testing and validation of cloud-deployed AI/GenAI models, including performance evaluation and bias detection.
Deployment and Maintenance: Deploy AI/GenAI models in production environments, ensuring seamless integration with existing systems and infrastructure.
Cloud Deployment: Deploy AI/GenAI models in cloud production environments and integrate with existing systems.
Requirements:
Education: Bachelor's or Master's degree in Computer Science, AI, ML, or related fields.
Experience: 3-5 years of experience in engineering solutions, with a track record of delivering cloud AI solutions, including at least 2 years of experience with SN and the SN agent builder.
Technical Skills: Proficiency in cloud AI/GenAI services and technologies across major cloud providers (AWS, Azure, GCP); experience with cloud-native vector databases and managed similarity search services; experience with SN modules like CSM and the virtual agent builder; experience with security measures for cloud-based LLMs, including data encryption, access controls, and compliance requirements.
Programming Skills: Strong programming skills in languages like Python or R.
Cloud Platform Knowledge: Strong understanding of cloud platforms, their AI services, and best practices for deploying ML models in the cloud.
Communication: Excellent communication and interpersonal skills, with the ability to work effectively with clients and internal teams.
Problem-Solving: Strong problem-solving skills, with the ability to analyse complex problems and develop creative solutions.
Nice to have: Experience with serverless architectures for AI workloads.
Nice to have: Experience with ReactJS for rapid prototyping of cloud AI solution frontends.
Location: Delhi or Bangalore (with remote work options)
Workplace type: Hybrid Working
About NTT DATA
NTT DATA is a $30+ billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. We invest over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure, and connectivity. We are also one of the leading providers of digital and AI infrastructure in the world. NTT DATA is part of NTT Group and headquartered in Tokyo.
Equal Opportunity Employer
NTT DATA is proud to be an Equal Opportunity Employer with a global culture that embraces diversity. We are committed to providing an environment free of unfair discrimination and harassment. We do not discriminate based on age, race, colour, gender, sexual orientation, religion, nationality, disability, pregnancy, marital status, veteran status, or any other protected category. Join our growing global team and accelerate your career with us. Apply today.
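The vector-database requirement above reduces to nearest-neighbour search over embeddings; this small numpy sketch shows the core cosine-similarity lookup that services such as Pinecone, Weaviate or Milvus perform at scale with approximate-nearest-neighbour indexes (the embeddings here are random placeholders rather than real model outputs):

```python
# Minimal similarity-search sketch over a matrix of (placeholder) embeddings.
import numpy as np

rng = np.random.default_rng(42)
corpus_embeddings = rng.normal(size=(1000, 384))   # e.g. 384-dim sentence vectors
corpus_embeddings /= np.linalg.norm(corpus_embeddings, axis=1, keepdims=True)

def top_k(query_embedding: np.ndarray, k: int = 5) -> np.ndarray:
    q = query_embedding / np.linalg.norm(query_embedding)
    scores = corpus_embeddings @ q          # cosine similarity on unit vectors
    return np.argsort(scores)[::-1][:k]     # indices of the k nearest documents

print(top_k(rng.normal(size=384)))
```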
Posted 1 month ago
5.0 years
3 - 10 Lacs
Noida
On-site
Senior Applied Scientist
Noida, Uttar Pradesh, India
Date posted: Jul 03, 2025
Job number: 1833716
Work site: Microsoft on-site only
Travel: 0-25 %
Role type: Individual Contributor
Profession: Research, Applied, & Data Sciences
Discipline: Applied Sciences
Employment type: Full-Time
Overview
We are part of Web Experiences and Services Team (WEST) within the Office Online Product Group, focused on building AI Ops solutions for Online Microsoft Word, Excel, PowerPoint, OneNote, and their shared services. Our mission is to leverage AI and ML to enhance live site management, automate incident resolution, accelerate root cause analysis and prevent incidents proactively. We are focusing on anomaly detection, predictive analytics, and error log analysis to enable more scalable and generalized solutions, driving reliability and efficiency across Office Online applications.
We are looking for a Senior Applied Scientist who is passionate about applying AI and ML to incident management and site reliability. You will work as part of a multidisciplinary team, collaborating with engineers, data scientists, and domain experts to develop state-of-the-art solutions that will transform how Office Online handles site reliability and incident prevention.
At Microsoft, we are committed to diversity, inclusion, and innovation, ensuring that we build great workplaces and great products. Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.
Qualifications
Required Qualifications:
Bachelor's Degree in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 5+ years related experience (e.g., statistics, predictive analytics, research) OR Master's Degree in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 5+ years related experience (e.g., statistics, predictive analytics, research) OR equivalent experience.
Excellent coding and debugging skills with a deep understanding of ML/AI algorithms and data science problems.
Experience with ML techniques such as deep learning, predictive modelling, and time series data analysis.
Excellent communication skills, including the ability to translate complex AI concepts into actionable insights for product teams.
Preferred Qualifications:
Passion for new technologies, learning and adapting quickly, end user quality and customer satisfaction.
Understanding of AI-driven operations, site reliability engineering (SRE), and production ML systems.
Expertise in AI Ops, anomaly detection, error log analysis and incident management solutions.
Awareness and understanding of emerging research and technologies related to live site and incident management, such as agentic workflows for site reliability management.
Effective verbal, visual and written communication skills.
Responsibilities
Collaborate with engineers, product teams, and partners to drive innovation in AI-powered site reliability.
Develop and implement AI-driven, scalable solutions for incident management, prevention, and root cause analysis across Office Online applications.
Build AI/ML-powered solutions such as anomaly detection, predictive analytics, and error log analysis for faster mitigation and prevention of incidents.
Ensure end-to-end integration of ML/AI-powered solutions in production, including deployment, monitoring, and refinement, leveraging cloud-based machine learning platforms (e.g., AWS SageMaker, Azure ML Service, Databricks) and MLOps tools (MLflow, Tecton, Pinecone, Feature Stores).
Prototype new approaches in ML/AI, SLMs, AI agents, agentic workflows, etc., to design, run, and analyze experiments for incident detection, mitigation and resolution.
Fundamentals: champion and set an example in customer obsession, data security, performance, observability and reliability.
Optimize AI pipelines, automate responses to incidents, and create a feedback loop for continuous improvement.
Stay ahead of emerging trends in AI/ML.
Benefits/perks listed below may vary depending on the nature of your employment with Microsoft and the country where you work.
Industry leading healthcare
Educational resources
Discounts on products and services
Savings and investments
Maternity and paternity leave
Generous time away
Giving programs
Opportunities to network and connect
Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
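Tied to the error-log-analysis responsibility above, one hedged sketch is to group similar error messages with TF-IDF and k-means so that reviewers see clusters rather than raw log volume; the log lines are invented, and a real pipeline would read from a log store:

```python
# Hedged sketch of error-log clustering: vectorise messages with TF-IDF and
# group them with k-means. Log lines are made up for illustration.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

logs = [
    "Timeout while calling storage backend",
    "Storage backend call timed out after 30s",
    "NullReferenceException in DocumentRenderer",
    "DocumentRenderer threw NullReferenceException",
    "Auth token expired for service account",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(logs)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)

for label, line in sorted(zip(labels, logs)):
    print(label, line)
```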
Posted 1 month ago
5.0 years
0 Lacs
India
On-site
WhizzHR is hiring Media Solution Architect – AI/ML & Automation Focus . Role Summary: We are seeking a Media Solution Architect to lead the strategic design of AI-driven and automation-centric solutions across digital media operations. This role involves architecting intelligent, scalable systems that enhance efficiency across campaign setup, trafficking, reporting, QA, and billing processes. The ideal candidate will bring a strong blend of automation, AI/ML, and digital marketing expertise to drive innovation and operational excellence. Key Responsibilities: Identify and assess opportunities to apply AI/ML and automation across media operations workflows (e.g., intelligent campaign setup, anomaly detection in QA, dynamic taxonomy validation). Design scalable, intelligent architectures using a combination of machine learning models, RPA, Python-based automation, and media APIs (e.g., Meta, DV360, YouTube). Develop or integrate machine learning models for use cases such as performance prediction, media mix modeling, and anomaly detection in reporting or billing. Ensure adherence to best practices in data governance, compliance, and security, particularly around AI system usage. Partner with business stakeholders to prioritize high-impact AI/automation use cases and define clear ROI and success metrics. Stay informed on emerging trends in AI/ML and translate innovations into actionable media solutions. Ideal Profile: 5+ years of experience in automation, AI/ML, or data science, including 3+ years in marketing, ad tech, or digital media. Strong understanding of machine learning frameworks for predictive modeling, anomaly detection, and NLP-based insight generation. Proficiency in Python and libraries such as scikit-learn, TensorFlow, pandas, or PyTorch. Experience with cloud-based AI platforms (e.g., Google Vertex AI, Azure ML, AWS Sagemaker) and media API integrations. Ability to architect AI-enhanced automations that improve forecasting, QA, and decision-making in media operations. Familiarity with RPA tools (e.g., UiPath, Automation Anywhere); AI-first automation experience is a plus. Demonstrated success in developing or deploying ML models for campaign optimization, fraud detection, or process intelligence. Familiarity with digital media ecosystems such as Google Ads, Meta, TikTok, DSPs, and ad servers. Excellent communication and stakeholder management skills, with the ability to translate technical solutions into business value. Kindly share your Resume at Hello@whizzhr.com
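For the billing and reporting anomaly-detection use case this role describes, an IsolationForest over campaign spend is one simple, hedged illustration; the spend figures and contamination rate are synthetic placeholders:

```python
# Illustrative sketch of billing/reporting anomaly detection with scikit-learn's
# IsolationForest. The campaign-spend data and injected anomalies are synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
spend = rng.normal(1000, 50, size=(200, 1))     # daily spend per campaign
spend[-3:] = [[5200.0], [4800.0], [10.0]]       # injected billing anomalies

detector = IsolationForest(contamination=0.02, random_state=1).fit(spend)
flags = detector.predict(spend)                  # -1 marks an outlier

print(np.where(flags == -1)[0])                  # indices worth a manual QA check
```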
Posted 1 month ago
10.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About KPMG in India KPMG entities in India are professional services firm(s). These Indian member firms are affiliated with KPMG International Limited. KPMG was established in India in August 1993. Our professionals leverage the global network of firms, and are conversant with local laws, regulations, markets and competition. KPMG has offices across India in Ahmedabad, Bengaluru, Chandigarh, Chennai, Gurugram, Hyderabad, Jaipur, Kochi, Kolkata, Mumbai, Noida, Pune, Vadodara and Vijayawada. KPMG entities in India offer services to national and international clients in India across sectors. We strive to provide rapid, performance-based, industry-focussed and technology-enabled services, which reflect a shared knowledge of global and local industries and our experience of the Indian business environment. Job Summary: We are seeking a highly experienced and technically adept Solution Architect to join our dynamic team. The ideal candidate will be a strategic thinker with a strong hands-on background, responsible for translating business requirements into scalable, secure, and robust technical solutions. This role requires a deep understanding of the entire software development lifecycle, from initial concept and design through to deployment, operations, and continuous improvement. The Solution Architect will play a pivotal role in shaping our technical roadmap, ensuring architectural alignment, and driving the adoption of best practices across product development, infrastructure, and operations. Key Responsibilities: Solution Design & Architecture: Lead the design and development of end-to-end technical solutions, ensuring they meet business needs, technical requirements, and architectural standards. Create detailed architectural diagrams, technical specifications, and design documents for various systems and applications. Evaluate and recommend technology choices, frameworks, and patterns to optimize performance, scalability, security, and cost-effectiveness. Conduct architectural reviews and provide technical guidance to development teams, ensuring adherence to design principles. Product Development Lifecycle: Collaborate closely with product managers, business analysts, and stakeholders to understand business requirements and translate them into technical solutions. Provide architectural oversight throughout the product development lifecycle, from ideation to deployment and beyond. Champion agile methodologies and practices within the technical teams. DevOps & Automation: Drive the adoption of DevOps principles and practices, including continuous integration, continuous delivery (CI/CD), and automated testing. Design and implement scalable and resilient deployment pipelines. Promote infrastructure-as-code (IaC) principles. Infrastructure & Cloud Expertise: Architect and design solutions leveraging leading cloud platforms, with a strong focus on Azure and/or AWS. Demonstrate a deep understanding of cloud services such as compute (VMs, containers, serverless), storage, networking, databases, and security services. Optimize cloud resource utilization for cost efficiency and performance. Possess a solid understanding of on-premise infrastructure concepts and hybrid cloud deployments. Cybersecurity: Integrate security best practices into all phases of the solution design and development lifecycle. Identify and mitigate security risks and vulnerabilities at the architectural level. Ensure compliance with relevant security standards and regulations. 
Advise on security controls, identity and access management (IAM), data encryption, and network security. Exposure to AI & Emerging Technologies: Stay abreast of emerging technologies and industry trends, particularly in Artificial Intelligence (AI) and Machine Learning (ML). Evaluate the applicability of AI/ML solutions to business problems and integrate them into architectural designs where appropriate. Understand the architectural implications of integrating AI/ML models and data pipelines. Leadership & Communication: Act as a technical leader and mentor to development teams, fostering a culture of technical excellence. Communicate complex technical concepts clearly and concisely to both technical and non-technical stakeholders. Influence and drive architectural decisions across the organization. Qualifications: Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field. 10+ years of experience in software development, with at least 5+ years in a Solution Architect or similar senior architectural role. Proven track record of designing and delivering complex, scalable, and secure enterprise-level solutions. Extensive experience with at least one major cloud platform (Azure and/or AWS) is mandatory, including hands-on experience with core services. Strong understanding of architectural patterns (e.g., microservices, event-driven, serverless) and their application. Proficiency in at least one major programming language (e.g., Java, Python, .NET, Node.js). Solid understanding of databases (relational and NoSQL). Experience with DevOps tools and practices (e.g., Docker, Kubernetes, Jenkins, GitLab CI/CD, Terraform, Ansible). Deep knowledge of cybersecurity principles, best practices, and common vulnerabilities. Exposure to AI/ML concepts, frameworks, and deployment patterns (e.g., TensorFlow, PyTorch, Azure ML, AWS SageMaker) is a significant plus. Excellent communication, presentation, and interpersonal skills. Ability to work independently and collaboratively in a fast-paced environment. Preferred Qualifications (Nice to Have): Relevant cloud certifications (e.g., Azure Solutions Architect Expert, AWS Certified Solutions Architect - Professional). Contributions to open-source projects or active participation in tech communities. Equal Opportunity Employer KPMG India: KPMG India has a policy of providing equal opportunity for all applicants and employees regardless of their color, caste, religion, age, sex/gender, national origin, citizenship, sexual orientation, gender identity or expression, disability or other legally protected status. KPMG India values diversity and we request you to submit the details below to support us in our endeavor for diversity. Providing the below information is voluntary and refusal to submit such information will not be prejudicial to you.
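As a hedged illustration of the microservice deployment pattern referenced in this role (the model, request schema, and route are illustrative assumptions, not a prescribed architecture), a small FastAPI service wrapping a scikit-learn model could look like the following; such a service would typically be containerised with Docker and placed behind an API gateway:

```python
# Minimal sketch of an ML inference microservice. Model and schema are
# illustrative; a real service would load a registered artifact (e.g. from S3
# or a model registry) instead of training at startup.
from fastapi import FastAPI
from pydantic import BaseModel
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

app = FastAPI()

# Stand-in model trained at startup so the example is self-contained.
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

class Features(BaseModel):
    sepal_length: float
    sepal_width: float
    petal_length: float
    petal_width: float

@app.post("/predict")
def predict(f: Features) -> dict:
    pred = model.predict([[f.sepal_length, f.sepal_width,
                           f.petal_length, f.petal_width]])[0]
    return {"class_id": int(pred)}

# Run locally (assumption: uvicorn installed): uvicorn app:app --port 8080
```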
Posted 1 month ago