4.0 - 8.0 years
5 - 8 Lacs
Hyderabad, Bengaluru
Work from Office
Why Join?
- Above market-standard compensation
- Contract-based or freelance opportunities (2-12 months)
- Work with industry leaders solving real AI challenges
- Flexible work locations: Remote | Onsite | Hyderabad/Bangalore

Your Role:
- Architect and optimize ML infrastructure with Kubeflow, MLflow, SageMaker Pipelines
- Build CI/CD pipelines (GitHub Actions, Jenkins, GitLab CI/CD)
- Automate ML workflows (feature engineering, retraining, deployment)
- Scale ML models with Docker, Kubernetes, Airflow
- Ensure model observability, security, and cost optimization in the cloud (AWS/GCP/Azure)

Must-Have Skills:
- Proficiency in Python, TensorFlow, PyTorch, CI/CD pipelines
- Hands-on experience with cloud ML platforms (AWS SageMaker, GCP Vertex AI, Azure ML)
- Expertise in monitoring tools (MLflow, Prometheus, Grafana)
- Knowledge of distributed data processing (Spark, Kafka)
- Bonus: experience in A/B testing, canary deployments, serverless ML
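The "automate ML workflows (feature engineering, retraining, deployment)" bullet usually starts with a drift check that decides whether retraining fires at all. A minimal, stdlib-only sketch of that trigger logic (the PSI helper, sample names, and the 0.2 threshold are illustrative conventions, not this employer's stack):

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between two numeric samples."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0
    def hist(xs):
        counts = [0] * bins
        for x in xs:
            i = min(int((x - lo) / width), bins - 1)
            counts[max(i, 0)] += 1
        # smooth empty bins so the log below is always defined
        return [(c or 0.5) / len(xs) for c in counts]
    e, a = hist(expected), hist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

def should_retrain(train_sample, live_sample, threshold=0.2):
    """Common rule of thumb: PSI above ~0.2 signals major drift -> retrain."""
    return psi(train_sample, live_sample) > threshold

train = [0.1 * i for i in range(100)]          # feature distribution at training time
shifted = [5.0 + 0.1 * i for i in range(100)]  # live traffic drifted upward
assert not should_retrain(train, train)
assert should_retrain(train, shifted)
```

In a real pipeline a check like this would run as a scheduled Airflow task, with a positive result kicking off the retraining DAG.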
Posted 2 weeks ago
4.0 - 10.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
We go beyond the obvious, using intelligence, passion and creativity to inspire new thinking and shape the world we live in. To start a career that is out of the ordinary, please apply...

Job Details

KANTAR is the world's leading insights, consulting, and analytics company. We understand how people think, feel, shop, share, vote, and view more than anybody else. With over 25,000 people, we combine the best of human understanding with advanced technologies to help the world's leading organizations succeed and grow. (For more details, visit www.kantar.com)

The Global Data Science and Innovation (DSI) unit of KANTAR, nested within its Analytics Practice (https://www.kantar.com/expertise/analytics), is a fast-growing team of niche, elite data scientists responsible for all data-science-led innovation within KANTAR. The unit has a strong internal and external reputation with global stakeholders and clients for handling sophisticated, cutting-edge analytics using state-of-the-art techniques and deep mathematical/statistical rigor. The unit is responsible for most AI and Gen AI related initiatives within KANTAR (https://www.kantar.com/campaigns/artificial-intelligence), including bringing the latest developments in Machine Learning, Generative AI, Deep Learning, Computer Vision, NLP, Optimization, etc. to bear on complex business problems in marketing analytics and consulting, and building products that empower our colleagues around the world.

Job profile

We are looking for a Senior AI Engineer to be part of our Global Data Science and Innovation team. As part of a high-profile team, the position offers a unique opportunity to work first-hand on some of the most exciting and challenging AI-led projects within the organization, and to be part of a fast-paced, entrepreneurial environment.
As a senior member of the team, you will work with your leadership team to build a world-class portfolio of AI-led solutions within KANTAR, leveraging the latest developments in AI/ML. You will be part of several initiatives to productionize R&D PoCs and pilots that use a variety of AI/ML algorithms and technologies, particularly (but not restricted to) Generative AI. As an experienced AI engineer, you will hold yourself accountable for the entire process of developing, scaling, and commercializing these enterprise-grade products and solutions. You will work hands-on alongside a highly talented, cross-functional, geography-agnostic team of data scientists and AI engineers. As part of the global data science and innovation team, you will be a representative and ambassador for data science/AI/ML-led solutions with internal and external stakeholders.

Job Description

The candidate will be responsible for the following:
- Develop and maintain scalable and efficient AI pipelines and infrastructure for deployment.
- Deploy AI models and solutions into production environments, ensuring stability and performance.
- Monitor and maintain deployed AI systems to ensure they operate effectively and efficiently.
- Troubleshoot and resolve issues related to AI deployment, including performance bottlenecks and system failures.
- Optimize deployment processes to reduce latency and improve the scalability of AI solutions.
- Implement robust version control and model management practices to track AI model changes and updates.
- Ensure the security and compliance of deployed AI systems with industry standards and regulations.
- Provide technical support and guidance for deployment-related queries and issues.
Qualification, Experience, And Skills
- Advanced degree from top-tier technical institutes in a relevant discipline
- 4 to 10 years' experience, with at least the past few years working in Generative AI
- Prior firsthand experience building and deploying applications on cloud platforms like Azure/AWS/Google Cloud using serverless architecture
- Proficiency in tools such as Azure Machine Learning service, Amazon SageMaker, Google Cloud AI
- Prior experience with containerization tools (e.g., Docker, Kubernetes), databases (e.g., MySQL, MongoDB), deployment tools (e.g., Azure DevOps), and big data tools (e.g., Spark)
- Ability to develop and integrate APIs; experience with RESTful services
- Experience with continuous integration/continuous deployment (CI/CD) pipelines
- Knowledge of Agile working methodologies for product development
- Knowledge of (and potentially working experience with) LLMs and foundation models from OpenAI, Google, Anthropic and others
- Hands-on coding experience in Python

Desired Skills That Would Be a Distinct Advantage
- Past experience in developing/maintaining live deployments.
- Comfortable working in global set-ups with diverse cross-geography teams and cultures.
- Energetic, self-driven, curious, and entrepreneurial.
- Excellent (English) communication skills to address both technical audiences and business stakeholders.
- Meticulous and deep attention to detail; able to straddle the 'big picture' and the details with ease.

Location
Chennai, Teynampet, Anna Salai, India

Kantar Rewards Statement
At Kantar we have an integrated way of rewarding our people based around a simple, clear and consistent set of principles. Our approach helps to ensure we are market competitive and also supports a pay-for-performance culture, where your reward and career progression opportunities are linked to what you deliver. We go beyond the obvious, using intelligence, passion and creativity to inspire new thinking and shape the world we live in.
Apply for a career that's out of the ordinary and join us. We want to create equality of opportunity in a fair and supportive working environment where people feel included, accepted and are allowed to flourish in a space where their mental health and well-being is taken into consideration. We want to create a more diverse community to expand our talent pool, be locally representative, drive diversity of thinking and deliver better commercial outcomes.

Kantar is the world's leading data, insights and consulting company. We understand more about how people think, feel, shop, share, vote and view than anyone else. Combining our expertise in human understanding with advanced technologies, Kantar's 30,000 people help the world's leading organisations succeed and grow.
Posted 2 weeks ago
8.0 years
0 Lacs
Gurugram, Haryana, India
On-site
MACHINE-LEARNING ENGINEER

ABOUT US
Datacultr is a global Digital Operating System for Risk Management and Debt Recovery; we drive collection efficiencies and reduce delinquencies and non-performing loans (NPLs). Datacultr is a digital-only provider of consumer engagement, recovery and collection solutions, helping consumer lending, retail, telecom and fintech organizations to expand and grow their business in the under-penetrated new-to-credit and thin-file segments. We are helping millions of new-to-credit consumers across emerging markets access formal credit and begin their journey towards financial health. We have clients across India, South Asia, South East Asia, Africa and LATAM. Datacultr is headquartered in Dubai, with offices in Abu Dhabi, Singapore, Ho Chi Minh City, Nairobi, and Mexico City; our Development Center is located in Gurugram, India.

ORGANIZATION'S GROWTH PLAN
Datacultr's vision is to enable convenient financing opportunities for consumers, entrepreneurs and small merchants, helping them combat the socio-economic problems this segment faces due to restricted access to financing. We are on a mission to enable 35 million unbanked and under-served people to access financial services by the end of 2026.

Position Overview
We're looking for an experienced Machine Learning Engineer to design, deploy, and scale production-grade ML systems. You'll work on high-impact projects involving deep learning, NLP, and real-time data processing, owning everything from model development to deployment and monitoring while collaborating with cross-functional teams to deliver impactful, production-ready solutions.

Core Responsibilities

Representation & Embedding Layer
- Evaluate, fine-tune, and deploy multilingual embedding models (e.g., OpenAI text-embedding-3, Sentence-T5, Cohere, or in-house MiniLM) on AWS GPU or serverless endpoints.
- Implement device-level aggregation to produce stable vectors for downstream clustering.
Cohort Discovery Services
- Build scalable clustering workflows in Spark/Flink or Python on Airflow.
- Serve cluster IDs & metadata via a feature store / real-time API for consumption.

MLOps & Observability
- Own CI/CD for model training & deployment.
- Instrument latency, drift, bias, and cost dashboards; automate rollback policies.

Experimentation & Optimisation
- Run A/B and multivariate tests comparing embedding cohorts against legacy segmentation; analyse lift in repayment or engagement.
- Iterate on quantisation, distillation, and batching to hit strict cost-latency SLAs.

Collaboration & Knowledge-sharing
- Work hand-in-hand with Product & Data Strategy to translate cohort insights into actionable product features.

Key Requirements
- 5-8 years of hands-on ML engineering / NLP experience; at least 2 years deploying transformer-based models in production.
- Demonstrated ownership of pipelines processing ≥100 million events per month.
- Deep proficiency in Python, PyTorch/TensorFlow, the Hugging Face ecosystem, and SQL on cloud warehouses.
- Familiarity with vector databases and RAG architectures.
- Working knowledge of credit-risk or high-volume messaging platforms is a plus.
- Degree in CS, EE, Statistics, or a related field.

Tech Stack You'll Drive
- Model & Serving: PyTorch, Hugging Face, Triton, BentoML
- Data & Orchestration: Airflow, Spark/Flink, Kafka
- Vector & Storage: Qdrant/Weaviate, S3/GCS, Parquet/Iceberg
- Cloud & Infra: AWS (EKS, SageMaker)
- Monitoring: Prometheus, Loki, Grafana

What We Offer
- Opportunity to shape the future of unsecured lending in emerging markets
- Competitive compensation package
- Professional development and growth opportunities
- Collaborative, innovation-focused work environment
- Comprehensive health and wellness benefits

Location & Work Model
- Immediate joining possible
- Work from office only
- Based in Gurugram, Sector 65

Kindly share your updated profile with us at careers@datacultr.com so we can guide you further with this opportunity.
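The cohort-discovery step described above reduces to clustering aggregated embedding vectors. A stdlib-only sketch with toy 2-D points standing in for real multilingual embeddings (the k-means implementation and cohort data are purely illustrative; production workflows would run Spark MLlib or scikit-learn at scale):

```python
import math
import random

def kmeans(points, k, iters=20, seed=7):
    """Plain k-means: assign each vector to its nearest centroid, then re-centre."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[i].append(p)
        centroids = [
            tuple(sum(dim) / len(c) for dim in zip(*c)) if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids, clusters

# Two well-separated toy cohorts stand in for device-level embedding vectors.
rng = random.Random(0)
cohort_a = [(rng.gauss(0, 0.1), rng.gauss(0, 0.1)) for _ in range(20)]
cohort_b = [(rng.gauss(5, 0.1), rng.gauss(5, 0.1)) for _ in range(20)]
centroids, clusters = kmeans(cohort_a + cohort_b, k=2)
assert sorted(len(c) for c in clusters) == [20, 20]  # both cohorts recovered
```

The resulting cluster IDs are what a feature store or real-time API would then serve to downstream consumers.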
Posted 2 weeks ago
0 years
0 Lacs
India
Remote
Job Title: AI Engineer
Job Type: Full-time, Contractor
Location: Remote

About Us:
Our mission at micro1 is to match the most talented people in the world with their dream jobs. If you are looking to be at the forefront of AI innovation and work with some of the fastest-growing companies in Silicon Valley, we invite you to apply for a role. By joining the micro1 community, your resume will become visible to top industry leaders, unlocking access to the best career opportunities on the market.

Job Summary
Join our customer's team as an AI Engineer and play a pivotal role in shaping next-generation AI solutions. You will leverage cutting-edge technologies such as GenAI, LLMs, RAG, and LangChain to develop scalable, innovative models and systems. This is a unique opportunity for someone who is passionate about rapidly advancing their AI expertise and thrives in a collaborative, remote-first environment.

Key Responsibilities
- Design and develop advanced AI models and algorithms using GenAI, LLMs, RAG, LangChain, LangGraph, and AI Agent frameworks.
- Implement, deploy, and optimize AI solutions on Amazon SageMaker.
- Collaborate cross-functionally to integrate AI models into existing platforms and workflows.
- Continuously evaluate the latest AI research and tools to ensure leading-edge technology adoption.
- Document processes, experiments, and model performance with clear and concise written communication.
- Troubleshoot, refine, and scale deployed AI solutions for efficiency and reliability.
- Engage proactively with the customer's team to understand business needs and deliver value-driven AI innovations.

Required Skills and Qualifications
- Proven hands-on experience with GenAI, Large Language Models (LLMs), and Retrieval-Augmented Generation (RAG) techniques.
- Strong proficiency in frameworks such as LangChain and LangGraph, and in building and troubleshooting AI Agents.
- Demonstrated expertise in deploying and managing AI/ML solutions on AWS SageMaker.
- Exceptional written and verbal communication skills, with the ability to explain complex concepts to diverse audiences.
- Ability and eagerness to rapidly learn, adapt, and apply new AI tools and techniques as the field evolves.
- Background in software engineering, computer science, or a related technical discipline.
- Strong problem-solving skills accompanied by a collaborative and proactive mindset.

Preferred Qualifications
- Experience working with remote or distributed teams across multiple time zones.
- Familiarity with prompt engineering and orchestration of complex AI agent pipelines.
- A portfolio of successfully deployed GenAI solutions in production environments.
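The RAG requirement above hinges on one core operation: retrieving the documents most similar to a query before handing them to the LLM. A stdlib-only sketch using bag-of-words vectors as stand-in embeddings (illustrative only; a real stack would use a proper embedding model and a vector database via LangChain):

```python
import math
from collections import Counter

def embed(text):
    """Stand-in for a real embedding model: a bag-of-words vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    """Rank documents by similarity; the top-k would be stuffed into the LLM prompt."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "invoices are due within thirty days",
    "the support portal resets passwords automatically",
    "quarterly reports are published on the first monday",
]
top = retrieve("how does the portal reset passwords", docs)
assert top == ["the support portal resets passwords automatically"]
```

Swapping `embed` for a learned embedding and `retrieve` for a vector-store query gives the retrieval half of a production RAG pipeline; the generation half prepends `top` to the prompt.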
Posted 2 weeks ago
0.0 - 8.0 years
0 Lacs
Ahmedabad, Gujarat
On-site
Senior Full Stack Developer (Python, JavaScript, AWS, Cloud Services, Azure)
Ahmedabad, India; Hyderabad, India | Information Technology | 315432

Job Description

About The Role:
Grade Level (for internal use): 10

The Team: S&P Global is a global market leader in providing information, analytics and solutions for industries and markets that drive economies worldwide. The Market Intelligence (MI) division is the largest division within the company. This is an opportunity to join MI Data and Research's Data Science Team, which is dedicated to developing cutting-edge Data Science and Generative AI solutions. We are a dynamic group that thrives on innovation and collaboration, working together to push the boundaries of technology and deliver impactful solutions. Our team values inclusivity, continuous learning, and the sharing of knowledge to enhance our collective expertise.

Responsibilities and Impact:
- Develop and productionize cloud-based services and full-stack applications utilizing NLP solutions, including GenAI models.
- Implement and manage CI/CD pipelines to ensure efficient and reliable software delivery.
- Automate cloud infrastructure using Terraform.
- Write unit tests, integration tests and performance tests.
- Work in a team environment using agile practices.
- Support administration of the Data Science experimentation environment, including AWS SageMaker and Nvidia GPU servers.
- Monitor and optimize application performance and infrastructure costs.
- Collaborate with data scientists and other developers to integrate and deploy data science models into production environments.
- Educate others to improve coding standards, code quality, test coverage and documentation.
- Work closely with cross-functional teams to ensure seamless integration and operation of services.

What We're Looking For:

Basic Required Qualifications:
- 5-8 years of experience in software engineering
- Proficiency in Python and JavaScript for full-stack development.
- Experience in writing and maintaining high-quality code, utilizing techniques like unit testing and code reviews
- Strong understanding of object-oriented design and programming concepts
- Strong experience with AWS cloud services, including EKS, Lambda, and S3
- Knowledge of Docker containers and orchestration tools, including Kubernetes
- Experience with monitoring, logging, and tracing tools (e.g., Datadog, Kibana, Grafana)
- Knowledge of message queues and event-driven architectures (e.g., AWS SQS, Kafka)
- Experience with CI/CD pipelines in Azure DevOps and GitHub Actions

Additional Preferred Qualifications:
- Experience writing front-end web applications using JavaScript and React
- Familiarity with infrastructure as code (IaC) using Terraform
- Experience with Azure or GCP cloud services
- Proficiency in C# or Java
- Experience with SQL and NoSQL databases
- Knowledge of Machine Learning concepts
- Experience with Large Language Models

About S&P Global Market Intelligence
At S&P Global Market Intelligence, a division of S&P Global, we understand the importance of accurate, deep and insightful information. Our team of experts delivers unrivaled insights and leading data and technology solutions, partnering with customers to expand their perspective, operate with confidence, and make decisions with conviction. For more information, visit www.spglobal.com/marketintelligence.

What's In It For You?

Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.
Our People: We're more than 35,000 strong worldwide, so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability, to analyzing energy transition across the supply chain, to building workflow solutions that make it easy to tap into insight and apply it, we are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people; that's why we provide everything you and your career need to thrive at S&P Global. Our benefits include:
- Health & Wellness: Health care coverage designed for the mind and body.
- Flexible Downtime: Generous time off helps keep you energized for your time on.
- Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
- Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
- Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
- Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.

For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global:
At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Equal Opportunity Employer
S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.1 - Middle Professional Tier I (EEO Job Group), SWP Priority – Ratings - (Strategic Workforce Planning)

Job ID: 315432
Posted On: 2025-06-02
Location: Ahmedabad, Gujarat, India
Posted 2 weeks ago
10.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Bangalore, Karnataka, India | Job ID 766863

About this opportunity
As a Senior Machine Learning Engineer (SMLE), you will lead efforts for AI model deployment at scale, involving edge interfacing, ML pipelines, and the design of supervision and alerting systems for ML models. You will be a specialist software engineer with experience building large-scale systems who enjoys optimizing systems and evolving them.

What you will do:
- Lead analysis of ML-driven business needs and opportunities for Ericsson and strategic customers.
- Define model validation strategy and establish success criteria in data science terms.
- Architect and design data flow and machine learning model implementation for production deployment.
- Drive rapid development of minimum viable solutions, leveraging existing and new data sources.
- Develop solutions using Generative AI and RAG approaches.
- Design near real-time streaming and batch applications, ensuring scalability and high availability.
- Conduct performance analysis and tuning, and apply best practices in architecture and design.
- Document solutions and support reviews; contribute to product roadmap and backlog governance.
- Manage system packaging, software versioning, and change management.
- Perform design and code reviews, focusing on security and functional requirements.
- Collaborate with product teams to integrate ML models into Ericsson offerings.
- Advocate for new technologies within ML communities and mentor junior team members.
- Build ML competency within Ericsson and contribute to cross-functional initiatives.

You will bring:
- Proficiency in Python with strong programming skills in C++/Scala/Java.
- Demonstrated expertise in implementing diverse machine learning techniques.
- Skill in using ML frameworks such as PyTorch, TensorFlow, and Spark ML.
- Experience designing cloud solutions on platforms like AWS, utilizing services such as SageMaker, EKS, Bedrock, and Generative AI models.
- Expertise in containerization and Kubernetes in cloud environments.
- Familiarity with Generative AI models, RAG pipelines, and vector embeddings.
- Competence in big data storage and retrieval strategies, including indexing and partitioning.
- Experience with big data technologies like Spark, Kafka, MongoDB, and Cassandra.
- Skill in API design and development for AI/ML models.
- Proven experience writing production-grade software.
- Competence in codebase repository management (e.g., Git) and CI/CD pipelines.
- Extensive experience in model development and lifecycle management in one or more industry/application domains.
- Understanding and application of security: authentication and authorization methods, SSL/TLS, network security (firewalls, NSG rules, virtual networks, subnets, private endpoints, etc.), and data privacy handling and protection.
- Degree in Computer Science, Data Science, AI, Machine Learning, Electrical Engineering, or related fields from a reputable institution (Bachelor's, Master's, or Ph.D.).
- 10+ years of overall industry experience with 5+ years in the AI/ML domain.
Posted 2 weeks ago
3.0 years
0 Lacs
India
Remote
Role Overview
We're looking for a talented Machine Learning Engineer with 3+ years of experience to join our growing AI team. This role will play a central part in developing our real-time feedback engine, integrating and fine-tuning LLMs, and spearheading the training and deployment of custom and small language models (SLMs).

Key Responsibilities
- Build and deploy scalable real-time inference systems using FastAPI and AWS.
- Fine-tune and integrate large language models (LLMs) like Claude 3.5 Sonnet via Amazon Bedrock.
- Lead or contribute to the training, fine-tuning, and evaluation of proprietary Small Language Models (SLMs).
- Build ML pipelines for preprocessing multimodal data (audio, transcript, slides).
- Collaborate with backend, design, and product teams to bring intelligent features into production.
- Optimize models for speed, efficiency, and edge/cloud deployment.
- Contribute to MLOps infrastructure for versioning, deployment, and monitoring of models.

Required Qualifications
- 3+ years of experience in applied ML, including model deployment and training.
- Strong proficiency in Python, PyTorch, Transformers, and model training frameworks.
- Experience deploying APIs using FastAPI and integrating models in production.
- Experience fine-tuning LLMs (e.g., OpenAI, Claude, Mistral) and SLMs for specific downstream tasks.
- Familiarity with AWS services (S3, Bedrock, Lambda, SageMaker).
- Strong grasp of data pipelines, performance metrics, and model evaluation.
- Comfort working with multimodal datasets (text, audio, visual).

Bonus Points
- Experience with Whisper, TTS systems (e.g., Polly), or audio signal processing.
- Background in building or fine-tuning SLMs for performance-constrained environments.
- Familiarity with MLOps tooling (MLflow, Weights & Biases, DVC).
- Experience with Redis, WebSockets, or streaming data systems.

Perks and Benefits
- Attractive remuneration (competitive with market, based on experience and potential).
- Fully remote work – we're a truly distributed team.
- Flexible working hours – we value output over clocking in.
- Health & wellness perks, learning stipends, and regular team retreats.
- Opportunity to work closely with passionate founders and contribute from Day 1.
- Be a core part of a fast-growing, high-impact startup with a mission to transform education.

We look forward to receiving your application!
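The monitoring side of the real-time inference responsibility above usually means tracking a latency percentile against an SLA. A stdlib-only sketch of a rolling p95 monitor (the class, window size, and 200 ms SLA are illustrative, not this team's actual setup):

```python
from collections import deque

class LatencyMonitor:
    """Rolling-window p95 tracker; `breached()` fires when the SLA is violated."""
    def __init__(self, sla_ms, window=100):
        self.sla_ms = sla_ms
        self.samples = deque(maxlen=window)  # only the most recent requests count

    def record(self, latency_ms):
        self.samples.append(latency_ms)

    def p95(self):
        xs = sorted(self.samples)
        return xs[int(0.95 * (len(xs) - 1))]

    def breached(self):
        # require a minimum sample count so one slow request can't trip the alarm
        return len(self.samples) >= 20 and self.p95() > self.sla_ms

mon = LatencyMonitor(sla_ms=200)
for _ in range(50):
    mon.record(120)          # healthy traffic
assert not mon.breached()
for _ in range(50):
    mon.record(450)          # regression after a bad deploy
assert mon.breached()
```

In production the same signal would come from Prometheus-style histograms, with a breach triggering an alert or an automated rollback.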
Posted 2 weeks ago
4.0 years
0 Lacs
Andhra Pradesh, India
On-site
A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.

As part of our Analytics and Insights Consumption team, you'll analyze data to drive useful insights for clients to address core business issues or to drive strategic outcomes. You'll use visualization, statistical and analytics models, AI/ML techniques, ModelOps and other techniques to develop these insights.

Years of Experience: Candidates with 4+ years of hands-on experience

Must Have
- Internal & external stakeholder management
- Familiarity with the CCaaS domain, CCaaS application development, contact center solution design & presales consulting
- In-depth knowledge of CCaaS platforms like MS DCCP, Amazon Connect, NICE CXone, Genesys Cloud, Cisco Webex CC, Cisco HCS, UCCE/PCCE, etc., including their architecture, functionalities, application development and integration capabilities
- Governance & communication skills
- Hands-on configuration of Gen AI / LLMs built on top of CCaaS platforms (MS DCCP, Amazon Connect, Genesys Cloud/NICE CXone), including developing and implementing generative AI models to enhance customer interactions through chatbots, virtual agents, and automated response systems
- Speech science and speech/conversational fine-tuning (grammar & pattern analysis)
- Collaborate with stakeholders to identify business needs and define AI-driven solutions that improve customer experiences
- Analyze existing customer service processes and workflows to identify areas for AI integration and optimization
- Create and maintain documentation for AI solutions, including design specifications and user guides
- Monitor and evaluate the performance of AI models, making adjustments as necessary to improve accuracy and effectiveness
- Stay updated on the latest advancements in AI technologies and their applications in customer service and contact centers
- Conduct training sessions for team members and stakeholders on the use and benefits of AI technologies in the contact center
- Understanding of the fundamental ingredients of enterprise integration, including interface definitions and contracts; REST APIs or SOAP web services; SQL, MySQL, Oracle, PostgreSQL, DynamoDB, S3, RDS
- Provide effective real-time demonstrations of CCaaS & AI (bot) platforms
- High proficiency in defining top-notch customer-facing slides/presentations
- Gen AI / LLM platforms (must-have technologies): Copilot, Copilot Studio, Amazon Bedrock, Amazon Titan, SageMaker, Azure OpenAI, Azure AI Services, Google Vertex AI, Gemini AI
- Proficiency in data visualization tools like Tableau, Power BI, QuickSight and others

Nice To Have
- Experience in CPaaS platforms (Twilio, Infobip) for synergies between Communications Platform as a Service and Contact Center as a Service
- Understanding of cloud platforms (e.g., AWS, Azure, Google Cloud) and their services for scalable data storage, processing, and analytics
- Work on high-velocity presales solution consulting engagements (RFP, RFI, RFQ)
- Define industry-specific use cases (BFS&I, Telecom, Retail, Manlog, etc.)
- Work on high-volume presales consulting engagements, including solution design document definition and commercial construct (CCaaS)
- Defining the business case
Posted 2 weeks ago
10.0 - 13.0 years
5 - 9 Lacs
Hyderābād
On-site
DT-US Product Engineering - Engineering Manager

We are seeking an exceptional Engineering Manager who combines strong technical leadership with a proven track record of delivering customer-centric solutions. This role requires demonstrated experience in leading engineering teams, fostering engineering excellence, and driving outcomes through incremental and iterative delivery approaches.

Work you will do
The Engineering Manager will be responsible for leading engineering teams to deliver high-quality solutions while ensuring proper planning, code integrity, and alignment with customer goals. This role requires extensive experience in modern software engineering practices and methodologies, with a focus on customer outcomes and business impact.

Project Leadership and Management:
- Lead engineering teams to deliver solutions that solve complex problems with valuable, viable, feasible, and maintainable outcomes
- Establish and maintain coding standards, quality metrics, and technical debt management processes
- Design and implement evolutionary release plans including alpha, beta, and MVP stages

Strategic Development:
- Be the technical advocate for engineering teams throughout the end-to-end lifecycle of product development
- Drive engineering process improvements and innovation initiatives
- Develop and implement strategies for continuous technical debt management

Team Mentoring and Development:
- Lead and mentor engineering teams, fostering a culture of engineering excellence and continuous learning
- Actively contribute to team velocity through hands-on involvement in design, configuration, and coding
- Establish performance metrics and career development pathways for team members
- Drive knowledge-sharing initiatives and best practices across the organization
- Provide technical guidance and code reviews to ensure high-quality deliverables

Customer Engagement and Delivery:
- Lead customer engagement initiatives before, during, and after delivery
- Drive rapid, inexpensive experimentation to arrive at optimal solutions
- Implement incremental and iterative delivery approaches to navigate complexity
- Foster high levels of customer engagement throughout the development lifecycle

Technical Implementation:
- Ensure proper implementation of DevSecOps practices and CI/CD pipelines
- Oversee deployment techniques including Blue-Green and Canary deployments
- Drive the adoption of modern software engineering practices and methodologies
- Maintain oversight of architecture designs and non-functional requirements

Technical Expertise Requirements:

Must Have:
- Modern Software Engineering: Advanced knowledge of Agile methodologies, DevSecOps, and CI/CD practices
- Technical Leadership: Proven experience in leading engineering teams and maintaining code quality
- Customer-Centric Development: Experience in delivering solutions through experimentation and iteration
- Architecture & Design: Strong understanding of software architecture principles and patterns
- Quality Assurance: Experience with code review processes and quality metrics
- Cloud Platforms: Strong experience with at least one major cloud platform (AWS/Azure/GCP) and its ML services (SageMaker/Azure ML/Vertex AI)
- Version Control & Collaboration: Strong proficiency with Git and collaborative development practices
- Deployment & Operations: Experience with modern deployment techniques and operational excellence
- AI/ML Engineering: Experience with machine learning frameworks (TensorFlow, PyTorch), MLOps practices, and AI model deployment
- Data Processing: Knowledge of data processing tools and pipelines for AI/ML applications
- Domain-Specific Knowledge and Experience: Custom, Mobile, Data & Analytics, RPA, or Packages

Good to Have:
- Cloud Platforms: Experience with major cloud providers and their services
- Package Implementations: Experience with enterprise software package configurations
- Test Automation: Knowledge of automated testing frameworks and practices
- Container Technologies: Experience with Docker and Kubernetes
- Infrastructure as Code: Knowledge of infrastructure automation tools
- Advanced AI/ML: Experience with large language models, deep learning architectures, and AI model optimization
- AI Platforms: Familiarity with enterprise AI platforms like Databricks, SageMaker, or Azure ML

Education: Advanced degree in Computer Science, Software Engineering, or a related field, or equivalent experience.

Qualifications:
- 10-13 years of software engineering experience with at least 5 years in technical leadership roles
- Proven track record of leading and delivering large-scale software projects
- Strong experience in modern software development methodologies and practices
- Demonstrated ability to drive engineering excellence and team performance
- Experience in stakeholder management and cross-functional collaboration
- Expert-level proficiency in software development and technical leadership
- Strong track record of implementing engineering best practices and quality standards
- Excellent oral and written communication skills, including presentation abilities

The Team
Information Technology Services (ITS) helps power Deloitte's success. ITS drives Deloitte, which serves many of the world's largest, most respected organizations. We develop and deploy cutting-edge internal and go-to-market solutions that help Deloitte operate effectively and lead in the market. Our reputation is built on a tradition of delivering with excellence. The ~3,000 professionals in ITS deliver services including: security, risk & compliance; technology support; infrastructure; applications; relationship management; strategy; deployment; PMO; financials; and communications.

Product Engineering (PxE)
The Product Engineering (PxE) team is the internal software and applications development team responsible for delivering leading-edge technologies to Deloitte professionals. Its broad portfolio includes web and mobile productivity tools that empower our people to log expenses, enter timesheets, book travel and more, anywhere, anytime.
PxE enables our client service professionals through a comprehensive suite of applications across the business lines. In addition to application delivery, PxE offers full-scale design services, a robust mobile portfolio, cutting-edge analytics, and innovative custom development. Work Location: Hyderabad Recruiting tips From developing a stand out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Benefits At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you. Our people and culture Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work. Our purpose Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities. Professional development From entry-level employees to senior leaders, we believe there’s always room to learn. We offer opportunities to build new skills, take on leadership opportunities and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career. Requisition code: 303079
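The Blue-Green and Canary deployment techniques listed under Technical Implementation can be sketched in a few lines (a hypothetical illustration, not part of the role description): hashing the user id, rather than picking randomly, pins each user to the same build across requests while sending a fixed share of traffic to the canary.

```python
import hashlib

def route(user_id: str, canary_percent: int) -> str:
    """Deterministically route a user to 'canary' or 'stable'.

    Hashing the user id keeps each user on the same version
    across requests, which random assignment would not.
    """
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % 100          # uniform bucket in 0..99
    return "canary" if bucket < canary_percent else "stable"

# With 10% canary traffic, roughly 1 in 10 users sees the new build.
versions = [route(f"user-{i}", 10) for i in range(1000)]
share = versions.count("canary") / len(versions)
```

Rolling the canary forward is then just raising `canary_percent`; rolling back is setting it to zero.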
Posted 2 weeks ago
0 years
0 Lacs
Greater Noida
On-site
Job Title: Machine Learning cum AI Developer Location: Greater Noida, India Company: EnquiryGate Pvt. Ltd Job Type: Full-time Experience Level: Internship Department: AI/ML Development About EnquiryGate.com: EnquiryGate.com is a fast-growing IT solutions provider specializing in smart digital transformation. We build custom solutions in AI, ML, software development, mobile apps, and ERP/CRM systems to help businesses grow intelligently. We're looking for a passionate Machine Learning cum AI Developer to join our innovation-driven team. Job Summary: As a Machine Learning cum AI Developer at EnquiryGate.com you will design, develop, and deploy intelligent systems that solve real-world problems. You will collaborate with data scientists, backend developers, and product teams to integrate AI/ML models into web and mobile applications. Key Responsibilities: Design, develop, train, test, and deploy ML/AI models for various use cases. Work with structured and unstructured data to build predictive and prescriptive analytics solutions. Implement and optimize algorithms for image recognition, NLP, recommendation systems, etc. Integrate AI/ML models into production-ready APIs or applications. Collaborate with data engineers to prepare clean and usable datasets. Use libraries such as TensorFlow, PyTorch, scikit-learn, and OpenCV. Conduct research on the latest AI trends and apply them to EnquiryGate.com products. Improve model accuracy, reduce latency, and ensure scalability. Monitor deployed models and retrain when needed. Document processes, models, and performance metrics. Required Skills and Qualifications: Bachelor's or Master’s degree in Computer Science, Data Science, AI, or related fields. Solid understanding of ML algorithms, data preprocessing, and evaluation techniques. Proficiency in Python (NumPy, pandas, TensorFlow, Keras, scikit-learn). Experience with AI fields like Natural Language Processing, Computer Vision, or Deep Learning. 
Familiarity with cloud-based ML platforms (AWS SageMaker, Google AI, or Azure ML). Experience with version control tools (Git) and collaborative workflows. Knowledge of API integration, Flask/Django for model deployment. Strong analytical and problem-solving skills. Preferred Skills (Good to Have): Exposure to Reinforcement Learning, Generative AI (LLMs), or Chatbot development. Experience with data visualization tools (PowerBI, Tableau, or Plotly). Understanding of MLOps and model lifecycle management. To Apply: Send your resume and portfolio (if available) to Hr@EnquiryGate.com with the subject line "Application for ML cum AI Developer – [Your Name]" . Job Types: Full-time, Permanent, Internship Schedule: Day shift Fixed shift Morning shift Work Location: In person
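The "integrate AI/ML models into production-ready APIs" responsibility above usually reduces to wrapping a model behind a request handler. A framework-agnostic toy sketch (the weights and field names here are invented for illustration; a Flask or Django view would wrap the same logic):

```python
import json

# Hypothetical weights for a linear scoring model (illustrative only).
WEIGHTS = {"visits": 0.4, "pages": 0.2, "minutes": 0.1}
BIAS = -1.0

def predict(features: dict) -> float:
    """Linear score; a real deployment would load a trained model."""
    return BIAS + sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)

def handle(request_body: str) -> str:
    """Validate a JSON request and return a JSON response, the way
    a web-framework view would wrap the model call."""
    try:
        payload = json.loads(request_body)
    except json.JSONDecodeError:
        return json.dumps({"error": "invalid JSON"})
    missing = [k for k in WEIGHTS if k not in payload]
    if missing:
        return json.dumps({"error": f"missing fields: {missing}"})
    return json.dumps({"score": round(predict(payload), 3)})

resp = handle('{"visits": 5, "pages": 10, "minutes": 30}')
```

The validation-before-inference shape is the part that carries over regardless of framework.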
Posted 2 weeks ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Summary Responsible for designing and developing a cutting-edge AI and Generative AI infrastructure on AWS Cloud platforms and COLO, tailored for pharmaceutical business use-cases. The platform will facilitate pharma research scientists and other business users in early molecule development and other research activities by providing robust, scalable, and secure computing resources. About The Role Full Job Title: Assoc. Dir. DDIT IES Cloud Engineering ROLE PURPOSE Responsible for designing and developing a cutting-edge AI and Generative AI infrastructure on AWS Cloud platform and COLO, tailored for pharmaceutical business use-cases. The platform will facilitate Biomedical Research Scientists and other business users in early molecule development and other research activities by providing robust, scalable, and secure computing resources. Architect and Design: Lead the design and architecture of a GPU-based AI infrastructure platform, with a focus on supporting Generative AI workloads and advanced analytics for pharma business use-cases like BioNeMo, AlphaFold, ESM Fold, OpenFold, ProtGPT2 and the NVIDIA Clara suite. Platform Development: Work with Biomedical Research scientists to develop and implement technical solutions for ML/Ops (Run:AI) hosted on a Kubernetes (EKS) cluster. Data Management: Oversee the design and implementation of data storage, retrieval, and processing pipelines, ensuring the efficient handling of large datasets, including genomics and chemical compound data. Security and Compliance: In collaboration with cloud domain security architects, implement robust security measures for the multi-cloud environment and ensure compliance with relevant industry standards, particularly in handling business-sensitive data. Collaboration: Work closely with Biomedical Research & Data scientists and other business stakeholders to understand their needs and translate them into technical solutions. 
Performance Optimization: Optimize the performance and cost-efficiency of the platform, including monitoring and scaling resources as needed. Innovation: Stay updated with the latest trends and technologies in AI and cloud infrastructure, continuously exploring new ways to enhance the platform's capabilities. Additional Specifications Required For The Position Bachelor’s degree in Information Technology, Computer Science, or Engineering. AWS Solution Architect certification – Professional 8+ years of strong technical hands-on experience of delivering infrastructure and platform services across geographic and business boundaries. Experience of working on GPU-based AI Infrastructure. Experience in NVIDIA DGX Infra will be highly preferred. Deep understanding of Architecture and Design of Platform Engineering products with focus mainly on Data science, ML/Ops and Bio science or Pharma Gen AI foundational models. Experience in NVIDIA BioNeMo or Clara will be highly preferred. Extensive experience in building infra solutions on AWS, particularly with services like AWS Bedrock, Amazon Q, SageMaker, ECS/EKS. Knowledge of containerization and orchestration technologies, such as Docker and Kubernetes. Experience with DevOps practices and tools, including CI/CD pipelines, infrastructure as code (IaC), and monitoring solutions. Excellent skills in collaborating with business users and the Product team, operationalizing the delivered products, and working closely with Security on implementing compliance. Good knowledge of implementing a well-defined, industry-standard change management process for the platform and its products. Have a well-structured use-case onboarding process. Ensure documentation exists for platform products and the implementations done. 
Experience with DevOps Orchestration/Configuration/Continuous Integration Management technologies Good understanding of High Availability and Disaster Recovery concepts for infrastructure Ability to analyze and resolve complex infrastructure resource and application deployment issues. KEY PERFORMANCE INDICATORS / MEASURES OF SUCCESS Deliver on time Adherence to the Novartis IT quality standards Cost optimization Completeness and quality of deliverables Customer feedback (expectations met/exceeded) Application onboarding delivery success Actively contribute to the business with innovative solutions that show results in the form of cost optimization and/or growth of the business top-line revenue LANGUAGES Excellent written, presentation and verbal communication skills Languages: Fluent in English (written & spoken), additional languages a plus COMPETENCY PROFILE DevOps, CI/CD Technical Leadership Scrum Methodology Agile Software Development System integration and build Problem solving / Root Cause Analysis Cloud services monitoring & cost optimization Why Novartis: Helping people with disease and their families takes more than innovative science. It takes a community of smart, passionate people like you. Collaborating, supporting and inspiring each other. Combining to achieve breakthroughs that change patients’ lives. Ready to create a brighter future together? https://www.novartis.com/about/strategy/people-and-culture Join our Novartis Network: Not the right Novartis role for you? Sign up to our talent community to stay connected and learn about suitable career opportunities as soon as they come up: https://talentnetwork.novartis.com/network Benefits and Rewards: Read our handbook to learn about all the ways we’ll help you thrive personally and professionally: https://www.novartis.com/careers/benefits-rewards
Posted 2 weeks ago
12.0 years
0 Lacs
Gurugram, Haryana, India
On-site
We are seeking an experienced DevOps/AIOps Architect to design, architect, and implement an AI-driven operations solution that integrates various cloud-native services across AWS, Azure, and cloud-agnostic environments. The AIOps platform will be used for end-to-end machine learning lifecycle management, automated incident detection, and root cause analysis (RCA). The architect will lead efforts in developing a scalable solution utilizing data lakes, event streaming pipelines, ChatOps integration, and model deployment services. This platform will enable real-time intelligent operations in hybrid cloud and multi-cloud setups. Responsibilities Assist in the implementation and maintenance of cloud infrastructure and services Contribute to the development and deployment of automation tools for cloud operations Participate in monitoring and optimizing cloud resources using AIOps and MLOps techniques Collaborate with cross-functional teams to troubleshoot and resolve cloud infrastructure issues Support the design and implementation of scalable and reliable cloud architectures Conduct research and evaluation of new cloud technologies and tools Work on continuous improvement initiatives to enhance cloud operations efficiency and performance Document cloud infrastructure configurations, processes, and procedures Adhere to security best practices and compliance requirements in cloud operations Requirements Bachelor’s Degree in Computer Science, Engineering, or related field 12+ years of experience in DevOps, AIOps, or Cloud Architecture roles Hands-on experience with AWS services such as SageMaker, S3, Glue, Kinesis, ECS, EKS Strong experience with Azure services such as Azure Machine Learning, Blob Storage, Azure Event Hubs, Azure AKS Strong experience with Infrastructure as Code (IaC) using Terraform / CloudFormation Proficiency in container orchestration (e.g., Kubernetes) and experience with multi-cloud environments Experience with machine learning model training, 
deployment, and data management across cloud-native and cloud-agnostic environments Expertise in implementing ChatOps solutions using platforms like Microsoft Teams and Slack, and integrating them with AIOps automation Familiarity with data lake architectures, data pipelines, and inference pipelines using event-driven architectures Strong programming skills in Python for rule management, automation, and integration with cloud services Nice to have: Any certifications in the AI / ML / Gen AI space
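The automated incident detection this role describes often starts with something as simple as a rolling z-score over an operational metric. A minimal, stdlib-only sketch (the latency numbers are invented; production AIOps stacks would do this over streaming data):

```python
import statistics

def detect_anomalies(series, window=5, threshold=3.0):
    """Flag points more than `threshold` standard deviations away
    from the mean of the preceding `window` observations."""
    anomalies = []
    for i in range(window, len(series)):
        history = series[i - window:i]
        mean = statistics.mean(history)
        stdev = statistics.pstdev(history)
        if stdev == 0:
            continue  # flat history: no scale to judge deviation by
        z = (series[i] - mean) / stdev
        if abs(z) > threshold:
            anomalies.append(i)
    return anomalies

# Latency samples (ms): steady traffic, then a spike at index 8.
latency = [100, 102, 99, 101, 100, 98, 101, 100, 400, 102]
spikes = detect_anomalies(latency)
```

An event-driven pipeline would feed each flagged index to the ChatOps integration as an incident candidate.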
Posted 2 weeks ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Summary: We are looking for highly motivated and analytical Machine Learning Engineers with 1–3 years of experience in building scalable, production-ready AI/ML models. This role involves working on complex business problems using advanced ML/DL techniques across domains such as Natural Language Processing (NLP), Computer Vision, Time Series Forecasting, and Generative AI. You will be responsible for end-to-end model development, deployment, and performance tracking while collaborating with cross-functional teams including data engineering, DevOps, and product. Location: Noida / Gurugram / Indore / Bengaluru / Pune / Hyderabad Experience: 1–3 Years Education: BE / B.Tech / M.Tech / MCA / M.Com Key Responsibilities: Model Development & Experimentation Design and build machine learning models for NLP, computer vision, and time series prediction using supervised, unsupervised, and deep learning techniques. Conduct experiments to improve model performance via architectural modifications, hyperparameter tuning, and feature selection. Apply statistical analysis to validate and interpret model results. Evaluate models using appropriate metrics (e.g., accuracy, precision, recall, F1-score, AUC-ROC). Data Handling & Feature Engineering Process large structured and unstructured datasets using Python, Pandas, and DataFrame APIs. Perform feature extraction, transformation, and selection tailored to specific ML problems. Implement data augmentation and enrichment techniques to enhance training quality. Model Deployment & Productionization Deploy trained models to production environments using cloud platforms such as AWS (especially SageMaker). Containerize models using Docker and orchestrate deployments with Kubernetes. Implement monitoring, logging, and automated retraining pipelines for model health tracking. Collaboration & Innovation Collaborate with data engineers and architects to ensure smooth data flow and infrastructure alignment. 
Explore and adopt cutting-edge AI/ML methodologies and GenAI frameworks (e.g., LangChain, GPT-3). Contribute to documentation, versioning, and knowledge-sharing across teams. Drive innovation and continuous improvement in AI/ML delivery and engineering practices. Mandatory Technical Skills: Languages & Tools: Python (Pandas, NumPy, Scikit-learn, TensorFlow/PyTorch) Model Development: Deep Learning, NLP, Time Series, Computer Vision Cloud Platforms: AWS (especially SageMaker) Model Deployment: Docker, Kubernetes, REST APIs ML Ops: Model monitoring, performance logging, CI/CD Frameworks: LangChain (for GenAI), Transformers, Hugging Face Preferred / Good to Have: Experience with Foundation Model tuning and prompt engineering Hands-on with Generative AI (GPT-3/4, OpenAI APIs, LangChain integrations) Certifications: AWS Certified Machine Learning – Specialty Experience with version control (Git), and experiment tracking tools (MLflow, Weights & Biases) Soft Skills: Excellent communication and presentation abilities Strong analytical and problem-solving mindset Ability to work in collaborative, fast-paced environments Curiosity to learn emerging technologies and apply them to real-world problems
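The evaluation metrics this posting lists (accuracy, precision, recall, F1) come straight from the confusion-matrix counts, which is worth seeing once without a library. A stdlib-only sketch with made-up binary labels:

```python
def classification_metrics(y_true, y_pred):
    """Compute accuracy, precision, recall and F1 for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# Toy predictions: 3 true positives, 1 false positive, 1 false negative.
m = classification_metrics([1, 0, 1, 1, 0, 0, 1, 0],
                           [1, 0, 0, 1, 0, 1, 1, 0])
```

`sklearn.metrics` returns the same numbers; the point is that each metric is a different ratio over the same four counts.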
Posted 2 weeks ago
3.0 - 4.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
We’re seeking a skilled Data Scientist with expertise in SQL, Python, AWS SageMaker, and Commercial Analytics to join our team. You’ll design predictive models, uncover actionable insights, and deploy scalable solutions to recommend optimal customer interactions. This role is ideal for a problem-solver passionate about turning data into strategic value. Key Responsibilities Model Development: Build, validate, and deploy machine learning models (e.g., recommendation engines, propensity models) using Python and AWS SageMaker to drive next-best-action decisions. Data Pipeline Design: Develop efficient SQL queries and ETL pipelines to process large-scale commercial datasets (e.g., customer behavior, transactional data). Commercial Analytics: Analyze customer segmentation, lifetime value (CLV), and campaign performance to identify high-impact NBA opportunities. Cross-functional Collaboration: Partner with marketing, sales, and product teams to align models with business objectives and operational workflows. Cloud Integration: Optimize model deployment on AWS, ensuring scalability, monitoring, and performance tuning. Insight Communication: Translate technical outcomes into actionable recommendations for non-technical stakeholders through visualizations and presentations. Continuous Improvement: Stay updated on advancements in AI/ML, cloud technologies, and commercial analytics trends. Qualifications Education: Bachelor’s/Master’s in Data Science, Computer Science, Statistics, or a related field. Experience: 3-4 years in data science, with a focus on commercial/customer analytics (e.g., pharma, retail, healthcare, e-commerce, or B2B sectors). Technical Skills: Proficiency in SQL (complex queries, optimization) and Python (Pandas, NumPy, Scikit-learn). Hands-on experience with AWS SageMaker (model training, deployment) and cloud services (S3, Lambda, EC2). Familiarity with ML frameworks (XGBoost, TensorFlow/PyTorch) and A/B testing methodologies. 
Analytical Mindset: Strong problem-solving skills with the ability to derive insights from ambiguous data. Communication: Ability to articulate technical concepts to business stakeholders. Preferred Qualifications AWS Certified Machine Learning Specialty or similar certifications. Experience with big data tools (Spark, Redshift) or ML Ops practices. Knowledge of NLP, reinforcement learning, or real-time recommendation systems. Exposure to BI tools (Tableau, Power BI) for dashboarding.
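The A/B testing methodology mentioned in the qualifications usually comes down to comparing two conversion rates. A stdlib-only two-proportion z-test sketch (the campaign numbers are hypothetical):

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic and two-sided p-value for the difference between
    two conversion rates, using the pooled standard error."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal CDF, written with math.erf.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical campaign: 200/2000 control vs 260/2000 variant conversions.
z, p = two_proportion_z(200, 2000, 260, 2000)
```

A 10% vs 13% lift on 2,000 users per arm comes out clearly significant here; libraries like `statsmodels` wrap the same calculation.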
Posted 2 weeks ago
5.0 years
0 Lacs
Pune, Maharashtra, India
Remote
Job Title: AI/ML Developer (5 Years Experience) Location: Remote Job Type: Full-time Experience: 5 Years Job Summary: We are looking for an experienced AI/ML Developer with at least 5 years of hands-on experience in designing, developing, and deploying machine learning models and AI-driven solutions. The ideal candidate should have strong knowledge of machine learning algorithms, data preprocessing, model evaluation, and experience with production-level ML pipelines. Key Responsibilities Model Development: Design, develop, train, and optimize machine learning and deep learning models for classification, regression, clustering, recommendation, NLP, or computer vision tasks. Data Engineering: Work with data scientists and engineers to preprocess, clean, and transform structured and unstructured datasets. ML Pipelines: Build and maintain scalable ML pipelines using tools such as MLflow, Kubeflow, Airflow, or SageMaker. Deployment: Deploy ML models into production using REST APIs, containers (Docker), or cloud services (AWS/GCP/Azure). Monitoring and Maintenance: Monitor model performance and implement retraining pipelines or drift detection techniques. Collaboration: Work cross-functionally with data scientists, software engineers, and product managers to integrate AI capabilities into applications. Research and Innovation: Stay current with the latest advancements in AI/ML and recommend new techniques or tools where applicable. Required Skills & Qualifications Bachelor’s or Master’s degree in Computer Science, Artificial Intelligence, Data Science, or a related field. Minimum 5 years of experience in AI/ML development. Proficiency in Python and ML libraries such as Scikit-learn, TensorFlow, PyTorch, XGBoost, or LightGBM. Strong understanding of statistics, data structures, and ML/DL algorithms. Experience with cloud platforms (AWS/GCP/Azure) and deploying ML models in production. Experience with CI/CD tools and containerization (Docker, Kubernetes). 
Familiarity with SQL and NoSQL databases. Excellent problem-solving and communication skills. Preferred Qualifications Experience with NLP frameworks (e.g., Hugging Face Transformers, spaCy, NLTK). Knowledge of MLOps best practices and tools. Experience with version control systems like Git. Familiarity with big data technologies (Spark, Hadoop). Contributions to open-source AI/ML projects or publications in relevant fields.
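One concrete form of the drift detection mentioned under Monitoring and Maintenance is the Population Stability Index, which compares the binned distribution of a feature at serving time against its training-time distribution. A stdlib sketch (PSI is one common choice, not something this posting prescribes; the bin shares are invented):

```python
import math

def psi(expected_fracs, actual_fracs, eps=1e-6):
    """Population Stability Index between two binned distributions.
    Rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 major drift."""
    total = 0.0
    for e, a in zip(expected_fracs, actual_fracs):
        e, a = max(e, eps), max(a, eps)   # guard against empty bins
        total += (a - e) * math.log(a / e)
    return total

baseline = [0.25, 0.25, 0.25, 0.25]       # training-time bin shares
identical = psi(baseline, baseline)        # no drift
shifted = psi(baseline, [0.10, 0.20, 0.30, 0.40])
```

A retraining pipeline would compute this per feature on a schedule and trigger retraining when the index crosses the chosen threshold.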
Posted 2 weeks ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Position: Data Scientist Location: Chennai, India (Work from Office) Experience: 2–5 years About the Opportunity: Omnihire is seeking a Data Scientist to join a leading AI-driven data‐solutions company. As part of the Data Consulting team, you’ll collaborate with scientists, IT, and engineering to solve high-impact problems and deliver actionable insights. Key Responsibilities: Analyze large, structured and unstructured datasets (SQL, Hadoop/Spark) to extract business-critical insights Build and validate statistical models (regression, classification, time-series, segmentation) and machine-learning algorithms (Random Forest, Boosting, SVM, KNN) Develop deep-learning solutions (CNN, RNN, LSTM, transfer learning) and apply NLP techniques (tokenization, stemming/lemmatization, NER, LSA) Write production-quality code in Python and/or R using libraries (scikit-learn, TensorFlow/PyTorch, pandas, NumPy, NLTK/spaCy) Collaborate with cross-functional teams to scope requirements, propose analytics solutions, and present findings via clear visualizations (Power BI, Matplotlib) Own end-to-end ML pipelines: data ingestion → preprocessing → feature engineering → model training → evaluation → deployment Contribute to solution proposals and maintain documentation for data schemas, model architectures, and experiment tracking (Git, MLflow) Required Qualifications: Bachelor’s or Master’s in Computer Science, Statistics, Mathematics, Data Science, or a related field 2–5 years of hands-on experience as a Data Scientist (or similar) in a data-driven environment Proficiency in Python and/or R for statistical modeling and ML Strong SQL skills and familiarity with Big Data platforms (e.g., Hadoop, Apache Spark) Demonstrated experience building, validating, and deploying ML/DL models in production or staging Excellent problem-solving skills, attention to detail, and ability to communicate technical concepts clearly Self-starter who thrives in a collaborative, Agile environment Nice-to-Have: 
Active GitHub/Kaggle portfolio showcasing personal projects or contributions Exposure to cloud-based ML services (Azure ML Studio, AWS SageMaker) and containerization (Docker) Familiarity with advanced NLP frameworks (e.g., Hugging Face Transformers) or production monitoring tools (Azure Monitor, Prometheus) Why Join? Work on high-impact AI/ML projects that drive real business value Rapid skill development with exposure to cutting-edge technologies Collaborative, Agile culture with mentorship from senior data scientists Competitive compensation package and comprehensive benefits
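The "data ingestion → preprocessing → feature engineering → model training" pipeline ownership described above is, structurally, just function composition over named steps. A toy stdlib sketch of that pattern (scikit-learn's `Pipeline` is the production version; the steps and data here are invented):

```python
class Pipeline:
    """Chain named steps, each a function taking and returning data:
    a toy version of the ingestion → preprocessing → training flow."""
    def __init__(self, steps):
        self.steps = steps  # list of (name, callable) pairs

    def run(self, data):
        for name, step in self.steps:
            data = step(data)
        return data

def ingest(raw):
    return [row for row in raw if row is not None]   # drop missing rows

def scale(rows):
    lo, hi = min(rows), max(rows)
    return [(x - lo) / (hi - lo) for x in rows]      # min-max scaling

pipe = Pipeline([("ingest", ingest), ("scale", scale)])
features = pipe.run([10, None, 20, 30, None, 40])
```

Naming the steps is what makes logging, experiment tracking and partial re-runs possible later.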
Posted 2 weeks ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
This job is with Amazon, an inclusive employer and a member of myGwork – the largest global platform for the LGBTQ+ business community. Please do not contact the recruiter directly. Description Amazon Prime is a program that provides millions of members with unlimited one-day delivery, unlimited streaming of video and music, secure online photo storage, access to Kindle e-books as well as Prime special deals on Prime Day. In India, Prime members get unlimited free One-Day and Two-day delivery, video streaming and early and exclusive access to deals. After the launch in 2016, the Amazon Prime team is now looking for a detail-oriented business intelligence engineer to lead the business intelligence for Prime and drive member insights. At Amazon, we're always working to be the most customer-centric company on earth. To get there, we need exceptionally talented, bright, and driven people. We are looking for a dynamic, organized, and customer-focused Analytics expert to join our Amazon Prime Analytics team. The team supports the Amazon India Prime organization by producing and delivering metrics, data, models and strategic analyses. This is an individual contributor role that requires excellent team leadership skills, business acumen, and the breadth to work across multiple Amazon Prime Business Teams, Data Engineering, Machine Learning and Software Development teams. A successful candidate will be a self-starter comfortable with ambiguity, with strong attention to detail and a proven ability to work in a fast-paced and ever-changing environment. Key job responsibilities The successful candidate will work with multiple global site leaders, business analysts, software developers, database engineers, and product management, in addition to stakeholders in business, finance, marketing and service teams, to create a coherent customer view. They will: Define and lead the data strategy of various analytical products owned within the Prime Analytics team. 
Develop and improve the current data architecture using AWS Redshift, AWS S3, AWS Aurora (Postgres) and Hadoop/EMR. Improve upon the data ingestion models, ETL jobs, and alarming to maintain data integrity and data availability. Create an end-to-end ML framework for Data Scientists using AWS Bedrock, SageMaker and EMR clusters. Stay up-to-date with advances in data persistence and big data technologies and run pilots to design the data architecture to scale with the increased data sets of advertiser experience. Design and manage data models that serve multiple Weekly Business Reports (WBRs) and other business-critical reporting. Basic Qualifications 3+ years of data engineering experience Experience with data modeling, warehousing and building ETL pipelines Experience as a data engineer or related specialty (e.g., software engineer, business intelligence engineer, data scientist) with a track record of manipulating, processing, and extracting value from large datasets Experience in at least one modern scripting or programming language, such as Python, Java, Scala, or NodeJS Experience with big data technologies such as: Hadoop, Hive, Spark, EMR Preferred Qualifications Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, FireHose, Lambda, and IAM roles and permissions Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases) Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
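The ETL-to-WBR flow this role describes (ingest raw rows, roll them up into weekly business-report aggregates) can be miniaturized with the stdlib `sqlite3` module standing in for Redshift. The table and numbers are invented for illustration:

```python
import sqlite3

# In-memory stand-in for a warehouse table of weekly sign-ups (invented data).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE signups (week TEXT, city TEXT, members INTEGER)")
conn.executemany(
    "INSERT INTO signups VALUES (?, ?, ?)",
    [("2024-W01", "Hyderabad", 120), ("2024-W01", "Bengaluru", 200),
     ("2024-W02", "Hyderabad", 150), ("2024-W02", "Bengaluru", 210)],
)

# Transform step: weekly totals, the kind of rollup a WBR report consumes.
rows = conn.execute(
    "SELECT week, SUM(members) FROM signups GROUP BY week ORDER BY week"
).fetchall()
```

On Redshift the same GROUP BY runs over billions of rows; the ETL job's value is keeping that input table complete and alarmed.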
Posted 2 weeks ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
CACI India, RMZ Nexity, Tower 30 4th Floor Survey No.83/1, Knowledge City Raidurg Village, Silpa Gram Craft Village, Madhapur, Serilingampalle (M), Hyderabad, Telangana 500081, India Req #1097 02 May 2025 CACI International Inc is an American multinational professional services and information technology company headquartered in Northern Virginia. CACI provides expertise and technology to enterprise and mission customers in support of national security missions and government transformation for defense, intelligence, and civilian customers. CACI has approximately 23,000 employees worldwide. Headquartered in London, CACI Ltd is a wholly owned subsidiary of CACI International Inc., a publicly listed company on the NYSE with annual revenue in excess of US $6.2bn. Founded in 2022, CACI India is an exciting, growing and progressive business unit of CACI Ltd. CACI Ltd currently has over 2000 intelligent professionals and is now adding many more from our Hyderabad and Pune offices. Through a rigorous emphasis on quality, CACI India has grown considerably to become one of the UK's most well-respected technology centres. About Data Platform The Data Platform will be built and managed “as a Product” to support a Data Mesh organization. The Data Platform focuses on enabling decentralized management, processing, analysis and delivery of data, while enforcing corporate-wide federated governance on data and project environments across business domains. The goal is to empower multiple teams to create and manage high-integrity data and data products that are analytics- and AI-ready, and consumed internally and externally. What does a Data Infrastructure Engineer do? A Data Infrastructure Engineer will be responsible for developing, maintaining and monitoring the data platform infrastructure and operations. The infrastructure and pipelines you build will support data processing, data analytics, data science and data management across the CACI business. 
The data platform infrastructure will conform to a zero-trust, least-privilege architecture, with strict adherence to data and infrastructure governance and control in a multi-account, multi-region AWS environment. You will use Infrastructure as Code and CI/CD to continuously improve, evolve and repair the platform. You will be able to design architectures and create re-usable solutions that reflect the business needs. Responsibilities Will Include Collaborating across CACI departments to develop and maintain the data platform Building infrastructure and data architectures in CloudFormation and SAM Designing and implementing data processing environments and integrations using AWS PaaS such as Glue, EMR, SageMaker, Redshift, Aurora and Snowflake Building data processing and analytics pipelines as code, using Python, SQL, PySpark, Spark, CloudFormation, Lambda, Step Functions and Apache Airflow Monitoring and reporting on the data platform performance, usage and security Designing and applying security and access control architectures to secure sensitive data You Will Have 3+ years of experience in a Data Engineering role. Strong experience and knowledge of data architectures implemented in AWS using native AWS services such as S3, DataZone, Glue, EMR, SageMaker, Aurora and Redshift. Experience administering databases and data platforms Good coding discipline in terms of style, structure, versioning, documentation and unit tests Strong proficiency in CloudFormation, Python and SQL Knowledge and experience of relational databases such as Postgres and Redshift Experience using Git for code versioning and lifecycle management Experience operating to Agile principles and ceremonies Hands-on experience with CI/CD tools such as GitLab Strong problem-solving skills and ability to work independently or in a team environment. Excellent communication and collaboration skills. 
A keen eye for detail, and a passion for accuracy and correctness in numbers Whilst not essential, the following skills would also be useful: Experience using Jira, or other agile project management and issue tracking software Experience with Snowflake Experience with Spatial Data Processing More About The Opportunity The Data Engineer is an excellent opportunity, and CACI Services India rewards their staff well with a competitive salary and impressive benefits package which includes: Learning: Budget for conferences, training courses and other materials Health Benefits: Family plan with 4 children and parents covered Future You: Matched pension and health care package We understand the importance of getting to know your colleagues. Company meetings are held every quarter, and a training/work brief weekend is held once a year, amongst many other social events. CACI is an equal opportunities employer. Therefore, we embrace diversity and are committed to a working environment where no one will be treated less favourably on the grounds of their sex, race, disability, sexual orientation, religion, belief or age. We have a Diversity & Inclusion Steering Group and we always welcome new people with fresh perspectives from any background to join the group. An inclusive and equitable environment enables us to draw on expertise and unique experiences and bring out the best in each other. We champion diversity, inclusion and wellbeing and we are supportive of Veterans and people from a military background. We believe that by embracing diverse experiences and backgrounds, we can collaborate to create better outcomes for our people, our customers and our society. Other details Pay Type Salary
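The "pipelines as code" responsibility listed above can be illustrated with a minimal, dependency-free sketch; the step names and retry policy are invented for illustration and stand in for what Step Functions or Airflow would manage in production:

```python
# Minimal "pipeline as code" sketch: named steps chained with simple retry logic,
# in the spirit of a Step Functions state machine or an Airflow DAG.
# Step names and the retry policy are illustrative only.
import time

def run_pipeline(steps, data, retries=2, delay=0.0):
    """Run (name, fn) steps in order; retry a failing step up to `retries` times."""
    for name, step in steps:
        for attempt in range(retries + 1):
            try:
                data = step(data)
                break
            except Exception:
                if attempt == retries:
                    raise RuntimeError(f"step '{name}' failed after {retries + 1} attempts")
                time.sleep(delay)  # back off before the retry
    return data

steps = [
    ("extract",   lambda d: d + [4]),
    ("transform", lambda d: [x * 10 for x in d]),
    ("load",      lambda d: sorted(d)),
]
print(run_pipeline(steps, [3, 1, 2]))  # → [10, 20, 30, 40]
```

Keeping the step list as data makes the pipeline easy to unit-test and to version alongside the rest of the codebase.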
Posted 2 weeks ago
40.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Amgen Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting-edge of innovation, using technology and human data to push beyond what’s known today. About The Role Let’s do this. Let’s change the world. At Amgen, we believe that innovation can and should be happening across the entire company. Part of the Artificial Intelligence & Data function of the Amgen Technology and Medical Organizations (ATMOS), the AI & Data Innovation Lab (the Lab) is a center for exploration and innovation, focused on integrating and accelerating new technologies and methods that deliver measurable value and competitive advantage. We’ve built algorithms that predict bone fractures in patients who haven’t even been diagnosed with osteoporosis yet. We’ve built software to help us select clinical trial sites so we can get medicines to patients faster. We’ve built AI capabilities to standardize and accelerate the authoring of regulatory documents so we can shorten the drug approval cycle. And that’s just the beginning. Join us! We are seeking a Senior DevOps Software Engineer to join the Lab’s software engineering practice. This role is integral to developing top-tier talent, setting engineering best practices, and evangelizing full-stack development capabilities across the organization. The Senior DevOps Software Engineer will design and implement deployment strategies for AI systems using the AWS stack, ensuring high availability, performance, and scalability of applications. Roles & Responsibilities: Design and implement deployment strategies using the AWS stack, including EKS, ECS, Lambda, SageMaker, and DynamoDB. 
Configure and manage CI/CD pipelines in GitLab to streamline the deployment process. Develop, deploy, and manage scalable applications on AWS, ensuring they meet high standards for availability and performance. Implement infrastructure-as-code (IaC) to provision and manage cloud resources consistently and reproducibly. Collaborate with AI product design and development teams to ensure seamless integration of AI models into the infrastructure. Monitor and optimize the performance of deployed AI systems, addressing any issues related to scaling, availability, and performance. Lead and develop standards, processes, and best practices for the team across the AI system deployment lifecycle. Stay updated on emerging technologies and best practices in AI infrastructure and AWS services to continuously improve deployment strategies. Familiarity with AI concepts such as traditional AI, generative AI, and agentic AI, with the ability to learn and adopt new skills quickly. Functional Skills: Deep expertise in designing and maintaining CI/CD pipelines, and in enabling software engineering best practices across the overall software product development lifecycle. Ability to implement automated testing, build, deployment, and rollback strategies. Advanced proficiency in managing and deploying infrastructure with the AWS cloud platform, including cost planning, tracking and optimization. Proficiency with backend languages and frameworks (Python, FastAPI, Flask preferred). Experience with databases (Postgres/DynamoDB). Experience with microservices architecture and containerization (Docker, Kubernetes). Good-to-Have Skills: Familiarity with enterprise software systems in life sciences or healthcare domains. Familiarity with big data platforms and experience in data pipeline development (Databricks, Spark). Knowledge of data security, privacy regulations, and scalable software solutions. 
Soft Skills: Excellent communication skills, with the ability to convey complex technical concepts to non-technical stakeholders. Ability to foster a collaborative and innovative work environment. Strong problem-solving abilities and attention to detail. High degree of initiative and self-motivation. Basic Qualifications: Bachelor’s degree in Computer Science, AI, Software Engineering, or related field. 8+ years of experience in full-stack software engineering. EQUAL OPPORTUNITY STATEMENT Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
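As a toy illustration of the infrastructure-as-code responsibility above, a CloudFormation template can be assembled and validated in ordinary Python before anything is deployed; the logical ID and bucket name here are made up:

```python
# Toy infrastructure-as-code sketch: build a minimal CloudFormation template in
# Python and emit it as JSON. Resource and bucket names are illustrative only.
import json

def s3_bucket_template(logical_id, bucket_name, versioned=True):
    """Return a minimal CloudFormation template dict for a single S3 bucket."""
    props = {"BucketName": bucket_name}
    if versioned:
        props["VersioningConfiguration"] = {"Status": "Enabled"}
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            logical_id: {"Type": "AWS::S3::Bucket", "Properties": props}
        },
    }

template = s3_bucket_template("LabArtifactBucket", "example-lab-artifacts")
print(json.dumps(template, indent=2))
```

Generating templates programmatically like this makes them easy to unit-test and diff in CI before a deployment tool applies them.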
Posted 2 weeks ago
10.0 - 13.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Summary Position Summary DT-US Product Engineering - Engineering Manager We are seeking an exceptional Engineering Manager who combines strong technical leadership with a proven track record of delivering customer-centric solutions. This role requires demonstrated experience in leading engineering teams, fostering engineering excellence, and driving outcomes through incremental and iterative delivery approaches. Work you will do The Engineering Manager will be responsible for leading engineering teams to deliver high-quality solutions while ensuring proper planning, code integrity, and alignment with customer goals. This role requires extensive experience in modern software engineering practices and methodologies, with a focus on customer outcomes and business impact. Project Leadership and Management: Lead engineering teams to deliver solutions that solve complex problems with valuable, viable, feasible, and maintainable outcomes Establish and maintain coding standards, quality metrics, and technical debt management processes Design and implement evolutionary release plans including alpha, beta, and MVP stages Strategic Development: Be the technical advocate for engineering teams throughout the end-to-end lifecycle of product development Drive engineering process improvements and innovation initiatives Develop and implement strategies for continuous technical debt management Team Mentoring and Development: Lead and mentor engineering teams, fostering a culture of engineering excellence and continuous learning Actively contribute to team velocity through hands-on involvement in design, configuration, and coding Establish performance metrics and career development pathways for team members Drive knowledge sharing initiatives and best practices across the organization Provide technical guidance and code reviews to ensure high-quality deliverables Customer Engagement and Delivery: Lead customer engagement initiatives before, during, and after delivery Drive rapid, 
inexpensive experimentation to arrive at optimal solutions Implement incremental and iterative delivery approaches to navigate complexity Foster high levels of customer engagement throughout the development lifecycle Technical Implementation: Ensure proper implementation of DevSecOps practices and CI/CD pipelines Oversee deployment techniques including Blue-Green and Canary deployments Drive the adoption of modern software engineering practices and methodologies Maintain oversight of architecture designs and non-functional requirements Technical Expertise Requirements: Must Have: Modern Software Engineering: Advanced knowledge of Agile methodologies, DevSecOps, and CI/CD practices Technical Leadership: Proven experience in leading engineering teams and maintaining code quality Customer-Centric Development: Experience in delivering solutions through experimentation and iteration Architecture & Design: Strong understanding of software architecture principles and patterns Quality Assurance: Experience with code review processes and quality metrics Cloud Platforms: Strong experience with at least one major cloud platform (AWS/Azure/GCP) and their ML services (SageMaker/Azure ML/Vertex AI) Version Control & Collaboration: Strong proficiency with Git and collaborative development practices Deployment & Operations: Experience with modern deployment techniques and operational excellence AI/ML Engineering: Experience with machine learning frameworks (TensorFlow, PyTorch), MLOps practices, and AI model deployment Data Processing: Knowledge of data processing tools and pipelines for AI/ML applications Domain-Specific Knowledge and experience: Custom, Mobile, Data & Analytics, RPA, or Packages Good to Have: Cloud Platforms: Experience with major cloud providers and their services Package Implementations: Experience with enterprise software package configurations Test Automation: Knowledge of automated testing frameworks and practices Container Technologies: Experience with 
Docker and Kubernetes Infrastructure as Code: Knowledge of infrastructure automation tools Advanced AI/ML: Experience with large language models, deep learning architectures, and AI model optimization AI Platforms: Familiarity with enterprise AI platforms like Databricks, SageMaker, or Azure ML Education: Advanced degree in Computer Science, Software Engineering, or related field, or equivalent experience. Qualifications: 10-13 years of software engineering experience with at least 5 years in technical leadership roles Proven track record of leading and delivering large-scale software projects Strong experience in modern software development methodologies and practices Demonstrated ability to drive engineering excellence and team performance Experience in stakeholder management and cross-functional collaboration Expert-level proficiency in software development and technical leadership Strong track record of implementing engineering best practices and quality standards Excellent oral and written communication skills, including presentation abilities. The Team Information Technology Services (ITS) helps power Deloitte’s success. ITS drives Deloitte, which serves many of the world’s largest, most respected organizations. We develop and deploy cutting-edge internal and go-to-market solutions that help Deloitte operate effectively and lead in the market. Our reputation is built on a tradition of delivering with excellence. The ~3,000 professionals in ITS deliver services including: Security, risk & compliance Technology support Infrastructure Applications Relationship management Strategy Deployment PMO Financials Communications Product Engineering (PxE) Product Engineering (PxE) team is the internal software and applications development team responsible for delivering leading-edge technologies to Deloitte professionals. 
Their broad portfolio includes web and mobile productivity tools that empower our people to log expenses, enter timesheets, book travel and more, anywhere, anytime. PxE enables our client service professionals through a comprehensive suite of applications across the business lines. In addition to application delivery, PxE offers full-scale design services, a robust mobile portfolio, cutting-edge analytics, and innovative custom development. Work Location: Hyderabad Recruiting tips From developing a stand out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Benefits At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you. Our people and culture Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work. Our purpose Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities. Professional development From entry-level employees to senior leaders, we believe there’s always room to learn. We offer opportunities to build new skills, take on leadership opportunities and connect and grow through mentorship. 
From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career. Requisition code: 303079
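The canary deployment technique this role calls out boils down to a traffic-split decision per request; here is a minimal, self-contained sketch (the split ratio and version labels are arbitrary examples, not a production router):

```python
# Toy canary-deployment router: send a configurable fraction of requests to the
# new version and the rest to the stable one. Labels and ratio are illustrative.
import random

def route(request_id, canary_fraction=0.1, seed=None):
    """Return 'canary' or 'stable' for one request, honoring the traffic split.

    Seeding on the request id makes the decision deterministic per request,
    so a retried request keeps hitting the same version.
    """
    rng = random.Random(seed if seed is not None else request_id)
    return "canary" if rng.random() < canary_fraction else "stable"

counts = {"canary": 0, "stable": 0}
for i in range(10_000):
    counts[route(i, canary_fraction=0.1)] += 1
print(counts)  # roughly a 10/90 split
```

A real rollout would pair this with the monitoring the posting mentions, widening the canary fraction only while error and latency metrics stay healthy.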
Posted 2 weeks ago
5.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Techvantage.ai is a next-generation technology and product engineering company at the forefront of innovation in Generative AI, Agentic AI, and autonomous intelligent systems. We build intelligent, cutting-edge solutions designed to scale and evolve with the future of artificial intelligence. Role Overview: We are looking for a skilled and versatile AI Infrastructure Engineer (DevOps/MLOps) to build and manage the cloud infrastructure, deployment pipelines, and machine learning operations behind our AI-powered products. You will work at the intersection of software engineering, ML, and cloud architecture to ensure that our models and systems are scalable, reliable, and production-ready. Key Responsibilities: Design and manage CI/CD pipelines for both software applications and machine learning workflows. Deploy and monitor ML models in production using tools like MLflow, SageMaker, Vertex AI, or similar. Automate the provisioning and configuration of infrastructure using IaC tools (Terraform, Pulumi, etc.). Build robust monitoring, logging, and alerting systems for AI applications. Manage containerized services with Docker and orchestration platforms like Kubernetes. Collaborate with data scientists and ML engineers to streamline model experimentation, versioning, and deployment. Optimize compute resources and storage costs across cloud environments (AWS, GCP, or Azure). Ensure system reliability, scalability, and security across all environments. Requirements: 5+ years of experience in DevOps, MLOps, or infrastructure engineering roles. Hands-on experience with cloud platforms (AWS, GCP, or Azure) and services related to ML workloads. Strong knowledge of CI/CD tools (e.g., GitHub Actions, Jenkins, GitLab CI). Proficiency in Docker, Kubernetes, and infrastructure-as-code frameworks. Experience with ML pipelines, model versioning, and ML monitoring tools. Scripting skills in Python, Bash, or similar for automation tasks. 
Familiarity with monitoring/logging tools (Prometheus, Grafana, ELK, CloudWatch, etc.). Understanding of ML lifecycle management and reproducibility. Preferred Qualifications: Experience with Kubeflow, MLflow, DVC, or Triton Inference Server. Exposure to data versioning, feature stores, and model registries. Certification in AWS/GCP DevOps or Machine Learning Engineering is a plus. Background in software engineering, data engineering, or ML research is a bonus. What We Offer: Work on cutting-edge AI platforms and infrastructure Cross-functional collaboration with top ML, research, and product teams Competitive compensation package – no constraints for the right candidate
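The model versioning and registry experience asked for above can be pictured with a tiny in-memory registry; the API shape is invented for illustration and is not MLflow's actual interface:

```python
# Minimal in-memory model-registry sketch: register versions, promote one to
# "production", and look it up. The API shape and metrics are invented examples.
class ModelRegistry:
    def __init__(self):
        self._versions = {}    # (name, version) -> metadata
        self._production = {}  # name -> promoted version

    def register(self, name, metadata):
        """Store a new version of `name` and return its version number."""
        version = 1 + max((v for (n, v) in self._versions if n == name), default=0)
        self._versions[(name, version)] = dict(metadata)
        return version

    def promote(self, name, version):
        """Mark an existing version as the production model for `name`."""
        if (name, version) not in self._versions:
            raise KeyError(f"{name} v{version} is not registered")
        self._production[name] = version

    def production(self, name):
        """Return (version, metadata) of the current production model."""
        version = self._production[name]
        return version, self._versions[(name, version)]

reg = ModelRegistry()
v1 = reg.register("churn", {"auc": 0.81})
v2 = reg.register("churn", {"auc": 0.84})
reg.promote("churn", v2)
print(reg.production("churn"))  # → (2, {'auc': 0.84})
```

Production registries add artifact storage, lineage, and stage transitions, but the version-then-promote flow is the same idea.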
Posted 2 weeks ago
4.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Description As part of the AWS Solutions organization, we have a vision to provide business applications, leveraging Amazon’s unique experience and expertise, that are used by millions of companies worldwide to manage day-to-day operations. We will accomplish this by accelerating our customers’ businesses through delivery of intuitive and differentiated technology solutions that solve enduring business challenges. We blend vision with curiosity and Amazon’s real-world experience to build opinionated, turnkey solutions. Where customers prefer to buy over build, we become their trusted partner with solutions that are no-brainers to buy and easy to use. Inclusive Team Culture AWS values curiosity and connection. Our employee-led and company-sponsored affinity groups promote inclusion and empower our people to take pride in what makes us unique. Our inclusion events foster stronger, more collaborative teams. Our continual innovation is fueled by the bold ideas, fresh perspectives, and passionate voices our teams bring to everything we do. It’s truly Day 1 for our team in AWS. This is your opportunity to be a member of a team that’s building a suite of AWS Apps and Services to tackle a huge new problem space. You’ll be an integral part of testing the apps built by services that leverage AWS technologies like SageMaker, Forecast, Athena, QuickSight, Glue, Bedrock, ML and more. As a QA member of the team, you’ll wear many hats. You’ll help design the overall test strategy, test plan, contribute to the product vision, and establish the technology processes and practices that will lay the groundwork for the organization as it grows. An ideal candidate is an experienced Software QA Engineer with a development and/or QA background who can direct the activities of a growing team. 
The successful candidate should be able to apply QA process, practice and principles to software development and release processes, should apply their experience with a variety of software QA tools to accomplish these processes, as well as describe requirements for new scripts, tools and automation needed by their team. Responsibilities include defining test strategy and test plans, reviewing them with stakeholders, improving test coverage, reviewing and filling gaps in existing automation, representing the customer, understanding how the customers use the system, and including the most relevant end-to-end user scenarios in test plans and automation. Responsibilities Understanding how all elements of the system software ecosystem work together and developing QA approaches that fit the overall strategy Responsible for development of test strategies and creation of appropriate test harnesses Providing test infrastructure to enable engineering teams to test and own quality of the services. Being a stakeholder in the release to ensure defects are fixed per SLA and the end-customer experience is protected and improved Development and execution of test plans, monitoring and reporting on test execution and quality metrics Coordinating with the offshore Quality Service team on test execution and sign-off A day in the life Diverse Experiences Amazon values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let it stop you from applying. Why AWS Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that’s why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses. 
Work/Life Balance We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there’s nothing we can’t achieve. Inclusive Team Culture AWS values curiosity and connection. Our employee-led and company-sponsored affinity groups promote inclusion and empower our people to take pride in what makes us unique. Our inclusion events foster stronger, more collaborative teams. Our continual innovation is fueled by the bold ideas, fresh perspectives, and passionate voices our teams bring to everything we do. Mentorship and Career Growth We’re continuously raising our performance bar as we strive to become Earth’s Best Employer. That’s why you’ll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional. Basic Qualifications 4+ years of quality assurance engineering experience Experience in automation testing Experience scripting or coding Experience in manual testing Experience in at least one modern programming language such as Python, Java or Perl Preferred Qualifications Deep hands-on technical expertise Experience with at least one automated test framework like Selenium, Appium or Cypress Experience in gathering test requirements to create detailed test plans and defining quality metrics to measure product quality A deep understanding of automation testing by leading engineers who can write automation scripts/programs that will aid in automated testing Experience working in the Supply Chain domain Our inclusive culture empowers Amazonians to deliver the best results for our customers. 
If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - ADCI - Tamil Nadu - A83 Job ID: A2939747
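A minimal flavor of the test-harness work described above, in plain Python; the function under test and its cases are invented examples:

```python
# Minimal test-harness sketch: run parametrized cases against a function under
# test and report pass/fail counts. The function and cases are invented examples.
def normalize_sku(raw):
    """Function under test: canonicalize a SKU string."""
    return raw.strip().upper().replace(" ", "-")

def run_cases(fn, cases):
    """cases: list of (input, expected). Returns (passed, failures)."""
    passed, failures = 0, []
    for raw, expected in cases:
        actual = fn(raw)
        if actual == expected:
            passed += 1
        else:
            failures.append((raw, expected, actual))
    return passed, failures

cases = [
    ("  ab 12 ", "AB-12"),
    ("xy34", "XY34"),
    ("bad case", "BADCASE"),  # deliberately wrong expectation, to show a failure
]
passed, failures = run_cases(normalize_sku, cases)
print(f"{passed} passed, {len(failures)} failed")  # → 2 passed, 1 failed
```

A real suite would use a framework such as pytest, but the parametrized run-and-report loop is the same idea.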
Posted 2 weeks ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Description AWS Infrastructure Services owns the design, planning, delivery, and operation of all AWS global infrastructure. In other words, we’re the people who keep the cloud running. We support all AWS data centers and all of the servers, storage, networking, power, and cooling equipment that ensure our customers have continual access to the innovation they rely on. We work on the most challenging problems, with thousands of variables impacting the supply chain — and we’re looking for talented people who want to help. You’ll join a diverse team of software, hardware, and network engineers, supply chain specialists, security experts, operations managers, and other vital roles. You’ll collaborate with people across AWS to help us deliver the highest standards for safety and security while providing seemingly infinite capacity at the lowest possible cost for our customers. And you’ll experience an inclusive culture that welcomes bold ideas and empowers you to own them to completion. Do you love problem solving? Are you looking for real-world supply chain challenges? Do you have a desire to make a major contribution to the future, in the rapid growth environment of cloud computing? Amazon Web Services is looking for a highly motivated Data Scientist to help build scalable, predictive and prescriptive business analytics solutions that support the AWS Supply Chain and Procurement organization. You will be part of the Supply Chain Analytics team working with Global Stakeholders, Data Engineers, Business Intelligence Engineers and Business Analysts to achieve our goals. We are seeking an innovative and technically strong data scientist with a background in optimization, machine learning, and statistical modeling/analysis. This role requires a team member to have strong quantitative modeling skills and the ability to apply optimization, statistical and machine learning methods to complex decision-making problems, with data coming from various data sources. 
The candidate should have strong communication skills, be able to work closely with stakeholders, and translate data-driven findings into actionable insights. The successful candidate will be a self-starter, comfortable with ambiguity, with strong attention to detail and the ability to work in a fast-paced and ever-changing environment. Key job responsibilities Demonstrate thorough technical knowledge of feature engineering on massive datasets, effective exploratory data analysis, and model building using industry-standard time-series forecasting techniques such as ARIMA, ARIMAX and Holt-Winters, and formulate ensemble models. Proficiency in both supervised (linear/logistic regression) and unsupervised algorithms (k-means clustering, Principal Component Analysis, market basket analysis). Experience in solving optimization problems like inventory and network optimization. Should have hands-on experience in Linear Programming. Work closely with internal stakeholders like the business teams, engineering teams and partner teams and align them with respect to your focus area. Detail-oriented, with an aptitude for solving unstructured problems. You should work in a self-directed environment, own tasks and drive them to completion. Excellent business and communication skills, to be able to work with business owners to develop and define key business questions and to build data sets that answer those questions. Work with distributed machine learning and statistical algorithms to harness enormous volumes of data at scale to serve our customers. About The Team Diverse Experiences Amazon values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let it stop you from applying. 
Why AWS Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that’s why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses. Work/Life Balance We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there’s nothing we can’t achieve. Inclusive Team Culture AWS values curiosity and connection. Our employee-led and company-sponsored affinity groups promote inclusion and empower our people to take pride in what makes us unique. Our inclusion events foster stronger, more collaborative teams. Our continual innovation is fueled by the bold ideas, fresh perspectives, and passionate voices our teams bring to everything we do. Mentorship and Career Growth We’re continuously raising our performance bar as we strive to become Earth’s Best Employer. That’s why you’ll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional. Basic Qualifications Master's with 5+ years of experience or Bachelor's with 8+ years of experience in a quantitative field (Computer Science, Mathematics, Machine Learning, AI, Statistics, Operations Research or equivalent) Experience in Python, R or another scripting language; command line / notebook usage. Knowledge and expertise in data modelling, SQL, MySQL, and databases (RDBMS, NoSQL) Extensive knowledge and practical experience in several of the following areas: machine learning, statistics, optimization using Linear Programming. 
- Evidence of using relevant statistical measures such as hypothesis testing, confidence intervals, significance of error measurements, development and evaluation data sets, etc., in data analysis projects
- Excellent written and verbal communication skills for both technical and non-technical audiences

Preferred Qualifications
- Experience in Python, Perl, or another scripting language
- Experience in an ML or data scientist role at a large technology company
- Functional knowledge of AWS platforms such as S3, Glue, Athena, SageMaker, Lambda, EC2, Batch, and Step Functions
- Experience in creating powerful data-driven visualizations to describe your ML modeling results to stakeholders

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company - ADSIPL - Karnataka
Job ID: A2959646
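The linear-programming skill called out in the qualifications can be illustrated with a small sketch using `scipy.optimize.linprog` (assuming SciPy is available). The numbers form a toy production-planning problem invented for illustration, not taken from the listing.

```python
from scipy.optimize import linprog

# Toy production-planning LP (illustrative numbers):
#   maximize profit 3x + 2y
#   subject to  x + y  <= 4   (storage capacity)
#               2x + y <= 6   (machine hours)
#               x, y   >= 0
# linprog minimizes, so the objective is negated.
res = linprog(
    c=[-3, -2],
    A_ub=[[1, 1], [2, 1]],
    b_ub=[4, 6],
    bounds=[(0, None), (0, None)],
    method="highs",
)
best_profit = -res.fun  # optimum is 10 at x = 2, y = 2
```

Real inventory and network-optimization problems have the same shape, just with many more variables and constraints, and often integer restrictions that call for a MILP solver instead.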
Posted 2 weeks ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Description
As part of the AWS Solutions organization, we have a vision to provide business applications, leveraging Amazon’s unique experience and expertise, that are used by millions of companies worldwide to manage day-to-day operations. We will accomplish this by accelerating our customers’ businesses through delivery of intuitive and differentiated technology solutions that solve enduring business challenges. We blend vision with curiosity and Amazon’s real-world experience to build opinionated, turnkey solutions. Where customers prefer to buy over build, we become their trusted partner with solutions that are no-brainers to buy and easy to use.

It’s truly Day 1 for our team in AWS. This is your opportunity to be a member of a team that’s building a suite of AWS Apps and Services to tackle a huge new problem space. You’ll be an integral part of testing the apps built on services that leverage AWS technologies like SageMaker, Forecast, Athena, QuickSight, Glue, Bedrock, ML, and more. As a QA member of the team, you’ll wear many hats. You’ll help design the overall test strategy and test plan, contribute to the product vision, and establish the technology processes and practices that will lay the groundwork for the organization as it grows. An ideal candidate is an experienced software QA engineer with a development and/or QA background who can direct the activities of a growing team.
The successful candidate should be able to apply QA processes, practices, and principles to software development and release processes, apply their experience with a variety of software QA tools to accomplish these processes, and describe requirements for new scripts, tools, and automation needed by their team. Responsibilities include defining test strategy and test plans, reviewing them with stakeholders, improving test coverage, reviewing and filling gaps in existing automation, representing the customer, understanding how customers use the system, and including the most relevant end-to-end user scenarios in test plans and automation.

Responsibilities
- Understanding how all elements of the system software ecosystem work together and developing QA approaches that fit the overall strategy
- Development of test strategies and creation of appropriate test harnesses
- Providing test infrastructure that enables engineering teams to test and own the quality of their services
- Being a stakeholder of the release, ensuring defects are fixed per SLA and the end-customer experience is protected and improved
- Development and execution of test plans, monitoring and reporting on test execution and quality metrics
- Coordinating with the offshore Quality Service team on test execution and sign-off
Basic Qualifications
- 4+ years of quality assurance engineering experience
- Experience in automation testing
- Experience scripting or coding
- Experience in manual testing
- Experience in at least one modern programming language such as Python, Java, or Perl

Preferred Qualifications
- Deep hands-on technical expertise
- Experience with at least one automated test framework such as Selenium, Appium, or Cypress
- Experience in gathering test requirements to create detailed test plans and defining quality metrics to measure product quality
- A deep understanding of automation testing, and the ability to lead engineers who write automation scripts/programs that aid in automated testing
- Experience working in the supply chain domain

Our inclusive culture empowers Amazonians to deliver the best results for our customers.
If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company - ADCI - Tamil Nadu - A83
Job ID: A2939747
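To give a flavor of the test-automation work this role describes, here is a minimal, framework-agnostic sketch of a data-driven check runner in plain Python. Both the runner and the toy system under test are invented for illustration; the team's actual stack, per the listing, would be a framework such as Selenium, Appium, or Cypress.

```python
def run_test_cases(func, cases):
    """Data-driven check runner: apply func to each input tuple and
    compare with the expected output, collecting all failures
    instead of stopping at the first one (useful for reporting)."""
    failures = []
    for inputs, expected in cases:
        actual = func(*inputs)
        if actual != expected:
            failures.append({"inputs": inputs, "expected": expected, "actual": actual})
    return failures

# Toy system under test: a bulk-discount pricing rule
def discounted_total(price, qty):
    return round(price * qty * (0.9 if qty >= 10 else 1.0), 2)

cases = [
    ((5.0, 1), 5.0),    # no discount below 10 units
    ((5.0, 10), 45.0),  # 10% discount kicks in at 10 units
    ((2.5, 4), 10.0),
]
failures = run_test_cases(discounted_total, cases)
```

In practice the same tabular inputs-vs-expected structure maps directly onto parametrized tests in pytest or TestNG.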
Posted 2 weeks ago
5.0 - 10.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Requirements
Role/Job Title: Senior Developer
Function/Department: Information Technology

Job Purpose
As a Backend Developer, you will play a crucial role in designing, developing, and maintaining complex backend systems. You will work closely with cross-functional teams to deliver high-quality software solutions and drive the technical direction of our projects. Your experience and expertise will be vital in ensuring the performance, scalability, and reliability of our applications.

Roles and Responsibilities:
- Solid understanding of backend performance optimization and debugging
- Formal training or certification in software engineering concepts, with proficient applied experience
- Strong hands-on experience with Python
- Experience in developing microservices using Python with FastAPI
- Commercial experience in both backend and frontend engineering
- Hands-on experience with AWS cloud-based application development, including EC2, ECS, EKS, Lambda, SQS, SNS, RDS (Aurora MySQL & Postgres), DynamoDB, EMR, and Kinesis
- Strong engineering background in machine learning, deep learning, and neural networks
- Experience with a containerized stack using Kubernetes or ECS for development, deployment, and configuration
- Experience with Single Sign-On/OIDC integration and a deep understanding of OAuth, JWT/JWE/JWS
- Knowledge of AWS SageMaker and data analytics tools
- Proficiency in frameworks such as TensorFlow, PyTorch, or similar

Educational Qualification (Full-time)
Bachelor of Technology (B.Tech) / Bachelor of Science (B.Sc) / Master of Science (M.Sc) / Master of Technology (M.Tech) / Bachelor of Computer Applications (BCA) / Master of Computer Applications (MCA)

Experience: 5-10 years
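The OAuth/JWT requirement above can be illustrated with a small standard-library sketch that decodes (without verifying) the payload segment of a JWT. The toy token built here is invented for illustration; real services must verify the signature first, for example with a library such as PyJWT.

```python
import base64
import json

def decode_jwt_payload(token):
    """Decode the payload segment of a compact-serialized JWT
    WITHOUT verifying its signature -- for inspection only.
    Production code must verify the signature before trusting claims."""
    try:
        _header, payload, _signature = token.split(".")
    except ValueError:
        raise ValueError("not a JWS compact-serialized token")
    # base64url decoding requires padding back to a multiple of 4
    padded = payload + "=" * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

# Build a toy token to demonstrate the round trip
claims = {"sub": "user-123", "scope": "read"}
body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("=")
token = f"eyJhbGciOiJIUzI1NiJ9.{body}.fake-signature"
```

The three dot-separated base64url segments (header, payload, signature) are the JWS compact serialization that OIDC ID tokens and OAuth access tokens commonly use.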
Posted 2 weeks ago
Amazon SageMaker is a rapidly growing skill area in India, with many companies looking to hire professionals with expertise in it. Whether you are a seasoned professional or a newcomer to the tech industry, there are plenty of opportunities waiting for you in the SageMaker job market.
If you are looking to land a SageMaker job in India, here are the top five cities where companies are actively hiring for roles in this field:
The salary range for SageMaker professionals in India varies with experience and location. On average, entry-level professionals can expect to earn around INR 6-8 lakhs per annum, while experienced professionals can earn upwards of INR 15 lakhs per annum.
In the SageMaker field, a typical career progression may look like this:
In addition to expertise in SageMaker, professionals in this field are often expected to have knowledge of the following skills:
Here are 25 interview questions that you may encounter when applying for SageMaker roles, categorized by difficulty level:
What is a SageMaker notebook instance?
Medium: What is the difference between SageMaker Ground Truth and SageMaker Processing?
As you explore opportunities in the SageMaker job market in India, remember to hone your skills, stay updated with industry trends, and approach interviews with confidence. With the right preparation and mindset, you can land your dream job in this exciting and evolving field. Good luck!