0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. Position Name ML Developer Taleo ID Position Level Staff Employment Type Permanent Number of Openings 1 Work Location Kochi, Chennai, Noida, Bangalore, Pune, Kolkata, TVM Position Details As part of EY GDS Assurance Digital, you will be responsible for leveraging advanced machine learning techniques to develop innovative, high-impact models and solutions that drive growth and deliver significant business value. You will be helping EY’s sector and service line professionals by developing analytics-enabled solutions, integrating data science activities with business-relevant aspects to gain insight from data. This is a full-time Machine Learning Developer role, responsible for building and deploying robust machine learning models to solve real-world business problems. You will be working on the entire ML lifecycle, including data analysis, feature engineering, model training, evaluation, and deployment. Requirements (including Experience, Skills and Additional Qualifications) A bachelor’s degree (BE/BTech/MCA & MBA) in Computer Science, Engineering, Information Systems Management, Accounting, Finance or a related field with adequate industry experience. Technical Skills Requirements Develop and implement machine learning models, including regression, classification (e.g., XGBoost, Random Forest), and clustering techniques (see the illustrative sketch after this listing). Conduct exploratory data analysis (EDA) to uncover insights and trends within data sets. Apply dimension reduction techniques to improve model performance and interpretability. Utilize statistical models to design and implement effective business solutions. Evaluate and validate models to ensure robustness and reliability. Should have a solid background in Python. Familiarity with Time Series Forecasting. Basic experience with cloud platforms such as AWS, Azure, or GCP. Exposure to ML Ops tools and practices (e.g., MLflow, Airflow, Docker) is a plus. Additional skill requirements: Proficient at quickly understanding complex machine learning concepts and utilizing technology for tasks such as data modeling, analysis, visualization, and process automation. Skilled in selecting and applying the most suitable standards, methods, tools, and frameworks for specific ML tasks and use cases. Capable of collaborating effectively within cross-functional teams, while also being able to work independently on complex ML projects. Demonstrates a strong analytical mindset and systematic approach to solving machine learning challenges. Excellent communication skills, able to present complex technical concepts clearly to both technical and non-technical audiences. What we look for A team of people with commercial acumen, technical experience, and enthusiasm to learn new things in this fast-moving environment. An opportunity to be part of a market-leading, multi-disciplinary team of 7,200+ professionals, in the only integrated global assurance business worldwide. 
Opportunities to work with EY GDS Assurance practices globally with leading businesses across a range of industries What working at EY offers At EY, we’re dedicated to helping our clients, from startups to Fortune 500 companies — and the work we do with them is as varied as they are. You get to work with inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees, and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer: Support, coaching and feedback from some of the most engaging colleagues around Opportunities to develop new skills and progress your career The freedom and flexibility to handle your role in a way that’s right for you About EY As a global leader in assurance, tax, transaction, and advisory services, we’re using the finance products, expertise, and systems we’ve developed to build a better working world. That starts with a culture that believes in giving you the training, opportunities, and creative freedom to make things better. Whenever you join, however long you stay, the exceptional EY experience lasts a lifetime. And with a commitment to hiring and developing the most passionate people, we’ll make our ambition to be the best employer by 2020 a reality. If you can confidently demonstrate that you meet the criteria above, please contact us as soon as possible. Join us in building a better working world. Apply now EY provides equal employment opportunities to applicants and employees without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
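As a point of reference for the modelling work this listing describes, here is a minimal, illustrative sketch of a supervised classification workflow in Python. The synthetic dataset, the choice of a random forest, and all hyperparameters are assumptions for illustration only, not part of the posting.

```python
# Minimal sketch: train and evaluate a classifier of the kind the listing mentions.
# The synthetic dataset and every parameter below are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Synthetic stand-in for a business dataset (e.g., churn or default labels).
X, y = make_classification(n_samples=2_000, n_features=20, n_informative=8, random_state=42)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42, stratify=y
)

model = RandomForestClassifier(n_estimators=300, max_depth=8, random_state=42)
model.fit(X_train, y_train)

# Hold-out evaluation to check robustness before any deployment step.
print(classification_report(y_test, model.predict(X_test)))
```

In practice the same pattern extends to XGBoost, cross-validation, and dimension-reduction steps such as PCA applied before training.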
Posted 2 days ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
This role has been designed as ‘Hybrid’ with an expectation that you will work on average 2 days per week from an HPE office. Who We Are Hewlett Packard Enterprise is the global edge-to-cloud company advancing the way people live and work. We help companies connect, protect, analyze, and act on their data and applications wherever they live, from edge to cloud, so they can turn insights into outcomes at the speed required to thrive in today’s complex world. Our culture thrives on finding new and better ways to accelerate what’s next. We know varied backgrounds are valued and succeed here. We have the flexibility to manage our work and personal needs. We make bold moves, together, and are a force for good. If you are looking to stretch and grow your career, our culture will embrace you. Open up opportunities with HPE. Job Description Job Family Definition: Designs, develops, troubleshoots and debugs software programs for software enhancements and new products. Develops software including operating systems, compilers, routers, networks, utilities, databases and Internet-related tools. Determines hardware compatibility and/or influences hardware design. Management Level Definition Contributes to assignments of limited scope by applying technical concepts and theoretical knowledge acquired through specialized training, education, or previous experience. Acts as a team member by providing information, analysis and recommendations in support of team efforts. Exercises independent judgment within defined parameters. What You’ll Do Responsibilities: Design topologies and build network configurations that map to well-optimized network reference designs. Plan, develop and execute automated and manual test plans for reference design readiness (see the reachability-check sketch after this listing). Provide constructive feedback, report issues, and interact with developers to deliver best-in-class product quality. Review requirements from the Product Management, Technical Marketing & Account teams. Utilize available network troubleshooting tools, including network packet captures, monitoring devices, log files, and customer inputs to facilitate effective issue resolution. Minimum Qualifications What you need to bring: BS degree in Computer Science or equivalent experience. Years of experience: 1 to 3 years. Expert knowledge of Layer 2 and Layer 3 technologies, gained by either validating or deploying related networking products. Deep understanding of Clos-based data center network architectures (3-stage and 5-stage) and Data Center Interconnect (DCI). Excellent understanding of features such as Dot1x, DHCP, firewall, class of service, and EVPN-VXLAN. Proficient in class of service and DCQCN, which are heavily used in AI/ML-based Clos networks. Expert knowledge of Python programming. Deep understanding of software, networking, and system concepts, including Linux internals, distributed system concepts and network troubleshooting tools. Excellent interpersonal and communication skills with a proven ability to develop and maintain effective relationships. 
Strong problem solving and decision-making skills Additional Skills Cloud Architectures, Cross Domain Knowledge, Design Thinking, Development Fundamentals, DevOps, Distributed Computing, Microservices Fluency, Full Stack Development, Security-First Mindset, Solutions Design, Testing & Automation, User Experience (UX) What We Can Offer You Health & Wellbeing We strive to provide our team members and their loved ones with a comprehensive suite of benefits that supports their physical, financial and emotional wellbeing. Personal & Professional Development We also invest in your career because the better you are, the better we all are. We have specific programs catered to helping you reach any career goals you have — whether you want to become a knowledge expert in your field or apply your skills to another division. Unconditional Inclusion We are unconditionally inclusive in the way we work and celebrate individual uniqueness. We know varied backgrounds are valued and succeed here. We have the flexibility to manage our work and personal needs. We make bold moves, together, and are a force for good. Let's Stay Connected Follow @HPECareers on Instagram to see the latest on people, culture and tech at HPE. #india #networking Job Engineering Job Level TCP_01 HPE is an Equal Employment Opportunity/ Veterans/Disabled/LGBT employer. We do not discriminate on the basis of race, gender, or any other protected category, and all decisions we make are made on the basis of qualifications, merit, and business need. Our goal is to be one global team that is representative of our customers, in an inclusive environment where we can continue to innovate and grow together. Please click here: Equal Employment Opportunity. Hewlett Packard Enterprise is EEO Protected Veteran/ Individual with Disabilities. HPE will comply with all applicable laws related to employer use of arrest and conviction records, including laws requiring employers to consider for employment qualified applicants with criminal histories.
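The responsibilities above mention building automated test plans and using network troubleshooting tools. As a minimal illustration (not HPE's tooling), here is a Python sketch of an automated TCP reachability check; the device addresses, ports, and timeout are hypothetical placeholders.

```python
# Minimal sketch: automated TCP reachability checks of the kind a network
# validation test plan might start from. Hosts and ports below are placeholders.
import socket

# Hypothetical device management/service endpoints under test.
CHECKS = [
    ("10.0.0.1", 22),    # leaf switch SSH
    ("10.0.0.2", 830),   # NETCONF
    ("10.0.1.10", 179),  # BGP peer
]

def tcp_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for host, port in CHECKS:
        status = "PASS" if tcp_reachable(host, port) else "FAIL"
        print(f"{status}  {host}:{port}")
```

A real validation suite would layer protocol-level checks (BGP/EVPN state, DHCP, Dot1x) and packet-capture analysis on top of basic reachability.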
Posted 2 days ago
4.0 - 7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. AI and Full Stack Technical Lead Required Skills 4-7 years of overall application development experience, with a minimum of 3 years of strong hands-on experience in designing and writing modular/reusable code using ReactJs. Strong experience in React JS, Node JS and Express JS. Expertise in Redux/Flux and RxJS-based development. Proficient in object-oriented concepts in UI development and expert-level knowledge of HTML, CSS, JavaScript, AJAX and JSON. Expert-level knowledge of web application architecture principles, design patterns and programming practices using front-end web technologies. Deep understanding of browser internals, UI framework internals, JavaScript engine internals and the ability to work around challenges and limitations. Hands-on knowledge of Sass and CSS frameworks (such as Bootstrap, Material, etc.). REST API knowledge. Core Python: Data structures, OOP, exception handling. API Integration: Consuming REST APIs (e.g., Azure SDKs). Data Processing: Using pandas, NumPy, and json for data manipulation (see the illustrative sketch after this listing). AI/ML Libraries: Familiarity with scikit-learn, transformers, or OpenAI SDKs. Experience writing unit and integration test cases. Experience working in agile methodologies. Experience building responsive UIs on the web that are robust, scalable, and maintainable. Monitor, troubleshoot, and optimize solutions. Visual Studio, TFS, VSTS and Git. Good to have: knowledge of Azure and DevOps pipelines. Soft Skills Excellent communication skills. Team player. Self-starter and highly motivated. Ability to handle high-pressure and fast-paced situations. Excellent presentation skills. Ability to work with globally distributed teams. Roles and Responsibilities: Understand existing application architecture and solution design. Build engaging, usable, and accessible UI applications/components/code libraries for the web. Work with other architects, leads, and team members in an agile scrum environment. Understand and respect UX and its role in the development of excellent UIs. Contribute to all phases of the development lifecycle. Identify gaps and come up with working solutions. Understand enterprise application design frameworks and processes. Lead or mentor junior and/or mid-level developers. Review code and establish best practices. Look out for the latest technologies, match them to EY use cases, and solve business problems efficiently. Ability to look at the big picture. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
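For the Python items in the skills list (consuming REST APIs and manipulating the response with pandas/json), here is a minimal sketch. The endpoint URL, query parameters, and response fields are hypothetical placeholders, not part of the posting.

```python
# Minimal sketch: consume a REST API and shape the JSON response with pandas.
# The endpoint and the response fields are hypothetical placeholders.
import requests
import pandas as pd

API_URL = "https://api.example.com/v1/orders"  # placeholder endpoint

resp = requests.get(API_URL, params={"status": "open"}, timeout=10)
resp.raise_for_status()                       # fail fast on HTTP errors

records = resp.json()                         # assume a JSON array of objects
df = pd.DataFrame.from_records(records)

# Simple downstream processing: total order value per customer.
summary = df.groupby("customer_id")["amount"].sum().sort_values(ascending=False)
print(summary.head())
```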
Posted 2 days ago
5.0 - 8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Title: Data Engineer Location: Hyderabad, India (Onsite) Full-time. Job Description: We are seeking an experienced Data Engineer with 5-8 years of professional experience to design, build, and optimize robust and scalable data pipelines for our SmartFM platform. The ideal candidate will be instrumental in ingesting, transforming, and managing vast amounts of operational data from various building devices, ensuring high data quality and availability for analytics and AI/ML applications. This role is critical in enabling our platform to generate actionable insights, alerts, and recommendations for optimizing facility operations. ROLES AND RESPONSIBILITIES • Design, develop, and maintain scalable and efficient data ingestion pipelines from diverse sources (e.g., IoT devices, sensors, existing systems) using technologies like IBM StreamSets, Azure Data Factory, Apache Spark, Talend, Apache Flink, and Kafka (see the ingestion sketch after this listing). • Implement robust data transformation and processing logic to clean, enrich, and structure raw data into formats suitable for analysis and machine learning models. • Manage and optimize data storage solutions, primarily within MongoDB, ensuring efficient schema design, data indexing, and query performance for large datasets. • Collaborate closely with Data Scientists to understand their data needs, provide high-quality, reliable datasets, and assist in deploying data-driven solutions. • Ensure data quality, consistency, and integrity across all data pipelines and storage systems, implementing monitoring and alerting mechanisms for data anomalies. • Work with cross-functional teams (Software Engineers, Data Scientists, Product Managers) to integrate data solutions with the React frontend and Node.js backend applications. • Contribute to the continuous improvement of data architecture, tooling, and best practices, advocating for scalable and maintainable data solutions. • Troubleshoot and resolve complex data-related issues, optimizing pipeline performance and ensuring data availability. • Stay updated with emerging data engineering technologies and trends, evaluating and recommending new tools and approaches to enhance our data capabilities. REQUIRED TECHNICAL SKILLS AND EXPERIENCE • 5-8 years of professional experience in Data Engineering or a related field. • Proven hands-on experience with data pipeline tools such as IBM StreamSets, Azure Data Factory, Apache Spark, Talend, Apache Flink, and Apache Kafka. • Strong expertise in database management, particularly with MongoDB, including schema design, data ingestion pipelines, and data aggregation. • Proficiency in at least one programming language commonly used in data engineering, such as Python or Java/Scala. • Experience with big data technologies and distributed processing frameworks (e.g., Apache Spark, Hadoop) is highly desirable. • Familiarity with cloud platforms (Azure, AWS, or GCP) and their data services. • Solid understanding of data warehousing concepts, ETL/ELT processes, and data modeling. • Experience with DevOps practices for data pipelines (CI/CD, monitoring, logging). • Knowledge of Node.js and React environments to facilitate seamless integration with existing applications. ADDITIONAL QUALIFICATIONS • Demonstrated expertise in written and verbal communication, adept at simplifying complex technical concepts for both technical and non-technical audiences. • Strong problem-solving and analytical skills with a meticulous approach to data quality. 
• Experienced in collaborating and communicating seamlessly with diverse technology roles, including development, support, and product management. • Highly motivated to acquire new skills, explore emerging technologies, and stay updated on the latest trends in data engineering and business needs. • Experience in the facility management domain or IoT data is a plus. EDUCATION REQUIREMENTS / EXPERIENCE • Bachelor’s (BE / BTech) / Master’s degree (MS/MTech) in Computer Science, Information Systems, Mathematics, Statistics, or a related quantitative field.
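As a rough illustration of the ingestion pattern described above (consume device events from Kafka, apply a light transformation, and persist them to MongoDB), here is a Python sketch using the kafka-python and pymongo clients. The topic name, connection strings, and event schema are assumptions, not details of the SmartFM platform.

```python
# Minimal sketch: consume IoT-style events from Kafka, enrich them, and write to MongoDB.
# Topic, connection strings, and the event schema are illustrative assumptions.
import json
from datetime import datetime, timezone

from kafka import KafkaConsumer          # pip install kafka-python
from pymongo import MongoClient          # pip install pymongo

consumer = KafkaConsumer(
    "building-sensor-events",                          # hypothetical topic
    bootstrap_servers=["localhost:9092"],
    value_deserializer=lambda m: json.loads(m.decode("utf-8")),
    auto_offset_reset="earliest",
)

collection = MongoClient("mongodb://localhost:27017")["smartfm"]["sensor_readings"]

for message in consumer:
    event = message.value
    # Light transformation: normalize field names and stamp the ingestion time.
    doc = {
        "device_id": event.get("deviceId"),
        "metric": event.get("metric"),
        "value": event.get("value"),
        "ingested_at": datetime.now(timezone.utc),
    }
    collection.insert_one(doc)
```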
Posted 2 days ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. Position Name ML Developer Taleo ID Position Level Staff Employment Type Permanent Number of Openings 1 Work Location Kochi, Chennai, Noida, Bangalore, Pune, Kolkata, TVM Position Details As part of EY GDS Assurance Digital, you will be responsible for leveraging advanced machine learning techniques to develop innovative, high-impact models and solutions that drive growth and deliver significant business value. You will be helping EY’s sector and service line professionals by developing analytics-enabled solutions, integrating data science activities with business-relevant aspects to gain insight from data. This is a full-time Machine Learning Developer role, responsible for building and deploying robust machine learning models to solve real-world business problems. You will be working on the entire ML lifecycle, including data analysis, feature engineering, model training, evaluation, and deployment. Requirements (including Experience, Skills and Additional Qualifications) A bachelor’s degree (BE/BTech/MCA & MBA) in Computer Science, Engineering, Information Systems Management, Accounting, Finance or a related field with adequate industry experience. Technical Skills Requirements Develop and implement machine learning models, including regression, classification (e.g., XGBoost, Random Forest), and clustering techniques. Conduct exploratory data analysis (EDA) to uncover insights and trends within data sets. Apply dimension reduction techniques to improve model performance and interpretability. Utilize statistical models to design and implement effective business solutions. Evaluate and validate models to ensure robustness and reliability. Should have a solid background in Python. Familiarity with Time Series Forecasting. Basic experience with cloud platforms such as AWS, Azure, or GCP. Exposure to ML Ops tools and practices (e.g., MLflow, Airflow, Docker) is a plus. Additional skill requirements: Proficient at quickly understanding complex machine learning concepts and utilizing technology for tasks such as data modeling, analysis, visualization, and process automation. Skilled in selecting and applying the most suitable standards, methods, tools, and frameworks for specific ML tasks and use cases. Capable of collaborating effectively within cross-functional teams, while also being able to work independently on complex ML projects. Demonstrates a strong analytical mindset and systematic approach to solving machine learning challenges. Excellent communication skills, able to present complex technical concepts clearly to both technical and non-technical audiences. What we look for A team of people with commercial acumen, technical experience, and enthusiasm to learn new things in this fast-moving environment. An opportunity to be part of a market-leading, multi-disciplinary team of 7,200+ professionals, in the only integrated global assurance business worldwide. 
Opportunities to work with EY GDS Assurance practices globally with leading businesses across a range of industries What working at EY offers At EY, we’re dedicated to helping our clients, from startups to Fortune 500 companies — and the work we do with them is as varied as they are. You get to work with inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees, and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer: Support, coaching and feedback from some of the most engaging colleagues around Opportunities to develop new skills and progress your career The freedom and flexibility to handle your role in a way that’s right for you About EY As a global leader in assurance, tax, transaction, and advisory services, we’re using the finance products, expertise, and systems we’ve developed to build a better working world. That starts with a culture that believes in giving you the training, opportunities, and creative freedom to make things better. Whenever you join, however long you stay, the exceptional EY experience lasts a lifetime. And with a commitment to hiring and developing the most passionate people, we’ll make our ambition to be the best employer by 2020 a reality. If you can confidently demonstrate that you meet the criteria above, please contact us as soon as possible. Join us in building a better working world. Apply now EY provides equal employment opportunities to applicants and employees without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 2 days ago
5.0 years
0 Lacs
India
Remote
Welcome to Veradigm! Our Mission is to be the most trusted provider of innovative solutions that empower all stakeholders across the healthcare continuum to deliver world-class outcomes. Our Vision is a Connected Community of Health that spans continents and borders. With the largest community of clients in healthcare, Veradigm is able to deliver an integrated platform of clinical, financial, connectivity and information solutions to facilitate enhanced collaboration and exchange of critical patient information. Veradigm Veradigm is here to transform health, insightfully. Veradigm delivers a unique combination of point-of-care clinical and financial solutions, a commitment to open interoperability, a large and diverse healthcare provider footprint, along with industry-proven expert insights. We are dedicated to simplifying the complicated healthcare system with next-generation technology and solutions, transforming healthcare from the point-of-patient care to everyday life. For more information, please explore www.veradigm.com. Job Summary What will your job look like: We are seeking a skilled .NET Full Stack Developer with 5+ years of experience in designing and developing web applications, including the integration of AI/ML solutions into business applications. The ideal candidate will be proficient in both front-end and back-end development using the Microsoft technology stack and have hands-on experience in leveraging AI APIs, machine learning models, or services like Azure AI, OpenAI, or custom ML models. Key Responsibilities Develop, test, and maintain scalable web applications using ASP.NET Core, C#, MVC, Web API. Build modern, responsive front-end interfaces using Angular / React / Blazor and integrate with backend APIs. Work with Entity Framework / EF Core and SQL Server / Azure SQL to manage data models and performance. Integrate AI features (e.g., chatbots, recommendation systems, NLP, OCR, or predictive analytics) using APIs or custom ML models (see the illustrative sketch after this listing). Utilize Azure Cognitive Services, OpenAI, Azure Machine Learning, or similar platforms for AI implementation. Collaborate with data scientists or ML engineers to embed models into production-ready systems. Follow best practices in coding, testing, DevOps (CI/CD), and secure application development. Participate in Agile development processes including planning, code reviews, and retrospectives. An Ideal Candidate Will Have 5+ years of experience in .NET development (C#, ASP.NET Core, Web API, MVC). Front-end experience with Angular / React / Blazor, HTML5, CSS, JavaScript/TypeScript. Hands-on experience integrating with AI services or APIs (e.g., OpenAI, Azure Cognitive Services, Google Cloud AI). Experience with RESTful APIs, Entity Framework, and SQL Server. Understanding of cloud platforms like Azure or AWS. Familiarity with Git, CI/CD pipelines, and Agile development. Good analytical, problem-solving, and communication skills. Benefits Veradigm believes in empowering our associates with the tools and flexibility to bring the best version of themselves to work. Through our generous benefits package with an emphasis on work/life balance, we give our employees the opportunity to allow their careers to flourish. 
Quarterly company-wide Recharge Days, a flexible work environment (remote/hybrid options), peer-based incentive “Cheer” awards, the “All in to Win” bonus program, and a Tuition Reimbursement Program. To know more about the benefits and culture at Veradigm, please visit the links below: https://veradigm.com/about-veradigm/careers/benefits/ and https://veradigm.com/about-veradigm/careers/culture/ We are an Equal Opportunity Employer. No job applicant or employee shall receive less favorable treatment or be disadvantaged because of their gender, marital or family status, color, race, ethnic origin, religion, disability or age; nor be subject to less favorable treatment or be disadvantaged on any other basis prohibited by applicable law. Veradigm is proud to be an equal opportunity workplace dedicated to pursuing and hiring a diverse and inclusive workforce. Thank you for reviewing this opportunity! Does this look like a great match for your skill set? If so, please scroll down and tell us more about yourself!
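The role itself is .NET-focused; purely to illustrate the "call a hosted AI model over REST" integration pattern mentioned in the responsibilities, here is a short Python sketch against the OpenAI chat-completions endpoint. The model name and prompt are placeholders, and a production implementation for this role would live in C#/ASP.NET Core.

```python
# Minimal sketch of calling a hosted AI model over REST (the integration pattern
# the posting mentions). In the role itself this would be written in C#/ASP.NET Core;
# the model name and the prompt below are placeholders.
import os
import requests

API_KEY = os.environ["OPENAI_API_KEY"]

resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "gpt-4o-mini",  # placeholder model name
        "messages": [
            {"role": "user", "content": "Summarize this note in two sentences: ..."}
        ],
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```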
Posted 2 days ago
3.0 - 8.0 years
0 Lacs
India
On-site
Role description: This role is with one of our prominent portfolio companies. About Us We are a San Francisco-based startup building next-generation Voice AI products that redefine how humans interact with machines, from smart voice assistants to automated customer conversations, voice-driven tools, and more. We are at the intersection of speech technology, large language models, and real-time systems, backed by leading investors and supported by domain experts. We’re now building our founding engineering team in India to shape the core product experience. What You'll Do Build and deploy real-time voice-based AI applications using ASR (Automatic Speech Recognition), TTS (Text-to-Speech), and LLMs (see the illustrative sketch after this listing). Work on latency-sensitive systems to enable near real-time conversations. Design and implement prompt-chaining, memory, and tool integration for LLM-powered voice agents. Set up and manage scalable infrastructure for voice/audio processing and AI model serving. Work closely with the founding team on product shaping, roadmap planning, and technical strategy. Continuously experiment with and evaluate new models, APIs, and speech/LLM techniques. Who You Are 3-8 years of experience in AI/ML, deep learning, or backend-heavy engineering roles. Solid hands-on experience with speech technologies (ASR, TTS, diarization, etc.). Comfortable working with Python, PyTorch, Hugging Face, OpenAI, or similar frameworks. Experience deploying real-time systems (Docker, Kubernetes, AWS/GCP). Strong problem-solving skills with a product-first mindset. Self-starter who thrives in high-ownership, fast-paced environments. Excellent written and verbal communication skills.
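As a minimal illustration of the ASR-to-LLM-to-TTS turn the role describes, here is a Python sketch that uses a Hugging Face speech-recognition pipeline and leaves the LLM and TTS stages as clearly labelled stubs. The model name and the latency handling are assumptions, not the company's actual stack.

```python
# Minimal sketch of one turn in a voice agent: transcribe audio, generate a reply,
# synthesize speech. The LLM and TTS calls are stubs; the ASR model name is an assumption.
import time
from transformers import pipeline  # pip install transformers

asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")

def generate_reply(text: str) -> str:
    # Stub: in practice this would call an LLM with prompt-chaining, memory, and tools.
    return f"You said: {text}"

def synthesize(text: str) -> bytes:
    # Stub: in practice this would call a TTS engine and return audio bytes.
    return text.encode("utf-8")

def handle_turn(audio_path: str) -> bytes:
    start = time.perf_counter()
    transcript = asr(audio_path)["text"]        # ASR
    reply = generate_reply(transcript)          # LLM
    audio_out = synthesize(reply)               # TTS
    # Track end-to-end latency, since the role is latency-sensitive.
    print(f"turn latency: {time.perf_counter() - start:.2f}s")
    return audio_out
```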
Posted 2 days ago
14.0 years
0 Lacs
India
Remote
Who We Are At Twilio, we’re shaping the future of communications, all from the comfort of our homes. We deliver innovative solutions to hundreds of thousands of businesses and empower millions of developers worldwide to craft personalized customer experiences. Our dedication to remote-first work, and strong culture of connection and global inclusion means that no matter your location, you’re part of a vibrant team with diverse experiences making a global impact each day. As we continue to revolutionize how the world interacts, we’re acquiring new skills and experiences that make work feel truly rewarding. Your career at Twilio is in your hands. See yourself at Twilio Join the team as Twilio’s next Senior Engineering Manager on Twilio’s Traffic Intelligence team. About The Job This position is needed to manage the team of machine learning engineers of the Growth & User Intelligence team and closely partner with Product & Engineering teams to execute the roadmap for Twilio’s AI/ML products and services. You will understand customers' needs, build ML and Data Science products that work at a global scale and own end-to-end execution of large-scale ML solutions. As a senior manager, you will closely partner with technology and product leaders in the organization to enable the engineers to turn ideas into reality. Responsibilities In this role, you’ll: Build and maintain scalable machine learning solutions for the Traffic Intelligence vertical. Be a champion for your team, setting individuals up for success and putting others’ growth first. Understand the architecture and processes required to build and operate always-available, complex, and scalable distributed systems in cloud environments. Advocate agile processes, continuous integration and test automation. Be a strategic problem solver and thrive operating in broad scope, from conception through continuous operation of 24x7 services. Exhibit strong communication skills: in person or on paper. You can explain technical concepts to product managers, architects, other engineers, and support. Qualifications Twilio values diverse experiences from all kinds of industries, and we encourage everyone who meets the required qualifications to apply. If your career is just starting or hasn't followed a traditional path, don't let that stop you from considering Twilio. We are always looking for people who will bring something new to the table! Required You have a minimum of 14 years of experience, including at least 5 years with a proven track record of leading and managing software teams. Experience managing multiple workstreams within the team. Bachelor’s or Master’s degree in Computer Science, Engineering or a related field. Technical Experience with: Applied ML models, with proficiency in Python. Experience in modern data storage, messaging, and processing tools (Kafka, Apache Spark, Hadoop, Presto, DynamoDB, etc.). Experience in cloud technologies like AWS, GCP, etc. Experience in ML frameworks like PyTorch, TensorFlow, or Keras. SaaS telemetry and observability tools such as Datadog, Grafana, etc. Excellent problem solving, critical thinking, and communication skills. Broad knowledge of development environments and tools used to implement and build code for deployment. Have strong familiarity with agile processes, continuous integration, and a strong belief in automation over toil. As a pragmatist, you are able to distill complex and ambiguous situations into actionable plans for your team. 
Owned and operated services end-to-end, from requirements gathering and design, to debugging and testing, to release management and operational monitoring. Desired: Experience with Large Language Models. Experience designing and implementing highly scalable and performant ML models. Location This role will be remote and based in India (Karnataka, Tamil Nadu, Telangana, Maharashtra & New Delhi). Travel We prioritize connection and opportunities to build relationships with our customers and each other. For this role, you may be required to travel occasionally to participate in project or team in-person meetings. What We Offer Working at Twilio offers many benefits, including competitive pay, generous time off, ample parental and wellness leave, healthcare, a retirement savings program, and much more. Offerings vary by location. Twilio thinks big. Do you? We like to solve problems, take initiative, pitch in when needed, and are always up for trying new things. That's why we seek out colleagues who embody our values — something we call Twilio Magic. Additionally, we empower employees to build positive change in their communities by supporting their volunteering and donation efforts. So, if you're ready to unleash your full potential, do your best work, and be the best version of yourself, apply now! If this role isn't what you're looking for, please consider other open positions. Twilio is proud to be an equal opportunity employer. We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, reproductive health decisions, or related medical conditions), sexual orientation, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, genetic information, political views or activity, or other applicable legally protected characteristics. We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law. Qualified applicants with arrest or conviction records will be considered for employment in accordance with the Los Angeles County Fair Chance Ordinance for Employers and the California Fair Chance Act. Additionally, Twilio participates in the E-Verify program in certain locations, as required by law.
Posted 2 days ago
0 years
0 Lacs
India
Remote
Job Description: AIML Automation Intern (Unpaid, 3 Months, PPO Opportunity) Location: Remote Employment Type: Internship (Unpaid, Full-Time/Part-Time as applicable) Duration: 3 Months Pre-Placement Offer (PPO): Potential, based on performance About The Role We are seeking a highly motivated AIML Automation Intern who is passionate about building and automating intelligent systems using Artificial Intelligence and Machine Learning. In this role, you’ll work on live projects that deliver real business value, gain hands-on experience in AI/ML automation, and learn from domain experts in a fast-paced, growth-focused team. All projects during your internship are live business initiatives—not simulations or capstones—giving you the opportunity to see your automation solutions deployed and making a real impact. An exceptional performance may lead to a Pre-Placement Offer (PPO) for a full-time position. What You’ll Do Build and deploy automation solutions leveraging AI/ML libraries and frameworks Collaborate with mentors, data scientists, and engineers to design, develop, and optimize automation pipelines Automate routine business workflows such as data extraction, pre-processing, and monitoring tasks (see the illustrative DAG sketch after this listing) Participate in the end-to-end ML lifecycle: data collection, model training, validation, deployment, and monitoring Contribute to continuous improvement of accuracy and efficiency for existing automation initiatives Present your solutions, document your process, and gather feedback for iteration Stay up-to-date with the latest AI/ML and automation trends and best practices What You’ll Get Live Projects: Work exclusively on real-world automation problems with real deployment and impact Mentorship: Access to experienced AI/ML practitioners for guidance and skill-building Growth: Be challenged in a fast-paced setting that pushes you to innovate and learn daily Portfolio: Ship automation modules and models that you can showcase to future employers Opportunity: Best performers may receive a Pre-Placement Offer (PPO) for a full-time role Culture: Join a driven team that values learning, collaboration, and making a difference Who You Are Genuinely interested in Artificial Intelligence, Machine Learning, and automation Comfortable with programming in Python and familiar with ML libraries such as scikit-learn, TensorFlow, or PyTorch Understanding of basic automation and workflow orchestration tools (such as Airflow, Selenium, or any RPA tools) is a plus Eager to take initiative, learn rapidly, and work both independently and as part of a team Strong analytical and problem-solving skills Effective communicator who is open to feedback and mentorship Portfolio, GitHub, or sample code/projects (personal, academic, or open-source) is highly preferred This is an unpaid internship for 3 months, with the potential for a PPO for standout performers. All your contributions will be to live automation projects—an unbeatable way to kickstart your AI/ML career! Note: This is an unpaid internship. Skills: automation, RPA tools, ML, Python, Airflow, PyTorch, Selenium, TensorFlow, data extraction, artificial intelligence, scikit-learn, workflow orchestration, machine learning
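As one illustration of the workflow automation mentioned above, here is a minimal Airflow 2.x DAG that chains a daily extract step and a preprocessing step. The DAG id, schedule, and task bodies are placeholders, not part of the internship description.

```python
# Minimal sketch: a small Airflow 2.x DAG automating a daily extract-and-preprocess step.
# The DAG id, schedule, and task bodies are illustrative placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Placeholder: pull raw records from a source system or API.
    print("extracting raw data...")

def preprocess():
    # Placeholder: clean and normalize the extracted records for model training.
    print("preprocessing data...")

with DAG(
    dag_id="daily_data_automation",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    preprocess_task = PythonOperator(task_id="preprocess", python_callable=preprocess)
    extract_task >> preprocess_task
```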
Posted 2 days ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Company Description WNS (Holdings) Limited (NYSE: WNS) is a leading Business Process Management (BPM) company. We combine our deep industry knowledge with technology and analytics expertise to co-create innovative, digital-led transformational solutions with clients across 10 industries. We enable businesses in Travel, Insurance, Banking and Financial Services, Manufacturing, Retail and Consumer Packaged Goods, Shipping and Logistics, Healthcare, and Utilities to re-imagine their digital future and transform their outcomes with operational excellence. We deliver an entire spectrum of BPM services in finance and accounting, procurement, customer interaction services and human resources, leveraging collaborative models that are tailored to address the unique business challenges of each client. We co-create and execute the future vision of 400+ clients with the help of our 44,000+ employees. Job Description As the senior data scientist, your role involves spearheading the development and execution of data-driven solutions for clients. Collaborating closely with clients, you will adeptly grasp their business needs, translating them into an AI/ML framework. Your expertise will be pivotal in designing models and selecting suitable techniques to address the client's specific challenges. Responsible for the entire data science project lifecycle, your duties extend from comprehensive data collection to meticulous model development, deployment, maintenance, and optimization. Your focus will particularly centre on crafting machine learning and deep learning models customized for retail and customer analytics, incorporating champion-challenger models to enhance performance. Effective communication with senior stakeholders is imperative in this role, and your proficiency in Python coding will be crucial for seamless end-to-end model development. As the lead data scientist, you will play a key role in driving innovative solutions that align with client objectives and industry best practices. You should possess good communication and project management skills and be able to communicate effectively with a wide range of audiences, both technical and business. You would be responsible for creating presentations, reports, etc., to present the analysis findings to the end clients/stakeholders. Should possess the ability to confidently socialize business recommendations and enable the customer organization to implement such recommendations. You must be familiar with, and able to implement, a range of models, including regression, classification, clustering, decision trees, random forests, support vector machines, naïve Bayes, GBM, XGBoost, multiple linear regression, logistic regression, and ARIMA/ARIMAX (see the forecasting sketch after this listing). You should be competent in Python (pandas, NumPy, scikit-learn, etc.), possess strong analytical skills, and have experience in the creation and/or evaluation of predictive models. Qualifications: Python for Data Science (mandatory); good proficiency in end-to-end coding, including deployment experience; experience processing large data sets; a minimum of 3 years of experience in the retail domain. Preferred skills include proficiency in SQL, Spark, Excel, Azure, AWS, GCP, Power BI, and Flask. Preferred experience in areas such as time series analysis, market mix modelling, attribution modelling, churn modelling, market basket analysis, etc. Possess a strong understanding of mathematics and logical thinking abilities. Excellent communication skills are a must. 
Qualifications: BTech/Master’s in Statistics/Mathematics/Economics/Econometrics from Tier 1-2 institutions, or BE/BTech, MCA, or MBA. Relevant Experience: 8+ years of hands-on experience in delivering Data Science/Analytics projects.
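As a minimal illustration of the ARIMA-style forecasting named in the model list, here is a Python sketch using statsmodels on a synthetic monthly series; the series, model order, and forecast horizon are assumptions for illustration only.

```python
# Minimal sketch: fit an ARIMA model and produce a short forecast.
# The synthetic series, order, and horizon are illustrative assumptions.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Synthetic monthly sales-like series with a mild trend and noise.
rng = np.random.default_rng(7)
index = pd.date_range("2021-01-01", periods=48, freq="MS")
series = pd.Series(100 + np.arange(48) * 1.5 + rng.normal(0, 5, 48), index=index)

model = ARIMA(series, order=(1, 1, 1))
fitted = model.fit()

# Forecast the next 6 periods for planning purposes.
print(fitted.forecast(steps=6))
```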
Posted 2 days ago
2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About Lowe’s Lowe’s is a FORTUNE® 100 home improvement company serving approximately 16 million customer transactions a week in the United States. With total fiscal year 2024 sales of more than $83 billion, Lowe’s operates over 1,700 home improvement stores and employs approximately 300,000 associates. Based in Mooresville, N.C., Lowe’s supports the communities it serves through programs focused on creating safe, affordable housing, improving community spaces, helping to develop the next generation of skilled trade experts and providing disaster relief to communities in need. For more information, visit Lowes.com. Lowe’s India, the Global Capability Center of Lowe’s Companies Inc., is a hub for driving our technology, business, analytics, and shared services strategy. Based in Bengaluru with over 4,500 associates, it powers innovations across omnichannel retail, AI/ML, enterprise architecture, supply chain, and customer experience. From supporting and launching homegrown solutions to fostering innovation through its Catalyze platform, Lowe’s India plays a pivotal role in transforming home improvement retail while upholding a strong commitment to social impact and sustainability. For more information, visit Lowes India. About The Team This team at a Fortune 100 tech company in the retail domain is responsible for building and maintaining critical enterprise platforms and frameworks that empower internal developers and drive key business functions. Their work spans the entire software development lifecycle and customer journey, encompassing tools like an Internal Developer Portal, front-end frameworks, A/B testing and customer insights platforms, workflow and API management solutions, a Customer Data Platform (CDP), and robust testing capabilities including performance and chaos testing. This team is instrumental in providing the foundational technology that enables innovation, efficiency, and a deep understanding of their customers. Job Summary The primary purpose of this role is to translate business requirements and functional specifications into logical program designs and to deliver code modules, stable application systems, and software solutions. This includes developing, configuring, or modifying integrated business and/or enterprise application solutions within various computing environments. This role will be working closely with stakeholders and cross-functional departments to communicate project statuses and proposals. Core Responsibilities Translates business requirements and specifications into logical program designs, code modules, stable application systems, and software solutions with occasional guidance from senior colleagues; partners with the product team to understand business needs and functional specifications. Develops, configures, or modifies integrated business and/or enterprise application solutions within various computing environments by designing and coding component-based applications using various programming languages. Tests applications using test-driven development and behavior-driven development frameworks to ensure the integrity of the application. Conducts root cause analysis of issues and participates in the code review process to identify gaps. Implements continuous integration/continuous delivery processes to ensure quality and efficiency in the development cycle using DevOps automation processes and tools. Ideates, builds, and publishes reusable libraries to improve productivity across teams. 
Conducts the implementation and maintenance of complex business and enterprise software solutions to ensure successful deployment of released applications. Solves difficult technical problems to ensure solutions are testable, maintainable, and efficient. Years Of Experience 2 years of experience in software development or a related field 2 years of experience working on project(s) involving the implementation of solutions applying development life cycles (SDLC) through iterative agile development. 2 years' experience working with any of the following: frontend technologies (user interface/user experience), middleware (microservices and application programming interfaces), database technologies, or DevOps. Required Minimum Qualifications 2 years of experience writing technical documentation in a software environment and developing and implementing business systems within an organization Bachelor's degree in computer science, computer information systems, or related field (or equivalent work experience in lieu of degree). Skill Set Required Core Java Proficiency: Deep understanding of Java fundamentals, data structures, algorithms, and best practices. Spring Framework (especially Spring Boot): Experience building and deploying applications with Spring Boot, including dependency injection, RESTful API development, and data persistence. Microservices Architecture: Understanding of microservice principles, design patterns, and experience building and deploying distributed systems. Kafka Expertise: Hands-on experience with Kafka for message queuing, event streaming, and building asynchronous communication between services. API Design & Development: Proficiency in designing and implementing robust and scalable RESTful APIs. SQL & NoSQL Databases: Experience working with both relational (SQL) databases and NoSQL databases (e.g., MongoDB, Elastic), including data modeling and query optimization for each. Lowe's is an equal opportunity employer and administers all personnel practices without regard to race, color, religious creed, sex, gender, age, ancestry, national origin, mental or physical disability or medical condition, sexual orientation, gender identity or expression, marital status, military or veteran status, genetic information, or any other category protected under federal, state, or local law. Starting rate of pay may vary based on factors including, but not limited to, position offered, location, education, training, and/or experience. For information regarding our benefit programs and eligibility, please visit https://talent.lowes.com/us/en/benefits.
Posted 2 days ago
4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About Lowe’s Lowe’s is a FORTUNE® 100 home improvement company serving approximately 16 million customer transactions a week in the United States. With total fiscal year 2024 sales of more than $83 billion, Lowe’s operates over 1,700 home improvement stores and employs approximately 300,000 associates. Based in Mooresville, N.C., Lowe’s supports the communities it serves through programs focused on creating safe, affordable housing, improving community spaces, helping to develop the next generation of skilled trade experts and providing disaster relief to communities in need. For more information, visit Lowes.com. Lowe’s India, the Global Capability Center of Lowe’s Companies Inc., is a hub for driving our technology, business, analytics, and shared services strategy. Based in Bengaluru with over 4,500 associates, it powers innovations across omnichannel retail, AI/ML, enterprise architecture, supply chain, and customer experience. From supporting and launching homegrown solutions to fostering innovation through its Catalyze platform, Lowe’s India plays a pivotal role in transforming home improvement retail while upholding a strong commitment to social impact and sustainability. For more information, visit Lowes India. About The Team The Marketing Creative Team at Lowe's Home Improvement is the driving force behind our brand's compelling storytelling and visual identity. This dynamic group of creatives collaborates to deliver impactful campaigns that inspire and engage our customers. By blending innovation with a deep understanding of the home improvement industry, the team crafts memorable experiences across print, digital, and social platforms. Dedicated to excellence, creativity, and customer focus, the Marketing Creative Team ensures Lowe's remains a trusted partner for every project, big or small. Job Summary The Creative Designer – Store Design is responsible for executing in-store signage and point-of-purchase (POP) materials that support Lowe’s visual identity and improve the customer experience. Reporting to the Creative Manager, this role works closely with Copywriters, Producers, and Visual Merchandising teams to deliver high-quality, brand-aligned creative across store communication touchpoints. Designers in this role are expected to manage day-to-day creative tasks independently, work within established brand guidelines, and iterate based on feedback from stakeholders and senior team members. Strong visual and production skills, attention to detail, and consistency in execution are essential. While primarily focused on production and implementation, this role also provides opportunities to collaborate on broader creative initiatives and grow design expertise within a retail-focused environment. Roles & Responsibilities Core Responsibilities: Design compelling in-store graphics, signage, and point-of-purchase (POP) materials aligned with Lowe’s brand standards and seasonal guidelines. Deliver projects across Tier 2-3 for different channels/formats. Collaborate with the Store Environment Team, Display Management Team, and Brand creative team to ensure design accuracy, timely delivery, and adherence to brand compliance. Support the development of point-of-purchase materials with a customer-centric focus to meet internal client needs. Apply seasonal style guides and templates for efficient, brand-consistent execution. Good attention to detail is a must. Translate briefs into creative solutions that balance promotional messaging and brand aesthetics. 
Manage projects efficiently, ensuring timely, error-free delivery, using project management tools like Workfront and ProofHQ for review and feedback integration. Support visual updates for vendor signage and ensure compliance with evolving brand guidelines. Present design concepts clearly to cross-functional partners and the creative leadership team. Stay up to date with retail trends and continuously improve design skills and processes. Years Of Experience 2–4 years in graphic or store design, preferably within retail or agency environments. Education Qualification & Certifications (optional) Required Minimum Qualifications Bachelor’s degree in Graphic Design, Visual Communication, Retail Design, or a related field. Skill Set Required Primary Skills (must have) Proficiency in Adobe Creative Suite (Photoshop, Illustrator, InDesign) and the Mac OS environment. Strong understanding of visual storytelling, typography, composition, and branding. A portfolio demonstrating excellence in store graphics, signage systems, and retail design. Familiarity with production requirements for print and in-store materials. Ability to balance creative flair with a strong focus on creative excellence. Secondary Skills (desired) Familiarity with retail fixture systems, floor plan layouts, and visual merchandising principles. Experience working with US stakeholders. Exposure to photography or 3D visualization tools is a plus. Lowe's is an equal opportunity employer and administers all personnel practices without regard to race, color, religious creed, sex, gender, age, ancestry, national origin, mental or physical disability or medical condition, sexual orientation, gender identity or expression, marital status, military or veteran status, genetic information, or any other category protected under federal, state, or local law. Starting rate of pay may vary based on factors including, but not limited to, position offered, location, education, training, and/or experience. For information regarding our benefit programs and eligibility, please visit https://talent.lowes.com/us/en/benefits.
Posted 2 days ago
6.0 years
0 Lacs
Chandigarh, India
Remote
Job Summary: We are looking for an experienced and results-driven Training & Placement Officer with deep expertise in the EdTech or technical education sector, particularly in AI, data science, and emerging technologies. The ideal candidate will be a strategic leader with a strong track record in learner skill development, corporate tie-ups, and end-to-end placement management. You will play a pivotal role in aligning training outcomes with industry expectations and ensuring our learners secure rewarding career opportunities. Key Responsibilities: Training Responsibilities: Collaborate with academic and technical teams to design and deliver job readiness training programs, including soft skills, communication, aptitude, and technical interview prep. Conduct workshops, mock interviews, group discussions, and coding assessments to enhance employability. Identify skill gaps and recommend curriculum improvements based on industry feedback and placement trends. Integrate AI-driven tools and analytics to personalize training paths for learners. Partner with industry experts and trainers for guest lectures, live projects, and certification programs. Placement Responsibilities: Build and manage a strong network of corporate partners, IT companies, startups, and HR departments across India and global markets. Organize on-campus and virtual placement drives, job fairs, and recruitment events. Coordinate with hiring managers, schedule interviews, and track placement progress. Maintain a placement management system to monitor offers, CTC, roles, and employer feedback. Mentor learners on resume building, LinkedIn profiling, interview techniques, and career planning. Achieve and exceed placement KPIs (e.g., 85%+ placement rate, average CTC benchmarks). Prepare detailed placement reports and analytics for leadership and accreditation purposes. Requirements: Bachelor’s or Master’s degree in Education, HR, Computer Science, or Business Administration. 4–6 years of proven experience as a Training & Placement Officer in a technical institute, EdTech company, or corporate training academy. Demonstrated success in placing candidates in AI, ML, Data Science, Software Development, or IT/ITES sectors. Strong corporate network with tech companies and recruitment agencies. Excellent communication, leadership, and organizational skills. Proficiency in placement tracking tools, CRM systems, MS Office, and Google Workspace. In-depth understanding of current hiring trends, job roles in AI/tech, and recruitment processes. Passion for education, technology, and youth empowerment. Preferred Qualifications: Prior experience in an AI/ML-focused EdTech startup or bootcamp. Familiarity with LMS platforms (e.g., Moodle, TalentLMS), AI-based career recommendation engines, or ATS systems. Experience managing remote/hybrid training and placement programs. Certification in Career Counseling or HR Development (e.g., NCCS, NCVT, or equivalent).
Posted 2 days ago
10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Candidates should have 10+ years of experience and proven experience in an SRE, DevOps, or infrastructure engineering role with a focus on monitoring, automation, and orchestration. Expertise in monitoring tools (Prometheus, ELK, Grafana, etc.), with the ability to optimize monitoring systems and integrate ML/AI models to improve visibility, anomaly detection, and proactive issue resolution. Extensive hands-on experience with automation tools such as Terraform, Ansible, and Jenkins, along with proficiency in CI/CD pipelines, to efficiently streamline and optimize network operations and workflows. Strong Linux administration skills. Strong knowledge of the Networking and Security domain, with the ability to critically analyse infrastructure and network designs and propose innovative improvements to enhance performance, reliability, stability and security. Proficiency in scripting languages (Bash, Python, Go). Proficiency with containerization and orchestration (Docker, Kubernetes). Understanding of cloud platforms such as AWS, Azure, or Google Cloud. Familiarity with microservices architecture and distributed systems. Familiarity with basic AI tools is considered an advantage. Allianz Group is one of the most trusted insurance and asset management companies in the world. Caring for our employees, their ambitions, dreams and challenges, is what makes us a unique employer. Together we can build an environment where everyone feels empowered and has the confidence to explore, to grow and to shape a better future for our customers and the world around us. We at Allianz believe in a diverse and inclusive workforce and are proud to be an equal opportunity employer. We encourage you to bring your whole self to work, no matter where you are from, what you look like, who you love or what you believe in. We therefore welcome applications regardless of ethnicity or cultural background, age, gender, nationality, religion, disability or sexual orientation. Join us. Let’s care for tomorrow. Note: Diversity of minds is an integral part of Allianz’ company culture. One means to achieve diverse teams is a regular rotation of Allianz Executive employees across functions, Allianz entities and geographies.
Therefore, the company encourages its employees to have motivation in gaining varied skills from different positions and to collect experiences from across Allianz Group.
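As a concrete illustration of the monitoring-plus-ML theme in this role, here is a minimal Python sketch that pulls a metric series from the standard Prometheus HTTP API and flags outliers with a simple z-score. The Prometheus URL, metric query, and threshold are illustrative assumptions, not details from the posting.

```python
"""Minimal sketch: flag anomalous samples from a Prometheus metric with a z-score."""
import requests
import numpy as np

PROM_URL = "http://prometheus.internal:9090"  # hypothetical endpoint
QUERY = "avg(rate(http_request_duration_seconds_sum[5m]))"  # example metric expression


def fetch_series(start: int, end: int, step: str = "60s") -> np.ndarray:
    """Query the Prometheus range API and return the metric values as floats."""
    resp = requests.get(
        f"{PROM_URL}/api/v1/query_range",
        params={"query": QUERY, "start": start, "end": end, "step": step},
        timeout=10,
    )
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    values = result[0]["values"] if result else []
    return np.array([float(v) for _, v in values])


def zscore_anomalies(series: np.ndarray, threshold: float = 3.0) -> np.ndarray:
    """Return a boolean mask of points more than `threshold` std devs from the mean."""
    mean, std = series.mean(), series.std()
    if std == 0:
        return np.zeros_like(series, dtype=bool)
    return np.abs(series - mean) / std > threshold
```

In a production setup the z-score would typically be replaced by a learned model, and the check would run on a schedule and feed an alerting pipeline rather than print results.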
Posted 2 days ago
10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Requisition ID # 25WD90121 Position Overview Are you a problem solver who thrives on building real-world AI applications? Do you geek out over LLMs, RAG, MCP and agentic architectures? Want to help shape a brand-new team and build cool stuff that actually ships? If so, read on. We’re building a new Applied AI team within Autodesk’s Data and Process Management (DPM) group. As a Founding Principal Engineer, you’ll be at the heart of this initiative — working in a highly dynamic environment, designing, building, and scaling AI-powered experiences across our diverse portfolio providing critical Product Lifecycle Management (PLM) and Product Data Management (PDM) capabilities to our customers. You’ll work on real production systems, solve hard problems, and help define the future of AI at Autodesk. Responsibilities Build AI-powered Experiences: Architect and develop production-grade AI applications that are scalable, resilient & secure Shape AI Strategy: Help define the AI roadmap for DPM by identifying opportunities, evaluating emerging technologies, and guiding long-term direction Operationalize LLMs: Fine-tune, evaluate, and deploy large language models in production environments. Balance performance, cost, and user experience while working with real-world data and constraints Build for Builders: Design frameworks and tools that make it easier for other teams to develop AI-powered experiences Guide Engineering Practices: Collaborate with other engineering teams to define and evolve best practices for AI experimentation, evaluation, and optimization. Provide technical guidance and influence decisions across teams Drive Innovation: Stay on top of the latest in AI technologies (e.g., LLMs, VLMs, Foundation Models) and architecture patterns such as fine-tuning, RAG, function calling, MCP and more—and bring these innovations to production effectively Optimize for Scale: Ensure AI applications are resilient, performant, and can scale well in production Collaborate Across Functions: Partner with product managers, architects, engineers, and data scientists to bring AI features to life in Autodesk products Minimum Qualifications Master’s in Computer Science, AI, Machine Learning, Data Science, or a related field 10+ years building scalable cloud-native applications, with 3+ years focused on production AI/ML systems Deep understanding of LLMs, VLMs, and foundation models, including their architecture, limitations, and practical applications Experience fine-tuning LLMs using real-world datasets and integrating them into production systems Experience with LLM-related technologies, including frameworks, embedding models, vector databases, Retrieval-Augmented Generation (RAG) systems, and MCP, in production settings Deep understanding of data modeling, system architectures, and processing techniques Experience with AWS cloud services and SageMaker Studio (or similar) for scalable data processing and model development Proven track record of building and deploying scalable cloud-native AI applications using platforms like AWS, Azure, or Google Cloud. Proficiency in Python or TypeScript You love tackling complex challenges and delivering elegant, scalable solutions You can explain technical concepts clearly to both technical and non-technical audiences Preferred Qualifications Experience building AI applications in the CAD or manufacturing domain.
Experience designing evaluation pipelines for LLM-based systems (e.g., prompt testing, hallucination detection, safety filters) Familiarity with tools and frameworks for LLM fine-tuning and orchestration (e.g., LoRA, QLoRA, AoT P-Tuning etc.) A passion for mentoring and growing engineering talent Experience with emerging Agentic AI solutions such as LangGraph, CrewAI, A2A, Opik Comet, or equivalents Contributions to open-source AI projects or publications in the field Bonus points if you’ve ever explained RAG to a non-technical friend—and they got it Learn More About Autodesk Welcome to Autodesk! Amazing things are created every day with our software – from the greenest buildings and cleanest cars to the smartest factories and biggest hit movies. We help innovators turn their ideas into reality, transforming not only how things are made, but what can be made. We take great pride in our culture here at Autodesk – it’s at the core of everything we do. Our culture guides the way we work and treat each other, informs how we connect with customers and partners, and defines how we show up in the world. When you’re an Autodesker, you can do meaningful work that helps build a better world designed and made for all. Ready to shape the world and your future? Join us! Salary transparency Salary is one part of Autodesk’s competitive compensation package. Offers are based on the candidate’s experience and geographic location. In addition to base salaries, our compensation package may include annual cash bonuses, commissions for sales roles, stock grants, and a comprehensive benefits package. Diversity & Belonging We take pride in cultivating a culture of belonging where everyone can thrive. Learn more here: https://www.autodesk.com/company/diversity-and-belonging Are you an existing contractor or consultant with Autodesk? Please search for open jobs and apply internally (not on this external site).
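For readers unfamiliar with the RAG pattern the posting references, here is a minimal, self-contained Python sketch of retrieval-augmented prompting. The embed() function is a random-vector stand-in for a real embedding model, and none of the names come from Autodesk's stack.

```python
"""Minimal RAG retrieval sketch: embed chunks, rank by cosine similarity, build a prompt."""
import numpy as np


def embed(text: str) -> np.ndarray:
    # Placeholder embedding: deterministic-per-string random vector, normalized.
    # A real system would call an embedding model here.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(384)
    return v / np.linalg.norm(v)


def top_k(query: str, chunks: list[str], k: int = 3) -> list[str]:
    """Return the k chunks whose embeddings are most similar to the query embedding."""
    q = embed(query)
    return sorted(chunks, key=lambda c: float(np.dot(q, embed(c))), reverse=True)[:k]


def build_prompt(query: str, chunks: list[str]) -> str:
    """Assemble a grounded prompt from the retrieved context."""
    context = "\n---\n".join(top_k(query, chunks))
    return f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"
```

In practice the chunk embeddings would be precomputed and stored in a vector database, and the assembled prompt would be sent to an LLM rather than returned as a string.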
Posted 2 days ago
7.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Company Description WNS (Holdings) Limited (NYSE: WNS), is a leading Business Process Management (BPM) company. We combine our deep industry knowledge with technology and analytics expertise to co-create innovative, digital-led transformational solutions with clients across 10 industries. We enable businesses in Travel, Insurance, Banking and Financial Services, Manufacturing, Retail and Consumer Packaged Goods, Shipping and Logistics, Healthcare, and Utilities to re-imagine their digital future and transform their outcomes with operational excellence. We deliver an entire spectrum of BPM services in finance and accounting, procurement, customer interaction services and human resources leveraging collaborative models that are tailored to address the unique business challenges of each client. We co-create and execute the future vision of 400+ clients with the help of our 44,000+ employees. Job Description The role aims to leverage data analysis, engineering, and AI/ML techniques to drive strategic business decisions and innovations. This position is responsible for designing and implementing scalable data pipelines, developing innovative models, and managing cloud infrastructure to ensure efficient data processing and storage. The role also involves collaborating with cross-functional teams to translate business needs into technical solutions, mentoring junior team members, and staying abreast of the latest technological advancements. Effective communication, particularly in English, is essential to articulate complex insights and foster a collaborative environment. The ultimate goal is to enhance data-driven decision-making and maintain a competitive edge through continuous improvement and innovation. Data and AI Specialist, Consulting role Key Responsibilities: Python developer experienced with Azure Cloud using Azure Databricks for Data Science: Create models and algorithms to analyze data and solve business problems Application Architecture: Knowledge of enterprise application integration and application design Cloud Management: Knowledge of hosting and supporting applications on Azure Cloud Data Engineering: Build and maintain systems to process and store data efficiently Collaboration: Work with different teams to understand their needs and provide data solutions. Share insights through reports and presentations Research: Keep up with the latest tech trends and improve existing models and systems Mentorship: Guide and support junior team members Must have: Python development in AI/ML and Data Analysis: Strong programming skills in Python or R, SQL Proficiency in statistical analysis and machine learning techniques Hands-on experience in NLP and NLU Experience with data visualization and reporting tools (e.g., Power BI) Experience with Microsoft Power Platforms and SharePoint (e.g., Power Automate) Hands-on experience in using SharePoint for content management Data Engineering: Expertise in designing and maintaining data pipelines and ETL processes Experience with data storage solutions (e.g., Azure SQL) Understanding of data quality and governance principles Experience with Databricks for big data processing and analytics Cloud Management: Proficiency in cloud platforms (e.g., Azure) Knowledge of hosting and supporting applications on Azure Cloud Knowledge of cloud security and compliance best practices Collaboration and Communication: Experience in agile methodologies and project management tools (e.g., Jira) Strong interpersonal and communication skills Ability to translate complex technical concepts into business terms Experience working in cross-functional teams Excellent English communication skills, both written and verbal Research and Development: Ability to stay updated with the latest advancements in data science, AI/ML, and cloud technologies Experience in conducting research and improving model performance Mentorship: Experience in guiding and mentoring junior team members Ability to foster a collaborative and innovative team environment Must exhibit the following core behaviors: Taking ownership / accountability of the projects assigned Qualifications Bachelor's or Master's degree in Computer Science, Data Science, AI/ML, IT, or related fields, or MCA 7-9 years of relevant experience Proficiency in Python, R, cloud platforms (Azure), and data visualization tools like Power BI Advanced certifications and experience with big data technologies, real-time data processing Excellent English communication skills
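To ground the Databricks/ETL requirements above, here is a minimal PySpark sketch of a batch transform of the kind described; the mount paths, column names, and Delta output location are placeholders, not anything from the role.

```python
"""Minimal Databricks-style PySpark ETL sketch: read raw CSVs, clean, aggregate, write Delta."""
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("sales_etl_sketch").getOrCreate()

# Hypothetical raw landing zone mounted into the workspace.
raw = spark.read.option("header", True).csv("/mnt/raw/sales/*.csv")

# Basic cleaning: cast the amount column and drop rows where it is missing.
clean = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount").isNotNull())
)

# Aggregate into a daily, per-region data product.
daily = clean.groupBy("order_date", "region").agg(F.sum("amount").alias("total_amount"))

# Write to a curated Delta location (Delta Lake is available on Databricks by default).
daily.write.mode("overwrite").format("delta").save("/mnt/curated/daily_sales")
```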
Posted 2 days ago
6.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
We are seeking a Full Stack Developer to join our Cloud Engineering team to build intuitive, scalable, and cloud-integrated applications. You will work across both backend and frontend, primarily using Python and React, to develop modern tools and interfaces supporting cloud automation and internal platforms. The ideal candidate should have hands-on experience in building full-stack applications and exposure to cloud-native environments. Familiarity with infrastructure automation or ML-based systems is a strong plus. Full Stack Development (Primary Role) Build and maintain full-stack applications using Python (FastAPI/Flask) and React.js Design and develop REST APIs and data pipelines for cloud-integrated platforms Design and develop intuitive frontend UIs with modern JavaScript tooling (React, Redux, etc.) Collaborate with backend, DevOps, and UI/UX teams to deliver scalable features Participate in code reviews, design discussions, and performance optimizations Support cloud-native practices such as containerization, serverless, and CI/CD Develop automation scripts for deployment, monitoring, and diagnostics Must-Have Skills 3–6 years of experience with Python (FastAPI, Django, Flask) for backend development Strong proficiency in React.js and related frontend frameworks Solid understanding of API development, authentication, and secure data exchange Experience with databases (SQL or NoSQL), Git workflows, and Docker Familiarity with DevOps practices: CI/CD pipelines, version control, and cloud deployments Strong debugging, problem-solving, and software design skills #ADL
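As a small illustration of the FastAPI backend work described above, the sketch below exposes a toy REST resource that a React frontend could call. The resource name and fields are invented for the example, not taken from the posting.

```python
"""Minimal FastAPI sketch of a backend REST resource (run with: uvicorn app:app)."""
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="cloud-tools-api")


class Deployment(BaseModel):
    name: str
    environment: str = "dev"


# In-memory store for the sketch; a real service would use a database.
DEPLOYMENTS: dict[str, Deployment] = {}


@app.post("/deployments", status_code=201)
def create_deployment(dep: Deployment) -> Deployment:
    """Register a deployment record."""
    DEPLOYMENTS[dep.name] = dep
    return dep


@app.get("/deployments/{name}")
def get_deployment(name: str) -> Deployment:
    """Fetch a deployment record or return 404 if it does not exist."""
    if name not in DEPLOYMENTS:
        raise HTTPException(status_code=404, detail="deployment not found")
    return DEPLOYMENTS[name]
```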
Posted 2 days ago
12.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Position: Senior Technical Leader - Backend Location: Mumbai, India [Thane] Team: Engineering Experience: 12+ Years 🚀 Are you a seasoned technical leader looking to drive engineering excellence at scale? At Netcore Cloud, we’re seeking a Senior Technical Leader who brings deep technical expertise, a track record of designing scalable systems, and a passion for innovation. This is a high-impact role where you will lead the architecture and design of mission-critical systems that power user engagement for thousands of global brands. 🛠️ What You’ll Do Architect highly available, scalable, and fault-tolerant backend systems handling billions of events and terabytes of data. Design real-time campaign processing engines capable of delivering 10 million+ messages per minute. Lead development of complex analytics frameworks including cohort analysis, funnel tracking, and user behavior modeling. Drive architecture decisions on distributed systems, microservices, and cloud-native platforms. Define technical roadmaps and work closely with engineering teams to ensure alignment and execution. Collaborate across product, engineering, DevOps, and data teams to deliver business-critical functionality. Mentor engineers and contribute to engineering excellence through code and design reviews, best practice evangelism, and training. Evaluate and implement tools and frameworks for continuous improvement in scalability, performance, and observability. 🧠 What You Bring 12+ years of hands-on experience in software engineering with a strong foundation in Java or Golang and related backend technologies. Proven experience designing distributed systems, microservices, and event-driven architectures. Deep knowledge of cloud platforms (AWS/GCP), CI/CD, containerization (Docker, Kubernetes) and infrastructure as code. Strong understanding of data processing at scale using Kafka, NoSQL DBs (MongoDB/Cassandra), Redis, and RDBMS (MySQL/PostgreSQL). Exposure to stream processing engines (e.g., Apache Storm/Flink/Spark) is a plus. Familiarity with AI tools and their integration into scalable systems is a plus. Experience with application security, fault tolerance, caching, multithreading, and performance tuning. A mindset of quality, ownership, and delivering business value. 💡 Why Netcore? Being first is in our nature. Netcore Cloud is the first and leading AI/ML-powered customer engagement and experience platform (CEE) that helps B2C brands increase engagement, conversions, revenue, and retention. Our cutting-edge SaaS products enable personalized engagement across the entire customer journey and build amazing digital experiences for businesses of all sizes. Netcore’s Engineering team focuses on adoption, scalability, complex challenges, and fastest processing. We use versatile tech stacks like streaming technologies and queue management systems such as Kafka, Storm, RabbitMQ, Celery, and RedisQ. Netcore strikes a perfect balance between experience and agility. We currently work with 5000+ enterprise brands across 18 countries, serving over 70% of India’s Unicorns, positioning us among the top-rated customer engagement & experience platforms. Headquartered in Mumbai, we have a global footprint across 10 countries, including the United States and Germany. Being certified as a Great Place to Work for three consecutive years reinforces Netcore’s principle of being a people-centric company — where you're not just an employee but part of a family. 🌟 What’s in it for You?
Immense growth and continuous learning. Solve complex engineering problems at scale. Work with top industry talent and global brands. An open, entrepreneurial culture that values innovation. 📩 Ready to shape the future of digital customer engagement? Apply now — your next big opportunity starts here. A career at Netcore is more than just a job — it’s an opportunity to shape the future. Learn more at netcorecloud.com.
Posted 2 days ago
7.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Title: Product Analyst (AI and Asset/Wealth Management) Location: Mumbai, India (Onsite) Full-time. Job Description: We are seeking a highly skilled and analytical Technical Business Analyst / Product Analyst with a strong background in the Asset and Wealth Management Domain, combined with hands-on experience in Artificial Intelligence (AI) technologies, including LLMs like GPT, NLP, and AI-driven solutions. The ideal candidate will play a critical role in bridging the gap between business objectives and cutting-edge AI solutions, driving innovation and digital transformation initiatives. Key Responsibilities • Collaborate with stakeholders to gather and analyze business requirements related to AI product implementation in the Asset and Wealth Management domain. • Translate business needs into clear, actionable product and technical requirements for development teams. • Drive AI product roadmap planning and help prioritize features with tangible business impact. • Conduct deep-dive analyses of wealth and asset management data to identify opportunities for AI automation, personalization, and process optimization. • Partner with data scientists, machine learning engineers, and AI architects to develop and validate AI models, especially LLM-based use cases like document summarization, intelligent chatbots, fraud detection, etc. • Lead proof-of-concept (PoC) and pilot projects for AI/ML applications in products such as portfolio risk assessment, client service automation, KYC, compliance monitoring, etc. • Monitor AI model performance, suggest continuous improvements, and ensure explainability and regulatory compliance. • Stay up-to-date with the latest AI advancements (especially GPT-4/LLMs), asset and wealth management regulations, and competitive intelligence. Required Qualifications • 7+ years of experience as a Business Analyst or Product Analyst, with at least 2 years in AI/ML or Generative AI-related initiatives. • Proven experience in the Asset and Wealth Management industry (e.g., portfolio management, compliance, AML, KYC, client onboarding, investment advisory). • Familiarity with AI tools, frameworks, and platforms (e.g., OpenAI GPT, Azure OpenAI, Hugging Face, LangChain, etc.). • Strong understanding of AI concepts such as NLP, machine learning pipelines, LLM fine-tuning, embeddings, and vector databases. • Ability to write detailed BRDs, PRDs, and user stories with technical depth. • Experience working in Agile/Scrum environments. • Proficiency in SQL, Excel, and at least one data visualization or analysis tool (e.g., Power BI, Tableau, Jupyter Notebooks). • Excellent communication skills with both technical and non-technical stakeholders. Preferred Qualifications • Formal coursework or certification in AI, Machine Learning, or Data Science (e.g., Coursera, Stanford, DeepLearning.AI, etc.). • Hands-on experimentation with GPT APIs or prompt engineering in real-world projects. • Experience with AI use cases such as intelligent document processing, customer chatbots, RAG pipelines, or automated decision-making. • Exposure to MLOps, AI model monitoring, and explainability frameworks.
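To make the document-summarization use case above concrete, here is a minimal sketch using the OpenAI Python client (v1+). The model name, prompt, and truncation are assumptions for illustration, not details from the role.

```python
"""Minimal sketch: summarize a client document with an LLM via the OpenAI Python client."""
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def summarize_client_document(text: str) -> str:
    """Return a short analyst-oriented summary of the supplied document text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice for the sketch
        messages=[
            {
                "role": "system",
                "content": "Summarize client documents for a wealth-management analyst.",
            },
            {"role": "user", "content": text[:8000]},  # naive truncation for the sketch
        ],
    )
    return response.choices[0].message.content
```

A production version would add chunking for long documents, PII handling, and evaluation of the summaries against regulatory and explainability requirements, as the responsibilities above imply.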
Posted 2 days ago
10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Title: Senior Technical Trainer – Cloud, Data & AI/ML Location: Pune Experience Required: 10+ Years About the Role: We’re looking for an experienced and passionate technical trainer who can help elevate our teams’ capabilities in cloud technologies, data engineering, and AI/ML. This role is ideal for someone who enjoys blending hands-on tech skills with a strong ability to simplify, teach, and mentor. As we grow and scale at Meta For Data, building internal expertise is a key part of our strategy—and you’ll be central to that effort. What You’ll Be Doing: Lead and deliver in-depth training sessions (both live and virtual) across areas like cloud architecture, data engineering, and machine learning. Build structured training content including presentations, labs, exercises, and assessments. Develop learning journeys tailored to different experience levels and roles—ranging from new hires to experienced engineers. Continuously update training content to reflect changes in tools, platforms, and best practices. Collaborate with engineering, HR, and L&D teams to roll out training schedules, track attendance, and gather feedback. Support ongoing learning post-training through mentoring, labs, and knowledge checks. What We’re Looking For: Around 10 years of experience in a mix of software development, cloud/data/ML engineering, and technical training. Deep familiarity with at least one cloud platform (AWS, Azure, or GCP); AWS or Azure is preferred. Strong grasp of data platforms, ETL pipelines, Big Data tools (like Spark or Hadoop), and warehouse systems. Solid understanding of the AI/ML lifecycle—model building, tuning, deployment—with hands-on experience in Python-based libraries (e.g., TensorFlow, PyTorch, Scikit-learn). Confident communicator who’s comfortable speaking to groups and explaining complex concepts simply. Bonus if you hold any relevant certifications like AWS Solutions Architect, Google Data Engineer, or Microsoft AI Engineer. Nice to Have: Experience creating online training modules or managing LMS platforms. Prior experience training diverse audiences: tech teams, analysts, product managers, etc. Familiarity with MLOps and modern deployment practices for AI models. Why Join Us? You’ll have the freedom to shape how technical learning happens at Meta For Data. You’ll be part of a team that values innovation, autonomy, and real impact. Flexible working options and a culture that supports growth - for our teams and our trainers.
Posted 2 days ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Description About TripStack - We are travel tech entrepreneurs, changing the way millions of people travel. Our proprietary virtual interlining technology provides access to billions of travel itineraries by combining flights from different airline carriers that don’t traditionally work together. We take our customers from point A to B via C, including land transportation, at the lowest possible price. We are impacting the way people travel and provide higher margin opportunities to our partners that are some of the largest in the travel industry. We pride ourselves on the performance-driven environment we have created for our teams to prosper and excel in. We come to work ready to challenge and be challenged. We’re big enough to give our teams support but small enough that every person makes a difference. There are plenty of challenges to champion. Who we are: Etraveli Group is the globally leading company for tech solutions and fulfillment capabilities for online sales of flights. We are here to solve complexity by connecting millions of flights and travelers across the globe, from search and selection to trip and beyond. We hold consumer online travel agency brands like Mytrip, GoToGate & Flight Network and serve exclusively Booking.com with flights. Etraveli Group has also established strategic partnerships with companies like Skyscanner, Google Flights, and TUI. Every day we strive to make the world smaller for our customers and bigger for our people. Our diverse team of more than 2300 passionate professionals is what makes us the industry’s tech wonder and the best in the world at what we do. Our major offices are in Sweden (HQ), Canada, Greece, India, and Poland. Requirements 5+ years of well-rounded travel industry experience (with a focus on NDC, GDS, or OTAs) Strategic thinker with critical decision-making skills Strong experience in data-backed decision making/statistics/market research Output and delivery driven; experience working under pressure in a fast-paced role with significant context switching Highly influential, proactive leader with strong business acumen Strong technical skills and extensive experience working with APIs Exceptional organizational and project management skills Exceptional communication and negotiation skills Entrepreneurial at heart with a focus on culture building Passion for problem solving and root cause analysis Team player demonstrating awesome leadership skills and desire to help develop employees Outstanding research skills Ability to find the balance between user needs and business goals while questioning and validating assumptions along the way. Responsibilities Help define the vision for our Virtual Interlining (VI) product and create a roadmap in line with business goals. Translate the vision and strategy for the products into tasks for development. Understand the architecture of our VI product and how it interacts with other products and parts of the business. Work very closely with our VI Product Owner and Product Analysts and oversee the execution of our vision for this product. Own the end-to-end development of our VI. Contribute to the planning, execution, and review of each sprint. Manage and prioritize the backlog for our VI team. Be responsible for the P&L of the product with a focus on unit economics and efficiency. Keep internal technical teams accountable for quality output and product delivery. Work towards defining and tracking key KPIs to monitor the impact of delivered features.
Lead R&D initiatives, including A/B testing, to enhance the product and work closely with our ML and Data Science teams. Work closely with internal and external stakeholders to understand and anticipate client and user needs. Review existing processes and flows to identify new opportunities and areas that need enhancements, including the development process. Complete regular market analysis, always having an eye on the competitive landscape. Collect and analyze information, solve complex problems rationally, and make decisions on time for enhanced productivity. Interview, onboard, and manage direct reports. Provide guidance and be an example to others in the organization. Benefits What it takes to succeed here Ambition and dedication to make a difference and change the way people travel; we always play to each other’s strengths in a high-performing team reaching for our common goal. We hold ourselves to the highest expectations, move with a sense of urgency, hold ourselves accountable, and win by staying true to what we believe in. Learn more about our values here What We Offer We offer an opportunity to work with a young, dynamic, and growing team composed of high-caliber professionals. We value professionalism and promote a culture where individuals are encouraged to do more and be more. If you feel you share our passion for excellence and growth, then look no further. We have an ambitious mission, and we need a world-class team to make it a reality. Upgrade to a First Class team!
Posted 2 days ago
10.0 years
0 Lacs
Greater Chennai Area
On-site
Do you want to make an impact on patient health around the world? Do you thrive in a fast-paced environment that brings together scientific, clinical, and commercial domains through engineering, data science, and analytics? Then join Pfizer Digital’s Artificial Intelligence, Data, and Advanced Analytics organization (AIDA) where you can leverage cutting-edge technology to inform critical business decisions and improve customer experiences for our patients and physicians. Our collection of engineering, data science, and analytics professionals are at the forefront of Pfizer’s transformation into a digitally driven organization that leverages data science and advanced analytics to change patients’ lives. The Data Science Industrialization team within Data Science Solutions and Initiatives leads the scaling of data and insights capabilities - critical drivers and enablers of Pfizer’s digital transformation. As the AI and Data Science Production Deployment Lead, you will be a leader within the Data Science Industrialization team charged with driving the deployment of AI use cases and reusable components into full production. You will lead a global team and partner with cross-functional business stakeholders and Digital leaders to catalyze identification, design, iterative development, and continuous improvements of deployment processes to support production data science workflows and AI applications. Your team will define and implement standard processes for quality assurance, testing, data ops, model ops, and dev ops while also providing SDLC, support, platform engineering, and cloud engineering guidance as needed. In addition, you will be responsible for providing critical input into the AI ecosystem and platform strategy to promote self-service, drive productization and collaboration, and foster innovation. Your team will be accountable to key Pfizer business functions (including Pfizer Biopharma, R&D, PGS, Oncology, and Enabling Functions) for production deployments of data science workflows and AI solutions that support major business objectives across all of Pfizer’s core business units. 
Role Responsibilities Lead deployment of production AI solutions and reusable software components with automated self-monitoring QA/QC processes Implement QA and testing, data ops, model ops, and DevOps for data science workflow products, industrialized workflow accelerators, and best practices in the production deployment of scalable AI/ML analytic insights products Enforce best practices for QA and testing and SDLC production support to ensure reliability and availability of deployed software Act as a subject matter expert for production deployment processes of data science workflows, AI solutions, and reusable software components on cross functional teams in bespoke organizational initiatives by providing thought leadership and execution support Direct QA and testing, data ops and model ops, DevOps, platform and cloud engineering research, advance data science workflow CI/CD orchestration capabilities, drive improvements in automation and self-service production deployment processes, implement best practices, and contribute to the broader talent building framework by facilitating related trainings Set a vision, prioritize workstreams, and provide day-to-day leadership, supervision, and mentorship for a global team with technical & functional expertise that includes QA and testing, DevOps, data science, and operations Coach direct reports to adopt best practices, improve technical skills, develop an innovative mindset, and achieve professional growth through technical and organizational thought leadership Communicate value delivered through reusable AI components to end user functions (e.g., Chief Marketing Office, Biopharma Commercial and Medical Affairs) and evangelize innovative ideas of reusable & scalable development approaches/frameworks/methodologies to enable new ways of developing and deploying AI solutions Partner with other leaders within the Data Science Industrialization team to define team roadmap and drive impact by providing strategic and technical input including platform evolution, vendor scan, and new capability development Partner with AI use case development teams to ensure successful integration of reusable components into production AI solutions Partner with AIDA Platforms team on end to end capability integration between enterprise platforms and internally developed reusable component accelerators (API registry, ML library / workflow management, enterprise connectors) Partner with AIDA Platforms team to define best practices for production deployment of reusable components to identify and mitigate potential risks related to component performance, security, responsible AI, and resource utilization Basic Qualifications Bachelor’s degree in AI, data science, or engineering related area (Computer Engineering, Computer Science, Information Systems, Engineering or a related discipline) 10+ years of work experience in data science, or engineering, or operations for a diverse range of projects 2-3 years of hands-on experience leading data science or AI/ML deployment and operations teams Track record of managing stakeholder groups and effecting change Recognized by peers as an expert in production deployment and AI/ML ops with deep expertise in CI/CD and DevOps for monitoring and orchestration of data science workflows, and hands-on development Understands how to synthesize facts and information from varied data sources, both new and pre-existing, into clear insights and perspectives that can be understood by business stakeholders Clearly articulates expectations, 
capabilities, and action plans; actively listens with others’ frame of reference in mind; appropriately shares information with team; favorably influences people without direct authority Clearly articulates scope and deliverables of projects; breaks complex initiatives into detailed component parts and sequences actions appropriately; develops action plans and monitors progress independently; designs success criteria and uses them to track outcomes; engages with stakeholders throughout to ensure buy-in Manages projects with and through others; shares responsibility and credit; develops self and others through teamwork; comfortable providing guidance and sharing expertise with others to help them develop their skills and perform at their best; helps others take appropriate risks; communicates frequently with team members earning respect and trust of the team Experience in translating business priorities and vision into product/platform thinking, setting clear directives for a group of team members with diverse skillsets, while providing functional & technical guidance and SME support Ability to manage projects from end-to-end, from requirements gathering through implementation, hypercare, and development of support processes to ensure longevity of solutions Demonstrated experience interfacing with internal and external teams to develop innovative data science solutions Strong understanding of the data science development lifecycle (CRISP) Deep experience with CI/CD integration (e.g., GitHub, GitHub Actions, or Jenkins) Deep understanding of MLOps principles and tech stack (e.g., MLflow) Experience working in a cloud-based analytics ecosystem (AWS, Snowflake, etc.) Highly self-motivated to deliver both independently and with strong team collaboration Ability to creatively take on new challenges and work outside their comfort zone Strong English communication skills (written & verbal) Preferred Qualifications Advanced degree in Data Science, Computer Engineering, Computer Science, Information Systems or related discipline Experience in solution architecture & design Experience in software/product engineering Strong hands-on skills for data and machine learning pipeline orchestration via the Dataiku (DSS 10+) platform Hands-on experience working in Agile teams, processes, and practices Pharma & Life Science commercial functional knowledge Pharma & Life Science commercial data literacy Experience with Dataiku Data Science Studio Ability to work non-traditional work hours interacting with global teams spanning across the different regions (e.g., North America, Europe, Asia) Pfizer is an equal opportunity employer and complies with all applicable equal employment opportunity legislation in each jurisdiction in which it operates. Information & Business Tech
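As one concrete slice of the model-ops practices this role covers, the sketch below logs metrics and registers a model with MLflow so a downstream CI/CD job could promote it. The dataset, model choice, and registry name are illustrative only, not Pfizer specifics.

```python
"""Minimal MLflow sketch: train a toy model, log a metric, register the artifact."""
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

with mlflow.start_run(run_name="candidate-model"):
    model = LogisticRegression(max_iter=500).fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    mlflow.log_metric("accuracy", acc)
    # Registering requires a tracking server with a model registry; the name is hypothetical.
    mlflow.sklearn.log_model(
        model,
        artifact_path="model",
        registered_model_name="demo_classifier",
    )
```

In a full deployment pipeline, a CI/CD job (e.g., GitHub Actions or Jenkins, as listed above) would compare the logged metric against a threshold before transitioning the registered model to a staging or production stage.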
Posted 2 days ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
AI/ML Scientist – Global Data Analytics, Technology (Maersk) This position will be based in India – Bangalore A.P. Moller - Maersk A.P. Moller – Maersk is the global leader in container shipping services. The business operates in 130 countries and employs 80,000 staff. An integrated container logistics company, Maersk aims to connect and simplify its customers’ supply chains. Today, we have more than 180 nationalities represented in our workforce across 131 countries, and this means we have an elevated level of responsibility to continue to build an inclusive workforce that is truly representative of our customers, their customers, and our vendor partners too. The Brief In this role as an AI/ML Scientist on the Global Data and Analytics (GDA) team, you will support the development of strategic, visibility-driven recommendation systems that serve both internal stakeholders and external customers. This initiative aims to deliver actionable insights that enhance supply chain execution, support strategic decision-making, and enable innovative service offerings. You should be able to design, develop, and implement machine learning models, conduct deep data analysis, and support decision-making with data-driven insights. Responsibilities include building and validating predictive models, supporting experiment design, and integrating advanced techniques like transformers, GANs, and reinforcement learning into scalable production systems. The role requires solving complex problems using NLP, deep learning, optimization, and computer vision. You should be comfortable working independently, writing reliable code with automated tests, and contributing to debugging and refinement. You’ll also document your methods and results clearly and collaborate with cross-functional teams to deliver high-impact AI/ML solutions that align with business objectives and user needs. What I'll be doing – your accountabilities?
Design, develop, and implement machine learning models, conduct in-depth data analysis, and support decision-making with data-driven insights Develop predictive models and validate their effectiveness Support the design of experiments to validate and compare multiple machine learning approaches Research and implement cutting-edge techniques (e.g., transformers, GANs, reinforcement learning) and integrate models into production systems, ensuring scalability and reliability Apply creative problem-solving techniques to design innovative models, develop algorithms, or optimize workflows for data-driven tasks Independently apply data-driven solutions to ambiguous problems, leveraging tools like Natural Language Processing, deep learning frameworks, machine learning, optimization methods and computer vision frameworks Understand technical tools and frameworks used by the team, including programming languages, libraries, and platforms, and actively support debugging or refining code in projects Write and integrate automated tests alongside your models or code to ensure reproducibility, scalability, and alignment with established quality standards Contribute to the design and documentation of AI/ML solutions, clearly detailing methodologies, assumptions, and findings for future reference and cross-team collaboration Collaborate across teams to develop and implement high-quality, scalable AI/ML solutions that align with business goals, address user needs, and improve performance Foundational Skills Mastered Data Analysis and Data Science concepts and can demonstrate this skill in complex scenarios AI & Machine Learning, Programming and Statistical Analysis Skills beyond the fundamentals and can demonstrate the skills in most situations without guidance. Specialized Skills To be able to understand beyond the fundamentals and can demonstrate in most situations without guidance: Data Validation and Testing Model Deployment Machine Learning Pipelines Deep Learning Natural Language Processing (NLP) Optimization & Scientific Computing Decision Modelling and Risk Analysis. To understand fundamentals and can demonstrate this skill in common scenarios with guidance: Technical Documentation. Qualifications & Requirements Bachelor’s degree (B.E./B.Tech), preferably in Computer Science, Data Science, Mathematics, Statistics, or related fields.
Strong practical understanding of: Machine Learning algorithms (classification, regression, clustering, time-series) Statistical inference and probabilistic modeling Data wrangling, feature engineering, and preprocessing at scale Proficiency in collaborative development tools: IDEs (e.g., VS Code, Jupyter), Git/GitHub, CI/CD workflows, unit and integration testing Excellent coding and debugging skills in Python (preferred), with knowledge of SQL for large-scale data operations Experience working with: Versioned data pipelines, model reproducibility, and automated model testing Ability to work in agile product teams, handle ambiguity, and communicate effectively with both technical and business stakeholders Passion for continuous learning and applying AI/ML in impactful ways Preferred Experiences 5+ years of experience in AI/ML or Data Science roles, working on applied machine learning problems in production settings 5+ years of hands-on experience with: Apache Spark, distributed computing, and large-scale data processing Deep learning using TensorFlow or PyTorch Model serving via REST APIs, batch/streaming pipelines, or ML platforms Hands-on experience with: Cloud-native development (Azure preferred; AWS or GCP also acceptable) Databricks, Azure ML, or SageMaker platforms Experience with Docker, Kubernetes, and orchestration of ML systems in production Familiarity with A/B testing, causal inference, business impact modeling Exposure to visualization and monitoring tools: Power BI, Superset, Grafana Prior work in logistics, supply chain, operations research, or industrial AI use cases is a strong plus Maersk is committed to a diverse and inclusive workplace, and we embrace different styles of thinking. Maersk is an equal opportunities employer and welcomes applicants without regard to race, colour, gender, sex, age, religion, creed, national origin, ancestry, citizenship, marital status, sexual orientation, physical or mental disability, medical condition, pregnancy or parental leave, veteran status, gender identity, genetic information, or any other characteristic protected by applicable law. We will consider qualified applicants with criminal histories in a manner consistent with all legal requirements. We are happy to support your need for any adjustments during the application and hiring process. If you need special assistance or an accommodation to use our website, apply for a position, or to perform a job, please contact us by emailing accommodationrequests@maersk.com.
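To illustrate the "reliable code with automated tests" expectation above, here is a minimal Python sketch that trains a scikit-learn model and guards it with a reproducible acceptance test. The synthetic data and pass threshold are assumptions made for the example.

```python
"""Minimal sketch: a model-training function plus a pytest-style acceptance test."""
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split


def train(X: np.ndarray, y: np.ndarray, seed: int = 42) -> RandomForestClassifier:
    """Train a fixed-seed classifier so results are reproducible across runs."""
    return RandomForestClassifier(n_estimators=100, random_state=seed).fit(X, y)


def test_model_beats_baseline() -> None:
    """Automated check: the model must clear an illustrative F1 threshold on held-out data."""
    rng = np.random.default_rng(0)
    X = rng.normal(size=(400, 6))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic signal for the sketch
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = train(X_tr, y_tr)
    assert f1_score(y_te, model.predict(X_te)) > 0.8  # illustrative acceptance threshold


if __name__ == "__main__":
    test_model_beats_baseline()
    print("model test passed")
```

Run under pytest (or directly, as above) in CI so that a regression in model quality fails the build before deployment.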
Posted 2 days ago
7.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: Software Engineer Consultant/Expert 34326 Location: Chennai Work Type: Contract (Onsite) Compensation: Up to ₹21–24 LPA (Based on experience) Notice Period: Immediate joiners preferred Experience: Minimum 7+ years (9 preferred) Position Summary Seeking a skilled and motivated Full Stack Java Developer to join a growing software engineering team responsible for building and supporting a global logistics data warehouse platform. This platform provides end-to-end visibility into vehicle shipments using GCP cloud technologies, microservices architecture, and real-time data processing pipelines. Key Responsibilities Design, develop, and maintain robust backend systems using Java, Spring Boot, and microservices architecture Implement and optimize REST APIs, and integrate with Pub/Sub, Kafka, and other event-driven systems Build and maintain scalable data processing workflows using GCP BigQuery, Cloud Run, and Terraform Collaborate with product managers, architects, and fellow engineers to deliver impactful features Perform unit testing, integration testing, and support functional and user acceptance testing Conduct code reviews and provide mentorship to other engineers to improve code quality and standards Monitor system performance and implement strategies for optimization and scalability Develop and maintain ETL/data pipelines to transform and manage logistics data Continuously refactor and enhance existing code for maintainability and performance Required Skills Strong hands-on experience with Java, Spring Boot, and full stack development Proficiency with GCP Cloud Platform, including at least 1 year of experience with BigQuery Experience with GCP Cloud Run, Terraform, and deploying containerized services Deep understanding of REST APIs, microservices, Pub/Sub, Kafka, and cloud-native architectures Experience in ETL development, data engineering, or data warehouse projects Exposure to AI/ML integration in enterprise applications is a plus Preferred Skills Familiarity with AI agents and modern AI-driven data products Experience working with global logistics, supply chain, or transportation domains Education Requirements Required: Bachelor’s degree in Computer Science, Information Technology, or related field Preferred: Advanced degree or specialized certifications in cloud or data engineering Work Environment Location: Chennai (Onsite required) Work closely with cross-functional product teams in an Agile setup Fast-paced, data-driven environment requiring strong communication and problem-solving skills Skills: rest apis, cloud run, bigquery, gcp, pub/sub, data, data engineering, kafka, microservices, terraform, cloud, spring boot, data warehouse, java, code, etl development, full stack development, gcp cloud platform
Posted 2 days ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Position Title: Software Engineer Consultant/Expert – GCP Data Engineer 34350 Location: Chennai Engagement Type: Contract Compensation: Up to ₹18 LPA Notice Period: Immediate joiners preferred Work Mode: Onsite Role Overview This role is for a proactive Google Cloud Platform (GCP) Data Engineer who will contribute to the modernization of a cloud-based enterprise data warehouse. The ideal candidate will focus on integrating diverse data sources to support advanced analytics and AI/ML-driven solutions, as well as designing scalable pipelines and data products for real-time and batch processing. This opportunity is ideal for individuals who bring both architectural thinking and hands-on experience with GCP services, big data processing, and modern DevOps practices. Key Responsibilities Design and implement scalable, cloud-native data pipelines and solutions using GCP technologies Develop ETL/ELT processes to ingest and transform data from legacy and modern platforms Collaborate with analytics, AI/ML, and product teams to enable data accessibility and usability Analyze large datasets and perform impact assessments across various functional areas Build data products (data marts, APIs, views) that power analytical and operational platforms Integrate batch and real-time data using tools like Pub/Sub, Kafka, Dataflow, and Cloud Composer Operationalize deployments using CI/CD pipelines and infrastructure as code Ensure performance tuning, optimization, and scalability of data platforms Contribute to best practices in cloud data security, governance, and compliance Provide mentorship, guidance, and knowledge-sharing within cross-functional teams Mandatory Skills GCP expertise with hands-on use of services including: BigQuery, Dataflow, Data Fusion, Dataform, Dataproc, Cloud Composer (Airflow), Cloud SQL, Compute Engine, Cloud Functions, Cloud Run, Cloud Build, App Engine Strong knowledge of SQL, data modeling, and data architecture Minimum 5+ years of experience in SQL and ETL development At least 3 years of experience in GCP cloud environments Experience with Python, Java, or Apache Beam Proficiency in Terraform, Docker, Tekton, and GitHub Familiarity with Apache Kafka, Pub/Sub, and microservices architecture Understanding of AI/ML integration, data science concepts, and production datasets Preferred Experience Hands-on expertise in container orchestration (e.g., Kubernetes) Experience working in regulated environments (e.g., finance, insurance) Knowledge of DevOps pipelines, CI/CD, and infrastructure automation Background in coaching or mentoring junior data engineers Experience with data governance, compliance, and security best practices in the cloud Use of project management tools such as JIRA Proven ability to work independently in fast-paced or ambiguous environments Strong communication and collaboration skills to interact with cross-functional teams Education Requirements Required: Bachelor's degree in Computer Science, Information Systems, Engineering, or related field Preferred: Master's degree or relevant industry certifications (e.g., GCP Data Engineer Certification) Skills: bigquery, cloud sql, ml, apache beam, app engine, gcp, dataflow, microservices architecture, cloud functions, compute engine, project management tools, data science concepts, security best practices, pub/sub, ci/cd, compliance, cloud run, java, cloud build, jira, data, pipelines, dataproc, sql, tekton, python, github, data modeling, cloud composer, terraform, data fusion, cloud, data architecture, apache kafka, ai/ml integration, docker, data governance, infrastructure automation, dataform
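As a small example of the batch ELT work described above, the sketch below runs a BigQuery transform into a curated table using the google-cloud-bigquery client. The project, dataset, and table names are placeholders; in practice such a step would typically be scheduled from Cloud Composer (Airflow) rather than run ad hoc.

```python
"""Minimal BigQuery ELT sketch: transform a raw table into a curated data product."""
from google.cloud import bigquery

client = bigquery.Client(project="example-project")  # hypothetical project

SQL = """
SELECT order_date, region, SUM(amount) AS total_amount
FROM `example-project.raw.orders`
GROUP BY order_date, region
"""

# Write the aggregated result to a curated destination table, replacing prior contents.
job_config = bigquery.QueryJobConfig(
    destination="example-project.curated.daily_orders",
    write_disposition="WRITE_TRUNCATE",
)
client.query(SQL, job_config=job_config).result()  # blocks until the job completes
```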
Posted 2 days ago