
677 Drift Jobs - Page 23

Set up a job alert
JobPe aggregates results for easy access, but you apply directly on the original job portal.

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Maersk, the world's largest shipping company, is transforming into an industrial digital giant that enables global trade with its land, sea and port assets. We are the digital and software development organization that builds products in the areas of predictive science, optimization and IoT. This position offers the opportunity to build your engineering career in a data- and analytics-intensive environment, delivering work that has a direct and significant impact on the success of our company. Global Data Analytics delivers internal apps to grow revenue and optimize costs across Maersk's business units. We practice agile development in teams empowered to deliver products end-to-end, for which data and analytics are crucial assets. This is an extremely exciting time to join a fast-paced, growing and dynamic team that solves some of the toughest problems in the industry and builds the future of trade and logistics. We are an open-minded, friendly and supportive group who strive for excellence together. A.P. Moller - Maersk maintains a strong focus on career development, and strong team members regularly have broad possibilities to expand their skill set and impact in an environment characterized by change and continuous progress.

The team - who are we: We are an ambitious team with the shared passion to use data, machine learning (ML) and engineering excellence to make a difference for our customers. We are a team, not a collection of individuals. We value our diverse backgrounds, our different personalities and our strengths and weaknesses. We value trust and passionate debates. We challenge each other and hold each other accountable. We uphold a caring feedback culture to help each other grow, professionally and personally. We are now seeking a new team member who is excited about using experiments at scale and ML-driven personalisation to create a seamless experience for our users, helping them find the products and content they didn't even know they were looking for, and driving engagement and business value.

Our new member - who are you: You are driven by curiosity and passionate about partnering with a diverse range of business and tech colleagues to deeply understand their customers, uncover new opportunities, advise and support them in the design, execution and analysis of experiments, or develop ML solutions for ML-driven personalisation (e.g., supervised or unsupervised) that drive substantial customer and business impact. You will use your expertise in experiment design, data science, causal inference and machine learning to stimulate data-driven innovation. This is an incredibly exciting role with high impact. You are, like us, a team player who cares about your team members, about growing professionally and personally, about helping your teammates grow, and about having fun together.

Basic Qualifications:
- Bachelor's or master's degree in Computer Science, Software Engineering, Data Science, or a related field
- 3–5 years of professional experience designing, building, and maintaining scalable data pipelines, in both on-premises and cloud (Azure preferred) environments
- Strong expertise in working with large datasets from Salesforce, port operations, cargo tracking, and other enterprise systems
- Proficient in writing scalable, high-quality SQL queries and in Python and object-oriented programming, with a solid grasp of data structures and algorithms
- Experience with software engineering best practices, including version control (Git), CI/CD pipelines, code reviews, and writing unit/integration tests
- Familiarity with containerization and orchestration tools (Docker, Kubernetes) for data workflows and microservices
- Hands-on experience with distributed data systems (e.g., Spark, Kafka, Delta Lake, Hadoop)
- Experience in data modelling and workflow orchestration tools like Airflow
- Ability to support ML engineers and data scientists by building production-grade data pipelines
- Demonstrated experience collaborating with product managers, domain experts, and stakeholders to translate business needs into robust data infrastructure
- Strong analytical and problem-solving skills, with the ability to work in a fast-paced, global, and cross-functional environment

Preferred Qualifications:
- Experience deploying data solutions in enterprise-grade environments, especially in the shipping, logistics, or supply chain domain
- Familiarity with Databricks, Azure Data Factory, Azure Synapse, or similar cloud-native data tools
- Knowledge of MLOps practices, including model versioning, monitoring, and data drift detection
- Experience building or maintaining RESTful APIs for internal ML/data services using FastAPI, Flask, or similar frameworks
- Working knowledge of ML concepts, such as supervised learning, model evaluation, and retraining workflows
- Understanding of data governance, security, and compliance practices
- Passion for clean code, automation, and continuously improving data engineering systems to support machine learning and analytics at scale

Maersk is committed to a diverse and inclusive workplace, and we embrace different styles of thinking. Maersk is an equal opportunities employer and welcomes applicants without regard to race, colour, gender, sex, age, religion, creed, national origin, ancestry, citizenship, marital status, sexual orientation, physical or mental disability, medical condition, pregnancy or parental leave, veteran status, gender identity, genetic information, or any other characteristic protected by applicable law. We will consider qualified applicants with criminal histories in a manner consistent with all legal requirements. We are happy to support your need for any adjustments during the application and hiring process. If you need special assistance or an accommodation to use our website, apply for a position, or to perform a job, please contact us by emailing accommodationrequests@maersk.com.
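As a rough illustration of the orchestration and drift-detection work this posting touches on, here is a minimal sketch of a daily drift check wrapped in an Airflow DAG. It assumes Airflow 2.x, numpy, and scipy are installed; the dataset, the loader helpers, and the 0.05 significance threshold are hypothetical placeholders, not Maersk's pipeline.

```python
# Minimal sketch: daily two-sample drift check scheduled as an Airflow DAG.
# load_baseline/load_latest, the data, and the threshold are illustrative only.
from datetime import datetime

import numpy as np
from scipy.stats import ks_2samp
from airflow import DAG
from airflow.operators.python import PythonOperator


def load_baseline() -> np.ndarray:
    # Placeholder: in practice, read a reference snapshot of a feature
    # (e.g., from Delta Lake or a feature store).
    return np.random.default_rng(0).normal(size=10_000)


def load_latest() -> np.ndarray:
    # Placeholder: the most recent day's values for the same feature.
    return np.random.default_rng(1).normal(loc=0.1, size=10_000)


def check_drift() -> None:
    baseline, latest = load_baseline(), load_latest()
    stat, p_value = ks_2samp(baseline, latest)  # two-sample Kolmogorov-Smirnov test
    print(f"KS statistic={stat:.4f}, p-value={p_value:.4g}")
    if p_value < 0.05:  # arbitrary illustrative threshold
        # Failing the task surfaces the drift in Airflow's alerting.
        raise ValueError("Possible data drift detected; investigate before retraining.")


with DAG(
    dag_id="daily_feature_drift_check",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="check_drift", python_callable=check_drift)
```

A real pipeline would typically run this per feature and push the statistics to a monitoring backend instead of printing them.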

Posted 2 months ago

Apply

4.0 years

0 Lacs

India

On-site

We are hiring a high-agency ML/AI Engineer to architect and deliver cutting-edge AI solutions for our enterprise clients. This isn't just another ML engineering role - you'll be the technical owner driving complex AI projects end-to-end, from ideation through production deployment and ongoing monitoring and improvement.

You'll spend your time:
- 50% building robust, scalable AI systems that solve real business problems
- 25% researching and prototyping innovative solutions using the latest AI advances
- 25% collaborating with clients and stakeholders to translate business needs into technical solutions

About thinkbridge
thinkbridge is how growth-stage companies can finally turn into tech disruptors. They get a new way there - with world-class technology strategy, development, maintenance, and data science all in one place. But solving technology problems like these involves a lot more than code. That's why we encourage think'ers to spend 80% of their time thinking through solutions and 20% coding them. With an average client tenure of 4+ years, you won't be hopping from project to project here - unless you want to. So you really can get to know your clients and understand their challenges on a deeper level. At thinkbridge, you can expand your knowledge during work hours specifically reserved for learning, or even transition to a completely different role in the organization. It's all about challenging yourself while you challenge small thinking.

thinkbridge is a place where you can:
- Think bigger - because you have the time, opportunity, and support it takes to dig deeper and tackle larger issues.
- Move faster - because you'll be working with experienced, helpful teams who can guide you through challenges, quickly resolve issues, and show you new ways to get things done.
- Go further - because you have the opportunity to grow professionally, add new skills, and take on new responsibilities in an organization that takes a long-term view of every relationship.

thinkbridge... there's a new way there.

Why This Role Is Different
- True Ownership: You'll be the technical architect making critical design decisions, not just implementing someone else's vision
- Production Focus: We need someone who's deployed models/systems AND kept them running - monitoring drift, handling failures, improving performance
- Diverse Projects: From GenAI applications (65%) to classical ML solutions (35%), across Retail, HRTech, Fintech, and Healthcare domains
- Technical Architecture: Design systems and guide implementation decisions without the overhead of formal people management

What is expected of you? As part of the job, you will be required to:
- Architect end-to-end ML/AI solutions that actually work in production
- Build and maintain production-grade systems with proper monitoring, alerting, and continuous improvement
- Make strategic technical decisions on approach, tools, and implementation
- Translate complex AI concepts into business value for clients
- Set technical direction for project teams through architecture and best practices
- Stay current with AI research and identify practical applications for client problems

If your beliefs resonate with these, you are looking at the right place!
- Accountability - finish what you started
- Communication - context-aware, proactive, and clean communication
- Outcome - high throughput
- Quality - high-quality work and consistency
- Ownership - go beyond requirements

Must-have technical skills
- Strong Python proficiency with production ML experience
- Hands-on experience deploying AND maintaining ML systems in production
- Experience with both GenAI (LLMs, RAG systems) and classical ML techniques
- Understanding of ML monitoring, drift detection, and model lifecycle management
- Cloud deployment experience (Azure knowledge helpful; AWS experience highly valued)
- Containerization and basic MLOps practices

Good-to-have technical skills
- Experience fine-tuning open-source models to match or beat proprietary models
- Advanced MLOps (CI/CD for ML, A/B testing, feature stores)
- Published work (papers, blogs, open-source contributions)
- Experience with streaming/real-time ML systems

What We're Really Looking For
Beyond technical skills, we need someone who:
- Takes initiative and drives projects without waiting for instructions
- Has actually felt the pain of their own technical decisions in production
- Can explain "why this approach" to both engineers and business stakeholders
- Thinks critically about when to use (and when NOT to use) GenAI
- Has opinions about ML best practices based on real experience

Our Flagship Policies and Benefits:
- Work from anywhere!
- Flexible work hours
- All leaves taken are paid leaves
- Family insurance
- Quarterly Collaboration Week
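Since the posting calls out RAG systems alongside classical ML, here is a minimal sketch of the retrieval step behind a RAG pipeline, using TF-IDF and cosine similarity from scikit-learn to stay dependency-light. The documents, query, and top-k value are invented for illustration; a production system would more likely use dense embeddings and a vector store.

```python
# Minimal sketch of RAG-style retrieval with TF-IDF instead of embeddings.
# The corpus and query are made up; only the mechanics are illustrated.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Invoices are processed within three business days.",
    "Refunds require a completed return form and the original receipt.",
    "Shipping times vary by carrier and destination country.",
]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(documents)


def retrieve(query: str, top_k: int = 2) -> list[str]:
    """Return the top_k documents most similar to the query."""
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, doc_matrix).ravel()
    ranked = scores.argsort()[::-1][:top_k]
    return [documents[i] for i in ranked]


# The retrieved snippets would then be inserted into the LLM prompt as context.
print(retrieve("How long does a refund take?"))
```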

Posted 2 months ago

Apply

7.0 years

0 Lacs

Mumbai, Maharashtra

On-site

- 10+ years of professional or military experience, including a Bachelor's degree.
- 7+ years managing complex, large-scale projects with internal or external customers.
- Able to assist internal customers by delivering an ML/DL project from beginning to end, including understanding the business need, aggregating data, exploring data, building and validating predictive models, and deploying completed models to deliver business impact to the organization.
- Skilled in using Deep Learning frameworks (MXNet, Caffe2, TensorFlow, Theano, CNTK, Keras) and ML tools (SparkML, Amazon Machine Learning) to build models for internal customers.

AWS Sales, Marketing, and Global Services (SMGS) is responsible for driving revenue, adoption, and growth from the largest and fastest-growing small- and mid-market accounts to enterprise-level customers, including the public sector.

Excited by using massive amounts of data to develop Machine Learning (ML) and Deep Learning (DL) models? Want to help the largest global enterprises derive business value through the adoption of Artificial Intelligence (AI)? Eager to learn from many different enterprises' use cases of AWS ML and DL? Thrilled to be a key part of Amazon, which has been investing in Machine Learning for decades, pioneering and shaping the world's AI technology?

At AWS ProServe India LLP ("ProServe India"), we are helping large enterprises build ML and DL models on the AWS Cloud. We are applying predictive technology to large volumes of data and against a wide spectrum of problems. Our Professional Services organization works together with our internal customers to address the business needs of AWS customers using AI. AWS Professional Services is a unique consulting team in ProServe India. We pride ourselves on being customer obsessed and highly focused on the AI enablement of our customers. If you have experience with AI, including building ML or DL models, we'd like to have you join our team. You will get to work with an innovative company, with great teammates, and have a lot of fun helping our customers. If you do not live in a market where we have an open Data Scientist position, please feel free to apply. Our Data Scientists can live in any location where we have a Professional Services office.

A successful candidate will be a person who enjoys diving deep into data, doing analysis, discovering root causes, and designing long-term solutions. They will be someone who likes to have fun, loves to learn, and wants to innovate in the world of AI. Major responsibilities include:
• Understand the internal customer's business need and guide them to a solution using our AWS AI Services, AWS AI Platforms, AWS AI Frameworks, and AWS AI EC2 Instances.
• Assist internal customers by delivering an ML/DL project from beginning to end, including understanding the business need, aggregating data, exploring data, building and validating predictive models, and deploying completed models to deliver business impact to the organization.
• Use Deep Learning frameworks like MXNet, Caffe2, TensorFlow, Theano, CNTK, and Keras to help our internal customers build DL models.
• Use SparkML and Amazon Machine Learning (AML) to help our internal customers build ML models.
• Work with our Professional Services Big Data consultants to analyze, extract, normalize, and label relevant data.
• Work with our Professional Services DevOps consultants to help our internal customers operationalize models after they are built.
• Assist internal customers with identifying model drift and retraining models.
• Research and implement novel ML and DL approaches, including using FPGAs.

This role is open for Mumbai/Pune/Bangalore/Chennai/Hyderabad/Delhi.

About the team

Diverse Experiences: AWS values diverse experiences. Even if you do not meet all of the qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn't followed a traditional path, or includes alternative experiences, don't let it stop you from applying.

Why AWS? Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that's why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses.

Inclusive Team Culture: Here at AWS, it's in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empowers us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences, inspire us to never stop embracing our uniqueness.

Mentorship & Career Growth: We're continuously raising our performance bar as we strive to become Earth's Best Employer. That's why you'll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional.

Work/Life Balance: We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there's nothing we can't achieve in the cloud.

Preferred qualifications:
- 10+ years of IT platform implementation experience in a technical and analytical role.
- Experience in consulting, design and implementation of serverless distributed solutions.
- Experienced in databases (SQL, NoSQL, Hadoop, Spark, Kafka, Kinesis) and managing complex, large-scale customer-facing projects.
- Experienced as a technical specialist in design and architecture, with expertise in cloud-based solutions (AWS or equivalent), systems, networks, and operating systems.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
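As a rough illustration of the "identifying model drift" responsibility above, here is a minimal sketch of one common drift statistic, the Population Stability Index (PSI), in plain numpy. The data, bin count, and the 0.2 rule of thumb are illustrative choices, not an AWS method.

```python
# Minimal sketch: Population Stability Index (PSI) between training-time scores
# and live scores. Data, bins, and threshold are illustrative only.
import numpy as np


def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI = sum over bins of (actual% - expected%) * ln(actual% / expected%)."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    expected_counts, _ = np.histogram(expected, bins=edges)
    actual_counts, _ = np.histogram(actual, bins=edges)
    # Convert counts to proportions; floor at a tiny value to avoid log(0).
    expected_prop = np.clip(expected_counts / expected_counts.sum(), 1e-6, None)
    actual_prop = np.clip(actual_counts / actual_counts.sum(), 1e-6, None)
    return float(np.sum((actual_prop - expected_prop) * np.log(actual_prop / expected_prop)))


rng = np.random.default_rng(42)
train_scores = rng.beta(2, 5, size=50_000)   # scores observed at training time
live_scores = rng.beta(2.5, 5, size=50_000)  # slightly shifted live scores

print(f"PSI = {psi(train_scores, live_scores):.3f}")
# A frequently quoted rule of thumb: PSI above roughly 0.2 suggests significant
# drift and is often taken as a cue to investigate or retrain.
```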

Posted 2 months ago

Apply

7.0 years

0 Lacs

Bengaluru, Karnataka

On-site

- 7+ years of professional or military experience, including a Bachelor's degree.
- 7+ years managing complex, large-scale projects with internal or external customers.
- Able to assist internal customers by delivering an ML/DL project from beginning to end, including understanding the business need, aggregating data, exploring data, building and validating predictive models, and deploying completed models to deliver business impact to the organization.
- Skilled in using Deep Learning frameworks (MXNet, Caffe2, TensorFlow, Theano, CNTK, Keras) and ML tools (SparkML, Amazon Machine Learning) to build models for internal customers.

AWS Sales, Marketing, and Global Services (SMGS) is responsible for driving revenue, adoption, and growth from the largest and fastest-growing small- and mid-market accounts to enterprise-level customers, including the public sector.

Excited by using massive amounts of data to develop Machine Learning (ML) and Deep Learning (DL) models? Want to help the largest global enterprises derive business value through the adoption of Artificial Intelligence (AI)? Eager to learn from many different enterprises' use cases of AWS ML and DL? Thrilled to be a key part of Amazon, which has been investing in Machine Learning for decades, pioneering and shaping the world's AI technology?

At AWS ProServe India LLP ("ProServe India"), we are helping large enterprises build ML and DL models on the AWS Cloud. We are applying predictive technology to large volumes of data and against a wide spectrum of problems. Our Professional Services organization works together with our internal customers to address the business needs of AWS customers using AI. AWS Professional Services is a unique consulting team in ProServe India. We pride ourselves on being customer obsessed and highly focused on the AI enablement of our customers. If you have experience with AI, including building ML or DL models, we'd like to have you join our team. You will get to work with an innovative company, with great teammates, and have a lot of fun helping our customers. If you do not live in a market where we have an open Data Scientist position, please feel free to apply. Our Data Scientists can live in any location where we have a Professional Services office.

Key job responsibilities
A successful candidate will be a person who enjoys diving deep into data, doing analysis, discovering root causes, and designing long-term solutions. They will be someone who likes to have fun, loves to learn, and wants to innovate in the world of AI. Major responsibilities include:
• Understand the internal customer's business need and guide them to a solution using our AWS AI Services, AWS AI Platforms, AWS AI Frameworks, and AWS AI EC2 Instances.
• Assist internal customers by delivering an ML/DL project from beginning to end, including understanding the business need, aggregating data, exploring data, building and validating predictive models, and deploying completed models to deliver business impact to the organization.
• Use Deep Learning frameworks like MXNet, Caffe2, TensorFlow, Theano, CNTK, and Keras to help our internal customers build DL models.
• Use SparkML and Amazon Machine Learning (AML) to help our internal customers build ML models.
• Work with our Professional Services Big Data consultants to analyze, extract, normalize, and label relevant data.
• Work with our Professional Services DevOps consultants to help our internal customers operationalize models after they are built.
• Assist internal customers with identifying model drift and retraining models.
• Research and implement novel ML and DL approaches, including using FPGAs.

This role is open for Mumbai/Pune/Bangalore/Chennai/Hyderabad/Delhi.

About the team

Diverse Experiences: AWS values diverse experiences. Even if you do not meet all of the qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn't followed a traditional path, or includes alternative experiences, don't let it stop you from applying.

Why AWS? Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that's why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses.

Inclusive Team Culture: Here at AWS, it's in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empowers us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences, inspire us to never stop embracing our uniqueness.

Mentorship & Career Growth: We're continuously raising our performance bar as we strive to become Earth's Best Employer. That's why you'll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional.

Work/Life Balance: We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there's nothing we can't achieve in the cloud.

Preferred qualifications:
- 7+ years of IT platform implementation experience in a technical and analytical role.
- Experience in consulting, design and implementation of serverless distributed solutions.
- Experienced in databases (SQL, NoSQL, Hadoop, Spark, Kafka, Kinesis) and managing complex, large-scale customer-facing projects.
- Experienced as a technical specialist in design and architecture, with expertise in cloud-based solutions (AWS or equivalent), systems, networks, and operating systems.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Posted 2 months ago

Apply

7.0 years

0 Lacs

Bengaluru, Karnataka

On-site

DESCRIPTION

AWS Sales, Marketing, and Global Services (SMGS) is responsible for driving revenue, adoption, and growth from the largest and fastest-growing small- and mid-market accounts to enterprise-level customers, including the public sector. AWS Global Services includes experts from across AWS who help our customers design, build, operate, and secure their cloud environments. Customers innovate with AWS Professional Services, upskill with AWS Training and Certification, optimize with AWS Support and Managed Services, and meet objectives with AWS Security Assurance Services. Our expertise and emerging technologies include AWS Partners, AWS Sovereign Cloud, AWS International Product, and the Generative AI Innovation Center. You'll join a diverse team of technical experts in dozens of countries who help customers achieve more with the AWS cloud.

Excited by using massive amounts of data to develop Machine Learning (ML) and Deep Learning (DL) models? Want to help the largest global enterprises derive business value through the adoption of Artificial Intelligence (AI)? Eager to learn from many different enterprises' use cases of AWS ML and DL? Thrilled to be a key part of Amazon, which has been investing in Machine Learning for decades, pioneering and shaping the world's AI technology?

At AWS ProServe India LLP ("ProServe India"), we are helping large enterprises build ML and DL models on the AWS Cloud. We are applying predictive technology to large volumes of data and against a wide spectrum of problems. Our Professional Services organization works together with our internal customers to address the business needs of AWS customers using AI. AWS Professional Services is a unique consulting team in ProServe India. We pride ourselves on being customer obsessed and highly focused on the AI enablement of our customers. If you have experience with AI, including building ML or DL models, we'd like to have you join our team. You will get to work with an innovative company, with great teammates, and have a lot of fun helping our customers. If you do not live in a market where we have an open Data Scientist position, please feel free to apply. Our Data Scientists can live in any location where we have a Professional Services office.

Key job responsibilities
A successful candidate will be a person who enjoys diving deep into data, doing analysis, discovering root causes, and designing long-term solutions. They will be someone who likes to have fun, loves to learn, and wants to innovate in the world of AI. Major responsibilities include:
- Understand the internal customer's business need and guide them to a solution using our AWS AI Services, AWS AI Platforms, AWS AI Frameworks, and AWS AI EC2 Instances.
- Assist internal customers by delivering an ML/DL project from beginning to end, including understanding the business need, aggregating data, exploring data, building and validating predictive models, and deploying completed models to deliver business impact to the organization.
- Use Deep Learning frameworks like MXNet, Caffe2, TensorFlow, Theano, CNTK, and Keras to help our internal customers build DL models.
- Use SparkML and Amazon Machine Learning (AML) to help our internal customers build ML models.
- Work with our Professional Services Big Data consultants to analyze, extract, normalize, and label relevant data.
- Work with our Professional Services DevOps consultants to help our internal customers operationalize models after they are built.
- Assist internal customers with identifying model drift and retraining models.
- Research and implement novel ML and DL approaches, including using FPGAs.

This role is open for Mumbai/Pune/Bangalore/Chennai/Hyderabad/Delhi.

About the team

Diverse Experiences: AWS values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed below, we encourage candidates to apply. If your career is just starting, hasn't followed a traditional path, or includes alternative experiences, don't let it stop you from applying.

Why AWS? Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that's why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses.

Inclusive Team Culture: AWS values curiosity and connection. Our employee-led and company-sponsored affinity groups promote inclusion and empower our people to take pride in what makes us unique. Our inclusion events foster stronger, more collaborative teams. Our continual innovation is fueled by the bold ideas, fresh perspectives, and passionate voices our teams bring to everything we do.

Mentorship & Career Growth: We're continuously raising our performance bar as we strive to become Earth's Best Employer. That's why you'll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional.

Work/Life Balance: We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there's nothing we can't achieve in the cloud.

BASIC QUALIFICATIONS
- 7+ years of professional or military experience, including a Bachelor's degree.
- 7+ years managing complex, large-scale projects with internal or external customers.
- Able to assist internal customers by delivering an ML/DL project from beginning to end, including understanding the business need, aggregating data, exploring data, building and validating predictive models, and deploying completed models to deliver business impact to the organization.
- Skilled in using Deep Learning frameworks (MXNet, Caffe2, TensorFlow, Theano, CNTK, Keras) and ML tools (SparkML, Amazon Machine Learning) to build models for internal customers.

PREFERRED QUALIFICATIONS
- 7+ years of IT platform implementation experience in a technical and analytical role.
- Experience in consulting, design and implementation of serverless distributed solutions.
- Experienced in databases (SQL, NoSQL, Hadoop, Spark, Kafka, Kinesis) and managing complex, large-scale customer-facing projects.
- Experienced as a technical specialist in design and architecture, with expertise in cloud-based solutions (AWS or equivalent), systems, networks, and operating systems.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
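To complement the drift-identification example shown with the earlier listing, here is a minimal sketch of the retraining side: a simple performance-based trigger that compares recent accuracy against a baseline and launches retraining when the drop exceeds a tolerance. All names, the tolerance value, and the retrain_model function are hypothetical placeholders.

```python
# Minimal sketch: performance-based retraining trigger.
# Thresholds, metrics, and retrain_model() are illustrative placeholders.
from dataclasses import dataclass


@dataclass
class ModelHealth:
    baseline_accuracy: float   # accuracy measured at deployment time
    recent_accuracy: float     # accuracy on the latest labeled window
    tolerance: float = 0.05    # allowed absolute drop before retraining


def needs_retraining(health: ModelHealth) -> bool:
    return (health.baseline_accuracy - health.recent_accuracy) > health.tolerance


def retrain_model() -> None:
    # Placeholder: in practice this would submit a training job
    # (for example, a SageMaker training job or a Spark ML pipeline run).
    print("Retraining job submitted.")


health = ModelHealth(baseline_accuracy=0.91, recent_accuracy=0.84)
if needs_retraining(health):
    retrain_model()
else:
    print("Model performance within tolerance; no retraining needed.")
```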

Posted 2 months ago

Apply

10.0 years

0 Lacs

Mumbai, Maharashtra

On-site

DESCRIPTION

AWS Sales, Marketing, and Global Services (SMGS) is responsible for driving revenue, adoption, and growth from the largest and fastest-growing small- and mid-market accounts to enterprise-level customers, including the public sector. AWS Global Services includes experts from across AWS who help our customers design, build, operate, and secure their cloud environments. Customers innovate with AWS Professional Services, upskill with AWS Training and Certification, optimize with AWS Support and Managed Services, and meet objectives with AWS Security Assurance Services. Our expertise and emerging technologies include AWS Partners, AWS Sovereign Cloud, AWS International Product, and the Generative AI Innovation Center. You'll join a diverse team of technical experts in dozens of countries who help customers achieve more with the AWS cloud.

Excited by using massive amounts of data to develop Machine Learning (ML) and Deep Learning (DL) models? Want to help the largest global enterprises derive business value through the adoption of Artificial Intelligence (AI)? Eager to learn from many different enterprises' use cases of AWS ML and DL? Thrilled to be a key part of Amazon, which has been investing in Machine Learning for decades, pioneering and shaping the world's AI technology?

At AWS ProServe India LLP ("ProServe India"), we are helping large enterprises build ML and DL models on the AWS Cloud. We are applying predictive technology to large volumes of data and against a wide spectrum of problems. Our Professional Services organization works together with our internal customers to address the business needs of AWS customers using AI. AWS Professional Services is a unique consulting team in ProServe India. We pride ourselves on being customer obsessed and highly focused on the AI enablement of our customers. If you have experience with AI, including building ML or DL models, we'd like to have you join our team. You will get to work with an innovative company, with great teammates, and have a lot of fun helping our customers. If you do not live in a market where we have an open Data Scientist position, please feel free to apply. Our Data Scientists can live in any location where we have a Professional Services office.

A successful candidate will be a person who enjoys diving deep into data, doing analysis, discovering root causes, and designing long-term solutions. They will be someone who likes to have fun, loves to learn, and wants to innovate in the world of AI. Major responsibilities include:
- Understand the internal customer's business need and guide them to a solution using our AWS AI Services, AWS AI Platforms, AWS AI Frameworks, and AWS AI EC2 Instances.
- Assist internal customers by delivering an ML/DL project from beginning to end, including understanding the business need, aggregating data, exploring data, building and validating predictive models, and deploying completed models to deliver business impact to the organization.
- Use Deep Learning frameworks like MXNet, Caffe2, TensorFlow, Theano, CNTK, and Keras to help our internal customers build DL models.
- Use SparkML and Amazon Machine Learning (AML) to help our internal customers build ML models.
- Work with our Professional Services Big Data consultants to analyze, extract, normalize, and label relevant data.
- Work with our Professional Services DevOps consultants to help our internal customers operationalize models after they are built.
- Assist internal customers with identifying model drift and retraining models.
- Research and implement novel ML and DL approaches, including using FPGAs.

This role is open for Mumbai/Pune/Bangalore/Chennai/Hyderabad/Delhi.

About the team

Diverse Experiences: AWS values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed below, we encourage candidates to apply. If your career is just starting, hasn't followed a traditional path, or includes alternative experiences, don't let it stop you from applying.

Why AWS? Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that's why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses.

Inclusive Team Culture: AWS values curiosity and connection. Our employee-led and company-sponsored affinity groups promote inclusion and empower our people to take pride in what makes us unique. Our inclusion events foster stronger, more collaborative teams. Our continual innovation is fueled by the bold ideas, fresh perspectives, and passionate voices our teams bring to everything we do.

Mentorship & Career Growth: We're continuously raising our performance bar as we strive to become Earth's Best Employer. That's why you'll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional.

Work/Life Balance: We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there's nothing we can't achieve in the cloud.

BASIC QUALIFICATIONS
- 10+ years of professional or military experience, including a Bachelor's degree.
- 7+ years managing complex, large-scale projects with internal or external customers.
- Able to assist internal customers by delivering an ML/DL project from beginning to end, including understanding the business need, aggregating data, exploring data, building and validating predictive models, and deploying completed models to deliver business impact to the organization.
- Skilled in using Deep Learning frameworks (MXNet, Caffe2, TensorFlow, Theano, CNTK, Keras) and ML tools (SparkML, Amazon Machine Learning) to build models for internal customers.

PREFERRED QUALIFICATIONS
- 10+ years of IT platform implementation experience in a technical and analytical role.
- Experience in consulting, design and implementation of serverless distributed solutions.
- Experienced in databases (SQL, NoSQL, Hadoop, Spark, Kafka, Kinesis) and managing complex, large-scale customer-facing projects.
- Experienced as a technical specialist in design and architecture, with expertise in cloud-based solutions (AWS or equivalent), systems, networks, and operating systems.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Posted 2 months ago

Apply

1.0 years

0 Lacs

Rawatsar, Rajasthan, India

Remote

Every second, the internet gets messier. Content floods in from humans and machines alike - some helpful, some harmful, and most of it unstructured. Forums, blogs, knowledge bases, event pages, community threads: these are the lifeblood of digital platforms, but they also carry risk. Left unchecked, they can drift into chaos, compromise brand integrity, or expose users to misinformation and abuse. The scale is too big for humans alone, and AI isn't good enough to do it alone - yet. That's where we come in.

Our team is rebuilding content integrity from the ground up by combining human judgment with generative AI. We don't treat AI like a sidekick or a threat. Every moderator on our team works side-by-side with GenAI tools to classify, tag, escalate, and refine content decisions at speed. The edge cases you annotate and the feedback you give train smarter systems, reduce false positives, and make AI moderation meaningfully better with every cycle.

This isn't a job where you manually slog through a never-ending moderation queue. It's not an outsourced content-cop role. You'll spend your days interacting directly with AI to make decisions, flag patterns, streamline workflows, and make sure the right content sees the light of day. If you're the kind of person who thrives on structured work, enjoys hunting down ambiguity, and finds satisfaction in operational clarity, this job will feel like a control panel for the future of content quality.

You'll be joining a team obsessed with platform integrity and operational scale. Your job is to keep the machine running smoothly: managing queues, moderating edge cases, annotating training data, and making feedback loops tighter and faster. If you've used tools like ChatGPT to get real work done - not just writing poems or brainstorming ideas, but actually processing or classifying information - this is your next level.

What You Will Be Doing
- Review and moderate user- and AI-generated content using GenAI tools to enforce platform policies and maintain a safe, high-quality environment
- Coordinate content workflows across tools and teams, ensuring timely processing, clear tracking, and smooth handoffs
- Tag edge cases, annotate training data, and provide structured feedback to improve the accuracy and performance of AI moderation systems

What You Won't Be Doing
- A boring content moderation job focused on manually reviewing blog post after blog post
- An entry-level admin role with low agency or impact, just checking boxes in a queue

AI Content Analyst Key Responsibilities
Drive continuous improvement of our AI-human content moderation system by identifying patterns, refining workflows, and providing critical feedback that directly enhances platform integrity and user trust.

Basic Requirements
- At least 1 year of professional work experience
- Hands-on experience using GenAI tools (e.g., ChatGPT, Claude, Gemini) in a professional, academic, or personal productivity context
- Strong English writing skills

Nice-to-have Requirements
- Experience with content moderation, trust and safety, or platform policy enforcement
- Background in data labeling, annotation, or training data preparation
- Familiarity with workflow management tools and structured feedback systems

About IgniteTech
If you want to work hard at a company where you can grow and be part of a dynamic team, join IgniteTech! Through our portfolio of leading enterprise software solutions, we ignite business performance for thousands of customers globally. We're doing it in an entirely remote workplace that is focused on building teams of top talent and operating in a model that provides challenging opportunities and personal flexibility.

A career with IgniteTech is challenging and fast-paced. We are always looking for energetic and enthusiastic employees to join our world-class team. We offer opportunities for personal contribution and promote career development. IgniteTech is an Affirmative Action, Equal Opportunity Employer that values the strength that diversity brings to the workplace.

There is so much to cover for this exciting role, and space here is limited. Hit the Apply button if you found this interesting and want to learn more. We look forward to meeting you!

Working with us
This is a full-time (40 hours per week), long-term position. The position is immediately available and requires entering into an independent contractor agreement with Crossover as a Contractor of Record. The compensation level for this role is $15 USD/hour, which equates to $30,000 USD/year assuming 40 hours per week and 50 weeks per year. The payment period is weekly. Consult www.crossover.com/help-and-faqs for more details on this topic.

Crossover Job Code: LJ-5593-LK-COUNTRY-AIContentAnaly.002
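As a rough sketch of the GenAI-assisted classification workflow this role describes, the snippet below asks a chat model to map a piece of content onto a small label set. It assumes the openai Python package (v1+) and an API key in the environment; the model name, label set, and prompt wording are illustrative choices, not IgniteTech's actual tooling.

```python
# Minimal sketch: LLM-assisted content moderation.
# Model name, labels, and prompt are hypothetical; requires OPENAI_API_KEY.
from openai import OpenAI

LABELS = ["allow", "needs_review", "remove"]

client = OpenAI()


def moderate(text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a content moderation assistant. "
                    f"Reply with exactly one of: {', '.join(LABELS)}."
                ),
            },
            {"role": "user", "content": text},
        ],
        temperature=0,
    )
    label = response.choices[0].message.content.strip().lower()
    # Anything unexpected falls back to human review rather than auto-removal.
    return label if label in LABELS else "needs_review"


print(moderate("Selling fake concert tickets, DM me for a deal!"))
```

In practice, edge cases flagged this way would be routed to a human queue and the decisions fed back as training data, which is the feedback loop the posting emphasizes.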

Posted 2 months ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Description: Join Altera, a leader in programmable logic technology, as we strive to become the #1 FPGA company. We are looking for a skilled Jr. Data Scientist to develop and deploy production-grade ML pipelines and infrastructure across the enterprise. This is a highly technical, hands-on role focused on building scalable, secure, and maintainable machine learning solutions within the Azure ecosystem. As a member of our Data & Analytics team, you will work closely with other data scientists, ML specialists, and engineering teams to operationalize ML models using modern tooling such as Azure Machine Learning, Dataiku, and Kubeflow. You'll drive MLOps practices, automate workflows, and help build a foundation for responsible and reliable AI delivery.

Responsibilities
- Design, build, and maintain automated ML pipelines from data ingestion through model training, validation, deployment, and monitoring using Azure Machine Learning, Kubeflow, and related tools.
- Deploy and manage machine learning models in production environments using cloud-native technologies like AKS (Azure Kubernetes Service), Azure Functions, and containerized environments.
- Partner with data scientists to transform experimental models into robust, production-ready systems, ensuring scalability and performance.
- Drive best practices for model versioning, CI/CD, testing, monitoring, and drift detection using Azure DevOps, Git, and third-party tools.
- Work with large-scale datasets from enterprise sources using Azure Synapse Analytics, Azure Data Factory, Azure Data Lake, etc.
- Build integrations with platforms like Dataiku to support collaborative workflows and low-code user interactions while ensuring the underlying infrastructure is robust and auditable.
- Set up monitoring pipelines to track model performance, ensure availability, manage retraining schedules, and respond to production issues.
- Write clean, modular code with clear documentation, tests, and reusable components for ML workflows.

Qualifications
- Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related field.
- 3+ years of hands-on experience developing and deploying machine learning models in production environments.
- Strong programming skills in Python, with experience in ML libraries such as scikit-learn, TensorFlow, PyTorch, or XGBoost.
- Proven experience with the Microsoft Azure ecosystem, especially: Azure Machine Learning (AutoML, ML Designer, SDK); Azure Synapse Analytics and Data Factory; Azure Data Lake and Azure Databricks; Azure OpenAI and Cognitive Services.
- Experience with MLOps frameworks such as Kubeflow, MLflow, or Azure ML pipelines.
- Familiarity with CI/CD tools like Azure DevOps, GitHub Actions, or Jenkins for model lifecycle automation.
- Experience working with APIs, batch and real-time data pipelines, and cloud security practices.

Why Join Us?
- Build and scale real-world ML systems on a modern Azure-based platform.
- Help shape the AI and ML engineering foundation of a forward-looking organization.
- Work cross-functionally with experts in data science, software engineering, and operations.
- Enjoy a collaborative, high-impact environment where innovation is valued and supported.

Job Type: Regular
Shift: Shift 1 (India)
Primary Location: Ecospace 1
Additional Locations:

Posting Statement: All qualified applicants will receive consideration for employment without regard to race, color, religion, religious creed, sex, national origin, ancestry, age, physical or mental disability, medical condition, genetic information, military and veteran status, marital status, pregnancy, gender, gender expression, gender identity, sexual orientation, or any other characteristic protected by local law, regulation, or ordinance.
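Since the qualifications above name MLflow among the MLOps frameworks, here is a minimal sketch of experiment tracking with it: logging hyperparameters, a metric, and a versioned model artifact for a single run. It assumes mlflow and scikit-learn are installed; the dataset, model, and run name are illustrative only, not Altera's workflow.

```python
# Minimal sketch: tracking one training run with MLflow.
# Dataset, model, and run name are illustrative placeholders.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

with mlflow.start_run(run_name="rf-baseline"):
    params = {"n_estimators": 200, "max_depth": 6}
    model = RandomForestClassifier(**params, random_state=0).fit(X_train, y_train)

    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])

    mlflow.log_params(params)                 # record hyperparameters for the run
    mlflow.log_metric("test_auc", auc)        # record the evaluation metric
    mlflow.sklearn.log_model(model, "model")  # version the fitted model artifact

print(f"Logged run with test AUC = {auc:.3f}")
```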

Posted 2 months ago

Apply

3.0 years

0 Lacs

Kochi, Kerala, India

On-site

STATUS: 37.5 hours per week, Permanent.
SALARY: Competitive and based on experience and qualifications.
LOCATION: Kochi, India

DUTIES AND RESPONSIBILITIES WILL INCLUDE:
- Design and implement software systems that embed or integrate AI/ML models
- Collaborate with data scientists to convert research models into production-grade code
- Build and maintain pipelines for model training, validation, deployment, and monitoring
- Optimize model inference for performance, scalability, and responsiveness
- Develop reusable components, libraries, and APIs for ML workflows
- Implement robust logging, testing, and CI/CD pipelines for ML-based applications
- Monitor deployed models for performance drift and help manage retraining cycles

REQUIREMENTS
Essential requirements include:
- Bachelor's or master's degree in computer science, AI/ML, Data Science, or a related field
- 3+ years of experience in software development, with solid coding skills in Python (and optionally C++)
- Hands-on experience with machine learning frameworks
- Strong understanding of data structures, algorithms, and system design
- Experience building and deploying ML models in production environments
- Familiarity with MLOps practices: model packaging, versioning, monitoring, A/B testing
- Experience with RESTful APIs, microservices, or distributed systems
- Proficient in Git and collaborative development workflows

Desirable requirements:
- Experience with cloud platforms
- Familiarity with data engineering workflows
- Exposure to deep learning model optimisation tools
- Understanding of NLP, computer vision, or time-series forecasting

THE POSITION
IPSA Power (www.ipsa-power.com) develops and maintains IPSA, a power system analysis tool, and other products based on it. IPSA Power is part of TNEI (www.tneigroup.com), an independent specialist energy consultancy providing technical, strategic, planning, and environmental advice to companies and organisations operating within the energy sector. The dedicated software and solutions team that develops IPSA and other tools based on it is based in Manchester and Kochi. We are looking for a software engineer with a strong foundation in AI/ML and solid software development skills to help us build intelligent, scalable systems that bring real-world machine learning applications to life. You will work closely with data scientists and engineers to develop, deploy, and optimize ML-driven software products. If you are passionate about clean code and deploying ML models into production with high reliability, we would like to hear from you.

Why should you apply?
- Join a world-class team in a rapidly growing industry
- Have a hands-on opportunity to make a real difference in a small company
- Excellent professional and personal development opportunities
- Professional membership fees
- Discretionary annual performance-based bonus
- 25 days annual leave
- Additional day off on your birthday!

How to apply
Please apply using the 'Apply Now' form on the Careers Page on our website, and upload your CV and covering letter, demonstrating why you are suitable for the role and any previous experience. Closing date for applications: 20 June 2025. We shall be interviewing suitable candidates on a continuous basis; therefore, if you are planning to apply, we recommend that you do so without delay.
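As an illustration of the "RESTful APIs for ML workflows" work listed above, here is a minimal sketch of serving a trained model behind a REST endpoint with FastAPI. It assumes fastapi, uvicorn, pydantic, and scikit-learn are installed; the iris model and feature names are stand-ins, not IPSA Power's product.

```python
# Minimal sketch: serving a model behind a REST endpoint with FastAPI.
# The iris model trained at startup is a stand-in for a real, versioned artifact.
from fastapi import FastAPI
from pydantic import BaseModel
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

app = FastAPI(title="demo-inference-service")

# Train a small stand-in model at startup; a real service would instead load
# a versioned artifact from a model registry.
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)
class_names = load_iris().target_names


class IrisFeatures(BaseModel):
    sepal_length: float
    sepal_width: float
    petal_length: float
    petal_width: float


@app.post("/predict")
def predict(features: IrisFeatures) -> dict:
    row = [[features.sepal_length, features.sepal_width,
            features.petal_length, features.petal_width]]
    prediction = int(model.predict(row)[0])
    return {"species": str(class_names[prediction])}

# Run locally with: uvicorn app:app --reload   (assuming this file is app.py)
```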

Posted 2 months ago

Apply

0 years

0 Lacs

Amta-I, West Bengal, India

On-site

Are you looking for a hands-on, practice-oriented education that gives you the opportunity to develop your sales and operations skills within retail? At Silvan, as a sales trainee you receive a solid foundational education in sales and service and become part of a dynamic team where you can take your career to new heights!

We offer
- The official sales trainee programme with 4 x 2 weeks of school stays at Business College Syd spread over 2 years, with social activities during the stays.
- A working day focused on developing your skills in sales, operations and customer service.
- 20% on top of the minimum salary for sales trainees under the Butiksoverenskomsten (retail collective agreement).
- 2 x 1 week of instructive rotations in other Silvan stores to gain insight into the different concepts.
- A concluding presentation of your final project (fagprøve) to management at the head office, followed by a celebration.

Our expectations of you
- You have passed an EUD (Retail), EUX, HHX or another upper-secondary examination*.
- You can start on 1 August 2025.
- You have high ambitions within the retail industry.
- You are a team player who contributes to the community.
- You are passionate about creating something magical for our customers.

*If you have not taken commercial subjects, the programme requires that you complete a 5-week EUS course before the first school stay.

About the sales trainee programme
On 1 August 2025 you will start in a store as close to your home as possible, unless you wish otherwise. You will spend the next 2 years in the store, where you will be thoroughly trained in selling all our products as well as in the daily operation of the store. During the two years you will take part in four school stays of two weeks each at Business College Syd in Mommark. The teaching is designed to give you a deep understanding of how retail works and to prepare you for the challenges you will meet in your everyday work in the store. Within the first month, all our trainees are invited to a joint onboarding day at the head office, so you will have met each other before the first school stay. After the final project, all our trainees are again invited to the head office, where the results of the final project are presented to management, followed by a celebration. Read more about the programme here!

Your future
Our goal is to train talented salespeople who are passionate about a career at Silvan. If you show commitment and ambition, there are no limits to what you can achieve with us after completing the programme. We always strive to retain our newly qualified trainees and continue their development.

Interested?
We screen applications on an ongoing basis, so don't hesitate to send your CV and a short application! Please mention in your application if there are several stores you are interested in. Applications not submitted via the link will not be processed due to GDPR.

Posted 2 months ago

Apply

8.0 years

0 Lacs

Hyderābād

On-site

Hyderabad, Telangana, India
Category: Engineering
Hire Type: Employee
Job ID: 9309
Date posted: 02/24/2025

We Are: At Synopsys, we drive the innovations that shape the way we live and connect. Our technology is central to the Era of Pervasive Intelligence, from self-driving cars to learning machines. We lead in chip design, verification, and IP integration, empowering the creation of high-performance silicon chips and software content. Join us to transform the future through continuous technological innovation.

You Are: As a Senior Staff AI Engineer focusing on AI Optimization & MLOps, you are a trailblazer in the AI landscape. You possess deep expertise in AI model development and optimization, with a keen interest in reinforcement learning and MLOps. Your ability to design, fine-tune, and deploy scalable, efficient, and continuously improving AI models sets you apart. You thrive in dynamic environments, staying at the forefront of AI technologies and methodologies, ensuring that AI solutions are not only cutting-edge but also production-ready. Your collaborative spirit and excellent communication skills enable you to work seamlessly with cross-functional teams, enhancing AI-powered IT automation solutions. With a strong background in AI frameworks and cloud-based AI services, you are committed to driving innovation and excellence in AI deployments.

What You'll Be Doing:
- Design, fine-tune, and optimize LLMs, retrieval-augmented generation (RAG), and reinforcement learning models for IT automation.
- Improve model accuracy, latency, and efficiency, ensuring optimal performance for IT service workflows.
- Experiment with cutting-edge AI techniques, including multi-agent architectures, prompt tuning, and continual learning.
- Implement MLOps best practices, ensuring scalable, automated, and reliable model deployment.
- Develop AI monitoring, logging, and observability pipelines to track model performance in production.
- Optimize GPU/TPU utilization and cloud-based AI model serving for efficiency and cost-effectiveness.
- Develop tools to measure model drift, inference latency, and operational efficiency.
- Implement automated retraining pipelines to ensure AI models remain effective over time.
- Work closely with cloud teams to optimize AI model execution across hybrid cloud environments.
- Stay ahead of emerging AI technologies, evaluating new frameworks, techniques, and research for real-world application.
- Collaborate to refine AI system architectures and capabilities, while also ensuring models are effectively embedded into IT automation workflows.

The Impact You Will Have:
- Enhance the efficiency and reliability of AI-powered IT automation solutions.
- Drive continuous improvement and innovation in AI model development and deployment.
- Ensure scalable and cost-effective AI model serving in cloud and hybrid environments.
- Improve real-time AI processing with minimal downtime and high performance.
- Optimize AI systems for performance, security, and cost in IT automation applications.
- Contribute to the advancement of Synopsys' AI capabilities and technologies.

What You'll Need:
- 8+ years of experience in AI/ML engineering, with a focus on model optimization and deployment.
- Strong expertise in AI frameworks (LangGraph, OpenAI, Hugging Face, TensorFlow/PyTorch).
- Experience implementing MLOps pipelines, CI/CD for AI models, and cloud-based AI deployment.
- Deep understanding of AI performance tuning, inference optimization, and cost-efficient deployment.
- Strong programming skills in Python, AI model APIs, and cloud-based AI services.
- Familiarity with IT automation and self-healing systems is a plus.

Who You Are:
- Innovative and forward-thinking, constantly seeking to improve and optimize AI models.
- Collaborative and communicative, working effectively with cross-functional teams.
- Detail-oriented and meticulous, ensuring high standards in AI model performance and deployment.
- Adaptable and resilient, thriving in dynamic and fast-paced environments.
- Passionate about AI and its applications in IT automation and beyond.

The Team You'll Be A Part Of: You will join a dynamic team of AI engineers and IT professionals dedicated to advancing AI-powered IT automation. Our team focuses on optimizing model deployment, scaling AI workloads using Kubernetes, and enhancing AI observability and security. Together, we aim to make IT automation faster, more reliable, and cost-efficient, driving continuous technological innovation and excellence.

Rewards and Benefits: We offer a comprehensive range of health, wellness, and financial benefits to cater to your needs. Our total rewards include both monetary and non-monetary offerings. Your recruiter will provide more details about the salary range and benefits during the hiring process. At Synopsys, we want talented people of every background to feel valued and supported to do their best work. Synopsys considers all applicants for employment without regard to race, color, religion, national origin, gender, sexual orientation, age, military veteran status, or disability.
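The responsibilities above include developing tools to measure inference latency and operational efficiency. As a minimal sketch of that idea, the snippet below times repeated calls to a stand-in predict function and reports latency percentiles; the fake_predict function, request count, and percentile choices are illustrative placeholders.

```python
# Minimal sketch: measuring inference latency percentiles for a model call.
# fake_predict stands in for a real model invocation or HTTP request.
import random
import time

import numpy as np


def fake_predict(x: float) -> float:
    # Placeholder for a real model invocation (local model or remote endpoint).
    time.sleep(0.001 + 0.002 * random.random())
    return x * 2.0


def measure_latency(n_requests: int = 200) -> dict[str, float]:
    latencies_ms = []
    for i in range(n_requests):
        start = time.perf_counter()
        fake_predict(float(i))
        latencies_ms.append((time.perf_counter() - start) * 1000.0)
    arr = np.asarray(latencies_ms)
    return {
        "p50_ms": float(np.percentile(arr, 50)),
        "p95_ms": float(np.percentile(arr, 95)),
        "p99_ms": float(np.percentile(arr, 99)),
    }


print(measure_latency())
# In production these numbers would typically be exported to a metrics backend
# (e.g., Prometheus) and alerted on, rather than printed.
```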

Posted 2 months ago

Apply

0 years

0 Lacs

India

Remote

Client: UK-based client. Availability: 8 hours per day. Shift: 2 PM IST to 11 PM IST. Experience: 10+ years. Mode: WFH (freelancing). If you're interested, kindly share your CV to thara.dhanaraj@excelenciaconsulting.com or call 7358452333. Key Responsibilities: Design, build, and maintain ML infrastructure on GCP using tools such as Vertex AI, GKE, Dataflow, BigQuery, and Cloud Functions. Develop and automate ML pipelines for model training, validation, deployment, and monitoring using tools like Kubeflow Pipelines, TFX, or Vertex AI Pipelines. Work with Data Scientists to productionize ML models and support experimentation workflows. Implement model monitoring and alerting for drift, performance degradation, and data quality issues. Manage and scale containerized ML workloads using Kubernetes (GKE) and Docker. Set up CI/CD workflows for ML using tools like Cloud Build, Bitbucket, Jenkins, or similar. Ensure proper security, versioning, and compliance across the ML lifecycle. Maintain documentation, artifacts, and reusable templates for reproducibility and auditability. Having a GCP MLE certification is a plus.
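As one hedged illustration of the pipeline-automation responsibility above, here is a minimal Kubeflow Pipelines (KFP v2) definition that compiles to a spec a Vertex AI Pipelines backend can run. The component logic, base image, and thresholds are placeholders, not the client's actual workflow.

```python
from kfp import dsl, compiler

@dsl.component(base_image="python:3.11")
def train_model(learning_rate: float) -> float:
    # Placeholder training step; a real component would read from BigQuery/GCS,
    # fit a model, and write artifacts back to Cloud Storage.
    accuracy = 0.9 - abs(learning_rate - 0.01)
    return accuracy

@dsl.component(base_image="python:3.11")
def validate_model(accuracy: float, threshold: float) -> bool:
    return accuracy >= threshold

@dsl.pipeline(name="demo-training-pipeline")
def training_pipeline(learning_rate: float = 0.01, threshold: float = 0.85):
    train_task = train_model(learning_rate=learning_rate)
    validate_model(accuracy=train_task.output, threshold=threshold)

if __name__ == "__main__":
    # Produces a pipeline spec that a KFP backend or Vertex AI Pipelines can execute,
    # e.g. via google.cloud.aiplatform.PipelineJob(template_path="pipeline.yaml", ...).
    compiler.Compiler().compile(training_pipeline, "pipeline.yaml")
```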

Posted 2 months ago

Apply

14.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About Zscaler Serving thousands of enterprise customers around the world including 40% of Fortune 500 companies, Zscaler (NASDAQ: ZS) was founded in 2007 with a mission to make the cloud a safe place to do business and a more enjoyable experience for enterprise users. As the operator of the world’s largest security cloud, Zscaler accelerates digital transformation so enterprises can be more agile, efficient, resilient, and secure. The pioneering, AI-powered Zscaler Zero Trust Exchange™ platform, which is found in our SASE and SSE offerings, protects thousands of enterprise customers from cyberattacks and data loss by securely connecting users, devices, and applications in any location. Named a Best Workplace in Technology by Fortune and others, Zscaler fosters an inclusive and supportive culture that is home to some of the brightest minds in the industry. If you thrive in an environment that is fast-paced and collaborative, and you are passionate about building and innovating for the greater good, come make your next move with Zscaler. Our Engineering team built the world's largest cloud security platform from the ground up, and we keep building. With more than 100 patents and big plans for enhancing services and increasing our global footprint, the team has made us and our multitenant architecture today's cloud security leader, with more than 15 million users in 185 countries. Bring your vision and passion to our team of cloud architects, software engineers, security experts, and more who are enabling organizations worldwide to harness speed and agility with a cloud-first strategy. Responsibilities We're looking for an Architect SRE to be part of our SRE Platform and Tooling team. Reporting to the Director, Software Engineering, you'll be responsible for: Developing scalable, secure, and resilient SRE platform and tooling solutions to enhance reliability and performance across cloud, on-prem, and private cloud environments Deploying observability tools (OpenTelemetry, Kloudfuse, OpenSearch, Grafana, ServiceNow) to improve system visibility and reduce MTTD/MTTR Leading automation efforts in self-healing, CI/CD, configuration, drift, and infrastructure to boost efficiency What We're Looking For (Minimum Qualifications) 14+ years in software development across Cloud-SRE, DevOps, and System Engineering, specializing in Infrastructure, Observability, Automation, and CI/CD Expertise in AIOps, AI/ML for operational efficiency and scalability, and building large-scale distributed systems Proficient in observability tools and skilled in Kubernetes, container orchestration, and microservices architectures Proficient in programming and scripting with Java, Python, Go, or similar languages Skilled in OpenStack for private cloud, Kafka, RabbitMQ, event-driven architectures, and ServiceNow Platform, including CMDB and ITSM solutions What Will Make You Stand Out (Preferred Qualifications) Experience in regulated markets (FedRAMP, SOC2, ISO27001), Cyber Security and compliance-driven environments Familiarity with AI/ML-driven operational intelligence and AIOps platforms Contributions to open-source projects related to SRE and platform engineering At Zscaler, we are committed to building a team that reflects the communities we serve and the customers we work with. We foster an inclusive environment that values all backgrounds and perspectives, emphasizing collaboration and belonging. Join us in our mission to make doing business seamless and secure. 
Benefits Our Benefits program is one of the most important ways we support our employees. Zscaler proudly offers comprehensive and inclusive benefits to meet the diverse needs of our employees and their families throughout their life stages, including: Various health plans Time off plans for vacation and sick time Parental leave options Retirement options Education reimbursement In-office perks, and more! By applying for this role, you adhere to applicable laws, regulations, and Zscaler policies, including those related to security and privacy standards and guidelines. Zscaler is committed to providing equal employment opportunities to all individuals. We strive to create a workplace where employees are treated with respect and have the chance to succeed. All qualified applicants will be considered for employment without regard to race, color, religion, sex (including pregnancy or related medical conditions), age, national origin, sexual orientation, gender identity or expression, genetic information, disability status, protected veteran status, or any other characteristic protected by federal, state, or local laws. See more information by clicking on the Know Your Rights: Workplace Discrimination is Illegal link. Pay Transparency Zscaler complies with all applicable federal, state, and local pay transparency rules. Zscaler is committed to providing reasonable support (called accommodations or adjustments) in our recruiting processes for candidates who are differently abled, have long term conditions, mental health conditions or sincerely held religious beliefs, or who are neurodivergent or require pregnancy-related support. Show more Show less
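For readers unfamiliar with the observability stack named in this posting, here is a minimal, generic OpenTelemetry tracing setup in Python; the service name, console exporter, and span attributes are illustrative and unrelated to Zscaler's internal platform, where an OTLP exporter feeding a collector (and dashboards such as Grafana) would be used instead.

```python
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Configure a tracer provider; in production the ConsoleSpanExporter would be
# swapped for an OTLP exporter pointing at a collector.
provider = TracerProvider(resource=Resource.create({"service.name": "demo-sre-service"}))
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)

def handle_request(user_id: str) -> str:
    # Each request becomes a span; attributes make later MTTD/MTTR analysis easier.
    with tracer.start_as_current_span("handle_request") as span:
        span.set_attribute("user.id", user_id)
        return f"ok:{user_id}"

if __name__ == "__main__":
    print(handle_request("42"))
```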

Posted 2 months ago

Apply

8.0 - 12.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Title: Head - Python Engineering Job Summary: We are looking for a skilled Python, AI/ML Developer with 8 to 12 years of experience to design, develop, and maintain high-quality back-end systems and applications. The ideal candidate will have expertise in Python and related frameworks, with a focus on building scalable, secure, and efficient software solutions. This role requires a strong problem-solving mindset, collaboration with cross-functional teams, and a commitment to delivering innovative solutions that meet business objectives. Responsibilities Application and Back-End Development: Design, implement, and maintain back-end systems and APIs using Python frameworks such as Django, Flask, or FastAPI, focusing on scalability, security, and efficiency. Build and integrate scalable RESTful APIs, ensuring seamless interaction between front-end systems and back-end services. Write modular, reusable, and testable code following Python’s PEP 8 coding standards and industry best practices. Develop and optimize robust database schemas for relational and non-relational databases (e.g., PostgreSQL, MySQL, MongoDB), ensuring efficient data storage and retrieval. Leverage cloud platforms like AWS, Azure, or Google Cloud for deploying scalable back-end solutions. Implement caching mechanisms using tools like Redis or Memcached to optimize performance and reduce latency. AI/ML Development: Build, train, and deploy machine learning (ML) models for real-world applications, such as predictive analytics, anomaly detection, natural language processing (NLP), recommendation systems, and computer vision. Work with popular machine learning and AI libraries/frameworks, including TensorFlow, PyTorch, Keras, and scikit-learn, to design custom models tailored to business needs. Process, clean, and analyze large datasets using Python tools such as Pandas, NumPy, and PySpark to enable efficient data preparation and feature engineering. Develop and maintain pipelines for data preprocessing, model training, validation, and deployment using tools like MLflow, Apache Airflow, or Kubeflow. Deploy AI/ML models into production environments and expose them as RESTful or GraphQL APIs for integration with other services. Optimize machine learning models to reduce computational costs and ensure smooth operation in production systems. Collaborate with data scientists and analysts to validate models, assess their performance, and ensure their alignment with business objectives. Implement model monitoring and lifecycle management to maintain accuracy over time, addressing data drift and retraining models as necessary. Experiment with cutting-edge AI techniques such as deep learning, reinforcement learning, and generative models to identify innovative solutions for complex challenges. Ensure ethical AI practices, including transparency, bias mitigation, and fairness in deployed models. Performance Optimization and Debugging: Identify and resolve performance bottlenecks in applications and APIs to enhance efficiency. Use profiling tools to debug and optimize code for memory and speed improvements. Implement caching mechanisms to reduce latency and improve application responsiveness. Testing, Deployment, and Maintenance: Write and maintain unit tests, integration tests, and end-to-end tests using Pytest, Unittest, or Nose. Collaborate on setting up CI/CD pipelines to automate testing, building, and deployment processes. 
Deploy and manage applications in production environments with a focus on security, monitoring, and reliability. Monitor and troubleshoot live systems, ensuring uptime and responsiveness. Collaboration and Teamwork: Work closely with front-end developers, designers, and product managers to implement new features and resolve issues. Participate in Agile ceremonies, including sprint planning, stand-ups, and retrospectives, to ensure smooth project delivery. Provide mentorship and technical guidance to junior developers, promoting best practices and continuous improvement. Required Skills and Qualifications Technical Expertise: Strong proficiency in Python and its core libraries, with hands-on experience in frameworks such as Django, Flask, or FastAPI. Solid understanding of RESTful API development, integration, and optimization. Experience working with relational and non-relational databases (e.g., PostgreSQL, MySQL, MongoDB). Familiarity with containerization tools like Docker and orchestration platforms like Kubernetes. Expertise in using Git for version control and collaborating in distributed teams. Knowledge of CI/CD pipelines and tools like Jenkins, GitHub Actions, or CircleCI. Strong understanding of software development principles, including OOP, design patterns, and MVC architecture. Preferred Skills: Experience with asynchronous programming using libraries like asyncio, Celery, or RabbitMQ. Knowledge of data visualization tools (e.g., Matplotlib, Seaborn, Plotly) for generating insights. Exposure to machine learning frameworks (e.g., TensorFlow, PyTorch, scikit-learn) is a plus. Familiarity with big data frameworks like Apache Spark or Hadoop. Experience with serverless architecture using AWS Lambda, Azure Functions, or Google Cloud Run. Soft Skills: Strong problem-solving abilities with a keen eye for detail and quality. Excellent communication skills to effectively collaborate with cross-functional teams. Adaptability to changing project requirements and emerging technologies. Self-motivated with a passion for continuous learning and innovation. Education: Bachelor’s or Master’s degree in Computer Science, Software Engineering, or a related field. Show more Show less
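The "expose models as RESTful APIs" and caching responsibilities above can be sketched, under assumptions, with FastAPI plus Redis. The model path, cache TTL, and endpoint shape are placeholders; the app would be served with a command such as `uvicorn app:app`.

```python
import json

import joblib
import redis
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
cache = redis.Redis(host="localhost", port=6379, decode_responses=True)
model = joblib.load("model.joblib")  # placeholder path to a scikit-learn model

class Features(BaseModel):
    values: list[float]

@app.post("/predict")
def predict(features: Features) -> dict:
    # Cache by the exact feature vector to avoid recomputing identical requests.
    key = "pred:" + json.dumps(features.values)
    cached = cache.get(key)
    if cached is not None:
        return {"prediction": float(cached), "cached": True}
    prediction = float(model.predict([features.values])[0])
    cache.set(key, prediction, ex=300)  # expire after 5 minutes
    return {"prediction": prediction, "cached": False}
```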

Posted 2 months ago

Apply

5.0 years

0 Lacs

Pune/Pimpri-Chinchwad Area

On-site

Job Title: Azure DevOps Engineer Location: Pune Experience: 5-7 Years Job Description 5+ years of Platform Engineering, DevOps, or Cloud Infrastructure experience Platform Thinking: Strong understanding of platform engineering principles, developer experience, and self-service capabilities Azure Expertise: Advanced knowledge of Azure services including compute, networking, storage, and managed services Infrastructure as Code: Proficient in Terraform, ARM templates, or Azure Bicep with hands-on experience in large-scale deployments DevOps and Automation CI/CD Pipelines: Expert-level experience with Azure DevOps, GitHub Actions, or Jenkins Automation Scripting: Strong programming skills in Python, PowerShell, or Bash for automation and tooling Git Workflows: Advanced understanding of Git branching strategies, pull requests, and code review processes Cloud Architecture and Security Cloud Architecture: Deep understanding of cloud design patterns, microservices, and distributed systems Security Best Practices: Implementation of security scanning, compliance automation, and zero-trust principles Networking: Advanced Azure networking concepts including VNets, NSGs, Application Gateways, and hybrid connectivity Identity Management: Experience with Azure Active Directory, RBAC, and identity governance Monitoring and Observability Azure Monitor: Advanced experience with Azure Monitor, Log Analytics, and Application Insights Metrics and Alerting: Implementation of comprehensive monitoring strategies and incident response Logging Solutions: Experience with centralized logging and log analysis platforms Performance Optimization: Proactive performance monitoring and optimization techniques Roles And Responsibilities Platform Development and Management Design and build self-service platform capabilities that enable development teams to deploy and manage applications independently Create and maintain platform abstractions that simplify complex infrastructure for development teams Develop internal developer platforms (IDP) with standardized templates, workflows, and guardrails Implement platform-as-a-service (PaaS) solutions using Azure native services Establish platform standards, best practices, and governance frameworks Infrastructure as Code (IaC) Design and implement Infrastructure as Code solutions using Terraform, ARM templates, and Azure Bicep Create reusable infrastructure modules and templates for consistent environment provisioning Implement GitOps workflows for infrastructure deployment and management Maintain infrastructure state management and drift detection mechanisms Establish infrastructure testing and validation frameworks DevOps and CI/CD Build and maintain enterprise-grade CI/CD pipelines using Azure DevOps, GitHub Actions, or similar tools Implement automated testing strategies including infrastructure testing, security scanning, and compliance checks Create deployment strategies including blue-green, canary, and rolling deployments Establish branching strategies and release management processes Implement secrets management and secure deployment practices Platform Operations and Reliability Implement monitoring, logging, and observability solutions for platform services Establish SLAs and SLOs for platform services and developer experience metrics Create self-healing and auto-scaling capabilities for platform components Implement disaster recovery and business continuity strategies Maintain platform security posture and compliance requirements Preferred Qualifications Bachelor’s 
degree in computer science or a related field (or equivalent work experience)
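One hedged way to implement the "drift detection mechanisms" responsibility above is to wrap `terraform plan -detailed-exitcode` in a scheduled script: exit code 0 means no changes, 2 means the plan found changes (drift or unapplied configuration), and 1 means an error. The module path and the wrapper's own exit-code convention are illustrative.

```python
import subprocess
import sys

def detect_drift(workdir: str) -> bool:
    """Return True if live infrastructure no longer matches the Terraform config/state."""
    subprocess.run(["terraform", "init", "-input=false"], cwd=workdir, check=True)
    result = subprocess.run(
        ["terraform", "plan", "-detailed-exitcode", "-input=false", "-no-color"],
        cwd=workdir,
        capture_output=True,
        text=True,
    )
    if result.returncode == 1:
        raise RuntimeError(f"terraform plan failed:\n{result.stderr}")
    return result.returncode == 2

if __name__ == "__main__":
    drifted = detect_drift("./infra/prod")  # placeholder module path
    print("Drift detected" if drifted else "Infrastructure matches desired state")
    sys.exit(2 if drifted else 0)
```

In practice such a check would run on a schedule in the CI/CD system and raise an alert or open a ticket rather than print to stdout.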

Posted 2 months ago

Apply

1.0 years

0 Lacs

Hyderabad, Telangana, India

Remote

Every second, the internet gets messier. Content floods in from humans and machines alike—some helpful, some harmful, and most of it unstructured. Forums, blogs, knowledge bases, event pages, community threads: these are the lifeblood of digital platforms, but they also carry risk. Left unchecked, they can drift into chaos, compromise brand integrity, or expose users to misinformation and abuse. The scale is too big for humans alone, and AI isn’t good enough to do it alone—yet. That’s where we come in. Our team is rebuilding content integrity from the ground up by combining human judgment with generative AI. We don’t treat AI like a sidekick or a threat. Every moderator on our team works side-by-side with GenAI tools to classify, tag, escalate, and refine content decisions at speed. The edge cases you annotate and the feedback you give train smarter systems, reduce false positives, and make AI moderation meaningfully better with every cycle. This isn’t a job where you manually slog through a never-ending moderation queue. It’s not an outsourced content cop role. You’ll spend your days interacting directly with AI to make decisions, flag patterns, streamline workflows, and make sure the right content sees the light of day. If you’re the kind of person who thrives on structured work, enjoys hunting down ambiguity, and finds satisfaction in operational clarity, this job will feel like a control panel for the future of content quality. You’ll be joining a team obsessed with platform integrity and operational scale. Your job is to keep the machine running smoothly: managing queues, moderating edge cases, annotating training data, and making feedback loops tighter and faster. If you’ve used tools like ChatGPT to get real work done—not just writing poems or brainstorming ideas, but actually processing or classifying information—this is your next level. What You Will Be Doing Review and moderate user- and AI-generated content using GenAI tools to enforce platform policies and maintain a safe, high-quality environment Coordinate content workflows across tools and teams, ensuring timely processing, clear tracking, and smooth handoffs Tag edge cases, annotate training data, and provide structured feedback to improve the accuracy and performance of AI moderation systems What You Won’t Be Doing A boring content moderation job focused on manually reviewing of blogpost after blogpost An entry-level admin role with low agency or impact, just checking boxes in a queue Basic Requirements AI Content Reviewer key responsibilities At least 1 year of professional work experience Hands-on experience using GenAI tools (e.g., ChatGPT, Claude, Gemini) in a professional, academic, or personal productivity context Strong English writing skills About IgniteTech If you want to work hard at a company where you can grow and be a part of a dynamic team, join IgniteTech! Through our portfolio of leading enterprise software solutions, we ignite business performance for thousands of customers globally. We’re doing it in an entirely remote workplace that is focused on building teams of top talent and operating in a model that provides challenging opportunities and personal flexibility. A career with IgniteTech is challenging and fast-paced. We are always looking for energetic and enthusiastic employees to join our world-class team. We offer opportunities for personal contribution and promote career development. 
IgniteTech is an Affirmative Action, Equal Opportunity Employer that values the strength that diversity brings to the workplace. There is so much to cover for this exciting role, and space here is limited. Hit the Apply button if you found this interesting and want to learn more. We look forward to meeting you! Working with us This is a full-time (40 hours per week), long-term position. The position is immediately available and requires entering into an independent contractor agreement with Crossover as a Contractor of Record. The compensation level for this role is $15 USD/hour, which equates to $30,000 USD/year assuming 40 hours per week and 50 weeks per year. The payment period is weekly. Consult www.crossover.com/help-and-faqs for more details on this topic. Crossover Job Code: LJ-5593-IN-Hyderaba-AIContentRevie.002 Show more Show less
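As a hedged sketch of what "using GenAI tools to classify content" can look like in practice, the snippet below calls the OpenAI chat completions API to label a post against hypothetical policy outcomes. The model name, labels, and prompt are placeholders, not IgniteTech's actual moderation workflow, which the posting does not describe at code level.

```python
import json

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

LABELS = ["approve", "escalate", "reject"]  # hypothetical policy outcomes

def classify(post: str) -> dict:
    prompt = (
        "You are a content moderation assistant. Classify the post into exactly one of "
        f"{LABELS} and give a one-sentence reason. Respond as JSON with keys "
        "'label' and 'reason'.\n\nPost:\n" + post
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; substitute whatever the team standardizes on
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)

if __name__ == "__main__":
    print(classify("Buy cheap watches now!!! Visit my site for a free prize."))
```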

Posted 2 months ago

Apply

8.0 years

0 Lacs

Gurugram, Haryana, India

On-site

MACHINE-LEARNING ENGINEER ABOUT US Datacultr is a global Digital Operating System for Risk Management and Debt Recovery; we drive Collection Efficiencies and reduce Delinquencies and Non-Performing Loans (NPLs). Datacultr is a Digital-Only provider of Consumer Engagement, Recovery and Collection Solutions, helping Consumer Lending, Retail, Telecom and Fintech Organizations to expand and grow their business in the under-penetrated New to Credit and Thin File Segments. We are helping millions of new-to-credit consumers across emerging markets access formal credit and begin their journey towards financial health. We have clients across India, South Asia, South East Asia, Africa and LATAM. Datacultr is headquartered in Dubai, with offices in Abu Dhabi, Singapore, Ho Chi Minh City, Nairobi, and Mexico City; our Development Center is located in Gurugram, India. ORGANIZATION’S GROWTH PLAN Datacultr’s vision is to enable convenient financing opportunities for consumers, entrepreneurs and small merchants, helping them combat the socio-economic problems this segment faces due to restricted access to financing. We are on a mission to enable 35 million unbanked and under-served people to access financial services by the end of 2026. Position Overview We’re looking for an experienced Machine Learning Engineer to design, deploy, and scale production-grade ML systems. You’ll work on high-impact projects involving deep learning, NLP, and real-time data processing, owning everything from model development to deployment and monitoring while collaborating with cross-functional teams to deliver impactful, production-ready solutions. Core Responsibilities: Representation & Embedding Layer: Evaluate, fine-tune, and deploy multilingual embedding models (e.g., OpenAI text-embedding-3, Sentence-T5, Cohere, or in-house MiniLM) on AWS GPU or serverless endpoints. Implement device-level aggregation to produce stable vectors for downstream clustering. Cohort Discovery Services: Build scalable clustering workflows in Spark/Flink or Python on Airflow. Serve cluster IDs & metadata via feature store / real-time API for consumption. MLOps & Observability: Own CI/CD for model training & deployment. Instrument latency, drift, bias, and cost dashboards; automate rollback policies. Experimentation & Optimisation: Run A/B and multivariate tests comparing embedding cohorts against legacy segmentation; analyse lift in repayment or engagement. Iterate on quantisation, distillation, and batching to hit strict cost-latency SLAs. Collaboration & Knowledge-sharing: Work hand-in-hand with Product & Data Strategy to translate cohort insights into actionable product features. Key Requirements: 5–8 years of hands-on ML engineering / NLP experience; at least 2 years deploying transformer-based models in production. Demonstrated ownership of pipelines processing ≥100 million events per month. Deep proficiency in Python, PyTorch/TensorFlow, Hugging Face ecosystem, and SQL on cloud warehouses. Familiarity with vector databases and RAG architectures. Working knowledge of credit-risk or high-volume messaging platforms is a plus.
Degree in CS, EE, Statistics, or a related field. Tech Stack You’ll Drive: Model & Serving – PyTorch, Hugging Face, Triton, BentoML. Data & Orchestration – Airflow, Spark/Flink, Kafka. Vector & Storage – Qdrant/Weaviate, S3/GCS, Parquet/Iceberg. Cloud & Infra – AWS (EKS, SageMaker). Monitoring – Prometheus, Loki, Grafana. What We Offer: Opportunity to shape the future of unsecured lending in emerging markets. Competitive compensation package. Professional development and growth opportunities. Collaborative, innovation-focused work environment. Comprehensive health and wellness benefits. Location & Work Model: Immediate joining possible. Work From Office only, based in Gurugram, Sector 65. Kindly share your updated profile with us at careers@datacultr.com so we can guide you further with this opportunity.
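To sketch the representation and cohort-discovery layers described above (under assumptions; this is not Datacultr's production code), the snippet embeds short texts with a MiniLM sentence-transformer and clusters them with scikit-learn's KMeans. Real pipelines would aggregate vectors per device before clustering and run at far larger scale on Spark/Flink or Airflow.

```python
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

# A small sample standing in for device-level message/event text.
texts = [
    "payment reminder sent",
    "recordatorio de pago enviado",
    "device locked after missed installment",
    "loan repaid in full",
    "prestamo pagado en su totalidad",
    "customer requested payment extension",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in for the embedding model actually chosen
embeddings = model.encode(texts, normalize_embeddings=True)

# Device-level aggregation would average each device's message vectors first;
# here each text is treated as its own "device" for brevity.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(embeddings)

for cohort in range(3):
    members = [t for t, label in zip(texts, kmeans.labels_) if label == cohort]
    print(f"cohort {cohort}: {members}")
```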

Posted 2 months ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Job Title: AI/ML Developer (5 Years Experience) Location : Remote Job Type : Full-time Experience:5 Year Job Summary: We are looking for an experienced AI/ML Developer with at least 5 years of hands-on experience in designing, developing, and deploying machine learning models and AI-driven solutions. The ideal candidate should have strong knowledge of machine learning algorithms, data preprocessing, model evaluation, and experience with production-level ML pipelines. Key Responsibilities Model Development : Design, develop, train, and optimize machine learning and deep learning models for classification, regression, clustering, recommendation, NLP, or computer vision tasks. Data Engineering : Work with data scientists and engineers to preprocess, clean, and transform structured and unstructured datasets. ML Pipelines : Build and maintain scalable ML pipelines using tools such as MLflow, Kubeflow, Airflow, or SageMaker. Deployment : Deploy ML models into production using REST APIs, containers (Docker), or cloud services (AWS/GCP/Azure). Monitoring and Maintenance : Monitor model performance and implement retraining pipelines or drift detection techniques. Collaboration : Work cross-functionally with data scientists, software engineers, and product managers to integrate AI capabilities into applications. Research and Innovation : Stay current with the latest advancements in AI/ML and recommend new techniques or tools where applicable. Required Skills & Qualifications Bachelor's or Master’s degree in Computer Science, Artificial Intelligence, Data Science, or a related field. Minimum 5 years of experience in AI/ML development. Proficiency in Python and ML libraries such as Scikit-learn, TensorFlow, PyTorch, XGBoost, or LightGBM. Strong understanding of statistics, data structures, and ML/DL algorithms. Experience with cloud platforms (AWS/GCP/Azure) and deploying ML models in production. Experience with CI/CD tools and containerization (Docker, Kubernetes). Familiarity with SQL and NoSQL databases. Excellent problem-solving and communication skills. Preferred Qualifications Experience with NLP frameworks (e.g., Hugging Face Transformers, spaCy, NLTK). Knowledge of MLOps best practices and tools. Experience with version control systems like Git. Familiarity with big data technologies (Spark, Hadoop). Contributions to open-source AI/ML projects or publications in relevant fields. Show more Show less
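As a small, hedged example of the "drift detection techniques" responsibility above, the sketch below applies a two-sample Kolmogorov-Smirnov test to a single feature; the feature, sample sizes, and significance level are illustrative rather than prescriptive.

```python
import numpy as np
from scipy.stats import ks_2samp

def feature_drifted(train_values: np.ndarray, live_values: np.ndarray, alpha: float = 0.01) -> bool:
    """Two-sample KS test: a small p-value suggests the live distribution
    differs from the training distribution for this feature."""
    statistic, p_value = ks_2samp(train_values, live_values)
    return p_value < alpha

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    training_age = rng.normal(35, 8, 50_000)
    live_age = rng.normal(41, 9, 5_000)   # production population has shifted
    if feature_drifted(training_age, live_age):
        print("Drift detected on 'age' -> schedule retraining pipeline")
    else:
        print("No significant drift on 'age'")
```

In a production setup this check would typically run per feature on a schedule and trigger the retraining pipeline mentioned above when drift persists.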

Posted 2 months ago

Apply

2.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Position: Environmental Data Scientist. Salary: up to ₹50,000 per month. Location: Ahmedabad [only considering candidates from Gujarat]. Experience: 2+ years. We are seeking a research-driven Environmental Data Scientist to lead the development of advanced algorithms that enhance the accuracy, reliability, and performance of air quality sensor data. This role goes beyond traditional data science: it focuses on solving real-world challenges in environmental sensing, such as sensor drift, cross-interference, and data anomalies. Key Responsibilities: Design and implement algorithms to improve the accuracy, stability, and interpretability of air quality sensor data (e.g., calibration, anomaly detection, cross-interference mitigation, and signal correction). Conduct in-depth research on sensor behavior and environmental impact to inform algorithm development. Collaborate with software and embedded systems teams to integrate these algorithms into cloud or edge-based systems. Analyze large, complex environmental datasets using Python, R, or similar tools. Continuously validate algorithm performance using lab and field data; iterate for improvement. Develop tools and dashboards to visualize sensor behavior and algorithm impact. Assist in environmental research projects with statistical analysis and data interpretation. Document algorithm design, testing procedures, and research findings for internal use and knowledge sharing. Support team members with data-driven insights and code-level contributions as needed. Assist other team members with writing efficient code and overcoming programming challenges. Required Skills & Qualifications: Bachelor’s or Master’s degree in one of the following fields: Environmental Engineering / Science, Chemical Engineering, Electronics / Instrumentation Engineering, Computer Science / Data Science, Physics / Atmospheric Science (with data or sensing background). 1-2 years of hands-on experience working with sensor data or IoT-based environmental monitoring systems. Strong knowledge of algorithm development, signal processing, and statistical analysis. Proficiency in Python (pandas, NumPy, scikit-learn, etc.) or R, with experience handling real-world sensor datasets. Ability to design and deploy models in a cloud or embedded environment. Excellent problem-solving and communication skills. Passion for environmental sustainability and clean-tech. Preferred Qualifications: Familiarity with time-series anomaly detection, sensor fusion, signal noise reduction techniques or geospatial data processing. Exposure to air quality sensor technologies, environmental sensor datasets, or dispersion modeling. For a quick response, please fill out this form: https://docs.google.com/forms/d/e/1FAIpQLSeBy7r7b48Yrqz4Ap6-2g_O7BuhIjPhcj-5_3ClsRAkYrQtiA/viewform?usp=sharing&ouid=106739769571157586077
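To make the anomaly-detection responsibility above concrete, here is a minimal, assumption-laden sketch that flags outliers in a synthetic PM2.5 series using a rolling robust z-score; the window size and threshold would need tuning against real sensor and reference data.

```python
import numpy as np
import pandas as pd

def flag_anomalies(series: pd.Series, window: int = 24, threshold: float = 4.0) -> pd.Series:
    """Flag points that deviate strongly from a rolling baseline, using the
    rolling median and a MAD-based scale for robustness to the outliers themselves."""
    median = series.rolling(window, min_periods=window // 2).median()
    mad = (series - median).abs().rolling(window, min_periods=window // 2).median()
    robust_z = 0.6745 * (series - median) / mad.replace(0, np.nan)
    return robust_z.abs() > threshold

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    pm25 = pd.Series(rng.normal(40, 5, 500))   # synthetic hourly PM2.5 readings
    pm25.iloc[100] = 400                       # injected spike, e.g. a sensor glitch
    anomalies = flag_anomalies(pm25)
    print(f"{int(anomalies.sum())} anomalous readings at positions {list(anomalies[anomalies].index)}")
```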

Posted 2 months ago

Apply

9.0 years

0 Lacs

Gurugram, Haryana, India

On-site

About Markovate: At Markovate, we don't just follow trends; we drive them. We transform businesses through innovative AI and digital solutions that turn vision into reality. Our team harnesses breakthrough technologies to craft bespoke strategies that align seamlessly with our clients' ambitions. From AI consulting and Gen AI development to pioneering AI agents and agentic AI, we empower our partners to lead their industries with forward-thinking precision. Overview: We are seeking a highly experienced and innovative Senior Data Engineer with a strong background in hybrid cloud data integration, pipeline orchestration, and AI-driven data modelling. This role is responsible for designing, building, and optimizing robust, scalable, and production-ready data pipelines across both AWS and Azure platforms, supporting modern data architectures such as CEDM and Data Vault. Requirements: 9+ years of experience in data engineering and data architecture. Excellent communication and interpersonal skills, with the ability to engage with teams. Strong problem-solving, decision-making, and conflict-resolution abilities. Proven ability to work independently and lead cross-functional teams. Ability to work in a fast-paced, dynamic environment and handle sensitive issues with discretion and professionalism. Ability to maintain confidentiality and handle sensitive information with attention to detail and discretion. The candidate must have a strong work ethic and be trustworthy, highly collaborative, and team oriented. Responsibilities: Design and develop hybrid ETL/ELT pipelines using AWS Glue and Azure Data Factory (ADF). Process files from AWS S3 and Azure Data Lake Gen2, including schema validation and data profiling. Implement event-based orchestration using AWS Step Functions and Apache Airflow (Astronomer). Develop and maintain bronze → silver → gold data layers using DBT or Coalesce. Create scalable ingestion workflows using Airbyte, AWS Transfer Family, and Rivery. Integrate with metadata and lineage tools like Unity Catalog and OpenMetadata. Build reusable components for schema enforcement, EDA, and alerting (e.g., MS Teams). Work closely with QA teams to integrate test automation and ensure data quality. Collaborate with cross-functional teams including data scientists and business stakeholders to align solutions with AI/ML use cases. Document architectures, pipelines, and workflows for internal stakeholders. Experience with cloud platforms: AWS (Glue, Step Functions, Lambda, S3, CloudWatch, SNS, Transfer Family) and Azure (ADF, ADLS Gen2, Azure Functions, Event Grid). Skilled in transformation and ELT tools: Databricks (PySpark), DBT, Coalesce, and Python. Proficient in data ingestion using Airbyte, Rivery, SFTP/Excel files, and SQL Server extracts. Strong understanding of data modeling techniques including CEDM, Data Vault 2.0, and Dimensional Modelling. Hands-on experience with orchestration tools such as AWS Step Functions, Airflow (Astronomer), and ADF Triggers. Expertise in monitoring and logging with CloudWatch, AWS Glue Metrics, MS Teams Alerts, and Azure Data Explorer (ADX). Familiar with data governance and lineage tools: Unity Catalog, OpenMetadata, and schema drift detection. Proficient in version control and CI/CD using GitHub, Azure DevOps, CloudFormation, Terraform, and ARM templates.
Experienced in data validation and exploratory data analysis with pandas profiling and AWS Glue Data Quality. Great to have: Experience with cloud data platforms (e.g., AWS, Azure, GCP) and their data and AI services. Knowledge of ETL tools and frameworks (e.g., Apache NiFi, Talend, Informatica). Deep understanding of AI/Generative AI concepts and frameworks (e.g., TensorFlow, PyTorch, Hugging Face, OpenAI APIs). Experience with data modeling, data structures, and database design. Proficiency with data warehousing solutions (e.g., Redshift, BigQuery, Snowflake). Hands-on experience with big data technologies (e.g., Hadoop, Spark, Kafka). Proficiency in SQL and at least one programming language (e.g., Python). What it's like to be at Markovate: At Markovate, we thrive on collaboration and embrace every innovative idea. We invest in continuous learning to keep our team ahead in the AI/ML landscape. Transparent communication is key; every voice at Markovate is valued. Our agile, data-driven approach transforms challenges into opportunities. We offer flexible work arrangements that empower creativity and balance. Recognition is part of our DNA; your achievements drive our success. Markovate is committed to sustainable practices and positive community impact. Our people-first culture means your growth and well-being are central to our mission. Location: hybrid model, 2 days onsite. (ref:hirist.tech)
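As an illustrative sketch of the schema validation and schema drift detection duties above (not Markovate's actual framework), the snippet below checks an incoming pandas DataFrame against a hypothetical expected schema; in practice the contract would come from a catalog such as Unity Catalog or OpenMetadata, and alerts might be posted to MS Teams instead of printed.

```python
import pandas as pd

# Hypothetical contract for a bronze-layer file; real pipelines would load this
# from a schema registry or metadata catalog rather than hard-code it.
EXPECTED_SCHEMA = {
    "customer_id": "int64",
    "region": "object",
    "order_ts": "datetime64[ns]",
    "amount": "float64",
}

def schema_issues(df: pd.DataFrame) -> list[str]:
    issues = []
    for column, dtype in EXPECTED_SCHEMA.items():
        if column not in df.columns:
            issues.append(f"missing column: {column}")
        elif str(df[column].dtype) != dtype:
            issues.append(f"dtype drift on {column}: expected {dtype}, got {df[column].dtype}")
    extra = set(df.columns) - set(EXPECTED_SCHEMA)
    issues += [f"unexpected column: {c}" for c in sorted(extra)]
    return issues

if __name__ == "__main__":
    df = pd.DataFrame({
        "customer_id": ["C1", "C2"],     # wrong dtype on purpose
        "region": ["north", "south"],
        "order_ts": pd.to_datetime(["2025-01-01", "2025-01-02"]),
        "channel": ["web", "app"],       # extra column; "amount" is missing
    })
    for issue in schema_issues(df):
        print("ALERT:", issue)
```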

Posted 2 months ago

Apply

5.0 years

5 - 9 Lacs

Hyderābād

On-site

JLL supports the Whole You, personally and professionally. Our people at JLL are shaping the future of real estate for a better world by combining world class services, advisory and technology to our clients. We are committed to hiring the best, most talented people in our industry; and we support them through professional growth, flexibility, and personalized benefits to manage life in and outside of work. Whether you’ve got deep experience in commercial real estate, skilled trades, and technology, or you’re looking to apply your relevant experience to a new industry, we empower you to shape a brighter way forward so you can thrive professionally and personally. The BMS Engineer is responsible for implementing and maintaining Building Management Systems that control and monitor various building functions such as HVAC, lighting, security, and energy management. This role requires a blend of technical expertise, problem-solving skills, and the ability to work with diverse stakeholders. Required Qualifications and skills: Diploma/Bachelor's degree in Electrical / Mechanical Engineering or related field 5+ years of experience in BMS Operations, Design implementation, and maintenance Proficiency in BMS software platforms (e.g. Schneider Electric, Siemens, Johnson Controls) Strong understanding of HVAC systems and building operations Knowledge of networking protocols (e.g. BACnet, Modbus, LonWorks) Familiarity with energy management principles and sustainability practices Excellent problem-solving and analytical skills Strong communication and interpersonal abilities Ability to work independently and as part of a team Preferred Qualifications: Professional engineering license (P.E.) or relevant industry certifications Experience with integration of IoT devices and cloud-based systems Knowledge of building codes and energy efficiency standards Project management experience Programming skills (e.g., Python, C++, Java) Roles and Responsibilities of BMS Engineer 1. Troubleshoot and resolve issues with BMS 2. Optimize building performance and energy efficiency through BMS tuning 3. Check LL BMS critical parameters & communicate with LL in case parameters go beyond operating threshold 4. Develop and maintain system documentation and operational procedures. Monitor BMS OEM PPM schedule & ensure diligent execution. Monitor SLAs & inform WTSMs in the event of breach. 5. Ensure real time monitoring of Hot / Cold Prism Tickets & resolve on priority. 6. Preparation of Daily / Weekly & Monthly reports comprising of Uptime / Consumption with break up / Temperature trends / Alarms & equipment MTBF 7. Ensure adherence to Incident escalation process & training to Ground staff. 8. Coordination with BMS OEM for ongoing operational issues (Graphics modification/ sensor calibration / controller configuration / Hardware replacement) 9. Supporting annual power down by gracefully shutting down the system & bringing up post completion of the activity. 10. Ensure healthiness of FLS (Panels / Smoke Detectors) & conduct periodic check for drift levels. 11. Provide technical support and training to facility management team 12. Collaborate with other engineering disciplines, WPX Team and project stakeholders and make changes to building environment if so needed. If this job description resonates with you, we encourage you to apply even if you don’t meet all of the requirements below. We’re interested in getting to know you and what you bring to the table! 
Personalized benefits that support personal well-being and growth: JLL recognizes the impact that the workplace can have on your wellness, so we offer a supportive culture and comprehensive benefits package that prioritizes mental, physical and emotional health. About JLL – We’re JLL—a leading professional services and investment management firm specializing in real estate. We have operations in over 80 countries and a workforce of over 102,000 individuals around the world who help real estate owners, occupiers and investors achieve their business ambitions. As a global Fortune 500 company, we also have an inherent responsibility to drive sustainability and corporate social responsibility. That’s why we’re committed to our purpose to shape the future of real estate for a better world. We’re using the most advanced technology to create rewarding opportunities, amazing spaces and sustainable real estate solutions for our clients, our people, and our communities. Our core values of teamwork, ethics and excellence are also fundamental to everything we do and we’re honored to be recognized with awards for our success by organizations both globally and locally. Creating a diverse and inclusive culture where we all feel welcomed, valued and empowered to achieve our full potential is important to who we are today and where we’re headed in the future. And we know that unique backgrounds, experiences and perspectives help us think bigger, spark innovation and succeed together.

Posted 2 months ago

Apply

1.0 years

0 Lacs

Chennai

On-site

Company: Qualcomm India Private Limited Job Area: Engineering Group, Engineering Group > Software Test Engineering General Summary: We are seeking a Engineer AI System-Level Test Engineer to lead end-to-end testing of Retrieval-Augmented Generation (RAG) AI systems for Hybrid, Edge-AI Inference solutions. This role will focus on designing, developing, and executing comprehensive test strategies for evaluating the reliability, accuracy, usability and scalability of large-scale AI models integrated with external knowledge retrieval systems. The ideal candidate needs to have deep expertise in AI testing methodologies, experience with large language models (LLMs), expertise in building test solutions for AI Inference stacks, RAG, search/retrieval architecture, and a strong background in automation frameworks, performance validation, and building E2E automation architecture. Experience testing large-scale generative AI applications, familiarity with LangChain, LlamaIndex, or other RAG-specific frameworks, and knowledge of adversarial testing techniques for AI robustness are preferred qualifications Key Responsibilities: Test Strategy & Planning Define end-to-end test strategies for RAG, retrieval, generation, response coherence, and knowledge correctness Develop test plans & automation frameworks to validate system performance across real-world scenarios. Hands-on experience in benchmarking and optimizing Deep Learning Models on AI Accelerators/GPUs Implement E2E solutions to integrate Inference systems with customer software workflows Identify and implement metrics to measure retrieval accuracy, LLM response quality Test Automation Build automated pipelines for regression, integration, and adversarial testing of RAG workflows. Validate search relevance, document ranking, and context injection into LLMs using rigorous test cases. Collaborate with ML engineers and data scientists to debug model failures and identify areas for improvement. Conduct scalability and latency tests for retrieval-heavy applications. Analyze failure patterns, drift detection, and robustness against hallucinations and misinformation. Collaboration Work closely with AI research, engineering teams & customer teams to align testing with business requirements. Generate test reports, dashboards, and insights to drive model improvements. Stay up to date with the latest AI testing frameworks, LLM evaluation benchmarks, and retrieval models. Required Qualifications: 1+ years of experience in AI/ML system testing, software quality engineering, or related fields. Bachelor’s or master’s degree in computer science engineering/ data science / AI/ML Hands-on experience with test automation frameworks (e.g., PyTest, Robot Framework, JMeter). Proficiency in Python, SQL, API testing, vector databases (e.g., FAISS, Weaviate, Pinecone) and retrieval pipelines. Experience with ML model validation metrics (e.g., BLEU, ROUGE, MRR, NDCG). Expertise in CI/CD pipelines, cloud platforms (AWS/GCP/Azure), and containerization (Docker, Kubernetes). Minimum Qualifications: Bachelor's degree in Engineering, Information Systems, Computer Science, or related field. Applicants : Qualcomm is an equal opportunity employer. If you are an individual with a disability and need an accommodation during the application/hiring process, rest assured that Qualcomm is committed to providing an accessible process. You may e-mail disability-accomodations@qualcomm.com or call Qualcomm's toll-free number found here. 
Upon request, Qualcomm will provide reasonable accommodations to support individuals with disabilities to be able participate in the hiring process. Qualcomm is also committed to making our workplace accessible for individuals with disabilities. (Keep in mind that this email address is used to provide reasonable accommodations for individuals with disabilities. We will not respond here to requests for updates on applications or resume inquiries). Qualcomm expects its employees to abide by all applicable policies and procedures, including but not limited to security and other requirements regarding protection of Company confidential information and other confidential and/or proprietary information, to the extent those requirements are permissible under applicable law. To all Staffing and Recruiting Agencies : Our Careers Site is only for individuals seeking a job at Qualcomm. Staffing and recruiting agencies and individuals being represented by an agency are not authorized to use this site or to submit profiles, applications or resumes, and any such submissions will be considered unsolicited. Qualcomm does not accept unsolicited resumes or applications from agencies. Please do not forward resumes to our jobs alias, Qualcomm employees or any other company location. Qualcomm is not responsible for any fees related to unsolicited resumes/applications. If you would like more information about this role, please contact Qualcomm Careers.
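The retrieval-quality metrics named in this posting (e.g., MRR and recall) can be illustrated with a toy evaluation harness; the queries, ranked documents, and gold labels below are invented for demonstration and are not Qualcomm test data.

```python
def mean_reciprocal_rank(results: list[list[str]], relevant: list[str]) -> float:
    """MRR over queries: reciprocal rank of the first relevant document, 0 if absent."""
    total = 0.0
    for ranked, gold in zip(results, relevant):
        rank = next((i + 1 for i, doc in enumerate(ranked) if doc == gold), None)
        total += 1.0 / rank if rank else 0.0
    return total / len(results)

def recall_at_k(results: list[list[str]], relevant: list[str], k: int = 3) -> float:
    hits = sum(1 for ranked, gold in zip(results, relevant) if gold in ranked[:k])
    return hits / len(results)

if __name__ == "__main__":
    # Each inner list is the retriever's ranked output for one test query.
    retrieved = [
        ["doc_a", "doc_b", "doc_c"],
        ["doc_x", "doc_y", "doc_z"],
        ["doc_m", "doc_n", "doc_o"],
    ]
    gold_docs = ["doc_b", "doc_z", "doc_q"]   # expected source document per query
    print(f"MRR      = {mean_reciprocal_rank(retrieved, gold_docs):.3f}")
    print(f"Recall@3 = {recall_at_k(retrieved, gold_docs, k=3):.3f}")
```

A fuller RAG test suite would add generation-side metrics (e.g., ROUGE) and adversarial cases on top of retrieval scoring like this.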

Posted 2 months ago

Apply

1.0 - 3.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Greetings from Synergy Resource Solutions, a leading Recruitment Consultancy. Our client is a Smart Air Quality Monitoring Solutions company offering data-driven environmental solutions for better decision making. Using our sensor-based hardware, we monitor various environmental parameters related to air quality, noise, odour, weather, radiation etc. Designation: - Environmental Data Scientist (Ahmedabad) Location: - Ahmedabad Experience : - 1 - 3 years Work timings: 10:00 am to 6:30 pm (5 days working) Job Description: We are seeking a research-driven Environmental Data Scientist to lead the development of advanced algorithms that enhance the accuracy, reliability, and performance of air quality sensor data. This role goes beyond traditional data science — it focuses on solving real-world challenges in environmental sensing, such as sensor drift, cross-interference, and data anomalies. Key Responsibilities: ● Design and implement algorithms to improve the accuracy, stability, and interpretability of air quality sensor data (e.g., calibration, anomaly detection, cross-interference mitigation, and signal correction) ● Conduct in-depth research on sensor behavior and environmental impact to inform algorithm development ● Collaborate with software and embedded systems teams to integrate these algorithms into cloud or edge-based systems ● Analyze large, complex environmental datasets using Python, R, or similar tools ● Continuously validate algorithm performance using lab and field data; iterate for improvement ● Develop tools and dashboards to visualize sensor behavior and algorithm impact ● Assist in environmental research projects with statistical analysis and data interpretation ● Document algorithm design, testing procedures, and research findings for internal use and knowledge sharing ● Support team members with data-driven insights and code-level contributions as needed ● Assist other team members with writing efficient code and overcoming programming challenges Required Skills & Qualifications ● Bachelor’s or Master’s degree in one of the following fields: Environmental Engineering / Science, Chemical Engineering, Electronics / Instrumentation Engineering, Computer Science / Data Science, Physics / Atmospheric Science (with data or sensing background) ● 1-2 years of hands-on experience working with sensor data or IoT-based environmental monitoring systems ● Strong knowledge of algorithm development, signal processing, and statistical analysis ● Proficiency in Python (pandas, NumPy, scikit-learn, etc.) or R, with experience handling real-world sensor datasets ● Ability to design and deploy models in a cloud or embedded environment. ● Excellent problem-solving and communication skills. ● Passion for environmental sustainability and clean-tech. Preferred Qualifications: ● Familiarity with time-series anomaly detection, sensor fusion, signal noise reduction techniques or geospatial data processing. ● Exposure to air quality sensor technologies, environmental sensor datasets, or dispersion modeling. Benefits: Competitive salary and benefits package Opportunities for professional growth and development A dynamic and collaborative work environment If your profile is matching with the requirement & if you are interested for this job, please share your updated resume with details of your present salary, expected salary & notice period. Show more Show less
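As a hedged illustration of the calibration work described above, the sketch below fits a simple linear correction from a synthetic low-cost sensor (with a humidity cross-interference term) to a co-located reference instrument; real calibration would use field co-location data and more careful validation than this toy fit.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)

# Synthetic co-location data: a low-cost sensor that reads high and is humidity-sensitive.
reference_pm25 = rng.uniform(10, 120, 300)                  # regulatory-grade instrument
humidity = rng.uniform(30, 90, 300)
raw_sensor = 1.3 * reference_pm25 + 0.25 * humidity + rng.normal(0, 3, 300)

# Calibration model: predict the reference value from the raw reading plus humidity.
X = np.column_stack([raw_sensor, humidity])
calibration = LinearRegression().fit(X, reference_pm25)

corrected = calibration.predict(X)
mae_before = np.mean(np.abs(raw_sensor - reference_pm25))
mae_after = np.mean(np.abs(corrected - reference_pm25))
print(f"MAE before calibration: {mae_before:.1f} ug/m3, after: {mae_after:.1f} ug/m3")
```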

Posted 2 months ago

Apply

1.5 - 2.0 years

0 Lacs

Sahibzada Ajit Singh Nagar, Punjab, India

On-site

Job Summary: We are looking for a highly motivated and analytical Data Scientist / Machine Learning (ML) Engineer / AI Specialist with 1.5-2 years of experience in health data analysis, particularly with data sourced from wearable devices such as smartwatches and fitness trackers. The ideal candidate will be proficient in developing data models, analyzing complex datasets, and translating insights into actionable strategies that enhance health-related applications. Key Responsibilities: Develop and implement data models tailored to health data from wearable devices. Stay updated on industry trends and emerging technologies in health data analytics. Ensure data integrity and security throughout the analysis process. Identify correlations relevant to health metrics. Analyze large datasets to extract actionable insights using statistical methods and machine learning techniques. Develop, train, test, and deploy machine learning models for classification, regression, clustering, NLP, recommendation, or computer vision tasks. Collaborate with cross-functional teams including product, engineering, and domain experts to define problems and deliver solutions. Design and build scalable ML pipelines for model development and deployment. Conduct exploratory data analysis (EDA), data wrangling, feature engineering, and model validation. Monitor model performance in production and iterate based on feedback and data drift. Stay up to date with the latest research and trends in machine learning, deep learning, and AI. Document processes, code, and methodologies to ensure reproducibility and collaboration. Required Qualifications: Bachelor's or Master’s degree in Computer Science, Statistics, Mathematics, Engineering, or related field. 1.5-2 years of experience in data analysis, preferably within the health tech sector. Strong knowledge of Python or R and libraries such as NumPy, pandas, scikit-learn, TensorFlow, PyTorch, or XGBoost. Strong experience with data modeling, machine learning algorithms, and statistical analysis. Familiarity with health data privacy regulations (e.g., HIPAA) and data visualization tools (e.g., Tableau, Power BI). Proficiency in SQL and experience working with large-scale data systems (e.g., Spark, Hadoop, BigQuery, Snowflake). Ability to clearly communicate complex technical concepts to both technical and non-technical audiences. Experience with version control tools (e.g., Git) and ML pipeline tools (e.g., MLflow, Airflow, Kubeflow). Experience deploying models in cloud environments (AWS, GCP, Azure). Knowledge of NLP (e.g., Transformers, LLMs), computer vision, or reinforcement learning. Familiarity with MLOps, CI/CD for ML, and model monitoring tools. Experience: 1.5-2 years (only local candidates). Location: Mohali, Phase 8B.
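For illustration, here is a minimal sketch of the feature-engineering and model-training workflow described above, using entirely synthetic heart-rate data and an invented label; no real wearable or health data is involved, and a production pipeline would add proper validation and privacy controls.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)

# Synthetic minute-level heart-rate traces for 200 "users"; the label is illustrative only.
rows = []
for user in range(200):
    active = user % 2                      # pretend half the users meet an activity goal
    hr = rng.normal(70 + 10 * active, 6 + 2 * active, 1440)
    rows.append({
        "hr_mean": hr.mean(),
        "hr_std": hr.std(),
        "hr_p95": np.percentile(hr, 95),
        "minutes_above_100": int((hr > 100).sum()),
        "label": active,
    })
features = pd.DataFrame(rows)

X_train, X_test, y_train, y_test = train_test_split(
    features.drop(columns="label"), features["label"], test_size=0.25, random_state=0
)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(f"holdout accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2f}")
```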

Posted 2 months ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

🚀 We're Hiring: PropTech Solutions Consultant 📍 Location: Hyderabad | 💼 Full-time | 🏠 Industry: Real Estate + Technology 💰 Salary: ₹6 – ₹10 LPA (Based on experience & expertise) 🔗 Apply Now | 💡 Empowering Real Estate through Innovation About Us: At MKT Praneet Homes (MPH Developers), we're revolutionizing the real estate industry by integrating technology, data, and innovation into every part of our process. We're not just selling properties — we're creating smart, tech-enabled experiences for buyers, sellers, and real estate professionals. As we grow, we’re looking for a PropTech Solutions Consultant who bridges the gap between cutting-edge tech and impactful real estate solutions. What You'll Do: As a PropTech Solutions Consultant, you’ll be the strategic link between IT and marketing teams, helping us design, implement, and scale technology solutions that drive growth, improve customer experience, and simplify operations. 🔹 Key Responsibilities: Analyze and optimize the end-to-end real estate customer journey using digital tools. Recommend, implement, and manage PropTech platforms such as: 🛠 CRM : Salesforce, Zoho CRM, HubSpot CRM 📲 Virtual Tours & Listing Tech : Matterport, MagicBricks Pro, Square Yards 📈 Analytics & Dashboards : Google Analytics, Power BI, Tableau 🔁 Marketing Automation : Mailchimp, ActiveCampaign, MoEngage 🧠 AI Chatbots & Lead Nurturing : Tars, Drift, Intercom 📍 Geo & Mapping Tools : Mappls, Google Maps API 🧰 Real Estate Portals & Syndication Tools : NoBrokerHood, 99acres Partner Tools, Housing.com Pro Collaborate with sales and marketing teams to align digital strategies with revenue goals. Train and support internal teams on tool adoption and performance tracking. Provide data-driven insights to optimize tech-enabled marketing and sales campaigns. Stay current with global PropTech innovations and evaluate tools for future use. What We’re Looking For: 🧠 Skills & Experience: 2–5 years of experience in a tech-enabled marketing , real estate technology , or business consulting role. Familiarity with real estate business processes, including lead generation, site visits, and post-sales engagement. Hands-on experience with at least 3–5 of the tools listed above. Strong communication skills to explain technical concepts to non-technical teams. Bonus: Experience with integration tools (Zapier, Make), CMS platforms (WordPress), or APIs. 💬 Soft Skills That Set You Apart: Tech-Savvy Communicator – You can simplify the complex and build bridges between tech and business. Problem Solver – You don’t just spot issues; you create smart, scalable solutions. Collaborative Mindset – You thrive in cross-functional teams and enjoy working with marketing, sales, and IT alike. Initiative-Driven – You take ownership and act proactively to drive digital transformation. Adaptable Learner – You’re curious, open to feedback, and always ready to learn new tools and trends. Detail-Oriented Thinker – You ensure smooth integrations, clean data, and flawless execution. Why Join Us? 🧩 Be a key player in our digital transformation journey . 🌱 Opportunity to work at the intersection of technology, marketing, and real estate . 💡 Work with cutting-edge PropTech and build solutions that make a real impact. 🎓 Continuous learning, leadership development, and innovation culture. 💰 Competitive salary, performance incentives, and growth roadmap. How to Apply: Send your resume and a short note on why you're excited about PropTech to sales@mphdevelopers.com. Show more Show less

Posted 2 months ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies