2.0 years
30 Lacs
Nagpur, Maharashtra, India
Remote
Experience: 2.00+ years
Salary: INR 3,000,000 / year (based on experience)
Expected Notice Period: 30 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by Yugen AI)
(*Note: This is a requirement for one of Uplers' clients - Yugen AI)

Must-have skills: GRPO, high availability, TRL, LLM, Kubernetes, Python, machine learning, Generative AI

Yugen AI is looking for:
We are looking for a talented LLMOps Engineer to design, deploy, and operationalise agentic solutions for fraud investigations. This work is critical to reducing fraud-investigation turn-around time (TAT) by more than 70%. In this role, you will work directly with our CTO, Soumanta Das, as well as a team of 5 engineers (Backend, Data, and Platform Engineers).

Responsibilities:
- Deploy and scale LLM inference workloads on Kubernetes (K8s) with 99.9% uptime.
- Build agentic tools and services for fraud investigations with complex reasoning capabilities.
- Work with Platform Engineers to set up monitoring and observability (e.g., Prometheus, Grafana) to track model performance and system health.
- Fine-tune open-source LLMs using TRL or similar libraries.
- Use Terraform for infrastructure-as-code to support scalable ML deployments.
- Contribute to tech blogs, especially technical deep dives into the latest reasoning research.

Requirements:
- Strong programming skills (Python, etc.) and problem-solving abilities.
- Hands-on experience with open-source LLM inference and serving frameworks such as vLLM.
- Deep expertise in Kubernetes (K8s) for orchestrating LLM workloads.
- Familiarity with fine-tuning and deploying open-source LLMs using GRPO, TRL, or similar frameworks.
- Knowledge of high-availability systems.

How to apply for this opportunity?
Step 1: Click on Apply! and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal apart from this one. Depending on the assessments you clear, you can apply for them as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
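For context on the 99.9% uptime requirement in the listing above: an availability target translates directly into a downtime "error budget". A minimal illustration in plain Python (the arithmetic follows from the definition of availability, not from anything specific to this role):

```python
def downtime_budget_minutes(availability: float, period_minutes: int) -> float:
    """Minutes of downtime allowed while still meeting an availability target."""
    return (1.0 - availability) * period_minutes

MINUTES_PER_30_DAY_MONTH = 30 * 24 * 60    # 43,200 minutes
MINUTES_PER_365_DAY_YEAR = 365 * 24 * 60   # 525,600 minutes

# A 99.9% ("three nines") target leaves roughly 43 minutes of downtime
# per 30-day month, or about 8.8 hours per year.
monthly_budget = downtime_budget_minutes(0.999, MINUTES_PER_30_DAY_MONTH)
yearly_budget = downtime_budget_minutes(0.999, MINUTES_PER_365_DAY_YEAR)
print(round(monthly_budget, 1), round(yearly_budget / 60, 1))  # 43.2 8.8
```

In practice that budget is what the Prometheus/Grafana alerting mentioned above is sized against: alerts must fire well before the monthly budget is consumed.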
Posted 2 weeks ago
4.0 years
0 Lacs
Nagpur, Maharashtra, India
Remote
Experience: 4.00+ years
Salary: USD 80,000 / year (based on experience)
Expected Notice Period: 15 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time contract for 12 months (40 hrs a week / 160 hrs a month)
(*Note: This is a requirement for one of Uplers' clients - a USA-based Series A funded technology startup)

Must-have skills: Generative Models, JAX, Reinforcement Learning, scikit-learn, PyTorch, TensorFlow, AWS, Docker, NLP, Python

A USA-based Series A funded technology startup is looking for:

Senior Deep Learning Engineer

Job Summary:
We are seeking a highly skilled and experienced Senior Deep Learning Engineer to join our team. This individual will lead the design, development, and deployment of cutting-edge deep learning models and systems. The ideal candidate is passionate about leveraging state-of-the-art machine learning techniques to solve complex real-world problems, thrives in a collaborative environment, and has a proven track record of delivering impactful AI solutions.

Key Responsibilities:
- Model Development and Optimization: Design, train, and deploy advanced deep learning models for applications such as computer vision, natural language processing, speech recognition, and recommendation systems. Optimize models for performance, scalability, and efficiency on various hardware platforms (e.g., GPUs, TPUs).
- Research and Innovation: Stay updated on the latest advancements in deep learning, AI, and related technologies. Develop novel architectures and techniques to push the boundaries of what's possible in AI applications.
- System Design and Deployment: Architect and implement scalable, reliable machine learning pipelines for training and inference. Collaborate with software and DevOps engineers to deploy models into production environments.
- Collaboration and Leadership: Work closely with cross-functional teams, including data scientists, product managers, and software engineers, to define project goals and deliverables. Provide mentorship and technical guidance to junior team members and peers.
- Data Management: Collaborate with data engineering teams to preprocess, clean, and augment large datasets. Develop tools and processes for efficient data handling and annotation.
- Performance Evaluation: Define and monitor key performance indicators (KPIs) to evaluate model performance and impact. Conduct rigorous A/B testing and error analysis to continuously improve model outputs.

Qualifications and Skills:
- Education: Bachelor's or Master's degree in Computer Science, Electrical Engineering, or a related field; PhD preferred.
- Experience: 5+ years of experience developing and deploying deep learning models, with a proven track record of delivering AI-driven products or research with measurable impact.
- Technical Skills: Proficiency in deep learning frameworks such as TensorFlow, PyTorch, or JAX. Strong programming skills in Python, with experience in libraries like NumPy, Pandas, and scikit-learn. Familiarity with distributed computing frameworks such as Spark or Dask. Hands-on experience with cloud platforms (AWS or GCP) and containerization tools (Docker, Kubernetes).
- Domain Expertise: Experience in at least one specialized domain, such as computer vision, NLP, or time-series analysis. Familiarity with reinforcement learning, generative models, or other advanced AI techniques is a plus.
- Soft Skills: Strong problem-solving skills and the ability to work independently. Excellent communication and collaboration abilities. Commitment to fostering a culture of innovation and excellence.

How to apply for this opportunity?

Step 1: Click on Apply! and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!
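The "Performance Evaluation" responsibility above (defining KPIs, A/B testing, error analysis) typically starts from confusion-matrix metrics. A minimal, dependency-free sketch; the sample labels below are made up purely for illustration:

```python
def precision_recall_f1(y_true, y_pred, positive=1):
    """Compute precision, recall, and F1 for a binary classifier's outputs."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical labels: 4 positives and 4 negatives in the ground truth.
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 0, 0, 1, 0, 0, 0]
p, r, f1 = precision_recall_f1(y_true, y_pred)
print(round(p, 3), r)  # precision = 2/3, recall = 0.5
```

In a real pipeline these numbers would come from a library such as scikit-learn (mentioned in the must-have skills) rather than hand-rolled code; the point is only what the KPIs measure.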
Posted 2 weeks ago
2.0 years
30 Lacs
Kanpur, Uttar Pradesh, India
Remote
Experience: 2.00+ years
Salary: INR 3,000,000 / year (based on experience)
Expected Notice Period: 30 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by Yugen AI)
(*Note: This is a requirement for one of Uplers' clients - Yugen AI)

Must-have skills: GRPO, high availability, TRL, LLM, Kubernetes, Python, machine learning, Generative AI

Yugen AI is looking for:
We are looking for a talented LLMOps Engineer to design, deploy, and operationalise agentic solutions for fraud investigations. This work is critical to reducing fraud-investigation turn-around time (TAT) by more than 70%. In this role, you will work directly with our CTO, Soumanta Das, as well as a team of 5 engineers (Backend, Data, and Platform Engineers).

Responsibilities:
- Deploy and scale LLM inference workloads on Kubernetes (K8s) with 99.9% uptime.
- Build agentic tools and services for fraud investigations with complex reasoning capabilities.
- Work with Platform Engineers to set up monitoring and observability (e.g., Prometheus, Grafana) to track model performance and system health.
- Fine-tune open-source LLMs using TRL or similar libraries.
- Use Terraform for infrastructure-as-code to support scalable ML deployments.
- Contribute to tech blogs, especially technical deep dives into the latest reasoning research.

Requirements:
- Strong programming skills (Python, etc.) and problem-solving abilities.
- Hands-on experience with open-source LLM inference and serving frameworks such as vLLM.
- Deep expertise in Kubernetes (K8s) for orchestrating LLM workloads.
- Familiarity with fine-tuning and deploying open-source LLMs using GRPO, TRL, or similar frameworks.
- Knowledge of high-availability systems.

How to apply for this opportunity?
Step 1: Click on Apply! and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!
Posted 2 weeks ago
0 years
0 Lacs
India
Remote
Job Description:
OnlyFounders, via the OnlyFounders Hiring Hackathon
To Apply: https://www.dcodeblock.com/hackathon/6836b77fa6d70dcae62fa9e2

AI Engineer and Researcher (Part-Time → Core Full-Time Role)
Remote-First (Preferred: Web3-native time zones)
3-month paid part-time role → offer for a full-time core contributor position

Compensation:
- Part-time: ₹43,000/month
- Full-time: ₹13,00,000 CTC (post-offer); in-hand salary: ₹86,000/month + bonus
- Additional: performance-based bonuses, token incentives, core contributor equity (subject to DAO onboarding)

About OnlyFounders:
OnlyFounders is building the world's first AI-native onchain fundraising engine, replacing pitch-based capital allocation with verifiable, decentralized reputation systems. Instead of asking "Who do you know?", our protocol asks "What does your reputation prove?" We filter and score founders, investors, and partner contributors through privacy-preserving AI agents trained on onchain data, trust graphs, and encrypted narratives. We are building a future where access to capital is earned through proof, not persuasion.

Role Overview:
We are hiring 2 AI Agent Engineers who will initially join as part-time contributors, with a clear path to full-time roles on the core protocol team. You'll collaborate with our engineering and AI research leads to design, test, and scale modular AI agents in areas like identity scoring, raise prediction, and decentralized traction mapping. This role is solo-first, ownership-driven, and deeply technical - perfect for builders who want to shape foundational systems from the protocol layer up.

What you will build:
You'll take full ownership of one or more of the following agent categories (based on your strengths):
- Identity Model Agent: score founder/investor credibility via DID-linked history and encrypted onchain data.
- Pitch Strength Agent: NLP-based evaluation of pitch narratives using local language-model fine-tuning.
- Fundraise Prediction Agent: forecast raise success using hybrid models (quantitative + qualitative).
- Investor Archetype Agent: cluster investor behavior using wallet data and confidential profiling logic.
- Social Trust Graph Agent: construct trust-based scoring from verified relationships and zero-knowledge attestations.
- Momentum Tracker Agent: detect traction through real-time signal processing (tweets, commits, txns).
- Partner Impact Agent: attribute meaningful value to collaborators via encrypted tracking of contributions.

Ideal Candidate:
You should have strong technical depth in at least 4 of the following.

Required:
- Solid grounding in Python + ML frameworks (PyTorch, Hugging Face)
- Familiarity with NLP, time-series, or tabular modeling
- Understanding of federated learning, distributed compute, or encrypted inference
- High curiosity and self-directed research habits
- Comfort working in async, remote-first environments

Bonus:
- Hands-on experience with TEEs (Intel SGX, AMD SEV, etc.)
- Familiarity with zkML, MPC, or privacy-preserving ML stacks
- Experience with onchain data (e.g., The Graph, Ethereum RPC)
- Built or experimented with onchain agents, smart contracts, or AI x crypto tools

Stack You'll Use:
- Python, PyTorch, Hugging Face Transformers
- Flower, OpenFL, OpenMined (for FL orchestration)
- IPFS, DIDs, Ceramic for decentralized identity + storage
- Confidential compute: Oasis, Secret, SGX SDKs
- Solidity (for model outputs that need onchain commitment)

What You'll Achieve:
By the end of your part-time engagement, you'll:
- Own and ship 3 or more working AI agents (modular, tested, and swarm-ready)
- Be considered for a core full-time engineering role
- Help define the building blocks for reputation-driven fundraising
- Earn credits, tokens, and a contributor track within the OnlyFounders protocol
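As a rough illustration of the Social Trust Graph Agent idea above, a reputation score can be built as a weighted aggregate of attestations from already-trusted peers. Everything below (the formula, weights, and sample data) is a hypothetical sketch, not OnlyFounders' actual scoring logic:

```python
def trust_score(attestations, prior=0.5, weight=0.9):
    """
    Aggregate peer attestations into a single trust score in [0, 1].
    Each attestation is (attester_trust, endorsement), where endorsement
    is 1 (vouch) or 0 (flag). Higher-trust attesters move the score more.
    """
    score, total_weight = prior, 1.0
    for attester_trust, endorsement in attestations:
        w = weight * attester_trust          # discount by attester's own trust
        score = (score * total_weight + endorsement * w) / (total_weight + w)
        total_weight += w
    return score

# Hypothetical founder with two strong vouches and one weak flag:
attestations = [(0.9, 1), (0.8, 1), (0.2, 0)]
score = trust_score(attestations)
print(0.0 <= score <= 1.0)  # True
```

A production version would source the attestations from zero-knowledge proofs over onchain relationships, per the listing; the sketch only shows the aggregation shape.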
Posted 2 weeks ago
15.0 years
0 Lacs
New Delhi, Delhi, India
On-site
Overview:
We are seeking a visionary Director of People Insights & AI Strategy to lead a cutting-edge, enterprise-wide approach to workforce intelligence. This leader will serve as a trusted advisor to the Executive Leadership Team (ELT) and the HR Senior Leadership Team (HRSLT), translating data into bold strategies that shape talent decisions and business outcomes. In this role, you'll elevate our People Insights function into a strategic, AI-enabled powerhouse, embedding predictive, prescriptive, and generative analytics into how we manage, develop, and invest in our global workforce. From succession planning to workforce design, you will be the voice of data in the most critical talent and business conversations.

Key Responsibilities:
- Strategic Executive Influence (30%): Deliver high-impact insights and advisory support to the ELT and HRSLT, shaping talent strategies, workforce investments, and organizational decisions with data-driven narratives and real-time analytics.
- AI-Driven Talent Intelligence (25%): Apply AI/ML solutions (e.g., talent segmentation, skills inference, generative AI for workforce planning) that anticipate trends, uncover risks, and identify opportunities for growth and optimization.
- Enterprise Decision Support (20%): Lead talent intelligence initiatives that inform M&A assessments, org design, strategic workforce planning, and culture evolution, connecting people data to business outcomes.
- Advanced People Analytics & Workday Integration (15%): Guide robust statistical and machine learning analyses (e.g., SEM, clustering, forecasting, predictive modeling). Bridge analytics platforms with Workday to extract and operationalize insights across the talent lifecycle.
- Global Team Leadership (10%): Lead and evolve the People Insights team into a world-class function. Champion a culture of data literacy across HR through tools, enablement, and strategic partnerships.
Qualifications:
- 10-15 years of progressive experience in People Analytics, Data Science, Strategy, or Organizational Effectiveness, with significant exposure to executive-level influence.
- 8+ years leading high-performing teams, ideally in a global, matrixed organization.
- Deep fluency with Workday, including data extraction, integration, and reporting, and the ability to bridge it with analytics tools and platforms.
- Proficiency in Python, R, SQL, Power BI/Tableau, and cloud environments (Azure, AWS, or GCP).
- Proven ability to communicate complex insights through clear storytelling that drives action at the highest levels.

Key Skills: Analytics, Change Management, Data Science, Generative AI, Leadership, Organizational Analytics, Organizational Development (OD), People Analytics, Talent Intelligence, Workforce Analytics

What's In It For You?
- Elective Benefits: Our programs are tailored to your country to best accommodate your lifestyle.
- Grow Your Career: Accelerate your path to success (and keep up with the future) with formal programs on leadership and professional development, and many more on-demand courses.
- Elevate Your Personal Well-Being: Boost your financial, physical, and mental well-being through seminars, events, and our global Life Empowerment Assistance Program.
- Diversity, Equity & Inclusion: It's not just a phrase to us; valuing every voice is how we succeed. Join us in celebrating our global diversity through inclusive education, meaningful peer-to-peer conversations, and equitable growth and development opportunities.
- Make the Most of our Global Organization: Network with other new co-workers within your first 30 days through our onboarding program.
- Connect with Your Community: Participate in internal, peer-led inclusive communities and activities, including business resource groups, local volunteering events, and more environmental and social initiatives.

Don't meet every single requirement? Apply anyway.
At Tech Data, a TD SYNNEX Company, we're proud to be recognized as a great place to work and a leader in the promotion and practice of diversity, equity and inclusion. If you're excited about working for our company and believe you're a good fit for this role, we encourage you to apply. You may be exactly the person we're looking for!

We are an equal opportunity employer and committed to building a diverse team that represents and empowers a variety of backgrounds, perspectives, and skills. All qualified applicants will receive consideration for employment based on merit, without regard to race, colour, religion, national origin, gender, gender identity or expression, sexual orientation, protected veteran status, disability, genetics, age, or any other characteristic protected by law.

To support our diversity and inclusion efforts, we may ask for voluntary gender disclosure information. This data will be used solely to improve our hiring practices and ensure fair treatment for all candidates.
Posted 2 weeks ago
6.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Description

Job Title: MLOps Engineer
Company: Aaizel International Technologies Pvt. Ltd.
Location: On-site
Experience Required: 6+ years
Employment Type: Full-time

About Aaizeltech:
Aaizeltech is a deep-tech company building AI/ML-powered platforms, scalable SaaS applications, and intelligent embedded systems. We are seeking a Senior MLOps Engineer to lead the architecture, deployment, automation, and scaling of infrastructure and ML systems across multiple product lines.

Role Overview:
This role requires strong expertise and hands-on MLOps experience. You will architect and manage cloud infrastructure, CI/CD systems, Kubernetes clusters, and full ML pipelines, from data ingestion to deployment and drift monitoring.

Key Responsibilities:
- Collaborate with data scientists to operationalize ML workflows.
- Build complete ML pipelines with Airflow, Kubeflow Pipelines, or Metaflow.
- Deploy models using KServe, Seldon Core, BentoML, TorchServe, or TF Serving.
- Package models into Docker containers, exposing APIs with Flask, FastAPI, or Django.
- Automate dataset versioning and model tracking via DVC and MLflow.
- Set up model registries and ensure reproducibility and audit trails.
- Implement model monitoring for: (i) data drift and schema validation (using tools like Evidently AI, Alibi Detect); (ii) performance metrics (accuracy, precision, recall); (iii) infrastructure metrics (latency, throughput, memory usage).
- Implement event-driven retraining workflows triggered by drift alerts or data freshness.
- Schedule GPU workloads on Kubernetes and manage resource utilization for ML jobs.
- Design and manage secure, scalable infrastructure using AWS, GCP, or Azure.
- Build and maintain CI/CD pipelines using Jenkins, GitLab CI, GitHub Actions, or AWS DevOps.
- Write and manage Infrastructure as Code using Terraform, Pulumi, or CloudFormation.
- Automate configuration management with Ansible, Chef, or SaltStack.
- Manage Docker containers and advanced Kubernetes resources (Helm, StatefulSets, CRDs, DaemonSets).
- Implement robust monitoring and alerting stacks: Prometheus, Grafana, CloudWatch, Datadog, ELK, or Loki.

Must-Have Skills:
- Advanced expertise in Linux administration, networking, and shell scripting.
- Strong knowledge of Docker, Kubernetes, and container security.
- Hands-on experience with IaC tools like Terraform and configuration management tools like Ansible.
- Proficiency in cloud-native services: IAM, EC2, EKS/GKE/AKS, S3, VPCs, load balancing, Secrets Manager.
- Mastery of CI/CD tools (e.g., Jenkins, GitLab, GitHub Actions).
- Familiarity with SaaS architecture, distributed systems, and multi-environment deployments.
- Proficiency in Python for scripting and ML-related deployments.
- Experience integrating monitoring, alerting, and incident management workflows.
- Strong understanding of DevSecOps, security scanning (e.g., Trivy, SonarQube, Snyk), and secrets management tools (Vault, SOPS).
- Experience with GPU orchestration and hybrid on-prem + cloud environments.

Nice-to-Have Skills:
- Knowledge of GitOps workflows (e.g., ArgoCD, FluxCD).
- Experience with Vertex AI, SageMaker Pipelines, or Triton Inference Server.
- Familiarity with Knative, Cloud Run, or serverless ML deployments.
- Exposure to cost estimation, rightsizing, and usage-based autoscaling.
- Understanding of ISO 27001, SOC 2, or GDPR-compliant ML deployments.
- Knowledge of RBAC for Kubernetes and ML pipelines.

Who You'll Work With:
- AI/ML Engineers, Backend Developers, Frontend Developers, QA Team
- Product Owners, Project Managers, and external Government or Enterprise Clients

How to Apply:
If you are passionate about MLOps and excited to work on next-generation technologies, we would love to hear from you. Please send your resume and a cover letter outlining your relevant experience to hr@aaizeltech.com, bhavik@aaizeltech.com, or anju@aaizeltech.com (Contact No.: 7302201247).
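On the drift-monitoring responsibility above: tools like Evidently AI implement metrics such as the Population Stability Index (PSI), which can also be sketched in a few lines of plain Python. The binning and the alert threshold below are illustrative assumptions, not any specific tool's defaults:

```python
import math

def psi(expected_fracs, actual_fracs, eps=1e-6):
    """
    Population Stability Index between two binned distributions.
    Inputs are per-bin fractions that each sum to 1. A common rule of thumb:
    PSI < 0.1 means no drift, 0.1-0.25 moderate, > 0.25 significant.
    """
    total = 0.0
    for e, a in zip(expected_fracs, actual_fracs):
        e, a = max(e, eps), max(a, eps)  # clamp to avoid log(0)
        total += (a - e) * math.log(a / e)
    return total

baseline = [0.25, 0.25, 0.25, 0.25]   # training-time bin fractions
no_drift = [0.25, 0.25, 0.25, 0.25]
drifted  = [0.10, 0.20, 0.30, 0.40]

print(psi(baseline, no_drift))        # 0.0
print(psi(baseline, drifted) > 0.1)   # True: would trigger a retraining alert
```

In the event-driven retraining workflow the listing describes, a PSI value crossing the chosen threshold is exactly the kind of signal that would kick off a pipeline run.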
Posted 2 weeks ago
50.0 years
0 Lacs
Ranjangaon, India
On-site
This job is with Jabil, an inclusive employer and a member of myGwork, the largest global platform for the LGBTQ+ business community. Please do not contact the recruiter directly.

At Jabil we strive to make ANYTHING POSSIBLE and EVERYTHING BETTER. We are proud to be a trusted partner for the world's top brands, offering comprehensive engineering, manufacturing, and supply chain solutions. With over 50 years of experience across industries and a vast network of over 100 sites worldwide, Jabil combines global reach with local expertise to deliver both scalable and customized solutions. Our commitment extends beyond business success as we strive to build sustainable processes that minimize environmental impact and foster vibrant and diverse communities around the globe.

Job Summary:
Responsible for investigating and reducing component failures at electrical test through prompt, aggressive root cause analysis and effective corrective actions.

Essential Duties and Responsibilities:
- Improve component quality of non-conforming products by establishing failure analysis and obtaining corrective action from suppliers.
- Interact with various departments (Quality, Purchasing, Manufacturing) to resolve and improve component issues.
- Monitor component reject levels and respond to issues accordingly.
- Report technical information concerning component failures to team members and the customer.
- Reclaim revenue from rejected components via return of defective product.
- Implement proactive measures that foster productivity and higher product yields while reducing economic loss.
- Respond immediately to production-line part problems and address boards on component hold.
- Monitor the storage of moisture-sensitive components in dry pack and nitrogen cases, as well as shock-sensitive devices.
- Follow ESD procedures at all times.
- Speak with customer engineering personnel in a professional and effective manner.
- Adhere to all safety and health rules and regulations associated with this position and as directed by your supervisor.
- Comply with and follow all procedures within the company security policy.
- May perform other duties and responsibilities as assigned.

Job Qualifications

KNOWLEDGE REQUIREMENTS
- Ability to read, analyze, interpret, and communicate regarding common scientific and/or technical journals, financial reports, and legal documents.
- Ability to respond to common inquiries or complaints from customers, regulatory agencies, or members of the business community.
- Ability to effectively present information to top management, public groups, and/or boards of directors.
- Advanced PC skills, including training and knowledge of Jabil's software packages.
- Ability to work with mathematical concepts such as probability and statistics, statistical inference, fundamentals of plane and solid geometry and trigonometry, and an understanding of partial differential equations, integration, and numerical techniques.
- Expertise in applying concepts such as fractions, percentages, ratios, and proportions to practical situations.
- Ability to solve practical problems and deal with a variety of concrete variables in situations where only limited standardization exists.
- Ability to interpret a variety of instructions furnished in written, oral, diagram, or schedule form.
- Ability to collect data, establish facts, and draw valid conclusions.
- Proven ability to interpret an extensive variety of technical instructions in mathematical or diagram form and effectively deal with several abstract and concrete variables.
- Ability to add, subtract, multiply, and divide in all units of measure, using whole numbers, common fractions, and decimals.
- Ability to compute rate, ratio, and percent and to draw and interpret graphs.
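The rate/ratio/percent computations listed above show up daily in this role as reject-rate tracking, often reported in parts-per-million. A small, dependency-free example; the lot numbers are made up for illustration:

```python
def reject_metrics(rejected: int, inspected: int):
    """Return reject rate as a percentage, as parts-per-million, and the yield."""
    if inspected <= 0:
        raise ValueError("inspected must be positive")
    rate = rejected / inspected
    return {
        "reject_pct": rate * 100.0,
        "reject_ppm": rate * 1_000_000.0,
        "yield_pct": (1.0 - rate) * 100.0,
    }

# Hypothetical lot: 18 rejects out of 60,000 components at electrical test.
m = reject_metrics(18, 60_000)
print(round(m["reject_ppm"]), round(m["yield_pct"], 2))  # 300 99.97
```

Monitoring this metric per supplier over time is how the "monitor component reject levels and respond to issues" duty above is typically made concrete.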
BE AWARE OF FRAUD: When applying for a job at Jabil, you will be contacted via correspondence through our official job portal with a jabil.com e-mail address, a direct phone call from a member of the Jabil team, or a direct e-mail with a jabil.com e-mail address. Jabil does not request payments for interviews or at any other point during the hiring process. Jabil will not ask for personal identifying information such as a social security number, birth certificate, financial institution details, driver's license number, or passport information over the phone or via e-mail. If you believe you are a victim of identity theft, contact your local police department. Any scam job listings should be reported to the website on which they were posted.

Jabil, including its subsidiaries, is an equal opportunity employer and considers qualified applicants for employment without regard to race, color, religion, national origin, sex, sexual orientation, gender identity, age, disability, genetic information, veteran status, or any other characteristic protected by law.

Accessibility Accommodation:
If you are a qualified individual with a disability, you have the right to request a reasonable accommodation if you are unable or limited in your ability to use or access the Jabil.com/Careers site as a result of your disability. You can request a reasonable accommodation by sending an e-mail to Always_Accessible@Jabil.com with the nature of your request and contact information. Please do not direct any other general employment-related questions to this e-mail. Please note that only those inquiries concerning a request for reasonable accommodation will be responded to. #whereyoubelong
Posted 2 weeks ago
0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
• Develop strategies and solutions to solve problems in logical yet creative ways, leveraging state-of-the-art machine learning, deep learning, and generative AI techniques.
• Technically lead a team of data scientists to produce project deliverables on time and with high quality.
• Identify and address client needs across domains by analyzing large, complex data sets; processing, cleansing, and verifying the integrity of data; and performing exploratory data analysis (EDA) using state-of-the-art methods.
• Select features and build and optimize classifiers/regressors using machine learning and deep learning techniques.
• Enhance data collection procedures to include information relevant for building analytical systems, and ensure data quality and accuracy.
• Perform ad-hoc analysis and present results clearly to both technical and non-technical stakeholders.
• Create custom reports and presentations with strong data visualization and storytelling to effectively communicate analytical conclusions to senior company officials and other stakeholders.
• Expertise in data mining, EDA, feature selection, model building, and optimization using machine learning and deep learning techniques.
• Strong programming skills in Python.
• Excellent communication and interpersonal skills, with the ability to present complex analytical concepts to both technical and non-technical stakeholders.

Primary Skills:
- Excellent understanding of and hands-on experience with data science and machine learning techniques and algorithms for supervised and unsupervised problems, NLP, computer vision, and generative AI. Good applied statistics skills, such as distributions, statistical inference, and testing.
- Excellent understanding of and hands-on experience building deep learning models for text and image analytics (such as ANNs, CNNs, LSTMs, transfer learning, encoder-decoder architectures, etc.).
- Proficient in coding in common data science languages and tools such as R and Python.
- Experience with common data science toolkits, such as NumPy, Pandas, Matplotlib, statsmodels, scikit-learn, SciPy, NLTK, spaCy, OpenCV, etc.
- Experience with common data science frameworks such as TensorFlow, Keras, PyTorch, XGBoost, etc.
- Exposure to or knowledge of cloud platforms (Azure/AWS).
- Experience deploying models in production.
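As a small illustration of the "applied statistics" requirement above (distributions, statistical inference and testing), here is a dependency-free two-sample Welch t-statistic, the kind of check used in ad-hoc comparative analyses. The data are made up for illustration:

```python
import math

def welch_t(sample_a, sample_b):
    """Welch's t-statistic for two samples with possibly unequal variances."""
    def mean(x):
        return sum(x) / len(x)

    def var(x):  # unbiased sample variance
        m = mean(x)
        return sum((v - m) ** 2 for v in x) / (len(x) - 1)

    na, nb = len(sample_a), len(sample_b)
    se = math.sqrt(var(sample_a) / na + var(sample_b) / nb)
    return (mean(sample_a) - mean(sample_b)) / se

# Hypothetical response times (seconds) for two model variants.
a = [12.1, 11.8, 12.4, 12.0, 11.9]
b = [13.0, 12.7, 13.2, 12.9, 13.1]
t = welch_t(a, b)
print(t < 0)  # True: group a's mean is lower
```

In practice one would use `scipy.stats.ttest_ind(a, b, equal_var=False)` from the SciPy toolkit named above, which also returns a p-value; the hand-rolled version only shows what the statistic measures.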
Posted 2 weeks ago
0 years
0 Lacs
India
On-site
Location: Bangalore/Gurgaon

Key Responsibilities:
- Lead the implementation and optimization of Microsoft Purview across the client's data estate on Microsoft Fabric or the Azure cloud platform (ADF, Databricks, etc.).
- Define and enforce data governance policies, data classification, sensitivity labeling, and data lineage to ensure readiness for GenAI use cases.
- Collaborate with data engineers, architects, and AI/ML teams to ensure data discoverability, compliance, and ethical AI readiness.
- Design and implement data cataloging strategies to support GenAI model training and inference.
- Provide guidance on data access controls, privacy, and regulatory compliance (e.g., GDPR, HIPAA).
- Conduct workshops and training sessions for client stakeholders on Purview capabilities and best practices.
- Monitor and report on data governance KPIs and GenAI readiness metrics.

Required Skills & Qualifications:
- Proven experience as a Microsoft Purview SME in enterprise environments.
- Strong knowledge of Microsoft Fabric, OneLake, and Synapse Data Engineering.
- Experience with data governance frameworks and metadata management.
- Hands-on experience with data classification, sensitivity labels, and data lineage tracking.
- Understanding of compliance standards and data privacy regulations.
- Excellent communication and stakeholder management skills.

Preferred Qualifications:
- Microsoft certifications in Azure Data, Purview, or Security & Compliance.
- Experience working with Azure OpenAI, Copilot integrations, or other GenAI platforms.
- Background in data science, AI ethics, or ML operations is a plus.
Posted 2 weeks ago
0.0 years
0 Lacs
Gachibowli, Hyderabad, Telangana
On-site
Location: IN - Hyderabad Telangana Goodyear Talent Acquisition Representative: Ashutosh Panda Sponsorship Available: No Relocation Assistance Available: No Position Description: As a valued member of Global IT SAP, you will report to the leader responsible for the respective functional area. You will play a critical role in analyzing business and technical needs, translating processes into practice using current Information Technology toolsets. You will also implement innovative solutions through configuration, create functional specifications, and facilitate seamless solution realization. Additionally, you will be responsible for conducting rigorous testing and troubleshooting solutions to ensure efficiency and accuracy. Principal Responsibilities: You will analyze business and technical needs and translate processes into practice using the latest Information Technology tools. You will implement new solutions by configuring systems, creating functional specifications, and facilitating their realization. You will conduct full integration testing of new or modified business and technical processes, ensuring seamless performance. You will also develop comprehensive business/technical cycle tests for stakeholders and document all process changes for knowledge sharing. You will troubleshoot, investigate, and persist in resolving complex issues, even when precedents do not exist. By applying logic, inference, creativity, and initiative, you will develop robust solutions while providing cross-functional support and maintenance in responsible areas. You will conduct cost-benefit analyses by evaluating alternative design approaches, ensuring the best-balanced solution: one that meets immediate stakeholder needs, aligns with system requirements, and allows for future adaptability. You will assume a leadership role in small-scale initiatives by contributing as a key player, facilitator, or group lead.
Required Experience and Education: You must hold a Bachelor’s Degree in MIS, Computer Science, Engineering, Technology, Business Administration, or a related field. You must have at least six years of experience in IT. You must have a minimum of four years of hands-on experience with SAP, including SAP SRM, SAP ERP, SAP S4 Central Procurement, and SAP Ariba Guided Buying. Experience with SAP Fieldglass is a plus. Personal Skills, Attributes, and Qualifications: You must have techno-functional knowledge of SAP IT applications within the relevant business area. You need a strong ability to understand business processes, address business needs, and deliver high-quality, efficient solutions. You must demonstrate strong analytical and problem-solving skills, coupled with excellent written and verbal communication skills in English. You should have solid solution design capabilities across various functions and applications. You must be willing and able to work flexible hours as required on special occasions. #LI-Hybrid Goodyear is an Equal Employment Opportunity and Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to that individual's race, color, religion or creed, national origin or ancestry, sex (including pregnancy), sexual orientation, gender identity, age, physical or mental disability, ethnicity, citizenship, or any other characteristic protected by law. Goodyear is one of the world’s largest tire companies. It employs about 74,000 people and manufactures its products in 57 facilities in 23 countries around the world. Its two Innovation Centers in Akron, Ohio and Colmar-Berg, Luxembourg strive to develop state-of-the-art products and services that set the technology and performance standard for the industry. For more information about Goodyear and its products, go to www.goodyear.com/corporate
Posted 2 weeks ago
0.0 - 2.0 years
0 Lacs
Gachibowli, Hyderabad, Telangana
On-site
Location: IN - Hyderabad Telangana Goodyear Talent Acquisition Representative: Maria Monica Canding Sponsorship Available: No Relocation Assistance Available: No Job Responsibilities: You analyze business and technical needs, translating processes into practice using the current Information Technology toolsets. You implement new solutions through configuration and the creation of functional specifications, facilitating solution realization. You conduct full integration testing of new or changed business/technical processes, and develop business/technical cycle tests for use by stakeholders. You document all process changes and share relevant knowledge with stakeholders. You troubleshoot, investigate, and persist in finding solutions to problems with unknown causes, where precedents do not exist, by applying logic, inference, creativity, and initiative. You provide cross-functional support and maintenance for the responsible business/technical areas. You conduct cost/benefit analyses by evaluating alternative design approaches to determine the best-balanced solution. The best-balanced solution satisfies immediate stakeholder needs, meets system requirements, and facilitates subsequent change. You assume a leadership role in small initiatives, playing the key contributor, facilitator, or group lead. Qualifications: You hold a Bachelor's degree in MIS, Computer Science, Engineering, Technology, Business Administration, or, in lieu of a degree, have 9 years of IT experience. You have at least 4 years of experience in IT, with a minimum of 2 years of experience in SAP. You possess techno-functional knowledge of SAP IT applications in the relevant business area. You have the ability to understand business processes and needs, delivering prompt, efficient, and high-quality service to the business. You have strong analytical and problem-solving skills, with excellent written and verbal communication skills and a strong command of English. 
You have solid solution design capabilities across functions and applications. You are able to work flexible hours as required for special occasions. Goodyear is an Equal Employment Opportunity and Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to that individual's race, color, religion or creed, national origin or ancestry, sex (including pregnancy), sexual orientation, gender identity, age, physical or mental disability, ethnicity, citizenship, or any other characteristic protected by law. Goodyear is one of the world’s largest tire companies. It employs about 74,000 people and manufactures its products in 57 facilities in 23 countries around the world. Its two Innovation Centers in Akron, Ohio and Colmar-Berg, Luxembourg strive to develop state-of-the-art products and services that set the technology and performance standard for the industry. For more information about Goodyear and its products, go to www.goodyear.com/corporate
Posted 2 weeks ago
0.0 - 2.0 years
0 Lacs
Hyderabad, Telangana
On-site
LEAD DATA ENGINEER Location: Hyderabad Role: Permanent Mode: WFO JOB RESPONSIBILITIES: Tracks the various machine learning projects and their data needs. Tracks and improves the Kanban process of product maintenance. Drives complex technical discussions both within the company and with outside data partners. Actively contributes to the design of machine learning solutions by having a deep understanding of how the data is used and how new sources of data can be introduced. Advocates for investments in tools and technologies to streamline data workflows and reduce technical debt. Continuously explores and adopts emerging technologies and methodologies in data engineering and machine learning. Develops and maintains scalable data pipelines to support machine learning models and analytics. Collaborates with data scientists to ensure efficient data processing and model deployment. Ensures data quality, integrity, and security across all stages of the data pipeline. Implements monitoring and alerting systems to detect anomalies in data processing and model performance. Enhances data versioning, data lineage, and reproducibility practices to improve model transparency and auditing. QUALIFICATIONS: 5+ years of experience in data engineering or related fields, with a strong focus on building scalable data pipelines to support machine learning workflows. Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, or other relevant fields. Specific experience in Kafka needed; Snowflake and Databricks would be a huge plus. Proven expertise in designing, implementing, and maintaining large-scale, high-performance data architectures and ETL processes managing 1TB a day. Strong knowledge of database management systems (SQL and NoSQL), distributed data processing (e.g., Hadoop, Spark), and cloud platforms (AWS, GCP, Azure).
Experience working closely with data scientists and machine learning engineers to optimize data flows for model training and real-time inference with strict latency requirements. Hands-on experience with data wrangling, data preprocessing, and feature engineering to ensure clean, high-quality data for machine learning models. Solid understanding of data governance, security protocols, and compliance requirements (e.g., GDPR, HIPAA) to ensure data privacy and integrity. Preferred: Experience in data pipelines and analytics for video-game development; experience in the advertising industry; experience in online businesses where transactions happen without human intervention. Job Types: Full-time, Permanent Pay: ₹503,603.23 - ₹1,851,406.88 per year Schedule: Day shift, Monday to Friday Application Question(s): Are you an immediate joiner? Experience: Kafka: 2 years (Required) Snowflake: 2 years (Required) Data engineering: 5 years (Required) Databricks: 2 years (Required) Location: Hyderabad, Telangana (Required) Work Location: In person
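The monitoring-and-alerting responsibility above (detecting anomalies in data processing and model performance) can be sketched with a rolling z-score check. The window size and threshold are illustrative assumptions; a production pipeline would feed this from Kafka consumers and route alerts through a system like Prometheus/Alertmanager rather than a bare function:

```python
from collections import deque
from statistics import mean, stdev

def make_anomaly_detector(window: int = 20, z_threshold: float = 3.0):
    """Flag a metric value as anomalous when it sits more than
    `z_threshold` standard deviations from the rolling mean."""
    history = deque(maxlen=window)

    def check(value: float) -> bool:
        anomalous = False
        if len(history) >= 2:  # need at least two points for a stdev
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > z_threshold:
                anomalous = True
        history.append(value)
        return anomalous

    return check
```

Usage: create one detector per monitored metric (rows/sec, null rate, model AUC) and call it on each new observation; a `True` return would trigger an alert.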
Posted 2 weeks ago
8.0 - 12.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Description We're building the most personalized and intelligent news experiences for India's next 750 million digital users. As our Principal Machine Learning (ML) / Personalization Engineer, you will: Architect and deploy ML-based personalization systems for our suite of digital news products, including recommender systems for content ranking, homepage personalization, push notification targeting, and audience segmentation. Collaborate closely with editors, product managers, and analysts to integrate machine learning into the editorial workflow, making content creation, packaging, and distribution smarter and audience-aware. Analyze user behavior and content consumption patterns using large-scale datasets to build user understanding models and inform personalization strategies. Own the end-to-end ML pipeline: from data acquisition, feature engineering, model training & evaluation, to deployment and real-time inference. Drive experimentation culture: lead A/B testing and iterative optimization of recommendation and ranking models. Stay on top of global trends in personalization, news AI, large language models (LLMs), and recommendation systems, and bring best-in-class solutions to our stack. Who You Need To Be Bachelor's or Master's degree in Computer Science, Data Science, Statistics, or a related field. 8-12 years of experience in machine learning, ideally in recommendation systems, personalization, or search relevance. Strong experience with Python and ML frameworks like TensorFlow, PyTorch, or Scikit-learn. Hands-on with recommendation engines (collaborative filtering, content-based, hybrid models) and vector similarity models. Experience with real-time data processing frameworks and deploying models in production. Solid understanding of SQL and data platforms (e.g., Snowflake, BigQuery, or Redshift). Exposure to BI tools (Metabase, Looker, Tableau) is a plus. Comfortable navigating ambiguous, fast-paced environments and leading cross-functional initiatives.
Excellent communication and collaboration skills, with the ability to explain complex ML concepts to non-technical stakeholders.
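The recommendation-engine experience the posting asks for (collaborative filtering, content-based, and hybrid models) can be sketched, under simplifying assumptions, as user-based collaborative filtering over a toy ratings dictionary. Production systems would use learned embeddings and approximate nearest-neighbour search; this only shows the core similarity-weighted scoring idea:

```python
from math import sqrt

def cosine(u: dict, v: dict) -> float:
    """Cosine similarity between two sparse rating vectors."""
    common = set(u) & set(v)
    dot = sum(u[k] * v[k] for k in common)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

def recommend(target: str, ratings: dict, k: int = 2) -> list:
    """User-based collaborative filtering: score items the target user has
    not rated by similarity-weighted ratings from other users."""
    scores = {}
    for other, their in ratings.items():
        if other == target:
            continue
        sim = cosine(ratings[target], their)
        for item, r in their.items():
            if item not in ratings[target]:
                scores[item] = scores.get(item, 0.0) + sim * r
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

For news personalization, "items" would be articles and implicit signals (clicks, dwell time) would replace explicit ratings, but the weighting logic is the same.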
Posted 2 weeks ago
1.0 years
0 Lacs
Jaipur, Rajasthan, India
On-site
Key Responsibilities Fine-tune and deploy LLMs (e.g., LLaMA 3.2) using LoRA, QLoRA, and related optimization techniques Build intelligent systems combining NLP, image analysis, and pattern recognition Develop and integrate retrieval-augmented generation (RAG) pipelines using LangChain, LlamaIndex, and vector databases like FAISS or Weaviate Apply statistical analysis, feature engineering, and advanced data analysis techniques using NumPy and Pandas Train and evaluate models with CNNs, transformers, and neural networks in PyTorch, TensorFlow, or Keras Package and deploy scalable inference APIs with FastAPI on AWS or Azure Collaborate directly with product and engineering teams to ship AI features into production Requirements Possess 1+ years of hands-on experience in data science, machine learning, or deep learning roles Demonstrate strong experience with Python, NumPy, Pandas, and modern ML frameworks (PyTorch or TensorFlow) Show practical understanding of transformers, CNNs, and neural network training pipelines Have exposure to LLMs, vector databases, and/or retrieval-augmented generation (RAG) systems Be familiar with FastAPI and deploying ML models to the cloud (AWS or Azure) Hold a solid grounding in statistics, data wrangling, and model evaluation techniques About Company: Softsensor.ai is a US- and India-based corporation focused on delivering outcomes to clients using data. Our expertise lies in a collection of people, methods, and accelerators to rapidly deploy solutions for our clients. Our principals have significant experience with leading global consulting firms & corporations and delivering large-scale solutions. We are focused on data science and analytics for improving process and organizational performance. We are working on cutting-edge data science technologies like NLP, CNN, and RNN and applying them in the business context.
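The RAG pipelines mentioned above (LangChain/LlamaIndex with FAISS or Weaviate) reduce, at their core, to retrieve-then-prompt. A toy, dependency-free sketch that substitutes bag-of-words similarity for dense embeddings; real pipelines use an embedding model and a vector index, and the prompt format here is an illustrative assumption:

```python
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; real pipelines use dense vectors."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list, k: int = 2) -> list:
    """Return the k documents most similar to the query (the FAISS step)."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query: str, docs: list) -> str:
    """Assemble retrieved context and the question into one LLM prompt."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"
```

In the stack the posting names, `embed` would be an embedding model, `retrieve` a FAISS/Weaviate index lookup, and `build_prompt` a LangChain or LlamaIndex prompt template.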
Posted 2 weeks ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Join us as an MLOps Engineer at Barclays, where you will be responsible for operationalizing cutting-edge machine learning and generative AI solutions, ensuring scalable, secure, and efficient deployment across infrastructure. You will work closely with data scientists, ML engineers, and business stakeholders to build and maintain robust MLOps pipelines, enabling rapid experimentation and reliable production implementation of AI models, including LLMs and real-time analytics systems. To be successful as an MLOps Engineer you should have experience with: Strong programming skills in Python and experience with ML libraries (e.g., scikit-learn, TensorFlow, PyTorch) Experience with Jenkins, GitHub Actions, or GitLab CI/CD for automating ML pipelines Strong knowledge of Docker and Kubernetes for scalable deployments Deep experience with AWS services (e.g., SageMaker, Bedrock, Lambda, CloudFormation, Step Functions, S3, and IAM). Managing infrastructure for training and inference using AWS S3, EC2, EKS, and Step Functions. Experience with Infrastructure as Code (e.g., Terraform, AWS CDK) Familiarity with model lifecycle management tools (e.g., MLflow, SageMaker Model Registry). Strong understanding of DevOps principles applied to ML workflows. Some Other Highly Valued Skills May Include Experience with Snowflake, Databricks for collaborative ML development and scalable data processing. Knowledge of data engineering tools (e.g., Apache Airflow, Kafka, Spark). Understanding of model interpretability, responsible AI, and governance. Contributions to open-source MLOps tools or communities. Strong leadership, communication, and cross-functional collaboration skills. Knowledge of data privacy, model governance, and regulatory compliance in AI systems. Exposure to LangChain, Vector DBs (e.g., FAISS, Pinecone), and retrieval-augmented generation (RAG) pipelines.
You may be assessed on key critical skills relevant for success in role, such as risk and controls, change and transformation, business acumen, strategic thinking and digital and technology, as well as job-specific technical skills. This role is based out of Pune. Purpose of the role To build and maintain infrastructure platforms and products that support applications and data systems, using hardware, software, networks, and cloud computing platforms as required with the aim of ensuring that the infrastructure is reliable, scalable, and secure. Ensure the reliability, availability, and scalability of the systems, platforms, and technology through the application of software engineering techniques, automation, and best practices in incident response. Accountabilities Build Engineering: Development, delivery, and maintenance of high-quality infrastructure solutions to fulfil business requirements ensuring measurable reliability, performance, availability, and ease of use. Including the identification of the appropriate technologies and solutions to meet business, optimisation, and resourcing requirements. Incident Management: Monitoring of IT infrastructure and system performance to measure, identify, address, and resolve any potential issues, vulnerabilities, or outages. Use of data to drive down mean time to resolution. Automation: Development and implementation of automated tasks and processes to improve efficiency and reduce manual intervention, utilising software scripting/coding disciplines. Security: Implementation of a secure configuration and measures to protect infrastructure against cyber-attacks, vulnerabilities, and other security threats, including protection of hardware, software, and data from unauthorised access. 
Teamwork: Cross-functional collaboration with product managers, architects, and other engineers to define IT Infrastructure requirements, devise solutions, and ensure seamless integration and alignment with business objectives via a data driven approach. Learning: Stay informed of industry technology trends and innovations, and actively contribute to the organization's technology communities to foster a culture of technical excellence and growth. Assistant Vice President Expectations To advise and influence decision making, contribute to policy development and take responsibility for operational effectiveness. Collaborate closely with other functions/ business divisions. Lead a team performing complex tasks, using well developed professional knowledge and skills to deliver on work that impacts the whole business function. Set objectives and coach employees in pursuit of those objectives, appraisal of performance relative to objectives and determination of reward outcomes If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard. The four LEAD behaviours are: L – Listen and be authentic, E – Energise and inspire, A – Align across the enterprise, D – Develop others. OR for an individual contributor, they will lead collaborative assignments and guide team members through structured assignments, identify the need for the inclusion of other areas of specialisation to complete assignments. They will identify new directions for assignments and/ or projects, identifying a combination of cross functional methodologies or practices to meet required outcomes. Consult on complex issues; providing advice to People Leaders to support the resolution of escalated issues. Identify ways to mitigate risk and developing new policies/procedures in support of the control and governance agenda. 
Take ownership for managing risk and strengthening controls in relation to the work done. Perform work that is closely related to that of other areas, which requires understanding of how areas coordinate and contribute to the achievement of the objectives of the organisation sub-function. Collaborate with other areas of work and business-aligned support areas to keep up to speed with business activity and the business strategy. Engage in complex analysis of data from multiple sources of information, internal and external sources such as procedures and practices (in other areas, teams, companies, etc.) to solve problems creatively and effectively. Communicate complex information. 'Complex' information could include sensitive information or information that is difficult to communicate because of its content or its audience. Influence or convince stakeholders to achieve outcomes. All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence and Stewardship – our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset – to Empower, Challenge and Drive – the operating manual for how we behave.
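One concrete piece of the MLOps pipeline work described in this posting is a CI promotion gate that compares a candidate model against the production baseline before deployment. The metric names and thresholds below are illustrative assumptions, not MLflow or SageMaker Model Registry APIs; in practice such a check would run in Jenkins/GitHub Actions against metrics pulled from the registry:

```python
# Hypothetical promotion gate for a CI/CD ML pipeline.
# Metric names ("auc") and thresholds are illustrative assumptions.
def promote(candidate: dict, baseline: dict,
            min_auc: float = 0.75, max_regression: float = 0.01) -> bool:
    """Promote only if the candidate clears an absolute AUC floor and
    does not regress against the current production model."""
    if candidate["auc"] < min_auc:
        return False  # fails the absolute quality bar
    if candidate["auc"] < baseline["auc"] - max_regression:
        return False  # meaningfully worse than production
    return True
```

The design choice worth noting: the gate enforces both an absolute floor and a relative non-regression check, so a weak baseline cannot be used to justify promoting a weak candidate.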
Posted 2 weeks ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
We welcome people from all backgrounds who seek the opportunity to help build a future where everyone and everything can move independently. If you have the curiosity, passion, and collaborative spirit, work with us, and let's move the world forward, together. Offices continue to be central to collaboration and Uber's cultural identity. Unless formally approved to work fully remotely, Uber expects employees to spend at least half of their work time in their assigned office. For certain roles, such as those based at green-light hubs, employees are expected to be in-office for 100% of their time. Please speak with your recruiter to better understand in-office expectations for this role. Accommodations may be available based on religious and/or medical conditions, or as required by applicable law. To request an accommodation, please reach out to accommodations@uber.com . What The Candidate Will Need / Bonus Points ---- What the Candidate Will Do ---- Develop Security Solutions: designing, developing, and implementing software solutions to enhance the security posture of the organization. Develop and evaluate large-scale machine learning models and systems Research: Researching new techniques and tools to enhance the organization's cyber defense capabilities. Present findings to business and executive audiences Collaborate with engineers and product managers to implement ideas and plan future roadmaps Optimize retrieval-augmented generation (RAG) systems for enhanced performance and relevance. Fine-tune large language models (LLMs) to improve predictive accuracy and operational efficiency. Implement agentic workflows to streamline processes and enhance decision-making. Collaboration and Communication: Work closely with cross-functional teams such as IAM, network operations, incident response, and compliance to implement ideas, plan future roadmaps and ensure a cohesive approach to cybersecurity.
Basic Qualifications Ph.D., M.S., or Bachelor's degree in Statistics, Operations Research, Computer Science, Engineering, or other quantitative field 5+ years of industry experience in software engineering Knowledge of the underlying mathematical foundations of machine learning, statistics, optimization, economics, and analytics Hands-on experience building and deploying ML models. Knowledge of experimental design and analysis Ability to use a language like Python or R to work efficiently at scale with large data sets Preferred Qualifications Knowledge of modern machine learning techniques applicable to the cybersecurity domain Proficiency in one or more of the following technologies: SQL, Spark, Hadoop Advanced understanding of statistics, causal inference, and machine learning Experience designing and analyzing large-scale online experiments Experience working with large-scale data sets using technologies like Hive, Presto, and Spark Experience with synthetic data generation. Proficiency in fine-tuning and optimizing large language models (LLMs). Experience in retrieval-augmented generation (RAG) systems. Familiarity with agentic workflows and their applications in machine learning and AI systems.
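The "synthetic data generation" preferred qualification can be illustrated with a small sketch that fabricates labeled login events for exercising a security detector. The field names, anomaly rate, and the odd-hour/unfamiliar-country heuristic are all illustrative assumptions, not any Uber system:

```python
import random

def synth_auth_logs(n: int, anomaly_rate: float = 0.05, seed: int = 7) -> list:
    """Generate synthetic login events; a small fraction are labeled
    anomalous (odd hour, unfamiliar country) for detector training/testing."""
    rng = random.Random(seed)  # seeded for reproducible test data
    users = [f"user{i}" for i in range(20)]
    logs = []
    for _ in range(n):
        if rng.random() < anomaly_rate:
            event = {"user": rng.choice(users),
                     "hour": rng.choice([2, 3, 4]),       # middle of the night
                     "country": rng.choice(["XX", "YY"]), # placeholder codes
                     "label": "anomalous"}
        else:
            event = {"user": rng.choice(users),
                     "hour": rng.randint(8, 18),          # working hours
                     "country": "IN",
                     "label": "normal"}
        logs.append(event)
    return logs
```

Synthetic data like this is useful when real incident data is scarce or too sensitive to share; the trade-off is that detectors can overfit to the generator's simplistic assumptions.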
Posted 2 weeks ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
We welcome people from all backgrounds who seek the opportunity to help build a future where everyone and everything can move independently. If you have the curiosity, passion, and collaborative spirit, work with us, and let's move the world forward, together. Offices continue to be central to collaboration and Uber's cultural identity. Unless formally approved to work fully remotely, Uber expects employees to spend at least half of their work time in their assigned office. For certain roles, such as those based at green-light hubs, employees are expected to be in-office for 100% of their time. Please speak with your recruiter to better understand in-office expectations for this role. Accommodations may be available based on religious and/or medical conditions, or as required by applicable law. To request an accommodation, please reach out to accommodations@uber.com . What The Candidate Will Need / Bonus Points ---- What the Candidate Will Do ---- Develop Security Solutions: designing, developing, and implementing software solutions to enhance the security posture of the organization. Develop and evaluate large-scale machine learning models and systems Research: Researching new techniques and tools to enhance the organization's cyber defense capabilities. Present findings to business and executive audiences Collaborate with engineers and product managers to implement ideas and plan future roadmaps Optimize retrieval-augmented generation (RAG) systems for enhanced performance and relevance. Fine-tune large language models (LLMs) to improve predictive accuracy and operational efficiency. Implement agentic workflows to streamline processes and enhance decision-making. Collaboration and Communication: Work closely with cross-functional teams such as IAM, network operations, incident response, and compliance to implement ideas, plan future roadmaps and ensure a cohesive approach to cybersecurity.
Basic Qualifications Ph.D., M.S., or Bachelor's degree in Statistics, Operations Research, Computer Science, Engineering, or other quantitative field 5+ years of industry experience in software engineering Knowledge of the underlying mathematical foundations of machine learning, statistics, optimization, economics, and analytics Hands-on experience building and deploying ML models. Knowledge of experimental design and analysis Ability to use a language like Python or R to work efficiently at scale with large data sets Preferred Qualifications Knowledge of modern machine learning techniques applicable to the cybersecurity domain Proficiency in one or more of the following technologies: SQL, Spark, Hadoop Advanced understanding of statistics, causal inference, and machine learning Experience designing and analyzing large-scale online experiments Experience working with large-scale data sets using technologies like Hive, Presto, and Spark Experience with synthetic data generation. Proficiency in fine-tuning and optimizing large language models (LLMs). Experience in retrieval-augmented generation (RAG) systems. Familiarity with agentic workflows and their applications in machine learning and AI systems.
Posted 2 weeks ago
0 years
0 Lacs
Pune, Maharashtra, India
Remote
Description Key Responsibilities: Applies engineering and/or scientific skills to technical processes with support from experienced team members. Participates as a team member, helps define/refine methods, and actively contributes towards team goals. Carries out engineering responsibilities using accepted methods and practical experience. Demonstrates good understanding and applies knowledge of an engineering discipline. Continues to develop capability to create engineering solutions through training and experience. Responsibilities Qualifications: Master of Science, Bachelor of Science, or equivalent technical degree required. This position may require licensing for compliance with export controls or sanctions regulations. Competencies Applies Principles of Statistical Methods: Analyzes technical data using descriptive statistics, probability distributions, graphical analysis, and statistical inference; models relationships between response and independent variables using analysis of variance, regression, and design of experiments to make rigorous, data-based decisions. Product Failure Mode Avoidance: Mitigates potential product failure modes by identifying interfaces, functions, functional requirements, interactions, control factors, noise factors, and prioritized potential failure modes and potential failure causes for the system of interest to effectively and efficiently improve the reliability of Cummins’ products. Product Failure Reporting and Corrective/Preventive Action Systems: Defines and leads a process to record, prioritize, and resolve product failures using cross-functional reviews, rigorous problem-solving methods, failed parts transfer processes, data management tools, and project management practices to effectively and efficiently improve the reliability of the product. 
Product Problem Solving: Solves product problems using a process that protects the customer; determines the assignable cause; implements robust, data-based solutions; and identifies the systemic root causes and recommended actions to prevent problem reoccurrence. Product Reliability and Reliability Risk Management: Plans and manages critical reliability activities during new product development by preventing failures before hardware, detecting failures before the customer does, and improving products before production in order to release a reliable and durable product; evaluates key technical and program measures to assess the launch readiness of a new product using prescribed indicators, measures, risk scales, and methods of tracking to focus attention on metrics to reduce risk and improve reliability. Quantitative Reliability Analysis: Analyzes failure data from existing and/or new products by establishing failure rate models for use in assessing the feasibility of meeting reliability targets, comparing the reliability of product alternatives, estimating reliability and product coverage costs, identifying emerging issues, or verifying that improvements implemented have had the desired reliability improvement. Reliability Test Planning: Develops and analyzes a test plan acknowledging the relationship among reliability, sample size, distribution parameters, and confidence; develops system-level reliability test plan by considering schedule, number of units, applications, noises, and locations to find unknown failure modes to improve reliability; creates an accelerated test plan by increased use, overstress testing, or combining multiple stresses to build and extrapolate a model to estimate reliability under normal use conditions. Customer Focus: Building strong customer relationships and delivering customer-centric solutions. Global Perspective: Taking a broad view when approaching issues, using a global lens. 
Values Differences: Recognizing the value that different perspectives and cultures bring to an organization. Qualifications Skills and Knowledge: Knowledge of engine and Genset components, functions, and failure modes. Experience with MS Office tools (Word, PowerPoint, Excel) is preferred. Proficiency in Excel programming (VBA) and Power Pivot is desirable. Familiarity with statistical software packages (e.g., Minitab, Weibull++, Winsmith, JMP, R) is preferred. Knowledge of engine performance measurement is advantageous. Awareness of warranty data analysis and life data analysis is preferable. Experience in service and quality functions is desirable. Strong analytical skills for handling large datasets and deriving meaningful conclusions. Six Sigma Yellow Belt certification required; Green Belt certification preferred. Experience Basic relevant work experience desired, such as internship, co-op, or other pertinent work experience. This is a Hybrid role. Job Quality Organization Cummins Inc. Role Category Remote Job Type Exempt - Experienced ReqID 2414907 Relocation Package No
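The quantitative reliability analysis this role describes typically starts from the two-parameter Weibull model used by tools like Weibull++. A minimal sketch of its reliability function and mean time to failure; the parameter values in any real analysis would come from failure data fitted in such a tool, not chosen by hand:

```python
from math import exp, gamma

def weibull_reliability(t: float, beta: float, eta: float) -> float:
    """R(t) = exp(-(t/eta)**beta): probability a unit survives past time t.
    beta is the shape parameter, eta the characteristic life (scale)."""
    return exp(-((t / eta) ** beta))

def weibull_mttf(beta: float, eta: float) -> float:
    """Mean time to failure of a two-parameter Weibull:
    eta * Gamma(1 + 1/beta)."""
    return eta * gamma(1.0 + 1.0 / beta)
```

Note that beta < 1 indicates infant mortality, beta = 1 a constant failure rate (exponential), and beta > 1 wear-out; by construction, R(eta) ≈ 36.8% regardless of beta.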
Posted 2 weeks ago
4.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Description Alimentation Couche-Tard Inc. (ACT) is a global Fortune 200 company and a leader in the convenience store and fuel space, with over 16,700 stores in 31 countries serving more than 9 million customers each day. The India Data & Analytics Global Capability Centre is an integral part of ACT’s Global Data & Analytics Team, and the Senior Data Scientist will be a key player on this team helping to grow analytics globally at ACT. The hired candidate will partner with multiple departments, including Global Marketing, Merchandising, Global Technology, and Business Units. About The Role The incumbent will be responsible for delivering advanced analytics projects that drive business results, including interpreting business problems, selecting the appropriate methodology, data cleaning, exploratory data analysis, model building, and creation of polished deliverables. Responsibilities Analytics & Strategy Analyse large-scale structured and unstructured data; develop deep-dive analyses and machine learning models in retail, marketing, merchandising, and other areas of the business Utilize data mining, statistical and machine learning techniques to derive business value from store, product, operations, financial, and customer transactional data Apply multiple algorithms or architectures and recommend the best model with an in-depth description to evangelize data-driven business decisions Utilize cloud setup to extract processed data for statistical modelling and big data analysis, and visualization tools to represent large sets of time series/cross-sectional data Operational Excellence Follow industry standards in coding solutions and follow the programming life cycle to ensure standard practices across the project Structure hypotheses, build thoughtful analyses, develop underlying data models and bring clarity to previously undefined problems Partner with Data Engineering to build, design and maintain core data infrastructure, pipelines and data workflows to automate dashboards
and analyses Stakeholder Engagement Work collaboratively across multiple sets of stakeholders – Business functions, Data Engineers, Data Visualization experts – to deliver on project deliverables Articulate complex data science models to business teams and present the insights in easily understandable and innovative formats Job Requirements Education Bachelor’s degree required, preferably with a quantitative focus (Statistics, Business Analytics, Data Science, Math, Economics, etc.) Master’s degree preferred (MBA/MS Computer Science/M.Tech Computer Science, etc.) Relevant Experience 3–4 years of relevant working experience in a data science/advanced analytics role Behavioural Skills Delivery Excellence Business disposition Social intelligence Innovation and agility Knowledge Functional Analytics (Supply chain analytics, Marketing Analytics, Customer Analytics) Statistical modelling using analytical tools (R, Python, KNIME, etc.) and big data technologies Knowledge of statistics and experimental design (A/B testing, hypothesis testing, causal inference) Practical experience building scalable ML models, feature engineering, model evaluation metrics, and statistical inference. Practical experience deploying models using MLOps tools and practices (e.g., MLflow, DVC, Docker, etc.) Strong coding proficiency in Python (Pandas, Scikit-learn, PyTorch/TensorFlow, etc.) Big data technologies & frameworks (AWS, Azure, GCP, Hadoop, Spark, etc.) Enterprise reporting systems, relational (MySQL, Microsoft SQL Server, etc.) and non-relational (MongoDB, DynamoDB) database management systems and Data Engineering tools Business intelligence & reporting (Power BI, Tableau, Alteryx, etc.) Microsoft Office applications (MS Excel, etc.)
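The A/B testing and hypothesis-testing knowledge this listing asks for can be sketched in a few lines. The following is a minimal two-proportion z-test in pure Python, with invented conversion numbers, not a production analysis:

```python
import math

def ab_test_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: returns (z statistic, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# hypothetical experiment: 4% vs 5% conversion on 5,000 users per arm
z, p = ab_test_z(conv_a=200, n_a=5000, conv_b=250, n_b=5000)
```

Here `p` falls below 0.05, so the lift would be called significant at the conventional level; real experiments also need power analysis and multiple-testing care.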
Posted 2 weeks ago
4.0 - 5.0 years
0 Lacs
Vadodara, Gujarat, India
On-site
Job Description We are seeking an experienced AI Engineer with 4-5 years of hands-on experience in designing and implementing AI solutions. The ideal candidate should have a strong foundation in developing AI/ML-based solutions, including expertise in Computer Vision (OpenCV). Additionally, proficiency in developing, fine-tuning, and deploying Large Language Models (LLMs) is essential. As an AI Engineer, the candidate will work on cutting-edge AI applications, using LLMs like GPT, LLaMA, or custom fine-tuned models to build intelligent, scalable, and impactful solutions, and will collaborate closely with Product, Data Science, and Engineering teams to define, develop, and optimize AI/ML models for real-world business applications. Key Responsibilities Research, design, and develop AI/ML solutions for real-world business applications; RAG experience is a must. Collaborate with Product & Data Science teams to define core AI/ML platform features. Analyze business requirements and identify pre-trained models that align with use cases. Work with multi-agent AI frameworks like LangChain, LangGraph, and LlamaIndex. Train and fine-tune LLMs (GPT, LLaMA, Gemini, etc.) for domain-specific tasks. Implement Retrieval-Augmented Generation (RAG) workflows and optimize LLM inference. Develop NLP-based GenAI applications, including chatbots, document automation, and AI agents. Preprocess, clean, and analyze large datasets to train and improve AI models. Optimize LLM inference speed, memory efficiency, and resource utilization. Deploy AI models in cloud environments (AWS, Azure, GCP) or on-premises infrastructure. Develop APIs, pipelines, and frameworks for integrating AI solutions into products. Conduct performance evaluations and fine-tune models for accuracy, latency, and scalability. Stay updated with advancements in AI, ML, and GenAI technologies. Required Skills & Experience AI & Machine Learning: Strong experience in developing & deploying AI/ML models.
Generative AI & LLMs: Expertise in LLM pretraining, fine-tuning, and optimization. NLP & Computer Vision: Hands-on experience in NLP, Transformers, OpenCV, YOLO, R-CNN. AI Agents & Multi-Agent Frameworks: Experience with LangChain, LangGraph, LlamaIndex. Deep Learning & Frameworks: Proficiency in TensorFlow, PyTorch, Keras. Cloud & Infrastructure: Strong knowledge of AWS, Azure, or GCP for AI deployment. Model Optimization: Experience in LLM inference optimization for speed & memory efficiency. Programming & Development: Proficiency in Python and experience in API development. Statistical & ML Techniques: Knowledge of Regression, Classification, Clustering, SVMs, Decision Trees, Neural Networks. Debugging & Performance Tuning: Strong skills in unit testing, debugging, and model evaluation. Hands-on experience with Vector Databases (FAISS, ChromaDB, Weaviate, Pinecone). Good To Have Experience with multi-modal AI (text, image, video, speech processing). Familiarity with containerization (Docker, Kubernetes) and model serving (FastAPI, Flask, Triton).
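The RAG workflow this listing centers on has a simple core: retrieve the most relevant documents, then prepend them to the prompt. A minimal sketch, using a toy bag-of-words similarity as a stand-in for a real embedding model and vector database:

```python
import math
import re
from collections import Counter

def embed(text):
    """Toy bag-of-words vector; a stand-in for a real embedding model."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    dot = sum(count * b[token] for token, count in a.items())
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, docs, k=2):
    """Rank documents by similarity to the query and return the top-k."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "Refunds are processed within 5 business days.",
    "Our office is closed on public holidays.",
    "To request a refund, open a support ticket.",
]
context = retrieve("how do I get a refund", docs)
# the augmented prompt would then be sent to the LLM
prompt = "Answer using only this context:\n" + "\n".join(context) + "\nQuestion: how do I get a refund"
```

In practice the `embed`/`retrieve` pair is replaced by an embedding model plus a vector store such as FAISS or Pinecone, but the augment-then-generate shape is the same.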
Posted 2 weeks ago
15.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Our Company Changing the world through digital experiences is what Adobe’s all about. We give everyone—from emerging artists to global brands—everything they need to design and deliver exceptional digital experiences! We’re passionate about empowering people to create beautiful and powerful images, videos, and apps, and transform how companies interact with customers across every screen. We’re on a mission to hire the very best and are committed to creating exceptional employee experiences where everyone is respected and has access to equal opportunity. We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours! About The Team The Adobe Express team is building a path-breaking, all-in-one creative application for all platforms - web, desktop, and mobile. This platform combines the power of Adobe’s creative technologies with intuitive, AI-enhanced workflows that empower anyone to create standout content quickly and effortlessly. The Opportunity We’re looking for a Senior Machine Learning Engineer (Architect) to play a key role in shaping the next generation of AI-powered creative experiences in Adobe Express. This role will build high-impact AI workflows for Adobe Express in the image-editing space, concentrating on brand-new generative AI workflows, including personalized assistants and intelligent creative tools, and will accelerate AI culture by sharing knowledge, encouraging experimentation, and improving developer efficiency. Experience Requirements: 15+ years of proven experience in hands-on Machine Learning work. As a Machine Learning Engineer, you will: Build and scale advanced ML models to make image editing easy and seamless for users Develop AI Agents for Adobe Express in the imaging space Partner closely with product, design and engineering teams across Adobe to integrate Adobe’s latest generative AI capabilities into user-facing features.
Help drive a culture of AI innovation and learning through internal knowledge-sharing, best-practice documentation, and experimentation frameworks that boost team productivity. Detailed Responsibilities: Research, design, and implement advanced ML models and scalable pipelines across training, inference, and deployment stages, using techniques in computer vision, NLP, deep learning, and generative AI. Integrate Large Language Models (LLMs) and agent-based frameworks to support multimodal creative workflows—enabling rich, context-aware, dynamic user experiences. Design, implement, and optimize subagent architectures for supporting modular and intelligent assistance across various creative tasks in Adobe Express. Collaborate with multi-functional teams to translate product requirements into ML solutions—especially those related to Harmony GenAI, smart recommendations, and generative tooling. Contribute to the development of internal platforms for model experimentation, A/B testing, performance monitoring, and continuous improvement. Stay up-to-date with evolving ML/GenAI research, tools, and frameworks—including federated learning, retrieval-augmented generation, and optimization for real-time inference. Champion an AI-first culture by mentoring peers, promoting learning opportunities, and encouraging innovation at both technical and organizational levels. Special Skills Requirements: Proficiency in Python for model development and C++ for systems integration. Strong hands-on experience with TensorFlow, PyTorch, and emerging GenAI toolkits. Experience working with LLMs, agent architectures, and user interfaces powered by AI technology. Deep understanding of computer vision and NLP techniques, especially for multimodal AI applications. Adobe is proud to be an Equal Employment Opportunity employer. 
We do not discriminate based on gender, race or color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law. Learn more. Adobe aims to make Adobe.com accessible to any and all users. If you have a disability or special need that requires accommodation to navigate our website or complete the application process, email accommodations@adobe.com or call (408) 536-3015.
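The subagent architecture this role mentions usually reduces to a router that inspects a request and hands it to the matching specialist agent. A minimal sketch of that dispatch pattern; all agent names are hypothetical and this is not Adobe's actual architecture:

```python
# Hypothetical subagents for creative tasks; real ones would call models/tools.
def resize_agent(request):
    return "resized: " + request

def caption_agent(request):
    return "caption: " + request

# keyword -> subagent routing table
ROUTES = {"resize": resize_agent, "caption": caption_agent}

def dispatch(request):
    """Route a creative-task request to the first subagent whose keyword matches."""
    for keyword, agent in ROUTES.items():
        if keyword in request.lower():
            return agent(request)
    return "no agent available for: " + request

result = dispatch("Resize this banner to 1080px")
```

Production routers typically use an LLM classifier rather than keywords, but the modularity benefit is the same: each subagent can be developed, tested, and swapped independently.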
Posted 2 weeks ago
2.0 years
0 Lacs
India
Remote
This isn't your typical DevOps role. This is your chance to engineer the backbone of a next-gen AI-powered SaaS platform—where modular agents drive dynamic UI experiences, all running on a serverless AWS infrastructure with a Salesforce and SaaS-native backend. We're not building features—we're building an intelligent agentic ecosystem. If you've led complex multi-cloud builds, automated CI/CD pipelines with Terraform, and debugged AI systems in production, this is your arena. About Us We're a forward-thinking organization on a mission to reshape how businesses leverage cloud technologies and AI. Our approach is centered around delivering high-impact solutions that unify platforms across AWS, enterprise SaaS, and Salesforce. We don't just deliver software; we craft robust product ecosystems that redefine user interactions, streamline processes, and accelerate growth for our clients. The Role We are seeking a hands-on Agentic AI Ops Engineer who thrives at the intersection of cloud infrastructure, AI agent systems, and DevOps automation. In this role, you will build and maintain the CI/CD infrastructure for Agentic AI solutions using Terraform on AWS, while also developing, deploying, and debugging intelligent agents and their associated tools. This position is critical to ensuring scalable, traceable, and cost-effective delivery of agentic systems in production environments. The Responsibilities CI/CD Infrastructure for Agentic AI Design, implement, and maintain CI/CD pipelines for Agentic AI applications using Terraform, AWS CodePipeline, CodeBuild, and related tools. Automate deployment of multi-agent systems and associated tooling, ensuring version control, rollback strategies, and consistent environment parity across dev/test/prod Agent Development & Debugging Collaborate with ML/NLP engineers to develop and deploy modular, tool-integrated AI agents in production.
Lead the effort to create debuggable agent architectures, with structured logging, standardized agent behaviors, and feedback integration loops. Build agent lifecycle management tools that support quick iteration, rollback, and debugging of faulty behaviors Monitoring, Tracing & Reliability Implement end-to-end observability for agents and tools, including runtime performance metrics, tool invocation traces, and latency/accuracy tracking. Design dashboards and alerting mechanisms to capture agent failures, degraded performance, and tool bottlenecks in real-time. Build lightweight tracing systems that help visualize agent workflows and simplify root cause analysis Cost Optimization & Usage Analysis Monitor and manage cost metrics associated with agentic operations including API call usage, toolchain overhead, and model inference costs. Set up proactive alerts for usage anomalies, implement cost dashboards, and propose strategies for reducing operational expenses without compromising performance Collaboration & Continuous Improvement Work closely with product, backend, and AI teams to evolve the agentic infrastructure design and tool orchestration workflows. Drive the adoption of best practices for Agentic AI DevOps, including retraining automation, secure deployments, and compliance in cloud-hosted environments. Participate in design reviews, postmortems, and architectural roadmap planning to continuously improve reliability and scalability Requirements 2+ years of experience in DevOps, MLOps, or Cloud Infrastructure with exposure to AI/ML systems. Deep expertise in AWS serverless architecture, including hands-on experience with: AWS Lambda - function design, performance tuning, cold-start optimization. Amazon API Gateway - managing REST/HTTP APIs and integrating with Lambda securely. Step Functions - orchestrating agentic workflows and managing execution states. S3, DynamoDB, EventBridge, SQS - event-driven and storage patterns for scalable AI systems.
Strong proficiency in Terraform to build and manage serverless AWS environments using reusable, modular templates Experience deploying and managing CI/CD pipelines for serverless and agent-based applications using AWS CodePipeline, CodeBuild, CodeDeploy, or GitHub Actions Hands-on experience with agent and tool development in Python, including debugging and performance tuning in production. Solid understanding of IAM roles and policies, VPC configuration, and least-privilege access control for securing AI systems. Deep understanding of monitoring, alerting, and distributed tracing systems (e.g., CloudWatch, Grafana, OpenTelemetry). Ability to manage environment parity across dev, staging, and production using automated infrastructure pipelines. Excellent debugging, documentation, and cross-team communication skills Benefits Health Insurance, PTO, and Leave time Ongoing paid professional training and certifications Fully Remote work Opportunity Strong Onboarding & Training program Work Timings - 1 pm - 10 pm IST Next Steps We're looking for someone who already embodies the spirit of a boundary-breaking AI Technologist—someone who's ready to own ambitious projects and push the boundaries of what LLMs can do. Apply Now: Send us your resume and answer a few key questions about your experience and vision Show Us Your Ingenuity: Be prepared to talk shop on your boldest AI solutions and how you overcame the toughest technical hurdles Collaborate & Ideate: If selected, you'll workshop a real-world scenario with our team—so we can see firsthand how your mind works This is your chance to leave a mark on the future of AI—one LLM agent at a time. We're excited to hear from you! Our Belief We believe extraordinary things happen when technology and human creativity unite. By empowering teams with generative AI, we free them to focus on meaningful relationships, innovative solutions, and real impact.
It's more than just code—it's about sparking a revolution in how people interact with information, solve problems, and propel businesses forward. If this resonates with you—if you're driven, daring, and ready to build the next wave of AI innovation—then let's do this. Apply now and help us shape the future. About Expedite Commerce At Expedite Commerce, we believe that people achieve their best when technology enables them to build relationships and explore new ideas. So we build systems that free you up to focus on your customers and drive innovations. We have a great commerce platform that changes the way you do business! See more about us at expeditecommerce.com. You can also read about us on https://www.g2.com/products/expedite-commerce/reviews, and on Salesforce Appexchange/ExpediteCommerce. EEO Statement All qualified applicants to Expedite Commerce are considered for employment without regard to race, color, religion, age, sex, sexual orientation, gender identity, national origin, disability, veteran's status or any other protected characteristic.
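The "debuggable agent architectures with structured logging" and "tool invocation traces" this listing calls for boil down to wrapping every tool call in a record that captures inputs, outcome, and latency. A minimal sketch with a hypothetical trace schema; production systems would ship these records to CloudWatch or an OpenTelemetry pipeline rather than print them:

```python
import json
import time
import uuid

def traced(tool_name, fn, *args, **kwargs):
    """Run a tool call and emit a structured JSON trace record (hypothetical schema)."""
    record = {"trace_id": str(uuid.uuid4()), "tool": tool_name, "args": args}
    start = time.perf_counter()
    try:
        record["result"] = fn(*args, **kwargs)
        record["status"] = "ok"
    except Exception as exc:
        record["status"] = "error"
        record["error"] = repr(exc)
    record["latency_ms"] = round((time.perf_counter() - start) * 1000, 2)
    print(json.dumps(record))  # in production: send to the observability pipeline
    return record

# hypothetical agent tool: a fraud-risk account lookup
rec = traced("lookup_account", lambda acct: {"acct": acct, "risk": "low"}, "A-123")
```

Because every call carries a `trace_id`, tool name, status, and latency, dashboards and root-cause analysis can be built by aggregating these records instead of grepping free-form logs.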
Posted 2 weeks ago
0 years
2 - 9 Lacs
Chennai
On-site
Comfort level in following Python project management best practices (use of setup.py, logging, pytest, relative module imports, Sphinx docs, etc.) Familiarity with the use of GitHub (clone, fetch, pull/push, raising issues and PRs, etc.) High familiarity with the use of DL theory/practices in NLP applications Comfort level coding in Hugging Face, LangChain, Chainlit, TensorFlow and/or PyTorch, Scikit-learn, NumPy and Pandas Comfort level using two or more open-source NLP modules like SpaCy, TorchText, fastai.text, farm-haystack, and others Knowledge of fundamental text data processing (like use of regex, token/word analysis, spelling correction/noise reduction in text, segmenting noisy unfamiliar sentences/phrases at the right places, deriving insights from clustering, etc.) Have implemented real-world BERT or other transformer fine-tuned models (sequence classification, NER or QA) from data preparation and model creation through inference and deployment Use of GCP services like BigQuery, Cloud Functions, Cloud Run, Cloud Build, Vertex AI Good working knowledge of other open-source packages to benchmark and derive summaries Experience in using GPU/CPU of cloud and on-prem infrastructures Skillset to leverage cloud platforms for Data Engineering, Big Data and ML needs Use of Docker (experience with experimental Docker features, docker-compose, etc.) Familiarity with orchestration tools such as Airflow and Kubeflow Experience in CI/CD and infrastructure-as-code tools like Terraform Kubernetes or any other containerization tool, with experience in Helm, Argo Workflows, etc. Ability to develop APIs with compliant, ethical, secure and safe AI tools Good UI skills to visualize and build better applications using Gradio, Dash, Streamlit, React, Django, etc. Deeper understanding of JavaScript, CSS, Angular, HTML, etc., is a plus. Education: Bachelor’s or Master’s Degree in Computer Science, Engineering, Maths or Science. Completion of modern NLP/LLM courses or participation in open competitions is also welcomed.
Design NLP/LLM/GenAI applications/products by following robust coding practices Explore SoTA models/techniques so that they can be applied to automotive industry use cases Conduct ML experiments to train/infer models; if need be, build models that abide by memory & latency restrictions Deploy REST APIs or a minimalistic UI for NLP applications using Docker and Kubernetes tools Showcase NLP/LLM/GenAI applications in the best way possible to users through web frameworks (Dash, Plotly, Streamlit, etc.) Converge multiple bots into super apps using LLMs with multimodalities Develop agentic workflows using AutoGen, Agentbuilder, LangGraph Build modular AI/ML products that can be consumed at scale. Data Engineering: Skillsets to perform distributed computing (specifically parallelism and scalability in data processing, modeling and inferencing through Spark, Dask, or RAPIDS cuDF) Ability to build Python-based APIs (e.g., use of FastAPI/Flask/Django for APIs) Experience in Elasticsearch, Apache Solr, and vector databases is a plus.
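The "fundamental text data processing" this role lists (regex, noise reduction, cleaning noisy text) can be sketched with a few stock patterns. A minimal, assumption-laden example; real pipelines would tune each rule to the corpus:

```python
import re

def clean_text(text):
    """Basic noise reduction: strip HTML tags, squash repeated characters, drop punctuation."""
    text = re.sub(r"<[^>]+>", " ", text)           # remove HTML remnants
    text = re.sub(r"(.)\1{2,}", r"\1\1", text)     # "soooo" -> "soo" (cap repeats at 2)
    text = re.sub(r"[^a-zA-Z0-9\s]", " ", text)    # keep only alphanumerics
    text = re.sub(r"\s+", " ", text).strip().lower()
    return text

tokens = clean_text("<p>Greeaaat   product!!! 10/10</p>").split()
# tokens -> ["greeaat", "product", "10", "10"]
```

Each `re.sub` is a deliberate trade-off; for example, capping character repeats normalizes social-media emphasis but would also alter legitimate triple letters, which is why such rules are corpus-specific.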
Posted 2 weeks ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
The Senior DevOps Engineer has an elevated role in bridging the gap between development and networking; the main areas of focus are low-level infrastructure and monitoring of all company services. This role is responsible for ownership of the build and deployment pipelines of our products, taking the lead in advising on product and application development as necessary. Full-time | Hybrid - 3 days in office | Based out of Hyderabad, India Rs.12-20LPA (based on experience) Essential Duties and Responsibilities include the following. Other duties may be assigned. Collaborate on requirements with internal and external clients, providing a high level of service and using sound judgement when making or influencing the decisions of others. Responsible for the design, development, testing, and performance of our build, deployment, and networking layers of the platform/infrastructure being built. Implement and manage the CI/CD pipelines to contribute to the overall product roadmaps of both internal and external 3rd party systems by providing seamless build and deployment processes. Tune monitoring of application, systems, or AWS metrics, improving our internal automation through the use of “Infrastructure as Code” tools. Responsible for handling escalated issues related to team projects, collaborating with management as needed. Design, develop and implement infrastructure cost-savings measures. Identify and debug issues in production, resolve them, and work to prevent their reoccurrence. Be a resource for development teams; provide a specific focus on improving reliability and performance. Assist management with interviewing candidates, training team members, planning, assigning, and monitoring work for the team. Other duties as assigned by management. Must be able to come to work promptly and regularly. Must be able to take direction and work well with others. Must be able to work under the stress of deadlines.
Must be able to concentrate and perform accurately. Minimum Qualifications: Bachelor’s degree in Computer Science, Cybersecurity, Networking, or related field. Minimum 5 years’ experience in at least two IT disciplines, including technical architecture, application development, or operations. Relevant experience can be considered in lieu of level of degree. To perform this job successfully, an individual should have knowledge of Word Processing software; Spreadsheet software and Design software. Ability to work with mathematical concepts such as probability and statistical inference, and fundamentals of plane and solid geometry and trigonometry. Ability to respond to common inquiries or complaints from customers, regulatory agencies, or members of the business community. Ability to effectively present information to top management, public groups, and/or boards of directors. Physical Demands The physical demands described here are representative of those that must be met by an employee to successfully perform the essential functions of this job. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions. While performing the duties of this Job, the employee is occasionally required to stand; walk and sit. The employee must occasionally lift and/or move up to 10 pounds. Work Environment The work environment characteristics described here are representative of those an employee encounters while performing the essential functions of this job. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions. The noise level in the work environment is usually moderate. 
Foundation Finance Company provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws. This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation and training. These benefits are designed to support our employees in their professional growth, health, and overall well-being. Eligibility, coverage details, and enrollment processes will be provided during the onboarding process. At Foundation Finance Company, we are committed to fostering a positive work environment where employees can thrive both personally and professionally. Show more Show less
Posted 2 weeks ago
3.0 years
5 - 10 Lacs
Noida
On-site
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. Primary Responsibilities: Build platform features for the AI/ML lifecycle, including data processing, AI inference (batch, real-time, hybrid) and end-to-end MLOps for non-generative and generative use-cases Utilize deep knowledge of AI/ML systems engineering to define future architecture patterns for a multi-cloud platform Develop robust, scalable, and maintainable code that meets high standards of quality and performance Utilize engineering expertise to coach and influence enterprise data scientists and ML engineers to adopt scalable, robust process architecture for AI/ML use-cases Stay up-to-date with the latest advancements in AI/ML technology and introduce innovative techniques and tools to the team. Mentor and develop junior engineers on the team. Participate in the entire software development lifecycle – design, implementation, testing, CI/CD and production operations Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment).
The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so Required Qualifications: Master’s degree or Ph.D. with 3+ years of relevant experience or a bachelor's degree and at least 4+ years of relevant experience 4+ years industry experience as a Software Engineer, Software Developer, or AI/ML Engineer, with at least 3+ years in senior engineering roles with increasing scope 3+ years of demonstrable experience with AI/ML Engineering including data pipelines, model inference and MLOps for batch and real-time endpoints on cloud platforms (e.g. Databricks, Azure, AWS, GCP, Snowflake) 3+ years of experience with demonstrable proficiency in programming languages such as Python or Java 3+ years of demonstrable proficiency with querying and data processing tools (e.g. PySpark, SQL) as well as command-line tools (shell, regex, cloud CLI) Preferred Qualifications: Experience with DevOps practices, including CI/CD, containerization (Docker, Kubernetes), and infrastructure as code Experience with AI/ML development in industries such as healthcare, aerospace, insurance Deep understanding of AI/ML lifecycle and variations for different use-cases (non-generative, generative) Proven success in collaborating with geographically distributed teams At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes.
We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
Posted 2 weeks ago
With the rapid growth of technology and data-driven decision making, the demand for professionals with expertise in inference is on the rise in India. Inference jobs involve using statistical methods to draw conclusions from data and make predictions based on available information. From data analysts to machine learning engineers, there are various roles in India that require inference skills.
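As a concrete illustration of the kind of statistical inference described above, the minimal sketch below estimates a population mean from a sample and reports a 95% confidence interval using the normal approximation. The sample values are purely hypothetical and only the Python standard library is used:

```python
import math
import statistics

# Hypothetical sample of daily conversion rates (illustrative data only)
sample = [0.12, 0.15, 0.11, 0.14, 0.13, 0.16, 0.12, 0.14]

mean = statistics.mean(sample)
# Standard error of the mean: sample stdev divided by sqrt(n)
sem = statistics.stdev(sample) / math.sqrt(len(sample))

# 95% confidence interval via the normal approximation (z = 1.96)
lower, upper = mean - 1.96 * sem, mean + 1.96 * sem
print(f"Estimated mean: {mean:.3f}, 95% CI: ({lower:.3f}, {upper:.3f})")
```

The interval quantifies the uncertainty in the estimate: a wider interval means less confidence in any single prediction, which is exactly the kind of reasoning inference roles apply to business data every day.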
India's major tech hubs are known for their thriving technology industries and are actively hiring professionals with expertise in inference.
The average salary range for inference professionals in India varies based on experience level. Entry-level positions may start at around INR 4-6 lakhs per annum, while experienced professionals can earn upwards of INR 12-15 lakhs per annum.
In the field of inference, a typical career path may start as a Data Analyst or Junior Data Scientist, progress to a Data Scientist or Machine Learning Engineer, and eventually lead to roles like Senior Data Scientist or Principal Data Scientist. With experience and expertise, professionals can also move into leadership positions such as Data Science Manager or Chief Data Scientist.
In addition to expertise in inference, professionals in India may benefit from having skills in programming languages such as Python or R, knowledge of machine learning algorithms, experience with data visualization tools like Tableau or Power BI, and strong communication and problem-solving abilities.
As you explore opportunities in the inference job market in India, remember to prepare thoroughly by honing your skills, gaining practical experience, and staying updated with industry trends. With dedication and confidence, you can embark on a rewarding career in this field. Good luck!