
515 MLOps Jobs - Page 6

Set Up a Job Alert
JobPe aggregates listings for easy access; you apply directly on the original job portal.

5.0 - 9.0 years

0 Lacs

haryana

On-site

As a DevOps/Data Platform Engineer at our company, you will play a crucial role in supporting our cloud-based infrastructure, CI/CD pipelines, and data platform operations. Your responsibilities will include building and managing CI/CD pipelines using tools like GitHub Actions, Jenkins, and Build Piper; provisioning infrastructure with Terraform, Ansible, and cloud services; and managing containers using Docker, Kubernetes, and registries. You will also support data platforms such as Snowflake and Databricks and the underlying data infrastructure, and monitor systems using Grafana, Prometheus, and Datadog. Additionally, securing environments with tools like AWS Secrets Manager and enabling MLOps pipelines will be part of your daily work to optimize infrastructure for performance and cost.

To excel in this role, you should have strong experience in DevOps, cloud platforms, and Infrastructure as Code (IaC). Proficiency in Linux, automation, and system management is essential, along with familiarity with data platforms like Snowflake and Databricks and with CI/CD practices. Excellent troubleshooting skills, collaboration abilities, and effective communication are also required. Experience with ML model deployment, cost optimization, and infrastructure testing is advantageous, and familiarity with data security best practices is a plus.

This is a full-time, permanent position located in Gurgaon. If you possess the required experience and skills and are capable of meeting the outlined responsibilities, we encourage you to apply.
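
For illustration only (not part of the posting), a minimal Python sketch of the kind of secret retrieval the "securing environments using AWS Secrets Manager" responsibility implies; the secret name and region are hypothetical, and appropriately scoped AWS credentials must already be configured.

```python
import json

import boto3  # AWS SDK for Python


def get_database_credentials(secret_name: str, region: str = "ap-south-1") -> dict:
    """Fetch a secret from AWS Secrets Manager and parse it as JSON.

    Keeping credentials out of pipeline code and config is the point of the
    Secrets Manager requirement in this role.
    """
    client = boto3.client("secretsmanager", region_name=region)
    response = client.get_secret_value(SecretId=secret_name)
    return json.loads(response["SecretString"])


if __name__ == "__main__":
    # Hypothetical secret name; an IAM role or credentials with
    # secretsmanager:GetSecretValue permission is assumed.
    creds = get_database_credentials("prod/snowflake/service-account")
    print(sorted(creds.keys()))
```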

Posted 1 week ago

Apply

6.0 - 13.0 years

0 Lacs

thiruvananthapuram, kerala

On-site

We are seeking a highly experienced candidate with over 13 years of experience for the role of Technical Project Manager (Data), based in Trivandrum/Kochi. As a Technical Project Manager, you will own the end-to-end delivery of data platform, AI, BI, and analytics projects, ensuring alignment with business objectives and stakeholder expectations. You will develop and maintain comprehensive project plans, roadmaps, and timelines covering data ingestion, transformation, governance, AI/ML models, and analytics deliverables. A key aspect of the role is leading cross-functional teams of data engineers, data scientists, BI analysts, architects, and business stakeholders to deliver high-quality, scalable solutions within the defined budget and timeframe.

You will also define, prioritize, and manage product and project backlogs covering data pipelines, data quality, governance, AI services, and BI dashboards or reporting tools, and collaborate with business units to capture requirements and translate them into actionable user stories and acceptance criteria. The role includes overseeing BI and analytics areas such as dashboard development, embedded analytics, self-service BI enablement, and ad hoc reporting capabilities; ensuring data quality, lineage, security, and compliance requirements are integrated throughout the project lifecycle in collaboration with governance and security teams; and coordinating UAT, performance testing, and user training for successful adoption and rollout. As the primary point of contact for all project stakeholders, you will provide regular status updates, manage risks and issues, and escalate when necessary. You will facilitate agile ceremonies such as sprint planning, backlog grooming, demos, and retrospectives to foster a culture of continuous improvement, and drive post-deployment monitoring and optimization of data and BI solutions to meet evolving business needs and performance standards.
Primary skills required for this role:
- Over 13 years of experience in IT, with at least 6 years in roles such as Technical Product Manager, Technical Program Manager, or Delivery Lead
- Hands-on development experience in data engineering, including data pipelines, ETL processes, and data integration workflows
- Proven track record in managing data engineering, analytics, or AI/ML projects end to end
- Solid understanding of modern data architecture: data lakes, warehouses, pipelines, ETL/ELT, governance, and AI tooling
- Hands-on familiarity with cloud platforms (e.g., Azure, AWS, GCP) and DataOps/MLOps practices
- Strong knowledge of Agile methodologies, sprint planning, and backlog grooming
- Excellent communication and stakeholder management skills, including working with senior executives and technical leads

Secondary skills that would be beneficial:
- Background in computer science, engineering, data science, or analytics
- Experience with, or a solid understanding of, data engineering tools and services in AWS, Azure, and GCP
- Exposure to, or a solid understanding of, BI, analytics, LLMs, RAG, prompt engineering, or agent-based AI systems
- Experience leading cross-functional teams in matrixed environments
- Certifications such as PMP, CSM, SAFe, or equivalent are a plus

If you meet the above requirements and are looking for a challenging Technical Project Management opportunity in the data domain, we encourage you to apply before the closing date on 18-07-2025.

Posted 1 week ago

Apply

8.0 - 13.0 years

40 - 90 Lacs

Chennai

Work from Office

Short description - key responsibility areas:
1. Product Strategy & Vision (AI/ML & Scale)
2. Technical Product Development & Execution Leadership
3. People Management & Team Leadership
4. Technical Expertise & Hands-On Contribution
5. Cross-functional Collaboration & Stakeholder Management

Product Strategy & Vision (AI/ML & Scale): Define and evangelize the product vision, strategy, and roadmap for our AI/ML platform, data pipelines, and scalable application infrastructure, aligning with overall company objectives. Identify market opportunities, customer needs, and technical trends to drive innovation and competitive advantage in the AI/ML and large-scale data domain. Translate complex technical challenges and opportunities into clear, actionable product requirements and specifications.

Product Development & Execution Leadership: Oversee the entire product lifecycle from ideation and discovery through development, launch, and post-launch iteration for critical AI/ML and data products. Work closely with engineering, data science, and operations teams to ensure seamless execution and delivery of high-quality, performant, and scalable solutions. Champion best practices for productionizing applications at scale and ensuring our systems can handle huge volumes of data efficiently and reliably. Define KPIs and metrics for product success, monitoring performance and iterating based on data-driven insights.

People Management & Team Leadership: Lead, mentor, coach, and grow a team of Technical Product Managers, fostering a culture of innovation, accountability, and continuous improvement. Provide clear direction, set performance goals, conduct regular reviews, and support the professional development and career growth of your team members. Act as a leader and role model, promoting collaboration, open communication, and a positive team environment.

Technical Expertise & Hands-On Contribution: Possess a deep understanding of the end-to-end ML lifecycle (MLOps), from data ingestion and model training to deployment, monitoring, and continuous improvement. Demonstrate strong proficiency in Google Cloud Platform (GCP) services, including but not limited to compute, storage, networking, data processing (e.g., BigQuery, Dataflow, Dataproc), and AI/ML services (e.g., Vertex AI, Cloud AI Platform). Maintain strong hands-on expertise in Python programming, capable of contributing to prototypes, proofs of concept, data analysis, or technical investigations as needed. Bring extensive practical experience with leading AI frameworks and libraries, including Hugging Face for natural language processing and transformer models, along with proven experience with LangGraph (or similar sophisticated agentic frameworks like LangChain or LlamaIndex), understanding their architecture and application in building intelligent, multi-step AI systems. Maintain a solid understanding of agentic frameworks, their design patterns, and how to productionize complex AI agents, plus excellent exposure to GitHub and modern coding practices, including version control, pull requests, code reviews, CI/CD pipelines, and writing clean, maintainable code.

Cross-functional Collaboration & Stakeholder Management: Collaborate effectively with diverse stakeholders across engineering, data science, design, sales, marketing, and executive leadership to gather requirements, communicate progress, and align strategies. Act as a bridge between technical teams and business stakeholders, translating complex technical concepts into understandable business implications and vice versa.

Technical skills:
- Deep expertise in Google Cloud Platform (GCP) services for data, AI/ML, and scalable infrastructure
- Expert-level hands-on Python programming skills (e.g., for data manipulation, scripting, API interaction, ML prototyping, productionizing)
- Strong working knowledge of the Hugging Face libraries and ecosystem
- Direct experience with LangGraph and/or other advanced agentic frameworks (e.g., LangChain, LlamaIndex) for building intelligent systems
- Solid understanding of the software development lifecycle, GitHub, Git workflows, and modern coding practices (CI/CD, testing, code quality)

Qualifications:
- Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field
- Experience: 8+ years of progressive experience in technical product management roles, with a significant portion focused on AI/ML, data platforms, or highly scalable systems; 3+ years of direct people management experience, leading and mentoring a team of product managers or technical leads; a demonstrable track record of successfully bringing complex technical products from concept to production at scale; proven ability to manage products that handle massive volumes of data and require high throughput; and extensive practical experience with AI/ML model deployment and MLOps best practices in a production environment
- Leadership and soft skills: Exceptional leadership, communication, and interpersonal skills, with the ability to inspire and motivate a team; strong analytical and problem-solving abilities, with a data-driven approach to decision-making; the ability to thrive in a fast-paced, ambiguous, and rapidly evolving technical environment; and an excellent ability to articulate complex technical concepts to both technical and non-technical audiences

Posted 1 week ago

Apply

10.0 - 18.0 years

0 Lacs

pune, maharashtra

On-site

You have 10 to 18 years of relevant experience in Data Science. As a Data Scientist, your responsibilities will include modeling and data processing using Scala Spark/PySpark. You should have expert-level knowledge of Python for data science purposes. Additionally, you will work on data science concepts, model building using sklearn/PyTorch, and graph analytics using NetworkX, Neo4j, or similar graph databases. Experience in model deployment and monitoring (MLOps) is also desirable.

Required skills for this Data Science position:
- Data Science
- Python
- Scala
- Spark/PySpark
- MLOps
- GraphDB
- Neo4j
- NetworkX

Our hiring process consists of the following steps:
1. Screening (HR round)
2. Technical round 1
3. Technical round 2
4. Final HR round

This position is based in Pune.
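
As an illustration only (not part of the listing), a minimal Python sketch of the kind of graph analytics this role mentions, using NetworkX with a made-up edge list:

```python
import networkx as nx  # graph analytics library named in the posting

# Build a small relationship graph and rank nodes by influence.
# The edge list is invented purely for illustration.
edges = [
    ("acct_a", "acct_b"),
    ("acct_b", "acct_c"),
    ("acct_c", "acct_a"),
    ("acct_c", "acct_d"),
    ("acct_d", "acct_e"),
]

G = nx.DiGraph()
G.add_edges_from(edges)

# PageRank as a simple influence score, plus weakly connected components
# as a first look at community structure.
scores = nx.pagerank(G)
components = list(nx.weakly_connected_components(G))

for node, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{node}: {score:.3f}")
print("components:", components)
```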

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

chennai, tamil nadu

On-site

The System Analyst (Market Oriented) role requires applying system analysis expertise and market-driven research to elevate the company's competitive edge. Your primary responsibility will be to continuously assess the company's offerings against competitors, identify gaps, and suggest innovative, technology-driven solutions, particularly in cloud computing, high-performance computing (HPC), and distributed systems. Close collaboration with product and development teams is essential to steer market leadership through data-backed insights and technical foresight.

Key responsibilities:
- Conduct in-depth market research, competitive benchmarking, and trend analysis to identify platform enhancement opportunities and guide product decisions.
- Analyze and recommend improvements across public cloud platforms, virtualization layers, container platforms, and infrastructure technologies.
- Propose innovative solutions leveraging knowledge of DevOps, AIOps, MLOps, and distributed systems to enhance platform scalability, reliability, and differentiation in the market.
- Work closely with product managers, architects, and engineering teams to translate business needs into system requirements and ensure alignment with the product roadmap.
- Develop detailed system specifications, UML diagrams, wireframes, and user stories for efficient planning and development.
- Define system-level KPIs, track performance metrics, and provide actionable insights to stakeholders for continuous improvement and strategic planning.
- Present findings, technical analyses, and recommendations clearly and compellingly to technical and business stakeholders for informed decision-making.

Key requirements:
- Proficiency in cloud computing, high-performance computing (HPC), and distributed systems.
- Demonstrated ability to conduct market research and derive strategic, data-driven insights.
- Strong communication and collaboration skills for effective cross-functional teamwork and stakeholder engagement.

Educational qualifications:
- Bachelor's degree in Computer Science, Information Systems, or a related field.

Experience:
- 4+ years of experience in system analysis or related roles, with expertise in system architectures and analysis techniques.

This role falls under the Software Division category.

Posted 1 week ago

Apply

1.0 - 5.0 years

0 Lacs

maharashtra

On-site

As a Data Scientist on our team, you will be responsible for manipulating and preprocessing structured and unstructured data to prepare datasets for analysis and model training. You will utilize Python libraries such as PyTorch, Pandas, and NumPy for data analysis, model development, and implementation. Additionally, you will fine-tune large language models (LLMs) to meet specific use cases and enterprise requirements. Collaboration with cross-functional teams to experiment with AI/ML models and iterate quickly on prototypes is also a key aspect of the role.

Your responsibilities will include optimizing workflows to ensure fast experimentation and deployment of models to production environments, and implementing containerization and basic Docker workflows to streamline deployment. Writing clean, efficient, production-ready Python code for scalable AI solutions is crucial for this role.

It would be beneficial if you have exposure to cloud platforms like AWS, Azure, or GCP, knowledge of MLOps principles and tools, and a basic understanding of enterprise knowledge management systems. The ability to work against tight deadlines, independently handle unstructured projects, and show strong initiative and self-motivation is desirable, as are strong communication and collaboration skills and a problem-solving mindset with attention to detail.

Required skills for this role include proficiency in Python with strong skills in libraries like PyTorch, Pandas, and NumPy; experience in handling both structured and unstructured datasets; and familiarity with fine-tuning LLMs and modern NLP techniques. A basic understanding of Docker and containerization principles, along with the demonstrated ability to experiment, iterate, and deploy code rapidly in a production setting, is essential. You should also be able to learn and adapt quickly in a fast-paced, dynamic environment.

In return, we offer the opportunity to work on cutting-edge AI technologies and impactful projects, a collaborative and growth-oriented work environment, a competitive compensation and benefits package, and the chance to be part of a team shaping the future of enterprise intelligence.
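
For illustration only, a tiny sketch of the structured/unstructured preprocessing this posting describes, using Pandas and NumPy; the column names and values are invented for the example.

```python
import numpy as np
import pandas as pd

# Toy dataset mixing unstructured text ("body") with structured fields.
raw = pd.DataFrame(
    {
        "ticket_id": [101, 102, 103, 104],
        "body": ["Server down!!", "  reset my PASSWORD ", None, "Invoice query"],
        "priority": ["high", "low", "medium", None],
    }
)

# Clean the unstructured text: fill missing values, normalise whitespace and case.
raw["body_clean"] = (
    raw["body"].fillna("").str.strip().str.lower().str.replace(r"\s+", " ", regex=True)
)

# Encode a structured categorical feature and derive a simple numeric feature.
priority_order = {"low": 0, "medium": 1, "high": 2}
raw["priority_code"] = raw["priority"].map(priority_order).fillna(-1).astype(int)
raw["body_len"] = raw["body_clean"].str.len()

# Train-ready feature matrix for a downstream model.
features = raw[["priority_code", "body_len"]].to_numpy(dtype=np.float32)
print(raw[["ticket_id", "body_clean", "priority_code", "body_len"]])
print(features.shape)
```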

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

pune, maharashtra

On-site

As a Data Scientist at Amdocs, you will be part of a global team dedicated to accelerating service providers' migration to the cloud, enabling differentiation in the 5G era, and digitalizing and automating operations. You will play a hands-on role in developing algorithms and performing data mining to provide valuable insights from data. Located in Pune, your responsibilities will include handling GenAI use cases, developing Databricks jobs for data ingestion, and creating partnerships with project stakeholders to provide technical assistance on important decisions.

Your technical skills should include experience in deep learning engineering, particularly in MLOps, strong NLP/LLM experience, and proficiency in PySpark/Databricks and Python programming. You will be responsible for building backend applications using Python and deep learning frameworks, deploying models, and building APIs. Experience with GPUs, vector databases like Milvus and Azure Cognitive Search, and transformers will be essential. Knowledge of Kubernetes and Docker, cloud experience with VMs and Azure Storage, and sound data engineering experience would be advantageous.

In this role, you will ensure timely resolution of critical issues within the agreed SLA, creating a positive customer support experience and building strong relationships with customers. You will also demonstrate an understanding of key business drivers to ensure strategic directions are followed for organizational success.

Amdocs is a dynamic, multi-cultural organization that fosters innovation and empowers employees to grow. You will work in a diverse, inclusive workplace alongside passionate and dedicated teammates. Amdocs offers a wide range of benefits including health, dental, vision, and life insurance, paid time off, sick time, and parental leave. As an equal opportunity employer, Amdocs welcomes applicants from all backgrounds and is committed to fostering a diverse and inclusive workforce.

Posted 1 week ago

Apply

6.0 - 11.0 years

10 - 20 Lacs

Bengaluru

Hybrid

Dear Candidate,

We have an urgent opening with a multinational company for the Bengaluru location. Interested candidates can share their resume at Deepaksharma@thehrsolutions.in or via WhatsApp on 8882505093.

Experience: 5.5+ years
Profile: MLOps
Notice period: Immediate joiners or candidates already serving notice only

Job description: MLOps Senior Engineer - Azure ML + Azure Databricks, Bengaluru location
- 5.5 years of experience in the AI domain and 3+ years in MLOps (preferably in a large-scale enterprise)

Mandatory skills:
- Experience in developing an MLOps framework covering the ML lifecycle: model development, training, evaluation, deployment, and monitoring, including model governance
- Expert in Azure Databricks, Azure ML, and Unity Catalog
- Hands-on experience with Azure DevOps, MLOps CI/CD pipelines, Python, Git, and Docker
- Experience in developing standards and practices for the MLOps lifecycle

Nice-to-have skills:
- Strong understanding of data privacy, compliance, and responsible AI
- Azure Data Factory (ADF)

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

punjab

On-site

As a Python Machine Learning & AI Developer at Chicmic Studios, you will be an integral part of our dynamic team, bringing your expertise and experience to develop cutting-edge web applications using the Django and Flask frameworks. Your primary responsibilities will include designing and implementing RESTful APIs with Django Rest Framework (DRF), deploying and optimizing applications on AWS services, and integrating AI/ML models into existing systems.

You will be expected to create scalable machine learning models using PyTorch, TensorFlow, and scikit-learn; implement transformer architectures like BERT and GPT for NLP and advanced AI use cases; and optimize models through techniques such as hyperparameter tuning, pruning, and quantization. Additionally, you will deploy and manage machine learning models in production environments using tools like TensorFlow Serving, TorchServe, and AWS SageMaker, ensuring the scalability, performance, and reliability of both applications and models. Collaborating with cross-functional teams to analyze requirements, delivering technical solutions, and staying up to date with the latest industry trends in AI/ML will also be key aspects of your role. Your ability to write clean, efficient code following best practices, conduct code reviews, and provide constructive feedback to peers will contribute to the success of our projects.

To be successful in this role, you should possess a Bachelor's degree in Computer Science, Engineering, or a related field, with at least 3 years of professional experience as a Python Developer. Proficiency in Python, Django, Flask, and AWS services is required, along with expertise in machine learning frameworks, transformer architectures, and database technologies. Familiarity with MLOps practices, front-end technologies, and strong problem-solving skills are also desirable qualities for this position.

If you are passionate about leveraging your Python development skills and AI expertise to drive innovation and deliver impactful solutions, we encourage you to apply and be a part of our innovative team at Chicmic Studios.
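
For illustration only, a minimal sketch of loading a pretrained transformer for text classification, as an example of the BERT/GPT-style integration work this posting describes; the model name is a common public checkpoint, not one specified by the employer.

```python
# Requires: pip install transformers torch
from transformers import pipeline

# Load a small pretrained transformer for sentiment classification.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

reviews = [
    "The new release fixed the latency issues, great work!",
    "Deployment keeps failing and support has not responded.",
]

# Run batch inference and print label, confidence, and input text.
for review, result in zip(reviews, classifier(reviews)):
    print(f"{result['label']:>8} ({result['score']:.2f}) :: {review}")
```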

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

indore, madhya pradesh

On-site

At ClearTrail, our work is not merely a job, but a calling to create solutions that empower individuals committed to ensuring the safety of their people, places, and communities. With a legacy spanning over 21 years, law enforcement and federal agencies worldwide have placed their trust in ClearTrail as their dedicated partner in protecting nations and enhancing lives. We are at the forefront of shaping the future of intelligence gathering through the development of artificial intelligence and machine learning-based lawful interception and communication analytics solutions, aimed at addressing the most complex global challenges.

We are currently seeking an ML Engineer with 2-4 years of experience and expertise in machine learning (ML) and Large Language Models (LLMs) to join our team in Indore, Madhya Pradesh.

Roles and responsibilities:
- Develop and implement end-to-end machine learning pipelines for a diverse range of analytics problems, including model development and refinement.
- Effectively communicate results to a varied audience with technical and non-technical backgrounds.
- Apply LLM expertise to tackle challenges using cutting-edge language models and off-the-shelf LLM services such as OpenAI models, focusing on techniques like retrieval-augmented generation (RAG) to enhance the performance and capabilities of LLMs.
- Stay abreast of the latest advancements in artificial intelligence through continuous research and innovation.
- Demonstrate strong problem-solving skills and proficiency in code debugging.
- Hands-on experience working with large language and generative AI models, both proprietary and open-source, including transformers and GPT models (preferred).

Skills:
- Mandatory hands-on experience with Python, Scikit-Learn, PyTorch, LangChain, and Transformers libraries.
- Proficiency in exploratory data analysis, machine learning, neural networks, hyperparameter tuning, model performance metrics, and model deployment.
- Practical knowledge of Large Language Models (LLMs) and fine-tuning techniques.

Nice-to-have experience:
- Deep Learning
- MLOps
- SQL

Qualifications:
- Bachelor's degree in Computer Science & Engineering.
- Proven 2-4 years of experience as a Machine Learning Engineer or in a similar role, with a strong LLM skill set.
- Solid theoretical and practical understanding of machine learning algorithms and hands-on experience with LLM applications.
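
For illustration only, a minimal retrieval-augmented generation (RAG) skeleton of the kind this posting refers to: TF-IDF retrieval over a toy document set, with the retrieved context stitched into a prompt. The documents, question, and generate() stub are all invented; a real pipeline would call an LLM service (e.g., an OpenAI model) at that step.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Retention policy: call detail records are stored for 180 days.",
    "Analysts can export reports in CSV and PDF formats.",
    "Alerts are generated when traffic exceeds configured thresholds.",
]

question = "How long are call detail records kept?"

# Retrieve the most relevant document for the question.
vectorizer = TfidfVectorizer().fit(documents + [question])
doc_vectors = vectorizer.transform(documents)
query_vector = vectorizer.transform([question])
best_idx = cosine_similarity(query_vector, doc_vectors).argmax()
context = documents[best_idx]

# Ground the generation step in the retrieved context.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"


def generate(prompt: str) -> str:
    # Placeholder for an LLM call (e.g., a chat completion request).
    return f"[LLM would answer here based on: {prompt!r}]"


print(generate(prompt))
```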

Posted 1 week ago

Apply

10.0 - 14.0 years

0 Lacs

chennai, tamil nadu

On-site

You will be joining as a GCP Data Architect at TechMango, a rapidly growing IT Services and SaaS Product company located in Madurai and Chennai. With over 12 years of experience, you are expected to start immediately and work from the office. TechMango specializes in assisting global businesses with digital transformation, modern data platforms, product engineering, and cloud-first initiatives. In this role, you will lead data modernization efforts for a prestigious client, Livingston, in a highly strategic project.

As a GCP Data Architect, your primary responsibility will be to design and implement scalable, high-performance data solutions on Google Cloud Platform. You will collaborate closely with stakeholders to define data architecture, implement data pipelines, modernize legacy data systems, and guide a data strategy aligned with enterprise goals.

Key responsibilities:
- Lead end-to-end design and implementation of scalable data architecture on Google Cloud Platform (GCP)
- Define data strategy, standards, and best practices for cloud data engineering and analytics
- Develop data ingestion pipelines using Dataflow, Pub/Sub, Apache Beam, Cloud Composer (Airflow), and BigQuery
- Migrate on-prem or legacy systems to GCP (e.g., from Hadoop, Teradata, or Oracle to BigQuery)
- Architect data lakes, warehouses, and real-time data platforms
- Ensure data governance, security, lineage, and compliance (using tools like Data Catalog, IAM, DLP)
- Guide a team of data engineers and collaborate with business stakeholders, data scientists, and product managers
- Create documentation, high-level design (HLD) and low-level design (LLD), and oversee development standards
- Provide technical leadership in architectural decisions and future-proofing the data ecosystem

Required skills and qualifications:
- 10+ years of experience in data architecture, data engineering, or enterprise data platforms
- Minimum 3-5 years of hands-on experience with GCP data services
- Proficiency in BigQuery, Cloud Storage, Dataflow, Pub/Sub, Composer, and Cloud SQL/Spanner
- Python / Java / SQL
- Data modeling (OLTP, OLAP, star/snowflake schema)
- Experience with real-time data processing, streaming architectures, and batch ETL pipelines
- Good understanding of IAM, networking, security models, and cost optimization on GCP
- Prior experience leading cloud data transformation projects
- Excellent communication and stakeholder management skills

Preferred qualifications:
- GCP Professional Data Engineer / Architect certification
- Experience with Terraform, CI/CD, GitOps, and Looker / Data Studio / Tableau for analytics
- Exposure to AI/ML use cases and MLOps on GCP
- Experience working in agile environments and client-facing roles

What we offer:
- Opportunity to work on large-scale data modernization projects with global clients
- A fast-growing company with a strong tech and people culture
- Competitive salary, benefits, and flexibility
- A collaborative environment that values innovation and leadership
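
For illustration only, a minimal Python sketch of a batch BigQuery aggregation of the kind such ingestion pipelines might schedule via Cloud Composer; the project, dataset, and table names are placeholders, and GCP credentials must already be configured (e.g., via GOOGLE_APPLICATION_CREDENTIALS).

```python
# Requires: pip install google-cloud-bigquery
from google.cloud import bigquery


def daily_event_counts(project_id: str = "my-analytics-project") -> None:
    client = bigquery.Client(project=project_id)

    # Count yesterday's events per event type from a (hypothetical) raw table.
    query = """
        SELECT event_type, COUNT(*) AS event_count
        FROM `my-analytics-project.raw.events`
        WHERE event_date = DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY)
        GROUP BY event_type
        ORDER BY event_count DESC
    """
    for row in client.query(query).result():
        print(f"{row.event_type}: {row.event_count}")


if __name__ == "__main__":
    daily_event_counts()
```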

Posted 1 week ago

Apply

10.0 - 14.0 years

0 Lacs

pune, maharashtra

On-site

As an Amdocs Software Architect, you will have full autonomy to deliver agreed technical objectives and make decisions that require extensive analysis and interpretation. Your role will involve providing technical expertise in software usage, as well as functional and non-functional aspects. You will collaborate with software engineers and other architects to define and refine product structures to align with business needs. Furthermore, you will work closely with customers and product line management to identify and translate customer needs into technical requirements.

You will support and lead architectural decisions within a product line or across multiple product lines. Additionally, you will lead projects, review technical designs, and provide guidance to software engineers on technical and architectural decisions. Your responsibilities will include researching, evaluating, and prototyping new methodologies, technologies, and products, and proposing and implementing improvements in processes and tools. It is essential to acquire an in-depth understanding of the customer context when making technical decisions.

Key responsibilities:
- Extensive experience working with various LLMs such as Llama, GPT, and others
- Hands-on experience with GenAI use cases
- Expertise in developing cloud jobs for data ingestion for learning purposes
- Establish partnerships with project stakeholders to offer technical assistance for important decisions
- Development and implementation of GenAI use cases in live production based on business/user requirements

Mandatory technical skills:
1. Experience in deep learning engineering, particularly MLOps
2. Strong NLP/LLM experience and text processing using LLMs
3. Proficiency in Python and Terraform programming
4. Building backend applications (data processing, etc.) using Python and deep learning frameworks
5. Deploying models and creating APIs (FastAPI, Flask API)
6. Experience working with GPUs
7. Working knowledge of vector databases like Milvus, Azure Cognitive Search, Qdrant, etc.
8. Experience with transformers and with Hugging Face models like Llama, Mixtral, and embedding models

Good to have:
1. Knowledge and experience with Kubernetes, Docker, etc.
2. Cloud experience working with VMs and Azure Storage
3. Sound data engineering experience

Behavioral skills:
- Effective communication with clients/operational managers and providing solutions
- Strong problem-solving skills and ability to gather information
- Building relationships with clients/operational managers and colleagues
- Adaptability, ability to prioritize, work under pressure, and meet deadlines
- Proactive thinking, anticipating problems, and providing solutions
- Innovative approach and good presentation skills
- Willingness to work extended hours when required

In this role, you will be challenged by crafting high-level designs and setting technical standards. You will have the opportunity to work with advanced technologies and be responsible for a suite of products. Amdocs offers a range of benefits, including health, dental, parental leave, and pet insurance. Amdocs is an equal opportunity employer, welcoming applicants from all backgrounds and committed to fostering a diverse and inclusive workforce.

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

karnataka

On-site

The QA Engineer AI/ML will play a crucial role in evaluating and conducting manual functional, regression, and integration testing on new or modified software programs. Your primary responsibility will be to ensure that these programs meet the specified requirements and adhere to established guidelines. By writing, revising, and validating test plans and procedures, you will contribute to the identification of defects, environmental needs, and product feature evaluations.

As a QA Engineer AI/ML, you will maintain a comprehensive test library, execute test scenarios to ensure coverage of requirements and regression, and conduct both positive and negative testing. Your involvement in product design reviews will provide valuable insights into functional requirements, product designs, usability considerations, and testing implications. Additionally, you will be responsible for identifying, reporting, and tracking product defects, as well as articulating the need for additional product functionality in a clear and concise manner. You will also prepare and review technical documentation for accuracy, completeness, and overall quality. Providing task estimations and ensuring timely delivery against specified schedules are essential aspects of the role, and availability outside of standard business hours may be required as part of a rotational on-call schedule.

To qualify for this position, you must hold a Bachelor's degree in computer science, engineering, or a related field, or possess equivalent relevant experience, and have 2-4 years of experience in manual QA with a strong understanding of technology and of software quality assurance standards and practices. Exceptional written and verbal communication skills, along with active listening abilities, will enable you to interact effectively with a diverse range of technical and non-technical personnel. Proficiency in identifying issues logically, troubleshooting, problem-solving, and predicting defects is crucial for success in this role. You should be adept at interpreting business requirements and creating test specifications, test plans, and test scenarios, and able to balance individual and team efforts in collaborative processes while meeting deadlines.

Experience and skills in AI/ML fundamentals, data quality assessment, model performance testing, AI/ML testing tools, bias and fairness testing, and programming for AI/ML testing are highly desirable. Proficiency in Python and SQL for creating test scripts, data manipulation, and working with ML libraries will be beneficial. Familiarity with MLOps, model lifecycle testing, structured delivery processes, Agile methods, and various platforms and databases is also advantageous.

Enjoy a competitive compensation package and comprehensive benefits as part of this role at Netsmart, an Equal Opportunity Employer dedicated to diversity and inclusivity.
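
For illustration only, a small sketch of what "model performance testing" can look like in practice, written as a pytest check against an accuracy threshold; the dataset, model, and 0.90 bar are arbitrary choices for the example.

```python
# Requires: pip install scikit-learn pytest
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler


def train_candidate_model():
    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=42, stratify=y
    )
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    model.fit(X_train, y_train)
    return model, X_test, y_test


def test_model_meets_accuracy_threshold():
    model, X_test, y_test = train_candidate_model()
    accuracy = accuracy_score(y_test, model.predict(X_test))
    # Fail the build if the candidate model regresses below the agreed bar.
    assert accuracy >= 0.90, f"accuracy {accuracy:.3f} below 0.90 threshold"
```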

Posted 1 week ago

Apply

7.0 - 11.0 years

0 Lacs

hyderabad, telangana

On-site

As a Software Engineer - Backend (Python) with 7+ years of experience, you will be responsible for designing and building the backend components of the GenAI Platform in Hyderabad. Your role will involve collaborating with geographically distributed cross-functional teams and participating in an on-call rotation to handle production incidents. The GenAI Platform offers safe, compliant, and cost-efficient access to LLMs, including open-source and commercial ones, while adhering to Experian standards and policies. You will build reusable tools, frameworks, and coding patterns for fine-tuning LLMs or developing RAG-based applications.

To succeed in this role, you must possess the following skills:
- 7+ years of professional backend web development experience with Python
- Experience with AI and RAG
- Proficiency in DevOps and IaC tools like Terraform and Jenkins
- Familiarity with MLOps platforms such as AWS SageMaker, Kubeflow, or MLflow
- Expertise in web development frameworks such as Flask, Django, or FastAPI
- Knowledge of concurrent programming designs like AsyncIO
- Experience with public cloud platforms like AWS, Azure, or GCP (preferably AWS)
- Understanding of CI/CD practices, tools, and frameworks

The following skills would be considered nice to have:
- Experience with Apache Kafka and developing Kafka client applications in Python
- Familiarity with big data processing frameworks, especially Apache Spark
- Knowledge of containers (Docker) and container platforms like AWS ECS or AWS EKS
- Proficiency in unit and functional testing frameworks
- Experience with various Python packaging options such as Wheel, PEX, or Conda
- Understanding of metaprogramming techniques in Python

Join our team and contribute to the development of cutting-edge technologies in a collaborative and dynamic environment.
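
For illustration only, a minimal async FastAPI endpoint of the kind a GenAI backend might expose in front of an LLM; the route, request model, and call_llm stub are invented, and a real service would call a model provider or internal gateway at that step.

```python
# Requires: pip install fastapi uvicorn
import asyncio

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="GenAI gateway sketch")


class PromptRequest(BaseModel):
    prompt: str
    max_tokens: int = 256


async def call_llm(prompt: str, max_tokens: int) -> str:
    await asyncio.sleep(0.05)  # stand-in for a network call to a model
    return f"[model reply to: {prompt[:40]!r} (max_tokens={max_tokens})]"


@app.post("/v1/completions")
async def completions(request: PromptRequest) -> dict:
    # AsyncIO keeps the worker free while waiting on the (slow) model call.
    text = await call_llm(request.prompt, request.max_tokens)
    return {"completion": text}

# Run locally with: uvicorn app:app --reload   (assuming this file is app.py)
```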

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

haryana

On-site

The Data Science Analyst role at Citi within the GWFO team is a developing professional position that involves applying specialized knowledge in machine learning, statistical modeling, and data analysis to monitor, assess, analyze, and evaluate processes and data. The main responsibilities include identifying opportunities to leverage advanced analytics to improve business outcomes, automate processes, and generate actionable insights. Additionally, the individual in this role will contribute to the development and deployment of innovative AI solutions, including generative AI and agentic AI applications, while collaborating with cross-functional stakeholders.

As a Data Science Analyst at Citi, you will be expected to:
- Gather and process operational data from various cross-functional stakeholders to examine past business performance and identify areas for improvement.
- Apply machine learning techniques to identify data patterns and trends, and provide insights that enhance business decision-making capabilities in various areas.
- Develop and implement machine learning models for predictive analytics, forecasting, and optimization.
- Design, build, and deploy generative AI solutions to enhance customer experience, automate tasks, and personalize interactions.
- Experiment with agentic AI frameworks to create autonomous systems that can learn, adapt, and solve complex problems.
- Develop tools and techniques to evaluate the performance and impact of AI solutions.
- Translate data into consumer or customer behavioral insights to drive targeting and segmentation strategies, and effectively communicate findings to business partners and senior leaders.
- Continuously explore and evaluate new data sources, tools, and capabilities, focusing on cutting-edge AI technologies, to improve processes and strategies.
- Collaborate closely with internal and external business partners to build, implement, track, and enhance decision strategies.

Skills and experience:
- 5+ years of relevant experience in data science, machine learning, or a related field.
- Advanced process management skills; organized and detail-oriented.
- Curiosity about learning and developing new skill sets, particularly in artificial intelligence.
- Positive outlook with a can-do mindset.
- Strong programming skills in Python and proficiency in relevant data science libraries such as scikit-learn, TensorFlow, PyTorch, and Transformers.
- Experience with statistical modeling techniques, including regression, classification, and clustering.
- Experience building GenAI solutions using LLMs and vector databases.
- Experience with agentic AI frameworks such as LangChain and LangGraph, and with MLOps.
- Experience with data visualization tools such as Tableau or Power BI.
- Strong logical reasoning capabilities, willingness to learn new skills, and good communication and presentation skills.

Education:
- Bachelor's/University degree or equivalent experience in a quantitative field such as computer science, statistics, mathematics, or engineering. Master's degree preferred.

Working at Citi offers more than just a job - it means joining a global family of dedicated individuals where you can grow your career, contribute to your community, and make a real impact.
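
For illustration only, a tiny customer-segmentation example of the kind the "targeting and segmentation strategies" responsibility suggests, using scikit-learn KMeans on synthetic data; the features and cluster profiles are invented.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic customers: [monthly_spend, support_contacts, tenure_months]
customers = np.vstack(
    [
        rng.normal(loc=[900, 1, 48], scale=[120, 1, 6], size=(50, 3)),  # loyal, high spend
        rng.normal(loc=[150, 6, 10], scale=[40, 2, 3], size=(50, 3)),   # newer, high contact
        rng.normal(loc=[400, 2, 24], scale=[80, 1, 5], size=(50, 3)),   # mid tier
    ]
)

# Standardize features so no single scale dominates the distance metric.
features = StandardScaler().fit_transform(customers)
model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(features)

for label in sorted(set(model.labels_)):
    segment = customers[model.labels_ == label]
    print(f"segment {label}: n={len(segment)}, mean spend={segment[:, 0].mean():.0f}")
```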

Posted 1 week ago

Apply

3.0 - 12.0 years

0 Lacs

maharashtra

On-site

As an Engineering Lead at Aithon Solutions, a dynamic organization specializing in operations and technology solutions for the alternative asset management sector, you will play a pivotal role in steering the development of our Generative AI products. Your primary mandate will involve overseeing the entire product lifecycle to ensure scalability, top-notch performance, and continuous innovation. Collaborating with diverse teams, you will champion engineering excellence and cultivate a culture that thrives on high performance. Your role demands a profound understanding of AI/ML, proficiency in Python coding, expertise in distributed systems, cloud architectures, and contemporary software engineering practices. Moreover, hands-on coding experience is crucial for this position, ensuring a comprehensive grasp of the technical landscape you will be navigating. Key Responsibilities: - **Technical Leadership & Strategy:** Craft and execute the technology roadmap for our Generative AI products, aligning technological advancements with overarching business objectives. - **AI/ML Product Development:** Drive the development of AI-powered products, fine-tuning models for optimal performance, scalability, and real-world applicability. - **Engineering Excellence:** Establish industry best practices in software development, DevOps, MLOps, and cloud-native architectures to foster a culture of continuous improvement. - **Team Leadership & Scaling:** Recruit, mentor, and supervise a proficient engineering team, nurturing a culture characterized by innovation and collaborative spirit. - **Cross-Functional Collaboration:** Collaborate closely with Product, Data Science, and Business teams to translate cutting-edge AI research into practical applications. - **Scalability & Performance Optimization:** Architect and enhance distributed systems to ensure the efficient deployment of AI models across cloud and edge environments. - **Security & Compliance:** Implement robust frameworks for AI ethics, data security, and compliance with industry standards and regulations. Qualifications & Skills: - Demonstrable experience of at least 12 years in software engineering, with a minimum of 3 years in leadership positions within AI-focused enterprises. - Profound expertise in Generative AI, Deep Learning, NLP, Computer Vision, and model deployment. - Hands-on familiarity with ML frameworks and leading cloud platforms such as AWS, GCP, and Azure. - Proven track record of scaling AI/ML infrastructure and optimizing models for superior performance and cost-effectiveness. - Comprehensive understanding of distributed systems, cloud-native architectures, and microservices. - Proficiency in MLOps, CI/CD, and DevOps practices. - Strong problem-solving acumen, strategic thinking, and adept stakeholder management skills. - Ability to attract, nurture, and retain top engineering talent in a competitive marketplace. Why Join Us - Lead a forward-thinking team at the forefront of Generative AI innovation. - Contribute to the development of scalable, high-impact AI products that are shaping the future of technology. - Embrace a truly entrepreneurial culture that champions imagination, innovation, and teamwork. - Collaborate with a high-caliber, collaborative team dedicated to driving excellence in the tech industry.,

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

pune, maharashtra

On-site

As a Software Developer at Amdocs in Pune, India, you will be responsible for the design, development, modification, debugging, and maintenance of software systems. You will work on specific modules, applications, or technologies, dealing with sophisticated assignments during the software development process. Your key responsibilities will include owning specific modules within an application, providing technical support and guidance during solution design for new requirements, and resolving critical or complex issues. You will ensure that the code is maintainable, scalable, and supportable. Additionally, you will present software product demos to partners and customers, using your technical knowledge to influence product evolution.

In this role, you will investigate issues by reviewing and debugging code and providing fixes and workarounds. You will analyze and fix bugs, review changes for operability, and help mitigate risks from technical aspects. By bringing continuous improvements to software or business processes through innovative techniques and automation, you will reduce design complexity and response time and enhance the end-user experience.

Your technical skills will be put to the test. Mandatory requirements include experience in deep learning engineering, strong NLP/LLM expertise, proficiency in Python and Terraform programming, building backend applications using Python and deep learning frameworks, deploying models, and building APIs. Knowledge of working with GPUs and vector databases like Milvus and Azure Cognitive Search, and experience with transformers and Hugging Face models, is essential. You will work on developing and implementing GenAI use cases in live production, creating partnerships with project stakeholders, and providing technical assistance for important decisions. Good communication, problem-solving, relationship-building, adaptability, and prioritization skills will be crucial for success in this role.

This position offers you the opportunity to specialize in software and technology and take an active role in technical mentoring within the team, with stellar benefits including health, dental, paid time off, and parental leave. Amdocs is an equal opportunity employer that values diversity and inclusivity in the workforce.

Posted 2 weeks ago

Apply

7.0 - 11.0 years

0 Lacs

haryana

On-site

The Data Science Analyst is a developing professional role within the GWFO team. This role applies specialized knowledge in machine learning, statistical modeling, and data analysis to monitor, assess, analyze, and evaluate processes and data. You will identify opportunities to leverage advanced analytics to improve business outcomes, automate processes, and generate actionable insights, and will contribute to the development and deployment of innovative AI solutions, including generative AI and agentic AI applications.

Working with cross-functional stakeholders, you will gather and process operational data to examine past business performance and identify areas for improvement. Utilizing machine learning techniques, you will identify data patterns and trends, providing insights that enhance business decision-making in areas such as business planning, process improvement, and solution assessment. Your responsibilities will include developing and implementing machine learning models for predictive analytics, forecasting, and optimization. You will design, build, and deploy generative AI solutions to enhance customer experience, automate tasks, and personalize interactions, and will experiment with agentic AI frameworks to create autonomous systems that can learn, adapt, and solve complex problems. Developing tools and techniques to evaluate the performance and impact of AI solutions will be part of your role, as will translating data into consumer or customer behavioral insights to drive targeting and segmentation strategies, communicating findings clearly and effectively to business partners and senior leaders. Continuous improvement is key: you will explore and evaluate new data sources, tools, and capabilities with a focus on cutting-edge AI technologies, and collaborate with internal and external business partners in building, implementing, tracking, and improving decision strategies.

As a successful candidate, you should ideally have 7+ years of relevant experience in data science, machine learning, or a related field. You should possess advanced process management skills, be organized and detail-oriented, and have a curious mindset for learning and developing new skill sets, particularly in artificial intelligence. Strong programming skills in Python and proficiency in relevant data science libraries such as scikit-learn, TensorFlow, PyTorch, and Transformers are required. Experience with statistical modeling techniques, GenAI solutions, agentic AI frameworks, and data visualization tools like Tableau or Power BI is beneficial. Other essential skills include strong logical reasoning capabilities, willingness to learn new skills, and good communication and presentation skills. A Bachelor's/University degree or equivalent experience is necessary for this role.

Posted 2 weeks ago

Apply

8.0 - 12.0 years

0 Lacs

chennai, tamil nadu

On-site

About the role:
As a skilled Lead Data Scientist at Yubi, you will play a crucial role in designing, developing, and deploying ML/RL models for collection campaign recommendations and optimization. Your primary responsibility will be to experiment with and optimize collection campaigns across digital, telecalling, and field platforms. Leveraging your expertise in machine learning and strong Python programming skills, you will drive innovation and deliver business value for our clients.

Key responsibilities:
- Develop, implement, and maintain advanced analytical models and algorithms tailored to enhance debt collection processes.
- Collaborate with business stakeholders to identify data-driven opportunities for improving debt collection strategies.
- Continuously enhance existing models and integrate new data sources to optimize our approach.
- Take ownership of the end-to-end process of metric design, tracking, and reporting related to debt collection performance.
- Regularly improve and update metrics to align with business objectives and regulatory requirements.
- Effectively communicate findings to both technical and non-technical stakeholders.
- Mentor, train, and support junior data scientists, fostering a culture of continuous improvement and collaboration.
- Lead cross-functional teams on data projects, ensuring alignment between data science initiatives and business goals.
- Stay updated on the latest industry trends, tools, and best practices in analytics and debt management.

Required technical skills:
- Profound understanding of machine learning algorithms, including supervised, unsupervised, and ensemble methods, and their application to risk.
- Expertise in statistical analysis, hypothesis testing, regression analysis, probability theory, and data modeling techniques to extract insights and validate machine learning models.
- Experience in designing, developing, and delivering end-to-end data products and solutions.
- Proficiency in model explainability techniques (e.g., SHAP, LIME) and regulatory compliance for risk models.
- Strong proficiency in Python; familiarity with PySpark is a plus.
- Ability to build and deploy models on cloud platforms (AWS).
- Proficiency in developing backend microservices using FastAPI and working knowledge of MLOps.
- Experience with NLP techniques is good to have.

Domain skills (good to have):
- Previous domain expertise in debt collection processes, credit risk assessment, regulatory compliance, and industry best practices.
- Demonstrated ability to design, implement, and optimize performance metrics and key performance indicators tailored for debt recovery initiatives.

Education and experience:
- Bachelor's or advanced degree in Data Science, Statistics, Mathematics, Computer Science, or a related field.
- 5 to 8 years of experience in the data science and machine learning domain.
- Experience in the financial sector or with a collections team is a bonus.
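
For illustration only, a minimal SHAP example of the kind the "model explainability techniques (e.g., SHAP, LIME)" requirement refers to; it uses a public scikit-learn sample dataset rather than collections data, and ranks features by mean absolute SHAP value.

```python
# Requires: pip install shap scikit-learn
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# TreeExplainer works directly on tree ensembles such as random forests.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:100])  # shape: (100, n_features)

# Mean absolute SHAP value per feature gives a global importance ranking.
importance = abs(shap_values).mean(axis=0)
top = sorted(zip(X.columns, importance), key=lambda kv: -kv[1])[:5]
for name, value in top:
    print(f"{name}: {value:.4f}")
```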

Posted 2 weeks ago

Apply

5.0 - 10.0 years

7 - 17 Lacs

Hyderabad

Work from Office

About this role: Wells Fargo is seeking a Senior Software Engineer.

In this role, you will:
- Lead complex technology initiatives, including those that are companywide with broad impact
- Act as a key participant in developing standards and companywide best practices for engineering complex and large-scale technology solutions across technology engineering disciplines
- Design, code, test, debug, and document for projects and programs
- Review and analyze complex, large-scale technology solutions for tactical and strategic business objectives, the enterprise technological environment, and technical challenges that require in-depth evaluation of multiple factors, including intangibles or unprecedented technical factors
- Make decisions in developing standard and companywide best practices for engineering and technology solutions, requiring an understanding of industry best practices and new technologies, while influencing and leading the technology team to meet deliverables and drive new initiatives
- Collaborate and consult with key technical experts, the senior technology team, and external industry groups to resolve complex technical issues and achieve goals
- Lead projects and teams, or serve as a peer mentor

Required qualifications:
- 5+ years of Software Engineering experience, or equivalent demonstrated through one or a combination of the following: work experience, training, military experience, education

Desired qualifications:
- Strong Python programming skills
- Expertise in RPA tools such as UiPath
- Expertise in workflow automation tools such as Power Platform
- Minimum 2 years of hands-on experience in AI/ML and GenAI
- Proven experience with LLMs (Gemini, GPT, Llama, etc.)
- Extensive experience in prompt engineering and model fine-tuning
- AI/GenAI certifications from a premier institution
- Hands-on experience in MLOps (MLflow, CI/CD pipelines)

Job expectations:
- Design and develop AI-driven automation solutions
- Implement AI automation to enhance process automation
- Develop and maintain automation, bots, and AI-based workflows
- Integrate AI automation with existing applications, APIs, and databases
- Design, develop, and implement GenAI applications using LLMs
- Build and optimize prompt engineering workflows
- Fine-tune and integrate pre-trained models for specific use cases
- Deploy models in production using robust MLOps practices
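
For illustration only, a minimal MLflow tracking run of the kind the "MLOps (MLflow, CI/CD pipelines)" requirement points to; it logs to a local ./mlruns store by default, and the experiment name, hyperparameters, and dataset are arbitrary choices for the example.

```python
# Requires: pip install mlflow scikit-learn
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

mlflow.set_experiment("automation-classifier-demo")
with mlflow.start_run():
    params = {"n_estimators": 50, "max_depth": 4}
    model = RandomForestClassifier(random_state=0, **params).fit(X_train, y_train)
    accuracy = accuracy_score(y_test, model.predict(X_test))

    mlflow.log_params(params)                 # record hyperparameters
    mlflow.log_metric("accuracy", accuracy)   # record evaluation metric
    mlflow.sklearn.log_model(model, "model")  # version the trained model artifact
    print(f"logged run with accuracy={accuracy:.3f}")
```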

Posted 2 weeks ago

Apply

4.0 - 9.0 years

7 - 17 Lacs

Hyderabad

Work from Office

About this role: Wells Fargo is seeking a Senior Software Engineer.

In this role, you will:
- Lead moderately complex initiatives and deliverables within technical domain environments
- Contribute to large-scale planning of strategies
- Design, code, test, debug, and document for projects and programs associated with the technology domain, including upgrades and deployments
- Review moderately complex technical challenges that require an in-depth evaluation of technologies and procedures
- Resolve moderately complex issues and lead a team to meet existing client needs or potential new clients' needs while leveraging a solid understanding of the function, policies, procedures, or compliance requirements
- Collaborate and consult with peers, colleagues, and mid-level managers to resolve technical challenges and achieve goals
- Lead projects, act as an escalation point, and provide guidance and direction to less experienced staff

Required qualifications:
- 4+ years of Software Engineering experience, or equivalent demonstrated through one or a combination of the following: work experience, training, military experience, education

Desired qualifications:
- Strong Python programming skills
- Expertise in RPA tools such as UiPath
- Expertise in workflow automation tools such as Power Platform
- Minimum 2 years of hands-on experience in AI/ML and GenAI
- Proven experience with LLMs (Gemini, GPT, Llama, etc.)
- Extensive experience in prompt engineering and model fine-tuning
- AI/GenAI certifications from a premier institution
- Hands-on experience in MLOps (MLflow, CI/CD pipelines)

Job expectations:
- Design and develop AI-driven automation solutions
- Implement AI automation to enhance process automation
- Develop and maintain automation, bots, and AI-based workflows
- Integrate AI automation with existing applications, APIs, and databases
- Design, develop, and implement GenAI applications using LLMs
- Build and optimize prompt engineering workflows
- Fine-tune and integrate pre-trained models for specific use cases
- Deploy models in production using robust MLOps practices

Posted 2 weeks ago

Apply

8.0 - 13.0 years

25 - 32 Lacs

Hyderabad, Pune, Bengaluru

Hybrid

Major tasks of the position:
- Design, develop, and maintain UI applications, and deploy and manage machine learning models.
- Design, build, and maintain Python and Flask applications using AWS and/or Azure cloud services.
- Implement standards for data ingestion, storage, and processing to support analytics and machine learning workflows.
- Understand graph theory and network analysis, including familiarity with libraries such as NetworkX, igraph, or similar.
- Implement and manage CI/CD pipelines for automated testing, deployment, and monitoring of machine learning models.
- Collaborate with data scientists, machine learning engineers, and software developers to operationalize machine learning models.
- Design and maintain infrastructure for automated deployment and scaling.
- Ensure compliance with security, privacy, and data governance requirements.

Qualifications and competencies:
- Bachelor's degree in computer science, engineering, or a related field, or equivalent practical experience, with at least 8-10 years of combined experience as a Python and MLOps Engineer or in similar roles.
- Strong programming skills in Python.
- Proficiency with AWS and/or Azure cloud platforms, including services such as EC2, S3, Lambda, SageMaker, Azure ML, etc.
- Solid understanding of API programming and integration.
- Hands-on experience with CI/CD pipelines, version control systems (e.g., Git), and code repositories.
- Knowledge of containerization using Docker and Kubernetes, and of orchestration tools.
- Proficiency in creating data visualizations, specifically for graphs and networks, using tools like Matplotlib, Seaborn, or Plotly.
- Understanding of data manipulation and analysis using libraries such as Pandas and NumPy.
- Problem-solving, analytical expertise, and troubleshooting abilities with attention to detail.

Posted 2 weeks ago

Apply

3.0 - 5.0 years

3 - 7 Lacs

Hyderabad, Chennai

Work from Office

Hiring candidates with 3 years of IT experience, including 2 years of experience in PySpark, Python, GCP, Airflow, BigQuery/SQL, and MLOps.
Company: US-based MNC
Location: Hyderabad/Chennai
Mode: Hybrid, general shift
Budget: up to 16 LPA
Contact: Mehar - DM 8522016118 or anishaglobal4@gmail.com

Posted 2 weeks ago

Apply

4.0 - 5.0 years

6 - 7 Lacs

Varanasi

Work from Office

Key responsibilities:
- Conduct feature engineering, data analysis, and data exploration to extract valuable insights.
- Develop and optimize machine learning models to achieve high accuracy and performance.
- Design and implement deep learning models, including Artificial Neural Networks (ANN), Convolutional Neural Networks (CNN), and reinforcement learning techniques.
- Handle real-time imbalanced datasets and apply appropriate techniques to improve model fairness and robustness.
- Deploy models in production environments and ensure continuous monitoring, improvement, and updates based on feedback.
- Collaborate with cross-functional teams to align ML solutions with business goals.
- Utilize fundamental statistical knowledge and mathematical principles to ensure the reliability of models.
- Bring in the latest advancements in ML and AI to drive innovation.

Requirements:
- 4-5 years of hands-on experience in machine learning and deep learning.
- Strong expertise in feature engineering, data exploration, and data preprocessing.
- Experience with imbalanced datasets and techniques to improve model generalization.
- Proficiency in Python, TensorFlow, scikit-learn, and other ML frameworks.
- Strong mathematical and statistical knowledge with problem-solving skills.
- Ability to optimize models for high accuracy and performance in real-world scenarios.

Preferred qualifications:
- Experience with big data technologies (Hadoop, Spark, etc.).
- Familiarity with containerization and orchestration tools (Docker, Kubernetes).
- Experience in automating ML pipelines with MLOps practices.
- Experience in model deployment using cloud platforms (AWS, GCP, Azure) or MLOps tools.
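
For illustration only, one common way to handle an imbalanced dataset, as the "imbalanced datasets and techniques to improve model generalization" item suggests: class weighting in scikit-learn on a synthetic 95/5 split (the data is generated, not real).

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic binary dataset where the positive class is only ~5% of samples.
X, y = make_classification(
    n_samples=5000, n_features=20, weights=[0.95, 0.05], random_state=0
)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0
)

# "balanced" reweights each class inversely to its frequency, so the minority
# class is not ignored; alternatives include resampling (e.g., SMOTE).
model = LogisticRegression(max_iter=1000, class_weight="balanced")
model.fit(X_train, y_train)

print(classification_report(y_test, model.predict(X_test), digits=3))
```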

Posted 2 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies