5.0 years
0 Lacs
Goa, India
On-site
OPTEL. Responsible. Agile. Innovative. OPTEL is a global company that develops transformative software, middleware and hardware solutions to secure and ensure supply chain compliance in major industry sectors such as pharmaceuticals and food, with the goal of reducing the effects of climate change and enabling sustainable living. If you are driven by the desire to contribute to a better world while working in a dynamic and collaborative environment, then you've come to the right place!

Full Stack Developer (JavaScript + Mobile Dev + .NET)

Summary
We are seeking a passionate and highly skilled Full Stack Developer to drive the design, development, and optimization of modern, cloud-hosted SaaS applications. You will be responsible for full solution delivery, from architecture to deployment, leveraging technologies such as C#/.NET Core, Node.js, React.js, and cloud platforms like Google Cloud Platform (GCP) and AWS. The ideal candidate embraces a DevSecOps mindset, contributes to AI/ML integrations, and thrives on building secure, scalable, and innovative solutions alongside cross-functional teams.

Architecture & System Design
- Architect and design scalable, secure, and cloud-native applications.
- Establish technical best practices across frontend, backend, mobile, and cloud components.
- Contribute to system modernization efforts, advocating for microservices, serverless patterns, and event-driven design.
- Integrate AI/ML models and services into application architectures.

Application Development
- Design, develop, and maintain robust applications using C#, ASP.NET Core, Node.js, and React.js.
- Build cross-platform mobile applications with React Native or .NET MAUI.
- Develop and manage secure RESTful and GraphQL APIs.
- Utilize Infrastructure as Code (IaC) practices to automate cloud deployments.

Cloud Development & DevSecOps
- Build, deploy, and monitor applications on Google Cloud and AWS platforms.
- Implement and optimize CI/CD pipelines (GitHub Actions, GitLab, Azure DevOps).
- Ensure solutions align with security best practices and operational excellence (DevSecOps principles).

AI Development and Integration
- Collaborate with AI/ML teams to design, integrate, and optimize intelligent features.
- Work with AI APIs and/or custom AI models.
- Optimize AI workloads for scalability, performance, and cloud-native deployment.

Testing, Automation, and Monitoring
- Create unit, integration, and E2E tests to maintain high code quality.
- Implement proactive measures to reduce technical debt.
- Deploy monitoring and observability solutions.

Agile Collaboration
- Work in Agile/Scrum teams, participating in daily standups, sprint planning, and retrospectives.
- Collaborate closely with product managers, UX/UI designers, and QA engineers.
- Share knowledge and actively contribute to a strong, collaborative engineering culture.

Skills And Qualifications Required
- 5+ years of experience in Full Stack Development (C#, .NET Core, Node.js, JavaScript/TypeScript).
- Solid frontend development skills with React.js (Vue.js exposure is a plus).
- Experience with multi-platform mobile app development (React Native or .NET MAUI).
- Expertise with Google Cloud Platform (GCP) and/or AWS cloud services.
- Hands-on experience developing and consuming RESTful and GraphQL APIs.
- Strong DevOps experience (CI/CD, Infrastructure as Code, GitOps practices).
- Practical experience integrating AI/ML APIs or custom models into applications.
- Solid relational and cloud-native database skills (Postgres, BigQuery, DynamoDB).
- Serverless development (Cloud Functions, AWS Lambda).
- Kubernetes orchestration (GKE, EKS) and containerization (Docker).
- Event streaming systems (Kafka, Pub/Sub, RabbitMQ).
- AI/ML workflow deployment (Vertex AI Pipelines, SageMaker Pipelines).
- Edge Computing (Cloudflare Workers, Lambda@Edge).
- Experience with ISO/SOC2/GDPR/HIPAA compliance environments.
- Familiarity with App Store and Google Play Store deployment processes.

EQUAL OPPORTUNITY EMPLOYER
OPTEL is an equal opportunity employer.
We believe that diversity is essential for fostering innovation and creativity. We welcome and encourage applications from individuals of all backgrounds, cultures, gender identities, sexual orientations, abilities, ages, and beliefs. We are committed to providing a fair and inclusive recruitment process, where each candidate is evaluated solely on their qualifications, skills, and potential. At OPTEL, every employee's unique perspective contributes to our collective success, and we celebrate the richness that diversity brings to our team.
Posted 1 month ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Req ID: 327890

NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Python Developer - Digital Engineering Sr. Engineer to join our team in Hyderabad, Telangana (IN-TG), India (IN).

Python Data Engineer
- Exposure to retrieval-augmented generation (RAG) systems and vector databases.
- Strong programming skills in Python (and optionally Scala or Java).
- Hands-on experience with data storage solutions (e.g., Delta Lake, Parquet, S3, BigQuery).
- Experience with data preparation for transformer-based models or LLMs.
- Expertise in working with large-scale data frameworks (e.g., Spark, Kafka, Dask).
- Familiarity with MLOps tools (e.g., MLflow, Weights & Biases, SageMaker Pipelines).

About NTT DATA
NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com

NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users.
If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us . This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click here. If you'd like more information on your EEO rights under the law, please click here. For Pay Transparency information, please click here.
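The RAG and vector-database exposure this posting asks for comes down to one core operation: embedding text and retrieving the stored vectors nearest to a query. A minimal, framework-free sketch of that retrieval step (toy 3-dimensional vectors stand in for real embeddings; all names here are illustrative, not from any particular library):

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieve(query_vec, store, top_k=2):
    # Rank stored (doc_id, vector) pairs by similarity to the query.
    scored = [(doc_id, cosine_similarity(query_vec, vec))
              for doc_id, vec in store.items()]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return [doc_id for doc_id, _ in scored[:top_k]]

# Toy "vector database": document id -> embedding.
store = {
    "refund-policy": [0.9, 0.1, 0.0],
    "shipping-times": [0.1, 0.9, 0.1],
    "warranty-terms": [0.8, 0.2, 0.1],
}

hits = retrieve([1.0, 0.0, 0.0], store, top_k=2)
print(hits)  # ['refund-policy', 'warranty-terms']
```

A real system would swap the toy dictionary for a vector database and the hand-written vectors for embeddings from a model, but the ranking logic is the same.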
Posted 1 month ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Responsibilities:
- Development of AI/ML models and workflows to apply advanced algorithms and machine learning.
- Enable the team to run an automated design engine.
- Create design standards and assurance processes for easily deployable and scalable models.
- Ensure successful developments: be a technical leader through strong example and training of more junior engineers, documenting all relevant product and design information to educate others on novel design techniques and provide guidance on product usage.
- CI/CD pipeline (Azure DevOps/Git) integration as code repository.

Minimum Qualifications (Experience And Skills)
- 5+ years of data science experience.
- A strong software engineering background with emphasis on C/C++ or Python.
- 1+ years of experience with AWS SageMaker services.
- Exposure to AWS Lambda, API Gateway, AWS Amplify, AWS Serverless, AWS Cognito, and AWS security.
- Experience in debugging complex issues with a focus on object-oriented software design and development.
- Experience with optimization techniques and algorithms.
- Experience developing artificial neural networks and deep neural networks.
- Previous experience working in an Agile environment and collaborating with multi-disciplinary teams.
- Ability to communicate and document design work with clarity and completeness.
- Previous experience working on machine learning projects.
- Team player with a strong sense of urgency to meet product requirements with punctuality and professionalism.

Preferred Qualifications
- Programming experience in Perl / Python / R / Matlab / shell scripting.
- Knowledge of neural networks, with hands-on experience using ML frameworks such as TensorFlow or PyTorch.
- Knowledge of Convolutional Neural Networks (CNNs) and RNN/LSTMs.
- Knowledge of data management fundamentals and data storage principles.
- Knowledge of distributed systems as it pertains to data storage and computing.
- Knowledge of reinforcement learning techniques.
- Knowledge of evolutionary algorithms.
- AWS certification.
Posted 1 month ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: MLOps Engineer
Urgent: high-priority requirement.
1. Location: Hyderabad / Pune
2. Interview Rounds: 4 rounds
3. Contract: 12 months

About Client: We are a fast-growing boutique data engineering firm that empowers enterprises to manage and harness their data landscape efficiently, leveraging advanced machine learning (ML) methodologies.

Job Overview: We are seeking a highly skilled and motivated MLOps Engineer with 3–5 years of experience to join our engineering team. The ideal candidate should possess a strong foundation in DevOps or software engineering principles with practical exposure to machine learning operational workflows. You will be instrumental in operationalizing ML systems, optimizing the deployment lifecycle, and strengthening the integration between data science and engineering teams.

Required Skills:
● Hands-on experience with MLOps platforms such as MLflow and Kubeflow.
● Proficiency in Infrastructure as Code (IaC) tools like Terraform or Ansible.
● Strong familiarity with monitoring and alerting frameworks (Prometheus, Grafana, Datadog, AWS CloudWatch).
● Solid understanding of microservices architecture, service discovery, and load balancing.
● Excellent programming skills in Python, with experience in writing modular, testable, and maintainable code.
● Proficient in Docker and container-based application deployments.
● Experience with CI/CD tools such as Jenkins or GitLab CI.
● Basic working knowledge of Kubernetes for container orchestration.
● Practical experience with cloud-based ML platforms such as AWS SageMaker, Databricks, or Google Vertex AI.
● Competency in Linux shell scripting and command-line operations.
● Proficiency with Git and version control best practices.
● Foundational knowledge of machine learning principles and typical ML workflow patterns.

Good-to-Have Skills:
● Awareness of security practices specific to ML pipelines, including secure model endpoints and data protection.
● Experience with scripting languages like Bash or PowerShell for automation tasks.
● Exposure to database scripting and data integration pipelines.

Experience & Qualifications:
● 3–5+ years of experience in MLOps, Site Reliability Engineering (SRE), or Software Engineering roles.
● At least 2 years of hands-on experience working on ML/AI systems in production settings.
● Deep understanding of cloud-native architectures, containerization, and the end-to-end ML lifecycle.
● Bachelor's degree in Computer Science, Software Engineering, or a related technical field.
● Relevant certifications such as AWS Certified DevOps Engineer - Professional are a strong plus.
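Much of the MLOps tooling this posting names (MLflow, model registries) exists to answer one question: which model version, trained with which parameters and metrics, is currently serving production. A stripped-down, dependency-free sketch of that bookkeeping (the class and method names are invented for illustration and do not reflect MLflow's actual API):

```python
class ModelRegistry:
    # Minimal stand-in for a registry: each registered version records
    # its params and metrics, and exactly one version is "production".
    def __init__(self):
        self.versions = {}
        self.production = None

    def register(self, version, params, metrics):
        self.versions[version] = {"params": params, "metrics": metrics}

    def promote(self, version):
        # Promote only if the candidate matches or beats the current
        # production accuracy; otherwise leave production untouched.
        current = self.versions.get(self.production,
                                    {"metrics": {"accuracy": 0.0}})
        if self.versions[version]["metrics"]["accuracy"] >= current["metrics"]["accuracy"]:
            self.production = version
            return True
        return False

registry = ModelRegistry()
registry.register("v1", {"lr": 0.1}, {"accuracy": 0.87})
registry.register("v2", {"lr": 0.05}, {"accuracy": 0.91})
registry.promote("v1")
promoted = registry.promote("v2")  # v2 beats v1, so it takes over
print(registry.production)         # v2
```

Real registries add artifact storage, stage transitions, and audit history on top of this core idea.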
Posted 1 month ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Company Description
Jarvis Business Solutions is a leading eCommerce and CRM company specializing in implementing and delivering solutions for small to large enterprises. With expertise in SAP Hybris Commerce, Salesforce CRM, and Commerce, Jarvis serves clients globally by providing innovative solutions tailored to their needs.

Role Description
This is a full-time on-site role for an AI/ML Technical Lead located in Hyderabad. The Technical Lead will be responsible for leading AI/ML projects, developing algorithms, implementing machine learning models, and providing technical guidance to the team. They will collaborate with stakeholders to understand business requirements and ensure successful project delivery.

Qualifications
- Bachelor's or Master's degree in Computer Science, Engineering, Mathematics, or a related field.
- 8+ years of overall software development experience with 3+ years in AI/ML roles.
- Strong knowledge of machine learning frameworks such as TensorFlow, PyTorch, Scikit-learn, etc.
- Hands-on experience with Python (preferred), R, or similar languages.
- Experience with cloud platforms (AWS/GCP/Azure) and MLOps tools (MLflow, SageMaker, Kubeflow).
- Proven track record of delivering AI/ML solutions at scale.
- Strong knowledge of data preprocessing, model evaluation, and deployment strategies.
- Excellent problem-solving, analytical, and communication skills.
Posted 1 month ago
0.0 - 6.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Bangalore, Karnataka, India
Job ID 763721

Join our Team
Ericsson Enterprise Wireless Solutions (BEWS) is responsible for driving Ericsson's Enterprise Networking and Security business. Our expanding product portfolio covers wide area networks, local area networks, and enterprise security. We are the #1 global market leader in Wireless-WAN enterprise connectivity and are rapidly growing in enterprise Private 5G networks and Secure Access Service Edge (SASE) solutions.

Key Responsibilities
- Define and implement model validation processes and business success criteria in data science terms.
- Contribute to the architecture and data flow for machine learning models.
- Rapidly develop and iterate minimum viable solutions (MVS) that address enterprise needs.
- Conduct advanced data analysis and rigorous testing to enhance model accuracy and performance.
- Work with Data Architects to leverage existing data models and create new ones as required.
- Collaborate with product teams and business partners to industrialize machine learning models into Ericsson's enterprise solutions.
- Build MLOps pipelines for continuous integration, continuous delivery, validation, and monitoring of AI/ML models.
- Design and implement effective big data storage and retrieval strategies (indexing, partitioning, etc.).
- Develop and maintain APIs for AI/ML models and optimize data pipelines.
- Lead end-to-end ML projects from conception to deployment.
- Stay updated on the latest ML advancements and apply best practices to enterprise AI solutions.

Required Skills & Experience
- 4–6 years of hands-on experience in machine learning, AI, and data science.
- Strong knowledge of ML frameworks (Keras, TensorFlow, Spark ML, etc.).
- Proficiency in ML algorithms, deep learning, reinforcement learning (RL), and large language models (LLMs).
- Expertise in MLOps, including model lifecycle management and monitoring.
- Experience with containerization & orchestration (Docker, Kubernetes, Helm charts).
- Hands-on expertise with workflow orchestration tools (Kubeflow, Airflow, Argo).
- Strong programming skills in Python and experience with C++, Scala, Java, R.
- Experience in API design & development for AI/ML models.
- Hands-on knowledge of Terraform for infrastructure automation.
- Familiarity with AWS services (Data Lake, Athena, SageMaker, OpenSearch, DynamoDB, Redshift).
- Strong understanding of self-hosted deployment of LLMs on AWS.
- Experience with RASA, LangChain, LangGraph, LlamaIndex, Django, Open Policy Agent.
- Working knowledge of vector databases, knowledge graphs, retrieval-augmented generation (RAG), agents, and agentic mesh architectures.
- Expertise in monitoring tools like Datadog for K8s environments.
- Ability to document, present, and communicate technical findings to business stakeholders.
- Proven ability to contribute to ML forums, patents, and research publications.

Educational Qualifications
B.Tech/B.E. in Computer Science, MCA, or a Master's in Mathematics/Statistics from a top-tier institute.

Join Ericsson and be part of a cutting-edge team that is revolutionizing enterprise AI, 5G, and security solutions, and help shape the future of wireless connectivity!

Why join Ericsson?
At Ericsson, you'll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what's possible. To build solutions never seen before to some of the world's toughest problems. You'll be challenged, but you won't be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next.

What happens once you apply?
Posted 1 month ago
2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Title: AI/ML Engineer
Location: Bengaluru, India
Experience: 6 months - 2 years

Company Overview
IAI Solutions operates at the edge of applied AI, where foundational research meets real-world deployment. We craft intelligent systems that think in teams, adapt with context, and deliver actionable insight across domains.

Position Summary
We are looking for an AI/ML Engineer with a strong background in Python, Flask/FastAPI, and Object-Oriented Programming (OOP). The ideal candidate should have significant experience in prompt engineering, open-source model fine-tuning, and using the HuggingFace libraries. Additionally, expertise in working with cloud platforms such as AWS SageMaker or similar services for training AI models is essential. Priority will be given to candidates with a research background, particularly those who have successfully fine-tuned and deployed AI models in real-world applications.

Key Responsibilities
- Develop, fine-tune, and deploy AI models using Python and Flask/FastAPI frameworks.
- Apply prompt engineering techniques to optimize model outputs and improve accuracy.
- Utilize HuggingFace libraries and other ML tools to build and fine-tune state-of-the-art models.
- Work on cloud platforms like AWS SageMaker or equivalent to train and deploy AI models efficiently.
- Collaborate with research teams to translate cutting-edge AI research into scalable solutions.
- Implement object-oriented programming (OOP) principles and problem-solving strategies in developing AI solutions.
- Stay updated with the latest advancements in AI/ML and integrate new techniques into ongoing projects.
- Document and share findings, best practices, and solutions across the engineering team.

An Ideal Candidate Will Have
- Strong proficiency in Python and Flask/FastAPI.
- Experience in prompt engineering and fine-tuning AI models.
- Extensive experience with HuggingFace libraries and similar AI/ML tools.
- Strong experience in AI agentic architecture.
- Hands-on experience with cloud platforms such as AWS SageMaker for training and deploying models.
- Proficiency in databases like MongoDB or PostgreSQL, as well as vector databases such as FAISS, Qdrant, or Elasticsearch.
- Hands-on experience with Docker and Git for version control.
- Background in AI/ML research, with a preference for candidates from research institutes.
- Demonstrated experience in training and deploying machine learning models in real-world applications.
- Solid understanding of object-oriented programming and problem-solving skills.
- Strong analytical skills and the ability to work independently or in a team environment.
- Excellent communication skills, with the ability to present complex technical concepts to non-technical stakeholders.

Must Have Skills
- Python
- Object-Oriented Programming (OOP)
- Prompt engineering
- HuggingFace libraries and similar AI/ML tools
- Open-source model fine-tuning
- AI agentic architecture such as LangGraph and CrewAI
- Docker and Git for version control
- Databases like MongoDB or PostgreSQL, as well as vector databases such as FAISS, Qdrant, or Elasticsearch

Good To Have
- Deep Learning and Machine Learning
- AWS SageMaker or similar services for training AI models
- Previous experience in academic or industrial research, with published work in AI/ML
- Proven track record of successful AI model deployments and optimizations
- Experience with databases like MongoDB or PostgreSQL, as well as vector databases such as FAISS, Qdrant, or Elasticsearch

Perks & Benefits
- Work on groundbreaking AI/ML projects in a collaborative and innovative environment.
- Access to state-of-the-art tools and cloud platforms.
- Opportunities for professional development and continuous learning.
- Competitive salary.

Join IAI Solutions and help build the future of AI-powered software! (ref:hirist.tech)
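The prompt-engineering work this posting describes usually starts with parameterized prompt templates that keep the fixed instruction, retrieved context, and user input cleanly separated. A minimal illustrative sketch (the template wording and function name are invented for this example):

```python
def build_prompt(system_rule, context_docs, question):
    # Assemble a structured prompt: fixed instruction first, then the
    # retrieved context, then the user question, with labeled sections
    # so the model can tell the parts apart.
    context = "\n".join(f"- {doc}" for doc in context_docs)
    return (
        f"Instruction: {system_rule}\n"
        f"Context:\n{context}\n"
        f"Question: {question}\n"
        "Answer using only the context above."
    )

prompt = build_prompt(
    "You are a concise support assistant.",
    ["Refunds are processed within 5 business days.",
     "Refunds require an order number."],
    "How long do refunds take?",
)
print(prompt)
```

Keeping the template in one function makes prompt variants easy to A/B test and version alongside the rest of the code.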
Posted 1 month ago
5.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Techvantage.ai is a next-generation technology and product engineering company at the forefront of innovation in Generative AI, Agentic AI, and autonomous intelligent systems. We build intelligent, cutting-edge solutions designed to scale and evolve with the future of artificial intelligence.

Role Overview
We are looking for a skilled and versatile AI Infrastructure Engineer (DevOps/MLOps) to build and manage the cloud infrastructure, deployment pipelines, and machine learning operations behind our AI-powered products. You will work at the intersection of software engineering, ML, and cloud architecture to ensure that our models and systems are scalable, reliable, and production-ready.

Key Responsibilities
- Design and manage CI/CD pipelines for both software applications and machine learning workflows.
- Deploy and monitor ML models in production using tools like MLflow, SageMaker, Vertex AI, or similar.
- Automate the provisioning and configuration of infrastructure using IaC tools (Terraform, Pulumi, etc.).
- Build robust monitoring, logging, and alerting systems for AI applications.
- Manage containerized services with Docker and orchestration platforms like Kubernetes.
- Collaborate with data scientists and ML engineers to streamline model experimentation, versioning, and deployment.
- Optimize compute resources and storage costs across cloud environments (AWS, GCP, or Azure).
- Ensure system reliability, scalability, and security across all environments.

Requirements
- 5+ years of experience in DevOps, MLOps, or infrastructure engineering roles.
- Hands-on experience with cloud platforms (AWS, GCP, or Azure) and services related to ML workloads.
- Strong knowledge of CI/CD tools (e.g., GitHub Actions, Jenkins, GitLab CI).
- Proficiency in Docker, Kubernetes, and infrastructure-as-code frameworks.
- Experience with ML pipelines, model versioning, and ML monitoring tools.
- Scripting skills in Python, Bash, or similar for automation tasks.
- Familiarity with monitoring/logging tools (Prometheus, Grafana, ELK, CloudWatch, etc.).
- Understanding of ML lifecycle management and reproducibility.

Preferred Qualifications
- Experience with Kubeflow, MLflow, DVC, or Triton Inference Server.
- Exposure to data versioning, feature stores, and model registries.
- Certification in AWS/GCP DevOps or Machine Learning Engineering is a plus.
- Background in software engineering, data engineering, or ML research is a bonus.

What We Offer
- Work on cutting-edge AI platforms and infrastructure.
- Cross-functional collaboration with top ML, research, and product teams.
- Competitive compensation package - no constraints for the right candidate.

(ref:hirist.tech)
Posted 1 month ago
5.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
About The Role
We're looking for a Senior Data Scientist with hands-on experience in machine learning, GenAI, and cloud platforms to solve real-world problems in the BFSI/Insurance space. You'll work with a collaborative team to build and deploy models that drive business impact - from fraud detection and churn prediction to customer segmentation and unstructured data analysis.

What You'll Do
- Build ML models (XGBoost, Neural Networks, Random Forests) for classification, regression, anomaly detection, and more.
- Use unsupervised learning techniques for segmentation, clustering, and cohort creation.
- Apply LLMs and GenAI for insights from unstructured data (e.g., call transcripts, agent notes).
- Deploy solutions on cloud platforms (preferably AWS) using tools like Databricks, SageMaker, Lambda, Docker, etc.
- Implement MLOps best practices - versioning, CI/CD, monitoring.
- Collaborate with business teams to translate models into insights that matter.

What You Bring
- 5+ years of experience in data science roles.
- 1-2 years working in the BFSI or Insurance domain.
- Proficiency in Python, SQL, and Databricks.
- Strong understanding of ML fundamentals, including optimization techniques like gradient descent.
- Experience with GenAI/LLMs (nice to have).
- Cloud experience (AWS preferred) with exposure to end-to-end model deployment.
- Great communication skills and a problem-solving mindset.

(ref:hirist.tech)
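This posting calls out gradient descent by name, and the core idea fits in a few lines: repeatedly nudge a parameter against the gradient of the loss. A toy sketch fitting a one-parameter model y = w·x by minimizing mean squared error (the learning rate and data are invented for illustration):

```python
def fit_slope(xs, ys, lr=0.01, steps=500):
    # Gradient descent on MSE for the model y = w * x.
    w = 0.0
    n = len(xs)
    for _ in range(steps):
        # dMSE/dw = (2/n) * sum((w*x - y) * x)
        grad = (2.0 / n) * sum((w * x - y) * x for x, y in zip(xs, ys))
        w -= lr * grad
    return w

# Data generated from y = 3x; the fitted slope should approach 3.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 6.0, 9.0, 12.0]
w = fit_slope(xs, ys)
print(round(w, 3))  # prints 3.0
```

The same loop, generalized to vectors of parameters and run on mini-batches, is what optimizers in XGBoost-style boosting and neural-network training build upon.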
Posted 1 month ago
0 years
0 Lacs
Greater Kolkata Area
On-site
Job Description

Key Responsibilities:
- Design, develop, and deploy machine learning models and AI-driven applications using Python.
- Collaborate with data scientists, software engineers, and product managers to understand business requirements and translate them into AI solutions.
- Utilize AWS Bedrock and other AWS cloud services for model deployment, training, and orchestration.
- Manage data pipelines and ensure efficient data flow and storage, including preprocessing and feature engineering.
- Implement and maintain databases (SQL/NoSQL) used for storing structured and unstructured data.
- Optimize model performance and continuously monitor accuracy, efficiency, and scalability of deployed systems.
- Stay updated with the latest research, trends, and technologies in AI and machine learning.
- Document design processes, code, and research findings clearly and concisely.
- Contribute to code reviews, best practices, and technical mentorship within the team.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related field.
- Strong proficiency in Python and common machine learning libraries (e.g., TensorFlow, PyTorch, Scikit-learn, Hugging Face Transformers).
- Proven experience building, training, and deploying machine learning models in real-world applications.
- Hands-on experience with AWS, particularly AWS Bedrock, SageMaker, Lambda, S3, and related services.
- Proficient in database management, including SQL and NoSQL systems such as PostgreSQL, MySQL, MongoDB, or DynamoDB.
- Understanding of data structures, algorithms, and software engineering best practices.
- Experience with version control systems (e.g., Git) and CI/CD pipelines for machine learning projects.
- Excellent communication and collaboration skills.

(ref:hirist.tech)
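The preprocessing and feature-engineering responsibility above often reduces to putting numeric features on a comparable scale before training. A dependency-free sketch of min-max scaling (the helper name is illustrative):

```python
def min_max_scale(values):
    # Rescale a numeric feature to the [0, 1] range.
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]  # constant feature carries no signal
    return [(v - lo) / (hi - lo) for v in values]

ages = [20, 35, 50]
print(min_max_scale(ages))  # [0.0, 0.5, 1.0]
```

In practice the scaling parameters (lo, hi) are fit on training data and reused at inference time, so the same transformation is applied to new records.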
Posted 1 month ago
10.0 - 15.0 years
0 Lacs
Sahibzada Ajit Singh Nagar, Punjab, India
On-site
Job Title: Director - AI Automation & Data Sciences
Experience Required: 10-15 years
Industry: Legal Technology / Cybersecurity / Data Science
Department: Technology & Innovation

About The Role
We are seeking an exceptional Director - AI Automation & Data Sciences to lead the innovation engine behind our Managed Document Review and Cyber Incident Response services. This is a senior leadership role where you'll leverage advanced AI and data science to drive automation, scalability, and differentiation in service delivery. If you are a visionary leader who thrives at the intersection of technology and operations, this is your opportunity to make a global impact.

Why Join Us
- Cutting-edge AI & Data Science technologies at your fingertips
- Globally recognized Cyber Incident Response Team
- Prestigious clientele of Fortune 500 companies and industry leaders
- Award-winning, inspirational workspaces
- Transparent, inclusive, and growth-driven culture
- Industry-best compensation that recognizes excellence

Key Responsibilities (KRAs)
- Lead and scale AI & data science initiatives across Document Review and Incident Response programs
- Architect intelligent automation workflows to streamline legal review, anomaly detection, and threat analytics
- Drive end-to-end deployment of ML and NLP models into production environments
- Identify and implement AI use cases that deliver measurable business outcomes
- Collaborate with cross-functional teams including Legal Tech, Cybersecurity, Product, and Engineering
- Manage and mentor a high-performing team of data scientists, ML engineers, and automation specialists
- Evaluate and integrate third-party AI platforms and open-source tools for accelerated innovation
- Ensure AI models comply with privacy, compliance, and ethical AI principles
- Define and monitor key metrics to track model performance and automation ROI
- Stay abreast of emerging trends in generative AI, LLMs, and cybersecurity analytics

Technical Skills & Tools
- Proficiency in Python, R, or Scala for data science and automation scripting
- Expertise in Machine Learning, Deep Learning, and NLP techniques
- Hands-on experience with LLMs, Transformer models, and Vector Databases
- Strong knowledge of Data Engineering pipelines: ETL, data lakes, and real-time analytics
- Familiarity with Cyber Threat Intelligence, anomaly detection, and event correlation
- Experience with platforms like AWS SageMaker, Azure ML, Databricks, HuggingFace
- Advanced use of TensorFlow, PyTorch, spaCy, Scikit-learn, or similar frameworks
- Knowledge of containerization (Docker, Kubernetes) and CI/CD pipelines for MLOps
- Strong command of SQL, NoSQL, and big data tools (Spark, Kafka)

Qualifications
- Bachelor's or Master's in Computer Science, Data Science, AI, or a related field
- 10-15 years of progressive experience in AI, Data Science, or Automation
- Proven leadership of cross-functional technology teams in high-growth environments
- Experience working in LegalTech, Cybersecurity, or related high-compliance industries preferred

(ref:hirist.tech)
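Anomaly detection, mentioned twice in this posting, is often bootstrapped with something as simple as a z-score threshold before heavier models are brought in: flag any observation far from the mean in standard-deviation units. An illustrative sketch (the threshold and data are invented for this example):

```python
import math

def zscore_anomalies(values, threshold=2.0):
    # Flag indices whose value lies more than `threshold` standard
    # deviations from the mean of the series.
    n = len(values)
    mean = sum(values) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in values) / n)
    if std == 0:
        return []
    return [i for i, v in enumerate(values) if abs(v - mean) / std > threshold]

# Login counts per hour; the spike at index 4 should be flagged.
counts = [10, 12, 11, 9, 90, 10, 11, 12]
print(zscore_anomalies(counts))  # [4]
```

Production threat analytics would use robust statistics or learned models, but the thresholding pattern is the same.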
Posted 1 month ago
5.0 - 8.0 years
0 Lacs
Greater Lucknow Area
On-site
Job Description
We are seeking a high-impact AI/ML Engineer to lead the design, development, and deployment of machine learning and AI solutions across vision, audio, and language modalities. You'll be part of a fast-paced, outcome-oriented AI & Analytics team, working alongside data scientists, engineers, and product leaders to transform business use cases into real-time, scalable AI systems. This role demands strong technical leadership, a product mindset, and hands-on expertise in Computer Vision, Audio Intelligence, and Deep Learning.

Key Responsibilities
- Architect, develop, and deploy ML models for multimodal problems, including vision (image/video), audio (speech/sound), and NLP tasks.
- Own the complete ML lifecycle: data ingestion, model development, experimentation, evaluation, deployment, and monitoring.
- Leverage transfer learning, foundation models, or self-supervised approaches where suitable.
- Design and implement scalable training pipelines and inference APIs using frameworks like PyTorch or TensorFlow.
- Collaborate with MLOps, data engineering, and DevOps to productionize models using Docker, Kubernetes, or serverless infrastructure.
- Continuously monitor model performance and implement retraining workflows to ensure accuracy over time.
- Stay ahead of the curve on cutting-edge AI research (e.g., generative AI, video understanding, audio embeddings) and incorporate innovations into production systems.
- Write clean, well-documented, and reusable code to support agile experimentation and long-term platform development.

Qualifications
- Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related field.
- 5-8+ years of experience in AI/ML Engineering, with at least 3 years in applied deep learning.

Technical Skills
- Languages: Expert in Python; good knowledge of R or Java is a plus.
- ML/DL Frameworks: Proficient with PyTorch, TensorFlow, Scikit-learn, ONNX.
Computer Vision: Image classification, object detection, OCR, segmentation, tracking (YOLO, Detectron2, OpenCV, MediaPipe). Audio AI: Speech recognition (ASR), sound classification, audio embedding models (Wav2Vec2, Whisper, etc.). Data Engineering: Strong with Pandas, NumPy, SQL, and preprocessing pipelines for structured and unstructured data. NLP/LLMs: Working knowledge of Transformers, BERT/LLAMA, Hugging Face ecosystem is preferred. Cloud & MLOps: Experience with AWS/GCP/Azure, MLFlow, SageMaker, Vertex AI, or Azure ML. Deployment & Infrastructure: Experience with Docker, Kubernetes, REST APIs, serverless ML inference. CI/CD & Version Control: Git, DVC, ML pipelines, Jenkins, Airflow, etc. Soft Skills & Competencies Strong analytical and systems thinking; able to break down business problems into ML components. Excellent communication skills; able to explain models, results, and decisions to non-technical stakeholders. Proven ability to work cross-functionally with designers, engineers, product managers, and analysts. Demonstrated bias for action, rapid experimentation, and iterative delivery of impact. (ref:hirist.tech)
Posted 1 month ago
0.0 years
0 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose - the relentless pursuit of a world that works better for people - we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI. Inviting applications for the role of Lead Consultant - ML Engineer! In this role, we are looking for candidates who have relevant years of experience in designing and developing machine learning and deep learning systems, professional software development experience, hands-on experience running machine learning tests and experiments, and experience implementing appropriate ML algorithms. Responsibilities Drive the vision for a modern data and analytics platform to deliver well-architected and engineered data and analytics products leveraging the cloud tech stack and third-party products Close the gap between ML research and production to create ground-breaking new products and features and solve problems for our customers Design, develop, test, and deploy data pipelines, machine learning infrastructure and client-facing products and services Build and implement machine learning models and prototype solutions for proof-of-concept Scale existing ML models into production on a variety of cloud platforms Analyze and resolve architectural problems, working closely with engineering, data science and operations teams Qualifications we seek in you!
Minimum Qualifications / Skills Relevant years of experience Bachelor's degree in computer science engineering, information technology or BSc in Computer Science, Mathematics or similar field Master's degree is a plus Integration - APIs, microservices and ETL/ELT patterns DevOps (Good to have) - Ansible, Jenkins, ELK Containerization - Docker, Kubernetes etc. Orchestration - Google Cloud Composer Languages and scripting: Python, Scala, Java etc. Cloud Services - GCP, Snowflake Analytics and ML tooling - SageMaker, ML Studio Execution Paradigm - low latency/streaming, batch Preferred Qualifications / Skills Data platforms - DBT, Fivetran and Data Warehouse (Teradata, Redshift, BigQuery, Snowflake etc.) Visualization Tools - PowerBI, Tableau Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
Posted 1 month ago
0.0 years
0 Lacs
Hyderabad / Secunderabad, Telangana, Telangana, India
On-site
Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose - the relentless pursuit of a world that works better for people - we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI. Inviting applications for the role of Lead Consultant - ML Engineer! In this role, we are looking for candidates who have relevant years of experience in designing and developing machine learning and deep learning systems, professional software development experience, hands-on experience running machine learning tests and experiments, and experience implementing appropriate ML algorithms. Responsibilities Drive the vision for a modern data and analytics platform to deliver well-architected and engineered data and analytics products leveraging the cloud tech stack and third-party products Close the gap between ML research and production to create ground-breaking new products and features and solve problems for our customers Design, develop, test, and deploy data pipelines, machine learning infrastructure and client-facing products and services Build and implement machine learning models and prototype solutions for proof-of-concept Scale existing ML models into production on a variety of cloud platforms Analyze and resolve architectural problems, working closely with engineering, data science and operations teams Qualifications we seek in you!
Minimum Qualifications / Skills Relevant years of experience Bachelor's degree in computer science engineering, information technology or BSc in Computer Science, Mathematics or similar field Master's degree is a plus Integration - APIs, microservices and ETL/ELT patterns DevOps (Good to have) - Ansible, Jenkins, ELK Containerization - Docker, Kubernetes etc. Orchestration - Google Cloud Composer Languages and scripting: Python, Scala, Java etc. Cloud Services - GCP, Snowflake Analytics and ML tooling - SageMaker, ML Studio Execution Paradigm - low latency/streaming, batch Preferred Qualifications / Skills Data platforms - DBT, Fivetran and Data Warehouse (Teradata, Redshift, BigQuery, Snowflake etc.) Visualization Tools - PowerBI, Tableau Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
Posted 1 month ago
0 years
0 Lacs
India
On-site
Key Responsibilities Design, develop, and deploy machine learning models for prediction, recommendation, anomaly detection, NLP, or image processing tasks. Work with large, complex datasets to extract insights and build scalable solutions. Collaborate with data engineers to create efficient data pipelines and feature engineering workflows. Evaluate model performance using appropriate metrics and improve models through iterative testing and tuning. Communicate findings, insights, and model outputs clearly to non-technical stakeholders. Stay up to date with the latest machine learning research, frameworks, and technologies. Required Skills Strong programming skills in Python (Pandas, NumPy, Scikit-learn, etc.). Hands-on experience with ML/DL frameworks like TensorFlow, PyTorch, XGBoost, or LightGBM. Experience in building, deploying, and maintaining end-to-end ML models in production. Solid understanding of statistics, probability, and mathematical modeling. Proficiency with SQL and data manipulation in large-scale databases. Familiarity with version control (Git), CI/CD workflows, and model tracking tools (MLflow, DVC, etc.). Preferred Skills Experience with cloud platforms like AWS, GCP, or Azure (e.g., SageMaker, Vertex AI). Knowledge of MLOps practices and tools for scalable ML deployments. Exposure to real-time data processing or streaming (Kafka, Spark). Experience with NLP, Computer Vision, or Time Series Forecasting.
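The "evaluate model performance using appropriate metrics" step above can be illustrated with a minimal, dependency-free sketch. The toy labels and the `evaluate` helper are invented for the example; in practice the predictions would come from a trained model (e.g., scikit-learn or XGBoost) scored on a held-out set.

```python
# Minimal sketch of classification-metric evaluation (accuracy, precision,
# recall) over toy binary labels. Real workflows would use library metrics
# such as sklearn.metrics; this is a hand-rolled illustration only.

def evaluate(y_true, y_pred):
    """Return (accuracy, precision, recall) for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    accuracy = correct / len(y_true)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return accuracy, precision, recall

# Toy held-out labels vs. model predictions.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
acc, prec, rec = evaluate(y_true, y_pred)
print(f"accuracy={acc:.2f} precision={prec:.2f} recall={rec:.2f}")
# -> accuracy=0.75 precision=0.75 recall=0.75
```

Iterative tuning then means re-running this evaluation after each change to features or hyperparameters and keeping the variant with the better held-out metrics.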
Posted 1 month ago
4.0 - 6.0 years
7 - 9 Lacs
Hyderabad
Work from Office
What you will do In this vital role you will be responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and implementing data governance initiatives and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes. Roles & Responsibilities: Design, develop, and maintain data solutions for data generation, collection, and processing. Be a key team member that assists in design and development of the data pipeline. Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems. Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions. Take ownership of data pipeline projects from inception to deployment, manage scope, timelines, and risks. Collaborate with multi-functional teams to understand data requirements and design solutions that meet business needs. Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency. Implement data security and privacy measures to protect sensitive data. Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions. Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines to meet fast-paced business needs across geographic regions. Identify and resolve complex data-related challenges. Adhere to standard processes for coding, testing, and designing reusable code/components. Explore new tools and technologies that will help improve ETL platform performance.
Participate in sprint planning meetings and provide estimations on technical implementation. Collaborate and communicate effectively with product teams. Basic Qualifications: Master's degree with 4 - 6 years of experience in Computer Science, IT or related field OR Bachelor's degree with 6 - 8 years of experience in Computer Science, IT or related field OR Diploma with 10 - 12 years of experience in Computer Science, IT or related field. Functional Skills: Must-Have Skills: Hands-on experience with big data technologies and platforms, such as Databricks, Apache Spark (PySpark, SparkSQL), workflow orchestration, performance tuning on big data processing. Hands-on experience with various Python/R packages for EDA, feature engineering and machine learning model training. Proficiency in data analysis tools (e.g., SQL) and experience with data visualization tools. Excellent problem-solving skills and the ability to work with large, complex datasets. Strong understanding of data governance frameworks, tools, and standard methodologies. Knowledge of data protection regulations and compliance requirements (e.g., GDPR, CCPA). Good-to-Have Skills: Experience with ETL tools such as Apache Spark, and various Python packages related to data processing, machine learning model development. Strong understanding of data modeling, data warehousing, and data integration concepts. Knowledge of Python/R, Databricks, SageMaker, OMOP. Professional Certifications: Certified Data Engineer / Data Analyst (preferred on Databricks or cloud environments). Certified Data Scientist (preferred on Databricks or Cloud environments). Machine Learning Certification (preferred on Databricks or Cloud environments). SAFe for Teams certification (preferred). Soft Skills: Excellent critical-thinking and problem-solving skills. Strong communication and collaboration skills. Demonstrated awareness of how to function in a team setting. Demonstrated presentation skills.
Shift Information: This position requires you to work a later shift and may be assigned a second or third shift schedule. Candidates must be willing and able to work during evening or night shifts, as required based on business requirements.
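The ETL pattern this role centers on (extract raw records, transform them with cleaning and derivation rules, load an aggregated result) can be sketched in a few lines. The record layout, the cleaning rule, and the aggregation target below are invented for illustration; a production pipeline would apply the same steps with PySpark/SparkSQL on Databricks over distributed data.

```python
# Toy ETL sketch: extract -> transform (drop dirty rows, parse types) ->
# load (aggregate per region into a dict standing in for a warehouse table).

raw = [  # extract: pretend these rows came from a source system
    {"region": "EU", "amount": "120.50"},
    {"region": "EU", "amount": "79.50"},
    {"region": "US", "amount": "200.00"},
    {"region": "US", "amount": None},  # dirty row, to be dropped
]

def transform(rows):
    """Drop rows with missing amounts and parse amounts to float."""
    return [
        {"region": r["region"], "amount": float(r["amount"])}
        for r in rows
        if r["amount"] is not None
    ]

def load(rows):
    """Aggregate amounts per region; the dict stands in for the load target."""
    totals = {}
    for r in rows:
        totals[r["region"]] = totals.get(r["region"], 0.0) + r["amount"]
    return totals

print(load(transform(raw)))
# -> {'EU': 200.0, 'US': 200.0}
```

Data-quality checks (like the missing-amount filter here) are exactly where the governance and compliance requirements listed above plug into the pipeline.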
Posted 1 month ago
8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Position: Solution Architect Location: Chennai/ Bangalore/ Kuala Lumpur Experience: 8+ years Employment Type: Full-time Job Overview Join Moving Walls, a trailblazer in the Out-of-Home (OOH) advertising and AdTech ecosystem, as a Solution Architect. This pivotal role places you at the heart of our innovative journey, designing and implementing scalable, efficient, and transformative solutions for our award-winning platforms like LMX and MAX . With a focus on automating and enhancing media transactions, you’ll enable a seamless connection between media buyers and sellers in a rapidly evolving digital-first landscape. As a Solution Architect, you will bridge the gap between business objectives and technical execution, working in an Agile environment with POD-based execution models to ensure ownership and accountability. You will drive initiatives that revolutionize the way data and technology shape OOH advertising. Why Join Us? ● Innovative Vision: Be part of a team committed to "Creating the Future of Outernet Media", where every solution impacts global markets across Asia, ANZ, Africa, and more. ● Cutting-edge Projects: Work on features like programmatic deal automation, data-driven audience insights, and dynamic campaign management for platforms connecting billions of ad impressions. ● Collaborative Culture: Collaborate with multidisciplinary teams, including Sales, Product Management, and Engineering, to craft solutions that are customized and impactful. What You’ll Do: ● Architect scalable and innovative solutions for AdTech products, ensuring alignment with organizational goals and market needs. ● Collaborate with cross-functional teams to gather, analyze, and translate business requirements into technical designs. ● Lead the development of programmatic solutions, dynamic audience segmentation tools, and integrations for global markets. ● Enhance existing products by integrating advanced features like dynamic rate cards, bid management, and inventory mapping. 
● Advocate for best practices in system design, ensuring the highest standards of security, reliability, and performance. What You Bring: ● A strong technical background with hands-on experience in cloud-based architectures, API integrations, and data analytics. ● Proven expertise in working within an Agile environment and leading POD-based teams to deliver high-impact results. ● Passion for AdTech innovation and the ability to navigate complex, fast-paced environments. ● Excellent problem-solving skills, creativity, and a customer-centric mindset. Key Responsibilities 1. Solution Design: ○ Develop end-to-end solution architectures for web, mobile, and cloud-based platforms using the specified tech stack. ○ Translate business requirements into scalable and reliable technical solutions. 2. Agile POD-Based Execution: ○ Collaborate with cross-functional POD teams (Product, Engineering, QA, and Operations) to deliver iterative and focused solutions. ○ Ensure clear ownership of deliverables within the POD, fostering accountability and streamlined execution. ○ Contribute to defining and refining the POD stages to ensure alignment with organizational goals. 3. Collaboration and Stakeholder Management: ○ Work closely with product, engineering, and business teams to define technical requirements. ○ Lead technical discussions with internal and external stakeholders. 4. Technical Expertise: ○ Provide architectural guidance and best practices for system integrations, APIs, and microservices. ○ Ensure solutions meet non-functional requirements like scalability, reliability, and security. 5. Documentation: ○ Prepare and maintain architectural documentation, including solution blueprints and workflows. ○ Create technical roadmaps and detailed design documentation. 6. Mentorship: ○ Guide and mentor engineering teams during development and deployment phases. ○ Review code and provide technical insights to improve quality and performance. 7. 
Innovation and Optimization: ○ Identify areas for technical improvement and drive innovation in solutions. ○ Evaluate emerging technologies to recommend the best tools and frameworks. Required Skills and Qualifications ● Bachelor’s/Master’s degree in Computer Science, Information Technology, or a related field. ● Proven experience as a Solution Architect or a similar role. ● Expertise in programming languages and frameworks: Java, Angular, Python, C++ ● Proficiency in AI/ML frameworks and libraries such as TensorFlow, PyTorch, Scikit-learn, or Keras. ● Experience in deploying AI models in production, including optimizing for performance and scalability. ● Understanding of deep learning, NLP, computer vision, or generative AI techniques. ● Hands-on experience with model fine-tuning, transfer learning, and hyperparameter optimization. ● Strong knowledge of enterprise architecture frameworks (TOGAF, Zachman, etc.). ● Expertise in distributed systems, microservices, and cloud-native architectures. ● Experience in API design, data pipelines, and integration of AI services within existing systems. ● Strong knowledge of databases: MongoDB, SQL, NoSQL. ● Proficiency in working with large-scale datasets, data wrangling, and ETL pipelines. ● Hands-on experience with CI/CD pipelines for AI development. ● Version control systems like Git and experience with ML lifecycle tools such as MLflow or DVC. ● Proven track record of leading AI-driven projects from ideation to deployment. ● Hands-on experience with cloud platforms (AWS, Azure, GCP) for deploying AI solutions. ● Familiarity with Agile methodologies, especially POD-based execution models. ● Strong problem-solving skills and ability to design scalable solutions. ● Excellent communication skills to articulate technical solutions to stakeholders. Preferred Qualifications ● Experience in e-commerce, Adtech or OOH (Out-of-Home) advertising technology. 
● Knowledge of tools like Jira, Confluence, and Agile frameworks like Scrum or Kanban. ● Certification in cloud technologies (e.g., AWS Solutions Architect). Tech Stack ● Programming Languages: Java, Python or C++ ● Frontend Framework: Angular ● Database Technologies: MongoDB, SQL, NoSQL ● Cloud Platform: AWS ● Familiarity with data processing tools like Pandas, NumPy, and big data frameworks (e.g., Hadoop, Spark). ● Experience with cloud platforms for AI (AWS SageMaker, Azure ML, Google Vertex AI). ● Understanding of APIs, microservices, and containerization tools like Docker and Kubernetes. Share your profile to kushpu@movingwalls.com
Posted 1 month ago
0 years
0 Lacs
India
Remote
About Us Evangelist Apps is a UK-based custom software development company specializing in full-stack web and mobile app development, CRM/ERP solutions, workflow automation, and AI-powered platforms. Trusted by global brands like British Airways, Third Bridge, Hästens Beds, and Duxiana, we help clients solve complex business problems with technology. We’re now expanding into AI-driven services and are looking for our first Junior AI Developer to join the team. This is an exciting opportunity to help lay the groundwork for our AI capabilities. Role Overview As our first Junior AI Developer, you’ll work closely with our senior engineers and product teams to research, prototype, and implement AI-powered features across client solutions. You’ll contribute to machine learning models, LLM integrations, and intelligent automation systems that enhance user experiences and internal workflows. Key Responsibilities Assist in building and fine-tuning ML models for tasks like classification, clustering, or NLP Integrate AI services (e.g., OpenAI, Hugging Face, AWS, or Vertex AI) into applications Develop proof-of-concept projects and deploy lightweight models into production Preprocess datasets, annotate data, and evaluate model performance Collaborate with product, frontend, and backend teams to deliver end-to-end solutions Keep up to date with new trends in machine learning and generative AI Must-Have Skills Solid understanding of Python and popular AI/ML libraries (e.g., scikit-learn, pandas, TensorFlow, or PyTorch) Familiarity with foundational ML concepts (e.g., supervised/unsupervised learning, overfitting, model evaluation) Experience with REST APIs and working with JSON-based data Exposure to LLMs or prompt engineering is a plus Strong problem-solving attitude and eagerness to learn Good communication and documentation skills Nice-to-Haves (Good to Learn On the Job) Experience with cloud-based ML tools (AWS Sagemaker, Google Vertex AI, or Azure ML) Basic knowledge of 
MLOps and deployment practices Prior internship or personal projects involving AI or automation Contributions to open-source or Kaggle competitions What We Offer Mentorship from experienced engineers and a high-learning environment Opportunity to work on real-world client projects from day one Exposure to multiple industry domains including expert networks, fintech, healthtech, and e-commerce Flexible working hours and remote-friendly culture Rapid growth potential as our AI practice scales
Posted 1 month ago
3.0 - 7.0 years
7 - 16 Lacs
Hyderābād
On-site
AI Specialist / Machine Learning Engineer Location: On-site (Hyderabad) Department: Data Science & AI Innovation Experience Level: Mid–Senior Reports To: Director of AI / CTO Employment Type: Full-time Job Summary We are seeking a skilled and forward-thinking AI Specialist to join our advanced technology team. In this role, you will lead the design, development, and deployment of cutting-edge AI/ML solutions, including large language models (LLMs), multimodal systems, and generative AI. You will collaborate with cross-functional teams to develop intelligent systems, automate complex workflows, and unlock insights from data at scale. Key Responsibilities Design and implement machine learning models for natural language processing (NLP), computer vision, predictive analytics, and generative AI. Fine-tune and deploy LLMs using frameworks such as Hugging Face Transformers, OpenAI APIs, and Anthropic Claude. Develop Retrieval-Augmented Generation (RAG) pipelines using tools like LangChain, LlamaIndex, and vector databases (e.g., Pinecone, Weaviate, Qdrant). Productionize ML workflows using MLflow, TensorFlow Extended (TFX), or AWS SageMaker Pipelines. Integrate generative AI with business applications, including Copilot-style features, chat interfaces, and workflow automation. Collaborate with data scientists, software engineers, and product managers to build and scale AI-powered products. Monitor, evaluate, and optimize model performance, focusing on fairness, explainability (e.g., SHAP, LIME), and data/model drift. Stay informed on cutting-edge AI research (e.g., NeurIPS, ICLR, arXiv) and evaluate its applicability to business challenges.
Tools & Technologies Languages & Frameworks Python, PyTorch, TensorFlow, JAX FastAPI, LangChain, LlamaIndex ML & AI Platforms OpenAI (GPT-4/4o), Anthropic Claude, Mistral, Cohere Hugging Face Hub & Transformers Google Vertex AI, AWS SageMaker, Azure ML Data & Deployment MLflow, DVC, Apache Airflow, Ray Docker, Kubernetes, RESTful APIs, GraphQL Snowflake, BigQuery, Delta Lake Vector Databases & RAG Tools Pinecone, Weaviate, Qdrant, FAISS ChromaDB, Milvus Generative & Multimodal AI DALL·E, Sora, Midjourney, Runway Whisper, CLIP, SAM (Segment Anything Model) Qualifications Bachelor’s or Master’s in Computer Science, AI, Data Science, or related discipline 3–7 years of experience in machine learning or applied AI Hands-on experience deploying ML models to production environments Familiarity with LLM prompt engineering and fine-tuning Strong analytical thinking, problem-solving ability, and communication skills Preferred Qualifications Contributions to open-source AI projects or academic publications Experience with multi-agent frameworks (e.g., AutoGPT, OpenDevin) Knowledge of synthetic data generation and augmentation techniques Job Type: Permanent Pay: ₹734,802.74 - ₹1,663,085.14 per year Benefits: Health insurance Provident Fund Schedule: Day shift Work Location: In person
Posted 1 month ago
1.0 - 2.0 years
0 Lacs
Hyderābād
On-site
General information Country India State Telangana City Hyderabad Job ID 44779 Department Development Experience Level EXECUTIVE Employment Status FULL_TIME Workplace Type On-site Description & Requirements As an Associate Machine Learning Engineer / Data Scientist, you will contribute to the advancement of research projects in artificial intelligence and machine learning. Your responsibilities will encompass areas such as large language models, image processing, and sentiment analysis. You will work collaboratively with development partners to incorporate AI research into products such as Digital Assistant and Document Capture. Essential Duties: Model Development: Assist in designing and implementing AI/ML models. Contribute to building innovative models and integrating them into existing systems. Fine-tuning Models: Support the fine-tuning of pre-trained models for specific tasks and domains. Ensure models are optimized for accuracy and efficiency. Data Clean-up: Conduct data analysis and pre-processing to ensure the quality and relevance of training datasets. Implement data cleaning techniques. Natural Language Processing (NLP): Assist in the development of NLP tasks like sentiment analysis, text classification, and language understanding. Large Language Models (LLMs): Work with state-of-the-art LLMs and explore their applications in various domains. Support continuous improvement and adaptation of LLMs. Research and Innovation: Stay updated with advancements in AI/ML, NLP, and LLMs. Experiment with new approaches to solve complex problems and improve methodologies. Deployment and Monitoring: Collaborate with DevOps teams to deploy AI/ML models. Implement monitoring mechanisms to track model performance. Documentation: Maintain clear documentation of AI/ML processes, models, and improvements to ensure knowledge sharing and collaboration. 
Basic Qualifications: Experience: 1-2 years of total industry experience, with a minimum of 6 months in ML & Data Science. Educational Background: Bachelor’s or Master’s degree in Computer Science, Electrical Engineering, Mathematics, Statistics or a related field. Specialization or coursework in AI, ML, Statistics & Probability, DL, Computer Vision, Signal Processing, or NLP/NLU is a plus. Programming and Tools: Proficiency in programming languages commonly used in AI and ML, such as Python or R, and querying languages like SQL. Experience in cloud computing infrastructures like AWS SageMaker or Azure ML for implementing ML solutions is highly preferred. Experience with relevant libraries and frameworks, such as scikit-learn, Keras, TensorFlow, PyTorch, or NLTK is a plus. Skills: Problem-solving and analytical skills; good oral and written communication skills. This role offers a great opportunity to work with cutting-edge AI/ML technologies and contribute to innovative projects in a collaborative environment. About Infor Infor is a global leader in business cloud software products for companies in industry specific markets. Infor builds complete industry suites in the cloud and efficiently deploys technology that puts the user experience first, leverages data science, and integrates easily into existing systems. Over 60,000 organizations worldwide rely on Infor to help overcome market disruptions and achieve business-wide digital transformation. For more information visit www.infor.com Our Values At Infor, we strive for an environment that is founded on a business philosophy called Principle Based Management™ (PBM™) and eight Guiding Principles: integrity, stewardship & compliance, transformation, principled entrepreneurship, knowledge, humility, respect, self-actualization. Increasing diversity is important to reflect our markets, customers, partners, and communities we serve in now and in the future. We have a relentless commitment to a culture based on PBM.
Informed by the principles that allow a free and open society to flourish, PBM™ prepares individuals to innovate, improve, and transform while fostering a healthy, growing organization that creates long-term value for its clients and supporters and fulfillment for its employees. Infor is an Equal Opportunity Employer. We are committed to creating a diverse and inclusive work environment. Infor does not discriminate against candidates or employees because of their sex, race, gender identity, disability, age, sexual orientation, religion, national origin, veteran status, or any other protected status under the law. If you require accommodation or assistance at any time during the application or selection processes, please submit a request by following the directions located in the FAQ section at the bottom of the infor.com/about/careers webpage.
Posted 1 month ago
0 years
5 - 15 Lacs
Ahmedabad
On-site
Proficient in Python, Node.js (or Java), and React (preferred). Experience with AWS Services: S3, Lambda, DynamoDB, Bedrock, Textract, RDS, Fargate. Experience in LLM-based application development (LangChain, Bedrock, or OpenAI APIs). Strong in NLP and embeddings (via SageMaker or third-party APIs like Cohere, Hugging Face). Knowledge of vector databases (Pinecone, ChromaDB, OpenSearch, etc.). Familiar with containerization (Docker, ECS/Fargate). Excellent understanding of REST API design and security. Experience handling PDF/image-based document classification. Good SQL and NoSQL skills (MS SQL, MongoDB). Preferred Qualifications: AWS Certified – especially in AI/ML or Developer Associate. Job Types: Full-time, Fresher, Internship Pay: ₹554,144.65 - ₹1,500,000.00 per year Schedule: Day shift Morning shift Supplemental Pay: Performance bonus Ability to commute/relocate: Ahmedabad, Gujarat: Reliably commute or planning to relocate before starting work (Preferred) Work Location: In person
Posted 1 month ago
8.0 - 12.0 years
12 - 22 Lacs
Hyderabad, Secunderabad
Work from Office
Strong knowledge of Python, R, and ML frameworks such as scikit-learn, TensorFlow, PyTorch. Experience with cloud ML platforms: SageMaker, Azure ML, Vertex AI. LLM experience such as GPT. Hands-on experience with data wrangling, feature engineering, and model optimization. Experience developing model wrappers. Deep understanding of algorithms including regression, classification, clustering, NLP, and deep learning. Familiarity with MLOps tools like MLflow, Kubeflow, or Airflow.
Posted 1 month ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
With a startup spirit and 115,000+ curious and courageous minds, we have the expertise to go deep with the world’s biggest brands—and we have fun doing it! We dream in digital, dare in reality, and reinvent the ways companies work to make an impact far bigger than just our bottom line. We’re harnessing the power of technology and humanity to create meaningful transformation that moves us forward in our pursuit of a world that works better for people. Now, we’re calling upon the thinkers and doers, those with a natural curiosity and a hunger to keep learning, keep growing. People who thrive on fearlessly experimenting, seizing opportunities, and pushing boundaries to turn our vision into reality. And as you help us create a better world, we will help you build your own intellectual firepower. Welcome to the relentless pursuit of better. Inviting applications for the role of AI Senior Engineer. In this role you’ll be leveraging Azure’s or AWS’s advanced AI capabilities, including Azure Machine Learning, Azure OpenAI, Prompt Flow, Azure Cognitive Search, Azure AI Document Intelligence, AWS SageMaker, and AWS Bedrock, to deliver scalable and efficient solutions. You will also ensure seamless integration into enterprise workflows and operationalize models with robust monitoring and optimization. Responsibilities AI Orchestration - Design and manage AI orchestration flows using tools such as Prompt Flow or LangChain; continuously evaluate and refine models to ensure optimal accuracy, latency, and robustness in production. Document AI and Data Extraction - Build AI-driven workflows for extracting structured and unstructured data from receipts, reports, and other documents using Azure AI Document Intelligence and Azure Cognitive Services.
RAG Systems - Design and implement retrieval-augmented generation (RAG) systems using vector embeddings and LLMs for intelligent and efficient document retrieval; Optimize RAG workflows for large datasets and low-latency operations. Monitoring and Optimization - Implement advanced monitoring systems using Azure Monitor, Application Insights, and Log Analytics to track model performance and system health; Continuously evaluate and refine models and workflows to meet enterprise-grade SLAs for performance and reliability. Collaboration and Documentation - Collaborate with data engineers, software developers, and DevOps teams to deliver robust and scalable AI-driven solutions; Document best practices, workflows, and troubleshooting guides for knowledge sharing and scalability. Qualifications we seek in you Proven experience with Machine Learning, Azure OpenAI, Prompt Flow, Azure Cognitive Search, Azure AI Document Intelligence, AWS Bedrock, SageMaker; Proficiency in building and optimizing RAG systems for document retrieval and comparison. Strong understanding of AI/ML concepts, including natural language processing (NLP), embeddings, model fine-tuning, and evaluation; Experience in applying machine learning algorithms and techniques to solve complex problems in real-world applications; Familiarity with state-of-the-art LLM architectures and their practical implementation in production environments; Expertise in designing and managing Prompt Flow pipelines for task-specific customization of LLM outputs. Hands-on experience in training LLMs and evaluating their performance using appropriate metrics for accuracy, latency, and robustness; Proven ability to iteratively refine models to meet specific business needs and optimize them for production environments. Knowledge of ethical AI practices and responsible AI frameworks.
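The RAG design work described above boils down to two steps: retrieve the most relevant chunks, then assemble them into the prompt an LLM receives. A hedged sketch of that flow, where the word-overlap scoring function is a toy stand-in for real vector-embedding similarity and the final model call is omitted (a real system would use Azure Cognitive Search or Bedrock plus an actual model endpoint):

```python
# Toy RAG retrieval step: score chunks against a query, take the top k,
# and build the augmented prompt. The relevance score here is naive word
# overlap, standing in for embedding-based similarity search.
chunks = [
    "Azure Monitor tracks model latency and errors.",
    "RAG combines retrieval with LLM generation.",
    "Invoices are parsed with Document Intelligence.",
]

def score(query, chunk):
    # Hypothetical relevance: shared lowercase words between query and chunk.
    return len(set(query.lower().split()) & set(chunk.lower().split()))

def build_prompt(query, k=2):
    # Keep only the k most relevant chunks as grounding context.
    top = sorted(chunks, key=lambda c: score(query, c), reverse=True)[:k]
    context = "\n".join(top)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("How does RAG use retrieval with an LLM?")
```

Limiting the context to the top-k chunks is also what keeps latency and token cost bounded on large datasets, which is the optimization concern the listing raises.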
Experience with CI/CD pipelines using Azure DevOps or equivalent tools; Familiarity with containerized environments managed through Docker and Kubernetes. Knowledge of Azure Key Vault, Managed Identities, and Azure Active Directory (AAD) for secure authentication. Experience with PyTorch or TensorFlow. Proven track record of developing and deploying Azure-based AI solutions for large-scale, enterprise-grade environments. Strong analytical and problem-solving skills, with a results-driven approach to building scalable and secure systems. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values diversity and inclusion, respect and integrity, customer focus, and innovation. For more information, visit www.genpact.com . Follow us on Twitter, Facebook, LinkedIn, and YouTube. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
Posted 1 month ago
5.0 years
0 Lacs
India
Remote
Hi Everyone
Role: Senior Data Scientist - AWS (SageMaker, MLOps)
Shift: 12 PM to 9 PM IST (8 hours)
Experience: 5+ years
Position Type: Remote & Contractual
JD:
Primary Skills: MLOps, MLflow, AWS, AWS SageMaker, AWS Data Zone, programming languages
Secondary Skills: Python, R, Scala, integration
Job Description:
Mandatory - SageMaker, MLflow. 5+ years of work experience in Software Engineering and MLOps. Adhere to best practices for developing scalable, reliable, and secure applications. Development experience on AWS and AWS SageMaker required; AWS Data Zone experience is preferred. Experience with one or more general-purpose programming languages including but not limited to Python, R, Scala, Spark. Experience with production-grade development, integration, and support. Candidate with a good analytical mindset who will help us with research in the MLOps area.
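The MLflow-style experiment tracking this role centers on can be shown with a toy in-memory tracker. This is an illustrative stand-in, not the real mlflow API: it only demonstrates the pattern MLOps pipelines rely on, logging parameters and metrics per run and selecting the best run afterward.

```python
# Toy experiment tracker (hypothetical, standing in for MLflow's tracking
# server): each run records its hyperparameters and observed metrics.
class RunTracker:
    def __init__(self):
        self.runs = []

    def log_run(self, params, metrics):
        # Persist one run's hyperparameters and metrics.
        self.runs.append({"params": params, "metrics": metrics})

    def best_run(self, metric, maximize=True):
        # Select the run with the best value of the given metric.
        pick = max if maximize else min
        return pick(self.runs, key=lambda r: r["metrics"][metric])

tracker = RunTracker()
tracker.log_run({"lr": 0.1}, {"accuracy": 0.91})
tracker.log_run({"lr": 0.01}, {"accuracy": 0.94})
best = tracker.best_run("accuracy")
print(best["params"])  # → {'lr': 0.01}
```

With real MLflow the same pattern runs against a tracking server, so runs are comparable across machines and reproducible from their logged parameters.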
Posted 1 month ago