0.0 - 4.0 years
0 Lacs
karnataka
On-site
You should have a Bachelor's degree in Computer Science, Information Technology, or a related field. A strong understanding of operating systems, networking basics, and Linux command-line usage is essential, along with proficiency in at least one scripting language such as Python or Bash. Basic knowledge of cloud computing concepts (AWS preferred) is expected, and familiarity with DevOps principles such as CI/CD, automation, and cloud infrastructure management is a plus. Awareness of version control systems like Git is necessary.

It would be beneficial to have exposure to cloud platforms (preferably AWS) and infrastructure services such as EC2, S3, RDS, and Kubernetes. An understanding of Infrastructure as Code concepts and knowledge of Terraform would be advantageous, as would basic knowledge of CI/CD tools like GitLab or Azure DevOps, awareness of monitoring and observability tools such as New Relic and Grafana, and familiarity with containerization, automation, or data/ML infrastructure tools such as Docker, Ray, Dagster, and Weights & Biases. Exposure to scripting for automation and ops workflows using Python is desired.

Joining Sanas will allow you to gain real-world experience in managing cloud infrastructure, including AWS, Azure, and a COLO datacenter. You will work on infrastructure automation using Terraform and Python, CI/CD pipeline development and management with GitLab and Spinnaker, and observability and monitoring with tools like New Relic, Grafana, and custom alerting mechanisms. You will also have the opportunity to work with cutting-edge ML/AI infrastructure tools like Ray, Dagster, and W&B, and data analytics tools such as ClickHouse and Aurora PostgreSQL. Additionally, you will learn about agile delivery models and collaborate with Engineering, Science, InfoSec, and ML teams.
We offer hands-on experience with modern DevOps practices and enterprise cloud architecture, mentorship from experienced DevOps engineers, exposure to scalable infrastructure supporting production-grade AI and ML workloads, and the opportunity to contribute to automation, reliability, and security. You will participate in occasional on-call rotations to maintain system availability, in a collaborative, fast-paced learning environment where your work directly supports engineering and innovation.
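The Python scripting for ops workflows and the custom alerting mechanisms mentioned in this posting often start with small scripts like the following: a minimal, stdlib-only sketch of an alerting decision over a rolling window of health-check results. The function name, window, and thresholds are illustrative assumptions, not part of the posting.

```python
def should_alert(samples, error_threshold=0.05, min_samples=5):
    """Alert when the error rate over recent probes exceeds the threshold."""
    if len(samples) < min_samples:
        return False  # too few probes to make a call
    errors = sum(1 for ok in samples if not ok)
    return errors / len(samples) > error_threshold

# Rolling window of health-check results (True = healthy probe).
window = [True, True, False, True, False, True, True, True, True, True]
print(should_alert(window))  # 2 failures in 10 probes: rate 0.2 > 0.05
```

In practice the window would be fed by real probes and the alert routed to a tool like New Relic or Grafana; the decision logic stays this simple.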
Posted 6 days ago
10.0 - 14.0 years
0 Lacs
karnataka
On-site
As an Applied AI/GenAI ML Director within the Asset and Wealth Management Technology Team at JPMorgan Chase, you will provide deep engineering expertise and work across agile teams to enhance, build, and deliver trusted, market-leading technology products in a secure, stable, and scalable way. You will leverage your deep expertise to consistently challenge the status quo, innovate for business impact, lead the strategic development behind new and existing products and technology portfolios, and remain at the forefront of industry trends, best practices, and technological advances. This role will focus on establishing and nurturing common capabilities, best practices, and reusable frameworks, creating a foundation for AI excellence that accelerates innovation and consistency across business functions.

Your responsibilities will include establishing and promoting a library of common ML assets, including reusable ML models, feature stores, data pipelines, and standardized templates. You will lead efforts to create shared tools and platforms that streamline the end-to-end ML lifecycle across the organization. Additionally, you will create curated solutions using GenAI workflows through advanced proficiency in large language models (LLMs) and related techniques, and gain experience creating a Generative AI evaluation and feedback loop for GenAI/ML pipelines. You will advise on the strategy and development of multiple products, applications, and technologies, serving as a lead advisor on the technical feasibility and business need for AI/ML use cases. Furthermore, you will liaise with firm-wide AI/ML stakeholders, translating highly complex technical issues, trends, and approaches for leadership to drive the firm's innovation and enable leaders to make strategic, well-informed decisions about technology advancements.
You will also influence across business, product, and technology teams and successfully manage senior stakeholder relationships, championing the firm's culture of diversity, opportunity, inclusion, and respect.

To be successful in this role, you must have formal training or certification in Machine Learning concepts and at least 10 years of applied experience, along with 5+ years of experience leading technologists to manage, anticipate, and solve complex technical problems within your domain of expertise. An MS and/or PhD in Computer Science, Machine Learning, or a related field is required, as well as at least 10 years of experience in a programming language such as Python, Java, or C/C++, with intermediate Python skills a must. You should have a solid understanding of ML techniques, especially in Natural Language Processing (NLP) and Large Language Models (LLMs), hands-on experience with machine learning and deep learning methods, and the ability to take system design from ideation through completion with limited supervision. Practical cloud-native experience such as AWS is necessary, along with good communication skills, a passion for detail and follow-through, and the ability to work effectively with engineers, product managers, and other ML practitioners.

Preferred qualifications for this role include experience with Ray, MLflow, and/or other distributed training frameworks; an in-depth understanding of embedding-based search/ranking, recommender systems, graph techniques, and other advanced methodologies; advanced knowledge in Reinforcement Learning or Meta Learning; and a deep understanding of Large Language Model (LLM) techniques, including Agents, Planning, Reasoning, and related methods. Experience building and deploying ML models on cloud platforms such as AWS, and with AWS tools like SageMaker and EKS, is also desirable.
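The GenAI evaluation and feedback loop this posting mentions can be pictured as scoring model outputs against references and gating on a quality bar. The sketch below is a toy, stdlib-only stand-in: exact-match scoring and the 0.8 bar are illustrative assumptions, not a production metric or the firm's method.

```python
def exact_match_rate(outputs, references):
    """Fraction of model outputs that exactly match the reference answers."""
    hits = sum(1 for o, r in zip(outputs, references) if o.strip() == r.strip())
    return hits / len(references)

def feedback(rate, quality_bar=0.8):
    """Gate the pipeline on a minimum quality bar."""
    return "ship" if rate >= quality_bar else "needs review"

outputs = ["42", "Paris", "blue"]
references = ["42", "Paris", "red"]
print(feedback(exact_match_rate(outputs, references)))  # 2/3 < 0.8
```

Real evaluation loops replace exact match with task-appropriate metrics (LLM-as-judge, BLEU, human feedback), but the gate-and-iterate shape is the same.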
Posted 1 week ago
9.0 - 14.0 years
22 - 32 Lacs
Bengaluru
Hybrid
NOTE: We are only looking for candidates who can join immediately or within 15 days.

Experience level: 6+ years
Location: Bangalore, Hyderabad, Chennai (candidates currently in one of these 3 locations can apply)

Job Summary: We are seeking a highly skilled and experienced Senior Big Data + Python Developer to join our team in Bangalore. The ideal candidate will have a strong background in distributed computing, data engineering, and systems design. You will work on building scalable data pipelines, integrating with modern data platforms, and contributing to architectural decisions. Prior experience with AI/ML platforms is a plus but not mandatory.

Key Responsibilities:
- Design, develop, and maintain large-scale data processing pipelines using Python, Spark, and Ray.
- Manage and optimize workflows with Airflow.
- Work with Apache Hive, Iceberg, and Druid for efficient data storage, querying, and real-time analytics.
- Deploy, monitor, and manage containerized applications using Kubernetes.
- Develop dashboards and visualizations using Apache Superset or similar tools.
- Collaborate with architects and data scientists to design high-performance systems.
- Ensure data quality, scalability, and performance in all solutions delivered.
- Participate in code reviews and provide technical mentorship to junior team members.

Required Skill Set:
- Strong programming experience with Python.
- Hands-on experience with Ray, Apache Spark, and Hive.
- Solid understanding of Apache Iceberg for data lake management.
- Experience working with Kubernetes for container orchestration.
- Workflow orchestration with Apache Airflow.
- Familiarity with Apache Druid for real-time analytics.
- Expertise in data visualization tools such as Superset.
- Strong knowledge of data architecture and design patterns.
- Proven track record of delivering scalable and production-grade systems.

Preferred Qualifications:
- Exposure to AI/ML workflows or platforms.
- Experience in system and data architecture design.
- Contributions to open-source or published technical blogs/articles.
- Excellent problem-solving and analytical skills.
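The staged data pipelines this role centers on (ingest, transform, aggregate) share a common shape regardless of whether the stages run as Airflow tasks over Spark or Ray. The stdlib-only sketch below shows that shape; the stage names and sample rows are illustrative assumptions, not from the posting.

```python
from functools import reduce

def ingest():
    """Stand-in for a source read (e.g. Hive or Iceberg table scan)."""
    return [{"user": "a", "amount": 10}, {"user": "b", "amount": 5},
            {"user": "a", "amount": 7}]

def clean(rows):
    """Drop invalid records before aggregation."""
    return [r for r in rows if r["amount"] > 0]

def aggregate(rows):
    """Per-user totals, the kind of rollup Spark would do at scale."""
    totals = {}
    for r in rows:
        totals[r["user"]] = totals.get(r["user"], 0) + r["amount"]
    return totals

def run_pipeline(stages, seed):
    # Each stage consumes the previous stage's output, like task deps in a DAG.
    return reduce(lambda data, stage: stage(data), stages, seed)

print(run_pipeline([clean, aggregate], seed=ingest()))  # {'a': 17, 'b': 5}
```

In production each stage becomes an independently retryable, monitored task; composing them as pure functions first keeps them testable.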
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
maharashtra
On-site
As an SDE-3 Backend at Crimson Enago, you will lead a team of web developers and play a major role in end-to-end project development and delivery. Your responsibilities will include setting engineering examples, hiring and training the team, and coding 100% of the time. You will collaborate with the Engineering Manager, Principal Engineer, SDE-3 leads, and Technical Project Manager to ensure successful project outcomes. The ideal candidate for this position is an SDE-3 Backend with over 5 years of enterprise backend web experience, particularly in the NodeJS-AWS stack. You should possess excellent research skills to address complex business problems, experience in unit and integration testing, and a commitment to maintaining highly performant and testable code following software design patterns. Your role will involve designing optimized scalable solutions, breaking down complex problems into manageable tasks, and conducting code reviews to ensure quality and efficiency. In addition to your technical skills, you should have a strong background in backend technologies such as Postgres, MySQL, MongoDB, Redis, Kafka, Docker, Kubernetes, and CI/CD. Experience with AWS technologies like Lambda functions, DynamoDB, and SQS is essential. You should be well-versed in HTML5, CSS3, and CSS frameworks, and prioritize developer tooling, testing, monitoring, and observability in your work. Collaboration and effective communication with team members, product managers, and stakeholders are key aspects of this role. If you have a proven track record of architecting cost-efficient and scalable solutions, backend development experience, and a passion for delivering customer value, we encourage you to apply. Experience with Elasticsearch server cluster optimization and Apache Spark/Ray would be an added advantage. Join us at Crimson Enago to revolutionize the research industry and make a positive impact on the world through innovative technology and collaborative team efforts. 
For more information, visit our websites:
- Trinka: www.trinka.ai
- RAx: www.raxter.io
- Crimson Enago: www.crimsoni.com
Posted 1 week ago
7.0 - 11.0 years
0 Lacs
maharashtra
On-site
At Crimson Enago, we are dedicated to developing AI-powered tools and services that enhance the productivity of researchers and professionals. Through our flagship products Trinka and RAx, we aim to streamline the stages of knowledge discovery, acquisition, creation, and dissemination. Trinka is an AI-driven English grammar checker and writing assistant tailored for academic and technical writing. Crafted by linguists, scientists, and language enthusiasts, Trinka identifies and rectifies numerous intricate writing errors, including contextual spelling mistakes, advanced grammar issues, and vocabulary enhancements in real-time. Moreover, Trinka offers writing suggestions to ensure professional, concise, and engaging content. With subject-specific corrections, Trinka guarantees that the writing aligns with the subject matter, and its Enterprise solutions provide unlimited access and customizable features. RAx is a smart workspace designed to support researchers, including students, professors, and corporate researchers, in their projects. Powered by proprietary AI algorithms, RAx serves as an integrated workspace for research endeavors, connecting various sources of information to user behaviors such as reading, writing, annotating, and discussions. This synergy reveals new insights and opportunities in the academic realm, revolutionizing traditional research practices. Our team comprises passionate researchers, engineers, and designers united by the vision of transforming research-intensive projects. By alleviating cognitive burdens and facilitating the conversion of information into knowledge, we strive to simplify and enrich the research process. With a focus on scalability, data processing, AI integration, and global user interactions, our engineering team aims to empower individuals worldwide in their research pursuits. 
As a Principal Engineer Fullstack at Trinka, you will lead a team of web developers, driving top-tier engineering standards and overseeing end-to-end project development and delivery. Collaborating with the Engineering Manager, Principal Engineer, SDE-3 leads, and Technical Project Manager, you will play a pivotal role in team management, recruitment, and training. Your primary focus will be hands-on coding, constituting a significant portion of your daily responsibilities.

Ideal candidates for this role at Trinka possess over 7 years of enterprise frontend-full-stack web experience, with expertise in the AngularJS-Java-AWS stack. Key characteristics we value include exceptional research skills, a commitment to testing and code quality, a penchant for scalable solutions, adeptness at project estimation and communication, and proficiency in cloud infrastructure optimization. Additionally, candidates should exhibit a keen eye for detail, a passion for user experience excellence, and a collaborative spirit essential for high-impact project delivery.

Experience requirements for this role encompass proven expertise in solution architecting, frontend-full-stack development, backend technologies, AWS services, HTML5, CSS3, CSS frameworks, developer workflows, testing practices, and collaborative software engineering. A deep-rooted interest in profiling, impact analysis, root cause analysis, and Elasticsearch server cluster optimization is advantageous, reflecting a holistic approach to software development and problem-solving.

Join us at Crimson Enago and be part of a dynamic team committed to reshaping research practices and empowering professionals worldwide with innovative tools and services.
Posted 1 week ago
2.0 - 6.0 years
0 Lacs
vadodara, gujarat
On-site
As a Machine Learning Engineer, you will be responsible for designing and implementing scalable machine learning models throughout the entire lifecycle, from data preprocessing to deployment. Your role will involve leading feature engineering and model optimization efforts to enhance performance and accuracy. Additionally, you will build and manage end-to-end ML pipelines using MLOps practices, ensuring seamless deployment, monitoring, and maintenance of models in production environments.

Collaboration with data scientists and product teams will be key in understanding business requirements and translating them into effective ML solutions. You will conduct advanced data analysis, create visualization dashboards for insights, and maintain detailed documentation of models, experiments, and workflows. Moreover, mentoring junior team members on best practices and technical skills will be part of your responsibilities to foster growth within the team.

In terms of required skills, you must have at least 3 years of experience in machine learning development, with a focus on the end-to-end model lifecycle. Proficiency in Python using Pandas, NumPy, and Scikit-learn for advanced data handling and feature engineering is crucial. Strong hands-on expertise in TensorFlow or PyTorch for deep learning model development is also a must-have.

Desirable skills include experience with MLOps tools like MLflow or Kubeflow for model management and deployment, familiarity with big data frameworks such as Spark or Dask, and exposure to cloud ML services like AWS SageMaker or GCP AI Platform. Additionally, working knowledge of Weights & Biases and DVC for experiment tracking and versioning, as well as experience with Ray or BentoML for distributed training and model serving, will be considered advantageous. Join our team and contribute to cutting-edge machine learning projects while continuously improving your skills and expertise in a collaborative and innovative environment.
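The experiment-tracking discipline behind tools like MLflow and Weights & Biases, both named in this posting, reduces to logging parameters and metrics per run and querying for the best one. The `Tracker` class below is a hypothetical stdlib-only stand-in for that idea, not any real library's API.

```python
class Tracker:
    """Toy experiment tracker: one record per training run."""

    def __init__(self):
        self.runs = []

    def log_run(self, params, metrics):
        self.runs.append({"params": params, "metrics": metrics})

    def best_run(self, metric, maximize=True):
        """Return the run with the best value of the given metric."""
        pick = max if maximize else min
        return pick(self.runs, key=lambda r: r["metrics"][metric])

tracker = Tracker()
tracker.log_run({"lr": 0.1}, {"accuracy": 0.81})
tracker.log_run({"lr": 0.01}, {"accuracy": 0.87})
print(tracker.best_run("accuracy")["params"])  # {'lr': 0.01}
```

Real trackers add persistence, artifact storage, and UI, but the log-then-query model is the core of reproducible experimentation.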
Posted 2 weeks ago
8.0 - 12.0 years
0 Lacs
karnataka
On-site
You are a seasoned Staff Data Scientist with over 8 years of experience in Data Science/ML, specializing in Generative AI, Large Language Models (LLMs), and deep learning. Your role at PhysicsWallah involves leading and driving innovation in Generative AI, LLMs, Retrieval-Augmented Generation (RAG), Reinforcement Learning, and model fine-tuning. You will collaborate closely with cross-functional teams to create next-gen AI solutions that enhance content generation, adaptive learning, and personalized experiences for millions of learners.

As a Staff Data Scientist at PhysicsWallah, your key responsibilities include designing and developing advanced AI/ML solutions using LLMs, RAG, and RL techniques; fine-tuning and customizing models for PW-specific education scenarios; building scalable pipelines for training and deployment of GenAI models; integrating models into real-world applications; conducting experiments and testing to enhance model performance; staying updated on GenAI trends to influence internal strategies; and mentoring junior data scientists while promoting best practices within the AI team.

You should possess expertise in Retrieval-Augmented Generation (RAG) architectures and transformer-based models like BERT, GPT, LLaMA, and Mistral; hands-on experience in Reinforcement Learning with a focus on RLHF or similar approaches; proficiency in Python and ML frameworks such as PyTorch, TensorFlow, HuggingFace, LangChain, and Ray; strong knowledge of cloud platforms like AWS/GCP and MLOps for scalable AI model deployment; a track record of deploying ML models in production environments; excellent communication skills; and the ability to work in cross-functional environments. Preferred qualifications for this role include published papers, patents, or contributions to open-source GenAI projects; experience in edtech or adaptive learning platforms; and exposure to vector databases like FAISS, Pinecone, and Weaviate, as well as semantic search systems.
Join PhysicsWallah and be part of a mission-driven organization that aims to make quality education accessible to all. You will have the opportunity to work on cutting-edge GenAI challenges with significant real-world impact in a collaborative and fast-paced environment that offers immense learning opportunities. Shape the future of education with GenAI at PhysicsWallah by applying now.
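The RAG architectures this role emphasizes hinge on a retrieval step: ranking stored passages by similarity to a query embedding. A stdlib-only toy of that step might look like the following; the passage names and hard-coded vectors are illustrative stand-ins for real embedding-model outputs, and production systems would use a vector database like FAISS or Pinecone instead.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Pretend embeddings for three course passages.
passages = {
    "thermo": [0.9, 0.1, 0.0],
    "optics": [0.1, 0.8, 0.2],
    "algebra": [0.0, 0.2, 0.9],
}
query = [0.85, 0.15, 0.05]  # pretend embedding of a heat-transfer question

best = max(passages, key=lambda name: cosine(query, passages[name]))
print(best)  # thermo
```

The retrieved passage is then injected into the LLM prompt; retrieval quality, not generation, is usually where RAG systems succeed or fail.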
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
hyderabad, telangana
On-site
You will be responsible for designing architectures for meta-learning, self-reflective agents, and recursive optimization loops. Your role will involve building simulation frameworks for behavior grounded in Bayesian dynamics, attractor theory, and teleo-dynamics. Additionally, you will develop systems that integrate graph rewriting, knowledge representation, and neurosymbolic reasoning. Conducting research on fractal intelligence structures, swarm-based agent coordination, and autopoietic systems will be part of your responsibilities. You are expected to advance Mobius's knowledge graph with ontologies supporting logic, agency, and emergent semantics. Integration of logic into distributed, policy-scoped decision graphs aligned with business and ethical constraints is crucial. Furthermore, publishing cutting-edge results and mentoring contributors in reflective system design and emergent AI theory will be part of your duties. Lastly, building scalable simulations of multi-agent, goal-directed, and adaptive ecosystems within the Mobius runtime is an essential aspect of the role. In terms of qualifications, you should have proven expertise in meta-learning, recursive architectures, and AI safety. Proficiency in distributed systems, multi-agent environments, and decentralized coordination is necessary. Strong implementation skills in Python are required, with additional proficiency in C++, functional, or symbolic languages being a plus. A publication record in areas intersecting AI research, complexity science, and/or emergent systems is also desired. 
Preferred qualifications include experience with neurosymbolic architectures and hybrid AI systems; fractal modeling, attractor theory, and complex adaptive dynamics; topos theory, category theory, and logic-based semantics; knowledge ontologies, OWL/RDF, and semantic reasoners; autopoiesis, teleo-dynamics, and biologically inspired system design; and swarm intelligence, self-organizing behavior, emergent coordination, and distributed learning systems.

In terms of technical proficiency, you should be proficient in:
- Programming languages: Python (required); C++, Haskell, Lisp, or Prolog (preferred for symbolic reasoning)
- Frameworks: PyTorch and TensorFlow
- Distributed systems: Ray, Apache Spark, Dask, Kubernetes
- Knowledge technologies: Neo4j, RDF, OWL, SPARQL
- Experiment management: MLflow, Weights & Biases
- GPU and HPC systems: CUDA, NCCL, Slurm
- Formal modeling tools (familiarity beneficial): Z3, TLA+, Coq, Isabelle

Your core research domains will include recursive self-improvement and introspective AI; graph theory, graph rewriting, and knowledge graphs; neurosymbolic systems and ontological reasoning; fractal intelligence and dynamic attractor-based learning; Bayesian reasoning under uncertainty and cognitive dynamics; swarm intelligence and decentralized consensus modeling; topos theory and the abstract structure of logic spaces; autopoietic, self-sustaining system architectures; and teleo-dynamics and goal-driven adaptation in complex systems.
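One concrete instance of the decentralized consensus modeling and swarm coordination this posting names is iterative neighbor averaging: agents on a ring repeatedly average with their neighbors until their values converge to the group mean. The topology, values, and round count below are illustrative assumptions for a toy demonstration.

```python
def consensus_round(values):
    """One synchronous round: each agent averages with its ring neighbors."""
    n = len(values)
    return [(values[(i - 1) % n] + values[i] + values[(i + 1) % n]) / 3
            for i in range(n)]

values = [0.0, 4.0, 8.0, 4.0]  # initial local estimates of 4 agents
for _ in range(50):
    values = consensus_round(values)
print([round(v, 3) for v in values])  # all agents converge near the mean 4.0
```

Because the averaging matrix is doubly stochastic, the mean is preserved each round and the deviation shrinks geometrically, which is the basic mechanism behind gossip and swarm-consensus protocols.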
Posted 2 weeks ago
8.0 - 12.0 years
0 Lacs
karnataka
On-site
As a Staff Data Scientist at PhysicsWallah (PW), you will play a pivotal role in driving innovation in Generative AI and advanced Machine Learning (ML) to enhance personalized and scalable learning outcomes for students across Bharat. With your expertise and experience in Generative AI, Large Language Models (LLMs), Retrieval-Augmented Generation (RAG), Reinforcement Learning, and model fine-tuning, you will lead the design and development of cutting-edge AI solutions that power intelligent content generation, adaptive learning, and personalized experiences for millions of learners. Your responsibilities will include collaborating with cross-functional teams to create next-gen AI solutions using LLMs, RAG, and RL techniques. You will fine-tune and customize models for PW-specific education use cases, architect scalable pipelines for model training and deployment, integrate models into real-world applications, conduct rigorous experimentation and testing to enhance model performance, and stay updated with the latest trends in AI frameworks and research. To excel in this role, you will need a minimum of 8 years of hands-on experience in Data Science/ML, with a recent focus on GenAI, LLMs, and deep learning. Proficiency in RAG architectures, transformer-based models, Reinforcement Learning, Python, ML frameworks such as PyTorch and TensorFlow, cloud platforms like AWS/GCP, and MLOps for scalable AI model deployment is essential. Your track record of shipping ML models into production environments, strong communication skills, and ability to work in cross-functional settings will be crucial. 
If you have published papers, patents, or contributions to open-source GenAI projects, experience in edtech or adaptive learning platforms, exposure to vector databases and semantic search systems, and are driven by a passion to make quality education accessible to all, joining PW will give you the opportunity to work on cutting-edge GenAI problems with large-scale real-world impact, in a collaborative and fast-paced environment that offers immense learning opportunities. Shape the future of education with GenAI at PhysicsWallah: apply now and contribute to our mission of democratizing learning and transforming the way students access affordable, high-quality education.
Posted 2 weeks ago
5.0 - 10.0 years
2 - 7 Lacs
Chennai
Work from Office
Roles and Responsibilities:
- Assist the Design Architect by creating detailed AutoCAD/Revit drawing sets.
- Research and source material specifications.
- Create 3D models and renderings.
- Visit sites and supervise work.
- Coordinate with vendors and contractors.

Desired Candidate Profile:
- Graduated from an Architecture / Interior Design program (we are currently not accepting applications from other programs such as Engineering).
- Well-versed with AutoCAD.
- Has interned/worked for a period of 6-24 months.
- Passionate and detail-oriented, with excellent presentation skills.
- Able to confirm orders and projects, and take responsibility for an entire project drawing set.
- Eager to learn new things; we always want to grow as a company and encourage bringing new ideas to the table.
- Able to take responsibility and work to meet client deadlines. (We do not encourage an after-office-hours culture, but some days we may have deliverables that need to be completed within a timeframe.)
Posted 1 month ago
0.0 - 5.0 years
0 - 12 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
IBM Research is the innovation and growth engine of the IBM corporation. It is the largest industrial research organization in the world, with 12 labs on 6 continents. IBM Research produces more breakthroughs than any other organization in the world (more than 9 patents are produced every day) and employs over 3,200 researchers worldwide.

IBM Research India (IRL) is the leading industrial research lab in India, shaping the future of computing across AI, Hybrid Cloud, and Quantum Computing. IRL has a long legacy of ground-breaking innovation in computer science and its applications to a wide variety of disciplines and offerings for IBM. IRL researchers are working on projects that push the state of the art across foundation models, optimized runtime stacks for FM workloads such as tuning, large-scale data engineering and pre-training, multi-accelerator model optimization, agentic workflows, and modalities across language, code, time series, IT automation, and geospatial. We are strong proponents of open-source, community-driven software and model development, and our work spans a wide spectrum from research collaborations with academia to developing enterprise-grade commercial software.

Your role and responsibilities: The Research Engineer position at IBM India Research Lab is a challenging, dynamic, and highly innovative role. Some of our current areas of work where we are actively looking for top talent are:
- Optimized runtime stacks for foundation model workloads, including fine-tuning, inference serving, and large-scale data engineering, with a focus on multi-stage tuning including reinforcement learning, inference-time compute, and data preparation for complex AI systems.
- Optimizing models to run on multiple accelerators, including IBM's AIU accelerator, leveraging compiler optimizations, specialized kernels, libraries, and tools.
- Developing use cases that effectively leverage the infrastructure and models to deliver value.
- Pre-training language and multi-modal foundation models: working with large-scale distributed training procedures, model alignment, creating specialized pipelines for various tasks (including effective LLM-generated data pipelines), creating frameworks for collecting human data, and deploying models in user-centric platforms.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise (you should have one or more of the following):
- A master's degree in computer science, AI, or related fields from a top institution
- 0-8 years of experience working with modern ML techniques, including but not limited to model architectures, data processing, fine-tuning techniques, reinforcement learning, distributed training, and inference optimizations
- Experience with big data platforms like Ray and Spark
- Experience working with PyTorch FSDP and HuggingFace libraries
- Programming experience in one of the following: Python, web development technologies
- Growth mindset and a pragmatic attitude

Preferred technical and professional experience:
- Peer-reviewed research at top machine learning or systems conferences
- Experience working with torch.compile, CUDA, Triton kernels, GPU scheduling, memory management
- Experience working with open-source communities
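One small, self-contained piece of the fine-tuning stacks this posting describes is the learning-rate schedule. The sketch below implements linear warmup followed by cosine decay, a shape commonly used in large-scale training runs; the base rate, warmup length, and step counts are illustrative assumptions, and real runs would let the framework (e.g. a PyTorch scheduler) manage this.

```python
import math

def lr_at(step, total_steps, base_lr=3e-4, warmup=10):
    """Learning rate at a given step: linear warmup, then cosine decay."""
    if step < warmup:
        return base_lr * (step + 1) / warmup  # ramp up from base_lr/warmup
    progress = (step - warmup) / max(1, total_steps - warmup)
    return 0.5 * base_lr * (1 + math.cos(math.pi * progress))  # decay to 0

print(round(lr_at(0, 100), 6))    # warmup start, well below base_lr
print(round(lr_at(100, 100), 6))  # end of schedule, decayed to 0.0
```

Warmup avoids early instability when weights are random; cosine decay lets the run settle into a minimum without a hard cutoff.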
Posted 1 month ago
5.0 - 10.0 years
3 - 7 Lacs
Hyderabad / Secunderabad, Telangana, Telangana, India
On-site
Responsibilities:
- Designing, implementing, and optimizing CI/CD pipelines for cloud and hybrid environments.
- Integrating AI-driven pipeline automation for self-healing deployments and predictive troubleshooting.
- Leveraging GitOps (ArgoCD, Flux, Tekton) for declarative infrastructure management.
- Implementing progressive delivery strategies (Canary, Blue-Green, Feature Flags).
- Containerizing applications using Docker and Kubernetes (EKS, AKS, GKE, OpenShift, or on-prem clusters).
- Optimizing service orchestration and networking with service meshes (Istio, Linkerd, Consul).
- Implementing AI-enhanced observability for containerized services using AIOps-based monitoring.
- Automating provisioning with Terraform, CloudFormation, Pulumi, or CDK.
- Supporting and optimizing distributed computing workloads, including Apache Spark, Flink, or Ray.
- Using GenAI-driven copilots for DevOps automation, including scripting, deployment verification, and infrastructure recommendations.

The Impact You Will Have:
- Enhancing the efficiency and reliability of CI/CD pipelines and deployments.
- Driving the adoption of AI-driven automation to reduce downtime and improve system resilience.
- Enabling seamless application portability across on-prem and cloud environments.
- Implementing advanced observability solutions to proactively detect and resolve issues.
- Optimizing resource allocation and job scheduling for distributed processing workloads.
- Contributing to the development of intelligent DevOps solutions that support both traditional and AI-driven workloads.

What You'll Need:
- 5+ years of experience in DevOps, Cloud Engineering, or SRE.
- Hands-on expertise with CI/CD pipelines (Jenkins, GitHub Actions, GitLab CI, ArgoCD, Tekton, etc.).
- Strong experience with Kubernetes, container orchestration, and service meshes.
- Proficiency in Terraform, CloudFormation, Pulumi, or other Infrastructure as Code (IaC) tools.
- Experience working in hybrid cloud environments (AWS, Azure, GCP, on-prem).
- Strong scripting skills in Python, Bash, or Go.
- Knowledge of distributed data processing frameworks (Spark, Flink, Ray, or similar).
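Behind the progressive delivery strategies listed above (Canary, Blue-Green) sits a simple analysis step: compare the canary's error rate to the baseline and decide whether to promote or roll back. The sketch below shows that decision in stdlib Python; the counts and tolerance are illustrative assumptions, and real systems (e.g. Argo Rollouts) drive this from live metrics.

```python
def canary_verdict(baseline_errors, canary_errors, requests, tolerance=0.01):
    """Promote the canary only if its error rate stays within `tolerance`
    of the baseline's error rate over the same request volume."""
    base_rate = baseline_errors / requests
    canary_rate = canary_errors / requests
    return "promote" if canary_rate <= base_rate + tolerance else "rollback"

# Baseline: 20 errors / 1000 requests. Canary: 25 errors / 1000 requests.
print(canary_verdict(baseline_errors=20, canary_errors=25, requests=1000))
```

Production canary analysis adds statistical significance tests and multiple metrics (latency, saturation), but promote-or-rollback on a tolerance band is the core loop.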
Posted 2 months ago
1 - 3 years
4 - 8 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Key Responsibilities:
- Collaborate with data scientists to support end-to-end ML model development, including data preparation, feature engineering, training, and evaluation.
- Build and maintain automated pipelines for data ingestion, transformation, and model scoring using Python and SQL.
- Assist in model deployment using CI/CD pipelines (e.g., Jenkins) and ensure smooth integration with production systems.
- Develop tools and scripts to support model monitoring, logging, and retraining workflows.
- Work with data from relational databases (RDS, Redshift) and preprocess it for model consumption.
- Analyze pipeline performance and model behavior; identify opportunities for optimization and refactoring.
- Contribute to the development of a feature store and standardized processes to support reproducible data science.

Required Skills & Experience:
- 1-3 years of hands-on experience in Python programming for data science or ML engineering tasks.
- Solid understanding of machine learning workflows, including model training, validation, deployment, and monitoring.
- Proficient in SQL and working with structured data from sources like Redshift and RDS.
- Familiarity with ETL pipelines and data transformation best practices.
- Basic understanding of ML model deployment strategies and CI/CD tools like Jenkins.
- Strong analytical mindset with the ability to interpret and debug data/model issues.

Preferred Qualifications:
- Exposure to frameworks like scikit-learn, XGBoost, LightGBM, or similar.
- Knowledge of ML lifecycle tools (e.g., MLflow, Ray).
- Familiarity with cloud platforms (AWS preferred) and scalable infrastructure.
- Experience with data or model versioning tools and feature engineering frameworks.
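The Python-plus-SQL ingestion-and-scoring loop this role describes can be demonstrated end to end with the stdlib `sqlite3` module standing in for Redshift/RDS: load rows, engineer a feature in SQL, then score with a model. The table, columns, and toy linear model are illustrative assumptions, not from the posting.

```python
import sqlite3

# Ingest: load raw events into a SQL store (in-memory here; RDS/Redshift in practice).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id TEXT, clicks INTEGER)")
conn.executemany("INSERT INTO events VALUES (?, ?)",
                 [("u1", 3), ("u1", 5), ("u2", 1)])

# Feature engineering in SQL: per-user click totals.
features = conn.execute(
    "SELECT user_id, SUM(clicks) FROM events GROUP BY user_id ORDER BY user_id"
).fetchall()

def score(total_clicks, weight=0.1):
    """Toy linear model standing in for a trained estimator."""
    return round(weight * total_clicks, 2)

scores = {user: score(total) for user, total in features}
print(scores)  # {'u1': 0.8, 'u2': 0.1}
```

Keeping heavy aggregation in SQL and only the model call in Python is the usual split for this kind of batch-scoring pipeline.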
Posted 2 months ago