5.0 years
0 Lacs
Greater Kolkata Area
On-site
Job Title: Python / AI-ML Developer
Experience Required: 2–5 years
Location: Kolkata
Job Type: Full-time

Job Summary:
We are looking for a passionate and skilled Python / AI-ML Developer with 2–5 years of experience to join our data science and engineering team. The ideal candidate will be responsible for building intelligent solutions, designing machine learning models, and developing scalable Python-based systems. This role offers the opportunity to work on real-world, data-driven projects with a focus on innovation and performance.

Key Responsibilities:
- Design, build, and deploy machine learning models and AI-driven solutions.
- Write clean, efficient Python code for data preprocessing, model training, and evaluation.
- Work with large datasets to extract insights and create predictive models.
- Collaborate with cross-functional teams, including data scientists, engineers, and business stakeholders.
- Deploy models into production using REST APIs or cloud services.
- Monitor and retrain models to maintain accuracy and efficiency over time.
- Research and experiment with new algorithms and AI techniques.

Required Skills and Qualifications:
- 2–5 years of hands-on experience in Python development with a focus on AI/ML.
- Solid understanding of machine learning algorithms (supervised, unsupervised, deep learning).
- Experience with ML libraries such as scikit-learn, TensorFlow, PyTorch, Keras, or XGBoost.
- Proficiency in data handling with Pandas and NumPy, and in visualization tools such as Matplotlib or Seaborn.
- Experience training, evaluating, and tuning models using appropriate metrics.
- Strong foundation in mathematics and statistics (linear algebra, probability, optimization).
- Familiarity with REST APIs and model deployment strategies.
- Experience with Git and collaborative development.

Preferred Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Science, AI, Engineering, or a related field.
- Experience with cloud platforms (AWS, Azure, GCP) and MLOps tools.
- Exposure to Natural Language Processing (NLP), Computer Vision, or Recommendation Systems.
- Knowledge of big data tools (Spark, Hadoop) is a plus.
- Understanding of CI/CD pipelines and containerization with Docker/Kubernetes.
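As a point of reference for the "preprocessing, model training, and evaluation" loop this posting describes, here is a minimal sketch using scikit-learn (one of the libraries it names). The dataset and model choices are illustrative, not the employer's actual stack.

```python
# Minimal train/evaluate loop with scikit-learn: preprocessing and the model
# are bundled in a pipeline so the same steps apply at inference time.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

# StandardScaler is the preprocessing step; LogisticRegression is the model.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

# Evaluate with an appropriate metric on held-out data.
accuracy = accuracy_score(y_test, model.predict(X_test))
```

The pipeline pattern matters for the "deploy models into production" responsibility: whatever is serialized and served already contains its preprocessing.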
Posted 1 week ago
4.0 - 5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Career Area: Technology, Digital and Data

Job Description
Your Work Shapes the World at Caterpillar Inc. When you join Caterpillar, you're joining a global team who cares not just about the work we do – but also about each other. We are the makers, problem solvers, and future world builders who are creating stronger, more sustainable communities. We don't just talk about progress and innovation here – we make it happen, with our customers, where we work and live. Together, we are building a better world, so we can all enjoy living in it.

Caterpillar Inc. is more than big, heavy equipment and much more than yellow iron. Our products have evolved from simple mechanical workhorses to sophisticated, electronically controlled work-site solutions. This transformation, along with our smart factories and our integrated dealer network, has produced a wealth of data ready to be harvested to open new markets, fine-tune our processes, and deliver differentiated customer solutions. Analytics professionals are the data-driven business professionals providing solutions to all areas across the Caterpillar enterprise. Marketing & Branding Analytics uses quantitative methods such as business simulations, data mining, and advanced statistical techniques to solve marketing challenges for internal and external Caterpillar customers. The Data Scientist II contributes to this mission by applying quantitative analysis, data management, modeling, and data visualization skills as an individual contributor on project teams tasked with solving business problems.

Job Duties
Typical problems include enhancing customer satisfaction by delivering key insights on customer experience; identifying sales, rental, and service opportunities for Caterpillar dealers; determining the principal drivers that maximize marketing return on investment; and recommending the optimum investment in each marketing channel.
The principal responsibility of the Data Scientist is to be an independent contributor on multi-person analytic teams. This position requires a depth of knowledge in quantitative analytic methods, data management, and/or associated digital technologies suitable for handling all but the most complex issues. The Data Scientist is expected to be familiar with the company's processes, products, and organization, as well as its customers, competitors, and stakeholders. Work is typically directed by a supervisor, project lead, or team lead through a review of results. This position may make decisions on routine, medium-risk issues that affect the project team, suppliers, or internal customers. Challenges include meeting expectations in delivering results, learning to refine solutions to better fit complex situations, making timely decisions, and communicating effectively with all project stakeholders. The Data Scientist also mentors junior data scientists and associates, developing their capabilities and organizational knowledge.

The Data Scientist demonstrates thorough knowledge of statistical approaches, data management techniques, and/or related digital technologies, and the ability to handle complex issues. The incumbent demonstrates very good communication and presentation skills and is able to explain conclusions to customers who have limited knowledge of and experience with quantitative analytical methods. As an individual contributor on teams, they should also exhibit strong initiative and teamwork skills, and a comprehensive knowledge of Caterpillar Inc., its products and services, its internal systems, processes, and procedures, and the external environment in which it competes.
Background/Experience:
- Bachelor's degree, preferably in AI, Data Science, Computer Science, Statistics, or a similar field with quantitative coursework, and 4–5 years of marketing analytics experience utilizing quantitative analysis; a Master's degree and 2–4 years of experience; or a PhD in one of the associated fields.
- Strong understanding of machine learning algorithms, data structures, and statistical methods.
- Experience with deep learning techniques and neural networks.
- Ability to work with large datasets in a cloud environment and perform data preprocessing and feature engineering.
- Proficiency in Python and experience with machine learning frameworks such as TensorFlow or PyTorch.
- Experience designing, developing, and deploying machine learning models using AWS SageMaker.

Preferred:
- GA4 (Google Analytics) expertise. Strong understanding of GA4 concepts such as channel/medium/source attribution, campaign engagement tracking (UTM), and event/session interactions.
- Ability to work with large amounts of GA4 data, perform in-depth analysis of user behaviour, campaign attribution, and customer churn, and present actionable insights to the business.

Posting Dates: July 23, 2025 - July 29, 2025

Caterpillar is an Equal Opportunity Employer. Qualified applicants of any age are encouraged to apply.

Not ready to apply? Join our Talent Community.
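The churn analysis over GA4-style event data mentioned above can be sketched with a per-user aggregation in Pandas. The column names, the 30-day inactivity rule, and the sample rows are all invented for illustration; real GA4 exports have a different schema.

```python
# Illustrative churn flagging over GA4-style event rows: group by user,
# find the last activity date, and flag users inactive for more than 30 days.
import pandas as pd

events = pd.DataFrame({
    "user_id": ["u1", "u1", "u2", "u3"],
    "event_date": pd.to_datetime(
        ["2025-07-01", "2025-07-20", "2025-05-02", "2025-07-18"]
    ),
    "session_id": ["s1", "s2", "s3", "s4"],
})

as_of = pd.Timestamp("2025-07-23")
per_user = events.groupby("user_id").agg(
    last_seen=("event_date", "max"),
    sessions=("session_id", "nunique"),
)
# Hypothetical rule: no activity in the last 30 days counts as churned.
per_user["churned"] = (as_of - per_user["last_seen"]).dt.days > 30
```

The same groupby-then-flag shape extends to campaign attribution: replace `user_id` with a UTM campaign column and the aggregates with engagement metrics.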
Posted 1 week ago
10.0 - 12.0 years
0 Lacs
Greater Bengaluru Area
On-site
Role: Senior Principal - Data Scientist

Position overview
Work in an innovative and fast-paced AI application development team to conceptualize and execute projects that leverage the power of AI/ML and analytics. The work will relate to producing business outcomes for the Cloud and Cybersecurity products and services of Novamesh, a wholly owned subsidiary of Tata Communications Ltd. Success in this role requires a mix of data science skills, appreciation of the business, and the ability to work across teams. A special focus of this role is to identify and execute ideas for creating monetizable product differentiators by working with domain experts from individual product teams, acquiring domain skills in the process.

Detailed job description
- 10 to 12 years of industry experience with demonstrable outcomes in the field of data science.
- Develop, test, and deploy ML/AI models for various products.
- Perform data preprocessing, feature engineering, and ML/DL model evaluation.
- Optimize and fine-tune models for performance and scalability.
- Good understanding of statistical, ML, and AI models.
- Good understanding of NLP concepts and projects involving entity recognition, text classification, and language modelling (e.g., GPT).
- Build and refine RAG models to improve information retrieval and answer generation.
- Integrate RAG methods into existing applications to enhance data accessibility and user experience.
- Work closely with cross-functional teams, including software engineers, product managers, and domain experts.
- Document processes, methodologies, and model development for internal and external stakeholders.
- Go-getter attitude and the will to "make it happen".

Qualification and skills
- Bachelor's or Master's degree in Computer Science, Data Science, Statistics, Mathematics, or a related field from a reputed institution.
- Strong knowledge of probability and statistics.
- Expertise in machine learning and deep learning.
- Hands-on experience with GenAI, LLMs, and SLMs.
- Strong programming skills: Python, PyTorch, scikit-learn, NumPy; GenAI tools such as LangChain/LlamaIndex and OpenAI; SQL, flat-file databases, data lakes, data stores, and data frames (Pandas, cuDF, etc.).
- Working knowledge of MLOps principles and of implementing projects with big data in batch and streaming modes.
- Good working knowledge of data engineering.
- Excellent problem-solving skills and a proactive attitude.
- Excellent communication skills and teamwork abilities.
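For the RAG work described in this posting, the retrieval half can be illustrated with a deliberately tiny sketch: score documents against a query and pass the best match to a generator as context. Everything here (the corpus, bag-of-words scoring, the prompt shape) is a simplifying assumption; production RAG uses embedding models and a vector store, as the posting's tool list suggests.

```python
# Toy retrieval step of a RAG pipeline: cosine similarity over word counts,
# then the top document becomes the context for answer generation.
import math
from collections import Counter

docs = {
    "d1": "firewall rules block inbound traffic on unused ports",
    "d2": "object storage replicates data across availability zones",
}

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str) -> str:
    q = Counter(query.lower().split())
    return max(docs, key=lambda d: cosine(q, Counter(docs[d].lower().split())))

best = retrieve("which rules block traffic on a port")
# The retrieved text is injected into the generator's prompt as grounding.
prompt = f"Answer using this context: {docs[best]}"
```

Swapping `cosine` over word counts for an embedding model, and the `docs` dict for a vector database, turns this skeleton into the architecture the posting asks for.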
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Ciklum is looking for a Data Engineer to join our team full-time in India. We are a custom product engineering company that supports both multinational organizations and scaling startups in solving their most complex business challenges. With a global team of over 4,000 highly skilled developers, consultants, analysts, and product owners, we engineer technology that redefines industries and shapes the way people live.

About the role:
As a Data Engineer, you will become part of a cross-functional development team working with GenAI solutions for digital transformation across enterprise products. The team you will join is responsible for the design, development, and deployment of innovative enterprise technology, tools, and standard processes to support the delivery of tax services, and focuses on delivering comprehensive, value-added, and efficient tax services to our clients. It is a dynamic team of professionals with varying backgrounds in tax, technology development, change management, and project management. The team consults and executes on a wide range of initiatives involving process and tool development and implementation, including training development, engagement management, tool design, and implementation.
Responsibilities:
- Build, deploy, and maintain mission-critical analytics solutions that process terabytes of data quickly at big-data scale.
- Contribute design, code, and configurations; manage data ingestion, real-time streaming, batch processing, and ETL across multiple data stores.
- Tune the performance of complicated SQL queries and data flows.

Requirements:
- Experience coding in SQL/Python, with solid CS fundamentals including data structure and algorithm design.
- Hands-on implementation experience with a combination of the following technologies: Hadoop, MapReduce, Kafka, Hive, Spark, and SQL and NoSQL data warehouses.
- Experience with the Azure cloud data platform.
- Experience working with vector databases (Milvus, Postgres, etc.).
- Knowledge of embedding models and retrieval-augmented generation (RAG) architectures.
- Understanding of LLM pipelines, including data preprocessing for GenAI models.
- Experience deploying data pipelines for AI/ML workloads, ensuring scalability and efficiency.
- Familiarity with model monitoring, feature stores (Feast, Vertex AI Feature Store), and data versioning.
- Experience with CI/CD for ML pipelines (Kubeflow, MLflow, Airflow, SageMaker Pipelines).
- Understanding of real-time streaming for ML model inference (Kafka, Spark Streaming).
- Knowledge of data warehousing design, implementation, and optimization.
- Knowledge of data quality testing, automation, and results visualization.
- Knowledge of BI report and dashboard design and implementation (Power BI).
- Experience supporting data scientists and complex statistical use cases is highly desirable.

What's in it for you?
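The batch-ETL responsibility in this posting (ingest, transform, aggregate in SQL) can be sketched in miniature with Python's built-in sqlite3, standing in for the warehouse engines the posting names. Table and column names are invented for illustration.

```python
# Toy batch-ETL step: load raw event rows, then aggregate per day in SQL.
# Pushing the aggregation into the engine, rather than looping in Python,
# is the same principle that drives SQL performance tuning at warehouse scale.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (event_day TEXT, bytes INTEGER)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [("2025-07-01", 100), ("2025-07-01", 250), ("2025-07-02", 40)],
)

rows = conn.execute(
    "SELECT event_day, SUM(bytes) FROM events "
    "GROUP BY event_day ORDER BY event_day"
).fetchall()
```

In a real pipeline the same GROUP BY would run in Hive, Spark SQL, or an Azure warehouse over partitioned tables rather than an in-memory database.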
- Strong community: Work alongside top professionals in a friendly, open-door environment.
- Growth focus: Take on large-scale projects with a global impact and expand your expertise.
- Tailored learning: Boost your skills with internal events (meetups, conferences, workshops), Udemy access, language courses, and company-paid certifications.
- Endless opportunities: Explore diverse domains through internal mobility, finding the best fit to gain hands-on experience with cutting-edge technologies.
- Care: We've got you covered with company-paid medical insurance, mental health support, and financial & legal consultations.

About us:
At Ciklum, we are always exploring innovations, empowering each other to achieve more, and engineering solutions that matter. With us, you'll work with cutting-edge technologies, contribute to impactful projects, and be part of a One Team culture that values collaboration and progress. India is a strategic innovation hub for Ciklum, with growing teams in Chennai and Pune leading advancements in EdgeTech, AR/VR, IoT, and beyond. Join us to collaborate on game-changing solutions and take your career to the next level.

Want to learn more about us? Follow us on Instagram, Facebook, LinkedIn.

Explore, empower, engineer with Ciklum! Interested already? We would love to get to know you! Submit your application. We can't wait to see you at Ciklum.
Posted 1 week ago
1.0 - 3.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY-Consulting – AI Enabled Automation – Staff - Full Stack
We are seeking a Senior Conversational Developer with extensive experience in designing and implementing complex solutions in Python, data engineering, JavaScript, ReactJS, and Node.js. The ideal candidate should also have experience in conversational AI, including leveraging Generative AI to develop chatbot solutions.

Key Responsibilities:
- Lead the design, development, and deployment of applications.
- Write custom JavaScript code and integrate it with backend services.
- Collaborate with stakeholders to gather requirements and meet business needs.
- Test, debug, and refine to ensure optimal performance and user experience.
- Provide mentorship and code reviews for junior developers.

Qualifications:
- Requires a minimum of 1–3 years of relevant experience.
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Minimum of 5–8 years of experience in chatbot development.
- Advanced proficiency in JavaScript, including experience with asynchronous programming, AJAX, and API integrations.
- Knowledge of NLP and ML techniques in the context of conversational AI.
- Experience with other chatbot platforms is a plus.
- Strong problem-solving skills and the ability to work in a dynamic environment.
- Excellent communication skills for effective collaboration with technical and non-technical teams.
- A portfolio demonstrating successful chatbot projects is preferred.

Skills:
- Strong skills in Python, JavaScript frameworks and related web technologies, server-side programming such as Node.js, and the Azure cloud.
- Familiarity with API design and development, including RESTful APIs and web services.
- Experience with relational databases and an understanding of NoSQL databases (e.g., MongoDB, Cassandra) for handling unstructured and semi-structured data.
- Familiarity with the conversational AI domain, conversational design and implementation, customer experience metrics, and industry-specific challenges.
- Familiarity with NLP and ML applications in chatbots: feature extraction, entity extraction, intent classification, etc.
- Understanding of conversational data (chats, emails, and calls) and its preprocessing (including feature engineering if required) to train conversational AI systems.
- Strong problem-solving, analytical, and project management skills.
- Excellent communication and collaboration skills to work effectively with cross-functional teams and stakeholders.
- Familiarity with Agile development methodologies and version control systems.
- Ability to stay updated with the latest advancements and trends.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
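Intent classification, one of the chatbot skills named in this posting, can be illustrated with a minimal bag-of-words classifier in scikit-learn. The training utterances and intent labels below are made up for the example; production systems use far larger datasets and usually transformer-based models.

```python
# Toy intent classifier: vectorize utterances into word counts, then fit
# a Naive Bayes model mapping utterances to intent labels.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

utterances = [
    "what is my balance", "show account balance",
    "reset my password", "i forgot my password",
]
intents = ["balance", "balance", "password", "password"]

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(utterances, intents)

# An unseen phrasing still maps to the right intent via shared vocabulary.
predicted = clf.predict(["how do i reset the password"])[0]
```

In a full conversational pipeline this step sits between preprocessing and entity extraction: the predicted intent decides which dialogue flow handles the turn.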
Posted 1 week ago
1.0 - 3.0 years
0 Lacs
Kanayannur, Kerala, India
On-site
At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY-Consulting – AI Enabled Automation – Staff - Full Stack
We are seeking a Senior Conversational Developer with extensive experience in designing and implementing complex solutions in Python, data engineering, JavaScript, ReactJS, and Node.js. The ideal candidate should also have experience in conversational AI, including leveraging Generative AI to develop chatbot solutions.

Key Responsibilities:
- Lead the design, development, and deployment of applications.
- Write custom JavaScript code and integrate it with backend services.
- Collaborate with stakeholders to gather requirements and meet business needs.
- Test, debug, and refine to ensure optimal performance and user experience.
- Provide mentorship and code reviews for junior developers.

Qualifications:
- Requires a minimum of 1–3 years of relevant experience.
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Minimum of 5–8 years of experience in chatbot development.
- Advanced proficiency in JavaScript, including experience with asynchronous programming, AJAX, and API integrations.
- Knowledge of NLP and ML techniques in the context of conversational AI.
- Experience with other chatbot platforms is a plus.
- Strong problem-solving skills and the ability to work in a dynamic environment.
- Excellent communication skills for effective collaboration with technical and non-technical teams.
- A portfolio demonstrating successful chatbot projects is preferred.

Skills:
- Strong skills in Python, JavaScript frameworks and related web technologies, server-side programming such as Node.js, and the Azure cloud.
- Familiarity with API design and development, including RESTful APIs and web services.
- Experience with relational databases and an understanding of NoSQL databases (e.g., MongoDB, Cassandra) for handling unstructured and semi-structured data.
- Familiarity with the conversational AI domain, conversational design and implementation, customer experience metrics, and industry-specific challenges.
- Familiarity with NLP and ML applications in chatbots: feature extraction, entity extraction, intent classification, etc.
- Understanding of conversational data (chats, emails, and calls) and its preprocessing (including feature engineering if required) to train conversational AI systems.
- Strong problem-solving, analytical, and project management skills.
- Excellent communication and collaboration skills to work effectively with cross-functional teams and stakeholders.
- Familiarity with Agile development methodologies and version control systems.
- Ability to stay updated with the latest advancements and trends.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 1 week ago
1.0 - 3.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY-Consulting – AI Enabled Automation – Staff - Full Stack
We are seeking a Senior Conversational Developer with extensive experience in designing and implementing complex solutions in Python, data engineering, JavaScript, ReactJS, and Node.js. The ideal candidate should also have experience in conversational AI, including leveraging Generative AI to develop chatbot solutions.

Key Responsibilities:
- Lead the design, development, and deployment of applications.
- Write custom JavaScript code and integrate it with backend services.
- Collaborate with stakeholders to gather requirements and meet business needs.
- Test, debug, and refine to ensure optimal performance and user experience.
- Provide mentorship and code reviews for junior developers.

Qualifications:
- Requires a minimum of 1–3 years of relevant experience.
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Minimum of 5–8 years of experience in chatbot development.
- Advanced proficiency in JavaScript, including experience with asynchronous programming, AJAX, and API integrations.
- Knowledge of NLP and ML techniques in the context of conversational AI.
- Experience with other chatbot platforms is a plus.
- Strong problem-solving skills and the ability to work in a dynamic environment.
- Excellent communication skills for effective collaboration with technical and non-technical teams.
- A portfolio demonstrating successful chatbot projects is preferred.

Skills:
- Strong skills in Python, JavaScript frameworks and related web technologies, server-side programming such as Node.js, and the Azure cloud.
- Familiarity with API design and development, including RESTful APIs and web services.
- Experience with relational databases and an understanding of NoSQL databases (e.g., MongoDB, Cassandra) for handling unstructured and semi-structured data.
- Familiarity with the conversational AI domain, conversational design and implementation, customer experience metrics, and industry-specific challenges.
- Familiarity with NLP and ML applications in chatbots: feature extraction, entity extraction, intent classification, etc.
- Understanding of conversational data (chats, emails, and calls) and its preprocessing (including feature engineering if required) to train conversational AI systems.
- Strong problem-solving, analytical, and project management skills.
- Excellent communication and collaboration skills to work effectively with cross-functional teams and stakeholders.
- Familiarity with Agile development methodologies and version control systems.
- Ability to stay updated with the latest advancements and trends.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Ciklum is looking for a Data Engineer to join our team full-time in India. We are a custom product engineering company that supports both multinational organizations and scaling startups in solving their most complex business challenges. With a global team of over 4,000 highly skilled developers, consultants, analysts, and product owners, we engineer technology that redefines industries and shapes the way people live.

About the role:
As a Data Engineer, you will become part of a cross-functional development team working with GenAI solutions for digital transformation across enterprise products. The team you will join is responsible for the design, development, and deployment of innovative enterprise technology, tools, and standard processes to support the delivery of tax services, and focuses on delivering comprehensive, value-added, and efficient tax services to our clients. It is a dynamic team of professionals with varying backgrounds in tax, technology development, change management, and project management. The team consults and executes on a wide range of initiatives involving process and tool development and implementation, including training development, engagement management, tool design, and implementation.
Responsibilities:
- Build, deploy, and maintain mission-critical analytics solutions that process terabytes of data quickly at big-data scale.
- Contribute design, code, and configurations; manage data ingestion, real-time streaming, batch processing, and ETL across multiple data stores.
- Tune the performance of complicated SQL queries and data flows.

Requirements:
- Experience coding in SQL/Python, with solid CS fundamentals including data structure and algorithm design.
- Hands-on implementation experience with a combination of the following technologies: Hadoop, MapReduce, Kafka, Hive, Spark, and SQL and NoSQL data warehouses.
- Experience with the Azure cloud data platform.
- Experience working with vector databases (Milvus, Postgres, etc.).
- Knowledge of embedding models and retrieval-augmented generation (RAG) architectures.
- Understanding of LLM pipelines, including data preprocessing for GenAI models.
- Experience deploying data pipelines for AI/ML workloads, ensuring scalability and efficiency.
- Familiarity with model monitoring, feature stores (Feast, Vertex AI Feature Store), and data versioning.
- Experience with CI/CD for ML pipelines (Kubeflow, MLflow, Airflow, SageMaker Pipelines).
- Understanding of real-time streaming for ML model inference (Kafka, Spark Streaming).
- Knowledge of data warehousing design, implementation, and optimization.
- Knowledge of data quality testing, automation, and results visualization.
- Knowledge of BI report and dashboard design and implementation (Power BI).
- Experience supporting data scientists and complex statistical use cases is highly desirable.

What's in it for you?
- Strong community: Work alongside top professionals in a friendly, open-door environment.
- Growth focus: Take on large-scale projects with a global impact and expand your expertise.
- Tailored learning: Boost your skills with internal events (meetups, conferences, workshops), Udemy access, language courses, and company-paid certifications.
- Endless opportunities: Explore diverse domains through internal mobility, finding the best fit to gain hands-on experience with cutting-edge technologies.
- Care: We've got you covered with company-paid medical insurance, mental health support, and financial & legal consultations.

About us:
At Ciklum, we are always exploring innovations, empowering each other to achieve more, and engineering solutions that matter. With us, you'll work with cutting-edge technologies, contribute to impactful projects, and be part of a One Team culture that values collaboration and progress. India is a strategic innovation hub for Ciklum, with growing teams in Chennai and Pune leading advancements in EdgeTech, AR/VR, IoT, and beyond. Join us to collaborate on game-changing solutions and take your career to the next level.

Want to learn more about us? Follow us on Instagram, Facebook, LinkedIn.

Explore, empower, engineer with Ciklum! Interested already? We would love to get to know you! Submit your application. We can't wait to see you at Ciklum.
Posted 1 week ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Title: ML Engineer – Predictive Maintenance
Hay Level: Hay 60
Job Location: Veghel

Job Description
Vanderlande provides baggage handling systems for 600 airports globally, moving over 4 billion pieces of baggage annually. For the parcel market, our systems handle 52 million parcels daily. All these systems generate massive amounts of data. Do you see the challenge in building models and solutions that enable data-driven services, including predictive insights using machine learning? Would you like to contribute to Vanderlande's fast-growing Technology Department and its journey to become more data-driven? If so, join our Digital Service Platform team!

Your Position
You will work as a Data Engineer with machine learning expertise in the Predictive Maintenance team. This hybrid, multi-cultural team includes Data Scientists, Machine Learning Engineers, Data Engineers, a DevOps Engineer, a QA Engineer, an Architect, a UX Designer, a Scrum Master, and a Product Owner. The Digital Service Platform focuses on optimizing customer asset usage and maintenance, impacting performance, cost, and sustainability KPIs by extending component lifetimes. In your role, you will participate in solution design discussions led by our Product Architect, where your input as a Data Engineer with ML expertise is highly valued, and collaborate with IT and business SMEs to ensure delivery of high-quality end-to-end data and machine learning pipelines.

Your Responsibilities
Data Engineering
- Develop, test, and document data collection and processing pipelines for Predictive Maintenance solutions, including data from (IoT) sensors and control components to our data platform.
- Build scalable pipelines to transform, aggregate, and make data available for machine learning models.
- Align implementation efforts with other back-end developers across multiple development teams.
Machine Learning Integration
- Collaborate with Data Scientists to integrate machine learning models into production pipelines, ensuring smooth deployment and scalability.
- Develop and optimize end-to-end machine learning pipelines (MLOps), from data preparation to model deployment and monitoring.
- Work on model inference pipelines, ensuring efficient real-time predictions from deployed models.
- Implement automated retraining workflows and ensure version control for datasets and models.

Continuous Improvement
- Contribute to the design and build of a CI/CD pipeline, including integration test automation for data and ML pipelines.
- Continuously improve and standardize data and ML services for customer sites to reduce project delivery time.
- Actively monitor model performance and ensure timely updates or retraining as needed.

Your Profile
- Minimum of 4 years' experience building complex data pipelines and integrating machine learning solutions.
- Bachelor's or Master's degree in Computer Science, IT, Data Science, or equivalent.
- Hands-on experience with data modeling and machine learning workflows.
- Strong programming skills in Java, Scala, and Python (preferred for ML tasks).
- Experience with stream processing frameworks (e.g., Spark) and streaming storage (e.g., Kafka).
- Proven experience with MLOps practices, including data preprocessing, model deployment, and monitoring.
- Familiarity with ML frameworks and tools (e.g., TensorFlow, PyTorch, MLflow).
- Proficiency with cloud platforms (preferably Azure and Databricks).
- Experience with data quality management and monitoring, and with ensuring robust pipelines.
- Knowledge of Predictive Maintenance model development is a strong plus.

What You'll Gain
- Opportunity to work at the forefront of data-driven innovation in a global organization.
- Collaborate with a talented and diverse team to design and implement cutting-edge solutions.
- Expand your expertise in data engineering and machine learning in a real-world industrial setting.
If you are passionate about leveraging data and machine learning to drive innovation, we’d love to hear from you!
Posted 1 week ago
0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Role Overview: We’re looking for freshers who are passionate about Python and Artificial Intelligence. You should be a fast learner, eager to experiment, and unafraid to fail fast and iterate. You’ll work on AI-driven projects, prototypes, and production systems that challenge the status quo. Key Responsibilities: · Write clean, efficient Python code for AI and data-driven applications · Experiment, prototype, and rapidly test AI models and solutions · Work closely with senior developers and AI engineers to build scalable systems · Research and learn new AI tools, frameworks, and technologies · Debug, improve, and optimize existing code and AI workflows · Participate in brainstorming and solution design sessions · Document your work clearly for knowledge sharing Must-Have Skills: · Strong Python programming skills · Solid understanding of AI/ML fundamentals (e.g. supervised learning, neural networks, NLP, computer vision, etc.) · Basic knowledge of popular AI libraries (e.g. TensorFlow, PyTorch, scikit-learn, OpenCV, etc.) · Strong problem-solving skills and logical thinking · Curiosity and willingness to explore new technologies quickly · Good communication skills Nice-to-Have: · Experience with AI model deployment (Flask/FastAPI, Docker, cloud services) · Understanding of data preprocessing and analysis (NumPy, Pandas, etc.) · Participation in AI hackathons, competitions, or personal projects · Familiarity with Git and version control workflows · Basic knowledge of frontend technologies (optional but helpful) What We Offer: · Opportunity to work on real-world AI projects from scratch · A culture that values learning, experimentation, and speed · Guidance and mentorship from experienced developers and AI experts · Flexible work environment · Clear growth paths and opportunities to level up quickly · Exposure to global clients and cutting-edge projects
Posted 1 week ago
2.0 - 4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Highspot: Highspot is a software product development company and a recognized global leader in the sales enablement category, leveraging cutting-edge AI and GenAI technologies at the core of its robust Software-as-a-Service (SaaS) platform. Highspot is revolutionizing how millions of individuals work worldwide. Through its AI-powered platform, Highspot drives enterprise transformation to empower sales teams through intelligent content management, training, contextual guidance, customer engagement, meeting intelligence, and actionable analytics. The Highspot platform delivers advanced features tailored to business needs, in a modern design that sales and marketing executives appreciate, and is the #1-rated sales enablement platform on G2 Crowd. While headquartered in Seattle, Highspot has expanded its footprint across America, Canada, the UK, Germany, Australia, and now India, solidifying its presence in the Asia Pacific markets. About The Role: You will safeguard the quality of our AI and GenAI features by evaluating model outputs, creating “golden” datasets, and guiding continuous improvements in collaboration with data scientists and engineers. You will also guide the team as it builds a robust methodology and framework for evaluating hundreds of AI agents. Responsibilities: Validate AI Output – Review model results across text, documents, audio, and video; flag errors and improvement opportunities. Create Golden Datasets – Build and maintain high-quality reference data with help from subject-matter experts. Data Annotation – Accurately label multi-modal data and perform light preprocessing (e.g., text cleanup, image resizing). Quality Analytics – Run accuracy metrics, trend analyses, and clustering to gauge model performance. Fine-Tune Model Code – Adjust training scripts and parameters to boost accuracy and keep behavior consistent across models. 
Process Improvement & Documentation – Refine annotation workflows and keep clear records of methods, findings, and best practices. Cross-Functional Collaboration – Partner with ML engineers, product managers, and QA peers to ship reliable, user-ready features. Required Qualifications: 2 to 4 years of experience in data science/AI/ML; specific experience in evaluating AI results is strongly preferred. Working knowledge of AI/ML evaluation techniques. Solid analytical skills and meticulous attention to detail. Bachelor’s or Master’s in Computer Science, Data Science, or a related field. Strong English written and verbal communication. Self-directed, organized, and comfortable with ambiguous problem spaces. Equal Opportunity Statement: We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of age, ancestry, citizenship, color, ethnicity, family or medical care leave, gender identity or expression, genetic information, marital status, medical condition, national origin, physical or invisible disability status, political affiliation, veteran status, race, religion, or sexual orientation. Did you read the requirements as a checklist and not tick every box? Don't rule yourself out! If this role resonates with you, hit the ‘apply’ button.
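As a rough illustration of the "Quality Analytics" duty above, here is a minimal pure-Python sketch of scoring model predictions against a golden dataset. The labels, and the choice of overall accuracy plus per-label precision, are illustrative assumptions, not Highspot's actual evaluation framework.

```python
from collections import defaultdict

def evaluate(golden, predicted):
    """Compare model outputs against golden labels: return overall
    accuracy and per-label precision."""
    assert len(golden) == len(predicted)
    correct = sum(g == p for g, p in zip(golden, predicted))
    accuracy = correct / len(golden)

    # Per-label precision: of all times the model emitted a label,
    # how often did it match the golden answer?
    tp = defaultdict(int)          # matches per predicted label
    pred_count = defaultdict(int)  # total predictions per label
    for g, p in zip(golden, predicted):
        pred_count[p] += 1
        if g == p:
            tp[p] += 1
    precision = {label: tp[label] / n for label, n in pred_count.items()}
    return accuracy, precision

golden = ["pos", "neg", "pos", "neg", "pos"]
predicted = ["pos", "neg", "neg", "neg", "pos"]
acc, prec = evaluate(golden, predicted)
print(acc)           # 0.8: four of five predictions match
print(prec["neg"])   # the model said "neg" three times, right twice
```

In practice a team would layer trend analysis and clustering on top of metrics like these, but the core loop of comparing outputs to a golden reference looks the same.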
Posted 1 week ago
3.0 years
1 - 3 Lacs
India
Remote
We are looking for a skilled and passionate AI Developer with 3+ years of hands-on experience in building and deploying AI/ML solutions. The ideal candidate will have a strong foundation in data science, machine learning algorithms, and deep learning frameworks, and will be responsible for developing scalable AI applications tailored to our business needs. Key Responsibilities: Design, develop, and deploy machine learning models and AI-driven applications. Collaborate with data engineers and software developers to integrate AI models into production systems. Conduct data preprocessing, feature engineering, and model tuning. Research and implement state-of-the-art ML/DL algorithms for predictive analytics, NLP, computer vision, etc. Monitor and evaluate model performance and retrain when necessary. Stay updated with the latest AI trends, technologies, and best practices. Required Skills & Qualifications: Bachelor’s/Master’s degree in Computer Science, Data Science, AI, or related field. 3+ years of professional experience in AI/ML development. Proficient in Python and common ML libraries (scikit-learn, TensorFlow, PyTorch, etc.). Experience with NLP, computer vision, recommendation systems, or other AI domains. Solid understanding of data structures, algorithms, and software design principles. Experience with cloud platforms (AWS, Azure, GCP) is a plus. Excellent problem-solving skills and ability to work in a collaborative team environment. Preferred Qualifications: Experience deploying AI models using REST APIs, Docker, or Kubernetes. Exposure to MLOps tools and frameworks. Contribution to open-source AI/ML projects or publications in the field. If you are passionate and have the required expertise, we would love to hear from you. Please send your resume to anisha.mohan@pearlsofttechnologies.co.in. Join us at PearlSoft Technologies and be a part of a team that creates innovative solutions for businesses worldwide! 
Job Type: Full-time Pay: ₹15,348.97 - ₹25,000.00 per month Benefits: Work from home Work Location: In person Application Deadline: 07/07/2025 Expected Start Date: 08/08/2025
Posted 1 week ago
2.0 years
2 - 3 Lacs
India
On-site
Key Responsibilities: Develop and deploy machine learning and deep learning models. Work on NLP, computer vision, or recommendation systems. Optimize models for performance and scalability. Stay updated with the latest AI research and trends. Skills We’re Looking For: Strong Python programming skills. Experience with ML frameworks (TensorFlow, PyTorch, Scikit-learn). Solid understanding of data preprocessing, model evaluation, and MLOps. Hands-on with tools like Pandas, NumPy, OpenCV, NLTK/spaCy. Exposure to cloud platforms (AWS, GCP, Azure) is a plus. Job Type: Full-time Pay: ₹20,000.00 - ₹30,000.00 per month Benefits: Provident Fund Ability to commute/relocate: Palarivattom, Kochi, Kerala: Reliably commute or planning to relocate before starting work (Preferred) Experience: AI: 2 years (Preferred) Work Location: In person Expected Start Date: 25/07/2025
Posted 1 week ago
3.0 - 6.0 years
0 Lacs
Gurugram, Haryana, India
On-site
At Moody's, we unite the brightest minds to turn today’s risks into tomorrow’s opportunities. We do this by striving to create an inclusive environment where everyone feels welcome to be who they are—with the freedom to exchange ideas, think innovatively, and listen to each other and customers in meaningful ways. If you are excited about this opportunity but do not meet every single requirement, please apply! You still may be a great fit for this role or other open roles. We are seeking candidates who model our values: invest in every relationship, lead with curiosity, champion diverse perspectives, turn inputs into actions, and uphold trust through integrity. Responsibilities Develop and expand our core data platform in MS Azure and Fabric, building robust data infrastructure and scalable solutions. Enhance datasets and transformation toolsets on the MS Azure platform, leveraging distributed processing frameworks to modernize processes and expand the codebase. Design and maintain ETL pipelines to ensure data is transformed, cleaned, and standardized for business use. Collaborate with cross-functional teams to deliver high-quality data solutions, contributing to both UI and backend development while translating UX/UI designs into functional interfaces. Develop scripts for building, deploying, and maintaining data systems, while utilizing tools for data exploration, analysis, and visualization throughout the project lifecycle. Utilize SQL and NoSQL databases for effective data management and support Agile practices with tools like Jira and GitHub. Contribute to technology standards and best practices in data warehousing and modeling, ensuring alignment with overall data strategy. Lead and motivate teams through periods of change, fostering a collaborative and innovative work environment. Skills and Competencies 3-6 years of cloud-based data engineering experience, with expertise in Microsoft Azure and other cloud platforms. 
Proficient in SQL and experienced with NoSQL databases, message queues, and streaming platforms like Kafka. Strong knowledge of Python and big data processing using PySpark, along with experience in CI/CD pipelines (Jenkins, GitHub, Terraform). Familiar with machine learning libraries such as TensorFlow and Keras, and skilled in data visualization tools like Power BI/Fabric and Matplotlib. Expertise in data wrangling, including cleaning, preprocessing, and transformation, with a solid foundation in statistics and probability. Excellent communication skills for engaging with technical and non-technical audiences across all organizational levels. Experience in UI development, translating UX/UI designs into code, data warehousing concepts, API development and integration, and workflow orchestration tools is desired. Education: Bachelor’s or Master’s degree in Computer Science, Engineering, Data Science, Mathematics, Statistics, or a related field. Relevant certifications in data science and machine learning are a plus. About The Team: Our Technology Services Group (TSG) team is responsible for delivering innovative, data-driven tech solutions. We build solutions that power analytics, enable machine learning, and provide critical insights across the organization. By joining our team, you will be part of exciting work in building scalable, next-generation data solutions that directly impact business strategy. Moody’s is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, sexual orientation, gender expression, gender identity or any other characteristic protected by law. Candidates for Moody's Corporation may be asked to disclose securities holdings pursuant to Moody’s Policy for Securities Trading and the requirements of the position. 
Employment is contingent upon compliance with the Policy, including remediation of positions in those holdings as necessary.
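The ETL responsibilities above (transform, clean, and standardize data for business use) can be illustrated with a minimal pure-Python sketch. The record fields (`name`, `country`) and cleaning rules here are hypothetical toy choices, not Moody's actual pipeline.

```python
def standardize(records):
    """Clean raw records: trim whitespace, normalize casing,
    and drop rows that are missing required fields."""
    cleaned = []
    for row in records:
        name = (row.get("name") or "").strip()
        country = (row.get("country") or "").strip().upper()
        if not name or not country:
            continue  # drop incomplete rows rather than guess
        cleaned.append({"name": name.title(), "country": country})
    return cleaned

raw = [
    {"name": "  ada lovelace ", "country": "gb"},
    {"name": "", "country": "US"},           # dropped: missing name
    {"name": "Grace Hopper", "country": " us "},
]
print(standardize(raw))
```

A production pipeline would run transforms like this at scale (e.g., in PySpark) with schema validation and data-quality checks, but the standardize step itself is the same idea.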
Posted 1 week ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Title: AI Engineer Location: Udyog Vihar, Gurugram Company: Novus Loyalty Experience: Minimum 3 years About Novus: Novus is a forward-thinking technology solutions provider specializing in cutting-edge digital transformation services. We help businesses enhance their operations with innovative software solutions, AI-driven analytics, and customer-centric platforms. Job Overview: We are currently seeking a skilled and passionate AI Engineer to join our team, with a strong emphasis on Azure AI technologies. Key Responsibilities: Design, develop, and deploy AI models using Azure AI services. Work on various AI domains, including Generative AI, Agentic AI, and other advanced ML models. Collaborate with cross-functional teams to integrate AI solutions into production environments. Conduct data analysis and preprocessing to support model development. Monitor and optimize AI model performance and accuracy. Stay up to date with emerging AI technologies, tools, and best practices. Qualifications & Skills: Minimum 3 years of hands-on experience in AI/ML model development. Proven experience with Azure AI services (e.g., Azure Machine Learning, Cognitive Services, Azure OpenAI). Strong programming skills in Python or other relevant languages. Solid understanding of machine learning frameworks (e.g., TensorFlow, PyTorch, Scikit-learn). Experience with large language models (LLMs), NLP, or generative AI applications is a plus. Familiarity with MLOps practices and cloud-based model deployment. Excellent problem-solving and communication skills. Why Join Us? Work with cutting-edge AI technologies and cloud infrastructure. Opportunity to lead innovative projects with real-world impact. Collaborative and growth-oriented work culture.
Posted 1 week ago
0 years
6 Lacs
Mohali
On-site
Key Responsibilities: Develop, fine-tune, and deploy LLM-based NLP models for tasks such as text classification, summarization, entity recognition, and question answering. Implement deep learning architectures using TensorFlow and/or PyTorch. Integrate OpenAI APIs and other foundation models into internal applications and tools. Design and build computer vision models for image detection, recognition, classification, and segmentation. Collaborate with cross-functional teams including product managers, data engineers, and UI/UX designers. Stay updated with the latest research trends in AI/ML and apply them to enhance system capabilities. Optimize model performance for scalability, accuracy, and speed. Build pipelines for data preprocessing, model training, validation, and deployment. Document models, experiments, and system behavior for team knowledge sharing. Key Skills & Qualifications: Bachelor's or Master’s degree in Computer Science, Artificial Intelligence, Data Science, or a related field. Strong hands-on experience with TensorFlow (or PyTorch). Deep understanding of NLP techniques, transformers, BERT, GPT, T5, etc. Experience working with OpenAI, Hugging Face, or other LLM frameworks. Solid foundation in Computer Vision concepts and frameworks like OpenCV, YOLO, CNNs, Detectron, etc. Proficient in Python and relevant libraries (e.g., NumPy, Pandas, Scikit-learn). Experience with REST APIs and Flask/FastAPI for deploying ML models is a plus. Excellent problem-solving and analytical skills. Preferred: Prior experience in building AI-based SaaS products or intelligent automation solutions. Knowledge of MLOps, model versioning, and cloud platforms (AWS/GCP/Azure). Familiarity with Reinforcement Learning and Generative AI is a bonus. Job Type: Full-time Pay: Up to ₹50,000.00 per month Schedule: Morning shift Work Location: In person
Posted 1 week ago
0 years
0 Lacs
Bhubaneswar, Odisha, India
On-site
🚀 Join SmartXAlgo Private Limited — 90-Day Internship Opportunities Who we are SmartXAlgo is a fintech startup developing AI-powered trading strategies and smart tools that bring clarity and confidence to traders. We’re a team united by innovation, integrity, and a human-first approach to technology. Program Duration & Stipend 📅 Duration: 90 days 💰 Stipend: ₹3,000 – ₹7,000 (based on role, skills, and performance) Location: Bhubaneswar Job Type: On-Site We’re hiring for three roles: 1. HR Intern Role Overview: Support hiring, onboarding, and employee engagement initiatives. Gain hands-on HR experience in a fast-paced startup environment. Key Responsibilities: Assist with recruitment and interview scheduling Help prepare HR documents, onboarding, and orientation Support team engagement activities and internal communications What You Bring: Strong interpersonal and communication skills Organized, proactive, and team-oriented Currently pursuing HR, business, or related field 2. AI/ML Intern Role Overview: Work alongside our AI team to research and develop smart algorithmic-trading models. Key Responsibilities: Assist in building and fine-tuning ML models for trade signal generation Run data preprocessing and technical analysis feature engineering Support backtesting efforts using historical market data What You Bring: Knowledge of machine learning, Python, libraries like scikit‑learn or TensorFlow Analytical mindset and problem‑solving skills Pursuing studies in CS, AI/ML, data science or related field 3. Backend Intern Role Overview: Help build and maintain our trading platform’s backend systems—APIs, authentication, and integrations. 
Key Responsibilities: Develop and maintain backend services in Python, Node.js, or Java Integrate trading APIs and database management Debug, optimize performance, and assist deployment What You Bring: Experience with RESTful APIs, databases (SQL/NoSQL), and Git Logical thinking, attention to detail, and collaborative attitude Pursuing a degree in CS, software engineering, or related field 📝 How to Apply Send your CV , role preference , and a brief note explaining what excites you about the role to hr@smartxalgo.com with subject line: “Internship Application – [Role] – Your Name” 🌟 Why Join Us? Meaningful learning in algorithmic trading & fintech tech stack Mentorship from an experienced team of technologists and traders Be part of a collaborative, trustworthy, and growing culture Applications close soon—grab this chance to learn, contribute, and grow! Outstanding interns may be offered a full-time position based on performance and company requirements.
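To give a flavor of the backtesting work the AI/ML intern role above mentions, here is a minimal moving-average crossover sketch in pure Python. The window sizes, the toy price series, and the buy/sell rules are illustrative assumptions only, not SmartXAlgo's strategy.

```python
def sma(prices, window):
    """Trailing simple moving average; None until enough history."""
    out = []
    for i in range(len(prices)):
        if i + 1 < window:
            out.append(None)
        else:
            out.append(sum(prices[i + 1 - window:i + 1]) / window)
    return out

def crossover_signals(prices, fast=2, slow=3):
    """Emit 'buy' when the fast SMA crosses above the slow SMA,
    'sell' when it crosses below, and 'hold' otherwise."""
    f, s = sma(prices, fast), sma(prices, slow)
    signals, prev = [], None
    for fi, si in zip(f, s):
        if fi is None or si is None:
            signals.append("hold")
            continue
        state = "above" if fi > si else "below"
        if prev == "below" and state == "above":
            signals.append("buy")
        elif prev == "above" and state == "below":
            signals.append("sell")
        else:
            signals.append("hold")
        prev = state
    return signals

print(crossover_signals([1, 2, 3, 2, 1, 2, 3]))
```

A real backtest would also track positions, transaction costs, and slippage over historical data; this sketch only shows the signal-generation core.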
Posted 1 week ago
1.0 - 3.0 years
2 - 4 Lacs
India
On-site
Location: Indore (Work from Office) Job Type: Full-Time About the Role: Engineer Sahab Education is looking for a passionate and knowledgeable AI/ML Mentor with 1 to 3 years of industry or teaching experience to join our growing team in Indore. This is a full-time, in-office role where you will guide and mentor students, helping them build strong foundations in Artificial Intelligence and Machine Learning through hands-on learning, real-world projects, and industry-relevant tools. Key Responsibilities: Deliver engaging and interactive sessions on AI/ML concepts, including Python for ML, Data Preprocessing, Supervised & Unsupervised Learning, Deep Learning, NLP, Computer Vision, and more. Guide students in building capstone projects, participating in hackathons, and solving real-world problems. Provide 1-on-1 mentorship and doubt-solving support to help learners strengthen their understanding. Stay updated with the latest AI/ML trends and continuously improve the course material. Conduct regular assessments, feedback sessions, and progress reviews. Collaborate with curriculum designers and the academic team to enhance learning outcomes. Requirements: Bachelor's or Master’s degree in Computer Science, AI/ML, Data Science, or related field. 1–3 years of experience in AI/ML development or teaching/training roles. Proficient in Python and machine learning libraries like Scikit-learn, TensorFlow, Keras, Pandas, NumPy, etc. Strong communication skills and passion for mentoring students. Experience in building and deploying ML models is a plus. What We Offer: A collaborative and student-focused work environment. Opportunity to impact the careers of future tech professionals. Access to continuous learning and upskilling resources. Competitive salary with growth opportunities. If you’re passionate about AI/ML and love sharing knowledge, we’d love to hear from you! 
Apply now and become a part of Engineer Sahab Education’s mission to shape the next generation of AI/ML professionals. Job Type: Full-time Pay: ₹200,000.00 - ₹400,000.00 per year Supplemental Pay: Performance bonus Shift allowance Work Location: In person
Posted 1 week ago
8.0 - 10.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Join our Team. About this opportunity: We are seeking an experienced Senior Backend Engineer to join our Intelligent Automation and AI engineering team. This role is critical to driving the development of scalable backend systems that enable the enterprise-wide adoption of Generative AI solutions. The ideal candidate will bring strong technical expertise in API orchestration, data engineering, and memory optimization, coupled with a passion for operationalizing AI at scale across business functions. As a key member, you will design and build backend frameworks that power intelligent agents, streamline workflows, and unlock value for diverse business units. What you will do: Architect and develop robust backend systems to enable enterprise AI solutions across finance, operations, HR, and customer support functions. Design and implement data preprocessing and chunking pipelines to ensure efficient use of Large Language Models (LLMs) and cost optimization. Integrate vector databases and relational databases to support hybrid memory architectures for AI agents. Develop API orchestration layers leveraging tools such as LangChain, FastAPI/Django, and enterprise-grade middleware. Collaborate with business domain leaders to understand functional requirements and design domain-aware memory strategies for AI-powered workflows. Implement caching, validation, and summarization strategies to deliver accurate, cost-effective, and scalable AI solutions. Ensure secure, reliable integration of AI systems with internal enterprise platforms via REST/SOAP APIs. Document architectural blueprints, technical specifications, and operational workflows for enterprise-wide adoption. Stay ahead of trends in Generative AI and backend engineering to support enterprise transformation objectives. The skills you bring: 8–10 years of experience. Technical Expertise: Proven experience designing and delivering backend solutions for AI/ML applications at scale. 
Python Proficiency: Strong hands-on skills with Python (Pandas, NumPy) for data engineering and validation workflows. API Orchestration: Expertise in developing and managing APIs (FastAPI, Django) and orchestrating workflows. Data Management: Experience with vector databases and relational databases. LLM Integration: Working knowledge of cloud-based LLM APIs (Claude, OpenAI, Hugging Face) and memory optimization techniques. Enterprise Mindset: Ability to work within large, complex organizations, balancing innovation with operational stability and security requirements. Collaboration: Strong interpersonal skills to work with cross-functional business and IT teams in a transformation context. Preferred: Exposure to agentic AI architectures and designing token-efficient AI workflows for enterprise use cases. Why join Ericsson? At Ericsson, you’ll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what’s possible. To build solutions never seen before to some of the world’s toughest problems. You’ll be challenged, but you won’t be alone. You’ll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next. What happens once you apply? You can find all you need to know about what our typical hiring process looks like on Ericsson's careers pages. Encouraging a diverse and inclusive organization is core to our values at Ericsson; that's why we champion it in everything we do. We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer. Primary country and city: India (IN) || Gurgaon Req ID: 769360
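The chunking pipelines this role describes exist to split text so it fits LLM context windows while controlling token cost. A minimal pure-Python sketch of one common scheme, fixed-size windows with overlap so context is not lost at boundaries, is shown below; the scheme and parameters are illustrative, not necessarily what this team uses.

```python
def chunk(tokens, size, overlap):
    """Split a token list into windows of `size` tokens, each window
    sharing `overlap` tokens with the previous one."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    step = size - overlap
    chunks = []
    for start in range(0, len(tokens), step):
        chunks.append(tokens[start:start + size])
        if start + size >= len(tokens):
            break  # last window already covers the tail
    return chunks

tokens = list(range(10))
print(chunk(tokens, size=4, overlap=1))
# each chunk repeats the last token of the previous chunk
```

Tuning `size` against the model's context window and `overlap` against how much boundary context the task needs is exactly the kind of cost/accuracy trade-off the posting alludes to.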
Posted 1 week ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About the Role: We are looking for a highly skilled and experienced Machine Learning / AI Engineer to join our team at Zenardy. The ideal candidate needs to have a proven track record of building, deploying, and optimizing machine learning models in real-world applications. You will be responsible for designing scalable ML systems, collaborating with cross-functional teams, and driving innovation through AI-powered solutions. Location: Chennai & Hyderabad Key Responsibilities: Design, develop, and deploy machine learning models to solve complex business problems. Work across the full ML lifecycle: data collection, preprocessing, model training, evaluation, deployment, and monitoring. Collaborate with data engineers, product managers, and software engineers to integrate ML models into production systems. Conduct research and stay up-to-date with the latest ML/AI advancements, applying them where appropriate. Optimize models for performance, scalability, and robustness. Document methodologies, experiments, and findings clearly for both technical and non-technical audiences. Mentor junior ML engineers or data scientists as needed. Required Qualifications: Bachelor’s or Master’s degree in Computer Science, Machine Learning, Data Science, or related field (Ph.D. is a plus). Minimum of 5 hands-on ML/AI projects, preferably in production or with real-world datasets. Proficiency in Python and ML libraries/frameworks like TensorFlow, PyTorch, Scikit-learn, XGBoost. Solid understanding of core ML concepts: supervised/unsupervised learning, neural networks, NLP, computer vision, etc. Experience with model deployment using APIs, containers (Docker), cloud platforms (AWS/GCP/Azure). Strong data manipulation and analysis skills using Pandas, NumPy, and SQL. Knowledge of software engineering best practices: version control (Git), CI/CD, unit testing. Preferred Skills: Experience with MLOps tools (MLflow, Kubeflow, SageMaker, etc.) 
Familiarity with big data technologies like Spark, Hadoop, or distributed training frameworks. Experience working in Fintech environments would be a plus. Strong problem-solving mindset with excellent communication skills. Experience working with vector databases. Understanding of RAG vs. fine-tuning vs. prompt engineering.
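For the RAG understanding this posting asks about, here is a minimal sketch of the retrieval step in pure Python, with bag-of-words cosine similarity standing in for learned embeddings and a vector database. Real systems would use an embedding model plus an approximate-nearest-neighbor index; the documents here are toy examples.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    """Return the k documents most similar to the query: the 'R'
    in a RAG pipeline. Retrieved text would then be pasted into
    the LLM prompt as grounding context."""
    q = Counter(query.lower().split())
    scored = sorted(docs,
                    key=lambda d: cosine(q, Counter(d.lower().split())),
                    reverse=True)
    return scored[:k]

docs = [
    "the cat sat on the mat",
    "stock prices rose sharply today",
    "the dog chased the cat",
]
print(retrieve("cat on a mat", docs))
```

The contrast the posting hints at: RAG injects retrieved context at inference time, fine-tuning bakes knowledge into the weights, and prompt engineering changes only the instructions; retrieval like the above is the cheapest to update.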
Posted 1 week ago
1.0 - 3.0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
Location: Indore (Work from Office) Job Type: Full-Time About the Role: Engineer Sahab Education is looking for a passionate and knowledgeable AI/ML Mentor with 1 to 3 years of industry or teaching experience to join our growing team in Indore. This is a full-time, in-office role where you will guide and mentor students, helping them build strong foundations in Artificial Intelligence and Machine Learning through hands-on learning, real-world projects, and industry-relevant tools. Key Responsibilities: Deliver engaging and interactive sessions on AI/ML concepts, including Python for ML, Data Preprocessing, Supervised & Unsupervised Learning, Deep Learning, NLP, Computer Vision, and more. Guide students in building capstone projects, participating in hackathons, and solving real-world problems. Provide 1-on-1 mentorship and doubt-solving support to help learners strengthen their understanding. Stay updated with the latest AI/ML trends and continuously improve the course material. Conduct regular assessments, feedback sessions, and progress reviews. Collaborate with curriculum designers and the academic team to enhance learning outcomes. Requirements: Bachelor's or Master’s degree in Computer Science, AI/ML, Data Science, or related field. 1–3 years of experience in AI/ML development or teaching/training roles. Proficient in Python and machine learning libraries like Scikit-learn, TensorFlow, Keras, Pandas, NumPy, etc. Strong communication skills and passion for mentoring students. Experience in building and deploying ML models is a plus. What We Offer: A collaborative and student-focused work environment. Opportunity to impact the careers of future tech professionals. Access to continuous learning and upskilling resources. Competitive salary with growth opportunities. If you’re passionate about AI/ML and love sharing knowledge, we’d love to hear from you! 
Apply now and become a part of Engineer Sahab Education’s mission to shape the next generation of AI/ML professionals.
Posted 1 week ago
3.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Job Title: Senior Analyst - Data Analytics Location: Pan India Candidate Specifications: Candidate should have 3+ years of experience in Data Analytics and reporting, Databricks, Power BI, Snowflake. Strong technical expertise in Power BI, Microsoft Fabric, Snowflake, SQL, Python, and R. Experience with Azure Data Factory, Databricks, Synapse Analytics, and AWS Glue. Hands-on experience in building and deploying machine learning models. Ability to translate complex data into actionable insights. Excellent problem-solving and communication skills. Job Description: Design and build interactive dashboards and reports using Power BI and Microsoft Fabric. Perform advanced data analysis and visualisation to support business decision-making. Develop and maintain data pipelines and queries using SQL and Python. Apply data science techniques such as predictive modelling, classification, clustering, and regression to solve business problems and uncover actionable insights. Perform feature engineering and data preprocessing to prepare datasets for machine learning workflows. Build, validate, and tune machine learning models using tools such as scikit-learn, TensorFlow, or similar frameworks. Deploy models into production environments and monitor their performance over time, ensuring they deliver consistent value. Collaborate with stakeholders to translate business questions into data science problems and communicate findings in a clear, actionable manner. Use statistical techniques and hypothesis testing to validate assumptions and support decision-making. Document data science workflows and maintain reproducibility of experiments and models. Support the Data Analytics Manager in delivering analytics projects and mentoring junior analysts. 
Professional Certifications (preferred or in progress): Microsoft Certified: Power BI Data Analyst Associate (PL-300) SnowPro Core Certification (Snowflake) Microsoft Certified: Azure Data Engineer Associate AWS Certified: Data Analytics – Specialty
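Among the data science techniques this posting lists, regression has the simplest worked form: ordinary least squares for a single feature has a closed-form solution. This pure-Python sketch uses made-up numbers purely for illustration.

```python
def fit_line(xs, ys):
    """Ordinary least squares for one feature: returns the
    (slope, intercept) minimizing the sum of squared residuals."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)            # variance term
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))  # covariance term
    slope = sxy / sxx
    return slope, my - slope * mx

xs = [1, 2, 3, 4]
ys = [3.1, 4.9, 7.2, 8.8]   # roughly y = 2x + 1 with noise
slope, intercept = fit_line(xs, ys)
print(slope, intercept)
```

In daily work this would of course be `scikit-learn`'s `LinearRegression` or `statsmodels`, but the closed form is useful for sanity-checking and interviews.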
Posted 1 week ago
0.0 - 4.0 years
10 - 15 Lacs
Pune, Maharashtra
On-site
Job Title: Senior AI Engineer Location: Pune Job Type: Full-Time Experience Required: 4 to 6 years Key Responsibilities: · Design, develop, and deploy scalable AI/ML solutions for real-world problems. · Build and train models using machine learning and deep learning frameworks (TensorFlow, PyTorch, etc.). · Collaborate with cross-functional teams to understand business needs and translate them into technical solutions. · Optimize and fine-tune models for performance and accuracy. · Maintain and monitor deployed models and continuously improve them based on feedback and data. · Conduct research and stay updated with the latest trends in AI and machine learning. · Mentor junior engineers and contribute to best practices in AI development. Requirements: · Strong programming skills in Python; familiarity with R, Java, or Scala is a plus. · Experience with machine learning libraries (Scikit-learn, XGBoost, LightGBM). · Hands-on experience in deep learning frameworks like TensorFlow, PyTorch, or Keras. · Solid understanding of data preprocessing, feature engineering, model evaluation, and MLOps practices. · Experience with cloud platforms such as AWS, GCP, or Azure. · Familiarity with NLP, computer vision, or generative AI models is a strong advantage. · Proficiency in using SQL and working with large-scale datasets. Preferred Qualifications: · Bachelor’s or Master’s degree in Computer Science, Data Science, Artificial Intelligence, or related field. · Experience in deploying AI models to production environments. · Knowledge of containerization tools like Docker and orchestration frameworks like Kubernetes. · Experience working in Agile/Scrum development processes. Job Type: Full-time Pay: ₹1,000,000.00 - ₹1,500,000.00 per year Benefits: Provident Fund Ability to commute/relocate: Pune, Maharashtra: Reliably commute or planning to relocate before starting work (Required) Application Question(s): What is your CTC? What is your Expected CTC? 
What is Your Notice Period? Education: Bachelor's (Preferred) Experience: AI : 4 years (Required) Work Location: In person
Posted 1 week ago
8.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Key Responsibilities

Hands-on Development:
Develop and implement machine learning models and algorithms, including supervised, unsupervised, deep learning, and reinforcement learning techniques.
Implement Generative AI solutions using technologies such as RAG (Retrieval-Augmented Generation), vector databases, and frameworks such as LangChain, Hugging Face, and Agentic AI.
Utilize popular AI/ML frameworks and libraries such as TensorFlow, PyTorch, and scikit-learn.
Design and deploy NLP models and techniques, including text classification, RNNs, CNNs, and Transformer-based models like BERT.
Ensure robust end-to-end AI/ML solutions, from data preprocessing and feature engineering to model deployment and monitoring.

Technical Proficiency:
Demonstrate strong programming skills in languages commonly used for data science and ML, particularly Python.
Leverage cloud platforms and services for AI/ML, especially AWS, with knowledge of AWS SageMaker, Lambda, DynamoDB, S3, and other AWS resources.

Mentorship:
Mentor and coach a team of data scientists and machine learning engineers, fostering skill development and professional growth.
Provide technical guidance and support, helping team members overcome challenges and achieve project goals.
Set technical direction and strategy for AI/ML projects, ensuring alignment with business goals and objectives.
Facilitate knowledge sharing and collaboration within the team, promoting best practices and continuous learning.

Strategic Advisory:
Collaborate with cross-functional teams to integrate AI/ML solutions into business processes and products.
Provide strategic insights and recommendations to support decision-making processes.
Communicate effectively with stakeholders at various levels, including technical and non-technical audiences.

Qualifications
Bachelor's degree in a relevant field (e.g., Computer Science) or an equivalent combination of education and experience.
Typically 8-10 years of relevant work experience in AI/ML/GenAI and 15+ years of overall work experience, with a proven ability to manage projects and activities.
Extensive experience with generative AI technologies, including RAG, vector databases, and frameworks such as LangChain, Hugging Face, and Agentic AI.
Proficiency in machine learning algorithms and techniques, including supervised and unsupervised learning, deep learning, and reinforcement learning.
Extensive experience with AI/ML frameworks and libraries such as TensorFlow, PyTorch, and scikit-learn.
Strong knowledge of natural language processing (NLP) techniques and models, including Transformer-based models like BERT.
Proficient programming skills in Python and experience with cloud platforms like AWS.
Experience with AWS cloud resources, including AWS SageMaker, Lambda, DynamoDB, and S3, is a plus.
Proven experience leading a team of data scientists or machine learning engineers on complex projects.
Strong project management skills, with the ability to prioritize tasks, allocate resources, and meet deadlines.
Excellent communication skills and the ability to convey complex technical concepts to diverse audiences.

Preferred Qualifications
Experience setting technical direction and strategy for AI/ML projects.
Experience in the insurance domain.
Ability to mentor and coach junior team members, fostering growth and development.
Proven track record of successfully managing AI/ML projects from conception to deployment.
Posted 1 week ago
1.0 - 5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Location: Cyber Towers, Hyderabad & Vijayawada
🏢 Company: Datavalley India Private Limited
📅 Type: Full-Time (Immediate Joiner)
🎓 Education: B.Tech in Computer Science, Data Science, Artificial Intelligence, or related branches
🧪 Experience: 1 - 5 Years

About Datavalley.ai:
Datavalley.ai is a forward-thinking AI company revolutionizing industries through advanced Machine Learning and Deep Learning solutions. Based in Hyderabad's tech core, Cyber Towers, we engineer scalable AI systems that transform raw data into powerful business intelligence. At Datavalley.ai, we train fresh minds into AI professionals who build with impact.

Roles & Responsibilities:
Design and implement Machine Learning and Deep Learning algorithms.
Work on data preprocessing, feature engineering, and model evaluation.
Build and deploy models using:
- Machine Learning: Linear Regression, Logistic Regression, Decision Trees, Random Forest, XGBoost, SVM, K-Means, PCA
- Deep Learning: ANN, CNN, RNN, LSTM, Transfer Learning (ResNet, VGG, BERT)
- AI Applications: NLP, Computer Vision, Recommendation Systems
Use Python and tools such as Scikit-learn, TensorFlow, Keras, and PyTorch.
Collaborate with teams to deploy models using Flask, FastAPI, or Streamlit.

Skills Required:
Strong programming skills in Python
Experience with NumPy, Pandas, Matplotlib, and Seaborn
Knowledge of Scikit-learn, TensorFlow, Keras, and PyTorch
Understanding of model evaluation metrics (Accuracy, Precision, Recall, F1-score)
Familiarity with SQL and basic knowledge of NoSQL (MongoDB)
Comfortable with Jupyter/Colab, Git, and version control
Basic exposure to Flask/FastAPI for deployment

Qualifications:
B.Tech in Computer Science, AI & ML, Data Science, or related disciplines
Strong foundation in algorithms, mathematics, and statistics
Ability to build, train, and evaluate ML/DL models
Project or internship in AI/ML is preferred

📩 To Apply: Send your resume to artishukla@datavalley.ai
Posted 1 week ago