
4489 NumPy Jobs - Page 11

JobPe aggregates these listings for easy access, but applications are made directly on the original job portal.

3.0 - 7.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Machine Learning/AI Engineer, your primary role will involve designing, developing, and implementing machine learning models and AI solutions using Python. You will collaborate closely with various teams to comprehend business requirements, pinpoint opportunities for leveraging machine learning and AI technologies, and create solutions to tackle complex challenges. Working with extensive datasets, you will apply statistical analysis and machine learning techniques to extract valuable insights and construct scalable and robust algorithms.

Key responsibilities:
- Understanding business problems in collaboration with stakeholders
- Collecting and preprocessing large datasets from diverse sources
- Developing machine learning models and AI algorithms using Python libraries like TensorFlow, PyTorch, scikit-learn, or similar
- Engineering features from raw data to enhance model performance and interpretability
- Training, validating, and fine-tuning machine learning models utilizing appropriate evaluation metrics and validation techniques
- Deploying machine learning models into production environments with a focus on scalability, reliability, and performance
- Monitoring model performance in production, conducting periodic model retraining, and addressing any arising issues
- Documenting code, algorithms, and processes to facilitate knowledge sharing and maintainability
- Staying informed about the latest advancements in machine learning and AI research, and exploring innovative solutions to enhance existing systems

Ideally, you should possess over 3 years of demonstrated experience in developing and deploying machine learning models and AI solutions using Python, with a preference for familiarity with deep learning frameworks. Proficiency in Python programming and libraries such as TensorFlow, PyTorch, scikit-learn, pandas, and NumPy is expected, along with a strong grasp of statistical concepts, linear algebra, calculus, and probability theory. Effective problem-solving skills, excellent communication abilities to interact with cross-functional teams and stakeholders, meticulous attention to detail in data analysis and model development, and a willingness to adapt to new technologies and changing project requirements are highly valued. Exposure to NetSuite cloud ERP/Platform is considered an added advantage.

This is a full-time position offering health insurance and leave encashment benefits. The work schedule involves fixed shifts from Monday to Friday. Applicants are required to have a minimum of 3 years of experience in deep learning and be located in Hyderabad, Telangana, with work being conducted in person.
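For context on the train/validate/evaluate workflow this posting describes, here is a minimal, hedged scikit-learn sketch; the synthetic dataset, logistic-regression model, and metrics are illustrative assumptions, not anything specified by the employer.

```python
# Minimal sketch of a train/validate/evaluate loop with scikit-learn.
# The synthetic dataset and logistic-regression model are illustrative only.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = LogisticRegression(max_iter=1_000)
model.fit(X_train, y_train)

preds = model.predict(X_test)
print("accuracy:", accuracy_score(y_test, preds))
print("f1:", f1_score(y_test, preds))
```

In practice the split strategy and evaluation metrics would be chosen to match the business problem rather than hard-coded as above.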

Posted 3 days ago

Apply

0 years

0 Lacs

India

Remote

🤖 Data Science Intern – Remote | Real Projects, Real Skills, Real Impact
📍 Location: Remote / Virtual
💼 Type: Internship (Unpaid)
🎁 Perks: Certificate after Completion || Letter of Recommendation (6 Months)
🕒 Schedule: Flexible (5–7 hours/week)

Are you passionate about data, AI, and problem-solving? Join Skillfied Mentor and step into the world of data science with real projects, hands-on mentoring, and practical tools. This virtual internship is designed for students and fresh graduates who want to build real experience working on machine learning models, data pipelines, and predictive analytics.

🔧 What You’ll Do:
- Clean and prepare data for analysis and modeling
- Build and train basic machine learning models using Python
- Work on tools like Pandas, NumPy, Scikit-Learn, Jupyter
- Present findings in visual formats using Matplotlib/Seaborn
- Collaborate with peers and mentors during project reviews

🎓 What You’ll Gain:
✅ Full Python Course included during internship
✅ Hands-on experience with real-world ML and DS projects
✅ Internship Certificate + LOR (6 Months)
✅ Projects that enhance your resume and portfolio
✅ Work with tools & libraries used in the industry
✅ Fully remote – manage your schedule (5–7 hrs/week)

🗓️ Application Deadline: 30th July 2025
👉 Apply now and launch your Data Science journey with Skillfied Mentor!

Posted 3 days ago

Apply

3.0 - 7.0 years

0 Lacs

Bhubaneswar

On-site

At Rhythm, our values serve as the cornerstone of our organization. We are deeply committed to customer success, fostering innovation, and nurturing our employees. These values shape our decisions, actions, and interactions, ensuring that we consistently create a positive impact on the world around us.

Rhythm Innovations is currently looking for a skilled and enthusiastic Machine Learning (ML) Developer to conceptualize, create, and implement machine learning models that enhance our supply chain risk management and other cutting-edge solutions. As an ML Developer, you will collaborate closely with our AI Architect and diverse teams to construct intelligent systems that tackle intricate business challenges and further our goal of providing unparalleled customer satisfaction.

Key Responsibilities:
- Model Development: Devise, execute, and train machine learning models utilizing cutting-edge algorithms and frameworks like TensorFlow, PyTorch, and scikit-learn.
- Data Preparation: Process, refine, and convert extensive datasets for the training and assessment of ML models.
- Feature Engineering: Identify and engineer pertinent features to enhance model performance and precision.
- Algorithm Optimization: Explore and implement advanced algorithms to cater to specific use cases such as classification, regression, clustering, and anomaly detection.
- Integration: Coordinate with software developers to integrate ML models into operational systems and guarantee smooth functionality.
- Performance Evaluation: Assess model performance using suitable metrics and consistently refine for accuracy, efficacy, and scalability.
- MLOps: Aid in establishing and overseeing CI/CD pipelines for model deployment and monitoring in production environments.
- Research and Development: Stay abreast of the latest breakthroughs in Gen AI and AI/ML technologies and propose inventive solutions.
- Collaboration: Engage closely with data engineers, product teams, and stakeholders to grasp requirements and deliver customized ML solutions.

Requirements:
- Educational Background: Bachelor's in Engineering in Computer Science, Data Science, Artificial Intelligence, or a related field.
- Experience: 3 to 6 years of practical experience in developing and deploying machine learning models.

Technical Skills:
- Proficiency in Python and ML libraries/frameworks (e.g., scikit-learn, TensorFlow, PyTorch).
- Experience with data manipulation tools like Pandas and NumPy, and visualization libraries such as Matplotlib or Seaborn.
- Familiarity with big data frameworks (Hadoop, Spark) is advantageous.
- Knowledge of SQL/NoSQL databases and data pipeline tools (e.g., Apache Airflow).
- Hands-on experience with cloud platforms (AWS, Azure, Google Cloud) and their Gen AI and AI/ML services.
- Thorough understanding of supervised and unsupervised learning, deep learning, and reinforcement learning.
- Exposure to MLOps practices and model deployment pipelines.

Soft Skills:
- Strong problem-solving and analytical abilities.
- Effective communication and teamwork skills.
- Capability to thrive in a dynamic, collaborative environment.

Posted 3 days ago

Apply

0.0 - 4.0 years

0 Lacs

Pune, Maharashtra

On-site

The Junior AI/ML Engineer role at Fulcrum Digital is an opportunity for an aspiring AI innovator and technical enthusiast to kickstart their AI journey by contributing to the development of cutting-edge AI solutions. This hybrid work model position allows employees to work from the office twice a week, with office locations in Pune, Mumbai, or Coimbatore to choose from based on preference and convenience.

As a Junior AI/ML Engineer, you will collaborate with experienced professionals to build and implement innovative AI models and applications that solve real-world problems. This role offers more than just an entry-level experience by providing hands-on experience in developing and deploying AI/ML models, working on impactful projects, and learning and growing in a dynamic and innovative environment.

Key responsibilities include assisting in the development and implementation of AI/ML models and algorithms, contributing to data preprocessing and feature engineering processes, supporting model evaluation and optimization, collaborating on research initiatives, and assisting in deployment and monitoring of AI/ML solutions.

The ideal candidate should have a foundational understanding of machine learning concepts, programming skills in Python, familiarity with TensorFlow or PyTorch, and basic knowledge of data manipulation using libraries like Pandas and NumPy. Strong analytical, problem-solving, and communication skills are essential, along with a proactive and eager-to-learn attitude. The successful candidate will have a Bachelor's degree in Computer Science, Data Science, or a related field, a passion for artificial intelligence and machine learning, and a desire for continuous learning in the field of AI. Superpowers should include the ability to identify patterns in data, explain technical concepts clearly, consider ethical implications in AI development, and maintain an interest in staying updated with advancements in the field.

Joining Fulcrum Digital as a Junior AI/ML Engineer offers the opportunity to work on exciting AI projects, be mentored by experienced professionals, contribute to innovative technological solutions, and gain valuable experience in a rapidly evolving field. If you are ready to be part of the AI revolution and shape the future of technology, apply now by sending your resume to the provided email address with the subject line "Application for Junior AI/ML Engineer".

Posted 3 days ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

You will be joining Apexon, a digital-first technology services firm that specializes in accelerating business transformation and delivering human-centric digital experiences. At Apexon, we meet customers at every stage of the digital lifecycle and help them outperform their competition through speed and innovation. With a focus on AI, analytics, app development, cloud, commerce, CX, data, DevOps, IoT, mobile, quality engineering, and UX, we leverage our deep expertise in BFSI, healthcare, and life sciences to help businesses capitalize on the opportunities presented by the digital world. Our reputation is built on a comprehensive suite of engineering services, a commitment to solving our clients' toughest technology problems, and a dedication to continuous improvement. With backing from Goldman Sachs Asset Management and Everstone Capital, Apexon has a global presence with 15 offices and 10 delivery centers across four continents.

As a part of our #HumanFirstDIGITAL initiative, you will be expected to excel in data analysis, VBA, Macros, and Excel. Your responsibilities will include monitoring and supporting healthcare operations, addressing client queries, and effectively communicating with stakeholders. Proficiency in Python scripting, particularly in pandas, numpy, and ETL pipelines, is essential. You should be able to independently understand client requirements and queries and demonstrate strong skills in data analysis. Knowledge of Azure Synapse basics, Azure DevOps basics, Git, T-SQL, and SQL Server will be beneficial.

At Apexon, we are committed to diversity and inclusion, and our benefits and rewards program is designed to recognize your skills and contributions, enhance your learning and upskilling experience, and provide support for you and your family. As an Apexon Associate, you will have access to continuous skill-based development, opportunities for career growth, comprehensive health and well-being benefits, and support. In addition to a supportive work environment, we offer a range of benefits, including group health insurance covering a family of 4, term insurance, accident insurance, paid holidays, earned leaves, paid parental leave, learning and career development opportunities, and employee wellness programs.

Posted 3 days ago

Apply

2.0 - 6.0 years

0 Lacs

Hyderabad, Telangana

On-site

You are ready to gain the skills and experience required to progress within your role and advance your career, and there is an excellent software engineering opportunity waiting for you. As a Software Engineer II at JPMorgan Chase in the Corporate Technology organization, you play a crucial role in the Data Services Team dedicated to enhancing, building, and delivering trusted, market-leading Generative AI products securely, stably, and at scale. As a part of the software engineering team, you will implement software solutions by designing, developing, and troubleshooting multiple components within technical products, applications, or systems, while continuously enhancing your skills and experience.

Your responsibilities include executing standard software solutions, writing secure and high-quality code in at least one programming language, designing and troubleshooting with consideration of upstream and downstream systems, applying tools within the Software Development Life Cycle for automation, and employing technical troubleshooting to solve technical problems of basic complexity. Additionally, you will analyze large datasets to identify issues and contribute to decision-making for secure and stable application development, learn and apply system processes for developing secure code and systems, and contribute to a team culture of diversity, equity, inclusion, and respect.

The qualifications, capabilities, and skills required for this role include formal training or certification in software engineering concepts with a minimum of 2 years of applied experience; experience with large datasets and predictive models; experience developing and maintaining code in a corporate environment using modern programming languages and database querying languages; proficiency in tools and languages such as Python, TensorFlow, PyTorch, PySpark, NumPy, pandas, and SQL; and familiarity with cloud services such as AWS/Azure. You should have a strong ability to analyze and derive insights from data, experience across the Software Development Life Cycle, exposure to agile methodologies, and emerging knowledge of software applications and technical processes within a technical discipline. Preferred qualifications include an understanding of SDLC cycles for data platforms, major upgrade releases, patches, bug/hot fixes, and associated documentation.

Posted 3 days ago

Apply

12.0 years

0 Lacs

Greater Kolkata Area

On-site

Job Description
We are looking for a Python Developer with 12 years of hands-on experience who is passionate about clean code, problem-solving, and building scalable systems. Knowledge of Artificial Intelligence (AI) or Machine Learning (ML) is a plus.

Key Responsibilities:
- Develop, test, and maintain scalable Python applications
- Write reusable and efficient code
- Integrate user-facing elements with server-side logic
- Work closely with frontend developers, data scientists, and product teams
- Troubleshoot and debug applications
- Participate in code reviews and contribute to team knowledge-sharing

Skills Required:
- Strong proficiency in Python
- Experience with web frameworks such as FastAPI, Django, or Flask
- Good understanding of REST APIs and database systems (SQL/NoSQL)
- Familiarity with version control (Git)
- Strong debugging and problem-solving skills
- Basic knowledge of AI/ML concepts is a plus
- Excellent communication and team collaboration skills

Preferred Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field
- Exposure to cloud platforms like AWS or Azure is an advantage
- Knowledge of libraries like NumPy, Pandas, or TensorFlow is desirable

(ref: hirist.tech)
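Since the posting lists FastAPI, Django, or Flask and REST APIs among the required skills, here is a minimal, hedged FastAPI sketch of the kind of endpoint work implied; the routes and data model are hypothetical, not part of the role description.

```python
# Minimal FastAPI sketch of a small REST service; route names and the Item
# model are hypothetical examples only.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Item(BaseModel):
    name: str
    price: float

@app.get("/health")
def health():
    # Simple liveness endpoint.
    return {"status": "ok"}

@app.post("/items")
def create_item(item: Item):
    # A real service would persist this to a SQL/NoSQL store.
    return {"created": item.name, "price": item.price}
```

A service like this would typically be run with an ASGI server such as uvicorn, e.g. `uvicorn main:app --reload`.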

Posted 3 days ago

Apply

3.0 - 7.0 years

0 Lacs

Hyderabad, Telangana

On-site

Genpact (NYSE: G) is a global professional services and solutions firm dedicated to shaping the future by delivering impactful outcomes. With a team of over 125,000 professionals spread across more than 30 countries, we are characterized by our inherent curiosity, entrepreneurial spirit, and commitment to creating enduring value for our clients. Fueled by our overarching purpose of continually striving towards a world that functions better for individuals, we partner with and enhance leading enterprises, including members of the prestigious Fortune Global 500. Our core competencies revolve around in-depth business and industry expertise, digital operational services, and proficiency in data, technology, and AI.

We are currently seeking applications for the position of Business Analyst - Data Scientist to join our dynamic team. As a Business Analyst - Data Scientist at Genpact, you will play a pivotal role in the development and implementation of NLP (Natural Language Processing) models and algorithms, extracting actionable insights from textual data, and collaborating with cross-functional teams to deliver innovative AI solutions.

**Responsibilities:**

**Model Development:**
- Proficiency in various statistical, machine learning, and ensemble algorithms.
- Strong understanding of time series algorithms and forecasting use cases.
- Ability to discern the strengths and weaknesses of different models and select appropriate ones for specific problems.
- Proficiency in evaluating metrics and recommending suitable evaluation metrics for different problem types.

**Data Analysis:**
- Extracting meaningful insights from structured data.
- Preprocessing data for machine learning/artificial intelligence applications.

**Collaboration:**
- Close collaboration with data scientists, engineers, and business stakeholders.
- Providing technical guidance and mentorship to team members.

**Integration and Deployment:**
- Integrating machine learning models into production systems.
- Implementing CI/CD pipelines for continuous integration and deployment.

**Documentation and Training:**
- Documenting processes, models, and results.
- Providing training and support to stakeholders on NLP techniques and tools.

**Qualifications we seek in you:**

**Minimum Qualifications / Skills:**
- Bachelor's degree in computer science, engineering, or a related field.
- Proficient programming skills in Python and R.
- Experience with data science frameworks such as scikit-learn and NumPy.
- Knowledge of machine learning concepts and frameworks like TensorFlow and PyTorch.
- Strong problem-solving and analytical capabilities.
- Excellent communication and collaboration skills.

**Preferred Qualifications / Skills:**
- Experience in predictive analytics and machine learning techniques.
- Proficiency in Python/R or any other open-source programming language.
- Building and implementing models, using algorithms, and running simulations with various tools.
- Familiarity with visualization tools such as Tableau, Power BI, Qlikview, etc.
- Proficiency in applied statistics skills, including distributions, statistical testing, regression, etc.
- Knowledge and experience in tools and techniques such as forecasting, linear regression, logistic regression, and machine learning algorithms (e.g., Random Forest, Gradient Boosting, SVM, XGBoost, Deep Learning).
- Experience with big data technologies (Hadoop, Spark).
- Familiarity with cloud platforms like AWS, Azure, GCP.

**Job Details:**
**Title:** Business Analyst - Data Scientist
**Primary Location:** India-Hyderabad
**Education Level:** Bachelor's / Graduation / Equivalent
**Job Posting:** Apr 1, 2025, 2:47:23 AM
**Unposting Date:** May 1, 2025, 1:29:00 PM
**Master Skills List:** Digital
**Job Category:** Full Time

Posted 3 days ago

Apply

5.0 - 9.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Back-End Developer at our company, you will be responsible for developing an AI-driven prescriptive remediation model for SuperZoom, CBRE's data quality platform. Your primary focus will be on analyzing invalid records flagged by data quality rules and providing suggestions for corrected values based on historical patterns. It is crucial that the model you develop learns from past corrections to continuously enhance its future recommendations. The ideal candidate for this role should possess a solid background in machine learning, natural language processing (NLP), data quality, and backend development.

Your key responsibilities will include developing a prescriptive remediation model to analyze and suggest corrections for bad records, implementing a feedback loop for continuous learning, building APIs and backend workflows for seamless integration, designing a data pipeline for real-time processing of flagged records, optimizing model performance for large-scale datasets, and collaborating effectively with data governance teams, data scientists, and front-end developers. Additionally, you will be expected to ensure the security, scalability, and performance of the system in handling sensitive data.

To excel in this role, you should have at least 5 years of backend development experience with a focus on AI/ML-driven solutions. Proficiency in Python, including skills in Pandas, PySpark, and NumPy, is essential. Experience with machine learning libraries like Scikit-Learn, TensorFlow, or Hugging Face Transformers, along with a solid understanding of data quality, fuzzy matching, and NLP techniques for text correction, will be advantageous. Strong SQL skills and familiarity with databases such as PostgreSQL, Snowflake, or MS SQL Server are required, as well as expertise in building RESTful APIs and integrating ML models into production systems. Your problem-solving and analytical abilities will also be put to the test in handling diverse data quality issues effectively.

Nice-to-have skills for this role include experience with vector databases (e.g., Pinecone, Weaviate) for similarity search, familiarity with LLMs and fine-tuning for data correction tasks, experience with Apache Airflow for workflow automation, and knowledge of reinforcement learning to enhance remediation accuracy over time.

Your success in this role will be measured by the accuracy and relevance of suggestions provided for data quality issues in flagged records, improved model performance through iterative learning, seamless integration of the remediation model into SuperZoom, and on-time delivery of backend features in collaboration with the data governance team.
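The posting's core idea of suggesting corrected values from historical patterns via fuzzy matching can be illustrated with a minimal standard-library sketch; the field values and similarity cutoff below are hypothetical, and this is not CBRE's actual model.

```python
# Minimal sketch of fuzzy-match remediation suggestions using only the
# standard library; the historical values and flagged record are hypothetical.
from difflib import get_close_matches

# Values previously accepted or applied as corrections for this field (hypothetical).
historical_values = ["New York", "Los Angeles", "San Francisco", "Chicago"]

def suggest_correction(bad_value, history, cutoff=0.6):
    """Return the closest historical value, or None if nothing is similar enough."""
    lookup = {h.lower(): h for h in history}            # case-insensitive matching
    matches = get_close_matches(bad_value.lower(), list(lookup), n=1, cutoff=cutoff)
    return lookup[matches[0]] if matches else None

print(suggest_correction("new yrok", historical_values))   # -> New York
```

A production system along the lines the posting describes would add a feedback loop so that accepted corrections are appended to the history the matcher draws from.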

Posted 3 days ago

Apply

3.0 - 7.0 years

0 Lacs

Indore, Madhya Pradesh

On-site

As a Data Science Developer in Indore, you will be responsible for formulating, suggesting, and managing data-driven projects to advance the business's interests. Your duties will include collating and cleaning data from various sources for junior data scientists, delegating tasks to ensure project success, and monitoring the performance of junior team members while offering guidance. You will utilize advanced statistical procedures to derive actionable insights, cross-validate models, and produce non-technical reports on project outcomes. Additionally, you will propose strategies to apply insights to business decisions and stay updated on Data Science advancements.

To qualify for this role, you should hold an advanced degree in data science, statistics, computer science, or a related field. You must have extensive experience as a data scientist and be proficient in Python, NumPy, Pandas, scikit-learn, SQL Stream, or similar AWS services. A strong grasp of SQL, machine learning principles, and a track record of leading data-centric projects are essential. Your exceptional supervision and mentorship skills, coupled with the ability to create a collaborative work environment, are crucial for success in this position. Compliance with ethical standards is also expected.
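As an illustration of the cross-validation work mentioned above, here is a minimal, hedged scikit-learn sketch; the synthetic regression data and random-forest model are assumptions for the example only.

```python
# Minimal cross-validation sketch with scikit-learn; the synthetic data and
# random-forest model are illustrative assumptions, not the team's actual setup.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=500, n_features=10, noise=0.3, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0)

# 5-fold cross-validated R^2 scores.
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print("mean R^2:", scores.mean(), "+/-", scores.std())
```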

Posted 3 days ago

Apply

3.0 - 7.0 years

0 Lacs

Haryana

On-site

As a Senior Machine Learning Engineer on our AI/ML team, you will be responsible for designing and building intelligent search systems. Your focus will be on utilizing cutting-edge techniques in vector search, semantic similarity, and natural language processing to create innovative solutions.

Your key responsibilities will include designing and implementing high-performance vector search systems using tools like FAISS, Milvus, Weaviate, or Pinecone. You will develop semantic search solutions that leverage embedding models and similarity scoring for precise and context-aware retrieval. Additionally, you will be expected to research and integrate the latest advancements in ANN algorithms, transformer-based models, and embedding generation. Collaboration with cross-functional teams, including data scientists, backend engineers, and product managers, will be essential to bring ML-driven features from concept to production. Furthermore, maintaining clear documentation of methodologies, experiments, and findings for technical and non-technical stakeholders will be part of your role.

To qualify for this position, you should have at least 3 years of experience in Machine Learning, with a focus on NLP and vector search. A deep understanding of semantic embeddings, transformer models (e.g., BERT, RoBERTa, GPT), and hands-on experience with vector search frameworks is required. You should also possess a solid understanding of similarity search techniques such as cosine similarity, dot-product scoring, and clustering methods. Strong programming skills in Python and familiarity with libraries like NumPy, Pandas, Scikit-learn, and Hugging Face Transformers are necessary. Exposure to cloud platforms, preferably Azure, and container orchestration tools like Docker and Kubernetes is preferred.

This is a full-time position with benefits including health insurance, internet reimbursement, and Provident Fund. The work schedule consists of day shifts, fixed shifts, and morning shifts, and the work location is in-person. The application deadline for this role is 18/04/2025.
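The cosine-similarity and dot-product scoring named in the requirements can be sketched in a few lines of NumPy; the random vectors below stand in for real sentence embeddings, and a production system would use an ANN index such as FAISS rather than the brute-force ranking shown here.

```python
# Minimal sketch of cosine-similarity ranking over embedding vectors; the
# vectors are random stand-ins for real sentence embeddings.
import numpy as np

rng = np.random.default_rng(0)
doc_embeddings = rng.normal(size=(1000, 384))   # e.g. 384-dim sentence embeddings
query_embedding = rng.normal(size=384)

# Normalise so that a dot product equals cosine similarity.
doc_norm = doc_embeddings / np.linalg.norm(doc_embeddings, axis=1, keepdims=True)
query_norm = query_embedding / np.linalg.norm(query_embedding)

scores = doc_norm @ query_norm                  # cosine similarity per document
top_k = np.argsort(scores)[::-1][:5]            # indices of the 5 best matches
print(top_k, scores[top_k])
```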

Posted 4 days ago

Apply

5.0 - 10.0 years

0 Lacs

Haryana

On-site

The Senior AI Engineer - Agentic AI position at JMD Megapolis, Gurugram requires a minimum of 5+ years of experience in Machine Learning engineering, Data Science, or similar roles focusing on applied data science and entity resolution. You will be expected to have a strong background in machine learning, data mining, and statistical analysis for model development, validation, implementation, and product integration. Proficiency in programming languages like Python or Scala, along with experience in working with data manipulation and analysis libraries such as Pandas, NumPy, and scikit-learn, is essential. Additionally, experience with large-scale data processing frameworks like Spark, proficiency in SQL and database concepts, and a solid understanding of feature engineering, dimensionality reduction, and data preprocessing techniques are required.

As a Senior AI Engineer, you should possess excellent problem-solving skills and the ability to devise creative solutions to complex data challenges. Strong communication skills are crucial for effective collaboration with cross-functional teams and explaining technical concepts to non-technical stakeholders. Attention to detail, the ability to work independently, and a passion for staying updated with the latest advancements in the field of data science are desirable traits for this role.

The ideal candidate for this position would hold a Master's or PhD in Computer Science, Data Science, Statistics, or a related quantitative field. They should have 5-10 years of industry experience in developing AI solutions, including machine learning and deep learning models. Strong programming skills in Python and familiarity with libraries such as TensorFlow, PyTorch, or scikit-learn are necessary. Furthermore, a solid understanding of machine learning algorithms, statistical analysis, and data preprocessing techniques, along with experience in working with large datasets to implement scalable AI solutions, is required. Proficiency in data visualization and reporting tools, knowledge of cloud platforms like AWS, Azure, and Google Cloud for AI deployment, and familiarity with software development practices and version control systems are all valued skills. Problem-solving abilities, creative thinking to overcome challenges, and strong communication and teamwork skills to collaborate effectively with cross-functional teams are essential for success in this role.

Posted 4 days ago

Apply

3.0 - 7.0 years

0 Lacs

Haryana

On-site

Are you passionate about data and coding? Do you enjoy working in a fast-paced and dynamic start-up environment? If so, we are looking for a talented Python developer to join our team! We are a data consultancy start-up with a global client base, headquartered in London, UK, and we are looking for someone to join us full time on-site in our cool office in Gurugram.

Uptitude is a forward-thinking consultancy that specializes in providing exceptional data and business intelligence solutions to clients worldwide. Our team is passionate about empowering businesses with data-driven insights, enabling them to make informed decisions and achieve remarkable results. At Uptitude, we embrace a vibrant and inclusive culture, where innovation, excellence, and collaboration thrive.

As a Python Developer at Uptitude, you will be responsible for developing high-quality, scalable, and efficient software solutions. Your primary focus will be on designing and implementing Python-based applications, integrating data sources, and working closely with the data and business intelligence teams. You will have the opportunity to contribute to all stages of the software development life cycle, from concept and design to testing and deployment. In addition to your technical skills, you should be a creative thinker, have effective communication skills, and be comfortable working in a fast-paced and dynamic environment.

Requirements:
- 3-5 years of experience as a Python Developer or in a similar role.
- Strong proficiency in Python and its core libraries (e.g., Pandas, NumPy, Matplotlib).
- Proficiency in web frameworks (e.g., Flask, Django) and RESTful APIs.
- Working knowledge of database technologies (e.g., Postgres, Redis, RDBMS) and data modeling concepts.
- Hands-on experience with advanced Excel.
- Ability to work with cross-functional teams and communicate complex ideas to non-technical stakeholders.
- Awareness of ISO 27001; creative thinker and problem solver.
- Strong attention to detail and ability to work in a fast-paced environment.
- Head office based in London, UK, with the role located in Gurugram, India.

At Uptitude, we embrace a set of core values that guide our work and define our culture:
- Be Awesome: Strive for excellence in everything you do, continuously improving your skills and delivering exceptional results.
- Step Up: Take ownership of challenges, be proactive, and seek opportunities to contribute beyond your role.
- Make a Difference: Embrace innovation, think creatively, and contribute to the success of our clients and the company.
- Have Fun: Foster a positive and enjoyable work environment, celebrating achievements and building strong relationships.

Uptitude values its employees and offers a competitive benefits package, including:
- Competitive salary commensurate with experience and qualifications.
- Private health insurance coverage.
- Offsite trips to encourage team building and knowledge sharing.
- Quarterly team outings to unwind and celebrate achievements.
- Corporate English lessons with a UK instructor.

We are a fast-growing company with a global client base, so this is an excellent opportunity for the right candidate to grow and develop their skills in a dynamic and exciting environment. If you are passionate about coding, have experience with Python, and want to be part of a team that is making a real impact, we want to hear from you!

Posted 4 days ago

Apply

0.0 - 4.0 years

0 Lacs

Delhi

On-site

As a Data Analyst Intern at our company based in Delhi, you will be responsible for aggregating, cleansing, and analyzing large datasets from various sources. Your role will involve engineering and optimizing complex SQL queries for data extraction, manipulation, and detailed analysis. Additionally, you will develop advanced Python scripts to automate data workflows and transformation processes and to create sophisticated visualizations. You will also be tasked with building dynamic dashboards and analytical reports using Excel's advanced features like Pivot Tables, VLOOKUP, and Power Query. Your key responsibilities include decoding intricate data patterns to extract actionable intelligence that drives strategic decision-making. It is essential to maintain strict data integrity, precision, and security protocols in all analytical outputs. As part of your role, you will design and implement automation frameworks to eliminate redundancies and improve operational efficiency.

To excel in this role, you should have a mastery of SQL, including crafting complex queries, optimizing joins, executing advanced aggregations, and efficiently structuring data with Common Table Expressions (CTEs). Proficiency in Python is crucial, with hands-on experience in data-centric libraries such as Pandas, NumPy, Matplotlib, and Seaborn for data analysis and visualization. Advanced Excel skills are also required, encompassing Pivot Tables, Macros, and Power Query to streamline data processing and enhance analytical efficiency. Furthermore, you should possess superior analytical acumen with exceptional problem-solving abilities and the capability to extract meaningful insights from complex datasets. Strong communication and presentation skills are essential to distill intricate data findings into compelling narratives for stakeholder interactions.

This position is offered as full-time, permanent, and internship job types. Benefits include paid sick time, paid time off, a day shift schedule, performance bonuses, and yearly bonuses. The work location is in person. If you are looking to apply your analytical skills in a dynamic environment and contribute to strategic decision-making through data analysis, this Data Analyst Intern position could be the perfect fit for you.
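To illustrate the kind of aggregation behind the pivot-table style reporting described above, here is a minimal pandas sketch; the sales data is made up for the example.

```python
# Minimal pandas sketch of a pivot-table style aggregation; the data is hypothetical.
import pandas as pd

df = pd.DataFrame({
    "region": ["North", "North", "South", "South", "South"],
    "month": ["Jan", "Feb", "Jan", "Jan", "Feb"],
    "revenue": [120, 135, 90, 110, 95],
})

# Revenue by region and month, analogous to an Excel Pivot Table.
report = pd.pivot_table(
    df, values="revenue", index="region", columns="month",
    aggfunc="sum", fill_value=0,
)
print(report)
```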

Posted 4 days ago

Apply

8.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

About the Company: Droisys is an innovation technology company focused on helping companies accelerate their digital initiatives from strategy and planning through execution. We leverage deep technical expertise, Agile methodologies, and data-driven intelligence to modernize systems of engagement and simplify human/tech interaction. Amazing things happen when we work in environments where everyone feels a true sense of belonging and when candidates have the requisite skills and opportunities to succeed. At Droisys, we invest in our talent and support career growth, and we are always on the lookout for amazing talent who can contribute to our growth by delivering top results for our clients. Join us to challenge yourself and accomplish work that matters.

We are seeking a highly experienced Computer Vision Architect with deep expertise in Python to design and lead the development of cutting-edge vision-based systems. The ideal candidate will architect scalable solutions that leverage advanced image and video processing, deep learning, and real-time inference. You will collaborate with cross-functional teams to deliver high-performance, production-grade computer vision platforms.

Key Responsibilities:
- Architect and design end-to-end computer vision solutions for real-world applications (e.g., object detection, tracking, OCR, facial recognition, scene understanding)
- Lead R&D initiatives and prototype development using modern CV frameworks (OpenCV, PyTorch, TensorFlow, etc.)
- Optimize computer vision models for performance, scalability, and deployment on cloud, edge, or embedded systems
- Define architecture standards and best practices for Python-based CV pipelines
- Collaborate with product teams, data scientists, and ML engineers to translate business requirements into technical solutions
- Stay updated with the latest advancements in computer vision, deep learning, and AI
- Mentor junior developers and contribute to code reviews, design discussions, and technical documentation

Required Skills & Qualifications:
- Bachelor's or Master's degree in Computer Science, Electrical Engineering, or a related field (PhD is a plus)
- 8+ years of software development experience, with 5+ years in computer vision and deep learning
- Proficient in Python and libraries such as OpenCV, NumPy, scikit-image, Pillow
- Experience with deep learning frameworks like PyTorch, TensorFlow, or Keras
- Strong understanding of CNNs, object detection (YOLO, SSD, Faster R-CNN), semantic segmentation, and image classification
- Knowledge of MLOps, model deployment strategies (e.g., ONNX, TensorRT), and containerization (Docker/Kubernetes)
- Experience working with video analytics, image annotation tools, and large-scale dataset pipelines
- Familiarity with edge deployment (Jetson, Raspberry Pi, etc.) or cloud AI services (AWS SageMaker, Azure ML, GCP AI)

Droisys is an equal opportunity employer. We do not discriminate based on race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law. Droisys believes in diversity, inclusion, and belonging, and we are committed to fostering a diverse work environment.
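A minimal OpenCV preprocessing sketch of the sort of image work this role involves is shown below; the file path is a placeholder and `opencv-python` is assumed to be installed. This is only an illustration, not Droisys's pipeline.

```python
# Minimal OpenCV sketch: grayscale conversion + Canny edge detection.
# "sample.jpg" is a placeholder path; opencv-python is assumed installed.
import cv2

image = cv2.imread("sample.jpg")
if image is None:
    raise FileNotFoundError("sample.jpg not found")

gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)   # OpenCV loads images as BGR
edges = cv2.Canny(gray, threshold1=100, threshold2=200)
cv2.imwrite("edges.jpg", edges)
```

Real detection or segmentation work would feed preprocessed frames like these into a trained model (e.g. a YOLO or Faster R-CNN network) rather than stopping at classical edge detection.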

Posted 4 days ago

Apply

1.0 years

0 Lacs

Thane, Maharashtra, India

On-site

Company Description
Quantanite is a business process outsourcing (BPO) and customer experience (CX) solutions company that helps fast-growing companies and leading global brands to transform and grow. We do this through a collaborative and consultative approach, rethinking business processes and ensuring our clients employ the optimal mix of automation and human intelligence. We’re an ambitious team of professionals spread across four continents and looking to disrupt our industry by delivering seamless customer experiences for our clients, backed up with exceptional results. We have big dreams and are constantly looking for new colleagues to join us who share our values, passion, and appreciation for diversity.

Job Description
About the Role: We are seeking an AI Engineer to join our team and play a critical role in the design and development of a cognitive data solution. The broader vision is to develop an AI-based platform that will crawl through unstructured data sources and extract meaningful information. The ideal candidate will possess full-stack development skills along with a strong understanding of database structures, SQL queries, ETL tools, and Azure data technologies.

Core Responsibilities:
- Lead the design, development, and implementation of AI algorithms and models to enhance payment and CX product offerings.
- Test agent interactions, document behaviors, and help improve reliability and performance.
- Assist with agent integration, system testing, and deployment.
- Develop and deploy AI models (including deep learning models) using TensorFlow, PyTorch, and other ML/DL frameworks.
- Design and maintain the full AI data pipeline, including data crawling, ETL, and building fact tables.
- Apply statistical and programming expertise (NumPy, Pandas, etc.) for data analysis and modeling.
- Optimize AI models for on-premise infrastructure with a focus on performance and security compliance.
- Stay abreast of the latest trends in AI and contribute to continuous improvements.
- Mentor junior AI/ML engineers and contribute to documentation and process standardization.
- Contribute to the broader AI agent ecosystem and cross-functional collaboration.

Required Experience:
- Strong Python programming skills and experience with full-stack development.
- Experience with large language models (LLMs) like ChatGPT, Claude, etc.
- 6 months to 1 year of experience as an AI engineer.
- Solid understanding of AI/ML concepts including NLP, ML algorithms, deep learning, and image/speech recognition.
- Experience with Azure Data Factory, Databricks/Spark, Synapse/SQL DW, and Data Lake Storage.
- Familiarity with data pipelines, APIs, data exchange mechanisms, and RDBMS/NoSQL databases.
- Ability to write clear and thorough documentation.
- Enthusiasm for emerging AI technologies and open-source collaboration.

Nice To Have:
- Experience with agent frameworks (LangChain, AutoGPT, CrewAI, etc.)
- Basic DevOps knowledge and experience in scalable deployments.
- Experience with API development and RESTful services.
- Familiarity with vector databases (e.g., FAISS, ChromaDB, Pinecone, Weaviate).
- Prior contributions to open-source AI projects or communities.

Additional Information
Benefits: At Quantanite, we ask a lot of our associates, which is why we give so much in return. In addition to your compensation, our perks include:
- Dress: Wear anything you like to the office. We want you to feel as comfortable as when working from home.
- Employee engagement: Experience our family community and embrace our culture where we bring people together to laugh and celebrate our achievements.
- Professional development: We love giving back and ensure you have opportunities to grow with us and even travel on occasion.
- Events: Regular team and organisation-wide get-togethers and events.
- Value orientation: Everything we do at Quantanite is informed by our Purpose and Values. We Build Better. Together.
- Future development: At Quantanite, you’ll have a personal development plan to help you improve in the areas you’re looking to develop over the coming years. Your manager will dedicate time and resources to supporting you in getting to the next level. You’ll also have the opportunity to progress internally. As a fast-growing organization, our teams are growing, and you’ll have the chance to take on more responsibility over time.

So, if you’re looking for a career full of purpose and potential, we’d love to hear from you!

Posted 4 days ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Updraft. Helping you make changes that pay off.

Updraft is an award-winning, FCA-authorised, high-growth fintech based in London. Our vision is to revolutionise the way people spend and think about money, by automating the day-to-day decisions involved in managing money and mainstream borrowings like credit cards, overdrafts and other loans.
- A 360-degree spending view across all your financial accounts (using Open Banking)
- A free credit report with tips and guidance to help improve your credit score
- Native AI-led personalised financial planning to help users manage money, pay off their debts and improve their credit scores
- Intelligent lending products to help reduce the cost of credit

We have built scale and are getting well recognised in the UK fintech ecosystem:
- 800k+ users of the mobile app, which has helped users swap c. £500m of costly credit-card debt for smarter credit, putting hundreds of thousands on a path to better financial health
- The product is highly rated by our customers: 4.8 on Trustpilot, 4.8 on the Play Store, and 4.4 on the iOS Store
- We are selected for Technation Future Fifty 2025 - a program that recognizes and supports successful and innovative scaleups to IPOs - 30% of UK unicorns have come out of this program
- Updraft once again featured on the Sifted 100 UK startups - among only 25 companies to have made the list over both years, 2024 and 2025

We are looking for exceptional talent to join us on our next stage of growth with a compelling proposition - purpose you can feel, impact you can measure, and ownership you'll actually hold. Expect a hybrid, London-hub culture where cross-functional squads tackle real-world problems with cutting-edge tech; generous learning budgets and wellness benefits; and the freedom to experiment, ship, and see your work reflected in customers' financial freedom. At Updraft, you'll help build a fairer credit system.

Role and Responsibilities
Join our Analytics team to deliver cutting-edge solutions:
- Support business and operations teams in making better data-driven decisions by ingesting new data sources, creating intuitive dashboards and producing data insights
- Build new data processing workflows to extract data from core systems for analytic products
- Maintain and improve existing data processing workflows
- Contribute to optimizing and maintaining the production data pipelines, including system and process improvements
- Contribute to the development of analytical products and dashboards with integration of internal and third-party data sources/APIs
- Contribute to cataloguing and documentation of data

Requirements
- Bachelor's degree in mathematics, statistics, computer science or a related field
- 2-5 years of experience working in data engineering/analytics and related fields
- An advanced analytical framework and experience relating data insights to business problems and creating appropriate dashboards
- High proficiency in ETL, SQL and database management (mandatory)
- Experience with AWS services like Glue, Athena, Redshift, Lambda, S3
- Python programming experience using data libraries like pandas and numpy
- Interest in machine learning, logistic regression and emerging solutions for data analytics
- You are comfortable working without direct supervision on outcomes that have a direct impact on the business
- You are curious about the data and have a desire to ask "why?"

Good to have (not mandatory):
- Experience in a startup or fintech will be considered a great advantage
- Awareness of or hands-on experience with ML/AI implementation or MLOps
- Certification in AWS foundations

Benefits
- Opportunities to Take Ownership - Work on high-impact projects with real autonomy
- Fast Career Growth - Gain exposure to multiple business areas and advance quickly
- Be at the Forefront of Innovation - Work on cutting-edge technologies or disruptive ideas
- Collaborative & Flat Hierarchy - Work closely with leadership and have a real voice
- Dynamic, Fast-Paced Environment - No two days are the same; challenge yourself every day
- A Mission-Driven Company - Be part of something that makes a difference

Posted 4 days ago

Apply

4.0 - 9.0 years

9 - 14 Lacs

Bengaluru

Work from Office

Job Posting Title: SR. DATA SCIENTIST
Band/Level: 5-2-C
Education Experience: Bachelor's Degree (High School + 4 years)
Employment Experience: 5-7 years

At TE, you will unleash your potential working with people from diverse backgrounds and industries to create a safer, sustainable and more connected world.

Job Overview
Solves complex problems and helps stakeholders make data-driven decisions by leveraging quantitative methods, such as machine learning. It often involves synthesizing large volumes of information and extracting signals from data in a programmatic way.

Roles & Responsibilities
- Design, train, and evaluate supervised & unsupervised models (regression, classification, clustering, uplift).
- Apply automated hyperparameter optimization (Optuna, HyperOpt) and interpretability techniques (SHAP, LIME).
- Perform deep exploratory data analysis (EDA) to uncover patterns & anomalies.
- Engineer predictive features from structured, semi-structured, and unstructured data; manage feature stores (Feast).
- Ensure data quality through rigorous validation and automated checks.
- Build hierarchical, intermittent, and multi-seasonal forecasts for thousands of SKUs.
- Implement traditional (ARIMA, ETS, Prophet) and deep-learning (RNN/LSTM, Temporal Fusion Transformer) approaches.
- Reconcile forecasts across product/category hierarchies; quantify accuracy (MAPE, WAPE) and bias (a short metric sketch follows this posting).
- Establish model tracking & registry (MLflow, SageMaker Model Registry).
- Develop CI/CD pipelines for automated retraining, validation, and deployment (Airflow, Kubeflow, GitHub Actions).
- Monitor data & concept drift; trigger retuning or rollback as needed.
- Design and analyze A/B tests, causal inference studies, and Bayesian experiments.
- Provide statistically grounded insights and recommendations to stakeholders.
- Translate business objectives into data-driven solutions; present findings to exec & non-tech audiences.
- Mentor junior data scientists, review code/notebooks, and champion best practices.

Desired Candidate
Minimum Qualifications
- M.S. in Statistics (preferred) or a related field such as Applied Mathematics, Computer Science, Data Science.
- 5+ years building and deploying ML models in production.
- Expert-level proficiency in Python (Pandas, NumPy, SciPy, scikit-learn), SQL, and Git.
- Demonstrated success delivering large-scale demand-forecasting or time-series solutions.
- Hands-on experience with MLOps tools (MLflow, Kubeflow, SageMaker, Airflow) for model tracking and automated retraining.
- Solid grounding in statistical inference, hypothesis testing, and experimental design.

Preferred / Nice-to-Have
- Experience in supply-chain, retail, or manufacturing domains with high-granularity SKU data.
- Familiarity with distributed data frameworks (Spark, Dask) and cloud data warehouses (BigQuery, Snowflake).
- Knowledge of deep-learning libraries (PyTorch, TensorFlow) and probabilistic programming (PyMC, Stan).
- Strong data-visualization skills (Plotly, Dash, Tableau) for storytelling and insight communication.

Competencies

ABOUT TE CONNECTIVITY
TE Connectivity plc (NYSE: TEL) is a global industrial technology leader creating a safer, sustainable, productive, and connected future. Our broad range of connectivity and sensor solutions enable the distribution of power, signal and data to advance next-generation transportation, energy networks, automated factories, data centers, medical technology and more. With more than 85,000 employees, including 9,000 engineers, working alongside customers in approximately 130 countries, TE ensures that EVERY CONNECTION COUNTS. Learn more at www.te.com and on LinkedIn, Facebook, WeChat, Instagram and X (formerly Twitter).

WHAT TE CONNECTIVITY OFFERS:
We are pleased to offer you an exciting total package that can also be flexibly adapted to changing life situations - the well-being of our employees is our top priority!
- Competitive Salary Package
- Performance-Based Bonus Plans
- Health and Wellness Incentives
- Employee Stock Purchase Program
- Community Outreach Programs / Charity Events

IMPORTANT NOTICE REGARDING RECRUITMENT FRAUD
TE Connectivity has become aware of fraudulent recruitment activities being conducted by individuals or organizations falsely claiming to represent TE Connectivity. Please be advised that TE Connectivity never requests payment or fees from job applicants at any stage of the recruitment process. All legitimate job openings are posted exclusively on our official careers website at te.com/careers, and all email communications from our recruitment team will come only from actual email addresses ending in @te.com. If you receive any suspicious communications, we strongly advise you not to engage or provide any personal information, and to report the incident to your local authorities.

Across our global sites and business units, we put together packages of benefits that are either supported by TE itself or provided by external service providers. In principle, the benefits offered can vary from site to site.
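As referenced in the responsibilities above, here is a minimal NumPy sketch of the MAPE and WAPE forecast-accuracy metrics; the demand and forecast figures are made up for the example.

```python
# Minimal NumPy sketch of MAPE and WAPE forecast-accuracy metrics;
# the actual/forecast figures are hypothetical.
import numpy as np

actual = np.array([120.0, 80.0, 200.0, 150.0])
forecast = np.array([110.0, 95.0, 180.0, 160.0])

mape = np.mean(np.abs((actual - forecast) / actual)) * 100        # mean absolute % error
wape = np.sum(np.abs(actual - forecast)) / np.sum(actual) * 100   # weighted absolute % error
print(f"MAPE: {mape:.1f}%  WAPE: {wape:.1f}%")
```

WAPE is often preferred over MAPE for intermittent SKU-level demand because it does not blow up when individual actuals are near zero.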

Posted 4 days ago

Apply

5.0 - 9.0 years

15 - 25 Lacs

Bengaluru

Work from Office

SKILLS AND COMPETENCIES

Technical Skills:
• Advanced proficiency in Python with expertise in data science libraries (NumPy, Pandas, scikit-learn) and deep learning frameworks (PyTorch, TensorFlow)
• Extensive experience with LLM frameworks (Hugging Face Transformers, LangChain) and prompt engineering techniques
• Experience with big data processing using Spark for large-scale data analytics
• Version control and experiment tracking using Git and MLflow
• Software Engineering & Development: Advanced proficiency in Python, familiarity with Go or Rust, expertise in microservices, test-driven development, and concurrency processing.
• DevOps & Infrastructure: Experience with Infrastructure as Code (Terraform, CloudFormation), CI/CD pipelines (GitHub Actions, Jenkins), and container orchestration (Kubernetes) with Helm and service mesh implementations.
• LLM Infrastructure & Deployment: Proficiency in LLM serving platforms such as vLLM and FastAPI, model quantization techniques, and vector database management.
• MLOps & Deployment: Utilization of containerization strategies for ML workloads, experience with model serving tools like TorchServe or TF Serving, and automated model retraining.
• Cloud & Infrastructure: Strong grasp of advanced cloud services (AWS, GCP, Azure) and network security for ML systems.
• LLM Project Experience: Expertise in developing chatbots, recommendation systems, translation services, and optimizing LLMs for performance and security.
• General Skills: Python, SQL, knowledge of machine learning frameworks (Hugging Face, TensorFlow, PyTorch), and experience with cloud platforms like AWS or GCP.
• Experience in creating LLD for the provided architecture.
• Experience working in microservices-based architecture.

Domain Expertise:
• Strong mathematical foundation in statistics, probability, linear algebra, and optimization
• Deep understanding of the ML and LLM development lifecycle, including fine-tuning and evaluation
• Expertise in feature engineering, embedding optimization, and dimensionality reduction
• Advanced knowledge of A/B testing, experimental design, and statistical hypothesis testing
• Experience with RAG systems, vector databases, and semantic search implementation
• Proficiency in LLM optimization techniques including quantization and knowledge distillation
• Understanding of MLOps practices for model deployment and monitoring

Professional Competencies:
• Strong analytical thinking with the ability to solve complex ML challenges
• Excellent communication skills for presenting technical findings to diverse audiences
• Experience translating business requirements into data science solutions
• Project management skills for coordinating ML experiments and deployments
• Strong collaboration abilities for working with cross-functional teams
• Dedication to staying current with the latest ML research and best practices
• Ability to mentor and share knowledge with team members

Primary Skill Set: Microservices, LLM, Agentic AI Framework, Predictive modelling
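Since the skill set leans heavily on Hugging Face Transformers, a minimal, hedged text-generation sketch is included here; `gpt2` is just a small public model used for illustration, not the stack this role actually uses, and the prompt is hypothetical.

```python
# Minimal Hugging Face Transformers sketch; "gpt2" is a small public model
# used purely for illustration.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator(
    "Summarise the benefits of semantic search:",
    max_new_tokens=40,
    do_sample=False,
)
print(result[0]["generated_text"])
```

Production LLM serving of the kind the listing describes would typically sit behind a dedicated server such as vLLM with quantized weights rather than an in-process pipeline call.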

Posted 4 days ago

Apply

5.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

As a Blis data engineer, we seek to understand the data and problem definition and find efficient solutions, so critical thinking is a key component of efficient pipelines and effective reuse. This must include defining the pipelines for the correct controls and recovery points, not only the function and scale. Across the team, everyone supports each other through mentoring, brainstorming, and pairing up. They have a passion for delivering products that delight and astound our customers and that have a long-lasting impact on the business. They do this while also optimising themselves and the team for long-lasting agility, which is often synonymous with practicing Good Engineering. They are almost always adherents of Lean Development and work well in environments with significant amounts of freedom and ambitious goals.

Responsibilities
- Design, build, monitor, and support large-scale data processing pipelines.
- Support, mentor, and pair with other members of the team to advance our team's capabilities and capacity.
- Help Blis explore and exploit new data streams to innovate and support commercial and technical growth.
- Work closely with Product and be comfortable with taking, making, and delivering against fast-paced decisions to delight our customers. The ideal candidate will be comfortable with fast feature delivery with a robust engineered follow-up.

Requirements
- 5+ years of direct experience delivering robust, performant data pipelines within the constraints of direct SLAs and commercial financial footprints.
- Proven experience in architecting, developing, and maintaining Apache Druid and Imply platforms, with a focus on DevOps practices and large-scale system re-architecture.
- Mastery of building pipelines in GCP, maximising the use of native and native-supporting technologies, e.g. Apache Airflow.
- Mastery of Python for data and computational tasks with fluency in data cleansing, validation, and composition techniques.
- Hands-on implementation and architectural familiarity with all forms of data sourcing, i.e. streaming data, relational and non-relational databases, and distributed processing technologies (e.g. Spark).
- Fluency with all appropriate Python libraries typical of data science, e.g. pandas, scikit-learn, scipy, numpy, MLlib, and/or other machine learning and statistical libraries.
- Advanced knowledge of cloud-based services, specifically GCP.
- Excellent working understanding of server-side Linux.
- Professional in managing and updating tasks, ensuring appropriate levels of documentation, testing, and assurance around their solutions.

Desired
- Experience optimizing both code and config in Spark, Hive, or similar tools.
- Practical experience working with relational databases, including advanced operations such as partitioning and indexing.
- Knowledge and experience with tools like AWS Athena or Google BigQuery to solve data-centric problems.
- Understanding and ability to innovate, apply, and optimize complex algorithms and statistical techniques to large data structures.
- Experience with Python notebooks, such as Jupyter, Zeppelin, or Google Datalab, to analyze, prototype, and visualize data and algorithmic output.

This job was posted by Jaina M from Blis.
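The Apache Airflow pipelines mentioned in the requirements can be sketched minimally as below; the DAG id, schedule, and task bodies are illustrative assumptions, and the `schedule` argument assumes Airflow 2.4 or later.

```python
# Minimal Apache Airflow sketch of a daily extract/transform pipeline.
# DAG id, schedule, and task bodies are placeholders; assumes Airflow 2.4+.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw events from the source system")

def transform():
    print("cleanse, validate, and aggregate the extracted data")

with DAG(
    dag_id="example_events_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task
```

In a GCP deployment of the kind described, the Python callables would typically hand heavy lifting off to BigQuery or Dataproc/Spark rather than doing the work inside the Airflow worker.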

Posted 4 days ago

Apply

0 years

0 Lacs

Mumbai Metropolitan Region

On-site

We are looking for an enthusiastic Data Scientist to join our team based in Bangalore. You will be pivotal in developing, deploying, and optimizing recommendation models that significantly enhance user experience and engagement. Your work will directly influence how customers interact with products, driving personalization and conversion.

Responsibilities
Model Development: Design, build, and fine-tune machine learning models focused on personalized recommendations to boost user engagement and retention.
Data Analysis: Perform a comprehensive analysis of user behavior, interactions, and purchasing patterns to generate actionable insights.
Algorithm Optimization: Continuously improve recommendation algorithms by experimenting with new techniques and leveraging state-of-the-art methodologies.
Deployment and Monitoring: Deploy machine learning models into production environments, and develop tools for continuous performance monitoring and optimization.

Requirements
Education level: Bachelor's degree (B.E./B.Tech) in Computer Science or equivalent from a reputed institute.

Technical Expertise
Strong foundation in Statistics, Probability, and core Machine Learning concepts.
Hands-on experience developing recommendation algorithms, including collaborative filtering, content-based filtering, matrix factorization, or deep learning approaches (see the sketch after this posting).
Proficiency in Python and associated libraries (NumPy, Pandas, Scikit-Learn, PySpark).
Experience with TensorFlow or PyTorch frameworks and familiarity with recommendation system libraries (e.g., torch-rec).
Solid understanding of Big Data technologies and tools (Hadoop, Spark, SQL).
Familiarity with the full Data Science lifecycle, from data collection and preprocessing to model deployment.

This job was posted by Rituza Rani from Oneture Technologies.
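The requirements mention matrix factorization among the recommendation techniques. As a quick illustration, here is a minimal sketch that factorizes a toy user-item interaction matrix with scikit-learn's NMF and ranks unseen items per user; the matrix and rank are invented for the example, and production systems would work from logged interactions at far larger scale.

```python
import numpy as np
from sklearn.decomposition import NMF

# Toy implicit-feedback matrix: rows = users, columns = items, values = interaction counts.
interactions = np.array([
    [5, 3, 0, 1, 0],
    [4, 0, 0, 1, 1],
    [1, 1, 0, 5, 0],
    [0, 0, 4, 4, 0],
    [0, 1, 5, 4, 0],
], dtype=float)

# Factorize into user factors (W) and item factors (H): interactions ≈ W @ H.
model = NMF(n_components=2, init="nndsvda", random_state=0, max_iter=500)
user_factors = model.fit_transform(interactions)
item_factors = model.components_

scores = user_factors @ item_factors          # predicted affinity for every user-item pair
scores[interactions > 0] = -np.inf            # mask items the user has already interacted with

for user_id, user_scores in enumerate(scores):
    top_item = int(np.argmax(user_scores))
    print(f"user {user_id}: recommend item {top_item}")
```

Deep-learning and hybrid approaches replace the linear factorization with learned embeddings, but the recommend-by-predicted-affinity step stays essentially the same.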

Posted 4 days ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Join us as a Data Scientist

You'll design and implement data science tools and methods which harness our data in order to drive market-leading, purposeful customer solutions. We'll look to you to actively participate in the data community to identify and deliver opportunities to support the bank's strategic direction through better use of data. This is an opportunity to promote data literacy education with business stakeholders, supporting them to foster a data-driven culture, and to make a real impact with your work. We're offering this role at associate level.

What you'll do
As a Data Scientist, you'll bring together statistical, mathematical, machine-learning and software engineering skills to consider multiple solutions, techniques and algorithms to develop and implement ethically sound models end-to-end. We'll look to you to understand the needs of business stakeholders, form hypotheses and identify suitable data and analytics solutions to meet those needs in order to support the achievement of our business strategy.

You'll also be:
Using data translation skills to work closely with business stakeholders to define detailed business questions, problems or opportunities which can be supported through analytics
Applying a software engineering and product development lens to business problems, creating, scaling and deploying software-driven products and services
Working in an Agile way within multi-disciplinary data and analytics teams to achieve agreed project and scrum outcomes
Selecting, building, training and testing machine learning models, considering model validation, model risk, governance and ethics, making sure that models are ready to implement and scale (see the sketch after this posting)
Iteratively building and prototyping data analysis pipelines to provide insights that will ultimately lead to production deployment

The skills you'll need
You'll need a strong academic background in a STEM discipline such as Mathematics, Physics, Engineering or Computer Science, and at least four years of experience with statistical modelling and machine learning techniques. We'll also look for financial services knowledge, and an ability to identify wider business impact, risk or opportunities and make connections across key outputs and processes.

You'll also demonstrate:
The ability to use data to solve business problems, from hypotheses through to resolution
Experience in data science, analytics, and machine learning, with a strong understanding of statistical analysis, machine learning models and concepts, LLMs, and data management principles
Proficiency in Python and relevant libraries such as Pandas, NumPy, and Scikit-learn
Experience of cloud applications such as AWS SageMaker and data visualisation tools
Experience in synthesising, translating and visualising data and insights for key stakeholders
Good communication skills with the ability to proactively engage with a wide range of stakeholders
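To make the "build, train and test" loop above concrete, here is a minimal sketch of an end-to-end scikit-learn pipeline evaluated with cross-validation; the synthetic dataset and model choice are illustrative only, not a reflection of the bank's actual tooling.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for a modelling dataset.
X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)

# Bundling preprocessing and the estimator keeps training and scoring consistent,
# which matters once the model heads towards production deployment.
pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("model", LogisticRegression(max_iter=1_000)),
])

# 5-fold cross-validated AUC gives an early read on generalisation before deployment.
scores = cross_val_score(pipeline, X, y, cv=5, scoring="roc_auc")
print(f"Mean AUC: {scores.mean():.3f} (+/- {scores.std():.3f})")
```

Model validation, risk and governance checks would sit on top of this loop before anything is scaled or implemented.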

Posted 4 days ago

Apply

5.0 years

0 Lacs

Greater Bengaluru Area

On-site

Role: Data Scientist
Experience: 5 to 8 years
Location: Hyderabad, Pune, Bangalore

Job Description:
Mandatory Skills: Machine Learning, Python

Programming Tools: Python, Pandas, NumPy, Matplotlib, Seaborn, Scikit-learn; SQL (complex queries, joins, CTEs, window functions); Git

Data Analysis & Visualization: Exploratory Data Analysis (EDA), trend and correlation analysis, hypothesis testing, dashboards (Power BI, Tableau), storytelling with data (see the sketch after this posting)

Databases & Data Handling: SQL Server, MySQL; basic knowledge of Spark, BigQuery, or Hive is a plus

Statistics & ML Foundations: Descriptive and inferential statistics; regression, classification, clustering (K-Means); model evaluation metrics (accuracy, precision, recall, AUC); overfitting and underfitting; cross-validation

LLM/AI Exposure: Familiarity with prompt engineering and OpenAI APIs, basic usage of GPT models for data and text automation, awareness of LLM limitations and applications in analytics

Soft Skills: Strong problem-solving ability, attention to detail, stakeholder communication, ability to translate business problems into data solutions

Experience Highlights:
Conducted in-depth data analysis to uncover trends, patterns and anomalies that informed strategic decisions across product, marketing and operations teams
Designed and implemented scalable data pipelines and automated data workflows using Python and SQL
Developed and maintained analytical models and dashboards to track key business metrics and performance indicators
Applied statistical methods and machine learning techniques to solve real-world business problems such as forecasting, segmentation and performance optimization
Collaborated with stakeholders to gather requirements, translate business questions into analytical approaches and communicate findings with clarity
Explored the use of LLMs (e.g. OpenAI GPT) for enhancing internal workflows and accelerating data-driven tasks such as querying, summarization and content generation
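Since the posting calls for EDA, correlation analysis and hypothesis testing in Python, here is a minimal sketch of that workflow on a synthetic dataset; the column names and the control-versus-variant comparison are invented for illustration.

```python
import numpy as np
import pandas as pd
from scipy import stats

# Synthetic stand-in for an analytics table.
rng = np.random.default_rng(7)
df = pd.DataFrame({
    "group": rng.choice(["control", "variant"], size=500),
    "sessions": rng.poisson(5, size=500),
})
# Make the variant group slightly more active so the test has something to find.
is_variant = df["group"] == "variant"
df.loc[is_variant, "sessions"] += rng.poisson(1, size=is_variant.sum())
df["revenue"] = df["sessions"] * rng.gamma(2.0, 10.0, size=500)

# EDA: summary statistics and the sessions-revenue correlation.
print(df.groupby("group")["sessions"].describe())
print("corr(sessions, revenue):", round(df["sessions"].corr(df["revenue"]), 3))

# Hypothesis test: do the two groups differ in average sessions?
control = df.loc[df["group"] == "control", "sessions"]
variant = df.loc[is_variant, "sessions"]
t_stat, p_value = stats.ttest_ind(variant, control, equal_var=False)
print(f"Welch t-test: t={t_stat:.2f}, p={p_value:.4f}")
```

The same pattern, with real tables pulled via SQL, is what typically feeds the Power BI or Tableau dashboards the posting mentions.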

Posted 4 days ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Company Description
Experian is a global data and technology company, powering opportunities for people and businesses around the world. We help to redefine lending practices, uncover and prevent fraud, simplify healthcare, create marketing solutions, and gain deeper insights into the automotive market, all using our unique combination of data, analytics and software. We also assist millions of people to realize their financial goals and help them save time and money. We operate across a range of markets, from financial services to healthcare, automotive, agribusiness, insurance, and many more industry segments. We invest in people and new advanced technologies to unlock the power of data. As a FTSE 100 Index company listed on the London Stock Exchange (EXPN), we have a team of 22,500 people across 32 countries. Our corporate headquarters are in Dublin, Ireland. Learn more at experianplc.com.

Job Description
As a Staff Machine Learning Engineer, you will drive AI programs, lead engagements, and independently develop innovative solutions that enhance decision-making, automate workflows, and create growth. You will own the end-to-end development of AI-powered applications, from solution design to deployment, leveraging pre-trained machine learning and generative AI models. You will work closely with cross-functional teams, proactively identifying opportunities to integrate AI capabilities into Experian's products and services while optimizing performance and scalability.

Qualifications
Experience working in a cloud environment with one of Databricks, Azure or AWS
8+ years of experience building data-driven products and solutions
Experience leading AI engagements
Strong experience with AI APIs (OpenAI, Hugging Face, Google Vertex AI, AWS Bedrock) and fine-tuning models for production use
Deep understanding of machine learning, natural language processing (NLP), and generative AI evaluation techniques

Key Responsibilities
Assist in Developing and Deploying Machine Learning Models: Support the development and deployment of machine learning models, including data preprocessing and performance evaluation in Python using scikit-learn, NumPy and other standard libraries.
Build and Maintain ML Pipelines: Help build and maintain scalable ML pipelines, and assist in automating model training workflows in Python using MLflow, Databricks, SageMaker or equivalent (a minimal sketch follows this posting).
Collaborate with Cross-Functional Teams: Work with product and data teams to align ML solutions with business needs and objectives.
Write Clean and Documented Code: Write clean, well-documented code, following best practices for testing and version control. Use Sphinx and other auto-documentation solutions to automate document generation.
Support Model Monitoring and Debugging: Assist in monitoring and debugging models to improve their reliability and performance.
Participate in Technical Discussions and Knowledge Sharing: Engage in technical discussions, code reviews, and knowledge-sharing sessions to learn and grow within the team.

Day-to-Day Activities
On a daily basis, you will work closely with senior ML engineers and data scientists to support various stages of the machine learning lifecycle. Your day-to-day activities will include:
Data Preprocessing: Cleaning and preparing data for model training, ensuring data quality and consistency.
Model Training: Assisting in the training of machine learning models, experimenting with different algorithms and hyperparameters.
Performance Evaluation: Evaluating model performance using appropriate metrics and techniques, and identifying areas for improvement.
Pipeline Maintenance: Building and maintaining ML pipelines, ensuring they are scalable and efficient.
Code Development: Writing and maintaining clean, well-documented code, following best practices for testing and version control.
Model Monitoring: Monitoring deployed models to ensure they are performing as expected, and assisting in debugging any issues that arise.
Collaboration: Participating in team meetings, sprint planning, and daily stand-ups to stay aligned with project goals and timelines.

Additional Information
Our uniqueness is that we truly celebrate yours. Experian's culture and people are important differentiators. We take our people agenda very seriously and focus on what truly matters: DEI, work/life balance, development, authenticity, engagement, collaboration, wellness, reward & recognition, volunteering... the list goes on. Experian's strong people-first approach is award winning: Great Place To Work™ in 24 countries, FORTUNE Best Companies to Work For and Glassdoor Best Places to Work (globally 4.4 stars), to name a few. Check out Experian Life on social or our Careers Site to understand why.

Experian is proud to be an Equal Opportunity and Affirmative Action employer. Innovation is a critical part of Experian's DNA and practices, and our diverse workforce drives our success. Everyone can succeed at Experian and bring their whole self to work, irrespective of their gender, ethnicity, religion, color, sexuality, physical ability or age. If you have a disability or special need that requires accommodation, please let us know at the earliest opportunity.

Benefits
Experian cares for employees' work-life balance, health, safety and wellbeing. In support of this endeavor, we offer best-in-class family well-being benefits, enhanced medical benefits and paid time off.

Experian Careers - Creating a better tomorrow together
Find out what it's like to work for Experian by clicking here
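The responsibilities above revolve around training scikit-learn models and tracking automated training runs with MLflow; the sketch below shows one plausible shape for that loop. It assumes a local MLflow tracking setup and uses a synthetic dataset, so the experiment name and parameters are illustrative rather than anything Experian-specific.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a training table.
X, y = make_classification(n_samples=2_000, n_features=25, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

mlflow.set_experiment("toy-model-retraining")  # hypothetical experiment name

with mlflow.start_run():
    params = {"n_estimators": 200, "max_depth": 6}
    model = RandomForestClassifier(**params, random_state=0).fit(X_train, y_train)

    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])

    # Log parameters, metrics and the fitted model so retraining runs stay comparable.
    mlflow.log_params(params)
    mlflow.log_metric("test_auc", auc)
    mlflow.sklearn.log_model(model, "model")
```

Automated retraining would wrap this run in a scheduled pipeline (Databricks, SageMaker or similar) and promote the logged model only when its metrics beat the current production version.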

Posted 4 days ago

Apply

2.0 - 5.0 years

8 - 12 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

Your Role
Develop and implement Generative AI / AI solutions on Google Cloud Platform
Work with cross-functional teams to design and deliver AI-powered products and services
Develop, version and execute Python code
Deploy models as endpoints in a Dev environment

Skills
Solid understanding of Python
Deep learning frameworks such as TensorFlow, PyTorch, or JAX
Natural language processing (NLP) and machine learning (ML)
GCP services such as Cloud Storage, Compute Engine, Vertex AI, Cloud Functions and Pub/Sub
Generative AI support in Vertex AI, specifically hands-on experience with generative models such as Gemini and Vertex AI Search (see the sketch after this posting)

Your Profile
Experience in Generative AI development with Google Cloud Platform
Experience in delivering an AI solution on the Vertex AI platform
Experience in developing and deploying AI solutions with ML

What you'll love about working here
You can shape your career with us. We offer a range of career paths and internal opportunities within the Capgemini group. You will also get personalized career guidance from our leaders. You will get comprehensive wellness benefits including health checks, telemedicine, insurance with top-ups, elder care, partner coverage or new parent support via flexible work. You will have the opportunity to learn on one of the industry's largest digital learning platforms, with access to 250,000+ courses and numerous certifications.

About Capgemini
Location - Hyderabad, Pune, Bengaluru, Chennai, Mumbai
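As a rough illustration of the "develop generative AI on Vertex AI" theme, here is a minimal sketch of calling a Gemini model through the Vertex AI Python SDK; the project ID, region and model name are placeholders, and the exact SDK surface varies by version, so treat this as an assumption-laden outline rather than a definitive recipe.

```python
import vertexai
from vertexai.generative_models import GenerativeModel

# Placeholders: substitute a real GCP project and a region where Gemini is available.
vertexai.init(project="my-gcp-project", location="us-central1")

# Model name is illustrative; available Gemini versions depend on project and region.
model = GenerativeModel("gemini-1.5-pro")

response = model.generate_content(
    "Summarize the key steps for deploying a model as a Vertex AI endpoint."
)
print(response.text)
```

Deploying a custom or fine-tuned model as a Dev endpoint would go through Vertex AI's model registry and endpoint resources rather than a direct generate_content call.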

Posted 4 days ago

Apply