
4556 NumPy Jobs - Page 35

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

4.0 years

0 Lacs

Mysore, Karnataka, India

On-site

Experience: 4+ years
Salary: USD 80,000/year (based on experience)
Expected notice period: 30 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity type: Office (Vadodara)
Placement type: Full-time contract for 12 months (40 hrs/week, 160 hrs/month)
(Note: This is a requirement for one of Uplers' clients, a US-based Series A funded technology startup.)

Must-have skills: Generative Models, JAX, Reinforcement Learning, Scikit-learn, Generative AI, Natural Language Processing (NLP), PyTorch, Retrieval-Augmented Generation, Computer Vision

A US-based Series A funded technology startup is looking for a Senior Deep Learning Engineer.

Job Summary: We are seeking a highly skilled and experienced Senior Deep Learning Engineer to join our team. This individual will lead the design, development, and deployment of cutting-edge deep learning models and systems. The ideal candidate is passionate about applying state-of-the-art machine learning techniques to complex real-world problems, thrives in a collaborative environment, and has a proven track record of delivering impactful AI solutions.

Key Responsibilities:
- Model development and optimization: Design, train, and deploy advanced deep learning models for applications such as computer vision, natural language processing, speech recognition, and recommendation systems. Optimize models for performance, scalability, and efficiency on various hardware platforms (e.g., GPUs, TPUs).
- Research and innovation: Stay current with the latest advancements in deep learning, AI, and related technologies. Develop novel architectures and techniques to push the boundaries of what's possible in AI applications.
- System design and deployment: Architect and implement scalable, reliable machine learning pipelines for training and inference. Collaborate with software and DevOps engineers to deploy models into production environments.
- Collaboration and leadership: Work closely with cross-functional teams, including data scientists, product managers, and software engineers, to define project goals and deliverables. Provide mentorship and technical guidance to junior team members and peers.
- Data management: Collaborate with data engineering teams to preprocess, clean, and augment large datasets. Develop tools and processes for efficient data handling and annotation.
- Performance evaluation: Define and monitor key performance indicators (KPIs) to evaluate model performance and impact. Conduct rigorous A/B testing and error analysis to continuously improve model outputs.

Qualifications and Skills:
- Education: Bachelor's or Master's degree in Computer Science, Electrical Engineering, or a related field; PhD preferred.
- Experience: 5+ years of experience developing and deploying deep learning models, with a proven track record of delivering AI-driven products or research with measurable impact.
- Technical skills: Proficiency in deep learning frameworks such as TensorFlow, PyTorch, or JAX. Strong programming skills in Python, with experience in libraries like NumPy, Pandas, and Scikit-learn. Familiarity with distributed computing frameworks such as Spark or Dask. Hands-on experience with cloud platforms (AWS or GCP) and containerization tools (Docker, Kubernetes).
- Domain expertise: Experience in at least one specialized domain, such as computer vision, NLP, or time-series analysis. Familiarity with reinforcement learning, generative models, or other advanced AI techniques is a plus.
- Soft skills: Strong problem-solving skills and the ability to work independently. Excellent communication and collaboration abilities. Commitment to fostering a culture of innovation and excellence.

How to apply:
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Increase your chances of getting shortlisted and meet the client for the interview.

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role is to help talent find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for those as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
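The performance-evaluation duties above (KPIs, A/B testing, error analysis) usually start from simple classification metrics such as precision and recall. A minimal NumPy sketch, with made-up labels purely for illustration (none of this comes from the listing):

```python
import numpy as np

def precision_recall(y_true: np.ndarray, y_pred: np.ndarray) -> tuple[float, float]:
    """Basic error-analysis metrics of the kind tracked as model KPIs."""
    tp = int(np.sum((y_pred == 1) & (y_true == 1)))  # true positives
    fp = int(np.sum((y_pred == 1) & (y_true == 0)))  # false positives
    fn = int(np.sum((y_pred == 0) & (y_true == 1)))  # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical ground truth vs. model predictions
y_true = np.array([1, 0, 1, 1, 0])
y_pred = np.array([1, 1, 1, 0, 0])
print(precision_recall(y_true, y_pred))  # both come out to 2/3 here
```

In practice these numbers would be computed per model version and monitored over time, which is what turns a one-off evaluation into a KPI.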

Posted 2 weeks ago


4.0 years

0 Lacs

Agra, Uttar Pradesh, India

On-site

Senior Deep Learning Engineer — requirement for one of Uplers' clients, a US-based Series A funded technology startup (USD 80,000/year; office in Vadodara; 12-month full-time contract).

Posted 2 weeks ago


5.0 - 9.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Area(s) of responsibility — Data Engineer
Experience: 5–9 years
Job location: Pune
Technical/professional experience required (5+ years):
- Migrate existing ETL processes and objects into Azure Synapse, including complex, optimized stored procedures and functions.
- Develop and maintain data pipelines: design, implement, and maintain automated data pipelines from on-premises SQL databases to Azure Synapse using Azure Data Factory.
- Performance optimization: optimize data pipeline performance by identifying and addressing bottlenecks, improving query efficiency, and implementing best practices for data storage and retrieval in Azure Synapse.
- PL/SQL and database architecture: expertise in PL/SQL, database architecture, and performance tuning of existing procedures and processes.
- Automation and Python: help automate day-to-day processes using Python, with knowledge of the Pandas and NumPy libraries.
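The Pandas/NumPy automation the listing mentions typically amounts to small cleanup steps run before data is loaded into Synapse. A minimal sketch — the table, column names, and cleaning rules are hypothetical, not from the job description:

```python
import numpy as np
import pandas as pd

def clean_orders(df: pd.DataFrame) -> pd.DataFrame:
    """Illustrative pre-load cleanup: normalize keys, dedupe, fix sentinels."""
    out = df.copy()
    # Normalize text keys so joins downstream match reliably
    out["customer_id"] = out["customer_id"].str.strip().str.upper()
    # Drop exact duplicate orders, keeping the first occurrence
    out = out.drop_duplicates(subset=["order_id"])
    # Replace the -1 sentinel with NaN, then fill numeric gaps with 0
    out["amount"] = out["amount"].replace(-1, np.nan).fillna(0.0)
    return out

orders = pd.DataFrame({
    "order_id": [1, 1, 2],
    "customer_id": [" a1 ", " a1 ", "b2"],
    "amount": [100.0, 100.0, -1],
})
print(clean_orders(orders))
```

A step like this would be wrapped in an Azure Data Factory or scheduler-triggered job rather than run by hand.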

Posted 2 weeks ago


5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

eGrove Systems is looking for a Lead Django Backend Developer to join its team of experts.
Role: Lead Django Backend Developer
Experience: 5+ years
Notice period: Immediate to 15 days
Location: Chennai/Madurai
Interested candidates can send their resumes to annie@egrovesys.com

Required skills:
- 5+ years of strong experience in Python and 2+ years in the Django web framework.
- Experience or knowledge in implementing various design patterns.
- Good understanding of MVC frameworks and object-oriented programming.
- Experience with PostgreSQL/MySQL and MongoDB.
- Good knowledge of frameworks, packages, and libraries such as Django/Flask, Django ORM, unit testing, NumPy, Pandas, Scrapy, etc.
- Experience developing in a Linux environment, with Git and Agile methodology.
- Good to have: knowledge of at least one JavaScript framework or library (jQuery, Angular, ReactJS).
- Good to have: experience implementing charts and graphs using various libraries.
- Good to have: experience with multithreading and REST API management.

About the company: eGrove Systems is a leading IT solutions provider specializing in eCommerce, enterprise application development, AI-driven solutions, digital marketing, and IT consulting services. Established in 2008, we are headquartered in East Brunswick, New Jersey, with a global presence. Our expertise includes custom software development, mobile app solutions, DevOps, cloud services, AI chatbots, SEO automation tools, and workforce learning systems. We focus on delivering scalable, secure, and innovative technology solutions to enterprises, startups, and government agencies. At eGrove Systems, we foster a dynamic and collaborative work culture driven by innovation, continuous learning, and teamwork. We provide our employees with cutting-edge technologies, professional growth opportunities, and a supportive work environment to thrive in their careers.

Posted 2 weeks ago


4.0 years

0 Lacs

Jaipur, Rajasthan, India

On-site

Role: Data Engineer (2–4 years' experience)
📍 Location: Jaipur / Pune (work from office)
📧 hr@cognitivestars.com | 📞 99291-89819

We're looking for a Data Engineer (2–4 years) who's excited about building scalable ETL pipelines, working with Azure Data Lake and Databricks, and supporting AI/ML readiness across real-world datasets.

What you'll do:
- Design robust, reusable Python-based ETL pipelines from systems like SAP and OCPLM
- Clean and transform large-scale datasets for analytics and ML
- Work with Azure Data Lake, Databricks, and modern cloud tools
- Collaborate with analytics teams to support predictive and prescriptive models
- Drive data automation and ensure data quality and traceability

What you'll bring:
- 2–4 years of experience in data engineering or analytics programming
- Strong skills in Python and SQL
- Experience with Azure, Databricks, or similar cloud platforms
- Familiarity with ML concepts (hands-on experience not mandatory)
- Ability to understand complex enterprise data even without direct system access

Tools you'll use: Python | Pandas | NumPy | SQL | Azure Data Lake | Databricks | scikit-learn | XGBoost (as needed)
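The "reusable Python-based ETL pipelines" this role describes are often just composable transformation steps applied in sequence. A minimal sketch — the steps, column names, and data are invented for illustration, not taken from the listing:

```python
from typing import Callable

import pandas as pd

# A pipeline step is any function taking and returning a DataFrame
Step = Callable[[pd.DataFrame], pd.DataFrame]

def run_pipeline(df: pd.DataFrame, steps: list[Step]) -> pd.DataFrame:
    """Apply transformation steps in order — one way to keep ETL code reusable."""
    for step in steps:
        df = step(df)
    return df

# Two tiny, reusable steps (hypothetical examples)
drop_nulls: Step = lambda d: d.dropna()
lower_cols: Step = lambda d: d.rename(columns=str.lower)

raw = pd.DataFrame({"Plant": ["P1", None], "Qty": [5, 7]})
clean = run_pipeline(raw, [drop_nulls, lower_cols])
print(clean)
```

Keeping each step small and independent is what makes the same pipeline reusable across source systems such as SAP extracts.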

Posted 2 weeks ago


3.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site

Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: Google BigQuery
Good-to-have skills: Microsoft SQL Server, Google Cloud Data Services
Minimum 3 years of experience required
Educational qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs, while also troubleshooting any issues that arise in the data flow and processing stages. Your role will be pivotal in enhancing the overall data infrastructure and ensuring that data is accessible and reliable for decision-making purposes.

Project Role: Analytics and Modeler
Project Role Description: Analyze and model client, market, and key performance data. Use analytical tools and techniques to develop business insights and improve decision-making.

Key Responsibilities: Dataproc, Pub/Sub, Dataflow, Kafka streaming, Looker, SQL (no flex)
1. Proven track record of delivering data integration and data warehousing solutions
2. Strong SQL and hands-on experience (no flex); experience with data integration and migration projects
3. Proficient in the BigQuery SQL dialect (no flex)
4. Understanding of cloud-native services: bucket storage, BigQuery, Cloud Functions, Pub/Sub, Composer, and Kubernetes; experience with cloud solutions, mainly data platform services; GCP certifications
5. Experience in shell scripting, Python (no flex), Oracle, and SQL

Technical Experience:
1. Expert in Python (no flex). Strong hands-on knowledge of SQL (no flex) and Python programming using Pandas and NumPy; deep understanding of data structures (dictionary, array, list, tree, etc.); experience with pytest and code coverage preferred
2. Strong hands-on experience building solutions using cloud-native services: bucket storage, BigQuery, Cloud Functions, Pub/Sub, Composer, Kubernetes, etc. (no flex)
3. Proficiency with tools to automate Azure DevOps (AZDO) CI/CD pipelines, such as Control-M, GitHub, JIRA, and Confluence
4. Open mindset and the ability to quickly adapt to new technologies
5. Performance tuning of BigQuery SQL scripts
6. GCP certification preferred
7. Experience working in an agile environment

Professional Attributes:
1. Good communication skills
2. Ability to collaborate with different teams and suggest solutions
3. Ability to work independently with little supervision or as part of a team
4. Good analytical problem-solving skills
5. Good team-handling skills

Educational Qualification: 15 years of full-time education
Additional Information: Candidate should be ready for Shift B and work as an individual contributor
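The pytest and code-coverage expectation in the technical-experience list comes down to unit-testing pipeline helpers. A minimal sketch — the helper function and its data are hypothetical, not part of the listing:

```python
import pandas as pd

def dedupe_events(df: pd.DataFrame) -> pd.DataFrame:
    """Keep the most recent record per event_id — a typical pipeline step."""
    return (df.sort_values("updated_at")
              .drop_duplicates(subset=["event_id"], keep="last")
              .reset_index(drop=True))

# pytest collects functions named test_*; run with `pytest`, and with
# `coverage run -m pytest` to get the code-coverage figure.
def test_dedupe_events_keeps_latest():
    df = pd.DataFrame({
        "event_id": [1, 1, 2],
        "updated_at": ["2024-01-01", "2024-01-02", "2024-01-01"],
    })
    out = dedupe_events(df)
    assert sorted(out["event_id"].tolist()) == [1, 2]
    assert out.loc[out["event_id"] == 1, "updated_at"].iloc[0] == "2024-01-02"
```

Tests like this one are what CI pipelines (GitHub, AZDO) run on every commit to keep pipeline code regressions out of production.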

Posted 2 weeks ago


5.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: Google BigQuery
Minimum 5 years of experience required
Educational qualification: 15 years of full-time education

Posted 2 weeks ago


3.0 years

0 Lacs

Indore, Madhya Pradesh, India

On-site

Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: Google BigQuery
Good-to-have skills: Microsoft SQL Server, Google Cloud Data Services
Minimum 3 years of experience required
Educational qualification: 15 years of full-time education

Posted 2 weeks ago


3.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: Google BigQuery
Good-to-have skills: Google Cloud Data Services, Microsoft SQL Server
Minimum 3 years of experience required
Educational qualification: 15 years of full-time education

Posted 2 weeks ago


5.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: Google BigQuery
Good-to-have skills: Google Cloud Data Services, Microsoft SQL Server
Minimum 5 years of experience required
Educational qualification: 15 years of full-time education

Posted 2 weeks ago


3.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: Google BigQuery
Good-to-have skills: Microsoft SQL Server, Google Cloud Data Services
Minimum 3 years of experience required
Educational qualification: 15 years of full-time education

Posted 2 weeks ago

Apply

0 years

0 Lacs

India

Remote

To know more, check out work.hike.in. At Hike, we're building the Rush Gaming Universe 🎮 📲 💰 Introduction 📖 At Hike, we're revolutionizing gaming and tech by blending innovation and immersive experiences. With our foray into Web3 gaming, we're exploring uncharted territories to create products that redefine fun and ownership. Join us as we make waves in this exciting new frontier. Hike Code 📝 (Our core cultural values) The Hike Code is our cultural operating system. It is our set of values that guides us operationally on a day-to-day basis. We have 9 core values: Top Talent in Every Role → Both a quest for greatness & shared values are important to us. Pro-Sports Team → Strength-based, results driven with a "team-first" attitude. Customer Obsession → We exist to delight our customers. Innovation & Make Magic → Courage to walk into the unknown and pioneer new fronts. Owner not a Renter → Proactive & radically responsible; everyone is an owner. Think Deeply → Clear mind, obsession to simplify & data-informed. Move Fast → Ruthless prioritization & move fast. Be curious & keep learning → Curiosity to acquire new perspectives, quickly. Dream Big → Courage to climb big mountains. Skills & experience we're looking for: Final-year B.Tech/M.S. 
student or recent graduate in CS, IT, Math, Stats, or related field | Top Talent in Every Role. Solid programming abilities in Python with the ML/AI stack (NumPy, Pandas, Scikit-Learn, TensorFlow) | Top Talent in Every Role. Good grasp of Data Structures, Algorithms, and basic system-design concepts | Top Talent in Every Role. Coursework or projects demonstrating machine-learning fundamentals (regression, classification, DL models, Agentic AI) | Be Insatiably Curious & Keep Improving. Familiarity with SQL and eagerness to dive into data pipelines (Kafka, MongoDB, BigQuery, or similar) | Think Deeply & Exercise Good Judgement. Ability to be self-directed and learn quickly, with a strong desire to stay on top of the latest AI developments | Be Insatiably Curious & Keep Improving. Comfort using AI tools (Cursor, GPT, Claude) to accelerate development | Move Fast & Be Dynamic. Strong written and verbal communication skills; collaborative mindset | Pro-Sports Team. You will be responsible for: Strategy - Work extensively on our Multi-Agent AI Analytics System, expanding capabilities to deliver conversational insights at scale. Strategy - Design and iterate on ML models powering real-time personalization, matchmaking, and churn prediction. Strategy - Drive experimentation that boosts engagement, retention, and monetization through user-level intelligence. Operations - Monitor real-time data pipelines that feed anomaly detection, feature stores, and matchmaking services. Operations - Optimize and benchmark ML inference for live gameplay scenarios (spin-the-wheel rewards, sticker recommendations, GBM matchmaking). Collaboration - Partner with product, backend, and design to turn insights into delightful player experiences. Collaboration - Champion AI-driven tooling and workflow automation across the team. 💰 Benefits → We have tremendous benefits & perks. Check out work.hike.in to know more.
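The churn-prediction work mentioned above can be sketched as a toy classification problem; the features and the label rule below are invented for illustration, not Hike's actual signals:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic, illustrative data: sessions per week and days since last
# login as churn signals (feature names are assumptions).
rng = np.random.default_rng(0)
n = 500
sessions = rng.poisson(5, n)
days_idle = rng.integers(0, 30, n)
churned = (days_idle > 14).astype(int)   # toy label rule, not a real one

X = np.column_stack([sessions, days_idle])
X_train, X_test, y_train, y_test = train_test_split(
    X, churned, test_size=0.2, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
print(f"test accuracy: {model.score(X_test, y_test):.2f}")
```

In a real system the label would come from observed behaviour and the evaluation would use time-based splits rather than a random hold-out.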

Posted 2 weeks ago

Apply

0.0 - 1.0 years

0 - 0 Lacs

Chandkheda, Ahmedabad, Gujarat

On-site

About Us: Red & White Education Pvt. Ltd., established in 2008, is Gujarat's top NSDC & ISO-certified institute focused on skill-based education and global employability. Role Overview: We're hiring a full-time onsite AI, Machine Learning, and Data Science Faculty/Trainer with strong communication skills and a passion for teaching. Key Responsibilities: Deliver high-quality lectures on AI, Machine Learning, and Data Science. Design and update course materials, assignments, and projects. Guide students on hands-on projects, real-world applications, and research work. Provide mentorship and support for student learning and career development. Stay updated with the latest trends and advancements in AI/ML and Data Science. Conduct assessments, evaluate student progress, and provide feedback. Participate in curriculum development and improvements. Skills & Tools: Core Skills: ML, Deep Learning, NLP, Computer Vision, Business Intelligence, AI Model Development, Business Analysis. Programming: Python, SQL (Must), Pandas, NumPy, Excel. ML & AI Tools: Scikit-learn (Must), XGBoost, LightGBM, TensorFlow, PyTorch (Must), Keras, Hugging Face. Data Visualization: Tableau, Power BI (Must), Matplotlib, Seaborn, Plotly. NLP & CV: Transformers, BERT, GPT, OpenCV, YOLO, Detectron2. Advanced AI: Transfer Learning, Generative AI, Business Case Studies. Education & Experience Requirements: Bachelor's/Master's/Ph.D. in Computer Science, AI, Data Science, or a related field. Minimum 1+ years of teaching or industry experience in AI/ML and Data Science. Hands-on experience with Python, SQL, TensorFlow, PyTorch, and other AI/ML tools. Practical exposure to real-world AI applications, model deployment, and business analytics. 
For further information, please feel free to contact us at 7862813693 or via email at career@rnwmultimedia.edu.in Job Types: Full-time, Permanent Pay: ₹30,000.00 - ₹35,000.00 per month Benefits: Flexible schedule Leave encashment Paid sick time Paid time off Provident Fund Schedule: Day shift Supplemental Pay: Performance bonus Yearly bonus Experience: Teaching / Mentoring: 1 year (Required) AI: 1 year (Required) ML: 1 year (Required) Data Science: 1 year (Required) Work Location: In person

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Kochi, Kerala, India

On-site

Data Analyst (Data Visualization & Reporting) Key Responsibilities Work with large datasets from various inspection sources (LiDAR, drones, thermal imaging). Build insightful dashboards and reports using tools like Power BI, Tableau, or Looker. Develop and deploy predictive models and statistical analyses to detect anomalies and prevent failures. Collaborate with engineering and operations teams to translate complex data into operational insights. Ensure high-quality, clean, and consistent data by implementing validation pipelines. Apply basic electrical domain knowledge (fault detection, insulator/conductor analysis, etc.) for enriched interpretations. Continuously improve analysis workflows and automate repetitive data processes. Required Skills & Experience 3+ years of hands-on experience as a Data Analyst/Data Scientist. Strong skills in SQL, Python (Pandas, NumPy), or R for data manipulation. Proficiency in data visualization tools: Power BI, Tableau, Looker, etc. Experience working with time-series data or sensor-based data from industrial sources. Exposure to predictive analytics, ML algorithms, or data modeling techniques. Solid understanding of data pipelines and best practices in data management. Familiarity with AWS/Azure/GCP for data processing is a plus. Background or familiarity with geospatial data or tools like QGIS is a bonus. Preferred Qualifications Degree in Data Science, Engineering, or Computer Science. Prior experience with inspection data, IoT, or utilities/power transmission systems. Knowledge of domain-specific platforms used for power line inspections. Certification in data analysis/ML platforms (Google Data Analytics, Microsoft DA, etc.). Soft Skills Strong analytical thinking and attention to detail. Ability to convert technical findings into business-focused insights. Team player with cross-functional collaboration experience. Effective written and verbal communication skills (ref:hirist.tech)
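A common baseline for the sensor anomaly detection this role describes is a rolling z-score; the sketch below uses synthetic data and illustrative window/threshold choices, not values from any real inspection pipeline:

```python
import numpy as np
import pandas as pd

def flag_anomalies(series: pd.Series, window: int = 20, z: float = 3.0) -> pd.Series:
    """Flag points more than `z` rolling standard deviations from the
    rolling mean -- a simple baseline for sensor/time-series data."""
    mean = series.rolling(window, min_periods=window).mean()
    std = series.rolling(window, min_periods=window).std()
    return (series - mean).abs() > z * std

# Toy sensor trace (slow oscillation + noise) with one injected spike
values = pd.Series(
    np.sin(np.linspace(0, 10, 200))
    + np.random.default_rng(1).normal(0, 0.05, 200)
)
values.iloc[150] += 5            # simulated fault reading
print(flag_anomalies(values).sum())  # count of flagged points
```

Production systems would layer domain rules (e.g. conductor temperature limits) on top of a statistical baseline like this.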

Posted 2 weeks ago

Apply

2.0 - 4.0 years

0 Lacs

Bangalore Urban, Karnataka, India

On-site

Skills: Python, PyTorch, AWS, Data Visualization, Machine Learning, ETL. Experience: 2-4 Years Location: Bangalore (In-office) Employment Type: Full-Time About The Role We are hiring a Junior Data Scientist to join our growing data team in Bangalore. You'll work alongside experienced data professionals to build models, generate insights, and support analytical solutions that solve real business problems. Responsibilities Assist in data cleaning, transformation, and exploratory data analysis (EDA). Develop and test predictive models under guidance from senior team members. Build dashboards and reports to communicate insights to stakeholders. Work with cross-functional teams to implement data-driven initiatives. Stay updated with modern data tools, algorithms, and techniques. Requirements 2-4 years of experience in a data science or analytics role. Proficiency in Python or R, SQL, and key data libraries (Pandas, NumPy, Scikit-learn). Experience with data visualization tools (Matplotlib, Seaborn, Tableau, Power BI). Basic understanding of machine learning algorithms and model evaluation. Strong problem-solving ability and eagerness to learn. Good communication and teamwork skills.
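The "model evaluation" basics the listing mentions can be sketched with scikit-learn's bundled Iris data; hold-out accuracy and cross-validated accuracy are the two standard first checks:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split, cross_val_score

X, y = load_iris(return_X_y=True)
# Stratified hold-out split keeps class proportions in both halves
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=42)

clf = RandomForestClassifier(random_state=42).fit(X_train, y_train)
print(f"hold-out accuracy: {clf.score(X_test, y_test):.2f}")
# Cross-validation gives a less split-dependent estimate
print(f"5-fold CV accuracy: {cross_val_score(clf, X, y, cv=5).mean():.2f}")
```

For imbalanced business problems one would report precision/recall or AUC rather than accuracy, but the mechanics are the same.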

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Description Job Title: Ecommerce SME Analyst Summary We are seeking an experienced and driven Ecommerce SME Analyst with 8 years of expertise in digital analytics and ecommerce data. In this role, you will analyze clickstream and user behavior data to uncover actionable insights that enhance user experience, optimize conversion funnels, and inform strategic product decisions. You will work extensively with Adobe Analytics, Python, SQL, and BigQuery, and collaborate with cross-functional teams to drive data-informed growth across our ecommerce platform. Key Responsibilities Clickstream & Ecommerce Analysis: Analyze ecommerce clickstream data using Adobe Analytics to understand user journeys, identify drop-off points, and recommend optimizations for improved engagement and conversion. User Behavior Insights: Segment and analyze user behavior to uncover patterns, preferences, and opportunities for personalization and targeting. Data Extraction & Transformation: Use SQL and Python to query, clean, and transform large datasets from BigQuery and other data sources. Visualization & Reporting: Build dashboards and reports using visualization tools (e.g., Tableau, Looker, Power BI) to communicate insights clearly to stakeholders. Product Strategy Support: Partner with product and analytics teams to translate data insights into actionable recommendations that shape the product roadmap. KPI Definition & Tracking: Define and monitor key performance indicators (KPIs) to evaluate the impact of product features and site changes. A/B Testing Analysis: Design and analyze A/B tests to assess the effectiveness of new features and user experience improvements. Cross-Functional Collaboration: Work closely with product managers, marketers, and engineers to understand data needs and deliver timely, relevant insights. Data Quality Assurance: Ensure data accuracy and integrity through validation checks and collaboration with data engineering teams. 
Continuous Learning: Stay current with industry trends, tools, and best practices in ecommerce analytics and data science. Required Qualifications Bachelor's degree in Statistics, Mathematics, Computer Science, Economics, or a related field. Minimum 2 years of experience in ecommerce analytics. Strong hands-on experience with Adobe Analytics for tracking and analyzing user behavior. Proficiency in SQL and Python (including libraries like Pandas, NumPy, Matplotlib, Seaborn). Experience working with Google BigQuery or similar cloud-based data warehouses. Familiarity with data visualization tools (e.g., Tableau, Looker, Power BI). Strong analytical and problem-solving skills. Excellent communication skills to present findings to technical and non-technical audiences. Ability to work independently and collaboratively in a fast-paced environment. Key Skills Adobe Analytics Python (Pandas, NumPy, Matplotlib, Seaborn) SQL Google BigQuery Ecommerce Analytics Clickstream & User Behavior Analysis Data Visualization & Reporting A/B Testing Product Strategy & KPI Tracking Communication & Collaboration Data Quality & Validation Key Details Job Function: IT Software: Software Products & Services Industry: IT-Software Specialization: Information Systems Employment Type: Full Time Mandatory Skills: Adobe Analytics, Python, SQL, BigQuery, ecommerce domain About Company Company: LTIMindtree Job Posted by Company LTIMindtree Ltd. LTIMindtree is a global technology consulting and digital solutions company that enables enterprises across industries to reimagine business models. As a digital transformation partner to more than 750 clients, LTIMindtree brings extensive domain and technology expertise to help drive superior competitive differentiation in a converging world. 
Powered by nearly 90,000 talented and entrepreneurial professionals across more than 30 countries, LTIMindtree - a Larsen & Toubro Group company - combines the industry-acclaimed strengths of erstwhile Larsen and Toubro Infotech and Mindtree in solving the most complex business challenges and delivering transformation at scale. Job Id: 71587467
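The A/B testing analysis this role calls for often reduces to a two-proportion z-test on conversion rates; a small stdlib-only sketch with entirely made-up traffic numbers:

```python
from math import erf, sqrt

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates --
    a standard A/B-test calculation (inputs below are invented)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)            # pooled rate
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))   # pooled standard error
    z = (p_b - p_a) / se
    # Two-sided p-value via the normal CDF, Phi(x) = 0.5*(1 + erf(x/sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Variant B converts 5.6% vs 4.8% for A on 10k users each (made up)
z, p = two_proportion_ztest(conv_a=480, n_a=10_000, conv_b=560, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

In practice one would fix the sample size in advance (power analysis) and avoid peeking at the p-value mid-test.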

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Security represents the most critical priorities for our customers in a world awash in digital threats, regulatory scrutiny, and estate complexity. Microsoft Security aspires to make the world a safer place for all. We want to reshape security and empower every user, customer, and developer with a security cloud that protects them with end to end, simplified solutions. The Microsoft Security organization accelerates Microsoft’s mission and bold ambitions to ensure that our company and industry is securing digital technology platforms, devices, and clouds in our customers’ heterogeneous environments, as well as ensuring the security of our own internal estate. Our culture is centered on embracing a growth mindset, a theme of inspiring excellence, and encouraging teams and leaders to bring their best each day. In doing so, we create life-changing innovations that impact billions of lives around the world. The Defender Experts (DEX) Research team is at the forefront of Microsoft’s threat protection strategy, combining world-class hunting expertise with AI-driven analytics to protect customers from advanced cyberattacks. Our mission is to move protection left—disrupting threats early, before damage occurs—by transforming raw signals into intelligence that powers detection, disruption, and customer trust. We’re looking for a passionate and curious Data Scientist to join this high-impact team. In this role, you'll partner with researchers, hunters, and detection engineers to explore attacker behavior, operationalize entity graphs, and develop statistical and ML-driven models that enhance DEX’s detection efficacy. Your work will directly feed into real-time protections used by thousands of enterprises and shape the future of Microsoft Security. This is an opportunity to work on problems that matter—with cutting-edge data, a highly collaborative team, and the scale of Microsoft behind you. 
Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond. Responsibilities Understand complex cybersecurity and business problems, translate them into well-defined data science problems, and build scalable solutions. Design and build robust, large-scale graph structures to model security entities, behaviors, and relationships. Develop and deploy scalable, production-grade AI/ML systems and intelligent agents for real-time threat detection, classification, and response. Collaborate closely with Security Research teams to integrate domain knowledge into data science workflows and enrich model development. Drive end-to-end ML lifecycle: from data ingestion and feature engineering to model development, evaluation, and deployment. Work with large-scale graph data: create, query, and process it efficiently to extract insights and power models. 
Lead initiatives involving Graph ML, Generative AI, and agent-based systems, driving innovation across threat detection, risk propagation, and incident response. Collaborate closely with engineering and product teams to integrate solutions into production platforms. Mentor junior team members and contribute to strategic decisions around model architecture, evaluation, and deployment. Qualifications Bachelor’s or Master’s degree in Computer Science, Statistics, Applied Mathematics, Data Science, or a related quantitative field 5+ years of experience applying data science or machine learning in a real-world setting, preferably in security, fraud, risk, or anomaly detection Proficiency in Python and/or R, with hands-on experience in data manipulation (e.g., Pandas, NumPy), modeling (e.g., scikit-learn, XGBoost), and visualization (e.g., matplotlib, seaborn) Strong foundation in statistics, probability, and applied machine learning techniques Experience working with large-scale datasets, telemetry, or graph-structured data Ability to clearly communicate technical insights and influence cross-disciplinary teams Demonstrated ability to work independently, take ownership of problems, and drive solutions end-to-end Microsoft is an equal opportunity employer. Consistent with applicable law, all qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
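The entity-graph work described above can be illustrated with a toy sketch; the entities, edge types, and hop limit are invented (not Microsoft's actual schema), and a production system would operate over far larger graph stores:

```python
from collections import defaultdict, deque

# Toy security-entity graph: nodes are entities, edges are observed
# relationships (all names here are illustrative).
edges = [
    ("user:alice", "logged_on", "host:web01"),
    ("host:web01", "connected_to", "host:db01"),
    ("file:payload.exe", "executed_on", "host:web01"),
]

graph = defaultdict(list)
for src, _rel, dst in edges:
    graph[src].append(dst)
    graph[dst].append(src)   # treat relationships as bidirectional here

def blast_radius(start, max_hops=2):
    """Entities reachable from a suspicious node within max_hops --
    a simple breadth-first traversal, the kernel of risk propagation."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_hops:
            continue
        for nbr in graph[node]:
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, depth + 1))
    return seen - {start}

print(blast_radius("file:payload.exe"))
```

Real risk-propagation models would weight edges by confidence and recency rather than treating reachability as binary.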

Posted 2 weeks ago

Apply

3.0 - 5.0 years

1 - 6 Lacs

Noida

Work from Office

Collaborate with teams to understand business needs, design and implement AI solutions, conduct thorough testing, optimize algorithms, stay updated with AI advancements, integrate technologies, and mentor team for innovation.

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About Us: Planful is the pioneer of financial performance management cloud software. The Planful platform, which helps businesses drive peak financial performance, is used around the globe to streamline business-wide planning, budgeting, consolidations, reporting, and analytics. Planful empowers finance, accounting, and business users to plan confidently, close faster, and report accurately. More than 1,500 customers, including Bose, Boston Red Sox, Five Guys, Grafton Plc, Gousto, Specialized and Zappos rely on Planful to accelerate cycle times, increase productivity, and improve accuracy. Planful is a private company backed by Vector Capital, a leading global private equity firm. Learn more at planful.com. About the Role: We are looking for self-driven, self-motivated, and passionate technical experts who would love to join us in solving the hardest problems in the EPM space. If you are capable of diving deep into our tech stack to glean through memory allocations, floating point calculations, and data indexing (in addition to many others), come join us. Requirements: 5+ years in a mid-level Python Engineer role, preferably in analytics or fintech. Expert in Python (Flask, Django, pandas, NumPy, SciPy, scikit-learn) with hands-on performance tuning. Familiarity with AI-assisted development tools and IDEs (Cursor, Windsurf) and modern editor integrations (VS Code + Cline). Exposure to libraries supporting time-series forecasting. Proficient in SQL for complex queries on large datasets. Excellent analytical thinking, problem-solving, and communication skills. Nice to have: Shape financial time-series data: outlier detection/handling, missing-value imputation, techniques for small/limited datasets. Profile & optimize Python code (vectorization, multiprocessing, cProfile). Monitor model performance and iterate to improve accuracy. Collaborate with data scientists and stakeholders to integrate solutions. 
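The time-series shaping listed under "Nice to have" (outlier handling, missing-value imputation) might look like the following minimal sketch, using a robust median/MAD rule; the data and thresholds are made up for illustration:

```python
import numpy as np
import pandas as pd

def clean_series(s: pd.Series, z: float = 3.0) -> pd.Series:
    """Mask outliers beyond `z` robust standard deviations (median/MAD,
    which a single extreme value cannot distort), then interpolate the
    gaps -- one common recipe for small financial time series."""
    med = s.median()
    mad = (s - med).abs().median()
    robust_std = 1.4826 * mad          # MAD -> std under normality
    clipped = s.where((s - med).abs() <= z * robust_std)  # outliers -> NaN
    return clipped.interpolate(limit_direction="both")

# Toy series with one missing value and one obvious bad tick
s = pd.Series([100.0, 101.5, np.nan, 99.8, 5000.0, 102.1])
print(clean_series(s))
```

A plain mean/std rule would fail here: the 5000.0 tick inflates the standard deviation enough to hide itself, which is why the robust estimator is used.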
Why Planful Planful exists to enrich the world by helping our customers and our people achieve peak performance. To foster the best-in-class work we're so proud of, we've created a best-in-class culture, including: 2 Volunteer days, Birthday PTO, and quarterly company Wellness Days 3 months supply of diapers and meal deliveries for the first month of your Maternity/Paternity leave Annual Planful Palooza, our in-person, company-wide culture event Company-wide Mentorship program with Executive sponsorship of CFO and Manager-specific monthly training programs Employee Resource Groups such as Women of Planful, LatinX at Planful, Parents of Planful, and many more. We encourage our teammates to bring their authentic selves to the team, and have full support in creating new ERGs & communities along the way.

Posted 2 weeks ago

Apply

3.0 - 8.0 years

1 - 6 Lacs

Pune

Work from Office

Return to Work Program for Python Professionals: Location: Offline (Baner, Pune) Experience Required: 3+ years Program Duration: 3 Months Program Type: Free Training + Job Assistance (Not a Job Guarantee) Note: Candidate should be ready to learn new technologies. Restart Your Career in High-Demand Tech Fields! If you've experienced a career gap, layoff, or lost a job due to unforeseen circumstances, VishvaVidya's Return to Work Program offers a unique platform to relaunch your tech career with confidence. What We Offer: Free Technical Training: Upskill in Python, Generative AI, Data Science, and other relevant tools. Placement Assistance: Get connected with top hiring partners actively hiring returnees. Hands-on Learning: Work on real-world projects to bridge your experience gap. Mentorship & Confidence Building: Structured sessions to support your transition back to work. Zero Cost: The program is 100% free, fully sponsored by our hiring partners. Eligibility: Minimum 3 years of prior experience in Python development Career break of 6 months to 7 years welcome Eagerness to upskill and return to the workforce Availability for offline sessions in Baner, Pune Why Join VishvaVidya's Return to Work Program? Tailored for career restart seekers Trusted by top tech employers Industry-relevant curriculum curated by expert mentors Build portfolio-worthy projects and prepare for real-world job roles Why Choose VishvaVidya? We believe in second chances and career growth for everyone. Our fully sponsored program equips you with the skills, confidence, and opportunities needed to successfully re-enter the workforce. Apply Today! Your next career chapter starts here!

Posted 2 weeks ago

Apply

2.0 years

3 - 8 Lacs

Delhi

On-site

Coder must have 2 years of experience: Python - primarily for data retrieval, NumPy, Pandas. JavaScript - frameworks like React and Angular. Knowledge of GIS - QGIS or ArcGIS for spatial data. Job Types: Full-time, Permanent Pay: ₹354,035.99 - ₹851,937.64 per year Benefits: Paid sick time Provident Fund Schedule: Day shift Fixed shift Morning shift Work Location: In person Application Deadline: 22/07/2025

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Pune

On-site

Experience: 5+ Years Employment Type: Full-Time About the Role: We are looking for a highly skilled Data Scientist with a strong background in Machine Learning , Statistical Modeling , and hands-on experience working with Generative AI technologies. The ideal candidate will have deep technical expertise in agentic AI systems , RAG (Retrieval-Augmented Generation) architectures , and the ability to implement, fine-tune, and evaluate large language models such as OpenAI, LLaMA , or Cortex . This is a high-impact role where you'll be building intelligent, scalable, and context-aware AI solutions that solve real-world business problems. Key Responsibilities: Design and implement agentic AI systems that leverage memory, planning, and tool-use capabilities. Develop and deploy RAG-based architectures integrating internal data sources with LLMs to enable knowledge-grounded responses. Apply advanced statistical modeling and machine learning techniques to extract insights and predict outcomes from large datasets. Integrate and fine-tune Generative AI models like OpenAI (GPT), LLaMA, or Cortex for custom use cases. Build intelligent pipelines using Python for data preprocessing, model training, and evaluation. Collaborate cross-functionally with product, engineering, and business teams to drive AI/ML adoption. Ensure scalability, accuracy, and ethical usage of AI models in production environments. Required Skills and Qualifications: Bachelor’s or Master’s in Computer Science, Data Science, Statistics, or related field. 5+ years of experience in ML/AI engineering or data science roles. Strong experience with Python , NumPy, Pandas, Scikit-learn, and ML libraries like TensorFlow or PyTorch. Hands-on with Gen AI platforms such as OpenAI , LLaMA , Anthropic , or Cortex AI . Deep understanding of RAG pipelines , vector databases (e.g., FAISS, Pinecone, Weaviate), and embedding techniques. Experience working on agentic AI frameworks like LangChain, AutoGPT, or OpenAgents. 
Solid grounding in statistical analysis, A/B testing, and predictive modeling. Familiarity with prompt engineering, fine-tuning, and evaluation metrics for LLMs. Good understanding of data privacy, model bias, and responsible AI practices. Nice to Have: Experience with tools like LangChain, Haystack, or LLM orchestration frameworks. Exposure to cloud platforms (AWS, GCP, Azure) for deploying ML models. Experience working with MLOps pipelines for productionalizing AI solutions. At TulaPi (pronounced tuu-la-pie), we’re building more than just a company – we’re crafting a movement. A movement that’s redefining what’s possible with data, machine learning, and AI, all powered by Snowflake's industry-leading platform. Think of us as the brainy rebels of the data world, bold enough to dream big and skilled enough to make it happen. We’re not just here to follow trends – we’re here to set them. From solving the most complex data challenges to building next-gen ML/AI solutions, we’re going to chart new territory every day. This is where the best talent comes to push boundaries, flex creative muscles, and make a real impact. At TulaPi, you won’t just be working with cutting-edge tools and technologies – you’ll be shaping the future of what they can do. Whether you’re an architect of the cloud, an engineer with a knack for unlocking AI’s potential, or a strategist ready to disrupt the status quo, we’re looking for trailblazers like you to join our journey. Why Join Us? Big Challenges, Bigger Impact: Work on transformative projects that push the limits of what’s possible in ML/AI. Smart is the Standard: Collaborate with some of the brightest minds in the industry. Global Vision, Local Vibes: Be part of a team that’s global in its ambition but intimate in its culture. Tools of Tomorrow: Gain access to the most advanced data and AI platforms, including Snowflake, and make them dance to your tune. 
Your Playground: A startup environment where your ideas, creativity, and innovation won’t just be welcomed – they’ll be celebrated. Get the chance to work closely with the CEO and CTO, with exposure to strategic decision-making. TulaPi is more than a workplace; it’s a destination for those who want their work to matter, their ideas to fly, and their careers to soar. If you're ready to work hard, dream bigger, and redefine the future of ML/AI, welcome home. Website: Tulapi.ai LinkedIn: https://www.linkedin.com/company/tulapi-ai/ Datafortune Software Solution is a 12+ year old company based out of Pune. Our head office is in Atlanta, Georgia, US. We are around 150+ employees and work with US clients. Enterprise Data Management: Data Engineering, Snowflake, Azure, Power BI, Tableau, SQL Server, SQL Server DBA. Application side: Python, .NET, Angular, Flutter, Node, React, PHP, Vue JS, JavaScript, automation testing, Selenium, load testing, etc. Website: https://datafortune.com/ LinkedIn: https://www.linkedin.com/company/datafortune/posts/?feedView=all
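The RAG retrieval step described in this listing can be sketched in miniature. Here TF-IDF cosine similarity stands in for the learned embeddings and vector database (FAISS, Pinecone, Weaviate) a real pipeline would use, and the documents are invented:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy "knowledge base" (invented internal documents)
docs = [
    "Quarterly revenue grew 12% year over year.",
    "The on-call rotation changes every Monday.",
    "Churn was highest among users inactive for 30 days.",
]

vec = TfidfVectorizer().fit(docs)
doc_mat = vec.transform(docs)          # one sparse row per document

def retrieve(query: str, k: int = 1):
    """Return the top-k documents by cosine similarity -- the
    retrieval step of a RAG pipeline. The retrieved text would then
    be placed into the LLM prompt as grounding context."""
    scores = cosine_similarity(vec.transform([query]), doc_mat)[0]
    return [docs[i] for i in scores.argsort()[::-1][:k]]

print(retrieve("revenue growth"))
```

Swapping TF-IDF for dense embeddings and the list for a vector store changes the components but not the shape of the pipeline: embed, retrieve top-k, stuff into the prompt.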

Posted 2 weeks ago

Apply

3.0 years

2 - 13 Lacs

India

On-site

Role Overview We are seeking a skilled and self-motivated AI/ML Engineer to join our growing team. You will be responsible for designing, developing, training, and deploying machine learning models and AI systems to solve practical, real-life problems. Key Responsibilities Design and implement machine learning models for real-world business applications. Analyze and preprocess large datasets from various structured and unstructured sources. Train, validate, and fine-tune models using classical ML and/or deep learning methods. Collaborate with product and engineering teams to integrate models into production systems. Build end-to-end ML pipelines (data ingestion → model training → deployment). Monitor and improve model performance over time with live feedback data. Document model architecture, performance metrics, and deployment processes. Required Skills and Experience 3–5 years of hands-on experience in AI/ML engineering. Strong knowledge of machine learning algorithms (classification, regression, clustering, etc.). Experience with deep learning frameworks (TensorFlow, PyTorch, Keras). Proficient in Python, with experience in libraries like scikit-learn, pandas, NumPy, etc. Experience with NLP, computer vision, or time-series models is a plus. Understanding of MLOps practices and tools (MLflow, DVC, Docker, etc.). Exposure to deploying ML models via REST APIs or cloud services (AWS/GCP/Azure). Familiarity with data versioning, model monitoring, and re-training workflows. Preferred Qualifications Bachelor's or Master's degree in Computer Science, Data Science, AI, or a related field. Published work on GitHub or contributions to open-source AI/ML projects. Certifications in AI/ML, cloud computing, or data engineering. 
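The end-to-end pipeline this role describes (data ingestion → model training → deployment) can be sketched with scikit-learn; the data is synthetic and pickle serialization stands in for a real model registry or serving layer:

```python
import pickle

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Ingest: synthetic data standing in for a real source
X, y = make_classification(n_samples=400, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train: preprocessing + model as one unit, so the exact same
# transforms run at inference time as at training time
pipe = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression()),
]).fit(X_train, y_train)

# "Deploy": serialize the fitted pipeline; a serving layer (REST API,
# batch job) would load it and call .predict() on live feedback data
blob = pickle.dumps(pipe)
restored = pickle.loads(blob)
print(f"accuracy: {restored.score(X_test, y_test):.2f}")
```

Bundling the scaler inside the pipeline is what prevents train/serve skew, one of the monitoring concerns the responsibilities list alludes to.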
Contact US Email: careers@crestclimbers.com Phone: +91 94453 30496 Website: www.crestclimbers.com Office: Kodambakkam, Chennai Job Types: Full-time, Permanent Schedule: Day shift Work Location: In person Job Types: Full-time, Permanent Pay: ₹298,197.62 - ₹1,398,461.03 per year Work Location: In person Expected Start Date: 21/07/2025

Posted 2 weeks ago

Apply

8.0 years

4 - 8 Lacs

Bengaluru

On-site

Company Description WNS (Holdings) Limited (NYSE: WNS) is a leading Business Process Management (BPM) company. We combine our deep industry knowledge with technology and analytics expertise to co-create innovative, digital-led transformational solutions with clients across 10 industries. We enable businesses in Travel, Insurance, Banking and Financial Services, Manufacturing, Retail and Consumer Packaged Goods, Shipping and Logistics, Healthcare, and Utilities to re-imagine their digital future and transform their outcomes with operational excellence. We deliver an entire spectrum of BPM services in finance and accounting, procurement, customer interaction services and human resources, leveraging collaborative models that are tailored to address the unique business challenges of each client. We co-create and execute the future vision of 400+ clients with the help of our 44,000+ employees. Job Description Roles and responsibilities The ideal candidate should have strong communication skills to effectively engage with both technical and non-technical stakeholders. The candidate will be responsible for developing end-to-end task/workflow automation pipelines in Python using a wide range of data sources (both on-premise and cloud). The candidate should have strong working experience in transforming Excel-based manual processes into fully automated Python-based processes with strong governance around them. The person should be competent in Python programming and possess high levels of analytical skill - data pre-processing, engineering, data pipeline development, and automation. In-depth knowledge of libraries such as pandas, NumPy, scikit-learn, openpyxl, pyxlsb, TensorFlow, PyTorch, etc. Well-versed with Python coding standards and formatting conventions to ensure maintainable, scalable, and reusable modules. Build and automate workflows using Microsoft Power Platform (Power BI, Power Apps, Power Automate). Experience integrating systems and automating workflows using WTW Unify. Knowledge of 
Dataiku is a plus.Apply GenAI techniques to enhance data exploration, automate content generation, and support decision-making.Ensure data quality, governance, and compliance with organizational standards.Well-versed with CI/CD pipelines using Azure DevOps (ADO) for seamless deployment and integration of data science solutions.Experience working on Posit Workbench, Posit Connect will be an added advantageStay updated with the latest trends in AI, machine learning, and data engineering.Tools/Tech experience – Mandatory – Python (Data processing, Engineering & Automation), SQL, Proficiency with version control systems like ADO/BitbucketPreferred - R programming, Posit Workbench, R Shiny Experience processing large amount of data using BigData technologies is preferredFamiliarity with Microsoft Power Platform tools.Knowledge of Dataiku is a plus.Familiarity with WTW Unify platform and its applications in analytics.Knowledge of Generative AI models and frameworks (e.g., GPT, DALL·E, Llama).Knowledge of data visualization tools and techniques is a plus Functional/Other expertiseRelevant experience: 8+ years of experience using Python programming Language for end-to-end data pre-processing, transformation and automationExperience in the Insurance domain preferred (for e.g. Finance, Actuarial) Qualifications Educational Qualification: Masters in Statistics/Mathematics/Economics/Econometrics from Tier 1 institutions Or BE/B-Tech, MCA or MBA from Tier 1 institutions
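The core requirement above, turning a manual Excel process into a governed Python pipeline, can be sketched roughly as below. This is a minimal illustration, not the employer's actual codebase: the column names, schema, and aggregation are hypothetical, and a real pipeline would read the workbook with pandas/openpyxl rather than build the frame in memory.

```python
import pandas as pd

# Hypothetical governance contract: required columns and their dtypes.
REQUIRED_COLUMNS = {"policy_id": "int64", "premium": "float64", "region": "object"}

def validate(df: pd.DataFrame) -> pd.DataFrame:
    """Fail fast if the input breaks the agreed schema (the governance step)."""
    missing = set(REQUIRED_COLUMNS) - set(df.columns)
    if missing:
        raise ValueError(f"missing columns: {sorted(missing)}")
    return df.astype(REQUIRED_COLUMNS)

def transform(df: pd.DataFrame) -> pd.DataFrame:
    """Replace the manual Excel step: drop incomplete rows, total premium by region."""
    clean = df.dropna(subset=["premium"])
    return (clean.groupby("region", as_index=False)["premium"]
                 .sum()
                 .sort_values("region", ignore_index=True))

if __name__ == "__main__":
    # In practice this frame would come from pd.read_excel(...) on the source workbook.
    raw = pd.DataFrame({
        "policy_id": [1, 2, 3, 4],
        "premium": [100.0, 250.0, None, 75.0],
        "region": ["East", "West", "East", "East"],
    })
    print(transform(validate(raw)))
```

Keeping validation separate from transformation is what makes the "strong governance" claim testable: schema breaks surface as a single clear error instead of silently wrong numbers downstream.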

Posted 2 weeks ago

Apply

0 years

4 - 7 Lacs

Surat

On-site

Job description

Primary role: Writing efficient, reusable, testable, and scalable code. Developing backend components to enhance performance and responsiveness, server-side logic and platform, and statistical learning models. Integrating user-facing elements into applications. Improving the functionality of existing systems. Working with Python libraries like pandas, NumPy, etc. Creating models for AI- and ML-based features. Coordinating with internal teams to understand user requirements and provide technical solutions.

Job Overview (8098)
Experience: 30 Month(s)
City: Surat
Qualification: M.Sc, MCA, PGDCA
Area of Expertise: Python
Preferred Gender: Male
Function: AI & ML Audio / Video
Profile: NA
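The "pandas, NumPy" requirement above typically means vectorized data work such as feature preparation for ML models. A minimal, hypothetical sketch (the function name and data are illustrative, not from the posting):

```python
import numpy as np

def standardize(x: np.ndarray) -> np.ndarray:
    """Vectorized z-score scaling, a common step before fitting a model."""
    return (x - x.mean()) / x.std()

# Example: scale a small feature column so it has zero mean and unit variance.
scores = np.array([10.0, 20.0, 30.0, 40.0])
z = standardize(scores)
```

The vectorized expression operates on the whole array at once, which is the idiom interviewers usually look for instead of an explicit Python loop.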

Posted 2 weeks ago

Apply