3.0 - 7.0 years
0 Lacs
hyderabad, telangana
On-site
In this role, you will be responsible for:
- Building & maintaining ETL/ELT pipelines to ensure data accuracy and quality.
- Working with large datasets to collect, clean, and prepare data for analysis.
- Designing & implementing predictive models and collaborating with Data Science teams.
- Creating dashboards & reports using tools like Tableau, Power BI, Matplotlib, and Seaborn.
- Working with cloud platforms such as AWS, Azure, and GCP, and databases like SQL, MongoDB, PostgreSQL, and MySQL.
- Collaborating with cross-functional teams to solve business problems using data.

Preferred Skills:
- Proficiency in Big Data tools like Spark and Hadoop.
- Experience with ML frameworks such as TensorFlow, PyTorch, and Scikit-learn.
- Strong programming skills in Python/R, particularly with Pandas and NumPy.
- Knowledge of data engineering & warehousing concepts.

The company focuses on employee well-being and growth opportunities, including:
- Health insurance
- Occasional WFH options as per policy
- Paid time off for personal needs
- Exposure to client interactions & real-world business challenges
- Opportunity to work on cutting-edge cross-functional projects

Please note that this is a full-time position with the work location being in person.
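The ETL/data-cleaning responsibilities above can be sketched as a minimal pandas transform step. The column names and cleaning rules here are illustrative assumptions, not taken from the posting:

```python
import pandas as pd

def clean_sales(raw: pd.DataFrame) -> pd.DataFrame:
    """Minimal transform step: dedupe, require values, fix types, drop bad rows."""
    df = raw.drop_duplicates(subset="order_id")    # one row per order
    df = df.dropna(subset=["amount"])              # require an amount
    df["amount"] = df["amount"].astype(float)      # cast string amounts
    df = df[df["amount"] > 0]                      # discard negative/garbage rows
    return df.reset_index(drop=True)

raw = pd.DataFrame({
    "order_id": [1, 1, 2, 3, 4],
    "amount":   ["10.5", "10.5", None, "7.0", "-3"],
})
clean = clean_sales(raw)
print(len(clean))  # prints 2
```

In a real pipeline the same function would sit behind an orchestrator and validate against a schema, but the shape of the step is the same.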
Posted 2 days ago
0.0 - 3.0 years
0 Lacs
jodhpur, rajasthan
On-site
As a Full Stack Developer (AI/ML) at our company in Jodhpur, you will work on cutting-edge AI-powered applications, develop and deploy ML models, and integrate them into full-stack web solutions.

Key Responsibilities:
- Design, develop, and deploy AI/ML models for real-world applications.
- Work with Python, TensorFlow, PyTorch, Scikit-Learn, and OpenCV for machine learning tasks.
- Develop RESTful APIs to integrate AI models with web applications.
- Implement and maintain backend services using Node.js, FastAPI, or Flask.
- Build and enhance frontend UIs using React.js, Vue.js, or Angular.
- Manage databases (SQL & NoSQL, e.g. PostgreSQL, MongoDB, Firebase) for storing model outputs.
- Optimize AI models for performance, scalability, and deployment using Docker, Kubernetes, or cloud services (AWS, GCP, Azure).
- Work on data preprocessing, feature engineering, and model evaluation.
- Deploy models using MLOps pipelines (CI/CD, model versioning, monitoring).
- Collaborate with cross-functional teams for AI integration into business applications.

Required Skills & Qualifications:
- 0-1 year of experience in AI/ML and full-stack development.
- Strong proficiency in Python, TensorFlow, PyTorch, Scikit-Learn, Pandas, NumPy.
- Experience with backend development (Node.js, FastAPI, Flask, or Django).
- Knowledge of frontend frameworks (React.js, Vue.js, or Angular).
- Familiarity with database management (SQL: PostgreSQL, MySQL | NoSQL: MongoDB, Firebase).
- Understanding of machine learning workflows, model training, and deployment.
- Experience with cloud services (AWS SageMaker, Google AI, Azure ML).
- Knowledge of Docker, Kubernetes, and CI/CD pipelines for ML models.
- Basic understanding of Natural Language Processing (NLP), Computer Vision (CV), and Deep Learning.
- Familiarity with data visualization tools (Matplotlib, Seaborn, Plotly, D3.js).
- Strong problem-solving and analytical skills.
- Good communication and teamwork skills.

Preferred Qualifications:
- Experience with LLMs (GPT, BERT, Llama) and AI APIs (OpenAI, Hugging Face).
- Knowledge of MLOps, model monitoring, and retraining pipelines.
- Understanding of Reinforcement Learning (RL) and Generative AI.
- Familiarity with Edge AI and IoT integrations.
- Previous internship or project experience in AI/ML applications.

In this role, you will receive a competitive salary package, hands-on experience with AI-driven projects, mentorship from experienced AI engineers, a flexible work environment (Remote/Hybrid), and career growth and learning opportunities in AI/ML.
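The "data preprocessing, feature engineering" responsibility above usually reduces to a few standard transforms. A dependency-light NumPy sketch of two of them, standardization and one-hot encoding (the sample data is invented for illustration):

```python
import numpy as np

def standardize(x: np.ndarray) -> np.ndarray:
    """Zero-mean, unit-variance scaling, a common numeric preprocessing step."""
    return (x - x.mean()) / x.std()

def one_hot(labels: list[str]) -> np.ndarray:
    """Encode categorical labels as one-hot vectors, categories sorted alphabetically."""
    cats = sorted(set(labels))
    return np.array([[1.0 if lab == c else 0.0 for c in cats] for lab in labels])

x = np.array([1.0, 2.0, 3.0, 4.0])
z = standardize(x)                      # mean ~0, std ~1
enc = one_hot(["red", "blue", "red"])   # shape (3, 2), columns = ["blue", "red"]
```

Libraries like scikit-learn provide these as `StandardScaler` and `OneHotEncoder`; writing them out makes the math explicit.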
Posted 2 days ago
2.0 - 7.0 years
0 Lacs
karnataka
On-site
As an Associate Consultant/Consultant, you will play a crucial role in designing and implementing AI/ML models and data-driven solutions to tackle intricate challenges in healthcare and life sciences.

Your key responsibilities will involve:
- Designing and implementing AI/ML models and data-driven solutions for complex challenges in healthcare and life sciences
- Analyzing large datasets to identify trends, generate insights, and support data-informed decision-making
- Collaborating cross-functionally with consultants, data engineers, and scientists to build end-to-end data pipelines and analytical workflows
- Engaging with clients through interviews, workshops, and presentations to contribute to deliverables and foster strong relationships
- Translating complex analyses into clear, actionable insights for both technical and non-technical stakeholders
- Continuously learning about AI/ML advancements, contributing to research and proposals, and mentoring junior team members

Qualifications required for this role include proficiency in Machine Learning, Python, Gen AI, and consulting/stakeholder management.

Desirable technical skills include:
- Proficiency in Python and experience with frameworks like TensorFlow, PyTorch, or scikit-learn
- Working with generative AI models such as GPT, BERT, LLaMA, or other transformer-based architectures
- Developing, testing, and deploying high-quality AI models and solutions
- Proficiency in data manipulation, cleaning, and analysis
- Creating clear visualizations using tools like Matplotlib, Seaborn, or Tableau
- Familiarity with cloud services like AWS, Google Cloud, or Azure for deploying AI solutions
Posted 2 days ago
2.0 - 3.0 years
4 - 5 Lacs
mumbai
Work from Office
Job role:
- Work on heavy data extraction, generation of reports, and visualising data, presenting analysis in the agreed format
- Responsible for data support, segmentation/scorecards, analytics, and customer insights

Job requirements:
- SQL & advanced Python knowledge, with a proven track record in Python scripting using NumPy, Pandas, Seaborn, Matplotlib, scikit-learn, etc.
- Model building: classification, regression, recommendation systems, etc.; SAS EG/E-Miner knowledge is a plus
- Numerical skills: able to acquire data, use it to target selected groups, and analyse the success or otherwise of campaigns
- Past experience with deployment on cloud systems & API integration is preferred
- At least 2-3 years of experience in data handling roles; must be excellent with data visualisation
- Comfortable working with heavy datasets

Personality traits:
- Excellent organizational skills; level-headed, with good interpersonal skills
- Analytical bent of mind and an active interest in data analysis
- Highly logical and adaptable toward new technologies
- Adheres to documented process and compliance requirements, assisting seniors on diversified projects
- Motivated and determined, with a zeal to handle responsibilities and take initiative
- Strong communication skills for effective interaction with business stakeholders
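The "model building like classification" requirement above is normally met with scikit-learn; as a dependency-light illustration of the idea, here is a from-scratch nearest-centroid classifier in NumPy (the toy data is invented):

```python
import numpy as np

class NearestCentroid:
    """Toy classifier: predict the class whose per-class feature mean is closest."""

    def fit(self, X: np.ndarray, y: np.ndarray) -> "NearestCentroid":
        self.classes_ = np.unique(y)
        # One centroid (feature mean) per class.
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X: np.ndarray) -> np.ndarray:
        # Euclidean distance from every sample to every centroid.
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.classes_[d.argmin(axis=1)]

X = np.array([[0.0, 0.1], [0.2, 0.0], [5.0, 5.1], [4.8, 5.0]])
y = np.array([0, 0, 1, 1])
model = NearestCentroid().fit(X, y)
print(model.predict(np.array([[0.1, 0.0], [5.0, 5.0]])))  # → [0 1]
```

scikit-learn ships the same idea as `sklearn.neighbors.NearestCentroid`, with the same `fit`/`predict` interface.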
Posted 3 days ago
0.0 - 4.0 years
0 Lacs
hyderabad, telangana
On-site
As a member of Beghou Consulting's software development team, your role will involve the development and maintenance of analytical and machine learning models. Your primary responsibility will be to provide efficient solutions to business requests by understanding, cleaning, and transforming data into a usable format. Additionally, you will use data visualizations to present findings, identify issues, and offer solutions to mitigate and address these issues. It is essential to maintain code quality, organization, and automation while continuously expanding your knowledge through research. You will also need to comply with corporate quality policies and procedures and ensure timely completion of all training requirements.

Key Responsibilities:
- Understand, clean, and transform data into a usable format
- Utilize data visualizations to present findings
- Identify issues and provide solutions to mitigate and address them
- Maintain code quality, organization, and automation
- Continuously expand knowledge through research
- Comply with corporate quality policies and procedures
- Ensure timely completion of all training requirements

Qualifications Required:
- Strong programming skills in Python
- Proficiency in libraries such as Pandas, NumPy, scikit-learn, TensorFlow or PyTorch, Matplotlib, Seaborn
- Solid understanding of machine learning concepts and algorithms
- Familiarity with AI integration
- Knowledge of relational databases and SQL queries
- Experience with code versioning tools like Git

Desired Soft Skills:
- Excellent communication skills to interact with business stakeholders and technical development teams
- Strong listening skills to understand customer needs
- Critical and creative thinking abilities coupled with brainstorming skills
- Teamwork and collaboration skills across different teams
- Adaptability to changing and increasing workloads
- Ability to assess and identify bottlenecks and potential failures
- Fluency in spoken and written English

At Beghou Consulting, we believe in treating our employees with respect and appreciation, valuing their talents and abilities, and fostering a supportive, collaborative, and dynamic work environment. We provide opportunities for professional and personal growth by encouraging learning from all levels within the organization. Our commitment to quality results and leadership in sales and marketing analytics has led to steady growth throughout our history.
Posted 4 days ago
2.0 - 5.0 years
8 - 12 Lacs
pune
Work from Office
Job Purpose
We are seeking a skilled Python Developer with 2+ years of experience in Python, FastAPI, data libraries (NumPy, Pandas, Seaborn, Matplotlib), NoSQL, Git, and Linux, who can independently deliver high-quality code and take ownership of action items. Experience with Django, Java, JavaScript, Event Hub, or Agile is an added advantage. This role specifically supports account aggregator project development and support.

Duties and Responsibilities
- Write clean, efficient, and scalable Python code.
- Build APIs and backend services using FastAPI.
- Work extensively with NumPy, Pandas, Matplotlib, and Seaborn for data manipulation and visualization.
- Integrate with NoSQL databases.
- Collaborate using Git for version control and VS Code as the development environment.
- Operate effectively in Linux-based environments.
- Participate in code reviews, debugging, and optimization tasks.
- Follow through on assigned action items and project deliverables.

Key Decisions / Dimensions
- Demonstrates ownership and consistently delivers high-quality, timely results.
- Writes clean, production-ready code with strong attention to detail and best practices.
- Collaborates effectively within a team, showing strong communication and interpersonal skills.
- Continuously learns and adopts new tools, technologies, and industry best practices.

Major Challenges
- Keeping up to date with the latest technology
- Continuously optimizing application code for efficient resource usage and scalability
- Handling production issues and communicating them to business teams
- Conducting systematic root cause analysis
- Maintaining system uptime at 100%

Required Qualifications and Experience
a) Qualifications: Graduation or post-graduation in Computer Science
b) Work Experience: 2-5 years

Mandatory Skills (tasks may differ based on the skill sets below):
- Minimum 2 years of experience in Python programming
- Proficiency in NumPy, Pandas, Seaborn, Matplotlib
- FastAPI
- NoSQL databases (e.g., MongoDB, Cassandra)
- Git, VS Code, Linux
- Strong problem-solving and debugging skills
- Ability to work independently and meet deadlines

Nice to Have (Preferred but not Mandatory):
- Experience with Django (web framework)
- Familiarity with Java or JavaScript
- Exposure to Event Hubs or message-driven architectures
- Understanding of Agile methodologies
Posted 4 days ago
4.0 - 8.0 years
8 - 16 Lacs
bengaluru
Work from Office
Role & responsibilities:
1. Design and implement anomaly detection models using statistical and unsupervised learning techniques.
2. Optimize AI models for edge devices.
3. Work with time-series and sensor data, applying appropriate preprocessing, noise reduction, and feature extraction techniques.
4. Utilize deep learning architectures such as CNNs, RNNs, LSTMs, GRUs, and GANs.
5. Perform model evaluation, optimization, cross-validation, and hyperparameter tuning.
6. Visualize and analyze datasets using Python tools like Pandas, Matplotlib, and Seaborn.
7. Collaborate with cross-functional teams to translate business needs into AI-powered solutions.

Good to have:
1. Sensor data processing (noise reduction, feature extraction, preprocessing techniques).
2. Gen AI experience and applying the RAG technique would be added value.

Interested candidates may share their updated CV at jeeva@bvrpc.com
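A minimal sketch of the statistical anomaly detection this role describes, using the robust (median-based) z-score that works well on noisy sensor data with outliers. The signal and threshold are illustrative assumptions:

```python
import numpy as np

def mad_anomalies(signal: np.ndarray, threshold: float = 3.5) -> np.ndarray:
    """Flag indices whose robust z-score exceeds `threshold`.

    Uses the median absolute deviation (MAD) instead of the standard
    deviation, so a large spike cannot mask itself by inflating the scale.
    The 0.6745 factor makes MAD comparable to one sigma for normal data.
    """
    med = np.median(signal)
    mad = np.median(np.abs(signal - med))
    robust_z = 0.6745 * (signal - med) / mad
    return np.flatnonzero(np.abs(robust_z) > threshold)

signal = np.array([0.1, -0.2, 0.0, 0.3, -0.1, 8.0, 0.2, -0.3, 0.1, 0.0])
print(mad_anomalies(signal))  # → [5]
```

A plain mean/std z-score on this same signal scores the spike at only about 3 sigmas, because the spike inflates the standard deviation; the MAD version flags it cleanly.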
Posted 4 days ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
As an AI/ML Developer, you will be responsible for utilizing programming languages such as Python for AI/ML development. Your proficiency in libraries like NumPy and Pandas for data manipulation, Matplotlib, Seaborn, and Plotly for data visualization, and Scikit-learn for classical ML algorithms will be crucial. Familiarity with R, Java, or C++ is a plus, especially for performance-critical applications.

Your role will involve building models using machine learning & deep learning frameworks such as TensorFlow and Keras for deep learning, PyTorch for research-grade and production-ready models, and XGBoost, LightGBM, or CatBoost for gradient boosting. Understanding model training, validation, hyperparameter tuning, and evaluation metrics like ROC-AUC, F1-score, and precision/recall will be essential.

In Natural Language Processing (NLP), you will work with text preprocessing techniques like tokenization, stemming, and lemmatization, vectorization techniques such as TF-IDF, Word2Vec, and GloVe, and Transformer-based models like BERT, GPT, and T5 using Hugging Face Transformers. Experience with text classification, named entity recognition (NER), question answering, or chatbot development is required.

For Computer Vision (CV), your experience with image classification, object detection, segmentation, and libraries like OpenCV, Pillow, and Albumentations will be utilized. Proficiency in pretrained models (e.g., ResNet, YOLO, EfficientNet) and transfer learning is expected.

You will also handle data engineering & pipelines, building and managing data ingestion and preprocessing pipelines using tools like Apache Airflow, Luigi, Pandas, and Dask. Experience with structured (CSV, SQL) and unstructured (text, images, audio) data is beneficial.

Your role will further involve model deployment & MLOps: deploying models as REST APIs using Flask, FastAPI, or Django, as batch jobs, or as real-time inference services. Familiarity with Docker for containerization, Kubernetes for orchestration, and MLflow, Kubeflow, or SageMaker for model tracking and lifecycle management will be necessary.

Hands-on experience with at least one cloud provider such as AWS (S3, EC2, SageMaker, Lambda), Google Cloud (Vertex AI, BigQuery, Cloud Functions), or Azure (Machine Learning Studio, Blob Storage) is required, along with an understanding of cloud storage, compute services, and cost optimization.

Proficiency in SQL for querying relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB, Cassandra), plus familiarity with big data tools like Apache Spark, Hadoop, or Databricks, will be valuable. Experience with Git and platforms like GitHub, GitLab, or Bitbucket is essential for version control and collaboration, and familiarity with Agile/Scrum methodologies and tools like JIRA, Trello, or Asana is also beneficial.

You will also write unit and integration tests for ML code, using tools like pytest, unittest, and debuggers to ensure code quality.

This position is full-time and permanent, with benefits including Provident Fund and a work-from-home option. The work location is in person.
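The TF-IDF vectorization mentioned among the NLP requirements above can be written out in a few lines of standard-library Python; the toy corpus is invented for illustration:

```python
import math
from collections import Counter

def tfidf(docs: list[list[str]]) -> list[dict]:
    """Term frequency * inverse document frequency for pre-tokenized docs."""
    n = len(docs)
    # Document frequency: in how many docs does each term appear?
    df = Counter(t for d in docs for t in set(d))
    out = []
    for d in docs:
        tf = Counter(d)
        # tf is normalized by doc length; idf is log(N / df).
        out.append({t: (tf[t] / len(d)) * math.log(n / df[t]) for t in tf})
    return out

docs = [["solar", "panel", "fault"],
        ["solar", "inverter"],
        ["fault", "report", "fault"]]
scores = tfidf(docs)
print(scores[0])
```

Note that "panel" (appearing in one document) outscores "solar" (appearing in two): terms concentrated in few documents carry more information. scikit-learn's `TfidfVectorizer` adds smoothing and normalization on top of this same idea.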
Posted 4 days ago
10.0 - 14.0 years
0 Lacs
karnataka
On-site
Role Overview:
Enphase Energy is seeking a Quality Analyst to join their dynamic team in designing and developing next-gen energy technologies. As the Quality Analyst, your main responsibility will be to sample the right gateways from 4 million sites, create batches, monitor field upgrades, analyze product quality data, and interpret the data throughout the product lifecycle. You will work closely with various stakeholders to provide detailed analysis results.

Key Responsibilities:
- Understand Enphase products and factory releases to develop and maintain end-to-end field update data and visualization models.
- Collaborate with cross-functional teams to understand key software releases, compatible hardware, and gaps, and create data models for successful field upgrades.
- Monitor field upgrades using data mining techniques and analytics tools like SQL, Excel, and Power BI, and automate processes to identify and alert teams of any issues resulting from upgrades.
- Automate generation and monitoring of deployment-related metrics for pre-/post-upgrade analysis using Python or other scripting languages.
- Identify opportunities for leveraging data to drive product/process improvements by working with stakeholders across the organization.
- Troubleshoot field upgrade issues, escalate to relevant teams, and drive corrective actions by modifying data models and implementing quality control measures.

Qualifications Required:
- Bachelor's degree in a quantitative field such as computer science, data science, electrical engineering, or a related discipline, with 10+ years of experience working with global stakeholders.
- Proven experience as a Data Analyst or in a similar role, preferably in the energy or solar industry.
- Strong proficiency in Python programming for data analysis, including libraries like Pandas, NumPy, Scikit-learn, Plotly, Seaborn, or Matplotlib.
- Solid understanding of statistical concepts and experience with statistical analysis tools.
- Experience with data warehousing concepts and proficiency in SQL for data extraction and manipulation.
- Proficiency in data visualization tools such as Excel, Incorta, Tableau, Power BI.
- Excellent analytical, critical thinking, and problem-solving skills with the ability to translate complex data into actionable insights.
- Strong attention to detail and commitment to delivering high-quality work within tight deadlines.
- Excellent communication and presentation skills to effectively convey technical concepts to non-technical stakeholders.
- Proactive and self-motivated with a passion for data analysis and the solar market.
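The pre-/post-upgrade metric monitoring described above can be sketched with a small pandas aggregation. The telemetry schema (one row per gateway check-in, with `phase` and `status` columns) is a hypothetical stand-in for the real upgrade data:

```python
import pandas as pd

# Hypothetical field-upgrade telemetry: one row per gateway check-in.
df = pd.DataFrame({
    "phase":  ["pre", "pre", "post", "post", "post"],
    "status": ["ok", "error", "ok", "ok", "error"],
})

# Share of each status within each upgrade phase.
summary = (df.groupby("phase")["status"]
             .value_counts(normalize=True)
             .rename("rate")
             .reset_index())

# Compare error rates before and after the upgrade.
error_rates = summary[summary["status"] == "error"].set_index("phase")["rate"]
print(error_rates.to_dict())
```

A real monitoring job would compute this per firmware version and alert when the post-upgrade error rate exceeds the pre-upgrade baseline by some margin.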
Posted 5 days ago
3.0 - 7.0 years
0 Lacs
chennai, tamil nadu
On-site
Role Overview:
You will be a data scientist responsible for developing and deploying impactful data solutions using your expertise in machine learning and statistical analysis. Your role will involve designing and implementing predictive models, collaborating with stakeholders, and driving strategic decisions through experimentation and data-driven insights.

Key Responsibilities:
- Design and implement robust machine learning solutions including regression, classification, NLP, time series, and recommendation systems.
- Evaluate and tune models using appropriate metrics like AUC, RMSE, precision/recall, etc.
- Work on feature engineering, model interpretability, and performance optimization.
- Partner with stakeholders to identify key opportunities where data science can drive business value.
- Translate business problems into data science projects with clearly defined deliverables and success metrics.
- Provide actionable recommendations based on data analysis and model outputs.
- Conduct deep-dive exploratory analysis to uncover trends and insights.
- Apply statistical methods to test hypotheses, forecast trends, and measure campaign effectiveness.
- Design and analyze A/B tests and other experiments to support product and marketing decisions.
- Automate data pipelines and dashboards for ongoing monitoring of model and business performance.

Qualifications Required:
- Proficiency in Python and MySQL.
- Experience with libraries/frameworks such as scikit-learn, Pandas, NumPy, time series/ARIMA, Bayesian models, Market Mix models, regression, XGBoost, LightGBM, TensorFlow, PyTorch.
- Proficiency in statistical techniques like hypothesis testing, regression analysis, and time-series analysis.
- Familiarity with databases like PostgreSQL, BigQuery, and MySQL.
- Knowledge of visualization tools like Plotly, Seaborn, and Matplotlib.
- Experience with GitLab and CI/CD pipelines is a plus.
- Familiarity with cloud platforms like AWS, GCP, or Azure services is also advantageous.
No additional details about the company are provided in the job description.
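The A/B test analysis responsibility above typically comes down to a two-proportion z-test on conversion rates, which can be sketched with only the standard library (the sample counts are invented):

```python
import math

def two_proportion_ztest(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """z statistic and two-sided p-value for a difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that both variants convert equally.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal CDF, via erf.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Variant B converts 165/2400 vs A's 120/2400.
z, p = two_proportion_ztest(conv_a=120, n_a=2400, conv_b=165, n_b=2400)
print(round(z, 2), round(p, 4))
```

Here the lift is significant at the usual 5% level; in practice a library such as `statsmodels.stats.proportion.proportions_ztest` performs the same computation.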
Posted 5 days ago
0.0 - 1.0 years
3 - 6 Lacs
hyderabad
Hybrid
You will be responsible for the development and maintenance of analytical and machine learning models as part of Beghou's software development team, providing efficient solutions to business requests in line with client requirements.

We'll trust you to:
- Understand the data and clean and transform it into a usable format
- Use data visualizations to present findings
- Identify issues and provide solutions to mitigate and address them
- Help maintain code quality, organization, and automation
- Continuously expand your body of knowledge via research
- Comply with corporate quality policies and procedures
- Ensure all training requirements are completed in a timely manner

You'll need to have:
- Strong programming skills in Python
- Strong knowledge of libraries such as Pandas, NumPy, scikit-learn, TensorFlow or PyTorch, Matplotlib, and Seaborn
- Strong understanding of machine learning concepts and algorithms
- Familiarity with AI integration
- Familiarity with relational databases and SQL queries
- Familiarity with code versioning tools such as Git
- Excellent communication skills, with the ability to communicate with business stakeholders as well as technical development teams
- Excellent listening skills to understand customer needs
- Excellent critical and creative thinking skills and the ability to brainstorm ideas
- Ability to work in a team and collaborate across teams
- Ability to adapt to changing and increasing workloads
- Ability to assess and identify bottlenecks and potential failures
- Fluency in English (spoken and written)

What you should know:
We treat our employees with respect and appreciation, not only for what they do but for who they are. We value the many talents and abilities of our employees and promote a supportive, collaborative, and dynamic work environment that encourages both professional and personal growth. You will have the opportunity to work with and learn from all levels of the organization, allowing everyone to work together to develop, achieve, and succeed with every project. We have had steady growth throughout our history because the people we hire are committed not only to delivering quality results for our clients but also to becoming leaders in sales and marketing analytics.
Posted 5 days ago
3.0 - 7.0 years
0 Lacs
kolkata, west bengal
On-site
The Retail Specialized Data Scientist will play a pivotal role in utilizing advanced analytics, machine learning, and statistical modeling techniques to help the retail business make data-driven decisions. This individual will work closely with teams across marketing, product management, supply chain, and customer insights to drive business strategies and innovations. The ideal candidate should have experience in retail analytics and the ability to translate data into actionable insights.

Key Responsibilities:
- Utilize a deep understanding of the retail industry to design AI solutions for critical business needs.
- Gather and clean data from various retail sources like sales transactions, customer interactions, inventory management, website traffic, and marketing campaigns.
- Apply machine learning algorithms for classification, clustering, regression, and deep learning to enhance predictive models.
- Utilize AI-driven techniques for personalization, demand forecasting, and fraud detection.
- Implement advanced statistical methods to optimize existing use cases and develop new products.
- Stay informed about the latest trends in data science and retail technology.
- Collaborate with executives, product managers, and marketing teams to translate insights into business actions.

Professional Skills:
- Strong analytical and statistical skills.
- Expertise in machine learning and AI.
- Experience with retail-specific datasets and KPIs.
- Proficiency in data visualization and reporting tools.
- Ability to work with large datasets and complex data structures.
- Strong communication skills for interaction with technical and non-technical stakeholders.
- Solid understanding of the retail business and consumer behavior.

Technical Skills:
- Programming Languages: Python, R, SQL, Scala
- Data Analysis Tools: Pandas, NumPy, Scikit-learn, TensorFlow, Keras
- Visualization Tools: Tableau, Power BI, Matplotlib, Seaborn
- Big Data Technologies: Hadoop, Spark, AWS, Google Cloud
- Databases: SQL, NoSQL (MongoDB, Cassandra)

Minimum Qualifications:
- Experience: Minimum 3 years of relevant experience.
- Educational Qualification: Bachelor's or Master's degree in Data Science, Statistics, Computer Science, Mathematics, or a related field.

Location: Bangalore / Gurgaon / Mumbai / Chennai / Pune / Hyderabad / Kolkata

Must-Have Skills:
- Solid understanding of retail industry dynamics and key performance indicators.
- Ability to communicate complex data insights to non-technical stakeholders.
- Meticulous in ensuring data quality and consistency.
- Proficiency in Python for data manipulation, statistical analysis, and machine learning.
- Experience with supervised and unsupervised learning algorithms.
- Use of advanced analytics to optimize pricing strategies based on market demand.

Good-to-Have Skills:
- Familiarity with big data processing platforms like Apache Spark, Hadoop, or cloud-based platforms.
- Experience with ETL processes and tools like Apache Airflow.
- Familiarity with designing scalable and efficient data pipelines and architecture.
- Experience with data visualization tools like Tableau, Power BI, Matplotlib, and Seaborn.
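The demand-forecasting responsibility above has a standard naive baseline that any fancier model must beat: a moving average over recent periods. A minimal sketch (the weekly sales figures are invented):

```python
def moving_average_forecast(sales: list[float], window: int = 3) -> float:
    """Naive demand forecast: mean of the last `window` periods.

    If the history is shorter than the window, average what is available.
    """
    recent = sales[-window:]
    return sum(recent) / len(recent)

weekly_units = [120, 135, 128, 150, 142, 160]
forecast = moving_average_forecast(weekly_units)  # mean of the last 3 weeks
print(round(forecast, 1))
```

In practice this baseline is compared against ARIMA or gradient-boosted models; if they cannot outperform the moving average on held-out weeks, the extra complexity is not paying for itself.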
Posted 6 days ago
3.0 - 7.0 years
0 Lacs
ahmedabad, gujarat
On-site
As a highly skilled Data Scientist & AI Engineer, you will have the opportunity to leverage your passion for solving real-world problems using data, machine learning, and advanced AI technologies. In this role, you will collaborate with cross-functional teams to design, develop, and deploy predictive models, Generative AI, and Natural Language Processing (NLP) solutions that drive innovative outcomes.

Your responsibilities will include building and optimizing Retrieval-Augmented Generation (RAG) pipelines, fine-tuning large language models (LLMs), and exploring open-source advancements in the AI field. You will also be expected to apply traditional NLP techniques, develop and maintain APIs for AI models, and integrate AI-driven features into products through collaboration with various stakeholders.

To excel in this role, you should possess strong programming skills in Python and be proficient in libraries such as Pandas, NumPy, Scikit-learn, Matplotlib, and Seaborn. Additionally, hands-on experience with SQL, model evaluation metrics, supervised & unsupervised ML models, and GenAI concepts will be essential. Exposure to traditional NLP or recommendation systems, data engineering practices, and version control tools like Git and Jupyter notebooks will be advantageous.

Joining our team will offer you the chance to work on cutting-edge AI projects, collaborate in an innovative and impactful environment, and be part of a growth-oriented culture that encourages learning and exposure to diverse technologies and projects.
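The retrieval half of the RAG pipelines mentioned above reduces to nearest-neighbor search over embedding vectors. A minimal NumPy sketch using cosine similarity (the three-dimensional "embeddings" are toy stand-ins for real model outputs):

```python
import numpy as np

def top_k(query_vec: np.ndarray, doc_vecs: np.ndarray, k: int = 2) -> np.ndarray:
    """Indices of the k documents most similar to the query (cosine similarity)."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    sims = d @ q                      # cosine similarity of each doc to the query
    return np.argsort(-sims)[:k]      # highest similarity first

docs = np.array([[1.0, 0.0, 0.0],
                 [0.9, 0.1, 0.0],
                 [0.0, 0.0, 1.0]])
query = np.array([1.0, 0.05, 0.0])
print(top_k(query, docs))  # → [0 1]
```

In a full RAG system the retrieved documents are stuffed into the LLM prompt; at scale the brute-force `argsort` is replaced by an approximate index such as FAISS, but the similarity math is the same.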
Posted 6 days ago
2.0 - 6.0 years
0 Lacs
karnataka
On-site
We are looking for a motivated and curious Junior AI and Data Science Engineer to join our growing team. In this role, you will support the development and deployment of machine learning models, contribute to data analysis projects, and help build intelligent systems that drive business insights and innovation. This is an excellent opportunity for early-career professionals who are passionate about AI, machine learning, and data science, and are eager to grow their skills in a collaborative and fast-paced environment.

Responsibilities
- Assist in designing, developing, and deploying machine learning models and AI solutions.
- Perform data cleaning, preprocessing, and exploratory data analysis (EDA).
- Collaborate with cross-functional teams to understand business needs and translate them into data-driven solutions.
- Support the development of data pipelines and model monitoring systems.
- Document processes, models, and findings clearly and concisely.
- Stay up to date with the latest trends and advancements in AI and data science.

Qualifications
Requirements:
- Bachelor's or Master's degree in Computer Science, Data Science, Statistics, or a related field.
- 2+ years of experience in AI/ML development, with a strong focus on NLP and LLMs.
- Proven experience deploying AI/ML models in production environments.
- Proficiency in Python (e.g., NumPy, pandas, scikit-learn, TensorFlow, or PyTorch).
- Familiarity with SQL and data visualization tools (e.g., Matplotlib, Seaborn, Power BI).
- Understanding of machine learning concepts and statistical analysis.
- Strong problem-solving skills and attention to detail.
- Excellent communication and teamwork abilities.

Preferred Knowledge and Experience:
- Internship or project experience in AI, machine learning, or data science.
- Exposure to cloud platforms (e.g., AWS, Azure, GCP).
- Experience with version control systems like Git.

Competencies:
- Self-motivated, able to multitask, prioritize, and manage time efficiently.
- Strong problem-solving skills.
- Data analysis skills: ability to analyze complex data and turn it into actionable information.
- Collaboration and teamwork across multiple functions and stakeholders around the globe.
- Flexibility and adaptability.

About Us
onsemi (Nasdaq: ON) is driving disruptive innovations to help build a better future. With a focus on automotive and industrial end-markets, the company is accelerating change in megatrends such as vehicle electrification and safety, sustainable energy grids, industrial automation, and 5G and cloud infrastructure. With a highly differentiated and innovative product portfolio, onsemi creates intelligent power and sensing technologies that solve the world's most complex challenges and leads the way in creating a safer, cleaner, and smarter world. More details about our company benefits can be found here: https://www.onsemi.com/careers/career-benefits

About The Team
We are committed to sourcing, attracting, and hiring high-performance innovators while providing all candidates a positive recruitment experience that builds our brand as a great place to work.
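The data cleaning, preprocessing, and EDA duties this role describes might look like the sketch below in practice; the toy DataFrame and the specific cleaning rules are illustrative only:

```python
import numpy as np
import pandas as pd

# Illustrative raw data with the kinds of issues EDA typically surfaces:
# duplicate rows, inconsistent casing, and missing values.
raw = pd.DataFrame({
    "region": ["North", "south", "North", "North", None],
    "units":  [10, 12, np.nan, 10, 7],
})

clean = (
    raw.drop_duplicates()                                   # exact-duplicate rows
       .assign(region=lambda d: d["region"].str.title())    # normalize casing
       .dropna(subset=["region"])                           # drop unusable keys
       .assign(units=lambda d: d["units"].fillna(d["units"].median()))
)

# A first exploratory cut: per-group counts and means.
summary = clean.groupby("region")["units"].agg(["count", "mean"])
```

In a real project the same chain would be followed by distribution plots and correlation checks before any modeling.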
Posted 6 days ago
3.0 - 7.0 years
0 Lacs
nashik, maharashtra
On-site
We are seeking a skilled and detail-oriented Data Scientist to play a crucial role in our computer vision projects. As a Data Scientist, you will be responsible for managing the full data lifecycle, from data collection and cleaning to annotation and analysis. Your primary focus will be on preparing high-quality datasets for computer vision tasks to ensure our machine learning models have the best data for learning. In this role, you will collaborate with ML Engineers and Data Engineers to design datasets, define evaluation metrics, source high-quality data, and extract actionable insights from AI models. Key responsibilities include identifying and managing large-scale datasets, overseeing the annotation and labelling of images, cleaning and preprocessing raw data, designing data augmentation strategies, and performing exploratory data analysis. Additionally, you will be involved in developing statistical and ML models for size estimation, orientation, and movement patterns, implementing quality control protocols for annotated data, working closely with domain experts to define data requirements, and selecting and evaluating models with ML Engineers. The ideal candidate should have at least 3 years of experience as a Data Scientist, preferably in computer vision or geospatial AI, with strong skills in Python and experience with image datasets and deep learning frameworks like PyTorch and TensorFlow. Proficiency in SQL, data pipelines, and communication skills are essential for this role. Preferred qualifications include familiarity with multi-object tracking, trajectory analysis, segmentation, fine-grained classification, and data visualization tools. Experience with satellite and geospatial data, as well as AIS data and open-source maritime datasets, would be advantageous. 
This position offers the opportunity to work on impactful AI projects, collaborate with a strong ML/AI engineering team, enjoy a flexible working setup, and receive competitive compensation and growth opportunities. If you are open to working from the office for the Nashik location and meet the required qualifications, we encourage you to apply for this full-time Data Scientist position. Benefits include health insurance and provident fund.
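The data augmentation strategies mentioned in this role can be sketched on a toy array with NumPy alone; real computer vision pipelines would typically use torchvision or Albumentations transforms instead:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def augment(image: np.ndarray) -> list[np.ndarray]:
    """Generate simple augmented variants of an (H, W) grayscale image."""
    return [
        np.fliplr(image),                                         # horizontal flip
        np.flipud(image),                                         # vertical flip
        np.rot90(image),                                          # 90-degree rotation
        np.clip(image + rng.normal(0, 5, image.shape), 0, 255),   # additive noise
    ]

toy = np.arange(16, dtype=float).reshape(4, 4)
augmented = augment(toy)
```

Each variant keeps the label of the original image, which is what makes augmentation a cheap way to enlarge an annotated dataset.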
Posted 6 days ago
2.0 - 5.0 years
0 - 0 Lacs
chennai
Work from Office
Role & responsibilities

Impact-Based Forecasting
- Design and implement clear, accurate, and engaging visualizations that support IBF platforms, dashboards, and reporting systems.
- Collaborate with cross-functional teams (data scientists, domain experts, UI/UX designers, and developers) to understand visualization needs and user goals as they apply to dashboards, reports, and interactive visualizations.
- Structure and organize complex data and indicators into meaningful information hierarchies that support storytelling and quick interpretation.
- Integrate visualizations into web applications, dashboards, or interactive platforms using frameworks like React, Vue, or others as needed.
- Prototype and iterate visual elements based on user feedback and usability testing.
- Ensure visualizations respect accessibility, privacy, and data governance considerations.

Capacity Building and Stakeholder Engagement
- Facilitate training programs for team members and stakeholders, focusing on RIMES policies, regulations, and the use of forecasting tools.
- Develop and implement a self-training plan to enhance personal expertise, obtaining a trainer certificate as required.
- Prepare and implement training programs to enhance team capacity and submit training outcome reports.

Reporting
- Prepare technical reports, progress updates, and outreach materials for stakeholders.
- Maintain comprehensive project documentation, including strategies, milestones, and outcomes, as well as capacity-building workshop materials and training reports.

Other Responsibilities
- Utilize AI skills to assist in system implementation plans and decision support system (DSS) development.
- Assist in 24/7 operational readiness for client early warning systems such as SOCs, with backup support from RIMES Headquarters.
- Undertake additional tasks as assigned by the immediate supervisor or HR manager based on recommendations from RIMES technical team members and organizational needs. The above responsibilities are illustrative and not exhaustive; undertake any other relevant tasks that may be needed from time to time.

Preferred candidate profile
- Minimum of 2 years of experience in data visualization.
- Minimum of 2 years of experience in data engineering, analytics, or IT systems for disaster management, meteorology, or climate services.
- Experience in multi-stakeholder projects and facilitating capacity-building programs.
- Information design: ability to structure and organize complex information in a way that is easily digestible and understandable.
- Programming proficiency: JavaScript (D3.js, Chart.js, Highcharts, Leaflet/Mapbox for geospatial visualizations) and Python (Plotly, Dash, Bokeh, Seaborn, or Matplotlib).
- Design tools: working knowledge of tools like Figma, Adobe XD, or Sketch is a plus.
- Familiarity with early warning systems, disaster risk frameworks, and sector-specific IBF requirements is a strong asset.
- Excellent communication skills, especially in multidisciplinary and multicultural team settings.
- Proficiency in technical documentation and user training.
- Understanding of data visualization best practices: deep knowledge of effective visualization types, common pitfalls, and ethical considerations (e.g., avoiding misleading visuals, ensuring accuracy, and respecting privacy).
- Ability to craft compelling narratives with data visualizations.

Contract Duration
The contract will initially be for one year and may be extended based on the satisfactory completion of a 180-day probationary period and subsequent annual performance reviews.
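As a sketch of the indicator-to-visual mapping an IBF dashboard performs, the snippet below bins a hazard reading into colour-coded warning levels; the thresholds, advisories, and city readings are invented for illustration, not RIMES values:

```python
# Illustrative thresholds mapping a hazard indicator (say, 24-hour
# rainfall in mm) to the colour-coded warning levels a dashboard
# would display. The cut-offs are made up for this sketch.
LEVELS = [
    (100.0, "red",    "Take action"),
    (50.0,  "orange", "Be prepared"),
    (20.0,  "yellow", "Be aware"),
    (0.0,   "green",  "No warning"),
]

def warning_level(rainfall_mm: float) -> tuple[str, str]:
    """Return (colour, advisory) for a rainfall reading."""
    for threshold, colour, advisory in LEVELS:
        if rainfall_mm >= threshold:
            return colour, advisory
    return "green", "No warning"

readings = {"Chennai": 112.0, "Pune": 34.5, "Nashik": 8.0}
dashboard = {city: warning_level(mm) for city, mm in readings.items()}
```

A front-end layer (D3.js, Plotly, or similar) would then render each city's colour on a map or card, which is where the information-hierarchy work described above comes in.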
How to Apply
Interested candidates should send their application letter, resume, salary expectation, and two references to rimeshra-india@rimes.int by midnight of 1 October 2025, Bangkok time. Please state "Visualization Expert - Impact-Based Forecasting: Your Name" in the Subject line of the email. Only short-listed applicants will be contacted.
Ms. Dusadee Padungkul
Head of Operations and Programs
Regional Integrated Multi-Hazard Early Warning System
AIT Campus, 58 Moo 9 Paholyothin Rd., Klong 1, Klong Luang, Pathumthani 12120, Thailand.
RIMES promotes diversity and inclusion in the workplace. Well-qualified applicants, particularly women, are encouraged to apply.
Posted 6 days ago
3.0 - 5.0 years
0 - 1 Lacs
chennai
Work from Office
Role & responsibilities

Impact-Based Forecasting
- Collaborate with other members of the IT team, meteorologists, hydrologists, GIS specialists, and disaster risk management experts within RIMES to ensure the development of the IBF DSS.
- Develop AI models (e.g., NLP, computer vision, reinforcement learning) and integrate them into applications and dashboards.
- Ensure model explainability and ethical compliance.
- Assist the RIMES team in applying AI/ML models to forecast hazards and project likely impacts based on exposure and vulnerability indices.
- Work with forecasters and domain experts to automate the generation of impact-based products.
- Ensure data security, backup, and compliance with data governance and interoperability standards.
- Train national counterparts on the use and management of the AI system, including analytics dashboards.
- Collaborate with GIS experts, hydromet agencies, and emergency response teams for integrated service delivery.
- Produce technical documentation on data architecture, models, and systems.

Capacity Building and Stakeholder Engagement
- Facilitate training programs for team members and stakeholders, focusing on RIMES policies, regulations, and the use of forecasting tools.
- Develop and implement a self-training plan to enhance personal expertise, obtaining a trainer certificate as required.
- Prepare and implement training programs to enhance team capacity and submit training outcome reports.

Reporting
- Prepare technical reports, progress updates, and outreach materials for stakeholders.
- Maintain comprehensive project documentation, including strategies, milestones, and outcomes, as well as capacity-building workshop materials and training reports.

Other Responsibilities
- Utilize AI skills to assist in system implementation plans and decision support system (DSS) development.
- Assist in 24/7 operational readiness for client early warning systems such as SOCs, with backup support from RIMES Headquarters.
- Undertake additional tasks as assigned by the immediate supervisor or HR manager based on recommendations from RIMES technical team members and organizational needs. The above responsibilities are illustrative and not exhaustive; undertake any other relevant tasks that may be needed from time to time.

Preferred candidate profile
- Minimum of 3 years of experience in data engineering, analytics, or IT systems for disaster management, meteorology, or climate services.
- Experience in multi-stakeholder projects and facilitating capacity-building programs.
- Machine learning fundamentals: deep understanding of various ML algorithms, including supervised, unsupervised, and reinforcement learning; this includes regression, classification, clustering, time series analysis, anomaly detection, etc.
- Deep learning: proficiency with deep learning architectures (e.g., CNNs, RNNs, LSTMs, Transformers) and frameworks (TensorFlow, PyTorch, Keras); ability to design, train, and optimize complex neural networks.
- Strong programming skills in Python, with extensive libraries (NumPy, Pandas, SciPy, Scikit-learn, Matplotlib, Seaborn, GeoPandas).
- Familiarity with AI tools such as PyTorch, TensorFlow, Keras, MLflow, etc.
- Data visualization: ability to create clear, compelling visualizations to communicate complex data and model outputs.
- Familiarity with early warning systems, disaster risk frameworks, and sector-specific IBF requirements is a strong plus.
- Proficiency in technical documentation and user training.

Contract Duration
The contract will initially be for one year and may be extended based on the satisfactory completion of a 180-day probationary period and subsequent annual performance reviews.
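A minimal, hedged sketch of the hazard-impact modelling described above, using synthetic data and scikit-learn's logistic regression as a stand-in for a full exposure/vulnerability model; the features and threshold are invented:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=42)

# Synthetic training data: [hazard_intensity, exposure_index] -> impact (0/1).
# In a real IBF system these would come from hazard, exposure, and
# vulnerability layers rather than a made-up linear rule.
X = rng.uniform(0, 1, size=(200, 2))
y = ((X[:, 0] * 0.7 + X[:, 1] * 0.3) > 0.5).astype(int)

model = LogisticRegression().fit(X, y)

# Probability of impact for a high-hazard, high-exposure cell...
p_high = model.predict_proba([[0.9, 0.9]])[0, 1]
# ...and for a low-hazard, low-exposure cell.
p_low = model.predict_proba([[0.1, 0.1]])[0, 1]
```

The probabilities, rather than hard labels, are what would feed the colour-coded impact products mentioned in the responsibilities.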
How to Apply
Interested candidates should send their application letter, resume, salary expectation, and two references to rimeshra-india@rimes.int by midnight of 1 October 2025, Bangkok time. Please state "AI Specialist - Impact-Based Forecasting: Your Name" in the Subject line of the email. Only short-listed applicants will be contacted.
Ms. Dusadee Padungkul
Head - Department of Operational Support
Regional Integrated Multi-Hazard Early Warning System
AIT Campus, 58 Moo 9 Paholyothin Rd., Klong 1, Klong Luang, Pathumthani 12120, Thailand.
RIMES promotes diversity and inclusion in the workplace. Well-qualified applicants, particularly women, are encouraged to apply.
Posted 6 days ago
8.0 - 13.0 years
0 - 1 Lacs
hyderabad
Remote
Role & responsibilities
We are hiring an AI/ML Developer (India) to join our India team, in support of a large global client! You will be responsible for developing, deploying, and maintaining AI and machine learning models. Your expertise in Python, cloud services, databases, and big data technologies will be instrumental in creating scalable and efficient AI applications.

What You Will Be Doing
- Develop, train, and deploy machine learning models for predictive analytics, classification, and clustering.
- Implement AI-based solutions using frameworks such as TensorFlow, PyTorch, and Scikit-learn.
- Work with cloud platforms including AWS (SageMaker, Lambda, S3), Azure, and Google Cloud (Vertex AI).
- Integrate and fine-tune Hugging Face transformer models (e.g., BERT, GPT) for NLP tasks such as text classification, summarization, and sentiment analysis.
- Develop AI automation solutions, including chatbot implementations using Microsoft Teams and Azure AI.
- Work with big data technologies such as Apache Spark and Snowflake for large-scale data processing and analytics.
- Design and optimize ETL pipelines for data quality management, transformation, and validation.
- Utilize SQL, MySQL, PostgreSQL, and MongoDB for database management and query optimization.
- Create interactive data visualizations using Tableau and Power BI to drive business insights.
- Work with Large Language Models (LLMs) for AI-driven applications, including fine-tuning, training, and deploying models for conversational AI, text generation, and summarization.
- Develop and implement Agentic AI systems, enabling autonomous decision-making AI agents that can adapt, learn, and optimize tasks in real time.

What You Bring Along
- 2+ years of experience applying AI to practical uses.
- Strong programming skills in Python and SQL, and experience with ML frameworks such as TensorFlow, PyTorch, or Scikit-learn.
- Knowledge of basic algorithms and object-oriented and functional design principles.
- Proficiency in using data analytics libraries like Pandas, NumPy, Matplotlib, and Seaborn.
- Hands-on experience with cloud platforms such as AWS, Azure, and Google Cloud.
- Experience with big data processing using Apache Spark and Snowflake.
- Knowledge of NLP and AI model implementations using Hugging Face and cloud-based AI services.
- Strong understanding of database management, query optimization, and data warehousing.
- Experience with data visualization tools such as Tableau and Power BI.
- Ability to work in a collaborative environment and adapt to new AI technologies.
- Strong analytical and problem-solving skills.

Preferred candidate profile
Bachelor's degree in Computer Science, Data Science, AI/ML, or a related field.
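The ETL data-quality step this role calls out could be sketched as follows; the batch, column names, and validation rules are illustrative examples, not a real pipeline:

```python
import pandas as pd

# Illustrative extract: a batch of records arriving from an upstream system.
batch = pd.DataFrame({
    "order_id": [101, 102, 102, 104],
    "amount":   [250.0, -5.0, 80.0, 130.0],
    "currency": ["USD", "USD", "EUR", None],
})

def validate(df: pd.DataFrame) -> dict[str, int]:
    """Count rule violations before loading; rules are examples only."""
    return {
        "duplicate_ids": int(df["order_id"].duplicated().sum()),
        "negative_amounts": int((df["amount"] < 0).sum()),
        "missing_currency": int(df["currency"].isna().sum()),
    }

report = validate(batch)       # quality report for monitoring/alerts
clean = (                      # transform step: keep only valid rows
    batch.drop_duplicates(subset="order_id")
         .query("amount >= 0")
         .dropna(subset=["currency"])
)
```

In a production pipeline the report would be logged per batch and the rejected rows quarantined for review rather than silently dropped.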
Posted 6 days ago
1.0 - 5.0 years
0 Lacs
punjab
On-site
As a Python AI/ML Developer at Mohali, you will be responsible for designing, developing, and implementing AI/ML models and algorithms to address real-world challenges. Your role will involve working with large datasets, including data cleaning, preprocessing, feature engineering, and exploratory analysis. You will be tasked with building and optimizing machine learning pipelines using Python-based frameworks, as well as deploying ML models into production and monitoring their performance. Collaboration with data engineers, product teams, and business stakeholders to comprehend requirements will be crucial. Moreover, you are expected to research and apply state-of-the-art ML/AI techniques for continuous enhancements and document processes, workflows, and model performance. To excel in this role, you should possess a strong proficiency in Python and libraries such as NumPy, Pandas, Scikit-learn, TensorFlow/PyTorch, Matplotlib/Seaborn. A good understanding of Machine Learning algorithms like regression, classification, clustering, and ensemble methods is essential. Experience in data preprocessing, feature engineering, and model evaluation techniques is required, along with knowledge of Deep Learning fundamentals such as Neural Networks, CNN, RNN, with a preference for Transformers. Familiarity with SQL for handling structured/unstructured data and exposure to API development using frameworks like Flask, FastAPI, or Django for ML model deployment are also expected. Strong problem-solving and analytical skills will be valuable assets in this role. About the Company: smartData is a leading global software business specializing in business consulting and technology integrations to simplify, enhance, secure, and add value to start-ups to small & medium enterprises. With over 8000 projects and 20+ years of experience, smartData, with offices in the US, Australia, and India, offers domain and technology consulting, in-house products, and productized services. 
Their expertise in important industries like healthcare, B2B, B2C, B2B2C platforms, online delivery services, video platform services, and IT services, combined with strong capabilities in Microsoft, LAMP stack, MEAN/MERN stack, and mobility solutions using native (iOS, Android, Tizen) or hybrid (React Native, Flutter, Ionic, Cordova, PhoneGap) mobility stacks, along with AI & ML technologies, ensures continuous business growth for their customers.
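A sketch of the train-and-evaluate workflow this role describes, on synthetic data; bundling preprocessing and the model in one scikit-learn Pipeline keeps behaviour consistent between training and a later Flask/FastAPI serving endpoint:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the large datasets the role mentions.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# One object carries both the scaler and the classifier, so the exact
# same preprocessing is applied at serve time.
pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression()),
])
pipeline.fit(X_train, y_train)
accuracy = accuracy_score(y_test, pipeline.predict(X_test))
```

The fitted pipeline can then be persisted (e.g., with joblib) and loaded inside an API route, which is the deployment pattern the posting alludes to.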
Posted 6 days ago
8.0 - 12.0 years
0 Lacs
pune, maharashtra
On-site
You will be joining HCL Software, a Product Development Division of HCL Tech, where we deliver transformative software solutions to clients worldwide. Our focus areas include AI, Automation, Data & Analytics, Security, and Cloud. As a Lead Python Developer (Data Science, AI/ML) with over 8 years of experience, you will play a crucial role in developing AI-driven marketing campaigns using the HCL Unica+ Marketing Platform. This platform empowers clients to execute precise and high-performance marketing campaigns across various channels such as social media, AdTech platforms, mobile applications, and websites.

To excel in this role, you should possess:
- 8-12 years of Python development experience, with at least 4 years dedicated to data science and machine learning.
- Familiarity with Customer Data Platforms (CDPs) like Treasure Data, Epsilon, Tealium, Adobe, and Salesforce.
- Experience with AWS SageMaker and LangChain/RAG for Generative AI would be advantageous.
- Proficiency in integration tools like Postman, Swagger, and API gateways.
- Strong knowledge of REST, JSON, XML, and SOAP.
- Ability to work effectively in an agile team environment.
- Excellent communication and interpersonal skills.
- A 4-year degree in Computer Science or IT.

Your responsibilities will include:
- Proficient Python programming and knowledge of its libraries, particularly Pandas, NumPy, and Matplotlib/Seaborn.
- Strong grasp of statistical analysis and modelling concepts.
- Expertise in data cleaning and preprocessing techniques.
- Proficiency in SQL and database management for efficient data querying.
- Skill in exploratory data analysis (EDA) to understand datasets.
- In-depth understanding and practical experience with various machine learning algorithms.
- Proficiency with major deep learning frameworks like TensorFlow or PyTorch.
- Ability to evaluate models and optimize their performance.
- Understanding of deployment and MLOps concepts for production environments.
This role may require approximately 30% travel, and the preferred location is India (Pune). In addition to a competitive base salary, you will be eligible for bonuses based on performance.
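A toy example of the Pandas-based campaign analysis such a role involves; the channel data and column names are invented for illustration and are not the Unica+ schema:

```python
import pandas as pd

# Illustrative per-campaign results across channels.
df = pd.DataFrame({
    "channel":     ["email", "email", "social", "social", "web", "web"],
    "sends":       [1000, 1200, 800, 900, 500, 650],
    "conversions": [50, 66, 24, 27, 20, 26],
})

# Conversion rate per campaign, then the average by channel --
# a first cut at "which channel performs best".
df["conv_rate"] = df["conversions"] / df["sends"]
by_channel = df.groupby("channel")["conv_rate"].mean().sort_values(ascending=False)
best = by_channel.index[0]
```

Downstream, a channel ranking like this would feed campaign-targeting rules or a predictive model rather than being read off directly.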
Posted 1 week ago
7.0 - 11.0 years
0 Lacs
karnataka
On-site
At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture, and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. We are seeking a passionate data analyst to transform data into actionable insights and support decision-making in a global organization focused on pricing and commercial strategy. This role spans business analysis, requirements gathering, data modeling, solution design, and visualization using modern tools. The analyst will also maintain and improve existing analytics solutions, interpret complex datasets, and communicate findings clearly to both technical and non-technical audiences. **Essential Functions of the Job:** - Analyze and interpret structured and unstructured data using statistical and quantitative methods to generate actionable insights and ongoing reports. - Design and implement data pipelines and processes for data cleaning, transformation, modeling, and visualization using tools such as Power BI, SQL, and Python. - Collaborate with stakeholders to define requirements, prioritize business needs, and translate problems into analytical solutions. - Develop, maintain, and enhance scalable analytics solutions and dashboards that support pricing strategy and commercial decision-making. - Identify opportunities for process improvement and operational efficiency through data-driven recommendations. - Communicate complex findings in a clear, compelling, and actionable manner to both technical and non-technical audiences. **Analytical/Decision Making Responsibilities:** - Apply a hypothesis-driven approach to analyzing ambiguous or complex data and synthesizing insights to guide strategic decisions. 
- Promote adoption of best practices in data analysis, modeling, and visualization, while tailoring approaches to meet the unique needs of each project. - Tackle analytical challenges with creativity and rigor, balancing innovative thinking with practical problem-solving across varied business domains. - Prioritize work based on business impact and deliver timely, high-quality results in fast-paced environments with evolving business needs. - Demonstrate sound judgment in selecting methods, tools, and data sources to support business objectives. **Knowledge and Skills Requirements:** - Proven experience as a data analyst, business analyst, data engineer, or similar role. - Strong analytical skills with the ability to collect, organize, analyze, and present large datasets accurately. - Foundational knowledge of statistics, including concepts like distributions, variance, and correlation. - Skilled in documenting processes and presenting findings to both technical and non-technical audiences. - Hands-on experience with Power BI for designing, developing, and maintaining analytics solutions. - Proficient in both Python and SQL, with strong programming and scripting skills. - Skilled in using Pandas, T-SQL, and Power Query M for querying, transforming, and cleaning data. - Hands-on experience in data modeling for both transactional (OLTP) and analytical (OLAP) database systems. - Strong visualization skills using Power BI and Python libraries such as Matplotlib and Seaborn. - Experience with defining and designing KPIs and aligning data insights with business goals. **Additional/Optional Knowledge and Skills:** - Experience with the Microsoft Fabric data analytics environment. - Proficiency in using the Apache Spark distributed analytics engine, particularly via PySpark and Spark SQL. - Exposure to implementing machine learning or AI solutions in a business context. 
- Familiarity with Python machine learning libraries such as scikit-learn, XGBoost, PyTorch, or transformers. - Experience with Power Platform tools (Power Apps, Power Automate, Dataverse, Copilot Studio, AI Builder). - Knowledge of pricing, commercial strategy, or competitive intelligence. - Experience with cloud-based data services, particularly in the Azure ecosystem (e.g., Azure Synapse Analytics or Azure Machine Learning). **Supervision Responsibilities:** - Operates with a high degree of independence and autonomy. - Collaborates closely with cross-functional teams including sales, pricing, and commercial strategy. - Mentors junior team members, helping develop technical skills and business domain knowledge. **Other Requirements:** - Collaborates with a team operating primarily in the Eastern Time Zone (UTC -4:00 / -5:00). - Limited travel may be required for this role. **Job Requirements:** **Education:** A bachelor's degree in a STEM field relevant to data analysis, data engineering, or data science is required. Examples include (but are not limited to) computer science, statistics, data analytics, artificial intelligence, operations research, or econometrics. **Experience:** 6+ years of experience in data analysis, data engineering, or a closely related field, ideally within a professional services environment. **Certification Requirements:** No certifications are required for this role. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people, and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform, and operate. Working across assurance, consulting, law, strategy, tax, and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
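As an illustration of the KPI work this pricing-analytics role describes, the sketch below computes a price-realization metric with pandas; the deal data and field names are assumptions for the example, not an EY schema:

```python
import pandas as pd

# Toy deal data: list price vs. negotiated net price per region.
deals = pd.DataFrame({
    "region":     ["AMER", "AMER", "EMEA", "EMEA"],
    "list_price": [100.0, 200.0, 150.0, 120.0],
    "net_price":  [90.0, 150.0, 150.0, 96.0],
})

# Price realization: net revenue as a share of list revenue, aggregated
# per region -- one KPI a pricing dashboard might track.
sums = deals.groupby("region")[["net_price", "list_price"]].sum()
kpi = sums["net_price"] / sums["list_price"]
```

The same aggregation would typically be expressed once in SQL or Power Query and surfaced as a Power BI measure, with Python used for ad-hoc validation like this.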
Posted 1 week ago
4.0 - 9.0 years
6 - 10 Lacs
bengaluru
Work from Office
Job Posting Title: Business Intelligence Analyst I
Band/Level: 5-4-S
Education Experience: Bachelor's Degree (High School + 4 years)
Employment Experience: Less than 1 year

At TE, you will unleash your potential working with people from diverse backgrounds and industries to create a safer, sustainable, and more connected world.

Job Overview
TE Connectivity's Business Intelligence Teams are responsible for the processing, mining, and delivery of data to their customer community through repositories, tools, and services.

Roles & Responsibilities
- Assist in the development and deployment of Digital Factory solutions and Machine Learning models across Manufacturing, Quality, and Supply Chain functions.
- Support data collection, cleaning, preparation, and transformation from multiple sources, ensuring data consistency and readiness.
- Contribute to the creation of dashboards and reports using tools such as Power BI or Tableau.
- Work on basic analytics and visualization tasks to derive insights and identify improvement areas.
- Assist in maintaining existing ML models, including data monitoring and model retraining processes.
- Participate in small-scale PoCs (proofs of concept) and pilot projects with senior team members.
- Document use cases, write clean code with guidance, and contribute to knowledge-sharing sessions.
- Support integration of models into production environments and perform basic testing.

Desired Candidate
- Proficiency in Python and/or R for data analysis, along with libraries like Pandas, NumPy, Matplotlib, and Seaborn.
- Basic understanding of statistical concepts such as distributions, correlation, regression, and hypothesis testing.
- Familiarity with SQL or other database querying tools, e.g., pyodbc, sqlite3, PostgreSQL.
- Exposure to ML algorithms like linear/logistic regression, decision trees, k-NN, or SVM.
- Basic knowledge of Jupyter Notebooks and version control using Git/GitHub.
- Good communication skills in English (written and verbal); able to explain technical topics simply.
- Collaborative, eager to learn, and adaptable in a fast-paced and multicultural environment.
- Exposure to or interest in manufacturing technologies (e.g., stamping, molding, assembly).
- Exposure to cloud platforms (AWS/Azure) or services like S3, SageMaker, and Redshift is an advantage.
- Hands-on experience in image data preprocessing (resizing, Gaussian blur, PCA) or computer vision projects.
- Interest in AutoML tools and transfer learning techniques.

ABOUT TE CONNECTIVITY
TE Connectivity plc (NYSE: TEL) is a global industrial technology leader creating a safer, sustainable, productive, and connected future. Our broad range of connectivity and sensor solutions enable the distribution of power, signal, and data to advance next-generation transportation, energy networks, automated factories, data centers, medical technology, and more. With more than 85,000 employees, including 9,000 engineers, working alongside customers in approximately 130 countries, TE ensures that EVERY CONNECTION COUNTS. Learn more at www.te.com and on LinkedIn, Facebook, WeChat, Instagram, and X (formerly Twitter).

WHAT TE CONNECTIVITY OFFERS:
We are pleased to offer you an exciting total package that can also be flexibly adapted to changing life situations - the well-being of our employees is our top priority!
- Competitive Salary Package
- Performance-Based Bonus Plans
- Health and Wellness Incentives
- Employee Stock Purchase Program
- Community Outreach Programs / Charity Events

IMPORTANT NOTICE REGARDING RECRUITMENT FRAUD
TE Connectivity has become aware of fraudulent recruitment activities being conducted by individuals or organizations falsely claiming to represent TE Connectivity. Please be advised that TE Connectivity never requests payment or fees from job applicants at any stage of the recruitment process.
All legitimate job openings are posted exclusively on our official careers website at te.com/careers, and all email communications from our recruitment team will come only from actual email addresses ending in @te.com. If you receive any suspicious communications, we strongly advise you not to engage or provide any personal information, and to report the incident to your local authorities. Across our global sites and business units, we put together packages of benefits that are either supported by TE itself or provided by external service providers. In principle, the benefits offered can vary from site to site.
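The regression and correlation concepts the Desired Candidate section above lists can be illustrated on toy manufacturing data with NumPy alone; the temperature and defect-rate numbers are invented for the sketch:

```python
import numpy as np

# Toy data: press temperature vs. observed defect rate.
temperature = np.array([180, 190, 200, 210, 220, 230], dtype=float)
defect_rate = np.array([5.1, 4.6, 4.0, 3.7, 3.1, 2.6])

# Ordinary least-squares line and Pearson correlation.
slope, intercept = np.polyfit(temperature, defect_rate, deg=1)
r = np.corrcoef(temperature, defect_rate)[0, 1]

# Interpolate the fitted trend at an unseen setting.
predicted_at_225 = slope * 225 + intercept
```

A strongly negative `r` here says the linear trend explains most of the variation; in a real analysis one would still check residuals and run a hypothesis test before acting on the slope.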
Posted 1 week ago
2.0 - 3.0 years
4 - 5 Lacs
mumbai
Work from Office
Job Role & Requirements:
- Work on heavy data extraction, report generation, and data visualisation, presenting analysis in the agreed format.
- SQL and advanced Python knowledge, with a proven track record of scripting using NumPy, pandas, seaborn, matplotlib, scikit-learn, etc.
- Model building: classification, regression, recommendation systems, etc.
- SAS EG/E-Miner knowledge is a plus.
- Numerical skills: acquire data and use it to target selected groups, and analyse the success (or otherwise) of campaigns.
- The role holder will be responsible for data support, segmentation/scorecards, analytics, and customer insights.
- Past experience with deployment on cloud systems and API integration is preferred.
- At least 2-3 years of experience in data-handling roles.
- Must be excellent at data visualisation and comfortable working with heavy datasets.
- Adhere to documented processes and compliance requirements, assisting seniors on diversified projects.

Personality Traits:
- Excellent organizational and interpersonal skills; level-headed.
- An analytical bent of mind and an active interest in data analysis.
- Highly logical, and adaptable towards new technologies.
- Motivated and determined, with a zeal to handle responsibilities and take initiative.
- Strong communication skills for effective interaction with business stakeholders.
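The pandas scripting and campaign analysis this role describes can be illustrated with a small sketch. The data and column names below are invented for illustration; the pattern (group by segment, aggregate, derive a success rate) is the kind of campaign-effectiveness work the listing mentions.

```python
import pandas as pd

# Hypothetical campaign-response data; column names are illustrative only.
df = pd.DataFrame({
    "segment":   ["A", "A", "B", "B", "B", "C"],
    "contacted": [100, 150, 200, 120,  80,  60],
    "converted": [ 12,  21,  30,  12,   8,   3],
})

# Aggregate per customer segment and compute a conversion rate,
# then rank segments by campaign success.
summary = (
    df.groupby("segment", as_index=False)[["contacted", "converted"]]
    .sum()
    .assign(conv_rate=lambda t: t["converted"] / t["contacted"])
    .sort_values("conv_rate", ascending=False)
)
print(summary)
```

The same summary table would typically feed a seaborn or matplotlib chart for the reporting side of the role.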
Posted 1 week ago