
1437 Matplotlib Jobs - Page 18

JobPe aggregates listings for easy access; applications are made directly on the original job portal.

7.0 - 10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Location: Pune/Bangalore/Noida/Hyderabad/Mumbai/Chennai. Experience: 7-10 yrs. Shift: UK shift. JD: • Bachelor’s degree in computer science, engineering, or a related field, or equivalent practical experience, with at least 7-10 years of combined experience as a Python and MLOps Engineer or in similar roles. • Strong programming skills in Python. • Proficiency with AWS and/or Azure cloud platforms, including services such as EC2, S3, Lambda, SageMaker, Azure ML, etc. • Solid understanding of API programming and integration. • Hands-on experience with CI/CD pipelines, version control systems (e.g., Git), and code repositories. • Knowledge of containerization with Docker and orchestration tools such as Kubernetes. • Proficiency in creating data visualizations, specifically for graphs and networks, using tools like Matplotlib, Seaborn, or Plotly. • Understanding of data manipulation and analysis using libraries such as Pandas and NumPy. • Problem-solving, analytical expertise, and troubleshooting abilities with attention to detail. • Demonstrates VACC (Visionary, Catalyst, Architect, Coach) leadership behaviors: good self-awareness as well as system awareness; proactively asks for and gives feedback; demonstrates strategic thinking as well as good insight into business and external trends; focuses on outcomes and defines and delivers the highest-impact pipeline, team, talent, and organizational outcomes.
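The listing above asks for graph and network visualization with Matplotlib. A minimal sketch of what that can look like, using networkx for layout (an assumed tool choice; the posting only names Matplotlib, Seaborn, and Plotly) on an invented service-dependency graph:

```python
# Minimal sketch: drawing a small network with networkx + Matplotlib.
# networkx and the edge list below are illustrative assumptions.
import matplotlib.pyplot as plt
import networkx as nx

# Hypothetical service-dependency edges
edges = [("api", "auth"), ("api", "db"), ("worker", "db"), ("worker", "queue"), ("api", "queue")]
graph = nx.DiGraph(edges)

pos = nx.spring_layout(graph, seed=42)          # deterministic layout
nx.draw_networkx_nodes(graph, pos, node_size=1200, node_color="#9ecae1")
nx.draw_networkx_edges(graph, pos, arrows=True)
nx.draw_networkx_labels(graph, pos, font_size=9)

plt.title("Service dependency graph (illustrative)")
plt.axis("off")
plt.tight_layout()
plt.savefig("dependency_graph.png", dpi=150)
```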

Posted 3 weeks ago

Apply

4.0 years

0 Lacs

Ahmedabad, Gujarat, India

Remote

Please mention subject line “Machine Learning Engineer - ESLNK59” while applying to hr@evoortsolutions.com Job Title: Machine Learning Engineer Location: Remote | Full-Time Experience: 4+ years Job Summary We are seeking a highly skilled and self-motivated Machine Learning Engineer / Senior Machine Learning Engineer to join our fast-growing AI/ML startup. You will be responsible for designing and deploying intelligent systems and advanced algorithms tailored to real-world business problems across diverse industries. This role demands a creative thinker with a strong mathematical foundation, hands-on experience in machine learning and deep learning, and the ability to work independently in a dynamic, agile environment. Key Responsibilities Design and develop machine learning and deep learning algorithms in collaboration with cross-functional teams, including data scientists and business stakeholders. Translate complex client problems into mathematical models and identify the most suitable AI/ML approach. Build data pipelines and automated classification systems using advanced ML/AI models. Conduct data mining and apply supervised/unsupervised learning to extract meaningful insights. Perform Exploratory Data Analysis (EDA), hypothesis generation, and pattern recognition from structured and unstructured datasets. Develop and implement Natural Language Processing (NLP) techniques for sentiment analysis, text classification, entity recognition, etc. Extend and customize ML libraries/frameworks like PyTorch, TensorFlow, and Scikit-learn. Visualize and communicate analytical findings using tools such as Tableau, Matplotlib, ggplot, etc. Develop and integrate APIs to deploy ML solutions on cloud-based platforms (AWS, Azure, GCP). Provide technical documentation and support for product development, business proposals, and client presentations. Stay updated with the latest trends in AI/ML and contribute to innovation-driven projects. Required Skills & Qualifications Education: B.Tech/BE or M.Tech/MS in Computer Science, Computer Engineering, or related field. Solid understanding of data structures, algorithms, probability, and statistical methods. Proficiency in Python, R, or Java for building ML models. Hands-on experience with ML/DL frameworks such as PyTorch, Keras, TensorFlow, and libraries like Scikit-learn, SpaCy, NLTK, etc. Experience with cloud services (PaaS/SaaS), RESTful APIs, and microservices architecture. Strong grasp of NLP, predictive analytics, and deep learning algorithms. Familiarity with big data technologies like Hadoop, Spark, Hive, Kafka, and NoSQL databases is a plus. Expertise in building and deploying scalable AI/ML models in production environments. Ability to work independently in an agile team setup and handle multiple priorities simultaneously. Exceptional analytical, problem-solving, and communication skills. Strong portfolio or examples of applied ML use cases in real-world applications. Why Join Us? Opportunity to work at the forefront of AI innovation and solve real-world challenges. Be part of a lean, fast-paced, and high-impact team driving AI solutions across industries. Flexible remote working culture with autonomy and ownership. Competitive compensation, growth opportunities, and access to cutting-edge technology. Embrace our culture of Learning, Engaging, Achieving, and Pioneering (LEAP) in every project you touch.
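The role above lists NLP work such as sentiment analysis and text classification with Scikit-learn. A minimal, hedged sketch of a baseline pipeline for that task; the tiny inline dataset is invented purely for illustration:

```python
# Baseline text-classification sketch with scikit-learn (TF-IDF + logistic regression).
# The toy dataset below is invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

texts = ["great product, works well", "terrible support, very slow",
         "love the new dashboard", "broken after one week"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=1)),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(texts, labels)

print(model.predict(["support was slow and the product broke"]))  # expected: [0]
```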

Posted 3 weeks ago

Apply

2.0 years

2 - 3 Lacs

Calicut

On-site

Job Description A Data Science Trainer is a professional who designs and delivers training programs to educate individuals and teams on data science concepts and techniques. They are responsible for creating and delivering engaging and effective training content that helps learners develop their data science skills. Responsibilities Design and develop training programs and curriculum for data science concepts and techniques Deliver training sessions to individuals and teams, both in-person and online Create and manage training materials such as presentations, tutorials, and exercises Monitor and evaluate the effectiveness of training programs Continuously update training materials and curriculum to reflect the latest trends and best practices in data science Provide one-on-one coaching and mentoring to learners Requirements A degree in a relevant field such as computer science, data science, statistics, or mathematics Strong understanding of data science concepts and techniques Experience with programming languages such as Python, R and SQL Strong presentation and communication skills Experience in training and/or teaching Experience with data visualization tools such as Tableau, Power BI or Matplotlib is a plus Knowledge of data science platform such as Scikit-learn, Tensorflow, Keras etc. is a plus. The role of a data science trainer requires a person who is passionate about teaching, has a solid understanding of data science and has the ability to adapt to the needs of the learners. They must be able to deliver training programs in an engaging and effective way, and must be able to continuously update the training materials to reflect the latest trends and best practices in data science. Job Types: Full-time, Permanent Pay: ₹20,000.00 - ₹30,000.00 per month Schedule: Day shift Education: Master's (Preferred) Experience: Data scientist: 2 years (Preferred) Work Location: In person Expected Start Date: 25/07/2025

Posted 3 weeks ago

Apply

3.0 years

4 - 6 Lacs

Kazhakuttam

On-site

Job Title: Data Analyst Location: Trivandrum Experience Level: 3 - 5 years About the Role We are looking for a Data Analyst with strong analytical skills and a keen interest in web infrastructure and behavioural analytics to support the development of an ML-based anomaly detection system for NGINX server logs. You will play a critical role in extracting domain knowledge from raw logs, identifying behaviour patterns, anomalies, and translating these into insightful features that inform our ML engineers and are used for real-time dashboards. This is not just a reporting role — you will work alongside AI engineers and data engineers to shape how raw data is interpreted and transformed for machine learning and operational use. Key Responsibilities Domain Behaviour Analysis : Analyse large volumes of log data to identify user behaviour patterns, anomalies, and security events. Interpret fields such as IP addresses, geolocation data, user agents, request paths, status codes, and request times to derive meaningful insights. Feature Engineering Support : Collaborate with AI engineers to propose relevant features based on log behaviour and traffic patterns (e.g., burst patterns, unusual request headers, request frequency shifts). Validate engineered features against behavioural patterns and business context. Conduct exploratory data analysis (EDA) to evaluate feature quality and distribution. Data Visualization & Dashboards : Develop data visualizations to represent time-series trends, geo-distributions, traffic behaviour, etc... Collaborate with the frontend/dashboard team to define and test visual requirements and anomaly indicators. Help surface visual insights on anomalies and their severity for analyst consumption. Data Quality & Validation : Identify and address gaps, inconsistencies, and errors in raw logs. Ensure feature logic aligns with real-world HTTP behaviour and use cases. Documentation & Knowledge Sharing : Create documentation that explains observed behavioural patterns, feature assumptions, and traffic insights for use by the wider ML and security team. Minimum Qualifications Bachelor’s degree in Computer Science, Information Systems, Data Analytics, Cybersecurity, or a related field. 2+ years of experience in data analysis or analytics roles. Proficiency in: SQL , Elasticsearch queries (DSL or Kibana), Python for data analysis (pandas, matplotlib, seaborn, plotly) Experience working with web server logs (NGINX, Apache) or structured event data. Strong analytical thinking — ability to break down complex log behaviour into patterns and outliers. Nice to Have Familiarity with web security concepts (DDoS, bot detection, HTTP protocol). Experience with log analytics platforms (Kibana, Grafana, ELK Stack). Understanding of feature engineering concepts in ML pipelines. Experience working on or with anomaly detection or security analytics systems. Job Type: Full-time Pay: ₹40,000.00 - ₹55,000.00 per month Benefits: Health insurance Provident Fund Schedule: Day shift Monday to Friday Experience: Data analytics: 3 years (Required) Work Location: In person
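To make the log-analysis workflow above concrete, here is a minimal sketch that parses request timestamps from an NGINX access log with pandas, counts requests per minute, and flags bursts with a simple z-score before plotting with Matplotlib. The file path, combined log format, and 3-sigma threshold are assumptions for illustration:

```python
# Sketch: per-minute request counts from an NGINX access log, with a naive burst flag.
# Assumes the default "combined" log format and a hypothetical file path.
import re
import pandas as pd
import matplotlib.pyplot as plt

pattern = re.compile(r'\[(?P<ts>[^\]]+)\]')  # grabs the [10/Jul/2025:12:00:01 +0000] field

timestamps = []
with open("access.log") as fh:              # hypothetical path
    for line in fh:
        m = pattern.search(line)
        if m:
            timestamps.append(m.group("ts"))

ts = pd.to_datetime(pd.Series(timestamps), format="%d/%b/%Y:%H:%M:%S %z")
per_minute = ts.dt.floor("min").value_counts().sort_index()

# Naive anomaly flag: minutes more than 3 standard deviations above the mean rate.
z = (per_minute - per_minute.mean()) / per_minute.std()
bursts = per_minute[z > 3]

ax = per_minute.plot(figsize=(10, 4), label="requests/min")
ax.scatter(bursts.index, bursts.values, color="red", label="burst (>3 sigma)")
ax.legend()
plt.tight_layout()
plt.savefig("request_rate.png", dpi=150)
```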

Posted 3 weeks ago

Apply

4.0 years

3 - 10 Lacs

Mohali

On-site

Job Description : Should have 4+ years hands-on experience in algorithms and implementation of analytics solutions in predictive analytics, text analytics and image analytics Should have handson experience in leading a team of data scientists, works closely with client’s technical team to plan, develop and execute on client requirements providing technical expertise and project leadership. Leads efforts to foster innovative ideas for developing high impact solutions. Evaluates and leads broad range of forward looking analytics initiatives, track emerging data science trends, and knowledge sharing Engaging key stakeholders to source, mine and validate data and findings and to confirm business logic and assumptions in order to draw conclusions. Helps in design and develop advanced analytic solutions across functional areas as per requirement/opportunities. Technical Role and Responsibilities Demonstrated strong capability in statistical/Mathematical modelling or Machine Learning or Artificial Intelligence Demonstrated skills in programming for implementation and deployment of algorithms preferably in Statistical/ML based programming languages in Python Sound Experience with traditional as well as modern statistical techniques, including Regression, Support Vector Machines, Regularization, Boosting, Random Forests, and other Ensemble Methods; Visualization tool experience - preferably with Tableau or Power BI Sound knowledge of ETL practices preferably spark in Data Bricks cloud big data technologies like AWS, Google, Microsoft, or Cloudera. Communicate complex quantitative analysis in a lucid, precise, clear and actionable insight. Developing new practices and methodologies using statistical methods, machine learning and predictive models under mentorship. Carrying out statistical and mathematical modelling, solving complex business problems and delivering innovative solutions using state of the art tools and cutting-edge technologies for big data & beyond. Preferred to have Bachelors/Masters in Statistics/Machine Learning/Data Science/Analytics Should be a Data Science Professional with a knack for solving problems using cutting-edge ML/DL techniques and implementing solutions leveraging cloud-based infrastructure. Should be strong in GCP, TensorFlow, Numpy, Pandas, Python, Auto ML, Big Query, Machine learning, Artificial intelligence, Deep Learning Exposure to below skills: Preferred Tech Skills : Python, Computer Vision,Machine Learning,RNN,Data Visualization,Natural Language Processing,Voice Modulation,Speech to text,Spicy,Lstm,Object Detection,Sklearn,Numpy, NLTk,Matplotlib,Cuinks, seaborn,Imageprocessing, NeuralNetwork,Yolo, DarkFlow,DarkNet,Pytorch, CNN,Tensorflow,Keras,Unet, ImageSegmentation,ModeNet OCR,OpenCV,Pandas,Scrapy, BeautifulSoup,LabelImg ,GIT. Machine Learning, Deep Learning, Computer Vision, Natural Language Processing,Statistics Programming Languages-Python Libraries & Software Packages- Tensorflow, Keras, OpenCV, Pillow, Scikit-Learn, Flask, Numpy, Pandas, Matplotlib,Docker Cloud Services- Compute Engine, GCP AI Platform, Cloud Storage, GCP AI & MLAPIs Job Types: Full-time, Permanent, Fresher Pay: ₹30,000.00 - ₹90,000.00 per month Education: Bachelor's (Preferred) Experience: AI/Machine learining: 4 years (Preferred) Work Location: In person

Posted 3 weeks ago

Apply

1.0 years

1 - 2 Lacs

Chennai

On-site

Key Responsibilities: Develop, train, and evaluate machine learning models for business use cases. Preprocess and analyze structured and unstructured data. Implement end-to-end ML workflows — from data collection to model deployment. Collaborate with cross-functional teams to understand project requirements and deliver data-driven insights. Stay updated with new trends, tools, and best practices in AI and machine learning. Document experiments, models, and performance metrics clearly and accurately. Required Skills: Bachelor’s degree in Computer Science, Data Science, or related technical field. 1 year of hands-on experience in building ML models using Python. Familiarity with ML libraries and frameworks such as Scikit-learn, Pandas, NumPy, Matplotlib, and TensorFlow or PyTorch. Experience with basic data preprocessing, feature engineering, and EDA. Understanding of key algorithms like regression, classification, clustering, and decision trees. Ability to write clean, efficient, and well-documented code. Preferred: Familiarity with cloud services like AWS, GCP, or Azure. Experience with projects or internships in AI/ML or data science. Job Type: Full-time Pay: ₹15,000.00 - ₹20,000.00 per month Benefits: Leave encashment Paid sick time Paid time off Work Location: In person
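As a concrete illustration of the end-to-end workflow this listing describes (preprocess, train, evaluate, document results), here is a minimal scikit-learn sketch on a bundled toy dataset; the dataset and model choices are illustrative, not prescribed by the posting:

```python
# Minimal train/evaluate/visualize loop with scikit-learn and Matplotlib.
import matplotlib.pyplot as plt
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report, ConfusionMatrixDisplay
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

print(classification_report(y_test, clf.predict(X_test)))

# Document the result visually, as the role asks for clear reporting of metrics.
ConfusionMatrixDisplay.from_estimator(clf, X_test, y_test)
plt.tight_layout()
plt.savefig("confusion_matrix.png", dpi=150)
```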

Posted 3 weeks ago

Apply

16.0 years

2 - 6 Lacs

Noida

On-site

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. Primary Responsibilities: WHAT Business Knowledge: Capable of understanding the requirements for the entire project (not just own features) Capable of working closely with PMG during the design phase to drill down into detailed nuances of the requirements Has the ability and confidence to question the motivation behind certain requirements and work with PMG to refine them Design: Can design and implement machine learning models and algorithms Can articulate and evaluate pros/cons of different AI/ML approaches Can generate cost estimates for model training and deployment Coding/Testing: Builds and optimizes machine learning pipelines. Knows & brings in external ML frameworks and libraries. Consistently avoids common pitfalls in model development and deployment. HOW Quality: Solves cross-functional problems using data-driven approaches Identifies impacts/side effects of models outside of immediate scope of work Identifies cross-module issues related to data integration and model performance Identifies problems predictively using data analysis Productivity: Capable of working on multiple AI/ML projects simultaneously and context switching between them Process: Enforces process standards for model development and deployment Independence: Acts independently to determine methods and procedures on new or special assignments Prioritizes large tasks and projects effectively Agility: Release Planning: Works with the PO to do high-level release commitment and estimation Works with PO on defining stories of appropriate size for model development Agile Maturity: Able to drive the team to achieve a high level of accomplishment on the committed stories for each iteration Shows Agile leadership qualities and leads by example WITH Team Work: Capable of working with development teams and identifying the right division of technical responsibility based on skill sets Capable of working with external teams (e.g., Support, PO, etc.) that have significantly different technical skill sets and managing the discussions based on their needs Initiative: Capable of creating innovative AI/ML solutions that may include changes to requirements to create a better solution Capable of thinking outside-the-box to view the system as it should be rather than only how it is Proactively generates a continual stream of ideas and pushes to review and advance ideas if they make sense Takes initiative to learn how AI/ML technology is evolving outside the organization Takes initiative to learn how the system can be improved for the customer Should make problems open new doors for innovations Communication: Communicates complex AI/ML concepts internally with ease Accountability: Well versed in all areas of the AI/ML stack (data preprocessing, model training, evaluation, deployment, etc.) 
and aware of all components in play Leadership: Disagree without being disagreeable Use conflict as a way to drill deeper and arrive at better decisions Frequent mentorship Builds ad-hoc cross-department teams for specific projects or problems Can achieve broad scope 'buy in' across project teams and across departments Takes calculated risks Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so Required Qualifications: B.E/B.Tech/MCA/MSc/MTech (Minimum 16 years of formal education, Correspondence courses are not relevant) 8+ years of experience working on multiple layers of technology Experience deploying and maintaining ML models in production Experience in Agile teams Working experience or good knowledge of cloud platforms (e.g., Azure, AWS, OCI) Experience with one or more data-oriented workflow orchestration frameworks (Airflow, KubeFlow etc.) Design, implement, and maintain CI/CD pipelines for MLOps and DevOps function Familiarity with traditional software monitoring, scaling, and quality management (QMS) Knowledge of model versioning and deployment using tools like MLflow, DVC, or similar platforms Familiarity with data versioning tools (Delta Lake, DVC, LakeFS, etc.) Demonstrate hands-on knowledge of OpenSource adoption and use cases Good understanding of Data/Information security Proficient in Data Structures, ML Algorithms, and ML lifecycle Product/Project/Program Related Tech Stack: Machine Learning Frameworks: Scikit-learn, TensorFlow, PyTorch Programming Languages: Python, R, Java Data Processing: Pandas, NumPy, Spark Visualization: Matplotlib, Seaborn, Plotly Familiarity with model versioning tools (MLFlow, etc.) Cloud Services: Azure ML, AWS SageMaker, Google Cloud AI GenAI: OpenAI, Langchain, RAG etc. Demonstrate good knowledge in Engineering Practices Demonstrates excellent problem-solving skills Proven excellent verbal, written, and interpersonal communication skills At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.

Posted 3 weeks ago

Apply

4.0 - 10.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Data Scientists - AI/ML - Gen AI - Across India | Exp: 4-10 years. Data scientists with a total of around 4-10 years of experience and at least 4-10 years of relevant data science, analytics, and AI/ML experience. Skills: Python; data science; AI/ML; Gen AI. Primary Skills: - Excellent understanding and hands-on experience of data science and machine learning techniques and algorithms for supervised and unsupervised problems, NLP, computer vision, and Gen AI. - Good applied statistics skills, such as distributions, statistical inference and testing, etc. - Excellent understanding and hands-on experience in building deep-learning models for text and image analytics (such as ANNs, CNNs, LSTMs, transfer learning, encoders and decoders, etc.). - Proficient in coding in common data science languages and tools such as R and Python. - Experience with common data science toolkits, such as NumPy, Pandas, Matplotlib, StatsModels, Scikit-learn, SciPy, NLTK, spaCy, OpenCV, etc. - Experience with common data science frameworks such as TensorFlow, Keras, PyTorch, XGBoost, etc. - Exposure to or knowledge of cloud (Azure/AWS). - Experience with deployment of models in production.
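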

Posted 3 weeks ago

Apply

5.0 - 10.0 years

25 - 30 Lacs

Chennai

Work from Office

Job Summary We are seeking a strategic and innovative Senior Data Scientist to join our high-performing Data Science team. In this role, you will lead the design, development, and deployment of advanced analytics and machine learning solutions that directly impact business outcomes. You will collaborate cross-functionally with product, engineering, and business teams to translate complex data into actionable insights and data products. Key Responsibilities Lead and execute end-to-end data science projects, encompassing problem definition, data exploration, model creation, assessment, and deployment. Develop and deploy predictive models, optimization techniques, and statistical analyses to address tangible business needs. Articulate complex findings through clear and persuasive storytelling for both technical experts and non-technical stakeholders. Spearhead experimentation methodologies, such as A/B testing, to enhance product features and overall business outcomes. Partner with data engineering teams to establish dependable and scalable data infrastructure and production-ready models. Guide and mentor junior data scientists, while also fostering team best practices and contributing to research endeavors. Required Qualifications & Skills: Master's or PhD in Computer Science, Statistics, Mathematics, or a related field. 5+ years of practical experience in data science, including deploying models to production. Expertise in Python and SQL. Solid background in ML frameworks such as scikit-learn, TensorFlow, and PyTorch. Competence in data visualization tools like Tableau, Power BI, and matplotlib. Comprehensive knowledge of statistics, machine learning principles, and experimental design. Experience with cloud platforms (AWS, GCP, or Azure) and Git for version control. Exposure to MLOps tools and methodologies (e.g., MLflow, Kubeflow, Docker, CI/CD). Familiarity with NLP, time series forecasting, or recommendation systems is a plus. Knowledge of big data technologies (Spark, Hive, Presto) is desirable. Timings: 1:00 pm - 10:00 pm (IST). Work Mode: WFO (Mon-Fri)
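For the experimentation work mentioned above (A/B testing of product features), a minimal hedged sketch of a two-proportion z-test with statsmodels; the conversion counts and sample sizes are invented for illustration:

```python
# Two-proportion z-test for a hypothetical A/B test (invented numbers).
from statsmodels.stats.proportion import proportions_ztest

conversions = [420, 468]   # variant A, variant B
visitors = [10000, 10000]

stat, p_value = proportions_ztest(count=conversions, nobs=visitors)
print(f"z = {stat:.3f}, p = {p_value:.4f}")

# Typical read-out: reject the null of equal conversion rates if p < 0.05,
# subject to the usual caveats (pre-registered sample size, one test per metric, etc.).
if p_value < 0.05:
    print("Statistically significant difference between variants.")
else:
    print("No significant difference detected at the 5% level.")
```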

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Greater Kolkata Area

Remote

Job Title: Python Developer Experience: 1 – 3 Years Location: Kolkata (Work From Home) Shift Timings: Regular Shift: 10:00 AM – 6:00 PM We are hiring a Python Developer to develop and maintain risk analytics tools and automate reporting processes in support of commodity risk management. Key Responsibilities: Develop, test, and maintain Python scripts for data analysis and reporting Write scalable, clean code using Pandas, NumPy, Matplotlib, and OOP principles Collaborate with risk analysts to implement process improvements Document workflows and maintain SOPs in Confluence Optimize code performance and adapt to evolving business needs Requirements: Strong hands-on experience with Python, Pandas, NumPy, Matplotlib, and OOP Good understanding of data structures and algorithms Experience with Excel and VBA is an added advantage Exposure to financial/market risk environments is preferred Excellent problem-solving, communication, and documentation skills
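A minimal sketch of the kind of reporting utility the role above describes, combining pandas, NumPy, Matplotlib, and OOP; the column names, sample data, and CSV-style schema are assumptions, not part of the posting:

```python
# Sketch: a small OOP-style reporting helper for commodity exposure (assumed schema).
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt


class ExposureReport:
    """Aggregates position data and renders a simple exposure chart."""

    def __init__(self, positions: pd.DataFrame):
        # Expected (assumed) columns: commodity, quantity, price
        self.positions = positions

    def exposure_by_commodity(self) -> pd.Series:
        notional = self.positions["quantity"] * self.positions["price"]
        return notional.groupby(self.positions["commodity"]).sum().sort_values()

    def plot(self, path: str = "exposure.png") -> None:
        exposure = self.exposure_by_commodity()
        exposure.plot(kind="barh", title="Notional exposure by commodity")
        plt.tight_layout()
        plt.savefig(path, dpi=150)


if __name__ == "__main__":
    # Invented sample data; in practice this would come from the risk database.
    df = pd.DataFrame({
        "commodity": ["crude", "crude", "copper", "wheat"],
        "quantity": [100, -40, 250, 500],
        "price": np.array([82.5, 82.5, 4.1, 6.3]),
    })
    ExposureReport(df).plot()
```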

Posted 3 weeks ago

Apply

0 years

0 Lacs

Jaipur, Rajasthan, India

On-site

Our Culture & Values: - We’d describe our culture as human, friendly, engaging, supportive, agile, and super collaborative. At Kainskep Solutions, our five values underpin everything we do; from how we work, to how we delight and deliver to our customers. Our values are: #TeamMember #Ownership #Innovation #Challenge and #Collaboration What makes a great team? A diverse team! Don’t be put off if you don’t tick all the boxes; we know from research that candidates may not apply if they don’t feel they are 100% there yet; the essential experience we need is the ability to engage clients and build strong, effective relationships. If you don’t tick the rest, we would still love to talk. We’re committed to creating a diverse and inclusive environment. What you’ll bring: Use programming languages like Python, R, and SQL for data manipulation, statistical analysis, and machine learning tasks. Apply fundamental statistical concepts such as mean, median, variance, probability distributions, and hypothesis testing to analyze data. Develop supervised and unsupervised machine learning models, including classification, regression, clustering, and dimensionality reduction techniques. Evaluate model performance using metrics such as accuracy, precision, recall, and F1-score, implementing cross-validation techniques to ensure reliability. Conduct data manipulation and visualization using libraries such as Pandas, Matplotlib, Seaborn, and ggplot2, implementing data cleaning techniques to handle missing values and outliers. Perform exploratory data analysis, feature engineering, and data mining tasks, including text mining, natural language processing (NLP), and web scraping. Familiarize yourself with big data technologies such as Apache Spark and Hadoop, understanding distributed computing concepts to handle large-scale datasets effectively. Manage relational databases (e.g., MySQL, PostgreSQL) and NoSQL databases (e.g., MongoDB, Cassandra) for data storage and retrieval. Use version control systems like Git and GitHub/GitLab for collaborative development, understanding branching, merging, and versioning workflows. Demonstrate basic knowledge of the software development lifecycle, Agile methodologies, algorithms, and data structures. Requirements: Proficiency in programming languages such as Python, R, and SQL. Strong analytical skills and a passion for working with data. Proven experience with generative models like GPT, BERT, DALL·E, Midjourney, GANs, VAEs, etc. Proficiency in Python and deep learning frameworks such as TensorFlow, PyTorch, or JAX. Strong understanding of NLP, computer vision, and transformer architectures. Hands-on experience with Hugging Face Transformers, LangChain, OpenAI API, or similar tools. Prior experience with data analysis, machine learning, or related fields is a plus. Good To Have: Experience in Computer Vision, including Image Processing and Video Processing. Familiarity with Generative AI techniques, such as Generative Adversarial Networks (GANs), and their applications in image, text, and other data generation tasks. Knowledge of Large Language Models (LLMs) is a plus. Experience with Microsoft AI technologies, including Azure AI Studio and Azure Copilot Studio.
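The listing above mentions evaluating models with accuracy, precision, recall, and F1-score under cross-validation. A minimal scikit-learn sketch showing one way to report those metrics together; the synthetic dataset and model choice are illustrative assumptions:

```python
# Cross-validated accuracy / precision / recall / F1 in one call (illustrative setup).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = LogisticRegression(max_iter=1000)

scores = cross_validate(
    model, X, y, cv=5,
    scoring=["accuracy", "precision", "recall", "f1"],
)

for metric in ("accuracy", "precision", "recall", "f1"):
    values = scores[f"test_{metric}"]
    print(f"{metric:>9}: {values.mean():.3f} +/- {values.std():.3f}")
```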

Posted 3 weeks ago

Apply

0.0 - 5.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Date Posted: 2025-07-11 Country: India Location: North Gate Business Park Sy.No 2/1, and Sy.No 2/2, KIAL Road, Venkatala Village, Chowdeshwari Layout, Yelahanka, Bangalore, Karnataka 560064 Position Role Type: Unspecified Who we are: At Pratt & Whitney, we believe that powered flight has transformed – and will continue to transform – the world. That’s why we work with an explorer’s heart and a perfectionist’s grit to design, build, and service the world’s most advanced aircraft engines. We do this across different portfolios – including Military Engines, Commercial Engines, Business Aviation, General Aviation, Regional Aviation, and Helicopter Aviation – and as a way of turning possibilities into realities for our customers. This is how we at Pratt & Whitney approach our work, and this is why we are inspired to go beyond. Job Description: The Engine Diagnostics, Prognostics and Health Management (DPHM) team is dedicated to providing end-to-end solutions that improve aircraft reliability while reducing operating costs. The selected candidate will generate and implement algorithms that automatically monitor critical flight parameters, alert for anomalies, and provide visibility into events or exceedances. This helps the operator make informed decisions about preventive maintenance, thereby reducing unscheduled maintenance, lowering operating costs, and increasing aircraft availability. DPHM’s advanced diagnostic solutions provide true value-added benefits that enable the operator to determine the best time to undertake maintenance on the engine and airframe. This can minimize costs by allowing for component repair instead of replacement and maximize engine time on wing. Essential Duties/Tasks of Position: Support development of advanced engine performance condition trend monitoring algorithms Analyze steady-state and transient engine trend data in support of engineering studies Support QCPC and field investigations related to turbine engine Performance, Operability, Control, and Mechanical subsystems Perform performance deterioration studies of engines operating in service Develop and implement scripts, algorithms, machine learning, data clustering, and fault classification techniques within a relational database structure to manage, aggregate, and search data in support of advanced diagnostics and prognostics Implement and maintain analytic dashboards and Power BI solutions Required qualifications and skills: Bachelor/Master of Science in Aerospace or Mechanical Engineering with 2-5 years of engineering experience, required in the field of aviation Experience in turbomachinery performance modeling (e.g., SOAPP, FAST, NPSS) Experience supporting turbomachinery testing, data collection, and/or data reduction Knowledgeable in Six Sigma Statistical Process Control methods Strong computer programming skills in Python, R, C++, and/or C# (SSIS) Familiar with data visualization techniques and tools (e.g., Matplotlib, HighCharts, and/or Power BI) Proficient in MS SQL, MySQL, and/or PostgreSQL Proficient in MS Office, especially Excel Good oral and written communication skills Beneficial Skills: Experience specifically with Turboprop/Turboshaft engine performance characteristics Knowledge of Databricks Knowledge of AWS and related services What we offer Long-term deferred compensation programs Daycare for young children Advancement programs to enhance education skills Flexible work schedules Leadership and training programs Comprehensive benefits, savings, and pension plans Financial support for parental leave Reward programs for outstanding work RTX adheres to the principles of equal employment. All qualified applications will be given careful consideration without regard to ethnicity, color, religion, gender, sexual orientation or identity, national origin, age, disability, protected veteran status or any other characteristic protected by law.
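To illustrate the kind of trend-monitoring logic described above, here is a hedged sketch that flags excursions of an engine parameter against a rolling baseline; the parameter name, CSV path, window, and 3-sigma threshold are assumptions for illustration only:

```python
# Sketch: rolling-baseline exceedance flag for a monitored engine parameter.
# Column names, file path, window, and threshold are illustrative assumptions.
import pandas as pd
import matplotlib.pyplot as plt

data = pd.read_csv("engine_trend.csv", parse_dates=["timestamp"])  # hypothetical export
data = data.set_index("timestamp").sort_index()

param = data["egt_margin"]                     # assumed parameter column
baseline = param.rolling("30D").mean()
spread = param.rolling("30D").std()

exceedances = param[(param - baseline).abs() > 3 * spread]

ax = param.plot(figsize=(10, 4), label="parameter")
baseline.plot(ax=ax, label="30-day rolling mean")
ax.scatter(exceedances.index, exceedances.values, color="red", label="exceedance")
ax.legend()
plt.tight_layout()
plt.savefig("trend_monitoring.png", dpi=150)
```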

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Data Scientist to join the Hub portion of the Trustage hub-and-spoke engagement model, with support from the CoE, for part-time delivery over a 3-year duration, assisting with Coco bots deployment and refinement. Participate in daily stand-up meetings, update status and risks, and share daily and weekly status reports. Work with stakeholders to gather a representative test data set for the AI model. Statistically evaluate through visualization, then clean up and prepare the data set for AI testing. Assess the effectiveness and accuracy of new data sources and employ data gathering techniques. Use predictive modeling to increase and optimize QA and other business outcomes. Coordinate with different functional teams to implement models and monitor outcomes. Develop processes and tools to monitor and analyze model performance and data accuracy. Looking for 9-11 years of experience. Proficiency in Python or R for statistical analysis and machine learning. Strong knowledge of SQL and experience working with large-scale datasets. Experience with machine learning frameworks and libraries such as Scikit-learn, TensorFlow, Keras, XGBoost, etc. Proficiency in data visualization tools like Power BI, Tableau, or Matplotlib/Seaborn. Strong problem-solving and analytical skills with a keen attention to detail. Excellent communication and stakeholder management skills.

Posted 3 weeks ago

Apply

3.0 - 5.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Company Overview Bixware Technologies Pvt Ltd is a rapidly growing technology firm with offices in Coimbatore and Mumbai. We specialize in delivering Microsoft BI solutions and SAP resourcing services, while also building capabilities across Open Source technologies, modern web stacks, cloud infrastructure, and Office 365 ecosystems. Our client-centric approach and cross-domain expertise enable us to deliver scalable, secure, and efficient software solutions. Role Overview: We are seeking a skilled Python Developer to join our dynamic team. The ideal candidate will have strong programming fundamentals, deep knowledge of Python ecosystems, and hands-on experience in designing, building, and maintaining web applications and data-driven services. This role demands both technical proficiency and the ability to collaborate across teams. Responsibilities: Design, develop, and maintain scalable and reusable Python-based applications. Implement backend components using frameworks like Django or Flask. Integrate applications with relational databases like MS SQL and Amazon Redshift, including writing efficient SQL queries. Develop and consume RESTful APIs and third-party integrations. Utilize data science libraries (e.g., pandas, NumPy, scikit-learn, matplotlib) for data processing, analysis, and visualization. Apply object-oriented programming (OOP) principles to write clean, modular, and testable code. Perform unit testing, code reviews, and debugging to ensure high software quality. Collaborate with front-end developers, DevOps teams, and business stakeholders to ensure alignment between technical implementation and business requirements. Use version control systems like Git to manage code repositories and support CI/CD practices. Follow agile development methodologies and contribute to sprint planning and estimations. Skills & Qualifications: Bachelor's degree in Computer Science, Information Technology, or a related discipline. 3 - 5 years of professional experience in Python development. Proficiency in Python core concepts and best practices. Solid experience with at least one Python web framework (preferably Django or Flask). Strong understanding of SQL databases (MS SQL, Redshift) and data modeling. Experience in developing and consuming RESTful APIs. Exposure to data science and analytics libraries. Familiarity with version control systems such as Git and tools like GitHub or GitLab. Working knowledge of testing frameworks (e.g., PyTest, unittest) and deployment strategies. Excellent analytical thinking, problem-solving skills, and attention to detail. Strong verbal and written communication skills; ability to work independently and collaboratively. (ref:hirist.tech)
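As a small illustration of the stack described above (a Flask backend exposing a data-driven REST endpoint backed by pandas), a hedged sketch; the route name, CSV stand-in for the database, and column names are assumptions:

```python
# Sketch: a tiny Flask endpoint serving a pandas aggregation as JSON.
# Route name, data source, and columns are illustrative assumptions.
import pandas as pd
from flask import Flask, jsonify

app = Flask(__name__)


def load_sales() -> pd.DataFrame:
    # In practice this would query MS SQL / Redshift; a CSV stands in here.
    return pd.read_csv("sales.csv")  # hypothetical file with region, amount columns


@app.route("/api/sales/summary")
def sales_summary():
    df = load_sales()
    summary = df.groupby("region")["amount"].sum().round(2)
    return jsonify(summary.to_dict())


if __name__ == "__main__":
    app.run(debug=True)
```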

Posted 3 weeks ago

Apply

16.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. What Primary Responsibilities: Business Knowledge: Capable of understanding the requirements for the entire project (not just own features) Capable of working closely with PMG during the design phase to drill down into detailed nuances of the requirements Has the ability and confidence to question the motivation behind certain requirements and work with PMG to refine them Design: Can design and implement machine learning models and algorithms Can articulate and evaluate pros/cons of different AI/ML approaches Can generate cost estimates for model training and deployment Coding/Testing: Builds and optimizes machine learning pipelines. Knows & brings in external ML frameworks and libraries. Consistently avoids common pitfalls in model development and deployment. How Quality: Solves cross-functional problems using data-driven approaches Identifies impacts/side effects of models outside of immediate scope of work Identifies cross-module issues related to data integration and model performance Identifies problems predictively using data analysis Productivity: Capable of working on multiple AI/ML projects simultaneously and context switching between them Process: Enforces process standards for model development and deployment Independence: Acts independently to determine methods and procedures on new or special assignments Prioritizes large tasks and projects effectively Agility: Release Planning: Works with the PO to do high-level release commitment and estimation Works with PO on defining stories of appropriate size for model development Agile Maturity: Able to drive the team to achieve a high level of accomplishment on the committed stories for each iteration Shows Agile leadership qualities and leads by example WITH Team Work: Capable of working with development teams and identifying the right division of technical responsibility based on skill sets Capable of working with external teams (e.g., Support, PO, etc.) that have significantly different technical skill sets and managing the discussions based on their needs Initiative: Capable of creating innovative AI/ML solutions that may include changes to requirements to create a better solution Capable of thinking outside-the-box to view the system as it should be rather than only how it is Proactively generates a continual stream of ideas and pushes to review and advance ideas if they make sense Takes initiative to learn how AI/ML technology is evolving outside the organization Takes initiative to learn how the system can be improved for the customer Should make problems open new doors for innovations Communication: Communicates complex AI/ML concepts internally with ease Accountability: Well versed in all areas of the AI/ML stack (data preprocessing, model training, evaluation, deployment, etc.) 
and aware of all components in play Leadership: Disagree without being disagreeable Use conflict as a way to drill deeper and arrive at better decisions Frequent mentorship Builds ad-hoc cross-department teams for specific projects or problems Can achieve broad scope 'buy in' across project teams and across departments Takes calculated risks Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so Required Qualifications B.E/B.Tech/MCA/MSc/MTech (Minimum 16 years of formal education, Correspondence courses are not relevant) 8+ years of experience working on multiple layers of technology Experience deploying and maintaining ML models in production Experience in Agile teams Working experience or good knowledge of cloud platforms (e.g., Azure, AWS, OCI) Experience with one or more data-oriented workflow orchestration frameworks (Airflow, KubeFlow etc.) Design, implement, and maintain CI/CD pipelines for MLOps and DevOps function Familiarity with traditional software monitoring, scaling, and quality management (QMS) Knowledge of model versioning and deployment using tools like MLflow, DVC, or similar platforms Familiarity with data versioning tools (Delta Lake, DVC, LakeFS, etc.) Demonstrate hands-on knowledge of OpenSource adoption and use cases Good understanding of Data/Information security Proficient in Data Structures, ML Algorithms, and ML lifecycle Product/Project/Program Related Tech Stack: Machine Learning Frameworks: Scikit-learn, TensorFlow, PyTorch Programming Languages: Python, R, Java Data Processing: Pandas, NumPy, Spark Visualization: Matplotlib, Seaborn, Plotly Familiarity with model versioning tools (MLFlow, etc.) Cloud Services: Azure ML, AWS SageMaker, Google Cloud AI GenAI: OpenAI, Langchain, RAG etc. Demonstrate good knowledge in Engineering Practices Demonstrates excellent problem-solving skills Proven excellent verbal, written, and interpersonal communication skills At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.

Posted 3 weeks ago

Apply

1.0 - 5.0 years

0 Lacs

haryana

On-site

Roles and responsibilities: Provide data-led analytics and reporting. Assistance on US Securities and Exchange Commission (SEC) regulatory filings for spin-offs, carve-outs, and initial public offerings (IPOs). Provide transaction-oriented accounting for mergers and acquisitions, divestitures, revenue recognition, and leases. Summarizing and analysing financial information (trial balances, income statements, balance sheets, and cash flows). Work with large volumes of transactional data to analyze underlying performance trends and identify insights. Combine data from multiple sources (SQL databases, Excel files, flat files, etc.) into integrated views that can be used to drive analysis and decision making. Build dashboards and applications using Power BI. Create automation routines to build efficiency into analytics and reporting delivery processes. Core Skills: Experience with data analytics, financial analysis, business modelling, or financial modelling. Working knowledge of accounting principles and financial statements. Strong analytical and problem-solving skills. Data wrangling and data processing skills. Knowledge of Python (PySpark, Pandas, NumPy, Matplotlib) with 1-2 years of experience. Advanced knowledge of Alteryx, Power BI, Excel, and T-SQL. A commercial outlook and a good understanding of the general business and economic environment. Excellent communication and presentation skills, both written and verbal, and the ability to articulate key points clearly and succinctly in your analysis and reasoning. Ability to build effective networks internally and externally. A natural curiosity about business and a passion for business improvement. Strong attention to detail.
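A minimal sketch of the "combine data from multiple sources into integrated views" step described above, using pandas; the connection string, file names, and column names are placeholder assumptions, not details from the posting:

```python
# Sketch: join a SQL extract with an Excel mapping file and chart a monthly trend.
# Connection string, file names, and columns are placeholder assumptions.
import pandas as pd
import matplotlib.pyplot as plt
from sqlalchemy import create_engine

engine = create_engine("mssql+pyodbc://user:pass@dsn_name")         # hypothetical DSN
transactions = pd.read_sql("SELECT entity_id, post_date, amount FROM gl_transactions", engine)
entity_map = pd.read_excel("entity_mapping.xlsx")                   # entity_id -> segment

merged = transactions.merge(entity_map, on="entity_id", how="left")
monthly = (
    merged.assign(month=pd.to_datetime(merged["post_date"]).dt.to_period("M"))
          .groupby(["month", "segment"])["amount"].sum()
          .unstack("segment")
)

monthly.plot(figsize=(10, 4), title="Monthly amount by segment")
plt.tight_layout()
plt.savefig("monthly_trend.png", dpi=150)
```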

Posted 3 weeks ago

Apply

2.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Summary Position Summary Job Description: Data Science Consultant At Deloitte, Data Science & Machine Learning is one of the key drivers for our successful business growth. We are fully committed to delivering best-in-class products and solution strategies. One such solution is People Prism, a family of solutions designed to solve challenges related to unique population identification, community activation and outreach, resource allocation, and policy and program efficacy. We are looking for a Data Science Consultant with 2+ years of machine-learning modeling experience. The ideal candidate will have direct, hands-on experience working with a team of data scientists to wrangle and visualize data, perform statistical analyses, and build predictive machine-learning models using supervised, unsupervised, and semi-supervised techniques. The candidate should be a self-starter capable of preparing scripts for model automation. Responsibilities: Develop models using techniques such as gradient boosting, logistic regression, multivariate analysis, k-means and DBSCAN clustering, PCA, and topic modeling powered by LLMs. Use data visualization to understand data distributions and patterns and communicate findings to the project lead. Apply analytical thinking and solve multiple challenges like data imbalance, overfitting, accuracy improvement, etc. to improve model performance. Conduct entity resolution and data matching on large datasets to ensure data integrity and accuracy. Work on complex datasets, applying various statistical and data mining techniques for data exploration. Write near production-ready code that is efficient and scalable over large datasets. Ensure good code documentation practices to facilitate collaboration and future development. Work closely with product managers, engineers, and other stakeholders to translate business needs into data-driven solutions. Take initiative in exploring new data science techniques and tools to continuously improve our modeling capabilities. Contribute to the growth and success of the team by being a proactive and collaborative team member. Skills & Qualifications: Required: Strong mathematical and fundamental knowledge of statistical and machine learning algorithms. Strong programming skills in Python (NumPy, Pandas, Scikit-learn, Matplotlib, Seaborn, Plotly, etc.) and SQL for data analysis, data wrangling, and database management. Strong understanding of gradient boosting, logistic regression, and other classification algorithms. Experience in explainable AI, particularly SHAP. Deep knowledge of advanced analytics, data wrangling, and machine learning algorithms. Strong problem-solving skills with emphasis on product development. Ability to manage multiple projects at a time. Strong communication skills with the ability to convey complex concepts clearly. Should have a Bachelor's or Master's degree in Engineering, Computer Science, Statistics, Mathematics, or another quantitative field. Self-starter with the ability to take initiative and work independently in a small team environment. Preferred: Experience working in Google Cloud Platform. Experience with entity resolution and data matching techniques. Knowledge of ML model deployment in any of the cloud services is appreciated. Hands-on experience with prompt engineering and other GenAI or LLM-based applications like RAG, etc.
Our purpose Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities. Our people and culture Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work. Professional development At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India . Benefits To Help You Thrive At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that helps that empowers our professionals to thrive mentally, physically, and financially—and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/ or other criteria. Learn more about what working at Deloitte can mean for you. Recruiting tips From developing a stand out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Requisition code: 306445
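The consultant role above calls out explainable AI with SHAP on gradient-boosting models. A minimal sketch of that pattern; the dataset and model choices are illustrative assumptions, not Deloitte's actual setup:

```python
# Sketch: SHAP explanations for a gradient-boosting classifier (illustrative data/model).
import matplotlib.pyplot as plt
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(n_estimators=200, max_depth=3, random_state=0)
model.fit(X_train, y_train)

# TreeExplainer computes exact SHAP values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Global feature-importance view; saved rather than shown for non-interactive runs.
shap.summary_plot(shap_values, X_test, show=False)
plt.tight_layout()
plt.savefig("shap_summary.png", dpi=150)
```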

Posted 3 weeks ago

Apply

2.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

Summary Position Summary Job Description: Data Science Consultant At Deloitte, Data Science & Machine Learning is one of the key drivers for our successful business growth. We are fully committed to delivering best-in-class products and solution strategies. One such solution is People Prism, a family of solutions designed to solve challenges related to unique population identification, community activation and outreach, resource allocation, and policy and program efficacy. We are looking for a Data Science Consultant with 2+ years of machine-learning modeling experience. The ideal candidate will have direct, hands-on experience working with a team of data scientists to wrangle and visualize data, perform statistical analyses, and build predictive machine-learning models using supervised, unsupervised, and semi-supervised techniques. The candidate should be a self-starter capable of preparing scripts for model automation. Responsibilities: Develop models using techniques such as gradient boosting, logistic regression, multivariate analysis, k-means and DBSCAN clustering, PCA, and topic modeling powered by LLMs. Use data visualization to understand data distributions and patterns and communicate findings to the project lead. Apply analytical thinking and solve multiple challenges like data imbalance, overfitting, accuracy improvement, etc. to improve model performance. Conduct entity resolution and data matching on large datasets to ensure data integrity and accuracy. Work on complex datasets, applying various statistical and data mining techniques for data exploration. Write near production-ready code that is efficient and scalable over large datasets. Ensure good code documentation practices to facilitate collaboration and future development. Work closely with product managers, engineers, and other stakeholders to translate business needs into data-driven solutions. Take initiative in exploring new data science techniques and tools to continuously improve our modeling capabilities. Contribute to the growth and success of the team by being a proactive and collaborative team member. Skills & Qualifications: Required: Strong mathematical and fundamental knowledge of statistical and machine learning algorithms. Strong programming skills in Python (NumPy, Pandas, Scikit-learn, Matplotlib, Seaborn, Plotly, etc.) and SQL for data analysis, data wrangling, and database management. Strong understanding of gradient boosting, logistic regression, and other classification algorithms. Experience in explainable AI, particularly SHAP. Deep knowledge of advanced analytics, data wrangling, and machine learning algorithms. Strong problem-solving skills with emphasis on product development. Ability to manage multiple projects at a time. Strong communication skills with the ability to convey complex concepts clearly. Should have a Bachelor's or Master's degree in Engineering, Computer Science, Statistics, Mathematics, or another quantitative field. Self-starter with the ability to take initiative and work independently in a small team environment. Preferred: Experience working in Google Cloud Platform. Experience with entity resolution and data matching techniques. Knowledge of ML model deployment in any of the cloud services is appreciated. Hands-on experience with prompt engineering and other GenAI or LLM-based applications like RAG, etc.
Our purpose Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities. Our people and culture Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work. Professional development At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India . Benefits To Help You Thrive At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that helps that empowers our professionals to thrive mentally, physically, and financially—and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/ or other criteria. Learn more about what working at Deloitte can mean for you. Recruiting tips From developing a stand out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Requisition code: 306445

Posted 3 weeks ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Position Summary

Job Description: Data Science Consultant

At Deloitte, Data Science & Machine Learning is one of the key drivers of our successful business growth. We are fully committed to delivering best-in-class products and solution strategies. One such solution is People Prism, a family of solutions designed to solve challenges related to unique population identification, community activation and outreach, resource allocation, and policy and program efficacy.

We are looking for a Data Science Consultant with 2+ years of machine-learning modeling experience. The ideal candidate will have direct, hands-on experience working with a team of data scientists to wrangle and visualize data, perform statistical analyses, and build predictive machine-learning models using supervised, unsupervised, and semi-supervised techniques. The candidate should be a self-starter capable of preparing scripts for model automation.

Responsibilities:
• Develop models using techniques such as gradient boosting, logistic regression, multivariate analysis, k-means and DBSCAN clustering, PCA, and topic modeling powered by LLMs.
• Use data visualization to understand data distributions and patterns, and communicate findings to the project lead.
• Apply analytical thinking to challenges such as data imbalance, overfitting, and accuracy improvement to raise model performance.
• Conduct entity resolution and data matching on large datasets to ensure data integrity and accuracy.
• Work on complex datasets, applying various statistical and data mining techniques for data exploration.
• Write near production-ready code that is efficient and scalable over large datasets.
• Ensure good code documentation practices to facilitate collaboration and future development.
• Work closely with product managers, engineers, and other stakeholders to translate business needs into data-driven solutions.
• Take initiative in exploring new data science techniques and tools to continuously improve our modeling capabilities.
• Contribute to the growth and success of the team by being a proactive and collaborative team member.

Skills & Qualifications:

Required:
• Strong mathematical foundation and knowledge of statistical and machine learning algorithms.
• Strong programming skills in Python (NumPy, Pandas, scikit-learn, Matplotlib, Seaborn, Plotly, etc.) and SQL for data analysis, data wrangling, and database management.
• Strong understanding of gradient boosting, logistic regression, and other classification algorithms.
• Experience in explainable AI, particularly SHAP.
• Deep knowledge of advanced analytics, data wrangling, and machine learning algorithms.
• Strong problem-solving skills with an emphasis on product development.
• Ability to manage multiple projects at a time.
• Strong communication skills with the ability to convey complex concepts clearly.
• Bachelor's or Master's degree in Engineering, Computer Science, Statistics, Mathematics, or another quantitative field.
• Self-starter with the ability to take initiative and work independently in a small team environment.

Preferred:
• Experience working in Google Cloud Platform.
• Experience with entity resolution and data matching techniques.
• Knowledge of ML model deployment in any of the cloud services is appreciated.
• Hands-on experience with prompt engineering and other GenAI or LLM-based applications such as RAG.
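The data-imbalance challenge called out in this posting is often handled with class weighting and stratified evaluation; a rough scikit-learn sketch (synthetic data, illustrative parameters) follows.

# Hypothetical sketch: handle class imbalance with class weights and stratified CV.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

X, y = make_classification(n_samples=5000, n_features=20, weights=[0.95, 0.05], random_state=0)

# class_weight="balanced" reweights the minority class instead of resampling it.
clf = LogisticRegression(max_iter=1000, class_weight="balanced")

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc")
print("ROC-AUC per fold:", np.round(scores, 3), "mean:", round(scores.mean(), 3))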
Our purpose
Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Professional development
At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India.

Benefits To Help You Thrive
At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially, and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment, and/or other criteria. Learn more about what working at Deloitte can mean for you.

Recruiting tips
From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Requisition code: 306445

Posted 3 weeks ago

Apply

4.0 - 7.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Project Role: Consultant
Work Experience: 4 to 7 Years
Work Location: Bangalore/Gurgaon
Work Mode: Hybrid
Must Have Skills: Machine Learning, Python, Gen AI

Desirable Technical Skills:
• Proficiency in Python is preferred; familiarity with other programming languages (e.g., R, Java) is a plus.
• Experience with frameworks such as TensorFlow, PyTorch, or scikit-learn.
• Experience working with generative AI models, such as GPT, BERT, LLaMA, or other transformer-based architectures.
• Experience in developing, testing, and deploying high-quality AI models and solutions.
• Proficiency in data manipulation, cleaning, and analysis.
• Ability to create clear and insightful visualizations using tools like Matplotlib, Seaborn, or Tableau.
• Familiarity with cloud services like AWS, Google Cloud, or Azure for deploying AI solutions.

Key Responsibilities:
• AI & Data Solutions: Design and implement AI/ML models and data-driven solutions to address complex challenges in healthcare and life sciences.
• Data Analysis & Insights: Analyze large datasets to identify trends, generate insights, and support data-informed decision-making.
• Cross-functional Collaboration: Partner with consultants, data engineers, and scientists to build end-to-end data pipelines and analytical workflows.
• Client Engagement: Interact with clients through interviews, workshops, and presentations, contributing to deliverables and building strong relationships.
• Communication & Reporting: Translate complex analyses into clear, actionable insights for both technical and non-technical stakeholders.
• Continuous Learning & Mentorship: Stay current with AI/ML advancements, contribute to research and proposals, and mentor junior team members.
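On the generative/transformer-model side listed above, a minimal Hugging Face sketch could look like the following; the pipeline downloads a default public checkpoint, and the input sentence is made up for illustration.

# Hypothetical sketch: applying a pretrained transformer via the Hugging Face pipeline API.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # pulls a default pretrained checkpoint
result = classifier("The trial results exceeded expectations for the new therapy.")
print(result)  # e.g. [{'label': 'POSITIVE', 'score': 0.99}]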

Posted 3 weeks ago

Apply

2.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Position Summary

Job Description: Data Science Consultant

At Deloitte, Data Science & Machine Learning is one of the key drivers of our successful business growth. We are fully committed to delivering best-in-class products and solution strategies. One such solution is People Prism, a family of solutions designed to solve challenges related to unique population identification, community activation and outreach, resource allocation, and policy and program efficacy.

We are looking for a Data Science Consultant with 2+ years of machine-learning modeling experience. The ideal candidate will have direct, hands-on experience working with a team of data scientists to wrangle and visualize data, perform statistical analyses, and build predictive machine-learning models using supervised, unsupervised, and semi-supervised techniques. The candidate should be a self-starter capable of preparing scripts for model automation.

Responsibilities:
• Develop models using techniques such as gradient boosting, logistic regression, multivariate analysis, k-means and DBSCAN clustering, PCA, and topic modeling powered by LLMs.
• Use data visualization to understand data distributions and patterns, and communicate findings to the project lead.
• Apply analytical thinking to challenges such as data imbalance, overfitting, and accuracy improvement to raise model performance.
• Conduct entity resolution and data matching on large datasets to ensure data integrity and accuracy.
• Work on complex datasets, applying various statistical and data mining techniques for data exploration.
• Write near production-ready code that is efficient and scalable over large datasets.
• Ensure good code documentation practices to facilitate collaboration and future development.
• Work closely with product managers, engineers, and other stakeholders to translate business needs into data-driven solutions.
• Take initiative in exploring new data science techniques and tools to continuously improve our modeling capabilities.
• Contribute to the growth and success of the team by being a proactive and collaborative team member.

Skills & Qualifications:

Required:
• Strong mathematical foundation and knowledge of statistical and machine learning algorithms.
• Strong programming skills in Python (NumPy, Pandas, scikit-learn, Matplotlib, Seaborn, Plotly, etc.) and SQL for data analysis, data wrangling, and database management.
• Strong understanding of gradient boosting, logistic regression, and other classification algorithms.
• Experience in explainable AI, particularly SHAP.
• Deep knowledge of advanced analytics, data wrangling, and machine learning algorithms.
• Strong problem-solving skills with an emphasis on product development.
• Ability to manage multiple projects at a time.
• Strong communication skills with the ability to convey complex concepts clearly.
• Bachelor's or Master's degree in Engineering, Computer Science, Statistics, Mathematics, or another quantitative field.
• Self-starter with the ability to take initiative and work independently in a small team environment.

Preferred:
• Experience working in Google Cloud Platform.
• Experience with entity resolution and data matching techniques.
• Knowledge of ML model deployment in any of the cloud services is appreciated.
• Hands-on experience with prompt engineering and other GenAI or LLM-based applications such as RAG.
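For the entity resolution and data matching work mentioned above, a toy sketch using only the Python standard library might look like this; the record names are invented, and production pipelines usually add blocking and richer similarity features.

# Hypothetical sketch: simple entity resolution by fuzzy-matching names across two record sets.
from difflib import SequenceMatcher

left = ["Acme Corporation", "Globex LLC", "Initech Inc."]
right = ["ACME Corp", "Globex L.L.C.", "Umbrella Group"]

def similarity(a, b):
    # Normalized edit-style similarity between two strings, case-insensitive.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

for name in left:
    best = max(right, key=lambda candidate: similarity(name, candidate))
    score = similarity(name, best)
    status = "match" if score >= 0.6 else "no confident match"
    print(f"{name!r} -> {best!r} ({score:.2f}, {status})")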
Our purpose
Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Professional development
At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India.

Benefits To Help You Thrive
At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially, and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment, and/or other criteria. Learn more about what working at Deloitte can mean for you.

Recruiting tips
From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Requisition code: 306445

Posted 3 weeks ago

Apply

2.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Position Summary

Job Description: Data Science Consultant

At Deloitte, Data Science & Machine Learning is one of the key drivers of our successful business growth. We are fully committed to delivering best-in-class products and solution strategies. One such solution is People Prism, a family of solutions designed to solve challenges related to unique population identification, community activation and outreach, resource allocation, and policy and program efficacy.

We are looking for a Data Science Consultant with 2+ years of machine-learning modeling experience. The ideal candidate will have direct, hands-on experience working with a team of data scientists to wrangle and visualize data, perform statistical analyses, and build predictive machine-learning models using supervised, unsupervised, and semi-supervised techniques. The candidate should be a self-starter capable of preparing scripts for model automation.

Responsibilities:
• Develop models using techniques such as gradient boosting, logistic regression, multivariate analysis, k-means and DBSCAN clustering, PCA, and topic modeling powered by LLMs.
• Use data visualization to understand data distributions and patterns, and communicate findings to the project lead.
• Apply analytical thinking to challenges such as data imbalance, overfitting, and accuracy improvement to raise model performance.
• Conduct entity resolution and data matching on large datasets to ensure data integrity and accuracy.
• Work on complex datasets, applying various statistical and data mining techniques for data exploration.
• Write near production-ready code that is efficient and scalable over large datasets.
• Ensure good code documentation practices to facilitate collaboration and future development.
• Work closely with product managers, engineers, and other stakeholders to translate business needs into data-driven solutions.
• Take initiative in exploring new data science techniques and tools to continuously improve our modeling capabilities.
• Contribute to the growth and success of the team by being a proactive and collaborative team member.

Skills & Qualifications:

Required:
• Strong mathematical foundation and knowledge of statistical and machine learning algorithms.
• Strong programming skills in Python (NumPy, Pandas, scikit-learn, Matplotlib, Seaborn, Plotly, etc.) and SQL for data analysis, data wrangling, and database management.
• Strong understanding of gradient boosting, logistic regression, and other classification algorithms.
• Experience in explainable AI, particularly SHAP.
• Deep knowledge of advanced analytics, data wrangling, and machine learning algorithms.
• Strong problem-solving skills with an emphasis on product development.
• Ability to manage multiple projects at a time.
• Strong communication skills with the ability to convey complex concepts clearly.
• Bachelor's or Master's degree in Engineering, Computer Science, Statistics, Mathematics, or another quantitative field.
• Self-starter with the ability to take initiative and work independently in a small team environment.

Preferred:
• Experience working in Google Cloud Platform.
• Experience with entity resolution and data matching techniques.
• Knowledge of ML model deployment in any of the cloud services is appreciated.
• Hands-on experience with prompt engineering and other GenAI or LLM-based applications such as RAG.
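As a rough illustration of the PCA, k-means, and DBSCAN techniques listed above, the following scikit-learn sketch uses synthetic blobs; real parameters would need tuning.

# Hypothetical sketch: PCA for dimensionality reduction, then k-means and DBSCAN clustering.
from sklearn.cluster import DBSCAN, KMeans
from sklearn.datasets import make_blobs
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X, _ = make_blobs(n_samples=1000, n_features=8, centers=4, random_state=1)
X_scaled = StandardScaler().fit_transform(X)

X_2d = PCA(n_components=2).fit_transform(X_scaled)  # project to 2-D before clustering/plotting

kmeans_labels = KMeans(n_clusters=4, n_init=10, random_state=1).fit_predict(X_2d)
dbscan_labels = DBSCAN(eps=0.5, min_samples=10).fit_predict(X_2d)  # label -1 marks noise points

print("k-means clusters found:", sorted(set(kmeans_labels)))
print("DBSCAN clusters found (-1 = noise):", sorted(set(dbscan_labels)))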
Our purpose
Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Professional development
At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India.

Benefits To Help You Thrive
At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially, and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment, and/or other criteria. Learn more about what working at Deloitte can mean for you.

Recruiting tips
From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Requisition code: 306445

Posted 3 weeks ago

Apply

2.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Position Summary

Job Description: Data Science Consultant

At Deloitte, Data Science & Machine Learning is one of the key drivers of our successful business growth. We are fully committed to delivering best-in-class products and solution strategies. One such solution is People Prism, a family of solutions designed to solve challenges related to unique population identification, community activation and outreach, resource allocation, and policy and program efficacy.

We are looking for a Data Science Consultant with 2+ years of machine-learning modeling experience. The ideal candidate will have direct, hands-on experience working with a team of data scientists to wrangle and visualize data, perform statistical analyses, and build predictive machine-learning models using supervised, unsupervised, and semi-supervised techniques. The candidate should be a self-starter capable of preparing scripts for model automation.

Responsibilities:
• Develop models using techniques such as gradient boosting, logistic regression, multivariate analysis, k-means and DBSCAN clustering, PCA, and topic modeling powered by LLMs.
• Use data visualization to understand data distributions and patterns, and communicate findings to the project lead.
• Apply analytical thinking to challenges such as data imbalance, overfitting, and accuracy improvement to raise model performance.
• Conduct entity resolution and data matching on large datasets to ensure data integrity and accuracy.
• Work on complex datasets, applying various statistical and data mining techniques for data exploration.
• Write near production-ready code that is efficient and scalable over large datasets.
• Ensure good code documentation practices to facilitate collaboration and future development.
• Work closely with product managers, engineers, and other stakeholders to translate business needs into data-driven solutions.
• Take initiative in exploring new data science techniques and tools to continuously improve our modeling capabilities.
• Contribute to the growth and success of the team by being a proactive and collaborative team member.

Skills & Qualifications:

Required:
• Strong mathematical foundation and knowledge of statistical and machine learning algorithms.
• Strong programming skills in Python (NumPy, Pandas, scikit-learn, Matplotlib, Seaborn, Plotly, etc.) and SQL for data analysis, data wrangling, and database management.
• Strong understanding of gradient boosting, logistic regression, and other classification algorithms.
• Experience in explainable AI, particularly SHAP.
• Deep knowledge of advanced analytics, data wrangling, and machine learning algorithms.
• Strong problem-solving skills with an emphasis on product development.
• Ability to manage multiple projects at a time.
• Strong communication skills with the ability to convey complex concepts clearly.
• Bachelor's or Master's degree in Engineering, Computer Science, Statistics, Mathematics, or another quantitative field.
• Self-starter with the ability to take initiative and work independently in a small team environment.

Preferred:
• Experience working in Google Cloud Platform.
• Experience with entity resolution and data matching techniques.
• Knowledge of ML model deployment in any of the cloud services is appreciated.
• Hands-on experience with prompt engineering and other GenAI or LLM-based applications such as RAG.
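For the data-visualization expectations above, a small pandas/Seaborn/Matplotlib sketch could look like the following; the DataFrame is fabricated for illustration.

# Hypothetical sketch: inspect a feature's distribution and category balance.
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns

rng = np.random.default_rng(7)
df = pd.DataFrame({
    "age": rng.normal(45, 12, 1000).clip(18, 90),
    "segment": rng.choice(["A", "B", "C"], size=1000, p=[0.6, 0.3, 0.1]),
})

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
sns.histplot(data=df, x="age", kde=True, ax=axes[0])  # distribution of a numeric feature
sns.countplot(data=df, x="segment", ax=axes[1])       # category balance
axes[0].set_title("Age distribution")
axes[1].set_title("Segment counts")
plt.tight_layout()
plt.show()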
Our purpose
Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Professional development
At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India.

Benefits To Help You Thrive
At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially, and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment, and/or other criteria. Learn more about what working at Deloitte can mean for you.

Recruiting tips
From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Requisition code: 306445

Posted 3 weeks ago

Apply

2.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Position Summary

Job Description: Data Science Consultant

At Deloitte, Data Science & Machine Learning is one of the key drivers of our successful business growth. We are fully committed to delivering best-in-class products and solution strategies. One such solution is People Prism, a family of solutions designed to solve challenges related to unique population identification, community activation and outreach, resource allocation, and policy and program efficacy.

We are looking for a Data Science Consultant with 2+ years of machine-learning modeling experience. The ideal candidate will have direct, hands-on experience working with a team of data scientists to wrangle and visualize data, perform statistical analyses, and build predictive machine-learning models using supervised, unsupervised, and semi-supervised techniques. The candidate should be a self-starter capable of preparing scripts for model automation.

Responsibilities:
• Develop models using techniques such as gradient boosting, logistic regression, multivariate analysis, k-means and DBSCAN clustering, PCA, and topic modeling powered by LLMs.
• Use data visualization to understand data distributions and patterns, and communicate findings to the project lead.
• Apply analytical thinking to challenges such as data imbalance, overfitting, and accuracy improvement to raise model performance.
• Conduct entity resolution and data matching on large datasets to ensure data integrity and accuracy.
• Work on complex datasets, applying various statistical and data mining techniques for data exploration.
• Write near production-ready code that is efficient and scalable over large datasets.
• Ensure good code documentation practices to facilitate collaboration and future development.
• Work closely with product managers, engineers, and other stakeholders to translate business needs into data-driven solutions.
• Take initiative in exploring new data science techniques and tools to continuously improve our modeling capabilities.
• Contribute to the growth and success of the team by being a proactive and collaborative team member.

Skills & Qualifications:

Required:
• Strong mathematical foundation and knowledge of statistical and machine learning algorithms.
• Strong programming skills in Python (NumPy, Pandas, scikit-learn, Matplotlib, Seaborn, Plotly, etc.) and SQL for data analysis, data wrangling, and database management.
• Strong understanding of gradient boosting, logistic regression, and other classification algorithms.
• Experience in explainable AI, particularly SHAP.
• Deep knowledge of advanced analytics, data wrangling, and machine learning algorithms.
• Strong problem-solving skills with an emphasis on product development.
• Ability to manage multiple projects at a time.
• Strong communication skills with the ability to convey complex concepts clearly.
• Bachelor's or Master's degree in Engineering, Computer Science, Statistics, Mathematics, or another quantitative field.
• Self-starter with the ability to take initiative and work independently in a small team environment.

Preferred:
• Experience working in Google Cloud Platform.
• Experience with entity resolution and data matching techniques.
• Knowledge of ML model deployment in any of the cloud services is appreciated.
• Hands-on experience with prompt engineering and other GenAI or LLM-based applications such as RAG.
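On the Python-plus-SQL wrangling side of this posting, a minimal sketch using an in-memory SQLite database as a stand-in for a real warehouse (table and column names invented) might be:

# Hypothetical sketch: push aggregation to SQL, finish feature prep in pandas.
import sqlite3
import pandas as pd

conn = sqlite3.connect(":memory:")
pd.DataFrame({
    "member_id": [1, 2, 3, 4],
    "region": ["north", "south", "north", "east"],
    "visits": [3, 0, 7, 2],
}).to_sql("members", conn, index=False)

summary = pd.read_sql_query(
    "SELECT region, COUNT(*) AS n_members, AVG(visits) AS avg_visits "
    "FROM members GROUP BY region",
    conn,
)
summary["high_engagement"] = summary["avg_visits"] > 2  # simple derived feature
print(summary)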
Our purpose
Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Professional development
At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India.

Benefits To Help You Thrive
At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially, and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment, and/or other criteria. Learn more about what working at Deloitte can mean for you.

Recruiting tips
From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Requisition code: 306445

Posted 3 weeks ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role And Responsibilities
As an Associate Data Scientist at IBM, you will work to solve business problems using leading-edge and open-source tools such as Python, R, and TensorFlow, combined with IBM tools and our AI application suites. You will prepare, analyze, and understand data to deliver insight, predict emerging trends, and provide recommendations to stakeholders.

In your role, you may be responsible for:
• Implementing and validating predictive and prescriptive models, and creating and maintaining statistical models with a focus on big data, incorporating machine learning techniques in your projects.
• Writing programs to cleanse and integrate data in an efficient and reusable manner.
• Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants, and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviors.
• Communicating with internal and external clients to understand and define business needs and appropriate modelling techniques to provide analytical solutions.
• Evaluating modelling results and communicating the results to technical and non-technical audiences.

Preferred Education
Master's Degree

Required Technical And Professional Expertise
• Proof of Concept (POC) development: develop POCs to validate and showcase the feasibility and effectiveness of the proposed AI solutions.
• Collaborate with development teams to implement and iterate on POCs, ensuring alignment with customer requirements and expectations.
• Help showcase the ability of a Gen AI code assistant to refactor/rewrite and document code from one language to another, particularly COBOL to Java, through rapid prototypes/POCs.
• Document solution architectures, design decisions, implementation details, and lessons learned. Create technical documentation, white papers, and best practice guides.

Preferred Technical And Professional Experience
• Strong programming skills, with proficiency in Python and experience with AI frameworks such as TensorFlow, PyTorch, Keras, or Hugging Face.
• Understanding of libraries such as scikit-learn, Pandas, Matplotlib, etc.
• Familiarity with cloud platforms.
• Experience and working knowledge of COBOL and Java would be preferred.
• Experience in Python and PySpark will be an added advantage.
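As context for the data-cleansing responsibility above, a small reusable pandas step might look like the following; column names and rules are invented for illustration.

# Hypothetical sketch: a reusable data-cleansing function with pandas.
import pandas as pd

def cleanse(df):
    """Standardize text fields, parse dates, fill gaps, and drop duplicate records."""
    out = df.copy()
    out["customer_name"] = out["customer_name"].str.strip().str.title()
    out["signup_date"] = pd.to_datetime(out["signup_date"], errors="coerce")
    out["monthly_spend"] = out["monthly_spend"].fillna(out["monthly_spend"].median())
    return out.drop_duplicates(subset="customer_id")

raw = pd.DataFrame({
    "customer_id": [1, 1, 2, 3],
    "customer_name": ["  alice smith ", "  alice smith ", "BOB JONES", "carol li"],
    "signup_date": ["2024-01-05", "2024-01-05", "not a date", "2024-03-18"],
    "monthly_spend": [120.0, 120.0, None, 80.0],
})
print(cleanse(raw))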

Posted 3 weeks ago

Apply