
162 Plotly Jobs - Page 4

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 years

2 - 8 Lacs

Gurgaon

On-site

Experience: 5 - 8 Years
Job Location: Gurgaon, Hyderabad

Purpose of the Job
The Senior Machine Learning Engineer is responsible for designing, implementing, and deploying scalable and efficient machine learning algorithms to solve complex business problems. The Machine Learning Engineer is also responsible for the lifecycle of models once they are deployed in production environments, monitoring their performance and evolution. The position is highly technical and requires the ability to collaborate with multiple technical and non-technical profiles (data scientists, data engineers, data analysts, product owners, business experts) and to actively take part in a large data science community.

Key Responsibilities and Expected Deliverables
Managing the lifecycle of machine learning models:
- Develop and implement machine learning models to solve complex business problems.
- Ensure that models are accurate, efficient, reliable, and scalable.
- Deploy machine learning models to production environments, ensuring that models are integrated with software systems.
- Monitor machine learning models in production, ensuring that models are performing as expected and that any errors or performance issues are identified and resolved quickly.
- Maintain machine learning models over time: update models as new data becomes available, retrain models to improve performance, and retire models that are no longer effective.
- Develop and implement policies and procedures for the ethical and responsible use of machine learning models, addressing issues related to bias, fairness, transparency, and accountability.
Continuous improvement:
- Stay up to date with the latest developments in the field: read research papers, attend conferences, and participate in trainings to expand knowledge and skills.
- Identify and evaluate new technologies and tools that can improve the efficiency and effectiveness of machine learning projects.
- Propose and implement optimizations for current machine learning workflows and systems; proactively identify areas of improvement within the pipelines.
- Make sure that created code is compliant with our set of engineering standards.
Collaboration with other data experts (Data Engineers, Platform Engineers, and Data Analysts):
- Participate in pull request reviews from other team members; ask for review and comments when submitting your own work.
- Actively participate in the day-to-day life of the project (Agile rituals), the data science team (DS meeting), and the rest of the Global Engineering team.

Education & Experience
- Engineering Master's degree or PhD in Data Science, Statistics, Mathematics, or related fields
- 5+ years' experience in a Machine Learning Engineer role in large corporate organizations
- Experience working with ML models in a cloud ecosystem

Statistics & Machine Learning
- Statistics: Strong understanding of statistical analysis and modelling techniques (e.g., regression analysis, hypothesis testing, time series analysis)
- Classical ML: Very strong knowledge of classical ML algorithms for regression and classification, supervised and unsupervised learning, both theoretical and practical (e.g. using scikit-learn, xgboost; a minimal sketch follows this listing)
- ML niche: Expertise in at least one of the following ML specialisations: time series forecasting, Natural Language Processing, Computer Vision
- Deep Learning: Good knowledge of Deep Learning fundamentals (CNN, RNN, transformer architecture, attention mechanism, ...) and one of the deep learning frameworks (pytorch, tensorflow, keras)
- Generative AI: Good understanding of Generative AI specificities; previous experience working with Large Language Models is a plus (e.g. with openai, langchain)

MLOps
- Model strategy: Expertise in designing, implementing, and testing machine learning strategies.
- Model integration: Very strong skills in integrating a machine learning algorithm into a data science application in production.
- Model performance: Deep understanding of model performance evaluation metrics and existing libraries (e.g., scikit-learn, evidently)
- Model deployment: Experience in deploying and managing machine learning models in production using a cloud platform, model serving frameworks, or containerization.
- Model monitoring: Experience with model performance monitoring tools is a plus (Grafana, Prometheus)

Software Engineering
- Python: Very strong coding skills in Python, including modularity, OOP, and data & config manipulation frameworks (e.g., pandas, pydantic)
- Python ecosystem: Strong knowledge of tooling in the Python ecosystem, such as dependency management (venv, poetry), documentation frameworks (e.g. sphinx, mkdocs, jupyter-book), and testing frameworks (unittest, pytest)
- Software engineering practices: Experience putting in place good software engineering practices such as design patterns, testing (unit, integration), clean code, and code formatting
- Debugging: Ability to troubleshoot and debug issues within machine learning pipelines

Data Science Experimentation and Analytics
- Data Visualization: Knowledge of data visualization tools such as plotly, seaborn, matplotlib, etc. to visualise, interpret, and communicate the results of machine learning models to stakeholders. Basic knowledge of PowerBI is a plus
- Data Cleaning: Experience with data cleaning and preprocessing techniques such as feature scaling, dimensionality reduction, and outlier detection (e.g. with pandas, scikit-learn)
- Data Science Experiments: Understanding of experimental design and A/B testing methodologies

Data Processing
- Databricks/Spark: Basic knowledge of PySpark for big data processing
- Databases: Basic knowledge of SQL to query data in internal systems
- Data Formats: Familiarity with different data storage formats such as Parquet and Delta

DevOps
- Azure DevOps: Experience using a DevOps platform such as Azure DevOps (Boards, Repositories, Pipelines)
- Git: Experience with code versioning (git), branch strategies, and collaborative work with pull requests; proficient with the most basic git commands
- CI/CD: Experience implementing/maintaining pipelines for continuous integration (including execution of the testing strategy) and continuous deployment is preferable

Cloud Platform
- Azure Cloud: Previous experience with services like Azure Machine Learning Services and/or Azure Databricks is preferable

Soft Skills
- Strong analytical and problem-solving skills, with attention to detail
- Excellent verbal and written communication and pedagogical skills with technical and non-technical teams
- Excellent teamwork and collaboration skills
- Adaptability and reactivity to new technologies, tools, and techniques
- Fluent in English
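As a hedged illustration of the classical ML and model-evaluation skills this listing asks for, here is a minimal scikit-learn sketch; the synthetic dataset and the choice of gradient boosting are assumptions for demonstration, not requirements from the posting.

# Minimal sketch: a scikit-learn classification pipeline with evaluation.
# The synthetic data and chosen estimator are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import classification_report

# Synthetic stand-in for a real business dataset
X, y = make_classification(n_samples=2000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# A pipeline keeps preprocessing and the model together for deployment
pipeline = Pipeline([
    ("scaler", StandardScaler()),
    ("clf", GradientBoostingClassifier(random_state=42)),
])
pipeline.fit(X_train, y_train)

# Evaluate with standard classification metrics
print(classification_report(y_test, pipeline.predict(X_test)))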

Posted 1 week ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Job Description
- Experience with SonarQube, CI/CD, Tekton, Terraform, GCS, GCP Looker, Google Cloud Build, Cloud Run, Vertex AI, Airflow, TensorFlow, etc.
- Experience training, building, and deploying ML and DL models
- Experience with Hugging Face, Chainlit, React
- Ability to understand the technical, functional, non-functional, and security aspects of business requirements and deliver them end-to-end
- Ability to adapt quickly to open-source products and tools and integrate them with ML platforms
- Building and deploying models (scikit-learn, DataRobot, TensorFlow, PyTorch, etc.)
- Developing and deploying in on-prem and cloud environments: Kubernetes, Tekton, OpenShift, Terraform, Vertex AI
- Experience with LLMs such as PaLM, GPT-4, and Mistral (open-source models); work through the complete lifecycle of GenAI model development, from training and testing to deployment and performance monitoring
- Developing and maintaining AI pipelines with multiple modalities such as text, image, and audio
- Have implemented real-world chatbots or conversational agents at scale handling different data sources
- Experience developing image generation/translation tools using latent diffusion models such as Stable Diffusion or Instruct pix2pix
- Expertise in handling large-scale structured and unstructured data; efficiently handled large-scale generative AI datasets and outputs
- Familiarity with Docker tooling and pipenv/conda/poetry environments
- Comfortable following Python project management best practices (use of setup.py, logging, pytest, relative module imports, sphinx docs, etc.)
- Familiarity with GitHub (clone, fetch, pull/push, raising issues and PRs, etc.)
- High familiarity with DL theory/practice in NLP applications
- Comfortable coding with Hugging Face, LangChain, Chainlit, TensorFlow and/or PyTorch, scikit-learn, NumPy, and Pandas
- Comfortable using two or more open-source NLP modules such as spaCy, TorchText, fastai.text, farm-haystack, and others
- Knowledge of fundamental text data processing (use of regex, token/word analysis, spelling correction/noise reduction in text, segmenting noisy unfamiliar sentences/phrases at the right places, deriving insights from clustering, etc.)
- Have implemented real-world BERT or other transformer fine-tuned models (sequence classification, NER, or QA) from data preparation and model creation through inference and deployment (a minimal sketch follows this listing)
- Use of GCP services like BigQuery, Cloud Functions, Cloud Run, Cloud Build, Vertex AI; good working knowledge of other open-source packages to benchmark and derive summaries
- Experience using GPU/CPU on cloud and on-prem infrastructure; skill set to leverage cloud platforms for Data Engineering, Big Data, and ML needs
- Use of Docker (experience with experimental Docker features, docker-compose, etc.)
- Familiarity with orchestration tools such as Airflow and Kubeflow
- Experience with CI/CD and infrastructure-as-code tools like Terraform; Kubernetes or any other containerization tool, with experience in Helm, Argo Workflows, etc.
- Ability to develop APIs with compliant, ethical, secure, and safe AI tooling
- Good UI skills to visualize and build better applications using Gradio, Dash, Streamlit, React, Django, etc.; a deeper understanding of JavaScript, CSS, Angular, HTML, etc. is a plus
Responsibilities
- Design NLP/LLM/GenAI applications/products following robust coding practices
- Explore SoTA models/techniques so that they can be applied to automotive industry use cases
- Conduct ML experiments to train/infer models; if need be, build models that abide by memory and latency restrictions
- Deploy REST APIs or a minimalistic UI for NLP applications using Docker and Kubernetes tools
- Showcase NLP/LLM/GenAI applications to users in the best way possible through web frameworks (Dash, Plotly, Streamlit, etc.)
- Converge multiple bots into super apps using LLMs with multiple modalities
- Develop agentic workflows using AutoGen, Agentbuilder, LangGraph
- Build modular AI/ML products that can be consumed at scale

Qualifications
Education: Bachelor's or Master's degree in Computer Science, Engineering, Maths, or Science. Completion of modern NLP/LLM courses or participation in open competitions is also welcome.
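As a hedged illustration of the transformer work described above, here is a minimal Hugging Face sequence-classification inference sketch; the checkpoint name and example sentences are assumptions for demonstration only, not details from the posting.

# Minimal sketch: sequence classification with a fine-tuned transformer.
# The checkpoint and inputs are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "distilbert-base-uncased-finetuned-sst-2-english"  # assumed example checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

texts = [
    "The brakes respond well in wet conditions.",
    "The infotainment system keeps crashing.",
]
inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Map predicted class indices back to human-readable labels
predictions = logits.argmax(dim=-1)
print([model.config.id2label[p.item()] for p in predictions])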

Posted 1 week ago

Apply

3.0 years

0 Lacs

Pune/Pimpri-Chinchwad Area

On-site


Job Description: Vice President, Data Management & Quantitative Analysis I

At BNY, our culture empowers you to grow and succeed. As a leading global financial services company at the center of the world's financial system, we touch nearly 20% of the world's investible assets. Every day around the globe, our 50,000+ employees bring the power of their perspective to the table to create solutions with our clients that benefit businesses, communities, and people everywhere. We continue to be a leader in the industry, awarded as a top home for innovators and for creating an inclusive workplace. Through our unique ideas and talents, together we help make money work for the world. This is what life at BNY is all about.

We're seeking a future team member for the role of Vice President I to join our Data Management & Quantitative Analysis team. This role is located in Pune, MH or Chennai, TN (Hybrid).

In this role, you'll make an impact in the following ways: BNY Data Analytics Reporting and Transformation ("DART") has grown rapidly, and today it represents a highly motivated and engaged team of skilled professionals with expertise in financial industry practices, reporting, analytics, and regulation. The team works closely with various groups across BNY to support the firm's Capital Adequacy, Counterparty Credit, and Enterprise Risk modelling and data analytics, alongside support for the annual Comprehensive Capital Analysis and Review (CCAR) Stress Test. The Counterparty Credit Risk Data Analytics Team within DART designs and develops data-driven solutions aimed at strengthening the control framework around our risk metrics and reporting. For this team, we are looking for a Counterparty Risk Analytics Developer to support our Counterparty Credit Risk control framework.
- Develop analytical tools using SQL and Python to drive business insights
- Utilize outlier detection methodologies to identify data anomalies in the financial risk space, ensuring proactive risk management
- Analyze business requirements and translate them into practical solutions, developing data-driven controls to mitigate potential risks
- Plan and execute projects from concept to final implementation, demonstrating strong project management skills
- Present solutions to senior stakeholders, effectively communicating technical concepts and results
- Collaborate with internal and external auditors and regulators to ensure compliance with prescribed standards, maintaining the highest level of integrity and transparency

To be successful in this role, we're seeking the following:
- A Bachelor's degree in Engineering, Computer Science, Data Science, or a related discipline (Master's degree preferred)
- At least 3 years of experience in a similar role or in Python development/data analytics
- Strong proficiency in Python (including data analytics and data visualization libraries) and SQL; basic knowledge of HTML and Flask
- Ability to partner with technology and other stakeholders to ensure effective functional requirements, design, construction, and testing
- Knowledge of financial risk concepts and financial markets is strongly preferred
- Familiarity with outlier detection techniques (including autoencoders, random forests, etc.), clustering (k-means, etc.), and time series analysis (ARIMA, EWMA, GARCH, etc.) is a plus
- Practical experience working with Python (Pandas, NumPy, Matplotlib, Plotly, Dash, scikit-learn, TensorFlow, Torch, Dask, CUDA)
- Intermediate SQL skills (including querying data, joins, table creation, and basic performance optimization techniques)
- Knowledge of financial risk concepts and financial markets
- Knowledge of outlier detection techniques, clustering, and time series analysis (a minimal outlier-detection sketch follows this listing)
- Strong project management skills
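As a hedged illustration of the outlier-detection work this role describes, here is a minimal sketch using scikit-learn's IsolationForest on a pandas DataFrame; the synthetic exposure data and the 1% contamination setting are assumptions, not details from the posting.

# Minimal sketch: flag anomalous records in risk-style data with IsolationForest.
# The synthetic exposure/collateral columns and threshold are illustrative assumptions.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
exposures = pd.DataFrame({
    "counterparty_id": range(1000),
    "exposure": rng.lognormal(mean=10, sigma=1, size=1000),
    "collateral": rng.lognormal(mean=9, sigma=1, size=1000),
})

# Flag roughly 1% of records as anomalies for analyst review
model = IsolationForest(contamination=0.01, random_state=0)
exposures["anomaly"] = model.fit_predict(exposures[["exposure", "collateral"]])

# -1 marks the records the model considers outliers
print(exposures[exposures["anomaly"] == -1].head())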

Posted 1 week ago

Apply

7.0 years

0 - 0 Lacs

Coimbatore

Remote

Sr. Python Developer | 7+ years | Work Timings: 1 PM to 10 PM | Remote

Job Description
Core skill: hands-on experience with Python development.

Key Responsibilities (including, but not limited to):
This developer should be proficient in Python programming and possess a strong understanding of data structures, algorithms, and database concepts. They are adept at using relevant Python libraries and frameworks and are comfortable working in a data-driven environment. They are responsible for designing, developing, and implementing robust and scalable data parsers, data pipeline solutions, and web applications for data visualization. Their core responsibilities include:
- Data platform components: Building and maintaining efficient and reliable data pipeline components using Python and related technologies (e.g., Lambda, Airflow). This involves extracting data from various sources, transforming it into usable formats, loading it into target persistence layers, and serving it via APIs.
- Data Visualization (Dash apps): Developing interactive and user-friendly data visualization applications using Plotly Dash, including designing dashboards that effectively communicate complex data insights and enable stakeholders to make data-driven decisions (a minimal Dash sketch follows this listing).
- Data Parsing and Transformation: Implementing data parsing and transformation logic using Python libraries to clean, normalize, and restructure data from diverse formats (e.g., JSON, CSV, XML) into formats suitable for analysis and modeling.
- Collaboration: Working closely with product leadership and professional services teams to understand product and project requirements, define data solutions, and ensure quality and timely delivery.
- Software Development Best Practices: Adhering to software development best practices, including version control (Git), testing (unit, integration), and documentation, to ensure maintainable and reliable code.

Job Type: Contractual / Temporary
Contract length: 6 months
Pay: ₹70,000.00 - ₹80,000.00 per month
Benefits: Work from home
Schedule: Monday to Friday; morning, UK, or US shift
Education: Bachelor's (Preferred)
Experience: Python: 7 years (Preferred)
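As a hedged illustration of the Plotly Dash work described above, here is a minimal dashboard sketch; the sample data, dropdown, and chart are assumptions for demonstration only.

# Minimal sketch: an interactive Plotly Dash app with one dropdown and one chart.
# The sample data and layout are illustrative assumptions.
import pandas as pd
import plotly.express as px
from dash import Dash, dcc, html, Input, Output

df = pd.DataFrame({
    "month": ["Jan", "Feb", "Mar", "Apr"] * 2,
    "region": ["North"] * 4 + ["South"] * 4,
    "sales": [120, 150, 170, 160, 90, 110, 130, 125],
})

app = Dash(__name__)
app.layout = html.Div([
    html.H3("Sales by month"),
    dcc.Dropdown(sorted(df["region"].unique()), "North", id="region"),
    dcc.Graph(id="chart"),
])

@app.callback(Output("chart", "figure"), Input("region", "value"))
def update_chart(region):
    # Filter to the selected region and draw a simple line chart
    return px.line(df[df["region"] == region], x="month", y="sales")

if __name__ == "__main__":
    app.run(debug=True)

Running the script serves the app locally in debug mode; a production deployment would typically run the underlying Flask server (app.server) behind a WSGI server such as gunicorn.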

Posted 1 week ago

Apply

4.0 years

0 Lacs

Ernakulam, Kerala, India

Remote


Job Purpose
Responsible for consulting with the client to understand their AI/ML and analytics needs and for delivering AI/ML applications to the client.

Job Description / Duties & Responsibilities
▪ Design intelligent data science solutions that deliver incremental value to the end stakeholders
▪ Work closely with the data engineering team in identifying relevant data and pre-processing the data for suitable models
▪ Work closely with the business intelligence team to build BI systems and visualizations that deliver the insights of the underlying data science model in the most intuitive ways possible

Job Specification / Skills and Competencies
▪ Preferably currently located in Ernakulam / Trivandrum
▪ Willing to work in hybrid / remote mode
▪ Master's/Bachelor's in Computer Science, Statistics, or Economics
▪ At least 4 years of experience working in the Data Science field, with a passion for numbers and quantitative problems
▪ Deep understanding of Machine Learning models and algorithms
▪ Experience in analysing complex business problems, translating them into data science problems, and modelling data science solutions for the same
▪ Understanding of and experience in one or more of the following Machine Learning algorithms: Regression, Time Series, Logistic Regression, Naive Bayes, kNN, SVM, Decision Trees, Random Forest, k-Means Clustering, etc.
▪ NLP, Text Mining, LLMs (GPTs)
▪ Deep Learning, Reinforcement Learning algorithms
▪ Understanding of and experience in one or more of the machine learning frameworks - TensorFlow, Caffe, Torch, etc.
▪ Understanding of and experience in building machine learning models using various packages in one or more of the programming languages - Python / R
▪ Knowledge of and experience with SQL, relational databases, NoSQL databases, and data warehouse concepts
▪ Understanding of AWS/Azure cloud architecture
▪ Understanding of the deployment architectures of AI/ML models (Flask, Azure Functions, AWS Lambda)
▪ Knowledge of any BI and visualization tools is an add-on (Tableau/PowerBI/Qlik/Plotly etc.)
▪ Adherence to the Information Security Management policies and procedures

Soft Skills Required
▪ Must be a good team player with good communication skills
▪ Must have good presentation skills
▪ Must be a proactive problem solver and a self-driven leader
▪ Manage and nurture a team of data scientists
▪ Desire for numbers and patterns

Posted 1 week ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site


At BNY, our culture empowers you to grow and succeed. As a leading global financial services company at the center of the world's financial system, we touch nearly 20% of the world's investible assets. Every day around the globe, our 50,000+ employees bring the power of their perspective to the table to create solutions with our clients that benefit businesses, communities, and people everywhere. We continue to be a leader in the industry, awarded as a top home for innovators and for creating an inclusive workplace. Through our unique ideas and talents, together we help make money work for the world. This is what life at BNY is all about.

We're seeking a future team member for the role of Vice President I to join our Data Management & Quantitative Analysis team. This role is located in Pune, MH or Chennai, TN (Hybrid).

In this role, you'll make an impact in the following ways: BNY Data Analytics Reporting and Transformation ("DART") has grown rapidly, and today it represents a highly motivated and engaged team of skilled professionals with expertise in financial industry practices, reporting, analytics, and regulation. The team works closely with various groups across BNY to support the firm's Capital Adequacy, Counterparty Credit, and Enterprise Risk modelling and data analytics, alongside support for the annual Comprehensive Capital Analysis and Review (CCAR) Stress Test. The Counterparty Credit Risk Data Analytics Team within DART designs and develops data-driven solutions aimed at strengthening the control framework around our risk metrics and reporting. For this team, we are looking for a Counterparty Risk Analytics Developer to support our Counterparty Credit Risk control framework.
- Develop analytical tools using SQL and Python to drive business insights
- Utilize outlier detection methodologies to identify data anomalies in the financial risk space, ensuring proactive risk management
- Analyze business requirements and translate them into practical solutions, developing data-driven controls to mitigate potential risks
- Plan and execute projects from concept to final implementation, demonstrating strong project management skills
- Present solutions to senior stakeholders, effectively communicating technical concepts and results
- Collaborate with internal and external auditors and regulators to ensure compliance with prescribed standards, maintaining the highest level of integrity and transparency

To be successful in this role, we're seeking the following:
- A Bachelor's degree in Engineering, Computer Science, Data Science, or a related discipline (Master's degree preferred)
- At least 3 years of experience in a similar role or in Python development/data analytics
- Strong proficiency in Python (including data analytics and data visualization libraries) and SQL; basic knowledge of HTML and Flask
- Ability to partner with technology and other stakeholders to ensure effective functional requirements, design, construction, and testing
- Knowledge of financial risk concepts and financial markets is strongly preferred
- Familiarity with outlier detection techniques (including autoencoders, random forests, etc.), clustering (k-means, etc.), and time series analysis (ARIMA, EWMA, GARCH, etc.) is a plus
- Practical experience working with Python (Pandas, NumPy, Matplotlib, Plotly, Dash, scikit-learn, TensorFlow, Torch, Dask, CUDA)
- Intermediate SQL skills (including querying data, joins, table creation, and basic performance optimization techniques)
- Knowledge of financial risk concepts and financial markets
- Knowledge of outlier detection techniques, clustering, and time series analysis
- Strong project management skills

Posted 1 week ago

Apply

8.0 - 12.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Job Title: Head - Python Engineering

Job Summary: We are looking for a skilled Python, AI/ML Developer with 8 to 12 years of experience to design, develop, and maintain high-quality back-end systems and applications. The ideal candidate will have expertise in Python and related frameworks, with a focus on building scalable, secure, and efficient software solutions. This role requires a strong problem-solving mindset, collaboration with cross-functional teams, and a commitment to delivering innovative solutions that meet business objectives.

Responsibilities

Application and Back-End Development:
- Design, implement, and maintain back-end systems and APIs using Python frameworks such as Django, Flask, or FastAPI, focusing on scalability, security, and efficiency (a minimal FastAPI sketch follows this listing).
- Build and integrate scalable RESTful APIs, ensuring seamless interaction between front-end systems and back-end services.
- Write modular, reusable, and testable code following Python's PEP 8 coding standards and industry best practices.
- Develop and optimize robust database schemas for relational and non-relational databases (e.g., PostgreSQL, MySQL, MongoDB), ensuring efficient data storage and retrieval.
- Leverage cloud platforms like AWS, Azure, or Google Cloud for deploying scalable back-end solutions.
- Implement caching mechanisms using tools like Redis or Memcached to optimize performance and reduce latency.

AI/ML Development:
- Build, train, and deploy machine learning (ML) models for real-world applications, such as predictive analytics, anomaly detection, natural language processing (NLP), recommendation systems, and computer vision.
- Work with popular machine learning and AI libraries/frameworks, including TensorFlow, PyTorch, Keras, and scikit-learn, to design custom models tailored to business needs.
- Process, clean, and analyze large datasets using Python tools such as Pandas, NumPy, and PySpark to enable efficient data preparation and feature engineering.
- Develop and maintain pipelines for data preprocessing, model training, validation, and deployment using tools like MLflow, Apache Airflow, or Kubeflow.
- Deploy AI/ML models into production environments and expose them as RESTful or GraphQL APIs for integration with other services.
- Optimize machine learning models to reduce computational costs and ensure smooth operation in production systems.
- Collaborate with data scientists and analysts to validate models, assess their performance, and ensure their alignment with business objectives.
- Implement model monitoring and lifecycle management to maintain accuracy over time, addressing data drift and retraining models as necessary.
- Experiment with cutting-edge AI techniques such as deep learning, reinforcement learning, and generative models to identify innovative solutions for complex challenges.
- Ensure ethical AI practices, including transparency, bias mitigation, and fairness in deployed models.

Performance Optimization and Debugging:
- Identify and resolve performance bottlenecks in applications and APIs to enhance efficiency.
- Use profiling tools to debug and optimize code for memory and speed improvements.
- Implement caching mechanisms to reduce latency and improve application responsiveness.

Testing, Deployment, and Maintenance:
- Write and maintain unit tests, integration tests, and end-to-end tests using Pytest, Unittest, or Nose.
- Collaborate on setting up CI/CD pipelines to automate testing, building, and deployment processes.
- Deploy and manage applications in production environments with a focus on security, monitoring, and reliability.
- Monitor and troubleshoot live systems, ensuring uptime and responsiveness.

Collaboration and Teamwork:
- Work closely with front-end developers, designers, and product managers to implement new features and resolve issues.
- Participate in Agile ceremonies, including sprint planning, stand-ups, and retrospectives, to ensure smooth project delivery.
- Provide mentorship and technical guidance to junior developers, promoting best practices and continuous improvement.

Required Skills and Qualifications

Technical Expertise:
- Strong proficiency in Python and its core libraries, with hands-on experience in frameworks such as Django, Flask, or FastAPI.
- Solid understanding of RESTful API development, integration, and optimization.
- Experience working with relational and non-relational databases (e.g., PostgreSQL, MySQL, MongoDB).
- Familiarity with containerization tools like Docker and orchestration platforms like Kubernetes.
- Expertise in using Git for version control and collaborating in distributed teams.
- Knowledge of CI/CD pipelines and tools like Jenkins, GitHub Actions, or CircleCI.
- Strong understanding of software development principles, including OOP, design patterns, and MVC architecture.

Preferred Skills:
- Experience with asynchronous programming using libraries like asyncio, Celery, or RabbitMQ.
- Knowledge of data visualization tools (e.g., Matplotlib, Seaborn, Plotly) for generating insights.
- Exposure to machine learning frameworks (e.g., TensorFlow, PyTorch, scikit-learn) is a plus.
- Familiarity with big data frameworks like Apache Spark or Hadoop.
- Experience with serverless architecture using AWS Lambda, Azure Functions, or Google Cloud Run.

Soft Skills:
- Strong problem-solving abilities with a keen eye for detail and quality.
- Excellent communication skills to effectively collaborate with cross-functional teams.
- Adaptability to changing project requirements and emerging technologies.
- Self-motivated with a passion for continuous learning and innovation.

Education: Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.
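As a hedged illustration of exposing an ML model through one of the Python API frameworks named above, here is a minimal FastAPI sketch; the iris model, feature names, and endpoint path are assumptions for demonstration only.

# Minimal sketch: serve a scikit-learn model behind a FastAPI endpoint.
# The model, feature names, and route are illustrative assumptions; a real
# service would load a persisted, versioned model artifact instead.
from fastapi import FastAPI
from pydantic import BaseModel
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

app = FastAPI()

# Train a small stand-in model at startup
iris = load_iris()
model = LogisticRegression(max_iter=1000).fit(iris.data, iris.target)

class Features(BaseModel):
    sepal_length: float
    sepal_width: float
    petal_length: float
    petal_width: float

@app.post("/predict")
def predict(features: Features) -> dict:
    row = [[features.sepal_length, features.sepal_width,
            features.petal_length, features.petal_width]]
    prediction = int(model.predict(row)[0])
    return {"class_index": prediction, "class_name": str(iris.target_names[prediction])}

Assuming the file is saved as main.py, it can be served locally with "uvicorn main:app --reload".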

Posted 2 weeks ago

Apply

1.0 years

0 Lacs

Mumbai, Maharashtra

Remote


Established in 1806 as a small soap and candle business in New York City, Colgate-Palmolive is now a truly global company with products sold in over 200 countries and territories under such internationally recognized brand names as Colgate, Palmolive, Softsoap, Irish Spring, Protex, Sorriso, Kolynos, elmex, Tom's of Maine, Sanex, Ajax, Axion, Soupline, Haci Sakir, Suavitel, PCA SKIN, EltaMD, Filorga and Hello, as well as Hill's Science Diet and Hill's Prescription Diet. Colgate-Palmolive is a leading consumer products company that serves hundreds of millions of consumers worldwide with brands and products across four core businesses – Oral Care, Personal Care, Home Care and Pet Nutrition. We are committed to offering products that make lives healthier and more enjoyable, and programs that enrich communities around the world. Every day millions of people trust our products to care for themselves and the ones they love. Our goal is to use our technology to create products that will continue to improve the quality of life for our consumers wherever they live. A career at Colgate-Palmolive is an excellent opportunity if you seek a global experience, constant challenge, and development opportunities in an environment that respects work/life effectiveness.

Job Title: Assistant Manager, Business Analytics
Travel Required?: Travel - up to 10% of time
Date: Jun 3, 2025
Remote | Relocation Assistance Offered Within Country
Job Number #165136 - Mumbai, Maharashtra, India

Who We Are
Colgate-Palmolive Company is a global consumer products company operating in over 200 countries specializing in Oral Care, Personal Care, Home Care, Skin Care, and Pet Nutrition. Our products are trusted in more households than any other brand in the world, making us a household name! Join Colgate-Palmolive, a caring, innovative growth company reimagining a healthier future for people, their pets, and our planet. Guided by our core values—Caring, Inclusive, and Courageous—we foster a culture that inspires our people to achieve common goals. Together, let's build a brighter, healthier future for all.

About Colgate-Palmolive
Do you want to come to work with a smile and leave with one as well? In between those smiles, your day consists of working in a global organization, continually learning and collaborating, having stimulating discussions, and making impactful contributions! If this is how you see your career, Colgate is the place to be! Our diligent household brands, dedicated employees, and sustainability commitments make us a company passionate about building a future to smile about for our employees, consumers, and surrounding communities. The pride in our brand fuels a workplace that encourages creative thinking, champions experimentation, and promotes authenticity, which has contributed to our enduring success. If you want to work for a company that lives by its values, then give your career a reason to smile...every single day.

The Experience
In today's dynamic analytical / technological environment, it is an exciting time to be a part of the CBS Analytics team at Colgate. Our highly insight-driven and innovative team is dedicated to driving growth for Colgate-Palmolive in this constantly evolving landscape. What role will you play as a member of Colgate's Analytics team? The CBS Analytics vertical in Colgate-Palmolive is passionate about working on cases which have big $ impact and scope for scalability, with a clear focus on addressing the business questions with recommended actions. The Data Scientist position would lead CBS Analytics projects within the Analytics Continuum, conceptualizing and building predictive modeling, simulation, and optimization solutions for clear $ objectives and measured value. The Data Scientist would work on a range of projects across Revenue Growth Management, Market Efficiency, Forecasting, etc., and needs to handle relationships independently with the Business to drive projects such as Price Promotion, Marketing Mix, and Forecasting.

Who are you…
You are a function expert - Leads Analytics projects within the Analytics Continuum. Conceptualizes and builds predictive modeling, simulations, and optimization solutions to address business questions or use cases. Applies ML and AI to analytics algorithms to build inferential and predictive models, allowing scalable solutions to be deployed across the business. Conducts model validations and continuous improvement of the algorithms, capabilities, or solutions built.
You connect the dots - Drive insights from internal and external data for the business. Assemble large, sophisticated data sets that meet functional / non-functional business requirements. Build data and visualization tools for business analytics to assist in decision making.
You are a collaborator - Work closely with Division Analytics team leads. Work with data and analytics specialists across functions to drive data solutions.
You are an innovator - Identify, design, and implement new algorithms and process improvements while continuously automating processes, optimizing data delivery, and re-designing infrastructure for greater scalability.

Qualifications
What you'll need: Graduation/Masters in Statistics / Applied Mathematics / Computer Science. 1+ years of experience in building data models and driving insights. Hands-on experience developing statistical models such as regression, ridge regression, lasso, random forest, SVM, gradient boosting, logistic regression, K-Means clustering, hierarchical clustering, etc. Hands-on experience with coding languages: Python (mandatory), R, SQL, PySpark, SparkR. Knowledge of using GitHub and Airflow for coding and model executions. Handling, redefining, and developing statistical models for RGM/Pricing and/or Marketing Efficiency and communicating insights decks to the business. Validated understanding of tools like Tableau, Domo, Power BI, and web-app frameworks using plotly, pydash, and SQL. Experience working directly with business teams (client-facing role), supporting and working with multi-functional teams in a dynamic environment.
What you'll need…(Preferred): Experience with third-party data, i.e., syndicated market data, Point of Sales, etc. Shown understanding of the consumer packaged goods industry. Knowledge of machine learning techniques (clustering, decision tree learning, artificial neural networks, etc.) and their real-world advantages/drawbacks. Experience visualizing/communicating data for partners using Tableau, DOMO, pydash, plotly, d3.js, ggplot2, R Shiny, etc. Willingness and ability to experiment with new tools and techniques. Good facilitation and project management skills. Ability to maintain personal composure and thoughtfully handle difficult situations.
Knowledge of Google products (BigQuery, data studio, colab, Google Slides, Google Sheets etc) Our Commitment to Diversity, Equity & Inclusion Achieving our purpose starts with our people — ensuring our workforce represents the people and communities we serve —and creating an environment where our people feel they belong; where we can be our authentic selves, feel treated with respect and have the support of leadership to impact the business in a meaningful way. Equal Opportunity Employer Colgate is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity, sexual orientation, national origin, ethnicity, age, disability, marital status, veteran status (United States positions), or any other characteristic protected by law. Reasonable accommodation during the application process is available for persons with disabilities. Please complete this request form should you require accommodation. #LI-Remote

Posted 2 weeks ago

Apply

1.0 years

0 Lacs

Mumbai, Maharashtra

Remote


Established in 1806 as a small soap and candle business in New York City, Colgate-Palmolive is now a truly global company with products sold in over 200 countries and territories under such internationally recognized brand names as Colgate, Palmolive, Softsoap, Irish Spring, Protex, Sorriso, Kolynos, elmex, Tom's of Maine, Sanex, Ajax, Axion, Soupline, Haci Sakir, Suavitel, PCA SKIN, EltaMD, Filorga and Hello, as well as Hill's Science Diet and Hill's Prescription Diet. Colgate-Palmolive is a leading consumer products company that serves hundreds of millions of consumers worldwide with brands and products across four core businesses – Oral Care, Personal Care, Home Care and Pet Nutrition. We are committed to offering products that make lives healthier and more enjoyable, and programs that enrich communities around the world. Every day millions of people trust our products to care for themselves and the ones they love. Our goal is to use our technology to create products that will continue to improve the quality of life for our consumers wherever they live. A career at Colgate-Palmolive is an excellent opportunity if you seek a global experience, constant challenge, and development opportunities in an environment that respects work/life effectiveness.

Job Title: Sr. Specialist, Decision Analytics
Travel Required?: Travel - up to 10% of time
Date: Jun 3, 2025
Remote | Relocation Assistance Offered Within Country
Job Number #165135 - Mumbai, Maharashtra, India

Who We Are
Colgate-Palmolive Company is a global consumer products company operating in over 200 countries specializing in Oral Care, Personal Care, Home Care, Skin Care, and Pet Nutrition. Our products are trusted in more households than any other brand in the world, making us a household name! Join Colgate-Palmolive, a caring, innovative growth company reimagining a healthier future for people, their pets, and our planet. Guided by our core values—Caring, Inclusive, and Courageous—we foster a culture that inspires our people to achieve common goals. Together, let's build a brighter, healthier future for all.

About Colgate-Palmolive
Do you want to come to work with a smile and leave with one as well? In between those smiles, your day consists of working in a global organization, continually learning and collaborating, having stimulating discussions, and making impactful contributions! If this is how you see your career, Colgate is the place to be! Our diligent household brands, dedicated employees, and sustainability commitments make us a company passionate about building a future to smile about for our employees, consumers, and surrounding communities. The pride in our brand fuels a workplace that encourages creative thinking, champions experimentation, and promotes authenticity, which has contributed to our enduring success. If you want to work for a company that lives by its values, then give your career a reason to smile...every single day.

The Experience
In today's dynamic analytical / technological environment, it is an exciting time to be a part of the CBS Analytics team at Colgate. Our highly insight-driven and innovative team is dedicated to driving growth for Colgate-Palmolive in this constantly evolving landscape. What role will you play as a member of Colgate's Analytics team? The CBS Analytics vertical in Colgate-Palmolive is passionate about working on cases which have big $ impact and scope for scalability, with a clear focus on addressing the business questions with recommended actions. The Data Scientist position would lead CBS Analytics projects within the Analytics Continuum, conceptualizing and building predictive modeling, simulation, and optimization solutions for clear $ objectives and measured value. The Data Scientist would work on a range of projects across Revenue Growth Management, Market Efficiency, Forecasting, etc., and needs to manage relationships independently with the Business to drive projects such as Price Promotion, Marketing Mix, and Forecasting.

Who are you…
You are a function expert - Leads Analytics projects within the Analytics Continuum. Conceptualizes and builds predictive modeling, simulations, and optimization solutions to address business questions or use cases. Applies ML and AI to analytics algorithms to build inferential and predictive models, allowing scalable solutions to be deployed across the business. Conducts model validations and continuous improvement of the algorithms, capabilities, or solutions built.
You connect the dots - Drive insights from internal and external data for the business. Assemble large, sophisticated data sets that meet functional / non-functional business requirements. Build data and visualization tools for business analytics to assist in decision making.
You are a collaborator - Work closely with Division Analytics team leads. Work with data and analytics specialists across functions to drive data solutions.
You are an innovator - Identify, design, and implement new algorithms and process improvements while continuously automating processes, optimizing data delivery, and re-designing infrastructure for greater scalability.

Qualifications
What you'll need: Graduation/Masters in Statistics / Applied Mathematics / Computer Science. 1+ years of experience in building data models and driving insights. Hands-on experience developing statistical models such as regression, ridge regression, lasso, random forest, SVM, gradient boosting, logistic regression, K-Means clustering, hierarchical clustering, etc. Hands-on experience with coding languages: Python (mandatory), R, SQL, PySpark, SparkR. Knowledge of using GitHub and Airflow for coding and model executions. Leading, redefining, and developing statistical models for RGM/Pricing and/or Marketing Efficiency and communicating insights decks to the business. Confirmed understanding of tools like Tableau, Domo, Power BI, and web-app frameworks using plotly, pydash, and SQL. Experience working directly with business teams (client-facing role), supporting and working with multi-functional teams in a dynamic environment.
What you'll need…(Preferred): Experience with third-party data, i.e., syndicated market data, Point of Sales, etc. Proven understanding of the consumer packaged goods industry. Knowledge of machine learning techniques (clustering, decision tree learning, artificial neural networks, etc.) and their real-world advantages/drawbacks. Experience visualizing/presenting data for partners using Tableau, DOMO, pydash, plotly, d3.js, ggplot2, R Shiny, etc. Willingness and ability to experiment with new tools and techniques. Good facilitation and project management skills. Ability to maintain personal composure and thoughtfully handle difficult situations.
Knowledge of Google products (BigQuery, data studio, colab, Google Slides, Google Sheets etc) Our Commitment to Diversity, Equity & Inclusion Achieving our purpose starts with our people — ensuring our workforce represents the people and communities we serve —and creating an environment where our people feel they belong; where we can be our authentic selves, feel treated with respect and have the support of leadership to impact the business in a meaningful way. Equal Opportunity Employer Colgate is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity, sexual orientation, national origin, ethnicity, age, disability, marital status, veteran status (United States positions), or any other characteristic protected by law. Reasonable accommodation during the application process is available for persons with disabilities. Please complete this request form should you require accommodation. #LI-Remote

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Title: Software Engineer
Location: Multiple Locations
Job Term: Full-Time

The Opportunity: At Picarro, Software Engineering focuses on developing and deploying industry vertical applications to clients in the Scientific and Energy communities. This specific role is focused on the suite of solutions, such as greenhouse gas emissions quantification, pipe replacement, and advanced leak detection, used by our gas utility and pipeline customers. The majority have a web-based user interface, but the backend utilizes geoprocessing, data, and ML services. While the products are designed to meet the needs of the industry, they sit within Picarro's larger analytical suite / distributed framework, so a wider collection of skills is desired. The software engineer participates in the design, programming, testing, documentation, and implementation of applications and related processes/systems. You may also be required to identify and evaluate development options, assess future needs for key technologies and techniques, and develop plans for adoption. This position reports to the GIS Software Development Manager and will be on site, based out of our Bangalore, India office.

Key Responsibilities:
- Work directly with product stakeholders and product management to understand product use cases and synthesize business requirements.
- Design, develop, and deploy high-performance multi-tenant dashboard applications using Dash Enterprise.
- Write production-quality code that creates responsive web applications.
- Handle multiple technical requests and project deadlines simultaneously.
- Collaborate daily with a global team of software engineers, data scientists, analysts, and product managers using a variety of online communication channels.
- Apply software development best practices including version control (Git), code review, and testing.
- Document technical details of work using Jira and Confluence.

Desired Skills and Experience:
- 8+ years of overall software development experience.
- 5+ years of experience developing responsive web applications using HTML, CSS, and JavaScript.
- 3+ years of Python experience, specifically in an object-oriented structure.
- Experience with common data analytics and visualization libraries such as NumPy, Pandas, json, SQLAlchemy, Plotly, and/or Matplotlib.
- Experience with geospatial libraries such as Shapely, GeoPandas, GDAL/OGR, and PyProj is a plus (a minimal sketch follows this listing).
- 1+ years with SQL for analytical use cases.
- 1+ years of experience with a modern web UI library, like React, Vue, Angular, or Svelte.
- 1+ years of experience developing web applications using Python.
- 1+ years of experience with at least one common data visualization tool such as Tableau, PowerBI, Qlik, or Dash Enterprise.
- 1+ years of cloud development (e.g. AWS, Azure, Google Cloud) and software container technologies (e.g. Docker, Kubernetes).
- Familiar with Agile methodologies and processes.
- Familiar with gas distribution company processes and/or pipeline and distribution network data.
- Bachelor's or master's degree in computer science, engineering, GIS, geography, or a related field.

About Picarro: Picarro, Inc. is the world's leading producer of greenhouse gas and optical stable isotope instruments, which are used in a wide variety of scientific and industrial applications, including atmospheric science, air quality, greenhouse gas measurements, gas leak detection, food safety, hydrology, ecology, and more.
The company's products are all designed and manufactured at Picarro's Santa Clara, California headquarters and exported to countries worldwide. Picarro's products are based on dozens of patents related to cavity ring-down spectroscopy (CRDS) technology. Picarro's solutions are unparalleled in their precision, ease of use, portability, and reliability. Honors awarded the company include the World Economic Forum Technology Innovation Pioneer, IHS CERA Energy Innovation Pioneer, the U.S. Department of Energy Small Business of the Year, the TiE50 Winner, and the Red Herring Global 100 Winner. Key investors include Benchmark Capital Partners, Greylock Management Corporation, Duff, Ackerman & Goodrich, Stanford University, Focus Ventures, Mingxin China Growth Ltd., NTT Finance, and Weston Presidio Capital. All qualified applicants will receive consideration for employment without regard to race, sex, color, religion, national origin, protected veteran status, gender identity, sexual orientation, or disability. Posted positions are not open to third-party recruiters/agencies, and unsolicited resume submissions will be considered a free referral. If you are an individual with a disability and require a reasonable accommodation to complete any part of the application process, or are limited in the ability or unable to access or use this online application process and need an alternative method for applying, you may contact Picarro, Inc. at disabilityassistance@picarro.com for assistance.
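As a hedged illustration of the geospatial skills listed above (Shapely/GeoPandas), here is a minimal sketch that flags points near a pipe segment; the coordinates, CRS choice, and 500 m buffer distance are assumptions for demonstration only, not details from the posting.

# Minimal sketch: buffer a pipeline segment and flag nearby leak indications.
# Coordinates, CRS, and distances are illustrative assumptions.
import geopandas as gpd
from shapely.geometry import Point, LineString

# A toy pipeline segment and two leak indications (lon/lat, WGS84)
pipes = gpd.GeoDataFrame(
    {"pipe_id": ["P1"]},
    geometry=[LineString([(-121.98, 37.35), (-121.96, 37.36)])],
    crs="EPSG:4326",
)
leaks = gpd.GeoDataFrame(
    {"leak_id": ["L1", "L2"]},
    geometry=[Point(-121.97, 37.355), Point(-121.90, 37.40)],
    crs="EPSG:4326",
)

# Project to a metric CRS so buffers are in meters, then flag leaks
# within 500 m of the pipe segment.
pipes_m = pipes.to_crs(epsg=32610)   # UTM zone 10N (assumed for these coordinates)
leaks_m = leaks.to_crs(epsg=32610)
buffer = pipes_m.geometry.buffer(500).iloc[0]
leaks_m["near_pipe"] = leaks_m.within(buffer)
print(leaks_m[["leak_id", "near_pipe"]])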

Posted 2 weeks ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site


TechnipFMC is committed to driving real change in the energy industry. Our ambition is to build a sustainable future through relentless innovation and global collaboration – and we want you to be part of it. You'll be joining a culture that values curiosity, expertise, and ideas as well as diversity, inclusion, and authenticity. Bring your unique energy to our team of more than 20,000 people worldwide, and discover a rewarding, fulfilling, and varied career that you can take anywhere you want to go.

Job Purpose
We are seeking a skilled Python Developer to join our team and help us develop applications and tooling to streamline in-house engineering design processes, with a continuous concern for quality, targets, and customer satisfaction.

Job Description
- Write clean and maintainable Python code using PEP guidelines
- Build and maintain software packages for scientific computing
- Build and maintain command line interfaces (CLIs) (a minimal sketch follows this listing)
- Build and maintain web applications and dashboards
- Design and implement data analysis pipelines
- Create and maintain database schemas and queries
- Optimise code performance and scalability
- Develop and maintain automated tests to validate software
- Contribute and adhere to team software development practices, e.g., Agile product management, source code version control, continuous integration/deployment (CI/CD)
- Build and maintain machine learning models (appreciated, but not a prerequisite)

Technical Stack
Languages: Python, SQL
Core libraries: SciPy, Pandas, NumPy
Web frameworks: Streamlit, Dash, Flask
Visualisation: Matplotlib, Seaborn, Plotly
Automated testing: pytest
CLI development: Click, Argparse
Source code version control: Git
Agile product management: Azure DevOps, GitHub
CI/CD: Azure Pipelines, GitHub Actions, Docker
Database systems: PostgreSQL, Snowflake, SQLite, HDF5
Performance: Numba, Dask
Machine Learning: Scikit-learn, TensorFlow, PyTorch (desired)

You Are Meant For This Job If
- You hold a Bachelor's degree in computer science or software engineering (Master's degree is a plus)
- You have a strong technical basis in engineering
- You have presentation skills
- You have good organizational and problem-solving skills
- You are service/customer oriented
- You are able to work in a team-oriented environment
- You have a good command of English

Skills
Spring Boot, Data Modelling, CI/CD, Internet of Things (IoT), Jira/Confluence, React/Angular, SAFe, Scrum, Kanban, Collaboration, SQL, Bash/Shell/PowerShell, AWS S3, AWS Lambda, Cypress/Playwright, Material Design, Empirical Thinking, Agility, GitHub, HTML/CSS, JavaScript/TypeScript, GraphQL, Continuous Learning, Cybersecurity, Computer Programming, Java/Kotlin, Test Driven Development

Being a global leader in the energy industry requires an inclusive and diverse environment. TechnipFMC promotes diversity, equity, and inclusion by ensuring equal opportunities to all ages, races, ethnicities, religions, sexual orientations, gender expressions, disabilities, or all other pluralities. We celebrate who you are and what you bring. Every voice matters and we encourage you to add to our culture. TechnipFMC respects the rights and dignity of those it works with and promotes adherence to internationally recognized human rights principles for those in its value chain.
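As a hedged illustration of the CLI work in this stack, here is a minimal Click sketch; the command name, option, and CSV column are assumptions for demonstration only.

# Minimal sketch: a Click command-line tool that summarises one column of a CSV.
# The command, option, and default column name are illustrative assumptions.
import click
import pandas as pd

@click.command()
@click.argument("csv_path", type=click.Path(exists=True))
@click.option("--column", default="pressure", help="Numeric column to summarise.")
def summarise(csv_path, column):
    """Print basic statistics for one column of a CSV file."""
    df = pd.read_csv(csv_path)
    click.echo(df[column].describe().to_string())

if __name__ == "__main__":
    summarise()

A matching automated test could exercise the command with click.testing.CliRunner().invoke(summarise, [...]) inside a pytest function.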

Posted 2 weeks ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site


TCS is conducting a virtual drive on Friday, 6th June 2025.

Position: AI Engineer
Location: Pune
Years of Experience: 5-9 years (accurate)
Notice Period: Immediate joiners or 0-30 days NP
(Note: Candidates whose experience falls below this range will not be considered)

Responsibilities:
- Strong understanding of machine learning techniques, including deep learning, reinforcement learning, and predictive modeling.
- Demonstrable experience in AI, ML, NLP, or related fields, with a robust portfolio showcasing practical application of these technologies.
- Proficient in Python and familiar with frameworks for AI development, such as TensorFlow, PyTorch, Keras, scikit-learn, NLTK, LangChain (a minimal PyTorch sketch follows this listing).
- Experience in utilizing Python libraries such as NumPy, Pandas, Matplotlib, Plotly.
- Proficient in programming languages such as Python.
- Experience with ML, deep learning, TensorFlow, Python, NLP.
- Experience in program leadership, governance, and change enablement.
- Knowledge of basic algorithms, object-oriented and functional design principles, and best-practice patterns.
- Experience in REST API development, NoSQL database design, and RDBMS design and optimizations.
- Familiarity with modern data platforms and cloud services like AWS and Azure for deploying scalable AI solutions.

Kindly attach your updated CV.

Thanks & Regards
Shilpa Silonee
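As a hedged illustration of the PyTorch proficiency requested above, here is a minimal sketch that defines a small network and runs one training step on synthetic data; the layer sizes and task are assumptions for demonstration only.

# Minimal sketch: a small PyTorch classifier and a single training step.
# The architecture and synthetic batch are illustrative assumptions.
import torch
from torch import nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One batch of synthetic features and labels
X = torch.randn(64, 10)
y = torch.randint(0, 2, (64,))

optimizer.zero_grad()
loss = loss_fn(model(X), y)
loss.backward()
optimizer.step()
print(f"training loss after one step: {loss.item():.4f}")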

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Job Purpose
We are looking for an energetic, self-starting software developer to join our product development practice as a L04 - Senior Software Engineer (SSE) / L05 - Staff Engineer (SE). You will get to work with some of the best and most knowledgeable tech talent in the financial world, and you will build next-generation digital services and platforms that will lead the transformation goals for our customers. You will work closely with the engineering, UX, product, and test automation communities, as part of the agile team, to lead product design and development and to help the Digital Service Product Owner deliver and maximize value. You will drive engineering and architecture best practices for secure code (and encourage others to write it) and dev-ops processes, while getting opportunities to learn new business domains and topics, work with industry SMEs, and pick up new technology and behavioral skills.

Key Responsibilities

As a Full-stack Developer
- 6+ years' professional experience in enterprise software design and development in an N-tier architecture environment. Understanding of the 12-factor app framework is highly desirable.
- Must have experience building web applications using .NET Core 3.x (.NET 5.0 is better), Web API, HTML5, React JS.
- Must have experience with tools such as Jira, GitHub, Confluence (or other wiki), SonarQube (or similar), OWASP ZAP (or similar), and Snyk (or similar).
- Experience with data visualization libraries/frameworks like D3.js, Plotly, HighCharts, etc. will be an advantage.
- Must have experience with SOA and Web Service standards (REST & JSON / SOAP & WSDL / WS-I Basic Profile), and IIS.
- Understand the business requirements from the product owner(s).
- Design and implement the system from scratch and build enhancements and feature requests using modern application frameworks with C# and React, .NET Core, Web API, AWS services, etc.
- Participate in both development and maintenance tasks.
- Independently troubleshoot difficult and complex issues on production and other environments.

As a Technical Lead in the pod
- Must have experience working in an automated CI/CD environment and with fast-moving teams using Scrum/Agile; experience with AWS and other cloud providers is highly desirable.
- Must have extensive experience with object-oriented design principles. Ability to articulate the pros and cons of design/implementation options.
- Participate in design reviews and peer code reviews.
- Work collaboratively in a global setting; be eager to learn new technologies.
- Responsible for extending and maintaining the existing codebase with a focus on quality, re-usability, maintainability, and consistency.
- Coach teams on best practices and architecture design.

As a member of the Engineering community
- Must have extensive experience with object-oriented design principles. Ability to articulate the pros and cons of design/implementation options.
- Good understanding and knowledge of areas including, but not limited to, requirement gathering, designing, development, testing, maintenance, quality control, etc.
- Stay up to date on the latest developments in technology.
- Learn and share learnings with the community.

Behavioral Competencies
- A self-starter, excellent planner and executor and, above all, a good team player
- Excellent communication and interpersonal skills are a must
- Must have organizational skills, including multi-task capability, priority setting, and meeting deadlines
- Ability to build collaborative relationships and effectively leverage networks to mobilize resources
- A liking for and initiative to learn the business domain is highly desirable
- Likes a dynamic and constantly evolving environment and requirements

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

Greater Bengaluru Area

On-site


Job Title: Senior Data Scientist (SDS 2)
Experience: 4+ years
Location: Bengaluru (Hybrid)
Company Overview: Akaike Technologies is a dynamic and innovative AI-driven company dedicated to building impactful solutions across various domains. Our mission is to empower businesses by harnessing the power of data and AI to drive growth, efficiency, and value. We foster a culture of collaboration, creativity, and continuous learning, where every team member is encouraged to take initiative and contribute to groundbreaking projects. We value diversity, integrity, and a strong commitment to excellence in all our endeavors.
Job Description: We are seeking an experienced and highly skilled Senior Data Scientist to join our team in Bengaluru. This role focuses on driving innovative solutions using cutting-edge classical Machine Learning, Deep Learning, and Generative AI. The ideal candidate will possess a blend of deep technical expertise, strong business acumen, effective communication skills, and a sense of ownership. During the interview, we look for a proven track record in designing, developing, and deploying scalable ML/DL solutions in a fast-paced, collaborative environment.
Key Responsibilities:
ML/DL Solution Development & Deployment: Design, implement, and deploy end-to-end ML/DL and GenAI solutions, writing modular, scalable, and production-ready code. Develop and implement scalable deployment pipelines using Docker and AWS services (ECR, Lambda, Step Functions). Design and implement custom models and loss functions to address data nuances and specific labeling challenges. Ability to model different marketing scenarios across a product life cycle (targeting, segmenting, messaging, content recommendation, budget optimisation, customer scoring, risk and churn) and under data limitations (sparse or incomplete labels, single-class learning).
Large-Scale Data Handling & Processing: Efficiently handle and model billions of data points using multi-cluster data processing frameworks (e.g., Spark SQL, PySpark).
Generative AI & Large Language Models (LLMs): Leverage an in-depth understanding of transformer architectures and the principles of Large and Small Language Models. Practical experience in building LLM-ready data management layers for large-scale structured and unstructured data. Apply a foundational understanding of LLM agents, multi-agent systems (e.g., Agent-Critique, ReAct, agent collaboration), advanced prompting techniques, LLM evaluation methodologies, confidence grading, and human-in-the-loop systems.
Experimentation, Analysis & System Design: Design and conduct experiments to test hypotheses and perform Exploratory Data Analysis (EDA) aligned with business requirements. Apply system design concepts and engineering principles to create low-latency solutions capable of serving simultaneous users in real time.
Collaboration, Communication & Mentorship: Create clear solution outlines and effectively communicate complex technical concepts to stakeholders and team members. Mentor junior team members, providing guidance and bridging the gap between business problems and data science solutions. Work closely with cross-functional teams and clients to deliver impactful solutions.
Prototyping & Impact Measurement: Comfortable with rapid prototyping and meeting high productivity expectations in a fast-paced development environment. Set up measurement pipelines to study the impact of solutions in different market scenarios.
Must-Have Skills:
Core Machine Learning & Deep Learning: In-depth knowledge of Artificial Neural Networks (ANN); 1D, 2D, and 3D Convolutional Neural Networks (ConvNets); LSTMs; and Transformer models. Expertise in modeling techniques such as promo mix modeling (MMM), PU learning, Customer Lifetime Value (CLV), multi-dimensional time series modeling, and demand forecasting in supply chain and simulation. Strong proficiency in PU learning, single-class learning, and representation learning, alongside traditional machine learning approaches. Advanced understanding and application of model explainability techniques.
Data Analysis & Processing: Proficiency in Python and its data science ecosystem, including libraries like NumPy, Pandas, Dask, and PySpark for large-scale data processing and analysis. Ability to perform effective feature engineering by understanding business objectives.
ML/DL Frameworks & Tools: Hands-on experience with ML/DL libraries such as scikit-learn, TensorFlow/Keras, and PyTorch for developing and deploying models.
Natural Language Processing (NLP): Expertise in traditional and advanced NLP techniques, including Transformers (BERT, T5, GPT), Word2Vec, Named Entity Recognition (NER), topic modeling, and contrastive learning.
Cloud & MLOps: Experience with the AWS ML stack or equivalent cloud platforms. Proficiency in developing scalable deployment pipelines using Docker and AWS services (ECR, Lambda, Step Functions).
Problem Solving & Research: Strong logical and reasoning skills. Good understanding of the Python ecosystem and experience implementing research papers.
Collaboration & Prototyping: Ability to thrive in a fast-paced development and rapid prototyping environment.
Relevant to Have: Expertise in claims data and a background in the pharmaceutical industry. Awareness of best software design practices. Understanding of backend frameworks like Flask. Knowledge of recommender systems, representation learning, and PU learning.
Benefits and Perks: Competitive ESOP grants. Opportunity to work with Fortune 500 companies and world-class teams. Support for publishing papers and attending academic/industry conferences. Access to networking events, conferences, and seminars. Visibility across all functions at Akaike, including sales, pre-sales, lead generation, marketing, and hiring.
Appendix – Technical Skills (Must-Haves): Deep understanding of the following.
Data Processing – Wrangling: Some understanding of querying databases (MySQL, PostgreSQL, etc.); very fluent in the use of libraries such as Pandas, NumPy, Statsmodels, etc. Visualization: Exposure to Matplotlib, Plotly, Altair, etc.
Machine Learning Exposure: Machine learning fundamentals, e.g. PCA, correlations, statistical tests. Time series models, e.g. ARIMA, Prophet. Tree-based models, e.g. Random Forest, XGBoost. Deep learning models: understanding and experience of ConvNets, ResNets, UNets, etc.
GenAI-Based Models: Experience utilizing large-scale language models such as GPT-4 or other alternatives (such as Mistral, Llama, Claude) through prompt engineering and custom fine-tuning.
Code Versioning Systems: Git, GitHub.
If you're interested in the job opening, please apply through the Keka link provided here: https://akaike.keka.com/careers/jobdetails/26215
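For context on the visualization exposure listed in the appendix above, the sketch below shows roughly what day-to-day Plotly usage looks like. It is illustrative only; the DataFrame and its column names are hypothetical and not part of the posting.

```python
import pandas as pd
import plotly.express as px

# Hypothetical weekly sales figures, invented purely for illustration
df = pd.DataFrame({
    "week": pd.date_range("2025-01-06", periods=8, freq="W"),
    "sales": [120, 135, 128, 150, 162, 158, 170, 181],
})

# A basic interactive line chart; write_html saves a standalone HTML file
fig = px.line(df, x="week", y="sales", title="Weekly sales (illustrative)")
fig.write_html("weekly_sales.html")
```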

Posted 2 weeks ago

Apply

0 years

0 Lacs

Gurgaon, Haryana, India

On-site


At Nielsen, we are passionate about our work to power a better media future for all people by providing powerful insights that drive client decisions and deliver extraordinary results. Our talented, global workforce is dedicated to capturing audience engagement with content - wherever and whenever it’s consumed. Together, we are proudly rooted in our deep legacy as we stand at the forefront of the media revolution. When you join Nielsen, you will join a dynamic team committed to excellence, perseverance, and the ambition to make an impact together. We champion you, because when you succeed, we do too. We enable your best to power our future.
ABOUT THIS JOB
Nielsen Global Media uses cutting-edge technology and industry-leading data science to tackle some of the hardest problems in marketing science. We’re automating our models with artificial intelligence and machine learning to produce the same quality insights as a traditional white-glove consulting engagement at unparalleled speed and scale. Intelligence Studio is a horizontally scalable, cross-cloud, technology-agnostic platform built with trusted open-source components like VS Code, Apache Airflow, JupyterHub and MLflow. It allows data scientists to focus on doing data science by taking care of essential concerns like data access, logging, configuration, resource negotiation, dependency management, orchestration, and testing. We’re looking for a Staff Software Engineer to help our talented, cross-functional team improve user workflows in Intelligence Studio. Ideal candidates will be hands-on technologists with experience in Python, Kubernetes, distributed systems, and AWS or Azure cloud infrastructure. This position is a fantastic opportunity for an experienced engineer to work with creative engineers and cutting-edge technologies.
RESPONSIBILITIES
Build software and integrations in a cloud-based microservices environment (Kubernetes) for big data applications with Spark, Ray, etc.
Write software in Python, TypeScript, Go, Java and Scala.
Work with stakeholders and technical leadership to design and build interfaces, workflows, and services that enhance the delivery of data science products.
Actively participate in team code reviews and enforce quality standards.
Work within a cross-functional team to author clear and purposeful epics/stories.
Promote and enforce best practices in development and operations.
Identify opportunities and weaknesses in the platform architecture.
Design and develop data visualization tooling using Electron, JupyterHub, Plotly, TypeScript and pandas to enhance data exploration workflows for data science.
Integrate data science visualization and diagnostic tooling like TensorBoard, Ray Serve, and Spark History Server into an existing distributed compute and development environment.
Build secure integrations with the Kubernetes API allowing the management of user workloads in a shared environment with potentially sensitive data.
Understand and debug interactions between cloud networking components (ALBs, web API proxies), cluster ingress and security using Kong and Istio, Python-based web servers, and modern web transfer protocols like WebSockets and HTTP/3.
A LITTLE BIT ABOUT YOU
You are an experienced software engineer with a proven track record of quickly learning and implementing new technologies. You love technology and are excited to work on a high-performance team building a very ambitious product. You are looking for an opportunity to grow your career and your technical depth by diving into a project working on the current state of the art in big data and cloud technologies.
QUALIFICATIONS
Bachelor’s degree in Computer Science or a related technical field, or equivalent industry experience.
TypeScript, Python, Kubernetes, Airflow, Electron, Jupyter, pandas, Keras, Ray, TensorFlow, CUDA.
Apache Spark, Istio, Scala, Java, Go, Kong, cloud software design, containerized microservices & distributed caching.
Experience with machine learning (RNNs, CNNs, random forest, LLMs) a plus.
Please be aware that job-seekers may be at risk of targeting by scammers seeking personal data or money. Nielsen recruiters will only contact you through official job boards, LinkedIn, or email with a nielsen.com domain. Be cautious of any outreach claiming to be from Nielsen via other messaging platforms or personal email addresses. Always verify that email communications come from an @nielsen.com address. If you're unsure about the authenticity of a job offer or communication, please contact Nielsen directly through our official website or verified social media channels.
Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, protected veteran status or other characteristics protected by law.
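As a rough illustration of the "secure integrations with the Kubernetes API" responsibility listed above, the official Kubernetes Python client can be used along the lines below. The namespace name is hypothetical; a real integration would rely on in-cluster service-account credentials and RBAC rather than a local kubeconfig.

```python
from kubernetes import client, config

# Assumes a local kubeconfig; inside a cluster one would call
# config.load_incluster_config() instead.
config.load_kube_config()

v1 = client.CoreV1Api()

# List pods in a hypothetical namespace that holds user workloads
pods = v1.list_namespaced_pod(namespace="user-workloads")
for pod in pods.items:
    print(pod.metadata.name, pod.status.phase)
```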

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Karnataka, India

On-site


We are looking for a Senior Data Engineer to join our life sciences and healthcare technology consulting team. This is an exciting opportunity to lead the design and implementation of modern data pipelines and infrastructure that support data-driven insights for pharmaceutical and biotech clients. The role focuses on building scalable, reliable, and governed data solutions using tools like Python, SQL, and cloud platforms, with secondary exposure to data visualization and analytics. You’ll work closely with cross-functional teams to deliver impactful data solutions that support strategic decision-making.
About You – Experience, Education, Skills, And Accomplishments
Bachelor’s degree or equivalent in Computer Science, Information Systems, Engineering, or a related field.
At least 5 years of relevant experience in data engineering, data analytics, or consulting roles.
Proven expertise in building scalable data pipelines, ETL/ELT workflows, and cloud-native solutions.
Strong programming proficiency in Python and SQL, including data wrangling and transformation, with experience using PySpark and Databricks for large-scale data processing.
Hands-on experience with Power BI and other data visualization tools such as Spotfire, Tableau, Seaborn, or Plotly.
Strong understanding of data quality, validation, and governance principles.
It would be great if you also had:
Prior experience working in the life sciences or pharma sectors (preferred but not compulsory).
Wider experience with AWS, Azure or GCP data engineering tools for insights generation, analysis and storage.
Experience mentoring junior engineers or analysts and fostering team capability development.
Familiarity with AI/ML workflows or experience supporting data science teams (nice to have, not essential).
Understanding of compliance and privacy standards like HIPAA, GDPR, or other industry frameworks (nice to have, not essential).
What will you be doing in this role?
Design, build, and maintain robust data pipelines to support downstream analytics and reporting.
Ingest, integrate, and transform structured and unstructured data from various sources (e.g., APIs, databases, files).
Automate and optimize repeatable data workflows for performance and scalability.
Implement validation and governance checks to ensure high data quality and consistency.
Collaborate with analysts, consultants, and stakeholders to shape well-modeled datasets for visualization and decision support.
Build lightweight dashboards or data prototypes using BI tools or Python-based libraries, when needed.
Contribute to documentation, engineering templates, and reusable components to enhance delivery efficiency.
Participate in project scoping, estimations, and solution design discussions with technical and non-technical stakeholders.
About The Team
This is a technology consulting team within Clarivate’s Life Sciences and Healthcare Consulting division. We specialize in creating bespoke solutions that leverage data and technology to solve strategic problems for pharma and biotech clients. Our team manages the entire delivery lifecycle—from business development and discovery to architecture, implementation, and deployment. We work across a modern tech stack including Python, SQL, cloud platforms (AWS, Azure, GCP), orchestration tools (e.g., Airflow, dbt), and data visualization platforms (Tableau, Power BI). Our work spans data engineering, analytics enablement, regulatory intelligence, and reporting automation. The team is small, cross-functional, and agile, with a strong emphasis on innovation, quality, and collaboration.
Hours of Work
This is a full-time position based in Bangalore, operating in a hybrid mode. Working 2–3 days per week in the office is compulsory. Flexibility in working hours is essential, as we collaborate with internal teams and clients across multiple time zones.
At Clarivate, we are committed to providing equal employment opportunities for all persons with respect to hiring, compensation, promotion, training, and other terms, conditions, and privileges of employment. We comply with applicable laws and regulations governing non-discrimination in all locations.
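To make the "validation and governance checks" responsibility above more concrete, a minimal pandas-based check might look like the sketch below. The file name and columns are hypothetical; a production pipeline would typically push such rules into a dedicated framework (e.g. Great Expectations or dbt tests).

```python
import pandas as pd

def basic_quality_checks(df: pd.DataFrame) -> pd.DataFrame:
    """Apply simple data-quality rules before loading data downstream."""
    # Remove exact duplicates and rows missing the primary key
    df = df.drop_duplicates().dropna(subset=["record_id"])
    # Flag out-of-range values instead of silently dropping them
    df["dose_valid"] = df["dose_mg"].between(0, 1000)
    return df

raw = pd.read_csv("clinical_extract.csv")   # hypothetical source file
clean = basic_quality_checks(raw)
print(clean["dose_valid"].value_counts())
```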

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


About Cognite Embark on a transformative journey with Cognite, a global SaaS forerunner in leveraging data to unravel complex business challenges through our cutting-edge Cognite Data Fusion (CDF) platform. We were awarded the 2022 Technology Innovation Leader for Global Digital Industrial Platforms & Cognite was recognized as 2024 Microsoft Energy and Resources Partner of the Year . In the realm of industrial digitalization, we stand at the forefront, reshaping the future of Oil & Gas, Manufacturing and Energy sectors. Join us in this venture where data meets ingenuity, and together, we forge the path to a smarter, more connected industrial future. Learn More About Cognite Here Cognite Product Tour 2024 Cognite Product Tour 2023 Data Contextualization Masterclass 2023 Our values Impact : Cogniters strive to make an impact in all that they do. We are result-oriented, always asking ourselves “How do I add value to my team, to my company, and most importantly - to our customers?” Ownership : Cogniters embrace a culture of ownership. We go beyond our comfort zones to contribute to the greater good, fostering inclusivity and sharing responsibilities for challenges and success. Relentless : Cogniters are relentless in their pursuit of innovation. We are determined and deliverable (never ruthless or reckless), facing challenges head-on and viewing setbacks as opportunities for growth. About Global Customer Success In Global Customer Success , we build a quality company that helps our customers (and employees!) achieve their desired outcomes with Cognite. We take ownership to ensure that value moves beyond a paper exercise, and is being realized in practice. We have a relentless focus on innovation, ensuring we fully utilize our peoples' capabilities. We strive for happy and thriving employees, working with engaged customers to expand their usage and impact, and bring back feedback to improve our products. Within Global Customer Success, we emphasise developing repeatable and scalable offerings to maximize the outcomes of our investments as a company. About Cognite And Cogniters Cognite is a global leader in industrial software with our Industrial DataOps platform, Cognite Data Fusion®, at the core. We were awarded the 2022 Technology Innovation Leader for Global Digital Industrial Platforms . Do you see how data can be used, modeled and visualized in new ways to improve decision making in heavy-asset industries? Do you enjoy building machine-learning or physics-based solutions to help operators, engineers and business leaders plan and run their processes more efficiently? If so, you should join Cognite and become a part of the team responsible for delivering Cognite’s cutting edge industry solutions to our customers! The data science team at Cognite is responsible for understanding our customers’ business goals and challenges, scoping and building solutions to address these, and growing Cognite’s impact within our customers’ digital transformation efforts. As a Data Scientist in the team, you will work in cross-functional pods of other data scientists, data engineers, solution architects and project managers to configure, deploy, and operationalize digital solutions across the oil and gas, renewables/sustainability, power & utilities, and manufacturing industries. You will work with the customer to understand their desired outcome, conduct discovery workshops, and advise on possible approaches and potential pitfalls. 
You will build solutions based on physics models and/or using machine learning techniques, build dashboards for visualizing the solutions, and scale and deploy solutions in production using Cognite’s core capabilities. The position will give you a unique opportunity to work with and learn from people with various backgrounds in business, software, and industry.
Who You Are
5+ years of full-time work experience as a data scientist (preferably within a related industry), or as a data scientist with domain expertise in Oil and Gas, Maintenance, or Manufacturing.
1+ years of experience serving as a domain expert on internal or customer projects.
Proficient with Python and its data ecosystem (pandas, NumPy) and ML libraries (scikit-learn, Keras, etc.).
Experience in data visualization using Power BI, Grafana, Tableau, or web development frameworks like Plotly Dash, Streamlit, etc.
Promotes good software development practices, including automated tests and documentation.
Experience with version control (Git, for example).
Experience contributing code to a large software project.
Pragmatic; can balance short-term and long-term tradeoffs.
Realistic and credible advisor on machine learning.
Excellent communicator internally and with customers; manages expectations of technical stakeholders at the customer.
Mentoring and quality assurance of juniors’ work.
Cloud experience, such as building streaming calculations and models as serverless functions.
Deployment to production using provided infrastructure practices.
A track record of working in diverse industries.
Ready to stay up to date with the latest developments in GenAI and to use GenAI services to build use cases for customers.
Join the global Cognite Community! 🌐
Join an organization of 70 different nationalities 🌐 with Diversity, Equality and Inclusion (DEI) in focus 🤝
ITPL office location (Bengaluru)
A highly modern and fun working environment with a sublime culture across the organization; follow us on Instagram @cognitedata 📷 to know more
Flat structure with direct access to decision-makers, with a minimal amount of bureaucracy
Opportunity to work with and learn from some of the best people on some of the most ambitious projects found anywhere, across industries
Join our HUB 🗣️ to be part of the conversation directly with Cogniters and our partners
Hybrid work environment globally
Why choose Cognite? 🏆 🚀
Join us in making a real and lasting impact in one of the most exciting and fastest-growing new software companies in the world. We have repeatedly demonstrated that digital transformation, when anchored on strong DataOps, drives business value and sustainability for clients and allows front-line workers, as well as domain experts, to make better decisions every single day. We were recognized as one of CNBC's top global enterprise technology startups powering digital transformation! And just recently, Frost & Sullivan named Cognite a Technology Innovation Leader! 🥇 Most recently, Cognite Data Fusion® achieved an industry-first DNV compliance for digital twins 🥇
Apply today!
If you're excited about the opportunity to work at Cognite and make a difference in the tech industry, we encourage you to apply today! We welcome candidates of all backgrounds and identities to join our team. Please do not hesitate to contact our Talent Acquisition team with any questions. We encourage you to follow us on Cognite LinkedIn; we post all our openings there.
Equal Opportunity
Cognite is committed to creating a diverse and inclusive environment at work and is proud to be an equal opportunity employer. All qualified applicants will receive the same level of consideration for employment; everyone we hire will receive the same level of consideration for training, compensation, and promotion. We ask for gender as part of our application because we want to ensure equal assessment in the recruitment process. Your answer will help us reach this commitment! However, the question about gender is optional and your choice not to answer will not affect the assessment of your application in any way.
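For candidates gauging the "Plotly Dash, Streamlit" requirement mentioned above, a minimal Dash app is sketched here. The sensor data is fabricated for illustration only; a real deployment would read from Cognite Data Fusion or another data source rather than an in-memory DataFrame.

```python
import pandas as pd
import plotly.express as px
from dash import Dash, dcc, html

# Fabricated hourly sensor readings, for illustration only
df = pd.DataFrame({
    "timestamp": pd.date_range("2025-01-01", periods=24, freq="h"),
    "temperature": [20 + 0.3 * i for i in range(24)],
})

app = Dash(__name__)
app.layout = html.Div([
    html.H3("Asset temperature (illustrative)"),
    dcc.Graph(figure=px.line(df, x="timestamp", y="temperature")),
])

if __name__ == "__main__":
    app.run(debug=True)  # app.run_server on older Dash 2.x releases
```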

Posted 2 weeks ago

Apply

1.0 - 3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Join Amgen’s Mission of Serving Patients
At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas –Oncology, Inflammation, General Medicine, and Rare Disease– we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.
Senior Associate Software Engineer - Real-time Resource Allocation
What You Will Do
We are seeking a highly skilled and experienced Full Stack Software Engineer to join our team. As a Senior Full Stack Engineer, you will be responsible for developing and deploying complex software applications with guidance from senior software engineers. You will work closely with cross-functional teams to deliver high-quality, scalable, and maintainable solutions.
Roles & Responsibilities:
Possesses strong rapid prototyping skills and can quickly translate concepts into working code.
Take ownership of complex software projects from conception to deployment.
Contribute to both front-end and back-end development using cloud technology.
Create and maintain documentation on software design, deployment, and operations.
Identify and resolve technical challenges effectively.
Stay updated with the latest trends and advancements.
Work closely with the product team, business team, and other key partners.
What We Expect Of You
We are all different, yet we all use our unique contributions to serve patients. The professional we seek is someone with these qualifications.
Basic Qualifications:
Master’s degree in computer science or STEM majors with a minimum of 1 to 3 years of Information Systems experience OR Bachelor’s degree in computer science or STEM majors with a minimum of 3 to 5 years of Information Systems experience.
Must-Have Skills:
Knowledge of various cloud services and cloud design principles.
Hands-on experience with full-stack software development, including REST APIs and data pipelines.
Proficient in Python (preferred) and SQL/NoSQL.
Experience in microservices architecture and containerization technologies such as Docker and Kubernetes on Azure, AWS, or other cloud platforms.
Experience in JavaScript, TypeScript, the React framework, HTML5, CSS, and NPM.
Good-to-Have Skills:
Experience with DevOps CI/CD build and deployment pipelines.
Experience with design patterns, data structures, test-driven development.
Experience with Python-based visualization frameworks like Plotly.
Soft Skills:
Skilled in breaking down problems, documenting problem statements, and estimating efforts.
Awareness of industry trends.
Strong oral and written communication skills.
Strong interpersonal skills.
Effective team-building and problem-solving abilities.
Persistence to completion, especially in the face of setbacks, and the ability to push for results through team spirit.
Ability to work effectively with global, virtual teams.
What You Can Expect Of Us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.
Apply now and make a lasting impact with the Amgen team. careers.amgen.com
As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease.
Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.

Posted 2 weeks ago

Apply

4.0 - 6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Join Amgen’s Mission of Serving Patients
At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas –Oncology, Inflammation, General Medicine, and Rare Disease– we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.
Senior Data Engineer - Real-time Resource Allocation
What You Will Do
Let’s do this. Let’s change the world. In this vital role you will be responsible for the development and implementation of our data strategy. The ideal candidate possesses a strong blend of technical expertise and data-driven problem-solving skills. As a Data Engineer, you will play a crucial role in building and optimizing our data pipelines and platforms in a SAFe Agile product team.
Roles & Responsibilities:
Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions.
Deliver data pipeline projects from development to deployment, managing timelines and risks.
Ensure data quality and integrity through meticulous testing and monitoring.
Leverage cloud platforms (AWS, Databricks) to build scalable and efficient data solutions.
Work closely with the product team and key collaborators to understand data requirements.
Adhere to data engineering industry standards and best practices.
Experience developing in an Agile development environment, and comfort with Agile terminology and ceremonies.
Familiarity with code versioning using Git and code migration tools. Familiarity with Jira.
Stay up to date with the latest data technologies and trends.
What We Expect Of You
We are all different, yet we all use our unique contributions to serve patients. The professional we seek is someone with these qualifications.
Basic Qualifications:
Master’s degree and 4 to 6 years of Information Systems experience OR Bachelor’s degree and 6 to 8 years of Information Systems experience OR Diploma and 10 to 12 years of Information Systems experience.
Demonstrated hands-on experience with cloud platforms (AWS, Azure, GCP).
Proficiency in Python, PySpark, SQL.
Development knowledge in Databricks.
Good analytical and problem-solving skills to address sophisticated data challenges.
Must-Have Skills:
Experienced with data modeling.
Experienced working with ETL orchestration technologies.
Experienced with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps.
Familiarity with SQL/NoSQL databases.
Good-to-Have Skills:
Experience with DevOps CI/CD build and deployment pipelines.
Experience with design patterns, data structures, test-driven development.
Experience with Python-based visualization frameworks like Plotly.
Soft Skills:
Skilled in breaking down problems, documenting problem statements, and estimating efforts.
Effective communication and interpersonal skills to collaborate with multi-functional teams.
Excellent analytical and problem-solving skills.
Strong verbal and written communication skills.
Ability to work successfully with global teams.
High degree of initiative and self-motivation.
Team-oriented, with a focus on achieving team goals.
What You Can Expect Of Us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.
Apply now and make a lasting impact with the Amgen team. careers.amgen.com
As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease.
Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.

Posted 2 weeks ago

Apply


12.0 - 18.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Are you looking for a career move that will put you at the heart of a global financial institution? Then bring your skills in data-driven modelling and data engineering to Citi’s Global FX Team. By joining Citi, you will become part of a global organization whose mission is to serve as a trusted partner to our clients by responsibly providing financial services that enable growth and economic progress.
Team/Role Overview
The FX Data Analytics & AI Technology team, within Citi's FX Technology organization, seeks a highly motivated Full Stack Data Scientist / Data Engineer. The FX Data Analytics & Gen AI Technology team provides data, analytics, and tools to Citi FX sales and trading globally and is responsible for defining and executing the overall data strategy for FX. The successful candidate will be responsible for developing and implementing data-driven models, and engineering robust data and analytics pipelines, to unlock actionable insights from our vast amount of global FX data. The role will be instrumental in executing the overall data strategy for FX and will benefit from close interaction with a wide range of stakeholders across sales, trading, and technology. We are looking for a proactive individual with a practical and pragmatic attitude, the ability to build consensus, and the ability to work both collaboratively and independently in a dynamic environment.
What You’ll Do
Design, develop and implement quantitative models to derive insights from large and complex FX datasets, with a focus on understanding market trends and client behavior, identifying revenue opportunities, and optimizing the FX business.
Engineer data and analytics pipelines using modern, cloud-native technologies and CI/CD workflows, focusing on consolidation, automation, and scalability.
Collaborate with stakeholders across sales and trading to understand data needs, translate them into impactful data-driven solutions, and deliver these in partnership with technology.
Develop and integrate functionality to ensure adherence with best practices in terms of data management, need-to-know (NTK), and data governance.
Contribute to shaping and executing the overall data strategy for FX in collaboration with the existing team and senior stakeholders.
What We’ll Need From You
12 to 18 years of experience.
Master’s degree or above (or equivalent education) in a STEM discipline.
Proven experience in software engineering and development, and a strong understanding of computer systems and how they operate.
Excellent Python programming skills, including experience with relevant analytical and machine learning libraries (e.g., pandas, polars, numpy, sklearn, TensorFlow/Keras, PyTorch, etc.), in addition to visualization and API libraries (matplotlib, plotly, streamlit, Flask, etc.).
Understanding of GenAI models, vector databases, and agents, and an eye on market trends; hands-on experience with these is desirable.
Experience developing and implementing quantitative models from data in a financial context.
Proficiency working with version control systems such as Git, and familiarity with Linux computing environments.
Experience working with different database and messaging technologies such as SQL, KDB, MongoDB, Kafka, etc.
Familiarity with data visualization and ideally development of analytical dashboards using Python and BI tools.
Excellent communication skills, both written and verbal, with the ability to convey complex information clearly and concisely to technical and non-technical audiences.
Ideally, some experience working with CI/CD pipelines and containerization technologies like Docker and Kubernetes.
Ideally, some familiarity with data workflow management tools such as Airflow as well as big data technologies such as Apache Spark/Ignite or other caching and analytics technologies.
A working knowledge of FX markets and financial instruments would be beneficial.
------------------------------------------------------
Job Family Group: Technology
------------------------------------------------------
Job Family: Applications Development
------------------------------------------------------
Time Type: Full time
------------------------------------------------------
Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
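As an indication of the "analytical dashboards using Python" skill mentioned above, a simple Streamlit page could be put together as sketched below. The CSV file and its columns are hypothetical; an FX desk would query internal data services instead.

```python
import pandas as pd
import plotly.express as px
import streamlit as st

# Hypothetical FX volume extract with columns: date, pair, volume
df = pd.read_csv("fx_volumes.csv", parse_dates=["date"])

pair = st.selectbox("Currency pair", sorted(df["pair"].unique()))
filtered = df[df["pair"] == pair]

st.plotly_chart(
    px.line(filtered, x="date", y="volume", title=f"{pair} traded volume")
)
# Run with: streamlit run app.py
```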

Posted 2 weeks ago

Apply

2.0 years

0 Lacs

Greater Kolkata Area

On-site


Department: Technology
Location: Kolkata
Description
Energy Aspects currently has an exciting opportunity available for a Data Engineer to join our Data Engineering team based out of our Kolkata office. This is a fantastic opportunity for an experienced data engineer to support the ETL, data pipelines, core databases and strategic projects underpinning the work of our highly regarded global oil, natural gas and energy teams, optimizing our data content and capabilities. The successful candidate will be responsible for building entirely new data products and maintaining/iterating on our existing datasets, data pipelines, and tools used daily by the Data and Research teams at Energy Aspects. You will take ownership of systems within a cloud-first data stack and be expected to contribute quickly to production-level Python codebases. You will closely support Data Analysts and other technical analysts while helping to shape and drive forward the future state of data engineering at Energy Aspects.
Key Responsibilities
Develop and maintain data pipelines using Python.
Build, extend, and transform core datasets within the data warehouse.
Collaborate with various cloud services to create new services and scrapes, facilitating data collection and presentation in diverse formats.
Design and implement innovative data solutions for internal teams and external clients.
Support a range of strategic, company-level data initiatives.
Requirements
Minimum of 2 years of experience in software development.
Proficiency in web scraping and Python-based web development.
Experience with scheduling and orchestration tools, such as Airflow.
Strong understanding of building, extending, and querying SQL databases, both natively and via Python.
Knowledge of best practices for organizing and modeling data for storage, retrieval, and analysis.
Familiarity with data development best practices, including documentation, version control, testing, and automation.
Understanding of the importance of data governance and metadata.
Methodical, analytical, and organized approach to managing large-scale projects and datasets.
Effective communication skills, with the ability to present to both technical and non-technical audiences.
Experience in building reliable and maintainable data pipelines.
Candidates with working knowledge of, or familiarity with, the following technologies, libraries, and services will be preferred:
Python: Anaconda, pandas, SQLAlchemy, web scraping libraries
SQL: PostgreSQL
Cloud: GCP/AWS
DevOps: CI/CD, GitHub Actions, Cloud Build, etc.
Orchestration: Airflow
Data Visualisation: Plotly, Dash, Flask, FastAPI, etc.
Desirable Skills:
Experience with cloud infrastructure and familiarity with Infrastructure as Code.
Proficiency with Docker and containerisation.
Experience working with high-frequency time series data and/or streaming data.
Experience in building and managing other database/datastore technologies (e.g., NoSQL variants, file stores).
Experience with Python web frameworks, Jinja templating, and/or JavaScript.
Prior experience or training in forecasting or modelling.
An interest in commodities, energy, or financial markets is beneficial.
Our Culture & Benefits
Welcome to our unique workplace where a passion for our industry-leading product sits at the heart of who we are. Life at EA is completely eclectic, fostered through the global nature of the business and a real appreciation of the many cultures of our diverse team. We unite as a single, cohesive team through an array of social clubs that cater to a spectrum of interests, from running and yoga to football and culinary adventures. These groups create a collegial and dynamic atmosphere that extends beyond work, promoting a healthy and balanced lifestyle for our team.
Our strategically located offices are all set in prestigious buildings, offering you the convenience of nearby gyms, retail therapy, diverse dining options, and accessible public transport. Located at South City Business Park, Anandpur, with convenient access around Kolkata, our office is thoughtfully equipped to enhance your day-to-day experience whether working independently or collaborating with teammates. Enjoy the simple pleasures of a freshly brewed coffee, healthy snacks, and a social space for celebratory moments.
We recognise your contribution with a competitive compensation package that includes annual bonuses, comprehensive private health insurance, and substantial pension contributions. Additionally, we offer company share options, subsidised gym memberships, a lunch allowance and a generous holiday policy to support your financial and personal well-being. Join a company that values your professional growth and personal fulfilment, all within a supportive and engaging environment.
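For the Airflow orchestration experience listed in the requirements above, a minimal DAG is sketched here. The task bodies are placeholders and the DAG id is invented; the sketch assumes Airflow 2.4+ (older 2.x releases use schedule_interval instead of schedule).

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Placeholder for a scraper or API pull that lands raw prices
    print("extracting raw prices")

def load():
    # Placeholder for a write into the warehouse
    print("loading into the warehouse")

with DAG(
    dag_id="daily_price_pipeline",   # hypothetical pipeline name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task
```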

Posted 2 weeks ago

Apply

0 years

6 - 9 Lacs

Chennai

Remote

Chennai, India | Bangalore, India
Job ID: R-1074652
Apply prior to the end date: June 14th, 2025
When you join Verizon
You want more out of a career. A place to share your ideas freely — even if they’re daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love — driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together — lifting our communities and building trust in how we show up, everywhere & always. Want in? Join the #VTeamLife.
What you'll be doing…
Lead development of advanced machine learning and statistical models.
Design scalable data pipelines using PySpark.
Perform data transformation and exploratory analysis using Pandas, NumPy and SQL.
Build, train and fine-tune machine learning and deep learning models using TensorFlow and PyTorch.
Mentor junior engineers and lead code reviews, best practices and documentation.
Design and implement big data and streaming AI/ML training and prediction pipelines.
Translate complex business problems into data-driven solutions.
Promote best practices in data science and model governance.
Stay ahead of evolving technologies and guide strategic data initiatives.
What we're looking for…
You'll need to have:
Bachelor's degree or four or more years of work experience.
Experience in Python, PySpark and SQL.
Strong proficiency in Pandas, NumPy, Excel, Plotly, Matplotlib, Seaborn, ETL, AWS and SageMaker.
Experience in supervised learning models (regression, classification) and unsupervised learning models (anomaly detection, clustering).
Extensive experience with AWS analytics services, including Redshift, Glue, Athena, Lambda, and Kinesis.
Knowledge of deep learning: autoencoders, CNN, RNN, LSTM, and hybrid models.
Experience in model evaluation, cross-validation, and hyperparameter tuning.
Familiarity with data visualization tools and techniques.
Even better if you have one or more of the following:
Experience with machine learning and statistical analysis.
Experience in hypothesis testing.
Excellent communication skills with the ability to translate complex technical concepts to non-technical stakeholders.
If our company and this role sound like a fit for you, we encourage you to apply even if you don't meet every "even better" qualification listed above. #TPDRNONCDIO
Where you’ll be working
In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager.
Scheduled Weekly Hours: 40
Equal Employment Opportunity
Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics.
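To illustrate the "scalable data pipelines using PySpark" responsibility above, a small aggregation job might look like this sketch. The input file and column names are hypothetical; a production pipeline would read from S3, Redshift or Kinesis as the posting describes.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("usage_pipeline").getOrCreate()

# Hypothetical CSV of network usage events: event_ts, customer_id, bytes_used
events = spark.read.csv("usage_events.csv", header=True, inferSchema=True)

daily = (
    events
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("customer_id", "event_date")
    .agg(F.sum("bytes_used").alias("total_bytes"))
)

daily.write.mode("overwrite").parquet("daily_usage/")
```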

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Company Description
IQEQ is a preeminent service provider to the alternative asset industry. IQEQ works with managers in multiple capacities ranging from hedge fund, private equity fund, and mutual fund launches; private equity fund administration; advisory firm set-up, regulatory registration and infrastructure design; ongoing regulatory compliance (SEC, CFTC, and 40 Act); financial controls and operational support services; compliance and operational related projects and reviews; and outsourced CFO/controller and administration services to private equity fund investments – portfolio companies, real estate assets and energy assets. Our client base is growing, and our existing clients are engaging the firm across the spectrum of our services offerings.
Job Description
About IQ-EQ: IQ-EQ is a leading investor services group that brings together that rare combination of global expertise and a deep understanding of the needs of clients. We have the know-how and the know you that allows us to provide a comprehensive range of compliance, administration, asset and advisory services to investment funds, global companies, family offices and private clients globally. IQ-EQ employs a global workforce of 5,000+ people located in 23 jurisdictions and has assets under administration (AUA) exceeding US$500 billion. IQ-EQ works with eight of the top 10 global private equity firms. This is an exciting time to be part of IQ-EQ; we are growing substantially and currently seeking talented Data individuals to come along for the journey into the Data, Analytics and Reporting Department. You will have the opportunity to utilise IQ-EQ's leading-edge technology stack whilst enjoying our continuous learning and development programme.
What does the Analytics Engineer opportunity look like for you? You will play a pivotal role in the development and maintenance of interactive client-, investor- and operational-team-facing dashboards using Tableau, Power BI and other visual analytics tools. You will work closely with clients and senior stakeholders to capture their BI requirements, conduct data analysis using SQL, Python (or other open-source languages), and visualise the insights in an impactful manner.
In order to be successful in this role we require the following experience:
Experience of interacting directly with external clients verbally and, through those interactions, the ability to conceptualise what the overall end-to-end database architecture and data model would look like from the source to the destination of the data.
Intermediate experience of working with structured and unstructured data, data warehouses and data lakes, both on-prem and in the cloud (Microsoft SQL Server, Azure, AWS or GCP).
At least 5 yrs. of demonstrable experience in SQL Server and cloud-based data stores.
Intermediate knowledge of a SQL Server data tool such as SSIS or Azure Data Factory (at least 5 yrs. of demonstrable experience).
Intermediate experience of ETL / ELT methods such as incremental and full load, as well as various tools to implement these methods such as Azure Data Factory, Azure Databricks, SSIS, Python, dbt, Airflow, Alteryx.
Intermediate experience of implementing dimensional data models within analytics databases / data warehouses.
Intermediate knowledge of Python / Spark packages for analytical data modelling and data analysis (pandas, NumPy, scikit-learn, etc.) as well as data visualisation (matplotlib, plotly, dash, etc.).
Intermediate experience of BI tools – Power BI, Tableau (at least 4 yrs. of demonstrable experience).
Experience of various JavaScript libraries for front-end development and embedding of visuals (e.g. D3, React, Node, etc.).
Tasks (what does the role do on a day-to-day basis)
Engage with external clients, vendors, project reps, and internal stakeholders from operations as well as client services teams to understand their analytics and dashboard requirements.
Maintain and enhance the existing database architecture of the BI solution.
Conduct in-depth exploratory data analysis and define business-critical features.
Work with key teams within Group Technology to get appropriate access and infrastructure set up to develop advanced visualisations (BI dashboards).
Optimize SQL Server queries, stored procedures and views to efficiently and effectively retrieve data from databases, employing the fundamentals of dimensional data modelling.
Ensure updates to tickets and work in progress are well communicated, and that escalations regarding the delivery being provided are kept to a minimum.
Maintain and enhance existing data solutions. Maintain best-practice data warehouse solutions that support business analytics needs. This is achieved using on-prem or cloud databases and different ETL / ELT programs and software, such as Azure Data Factory, SSIS, Python, PowerShell, Alteryx or other open-source technology.
Create, maintain and document the different analytics solution processes created per project worked on.
Resolve IT and data issues. When database issues arise or development requests come in through the help desk, BI developers work to resolve these problems. This requires an understanding of legacy solutions and issues.
Key competencies for position and level
Analytical Reasoning – Ability to identify patterns within a group of facts or rules and use those patterns to determine outcomes about what could / must be true and come to logical conclusions.
Critical Thinking – Ability to conceptualise, analyse, synthesise, evaluate and apply information to reach an answer or conclusion.
Conceptual Thinking and Creative Problem Solving – An original thinker with the ability to go beyond traditional approaches, and the resourcefulness to adapt to new / difficult situations and devise ways to overcome obstacles in a persistent manner that does not give up easily.
Interpersonal Savvy – Relating comfortably with people across all levels, functions, cultures & geographies, building rapport in an open, friendly & accepting way.
Effective Communication – Adjusting communication style to fit the audience & message, whilst providing timely information to help others across the organisation. Encourages the open expression of diverse ideas and opinions.
Results / Action Orientation and Determination – Readily taking action on challenges without unnecessary planning, and identifying new opportunities and taking ownership of them with a focus on getting the problem solved.
Key behaviours we expect to see
In addition to demonstrating our Group Values (Authentic, Bold, and Collaborative), the role holder will be expected to demonstrate the following:
Facilitate open and frank debate to drive forward improvement.
Willingness to learn, develop, and keep abreast of technological developments.
An analytical mind, excellent problem-solving & diagnostic skills, and attention to detail.
Qualifications
Required Experience
Education / Professional Qualifications: Degree-level education in Data Analytics or Computer Science is preferred, but an equivalent professional IT certification is acceptable.
Background Experience
A minimum of 5 years' experience in a developer / engineer role or similar DB experience.
Good understanding of dimensional data modelling methodologies.
Experience with visualization and reporting tools, namely Tableau and Power BI, as well as QlikView, Looker, ThoughtSpot.
Experience with the Microsoft Fabric Platform.
Experience with MS Excel, including PowerPivot.
Technical
Experience of supporting a variety of SQL-based applications. Hands-on experience with SQL 2016 and above.
Experience with T-SQL and the ability to analyse queries for efficiency.
Experience with the MS SQL Server suite, including SSIS.
Experience in Fabric Data Factory, Azure Data Factory, Azure Synapse.
Experience in both batch (incremental and full load) and near-real-time data ETL / ELT processing.
Experience with version control software, e.g. Git, Bitbucket, as well as software development platforms such as Azure DevOps and Jira.
Languages: English

Posted 2 weeks ago

Apply

3.0 - 4.0 years

0 Lacs

New Delhi, Delhi, India

On-site


Sr. Analyst - Marketing Measurement & Optimization
Job Description:
Qualifications:
Bachelor's degree in Statistics, Mathematics, Computer Science, Engineering, or a related field.
Proven 3-4 years of experience in a similar role.
Strong analytical and problem-solving skills.
Excellent communication and presentation skills.
Skills:
Proficiency in R (tidyverse, lme4/lmerTest, plotly/ggplot2) or Python for data manipulation, modelling, and visualization, and SQL (joins, aggregation, analytic functions) for data handling.
Ability to handle and analyse marketing data and perform statistical tests.
Experience with data visualization tools such as Tableau, PowerPoint, Excel.
Strong storytelling skills and the ability to generate insights and recommendations.
Responsibilities:
Understand business requirements and suggest appropriate marketing measurement solutions (Media Mix Modelling, Multi-Touch Attribution, etc.).
Conduct panel data analysis using fixed effects, random effects, and mixed effects models.
Perform econometric modelling, including model evaluation, model selection, and results interpretation.
Understand, execute, and evaluate the data science modelling flow.
Understand marketing, its objectives, and effectiveness measures such as ROI/ROAS.
Familiarity with marketing channels, performance metrics, and the conversion funnel.
Experience with media mix modelling, ad-stock effects, saturation effects, multi-touch attribution, rule-based attribution, and media mix optimization.
Knowledge of Bayes' theorem, Shapley values, Markov chains, response curves, marginal ROI, halo effects, and cannibalization.
Experience handling marketing data and performing data QA and manipulation tasks such as joins/merge, aggregation and segregation, and append.
Location: DGS India - Pune - Baner M- Agile
Brand: Merkle
Time Type: Full time
Contract Type: Permanent
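For the panel-data responsibility above (fixed, random, and mixed effects models), the Python equivalent of an lme4-style random-intercept model is sketched below using statsmodels. The panel file and its column names are hypothetical, chosen only to illustrate the technique.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical weekly panel: one row per market per week
# columns: market, week, sales, tv_spend, search_spend
panel = pd.read_csv("mmm_panel.csv")

# Fixed effects for media spend, random intercept per market
model = smf.mixedlm("sales ~ tv_spend + search_spend",
                    data=panel, groups=panel["market"])
result = model.fit()
print(result.summary())
```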

Posted 2 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies