5.0 - 9.0 years
0 Lacs
pune, maharashtra
On-site
You will be joining Apexon, a digital-first technology services firm that specializes in accelerating business transformation and delivering human-centric digital experiences. At Apexon, we meet customers at every stage of the digital lifecycle and help them outperform their competition through speed and innovation. With a focus on AI, analytics, app development, cloud, commerce, CX, data, DevOps, IoT, mobile, quality engineering, and UX, we leverage our deep expertise in BFSI, healthcare, and life sciences to help businesses capitalize on the opportunities presented by the digital world. Our reputation is built on a comprehensive suite of engineering services, a commitment to solving our clients' toughest technology problems, and a dedication to continuous improvement. With backing from Goldman Sachs Asset Management and Everstone Capital, Apexon has a global presence with 15 offices and 10 delivery centers across four continents.
As a part of our #HumanFirstDIGITAL initiative, you will be expected to excel in data analysis, VBA, Macros, and Excel. Your responsibilities will include monitoring and supporting healthcare operations, addressing client queries, and communicating effectively with stakeholders. Proficiency in Python scripting, particularly with pandas, NumPy, and ETL pipelines, is essential. You should be able to independently understand client requirements and queries and demonstrate strong data analysis skills. Knowledge of Azure Synapse basics, Azure DevOps basics, Git, T-SQL, and SQL Server will be beneficial.
At Apexon, we are committed to diversity and inclusion, and our benefits and rewards program is designed to recognize your skills and contributions, enhance your learning and upskilling experience, and provide support for you and your family. As an Apexon Associate, you will have access to continuous skill-based development, opportunities for career growth, and comprehensive health and well-being benefits and support. In addition to a supportive work environment, we offer a range of benefits, including group health insurance covering a family of 4, term insurance, accident insurance, paid holidays, earned leaves, paid parental leave, learning and career development opportunities, and employee wellness programs.
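The Python skills this posting asks for (pandas, NumPy, ETL pipelines) can be illustrated with a minimal extract-transform-load sketch; the column names and values below are hypothetical examples, not taken from any Apexon system:

```python
import pandas as pd

# Extract: a small batch of raw records with a dirty numeric column.
raw = pd.DataFrame({
    "claim_id": [1, 2, 3, 4],
    "amount": ["100", "250", "n/a", "400"],
    "region": ["east", "west", "east", "west"],
})

# Transform: coerce amounts to numbers and drop unparseable rows.
raw["amount"] = pd.to_numeric(raw["amount"], errors="coerce")
clean = raw.dropna(subset=["amount"])

# Load: aggregate per region into a summary table.
summary = clean.groupby("region", as_index=False)["amount"].sum()
print(summary)
```

Real pipelines would read from files or databases rather than an inline frame, but the clean-then-aggregate shape is the same.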
Posted 2 days ago
2.0 - 6.0 years
0 Lacs
hyderabad, telangana
On-site
You are ready to gain the skills and experience required to progress within your role and advance your career, and there is an excellent software engineering opportunity waiting for you. As a Software Engineer II at JPMorgan Chase in the Corporate Technology organization, you play a crucial role in the Data Services Team dedicated to enhancing, building, and delivering trusted market-leading Generative AI products securely, stably, and at scale.
As part of the software engineering team, you will implement software solutions by designing, developing, and troubleshooting multiple components within technical products, applications, or systems while continuously enhancing your skills and experience. Your responsibilities include executing standard software solutions, writing secure and high-quality code in at least one programming language, designing and troubleshooting with consideration of upstream and downstream systems, applying tools within the Software Development Life Cycle for automation, and employing technical troubleshooting to solve technical problems of basic complexity. Additionally, you will analyze large datasets to identify issues and contribute to decision-making for secure and stable application development, learn and apply system processes for developing secure code and systems, and contribute to a team culture of diversity, equity, inclusion, and respect.
The qualifications, capabilities, and skills required for this role include formal training or certification in software engineering concepts with a minimum of 2 years of applied experience; experience with large datasets and predictive models; experience developing and maintaining code in a corporate environment using modern programming languages and database querying languages; proficiency in Python and libraries such as TensorFlow, PyTorch, PySpark, NumPy, and pandas, as well as SQL; and familiarity with cloud services such as AWS/Azure. You should have a strong ability to analyze and derive insights from data, experience across the Software Development Life Cycle, exposure to agile methodologies, and emerging knowledge of software applications and technical processes within a technical discipline. Preferred qualifications include an understanding of SDLC cycles for data platforms, major upgrade releases, patches, bug/hot fixes, and the associated documentation.
Posted 2 days ago
0.0 - 4.0 years
0 Lacs
pune, maharashtra
On-site
As a Junior Software Engineer at Verve Square, you will be an integral part of our dynamic team, supporting L3 application support and contributing to ongoing feature enhancements for a live enterprise product. If you are passionate about coding, enjoy tackling real-world problems, and are eager to develop in a fast-paced tech environment, we are excited to learn more about you!
Key Skills:
- Strong understanding of Python, Flask, Pandas, and SQLAlchemy ORM
- Experience in React.js with a focus on building UIs using the Material UI (MUI) framework
- Basic knowledge of SQL and database operations
- Strong problem-solving and debugging skills
Role & Responsibilities:
- Collaborate closely with senior engineers to provide support and improve production applications
- Address bugs, resolve user issues (L3 support), and assist in the implementation of new features
- Work in collaboration with cross-functional teams to ensure the delivery of high-quality code
- Write clean, maintainable, and efficient code while ensuring proper documentation is in place
If you possess these skills and are eager to contribute to a challenging and rewarding environment, we encourage you to apply for this exciting opportunity at Verve Square.
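A minimal illustration of the Flask skills listed above; the `/tickets` route and the in-memory store are hypothetical stand-ins for a real database-backed service, not part of any Verve Square product:

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical in-memory "ticket" store standing in for a real database.
TICKETS = {1: {"id": 1, "status": "open"}, 2: {"id": 2, "status": "closed"}}

@app.route("/tickets/<int:ticket_id>")
def get_ticket(ticket_id):
    ticket = TICKETS.get(ticket_id)
    if ticket is None:
        return jsonify(error="not found"), 404
    return jsonify(ticket)

# Flask's built-in test client exercises the route without a running server.
client = app.test_client()
print(client.get("/tickets/1").get_json())
```

The test-client pattern shown at the end is also how L3 support engineers typically reproduce reported API bugs locally.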
Posted 2 days ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
As a Senior Robotic Process Automation (RPA) Developer for Digital Transformation, you will be responsible for designing, developing, and testing the automation of workflows. Your key role will involve supporting the implementation of RPA solutions, collaborating with the RPA Business Analyst to document process details, and working with the engagement team to implement and test solutions while managing exceptions. Additionally, you will be involved in the maintenance and change control of existing artifacts. To excel in this role, you should possess substantial experience in standard concepts, practices, technologies, tools, and methodologies related to Digital Transformations, including automation, analytics, and new/emerging technologies in AI/ML. Your ability to efficiently manage projects from inception to completion, coupled with strong execution skills, will be crucial. Knowledge of process reengineering would be advantageous. Your responsibilities will include executing projects on digital transformations, process redesign, and maximizing operational efficiency to identify cost-saving opportunities for the enterprise. You will also interact with Business Partners in India and the USA. 
Key Job Functions and Responsibilities:
- Manage end-to-end execution of digital transformation initiatives
- Drive ideation and pilot projects on new/emerging technologies such as AI/ML and predictive analytics
- Evaluate multiple tools and select the appropriate technology stack for specific challenges
- Collaborate with Subject Matter Experts (SMEs) to document current and future processes
- Possess a clear understanding of process discovery and differentiate between RPA and regular automation
- Provide guidance on designing "to be" processes for effective automation
- Develop RPA solutions following best practices
- Consult with internal clients and partners to offer automation expertise
- Implement RPA solutions across various platforms (e.g., Citrix, web, Microsoft Office, database, scripting)
- Assist in establishing a change management framework for updates
- Offer guidance on process design from an automation perspective
Qualifications:
- Bachelor's/Master's/Engineering degree in IT, Computer Science, Software Engineering, or a relevant field
- Minimum of 3-4 years of experience in UiPath
- Strong programming skills in Python, SQL, and Pandas
- Expertise in at least one popular Python framework (e.g., Django, Flask, or Pyramid) is advantageous
- Application of Machine Learning/Deep Learning concepts in cognitive areas such as NLP, Computer Vision, and image analytics is highly beneficial
- Proficiency in working with structured/unstructured data, image (OCR)/voice, and descriptive/prescriptive analytics
- Excellent organizational and time management skills, with the ability to work independently
- Certification in UiPath is recommended
- Hands-on experience and deep understanding of AWS tools and technologies like EC2, EMR, ECS, Docker, Lambda, and SageMaker
- Enthusiasm for collaborating with team members and other groups in a distributed work model
- Willingness to support and learn from teammates while sharing knowledge
- Comfortable working in a mid-day shift and remote setup
Work Schedule or Travel Requirements:
- 2-11 PM IST; 5 days a week
Posted 2 days ago
5.0 - 9.0 years
0 Lacs
hyderabad, telangana
On-site
As a Back-End Developer at our company, you will be responsible for developing an AI-driven prescriptive remediation model for SuperZoom, CBRE's data quality platform. Your primary focus will be on analyzing invalid records flagged by data quality rules and providing suggestions for corrected values based on historical patterns. It is crucial that the model you develop learns from past corrections to continuously enhance its future recommendations. The ideal candidate for this role should possess a solid background in machine learning, natural language processing (NLP), data quality, and backend development. Your key responsibilities will include developing a prescriptive remediation model to analyze and suggest corrections for bad records, implementing a feedback loop for continuous learning, building APIs and backend workflows for seamless integration, designing a data pipeline for real-time processing of flagged records, optimizing model performance for large-scale datasets, and collaborating effectively with data governance teams, data scientists, and front-end developers. Additionally, you will be expected to ensure the security, scalability, and performance of the system in handling sensitive data. To excel in this role, you should have at least 5 years of backend development experience with a focus on AI/ML-driven solutions. Proficiency in Python, including skills in Pandas, PySpark, and NumPy, is essential. Experience with machine learning libraries like Scikit-Learn, TensorFlow, or Hugging Face Transformers, along with a solid understanding of data quality, fuzzy matching, and NLP techniques for text correction, will be advantageous. Strong SQL skills and familiarity with databases such as PostgreSQL, Snowflake, or MS SQL Server are required, as well as expertise in building RESTful APIs and integrating ML models into production systems. 
Your problem-solving and analytical abilities will also be put to the test in handling diverse data quality issues effectively. Nice-to-have skills for this role include experience with vector databases (e.g., Pinecone, Weaviate) for similarity search, familiarity with LLMs and fine-tuning for data correction tasks, experience with Apache Airflow for workflow automation, and knowledge of reinforcement learning to enhance remediation accuracy over time. Your success in this role will be measured by the accuracy and relevance of suggestions provided for data quality issues in flagged records, improved model performance through iterative learning, seamless integration of the remediation model into SuperZoom, and on-time delivery of backend features in collaboration with the data governance team.
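As a rough sketch of the fuzzy-matching idea behind such a remediation model, Python's standard-library difflib can already suggest the closest valid value for a flagged record; the city list and similarity cutoff below are hypothetical, and a production system would learn these from historical corrections:

```python
import difflib

# Hypothetical reference list of valid values for a "city" field.
VALID_CITIES = ["Hyderabad", "Pune", "Bengaluru", "Chennai", "Mumbai"]

def suggest_correction(bad_value, candidates, cutoff=0.6):
    """Return the closest valid value, or None if nothing is similar enough."""
    matches = difflib.get_close_matches(bad_value, candidates, n=1, cutoff=cutoff)
    return matches[0] if matches else None

print(suggest_correction("Hyderbad", VALID_CITIES))  # → "Hyderabad"
print(suggest_correction("zzz", VALID_CITIES))       # → None (no close match)
```

Returning None when nothing clears the cutoff is important: a remediation model should abstain rather than guess on genuinely ambiguous records.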
Posted 2 days ago
0.0 - 3.0 years
0 Lacs
chandigarh
On-site
As a Python Backend Developer at Lookfinity, you will be a part of the Backend Engineering team that is focused on building scalable, data-driven, and cloud-native applications to solve real-world business problems. We are dedicated to maintaining clean architecture, enhancing performance, and designing elegant APIs. Join our dynamic team that is enthusiastic about backend craftsmanship and modern infrastructure. You will be working with a tech stack that includes languages and frameworks such as Python, FastAPI, and GraphQL (Ariadne), databases like PostgreSQL, MongoDB, and ClickHouse, messaging and task queues such as RabbitMQ and Celery, cloud services like AWS (EC2, S3, Lambda), Docker, Kubernetes, data processing tools like Pandas and SQL, and monitoring and logging tools like Prometheus and Grafana. Additionally, you will be utilizing version control systems like Git, GitHub/GitLab, and CI/CD tools. Your responsibilities will include developing and maintaining scalable RESTful and GraphQL APIs using Python, designing and integrating microservices with databases, writing clean and efficient code following best practices, working with Celery & RabbitMQ for async processing, containerizing services using Docker, collaborating with cross-functional teams, monitoring and optimizing application performance, participating in code reviews, and contributing to team knowledge-sharing. We are looking for candidates with 6 months to 1 year of hands-on experience in backend Python development, a good understanding of FastAPI or willingness to learn, basic knowledge of SQL and familiarity with databases like PostgreSQL and/or MongoDB, exposure to messaging systems like RabbitMQ, familiarity with cloud platforms like AWS, understanding of Docker and containerization, curiosity towards learning new technologies, clear communication skills, team spirit, and appreciation for clean code. 
Additional experience with GraphQL APIs, Kubernetes, data pipelines, CI/CD processes, and observability tools is considered a bonus. In this role, you will have the opportunity to work on modern backend systems, receive mentorship, and have technical growth plans tailored to your career goals. This is a full-time position with a day shift schedule located in Panchkula. Join us at Lookfinity and be a part of our innovative team dedicated to backend development.
Posted 2 days ago
3.0 - 7.0 years
0 Lacs
haryana
On-site
As a Senior Machine Learning Engineer at our AI/ML team, you will be responsible for designing and building intelligent search systems. Your focus will be on utilizing cutting-edge techniques in vector search, semantic similarity, and natural language processing to create innovative solutions. Your key responsibilities will include designing and implementing high-performance vector search systems using tools like FAISS, Milvus, Weaviate, or Pinecone. You will develop semantic search solutions that leverage embedding models and similarity scoring for precise and context-aware retrieval. Additionally, you will be expected to research and integrate the latest advancements in ANN algorithms, transformer-based models, and embedding generation. Collaboration with cross-functional teams, including data scientists, backend engineers, and product managers, will be essential to bring ML-driven features from concept to production. Furthermore, maintaining clear documentation of methodologies, experiments, and findings for technical and non-technical stakeholders will be part of your role. To qualify for this position, you should have at least 3 years of experience in Machine Learning, with a focus on NLP and vector search. A deep understanding of semantic embeddings, transformer models (e.g., BERT, RoBERTa, GPT), and hands-on experience with vector search frameworks is required. You should also possess a solid understanding of similarity search techniques such as cosine similarity, dot-product scoring, and clustering methods. Strong programming skills in Python and familiarity with libraries like NumPy, Pandas, Scikit-learn, and Hugging Face Transformers are necessary. Exposure to cloud platforms, preferably Azure, and container orchestration tools like Docker and Kubernetes is preferred. This is a full-time position with benefits including health insurance, internet reimbursement, and Provident Fund. 
The work schedule consists of day shifts, fixed shifts, and morning shifts, and the work location is in-person. The application deadline for this role is 18/04/2025.
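The similarity-scoring techniques named in this posting (cosine similarity over embeddings) reduce to a few lines of NumPy; the toy 3-dimensional vectors below stand in for real model embeddings, which would typically have hundreds of dimensions:

```python
import numpy as np

def cosine_similarity(a, b):
    # cos(theta) = a . b / (|a| * |b|)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy "embeddings": a query and two candidate documents.
query = np.array([1.0, 0.0, 1.0])
docs = {
    "doc_a": np.array([1.0, 0.0, 1.0]),  # same direction as the query
    "doc_b": np.array([0.0, 1.0, 0.0]),  # orthogonal to the query
}
scores = {name: cosine_similarity(query, vec) for name, vec in docs.items()}
best = max(scores, key=scores.get)
print(best, scores)
```

Vector search engines such as FAISS or Milvus apply the same scoring idea, but with approximate-nearest-neighbor indexes so it scales to millions of vectors.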
Posted 2 days ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
You will be responsible for architecting and delivering Automation and AI solutions using cutting-edge technologies, with a strong focus on foundation models and large language models. Working closely with business stakeholders, you will understand business requirements and design custom AI solutions to address complex challenges. This role offers the opportunity to accelerate AI adoption across the business landscape in the Statistical Applications and Data domain.
Your key responsibilities include defining and leading AI use cases in clinical projects from start-up through close-out. This involves tasks such as protocol development, site management, and data review. To excel in this role, you must be a proficient programmer capable of developing AI solutions thoroughly. Additionally, you will be expected to coach and mentor junior AI/ML engineers to enhance and accelerate AI adoption in business processes.
Required Technical Skills:
- Bachelor's degree in a relevant scientific discipline (e.g., Biomedical Engineering, Life Sciences, Nursing, Pharmacy) or clinical background (e.g., MD - Doctor of Medicine, or RN - Registered Nurse)
- Advanced degree (e.g., Master's or Ph.D.) in Data Science or AI/ML
- Overall experience of 10-12 years, with a minimum of 5-7 years in clinical research, particularly managing clinical trials in pharmaceutical, biotechnology, or contract research organization (CRO) settings
- Minimum of 5-7 years of experience in AI research and development, specifically focusing on healthcare or life sciences applications
- Strong understanding of clinical trial regulations and guidelines, including Good Clinical Practice (GCP), International Conference on Harmonization (ICH), and applicable local regulations
- Proven track record in designing and delivering AI solutions, emphasizing foundation models, large language models, or similar technologies
- Experience in natural language processing (NLP) and text analytics is highly desirable
- Proficiency in programming languages such as Python and R, and experience with AI frameworks like TensorFlow, PyTorch, or Hugging Face
- Knowledge of libraries such as scikit-learn, Pandas, Matplotlib, etc.
- Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and related services is a plus
- Experience working with large datasets, performing data pre-processing, feature engineering, and model evaluation
- Proficiency in solution architecture and design, translating business requirements into technical specifications, and developing scalable and robust AI solutions
In summary, this role requires a seasoned professional with a strong background in AI, clinical research, and technical expertise to drive innovation and AI adoption in the pharmaceutical domain, particularly in clinical settings.
Posted 2 days ago
5.0 - 10.0 years
0 Lacs
haryana
On-site
The Senior AI Engineer - Agentic AI position at JMD Megapolis, Gurugram requires a minimum of 5 years of experience in Machine Learning engineering, Data Science, or similar roles focusing on applied data science and entity resolution. You will be expected to have a strong background in machine learning, data mining, and statistical analysis for model development, validation, implementation, and product integration. Proficiency in programming languages like Python or Scala, along with experience in data manipulation and analysis libraries such as Pandas, NumPy, and scikit-learn, is essential. Additionally, experience with large-scale data processing frameworks like Spark, proficiency in SQL and database concepts, and a solid understanding of feature engineering, dimensionality reduction, and data preprocessing techniques are required.
As a Senior AI Engineer, you should possess excellent problem-solving skills and the ability to devise creative solutions to complex data challenges. Strong communication skills are crucial for effective collaboration with cross-functional teams and for explaining technical concepts to non-technical stakeholders. Attention to detail, the ability to work independently, and a passion for staying updated with the latest advancements in the field of data science are desirable traits for this role.
The ideal candidate would hold a Master's or PhD in Computer Science, Data Science, Statistics, or a related quantitative field, with 5-10 years of industry experience in developing AI solutions, including machine learning and deep learning models. Strong programming skills in Python and familiarity with libraries such as TensorFlow, PyTorch, or scikit-learn are necessary. Furthermore, a solid understanding of machine learning algorithms, statistical analysis, and data preprocessing techniques, along with experience in working with large datasets to implement scalable AI solutions, is required.
Proficiency in data visualization and reporting tools, knowledge of cloud platforms like AWS, Azure, and Google Cloud for AI deployment, and familiarity with software development practices and version control systems are all valued skills. Problem-solving abilities, creative thinking to overcome challenges, and strong communication and teamwork skills to collaborate effectively with cross-functional teams are essential for success in this role.
Posted 2 days ago
3.0 - 7.0 years
0 Lacs
haryana
On-site
Are you passionate about data and coding? Do you enjoy working in a fast-paced and dynamic start-up environment? If so, we are looking for a talented Python developer to join our team! We are a data consultancy start-up with a global client base, headquartered in London, UK, and we are looking for someone to join us full time on-site in our cool office in Gurugram.
Uptitude is a forward-thinking consultancy that specializes in providing exceptional data and business intelligence solutions to clients worldwide. Our team is passionate about empowering businesses with data-driven insights, enabling them to make informed decisions and achieve remarkable results. At Uptitude, we embrace a vibrant and inclusive culture, where innovation, excellence, and collaboration thrive.
As a Python Developer at Uptitude, you will be responsible for developing high-quality, scalable, and efficient software solutions. Your primary focus will be on designing and implementing Python-based applications, integrating data sources, and working closely with the data and business intelligence teams. You will have the opportunity to contribute to all stages of the software development life cycle, from concept and design to testing and deployment. In addition to your technical skills, you should be a creative thinker, have effective communication skills, and be comfortable working in a fast-paced and dynamic environment.
Requirements:
- 3-5 years of experience as a Python Developer or similar role
- Strong proficiency in Python and its core libraries (e.g., Pandas, NumPy, Matplotlib)
- Proficiency in web frameworks (e.g., Flask, Django) and RESTful APIs
- Working knowledge of database technologies (e.g., Postgres, Redis, RDBMS) and data modeling concepts
- Hands-on experience with advanced Excel
- Ability to work with cross-functional teams and communicate complex ideas to non-technical stakeholders
- Awareness of ISO 27001; creative thinker and problem solver
- Strong attention to detail and ability to work in a fast-paced environment
The head office is based in London, UK, with the role located in Gurugram, India.
At Uptitude, we embrace a set of core values that guide our work and define our culture:
- Be Awesome: Strive for excellence in everything you do, continuously improving your skills and delivering exceptional results.
- Step Up: Take ownership of challenges, be proactive, and seek opportunities to contribute beyond your role.
- Make a Difference: Embrace innovation, think creatively, and contribute to the success of our clients and the company.
- Have Fun: Foster a positive and enjoyable work environment, celebrating achievements and building strong relationships.
Uptitude values its employees and offers a competitive benefits package, including:
- Competitive salary commensurate with experience and qualifications
- Private health insurance coverage
- Offsite trips to encourage team building and knowledge sharing
- Quarterly team outings to unwind and celebrate achievements
- Corporate English lessons with a UK instructor
We are a fast-growing company with a global client base, so this is an excellent opportunity for the right candidate to grow and develop their skills in a dynamic and exciting environment. If you are passionate about coding, have experience with Python, and want to be part of a team that is making a real impact, we want to hear from you!
Posted 2 days ago
4.0 - 8.0 years
0 Lacs
karnataka
On-site
We empower our people to stay resilient and relevant in a constantly changing world. We are looking for individuals who are always seeking creative ways to grow and learn, individuals who aspire to make a real impact, both now and in the future. If this resonates with you, then you would be a valuable addition to our dynamic international team. We are currently seeking a Senior Software Engineer - Data Engineer (AI Solutions).
In this role, you will have the opportunity to:
- Design, build, and maintain data pipelines to cater to the requirements of various stakeholders, including software developers, data scientists, analysts, and business teams.
- Ensure that the data pipelines are modular, resilient, and optimized for performance and low maintenance.
- Collaborate with AI/ML teams to support training, inference, and monitoring needs through structured data delivery.
- Implement ETL/ELT workflows for structured, semi-structured, and unstructured data using cloud-native tools.
- Work with large-scale data lakes, streaming platforms, and batch processing systems to ingest and transform data.
- Establish robust data validation, logging, and monitoring strategies to uphold data quality and lineage.
- Optimize data infrastructure for scalability, cost-efficiency, and observability in cloud-based environments.
- Ensure adherence to governance policies and data access controls across projects.
To excel in this role, you should possess the following qualifications and skills:
- A Bachelor's degree in Computer Science, Information Systems, or a related field.
- Minimum of 4 years of experience in designing and deploying scalable data pipelines in cloud environments.
- Proficiency in Python, SQL, and data manipulation tools and frameworks such as Apache Airflow, Spark, dbt, and Pandas.
- Practical experience with data lakes, data warehouses (e.g., Redshift, Snowflake, BigQuery), and streaming platforms (e.g., Kafka, Kinesis).
- Strong understanding of data modeling, schema design, and data transformation patterns.
- Experience with AWS (Glue, S3, Redshift, SageMaker) or Azure (Data Factory, Azure ML Studio, Azure Storage).
- Familiarity with CI/CD for data pipelines and infrastructure-as-code (e.g., Terraform, CloudFormation).
- Exposure to building data solutions that support AI/ML pipelines, including feature stores and real-time data ingestion.
- Understanding of observability, data versioning, and pipeline testing tools.
- Previous engagement with diverse stakeholders, data requirement gathering, and support for iterative development cycles.
- Background or familiarity with the Power, Energy, or Electrification sector is advantageous.
- Knowledge of security best practices and data compliance policies for enterprise-grade systems.
This position is based in Bangalore, offering you the opportunity to collaborate with teams that impact entire cities and countries and shape the future. Siemens is a global organization comprising over 312,000 individuals across more than 200 countries. We are committed to equality and encourage applications from diverse backgrounds that mirror the communities we serve. Employment decisions at Siemens are made based on qualifications, merit, and business requirements. Join us with your curiosity and creativity to help shape a better tomorrow.
Learn more about Siemens careers at: www.siemens.com/careers
Discover the digital world of Siemens here: www.siemens.com/careers/digitalminds
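One simple form the data-validation responsibility described in this posting can take is a rule-based check over each incoming batch; the column names and rules below are illustrative assumptions, not from any Siemens pipeline:

```python
import pandas as pd

# Hypothetical batch of pipeline records with deliberate quality problems.
batch = pd.DataFrame({
    "meter_id": [101, 102, None, 104],
    "reading_kwh": [5.2, -1.0, 3.3, 7.8],
})

def validate(df):
    """Return a dict mapping rule name -> count of violating rows."""
    return {
        "missing_meter_id": int(df["meter_id"].isna().sum()),
        "negative_reading": int((df["reading_kwh"] < 0).sum()),
    }

report = validate(batch)
print(report)  # {'missing_meter_id': 1, 'negative_reading': 1}
```

In practice such checks run as an automated pipeline step, with the report fed to logging and alerting rather than printed.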
Posted 2 days ago
2.0 - 6.0 years
0 Lacs
pune, maharashtra
On-site
As an LLM Engineer at HuggingFace, you will play a crucial role in bridging the gap between advanced language models and real-world applications. Your primary focus will be on fine-tuning, evaluating, and deploying LLMs using frameworks such as HuggingFace and Ollama. You will be responsible for developing React-based applications with seamless LLM integrations through REST, WebSockets, and APIs. Additionally, you will work on building scalable pipelines for data extraction, cleaning, and transformation, as well as creating and managing ETL workflows for training data and RAG pipelines. Your role will also involve driving full-stack LLM feature development from prototype to production.
To excel in this position, you should have at least 2 years of professional experience in ML engineering, AI tooling, or full-stack development. Strong hands-on experience with HuggingFace Transformers and LLM fine-tuning is essential. Proficiency in React, TypeScript/JavaScript, and back-end integration is required, along with comfort working with data engineering tools such as Python, SQL, and Pandas. Familiarity with vector databases, embeddings, and LLM orchestration frameworks is a plus. Candidates with experience in Ollama, LangChain, or LlamaIndex will be given bonus points. Exposure to real-time LLM applications like chatbots, copilots, or internal assistants, as well as prior work with enterprise or SaaS AI integrations, is highly valued.
This role offers a remote-friendly environment with flexible working hours and high ownership. Join our small, fast-moving team at HuggingFace and be part of building the next generation of intelligent systems. If you are passionate about working on impactful AI products and have the drive to grow in this field, we would love to hear from you.
Posted 2 days ago
0.0 - 4.0 years
0 Lacs
delhi
On-site
As a Data Analyst Intern at our company based in Delhi, you will be responsible for aggregating, cleansing, and analyzing large datasets from various sources. Your role will involve engineering and optimizing complex SQL queries for data extraction, manipulation, and detailed analysis. Additionally, you will develop advanced Python scripts to automate data workflows, transformation processes, and create sophisticated visualizations. You will also be tasked with building dynamic dashboards and analytical reports using Excel's advanced features like Pivot Tables, VLOOKUP, and Power Query. Your key responsibilities include decoding intricate data patterns to extract actionable intelligence that drives strategic decision-making. It is essential to maintain strict data integrity, precision, and security protocols in all analytical outputs. As part of your role, you will design and implement automation frameworks to eliminate redundancies and improve operational efficiency. To excel in this role, you should have a mastery of SQL, including crafting complex queries, optimizing joins, executing advanced aggregations, and efficiently structuring data with Common Table Expressions (CTEs). Proficiency in Python is crucial, with hands-on experience in data-centric libraries such as Pandas, NumPy, Matplotlib, and Seaborn for data analysis and visualization. Advanced Excel skills are also required, encompassing Pivot Tables, Macros, and Power Query to streamline data processing and enhance analytical efficiency. Furthermore, you should possess superior analytical acumen with exceptional problem-solving abilities and the capability to extract meaningful insights from complex datasets. Strong communication and presentation skills are essential to distill intricate data findings into compelling narratives for stakeholder interactions. This position offers opportunities for full-time, permanent, and internship job types. 
Benefits include paid sick time, paid time off, a day shift schedule, performance bonuses, and yearly bonuses. The work location is in person. If you are looking to apply your analytical skills in a dynamic environment and contribute to strategic decision-making through data analysis, this Data Analyst Intern position could be the perfect fit for you.
Posted 3 days ago
4.0 - 9.0 years
9 - 14 Lacs
Bengaluru
Work from Office
Job Posting Title: SR. DATA SCIENTIST Band/Level: 5-2-C Education Experience: Bachelor's Degree (High School +4 years) Employment Experience: 5-7 years At TE, you will unleash your potential working with people from diverse backgrounds and industries to create a safer, sustainable and more connected world. Job Overview Solves complex problems and helps stakeholders make data-driven decisions by leveraging quantitative methods, such as machine learning. It often involves synthesizing large volumes of information and extracting signals from data in a programmatic way. Roles & Responsibilities Key Responsibilities Design, train, and evaluate supervised & unsupervised models (regression, classification, clustering, uplift). Apply automated hyperparameter optimization (Optuna, HyperOpt) and interpretability techniques (SHAP, LIME). Perform deep exploratory data analysis (EDA) to uncover patterns & anomalies. Engineer predictive features from structured, semi-structured, and unstructured data; manage feature stores (Feast). Ensure data quality through rigorous validation and automated checks. Build hierarchical, intermittent, and multi-seasonal forecasts for thousands of SKUs. Implement traditional (ARIMA, ETS, Prophet) and deep-learning (RNN/LSTM, Temporal Fusion Transformer) approaches. Reconcile forecasts across product/category hierarchies; quantify accuracy (MAPE, WAPE) and bias. Establish model tracking & registry (MLflow, SageMaker Model Registry). Develop CI/CD pipelines for automated retraining, validation, and deployment (Airflow, Kubeflow, GitHub Actions). Monitor data & concept drift; trigger retuning or rollback as needed. Design and analyze A/B tests, causal inference studies, and Bayesian experiments. Provide statistically grounded insights and recommendations to stakeholders. Translate business objectives into data-driven solutions; present findings to exec & non-tech audiences. Mentor junior data scientists, review code/notebooks, and champion best practices.
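The forecast-accuracy metrics named above, MAPE and WAPE, are simple to compute. A short sketch on invented demand numbers (note that MAPE is undefined for zero-actual periods, which is exactly why WAPE is preferred for intermittent SKUs):

```python
import numpy as np

# Toy actuals and forecasts for four periods (all numbers illustrative).
actual   = np.array([100.0, 80.0, 0.0, 120.0])
forecast = np.array([ 90.0, 88.0, 5.0, 110.0])

# WAPE: sum of absolute errors over sum of actuals -- robust to zero-demand periods.
wape = np.abs(actual - forecast).sum() / np.abs(actual).sum()

# MAPE: mean per-period percentage error; undefined where actual == 0,
# so zero-demand periods are masked out here.
mask = actual != 0
mape = np.mean(np.abs((actual[mask] - forecast[mask]) / actual[mask]))

print(round(wape, 4), round(mape, 4))
```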
Desired Candidate Minimum Qualifications M.S. in Statistics (preferred) or related field such as Applied Mathematics, Computer Science, Data Science. 5+ years building and deploying ML models in production. Expert-level proficiency in Python (Pandas, NumPy, SciPy, scikit-learn), SQL, and Git. Demonstrated success delivering large-scale demand-forecasting or time-series solutions. Hands-on experience with MLOps tools (MLflow, Kubeflow, SageMaker, Airflow) for model tracking and automated retraining. Solid grounding in statistical inference, hypothesis testing, and experimental design. Preferred / Nice-to-Have Experience in supply-chain, retail, or manufacturing domains with high-granularity SKU data. Familiarity with distributed data frameworks (Spark, Dask) and cloud data warehouses (BigQuery, Snowflake). Knowledge of deep-learning libraries (PyTorch, TensorFlow) and probabilistic programming (PyMC, Stan). Strong data-visualization skills (Plotly, Dash, Tableau) for storytelling and insight communication. Competencies ABOUT TE CONNECTIVITY TE Connectivity plc (NYSE: TEL) is a global industrial technology leader creating a safer, sustainable, productive, and connected future. Our broad range of connectivity and sensor solutions enable the distribution of power, signal and data to advance next-generation transportation, energy networks, automated factories, data centers, medical technology and more. With more than 85,000 employees, including 9,000 engineers, working alongside customers in approximately 130 countries, TE ensures that EVERY CONNECTION COUNTS. Learn more at www.te.com and on LinkedIn, Facebook, WeChat, Instagram and X (formerly Twitter). WHAT TE CONNECTIVITY OFFERS: We are pleased to offer you an exciting total package that can also be flexibly adapted to changing life situations - the well-being of our employees is our top priority!
Competitive Salary Package Performance-Based Bonus Plans Health and Wellness Incentives Employee Stock Purchase Program Community Outreach Programs / Charity Events IMPORTANT NOTICE REGARDING RECRUITMENT FRAUD TE Connectivity has become aware of fraudulent recruitment activities being conducted by individuals or organizations falsely claiming to represent TE Connectivity. Please be advised that TE Connectivity never requests payment or fees from job applicants at any stage of the recruitment process. All legitimate job openings are posted exclusively on our official careers website at te.com/careers, and all email communications from our recruitment team will come only from actual email addresses ending in @te.com . If you receive any suspicious communications, we strongly advise you not to engage or provide any personal information, and to report the incident to your local authorities. Across our global sites and business units, we put together packages of benefits that are either supported by TE itself or provided by external service providers. In principle, the benefits offered can vary from site to site.
Posted 3 days ago
5.0 - 9.0 years
15 - 25 Lacs
Bengaluru
Work from Office
SKILLS AND COMPETENCIES Technical Skills: • Advanced proficiency in Python with expertise in data science libraries (NumPy, Pandas, scikit-learn) and deep learning frameworks (PyTorch, TensorFlow) • Extensive experience with LLM frameworks (Hugging Face Transformers, LangChain) and prompt engineering techniques • Experience with big data processing using Spark for large-scale data analytics • Version control and experiment tracking using Git and MLflow • Software Engineering & Development: Advanced proficiency in Python, familiarity with Go or Rust, expertise in microservices, test-driven development, and concurrency processing. • DevOps & Infrastructure: Experience with Infrastructure as Code (Terraform, CloudFormation), CI/CD pipelines (GitHub Actions, Jenkins), and container orchestration (Kubernetes) with Helm and service mesh implementations. • LLM Infrastructure & Deployment: Proficiency in LLM serving platforms such as vLLM and FastAPI, model quantization techniques, and vector database management. • MLOps & Deployment: Utilization of containerization strategies for ML workloads, experience with model serving tools like TorchServe or TF Serving, and automated model retraining. • Cloud & Infrastructure: Strong grasp of advanced cloud services (AWS, GCP, Azure) and network security for ML systems. • LLM Project Experience: Expertise in developing chatbots, recommendation systems, translation services, and optimizing LLMs for performance and security. • General Skills: Python, SQL, knowledge of machine learning frameworks (Hugging Face, TensorFlow, PyTorch), and experience with cloud platforms like AWS or GCP. • Experience in creating LLD for the provided architecture. • Experience working in microservices based architecture. 
Domain Expertise: • Strong mathematical foundation in statistics, probability, linear algebra, and optimization • Deep understanding of ML and LLM development lifecycle, including fine-tuning and evaluation • Expertise in feature engineering, embedding optimization, and dimensionality reduction • Advanced knowledge of A/B testing, experimental design, and statistical hypothesis testing • Experience with RAG systems, vector databases, and semantic search implementation • Proficiency in LLM optimization techniques including quantization and knowledge distillation • Understanding of MLOps practices for model deployment and monitoring Professional Competencies: • Strong analytical thinking with ability to solve complex ML challenges • Excellent communication skills for presenting technical findings to diverse audiences • Experience translating business requirements into data science solutions • Project management skills for coordinating ML experiments and deployments • Strong collaboration abilities for working with cross-functional teams • Dedication to staying current with latest ML research and best practices • Ability to mentor and share knowledge with team members Primary Skill Set: Microservices LLM, Agentic AI Framework, Predictive modelling
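As a rough illustration of the A/B testing and statistical hypothesis-testing skills listed above, here is a minimal two-sample test on synthetic metrics (all numbers invented, with an exaggerated effect size for clarity):

```python
import numpy as np
from scipy import stats

# Synthetic A/B test: a session-level metric for control vs. variant.
rng = np.random.default_rng(0)
control = rng.normal(loc=10.0, scale=2.0, size=500)
variant = rng.normal(loc=11.0, scale=2.0, size=500)

# Welch's t-test (unequal variances) for a difference in means.
t_stat, p_value = stats.ttest_ind(variant, control, equal_var=False)
print(t_stat, p_value)
```

A real experiment would also fix sample sizes in advance via power analysis and guard against peeking and multiple comparisons.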
Posted 3 days ago
5.0 - 7.0 years
0 Lacs
Bengaluru
Hybrid
Role & responsibilities: Python, Pandas, SQL - DataFrames, Series; data fetching (from flat files/databases); loc, iloc, and ix; merge/joins/concat; apply, agg, and groupby functions; Timedelta. Preferred candidate profile: 5-7 years of experience in Python, Pandas, SQL; Bangalore location; immediate joiners.
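For reference, the Pandas operations listed above look roughly like this on toy data (note that .ix was removed in pandas 1.0; .loc and .iloc cover its use cases):

```python
import pandas as pd

# Toy frames illustrating merge, groupby/agg, loc/iloc, and Timedelta.
sales = pd.DataFrame({"emp_id": [1, 2, 1], "amount": [250, 400, 150]})
staff = pd.DataFrame({"emp_id": [1, 2], "name": ["Asha", "Ravi"]})

merged = sales.merge(staff, on="emp_id", how="left")   # join
total = merged.groupby("name")["amount"].sum()          # group by + aggregate
first_row = merged.iloc[0]                              # position-based access
asha_rows = merged.loc[merged["name"] == "Asha"]        # label/boolean access
delta = pd.Timedelta("2 days") + pd.Timedelta("6 hours")

print(total.to_dict(), len(asha_rows), delta)
```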
Posted 3 days ago
5.0 - 10.0 years
10 - 14 Lacs
Bengaluru
Work from Office
Senior Machine Learning Engineer - Recommender Systems Join our team at Thomson Reuters and contribute to the global knowledge economy. Our innovative technology influences global markets and supports professionals worldwide in making pivotal decisions. Collaborate with some of the brightest minds on diverse projects to craft next-generation solutions that have a significant impact. As a leader in providing intelligent information, we value the unique perspectives that foster the advancement of our business and your professional journey. Are you excited about the opportunity to leverage your extensive technical expertise to guide a development team through the complexities of full life cycle implementation at a top-tier company? Our Commercial Engineering team is eager to welcome a skilled Senior Machine Learning Engineer to our established global engineering group. We're looking for someone enthusiastic, an independent thinker, who excels in a collaborative environment across various disciplines, and is at ease interacting with a diverse range of individuals and technological stacks. This is your chance to make a lasting impact by transforming customer interactions as we develop the next generation of an enterprise-wide experience. About the Role: As a Senior Machine Learning Engineer, you will: Spearhead the development and technical implementation of machine learning solutions, including configuration and integration, to fulfill business, product, and recommender system objectives. Create machine learning solutions that are scalable, dependable, and secure. Craft and sustain technical outputs such as design documentation and representative models. Contribute to the establishment of machine learning best practices, technical standards, model designs, and quality control, including code reviews. Provide expert oversight, guidance on implementation, and solutions for technical challenges.
Collaborate with an array of stakeholders, cross-functional and product teams, business units, technical specialists, and architects to grasp the project scope, requirements, solutions, data, and services. Promote a team-focused culture that values information sharing and diverse viewpoints. Cultivate an environment of continual enhancement, learning, innovation, and deployment. About You: You are an excellent candidate for the role of Senior Machine Learning Engineer if you possess: At least 5 years of experience in addressing practical machine learning challenges, particularly with Recommender Systems, to enhance user efficiency, reliability, and consistency. A profound comprehension of data processing, machine learning infrastructure, and DevOps/MLOps practices. A minimum of 2 years of experience with cloud technologies (AWS is preferred), including services, networking, and security principles. Direct experience in machine learning and orchestration, developing intricate multi-tenant machine learning products. Proficient Python programming skills, SQL, and data modeling expertise, with DBT considered a plus. Familiarity with Spark, Airflow, PyTorch, Scikit-learn, Pandas, Keras, and other relevant ML libraries. Experience in leading and supporting engineering teams. Robust background in crafting data science and machine learning solutions. A creative, resourceful, and effective problem-solving approach. #LI-FZ1 What's in it For You? Hybrid Work Model: We've adopted a flexible hybrid working environment (2-3 days a week in the office depending on the role) for our office-based roles while delivering a seamless experience that is digitally and physically connected. Flexibility & Work-Life Balance: Flex My Way is a set of supportive workplace policies designed to help manage personal and professional responsibilities, whether caring for family, giving back to the community, or finding time to refresh and reset.
This builds upon our flexible work arrangements, including work from anywhere for up to 8 weeks per year, empowering employees to achieve a better work-life balance. Career Development and Growth: By fostering a culture of continuous learning and skill development, we prepare our talent to tackle tomorrow's challenges and deliver real-world solutions. Our Grow My Way programming and skills-first approach ensures you have the tools and knowledge to grow, lead, and thrive in an AI-enabled future. Industry Competitive Benefits: We offer comprehensive benefit plans to include flexible vacation, two company-wide Mental Health Days off, access to the Headspace app, retirement savings, tuition reimbursement, employee incentive programs, and resources for mental, physical, and financial wellbeing. Culture: Globally recognized, award-winning reputation for inclusion and belonging, flexibility, work-life balance, and more. We live by our values: Obsess over our Customers, Compete to Win, Challenge (Y)our Thinking, Act Fast / Learn Fast, and Stronger Together. Social Impact: Make an impact in your community with our Social Impact Institute. We offer employees two paid volunteer days off annually and opportunities to get involved with pro-bono consulting projects and Environmental, Social, and Governance (ESG) initiatives. Making a Real-World Impact: We are one of the few companies globally that helps its customers pursue justice, truth, and transparency. Together, with the professionals and institutions we serve, we help uphold the rule of law, turn the wheels of commerce, catch bad actors, report the facts, and provide trusted, unbiased information to people all over the world. Thomson Reuters informs the way forward by bringing together the trusted content and technology that people and organizations need to make the right decisions. We serve professionals across legal, tax, accounting, compliance, government, and media.
Our products combine highly specialized software and insights to empower professionals with the data, intelligence, and solutions needed to make informed decisions, and to help institutions in their pursuit of justice, truth, and transparency. Reuters, part of Thomson Reuters, is a world-leading provider of trusted journalism and news. We are powered by the talents of 26,000 employees across more than 70 countries, where everyone has a chance to contribute and grow professionally in flexible work environments. At a time when objectivity, accuracy, fairness, and transparency are under attack, we consider it our duty to pursue them. Sound exciting? Join us and help shape the industries that move society forward. As a global business, we rely on the unique backgrounds, perspectives, and experiences of all employees to deliver on our business goals. To ensure we can do that, we seek talented, qualified employees in all our operations around the world regardless of race, color, sex/gender, including pregnancy, gender identity and expression, national origin, religion, sexual orientation, disability, age, marital status, citizen status, veteran status, or any other protected classification under applicable law. Thomson Reuters is proud to be an Equal Employment Opportunity Employer providing a drug-free workplace. We also make reasonable accommodations for qualified individuals with disabilities and for sincerely held religious beliefs in accordance with applicable law. More information on requesting an accommodation here. Learn more on how to protect yourself from fraudulent job postings here. More information about Thomson Reuters can be found on thomsonreuters.com.
Posted 3 days ago
2.0 - 5.0 years
8 - 12 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
Your Role Develop and implement Generative AI / AI solutions on Google Cloud Platform. Work with cross-functional teams to design and deliver AI-powered products and services. Work on developing, versioning and executing Python code. Deploy models as endpoints in a Dev environment. Solid understanding of Python; deep learning frameworks such as TensorFlow, PyTorch, or JAX; natural language processing (NLP) and machine learning (ML); Cloud Storage, Compute Engine, Vertex AI, Cloud Functions, Pub/Sub, etc.; Generative AI support in Vertex AI, specifically hands-on experience with Generative AI models like Gemini, Vertex AI Search, etc. Your Profile Experience in Generative AI development with Google Cloud Platform. Experience in delivering an AI solution on the Vertex AI platform. Experience in developing and deploying AI solutions with ML. What you'll love about working here You can shape your career with us. We offer a range of career paths and internal opportunities within Capgemini group. You will also get personalized career guidance from our leaders. You will get comprehensive wellness benefits including health checks, telemedicine, insurance with top-ups, elder care, partner coverage or new parent support via flexible work. You will have the opportunity to learn on one of the industry's largest digital learning platforms, with access to 250,000+ courses and numerous certifications. About Capgemini Location - Hyderabad,Pune,Bengaluru,Chennai,Mumbai
Posted 3 days ago
2.0 - 6.0 years
8 - 12 Lacs
Pune, Chennai, Bengaluru
Work from Office
Your Role Design and develop Agentic AI architectures that can autonomously plan, reason, and execute tasks. Implement multi-agent communication protocols for agent-to-agent collaboration and coordination. Work with Large Language Models (LLMs) such as LLaMA, GPT, etc., for language understanding, generation, and task planning. Develop and integrate Retrieval-Augmented Generation (RAG) pipelines to enhance the reasoning capability of agents with external knowledge. Perform fine-tuning of foundational models for specific domains, tasks, or use cases. Design and experiment with lightweight models (e.g., Phi, Tiny LLMs) for efficiency in real-time or on-device scenarios. Collaborate cross-functionally with Data Scientists, ML Engineers, and Product Teams to deliver end-to-end AI solutions. Your Profile Proven experience in developing and deploying Agentic AI systems. Strong experience with LLMs, particularly Meta's LLaMA family or similar open-source models. Hands-on experience with RAG (Retrieval-Augmented Generation) architectures and related tools like LangChain, Haystack, etc. Knowledge and experience with model fine-tuning using frameworks like HuggingFace Transformers, PEFT, or LoRA. Experience in implementing agent-to-agent communication frameworks and multi-agent task handling systems. Proficiency in Python and AI/ML libraries such as PyTorch, TensorFlow, Transformers, etc. Familiarity with prompt engineering and chaining logic for LLMs. What you'll love about working here You can shape your career with us. We offer a range of career paths and internal opportunities within Capgemini group. You will also get personalized career guidance from our leaders. You will get comprehensive wellness benefits including health checks, telemedicine, insurance with top-ups, elder care, partner coverage or new parent support via flexible work.
You will have the opportunity to learn on one of the industry's largest digital learning platforms, with access to 250,000+ courses and numerous certifications. We're committed to ensuring that people of all backgrounds feel encouraged and have a sense of belonging at Capgemini. You are valued for who you are, and you can bring your original self to work. About Capgemini Location - Pune,Bengaluru,Chennai,Hyderabad
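A minimal sketch of the retrieval step in a RAG pipeline, using hand-made 4-dimensional vectors in place of a real embedding model and vector store (everything here is illustrative):

```python
import numpy as np

# Toy "document store": three snippets with made-up embedding vectors.
docs = ["reset password", "pricing tiers", "deploy model"]
doc_vecs = np.array([[0.9, 0.1, 0.0, 0.0],
                     [0.0, 0.8, 0.2, 0.0],
                     [0.1, 0.0, 0.9, 0.2]])
query_vec = np.array([0.85, 0.05, 0.05, 0.0])  # "how do I reset my password?"

def cosine(a, b):
    # Cosine similarity: dot product over the product of vector norms.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

scores = [cosine(query_vec, v) for v in doc_vecs]
best = docs[int(np.argmax(scores))]
print(best)  # the retrieved context that would be prepended to the LLM prompt
```

A production pipeline would swap the toy vectors for a real embedding model and an approximate-nearest-neighbor index, which is exactly what tools like LangChain and Haystack orchestrate.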
Posted 3 days ago
4.0 - 7.0 years
4 - 8 Lacs
Mumbai
Work from Office
Commercial Management: Roles & Responsibilities Improve WIN probability Deal shaping from commercial perspective Help arrive at Right Price to Win Internal Benchmarking Alternate pricing and commercial structures Client Business case Identify margin / price improvement levers Develop appropriate commercial solutions Review cost modeling Review Rfx documents to highlight risks Review compliance with internal guidelines Review pricing sheet responses Draft end to end Responses Commercial responses including contract markup, assumptions and T&Cs Establish MOUs/ agreements with Internal BUs Comprehensive contract documents with client and sub-contractors Commercial negotiation Commercial handover from pre-sale to post-sale teams Works in the area of Software Engineering, which encompasses the development, maintenance and optimization of software solutions/applications.
1. Applies scientific methods to analyse and solve software engineering problems.
2. He/she is responsible for the development and application of software engineering practice and knowledge, in research, design, development and maintenance.
3. His/her work requires the exercise of original thought and judgement and the ability to supervise the technical and administrative work of other software engineers.
4. The software engineer builds skills and expertise of his/her software engineering discipline to reach standard software engineer skills expectations for the applicable role, as defined in Professional Communities.
5. The software engineer collaborates and acts as team player with other software engineers and stakeholders.
Skills (competencies) Verbal Communication
Posted 3 days ago
0.0 - 1.0 years
4 - 8 Lacs
Nagpur, Bengaluru
Work from Office
Ocean Software Technologies is looking for a Python Developer to join our dynamic team and embark on a rewarding career journey. Responsibilities include: Coordinating with development teams to determine application requirements. Writing scalable code using the Python programming language. Testing and debugging applications. Developing back-end components. Integrating user-facing elements using server-side logic. Assessing and prioritizing client feature requests. Integrating data storage solutions. Reprogramming existing databases to improve functionality. Developing digital tools to monitor online traffic. Write effective, scalable code. Develop back-end components to improve responsiveness and overall performance. Integrate user-facing elements into applications. Test and debug programs. Improve functionality of existing systems. Implement security and data protection solutions. Assess and prioritize feature requests. Coordinate with internal teams to understand user requirements and provide technical solutions.
Posted 3 days ago
3.0 - 7.0 years
7 - 11 Lacs
Bengaluru
Work from Office
As a key member of our Data Science team, you will be responsible for developing innovative AI/ML solutions across diverse business domains. This includes designing, implementing, and optimizing advanced analytics models to address complex business challenges and drive data-driven decision making. Your core responsibility will be to extract actionable insights from large datasets, develop predictive algorithms, and create robust machine learning pipelines. You will collaborate closely with cross-functional teams including business analysts, software engineers, and product managers to understand business requirements, define problem statements, and deliver scalable solutions. Additionally, you'll be expected to stay current with emerging technologies and methodologies in the AI/ML landscape to ensure our technical approaches remain cutting-edge and effective. Desired Skills and experience: Demonstrated expertise in applying advanced statistical modeling, machine learning algorithms, and deep learning techniques. Proficiency in programming languages such as Python for data analysis and model development. Proficiency in cloud platforms such as Azure, Azure Data Factory, Snowflake, Databricks. Experience with data manipulation, cleaning, and preprocessing using pandas, NumPy, or equivalent libraries. Strong knowledge of SQL and experience working with various database systems and big data technologies. Proven track record of developing and deploying machine learning models in production environments. Experience with version control systems (e.g., Git) and collaborative development practices. Proficiency with visualization tools and libraries such as Matplotlib, Seaborn, Tableau, or PowerBI. Strong mathematics background including statistics, probability, linear algebra, and calculus. Excellent communication skills with ability to translate technical concepts to non-technical stakeholders.
Experience working in cross-functional teams and managing projects through the full data science lifecycle. Knowledge of ethical considerations in AI development including bias detection and mitigation techniques. Key Responsibilities Analyze complex datasets to extract meaningful insights and patterns using statistical methods and machine learning techniques. Design, develop and implement advanced machine learning models and algorithms to solve business problems and drive data-driven decision making. Perform feature engineering, model selection, and hyperparameter tuning to optimize model performance and accuracy. Create and maintain data processing pipelines for efficient data collection, cleaning, transformation, and integration. Collaborate with cross-functional teams to understand business requirements and translate them into analytical solutions. Evaluate model performance using appropriate metrics and validation techniques to ensure reliability and robustness. Present findings, visualizations, and recommendations to stakeholders in clear, accessible formats tailored to technical and non-technical audiences. Stay current with the latest advancements in machine learning, deep learning, and statistical methods through continuous learning and research. Develop proof-of-concept applications to demonstrate the value and feasibility of data science solutions. Implement A/B testing and experimental design methodologies to validate hypotheses and measure the impact of implemented solutions. Document methodologies, procedures, and results thoroughly to ensure reproducibility and knowledge transfer within the organization.
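The model selection and validation loop described above can be illustrated with a deliberately tiny example, choosing a polynomial degree by held-out error on synthetic data; the same select-then-evaluate pattern applies to real estimators and hyperparameters:

```python
import numpy as np

# Synthetic quadratic signal plus noise (all numbers illustrative).
rng = np.random.default_rng(42)
x = np.linspace(0.0, 1.0, 60)
y = 3 * x**2 - 2 * x + rng.normal(0.0, 0.05, size=x.size)

idx = rng.permutation(x.size)        # shuffled train/validation split
train, val = idx[:40], idx[40:]

def val_mse(degree: int) -> float:
    # Fit on the training split, score on the held-out split.
    coefs = np.polyfit(x[train], y[train], degree)
    pred = np.polyval(coefs, x[val])
    return float(np.mean((pred - y[val]) ** 2))

best_degree = min([1, 2, 3], key=val_mse)
print(best_degree, round(val_mse(best_degree), 5))
```

The underfit linear model loses clearly; in practice cross-validation replaces the single split, and a final test set is kept apart from the selection loop.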
Posted 3 days ago
3.0 - 5.0 years
9 - 11 Lacs
Pune
Work from Office
Hiring Senior Data Engineer for an AI-native startup. Work on scalable data pipelines, LLM workflows, web scraping (Scrapy, lxml), Pandas, APIs, and Django. Strong in Python, data quality, mentoring, and large-scale systems. Health insurance
Posted 3 days ago
7.0 - 12.0 years
15 - 27 Lacs
Bengaluru
Remote
THIS IS A FULLY REMOTE JOB WITH A 5-DAY WORK WEEK. THIS IS A ONE-YEAR CONTRACT JOB, LIKELY TO BE CONTINUED AFTER ONE YEAR. Required Qualifications Education: B.Tech/M.Tech in Computer Science, Data Engineering, or equivalent field. Experience: 7-10 years in data engineering, with 2+ years in an industrial/operations-heavy environment (manufacturing, energy, supply chain, etc.) Job Role The Senior Data Engineer will be responsible for independently designing, developing, and deploying scalable data infrastructure to support analytics, optimization, and AI-driven use cases in a low-tech-maturity environment. You will own the data architecture end-to-end, work closely with data scientists, full stack engineers, and operations teams, and be a driving force in creating a robust Industry 4.0-ready data backbone. Key Responsibilities 1. Data Architecture & Infrastructure Design and implement a scalable, secure, and future-ready data architecture from scratch. Lead the selection, configuration, and deployment of data lakes, warehouses (e.g., AWS Redshift, Azure Synapse), and ETL/ELT pipelines. Establish robust data ingestion pipelines from PLCs, DCS systems, SAP, Excel files, and third-party APIs. Ensure data quality, governance, lineage, and metadata management. 2. Data Engineering & Tooling Build and maintain modular, reusable ETL/ELT pipelines using Python, SQL, Apache Airflow, or equivalent. Set up real-time and batch processing capabilities using tools such as Kafka, Spark, or Azure Data Factory. Deploy and maintain scalable data storage solutions and optimize query performance. Tech Stack Strong hands-on expertise in: Python, SQL, Spark, Pandas ETL tools: Airflow, Azure Data Factory, or equivalent Cloud platforms: Azure (preferred), AWS or GCP Databases: PostgreSQL, MS SQL Server, NoSQL (MongoDB, etc.) Data lakes/warehouses: S3, Delta Lake, Snowflake, Redshift, BigQuery Monitoring and Logging: Prometheus, Grafana, ELK, etc.
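The modular ETL/ELT pipelines mentioned above can be sketched as three plain functions over a hypothetical sensor feed; a production version would replace each stage with real connectors (PLC/SAP readers, a warehouse writer) and run under an orchestrator such as Airflow:

```python
import io
import pandas as pd

# Hypothetical raw feed: machine temperature readings, one row missing a value.
RAW_CSV = "machine,temp_c\nM1,71.2\nM1,\nM2,64.8\nM2,65.3\n"

def extract(source: str) -> pd.DataFrame:
    # Stand-in for reading from a PLC historian, SAP export, or API.
    return pd.read_csv(io.StringIO(source))

def transform(df: pd.DataFrame) -> pd.DataFrame:
    df = df.dropna(subset=["temp_c"])  # basic data-quality gate
    return df.groupby("machine", as_index=False)["temp_c"].mean()

def load(df: pd.DataFrame, sink: dict) -> None:
    # Stand-in for writing to a warehouse table.
    sink["daily_avg_temp"] = df.to_dict("records")

warehouse: dict = {}
load(transform(extract(RAW_CSV)), warehouse)
print(warehouse["daily_avg_temp"])
```

Keeping each stage a pure, testable function is what makes the pipeline reusable across orchestrators.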
Posted 3 days ago
5.0 - 8.0 years
10 - 18 Lacs
Bengaluru
Work from Office
Job Title: Machine Learning Engineer Pricing Optimizer Development Relevant Experience - 5 to 8 Years Bangalore location - Hybrid Mode Immediate Joiners only preferred Role Overview We are looking for a skilled ML Engineer to design and build a robust, interpretable, and scalable optimization engine that enables data-driven pricing decisions. The engine will leverage SKU-level demand, elasticity, and profitability data to recommend optimal pricing strategies based on business goals. Key Responsibilities Refactor and enhance the existing optimization logic for speed, scalability, and modularity. Model price-volume relationships using elasticity inputs at SKU and brand levels. Develop optimization logic using constrained techniques (e.g., linear/quadratic programming). Integrate business rules and constraints such as price bounds, SKU/brand-level caps, and TDP limits. Enable scenario-based optimization with user-defined goals (e.g., maximize profit or volume). Support multi-brand optimization without interdependency between SKUs and brands. Collaborate with cross-functional teams to validate model behavior and outcomes. Inputs You’ll Work With SKU-level data: volume, elasticity, profitability, TDP, and price bounds. Business rules: brand-level caps, SKU exclusions, optimization targets. Deliverables Modular Python-based optimization engine. JSON-based input/output support for seamless integration. Logging and fallback mechanisms for infeasible constraints. Unit tests and validation with historical data. Tech Stack & Skills Proficient in Python, especially NumPy, Pandas, SciPy or Pyomo/CVXOPT. Experience with constrained optimization techniques. (Preferred) Background in price optimization, demand modeling, or econometrics.
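A hedged sketch of the constrained optimization described above: a linearized two-SKU toy problem with per-SKU price bounds and a portfolio-level cap, solved with scipy.optimize.linprog. All coefficients are invented; the real engine would model elasticity-driven price-volume curves and may need quadratic programming:

```python
import numpy as np
from scipy.optimize import linprog

# Pick per-SKU price moves x (in steps) to maximize incremental profit.
profit_per_step = np.array([2.0, 1.5])   # toy incremental profit per price step
A_ub = [[1.0, 1.0]]                      # total moves across SKUs...
b_ub = [3.0]                             # ...capped at 3 steps (portfolio rule)
bounds = [(0.0, 2.5), (0.0, 2.5)]        # per-SKU price bounds

# linprog minimizes, so negate the objective to maximize profit.
res = linprog(-profit_per_step, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x, -res.fun)  # optimal moves and the profit they achieve
```

Infeasible business rules surface as res.success == False, which is where the posting's logging and fallback mechanisms would kick in.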
Posted 3 days ago