
497 Scipy Jobs - Page 19

Set up a job alert
JobPe aggregates results for easy access, but you apply directly on the original job portal.

7.0 - 10.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Description

The Position
We are seeking a seasoned engineer with a passion for changing the way millions of people save energy. You’ll work within the Engineering team to build and improve our platforms, delivering flexible and creative solutions to our utility partners and end users and helping us achieve our ambitious goals for our business and the planet. We are seeking a skilled and passionate Senior Software Engineer with expertise in Python to join our development team. As a Senior Software Engineer, you will be a crucial member of our development team, responsible for leading and driving the development of complex, scalable, and high-performance Python-based applications. One of your main areas of focus will be developing and supporting efficient, reusable, and highly scalable APIs and components to deliver a compelling experience to users across platforms. You will collaborate with cross-functional teams, mentor junior developers, and coordinate with the rest of the team working on different layers of the infrastructure; a commitment to collaborative problem solving, sophisticated design, and a quality product is therefore important. You will take part in planning and strategy to arrive at solutions with full ownership. You will own development and its quality independently and be responsible for high-quality deliverables. And you will work with a great team with excellent benefits.

Responsibilities & Skills
You should:
Design, develop, and maintain high-quality software applications using Python and the Django framework.
Collaborate with cross-functional teams to define, design, and ship new features and enhancements.
Integrate third-party APIs (REST, SOAP, streaming services) into the existing product (see the sketch below).
Optimize application performance and ensure scalability and reliability.
Write clean, maintainable, and efficient code, following best practices and coding standards.
Participate in code reviews and provide constructive feedback to peers.
Troubleshoot and debug applications, identifying root causes of issues.
Stay current with industry trends, technologies, and best practices in software development.

Required Skills (Python)
Bachelor’s or Master’s degree in Computer Science or a related field from IIT, NIT, or another reputed institute.
7-10 years of experience in software development, with at least 4 years of background in Python and Django.
Working knowledge of Golang.
Experience integrating third-party APIs (REST, SOAP, streaming services) into applications.
Familiarity with database technologies, particularly MySQL (must have) and HBase (nice to have).
Experience with message brokers such as Kafka (must have), RabbitMQ, and Redis.
Experience with version control systems such as GitHub.
Familiarity with RESTful APIs and integration of third-party APIs.
Strong understanding of software development methodologies, particularly Agile.
Demonstrable experience writing unit and functional tests.
Excellent problem-solving skills and the ability to work collaboratively in a team environment.
Experience with database systems such as PostgreSQL, MySQL, or MongoDB.
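The responsibilities above include integrating third-party REST APIs and working with Kafka. The sketch below is purely illustrative: it assumes the `requests` and `kafka-python` libraries, and the endpoint URL, topic name, and payload shape are invented. It is not a description of Uplight's actual stack.

```python
import json

import requests
from kafka import KafkaProducer  # kafka-python client (assumed here)

API_URL = "https://api.example.com/v1/meter-readings"  # hypothetical endpoint
TOPIC = "meter-readings"                                # hypothetical topic

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)


def fetch_and_publish(page: int = 1) -> int:
    """Fetch one page from a third-party REST API and publish each record to Kafka."""
    resp = requests.get(API_URL, params={"page": page}, timeout=10)
    resp.raise_for_status()                 # fail fast on HTTP errors
    records = resp.json().get("results", [])
    for record in records:
        producer.send(TOPIC, value=record)  # async send; batched by the client
    producer.flush()                        # block until queued messages are delivered
    return len(records)


if __name__ == "__main__":
    print(f"published {fetch_and_publish()} records")
```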
The following experiences are not required, but you'll stand out from other applicants if you have any of the following, in our order of importance:
Experience with cloud infrastructure such as AWS/GCP or another cloud service provider.
Knowledge of the IEEE 2030.5 standard (protocol).
Knowledge of serverless architecture, preferably AWS Lambda.
Experience with the PySpark, Pandas, SciPy, and NumPy libraries is a plus.
Experience in microservices architecture.
Solid CI/CD experience.
You are a Git guru and revel in collaborative workflows.
You work on the command line confidently and are familiar with all the goodies the Linux toolkit can provide.
Knowledge of modern authorization mechanisms, such as JSON Web Tokens.
Good to have: front-end technologies such as ReactJS and NodeJS.

Qualifications
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.

Uplight provides equal employment opportunities to all employees and applicants and prohibits discrimination and harassment of any type without regard to race (including hair texture and hairstyles), color, religion (including head coverings), age, sex, national origin, caste, disability status, genetics, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.

Posted 2 months ago

Apply

3.0 - 5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

This is a key position supporting a client organization with strong analytics and data science capabilities. There are significant revenue and future opportunities associated with this role.

Job Description:
Develop and maintain data tables (management, extraction, harmonizing, etc.) using GCP/SQL/Snowflake. This involves designing, implementing, and writing optimized code, maintaining complex SQL queries to extract, transform, and load (ETL) data from various tables/sources, and ensuring data integrity and accuracy throughout the data pipeline process.
Create and manage data visualizations using Tableau/Power BI. This involves designing and developing interactive dashboards and reports, ensuring visualizations are user-friendly, insightful, and aligned with business requirements, and regularly updating and maintaining dashboards to reflect the latest data and insights.
Generate insights and reports to support business decision-making. This includes analyzing data trends and patterns to provide actionable insights, preparing comprehensive reports that summarize key findings and recommendations, and presenting data-driven insights to stakeholders to inform strategic decisions.
Handle ad-hoc data requests and provide timely solutions. This involves responding to urgent data requests from various departments, quickly gathering, analyzing, and delivering accurate data to meet immediate business needs, and ensuring ad-hoc solutions are scalable and reusable for future requests.
Collaborate with stakeholders to understand and solve open-ended questions. This includes engaging with business users to identify their data needs and challenges, working closely with cross-functional teams to develop solutions for complex, open-ended problems, and translating business questions into analytical tasks to deliver meaningful results.
Design and create wireframes and mockups for data visualization projects. This involves developing wireframes and mockups to plan and communicate visualization ideas, collaborating with stakeholders to refine and finalize visualization designs, and ensuring that wireframes and mockups align with user requirements and best practices.
Communicate findings and insights effectively to both technical and non-technical audiences. This includes preparing clear and concise presentations to share insights with diverse audiences, tailoring communication styles to suit the technical proficiency of the audience, and using storytelling techniques to make data insights more engaging and understandable.
Perform data manipulation and analysis using Python (see the sketch below). This includes utilizing Python libraries such as Pandas, NumPy, and SciPy for data cleaning, transformation, and analysis, developing scripts and automation tools to streamline data processing tasks, and conducting statistical analysis to generate insights from large datasets.
Implement basic machine learning models using Python. This involves developing and applying basic machine learning models to enhance data analysis, using libraries such as scikit-learn and TensorFlow for model development and evaluation, and interpreting and communicating the results of machine learning models to stakeholders.
Automate data processes using Python. This includes creating automation scripts to streamline repetitive data tasks, implementing scheduling and monitoring of automated processes to ensure reliability, and continuously improving automation workflows to increase efficiency.
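The sketch referenced above is a hypothetical illustration of the Pandas/SciPy work described: cleaning a small extract, producing a dashboard-style summary, and running a simple significance test. The file and column names are invented.

```python
import pandas as pd
from scipy import stats

# Hypothetical extract; in practice this would come from GCP/SQL/Snowflake.
df = pd.read_csv("campaign_metrics.csv", parse_dates=["date"])

# Basic cleaning: drop duplicates, fill missing spend, standardise channel labels.
df = df.drop_duplicates()
df["spend"] = df["spend"].fillna(0.0)
df["channel"] = df["channel"].str.strip().str.lower()

# Aggregate for a dashboard-style summary.
summary = df.groupby("channel", as_index=False).agg(
    total_spend=("spend", "sum"),
    avg_conversion=("conversion_rate", "mean"),
)
print(summary)

# Quick insight: is the conversion rate significantly different between two channels?
a = df.loc[df["channel"] == "search", "conversion_rate"].dropna()
b = df.loc[df["channel"] == "social", "conversion_rate"].dropna()
t_stat, p_value = stats.ttest_ind(a, b, equal_var=False)  # Welch's t-test
print(f"t={t_stat:.2f}, p={p_value:.3f}")
```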
Requirements:
3 to 5 years of experience in data analysis, reporting, and visualization, including a proven track record of working on data projects and delivering impactful results, and experience in a similar role within a fast-paced environment.
Proficiency in GCP/SQL/Snowflake/Python for data manipulation. This includes strong knowledge of GCP/SQL/Snowflake services and tools, advanced SQL skills for complex query writing and optimization, and expertise in Python for data analysis and automation.
Strong experience with Tableau/Power BI/Looker Studio for data visualization. This includes a demonstrated ability to create compelling and informative dashboards, and familiarity with best practices in data visualization and user experience design.
Excellent communication skills, with the ability to articulate complex information clearly. This includes strong written and verbal communication skills, and the ability to explain technical concepts to non-technical stakeholders.
Proven ability to solve open-ended questions and handle ad-hoc requests. This includes creative problem-solving skills, a proactive approach to challenges, and flexibility to adapt to changing priorities and urgent requests.
Strong problem-solving skills and attention to detail. This includes a keen eye for detail and accuracy in data analysis and reporting, and the ability to identify and resolve data quality issues.
Experience in creating wireframes and mockups. This includes proficiency in design tools and effectively translating ideas into visual representations.
Ability to work independently and as part of a team. This includes being self-motivated, able to manage multiple tasks simultaneously, and having a collaborative mindset and willingness to support team members.

Location: Bangalore
Brand: Merkle
Time Type: Full time
Contract Type: Permanent

Posted 2 months ago

Apply

1.0 years

0 Lacs

Indore, Madhya Pradesh, India

On-site

Role Description
We are seeking a skilled Python Data Analyst to join our dynamic data analytics team. This is a full-time, on-site role based in Indore. As a key member of our team, you will leverage Python to analyze data, create insightful visualizations, and develop machine learning models to drive business decisions. Only candidates currently residing in Indore or willing to relocate to Indore are eligible to apply.

What will you do?
Python Programming: Utilize Python libraries to efficiently process and analyze data. Develop automation scripts to streamline calculations and tabulations traditionally performed using Excel pivot tables and formulas (see the sketch below). Test, debug, and optimize Python code to ensure accuracy and efficiency. Provide technical support to end-users utilizing the developed scripts. Document code and conduct training sessions for end-users.

What are we looking for?
Minimum 1 year of experience automating repetitive calculations using Python.
Proficiency in at least one popular Python library (such as Pandas, openpyxl, xlwings, NumPy, SciPy, etc.).
Strong written and verbal communication skills.
Advanced proficiency in MS Excel (formulas, pivot tables).
Working knowledge of MS PowerPoint for reporting and presentations.
Bachelor's degree.

Why join us?
A fast-paced, intellectually stimulating environment where high performers thrive.
Competitive salary.
Work with elite professionals and global clients.
Opportunities to shape the future of insights through AI-driven innovation.
A culture that values curiosity, ownership, and bold thinking.

About the Company:
MavenMagnet AI is transforming the insights generation industry by delivering in-depth analytics with significant time and cost efficiency compared to traditional alternatives. Our SaaS platform, Insights Assistant, uses disruptive deep learning analytical capabilities to harness innumerable datasets and empower brands and organizations with actionable qualitative insights on a quantitative scale, helping them make more efficient business decisions. If you're ready to bring your passion and dedication to a team that’s redefining the future of consumer insights, we’d love to hear from you.
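The sketch referenced above is a hypothetical example of replacing an Excel pivot with Pandas and writing the result back to a workbook via openpyxl. The workbook, sheet, and column names are invented for illustration.

```python
import pandas as pd

# Raw data that would traditionally be summarised with an Excel pivot table.
df = pd.read_excel("survey_responses.xlsx", sheet_name="raw")  # uses openpyxl under the hood

# Equivalent of a pivot table: respondents per region and sentiment, with grand totals.
pivot = pd.pivot_table(
    df,
    index="region",
    columns="sentiment",
    values="respondent_id",
    aggfunc="count",
    fill_value=0,
    margins=True,  # adds an 'All' row/column, like Excel grand totals
)

# Write the result to a new workbook so end-users can keep working in Excel.
with pd.ExcelWriter("survey_summary.xlsx", engine="openpyxl") as writer:
    pivot.to_excel(writer, sheet_name="pivot")
```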

Posted 2 months ago

Apply

0 years

0 Lacs

India

Remote

Job Title: Data Scientist Intern
Job ID: 0479
Work Mode: Remote
Experience Level: Entry Level (Fresher)
Stipend: ₹8,500 per month

About The Role
We’re looking for a motivated and curious Data Scientist Intern to join our remote team. This internship offers hands-on experience in real-world data projects, from data preprocessing to machine learning and visualization. It’s a great opportunity for freshers who want to build a strong foundation in data science and analytics.

Key Responsibilities
Data Collection & Preprocessing: Gather and clean data from various sources such as databases, APIs, and spreadsheets.
Exploratory Data Analysis: Analyze data using statistics and visualizations to uncover patterns and insights.
Machine Learning: Assist in building and evaluating machine learning models for predictive tasks (see the sketch below).
Data Visualization & Reporting: Create reports and dashboards using tools like Tableau, Power BI, Matplotlib, or Seaborn.
Cross-Functional Collaboration: Work with teams including engineering and product to support data-driven initiatives.
Documentation: Maintain detailed documentation of data workflows, tools used, and insights generated.
Continuous Learning: Stay updated with industry best practices and emerging tools in data science.

What We’re Looking For
Currently pursuing or recently completed a degree in Data Science, Computer Science, Statistics, Mathematics, or related fields.
Proficiency in Python or R for data analysis and machine learning.
Familiarity with Pandas, NumPy, SciPy, and SQL.
Basic knowledge of machine learning libraries such as scikit-learn, TensorFlow, or PyTorch.
Ability to visualize and interpret data effectively.
Strong analytical and communication skills.
Eagerness to learn and grow in a fast-paced environment.

Eligibility
Open to freshers and recent graduates.
Candidates from all technical and analytical backgrounds are encouraged to apply.

Note: This is a paid internship.
Skills: pandas, python, numpy, data visualization, r, machine learning, data analysis, tensorflow, data preprocessing, exploratory data analysis, sql, pytorch, scikit-learn, data science, scipy, data collection
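For context on what "building and evaluating machine learning models" can look like day to day, here is a minimal, self-contained scikit-learn sketch on synthetic data. It is purely illustrative and not drawn from this internship's projects.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, classification_report
from sklearn.model_selection import train_test_split

# Synthetic dataset standing in for cleaned, preprocessed business data.
X, y = make_classification(n_samples=1_000, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Fit a simple baseline model, then evaluate it on held-out data.
model = LogisticRegression(max_iter=1_000)
model.fit(X_train, y_train)

preds = model.predict(X_test)
print("accuracy:", accuracy_score(y_test, preds))
print(classification_report(y_test, preds))
```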

Posted 2 months ago

Apply

5.0 - 6.0 years

0 Lacs

Bhubaneswar, Odisha, India

On-site

Job Summary:
We are seeking a highly skilled Python Developer with expertise in Machine Learning and Data Analytics to join our team. The ideal candidate should have 5-6 years of experience in developing end-to-end ML-driven applications and handling data-driven projects independently. You will be responsible for designing, developing, and deploying Python-based applications that leverage data analytics, statistical modeling, and machine learning techniques.

Key Responsibilities:
Design, develop, and deploy Python applications for data analytics and machine learning.
Work independently on machine learning model development, evaluation, and optimization.
Develop ETL pipelines and process large-scale datasets for analysis.
Implement scalable and efficient algorithms for predictive analytics and automation.
Optimize code for performance, scalability, and maintainability.
Collaborate with stakeholders to understand business requirements and translate them into technical solutions.
Integrate APIs and third-party tools to enhance functionality.
Document processes, code, and best practices for maintainability.

Required Skills & Qualifications:
5-6 years of professional experience in Python application development.
Strong expertise in Machine Learning, Data Analytics, and AI frameworks (TensorFlow, PyTorch, scikit-learn, etc.).
Proficiency in Python libraries such as Pandas, NumPy, SciPy, and Matplotlib.
Experience with SQL and NoSQL databases (PostgreSQL, MongoDB, etc.).
Hands-on experience with big data technologies (Apache Spark, Delta Lake, Hadoop, etc.).
Strong experience in developing APIs and microservices using FastAPI, Flask, or Django (see the sketch below).
Good understanding of data structures, algorithms, and software development best practices.
Strong problem-solving and debugging skills.
Ability to work independently and handle multiple projects simultaneously.
Good to have: working knowledge of cloud platforms (Azure/AWS/GCP) for deploying ML models and data applications.

Send your resume to hr@epsumlabs.com
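Since the role combines machine learning with API development, the sketch referenced above shows a minimal FastAPI service exposing a trained scikit-learn model behind a prediction endpoint. The model file, feature shape, and endpoint name are assumptions, not requirements from the posting.

```python
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Prediction service (illustrative)")

# A previously trained scikit-learn model, persisted with joblib (hypothetical path).
model = joblib.load("model.joblib")


class Features(BaseModel):
    # Hypothetical numeric feature vector expected by the model.
    values: list[float]


@app.post("/predict")
def predict(features: Features) -> dict:
    # Wrap the single sample in a list because scikit-learn expects a 2-D input.
    prediction = model.predict([features.values])[0]
    return {"prediction": float(prediction)}

# Run locally with, e.g.: uvicorn main:app --reload  (if this file is saved as main.py)
```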

Posted 2 months ago

Apply

3.0 - 5.0 years

5 - 8 Lacs

Gurugram

Work from Office

This role is essential for advancing the strategic objectives of the insurance actuarial function by implementing analytics and transformational initiatives. The team is primarily responsible for innovation and research & development. Utilizing high-end analytical and technical skills, team members design models and methodologies tailored for actuarial reserving processes, and analyze financial data to generate actionable insights that aid decision-making. This position offers an exciting opportunity to participate in a range of analytics projects (descriptive, diagnostic, predictive, and prescriptive) while also focusing on Artificial Intelligence and cloud migration, thereby enhancing the organization's ability to adapt to emerging business needs.

DISCOVER your opportunity
What will your essential responsibilities include?
Implementation of analytics projects and other transformational projects that directly impact the organization's strategic objectives.
Run tools based in Python, R, and SQL, and execute ETL processes to facilitate business deliverables, with a focus on future development work to drive actionable insights and business impact.
Write high-quality, effective code that can be easily scaled across platforms using Python/R programming.
Deepen your understanding of the business to contribute to other analytics initiatives, including predictive modeling, and collaborate on data-driven projects with cross-functional teams.
Learn the in-house software platforms used for actuarial reserving and manage their use in the processes, contributing to the enhancement of analytical capabilities.
Manage quarterly/monthly/yearly financial data for MI reporting and collaborate with stakeholders to provide valuable insights and support decision-making.
Partner with global technology teams to deliver changes to our data and processes to meet strategic goals, actively participating in transformative projects including the move to the cloud.
Demonstrate proactive communication with Business users, Development, Technology, Production Support, and Delivery Teams, and Senior Management to drive collaborative problem-solving and knowledge sharing.
Develop and maintain process documentation to ensure transparency and knowledge transfer within the team and across stakeholders.
Support ad-hoc activities to address emerging business needs and contribute to the agility of the team.
You will report to the Lead, AFR.

SHARE your talent
We're looking for someone who has these abilities and skills:

Required Skills and Abilities:
University Graduate (B.E/B.Tech/CS/IT/BSc).
Relevant years of work experience, preferably in the insurance industry, financial services, or consultancy.
Good knowledge of statistics and mathematical functions.
Good hands-on computer application skills, specifically Python programming, SQL, Power BI, and MS Excel.
In-depth knowledge of Python software development, including frameworks, tools, and systems (NumPy, Pandas, Django, SciPy, PyTorch, etc.).

Desired Skills and Abilities:
Good to have knowledge of R programming (dplyr) and QlikView.
Excellent analytical, research, and problem-solving skills.
Understanding of cloud principles with good exposure to the Microsoft Azure stack (Databricks, SQL DB, etc.).
Understanding of AI fundamentals including exposure to LLMs.

Posted 2 months ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Requirements
Work with a team to develop advanced analytic techniques to interrogate, visualize, interpret, and contextualize data, and develop novel solutions to healthcare-specific problems.
Implement a variety of analytics, from data processing and QA to exploratory analytics and complex predictive models.
Understand client/product needs and translate them into tactical initiatives with defined goals and timelines.
Implement models using high-level software packages (scikit-learn, TensorFlow, PySpark, Databricks).
Collaborate on software projects, providing analytical guidance and contributing to the codebase.
Devise modeling and measuring techniques, and utilize mathematics, statistical methods, engineering methods, operational mathematics techniques (linear programming, game theory, probability theory, symbolic language, etc.) and other principles and laws of scientific and economic disciplines to investigate complex issues, identify and solve problems, and aid better decision making.
Plan, design, coordinate, and control the progress of project work to meet client objectives; prepare and present reports to clients.
Solve highly specialized technical objectives or problems without a pre-defined approach, where the use of creative, imaginative solutions is required.
Synthesize raw data into digestible and actionable information.
Identify specific research areas that merit investigation, develop new hypotheses and approaches for studies, and evaluate the feasibility of such endeavors.
Initiate, formulate, plan, execute, and control studies designed to identify, analyze, and report on healthcare-related issues.
Advise management on the selection of an appropriate study design and analysis, and on the interpretation of study results.

Work Experience
BS/MS in mathematics, physics, statistics, engineering, or a similar discipline; Ph.D. preferred.
Minimum of 5 years of analytics/data science experience.
Solid experience writing SQL queries.
Strong programming abilities in Python (pandas, scikit-learn, NumPy/SciPy, PySpark).
Knowledge of statistical methods: regression, ANOVA, EDA, PCA, etc. (see the sketch below).
Basic visualization skills (matplotlib/seaborn/plotly/etc.); Power BI experience highly preferred.
Experience manipulating data sets through commercial and open-source software (e.g., Redshift, Snowflake, Spark, Python, R, Databricks).
Working knowledge of medical claims data (ICD-10 codes, HCPCS, CPT, etc.).
Experience utilizing a range of analytics involving standard data in the pharmaceutical industry, e.g., claims data from Symphony, IQVIA, Truven, Allscripts, etc.
Must be comfortable conversing with end-users.
Excellent analytical, verbal, and communication skills.
Ability to thrive in a fast-paced, innovative environment.
Advanced Excel skills, including VLOOKUP, pivot tables, charts, graphing, and macros.
Excellent documentation skills.
Excellent planning, organizational, and time management skills.
Ability to lead meetings and give presentations.
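As one concrete, entirely hypothetical example of the statistical methods listed above, the sketch referenced in the requirements runs a one-way ANOVA with SciPy to test whether mean cost per claim differs across cohorts. The data file and column names are invented.

```python
import pandas as pd
from scipy import stats

# Hypothetical claims extract (e.g., pulled from Redshift/Snowflake with SQL).
# Expected columns: cohort, cost_per_claim
claims = pd.read_csv("claims_sample.csv")

# One group of costs per cohort.
groups = [g["cost_per_claim"].dropna() for _, g in claims.groupby("cohort")]

# One-way ANOVA: do mean costs differ across cohorts?
f_stat, p_value = stats.f_oneway(*groups)
print(f"F={f_stat:.2f}, p={p_value:.4f}")
```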

Posted 2 months ago

Apply

8.0 - 13.0 years

15 - 30 Lacs

Hyderabad

Hybrid

Role: Python Developer
Location: Hyderabad (Hybrid)

Job Description:
We are seeking a skilled Python Developer to join our team and contribute to designing, developing, and maintaining high-performance applications. The ideal candidate should have strong experience in Python, along with expertise in web frameworks like Flask or Django, database management, and API development.

Required Skills & Qualifications:
Strong proficiency in Python (3.x) and knowledge of OOP principles.
Experience with Flask or Django for web application development.
Proficiency in working with databases (SQL and NoSQL).
Hands-on experience with RESTful API development and integration (see the sketch below).
Familiarity with version control tools like Git, GitHub, or GitLab.
Experience with cloud platforms (AWS, Azure, or Google Cloud) is a plus.
Knowledge of containerization tools like Docker and Kubernetes is an advantage.
Strong debugging, testing, and problem-solving skills.
Experience with CI/CD pipelines is a plus.
Ability to work independently and collaboratively in an agile environment.

Preferred Qualifications:
Bachelor's or Master’s degree in Computer Science, Engineering, or a related field.
Experience with asynchronous programming (Celery, RabbitMQ) is a plus.
Knowledge of data processing, analytics, or AI/ML frameworks is beneficial.
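As a quick orientation to the RESTful API work this role mentions, the sketch referenced above is a minimal Flask example. The resource name and in-memory store are assumptions, standing in for a real SQL/NoSQL backend.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# In-memory store standing in for a database (illustrative only).
ITEMS = {1: {"id": 1, "name": "example"}}


@app.route("/items/<int:item_id>", methods=["GET"])
def get_item(item_id: int):
    item = ITEMS.get(item_id)
    if item is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(item)


@app.route("/items", methods=["POST"])
def create_item():
    payload = request.get_json(force=True)
    new_id = max(ITEMS) + 1
    ITEMS[new_id] = {"id": new_id, **payload}
    return jsonify(ITEMS[new_id]), 201


if __name__ == "__main__":
    app.run(debug=True)
```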

Posted 2 months ago

Apply

2.5 - 3.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Company Description
The Smart Cube, a WNS company, is a trusted partner for high-performing intelligence that answers critical business questions. And we work with our clients to figure out how to implement the answers, faster.

Job Description
Roles and responsibilities:
Implement web applications in Python/Django.
Understand project requirements and convert them into technical requirements.
Independently handle the coding/development of complex modules.
Contribute independently, delivering high-quality software within the timelines.
Ensure high-quality releases through appropriate QC and QA activities.
Participate in technical discussions/reviews.
Work collaboratively and professionally with other associates in cross-functional teams to achieve goals.
Apply a sense of urgency, commitment, and focus on the right priorities in developing solutions in a timely fashion.
Independently look for solutions to problems, but keep detailed records of what assumptions and steps were taken, and be able to communicate the logic in a clear and concise manner.

Qualifications
Ideal Candidate:
2.5-3 years of relevant experience in developing applications for financial services clients.
3-5 years of experience implementing web applications in Python/Django.
Knowledge of the SciPy/NumPy/Pandas libraries in Python to develop quant models.
AWS Lambda, RabbitMQ, Celery.
Good understanding of the SDLC phases of application development.
Experience in successfully delivering software projects in a variety of domains, such as retail, financial services and procurement, under time/cost pressures.
Experience with the agile methodology approach and project management principles.
Must be able to design and develop web applications using open-source technology (Python).
Should have good understanding of and exposure to unit testing.
Should have good understanding of and exposure to data structures, design patterns, and the design and architecture of web-based applications.

Posted 2 months ago

Apply

2.0 - 5.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

WHO WE ARE:
At Optiver, trading is our business, and we believe that to be competitive, we need to adapt, innovate, and continuously improve. Our strategy research team combines data and a systematic approach to identify opportunities in the market. Then, we iterate on these ideas to maximise the performance of our strategies. This is where you come in! In Asia Pacific, Optiver was one of the first global market makers to establish a presence in the region, with the incorporation of our Sydney office in 1996. Since then, we have expanded our footprint by establishing offices in Taipei (2005), Hong Kong (2007), Shanghai (2012) and Singapore (2021). The business in Mumbai is newly established and is an integral part of the APAC strategy, with significant growth anticipated over the coming years.

WHAT YOU’LL DO:
You will improve our trading results using insights driven by data. You'll work closely with the development, research, and trading teams. You will develop hypotheses and test them experimentally. You'll direct engineering efforts and build tools to monitor and optimise our low-latency trading strategies. You'll work with cutting-edge hardware and terabytes of data. You’ll reverse engineer systems to understand how to improve our trading success. In short, you’ll be responsible for defining and driving improvements to our trading through technology.

WHO YOU ARE:
2-5 years in an equivalent role.
An understanding of how computer systems work, e.g., memory allocation, caches, system programming, networking.
Experience programming in Python (Pandas, NumPy, SciPy) or other data science toolkits (R, Matlab) strongly preferred.
Scientific and critical thinking skills, as well as an eye for detail.
Outstanding communication skills. You’re able to coordinate with multiple stakeholder groups, challenging thinking and obtaining buy-in.
The ability to present complex ideas with data to expedite decision making.
A focus on pragmatic outcomes, and the ability to prioritise effectively.
An interest in, or knowledge of, trading.
Reverse engineering experience a bonus.

WHAT YOU’LL GET:
The chance to work alongside diverse, intelligent, and driven peers in a rewarding environment.
Competitive remuneration, including an attractive bonus structure.
Training, mentorship and personal development opportunities.
Competitive benefits.
A work-from-home allowance and support.

As an intentionally flat organisation, we believe that great ideas and impact can come from everyone. We are passionate about empowering individuals and creating diverse teams that thrive. Every person at Optiver should feel included, valued and respected, because we believe our best work is done together. Our commitment to diversity and inclusion is hardwired through every stage of our hiring process. We encourage applications from candidates from any and all backgrounds, and we welcome requests for reasonable adjustments during the process to ensure that you can best demonstrate your abilities.

Posted 2 months ago

Apply

4.0 - 6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Responsibilities:
• Develop an in-depth understanding of user journeys and generate data-driven insights and recommendations to help product and customer success teams in meticulous decision-making.
• Define and analyze key product data sets to understand customer and product behavior.
• Work with stakeholders throughout the organization to identify opportunities for leveraging data to identify areas of growth, and build strong data-backed business cases around the same.
• Perform statistical analysis/modelling on data and uncover hidden data patterns and correlations.
• Perform feature engineering and develop and deploy predictive models/algorithms.
• Coordinate with different teams to implement and deploy AI/ML-driven models.
• Conduct ad-hoc analysis around product areas for growth hacking and produce consumable reports for multiple business stakeholders.
• Develop processes and tools to monitor and analyze model performance and data accuracy.

Technical Skills:
• At least 4-6 years of experience working with real-world data and building statistical models.
• Hands-on experience programming with Python, including working knowledge of packages like Pandas, NumPy, SciPy, Scikit-Learn, Seaborn, Plotly, etc.
• Hands-on knowledge of SQL and Excel.
• Deep understanding of key supervised and unsupervised ML algorithms – should be able to explain what is happening under the hood and their real-world advantages/drawbacks.
• Strong foundation in statistics and probability theory.
• Knowledge of advanced statistical techniques and concepts (properties of distributions, statistical tests, simulations, Markov chains, etc.) and experience with applications (see the sketch below).

Other Skills:
• Preferred domain experience: Gaming, E-Commerce, or any B2C experience.
• Ability to break a problem into smaller chunks and design solutions accordingly.
• Ability to dive deeper into data, ask the right questions, analyze with statistical methods, and generate insights.
• Ability to write modular, clean, and well-documented code along with crisp design documents.
• Strong communication and presentation skills.
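The posting mentions simulations and Markov chains in the context of user journeys. The sketch referenced above is a hypothetical illustration: it simulates sessions through three invented product states with NumPy and estimates the long-run share of time spent in each. The states and transition probabilities are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical user-journey states and a transition matrix (rows sum to 1).
states = ["browse", "play", "purchase"]
P = np.array([
    [0.70, 0.25, 0.05],   # from browse
    [0.30, 0.60, 0.10],   # from play
    [0.50, 0.30, 0.20],   # from purchase
])


def simulate(n_steps: int = 100_000) -> np.ndarray:
    """Simulate one long chain and return the empirical state distribution."""
    counts = np.zeros(len(states))
    state = 0
    for _ in range(n_steps):
        state = rng.choice(len(states), p=P[state])  # next state given current one
        counts[state] += 1
    return counts / n_steps


print(dict(zip(states, np.round(simulate(), 3))))
```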

Posted 2 months ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Description
Amazon is investing heavily in building a world-class advertising business, and we are responsible for defining and delivering a collection of self-service performance advertising products that drive discovery and sales. Our products are strategically important to our Retail and Marketplace businesses, driving long-term growth. We deliver billions of ad impressions and millions of clicks daily and are breaking fresh ground to create world-class products. We are highly motivated, collaborative and fun-loving with an entrepreneurial spirit and bias for action. With a broad mandate to experiment and innovate, we are growing at an unprecedented rate with a seemingly endless range of new opportunities.

The ATT team, based in Bangalore, is responsible for ensuring that ads are relevant and of good quality, leading to higher conversion for the sellers and providing a great experience for the customers. We deal with one of the world’s largest product catalogs, handle billions of requests a day with plans to grow by an order of magnitude, and use automated systems to validate tens of millions of offers submitted by thousands of merchants in multiple countries and languages.

In this role, you will build and develop ML models to address content understanding problems in Ads. These models will rely on a variety of visual and textual features, requiring expertise in both domains. These models need to scale to multiple languages and countries. You will collaborate with engineers and other scientists to build, train and deploy these models. As part of these activities, you will develop production-level code that enables moderation of millions of ads submitted each day.

Basic Qualifications
3+ years of experience building machine learning models for business applications.
PhD, or Master's degree and 6+ years of applied research experience.
Experience programming in Java, C++, Python or a related language.
Experience with neural deep learning methods and machine learning.

Preferred Qualifications
Experience with modeling tools such as R, scikit-learn, Spark MLlib, MXNet, TensorFlow, NumPy, SciPy, etc.
Experience with large-scale distributed systems such as Hadoop, Spark, etc.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company - ADCI - Karnataka
Job ID: A2713237

Posted 2 months ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Description
Are you ready for a thrilling opportunity to join a vibrant team in a challenging environment? As a Quant Modelling Vice President in the QR Markets Capital (QRMC) team, you will play a pivotal role in implementing the next generation of our risk analytics platform. The QRMC team's mission is to build the models and infrastructure used for the risk management of market risk, such as Value at Risk (VaR), Stress, and the Fundamental Review of the Trading Book (FRTB). The QRMC team in India will therefore play a critical role and support the activities of the QRMC group globally. We also work closely with Front Office and Market Risk functions to develop tools and utilities for model development and risk management purposes.

Job Responsibilities
Work on the implementation of the next generation of the risk analytics platform.
Assess model performance, perform back-testing analysis and P&L attribution.
Improve performance and scalability of analytics algorithms.
Develop and enhance mathematical models for VaR/Stress/FRTB (a minimal VaR illustration follows this posting).
Assess the appropriateness of quantitative models and their limitations, identifying and monitoring the associated model risk.
Design efficient numerical algorithms and implement high-performance computing solutions.
Design and develop software frameworks for analytics and their delivery to systems and applications.

Required Qualifications, Capabilities, And Skills
Advanced degree (PhD, MSc, B.Tech or equivalent) in Engineering, Mathematics, Physics, Computer Science, etc.
3+ years of relevant experience in Python and/or C++, along with proficiency in data structures, standard algorithms and object-oriented design.
You have a basic understanding of product knowledge across a range of asset classes – Credit, Rates, Equities, Commodities, FX & SPG.
You’re interested in applying agile development practices.
You demonstrate quantitative and problem-solving skills as well as research skills.
You understand basic mathematics such as statistics and probability theory.
You demonstrate good interpersonal and communication skills, and the ability to work in a group.
You’re attentive to detail and easily adaptable.

Preferred Qualifications, Capabilities, And Skills
Experience applying statistical and/or machine learning techniques in the financial industry.
Knowledge of options pricing theory, trading algorithms or financial regulations.
Experience using multi-threading, GPU, MPI, grid, or other HPC technologies is a plus.
Excellent knowledge of data analysis tools in Python such as Pandas, NumPy, SciPy, etc.
Knowledge of advanced mathematics such as stochastic calculus.
Knowledge of front-end technologies like HTML, React and integration with large data sets.

ABOUT US
JPMorganChase, one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses and many of the world’s most prominent corporate, institutional and government clients under the J.P. Morgan and Chase brands. Our history spans over 200 years and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing and asset management. We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. We also make reasonable accommodations for applicants’ and employees’ religious practices and beliefs, as well as mental health or physical disability needs. Visit our FAQs for more information about requesting an accommodation.

About The Team
The Corporate & Investment Bank is a global leader across investment banking, wholesale payments, markets and securities services. The world’s most important corporations, governments and institutions entrust us with their business in more than 100 countries. We provide strategic advice, raise capital, manage risk and extend liquidity in markets around the world.
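The QRMC posting above centres on VaR/Stress/FRTB model development. As a purely illustrative sketch, and not JPMorgan's methodology, the snippet below computes a one-day historical-simulation VaR on synthetic P&L data with NumPy.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic one-day P&L history for a portfolio (in currency units).
pnl = rng.normal(loc=0.0, scale=1_000_000.0, size=500)


def historical_var(pnl: np.ndarray, confidence: float = 0.99) -> float:
    """Historical-simulation VaR: the loss not exceeded with the given confidence."""
    # Losses are negative P&L; VaR is the corresponding lower quantile,
    # reported as a positive number.
    return -np.quantile(pnl, 1.0 - confidence)


print(f"1-day 99% VaR: {historical_var(pnl):,.0f}")
```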

Posted 2 months ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Description
Job summary:
Financial institutions routinely use models for a broad range of activities, including analyzing business strategies, informing business decisions, identifying and measuring risk, valuing exposures or instruments, hedging derivative positions, conducting stress testing, assessing capital adequacy, managing clients’ assets, informing the investment process, measuring compliance with internal limits, maintaining the formal control apparatus of the bank, meeting financial or regulatory reporting requirements and issuing public disclosures. Model risk arises from the potential adverse consequences of making decisions based on incorrect or misused model outputs and reports, leading to financial loss, poor business decision making, or reputational damage. As part of the firm’s model risk management function, the Model Risk Governance and Review Group (MRGR) is charged with developing model risk policy and control procedures, performing model validation activities, providing guidance on a model’s appropriate usage in the business context, evaluating ongoing model performance testing, and ensuring that model users are aware of the model's strengths and limitations. Model manager roles within MRGR provide attractive career paths for model development and model validation quants in a dynamic setting, working closely with Model Developers, Model Users, Risk and Finance professionals, where they act as key stakeholders on day-to-day model-related risk management decisions as well as conduct independent model validation of new and existing models.

Core Responsibilities
The successful candidate will be a member of the MRGR Group in Bengaluru, covering Data Science across all ex-trading applications of artificial intelligence and machine learning models across the CIB:
Engage in new model validation activities for all Data Science models in the coverage area: evaluate the conceptual soundness of the model specification; the reasonableness of assumptions and reliability of inputs; fitness for purpose; completeness of testing performed to support the correctness of the implementation; robustness of numerical aspects; and the suitability and comprehensiveness of performance metrics and risk measures associated with use of the model.
Conduct independent testing.
Perform additional model review activities, ranging from proposed enhancements to existing models to extensions of the scope of existing models.
Liaise with Model Developers, Model Users, Risk and Finance professionals to provide oversight of and guidance on appropriate usage, controls around model restrictions and limitations, and findings for ongoing performance assessment and testing.
Maintain the model risk control apparatus of the bank for the coverage area and serve as the first point of contact.
Keep up with the latest developments in the coverage area in terms of products, markets, models, risk management practices and industry standards.

Essential Skills, Experience, And Qualifications
A Ph.D. or Master’s degree in a Data Science oriented field, such as Data Science, Computer Science or Statistics, is required.
Prior experience (2-4+ years) in one of the following backgrounds: Data Science, Quantitative Model Development, Model Validation, or Technology focused on Data Science, including hands-on experience with building/testing machine learning models.
Strong understanding of Machine Learning / Data Science theory, techniques and tools, including Transformers, Large Language Models, NLP, GANs, Deep Learning, OCR, XGBoost, and Reinforcement Learning.
Understanding of the machine learning lifecycle (feature engineering, training, validation, scaling, deployment, scoring, monitoring, and the feedback loop) is an asset.
Proficiency in Python programming, with experience of the Python machine learning library ecosystem: NumPy, SciPy, scikit-learn, Theano, TensorFlow, Keras, PyTorch, Pandas.
Excellent writing skills: previous experience in writing scientific text with the ability to describe evidence and present logical reasoning clearly.
Strong communication skills and ability to interface with other functional areas in the bank on model-related issues.
Risk and control mindset: ability to ask incisive questions, converge on critical matters, assess materiality and escalate issues.

ABOUT US
JPMorganChase, one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses and many of the world’s most prominent corporate, institutional and government clients under the J.P. Morgan and Chase brands. Our history spans over 200 years and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing and asset management. We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. We also make reasonable accommodations for applicants’ and employees’ religious practices and beliefs, as well as mental health or physical disability needs. Visit our FAQs for more information about requesting an accommodation.

About The Team
Our professionals in our Corporate Functions cover a diverse range of areas, from finance and risk to human resources and marketing. Our corporate teams are an essential part of our company, ensuring that we’re setting our businesses, clients, customers and employees up for success.

Posted 2 months ago

Apply

6.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

About The Role
This is a unique opportunity to join our fast-growing Global Product Management Data Science team. You will be working with the team managing ‘The Client Experience Digital Platform’, which is the go-to place for all clients to interact with Gartner to get value. Our clients are IT and business leaders around the world. They expect to get help from Gartner on their most critical priorities. The Digital Platform is embedded in their workflows to help them on every step of their journey. Our mission is to deliver tremendous client value by building scalable, intelligent digital products and solutions. We are constantly looking for product management, UX and data science leaders to help us accelerate innovation to disrupt the marketplace and disrupt ourselves.

What You Will Do
Work on data science projects in close collaboration with the Data Engineering team, Application Development team, Product Owners and business leaders to deliver high-value business capabilities.
Solve ‘Search’ value stream problems and help improve Gartner’s client experience in finding the most meaningful and valuable insights.
Build user query understanding and intent refinement models to refine query-to-content similarity (see the sketch after this posting).
Build and incorporate LLMs in addition to vector search capabilities.
Be responsible for high-quality data science solutions with respect to accuracy and coverage. Be accountable for solutions’ scalability, stability, and business adoption.
Maintain proper documentation and further code-reusability principles.
Own algorithms and their enhancements/optimizations as per business requirements.
Collaborate with the Director, Data Science on the long-term vision, strategy, and solution roadmap to align with bigger business objectives and mission-critical priorities of the organization.
Pitch ideas, present solutions and influence senior leaders with strong business value propositions.
Stay on top of fast-moving AI/ML models and technologies. Understand and follow disruptive data science solutions.
Collaborate with engineering and product teams to launch MVPs and iterate quickly.
Independently plan and drive data science projects that deliver clear business value.

What You Will Need
6-8 years of hands-on experience building predictive models, search systems, or other machine learning/artificial intelligence applications to drive business impact.
Bachelor’s degree required; a master’s degree in a quantitative field (math, computer science, engineering, etc.) is strongly preferred.
Demonstrated ability to translate quantitative analysis into actionable business strategies.
Strong communication skills in technical and business domains.
Working experience in some of the following data science areas: machine learning and predictive modeling; text mining and natural language processing; search or recommendation systems; data analytics with multi-dimensional data; generative models.
Strong working knowledge of Lean product principles, the software development lifecycle, and the machine learning lifecycle.
Practical, intuitive problem solver with a demonstrated ability to translate business objectives into actionable data science tasks and translate quantitative analysis into actionable business strategies.
Ability to implement the latest ML research to improve our current algorithms.
Experience and proficiency with Python, machine learning tools (e.g., scikit-learn, spaCy, NLTK), deep learning (e.g., PyTorch, TensorFlow), statistical packages (e.g., SciPy), SQL/relational databases (e.g., Oracle) and NoSQL databases (e.g., MongoDB, graph databases), distributed machine learning (Spark), Linux and shell scripting.
Experience with cloud computing services such as AWS or Azure ML.
Ability to work collaboratively across product, data science and technical stakeholders.
Ability to work in a culture that thrives on feedback and seeks opportunities to stretch outside the comfort zone.
Bias for action and client outcome oriented.

Who are we?
At Gartner, Inc. (NYSE:IT), we guide the leaders who shape the world. Our mission relies on expert analysis and bold ideas to deliver actionable, objective insight, helping enterprise leaders and their teams succeed with their mission-critical priorities. Since our founding in 1979, we’ve grown to more than 21,000 associates globally who support ~14,000 client enterprises in ~90 countries and territories. We do important, interesting and substantive work that matters. That’s why we hire associates with the intellectual curiosity, energy and drive to want to make a difference. The bar is unapologetically high. So is the impact you can have here.

What makes Gartner a great place to work?
Our sustained success creates limitless opportunities for you to grow professionally and flourish personally. We have a vast, virtually untapped market potential ahead of us, providing you with an exciting trajectory long into the future. How far you go is driven by your passion and performance. We hire remarkable people who collaborate and win as a team. Together, our singular, unifying goal is to deliver results for our clients. Our teams are inclusive and composed of individuals from different geographies, cultures, religions, ethnicities, races, genders, sexual orientations, abilities and generations. We invest in great leaders who bring out the best in you and the company, enabling us to multiply our impact and results. This is why, year after year, we are recognized worldwide as a great place to work.

What do we offer?
Gartner offers world-class benefits, highly competitive compensation and disproportionate rewards for top performers. In our hybrid work environment, we provide the flexibility and support for you to thrive, working virtually when it's productive to do so and getting together with colleagues in a vibrant community that is purposeful, engaging and inspiring. Ready to grow your career with Gartner? Join us.
The policy of Gartner is to provide equal employment opportunities to all applicants and employees without regard to race, color, creed, religion, sex, sexual orientation, gender identity, marital status, citizenship status, age, national origin, ancestry, disability, veteran status, or any other legally protected status, and to seek to advance the principles of equal employment opportunity. Gartner is committed to being an Equal Opportunity Employer and offers opportunities to all job seekers, including job seekers with disabilities. If you are a qualified individual with a disability or a disabled veteran, you may request a reasonable accommodation if you are unable or limited in your ability to use or access the Company’s career webpage as a result of your disability. You may request reasonable accommodations by calling Human Resources at +1 (203) 964-0096 or by sending an email to ApplicantAccommodations@gartner.com.

Job Requisition ID: 100434

By submitting your information and application, you confirm that you have read and agree to the country or regional recruitment notice linked below applicable to your place of residence.
Gartner Applicant Privacy Link: https://jobs.gartner.com/applicant-privacy-policy

For efficient navigation through the application, please only use the back button within the application, not the back arrow within your browser.
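The Gartner role above involves query-to-content similarity and vector search. The sketch referenced in the posting is illustrative only: the documents are invented and TF-IDF vectors stand in for learned embeddings and a production vector index. It ranks content against a query by cosine similarity with scikit-learn.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented corpus standing in for research content.
docs = [
    "How to build a cloud migration roadmap",
    "Best practices for zero trust security",
    "Evaluating large language models for enterprise search",
]
query = "enterprise search with LLMs"

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(docs)      # one row per document
query_vector = vectorizer.transform([query])      # same vector space as the documents

# Rank documents by cosine similarity to the query.
scores = cosine_similarity(query_vector, doc_vectors)[0]
for score, doc in sorted(zip(scores, docs), reverse=True):
    print(f"{score:.3f}  {doc}")
```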

Posted 2 months ago

Apply

7.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Experience: 4–7 years
Location: Pune, India (Work from Office)

Job Description
Strong background in machine learning (unsupervised and supervised techniques) with significant experience in text analytics/NLP (see the sketch below).
Excellent understanding of machine learning techniques and algorithms, such as k-NN, Naive Bayes, SVM, decision forests, logistic regression, MLPs, RNNs, etc.
Strong programming ability in Python with experience in the Python data science ecosystem: Pandas, NumPy, SciPy, scikit-learn, NLTK, etc.
Good knowledge of database query languages like SQL and experience with databases (PostgreSQL/MySQL/Oracle/MongoDB).
Excellent verbal and written communication skills.
Excellent analytical and problem-solving skills.
Degree in Computer Science, Engineering or a relevant field is preferred.
Proven experience as a Data Analyst or Data Scientist.

Good To Have
Familiarity with Hive, Pig and Scala.
Experience in embeddings, Retrieval Augmented Generation (RAG), and Gen AI.
Experience with data visualization tools like matplotlib, plotly, seaborn, ggplot, etc.
Experience using cloud technologies on AWS/Microsoft Azure.
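As a small, hypothetical example of the text analytics this role describes, the sketch referenced above trains a Naive Bayes classifier on TF-IDF features with scikit-learn. The toy texts and labels are invented and stand in for a real annotated corpus.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy labelled data standing in for a real annotated corpus.
texts = [
    "refund not received for my order",
    "great product, fast delivery",
    "app crashes when I open my account",
    "love the new design, very intuitive",
]
labels = ["complaint", "praise", "complaint", "praise"]

# TF-IDF features feeding a multinomial Naive Bayes classifier.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(texts, labels)

print(model.predict(["delivery was quick and support was helpful"]))
```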

Posted 2 months ago

Apply

6.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

About the role: This is a unique opportunity to join our fast-growing Global Product Management Data Science team. You will be working with the team managing ‘The Client Experience Digital Platform’ which is the go-to place for all clients to interact with Gartner to get value. Our clients are IT and business leaders around the world. They expect to get help from Gartner on their most critical priorities. The Digital Platform is embedded in their workflows to help them on every step of their journey. Our mission is to deliver tremendous client value by building scalable, intelligent digital products and solutions. We are constantly looking for product management, UX and data science leaders to help us accelerate innovation to disrupt the marketplace and disrupt ourselves. What you will do: Work on Data science projects in close collaboration with the Data Engineering team, Application development team, Product owners and business leaders to deliver high value business capabilities Solve ‘Search’ value stream problems and help improve Gartner’s client experience in finding the most meaningful and valuable insights Build user query understanding and intent refinement models to refine query to content similarity Build and incorporate LLMs in addition to vector search capabilities Be responsible for high quality data science solutions with respect to accuracy and coverage. Be accountable for solutions’ scalability, stability, and business adoption Responsible for maintaining proper documentation and further code-reusability principles Responsible for ownership of algorithms and its enhancements/optimizations as per business requirement Collaborate with Director, Data Science in long term vision, strategy, and solution roadmap to align with bigger business objectives and mission critical priorities of the organization Responsible to pitch ideas, present solutions and influence senior leaders with strong business value propositions Stay on top of fast-moving AI/ML models and technologies. Understand and follow disruptive data science solutions Collaborate with engineering and product teams to launch MVPs and iterate quickly Independently plan and drive data science projects that deliver clear business value What you will need: 6-8 years hands-on experience building predictive models, search systems, or other machine learning/artificial intelligence applications to drive business impact Bachelor’s degree required while a master’s degree in a quantitative field (math, computer science, engineering, etc.) is strongly preferred Demonstrated ability to translate quantitative analysis into actionable business strategies. 
Strong communication skills in technical and business domains Working experience in some of the following data science areas: Machine Learning and Predictive modeling Text mining and Natural Language Processing Search or Recommendation systems Data analytics with multi-dimensional data Generative models Strong working knowledge of Lean product principles, software development lifecycle, and machine learning life cycle Practical, intuitive problem solver with a demonstrated ability to translate business objectives into actionable data science tasks and translate quantitative analysis into actionable business strategies Ability to implement latest ML research to improve our current algorithms Experience and proficiency with Python, machine learning tools (e.g., scikit-learn, spacy, nltk), deep learning (e.g., pytorch, tensorflow), statistical packages (e.g., Scipy), SQL/relational databases (e.g., Oracle) and NoSQL databases (e.g., MongoDB, graph database), distributed machine learning (spark), Linux and shell scripting Experience with cloud computing services such as AWS or Azure ML Ability to work collaboratively across product, data science and technical stakeholders Ability to work in a culture that thrives on feedback and seeks opportunities to stretch outside comfort zone Bias for action and client outcome oriented Who are we? At Gartner, Inc. (NYSE:IT), we guide the leaders who shape the world. Our mission relies on expert analysis and bold ideas to deliver actionable, objective insight, helping enterprise leaders and their teams succeed with their mission-critical priorities. Since our founding in 1979, we’ve grown to more than 21,000 associates globally who support ~14,000 client enterprises in ~90 countries and territories. We do important, interesting and substantive work that matters. That’s why we hire associates with the intellectual curiosity, energy and drive to want to make a difference. The bar is unapologetically high. So is the impact you can have here. What makes Gartner a great place to work? Our sustained success creates limitless opportunities for you to grow professionally and flourish personally. We have a vast, virtually untapped market potential ahead of us, providing you with an exciting trajectory long into the future. How far you go is driven by your passion and performance. We hire remarkable people who collaborate and win as a team. Together, our singular, unifying goal is to deliver results for our clients. Our teams are inclusive and composed of individuals from different geographies, cultures, religions, ethnicities, races, genders, sexual orientations, abilities and generations. We invest in great leaders who bring out the best in you and the company, enabling us to multiply our impact and results. This is why, year after year, we are recognized worldwide as a great place to work. What do we offer? Gartner offers world-class benefits, highly competitive compensation and disproportionate rewards for top performers. In our hybrid work environment, we provide the flexibility and support for you to thrive — working virtually when it's productive to do so and getting together with colleagues in a vibrant community that is purposeful, engaging and inspiring. Ready to grow your career with Gartner? Join us. 
The policy of Gartner is to provide equal employment opportunities to all applicants and employees without regard to race, color, creed, religion, sex, sexual orientation, gender identity, marital status, citizenship status, age, national origin, ancestry, disability, veteran status, or any other legally protected status and to seek to advance the principles of equal employment opportunity.

Gartner is committed to being an Equal Opportunity Employer and offers opportunities to all job seekers, including job seekers with disabilities. If you are a qualified individual with a disability or a disabled veteran, you may request a reasonable accommodation if you are unable or limited in your ability to use or access the Company’s career webpage as a result of your disability. You may request reasonable accommodations by calling Human Resources at +1 (203) 964-0096 or by sending an email to ApplicantAccommodations@gartner.com.

Job Requisition ID: 100434

By submitting your information and application, you confirm that you have read and agree to the country or regional recruitment notice linked below applicable to your place of residence.

Gartner Applicant Privacy Link: https://jobs.gartner.com/applicant-privacy-policy

For efficient navigation through the application, please only use the back button within the application, not the back arrow within your browser.
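Relating to the query-to-content similarity work described in the Gartner listing above, here is a minimal, hedged sketch using scikit-learn (one of the libraries the posting names). The documents, query, and ranking approach are invented for illustration; Gartner's actual search stack is not public, so treat this as an assumption-laden example rather than the team's implementation.

```python
# Hypothetical sketch: ranking documents against a user query with
# TF-IDF vectors and cosine similarity (scikit-learn).
# The documents and query below are made up for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "How to build a cloud migration roadmap for mid-size enterprises",
    "Top strategic technology trends for CIOs",
    "Evaluating vendors for data and analytics platforms",
]
query = "cloud migration strategy"

vectorizer = TfidfVectorizer(stop_words="english")
doc_vectors = vectorizer.fit_transform(documents)   # shape: (n_docs, n_terms)
query_vector = vectorizer.transform([query])        # shape: (1, n_terms)

scores = cosine_similarity(query_vector, doc_vectors).ravel()
for doc, score in sorted(zip(documents, scores), key=lambda pair: -pair[1]):
    print(f"{score:.3f}  {doc}")
```

In production, a team like this would more likely combine dense embeddings, a vector index, and an LLM for intent refinement, as the listing notes; TF-IDF simply keeps the sketch dependency-light.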

Posted 2 months ago

Apply

3.0 years

0 Lacs

India

Remote

Python Developer
We are looking for an enthusiastic and skilled Python Developer with a passion for AI-based application development to join our growing technology team. This position offers the opportunity to work at the intersection of software engineering and data analytics, contributing to innovative AI-driven solutions that drive business impact. If you have a strong foundation in Python, a flair for problem-solving, and an eagerness to build intelligent systems, we would love to meet you!

Key Responsibilities
• Develop and deploy AI-focused applications using Python and associated frameworks.
• Collaborate with Developers, Product Owners, and Business Analysts to design and implement machine learning pipelines.
• Create interactive dashboards and data visualizations for actionable insights.
• Automate data collection, transformation, and processing tasks.
• Utilize SQL for data extraction, manipulation, and database management.
• Apply statistical methods and algorithms to derive insights from large datasets (see the sketch after this listing).

Required Skills and Qualifications
• 2–3 years of experience as a Python Developer, with a strong portfolio of relevant projects.
• Bachelor’s degree in Computer Science, Data Science, or a related technical field.
• In-depth knowledge of Python, including frameworks and libraries such as NumPy, Pandas, SciPy, and PyTorch.
• Proficiency in front-end technologies like HTML, CSS, and JavaScript.
• Familiarity with SQL and NoSQL databases and their best practices.
• Excellent communication and team-building skills.
• Strong problem-solving abilities with a focus on innovation and self-learning.
• Knowledge of cloud platforms such as AWS is a plus.

Additional Requirements
This opportunity enhances your work-life balance with an allowance for remote work. To be successful, your computer hardware and internet must meet these minimum requirements:
1. Laptop or Desktop:
• Operating System: Windows
• Screen Size: 14 inches
• Screen Resolution: FHD (1920×1080)
• Processor: i5 or higher
• RAM: Minimum 8 GB (must)
• Type: Windows laptop
• Software: AnyDesk
• Internet Speed: 100 Mbps or higher

About ARDEM
ARDEM is a leading Business Process Outsourcing and Business Process Automation service provider. For over twenty years, ARDEM has successfully delivered business process outsourcing and business process automation services to our clients in the USA and Canada. We are growing rapidly. We are constantly innovating to become a better service provider for our customers. We continuously strive for excellence to become the best Business Process Outsourcing and Business Process Automation company.
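To make the "statistical methods on large datasets" responsibility above concrete, here is a minimal, hedged sketch with Pandas and SciPy (both named in the requirements). The CSV path, column names, and the two-group comparison are invented for illustration, not ARDEM specifics.

```python
# Hypothetical sketch: compare a metric between two groups in a dataset
# using Pandas for wrangling and SciPy for a significance test.
import pandas as pd
from scipy import stats

# Invented file and columns; replace with a real dataset.
df = pd.read_csv("transactions.csv")   # columns: region, processing_time
print(df.groupby("region")["processing_time"].describe())

east = df.loc[df["region"] == "east", "processing_time"]
west = df.loc[df["region"] == "west", "processing_time"]

# Welch's t-test: does mean processing time differ between the two regions?
t_stat, p_value = stats.ttest_ind(east, west, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```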

Posted 2 months ago

Apply

2.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Company Description
Syngenta is one of the world’s leading agriculture innovation companies (part of the Syngenta Group), dedicated to improving global food security by enabling millions of farmers to make better use of available resources. Through world-class science and innovative crop solutions, our 60,000 people in over 100 countries are working to transform how crops are grown. We are committed to rescuing land from degradation, enhancing biodiversity and revitalizing rural communities. A diverse workforce and an inclusive workplace environment are enablers of our ambition to be the most collaborative and trusted team in agriculture. Our employees reflect the diversity of our customers, the markets where we operate and the communities which we serve. No matter what your position, you will have a vital role in safely feeding the world and taking care of our planet. To learn more visit: www.syngenta.com

Job Description
Purpose
Be an integral part of the P&S Data Science team that applies technical expertise in data management, data science, machine learning, artificial intelligence, and automation to design, build, deploy, and maintain solutions across multiple countries. Work with project teams and SMEs to understand business requirements and develop appropriate data and AI/ML solutions. Contribute solutions with explorative, predictive, or prescriptive models, utilizing optimization, simulation, and machine learning techniques, to existing and new projects. Work independently to build applications for data collection and processing, exploration and visualization, analysis, regression, classification, and generation as required by the project and business teams.

Accountabilities
Develop and implement complex statistical models, machine learning algorithms, and data mining techniques to extract insights from large datasets.
Lead data science projects from conception to completion, defining scope, methodology, and deliverables.
Collaborate with business leaders to translate data insights into actionable strategies and recommendations.
Guide and mentor junior data scientists, fostering their professional development and technical skills.
Contribute to the design and improvement of data architecture, pipelines, and storage solutions.
Stay current with the latest advancements in data science and introduce new techniques or technologies to the organization.
Establish and maintain standards for data quality, documentation, and ethical use of data.
Present complex findings to both technical and non-technical audiences, including executive leadership.
Address complex business challenges using data-driven approaches and creative solutions.
Improve the efficiency and scalability of data processing and model deployment.
Develop deep knowledge of Syngenta’s P&S operations to better contextualize data insights.
Own the design, build, and deploy process, including collaboration with users and multi-disciplinary teams to fulfil the user, business and technical requirements.

Qualifications
Required Knowledge & Technical Skills
Bachelor’s degree in Computer Science, Data Science, Engineering, Mathematics, or a related discipline (or equivalent practical experience).
Understanding of ETL processes and data pipeline design.
Required development skills:
Data Science – TensorFlow, scikit-learn, SciPy, NumPy, Pandas, XGBoost, Keras, etc.
Programming – Python with PyTorch and TensorFlow
Data – SQL, EDA, descriptive and predictive analysis, visualization
Preferred additional skills:
Web services – RESTful APIs and API testing, JSON, etc.
Front end – JavaScript, Flask, React, Node.js, Vue, Django, or other.
Tools – Git, npm, pip, Heroku, or other tools.
Knowledge of and hands-on experience with supervised and unsupervised ML using logistic/multivariate regression, gradient boosting, decision trees, neural networks, random forests, support vector machines, naive Bayes, time series, optimization, etc. (see the sketch after this listing).
Preference for proven experience in adapting algorithms to the required models – regression, decision trees, random forests, LLMs.
Preference for experience in the production and supply domain: production planning, supply chain, logistics, track and trace, CRM, etc.
Preference for experience in deep learning model development in agriculture, supply chain or related domains.
Documentation of APIs, models, and operational manuals (Markdown, etc.).
Must be able to work on end-to-end activities, from design and development through deployment.

Required Experience
Previous internship, placement, or project experience in data engineering, data science, software development, or a related field.
At least 2 years of experience building data science projects using AI/ML models.
Exposure to cloud platforms such as AWS, Azure, Google Cloud, or Databricks (preferred but not essential).

Additional Information
Note: Syngenta is an Equal Opportunity Employer and does not discriminate in recruitment, hiring, training, promotion or any other employment practices for reasons of race, color, religion, gender, national origin, age, sexual orientation, gender identity, marital or veteran status, disability, or any other legally protected status.

Follow us on Twitter & LinkedIn:
https://twitter.com/SyngentaAPAC
https://www.linkedin.com/company/syngenta/
India page: https://www.linkedin.com/company/70489427/admin/
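As a concrete, hedged illustration of the supervised-ML toolkit this listing asks for, here is a minimal random-forest workflow with scikit-learn. The synthetic data stands in for any real production-and-supply dataset; it is not a Syngenta example.

```python
# Hypothetical sketch: a small supervised-learning workflow with scikit-learn.
# Synthetic data is used so the example is self-contained.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Stand-in for a real production/supply dataset.
X, y = make_classification(n_samples=1_000, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

preds = model.predict(X_test)
print(f"Hold-out accuracy: {accuracy_score(y_test, preds):.3f}")
```

The same train/evaluate shape applies whether the estimator is a gradient-boosted model, a time-series forecaster, or a neural network; only the feature engineering and validation scheme change.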

Posted 2 months ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Description
Specialist, AFR
Gurgaon, Haryana, India

This role is essential for advancing the strategic objectives of the insurance actuarial function by implementing analytics and transformational initiatives. The team is primarily responsible for innovation and research & development. Utilizing high-end analytical and technical skills, team members design models and methodologies tailored for actuarial reserving processes, as well as analyze financial data to generate actionable insights that aid decision-making. This position offers an exciting opportunity to participate in a range of analytics projects—descriptive, diagnostic, predictive, and prescriptive—while also focusing on Artificial Intelligence and cloud migration, thereby enhancing the organization’s ability to adapt to emerging business needs.

What You’ll Be Doing
What will your essential responsibilities include?
Implementation of analytics projects and other transformational projects that directly impact the organization's strategic objectives.
Run tools built in Python, R, and SQL, and execute ETL processes to facilitate business deliverables, with a focus on future development work to drive actionable insights and business impact.
Write high-quality, effective code that can be easily scaled across platforms using Python/R programming.
Deepen the understanding of the business to contribute to other analytics initiatives, including predictive modeling, and collaborate on data-driven projects with cross-functional teams.
Learn the in-house software platforms used for actuarial reserving and manage their use in the processes, contributing to the enhancement of analytical capabilities (see the sketch after this listing).
Manage quarterly/monthly/yearly financial data for MI reporting and collaborate with stakeholders to provide valuable insights and support decision-making.
Partner with global technology teams to deliver changes to our data and processes to meet strategic goals, actively participating in transformative projects, including the move to the cloud.
Demonstrate proactive communication with business users, Development, Technology, Production Support and Delivery teams, and senior management to drive collaborative problem-solving and knowledge sharing.
Develop and maintain process documentation to ensure transparency and knowledge transfer within the team and across stakeholders.
Support ad-hoc activities to address emerging business needs and contribute to the agility of the team.
You will report to the Lead, AFR.

What You Will Bring
We’re looking for someone who has these abilities and skills:

Required Skills And Abilities
University graduate (B.E/B.Tech/CS/IT/BSc).
Relevant years of work experience, preferably in the insurance industry, financial services, or consultancy.
Good knowledge of statistics and mathematical functions.
Good hands-on computer application skills, specifically Python programming, SQL, Power BI & MS Excel.
In-depth knowledge of Python software development, including frameworks, tools, and systems (NumPy, Pandas, Django, SciPy, PyTorch, etc.).

Desired Skills And Abilities
Good-to-have knowledge of R programming (dplyr) and QlikView.
Excellent analytical, research, and problem-solving skills.
Understanding of cloud principles with good exposure to the Microsoft Azure stack (Databricks, SQL DB, etc.).
Understanding of AI fundamentals, including exposure to LLMs.

Who We Are
AXA XL, the P&C and specialty risk division of AXA, is known for solving complex risks.
For mid-sized companies, multinationals and even some inspirational individuals we don’t just provide re/insurance, we reinvent it. How? By combining a comprehensive and efficient capital platform, data-driven insights, leading technology, and the best talent in an agile and inclusive workspace, empowered to deliver top client service across all our lines of business − property, casualty, professional, financial lines and specialty. With an innovative and flexible approach to risk solutions, we partner with those who move the world forward. Learn more at axaxl.com

What We Offer
Inclusion
AXA XL is committed to equal employment opportunity and will consider applicants regardless of gender, sexual orientation, age, ethnicity and origins, marital status, religion, disability, or any other protected characteristic. At AXA XL, we know that an inclusive culture enables business growth and is critical to our success. That’s why we have made a strategic commitment to attract, develop, advance and retain the most inclusive workforce possible, and create a culture where everyone can bring their full selves to work and reach their highest potential. It’s about helping one another — and our business — to move forward and succeed.
Five Business Resource Groups focused on gender, LGBTQ+, ethnicity and origins, disability and inclusion, with 20 chapters around the globe.
Robust support for flexible working arrangements.
Enhanced family-friendly leave benefits.
Named to the Diversity Best Practices Index.
Signatory to the UK Women in Finance Charter.
Learn more at axaxl.com/about-us/inclusion-and-diversity.
AXA XL is an Equal Opportunity Employer.

Total Rewards
AXA XL’s Reward program is designed to take care of what matters most to you, covering the full picture of your health, wellbeing, lifestyle and financial security. It provides competitive compensation and personalized, inclusive benefits that evolve as you do. We’re committed to rewarding your contribution for the long term, so you can be your best self today and look forward to the future with confidence.

Sustainability
At AXA XL, sustainability is integral to our business strategy. In an ever-changing world, AXA XL protects what matters most for our clients and communities. We know that sustainability is at the root of a more resilient future. Our 2023-26 sustainability strategy, called “Roots of Resilience”, focuses on protecting natural ecosystems, addressing climate change, and embedding sustainable practices across our operations.

Our Pillars
Valuing nature: How we impact nature affects how nature impacts us. Resilient ecosystems - the foundation of a sustainable planet and society - are essential to our future. We’re committed to protecting and restoring nature - from mangrove forests to the bees in our backyard - by increasing biodiversity awareness and inspiring clients and colleagues to put nature at the heart of their plans.
Addressing climate change: The effects of a changing climate are far-reaching and significant. Unpredictable weather, increasing temperatures, and rising sea levels cause both social inequalities and environmental disruption. We’re building a net zero strategy, developing insurance products and services, and mobilizing to advance thought leadership and investment in societal-led solutions.
Integrating ESG: All companies have a role to play in building a more resilient future. Incorporating ESG considerations into our internal processes and practices builds resilience from the roots of our business.
We’re training our colleagues, engaging our external partners, and evolving our sustainability governance and reporting.
AXA Hearts in Action: We have established volunteering and charitable giving programs to help colleagues support causes that matter most to them, known as AXA XL’s “Hearts in Action” programs. These include our Matching Gifts program, Volunteering Leave, and our annual volunteering day - the Global Day of Giving.
For more information, please see axaxl.com/sustainability.
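Tying back to the Python-based reserving analytics described in this AFR listing, here is a minimal, hedged Pandas sketch that pivots claims records into a run-off-style triangle, a shape commonly used in reserving and MI reporting. The column names and figures are invented; AXA XL's in-house reserving platforms are not shown here.

```python
# Hypothetical sketch: pivot claim payments into an accident-year /
# development-period triangle for reserving-style reporting.
import pandas as pd

# Invented records: (accident_year, development_period, paid_amount)
claims = pd.DataFrame(
    {
        "accident_year": [2021, 2021, 2021, 2022, 2022, 2023],
        "development_period": [1, 2, 3, 1, 2, 1],
        "paid_amount": [100.0, 60.0, 30.0, 120.0, 70.0, 150.0],
    }
)

triangle = claims.pivot_table(
    index="accident_year",
    columns="development_period",
    values="paid_amount",
    aggfunc="sum",
)

# Cumulative paid by development period, row-wise.
cumulative = triangle.cumsum(axis=1)
print(cumulative)
```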

Posted 2 months ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Summary:
We are looking for an experienced Big Data Administrator with strong Linux and AWS infrastructure experience to join our growing data team. The ideal candidate will have deep knowledge of Big Data platforms, hands-on experience managing large clusters in production, and a solid foundation in scripting and automation. You will play a crucial role in maintaining, optimizing, and scaling our data infrastructure to meet business needs.

Must-Have Skills:
Strong understanding of Linux OS, networking, and security fundamentals.
Proven experience with the AWS cloud platform – infrastructure, services, and architecture.
Expertise with Infrastructure as Code (IaC) tools such as Terraform or Ansible.
Hands-on experience managing large Big Data clusters (at least one of Cloudera, Hortonworks, EMR).
Strong experience in observability for Big Data platforms using tools like Prometheus, InfluxDB, Dynatrace, Grafana, and Splunk.
Expert-level understanding of Hadoop Distributed File System (HDFS) and Hadoop YARN.
Familiarity with Hadoop file formats: ORC, Parquet, Avro, etc.
Deep knowledge of compute engines such as Hive (Tez, LLAP), Presto, and Apache Spark.
Ability to interpret query plans and optimize performance for complex SQL queries on Hive and Spark (see the sketch after this listing).
Experience supporting Spark with Python (PySpark) and R (sparklyr, SparkR).
Strong scripting skills – at least one of shell scripting or Python.
Experience collaborating with Data Analysts and Data Scientists and supporting tools like SAS, RStudio, JupyterHub, and H2O.
Ability to read and understand code written in Java, Python, R, and Scala.

Nice-to-Have Skills:
Experience with workflow orchestration tools such as Apache Airflow or Oozie.
Familiarity with analytical libraries like Pandas, NumPy, SciPy, PyTorch, etc.
Experience with DevOps tools like Packer, Chef, or Jenkins.
Knowledge of Active Directory, Windows OS, and VDI platforms such as Citrix or AWS Workspaces.

Key Responsibilities:
Administer and manage Big Data infrastructure in a hybrid cloud environment.
Ensure high availability, scalability, and security of Big Data platforms.
Collaborate with DevOps, Data Engineering, and Data Science teams to support data initiatives.
Monitor, troubleshoot, and optimize platform performance.
Automate routine tasks using scripts and IaC tools.
Provide support and guidance for analytical tools and platforms.
Participate in capacity planning and architectural reviews.
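To illustrate the query-plan work called out above, here is a minimal, hedged PySpark sketch that loads a Parquet dataset and prints the plan Spark will execute. The path and column names are placeholders, not cluster specifics.

```python
# Hypothetical sketch: inspect the query plan for a simple aggregation in PySpark.
# The Parquet path and column names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("plan-inspection").getOrCreate()

events = spark.read.parquet("s3://example-bucket/events/")   # placeholder path
daily_counts = (
    events
    .filter(F.col("event_type") == "click")
    .groupBy("event_date")
    .agg(F.count("*").alias("clicks"))
)

# Print the parsed, analyzed, optimized, and physical plans; reading these is
# the starting point for spotting full scans, shuffles, and join strategies to tune.
daily_counts.explain(mode="extended")
```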

Posted 2 months ago

Apply

5.0 - 9.0 years

7 - 11 Lacs

Bengaluru

Work from Office

Dreaming big is in our DNA. It’s who we are as a company. It’s our culture. It’s our heritage. And more than ever, it’s our future. A future where we’re always looking forward, always serving up new ways to meet life’s moments. A future where we keep dreaming bigger. We look for people with passion, talent, and curiosity, and provide them with the teammates, resources and opportunities to unleash their full potential. The power we create together when we combine your strengths with ours is unstoppable. Are you ready to join a team that dreams as big as you do?

AB InBev GCC was incorporated in 2014 as a strategic partner for Anheuser-Busch InBev. The center leverages the power of data and analytics to drive growth for critical business functions such as operations, finance, people, and technology. The teams are transforming Operations through Tech and Analytics. Do You Dream Big? We Need You.

Job Description
Job Title: Senior ML Engineer
Location: Bangalore
Reporting to: Director, Data Analytics

Purpose of the role
Anheuser-Busch InBev (AB InBev)’s Supply Analytics is responsible for building competitive, differentiated solutions that enhance brewery efficiency through data-driven insights. We optimize processes, reduce waste, and improve productivity by leveraging advanced analytics and AI-driven solutions. The Senior MLE will be responsible for the end-to-end deployment of machine learning models on edge devices (see the sketch after this listing). You will take ownership of all aspects of edge deployment, including model optimization, scaling complexities, containerization, and infrastructure management, ensuring high availability and performance.

Key tasks & accountabilities
Lead the entire edge deployment lifecycle, from model training to deployment and monitoring on edge devices.
Develop and maintain a scalable Edge ML pipeline that enables real-time analytics at brewery sites.
Optimize and containerize models using Portainer, Docker, and Azure Container Registry (ACR) to ensure efficient execution in constrained edge environments.
Own and manage the GitHub repository, ensuring structured, well-documented, and modularized code for seamless deployments.
Establish robust CI/CD pipelines for continuous integration and deployment of models and services.
Implement logging, monitoring, and alerting for deployed models to ensure reliability and quick failure recovery.
Ensure compliance with security and governance best practices for data and model deployment in edge environments.
Document the thought process and create artifacts on the team repo/wiki that can be shared with business and engineering for sign-off.
Review code quality and designs developed by peers.
Significantly improve the performance and reliability of our code so that it produces high-quality, reproducible results.
Develop internal tools/utils that improve the productivity of the entire team.
Collaborate with other team members to advance the team’s ability to ship high-quality code fast!
Mentor/coach junior team members to continuously upskill them.
Maintain basic developer hygiene, including but not limited to writing tests, using loggers, and keeping a README, to name a few.

Qualifications, Experience, Skills
Level of educational attainment required (1 or more of the following):
Bachelor’s or master’s degree in, but not limited to, Computer Applications, Computer Science, or any engineering discipline.

Previous Work Experience
5+ years of real-world experience developing scalable, high-quality ML models.
Strong problem-solving skills with an owner’s mindset: proactively identifying and resolving bottlenecks.

Technical Skills Required
Proficiency with pandas, NumPy, SciPy, scikit-learn, statsmodels, TensorFlow.
Good understanding of statistical computing and parallel processing.
Experience with advanced TensorFlow (distributed), NumPy, joblib.
Good understanding of memory management and parallel processing in Python.
Profiling and optimization of production code.
Strong Python coding skills; exposure to working in IDEs such as VS Code or PyCharm.
Experience in code versioning using Git and maintaining a modularized code base for multiple deployments.
Experience working in an Agile environment.
In-depth understanding of Databricks (Workflows, cluster creation, repo management).
In-depth understanding of machine learning solutions in the Azure cloud.
Best practices in coding standards, unit testing, and automation.
Proficiency in Docker, Kubernetes, Portainer, and container orchestration for edge computing.

Other Skills Required
Experience in real-time analytics and edge AI deployments.
Exposure to DevOps practices, including infrastructure automation and monitoring tools.
Contributions to OSS or Stack Overflow.
And above all of this, an undying love for beer! We dream big to create a future with more cheers.
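As a hedged illustration of the model-optimization step in an edge deployment lifecycle like the one described above, here is a minimal TensorFlow Lite conversion sketch. The SavedModel path and model name are placeholders, and TFLite is one common route to constrained edge devices, not necessarily the stack this team uses.

```python
# Hypothetical sketch: shrink a trained TensorFlow SavedModel for edge execution
# by converting it to TensorFlow Lite with default optimizations (quantization).
import tensorflow as tf

SAVED_MODEL_DIR = "models/defect_detector"   # placeholder path to a trained model

converter = tf.lite.TFLiteConverter.from_saved_model(SAVED_MODEL_DIR)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # post-training quantization
tflite_model = converter.convert()

with open("defect_detector.tflite", "wb") as f:
    f.write(tflite_model)

print(f"Wrote {len(tflite_model) / 1024:.1f} KiB TFLite model for the edge runtime")
```

The resulting .tflite artifact could then be baked into a container image and rolled out through a Docker/Portainer pipeline of the kind the listing mentions.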

Posted 2 months ago

Apply

3.0 years

0 Lacs

Kanayannur, Kerala, India

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Requirements (Qualifications):
Bachelor’s or Master’s degree with 3+ years of strong Python development experience

Core skills:
OOP concepts: functions, classes, decorators
Python, with experience in any one framework: Flask, Django, or FastAPI (see the sketch after this listing)
Python libraries (Pandas, TensorFlow, NumPy, SciPy)
AWS cloud experience
Docker, Kubernetes and microservices
Postgres/MySQL
Git, SVN or any code repository tool
Design patterns
SQLAlchemy or any ORM (Object Relational Mapper) library

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
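As a small, hedged illustration of the framework skills this EY listing asks for, here is a minimal FastAPI service with a typed request body. The routes and model are invented for illustration and are not tied to any EY project.

```python
# Hypothetical sketch: a minimal FastAPI service with a typed request body.
# Run with: uvicorn app:app --reload   (assuming this file is saved as app.py)
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class Item(BaseModel):
    name: str
    price: float


@app.post("/items")
def create_item(item: Item) -> dict:
    # In a real service this would persist via an ORM such as SQLAlchemy.
    return {"message": f"received {item.name}", "price": item.price}


@app.get("/health")
def health() -> dict:
    return {"status": "ok"}
```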

Posted 2 months ago

Apply

3.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Requirements (Qualifications):
Bachelor’s or Master’s degree with 3+ years of strong Python development experience

Core skills:
OOP concepts: functions, classes, decorators
Python, with experience in any one framework: Flask, Django, or FastAPI
Python libraries (Pandas, TensorFlow, NumPy, SciPy)
AWS cloud experience
Docker, Kubernetes and microservices
Postgres/MySQL
Git, SVN or any code repository tool
Design patterns
SQLAlchemy or any ORM (Object Relational Mapper) library

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 2 months ago

Apply

3.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Requirements (Qualifications):
Bachelor’s or Master’s degree with 3+ years of strong Python development experience

Core skills:
OOP concepts: functions, classes, decorators
Python, with experience in any one framework: Flask, Django, or FastAPI
Python libraries (Pandas, TensorFlow, NumPy, SciPy)
AWS cloud experience
Docker, Kubernetes and microservices
Postgres/MySQL
Git, SVN or any code repository tool
Design patterns
SQLAlchemy or any ORM (Object Relational Mapper) library

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 2 months ago

Apply


Featured Companies