
1905 Numpy Jobs - Page 25

JobPe aggregates job listings for easy access; applications are submitted directly on the original job portal.

4.0 years

0 Lacs

Mumbai, Maharashtra, India

Remote


About This Role
Data Strategy & Solutions (DS&S) is accelerating the future of investment research at BlackRock with alternative data, insights, and emerging technology. DS&S seeks a data engineer who is high-reaching and passionate about the extraction and processing of unstructured web data. In this role, the candidate will collaborate with experienced engineers and teams to build and maintain data pipelines, develop internal web/data extraction tools and create dashboards to support investment decision-making. This is a great opportunity for someone eager to grow within a fast-paced, innovative environment while working on impactful projects.

You should be
- Someone who is passionate about solving sophisticated business problems through data
- A developer building large-scale scraping and extraction pipelines
- Excited to orchestrate workflows and manage deployment within GCP and Snowflake
- Assisting with monitoring, quality assurance, and maintenance of data extraction systems
- Enthusiastic to collaborate with global teams to validate and prototype new datasets, rapidly iterating toward usable formats for portfolio research
- Always eager to contribute to internal tooling or dashboards that improve dataset accessibility and auditability

What We're Looking For
- 2-4 years of experience in data engineering, data extraction, web scraping, or unstructured data processing
- Strong proficiency in Python, Pandas/NumPy, regex and text processing, and shell scripting/Bash
- Familiarity with web scraping tools such as Beautiful Soup, and with data governance
- Knowledge of frontend/backend development (React, APIs, Python Flask or FastAPI, databases, cloud technologies) is a plus
- Someone capable of working with unstructured or alternative data sources
- Competence in deploying solutions on Google Cloud Platform (GCP), particularly BigQuery and Cloud Functions, along with experience with Snowflake for data modeling and performance tuning
- Experience working in a fast-paced environment with evolving priorities
- Effective communication and an ability to collaborate across technical and non-technical teams
- Data product lifecycle management from acquisition to QC and delivery is a plus
- Strong problem-solving skills with attention to detail and a proactive approach
#EarlyCareers

Our Benefits
To help you stay energized, engaged and inspired, we offer a wide range of benefits including a strong retirement plan, tuition reimbursement, comprehensive healthcare, support for working parents and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about.

Our hybrid work model
BlackRock's hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person, aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock.

About BlackRock
At BlackRock, we are all connected by one mission: to help more and more people experience financial well-being. Our clients, and the people they serve, are saving for retirement, paying for their children's educations, buying homes and starting businesses. Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress. This mission would not be possible without our smartest investment, the one we make in our employees. It's why we're dedicated to creating an environment where our colleagues feel welcomed, valued and supported with networks, benefits and development opportunities to help them thrive. For additional information on BlackRock, please visit @blackrock | Twitter: @blackrock | LinkedIn: www.linkedin.com/company/blackrock

BlackRock is proud to be an Equal Opportunity Employer. We evaluate qualified applicants without regard to age, disability, family status, gender identity, race, religion, sex, sexual orientation and other protected attributes at law.
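For illustration, the kind of unstructured web extraction this role describes might look like the minimal sketch below; the URL, CSS selector, and column names are hypothetical and not from the posting.

```python
import re

import pandas as pd
import requests
from bs4 import BeautifulSoup

URL = "https://example.com/press-releases"  # hypothetical source page


def scrape_releases(url: str) -> pd.DataFrame:
    """Fetch a page and extract title/date pairs into a tidy DataFrame."""
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    rows = []
    for item in soup.select("div.release"):  # hypothetical selector
        text = item.get_text(" ", strip=True)
        # Pull an ISO-like date out of the free text with a regex.
        match = re.search(r"\d{4}-\d{2}-\d{2}", text)
        rows.append({"title": text, "date": match.group(0) if match else None})
    df = pd.DataFrame(rows)
    df["date"] = pd.to_datetime(df["date"], errors="coerce")
    return df.dropna(subset=["date"]).drop_duplicates()


if __name__ == "__main__":
    print(scrape_releases(URL).head())
```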

Posted 1 week ago

Apply

4.0 - 7.0 years

0 Lacs

Mumbai Metropolitan Region

On-site


Your Team Responsibilities
Datadesk is the go-to team for any data requirements and new vendor data on-boardings across business units within MSCI. As part of Datadesk, you will handle a diverse range of datasets, including Equity, FI, Crypto, Pharma, Thematic, ESG, Private, and more. Datadesk is a centralized team that manages the askData service, participates in the integration of recently acquired companies, and is an early adopter of new technologies (AI, Cloud, DSP, etc.).

Your Key Responsibilities
- Utilize Python and frameworks like pandas, numpy, and dask to process, aggregate, and manipulate large financial datasets.
- Apply statistical modeling and AI techniques to improve data processing, forecasting, and decision-making.
- Identify opportunities for AI adoption in data processing, analytics, and decision-making.
- Ensure data quality, integrity, and consistency across different sources.
- Create presentations and reports that effectively communicate data findings to stakeholders.
- Take ownership of assigned tasks with minimal supervision, ensuring timely and high-quality deliverables.

Your Skills And Experience That Will Help You Excel
- Degree in Computer Science, Statistics, Information Technology and/or Finance with 4-7 years of professional experience.
- Proficiency in Python and its various frameworks like pandas, numpy and dask.
- Good knowledge of statistics and statistical modelling, used for aggregating big datasets, resampling data, and explaining data.
- Experience with a data visualization framework like Power BI, Streamlit or any other Python frontend framework is a plus.
- Strong interest in finance; work experience in finance and/or capital markets.
- Experience dealing with providers of financial data products (MSCI, Refinitiv, ICE, S&P, FactSet, etc.) preferred.
- Good communication skills (written and oral) and proficiency in creating presentations.
- An independent worker who can drive parts of the work with minimal oversight.

About MSCI
What we offer you
- Transparent compensation schemes and comprehensive employee benefits, tailored to your location, ensuring your financial security, health, and overall wellbeing.
- Flexible working arrangements, advanced technology, and collaborative workspaces.
- A culture of high performance and innovation where we experiment with new ideas and take responsibility for achieving results.
- A global network of talented colleagues, who inspire, support, and share their expertise to innovate and deliver for our clients.
- A Global Orientation program to kickstart your journey, followed by access to our Learning@MSCI platform, LinkedIn Learning Pro and tailored learning opportunities for ongoing skills development.
- Multi-directional career paths that offer professional growth and development through new challenges, internal mobility and expanded roles.
- An environment that builds a sense of inclusion, belonging and connection, including eight Employee Resource Groups: All Abilities, Asian Support Network, Black Leadership Network, Climate Action Network, Hola! MSCI, Pride & Allies, Women in Tech, and Women's Leadership Forum.

At MSCI we are passionate about what we do, and we are inspired by our purpose: to power better investment decisions. You'll be part of an industry-leading network of creative, curious, and entrepreneurial pioneers. This is a space where you can challenge yourself, set new standards and perform beyond expectations for yourself, our clients, and our industry.

MSCI is a leading provider of critical decision support tools and services for the global investment community. With over 50 years of expertise in research, data, and technology, we power better investment decisions by enabling clients to understand and analyze key drivers of risk and return and confidently build more effective portfolios. We create industry-leading research-enhanced solutions that clients use to gain insight into and improve transparency across the investment process.

MSCI Inc. is an equal opportunity employer. It is the policy of the firm to ensure equal employment opportunity without discrimination or harassment on the basis of race, color, religion, creed, age, sex, gender, gender identity, sexual orientation, national origin, citizenship, disability, marital and civil partnership/union status, pregnancy (including unlawful discrimination on the basis of a legally protected parental leave), veteran status, or any other characteristic protected by law. MSCI is also committed to working with and providing reasonable accommodations to individuals with disabilities. If you are an individual with a disability and would like to request a reasonable accommodation for any part of the application process, please email Disability.Assistance@msci.com and indicate the specifics of the assistance needed. Please note, this e-mail is intended only for individuals who are requesting a reasonable workplace accommodation; it is not intended for other inquiries.

To all recruitment agencies: MSCI does not accept unsolicited CVs/Resumes. Please do not forward CVs/Resumes to any MSCI employee, location, or website. MSCI is not responsible for any fees related to unsolicited CVs/Resumes.

Note on recruitment scams: We are aware of recruitment scams where fraudsters impersonating MSCI personnel may try to elicit personal information from job seekers. Read our full note on careers.msci.com.
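As an illustration of the pandas/numpy/dask work this role describes, the minimal sketch below aggregates a hypothetical set of daily price files; the file layout and column names are assumptions, not from the posting.

```python
import dask.dataframe as dd
import numpy as np

# Hypothetical layout: many CSV files of daily closing prices
# with columns ticker, date, close.
prices = dd.read_csv("prices/*.csv", parse_dates=["date"])
prices["year"] = prices["date"].dt.year
prices["month"] = prices["date"].dt.month

# Per-ticker monthly statistics, built lazily across partitions and
# materialised into a pandas DataFrame by .compute().
monthly = (
    prices.groupby(["ticker", "year", "month"])["close"]
          .agg(["mean", "std", "count"])
          .compute()
)

# NumPy on the materialised result: dispersion relative to the mean level.
monthly["rel_dispersion"] = np.round(monthly["std"] / monthly["mean"], 4)
print(monthly.head())
```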

Posted 1 week ago

Apply

2.0 - 3.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Job Description
We are seeking a highly motivated and enthusiastic Junior Data Scientist with 2-3 years of experience to join our data science team. This role offers an exciting opportunity to contribute to both traditional Machine Learning projects for our commercial IoT platform (EDGE Live) and cutting-edge Generative AI initiatives.

Position: Data Scientist
Division & Department: Enabling Functions_Business Technology Group (BTG)
Reporting To: Customer & Commercial Experience Products Leader

Educational Qualifications
Bachelor's degree in Mechanical Engineering, Computer Science, Data Science, Mathematics, or a related field.

Experience
- 2-3 years of hands-on experience with machine learning
- Exposure to Generative AI concepts and techniques, such as Large Language Models (LLMs) and RAG architecture
- Experience in manufacturing and with an IoT platform is preferable

Role And Responsibilities
Machine Learning (ML)
- Assist in the development and implementation of machine learning models using frameworks such as TensorFlow, PyTorch, or scikit-learn.
- Help with Python development to integrate models with the overall application.
- Monitor and evaluate model performance using appropriate metrics and techniques.
Generative AI
- Build GenAI-based tools for various business use cases by fine-tuning and adapting pre-trained generative models.
- Support the exploration of and experimentation with Generative AI models.
Research & Learning
- Stay up-to-date with the latest advancements and help with POCs.
- Proactively research and propose new techniques and tools to improve our data science capabilities.
Collaboration And Communication
- Work closely with cross-functional teams, including product managers, engineers, and business stakeholders, to understand requirements and deliver impactful solutions.
- Communicate findings, model performance, and technical concepts to both technical and non-technical audiences.

Technical Competencies
- Programming: Proficiency in Python, with experience in libraries like numpy, pandas, and matplotlib for data manipulation and visualization.
- ML Frameworks: Experience with TensorFlow, PyTorch, or scikit-learn.
- Cloud & Deployment: Basic understanding of cloud platforms such as Databricks, Google Cloud Platform (GCP), or Microsoft Azure for model deployment.
- Data Processing & Evaluation: Knowledge of data preprocessing, feature engineering, and evaluation metrics such as accuracy, F1-score, and RMSE.
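A minimal sketch of the evaluation metrics named above (accuracy, F1-score, RMSE), using scikit-learn on synthetic data; the data and model choice are illustrative only.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score, mean_squared_error
from sklearn.model_selection import train_test_split

# Synthetic stand-in for an IoT-style classification problem.
X, y = make_classification(n_samples=2000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42).fit(X_train, y_train)
pred = model.predict(X_test)

print("accuracy:", accuracy_score(y_test, pred))
print("F1      :", f1_score(y_test, pred))

# RMSE is more natural for regression; shown here on the probability estimates.
proba = model.predict_proba(X_test)[:, 1]
print("RMSE    :", np.sqrt(mean_squared_error(y_test, proba)))
```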

Posted 1 week ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Praxair India Private Limited | Business Area: Digitalisation
Data Scientist for AI Products (Global)
Bangalore, Karnataka, India | Working Scheme: On-Site | Job Type: Regular / Permanent / Unlimited / FTE | Reference Code: req23348

It's about being what's next. What's in it for you?
A Data Scientist for AI Products (Global) will be responsible for working in the Artificial Intelligence team, Linde's global corporate AI division, engaged with real business challenges and opportunities in multiple countries. The focus of this role is to support the AI team with extending existing and building new AI products for a vast number of use cases across Linde's business and value chain. You'll collaborate across different business and corporate functions in an international team composed of Project Managers, Data Scientists, and Data and Software Engineers in Linde's Global AI team. At Linde, the sky is not the limit. If you're looking to build a career where your work reaches beyond your job description and betters the people with whom you work, the communities we serve, and the world in which we all live, at Linde, your opportunities are limitless. Be Linde. Be Limitless.

Making an impact. What will you do?
- Work directly with a variety of different data sources, types and structures to derive actionable insights.
- Develop, customize and manage AI software products based on Machine and Deep Learning backends.
- Provide strong support on replication of existing products and pipelines to other systems and geographies.
- Support architectural design and the definition of data requirements for new developments.
- Interact with business functions to identify opportunities with potential business impact and support development and deployment of models into production.

Winning in your role. Do you have what it takes?
- You have a Bachelor's or Master's degree in Data Science, Computational Statistics/Mathematics, Computer Science, Operations Research or a related field.
- You have a strong understanding of and practical experience with multivariate statistics, machine learning and probability concepts.
- You have gained experience in articulating business questions and using quantitative techniques to arrive at a solution using available data.
- You demonstrate hands-on experience with preprocessing, feature engineering, feature selection and data cleansing on real-world datasets.
- Preferably you have work experience in an engineering or technology role.
- You bring a strong background in Python and in handling large data sets using SQL in a business environment (pandas, numpy, matplotlib, seaborn, sklearn, keras, tensorflow, pytorch, statsmodels, etc.).
- You have sound knowledge of data architectures and concepts and practical experience in the visualization of large datasets, e.g. with Tableau or Power BI.
- A results-driven mindset and excellent communication skills with high social competence give you the ability to structure a project from idea to experimentation to prototype to implementation.
- Very good English language skills are required.
- As a plus, you have hands-on experience with DevOps and MS Azure, experience in Azure ML, Kedro or Airflow, and experience in MLflow or similar.

Why you will love working for us!
Linde is a leading global industrial gases and engineering company, operating in more than 100 countries worldwide. We live our mission of making our world more productive every day by providing high-quality solutions, technologies and services which are making our customers more successful and helping to sustain and protect our planet. On the 1st of April 2020, Linde India Limited and Praxair India Private Limited successfully formed a joint venture, LSAS Services Private Limited. This company will provide Operations and Management (O&M) services to both existing organizations, which will continue to operate separately. LSAS carries forward the commitment towards sustainable development championed by both legacy organizations. It also takes ahead the tradition of developing processes and technologies that have revolutionized the industrial gases industry, serving a variety of end markets including chemicals & refining, food & beverage, electronics, healthcare, manufacturing, and primary metals. Whatever you seek to accomplish, and wherever you want those accomplishments to take you, a career at Linde provides limitless ways to achieve your potential, while making a positive impact in the world. Be Linde. Be Limitless.

Have we inspired you? Let's talk about it!
We are looking forward to receiving your complete application (motivation letter, CV, certificates) via our online job market. Any designations used of course apply to persons of all genders; the form of speech used here is for simplicity only. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, age, disability, protected veteran status, pregnancy, sexual orientation, gender identity or expression, or any other reason prohibited by applicable law. Praxair India Private Limited acts responsibly towards its shareholders, business partners, employees, society and the environment in every one of its business areas, regions and locations across the globe. The company is committed to technologies and products that unite the goals of customer value and sustainable development.
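A minimal sketch of the preprocessing and feature-engineering work the role mentions, using a scikit-learn Pipeline; the file, column names and target are hypothetical placeholders.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Hypothetical plant-sensor dataset; column names are illustrative only.
df = pd.read_csv("sensor_readings.csv")
numeric = ["temperature", "pressure", "flow_rate"]
categorical = ["plant_id", "product_grade"]

preprocess = ColumnTransformer([
    ("num", Pipeline([("impute", SimpleImputer(strategy="median")),
                      ("scale", StandardScaler())]), numeric),
    ("cat", Pipeline([("impute", SimpleImputer(strategy="most_frequent")),
                      ("onehot", OneHotEncoder(handle_unknown="ignore"))]), categorical),
])

model = Pipeline([("prep", preprocess),
                  ("clf", LogisticRegression(max_iter=1000))])

# "off_spec" is a hypothetical binary target for illustration.
model.fit(df[numeric + categorical], df["off_spec"])
```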

Posted 1 week ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Role: Data Analyst
Experience: 3+ years

Job Summary
We are seeking a skilled Data Analyst with strong experience in the mortgage domain to join our team. The ideal candidate will be responsible for analyzing mortgage-related data, supporting decision-making, generating business insights, and contributing to ongoing data integration and reporting initiatives.

Key Responsibilities
- Analyze mortgage loan data: origination, servicing, delinquency, and default trends.
- Collaborate with business stakeholders to gather data requirements and translate them into actionable insights.
- Develop dashboards and reports using tools such as Power BI or Tableau.
- Write complex SQL queries to extract, manipulate, and validate data from relational databases.
- Work with ETL pipelines to clean, transform, and load data for reporting.
- Document business rules, data definitions, and report specifications.
- Provide data-driven insights for process improvements and risk analysis in mortgage operations.
- Ensure data quality and consistency across systems and processes.

Required Skills
- Minimum 3 years of experience as a Data Analyst
- Strong knowledge of SQL (must-have)
- Hands-on experience with Power BI / Tableau / Looker
- Experience working with Excel, Python (Pandas/NumPy optional), and ETL tools
- Familiarity with data modeling and data warehouse concepts
- Good understanding of the mortgage lifecycle: origination, underwriting, servicing, foreclosure, etc.
- Experience working with large datasets and relational databases

Preferred Qualifications
- Bachelor's or Master's degree in Computer Science, Information Systems, Finance, or a related field
- Experience working in Agile/Scrum environments
- Knowledge of US mortgage regulations and investor guidelines (e.g., Fannie Mae, Freddie Mac) is a plus
- Exposure to cloud platforms (AWS/Azure/GCP) is desirable
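For illustration, combining a SQL extract with a delinquency-trend calculation in pandas might look like the sketch below; the connection string, table and column names are hypothetical, not from the posting.

```python
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql://user:pass@host/mortgage")  # placeholder connection

# Hypothetical servicing table with as_of_month, days_past_due, upb columns.
query = """
    SELECT as_of_month, days_past_due, upb
    FROM loan_servicing
    WHERE as_of_month >= '2023-01-01'
"""
df = pd.read_sql(query, engine)

# Share of unpaid principal balance that is 90+ days delinquent, by month.
df["dq90"] = df["days_past_due"] >= 90
trend = (
    df.groupby("as_of_month")
      .apply(lambda g: g.loc[g["dq90"], "upb"].sum() / g["upb"].sum())
      .rename("dq90_upb_share")
)
print(trend.tail())
```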

Posted 1 week ago

Apply

4.0 years

0 Lacs

Trivandrum, Kerala, India

On-site


Job title: Python Developer
Location: Trivandrum/Bangalore/Hybrid

Key responsibilities:
- Collaborate with the development team, Business Analysts and Product Owner to determine application requirements.
- Write scalable and testable Python code using relevant third-party libraries.
- Test and debug applications.
- Migrate code from Python 2.x to Python 3.x.
- Liaise and interact with IT Business Analysts and Business Architects regarding specific items of software functionality requested by and/or through internal users.
- Adhere to standard software development principles and established development processes.
- Document all workflows and propose efficiencies when applicable.
- Additional duties as assigned.

Qualifications and Experience
- Relevant degree or diploma in computer science, information technology, computer engineering or information system management.
- 4+ years of experience in relevant technologies.
- Expert knowledge of Python, related frameworks and third-party libraries including numpy and pandas.
- A deep understanding of multiprocessor architecture and the threading limitations of Python.
- Experience handling large data volumes efficiently and an affinity for data.

Skills and Knowledge
- Professional understanding of Python 2.7 and higher.
- Knowledge of MongoDB 3-5 and its query language, including the aggregation framework; ideally including experience in setting up and maintaining replica sets.
- Front-end skills in PHP, Laravel, JavaScript, TypeScript and knowledge of Angular and React are a plus.
- Any additional proficiencies in .NET C# MVC and .NET Core 5, jQuery, or MSSQL are warmly welcomed.
- Experience using Docker, PowerShell, or Linux Bash.
- Proficiency in using git.
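The "threading limitations of Python" mentioned above refer to the GIL, which serialises CPU-bound threads; a minimal sketch of sidestepping it with process-based parallelism (the workload is a toy example):

```python
import math
from concurrent.futures import ProcessPoolExecutor


def cpu_bound(n: int) -> int:
    """A CPU-heavy task: threads would serialise on the GIL, processes do not."""
    return sum(math.isqrt(i) for i in range(n))


if __name__ == "__main__":
    # Each chunk runs in its own interpreter process, using all cores.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(cpu_bound, [5_000_000] * 8))
    print(sum(results))
```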

Posted 1 week ago

Apply

50.0 years

0 Lacs

Gurgaon, Haryana, India

On-site


About The Opportunity
Job Type: Permanent
Application Deadline: 17 June 2025

Job Description
Title: Senior Test Analyst
Department: ISS DELIVERY - DEVELOPMENT - GURGAON
Location: GGN
Level: 3

We're proud to have been helping our clients build better financial futures for over 50 years. How have we achieved this? By working together - and supporting each other - all over the world. So, join our ISS Delivery team and feel like you're part of something bigger.

About Your Team
The Investment Solutions Services (ISS) delivery team provides systems development, implementation and support services for FIL's global Investment Management businesses across the asset management lifecycle. We support Fund Managers, Research Analysts, Traders and Investment Services Operations in all of FIL's international locations, including London, Hong Kong, and Tokyo.

About Your Role
You will join as a Senior Test Analyst in the QA chapter and will be responsible for executing testing activities for all applications under IM technology based out of India. Here are the expectations, and roughly what a day in the job will look like:
- Understand business needs and analyse requirements and user stories to carry out different testing activities.
- Collaborate with developers and BAs to understand new features, bug fixes, and changes in the codebase.
- Create and execute functional as well as automated test cases on different test environments to validate functionality.
- Log defects in the defect tracker and work with PMs and developers to prioritise and resolve them.
- Develop and maintain automation scripts, preferably using the Python stack.
- Apply a deep understanding of both relational and non-relational databases.
- Document test cases, results and any other issues encountered during testing.
- Attend team meetings and stand-ups to discuss progress, risks and any issues that affect project deliveries.
- Stay updated with new tools, techniques and industry trends.

About You
- Seasoned software test analyst with more than 5 years of hands-on experience.
- Hands-on experience automating web and backend testing using open-source tools (Playwright, pytest, Selenium, requests, REST Assured, numpy, pandas).
- Proficiency in writing and understanding complex DB queries in various databases (Oracle, Snowflake).
- Good understanding of cloud (AWS, Azure).
- Experience in the finance/investment domain is preferable.
- Strong logical reasoning and problem-solving skills.
- Preferred programming languages: Python and Java.
- Familiarity with CI/CD tools (e.g., Jenkins, GitLab CI) for automating deployment and testing workflows.

Feel rewarded
For starters, we'll offer you a comprehensive benefits package. We'll value your wellbeing and support your development. And we'll be as flexible as we can about where and when you work - finding a balance that works for all of us. It's all part of our commitment to making you feel motivated by the work you do and happy to be part of our team. For more about our work, our approach to dynamic working and how you could build your future here, visit careers.fidelityinternational.com.
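A minimal sketch of the pytest/requests backend automation the role describes; the base URL, route, and response fields are hypothetical, not from the posting.

```python
import pytest
import requests

BASE_URL = "https://api.example.com"  # hypothetical service under test


@pytest.fixture(scope="session")
def session():
    s = requests.Session()
    s.headers.update({"Accept": "application/json"})
    yield s
    s.close()


def test_fund_lookup_returns_expected_fields(session):
    # Hypothetical route and fields, for illustration only.
    resp = session.get(f"{BASE_URL}/funds/FID123", timeout=10)
    assert resp.status_code == 200
    body = resp.json()
    assert {"fund_id", "nav", "currency"} <= body.keys()


@pytest.mark.parametrize("bad_id", ["", "???", "UNKNOWN"])
def test_invalid_fund_ids_are_rejected(session, bad_id):
    resp = session.get(f"{BASE_URL}/funds/{bad_id}", timeout=10)
    assert resp.status_code in (400, 404)
```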

Posted 1 week ago

Apply

0 years

0 - 0 Lacs

Cochin

On-site


We are seeking a dynamic and experienced AI Trainer with expertise in Machine Learning, Deep Learning, and Generative AI, including LLMs (Large Language Models). The candidate will train students and professionals in real-world applications of AI/ML as well as the latest trends in GenAI such as ChatGPT, LangChain, Hugging Face Transformers, Prompt Engineering, and RAG (Retrieval-Augmented Generation).

Key Responsibilities:
- Deliver hands-on training sessions in AI, ML, Deep Learning, and Generative AI.
- Teach the fundamentals and implementation of algorithms like regression, classification, clustering, decision trees, neural networks, CNNs, and RNNs.
- Train students in LLMs (e.g., OpenAI GPT, Meta LLaMA, Google Gemini) and prompt engineering techniques, covering LangChain, Hugging Face Transformers, LLM APIs (OpenAI, Cohere, Anthropic, Google Vertex AI), vector databases (FAISS, Pinecone, Weaviate), and RAG pipelines.
- Design and evaluate practical labs and capstone projects (e.g., chatbot, image generator, smart assistants).
- Keep training materials updated with the latest industry developments and tools.
- Provide mentorship for student projects and support during hackathons or workshops.

Required Skills:
AI/ML Core:
- Python, NumPy, pandas, scikit-learn, Matplotlib, Jupyter
- Good knowledge of Machine Learning and Deep Learning algorithms
Deep Learning:
- TensorFlow / Keras / PyTorch
- OpenCV (for Computer Vision), NLTK/spaCy (for NLP)
Generative AI & LLM:
- Prompt engineering (zero-shot, few-shot, chain-of-thought)
- LangChain and LlamaIndex (RAG frameworks)
- Hugging Face Transformers
- OpenAI API, Cohere, Anthropic, Google Gemini, etc.
- Vector DBs like FAISS, ChromaDB, Pinecone, Weaviate
- Streamlit, Gradio (for app prototyping)

Qualifications:
- B.E/B.Tech/M.Tech/M.Sc in AI, Data Science, Computer Science, or a related field
- Practical experience in AI/ML, LLM, or GenAI projects
- Previous experience as a developer/trainer/corporate instructor is a plus

Salary / Remuneration: ₹30,000 - ₹75,000 per month based on experience and engagement type
Job Type: Full-time
Pay: ₹30,000.00 - ₹75,000.00 per month
Schedule: Day shift

Application Question(s):
- How many years of experience do you have?
- Can you commute to Kakkanad, Kochi?
- What is your expected salary?

Work Location: In person
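For training demos, the retrieval step of a RAG pipeline can be illustrated without a hosted LLM or vector database; the sketch below uses TF-IDF similarity in place of a real vector store, with toy documents and a toy query.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy knowledge base standing in for a real vector store (FAISS, Pinecone, etc.).
docs = [
    "Refunds are processed within 5 business days.",
    "The warranty covers manufacturing defects for 12 months.",
    "Support is available Monday to Friday, 9am to 6pm IST.",
]

vectorizer = TfidfVectorizer().fit(docs)
doc_vecs = vectorizer.transform(docs)


def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k most similar documents, i.e. the 'R' in RAG."""
    sims = cosine_similarity(vectorizer.transform([query]), doc_vecs)[0]
    top = sims.argsort()[::-1][:k]
    return [docs[i] for i in top]


context = retrieve("How long does a refund take?")
prompt = "Answer using only this context:\n" + "\n".join(context) + \
         "\n\nQ: How long does a refund take?"
print(prompt)  # this prompt would then be sent to an LLM for the generation step
```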

Posted 1 week ago

Apply

0 years

6 - 10 Lacs

Hyderābād

On-site


POSITION SUMMARY
Zoetis, Inc. is the world's largest producer of medicines and vaccines for pets and livestock. Join us at the Zoetis India Capability Center (ZICC) in Hyderabad, where innovation meets excellence. As part of the world's leading animal healthcare company, ZICC is at the forefront of driving transformative advancements and applying technology to solve the most complex problems. Our mission is to ensure sustainable growth and maintain a competitive edge for Zoetis globally by leveraging the exceptional talent in India. At ZICC, you'll be part of a dynamic team that partners with colleagues worldwide, embodying the true spirit of One Zoetis. Together, we ensure seamless integration and collaboration, fostering an environment where your contributions can make a real impact. Be a part of our journey to pioneer innovation and drive the future of animal healthcare.

Zoetis is seeking a talented, experienced individual to provide Rapid Application Development in support of R&D. This is a unique opportunity to work directly with scientists to learn about the processes involved in progressing pharmaceutical therapeutics and biotherapeutics from idea through research and development, quickly develop solutions to improve their processes, and deliver scalability through our Zoetis Technology & Digital (ZTD) infrastructure and software capabilities, globally. This position requires a highly motivated individual who can effectively collaborate with other team members across the organization to advance projects. The ideal candidate is dual-degreed, with formal education in both a scientific discipline and computer science/data science, and is willing to learn about scientists' processes and needs, provide guidance on possible solutions to address gaps, and implement solutions in collaboration with scientists, lab automation/data specialists, data scientists, and our Zoetis Tech & Digital organization. It is essential that the candidate possess excellent listening and problem-solving skills, communicates effectively, is change-agile, and can work both within a team and individually to deliver on objectives related to data capture, storage, searching, integration, and visualization.

POSITION RESPONSIBILITIES (percent of time)
Scientific Collaboration & Data Solutions (40%)
- Collaborating with scientists to identify data gaps and implement the best solutions for data capture, storage, searching, integration, and visualization, incorporating FAIR data practices.
- Partnering with ZTD and VMRD technology groups to align on available tools, platforms, and the recommended tech stack to apply.
Application & Database Development (40%)
- Rapidly designing and developing software applications and/or databases to solve the data needs of scientists, including capturing/providing data to existing or new data pipelines for advanced analytics and data visualization.
- Ensuring the development of high-quality, reliable, and scalable applications within tight deadlines.
- Performing testing and debugging of applications to ensure optimal performance and functionality.
Code Quality, Research, & Support (10%)
- Participating in code/platform reviews with other colleagues to maintain code quality and incorporate best practices.
- Continuously researching and learning about the latest technologies and industry trends to improve application development processes.
- Providing technical support and troubleshooting for existing applications.
- Collaborating with the team to create and maintain project documentation.
Innovation and Expertise Development (10%)
- Staying updated on the latest trends in application development, visualization technology and methods relevant to pharmaceutical research.
- Researching and applying advanced techniques related to advanced technology deployment (AI, LLM, ML, etc.).

ORGANIZATIONAL RELATIONSHIPS
Animal Health Research & Development: Collaborate across the full spectrum of R&D functions, including pharmaceutical, biopharmaceutical, vaccine, device, and genetics R&D groups, to align technology solutions with the diverse needs of scientific disciplines and development pipelines.
Zoetis Tech & Digital (ZTD): Partner closely with ZTD teams, with a particular focus on the VMRD-ZTD Engineering group, documentation specialists, and portfolio management groups, to ensure seamless integration of IT solutions and alignment with organizational objectives.

EDUCATION AND EXPERIENCE
- Bachelor's degree or equivalent with direct experience in a computer-related field.
- Proven experience as a Rapid Application Developer/Engineer or similar role within the Life Sciences industry.
- Knowledge of rapid application development methodologies, such as Agile or Scrum.
- Experience developing solutions in the area of CMC, including cell-line development, upstream, and downstream, is a plus.
- Strong problem-solving skills and the ability to think critically and creatively in a fast-paced, innovative environment sometimes requiring frequent project transitions.
- Excellent interpersonal and communication skills and the ability to explain complex ideas to audiences without a technical background.
- Demonstrated ability for collaboration and conflict resolution with individuals representing a broad cross-section of the drug research and development function.
- Ability to work independently with minimal supervision or within a team, as well as manage multiple projects simultaneously.

TECHNICAL SKILLS REQUIREMENTS
- Programming: Proficiency in Python, R, Java, or C#, and use of code-sharing tools such as Git or GitHub. Familiarity with front-end technologies, such as React, CSS, and JavaScript/TypeScript.
- Data Handling: Experience with SQL, Pandas, NumPy, and ETL processes. Experience with database design and management, such as SQL, NoSQL, or Oracle.
- Familiarity with cloud technologies, such as Azure or AWS.
- Knowledge of software development best practices, including version control and continuous integration.
- Familiarity with visualization tools, such as Tableau, Spotfire, or Power BI.
- Soft Skills: Strong storytelling ability to convey scientific insights visually; effective communication and collaboration with interdisciplinary teams; analytical thinking to align visualizations with research goals.

PHYSICAL POSITION REQUIREMENTS
Travel requirements are minimal, 0-10%.
Full time
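A minimal extract-transform-load sketch in the spirit of the SQL/Pandas/NumPy and ETL skills listed above; the file name, columns, and SQLite target are hypothetical placeholders.

```python
import sqlite3

import numpy as np
import pandas as pd

# Extract: a hypothetical instrument export.
raw = pd.read_csv("assay_results.csv")

# Transform: normalise column names, coerce types, derive a log-scale readout.
raw.columns = [c.strip().lower().replace(" ", "_") for c in raw.columns]
raw["measured_at"] = pd.to_datetime(raw["measured_at"], errors="coerce")
raw["log_response"] = np.log10(raw["response"].clip(lower=1e-9))
clean = raw.dropna(subset=["sample_id", "measured_at"]).drop_duplicates("sample_id")

# Load: write to a local SQLite table for downstream dashboards.
with sqlite3.connect("research.db") as conn:
    clean.to_sql("assay_results", conn, if_exists="replace", index=False)
```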

Posted 1 week ago

Apply

5.0 years

8 - 10 Lacs

Hyderābād

On-site


FactSet creates flexible, open data and software solutions for over 200,000 investment professionals worldwide, providing instant access to financial data and analytics that investors use to make crucial decisions. At FactSet, our values are the foundation of everything we do. They express how we act and operate, serve as a compass in our decision-making, and play a big role in how we treat each other, our clients, and our communities. We believe that the best ideas can come from anyone, anywhere, at any time, and that curiosity is the key to anticipating our clients' needs and exceeding their expectations.

Your Team's Impact
Join our dynamic Machine Learning and AI team, where we build innovative models and solutions that drive business transformation and unlock new opportunities. You'll be at the forefront of AI-powered initiatives, collaborating closely with product teams to shape the future of data-driven insights. This position offers high visibility and the chance to directly influence key decisions across the organization. You will report to the Manager of AI.
Working Mode: Hybrid (3 days in office mandatory)

What You'll Do
Technical Leadership & Strategy
- Lead the design, development, and deployment of machine learning models and systems at scale.
- Define and drive the technical roadmap for ML initiatives in alignment with business goals.
- Evaluate and select appropriate ML techniques, architectures, and tools for various problems (e.g., NLP, CV, tabular data).
- Ensure robust experimentation, validation, and performance benchmarking practices.
Team Guidance & Mentorship
- Mentor and support junior and mid-level ML engineers, guiding them on model development, research approaches, and code quality.
- Conduct technical reviews of models, pipelines, and code to ensure high standards.
- Promote a culture of continuous learning, innovation, and scientific rigor within the team.
System & Pipeline Development
- Architect and implement scalable ML pipelines for training, validation, inference, and monitoring.
- Collaborate with data engineers to ensure high-quality data ingestion, feature engineering, and labeling workflows.
- Contribute to MLOps practices by building reproducible, testable, and maintainable model delivery frameworks.
- Assist in designing, developing, and implementing machine learning models for real-world applications.
- Work on data collection, preprocessing, feature engineering, and model evaluation tasks.
- Collaborate with cross-functional teams including Data Science, Software Engineering, and Product.
- Perform exploratory data analysis (EDA) and prepare datasets for training/testing.
- Contribute to the deployment and monitoring of models in production environments.
- Write clean, efficient, and well-documented code in Python or similar languages.
- Stay updated with the latest developments in AI/ML research and tools, contributing innovative ideas to ongoing projects.
- Assist in model optimization, hyperparameter tuning, and performance scaling.
- Test and validate models to ensure their reliability and effectiveness in production environments.
- Work with large datasets to extract meaningful insights using various statistical and ML techniques.

What We're Looking For
Required Skills
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field is required.
- 5+ years of experience in software development, with a focus on systems handling large-scale data operations.
- Strong foundation in Machine Learning concepts (supervised/unsupervised learning, regression, classification, clustering, etc.).
- Good programming skills in Python (or similar languages like R, Java, C++).
- Hands-on experience with ML libraries and frameworks (e.g., scikit-learn, TensorFlow, PyTorch, Keras).
- Understanding of data structures, algorithms, basic mathematics/statistics and database management systems.
- Excellent verbal and written communication skills, capable of articulating complex concepts to technical and non-technical audiences.
- Familiarity with data handling tools (e.g., Pandas, NumPy, SQL).
- Good analytical, problem-solving, and communication skills.
- Ability to learn new technologies quickly and work independently or as part of a team.
- Ability to work collaboratively in a team environment, contributing to group success while expanding personal skills.
Desired Skills
- Exposure to deep learning, NLP, computer vision, or reinforcement learning projects (academic or internships).
- Knowledge of cloud platforms like AWS, Azure, or GCP.
- Familiarity with version control systems (e.g., Git).
- Understanding of MLOps concepts and pipelines (bonus).

What's In It For You
At FactSet, our people are our greatest asset, and our culture is our biggest competitive advantage. Being a FactSetter means:
- The opportunity to join an S&P 500 company with over 45 years of sustainable growth powered by the entrepreneurial spirit of a start-up.
- Support for your total well-being. This includes health, life, and disability insurance, as well as retirement savings plans and a discounted employee stock purchase program, plus paid time off for holidays, family leave, and company-wide wellness days.
- Flexible work accommodations. We value work/life harmony and offer our employees a range of accommodations to help them achieve success both at work and in their personal lives.
- A global community dedicated to volunteerism and sustainability, where collaboration is always encouraged, and individuality drives solutions.
- Career progression planning with dedicated time each month for learning and development.
- Business Resource Groups open to all employees that serve as a catalyst for connection, growth, and belonging.
Learn more about our benefits on our website. Salary is just one component of our compensation package and is based on several factors including but not limited to education, work experience, and certifications.

Company Overview
FactSet (NYSE:FDS | NASDAQ:FDS) helps the financial community to see more, think bigger, and work better. Our digital platform and enterprise solutions deliver financial data, analytics, and open technology to more than 8,200 global clients, including over 200,000 individual users. Clients across the buy-side and sell-side, as well as wealth managers, private equity firms, and corporations, achieve more every day with our comprehensive and connected content, flexible next-generation workflow solutions, and client-centric specialized support. As a member of the S&P 500, we are committed to sustainable growth and have been recognized among the Best Places to Work in 2023 by Glassdoor as a Glassdoor Employees' Choice Award winner. Learn more at www.factset.com and follow us on X and LinkedIn.

At FactSet, we celebrate difference of thought, experience, and perspective. Qualified applicants will be considered for employment without regard to characteristics protected by law.
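A minimal sketch of the model optimization and hyperparameter tuning mentioned above, using GridSearchCV on synthetic data; the model and parameter grid are illustrative choices only.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=15, random_state=0)

# Scale inside the pipeline so cross-validation folds are not leaked.
pipe = Pipeline([("scale", StandardScaler()), ("svc", SVC())])

grid = GridSearchCV(
    pipe,
    param_grid={"svc__C": [0.1, 1, 10], "svc__gamma": ["scale", 0.01, 0.001]},
    scoring="f1",
    cv=5,
    n_jobs=-1,
)
grid.fit(X, y)
print(grid.best_params_, round(grid.best_score_, 3))
```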

Posted 1 week ago

Apply

3.0 - 6.0 years

8 - 10 Lacs

Hyderābād

On-site


FactSet creates flexible, open data and software solutions for over 200,000 investment professionals worldwide, providing instant access to financial data and analytics that investors use to make crucial decisions. At FactSet, our values are the foundation of everything we do. They express how we act and operate, serve as a compass in our decision-making, and play a big role in how we treat each other, our clients, and our communities. We believe that the best ideas can come from anyone, anywhere, at any time, and that curiosity is the key to anticipating our clients' needs and exceeding their expectations.

Your Team's Impact
FactSet is seeking an experienced software development engineer with proven proficiency in deploying software that adheres to best practices, and with fluency in the development environment and related tools, code libraries and systems. You will be responsible for the entire development process and will collaborate to create a theoretical design. We expect a demonstrated ability to critique code and production for improvement, to receive and apply feedback effectively, and to maintain expected levels of productivity while becoming increasingly independent as a software developer, requiring less direct engagement and oversight on a day-to-day basis from one's manager. The focus is on developing applications, testing and maintaining software, and the implementation details of development; increasing the volume of work accomplished (with consistent quality, stability and adherence to best practices), along with gaining a mastery of the products to which one is contributing and beginning to participate in forward design discussions for how to improve based on one's observations of the code, systems and production involved. Software Developers provide project leadership and technical guidance along every stage of the software development life cycle.

What You'll Do
- Work on the Data Lake platform handling millions of documents annually, built on a NoSQL architecture.
- Focus on developing new features while supporting and maintaining existing systems, ensuring the platform's continuous improvement.
- Develop innovative solutions for feature additions and bug fixes, optimizing existing functionality as needed to enhance system efficiency and performance.
- Engage with Python, frontend and C#.NET repositories to support ongoing development and maintenance, ensuring robust integration and functionality across the application stack.
- Participate in weekly on-call support to address urgent queries and issues in common communication channels, ensuring operational reliability and user satisfaction.
- Create comprehensive design documents for major architectural changes and facilitate peer reviews to ensure quality and alignment with best practices.
- Utilize object-oriented programming principles to develop low-level designs that effectively support high-level architectural frameworks, contributing to scalable solutions.
- Collaborate with product managers and key stakeholders to thoroughly understand requirements and propose strategic solutions, leveraging cross-functional insights.
- Actively participate in technical discussions with principal engineers and architects to support proposed design solutions, fostering a collaborative engineering environment.
- Accurately estimate key development tasks and share insights with architects and engineering directors to align on priorities and resource allocation.
- Operate within an agile framework, collaborating with engineers and product developers using tools like Jira and Confluence. Engage in test-driven development and elevate team practices through coaching and reviews.
- Create and review documentation and test plans to ensure thorough validation of new features and system modifications.
- Work effectively as part of a geographically diverse team, coordinating with other departments and offices for seamless project progression.
These responsibilities reflect the complexity of managing a platform that ingests millions of documents, underscoring the importance of innovative solutions, technical proficiency, and collaborative effort to ensure the Data Lake platform's success.

What We're Looking For
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field is required.
- 3-6 years of experience in software development, with a focus on systems handling large-scale data operations.
- In-depth understanding of data structures and algorithms to optimize software performance and efficiency. Proficiency in object-oriented design principles is essential.
- Strong skills in Python, AWS, frontend technologies and C#.NET to comprehend and contribute to existing applications.
- Experience with non-relational databases, specifically DynamoDB, MongoDB and Elasticsearch, for optimal query development and troubleshooting.
- Experience with frontend technologies like Angular, React or Vue.js to support development of key interfaces.
- Software Development: Familiarity with GitHub-based development processes, facilitating seamless collaboration and version control. Experience building and deploying production-level services, demonstrating the ability to deliver reliable and efficient solutions.
- API and System Integration: Proven experience working with APIs, ensuring robust connectivity and integration across the system.
- AWS Expertise: Working experience with AWS services such as Lambda, EC2, S3, and AWS Glue is beneficial for cloud-based operations and deployments.
- Problem-Solving and Analysis: Strong analytical and problem-solving skills are critical for developing innovative solutions and optimizing existing platform components.
- Communication and Collaboration: Excellent collaborative and communication skills, enabling effective interaction with geographically diverse teams and key stakeholders.
- On-Call and Operational Support: Capability to address system queries and provide weekly on-call support, ensuring system reliability and user satisfaction.
- Organizational Skills: Ability to prioritize and manage work effectively in a fast-paced environment, demonstrating self-direction and resourcefulness.

Required Skills:
- Python Proficiency: Experience with Python and relevant libraries like Pandas and NumPy is beneficial for data manipulation and analysis tasks.
- Jupyter Notebooks: Familiarity with Jupyter Notebooks is a plus for supporting data visualization and interactive analysis.
- Agile Methodologies: Understanding of Agile software development is advantageous, with experience in Scrum as a preferred approach for iterative project management.
- Linux/Unix Experience: Exposure to Linux/Unix environments is desirable, enhancing versatility in system operations and development.

What's In It for You
At FactSet, our people are our greatest asset, and our culture is our biggest competitive advantage. Being a FactSetter means:
- The opportunity to join an S&P 500 company with over 45 years of sustainable growth powered by the entrepreneurial spirit of a start-up.
- Support for your total well-being. This includes health, life, and disability insurance, as well as retirement savings plans and a discounted employee stock purchase program, plus paid time off for holidays, family leave, and company-wide wellness days.
- Flexible work accommodations. We value work/life harmony and offer our employees a range of accommodations to help them achieve success both at work and in their personal lives.
- A global community dedicated to volunteerism and sustainability, where collaboration is always encouraged, and individuality drives solutions.
- Career progression planning with dedicated time each month for learning and development.
- Business Resource Groups open to all employees that serve as a catalyst for connection, growth, and belonging.
Learn more about our benefits on our website. Salary is just one component of our compensation package and is based on several factors including but not limited to education, work experience, and certifications.

Company Overview
FactSet (NYSE:FDS | NASDAQ:FDS) helps the financial community to see more, think bigger, and work better. Our digital platform and enterprise solutions deliver financial data, analytics, and open technology to more than 8,200 global clients, including over 200,000 individual users. Clients across the buy-side and sell-side, as well as wealth managers, private equity firms, and corporations, achieve more every day with our comprehensive and connected content, flexible next-generation workflow solutions, and client-centric specialized support. As a member of the S&P 500, we are committed to sustainable growth and have been recognized among the Best Places to Work in 2023 by Glassdoor as a Glassdoor Employees' Choice Award winner. Learn more at www.factset.com and follow us on X and LinkedIn.

At FactSet, we celebrate difference of thought, experience, and perspective. Qualified applicants will be considered for employment without regard to characteristics protected by law.
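For illustration, querying a DynamoDB-backed document store of the kind this role mentions might look like the boto3 sketch below; the table name, key schema, and attributes are hypothetical, not from the posting.

```python
import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
table = dynamodb.Table("documents")  # hypothetical table with doc_type / ingested_at keys

# Fetch recently ingested filings of one type, newest first.
resp = table.query(
    KeyConditionExpression=Key("doc_type").eq("10-K") & Key("ingested_at").gt("2024-01-01"),
    ScanIndexForward=False,
    Limit=25,
)
for item in resp["Items"]:
    print(item["doc_id"], item["ingested_at"])  # hypothetical attributes
```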

Posted 1 week ago

Apply

5.0 years

0 Lacs

Kolkata, West Bengal, India

On-site


Roles and Responsibilities:
As an Associate Manager - Senior Data Scientist, you will solve some of the most impactful business problems for our clients using a variety of AI and ML technologies. You will collaborate with business partners and domain experts to design and develop innovative solutions on the data to achieve predefined outcomes.
- Engage with clients to understand current and future business goals and translate business problems into analytical frameworks.
- Develop custom models based on an in-depth understanding of underlying data, data structures, and business problems to ensure deliverables meet client needs.
- Create repeatable, interpretable and scalable models.
- Effectively communicate the analytics approach and insights to a larger business audience.
- Collaborate with team members, peers and leadership at Tredence and client companies.

Qualification:
- Bachelor's or Master's degree in a quantitative field (CS, machine learning, mathematics, statistics) or equivalent experience.
- 5+ years of experience in data science, building hands-on ML models.
- Experience with LLMs (Llama 1/2/3, T5, Falcon) and frameworks such as LangChain or similar.
- Must be aware of the entire evolution of NLP (from traditional language models to modern Large Language Models), training data creation, training set-up and fine-tuning.
- Must be comfortable interpreting research papers and architecture diagrams of language models.
- Must be comfortable with LoRA, RAG, instruct fine-tuning, quantization, etc.
- Experience leading the end-to-end design, development, and deployment of predictive modeling solutions.
- Excellent programming skills in Python. Strong working knowledge of Python's numerical, data analysis, and AI frameworks such as NumPy, Pandas, Scikit-learn, Jupyter, etc.
- Advanced SQL skills, with SQL Server and Spark experience.
- Knowledge of predictive/prescriptive analytics, including Machine Learning algorithms (supervised and unsupervised), deep learning algorithms and Artificial Neural Networks.
- Experience with Natural Language Processing (NLTK) and text analytics for information extraction, parsing and topic modeling.
- Excellent verbal and written communication. Strong troubleshooting and problem-solving skills. Thrives in a fast-paced, innovative environment.
- Experience with data visualization tools such as Power BI, Tableau, R Shiny, etc. preferred.
- Experience with cloud platforms such as Azure and AWS is preferred but not required.
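A minimal sketch of the topic-modeling style of text analytics mentioned above, using scikit-learn LDA on a toy corpus; the documents and topic count are illustrative only.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Toy corpus standing in for real client documents.
docs = [
    "store sales promotion discount retail footfall",
    "shipment warehouse inventory stockout logistics",
    "discount coupon loyalty retail basket",
    "fleet routing delivery warehouse logistics cost",
]

vec = CountVectorizer(stop_words="english")
dtm = vec.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(dtm)
terms = vec.get_feature_names_out()
for i, comp in enumerate(lda.components_):
    top = comp.argsort()[::-1][:5]
    print(f"topic {i}:", ", ".join(terms[j] for j in top))
```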

Posted 1 week ago

Apply

5.0 years

7 - 12 Lacs

India

On-site


Dear candidate,
We are the hiring partner to one of our esteemed clients for the position below. Kindly go through the details before applying.

Role: Big Data Administrator
Experience: 5+ years
Location: Hyderabad (Hybrid)
Position Type: Contract (up to 12 months, extendable)

Role description:
The candidate has a strong technical background in Linux, networking, and security, along with hands-on experience in AWS cloud infrastructure. They are proficient in Infrastructure as Code (Terraform, Ansible) and have managed large-scale Big Data clusters (Cloudera, Hortonworks, EMR). Their expertise includes the Hadoop Distributed File System (HDFS), YARN, and various Hadoop file formats (ORC, Parquet, Avro). They have deep knowledge of the Hive, Presto, and Spark compute engines, with the ability to optimize complex SQL queries, and they support Spark with Python (PySpark) and R (SparklyR, SparkR). Additionally, they have solid coding experience in scripting languages (Shell, Python) and have worked with Data Analysts and Scientists using tools like SAS, R-Studio, JupyterHub, and H2O. Nice-to-have skills include workflow management tools (Airflow, Oozie), analytical libraries (Pandas, Numpy, Scipy, PyTorch), and experience with Packer, Chef, and Jenkins. Prior knowledge of Active Directory and Windows-based VDI platforms (Citrix, AWS Workspaces) is also a plus.

Job Type: Contractual / Temporary
Contract length: 6 months
Pay: ₹700,000.00 - ₹1,200,000.00 per year

Application Question(s):
- What is your total experience?
- How soon can you join?
- You understand that this is a contract position, and you are fine with the same?
- What is your current/last salary?
- What salary are you expecting now?

Work Location: In person
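As an illustration of the PySpark support described above, a minimal aggregation over a hypothetical Parquet dataset might look like the sketch below; the paths and columns are placeholders, not from the posting.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-usage-rollup").getOrCreate()

# Hypothetical columnar dataset on HDFS/S3 with user_id, event_date, bytes columns.
events = spark.read.parquet("hdfs:///data/events/")

rollup = (
    events.filter(F.col("event_date") >= "2024-01-01")
          .groupBy("event_date")
          .agg(F.countDistinct("user_id").alias("active_users"),
               F.sum("bytes").alias("total_bytes"))
          .orderBy("event_date")
)
rollup.write.mode("overwrite").parquet("hdfs:///marts/daily_usage/")
spark.stop()
```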

Posted 1 week ago

Apply

0 years

7 - 10 Lacs

Hyderābād

On-site


Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities:
- AI Implementation: Implement AI solutions to automate tasks, improve customer experiences, and optimize operations.
- Data Analysis and Interpretation: Analyze large datasets to extract meaningful insights and trends using Python and data science techniques.
- Model Development: Develop, train, and deploy machine learning models to solve business problems and enhance decision-making processes.
- Data Collection and Cleaning: Data Analysts are responsible for gathering data from multiple sources, ensuring its accuracy and completeness. This involves cleaning and preprocessing data to remove inaccuracies, duplicates, and irrelevant information. Proficiency in data manipulation tools such as SQL, Excel, and Python is essential for efficiently handling large data sets.
- Analysis and Interpretation: One of the primary tasks of a Data Analyst is to analyse data to uncover trends, patterns, and correlations. They use statistical techniques and software such as R, SAS, and Tableau to conduct detailed analyses. The ability to interpret results and communicate findings clearly is crucial for guiding business decisions.
- Reporting and Visualization: Data Analysts create comprehensive reports and visualizations to present data insights to stakeholders. These visualizations, often created using tools like Power BI and Tableau, make complex data more understandable and actionable. Analysts must be skilled in designing charts, graphs, and dashboards that effectively convey key information.
- Collaboration and Communication: Effective collaboration with other departments, such as marketing, finance, and IT, is vital for understanding data needs and ensuring that analysis aligns with organizational goals. Data Analysts must communicate their findings clearly and concisely, often translating technical data into understandable insights for non-technical stakeholders.
- Predictive Modelling and Forecasting: Advanced Data Analysts also engage in predictive modelling and forecasting, using machine learning algorithms and statistical methods to predict future trends and outcomes. This requires a solid understanding of data science principles and familiarity with tools like TensorFlow and Scikit-learn.
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications:
- Undergraduate degree or equivalent experience
- Python Proficiency: Solid knowledge of Python programming, including libraries for data analysis (e.g., Pandas, NumPy) and machine learning (e.g., scikit-learn, TensorFlow, Keras)

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.

Posted 1 week ago

Apply

7.0 years

4 - 9 Lacs

Gurgaon

On-site

GlassDoor logo

About this Role The Senior Data Scientist in the Interaction Insights team will help in generating actionable insights for driving the creation of fact-based research from client interactions and other Gartner internal and external data assets. At the core we are looking for a person who is passionate about learning new technologies, text analytics, digging into rows of unstructured data, drawing recommendations & actionable insights, and using creative ways to visualize & interpret it for research. This person will have a flair of innovation along with a strong experience in Python, Data Science, NLP, Machine Learning, Deep Learning, LLMs and the analysis of unstructured data. They would be working closely with various stakeholders to explore opportunities to optimize data mining techniques and improve the accuracy of insights generated using NLP or various types of Machine Learning/Deep Learning/LLM models. What you will do: Execute large-scale, high-impact data science projects, translating research objectives into actionable data and insights from both internal and external data assets Provide ad hoc modeling and analytical insights to inform strategic and operational initiatives Analyze unstructured text data to discover insights and patterns using advanced data science techniques, including machine learning (ML) and natural language processing (NLP) Interpret data-driven patterns and insights in alignment with original business objectives Address critical business challenges by leveraging data science, ML, and NLP methodologies Interact with internal and external stakeholders to refine and enhance findings Collaborate as part of a team in a fast-paced environment, meeting strict SLAs and turnaround times Develop a deep understanding of the technology industry and maintain expertise in the data science domain Ensure delivery of high-quality data science solutions with a focus on accuracy and coverage Be accountable for the scalability, stability, and business adoption of data science solutions Maintain proper documentation and adhere to code reusability and best practices Take ownership of algorithms, including their ongoing enhancement and optimization to meet evolving business requirements Stay current with advancements in AI/ML models and technologies, and apply disruptive data science solutions as appropriate. Collaborate with engineering and product teams to launch MVPs and iterate quickly based on feedback Independently plan, drive and execute data science projects that deliver clear and measurable business value What you will need: Education - Master’s in engineering, Computer Science, Computer Applications, Statistics, Mathematics, Applied Mathematics, Computer Science with focus on AI or Data Science; BE/BTech or Bachelors (Comp. 
Sc./AI) with relevant experience Total experience - 7 to 10 years of which 5+ yrs as hands-on Data Science, AI/ML experience on real-world industry problems, with focus on Text Mining and Natural Language Processing/text analytics Advanced proficiency in Python and SQL, with experience using key data science libraries and tools (e.g., Pandas, Numpy, scikit-learn, TensorFlow, PyTorch, Spacy, NLTK, Scipy) Hands-on experience with machine learning, deep learning, NLP (including LLMs/GenAI), and applying these models to real-world business problems Strong background in data preparation, including cleaning, normalization, and handling unstructured text data for modeling Experience with cloud computing platforms such as AWS or Azure for model training, testing, and deployment Familiarity with both relational (e.g., Oracle) and NoSQL databases (e.g., MongoDB, graph databases), as well as distributed computing frameworks like Spark Proficiency in best coding practices and code/repo management using GitHub or Bitbucket Basic skills in Power BI, Excel, and PowerPoint Strong analytical, critical thinking, and problem-solving skills, with the ability to extract insights from complex and unstructured datasets Demonstrated ability to collaborate across product, engineering, and data science teams, and to influence stakeholders at all levels Excellent communication skills in both technical and business contexts Self-motivated, eager to learn, adaptable to feedback, and comfortable working in a fast-paced, milestone-driven environment What you will get: Competitive salary, generous paid time off policy, charity match program, Group Medical Insurance, Parental Leave, Employee Assistance Program (EAP) and more! Collaborative, team-oriented culture that embraces diversity Professional development and unlimited growth opportunities #LI-PM3 Who are we? At Gartner, Inc. (NYSE:IT), we guide the leaders who shape the world. Our mission relies on expert analysis and bold ideas to deliver actionable, objective insight, helping enterprise leaders and their teams succeed with their mission-critical priorities. Since our founding in 1979, we’ve grown to more than 21,000 associates globally who support ~14,000 client enterprises in ~90 countries and territories. We do important, interesting and substantive work that matters. That’s why we hire associates with the intellectual curiosity, energy and drive to want to make a difference. The bar is unapologetically high. So is the impact you can have here. What makes Gartner a great place to work? Our sustained success creates limitless opportunities for you to grow professionally and flourish personally. We have a vast, virtually untapped market potential ahead of us, providing you with an exciting trajectory long into the future. How far you go is driven by your passion and performance. We hire remarkable people who collaborate and win as a team. Together, our singular, unifying goal is to deliver results for our clients. Our teams are inclusive and composed of individuals from different geographies, cultures, religions, ethnicities, races, genders, sexual orientations, abilities and generations. We invest in great leaders who bring out the best in you and the company, enabling us to multiply our impact and results. This is why, year after year, we are recognized worldwide as a great place to work . What do we offer? Gartner offers world-class benefits, highly competitive compensation and disproportionate rewards for top performers. 
In our hybrid work environment, we provide the flexibility and support for you to thrive — working virtually when it's productive to do so and getting together with colleagues in a vibrant community that is purposeful, engaging and inspiring. Ready to grow your career with Gartner? Join us. The policy of Gartner is to provide equal employment opportunities to all applicants and employees without regard to race, color, creed, religion, sex, sexual orientation, gender identity, marital status, citizenship status, age, national origin, ancestry, disability, veteran status, or any other legally protected status and to seek to advance the principles of equal employment opportunity. Gartner is committed to being an Equal Opportunity Employer and offers opportunities to all job seekers, including job seekers with disabilities. If you are a qualified individual with a disability or a disabled veteran, you may request a reasonable accommodation if you are unable or limited in your ability to use or access the Company’s career webpage as a result of your disability. You may request reasonable accommodations by calling Human Resources at +1 (203) 964-0096 or by sending an email to ApplicantAccommodations@gartner.com . Job Requisition ID:100853 By submitting your information and application, you confirm that you have read and agree to the country or regional recruitment notice linked below applicable to your place of residence. Gartner Applicant Privacy Link: https://jobs.gartner.com/applicant-privacy-policy For efficient navigation through the application, please only use the back button within the application, not the back arrow within your browser.

Posted 1 week ago

Apply

2.0 years

6 Lacs

Gurgaon

On-site

GlassDoor logo

Job description Company Profile WhizHack is a product engineering and human capital development company currently working with top academic research institutions in India, like the IITs and NITs, and key research partners from Israel & Canada. WhizHack will galvanize scientific imagination, deep research and rigorous training by experts, leverage product architects, and provide active mentorships and accelerated access to key markets both in India and globally for managing the complete value chain of a secure cyber environment. Our mission is not only to create a pipeline of cyber security products but also a team of empowered people who can drive sustainable innovation in securing the digital assets of tomorrow. Job Description As a Software Developer, you will work as part of the core development team to design and develop high-quality software solutions for enterprise applications using programming languages such as Python, Rust, GoLang, and C/C++, and data analysis frameworks such as Pandas, Polars, NumPy, etc. Your basic responsibilities cover the development and operations efforts in the product. You will choose and deploy tools and technologies to build and support a robust and scalable infrastructure. You have hands-on experience in building secure, high-performing and scalable infrastructure. You have experience automating and streamlining development operations and processes. You are a master at troubleshooting and resolving issues in non-production and production environments. Responsibilities · Helping with identifying processes and tasks that can be automated with internal tools. · Will also be directly responsible for building these tools. · Participates in the data domain technical and business discussions relative to future architecture direction. · Assists in the analysis, design and development of a roadmap, design pattern, and implementation based upon a current vs. future state in a cohesive architecture viewpoint. · Gathers and analyzes data and develops architectural requirements at project level. · Supports the development of data and data delivery platforms that are service-oriented with reusable components that can be orchestrated together into different methods for different businesses. · Deep understanding of Data Warehousing, Enterprise Architectures, Dimensional Modelling, ETL Architecture, ETL (Extract/Transform/Load), Data Analysis, Data Conversion/Transformation, Database Design, Data Warehouse Optimization, Data Mart Development, and Enterprise Data Warehouse Maintenance and Support. · Should have independently worked on proposing architecture, design and data ingestion concepts. · Understanding existing firmware and upgrading it. Develop, code, test and debug hardware and firmware. · Preparing the development environment as per the requirements. Site installation and support. Skills Required · A minimum of 2-3 years of experience in designing and maintaining high-volume and scalable micro-services architecture on cloud infrastructure. · Strong background in writing software using any of the following languages (Python, Golang, Rust, C, C++). · Strong background in Linux/Unix administration and Python/Shell scripting. · Familiarity with microservice architectures and cloud-based computing (Docker). · Knowledge of building ETL pipelines using Python (Pandas, NumPy) is a must. · Designing and developing cloud-native applications using microservices orchestration with Kubernetes. · Solid understanding of data structures, algorithms and analytical problems.
Qualifications · Bachelor's or Master’s degree in Computer Science, Information Technology, or a related field (or equivalent experience). · Excellent problem-solving and communication skills. · Strong collaboration and teamwork abilities. · Proficiency in writing and maintaining documentation. Job Types: Full-time, Permanent Pay: From ₹650,000.00 per year Benefits: Food provided Health insurance Paid sick time Provident Fund Schedule: Day shift Supplemental Pay: Performance bonus Quarterly bonus Work Location: In person

Posted 1 week ago

Apply

3.0 years

0 - 0 Lacs

Mohali

On-site

GlassDoor logo

Job Title: Python Developer Location: Mohali, Punjab Company: RevClerx About RevClerx: RevClerx Pvt. Ltd., founded in 2017 and based in the Chandigarh/Mohali area (India), is a dynamic Information Technology firm providing comprehensive IT services with a strong focus on client-centric solutions. As a global provider, we cater to diverse business needs including website designing and development, digital marketing, lead generation services (including telemarketing and qualification), and appointment setting. Job Summary: We are seeking a motivated and skilled Python Developer with 3-4 years of professional experience to join our dynamic engineering team. The ideal candidate will be proficient in developing, deploying, and maintaining robust Python-based applications and services. You will play a key role in the entire software development lifecycle, from conceptualization and design through testing, deployment, and ongoing maintenance. While core Python development is essential, we highly value candidates with an interest or experience in emerging technologies like AI/ML and Large Language Model (LLM) applications. Key Responsibilities: Design, develop, test, deploy, and maintain high-quality, scalable, and efficient Python code. Collaborate closely with product managers, designers, and other engineers to understand requirements and translate them into technical solutions. Participate in the full software development lifecycle (SDLC) using Agile methodologies. Write clean, maintainable, well-documented, and testable code. Contribute to code reviews to ensure code quality, share knowledge, and identify potential issues. Troubleshoot, debug, and upgrade existing software systems. Develop and integrate with RESTful APIs and potentially other web services. Work with databases (like Postgersql) to store and retrieve data efficiently. Optimize applications for maximum speed, scalability, and reliability. Stay up-to-date with the latest industry trends, technologies, and best practices in Python development and related fields. Potentially assist in the integration of AI/ML models or contribute to projects involving LLM-based agents or applications. Minimum Qualifications: Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field. 3-4 years of professional software development experience with a primary focus on Python. Strong proficiency in Python and its standard libraries. Proven experience with at least one major Python web framework (e.g., Django, Flask, FastAPI). Solid understanding of object-oriented programming (OOP) principles. Experience working with relational databases (e.g., PostgreSQL, MySQL) and/or NoSQL databases (e.g., MongoDB, Redis). Proficiency with version control systems, particularly Git. Experience designing, building, and consuming RESTful APIs. Familiarity with Agile development methodologies (e.g., Scrum, Kanban). Strong problem-solving and analytical skills. Excellent communication and teamwork abilities. Preferred (Good-to-Have) Qualifications: AI/ML Knowledge: Basic understanding of machine learning concepts and algorithms. Experience with relevant Python libraries for data science and ML (e.g., Pandas, NumPy, Scikit-learn). Experience integrating pre-trained ML models into applications. Familiarity with deep learning frameworks (e.g., TensorFlow, PyTorch) is a plus. LLM Experience: Demonstrable interest or hands-on experience in building applications leveraging Large Language Models (LLMs). 
Experience working with LLM APIs (e.g., OpenAI GPT, Anthropic Claude, Google Gemini). Familiarity with LLM frameworks or libraries (e.g., LangChain, LlamaIndex). Understanding of basic prompt engineering techniques. Experience building or experimenting with LLM-powered agents or chatbots. Containerization & Orchestration: Experience with containerization technologies like Docker and orchestration tools like Kubernetes. CI/CD: Experience setting up or working with Continuous Integration/Continuous Deployment (CI/CD) pipelines (e.g., Jenkins, GitLab CI, GitHub Actions). Asynchronous Programming: Experience with Python's asynchronous libraries (e.g., asyncio, aiohttp). What We Offer: Challenging projects with opportunities to work on cutting-edge technologies especially in the field of AI. Competitive salary and comprehensive benefits package. Opportunities for professional development and learning (e.g., conferences, courses, certifications). A collaborative, innovative, and supportive work environment. Job Type: Full-time Pay: ₹16,526.97 - ₹68,399.45 per month Benefits: Food provided Health insurance Location Type: In-person Schedule: Monday to Friday Work Location: In person Speak with the employer +91 9041633697

Posted 1 week ago

Apply

6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Linkedin logo

About The Role Grade Level (for internal use): 10 The Team : As a member of the Data Transformation - Cognitive Engineering team you will work on building and deploying ML powered products and capabilities to power natural language understanding, data extraction, information retrieval and data sourcing solutions for S&P Global Market Intelligence and our clients. You will spearhead deployment of AI products and pipelines while leading-by-example in a highly engaging work environment. You will work in a (truly) global team and encouraged for thoughtful risk-taking and self-initiative. What’s In It For You Be a part of a global company and build solutions at enterprise scale Lead a highly skilled and technically strong team (including leadership) Contribute to solving high complexity, high impact problems Build production ready pipelines from ideation to deployment Responsibilities Design, Develop and Deploy ML powered products and pipelines Mentor a team of Senior and Junior data scientists / ML Engineers in delivering large scale projects Play a central role in all stages of the AI product development life cycle, including: Designing Machine Learning systems and model scaling strategies Research & Implement ML and Deep learning algorithms for production Run necessary ML tests and benchmarks for model validation Fine-tune, retrain and scale existing model deployments Extend existing ML library’s and write packages for reproducing components Partner with business leaders, domain experts, and end-users to gain business understanding, data understanding, and collect requirements Interpret results and present them to business leaders Manage production pipelines for enterprise scale projects Perform code reviews & optimization for your projects and team Lead and mentor by example, including project scrums Technical Requirements Proven track record as a senior / lead ML engineer Expert proficiency in Python (Numpy, Pandas, Spacy, Sklearn, Pytorch/TF2, HuggingFace etc.) Excellent exposure to large scale model deployment strategies and tools Excellent knowledge of ML & Deep Learning domain Solid exposure to Information Retrieval, Web scraping and Data Extraction at scale Exposure to the following technologies - R-Shiny/Dash/Streamlit, SQL, Docker, Airflow, Redis, Celery, Flask/Django/FastAPI, PySpark, Scrapy Experience with SOTA models related to NLP and expertise in text matching techniques, including sentence transformers, word embeddings, and similarity measures Open to learning new technologies and programming languages as required A Master’s / PhD from a recognized institute in a relevant specialization Good To Have 6-7+ years of relevant experience in ML Engineering Prior substantial experience from the Economics/Financial industry Prior work to show on Github, Kaggle, StackOverflow etc. What’s In It For You? Our Purpose Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. 
Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our Benefits Include Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring And Opportunity At S&P Global At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. 
Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf IFTECH202.1 - Middle Professional Tier I (EEO Job Group) Job ID: 315679 Posted On: 2025-05-20 Location: Gurgaon, Haryana, India

Posted 1 week ago

Apply

5.0 - 6.0 years

5 - 10 Lacs

India

On-site

GlassDoor logo

Job Summary: We are seeking a highly skilled Python Developer with expertise in Machine Learning and Data Analytics to join our team. The ideal candidate should have 5-6 years of experience in developing end-to-end ML-driven applications and handling data-driven projects independently. You will be responsible for designing, developing, and deploying Python-based applications that leverage data analytics, statistical modeling, and machine learning techniques. Key Responsibilities: Design, develop, and deploy Python applications for data analytics and machine learning. Work independently on machine learning model development, evaluation, and optimization. Develop ETL pipelines and process large-scale datasets for analysis. Implement scalable and efficient algorithms for predictive analytics and automation. Optimize code for performance, scalability, and maintainability. Collaborate with stakeholders to understand business requirements and translate them into technical solutions. Integrate APIs and third-party tools to enhance functionality. Document processes, code, and best practices for maintainability. Required Skills & Qualifications: 5-6 years of professional experience in Python application development. Strong expertise in Machine Learning, Data Analytics, and AI frameworks (TensorFlow, PyTorch, Scikit-learn, etc.). Proficiency in Python libraries such as Pandas, NumPy, SciPy, and Matplotlib. Experience with SQL and NoSQL databases (PostgreSQL, MongoDB, etc.). Hands-on experience with big data technologies (Apache Spark, Delta Lake, Hadoop, etc.). Strong experience in developing APIs and microservices using FastAPI, Flask, or Django. Good understanding of data structures, algorithms, and software development best practices. Strong problem-solving and debugging skills. Ability to work independently and handle multiple projects simultaneously. Good to have - Working knowledge of cloud platforms (Azure/AWS/GCP) for deploying ML models and data applications. Job Type: Full-time Pay: ₹500,000.00 - ₹1,000,000.00 per year Schedule: Fixed shift Work Location: In person Application Deadline: 30/06/2025 Expected Start Date: 01/07/2025

Posted 1 week ago

Apply

0 years

0 Lacs

Chennai

On-site

GlassDoor logo

We are seeking a skilled and motivated Software Engineer with strong experience in Google Cloud Platform (GCP) and Python programming. In this role, you will be responsible for designing, developing, and maintaining scalable and reliable cloud-based solutions, data pipelines, or applications on GCP, leveraging Python for scripting, automation, data processing, and service integration. Bachelor’s degree in computer science or engineering 3+ plus years of software development and support experience including analysis, design, & testing. Domain experience within Automotive, Manufacturing and Supply chain Strong proficiency in Python programming, including experience with standard libraries and popular frameworks/libraries (e.g., Pandas, NumPy, FastAPI, Flask, Django, Scikit-learn, TensorFlow/PyTorch - depending on the role). Hands-on experience designing, deploying, and managing resources and services on Google Cloud Platform (GCP). Familiarity with database querying (SQL) and understanding of database concepts. Understanding of cloud architecture principles, including scalability, reliability, and security. Proven experience working effectively within an Agile development or operations team (e.g., Scrum, Kanban). Experience using incident tracking and project management tools (e.g., Jira, ServiceNow, Azure DevOps). Excellent verbal and written communication skills, with the ability to explain technical issues clearly to both technical and non-technical audiences. Excellent teamwork, written and verbal communication, and organizational skills are essential, ability to solve complex problems in a global environment Ability to multi-task effectively and prioritize work based on business impact and urgency Nice-to-Have Skills: GCP certifications (e.g., Associate Cloud Engineer, Professional Cloud DevOps Engineer, Professional Cloud Architect). Experience with other cloud providers (AWS, Azure). Experience with containerization (Docker) and orchestration (Kubernetes). Experience with database administration (e.g., PostgreSQL, MySQL). Familiarity with security best practices and tools in a cloud environment (DevSecOps). Experience with serverless technologies beyond Cloud Functions/Run. Contribution to open-source projects. Design, implement, and manage scalable, secure, and reliable infrastructure on Google Cloud Platform (GCP) using Infrastructure as Code (IaC) principles, primarily with Terraform. Develop and manage APIs or backend services in Python deployed on GCP services like Cloud Run Function, App Engine, or GKE. Build and maintain robust CI/CD pipelines (e.g., using Cloud Build, Jenkins, GitHub) to enable frequent and reliable application deployments. Work closely with software development teams to understand application requirements and translate them into cloud-native solutions on GCP. Implement and manage monitoring, logging, and alerting solutions (e.g., Cloud Monitoring, Prometheus, Grafana, Cloud Logging) to ensure system health and performance. Implement and enforce security best practices within the GCP environment (e.g., IAM policies, network security groups, security scanning). Troubleshoot and resolve production issues across various services (Applications) and infrastructure components (GCP). Work closely with product manager and business stakeholders to understand the business needs and associated systems requirements to meet customization required in SaaS Solution. Run and protect the SaaS Solution in AWS Environment and troubleshoots production issues. 
Active participant in all team agile ceremonies, manage the daily deliverables in Rally with proper user story and acceptance criteria. Provides input to product governance communications.

Posted 1 week ago

Apply

8.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site

Linkedin logo

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. RCE-Risk Data Engineer-Leads Job Description: Our Technology team builds innovative digital solutions rapidly and at scale to deliver the next generation of Financial and Non-Financial services across the globe. The position is a senior technical, hands-on delivery role, requiring knowledge of data engineering, cloud infrastructure and platform engineering, platform operations and production support using ground-breaking cloud and big data technologies. The ideal candidate, with 8-10 years of relevant experience, will possess strong technical skills, an eagerness to learn, a keen interest in the 3 key pillars that our team supports (i.e. Financial Crime, Financial Risk and Compliance technology transformation), the ability to work collaboratively in a fast-paced environment, and an aptitude for picking up new tools and techniques on the job, building on existing skillsets as a foundation. In this role you will: Develop, maintain and optimize backend systems and RESTful APIs using Python and Flask Apply concurrent processing strategies and performance optimization for complex architectures Write clean, maintainable and well-documented code Develop comprehensive test suites to ensure code quality and reliability Work independently to deliver features and fix issues, with a few hours of overlap for real-time collaboration Integrate backend services with databases and APIs Collaborate asynchronously with cross-functional team members Participate in occasional team meetings, code reviews and planning sessions. Core/Must-Have Skills: Should have a minimum of 6+ years of professional Python development experience. Should have a strong understanding of computer science fundamentals (data structures, algorithms). Should have 6+ years of experience in Flask and RESTful API development. Should possess knowledge of container technologies (Docker, Kubernetes). Should possess experience implementing interfaces in Python. Should know how to use Python generators for efficient memory management. Should have a good understanding of the Pandas, NumPy and Matplotlib libraries for data analytics and reporting. Should know how to implement multi-threading and enforce parallelism in Python. Should know how to use the Global Interpreter Lock (GIL) in Python and its implications for multithreading and multiprocessing. Should have a good understanding of SQLAlchemy to interact with databases. Should possess knowledge of implementing ETL transformations using Python libraries. Should know techniques for performing list comprehensions in Python. Collaborate with cross-functional teams to ensure the successful implementation of solutions. Good to have: Exposure to Data Science libraries or data-centric development Understanding of authentication and authorization (e.g. JWT, OAuth) Basic knowledge of frontend technologies (HTML/CSS/JavaScript) is a bonus but not required. Experience with cloud services (AWS, GCP or Azure) EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets.
Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 1 week ago

Apply

8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. RCE-Risk Data Engineer-Leads Job Description: Our Technology team builds innovative digital solutions rapidly and at scale to deliver the next generation of Financial and Non-Financial services across the globe. The position is a senior technical, hands-on delivery role, requiring knowledge of data engineering, cloud infrastructure and platform engineering, platform operations and production support using ground-breaking cloud and big data technologies. The ideal candidate, with 8-10 years of relevant experience, will possess strong technical skills, an eagerness to learn, a keen interest in the 3 key pillars that our team supports (i.e. Financial Crime, Financial Risk and Compliance technology transformation), the ability to work collaboratively in a fast-paced environment, and an aptitude for picking up new tools and techniques on the job, building on existing skillsets as a foundation. In this role you will: Develop, maintain and optimize backend systems and RESTful APIs using Python and Flask Apply concurrent processing strategies and performance optimization for complex architectures Write clean, maintainable and well-documented code Develop comprehensive test suites to ensure code quality and reliability Work independently to deliver features and fix issues, with a few hours of overlap for real-time collaboration Integrate backend services with databases and APIs Collaborate asynchronously with cross-functional team members Participate in occasional team meetings, code reviews and planning sessions. Core/Must-Have Skills: Should have a minimum of 6+ years of professional Python development experience. Should have a strong understanding of computer science fundamentals (data structures, algorithms). Should have 6+ years of experience in Flask and RESTful API development. Should possess knowledge of container technologies (Docker, Kubernetes). Should possess experience implementing interfaces in Python. Should know how to use Python generators for efficient memory management. Should have a good understanding of the Pandas, NumPy and Matplotlib libraries for data analytics and reporting. Should know how to implement multi-threading and enforce parallelism in Python. Should know how to use the Global Interpreter Lock (GIL) in Python and its implications for multithreading and multiprocessing. Should have a good understanding of SQLAlchemy to interact with databases. Should possess knowledge of implementing ETL transformations using Python libraries. Should know techniques for performing list comprehensions in Python. Collaborate with cross-functional teams to ensure the successful implementation of solutions. Good to have: Exposure to Data Science libraries or data-centric development Understanding of authentication and authorization (e.g. JWT, OAuth) Basic knowledge of frontend technologies (HTML/CSS/JavaScript) is a bonus but not required. Experience with cloud services (AWS, GCP or Azure) EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets.
Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Noida

On-site

GlassDoor logo

About Company, Droisys is an innovation technology company focused on helping companies accelerate their digital initiatives from strategy and planning through execution. We leverage deep technical expertise, Agile methodologies, and data-driven intelligence to modernize systems of engagement and simplify human/tech interaction. Amazing things happen when we work in environments where everyone feels a true sense of belonging and when candidates have the requisite skills and opportunities to succeed. At Droisys, we invest in our talent and support career growth, and we are always on the lookout for amazing talent who can contribute to our growth by delivering top results for our clients. Join us to challenge yourself and accomplish work that matters Job Posting Title: AI Solution Engineer What does a AI Solution Engineer do? As an AI Solution Engineer at, you will be responsible for developing, deploying, and implementing advanced AI-enabled applications for our highly sophisticated systems, ensuring compliance with security standards and delivering innovative and efficient solutions within a secure environment. Self-discipline and a strong desire to build applications with high integrity are essential for success in this role. What will you do: Research and Innovation : Stay updated with the latest AI technologies, tools, and trends to continuously improve and innovate //'s data conversion processes. Documentation : Maintain comprehensive documentation of AI solutions, methodologies, and deployment processes. Design and Develop AI Models : Implement AI solutions focused on automating and enhancing the core data conversion processes. Data Handling : Work with large datasets, ensuring the integrity and security of sensitivity information during the conversion process. Secure Environment Compliance : Develop and deploy AI solutions in accordance with security protocols, ensuring all processes meet compliance standards. Collaboration : Work closely with cross-functional teams including data scientists, software engineers, and business analysts to create integrated AI solutions. Testing and Validation : Conduct rigorous testing and validation of AI models to ensure accuracy and reliability. Performance Optimization : Continuously monitor and optimize AI models for efficiency and performance improvements. Perform application scoring and data aggregation What you will need to have: Programming Skills : Proficiency in programming languages such as Python, JS/NodeJS, and .NET Framework/Core C#. Machine Learning Frameworks : Familiarity with ML frameworks and libraries such as TensorFlow, PyTorch, Keras, or Scikit-Learn. Experience in selecting and implementing appropriate algorithms for specific tasks is highly valuable. Data Handling and Processing : Experience with data manipulation and analysis using tools like Pandas or NumPy. Understanding how to preprocess data, handle unstructured data, and create datasets for training models is crucial. Deep Learning : Knowledge of deep learning concepts and architectures, including convolutional neural networks (CNNs), recurrent neural networks (RNNs), and transformers, especially for tasks related to image recognition, natural language processing, and more. Software Development Practices : Familiarity with software development methodologies, version control systems (like Git), and DevOps principles to ensure smooth integration and deployment of AI models. 
Cloud Computing : Experience with cloud-based services and platforms (e.g., AWS, Google Cloud, Azure) that provide tools for machine learning and AI deployment. System Design : Ability to design scalable AI systems, including understanding architecture patterns, APIs, and microservices for integrating AI models into broader applications. Problem-Solving : Strong analytical and problem-solving skills to identify the best AI solutions for various challenges and to troubleshoot issues that arise during implementation. Collaboration and Communication : Experience in working collaboratively with cross-functional teams, including data scientists, software engineers, and business stakeholders, to align AI solutions with business objectives. Hands-on Experience : 5-7+ years of technical implementation experience What would be great to have: Experience in the Financial Services Industry and an understanding of relevant compliance standards and regulations. Certification in AI/ML or relevant technologies. Experience with reporting tools like Splunk, SSRS, Cognos, and Power BI Droisys is an equal opportunity employer. We do not discriminate based on race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law. Droisys believes in diversity, inclusion, and belonging, and we are committed to fostering a diverse work environment.

Posted 1 week ago

Apply

8.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

Linkedin logo

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. RCE-Risk Data Engineer-Leads Job Description: Our Technology team builds innovative digital solutions rapidly and at scale to deliver the next generation of Financial and Non-Financial services across the globe. The position is a senior technical, hands-on delivery role, requiring knowledge of data engineering, cloud infrastructure and platform engineering, platform operations and production support using ground-breaking cloud and big data technologies. The ideal candidate, with 8-10 years of relevant experience, will possess strong technical skills, an eagerness to learn, a keen interest in the 3 key pillars that our team supports (i.e. Financial Crime, Financial Risk and Compliance technology transformation), the ability to work collaboratively in a fast-paced environment, and an aptitude for picking up new tools and techniques on the job, building on existing skillsets as a foundation. In this role you will: Develop, maintain and optimize backend systems and RESTful APIs using Python and Flask Apply concurrent processing strategies and performance optimization for complex architectures Write clean, maintainable and well-documented code Develop comprehensive test suites to ensure code quality and reliability Work independently to deliver features and fix issues, with a few hours of overlap for real-time collaboration Integrate backend services with databases and APIs Collaborate asynchronously with cross-functional team members Participate in occasional team meetings, code reviews and planning sessions. Core/Must-Have Skills: Should have a minimum of 6+ years of professional Python development experience. Should have a strong understanding of computer science fundamentals (data structures, algorithms). Should have 6+ years of experience in Flask and RESTful API development. Should possess knowledge of container technologies (Docker, Kubernetes). Should possess experience implementing interfaces in Python. Should know how to use Python generators for efficient memory management. Should have a good understanding of the Pandas, NumPy and Matplotlib libraries for data analytics and reporting. Should know how to implement multi-threading and enforce parallelism in Python. Should know how to use the Global Interpreter Lock (GIL) in Python and its implications for multithreading and multiprocessing. Should have a good understanding of SQLAlchemy to interact with databases. Should possess knowledge of implementing ETL transformations using Python libraries. Should know techniques for performing list comprehensions in Python. Collaborate with cross-functional teams to ensure the successful implementation of solutions. Good to have: Exposure to Data Science libraries or data-centric development Understanding of authentication and authorization (e.g. JWT, OAuth) Basic knowledge of frontend technologies (HTML/CSS/JavaScript) is a bonus but not required. Experience with cloud services (AWS, GCP or Azure) EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets.
Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 1 week ago

Apply

16.0 years

1 - 6 Lacs

Noida

On-site

GlassDoor logo

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. Primary Responsibilities: WHAT Business Knowledge: Capable of understanding the requirements for the entire project (not just own features) Capable of working closely with PMG during the design phase to drill down into detailed nuances of the requirements Has the ability and confidence to question the motivation behind certain requirements and work with PMG to refine them. Design: Can design and implement machine learning models and algorithms Can articulate and evaluate pros/cons of different AI/ML approaches Can generate cost estimates for model training and deployment Coding/Testing: Builds and optimizes machine learning pipelines Knows & brings in external ML frameworks and libraries Consistently avoids common pitfalls in model development and deployment HOW Quality: Solves cross-functional problems using data-driven approaches Identifies impacts/side effects of models outside of immediate scope of work Identifies cross-module issues related to data integration and model performance Identifies problems predictively using data analysis Productivity: Capable of working on multiple AI/ML projects simultaneously and context switching between them Process: Enforces process standards for model development and deployment. Independence: Acts independently to determine methods and procedures on new or special assignments Prioritizes large tasks and projects effectively Agility: Release Planning: Works with the PO to do high-level release commitment and estimation Works with PO on defining stories of appropriate size for model development Agile Maturity: Able to drive the team to achieve a high level of accomplishment on the committed stories for each iteration Shows Agile leadership qualities and leads by example WITH Team Work: Capable of working with development teams and identifying the right division of technical responsibility based on skill sets. Capable of working with external teams (e.g., Support, PO, etc.) that have significantly different technical skill sets and managing the discussions based on their needs Initiative: Capable of creating innovative AI/ML solutions that may include changes to requirements to create a better solution Capable of thinking outside-the-box to view the system as it should be rather than only how it is Proactively generates a continual stream of ideas and pushes to review and advance ideas if they make sense Takes initiative to learn how AI/ML technology is evolving outside the organization Takes initiative to learn how the system can be improved for the customers Should make problems open new doors for innovations Communication: Communicates complex AI/ML concepts internally with ease Accountability: Well versed in all areas of the AI/ML stack (data preprocessing, model training, evaluation, deployment, etc.) 
and aware of all components in play Leadership: Disagree without being disagreeable Use conflict as a way to drill deeper and arrive at better decisions Frequent mentorship Builds ad-hoc cross-department teams for specific projects or problems Can achieve broad scope 'buy in' across project teams and across departments Takes calculated risks Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so Required Qualifications: B.E/B.Tech/MCA/MSc/MTech (Minimum 16 years of formal education, Correspondence courses are not relevant) 5+ years of experience working on multiple layers of technology Experience deploying and maintaining ML models in production Experience in Agile teams Experience with one or more data-oriented workflow orchestration frameworks (Airflow, KubeFlow etc.) Working experience or good knowledge of cloud platforms (e.g., Azure, AWS, OCI) Ability to design, implement, and maintain CI/CD pipelines for MLOps and DevOps function Familiarity with traditional software monitoring, scaling, and quality management (QMS) Knowledge of model versioning and deployment using tools like MLflow, DVC, or similar platforms Familiarity with data versioning tools (Delta Lake, DVC, LakeFS, etc.) Demonstrate hands-on knowledge of OpenSource adoption and use cases Good understanding of Data/Information security Proficient in Data Structures, ML Algorithms, and ML lifecycle Product/Project/Program Related Tech Stack: Machine Learning Frameworks: Scikit-learn, TensorFlow, PyTorch Programming Languages: Python, R, Java Data Processing: Pandas, NumPy, Spark Visualization: Matplotlib, Seaborn, Plotly Familiarity with model versioning tools (MLFlow, etc.) Cloud Services: Azure ML, AWS SageMaker, Google Cloud AI GenAI: OpenAI, Langchain, RAG etc. Demonstrate good knowledge in Engineering Practices Demonstrates excellent problem-solving skills Proven excellent verbal, written, and interpersonal communication skills At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.

Posted 1 week ago

Apply

Exploring numpy Jobs in India

Numpy is a widely used library in Python for numerical computing and data analysis. In India, there is a growing demand for professionals with expertise in numpy. Job seekers in this field can find exciting opportunities across various industries. Let's explore the numpy job market in India in more detail.
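
For readers new to the library, the snippet below is a minimal, illustrative sketch of the kind of numpy work these roles expect: creating an array, applying vectorized arithmetic, and running simple aggregations. The figures and variable names are made up purely for illustration.

  import numpy as np

  # Illustrative annual salaries (in lakhs INR) for a handful of numpy roles
  salaries = np.array([4.5, 6.0, 8.2, 12.0, 15.5, 20.0])

  # Vectorized arithmetic: apply a hypothetical 10% increment to every element at once
  revised = salaries * 1.10

  # Simple aggregations and boolean indexing
  print("Mean salary:", revised.mean())
  print("Highest salary:", revised.max())
  print("Salaries above 10 lakhs:", revised[revised > 10])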

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Gurgaon
  5. Chennai

Average Salary Range

The average salary range for numpy professionals in India varies based on experience level:

  • Entry-level: INR 4-6 lakhs per annum
  • Mid-level: INR 8-12 lakhs per annum
  • Experienced: INR 15-20 lakhs per annum

Career Path

Typically, a career in numpy progresses as follows:

  1. Junior Developer
  2. Data Analyst
  3. Data Scientist
  4. Senior Data Scientist
  5. Tech Lead

Related Skills

In addition to numpy, professionals in this field are often expected to have knowledge of:

  • Pandas
  • Scikit-learn
  • Matplotlib
  • Data visualization

Interview Questions

  • What is numpy and why is it used? (basic)
  • Explain the difference between a Python list and a numpy array. (basic)
  • How can you create a numpy array with all zeros? (basic)
  • What is broadcasting in numpy? (medium)
  • How can you perform element-wise multiplication of two numpy arrays? (medium)
  • Explain the use of the np.where() function in numpy. (medium)
  • What is vectorization in numpy? (advanced)
  • How does memory management work in numpy arrays? (advanced)
  • Describe the difference between np.array and np.matrix in numpy. (advanced)
  • How can you speed up numpy operations? (advanced)
  • ...
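
As a quick refresher, the short sketch below works through a few of the basic and medium questions above: creating an all-zeros array, element-wise multiplication, broadcasting, and np.where. The arrays and names are illustrative only.

  import numpy as np

  # Create a numpy array with all zeros (basic)
  zeros = np.zeros((2, 3))

  # Element-wise multiplication of two arrays (medium)
  a = np.array([1, 2, 3])
  b = np.array([4, 5, 6])
  product = a * b                 # array([ 4, 10, 18])

  # Broadcasting: a (3, 1) column and a (3,) row combine into a (3, 3) grid (medium)
  col = np.array([[10], [20], [30]])
  grid = col + a                  # each element of `a` is added across every row of `col`

  # np.where: pick values element-wise based on a condition (medium)
  labels = np.where(a > 1, "big", "small")   # array(['small', 'big', 'big'])

  print(zeros, product, grid, labels, sep="\n")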

Closing Remark

As you explore job opportunities in the field of numpy in India, remember to keep honing your skills and stay updated with the latest developments in the industry. By preparing thoroughly and applying confidently, you can land the numpy job of your dreams!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies