4.0 - 8.0 years
0 Lacs
delhi
On-site
As a Python Developer at Innefu Lab, you will play a crucial role across the software development life cycle, from requirements analysis to deployment. Working with diverse teams, you will design and implement solutions that align with client requirements and industry standards. Your key responsibilities:
- Software Development: create, test, and deploy high-quality Python applications and scripts.
- Code Optimization: write efficient, reusable, and modular code while enhancing existing codebases for optimal performance.
- Database Integration: integrate Python applications with databases to ensure data integrity and efficient data retrieval.
- API Development: design and implement RESTful APIs to enable seamless communication between different systems.
- Collaboration: work closely with UI/UX designers, backend developers, and stakeholders to integrate Python components effectively.
- Testing and Debugging: test applications thoroughly, identify and fix bugs, and ensure software reliability.
- Documentation: create and maintain comprehensive technical documentation for code, APIs, and system architecture.
- Continuous Learning: stay current with industry trends, best practices, and emerging technologies in Python development.
Required Skills:
- Proficient in Python, Django, Flask
- Strong knowledge of Regular Expressions, Pandas, NumPy
- Expertise in web crawling and web scraping
- Experience with scraping tools such as Selenium, Scrapy, Beautiful Soup, or urllib
- Familiarity with text processing, Elasticsearch, and graph databases such as Neo4j (optional)
- Proficient in data mining, Natural Language Processing (NLP), and Optical Character Recognition (OCR)
- Basic understanding of databases
- Strong troubleshooting and debugging capabilities
- Effective interpersonal, verbal, and written communication skills
- Ability to extract data from structured and unstructured sources, analyze text, images, and videos, and use NLP frameworks for data enrichment
- Skilled in collecting and extracting intelligence from data, using regular expressions, and extracting information from RDBMS databases
- Experience with web scraping frameworks such as Scrapy for data extraction from websites

Join us at Innefu Lab, where innovative offerings and cutting-edge technologies converge to deliver exceptional security solutions. Be part of our dynamic team driving excellence and growth in the cybersecurity domain.
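For illustration only, the regular-expression data-extraction skill this posting calls for might look like the following sketch; the sample text, field names, and patterns are invented, not Innefu Lab's actual pipeline:

```python
import re

# Pull structured fields out of free text with regular expressions.
# Everything here (text, reference format, field names) is hypothetical.
text = "Contact: alice@example.com, Ref: INV-2024-001, Amount: 1,250.50"

email = re.search(r"[\w.+-]+@[\w-]+\.[\w.]+", text)
ref = re.search(r"INV-\d{4}-\d{3}", text)
amount = re.search(r"Amount:\s*([\d,]+\.\d{2})", text)

record = {
    "email": email.group(0) if email else None,
    "ref": ref.group(0) if ref else None,
    "amount": float(amount.group(1).replace(",", "")) if amount else None,
}
print(record)
```

The same pattern-plus-fallback shape extends naturally to batch extraction over scraped pages.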
Posted 3 weeks ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
As a Python data engineer specializing in ETL processes, PySpark, and Pandas, your primary responsibility will be to collaborate with stakeholders to gather requirements, refine stories, design and implement solutions, test thoroughly, and provide ongoing support in production environments. You will use Behavior-Driven Development (BDD) techniques to work closely with users, analysts, developers, and other testers to ensure we are building the right solutions.

Your proficiency in Python, especially with libraries such as pandas, numpy, datetime, and re, will be instrumental in transforming raw data using techniques like group by, filter, pivot, rank, merge, join, lambda functions, and loops. Handling large datasets efficiently and creating data pipelines that read data from APIs and store it in databases will be key aspects of your role. You should also be adept at managing APIs within data pipelines and optimizing SQL data access through high-performance SQL scripts; experience with Google Cloud Platform (GCP) solutions is a plus.

Practical experience building data engineering solutions, familiarity with version control systems like Git, and a background in agile methodologies such as Scrum, Kanban, or XP will be valuable assets in this position. Your excellent English communication skills will enable effective discussions with stakeholders from the UK, US, and Germany, contributing to successful stakeholder management. Overall, your technical expertise, collaborative approach, and data engineering experience will play a crucial role in the success of our projects.
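As a hedged illustration of the pandas techniques the posting names (group by, merge, lambda functions), a minimal sketch with invented data:

```python
import pandas as pd

# Toy sales/targets data, invented purely to demonstrate the transforms
# listed in the posting; no real schema is implied.
sales = pd.DataFrame({
    "region": ["N", "N", "S", "S"],
    "product": ["a", "b", "a", "b"],
    "amount": [10, 20, 30, 40],
})
targets = pd.DataFrame({"region": ["N", "S"], "target": [25, 65]})

# group by + aggregate
totals = sales.groupby("region", as_index=False)["amount"].sum()

# merge, then a lambda-based derived column
report = totals.merge(targets, on="region")
report["met"] = report.apply(lambda r: r["amount"] >= r["target"], axis=1)
print(report)
```

Pivot, rank, and join follow the same DataFrame-in, DataFrame-out style via `pivot_table`, `rank`, and `join`.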
Posted 3 weeks ago
2.0 - 6.0 years
0 Lacs
vadodara, gujarat
On-site
As a Data Automation Specialist at Qualifacts, you will develop and maintain automation scripts to fetch data from various web portals, using tools like Selenium, Beautiful Soup, or similar technologies to extract the required information efficiently. You will process and clean the extracted data using Python libraries such as pandas to ensure its accuracy and relevance, and export the processed data to Excel with appropriate formatting using libraries like openpyxl.

Collaboration with stakeholders is a key aspect of this position: you will work closely with them to understand data requirements and ensure the automation aligns with business needs. Your ability to troubleshoot and resolve issues in data extraction and automation processes will be crucial to project success, and staying current with the latest trends and best practices in web scraping and data automation will help keep the processes efficient and effective.

Qualifacts is an equal opportunity employer that values diversity and is dedicated to fostering an inclusive environment for all employees. Join us in our commitment to creating a workplace where everyone feels welcome and supported.
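A minimal sketch of the clean-then-export flow described above; the rows are invented stand-ins for what a scraper might return, and the openpyxl export line is shown but left commented:

```python
import pandas as pd

# Hypothetical scraped rows (e.g. from Beautiful Soup) with typical
# messiness: stray whitespace, thousands separators, a duplicate row.
raw = [
    {"name": " Acme Corp ", "revenue": "1,200"},
    {"name": "Beta LLC", "revenue": "950"},
    {"name": " Acme Corp ", "revenue": "1,200"},  # duplicate
]

df = pd.DataFrame(raw)
df["name"] = df["name"].str.strip()
df["revenue"] = df["revenue"].str.replace(",", "").astype(int)
df = df.drop_duplicates().reset_index(drop=True)

# Export with formatting would go through openpyxl, e.g.:
# df.to_excel("report.xlsx", index=False, engine="openpyxl")
print(df)
```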
Posted 3 weeks ago
3.0 - 7.0 years
0 Lacs
kochi, kerala
On-site
As a Lead AI Engineer at our company, you will manage projects, clients, and teams with ease while inspiring confidence, and you will have the opportunity to work with foreign clients and diverse cross-functional teams. We are seeking an individual with proven experience as an AI Developer and a strong understanding of machine learning algorithms such as regression, clustering, neural networks, and decision trees.

In this role, you will work with AI/ML frameworks like TensorFlow, PyTorch, and Keras, and be proficient in Python, R, or other languages used for AI development. Familiarity with data science libraries like Pandas, NumPy, and Scikit-learn is essential; knowledge of NLP, computer vision, or reinforcement learning is a plus. Experience with cloud platforms like AWS, Azure, and Google Cloud, and with deploying AI solutions in production, is also required.

Your responsibilities will include developing, training, and optimizing machine learning models and AI systems, and collaborating with data scientists, software engineers, and product managers to integrate AI models into existing applications. Building AI-powered tools, platforms, and applications will be a key part of your role. You will research and implement machine learning algorithms for classification, prediction, recommendation, and NLP, while monitoring model performance and performing necessary tuning and updates. You will also write clean, scalable, and efficient code for AI algorithms, stay current with the latest AI/ML trends, tools, and frameworks, conduct data preprocessing and feature engineering to ensure data quality, troubleshoot AI-related issues, improve model accuracy, and communicate effectively with external partners, customers, and product owners.

The ideal candidate will have an innovative and detail-oriented approach to problem-solving, excel at communicating with internal teams and external partners, and be a resourceful problem solver capable of finding solutions and navigating challenges in ambiguous situations. If you are passionate about AI engineering and possess the required skills and experience, we encourage you to apply for this exciting opportunity.
Posted 3 weeks ago
3.0 - 7.0 years
0 Lacs
ahmedabad, gujarat
On-site
We are looking for a highly experienced Senior AI/ML Engineer with a strong background in cloud infrastructure and IoT to join our dynamic team in Ahmedabad. As an ML Engineer/MLOps Engineer, your primary responsibility will be to lead the design, development, and deployment of AI/ML solutions on cloud platforms, integrate IoT technologies, and drive innovation in intelligent systems.

Your role will involve leading the end-to-end development and deployment of AI/ML solutions on cloud platforms such as AWS for applications like predictive analytics, anomaly detection, and intelligent automation. You will also integrate IoT sensors, devices, and protocols like MQTT into AI/ML solutions to enable intelligent decision-making, remote monitoring, and control in IoT environments. Additionally, you will architect and optimize cloud infrastructure for AI workloads, design data pipelines for IoT data ingestion, and train machine learning models for various tasks using state-of-the-art algorithms and frameworks like PyTorch.

Collaboration with cross-functional teams, including data scientists, software engineers, and product managers, is essential to define project requirements, develop prototypes, and deliver scalable AI/ML solutions that align with business objectives. You will need to implement algorithms and models proficiently using programming languages like Python, R, or Java and leverage relevant libraries and frameworks for computer vision tasks.

You should have a Bachelor's or Master's degree in Computer Science, Engineering, or a related field, along with 3-5 years of proven experience as an AI/ML Engineer. Strong programming skills in Python, familiarity with AI/ML libraries and frameworks, IoT development, containerization, and serverless computing are crucial requirements, as is a deep understanding of machine learning algorithms, model training, validation, and deployment best practices.

If you are willing to relocate to Ahmedabad and meet the educational and experience requirements above, we encourage you to apply for this full-time ML Engineer/MLOps Engineer position.
Posted 3 weeks ago
8.0 - 10.0 years
10 - 13 Lacs
Hyderabad, Ahmedabad
Hybrid
We're Hiring: Data Scientists | Immediate Joiners Preferred
Location: Hyderabad / Ahmedabad
Experience: 8 to 10 years
Salary: 18–20 LPA
Notice Period: Immediate joiners or max 15 days preferred

Are you passionate about solving complex problems using data? We're looking for Data Scientists who are excited to build scalable predictive models, optimization solutions, and data pipelines in a cloud-native environment.

Key Responsibilities
- Develop predictive models using statistical techniques (Bayesian models preferred).
- Solve optimization problems like bin packing, TSP, and clustering using integer programming.
- Build and manage robust Python solutions using Git and Poetry.
- Process and analyze large datasets using Pandas, Polars, etc.
- Deploy and manage data pipelines on Azure Databricks and Azure Blob Storage.
- Monitor and troubleshoot cloud pipelines, logs, and performance issues.
- (Nice to have) Automate workflows using Power Apps or Power Automate.
- Optimize cloud costs and performance across data architectures.

Requirements
- Strong experience in Data Science, ML, or Applied AI roles.
- Strong hands-on skills in Python, predictive modeling, and statistical analysis.
- Experience in integer programming, clustering, and optimization.
- Proficiency with Azure cloud services, Databricks, and Blob Storage.
- Strong SQL and data transformation experience.
- Good understanding of CI/CD and DevOps (preferred but not mandatory).
- Strong communication and problem-solving skills.

What We Offer
- Competitive salary: 18–20 LPA
- Opportunity to work on cutting-edge AI/ML and optimization problems
- Collaborative and growth-oriented work culture
- Immediate onboarding

If you're ready to make an impact and meet the above criteria, we'd love to connect with you.
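For illustration, the bin-packing problem named in the responsibilities is usually solved exactly with integer programming (e.g. via a solver library); the sketch below uses a simple first-fit-decreasing heuristic just to show the problem's shape, with invented item sizes:

```python
def first_fit_decreasing(items, capacity):
    """Greedy heuristic for bin packing: place each item (largest first)
    into the first bin with room, opening a new bin when none fits.

    Exact solutions formulate this as an integer program; this is only
    an illustrative approximation.
    """
    bins = []  # each bin is a list of item sizes
    for item in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + item <= capacity:
                b.append(item)
                break
        else:
            bins.append([item])
    return bins

packed = first_fit_decreasing([4, 8, 1, 4, 2, 1], capacity=10)
print(packed)  # -> [[8, 2], [4, 4, 1, 1]]
```

Here the heuristic happens to find the optimum (two bins for 20 units of size at capacity 10); in general it guarantees at most about 11/9 of the optimal bin count.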
Posted 3 weeks ago
7.0 - 10.0 years
20 - 25 Lacs
Hyderabad, Bengaluru, Mumbai (All Areas)
Work from Office
We are seeking a highly skilled Software Engineer with deep expertise in Python programming to design, build, test, and maintain a robust data and software infrastructure. The successful candidate will be integral to developing a data analytics system for a leading oil and gas company, processing structured and unstructured data from diverse sources, including electrical systems and other industrial data inputs.

Key Responsibilities:
- Design and develop scalable, high-performance software systems using Python.
- Build and maintain data pipelines that handle structured and unstructured data.
- Work with both SQL and NoSQL databases to manage and optimize data storage and access.
- Collaborate with data scientists, analysts, and engineers to support analytics and reporting workflows.
- Implement best practices in software development, including testing, documentation, and CI/CD.
- Participate in system architecture design and contribute to technology decisions.
- Ensure data quality, integrity, and security across all systems.

Required Qualifications:
- 5–10 years of hands-on software development experience, primarily in Python.
- Proven experience with SQL databases (PostgreSQL, MySQL, etc.) and NoSQL systems (MongoDB, Cassandra, etc.).
- Experience working with large-scale data processing systems.
- Strong understanding of software engineering principles, algorithms, and data structures.
- Ability to design and build data pipelines from heterogeneous data sources.
- Experience with data formats and processing methods for structured and unstructured data.
- Advanced degree (MS or PhD) in Computer Science or a related field from a reputed institution.
Posted 3 weeks ago
5.0 - 10.0 years
27 - 30 Lacs
Navi Mumbai
Work from Office
We are seeking a Python Developer with a minimum of 5 years of experience, proficient in Django or Flask, ORM libraries, and SQL/NoSQL databases. Expertise in Git and familiarity with cloud platforms and data/machine learning libraries (e.g., Pandas) are required.
Posted 3 weeks ago
6.0 - 9.0 years
0 - 2 Lacs
Bengaluru
Work from Office
Manager - Data Engineer: Elevate Your Impact Through Innovation and Learning

Evalueserve is a global leader in delivering innovative and sustainable solutions to a diverse range of clients, including over 30% of Fortune 500 companies. With a presence in more than 45 countries across five continents, we excel in leveraging state-of-the-art technology, artificial intelligence, and unparalleled subject matter expertise to elevate our clients' business impact and strategic decision-making. Our team of over 4,500 talented professionals operates in countries such as India, China, Chile, Romania, the US, and Canada, and our global network extends to emerging markets like Colombia, the Middle East, and the rest of Asia-Pacific. Recognized by Great Place to Work in India, Chile, Romania, the US, and the UK in 2022, we offer a dynamic, growth-oriented, and meritocracy-based culture that prioritizes continuous learning, skill development, and work-life balance.

About Corporate and Investment Banking & Investment Research (CIB & IR)
As a global leader in knowledge processes, research, and analytics, you'll be working with a team that specializes in global market research, working with the top-rated investment research organizations, bulge bracket investment banks, and leading asset managers. We cater to 8 of the top 10 global banks, working alongside their product and sector teams, supporting them on deal origination, execution, valuation, and transaction advisory related projects.

What you will be doing at Evalueserve
- Construct analytical dashboards from alternative data use cases, such as sector or thematic and financial KPI dashboards.
- Load and import data into internal warehouses through Azure Blob Storage and/or S3 deliveries, SFTP, and other ingestion mechanisms.
- Design and implement ETL workflows for preprocessing of transactional and aggregated datasets, including complex joins, window functions, aggregations, bins, and partitions.
- Manipulate and enhance time series datasets into relational data stores.
- Implement and refine panels in transactional datasets and relevant panel normalization.
- Conduct web scraping, extraction, and post-processing of numerical data from web-based datasets.

What we're looking for
- Previous experience working within fundamental equity investment workflows, such as exposure to financial modeling.
- High proficiency in SQL and the Python data stack (pandas, numpy, sklearn).
- Experience with scheduling and execution platforms such as Airflow, Prefect, or similar scheduled DAG frameworks.
- Understanding of efficient query management in Snowflake, Databricks, or equivalent platforms.
- Optional familiarity with automation of workflows that produce Excel outputs, such as through openpyxl.
- Optional familiarity with integrations and import/export to REST/gRPC/GraphQL APIs.

Security: This role is performed in a dedicated, secure workspace.
Travel: Annual travel to the U.S. for onsite collaboration is expected.

Follow us on https://www.linkedin.com/company/evalueserve/ to learn more about our achievements, including our AI-powered supply chain optimization solution built on Google Cloud, how Evalueserve is leveraging NVIDIA NIM to enhance our AI and digital transformation solutions and accelerate AI capabilities, and how Evalueserve has climbed 16 places on the 50 Best Firms for Data Scientists in 2024. Want to learn more about our culture and what it's like to work with us? Write to us at: careers@evalueserve.com

Disclaimer: The following job description serves as an informative reference for the tasks you may be required to perform. However, it does not constitute an integral component of your employment agreement and is subject to periodic modifications to align with evolving circumstances.

Please Note: We appreciate the accuracy and authenticity of the information you provide, as it plays a key role in your candidacy. As part of the Background Verification Process, we verify your employment, education, and personal details. Please ensure all information is factual and submitted on time. For any assistance, your TA SPOC is available to support you.
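As an illustrative sketch of the panel-normalization work this posting describes, the snippet below scales each panelist's spend by their per-panelist mean using a grouped, window-function-style transform. The transactions and the normalization rule are invented assumptions, not the firm's actual methodology:

```python
import pandas as pd

# Hypothetical transactional panel data.
txns = pd.DataFrame({
    "panelist": ["p1", "p1", "p2", "p2"],
    "spend": [10.0, 30.0, 100.0, 300.0],
})

# groupby(...).transform keeps row alignment, like a SQL window function:
# each row gets its own panelist's mean spend.
txns["panel_mean"] = txns.groupby("panelist")["spend"].transform("mean")
txns["normalized"] = txns["spend"] / txns["panel_mean"]
print(txns)
```

The same `transform` pattern covers the ranks, bins, and partitioned aggregations listed in the ETL bullet, without collapsing rows the way a plain `groupby().agg()` would.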
Posted 3 weeks ago
5.0 - 10.0 years
12 - 17 Lacs
Hyderabad
Work from Office
Project description
The WMI Core stream provides Core Banking capabilities across WM International locations and works towards integration and synergies across WMI locations, driving a capability-driven and modular platform strategy for Core Banking. You'll be working in the WMI Core Technology team in one of our locations (Hyderabad), as per your eligibility and role requirements. Together as a team, we provide solutions to unique business/market requirements.

Responsibilities
As a Python Developer, you will be responsible for designing, developing, and maintaining Python-based applications and services. You will collaborate with cross-functional teams to deliver high-quality software solutions that meet the needs of our business divisions. Your role will involve:
- Developing and maintaining Python applications and services.
- Collaborating with product managers, designers, and other developers to create efficient and scalable solutions.
- Writing clean, maintainable, and efficient code.
- Participating in code reviews and providing constructive feedback to peers.
- Troubleshooting and debugging issues across the stack.
- Ensuring the performance, quality, and responsiveness of applications.
- Staying up to date with emerging technologies and industry trends.

Skills
Must have
- Proven experience of more than 5 years as a Python Developer or in a similar role.
- Proficiency in Python and its frameworks, such as Django or Flask.
- Strong knowledge of back-end technologies and RESTful APIs.
- Experience with Pandas, and knowledge of AI/ML libraries like TensorFlow, PyTorch, etc.
- Experience working with Kubernetes/OpenShift, Docker, and cloud-native frameworks.
- Experience with database technologies such as SQL, NoSQL, and ORM frameworks.
- Familiarity with version control systems like Git.
- Familiarity with Linux and Windows operating systems; knowledge of shell scripting (Bash for Linux, PowerShell for Windows) is advantageous.
- Understanding of Agile methodologies and DevOps practices.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills.
- Ability to work independently and as part of a team.
- A degree in Computer Science, Engineering, or a related field is preferred.

Nice to have
- Experience in an Agile framework.
Posted 3 weeks ago
5.0 - 10.0 years
14 - 19 Lacs
Bengaluru
Work from Office
Project description
We are seeking a highly skilled and motivated Data Scientist with 5+ years of experience to join our team. The ideal candidate will bring strong data science, programming, and data engineering expertise, along with hands-on experience in generative AI, large language models, and modern LLM application frameworks. This role also demands excellent communication and stakeholder management skills to collaborate effectively across business units.

Skills
Must have
- Experience: 5+ years of industry experience as a Data Scientist, with a proven track record of delivering impactful, data-driven solutions.
- Programming Skills: Advanced proficiency in Python, with extensive experience writing clean, efficient, and maintainable code. Proficiency with version control tools such as Git.
- Data Engineering: Strong working proficiency with SQL and distributed computing with Apache Spark.
- Cloud Platforms: Experience building and deploying apps on Azure Cloud.
- Generative AI & LLMs: Practical experience with large language models (e.g., OpenAI, Anthropic, HuggingFace). Knowledge of Retrieval-Augmented Generation (RAG) techniques and prompt engineering is expected.
- Machine Learning & Modeling: Strong grasp of statistical modeling, machine learning algorithms, and tools like scikit-learn, XGBoost, etc.
- Stakeholder Engagement: Excellent communication skills with a demonstrated ability to interact with business stakeholders, understand their needs, present technical insights clearly, and drive alignment across teams.
- Tools and Libraries: Proficiency with libraries like Pandas, NumPy, and ML lifecycle tools such as MLflow.
- Team Collaboration: Proven experience contributing to agile teams and working cross-functionally in fast-paced environments.

Nice to have
- Hands-on experience with Databricks and Snowflake.
- Hands-on experience building LLM-based applications using agentic frameworks like LangChain, LangGraph, and AutoGen.
- Familiarity with data visualization platforms such as Power BI, Tableau, or Plotly.
- Front-end/full-stack development experience.
- Exposure to MLOps practices and model deployment pipelines in production.
Posted 3 weeks ago
8.0 - 13.0 years
12 - 16 Lacs
Chennai
Work from Office
Project description
We need a Senior Python and PySpark Developer to work for a leading investment bank client.

Responsibilities
- Develop software applications based on business requirements.
- Maintain software applications and make enhancements according to project specifications.
- Participate in requirement analysis, design, development, testing, and implementation activities.
- Propose new techniques and technologies for software development.
- Perform unit testing and user acceptance testing to evaluate application functionality.
- Complete assigned development tasks within deadlines.
- Work in compliance with coding standards and best practices.
- Assist junior developers when needed; perform code reviews and recommend improvements.
- Review business requirements and recommend changes to develop reliable applications.
- Develop coding documentation and other technical specifications for assigned projects.
- Act as the primary contact for development queries and concerns; analyze and resolve development issues accurately.

Skills
Must have
- 8+ years of experience in data-intensive PySpark development.
- Experience as a core Python developer: classes, OOP, exception handling, parallel processing.
- Strong knowledge of DB connectivity, data loading, transformation, and calculation.
- Extensive experience with Pandas/NumPy dataframes: slicing, data wrangling, aggregations.
- Lambda functions and decorators.
- Vector operations on Pandas dataframes/series; application of applymap, apply, and map functions.
- Concurrency and error handling for data pipeline batches of 1–10 GB.
- Ability to understand business requirements and translate them into technical requirements.
- Ability to design data pipeline architecture for concurrent data processing.
- Familiarity with creating/designing RESTful services and APIs.
- Familiarity with application unit tests.
- Working with Git source control.
- Service-oriented architecture, including the ability to consider integrations with other applications and services.
- Application debugging.

Nice to have
- Knowledge of web backend technology: Django, Python, PostgreSQL.
- Apache Airflow.
- Atlassian Jira.
- Understanding of financial markets: asset classes (FX, FI, Equities, Rates, Commodities & Credit), trade types (OTC, exchange traded, Spot, Forward, Swap, Options), and related systems is a plus.
- Surveillance domain knowledge, regulations (MAR, MiFID, CAT, Dodd-Frank), and related systems knowledge is a plus.
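A small sketch of the applymap / apply / map distinction and the vectorized column operations this posting asks about, using toy data (note: `applymap` is deprecated in newer pandas in favor of `DataFrame.map`, but still runs):

```python
import pandas as pd
import numpy as np

# Toy frame, invented for illustration.
df = pd.DataFrame({"a": [1, 2, 3], "b": [10, 20, 30]})

squared = df.applymap(lambda x: x * x)      # element-wise over the whole frame
col_sums = df.apply(np.sum, axis=0)         # column-wise reduction
labels = df["a"].map({1: "low", 2: "mid", 3: "high"})  # Series value mapping
vector = df["a"] * df["b"]                  # vectorized column arithmetic
print(vector.tolist())                      # -> [10, 40, 90]
```

Vectorized operations like the last line are the ones to prefer on 1–10 GB batches; the element-wise `applymap`/`apply` paths fall back to Python-level loops.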
Posted 3 weeks ago
3.0 - 6.0 years
13 - 18 Lacs
Bengaluru
Work from Office
Project description
Create tooling to facilitate the development and debugging of a new calibration framework for multi-camera calibration.

Responsibilities
- Apply sound fundamental engineering knowledge and problem-solving skills.
- Be hands-on, self-motivated, independent, and dedicated.
- Work well in a diverse, collaborative, and dynamic environment.
- Debug and triage issues quickly.
- Facilitate development for new projects.

Skills
Must have
- Machine Learning (ML).
- Optical Character Recognition (OCR).
- Experience processing huge amounts of data.
- Capability to build applications.
- Python data visualization libraries: Pandas, Matplotlib/Seaborn.

Nice to have
- Data analysis.
- C++ experience.
Posted 3 weeks ago
4.0 - 8.0 years
11 - 15 Lacs
Bengaluru
Work from Office
Date 1 Location: Bangalore, KA, IN Company Alstom We are looking for an experienced Engineer to oversee the process and use of data systems used in ALSTOM. You will discover efficient ways to organize, store and analyze data with attention to security and confidentiality. A great Engineer Data Analyst is able to fully grasp the complexity of data management. The ideal candidate will have a strong understanding of databases and data analysis procedures. You will also be tech-savvy and possess excellent troubleshooting skills. The goal is to ensure that Parts Data information flows timely and securely to and from the Orchestra Tool to organization widly interphased tools. Purpose of the role Reporting to the Engineering Data Shared Services DL, and working closely with other Digital Transformation Teams, Business Process Owners, Data Owners and end users, you will: Be responsible to ensure consistency of Master data in compliance with core business rules. Contribute in defining the data standards and data quality criteria Manage critical activities in the process Be the subject matter expert and share knowledge Responsibilities Create and enforce Standard , Specific & Design parts for effective data management Formulate techniques for quality data collection to ensure adequacy, accuracy and legitimacy of data Devise and implement efficient and secure procedures for data handling and analysis with attention to all technical aspects Support others in the daily use of data systems and ensure adherence to legal and company standards Assist with reports and data extraction when needed Monitor and analyze information and data systems and evaluate their performance to discover ways of enhancing them (new technologies, upgrades etc.) Troubleshoot data-related problems and authorize maintenance or modifications Manage all incoming data files. Continually develop data management strategies. Analyse & validate master data during rollouts. 
- Raise incident tickets and work closely with other IT operations teams to resolve MDM issues
- Be resilient and strive to take the team to the next level by highlighting roadblocks to management
Critical Challenges
- Métiers facing transformation challenges while business continuity must be maintained in Regions
- Complex end-to-end data flows with many cross-data dependencies
Posted 3 weeks ago
7.0 - 12.0 years
8 - 13 Lacs
Bengaluru
Work from Office
Date 25 Jun 2025 Location: Bangalore, KA, IN Company Alstom
At Alstom, we understand transport networks and what moves people. From high-speed trains, metros, monorails, and trams, to turnkey systems, services, infrastructure, signalling, and digital mobility, we offer our diverse customers the broadest portfolio in the industry. Every day, 80,000 colleagues lead the way to greener and smarter mobility worldwide, connecting cities as we reduce carbon and replace cars.
Your future role
Take on a new challenge and apply your data engineering expertise in a cutting-edge field. You'll work alongside collaborative and innovative teammates. You'll play a key role in enabling data-driven decision-making across the organization by ensuring data availability, quality, and accessibility. Day-to-day, you'll work closely with teams across the business (e.g., Data Scientists, Analysts, and ML Engineers), mentor junior engineers, and contribute to the architecture and design of our data platforms and solutions. You'll specifically take care of designing and developing scalable data pipelines, but also managing and optimizing object storage systems.
We'll look to you for:
- Designing, developing, and maintaining scalable and efficient data pipelines using tools like Apache NiFi and Apache Airflow
- Creating robust Python scripts for data ingestion, transformation, and validation
- Managing and optimizing object storage systems such as Amazon S3, Azure Blob, or Google Cloud Storage
- Collaborating with Data Scientists and Analysts to understand data requirements and deliver production-ready datasets
- Implementing data quality checks, monitoring, and alerting mechanisms
- Ensuring data security, governance, and compliance with industry standards
- Mentoring junior engineers and promoting best practices in data engineering
All about you
We value passion and attitude over experience. That's why we don't expect you to have every single skill.
Instead, we've listed some that we think will help you succeed and grow in this role:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field
- 7+ years of experience in data engineering or a similar role
- Strong proficiency in Python and data processing libraries (e.g., Pandas, PySpark)
- Hands-on experience with Apache NiFi for data flow automation
- Deep understanding of object storage systems and cloud data architectures
- Proficiency in SQL and experience with both relational and NoSQL databases
- Familiarity with cloud platforms (AWS, Azure, or GCP)
- Exposure to the Data Science ecosystem, including tools like Jupyter, scikit-learn, TensorFlow, or MLflow
- Experience working in cross-functional teams with Data Scientists and ML Engineers
- Cloud certifications or relevant technical certifications are a plus
Things you'll enjoy
Join us on a life-long transformative journey: the rail industry is here to stay, so you can grow and develop new skills and experiences throughout your career. You'll also:
- Enjoy stability, challenges, and a long-term career free from boring daily routines
- Work with advanced data and cloud technologies to drive innovation
- Collaborate with cross-functional teams and helpful colleagues
- Contribute to innovative projects that have a global impact
- Utilise our flexible and hybrid working environment
- Steer your career in whatever direction you choose across functions and countries
- Benefit from our investment in your development, through award-winning learning programs
- Progress towards leadership roles or specialized technical paths
- Benefit from a fair and dynamic reward package that recognises your performance and potential, plus comprehensive and competitive social coverage (life, medical, pension)
You don't need to be a train enthusiast to thrive with us. We guarantee that when you step onto one of our trains with your friends or family, you'll be proud. If you're up for the challenge, we'd love to hear from you!
Important to note
As a global business, we're an equal-opportunity employer that celebrates diversity across the 63 countries we operate in. We're committed to creating an inclusive workplace for everyone.
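The ingestion, transformation, and validation stages this role describes can be sketched framework-free; in practice a tool like Apache Airflow or NiFi would orchestrate each stage as a task. The record fields and quality rules below are hypothetical illustrations, not an actual Alstom schema:

```python
# Minimal sketch of a three-stage data pipeline: ingest -> transform -> validate.
# All field names and rules here are made-up examples.

def ingest(raw_rows):
    """Parse raw CSV-like rows into dicts."""
    return [dict(zip(("id", "value"), r.split(","))) for r in raw_rows]

def transform(records):
    """Cast types and derive a simple quality flag."""
    out = []
    for rec in records:
        rec = {"id": rec["id"], "value": float(rec["value"])}
        rec["flagged"] = rec["value"] < 0  # hypothetical business rule
        out.append(rec)
    return out

def validate(records):
    """Simple data-quality check: ids must be unique."""
    ids = [r["id"] for r in records]
    assert len(ids) == len(set(ids)), "duplicate ids"
    return records

rows = ["a,1.5", "b,-2.0", "c,3.25"]
result = validate(transform(ingest(rows)))
print(result[1])
```

An orchestrator would run each function as a separate task with retries and monitoring; the chained calls above only show the data contract between stages.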
Posted 3 weeks ago
1.0 - 3.0 years
7 - 11 Lacs
Chennai, India
Work from Office
Hello Visionary! We empower our people to stay resilient and relevant in a constantly changing world. We're looking for people who are always searching for creative ways to grow and learn. People who want to make a real impact, now and in the future. Does that sound like you? Then it seems like you'd make a great addition to our vibrant team. We are looking for a Data Science Engineer. This position is for the Chennai location.
What do I bring
You should have:
- Proficiency in Python, including libraries like pandas, NumPy, scikit-learn, PyTorch/TF, statsmodels
- Deep hands-on experience in machine learning & data mining, proficiency in algorithm development and ability to work with complex infrastructures
- Good understanding of time-series modeling, forecasting, anomaly detection
- Working exposure to AI/ML & Data Science algorithm development
- Familiarity with cloud data platforms (AWS preferred) and data lake architecture
- Ability to translate data findings into clear, actionable insights for dashboards or stakeholders
- Strong data fundamentals & demonstrated ability to organize, clean, schematize and process data, as well as a good understanding of how to use data science methodologies to solve complex problems
- Willingness to be a team player
- Ability to effectively communicate in English, both written and spoken
Desirable to have:
- Working experience in Scala, Graph analytics
- Hands-on experience with frontend technologies such as Angular, JavaScript or similar
- Proven experience working with IoT/sensor data
- Working experience in Agile software development (daily scrum, pair sessions, sprint planning, retro & review, clean code and self-organized), configuration, testing and release management
- Experience in container concepts (Docker, Kubernetes)
- Experience in GitLab Continuous Integration & Docker
- Knowledge of the Energy & Sustainability domain
- International collaboration & working experience, with distributed virtual teams
What do I take away
Apart from the
several other evident benefits, you would also enjoy the following opportunities:
- To collaborate with global software product development teams, comprising business analysts, product managers, product owners & architects, whose technical & domain expertise stretches over decades
- To be part of a highly disciplined and influential work culture, where an individual's decisions and contributions directly attribute to the success of the business objectives, customer goals & users' lives
- Hybrid working opportunity
- Diverse and inclusive culture
- Great variety of learning & development opportunities
What are my responsibilities
- You take a challenging role in the development of a cloud-based offering with an easy-to-use interface that monitors, analyzes, and helps to optimize energy utilization of buildings & campuses, via multi-site performance dashboards visualizing historical and near real-time series data for energy consumption, costs, and emissions values
- You develop tomorrow's data-driven products for buildings, incl. predictive maintenance, prescriptive simulations, anomaly detection, system optimization and forecasts, by understanding & analyzing business requirements through interaction with stakeholders
- You investigate the most appropriate techniques to solve complex problems by assessing diverse mathematical, statistical and AI models
- You evaluate & implement advanced data analytics techniques in collaboration with the global Data Analytics team, Software Architects/Engineers and Product Managers
- You collaborate with Software Engineers to ensure the smooth integration of data science products into global offerings
- You analyze large volumes of data covering a wide range of information, from building operational data to equipment and user behavior data, to identify new patterns through data mining and AI
Join us and be yourself!
We value your unique identity and perspective, recognizing that our strength comes from the diverse backgrounds, experiences, and thoughts of our team members. We are fully committed to providing equitable opportunities and building a workplace that reflects the diversity of society. We also support you in your personal and professional journey by providing resources to help you thrive. Come bring your authentic self and create a better tomorrow with us. Make your mark in our exciting world at Siemens. This role is based in Chennai and is an individual contributor role. You might be required to visit other locations within India and outside. In return, you'll get the chance to work with teams impacting the shape of things to come. Find out more about Siemens careers at
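As one concrete example of the time-series anomaly detection this role calls for, here is a minimal rolling z-score sketch; the window size, threshold, and energy readings are illustrative assumptions, not part of any Siemens product:

```python
import numpy as np

# Flag points that deviate by more than `threshold` standard deviations
# from the mean of the preceding `window` readings.
def rolling_zscore_anomalies(series, window=5, threshold=3.0):
    series = np.asarray(series, dtype=float)
    flags = np.zeros(len(series), dtype=bool)
    for i in range(window, len(series)):
        past = series[i - window:i]
        mu, sigma = past.mean(), past.std()
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            flags[i] = True
    return flags

# Hypothetical hourly energy readings; the 50 is an injected spike.
readings = [10, 11, 10, 12, 11, 10, 50, 11, 10, 12]
print(np.flatnonzero(rolling_zscore_anomalies(readings)))  # indices flagged
```

A production detector would handle seasonality and trend (e.g., via forecasting residuals) rather than a flat rolling window, but the core "distance from recent behavior" idea is the same.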
Posted 3 weeks ago
1.0 - 6.0 years
16 - 20 Lacs
Noida, India
Work from Office
Gen AI Engineer
Brightly Software is seeking a high performer to join our Product team in the role of Gen AI Engineer, to drive best-in-class client-facing AI features by creating and delivering insights that inform client decisions tomorrow. As a Gen AI Engineer, you will play a critical role in building AI offerings for Brightly. You will partner with our various software Product teams to drive client-facing insights that inform smarter decisions, faster. This will include the following:
- Design and implement applications powered by generative AI (e.g., LLMs, diffusion models), delivering contextual and actionable insights for clients
- Establish best practices and documentation for prompt engineering, model fine-tuning, and evaluation to support cross-domain generative AI use cases
- Build, test, and deploy generative AI applications using standard tools and frameworks for model inference, embeddings, vector stores, and orchestration pipelines
Key Responsibilities
- Build and optimize Retrieval-Augmented Generation (RAG) pipelines using vector stores like Pinecone, FAISS, or AWS OpenSearch
- Develop GenAI applications using Hugging Face Transformers, LangChain, and Llama-related frameworks
- Perform exploratory data analysis (EDA), data cleaning, and feature engineering to prepare data for model building
- Design, develop, train, and evaluate machine learning models (e.g., classification, regression, clustering, natural language processing) with strong experience in predictive and statistical modelling
- Implement and deploy machine learning models into production using AWS services, with a strong focus on Amazon SageMaker (e.g., SageMaker Studio, training jobs, inference endpoints, SageMaker Pipelines)
- Understand and develop state management workflows using LangGraph
- Engineer and evaluate prompts, including prompt chaining and output quality assessment
- Apply NLP and transformer model expertise to solve language tasks
- Deploy GenAI models to cloud platforms (preferably AWS) using Docker and Kubernetes
- Monitor and optimize model and pipeline performance for scalability and efficiency
- Communicate technical concepts clearly to cross-functional and non-technical stakeholders
- Thrive in a fast-paced, lean environment and contribute to scalable GenAI system design
Qualifications
- Bachelor's degree is required
- 2-4 years of total experience with a strong focus on AI and ML, and 1+ years in core GenAI engineering
- Demonstrated expertise in working with large language models (LLMs) and generative AI systems, including both text-based and multimodal models
- Strong programming skills in Python, including proficiency with data science libraries such as NumPy, Pandas, Scikit-learn, TensorFlow, and/or PyTorch
- Familiarity with MLOps principles and tools for automating and streamlining the ML lifecycle
- Experience working with agentic AI
- Capable of building Retrieval-Augmented Generation (RAG) pipelines leveraging vector stores like Pinecone, Chroma, or FAISS
- Experience using leading AI/ML libraries such as Hugging Face Transformers and LangChain
- Practical experience in working with vector databases and embedding methodologies for efficient information retrieval
- Experience in developing and exposing API endpoints for accessing AI model capabilities using frameworks like FastAPI
- Knowledgeable in prompt engineering techniques, including prompt chaining and performance evaluation strategies
- Solid grasp of natural language processing (NLP) fundamentals and transformer-based model architectures
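The retrieval step of the RAG pipelines named above can be sketched with a toy bag-of-words "embedding" standing in for a real embedding model, and an in-memory matrix standing in for a vector store such as FAISS or Pinecone; the documents and query are hypothetical:

```python
import numpy as np

docs = [
    "reset your password from the account settings page",
    "invoices are emailed on the first day of each month",
    "contact support to change the billing address",
]
vocab = sorted({w for d in docs for w in d.split()})

def embed(text):
    """Toy word-count vector; a real system would call an embedding model."""
    vec = np.zeros(len(vocab))
    for w in text.split():
        if w in vocab:
            vec[vocab.index(w)] += 1.0
    return vec

index = np.stack([embed(d) for d in docs])  # the in-memory "vector store"

def retrieve(query, k=1):
    """Return the k documents most cosine-similar to the query."""
    q = embed(query)
    sims = index @ q / (np.linalg.norm(index, axis=1) * (np.linalg.norm(q) or 1.0))
    return [docs[i] for i in np.argsort(-sims)[:k]]

print(retrieve("how do i reset my password"))
```

In a full RAG application the retrieved passages would then be injected into the LLM prompt as context; only the retrieval half is shown here.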
Posted 3 weeks ago
5.0 - 10.0 years
20 - 25 Lacs
India, Bengaluru
Work from Office
Radiologist
The Medical Imaging Technology team of Siemens Healthcare Technology Center has an immediate opening in Princeton, NJ for an experienced Chest Radiologist to join our multidisciplinary team focused on advancing AI-based solutions in medical imaging. The ideal candidate will bring clinical expertise in thoracic imaging and play a pivotal role in the development, validation, and deployment of AI-driven diagnostic tools. This is an exciting opportunity to work at the intersection of clinical radiology and cutting-edge technology, contributing to the future of AI in healthcare.
What are my responsibilities
- Collaborate with data scientists, AI engineers, and clinical teams to develop algorithms for automated detection, diagnosis, and quantification of thoracic diseases (e.g., lung cancer, pneumonia, interstitial lung disease) in medical imaging
- Lead the clinical validation efforts for AI models by evaluating algorithm performance, ensuring diagnostic accuracy, and comparing AI outputs against radiologist interpretation
- Provide expert annotations and guidance on chest imaging datasets to improve AI model training, and ensure the quality and consistency of labeled datasets used in AI development
- Work with AI developers to refine algorithms, providing clinical feedback on areas for improvement and offering suggestions to enhance diagnostic performance
- Assist in integrating AI solutions into radiology workflows, ensuring tools are user-friendly, efficient, and improve clinical outcomes
What do I need to qualify for this job
Education: Doctor of Medicine (MD) degree with board certification in Diagnostic Radiology. Fellowship training in Chest or Thoracic Imaging is required.
Experience: Minimum of 5 years of experience in chest imaging, with a strong background in diagnosing pulmonary and thoracic conditions. Experience in AI development, clinical research, or advanced imaging techniques is highly desirable.
Technical Skills: Familiarity with AI applications in medical imaging, including automated segmentation, classification, and detection algorithms. Experience with AI platforms, machine learning models, and medical imaging software is a plus.
Communication Skills: Excellent communication and collaboration skills, with the ability to work effectively in a multidisciplinary team environment.
Posted 3 weeks ago
5.0 - 10.0 years
5 - 9 Lacs
Kolkata, Mumbai, New Delhi
Work from Office
As a Machine Learning Engineer, you will be in charge of building end-to-end machine learning pipelines that operate at a huge scale, from data investigation, ingestion and model training to deployment, monitoring, and continuous optimization. You will ensure that each pipeline delivers measurable impact through experimentation, high-throughput inference, and seamless integration with business-critical systems. This job combines 70% machine learning engineering and 30% algorithm engineering and data science. We're seeking an Adtech pro who thrives in a team environment, possesses exceptional communication and analytical skills, and can navigate the high-pressure demands of delivering results, taking ownership, and leveraging sales opportunities.
Responsibilities:
- Build ML pipelines that train on real big data and perform at a massive scale
- Handle massive responsibility: advertising on lucrative placements (Samsung app store, Xiaomi phones, Truecaller)
- Train models that will make billions of daily predictions and affect hundreds of millions of users
- Discover and optimize the best algorithmic solutions to data problems, from implementing exotic losses to efficient grid search
- Validate and test everything; every step should be measured and chosen via A/B testing, with use of observability tools
- Own your experiments and your pipelines
- Be frugal: optimize the business solution at minimal cost
- Advocate for AI: be the voice of data science and machine learning answering business needs
- Build future products involving agentic AI and data science
- Affect millions of users every instant and handle massive scale
Requirements
- MSc in CS/EE/STEM with at least 5 years of proven experience (or BSc with equivalent experience) as a Machine Learning Engineer, with a strong focus on MLOps, data analytics, software engineering, and applied data science - Must
- Hyper-communicator: ability to work with minimal supervision and maximal transparency.
Must understand requirements rigorously, while frequently giving an efficient, honest picture of his/her work progress and results. Flawless verbal English - Must
- Strong problem-solving skills; drive projects from concept to production, working incrementally and smart. Ability to own features end to end: theory, implementation and measurement. Articulate, data-driven communication is also a must.
- Deep understanding of machine learning, including the internals of all important ML models and ML methodologies
- Strong real experience in Python, and at least one other programming language (C#, C++, Java, Go). Ability to write efficient, clear and resilient production-grade code. Flawless in SQL.
- Strong background in probability and statistics
- Experience with ML tools and models; experience with conducting A/B tests
- Experience with using cloud providers and services (AWS) and Python frameworks: TensorFlow/PyTorch, NumPy, Pandas, SKLearn (Airflow, MLflow, Transformers, ONNX, Kafka are a plus)
- AI/LLM assistance: candidates have to hold all skills independently, without using AI assistance. That said, candidates are expected to use AI effectively, safely and transparently.
Preferred:
- Deep knowledge in ML aspects including ML theory, optimization, deep learning tinkering, RL, uncertainty quantification, NLP, classical machine learning, performance measurement
- Prompt engineering and agentic workflow experience
- Web development skills
- Publications in leading machine learning conferences and/or Medium blogs
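The A/B testing and statistics background this role requires boils down, in the simplest conversion-rate case, to a two-proportion z-test; the conversion counts below are made-up illustration numbers:

```python
from math import sqrt, erf

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF, via erf
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical experiment: 2.0% vs 2.6% conversion over 10k users each.
z, p = two_proportion_ztest(conv_a=200, n_a=10_000, conv_b=260, n_b=10_000)
print(f"z={z:.2f}, p={p:.4f}")
```

Real ad-serving experiments add sequential-testing corrections and guardrail metrics on top, but this is the significance check underneath.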
Posted 3 weeks ago
5.0 - 8.0 years
10 - 20 Lacs
Chennai
Remote
Job Type: Contract
Job Summary
We are seeking a highly skilled and mathematically grounded Machine Learning Engineer to join our AI team. The ideal candidate will have 5+ years of ML experience with a deep understanding of machine learning algorithms, statistical modeling, and optimization techniques, along with hands-on experience in building scalable ML systems using modern frameworks and tools.
Key Responsibilities
- Design, develop, and deploy machine learning models for real-world applications
- Collaborate with data scientists, software engineers, and product teams to integrate ML solutions into production systems
- Understand the mathematics behind machine learning algorithms to effectively implement and optimize them
- Conduct mathematical analysis of algorithms to ensure robustness, efficiency, and scalability
- Optimize model performance through hyperparameter tuning, feature engineering, and algorithmic improvements
- Stay updated with the latest research in machine learning and apply relevant findings to ongoing projects
Required Qualifications
Mathematics & Theoretical Foundations
- Strong foundation in Linear Algebra (e.g., matrix operations, eigenvalues, SVD)
- Proficiency in Probability and Statistics (e.g., Bayesian inference, hypothesis testing, distributions)
- Solid understanding of Calculus (e.g., gradients, partial derivatives, optimization)
- Knowledge of Numerical Methods and Convex Optimization
- Familiarity with Information Theory, Graph Theory, or Statistical Learning Theory is a plus
Programming & Software Skills
- Proficient in Python (preferred), with experience in libraries such as NumPy, Pandas, Scikit-learn, Matplotlib, Seaborn
- Experience with deep learning frameworks: TensorFlow, PyTorch, Keras, or JAX
- Familiarity with MLOps tools: MLflow, Kubeflow, Airflow, Docker, Kubernetes
- Experience with cloud platforms (AWS, GCP, Azure) for model deployment
Machine Learning Expertise
- Hands-on experience with supervised, unsupervised, and reinforcement learning
- Understanding of model evaluation metrics and validation techniques
- Experience with large-scale data processing (e.g., Spark, Dask) is a plus
Preferred Qualifications
- Master's or Ph.D. in Computer Science, Mathematics, Statistics, or a related field
- Publications or contributions to open-source ML projects
- Experience with LLMs, transformers, or generative models
Location: Remote (possible travel to Chennai for meetings)
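The calculus and linear-algebra requirements above meet in practice in exactly this kind of routine: batch gradient descent on a least-squares objective, with the gradient derived analytically. The data here is synthetic and the hyperparameters are illustrative:

```python
import numpy as np

# Synthetic regression problem: y = X @ true_w + small noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
true_w = np.array([3.0, -1.5])
y = X @ true_w + 0.01 * rng.normal(size=200)

# Minimize 0.5 * mean((Xw - y)^2); its gradient is X^T (Xw - y) / n.
w = np.zeros(2)
lr = 0.1
for _ in range(500):
    grad = X.T @ (X @ w - y) / len(y)
    w -= lr * grad

print(w.round(2))  # should recover roughly [3.0, -1.5]
```

Checking the hand-derived gradient against a finite-difference estimate is a standard sanity test and a common interview exercise for roles like this one.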
Posted 3 weeks ago
0.0 - 3.0 years
1 - 3 Lacs
Noida
Work from Office
We, at Macreel Info Soft, are looking for enthusiastic and self-driven Full Stack AI & ML Engineers to join our dynamic team: people who are passionate about Artificial Intelligence, Machine Learning, and Full Stack Development, and want to work on real-world AI applications.
Posted 3 weeks ago
4.0 - 6.0 years
4 - 9 Lacs
Kolkata, Pune, Chennai
Work from Office
We are seeking an experienced Python Developer with strong hands-on expertise in AWS cloud services and data libraries. The ideal candidate will be proficient in designing and deploying applications using Python and AWS (Lambda, EC2, S3), and familiar with DevOps tools such as GitLab. Experience with NumPy and Pandas for data processing or ML-related tasks is essential.
Posted 3 weeks ago
4.0 - 6.0 years
18 - 25 Lacs
Hyderabad
Work from Office
Job Summary: We are looking for a highly skilled and experienced AI/ML Developer-Lead with 4-5 years of hands-on relevant experience to join our technology team. You will be responsible for designing, developing, and optimizing machine learning models that drive intelligent business solutions. The role involves close collaboration with cross-functional teams to deploy scalable AI systems and stay abreast of evolving trends in artificial intelligence and machine learning.
Key Responsibilities:
1. Develop and Implement AI/ML Models: Design, build, and implement AI/ML models tailored to solve specific business challenges, including but not limited to natural language processing (NLP), image recognition, recommendation systems, and predictive analytics.
2. Model Optimization and Evaluation: Continuously improve existing models for performance, accuracy, and scalability.
3. Data Preprocessing and Feature Engineering: Collect, clean, and preprocess structured and unstructured data from various sources. Engineer relevant features to improve model performance and interpretability.
4. Collaboration and Communication: Collaborate closely with data scientists, backend engineers, product managers, and stakeholders to align model development with business goals. Communicate technical insights clearly to both technical and non-technical stakeholders.
5. Model Deployment and Monitoring: Deploy models to production using MLOps practices and tools (e.g., MLflow, Docker, Kubernetes). Monitor live model performance, diagnose issues, and implement improvements as needed.
6. Staying Current with AI/ML Advancements: Stay informed of current research, tools, and trends in AI and machine learning. Evaluate and recommend emerging technologies to maintain innovation within the team.
7. Code Reviews and Best Practices: Participate in code reviews to ensure code quality, scalability, and adherence to best practices. Promote knowledge sharing and mentoring within the development team.
Required Skills and Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field
- 4-5 years of experience in machine learning, artificial intelligence, or applied data science roles
- Strong programming skills in Python (preferred) and/or R
- Proficiency in ML libraries and frameworks, including: scikit-learn, XGBoost, LightGBM, TensorFlow or Keras, PyTorch
- Skilled in data preprocessing and feature engineering, using pandas, numpy, sklearn.preprocessing
- Practical experience in deploying ML models into production environments using REST APIs and containers
- Familiarity with version control systems (e.g., Git) and containerization tools (e.g., Docker)
- Experience working with cloud platforms such as AWS, Google Cloud Platform (GCP), or Azure
- Understanding of software development methodologies, especially Agile/Scrum
- Strong analytical thinking, debugging, and problem-solving skills in real-world AI/ML applications
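The preprocessing and feature-engineering step named in the responsibilities can be sketched in a few lines of pandas: impute, scale, and one-hot encode. The frame and column names below are hypothetical examples:

```python
import pandas as pd

# Toy input with a missing numeric value and a categorical column.
df = pd.DataFrame({
    "age": [25, 32, None, 47],
    "plan": ["basic", "pro", "basic", "pro"],
})

df["age"] = df["age"].fillna(df["age"].median())                 # impute
df["age_z"] = (df["age"] - df["age"].mean()) / df["age"].std()   # standardize
features = pd.get_dummies(df, columns=["plan"])                  # one-hot encode

print(features.columns.tolist())
```

In production this sequence would typically live in a fitted sklearn `Pipeline` so that the same statistics (median, mean, std, category set) learned on training data are reapplied at inference time.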
Posted 3 weeks ago
3.0 - 6.0 years
5 - 8 Lacs
Hyderabad, Bengaluru, Delhi / NCR
Work from Office
As a Senior Azure Data Engineer, your responsibilities will include:
- Building scalable data pipelines using Databricks and PySpark
- Transforming raw data into usable business insights
- Integrating Azure services like Blob Storage, Data Lake, and Synapse Analytics
- Deploying and maintaining machine learning models using MLlib or TensorFlow
- Executing large-scale Spark jobs with performance tuning on Spark Pools
- Leveraging Databricks Notebooks and managing workflows with MLflow
Qualifications:
- Bachelor's/Master's in Computer Science, Data Science, or equivalent
- 7+ years in Data Engineering, with 3+ years in Azure Databricks
- Strong hands-on experience in: PySpark, Spark SQL, RDDs, Pandas, NumPy, Delta Lake; Azure ecosystem: Data Lake, Blob Storage, Synapse Analytics
Location: Remote - Bengaluru, Hyderabad, Delhi / NCR, Chennai, Pune, Kolkata, Ahmedabad, Mumbai
Posted 3 weeks ago