2.0 - 6.0 years
0 Lacs
noida, uttar pradesh
On-site
We are looking for an experienced AI/ML Architect to spearhead the design, development, and deployment of cutting-edge AI and machine learning systems. As the ideal candidate, you should possess a strong technical background in Python and data science libraries, profound expertise in AI and ML algorithms, and hands-on experience in crafting scalable AI solutions. This role demands a blend of technical acumen, leadership skills, and innovative thinking to enhance our AI capabilities.

Your responsibilities will include identifying, cleaning, and summarizing complex datasets from various sources, developing Python/PySpark scripts for data processing and transformation, and applying advanced machine learning techniques like Bayesian methods and deep learning algorithms. You will design and fine-tune machine learning models, build efficient data pipelines, and leverage distributed databases and frameworks for large-scale data processing. In addition, you will lead the design and architecture of AI systems, with a focus on Retrieval-Augmented Generation (RAG) techniques and large language models.

Your qualifications should encompass 5-7 years of total experience with 2-3 years in AI/ML, proficiency in Python and data science libraries, hands-on experience with PySpark scripting and AWS services, strong knowledge of Bayesian methods and time series forecasting, and expertise in machine learning algorithms and deep learning frameworks. You should also have experience with structured, unstructured, and semi-structured data, advanced knowledge of distributed databases, and familiarity with RAG systems and large language models for AI outputs. Strong collaboration, leadership, and mentorship skills are essential.

Preferred qualifications include experience with Spark MLlib, SciPy, StatsModels, SAS, and R, a proven track record in developing RAG systems, and the ability to innovate and apply the latest AI techniques to real-world business challenges.
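The RAG focus above can be illustrated with a minimal retrieval sketch. This is a toy: the corpus and query are invented, and plain term overlap stands in for the embedding similarity a production system would use before passing the assembled prompt to an LLM.

```python
# Toy sketch of RAG's retrieval step: score documents against the
# query, keep the top-k, and build a grounded prompt.
def tokenize(text):
    return set(text.lower().split())

def retrieve(query, corpus, k=1):
    """Return the k documents sharing the most terms with the query."""
    q = tokenize(query)
    scored = sorted(corpus, key=lambda doc: len(q & tokenize(doc)), reverse=True)
    return scored[:k]

def build_prompt(query, corpus):
    context = "\n".join(retrieve(query, corpus, k=2))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

corpus = [
    "Delta Lake adds ACID transactions to data lakes.",
    "Bayesian methods quantify uncertainty in model parameters.",
    "RAG grounds large language model outputs in retrieved documents.",
]
prompt = build_prompt("How does RAG ground model outputs?", corpus)
```

In a real system the retrieved context would be embedded vectors fetched from a vector store, but the retrieve-then-prompt shape is the same.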
Join our team at TechAhead, a global digital transformation company known for AI-first product design thinking and bespoke development solutions. With over 14 years of experience and partnerships with Fortune 500 companies, we are committed to driving digital innovation and delivering excellence. At TechAhead, you will be part of a dynamic team that values continuous learning, growth, and crafting tailored solutions for our clients. Together, let's shape the future of digital innovation worldwide!
Posted 3 days ago
6.0 - 12.0 years
0 Lacs
maharashtra
On-site
Automation Anywhere is a leader in AI-powered process automation, utilizing AI technologies to drive productivity and innovation across organizations. The company's Automation Success Platform offers a comprehensive suite of solutions including process discovery, RPA, end-to-end process orchestration, document processing, and analytics, all with a security and governance-first approach. By empowering organizations globally, Automation Anywhere aims to unleash productivity gains, drive innovation, enhance customer service, and accelerate business growth. Guided by the vision to enable the future of work through AI-powered automation, the company is committed to unleashing human potential. Learn more at www.automationanywhere.com.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 6 to 12 years of relevant experience.
- Proven track record as a Solution Architect or Lead, focusing on integrating Generative AI or exposure to Machine Learning.
- Expertise in at least one RPA tool such as Automation Anywhere, UiPath, Blue Prism, or Power Automate, and proficiency in programming languages like Python or Java.

Skills:
- Proficiency in Python or Java for programming and architecture.
- Strong analytical and problem-solving skills to translate business requirements into technical solutions.
- Experience with statistical packages and machine learning libraries (e.g., R, Python scikit-learn, Spark MLlib).
- Familiarity with RDBMS, NoSQL, and cloud platforms like AWS/Azure/GCP.
- Knowledge of ethical considerations and data privacy principles related to Generative AI for responsible integration within RPA solutions.
- Experience in process analysis, technical documentation, and workflow diagramming.
- Designing and implementing scalable, optimized, and secure automation solutions for enterprise-level AI applications.
- Expertise in Generative AI technologies such as RAG, LLMs, and AI agents.
- Advanced Python programming skills with specialization in deep learning frameworks, ML libraries, NLP libraries, and LLM frameworks.

Responsibilities:
- Lead the design and architecture of complex RPA solutions incorporating Generative AI technologies.
- Collaborate with stakeholders to align automation strategies with organizational goals.
- Develop high-level and detailed solution designs meeting scalability, reliability, and security standards.
- Take technical ownership of end-to-end engagements and mentor a team of senior developers.
- Assess the applicability of Generative AI algorithms to optimize automation outcomes.
- Stay updated on emerging technologies, particularly in Generative AI, to evaluate their impact on RPA strategies.
- Demonstrate adaptability, flexibility, and willingness to work from client locations or office environments as needed.

Kindly note that all unsolicited resumes submitted to any @automationanywhere.com email address will not be eligible for an agency fee.
Posted 1 week ago
8.0 - 11.0 years
35 - 37 Lacs
Kolkata, Ahmedabad, Bengaluru
Work from Office
Dear Candidate,

We are looking for a Big Data Developer to build and maintain scalable data processing systems. The ideal candidate will have experience handling large datasets and working with distributed computing frameworks.

Key Responsibilities:
- Design and develop data pipelines using Hadoop, Spark, or Flink.
- Optimize big data applications for performance and reliability.
- Integrate various structured and unstructured data sources.
- Work with data scientists and analysts to prepare datasets.
- Ensure data quality, security, and lineage across platforms.

Required Skills & Qualifications:
- Experience with the Hadoop ecosystem (HDFS, Hive, Pig) and Apache Spark.
- Proficiency in Java, Scala, or Python.
- Familiarity with data ingestion tools (Kafka, Sqoop, NiFi).
- Strong understanding of distributed computing principles.
- Knowledge of cloud-based big data services (e.g., EMR, Dataproc, HDInsight).

Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you.

Kandi Srinivasa
Delivery Manager
Integra Technologies
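The pipeline duties above (ingest varied sources, enforce data quality, prepare datasets) reduce to a pattern that can be sketched without a cluster. The records and column names here are illustrative; in Spark or Flink each stage would become a distributed transformation rather than a Python generator.

```python
# Stage-by-stage ETL sketch: ingest raw records, apply a data-quality
# gate with type casting, then aggregate per user.
raw = [
    {"user": "a", "amount": "10.5"},
    {"user": "b", "amount": "3.0"},
    {"user": "a", "amount": None},   # fails the quality check
    {"user": "b", "amount": "7.0"},
]

def clean(rows):
    """Keep only rows with a usable amount; cast strings to floats."""
    for row in rows:
        if row["amount"] is not None:
            yield {"user": row["user"], "amount": float(row["amount"])}

def aggregate(rows):
    """Sum amounts per user, mirroring a groupBy + sum in Spark."""
    totals = {}
    for row in rows:
        totals[row["user"]] = totals.get(row["user"], 0.0) + row["amount"]
    return totals

totals = aggregate(clean(raw))
```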
Posted 1 week ago
2.0 - 6.0 years
6 - 10 Lacs
Gurugram
Work from Office
We're looking for a Big Data Engineer who can find creative solutions to tough problems. As a Big Data Engineer, you'll create and manage our data infrastructure and tools, including collecting, storing, processing, and analyzing our data and data systems. You know how to work quickly and accurately, using the best solutions to analyze massive data sets, and you know how to get results. You'll also make this data easily accessible across the company and usable in multiple departments.

Skillset Required:
- Bachelor's degree or higher in Computer Science or a related field.
- A solid track record of data management showing flawless execution and attention to detail.
- Strong knowledge of and experience with statistics.
- Programming experience, ideally in Python, Spark, Kafka, or Java, and a willingness to learn new programming languages to meet goals and objectives. Experience in C, Perl, JavaScript, or other programming languages is a plus.
- Knowledge of data cleaning, wrangling, visualization, and reporting, with an understanding of the best, most efficient use of the associated tools and applications to complete these tasks. Experience in MapReduce is a plus.
- Deep knowledge of data mining, machine learning, natural language processing, or information retrieval.
- Experience processing large amounts of structured and unstructured data, including integrating data from multiple sources.
- Experience with machine learning toolkits such as H2O, SparkML, or Mahout.
- A willingness to explore new alternatives or options to solve data mining issues, combining industry best practices, data innovations, and your own experience to get the job done.
- Experience in production support and troubleshooting.
- You find satisfaction in a job well done and thrive on solving head-scratching problems.
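The MapReduce experience mentioned above follows a three-phase model that can be sketched in plain Python: map emits key-value pairs, a shuffle groups them by key, and reduce folds each group. The input lines are invented; a real job would distribute each phase across workers.

```python
# Word count, the canonical MapReduce example, in its three phases.
from itertools import groupby

def map_phase(lines):
    """Emit a (word, 1) pair for every word in every line."""
    for line in lines:
        for word in line.lower().split():
            yield (word, 1)

def shuffle(pairs):
    """Group pairs by key, as the framework's shuffle stage would."""
    return groupby(sorted(pairs), key=lambda kv: kv[0])

def reduce_phase(grouped):
    """Sum the counts within each key's group."""
    return {word: sum(c for _, c in pairs) for word, pairs in grouped}

lines = ["big data big results", "data pipelines"]
counts = reduce_phase(shuffle(map_phase(lines)))
```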
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
hyderabad, telangana
On-site
As a Senior Data Scientist with a focus on Predictive Analytics and expertise in Databricks, your primary responsibilities will involve designing and implementing predictive models for various applications such as forecasting, churn analysis, and fraud detection. You will utilize tools like Python, SQL, Spark MLlib, and Databricks ML to deploy these models effectively.

Your role will also include building end-to-end machine learning pipelines on the Databricks Lakehouse platform, encompassing data ingestion, feature engineering, model training, and deployment. It will be essential to optimize model performance through techniques like hyperparameter tuning, AutoML, and leveraging MLflow for tracking.

Collaboration with engineering teams will be a key aspect of your job to ensure the operationalization of models, both in batch and real-time scenarios, using Databricks Jobs or REST APIs. You will be responsible for implementing Delta Lake to support scalable and ACID-compliant data workflows, as well as enabling CI/CD for machine learning pipelines using Databricks Repos and GitHub Actions. In addition to your technical duties, troubleshooting Spark jobs and resolving issues within the Databricks environment will be part of your routine tasks.

To excel in this role, you should possess 3 to 5 years of experience in predictive analytics, with a strong background in regression, classification, and time-series modeling. Hands-on experience with Databricks Runtime for ML, Spark SQL, and PySpark is crucial for success in this position. Familiarity with tools like MLflow, Feature Store, and Unity Catalog for governance purposes will be advantageous. Industry experience in Life Insurance or Property & Casualty (P&C) is preferred, and holding a certification as a Databricks Certified ML Practitioner would be considered a plus.

Your technical skill set should include proficiency in Python, PySpark, MLflow, and Databricks AutoML. Expertise in predictive modeling techniques such as classification, clustering, regression, time series analysis, and NLP is required. Familiarity with cloud platforms like Azure or AWS, Delta Lake, and Unity Catalog will also be beneficial for this role.
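The hyperparameter-tuning responsibility described above follows a simple loop: try each candidate value, score the resulting model, keep the best. The sketch below uses a toy moving-average forecaster and an invented series; in the role described, the loop body would train a real model and log each run's parameter and metric to MLflow (shown as a comment to keep the sketch stdlib-only).

```python
# Grid search over the window size of a moving-average forecaster,
# scored by mean absolute error on the same toy series.
def moving_average_forecast(series, window):
    """Forecast each point as the mean of the previous `window` points."""
    return [sum(series[i - window:i]) / window for i in range(window, len(series))]

def mae(actual, predicted):
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(predicted)

series = [10, 12, 11, 13, 12, 14, 13, 15, 14, 16]
best = None
for window in (2, 3, 4):                     # the hyperparameter grid
    preds = moving_average_forecast(series, window)
    error = mae(series[window:], preds)
    # with MLflow: mlflow.log_param("window", window); mlflow.log_metric("mae", error)
    if best is None or error < best[1]:
        best = (window, error)
```

AutoML tools automate exactly this search over a much larger space of models and parameters, which is why MLflow run tracking matters: every candidate stays comparable and reproducible.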
Posted 2 weeks ago
7.0 - 12.0 years
15 - 30 Lacs
Pune
Work from Office
Our Global Diabetes Capability Center in Pune is expanding to serve more people living with diabetes globally. Our state-of-the-art facility is dedicated to transforming diabetes management through innovative solutions and technologies that reduce the burden of living with diabetes.

In this role as a Senior Machine Learning Engineer in the Digital Technology team, you will collaborate with data scientists and stakeholders to understand business requirements and translate them into machine learning solutions. This role offers a dynamic opportunity to join Medtronic's Diabetes business. Medtronic has announced its intention to separate the Diabetes division to promote future growth and innovation within the business and reallocate investments and resources across Medtronic, subject to applicable information and consultation requirements. While you will start your employment with Medtronic, upon establishment of SpinCo or the transition of the Diabetes business to another company, your employment may transfer to either SpinCo or the other company, at Medtronic's discretion and subject to any applicable information and consultation requirements in your jurisdiction.

Responsibilities may include the following, and other duties may be assigned:
- Design, develop, and implement machine learning models and algorithms to solve complex business problems.
- Use software design principles to develop production-ready code.
- Collect, preprocess, and analyze large datasets to train and evaluate machine learning models.
- Optimize and fine-tune machine learning models for performance and accuracy.
- Deploy machine learning models into production environments and monitor their performance.
- Develop processes and tools to monitor and analyze model performance and data accuracy.
- Stay up to date with the latest advancements in machine learning and apply them to improve existing models and algorithms.
- Collaborate with cross-functional teams to integrate machine learning solutions into existing systems and workflows.
- Document and communicate machine learning solutions, methodologies, and results to technical and non-technical stakeholders.
- Mentor and provide guidance to junior Machine Learning Engineers.

Required Knowledge and Experience:
- Bachelor's degree from an accredited institution in a technical discipline such as the sciences, technology, engineering, or mathematics.
- 5+ years of industry experience writing production-level, scalable code (e.g., in Python).
- 4+ years of experience with one or more of the following machine learning topics: classification, clustering, optimization, recommendation systems, deep learning.
- 4+ years of industry experience with distributed computing frameworks such as PySpark.
- 4+ years of industry experience with popular ML frameworks such as Spark MLlib, Keras, TensorFlow, PyTorch, Hugging Face Transformers, and other libraries (like scikit-learn, spaCy, gensim, etc.).
- 4+ years of industry experience with major cloud computing services like AWS, Azure, or GCP.
- 1+ years of experience building and scaling Generative AI applications, specifically around frameworks like LangChain, PGVector, Pinecone, and Bedrock. Experience building Agentic AI applications.
- 2+ years of technical leadership leading junior engineers in a product development setting.
Posted 2 weeks ago
5.0 - 10.0 years
10 - 20 Lacs
Noida, Hyderabad, Greater Noida
Work from Office
Streaming Data Technical Skills Requirements:
- Experience: 5+ years.
- Solid hands-on and solution-architecting experience in big data technologies (AWS preferred).
- Hands-on experience with AWS DynamoDB, EKS, Kafka, Kinesis, Glue, and EMR.
- Hands-on experience with a programming language like Scala with Spark.
- Good command of and working experience with Hadoop MapReduce, HDFS, Hive, HBase, and/or NoSQL databases.
- Hands-on working experience with any of the data engineering/analytics platforms (Hortonworks, Cloudera, MapR, AWS), AWS preferred.
- Hands-on experience with data ingestion tools: Apache NiFi, Apache Airflow, Sqoop, and Oozie.
- Hands-on working experience with data processing at scale using event-driven systems and message queues (Kafka, Flink, Spark Streaming).
- Hands-on working experience with AWS services like EMR, Kinesis, S3, CloudFormation, Glue, API Gateway, and Lake Formation.
- Hands-on working experience with AWS Athena.
- Experience building data pipelines for structured/unstructured, real-time/batch, and synchronous/asynchronous events using MQ, Kafka, and stream processing.

Mandatory Skills: Spark, Scala, AWS, Hadoop
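Event-driven processing at scale, as listed above, is built on a small core idea: assign each event to a window and aggregate within it. A tumbling-window count can be sketched in a few lines; the event shape, payloads, and window size are illustrative, and Kafka, Flink, or Spark Streaming would apply the same logic continuously over an unbounded stream.

```python
# Tumbling-window event counts: each event falls into exactly one
# fixed-size window, keyed by the window's start timestamp.
from collections import defaultdict

def tumbling_window_counts(events, window_seconds):
    """Count events per fixed window of `window_seconds`."""
    counts = defaultdict(int)
    for ts, _payload in events:
        window_start = ts - (ts % window_seconds)
        counts[window_start] += 1
    return dict(counts)

events = [(0, "click"), (3, "click"), (5, "view"), (9, "click"), (12, "view")]
counts = tumbling_window_counts(events, window_seconds=5)
```

Real streaming engines add the hard parts this sketch omits: late and out-of-order events, watermarks, and fault-tolerant state.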
Posted 3 weeks ago
8.0 - 11.0 years
35 - 37 Lacs
Kolkata, Ahmedabad, Bengaluru
Work from Office
Dear Candidate,

We are hiring a Flask Developer to create lightweight and high-performance web services using Python.

Key Responsibilities:
- Develop web APIs using Flask and deploy them on cloud or containers.
- Use SQLAlchemy or MongoEngine for data access.
- Write modular blueprints and configure middleware.
- Perform request validation and error handling.
- Work on REST/GraphQL integration with frontend teams.

Required Skills & Qualifications:
- Expertise in Flask, Python, and Jinja2.
- Familiarity with Gunicorn, Docker, and PostgreSQL.
- Understanding of JWT, OAuth, and API security.
- Bonus: Experience with FastAPI or Flask-SocketIO.

Soft Skills:
- Strong troubleshooting and problem-solving skills.
- Ability to work independently and in a team.
- Excellent communication and documentation skills.

Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you.

Kandi Srinivasa
Delivery Manager
Integra Technologies
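The JWT and API-security items above rest on one primitive: a signature the server can verify on every request. Below is a stdlib-only sketch of that idea, an HS256-style HMAC over a base64-encoded payload; the secret and payload are illustrative, and a real Flask service would use a maintained JWT library rather than hand-rolled tokens.

```python
# Sign a payload and verify it later, the core of token-based API auth.
import base64
import hashlib
import hmac
import json

SECRET = b"demo-secret"   # illustrative; load from configuration in practice

def sign(payload):
    """Encode the payload and append an HMAC-SHA256 tag."""
    body = base64.urlsafe_b64encode(json.dumps(payload, sort_keys=True).encode())
    tag = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return body.decode() + "." + tag

def verify(token):
    """Return the payload if the tag checks out, else None (reject with 401)."""
    body, _, tag = token.partition(".")
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return None
    return json.loads(base64.urlsafe_b64decode(body))

token = sign({"user": "alice"})
```

In a Flask view this check would live in a decorator or middleware, pulling the token from the `Authorization` header before the handler runs.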
Posted 4 weeks ago
5.0 - 10.0 years
7 - 12 Lacs
Hyderabad, Bengaluru
Work from Office
Job Summary:
Synechron is seeking an experienced Big Data Developer with strong expertise in Spark, Scala, and Python to lead and contribute to large-scale data projects. The role involves designing, developing, and implementing robust data solutions that leverage emerging technologies to enhance business insights and operational efficiency. The successful candidate will play a key role in driving innovation, mentoring team members, and ensuring the delivery of high-quality data products aligned with organizational objectives.

Software Requirements:
- Required: Apache Spark (latest stable version); Scala (version 2.12 or higher); Python (version 3.6 or higher); big data tools and frameworks supporting Spark and Scala.
- Preferred: cloud platforms such as AWS, Azure, or GCP for data deployment; data processing or orchestration tools like Kafka, Hadoop, or Airflow; data visualization tools for data insights.

Overall Responsibilities:
- Lead the development and implementation of data pipelines and solutions using Spark, Scala, and Python.
- Collaborate with business and technology teams to understand data requirements and translate them into scalable solutions.
- Mentor and guide junior team members on best practices in big data development.
- Evaluate and recommend new technologies and tools to improve data processing and quality.
- Stay informed about industry trends and emerging technologies relevant to big data and analytics.
- Ensure timely delivery of data projects with high standards of quality, performance, and security.
- Lead technical reviews and code reviews, and provide input to improve overall development standards and practices.
- Contribute to architecture design discussions and assist in establishing data governance standards.

Technical Skills (by Category):
- Programming languages (essential): Spark (Scala), Python. Preferred: knowledge of Java or other JVM languages.
- Data management and databases: experience with distributed data storage solutions (HDFS, S3, etc.); familiarity with NoSQL databases (e.g., Cassandra, HBase) and relational databases for data integration.
- Cloud technologies (preferred): cloud platforms (AWS, Azure, GCP) for data processing, storage, and deployment.
- Frameworks and libraries: Spark MLlib, Spark SQL, Spark Streaming; data processing libraries in Python (pandas, PySpark).
- Development tools and methodologies: version control (Git, Bitbucket); agile methodologies (Scrum, Kanban); data pipeline orchestration tools (Apache Airflow, NiFi).
- Security and compliance: understanding of data security best practices and data privacy regulations.

Experience Requirements:
- 5 to 10 years of hands-on experience in big data development and architecture.
- Proven experience in designing and developing large-scale data pipelines using Spark, Scala, and Python.
- Demonstrated ability to lead technical projects and mentor team members.
- Experience working with cross-functional teams including data analysts, data scientists, and business stakeholders.
- Track record of delivering scalable, efficient, and secure data solutions in complex environments.

Day-to-Day Activities:
- Develop, test, and optimize scalable data pipelines using Spark, Scala, and Python.
- Collaborate with data engineers, analysts, and stakeholders to gather requirements and translate them into technical solutions.
- Lead code reviews, mentor junior team members, and enforce coding standards.
- Participate in architecture design and recommend best practices in big data development.
- Monitor data workflow performance and troubleshoot issues to ensure data quality and reliability.
- Stay updated on industry trends and evaluate new tools and frameworks for potential implementation.
- Document technical designs, data flows, and implementation procedures.
- Contribute to continuous improvement initiatives to optimize data processing workflows.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- Relevant certifications in cloud platforms, big data, or programming languages are advantageous.
- Continuous learning on innovative data technologies and frameworks.

Professional Competencies:
- Strong analytical and problem-solving skills with a focus on scalable data solutions.
- Leadership qualities with the ability to guide and mentor team members.
- Excellent communication skills to articulate technical concepts to diverse audiences.
- Ability to work collaboratively in cross-functional teams and fast-paced environments.
- Adaptability to evolving technologies and industry trends.
- Strong organizational skills for managing multiple projects and priorities.
Posted 1 month ago
5.0 - 10.0 years
4 - 8 Lacs
Noida
Work from Office
We are looking for a skilled Senior Azure Data Engineer with 5 to 10 years of experience to design and implement scalable data pipelines using Azure technologies, driving data transformation, analytics, and machine learning. The ideal candidate will have a strong background in data engineering and proficiency in Python, PySpark, and Spark Pools.

Roles and Responsibilities:
- Design and implement scalable Databricks data pipelines using PySpark.
- Transform raw data into actionable insights through data analysis and machine learning.
- Build, deploy, and maintain machine learning models using MLlib or TensorFlow.
- Optimize cloud data integration from Azure Blob Storage, Data Lake, and SQL/NoSQL sources.
- Execute large-scale data processing using Spark Pools and fine-tune configurations for efficiency.
- Collaborate with cross-functional teams to identify business requirements and develop solutions.

Job Requirements:
- Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
- Minimum 5 years of experience in data engineering, with at least 3 years specializing in Azure Databricks, PySpark, and Spark Pools.
- Proficiency in Python, PySpark, Pandas, NumPy, SciPy, Spark SQL, DataFrames, RDDs, Delta Lake, Databricks Notebooks, and MLflow.
- Hands-on experience with Azure Data Lake, Blob Storage, Synapse Analytics, and other relevant technologies.
- Strong understanding of data modeling, data warehousing, and ETL processes.
- Experience with agile development methodologies and version control systems.
Posted 1 month ago
7.0 - 9.0 years
5 - 8 Lacs
Chennai, Bengaluru, Telangana
Work from Office
Job Details:
- Skill: Node JS
- Notice Period: Immediate joiners or within 30 days
- Employee Type: Full-time

Responsibilities:
- Design and build advanced applications for the Android platform with Java.
- Collaborate with cross-functional teams to define, design, and ship new features.
- Work with outside data sources and APIs.
- Unit-test code for robustness, including edge cases, usability, and general reliability.
- Work on bug fixing and improving application performance.
- Continuously discover, evaluate, and implement new technologies to maximize development efficiency.
- Translate designs and wireframes into high-quality code.
- Understand business requirements and translate them into technical requirements.
- Design, build, and maintain high-performance, reusable, and reliable Java code.
Posted 1 month ago
5.0 - 9.0 years
22 - 27 Lacs
Bengaluru
Work from Office
Project Description:
Luxoft, a DXC Technology company, is an established company focusing on consulting and implementation of complex projects in the financial industry. At the interface between technology and business, we convince with our know-how, well-founded methodology, and pleasure in success. As a reliable partner to our renowned customers, we support them in planning, designing, and implementing the desired innovations. Together with the customer, we deliver top performance! For one of our clients in the insurance segment, we are searching for a Senior Data Scientist with a Databricks and predictive analytics focus.

Responsibilities:
- Design and deploy predictive models (e.g., forecasting, churn analysis, fraud detection) using Python/SQL, Spark MLlib, and Databricks ML.
- Build end-to-end ML pipelines (data ingestion → feature engineering → model training → deployment) on the Databricks Lakehouse.
- Optimize model performance via hyperparameter tuning, AutoML, and MLflow tracking.
- Collaborate with engineering teams to operationalize models (batch/real-time) using Databricks Jobs or REST APIs.
- Implement Delta Lake for scalable, ACID-compliant data workflows.
- Enable CI/CD for ML pipelines using Databricks Repos and GitHub Actions.
- Troubleshoot issues in Spark jobs and the Databricks environment.

The client is in the USA. The candidate should be able to work until 11:00 am EST to overlap a few hours with the client and attend meetings.

Skills (Must Have):
- 5+ years in predictive analytics, with expertise in regression, classification, and time-series modeling.
- Hands-on experience with Databricks Runtime for ML, Spark SQL, and PySpark.
- Familiarity with MLflow, Feature Store, and Unity Catalog for governance.
- Industry experience in Life Insurance or P&C.
- Python, PySpark, MLflow, Databricks AutoML.
- Predictive modelling (classification, clustering, regression, time series, and NLP).
- Cloud platform (Azure/AWS), Delta Lake, Unity Catalog.

Nice to Have:
- Certifications: Databricks Certified ML Practitioner.

Other Languages: English C1 (Advanced).
Seniority: Senior
Posted 1 month ago
4.0 - 6.0 years
7 - 11 Lacs
Hyderabad, Chennai
Work from Office
Job Title: Data Scientist
Location State: Tamil Nadu, Telangana
Location City: Hyderabad, Chennai
Experience Required: 4 to 6 Year(s)
CTC Range: 7 to 11 LPA
Shift: Day Shift
Work Mode: Onsite
Position Type: C2H
Openings: 2
Company Name: VARITE INDIA PRIVATE LIMITED

About The Client:
The client is an Indian multinational technology company specializing in information technology services and consulting. Headquartered in Mumbai, it is a part of the Tata Group and operates in 150 locations across 46 countries.

About The Job:
Requirements:
- 5+ years in predictive analytics, with expertise in regression, classification, and time-series modeling.
- Hands-on experience with Databricks Runtime for ML, Spark SQL, and PySpark.
- Familiarity with MLflow, Feature Store, and Unity Catalog for governance.
- Industry experience in Life Insurance or P&C.

Skills:
- Python, PySpark, MLflow, Databricks AutoML.
- Predictive modelling (classification, clustering, regression, time series, and NLP).
- Cloud platform (Azure/AWS), Delta Lake, Unity Catalog.
- Certifications: Databricks Certified ML Practitioner (optional).

Essential Job Functions:
- Design and deploy predictive models (e.g., forecasting, churn analysis, fraud detection) using Python/SQL, Spark MLlib, and Databricks ML.
- Build end-to-end ML pipelines (data ingestion, feature engineering, model training, deployment) on Databricks Lakehouse.
- Optimize model performance via hyperparameter tuning, AutoML, and MLflow tracking.
- Collaborate with engineering teams to operationalize models (batch/real-time) using Databricks Jobs or REST APIs.
- Implement Delta Lake for scalable, ACID-compliant data workflows.
- Enable CI/CD for ML pipelines using Databricks Repos and GitHub Actions.
- Troubleshoot issues in Spark jobs and the Databricks environment.

Qualifications:
- Skill Required: Data Science, Python for Data Science
- Experience Range in Required Skills: 4-6 Years

How to Apply: Interested candidates are invited to submit their resume using the apply online button on this job post.

About VARITE: VARITE is a global staffing and IT consulting company providing technical consulting and team augmentation services to Fortune 500 companies in the USA, UK, Canada, and India. VARITE is currently a primary and direct vendor to leading corporations in the verticals of Networking, Cloud Infrastructure, Hardware and Software, Digital Marketing and Media Solutions, Clinical Diagnostics, Utilities, Gaming and Entertainment, and Financial Services.

Equal Opportunity Employer: VARITE is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. We do not discriminate on the basis of race, color, religion, sex, sexual orientation, gender identity or expression, national origin, age, marital status, veteran status, or disability status.

Unlock Rewards: Refer Candidates and Earn. If you're not available or interested in this opportunity, please pass this along to anyone in your network who might be a good fit and interested in our open positions. VARITE offers a Candidate Referral program, where you'll receive a one-time referral bonus based on the following scale if the referred candidate completes a three-month assignment with VARITE.

Experience of Referred Candidate / Referral Bonus:
- 0 - 2 Yrs.: INR 5,000
- 2 - 6 Yrs.: INR 7,500
- 6+ Yrs.: INR 10,000
Posted 1 month ago
8.0 - 11.0 years
45 - 50 Lacs
Noida, Kolkata, Chennai
Work from Office
Dear Candidate,

We are hiring a Julia Developer to build computational and scientific applications requiring speed and mathematical accuracy. Ideal for domains like finance, engineering, or AI research.

Key Responsibilities:
- Develop applications and models using the Julia programming language.
- Optimize for performance, parallelism, and numerical accuracy.
- Integrate with Python or C++ libraries where needed.
- Collaborate with data scientists and engineers on simulations and modeling.
- Maintain well-documented and reusable codebases.

Required Skills & Qualifications:
- Proficiency in Julia, with knowledge of multiple dispatch and the type system.
- Experience in numerical computing or scientific research.
- Familiarity with Plots.jl, Flux.jl, or DataFrames.jl.
- Understanding of Python, R, or MATLAB is a plus.

Soft Skills:
- Strong troubleshooting and problem-solving skills.
- Ability to work independently and in a team.
- Excellent communication and documentation skills.

Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you.

Kandi Srinivasa
Delivery Manager
Integra Technologies
Posted 1 month ago
8.0 - 11.0 years
35 - 37 Lacs
Kolkata, Ahmedabad, Bengaluru
Work from Office
Dear Candidate,

We are hiring a Data Engineer to build and maintain data pipelines for our analytics platform. Perfect for engineers focused on data processing and scalability.

Key Responsibilities:
- Design and implement ETL processes.
- Manage data warehouses and ensure data quality.
- Collaborate with data scientists to provide necessary data.
- Optimize data workflows for performance.

Required Skills & Qualifications:
- Proficiency in SQL and Python.
- Experience with data pipeline tools like Apache Airflow.
- Familiarity with big data technologies (Spark, Hadoop).
- Bonus: Knowledge of cloud data services (AWS Redshift, Google BigQuery).

Soft Skills:
- Strong troubleshooting and problem-solving skills.
- Ability to work independently and in a team.
- Excellent communication and documentation skills.

Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you.

Kandi Srinivasa
Delivery Manager
Integra Technologies
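The orchestration idea behind tools like Apache Airflow, named above, is that tasks declare their upstream dependencies and the scheduler runs them in topological order. A toy scheduler makes the concept concrete; the task names and dependency graph here are invented, and this sketch omits everything Airflow actually adds (retries, scheduling intervals, cycle detection, distributed execution).

```python
# A minimal DAG runner: each task runs only after its dependencies.
def run_dag(tasks, deps):
    """tasks: name -> callable; deps: name -> list of upstream names."""
    done, order = set(), []
    def run(name):
        if name in done:
            return
        for upstream in deps.get(name, []):   # recurse into dependencies first
            run(upstream)
        tasks[name]()
        done.add(name)
        order.append(name)
    for name in tasks:
        run(name)
    return order

log = []
tasks = {
    "load": lambda: log.append("load"),
    "extract": lambda: log.append("extract"),
    "transform": lambda: log.append("transform"),
}
deps = {"transform": ["extract"], "load": ["transform"]}
order = run_dag(tasks, deps)
```

Even though `load` is declared first, it runs last because the dependency chain extract → transform → load governs execution order.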
Posted 1 month ago
3.0 - 8.0 years
10 - 15 Lacs
Noida
Work from Office
Job Summary:
Data Scientist with good hands-on experience of 3+ years in developing state-of-the-art, scalable machine learning models and operationalizing them, leveraging off-the-shelf workbench production.

Job Responsibilities:
1. Hands-on experience in Python data science and math packages such as NumPy, Pandas, Sklearn, Seaborn, PyCaret, and Matplotlib.
2. Proficiency in Python and common machine learning frameworks (TensorFlow, NLTK, Stanford NLP, PyTorch, LingPipe, Caffe, Keras, SparkML, OpenAI, etc.).
3. Experience working in large teams and using collaboration tools like Git, Jira, and Confluence.
4. Good understanding of any of the cloud platforms: AWS, Azure, or GCP.
5. Understanding of the commercial pharma landscape and patient data/analytics would be a huge plus.
6. Should have a willingness to learn, accept a challenging environment, and deliver results within timelines; should be self-motivated and self-driven to find solutions to problems.
7. Should be able to mentor and guide mid- to large-sized teams.

Job Requirements:
1. Strong experience with Spark and Scala/Python/Java.
2. Strong proficiency in building, training, and evaluating state-of-the-art machine learning models and deploying them.
3. Proficiency in statistical and probabilistic methods such as SVM, decision trees, bagging and boosting techniques, and clustering.
4. Proficiency in core NLP techniques like text classification, Named Entity Recognition (NER), topic modeling, sentiment analysis, etc. Understanding of Generative AI / Large Language Models / Transformers would be a plus.
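Text classification, the first of the core NLP techniques listed above, can be sketched with a tiny Naive Bayes classifier in pure Python. The corpus, labels, and query are invented stand-ins; in practice Sklearn's `MultinomialNB` or an equivalent would be used rather than hand-rolled counting.

```python
# Naive Bayes text classification: score each label by its prior
# plus the smoothed log-likelihood of the words in the input.
import math
from collections import Counter, defaultdict

def train(examples):
    """examples: list of (text, label) pairs; returns per-label word counts."""
    word_counts, label_counts = defaultdict(Counter), Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(text.lower().split())
    return word_counts, label_counts

def predict(text, word_counts, label_counts):
    vocab = {w for counts in word_counts.values() for w in counts}
    total_examples = sum(label_counts.values())
    scores = {}
    for lbl, n in label_counts.items():
        total_words = sum(word_counts[lbl].values())
        score = math.log(n / total_examples)          # class prior
        for word in text.lower().split():
            # Laplace smoothing: unseen words get a small non-zero probability
            score += math.log((word_counts[lbl][word] + 1) / (total_words + len(vocab)))
        scores[lbl] = score
    return max(scores, key=scores.get)

examples = [
    ("great effective drug trial", "positive"),
    ("patients improved on treatment", "positive"),
    ("severe adverse side effects", "negative"),
    ("trial halted after complications", "negative"),
]
word_counts, label_counts = train(examples)
label = predict("treatment showed great results", word_counts, label_counts)
```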
Posted 1 month ago
6.0 - 11.0 years
14 - 19 Lacs
Noida
Work from Office
Job Summary
Data Scientist with 6+ years of good hands-on experience in developing state-of-the-art, scalable Machine Learning models and operationalizing them, leveraging off-the-shelf workbenches for production.

Job Responsibilities
- Hands-on experience with Python data-science and math packages such as NumPy, Pandas, scikit-learn, Seaborn, PyCaret, and Matplotlib
- Proficiency in Python and common Machine Learning frameworks (TensorFlow, NLTK, Stanford NLP, PyTorch, LingPipe, Caffe, Keras, SparkML, OpenAI, etc.)
- Experience working in large teams and using collaboration tools such as Git, Jira, and Confluence
- Good understanding of at least one cloud platform: AWS, Azure, or GCP
- Understanding of the commercial pharma landscape and patient data/analytics would be a huge plus
- Willingness to learn, comfort in a challenging environment, and confidence in delivering results within timelines; self-motivated and self-driven to find solutions to problems
- Able to mentor and guide mid-to-large-sized teams

Job Requirements
- Strong experience with Spark using Scala, Python, or Java
- Strong proficiency in building, training, evaluating, and deploying state-of-the-art machine learning models
- Proficiency in statistical and probabilistic methods such as SVM, decision trees, bagging and boosting techniques, and clustering
- Proficiency in core NLP techniques such as text classification, named entity recognition (NER), topic modeling, and sentiment analysis; understanding of Generative AI / Large Language Models / Transformers would be a plus
Posted 1 month ago
0.0 - 5.0 years
0 - 5 Lacs
Mumbai, Maharashtra, India
On-site
Curiosity and a constant quest for knowledge serve as the foundation of success in IBM Consulting. In your role, you'll be encouraged to challenge the norm, investigate ideas outside of your role, and come up with creative solutions resulting in ground-breaking impact for a wide network of clients. Our culture of evolution and empathy centers on long-term career growth and development opportunities in an environment that embraces your unique skills and experience.

Your role and responsibilities
Role Overview: We are hiring an ML Engineer with experience in Cloudera ML to support end-to-end model development, deployment, and monitoring on the CDP platform.

Key Responsibilities:
- Develop and deploy models using CML workspaces
- Build CI/CD pipelines for the ML lifecycle
- Integrate with governance and monitoring tools
- Enable secure model serving via REST APIs

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Experience with Cloudera ML, Spark MLlib, or scikit-learn
- ML pipeline automation (MLflow, Airflow, or equivalent)
- Model governance, lineage, and versioning
- API exposure for real-time inference
Posted 1 month ago
3.0 - 6.0 years
7 - 12 Lacs
Mumbai
Work from Office
Role Overview: We are hiring an ML Engineer with experience in Cloudera ML to support end-to-end model development, deployment, and monitoring on the CDP platform.

Key Responsibilities:
- Develop and deploy models using CML workspaces
- Build CI/CD pipelines for the ML lifecycle
- Integrate with governance and monitoring tools
- Enable secure model serving via REST APIs

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Experience with Cloudera ML, Spark MLlib, or scikit-learn
- ML pipeline automation (MLflow, Airflow, or equivalent)
- Model governance, lineage, and versioning
- API exposure for real-time inference
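"Secure model serving via REST APIs" can be sketched with nothing but the Python standard library. The scoring function, token, and port below are placeholders standing in for a trained model and real authentication (Cloudera ML provides its own managed serving endpoints; this only shows the shape of the handler):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

API_TOKEN = "secret-token"  # placeholder; fetch from a secret store in practice

def score(features):
    """Toy stand-in for a trained model: a fixed linear score."""
    weights = [0.4, -0.2, 0.1]
    return sum(w * x for w, x in zip(weights, features))

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Reject requests without the expected bearer token.
        if self.headers.get("Authorization") != f"Bearer {API_TOKEN}":
            self.send_response(401)
            self.end_headers()
            return
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        result = {"prediction": score(body["features"])}
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(result).encode())

# To serve for real, uncomment (blocks the current thread):
# HTTPServer(("localhost", 8080), PredictHandler).serve_forever()
```

In production the same division of labor holds: authentication and transport live in the web layer, while the model sits behind a pure `score`-style function that is easy to version and test.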
Posted 1 month ago
7.0 - 12.0 years
15 - 22 Lacs
Gurugram
Work from Office
Data Scientist: 7+ years of experience in AI/ML and Big Data. Proficient in Python, SQL, TensorFlow, PyTorch, Scikit-learn, and Spark MLlib. Cloud proficiency (GCP, AWS/Azure). Strong analytical and communication skills. Location: Gurgaon. Salary: 22 LPA. Immediate joiners preferred.
Posted 2 months ago
7.0 - 10.0 years
15 - 25 Lacs
Gurugram
Hybrid
- 7+ years of hands-on experience in AI/ML and Big Data projects
- Strong proficiency in Python, SQL, TensorFlow, PyTorch, Scikit-learn, and Spark MLlib
- Proven experience designing and deploying scalable AI/ML models in cloud environments (preferably GCP; AWS/Azure is a plus)
- Solid understanding of data modeling, data warehousing, and ETL in Big Data contexts
- Ability to apply statistical and machine learning techniques to derive actionable business insights
- Skilled in building and optimizing Big Data pipelines and integrating ML models into production
- Strong communication skills, with the ability to present complex findings to both technical and non-technical audiences
- Experience with data visualization tools (e.g., Tableau, Power BI) is a plus
- Bachelor's or Master's degree in Computer Science, Data Science, Engineering, Statistics, Mathematics, or a related quantitative field
Posted 2 months ago
8.0 - 13.0 years
25 - 30 Lacs
Pune
Work from Office
About The Role
We are hiring a results-oriented Senior Machine Learning Engineer to join our growing team in Pune on a hybrid schedule. Reporting to the Director of Machine Learning, you will partner with Product and Engineering teams to both solve problems and identify new opportunities for the business. The ideal candidate will apply quantitative analysis, modeling, and data mining to help drive informed product decisions for PubMatic, and get things done.

What You'll Do
- Perform deep-dive analysis to understand and optimize the key product KPIs
- Apply statistics, modeling, and machine learning to improve the efficiency of systems and relevance algorithms across our business application products
- Conduct data analysis to make product recommendations and design A/B experiments
- Partner with Product and Engineering teams to solve problems and identify trends and opportunities
- Collaborate with cross-functional stakeholders to understand their business needs, and formulate and complete end-to-end analyses that include data gathering, analysis, ongoing scaled deliverables, and presentations

We'd Love for You to Have
- Five-plus years of hands-on experience designing Machine Learning models to solve business problems with statistical packages such as R, MATLAB, Python (NumPy, Scikit-learn, Pandas), or MLlib
- Experience articulating product questions and using statistics to arrive at an answer
- Experience scripting in SQL, extracting large data sets, and designing ETL flows
- Work experience in an interdisciplinary / cross-functional field
- Deep interest and aptitude in data, metrics, analysis, and trends, and applied knowledge of measurement, statistics, and program evaluation
- Distinctive problem-solving skills and impeccable business judgment
- Capability to translate analysis results into business recommendations
- A bachelor's degree in engineering (CS / IT) or an equivalent degree from a well-known institute/university

Additional Information
Return to Office: PubMatic employees throughout the globe have returned to our offices via a hybrid work schedule (3 days in office and 2 days working remotely) that is intended to maximize collaboration, innovation, and productivity among teams and across functions.

Benefits: Our benefits package includes the best of what leading organizations provide, such as paternity/maternity leave, healthcare insurance, and broadband reimbursement. And when we're in the office, we all benefit from a kitchen loaded with healthy snacks and drinks, catered lunches, and much more!

Diversity and Inclusion: PubMatic is proud to be an equal opportunity employer; we don't just value diversity, we promote and celebrate it. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.

About PubMatic
PubMatic is one of the world's leading scaled digital advertising platforms, offering more transparent advertising solutions to publishers, media buyers, commerce companies, and data owners, allowing them to harness the power and potential of the open internet to drive better business outcomes. Founded in 2006 with the vision that data-driven decisioning would be the future of digital advertising, we enable content creators to run a more profitable advertising business, which in turn allows them to invest back into the multi-screen and multi-format content that consumers demand.
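The A/B-experiment work this role describes usually ends in a significance test. A two-proportion z-test fits in a few lines of stdlib Python (the conversion counts below are made up for illustration):

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for the difference between two conversion rates (pooled variance)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical experiment: control converts 200/5000, variant 260/5000.
z = two_proportion_z(200, 5000, 260, 5000)
print(round(z, 2))  # about 2.86; beyond 1.96, so significant at the 5% level
```

With equal sample sizes of 5,000 per arm, lifting conversion from 4.0% to 5.2% clears the two-sided 5% threshold; in practice the sample size would be fixed in advance from a power calculation rather than eyeballed.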
Posted 2 months ago
5.0 - 9.0 years
7 - 11 Lacs
Pune
Work from Office
About The Role
We are hiring a results-oriented Principal Machine Learning Engineer to join our growing team in Pune on a hybrid schedule. Reporting to the Director of Machine Learning, you will partner with Product and Engineering teams to both solve problems and identify new opportunities for the business. The ideal candidate will apply quantitative analysis, modeling, and data mining to help drive informed product decisions for PubMatic, and get things done.

What You'll Do
- Perform deep-dive analysis to understand and optimize the key product KPIs
- Apply statistics, modeling, and machine learning to improve the efficiency of systems and relevance algorithms across our business application products
- Conduct data analysis to make product recommendations and design A/B experiments
- Partner with Product and Engineering teams to solve problems and identify trends and opportunities
- Collaborate with cross-functional stakeholders to understand their business needs, and formulate and complete end-to-end analyses that include data gathering, analysis, ongoing scaled deliverables, and presentations

We'd Love for You to Have
- Seven-plus years of hands-on experience designing Machine Learning models to solve business problems with statistical packages such as R, MATLAB, Python (NumPy, Scikit-learn, Pandas), or MLlib
- Proven ability to inspire, mentor, and develop team members to deliver value consistently
- Experience articulating product questions and using statistics to arrive at an answer
- Experience scripting in SQL, extracting large data sets, and designing ETL flows
- Work experience in an interdisciplinary / cross-functional field
- Deep interest and aptitude in data, metrics, analysis, and trends, and applied knowledge of measurement, statistics, and program evaluation
- Distinctive problem-solving skills and impeccable business judgment
- Capability to translate analysis results into business recommendations
- A bachelor's degree in engineering (CS / IT) or an equivalent degree from a well-known institute/university

Additional Information
Return to Office: PubMatic employees throughout the globe have returned to our offices via a hybrid work schedule (3 days in office and 2 days working remotely) that is intended to maximize collaboration, innovation, and productivity among teams and across functions.

Benefits: Our benefits package includes the best of what leading organizations provide, such as paternity/maternity leave, healthcare insurance, and broadband reimbursement. And when we're in the office, we all benefit from a kitchen loaded with healthy snacks and drinks, catered lunches, and much more!

Diversity and Inclusion: PubMatic is proud to be an equal opportunity employer; we don't just value diversity, we promote and celebrate it. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.

About PubMatic
PubMatic is one of the world's leading scaled digital advertising platforms, offering more transparent advertising solutions to publishers, media buyers, commerce companies, and data owners, allowing them to harness the power and potential of the open internet to drive better business outcomes. Founded in 2006 with the vision that data-driven decisioning would be the future of digital advertising, we enable content creators to run a more profitable advertising business, which in turn allows them to invest back into the multi-screen and multi-format content that consumers demand.
Posted 2 months ago
8 - 12 years
17 - 22 Lacs
Mumbai, Hyderabad
Work from Office
Principal Data Scientist - NAV02CM
Company: Worley
Primary Location: IND-MM-Navi Mumbai
Other Locations: IND-KR-Bangalore, IND-MM-Mumbai, IND-MM-Pune, IND-TN-Chennai, IND-GJ-Vadodara, IND-AP-Hyderabad, IND-WB-Kolkata
Job: Digital Platforms & Data Science
Schedule: Full-time
Employment Type: Employee
Job Level: Experienced
Job Posting: May 8, 2025
Unposting Date: Jun 7, 2025
Reporting Manager Title: Head of Data Intelligence

Building on our past. Ready for the future.
Worley is a global professional services company of energy, chemicals and resources experts. We partner with customers to deliver projects and create value over the life of their assets. We're bridging two worlds, moving towards more sustainable energy sources while helping to provide the energy, chemicals, and resources needed now.

The Role
As a Data Science Lead with Worley, you will work closely with our existing team to deliver projects for our clients while continuing to develop your skills and experience.
- Conceptualise, build, and manage an AI/ML platform (with a focus on unstructured data) by evaluating and selecting best-in-industry AI/ML tools and frameworks
- Drive and take ownership of developing cognitive solutions for internal stakeholders and external customers
- Conduct research in areas such as explainable AI, image segmentation, 3D object detection, and statistical methods
- Evaluate not only algorithms and models but also the tools and technologies available in the market, to maximize organizational spend
- Utilize existing frameworks, standards, and patterns to create the architectural foundation and services necessary for AI/ML applications that scale from multi-user to enterprise class
- Analyse marketplace trends (economic, social, cultural, and technological) to identify opportunities and create value propositions
- Offer a global perspective in stakeholder discussions and when shaping solutions/recommendations

IT Skills & Experience
- Thorough understanding of the complete AI/ML project life cycle, in order to establish processes and provide guidance and expert support to the team
- Expert knowledge of emerging technologies in Deep Learning and Reinforcement Learning
- Knowledge of MLOps processes for efficient management of AI/ML projects
- Must have led project execution with other data scientists/engineers on large and complex data sets
- Understanding of machine learning algorithms such as k-NN, GBM, Neural Networks, Naive Bayes, SVM, and Decision Forests
- Experience with AI/ML components such as JupyterHub, Zeppelin Notebook, Azure ML Studio, Spark MLlib, TensorFlow, Keras, PyTorch, and scikit-learn
- Strong knowledge of deep learning, with special focus on CNN/R-CNN/LSTM/Encoder/Transformer architectures
- Hands-on experience with large networks such as Inception-ResNet and ResNeXt-50
- Demonstrated capability using RNNs for text and speech data, and with generative models
- Working knowledge of NoSQL (GraphX/Neo4j), document, columnar, and in-memory database models
- Working knowledge of ETL tools and techniques, such as Talend, SAP BI Platform/SSIS, or MapReduce
- Experience building KPI/storytelling dashboards on visualization tools such as Tableau or Zoomdata

People Skills
Professional and open communication with all internal and external interfaces. Ability to communicate clearly and concisely, and a flexible mindset to handle a quickly changing culture. Strong analytical skills.

Industry Specific Experience
10-18 years of experience in AI/ML project execution and AI/ML research

Education Qualifications, Accreditation, Training
Master's or Doctorate degree in Computer Science Engineering / Information Technology / Artificial Intelligence

Moving forward together
We're committed to building a diverse, inclusive, and respectful workplace where everyone feels they belong, can bring themselves, and are heard. We provide equal employment opportunities to all qualified applicants and employees without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status, or any other basis as protected by law.
We want our people to be energized and empowered to drive sustainable impact. So, our focus is on a values-inspired culture that unlocks brilliance through belonging, connection, and innovation. And we're not just talking about it; we're doing it. We're reskilling our people, leveraging transferable skills, and supporting the transition of our workforce to become experts in today's low-carbon energy infrastructure and technology.
Whatever your ambition, there's a path for you here. And there's no barrier to your potential career success. Join us to broaden your horizons, explore diverse opportunities, and be part of delivering sustainable change.
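Of the classical algorithms this listing names, k-NN is the simplest to sketch from scratch: classify a query point by majority vote among its k nearest training points. The 2-D toy data below is invented for illustration; libraries such as scikit-learn provide tuned implementations with proper distance indexing:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """train: list of ((x, y), label) pairs. Majority vote among the k nearest points."""
    by_dist = sorted(train, key=lambda pair: math.dist(pair[0], query))
    votes = Counter(label for _, label in by_dist[:k])
    return votes.most_common(1)[0][0]

train = [((0, 0), "a"), ((1, 0), "a"), ((0, 1), "a"),
         ((5, 5), "b"), ((6, 5), "b"), ((5, 6), "b")]
print(knn_predict(train, (1, 1)))    # a
print(knn_predict(train, (5.5, 5)))  # b
```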
Posted 2 months ago
9 - 11 years
37 - 40 Lacs
Ahmedabad, Bengaluru, Mumbai (All Areas)
Work from Office
Dear Candidate,

We are hiring a Scala Developer to work on high-performance distributed systems, leveraging the power of functional and object-oriented paradigms. This role is perfect for engineers passionate about clean code, concurrency, and big data pipelines.

Key Responsibilities:
- Build scalable backend services using Scala and the Play or Akka frameworks
- Write concurrent and reactive code for high-throughput applications
- Integrate with Kafka, Spark, or Hadoop for data processing
- Ensure code quality through unit tests and property-based testing
- Work with microservices, APIs, and cloud-native deployments

Required Skills & Qualifications:
- Proficiency in Scala, with a strong grasp of functional programming
- Experience with Akka, Play, or Cats
- Familiarity with Big Data tools and RESTful API development
- Bonus: experience with ZIO, Monix, or Slick

Soft Skills:
- Strong troubleshooting and problem-solving skills
- Ability to work independently and in a team
- Excellent communication and documentation skills

Note: If interested, please share your updated resume and a preferred time for a discussion. If shortlisted, our HR team will contact you.

Kandi Srinivasa Reddy
Delivery Manager, Integra Technologies
Posted 2 months ago