5.0 - 9.0 years
0 Lacs
Haryana
On-site
You should have the ability to design and implement workflows of Linear and Logistic Regression and Ensemble Models (Random Forest, Boosting) using R/Python. Demonstrating competency in Probability and Statistics, you should be capable of applying ideas of Data Distributions, Hypothesis Testing, and other Statistical Tests. Your experience should include dealing with outliers, denoising data, and handling the impact of pandemic-like situations. Performing Exploratory Data Analysis (EDA) of raw data and feature engineering wherever applicable is crucial. You should possess demonstrable competency in Data Visualization using the Python/R Data Science Stack. Leveraging cloud platforms for training and deploying large-scale solutions is essential. Being able to train and evaluate ML models using various machine learning and deep learning algorithms is a must. You should also be skilled in retraining models and maintaining their accuracy in deployment. Knowledge of cloud platforms such as AWS, Azure, and GCP is required. Exposure to NoSQL databases (MongoDB, Cassandra, Cosmos DB, HBase) is preferred. Forecasting experience in products such as SAP, Oracle, Power BI, and Qlik is beneficial. Proficiency in Excel (Power Pivot, Power Query, Macros, Charts) is expected. Experience with large datasets and distributed computing (Hive/Hadoop/Spark) is advantageous. Knowledge of transfer learning using state-of-the-art models in areas such as vision, NLP, and speech is a plus. Integration with external services and cloud APIs is also part of the role. Working with data annotation approaches and tools for text, images, and videos is desirable. You should be able to package and deploy large-scale models on on-premise systems using multiple approaches, including Docker. Taking complete ownership of the assigned project is necessary. Experience working in Agile environments and familiarity with JIRA or equivalent project tracking tools is expected.
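As an illustration of the regression and ensemble workflow described above, the following minimal sketch compares a logistic regression and a random forest with cross-validation using scikit-learn; the data is synthetic and the model settings are arbitrary, not a prescribed setup.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for real project data
X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)

models = {
    "logistic_regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=42),
}

for name, model in models.items():
    # 5-fold cross-validated accuracy as a simple evaluation baseline
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```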
Posted 6 days ago
7.0 - 11.0 years
0 Lacs
Haryana
On-site
The ideal candidate for this position should have previous experience in building data science/algorithm-based products, which would be a significant advantage. Experience in handling healthcare data is also desired. An educational qualification of a Bachelor's or Master's degree in Computer Science, Data Science, or related subjects from a reputable institution is required. With a typical experience of 7-9 years in the industry, the candidate should have a strong background in developing data science models and solutions. The ability to quickly adapt to new programming languages, technologies, and frameworks is essential. A deep understanding of data structures and algorithms is necessary. The candidate should also have a proven track record of implementing end-to-end data science modeling projects and providing guidance and thought leadership to the team. Experience in a consulting environment with a hands-on attitude is preferred. As a Data Science Lead, the primary responsibility will be to lead a team of analysts, data scientists, and engineers to deliver end-to-end solutions for pharmaceutical clients. The candidate is expected to participate in client proposal discussions with senior stakeholders and provide technical thought leadership. Expertise in all phases of model development, including exploratory data analysis, hypothesis testing, feature creation, dimension reduction, model training, selection, validation, and deployment, is required. A deep understanding of statistical and machine learning methods such as logistic regression, SVM, decision trees, random forests, neural networks, and regression is essential. Mathematical knowledge of correlation/causation, classification, recommenders, probability, stochastic processes, NLP, and their practical implementation to solve business problems is necessary. The candidate should also be able to implement ML models in an optimized and sustainable framework and gain business understanding in the healthcare domain to develop relevant analytics use cases. In terms of technical skills, the candidate should have expert-level proficiency in programming languages like Python and SQL, along with working knowledge of relational SQL and NoSQL databases such as Postgres and Redshift. Extensive knowledge of predictive and machine learning models, NLP techniques, deep learning, and unsupervised learning is required. Familiarity with data structures, pre-processing, feature engineering, sampling techniques, and statistical analysis is important. Exposure to open-source tools, cloud platforms like AWS and Azure, AI tools like LLM models, and visualization tools like Tableau and Power BI is preferred. If you do not meet every job requirement, the company encourages candidates to apply anyway, as it is dedicated to building a diverse, inclusive, and authentic workplace. Your excitement for the role and potential fit may make you the right candidate for this position or others within the company.
Posted 6 days ago
2.0 - 6.0 years
0 Lacs
Karnataka
On-site
Oracle Cloud Infrastructure (OCI) is leading cloud innovation, combining startup agility with the reliability of a top enterprise software provider. The AI Science team focuses on cutting-edge machine learning solutions that address real-world challenges at scale. We are seeking an experienced Sr. Applied Scientist (IC3) with expertise in Generative AI and Computer Vision to develop highly complex and accurate data science models. As a Senior Applied Scientist, you will work on secure, scalable, and innovative AI solutions utilizing advanced techniques in computer vision and GenAI technologies.
Key Responsibilities:
- Develop advanced AI models and algorithms, particularly in large language models, large multimodal models, computer vision, and foundation models.
- Design, implement, and test critical features of AI services, ensuring correctness, high availability, scalability, and cost-effectiveness.
- Advocate best practices for testing, benchmarking, and model validation to ensure reliability and performance.
- Analyze ML models, optimize for accuracy and latency, and handle large-scale training and production deployment.
- Own data analysis, feature engineering, technique selection and implementation, debugging, and maintenance of production models.
- Implement machine learning algorithms from scratch to production, work with complex data sets, and proactively address technical issues.
- Collaborate with product managers, engineering leads, and data teams to define requirements and ensure data quality.
- Leverage Oracle Cloud technology to develop best-in-class computer vision solutions for Oracle's business domain at scale.
Preferred Qualifications:
- Ph.D. or Master's degree in Computer Science, Machine Learning, Computer Vision, or a related field.
- Expertise in GenAI, LLMs, LMMs, object detection, facial recognition, and image classification.
- Strong foundation in deep learning architectures such as CNNs, transformers, diffusion models, and multimodal models.
- Proficiency in high-level languages such as Python, Java, or C++, and experience in ML algorithm design and production deployment.
- Familiarity with cloud environments such as Oracle Cloud (OCI), AWS, GCP, or Azure, and Agile development processes.
- Strong problem-solving skills, a drive to learn new technologies, and an ability to thrive in a fast-paced work environment.
This role offers an opportunity to contribute to cutting-edge AI solutions, collaborate with cross-functional teams, and drive innovation in the field of computer vision.
Posted 6 days ago
5.0 - 9.0 years
0 Lacs
Maharashtra
On-site
As a Senior Specialist in Software Development (Artificial Intelligence) at Accelya, you will lead the design, development, and implementation of AI and machine learning solutions to tackle complex business challenges. Your expertise in AI algorithms, model development, and software engineering best practices will be crucial in working with cross-functional teams to deliver intelligent systems that optimize business operations and decision-making. Your responsibilities will include designing and developing AI-driven applications and platforms using machine learning, deep learning, and NLP techniques. You will lead the implementation of advanced algorithms for supervised and unsupervised learning, reinforcement learning, and computer vision. Additionally, you will develop scalable AI models, integrate them into software applications, and build APIs and microservices for deployment in cloud environments or on-premise systems. Collaboration with data scientists and data engineers will be essential in gathering, preprocessing, and analyzing large datasets. You will also implement feature engineering techniques to enhance the accuracy and performance of machine learning models. Regularly evaluating AI models using performance metrics and fine-tuning them for optimal accuracy will be part of your role. Furthermore, you will collaborate with business stakeholders to identify AI adoption opportunities, provide technical leadership and mentorship to junior team members, and stay updated with the latest AI trends and research to introduce innovative techniques to the team. Ensuring ethical compliance, security, and continuous improvement of AI systems will also be key aspects of your role. You should hold a Bachelor's degree in Computer Science, Data Science, Artificial Intelligence, or a related field, along with at least 5 years of experience in software development focusing on AI and machine learning. Proficiency in AI frameworks and libraries, programming languages such as Python, R, or Java, and cloud platforms for deploying AI models is required. Familiarity with Agile methodologies, data structures, and databases is essential. Preferred qualifications include a Master's or PhD in Artificial Intelligence or Machine Learning, experience with NLP techniques and computer vision technologies, and certifications in AI/ML or cloud platforms. Accelya is looking for individuals who are passionate about shaping the future of the air transport industry through innovative AI solutions. If you are ready to contribute your expertise and drive continuous improvement in AI systems, this role offers you the opportunity to make a significant impact in the industry.
Posted 6 days ago
8.0 - 12.0 years
0 Lacs
Hyderabad, Telangana
On-site
You will be joining Salesforce, the Customer Company, known for inspiring the future of business by combining AI, data, and CRM technologies. As part of the Marketing AI/ML Algorithms and Applications team, you will play a crucial role in enhancing Salesforce's marketing initiatives by implementing cutting-edge machine learning solutions. Your work will directly impact the effectiveness of marketing efforts, contributing to Salesforce's growth and innovation in the CRM and Agentic enterprise space. In the position of Lead/Staff Machine Learning Engineer, you will be responsible for developing and deploying ML model pipelines that drive marketing performance and deliver customer value. Working closely with cross-functional teams, you will lead the design, implementation, and operations of end-to-end ML solutions at scale. Your role will involve establishing best practices, mentoring junior engineers, and ensuring the team remains at the forefront of ML innovation.
Key Responsibilities:
- Define and drive the technical ML strategy, emphasizing robust model architectures and MLOps practices
- Lead end-to-end ML pipeline development, focusing on automated retraining workflows and model optimization
- Implement infrastructure-as-code, CI/CD pipelines, and MLOps automation for model monitoring and drift detection
- Own the MLOps lifecycle, including model governance, testing standards, and incident response for production ML systems
- Establish engineering standards for model deployment, testing, version control, and code quality
- Design and implement monitoring solutions for model performance, data quality, and system health
- Collaborate with cross-functional teams to deliver scalable ML solutions with measurable impact
- Provide technical leadership in ML engineering best practices and mentor junior engineers in MLOps principles
Position Requirements:
- 8+ years of experience in building and deploying ML model pipelines with a focus on marketing
- Expertise in AWS services, particularly SageMaker and MLflow, for ML experiment tracking and lifecycle management
- Proficiency in containerization, workflow orchestration, Python programming, ML frameworks, and software engineering best practices
- Experience with MLOps practices, feature engineering, feature store implementations, and big data technologies
- Track record of leading ML initiatives with measurable marketing impact and strong collaboration skills
Join us at Salesforce to drive transformative business impact and shape the future of customer engagement through innovative AI solutions.
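To make the experiment-tracking side of the MLOps tooling mentioned above concrete, here is a minimal sketch of logging a training run with MLflow; it assumes an MLflow tracking backend is available, and the experiment name, parameters, and synthetic data are purely illustrative.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2_000, n_features=15, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

mlflow.set_experiment("marketing-response-model")  # hypothetical experiment name

with mlflow.start_run():
    params = {"n_estimators": 150, "learning_rate": 0.05}
    model = GradientBoostingClassifier(**params).fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])

    # Track parameters, metrics, and the fitted model for later comparison or deployment
    mlflow.log_params(params)
    mlflow.log_metric("test_auc", auc)
    mlflow.sklearn.log_model(model, artifact_path="model")
```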
Posted 6 days ago
2.0 - 6.0 years
0 Lacs
Kochi, Kerala
On-site
We are seeking a talented and enthusiastic AI Engineer to be a part of our team. As an AI Engineer, you will be responsible for the design, development, and deployment of AI models and algorithms to address intricate challenges and drive business value. The ideal candidate will possess a solid foundation in machine learning, deep learning, and data science, coupled with exceptional programming abilities. Your main responsibilities will include designing, developing, and implementing AI models and algorithms to solve real-world issues. You will collaborate with diverse teams to understand business requirements and transform them into technical solutions. Data analysis and preprocessing will be crucial to ensure high-quality input for AI models. Furthermore, you will be involved in training, testing, and validating AI models to guarantee accuracy and performance, along with deploying these models into production environments and monitoring their performance. Continuous enhancement of AI models by integrating new data and feedback will also be part of your role. It is essential to stay updated with the latest advancements in AI and machine learning technologies. Documenting AI models, processes, and outcomes for knowledge sharing and future reference will be necessary. Additionally, participation in code reviews and providing constructive feedback to peers will be expected. The ideal candidate should have prior experience as an AI Engineer, Machine Learning Engineer, or a similar role. Proficiency in machine learning algorithms, deep learning frameworks, statistical methods, and programming languages like Python, R, or Java is required. Familiarity with LLMs such as GPT-4 and Llama-2, as well as NLP, RAG, data preprocessing, feature engineering, and data visualization techniques, is essential. Experience with machine learning libraries and frameworks like TensorFlow, Keras, PyTorch, and Scikit-learn, vector databases such as Pinecone and ChromaDB, cloud platforms like AWS, Google Cloud, and Azure, and version control systems like Git is preferred. Strong problem-solving skills, attention to detail, and excellent communication and collaboration skills are also vital. The ideal candidate should have 2-6 years of experience and a Bachelor's or Master's degree in computer science, artificial intelligence, machine learning, or a related field.
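To make the RAG and vector-database items above more concrete, here is a minimal illustrative sketch using ChromaDB's in-memory client with made-up documents; a real system would persist the collection and send the assembled prompt to an LLM rather than printing it.

```python
import chromadb

# In-memory Chroma client; a production setup would use a persistent store
client = chromadb.Client()
collection = client.create_collection(name="support_articles")  # hypothetical collection

# Toy documents standing in for a real knowledge base
collection.add(
    documents=[
        "Orders can be cancelled within 24 hours of purchase.",
        "Refunds are processed to the original payment method in 5-7 days.",
    ],
    ids=["doc-1", "doc-2"],
)

# Retrieval step: fetch the passage most relevant to the user question
results = collection.query(query_texts=["How long do refunds take?"], n_results=1)
context = results["documents"][0][0]

# The retrieved context would then be inserted into the LLM prompt
prompt = f"Answer using only this context:\n{context}\n\nQuestion: How long do refunds take?"
print(prompt)
```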
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Chandigarh
On-site
We are seeking a Data Scientist with over 3 years of experience in Machine Learning, Deep Learning, and Large Language Models (LLMs) to join our team at SparkBrains Private Limited in Chandigarh. As a Data Scientist, you will be responsible for leveraging your analytical skills and expertise in data modeling to develop and deploy AI-driven solutions that provide value to our business and clients. Your key responsibilities will include gathering, cleaning, and preparing data for model training, designing and optimizing machine learning and deep learning models, integrating Large Language Models (LLMs) for NLP tasks, identifying relevant features to improve model accuracy, conducting rigorous model evaluation and optimization, creating data visualizations and insights for stakeholder communication, developing and deploying APIs, and collaborating with cross-functional teams while documenting processes effectively. To qualify for this role, you must hold a Bachelor's or Master's degree in Computer Science, Data Science, AI, Machine Learning, or a related field, along with a minimum of 3 years of experience as a Data Scientist or AI Engineer. You should also possess proficiency in Python and relevant ML/AI libraries, hands-on experience with LLMs, a strong understanding of NLP, neural networks, and deep learning architectures, knowledge of data wrangling and visualization techniques, experience with APIs and cloud platforms, analytical and problem-solving skills, as well as excellent communication skills for effective collaboration. By joining our team, you will have the opportunity to work on cutting-edge AI/ML projects, be part of a collaborative work environment that emphasizes continuous learning, gain exposure to diverse industries and domains, and benefit from a competitive salary and growth opportunities. This is a full-time, permanent position with a day shift schedule from Monday to Friday, requiring in-person work at our Chandigarh office.
Posted 1 week ago
9.0 - 12.0 years
14 - 24 Lacs
Gurugram
Remote
We are looking for an experienced Senior Data Engineer to lead the development of scalable AWS-native data lake pipelines with a strong focus on time series forecasting and upsert-ready architectures. This role requires end-to-end ownership of the data lifecycle, from ingestion to partitioning, versioning, and BI delivery. The ideal candidate must be highly proficient in AWS data services, PySpark, and versioned storage formats like Apache Hudi/Iceberg, and must understand the nuances of data quality and observability in large-scale analytics systems.
Role & responsibilities:
- Design and implement data lake zoning (Raw, Clean, Modeled) using Amazon S3, AWS Glue, and Athena.
- Ingest structured and unstructured datasets, including POS, USDA, Circana, and internal sales data.
- Build versioned and upsert-friendly ETL pipelines using Apache Hudi or Iceberg.
- Create forecast-ready datasets with lagged, rolling, and trend features for revenue and occupancy modelling (see the sketch after this listing).
- Optimize Athena datasets with partitioning, CTAS queries, and metadata tagging.
- Implement S3 lifecycle policies, intelligent file partitioning, and audit logging.
- Build reusable transformation logic using dbt-core or PySpark to support KPIs and time series outputs.
- Integrate robust data quality checks using custom logs, AWS CloudWatch, or other DQ tooling.
- Design and manage a forecast feature registry with metrics versioning and traceability.
- Collaborate with BI and business teams to finalize schema design and deliverables for dashboard consumption.
Preferred candidate profile:
- 9-12 years of experience in data engineering.
- Deep hands-on experience with AWS Glue, Athena, S3, Step Functions, and Glue Data Catalog.
- Strong command of PySpark, dbt-core, CTAS query optimization, and partition strategies.
- Working knowledge of Apache Hudi, Iceberg, or Delta Lake for versioned ingestion.
- Experience in S3 metadata tagging and scalable data lake design patterns.
- Expertise in feature engineering and forecasting dataset preparation (lags, trends, windows).
- Proficiency in Git-based workflows (Bitbucket), CI/CD, and deployment automation.
- Strong understanding of time series KPIs, such as revenue forecasts, occupancy trends, or demand volatility.
- Data observability best practices, including field-level logging, anomaly alerts, and classification tagging.
- Experience with statistical forecasting frameworks such as Prophet, GluonTS, or related libraries.
- Familiarity with Superset or Streamlit for QA visualization and UAT reporting.
- Understanding of macroeconomic datasets (USDA, Circana) and third-party data ingestion.
- Independent, critical thinker with the ability to design for scale and evolving business logic.
- Strong communication and collaboration with BI, QA, and business stakeholders.
- High attention to detail in ensuring data accuracy, quality, and documentation.
- Comfortable interpreting business-level KPIs and transforming them into technical pipelines.
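The lagged and rolling forecast features mentioned in the responsibilities could look roughly like the following PySpark sketch; the data is a toy example and the store_id/ds/revenue column names are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("forecast-features").getOrCreate()

# Toy daily sales data; column names are hypothetical
df = spark.createDataFrame(
    [("store_1", "2024-01-01", 100.0), ("store_1", "2024-01-02", 120.0),
     ("store_1", "2024-01-03", 90.0), ("store_1", "2024-01-04", 130.0)],
    ["store_id", "ds", "revenue"],
)

w = Window.partitionBy("store_id").orderBy("ds")
w_7 = w.rowsBetween(-6, 0)  # trailing 7-row window (approximates 7 days of daily data)

features = (
    df.withColumn("revenue_lag_1", F.lag("revenue", 1).over(w))
      .withColumn("revenue_roll_mean_7", F.avg("revenue").over(w_7))
)
features.show()
```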
Posted 1 week ago
4.0 - 5.0 years
4 - 5 Lacs
Patna, Bihar, India
On-site
Responsibilities:
- Conduct feature engineering, data analysis, and data exploration to extract valuable insights.
- Develop and optimize Machine Learning models to achieve high accuracy and performance.
- Design and implement Deep Learning models, including Artificial Neural Networks (ANN), Convolutional Neural Networks (CNN), and Reinforcement Learning techniques.
- Handle real-time imbalanced datasets and apply appropriate techniques to improve model fairness and robustness (see the sketch after this listing).
- Deploy models in production environments and ensure continuous monitoring, improvement, and updates based on feedback.
- Collaborate with cross-functional teams to align ML solutions with business goals.
- Utilize fundamental statistical knowledge and mathematical principles to ensure the reliability of models.
- Bring in the latest advancements in ML and AI to drive innovation.
Required Skills:
- 4-5 years of hands-on experience in Machine Learning and Deep Learning.
- Strong expertise in feature engineering, data exploration, and data preprocessing.
- Experience with imbalanced datasets and techniques to improve model generalization.
- Proficiency in Python, TensorFlow, Scikit-learn, and other ML frameworks.
- Strong mathematical and statistical knowledge with problem-solving skills.
- Ability to optimize models for high accuracy and performance in real-world scenarios.
Preferred Skills:
- Experience with Big Data technologies (e.g., Hadoop, Spark).
- Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes).
- Experience in automating ML pipelines with MLOps practices.
- Experience in model deployment using cloud platforms (AWS, GCP, Azure) or MLOps tools.
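As a small illustration of the imbalanced-data handling mentioned above, here is a minimal scikit-learn sketch using class weighting on synthetic data; resampling (e.g. SMOTE) or decision-threshold tuning would be reasonable alternatives, and none of the settings here are prescriptive.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic imbalanced dataset (~5% positives) standing in for real data
X, y = make_classification(
    n_samples=5_000, n_features=20, weights=[0.95, 0.05], random_state=7
)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=7)

# class_weight="balanced" up-weights the minority class during training
clf = RandomForestClassifier(n_estimators=200, class_weight="balanced", random_state=7)
clf.fit(X_train, y_train)

# Per-class precision/recall is more informative than plain accuracy here
print(classification_report(y_test, clf.predict(X_test), digits=3))
```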
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
The role of Data Scientist - Clinical Data Extraction & AI Integration in our healthcare technology team requires an experienced individual with 3-6 years of experience. As a Data Scientist in this role, you will be primarily focused on medical document processing and data extraction systems. You will have the opportunity to work with advanced AI technologies to create solutions that enhance the extraction of crucial information from clinical documents, thereby improving healthcare data workflows and patient care outcomes. Your key responsibilities will include designing and implementing statistical models for medical data quality assessment and developing predictive algorithms for encounter classification and validation. You will also be responsible for building machine learning pipelines for document pattern recognition, creating data-driven insights from clinical document structures, and implementing feature engineering for medical terminology extraction. Furthermore, you will apply natural language processing (NLP) techniques to clinical text, develop statistical validation frameworks for extracted medical data, and build anomaly detection systems for medical document processing. You will create predictive models for discharge date estimation and encounter duration, and implement clustering algorithms for provider and encounter classification. In terms of AI & LLM integration, you will be expected to integrate and optimize Large Language Models via AWS Bedrock and API services, design and refine AI prompts for clinical content extraction with high accuracy, and implement fallback logic and error handling for AI-powered extraction systems. You will also develop pattern matching algorithms for medical terminology and create validation layers for AI-extracted medical information. Having expertise in the healthcare domain is crucial for this role. You will work closely with medical document structures, implement healthcare-specific validation rules, handle medical terminology extraction, and conduct clinical context analysis. Ensuring HIPAA compliance and adhering to data security best practices will also be part of your responsibilities. Proficiency in programming languages such as Python 3.8+, R, SQL, and JSON, along with familiarity with data science tools like pandas, numpy, scipy, scikit-learn, spaCy, and NLTK, is required. Experience with ML frameworks including TensorFlow, PyTorch, transformers, and Hugging Face, and visualization tools like matplotlib, seaborn, plotly, Tableau, and Power BI is desirable. Knowledge of AI platforms such as AWS Bedrock, Anthropic Claude, and OpenAI APIs, and experience with cloud services like AWS (SageMaker, S3, Lambda, Bedrock), will be advantageous. Familiarity with research tools like Jupyter notebooks, Git, Docker, and MLflow is also beneficial for this role.
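To make the anomaly-detection responsibility more concrete, here is a small illustrative sketch using scikit-learn's IsolationForest on hypothetical per-document features; it is not the team's actual pipeline, and the feature names, thresholds, and injected outliers are invented for the example.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical per-document features produced by an extraction pipeline
docs = pd.DataFrame({
    "page_count": rng.integers(1, 30, size=500),
    "extracted_fields": rng.integers(5, 60, size=500),
    "ocr_confidence": rng.uniform(0.7, 1.0, size=500),
})
# Inject a few obviously malformed documents for demonstration
docs.loc[:4, ["page_count", "extracted_fields", "ocr_confidence"]] = [[200, 0, 0.1]] * 5

# Flag documents whose feature profile deviates from the bulk of the corpus
model = IsolationForest(contamination=0.02, random_state=42)
docs["flagged_for_review"] = model.fit_predict(docs) == -1

print(docs["flagged_for_review"].sum(), "documents flagged for manual review")
```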
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
Karnataka
On-site
We are looking for a skilled Data Engineer to join our team, working on end-to-end data engineering and data science use cases. The ideal candidate will have strong expertise in Python or Scala, Spark (Databricks), and SQL, building scalable and efficient data pipelines on Azure.
Responsibilities:
- Design, build, and maintain scalable ETL/ELT data pipelines using Azure Data Factory, Databricks, and Spark.
- Develop and optimize data workflows using SQL and Python or Scala for large-scale data processing and transformation.
- Implement performance tuning and optimization strategies for data pipelines and Spark jobs to ensure efficient data handling.
- Collaborate with data engineers to support feature engineering, model deployment, and end-to-end data engineering workflows.
- Ensure data quality and integrity by implementing validation, error-handling, and monitoring mechanisms (see the sketch after this listing).
- Work with structured and unstructured data using technologies such as Delta Lake and Parquet within a Big Data ecosystem.
- Contribute to MLOps practices, including integrating ML pipelines, managing model versioning, and supporting CI/CD processes.
Primary skills required:
- Data engineering and cloud proficiency in the Azure Data Platform (Data Factory, Databricks)
- Strong skills in SQL and either Python or Scala for data manipulation
- Experience with ETL/ELT pipelines and data transformations
- Familiarity with Big Data technologies (Spark, Delta Lake, Parquet)
- Expertise in data pipeline optimization and performance tuning
- Experience in feature engineering and model deployment
- Strong troubleshooting and problem-solving skills
- Experience with data quality checks and validation
Nice-to-have skills:
- Exposure to NLP, time-series forecasting, and anomaly detection
- Familiarity with data governance frameworks and compliance practices
- Basics of AI/ML, including ML and MLOps integration
- Experience supporting ML pipelines with efficient data workflows
- Knowledge of MLOps practices (CI/CD, model monitoring, versioning)
At Tesco, we are committed to providing the best for our colleagues. Total Rewards at Tesco are determined by four principles: simple, fair, competitive, and sustainable. Colleagues are entitled to 30 days of leave (18 days of Earned Leave, 12 days of Casual/Sick Leave) and 10 national and festival holidays. Tesco promotes programs supporting health and wellness, including insurance for colleagues and their families, mental health support, financial coaching, and physical wellbeing facilities on campus. Tesco in Bengaluru is a multi-disciplinary team serving customers, communities, and the planet. The goal is to create a sustainable competitive advantage for Tesco by standardizing processes, delivering cost savings, enabling agility through technological solutions, and empowering colleagues. The Tesco Technology team consists of over 5,000 experts spread across the UK, Poland, Hungary, the Czech Republic, and India, dedicated to various roles including Engineering, Product, Programme, Service Desk and Operations, Systems Engineering, Security & Capability, Data Science, and others.
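The data quality checks referenced in the responsibilities could take roughly this form in PySpark; the table and column names are hypothetical, and a real pipeline would raise an alert or fail the job rather than just print.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()

# Toy dataset standing in for a curated Delta/Parquet table; column names are hypothetical
orders = spark.createDataFrame(
    [(1, "2024-01-01", 250.0), (2, "2024-01-01", None), (3, None, 90.0)],
    ["order_id", "order_date", "amount"],
)

total_rows = orders.count()
checks = {
    "row_count_positive": total_rows > 0,
    "no_null_order_ids": orders.filter(F.col("order_id").isNull()).count() == 0,
    "null_amount_ratio_below_5pct":
        orders.filter(F.col("amount").isNull()).count() / total_rows < 0.05,
}

failed = [name for name, passed in checks.items() if not passed]
if failed:
    print("Data quality checks failed:", failed)
else:
    print("All data quality checks passed")
```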
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Gujarat
On-site
As a Data Scientist at Micron Technology in Sanand, Gujarat, you will have the opportunity to play a pivotal role in transforming how the world uses information to enrich life for all. Micron Technology is a global leader in innovating memory and storage solutions, driving the acceleration of information into intelligence and inspiring advancements in learning, communication, and progress. Your responsibilities will involve a broad range of tasks, including but not limited to:
- Developing a strong career path as a Data Scientist in highly automated industrial manufacturing, focusing on analysis and machine learning of terabytes and petabytes of diverse datasets.
- Extracting data from various databases using SQL and other query languages, and applying data cleansing, outlier identification, and missing data techniques.
- Applying the latest mathematical and statistical techniques to analyze data and identify patterns.
- Building web applications as part of your job scope.
- Utilizing cloud-based analytics and machine learning modeling.
- Building APIs for application integration.
- Engaging in statistical modeling, feature extraction and analysis, feature engineering, and supervised/unsupervised/semi-supervised learning.
- Demonstrating proficiency in data analysis and validation, as well as strong software development skills.
In addition to the above, you should possess above-average skills in:
- Programming fluency in Python.
- Knowledge of statistics, machine learning, and other advanced analytical methods.
- Familiarity with JavaScript, AngularJS 2.0, and Tableau, with a background in OOPS considered an advantage.
- Understanding of pySpark and/or libraries for distributed and parallel processing.
- Experience with TensorFlow and/or other statistical software with scripting capabilities.
- Knowledge of time series data, images, semi-supervised learning, and data with frequently changing distributions is a plus.
- Understanding of Manufacturing Execution Systems (MES) is beneficial.
You should be able to work in a dynamic, fast-paced environment, be self-motivated, be adaptable to new technologies, and possess a passion for data and information, with excellent analytical, problem-solving, and organizational skills. Furthermore, effective communication with distributed teams (written, verbal, and presentation) and the ability to work collaboratively towards common objectives are key attributes for this role. To be eligible for this position, you should hold a Bachelor's or Master's degree in Computer Science or Electrical/Electronic Engineering, with a CGPA of 7.0 or above. Join Micron Technology, Inc., where our relentless focus on customers, technology leadership, and operational excellence drives the creation of high-performance memory and storage products that power the data economy. Visit micron.com/careers to learn more about our innovative solutions and opportunities for growth. For any assistance with the application process or to request reasonable accommodations, please reach out to hrsupport_india@micron.com. Micron Technology strictly prohibits the use of child labor and complies with all applicable laws, rules, regulations, and international labor standards. Candidates are encouraged to use AI tools to enhance their application materials, ensuring accuracy and truthfulness in representing their skills and experiences. Fabrication or misrepresentation will lead to immediate disqualification.
As a Data Scientist at Micron Technology, you will be part of a transformative journey that shapes the future of information utilization and enriches lives across the globe.
Posted 1 week ago
3.0 - 10.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
We are looking for a highly motivated and experienced data scientist to lead our team of Gen-AI Engineers. As a data scientist, you will oversee all processes from data extraction, cleaning, and pre-processing to training models and deploying them to production. The ideal candidate will have a strong passion for artificial intelligence and will stay updated with the latest advancements in this field. Your responsibilities will include utilizing frameworks like LangChain to develop scalable AI solutions, integrating vector databases such as Azure Cognitive Search, Weaviate, or Pinecone to support AI model functionalities, and collaborating with cross-functional teams to define problem statements and prototype solutions using generative AI. It will also involve ensuring the robustness, scalability, and reliability of AI systems by implementing best practices in machine learning and software development. You will be required to explore and visualize data to gain insights, identify differences in data distribution that could impact model performance in real-world deployment, verify and ensure data quality through data cleaning, supervise the data acquisition process if additional data is needed, find and utilize available datasets for training, define validation strategies, feature engineering, and data augmentation pipelines, train models, tune hyperparameters, analyze model errors, and devise strategies to overcome them. You will also be responsible for deploying models to production. The ideal candidate should have a Bachelor's or Master's degree in computer science, data science, mathematics, or a related field, along with 3-10 years of experience in building Gen-AI applications. Preferred skills include proficiency in statistical techniques such as hypothesis testing, regression analysis, clustering, classification, and time series analysis, expertise in deep learning frameworks like TensorFlow, PyTorch, and Keras, specialization in Deep Learning (NLP) and statistical machine learning, strong Python skills, experience in developing production-grade applications, familiarity with the LangChain framework and vector databases, understanding of and experience with retrieval algorithms, working knowledge of Big Data platforms and technologies like Apache Hadoop, Spark, Kafka, or Hive, and familiarity with deploying applications on Ubuntu/Linux systems. Excellent communication, negotiation, and interpersonal skills are also essential for this role.
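One simple way to check for the distribution differences mentioned above is a two-sample Kolmogorov-Smirnov test comparing a feature's training distribution against what the deployed model currently sees; the sketch below uses synthetic values and an arbitrary p-value threshold purely for illustration.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Hypothetical feature values: training data vs. live/production data
train_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)
live_feature = rng.normal(loc=0.4, scale=1.2, size=5_000)  # deliberately shifted

# A small p-value suggests the live distribution differs from training (possible drift)
stat, p_value = ks_2samp(train_feature, live_feature)
print(f"KS statistic={stat:.3f}, p-value={p_value:.3g}")

if p_value < 0.01:
    print("Drift suspected: consider retraining or investigating the feature pipeline")
```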
Posted 1 week ago
6.0 - 10.0 years
0 Lacs
Karnataka
On-site
You will be responsible for working with a team of data scientists to oversee multiple projects. Your role will involve guiding the team in formulating, developing, and implementing models, while also engaging with business stakeholders to clarify model outcomes. With 6 to 9 years of experience, you should possess a deep understanding of Statistics and hands-on experience with Machine Learning algorithms and techniques. Proficiency in programming languages such as R/SQL and Python is essential, along with a strong background in DL frameworks like TensorFlow and Theano. Your scientific expertise should encompass real-world experience in Deep Learning, including Convolutional Neural Networks, Restricted Boltzmann Machines, and Deep Neural Networks. Your passion for solving complex analytical problems will be crucial, along with the ability to assess problems quantitatively and qualitatively. Collaboration with team members to identify and resolve challenging issues will be a key aspect of your role. Previous experience in applying Machine Learning techniques in a production environment is highly desirable. As part of your responsibilities, you will design, develop, and implement AI/Machine Learning solutions for our industry-specific data analytics platform. Creating scalable processes for collecting, manipulating, presenting, and analyzing large datasets in a production setting will be a core focus. You will define problems, handle all data aspects from acquisition to visualization, conduct feature engineering using ML algorithms, and deploy models effectively. Furthermore, you will be tasked with developing algorithm prototypes, evaluating performance metrics based on real-world datasets, and providing input and guidance to software engineers for algorithm implementation in solution/product development. To qualify for this role, you should have a minimum of 6 years of professional experience in Life Science/Pharma. A degree in Applied Mathematics, Statistics, Machine Learning, or Computer Science is required, with a preference for candidates holding a Ph.D. or MS degree.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
Choosing Capgemini means choosing a company where you will be empowered to shape your career in the way you'd like, where you'll be supported and inspired by a collaborative community of colleagues around the world, and where you'll be able to reimagine what's possible. Join us and help the world's leading organizations unlock the value of technology and build a more sustainable, more inclusive world.
- Must have experience with Machine Learning Model Development.
- Expert-level proficiency in Data Handling (SQL).
- Hands-on with Model Engineering and Improvement.
- Strong experience in Model Deployment and Productionalization.
- Extensive experience in developing and implementing machine learning, Deep Learning, and NLP models across various domains.
- Strong proficiency in Python, including relevant libraries and frameworks for machine learning, such as scikit-learn, XGBoost, and Keras.
- Ability to write efficient, clean, and scalable code for model development and evaluation.
- Proven experience in enhancing model accuracy through engineering techniques such as feature engineering, feature selection, and data augmentation (see the sketch after this listing).
- Ability to analyze model samples in relation to model scores, identify patterns, and iteratively refine models for improved performance.
- Strong expertise in deploying machine learning models to production systems.
- Familiarity with the end-to-end process of model deployment, including data preprocessing, model training, optimization, and integration into production environments.
We recognise the significance of flexible work arrangements; with our hybrid mode, you will get an environment that supports a healthy work-life balance. Our focus will be your career growth and professional development, supporting you in exploring a world of opportunities. Equip yourself with valuable certifications and training programmes in the latest technologies, such as AI and Machine Learning. Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, generative AI, cloud and data, combined with its deep industry expertise and partner ecosystem.
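As an illustration of the feature selection mentioned in the requirements, here is a minimal scikit-learn sketch on synthetic data; keeping the top 10 features is an arbitrary choice for the example, not a recommended setting.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic data: 30 features, only 8 of them informative
X, y = make_classification(n_samples=2_000, n_features=30, n_informative=8, random_state=1)

pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(score_func=f_classif, k=10)),  # keep the 10 strongest features
    ("model", LogisticRegression(max_iter=1000)),
])

scores = cross_val_score(pipeline, X, y, cv=5, scoring="roc_auc")
print(f"Cross-validated AUC with feature selection: {scores.mean():.3f}")
```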
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
As a member of the 7-Eleven Global Solution Center team, you will have the opportunity to take ownership and responsibility within specific product areas, contributing to end-to-end solution delivery. You will support local teams and integrate new digital assets while challenging yourself to work on products deployed across a vast network of convenience stores processing over a billion transactions annually. Your role will involve building scalable solutions to address the diverse needs of 84,000+ stores in 19 countries. At 7-Eleven GSC, you will experience growth through cross-functional learning opportunities and be encouraged to embody leadership and service in meeting the needs of customers and communities. We are guided by our Leadership Principles at 7-Eleven, each of which is associated with specific behaviors that help us serve customers and support stores effectively. As part of the team, you will be expected to be customer-obsessed, express your views courageously, challenge the status quo, act like an entrepreneur, maintain a can-do attitude, do the right thing, and be accountable for your actions. 7-Eleven Global Solution Center is currently seeking a highly skilled senior AI/ML engineer to design, implement, and deploy innovative and efficient AI/ML solutions. The ideal candidate will possess extensive experience in LangChain, NLP, RAG-based systems, Prompt Engineering, Agentic Systems, and cloud platforms such as Azure and AWS. You will be responsible for building AI-driven applications and optimizing LangChain, GenAI, and NLP models for specific use cases. Additionally, you will experiment with different machine learning models, propose feasible solutions quickly, and communicate effectively with leadership. The qualifications required for this role include 3-5 years of experience in AI/ML engineering, proficiency in Python and machine learning frameworks like TensorFlow and PyTorch, expertise in Generative AI, NLP, and conversational AI technologies, and experience in building and deploying AI-powered applications at scale. You should also have a strong understanding of machine learning algorithms, structured and unstructured data, and model evaluation metrics, as well as the ability to translate business requirements into technical requirements. A bachelor's or master's degree in Computer Science, Artificial Intelligence, Machine Learning, or a related field is necessary, along with familiarity with Git versioning tools and exposure to the retail industry. At 7-Eleven Global Solution Center, we are committed to diversity in the workplace and focus on workplace culture, diverse talent, and our engagement with the communities we serve. We embrace diversity, equity, and inclusion as a business imperative, recognizing the importance of these aspects for our customers, franchisees, and employees. In addition to a challenging and rewarding career, 7-Eleven Global Solution Center offers a comprehensive benefits plan to improve the overall experience of our employees. This plan includes flexible work schedules, diverse leave options, medical coverage for family members, transportation, cafeteria facilities, certification and training programs, and support for employee relocation to Bangalore, India.
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Maharashtra
On-site
As a Senior Analyst - Data Analytics, you will leverage your 3+ years of experience in Data Analytics and reporting to design and build interactive dashboards and reports using Power BI and Microsoft Fabric. Your strong technical expertise in Power BI, Microsoft Fabric, Snowflake, SQL, Python, and R will be instrumental in performing advanced data analysis and visualization to support business decision-making. You will utilize your experience with Azure Data Factory, Databricks, Synapse Analytics, and AWS Glue to develop and maintain data pipelines and queries using SQL and Python. Applying data science techniques such as predictive modeling, classification, clustering, and regression, you will solve business problems and uncover actionable insights. Your hands-on experience in building and deploying machine learning models will be crucial as you build, validate, and tune models using tools such as scikit-learn, TensorFlow, or similar frameworks. Collaborating with stakeholders, you will translate business questions into data science problems and communicate findings in a clear, actionable manner. You will use statistical techniques and hypothesis testing to validate assumptions and support decision-making. Documenting data science workflows and maintaining the reproducibility of experiments and models will ensure the success of analytics projects. Additionally, you will support the Data Analytics Manager in delivering analytics projects and mentoring junior analysts. Professional certifications such as Microsoft Certified: Power BI Data Analyst Associate (PL-300), SnowPro Core Certification (Snowflake), Microsoft Certified: Azure Data Engineer Associate, and AWS Certified Data Analytics - Specialty, whether earned or in progress, are preferred and will enhance your expertise in the field.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
DecisionX is pioneering a new category with the world's first Decision AI, an AI Super-Agent that assists high-growth teams in making smarter, faster decisions by transforming fragmented data into clear next steps. Whether it involves strategic decisions in the boardroom or operational decisions across various departments like Sales, Marketing, Product, and Engineering, down to the minutiae that drives daily operations, Decision AI serves as your invisible co-pilot, thinking alongside you, acting ahead of you, and evolving beyond you. We are seeking a dedicated and hands-on AI Engineer to join our founding team. In this role, you will collaborate closely with leading AI experts to develop the intelligence layer of our exclusive "Agentic Number System."
Key Responsibilities:
- Building, fine-tuning, and deploying AI/ML models for tasks such as segmentation, scoring, recommendation, and orchestration.
- Developing and optimizing agent workflows using LLMs (OpenAI, Claude, Mistral, etc.) for contextual reasoning and task execution.
- Creating vector-based memory systems utilizing tools like FAISS, Chroma, or Weaviate (see the sketch after this listing).
- Working with APIs and connectors to incorporate third-party data sources (e.g., Salesforce, HubSpot, GSuite, Snowflake).
- Designing pipelines that transform structured and unstructured signals into actionable insights.
- Collaborating with GTM and product teams to define practical AI agent use cases.
- Staying informed about the latest developments in LLMs, retrieval-augmented generation (RAG), and agent orchestration frameworks (e.g., CrewAI, AutoGen, LangGraph).
Must-Have Skills:
- 5-8 years of experience in AI/ML engineering or applied data science.
- Proficient programming skills in Python, with expertise in LangChain, Pandas, NumPy, and Scikit-learn.
- Experience with LLMs (OpenAI, Anthropic, etc.), prompt engineering, and RAG pipelines.
- Familiarity with vector stores, embeddings, and semantic search.
- Expertise in data wrangling, feature engineering, and model deployment.
- Knowledge of MLOps tools such as MLflow, Weights & Biases, or equivalent.
What you will get:
- Opportunity to shape the AI architecture of a high-ambition startup.
- Close collaboration with a visionary founder and experienced product team.
- Ownership, autonomy, and the thrill of building something from 0 to 1.
- Early team equity and a fast growth trajectory.
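A FAISS-based memory of the kind mentioned in the responsibilities could start from something as simple as the sketch below; the embedding dimension, random vectors, and memory texts are placeholders for what a real embedding model would produce.

```python
import faiss
import numpy as np

rng = np.random.default_rng(0)
dim = 384  # embedding dimension; depends on the embedding model actually used

# Hypothetical embeddings of past "memories" (normally produced by an embedding model)
memory_texts = ["Q3 revenue dipped in the EU region", "Churn spiked after the pricing change"]
memory_vectors = rng.random((len(memory_texts), dim), dtype=np.float32)

# Simple exact L2 index; production systems often use IVF/HNSW indexes instead
index = faiss.IndexFlatL2(dim)
index.add(memory_vectors)

# Embed the incoming query the same way, then retrieve the closest stored memory
query_vector = rng.random((1, dim), dtype=np.float32)
distances, ids = index.search(query_vector, k=1)
print(f"Closest memory: {memory_texts[ids[0][0]]} (distance {distances[0][0]:.3f})")
```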
Posted 1 week ago
1.0 - 5.0 years
0 Lacs
Karnataka
On-site
As a GenAI Data Scientist at PwC US - Acceleration Center, you will be responsible for developing and implementing machine learning models and algorithms for GenAI projects. Your role will involve collaborating with product, engineering, and domain experts to identify high-impact opportunities, designing and building GenAI and Agentic AI solutions, processing structured and unstructured data for LLM workflows, validating and evaluating models, containerizing and deploying production workloads, communicating findings via various mediums, and staying updated with GenAI advancements. You should possess a Bachelor's or Master's degree in Computer Science, Data Science, Statistics, or a related field, along with 1-2 years of hands-on experience delivering GenAI solutions and 3-5 years of deploying machine learning solutions in production environments. Proficiency in Python, experience with vector stores and search technologies, familiarity with LLM-backed agent frameworks, expertise in data preprocessing and feature engineering, competence with cloud services, a solid grasp of Git workflows and CI/CD pipelines, and proficiency in data visualization are essential requirements. Additionally, relevant certifications in GenAI tools, hands-on experience with leading agent orchestration platforms, expertise in chatbot design, practical knowledge of ML/DL frameworks, and proficiency in object-oriented programming with languages like Java, C++, or C# are considered nice-to-have skills. The ideal candidate should possess strong problem-solving skills, a collaborative mindset, and the ability to thrive in a fast-paced environment. If you are passionate about leveraging data to drive insights and make informed business decisions, this role offers an exciting opportunity to contribute to cutting-edge GenAI projects and drive innovation in the field of data science.
Posted 1 week ago
0.0 years
0 Lacs
Pune, Maharashtra, India
On-site
- Independently design, develop, and implement machine learning and NLP models.
- Build and fine-tune LLM-based solutions (prompt engineering, few-shot prompting, chain-of-thought prompting); see the sketch after this listing.
- Develop robust, production-quality code for AI/ML applications using Python.
- Build, deploy, and monitor models using AWS services (SageMaker, Bedrock, Lambda, etc.).
- Conduct data cleaning, feature engineering, and model evaluation on large datasets.
- Experiment with new GenAI tools, LLM architectures, and APIs (HuggingFace, LangChain, OpenAI, etc.).
- Collaborate with senior data scientists for reviews, but own end-to-end solutioning tasks.
- Document models, pipelines, experiments, and results clearly and systematically.
- Stay updated with the latest in AI/ML, GenAI, and cloud technologies.
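As a small illustration of the few-shot prompting mentioned above, the sketch below only assembles the prompt text; the classification task, examples, and labels are made up, and the resulting string would be sent to whichever hosted model the team uses (for example via AWS Bedrock or another LLM API).

```python
# Few-shot prompt assembly for a simple sentiment-classification task.
# The examples, labels, and ticket text are invented for illustration.
few_shot_examples = [
    ("The app keeps crashing when I upload a file.", "negative"),
    ("Support resolved my issue within an hour, great service!", "positive"),
]

def build_prompt(ticket_text: str) -> str:
    lines = ["Classify the sentiment of each customer message as positive or negative.", ""]
    for text, label in few_shot_examples:
        lines.append(f"Message: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f"Message: {ticket_text}")
    lines.append("Sentiment:")  # the model is expected to complete this line
    return "\n".join(lines)

prompt = build_prompt("I waited two weeks and nobody replied to my refund request.")
print(prompt)
# `prompt` would then be passed to the chosen LLM endpoint.
```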
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
At PwC, the focus in data and analytics is on leveraging data to drive insights and make informed business decisions. Advanced analytics techniques are utilized to help clients optimize their operations and achieve strategic goals. In data analysis at PwC, the emphasis is on utilizing advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. Skills in data manipulation, visualization, and statistical modeling are leveraged to support clients in solving complex business problems. PwC US - Acceleration Center is currently looking for a highly skilled and experienced GenAI Data Scientist to join the team at the Senior Associate level. As a GenAI Data Scientist, you will play a critical role in developing and implementing machine learning models and algorithms for GenAI projects. The ideal candidate should have a strong background in data science, with a focus on GenAI technologies, and possess a solid understanding of statistical analysis, machine learning, data visualization, and application programming. Candidates with 4+ years of hands-on experience are preferred for this position.
Responsibilities:
- Collaborate with cross-functional teams to understand business requirements and identify opportunities for applying GenAI technologies.
- Develop and implement machine learning models and algorithms for GenAI projects.
- Perform data cleaning, preprocessing, and feature engineering to prepare data for analysis.
- Collaborate with data engineers to ensure efficient data processing and integration into machine learning pipelines.
- Validate and evaluate model performance using appropriate metrics and techniques.
- Develop and deploy production-ready machine learning applications and solutions.
- Utilize object-oriented programming skills to build robust and scalable software components.
- Utilize Kubernetes for container orchestration and deployment.
- Design and build chatbots using GenAI technologies.
- Communicate findings and insights to stakeholders through data visualizations, reports, and presentations.
- Stay up-to-date with the latest advancements in GenAI technologies and recommend innovative solutions to enhance data science processes.
Requirements:
- 3-5 years of relevant technical/technology experience, with a focus on GenAI projects.
- Strong programming skills in languages such as Python, R, or Scala.
- Proficiency in machine learning libraries and frameworks such as TensorFlow, PyTorch, or scikit-learn.
- Experience with data preprocessing, feature engineering, and data wrangling techniques.
- Solid understanding of statistical analysis, hypothesis testing, and experimental design.
- Familiarity with cloud computing platforms such as AWS, Azure, or Google Cloud.
- Knowledge of data visualization tools and techniques.
- Strong problem-solving and analytical skills.
- Excellent communication and collaboration abilities.
- Ability to work in a fast-paced and dynamic environment.
Preferred Qualifications:
- Experience with object-oriented programming languages such as Java, C++, or C#.
- Experience with developing and deploying machine learning applications in production environments.
- Understanding of data privacy and compliance regulations.
- Relevant certifications in data science or GenAI technologies.
Nice To Have Skills:
- Experience with Azure AI Search, Azure Doc Intelligence, Azure OpenAI, AWS Textract, AWS Open Search, AWS Bedrock.
- Familiarity with LLM-backed agent frameworks such as AutoGen, LangChain, Semantic Kernel, etc.
- Experience in chatbot design and development.
Professional And Educational Background: Any graduate / BE / B.Tech / MCA / M.Sc / M.E / M.Tech / Master's Degree / MBA
Posted 1 week ago
0.0 - 4.0 years
0 Lacs
Maharashtra
On-site
As a participant in the 4 to 6-month paid internship at Mason Interactive starting in Fall 2025, you will have the opportunity to lead a groundbreaking ML/AI integration project aimed at revolutionizing the utilization of marketing data. Your role will involve developing predictive models, creating automated insights, and implementing AI-driven optimization strategies based on authentic client datasets. Your responsibilities will include building ML models to forecast campaign performance and optimize marketing strategies, constructing automated data pipelines that merge various marketing data sources, designing interactive dashboards to visually represent ML insights for stakeholders, researching state-of-the-art AI methods to address marketing challenges, and collaborating across functions to identify ML applications in marketing workflows. To excel in this role, you should possess a strong foundation in ML, including supervised and unsupervised learning, neural networks, and NLP. Proficiency in Python and familiarity with ML libraries such as scikit-learn, TensorFlow, and PyTorch are essential. Moreover, experience in data preprocessing, feature engineering, SQL, database management, and data visualization tools like Tableau, Power BI, and Looker Studio will be advantageous. In addition to technical skills, a grasp of marketing metrics such as ROI, CPA, ROAS, and CPM is required. You should be able to translate ML insights into actionable marketing recommendations effectively. Soft skills are equally crucial for this role. Excellent communication skills with non-technical stakeholders, self-motivation coupled with strong project management abilities, and a problem-solving mindset to tackle data integration challenges are highly valued. Ideal candidates for this position are rising seniors or recent graduates in Computer Science, Data Science, ML, Statistics, or related fields. A portfolio showcasing ML applications to business problems is preferred. You must be available for a Fall 2025 start, commit to a period of 4 to 6 months, and be willing to work at least part-time hours. By joining Mason Interactive, you will be part of a competitive paid internship program that offers real compensation. You will lead a transformative project at the convergence of marketing and AI, with the opportunity to make a real-world impact on client business outcomes. Working alongside marketing experts and data scientists, you will have a chance to build an exceptional portfolio and contribute to shaping the future of data-driven marketing. If you are ready to revolutionize marketing analytics with AI, this internship opportunity at Mason Interactive awaits your innovative contributions.
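For reference on the marketing metrics named above, here is a small pandas sketch computing ROAS, CPA, CPM, and ROI; the campaign names and figures are invented for illustration.

```python
import pandas as pd

# Invented campaign-level figures for illustration
campaigns = pd.DataFrame({
    "campaign": ["brand_search", "prospecting_social"],
    "spend": [12_000.0, 8_500.0],
    "revenue": [54_000.0, 17_000.0],
    "conversions": [900, 340],
    "impressions": [1_200_000, 2_500_000],
})

campaigns["roas"] = campaigns["revenue"] / campaigns["spend"]            # revenue per unit of spend
campaigns["cpa"] = campaigns["spend"] / campaigns["conversions"]         # cost per acquisition
campaigns["cpm"] = campaigns["spend"] / campaigns["impressions"] * 1000  # cost per 1,000 impressions
campaigns["roi"] = (campaigns["revenue"] - campaigns["spend"]) / campaigns["spend"]

print(campaigns.round(2))
```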
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Jaipur, Rajasthan
On-site
Amplework Software is a full-stack development agency based in Jaipur (Rajasthan), IND, specializing in end-to-end software development solutions for clients worldwide. We are dedicated to delivering high-quality products that align with business requirements and leverage cutting-edge technologies. Our expertise encompasses custom software development, mobile applications, AI-driven solutions, and enterprise applications. Join our innovative team that drives digital transformation through technology. We are looking for a Mid-Level Python and AI Engineer to join our team. In this role, you will assist in building and training machine learning models using frameworks such as TensorFlow, PyTorch, and Scikit-Learn. You will experiment with pre-trained AI models for NLP, Computer Vision, and Predictive Analytics. Additionally, you will work with structured and unstructured data, collaborate with data scientists and software engineers, and continuously learn, experiment, and optimize models to enhance performance and efficiency. Ideal candidates should possess a Bachelor's degree in Computer Science, Engineering, AI, or a related field and proficiency in Python with experience in writing optimized and clean code. Strong problem-solving skills, understanding of machine learning concepts, and experience with data processing libraries are required. Familiarity with AI models and neural networks using frameworks like Scikit-Learn, TensorFlow, or PyTorch is essential. Preferred qualifications include experience with NLP using transformers, BERT, GPT, or OpenAI APIs, AI model deployment, database querying, and participation in AI-related competitions or projects. Soft skills such as analytical thinking, teamwork, eagerness to learn, and excellent English communication skills are highly valued. Candidates who excel in problem-solving, possess a willingness to adapt and experiment, and prefer a dynamic environment for AI exploration are encouraged to apply. A face-to-face interview will be conducted, and applicants should be able to attend the interview at our office location. Join the Amplework Software team to collaborate with passionate individuals, work on cutting-edge projects, make a real impact, enjoy competitive benefits, and thrive in a great working environment.
Posted 1 week ago
2.0 - 6.0 years
0 Lacs
karnataka
On-site
At Lilly, we are committed to uniting caring with discovery to enhance the quality of life for individuals worldwide. As a global healthcare leader headquartered in Indianapolis, Indiana, we have a workforce of 39,000 employees dedicated to developing and delivering life-changing medicines, advancing disease understanding and management, and contributing to our communities through philanthropic efforts. Our priority is always people, and we are in search of individuals who share our passion for making a positive impact on global well-being.

As part of these efforts, we are building and internalizing a cutting-edge recommendation engine platform. The platform is designed to enable more agile sales and marketing operations by incorporating various data sources, implementing advanced personalization models, and integrating seamlessly with Lilly's other operations platforms. The ultimate aim is to provide tailored recommendations to sales and marketing teams at the individual doctor level, improving decision-making and elevating customer experiences.

Key Responsibilities:
- Apply a strong grasp of deep learning models to develop optimized omnichannel promotional sequences for sales teams
- Analyze extensive datasets to identify trends and insights that inform modeling decisions
- Translate business challenges into statistical problem statements, propose solution approaches, and account for relevant constraints
- Collaborate with stakeholders to communicate analysis findings effectively
- Knowledge of pharma datasets and the industry is advantageous
- Handle code refactoring, training, retraining, deployment, testing, and monitoring of ML models for drift
- Optimize model hyperparameters
- Show willingness to learn new skills, particularly ML applications for solving business problems

Required Qualifications:
- Bachelor's degree in Computer Science, Statistics, or a related field preferred
- 2-6 years of hands-on experience with data analysis, including coding, summarization, and interpretation
- Proficiency in coding languages such as SQL or Python
- Prior experience deploying, evaluating, and fine-tuning recommendation engine models using ML techniques (e.g., CNN/LSTM, GA, XGBoost) for healthcare-related industries
- Proficiency in feature engineering, feature selection, and model validation on big data
- Familiarity with cloud technologies such as AWS, including S3, EMR, EC2, Redshift, and Glue; experience with visualization tools like Tableau and Power BI is a plus

At Lilly, we are dedicated to fostering an inclusive workforce that provides equal opportunities for individuals with disabilities. If you require accommodations to apply for a position at Lilly, please complete the accommodation request form; note that this form is specifically for accommodation requests related to the application process.
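As an illustration of the model-tuning and validation work this listing names (XGBoost with hyperparameter optimization for a recommendation-style task), the sketch below tunes an XGBoost classifier that predicts whether a doctor engages with a given channel. The data file, column names, and parameter grid are assumptions for illustration; this is not Lilly's actual pipeline.

```python
# Minimal sketch: tuning an XGBoost model for a next-best-action style task
# (will this doctor engage with a given channel?). Data source, columns, and
# the binary target "engaged" are illustrative assumptions.
import pandas as pd
from sklearn.model_selection import RandomizedSearchCV, train_test_split
from xgboost import XGBClassifier

df = pd.read_parquet("hcp_engagement_features.parquet")  # hypothetical feature table
X = df.drop(columns=["engaged"])
y = df["engaged"]  # assumed 0/1 label
X_train, X_val, y_train, y_val = train_test_split(X, y, stratify=y, random_state=0)

param_dist = {
    "max_depth": [3, 5, 7],
    "learning_rate": [0.01, 0.05, 0.1],
    "n_estimators": [200, 400, 800],
    "subsample": [0.7, 0.9, 1.0],
}
search = RandomizedSearchCV(
    XGBClassifier(eval_metric="logloss"),
    param_distributions=param_dist,
    n_iter=10,
    scoring="roc_auc",
    cv=3,
    random_state=0,
)
search.fit(X_train, y_train)
print("Best params:", search.best_params_)
print("Validation AUC:", search.score(X_val, y_val))
```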
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
haryana
On-site
You will be joining Srijan, a Material company and a renowned global digital engineering firm, as a Senior Developer / Lead specializing in Data Science. Your role will involve working with generative AI models such as Azure OpenAI GPT and multi-agent system architectures. You should be proficient in Python and AI/ML libraries such as TensorFlow, PyTorch, and Scikit-learn; have experience with frameworks like LangChain and AutoGen for multi-agent systems; and have strong knowledge of data science techniques including data preprocessing, feature engineering, and model evaluation. At least 4 years of experience in a similar role is preferred, and the position is based in Gurgaon or Bangalore.

Familiarity with big data tools such as Spark, Hadoop, and Databricks, and with SQL and NoSQL databases, will be beneficial. Expertise in ReactJS for building responsive, interactive user interfaces is a plus.

In this role, you can expect professional development and mentorship, a hybrid, remote-friendly work mode, health and family insurance, 40+ leave days per year including maternity and paternity leave, and access to wellness, meditation, and counseling sessions.
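As a small illustration of the preprocessing, feature-engineering, and model-evaluation skills this role calls for, the sketch below builds a scikit-learn pipeline and scores it with cross-validation. The dataset, columns, and churn target are assumptions for illustration only.

```python
# Minimal sketch of a preprocessing + feature-engineering + model-evaluation loop.
# File name, column names, and the binary "churned" target are illustrative assumptions.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

df = pd.read_csv("customer_events.csv")  # hypothetical input
numeric = ["sessions_last_30d", "avg_order_value"]
categorical = ["plan_tier", "acquisition_channel"]

preprocess = ColumnTransformer([
    ("num", StandardScaler(), numeric),                         # scale numeric features
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),  # encode categoricals
])
pipeline = Pipeline([
    ("prep", preprocess),
    ("clf", RandomForestClassifier(n_estimators=300, random_state=0)),
])

scores = cross_val_score(pipeline, df[numeric + categorical], df["churned"],
                         cv=5, scoring="f1")
print("Cross-validated F1:", scores.mean())
```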
Posted 1 week ago