4.0 years
40 - 50 Lacs
Pune/Pimpri-Chinchwad Area
Remote
Experience: 4.00+ years
Salary: INR 4,000,000-5,000,000 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by Crop.Photo)
(Note: This is a requirement for one of Uplers' clients - Crop.Photo)

What do you need for this opportunity?
Must-have skills: Customer-Centric Approach, NumPy, OpenCV, PIL, PyTorch

Crop.Photo is looking for:
Our engineers don't just write code. They frame product logic, shape UX behavior, and ship features. No PMs handing down tickets. No design handoffs. If you think like an owner and love combining deep ML logic with hard product edges, this role is for you. You'll be working on systems that transform and generate millions of visual assets for small-to-large enterprises at scale.

What You'll Do
- Build and own AI-backed features end to end, from ideation to production, including layout logic, smart cropping, visual enhancement, out-painting, and GenAI workflows for background fills.
- Design scalable APIs that wrap vision models like BiRefNet, YOLOv8, Grounding DINO, SAM, CLIP, ControlNet, etc., into batch and real-time pipelines.
- Write production-grade Python code to manipulate and transform image data using NumPy, OpenCV (cv2), PIL, and PyTorch.
- Handle pixel-level transformations, from custom masks and color space conversions to geometric warps and contour ops, with speed and precision (see the sketch after this posting).
- Integrate your models into our production web app (AWS-based Python/Java backend) and optimize them for latency, memory, and throughput.
- Frame problems when specs are vague: you'll help define what "good" looks like, and then build it.
- Collaborate with product, UX, and other engineers without relying on formal handoffs; you own your domain.

What You'll Need
- 2-3 years of hands-on experience with vision and image generation models such as YOLO, Grounding DINO, SAM, CLIP, Stable Diffusion, VITON, or TryOnGAN, including experience with inpainting and outpainting workflows using Stable Diffusion pipelines (e.g., Diffusers, InvokeAI, or custom-built solutions).
- Strong hands-on knowledge of NumPy, OpenCV, PIL, PyTorch, and image visualization/debugging techniques.
- 1-2 years of experience working with popular LLM APIs such as OpenAI, Anthropic, and Gemini, and composing multi-modal pipelines.
- Solid grasp of production model integration: model loading, GPU/CPU optimization, async inference, caching, and batch processing.
- Experience solving real-world visual problems like object detection, segmentation, composition, or enhancement.
- Ability to debug and diagnose visual output errors, e.g., weird segmentation artifacts, off-center crops, broken masks.
- Deep understanding of image processing in Python: array slicing, color formats, augmentation, geometric transforms, contour detection, etc.
- Experience building and deploying FastAPI services and containerizing them with Docker for AWS-based infra (ECS, EC2/GPU, Lambda).
- A customer-centric approach: you think about how your work affects end users and product experience, not just model performance.
- A quest for high-quality deliverables: you write clean, tested code and debug edge cases until they're truly fixed.
- The ability to frame problems from scratch and work without strict handoffs: you build from a goal, not a ticket.

Who You Are
- You've built systems, not just prototypes.
- You care about both ML results and the system's behavior in production.
- You're comfortable taking a rough business goal and shaping the technical path to get there.
- You're energized by product-focused AI work: things that users feel and rely on.
- You've worked in, or want to work in, a startup-grade environment: messy, fast, and impactful.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Increase your chances of getting shortlisted and meet the client for the interview.

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role is to help talent find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
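As a flavour of the pixel-level work this role describes, here is a minimal, hedged sketch using OpenCV and NumPy: a brightness mask, a colour-space conversion, contour extraction, and a bounding-box crop. The file names and threshold are illustrative assumptions, not Crop.Photo's actual pipeline.

```python
import cv2
import numpy as np

img = cv2.imread("product.jpg")                     # BGR uint8 array, shape (H, W, 3)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)        # colour-space conversion

# Custom mask: keep pixels brighter than an arbitrary threshold
_, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)

# Contour ops: find external contours of the masked region
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Geometric warp: crop to the largest contour's bounding box and resize
if contours:
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    crop = cv2.resize(img[y:y + h, x:x + w], (512, 512), interpolation=cv2.INTER_AREA)
    cv2.imwrite("crop.png", crop)
```

In production, operations like these would typically sit inside the batch or async pipelines described above, next to the vision models being wrapped.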
Posted 1 week ago
11.0 - 20.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Title: Data Science Manager
Location: Hybrid at Bengaluru, Karnataka, India
Experience: 11-20 years

Roles and Responsibilities
- Lead and manage a team of data scientists and analysts in delivering impactful projects.
- Coordinate with cross-functional teams to understand business objectives and define data-driven strategies.
- Design and implement machine learning models to solve complex business problems.
- Develop processes and systems to collect and analyze data, ensuring quality controls and data governance are in place.
- Ensure the application of natural language processing (NLP) techniques for text data analysis and insight generation.
- Utilize Python and machine learning frameworks to build scalable data models.

Required Qualifications
- Bachelor's or Master's degree in Data Science, Computer Science, Statistics, or a related field.
- Extensive experience in using machine learning algorithms and frameworks.
- Proficiency in programming languages such as Python and familiarity with libraries like Pandas, NumPy, Scikit-learn, and TensorFlow.
- Proven ability to implement and scale up models in a production environment.
- Strong command of natural language processing (NLP) techniques and applications.
- Experience with data visualization tools and the ability to present complex data insights clearly.
- Excellent communication and leadership skills to manage a team and communicate with stakeholders.

Key Responsibilities
- Oversee the development and deployment of advanced machine learning models that contribute directly to business outcomes.
- Engage and collaborate with stakeholders to develop predictive models and data-driven strategies.
- Lead efforts in optimizing existing models to improve accuracy and efficiency.
- Guide the team in exploring and utilizing new data science techniques and technologies to enhance current methodologies.
- Mentor and train junior data scientists in technical skills and project management.
- Manage timelines, deliverables, and resources efficiently to ensure successful completion of projects.
- Stay updated on the latest industry trends and innovations to bring new ideas to the team.
Posted 1 week ago
5.0 - 10.0 years
0 Lacs
Maharashtra
On-site
You are a highly skilled and motivated Lead Data Scientist / Machine Learning Engineer joining a team pivotal to the development of a cutting-edge reporting platform designed to measure and optimize online marketing campaigns. Your role will focus on data engineering, the ML model lifecycle, and cloud-native technologies.

You will be responsible for designing, building, and maintaining scalable ELT pipelines, ensuring high data quality, integrity, and governance. You will develop and validate predictive and prescriptive ML models to enhance marketing campaign measurement and optimization; experimenting with different algorithms and leveraging various models will be crucial in driving insights and recommendations. You will deploy and monitor ML models in production and implement CI/CD pipelines for seamless updates and retraining. You will work closely with data analysts, marketing teams, and software engineers to align ML and data solutions with business objectives, translate complex model insights into actionable business recommendations, and present findings to stakeholders.

Qualifications & Skills

Educational Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Science, Machine Learning, Artificial Intelligence, Statistics, or a related field.
- Certifications in Google Cloud (Professional Data Engineer, ML Engineer) are a plus.

Must-Have Skills:
- Experience: 5-10 years of relevant hands-on experience with the skill set below.
- Data Engineering: experience with ETL/ELT pipelines, data ingestion, transformation, and orchestration (Airflow, Dataflow, Composer).
- ML Model Development: strong grasp of statistical modeling, supervised/unsupervised learning, time-series forecasting, and NLP.
- Programming: proficiency in Python (Pandas, NumPy, Scikit-learn, TensorFlow/PyTorch) and SQL for large-scale data processing.
- Cloud & Infrastructure: expertise in GCP (BigQuery, Vertex AI, Dataflow, Pub/Sub, Cloud Storage) or equivalent cloud platforms.
- MLOps & Deployment: hands-on experience with CI/CD pipelines, model monitoring, and version control (MLflow, Kubeflow, Vertex AI, or similar tools).
- Data Warehousing & Real-time Processing: strong knowledge of modern data platforms for batch and streaming data processing.

Nice-to-Have Skills:
- Experience with Graph ML, reinforcement learning, or causal inference modeling.
- Working knowledge of BI tools (Looker, Tableau, Power BI) for integrating ML insights into dashboards.
- Familiarity with marketing analytics, attribution modeling, and A/B testing methodologies.
- Experience with distributed computing frameworks (Spark, Dask, Ray).

Location: Bengaluru
Brand: Merkle
Time Type: Full time
Contract Type: Permanent
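For orientation, a minimal sketch of the supervised-learning piece such a platform might include: a campaign-conversion classifier built with scikit-learn. The CSV path and column names are hypothetical, not the client's schema, and the features are assumed to be already numeric.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("campaign_events.csv")              # hypothetical marketing dataset

# Assumed numeric features and a binary conversion label
X = df[["impressions", "clicks", "spend", "channel_id"]]
y = df["converted"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
model = GradientBoostingClassifier().fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```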
Posted 1 week ago
0.0 - 3.0 years
0 Lacs
Maharashtra
On-site
Job Description: As an AI Engineer Intern at Brainwonders, you will have the exciting opportunity to work in cutting-edge fields like Large Language Models (LLMs), AI agents, ML fundamentals, and cloud deployment with a focus on AWS. This internship is designed for individuals with 0-1 year of experience, including final-year students and freshers. The position is based in Borivali East, Mumbai, and offers the possibility of a full-time offer upon successful completion of the internship.

Your primary responsibilities will include building and testing LLM-based applications and workflows, experimenting with agent frameworks such as LangGraph and CrewAI, and utilizing transformer models like LLaMA and Mistral for domain-specific tasks. Additionally, you will deploy models and services on AWS using tools like Lambda, EC2, and SageMaker, and collaborate on prompt engineering, fine-tuning, and pipeline optimization.

To excel in this role, you should have a solid understanding of LLMs, tokenization, and transformers, along with proficiency in Python programming using libraries like Pandas, NumPy, Hugging Face, and LangChain. Exposure to AWS services like EC2, Lambda, S3, or SageMaker is essential, as is a strong eagerness to explore real-world AI use cases such as agents, copilots, and RAG-based systems. Basic knowledge of machine learning and hands-on project experience will be advantageous.

It would be beneficial if you have experience with serverless deployments on AWS, familiarity with monitoring and logging tools like CloudWatch and OpenTelemetry, and a track record of contributing to open-source AI tools or models. This internship offers a unique opportunity to work with a dynamic team and gain valuable experience in the field of artificial intelligence. Thank you for considering Brainwonders as your next career destination.
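A minimal sketch of using a transformer model for a domain-specific task through the Hugging Face pipeline API, similar to the work this internship describes. The model name is illustrative; LLaMA/Mistral checkpoints may require access approval and a GPU to run at a useful speed.

```python
from transformers import pipeline

# Load an open-weight instruction model (illustrative checkpoint name)
generator = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.2")

prompt = "Summarise the key aptitudes measured by a career-assessment test:"
out = generator(prompt, max_new_tokens=128, do_sample=False)
print(out[0]["generated_text"])
```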
Posted 1 week ago
2.0 - 6.0 years
0 Lacs
Meerut, Uttar Pradesh
On-site
As an AI/ML Developer at our company in Meerut, you will utilize your 2 to 5 years of hands-on experience to design, build, and deploy machine learning models. Your main responsibilities will include developing models for tasks like classification, regression, clustering, NLP, and computer vision. You will collaborate with cross-functional teams to understand business requirements and implement AI/ML solutions accordingly.

Your expertise in Python libraries such as Pandas, NumPy, and Scikit-learn will be crucial in cleaning, preprocessing, and analyzing large datasets. Additionally, you will work with deep learning frameworks like TensorFlow, PyTorch, or Keras to fine-tune models for optimal performance, accuracy, and scalability. Developing APIs or services for model deployment and monitoring post-deployment performance will also be part of your responsibilities.

To excel in this role, you should hold a Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field. Strong proficiency in Python and experience with ML algorithms and evaluation metrics are essential. Exposure to deep learning frameworks and tools like Flask, FastAPI, Docker, or AWS SageMaker will be advantageous, as will knowledge of SQL and NoSQL databases, version control tools like Git, and cloud platforms like AWS, GCP, or Azure. Keeping up to date with the latest advancements in AI/ML research and techniques is encouraged so you can contribute effectively to our projects.

If you are passionate about leveraging AI and ML technologies to drive innovation and solve real-world problems, we invite you to join our dynamic team and make a meaningful impact in the field.
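As an illustration of the model-deployment responsibility above, here is a minimal FastAPI serving sketch, assuming a pre-trained scikit-learn model saved with joblib; the file name and feature layout are placeholders, not this company's actual service.

```python
import joblib
import numpy as np
from fastapi import FastAPI
from pydantic import BaseModel
from typing import List

app = FastAPI()
model = joblib.load("model.joblib")    # assumed: a fitted sklearn classifier


class Features(BaseModel):
    values: List[float]                # flat feature vector for one sample


@app.post("/predict")
def predict(features: Features):
    x = np.asarray(features.values).reshape(1, -1)
    return {"prediction": int(model.predict(x)[0])}
```

Saved as app.py, this would be run with `uvicorn app:app --reload` and called by POSTing a JSON body like {"values": [1.0, 2.0, 3.0]} to /predict.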
Posted 1 week ago
0 years
0 Lacs
Delhi, India
On-site
We are seeking a highly skilled and experienced Senior Python Developer to join our dynamic team. The ideal candidate will have a strong background in designing, developing, and deploying robust and scalable Python applications. You will be responsible for leading complex projects, mentoring junior developers, and contributing to architectural decisions. This role requires a deep understanding of Python best practices, excellent problem-solving skills, and a passion for building high-quality software.

Key Responsibilities

Development and Architecture
- Design, develop, and maintain high-performance, scalable, and secure Python applications.
- Lead the development of complex features and modules, ensuring adherence to architectural best practices.
- Contribute to architectural discussions and decisions, providing technical leadership and guidance.
- Implement and optimize microservices architecture.
- Provide technical expertise in the selection and implementation of appropriate technologies and frameworks.

API Development and Integration
- Design and implement RESTful or GraphQL APIs for seamless integration with frontend and other backend systems.
- Integrate with third-party APIs and services.
- Ensure API security.

Database Management
- Design and implement database schemas (SQL and NoSQL) to support application requirements.
- Write efficient SQL queries and optimize database performance.
- Integrate applications with various database systems (e.g., PostgreSQL, MySQL, MongoDB).

Performance Optimization and Troubleshooting
- Identify and resolve performance bottlenecks and latency issues.
- Debug and troubleshoot complex Python systems.
- Conduct thorough performance testing and optimization.
- Implement robust monitoring and logging solutions.

Collaboration and Mentorship
- Collaborate with product managers, designers, and other developers to define and implement software solutions.
- Mentor and guide junior developers, providing technical guidance and support.
- Participate in code reviews, providing constructive feedback and ensuring code quality.
- Contribute to technical documentation and knowledge sharing.

Software Development Lifecycle
- Participate in agile development processes, including sprint planning, daily stand-ups, and retrospectives.
- Ensure proper documentation and maintain software development standards.
- Stay updated with the latest Python technologies and industry trends.

Deployment and DevOps
- Deploy and maintain applications on cloud platforms (e.g., AWS, Azure, GCP).
- Implement CI/CD pipelines for automated testing and deployment.
- Work with containerization technologies (Docker, Kubernetes).

Required Technical Skills

Python Proficiency
- Extensive experience in Python 3.
- Strong understanding of Python design patterns and best practices.
- Experience with asynchronous programming (e.g., asyncio); see the sketch after this posting.

Backend Frameworks
- Proficiency in at least one Python web framework (e.g., Django, Flask, FastAPI).

API Development
- Strong understanding of RESTful or GraphQL API design principles and best practices.
- Experience with API documentation tools (e.g., Swagger, OpenAPI).

Databases
- Proficiency in SQL and NoSQL databases.
- Experience with database ORMs (e.g., SQLAlchemy, Django ORM).

Cloud Platforms
- Experience with cloud platforms such as AWS, Azure, or GCP.
- Experience with deploying and managing Python applications in cloud environments.

Version Control
- Proficiency in Git version control.

Testing
- Experience with unit testing, integration testing, and end-to-end testing.
- Proficiency in testing frameworks such as pytest or unittest.
DevOps
- Familiarity with CI/CD pipelines and tools (e.g., Jenkins, GitLab CI).
- Experience with containerization technologies (e.g., Docker, Kubernetes).

Additional Skills
- Experience with message queues (e.g., RabbitMQ, Kafka).
- Knowledge of serverless architecture.
- Experience with caching mechanisms (e.g., Redis, Memcached).
- Experience with performance monitoring and profiling tools.
- Knowledge of security best practices for Python applications.
- Experience with data science libraries (e.g., NumPy, Pandas).

Soft Skills
- Strong problem-solving and analytical skills.
- Excellent communication and interpersonal skills.
- Ability to work independently and as part of a team.
- Strong leadership and mentoring abilities.
- Ability to learn and adapt to new technologies.
- Strong attention to detail and a commitment to quality.

Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. (ref:hirist.tech)
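A minimal asyncio sketch of the asynchronous-programming requirement above: several I/O-bound calls fanned out concurrently and gathered. The fetch coroutine is a stand-in for a real database or HTTP call.

```python
import asyncio


async def fetch(resource: str) -> str:
    await asyncio.sleep(0.1)           # placeholder for real I/O latency
    return f"payload from {resource}"


async def main() -> None:
    # Fan out three independent calls and wait for all of them together
    results = await asyncio.gather(*(fetch(r) for r in ["users", "orders", "invoices"]))
    print(results)


asyncio.run(main())
```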
Posted 1 week ago
1.0 - 5.0 years
0 Lacs
Hyderabad, Telangana
On-site
RECODE minds is currently seeking a full-time Research Associate in Data Science to join our team for an exciting new venture within the e-learning industry. As the ideal candidate, you will play a crucial role in conducting analytical experiments with precision and regularly assessing alternative models using theoretical approaches. This opportunity offers you the chance to be part of a dynamic and innovative team that creates analysis tools with a positive impact on our products and clients.

Your responsibilities will include researching and developing statistical learning models for data analysis, collaborating with product managers and technical trainers to address company needs, staying updated on the latest technological advancements, processing and validating data integrity, performing ad-hoc analysis, and effectively communicating results to key stakeholders. Proficiency in data science toolkits such as Python, R, Weka, NumPy, MATLAB, etc., is essential, with expertise in at least one of these tools being highly desirable. You will also be required to implement new methodologies and optimize development efforts through strategic database usage and project design.

The ideal candidate should possess a Bachelor's or Master's degree in Statistics, Computer Science, Machine Learning, or a related field. Proficiency in data analytics using Python, ML, R, advanced ML, and Tableau (optional) is crucial; 1-3 years of relevant work experience is also acceptable. This position is open for both internship and full-time roles, based in Hyderabad. If you are interested in joining our team, please share your resume along with a brief summary of your profile to hr@recodeminds.com.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Hyderabad, Telangana
On-site
As an experienced and passionate AI/ML Trainer, you will be responsible for delivering hands-on training for a structured Artificial Intelligence & Machine Learning program. This contract-based opportunity is ideal for professionals with strong technical expertise and a flair for teaching.

Your key responsibilities will include delivering in-depth classroom and lab-based training sessions on core AI/ML topics; designing, developing, and maintaining training material, assignments, and project modules; conducting doubt-clearing sessions, assessments, and live coding exercises; guiding students through capstone projects and real-world datasets; and evaluating learner performance and providing constructive feedback. It will also be your responsibility to stay updated with the latest industry trends, tools, and techniques in AI/ML.

The topics you will cover during the training sessions include Python for Data Science; statistics and linear algebra basics; machine learning (supervised, unsupervised, and ensemble techniques); deep learning (ANN, CNN, RNN, etc.); natural language processing (NLP); and model deployment (Flask, Streamlit, etc.), with hands-on experience with libraries like NumPy, Pandas, Scikit-learn, TensorFlow/Keras, and OpenCV, and tools such as Jupyter, Git, Google Colab, and VS Code.

To be eligible for this role, you should have a Bachelor's or Master's degree in Computer Science, Data Science, or a related field, along with 2-4 years of relevant industry/training experience in AI/ML. Strong communication and presentation skills are essential, as is the ability to mentor and engage learners of varying skill levels. Prior training or teaching experience, whether academic or corporate, is preferred.

Your skills should include expertise in ML, scikit-learn, TensorFlow, Jupyter, natural language processing, OpenCV, Python for Data Science, Google Colab, deep learning, model deployment, NumPy, Keras, linear algebra, statistics, Pandas, Git, and VS Code, along with a good understanding of contract-based engagements and of artificial intelligence and machine learning concepts.
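As one possible teaching example for the model-deployment module listed above, a minimal Streamlit sketch that loads a saved classifier and exposes it through a small form; the model file and input fields are illustrative placeholders.

```python
import joblib
import numpy as np
import streamlit as st

st.title("Iris species demo")
model = joblib.load("iris_model.joblib")   # assumed: a fitted sklearn classifier

sepal_len = st.number_input("Sepal length (cm)", value=5.1)
sepal_wid = st.number_input("Sepal width (cm)", value=3.5)
petal_len = st.number_input("Petal length (cm)", value=1.4)
petal_wid = st.number_input("Petal width (cm)", value=0.2)

if st.button("Predict"):
    x = np.array([[sepal_len, sepal_wid, petal_len, petal_wid]])
    st.write("Predicted class:", int(model.predict(x)[0]))
```

In a classroom setting this would be launched with `streamlit run app.py`.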
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
Hyderabad, Telangana
On-site
As an AI/ML Engineer, your primary responsibility will be to collaborate effectively with cross-functional teams, including data scientists and product managers. Together, you will work on acquiring, processing, and managing data for the integration and optimization of AI/ML models. Your role will involve designing and implementing robust, scalable data pipelines to support cutting-edge AI/ML models. Additionally, you will be responsible for debugging, optimizing, and enhancing machine learning models to ensure quality assurance and performance improvements.

Operating container orchestration platforms like Kubernetes with advanced configurations and service mesh implementations for scalable ML workload deployments will be a key part of your job. You will also design and build scalable LLM inference architectures, employing GPU memory optimization techniques and model quantization for efficient deployment. Engaging in advanced prompt engineering and fine-tuning of large language models (LLMs) will be crucial, with a focus on semantic retrieval and chatbot development.

Documentation will be an essential aspect of your work, involving the recording of model architectures, hyperparameter optimization experiments, and validation results using version control and experiment tracking tools like MLflow or DVC. Researching and implementing cutting-edge LLM optimization techniques such as quantization and knowledge distillation will be part of your ongoing tasks to ensure efficient model performance and reduced computational costs. Collaborating closely with stakeholders to develop innovative natural language processing solutions, with a specialization in text classification, sentiment analysis, and topic modeling, will be another significant aspect of your role. You will also stay up to date with industry trends and advancements in AI technologies and integrate new methodologies and frameworks to continually enhance the AI engineering function.

In terms of qualifications, a Bachelor's degree in any engineering stream is required, along with a minimum of 4+ years of relevant experience in AI. Proficiency in Python with expertise in data science libraries (NumPy, Pandas, scikit-learn) and deep learning frameworks (PyTorch, TensorFlow) is essential. Experience with LLM frameworks, big data processing using Spark, version control and experiment tracking, software engineering and development, DevOps, infrastructure, cloud services, and LLM projects is also necessary.

Your expertise should include a strong mathematical foundation in statistics, probability, linear algebra, and optimization, along with a deep understanding of the ML and LLM development lifecycle. Additionally, you should have expertise in feature engineering, embedding optimization, dimensionality reduction, A/B testing, experimental design, statistical hypothesis testing, RAG systems, vector databases, semantic search implementation, and LLM optimization techniques. Strong analytical thinking, excellent communication skills, experience translating business requirements into data science solutions, project management skills, collaboration abilities, dedication to staying current with the latest ML research, and the ability to mentor and share knowledge with team members are essential competencies for this role.
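A minimal sketch of the experiment-tracking practice mentioned above, logging hyperparameters and validation metrics for one run with MLflow; the experiment name and values are illustrative, not a specific project's settings.

```python
import mlflow

mlflow.set_experiment("llm-finetune-demo")

with mlflow.start_run():
    # Hypothetical run settings and results for a fine-tuning experiment
    mlflow.log_param("base_model", "llama-3-8b")
    mlflow.log_param("learning_rate", 2e-5)
    mlflow.log_metric("val_loss", 1.87)
    mlflow.log_metric("rougeL", 0.41)
```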
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Hyderabad, Telangana
On-site
You are an experienced Python Backend Engineer with a strong background in AWS and AI/ML. Your primary responsibility will be to design, develop, and maintain Python-based backend systems and AI-powered services. You will be tasked with building and managing RESTful APIs using Django or FastAPI for AI/ML model integration. Additionally, you will develop and deploy machine learning and GenAI models using frameworks like TensorFlow, PyTorch, or Scikit-learn. Your expertise in implementing GenAI pipelines using LangChain will be crucial, and experience with LangGraph is considered a strong advantage.

You will leverage various AWS services such as EC2, Lambda, S3, SageMaker, and CloudFormation for infrastructure and deployment purposes. Collaborating with data scientists, DevOps, and architects to integrate models and workflows into production will be a key aspect of your role. Furthermore, you will be responsible for building and managing CI/CD pipelines for backend and model deployments. Ensuring the performance, scalability, and security of applications in cloud environments will be paramount, as will monitoring production systems, troubleshooting issues, and optimizing model and API performance.

To excel in this role, you must possess at least 5 years of hands-on experience in Python backend development. Strong experience in building RESTful APIs using Django or FastAPI is essential. Proficiency in AWS cloud services, a solid understanding of ML/AI concepts, and experience with ML libraries are prerequisites. Hands-on experience with LangChain for building GenAI applications and familiarity with DevOps tools and microservices architecture will be beneficial. Additionally, Agile development experience and exposure to tools like Docker, Kubernetes, Git, Jenkins, Terraform, and CI/CD workflows will be advantageous. Experience with LangGraph, LLMs, embeddings, and vector databases, as well as knowledge of MLOps tools and practices, are considered nice-to-have qualifications.

In summary, as a Python Backend Engineer with expertise in AWS and AI/ML, you will play a vital role in designing, developing, and maintaining intelligent backend systems and GenAI-driven applications. Your contributions will be instrumental in scaling backend systems and implementing AI/ML applications effectively.
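For illustration, a minimal sketch of invoking a deployed SageMaker endpoint from a Python backend with boto3. The endpoint name and payload schema are assumptions, and AWS credentials and a default region are expected to be configured in the environment.

```python
import json
import boto3

runtime = boto3.client("sagemaker-runtime")

payload = {"inputs": "Classify the sentiment of: great product, fast delivery"}
response = runtime.invoke_endpoint(
    EndpointName="demo-genai-endpoint",          # hypothetical endpoint name
    ContentType="application/json",
    Body=json.dumps(payload),
)
print(json.loads(response["Body"].read()))        # model output as returned by the endpoint
```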
Posted 1 week ago
2.0 - 6.0 years
0 Lacs
Ghaziabad, Uttar Pradesh
On-site
As a skilled Python Backend Engineer at Cognio Labs, you will be responsible for leveraging your expertise in FastAPI and your strong foundation in Large Language Models (LLMs) and Retrieval-Augmented Generation (RAG) technologies. Your role will involve a blend of backend development and data science to facilitate data processing for model fine-tuning and training.

You should have a minimum of 2 years of experience in Python backend development and the ability to develop and maintain APIs using the FastAPI framework. Proficiency in asynchronous programming, background task implementation, and database management using both SQL and NoSQL databases, especially MongoDB, is essential, along with familiarity with Git version control and RESTful API design and implementation. Experience with containerization technologies like Docker, an understanding of component-based architecture principles, and the ability to write clean, maintainable, and testable code are valuable additional technical skills, as is knowledge of testing frameworks, quality assurance practices, and AI technologies such as LangChain, ChatGPT endpoints, and other LLM frameworks.

In the realm of AI and data science, your experience with LLMs and RAG implementation will be highly valued. You should be adept at data processing for fine-tuning language models, manipulating and analyzing data using Python libraries such as Pandas and NumPy, and implementing machine learning workflows efficiently.

Your key responsibilities will include designing, developing, and maintaining robust, scalable APIs using the FastAPI framework; preparing data for model fine-tuning and training; implementing background tasks and asynchronous processing for system optimization; integrating LLM- and RAG-based solutions into the product ecosystem; and following industry best practices to write efficient, maintainable code. Collaboration with team members, database design and implementation, troubleshooting and debugging codebase issues, and staying updated on emerging technologies in Python development, LLMs, and data science will be integral parts of your role at Cognio Labs.
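A minimal sketch of the background-task pattern this role calls for: FastAPI accepts a document and schedules indexing work after the response returns. The indexing function is a placeholder for real RAG ingestion (chunking, embedding, vector-store upsert), not Cognio Labs' actual code.

```python
from fastapi import BackgroundTasks, FastAPI

app = FastAPI()


def index_document(doc_id: str) -> None:
    # Placeholder: chunk the document, embed it, and upsert into a vector store
    print(f"indexing {doc_id}")


@app.post("/documents/{doc_id}")
async def ingest(doc_id: str, background_tasks: BackgroundTasks):
    # Queue the heavy work; respond to the client immediately
    background_tasks.add_task(index_document, doc_id)
    return {"status": "accepted", "doc_id": doc_id}
```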
Posted 1 week ago
2.0 - 6.0 years
0 Lacs
Haryana
On-site
The role of AI Engineer at our company involves developing essential software features to ensure timely delivery while meeting the company's performance and quality standards. As an AI Engineer, you will be part of our team working on various data science initiatives, ranging from traditional analytics to cutting-edge AI applications. This position provides a great opportunity to gain experience in machine learning and artificial intelligence across different domains.

Your core responsibilities will include developing and maintaining data processing pipelines using Python; creating and deploying GenAI applications through APIs and frameworks like Semantic Kernel and LangChain; conducting data analysis and generating insightful visualizations; collaborating with different teams to understand business requirements and implement AI solutions; contributing to the development and upkeep of language model evaluations and other AI applications; writing clean, documented, and tested code following best practices; and monitoring application performance while making necessary improvements.

To qualify for this role, you should hold a Bachelor's degree in Computer Science, Statistics, Mathematics, or a related field, along with at least 2 years of hands-on experience in applied data science and machine learning. Proficiency in Python programming, familiarity with LLMs and their applications, experience with cloud platforms such as AWS, Azure, or GCP, version control using Git, strong analytical and problem-solving skills, and knowledge of PyTorch and deep learning fundamentals are essential requirements.

In terms of technical skills, you should have knowledge of LLMs and Generative AI platforms, be familiar with popular LLM APIs, have experience with frameworks like Semantic Kernel, LangChain, OpenAI, and Google Vertex AI, be proficient in programming languages including Python, SQL, and R, have expertise in libraries such as Hugging Face Transformers, Matplotlib, and Seaborn, be skilled in ML/statistics with tools like scikit-learn, statsmodels, and PyTorch, understand data processing with Pandas, NumPy, and PySpark, and have experience working with cloud platforms like AWS, GCP, or Azure.

This full-time permanent position is located in Bengaluru and is with the Merkle brand.
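As a small illustration of the GenAI application work described above, a hedged sketch of calling an LLM API with the OpenAI Python client (v1-style); the model name and prompts are illustrative and the API key is assumed to be set in the environment.

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a concise data analyst."},
        {"role": "user", "content": "Summarise churn drivers from last quarter in three bullets."},
    ],
)
print(response.choices[0].message.content)
```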
Posted 1 week ago
4.0 - 9.0 years
10 - 20 Lacs
Noida
Work from Office
We're Hiring: Data Science Engineer (Python)
Location: A-50, Second Floor, Sector-67, Noida-201301
Experience: 0-3 Years
Company: Crosslynx US LLC

Are you passionate about solving real-world problems using data and Python? Join our team at Crosslynx and work on innovative projects in AI, ML, and cloud-based analytics platforms.

About Us
Crosslynx US LLC is a global leader in Product Engineering and Cloud Engineering, delivering cutting-edge solutions across industries. With expertise in AI/ML, Embedded Systems, Cloud Applications, RF Design, and Quality Assurance, we empower enterprises worldwide. Our R&D and development centers are located in Milpitas (California, USA) and Noida (India).

Role Overview
As a Data Science Engineer, you'll be responsible for building scalable data pipelines, developing machine learning models, and integrating data-driven solutions into enterprise platforms. You'll collaborate with cross-functional teams to deliver high-impact analytics and AI solutions.

Key Responsibilities
- Develop and maintain Python-based data processing and analytics pipelines
- Work with structured and unstructured data from multiple sources and databases
- Build and deploy machine learning models using frameworks like scikit-learn, TensorFlow, PyTorch
- Implement data visualization dashboards using tools like Plotly, Dash, or Streamlit
- Ensure data integrity, security, and compliance across systems
- Collaborate with software engineers, data analysts, and product teams
- Write unit tests and debug data workflows for reliability and scalability
- Use version control tools like Git for collaborative development

Required Skills
- Strong proficiency in Python and familiarity with Python testing frameworks (e.g., Pytest, Robot)
- Experience with ORM libraries and database schema design
- Understanding of multi-process architecture and event-driven programming
- Basic knowledge of HTML5, CSS3, JavaScript for dashboard integration
- Familiarity with templating engines (e.g., Jinja2)
- Knowledge of authentication/authorization across systems
- Strong debugging and unit testing skills
- Experience with Git/SVN for version control

Preferred Skills
- Exposure to data science libraries like Pandas, NumPy, Matplotlib
- Experience with cloud platforms (AWS, GCP, Azure)
- Familiarity with CI/CD pipelines for model deployment
- Understanding of data security and compliance standards
- Knowledge of API integration and microservices architecture

Perks & Benefits
- Healthy work-life balance
- Freedom to innovate and express ideas
- Opportunities to work with global clients
- Exposure to cutting-edge technologies in AI/ML, IoT, 5G, Cloud

Ready to Apply?
Send your resume to prabha.pandey@crosslynxus.com
Let's build intelligent solutions together!
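A minimal sketch of the unit-testing expectation above: a small pandas transformation plus a pytest case for it. The function and column names are illustrative, not Crosslynx's actual pipeline code.

```python
import numpy as np
import pandas as pd


def add_conversion_rate(df: pd.DataFrame) -> pd.DataFrame:
    # Derive conversion rate, avoiding division by zero
    out = df.copy()
    out["conversion_rate"] = out["conversions"] / out["clicks"].replace(0, np.nan)
    return out


def test_add_conversion_rate():
    df = pd.DataFrame({"clicks": [100, 0], "conversions": [5, 0]})
    result = add_conversion_rate(df)
    assert result.loc[0, "conversion_rate"] == 0.05
    assert pd.isna(result.loc[1, "conversion_rate"])
```

Running `pytest` in the project directory would discover and execute the test.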
Posted 1 week ago
3.0 - 8.0 years
12 - 22 Lacs
Noida, Greater Noida, Delhi / NCR
Hybrid
#GenerativeAI experts ready for the AI race, this is for you! Hexaware is conducting a walk-in drive for AI Engineer and Lead Data Scientist (GenAI) roles in Noida on 3rd Aug 2025 (Sunday). Interested candidates, share your CV at umaparvathyc@hexaware.com.

Open Positions: AI Engineer / Lead Data Scientist (GenAI)
AI Engineer experience: 3+ years
Lead Data Scientist (GenAI) experience: 7+ years
Notice period: 15 days / 30 days max (for candidates serving notice period)
Walk-in drive location: Noida
Date of drive: 3rd Aug 2025 (Sunday)
Must-have experience: LLMs, advanced RAG, NLP, transformer models, LangChain

Technical Skills:
1. Strong experience as a Data Scientist (GenAI)
2. Proficiency with generative AI models like GANs, VAEs, and transformers
3. Expertise with cloud platforms (AWS, Azure, Google Cloud) for deploying AI models
4. Strong Python and FastAPI experience, with SDA-based implementations for all APIs
5. Knowledge of agentic AI concepts and applications
Posted 1 week ago
0 years
3 - 5 Lacs
Coimbatore, Tamil Nadu, India
On-site
About ShellKode
We are an innovation-first technology company focused on delivering cutting-edge solutions in Cloud, Data, and GenAI. As an AWS-exclusive partner, we serve startups and enterprises in accelerating their digital transformation journeys. At ShellKode, we build, ship, and scale solutions with purpose-driven AI.

Who We Are Looking For
We are hiring fresh graduates (Class of 2025) who are passionate about AI, Machine Learning, and Data Science. If you have a strong foundational understanding and are eager to work on real-world AI projects, join our fast-growing engineering team in Coimbatore.

Preferred Qualifications
- Bachelor's degree in Artificial Intelligence, Machine Learning, Data Science, or Computer Science with AI/ML specialization (Class of 2025).
- Strong academic projects or internships in AI, NLP, CV, or MLOps are a plus.
- Good knowledge of Python and foundational ML libraries like scikit-learn, Pandas, NumPy.
- Exposure to frameworks such as TensorFlow, PyTorch, or Hugging Face is a bonus.
- Understanding of cloud services (preferably AWS) is an added advantage.
- Solid understanding of algorithms, data structures, and model evaluation techniques.

Skills: NumPy, Python, artificial intelligence, scikit-learn, AWS, cloud services, data science, AI, Pandas, ML, Hugging Face, data structures, algorithms, TensorFlow, model evaluation, PyTorch, machine learning
Posted 1 week ago
9.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Advisory Services, Financial Services Office – Financial Services Risk Management (FSRM) – Credit Risk – Manager

Description
EY's Financial Services Office (FSO) is a unique, industry-focused business unit that provides a broad range of integrated services that leverage deep industry experience with strong functional capability and product knowledge. The FSO practice provides integrated advisory services to financial institutions and other capital markets participants, including commercial banks, investment banks, broker-dealers, asset managers (traditional and alternative), insurance and energy trading companies, and the Corporate Treasury functions of leading Fortune 500 companies. The service offerings provided by FSO Advisory include market, credit and operational risk management, regulatory advisory, quantitative advisory, structured finance transactions, actuarial advisory, technology enablement, risk and security, program advisory, and process & controls.

Within EY's FSO Advisory practice, the Financial Services Risk Management (FSRM) group provides solutions that help FSO clients identify, measure, manage and monitor the market (trading book), credit (banking book), operational, and regulatory risks associated with their trading, asset-liability management, capital management and other capital markets activities. Within FSRM, the Credit Risk (CR) team assists clients in designing and implementing strategic, functional and regulatory changes across risk management within the banking book portfolio. Clients include large domestic and global financial institutions and banking organizations.

Key Responsibilities
- Demonstrate deep technical capabilities and industry knowledge of financial products, in particular lending products.
- Understand market trends and demands in the financial services sector and issues faced by clients by staying abreast of current business and industry trends relevant to the client's business.
- Monitor progress, manage risk, and effectively communicate with key stakeholders regarding status, issues and key priorities to achieve expected outcomes.
- Play an active role in mentoring junior consultants within the organization.
- Review, analyse and concur with tasks completed by junior staff.
- Show flexibility to work across projects involving model audits, validation and development activities.

Qualifications, Certifications and Education

Must-have:
- Postgraduate (master's in accounting, finance, economics, statistics or a related field) with at least 6-9 years of related work experience.
- Complete, end-to-end understanding of credit risk model development, validation, audit and/or implementation for the banking book portfolio.
- Knowledge of credit risk and risk analytics techniques is desirable.
- Hands-on experience in data preparation, manipulation and consolidation.
- Strong background in regulatory requirements such as IFRS 9, CCAR, CECL within the model development/validation/audit domain.
- Expertise in stress testing/DFAST PD/LGD/EAD models.
- Strong documentation skills; able to quickly grasp key details and summarize them in a presentation or document.
- Able to take initiative and work independently with minimal supervision, if required.
- Strong background in statistics and econometrics, especially logistic regression and linear regression.
- Strong technical skills: highly proficient in advanced Python (Pandas, NumPy, scikit-learn, object-oriented programming, parallel processing), SAS (SAS Certified preferred), SQL, R, and Excel.

Good-to-have:
- Certifications such as FRM, CFA, PRM, SCR.
- Proficiency in Java/C++.
- Experience in Data/Business Intelligence (BI) Reporting.
- Knowledge of machine learning models and their applications.
- Willingness to travel to meet client needs.
- Previous project management experience.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
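For context on the modelling stack named above, here is a minimal sketch of a banking-book PD (probability of default) logistic regression fitted with statsmodels, so that coefficients and p-values are available for model documentation. The data file and driver columns are hypothetical, not an EY or client methodology.

```python
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("loan_book.csv")                      # hypothetical portfolio snapshot
X = sm.add_constant(df[["ltv", "dti", "utilisation", "months_on_book"]])
y = df["default_12m"]                                  # 1 = defaulted within 12 months

pd_model = sm.Logit(y, X).fit()
print(pd_model.summary())                              # coefficients, std errors, p-values
df["pd_estimate"] = pd_model.predict(X)                # fitted PDs for the portfolio
```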
Posted 1 week ago
5.0 - 8.0 years
0 Lacs
Greater Nashik Area
On-site
Dreaming big is in our DNA. It's who we are as a company. It's our culture. It's our heritage. And more than ever, it's our future. A future where we're always looking forward. Always serving up new ways to meet life's moments. A future where we keep dreaming bigger. We look for people with passion, talent, and curiosity, and provide them with the teammates, resources and opportunities to unleash their full potential. The power we create together, when we combine your strengths with ours, is unstoppable. Are you ready to join a team that dreams as big as you do?

AB InBev GCC was incorporated in 2014 as a strategic partner for Anheuser-Busch InBev. The center leverages the power of data and analytics to drive growth for critical business functions such as operations, finance, people, and technology. The teams are transforming Operations through Tech and Analytics.

Do You Dream Big? We Need You.

Job Title: Senior Data Scientist
Location: Bangalore
Reporting to: Senior Manager

Purpose of the role
This role sits at the intersection of data science and revenue growth strategy, focused on developing advanced analytical solutions to optimize pricing, trade promotions, and product mix. The candidate will lead the end-to-end design, deployment, and automation of machine learning models and statistical frameworks that support commercial decision-making, predictive scenario planning, and real-time performance tracking. By leveraging internal and external data sources, including transactional, market, and customer-level data, this role will deliver insights into price elasticity, promotional lift, channel efficiency, and category dynamics. The goal is to drive measurable improvements in gross margin, ROI on trade spend, and volume growth through data-informed strategies.

Key tasks & accountabilities
- Design and implement price elasticity models using linear regression, log-log models, and hierarchical Bayesian frameworks to understand consumer response to pricing changes across channels and segments (a minimal log-log sketch follows this posting).
- Build uplift models (e.g., linear regression, XGBoost for treatment effects) to evaluate promotional effectiveness and isolate true incremental sales vs. base volume.
- Develop demand forecasting models using ARIMA, SARIMAX, and Prophet, integrating external factors such as seasonality, promotions, and competitor activity; apply time-series clustering and k-means segmentation to group SKUs, customers, and geographies for targeted pricing and promotion strategies.
- Construct assortment optimization models using conjoint analysis, choice modeling, and market basket analysis to support category planning and shelf optimization.
- Use Monte Carlo simulations and what-if scenario modeling to assess revenue impact under varying pricing, promo, and mix conditions.
- Conduct hypothesis testing (t-tests, ANOVA, chi-square) to evaluate the statistical significance of pricing and promotional changes.
- Create LTV (lifetime value) and customer churn models to prioritize trade investment decisions and drive customer retention strategies.
- Integrate Nielsen, IRI, and internal POS data to build unified datasets for modeling and advanced analytics in SQL, Python (pandas, statsmodels, scikit-learn), and Azure Databricks environments.
- Automate reporting processes and real-time dashboards for price pack architecture (PPA), promotion performance tracking, and margin simulation using advanced Excel and Python.
- Lead post-event analytics using pre/post experimental designs, including difference-in-differences (DiD) methods, to evaluate business interventions.
- Collaborate with Revenue Management, Finance, and Sales leaders to convert insights into pricing corridors, discount policies, and promotional guardrails.
- Translate complex statistical outputs into clear, executive-ready insights with actionable recommendations for business impact.
- Continuously refine model performance through feature engineering, model validation, and hyperparameter tuning to ensure accuracy and scalability.
- Provide mentorship to junior analysts, enhancing their skills in modeling, statistics, and commercial storytelling.
- Maintain documentation of model assumptions, business rules, and statistical parameters to ensure transparency and reproducibility.

Other Competencies Required
- Presentation skills: effectively presenting findings and insights to stakeholders and senior leadership to drive informed decision-making.
- Collaboration: working closely with cross-functional teams, including marketing, sales, and product development, to implement insights-driven strategies.
- Continuous improvement: actively seeking opportunities to enhance reporting processes and insights generation to maintain relevance and impact in a dynamic market environment.
- Data scope management: managing the scope of data analysis, ensuring it aligns with business objectives and insights goals.
- Acting as a steadfast advisor to leadership, offering expert guidance on harnessing data to drive business outcomes and optimize customer experience initiatives.
- Serving as a catalyst for change by advocating for data-driven decision-making and cultivating a culture of continuous improvement rooted in insights gleaned from analysis.
- Continuously evaluating and refining reporting processes to ensure the delivery of timely, relevant, and impactful insights to leadership stakeholders, while fostering an environment of ownership, collaboration, and mentorship within the team.

Business Environment
- Work closely with Zone Revenue Management teams in a fast-paced environment, providing proactive communication to stakeholders.
- This is an offshore role (GCC is the offshore location) and requires comfort with working in a virtual environment.
- The role requires working collaboratively with Zone/country business heads and GCC commercial teams, summarizing insights and recommendations to be presented back to the business.
- Continuously improve, automate, and optimize the process.
- Geographical scope: Europe

Qualifications, Experience, Skills
- Level of educational attainment required: Bachelor's or postgraduate degree in Business & Marketing, Engineering/Solutions, or an equivalent degree or equivalent work experience; MBA/Engineering in a relevant technical field such as Marketing/Finance.
- Extensive experience solving business problems using quantitative approaches.
- Comfort with extracting, manipulating, and analyzing complex, high-volume, high-dimensionality data from varying sources.
- Previous work experience required: 5-8 years of experience in the Retail/CPG domain.

Technical Skills Required
- Data manipulation & analysis: advanced proficiency in SQL, Python (Pandas, NumPy), and Excel for structured data processing.
- Data visualization: expertise in Power BI and Tableau for building interactive dashboards and performance tracking tools.
- Modeling & analytics: hands-on experience with regression analysis, time series forecasting, and ML models using scikit-learn or XGBoost.
- Data engineering fundamentals: knowledge of data pipelines, ETL processes, and integration of internal/external datasets for analytical readiness.
- Proficient in Python (pandas, scikit-learn, statsmodels), SQL, and Power BI.
- Skilled in regression, Bayesian modeling, uplift modeling, time-series forecasting (ARIMA, SARIMAX, Prophet), and clustering (k-means).
- Strong grasp of hypothesis testing, model validation, and scenario simulation.

And above all of this, an undying love for beer! We dream big to create a future with more cheers.
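A minimal sketch of the log-log elasticity model referenced in the first accountability above: regress log volume on log price, so the price coefficient reads directly as the elasticity. The data source and column names are illustrative, not AB InBev's datasets.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

sales = pd.read_csv("sku_weekly_sales.csv")           # hypothetical POS extract
X = sm.add_constant(np.log(sales[["avg_price"]]))
y = np.log(sales["volume"])

fit = sm.OLS(y, X).fit()
elasticity = fit.params["avg_price"]                  # e.g. -1.3 means a 10% price rise implies roughly a 13% volume drop
print(fit.summary())
```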
Posted 1 week ago
5.0 - 7.0 years
6 - 8 Lacs
Kanpur
Work from Office
* Translate CUDA/C++ code into equivalent Python implementations using PyTorch and NumPy, ensuring logical and performance parity.
* Analyze CUDA kernels and GPU-accelerated code for structure, efficiency, and function before translation.
Work from home
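A minimal sketch of the translation task: a CUDA elementwise kernel of the form out[i] = a[i] * scale + b[i] (one thread per index) becomes a single vectorised expression in NumPy or PyTorch, with parity checked numerically. Sizes and values are illustrative.

```python
import numpy as np
import torch

a = np.random.rand(1_000_000).astype(np.float32)
b = np.random.rand(1_000_000).astype(np.float32)
scale = 2.5

out_np = a * scale + b                                   # NumPy, on CPU

ta, tb = torch.from_numpy(a), torch.from_numpy(b)
if torch.cuda.is_available():                            # keep GPU execution when a device is present
    ta, tb = ta.cuda(), tb.cuda()
out_torch = ta * scale + tb                              # same arithmetic, PyTorch tensors

assert np.allclose(out_np, out_torch.cpu().numpy(), atol=1e-6)
```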
Posted 1 week ago
2.0 years
12 - 24 Lacs
India
On-site
Job Description: We are seeking a highly skilled and passionate Data Science Trainer to join our team. The ideal candidate will have strong practical knowledge of data science tools and techniques, and a passion for teaching and mentoring students or professionals.

Key Responsibilities:
- Design and deliver engaging training sessions on Data Science, Machine Learning, Python, SQL, Statistics, and data visualization tools like Tableau or Power BI.
- Develop course materials, hands-on labs, assessments, and real-time project-based learning modules.
- Teach both theoretical and practical aspects of data science, including data preprocessing, EDA, model building, evaluation, and deployment.
- Conduct live sessions, workshops, webinars, and one-on-one doubt-clearing sessions.

Requirements:
- Bachelor's or Master's degree in Computer Science, Data Science, Statistics, or a related field.
- Proven experience (2+ years) in data science or analytics roles.
- Proficiency in Python, Pandas, NumPy, Matplotlib, Scikit-learn, SQL, and relevant data science libraries.
- Knowledge of cloud platforms (AWS, Azure, GCP) and deployment tools is a plus.
- Experience with tools like Jupyter Notebook, Google Colab, or similar platforms.
- Strong communication, presentation, and mentoring skills.
- Prior teaching or training experience preferred (online/offline).
- Ability to break down complex topics into easy-to-understand concepts.

Job Type: Full-time
Pay: ₹100,000.00 - ₹200,000.00 per month
Work Location: In person
Posted 1 week ago
1.0 years
3 - 3 Lacs
Saket
On-site
We are hiring for the role of Data Science and AI Trainer. This presents a great opportunity for you to join our team and be part of our organization.

About: Techstack Academy is a professional training institute based in Delhi (with a branch in Saket) that offers career-oriented courses in technology and digital skills. It is designed for students, freshers, and working professionals who want to upskill or switch careers:
:- Training institute offering online and offline classes.
:- Focused on practical, job-ready skills.
:- Known for Digital Marketing, Data Science, AI, Machine Learning, Full Stack Development, Python, Web Design, etc.

Requirements:
:- Designation: Data Science and AI Trainer
:- Experience: at least 1 year as a Data Science and AI Trainer
:- Good communication and interaction skills.
:- Knowledge of Python, Advanced Python, NumPy, Pandas, Matplotlib, Seaborn, Beautiful Soup, Advanced Excel, Statistics, Power BI, Tableau, ML, AI, Generative AI, generative LLMs, and Deep Learning.

Interested candidates can connect via:
Email: mitali@techstack.in
Contact: +91 94533 23222

Job Types: Full-time, Permanent
Pay: ₹30,000.00 - ₹32,000.00 per month
Schedule: Day shift
Work Location: In person
Posted 1 week ago
2.0 years
3 - 4 Lacs
Chennai
On-site
Job Title: Freelance Trainer - Data Science & Artificial Intelligence
Job Location: Chennai
Job Mode: Offline - Freelance

Job Description: We are seeking a passionate and knowledgeable Data Science & AI Trainer to deliver hands-on training sessions to college students. The ideal candidate should have a strong foundation in data science, machine learning, and AI concepts, with the ability to simplify complex topics and engage learners through practical examples and projects.

Responsibilities:
- Deliver in-depth training on Data Science, Machine Learning, Deep Learning, and AI tools and techniques.
- Develop or follow structured lesson plans, course materials, and lab exercises.
- Conduct hands-on coding sessions using Python and related libraries (NumPy, Pandas, Scikit-learn, TensorFlow/PyTorch, etc.).
- Provide individual support, project guidance, and real-world application insights to students.
- Stay updated with the latest trends and advancements in AI & Data Science.
- Maintain a professional and supportive learning environment in the classroom.

Required Skills & Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Science, or related fields.
- Proven teaching/training experience in Data Science and AI.
- Strong programming skills in Python.
- Familiarity with data visualization tools (e.g., Matplotlib, Seaborn, Power BI, Tableau - optional).
- Good communication and presentation skills in English (Tamil proficiency is a plus).
- Ability to explain technical concepts in a simplified and engaging manner.

Job Type: Freelance
Contract length: 3 months
Pay: ₹30,000.00 - ₹40,000.00 per month
Experience: Data Science & AI Trainer: 2 years (Required)
Location: Chennai, Tamil Nadu (Required)
Willingness to travel: 25% (Preferred)
Work Location: In person
Application Deadline: 29/07/2025
Expected Start Date: 29/07/2025
Posted 1 week ago
3.0 years
0 Lacs
Chennai
On-site
Comcast brings together the best in media and technology. We drive innovation to create the world's best entertainment and online experiences. As a Fortune 50 leader, we set the pace in a variety of innovative and fascinating businesses and create career opportunities across a wide range of locations and disciplines. We are at the forefront of change and move at an amazing pace, thanks to our remarkable people, who bring cutting-edge products and services to life for millions of customers every day. If you share in our passion for teamwork, our vision to revolutionize industries and our goal to lead the future in media and technology, we want you to fast-forward your career at Comcast.

Job Summary
Responsible for planning and designing new software and web applications. Edits new and existing applications. Implements, tests and debugs defined software components. Documents all development activity. Works with moderate guidance in own area of knowledge.

Job Description

Job Title: Python Engineer 2

Job Summary: As part of the SPIDER team, the Python Engineer will be responsible for building multi-tier client-server applications, data processing, deployment on cloud-based technologies, and continuous improvement of existing solutions.

Responsibilities:
- Develop and deploy applications on cloud-based environments following the full lifecycle of software development.
- Maintain and continuously improve existing applications and solutions.
- Write quality and efficient code following coding and security principles.
- Implement solutions based on project requirements and technical specifications.
- Identify technology and design issues and provide proactive communication.

Minimum Requirements:
- Degree in Computer Science, or equivalent professional experience.
- 3 years of experience in Python programming.
- 3+ years of relevant experience with application development and deployment in the cloud.
- Solid understanding of REST API development and integration (a minimal sketch follows this posting).
- Work experience with Python libraries such as Pandas, SciPy, NumPy, etc.
- Working experience in any one of the SQL or NoSQL databases.
- Experience with application deployment on cloud-based environments such as AWS.
- Experience with version control, Git preferred.
- Knowledge of OOP and modular application development and documentation.
- Good problem-solving and debugging skills.
- Ability to learn and work independently, along with strong communication and a strong work ethic.

Good to have:
- Experience with different distributions of Linux.
- Experience in Spark.
- Experience with container technologies such as Docker.
- Experience with CI/CD tools such as Concourse, Terraform.

Disclaimer: This information has been designed to indicate the general nature and level of work performed by employees in this role. It is not designed to contain or be interpreted as a comprehensive inventory of all duties, responsibilities and qualifications. Comcast is proud to be an equal opportunity workplace. We will consider all qualified applicants for employment without regard to race, color, religion, age, sex, sexual orientation, gender identity, national origin, disability, veteran status, genetic information, or any other basis protected by applicable law. Base pay is one part of the Total Rewards that Comcast provides to compensate and recognize employees for their work. Most sales positions are eligible for a Commission under the terms of an applicable plan, while most non-sales positions are eligible for a Bonus.
Additionally, Comcast provides best-in-class Benefits to eligible employees. We believe that benefits should connect you to the support you need when it matters most, and should help you care for those who matter most. That's why we provide an array of options, expert guidance and always-on tools that are personalized to meet the needs of your reality, to help support you physically, financially and emotionally through the big milestones and in your everyday life. Please visit the compensation and benefits summary on our careers site for more details.
Education
Bachelor's Degree
While possessing the stated degree is preferred, Comcast also may consider applicants who hold some combination of coursework and experience, or who have extensive related professional experience.
Relevant Work Experience
2-5 Years
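To make the Python Engineer 2 requirements above more concrete, here is a minimal, illustrative sketch of a REST endpoint that applies a Pandas aggregation to posted JSON records. The framework choice (Flask), route name, and payload shape are assumptions for illustration only; they are not taken from the posting.

```python
# Hypothetical sketch, not Comcast code: a small REST endpoint that accepts
# JSON records, aggregates them with Pandas, and returns a summary.
# Framework (Flask), route, and payload shape are assumptions.
from flask import Flask, jsonify, request
import pandas as pd

app = Flask(__name__)

@app.route("/summary", methods=["POST"])
def summary():
    records = request.get_json(force=True)  # expects a list of dicts
    df = pd.DataFrame(records)
    stats = {
        "rows": int(len(df)),
        "numeric_means": df.select_dtypes("number").mean().round(3).to_dict(),
    }
    return jsonify(stats)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```

A service like this would typically be containerized with Docker and deployed to AWS, matching the cloud-deployment experience the posting asks for.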
Posted 1 week ago
4.0 - 6.0 years
0 Lacs
Chennai
On-site
Mandatory Skills:
4-6 years of experience with basic proficiency in Python and SQL, and familiarity with libraries like NumPy or Pandas.
Understanding of fundamental programming concepts (data structures, algorithms, etc.).
Eagerness to learn new tools and frameworks, including Generative AI technologies.
Familiarity with version control systems (e.g., Git).
Strong problem-solving skills and attention to detail.
Exposure to data-processing tools such as Apache Spark or PySpark, and SQL.
Basic understanding of APIs and how to integrate them.
Interest in AI/ML and willingness to explore frameworks like LangChain.
Familiarity with cloud platforms (AWS, Azure, or GCP) is a plus.
Job Description: We are seeking a motivated Python Developer to join our team. The ideal candidate will have a foundational understanding of Python programming and SQL, and a passion for learning and growing in the field of software development. You will work closely with senior developers and contribute to building and maintaining applications, with opportunities to explore Generative AI frameworks and data processing tools.
Key Responsibilities:
Assist in developing and maintaining Python-based applications.
Write clean, efficient, and well-documented code.
Collaborate with senior developers to integrate APIs and frameworks.
Support data processing tasks using libraries like Pandas or PySpark (see the sketch after this posting).
Learn and work with Generative AI frameworks (e.g., LangChain, LangGraph) under guidance.
Debug and troubleshoot issues in existing applications.
About Virtusa
Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth, one that seeks to provide you with exciting projects, opportunities and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence.
Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
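As a concrete illustration of the data-processing work described in this posting, here is a minimal PySpark sketch: read a CSV, clean it, and aggregate per customer. The input path, column names, and aggregation are hypothetical, chosen only to show the shape of such a job.

```python
# Hypothetical sketch, not Virtusa code: a small PySpark batch job.
# File path and column names ("customer_id", "amount") are assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-summary").getOrCreate()

orders = (
    spark.read.option("header", True).option("inferSchema", True)
    .csv("data/orders.csv")                   # assumed input path
)

summary = (
    orders
    .dropna(subset=["customer_id", "amount"])  # basic cleaning
    .groupBy("customer_id")
    .agg(
        F.sum("amount").alias("total_spend"),
        F.count("*").alias("order_count"),
    )
    .orderBy(F.desc("total_spend"))
)

summary.show(10)
spark.stop()
```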
Posted 1 week ago
2.0 years
4 - 6 Lacs
Ahmedabad
On-site
Why Glasier Inc. For Your Dream Job?
We're a passionate group of tech enthusiasts and creatives who live and breathe innovation. We're looking for energetic innovators, thinkers, and doers who thrive on learning, adapting quickly, and executing in real time, whether you're a creative thinker with an eye for design, a marketer with a story to tell, or a passionate professional. Apply now.
Python Developer
Openings: 01
Exp.: 2 - 2.5 Years
Job Description:
Design, build, and deploy ML models and algorithms.
Preprocess and analyze large datasets for training and evaluation.
Work with data scientists and engineers to integrate models into applications.
Optimize model performance and accuracy.
Stay up to date with AI/ML trends, libraries, and tools.
Strong experience with Python and libraries such as NumPy, Pandas, Scikit-learn, TensorFlow, or PyTorch (see the sketch after this posting).
Solid understanding of machine learning algorithms and principles.
Experience working with data preprocessing and model deployment.
Familiarity with cloud platforms (AWS, GCP, Azure) is a plus.
PERKS & BENEFITS
Working With Glasier Inc.
We take care of our team members so they can deliver their best work. Here are a few of the benefits and perks we offer to our employees:
5 Days Working Per Week
Mentorship
Mindfulness
Flexible Working Hours
International Exposures
Dedicated Pantry Area
Free Snacks & Drinks
Open Work Culture
Competitive Salary And Benefits
Festival, Birthday & Work Anniversary Celebration
Performance Appreciation Bonus & Rewards
Employee Friendly Leave Policies
Join our team now: email hr@glasierinc.com, WhatsApp +91 95102 61901, or call +91 95102 61901.
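To illustrate the kind of build-and-evaluate work this role describes, here is a minimal scikit-learn sketch: split a dataset, train a classifier, and report accuracy. The dataset, model choice, and metric are assumptions for illustration; a real project would add preprocessing, validation, and deployment steps.

```python
# Hypothetical sketch, not Glasier Inc. code: a minimal train/evaluate loop
# with scikit-learn on a bundled dataset. Model and metric are assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Load a small built-in dataset so the example is self-contained
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

preds = model.predict(X_test)
print(f"Test accuracy: {accuracy_score(y_test, preds):.3f}")
```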
Posted 1 week ago
1.0 years
0 Lacs
Gwalior
On-site
Job Title: Data Science Intern
Company: Techieshubhdeep IT Solutions Pvt. Ltd.
Location: 21 Nehru Colony, Thatipur, Gwalior, Madhya Pradesh
Contact: +91 7880068399
About Us: Techieshubhdeep IT Solutions Pvt. Ltd. is a growing technology company specializing in IT services, software development, and innovative digital solutions. We are committed to nurturing talent and providing a platform for aspiring professionals to learn and excel in their careers.
Role Overview: We are seeking a Data Science Intern who will assist our team in developing data-driven solutions, performing statistical analysis, and creating machine learning models to solve real-world business challenges.
Key Responsibilities:
Collect, clean, and preprocess structured and unstructured data.
Perform exploratory data analysis (EDA) to identify trends and patterns (see the sketch after this posting).
Assist in building, testing, and optimizing machine learning models.
Work with large datasets and perform statistical modeling.
Document processes, findings, and model performance.
Collaborate with senior data scientists and software engineers on live projects.
Required Skills & Qualifications:
Currently pursuing or recently completed a degree in Computer Science, Data Science, Statistics, Mathematics, or related fields.
Basic understanding of Python/R and libraries like NumPy, Pandas, Scikit-learn, Matplotlib, etc.
Familiarity with SQL and database management.
Strong analytical skills and problem-solving abilities.
Good communication skills and willingness to learn.
What We Offer:
Hands-on training on real-world projects.
Guidance from experienced industry professionals.
Internship certificate upon successful completion.
Potential for full-time employment based on performance.
Job Types: Full-time, Internship, Fresher, Walk-In
Pay: ₹5,000.00 - ₹15,000.00 per year
Schedule: Day shift, Monday to Friday, Morning shift
Ability to commute/relocate: Gwalior, Madhya Pradesh: Reliably commute or planning to relocate before starting work (Required)
Experience: total work: 1 year (Preferred); Data science: 1 year (Preferred)
Language: Hindi (Preferred), English (Preferred)
Work Location: In person
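As a taste of the exploratory data analysis mentioned in this internship, here is a minimal Pandas/Matplotlib sketch: load a dataset, inspect its structure and missing values, and plot one distribution. The file name and the "revenue" column are assumptions for illustration only.

```python
# Hypothetical sketch, not Techieshubhdeep code: a first-pass EDA.
# "sales.csv" and the "revenue" column are assumed to exist.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("sales.csv")      # assumed input file

print(df.shape)                    # rows x columns
print(df.dtypes)                   # column types
print(df.isna().sum())             # missing values per column
print(df.describe())               # summary statistics for numeric columns

# Distribution of one numeric column (assumed)
df["revenue"].dropna().hist(bins=30)
plt.title("Revenue distribution")
plt.xlabel("revenue")
plt.ylabel("count")
plt.tight_layout()
plt.savefig("revenue_hist.png")
```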
Posted 1 week ago