4.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Company Description
At Nielsen, we are passionate about our work to power a better media future for all people by providing powerful insights that drive client decisions and deliver extraordinary results. Our talented, global workforce is dedicated to capturing audience engagement with content - wherever and whenever it’s consumed. Together, we are proudly rooted in our deep legacy as we stand at the forefront of the media revolution. When you join Nielsen, you will join a dynamic team committed to excellence, perseverance, and the ambition to make an impact together. We champion you, because when you succeed, we do too. We enable your best to power our future.

Job Description
Nielsen is seeking an organized, detail-oriented team player to join the Engineering team in the role of Machine Learning Engineer. Nielsen's Audience Measurement Engineering platforms support the measurement of television viewing in more than 30 countries around the world. The Software Engineer will be responsible for defining, developing, testing, analyzing, and delivering technology solutions within Nielsen's Collections platforms.

Qualifications
Experience having led multiple projects leveraging LLMs, GenAI, and Prompt Engineering. Exposure to real-world MLOps: deploying models into production and adding features to products. Knowledge of working in a cloud environment. Strong understanding of LLMs, GenAI, Prompt Engineering, and Copilot. Bachelor's degree in Computer Science or an equivalent degree. 4+ years of software experience. Experience with machine learning frameworks and models. The ML Engineer is expected to fully own the services that are built with the ML Scientists; this cuts across scalability, availability, having metrics and alarms/alerts in place, and being responsible for the latency of the services. Data quality checks and onboarding the data onto the cloud for modeling purposes. Prompt engineering, fine-tuning (FT) work, evaluation, and data. End-to-end AI solution architecture, latency tradeoffs, LLM inference optimization, control plane, data plane, and platform engineering. Comfort in Python and Java is highly desirable.

Additional Information
Please be aware that job-seekers may be at risk of targeting by scammers seeking personal data or money. Nielsen recruiters will only contact you through official job boards, LinkedIn, or email with a nielsen.com domain. Be cautious of any outreach claiming to be from Nielsen via other messaging platforms or personal email addresses. Always verify that email communications come from an @nielsen.com address. If you're unsure about the authenticity of a job offer or communication, please contact Nielsen directly through our official website or verified social media channels.
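To make the prompt-engineering and evaluation work described in this posting concrete, here is a minimal, hedged sketch (not Nielsen's actual stack): a single classification prompt sent through the OpenAI Python client. The model name, label set, and prompt wording are illustrative assumptions only.

```python
# Minimal prompt-engineering sketch. Assumptions: the openai Python client
# (v1+) is installed, OPENAI_API_KEY is set, and the model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def classify_viewing_event(description: str) -> str:
    """Ask an LLM to tag a free-text viewing-event description with one label."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": (
                    "Label the TV-viewing event as one of: LIVE, TIME_SHIFTED, "
                    "STREAMING, UNKNOWN. Reply with the label only."
                ),
            },
            {"role": "user", "content": description},
        ],
        temperature=0,  # deterministic output simplifies offline evaluation
    )
    return response.choices[0].message.content.strip()

print(classify_viewing_event("Household watched a recorded episode two days after broadcast."))
```

Keeping temperature at 0 and constraining the output to a fixed label set is what makes a prompt like this straightforward to evaluate against a labeled sample.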
Posted 1 week ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
Remote
Company Description FormantAI pioneers cutting-edge, research-based AI solutions to drive business transformation. We translate complex AI research into powerful, practical tools that deliver tangible results for organizations. At FormantAI, we responsibly deploy advanced AI, aligning our innovative capabilities with your specific business needs to ensure impactful outcomes. Partner with us to leverage the forefront of AI innovation and redefine success in your industry. About the job Position: GenAI Engineering Intern Location: Remote Duration: 2–6 Months Stipend: 50,000 - 60,000 Start Date: Immediate Role Description This is a paid remote internship role for a GenAI Engineering Intern. The GenAI Engineering Intern will assist in developing and enhancing AI models, performing data analysis, contributing to research projects, and writing code. Daily tasks include collaborating with the engineering team to implement AI solutions, running experiments, and documenting findings. This role provides an opportunity to work in a dynamic research environment, gaining hands-on experience with AI technologies. Qualifications Strong programming skills in Python, C++, or Java Experience with machine learning frameworks like TensorFlow, PyTorch Basic understanding of LLMs (GPT, Claude, Llama) and Natural Language Processing (NLP) Interest in optimizing AI inference, query processing, and API integrations Exposure to machine learning frameworks like Hugging Face or LangChain is a plus Strong communication skills and ability to work collaboratively Enthusiasm for learning and applying new technologies
Posted 1 week ago
4.0 years
4 - 9 Lacs
Gurgaon
On-site
Company Description
At Nielsen, we are passionate about our work to power a better media future for all people by providing powerful insights that drive client decisions and deliver extraordinary results. Our talented, global workforce is dedicated to capturing audience engagement with content - wherever and whenever it’s consumed. Together, we are proudly rooted in our deep legacy as we stand at the forefront of the media revolution. When you join Nielsen, you will join a dynamic team committed to excellence, perseverance, and the ambition to make an impact together. We champion you, because when you succeed, we do too. We enable your best to power our future.

Job Description
Nielsen is seeking an organized, detail-oriented team player to join the Engineering team in the role of Machine Learning Engineer. Nielsen's Audience Measurement Engineering platforms support the measurement of television viewing in more than 30 countries around the world. The Software Engineer will be responsible for defining, developing, testing, analyzing, and delivering technology solutions within Nielsen's Collections platforms.

Qualifications
Experience having led multiple projects leveraging LLMs, GenAI, and Prompt Engineering. Exposure to real-world MLOps: deploying models into production and adding features to products. Knowledge of working in a cloud environment. Strong understanding of LLMs, GenAI, Prompt Engineering, and Copilot. Bachelor's degree in Computer Science or an equivalent degree. 4+ years of software experience. Experience with machine learning frameworks and models. The ML Engineer is expected to fully own the services that are built with the ML Scientists; this cuts across scalability, availability, having metrics and alarms/alerts in place, and being responsible for the latency of the services. Data quality checks and onboarding the data onto the cloud for modeling purposes. Prompt engineering, fine-tuning (FT) work, evaluation, and data. End-to-end AI solution architecture, latency tradeoffs, LLM inference optimization, control plane, data plane, and platform engineering. Comfort in Python and Java is highly desirable.

Additional Information
Please be aware that job-seekers may be at risk of targeting by scammers seeking personal data or money. Nielsen recruiters will only contact you through official job boards, LinkedIn, or email with a nielsen.com domain. Be cautious of any outreach claiming to be from Nielsen via other messaging platforms or personal email addresses. Always verify that email communications come from an @nielsen.com address. If you're unsure about the authenticity of a job offer or communication, please contact Nielsen directly through our official website or verified social media channels.
Posted 1 week ago
11.0 years
0 Lacs
Hyderābād
On-site
Job Description:

About Us
At Bank of America, we are guided by a common purpose to help make financial lives better through the power of every connection. Responsible Growth is how we run our company and how we deliver for our clients, teammates, communities, and shareholders every day. One of the keys to driving Responsible Growth is being a great place to work for our teammates around the world. We’re devoted to being a diverse and inclusive workplace for everyone. We hire individuals with a broad range of backgrounds and experiences and invest heavily in our teammates and their families by offering competitive benefits to support their physical, emotional, and financial well-being. Bank of America believes both in the importance of working together and offering flexibility to our employees. We use a multi-faceted approach for flexibility, depending on the various roles in our organization. Working at Bank of America will give you a great career with opportunities to learn, grow and make an impact, along with the power to make a difference. Join us!

Global Business Services
Global Business Services delivers Technology and Operations capabilities to Lines of Business and Staff Support Functions of Bank of America through a centrally managed, globally integrated delivery model and globally resilient operations. Global Business Services is recognized for flawless execution, sound risk management, operational resiliency, operational excellence and innovation. In India, we are present in five locations and operate as BA Continuum India Private Limited (BACI), a non-banking subsidiary of Bank of America Corporation and the operating company for India operations of Global Business Services.

Process Overview*
The Data Analytics Strategy platform and decision tool team is responsible for the data strategy for the entire CSWT and for the development of the platforms that support the Data Strategy. The Data Science Platform, Graph Data Platform, and Enterprise Events Hub are key platforms of the Data Platform initiative.

Job Description*
We're seeking a highly skilled AI/ML Platform Engineer to architect and build a modern, scalable, and secure Data Science and Analytical Platform. This pivotal role will drive end-to-end (E2E) model lifecycle management, establish robust platform governance, and create the foundational infrastructure for developing, deploying, and managing Machine Learning models across both on-premise and hybrid cloud environments.

Responsibilities*
Lead the architecture and design for building scalable, resilient, and secure distributed applications, ensuring compliance with organizational technology guidelines, security standards, and industry best practices such as 12-factor principles and well-architected framework guidelines. Actively contribute to hands-on coding, building core components, APIs, and microservices while ensuring high code quality, maintainability, and performance. Ensure adherence to engineering excellence standards and compliance with key organizational metrics such as code quality, test coverage, and defect rates. Integrate secure development practices, including data encryption, secure authentication, and vulnerability management, into the application lifecycle. Work on adopting and aligning development practices with CI/CD best practices to enable efficient build and deployment of the application on target platforms such as VMs and/or container orchestration platforms like Kubernetes and OpenShift.
Collaborate with stakeholders to align technical solutions with business requirements, driving informed decision-making and effective communication across teams. Mentor team members, advocate best practices, and promote a culture of continuous improvement and innovation in engineering processes. Develop efficient utilities, automation frameworks, and data science platforms that can be utilized across multiple Data Science teams. Propose and build a variety of efficient data pipelines to support ML model building and deployment. Propose and build automated deployment pipelines to enable a self-help continuous deployment process for the Data Science teams. Analyze, understand, execute, and resolve issues in user scripts, models, and code. Perform release and upgrade activities as required. Well versed in open-source technology and aware of emerging third-party technologies and tools in the AI/ML space. Ability to firefight, propose fixes, and guide the team through day-to-day issues in production. Ability to train partner Data Science teams on frameworks and the platform. Flexible with time and shifts to support project requirements; this does not include any night shift. This position does not include any L1 or L2 (first line of support) responsibility.

Requirements*

Education*
Graduation / Post Graduation: BE/B.Tech/MCA/MTech
Certifications, if any: Full Stack, Big Data

Experience Range*
11+ Years

Foundational Skills*
Microservices & API Development: Strong proficiency in Python, building performant microservices and REST APIs using frameworks like FastAPI and Flask. API Gateway & Security: Hands-on experience with API gateway technologies like Apache APISIX (or similar, e.g., Kong, Envoy) for managing and securing API traffic, including JWT/OAuth2-based authentication. Observability & Monitoring: Proven ability to monitor, log, and troubleshoot model APIs and platform services using tools such as Prometheus, Grafana, or the ELK/EFK stack. Policy & Governance: Proficiency with Open Policy Agent (OPA) or similar policy-as-code frameworks for implementing and enforcing governance policies. MLOps Expertise: Solid understanding of MLOps capabilities, including ML model versioning, registry, and lifecycle automation using tools like MLflow, Kubeflow, or custom metadata solutions. Multi-Tenancy: Experience designing and implementing multi-tenant architectures for shared model and data infrastructure. Containerization & Orchestration: Strong knowledge of Docker and Kubernetes for containerization and orchestration. CI/CD & GitOps: Familiarity with CI/CD tools and GitOps practices for automated deployments and infrastructure management. Hybrid Cloud Deployments: Understanding of hybrid deployment strategies across on-premise virtual machines and public cloud platforms (AWS, Azure, GCP). Data Science Workbench Understanding: Basic understanding of the requirements for data science workloads (distributed training frameworks like Apache Spark and Dask, and IDEs like Jupyter notebooks and VS Code).

Desired Skills*
Security Architecture: Understanding of zero-trust security architecture and secure API design patterns. Model Serving Frameworks: Knowledge of specialized model serving frameworks like Triton Inference Server. Vector Databases: Familiarity with vector databases (e.g., Redis, Qdrant) and embedding stores.
Data Lineage & Metadata: Exposure to data lineage and metadata management using tools like DataHub or OpenMetadata. Codes solutions and unit tests to deliver a requirement/story per the defined acceptance criteria and compliance requirements. Utilizes multiple architectural components (across data, application, business) in the design and development of client requirements. Performs Continuous Integration and Continuous Deployment (CI/CD) activities. Contributes to story refinement and definition of requirements. Participates in estimating the work necessary to realize a story/requirement through the delivery lifecycle. Extensive hands-on experience supporting platforms that allow modelers and analysts to go through complete model lifecycle management (data munging, model development/training, governance, deployment). Experience with model deployment, scoring, and monitoring for batch and real-time workloads on various technologies and platforms. Experience with Hadoop clusters and integration, including ETL, streaming, and API styles of integration. Experience in automating deployments using Ansible playbooks and scripting. Design, build, and deploy streaming and batch data pipelines capable of processing and storing large datasets (TBs) quickly and reliably using Kafka, Spark, and YARN. Experience designing and building full-stack solutions utilizing distributed computing or multi-node architectures for large datasets (terabyte to petabyte scale). Experience with processing and deployment technologies such as YARN, Kubernetes/containers, and serverless compute for model development and training. Hands-on experience working in a cloud platform (AWS/Azure/GCP) to support the Data Science teams. Effective communication, strong stakeholder engagement skills, and a proven ability to lead and mentor a team of software engineers in a dynamic environment.

Work Timings*
11:30 AM to 8:30 PM IST

Job Location*
Hyderabad
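As a hedged illustration of the "microservices and REST APIs" skill listed above (not an actual Bank of America service), here is a minimal FastAPI inference endpoint; the model path, feature schema, and route names are assumptions.

```python
# Minimal model-serving sketch with FastAPI. Assumptions: a scikit-learn
# estimator serialized to model.joblib and a flat numeric feature vector.
from fastapi import FastAPI
from pydantic import BaseModel
import joblib

app = FastAPI(title="model-inference-service")
model = joblib.load("model.joblib")  # placeholder path; a registry would normally supply this

class Features(BaseModel):
    values: list[float]  # real services would validate a named feature schema

@app.get("/healthz")
def health() -> dict:
    # Liveness probe for Kubernetes/OpenShift-style deployments
    return {"status": "ok"}

@app.post("/predict")
def predict(features: Features) -> dict:
    score = float(model.predict_proba([features.values])[0][1])
    return {"score": score}
```

Run locally with uvicorn (`uvicorn app:app`); in the kind of platform described above, such a service would typically sit behind an API gateway enforcing JWT/OAuth2, expose metrics to Prometheus, and pull its model from a registry such as MLflow rather than a local file.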
Posted 1 week ago
0 years
5 - 8 Lacs
Hyderābād
On-site
Ready to shape the future of work? At Genpact, we don’t just adapt to change—we drive it. AI and digital innovation are redefining industries, and we’re leading the charge. Genpact’s AI Gigafactory, our industry-first accelerator, is an example of how we’re scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies’ most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that’s shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.

Inviting applications for the role of Vice President – Generative AI – Systems Architect

Role Overview:
We are looking for an experienced Systems Architect with extensive experience in designing and scaling Generative AI systems to production. This role requires an individual with deep expertise in system architecture, software engineering, data platforms, and AI infrastructure, who can bridge the gap between data science, engineering, and business. You will be responsible for the end-to-end architecture of GenAI systems, including model lifecycle management, inference, orchestration, and pipelines.

Key Responsibilities:
Architect and design end-to-end systems for production-grade Generative AI applications (e.g., LLM-based chatbots, copilots, content generation tools). Define and oversee system architecture covering data ingestion, model training/fine-tuning, inferencing, and deployment pipelines. Establish architectural tenets like modularity, scalability, reliability, observability, and maintainability. Collaborate with data scientists, ML engineers, platform engineers, and product managers to align architecture with business and AI goals. Choose and integrate foundation models (open source or proprietary) using APIs, model hubs, or fine-tuned versions. Evaluate and design solutions based on architecture patterns such as Retrieval-Augmented Generation (RAG), Agentic AI, Multi-modal AI, and Federated Learning. Design secure and compliant architecture for enterprise settings, including data governance, auditability, and access control. Lead system design reviews and define non-functional requirements (NFRs), including latency, availability, throughput, and cost. Work closely with MLOps teams to define the CI/CD processes for model and system updates. Contribute to the creation of reference architectures, design templates, and reusable components. Stay abreast of the latest advancements in GenAI, system design patterns, and AI platform tooling.

Qualifications we seek in you!
Minimum Qualifications
Proven experience designing and implementing distributed systems, cloud-native architectures, and microservices. Deep understanding of Generative AI architectures, including LLMs, diffusion models, prompt engineering, and model fine-tuning. Strong experience with at least one cloud platform (AWS, GCP, or Azure) and services like SageMaker, Vertex AI, or Azure ML.
Experience with Agentic AI systems or orchestrating multiple LLM agents. Experience with multimodal systems (e.g., combining image, text, video, and speech models). Knowledge of semantic search, vector databases, and retrieval techniques in RAG. Familiarity with Zero Trust architecture and advanced enterprise security practices. Experience in building developer platforms/toolkits for AI consumption. Contributions to open-source AI system frameworks or thought leadership in GenAI architecture. Hands-on experience with tools and frameworks like LangChain, Hugging Face, Ray, Kubeflow, MLflow, or Weaviate/FAISS. Knowledge of data pipelines, ETL/ELT, and data lakes/warehouses (e.g., Snowflake, BigQuery, Delta Lake). Solid grasp of DevOps and MLOps principles, including containerization (Docker), orchestration (Kubernetes), CI/CD pipelines, and model monitoring. Familiarity with system design tradeoffs in latency vs cost vs scale for GenAI workloads.

Preferred Qualifications:
Bachelor’s or Master’s degree in computer science, Engineering, or related field. Experience in software/system architecture, with experience in GenAI/AI/ML. Proven experience designing and implementing distributed systems, cloud-native architectures, and microservices. Strong interpersonal and communication skills; ability to collaborate and present to technical and executive stakeholders. Certifications in cloud platforms (e.g., AWS Certified Solutions Architect, Microsoft Certified: Azure Solutions Architect Expert, Google Cloud Professional Data Engineer). Familiarity with data governance and security best practices.

Why join Genpact?
Be a transformation leader – Work at the cutting edge of AI, automation, and digital innovation. Make an impact – Drive change for global enterprises and solve business challenges that matter. Accelerate your career – Get hands-on experience, mentorship, and continuous learning opportunities. Work with the best – Join 140,000+ bold thinkers and problem-solvers who push boundaries every day. Thrive in a values-driven culture – Our courage, curiosity, and incisiveness - built on a foundation of integrity and inclusion - allow your ideas to fuel progress.

Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let’s build tomorrow together. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.

Job: Vice President
Primary Location: India-Hyderabad
Schedule: Full-time
Education Level: Master's / Equivalent
Job Posting: Jul 24, 2025, 4:29:40 AM
Unposting Date: Ongoing
Master Skills List: Digital
Job Category: Full Time
Posted 1 week ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Purpose: Define the AI Research Team structure, vision, hiring plan, and guidelines to build a world-class AI foundation for HDFC Mobile & Net Banking.

Job Responsibilities:
Responsible for deploying ML models to production, setting up Smart SOC automation, Firebase/Play monitoring, crash-free sessions, and containerized AI infrastructure. Develop NLP systems to parse and understand statements in Hindi, Tamil, Marathi, and Bengali, and convert them into insights using custom LLMs. Ensure every deployed model meets audit, compliance, and cybersecurity standards. Collaborate with Security, Risk, Audit, and RBI compliance teams.

Key skills required:
A minimum of 7 years of experience is required. Python, TensorFlow, PyTorch, FastAPI; DevOps (Kubernetes, Docker), CI/CD for ML; cloud ML pipelines (GCP, Azure, AWS); familiarity with real-time logging and incident systems. NLP: HuggingFace, IndicNLP, BERT-family models; tokenization, NER, sentence parsing for regional text; fast training and inference for multilingual environments. ML explainability, fairness, adversarial defense; secure model deployment (threat models, RBAC); integration with audit logs and version tracking.
Posted 1 week ago
12.0 years
0 Lacs
Hyderābād
On-site
Country/Region: IN Requisition ID: 27741 Work Model: Position Type: Salary Range: Location: INDIA - HYDERABAD - BIRLASOFT OFFICE

Title: Architect

Description: Area(s) of responsibility

Job Title: Generative AI Technical Architect

Role Overview: The Generative AI Architect will design, develop, and implement advanced generative AI solutions that drive business impact. This role offers the opportunity to work at the forefront of AI innovation. The Architect will lead the end-to-end architecture, design, and deployment of scalable generative AI systems. Responsibilities include conceptualizing solutions, selecting models/frameworks, overseeing development, and integrating AI capabilities into platforms. Collaboration with stakeholders to translate complex requirements into high-performance AI solutions is key.

Key Responsibilities:
Design GenAI Solutions: Lead the architecture of generative AI systems, including LLM selection, RAG, and fine-tuning. Azure AI Expertise: Build scalable AI solutions using Azure AI services. Python Development: Write efficient, maintainable Python code for data processing, automation, and APIs. Model Optimization: Enhance model performance, scalability, and cost-efficiency. Data Strategy: Design data pipelines for training/inference using Azure data services. Integration & Deployment: Integrate models into enterprise systems; implement MLOps, CI/CD (Azure DevOps, GitHub, Jenkins), and containerization (Docker, Kubernetes). Technical Leadership: Guide teams on AI development and deployment best practices. Innovation: Stay updated on GenAI trends; drive PoCs and pilot implementations. Collaboration: Work with cross-functional teams to align AI solutions with business goals; communicate technical concepts to non-technical stakeholders.

Required Skills & Qualifications:
12–16 years in IT, with 3+ years in GenAI architecture. Technical Proficiency: Azure AI services; Python, TensorFlow, PyTorch, Hugging Face, LangChain, LlamaIndex; LLMs, transformers, diffusion models; prompt engineering, RAG, vector DBs (Pinecone, Weaviate, Chroma); MLOps, CI/CD, Kubernetes; RESTful API development. Architecture: Cloud, microservices, design patterns. Problem-Solving: Strong analytical and creative thinking. Communication: Clear articulation of complex concepts. Teamwork: Agile collaboration and project leadership.

Desirable: Azure AI certifications; experience with AWS; LLM fine-tuning; open-source contributions or AI/ML publications.
Posted 1 week ago
5.0 years
4 - 6 Lacs
Chennai
On-site
Flex is the diversified manufacturing partner of choice that helps market-leading brands design, build and deliver innovative products that improve the world. A career at Flex offers the opportunity to make a difference and invest in your growth in a respectful, inclusive, and collaborative environment. If you are excited about a role but don't meet every bullet point, we encourage you to apply and join us to create the extraordinary. Job Summary To support our extraordinary teams who build great products and contribute to our growth, we’re looking to add a Senior Specialist - Indirect Procurement Sourcing in Chennai, India. The Senior Specialist will be based in Chennai. Will be responsible for Indirect procurement operations, specialized in handling MRO / EDM / Facilities / Construction / New Building Projects Procurement, RFQ to support factory & GBS operation, execution of the strategic sourcing process and global policy compliance, setup goals and lead team to drive to achieve them, coach and develop talents. What a typical day looks like: Typically requires an Engineering degree in a related field. A minimum of 5 years of material and manufacturing experience, preferably from Manufacturing Industry (Automobile, Electronics Manufacturing) Demonstrates expert functional, technical and people and/or process management skills as well as customer (external and internal) relationship skills. Demonstrates detailed expertise in very complex functional/technical area or broad breadth of knowledge in multiple areas; understands the strategic impact of the function across sites Ability to read, analyze, and interpret the most complex documents. Ability to respond effectively to the most sensitive inquiries or complaints. Ability to write speeches and articles using original or innovative techniques or style. Ability to make effective and persuasive speeches and presentations on controversial or complex topics to top management, public groups, and/or boards of directors Ability to work with mathematical concepts such as probability and statistical inference to practical situations Ability to define problems, collect data, establish facts and draw valid conclusions. Ability to interpret an extensive variety of technical instructions in mathematical or diagram form and deal with several abstract and concrete variables The experience we’re looking to add to our team: 5 years+ working experience in manufacturing Sector (not service/consultant/trading company) Familiar with international company culture Dedicated role in supply chain management, including 3 years plus working experience in indirect procurement of MRO / EDM Facilities / Construction Projects Financial knowledge and cost management sense Working knowledge in ERP systems What you’ll receive for the great work you provide Health Insurance PTO #RA01 Job Category Global Procurement & Supply Chain Required Skills: Optional Skills: Flex pays for all costs associated with the application, interview or offer process, a candidate will not be asked for any payment related to these costs. Flex is an Equal Opportunity Employer and employment selection decisions are based on merit, qualifications, and abilities. We do not discriminate based on: age, race, religion, color, sex, national origin, marital status, sexual orientation, gender identity, veteran status, disability, pregnancy status, or any other status protected by law. We're happy to provide reasonable accommodations to those with a disability for assistance in the application process. 
Please email accessibility@flex.com and we'll discuss your specific situation and next steps (NOTE: this email does not accept or consider resumes or applications. This is only for disability assistance. To be considered for a position at Flex, you must complete the application process first).
Posted 1 week ago
0.0 - 1.0 years
5 - 6 Lacs
Ahmedabad
On-site
Position - 02
Job Location - Ahmedabad
Qualification - Bachelor’s or Master’s degree in Computer Science, Data Science, Artificial Intelligence, or a related field; relevant certifications or course completions (Coursera, edX, etc.) will be an advantage
Years of Exp - 0 to 1 year

About us
Bytes Technolab is a full-range web application development company, established in 2011, with an international presence in the USA, Australia, and India. Bytes has exhibited excellent craftsmanship in innovative web development, eCommerce solutions, and mobile application development services ever since its inception.

Roles & responsibilities
Support development and fine-tuning of Large Language Models (LLMs) using open-source or proprietary models (e.g., OpenAI, HuggingFace, LLaMA). Build and optimize Computer Vision pipelines for tasks such as object detection, image classification, and OCR. Design and implement data preprocessing pipelines, including handling structured and unstructured data. Assist in training, evaluation, and deployment of ML/DL models in staging or production environments. Write clean, scalable, and well-documented code for research and experimentation purposes. Collaborate with senior data scientists, ML engineers, and product teams on AI projects.

Skills required
Strong foundation in Neural Networks, Deep Learning, and ML algorithms. Hands-on experience with Python and libraries such as TensorFlow, PyTorch, OpenCV, or HuggingFace Transformers. Familiarity with LLM architectures (e.g., GPT, BERT, LLaMA) and their fine-tuning or inference techniques. Basic understanding of Computer Vision concepts and real-world use cases. Experience working with data pipelines and handling large datasets (e.g., Pandas, NumPy, data loaders). Knowledge of model evaluation techniques and metrics.

Good to Have
Experience using Hugging Face, LangChain, OpenCV, or YOLO for CV tasks. Familiarity with Prompt Engineering and Retrieval-Augmented Generation (RAG). Understanding of NLP concepts such as tokenization, embeddings, and vector search. Exposure to ML model deployment (Flask, FastAPI, Streamlit, or AWS/GCP/Azure). Participation in ML competitions (e.g., Kaggle) or personal projects on GitHub.

Soft Skills
Strong analytical and problem-solving skills. Willingness to learn and work in a collaborative team environment. Good communication and documentation habits.
Posted 1 week ago
3.0 years
1 - 4 Lacs
Indore
On-site
Job Title: AI/ML Engineer (Python + AWS + REST APIs)
Department: Web
Location: Indore
Job Type: Full-time
Experience: 3-5 years
Notice Period: 0-15 days (immediate joiners preferred)
Work Arrangement: On-site (Work from Office)

Overview: Advantal Technologies is seeking a passionate AI/ML Engineer to join our team in building the core AI-driven functionality of an intelligent visual data encryption system. The role involves designing, training, and deploying AI models (e.g., CLIP, DCGANs, Decision Trees), integrating them into a secure backend, and operationalizing the solution via AWS cloud services and Python-based APIs.

Key Responsibilities:
AI/ML Development: Design and train deep learning models for image classification and sensitivity tagging using CLIP, DCGANs, and Decision Trees. Build synthetic datasets using DCGANs for balancing. Fine-tune pre-trained models for customized encryption logic. Implement explainable classification logic for model outputs. Validate model performance using custom metrics and datasets.
API Development: Design and develop Python RESTful APIs using FastAPI or Flask for image upload and classification, model inference endpoints, and encryption trigger calls. Integrate APIs with AWS Lambda and Amazon API Gateway.
AWS Integration: Deploy and manage AI models on Amazon SageMaker for training and real-time inference. Use AWS Lambda for serverless backend compute. Store encrypted image data on Amazon S3 and metadata on Amazon RDS (PostgreSQL). Use AWS Cognito for secure user authentication and KMS for key management. Monitor job status via CloudWatch and enable secure, scalable API access.

Required Skills & Experience:
Must-Have: 3–5 years of experience in AI/ML (especially vision-based systems). Strong experience with PyTorch or TensorFlow for model development. Proficient in Python with experience building RESTful APIs. Hands-on experience with Amazon SageMaker, Lambda, API Gateway, and S3. Knowledge of OpenSSL/PyCryptodome or basic cryptographic concepts. Understanding of model deployment, serialization, and performance tuning.
Nice-to-Have: Experience with CLIP model fine-tuning. Familiarity with Docker, GitHub Actions, or CI/CD pipelines. Experience in data classification under compliance regimes (e.g., GDPR, HIPAA). Familiarity with multi-tenant SaaS design patterns.

Tools & Technologies: Python, PyTorch, TensorFlow; FastAPI, Flask; AWS: SageMaker, Lambda, S3, RDS, Cognito, API Gateway, KMS; Git, Docker, Postgres, OpenCV, OpenSSL.

If interested, please share your resume at hr@advantal.ne
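To illustrate the real-time inference piece of the stack described above, here is a hedged sketch of invoking an already-deployed SageMaker endpoint from Python; the endpoint name, region, and JSON payload shape are assumptions, not Advantal's actual design.

```python
# Sketch of calling a deployed SageMaker real-time endpoint with boto3.
# Assumptions: an endpoint named "image-sensitivity-classifier" exists and
# accepts a JSON body with a base64-encoded image.
import base64
import json
import boto3

runtime = boto3.client("sagemaker-runtime", region_name="ap-south-1")  # assumed region

def classify_image(image_path: str) -> dict:
    with open(image_path, "rb") as f:
        payload = json.dumps({"image_b64": base64.b64encode(f.read()).decode()})
    response = runtime.invoke_endpoint(
        EndpointName="image-sensitivity-classifier",  # placeholder endpoint name
        ContentType="application/json",
        Body=payload,
    )
    return json.loads(response["Body"].read())

print(classify_image("sample.jpg"))
```

In the architecture the posting outlines, a Lambda handler behind API Gateway would wrap a call like this and then trigger the encryption step based on the returned sensitivity label.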
Posted 1 week ago
6.0 years
0 Lacs
Visakhapatnam
On-site
Job Title: Machine Learning Engineer – 3D Graphics
Location: Visakhapatnam, UAE
Experience: 6+ years
Job Type: Full-Time

Role: We are seeking a highly skilled and innovative Machine Learning Engineer with 3D Graphics expertise. In this role, you will be responsible for developing and optimizing 3D mannequin models using machine learning algorithms, computer vision techniques, and 3D rendering tools. You will collaborate with backend developers, data scientists, and UI/UX designers to create realistic, scalable, and interactive 3D visualization modules that enhance the user experience.

Key Responsibilities:
3D Mannequin Model Development: Design and develop 3D mannequin models using ML-based body shape estimation. Implement pose estimation, texture mapping, and deformation models. Use ML algorithms to adjust measurements for accurate sizing and fit.
Machine Learning & Computer Vision: Develop and fine-tune ML models for body shape recognition, segmentation, and fitting. Implement pose detection algorithms using TensorFlow, PyTorch, or OpenCV. Use GANs or CNNs for realistic 3D texture generation.
3D Graphics & Visualization: Create interactive 3D rendering pipelines using Three.js, Babylon.js, or Unity. Optimize mesh processing, lighting, and shading for real-time rendering. Use GPU-accelerated techniques for rendering efficiency.
Model Optimization & Performance: Optimize inference pipelines for faster real-time rendering. Implement multi-threading and parallel processing for high performance. Utilize cloud infrastructure (AWS/GCP) for distributed model training and inference.
Collaboration & Documentation: Collaborate with UI/UX designers for seamless integration of 3D models into web and mobile apps. Maintain detailed documentation for model architecture, training processes, and rendering techniques.

Key Skills & Qualifications:
Experience: 5+ years in Machine Learning, Computer Vision, and 3D Graphics Development.
Technical Skills: Proficiency in Django, Python, TensorFlow, PyTorch, and OpenCV. Strong expertise in 3D rendering frameworks: Three.js, Babylon.js, or Unity. Experience with 3D model formats (GLTF, OBJ, FBX). Familiarity with Mesh Recovery, PyMAF, and SMPL models.
ML & Data Skills: Hands-on experience with GANs, CNNs, and RNNs for texture and pattern generation. Experience with 3D pose estimation and body measurement algorithms.
Cloud & Infrastructure: Experience with AWS (SageMaker, Lambda) or GCP (Vertex AI, Cloud Run). Knowledge of Docker and Kubernetes for model deployment.
Graphics & Visualization: Knowledge of 3D rendering engines with shader programming. Experience in optimization techniques for rendering large 3D models.
Soft Skills: Strong problem-solving skills and attention to detail. Excellent collaboration and communication skills.

Interested candidates can send their updated resume to: careers@onliestworld.com

Job Type: Full-time
Posted 1 week ago
4.0 - 6.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Job Family: Data Science & Analysis (India)
Travel Required: Up to 10%
Clearance Required: None

What You Will Do
Design, train, and fine-tune advanced foundational models (text, audio, vision) using healthcare and other relevant datasets, focusing on accuracy and context relevance. Collaborate with cross-functional teams (business, engineering, IT) to seamlessly integrate AI/ML technologies into our solution offerings. Deploy, monitor, and manage AI models in a production environment, ensuring high availability, scalability, and performance. Continuously research and evaluate the latest advancements in AI/ML and industry trends to drive innovation. Develop and maintain comprehensive documentation for AI models, including development, training, fine-tuning, and deployment procedures. Provide technical guidance and mentorship to junior AI engineers and team members. Collaborate with stakeholders to understand business needs and translate them into technical requirements for model fine-tuning and development. Select and curate appropriate datasets for fine-tuning foundational models to address specific use cases. Ensure AI solutions can seamlessly integrate with existing systems and applications.

What You Will Need
Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Machine Learning, or a related field. 4 to 6 years of hands-on experience in AI/ML, with a demonstrable track record of training and deploying LLMs and other machine learning models. Strong proficiency in Python and familiarity with popular AI/ML frameworks (TensorFlow, PyTorch, Hugging Face Transformers, etc.). Practical experience deploying and managing AI models in production environments, including expertise in serving and inference frameworks (Triton, TensorRT, vLLM, TGI, etc.). Experience in Voice AI applications and a solid understanding of healthcare data standards (FHIR, HL7, EDI) and regulatory compliance (HIPAA, SOC 2) is preferred. Excellent problem-solving and analytical abilities, capable of tackling complex challenges and evaluating multiple factors. Exceptional communication and collaboration skills, enabling effective teamwork in a dynamic environment. Worked on a minimum of 2 AI/LLM projects from beginning to end with proven business value.

What Would Be Nice To Have
Experience with cloud computing platforms (AWS, Azure) and containerization technologies (Docker, Kubernetes) is a plus. Familiarity with MLOps practices for continuous integration, continuous deployment (CI/CD), and automated monitoring of AI models.

What We Offer
Guidehouse offers a comprehensive, total rewards package that includes competitive compensation and a flexible benefits package that reflects our commitment to creating a diverse and supportive workplace.

About Guidehouse
Guidehouse is an Equal Opportunity Employer–Protected Veterans, Individuals with Disabilities or any other basis protected by law, ordinance, or regulation. Guidehouse will consider for employment qualified applicants with criminal histories in a manner consistent with the requirements of applicable law or ordinance including the Fair Chance Ordinance of Los Angeles and San Francisco. If you have visited our website for information about employment opportunities, or to apply for a position, and you require an accommodation, please contact Guidehouse Recruiting at 1-571-633-1711 or via email at RecruitingAccommodation@guidehouse.com.
All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodation. All communication regarding recruitment for a Guidehouse position will be sent from Guidehouse email domains including @guidehouse.com or guidehouse@myworkday.com. Correspondence received by an applicant from any other domain should be considered unauthorized and will not be honored by Guidehouse. Note that Guidehouse will never charge a fee or require a money transfer at any stage of the recruitment process and does not collect fees from educational institutions for participation in a recruitment event. Never provide your banking information to a third party purporting to need that information to proceed in the hiring process. If any person or organization demands money related to a job opportunity with Guidehouse, please report the matter to Guidehouse’s Ethics Hotline. If you want to check the validity of correspondence you have received, please contact recruiting@guidehouse.com. Guidehouse is not responsible for losses incurred (monetary or otherwise) from an applicant’s dealings with unauthorized third parties. Guidehouse does not accept unsolicited resumes through or from search firms or staffing agencies. All unsolicited resumes will be considered the property of Guidehouse and Guidehouse will not be obligated to pay a placement fee.
Posted 1 week ago
2.0 - 4.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Havas CSA is seeking a Data Scientist with 2-4 years of experience to contribute to advanced analytics and predictive modelling initiatives. The ideal candidate will combine strong statistical knowledge with practical business understanding to help develop and implement models that drive customer value and business growth. Responsibilities: Implement and maintain customer analytics models including CLTV prediction, propensity modelling, and churn prediction Support the development of customer segmentation models using clustering techniques and behavioural analysis Assist in building and maintaining survival models to analyze customer lifecycle events Work with large-scale datasets using BigQuery and Snowflake Develop and validate machine learning models using Python and cloud-based ML platforms, specifically BQ ML, ModelGarden and Amazon Bedrock Help transform model insights into actionable business recommendations Collaborate with analytics and activation teams to implement model outputs Present analyses to stakeholders in clear, actionable formats Qualifications: Bachelor's or master’s degree in Statistics, Mathematics, Computer Science, or related quantitative field 1-2 years’ experience in applied data science, preferably in marketing/retail Experience in developing and implementing machine learning models Strong understanding of statistical concepts and experimental design Ability to communicate technical concepts to non-technical audiences Familiarity with agile development methodologies Technical Skills: Advanced proficiency in: SQL and data warehouses (BigQuery, Snowflake) Python for statistical modeling Machine learning frameworks (scikit-learn, TensorFlow) Statistical analysis and hypothesis testing Data visualization tools (Matplotlib, Seaborn) Version control systems (Git) Understanding of Google Cloud Function and Cloud Run Experience with: Customer lifetime value modeling RFM analysis and customer segmentation Survival analysis and hazard modeling A/B testing and causal inference Feature engineering and selection Model validation and monitoring Cloud computing platforms (GCP/AWS/Azure) Key Projects & Deliverables Support development and maintenance of CLTV models Contribute to customer segmentation models incorporating behavioral and transactional data Implement survival models to predict customer churn Support the development of attribution models for marketing effectiveness Help develop recommendation engines for personalized customer experiences Assist in creating automated reporting and monitoring systems Soft Skills Strong analytical and problem-solving abilities Good communication and presentation skills Business acumen Collaborative team player Strong organizational skills Ability to translate business problems into analytical solutions Growth Opportunities Work on innovative data science projects for major brands Develop expertise in cutting-edge ML technologies Learn from experienced data science leaders Contribute to impactful analytical solutions Opportunity for career advancement We offer competitive compensation, comprehensive benefits, and the opportunity to work with leading brands while solving complex analytical challenges. Join our team to grow your career while making a significant impact through data-driven decision making. Contract Type : Permanent Here at Havas across the group we pride ourselves on being committed to offering equal opportunities to all potential employees and have zero tolerance for discrimination. 
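As a hedged illustration of the propensity/churn modelling listed above (not a Havas dataset or pipeline), here is a minimal scikit-learn sketch; the CSV file and RFM-style column names are assumptions.

```python
# Minimal churn-propensity sketch with scikit-learn. Assumptions: a customer
# extract (e.g., exported from BigQuery) with RFM-style features and a binary
# "churned" label; column names are illustrative only.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("customers.csv")  # placeholder extract
features = ["recency_days", "frequency_90d", "monetary_90d", "tenure_months"]

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["churned"], test_size=0.2, random_state=42, stratify=df["churned"]
)

model = GradientBoostingClassifier(random_state=42)
model.fit(X_train, y_train)

auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Hold-out ROC AUC: {auc:.3f}")
```

The resulting scores could feed segmentation or activation rules; a survival model (for example, built with the lifelines library) would replace the classifier when time-to-churn, rather than a binary label, is the quantity of interest.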
We are an equal opportunity employer and welcome applicants irrespective of age, sex, race, ethnicity, disability and other factors that have no bearing on an individual’s ability to perform their job.
Posted 1 week ago
0 years
0 Lacs
Bhopal, Madhya Pradesh, India
Remote
Internship Opportunity – Computer Vision Engineer
Location: Remote
Duration: 3 to 6 months
Company: Logiclens Solutions

Note: This is an unpaid internship. Post-internship, we may provide a full-time opportunity if the candidate performs well.

About Us: Logiclens Solutions is a cutting-edge AI and video analytics company delivering real-time surveillance intelligence to businesses. We specialize in computer vision applications such as object detection, facial recognition, apparatus detection, people tracking, and automated reporting through AI-driven dashboards.

Role: Computer Vision Intern

Key Responsibilities:
Assist in designing and implementing computer vision pipelines using OpenCV and other libraries. Build and optimize object detection and image/video processing models. Integrate machine learning models into scalable web services using FastAPI or Flask. Develop and maintain frontend dashboards using React.js, Tailwind CSS, and HTML/CSS. Work with RESTful APIs and manage seamless backend-frontend communication. Handle database operations using MongoDB and MySQL. Deploy applications using Docker containers on AWS (EC2, S3, etc.). Contribute to CI/CD pipelines using GitHub Actions or GitLab CI/CD. Collaborate using Git for version control and code reviews.

Required Skills:
Computer Vision: OpenCV (image/video processing, object detection). Languages: Python, JavaScript. Web Frameworks: FastAPI, Flask, Node.js, Express.js. Frontend: React.js, HTML, CSS, Tailwind CSS. Databases: MongoDB, MySQL. DevOps & Deployment - Cloud: AWS (EC2, S3, etc.); Containers: Docker; CI/CD: familiarity with GitHub Actions, GitLab CI/CD; Version Control: Git & GitHub. API: RESTful API design and integration.

Bonus Skills:
Strong understanding of cloud-based deployments (especially AWS). Exposure to computer vision model training and optimization. Experience with ML model inference pipelines.

What We Offer:
Real-world exposure to AI-driven projects in surveillance and analytics. Opportunity to work with a skilled tech team in production-level environments. Certificate, letter of recommendation, and potential full-time opportunity based on performance.
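For a sense of what a basic computer-vision pipeline like the ones described above can look like, here is a hedged OpenCV sketch using background subtraction; the video path and area threshold are placeholder assumptions, not Logiclens code.

```python
# Simple OpenCV pipeline sketch: background subtraction plus contour filtering
# as a stand-in for motion/object detection on surveillance footage.
import cv2

cap = cv2.VideoCapture("input.mp4")  # placeholder source; use 0 for a webcam
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=32)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    mask = cv2.medianBlur(mask, 5)  # suppress speckle noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for contour in contours:
        if cv2.contourArea(contour) > 500:  # ignore tiny blobs (assumed threshold)
            x, y, w, h = cv2.boundingRect(contour)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

A production version would swap the subtractor for a trained detector and push results to the FastAPI/React dashboard stack mentioned in the posting.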
Posted 1 week ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Flex is the diversified manufacturing partner of choice that helps market-leading brands design, build and deliver innovative products that improve the world. We believe in the power of diversity and inclusion and cultivate a workplace culture of belonging that views uniqueness as a competitive edge and builds a community that enables our people to push the limits of innovation to make great products that create value and improve people's lives. A career at Flex offers the opportunity to make a difference and invest in your growth in a respectful, inclusive, and collaborative environment. If you are excited about a role but don't meet every bullet point, we encourage you to apply and join us to create the extraordinary. To support our extraordinary teams who build great products and contribute to our growth, we’re looking to add a Senior Specialist - Indirect Procurement Sourcing in Chennai, India. The Senior Specialist will be based in Chennai. Will be responsible for Indirect procurement operations, specialized in handling MRO / EDM / Facilities / Construction / New Building Projects Procurement, RFQ to support factory & GBS operation, execution of the strategic sourcing process and global policy compliance, setup goals and lead team to drive to achieve them, coach and develop talents. What a typical day looks like: Typically requires an Engineering degree in a related field. A minimum of 5 years of material and manufacturing experience, preferably from Manufacturing Industry (Automobile, Electronics Manufacturing) Demonstrates expert functional, technical and people and/or process management skills as well as customer (external and internal) relationship skills. Demonstrates detailed expertise in very complex functional/technical area or broad breadth of knowledge in multiple areas; understands the strategic impact of the function across sites Ability to read, analyze, and interpret the most complex documents. Ability to respond effectively to the most sensitive inquiries or complaints. Ability to write speeches and articles using original or innovative techniques or style. Ability to make effective and persuasive speeches and presentations on controversial or complex topics to top management, public groups, and/or boards of directors Ability to work with mathematical concepts such as probability and statistical inference to practical situations Ability to define problems, collect data, establish facts and draw valid conclusions. Ability to interpret an extensive variety of technical instructions in mathematical or diagram form and deal with several abstract and concrete variables The experience we’re looking to add to our team: 5 years+ working experience in manufacturing Sector (not service/consultant/trading company) Familiar with international company culture Dedicated role in supply chain management, including 3 years plus working experience in indirect procurement of MRO / EDM Facilities / Construction Projects Financial knowledge and cost management sense Working knowledge in ERP systems What you’ll receive for the great work you provide Health Insurance PTO #RA01 Site Flex is an Equal Opportunity Employer and employment selection decisions are based on merit, qualifications, and abilities. We celebrate diversity and do not discriminate based on: age, race, religion, color, sex, national origin, marital status, sexual orientation, gender identity, veteran status, disability, pregnancy status, or any other status protected by law. 
We're happy to provide reasonable accommodations to those with a disability for assistance in the application process. Please email accessibility@flex.com and we'll discuss your specific situation and next steps (NOTE: this email does not accept or consider resumes or applications. This is only for disability assistance. To be considered for a position at Flex, you must complete the application process first).
Posted 1 week ago
2.0 years
12 - 28 Lacs
Coimbatore, Tamil Nadu, India
On-site
Experience: 3 to 10 years
Location: Coimbatore
Notice Period: Immediate joiners are preferred.
Note: A minimum of 2 years of experience in core Gen AI is required.

Key Responsibilities:
Design, develop, and fine-tune Large Language Models (LLMs) for various in-house applications. Implement and optimize Retrieval-Augmented Generation (RAG) techniques to enhance AI response quality. Develop and deploy Agentic AI systems capable of autonomous decision-making and task execution. Build and manage data pipelines for processing, transforming, and feeding structured/unstructured data into AI models. Ensure scalability, performance, and security of AI-driven solutions in production environments. Collaborate with cross-functional teams, including data engineers, software developers, and product managers. Conduct experiments and evaluations to improve AI system accuracy and efficiency. Stay updated with the latest advancements in AI/ML research, open-source models, and industry best practices.

Required Skills & Qualifications:
Strong experience in LLM fine-tuning using frameworks like Hugging Face, DeepSpeed, or LoRA/PEFT. Hands-on experience with RAG architectures, including vector databases (e.g., Pinecone, ChromaDB, Weaviate, OpenSearch, FAISS). Experience in building AI agents using LangChain, LangGraph, CrewAI, AutoGPT, or similar frameworks. Proficiency in Python and deep learning frameworks like PyTorch or TensorFlow. Experience in Python web frameworks such as FastAPI, Django, or Flask. Experience in designing and managing data pipelines using tools like Apache Airflow, Kafka, or Spark. Knowledge of cloud platforms (AWS/GCP/Azure) and containerization technologies (Docker, Kubernetes). Familiarity with LLM APIs (OpenAI, Anthropic, Mistral, Cohere, Llama, etc.) and their integration in applications. Strong understanding of vector search, embedding models, and hybrid retrieval techniques. Experience with optimizing inference and serving AI models in real-time production systems.

Nice-to-Have Skills:
Experience with multi-modal AI (text, image, audio). Familiarity with privacy-preserving AI techniques and responsible AI frameworks. Understanding of MLOps best practices, including model versioning, monitoring, and deployment automation.

Skills: pytorch, rag architectures, opensearch, weaviate, docker, llm fine-tuning, chromadb, apache airflow, lora, python, hybrid retrieval techniques, django, gcp, crewai, openai, hugging face, gen ai, pinecone, faiss, aws, autogpt, embedding models, flask, fastapi, llm apis, deepspeed, vector search, peft, langchain, azure, spark, kubernetes, ai gen, tensorflow, real-time production systems, langgraph, kafka
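To ground the RAG requirement above, here is a minimal, hedged retrieval sketch with sentence-transformers and FAISS; the documents and embedding model are illustrative assumptions, and the LLM generation step is omitted.

```python
# Minimal RAG retrieval step: embed documents, index them in FAISS, and fetch
# the nearest chunks for a query. Model name and documents are placeholders.
import faiss
from sentence_transformers import SentenceTransformer

documents = [
    "Invoices are processed within three business days.",
    "Refunds above 10,000 INR require manager approval.",
    "Support tickets are triaged by severity every morning.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
doc_vectors = encoder.encode(documents, normalize_embeddings=True)

index = faiss.IndexFlatIP(doc_vectors.shape[1])  # inner product == cosine on normalized vectors
index.add(doc_vectors)

query = "How long does invoice processing take?"
query_vec = encoder.encode([query], normalize_embeddings=True)
scores, ids = index.search(query_vec, 2)  # top-2 chunks

for score, i in zip(scores[0], ids[0]):
    print(f"{score:.3f}  {documents[i]}")
```

The retrieved chunks would then be packed into the prompt of whichever LLM API the team uses, which is the step that frameworks like LangChain or LangGraph orchestrate.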
Posted 1 week ago
5.0 years
0 Lacs
Jaipur, Rajasthan, India
On-site
Full Stack + AI/ML

Position: Full Stack + AI/ML
Experience: 2–5 Years
Location: Jaipur/Gurgaon
Type: Full-Time

About the Role: We’re looking for a Full Stack Developer with a strong foundation in AI/ML to help us build intelligent, scalable, and user-centric products. You’ll work at the intersection of development and data, building web platforms and integrating machine learning solutions into real-world applications.

Key Responsibilities:
Develop and maintain full-stack applications using React.js / Next.js and Python (Django/Flask). Integrate machine learning models into production-grade systems. Collaborate with data scientists to build APIs for ML outputs. Manage backend logic, RESTful APIs, and database architecture (SQL/MongoDB). Deploy scalable services using AWS / GCP / Azure. Optimize application performance, data pipelines, and ML inference processes.

Required Skills:
Proficiency in JavaScript (React.js) and Python (Django or Flask). Hands-on experience with AI/ML frameworks (e.g., TensorFlow, PyTorch, Scikit-learn). Understanding of REST APIs, microservices, and cloud deployment. Familiarity with data structures, algorithm design, and model integration. Exposure to DevOps tools, version control (Git), and CI/CD pipelines.

Nice to Have:
Experience working with OpenAI APIs, LangChain, or similar LLM frameworks. Background in building analytics dashboards or AI-driven web apps. Knowledge of containerization (Docker/Kubernetes).

Why Join Us?
Work on high-impact AI projects from ideation to deployment. Be part of a fast-growing, innovation-driven tech team. Flexible work culture, ownership, and growth opportunities.
Posted 1 week ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: AI/ML Engineer Location: Hyderabad (Onsite) Experience: 5+ years Role Overview We are hiring a Mid-Level AI/ML Engineer with 5+ years of experience in designing, developing, and deploying AI/ML solutions. The role involves integrating LLMs, Agentic AI, RAG pipelines, and anomaly detection into cloud/on-prem platforms, enabling natural language interfaces and intelligent automation. Candidates must be hands-on with the Python ML stack, own the MLOps lifecycle, and be capable of translating cybersecurity problems into scalable ML solutions. Key Responsibilities LLM & Chatbot Integration: Build conversational AI using LLMs with context-awareness, domain adaptation, and natural language interaction. Retrieval-Augmented Generation (RAG): Implement vector search with semantic retrieval to ground LLM responses in internal data. Agentic AI: Create autonomous agents to execute multi-step actions using APIs, tools, or reasoning chains for automated workflows. Anomaly Detection & UEBA: Develop ML models for user behaviour analytics, threat detection, and alert tuning. NLP & Insights Generation: Transform user queries into actionable security insights, reports, and policy recommendations. MLOps Ownership: Manage the end-to-end model lifecycle – training, validation, deployment, monitoring, versioning. Required Skills Strong Python experience with ML frameworks: TensorFlow, PyTorch, scikit-learn. Hands-on with LLMs (OpenAI, Hugging Face, etc.), prompt engineering, fine-tuning, and inference optimization. Experience implementing RAG using FAISS, Pinecone, or similar. Familiarity with LangChain, agentic frameworks, and multi-agent orchestration. Solid understanding of MLOps: Docker, CI/CD, deployment on cloud/on-prem infrastructure. Security-conscious development practices and the ability to work with structured/unstructured security data. Preferred Bachelor’s degree in Computer Science, preferably with a focus on Data Science or an AI-related field. Experience with cybersecurity use cases: CVE analysis, behaviour analytics, compliance, log processing. Knowledge of open-source LLMs (LLaMA, Mistral, etc.) and cost-efficient deployment methods. Background in chatbots, Rasa, or custom NLP-driven assistants. Exposure to agent tools (LangChain Agents, AutoGPT-style flows) and plugin integration.
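The anomaly detection / UEBA responsibility above is commonly prototyped with an unsupervised model over per-user behavioural features before any bespoke tuning. A minimal sketch using scikit-learn's IsolationForest; the feature names (login_count, bytes_sent, failed_auths) and the synthetic data are invented placeholders, not real telemetry or the employer's actual approach.

```python
# Minimal UEBA-style anomaly detection sketch with IsolationForest.
# Feature names and data are illustrative placeholders, not real telemetry.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
normal = pd.DataFrame({
    "login_count": rng.poisson(8, 500),
    "bytes_sent": rng.normal(2e6, 4e5, 500),
    "failed_auths": rng.poisson(1, 500),
})
# Inject a few suspicious rows: many failed logins, unusual data volume.
suspicious = pd.DataFrame({
    "login_count": [40, 55],
    "bytes_sent": [9e7, 1.2e8],
    "failed_auths": [25, 31],
})
events = pd.concat([normal, suspicious], ignore_index=True)

model = IsolationForest(contamination=0.01, random_state=0).fit(events)
preds = model.predict(events)            # -1 = anomaly, 1 = normal
scores = model.decision_function(events) # lower = more anomalous

events["anomaly"] = preds
events["score"] = scores
print(events[events["anomaly"] == -1].sort_values("score").head())
```

The flagged rows would feed alert tuning and analyst review rather than being treated as verdicts on their own.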
Posted 1 week ago
8.0 years
36 - 60 Lacs
Pune, Maharashtra, India
On-site
About Velsera Medicine moves too slow. At Velsera, we are changing that. Velsera was formed in 2023 through the shared vision of Seven Bridges and Pierian, with a mission to accelerate the discovery, development, and delivery of life-changing insights. Velsera provides software and professional services for: AI-powered multimodal data harmonization and analytics for drug discovery and development IVD development, validation, and regulatory approval Clinical NGS interpretation, reporting, and adoption With our headquarters in Boston, MA, we are growing and expanding our teams located in different countries! What will you do? Lead and participate in collaborative solutioning sessions with business stakeholders, translating business requirements and challenges into well-defined machine learning/data science use cases and comprehensive AI solution specifications. Architect robust and scalable AI solutions that enable data-driven decision-making, leveraging a deep understanding of statistical modeling, machine learning, and deep learning techniques to forecast business outcomes and optimize performance. Design and implement data integration strategies to unify and streamline diverse data sources, creating a consistent and cohesive data landscape for AI model development. Develop efficient and programmatic methods for synthesizing large volumes of data, extracting relevant features, and preparing data for AI model training and validation. Leverage advanced feature engineering techniques and quantitative methods, including statistical modeling, machine learning, deep learning, and generative AI, to implement, validate, and optimize AI models for accuracy, reliability, and performance. Simplify data presentation to help stakeholders easily grasp insights and make informed decisions. Maintain a deep understanding of the latest advancements in AI and generative AI, including various model architectures, training methodologies, and evaluation metrics. Identify opportunities to leverage generative AI to securely and ethically address business needs, optimize existing processes, and drive innovation. Contribute to project management processes, providing regular status updates, and ensuring the timely delivery of high-quality AI solutions. Primarily responsible for contributing to project delivery and maximizing business impact through effective AI solution architecture and implementation. Occasionally contribute technical expertise during pre-sales engagements and support internal operational improvements as needed. Requirements What do you bring to the table? A bachelor's or master's degree in a quantitative field (e.g., Computer Science, Statistics, Mathematics, Engineering) is required. The ideal candidate will have a strong background in designing and implementing end-to-end AI/ML pipelines, including feature engineering, model training, and inference. Experience with Generative AI pipelines is needed. 8+ years of experience in AI/ML development, with at least 3+ years in an AI architecture role. Fluency in Python, SQL, and NoSQL is essential. Experience with common data science libraries such as pandas and Scikit-learn, as well as deep learning frameworks like PyTorch and TensorFlow, is required. Hands-on experience with cloud-based AI/ML platforms and tools, such as AWS (SageMaker, Bedrock), GCP (Vertex AI, Gemini), Azure AI Studio, or OpenAI, is a must. This includes experience with deploying and managing models in the cloud.
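The end-to-end pipeline experience called for above (feature engineering, model training, inference) is often demonstrated with a single scikit-learn Pipeline, so the exact same preprocessing runs at training time and at inference time. A minimal sketch follows; the column names (age, tumor_stage, biomarker_level) and data are invented purely for illustration and are not a real clinical schema or Velsera's method.

```python
# Minimal end-to-end tabular pipeline sketch: preprocessing + model in one object,
# so identical transformations are applied during training and inference.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Illustrative columns only, not a real clinical schema.
df = pd.DataFrame({
    "age": [54, 61, 47, 70, 58, 66],
    "tumor_stage": ["II", "III", "I", "III", "II", "IV"],
    "biomarker_level": [1.2, 3.4, 0.8, 4.1, 2.2, 5.0],
    "responder": [1, 0, 1, 0, 1, 0],
})
X, y = df.drop(columns="responder"), df["responder"]

preprocess = ColumnTransformer([
    ("num", StandardScaler(), ["age", "biomarker_level"]),
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["tumor_stage"]),
])
pipeline = Pipeline([("prep", preprocess), ("clf", LogisticRegression(max_iter=1000))])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0)
pipeline.fit(X_train, y_train)
print("held-out accuracy:", pipeline.score(X_test, y_test))
```

Keeping preprocessing inside the pipeline object is what lets the same artifact be versioned, deployed, and served consistently in the cloud platforms listed above.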
Benefits Flexible Work & Time Off - Embrace hybrid work models and enjoy the freedom of unlimited paid time off to support work-life balance Health & Well-being - Access comprehensive group medical and life insurance coverage, along with a 24/7 Employee Assistance Program (EAP) for mental health and wellness support Growth & Learning - Fuel your professional journey with continuous learning and development programs designed to help you upskill and grow Recognition & Rewards - Get recognized for your contributions through structured reward programs and campaigns Engaging & Fun Work Culture - Experience a vibrant workplace with team events, celebrations, and engaging activities that make every workday enjoyable & Many More..
Posted 1 week ago
0.0 - 8.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
Designation: Assistant Manager Experience: 5 to 8 years Location: Chennai, Tamil Nadu, India (CHN) Job Description: 5+ years of experience working in web, product, marketing, or other related analytics fields to solve marketing/product business problems 4+ years of experience in designing and executing experiments (A/B and multivariate) with a deep understanding of the statistics behind hypothesis testing Proficient in alternative A/B testing methods like DiD, synthetic control, and other causal inference techniques 5+ years of technical proficiency in SQL, Python or R and data visualization tools like Tableau 5+ years of experience in manipulating and analyzing large complex datasets (e.g. clickstream data), constructing data pipelines (ETL) and working on big data technologies (e.g., Redshift, Spark, Hive, BigQuery) and solutions from cloud platforms and visualization tools like Tableau 3+ years of experience in web analytics, analyzing website traffic patterns and conversion funnels 5+ years of experience in building ML models (e.g., regression, clustering, trees) for personalization applications Demonstrated ability to drive strategy, execution and insights for AI-native experiences across the development lifecycle (ideation, discovery, experimentation, scaling) Outstanding communication skills with both technical and non-technical audiences Ability to tell stories with data, influence business decisions at a leadership level, and provide solutions to business problems Ability to manage multiple projects simultaneously to meet objectives and key deadlines Responsibilities: Drive measurement strategy and lead the end-to-end process of A/B testing for areas of web optimization such as landing pages, user funnel, navigation, checkout, product lineup, pricing, search and monetization opportunities Analyze web user behavior at both visitor and session level using clickstream data by anchoring to key web metrics and identify user behavior through engagement and pathing analysis Leverage AI/GenAI tools for automating tasks and building custom implementations Use data, strategic thinking and advanced scientific methods including predictive modeling to enable data-backed decision making for Intuit at scale Measure performance and impact of various product releases Demonstrate strategic thinking and systems thinking to solve business problems and influence strategic decisions using data storytelling. Partner with GTM, Product, Engineering, and Design teams to drive analytics projects end to end Build models to identify patterns in traffic and user behavior to inform acquisition strategies and optimize for business outcomes Skills: 5 to 8 years in the DA domain (web, product, marketing), A/B testing methods like DiD and synthetic control, constructing data pipelines (ETL), big data technologies (e.g., Redshift, Spark, Hive, BigQuery), SQL, Python or R and Tableau, web analytics, analyzing website traffic patterns and conversion funnels, ML models (e.g., regression, clustering, trees), managerial skills Job Snapshot Updated Date 24-07-2025 Job ID J_3934 Location Chennai, Tamil Nadu, India Experience 5 - 8 Years Employee Type Permanent
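Difference-in-differences (DiD), named above as an alternative to classic A/B testing, is usually estimated as the interaction term in a two-way regression: compare treated vs. control groups before and after a change, under the parallel-trends assumption. A minimal sketch with statsmodels on a synthetic panel; all numbers, column names, and the "new checkout flow" framing are invented for illustration.

```python
# Minimal difference-in-differences sketch: the `treated:post` interaction
# coefficient estimates the treatment effect under parallel trends.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 2000
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),   # 1 = exposed group (e.g. saw the new checkout flow)
    "post": rng.integers(0, 2, n),      # 1 = observation after the launch date
})
# Synthetic conversion-like outcome with a true lift of 0.5 for treated users post-launch.
df["conversions"] = (
    5 + 1.0 * df["treated"] + 0.8 * df["post"]
    + 0.5 * df["treated"] * df["post"]
    + rng.normal(0, 1, n)
)

model = smf.ols("conversions ~ treated * post", data=df).fit()
print("DiD estimate of lift:", model.params["treated:post"])
print(model.conf_int().loc["treated:post"])   # 95% confidence interval
```

The same pattern generalizes to synthetic control and other quasi-experimental designs when a randomized A/B test is not feasible.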
Posted 1 week ago
0.0 - 25.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Staff AI Engineer Bangalore, Karnataka Opportunity: Get Well is seeking a highly skilled and proficient Staff Engineer - AI Engineering to drive the development and implementation of advanced AI solutions. This role focuses on the technical execution of AI projects, leveraging expertise in Large Language Models (LLMs), multimodal AI systems, and agentic systems to create innovative healthcare solutions. You will work closely with AI engineering teams, product teams, and cross-functional stakeholders to translate technological capabilities into impactful healthcare solutions, serving as a key technical liaison. This position reports directly to the VP, Product Development, working collaboratively across multiple teams to advance the organization's AI capabilities and strategic objectives. Responsibilities: AI Solution Development Design, develop, and implement AI solutions leveraging unimodal and multimodal Large Language Models (LLMs) and agentic systems. Develop and integrate agentic systems using frameworks like CrewAI, LangChain, and LangGraph to enable autonomous decision-making and workflow automation. Enhance product ecosystems by incorporating emerging AI technologies to improve functionality and user experience. Execute model training, fine-tuning, and optimization of inference engines for high-performance AI systems. Ensure technical excellence in AI solution design and implementation. Evaluate and integrate emerging AI technologies into existing systems. Cross-Functional Collaboration Serve as primary technical liaison between AI engineering teams and product teams Translate technical capabilities into product requirements and user-centric solutions Help facilitate communication and alignment across engineering, product, and clinical teams Drive collaborative innovation and knowledge sharing Infrastructure and Optimization Optimize cloud-based AI infrastructure for scalability, performance, and reliability. Implement best practices for efficient AI system deployment and maintenance, including agentic workflows. Performance and Quality Assurance Establish and maintain rigorous testing and validation protocols for AI models Drive continuous improvement in AI model accuracy, efficiency, and robustness. Monitoring and Observability Implement and manage monitoring solutions for AI systems using tools such as Langfuse, Prometheus, Grafana, or equivalent platforms to ensure operational health and performance. Analyze observability data to identify and resolve issues in AI model behavior, agentic system performance, and system reliability. Develop proactive monitoring strategies to detect anomalies and ensure reliability of AI deployments. Compliance and Ethical AI Ensure AI solutions adhere to healthcare regulatory standards and data privacy requirements (e.g., HIPAA, GDPR) Develop and enforce ethical AI guidelines to mitigate biases Uphold high standards for secure handling of sensitive information, including ePHI and PHI, per federal, state, and local regulations. 
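The monitoring and observability responsibilities above (Langfuse, Prometheus, Grafana) typically start with exporting latency and error metrics from the inference service itself. A minimal sketch with the Python prometheus_client library; the metric names and the fake_llm_call function are illustrative assumptions, not part of Get Well's actual stack.

```python
# Minimal observability sketch: expose inference latency and error counts
# in Prometheus format so Grafana dashboards or alert rules can consume them.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

INFERENCE_LATENCY = Histogram(
    "llm_inference_latency_seconds", "Latency of LLM inference calls"
)
INFERENCE_ERRORS = Counter(
    "llm_inference_errors_total", "Number of failed LLM inference calls"
)

def fake_llm_call(prompt: str) -> str:
    """Stand-in for a real model call; sleeps to simulate latency."""
    time.sleep(random.uniform(0.05, 0.3))
    if random.random() < 0.05:
        raise RuntimeError("simulated upstream failure")
    return f"response to: {prompt}"

def answer(prompt: str) -> str:
    with INFERENCE_LATENCY.time():          # records call duration into the histogram
        try:
            return fake_llm_call(prompt)
        except Exception:
            INFERENCE_ERRORS.inc()
            raise

if __name__ == "__main__":
    start_http_server(8000)                 # metrics exposed at http://localhost:8000/metrics
    while True:
        try:
            answer("how do I prepare for my procedure?")
        except RuntimeError:
            pass
```

Histogram buckets of latency and a rising error counter are usually the first signals wired into alerting before richer tracing (e.g., Langfuse spans) is added.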
Requirements: Master's degree or higher in Computer Science, Artificial Intelligence, Machine Learning, or a closely related technical field A total of 8-12 years of experience, with a minimum of 3+ years of technical leadership experience in AI/ML technologies Proven track record of delivering complex AI solutions in healthcare or technologically advanced domains Deep technical proficiency in: Large Language Model (LLM) architectures Multimodal AI system design Agentic system development using frameworks like CrewAI, LangChain, and LangGraph. Machine learning model training and fine-tuning Cloud computing infrastructure (AWS preferred) AI/ML frameworks and development tools (e.g., TensorFlow, PyTorch, Hugging Face) AI monitoring and observability tools such as Langfuse, Prometheus, Grafana, or equivalent platforms. Strong programming skills in Python, C++, or other relevant languages. Experience in agile and adaptive project management methodologies Exceptional problem-solving and analytical skills, with a focus on delivering scalable solutions. Strong communication skills, with the ability to explain complex technical concepts to non-technical stakeholders. Collaborative mindset with experience working in cross-functional teams. Ability to mentor junior engineers and contribute to team development. Knowledge of healthcare technology landscapes and regulatory environments (e.g., HIPAA, GDPR) is a plus. Awareness of emerging AI technologies, including agentic systems, MCP, A2A protocol, and their potential applications. Adaptability to work in fast-evolving technological environments. Commitment to adhering to organizational information security policies and protecting sensitive information, including ePHI and PHI, in accordance with Federal, State, and local regulations. About Get Well: Now part of the SAI Group family, Get Well is redefining digital patient engagement by putting patients in control of their personalized healthcare journeys, both inside and outside the hospital. Get Well is combining high-tech AI navigation with high-touch care experiences driving patient activation, loyalty, and outcomes while reducing the cost of care. For almost 25 years, Get Well has served more than 10 million patients per year across over 1,000 hospitals and clinical partner sites, working to use longitudinal data analytics to better serve patients and clinicians. AI innovator SAI Group led by Chairman Romesh Wadhwani is the lead growth investor in Get Well. Get Well's award-winning solutions were recognized again in 2024 by KLAS Research and AVIA Marketplace. Learn more at Get Well and follow us on LinkedIn and Twitter. Get Well is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age or veteran status. About SAI Group: SAIGroup commits $1 billion of capital, an advanced AI platform that currently processes 300M+ patients, and a 4000+ global employee base to solving enterprise AI and high-priority healthcare problems. SAIGroup - Growing companies with advanced AI; https://www.cnbc.com/2023/12/08/75-year-old-tech-mogul-betting-1-billion-of-his-fortune-on-ai-future.html Bio of our Chairman Dr. Romesh Wadhwani: Team - SAIGroup (Informal at Romesh Wadhwani - Wikipedia) TIME Magazine recently recognized Chairman Romesh Wadhwani as one of the Top 100 AI leaders in the world - Romesh and Sunil Wadhwani: The 100 Most Influential People in AI 2023 | TIME
Posted 1 week ago
2.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Flutter Entertainment Flutter Entertainment is the world’s largest sports betting and iGaming operator with 13.9 million average monthly players worldwide and an annual revenue of $14Bn in 2024. We have a portfolio of iconic brands, including Paddy Power, Betfair, FanDuel, PokerStars, Junglee Games and Sportsbet. Flutter Entertainment is listed on both the New York Stock Exchange (NYSE) and the London Stock Exchange (LSE). In 2024, we were recognized in TIME’s 100 Most Influential Companies under the 'Pioneers' category—a testament to our innovation and impact. Our ambition is to transform global gaming and betting to deliver long-term growth and a positive, sustainable future for our sector. Together, we are Changing the Game! Working at Flutter is a chance to work with a growing portfolio of brands across a range of opportunities. We will support you every step of the way to help you grow. Just like our brands, we ensure our people have everything they need to succeed. Flutter Entertainment India Our Hyderabad office, located in one of India’s premier technology parks, is the Global Capability Center for Flutter Entertainment. A center of expertise and innovation, this hub is now home to over 1,000 talented colleagues working across Customer Service Operations, Data and Technology, Finance Operations, HR Operations, Procurement Operations, and other key enabling functions. We are committed to crafting impactful solutions for all our brands and divisions to power Flutter's incredible growth and global impact. With the scale of a leader and the mindset of a challenger, we’re dedicated to creating a brighter future for our customers, colleagues, and communities. Overview Of The Role We are seeking a technically skilled Regulatory Data Analyst to join our dynamic Data & Analytics (ODA) department in Hyderabad, India. As a globally recognized and highly regulated brand, we are deeply committed to delivering accurate reporting and critical business insights that push the boundaries of our understanding through innovation. You'll be joining a team of exceptional data professionals with a strong command of analytical tools and statistical techniques. You’ll help shape the future of online gaming by leveraging robust technical capabilities to ensure regulatory compliance, support risk management, and strengthen business operations through advanced data solutions. You will work with large, complex datasets—interrogating and manipulating data using advanced SQL and Python, building scalable dashboards, and developing automation pipelines. Beyond in-depth analysis, you’ll create regulatory reports and visualizations for diverse audiences, and proactively identify areas to enhance efficiency and compliance through technical solutions.
KEY RESPONSIBILITIES Query data from various database environments (e.g., DB2, MS SQL Server, Azure) using advanced SQL techniques Perform data processing and statistical analysis using tools such as Python, R, and Excel Translate regulatory data requirements into structured analysis using robust scripting and automation Design and build interactive dashboards and reporting pipelines using Power BI, Tableau, or MicroStrategy to highlight key metrics and regulatory KPIs Develop compelling data visualizations and executive summaries to communicate complex insights clearly to technical and non-technical stakeholders alike Collaborate with global business stakeholders to interpret jurisdiction-specific regulations and provide technically sound, data-driven insights Recommend enhancements to regulatory processes through data modelling, root cause analysis, and applied statistical techniques (e.g., regression, hypothesis testing) Ensure data quality, governance, and lineage in all deliverables, applying technical rigor and precision To Excel In This Role, You Will Need 2 to 4 years of relevant work experience as a Data Analyst or in a role focused on regulatory or compliance-based analytics Bachelor's degree in a quantitative or technical discipline (e.g., Mathematics, Statistics, Economics, or Computer Science) Proficiency in SQL with the ability to write and optimize complex queries from scratch Strong programming skills in Python (or R) for automation, data wrangling, and statistical analysis Experience using data visualization and BI tools (MicroStrategy, Tableau, Power BI) to create dynamic dashboards and visual narratives Knowledge of data warehousing environments like Microsoft SQL Server Management Studio or Amazon Redshift Ability to apply statistical methods such as time series analysis, regression, and causal inference to solve regulatory and business problems Benefits We Offer Access to Learnerbly, Udemy, and a Self-Development Fund for upskilling. Career growth through Internal Mobility Programs. Comprehensive Health Insurance for you and dependents. Well-Being Fund and 24/7 Assistance Program for holistic wellness. Hybrid Model: 2 office days/week with flexible leave policies, including maternity, paternity, and sabbaticals. Free Meals, Cab Allowance, and a Home Office Setup Allowance. Employer PF Contribution, gratuity, Personal Accident & Life Insurance. Sharesave Plan to purchase discounted company shares. Volunteering Leave and Team Events to build connections. Recognition through the Kudos Platform and Referral Rewards. Why Choose Us Flutter is an equal-opportunity employer and values the unique perspectives and experiences that everyone brings. Our message to colleagues and stakeholders is clear: everyone is welcome, and every voice matters. We have ambitious growth plans and goals for the future. Here's an opportunity for you to play a pivotal role in shaping the future of Flutter Entertainment India.
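Much of the day-to-day described above reduces to pulling jurisdiction-level data with SQL and reshaping it in Python for a dashboard or regulator-facing report. A minimal sketch with SQLAlchemy and pandas; the connection string, schema, table, and column names are invented placeholders, not Flutter's actual warehouse.

```python
# Minimal regulatory-reporting sketch: query a warehouse table with SQL,
# then aggregate monthly KPIs per jurisdiction with pandas.
import pandas as pd
from sqlalchemy import create_engine

# Placeholder connection string: swap in real warehouse credentials/DSN.
engine = create_engine("mssql+pyodbc://user:password@dsn_name")

query = """
    SELECT jurisdiction, bet_date, stake_amount, is_self_excluded
    FROM regulatory.bets               -- hypothetical table name
    WHERE bet_date >= '2025-01-01'
"""
bets = pd.read_sql(query, engine, parse_dates=["bet_date"])

monthly_kpis = (
    bets.assign(month=bets["bet_date"].dt.to_period("M"))
        .groupby(["jurisdiction", "month"])
        .agg(
            total_stakes=("stake_amount", "sum"),
            bet_count=("stake_amount", "size"),
            self_excluded_bets=("is_self_excluded", "sum"),
        )
        .reset_index()
)
monthly_kpis.to_csv("monthly_regulatory_kpis.csv", index=False)  # feed for a BI dashboard
```

The aggregated output would typically land in Power BI, Tableau, or MicroStrategy, with the SQL and transformation logic version-controlled so lineage can be demonstrated to regulators.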
Posted 1 week ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
We are looking for a Computer Vision Intern to assist in building and refining our image recognition pipeline. The role will start with dataset management—image collection, annotation validation, dataset cleaning, and preprocessing. Once the foundational data work is complete, you’ll get hands-on exposure to model training, augmentation, and evaluation, contributing directly to our production-ready pipeline. For one in the Seat: Responsibilities Organize, clean, and preprocess large-scale retail image datasets. Validate and manage annotations (bounding boxes, class labels, segmentation masks if applicable) using tools like Roboflow or CVAT or LabelImg. Apply augmentation techniques and prepare datasets for training. Support in training YOLOv5/YOLOv8-based models on custom datasets. Run model evaluations (Precision, Recall, F1 Score, SKU-level accuracy). Collaborate with the product team to improve real-world inference quality. Document the dataset pipeline and share insights for improving data quality. Who we're looking for: Must Have: Basic understanding of Computer Vision concepts (Object Detection, Classification) Familiarity with Python (OpenCV, Pandas, NumPy) Knowledge of image annotation tools (Roboflow, LabelImg, CVAT, etc.) Ability to manage and organise large datasets Good to have: Experience with YOLOv5 or YOLOv8 (Training, Inference, Fine-tuning) Exposure to image augmentation techniques (Albumentations, etc.) Understanding of retail/commercial shelf datasets or product detection problems Previous internship or project experience in computer vision is a plus
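Augmentation for detection datasets, mentioned in both the responsibilities and the good-to-haves above, has to transform the bounding boxes together with the pixels or the labels become useless. A minimal sketch with Albumentations using YOLO-format boxes; the image path, box coordinates, and SKU class names are placeholders for illustration.

```python
# Minimal detection-augmentation sketch: flip/adjust an image while keeping its
# YOLO-format bounding boxes consistent with the transformed pixels.
import albumentations as A
import cv2

transform = A.Compose(
    [
        A.HorizontalFlip(p=0.5),
        A.RandomBrightnessContrast(p=0.3),
        A.Rotate(limit=10, p=0.3),
    ],
    bbox_params=A.BboxParams(format="yolo", label_fields=["class_labels"]),
)

image = cv2.imread("shelf_0001.jpg")               # placeholder image path
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

# YOLO format: (x_center, y_center, width, height), all normalized to [0, 1].
bboxes = [(0.42, 0.55, 0.10, 0.20), (0.70, 0.33, 0.08, 0.15)]
class_labels = ["cereal_box", "juice_bottle"]      # illustrative SKU classes

augmented = transform(image=image, bboxes=bboxes, class_labels=class_labels)
aug_image = augmented["image"]
aug_bboxes = augmented["bboxes"]                   # boxes now match the augmented image
print(len(aug_bboxes), "boxes after augmentation")
```

Augmented image/label pairs produced this way can be written back out in YOLO's expected folder layout before training a YOLOv5/YOLOv8 model on the custom dataset.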
Posted 1 week ago
5.0 years
0 Lacs
Gurgaon, Haryana, India
Remote
Job Description About Us We are a fast-growing trade finance technology company dedicated to transforming the way businesses access working capital. Our teams are solving complex problems at the intersection of finance, data, and technology. We are looking for a Senior Data Engineer to join our team and help build scalable, reliable, and high-performance data infrastructure that supports both our Analytics and Machine Learning teams. What You’ll Do Design, build, and maintain robust, scalable, and efficient data pipelines (batch and real-time) to support ML model training, inference, and business intelligence Partner closely with the product team to deliver the datasets, pipelines, and tooling needed for advanced analytics Develop and optimize our data lake, data warehouse, and feature stores to ensure accuracy, consistency, and accessibility of data across teams Implement data quality frameworks and monitoring solutions to ensure trust in the data Lead the adoption of best practices in data modeling, ETL/ELT, orchestration, and cloud-based data platforms (e.g., GCP, Azure). Contribute to the architecture and design of the company's next-generation data platform to enable real-time insights and machine learning at scale. Mentor junior engineers and collaborate across cross-functional teams including Product, Engineering, and AI Location: Full Remote What We’re Looking For 5+ years of hands-on data engineering experience in cloud-native environments. Proficient in building ETL/ELT pipelines using tools such as Spark/PySpark Strong experience with SQL, Python, and distributed data systems (e.g., Delta Lake) Solid understanding of data architecture patterns (data lake, lakehouse, warehouse) and data governance Experience working with machine learning teams (e.g., building feature pipelines, model scoring pipelines, real-time inference) is highly desirable Strong familiarity with BI tools (e.g., PowerBI) and supporting self-service analytics Prior experience in FinTech, Trade Finance, or Financial Services is a strong plus Excellent problem-solving, communication, and stakeholder management skills Nice To Have Familiarity with MLOps practices and tooling Exposure to regulatory compliance and risk modeling in the finance domain Experience in Anti-Money Laundering in the trade space Why Join Us? Be part of a mission-driven team reshaping global trade finance Work on cutting-edge data problems supporting both real-time decision-making and machine learning Competitive compensation, equity, and benefits. Flexible, collaborative, and innovative work culture
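A batch pipeline of the kind described above usually follows a read, validate, transform, write pattern. A minimal PySpark sketch follows; the S3 paths, column names, and the null-check threshold are illustrative assumptions, not details from the posting.

```python
# Minimal batch ETL sketch: read raw invoice data, apply a simple data-quality
# gate, derive features, and write a partitioned, analytics-ready table.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("invoice_batch_etl").getOrCreate()

raw = spark.read.parquet("s3://example-bucket/raw/invoices/")   # placeholder path

# Data-quality gate: fail fast if too many rows are missing the invoice amount.
null_ratio = raw.filter(F.col("invoice_amount").isNull()).count() / max(raw.count(), 1)
if null_ratio > 0.05:                                           # illustrative threshold
    raise ValueError(f"invoice_amount null ratio too high: {null_ratio:.2%}")

curated = (
    raw.dropDuplicates(["invoice_id"])
       .withColumn("invoice_date", F.to_date("invoice_ts"))
       .withColumn("is_overdue", F.col("due_date") < F.current_date())
)

(curated.write
        .mode("overwrite")
        .partitionBy("invoice_date")
        .parquet("s3://example-bucket/curated/invoices/"))      # placeholder path

spark.stop()
```

In a production setting the same job would typically be orchestrated on a schedule, write to a lakehouse table format, and publish its quality metrics to a monitoring framework rather than raising inline.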
Posted 1 week ago