2.0 years
30 Lacs
India
Remote
Experience: 2.00+ years
Salary: INR 3000000.00 / year (based on experience)
Expected Notice Period: 30 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by Yugen AI)
(*Note: This is a requirement for one of Uplers' clients - Yugen AI)

What do you need for this opportunity?
Must-have skills: GRPO, high-availability, TRL, Generative AI, LLM, Kubernetes, Machine Learning, Python

Yugen AI is looking for:
We are looking for a talented LLMOps Engineer to design, deploy, and operationalise agentic solutions for fraud investigations. This work is critical to reducing fraud-investigation TAT (turn-around time) by more than 70%. In this role, you will work directly with our CTO, Soumanta Das, as well as a team of 5 engineers (Backend Engineers, Data Engineers, Platform Engineers).

Responsibilities
- Deploy and scale LLM inference workloads on Kubernetes (K8s) with 99.9% uptime.
- Build agentic tools and services for fraud investigations with complex reasoning capabilities.
- Work with Platform Engineers to set up monitoring and observability (e.g., Prometheus, Grafana) to track model performance and system health.
- Fine-tune open-source LLMs using TRL or similar libraries.
- Use Terraform for infrastructure-as-code to support scalable ML deployments.
- Contribute to tech blogs, especially technical deep dives into the latest research in the field of reasoning.

Requirements
- Strong programming skills (Python, etc.) and problem-solving abilities.
- Hands-on experience with open-source LLM inference and serving frameworks such as vLLM.
- Deep expertise in Kubernetes (K8s) for orchestrating LLM workloads.
- Some familiarity with fine-tuning and deploying open-source LLMs using GRPO, TRL, or similar frameworks.
- Familiarity with high-availability systems.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of being shortlisted and meet the client for the interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role is to help our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal apart from this one. Depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
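The 99.9% uptime target in the responsibilities above implies a concrete error budget. A minimal sketch (standard SRE arithmetic, not from the posting; the function name is invented for illustration):

```python
# Sketch: convert an availability target into a monthly downtime budget.
def downtime_budget_minutes(availability: float, days: int = 30) -> float:
    """Allowed downtime in minutes over `days` days at the given availability."""
    total_minutes = days * 24 * 60
    return total_minutes * (1.0 - availability)

# 99.9% uptime over a 30-day month leaves about 43.2 minutes of downtime.
budget = downtime_budget_minutes(0.999)
```

At three nines, a single bad rollout can consume most of a month's budget, which is presumably why the posting pairs the uptime target with monitoring via Prometheus and Grafana.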
Posted 1 month ago
3.0 years
0 Lacs
India
On-site
What You’ll Do
● Build and own AI-backed features end to end, from ideation to production — including layout logic, smart cropping, visual enhancement, outpainting, and GenAI workflows for background fills.
● Design scalable APIs that wrap vision models like BiRefNet, YOLOv8, Grounding DINO, SAM, CLIP, ControlNet, etc., into batch and real-time pipelines.
● Write production-grade Python code to manipulate and transform image data using NumPy, OpenCV (cv2), PIL, and PyTorch.
● Handle pixel-level transformations — from custom masks and color space conversions to geometric warps and contour ops — with speed and precision.
● Integrate your models into our production web app (AWS-based Python/Java backend) and optimize them for latency, memory, and throughput.
● Frame problems when specs are vague — you’ll help define what “good” looks like, and then build it.
● Collaborate with product, UX, and other engineers without relying on formal handoffs — you own your domain.

What You’ll Need
● 2–3 years of hands-on experience with vision and image generation models such as YOLO, Grounding DINO, SAM, CLIP, Stable Diffusion, VITON, or TryOnGAN — including experience with inpainting and outpainting workflows using Stable Diffusion pipelines (e.g., Diffusers, InvokeAI, or custom-built solutions).
● Strong hands-on knowledge of NumPy, OpenCV, PIL, PyTorch, and image visualization/debugging techniques.
● 1–2 years of experience working with popular LLM APIs such as OpenAI, Anthropic, and Gemini, and with composing multi-modal pipelines.
● Solid grasp of production model integration — model loading, GPU/CPU optimization, async inference, caching, and batch processing.
● Experience solving real-world visual problems like object detection, segmentation, composition, or enhancement.
● Ability to debug and diagnose visual output errors — e.g., weird segmentation artifacts, off-center crops, broken masks.
● Deep understanding of image processing in Python: array slicing, color formats, augmentation, geometric transforms, contour detection, etc.
● Experience building and deploying FastAPI services and containerizing them with Docker for AWS-based infra (ECS, EC2/GPU, Lambda).
● A customer-centric approach — you think about how your work affects end users and product experience, not just model performance.
● A quest for high-quality deliverables — you write clean, tested code and debug edge cases until they’re truly fixed.
● The ability to frame problems from scratch and work without strict handoffs — you build from a goal, not a ticket.
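The pixel-level mask and bounding-box work listed above can be sketched with plain NumPy. This is an illustrative toy — the array, the threshold, and the `mask_bbox` helper are invented for the example, not code from the role:

```python
import numpy as np

# Toy "image": an 8x8 grayscale array with a bright rectangular region.
img = np.zeros((8, 8), dtype=np.uint8)
img[2:5, 3:7] = 255

def mask_bbox(mask: np.ndarray) -> tuple[int, int, int, int]:
    """Return (top, bottom, left, right) bounds of the True region of a 2-D mask."""
    rows = np.any(mask, axis=1)   # which rows contain any mask pixel
    cols = np.any(mask, axis=0)   # which columns do
    top, bottom = np.where(rows)[0][[0, -1]]
    left, right = np.where(cols)[0][[0, -1]]
    return int(top), int(bottom), int(left), int(right)

# Threshold into a binary mask, then crop tightly to the mask's bounding box.
top, bottom, left, right = mask_bbox(img > 0)
crop = img[top:bottom + 1, left:right + 1]
```

The same bounding-box-then-slice pattern underlies smart cropping: detect the subject mask (here a simple threshold stands in for a segmentation model), then slice the array to its extent.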
Posted 1 month ago
2.0 years
30 Lacs
Pune/Pimpri-Chinchwad Area
Remote
Experience: 2.00+ years
Salary: INR 3000000.00 / year (based on experience)
Expected Notice Period: 30 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by Yugen AI)
(*Note: This is a requirement for one of Uplers' clients - Yugen AI)

What do you need for this opportunity?
Must-have skills: GRPO, high-availability, TRL, Generative AI, LLM, Kubernetes, Machine Learning, Python

Yugen AI is looking for:
We are looking for a talented LLMOps Engineer to design, deploy, and operationalise agentic solutions for fraud investigations. This work is critical to reducing fraud-investigation TAT (turn-around time) by more than 70%. In this role, you will work directly with our CTO, Soumanta Das, as well as a team of 5 engineers (Backend Engineers, Data Engineers, Platform Engineers).

Responsibilities
- Deploy and scale LLM inference workloads on Kubernetes (K8s) with 99.9% uptime.
- Build agentic tools and services for fraud investigations with complex reasoning capabilities.
- Work with Platform Engineers to set up monitoring and observability (e.g., Prometheus, Grafana) to track model performance and system health.
- Fine-tune open-source LLMs using TRL or similar libraries.
- Use Terraform for infrastructure-as-code to support scalable ML deployments.
- Contribute to tech blogs, especially technical deep dives into the latest research in the field of reasoning.

Requirements
- Strong programming skills (Python, etc.) and problem-solving abilities.
- Hands-on experience with open-source LLM inference and serving frameworks such as vLLM.
- Deep expertise in Kubernetes (K8s) for orchestrating LLM workloads.
- Some familiarity with fine-tuning and deploying open-source LLMs using GRPO, TRL, or similar frameworks.
- Familiarity with high-availability systems.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of being shortlisted and meet the client for the interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role is to help our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal apart from this one. Depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 1 month ago
0 years
0 Lacs
Navi Mumbai, Maharashtra, India
On-site
Role Overview: As a Python Developer Intern at Arcitech AI, you will play a crucial role in our advancements in software development, AI, and integrative solutions. This entry-level position offers the opportunity to work on cutting-edge projects and contribute to the growth of the company. You will be challenged to develop Python applications, collaborate with a dynamic team, and optimize code performance, all while gaining valuable experience in the industry.

Responsibilities
- Assist in designing, developing, and maintaining Python applications focused on backend and AI/ML components under senior engineer guidance.
- Help build and consume RESTful or GraphQL APIs integrating AI models and backend services, following established best practices.
- Containerize microservices (including AI workloads) using Docker and support Kubernetes deployment and management tasks.
- Implement and monitor background jobs with Celery (e.g., data processing, model training/inference), including retries and basic alerting.
- Integrate third-party services and AI tools via webhooks and APIs (e.g., Stripe, Razorpay, external AI providers) in collaboration with the team.
- Set up simple WebSocket consumers using Django Channels for real-time AI-driven and backend features.
- Aid in configuring AWS cloud infrastructure (EC2, S3, RDS) as code, assist with backups and monitoring via CloudWatch, and support AI workload deployments.
- Write unit and integration tests using pytest or unittest to maintain ≥ 80% coverage across backend and AI codebases.
- Follow Git branching strategies and contribute to CI/CD pipeline maintenance and automation for backend and AI services.
- Participate actively in daily tech talks, knowledge-sharing sessions, code reviews, and team collaboration focused on backend and AI development.
- Assist with implementing AI agent workflows and document retrieval pipelines using the LangChain and LlamaIndex (GPT Index) frameworks.
- Maintain clear and up-to-date documentation of code, experiments, and processes.
- Participate in Agile practices including sprint planning, stand-ups, and retrospectives.
- Demonstrate basic debugging and troubleshooting skills using Python tools and log analysis.
- Handle simple data manipulation tasks involving CSV, JSON, or similar formats.
- Follow secure coding best practices and be mindful of data privacy and compliance.
- Exhibit strong communication skills, a proactive learning mindset, and openness to feedback.

Required Qualifications
- Currently pursuing a Bachelor’s degree in Computer Science, Engineering, Data Science, or a related scientific field.
- Solid foundation in Python programming with familiarity with common libraries (NumPy, pandas, etc.).
- Basic understanding of RESTful/GraphQL API design and consumption.
- Exposure to Docker and at least one cloud platform (AWS preferred).
- Experience with, or willingness to learn, test-driven development using pytest or unittest.
- Comfortable with Git workflows and CI/CD tools.
- Strong problem-solving aptitude and effective communication skills.

Preferred (But Not Required)
- Hands-on experience or coursework with AI/ML frameworks such as TensorFlow, PyTorch, or Keras.
- Prior exposure to the Django web framework and real-time WebSocket development (Django Channels).
- Familiarity with LangChain and LlamaIndex (GPT Index) for building AI agents and retrieval-augmented generation workflows.
- Understanding of machine learning fundamentals (neural networks, computer vision, NLP).
- Background in data analysis, statistics, or applied mathematics.
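The retry behaviour mentioned for background jobs can be illustrated without Celery itself. A hedged sketch using only the standard library — the decorator and function names are invented for the example, not Arcitech code:

```python
import functools
import time

def retry(times: int = 3, base_delay: float = 0.0):
    """Re-run a function on exception, with exponential backoff between attempts."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(times):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if attempt == times - 1:
                        raise  # out of attempts: surface the error
                    time.sleep(base_delay * (2 ** attempt))  # exponential backoff
        return wrapper
    return decorator

calls = []

@retry(times=3)
def flaky():
    calls.append(1)
    if len(calls) < 3:
        raise RuntimeError("transient failure")
    return "ok"

result = flaky()
```

Celery provides the same idea declaratively (e.g., task-level retry options); the sketch just shows the mechanism an intern would be wiring up.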
Posted 1 month ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description
We have an exciting and rewarding opportunity for you to take your software engineering career to the next level. As a Data Scientist Associate Senior within the Asset and Wealth Management team at JPMorgan Chase, you will play a key role as an experienced member of our Data Science Team. You are responsible for addressing business problems through data analysis, developing models, and deploying these models to production environments on AWS or Azure.

Job Responsibilities
- Collaborate with all of JPMorgan’s lines of business and functions to deliver software solutions.
- Develop and experiment with high-quality machine learning models, services, and platforms to make a huge technology and business impact.
- Design and implement highly scalable and reliable data processing pipelines, and perform analysis to surface insights that drive and optimize business results.

Required Qualifications, Capabilities, and Skills
- Formal training or certification in software engineering concepts and 3+ years of applied experience.
- BE/B.Tech, ME/MS, or PhD degree in Computer Science, Statistics, Mathematics, or a Machine Learning-related field.
- Solid programming skills with Python.
- Deep knowledge of data structures, algorithms, machine learning, data mining, information retrieval, and statistics.
- Expertise in at least one of the following areas: Natural Language Processing, Computer Vision, Speech Recognition, Reinforcement Learning, Ranking and Recommendation, or Time Series Analysis.
- Experience in using GenAI (OpenAI or other models) to solve business problems.
- Knowledge of machine learning frameworks: TensorFlow and PyTorch.
- Experience in training/inference/MLOps on a public cloud (AWS/GCP/Azure).
- Strong analytical and critical thinking skills.

Preferred Qualifications, Capabilities, and Skills
- Knowledge of the Asset and Wealth Management business is an added advantage.
ABOUT US JPMorganChase, one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses and many of the world’s most prominent corporate, institutional and government clients under the J.P. Morgan and Chase brands. Our history spans over 200 years and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing and asset management. We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. We also make reasonable accommodations for applicants’ and employees’ religious practices and beliefs, as well as mental health or physical disability needs. Visit our FAQs for more information about requesting an accommodation. About The Team J.P. Morgan Asset & Wealth Management delivers industry-leading investment management and private banking solutions. Asset Management provides individuals, advisors and institutions with strategies and expertise that span the full spectrum of asset classes through our global network of investment professionals. Wealth Management helps individuals, families and foundations take a more intentional approach to their wealth or finances to better define, focus and realize their goals.
Posted 1 month ago
0 years
0 Lacs
Jaipur, Rajasthan, India
On-site
Company Description
Stockwell Solar Services Pvt Ltd (SSSPL), founded by IITians in 2017, specialises in Solar and BESS OPEX/RESCO and CAPEX models. We currently operate or are constructing over 100 MW of solar assets across 300+ sites, with 600 MW of projects under execution, showcasing our commitment to sustainable energy. Join us to lead the transition to clean energy.

Role Description
This is an on-site, full-time strategic role in the CEO's Office at Stockwell Solar Services Pvt Ltd, located in Jaipur. The role involves coordinating with various departments, preparing reports, and supporting strategic initiatives led by the CEO.

Role:
- Assist with and coordinate the strategy planning exercise.
- Monitor tasks delegated by the CEO to ensure they are achieved by the agreed deadlines.
- Coordinate cross-functional teams to ensure project deliverables.
- Act as the external and internal interface on behalf of the CEO.
- Help with business presentations and tie-ups with internal and external stakeholders.
- Perform business data analysis.
- Assist the CEO with the inputs and data required for making strategic decisions.

Qualifications
- The ideal candidate should have abilities in business acumen, strategy formulation, and P&L understanding; data comprehension and inference development; and project management and teamwork.
- Experience in the Solar/Power Industry is a plus.
Posted 1 month ago
6.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
About The Role
Gartner is looking for passionate and motivated Lead Data Engineers who are excited to foray into new technologies and help build and maintain data-driven components for realizing business needs. This role is in the Gartner Product Delivery Organization (PDO). PDO Data Engineering teams are high-velocity agile teams responsible for developing and maintaining components crucial to customer-facing channels, data science teams, and reporting & analysis. These components include, but are not limited to, Spark jobs, REST APIs, AI/ML model training & inference, MLOps/DevOps pipelines, data transformation & quality pipelines, data lakes & data catalogs, and data streams.

What You Will Do
- Lead and execute a mix of small/medium-sized projects simultaneously.
- Own success; take responsibility for successful delivery of solutions from development to production.
- Mentor and guide team members.
- Explore and create POCs of new technologies/frameworks.
- Work directly with business users in problem solving; significant experience here is required.
- Communicate and prioritize effectively; interact and coordinate well with other developers and teams to resolve operational issues.
- Be self-motivated and a fast learner, ramping up quickly with a fair amount of help from team members.
- Estimate development tasks with high accuracy and deliver on time with high quality while following coding guidelines and best practices.
- Identify systemic operational issues and resolve them.

What You Will Need
- 6+ years of post-college experience in data engineering, API development, or related fields.
- Demonstrated experience in data engineering, data science, or machine learning.
- Experience working with data platforms - building and maintaining ETL flows and data stores for ML and reporting applications.
- Skills to transform data, prepare it for analysis, and analyze it - including structured and unstructured data.
- Ability to transform business needs into technical solutions.
- Demonstrated experience with cloud platforms (AWS, Azure, GCP, etc.).
- Experience with languages such as Python, Java, and SQL.
- Experience with tools such as Apache Spark, Databricks, and AWS EMR.
- Experience with Kanban or Agile Scrum development.
- Experience with REST API development.
- Experience with collaboration tools such as Git, Jenkins, Jira, and Confluence.
- Experience with data modeling and database schema/table design.

Who are we?
At Gartner, Inc. (NYSE: IT), we guide the leaders who shape the world. Our mission relies on expert analysis and bold ideas to deliver actionable, objective insight, helping enterprise leaders and their teams succeed with their mission-critical priorities. Since our founding in 1979, we’ve grown to more than 21,000 associates globally who support ~14,000 client enterprises in ~90 countries and territories.

We do important, interesting and substantive work that matters. That’s why we hire associates with the intellectual curiosity, energy and drive to want to make a difference. The bar is unapologetically high. So is the impact you can have here.

What makes Gartner a great place to work?
Our sustained success creates limitless opportunities for you to grow professionally and flourish personally. We have a vast, virtually untapped market potential ahead of us, providing you with an exciting trajectory long into the future. How far you go is driven by your passion and performance. We hire remarkable people who collaborate and win as a team. Together, our singular, unifying goal is to deliver results for our clients. Our teams are inclusive and composed of individuals from different geographies, cultures, religions, ethnicities, races, genders, sexual orientations, abilities and generations.
We invest in great leaders who bring out the best in you and the company, enabling us to multiply our impact and results. This is why, year after year, we are recognized worldwide as a great place to work.

What do we offer?
Gartner offers world-class benefits, highly competitive compensation and disproportionate rewards for top performers. In our hybrid work environment, we provide the flexibility and support for you to thrive — working virtually when it's productive to do so and getting together with colleagues in a vibrant community that is purposeful, engaging and inspiring. Ready to grow your career with Gartner? Join us.

The policy of Gartner is to provide equal employment opportunities to all applicants and employees without regard to race, color, creed, religion, sex, sexual orientation, gender identity, marital status, citizenship status, age, national origin, ancestry, disability, veteran status, or any other legally protected status and to seek to advance the principles of equal employment opportunity.

Gartner is committed to being an Equal Opportunity Employer and offers opportunities to all job seekers, including job seekers with disabilities. If you are a qualified individual with a disability or a disabled veteran, you may request a reasonable accommodation if you are unable or limited in your ability to use or access the Company’s career webpage as a result of your disability. You may request reasonable accommodations by calling Human Resources at +1 (203) 964-0096 or by sending an email to ApplicantAccommodations@gartner.com.

Job Requisition ID: 99715

By submitting your information and application, you confirm that you have read and agree to the country or regional recruitment notice linked below applicable to your place of residence.
Gartner Applicant Privacy Link: https://jobs.gartner.com/applicant-privacy-policy For efficient navigation through the application, please only use the back button within the application, not the back arrow within your browser.
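The ETL responsibilities in the posting above boil down to clean-filter-aggregate steps. A minimal, illustrative sketch in plain Python — the records and field names are made up for the example, not Gartner data:

```python
from collections import defaultdict

# Raw rows as they might arrive from an upstream source: untrimmed strings,
# occasional bad values that validation should drop.
raw = [
    {"region": " EMEA ", "revenue": "1200"},
    {"region": "APAC", "revenue": "800"},
    {"region": "EMEA", "revenue": "not-a-number"},
    {"region": "APAC", "revenue": "200"},
]

def clean(row):
    """Normalize one record; return None for rows that fail validation."""
    try:
        return {"region": row["region"].strip(), "revenue": float(row["revenue"])}
    except ValueError:
        return None

# Transform: clean each row, drop invalid ones, aggregate for reporting.
totals = defaultdict(float)
for row in filter(None, map(clean, raw)):
    totals[row["region"]] += row["revenue"]
```

At scale the same clean/filter/aggregate shape is what a Spark or Databricks job expresses over distributed data.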
Posted 1 month ago
7.5 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Project Role: AI/ML Engineer
Project Role Description: Develops applications and systems that utilize AI tools and cloud AI services, with a proper cloud or on-prem application pipeline of production-ready quality. Able to apply GenAI models as part of the solution. Could also include, but is not limited to, deep learning, neural networks, chatbots, and image processing.
Must-have skills: Machine Learning Operations
Good-to-have skills: NA
Minimum 7.5 years of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As a Machine Learning Engineer/MLOps Expert, you will engage in the operationalization of machine learning models that leverage artificial intelligence tools and cloud AI services. Your typical day will involve designing and implementing production-ready ML systems, ensuring high-quality standards are met.

Roles & Responsibilities:
- Continuously evaluate and improve existing processes to enhance efficiency.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for your immediate team and across multiple teams.
- Facilitate knowledge-sharing sessions to enhance team skills and capabilities.
- Monitor project progress and ensure alignment with strategic goals.

Professional & Technical Skills:
- ML Pipeline Development: Design, build, and maintain scalable pipelines for model training to support our AI initiatives.
- Model Deployment & Serving: Deploy machine learning models as robust, secure services - containerize models with Docker and serve them via FastAPI on AWS - ensuring low-latency predictions for marketing applications. Manage batch inference and real-time inference.
- CI/CD Automation: Implement continuous integration and delivery (CI/CD) pipelines for ML projects. Automate testing, model validation, and deployment workflows using tools like GitHub Actions to accelerate delivery.
- Model Lifecycle Management: Orchestrate the end-to-end ML lifecycle, including versioning, packaging, and registering models. Maintain a model repository/registry (MLflow or similar) for reproducibility and governance from experimentation through production. Experience with MLflow and Airflow is mandatory.
- Monitoring & Optimization: Monitor model performance, data drift, and system health in production. Set up alerts and dashboards, and proactively initiate model retraining or tuning to sustain accuracy and efficiency over time.
- Must-have skills: Proficiency in Machine Learning Operations.
- Strong understanding of cloud-based AI services and deployment strategies.
- Multi-cloud skills.
- Experience with machine learning frameworks.
- Ability to implement and optimize machine learning models for production environments.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in Machine Learning Operations.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.
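The data-drift monitoring described above can be reduced to a simple statistical check. A hedged sketch using only the standard library; the z-score threshold, function name, and data are illustrative, not part of the role's actual stack:

```python
import statistics

def drifted(baseline: list[float], live: list[float], z_threshold: float = 3.0) -> bool:
    """Flag drift when the live mean moves more than z_threshold baseline stdevs."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return abs(statistics.mean(live) - mu) > z_threshold * sigma

baseline = [10.0, 10.5, 9.5, 10.2, 9.8]   # feature values seen at training time
stable = [10.1, 9.9, 10.0]                # live window close to baseline
shifted = [15.0, 15.5, 14.8]              # live window clearly shifted
```

Production systems typically use richer tests (e.g., population stability index or KS tests) per feature, but the alert-when-the-distribution-moves principle is the same one the dashboards and retraining triggers above are built on.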
Posted 1 month ago
5.0 years
0 Lacs
India
Remote
Experience: 5+ years (experience in Data Science)
Title: Martech ML Engineer (this is an individual contributor role)
Working hours: Flexible hours (presence from 9 PM IST to 12 AM IST is a must)

Masscomcorp is seeking an experienced ML Engineer with a strong background in User Acquisition (UA) for Mobile Gaming and AdTech. The ideal candidate will leverage data-driven insights to optimize user acquisition strategies, enhance campaign performance, and maximize return on investment (ROI). You should have hands-on experience in Python, SQL, and statistical modelling, along with a deep understanding of marketing analytics, attribution modelling, and programmatic advertising. Here’s a quick breakdown of the key responsibilities and skills for this role:

Key Responsibilities:
- Analyze large datasets from multiple sources (e.g., mobile games, ad networks, MMPs) to drive UA strategy and improve campaign effectiveness.
- Develop predictive models and A/B testing frameworks to optimize ad spend, bidding strategies, and targeting.
- Implement LTV (Lifetime Value) models and cohort analysis to inform marketing decisions.
- Work closely with marketing, UA, product, and engineering teams to translate business questions into analytical solutions.
- Use SQL to extract and manipulate data from databases and create dashboards to track key UA performance metrics.
- Utilize Python for automation, machine learning, and data processing to improve efficiency in UA campaigns.
- Collaborate with AdTech partners and MMPs (e.g., AppsFlyer, Adjust) to ensure accurate attribution tracking and measurement.
- Stay updated on industry trends in AdTech, gaming UA, and programmatic advertising to recommend new strategies.
- Interact and collaborate with data engineers and business stakeholders as and when required.

What to Expect: This is an individual contributor role, focused on hands-on work. The role involves close collaboration with data science and ML teams, as well as development of in-house systems.
Required Technical Skills:
- At least 3+ years of experience in User Acquisition, AdTech, or gaming analytics.
- Hands-on experience with AdTech platforms, MMPs (Mobile Measurement Partners), and UA tools.
- Demonstrated relevant project implementation in User Acquisition in the Mobile Marketing/AdTech space.
- Experience with marketing analytics, campaign performance optimization, and attribution modeling.
- Familiarity with A/B testing methodologies, statistical significance, and causal inference techniques.
- Ability to use statistics to understand the behavior of systems and/or players.
- Ability to communicate complex data insights to non-technical stakeholders.
- Thorough, demonstrated experience with programming in Python and SQL.
- Prior experience in Apache Spark with ML is a plus.
- Team player with excellent organizational, communication, and interpersonal skills.

Why Masscom:
- Generous paid time off (PTO), vacation, and holidays
- Permanent work from home
- Flexible working hours
- Group health insurance (family floater)
- 5 days a week
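The LTV and cohort-analysis work mentioned above can be sketched in a few lines. An illustrative example (the figures and function name are invented) of a cumulative LTV curve per installed user:

```python
def cohort_ltv(revenues_by_day: list[float], cohort_size: int) -> list[float]:
    """Cumulative revenue per installed user for each day since install."""
    ltv, running = [], 0.0
    for day_revenue in revenues_by_day:
        running += day_revenue
        ltv.append(running / cohort_size)
    return ltv

# A 1,000-user install cohort earning $500, $300, $200 on days 0-2:
curve = cohort_ltv([500.0, 300.0, 200.0], cohort_size=1000)
```

Comparing such a curve against the cohort's acquisition cost per user is the basic ROI check behind UA bidding decisions; predictive LTV models extend the observed curve forward to day 90 or 180.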
Posted 1 month ago
0 years
0 Lacs
Navi Mumbai, Maharashtra, India
On-site
About The Role
As a Machine Learning Operations Engineer, you will work on deploying, scaling, and optimizing backend algorithms, robust and scalable data ingestion pipelines, machine learning services, and data platforms to support analysis of vast amounts of text and analytics data. You will apply your technical knowledge and Big Data analytics to Onclusive's billions of online content data points to solve challenging marketing problems. ML Ops Engineers are integral to the success of Onclusive.

Your Responsibilities
- Design and build scalable machine learning services and data platforms.
- Utilize benchmarks, metrics, and monitoring to measure and improve services.
- Manage systems currently processing data on the order of tens of millions of jobs per day.
- Research, design, implement, and validate cutting-edge algorithms to analyze diverse sources of data to achieve targeted outcomes.
- Work with data scientists and machine learning engineers to implement ML, AI, and NLP techniques for article analysis and attribution.
- Deploy, manage, and optimize inference services on autoscaling fleets with GPUs and specialized inference hardware.

Who you are:
- A degree (BS, MS, or Ph.D.) in Computer Science or a related field, accompanied by hands-on experience.
- Proficiency in Python, showcasing your understanding of Object-Oriented Programming (OOP) principles.
- Solid knowledge of containerisation (Docker preferable).
- Experience working with Kubernetes.
- Experience in Infrastructure as Code (IaC) for AWS, with a preference for Terraform.
- Knowledge of Version Control Systems (VCS), particularly Git and GitHub, alongside familiarity with CI/CD, preferably GitHub Actions.
- Understanding of release management, embracing rigorous testing, validation, and quality assurance protocols.
- Good understanding of ML principles.
- Data engineering experience (Airflow, dbt, Meltano) is highly desired.
- Exposure to deep learning tech stacks like Torch/TensorFlow.

What we can offer:
We are a global, fast-growing company that offers a variety of opportunities for you to develop your skill set and career. In exchange for your contribution, we can offer you:
- Competitive salary and benefits.
- Hybrid working in a team that is passionate about the work we deliver and about supporting the development of those we work with.
- A company focus on wellbeing and work-life balance, including initiatives such as flexible working and mental health support.

We want the best talent available, regardless of race, religion, gender, gender reassignment, sexual orientation, marital status, pregnancy, disability, or age.
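The benchmarks-and-metrics responsibility above usually starts with latency percentiles for the inference services. A minimal sketch using only the standard library; the sample data is synthetic and the helper name is invented:

```python
import statistics

def p95_ms(samples_ms: list[float]) -> float:
    """95th-percentile latency; quantiles(n=100) yields the 1st..99th percentiles."""
    return statistics.quantiles(samples_ms, n=100)[94]

# Synthetic timings: 1..100 ms, uniformly spread.
samples = [float(ms) for ms in range(1, 101)]
p95 = p95_ms(samples)
```

Tail percentiles (p95/p99) rather than means are what autoscaling policies and SLOs for GPU inference fleets are typically tuned against, since a healthy average can hide a slow tail.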
Posted 1 month ago
4.0 years
0 Lacs
India
Remote
About Us
We’re an early-stage startup building LLM-native products that turn unstructured documents into intelligent, usable insights. We work with RAG pipelines, multi-cloud LLMs, and fast data processing — and we’re looking for someone who can build, deploy, and own these systems end-to-end.

Key Responsibilities:
- RAG Application Development: Design and build end-to-end Retrieval-Augmented Generation (RAG) pipelines using LLMs deployed on Vertex AI and AWS Bedrock, integrated with Qdrant for vector search.
- OCR & Multimodal Data Extraction: Use OCR tools (e.g., Textract) and vision-language models (VLMs) to extract structured and unstructured data from PDFs, images, and multimodal content.
- LLM Orchestration & Agent Design: Build and optimize workflows using LangChain, LlamaIndex, and custom agent frameworks. Implement autonomous task execution using agent strategies like ReAct, function calling, and tool-use APIs.
- API & Streaming Interfaces: Build and expose production-ready APIs (e.g., with FastAPI) for LLM services, and implement streaming outputs for real-time response generation and latency optimization.
- Data Pipelines & Retrieval: Develop pipelines for ingestion, chunking, embedding, and storage using Qdrant and PostgreSQL, applying hybrid retrieval techniques (dense + keyword search), rerankers, and GraphRAG.
- Serverless AI Workflows: Deploy serverless ML components (e.g., AWS Lambda, GCP Cloud Functions) for scalable inference and data processing.
- MLOps & Model Evaluation: Deploy, monitor, and iterate on AI systems with lightweight MLOps workflows (Docker, MLflow, CI/CD). Benchmark and evaluate embeddings, retrieval strategies, and model performance.

Qualifications:
- Strong Python development skills (must-have).
- LLMs: Claude and Gemini models.
- Experience building AI agents and LLM-powered reasoning pipelines.
- Deep understanding of embeddings, vector search, and hybrid retrieval techniques.
Experience with Qdrant DB. Experience designing multi-step task automation and execution chains. Streaming: ability to implement and debug LLM streaming and async flows. Knowledge of memory and context management strategies for LLM agents (e.g., vector memory, scratchpad memory, episodic memory). Experience with AWS Lambda for serverless AI workflows and API integrations. Bonus: LLM fine-tuning, multimodal data processing, knowledge graph integration, or advanced AI planning techniques. Prior experience at startups only (not IT services or enterprises) and a short notice period. Who You Are 2–4 years of real-world AI/ML experience, ideally with production LLM apps. Startup-ready: fast, hands-on, comfortable with ambiguity. Clear communicator who can take ownership and push features end-to-end. Available to join immediately. Why Join Us? Founding-level role with high ownership. Build systems from scratch using the latest AI stack. Fully remote, async-friendly, fast-paced team.
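The hybrid retrieval described in the responsibilities above (dense vector similarity blended with keyword matching) can be sketched in pure Python. This is a toy illustration only, not the production pipeline: the tiny embeddings, the `alpha` blend weight, and the word-overlap keyword score are all simplifying assumptions, and a real system would use a vector database and a learned reranker.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def keyword_score(query, doc):
    """Fraction of query words that appear in the document (a stand-in for BM25)."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q) if q else 0.0

def hybrid_search(query, query_vec, corpus, alpha=0.5):
    """corpus: list of (text, embedding). Blend dense and keyword scores,
    then return documents ranked best-first."""
    scored = []
    for text, vec in corpus:
        score = alpha * cosine(query_vec, vec) + (1 - alpha) * keyword_score(query, text)
        scored.append((score, text))
    return [t for _, t in sorted(scored, reverse=True)]
```

In practice the dense score would come from the vector store's ANN search and the keyword score from a sparse index, with the blend applied at fusion time.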
Posted 1 month ago
6.0 years
0 Lacs
India
Remote
Join Tether and Shape the Future of Digital Finance At Tether, we’re not just building products, we’re pioneering a global financial revolution. Our cutting-edge solutions empower businesses—from exchanges and wallets to payment processors and ATMs—to seamlessly integrate reserve-backed tokens across blockchains. By harnessing the power of blockchain technology, Tether enables you to store, send, and receive digital tokens instantly, securely, and globally, all at a fraction of the cost. Transparency is the bedrock of everything we do, ensuring trust in every transaction. Innovate with Tether Tether Finance : Our innovative product suite features the world’s most trusted stablecoin, USDT, relied upon by hundreds of millions worldwide, alongside pioneering digital asset tokenization services. But that’s just the beginning: Tether Power: Driving sustainable growth, our energy solutions optimize excess power for Bitcoin mining using eco-friendly practices in state-of-the-art, geo-diverse facilities. Tether Data : Fueling breakthroughs in AI and peer-to-peer technology, we reduce infrastructure costs and enhance global communications with cutting-edge solutions like KEET, our flagship app that redefines secure and private data sharing. Tether Education: Democratizing access to top-tier digital learning, we empower individuals to thrive in the digital and gig economies, driving global growth and opportunity. Tether Evolution : At the intersection of technology and human potential, we are pushing the boundaries of what is possible, crafting a future where innovation and human capabilities merge in powerful, unprecedented ways. Why Join Us? Our team is a global talent powerhouse, working remotely from every corner of the world. If you’re passionate about making a mark in the fintech space, this is your opportunity to collaborate with some of the brightest minds, pushing boundaries and setting new standards. 
We’ve grown fast, stayed lean, and secured our place as a leader in the industry. If you have excellent English communication skills and are ready to contribute to the most innovative platform on the planet, Tether is the place for you. Are you ready to be part of the future? About the job: As a Senior SDK Developer, you will be part of the team that works on the development of the new, cutting-edge Tether AI SDK. Developer-Facing SDKs & APIs Tether is committed to delivering world-class developer experiences through robust and intuitive SDKs. You will design, build, and maintain modular, versioned SDKs that abstract complex backend logic into clean, usable interfaces, enabling seamless integration with Tether’s platform across various client environments. Performance & Reliability at Scale SDKs must be fast, lightweight, and reliable, even when performing heavy and demanding operations. You’ll design resilient logic (retry policies, offline handling, batching) and contribute to the scalability of platform-facing interfaces and services powering the SDKs. Security-First Engineering You’ll embed best-in-class security practices directly into the SDK architecture, including secure communication, encrypted storage, and rigorous input validation. Your work will help ensure safe integration pathways for all developers working with the Tether ecosystem. 6+ years of experience working with Node.js/JavaScript in production environments.
Proven track record in designing and maintaining developer-facing SDKs (npm packages, API clients, or instrumentation libraries). Strong understanding of modular architecture, versioning strategies, and semantic API design. Have actively participated in the development of a complex platform. Ability to quickly learn new technologies. Good understanding of security practices. Nice to have: Familiarity with peer-to-peer technologies (Kademlia, BitTorrent, libp2p). Comfortable with high-availability concepts. Rust or C++ skills are a plus. Familiarity with AI domain applications (RAG, agents, inference, AI SDKs). Familiarity with real-time data delivery (Node.js/other streaming).
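The resilient SDK logic mentioned above (retry policies with backoff) usually boils down to a small wrapper around the failing call. A minimal sketch follows, shown in Python for brevity even though the SDK itself is Node.js; the function name, defaults, and jitter factor are illustrative assumptions, not Tether's implementation.

```python
import random
import time

def with_retries(fn, max_attempts=4, base_delay=0.1,
                 retryable=(ConnectionError, TimeoutError)):
    """Call fn(), retrying transient errors with exponential backoff plus jitter.

    Non-retryable exceptions propagate immediately; the last retryable
    failure is re-raised once max_attempts is exhausted.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except retryable:
            if attempt == max_attempts:
                raise
            # Double the delay each attempt; jitter avoids thundering herds.
            delay = base_delay * (2 ** (attempt - 1)) * (1 + random.random() * 0.1)
            time.sleep(delay)
```

A production SDK would typically add a retry budget, honor `Retry-After` headers, and surface offline state to the caller rather than retrying blindly.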
Posted 1 month ago
3.0 - 4.0 years
10 - 15 Lacs
Hyderabad, Telangana, India
On-site
At Livello we are building machine-learning-based demand forecasting tools as well as computer-vision-based multi-camera product recognition solutions that detect people and products, tracking items inserted into or removed from shelves based on users' hand movements. We are building models to determine real-time inventory levels and user behaviour, as well as to predict how much of each product needs to be reordered so that the right products are delivered to the right locations at the right time to fulfil customer demand. Responsibilities Lead the CV and DS team. Work in the area of Computer Vision and Machine Learning, with a focus on product (primarily food) and people recognition (position, movement, age, gender; DSGVO-compliant). Your work will include the formulation and development of Machine Learning models to solve the underlying problem. You will help build our smart supply chain system, keep up to date with the latest algorithmic improvements in forecasting and predictive areas, and challenge the status quo. Statistical data modelling and machine learning research. Conceptualize, implement and evaluate algorithmic solutions for supply forecasting, inventory optimization, predicting sales, and automating business processes. Conduct applied research to model complex dependencies, statistical inference and predictive modelling. Technological conception, design and implementation of new features. Quality assurance of the software through planning, creation and execution of tests. Work with a cross-functional team to define, build, test, and deploy applications. Requirements Master's/PhD in Mathematics, Statistics, Engineering, Econometrics, Computer Science or any related field. 3-4 years of experience with computer vision and data science. Relevant Data Science experience, with a deep technical background in applied data science (machine learning algorithms, statistical analysis, predictive modelling, forecasting, Bayesian methods, optimization techniques).
Experience building production-quality and well-engineered Computer Vision and Data Science products. Experience in image processing, algorithms and neural networks. Knowledge of the tools, libraries and cloud services for Data Science, ideally Google Cloud Platform. Solid Python engineering skills and experience with Python, TensorFlow, Docker. Cooperative and independent work, analytical mindset, and willingness to take responsibility. Fluency in English, both written and spoken. Skills:- Natural Language Processing (NLP), Computer Vision, TensorFlow, Docker, Forecasting, Predictive modelling, Image Processing, Algorithms and Machine Learning (ML)
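One common baseline for the reorder predictions this role describes is a reorder-point calculation with safety stock: order when inventory falls below expected demand over the supplier lead time plus a buffer for demand variability. The sketch below is the generic textbook formulation under a normal-demand assumption, not Livello's actual model; the service-level factor `service_z` is an illustrative default.

```python
import math
import statistics

def reorder_point(daily_demand, lead_time_days, service_z=1.65):
    """Reorder point = mean demand over lead time + safety stock.

    daily_demand: historical units sold per day.
    service_z=1.65 targets roughly a 95% service level under a
    normal approximation of demand.
    """
    mu = statistics.mean(daily_demand)
    sigma = statistics.stdev(daily_demand) if len(daily_demand) > 1 else 0.0
    safety_stock = service_z * sigma * math.sqrt(lead_time_days)
    return mu * lead_time_days + safety_stock
```

A production forecaster would replace the flat mean with a model that captures trend and seasonality, but the reorder-point structure stays the same.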
Posted 1 month ago
0 years
15 - 25 Lacs
Bengaluru, Karnataka, India
On-site
Manage the total pricing procedure and ensure timely response to market conditions. Support pricing strategy formulation to remain competitive and enhance profitability. Analyze competition and industry trends. Develop pricing strategy across various product lines to position the products based on value and the competitive situation. Develop a methodology for calculating list price, price floor, and price ceiling for various product lines within various market segments in relation to value. Maintain the corporate price list and update it periodically. Develop tools for estimating cost for quotes for new products. Transition the organization from a cost-plus pricing model to a value pricing model. Develop a value pricing model and implement it for all new products. Define approval standards and processes. Perform financial evaluation to assess pricing action effectiveness. Lead the price increase and change management process for the organization. Work with sales, management, and product managers to implement price changes in the market and in products. Build business cases for new pricing proposals. Prepare bespoke pricing proposals with an authority matrix and compliance. Conduct training on pricing for sales teams. Propose new models and product features to improve gross margin and increase revenue. Conduct field research, including competition analysis, industry analysis, and trend tracking, and develop insights based on inference. Develop a methodology to identify margin leakages and recommend approaches for improvement. Partner with buyers, product managers and the sales department to ensure an integrated, profit-maximizing approach to market. Analyse the financial impact of the pricing approach in view of overall history as well as customer profitability. Performance Indicators: top-line revenue growth, improved margins, average revenue per contract, customer acquisition cost, lifetime value. 8 years overall experience with at least 3 years in a similar role. Graduation in a relevant stream.
In-depth knowledge of pricing strategies, processes, initiatives and creating pricing process documentation. Experience in SaaS pricing models and value-based pricing. Proficiency in data mining. Good understanding of the business model and numerical data. Analytical mind with strategic ability. Strong attention to detail. Understanding of financial statements. Excellent communication, negotiation and stakeholder management skills Skills:- Pricing Strategy, Pricing management and Revenue growth
Posted 1 month ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
Location: Remote Job Summary: We are looking for a highly motivated AI/ML Engineer with hands-on experience in building applications using LangChain and large language models (LLMs). The ideal candidate will have a strong foundation in machine learning, natural language processing (NLP), and prompt engineering, along with a passion for solving real-world problems using cutting-edge AI technologies. Key Responsibilities: - Design, develop, and deploy AI-powered applications using LangChain and LLM frameworks. - Build and optimize prompt chains, memory modules, and tools for conversational agents. - Integrate third-party APIs, vector databases (like Pinecone, FAISS, or Weaviate), and knowledge bases into AI workflows. - Train, fine-tune, or otherwise adapt LLMs for custom use cases. - Collaborate with product, backend, and data science teams to deliver AI-driven solutions. - Implement evaluation metrics and testing frameworks for model performance and response quality. - Stay current with advancements in generative AI, LLMs, and the LangChain ecosystem. Requirements: - Bachelor's or Master's degree in Computer Science, Artificial Intelligence, or a related field. - 3+ years of experience in machine learning, NLP, or related AI/ML fields. - Proficiency in Python and libraries such as Hugging Face Transformers, LangChain, OpenAI, etc. - Experience with vector stores and retrieval-augmented generation (RAG). - Strong understanding of LLM architecture, prompt engineering, and inference pipelines. - Familiarity with cloud platforms (AWS, GCP, Azure) and MLOps workflows.
Posted 1 month ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Who We Are Ema is building the next generation of AI technology to empower every employee in the enterprise to be their most creative and productive. Our proprietary tech allows enterprises to delegate most repetitive tasks to Ema, the AI employee. We are founded by ex-Google, Coinbase, and Okta executives and serial entrepreneurs. We’ve raised capital from notable investors such as Accel Partners, Naspers, Section32 and a host of prominent Silicon Valley angels, including Sheryl Sandberg (Facebook/Google), Divesh Makan (Iconiq Capital), Jerry Yang (Yahoo), Dustin Moskovitz (Facebook/Asana), David Baszucki (Roblox CEO) and Gokul Rajaram (Doordash, Square, Google). Our team is a powerhouse of talent, comprising engineers from leading tech companies like Google, Microsoft Research, Facebook, Square/Block, and Coinbase. All our team members hail from top-tier educational institutions such as Stanford, MIT, UC Berkeley, CMU and the Indian Institutes of Technology. We’re well funded by the top investors and angels in the world. Ema is based in Silicon Valley and Bangalore, India. This will be a hybrid role where we expect employees to work from the office three days a week. Who You Are We're looking for innovative and passionate Machine Learning Engineers to join our team. You are someone who loves solving complex problems, enjoys the challenges of working with huge data sets, and has a knack for turning theoretical concepts into practical, scalable solutions. You are a strong team player but also thrive in autonomous environments where your ideas can make a significant impact. You love utilizing machine learning techniques to push the boundaries of what is possible within the realm of Natural Language Processing, Information Retrieval and related Machine Learning technologies. Most importantly, you are excited to be part of a mission-oriented, high-growth startup that can create a lasting impact.
You Will Conceptualize, develop, and deploy machine learning models that underpin our NLP, retrieval, ranking, reasoning, dialog and code-generation systems. Implement advanced machine learning algorithms, such as Transformer-based models, reinforcement learning, ensemble learning, and agent-based systems to continually improve the performance of our AI systems. Lead the processing and analysis of large, complex datasets (structured, semi-structured, and unstructured), and use your findings to inform the development of our models. Work across the complete lifecycle of ML model development, including problem definition, data exploration, feature engineering, model training, validation, and deployment. Implement A/B testing and other statistical methods to validate the effectiveness of models. Ensure the integrity and robustness of ML solutions by developing automated testing and validation processes. Clearly communicate the technical workings and benefits of ML models to both technical and non-technical stakeholders, facilitating understanding and adoption. Ideally, You'd Have A Master’s degree or Ph.D. in Computer Science, Machine Learning, or a related quantitative field. Proven industry experience in building and deploying production-level machine learning models. Deep understanding and practical experience with NLP techniques and frameworks, including training and inference of large language models. Deep understanding of any of retrieval, ranking, reinforcement learning, and agent-based systems and experience in how to build them for large systems. Proficiency in Python and experience with ML libraries such as TensorFlow or PyTorch. Excellent skills in data processing (SQL, ETL, data warehousing) and experience working with large-scale data systems. Experience with machine learning model lifecycle management tools, and an understanding of MLOps principles and best practices. Familiarity with cloud platforms like GCP or Azure. 
Familiarity with the latest industry and academic trends in machine learning and AI, and the ability to apply this knowledge to practical projects. Good understanding of software development principles, data structures, and algorithms. Excellent problem-solving skills, attention to detail, and a strong capacity for logical thinking. The ability to work collaboratively in an extremely fast-paced, startup environment. Ema Unlimited is an equal opportunity employer and is committed to providing equal employment opportunities to all employees and applicants for employment without regard to race, color, religion, sex, national origin, age, disability, sexual orientation, gender identity, or genetics.
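The A/B testing mentioned in the responsibilities above is often validated with a two-proportion z-test comparing conversion (or success) rates between model variants. A minimal self-contained sketch follows; the function name and interface are illustrative assumptions, and production teams would typically reach for `scipy` or `statsmodels` instead.

```python
import math

def two_proportion_ztest(success_a, n_a, success_b, n_b):
    """Two-sided z-test for a difference in success rates between variants A and B.

    Returns (z, p_value) using the pooled-proportion standard error and the
    normal CDF computed via math.erf.
    """
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Phi(x) = 0.5 * (1 + erf(x / sqrt(2))); two-sided p = 2 * (1 - Phi(|z|)).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value
```

The usual caveats apply: fix the sample size in advance (or use a sequential method), and check that the normal approximation is reasonable for the observed counts.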
Posted 1 month ago
6.0 years
0 Lacs
India
Remote
About the job: As a Senior Software Developer, you will be part of the team building desktop and mobile AI apps on top of the new, cutting-edge Tether SDK. Responsibilities: AI-Driven Desktop Integration You will develop and maintain backend services and APIs that power AI-enhanced desktop applications. These services support intelligent features like local inference, contextual awareness, and model interaction, tailored specifically for Electron-based or hybrid clients. Platform-Aware API Design Collaborating closely with desktop and React Native teams, you will shape API contracts that reflect platform constraints and performance considerations, ensuring native-like responsiveness and cross-platform consistency. Scalable Model Invocation & Resource Management You’ll contribute to backend services that handle concurrent model invocations, manage GPU/CPU workloads, and intelligently queue or throttle requests based on system constraints, ensuring smooth on-device AI performance. 6+ years of experience working with Node.js/JavaScript.
Experience with desktop app development (Electron, Tauri, or other). Experience working with React Native or bridging backend systems into mobile/desktop hybrid stacks. Experience optimizing performance and resource usage on desktop/mobile clients. Have actively participated in the development of a complex platform. Ability to quickly learn new technologies. Good understanding of security practices. Nice to have: Familiarity with secure inter-process communication. Familiarity with peer-to-peer technologies (Kademlia, BitTorrent, libp2p). C++/Swift/Kotlin skills are a plus. Familiarity with AI/agentic domain applications (RAG, AI SDKs). Familiarity with real-time data delivery (Node.js/other streaming).
Posted 1 month ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Senior Gen AI Engineer Job Description Brightly Software is seeking an experienced candidate to join our Product team in the role of Gen AI Engineer to drive best-in-class client-facing AI features by creating and delivering insights that advise client decisions tomorrow. As a Gen AI Engineer, you will play a critical role in building AI offerings for Brightly. You will partner with our various software Product teams to drive client-facing insights that inform smarter decisions faster. This will include the following: Lead the evaluation and selection of foundation models and vector databases based on performance and business needs. Design and implement applications powered by generative AI (e.g., LLMs, diffusion models), delivering contextual and actionable insights for clients. Establish best practices and documentation for prompt engineering, model fine-tuning, and evaluation to support cross-domain generative AI use cases. Build, test, and deploy generative AI applications using standard tools and frameworks for model inference, embeddings, vector stores, and orchestration pipelines. Key Responsibilities: Guide the design of multi-step RAG, agentic, or tool-augmented workflows. Implement governance, safety layers, and responsible AI practices (e.g., guardrails, moderation, auditability). Mentor junior engineers and review GenAI design and implementation plans. Drive experimentation, benchmarking, and continuous improvement of GenAI capabilities. Collaborate with leadership to align GenAI initiatives with product and business strategy. Build and optimize Retrieval-Augmented Generation (RAG) pipelines using vector stores like Pinecone, FAISS, or AWS OpenSearch. Perform exploratory data analysis (EDA), data cleaning, and feature engineering to prepare data for model building. Design, develop, train, and evaluate machine learning models (e.g., classification, regression, clustering, natural language processing), with strong experience in predictive and statistical modelling.
Implement and deploy machine learning models into production using AWS services, with a strong focus on Amazon SageMaker (e.g., SageMaker Studio, training jobs, inference endpoints, SageMaker Pipelines). Understanding and development of state management workflows using LangGraph. Develop GenAI applications using Hugging Face Transformers, LangChain, and Llama-related frameworks. Engineer and evaluate prompts, including prompt chaining and output quality assessment. Apply NLP and transformer model expertise to solve language tasks. Deploy GenAI models to cloud platforms (preferably AWS) using Docker and Kubernetes. Monitor and optimize model and pipeline performance for scalability and efficiency. Communicate technical concepts clearly to cross-functional and non-technical stakeholders.
Posted 1 month ago
2.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Gen AI Engineer Job Description Brightly Software is seeking a high performer to join our Product team in the role of Gen AI Engineer to drive best-in-class client-facing AI features by creating and delivering insights that advise client decisions tomorrow. As a Gen AI Engineer, you will play a critical role in building AI offerings for Brightly. You will partner with our various software Product teams to drive client-facing insights that inform smarter decisions faster. This will include the following: Design and implement applications powered by generative AI (e.g., LLMs, diffusion models), delivering contextual and actionable insights for clients. Establish best practices and documentation for prompt engineering, model fine-tuning, and evaluation to support cross-domain generative AI use cases. Build, test, and deploy generative AI applications using standard tools and frameworks for model inference, embeddings, vector stores, and orchestration pipelines. Key Responsibilities: Build and optimize Retrieval-Augmented Generation (RAG) pipelines using vector stores like Pinecone, FAISS, or AWS OpenSearch. Develop GenAI applications using Hugging Face Transformers, LangChain, and Llama-related frameworks. Perform exploratory data analysis (EDA), data cleaning, and feature engineering to prepare data for model building. Design, develop, train, and evaluate machine learning models (e.g., classification, regression, clustering, natural language processing), with strong experience in predictive and statistical modelling. Implement and deploy machine learning models into production using AWS services, with a strong focus on Amazon SageMaker (e.g., SageMaker Studio, training jobs, inference endpoints, SageMaker Pipelines). Understanding and development of state management workflows using LangGraph.
Engineer and evaluate prompts, including prompt chaining and output quality assessment. Apply NLP and transformer model expertise to solve language tasks. Deploy GenAI models to cloud platforms (preferably AWS) using Docker and Kubernetes. Monitor and optimize model and pipeline performance for scalability and efficiency. Communicate technical concepts clearly to cross-functional and non-technical stakeholders. Thrive in a fast-paced, lean environment and contribute to scalable GenAI system design. Qualifications Bachelor’s degree is required. 2-4 years of total experience with a strong focus on AI and ML, including 1+ years in core GenAI engineering. Demonstrated expertise in working with large language models (LLMs) and generative AI systems, including both text-based and multimodal models. Strong programming skills in Python, including proficiency with data science libraries such as NumPy, Pandas, Scikit-learn, TensorFlow, and/or PyTorch. Familiarity with MLOps principles and tools for automating and streamlining the ML lifecycle. Experience working with agentic AI. Capable of building Retrieval-Augmented Generation (RAG) pipelines leveraging vector stores like Pinecone, Chroma, or FAISS. Experience using leading AI/ML libraries such as Hugging Face Transformers and LangChain. Practical experience in working with vector databases and embedding methodologies for efficient information retrieval. Experience in developing and exposing API endpoints for accessing AI model capabilities using frameworks like FastAPI. Knowledgeable in prompt engineering techniques, including prompt chaining and performance evaluation strategies. Solid grasp of natural language processing (NLP) fundamentals and transformer-based model architectures. Experience in deploying machine learning models to cloud platforms (preferably AWS) and containerized environments using Docker or Kubernetes.
Skilled in fine-tuning and assessing open-source models using methods such as LoRA, PEFT, and supervised training. Strong communication skills with the ability to convey complex technical concepts to non-technical stakeholders. Able to operate successfully in a lean, fast-paced organization, and to create a vision and an organization that can scale quickly.
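The prompt chaining referenced in the qualifications above reduces to a small loop in which each step's template consumes the previous step's output. Frameworks like LangChain formalize this pattern; a minimal hand-rolled sketch, with all names illustrative and `llm` standing in for any callable model client, looks like:

```python
def run_chain(steps, initial_input, llm):
    """Run a simple prompt chain.

    steps: ordered prompt templates, each containing an {input} placeholder.
    llm:   a callable taking a prompt string and returning the model's reply.
    Each step's template is filled with the previous output and sent to the
    model; the final reply is returned.
    """
    output = initial_input
    for template in steps:
        output = llm(template.format(input=output))
    return output
```

Output quality assessment then hooks in naturally between steps, e.g. scoring each intermediate reply and retrying or short-circuiting the chain when a step falls below threshold.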
Posted 1 month ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description The Global Data Insight & Analytics organization is looking for a top-notch Software Engineer with Machine Learning knowledge and experience to join our team and drive the next generation of the AI/ML (Mach1ML) platform. In this role you will work in a small, cross-functional team. The position will collaborate directly and continuously with other engineers, business partners, product managers and designers from distributed locations, and will release early and often. The team you will be working on is focused on building Mach1ML, an AI/ML enablement platform to democratize Machine Learning across the Ford enterprise (like OpenAI’s GPT, Facebook’s FBLearner, etc.) and deliver next-gen analytics innovation. We strongly believe that data has the power to help create great products and experiences which delight our customers. We believe that actionable and persistent insights, based on a high-quality data platform, help business and engineering make more impactful decisions. Our ambitions reach well beyond existing solutions, and we are in search of innovative individuals to join this Agile team. This is an exciting, fast-paced role which requires outstanding technical and organization skills combined with critical thinking, problem-solving and agile management tools to support team success. Responsibilities What you'll be able to do: As a Software Engineer, you will work on developing features for the Mach1ML platform and support customers in model deployment using Mach1ML on GCP and on-prem. You will follow Rally to manage your work. You will incorporate an understanding of product functionality and the customer perspective for model deployment. You will work with cutting-edge technologies such as GCP, Kubernetes, Docker, Seldon, Tekton, Airflow, Rally, etc. Position Responsibilities: Work closely with the Tech Anchor, Product Manager and Product Owner to deliver machine learning use cases using the Ford Agile Framework.
Work with Data Scientists and ML engineers to tackle challenging AI problems. Work specifically on the Deploy team to drive model deployment and AI/ML adoption with other internal and external systems. Help innovate by researching state-of-the-art deployment tools and share knowledge with the team. Lead by example in the use of paired programming for cross-training/upskilling, problem solving, and speed to delivery. Leverage the latest GCP, CI/CD, and ML technologies. Critical Thinking: Able to influence the strategic direction of the company by finding opportunities in large, rich data sets and crafting and implementing data-driven strategies that fuel growth, including cost savings, revenue, and profit. Modelling: Assess and evaluate the impact of missing/unusable data; design and select features; develop and implement statistical/predictive models using advanced algorithms on diverse sources of data; and test and validate models for tasks such as forecasting, natural language processing, pattern recognition, machine vision, supervised and unsupervised classification, decision trees, neural networks, etc. Analytics: Leverage rigorous analytical and statistical techniques to identify trends and relationships between different components of data, draw appropriate conclusions, and translate analytical findings and recommendations into business strategies or engineering decisions with statistical confidence. Data Engineering: Experience with crafting ETL processes to source and link data in preparation for model/algorithm development. This includes domain expertise of data sets in the environment, third-party data evaluations, and data quality. Visualization: Build visualizations to connect disparate data, find patterns and tell engaging stories. This includes both scientific and geographic visualization, using applications such as Seaborn, Qlik Sense, Power BI, Tableau, Looker Studio, etc.
Qualifications Minimum requirements we seek: Bachelor’s or master’s degree in computer science engineering or a related field, or a combination of education and equivalent experience. 3+ years of experience in full-stack software development. 3+ years of experience in cloud technologies and services, preferably GCP. 3+ years of experience practicing statistical methods and their accurate application, e.g. ANOVA, principal component analysis, correspondence analysis, k-means clustering, factor analysis, multivariate analysis, neural networks, causal inference, Gaussian regression, etc. 3+ years of experience with Python, SQL, BigQuery. Experience with SonarQube, CI/CD, Tekton, Terraform, GCS, GCP Looker, Google Cloud Build, Cloud Run, Vertex AI, Airflow, TensorFlow, etc. Experience in training, building and deploying ML and DL models. Experience with Hugging Face, Chainlit, Streamlit, React. Ability to understand technical, functional, non-functional, and security aspects of business requirements and deliver them end-to-end. Ability to quickly adopt open-source products and tools and integrate them with ML platforms. Building and deploying models (scikit-learn, DataRobot, TensorFlow, PyTorch, etc.). Developing and deploying in on-prem and cloud environments (Kubernetes, Tekton, OpenShift, Terraform, Vertex AI). Our preferred requirements: Master’s degree in computer science engineering or a related field, or a combination of education and equivalent experience. Demonstrated successful application of analytical methods and machine learning techniques with measurable impact on product/design/business/strategy. Proficiency in programming languages such as Python with a strong emphasis on machine learning libraries, generative AI frameworks, and monitoring tools. Utilize tools and technologies such as TensorFlow, PyTorch, scikit-learn, and other machine learning libraries to build and deploy machine learning solutions on cloud platforms.
- Design and implement cloud infrastructure using technologies such as Kubernetes, Terraform, and Tekton to support scalable and reliable deployment of machine learning models, generative AI models, and applications.
- Integrate machine learning and generative AI models into production systems on cloud platforms such as Google Cloud Platform (GCP), ensuring scalability, performance, and proactive monitoring.
- Implement monitoring solutions to track the performance, health, and security of systems and applications, using tools such as Prometheus and Grafana.
- Conduct code reviews and provide constructive feedback to team members on machine learning projects.
- Knowledge and experience in agentic-workflow-based application development and DevOps.
- Stay up to date with the latest trends and advancements in machine learning and data science.
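As a hedged illustration of the "train, build, and deploy ML models" workflow this posting describes, here is a minimal sketch in pure Python: a tiny closed-form linear model stands in for a real scikit-learn or TensorFlow model, and local pickling stands in for pushing an artifact to GCS and serving it from Vertex AI or Cloud Run. All names and numbers are illustrative, not the employer's actual stack.

```python
# Minimal train -> serialize -> reload -> predict sketch. A one-feature
# least-squares model stands in for a real ML framework model.
import pickle

def train(xs, ys):
    """Fit y = a*x + b by ordinary least squares (closed form)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return {"slope": a, "intercept": b}

def predict(model, x):
    return model["slope"] * x + model["intercept"]

# "Build": fit on training data.
model = train([1, 2, 3, 4], [2, 4, 6, 8])

# "Deploy": serialize the artifact; in production this blob would be
# written to object storage and loaded by a serving endpoint.
blob = pickle.dumps(model)

# "Serve": reload the artifact and answer a prediction request.
served = pickle.loads(blob)
print(predict(served, 10))  # slope 2.0, intercept 0.0 -> 20.0
```

The key design point the sketch shows is separating the training step from serving: only the serialized artifact crosses the boundary, which is what makes cloud deployment targets interchangeable.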
Posted 1 month ago
10.0 years
7 - 9 Lacs
Hyderābād
On-site
Summary

At Novartis, we are reimagining medicine by harnessing the power of data and AI. As a Senior Architect – AI Products supporting our Commercial function, you will drive the architectural strategy that enables seamless integration of data and AI products across omnichannel engagement, customer analytics, field operations, and real-world insights. You will work across commercial business domains, data platforms, and AI product teams to design scalable, interoperable, and compliant solutions that maximize the impact of data and advanced analytics on how we engage with healthcare professionals and patients.

About the Role

Position Title: Assoc. Dir. DDIT US&I AI Architect (Commercial)
Location: Hyd-India #LI Hybrid

Your responsibilities include but are not limited to:
- Commercial Architecture Strategy: Define and drive the reference architecture for commercial data and AI products, ensuring alignment with enterprise standards and business priorities.
- Cross-Product Integration: Architect how data products (e.g., HCP 360, engagement data platforms, real-world data assets) connect with AI products (e.g., field force recommendations, predictive models, generative AI copilots) and downstream tools.
- Modular, Scalable Design: Ensure the architecture promotes reuse, scalability, and interoperability across multiple markets, brands, and data domains within the commercial landscape.
- Stakeholder Alignment: Partner with commercial product managers, data science teams, platform engineering, and global/local stakeholders to guide solution design, delivery, and lifecycle evolution.
- Data & AI Lifecycle Enablement: Support the full lifecycle of data and AI, from ingestion and transformation to model training, inference, and monitoring, within compliant and secure environments.
- Governance & Compliance: Ensure the architecture aligns with GxP, data privacy, and commercial compliance requirements (e.g., consent management, data traceability).
- Innovation & Optimization: Recommend architectural improvements, modern technologies, and integration patterns to support personalization, omnichannel engagement, segmentation, targeting, and performance analytics.

What you'll bring to the role:
- Proven ability to lead cross-functional architecture efforts across business, data, and technology teams.
- Good understanding of security, compliance, and privacy regulations in a commercial pharma setting.
- Experience with pharmaceutical commercial ecosystems and data (e.g., IQVIA, Veeva, Symphony).
- Familiarity with customer data platforms (CDPs), identity resolution, and marketing automation tools.

Desirable Requirements:
- Bachelor's or master's degree in computer science, engineering, data science, or a related field.
- 10+ years of experience in enterprise or solution architecture, with significant experience in commercial functions (preferably in pharma or life sciences).
- Strong background in data platforms, pipelines, and governance (e.g., Snowflake, Databricks, CDP, Salesforce integration).
- Hands-on experience integrating solutions across martech, CRM, and omnichannel systems.
- Strong knowledge of AI/ML architectures, particularly those supporting commercial use cases (recommendation engines, predictive analytics, NLP, LLMs).
- Exposure to GenAI applications in commercial settings (e.g., content generation, intelligent assistants).
- Understanding of global-to-local deployment patterns and data sharing requirements.

Commitment to Diversity & Inclusion: Novartis embraces diversity, equal opportunity, and inclusion. We are committed to building diverse teams, representative of the patients and communities we serve, and we strive to create an inclusive workplace that cultivates bold innovation through collaboration and empowers our people to unleash their full potential.

Why Novartis: Helping people with disease and their families takes more than innovative science. It takes a community of smart, passionate people like you. Collaborating, supporting and inspiring each other. Combining to achieve breakthroughs that change patients' lives. Ready to create a brighter future together? https://www.novartis.com/about/strategy/people-and-culture

Join our Novartis Network: Not the right Novartis role for you? Sign up to our talent community to stay connected and learn about suitable career opportunities as soon as they come up: https://talentnetwork.novartis.com/network

Benefits and Rewards: Read our handbook to learn about all the ways we'll help you thrive personally and professionally: https://www.novartis.com/careers/benefits-rewards

Division: Operations
Business Unit: CTS
Location: India
Site: Hyderabad (Office)
Company / Legal Entity: IN10 (FCRS = IN010) Novartis Healthcare Private Limited
Functional Area: Technology Transformation
Job Type: Full time
Employment Type: Regular
Shift Work: No
Posted 1 month ago
8.0 years
4 - 8 Lacs
Gurgaon
On-site
JOB DESCRIPTION
AI Lead - Innovation & Product Development

About Us
KPMG is a dynamic and forward-thinking professional services firm committed to leveraging cutting-edge artificial intelligence to create transformative products and solutions. We are building a team of passionate innovators who thrive on solving complex challenges and pushing the boundaries of what's possible with AI.

Job Summary
We are seeking an experienced and visionary AI Lead to spearhead our AI innovation and product development. The ideal candidate will be a hands-on leader with a strong background in solution architecture, a proven track record in developing AI-based products, and deep expertise in Generative AI applications, including Agentic AI. This role requires a comprehensive understanding of AI models, frameworks, and Agentic AI, along with exposure to GPU infrastructure, to design, build, and deploy scalable AI solutions. You will drive our AI strategy, lead cross-functional teams, and transform complex ideas into tangible, market-ready products, with a strong understanding of enterprise requirements from a professional services perspective.

Key Responsibilities

Strategic Leadership & Innovation:
- Define and drive the AI innovation roadmap, identifying emerging trends in AI, Generative AI, and Agentic AI.
- Lead research, evaluation, and adoption of new AI models, algorithms, and frameworks.
- Foster a culture of continuous learning, experimentation, and innovation.

AI Product Development & Management:
- Lead end-to-end development of AI-based products, from ideation to deployment and optimization.
- Collaborate with product managers, designers, and stakeholders to translate business requirements into viable AI solutions.
- Ensure successful delivery of high-quality, scalable, and performant AI products.
Client Engagement & Solutioning:
- Work with multiple clients to understand requirements, design tailored AI solutions, develop proofs-of-concept (POCs), and ensure successful implementation in a professional services context.

Solution Architecture & Design:
- Design robust, scalable, and secure AI solution architectures across multi-cloud platforms and on-premise infrastructure.
- Provide technical guidance and architectural oversight for AI initiatives, focusing on optimizing for GPU infrastructure.
- Evaluate and recommend AI technologies, tools, and infrastructure, including Large Language Models (LLMs) and Small Language Models (SLMs) on cloud and on-premise.

Team Leadership & Mentorship:
- Lead, mentor, and grow a team of talented AI engineers, data scientists, and machine learning specialists.
- Conduct code reviews and ensure adherence to coding standards and architectural principles.
- Promote collaboration and knowledge sharing.

Technical Expertise & Implementation:
- Hands-on experience developing and deploying Generative AI applications (e.g., LLMs, RAG, GraphRAG, image generation, code generation), including Agentic AI and the Model Context Protocol (MCP).
- Proficiency with agentic AI orchestration frameworks such as LangChain, LlamaIndex, and/or similar tools.
- Experience with leading LLM providers and models, including OpenAI, Llama, Anthropic, and others.
- Familiarity with AI-powered tools and platforms such as Microsoft Copilot and GitHub Copilot.
- Strong understanding of various machine learning models (deep learning, supervised, unsupervised, reinforcement learning).
- Experience with large datasets, ensuring data quality, feature engineering, and efficient data processing for AI model training.
- Deep understanding of GPU infrastructure for AI model training and/or inference.

Qualifications
Bachelor's or Master's degree in Computer Science, AI, ML, Data Science, or a related quantitative field.
- 8+ years in AI/ML development, with at least 3 years in a leadership or lead architect role.
- Mandatory: Proven experience leading the development and deployment of AI-based products and solutions.
- Mandatory: Extensive hands-on experience with Generative AI models and frameworks (e.g., TensorFlow, PyTorch, Hugging Face, OpenAI APIs), including practical application of Agentic AI.
- Proficiency with agentic AI orchestration frameworks such as LangChain, LlamaIndex, and/or similar tools.
- Experience leveraging and integrating various LLM providers and models, including but not limited to OpenAI, Llama, and Anthropic.
- Familiarity with AI-powered development tools and platforms such as Microsoft Copilot, GitHub Copilot, and other code generation/assistance tools.
- Strong understanding of solution architecture principles for large-scale AI systems, including multi-cloud platforms and on-premise deployments.
- Mandatory: Exposure to and understanding of GPU infrastructure, especially NVIDIA, for AI workloads.
- Experience with Large Language Models (LLMs) and Small Language Models (SLMs) in both cloud and on-premise environments.
- Proficiency in programming languages such as Python, with strong software engineering fundamentals.
- Familiarity with MLOps practices, including model versioning, deployment, monitoring, and retraining.
- Mandatory: Demonstrated industry exposure to professional services, with a proven track record of working with multiple clients to solution requirements, conduct POCs, and understand enterprise-level needs.
- Excellent communication, interpersonal, and presentation skills, with the ability to articulate complex technical concepts to diverse audiences.
- Strong problem-solving abilities and a strategic mindset.

What We Offer
- Opportunity to work on cutting-edge AI technologies and shape the future of our products.
- A collaborative and innovative work environment.
- Competitive salary and benefits package.
- Professional development and growth opportunities.
- The chance to make a significant impact on our business and our customers.

If you are a passionate AI leader with a drive for innovation and a desire to build groundbreaking AI products, we encourage you to apply!
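The RAG applications this role centers on hinge on one retrieval step that frameworks like LangChain and LlamaIndex wrap: score documents against a query and keep the best matches as context for the LLM prompt. Here is a toy, dependency-free sketch of that step; bag-of-words cosine similarity stands in for a real embedding model, and all documents and names are illustrative.

```python
# Toy retrieval step behind RAG: rank documents by cosine similarity
# to the query, keep the top k as prompt context.
import math
from collections import Counter

def vectorize(text):
    """Bag-of-words term counts; a real system would use embeddings."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, docs, k=1):
    q = vectorize(query)
    ranked = sorted(docs, key=lambda d: cosine(q, vectorize(d)), reverse=True)
    return ranked[:k]

docs = [
    "GPU infrastructure sizing for model training",
    "quarterly travel expense policy",
    "fine-tuning large language models on GPU clusters",
]
context = retrieve("train models on GPU infrastructure", docs, k=2)
# The retrieved context would then be stuffed into the prompt sent to the LLM.
print(context)
```

The orchestration frameworks named in the posting add chunking, vector stores, and prompt assembly around this same core ranking loop.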
Posted 1 month ago
3.0 - 5.0 years
6 - 11 Lacs
Thiruvananthapuram
On-site
Experience Required: 3-5 years of hands-on experience in full-stack development, system design, and supporting AI/ML data-driven solutions in a production environment.

Key Responsibilities
- Implementing Technical Designs: Collaborate with architects and senior stakeholders to understand high-level designs and break them down into detailed engineering tasks. Implement system modules and ensure alignment with architectural direction.
- Cross-Functional Collaboration: Work closely with software developers, data scientists, and UI/UX teams to translate system requirements into working code. Clearly communicate technical concepts and implementation plans to internal teams.
- Stakeholder Support: Participate in discussions with product and client teams to gather requirements. Provide regular updates on development progress and raise flags early to manage expectations.
- System Development & Integration: Develop, integrate, and maintain components of AI/ML platforms and data-driven applications. Contribute to scalable, secure, and efficient system components based on guidance from architectural leads.
- Issue Resolution: Identify and debug system-level issues, including deployment and performance challenges. Proactively collaborate with DevOps and QA to ensure resolution.
- Quality Assurance & Security Compliance: Ensure that implementations meet coding standards, performance benchmarks, and security requirements. Perform unit and integration testing to uphold quality standards.
- Agile Execution: Break features into technical tasks, estimate effort, and deliver components in sprints. Participate in sprint planning, reviews, and retrospectives with a focus on delivering value.
- Tool & Framework Proficiency: Use modern tools and frameworks in your daily workflow, including AI/ML libraries, backend APIs, front-end frameworks, databases, and cloud services, contributing to robust, maintainable, and scalable systems.
- Continuous Learning & Contribution: Keep up with evolving tech stacks and suggest optimization or refactoring opportunities. Bring learnings from the industry into internal knowledge-sharing sessions.
- Proficiency in Using AI Copilots for Coding: Adapt to emerging tools and apply prompt engineering to effectively use AI for day-to-day coding needs.

Technical Skills
- Hands-on experience with Python-based AI/ML development using libraries such as TensorFlow, PyTorch, scikit-learn, or Keras.
- Hands-on exposure to self-hosted or managed LLMs, supporting integration and fine-tuning workflows as per system needs while following architectural blueprints.
- Practical implementation of NLP/CV modules using tools like SpaCy, NLTK, Hugging Face Transformers, and OpenCV, contributing to feature extraction, preprocessing, and inference pipelines.
- Strong backend experience using Django, Flask, or Node.js, and API development (REST or GraphQL).
- Front-end development experience with React, Angular, or Vue.js, with a working understanding of responsive design and state management.
- Development and optimization of data storage solutions using SQL (PostgreSQL, MySQL) and NoSQL (MongoDB, Cassandra) databases, with hands-on experience configuring indexes, optimizing queries, and using caching tools like Redis and Memcached.
- Working knowledge of microservices and serverless patterns, participating in building modular services, integrating event-driven systems, and following best practices shared by architectural leads.
- Application of design patterns (e.g., Factory, Singleton, Observer) during implementation to ensure code reusability, scalability, and alignment with architectural standards.
- Exposure to big data tools like Apache Spark and Kafka for processing datasets.
- Familiarity with ETL workflows and cloud data warehouses, using tools such as Airflow, dbt, BigQuery, or Snowflake.
- Understanding of CI/CD, containerization (Docker), IaC (Terraform), and cloud platforms (AWS, GCP, or Azure).
- Implementation of cloud security guidelines, including setting up IAM roles, configuring TLS/SSL, and working within secure VPC setups, with support from cloud architects.
- Exposure to MLOps practices, model versioning, and deployment pipelines using MLflow, FastAPI, or AWS SageMaker.
- Configuration and management of cloud services such as AWS EC2, RDS, S3, Load Balancers, and WAF, supporting scalable infrastructure deployment and reliability engineering efforts.

Personal Attributes
- Proactive Execution and Communication: Able to take architectural direction and implement it independently with minimal rework, communicating regularly with stakeholders.
- Collaboration: Comfortable working across disciplines with designers, data engineers, and QA teams.
- Responsibility: Owns code quality and reliability, especially in production systems.
- Problem Solver: Demonstrated ability to debug complex systems and contribute to solutioning.

Preferred Skills: Python, Django, Django ORM, HTML, CSS, Bootstrap, JavaScript, jQuery, multi-threading, multi-processing, database design, database administration, cloud infrastructure, data science, self-hosted LLMs

Qualifications
- Bachelor's or Master's degree in Computer Science, Information Technology, Data Science, or a related field.
- Relevant certifications in cloud or machine learning are a plus.

Package: 6-11 LPA
Job Types: Full-time, Permanent
Pay: ₹600,000.00 - ₹1,100,000.00 per year
Schedule: Day shift, Monday to Friday
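Of the design patterns the posting names (Factory, Singleton, Observer), Observer is the one that most directly underpins the event-driven systems it also mentions. A minimal sketch, with illustrative names, of how it might look in such a service, e.g. notifying a cache invalidator and an audit logger when a model is deployed:

```python
# Observer pattern: a subject (EventBus) keeps a list of subscribers
# and pushes each published event to all of them.
class EventBus:
    def __init__(self):
        self._subscribers = []

    def subscribe(self, handler):
        """Register an observer; any callable taking one event argument."""
        self._subscribers.append(handler)

    def publish(self, event):
        """Notify every registered observer of the event."""
        for handler in self._subscribers:
            handler(event)

received = []

bus = EventBus()
# Two hypothetical observers reacting to the same deployment event.
bus.subscribe(lambda e: received.append(f"cache invalidated for {e}"))
bus.subscribe(lambda e: received.append(f"audit log: {e}"))

bus.publish("model-v2-deployed")
print(received)
```

The design point is decoupling: the publisher never knows who listens, so new reactions (metrics, alerts) can be added without touching the deployment code.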
Posted 1 month ago
1.0 - 4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Title: Bioinformatician
Date: 20 Jun 2025
Job Location: Bangalore

About Syngene: Syngene (www.syngeneintl.com) is an innovation-led contract research, development and manufacturing organization offering integrated scientific services from early discovery to commercial supply.

At Syngene, safety is at the heart of everything we do, personally and professionally. Syngene has placed safety at par with business performance, with shared responsibility and accountability, including:
- Following safety guidelines, procedures, and SOPs, in letter and spirit.
- Overall adherence to safe practices and procedures by oneself and the teams aligned.
- Contributing to the development of procedures, practices, and systems that ensure safe operations and compliance with the company's integrity and quality standards.
- Driving a corporate culture that promotes an environment, health, and safety (EHS) mindset and operational discipline in the workplace at all times.
- Ensuring the safety of self, teams, and lab/plant by adhering to safety protocols and following environment, health, and safety (EHS) requirements at all times in the workplace.
- Ensuring all assigned mandatory trainings related to data integrity, health, and safety measures are completed on time by all members of the team, including self.
- Compliance with Syngene's quality standards at all times.
- Holding self and their teams accountable for the achievement of safety goals.
- Governing and reviewing safety metrics from time to time.

We are seeking a highly skilled and experienced computational biologist to join our team. The ideal candidate will have a proven track record in multi-omics data analysis. They will be responsible for integrative analyses and contributing to the development of novel computational approaches to uncover biological insights.
Experience: 1-4 years

Core Purpose of the Role
To support data-driven biological research by performing computational analysis of omics data and generating translational insights through bioinformatics tools and pipelines.

Position Responsibilities
- Conduct comprehensive analyses of multi-omics datasets, including genomics, transcriptomics, proteomics, metabolomics, and epigenomics.
- Develop computational workflows to integrate various omics data to generate inferences and hypotheses for testing.
- Conduct differential expression and functional enrichment analyses.
- Implement and execute data processing workflows and automate the pipelines with best practices for version control, modularization, and documentation.
- Apply advanced multivariate data analysis techniques, including regression, clustering, and dimensionality reduction, to uncover patterns and relationships in large datasets.
- Collaborate with researchers, scientists, and other team members to translate computational findings into actionable biological insights.

Educational Qualifications
Master's degree in bioinformatics.

Mandatory Technical Skills
- Programming: Proficiency in Python for data analysis, visualization, and pipeline development.
- Multi-omics analysis: Proven experience in analyzing and integrating multi-omics datasets.
- Statistics: Knowledge of probability distributions, correlation analysis, and hypothesis testing.
- Data visualization: Strong understanding of data visualization techniques and tools (e.g., ggplot2, matplotlib, seaborn).
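The differential expression analyses this role calls for reduce, at their core, to comparing a gene's expression between two conditions. A pure-Python sketch of that arithmetic, log2 fold change plus Welch's t statistic for one gene, is below; real pipelines use dedicated tools (e.g., DESeq2, limma, or SciPy), and the expression values here are made up for illustration.

```python
# Core arithmetic of a per-gene differential expression comparison:
# log2 fold change and Welch's (unequal-variance) t statistic.
import math

def mean(xs):
    return sum(xs) / len(xs)

def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def log2_fold_change(treated, control):
    return math.log2(mean(treated) / mean(control))

def welch_t(a, b):
    return (mean(a) - mean(b)) / math.sqrt(variance(a) / len(a) + variance(b) / len(b))

control = [10.0, 12.0, 11.0]   # normalized expression, condition A (illustrative)
treated = [40.0, 44.0, 42.0]   # normalized expression, condition B (illustrative)

lfc = log2_fold_change(treated, control)
t = welch_t(treated, control)
print(round(lfc, 3), round(t, 2))  # log2FC ~1.93, i.e. ~3.8x up-regulated
```

In a real analysis the t statistic would be converted to a p-value and corrected for multiple testing across all genes before calling anything significant.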
Preferred
- Machine learning: Familiarity with AI/ML concepts.

Behavioral Skills
- Excellent communication skills
- Objective thinking
- Problem solving
- Proactivity

Syngene Values
All employees will consistently demonstrate alignment with our core values: Excellence, Integrity, Professionalism.

Equal Opportunity Employer
It is the policy of Syngene to provide equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by applicable legislation or local law. In addition, Syngene will provide reasonable accommodations for qualified individuals with disabilities.
Posted 1 month ago
6.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About the Role
Have you ever wondered why it's taking so long for an earner to be matched to your trip, how the price is determined for your trip, or how an earner is picked from the many around you? If so, the Mobility Marketplace Health Science team is for you!

The Marketplace Health Science team at Uber plays a pivotal role in monitoring marketplace performance, detecting issues in real time, and driving solutions through algorithmic and data-driven interventions. Our work is essential to maintaining Uber's market leadership and delivering reliable experiences to riders and earners. We are seeking experienced data scientists who thrive on solving complex problems at scale. The ideal candidate brings a strong foundation in causal inference, experimentation, and analytics, along with a deep understanding of marketplace dynamics and metric trade-offs.

What the Candidate Will Do
- Refine ambiguous questions and generate new hypotheses about whether marketplace levers such as rider and driver pricing, matching, and surge are functioning appropriately, through a deep understanding of the data, our customers, and our business.
- Define how our teams measure success by developing key performance indicators and other user/business metrics, in close partnership with Product and other subject areas such as engineering, operations, and marketing.
- Collaborate with applied scientists and engineers to build and improve the availability, integrity, accuracy, and reliability of our models, tables, etc.
- Design and develop algorithms to increase the speed and accuracy with which we react to marketplace changes.
- Develop data-driven business insights and work with cross-functional partners to find opportunities and recommend prioritization of product, growth, and optimization initiatives.

Basic Qualifications
- Undergraduate and/or graduate degree in Math, Economics, Statistics, Engineering, Computer Science, or other quantitative fields.
- 6+ years of experience as a Data Scientist, Product Analyst, Senior Data Analyst, or in other data analysis-focused functions.
- Deep understanding of core statistical concepts such as hypothesis testing, regression, and causal inference.
- Advanced SQL expertise.
- Experience with either Python or R for data analysis.
- Knowledge of experimental design and analysis (A/B tests, switchbacks, synthetic control, difference-in-differences, etc.).
- Experience with exploratory data analysis, statistical analysis and testing, and model development.
- Proven track record of wrangling large datasets, extracting insights from data, and summarizing learnings and takeaways.
- Experience with Excel and some dashboarding/data visualization (e.g., Tableau, Mixpanel, Looker, or similar).

Preferred Qualifications
- Proven aptitude for data storytelling and root cause analysis using data.
- Excellent communication skills across technical, non-technical, and executive audiences.
- A growth mindset; love solving ambiguous, ambitious, and impactful problems.
- Ability to work in a self-guided manner.
- Ability to deliver on tight timelines and prioritize multiple tasks while maintaining quality and detail.
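The A/B experimentation work this role describes typically ends in a significance test on a metric difference between control and treatment. As a hedged sketch of the simplest case, here is a two-proportion z-test on completion rates in pure Python; the counts and the 1.96 cutoff (two-sided alpha = 0.05) are illustrative, not Uber's methodology.

```python
# Two-proportion z-test: is the treatment conversion rate different
# from control, beyond what sampling noise would explain?
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical experiment: control has 100 completed trips out of 1000
# riders, treatment has 130 out of 1000.
z = two_proportion_z(100, 1000, 130, 1000)
significant = abs(z) > 1.96  # two-sided test at the 5% level
print(round(z, 2), significant)
```

Switchback and difference-in-differences designs mentioned in the qualifications replace this independence assumption with time- or unit-level structure, but the hypothesis-testing logic is the same.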
Posted 1 month ago