
895 Summarization Jobs - Page 4

JobPe aggregates these listings for easy access; applications are submitted directly on the original job portal.

10.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Job purpose: Design, develop, and deploy end-to-end AI/ML systems, focusing on large language models (LLMs), prompt engineering, and scalable system architecture. Leverage technologies such as Java/Node.js/.NET to build robust, high-performance solutions that integrate with enterprise systems.
Who You Are: Bachelor's or Master's degree in Computer Science, Engineering, or a related field; a PhD is a plus. 10+ years of experience in AI/ML development, with at least 2 years working on LLMs or NLP. Proven expertise in end-to-end system design and deployment of production-grade AI systems. Hands-on experience with Java/Node.js/.NET for backend development. Proficiency in Python and ML frameworks (TensorFlow, PyTorch, Hugging Face Transformers).
Key Responsibilities:
1. Model Development & Training: Design, train, and fine-tune large language models (LLMs) for tasks such as natural language understanding, generation, and classification. Implement and optimize machine learning algorithms using frameworks like TensorFlow, PyTorch, or Hugging Face.
2. Prompt Engineering: Craft high-quality prompts to maximize LLM performance for specific use cases, including chatbots, text summarization, and question-answering systems. Experiment with prompt tuning and few-shot learning techniques to improve model accuracy and efficiency.
3. End-to-End System Design: Architect scalable, secure, and fault-tolerant AI/ML systems, integrating LLMs with backend services and APIs. Develop microservices-based architectures using Java/Node.js/.NET for seamless integration with enterprise applications. Design and implement data pipelines for preprocessing, feature engineering, and model inference.
4. Integration & Deployment: Deploy ML models and LLMs to production environments using containerization (Docker, Kubernetes) and cloud platforms (AWS/Azure/GCP). Build RESTful or GraphQL APIs to expose AI capabilities to front-end or third-party applications.
5. Performance Optimization: Optimize LLMs for latency, throughput, and resource efficiency using techniques like quantization, pruning, and model distillation. Monitor and improve system performance through logging, metrics, and A/B testing.
6. Collaboration & Leadership: Work closely with data scientists, software engineers, and product managers to align AI solutions with business objectives. Mentor junior engineers and contribute to best practices for AI/ML development.
What will excite us: Strong understanding of LLM architectures and prompt engineering techniques. Experience with backend development using Java/Node.js (Express)/.NET Core. Familiarity with cloud platforms (AWS, Azure, GCP) and DevOps tools (Docker, Kubernetes, CI/CD). Knowledge of database systems (SQL, NoSQL) and data pipeline tools (Apache Kafka, Airflow). Strong problem-solving and analytical skills. Excellent communication and teamwork abilities. Ability to work in a fast-paced, collaborative environment.
What will excite you: Lead AI innovation in a fast-growing, technology-driven organization. Work on cutting-edge AI solutions, including LLMs, autonomous AI agents, and Generative AI applications. Engage with top-tier enterprise clients and drive AI transformation at scale.
Location: Ahmedabad
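As a rough, illustrative sketch of the summarization work this listing describes, the snippet below calls a Hugging Face Transformers summarization pipeline; the model name, input text, and length settings are placeholder assumptions rather than anything specified by the employer.

```python
# Illustrative only: a minimal summarization call with Hugging Face Transformers.
# Model name and generation settings are assumptions, not requirements of this role.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

article = (
    "Large language models are increasingly integrated with enterprise backends "
    "through REST APIs, with prompt engineering used to adapt them to specific tasks."
)

# max_length/min_length bound the token budget of the generated summary.
result = summarizer(article, max_length=60, min_length=15, do_sample=False)
print(result[0]["summary_text"])
```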

Posted 4 days ago

Apply

12.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Over 12 years of extensive experience in AI/ML, with a proven track record of architecting and delivering enterprise-scale machine learning solutions across the Retail and FMCG domains. Demonstrated ability to align AI strategy with business outcomes in areas such as customer experience, dynamic pricing, demand forecasting, assortment planning, and inventory optimization.
Deep expertise in Large Language Models (LLMs) and Generative AI, including OpenAI's GPT family, ChatGPT, and emerging models like DeepSeek. Adept at designing domain-specific use cases such as intelligent product search, contextual recommendation engines, conversational commerce assistants, and automated customer engagement using Retrieval-Augmented Generation (RAG) pipelines.
Strong hands-on experience developing and deploying advanced ML models using modern data science stacks, including: Python (advanced programming with a focus on clean, scalable codebases), TensorFlow and Scikit-learn (for deep learning and classical ML models), NumPy and Pandas (for data wrangling, transformation, and statistical analysis), and SQL (for structured data querying, feature engineering, and pipeline optimization).
Expert-level understanding of Deep Learning architectures (CNNs, RNNs, Transformers, BERT/GPT) and Natural Language Processing (NLP) techniques such as entity recognition, text summarization, semantic search, and topic modeling, with practical application in retail-focused scenarios like product catalog enrichment, personalized marketing, and voice/text-based customer interactions.
Strong data engineering proficiency, with experience designing robust data pipelines, building scalable ETL workflows, and integrating structured and unstructured data from ERP, CRM, POS, and social media platforms. Proven ability to operationalize ML workflows through automated retraining, version control, and model monitoring.
Significant experience deploying AI/ML solutions at scale on cloud platforms such as AWS (SageMaker, Bedrock), Google Cloud Platform (Vertex AI), and Azure Machine Learning. Skilled in designing cloud-native architectures for low-latency inference, high-volume batch scoring, and streaming analytics. Familiar with containerization (Docker), orchestration (Kubernetes), and CI/CD for ML (MLOps).
Ability to lead cross-functional teams, translating technical concepts into business impact and collaborating with marketing, supply chain, merchandising, and IT stakeholders. Comfortable engaging with executive leadership to influence digital and AI strategies at an enterprise level.
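The retrieval step behind the RAG-style product search mentioned above can be sketched, under assumptions, with sentence embeddings and cosine similarity; the model name, catalog entries, and query below are purely illustrative.

```python
# Toy sketch of the retrieval step in a RAG pipeline for product search.
# Model choice and product catalog are illustrative assumptions only.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

catalog = [
    "Organic green tea, 100 bags, caffeine-light",
    "Cold brew coffee concentrate, 1 litre",
    "Herbal chamomile tea for evening relaxation",
]
query = "decaf tea for better sleep"

# Encode catalog entries and the query into dense vectors.
doc_vecs = model.encode(catalog, normalize_embeddings=True)
query_vec = model.encode([query], normalize_embeddings=True)

# On normalized vectors, cosine similarity reduces to a dot product.
scores = doc_vecs @ query_vec.T
best = int(np.argmax(scores))
print(f"Top match: {catalog[best]} (score={scores[best][0]:.3f})")
```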

Posted 4 days ago

Apply

0.0 - 2.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Bangalore North, India | Posted on 07/29/2025
Job Information: Job Type: Full time | Date Opened: 07/29/2025 | Project Code: PRJ000 | Industry: IT Services | Work Experience: 5-10 years | City: Bangalore North | State/Province: Karnataka | Country: India | Zip/Postal Code: 560001
About Us

We are a team of cloud enthusiasts, keen and spirited to make the latest cloud technologies work for you.
Rapyder is an agile, innovative company that makes Cloud work for you. With a young, passionate team and expertise in Cloud Computing Solutions, Big Data, Marketing & Commerce, DevOps, and Managed Services, Rapyder is the leading provider of Strategic Cloud Consulting. Solutions provided by Rapyder are seamless, secure, and scalable. With headquarters in Bangalore and sales & support offices in Delhi and Mumbai, we ensure optimal technology solutions that reduce costs, streamline business processes, and create business advantages for our customers.
Budget: 0
Job Description
Position: AI/ML Solution Architect (GenAI Specialist) | Team: ML/GenAI Team | Number of Resources Needed: 1 | Years of Experience: 3-5 Years | Educational Background: BCA, MCA, B.Tech/B.E, M.Tech/ME | Type: Full-time | Expected Joining Date: ASAP
Role Summary: We are looking for an AI/ML Solution Architect with expertise in Generative AI (GenAI) to help design, build, and deploy cutting-edge AI solutions. Whether working in the rapid-paced innovation cycle of a startup or scaling production-grade GenAI systems for enterprise clients, you will serve as a key enabler of value-driven, secure, and scalable AI deployments.
Key Responsibilities: Collaborate with sales teams to understand customer requirements and provide expert guidance on ML/Generative AI solutions across a wide range of use cases: content generation, document automation, chatbots, virtual assistants, summarization, search, personalization, and more. Evaluate and integrate open-source LLMs (e.g., LLaMA, Mistral, Falcon), commercial APIs (e.g., GPT-4, Claude, Gemini), and cloud-native GenAI platforms (e.g., AWS Bedrock, Azure OpenAI). Design and deliver compelling solutions, Statements of Work (SoWs), and demonstrations of our Generative AI offerings to both technical and non-technical audiences. Design Retrieval-Augmented Generation (RAG) pipelines, prompt engineering strategies, vector database integrations, and model fine-tuning where required. Translate business objectives into technical roadmaps, collaborating closely with product managers, engineers, and data scientists. Create prototypes and proofs of concept (PoCs) to validate solution feasibility and performance. Provide technical mentorship, best practices, and governance guidance across teams and clients.
Skills & Experience Required: Master's or Bachelor's degree in Computer Science, Engineering, or a related field (e.g., BCA, MCA, B.Tech/B.E, M.Tech/ME). 3-5 years of experience in AI/ML development/solutioning, with at least 1-2 years in Generative AI/NLP applications. Strong command of transformers, LLMs, embeddings, and NLP methods. Proficiency with LangChain, LlamaIndex, Hugging Face Transformers, and cloud AI tools (SageMaker, Bedrock, Azure OpenAI, Vertex AI). Experience with vector databases like FAISS, Pinecone, or Weaviate. Familiarity with MLOps practices, including model deployment, monitoring, and retraining pipelines. Skilled in Python, with working knowledge of APIs, Docker, CI/CD, and RESTful services. Experience building solutions in both agile startup environments and structured enterprise settings.
Preferred/Bonus Skills: Certifications (e.g., AWS Machine Learning Specialty, Azure AI Engineer). Exposure to multimodal AI (text, image, audio/video) and Agentic AI. Experience with data privacy, responsible AI, and model interpretability frameworks. Familiarity with enterprise security, scalability, and governance standards.
Soft Skills: Entrepreneurial mindset with a bias for action and rapid prototyping. Strong communication and stakeholder management skills. Comfortable navigating ambiguity in startups and structured processes in enterprises. Team player with a passion for continuous learning and AI innovation.
What We Offer: The flexibility and creativity of a startup-style team with the impact and stability of enterprise-scale work. Opportunities to work with cutting-edge GenAI tools and frameworks. A collaborative environment with cross-functional tech and business teams. Career growth in a high-demand, high-impact AI/ML domain.
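For the vector-database piece of the RAG work described in this listing, a minimal FAISS sketch looks roughly like the following; the embedding dimensionality and the random vectors standing in for real embeddings are assumptions for illustration.

```python
# Minimal FAISS sketch of the vector-search layer behind a RAG assistant.
# Vectors are synthetic; in practice they would come from an embedding model.
import faiss
import numpy as np

dim = 384                               # assumed embedding dimensionality
rng = np.random.default_rng(0)

doc_vectors = rng.random((1000, dim), dtype=np.float32)   # stand-in document embeddings
query_vector = rng.random((1, dim), dtype=np.float32)     # stand-in query embedding

index = faiss.IndexFlatL2(dim)          # exact L2 search; swap for IVF/HNSW indexes at scale
index.add(doc_vectors)

distances, ids = index.search(query_vector, 5)
print("Nearest document ids:", ids[0].tolist())
```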

Posted 4 days ago

Apply

0.0 - 3.0 years

12 - 24 Lacs

Chennai, Tamil Nadu

On-site

We are looking for a forward-thinking Data Scientist with expertise in Natural Language Processing (NLP), Large Language Models (LLMs), Prompt Engineering, and Knowledge Graph construction. You will be instrumental in designing intelligent NLP pipelines involving Named Entity Recognition (NER), Relationship Extraction, and semantic knowledge representation. The ideal candidate will also have practical experience in deploying Python-based APIs for model and service integration. This is a hands-on, cross-functional role where you’ll work at the intersection of cutting-edge AI models and domain-driven knowledge extraction.
Key Responsibilities: Develop and fine-tune LLM-powered NLP pipelines for tasks such as NER, coreference resolution, entity linking, and relationship extraction. Design and build Knowledge Graphs by structuring information from unstructured or semi-structured text. Apply Prompt Engineering techniques to improve LLM performance in few-shot, zero-shot, and fine-tuned scenarios. Evaluate and optimize LLMs (e.g., OpenAI GPT, Claude, LLaMA, Mistral, or Falcon) for custom domain-specific NLP tasks. Build and deploy Python APIs (using Flask/FastAPI) to serve ML/NLP models and access data from the graph database. Collaborate with teams to translate business problems into structured use cases for model development. Understand custom ontologies and entity schemas for the corresponding domain. Work with graph databases like Neo4j or similar and query them using Cypher or SPARQL. Evaluate and track performance using both standard metrics and graph-based KPIs.
Required Skills & Qualifications: Strong programming experience in Python and libraries such as PyTorch, TensorFlow, spaCy, scikit-learn, Hugging Face Transformers, LangChain, and OpenAI APIs. Deep understanding of NER, relationship extraction, coreference resolution, and semantic parsing. Practical experience working with or integrating LLMs for NLP applications, including prompt engineering and prompt tuning. Hands-on experience with graph database design and knowledge graph generation. Proficient in Python API development (Flask/FastAPI) for serving models and utilities. Strong background in data preprocessing, text normalization, and annotation frameworks. Understanding of LLM orchestration with tools like LangChain or workflow automation. Familiarity with version control, ML lifecycle tools (e.g., MLflow), and containerization (Docker).
Nice to Have: Experience using LLMs for information extraction, summarization, or question answering over knowledge bases. Exposure to Graph Embeddings, GNNs, or semantic web technologies (RDF, OWL). Experience with cloud-based model deployment (AWS/GCP/Azure). Understanding of retrieval-augmented generation (RAG) pipelines and vector databases (e.g., Chroma, FAISS, Pinecone).
Job Type: Full-time. Pay: ₹1,200,000.00 - ₹2,400,000.00 per year. Ability to commute/relocate: Chennai, Tamil Nadu: Reliably commute or plan to relocate before starting work (Preferred). Education: Bachelor's (Preferred). Experience: Natural Language Processing (NLP): 3 years (Preferred). Language: English & Tamil (Preferred). Location: Chennai, Tamil Nadu (Preferred). Work Location: In person.
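A toy sketch of the NER-to-knowledge-graph flow this role describes, assuming spaCy for entity extraction and hand-built Cypher for Neo4j; the entity-to-node mapping and the MENTIONED_WITH relationship are illustrative choices, not the employer's schema.

```python
# Hedged sketch: spaCy NER feeding simple Cypher MERGE statements for a knowledge graph.
import spacy

nlp = spacy.load("en_core_web_sm")  # requires: python -m spacy download en_core_web_sm

text = "Sundar Pichai announced that Google will open a new campus in Chennai."
doc = nlp(text)

# Named Entity Recognition: each ent carries its surface text and a label such as PERSON or ORG.
for ent in doc.ents:
    print(ent.text, ent.label_)

# Toy relationship extraction: co-occurring PERSON/ORG pairs become MENTIONED_WITH edges.
people = [e.text for e in doc.ents if e.label_ == "PERSON"]
orgs = [e.text for e in doc.ents if e.label_ == "ORG"]
for person in people:
    for org in orgs:
        cypher = (
            f"MERGE (p:Person {{name: '{person}'}}) "
            f"MERGE (o:Org {{name: '{org}'}}) "
            f"MERGE (p)-[:MENTIONED_WITH]->(o)"
        )
        # In production, run this via the neo4j driver with parameters, not string interpolation.
        print(cypher)
```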

Posted 4 days ago

Apply

12.0 years

0 Lacs

Gurugram, Haryana, India

On-site

AI Developer - LLM Specialist (Full-time).
Company Description: RMgX is a Gurgaon-based digital product innovation & consulting firm. Here at RMgX, we design and build elegant, data-driven digital solutions for complex business problems. At the core of the solutions crafted by us is a very strong user experience practice to deeply understand the goals and emotions of business and end-users. RMgX is driven by a passion for quality, and we strongly believe in our people and their capabilities.
Duties and Responsibilities: Design, develop, and deploy AI solutions using Large Language Models (LLMs) such as GPT, LLaMA, Claude, or Mistral. Fine-tune and customize pre-trained LLMs for business-specific use cases. Build and maintain NLP pipelines for classification, summarization, semantic search, etc. Build and maintain vector database pipelines using Milvus, Pinecone, etc. Collaborate with cross-functional teams to integrate LLM-based features into applications. Analyze and improve model performance using appropriate metrics. Stay up to date with AI/ML research and integrate new techniques as appropriate.
Work Experience: 12 years of experience in AI/ML development with a specific focus on NLP and LLM-based applications.
Skills, Abilities & Knowledge: Strong hands-on experience in Python and AI/ML libraries (Hugging Face Transformers, LangChain, PyTorch, TensorFlow, etc.). Proficiency in working with closed-source models via APIs (e.g., OpenAI, Gemini). Understanding of prompt engineering, embeddings, and vector databases like FAISS, Milvus, or Pinecone. Experience in deploying models using REST APIs, Docker, and cloud platforms (AWS/GCP/Azure). Familiarity with MLOps and version control tools (Git, MLflow, etc.). Knowledge of LLMOps platforms such as LangSmith and Weights & Biases is a plus. Strong problem-solving skills, a keen eye for detail, and the ability to work in an agile setup.
Qualifications: Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related field.
Additional Information - Perks and Benefits: Flexible working hours. Saturdays and Sundays are fixed off. Health Insurance and Personal Accident Insurance. BYOD (Bring Your Own Device) benefit. Laptop buyback scheme. (ref:hirist.tech)
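Serving an LLM-backed feature over REST, as this listing mentions, can be sketched with FastAPI as below; the endpoint path, request schema, and the stubbed summarize() body are assumptions, with a real service calling a model or external API instead.

```python
# Hedged sketch of exposing an LLM-backed summarizer over a REST API with FastAPI.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class SummarizeRequest(BaseModel):
    text: str
    max_sentences: int = 2

def summarize(text: str, max_sentences: int) -> str:
    # Placeholder logic: return the first N sentences; a real service would call an LLM here.
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    return ". ".join(sentences[:max_sentences]) + "."

@app.post("/summarize")
def summarize_endpoint(req: SummarizeRequest):
    return {"summary": summarize(req.text, req.max_sentences)}

# Run with: uvicorn app:app --reload  (assuming this file is saved as app.py)
```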

Posted 4 days ago

Apply

6.0 - 10.0 years

0 Lacs

Greater Kolkata Area

On-site

Job Title : Data Scientist Experience : 6 to 10 Years Location : Noida, Bangalore, Pune Employment Type : Full-time Job Summary We are seeking a highly skilled and experienced Data Scientist with a strong background in Natural Language Processing (NLP), Generative AI, and Large Language Models (LLMs). The ideal candidate will be proficient in Python and have hands-on experience working with both Google Cloud Platform (GCP) and Amazon Web Services (AWS). You will play a key role in designing, developing, and deploying AI-driven solutions to solve complex business problems. Key Responsibilities Design and implement NLP and Generative AI models for use cases such as chatbots, text summarization, question answering, and information extraction. Fine-tune and deploy Large Language Models (LLMs) using frameworks such as Hugging Face Transformers or LangChain. Conduct experiments, evaluate model performance, and implement improvements for production-scale solutions. Collaborate with cross-functional teams including product managers, data engineers, and ML engineers. Deploy and manage ML models on cloud platforms (GCP and AWS), using services such as Vertex AI, SageMaker, Lambda, Cloud Functions, etc. Build and maintain ML pipelines for training, validation, and deployment using CI/CD practices. Communicate complex technical findings in a clear and concise manner to both technical and non-technical stakeholders. Required Skills Strong proficiency in Python and common data science/ML libraries (NumPy, Pandas, Scikit-learn, TensorFlow, PyTorch). Proven experience in Natural Language Processing (NLP) techniques (NER, sentiment analysis, embeddings, topic modeling, etc.). Hands-on experience with Generative AI and LLMs (e.g., GPT, BERT, T5, LLaMA, Claude, Gemini). Experience with LLMOps, prompt engineering, and fine-tuning pre-trained language models. Experience with GCP (BigQuery, Vertex AI, Cloud Functions, etc.) and/or AWS (SageMaker, S3, Lambda, etc.). Familiarity with containerization (Docker), orchestration (Kubernetes), and model deployment best practices (ref:hirist.tech)
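One hedged way to "evaluate model performance" for summarization, as the responsibilities above mention, is ROUGE scoring; the snippet assumes the rouge-score package and uses made-up reference and candidate strings purely for illustration.

```python
# Illustrative evaluation snippet: scoring a generated summary against a reference with ROUGE.
from rouge_score import rouge_scorer

reference = "The model summarizes customer emails and routes them to the right team."
candidate = "The model summarizes emails and routes them to the correct team."

scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
scores = scorer.score(reference, candidate)

for name, score in scores.items():
    print(f"{name}: precision={score.precision:.2f} recall={score.recall:.2f} f1={score.fmeasure:.2f}")
```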

Posted 4 days ago

Apply

5.0 - 9.0 years

0 Lacs

Haryana

On-site

Genpact is a global professional services and solutions firm with a team of over 125,000 professionals in more than 30 countries. Driven by curiosity, agility, and the desire to create lasting value for clients, we serve leading enterprises worldwide, including the Fortune Global 500. Our purpose is the relentless pursuit of a world that works better for people, and we achieve this through our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.
We are currently seeking applications for the position of Principal Consultant, Research Data Scientist. We are looking for candidates with relevant experience in Text Mining/Natural Language Processing (NLP) tools, data sciences, Big Data, and algorithms. The ideal candidate should have full-cycle experience in at least one large-scale Text Mining/NLP project, including creating a business use case, Text Analytics assessment/roadmap, technology & analytic solutioning, implementation, and change management. Experience in Hadoop, including development in the map-reduce framework, is also desirable. The Text Mining Scientist (TMS) will play a crucial role in bridging enterprise database teams and business/functional resources, translating business needs into techno-analytic problems, and working with database teams to deliver large-scale text analytic solutions.
Responsibilities:
- Develop transformative AI/ML solutions to address clients' business requirements
- Manage project delivery involving data pre-processing, model training and evaluation, and parameter tuning
- Manage stakeholder/customer expectations and project documentation
- Research cutting-edge developments in AI/ML with NLP/NLU applications in various industries
- Design and develop solution algorithms within tight timelines
- Interact with clients to collect and synthesize requirements for an effective analytics/text mining roadmap
- Work with digital development teams to integrate algorithms into production applications
- Conduct applied research on text analytics and machine learning projects, file patents, and publish papers
Qualifications - Minimum Qualifications/Skills:
- MS in Computer Science, Information Systems, or Computer Engineering
- Relevant experience in Text Mining/Natural Language Processing (NLP) tools, data sciences, Big Data, and algorithms
Technology:
- Open-source text mining paradigms (NLTK, OpenNLP, OpenCalais, StanfordNLP, GATE, UIMA, Lucene) and cloud-based NLU tools (DialogFlow, MS LUIS)
- Statistical toolkits (R, Weka, S-Plus, Matlab, SAS Text Miner)
- Strong Core Java experience, programming in the Hadoop ecosystem, and distributed computing concepts
- Proficiency in Python/R programming; Java programming skills are a plus
Methodology:
- Solutioning & consulting experience in verticals like BFSI and CPG, with text analytics experience on large structured and unstructured data
- Knowledge of AI methodologies (ML, DL, NLP, Neural Networks, Information Retrieval, NLG, NLU)
- Familiarity with Natural Language Processing & Statistics concepts, especially in their application
- Ability to conduct client research to enhance the analytics agenda
Preferred Qualifications/Skills - Technology:
- Expertise in NLP, NLU, and machine learning/deep learning methods
- UI development paradigms for text mining insights visualization
- Experience with Linux, Windows, GPUs, Spark, Scala, and deep learning frameworks
Methodology:
- Social network modeling paradigms, tools & techniques
- Text analytics using NLP tools like Support Vector Machines and social network analysis
- Previous experience with text analytics implementations using open-source packages or SAS Text Miner
- Strong prioritization, consultative mindset, and time management skills
Job Details: Job Title: Principal Consultant | Primary Location: India-Gurugram | Schedule: Full-time | Education Level: Master's/Equivalent | Job Posting Date: Oct 4, 2024, 12:27:03 PM | Unposting Date: Ongoing | Master Skills List: Digital | Job Category: Full Time
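For readers new to the open-source toolkits listed above (NLTK and friends), here is a small frequency-based extractive summarizer; the corpus downloads, scoring heuristic, and sample text are illustrative assumptions, not the methodology used on Genpact projects.

```python
# Toy frequency-based extractive summarizer with NLTK.
from collections import Counter

import nltk
from nltk.corpus import stopwords
from nltk.tokenize import sent_tokenize, word_tokenize

nltk.download("punkt", quiet=True)       # newer NLTK versions may also need "punkt_tab"
nltk.download("stopwords", quiet=True)

def extractive_summary(text: str, n_sentences: int = 2) -> str:
    stop = set(stopwords.words("english"))
    words = [w.lower() for w in word_tokenize(text) if w.isalpha() and w.lower() not in stop]
    freq = Counter(words)

    # Score each sentence by the frequency of its content words.
    sentences = sent_tokenize(text)
    ranked = sorted(sentences, key=lambda s: sum(freq[w.lower()] for w in word_tokenize(s)), reverse=True)

    # Keep the top-scoring sentences, preserving their original order.
    top = set(ranked[:n_sentences])
    return " ".join(s for s in sentences if s in top)

print(extractive_summary(
    "Text mining extracts structure from unstructured text. NLP tools tokenize and tag text. "
    "Summarization selects the most informative sentences. Stop words carry little meaning."
))
```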

Posted 4 days ago

Apply

12.0 years

0 Lacs

Gurugram, Haryana, India

Remote

🧠 Job Title: Engineering Manager Company: Darwix AI Location: Gurgaon (On-site) Type: Full-Time Experience Required: 7–12 Years Compensation: Competitive salary + ESOPs + Performance-based bonuses 🌐 About Darwix AI Darwix AI is one of India’s fastest-growing AI-first startups, building next-gen conversational intelligence and real-time agent assist tools for sales teams globally. We’re transforming how enterprise sales happens across industries like BFSI, real estate, retail, and telecom with a GenAI-powered platform that combines multilingual transcription, NLP, real-time nudges, knowledge base integration, and performance analytics—all in one. Our clients include some of the biggest names in India, MENA, and SEA. We’re backed by marquee venture capitalists, 30+ angel investors, and operators from top AI, SaaS, and B2B companies. Our founding team comes from IITs, IIMs, BITS Pilani, and global enterprise AI firms. Now, we’re looking for a high-caliber Engineering Manager to help lead the next phase of our engineering evolution. If you’ve ever wanted to build and scale real-world AI systems for global use cases—this is your shot. 🎯 Role Overview As Engineering Manager at Darwix AI, you will be responsible for leading and managing a high-performing team of backend, frontend, and DevOps engineers. You will directly oversee the design, development, testing, and deployment of new features and system enhancements across Darwix’s AI-powered product suite. This is a hands-on technical leadership role , requiring the ability to code when needed, conduct architecture reviews, resolve blockers, and manage the overall engineering execution. You’ll work closely with product managers, data scientists, QA teams, and the founders to deliver on roadmap priorities with speed and precision. You’ll also be responsible for building team culture, mentoring developers, improving engineering processes, and helping the organization scale its tech platform and engineering capacity. 🔧 Key Responsibilities1. Team Leadership & Delivery Lead a team of 6–12 software engineers (across Python, PHP, frontend, and DevOps). Own sprint planning, execution, review, and release cycles. Ensure timely and high-quality delivery of key product features and platform improvements. Solve execution bottlenecks and ensure clarity across JIRA boards, product documentation, and sprint reviews. 2. Architecture & Technical Oversight Review and refine high-level and low-level designs proposed by the team. Provide guidance on scalable architectures, microservices design, performance tuning, and database optimization. Drive migration of legacy PHP code into scalable Python-based microservices. Maintain technical excellence across deployments, containerization, CI/CD, and codebase quality. 3. Hiring, Coaching & Career Development Own the hiring and onboarding process for engineers in your pod. Coach team members through 1:1s, OKRs, performance cycles, and continuous feedback. Foster a culture of ownership, transparency, and high-velocity delivery. 4. Process Design & Automation Drive adoption of agile development practices—daily stand-ups, retrospectives, sprint planning, documentation. Ensure production-grade observability, incident tracking, root cause analysis, and rollback strategies. Introduce quality metrics like test coverage, code review velocity, time-to-deploy, bug frequency, etc. 5. Cross-functional Collaboration Work closely with the product team to translate high-level product requirements into granular engineering plans. 
Liaise with QA, AI/ML, Data, and Infra teams to coordinate implementation across the board. Collaborate with customer success and client engineering for debugging and field escalations. 🔍 Technical Skills & Stack🔹 Primary Languages & Frameworks Python (FastAPI, Flask, Django) PHP (legacy services; transitioning to Python) TypeScript, JavaScript, HTML5, CSS3 Mustache templates (preferred), React/Next.js (optional) 🔹 Databases & Storage: MySQL (primary), PostgreSQL MongoDB, Redis Vector DBs: Pinecone, FAISS, Weaviate (RAG pipelines) 🔹 AI/ML Integration: OpenAI APIs, Whisper, Wav2Vec, Deepgram Langchain, HuggingFace, LlamaIndex, LangGraph 🔹 DevOps & Infra: AWS EC2, S3, Lambda, CloudWatch Docker, GitHub Actions, Nginx Git (GitHub/GitLab), Jenkins (optional) 🔹 Monitoring & Testing: Prometheus, Grafana, Sentry PyTest, Selenium, Postman ✅ Candidate Profile👨💻 Experience 7–12 years of total engineering experience in high-growth product companies or startups. At least 2 years of experience managing teams as a tech lead or engineering manager. Experience working on real-time data systems, microservices architecture, and SaaS platforms. 🎓 Education: Bachelor’s or Master’s degree in Computer Science or related field. Preferred background from Tier 1 institutions (IITs, BITS, NITs, IIITs). 💼 Traits We Love: You lead with clarity, ownership, and high attention to detail. You believe in building systems—not just shipping features. You are pragmatic and prioritize team delivery velocity over theoretical perfection. You obsess over latency, clean interfaces, and secure deployments. You want to build a high-performing tech org that scales globally. 🌟 What You’ll Get Leadership role in one of India’s top GenAI startups Competitive fixed compensation with performance bonuses Significant ESOPs tied to company milestones Transparent performance evaluation and promotion framework A high-speed environment where builders thrive Access to investor and client demos, roadshows, GTM huddles, and more Annual learning allowance and access to internal AI/ML bootcamps Founding-team-level visibility in engineering decisions and product innovation 🛠️ Projects You’ll Work On Real-time speech-to-text engine in 11 Indian languages AI-powered live nudges and agent assistance in B2B sales Conversation summarization and analytics for 100,000+ minutes/month Automated call scoring and custom AI model integration Multimodal input processing: audio, text, CRM, chat Custom knowledge graph integrations across BFSI, real estate, retail 📢 Why This Role Matters This is not just an Engineering Manager role. At Darwix AI, every engineering decision feeds directly into how real sales teams close deals. You’ll see your work powering real-time customer calls, nudging field reps in remote towns, helping CXOs make hiring decisions, and making a measurable impact on enterprise revenue. You’ll help shape the core technology platform of a company that’s redefining how humans and machines interact in sales. 📩 How to Apply Email your resume, GitHub/portfolio (if any), and a few lines on why this role excites you to: 📧 people@darwix.ai Subject: Application – Engineering Manager – [Your Name] If you’re a technical leader who thrives on velocity, takes pride in mentoring developers, and wants to ship mission-critical AI systems that power revenue growth across industries, this is your stage . Join Darwix AI. Let’s build something that lasts.
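A hedged sketch of the transcription-plus-summarization step in a pipeline like the one described above; the Whisper model size, summarization model, file name, and truncation limit are placeholder assumptions, and batching and error handling are omitted.

```python
# Hedged sketch: speech-to-text with openai-whisper, then transcript summarization.
import whisper
from transformers import pipeline

def transcribe_and_summarize(audio_path: str) -> dict:
    # 1) Speech-to-text with a small Whisper model.
    asr_model = whisper.load_model("base")
    transcript = asr_model.transcribe(audio_path)["text"]

    # 2) Condense the call transcript for agent-assist / analytics.
    #    Input is truncated for brevity; real pipelines chunk long transcripts.
    summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
    summary = summarizer(transcript[:3000], max_length=80, min_length=20, do_sample=False)

    return {"transcript": transcript, "summary": summary[0]["summary_text"]}

# Example (hypothetical file): transcribe_and_summarize("sales_call_0142.wav")
```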

Posted 4 days ago

Apply

5.0 - 9.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

Genpact is a global professional services and solutions firm with over 125,000 employees in more than 30 countries. We are driven by curiosity, entrepreneurial agility, and the desire to create lasting value for our clients, including Fortune Global 500 companies. Our purpose is the relentless pursuit of a world that works better for people, and we serve leading enterprises with deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI. We are currently seeking applications for the role of Senior Principal Consultant, Research Data Scientist. The ideal candidate should have experience in Text Mining, Natural Language Processing (NLP) tools, Data sciences, Big Data, and algorithms. It is desirable to have full-cycle experience in at least one Large Scale Text Mining/NLP project, including creating a business use case, Text Analytics assessment/roadmap, Technology & Analytic Solutioning, Implementation, and Change Management. Experience in Hadoop, including development in the map-reduce framework, is also required. The Text Mining Scientist (TMS) will play a crucial role in bridging enterprise database teams and business/functional resources, translating business needs into techno-analytic problems and working with database teams to deliver large-scale text analytic solutions. The right candidate should have prior experience in developing text mining and NLP solutions using open-source tools. Responsibilities include developing transformative AI/ML solutions, managing project delivery, stakeholder/customer expectations, project documentation, project planning, and staying updated on industrial and academic developments in AI/ML with NLP/NLU applications. The role also involves conceptualizing, designing, building, and developing solution algorithms, interacting with clients to collect requirements, and conducting applied research on text analytics and machine learning projects. Qualifications we seek: Minimum Qualifications/Skills: - MS in Computer Science, Information systems, or Computer engineering - Systems Engineering experience with Text Mining/NLP tools, Data sciences, Big Data, and algorithms Technology: - Proficiency in Open Source Text Mining paradigms like NLTK, OpenNLP, OpenCalais, StanfordNLP, GATE, UIMA, Lucene, and cloud-based NLU tools such as DialogFlow, MS LUIS - Exposure to Statistical Toolkits like R, Weka, S-Plus, Matlab, SAS-Text Miner - Strong Core Java experience, Hadoop ecosystem, Python/R programming skills Methodology: - Solutioning & Consulting experience in verticals like BFSI, CPG - Solid foundation in AI Methodologies like ML, DL, NLP, Neural Networks - Understanding of NLP & Statistics concepts, applications like Sentiment Analysis, NLP, etc. Preferred Qualifications/Skills: Technology: - Expertise in NLP, NLU, Machine learning/Deep learning methods - UI development paradigms, Linux, Windows, GPU Experience, Spark, Scala - Deep learning frameworks like TensorFlow, Keras, Torch, Theano Methodology: - Social Network modeling paradigms - Text Analytics using NLP tools, Text analytics implementations This is a full-time position based in India-Noida. The candidate should have a Master's degree or equivalent education level. The job posting was on Oct 7, 2024, and the unposting date is ongoing. The primary skills required are digital, and the job category is full-time.,

Posted 4 days ago

Apply

3.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Job Description - Expectations: We are looking for a Software/Cloud Engineer with a proven track record in applying AI to real-world problems. You will be responsible for designing, developing, and maintaining AI-driven solutions integrated with Oracle SaaS applications across various industries. The ideal candidate will possess a clear understanding of the strategic value of data and AI in enhancing business operations. We require team members who are adept at working hands-on with Oracle Cloud products, developing and showcasing custom demonstrations that leverage Oracle's Generative AI, Vision, Language, and other OCI AI services, and leading proof-of-concept projects to meet customer business needs. Experience with Oracle's AI and Cloud capabilities is an advantage, alongside strong curiosity, technical expertise, and exceptional communication abilities.
Responsibilities: Design, develop, and implement end-to-end scalable AI-driven applications and features that act as standalone/independent solutions or enhance the capabilities of Oracle Fusion Cloud Applications. Collaborate closely with Application Sales Representatives, Solution Architects, and internal/external customers to understand specific business pain points and application needs. Create highly customized, compelling, and reusable AI-infused demonstrations of Fusion applications, highlighting the power of Oracle’s AI capabilities by leveraging Oracle AI Services (e.g., Vision, Language, Speech, Anomaly Detection), OCI Generative AI (for LLMs, RAG, embeddings, summarization), or Oracle AI Studio. Integrate diverse systems using REST APIs and other integration patterns. Data handling: data preparation, feature engineering, and working with various data sources. Act as an AI subject matter expert during customer engagements, workshops, and proof-of-concept initiatives.
Work Experience: At least a few years of dedicated experience in AI/ML applications development or a closely related field. While 3+ years is ideal, we recognize the evolving nature of AI and encourage candidates with strong foundational skills and demonstrable passion. Good understanding of core AI, Machine Learning, and Deep Learning concepts, algorithms, and methodologies. 5+ years of software development, with excellent proficiency in Python being essential. Good experience with relational databases (preferably Oracle Database) and 3-5 years of SQL experience querying large, complex data sets. Familiarity with Oracle Fusion Cloud Applications (ERP, SCM, HCM, CX) data models and functionalities would be an advantage. Familiarity with Oracle AI Studio as a development environment is a big advantage. Prior experience (or willingness to learn) with one or more of the following technologies/platforms is an advantage: OCI Object Storage, OCI Data Integration, Oracle Database, Oracle Analytics Cloud, UiPath. Continuously learn and adapt to evolving AI technologies and Oracle product enhancements.
To be successful in this role, you will ideally have Can compose ideas in a clear and concise manner written and/or spoken in English, as the role will support United States based Sales team Team player who can work well with others Good organizational and planning skills with a demonstrated ability to manage projects to completion Aptitude to learn new technologies/techniques quickly and efficiently Strong analytical skills Demonstrates ability to explore different alternatives and options to resolve technical challenges Self-motivated and self-starter Bachelor’s degree in computer science or equivalent technical experience What We Offer Oracle is a very successful, profitable and leading international IT provider providing an environment that enables employees to learn, grow and be successful. Specifically related to the Pre-Sales role in the Pre-Sales Centre we provide: An environment that is focused on continuous learning Ample opportunity to train on new products and to develop new personal skills A combination of deploying technical knowledge and sales abilities A challenging and interesting work environment with the possibility for interaction with colleagues, customers and partners A fun and varied job Excellent possibilities to develop yourself and your career Attractive salary and benefits Qualifications Career Level - IC3 About Us As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry-leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
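The REST-based integration work mentioned in this listing can be sketched generically as below; the endpoint URL, payload schema, and auth header are hypothetical placeholders and are not the actual OCI Generative AI or Fusion Cloud API contracts.

```python
# Generic REST-integration sketch only; endpoint, payload shape, and auth scheme are hypothetical.
import requests

def summarize_via_rest(text: str, endpoint: str, api_key: str) -> str:
    payload = {"input": text, "task": "summarization"}   # assumed request schema
    headers = {"Authorization": f"Bearer {api_key}"}     # assumed auth scheme
    response = requests.post(endpoint, json=payload, headers=headers, timeout=30)
    response.raise_for_status()
    return response.json().get("summary", "")

# Usage (placeholders): summarize_via_rest(doc_text, "https://example.com/ai/summarize", "API_KEY")
```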

Posted 4 days ago

Apply

3.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Description Expectations We are looking for a Software/Cloud Engineer with a proven track record in applying AI to real-world problems. You will be responsible for designing, developing, and maintaining AI-driven solutions integrated with Oracle SaaS applications across various industries. The ideal candidate will possess a clear understanding of the strategic value of data and AI in enhancing business operations. We require team members who are adept at working hands-on with Oracle Cloud products, developing and showcasing custom demonstrations that leverage Oracle's Generative AI, Vision, Language, and other OCI AI services, and leading proof-of-concept projects to meet customer business needs. Experience with Oracle's AI and Cloud capabilities is an advantage alongside strong curiosity, technical expertise, and exceptional communication abilities. Responsibilities Design, develop, and implement end-to-end scalable AI-driven applications and features that act as standalone/independent solutions or enhance the capabilities of Oracle Fusion Cloud Applications. Collaborate closely with Application Sales Representatives, Solution Architects, and internal/external customers to understand specific business pain points application needs. Create highly customized, compelling, and reusable AI-infused demonstrations of Fusion applications, highlighting the power of Oracle’s AI capabilities by leveraging Oracle AI Services (e.g., Vision, Language, Speech, Anomaly Detection), OCI Generative AI (for LLMs, RAG, embeddings, summarization), or Oracle AI Studio. Integrate diverse systems using REST APIs and other integration patterns. Data Handling - data preparation, feature engineering, and working with various data sources. Act as an AI subject matter expert during customer engagements, workshops, and proof-of-concept initiatives. Responsibilities Work experience Experience Level: At least few years of dedicated experience in AI/ML applications development or a closely related field. While 3+ years is ideal, we recognize the evolving nature of AI and encourage candidates with strong foundational skills and demonstrable passion. Good understanding of core AI, Machine Learning and Deep Learning concepts, algorithms, and methodologies. 5+ years of Software Development with excellent proficiency in Python being essential. Good experience with relational databases (preferrable Oracle Database) and 3-5 years SQL experience in querying large complex data set. Familiarity with Oracle Fusion Cloud Applications (ERP, SCM, HCM, CX) data models and functionalities would be an advantage. Familiarity with Oracle AI Studio as a development environment is a big advantage Prior experience (or willing to learn) with one or more of the following technologies/platforms is an advantage: OCI Object Storage, OCI Data Integration, Oracle Database, Oracle Analytics Cloud, UI Path. Continuously learn and adapt to evolving AI technologies and Oracle product enhancements. 
To be successful in this role, you will ideally have Can compose ideas in a clear and concise manner written and/or spoken in English, as the role will support United States based Sales team Team player who can work well with others Good organizational and planning skills with a demonstrated ability to manage projects to completion Aptitude to learn new technologies/techniques quickly and efficiently Strong analytical skills Demonstrates ability to explore different alternatives and options to resolve technical challenges Self-motivated and self-starter Bachelor’s degree in computer science or equivalent technical experience What We Offer Oracle is a very successful, profitable and leading international IT provider providing an environment that enables employees to learn, grow and be successful. Specifically related to the Pre-Sales role in the Pre-Sales Centre we provide: An environment that is focused on continuous learning Ample opportunity to train on new products and to develop new personal skills A combination of deploying technical knowledge and sales abilities A challenging and interesting work environment with the possibility for interaction with colleagues, customers and partners A fun and varied job Excellent possibilities to develop yourself and your career Attractive salary and benefits Qualifications Career Level - IC3 About Us As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry-leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.

Posted 4 days ago

Apply

3.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Job Description Expectations We are looking for a Software/Cloud Engineer with a proven track record in applying AI to real-world problems. You will be responsible for designing, developing, and maintaining AI-driven solutions integrated with Oracle SaaS applications across various industries. The ideal candidate will possess a clear understanding of the strategic value of data and AI in enhancing business operations. We require team members who are adept at working hands-on with Oracle Cloud products, developing and showcasing custom demonstrations that leverage Oracle's Generative AI, Vision, Language, and other OCI AI services, and leading proof-of-concept projects to meet customer business needs. Experience with Oracle's AI and Cloud capabilities is an advantage alongside strong curiosity, technical expertise, and exceptional communication abilities. Responsibilities Design, develop, and implement end-to-end scalable AI-driven applications and features that act as standalone/independent solutions or enhance the capabilities of Oracle Fusion Cloud Applications. Collaborate closely with Application Sales Representatives, Solution Architects, and internal/external customers to understand specific business pain points application needs. Create highly customized, compelling, and reusable AI-infused demonstrations of Fusion applications, highlighting the power of Oracle’s AI capabilities by leveraging Oracle AI Services (e.g., Vision, Language, Speech, Anomaly Detection), OCI Generative AI (for LLMs, RAG, embeddings, summarization), or Oracle AI Studio. Integrate diverse systems using REST APIs and other integration patterns. Data Handling - data preparation, feature engineering, and working with various data sources. Act as an AI subject matter expert during customer engagements, workshops, and proof-of-concept initiatives. Responsibilities Work experience Experience Level: At least few years of dedicated experience in AI/ML applications development or a closely related field. While 3+ years is ideal, we recognize the evolving nature of AI and encourage candidates with strong foundational skills and demonstrable passion. Good understanding of core AI, Machine Learning and Deep Learning concepts, algorithms, and methodologies. 5+ years of Software Development with excellent proficiency in Python being essential. Good experience with relational databases (preferrable Oracle Database) and 3-5 years SQL experience in querying large complex data set. Familiarity with Oracle Fusion Cloud Applications (ERP, SCM, HCM, CX) data models and functionalities would be an advantage. Familiarity with Oracle AI Studio as a development environment is a big advantage Prior experience (or willing to learn) with one or more of the following technologies/platforms is an advantage: OCI Object Storage, OCI Data Integration, Oracle Database, Oracle Analytics Cloud, UI Path. Continuously learn and adapt to evolving AI technologies and Oracle product enhancements. 
To be successful in this role, you will ideally have Can compose ideas in a clear and concise manner written and/or spoken in English, as the role will support United States based Sales team Team player who can work well with others Good organizational and planning skills with a demonstrated ability to manage projects to completion Aptitude to learn new technologies/techniques quickly and efficiently Strong analytical skills Demonstrates ability to explore different alternatives and options to resolve technical challenges Self-motivated and self-starter Bachelor’s degree in computer science or equivalent technical experience What We Offer Oracle is a very successful, profitable and leading international IT provider providing an environment that enables employees to learn, grow and be successful. Specifically related to the Pre-Sales role in the Pre-Sales Centre we provide: An environment that is focused on continuous learning Ample opportunity to train on new products and to develop new personal skills A combination of deploying technical knowledge and sales abilities A challenging and interesting work environment with the possibility for interaction with colleagues, customers and partners A fun and varied job Excellent possibilities to develop yourself and your career Attractive salary and benefits Qualifications Career Level - IC3 About Us As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry-leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.

Posted 4 days ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

What is Blend? Blend is a premier AI services provider, committed to co-creating meaningful impact for its clients through the power of data science, AI, technology, and people. With a mission to fuel bold visions, Blend tackles significant challenges by seamlessly aligning human expertise with artificial intelligence. The company is dedicated to unlocking value and fostering innovation for its clients by harnessing world-class people and data-driven strategy. We believe that the power of people and AI can have a meaningful impact on your world, creating more fulfilling work and projects for our people and clients. For more information, visit www.blend360.com What is the Role? We are looking for a forward-thinking Data & AI Engineer with 1–3 years of experience in data engineering and a passion for using modern AI tools to accelerate development workflows. The ideal candidate is proficient in Python, SQL, PySpark, and has experience working in on-premise big data environments (e.g., Spark, Hadoop, Hive, HDFS). This role is ideal for someone eager to blend traditional data engineering practices with AI-augmented software development, helping us build high-performance pipelines and deliver faster, smarter solutions. What you’ll be doing? Develop and maintain robust ETL/ELT pipelines using Python, SQL, and PySpark. Work with on-premise big data platforms such as Spark, Hadoop, Hive, and HDFS. Optimize and troubleshoot workflows to ensure performance, reliability, and quality. Use AI tools to assist with code generation, testing, debugging, and documentation. Collaborate with data scientists, analysts, and engineers to support data-driven use cases. Maintain up-to-date documentation using AI summarization tools. Apply AI-augmented software engineering practices, including automated testing, code reviews, and CI/CD. Identify opportunities for automation and process improvement across the data lifecycle. What do we need from you? 1–3 years of hands-on experience as a Data Engineer or in a similar data-focused engineering role. Proficiency in Python for data manipulation, automation, and scripting. Solid understanding of SQL and relational database design. Experience building distributed data processing solutions with PySpark. Familiarity with on-premise big data ecosystems, including Hadoop, Hive, HDFS. Active use of AI development tools, such as: GitHub Copilot, Windsurf, Cursor – for intelligent code assistance ChatGPT or similar – for testing support, refactoring, and documentation AI-based testing frameworks or custom scripts Familiar with Git and CI/CD pipelines. Strong analytical skills and a mindset for automation and innovation. What do you get in return? Competitive Salary: Your skills and contributions are highly valued here, and we make sure your salary reflects that, rewarding you fairly for the knowledge and experience you bring to the table. Dynamic Career Growth: Our vibrant environment offers you the opportunity to grow rapidly, providing the right tools, mentorship, and experiences to fast-track your career. Idea Tanks: Innovation lives here. Our "Idea Tanks" are your playground to pitch, experiment, and collaborate on ideas that can shape the future. Growth Chats: Dive into our casual "Growth Chats" where you can learn from the best whether it's over lunch or during a laid-back session with peers, it's the perfect space to grow your skills. Snack Zone: Stay fueled and inspired! In our Snack Zone, you'll find a variety of snacks to keep your energy high and ideas flowing. 
Recognition & Rewards: We believe great work deserves to be recognized. Expect regular Hive-Fives, shoutouts and the chance to see your ideas come to life as part of our reward program. Fuel Your Growth Journey with Certifications: We’re all about your growth groove! Level up your skills with our support as we cover the cost of your certifications.
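A minimal PySpark sketch of the kind of batch ETL described above; the input/output paths, schema, and column names are illustrative assumptions rather than Blend's actual pipelines.

```python
# Minimal PySpark batch transformation: roll up raw order events into daily revenue.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_daily_rollup").getOrCreate()

# Read raw events (assumed location on HDFS / Hive-managed storage).
orders = spark.read.parquet("hdfs:///data/raw/orders")

daily_revenue = (
    orders
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date", "store_id")
    .agg(F.sum("amount").alias("revenue"), F.count(F.lit(1)).alias("order_count"))
)

daily_revenue.write.mode("overwrite").partitionBy("order_date").parquet(
    "hdfs:///data/curated/daily_revenue"
)
spark.stop()
```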

Posted 5 days ago

Apply

25.0 years

0 Lacs

Gurgaon, Haryana, India

Remote

Welo Data works with technology companies to provide datasets that are high-quality, ethically sourced, relevant, diverse, and scalable to supercharge their AI models. As a Welocalize brand, Welo Data leverages over 25 years of experience in partnering with the world’s most innovative companies and brings together a curated global community of over 500,000 AI training and domain experts to offer services that span: ANNOTATION & LABELLING: transcription, summarization, image and video classification and labeling. ENHANCING LLMs: prompt engineering, SFT, RLHF, red teaming and adversarial model training, model output ranking. DATA COLLECTION & GENERATION: from institutional languages to remote field audio collection. RELEVANCE & INTENT: culturally nuanced and aware ranking, relevance, and evaluation to train models for search, ads, and LLM output. Want to join our Welo Data team? We bring practical, applied AI expertise to projects. We have both strong academic experience and a deep working knowledge of state-of-the-art AI tools, frameworks, and best practices. Help us elevate our clients' data at Welo Data.
MAIN PURPOSE OF JOB: As an Operations Specialist, you will play a pivotal role in the day-to-day operations of the company, driving revenue and ensuring that customer commitments are met on time while maintaining the highest quality standards. This position will require a hands-on approach, strategic thinking, and exceptional organizational skills. You will be an early member of our operations team and have the opportunity to shape our most critical operational processes.
MAIN DUTIES: Build and drive operational processes to ensure day-to-day delivery of customer commitments. Manage and oversee various aspects of daily operations, including inventory, procurement, logistics, billing/invoicing, ticketing, timekeeping, project management, and customer service. Assist project teams with planning, scoping, requirements gathering, and validation with clients. Create an effective feedback loop between the front line, product, strategy, and customers. Collaborate with cross-functional teams, including Customer Operations, Product Operations, Data Analytics, HR, Finance, Talent/Procurement, Product Managers, and more, to achieve company objectives and KPIs. Conduct periodic audits to ensure compliance with standards and regulations. Provide support in the onboarding and training of new employees. Analyze operational data and metrics to identify areas for improvement. Participate in process optimization projects and come up with creative solutions to bottlenecks. Assist in financial budgeting and reporting. Support multiple squads on rotation as needed.
REQUIREMENTS: Advanced English skills. Bachelor’s degree in an analytics-heavy major (e.g., Engineering or Economics) and/or a graduate degree in Operations, Engineering, Economics, or Business. Minimum of 2-4 years of experience in an operations role and/or at a top-tier consulting firm. Excellent communication skills, both verbal and written. Strong organizational and multitasking skills. An action-oriented mindset that balances creative problem-solving with the scrappiness to ultimately deliver results. Proficiency in Microsoft Office Suite, with strong Excel skills. Analytical, planning, and process improvement capability.
Other relevant skills: Experience reading SQL or demonstrated analytical skills. Experience with resource management tools (e.g., Workday). Analytical thinking, time management, attention to detail, team collaboration, process improvement, and flexibility & adaptability.

Posted 5 days ago

Apply

0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Introduction A career in IBM Software means you’ll be part of a team that transforms our customers’ challenges into solutions. Seeking new possibilities and always staying curious, we are a team dedicated to creating the world’s leading AI-powered, cloud-native software solutions for our customers. Our renowned legacy creates endless global opportunities for our IBMers, so the door is always open for those who want to grow their career. We are seeking a skilled software developer to join our IBM Software team. As part of our team, you will be responsible for developing and maintaining high-quality software products, working with a variety of technologies and programming languages. IBM’s product and technology landscape includes Research, Software, and Infrastructure. Entering this domain positions you at the heart of IBM, where growth and innovation thrive. Your Role And Responsibilities AI Development and Implementation: Design, develop, and deploy cutting-edge AI solutions using GenAI technologies. Implement and optimize Large Language Models (LLMs) for natural language processing (NLP) applications. Leverage Retrieval-Augmented Generation (RAG) techniques to enhance AI-driven insights and decision-making. Integration with Enterprise Software: Integrate AI models/Agents with Enterprise Software tools to generate high-quality, data-driven AI use cases. Develop custom AI algorithms and Agents to integrate novel features into the product. Solution Optimization and Integration: Fine-tune NLP models to align with domain-specific needs and improve accuracy. Collaborate with cross-functional teams to integrate AI solutions into enterprise workflows. Mentorship and Leadership: Mentor junior developers on AI best practices, tools, and frameworks. Lead brainstorming sessions to explore innovative AI applications and solutions. Research and Innovation: Stay updated with the latest advancements in AI, GenAI, LLMs, and RAG to continuously enhance systems. Conduct proof-of-concept projects to evaluate emerging technologies for business use cases. Preferred Education Bachelor's Degree Required Technical And Professional Expertise Programming Expertise: Proficient in Python with demonstrated experience in AI and ML development. AI Agents, GenAI, and LLMs: Strong expertise in GenAI concepts, including the design, deployment, and optimization of Large Language Models (LLMs), as well as designing and deploying AI Agents. NLP Skills: Advanced knowledge of NLP techniques for text analysis, summarization, and contextual understanding. RAG Expertise: Hands-on experience with Retrieval-Augmented Generation (RAG) to enhance knowledge retrieval systems. Tools and Frameworks: Proficiency with AI frameworks like TensorFlow, PyTorch, Hugging Face, or similar. Familiarity with cloud platforms (AWS, Azure, GCP) for deploying AI solutions. Preferred Technical And Professional Experience DevOps and MLOps: Knowledge of deploying AI models in production environments using MLOps best practices. Data Engineering: Familiarity with data preparation, preprocessing pipelines, and working with unstructured datasets. Agile Methodologies: Experience working in Agile development environments for iterative delivery. Soft Skills: Strong problem-solving skills, attention to detail, and ability to communicate complex concepts to technical and non-technical stakeholders.
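For a concrete flavour of the RAG pattern named in this role, here is a minimal Python sketch that grounds a generated answer in retrieved context. The model checkpoint, the toy in-memory corpus, and the word-overlap retriever are illustrative placeholders, not IBM's stack:

```python
# Minimal RAG sketch: retrieve relevant snippets, then ground the prompt in them.
# The model ID and the tiny in-memory "knowledge base" are illustrative placeholders.
from transformers import pipeline

documents = [
    "Invoices are approved by the finance team within two business days.",
    "Support tickets are triaged by severity before assignment.",
]

def retrieve(query: str, docs: list, top_k: int = 1) -> list:
    # Toy lexical retriever: rank documents by word overlap with the query.
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:top_k]

generator = pipeline("text2text-generation", model="google/flan-t5-small")

query = "How long does invoice approval take?"
context = "\n".join(retrieve(query, documents))
prompt = f"Answer using only the context.\nContext:\n{context}\nQuestion: {query}"
print(generator(prompt, max_new_tokens=64)[0]["generated_text"])
```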

Posted 5 days ago

Apply

7.0 years

0 Lacs

India

Remote

Job Title: AI/ML Developer Location: Remote - India Type: Full-Time The company is a global Elite ServiceNow partner, helping top enterprises solve complex challenges through digital transformation. With 3,000+ successful projects, it delivers cutting-edge GenAI, advisory, and managed services across industries, and it was recognised as the 2024 ServiceNow Worldwide Customer Workflow Partner of the Year. As an AI/ML Developer, you will work on real-world AI applications, from building ML models to deploying NLP solutions and developing intelligent automation. You will design and deploy machine learning models for predictive analytics, classification, and clustering, and leverage LLMs for conversational AI, text generation, and summarization. Requirements: 7+ years of AI/ML development experience. Strong Python, SQL, and ML framework skills. Experience with LLMs. Experience with cloud platforms and NLP tools. Education: Bachelor's in Computer Science, AI/ML, Data Science, or related field. If you're passionate about AI and ready to work on impactful projects, apply now and help shape the future of enterprise AI.
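As an illustration of the summarization work this role mentions, a minimal sketch using a public Hugging Face checkpoint; the model name and sample text are placeholders, not the company's production setup:

```python
# Abstractive summarization sketch with a pre-trained Hugging Face model.
# The checkpoint is an illustrative public model, not a production choice.
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

ticket_text = (
    "The customer reported intermittent login failures starting Monday. "
    "Engineering traced the issue to an expired certificate on the auth gateway "
    "and rotated it; monitoring shows error rates back to baseline."
)

summary = summarizer(ticket_text, max_length=40, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```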

Posted 5 days ago

Apply

2.0 years

0 Lacs

New Delhi, Delhi, India

On-site

Position Title: Automation & Workflow Engineer (No-Code + API) Location: Netaji Subhash Place (NSP), Delhi Experience Required: Minimum 2 Years Compensation: ₹9 – 12 LPA Joining Timeline: Immediate to within 30 days preferred Note: Only candidates currently residing in Delhi-NCR will be considered. About the Role We are seeking a skilled and experienced Automation & Workflow Engineer to join our team. The ideal candidate will have hands-on expertise with no-code platforms such as Make.com (formerly Integromat), Zapier, or n8n, and a strong understanding of API-based integrations. In this role, you will be responsible for designing and implementing automated business workflows to enhance operational efficiency across various departments. Key Responsibilities Design and implement automation workflows using Make.com and other no-code platforms. Integrate tools such as Google Sheets, Gupshup (WhatsApp), and internal APIs to support business processes. Develop structured automation flows for procurement, finance, and operations that enable real-time alerts, data synchronization, and automated reminders. Connect and manage third-party APIs (e.g., Shopify, Razorpay, Google Calendar, Airtable, etc.). Utilize routers, filters, iterators, aggregators, and robust error handling in automation workflows. Set up custom webhooks and manage authentication methods (API Keys, OAuth2, etc.). Automate communication through WhatsApp, Slack, and email based on business events and triggers. Collaborate with cross-functional teams (operations, finance, warehouse) to identify and automate manual processes. Leverage OpenAI/ChatGPT APIs to automate message generation, summarization, and intelligent responses. Document workflows, prepare diagrams, and troubleshoot and resolve automation-related issues. Required Qualifications Minimum of 2 years of hands-on experience with Make.com, Zapier, or n8n. Strong understanding of APIs, HTTP requests, and authentication protocols (API Keys, OAuth2). Proficiency in Google Sheets, including complex formulas and logical functions. Experience with WhatsApp integration via Gupshup, Twilio, or WATI. Familiarity with OpenAI/ChatGPT APIs for automation use cases. Basic scripting knowledge in JavaScript or Python for custom functions. Proficient in using tools such as Postman for API testing and integration debugging. Excellent skills in process mapping, workflow documentation, and issue resolution. Additional Information Work Mode: On-site (NSP, Delhi) Eligibility: Only candidates residing in Delhi-NCR will be shortlisted Opportunity: A high-impact role for individuals passionate about automation, looking to work in a fast-growing and tech-driven environment.
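For illustration of the API glue this role involves, a minimal Python sketch that posts an event into a no-code scenario through a custom webhook; the URL, header name, and payload are hypothetical placeholders:

```python
# Sketch: push an order event into a no-code scenario via a custom webhook,
# the kind of glue code this role automates. URL and key are placeholders.
import requests

WEBHOOK_URL = "https://hooks.example-workflow-tool.com/REPLACE_ME"  # hypothetical
API_KEY = "REPLACE_WITH_SECRET"  # normally loaded from an environment variable

payload = {
    "event": "purchase_order.created",
    "po_number": "PO-1042",
    "amount_inr": 125000,
    "needs_approval": True,
}

response = requests.post(
    WEBHOOK_URL,
    json=payload,
    headers={"x-api-key": API_KEY},  # auth header name depends on the receiving system
    timeout=10,
)
response.raise_for_status()
print("Webhook accepted:", response.status_code)
```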

Posted 5 days ago

Apply

5.0 years

0 Lacs

Hyderābād

On-site

Job Title: AWS Bedrock Developer Job Description We're Concentrix. The intelligent transformation partner. Solution-focused. Tech-powered. Intelligence-fueled. The global technology and services leader that powers the world’s best brands, today and into the future. We’re solution-focused, tech-powered, intelligence-fueled. With unique data and insights, deep industry expertise, and advanced technology solutions, we’re the intelligent transformation partner that powers a world that works, helping companies become refreshingly simple to work, interact, and transact with. We shape new game-changing careers in over 70 countries, attracting the best talent. The Concentrix Technical Products and Services team is the driving force behind Concentrix’s transformation, data, and technology services. We integrate world-class digital engineering, creativity, and a deep understanding of human behavior to find and unlock value through tech-powered and intelligence-fueled experiences. We combine human-centered design, powerful data, and strong tech to accelerate transformation at scale. You will be surrounded by the best in the world providing market-leading technology and insights to modernize and simplify the customer experience. Within our professional services team, you will deliver strategic consulting, design, advisory services, market research, and contact center analytics that deliver insights to improve outcomes and value for our clients, helping us achieve our vision. Our game-changers around the world have devoted their careers to ensuring every relationship is exceptional. And we’re proud to be recognized with awards such as "World's Best Workplaces," “Best Companies for Career Growth,” and “Best Company Culture,” year after year. Join us and be part of this journey towards greater opportunities and brighter futures. We are seeking a highly skilled AWS Bedrock Developer to design, develop, and deploy generative AI applications using Amazon Bedrock. The ideal candidate will have hands-on experience with AWS-native services, prompt engineering, and building intelligent, scalable AI solutions. Key Responsibilities Design and implement generative AI solutions using Amazon Bedrock and foundation models (e.g., Anthropic Claude, Mistral, Meta Llama). Develop and optimize prompts for various use cases, including chatbots, summarization, content generation, and more. Integrate Bedrock with other AWS services such as Lambda, S3, API Gateway, and SageMaker. Build and deploy scalable, secure, and cost-effective AI applications. Collaborate with data scientists, ML engineers, and product teams to define requirements and deliver solutions. Monitor and optimize performance, cost, and reliability of deployed AI services. Stay updated with the latest advancements in generative AI and AWS services. Required Skills & Experience 5+ years of experience in cloud development, with at least 1 year working with AWS Bedrock. Strong programming skills in Java and Spring Boot. Experience with prompt engineering and fine-tuning LLMs. Familiarity with AWS services: Lambda, S3, IAM, API Gateway, CloudWatch, etc. Understanding of RESTful APIs and microservices architecture. Excellent problem-solving and communication skills.
Location: IND Hyderabad Raidurg Village B7 South Tower, Serilingampally Mandal Divya Sree Orion Language Requirements: Time Type: Full time If you are a California resident, by submitting your information, you acknowledge that you have read and have access to the Job Applicant Privacy Notice for California Residents
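As a sketch of the core Bedrock call behind such work (shown in Python with boto3 for brevity, although the role itself asks for Java and Spring Boot), assuming the Anthropic Messages request format; the model ID and region are examples and may differ per account:

```python
# Sketch: invoke an Anthropic model hosted on Amazon Bedrock via boto3.
# Python is used here only to keep the sketch short; model ID, region, and
# request schema (Anthropic Messages format on Bedrock) are illustrative.
import json
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 200,
    "messages": [
        {"role": "user", "content": "Summarize this ticket: login failures caused by an expired certificate."}
    ],
}

response = client.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID, subject to change
    body=json.dumps(body),
)

result = json.loads(response["body"].read())
print(result["content"][0]["text"])
```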

Posted 5 days ago

Apply

4.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Job Description You are a strategic thinker passionate about driving solutions in Data Science. You have found the right team. As a Data Science professional within our Asset Management team, you will spend each day defining, refining, and delivering set goals for our firm. The Asset Management Data Science team is focused on enhancing and facilitating various steps in the investment process, ranging from financial analysis and portfolio management to client services and advisory. You will utilize a large collection of textual data, including financial documents, analyst reports, news, meeting notes, and client communications, along with more typical structured datasets. You will apply the latest methodologies to generate actionable insights to be directly consumed by our business partners. About: Are you excited about using data science and machine learning to make a real impact in the asset management industry? Do you enjoy working with cutting-edge technologies and collaborating with a team of dedicated professionals? If so, the Data Science team at JP Morgan Asset Management could be the perfect fit for you. Here’s why: Real-World Impact: Your work will directly contribute to improving the investment process and enhancing client experiences and operational processes, making a tangible difference in our asset management business. Collaborative Environment: Join a team that values collaboration and teamwork. You’ll work closely with business stakeholders and technologists to develop and implement effective solutions. Continuous Learning: We support your professional growth by providing opportunities to learn and experiment with the latest data science and machine learning techniques. Job Responsibilities Collaborate with internal stakeholders to identify business needs and develop NLP/ML solutions that address client needs and drive transformation. Apply large language models (LLMs), machine learning (ML) techniques, and statistical analysis to enhance informed decision-making and improve workflow efficiency, which can be utilized across investment functions, client services, and operational processes. Collect and curate datasets for model training and evaluation. Perform experiments using different model architectures and hyperparameters, determine appropriate objective functions and evaluation metrics, and run statistical analysis of results. Monitor and improve model performance through feedback and active learning. Collaborate with technology teams to deploy and scale the developed models in production. Deliver written, visual, and oral presentations of modeling results to business and technical stakeholders. Stay up-to-date with the latest research in LLMs, ML, and data science. Identify and leverage emerging techniques to drive ongoing enhancement. Required Qualifications, Capabilities, And Skills Advanced degree (MS or PhD) in a quantitative or technical discipline or significant practical experience in industry. Minimum of 4 years of experience in applying NLP, LLM, and ML techniques to solving high-impact business problems, such as semantic search, information extraction, question answering, summarization, personalization, classification, or forecasting. Advanced Python programming skills with experience writing production-quality code. Good understanding of the foundational principles and practical implementations of ML algorithms such as clustering, decision trees, gradient descent, etc. Hands-on experience with deep learning toolkits such as PyTorch, Transformers, HuggingFace.
Strong knowledge of language models, prompt engineering, model fine-tuning, and domain adaptation. Familiarity with the latest developments in deep learning frameworks. Ability to communicate complex concepts and results to both technical and business audiences. Preferred Qualifications, Capabilities, And Skills Prior experience in an Asset Management line of business. Exposure to distributed model training and deployment. Familiarity with techniques for model explainability and self-validation. About Us JPMorganChase, one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses and many of the world’s most prominent corporate, institutional and government clients under the J.P. Morgan and Chase brands. Our history spans over 200 years and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing and asset management. We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. We also make reasonable accommodations for applicants’ and employees’ religious practices and beliefs, as well as mental health or physical disability needs. Visit our FAQs for more information about requesting an accommodation. About The Team J.P. Morgan Asset & Wealth Management delivers industry-leading investment management and private banking solutions. Asset Management provides individuals, advisors and institutions with strategies and expertise that span the full spectrum of asset classes through our global network of investment professionals. Wealth Management helps individuals, families and foundations take a more intentional approach to their wealth or finances to better define, focus and realize their goals.
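To illustrate the semantic-search use case listed above, a minimal sketch using sentence embeddings; the model checkpoint and sample notes are illustrative, not the firm's data or models:

```python
# Sketch: semantic search over short notes with sentence embeddings.
# The checkpoint is a public example model; the notes are made-up samples.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

notes = [
    "Management guided FY25 margins higher on cost discipline.",
    "The client asked for a summary of duration risk in the bond sleeve.",
    "Fund flows into emerging-market equities accelerated in Q2.",
]

query = "What did the company say about profitability next year?"
note_emb = model.encode(notes, convert_to_tensor=True)
query_emb = model.encode(query, convert_to_tensor=True)

scores = util.cos_sim(query_emb, note_emb)[0]
best = int(scores.argmax())
print(f"Best match (score {float(scores[best]):.2f}): {notes[best]}")
```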

Posted 5 days ago

Apply

2.0 years

0 Lacs

New Delhi, Delhi, India

On-site

Responsibilities: • Design and implement automation workflows using Make.com (Integromat) with tools like Google Sheets, WhatsApp (via Gupshup), and internal APIs. • Build structured procurement, operations, and finance workflows for reminders, alerts, and data syncing. • Develop and manage complex logic involving filters, routers, iterators, aggregators, and error handling in Make.com. • Integrate 3rd-party APIs (e.g., BusyBuy, Razorpay, Shopify, Gupshup, Google Calendar, Airtable) using custom HTTP modules or native integrations. • Set up custom webhooks and manage token refresh logic where required. • Implement real-time notifications across WhatsApp, email, Slack, etc., based on business triggers. • Collaborate with operations, warehouse, and finance leads to map processes and automate high-leverage workflows. • Use the OpenAI/ChatGPT API to embed basic AI into workflows (e.g., message summarization, smart follow-ups). • Document each scenario and process clearly with diagrams and recovery logic. • Monitor workflows and debug issues quickly with minimal downtime. Requirements: • 2+ years of experience with Make.com (or Zapier/n8n) for workflow automation. • Strong understanding of HTTP requests, APIs, and authentication methods (API keys, OAuth2). • Proficiency with Google Sheets and formula logic for operational workflows. • Hands-on experience with the WhatsApp Business API (via Gupshup, Twilio, or WATI). • Ability to design complex automation logic using routers, filters, and iterators in Make.com. • Working knowledge of OpenAI (ChatGPT API) integration within automation tools. • Basic data modeling using Google Sheets, Airtable, or similar tools. • Comfortable with API documentation, Postman testing, and integration troubleshooting. • Strong logical thinking and ability to map processes end-to-end. • Basic working knowledge of JavaScript or Python for custom logic blocks or API edge cases. • Excellent communication and self-managed execution skills.
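As a sketch of the OpenAI step such a workflow might call out to for message summarization (assuming the openai Python SDK v1+, with an illustrative model name and sample chat thread):

```python
# Sketch: the kind of OpenAI call a Make.com/n8n scenario might delegate to
# for chat summarization. Model name is illustrative; the API key is read
# from the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

thread = (
    "Vendor: stock for SKU-88 arrives Thursday. "
    "Warehouse: dock 3 is free after 2pm. "
    "Finance: advance payment cleared this morning."
)

completion = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name
    messages=[
        {"role": "system", "content": "Summarize operational chats in one sentence."},
        {"role": "user", "content": thread},
    ],
)
print(completion.choices[0].message.content)
```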

Posted 5 days ago

Apply

5.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Introduction A career in IBM Software means you’ll be part of a team that transforms our customers’ challenges into solutions. Seeking new possibilities and always staying curious, we are a team dedicated to creating the world’s leading AI-powered, cloud-native software solutions for our customers. Our renowned legacy creates endless global opportunities for our IBMers, so the door is always open for those who want to grow their career. We are seeking a skilled software developer to join our IBM Software team. As part of our team, you will be responsible for developing and maintaining high-quality software products, working with a variety of technologies and programming languages. IBM’s product and technology landscape includes Research, Software, and Infrastructure. Entering this domain positions you at the heart of IBM, where growth and innovation thrive. Your Role And Responsibilities AI Development and Implementation: Design, develop, and deploy cutting-edge AI solutions using GenAI technologies. Implement and optimize Large Language Models (LLMs) for text, image, and other data. Leverage Retrieval-Augmented Generation (RAG) techniques to enhance AI-driven insights and decision-making. Enterprise Software Integration: Integrate AI models with Enterprise Software to build high-quality AI use cases. Develop custom AI algorithms and AI Agents. Solution Optimization and Integration: Fine-tune NLP models to align with domain-specific needs and improve accuracy. Collaborate with cross-functional teams to integrate AI solutions into enterprise workflows. Mentorship and Leadership: Mentor junior developers on AI best practices, tools, and frameworks. Lead brainstorming sessions to explore innovative AI applications and solutions. Research and Innovation: Stay updated with the latest advancements in AI, GenAI, LLMs, and RAG to continuously enhance systems. Conduct proof-of-concept projects to evaluate emerging technologies for business use cases. Preferred Education Master's Degree Required Technical And Professional Expertise 5+ years of experience. Programming Expertise: Proficient in Python with demonstrated experience in AI and ML development. Knowledge of vector databases. AI Agents, GenAI, and LLMs: Strong expertise in GenAI concepts, including the design, deployment, and optimization of Large Language Models (LLMs) and the design of AI Agents. NLP Skills: Advanced knowledge of NLP techniques for text analysis, summarization, and contextual understanding. RAG Expertise: Hands-on experience with Retrieval-Augmented Generation (RAG) to enhance knowledge retrieval systems. Tools and Frameworks: Proficiency with AI frameworks like TensorFlow, PyTorch, Hugging Face, or similar. Familiarity with cloud platforms (AWS, Azure, GCP) for deploying AI solutions. Preferred Technical And Professional Experience DevOps and MLOps: Knowledge of deploying AI models in production environments using MLOps best practices. Data Engineering: Familiarity with data preparation, preprocessing pipelines, and working with unstructured datasets. Agile Methodologies: Experience working in Agile development environments for iterative delivery. Soft Skills: Strong problem-solving skills, attention to detail, and ability to communicate complex concepts to technical and non-technical stakeholders.
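To make the vector-database requirement concrete, a minimal sketch of the nearest-neighbour lookup behind RAG retrieval, reduced to plain NumPy with random stand-in embeddings (real systems would use embeddings from a model and a dedicated vector store):

```python
# Sketch: the cosine-similarity lookup a vector database performs during RAG
# retrieval, written in plain NumPy. Embeddings are random stand-ins.
import numpy as np

rng = np.random.default_rng(0)
doc_embeddings = rng.normal(size=(1000, 384))   # pretend corpus embeddings
query_embedding = rng.normal(size=(384,))       # pretend query embedding

def cosine_top_k(query: np.ndarray, docs: np.ndarray, k: int = 3) -> np.ndarray:
    # Normalize, take dot products, return indices of the k most similar documents.
    docs_n = docs / np.linalg.norm(docs, axis=1, keepdims=True)
    query_n = query / np.linalg.norm(query)
    scores = docs_n @ query_n
    return np.argsort(scores)[::-1][:k]

print("Top matches:", cosine_top_k(query_embedding, doc_embeddings))
```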

Posted 5 days ago

Apply

4.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Role Overview: We are looking for a Senior Data Scientist with a strong foundation in machine learning and data analysis and growing expertise in LLMs and GenAI. The ideal candidate will be passionate about uncovering insights from data, proposing impactful use cases, and building intelligent solutions that drive business value. Key Responsibilities: Analyze structured and unstructured data to identify trends, patterns, and opportunities. Propose and validate AI/ML use cases based on business data and stakeholder needs. Build, evaluate, and deploy machine learning models for classification, regression, clustering, etc. Work with LLMs and GenAI tools to prototype and integrate intelligent solutions (e.g., chatbots, summarization, content generation). Collaborate with data engineers, product managers, and business teams to deliver end-to-end solutions. Ensure data quality, model interpretability, and ethical AI practices. Document experiments, share findings, and contribute to knowledge sharing within the team. Required Skills & Qualifications: Bachelor’s or Master’s degree in Computer Science, Data Science, Statistics, or related field. 3–4 years of hands-on experience in data science and machine learning. Proficient in Python and ML libraries. Experience with data wrangling, feature engineering, and model evaluation. Exposure to LLMs and GenAI tools (e.g., Hugging Face, LangChain, OpenAI APIs). Familiarity with cloud platforms (AWS, GCP, or Azure) and version control (Git). Strong communication and storytelling skills with a data-driven mindset.
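For a flavour of the classification and evaluation work described, a minimal scikit-learn sketch on a bundled toy dataset rather than real business data:

```python
# Sketch: train a classifier and report evaluation metrics on a toy dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Precision, recall, and F1 per class, the kind of evaluation this role involves.
print(classification_report(y_test, model.predict(X_test)))
```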

Posted 5 days ago

Apply

7.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

About the Company The candidate will be responsible for designing and developing applications that automate business processes and language understanding systems, applying NLP text representation techniques for greater efficiency. About the Role The candidate will be required to conceive, design, and develop NLP applications, including applications for document automation, using Python and PyTorch, and should be familiar with advanced libraries used in Python and other programming languages. Responsibilities Develop and deploy applications such as document automation, summarization, user interface creation, and query-based chatbots. Understand business objectives and develop AI/ML models that help to achieve them, along with metrics to track their progress. Qualifications Bachelor’s/Master’s degree in Computer Science or Engineering with a focus on language processing. 7+ years of experience with exposure to NLP and relevant projects. Required Skills Experience with AI/ML platforms, frameworks, and libraries. Knowledge of relevant programming languages, development tools, and databases. Proficiency in programming in Python, PyTorch, and TensorFlow. Understanding of NLP techniques for text representation, semantic extraction techniques, data structures, and modeling. Capable of writing and building components to integrate into new or existing systems. Documentation experience for complex software components. Experience in implementing the product lifecycle: design, development, quality, deployment, and maintenance. Comfortable working in a collaborative environment with teams. Creative thinking for identifying new opportunities. Preferred Skills Experience in projects that required working with natural language data using tools such as NLTK (Python), Apache OpenNLP, or GATE. Knowledge of advanced desktop and web interface development, chatbot support interfaces, etc.
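As an illustration of the text-representation techniques the role references, a minimal TF-IDF similarity sketch; the sample documents are placeholders:

```python
# Sketch: a classical text-representation step (TF-IDF) feeding document
# similarity, useful for routing or retrieval. The corpus is illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "Invoice for consulting services rendered in March.",
    "Purchase order for laptop accessories and peripherals.",
    "Monthly summary of consulting engagements and billing.",
]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(docs)

# Similarity of the first document against the whole corpus.
print(cosine_similarity(matrix[0], matrix).round(2))
```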

Posted 5 days ago

Apply

0 years

0 Lacs

India

On-site

The candidate should have experience in AI development, including developing, deploying, and optimizing AI and Generative AI solutions. The ideal candidate will have a strong technical background, hands-on experience with modern AI tools and platforms, and a proven ability to build innovative applications that leverage advanced AI techniques. You will work collaboratively with cross-functional teams to deliver AI-driven products and services that meet business needs and delight end-users. Key Prerequisites: • Experience in AI and Generative AI development. • Experience in designing, developing, and deploying AI models for various use cases, such as predictive analytics, recommendation systems, and natural language processing (NLP). • Experience in building and fine-tuning Generative AI models for applications like chatbots, text summarization, content generation, and image synthesis. • Experience in implementing and optimizing large language models (LLMs) and transformer-based architectures (e.g., GPT, BERT). • Experience in data ingestion and cleaning. • Feature engineering and data engineering. • Experience in designing and implementing data pipelines for ingesting, processing, and storing large datasets. • Experience in model training and optimization. • Exposure to deep learning models and fine-tuning pre-trained models using frameworks like TensorFlow, PyTorch, or Hugging Face. • Exposure to optimizing models for performance, scalability, and cost efficiency on cloud platforms (e.g., AWS SageMaker, Azure ML, Google Vertex AI). • Hands-on experience in monitoring and improving model performance through retraining and evaluation metrics like accuracy, precision, and recall. AI Tools and Platform Expertise: • OpenAI, Hugging Face. • MLOps tools. • Generative AI-specific tools and libraries for innovative applications. Technical Skills: 1. Strong programming skills in Python (preferred) or other languages like Java, R, or Julia. 2. Expertise in AI frameworks and libraries such as TensorFlow, PyTorch, Scikit-learn, and Hugging Face. 3. Proficiency in working with transformer-based models (e.g., GPT, BERT, T5, DALL-E). 4. Experience with cloud platforms (AWS, Azure, Google Cloud) and containerization tools (Docker, Kubernetes). 5. Solid understa
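To illustrate the fine-tuning starting point described above, a minimal sketch that loads a pre-trained transformer for sequence classification; the checkpoint and sample sentences are illustrative:

```python
# Sketch: load a pre-trained transformer as the starting point for fine-tuning
# on a binary classification task. Checkpoint and inputs are placeholders.
from transformers import AutoModelForSequenceClassification, AutoTokenizer

checkpoint = "distilbert-base-uncased"  # example public checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

batch = tokenizer(
    ["Great product, fast delivery.", "The device stopped working after a week."],
    padding=True,
    truncation=True,
    return_tensors="pt",
)
outputs = model(**batch)
print(outputs.logits.shape)  # (2, num_labels); fine-tuning would continue from here
```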

Posted 5 days ago

Apply
