3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Company: Qualcomm India Private Limited
Job Area: Engineering Group > Software Engineering

General Summary
As a leading technology innovator, Qualcomm pushes the boundaries of what's possible to enable next-generation experiences and drives digital transformation to help create a smarter, connected future for all. As a Qualcomm Software Engineer, you will design, develop, create, modify, and validate embedded and cloud-edge software, applications, and/or specialized utility programs that launch cutting-edge, world-class products that meet and exceed customer needs. Qualcomm Software Engineers collaborate with systems, hardware, architecture, test engineers, and other teams to design system-level software solutions and obtain information on performance requirements and interfaces.

Minimum Qualifications
- Bachelor's degree in Engineering, Information Systems, Computer Science, or a related field and 3+ years of Software Engineering or related work experience; OR Master's degree in Engineering, Information Systems, Computer Science, or a related field and 2+ years of Software Engineering or related work experience; OR PhD in Engineering, Information Systems, Computer Science, or a related field and 1+ year of Software Engineering or related work experience.
- 2+ years of academic or work experience with a programming language such as C, C++, Java, Python, etc.

Location: Hyderabad
Experience: 1-5 Years

We are seeking experienced Machine Learning Engineers specializing in Generative AI to join our core AI team. The ideal candidate will be responsible for designing, developing, and deploying cutting-edge generative AI solutions, with a focus on Large Language Models (LLMs), Retrieval-Augmented Generation (RAG), and intelligent agent systems.

Key Responsibilities
- Design and implement RAG-based solutions to enhance LLM capabilities with external knowledge sources
- Develop and optimize LLM fine-tuning strategies for specific use cases and domain adaptation
- Create robust evaluation frameworks for measuring and improving model performance
- Build and maintain agentic workflows for autonomous AI systems
- Collaborate with cross-functional teams to identify opportunities and implement AI solutions

Required Qualifications
- Bachelor's or Master's degree in Computer Science or a related technical field
- 3+ years of experience in Machine Learning/AI engineering
- Strong programming skills in Python and experience with ML frameworks (PyTorch, TensorFlow)
- Practical experience with LLM deployments and fine-tuning
- Experience with vector databases and embedding models
- Familiarity with modern AI/ML infrastructure and cloud platforms (AWS, GCP, Azure)
- Strong understanding of RAG architectures and implementation

Preferred Qualifications
- Experience with popular LLM frameworks (LangChain, LlamaIndex, Transformers)
- Knowledge of prompt engineering and chain-of-thought techniques
- Experience with containerization and microservices architecture
- Background in NLP and deep learning
- Background in Reinforcement Learning
- Contributions to open-source AI projects
- Experience with MLOps and model deployment pipelines

Skills And Competencies
- Strong problem-solving and analytical skills
- Excellent communication and collaboration abilities
- Experience with agile development methodologies
- Ability to balance multiple projects and priorities
- Strong focus on code quality and best practices
- Understanding of AI ethics and responsible AI development

Applicants: Qualcomm is an equal opportunity employer.
If you are an individual with a disability and need an accommodation during the application/hiring process, rest assured that Qualcomm is committed to providing an accessible process. You may e-mail disability-accomodations@qualcomm.com or call Qualcomm's toll-free number found here. Upon request, Qualcomm will provide reasonable accommodations to support individuals with disabilities to be able to participate in the hiring process. Qualcomm is also committed to making our workplace accessible for individuals with disabilities. (Keep in mind that this email address is used to provide reasonable accommodations for individuals with disabilities. We will not respond here to requests for updates on applications or resume inquiries). Qualcomm expects its employees to abide by all applicable policies and procedures, including but not limited to security and other requirements regarding protection of Company confidential information and other confidential and/or proprietary information, to the extent those requirements are permissible under applicable law. To all Staffing and Recruiting Agencies: Our Careers Site is only for individuals seeking a job at Qualcomm. Staffing and recruiting agencies and individuals being represented by an agency are not authorized to use this site or to submit profiles, applications or resumes, and any such submissions will be considered unsolicited. Qualcomm does not accept unsolicited resumes or applications from agencies. Please do not forward resumes to our jobs alias, Qualcomm employees or any other company location. Qualcomm is not responsible for any fees related to unsolicited resumes/applications. If you would like more information about this role, please contact Qualcomm Careers. 3072581
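The Qualcomm role above includes building evaluation frameworks for RAG-style systems. As one small, non-authoritative illustration, the snippet below computes retrieval hit-rate@k in plain Python; the retrieve function and evaluation records are invented placeholders, not anything from the posting.

```python
# Minimal sketch of one retrieval-evaluation metric for a RAG system.
# All names (retrieve, eval_set) are illustrative placeholders.

def hit_rate_at_k(eval_set, retrieve, k=5):
    """Fraction of queries whose gold document appears in the top-k results."""
    hits = 0
    for example in eval_set:
        retrieved_ids = [doc["id"] for doc in retrieve(example["query"], k)]
        if example["gold_doc_id"] in retrieved_ids:
            hits += 1
    return hits / len(eval_set)

# Example usage with a stubbed retriever:
eval_set = [{"query": "reset my modem", "gold_doc_id": "kb-42"}]
retrieve = lambda q, k: [{"id": "kb-42"}, {"id": "kb-7"}][:k]
print(hit_rate_at_k(eval_set, retrieve, k=2))  # -> 1.0
```

Real evaluation suites typically add answer-faithfulness and latency checks on top of retrieval metrics like this one.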
Posted 4 days ago
6.0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
Job Title: Technical Sales Representative – GenAI Solutions Location: Indore (Hybrid) Employment Type: Full-Time Company: Ccube – Full-Stack Partner for Apps, Data, and AI About Us Ccube is a fast-growing consulting firm delivering real-world outcomes with Apps, Data Engineering, and Generative AI. We work with startups, scale-ups, and enterprises to turn ideas into working solutions—often in as little as 30-60-90 days. Our expertise spans full-stack product development, AI/ML, and cloud-native platforms. As we scale our presence in the US, we are looking for a Technical Sales Representative to help us bring our GenAI capabilities to more businesses. Role Overview We’re seeking a technically fluent and sales-driven individual to join our team as a Technical Sales Representative for Generative AI solutions. In this role, you will be responsible for identifying, engaging, and closing business opportunities with companies that can benefit from GenAI Agents, LLMs, Retrieval-Augmented Generation (RAG), and related technologies. This is a high-impact role that sits at the intersection of business development, technology, and client success. Key Responsibilities Lead GenAI Sales: Prospect, pitch, and close deals with US-based clients across industries, focusing on Generative AI use cases Client Discovery & Demos: Conduct needs analysis, present tailored demos, and consult on AI use cases and technical feasibility Translate Tech to Business Value: Explain complex AI concepts (LLMs, RAG, vector databases, agent frameworks) in a business-first language CRM Management: Own and maintain the sales pipeline via tools like HubSpot or Salesforce Collaboration: Work closely with Ccube’s technical team to scope solutions, draft proposals, and align on project delivery Feedback Loop: Capture market insights and customer feedback to help shape Ccube’s product and service roadmap Qualifications 3–6 years of experience in B2B technical sales, consulting, or solution engineering roles Strong knowledge of Generative AI concepts such as LLMs, RAG pipelines, vector databases (e.g., Pinecone, Weaviate), and open-source agent frameworks Comfortable presenting to both technical and non-technical stakeholders Familiarity with CRM tools like HubSpot, Salesforce, or Pipedrive Strong written and verbal communication skills Self-starter with a high degree of ownership and entrepreneurial mindset Nice to Have Previous experience selling AI/ML or data platform solutions Exposure to industries such as fintech, healthcare, retail, or manufacturing Familiarity with the full AI/ML lifecycle, including data engineering, MLOps, and cloud platforms (AWS, Azure, GCP) Why Join Ccube? Be at the forefront of real-world GenAI adoption Work in a collaborative, no-bureaucracy environment Accelerate your growth as part of a high-performance, high-trust team Competitive compensation with performance-based incentives Powered by JazzHR 7CgDEE4Jmp
Posted 4 days ago
2.0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
Job Summary: We are looking for a skilled and motivated AI Developer with hands-on experience in Retrieval-Augmented Generation (RAG) techniques. The ideal candidate should have a deep understanding of NLP, LLMs (like GPT, LLaMA, or similar), vector databases, and integration of retrieval mechanisms into generative models to create intelligent, context-aware systems. Key Responsibilities: Design and develop AI-powered applications using RAG-based architectures. Integrate large language models (LLMs) with retrieval systems such as vector databases (e.g., FAISS, Pinecone, Weaviate, Qdrant). Fine-tune and evaluate language models for domain-specific tasks. Implement document parsing, chunking, and embedding generation using NLP techniques. Create end-to-end pipelines for document ingestion, semantic search, and context-aware generation. Optimize performance and accuracy of RAG systems. Collaborate with cross-functional teams including data engineers, product managers, and frontend/backend developers. Key Requirements: Bachelor's/Master's degree in Computer Science, Artificial Intelligence, or a related field. 2+ years of experience in AI/ML development, with at least 1 year working on RAG or related LLM-based applications. Strong programming skills in Python and experience with libraries like LangChain, Hugging Face Transformers, PyTorch, etc. Hands-on experience with vector databases like FAISS, Pinecone, or Qdrant. Good understanding of semantic search, embeddings, and prompt engineering. Familiarity with APIs from OpenAI, Cohere, Hugging Face, etc. Knowledge of cloud services (AWS, Azure, GCP) is a plus. Strong problem-solving skills and a passion for innovation in generative AI.
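The posting above outlines the core RAG ingestion flow: parsing and chunking documents, generating embeddings, and serving semantic search. The sketch below illustrates that retrieval half, assuming the sentence-transformers and faiss-cpu packages are installed; the model name, sample document, and chunk sizes are arbitrary illustrative choices rather than requirements of the role.

```python
# Minimal RAG ingestion-and-retrieval sketch (chunk -> embed -> index -> search).
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

def chunk(text, size=400, overlap=50):
    """Split a document into overlapping character chunks."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]

docs = ["FAISS is a library for efficient similarity search over dense vectors."]
chunks = [c for d in docs for c in chunk(d)]

model = SentenceTransformer("all-MiniLM-L6-v2")            # embedding model (illustrative)
vectors = model.encode(chunks, normalize_embeddings=True)   # unit vectors: cosine via inner product

index = faiss.IndexFlatIP(vectors.shape[1])                 # exact inner-product index
index.add(np.asarray(vectors, dtype="float32"))

query_vec = model.encode(["what is FAISS used for?"], normalize_embeddings=True)
scores, ids = index.search(np.asarray(query_vec, dtype="float32"), 3)
context = "\n".join(chunks[i] for i in ids[0] if i != -1)    # context passed to the LLM prompt
print(context)
```

The retrieved context would then be inserted into the generation prompt, which is the "augmented generation" half of the pipeline.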
Posted 4 days ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description It’s an exciting time to be at Infoblox. Named a Top 25 Cyber Security Company by The Software Report and one of Inc. magazine’s Best Workplaces for 2020, Infoblox is the leader in cloud-first networking and security services. Our solutions empower organizations to take full advantage of the cloud to deliver network experiences that are inherently simple, scalable, and reliable for everyone. Infoblox customers are among the largest enterprises in the world and include 70% of the Fortune 500, and our success depends on bright, energetic, talented people who share a passion for building the next generation of networking technologies—and having fun along the way. We are looking for a Data Engineer II to join our Cloud Engineering team in Pune, India, reporting to the senior manager of Software Engineering. In this role, you will develop platforms and products for Infoblox’s SaaS product line delivering next level networking for our customers. You will work closely with data scientists and product teams to curate and refine data powering our latest cloud products. Come join our growing Cloud Engineering team and help us build world class solutions. You are the ideal candidate if you are passionate about the nexus between data and computer science and driven to figure out how best to represent and summarize data in a way that informs good decisions and drives new products. What you’ll do: Curate large-scale data from a multitude of sources into appropriate sets for research and development for the data scientists, threat analysts, and developers across the company Design, test, and implement storage solutions for various consumers of the data, especially data warehouses like ClickHouse and OpenSearch Design and implement mechanisms to monitor data sources over time for changes using summarization, monitoring, and statistical methods Design, develop, and maintain APIs that enable seamless data integration and retrieval processes for internal and external applications, and ensure these APIs are scalable, secure, and efficient to support high-volume data interactions Leverage computer science algorithms and constructs, including probabilistic data structures, to distill large data into sources of insight and enable future analytics Convert prototypes into production data engineering solutions through disciplined software engineering practices, Spark optimizations, and modern deployment pipelines Collaborate on design, implementation, and deployment of applications with the rest of Software Engineering Support data scientists and Product teams in building, debugging, and deploying Spark applications that best leverage data Build and maintain tools for automation, deployment, monitoring, and operations Create test plans, test cases, and run tests with automated tools What you’ll bring: 5+ years of experience in software development with programming languages such as Golang, C, C++, C#, or Java Expertise in Big Data, including MapReduce, Spark Streaming, Kafka, Pub-Sub, and In-memory Database Experience with NoSQL databases such as OpenSearch/Clickhouse Good exposure in application performance tuning, memory management, and scalability Ability to design highly scalable distributed systems using different open-source technologies Experience in microservices development and container-based software using Docker/Kubernetes and other container technologies is a plus Experience with AWS, GCP, or Azure is a plus Experience building high-performance algorithms Bachelor’s degree in computer 
science, computer engineering, or electrical engineering is required; a master’s degree is preferred What success looks like: After six months, you will… Complete onboarding by demonstrating knowledge of the Data Lake and associated technologies and CI/CD processes by deploying ETL pipelines for curation and warehousing to production Complete rotations on support/duty where you will gain experience with the different systems, tools, and processes in the Data Lake by resolving reported issues Contribute to the team's velocity by participating in Scrum and driving stories to completion After about a year, you will… Be an expert on the Data Lake target state architecture and drive Engineering design and grooming sessions for new feature development Apply coding best practices and provide in-depth reviews on the team’s pull requests Be a thought leader in one or more domains of the Data Lake, driving development and mentoring teammates in this domain We’ve got you covered: Our holistic benefits package includes coverage of your health, wealth, and wellness—as well as a great work environment, employee programs, and company culture. We offer a competitive salary and benefits package, including a Provident Fund with company matches and generous paid time off to help you balance your life. We have a strong culture and live our values every day—we believe in transparency, curiosity, respect, and above all, having fun while delighting our customers. Speaking of a great work environment, here are just a few of the perks you may enjoy depending on your location… Delicious and healthy snacks and beverages Electric vehicle charging stations A courtyard and amenities like an onsite gym, table tennis, pool table, play area, etc. Newly remodeled offices with state-of-the-art amenities Why Infoblox? We’ve created a culture that embraces diversity, equity, and inclusion and rewards innovation, curiosity, and creativity. We achieve remarkable results by working together in a supportive environment that focuses on continuous learning and embraces change. So, whether you’re a software engineer, marketing manager, customer care pro, or product specialist, you belong here, where you will have the opportunity to grow and develop your career. Check out what it’s like to be a Bloxer. We think you’ll be excited to join our team.
Posted 4 days ago
2.0 - 3.0 years
0 Lacs
Nagpur, Maharashtra, India
On-site
Experience: 2-3 years
Location: Nagpur
Employment Type: Full-time

Job Summary: We are seeking a skilled Python Developer with hands-on experience in implementing cutting-edge Generative AI solutions and developing machine learning models. The ideal candidate will also possess strong knowledge of FastAPI, REST APIs, RDBMS (especially MySQL), and vector databases such as ChromaDB.

Key Responsibilities:
- Design, develop, and deploy scalable Python applications for AI/ML use cases.
- Implement and fine-tune GenAI models (e.g., LLMs, transformers) for real-world applications.
- Develop RESTful APIs using FastAPI and integrate with front-end and third-party services.
- Create, train, and maintain machine learning models for NLP, recommendation, or classification tasks.
- Design database schemas and manage data pipelines using MySQL and vector databases (e.g., ChromaDB).
- Collaborate with cross-functional teams to translate business requirements into technical solutions.
- Ensure performance, security, and scalability of developed solutions.

Required Skills:
- Strong proficiency in Python with experience in AI/ML libraries (e.g., TensorFlow, PyTorch, Hugging Face).
- Experience with GenAI models (e.g., GPT, Gemini, LLaMA, Claude, etc.).
- Proficient in FastAPI, REST API development, and API security best practices.
- Solid understanding of RDBMS concepts and practical experience with MySQL.
- Experience with vector databases like ChromaDB, FAISS, or Weaviate.
- Familiarity with data handling, ETL pipelines, and model deployment practices.

Preferred Qualifications:
- Experience with Docker, Kubernetes, or cloud platforms (AWS, GCP, Azure).
- Exposure to MLOps pipelines and monitoring tools.
- Understanding of prompt engineering and retrieval-augmented generation (RAG).
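Because the role above combines FastAPI, REST APIs, and a vector database such as ChromaDB, here is a hedged sketch of a minimal semantic-search endpoint; the collection name, seed documents, and route are illustrative choices, not requirements from the posting.

```python
# Small FastAPI service backed by an in-memory ChromaDB collection.
import chromadb
from fastapi import FastAPI

app = FastAPI()
client = chromadb.Client()                       # in-memory; use PersistentClient in practice
collection = client.get_or_create_collection("knowledge_base")

# Seed a couple of documents (Chroma embeds them with its default embedding function).
collection.add(
    ids=["doc-1", "doc-2"],
    documents=[
        "FastAPI supports async endpoints and automatic OpenAPI docs.",
        "ChromaDB stores embeddings for semantic search over documents.",
    ],
)

@app.get("/search")
def search(q: str, k: int = 2):
    """Return the k most semantically similar documents for the query string."""
    result = collection.query(query_texts=[q], n_results=k)
    return {"query": q, "matches": result["documents"][0]}

# Run with: uvicorn app:app --reload   (then GET /search?q=semantic+search)
```

In a production setup the collection would be persisted, populated by an ETL pipeline, and fronted by authentication, which is where the MySQL and API-security requirements in the posting come in.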
Posted 4 days ago
5.0 - 7.0 years
0 Lacs
Ahmedabad, Gujarat, India
Remote
The Kardex Group is one of the world’s leading manufacturers of dynamic storage, retrieval and distribution systems. With over 2,500 employees worldwide, we develop and manufacture logistics solutions that are used in many different sectors such as industrial manufacturing, retail and administration. Kardex India Pvt Ltd is seeking a motivated self-starter to join our New Business Team in the role of Territory Sales Person, to be based remotely at Ahmedabad/Gujarat in India.

The purpose of the role is to:
- Develop the market for Kardex products and solutions in the West region of India
- Reach and exceed sales targets in the territory and relevant segments
- Create, qualify and develop leads and close sales according to the Kardex sales process
- Fully utilize the Kardex CRM tool to track all leads and opportunities
- Actively contribute to the growth of Kardex in the western part of the Indian market

Major tasks and responsibilities:

Targets
- Net Sales, Offers (value), Order intake (units/solutions/value)
- Others to be elaborated during the induction process

Customers
- Give proactive support to existing customers
- Identify and develop new customers for Kardex solutions
- Follow the Kardex industry segment focus and develop solutions in these targeted segments

Internal
- Forecast precision (Bookings/Net Sales)
- Deliver tasks within agreed time; defined reporting delivered on time
- Follow the Kardex sales process using the Miller Heiman sales methodology
- Report all sales activities via the Kardex CRM tool

Responsibilities
- Reach and exceed agreed sales volume
- Support and develop the territory in lead generation, qualification, and order intake
- Customer visits; develop solutions and value propositions for customers
- Offer making, contract and price negotiations
- Initiate and participate in business development projects
- Initiate, implement and follow up sales campaigns
- Reporting: Sales Force/CRM updated weekly; monthly forecasting with weekly forecast updates; other sales reporting as requested

Requirements
- Education: tertiary education in a related field
- Minimum 5-7 years of experience in intralogistics with high exposure to wholesale, retail, e-commerce, 3PL, electronics and/or bio-pharmaceutical industries
- Multi-year experience of high-level and complex B2B sales, with solution selling
- Commercial background with good technical understanding, or vice versa
- Formally trained in sales and key account management
- Experience in development and negotiation of complex contracts
- Good understanding of logistical processes and software-supported working processes
- Experienced in using a strategic selling framework such as Miller Heiman or SPIN Selling, and CRM tools, e.g. Salesforce
- Experienced in solution selling at a high level
- Creative and solution oriented; patient, persistent and enduring working style

Behaviours required to perform this role:
- Able to extract diagnostic data in order to ascertain the root cause of a reported fault
- Logical and forward thinking; logical approach to fault analysis/problems
- Able to follow laid-down procedures and policies
- Able to evaluate situations and respond appropriately
- Self-motivated, self-disciplined and maintains a positive attitude
- Able to cope with varying levels of stress and pressure
- Able to make decisions/judgements
- Able to work beyond working hours if required (on weekends and public holidays)
Posted 4 days ago
0 years
0 Lacs
Surat, Gujarat, India
On-site
Atologist Infotech is looking for an experienced and forward-thinking AI Engineer to help shape and scale our AI Agent Framework. This is a hands-on engineering role focused on building intelligent, modular agent systems that can reason, plan, and interact autonomously in real-world applications. You will work alongside a highly collaborative team to develop systems that leverage LLMs, contextual memory, and tool integration—delivering AI-native applications, not just traditional ML pipelines. Key Responsibilities AI Agent System Development Design and implement agent-oriented systems that support task decomposition, memory handling, and contextual planning. Framework Engineering Develop and extend our in-house AI Agent Framework with reusable components like tools, memory modules, and orchestration logic. Contextual Intelligence Build and integrate vector search, semantic retrieval, and memory systems to enable long-term, goal-driven agent behaviour. Natural Language Interfaces Connect LLMs (e.g., OpenAI, Anthropic) and NLU/NLP layers to enable natural task inputs and autonomous reasoning capabilities. API Engineering Develop RESTful APIs using FastAPI, Flask, or Django to expose agent capabilities and integrate with other products. System Architecture Design scalable, event-driven microservice architectures tailored for AI-native workloads and agent frameworks. Ethical AI Prioritize responsible AI design with user safety, transparency, and privacy at the core. Required Skills & Qualifications Education Bachelor’s or Master’s degree in Computer Science, Artificial Intelligence, or a related field. Python Proficiency Strong software engineering skills in Python (3.x) with deep understanding of async programming, architecture design, and typing. AI/LLM Experience Practical experience using LLMs for reasoning, prompt engineering, chaining, or building agent-like applications. Frameworks & Tools FastAPI, Flask, or Django LangChain, LlamaIndex, or other agent frameworks Vector stores (e.g., FAISS, Weaviate, Pinecone) PostgreSQL, Redis, and event queues (e.g., Celery, RabbitMQ) Version Control & Testing Proficient with Git workflows and writing tests using Pytest or similar. Bonus Skills (Nice-to-Have) Experience with autonomous agents, tool-calling, or memory-driven task systems Familiarity with cognitive architectures or symbolic reasoning Contributions to open-source AI frameworks or tooling
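The posting above describes agent systems built around tool use, memory, and planning. Below is a rough, self-contained sketch of a tool-dispatch loop of the kind such a framework might formalize; the LLM call is a stub and the single tool is invented for illustration, so this is not Atologist's actual framework.

```python
# Toy agent loop: ask the "LLM" what to do, run the requested tool, feed the
# observation back, and stop when a final answer is produced.
import json

def get_weather(city: str) -> str:
    """Placeholder tool; a real framework would register many such tools."""
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

def call_llm(messages):
    """Stub standing in for a real LLM call. A production agent would send
    `messages` to an LLM API and parse a tool call or final answer from it."""
    if any(m["role"] == "tool" for m in messages):
        return json.dumps({"answer": messages[-1]["content"]})
    return json.dumps({"tool": "get_weather", "arguments": {"city": "Surat"}})

def run_agent(task: str, max_steps: int = 3) -> str:
    memory = [{"role": "user", "content": task}]      # minimal conversational memory
    for _ in range(max_steps):
        decision = json.loads(call_llm(memory))
        if "tool" not in decision:                    # no tool requested: final answer
            return decision["answer"]
        observation = TOOLS[decision["tool"]](**decision["arguments"])
        memory.append({"role": "tool", "content": observation})
    return "max steps reached"

print(run_agent("What's the weather in Surat?"))      # -> Sunny in Surat
```

Production frameworks add what the posting lists explicitly: long-term vector memory, task decomposition, and orchestration across many tools.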
Posted 4 days ago
15.0 years
0 Lacs
Nagpur, Maharashtra, India
On-site
Job description Job Title: Tech Lead (AI/ML) – Machine Learning & Generative AI Location: Nagpur (Hybrid / On-site) Experience: 8–15 years Employment Type: Full-time Job Summary: We are seeking a highly experienced Python Developer with a strong background in traditional Machine Learning and growing proficiency in Generative AI to join our AI Engineering team. This role is ideal for professionals who have delivered scalable ML solutions and are now expanding into LLM-based architectures, prompt engineering, and GenAI productization. You’ll be working at the forefront of applied AI, driving both model performance and business impact across diverse use cases. Key Responsibilities: Design and develop ML-powered solutions for use cases in classification, regression, recommendation, and NLP. Build and operationalize GenAI solutions, including fine-tuning, prompt design, and RAG implementations using models such as GPT, LLaMA, Claude, or Gemini. Develop and maintain FastAPI-based services that expose AI models through secure, scalable APIs. Lead data modeling, transformation, and end-to-end ML pipelines, from feature engineering to deployment. Integrate with relational (MySQL) and vector databases (e.g., ChromaDB, FAISS, Weaviate) to support semantic search, embedding stores, and LLM contexts. Mentor junior team members and review code, models, and system designs for robustness and maintainability. Collaborate with product, data science, and infrastructure teams to translate business needs into AI capabilities. Optimize model and API performance, ensuring high availability, security, and scalability in production environments. Core Skills & Experience: Strong Python programming skills with 5+ years of applied ML/AI experience. Demonstrated experience building and deploying models using TensorFlow, PyTorch, scikit-learn, or similar libraries. Practical knowledge of LLMs and GenAI frameworks, including Hugging Face, OpenAI, or custom transformer stacks. Proficient in REST API design using FastAPI and securing APIs in production environments. Deep understanding of MySQL (query performance, schema design, transactions). Hands-on with vector databases and embeddings for search, retrieval, and recommendation systems. Strong foundation in software engineering practices: version control (Git), testing, CI/CD. Preferred/Bonus Experience: Deployment of AI solutions on cloud platforms (AWS, GCP, Azure). Familiarity with MLOps tools (MLflow, Airflow, DVC, SageMaker, Vertex AI). Experience with Docker, Kubernetes, and container orchestration. Understanding of prompt engineering, tokenization, LangChain, or multi-agent orchestration frameworks. Exposure to enterprise-grade AI applications in BFSI, healthcare, or regulated industries is a plus. What We Offer: Opportunity to work on a cutting-edge AI stack integrating both classical ML and advanced GenAI. High autonomy and influence in architecting real-world AI solutions. A dynamic and collaborative environment focused on continuous learning and innovation.
Posted 4 days ago
8.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Minimum qualifications: Bachelor's degree or equivalent practical experience. 8 years of experience with software development in one or more programming languages (e.g., Python, C, C++, Java, JavaScript). 3 years of experience in a technical leadership role, overseeing projects, with 3 years of experience in a people management, supervision/team leadership role. Preferred qualifications: Master’s degree or PhD in Engineering, Computer Science, or a related technical field. 3 years of experience working in a matrixed organization. Knowledge of areas like Machine Learning (ML), Search Infra, Search Features. About The Job Like Google's own ambitions, the work of a Software Engineer goes way beyond just Search. Software Engineering Managers have not only the technical expertise to take on and provide technical leadership to major projects, but also manage a team of engineers. You not only optimize your own code but make sure engineers are able to optimize theirs. As a Software Engineering Manager you manage your project goals, contribute to product strategy and help develop your team. Teams work all across the company, in areas such as information retrieval, artificial intelligence, natural language processing, distributed computing, large-scale system design, networking, security, data compression, user interface design; the list goes on and is growing every day. Operating with scale and speed, our exceptional software engineers are just getting started -- and as a manager, you guide the way. With technical and leadership expertise, you manage engineers across multiple teams and locations, a large product budget and oversee the deployment of large-scale projects across multiple sites internationally. Today, just 10% of Search in India are for Local user-needs, compared to 25% in US. This offers a huge headroom to grow commercial Search in India. India users are also adopting AI faster than other countries, offering an opportunity to build AI-powered capabilities that can leapfrog the current gaps in Local and Travel Search. The Real World Journeys India team is on a multi-year goal to make Local and Travel Search the exceptional experience for every user in India. In Google Search, we're reimagining what it means to search for information – any way and anywhere. To do that, we need to solve complex engineering challenges and expand our infrastructure, while maintaining a universally accessible and useful experience that people around the world rely on. In joining the Search team, you'll have an opportunity to make an impact on billions of people globally. Responsibilities Manage a team of Software Engineers and offer clarity with planning, designs, and execution. Mentor/guide the team-members with timely, actionable feedback and help them grow. Be responsible for large engineering and organizational problems that span multiple teams/components and systematically lead the team to solve them. Collaborate across multiple teams and functions, driving alignment, prioritization, execution, decisions. Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. 
If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.
Posted 4 days ago
0 years
0 Lacs
India
Remote
Location: Remote / Bangalore (Hybrid) Company: Trovex.ai Trovex is an AI-powered simulation platform helping sales and support teams train faster and better through real-time conversational roleplays. We work with large enterprises like ICICI Bank, Shriram Finance, and Bajaj Allianz to transform training outcomes using cutting-edge generative AI. Role Overview We’re looking for a Prompt Engineer with strong common sense, attention to detail, and a deep curiosity about how language models work. You’ll be responsible for designing, refining, and testing prompts that power our AI roleplay engine, ensuring accurate, human-like, and context-aware conversations. Freshers are welcome — especially if you’ve taken relevant courses or built side projects around prompt engineering or LLMs (like OpenAI, Claude, Gemini, etc.). Key Responsibilities Design and iterate prompts that guide LLMs to simulate realistic user behavior in sales/support scenarios Evaluate LLM outputs for accuracy, tone, and consistency Collaborate with product, UX, and training teams to translate use-cases into prompt structures Fine-tune or chain prompts for multi-turn interactions Create prompt templates for different industries, use-cases, and languages Maintain prompt libraries, test cases, and output benchmarks What We’re Looking For Strong written communication and critical thinking Familiarity with GPT-4, Claude, or similar LLMs (via OpenAI Playground, ChatGPT, etc.) Practical understanding of few-shot, zero-shot, and chain-of-thought prompting Demonstrated interest in AI or NLP (certifications, online courses, or projects) High attention to detail and a “what could go wrong” mindset Bonus: Experience with tools like LangChain, PromptLayer, or prompt evaluation frameworks Nice to Have (but Not Required): Understanding of customer service/sales processes Exposure to Python or JSON for prompt automation Knowledge of RAG or embeddings-based retrieval (a plus, not mandatory) Why Join Us? Work on a real-world AI product used by India’s top BFSI and enterprise brands Flexible, fast-paced startup culture with lots of ownership Learn from a seasoned product and AI team Opportunity to grow into product, NLP, or conversation design roles
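Since the role above centers on few-shot and chain-of-thought prompting for roleplay scenarios, here is a small, purely illustrative prompt-template sketch in Python; the scenario text, example exchange, and function names are invented and not part of Trovex's product.

```python
# Few-shot, chain-of-thought style prompt template for a support roleplay.
FEW_SHOT_EXAMPLES = """\
Customer: I'm not sure this insurance plan is worth the premium.
Reasoning: The customer doubts value for money, so acknowledge the concern,
then quantify the benefit before asking a follow-up question.
Agent: I understand the premium feels high. Given what the plan covers,
may I ask what monthly amount would feel comfortable for you?
"""

def build_prompt(scenario: str, customer_message: str) -> str:
    """Assemble a roleplay prompt: instructions, scenario, examples, new turn."""
    return (
        "You are simulating a customer-support roleplay for trainee agents.\n"
        f"Scenario: {scenario}\n\n"
        "Follow the pattern below: reason step by step, then reply.\n\n"
        f"{FEW_SHOT_EXAMPLES}\n"
        f"Customer: {customer_message}\n"
        "Reasoning:"
    )

prompt = build_prompt(
    scenario="Customer is upset about a delayed loan disbursal.",
    customer_message="It's been two weeks and my loan still hasn't been credited.",
)
print(prompt)  # send this string to the chosen LLM (GPT-4, Claude, etc.) via its API
```

Templates like this are what a prompt library would version, test, and benchmark across industries and languages.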
Posted 4 days ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
At TekLink HGS Digital, our vision is to be the globally preferred digital transformation partner for our clients, creating value in their business through rigorous innovation at scale. We are an expert team of 500+ leading strategic thinkers, digital marketing and creative masters, data analysts, software engineers, and process optimization specialists with an elemental desire to create transformative digital solutions.

Job Title: Data Scientist
Location: Hyderabad, India
Duration: Full time

The Data Scientist will support our internal teams and clients in driving strategic decisions, applying advanced statistical and predictive analytics and machine learning concepts to solve business problems in the BFSI and CPG domains. You will also prepare requirements documents, contribute towards the project plan, carry out data research and collection, study attributes and features, test parameters, resolve data issues, decide on models, carry out modeling and QA/testing, and showcase the findings in various formats for client consumption.

Responsibilities:
a) Analytics Requirements Definition: Works with business users to approve the requirements for the analytics solution.
b) Data Preparation: Reviews data preparation rules (data extraction, data integration, data granularity, data cleansing, etc.). Prepares data for analytical modelling. Guides data analysts and associate data scientists on data preparation activities.
c) Builds Machine Learning (ML) and statistical models using Python/R/Scala/SAS/SPSS.
d) Collaborates with clients and internal teams to define industry-leading analytics solutions for a wide variety of industries and business groups.
e) Develops proof-of-concepts and demos needed for client and internal presentations.
f) Creates clear functional and technical documentation.
g) Works agnostically across multiple industry sectors and functional domains, with a focus on the BFSI and CPG domains.
h) Works closely with all stakeholders to identify, evaluate, design, and implement statistical and other quantitative approaches for modeling enterprise-scale data and big data.
i) Displays proficiency in converting algorithmic proof of concepts into business requirement documents for product development or data-driven actionable intelligence.

Minimum Requirements & Qualifications
The ideal candidate should have:
• A full-time degree in Mathematics, Statistics, Computer Science or Computer Applications from reputed institutions, B.E./B.Tech., or an MBA specialized in Marketing, Operations Research, Data Science and/or Business Analytics
• Overall 8+ years of technical experience in the IT industry across the BFSI and CPG domains
• Minimum of 5 years of hands-on work experience in Data Science/Advanced Analytics and Machine Learning using Python and SQL
• Practical experience specifically around quantitative and analytical skills is required
• People management skills and experience, and familiarity with the pharmaceutical industry, are preferred
• Knowledge of solution design, planning, and execution
• Contribute to case studies, blogs, eBooks, and whitepapers
• Proficiency in maintaining strong project documentation hygiene
• Able to fully assimilate into automated MLOps mode
• Must have good communication skills – written, oral, PPT and language skills
o Able to translate statistical findings to business English
• Hands-on experience in one or more of the skillsets below:
o Programming Language: R Programming, Base SAS, Advanced SAS
o Visualization Tool: Tableau, MS Excel, think-cell, Power BI, Qlik Sense
o Automation Tool: VBA Macro, Python scripts
• Basic understanding of NLP/NLU/NLG and text mining
• Skills/knowledge of advanced ML techniques with image processing and signal processing is a plus
• GenAI and multimodal GenAI skills with RAG development and fine-tuning
• Sound statistical training in linear and non-linear regression, weighted regression, clustering, and classification techniques
• Sound understanding of applied statistical methods including survival analysis, categorical data analysis, time series analysis and multivariate statistics
• Introduction to classical statistics, including concepts in Bayesian statistics, experimental design and inference theory
• Practical understanding of concepts in computer vision, data mining, machine learning, information retrieval, pattern recognition and knowledge discovery
• Additional knowledge in WFM, biological learning systems and modern statistical concepts is a plus
• Knowledge of IoT devices and solutions with multi-sensor data fusion is a plus
• Knowledge of Geostatistics, information theory, computational statistics is a plus
• Experience in character recognition with image, speech, and video analytics capabilities is a plus
• Working knowledge of or certifications in AWS/Azure/GCP is beneficial
Posted 4 days ago
8.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Elevate Your Impact Through Innovation and Learning Evalueserve is a global leader in delivering innovative and sustainable solutions to a diverse range of clients, including over 30% of Fortune 500 companies. With a presence in more than 45 countries across five continents, we excel in leveraging state-of-the-art technology, artificial intelligence, and unparalleled subject matter expertise to elevate our clients' business impact and strategic decision-making. Our team of over 4,500 talented professionals operates in countries such as India, China, Chile, Romania, the US, and Canada. Our global network also extends to emerging markets like Colombia, the Middle East, and the rest of Asia-Pacific. Recognized by Great Place to Work® in India, Chile, Romania, the US, and the UK in 2022, we offer a dynamic, growth-oriented, and meritocracy-based culture that prioritizes continuous learning and skill development and work-life balance. About Innovation and Technology Centre (ITC) Our Innovation and Technology Centre specializes in creating modular solutions to meet clients' specific needs. As a member of the team, you will have the opportunity to develop and implement digital products, platforms, and tools specifically designed for research, analytics, and data management domains. We are at the forefront of technological advancements, and our AI efforts rely on our larger platform called AI for Research and Analytics (AIRA). What you will be doing at Evalueserve • Building and implementing advanced machine learning (ML) and deep learning (DL) algorithms and models. • Applying different natural language processing (NLP) techniques to problems such as text classification, text summarization, questions and answers, information retrieval, knowledge extraction, and design of conversational bots by using traditional and generative AI techniques • Contributing to the design and development of enterprise-grade generative AI applications, including but not limited to advanced RAG, VDB optimization, LLM evaluation, and finetuning. • Designing and developing a practical and analytical approach while maintaining a focus on aspects such as data quality and availability, feasibility, scalability, and turnaround time. What we’re looking for • At least a bachelor’s /master`s degree in computer science, information systems, statistics, mathematics, or a related field • Strong understanding of data science processes, such as data investigation, cleaning, minimal viable models, and nuances related to the deployment and enhancement • About 8-12 years of experience in NLP, ML, and DL • More than one year of experience in generative AI application development at the production level. • Demonstrated ability in developing NLP / ML / DL / generative AI project solutions • Hands-on experience and deep theoretical expertise in NLU, NLP, NLG, and common NLP algorithms and DL architectures such as Transformers, BERT, word2vec, Fast Text, and ELMO • Hands-on experience in building and handling product-level delivery of knowledge graphs, ranking algorithms, recommendation engines, etc. • In-depth knowledge and experience in handling open-source frameworks, such as TensorFlow, PyTorch, and Hugging face Transformers. • Expert-level programming experience in Python / C++ • Familiarity with general software design concepts, product development lifecycles, and ML model deployment best practices. 
• Experience in analyzing large amounts of user-generated content and processing data in large-scale environments by using cloud infrastructure.
• Proficiency in scraping data from external sources and developing architectures to store and organize the information for generating insights.
• Experience in contributing to open-source software projects.
• Experience and tenacity to go beyond available tools/techniques to design solutions in line with product requirements.
• Ability to communicate with internal and external stakeholders and convey complex information clearly and concisely.
To know more, follow us on https://www.linkedin.com/company/evalueserve/ and read about our AI-powered supply chain optimization solution built on Google Cloud, how Evalueserve is leveraging NVIDIA NIM to enhance our AI and digital transformation solutions and accelerate AI capabilities, and how Evalueserve has climbed 16 places on the “50 Best Firms for Data Scientists in 2024”.
Disclaimer: The following job description serves as an informative reference for the tasks you may be required to perform. However, it does not constitute an integral component of your employment agreement and is subject to periodic modifications to align with evolving circumstances.
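As a small illustration of the transformer-based NLP tasks mentioned above (text classification being one), the snippet below runs a Hugging Face pipeline with a public sentiment model; the model choice and sample texts are illustrative only and not specific to this role.

```python
# Transformer-based text classification via the transformers pipeline API.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",  # public sentiment model
)

reviews = [
    "The onboarding flow was smooth and intuitive.",
    "Support never replied to my ticket.",
]
for review, result in zip(reviews, classifier(reviews)):
    print(f"{result['label']:>8}  {result['score']:.2f}  {review}")
```

The same pipeline pattern extends to summarization and question answering, two of the other tasks named in the posting.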
Posted 4 days ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
What you'll do: Partner closely with our sales team to architect custom GenAI solutions that meet client requirements Design and implement AI workflows using RAG (Retrieval-Augmented Generation) architectures Work with Large Language Models (LLMs) to solve complex business challenges Translate technical capabilities into compelling client presentations and proposals Guide end-to-end solution development from concept to deployment What we're looking for: Deep expertise in GenAI technologies, including RAG, vector databases, and LLM fine-tuning Strong background in solution architecture and system design Experience collaborating with sales teams to win technical deals Understanding of AI model deployment, scaling, and optimization Excellent communication skills to bridge technical and business stakeholders Experience with cloud platforms (AWS/Azure/GCP) and AI/ML frameworks Bonus points for: Previous experience in pre-sales or solution consulting Knowledge of enterprise AI governance and compliance Background in prompt engineering and AI agent development
Posted 4 days ago
2.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About the Role We are looking for a talented LLM & Backend Engineer to join our AI innovation team at EaseMyTrip.com and help power the next generation of intelligent travel experiences. In this role, you will lead the integration and optimization of Large Language Models (LLMs) to create conversational travel agents that can understand, recommend, and assist travelers across platforms. You will work at the intersection of backend systems, AI models, and natural language understanding, bringing smart automation to every travel interaction. Key Responsibilities: LLM Integration: Deploy and integrate LLMs (e.g., GPT-4, Claude, Mistral) to process natural language queries and deliver personalized travel recommendations. Prompt Engineering & RAG: Design optimized prompts and implement Retrieval-Augmented Generation (RAG) workflows to enhance contextual relevance in multi-turn conversations. Conversational Flow Design: Build and manage robust conversational workflows capable of handling complex travel scenarios such as booking modifications and cancellations. LLM Performance Optimization: Tune models and workflows to balance performance, scalability, latency, and cost across diverse environments. Backend Development: Develop scalable, asynchronous backend services using FastAPI or Django, with a focus on secure and efficient API architectures. Database & ORM Design: Design and manage data using PostgreSQL or MongoDB, and implement ORM solutions like SQLAlchemy for seamless data interaction. Cloud & Serverless Infrastructure: Deploy solutions on AWS, GCP, or Azure using containerized and serverless tools such as Lambda and Cloud Functions. Model Fine-Tuning & Evaluation: Fine-tune open-source and proprietary LLMs using techniques like LoRA and PEFT, and evaluate outputs using BLEU, ROUGE, or similar metrics. NLP Pipeline Implementation: Develop NLP functionalities including named entity recognition, sentiment analysis, and dialogue state tracking. Cross-Functional Collaboration: Work closely with AI researchers, frontend developers, and product teams to ship impactful features rapidly and iteratively. Preferred Candidate Profile: Experience: Minimum 2 years in backend development with at least 1 year of hands-on experience working with LLMs or NLP systems. Programming Skills: Proficient in Python with practical exposure to asynchronous programming and frameworks like FastAPI or Django. LLM Ecosystem Expertise: Experience with tools and libraries such as LangChain, LlamaIndex, Hugging Face Transformers, and OpenAI/Anthropic APIs. Database Knowledge: Strong understanding of relational and NoSQL databases, including schema design and performance optimization. Model Engineering: Familiarity with prompt design, LLM fine-tuning (LoRA, PEFT), and evaluation metrics for language models. Cloud Deployment: Comfortable working with cloud platforms (AWS/GCP/Azure) and building serverless or containerized deployments. NLP Understanding: Solid grasp of NLP concepts including intent detection, dialogue management, and text classification. Problem-Solving Mindset: Ability to translate business problems into AI-first solutions with a user-centric approach. Team Collaboration: Strong communication skills and a collaborative spirit to work effectively with multidisciplinary teams. Curiosity and Drive: Passionate about staying at the forefront of AI and using emerging technologies to build innovative travel experiences.
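Among the skills listed above is fine-tuning LLMs with techniques like LoRA and PEFT. The fragment below is a hedged sketch of attaching LoRA adapters to a small causal language model with the peft library; the base model name, target modules, and hyperparameters are illustrative, and the training loop itself is omitted.

```python
# Attach LoRA adapters to a causal LM so only small adapter matrices are trained.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model_name = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"   # small public model, for illustration

tokenizer = AutoTokenizer.from_pretrained(base_model_name)
model = AutoModelForCausalLM.from_pretrained(base_model_name)

lora_config = LoraConfig(
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=16,                        # scaling factor applied to the update
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()        # only the adapter weights are trainable
# From here, train with transformers.Trainer (or a similar trainer) on domain data.
```

Because only the adapters are updated, this keeps fine-tuning cheap enough to run per domain (for example, travel-booking dialogues) and to evaluate with metrics like BLEU or ROUGE as the posting suggests.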
Posted 4 days ago
6.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
About the company
Lexitas is a high-growth company. The Company is built on a belief that having strong personal relationships with our clients, and providing reliable, accurate and professional services, is the driving force of our success. Lexitas offers an array of services including local and national court reporting, medical record retrieval, process service, registered agent services and legal talent outsourcing. Our reach is truly national as well as international. Lexitas is an MNC that has set up a subsidiary in Chennai, India – Lexitas India Pvt. Ltd. This Indian company will be the Lexitas Global Capability Center, helping build a world-class IT development team, and over time serve as a Shared Services hub for several of the corporate functions. For more information - https://www.lexitaslegal.com

This is a full-time job located in Chennai, India.

Roles and Responsibilities:
- Leading the design and development of advanced Power BI reports and dashboards
- Providing guidance on data modeling and DAX calculations
- Collaborating with stakeholders to define data requirements
- Ensuring data security and compliance
- Troubleshooting and optimizing Power BI solutions

Required Skills & Experience:
- 6 to 8+ years of experience working with reporting tools
- 3 to 5+ years of experience as a Power BI hands-on developer
- Extensive experience in developing and optimizing complex Power BI solutions
- Proficiency in SQL and data warehouse concepts
- Experience in developing, debugging, and writing complex MS SQL queries
- Experience with data pipeline orchestration and automation
- Expertise in performance tuning and optimization of Power BI reports and SQL queries
- Ability to architect end-to-end BI solutions, including data ingestion, storage, and presentation layers, is a plus
- Strong communication skills, including the ability to lead cross-functional teams
- Ability to manage complex projects and deliver results
- Certifications in Power BI are highly desirable
- Understanding of Cloud and Azure Fabric

Qualifications: A bachelor's degree in computer science is required; a master's degree is preferred. 8+ years of proven experience.
Posted 4 days ago
4.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Hi All, Greetings from Live Connections! We have an urgent requirement for a Gen AI + GCP (Immediate Joiners) role with one of our MNC clients in Mumbai, MH. Please find the job description below and kindly share your updated CV to sharmila@liveconnections.in.

Position Title: Gen AI + GCP (Immediate Joiners)
Experience Level: 4-9 Years
Duration: Full Time
Location: Pune, MH
Notice Period: Immediate to 15 Days
Work Mode: Work from Office
Budget: up to 27 LPA

Overview: Generative AI Engineer + GCP

Responsibilities:
- Design and fine-tune LLMs (Large Language Models) for BFSI use cases: intelligent document processing, report generation, chatbots, advisory tools.
- Evaluate and apply prompt engineering, retrieval-augmented generation (RAG), and fine-tuning methods.
- Implement safeguards, red-teaming, and audit mechanisms for LLM usage in BFSI.
- Work with data privacy, legal, and compliance teams to align GenAI outputs with industry regulations.
- Collaborate with enterprise architects to integrate GenAI into existing digital platforms.

Regards,
Sharmila
sharmila@liveconnections.in
Posted 4 days ago
4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
At eBay, we're more than a global ecommerce leader — we’re changing the way the world shops and sells. Our platform empowers millions of buyers and sellers in more than 190 markets around the world. We’re committed to pushing boundaries and leaving our mark as we reinvent the future of ecommerce for enthusiasts. Our customers are our compass, authenticity thrives, bold ideas are welcome, and everyone can bring their unique selves to work — every day. We're in this together, sustaining the future of our customers, our company, and our planet. Join a team of passionate thinkers, innovators, and dreamers — and help us connect people and build communities to create economic opportunity for all. Do you love Big Data? Deploying Machine Learning models? Challenging optimization problems? Knowledgeable, collaborative co-workers? Come work at eBay and help us redefine global, online commerce! Who Are We? The Product Knowledge team is at the epicenter of eBay’s Tech-driven, Customer-centric overhaul. Our team is entrusted with creating and using eBay’s Product Knowledge - a vast Big Data system which is built up of listings, transactions, products, knowledge graphs, and more. Our team has a mix of highly proficient people from multiple fields such as Machine Learning, Data Science, Software Engineering, Operations, and Big Data Analytics. We have a strong culture of collaboration, and plenty of opportunity to learn, make an impact, and grow! What Will You Do We are looking for exceptional Engineers, who take pride in creating simple solutions to apparently-complex problems. Our Engineering tasks typically involve at least one of the following: Building a pipeline that processes up to billions of items, frequently employing ML models on these datasets Creating services that provide Search or other Information Retrieval capabilities at low latency on datasets of hundreds of millions of items Crafting sound API design and driving integration between our Data layers and Customer-facing applications and components Designing and running A/B tests in Production experiences in order to vet and measure the impact of any new or improved functionality If you love a good challenge, and are good at handling complexity - we’d love to hear from you! eBay is an amazing company to work for. Being on the team, you can expect to benefit from: A competitive salary - including stock grants and a yearly bonus A healthy work culture that promotes business impact and at the same time highly values your personal well-being Being part of a force for good in this world - eBay truly cares about its employees, its customers, and the world’s population, and takes every opportunity to make this clearly apparent Job Responsibilities Design, deliver, and maintain significant features in data pipelines, ML processing, and / or service infrastructure Optimize software performance to achieve the required throughput and / or latency Work with your manager, peers, and Product Managers to scope projects and features Come up with a sound technical strategy, taking into consideration the project goals, timelines, and expected impact Take point on some cross-team efforts, taking ownership of a business problem and ensuring the different teams are in sync and working towards a coherent technical solution Take active part in knowledge sharing across the organization - both teaching and learning from others Minimum Qualifications Passion and commitment for technical excellence B.Sc. or M.Sc. 
in Computer Science or equivalent professional experience
- 4+ years of software design and development experience, tackling non-trivial problems in backend services and/or data pipelines
- Solid foundation in Data Structures, Algorithms, Object-Oriented Programming, and Software Design
- Experience in production-grade coding in Java and Python/Scala
- Experience in designing and operating Big Data processing pipelines, such as Hadoop and Spark
- Experience handling complexities in production software systems
- Excellent verbal and written communication and collaboration skills

Please see the Talent Privacy Notice for information regarding how eBay handles your personal data collected when you use the eBay Careers website or apply for a job with eBay. eBay is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, sex, sexual orientation, gender identity, veteran status, and disability, or other legally protected status. If you have a need that requires accommodation, please contact us at talent@ebay.com. We will make every effort to respond to your request for accommodation as soon as possible. View our accessibility statement to learn more about eBay's commitment to ensuring digital accessibility for people with disabilities.
Posted 4 days ago
0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Job Description

Accountabilities & Responsibilities

1. Exports Deal/Trades evaluation and documentation
- Evaluate Product Exports deals and ensure sign-off from the authorised signatories; maintain records of the same
- Check the deals entered in the system (Bulldog)

2. Documentation of Pre-finance Agreements
- Preparation of documents like the approval note, LOA, and other documents required under the Conditions Precedent of the Advance Agreement
- Ensure timely arrangement of approvals and required documents
- Interaction with the legal and secretarial teams and the buyer to complete the transaction
- Review of agreements – exports and advance agreements

3. Contract Vetting
- Verify export contracts based on deals concluded; ensure contracts contain standard clauses
- Verify agreements for subscription to data feeds, publications, etc.

4. Exports Pre-Shipment Activities
- Preparation of Proforma Invoice and pricing for customs clearance in a timely manner
- Vessel registration in the system
- Scrutiny of Letters of Credit and comments to buyers
Exports Post-Shipment Activities
- Validating export values in the system after completion of shipment
- Preparation of Provisional Invoice for claiming payment
- Make export documents available to Corporate Finance for discounting/negotiating in minimum turnaround time after completion of shipment
- Follow up with customers for export realisation as per due date
- Preparation of final invoice and reconciling with the system

5. IST related activities
- Compilation of the annual budget of IST based on data received from all desks; review and maintain
- Process all IST-related service invoices for accounting and payment; maintain MIS of all such services

6. Meeting the deadlines relating to month-end closing activities
- Provisions relating to demurrage claims, IST-related service expenses, and expenses incurred by IST for the vessel chartered for Domestic Marketing
- Accounting entries for demurrage claim provisions and reversals
- Interest calculations and accounting for the outstanding receivables of export buyers

7. Claims verification and processing
- Verify the demurrage claims raised by buyers on product export shipments
- Ensure sign-off/approvals of authorised signatories as per SOP/DOA and timely accounting and payment

8. Ensure smooth closure of Audit requirements
- Provide data and documents to auditors and clarifications to their queries for quarterly, half-yearly and annual audits

9. Other Activities
- Preparing and monitoring the Daily Product Exports Report for shipment execution and payment realisation
- Preparation of various MIS as required by the Management; providing data for internal/external requirements; monthly MIS to EPS, consolidation team
- Debtors reconciliation and ageing analysis
- Preparation of notes/workflows and PPTs as and when required
- SAP customer master creation
- Accounting of differential invoicing against all export shipments during the month
- Effective storage and retrieval of documents in the DMS system
- Processing of vendor payments pertaining to subscriptions and various data feeds
- Monitoring system interface to ensure details are recorded properly

Qualifications: CA
Posted 4 days ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Tezo is a new generation Digital & AI solutions provider, with a history of creating remarkable outcomes for our customers. We bring exceptional experiences using cutting-edge analytics, data proficiency, technology, and digital excellence. Tezo is seeking passionate AI Engineers who are excited about harnessing the power of Generative AI to transform our company and provide cutting-edge solutions for our clients. Join us in revolutionizing enterprises by building intelligent, generative solutions that leverage AI/ML. If you've ever dreamed of contributing to impactful projects on a large scale, this is the opportunity for you! In this role, you will be an integral part of the Machine Learning Platforms/Data Science team, focusing on developing, testing, and deploying generative AI models.
What Makes Our AI/ML Practice Unique: Purpose-driven: We actively respond to our customers' evolving needs with innovative solutions. Collaborative: We foster a positive and engaging work environment where collective ideas thrive. Accountable: We take ownership of our performance, both individually and as a team. Service Excellence: We maximize our potential through continuous learning and improvement. Trusted: We empower individuals to make informed decisions and take calculated risks.
Job Summary: We are looking for a dedicated Lead Data Scientist with a strong background in Generative AI to join our team. You will support product, leadership, and client teams by providing insights derived from advanced data analysis and generative modeling. In this role, you will collaborate closely with the development team, architects, and product owners to build efficient generative models and manage their lifecycle using the appropriate technology stack.
Core Requirements: At least 6 years of experience working with geographically distributed teams. 2+ years of experience working in a client-facing role on AI/ML. Demonstrable experience in leading a substantive area of work, or line management of a team. Proven experience in building production-grade Retrieval-Augmented Generation (RAG) solutions, with hands-on experience in advanced RAG techniques for retrieval, re-ranking, etc. Experience building GenAI applications using LangChain and LlamaIndex, and familiarity with vector stores and Large Language Models. Experience in fine-tuning Large Language Models (LLMs) for business use cases is preferred. Minimum of 4 years of experience in developing end-to-end classical machine learning and NLP projects. Demonstrated experience in deploying ML solutions in production using cloud services such as Azure and AWS. Business understanding, stakeholder management, and team leadership skills. Strong practical expertise in Python and SQL needed for data science projects.
Join us at Tezo to be part of a dynamic team committed to driving innovation through Generative AI solutions!
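By way of illustration only, here is a minimal, framework-free sketch of the retrieve-then-generate loop behind a RAG application; the embed() and generate() functions, the sample documents, and the 384-dimension vectors are placeholders assumed for the example (a production build would typically use LangChain or LlamaIndex with a real vector store and embedding model, as the posting describes).

    import numpy as np

    def embed(text: str) -> np.ndarray:
        """Hypothetical embedding function; replace with a real embedding model."""
        rng = np.random.default_rng(abs(hash(text)) % (2**32))
        v = rng.standard_normal(384)
        return v / np.linalg.norm(v)

    def generate(prompt: str) -> str:
        """Hypothetical LLM call; replace with a real completion endpoint."""
        return f"[LLM answer conditioned on a prompt of {len(prompt)} chars]"

    documents = [
        "Invoices are payable within 30 days of the billing date.",
        "Support tickets are triaged within four business hours.",
        "Refunds above 500 USD require manager approval.",
    ]
    index = np.stack([embed(d) for d in documents])  # toy in-memory vector store

    def answer(question: str, top_k: int = 2) -> str:
        q = embed(question)
        scores = index @ q                      # cosine similarity (vectors are unit-norm)
        top = np.argsort(scores)[::-1][:top_k]  # retrieve the most similar chunks
        context = "\n".join(documents[i] for i in top)
        prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
        return generate(prompt)

    print(answer("How fast are support tickets handled?"))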
Posted 4 days ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Hello Connections,
Our client is a prominent Indian multinational corporation specializing in information technology (IT), consulting, and business process services. It is headquartered in Bengaluru, has gross revenue of ₹222.1 billion and a global workforce of 234,054, is listed on NASDAQ, operates in over 60 countries, and serves clients across various industries, including financial services, healthcare, manufacturing, retail, and telecommunications. The company consolidated its cloud, data, analytics, AI, and related businesses under the tech services business line. It has major delivery centers in India, including cities like Chennai, Pune, Hyderabad, Bengaluru, Kochi, Kolkata, and Noida.
Job Title: Mainframe COBOL Developer
· Location: Bangalore, Chennai, Hyderabad, Noida, Coimbatore
· Experience: 5+ to 9 years (relevant Mainframe COBOL Developer experience: 5 years)
· Job Type: Contract to hire
Work Mode: Hybrid (3 days)
Office Timing: 1.00 PM to 10.30 PM
· Notice Period: Immediate joiners (who can join in the 3rd week of July)
Mandatory Skills: Mainframe COBOL Developer, JCL, DB2, VSAM, CICS (relevant Mainframe COBOL Developer experience: 6 years)
Need to work from 1.00 PM to 10.30 PM IST
Agile Software Development (typically Scrum, Kanban, SAFe)
Banking Domain: at least 4 years of experience
L1 Virtual interview - 28th June (Saturday)
Roles and Responsibilities:
COBOL Programming: Develop, maintain, and enhance COBOL applications that meet business requirements. Write, test, and debug COBOL code for high-performance batch and online processing. Modify and update existing COBOL applications to improve efficiency or add new features.
JCL (Job Control Language): Create and maintain JCL scripts to manage batch jobs for data processing. Ensure that JCL is optimized for job scheduling, monitoring, and error handling. Troubleshoot and resolve JCL-related issues that impact batch processing.
DB2 (Database): Design and develop DB2 queries to interact with databases, ensuring optimal performance. Integrate COBOL programs with DB2 for data retrieval, insertion, and updating. Ensure database integrity and handle SQL optimization for large-scale banking transactions.
VSAM (Virtual Storage Access Method): Work with VSAM files to store and retrieve data efficiently. Ensure that COBOL programs interact seamlessly with VSAM files. Perform file management tasks such as creating, deleting, and maintaining VSAM datasets.
CICS (Customer Information Control System): Develop and maintain CICS-based applications, ensuring seamless communication between online programs and data resources. Optimize transaction processing in a CICS environment, focusing on real-time banking applications. Debug and resolve any issues related to CICS transactions, ensuring minimal downtime.
Agile Methodology: Participate in Agile ceremonies, including daily standups, sprint planning, and retrospectives. Collaborate with cross-functional teams to deliver features incrementally and meet sprint goals. Ensure timely delivery of COBOL-based solutions within Agile sprints.
Banking Domain Knowledge: Develop software that aligns with banking regulations, business processes, and security standards. Ensure data accuracy and consistency in financial transactions, account management, and payment systems. Stay informed about changes in the banking domain and ensure the software complies with industry standards and regulations.
Testing and Documentation: Write unit tests and perform thorough testing of COBOL programs, ensuring high-quality output. Document code, workflows, and processes for future reference and regulatory purposes. Provide clear documentation for troubleshooting, maintenance, and knowledge sharing.
Performance Optimization: Analyze the performance of COBOL applications and optimize them for speed and efficiency, particularly for high-volume banking transactions. Identify and resolve bottlenecks in the system.
Collaboration and Communication: Work closely with business analysts, project managers, and other developers to understand business needs and translate them into technical solutions. Communicate effectively with stakeholders to manage expectations and provide updates on project progress.
Posted 4 days ago
3.0 - 5.0 years
0 Lacs
Greater Kolkata Area
On-site
Role Overview
We are looking for an AI/ML Engineer with 3 to 5 years of experience in designing, developing, and deploying AI-powered applications. The ideal candidate should have strong hands-on experience in LLMs, prompt engineering, reasoning workflows (e.g., Chain-of-Thought), and agentic AI frameworks, alongside a solid foundation in machine learning.
Key Responsibilities
Fine-tune reasoning-based LLMs using Chain-of-Thought datasets and techniques. Utilize multiple LLMs (ChatGPT, Claude, Gemini, etc.) and understand their strengths and limitations. Optimize LLM interactions through advanced prompt engineering, few-shot CoT, and iterative refinement. Leverage Retrieval-Augmented Generation (RAG) to combine external context with LLM reasoning. Use platforms like Hugging Face/GitHub to manage, fine-tune, and deploy pretrained models. Implement agentic AI workflows (LangGraph, CrewAI, or custom orchestration systems). Develop scalable backend architectures using FastAPI, Uvicorn, Celery, and async pipelines. Build intelligent automation pipelines involving scraping, entity extraction, and enrichment. Apply best practices in LLM engineering, including: model training workflows (train/val/test splits, dataset curation); fine-tuning and instruction-tuning with open-source models; and reasoning task evaluation (factuality, coherence, step accuracy). Stay on the cutting edge of AI advancements, from open-weight LLMs to novel prompting strategies.
Required Skills & Expertise
Excellent communication skills: ability to articulate AI concepts clearly. Hands-on experience in: prompt engineering for LLM-based applications; agentic AI frameworks (LangGraph, CrewAI, or custom-built); Chain-of-Thought reasoning, few-shot learning, and task decomposition; Retrieval-Augmented Generation (RAG) pipelines; web scraping tools (Playwright, Octoparse, BeautifulSoup); system architecture (FastAPI, Uvicorn, Celery, async queues); and AI/ML platforms (Hugging Face, Transformers, GitHub-based models). ML frameworks: PyTorch (preferred), TensorFlow, or Keras. Core fundamentals: data structures, statistics, linear algebra, and optimization. Bonus: familiarity with reasoning datasets (GSM8K, HotpotQA, OpenBookQA), experience with evaluation metrics for generative models (BLEU, ROUGE, EM, custom validators), and comfort working across APIs, scraping flows, and backend systems.
Required Background
3 to 5 years of work experience in AI/ML, backend engineering, or NLP-heavy roles. A portfolio of real-world projects: shipped products, agents, scrapers, or research tools.
Why Join Us?
Be part of a stealth-stage, AI-first company solving real-world problems. Build reasoning-aware systems powered by LLMs and automation. Work at the intersection of trend intelligence, agents, and research. Ship fast, own what you build, and have a voice in product direction. Competitive salary. If you’re passionate about building smart systems that think, we’d love to connect.
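Since the role focuses on few-shot Chain-of-Thought prompting, here is a small illustrative sketch of how such a prompt can be assembled in Python; the worked examples and the call_llm() placeholder are assumptions made for the example and are not part of any real pipeline.

    FEW_SHOT_EXAMPLES = [
        {
            "question": "A team ships 3 features per sprint. How many in 4 sprints?",
            "reasoning": "Each sprint yields 3 features, so 4 sprints yield 3 * 4 = 12.",
            "answer": "12",
        },
        {
            "question": "If 2 of 10 tickets are bugs, what fraction are bugs?",
            "reasoning": "2 out of 10 simplifies to 1/5.",
            "answer": "1/5",
        },
    ]

    def build_cot_prompt(question: str) -> str:
        """Compose a few-shot CoT prompt: worked examples first, then the new question."""
        parts = []
        for ex in FEW_SHOT_EXAMPLES:
            parts.append(
                f"Q: {ex['question']}\nReasoning: {ex['reasoning']}\nA: {ex['answer']}"
            )
        parts.append(f"Q: {question}\nReasoning:")  # the model continues the reasoning step
        return "\n\n".join(parts)

    def call_llm(prompt: str) -> str:
        """Hypothetical LLM call; swap in an actual client (OpenAI, Anthropic, etc.)."""
        return "<model output>"

    print(build_cot_prompt("A pipeline scrapes 50 pages an hour. How many in 6 hours?"))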
Posted 4 days ago
4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Role Overview
We are seeking a highly skilled Data Scientist with 4 to 8 years of experience to join our dynamic team. The ideal candidate will leverage expertise in Python, SQL, NLP, Computer Vision, Machine Learning, and PySpark to develop and implement advanced data solutions that support our clients’ digital transformation journeys. You will collaborate closely with business stakeholders and technical teams to analyze complex data sets and deliver actionable insights.
Key Responsibilities
Design, develop, and deploy machine learning models and data-driven solutions for clients in the financial services sector. Apply NLP and computer vision techniques to extract insights from unstructured data sources. Write optimized SQL queries to extract and manipulate large datasets from relational databases. Build scalable data pipelines and workflows using PySpark for large-scale data processing. Collaborate with cross-functional teams to translate business requirements into analytical solutions. Conduct exploratory data analysis to identify trends, patterns, and opportunities for improvement. Communicate findings and recommendations effectively to both technical and non-technical stakeholders. Stay current with emerging technologies and methodologies in data science, NLP, computer vision, and big data.
Mandatory Skills
Strong programming skills in Python for data science and machine learning. Proficient in SQL for complex data extraction and manipulation. Hands-on experience with Natural Language Processing (NLP) techniques and tools. Practical knowledge of Computer Vision algorithms and frameworks. Expertise in Machine Learning model development, evaluation, and deployment. Experience working with PySpark for big data processing and ETL workflows. Experience with Generative AI (Gen-AI) including Large Language Models (LLMs) for text generation and understanding. Familiarity with Retrieval-Augmented Generation (RAG) architectures for enhancing LLM performance with external knowledge sources.
Qualifications
Bachelor’s or Master’s degree in Computer Science, Data Science, Engineering, Statistics, or a related field. 4+ years of professional experience as a Data Scientist or in a similar role. Strong analytical, problem-solving, and communication skills. Ability to work independently and as part of a collaborative team in a fast-paced environment.
About Company: Capco, a Wipro company, is a global technology and management consultancy specializing in driving digital transformation in the financial services industry. With a growing client portfolio comprising over 100 global organizations, Capco operates at the intersection of business and technology by combining innovative thinking with unrivalled industry knowledge to deliver end-to-end data-driven solutions and fast-track digital initiatives for banking and payments, capital markets, wealth and asset management, insurance, and the energy sector. Capco's cutting-edge ingenuity is brought to life through its Innovation Labs and award-winning Be Yourself At Work culture and diverse talent. Our approach is tailor-made to fit with each client's problem with an emphasis on building long-term strategic partnerships that foster collaboration and trust. We have the people, the vision, and the passion. Capco is committed to providing clients with practical solutions. We offer a globally integrated service with offices in leading financial centers across the Americas, Europe and Asia Pacific.
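To make the NLP requirement concrete, here is a small, self-contained text-classification sketch in the scikit-learn style; the ticket texts, labels, and category names are invented for illustration and do not come from Capco or its clients.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import Pipeline

    # Toy labelled data: short support tickets and their (hypothetical) categories.
    texts = [
        "card was charged twice for one purchase",
        "how do I reset my online banking password",
        "loan application status has not changed in weeks",
        "the mobile app crashes when I open statements",
    ]
    labels = ["billing", "account", "lending", "technical"]

    model = Pipeline([
        ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),  # word + bigram features
        ("clf", LogisticRegression(max_iter=1000)),      # simple linear classifier
    ])
    model.fit(texts, labels)
    print(model.predict(["app keeps crashing on login"]))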
Posted 4 days ago
0 years
4 - 5 Lacs
Coimbatore, Tamil Nadu, India
On-site
Skills: Technical Writing, Documentation Management, Knowledge Management, Team Leadership, Process Improvement, Project Management, Workflow Automation
A "Documentation Incharge" or "Document Controller" is responsible for ensuring that all company documentation is managed accurately, efficiently, and effectively. This includes creating, organizing, maintaining, and distributing various types of documents, such as process maps, user guides, policies, and technical manuals. They also ensure compliance with regulatory requirements and industry standards.
Key Responsibilities
Document Creation and Management: Developing and writing documents, including process maps, user guides, and procedure manuals.
Organization and Storage: Maintaining a well-organized document database or library, ensuring accurate archiving and indexing for easy retrieval.
Compliance: Ensuring that documentation practices adhere to company policies, regulatory requirements, and industry best practices.
Communication and Training: Updating personnel on new document versions, providing access, and training employees on proper document handling.
Record Retention: Establishing and maintaining record retention timelines, ensuring compliance with company policies and legal requirements.
Quality Control: Reviewing and updating documents to ensure accuracy, clarity, and adherence to quality standards.
Collaboration: Working with other departments to ensure that all documentation is accurate, up-to-date, and accessible.
Posted 5 days ago
2.0 years
0 Lacs
Greater Kolkata Area
Remote
Experience Required : 2 years Location : Remote Company : Troopr Labs Website : www.enjo.ai About Troopr Labs & Enjo At Troopr Labs, we're building Enjo - the world's smartest AI support agent - designed to become the default AI-first helpdesk for businesses. With Enjo's no-code AI Agent Studio, AI Ticketing, and LLM-native automation tools, enterprises can automate customer and employee support at scale. Trusted by global giants like Starbucks, Netflix, Snowflake, and Spotify, Enjo is already changing the way support happens. We're a lean team of builders on a mission to redefine enterprise productivity through generative AI. If you're curious, fast-moving, and excited about LLMs, we'd love to talk. Role Description We are looking for a Prompt Engineer to shape the intelligence of our AI agents. This role goes beyond simple prompt writing - it requires deep curiosity in language behavior, rigorous experimentation, and the ability to translate product goals into scalable LLM strategies. You'll work closely with product, engineering, and customer teams to design, evaluate, and productionize prompts that power real-world support experiences - and push the boundaries of what AI agents can do without human handoff. What You'll Do Design and iterate prompts to optimize Enjo's performance in customer and employee support use cases Build reusable prompt libraries and test frameworks for scale Collaborate with engineers and product leads to integrate prompt flows into our agent pipelines Experiment with emerging techniques like retrieval-augmented generation (RAG), guardrails, and role conditioning Analyze LLM outputs and conduct rigorous evaluations to improve reliability, interpretability, and impact Who You Are 1-3 years of experience working with LLMs or NLP-based AI products (internships/projects count) Experience with tools like LangChain, OpenAI, Anthropic, or similar APIs Strong understanding of prompt engineering patterns and model behavior You're curious, self-driven, and thrive in fast-moving environments Bonus : experience with AI observability, A/B testing for prompts, or few-shot learning techniques Why This Role Is Special Pioneer LLM Workflows : Build reusable prompt infrastructure powering real customer-facing agents Direct Impact : Your prompts will touch 100,000+ users at global brands Work on the Edge : Apply the latest LLM capabilities in retrieval, instruction tuning, and autonomous agents Mentorship + Autonomy : Be part of a world-class remote team where your ideas and initiative are valued Perks Remote-first and async-friendly Real-world AI experimentation environment Work closely with founders and core engineering team Access to top-tier AI tools, APIs, and mentorship (ref:hirist.tech)
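As a sketch of what "reusable prompt libraries and test frameworks" can look like in practice, here is a minimal Python example; the template name, its variables, and the pass/fail check are assumptions made for illustration, not Enjo internals.

    from string import Template

    # A tiny named prompt library; real systems would version and store these centrally.
    PROMPT_LIBRARY = {
        "triage_ticket": Template(
            "You are a support assistant.\n"
            "Classify the ticket into one of: $categories.\n"
            "Ticket: $ticket\n"
            "Reply with the category name only."
        ),
    }

    def render(name: str, **kwargs: str) -> str:
        """Fill a named template; raises KeyError if a required variable is missing."""
        return PROMPT_LIBRARY[name].substitute(**kwargs)

    def passes_basic_check(output: str, allowed: list[str]) -> bool:
        """A toy evaluation: the model must answer with exactly one allowed label."""
        return output.strip() in allowed

    prompt = render(
        "triage_ticket",
        categories="billing, access, outage",
        ticket="I cannot log in to the admin console since this morning.",
    )
    print(prompt)
    print(passes_basic_check("access", ["billing", "access", "outage"]))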
Posted 5 days ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Key Responsibilities Enter alphabetic, numeric, or symbolic data from various sources into computer databases, spreadsheets, or other software Type accurately and efficiently, focusing on speed and precision Use keyboards, data recorders, scanners, or other data entry devices for data input Review and verify data for accuracy and completeness by comparing it to source documents Identify and correct errors or inconsistencies in data to maintain integrity Cross-reference information to ensure consistency and correctness across data sources Maintain and update existing data in databases and systems as needed Organize and file electronic and paper documents appropriately for easy retrieval Create and manage spreadsheets with large volumes of data, ensuring proper structure Ensure data is stored logically for easy access and retrieval when required Perform regular data backups to safeguard data and prevent loss Sort, categorize, and code data according to specific guidelines for accurate record-keeping Compile data and prepare basic reports or summaries for business or auditing purposes Assist in retrieving data for reports, audits, and other business needs as requested Adhere to company data entry procedures and comply with data protection regulations Maintain the confidentiality of sensitive information at all times Communicate with team members to clarify data requirements or resolve discrepancies Respond to requests for data retrieval from various stakeholders Collaborate with other departments to ensure data consistency and accuracy across systems Operate standard office equipment such as computers, scanners, printers, and fax machines efficiently Ensure the proper use and maintenance of data entry equipment to avoid downtime Identify opportunities to improve data entry processes and increase overall efficiency About Company: Velozity Global Solutions is not only a globally recognized IT company, it's a family representing togetherness for over two years of a successful journey. For Velozity, the definition of success is to transform innovative ideas of people to reality with the help of our tech expertise - this is what we as a team want to be remembered for. Our vision has led Velozity to become an emerging IT company in India & the USA for delivering industry-led mobility solutions. The goal is to empower clients and businesses by creating new possibilities leveraging the technologies of today and tomorrow with the utmost quality, satisfaction, and transparency. Our enthusiasm has led us to become a top IT company in India & the USA for delivering various industry-led mobility solutions in web and mobile application development domains, leveraging futuristic technologies like the Internet of Things (IoT), AI-ML, AR-VR, voice assistants, and voice skills, DevOps & cloud computing, etc.
Posted 5 days ago