3.0 - 6.0 years
5 - 5 Lacs
Bengaluru
On-site
Hello Visionary! We empower our people to stay resilient and relevant in a constantly evolving world. We’re looking for people who are always searching for creative ways to grow and learn. People who want to make a real impact, now and in the future. Does that sound like you? Then it seems like you’d make a great addition to our vibrant team.

We are looking for a Junior Semantic Web ETL Developer. Before our software developers write even a single line of code, they have to understand what drives our customers. What is the environment? What is the user story based on? Implementation means trying, testing, and improving outcomes until a final solution emerges. Knowledge means exchange: discussions with colleagues from all over the world. Join our Digitalization Technology and Services (DTS) team based in Bangalore.

YOU’LL MAKE A DIFFERENCE BY:
Implementing innovative products and solution-development processes and tools by applying your expertise in the field of responsibility.

JOB REQUIREMENTS / SKILLS:
- International experience with global projects and collaboration with intercultural teams is preferred.
- 3-6 years’ experience in developing software solutions with various application programming languages.
- Experience in research and development processes (software-based solutions and products), in commercial topics, and in implementing strategies and PoCs.
- Experience in expert functions such as software development / architecture.
- Strong knowledge of Python fundamentals, including object-oriented programming, data structures, and algorithms.
- Exposure to semantics, knowledge graphs, data modelling, and ontologies is an added advantage.
- Proficiency in Python-based web frameworks such as Flask and FastAPI.
- Experience in building and consuming RESTful APIs.
- Knowledge of web technologies like HTML, CSS, and JavaScript (basic understanding for integration purposes).
- Experience with libraries such as RDFLib or Py2neo for building and querying knowledge graphs.
- Familiarity with SPARQL for querying data from knowledge graphs.
- Understanding of graph databases like Neo4j, GraphDB, or Blazegraph.
- Experience with version control systems like Git.
- Familiarity with CI/CD pipelines and tools like Jenkins or GitLab CI.
- Experience with containerization (Docker) and orchestration (Kubernetes).
- Familiarity with basic AWS services such as S3 and AWS Neptune.
- Excellent command of written and spoken English, and strong presentation skills.

Good to have:
- Exposure to and working experience in a relevant Siemens sector domain (Industry, Energy, Healthcare, Infrastructure and Cities).
- Strong experience in data engineering and analytics (optional).
- Experience with testing frameworks and writing unit test cases.
- Expertise in data engineering: building data pipelines and implementing algorithms in a distributed environment.
- Very good experience with data science and machine learning (optional).
- Experience developing and deploying web applications on the cloud, with a solid understanding of frameworks such as Django (optional).
- Ability to drive adoption of cloud technology for data processing and warehousing.
- Experience working with multiple databases, especially in the NoSQL world.
- Understanding of web servers, load balancers, and deployment processes/activities.
- Advanced knowledge of the software development life cycle and software engineering processes.
- Experience with Jira and Confluence is an added advantage.
- Experience with Agile/Lean development methods using Scrum.
- Experience in rapid programming techniques and TDD (optional).
- Takes strong initiative and is highly result-oriented.
- Good at communicating within the team as well as with all stakeholders.
- Strong customer focus, quick learner, highly proactive, and a team player.
- Ready to travel for on-site job assignments (short to long term).

Create a better #TomorrowWithUs!
This role is in Bangalore, where you’ll get the chance to work with teams impacting entire cities, countries – and the craft of things to come. We’re Siemens. A collection of over 312,000 minds building the future, one day at a time in over 200 countries. We're dedicated to equality, and we encourage applications that reflect the diversity of the communities we work in. All employment decisions at Siemens are based on qualifications, merit and business need. Bring your curiosity and creativity and help us craft tomorrow. At Siemens, we are always challenging ourselves to build a better future. We need the most innovative and diverse Digital Minds to develop tomorrow’s reality. Find out more about the Digital world of Siemens here: www.siemens.com/careers/digitalminds
Posted 3 weeks ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Title: Machine Learning Engineer (NLP & Graph Analytics)

Company Overview
FWS is at the forefront of innovation in AI and ML. We are dedicated to solving complex challenges and creating impactful solutions through the power of artificial intelligence and machine learning. Our team is composed of passionate and driven experts who are committed to pushing the boundaries of what's possible. We are currently seeking a highly skilled and motivated Machine Learning Engineer to join our dynamic team and contribute to our next generation of intelligent products.

Job Summary
We are looking for an experienced Machine Learning Engineer with a strong background in building and deploying sophisticated machine learning models. The ideal candidate will have a proven track record of working with diverse datasets, a deep understanding of natural language processing (NLP), and hands-on experience with state-of-the-art deep learning frameworks. In this role, you will be responsible for the entire lifecycle of machine learning model development, from data acquisition and preprocessing (including JSON extraction and graph database modeling with Neo4j) to training, evaluation, and deployment via APIs. You will work with cutting-edge technologies, including Stanford NLP and large language models like Llama 3.2, to develop innovative solutions that drive our business forward.

Key Responsibilities
ML Model Development & Optimization: Design, develop, and implement robust machine learning models using Python and its associated libraries. You will be responsible for the end-to-end model lifecycle, including data preprocessing, feature engineering, model training, and performance evaluation.
Deep Learning & NLP: Utilize your expertise in deep learning with PyTorch to build and fine-tune complex neural networks.
A key focus will be on Natural Language Processing (NLP) tasks, leveraging the Stanford NLP library for tasks such as sentiment analysis, named entity recognition, and coreference resolution.
Large Language Models (LLMs): Stay at the forefront of AI by working with and fine-tuning large language models, specifically Llama 3.2. You will explore and implement its capabilities for tasks such as text generation, summarization, and question answering.
Data Engineering & Analytics: Conduct in-depth data analysis and handle complex data pipelines. This includes robust JSON extraction, transformation of semi-structured data, and modeling complex relationships using graph databases like Neo4j to build knowledge graphs that feed our ML systems.
API Development & Deployment: Develop and maintain scalable RESTful APIs to serve machine learning models in a production environment. Collaborate with our engineering teams to ensure seamless integration and contribute to our MLOps practices for scalability and reliability.
Research & Innovation: Stay current with the latest advancements in machine learning, deep learning, and NLP. Proactively identify and champion new technologies and techniques that can enhance our products and processes.

Qualifications And Skills
Educational Background: A Master's or Ph.D. in Computer Science, Artificial Intelligence, Machine Learning, or a related field. A Bachelor's degree with significant relevant experience will also be considered.
Programming Proficiency: Expert-level programming skills in Python and a strong command of its data science and machine learning ecosystem (e.g., Pandas, NumPy, Scikit-learn).
Deep Learning Expertise: Proven experience in building and deploying deep learning models using PyTorch.
Natural Language Processing (NLP): Demonstrable experience with NLP techniques and libraries, with specific expertise in using the Stanford NLP toolkit.
Large Language Models (LLMs): Hands-on experience with and a strong understanding of large language models, including practical experience with models like Llama 3.2.
Data Handling: Strong proficiency in handling and parsing various data formats, with specific experience in JSON extraction and manipulation.
API Development: Solid experience in developing and deploying models via RESTful APIs using frameworks like Flask, FastAPI, or Django.
Graph Database Knowledge: Experience with graph databases, particularly Neo4j, and understanding of graph data modeling and querying (Cypher).
Problem-Solving Skills: Excellent analytical and problem-solving abilities with a talent for tackling complex and ambiguous challenges.

Preferred Qualifications
- Published research in top-tier machine learning or NLP conferences.
- Experience with MLOps tools and cloud platforms (e.g., AWS, GCP, Azure).
- Familiarity with other NLP libraries and frameworks (e.g., spaCy, Hugging Face Transformers).
- Contributions to open-source machine learning projects.
(ref:hirist.tech)
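The "JSON extraction plus graph modeling" combination above can be sketched in plain Python: flatten a JSON document into node and relationship tuples that could later be loaded into Neo4j with parameterized Cypher. The payload shape and labels (Project, Task, Person) are hypothetical, chosen only to illustrate the pattern.

```python
import json

# Hypothetical payload; field names are illustrative assumptions.
payload = json.loads("""
{
  "project": "alpha",
  "tasks": [
    {"id": "t1", "owner": "dana"},
    {"id": "t2", "owner": "lee"}
  ]
}
""")

def to_graph(doc):
    """Flatten the JSON document into (label, properties) nodes and
    (from, relation, to) relationships, ready for graph ingestion."""
    nodes, rels = [], []
    nodes.append(("Project", {"name": doc["project"]}))
    for task in doc["tasks"]:
        nodes.append(("Task", {"id": task["id"]}))
        nodes.append(("Person", {"name": task["owner"]}))
        rels.append((doc["project"], "HAS_TASK", task["id"]))
        rels.append((task["id"], "OWNED_BY", task["owner"]))
    return nodes, rels

nodes, rels = to_graph(payload)
```

Keeping the extraction step separate from the database write makes the transformation unit-testable without a running Neo4j instance.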
Posted 3 weeks ago
0 years
0 - 0 Lacs
Cochin
Remote
https://pmspace.ai/

Company Profile:
At pmspace.ai, we’re building next-generation AI tools for project management intelligence. Our platform leverages graph databases, NLP, and large language models (LLMs) to transform complex project data into actionable insights. Join us to pioneer cutting-edge solutions in a fast-paced, collaborative environment.

Role Overview
We seek a Python Developer with expertise in graph databases (Neo4j), RAG pipelines, and vLLM optimization. You’ll design scalable AI systems, enhance retrieval-augmented workflows, and deploy high-performance language models to power our project analytics engine.

Key Responsibilities
- Architect and optimize graph database systems (Neo4j) to model project knowledge networks and relationships.
- Build end-to-end RAG (Retrieval-Augmented Generation) pipelines for context-aware AI responses.
- Implement and fine-tune vLLM for efficient inference of large language models (LLMs).
- Develop Python-based microservices for data ingestion, processing, and API integrations (FastAPI, Flask).
- Collaborate with ML engineers to deploy transformer models (e.g., BERT, GPT variants) and vector databases.
- Monitor system performance, conduct A/B tests, and ensure low-latency responses in production.

Required Skills
- Proficiency in Python and AI/ML libraries (PyTorch, TensorFlow, Hugging Face Transformers).
- Hands-on experience with graph databases, especially Neo4j (Cypher queries, graph algorithms).
- Demonstrated work on RAG pipelines (retrieval, reranking, generation) using frameworks like LangChain or LlamaIndex.
- Experience with vLLM or similar LLM optimization tools (quantization, distributed inference).
- Knowledge of vector databases (e.g., FAISS, Pinecone) and embedding techniques.
- Familiarity with cloud platforms (AWS/GCP/Azure) and containerization (Docker, Kubernetes).

Job Type: Full-time
Pay: ₹5,000.00 - ₹7,000.00 per month
Schedule: Day shift
Work Location: Remote
Expected Start Date: 01/08/2025
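The retrieval step of the RAG pipelines this role centers on can be sketched without any framework. The example below is a minimal, hedged illustration: bag-of-words vectors stand in for a real embedding model, the corpus and query are invented, and a production system would swap in a vector database and an LLM for generation.

```python
from collections import Counter
from math import sqrt

# Toy corpus; in a real pipeline these would be chunks of project documents.
DOCS = [
    "task t1 is blocked by a missing dependency",
    "sprint velocity improved after the retrospective",
    "the dependency graph shows t1 depends on t2",
]

def embed(text):
    """Stand-in for a real embedding model: a bag-of-words vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=2):
    """Rank documents by similarity to the query (the 'R' in RAG)."""
    q = embed(query)
    return sorted(DOCS, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query):
    """Assemble retrieved context plus the question for the generator LLM."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}"
```

Frameworks like LangChain or LlamaIndex wrap exactly this retrieve-then-prompt loop, adding chunking, reranking, and model I/O.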
Posted 3 weeks ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Our world is transforming, and PTC is leading the way. Our software brings the physical and digital worlds together, enabling companies to improve operations, create better products, and empower people in all aspects of their business. Our people make all the difference in our success. Today, we are a global team of nearly 7,000 and our main objective is to create opportunities for our team members to explore, learn, and grow – all while seeing their ideas come to life and celebrating the differences that make us who we are and the work we do possible.

PTC is a dynamic and innovative company dedicated to creating innovative products that transform industries and improve lives. We are looking for a talented Product Architect who will lead the conceptualization and development of groundbreaking products and leverage the power of cutting-edge AI technologies to drive enhanced productivity and innovation.

Job Description:
Responsibilities:
- Design and implement scalable, secure, and high-performing Java applications.
- Focus on designing, building, and maintaining complex, large-scale systems with intrinsic multi-tenant SaaS characteristics.
- Define architectural standards, best practices, and technical roadmaps.
- Lead the integration of modern technologies, frameworks, and cloud solutions.
- Collaborate with DevOps, product teams, and UI/UX designers to ensure cohesive product development.
- Conduct code reviews, mentor developers, and enforce best coding practices.
- Stay up to date with the latest design patterns, technological trends, and industry best practices.
- Ensure scalability, performance, and security of product designs.
- Conduct feasibility studies and risk assessments.

Requirements:
- Proven experience as a Software Solution Architect or similar role.
- Strong expertise in vector and graph databases (e.g., Pinecone, Chroma DB, Neo4j, ArangoDB, Elasticsearch).
- Extensive experience with content repositories and content management systems.
- Familiarity with SaaS and microservices implementation models.
- Proficiency in programming languages such as Java, Python, or C#.
- Excellent problem-solving skills and ability to think strategically.
- Strong technical, analytical, communication, interpersonal, and presentation skills.
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Experience with cloud platforms (e.g., AWS, Azure).
- Knowledge of containerization technologies (e.g., Docker, Kubernetes).
- Experience with artificial intelligence (AI) and machine learning (ML) technologies.

Benefits:
- Competitive salary and benefits package.
- Opportunities for professional growth and development.
- Collaborative and inclusive work environment.
- Flexible working hours and hybrid work options.

Life at PTC is about more than working with today’s most cutting-edge technologies to transform the physical world. It’s about showing up as you are and working alongside some of today’s most talented industry leaders to transform the world around you. If you share our passion for problem-solving through innovation, you’ll likely become just as passionate about the PTC experience as we are. Are you ready to explore your next career move with us? We respect the privacy rights of individuals and are committed to handling Personal Information responsibly and in accordance with all applicable privacy and data protection laws. Review our Privacy Policy here.
Posted 4 weeks ago
3.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Job Title: Python Developer – Data Science & AI Integration
Location: Chandkheda, Ahmedabad, Gujarat 382424
Experience: 2–3 Years
Employment Type: Full-time
Work Mode: On-site

About the Role
We are seeking a talented and driven Python Developer to join our AI & Data Science team. The ideal candidate will have experience in developing backend systems, working with legal datasets, and integrating AI/LLM-based chatbots (text and voice). This is a hands-on role where you’ll work across modern AI architectures like RAG and embedding-based search using vector databases.

Key Responsibilities
- Design and implement Python-based backend systems for AI and data science applications.
- Analyze legal datasets and derive insights through automation and intelligent algorithms.
- Build and integrate AI-driven chatbots (text & voice) using LLMs and RAG architecture.
- Work with vector databases (e.g., Pinecone, ChromaDB) for semantic search and embedding pipelines.
- Implement graph-based querying systems using Neo4j and Cypher.
- Collaborate with cross-functional teams (Data Scientists, Backend Engineers, Legal SMEs).
- Maintain data pipelines for structured, semi-structured, and unstructured data.
- Ensure code scalability, security, and performance.

Required Skills & Experience
- 2–3 years of hands-on Python development experience in AI/data science environments.
- Solid understanding of legal data structures and preprocessing.
- Experience with LLM integrations (OpenAI, Claude, Gemini) and RAG pipelines.
- Proficiency in vector databases (e.g., Pinecone, ChromaDB) and embedding-based similarity search.
- Experience with Neo4j and Cypher for graph-based querying.
- Familiarity with PostgreSQL and REST API design.
- Strong debugging and performance optimization skills.

Nice to Have
- Exposure to Agile development practices.
- Familiarity with tools like LangChain or LlamaIndex.
- Experience working with voice-based assistant/chatbot systems.
- Bachelor's degree in Computer Science, Data Science, or a related field.
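The graph-based querying skill above can be illustrated without a database: the sketch below runs the kind of reachability question that Cypher's variable-length pattern `MATCH (a)-[:CITES*]->(b)` expresses, over an in-memory edge list. The citation-network framing and all case names are hypothetical.

```python
from collections import deque

# Tiny in-memory graph of (subject, relation, object) edges; in
# production this would live in Neo4j. Names are illustrative only.
EDGES = [
    ("case_a", "CITES", "case_b"),
    ("case_b", "CITES", "case_c"),
    ("case_c", "CITES", "case_d"),
]

def neighbors(node):
    return [o for s, _, o in EDGES if s == node]

def reachable(start, end):
    """Breadth-first search: is `end` reachable from `start` along
    CITES edges? This is the in-memory analogue of a variable-length
    Cypher path match."""
    queue, seen = deque([start]), {start}
    while queue:
        node = queue.popleft()
        if node == end:
            return True
        for nxt in neighbors(node):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False
```

A graph database earns its keep when edge lists grow past memory and such traversals need indexes and query planning, but the underlying question is the same.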
Why Join Us? Work on cutting-edge AI integrations in a domain-focused environment. Collaborate with a passionate and experienced cross-functional team. Opportunity to grow in legal-tech and AI solutions space.
Posted 4 weeks ago
10.0 - 15.0 years
35 - 40 Lacs
Chennai
Work from Office
Req ID: 329415

NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Systems Integration Advisor to join our team in Chennai, Tamil Nadu (IN-TN), India (IN).

Tech Lead / Conversational AI Developer – Job Description:
10+ years of experience, with skills including:
- Python
- LLM Prompt Engineering
- RAG
- Graph DB (Neo4j)
- NLP / NLU
- GuardRails
- MongoDB
- Oracle
- Full Stack Development
- Knowledge Graphs
- MCP

About NTT DATA
NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us. This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click here. If you'd like more information on your EEO rights under the law, please click here. For Pay Transparency information, please click here.
Posted 4 weeks ago
2.0 - 5.0 years
3 - 7 Lacs
Chennai
Work from Office
- Design, develop, and maintain automated test scripts using Playwright with TypeScript/JavaScript, as well as Selenium with Java, to ensure comprehensive test coverage across applications.
- Enhance the existing Playwright framework by implementing modular test design and optimizing performance, while also utilizing Cucumber for Behavior-Driven Development (BDD) scenarios.
- Execute functional, regression, integration, performance, and security testing of web applications, APIs, and microservices.
- Collaborate in an Agile environment, participating in daily stand-ups, sprint planning, and retrospectives to ensure alignment on testing strategies and workflows.
- Troubleshoot and analyze test failures and defects using debugging tools and techniques, including logging and tracing within Playwright, Selenium, Postman, Grafana, etc.
- Document and report test results, defects, and issues using Jira and Confluence, ensuring clarity and traceability for all test activities.
- Implement page object models and reusable test components in both Playwright and Selenium to promote code reusability and maintainability.
- Integrate automated tests into CI/CD pipelines using Jenkins and GitHub Actions, ensuring seamless deployment and testing processes.
- Collaborate on Git for version control, managing branches and pull requests to maintain code quality and facilitate teamwork.
- Mentor and coach junior QA engineers on best practices for test automation, Playwright and Selenium usage, and CI/CD workflows.
- Research and evaluate new tools and technologies to enhance testing processes and coverage.

WHAT DO YOU NEED TO SHINE IN THIS ROLE?
- Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent work experience.
- At least 5 years of experience in software testing, with at least 3 years of experience in test automation.
- Ability to write functional tests, test plans, and test strategies.
- Ability to configure test environments and test data using automation tools.
- Experience creating an automated regression / CI test suite using Cucumber with Playwright (preferred) or Selenium, and testing REST APIs.
- Proficient in one or more programming languages: Java, JavaScript, or TypeScript.
- Experience in testing web applications, APIs, and microservices using various tools and frameworks such as Selenium, Cucumber, etc.
- Experience with SAST/DAST tools (preferred).
- Experience working with cloud platforms such as AWS, Azure, GCP, etc.
- Experience working with CI/CD tools such as Jenkins, GitLab, GitHub, etc.
- Experience writing queries and working with databases such as MySQL, MongoDB, Neo4j, Cassandra, etc.
- Experience working with tools such as Postman, JMeter, Grafana, etc.
- Exposure to security standards and compliance.
- Experience working with Agile methodologies such as Scrum, Kanban, etc.
- Ability to work independently and as part of a team.
- Ability to learn new technologies and tools quickly and adapt to changing requirements.
- Highly analytical mindset, with a logical approach to finding solutions and performing root cause analysis.
- Able to prioritize between critical and non-critical path items.
- Excellent communication skills, with the ability to communicate test results to stakeholders in terms of the functional aspects of the system and their impact.

WHAT YOU'LL GET
- Highly competitive compensation, benefits, and vacation package
- Ability to work for one of the fastest-growing companies with some of the most talented people in the industry
- Team outings
- Fun, hardworking, and casual environment
- Endless growth opportunities
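The page-object pattern this role asks for can be sketched without a real browser. In the sketch below, `FakeDriver` is a stand-in for a Selenium or Playwright driver (it just records interactions), and every selector and name is a hypothetical example.

```python
class FakeDriver:
    """Stand-in for a Selenium/Playwright driver so the pattern runs
    without a browser; it records each interaction instead."""
    def __init__(self):
        self.actions = []

    def fill(self, selector, value):
        self.actions.append(("fill", selector, value))

    def click(self, selector):
        self.actions.append(("click", selector))

class LoginPage:
    """Page object: selectors live in one place, and tests call
    intent-level methods instead of touching the DOM directly."""
    USER_FIELD = "#username"
    PASS_FIELD = "#password"
    SUBMIT_BTN = "button[type=submit]"

    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        self.driver.fill(self.USER_FIELD, user)
        self.driver.fill(self.PASS_FIELD, password)
        self.driver.click(self.SUBMIT_BTN)
        return self  # allow chaining in longer flows

driver = FakeDriver()
LoginPage(driver).login("qa-user", "secret")
```

When the login form changes, only `LoginPage` needs updating; every test that calls `login()` keeps working, which is the maintainability payoff the posting alludes to.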
Posted 4 weeks ago
10.0 - 18.0 years
0 Lacs
Navi Mumbai, Maharashtra, India
On-site
Building on our past. Ready for the future. Worley is a global professional services company of energy, chemicals and resources experts. We partner with customers to deliver projects and create value over the life of their assets. We’re bridging two worlds, moving towards more sustainable energy sources, while helping to provide the energy, chemicals and resources needed now.

The Role
As a Data Science Lead with Worley, you will work closely with our existing team to deliver projects for our clients while continuing to develop your skills and experience.
- Conceptualise, build and manage an AI/ML platform (with a focus on unstructured data) by evaluating and selecting best-in-industry AI/ML tools and frameworks.
- Drive and take ownership of developing cognitive solutions for internal stakeholders and external customers.
- Conduct research in areas such as explainable AI, image segmentation, 3D object detection, and statistical methods.
- Evaluate not only algorithms and models but also available tools and technologies in the market to maximize organizational spend.
- Utilize existing frameworks, standards, and patterns to create the architectural foundation and services necessary for AI/ML applications that scale from multi-user to enterprise class.
- Analyse marketplace trends (economic, social, cultural and technological) to identify opportunities and create value propositions.
- Offer a global perspective in stakeholder discussions and when shaping solutions/recommendations.

IT Skills & Experience
- Thorough understanding of the complete AI/ML project life cycle to establish processes and provide guidance and expert support to the team.
- Expert knowledge of emerging technologies in deep learning and reinforcement learning.
- Knowledge of MLOps processes for efficient management of AI/ML projects.
- Must have led project execution with other data scientists/engineers on large and complex data sets.
- Understanding of machine learning algorithms such as k-NN, GBM, neural networks, Naive Bayes, SVM, and decision forests.
- Experience with AI/ML components such as JupyterHub, Zeppelin Notebook, Azure ML Studio, Spark MLlib, TensorFlow, Keras, PyTorch, and scikit-learn.
- Strong knowledge of deep learning, with special focus on CNN/R-CNN/LSTM/encoder/Transformer architectures.
- Hands-on experience with large networks like Inception-ResNet and ResNeXt-50.
- Demonstrated capability using RNNs for text and speech data, and generative models.
- Working knowledge of NoSQL (GraphX/Neo4j), document, columnar, and in-memory database models.
- Working knowledge of ETL tools and techniques, such as Talend, SAP BI Platform/SSIS, or MapReduce.
- Experience in building KPI/storytelling dashboards on visualization tools like Tableau/Zoomdata.

People Skills
- Professional and open communication to all internal and external interfaces.
- Ability to communicate clearly and concisely, and a flexible mindset to handle a quickly changing culture.
- Strong analytical skills.

Industry Specific Experience
10-18 years of experience in AI/ML project execution and AI/ML research.

Education – Qualifications, Accreditation, Training
Master's or Doctorate degree in Computer Science Engineering / Information Technology / Artificial Intelligence.

Moving forward together
We’re committed to building a diverse, inclusive and respectful workplace where everyone feels they belong, can bring themselves, and are heard.
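Of the classical algorithms the skills list names, k-NN is compact enough to sketch in full. This is a minimal illustration on an invented 2-D toy dataset, not production code; a library such as scikit-learn would add distance indexing and weighting.

```python
from collections import Counter
from math import dist

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest
    training points (Euclidean distance)."""
    nearest = sorted(train, key=lambda p: dist(p[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Toy 2-D dataset: two well-separated clusters (coordinates invented).
TRAIN = [
    ((0.0, 0.0), "a"), ((0.1, 0.2), "a"), ((0.2, 0.1), "a"),
    ((5.0, 5.0), "b"), ((5.1, 4.9), "b"), ((4.9, 5.2), "b"),
]
```

Because k-NN defers all work to query time, it is often the baseline against which the heavier models in the list (GBM, SVM, neural networks) are compared.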
We provide equal employment opportunities to all qualified applicants and employees without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by law. We want our people to be energized and empowered to drive sustainable impact. So, our focus is on a values-inspired culture that unlocks brilliance through belonging, connection and innovation. And we're not just talking about it; we're doing it. We're reskilling our people, leveraging transferable skills, and supporting the transition of our workforce to become experts in today's low carbon energy infrastructure and technology. Whatever your ambition, there’s a path for you here. And there’s no barrier to your potential career success. Join us to broaden your horizons, explore diverse opportunities, and be part of delivering sustainable change. Company Worley Primary Location IND-MM-Navi Mumbai Other Locations IND-KR-Bangalore, IND-MM-Mumbai, IND-MM-Pune, IND-TN-Chennai, IND-GJ-Vadodara, IND-AP-Hyderabad, IND-WB-Kolkata Job Digital Platforms & Data Science Schedule Full-time Employment Type Employee Job Level Experienced Job Posting Jul 4, 2025 Unposting Date Aug 3, 2025 Reporting Manager Title Head of Data Intelligence
Posted 4 weeks ago
12.0 - 17.0 years
14 - 19 Lacs
Bengaluru
Work from Office
Project description Join our data engineering team to lead the design and implementation of advanced graph database solutions using Neo4j. This initiative supports the organization's mission to transform complex data relationships into actionable intelligence. You will play a critical role in architecting scalable graph-based systems, driving innovation in data connectivity, and empowering cross-functional teams with powerful tools for insight and decision-making. Responsibilities Graph Data Modeling & Implementation. Design and implement complex graph data models using Cypher and Neo4j best practices. Leverage APOC procedures, custom plugins, and advanced graph algorithms to solve domain-specific problems. Oversee integration of Neo4j with other enterprise systems, microservices, and data platforms. Develop and maintain APIs and services in Java, Python, or JavaScript to interact with the graph database. Mentor junior developers and review code to maintain high-quality standards. Establish guidelines for performance tuning, scalability, security, and disaster recovery in Neo4j environments. Work with data scientists, analysts, and business stakeholders to translate complex requirements into graph-based solutions. Skills (Must have): 12+ years in software/data engineering, with at least 3-5 years of hands-on experience with Neo4j. Lead the technical strategy, architecture, and delivery of Neo4j-based solutions. Design, model, and implement complex graph data structures using Cypher and Neo4j best practices. Guide the integration of Neo4j with other data platforms and microservices. Collaborate with cross-functional teams to understand business needs and translate them into graph-based models. Mentor junior developers and ensure code quality through reviews and best practices. Define and enforce performance tuning, security standards, and disaster recovery strategies for Neo4j. Stay up-to-date with emerging technologies in the graph database and data engineering space.
Strong proficiency in the Cypher query language, graph modeling, and data visualization tools (e.g., Bloom, Neo4j Browser). Solid background in Java, Python, or JavaScript and experience integrating Neo4j with these languages. Experience with APOC procedures, Neo4j plugins, and query optimization. Familiarity with cloud platforms (AWS) and containerization tools (Docker, Kubernetes). Proven experience leading engineering teams or projects. Excellent problem-solving and communication skills. Nice to have: N/A
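A minimal sketch of the parameterized-Cypher practice this role emphasizes. The `merge_node` helper and its property names are hypothetical (not part of the Neo4j driver API); a real Python service would hand the resulting query and parameter map to the driver's `session.run`:

```python
def merge_node(label: str, key: str, props: dict) -> tuple[str, dict]:
    """Build a parameterized Cypher MERGE for one node.

    Parameterizing values (rather than interpolating them into the query
    string) is a Neo4j best practice: it prevents injection and lets the
    server cache query plans.
    """
    # SET every property except the merge key
    assignments = ", ".join(f"n.{p} = ${p}" for p in props if p != key)
    query = (
        f"MERGE (n:{label} {{{key}: ${key}}})"
        + (f" SET {assignments}" if assignments else "")
        + " RETURN n"
    )
    return query, dict(props)

query, params = merge_node("Person", "id", {"id": 42, "name": "Ada"})
print(query)   # MERGE (n:Person {id: $id}) SET n.name = $name RETURN n
```

The same query-plus-params pair can be reused for every node of that label, which is what makes server-side plan caching effective.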
Posted 4 weeks ago
0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Who We Are Boston Consulting Group partners with leaders in business and society to tackle their most important challenges and capture their greatest opportunities. BCG was the pioneer in business strategy when it was founded in 1963. Today, we help clients with total transformation-inspiring complex change, enabling organizations to grow, building competitive advantage, and driving bottom-line impact. To succeed, organizations must blend digital and human capabilities. Our diverse, global teams bring deep industry and functional expertise and a range of perspectives to spark change. BCG delivers solutions through leading-edge management consulting along with technology and design, corporate and digital ventures—and business purpose. We work in a uniquely collaborative model across the firm and throughout all levels of the client organization, generating results that allow our clients to thrive. What You'll Do We are seeking a Senior Full Stack Software Engineer to join BCG's critical GenAI application team, collaborating closely with LLM engineers to design and implement cutting-edge AI solutions. The ideal candidate will possess extensive experience in both front-end and back-end development, ensuring seamless integration and optimal performance of AI-driven applications. This role demands a proactive problem-solver with a passion for innovation and the ability to thrive in a fast-paced, collaborative environment. Provide exceptional Level 3 technical support for enterprise AI tools, working closely with internal teams and vendors to ensure smooth operations and rapid problem resolution. Drive the adoption of productivity tools within the organization by identifying and implementing enhancements that increase efficiency and user satisfaction. Handle the configuration and integration of SaaS products, ensuring secure and effective deployment across the organization. 
Administer and support hands-on changes to our globally deployed CT productivity tools portfolio, focusing on SaaS products with AI-powered capabilities. Administer and configure SaaS integrations for secure rollout and operations. Assist in the evaluation and deployment of new tools and technologies, contributing to POC projects that explore innovative solutions. Identify and support opportunities to implement automation capabilities to reduce manual work and human error. Adapt to industry direction and evolving customer needs, implementing emerging technologies using Change Management disciplines. What You'll Bring A bachelor's degree in Computer Science, Electronics Engineering, or equivalent. Experience And Skills (Mandatory) Proven experience leading and mentoring a development team. Experience developing multi-tenant applications and multi-user M:M broadcast/pubsub/event streaming services. Experience with multi-user tiered permission/access structures and data sharing permissions. Strong proficiency in building front ends (e.g., React, TypeScript). Advanced proficiency with backend development concepts and languages (e.g., Python, Java). Expertise in Terraform for infrastructure-as-code automation. Ability to manage containerized deployments using Kubernetes (EKS preferred). Experience with AWS, particularly EKS, serverless, queues, VPCs & various databases. Knowledge of Design Patterns and architecture trade-offs, OOP, and API design (OpenAPI specs and Postman or other tools). Knowledge of TDD, unit testing, load testing, and integration testing. Solid git experience (version, tag, rebase). Experience And Skills (Nice To Have) Previous experience building a user-facing GenAI/LLM software application. Previous experience with vectors and embeddings (pgvector, chromadb). Knowledge of LLM RAG/Agent core concepts and fundamentals. Experience with Helm, Neo4j, GraphQL for efficient data querying for APIs, and CI/CD tools like Jenkins for automating deployments. Other AWS Managed Services (RDS,
Batch, Lambda, Fargate, Step Functions, SQS/SNS, etc.) FastAPI and NextJS experience (if we’re still using the latter) Websockets, Server-Sent Events, Pub/Sub (RabbitMQ, Kafka, etc.) Who You'll Work With This individual will collaborate with other BCG information technology teams such as Identity, Security, Enterprise Architecture, and other functional squads to ensure alignment with BCG’s overall IT architecture plan. Additional info You're Good At Excelling in SaaS product management, with a strong ability to administer, support, and configure AI-powered SaaS integrations securely and effectively. Demonstrating expertise in implementing monitoring, alerting, and self-recovery solutions for SaaS tools to ensure optimal performance and reliability. Skilled in automating processes to reduce manual interventions and errors, showcasing proficiency in scripting and automation tools. Adaptable and forward-thinking, with the capability to leverage emerging technologies and apply Change Management disciplines effectively. Proficient in critical thinking and problem-solving, capable of managing complex technical challenges under tight deadlines. Strong interpersonal and communication skills, able to collaborate effectively with diverse teams and manage stakeholder expectations. Boston Consulting Group is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, age, religion, sex, sexual orientation, gender identity / expression, national origin, disability, protected veteran status, or any other characteristic protected under national, provincial, or local law, where applicable, and those with criminal histories will be considered in a manner consistent with applicable state and local laws. BCG is an E-Verify Employer.
Posted 4 weeks ago
8.0 - 13.0 years
0 - 1 Lacs
Pune
Hybrid
Experience Required: 6+ years of experience in a relevant discipline. Required Skills: Expertise in ETL processes, data modeling, data integration, machine learning model management, MLOps, and DevOps. Proficiency in tools and technologies such as SQL, Python, R, Hadoop, and Spark. Experience in designing cloud solutions using the Azure cloud platform. Nice to Have Skills: Experience in Databricks, Snowflake, and Neo4j. Expertise in machine learning frameworks and libraries (e.g., TensorFlow, PyTorch, scikit-learn). Key Responsibilities: Defines component breakdown of the system; performs functional allocation to the components; establishes interfaces between systems and among system components. Effectively applies enterprise architecture standards including styles, stacks, and patterns. Develops multiple design solutions; performs cost/benefit analysis; recommends optimized solution designs to meet business and analytic technical requirements. Defines reference architectures aligned with architecture standards and building blocks to be applied globally. Drives changes in technical architecture into solution designs. Builds strong relationships with customers and peers to effectively promote solution development using standard technologies. Maintains awareness of emerging technologies, software trends, and tools. Facilitates and supports execution of pilot and/or proof-of-concept activities to validate technology capabilities. Analyzes infrastructure capacity and provides recommendations for emerging needs. Maintains strong relationships to deliver business value using relevant Business Relationship Management practices.
Posted 4 weeks ago
16.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
About Us Zycus is a pioneer in Cognitive Procurement software and has been a trusted partner of choice for large global enterprises for two decades. Zycus has been consistently recognized by Gartner, Forrester, and other analysts for its Source to Pay integrated suite. Zycus powers its S2P software with the revolutionary Merlin AI Suite. Merlin AI takes over the tactical tasks and empowers procurement and AP officers to focus on strategic projects; offers data-driven actionable insights for quicker and smarter decisions, and its conversational AI offers a B2C type user-experience to the end-users. Zycus helps enterprises drive real savings, reduce risks, and boost compliance, and its seamless, intuitive, and easy-to-use user interface ensures high adoption and value across the organization. Start your #CognitiveProcurement journey with us, as you are #MeantforMore We Are An Equal Opportunity Employer Zycus is committed to providing equal opportunities in employment and creating an inclusive work environment. We do not discriminate against applicants on the basis of race, color, religion, gender, sexual orientation, national origin, age, disability, or any other legally protected characteristic. All hiring decisions will be based solely on qualifications, skills, and experience relevant to the job requirements. Job Description We are looking for a Director of Engineering to lead one of our key product engineering teams. This role will report directly to the VP of Engineering and will be responsible for successful execution of the company's business mission through development of cutting-edge software products and solutions. As an owner of the product you will be required to plan and execute the product road map and provide technical leadership to the engineering team. You will have to collaborate with Product Management and Implementation teams and build a commercially successful product. 
You will be responsible to recruit & lead a team of highly skilled software engineers and provide strong hands-on engineering leadership. Requirements: Deep technical knowledge in Software Product Engineering using Java/J2EE, Node.js, React.js, fullstack, NoSQL DB, MongoDB, Cassandra, Neo4j, Elasticsearch, Kibana, ELK, Kafka, Redis, Docker, Kubernetes, Apache, Solr, ActiveMQ, RabbitMQ, Spark, Scala, Sqoop, HBase, Hive, WebSocket, web crawlers, Spring Boot, etc. is a must. Job Requirement 16+ years of experience in Software Engineering with at least 5+ years as an engineering leader in a software product company. Hands-on technical leadership with proven ability to recruit high-performance talent. High technical credibility - ability to audit technical decisions and push for the best solution to a problem. Experience building E2E applications right from the backend database to the persistence layer. Experience with UI technologies (Angular, React.js, Node.js) or a fullstack environment will be preferred. Experience with NoSQL technologies (MongoDB, Cassandra, Neo4j, DynamoDB, etc.), Elasticsearch, Kibana, ELK, Logstash. Experience in developing Enterprise Software using Agile Methodology. Good understanding of Kafka, Redis, ActiveMQ, RabbitMQ, Solr, etc. SaaS cloud-based platform exposure. Ownership of E2E design and development, and quality enterprise product/application deliverable exposure. A track record of setting and achieving high standards. Strong understanding of modern technology architecture. Key Programming Skills: Java, J2EE with cutting-edge technologies. Excellent team building, mentoring, and coaching skills are a must-have. Five Reasons Why You Should Join Zycus Cloud Product Company: We are a Cloud SaaS Company and our products are created by using the latest technologies like ML and AI. Our UI is in AngularJS and we are developing our mobile apps using React.
A Market Leader: Zycus is recognized by Gartner (the world's leading market research analyst) as a Leader in Procurement Software Suites. Move between Roles: We believe that change leads to growth, and therefore we allow our employees to shift careers and move to different roles and functions within the organization. Get a Global Exposure: You get to work and deal with our global customers. Create an Impact: Zycus gives you the environment to create an impact on the product and transform your ideas into reality. Even our junior engineers get the opportunity to work on different product features.
Posted 4 weeks ago
16.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
About Us Zycus is a pioneer in Cognitive Procurement software and has been a trusted partner of choice for large global enterprises for two decades. Zycus has been consistently recognized by Gartner, Forrester, and other analysts for its Source to Pay integrated suite. Zycus powers its S2P software with the revolutionary Merlin AI Suite. Merlin AI takes over the tactical tasks and empowers procurement and AP officers to focus on strategic projects; offers data-driven actionable insights for quicker and smarter decisions, and its conversational AI offers a B2C type user-experience to the end-users. Zycus helps enterprises drive real savings, reduce risks, and boost compliance, and its seamless, intuitive, and easy-to-use user interface ensures high adoption and value across the organization. Start your #CognitiveProcurement journey with us, as you are #MeantforMore We Are An Equal Opportunity Employer Zycus is committed to providing equal opportunities in employment and creating an inclusive work environment. We do not discriminate against applicants on the basis of race, color, religion, gender, sexual orientation, national origin, age, disability, or any other legally protected characteristic. All hiring decisions will be based solely on qualifications, skills, and experience relevant to the job requirements. Job Description We are looking for a Director of Engineering to lead one of our key product engineering teams. This role will report directly to the VP of Engineering and will be responsible for successful execution of the company's business mission through development of cutting-edge software products and solutions. As an owner of the product you will be required to plan and execute the product road map and provide technical leadership to the engineering team. You will have to collaborate with Product Management and Implementation teams and build a commercially successful product. 
You will be responsible to recruit & lead a team of highly skilled software engineers and provide strong hands-on engineering leadership. Requirements: Deep technical knowledge in Software Product Engineering using Amazon Web Services, Java 8, Java/J2EE, Node.js, React.js, fullstack, NoSQL DB, MongoDB, Cassandra, Neo4j, Elasticsearch, Kibana, ELK, Kafka, Redis, Docker, Kubernetes, Architecture Concepts, Design Patterns, Data Structures & Algorithms, Distributed Computing, Multi-threading, Apache, Solr, ActiveMQ, RabbitMQ, Spark, Scala, Sqoop, HBase, Hive, WebSocket, web crawlers, Spring Boot, etc. is a must. Job Requirement 16+ years of experience in Software Engineering with at least 5+ years as an engineering leader in a software product company. Hands-on technical leadership with proven ability to recruit high-performance talent. High technical credibility - ability to audit technical decisions and push for the best solution to a problem. Experience building E2E applications right from the backend database to the persistence layer. Experience with UI technologies (Angular, React.js, Node.js) or a fullstack environment will be preferred. Experience with NoSQL technologies (MongoDB, Cassandra, Neo4j, DynamoDB, etc.), Elasticsearch, Kibana, ELK, Logstash. Experience in developing Enterprise Software using Agile Methodology. Good understanding of Kafka, Redis, ActiveMQ, RabbitMQ, Solr, etc. SaaS cloud-based platform exposure. Experience with Docker, Kubernetes, etc.
Ownership of E2E design and development, and quality enterprise product/application deliverable exposure. A track record of setting and achieving high standards. Strong understanding of modern technology architecture. Key Programming Skills: Java, J2EE with cutting-edge technologies. Excellent team building, mentoring, and coaching skills are a must-have. Five Reasons Why You Should Join Zycus Cloud Product Company: We are a Cloud SaaS Company and our products are created by using the latest technologies like ML and AI. Our UI is in AngularJS and we are developing our mobile apps using React. A Market Leader: Zycus is recognized by Gartner (the world's leading market research analyst) as a Leader in Procurement Software Suites. Move between Roles: We believe that change leads to growth, and therefore we allow our employees to shift careers and move to different roles and functions within the organization. Get a Global Exposure: You get to work and deal with our global customers. Create an Impact: Zycus gives you the environment to create an impact on the product and transform your ideas into reality. Even our junior engineers get the opportunity to work on different product features.
Posted 4 weeks ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Scrum Master Exp: 3-8 yrs Location: Bangalore Scrum Master advanced certifications (PSM, SASM, SSM, etc.) Working experience on Agile project management tools (Jira, VersionOne (Agility.ai), Rally, etc.) Good to have skills: SAFe Agile, Scrum Master certification. Knowledge and experience in working with the SAFe framework. Experience with continuous delivery, DevOps, and release management. Experience working with European customers or colleagues is a big plus. Ability to communicate concisely and accurately to the team and to management. Knowledge in all or several of the following: software development (Python, JavaScript, ASP, C#, HTML5...), data storage technologies (SQL, .NET, NoSQL (Neo4j, Neptune), S3, AWS (Redshift)), web development technologies and frameworks (e.g., Angular, AngularJS, ReactJS), DevOps methodologies and practices
Posted 4 weeks ago
0.0 - 3.0 years
2 - 5 Lacs
Bengaluru
Work from Office
Key Responsibilities: Deliver engaging and interactive training sessions (24 hours total) based on structured modules. Teach integration of monitoring, logging, and observability tools with machine learning. Guide learners in real-time anomaly detection, incident management, root cause analysis, and predictive scaling. Support learners in deploying tools like Prometheus, Grafana, OpenTelemetry, Neo4j, Falco, and KEDA. Conduct hands-on labs using LangChain, Ollama, Prophet, and other AI/ML frameworks. Help participants set up smart workflows for alert classification and routing using open-source stacks. Prepare learners to handle security, threat detection, and runtime anomaly classification using LLMs. Provide post-training support and mentorship when necessary. Skills: Observability & Monitoring: Prometheus, Grafana, OpenTelemetry, ELK Stack, FluentBit AI/ML: Python, scikit-learn, Prophet, LangChain, Ollama (LLMs) Security Tools: Falco, KubeArmor, Sysdig Secure Dev Tools: Docker, VSCode, Jupyter Notebooks LLMs & Automation: LangChain, Neo4j, GPT-based explanation tools, Slack Webhooks.
Posted 4 weeks ago
5.0 - 10.0 years
8 - 13 Lacs
Bengaluru
Work from Office
As a Senior R&D Engineer, you will be responsible for designing, developing, and maintaining high-quality software solutions with expertise in Java and Spring Boot. You also have experience in UML modeling, JSON Schema, and NoSQL databases. You should have strong skills in cloud-native development and microservices architecture, with additional knowledge in scripting and Helm charts. You have: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. 5+ years of experience in software development with a focus on Java and Spring Boot. Exposure to CI/CD tools (Jenkins, GitLab CI). It would be nice if you also had: Understanding of RESTful API design and implementation. Relevant certifications (e.g., AWS Certified Developer, Oracle Certified Professional Java SE) are a plus. Knowledge of container orchestration and management. Familiarity with Agile development methodologies. Design and develop high-quality applications using Java and Spring Boot, implementing and maintaining RESTful APIs and microservices. Create and maintain UML diagrams for software architecture, define and manage JSON schemas, and optimize NoSQL databases like Neo4j, MongoDB, and Cassandra for efficient data handling. Develop and deploy cloud-native applications using AWS, Azure, or OCP, ensuring scalability and resilience in microservices environments. Manage Kubernetes deployments with Helm charts, collaborate with DevOps teams to integrate CI/CD pipelines, and automate tasks using Python and Bash scripting. Ensure efficient data storage and retrieval, optimize system performance, and support automated deployment strategies. Maintain comprehensive documentation for software designs, APIs, and deployment processes, ensuring clarity and accessibility.
Posted 4 weeks ago
5.0 - 10.0 years
8 - 13 Lacs
Bengaluru
Work from Office
As a Senior R&D Engineer, you'll be at the forefront of designing and developing cutting-edge, cloud-native applications using Java and Spring Boot. You'll have the chance to craft customer-focused solutions that make a real impact, leveraging microservices architecture, NoSQL databases, and containerized deployments to power high-performance enterprise applications. You have: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. 5+ years of experience in software development, specializing in Java and Spring Boot, with expertise in microservices architecture. Expertise in NoSQL databases (Neo4j / MongoDB / Cassandra), RESTful API design, containerization technologies (Docker / Kubernetes / Helm), and scripting languages (Python / Bash) for automation. Proficiency in software modeling (UML, JSON Schema design), DevOps practices, and tracing tools (Wireshark, tshark) for debugging and performance tuning. It would be nice if you also have: Exposure to cloud platforms (AWS, Azure, OCP) and CI/CD tools (Jenkins, GitLab CI). Develop and optimize enterprise applications using Java and Spring Boot. Design and implement JSON schemas, UML models, and NoSQL databases (Neo4j, MongoDB, Cassandra). Build and deploy cloud-native solutions on AWS, Azure, or OCP. Implement microservices architecture, using Docker, Kubernetes, and Helm charts for deployment. Automate workflows with scripting languages (Python, Bash). Ensure robust RESTful API design, cloud storage integration, and serverless computing. Drive innovation and process improvements in an Agile development environment.
Posted 4 weeks ago
10.0 - 15.0 years
17 - 22 Lacs
Chennai
Work from Office
In this role, you will play a key part in shaping innovative solutions for customers, acting as their trusted advisor in autonomous network domains like Assurance/Network as Code (NAC). You'll work closely with stakeholders to understand their unique needs and translate them into practical, high-impact solutions. If you have a flair for crafting smart, customer-focused solutions, this role is for you! You'll design and deliver end-to-end architectures using Nokia's cutting-edge portfolio (and beyond) to help customers achieve their goals with confidence and long-term technical integrity. You have: Bachelor's degree in engineering/technology or equivalent with 10+ years of hands-on experience in autonomous networks driving large programs, and should have worked as an Architect/Designer for at least 5 years. Experience in Assurance/Network as Code (NAC). Exposure to Java, Expect scripting, Python, Kubernetes, Microservices, Databases, XML, XSLT, Data Parsing, SNMP, REST, SOAP, CORBA, LDAP, JMS, and FTP. Exposure to Oracle, Postgres, MongoDB, MariaDB, Neo4j, containerization, orchestration tools, and agile methodologies. It would be nice if you also had: Understanding of 5G Slicing, 5G SA/NSA networks, IP/MPLS, Optics, IMS, VoLTE, NFV/SDN, and Fixed networks. Been an independent, disruptive thinker with a results-oriented mindset and strong communication skills. Ability to work in a fast-paced global environment in collaboration with cross-cultural teams and customers. Develop a Requirement Definition Document (RDD), High-Level Design (HLD), and Low-Level Design (LLD). Stay updated on customer architecture within the dedicated technical area and regional requirements. Apply solution architecture standards, processes, and principles. Define and develop the full scope of solutions, collaborating across teams and organizations to create effective outcomes. Work effectively in diverse environments, leveraging best practices and industry knowledge to enhance products and services.
Serve as a trusted advisor and mentor to team members, guiding projects and tasks. Drive projects with manageable risks and resource requirements or oversee small teams, managing day-to-day operations, resource allocation, and workload distribution. Act as a key troubleshooter and subject matter expert on the Autonomous product portfolio, including assurance and inventory.
Posted 4 weeks ago
0.0 - 4.0 years
2 - 5 Lacs
Bengaluru
Work from Office
Key Responsibilities: Deliver engaging and interactive training sessions (24 hours total) based on structured modules. Teach integration of monitoring, logging, and observability tools with machine learning. Guide learners in real-time anomaly detection, incident management, root cause analysis, and predictive scaling. Support learners in deploying tools like Prometheus, Grafana, OpenTelemetry, Neo4j, Falco, and KEDA. Conduct hands-on labs using LangChain, Ollama, Prophet, and other AI/ML frameworks. Help participants set up smart workflows for alert classification and routing using open-source stacks. Prepare learners to handle security, threat detection, and runtime anomaly classification using LLMs. Provide post-training support and mentorship when necessary. Skills: Kubernetes: Minikube, Helm, kubectl, HPA, KEDA Observability & Monitoring: Prometheus, Grafana, OpenTelemetry, ELK Stack, FluentBit AI/ML: Python, scikit-learn, Prophet, LangChain, Ollama (LLMs) Security Tools: Falco, KubeArmor, Sysdig Secure Dev Tools: Docker, VSCode, Jupyter Notebooks LLMs & Automation: LangChain, Neo4j, GPT-based explanation tools, Slack Webhooks
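The real-time anomaly-detection portion of this curriculum can be illustrated with a simple z-score detector over a metric series. This is a hedged, stdlib-only sketch: the sample latencies and threshold are illustrative, and in a lab setting the series would be scraped from Prometheus rather than hard-coded:

```python
import statistics

def zscore_anomalies(series, threshold=3.0):
    """Return indices of points whose z-score against the series exceeds threshold."""
    mean = statistics.fmean(series)
    stdev = statistics.pstdev(series)
    if stdev == 0:
        return []  # a flat series has no outliers
    return [i for i, v in enumerate(series) if abs(v - mean) / stdev > threshold]

latencies = [100, 102, 98, 101, 99, 100, 400]  # ms; the last sample is a spike
print(zscore_anomalies(latencies, threshold=2.0))  # → [6]
```

Flagged indices would then feed the alert-classification workflow (e.g., routed to Slack webhooks as described above).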
Posted 4 weeks ago
3.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Job Title: Python Developer – Data Science & AI Integration Location: Chandkheda, Ahmedabad, Gujarat 382424 Experience: 2–3 Years Employment Type: Full-time Work Mode: On-site About the Role We are seeking a talented and driven Python Developer to join our AI & Data Science team. The ideal candidate will have experience in developing backend systems, working with legal datasets, and integrating AI/LLM-based chatbots (text and voice). This is a hands-on role where you’ll work across modern AI architectures like RAG and embedding-based search using vector databases. Key Responsibilities Design and implement Python-based backend systems for AI and data science applications. Analyze legal datasets and derive insights through automation and intelligent algorithms. Build and integrate AI-driven chatbots (text & voice) using LLMs and RAG architecture. Work with vector databases (e.g., Pinecone, ChromaDB) for semantic search and embedding pipelines. Implement graph-based querying systems using Neo4j and Cypher. Collaborate with cross-functional teams (Data Scientists, Backend Engineers, Legal SMEs). Maintain data pipelines for structured, semi-structured, and unstructured data. Ensure code scalability, security, and performance. Required Skills & Experience 2–3 years of hands-on Python development experience in AI/data science environments. Solid understanding of legal data structures and preprocessing. Experience with LLM integrations (OpenAI, Claude, Gemini) and RAG pipelines. Proficiency in vector databases (e.g., Pinecone, ChromaDB) and embedding-based similarity search. Experience with Neo4j and Cypher for graph-based querying. Familiarity with PostgreSQL and REST API design. Strong debugging and performance optimization skills. Nice to Have Exposure to Agile development practices. Familiarity with tools like LangChain or LlamaIndex. Experience working with voice-based assistant/chatbot systems. Bachelor's degree in Computer Science, Data Science, or a related field. 
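The embedding-based similarity search mentioned above reduces, at its core, to ranking stored vectors by cosine similarity against a query vector. A stdlib-only sketch follows; the toy 2-d vectors and document ids are made up, and a real pipeline would delegate storage and ranking to Pinecone or ChromaDB with high-dimensional model embeddings:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec, corpus, k=2):
    """Rank stored (doc_id, vector) pairs by similarity to the query."""
    scored = sorted(corpus, key=lambda item: cosine_similarity(query_vec, item[1]), reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]

corpus = [("doc_a", [1.0, 0.0]), ("doc_b", [0.9, 0.1]), ("doc_c", [0.0, 1.0])]
print(top_k([1.0, 0.05], corpus))  # → ['doc_a', 'doc_b']
```

In a RAG architecture, the `top_k` documents become the context passed to the LLM alongside the user's question.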
Why Join Us? Work on cutting-edge AI integrations in a domain-focused environment. Collaborate with a passionate and experienced cross-functional team. Opportunity to grow in legal-tech and AI solutions space.
Posted 4 weeks ago
0 years
0 Lacs
India
On-site
We’re Hiring: Data Engineer Experience Level: Mid-Level / Senior (based on fit) We’re looking for a skilled and motivated Data Engineer to join our growing team. If you're passionate about designing scalable data infrastructure and love working with cutting-edge tools like Apache Spark, Airflow, and Neo4j, this role is for you. Key Responsibilities: Design, build, and maintain scalable ETL pipelines for diverse data sources Develop and optimize data processing workflows and models for performance and reliability Leverage Apache Spark for distributed data transformations and large-scale processing Schedule and manage data pipelines using Apache Airflow Write efficient, maintainable Python code for data tasks and automation Model and manage graph databases using Neo4j to extract insights from complex data relationships Collaborate with data scientists, analysts, and cross-functional teams to deliver actionable insights Maintain high data quality through testing, validation, and monitoring Troubleshoot pipeline issues and ensure infrastructure reliability Stay current with trends and advancements in data engineering Qualifications: Proven experience as a Data Engineer or similar role Proficiency in Python, including libraries like Pandas and PySpark Strong understanding of ETL processes and tools Hands-on experience with Apache Spark and Airflow Practical knowledge of Neo4j and Cypher for graph-based data modeling Solid understanding of SQL and NoSQL databases Familiarity with cloud platforms like AWS, GCP, or Azure Strong analytical thinking and problem-solving skills Excellent collaboration and communication abilities Bachelor’s degree in Computer Science, Engineering, or a related field
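The transform-and-validate step at the heart of such ETL pipelines can be sketched in plain Python. The field names and quality rules below are illustrative only; at the scale this role describes, the same logic would live in a Spark job scheduled by an Airflow DAG:

```python
def transform(records):
    """Normalize raw records and drop rows failing basic quality checks."""
    cleaned = []
    for row in records:
        if not row.get("id") or row.get("amount") is None:
            continue  # quality gate: skip incomplete rows rather than load bad data
        cleaned.append({
            "id": str(row["id"]).strip(),          # normalize key to trimmed string
            "amount": round(float(row["amount"]), 2),  # coerce to 2-decimal float
        })
    return cleaned

raw = [
    {"id": " 1 ", "amount": "19.999"},
    {"id": None, "amount": "5"},   # rejected: missing id
    {"id": "2", "amount": 3},
]
print(transform(raw))
```

Keeping the transform a pure function of its input, as here, is what makes pipeline stages easy to unit-test and to monitor for data-quality regressions.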
Posted 4 weeks ago
5.0 - 8.0 years
3 - 6 Lacs
Hyderābād
On-site
Wipro Limited (NYSE: WIT, BSE: 507685, NSE: WIPRO) is a leading technology services and consulting company focused on building innovative solutions that address clients’ most complex digital transformation needs. Leveraging our holistic portfolio of capabilities in consulting, design, engineering, and operations, we help clients realize their boldest ambitions and build future-ready, sustainable businesses. With over 230,000 employees and business partners across 65 countries, we deliver on the promise of helping our customers, colleagues, and communities thrive in an ever-changing world. For additional information, visit us at www.wipro.com.

Job Description

Role Purpose
The purpose of the role is to support process delivery by ensuring the daily performance of the Production Specialists, resolving technical escalations, and developing technical capability within the Production Specialists.

Do
Oversee and support the process by reviewing daily transactions against performance parameters
Review the performance dashboard and the scores for the team
Support the team in improving performance parameters by providing technical support and process guidance
Record, track, and document all queries received, problem-solving steps taken, and all successful and unsuccessful resolutions
Ensure standard processes and procedures are followed to resolve all client queries
Resolve client queries as per the SLAs defined in the contract
Develop an understanding of the process/product for team members to facilitate better client interaction and troubleshooting
Document and analyze call logs to spot the most frequent trends and prevent future problems
Identify red flags and escalate serious client issues to the Team Leader in cases of untimely resolution
Ensure all product information and disclosures are given to clients before and after the call/email requests
Avoid legal challenges by monitoring compliance with service agreements

Handle technical escalations through effective diagnosis and troubleshooting of client queries
Manage and resolve technical roadblocks/escalations as per SLA and quality requirements
If unable to resolve an issue, escalate it to TA & SES in a timely manner
Provide product support and resolution to clients by performing question diagnosis while guiding users through step-by-step solutions
Troubleshoot all client queries in a user-friendly, courteous, and professional manner
Offer alternative solutions to clients (where appropriate) with the objective of retaining customers’ and clients’ business
Organize ideas and effectively communicate oral messages appropriate to listeners and situations
Follow up and make scheduled callbacks to customers to record feedback and ensure compliance with contract SLAs

Build people capability to ensure operational excellence and maintain superior customer service levels for the existing account/client
Mentor and guide Production Specialists on improving technical knowledge
Collate trainings to be conducted as triages to bridge the skill gaps identified through interviews with the Production Specialists
Develop and conduct trainings (triages) within products for Production Specialists as per target
Inform the client about the triages being conducted
Undertake product trainings to stay current with product features, changes, and updates
Enroll in product-specific and any other trainings per client requirements/recommendations
Identify and document the most common problems and recommend appropriate resolutions to the team
Update job knowledge by participating in self-learning opportunities and maintaining personal networks

Deliver
No. | Performance Parameter | Measure
1 | Process | No. of cases resolved per day, compliance to process and quality standards, meeting process-level SLAs, Pulse score, customer feedback, NSAT/ESAT
2 | Team Management | Productivity, efficiency, absenteeism
3 | Capability Development | Triages completed, Technical Test performance

Mandatory Skills: Neo4j Graph Database
Experience: 5-8 Years

Reinvent your world.
We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention. Of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA - as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.
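The mandatory skill above is Neo4j graph querying. As a conceptual illustration only, the sketch below models a tiny property graph in plain Python, with the roughly equivalent Cypher shown in a comment; the node labels and relationship names are hypothetical, and a real deployment would run Cypher through the official `neo4j` Python driver against a live database.

```python
# Tiny in-memory property graph standing in for a Neo4j database.
graph = {
    "nodes": {
        "t1": {"label": "Ticket", "status": "open"},
        "t2": {"label": "Ticket", "status": "closed"},
        "e1": {"label": "Engineer", "name": "Asha"},
    },
    "edges": [
        ("e1", "ASSIGNED_TO", "t1"),
        ("e1", "ASSIGNED_TO", "t2"),
    ],
}

def open_tickets_for(engineer_id, g):
    """Rough Cypher equivalent (hypothetical schema):
    MATCH (e:Engineer)-[:ASSIGNED_TO]->(t:Ticket {status: 'open'})
    RETURN t
    """
    return [dst for src, rel, dst in g["edges"]
            if src == engineer_id
            and rel == "ASSIGNED_TO"
            and g["nodes"][dst]["status"] == "open"]

print(open_tickets_for("e1", graph))
```

The point of the graph model is that relationship traversal (engineer to ticket here) is a first-class operation, which is what Cypher's `MATCH` pattern syntax expresses directly.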
Posted 4 weeks ago
8.0 - 11.0 years
35 - 37 Lacs
Kolkata, Ahmedabad, Bengaluru
Work from Office
Dear Candidate,

We are looking for a Svelte Developer to build lightweight, reactive web applications with excellent performance and maintainability.

Key Responsibilities:
Design and implement applications using Svelte and SvelteKit.
Build reusable components and libraries for future use.
Optimize applications for speed and responsiveness.
Collaborate with design and backend teams to create cohesive solutions.

Required Skills & Qualifications:
8+ years of experience with Svelte or similar reactive frameworks.
Strong understanding of JavaScript, HTML, CSS, and reactive programming concepts.
Familiarity with SSR and JAMstack architectures.
Experience integrating RESTful APIs or GraphQL endpoints.

Soft Skills:
Strong troubleshooting and problem-solving skills.
Ability to work independently and in a team.
Excellent communication and documentation skills.

Note: If interested, please share your updated resume and a preferred time for a discussion. If shortlisted, our HR team will contact you.

Kandi Srinivasa Reddy
Delivery Manager
Integra Technologies
Posted 1 month ago
0 years
0 Lacs
Nashik, Maharashtra, India
Remote
Company: Amonex Technologies Pvt. Ltd.
Product: Recaho POS – www.recaho.com
Location: [Office Location or Remote]
Internship Duration: 6 Months
Reporting To: Sales Manager / Marketing Lead

About Amonex Technologies
Amonex Technologies is a fast-growing SaaS company behind Recaho, an all-in-one restaurant management platform that is transforming how food and beverage businesses operate. With a presence in 18+ countries and over 11,000 customers, Recaho empowers restaurants, cafes, and cloud kitchens with tools for billing, inventory, CRM, and online ordering — all in one platform. At Amonex, we’re not just building software — we’re shaping the future of the global food service industry through innovation, data, and design. Join us on our mission to digitally empower 1 million food businesses across emerging markets.

About the Role
We are looking for a driven and enthusiastic Sales and Marketing Intern for a 6-month internship. This role is ideal for someone passionate about email marketing and digital outreach who is eager to dive deep into sales operations and customer acquisition strategies. You will work directly with our core sales and marketing teams to execute campaigns, learn tools, and gain real-world exposure to a high-growth SaaS business.

Key Responsibilities
Assist in planning and executing email marketing campaigns for lead generation and engagement.
Work with the sales team to manage leads, update the CRM, and optimize conversion workflows.
Research and segment databases for targeted outbound communication.
Help craft compelling content, including email templates, case studies, and sales decks.
Track and analyze campaign metrics and identify opportunities for improvement.
Collaborate on special projects involving marketing automation, product launches, and field promotions.

Key Skills and Interests
Passion for email marketing, CRM systems, and customer engagement.
Eagerness to learn about sales funnels, marketing automation, and SaaS growth strategies.
Strong communication, writing, and organizational skills.
Ability to work independently and collaboratively in a fast-paced environment.
Prior exposure to tools like HubSpot, Mailchimp, or Zoho is a plus (but not required).

What You’ll Learn
How a high-growth SaaS startup builds and executes end-to-end sales & marketing funnels.
Hands-on experience with email campaigns, lead nurturing, and CRM operations.
Understanding of how marketing directly supports sales in driving business growth.
Industry-level exposure to the F&B tech landscape, with opportunities to contribute and make an impact.

About Company:
We are a startup founded by ex-Infosys employees and based in Pune. We are developing next-generation e-commerce platforms in various flavors, including B2C, B2B, B2B2C, and marketplaces. Our mission is to replace current e-commerce and vertical solutions/platforms using modern technologies and frameworks to deliver exceptional performance and user experiences. Technologies we use include Node.js, GraphQL, MongoDB, Neo4j DB, Nginx, Docker, etc.
Posted 1 month ago