8.0 - 13.0 years
25 - 30 Lacs
Bengaluru
Work from Office
Job Description

Job Title: Data Science Engineer, AVP
Location: Bangalore, India

Role Description
We are seeking a seasoned Data Science Engineer to spearhead the development of intelligent, autonomous AI systems. The ideal candidate will have a robust background in agentic AI, LLMs, SLMs, vector databases, and knowledge graphs. This role involves designing and deploying AI solutions that leverage Retrieval-Augmented Generation (RAG), multi-agent frameworks, and hybrid search techniques to enhance enterprise applications.

What we'll offer you
As part of our flexible scheme, here are just some of the benefits that you'll enjoy:
- Best-in-class leave policy
- Gender-neutral parental leave
- 100% reimbursement under the childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Employee Assistance Program for you and your family members
- Comprehensive hospitalization insurance for you and your dependents
- Accident and term life insurance
- Complimentary health screening for ages 35 and above

Your key responsibilities
- Design & Develop Agentic AI Applications: Utilise frameworks like LangChain, CrewAI, and AutoGen to build autonomous agents capable of complex task execution.
- Implement RAG Pipelines: Integrate LLMs with vector databases (e.g., Milvus, FAISS) and knowledge graphs (e.g., Neo4j) to create dynamic, context-aware retrieval systems.
- Fine-Tune Language Models: Customise LLMs (e.g., Gemini, ChatGPT, Llama) and SLMs (e.g., spaCy, NLTK) using domain-specific data to improve performance and relevance in specialised applications.
- NER Models: Train OCR- and NLP-based models to parse domain-specific details from documents (e.g., DocAI, Azure AI DIS, AWS IDP).
- Develop Knowledge Graphs: Construct and manage knowledge graphs to represent and query complex relationships within data, enhancing AI interpretability and reasoning.
- Collaborate Cross-Functionally: Work with data engineers, ML researchers, and product teams to align AI solutions with business objectives and technical requirements.
- Optimise AI Workflows: Employ MLOps practices to ensure scalable, maintainable, and efficient AI model deployment and monitoring.

Your skills and experience
- 8+ years of professional experience in AI/ML development, with a focus on agentic AI systems.
- Proficient in Python, Python API frameworks, and SQL; familiar with AI/ML frameworks such as TensorFlow or PyTorch.
- Experience deploying AI models on cloud platforms (e.g., GCP, AWS).
- Experience with LLMs (e.g., GPT-4), SLMs (e.g., spaCy), and prompt engineering.
- Understanding of semantic technologies, ontologies, and RDF/SPARQL.
- Familiarity with MLOps tools and practices for continuous integration and deployment.
- Skilled in building and querying knowledge graphs using tools like Neo4j.
- Hands-on experience with vector databases and embedding techniques.
- Familiarity with RAG architectures and hybrid search methodologies.
- Experience developing AI solutions for specific industries such as healthcare, finance, or e-commerce.
- Strong problem-solving abilities and analytical thinking.
- Excellent communication skills for cross-functional collaboration.
- Ability to work independently and manage multiple projects simultaneously.

How we'll support you
- Training and development to help you excel in your career
- Coaching and support from experts in your team
- A culture of continuous learning to aid progression
- A range of flexible benefits that you can tailor to suit your needs
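To make the RAG responsibility above concrete, here is a minimal sketch of the retrieval step only, with a deterministic hash-based stand-in for a real embedding model and an in-memory NumPy array in place of Milvus or FAISS; the corpus strings are invented for illustration.

```python
import hashlib
import numpy as np

# Toy corpus; in a real pipeline these would be document chunks embedded
# with a real model and stored in a vector DB such as Milvus or FAISS.
docs = [
    "loan approval policy for retail customers",
    "fraud detection rules for card payments",
    "office holiday calendar",
]

def embed(text: str, dim: int = 64) -> np.ndarray:
    # Deterministic stand-in for a real embedding model (assumption:
    # a production system would call a sentence-embedding model here).
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:4], "big")
    v = np.random.default_rng(seed).standard_normal(dim)
    return v / np.linalg.norm(v)

index = np.stack([embed(d) for d in docs])  # shape: (n_docs, dim)

def retrieve(query: str, k: int = 2) -> list[str]:
    # On unit vectors, cosine similarity reduces to a dot product.
    scores = index @ embed(query)
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

hits = retrieve("policy for approving loans", k=2)
# `hits` would then be injected into the LLM prompt as grounding context.
```

A real implementation would swap `embed` for a model call and the NumPy matrix for an ANN index, but the shape of the pipeline — embed, score, take top-k, feed to the LLM — stays the same.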
Posted 4 days ago
2.0 - 5.0 years
9 - 13 Lacs
Bengaluru
Work from Office
About The Role

Job Title: Data Science Engineer, AS
Location: Bangalore, India

Role Description
We are seeking a Data Science Engineer to contribute to the development of intelligent, autonomous AI systems. The ideal candidate will have a strong background in agentic AI, LLMs, SLMs, vector databases, and knowledge graphs. This role involves deploying AI solutions that leverage Retrieval-Augmented Generation (RAG), multi-agent frameworks, and hybrid search techniques to enhance enterprise applications.

What we'll offer you
- 100% reimbursement under the childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Accident and term life insurance

Your key responsibilities
- Design & Develop Agentic AI Applications: Utilise frameworks like LangChain, CrewAI, and AutoGen to build autonomous agents capable of complex task execution.
- Implement RAG Pipelines: Integrate LLMs with vector databases (e.g., Milvus, FAISS) and knowledge graphs (e.g., Neo4j) to create dynamic, context-aware retrieval systems.
- Fine-Tune Language Models: Customise LLMs (e.g., Gemini, ChatGPT, Llama) and SLMs (e.g., spaCy, NLTK) using domain-specific data to improve performance and relevance in specialised applications.
- NER Models: Train OCR- and NLP-based models to parse domain-specific details from documents (e.g., DocAI, Azure AI DIS, AWS IDP).
- Develop Knowledge Graphs: Construct and manage knowledge graphs to represent and query complex relationships within data, enhancing AI interpretability and reasoning.
- Collaborate Cross-Functionally: Work with data engineers, ML researchers, and product teams to align AI solutions with business objectives and technical requirements.
- Optimise AI Workflows: Employ MLOps practices to ensure scalable, maintainable, and efficient AI model deployment and monitoring.

Your skills and experience
- 4+ years of professional experience in AI/ML development, with a focus on agentic AI systems.
- Proficient in Python, Python API frameworks, and SQL; familiar with AI/ML frameworks such as TensorFlow or PyTorch.
- Experience deploying AI models on cloud platforms (e.g., GCP, AWS).
- Experience with LLMs (e.g., GPT-4), SLMs (e.g., spaCy), and prompt engineering.
- Understanding of semantic technologies, ontologies, and RDF/SPARQL.
- Familiarity with MLOps tools and practices for continuous integration and deployment.
- Skilled in building and querying knowledge graphs using tools like Neo4j.
- Hands-on experience with vector databases and embedding techniques.
- Experience developing AI solutions for specific industries such as healthcare, finance, or e-commerce.

How we'll support you
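The knowledge-graph skill listed above amounts to modelling data as entities and typed relationships and then answering multi-hop queries over them. A toy in-memory sketch of that idea (not Neo4j itself; the entities and relations are invented for illustration):

```python
# Each edge is a (subject, relation, object) triple -- the same shape
# a graph database like Neo4j stores, queried there via Cypher.
edges = [
    ("Acme Corp", "SUBSIDIARY_OF", "Globex"),
    ("Globex", "HEADQUARTERED_IN", "Bengaluru"),
    ("Acme Corp", "SUPPLIES", "Initech"),
]

def neighbors(entity, relation=None):
    """Objects reachable from `entity` in one hop, optionally by relation."""
    return [o for s, r, o in edges
            if s == entity and (relation is None or r == relation)]

def two_hop(entity):
    """Entities two hops away -- the kind of multi-hop traversal Cypher
    expresses as a path pattern like MATCH (a)-->()-->(c)."""
    return [c for b in neighbors(entity) for c in neighbors(b)]

result = two_hop("Acme Corp")
# result == ["Bengaluru"]: Acme Corp -> Globex -> Bengaluru
```

In a real system the triples would live in Neo4j and the traversal would be a Cypher query; the point here is only the data shape and the hop-by-hop reasoning it enables.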
Posted 4 days ago
7.0 - 12.0 years
32 - 37 Lacs
Bengaluru
Work from Office
About The Role

Job Title: Data Science Engineer, VP
Location: Bangalore, India

Role Description
We are seeking a seasoned Data Science Engineer to spearhead the development of intelligent, autonomous AI systems. The ideal candidate will have a robust background in agentic AI, LLMs, SLMs, vector databases, and knowledge graphs. This role involves designing and deploying AI solutions that leverage Retrieval-Augmented Generation (RAG), multi-agent frameworks, and hybrid search techniques to enhance enterprise applications.

What we'll offer you
- 100% reimbursement under the childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Accident and term life insurance

Your key responsibilities
- Design & Develop Agentic AI Applications: Utilise frameworks like LangChain, CrewAI, and AutoGen to build autonomous agents capable of complex task execution.
- Implement RAG Pipelines: Integrate LLMs with vector databases (e.g., Milvus, FAISS) and knowledge graphs (e.g., Neo4j) to create dynamic, context-aware retrieval systems.
- Fine-Tune Language Models: Customise LLMs (e.g., Gemini, ChatGPT, Llama) and SLMs (e.g., spaCy, NLTK) using domain-specific data to improve performance and relevance in specialised applications.
- NER Models: Train OCR- and NLP-based models to parse domain-specific details from documents (e.g., DocAI, Azure AI DIS, AWS IDP).
- Develop Knowledge Graphs: Construct and manage knowledge graphs to represent and query complex relationships within data, enhancing AI interpretability and reasoning.
- Collaborate Cross-Functionally: Work with data engineers, ML researchers, and product teams to align AI solutions with business objectives and technical requirements.
- Optimise AI Workflows: Employ MLOps practices to ensure scalable, maintainable, and efficient AI model deployment and monitoring.

Your skills and experience
- 13+ years of professional experience in AI/ML development, with a focus on agentic AI systems.
- Proficient in Python, Python API frameworks, and SQL; familiar with AI/ML frameworks such as TensorFlow or PyTorch.
- Experience deploying AI models on cloud platforms (e.g., GCP, AWS).
- Experience with LLMs (e.g., GPT-4), SLMs (e.g., spaCy), and prompt engineering.
- Understanding of semantic technologies, ontologies, and RDF/SPARQL.
- Familiarity with MLOps tools and practices for continuous integration and deployment.
- Skilled in building and querying knowledge graphs using tools like Neo4j.
- Hands-on experience with vector databases and embedding techniques.
- Familiarity with RAG architectures and hybrid search methodologies.
- Experience developing AI solutions for specific industries such as healthcare, finance, or e-commerce.
- Strong problem-solving abilities and analytical thinking.
- Excellent communication skills for cross-functional collaboration.
- Ability to work independently and manage multiple projects simultaneously.

How we'll support you

About us and our teams
Please visit our company website for further information: https://www.db.com/company/company.htm

We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
Posted 4 days ago
2.0 - 5.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: Neo4j
Good to have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full-time education

A DevOps engineer in the platform teams should have the following experience and expertise:
- Experience with Azure cloud infrastructure deployment, configuration and maintenance via YAML/Bicep/Terraform:
  - Databricks
  - VNets
  - Virtual Machines
  - App Services
  - Storage
  - Container Apps
- Using Azure DevOps Boards to manage the CI/CD pipelines and our git repos
- Automated monitoring and incident management
- Experience with (complex) Azure infrastructure
- Improve and maintain a high level of security and compliance for the infrastructure
- Knowledge of infrastructure as code (IaC)
- Solid communication skills to ensure ideas and opinions can be shared easily

Nice to have:
- ETL knowledge (data engineering) to be able to support and help our platform customers with their solutions
- Experience with Neo4j, the graph database hosted by the team
- Based in Bangalore, to strengthen the team feeling even more
- Experience working with Docker

Qualification: 15 years full-time education
Posted 4 days ago
3.0 - 4.0 years
0 Lacs
Mohali district, India
On-site
Job Title: Python Backend Developer (Data Layer)
Location: Mohali, Punjab
Company: RevClerx

About RevClerx:
RevClerx Pvt. Ltd., founded in 2017 and based in the Chandigarh/Mohali area (India), is a dynamic Information Technology firm providing comprehensive IT services with a strong focus on client-centric solutions. As a global provider, we cater to diverse business needs including website designing and development, digital marketing, lead generation services (including telemarketing and qualification), and appointment setting.

Job Summary:
We are seeking a skilled Python Backend Developer with a strong passion and proven expertise in database design and implementation. This role requires 3-4 years of backend development experience, focusing on building robust, scalable applications and APIs. The ideal candidate will not only be proficient in Python and common backend frameworks but will possess significant experience in designing, modeling, and optimizing various database solutions, including relational databases (like PostgreSQL) and, crucially, graph databases (specifically Neo4j). You will play a vital role in architecting the data layer of our applications, ensuring efficiency, scalability, and the ability to handle complex, interconnected data.

Key Responsibilities:
● Design, develop, test, deploy, and maintain scalable and performant Python-based backend services and APIs.
● Lead the design and implementation of database schemas for relational (e.g., PostgreSQL) and NoSQL databases, with a strong emphasis on graph databases (Neo4j).
● Model complex data relationships and structures effectively, particularly leveraging graph data modeling principles where appropriate.
● Write efficient, optimized database queries (SQL, Cypher, potentially others).
● Develop and maintain data models, ensuring data integrity, consistency, and security.
● Optimize database performance through indexing strategies, query tuning, caching mechanisms, and schema adjustments.
● Collaborate closely with product managers, frontend developers, and other stakeholders to understand data requirements and translate them into effective database designs.
● Implement data migration strategies and scripts as needed.
● Integrate various databases seamlessly with Python backend services using ORMs (like SQLAlchemy, Django ORM) or native drivers.
● Write unit and integration tests, particularly focusing on data access and manipulation logic.
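As a small sketch of the data-layer concerns above (schema design, foreign-key indexing, parameterised queries), here is an example using Python's built-in sqlite3 as a stand-in for PostgreSQL; the table and column names are invented for illustration.

```python
import sqlite3

# In-memory SQLite as a stand-in for PostgreSQL; in production an ORM
# such as SQLAlchemy or a native driver would sit over the real database.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT NOT NULL UNIQUE);
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        user_id INTEGER NOT NULL REFERENCES users(id),
        total_cents INTEGER NOT NULL
    );
    -- Index the foreign key so per-user lookups avoid a full table scan.
    CREATE INDEX idx_orders_user_id ON orders(user_id);
""")
conn.execute("INSERT INTO users VALUES (1, 'a@example.com')")
conn.executemany(
    "INSERT INTO orders (user_id, total_cents) VALUES (?, ?)",
    [(1, 500), (1, 1250)],
)

# Parameterised query (never string formatting) keeps the data layer safe
# from SQL injection.
total = conn.execute(
    "SELECT SUM(total_cents) FROM orders WHERE user_id = ?", (1,)
).fetchone()[0]
# total == 1750
```

The same decisions — where indexes go, how relationships are modelled, how queries are parameterised — carry over directly to PostgreSQL, and their graph analogues (node labels, relationship types, Cypher parameters) to Neo4j.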
Posted 4 days ago
9.0 years
0 Lacs
Thiruporur, Tamil Nadu, India
On-site
Job Description
Join us with your skills and experience as an Architect to design, develop, and maintain robust test automation frameworks for our Autonomous Network solutions. This includes Orchestration/Fulfillment (FlowOne, CDPA, CDFF, NoRC), Assurance/NAC, Inventory (UIV, Discovery and Reconciliation), SSO/Security product suites (NIAM), and Analytics. You will work closely with development teams, product owners, and other stakeholders to understand requirements and translate them into practical, high-impact test strategies. You will champion best practices in test automation, driving continuous improvement and innovation within the testing lifecycle.

How You Will Contribute And What You Will Learn
- Develop and maintain comprehensive test automation frameworks and strategies aligned with Agile methodologies and CI/CD pipelines.
- Design and implement automated tests covering various aspects of service functionality, including performance, security, scalability, reliability, and integration.
- Develop coding standards, procedures, and methodologies for automated testing, collaborating with other QA leaders and architects.
- Create detailed test plans that specify automation architecture, positive/negative testing techniques, and reporting mechanisms.
- Drive end-to-end test automation, aiming for zero manual testing and integrated status reporting.
- Work with service development and release engineering to integrate automated tests into the CI/CD flow.
- Act as a subject matter expert on test automation best practices and technologies.
- Troubleshoot and resolve complex testing issues.
- Stay current with the latest testing technologies and industry trends.

Key Skills And Experience
If you have:
- Bachelor's degree in engineering/technology or equivalent with 9+ years of experience in software testing, with at least 5 years in a Test Architect or similar role designing and implementing automated test frameworks.
- Practical experience with software testing methodologies (Agile, Waterfall) and driving the full Software Testing Life Cycle (STLC).
- Experience with programming languages such as Java, JavaScript, Python, and scripting languages (Perl, Shell, etc.).

It would be nice if you had:
- Good understanding of databases (Oracle, Postgres, MongoDB, MariaDB, Neo4j) and SQL.
- Exposure to Linux and containerization technologies (e.g., Docker, Kubernetes).
- Knowledge of testing in cloud environments (AWS, Azure, GCP) is a plus.

About Us
Come create the technology that helps the world act together. Nokia is committed to innovation and technology leadership across mobile, fixed and cloud networks. Your career here will have a positive impact on people’s lives and will help us build the capabilities needed for a more productive, sustainable, and inclusive world. We challenge ourselves to create an inclusive way of working where we are open to new ideas, empowered to take risks and fearless to bring our authentic selves to work.

What we offer
Nokia offers continuous learning opportunities, well-being programs to support you mentally and physically, opportunities to join and get supported by employee resource groups, mentoring programs and highly diverse teams with an inclusive culture where people thrive and are empowered. Nokia is committed to inclusion and is an equal opportunity employer.

Nokia has received the following recognitions for its commitment to inclusion & equality:
- One of the World’s Most Ethical Companies by Ethisphere
- Gender-Equality Index by Bloomberg
- Workplace Pride Global Benchmark

At Nokia, we act inclusively and respect the uniqueness of people. Nokia’s employment decisions are made regardless of race, color, national or ethnic origin, religion, gender, sexual orientation, gender identity or expression, age, marital status, disability, protected veteran status or other characteristics protected by law.
We are committed to a culture of inclusion built upon our core value of respect. Join us and be part of a company where you will feel included and empowered to succeed. About The Team As Nokia's growth engine, we create value for communication service providers and enterprise customers by leading the transition to cloud-native software and as-a-service delivery models. Our inclusive team of dreamers, doers and disruptors push the limits from impossible to possible.
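As a rough illustration of the positive/negative automated-test design described in this posting, here is a minimal Python test pair; `provision_service` is an invented stand-in, not a Nokia product interface.

```python
def provision_service(name: str) -> dict:
    """Stand-in for an orchestration/fulfillment call (hypothetical)."""
    if not name:
        raise ValueError("service name required")
    return {"name": name, "status": "ACTIVE"}

def test_provision_returns_active():
    # Positive case: a valid request yields an active service.
    result = provision_service("vpn-01")
    assert result["status"] == "ACTIVE"

def test_provision_rejects_empty_name():
    # Negative case: invalid input must be rejected, not silently accepted.
    try:
        provision_service("")
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for empty name")

test_provision_returns_active()
test_provision_rejects_empty_name()
```

In a CI/CD pipeline these would typically be collected and run by a framework such as pytest on every commit, which is what "integrating automated tests into the CI/CD flow" amounts to in practice.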
Posted 4 days ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
Job Title Full-Stack AI Application Engineer (Node.js & Modern AI Platforms) About the Role We are looking for a versatile **Full-Stack AI Application Engineer** who can turn business ideas into production-ready web apps powered by the latest AI tooling. You will design, build, and deploy scalable applications—ranging from proof-of-concept MVPs to full SaaS products—leveraging cutting-edge platforms such as Bolt, Cursor, Replit, and Lovable. If you thrive on building end-to-end solutions, integrating custom LLMs, and shipping beautifully crafted UIs, we’d love to meet you. Key Responsibilities - **Architect & develop** responsive web apps (landing pages, dashboards, admin panels). - **Integrate databases & APIs** (Supabase, Firebase, PostgreSQL, MongoDB). - **Embed AI features**—GPT/ChatGPT, custom LLMs, vector search, recommendation engines. - **Implement authentication** and secure user flows. - **Set up payment processing** with Stripe or similar gateways. - **Deploy applications** on Vercel, Netlify, Replit, or cloud providers (AWS, GCP). - **Optimize performance & SEO** for mobile-first experiences. - Collaborate with product, design, and ML teams to iterate quickly from mockups to production. - Write clean, testable code and maintain documentation for handoffs and scaling. Required Skills & Experience | Category | Must-Have | Nice-to-Have | |----------|-----------|--------------| | **Languages** | JavaScript/TypeScript, Python | Dart (Flutter) | | **Frameworks** | Node.js, Express.js, React or Next.js | MERN stack, Shadcn | | **AI Tooling** | OpenAI API, custom LLM fine-tuning | Rasa, Botpress, n8n | | **Databases** | PostgreSQL, Supabase, MongoDB | GraphQL, Neo4j | | **DevOps & Hosting** | Vercel, Netlify, Replit | Docker, Kubernetes | | **Payment & Auth** | Stripe integration, OAuth/JWT | SOC 2 or HIPAA compliance | | **UI/UX** | Tailwind CSS, responsive design | Figma, Storybook | Who You Are - 3+ years building full-stack web applications. 
- Hands-on experience shipping at least one AI-enabled product or feature. - Comfortable owning projects end-to-end—from UI mockups through deployment. - Passionate about rapid prototyping, experimentation, and lifelong learning in the AI space. - Strong communication skills; able to explain technical decisions to non-technical stakeholders. What We Offer - Competitive salary + performance bonuses. - Remote-first culture with flexible hours. - Annual budget for conferences, courses, and hardware. - Opportunity to define engineering best practices in a fast-growing AI startup. How to Apply 1. Send your resume and a short note highlighting your most relevant AI projects. 2. Include a link to a GitHub repo or demo that showcases your end-to-end development skills. 3. We’ll schedule a 30-minute intro call, followed by a technical deep dive and portfolio review. Take your career to the next level by building the future of AI-powered web applications with us! #teceze
Posted 5 days ago
3.0 years
3 - 6 Lacs
Chennai
On-site
ROLE SUMMARY
At Pfizer we make medicines and vaccines that change patients' lives with a global reach of over 1.1 billion patients. Pfizer Digital is the organization charged with winning the digital race in the pharmaceutical industry. We apply our expertise in technology, innovation, and our business to support Pfizer in this mission. Our team, the GSES Team, is passionate about using software and data to improve manufacturing processes. We partner with other Pfizer teams focused on:
- Manufacturing throughput efficiency and increased manufacturing yield
- Reduction of end-to-end cycle time and increase of percent release attainment
- Increased quality control lab throughput and more timely closure of quality assurance investigations
- Increased manufacturing yield of vaccines
- More cost-effective network planning decisions and lowered inventory costs

In the Senior Associate, Integration Engineer role, you will help implement data capabilities within the team to enable advanced, innovative, and scalable database services and data platforms. You will utilize modern Data Engineering principles and techniques to help the team better deliver value in the form of AI, analytics, business intelligence, and operational insights. You will be on a team responsible for executing on technical strategies, designing architecture, and developing solutions to enable the Digital Manufacturing organization to deliver value to our partners across Pfizer. Most of all, you’ll use your passion for data to help us deliver real value to our global network of manufacturing facilities, changing patient lives for the better!
ROLE RESPONSIBILITIES
The Senior Associate, Integration Engineer’s responsibilities include, but are not limited to:
- Maintain database service catalogues
- Build, maintain and optimize data pipelines
- Support cross-functional teams with data-related tasks
- Troubleshoot data-related issues, identify root causes, and implement solutions in a timely manner
- Automate builds and deployments of database environments
- Support development teams in database-related troubleshooting and optimization
- Document technical specifications, data flows, system architectures and installation instructions for the provided services
- Collaborate with stakeholders to understand data requirements and translate them into technical solutions
- Participate in relevant SAFe ceremonies and meetings

BASIC QUALIFICATIONS
- Education: Bachelor’s degree or Master’s degree in Computer Science, Data Engineering, Data Science, or related discipline
- Minimum 3 years of experience in Data Engineering, Data Science, Data Analytics or similar fields
- Broad understanding of data engineering techniques and technologies, including at least 3 of the following:
  - PostgreSQL (or similar SQL database(s))
  - Neo4j/Cypher
  - ETL (Extract, Transform, and Load) processes
  - Airflow or other data pipeline technology
  - Kafka (distributed event streaming platform)
- Proficient or better in a scripting language, ideally Python
- Experience tuning and optimizing database performance
- Knowledge of modern data integration patterns
- Strong verbal and written communication skills and ability to work in a collaborative team environment, spanning global time zones
- Proactive approach and goal-oriented mindset
- Self-driven approach to research and problem solving with proven analytical skills
- Ability to manage tasks across multiple projects at the same time

PREFERRED QUALIFICATIONS
- Pharmaceutical experience
- Experience working with Agile delivery methodologies (e.g., Scrum)
- Experience with graph databases
- Experience with Snowflake
- Familiarity with cloud platforms such as AWS
- Experience with containerization technologies such as Docker and orchestration tools like Kubernetes

PHYSICAL/MENTAL REQUIREMENTS
None

NON-STANDARD WORK SCHEDULE, TRAVEL OR ENVIRONMENT REQUIREMENTS
Job will require working with global teams and applications. A flexible working schedule will be needed on occasion to accommodate planned agile sprint planning and system releases as well as unplanned/on-call level 3 support. Travel requirements are project based. Estimated percentage of travel to support project and departmental activities is less than 10%.

Work Location Assignment: Hybrid

Pfizer is an equal opportunity employer and complies with all applicable equal employment opportunity legislation in each jurisdiction in which it operates.

Information & Business Tech #LI-PFE
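The ETL (Extract, Transform, and Load) processes named in the qualifications can be sketched as three small stages; the data and field names below are invented for illustration, and a dict stands in for the real target database.

```python
import csv
import io

# Raw input with one incomplete row (B2 has no yield value).
RAW = "batch_id,yield_pct\nB1,92.5\nB2,\nB3,88.0\n"

def extract(text: str) -> list[dict]:
    """Extract: parse raw CSV rows into dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows: list[dict]) -> list[dict]:
    """Transform: drop rows with missing yield, cast strings to floats."""
    return [
        {"batch_id": r["batch_id"], "yield_pct": float(r["yield_pct"])}
        for r in rows if r["yield_pct"]
    ]

def load(rows: list[dict], store: dict) -> None:
    """Load: write cleaned rows into the target store (a dict here;
    a real pipeline would write to PostgreSQL or similar)."""
    for r in rows:
        store[r["batch_id"]] = r["yield_pct"]

store: dict = {}
load(transform(extract(RAW)), store)
# store == {"B1": 92.5, "B3": 88.0}
```

In an orchestrator such as Airflow, each of these stages would typically become its own task so that failures can be retried independently and the pipeline's state is observable step by step.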
Posted 5 days ago
3.0 years
0 Lacs
Bengaluru East, Karnataka, India
On-site
The Data Engineer will serve as a technical expert in the design and development of AI data pipelines that manage both large unstructured and structured datasets, with a focus on building data pipelines for enterprise AI solutions.

Job Description
In your new role you will:
- Work closely with data scientists and domain experts to design and develop AI data pipelines using an agile development process.
- Develop pipelines for ingesting and processing large unstructured and structured datasets from a variety of sources, with a specific emphasis on creating solutions for AI applications to ensure efficient and effective data processing.
- Work efficiently with structured and unstructured data sources.
- Work with cloud technologies such as AWS to design and implement scalable data architectures.
- Support the operation of the data pipelines, which involves troubleshooting and bug fixing as well as implementing change requests to ensure that the data pipelines continue to meet user requirements.

Your Profile
You are best equipped for this task if you have:
- Master's or Bachelor's Degree in Computer Science/Mathematics/Statistics or equivalent.
- Minimum of 3 years of relevant work experience in data engineering.
- Extensive hands-on experience in conceptualizing, designing, and implementing data pipelines.
- Proficiency in handling structured data and unstructured data formats (e.g., PPT, PDF, DOCX), databases (RDBMS such as Oracle/PL SQL and MySQL; NoSQL such as Elasticsearch, MongoDB, Neo4j, Ceph), and familiarity with big data platforms (HDFS, Spark, Impala).
- Experience in working with AWS technologies focusing on building scalable data pipelines.
- Strong background in software engineering and development cycles (CI/CD) with proficiency in scripting languages, particularly Python.
- Good understanding of and experience with Kubernetes/OpenShift platforms.
- Front-end reporting, dashboard and data exploration tools, e.g., Tableau.
- Good understanding of data management, data governance, and data security practices.
- Highly motivated, structured and methodical with a high degree of self-initiative
- Team player with good cross-cultural skills to work in an international team
- Customer and result oriented

#WeAreIn for driving decarbonization and digitalization. As a global leader in semiconductor solutions in power systems and IoT, Infineon enables game-changing solutions for green and efficient energy, clean and safe mobility, as well as smart and secure IoT. Together, we drive innovation and customer success, while caring for our people and empowering them to reach ambitious goals. Be a part of making life easier, safer and greener. Are you in?

We are on a journey to create the best Infineon for everyone. This means we embrace diversity and inclusion and welcome everyone for who they are. At Infineon, we offer a working environment characterized by trust, openness, respect and tolerance and are committed to giving all applicants and employees equal opportunities. We base our recruiting decisions on the applicant's experience and skills. Please let your recruiter know if they need to pay special attention to something in order to enable your participation in the interview process. Click here for more information about Diversity & Inclusion at Infineon.
Posted 5 days ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
This job is with Pfizer, an inclusive employer and a member of myGwork – the largest global platform for the LGBTQ+ business community. Please do not contact the recruiter directly.

Role Summary
At Pfizer we make medicines and vaccines that change patients' lives with a global reach of over 1.1 billion patients. Pfizer Digital is the organization charged with winning the digital race in the pharmaceutical industry. We apply our expertise in technology, innovation, and our business to support Pfizer in this mission.

About
Our team, the GSES Team, is passionate about using software and data to improve manufacturing processes. We partner with other Pfizer teams focused on:
- Manufacturing throughput efficiency and increased manufacturing yield
- Reduction of end-to-end cycle time and increase of percent release attainment
- Increased quality control lab throughput and more timely closure of quality assurance investigations
- Increased manufacturing yield of vaccines
- More cost-effective network planning decisions and lowered inventory costs

In the Senior Associate, Integration Engineer role, you will help implement data capabilities within the team to enable advanced, innovative, and scalable database services and data platforms. You will utilize modern Data Engineering principles and techniques to help the team better deliver value in the form of AI, analytics, business intelligence, and operational insights. You will be on a team responsible for executing on technical strategies, designing architecture, and developing solutions to enable the Digital Manufacturing organization to deliver value to our partners across Pfizer. Most of all, you'll use your passion for data to help us deliver real value to our global network of manufacturing facilities, changing patient lives for the better!
Role Responsibilities
The Senior Associate, Integration Engineer's responsibilities include, but are not limited to:
- Maintain database service catalogues
- Build, maintain and optimize data pipelines
- Support cross-functional teams with data-related tasks
- Troubleshoot data-related issues, identify root causes, and implement solutions in a timely manner
- Automate builds and deployments of database environments
- Support development teams in database-related troubleshooting and optimization
- Document technical specifications, data flows, system architectures and installation instructions for the provided services
- Collaborate with stakeholders to understand data requirements and translate them into technical solutions
- Participate in relevant SAFe ceremonies and meetings

Basic Qualifications
- Education: Bachelor's degree or Master's degree in Computer Science, Data Engineering, Data Science, or related discipline
- Minimum 3 years of experience in Data Engineering, Data Science, Data Analytics or similar fields
- Broad understanding of data engineering techniques and technologies, including at least 3 of the following:
  - PostgreSQL (or similar SQL database(s))
  - Neo4j/Cypher
  - ETL (Extract, Transform, and Load) processes
  - Airflow or other data pipeline technology
  - Kafka (distributed event streaming platform)
- Proficient or better in a scripting language, ideally Python
- Experience tuning and optimizing database performance
- Knowledge of modern data integration patterns
- Strong verbal and written communication skills and ability to work in a collaborative team environment, spanning global time zones
- Proactive approach and goal-oriented mindset
- Self-driven approach to research and problem solving with proven analytical skills
- Ability to manage tasks across multiple projects at the same time

Preferred Qualifications
- Pharmaceutical experience
- Experience working with Agile delivery methodologies (e.g., Scrum)
- Experience with graph databases
- Experience with Snowflake
- Familiarity with cloud platforms such as AWS
- Experience with containerization technologies such as Docker and orchestration tools like Kubernetes

Physical/Mental Requirements
None

Non-standard Work Schedule, Travel Or Environment Requirements
Job will require working with global teams and applications. A flexible working schedule will be needed on occasion to accommodate planned agile sprint planning and system releases as well as unplanned/on-call level 3 support. Travel requirements are project based. Estimated percentage of travel to support project and departmental activities is less than 10%.

Work Location Assignment: Hybrid

Pfizer is an equal opportunity employer and complies with all applicable equal employment opportunity legislation in each jurisdiction in which it operates.

Information & Business Tech
Posted 5 days ago
0.0 - 89.0 years
0 Lacs
Mumbai, Maharashtra
On-site
API Lead_Vice President_Data & Analytics Engineering Mumbai, Maharashtra, India Job description Employment Type Full time Job Level Vice President Posted Date Jul 27, 2025 Morgan Stanley API Lead_Vice President_Data & Analytics Engineering Profile Description We’re seeking someone to join our team as API Lead. As part of our Cyber Data Risk & Resilience team, you will play a key role in helping transform how Morgan Stanley operates. CDRR_Technology The Cybersecurity organization's mission is to create an agile, adaptable organization with the skills and expertise needed to defend against increasingly sophisticated adversaries. This will be achieved by maintaining sound capabilities to identify and protect our assets, proactively assessing threats and vulnerabilities, detecting events, ensuring resiliency through our ability to respond to and recover from incidents, and building awareness and increasing vigilance while continually developing our cyber workforce. The Firmwide Data Office Data COE team is distributed globally across New York, London, Budapest, India, and Shanghai, and is engaged in a wide array of projects touching all business units (Institutional Securities, Investment Management, Wealth Management) and functions (e.g., Operations, Finance, Risk, Trading, Treasury, Resilience) across the Firm. The team vision is a multi-year effort to simplify the firm’s data architecture and business processes front-to-back, with the goals of reducing infrastructure and manpower costs, improving the ability to demonstrate control of data, empowering developers by providing consistent means of handling data, facilitating data-driven insights and decision making, and providing a platform to implement future change initiatives faster, cheaper, and easier. 
Data & Analytics Engineering This is a Vice President position that provides specialist data analysis and expertise to drive decision-making and business insights, as well as crafting data pipelines, implementing data models, and optimizing data processes for improved data accuracy and accessibility, including applying machine learning and AI-based techniques. Morgan Stanley is an industry leader in financial services, known for mobilizing capital to help governments, corporations, institutions, and individuals around the world achieve their financial goals. At Morgan Stanley India, we support the Firm’s global businesses, with a critical presence across Institutional Securities, Wealth Management, and Investment Management, as well as in the Firm’s infrastructure functions of Technology, Operations, Finance, Risk Management, Legal and Corporate & Enterprise Services. Morgan Stanley has been rooted in India since 1993, with campuses in both Mumbai and Bengaluru. We empower our multi-faceted and talented teams to advance their careers and make a global impact on the business. For those who show passion and grit in their work, there’s ample opportunity to move across the businesses. Interested in joining a team that’s eager to create, innovate and make an impact on the world? Read on… What you’ll do in the role: The Firmwide Data Office department is recruiting an enthusiastic, dynamic, hands-on and delivery-focused Senior API Developer. As a member of our Software Development team, we look first and foremost for people who are passionate about solving business problems through innovation and engineering practices. You'll be required to apply your depth of knowledge and expertise to all aspects of the software development lifecycle, as well as partner with stakeholders to stay focused on business goals. We embrace a culture of experimentation and constantly strive for improvement and learning. 
Work in a collaborative, trusting, thought-provoking environment, one that encourages diversity of thought and creative solutions that are in the best interests of our customers globally. Combine your design and development expertise with a never-ending quest to create innovative technology through solid engineering practices. You'll work with a highly inspired and inquisitive team of technologists who are developing and delivering top-quality technology products to our clients and stakeholders. What you’ll bring to the role: 10-15 years of hands-on experience in API Development and strong expertise in Core Java, Multithreading, and Object-Oriented Design. Strong experience in designing and developing RESTful APIs; good working knowledge of GraphQL. Experience with Java 15 or later (Java 17 preferred). Design, develop, and maintain core components of a high-performance application built around a knowledge graph architecture. Implement and optimize scalable backend solutions integrating with a graph database. Develop and support APIs (both REST and GraphQL) to expose and manage application functionality efficiently. Deep understanding of distributed caching mechanisms, including Hazelcast, Caffeine, InCache, or Google Guava Cache. Experience integrating with graph databases (preferably Stardog); additional knowledge of Apache Jena and SPARQL is a strong plus. Understanding of application security, authentication, and authorization best practices. Experience with ZooKeeper for coordination and distributed systems management. Familiarity with load balancer configurations and application performance tuning. Analyze, debug, and enhance existing components using modern Java practices, ensuring maintainability and reliability. Required Skills & Qualifications: Proficiency with Spring Framework, Spring Boot, and a deep understanding of Spring Annotations and Java-based configurations. 
Strong understanding of system design principles, including scalability, fault tolerance, distributed systems, and performance optimization. Experience with unit and integration testing in modern Java applications. Full software development life cycle. Collaborate with cross-functional teams to ensure seamless data flow and performance through intelligent caching strategies. Good to Have: Experience or familiarity with Reactive Programming, especially using Spring WebFlux. Hands-on knowledge of HTTP clients like OkHttp, WebClient, or similar. Experience with or exposure to other graph databases or triple stores (e.g., Neo4j, Virtuoso, Blazegraph, RDF4J). Working knowledge of Redis for caching or data storage. Understanding of search and indexing systems similar to ElasticSearch used for building scalable and efficient search features. Exposure to observability and monitoring tools such as Prometheus, Grafana, Loki, Kibana or Splunk. What you can expect from Morgan Stanley We are committed to maintaining the first-class service and high standard of excellence that have defined Morgan Stanley for over 85 years. At our foundation are five core values — putting clients first, doing the right thing, leading with exceptional ideas, committing to diversity and inclusion, and giving back — that guide our more than 80,000 employees in 1,200 offices across 42 countries. At Morgan Stanley, you’ll find trusted colleagues, committed mentors and a culture that values diverse perspectives, individual intellect and cross-collaboration. Our Firm is differentiated by the caliber of our diverse team, while our company culture and commitment to inclusion define our legacy and shape our future, helping to strengthen our business and bring value to clients around the world. Learn more about how we put this commitment to action: morganstanley.com/diversity. 
We are proud to support our employees and their families at every point along their work-life journey, offering some of the most attractive and comprehensive employee benefits and perks in the industry. Morgan Stanley is an equal opportunities employer. We work to provide a supportive and inclusive environment where all individuals can maximise their full potential. Our skilled and creative workforce is comprised of individuals drawn from a broad cross section of the global communities in which we operate and who reflect a variety of backgrounds, talents, perspectives and experiences. Our strong commitment to a culture of inclusion is evident through our constant focus on recruiting, developing and advancing individuals based on their skills and talents. There’s also ample opportunity to move about the business for those who show passion and grit in their work. To learn more about our offices across the globe, please copy and paste https://www.morganstanley.com/about-us/global-offices into your browser. 
Posted 6 days ago
5.0 - 9.0 years
0 Lacs
pune, maharashtra
On-site
You are an experienced Full-Stack Developer with 5+ years of experience in building scalable web applications using Python (FastAPI), React.js, and cloud-native technologies. In this role, you will be responsible for developing a low-code/no-code AI agent platform, implementing an intuitive workflow UI, and integrating with LLMs, enterprise connectors, and role-based access controls. Your responsibilities will include backend development where you will develop and optimize APIs using FastAPI, integrating with LangChain, vector databases (Pinecone/Weaviate), and enterprise connectors (Airbyte/Nifi). Additionally, you will work on frontend development to build an interactive drag-and-drop workflow UI using React.js (React Flow, D3.js, TailwindCSS). You will also be involved in implementing OAuth2, Keycloak, and role-based access controls (RBAC) for multi-tenant environments. Database design is a crucial part of this role, where you will work with PostgreSQL (structured data), MongoDB (unstructured data), and Neo4j (knowledge graphs). DevOps & Deployment tasks will involve deploying using Docker, Kubernetes, and Terraform across multi-cloud (Azure, AWS, GCP) to ensure smooth operations. Performance optimization is another key area where you will focus on improving API performance and optimizing frontend responsiveness for seamless user experience. Collaboration with AI & Data Engineers is essential, as you will work closely with the Data Engineering team to ensure smooth AI model integration. To be successful in this role, you are required to have 5+ years of experience in FastAPI, React.js, and cloud-native applications. Strong knowledge of REST APIs, GraphQL, and WebSockets is essential, along with experience in JWT authentication, OAuth2, and multi-tenant security. Additionally, proficiency in PostgreSQL, MongoDB, Neo4j, and Redis is expected. 
Knowledge of workflow automation tools (n8n, Node-RED, Temporal.io), familiarity with containerization (Docker, Kubernetes), and CI/CD pipelines is also required. Bonus skills include experience in Apache Kafka, WebSockets, or AI-driven chatbots.
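The role-based access controls (RBAC) for multi-tenant environments mentioned above can be illustrated with a minimal, framework-free sketch; the role names, permissions, and tenant model below are invented, and a production system would derive them from Keycloak/OAuth2 token claims rather than hard-coded dictionaries.

```python
# Minimal multi-tenant RBAC check. Roles, permissions, and the tenant
# model are illustrative only; real claims would come from a verified token.
ROLE_PERMISSIONS = {
    "viewer": {"workflow:read"},
    "editor": {"workflow:read", "workflow:write"},
    "admin":  {"workflow:read", "workflow:write", "tenant:manage"},
}

def is_allowed(user, permission, tenant_id):
    """A user may act only inside their own tenant and within their role's permissions."""
    if user["tenant_id"] != tenant_id:
        return False  # cross-tenant access is always denied
    return permission in ROLE_PERMISSIONS.get(user["role"], set())

alice = {"name": "alice", "role": "editor", "tenant_id": "acme"}
```

In a FastAPI app this check would typically live in a dependency that runs before each route handler.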
Posted 6 days ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Role Summary At Pfizer we make medicines and vaccines that change patients' lives with a global reach of over 1.1 billion patients. Pfizer Digital is the organization charged with winning the digital race in the pharmaceutical industry. We apply our expertise in technology, innovation, and our business to support Pfizer in this mission. Our team, the GSES Team, is passionate about using software and data to improve manufacturing processes. We partner with other Pfizer teams focused on: Manufacturing throughput efficiency and increased manufacturing yield Reduction of end-to-end cycle time and increase of percent release attainment Increased quality control lab throughput and more timely closure of quality assurance investigations Increased manufacturing yield of vaccines More cost-effective network planning decisions and lowered inventory costs In the Senior Associate, Integration Engineer role, you will help implement data capabilities within the team to enable advanced, innovative, and scalable database services and data platforms. You will utilize modern Data Engineering principles and techniques to help the team better deliver value in the form of AI, analytics, business intelligence, and operational insights. You will be on a team responsible for executing on technical strategies, designing architecture, and developing solutions to enable the Digital Manufacturing organization to deliver value to our partners across Pfizer. Most of all, you’ll use your passion for data to help us deliver real value to our global network of manufacturing facilities, changing patient lives for the better! 
Role Responsibilities The Senior Associate, Integration Engineer’s responsibilities include, but are not limited to: Maintain Database Service Catalogues Build, maintain and optimize data pipelines Support cross-functional teams with data related tasks Troubleshoot data-related issues, identify root causes, and implement solutions in a timely manner Automate builds and deployments of database environments Support development teams in database related troubleshooting and optimization Document technical specifications, data flows, system architectures and installation instructions for the provided services Collaborate with stakeholders to understand data requirements and translate them into technical solutions Participate in relevant SAFe ceremonies and meetings Basic Qualifications Education: Bachelor’s degree or Master’s degree in Computer Science, Data Engineering, Data Science, or related discipline Minimum 3 years of experience in Data Engineering, Data Science, Data Analytics or similar fields Broad Understanding of data engineering techniques and technologies, including at least 3 of the following: PostgreSQL (or similar SQL database(s)) Neo4J/Cypher ETL (Extract, Transform, and Load) processes Airflow or other Data Pipeline technology Kafka Distributed Event Streaming platform Proficient or better in a scripting language, ideally Python Experience tuning and optimizing database performance Knowledge of modern data integration patterns Strong verbal and written communication skills and ability to work in a collaborative team environment, spanning global time zones Proactive approach and goal-oriented mindset Self-driven approach to research and problem solving with proven analytical skills Ability to manage tasks across multiple projects at the same time Preferred Qualifications Pharmaceutical Experience Experience working with Agile delivery methodologies (e.g., Scrum) Experience with Graph Databases Experience with Snowflake Familiarity with cloud platforms 
such as AWS Experience with containerization technologies such as Docker and orchestration tools like Kubernetes Physical/Mental Requirements None Non-standard Work Schedule, Travel Or Environment Requirements Job will require working with global teams and applications. Flexible working schedule will be needed on occasion to accommodate planned agile sprint planning and system releases as well as unplanned/on-call level 3 support. Travel requirements are project based. Estimated percentage of travel to support project and departmental activities is less than 10%. Work Location Assignment: Hybrid Pfizer is an equal opportunity employer and complies with all applicable equal employment opportunity legislation in each jurisdiction in which it operates. Information & Business Tech
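The ETL and pipeline responsibilities described in this posting can be sketched compactly. In the example below, Python's built-in sqlite3 stands in for PostgreSQL, and the batch records, table, and column names are invented for illustration; a real pipeline would likely be orchestrated by Airflow tasks rather than called inline.

```python
import sqlite3

# Toy ETL pipeline: extract raw batch records, transform (drop incomplete
# rows, normalize types and casing), and load into a target table.
RAW_RECORDS = [
    {"batch": "B-001", "yield_kg": "152.4", "site": "chennai"},
    {"batch": "B-002", "yield_kg": None, "site": "chennai"},  # incomplete -> dropped
    {"batch": "B-003", "yield_kg": "98.1", "site": "vizag"},
]

def extract():
    """Stands in for reading from a source system or file drop."""
    return RAW_RECORDS

def transform(rows):
    """Keep complete rows; cast yield to float, upper-case the site code."""
    return [
        (r["batch"], float(r["yield_kg"]), r["site"].upper())
        for r in rows
        if r["yield_kg"] is not None
    ]

def load(rows, conn):
    conn.execute("CREATE TABLE IF NOT EXISTS batch_yield (batch TEXT, yield_kg REAL, site TEXT)")
    conn.executemany("INSERT INTO batch_yield VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract()), conn)
loaded = conn.execute("SELECT batch, yield_kg, site FROM batch_yield ORDER BY batch").fetchall()
```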
Posted 6 days ago
6.0 years
3 - 6 Lacs
Chennai
On-site
6+ years of IT experience and 4+ years of experience in Neo4j. Design and implement efficient graph models using Neo4j to represent complex relationships. Write optimized Cypher queries for data retrieval, manipulation, and aggregation. Develop and maintain ETL pipelines to integrate data from various sources into the graph database. Integrate Neo4j databases with existing systems using APIs and other middleware technologies. About Virtusa Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
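The graph modeling and Cypher work this role describes can be approximated in plain Python for illustration. The Cypher string below shows the kind of query a real Neo4j deployment would run; the triples and names are invented, and the in-memory lookup only mimics what MATCH does over a property graph.

```python
# Cypher a real Neo4j deployment might run (parameterized on $company):
CYPHER_EXAMPLE = """
MATCH (p:Person)-[:WORKS_AT]->(c:Company {name: $company})
RETURN p.name
"""

# (source, relationship, target) triples stand in for the property graph.
EDGES = [
    ("alice", "WORKS_AT", "virtusa"),
    ("bob",   "WORKS_AT", "virtusa"),
    ("carol", "WORKS_AT", "acme"),
    ("alice", "KNOWS",    "carol"),
]

def match(rel, target):
    """Rough equivalent of the MATCH above: who has `rel` pointing at `target`?"""
    return sorted(src for src, r, dst in EDGES if r == rel and dst == target)

employees = match("WORKS_AT", "virtusa")
```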
Posted 6 days ago
0 years
0 Lacs
Pune, Maharashtra, India
Remote
Our world is transforming, and PTC is leading the way. Our software brings the physical and digital worlds together, enabling companies to improve operations, create better products, and empower people in all aspects of their business. Our people make all the difference in our success. Today, we are a global team of nearly 7,000 and our main objective is to create opportunities for our team members to explore, learn, and grow – all while seeing their ideas come to life and celebrating the differences that make us who we are and the work we do possible. PTC is a dynamic and innovative company dedicated to creating products that transform industries and improve lives. We are looking for a talented Product Architect who can lead the conceptualization and development of groundbreaking products and leverage cutting-edge AI technologies to drive enhanced productivity and innovation. Job Description: Responsibilities: Design and implement scalable, secure, and high-performing Java applications. Focus on designing, building, and maintaining complex, large-scale systems with intrinsic multi-tenant SaaS characteristics. Define architectural standards, best practices, and technical roadmaps. Lead the integration of modern technologies, frameworks, and cloud solutions. Collaborate with DevOps, product teams, and UI/UX designers to ensure cohesive product development. Conduct code reviews, mentor developers, and enforce best coding practices. Stay up-to-date with the latest design patterns, technological trends, and industry best practices. Ensure scalability, performance, and security of product designs. Conduct feasibility studies and risk assessments. Requirements: Proven experience as a Software Solution Architect or similar role. Strong expertise in vector and graph databases (e.g., Pinecone, Chroma DB, Neo4j, ArangoDB, Elasticsearch). Extensive experience with content repositories and content management systems. 
Familiarity with SaaS and microservices implementation models. Proficiency in programming languages such as Java, Python, or C#. Excellent problem-solving skills and ability to think strategically. Strong technical, analytical, communication, interpersonal, and presentation skills. Bachelor's or Master's degree in Computer Science, Engineering, or related field. Experience with cloud platforms (e.g., AWS, Azure). Knowledge of containerization technologies (e.g., Docker, Kubernetes). Experience with artificial intelligence (AI) and machine learning (ML) technologies. Benefits: Competitive salary and benefits package. Opportunities for professional growth and development. Collaborative and inclusive work environment. Flexible working hours and remote work options. Life at PTC is about more than working with today’s most cutting-edge technologies to transform the physical world. It’s about showing up as you are and working alongside some of today’s most talented industry leaders to transform the world around you. If you share our passion for problem-solving through innovation, you’ll likely become just as passionate about the PTC experience as we are. Are you ready to explore your next career move with us? We respect the privacy rights of individuals and are committed to handling Personal Information responsibly and in accordance with all applicable privacy and data protection laws. Review our Privacy Policy here.
Posted 6 days ago
4.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Responsibilities Must have 4+ years of experience in Python programming. Experience with Django, Flask, FastAPI, and Kubernetes; building APIs and creating microservices using Python. Ability to think through and build API & SDK designs is a must. Expertise in Agile development methodology. Strong knowledge of design patterns, security, and performance tuning. Pydantic and linting using flake8 or a similar process in Python microservices. Hands-on experience using PyMongo for integration and retrieval of Mongo collections. Hands-on experience integrating with GraphQL and graph databases. Good understanding of setting up a Neo4j database. Write clean, well-documented, and type-safe Python code with strong OOP practices and typing discipline. Design and manage data models using SQLAlchemy or similar ORMs for both relational and NoSQL databases. Ensure high availability, reliability, and performance of backend services. Collaborate with DevOps to build, test, and deploy services via CI/CD pipelines (GitHub Actions or GitLab CI). Implement feature-branch workflows, follow conventional commits, and conduct peer code reviews. Write and maintain unit, integration, and functional tests for backend services. Monitor and resolve issues in production APIs and services. Requirements Mandatory: Python + FastAPI Experience: 5 to 8 years Location: Chennai Notice Period: Immediate to 15 days only serving (ref:hirist.tech)
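The type-safe models and validation this posting asks for (Pydantic behind FastAPI routes) can be sketched with the standard library alone; the field names and validation rules below are illustrative, not taken from the posting, and Pydantic would replace the hand-rolled checks in practice.

```python
from dataclasses import dataclass

# Stdlib stand-in for a Pydantic model: a frozen dataclass whose
# __post_init__ enforces the validation a request model would carry.
@dataclass(frozen=True)
class UserIn:
    username: str
    age: int

    def __post_init__(self):
        if not self.username:
            raise ValueError("username must be non-empty")
        if self.age < 0:
            raise ValueError("age must be non-negative")

def create_user(payload: dict) -> UserIn:
    """What a POST handler would do after JSON parsing: coerce and validate into a typed model."""
    return UserIn(username=str(payload["username"]), age=int(payload["age"]))

user = create_user({"username": "priya", "age": 31})
```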
Posted 1 week ago
3.0 - 6.0 years
10 - 20 Lacs
Bengaluru
Work from Office
We are looking for talented Python Developers with 3 - 6 years of hands-on experience in Natural Language Processing (NLP) and Generative AI Models to join our growing team working on innovative projects in enterprise-scale AI applications and solutions. Responsibilities: Design, evaluation, and best-fit analysis of NLP / ML Models. Design, fine-tuning, and deployment of open-source and API-based LLMs to solve real-world use cases. Engage with clients on the data preparation process, the subtleties of the fine-tuning procedure, and the challenges it presents. Lead efforts to address challenging data science and machine learning problems, spanning predictive modeling to natural language processing, with a significant impact on organizational success. Building data ingestion pipelines, handling various data formats such as JSON and YAML. Technical Qualifications: Strong in data collection, curation, and dataset preparation for ML Models (Classifiers / Categorization). Demonstrable expertise in an area of data science or analytics (e.g., machine learning, deep learning, NLP, predictive modeling and forecasting, statistics). Strong experience in machine learning (supervised and unsupervised techniques); experience with algorithms such as k-NN, Naive Bayes, SVM, Decision Forests, logistic regression, MLPs, RNNs, Attention, and Generative Models would be a plus. SQL expertise for data extraction, transformation, and analysis. Experience in building and maintaining scalable APIs using FastAPI / Flask frameworks. Proficiency in Python, with experience in libraries such as PyTorch or TensorFlow. Solid understanding of data structures, embedding techniques, and vector search systems. Experience in SQL / NoSQL databases including PostgreSQL, MySQL and MongoDB. Proficiency in Graph Databases (Neo4j). 
Generic Qualifications: Comfortable working with cross-functional teams product managers, data scientists, and other developers to deliver quality software solutions. Ensure software quality through code reviews, unit testing, and best practices for DevOps and CI/CD pipelines. Ability to manage time wisely across projects and competing priorities, meet agreed upon deadlines, and be accountable for work. Able to write maintainable and functionally tested modules. You should not hesitate in learning and building innovative solutions in newer technology stacks. Others: Candidate should be willing to relocate or commute to North Bangalore (Yelahanka) for work.
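Among the algorithms listed in the technical qualifications above, k-NN is simple enough to sketch from scratch; the toy 2-D points and labels below are invented for illustration, and scikit-learn's KNeighborsClassifier would be the usual production choice.

```python
from collections import Counter
import math

# Toy training set: 2-D points with class labels (invented data).
TRAIN = [((1.0, 1.0), "a"), ((1.2, 0.8), "a"),
         ((5.0, 5.0), "b"), ((5.2, 4.9), "b"), ((4.8, 5.3), "b")]

def knn_predict(point, k=3):
    """Majority label among the k nearest training points (Euclidean distance)."""
    dists = sorted((math.dist(point, p), label) for p, label in TRAIN)
    top = [label for _, label in dists[:k]]
    return Counter(top).most_common(1)[0][0]
```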
Posted 1 week ago
0 years
0 Lacs
Greater Chennai Area
On-site
Overview Java development with hands-on experience in Spring Boot. Strong knowledge of UI frameworks, particularly Angular, for developing dynamic, interactive web applications. Experience with Kubernetes for managing microservices-based applications in a cloud environment. Familiarity with Postgres (relational) and Neo4j (graph database) for managing complex data models. Experience in Meta Data Modeling and designing data structures that support high-performance and scalability. Expertise in Camunda BPMN and business process automation. Experience implementing rules with Drools Rules Engine. Knowledge of Unix/Linux systems for application deployment and management. Experience building data Ingestion Frameworks to process and handle large datasets. Responsibilities Key Responsibilities: Meta Data Modeling: Develop and implement meta data models that represent complex data structures and relationships across the system. Collaborate with cross-functional teams to design flexible, efficient, and scalable meta data models to support application and data processing requirements. Software Development (Java & Spring Boot): Develop high-quality, efficient, and scalable Java applications using Spring Boot and other Java-based frameworks. Participate in full software development lifecycle: design, coding, testing, deployment, and maintenance. Optimize Java applications for performance and scalability. UI Development (Angular): (Optional) Design and implement dynamic, responsive, and user-friendly web UIs using Angular. Integrate the UI with backend microservices, ensuring a seamless and efficient user experience. Ensure that the UI adheres to best practices in terms of accessibility, security, and usability. Containerization & Microservices (Kubernetes): Design, develop, and deploy microservices using Kubernetes to ensure high availability and scalability of applications. Use Docker containers and Kubernetes for continuous deployment and automation of application lifecycle. 
Maintain and troubleshoot containerized applications in a cloud or on-premise Kubernetes environment. Requirements Database Management (Postgres & Neo4j): Design and implement database schemas and queries for both relational databases (Postgres) and graph databases (Neo4j). Develop efficient data models and support high-performance query optimization. Collaborate with the data engineering team to integrate data pipelines and ensure the integrity of data storage. Business Process Modeling (BPMN): Utilize BPMN to model business processes and workflows. Design and optimize process flows to improve operational efficiency. Work with stakeholders to understand business requirements and implement process automation. Rule Engine (Drools Rules): Implement business logic using the Drools Rules Engine to automate decision-making processes. Work with stakeholders to design and define business rules and integrate them into applications. Ingestion Framework: Build and maintain robust data ingestion frameworks that process large volumes of data efficiently. Ensure proper data validation, cleansing, and enrichment during the ingestion process.
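Drools itself is a Java rules engine, so the following Python sketch only illustrates the general condition/action rule pattern the posting describes; the rule names, facts, and thresholds are all invented for the example.

```python
# Declarative rules: (name, condition over facts, updates applied when it fires).
RULES = [
    ("flag_large_order",  lambda f: f["amount"] > 10_000, {"review": True}),
    ("waive_fee_for_vip", lambda f: f["tier"] == "vip",   {"fee": 0}),
]

def run_rules(facts):
    """Apply every rule whose condition matches; later rules see earlier updates."""
    facts = dict(facts)  # don't mutate the caller's facts
    fired = []
    for name, condition, updates in RULES:
        if condition(facts):
            facts.update(updates)
            fired.append(name)
    return facts, fired

result, fired = run_rules({"amount": 25_000, "tier": "vip", "fee": 50})
```

A real engine adds pattern matching over a working memory (the Rete algorithm in Drools) rather than a linear scan.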
Posted 1 week ago
0 years
0 Lacs
India
On-site
Sanctity AI is a Netherlands-based startup founded by an IIT alum, specializing in ethical, safe, and impactful artificial intelligence. Our agile team is deeply focused on critical areas like AI alignment, responsible LLM training, prompt orchestration, and advanced agent infrastructure. In a landscape where many talk ethics, we build and deploy solutions that genuinely embody ethical AI principles. Sanctity AI is positioned at the forefront of solving real-world alignment challenges, shaping the future of trustworthy artificial intelligence. We leverage proprietary algorithms, rigorous ethical frameworks, and cutting-edge research to deliver AI solutions with unparalleled transparency, robustness, and societal impact. Sanctity AI represents a rare opportunity in the rapidly evolving AI ecosystem, committed to sustainable innovation and genuine human-AI harmony.

The Role
As an AI ML Intern reporting directly to the founder, you'll go beyond just coding. You'll own whole pipelines, from data wrangling to deploying cutting-edge ML models in production. You'll also get hands-on experience with large language models (LLMs), prompt engineering, semantic search, and retrieval-augmented generation. Whether it's spinning up APIs in FastAPI, containerizing solutions with Docker, or exploring vector and graph databases like Pinecone and Neo4j, you'll be right at the heart of our AI innovation.

What You'll Tackle
Data to Insights: Dive into heaps of raw data and turn it into actionable insights that shape real decisions.
Model Building & Deployment: Use Scikit-learn, XGBoost, LightGBM, and advanced deep learning frameworks (TensorFlow, PyTorch, Keras) to develop state-of-the-art models. Then push them to production, scaling on AWS, GCP, or other cloud platforms.
LLM & Prompt Engineering: Fine-tune and optimize large language models. Experiment with prompt strategies and incorporate RAG (Retrieval-Augmented Generation) for more insightful outputs.
Vector & Graph Databases: Implement solutions using Pinecone, Neo4j, or similar technologies for advanced search and data relationships.
Microservices & Big Data: Leverage FastAPI (or similar frameworks) to build robust APIs. If you love large-scale data processing, dabble in Apache Spark, Hadoop, or Kafka to handle the heavy lifting.
Iterative Improvement: Observe model performance, gather metrics, and keep refining until the results shine.

Who You Are
Python Pro: You write clean, efficient Python code using libraries like Pandas, NumPy, and Scikit-learn.
Passionate About AI/ML: You've got a solid grasp of algorithms and can't wait to explore deep learning or advanced NLP.
LLM Enthusiast: You're familiar with training or fine-tuning large language models and love the challenge of prompt engineering.
Cloud & Containers Savvy: You've at least toyed with AWS, GCP, or similar, and have some experience with Docker or other containerization tools.
Data-Driven & Detail-Oriented: You enjoy unearthing insights in noisy datasets and take pride in well-documented, maintainable code.
Curious & Ethical: You believe AI should be built responsibly and love learning about new ways to do it better.
Languages: You can fluently communicate complex technical ideas in English. Fluency in Dutch, Spanish, or French is a plus.
Math Wizard: You have a strong grip on advanced mathematics and statistical modeling. This is a core requirement.

Why Join Us?
Real-World Impact: Your work will address real-world industry challenges, problems that genuinely need AI solutions.
Mentorship & Growth: Team up daily with founders and seasoned AI pros, accelerating your learning and skill-building.
Experimentation Culture: We encourage big ideas and bold experimentation. Want to try a new approach? Do it.
Leadership Path: Show us your passion and skills, and you could move into a core founding team member role, shaping our future trajectory.

Interested?
Send over your résumé, GitHub repos, or any project links that showcase your passion and talent. We can’t wait to see how you think, build, and innovate. Let’s team up to create AI that isn’t just powerful—but also responsibly built for everyone.
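The semantic-search half of the RAG stack this internship mentions boils down to nearest-neighbour lookup over embedding vectors. As a rough, self-contained sketch, with toy 3-dimensional vectors and an in-memory dict standing in for a real embedding model and a vector store such as Pinecone:

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query_vec, index, k=2):
    """Rank stored (doc_id, vector) pairs by similarity to the query vector."""
    scored = [(doc_id, cosine(query_vec, vec)) for doc_id, vec in index.items()]
    return sorted(scored, key=lambda t: t[1], reverse=True)[:k]

# Toy "embeddings"; a real system would store a model's high-dimensional output.
index = {
    "doc_a": [0.9, 0.1, 0.0],
    "doc_b": [0.0, 1.0, 0.1],
    "doc_c": [0.85, 0.15, 0.05],
}
hits = top_k([1.0, 0.0, 0.0], index, k=2)
```

A production vector database does the same ranking with approximate nearest-neighbour indexes so it scales past brute force.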
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
chennai, tamil nadu
On-site
Embark on a transformative journey with SwaaS, where innovation meets opportunity. Explore thrilling career prospects at the cutting edge of technology. Join our dynamic team, dedicated to shaping the future of IT. At SwaaS, we offer more than just jobs; we provide a platform for growth, collaboration, and impactful contributions. Discover a workplace where your aspirations align with limitless possibilities. Your journey towards a rewarding career in technology begins here, with SwaaS as your guide.

Perks and Benefits
We go beyond salaries and provide guaranteed benefits that speak to SwaaS's values and culture. Our employees receive common benefits as well as performance-based individual benefits.
Performance-based benefits: We promote a culture of equity. Accept the challenge, deliver the results, and get rewarded.
Healthcare: Our comprehensive medical insurance helps you cover your urgent medical needs.
Competitive Salary: We assure with pride that we are on par with the industry leaders in terms of our salary package.
Employee Engagement: A break is always needed from regular, monotonous work assignments. Our employee engagement program helps our employees enhance their team bonding.
Upskilling: We believe in fostering a culture of learning and harnessing the untapped potential in our employees. Everyone is encouraged and rewarded for acquiring new skills and certifications.

Senior AI/ML Developer (Lead Role) (Experience: 3 - 5 years)
Tech Stack: Python, Node.js (JavaScript), LangChain, LlamaIndex, OpenAI API, Perplexity.ai API, Neo4j, Docker, Kubernetes

Responsibilities:
Lead the integration of Perplexity.ai and ChatGPT APIs for real-time chatbot interactions.
Design and implement intent extraction and NLP models for conversational AI.
Design and implement RAG (Retrieval-Augmented Generation) pipelines for context-aware applications.
Guide junior developers and ensure best practices in LLM application development.
Set up Python API security and performance monitoring.

Requirements:
Deep knowledge of NLP and LLMs (ChatGPT, GPT-4, Perplexity.ai, LangChain).
Experience with graph databases (Neo4j) and RAG techniques.
Familiarity with vector databases (Pinecone, Weaviate, FAISS).
Strong expertise in Python (FastAPI, Flask) and/or Node.js (Express, NestJS).
Good to have: experience with containerization (Docker, Kubernetes).
Excellent problem-solving skills and experience leading a small AI/ML team.
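The RAG pipelines in this role's responsibilities ultimately hinge on packing retrieved passages into the model prompt before generation. A minimal, illustrative sketch; the character budget, delimiter, and prompt wording are assumptions, not any framework's API:

```python
def build_rag_prompt(question, passages, max_chars=500):
    """Pack pre-ranked retrieved passages into a context block, then append the question.

    Passages are added in rank order until the character budget is exhausted,
    a crude stand-in for token-based budgeting against a model's context window.
    """
    context, used = [], 0
    for p in passages:
        if used + len(p) > max_chars:
            break
        context.append(p)
        used += len(p)
    joined = "\n---\n".join(context)
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{joined}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_rag_prompt(
    "What database stores the graph?",
    ["Neo4j stores the relationship graph.", "FastAPI serves the chatbot endpoints."],
)
```

Frameworks like LangChain wrap this assembly step in prompt templates, but the underlying mechanic is the same string construction.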
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
As a Software Application Engineer at Nbyula, you will be responsible for developing production-level web and/or native applications that aim to solve real-world problems, reduce manual errors, automate monotonous tasks, and enable remote collaboration in a highly collaborative crowd-sourced environment. This full-time terraformer position requires you to quickly absorb functional domain skills, business knowledge, market insights, and organizational values of Nbyula. Ideal Terraformers at Nbyula possess the following attributes: - Openness to diverse perspectives and approaches in problem-solving - Conscientiousness and dedication towards larger goals - Humility, respect, and willingness to voice different perspectives respectfully - Willingness to take calculated risks and explore new ideas - Ability to self-learn and research for solutions - Commitment to self-actualization and reaching full potential without being distracted **Roles, Responsibilities & Expectations:** - Collaborate with the engineering and design teams to build reusable frontend components for various web applications - Develop and maintain reusable libraries, packages, and frameworks for server-side development - Participate in product requirements, design, and engineering discussions - Ensure timely delivery of fully functional products with high quality - Take ownership of deliverables beyond technical expertise - Work with technologies like Cloud computing, OSS, Schemaless DB, Machine Learning, and more **Qualifications & Skills:** - B.Sc./B.E./B.Tech/M.Tech in a related field - Proficiency in writing reusable, performant UI code - Strong knowledge of JavaScript, object-oriented programming, and web concepts - Experience with React.js or other frontend frameworks, NoSQL databases like MongoDB, and client-server systems - Excellent communication and time management skills - Hands-on experience with Python, Django/Flask, Nginx, MySql/MongoDB/Redis, and more - Proficiency in HTML5, CSS, 
client-server architectures, and asynchronous request handling - Knowledge of languages and frameworks like Python, JavaScript, React, Express, and more - Analytical skills and ability to meet time-bound deliverables - Excitement to work in a fast-paced startup environment and solve challenging technology problems **About Nbyula:** Nbyula is a German technology brand with a focus on leveraging cutting-edge technologies to create a global marketplace for talent, content, products, and services. The company aims to empower "Skillizens without Borders" by eliminating barriers through technology. **Job Perks:** - Opportunity to work on innovative projects in the Ed-tech space - Comfortable workspace with gaming chairs and live music - Access to a wide range of books and snacks - Extensive health coverage and long weekend breaks - Stock options, relaxed dress code, and no bureaucracy Join Nbyula to be part of a dynamic team that values creativity, innovation, and personal growth. Explore more about us at [Nbyula Official Website](https://nbyula.com/about-us).
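The asynchronous request handling listed in the qualifications is easy to sketch with Python's asyncio: independent backend calls are fanned out concurrently instead of awaited one by one. The resource names and delay below are placeholders for real I/O such as database or HTTP calls:

```python
import asyncio

async def fetch(resource, delay):
    """Stand-in for a non-blocking I/O call (database query, HTTP request, etc.)."""
    await asyncio.sleep(delay)
    return f"{resource}:ok"

async def handle_request(resources):
    """Fan out all backend calls concurrently; total latency ~= the slowest call."""
    return await asyncio.gather(*(fetch(r, 0.01) for r in resources))

results = asyncio.run(handle_request(["users", "sessions", "profile"]))
```

`asyncio.gather` preserves input order in its results, which keeps response assembly deterministic even though the calls complete concurrently.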
Posted 1 week ago
6.0 years
8 - 23 Lacs
Hyderābād
On-site
Position: Data Engineer
Experience: 6-8 years
Location: Hyderabad, INDIA
Budget: open, based on interview
Preferences: not many job switches; candidates from a reputed college (IIM or IIT)
Must have SaaS product experience
MongoDB: mandatory
Good understanding of database systems: SQL and NoSQL
Must have comprehensive experience in MongoDB or any other document DB

Responsibilities:
Design, build, and optimize data pipelines to ingest, process, transform, and load data from various sources into our data platform
Implement and maintain ETL workflows using tools like Debezium, Kafka, Airflow, and Jenkins to ensure reliable and timely data processing
Develop and optimize SQL and NoSQL database schemas, queries, and stored procedures for efficient data retrieval and processing
Work with both relational databases (MySQL, PostgreSQL) and NoSQL databases (MongoDB, DocumentDB) to build scalable data solutions
Design and implement data warehouse solutions that support analytical needs and machine learning applications
Collaborate with data scientists and ML engineers to prepare data for AI/ML models and implement data-driven features
Implement data quality checks, monitoring, and alerting to ensure data accuracy and reliability
Optimize query performance across various database systems through indexing, partitioning, and query refactoring
Develop and maintain documentation for data models, pipelines, and processes
Collaborate with cross-functional teams to understand data requirements and deliver solutions that meet business needs
Stay current with emerging technologies and best practices in data engineering

Requirements:
6+ years of experience in data engineering or related roles with a proven track record of building data pipelines and infrastructure
Strong proficiency in SQL and experience with relational databases like MySQL and PostgreSQL
Hands-on experience with NoSQL databases such as MongoDB or AWS DocumentDB
Expertise in designing, implementing, and optimizing ETL processes using tools like Kafka, Debezium, Airflow, or similar technologies
Experience with data warehousing concepts and technologies
Solid understanding of data modeling principles and best practices for both operational and analytical systems
Proven ability to optimize database performance, including query optimization, indexing strategies, and database tuning
Experience with AWS data services such as RDS, Redshift, S3, Glue, Kinesis, and the ELK stack
Proficiency in at least one programming language (Python, Node.js, Java)
Experience with version control systems (Git) and CI/CD pipelines
Bachelor's degree in Computer Science, Engineering, or related field

Preferred Qualifications:
Experience with graph databases (Neo4j, Amazon Neptune)
Knowledge of big data technologies such as Hadoop, Spark, Hive, and data lake architectures
Experience working with streaming data technologies and real-time data processing
Familiarity with data governance and data security best practices
Experience with containerization technologies (Docker, Kubernetes)
Understanding of financial back-office operations and the FinTech domain
Experience working in a high-growth startup environment

Job Type: Permanent
Pay: ₹862,603.66 - ₹2,376,731.02 per year
Benefits: Health insurance, Provident Fund
Supplemental Pay: Performance bonus, Yearly bonus
Experience: ETL: 7 years (Preferred); Hadoop: 1 year (Preferred)
Work Location: In person
Application Deadline: 27/07/2025
Expected Start Date: 25/07/2025
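The ingest-transform-load shape of the pipelines described above can be sketched in plain Python: a generator stands in for a Kafka/Debezium consumer, and a dict upsert keyed on `id` stands in for the warehouse, keeping replayed change events idempotent. All names here are illustrative, not a specific tool's API:

```python
def extract(rows):
    """Simulate reading raw change events (a Debezium/Kafka consumer in production)."""
    yield from rows

def transform(events):
    """Normalise field names and drop malformed records."""
    for e in events:
        if "id" not in e:
            continue  # data-quality check: reject records missing the key
        yield {"id": e["id"], "amount": round(float(e.get("amt", 0)), 2)}

def load(records, warehouse):
    """Idempotent upsert keyed on id, so replayed events don't duplicate rows."""
    for r in records:
        warehouse[r["id"]] = r
    return warehouse

raw = [{"id": 1, "amt": "10.5"}, {"bad": True}, {"id": 1, "amt": "12.0"}]
warehouse = load(transform(extract(raw)), {})
```

The replayed event for id 1 simply overwrites the earlier row, which is the property that makes at-least-once delivery from a message broker safe to consume.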
Posted 1 week ago
2.0 years
0 Lacs
India
On-site
Job Description: We are in search of a skilled and innovative MERN (MongoDB, Express.js, React, Node.js) stack developer to join our dynamic team. The ideal candidate will demonstrate proficiency in the MERN stack, showcasing a strong portfolio of successful projects. As a MERN stack developer, you will play a crucial role in designing, building, and maintaining scalable web applications.

Number of Vacancies: 1

Required Skills:
Proven experience as a MERN stack developer or similar role.
Proficiency in MongoDB, Express.js, React, and Node.js.
Strong understanding of TypeScript and JavaScript, including ES6+ syntax.
Experience with frontend technologies such as HTML, CSS, and client-side scripting libraries.
Familiarity with state management libraries such as Redux.
Knowledge of RESTful API design and development.
Understanding of database design and management using MongoDB.
Excellent problem-solving skills and attention to detail.
Effective communication skills for seamless collaboration with team members.

Responsibilities:
Develop and maintain web applications using the MERN stack, ensuring high performance and responsiveness.
Collaborate with cross-functional teams to design, architect, and implement robust solutions.
Create and maintain RESTful APIs for seamless communication between the front end and back end.
Implement effective and secure data storage solutions using MongoDB and other databases.
Optimize applications for maximum speed, scalability, and security.
Participate in code reviews to maintain code quality and enhance team collaboration.
Stay updated with industry trends and technologies, incorporating best practices into development processes.

Nice to Have:
Experience with InfluxDB, Redis, GraphQL, Neo4j, MySQL, etc.
Knowledge of containerization and orchestration tools like Docker and Kubernetes.
Familiarity with continuous integration and deployment processes.
Understanding of serverless architecture.
Exposure to cloud platforms such as AWS, Azure, or Google Cloud.

Preferred Qualifications:
Bachelor's degree in Computer Science, Engineering, or a related field.
Proven experience of at least 2 years in MERN stack development.
A strong portfolio showcasing successful MERN stack projects.

Job Type: Full-time
Location: Adajan | Bhatar
Experience: 3+ years
Posted 1 week ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Role Overview: We are seeking a motivated Junior AI Testing Engineer to join the team. In this role, you will support the testing of AI models and pipelines, with a special focus on data ingestion into knowledge graphs and knowledge graph administration. You will collaborate with data scientists, engineers, and product teams to ensure the quality, reliability, and performance of AI-driven solutions. Key Responsibilities: AI Model & Pipeline Testing: Design and execute test cases for AI models and data pipelines, ensuring accuracy, stability, and fairness. Knowledge Graph Ingestion: Support the development and testing of Python scripts for data extraction, transformation, and loading (ETL) into enterprise knowledge graphs. Knowledge Graph Administration: Assist in maintaining, monitoring, and troubleshooting knowledge graph environments (e.g., Neo4j, RDF stores), including user access and data integrity. Test Automation: Develop and maintain basic automation scripts (preferably in Python) to streamline testing processes for AI functionalities. Data Quality Assurance: Evaluate and validate the quality of input and output data for AI models, reporting and documenting issues as needed. Bug Reporting & Documentation: Identify, document, and communicate bugs or issues discovered during testing. Maintain clear testing documentation and reports. Collaboration: Work closely with knowledge graph engineers, data scientists, and product managers to understand requirements and deliver robust solutions. Requirements: Education: Bachelor’s degree in Computer Science, Information Technology, or related field. Experience: Ideally some experience in software/AI testing, data engineering, or a similar technical role. 
Technical Skills: Proficient in Python (must-have) Experience with test case design, execution, and bug reporting Exposure to knowledge graph technologies (e.g., Neo4j, RDF, SPARQL) and data ingestion/ETL processes Analytical & Problem-Solving Skills: Strong attention to detail, ability to analyze data and systems, and troubleshoot issues. Communication: Clear verbal and written communication skills for documentation and collaboration. Preferred Qualifications: Experience with graph query languages (e.g., Cypher, SPARQL) Exposure to cloud platforms (AWS, Azure, GCP) and CI/CD workflows Familiarity with data quality and governance practices
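The ingestion-plus-QA loop this role describes can be illustrated with a toy in-memory triple store; the class and assertions below are hypothetical stand-ins for a real Neo4j or RDF ingestion script and its test case, not any driver's actual API:

```python
class TripleStore:
    """Tiny in-memory stand-in for a graph database such as Neo4j or an RDF store."""

    def __init__(self):
        self.triples = set()

    def ingest(self, rows):
        """ETL step: each source row becomes a (subject, predicate, object) triple."""
        for subj, pred, obj in rows:
            self.triples.add((subj, pred, obj))

    def query(self, pred):
        """Return all (subject, object) pairs connected by the given predicate."""
        return sorted((s, o) for s, p, o in self.triples if p == pred)

store = TripleStore()
store.ingest([
    ("alice", "works_at", "acme"),
    ("bob", "works_at", "acme"),
    ("alice", "works_at", "acme"),  # duplicate: ingestion should be idempotent
])

# Test cases in the spirit of pipeline QA: duplicates must not inflate the graph,
# and a query must return exactly the ingested relationships.
assert len(store.triples) == 2
assert store.query("works_at") == [("alice", "acme"), ("bob", "acme")]
```

Against a real graph database the same checks would run a Cypher or SPARQL count after each ETL batch; the idempotency property is what makes pipeline re-runs safe.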
Posted 1 week ago
2.0 years
0 Lacs
Gurugram, Haryana, India
On-site
At Wyzard, we are on a mission to revolutionize AI applications in real-world scenarios. We're looking for a Lead AI Engineer to drive the research, development, and deployment of advanced AI systems.

🔹 What You’ll Do:
✅ Lead AI-driven solutions with LLMs (GPT, LLaMA, Claude, Grok, DeepSeek, etc.) & AI agents
✅ Architect & deploy scalable AI models for NLP, Conversational AI, and decision-making
✅ Develop autonomous AI agents that interact with users and datasets
✅ Collaborate with cross-functional teams to define AI use cases & performance metrics
✅ Experiment with the latest AI advancements and optimize LLM performance
✅ Design and implement efficient RAG models that enhance AI-generated responses with retrieved context
✅ Work with vector and graph databases (e.g., MemoryDB, Pinecone, Neo4j) to improve document retrieval accuracy and efficiency
✅ Mentor & guide junior engineers while promoting AI ethics & responsible development

🔹 What We’re Looking For:
✔️ 2+ years in AI/ML engineering
✔️ Hands-on experience with LLMs, AI agents, & NLP
✔️ Experience with vector databases (MemoryDB, Pinecone, ChromaDB, Neo4j, etc.)
✔️ Strong expertise in Machine Learning, Neural Networks & Reinforcement Learning
✔️ Proficiency in TensorFlow, PyTorch & cloud technologies (AWS, GCP, Azure)
✔️ Experience with distributed systems, containerization (Docker, Kubernetes), & fine-tuning models
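The autonomous-agent work referenced above reduces, at its core, to an observe-decide-act loop: the model inspects the task, decides whether to call a tool, and either acts or answers. A deliberately tiny sketch, with a rule-based function standing in for the LLM policy and a toy calculator as the only tool (all names are illustrative):

```python
def calculator(expr):
    """A toy tool; real agents would expose search, retrieval, or API calls."""
    a, op, b = expr.split()
    a, b = float(a), float(b)
    return a + b if op == "+" else a * b

TOOLS = {"calculator": calculator}

def agent_step(observation):
    """Stand-in for the LLM policy: decide whether to call a tool or answer directly."""
    if any(ch.isdigit() for ch in observation):
        return ("call", "calculator", observation)
    return ("answer", observation, None)

def run_agent(task):
    """Single observe-decide-act pass; production agents iterate until a stop signal."""
    action, payload, arg = agent_step(task)
    if action == "call":
        return TOOLS[payload](arg)
    return payload

result = run_agent("2 + 3")
```

Frameworks such as LangChain or CrewAI replace `agent_step` with an LLM call that emits structured tool invocations, but the dispatch loop keeps this same shape.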
Posted 1 week ago