Jobs
Interviews

632 Neo4J Jobs - Page 9

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

10.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Acuity Knowledge Partners (Acuity) is a leading provider of bespoke research, analytics and technology solutions to the financial services sector, including asset managers, corporate and investment banks, private equity and venture capital firms, hedge funds and consulting firms. Its global network of over 6,000 analysts and industry experts, combined with proprietary technology, supports more than 600 financial institutions and consulting companies to operate more efficiently and unlock their human capital, driving revenue higher and transforming operations. Acuity is headquartered in London and operates from 10 locations worldwide. The company fosters a diverse, equitable and inclusive work environment, nurturing talent, regardless of race, gender, ethnicity or sexual orientation. Acuity was established as a separate business from Moody’s Corporation in 2019, following its acquisition by Equistone Partners Europe (Equistone). In January 2023, funds advised by global private equity firm Permira acquired a majority stake in the business from Equistone, which remains invested as a minority shareholder. For more information, visit www.acuitykp.com

Position Title: Associate Director (Senior Architect – Data)
Department: IT
Location: Gurgaon / Bangalore

Job Summary
The Enterprise Data Architect will enhance the company's strategic use of data by designing, developing, and implementing data models for enterprise applications and systems at the conceptual, logical, business-area, and application layers. This role advocates data modeling methodologies and best practices. We seek a skilled Data Architect with deep knowledge of data architecture principles, extensive data modeling experience, and the ability to create scalable data solutions. Responsibilities include developing and maintaining the enterprise data architecture and ensuring data integrity, interoperability, security, and availability, with a focus on ongoing digital transformation projects.

Key Responsibilities

1. Strategy & Planning
o Develop and deliver long-term strategic goals for the data architecture vision and standards in conjunction with data users, department managers, clients, and other key stakeholders.
o Create short-term tactical solutions to achieve long-term objectives and an overall data management roadmap.
o Establish processes for governing the identification, collection, and use of corporate metadata; take steps to assure metadata accuracy and validity.
o Establish methods and procedures for tracking data quality, completeness, redundancy, and improvement.
o Conduct data capacity planning, life cycle, duration and usage-requirement analyses, feasibility studies, and other tasks.
o Create strategies and plans for data security, backup, disaster recovery, business continuity, and archiving.
o Ensure that data strategies and architectures are aligned with regulatory compliance.
o Develop a comprehensive data strategy, in collaboration with different stakeholders, that aligns with the transformational projects’ goals.
o Ensure effective data management throughout the project lifecycle.

2. Acquisition & Deployment
o Ensure the success of enterprise-level application rollouts (e.g., ERP, CRM, HCM, FP&A).
o Liaise with vendors and service providers to select the products or services that best meet company goals.

3. Operational Management
o Assess and determine governance, stewardship, and frameworks for managing data across the organization.
o Develop and promote data management methodologies and standards.
o Document information products from business processes and create data entities.
o Create entity relationship diagrams to show the digital thread across the value streams and the enterprise.
o Drive data normalization across all systems and databases to ensure a common definition of data entities across the enterprise.
o Document enterprise reporting needs and develop the data strategy to enable a single source of truth for all reporting data.
o Address the regulatory compliance requirements of each country and ensure our data is secure and compliant.
o Select and implement the appropriate tools, software, applications, and systems to support data technology goals.
o Oversee the mapping of data sources, data movement, interfaces, and analytics, with the goal of ensuring data quality.
o Collaborate with project managers and business unit leaders on all projects involving enterprise data.
o Address data-related problems regarding systems integration, compatibility, and multi-platform integration.
o Act as a leader and advocate of data management, including coaching, training, and career development for staff.
o Develop and implement key components as needed to create testing criteria that guarantee the fidelity and performance of the data architecture.
o Document the data architecture and environment to maintain a current and accurate view of the larger data picture.
o Identify and develop opportunities for data reuse, migration, or retirement.

4. Data Architecture Design
o Develop and maintain the enterprise data architecture, including data models, databases, data warehouses, and data lakes.
o Design and implement scalable, high-performance data solutions that meet business requirements.

5. Data Governance
o Establish and enforce data governance policies and procedures as agreed with stakeholders.
o Maintain data integrity, quality, and security within Finance, HR, and other enterprise systems.

6. Data Migration
o Oversee the migration of data from legacy systems to the new systems being put in place.
o Define and manage data mappings, cleansing, transformation, and validation to ensure accuracy and completeness.

7. Master Data Management
o Devise processes to manage master data (e.g., customer, vendor, product information) to ensure consistency and accuracy across enterprise systems and business processes.
o Provide data management methods (create, update, and delimit) to ensure master data is governed.

8. Stakeholder Collaboration
o Collaborate with business users, system vendors, and other stakeholders to understand data requirements.
o Ensure the enterprise system meets the organization's data needs.

9. Training and Support
o Provide training and support to end users on data entry, retrieval, and reporting within the candidate enterprise systems.
o Promote user adoption and proper use of data.

10. Data Quality Assurance
o Implement data quality assurance measures to identify and correct data issues.
o Ensure Oracle Fusion and other enterprise systems contain reliable and up-to-date information.

11. Reporting and Analytics
o Facilitate the development of reporting and analytics capabilities within Oracle Fusion and other systems.
o Enable data-driven decision-making through robust data analysis.

12. Continuous Improvement
o Continuously monitor and improve data processes and the data capabilities of Oracle Fusion and other systems.
o Leverage new technologies for enhanced data management to support evolving business needs.

Technology and Tools
o Oracle Fusion Cloud
o Data modeling tools (e.g., ER/Studio, ERwin)
o ETL tools (e.g., Informatica, Talend, Azure Data Factory)
o Data pipeline tools (e.g., Apache Airflow, AWS Glue)
o Database management systems (e.g., Oracle Database, MySQL, SQL Server, PostgreSQL, MongoDB, Cassandra, Couchbase, Redis, Hadoop, Apache Spark, Amazon RDS, Google BigQuery, Microsoft Azure SQL Database, Neo4j, OrientDB, Memcached)
o Data governance tools (e.g., Collibra, Informatica Axon, Oracle EDM, Oracle MDM)
o Reporting and analytics tools (e.g., Oracle Analytics Cloud, Power BI, Tableau, Oracle BIP)
o Hyperscalers / cloud platforms (e.g., AWS, Azure)
o Big data technologies such as Hadoop, HDFS, MapReduce, and Spark
o Cloud platforms such as Amazon Web Services (RDS, Redshift, S3), Microsoft Azure (Azure SQL Database, Cosmos DB), and Google Cloud Platform (BigQuery, Cloud Storage)
o Programming languages and platforms (e.g., Java, J2EE, EJB, .NET, WebSphere)
o SQL: strong SQL skills for querying and managing databases
o Python: proficiency in Python for data manipulation and analysis
o Java: knowledge of Java for building data-driven applications
o Data security and protocols: understanding of data security protocols and compliance standards

Key Competencies
Educational Qualification: Bachelor’s degree in Computer Science, Information Technology, or a related field; Master’s degree preferred.
Experience: 10+ years overall, with at least 7 years in data architecture, data modeling, and database design. Proven experience with data warehousing, data lakes, and big data technologies. Expertise in SQL and experience with NoSQL databases. Experience with cloud platforms (e.g., AWS, Azure) and related data services. Experience with Oracle Fusion or similar ERP systems is highly desirable.
Skills: Strong understanding of data governance and data security best practices. Excellent problem-solving and analytical skills. Strong communication and interpersonal skills. Ability to work effectively in a collaborative team environment. Leadership experience with a track record of mentoring and developing team members. Excellent documentation and presentation skills. Good knowledge of applicable data privacy practices and laws.
Certifications: Relevant certifications (e.g., Certified Data Management Professional, AWS Certified Big Data – Specialty) are a plus.
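The "common definition of data entities" and "single source of truth" duties above can be sketched in miniature. The following is an illustrative sketch only, not part of the posting: all table and column names are hypothetical, and SQLite stands in for the enterprise databases to show one governed master entity referenced by two downstream systems instead of each keeping its own copy.

```python
import sqlite3

# Hypothetical illustration: one governed "customer" master entity,
# referenced by a Finance table and a CRM table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customer_master (          -- single source of truth
    customer_id INTEGER PRIMARY KEY,
    legal_name  TEXT NOT NULL,
    country     TEXT NOT NULL
);
CREATE TABLE finance_invoice (          -- Finance references the master
    invoice_id  INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customer_master(customer_id),
    amount      REAL NOT NULL
);
CREATE TABLE crm_ticket (               -- CRM references the same master
    ticket_id   INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customer_master(customer_id),
    subject     TEXT
);
""")
conn.execute("INSERT INTO customer_master VALUES (1, 'Acme Ltd', 'IN')")
conn.execute("INSERT INTO finance_invoice VALUES (10, 1, 2500.0)")
conn.execute("INSERT INTO crm_ticket VALUES (100, 1, 'Renewal query')")

# Both systems resolve to the same entity definition at reporting time.
row = conn.execute("""
    SELECT m.legal_name, f.amount, t.subject
    FROM customer_master m
    JOIN finance_invoice f USING (customer_id)
    JOIN crm_ticket t USING (customer_id)
""").fetchone()
print(row)
```

Because every system joins back to `customer_master`, renaming or correcting a customer happens in exactly one place.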

Posted 3 weeks ago

Apply

10.0 years

0 Lacs

Gurgaon

On-site

Acuity Knowledge Partners (Acuity) is a leading provider of bespoke research, analytics and technology solutions to the financial services sector, including asset managers, corporate and investment banks, private equity and venture capital firms, hedge funds and consulting firms. Its global network of over 6,000 analysts and industry experts, combined with proprietary technology, supports more than 600 financial institutions and consulting companies to operate more efficiently and unlock their human capital, driving revenue higher and transforming operations. Acuity is headquartered in London and operates from 10 locations worldwide. The company fosters a diverse, equitable and inclusive work environment, nurturing talent, regardless of race, gender, ethnicity or sexual orientation. Acuity was established as a separate business from Moody’s Corporation in 2019, following its acquisition by Equistone Partners Europe (Equistone). In January 2023, funds advised by global private equity firm Permira acquired a majority stake in the business from Equistone, which remains invested as a minority shareholder. For more information, visit www.acuitykp.com

Position Title: Associate Director (Senior Architect – Data)
Department: IT
Location: Gurgaon / Bangalore

Job Summary
The Enterprise Data Architect will enhance the company's strategic use of data by designing, developing, and implementing data models for enterprise applications and systems at the conceptual, logical, business-area, and application layers. This role advocates data modeling methodologies and best practices. We seek a skilled Data Architect with deep knowledge of data architecture principles, extensive data modeling experience, and the ability to create scalable data solutions. Responsibilities include developing and maintaining the enterprise data architecture and ensuring data integrity, interoperability, security, and availability, with a focus on ongoing digital transformation projects.

Key Responsibilities

1. Strategy & Planning
o Develop and deliver long-term strategic goals for the data architecture vision and standards in conjunction with data users, department managers, clients, and other key stakeholders.
o Create short-term tactical solutions to achieve long-term objectives and an overall data management roadmap.
o Establish processes for governing the identification, collection, and use of corporate metadata; take steps to assure metadata accuracy and validity.
o Establish methods and procedures for tracking data quality, completeness, redundancy, and improvement.
o Conduct data capacity planning, life cycle, duration and usage-requirement analyses, feasibility studies, and other tasks.
o Create strategies and plans for data security, backup, disaster recovery, business continuity, and archiving.
o Ensure that data strategies and architectures are aligned with regulatory compliance.
o Develop a comprehensive data strategy, in collaboration with different stakeholders, that aligns with the transformational projects’ goals.
o Ensure effective data management throughout the project lifecycle.

2. Acquisition & Deployment
o Ensure the success of enterprise-level application rollouts (e.g., ERP, CRM, HCM, FP&A).
o Liaise with vendors and service providers to select the products or services that best meet company goals.

3. Operational Management
o Assess and determine governance, stewardship, and frameworks for managing data across the organization.
o Develop and promote data management methodologies and standards.
o Document information products from business processes and create data entities.
o Create entity relationship diagrams to show the digital thread across the value streams and the enterprise.
o Drive data normalization across all systems and databases to ensure a common definition of data entities across the enterprise.
o Document enterprise reporting needs and develop the data strategy to enable a single source of truth for all reporting data.
o Address the regulatory compliance requirements of each country and ensure our data is secure and compliant.
o Select and implement the appropriate tools, software, applications, and systems to support data technology goals.
o Oversee the mapping of data sources, data movement, interfaces, and analytics, with the goal of ensuring data quality.
o Collaborate with project managers and business unit leaders on all projects involving enterprise data.
o Address data-related problems regarding systems integration, compatibility, and multi-platform integration.
o Act as a leader and advocate of data management, including coaching, training, and career development for staff.
o Develop and implement key components as needed to create testing criteria that guarantee the fidelity and performance of the data architecture.
o Document the data architecture and environment to maintain a current and accurate view of the larger data picture.
o Identify and develop opportunities for data reuse, migration, or retirement.

4. Data Architecture Design
o Develop and maintain the enterprise data architecture, including data models, databases, data warehouses, and data lakes.
o Design and implement scalable, high-performance data solutions that meet business requirements.

5. Data Governance
o Establish and enforce data governance policies and procedures as agreed with stakeholders.
o Maintain data integrity, quality, and security within Finance, HR, and other enterprise systems.

6. Data Migration
o Oversee the migration of data from legacy systems to the new systems being put in place.
o Define and manage data mappings, cleansing, transformation, and validation to ensure accuracy and completeness.

7. Master Data Management
o Devise processes to manage master data (e.g., customer, vendor, product information) to ensure consistency and accuracy across enterprise systems and business processes.
o Provide data management methods (create, update, and delimit) to ensure master data is governed.

8. Stakeholder Collaboration
o Collaborate with business users, system vendors, and other stakeholders to understand data requirements.
o Ensure the enterprise system meets the organization's data needs.

9. Training and Support
o Provide training and support to end users on data entry, retrieval, and reporting within the candidate enterprise systems.
o Promote user adoption and proper use of data.

10. Data Quality Assurance
o Implement data quality assurance measures to identify and correct data issues.
o Ensure Oracle Fusion and other enterprise systems contain reliable and up-to-date information.

11. Reporting and Analytics
o Facilitate the development of reporting and analytics capabilities within Oracle Fusion and other systems.
o Enable data-driven decision-making through robust data analysis.

12. Continuous Improvement
o Continuously monitor and improve data processes and the data capabilities of Oracle Fusion and other systems.
o Leverage new technologies for enhanced data management to support evolving business needs.

Technology and Tools
o Oracle Fusion Cloud
o Data modeling tools (e.g., ER/Studio, ERwin)
o ETL tools (e.g., Informatica, Talend, Azure Data Factory)
o Data pipeline tools (e.g., Apache Airflow, AWS Glue)
o Database management systems (e.g., Oracle Database, MySQL, SQL Server, PostgreSQL, MongoDB, Cassandra, Couchbase, Redis, Hadoop, Apache Spark, Amazon RDS, Google BigQuery, Microsoft Azure SQL Database, Neo4j, OrientDB, Memcached)
o Data governance tools (e.g., Collibra, Informatica Axon, Oracle EDM, Oracle MDM)
o Reporting and analytics tools (e.g., Oracle Analytics Cloud, Power BI, Tableau, Oracle BIP)
o Hyperscalers / cloud platforms (e.g., AWS, Azure)
o Big data technologies such as Hadoop, HDFS, MapReduce, and Spark
o Cloud platforms such as Amazon Web Services (RDS, Redshift, S3), Microsoft Azure (Azure SQL Database, Cosmos DB), and Google Cloud Platform (BigQuery, Cloud Storage)
o Programming languages and platforms (e.g., Java, J2EE, EJB, .NET, WebSphere)
o SQL: strong SQL skills for querying and managing databases
o Python: proficiency in Python for data manipulation and analysis
o Java: knowledge of Java for building data-driven applications
o Data security and protocols: understanding of data security protocols and compliance standards

Key Competencies
Educational Qualification: Bachelor’s degree in Computer Science, Information Technology, or a related field; Master’s degree preferred.
Experience: 10+ years overall, with at least 7 years in data architecture, data modeling, and database design. Proven experience with data warehousing, data lakes, and big data technologies. Expertise in SQL and experience with NoSQL databases. Experience with cloud platforms (e.g., AWS, Azure) and related data services. Experience with Oracle Fusion or similar ERP systems is highly desirable.
Skills: Strong understanding of data governance and data security best practices. Excellent problem-solving and analytical skills. Strong communication and interpersonal skills. Ability to work effectively in a collaborative team environment. Leadership experience with a track record of mentoring and developing team members. Excellent documentation and presentation skills. Good knowledge of applicable data privacy practices and laws.
Certifications: Relevant certifications (e.g., Certified Data Management Professional, AWS Certified Big Data – Specialty) are a plus.

Behavioral
o A self-starter, an excellent planner and executor and, above all, a good team player.
o Excellent communication and interpersonal skills are a must.
o Strong organizational skills, including multitasking, priority setting, and meeting deadlines.
o Ability to build collaborative relationships and effectively leverage networks to mobilize resources.
o Initiative to learn the business domain is highly desirable.
o Enjoys a dynamic, constantly evolving environment and requirements.

Posted 3 weeks ago

Apply

2.0 years

0 Lacs

India

On-site

Job Description: We are in search of a skilled and innovative MERN (MongoDB, Express.js, React, Node.js) stack developer to join our dynamic team. The ideal candidate will demonstrate proficiency in the MERN stack, showcasing a strong portfolio of successful projects. As a MERN stack developer, you will play a crucial role in designing, building, and maintaining scalable web applications.

Number of Vacancies: 1

Required Skills:
● Proven experience as a MERN stack developer or in a similar role.
● Proficiency in MongoDB, Express.js, React, and Node.js.
● Strong understanding of TypeScript and JavaScript, including ES6+ syntax.
● Experience with frontend technologies such as HTML, CSS, and client-side scripting libraries.
● Familiarity with state management libraries such as Redux.
● Knowledge of RESTful API design and development.
● Understanding of database design and management using MongoDB.
● Excellent problem-solving skills and attention to detail.
● Effective communication skills for seamless collaboration with team members.

Responsibilities:
● Develop and maintain web applications using the MERN stack, ensuring high performance and responsiveness.
● Collaborate with cross-functional teams to design, architect, and implement robust solutions.
● Create and maintain RESTful APIs for seamless communication between the front end and back end.
● Implement effective and secure data storage solutions using MongoDB and other databases.
● Optimize applications for maximum speed, scalability, and security.
● Participate in code reviews to maintain code quality and enhance team collaboration.
● Stay updated with industry trends and technologies, incorporating best practices into development processes.

Nice to Have:
● Experience with InfluxDB, Redis, GraphQL, Neo4j, MySQL, etc.
● Knowledge of containerization and orchestration tools like Docker and Kubernetes.
● Familiarity with continuous integration and deployment processes.
● Understanding of serverless architecture.
● Exposure to cloud platforms such as AWS, Azure, or Google Cloud.

Preferred Qualifications:
● Bachelor's degree in Computer Science, Engineering, or a related field.
● Proven experience of at least 2 years in MERN stack development.
● A strong portfolio showcasing successful MERN stack projects.

Full-time | Adajan | Bhatar | Exp. 1.5+ Years

Posted 3 weeks ago

Apply

15.0 - 20.0 years

37 - 45 Lacs

Bengaluru

Work from Office

Job Title: Senior Data Science Engineer Lead
Location: Bangalore, India

Role Description
We are seeking a seasoned Data Science Engineer to spearhead the development of intelligent, autonomous AI systems. The ideal candidate will have a robust background in agentic AI, LLMs, SLMs, vector databases, and knowledge graphs. This role involves designing and deploying AI solutions that leverage Retrieval-Augmented Generation (RAG), multi-agent frameworks, and hybrid search techniques to enhance enterprise applications.

Deutsche Bank's Corporate Bank division is a leading provider of cash management, trade finance and securities finance. We complete green-field projects that deliver the best Corporate Bank - Securities Services products in the world. Our team is diverse, international, and driven by a shared focus on clean code and valued delivery. At every level, agile minds are rewarded with competitive pay, support, and opportunities to excel. You will work as part of a cross-functional agile delivery team. You will bring an innovative approach to software development, focusing on using the latest technologies and practices, as part of a relentless focus on business value. You will be someone who sees engineering as a team activity, with a predisposition to open code, open discussion and creating a supportive, collaborative environment. You will be ready to contribute to all stages of software delivery, from initial analysis right through to production support.

What we'll offer you
● 100% reimbursement under the childcare assistance benefit (gender neutral)
● Sponsorship for industry-relevant certifications and education
● Accident and term life insurance

Your key responsibilities
● Design & develop agentic AI applications: Utilize frameworks like LangChain, CrewAI, and AutoGen to build autonomous agents capable of complex task execution.
● Implement RAG pipelines: Integrate LLMs with vector databases (e.g., Milvus, FAISS) and knowledge graphs (e.g., Neo4j) to create dynamic, context-aware retrieval systems.
● Fine-tune language models: Customize LLMs and SLMs using domain-specific data to improve performance and relevance in specialized applications.
● NER models: Train OCR- and NLP-based models to parse domain-specific details from documents (e.g., DocAI, Azure AI DIS, AWS IDP).
● Develop knowledge graphs: Construct and manage knowledge graphs to represent and query complex relationships within data, enhancing AI interpretability and reasoning.
● Collaborate cross-functionally: Work with data engineers, ML researchers, and product teams to align AI solutions with business objectives and technical requirements.
● Optimize AI workflows: Employ MLOps practices to ensure scalable, maintainable, and efficient AI model deployment and monitoring.

Your skills and experience
● 15+ years of professional experience in AI/ML development, with a focus on agentic AI systems.
● Proficient in Python, Python API frameworks, and SQL; familiar with AI/ML frameworks such as TensorFlow or PyTorch.
● Experience deploying AI models on cloud platforms (e.g., GCP, AWS).
● Experience with LLMs (e.g., GPT-4), SLMs, and prompt engineering.
● Understanding of semantic technologies, ontologies, and RDF/SPARQL.
● Familiarity with MLOps tools and practices for continuous integration and deployment.
● Skilled in building and querying knowledge graphs using tools like Neo4j.
● Hands-on experience with vector databases and embedding techniques.
● Familiarity with RAG architectures and hybrid search methodologies.
● Experience developing AI solutions for specific industries such as healthcare, finance, or e-commerce.
● Strong problem-solving abilities and analytical thinking.
● Excellent communication skills for cross-functional collaboration.
● Ability to work independently and manage multiple projects simultaneously.

How we'll support you
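The RAG retrieval step named in the responsibilities above (embed a query, search a vector store, augment the prompt) can be sketched without any of the listed tools. This toy stand-in uses bag-of-words vectors and cosine similarity in place of a real embedding model and a vector database such as FAISS or Milvus; the corpus and query are invented for illustration only.

```python
from collections import Counter
from math import sqrt

# Toy embedding: a bag-of-words term-count vector (real systems use a
# learned embedding model and an approximate-nearest-neighbor index).
def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

corpus = [
    "securities settlement occurs two days after the trade date",
    "cash management covers payments and liquidity services",
]

def retrieve(query: str) -> str:
    # Vector-search step: return the most similar document.
    q = embed(query)
    return max(corpus, key=lambda doc: cosine(q, embed(doc)))

# Augmentation step: prepend the retrieved context to the LLM prompt.
context = retrieve("when does settlement happen after a trade")
prompt = f"Answer using this context:\n{context}\n\nQuestion: ..."
print(context)
```

The same three-step shape (embed, search, augment) carries over when the Counter is replaced by a model embedding and the `max` loop by a vector-DB query.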

Posted 3 weeks ago

Apply

0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Job Description
Are You Ready to Make It Happen at Mondelēz International?
Join our Mission to Lead the Future of Snacking. Make It With Pride.

You will provide technical contributions to the data science process. In this role, you are the internally recognized expert in data, building infrastructure and data pipelines/retrieval mechanisms to support our data needs.

How You Will Contribute
You will:
● Operationalize and automate activities for efficiency and timely production of data visuals
● Assist in providing accessibility, retrievability, security and protection of data in an ethical manner
● Search for ways to get new data sources and assess their accuracy
● Build and maintain the transports/data pipelines and retrieve applicable data sets for specific use cases
● Understand data and metadata to support consistency of information retrieval, combination, analysis, pattern recognition and interpretation
● Validate information from multiple sources
● Assess issues that might prevent the organization from making maximum use of its information assets

What You Will Bring
A desire to drive your future and accelerate your career, and the following experience and knowledge:
● Extensive experience in data engineering in a large, complex business with multiple systems such as SAP, internal and external data, etc., and experience setting up, testing and maintaining new systems
● Experience with a wide variety of languages and tools (e.g., scripting languages) to retrieve, merge and combine data
● Ability to simplify complex problems and communicate to a broad audience

In This Role
As a Senior Data Engineer, you will have the opportunity to design and build scalable, secure, and cost-effective cloud-based data solutions. You will develop and maintain data pipelines to extract, transform, and load data into data warehouses or data lakes, ensuring data quality and validation processes to maintain data accuracy and integrity. You will ensure efficient data storage and retrieval for optimal performance, and collaborate closely with data teams, product owners, and other stakeholders to stay updated with the latest cloud technologies and best practices.

Role & Responsibilities:
● Design and Build: Develop and implement scalable, secure, and cost-effective cloud-based data solutions.
● Manage Data Pipelines: Develop and maintain data pipelines to extract, transform, and load data into data warehouses or data lakes.
● Ensure Data Quality: Implement data quality and validation processes to ensure data accuracy and integrity.
● Optimize Data Storage: Ensure efficient data storage and retrieval for optimal performance.
● Collaborate and Innovate: Work closely with data teams and product owners, and stay updated with the latest cloud technologies and best practices.

Technical Requirements:
● Programming: Python, PySpark, Go/Java
● Database: SQL, PL/SQL
● ETL & Integration: DBT, Databricks + DLT, AecorSoft, Talend, Informatica/Pentaho/Ab Initio, Fivetran
● Data Warehousing: SCD, schema types, data marts
● Visualization: Databricks Notebook, Power BI (optional), Tableau (optional), Looker
● GCP Cloud Services: BigQuery, GCS, Cloud Functions, Pub/Sub, Dataflow, Dataproc, Dataplex
● AWS Cloud Services: S3, Redshift, Lambda, Glue, CloudWatch, EMR, SNS, Kinesis
● Azure Cloud Services: Azure Data Lake Gen2, Azure Databricks, Azure Synapse Analytics, Azure Data Factory, Azure Stream Analytics
● Supporting Technologies: graph databases/Neo4j, Erwin, Collibra, Ataccama DQ, Kafka, Airflow

Soft Skills:
● Problem-Solving: The ability to identify and solve complex data-related challenges.
● Communication: Effective communication skills to collaborate with product owners, analysts, and stakeholders.
● Analytical Thinking: The capacity to analyze data and draw meaningful insights.
● Attention to Detail: Meticulousness in data preparation and pipeline development.
● Adaptability: The ability to stay updated with emerging technologies and trends in the data engineering field.

Within-country relocation support is available, and for candidates voluntarily moving internationally some minimal support is offered through our Volunteer International Transfer Policy.

Business Unit Summary
At Mondelēz International, our purpose is to empower people to snack right by offering the right snack, for the right moment, made the right way. That means delivering a broad range of delicious, high-quality snacks that nourish life's moments, made with sustainable ingredients and packaging that consumers can feel good about. We have a rich portfolio of strong brands globally and locally, including many household names such as Oreo, belVita and LU biscuits; Cadbury Dairy Milk, Milka and Toblerone chocolate; Sour Patch Kids candy and Trident gum. We are proud to hold the top position globally in biscuits, chocolate and candy and the second top position in gum. Our 80,000 makers and bakers are located in more than 80 countries and we sell our products in over 150 countries around the world. Our people are energized for growth and critical to us living our purpose and values. We are a diverse community that can make things happen—and happen fast. Mondelēz International is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, gender, sexual orientation or preference, gender identity, national origin, disability status, protected veteran status, or any other characteristic protected by law.

Job Type: Regular
Data Science | Analytics & Data Science
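The "data quality and validation processes" responsibility above boils down to running rule checks against each batch before it is loaded. This is a minimal hedged sketch, not Mondelēz's pipeline: the rule names, fields, and sample rows are all hypothetical, and in practice a tool like Ataccama DQ or a DBT test would play this role.

```python
# Each rule flags rows that would compromise accuracy or integrity
# downstream; a real pipeline would route failures to quarantine/alerting.
rules = {
    "missing_sku":  lambda r: not r.get("sku"),
    "negative_qty": lambda r: r.get("qty", 0) < 0,
    "bad_country":  lambda r: r.get("country") not in {"IN", "US", "GB"},
}

def validate(rows):
    """Return (row_index, rule_name) for every failed check."""
    failures = []
    for i, row in enumerate(rows):
        for name, check in rules.items():
            if check(row):
                failures.append((i, name))
    return failures

batch = [
    {"sku": "OREO-100", "qty": 5,  "country": "IN"},   # clean row
    {"sku": "",         "qty": 5,  "country": "IN"},   # fails missing_sku
    {"sku": "MILKA-50", "qty": -2, "country": "XX"},   # fails two rules
]
print(validate(batch))  # [(1, 'missing_sku'), (2, 'negative_qty'), (2, 'bad_country')]
```

Only a batch with an empty failure list would proceed to the load step.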

Posted 3 weeks ago

Apply

5.0 years

15 - 25 Lacs

Hyderābād

On-site

Role: Data Engineer
Location: Hyderabad, India (Hybrid)

Responsibilities:
● Design, build, and optimize data pipelines to ingest, process, transform, and load data from various sources into our data platform
● Implement and maintain ETL workflows using tools like Debezium, Kafka, Airflow, and Jenkins to ensure reliable and timely data processing
● Develop and optimize SQL and NoSQL database schemas, queries, and stored procedures for efficient data retrieval and processing
● Work with both relational databases (MySQL, PostgreSQL) and NoSQL databases (MongoDB, DocumentDB) to build scalable data solutions
● Design and implement data warehouse solutions that support analytical needs and machine learning applications
● Collaborate with data scientists and ML engineers to prepare data for AI/ML models and implement data-driven features
● Implement data quality checks, monitoring, and alerting to ensure data accuracy and reliability
● Optimize query performance across various database systems through indexing, partitioning, and query refactoring
● Develop and maintain documentation for data models, pipelines, and processes
● Collaborate with cross-functional teams to understand data requirements and deliver solutions that meet business needs
● Stay current with emerging technologies and best practices in data engineering

Requirements:
● 5+ years of experience in data engineering or related roles, with a proven track record of building data pipelines and infrastructure; work experience on enterprise SaaS is mandatory
● Strong proficiency in SQL and experience with relational databases like MySQL and PostgreSQL
● Hands-on experience with NoSQL databases such as MongoDB or AWS DocumentDB
● Expertise in designing, implementing, and optimizing ETL processes using tools like Kafka, Debezium, Airflow, or similar technologies
● Experience with data warehousing concepts and technologies
● Solid understanding of data modeling principles and best practices for both operational and analytical systems
● Proven ability to optimize database performance, including query optimization, indexing strategies, and database tuning
● Experience with AWS data services such as RDS, Redshift, S3, Glue, Kinesis, and the ELK stack
● Proficiency in at least one programming language (Python, Node.js, Java)
● Experience with version control systems (Git) and CI/CD pipelines
● Bachelor's degree in Computer Science, Engineering, or a related field

Preferred Qualifications:
● Experience with graph databases (Neo4j, Amazon Neptune)
● Knowledge of big data technologies such as Hadoop, Spark, Hive, and data lake architectures
● Experience working with streaming data technologies and real-time data processing
● Familiarity with data governance and data security best practices
● Experience with containerization technologies (Docker, Kubernetes)
● Understanding of financial back-office operations and the FinTech domain
● Experience working in a high-growth startup environment
● Master's degree in Computer Science, Data Engineering, or a related field

Job Types: Full-time, Permanent
Pay: ₹1,500,000.00 - ₹2,500,000.00 per year
Schedule: Day shift, Monday to Friday
Work Location: In person
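The pipeline duties this role describes (ingest, validate, transform, load, with data quality checks and indexing) can be sketched in miniature. This is an illustrative pure-Python example using in-memory SQLite in place of the warehouse; the record fields and table name are invented, and a real deployment would use the Kafka/Debezium/Airflow stack the listing names.

```python
import sqlite3

# Toy extract-transform-load stage with a data quality check,
# standing in for a Kafka/Airflow pipeline (illustrative only).
raw_rows = [
    {"id": 1, "amount": "120.50", "currency": "INR"},
    {"id": 2, "amount": "not-a-number", "currency": "INR"},  # fails validation
    {"id": 3, "amount": "88.00", "currency": None},          # fails validation
]

def transform(row):
    """Validate and normalize a raw record; return None if it fails checks."""
    try:
        amount = float(row["amount"])
    except (TypeError, ValueError):
        return None
    if not row["currency"]:
        return None
    return (row["id"], amount, row["currency"])

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payments (id INTEGER PRIMARY KEY, amount REAL, currency TEXT)")
# An index on the filter column, in the spirit of the indexing requirement above.
conn.execute("CREATE INDEX idx_payments_currency ON payments (currency)")

clean = [t for t in (transform(r) for r in raw_rows) if t is not None]
rejected = len(raw_rows) - len(clean)
conn.executemany("INSERT INTO payments VALUES (?, ?, ?)", clean)
conn.commit()

print(f"loaded={len(clean)} rejected={rejected}")
```

Rejected records would normally be routed to a dead-letter queue and surfaced through the monitoring and alerting the listing calls for, rather than silently dropped.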

Posted 3 weeks ago

Apply

10.0 - 12.0 years

35 - 40 Lacs

Bengaluru

Work from Office

Key Responsibilities:
1. Architect scalable GenAI platforms using advanced RAG techniques.
2. Design multi-agent orchestration using MCP and A2A patterns.
3. Integrate knowledge graphs and vector databases for semantic search.
4. Lead prompt pipeline development and observability tooling.
5. Define secure API gateways and DevOps deployment strategies.
6. Optimize LLM performance and context management.
7. Ensure compliance with data privacy standards (HIPAA/GDPR).
8. Collaborate with cross-functional teams for system reliability.
9. Mentor engineers on GenAI architecture and best practices.
10. Drive innovation in agentic AI and schema-aware retrieval systems.

Must Have:
1. Deep expertise in LangChain, DSPy, LangGraph, and Python.
2. Strong understanding of RAG variants (Schema RAG, Agentic-RAG).
3. Experience with vector DBs like FAISS, pgvector, Pinecone.
4. Knowledge of knowledge graphs and Neo4j integration.
5. Familiarity with OpenAI APIs and prompt engineering.
6. Hands-on with Docker, Terraform, and GitHub Actions.
7. Cloud deployment experience (AWS, Azure, GCP).
8. Proficiency in secure API design and token management.
9. Strong documentation and architectural design skills.
10. Strategic thinking with mentoring and leadership capabilities.
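The semantic-search step at the heart of the RAG platforms this role covers can be shown at its simplest: rank documents by cosine similarity between embedding vectors. The document names and hand-made three-dimensional vectors below are invented for illustration; a production system would use real embeddings and a vector database such as FAISS or pgvector.

```python
import math

# Minimal sketch of the retrieval step in a RAG pipeline: rank documents
# by cosine similarity of toy embedding vectors (illustrative only).
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

doc_store = {
    "neo4j_guide":   [0.9, 0.1, 0.0],
    "vector_search": [0.1, 0.9, 0.2],
    "devops_notes":  [0.0, 0.2, 0.9],
}

def retrieve(query_vec, k=2):
    """Return the top-k document ids most similar to the query vector."""
    ranked = sorted(doc_store, key=lambda d: cosine(query_vec, doc_store[d]), reverse=True)
    return ranked[:k]

# A query vector close to the "vector_search" document should rank it first.
context = retrieve([0.2, 0.8, 0.1])
print(context)
```

The retrieved ids would then be resolved to text chunks and stuffed into the LLM prompt, which is where context management and prompt pipelines come in.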

Posted 3 weeks ago

Apply

5.0 - 8.0 years

30 - 40 Lacs

Bengaluru

Work from Office

Data Engineer and Developer

We are seeking a Data Engineer who will define and build the foundational architecture for our data platform: the bedrock upon which our applications will thrive. You'll collaborate closely with application developers, translating their needs into platform capabilities that turbocharge development. From the start, you'll architect for scale, ensuring our data flows seamlessly through every stage of its lifecycle: collection, modeling, cleansing, enrichment, securing, and storing data in an optimal format. Think of yourself as the mastermind orchestrating an evolving data ecosystem, engineered to adapt and excel amid tomorrow's challenges.

We are looking for a Data Engineer with 5+ years of experience who has:
- Database versatility: deep expertise working with relational databases (PostgreSQL, MS SQL, and beyond) as well as NoSQL systems (such as MongoDB, Cassandra, Elasticsearch).
- Graph databases: ability to design and implement scalable graph databases to model complex relationships between entities for use in GenAI agent architectures, using Neo4j, Dgraph, or ArangoDB and query languages such as Cypher, SPARQL, and GraphQL.
- Data lifecycle expertise: skill in all aspects of data management: collection, storage, integration, quality, and pipeline design.
- Programming proficiency: adeptness in programming languages such as Python and Go.
- Collaborative mindset: experience partnering with GenAI Engineers and Data Scientists.
- Modern data paradigms: a strong grasp of Data Mesh, Data Products, and Data Fabric. Understanding of DataOps and Domain-Driven Design (DDD) is a plus.
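The entity-relationship modeling this listing assigns to graph databases can be sketched without one: a breadth-first traversal over an adjacency map answers the same "who is reachable within N hops" question a graph query would. The people and edges below are invented; in Neo4j the equivalent two-hop query might look something like `MATCH (a:Person {name:'alice'})-[:KNOWS*1..2]->(b) RETURN b` (assumed labels and relationship type).

```python
from collections import deque

# Tiny in-memory graph standing in for a Neo4j/Dgraph store (illustrative).
edges = {
    "alice": ["bob"],
    "bob": ["carol", "dave"],
    "carol": [],
    "dave": ["alice"],
}

def neighbors_within(start, max_hops):
    """Breadth-first traversal: every node reachable from start within max_hops."""
    seen = {start}
    frontier = deque([(start, 0)])
    reachable = set()
    while frontier:
        node, dist = frontier.popleft()
        if dist == max_hops:
            continue  # do not expand past the hop budget
        for nxt in edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                reachable.add(nxt)
                frontier.append((nxt, dist + 1))
    return reachable

print(sorted(neighbors_within("alice", 2)))
```

A dedicated graph database earns its keep when the graph no longer fits in memory and traversals need indexes, persistence, and a declarative query language.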

Posted 3 weeks ago

Apply

10.0 - 15.0 years

17 - 22 Lacs

Noida

Work from Office

As a Domain Architect in the Autonomous Networks domain, you will play a key role in shaping innovative solutions for customers, acting as their trusted advisor. You'll work closely with stakeholders to understand their unique needs and translate them into practical, high-impact solutions. In this role, you'll design and deliver end-to-end solutions tailored to customer requirements, leveraging Nokia's cutting-edge portfolio along with third-party products when needed. You'll apply industry best practices and architectural standards to ensure long-term technical integrity, helping customers achieve their business goals with confidence.

You have:
- Bachelor's degree in engineering/technology or equivalent, with 10+ years of hands-on experience in autonomous networks driving large programs, including at least 5 years as an Architect/Designer.
- Experience in at least one or two domains such as Orchestration/Fulfillment (FlowOne, CDPA, CDFF, NoRC); Assurance (NAC); Inventory (UIV, Discovery and Reconciliation); SSO/security product suites (NIAM); or Analytics.
- Hands-on experience in Java, Expect scripting, Python, Kubernetes, microservices, databases, XML, XSLT, data parsing, SNMP, REST, SOAP, CORBA, LDAP, JMS, and FTP.
- Exposure to Oracle, Postgres, MongoDB, MariaDB, Neo4j, containerization, orchestration tools, and agile methodologies.

It would be nice if you also had:
- Understanding of 5G slicing, 5G SA/NSA networks, IP/MPLS, optics, IMS, VoLTE, NFV/SDN, and fixed networks.
- An independent, disruptive-thinking, results-oriented mindset and strong communication skills.
- Ability to work in a fast-paced global environment in collaboration with cross-cultural teams and customers.

Responsibilities:
- Develop the Requirement Definition Document (RDD), High-Level Design (HLD), and Low-Level Design (LLD).
- Stay updated on customer architecture within the dedicated technical area and regional requirements.
- Apply solution architecture standards, processes, and principles.
- Define and develop the full scope of solutions, collaborating across teams and organizations to create effective outcomes.
- Work effectively in diverse environments, leveraging best practices and industry knowledge to enhance products and services.
- Serve as a trusted advisor and mentor to team members, providing guidance on projects and tasks.
- Drive projects with manageable risks and resource requirements, or oversee small teams, managing day-to-day operations, resource allocation, and workload distribution.
- Act as a key troubleshooter and subject matter expert on the Autonomous product portfolio, including fulfillment, assurance, inventory, security, and analytics.
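The XML and data-parsing skills this role lists show up constantly in inventory discovery and reconciliation work. Here is a hedged sketch using only the Python standard library: the inventory export format, element names, and status values are all invented for illustration, not any Nokia product's actual schema.

```python
import xml.etree.ElementTree as ET

# Made-up network inventory export (illustrative only).
doc = """
<inventory>
  <element id="NE-1" type="router" status="active"/>
  <element id="NE-2" type="switch" status="faulty"/>
  <element id="NE-3" type="router" status="active"/>
</inventory>
"""

root = ET.fromstring(doc)
# Filter to active elements: the kind of pass a reconciliation job runs
# before comparing inventory records against the live network.
active = [el.get("id") for el in root.iter("element") if el.get("status") == "active"]
print(active)
```

In practice this parsing step sits between a southbound collection protocol (SNMP, REST, SOAP) and the inventory store being reconciled.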

Posted 3 weeks ago

Apply

5.0 - 10.0 years

8 - 13 Lacs

Bengaluru

Work from Office

As a Senior R&D Engineer with expertise in cloud-native development, microservices architecture, scripting, and Helm charts, you will join our team that builds high-quality software solutions using Java and Spring Boot. With hands-on experience designing RESTful APIs, managing microservices, and working across cloud platforms like AWS, Azure, or OCP, you're well-equipped to develop scalable, resilient solutions. Your skill set spans optimizing NoSQL databases, managing Kubernetes deployments with Helm, and partnering closely with DevOps teams to streamline CI/CD pipelines.

You have:
- Bachelor's or Master's degree in computer science, engineering, or a related field.
- 5+ years of experience in software development with a focus on Java and Spring Boot.
- Experience in RESTful API design and implementation.
- Exposure to CI/CD tools (Jenkins, GitLab CI).

It would be nice if you also had:
- Relevant certifications (e.g., AWS Certified Developer, Oracle Certified Professional Java SE).
- Knowledge of container orchestration and management.
- Familiarity with Agile development methodologies.

Responsibilities:
- Design and develop high-quality applications using Java and Spring Boot, implementing and maintaining RESTful APIs and microservices.
- Create and maintain UML diagrams for software architecture, define and manage JSON schemas, and optimize NoSQL databases like Neo4j, MongoDB, and Cassandra for efficient data handling.
- Develop and deploy cloud-native applications using AWS, Azure, or OCP, ensuring scalability and resilience in microservices environments.
- Manage Kubernetes deployments with Helm charts, work with DevOps teams to integrate CI/CD pipelines, and automate tasks using Python and Bash scripting.
- Ensure efficient data storage and retrieval, optimize system performance, and assist in automated deployment strategies.
- Work closely with cross-functional teams to gather requirements, design solutions, and ensure seamless software integration.
- Maintain comprehensive documentation for software designs, APIs, and deployment processes, ensuring clarity and accessibility.
- Continuously enhance development workflows through automation, best practices, and performance optimizations.
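The "define and manage JSON schemas" duty above can be illustrated with a minimal standard-library sketch: check a JSON payload against an expected shape before it enters the service. The field names and types are invented, and a real service would likely use a proper schema library rather than this hand-rolled check.

```python
import json

# Hypothetical expected shape for an API payload (illustrative only).
SCHEMA = {"id": int, "name": str, "tags": list}

def validate(payload: str):
    """Return (ok, errors) for a JSON document against the expected shape."""
    data = json.loads(payload)
    errors = [
        f"{field}: expected {typ.__name__}"
        for field, typ in SCHEMA.items()
        if not isinstance(data.get(field), typ)
    ]
    return (not errors, errors)

ok, errs = validate('{"id": 7, "name": "svc-a", "tags": ["prod"]}')
bad_ok, bad_errs = validate('{"id": "7", "name": "svc-b"}')
print(ok, bad_errs)
```

Keeping the schema as data (rather than scattering checks through handlers) is what makes it manageable: one definition serves validation, documentation, and tests.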

Posted 3 weeks ago

Apply

0 years

0 Lacs

India

Remote

Senior Backend Data Engineer

Resume must be in English.
Location: Remote
Employment Type: Full-Time / Contract

Who We Are
We are a U.S.-based technology consulting firm with distributed development teams across Latin America, Eastern Europe, and Türkiye. Our mission is simple: deliver exceptional software by empowering world-class engineering talent. We work with top-tier clients on innovative, high-impact projects, and we're growing fast.

About the Client
Our client is a forward-thinking technology company at the forefront of data modeling and analytics. Their platform is built using modern engineering principles, combining backend development and advanced data engineering in a Python-based declarative logic programming framework. The solutions are Snowflake-native and designed to handle complex modeling and analytics at scale. They are looking for engineers who are passionate about learning, solving novel challenges, and contributing to systems that are built from the ground up.

Role Overview
We are hiring a Senior Backend Data Engineer to help build and scale data-driven applications with a strong focus on backend logic, data modeling, and engineering best practices. You'll be working in a collaborative environment, designing systems from scratch, and delivering solutions that handle high volumes of data efficiently.

Core Responsibilities & Required Skills
- Rapid learning ability: comfortable picking up new tools, technologies, and paradigms quickly
- Backend development: proven experience designing and developing backend systems, APIs, and asynchronous workflows from scratch
- Data engineering: solid experience building data ingestion pipelines, stream processing, and managing large-scale data movement
- Relational data modeling: expertise in designing robust, scalable relational data models
- Python proficiency: strong programming background in Python, including adaptability to nontraditional or abstract programming styles
- Cloud platform experience: hands-on experience with cloud platforms like Azure or AWS
- Software architecture: understanding of design patterns and architectural best practices for scalable backend systems

Nice-to-Have Skills
- Familiarity with graph databases (e.g., Neo4j, Amazon Neptune) or graph libraries like NetworkX
- Experience with CI/CD tools such as GitHub Actions, GitLab CI, or CircleCI
- Hands-on knowledge of Snowflake or similar cloud data warehouse platforms

Who You Are
You are a curious, driven engineer who loves building solutions from the ground up. You thrive in fast-moving environments, enjoy tackling complex challenges, and are comfortable working with unconventional or emerging technologies. You're excited by backend development and passionate about clean, scalable data architecture.

Why Join Us?
We believe in giving our team members the tools, flexibility, and trust they need to do their best work. From continuous learning opportunities to challenging, high-impact projects, we're committed to your growth. You'll work with cutting-edge technologies and collaborate with passionate professionals who care deeply about quality.

What You Can Expect
- Remote work flexibility
- Access to modern tools and learning platforms
- Challenging, high-impact projects with innovative clients
- A supportive, growth-oriented team environment
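The asynchronous ingestion workflows this role emphasizes can be sketched with `asyncio` alone: producers push records onto a shared queue while a consumer drains it into batches for loading. The source names and batch size are invented for illustration; a real pipeline would sit in front of a stream processor or warehouse loader.

```python
import asyncio

# Minimal producer/consumer ingestion sketch (illustrative only).
async def produce(queue, source, records):
    """Push each record onto the shared queue, tagged with its source."""
    for rec in records:
        await queue.put((source, rec))

async def consume(queue, total, batch_size=3):
    """Drain exactly `total` records, grouping them into load batches."""
    batches, current = [], []
    for _ in range(total):
        current.append(await queue.get())
        if len(current) == batch_size:
            batches.append(current)
            current = []
    if current:
        batches.append(current)  # flush the final partial batch
    return batches

async def main():
    queue = asyncio.Queue()
    records_a = ["a1", "a2"]
    records_b = ["b1", "b2", "b3"]
    consumer = asyncio.create_task(consume(queue, len(records_a) + len(records_b)))
    await asyncio.gather(produce(queue, "src_a", records_a),
                         produce(queue, "src_b", records_b))
    return await consumer

batches = asyncio.run(main())
print([len(b) for b in batches])
```

The queue decouples producers from the consumer, which is the same backpressure idea that Kafka-style brokers provide at scale.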

Posted 3 weeks ago

Apply

3.0 - 6.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Summary

Position Summary

AI & Data

In this age of disruption, organizations need to navigate the future with confidence, embracing decision-making with clear, data-driven choices that deliver enterprise value in a dynamic business environment. The AI & Data team leverages the power of data, analytics, robotics, science and cognitive technologies to uncover hidden relationships from vast troves of data, generate insights, and inform decision-making. The offering portfolio helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets.

AI & Data will work with our clients to:
- Implement large-scale data ecosystems, including data management, governance and the integration of structured and unstructured data, to generate insights leveraging cloud-based platforms
- Leverage automation, cognitive and science-based techniques to manage data, predict scenarios and prescribe actions

Job Title: Generative AI Developer

Job Summary: We are looking for a Generative AI Developer with hands-on experience to design, develop and deploy AI and Generative AI models that generate high-quality content such as text, images and chatbots. The ideal candidate will have expertise in deep learning, natural language processing, and computer vision.
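The RAG workflows this role works with end in a prompt-assembly step: retrieved context snippets are folded into a template under a length budget before the model call. The template wording, snippets, and character budget below are invented for illustration, and no model API is called.

```python
# Toy sketch of RAG prompt assembly (illustrative only).
TEMPLATE = (
    "Answer using only the context below.\n"
    "Context:\n{context}\n"
    "Question: {question}\n"
)

def build_prompt(question, snippets, max_chars=200):
    """Concatenate snippets into the template, truncating to a character budget."""
    context = ""
    for snip in snippets:
        if len(context) + len(snip) + 1 > max_chars:
            break  # stop before overflowing the context budget
        context += snip + "\n"
    return TEMPLATE.format(context=context.rstrip(), question=question)

prompt = build_prompt(
    "What storage does the pipeline use?",
    ["The pipeline lands data in a vector store.", "Graph edges live in Neo4j."],
)
print("vector store" in prompt)
```

In a production pipeline the budget would be counted in model tokens rather than characters, and the snippets would come from the vector or graph database the qualifications mention.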
Key Responsibilities:
- Deliver large-scale AI/Gen AI projects across multiple industries and domains
- Liaise with on-site and client teams to understand business problem statements and project requirements
- Work with a team of Data Engineers, ML/AI Engineers, Prompt Engineers, and other Data & AI professionals to deliver projects from inception to implementation
- Brainstorm, build and improve AI/Gen AI models developed by the team, and identify scope for model improvements and best practices
- Assist and participate in pre-sales, client pursuits and proposals
- Drive a human-led culture of inclusion and diversity by caring deeply for all team members

Qualifications:
- 3-6 years of relevant hands-on experience in Generative AI, Deep Learning, or NLP
- Bachelor's or Master's degree in a quantitative field
- Strong hands-on experience with programming languages such as Python, CUDA and SQL, and frameworks such as TensorFlow, PyTorch and Keras
- Hands-on experience with leading LLMs such as OpenAI GPT-3.5/4, Google Gemini, AWS Bedrock, LLaMA 3.0, and Mistral, along with RAG and agentic workflows
- Well versed in GANs and the Transformer architecture, familiar with diffusion models, and up to date with new research and progress in the Gen AI field
- Able to follow research papers and to comprehend, innovate on and present the best approaches/solutions for Generative AI components
- Knowledge of hyperscaler offerings (NVIDIA, AWS, Azure, GCP, Oracle) and Gen AI tools (Copilot, Vertex AI)
- Knowledge of vector databases and Neo4j or other relevant graph databases
- Familiarity with Docker containerization, Git, etc.
- AI/Cloud certification from a premier institute is preferred

Our purpose

Deloitte's purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are.
Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Our people and culture

Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Professional development

At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India.

Benefits to help you thrive

At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially, and to live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/or other criteria. Learn more about what working at Deloitte can mean for you.

Recruiting tips

From developing a stand-out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Requisition code: 303628

Posted 3 weeks ago

Apply

3.0 - 6.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

Summary

Position Summary

AI & Data

In this age of disruption, organizations need to navigate the future with confidence, embracing decision-making with clear, data-driven choices that deliver enterprise value in a dynamic business environment. The AI & Data team leverages the power of data, analytics, robotics, science and cognitive technologies to uncover hidden relationships from vast troves of data, generate insights, and inform decision-making. The offering portfolio helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets.

AI & Data will work with our clients to:
- Implement large-scale data ecosystems, including data management, governance and the integration of structured and unstructured data, to generate insights leveraging cloud-based platforms
- Leverage automation, cognitive and science-based techniques to manage data, predict scenarios and prescribe actions

Job Title: Generative AI Developer

Job Summary: We are looking for a Generative AI Developer with hands-on experience to design, develop and deploy AI and Generative AI models that generate high-quality content such as text, images and chatbots. The ideal candidate will have expertise in deep learning, natural language processing, and computer vision.

Key Responsibilities:
- Deliver large-scale AI/Gen AI projects across multiple industries and domains
- Liaise with on-site and client teams to understand business problem statements and project requirements
- Work with a team of Data Engineers, ML/AI Engineers, Prompt Engineers, and other Data & AI professionals to deliver projects from inception to implementation
- Brainstorm, build and improve AI/Gen AI models developed by the team, and identify scope for model improvements and best practices
- Assist and participate in pre-sales, client pursuits and proposals
- Drive a human-led culture of inclusion and diversity by caring deeply for all team members

Qualifications:
- 3-6 years of relevant hands-on experience in Generative AI, Deep Learning, or NLP
- Bachelor's or Master's degree in a quantitative field
- Strong hands-on experience with programming languages such as Python, CUDA and SQL, and frameworks such as TensorFlow, PyTorch and Keras
- Hands-on experience with leading LLMs such as OpenAI GPT-3.5/4, Google Gemini, AWS Bedrock, LLaMA 3.0, and Mistral, along with RAG and agentic workflows
- Well versed in GANs and the Transformer architecture, familiar with diffusion models, and up to date with new research and progress in the Gen AI field
- Able to follow research papers and to comprehend, innovate on and present the best approaches/solutions for Generative AI components
- Knowledge of hyperscaler offerings (NVIDIA, AWS, Azure, GCP, Oracle) and Gen AI tools (Copilot, Vertex AI)
- Knowledge of vector databases and Neo4j or other relevant graph databases
- Familiarity with Docker containerization, Git, etc.
- AI/Cloud certification from a premier institute is preferred

Our purpose

Deloitte's purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Our people and culture

Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Professional development

At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India.

Benefits to help you thrive

At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially, and to live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/or other criteria. Learn more about what working at Deloitte can mean for you.

Recruiting tips

From developing a stand-out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Requisition code: 303628

Posted 3 weeks ago

Apply

3.0 - 6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Summary

Position Summary

AI & Data

In this age of disruption, organizations need to navigate the future with confidence, embracing decision-making with clear, data-driven choices that deliver enterprise value in a dynamic business environment. The AI & Data team leverages the power of data, analytics, robotics, science and cognitive technologies to uncover hidden relationships from vast troves of data, generate insights, and inform decision-making. The offering portfolio helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets.

AI & Data will work with our clients to:
- Implement large-scale data ecosystems, including data management, governance and the integration of structured and unstructured data, to generate insights leveraging cloud-based platforms
- Leverage automation, cognitive and science-based techniques to manage data, predict scenarios and prescribe actions

Job Title: Generative AI Developer

Job Summary: We are looking for a Generative AI Developer with hands-on experience to design, develop and deploy AI and Generative AI models that generate high-quality content such as text, images and chatbots. The ideal candidate will have expertise in deep learning, natural language processing, and computer vision.

Key Responsibilities:
- Deliver large-scale AI/Gen AI projects across multiple industries and domains
- Liaise with on-site and client teams to understand business problem statements and project requirements
- Work with a team of Data Engineers, ML/AI Engineers, Prompt Engineers, and other Data & AI professionals to deliver projects from inception to implementation
- Brainstorm, build and improve AI/Gen AI models developed by the team, and identify scope for model improvements and best practices
- Assist and participate in pre-sales, client pursuits and proposals
- Drive a human-led culture of inclusion and diversity by caring deeply for all team members

Qualifications:
- 3-6 years of relevant hands-on experience in Generative AI, Deep Learning, or NLP
- Bachelor's or Master's degree in a quantitative field
- Strong hands-on experience with programming languages such as Python, CUDA and SQL, and frameworks such as TensorFlow, PyTorch and Keras
- Hands-on experience with leading LLMs such as OpenAI GPT-3.5/4, Google Gemini, AWS Bedrock, LLaMA 3.0, and Mistral, along with RAG and agentic workflows
- Well versed in GANs and the Transformer architecture, familiar with diffusion models, and up to date with new research and progress in the Gen AI field
- Able to follow research papers and to comprehend, innovate on and present the best approaches/solutions for Generative AI components
- Knowledge of hyperscaler offerings (NVIDIA, AWS, Azure, GCP, Oracle) and Gen AI tools (Copilot, Vertex AI)
- Knowledge of vector databases and Neo4j or other relevant graph databases
- Familiarity with Docker containerization, Git, etc.
- AI/Cloud certification from a premier institute is preferred

Our purpose

Deloitte's purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Our people and culture

Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Professional development

At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India.

Benefits to help you thrive

At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially, and to live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/or other criteria. Learn more about what working at Deloitte can mean for you.

Recruiting tips

From developing a stand-out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Requisition code: 303628

Posted 3 weeks ago

Apply

3.0 - 6.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Summary

Position Summary

AI & Data

In this age of disruption, organizations need to navigate the future with confidence, embracing decision-making with clear, data-driven choices that deliver enterprise value in a dynamic business environment. The AI & Data team leverages the power of data, analytics, robotics, science and cognitive technologies to uncover hidden relationships from vast troves of data, generate insights, and inform decision-making. The offering portfolio helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets.

AI & Data will work with our clients to:
- Implement large-scale data ecosystems, including data management, governance and the integration of structured and unstructured data, to generate insights leveraging cloud-based platforms
- Leverage automation, cognitive and science-based techniques to manage data, predict scenarios and prescribe actions

Job Title: Generative AI Developer

Job Summary: We are looking for a Generative AI Developer with hands-on experience to design, develop and deploy AI and Generative AI models that generate high-quality content such as text, images and chatbots. The ideal candidate will have expertise in deep learning, natural language processing, and computer vision.

Key Responsibilities:
- Deliver large-scale AI/Gen AI projects across multiple industries and domains
- Liaise with on-site and client teams to understand business problem statements and project requirements
- Work with a team of Data Engineers, ML/AI Engineers, Prompt Engineers, and other Data & AI professionals to deliver projects from inception to implementation
- Brainstorm, build and improve AI/Gen AI models developed by the team, and identify scope for model improvements and best practices
- Assist and participate in pre-sales, client pursuits and proposals
- Drive a human-led culture of inclusion and diversity by caring deeply for all team members

Qualifications:
- 3-6 years of relevant hands-on experience in Generative AI, Deep Learning, or NLP
- Bachelor's or Master's degree in a quantitative field
- Strong hands-on experience with programming languages such as Python, CUDA and SQL, and frameworks such as TensorFlow, PyTorch and Keras
- Hands-on experience with leading LLMs such as OpenAI GPT-3.5/4, Google Gemini, AWS Bedrock, LLaMA 3.0, and Mistral, along with RAG and agentic workflows
- Well versed in GANs and the Transformer architecture, familiar with diffusion models, and up to date with new research and progress in the Gen AI field
- Able to follow research papers and to comprehend, innovate on and present the best approaches/solutions for Generative AI components
- Knowledge of hyperscaler offerings (NVIDIA, AWS, Azure, GCP, Oracle) and Gen AI tools (Copilot, Vertex AI)
- Knowledge of vector databases and Neo4j or other relevant graph databases
- Familiarity with Docker containerization, Git, etc.
- AI/Cloud certification from a premier institute is preferred

Our purpose

Deloitte's purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Our people and culture

Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Professional development

At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India.

Benefits to help you thrive

At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially, and to live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/or other criteria. Learn more about what working at Deloitte can mean for you.

Recruiting tips

From developing a stand-out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Requisition code: 303628

Posted 3 weeks ago

Apply

3.0 - 6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Summary Position Summary AI & Data In this age of disruption, organizations need to navigate the future with confidence, embracing decision-making with clear, data-driven choices that deliver enterprise value in a dynamic business environment. The AI & Data team leverages the power of data, analytics, robotics, science and cognitive technologies to uncover hidden relationships from vast troves of data, generate insights, and inform decision-making. The offering portfolio helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets. AI & Data will work with our clients to: Implement large-scale data ecosystems including data management, governance and the integration of structured and unstructured data to generate insights leveraging cloud-based platforms Leverage automation, cognitive and science-based techniques to manage data, predict scenarios and prescribe actions Job Title: Generative AI Developer Job Summary: We are looking for a Generative AI Developer with hands-on experience to design, develop and deploy AI and Generative AI models that generate high-quality content such as text and images and that power chatbots. The ideal candidate will have expertise in deep learning, natural language processing, and computer vision. 
Key Responsibilities: Deliver large-scale AI/Gen AI projects across multiple industries and domains Liaise with on-site and client teams to understand business problem statements and project requirements Work with a team of Data Engineers, ML/AI Engineers, Prompt Engineers, and other Data & AI professionals to deliver projects from inception to implementation Brainstorm, build and improve AI/Gen AI models developed by the team and identify scope for model improvements and best practices Assist and participate in pre-sales, client pursuits and proposals Drive a human-led culture of Inclusion & Diversity by caring deeply for all team members Qualifications: 3-6 years of relevant hands-on experience in Generative AI, Deep Learning, or NLP Bachelor’s or Master’s degree in a quantitative field. Strong hands-on experience with programming languages such as Python, CUDA and SQL, and frameworks such as TensorFlow, PyTorch and Keras Hands-on experience with leading LLMs such as OpenAI GPT-3.5/4, Google Gemini, Llama 3 and Mistral, including via platforms such as AWS Bedrock, along with RAG and agentic workflows Well-versed in GANs and the Transformer architecture, familiar with diffusion models, and up to date with new research and progress in the field of Gen AI Should follow research papers and be able to comprehend, innovate on and present the best approaches/solutions for Generative AI components Knowledge of hyperscaler and vendor offerings (NVIDIA, AWS, Azure, GCP, Oracle) and Gen AI tools (Copilot, Vertex AI). Knowledge of vector DBs and Neo4j or other relevant graph DBs Familiar with Docker containerization, Git, etc. AI/Cloud certification from a premier institute is preferred. Our purpose Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. 
Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities. Our people and culture Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work. Professional development At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India. Benefits To Help You Thrive At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially, and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/or other criteria. Learn more about what working at Deloitte can mean for you. Recruiting tips From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Requisition code: 303628

Posted 3 weeks ago

Apply

3.0 - 6.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Summary Position Summary AI & Data In this age of disruption, organizations need to navigate the future with confidence, embracing decision-making with clear, data-driven choices that deliver enterprise value in a dynamic business environment. The AI & Data team leverages the power of data, analytics, robotics, science and cognitive technologies to uncover hidden relationships from vast troves of data, generate insights, and inform decision-making. The offering portfolio helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets. AI & Data will work with our clients to: Implement large-scale data ecosystems including data management, governance and the integration of structured and unstructured data to generate insights leveraging cloud-based platforms Leverage automation, cognitive and science-based techniques to manage data, predict scenarios and prescribe actions Job Title: Generative AI Developer Job Summary: We are looking for a Generative AI Developer with hands-on experience to design, develop and deploy AI and Generative AI models that generate high-quality content such as text and images and that power chatbots. The ideal candidate will have expertise in deep learning, natural language processing, and computer vision. 
Key Responsibilities: Deliver large-scale AI/Gen AI projects across multiple industries and domains Liaise with on-site and client teams to understand business problem statements and project requirements Work with a team of Data Engineers, ML/AI Engineers, Prompt Engineers, and other Data & AI professionals to deliver projects from inception to implementation Brainstorm, build and improve AI/Gen AI models developed by the team and identify scope for model improvements and best practices Assist and participate in pre-sales, client pursuits and proposals Drive a human-led culture of Inclusion & Diversity by caring deeply for all team members Qualifications: 3-6 years of relevant hands-on experience in Generative AI, Deep Learning, or NLP Bachelor’s or Master’s degree in a quantitative field. Strong hands-on experience with programming languages such as Python, CUDA and SQL, and frameworks such as TensorFlow, PyTorch and Keras Hands-on experience with leading LLMs such as OpenAI GPT-3.5/4, Google Gemini, Llama 3 and Mistral, including via platforms such as AWS Bedrock, along with RAG and agentic workflows Well-versed in GANs and the Transformer architecture, familiar with diffusion models, and up to date with new research and progress in the field of Gen AI Should follow research papers and be able to comprehend, innovate on and present the best approaches/solutions for Generative AI components Knowledge of hyperscaler and vendor offerings (NVIDIA, AWS, Azure, GCP, Oracle) and Gen AI tools (Copilot, Vertex AI). Knowledge of vector DBs and Neo4j or other relevant graph DBs Familiar with Docker containerization, Git, etc. AI/Cloud certification from a premier institute is preferred. Our purpose Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. 
Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities. Our people and culture Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work. Professional development At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India. Benefits To Help You Thrive At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially, and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/or other criteria. Learn more about what working at Deloitte can mean for you. Recruiting tips From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Requisition code: 303628

Posted 3 weeks ago

Apply

8.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Who we are and what do we do India has witnessed a journey of innovation in digital payments and today it leads the world with over 45% of global digital transaction volume. At NPST, we believe that our decade-long journey has carved out an opportunity to build a future roadmap for the world to follow. We are determined to contribute immensely to the nation’s growth story with our vision “to provide digital technology across the financial value chain” and our mission to create a leadership position in the digital payment space. Founded in 2013, NPST is a leading fintech firm in India, part of the Make in India initiative and listed on the BSE and National Stock Exchange. We specialize in digital payments, operating as a Technology Service Provider to regulated entities and providing a payment platform to the industry, empowered by a payment processing engine, a financial super app, a risk intelligence engine and a digital merchant solution. While we drive 3% of global digital transaction volume for over 100 clients, we aim to increase our market share by 5X in the next five years through innovation and industry-first initiatives. What will you do As a Technical Lead you will work as part of a team on the design and implementation of Java-based enterprise solutions using Agile software development methodologies. You will have the opportunity to take full responsibility for the technical design and implementation of specific business areas using the latest frameworks. The role is responsible for achieving organizational goals by defining, integrating, and upgrading a comprehensive architecture to support Java applications. This role contributes information and recommendations to strategic plans and will review, prepare, and complete action plans by implementing production and quality standards, resolving problems, identifying trends, determining system improvements, and implementing change. 
Job responsibilities: Meeting with technology managers and the design team to discuss the goals and needs of the company. Lead a technical team consisting of software developers, business analysts and a Product Owner. Capture functional and non-functional requirements and design technical solutions leveraging the Spring Framework. Examining and defining current architecture systems. Designing scalable architectures for Java-based applications. Review code efficiently, provide technical guidance, and identify what needs to be implemented to meet the company’s architectural goals. Contribute to the development process by implementing PoCs and standardizing software delivery by adopting DevOps practices. Troubleshooting design flaws and system bottlenecks. Performing validation tests, system performance tests and others to ensure the flexibility and scalability of the Java environment. Ensure the overall quality and fit of the technical solution in addition to the overall performance of the application stack. Oversee progress of the development team to ensure consistency with the initial design, development principles and deadlines. Assisting the software design team with application integration. What are we looking for: Advanced knowledge of software architecture, design, web programming and implementation of software networks. Proficient with Java, Spring Boot, and Spring Cloud features such as configuration management, circuit breakers, security, service discovery, Sleuth and load balancing. Should have a deep understanding of and experience with multithreading. Should be able to create distributed and scalable architectures. Should have the habit of learning and exploring new technologies. Should understand Apache Spark, data science, ML and AI. Should have used RDBMSs such as Oracle and MySQL. Knowledge of NoSQL and graph stores such as MongoDB, Neo4j and Cassandra. Ability to solve complex software system issues. Ability to clearly present technical information to fellow technical professionals and non-technical peers. 
Updates job knowledge by participating in educational opportunities, reading professional publications, and participating in professional organizations. Entrepreneurial skills, with the ability to observe, innovate and own your work. Detail-oriented and organized with strong time management skills. Influencing skills and the ability to create positive working relationships with team members at all levels. Excellent communication and interpersonal skills. Collaborative approach, working as a group to achieve organizational goals. Education Qualification - Bachelor’s degree in software engineering or computer science. Experience - Total Experience: 8 to 12 years, Industry - IT/Software/BFSI/Banking/Fintech Work arrangement – 5 days working from office Location – Noida / Bengaluru What do we offer: An organization where we strongly believe in one organization, one goal. A fun workplace which compels us to challenge ourselves and aim higher. A team that strongly believes in collaboration and celebrating success together. Benefits that resonate ‘We Care’. If this opportunity excites you, we invite you to apply and contribute to our success story. If your resume is shortlisted, you will hear back from us.
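The circuit-breaker requirement above is worth unpacking. The pattern, which Spring Cloud provides out of the box (e.g. via Resilience4j), trips after repeated failures so callers fail fast instead of piling onto an unhealthy service. Below is a minimal Python sketch of the state machine only; thresholds and names are illustrative, not the Spring API.

```python
# Toy circuit breaker: CLOSED -> OPEN after repeated failures,
# OPEN -> HALF_OPEN after a cooldown lets one trial call through.
import time

class CircuitBreaker:
    def __init__(self, max_failures=3, reset_timeout=30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None while the circuit is closed

    @property
    def state(self):
        if self.opened_at is None:
            return "CLOSED"
        if time.monotonic() - self.opened_at >= self.reset_timeout:
            return "HALF_OPEN"  # allow one trial call through
        return "OPEN"

    def call(self, fn, *args):
        if self.state == "OPEN":
            raise RuntimeError("circuit open: failing fast")
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        else:
            self.failures = 0
            self.opened_at = None  # success closes the circuit again
            return result
```

In a Spring Boot service the same behaviour would come from an annotation or a `CircuitBreakerFactory`, with the thresholds set in configuration rather than code.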

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

India

On-site

About Fincore At Fincore we’re on a mission to build next-generation AI-native finance technology for enterprises. Our core values - ownership, customer obsession, truth-seeking, and velocity - guide everything we do. We are venture-backed and collaborate closely with seasoned technology, finance and AI leaders. We maintain a small, talent-dense team of domain experts and technologists. What We're Looking For We are seeking an extremely talented, senior- to staff-level AI engineer to help us pioneer the future of finance and accounting. You must have experience building high-quality, complex, yet maintainable LLM apps and agents, and you should be able to do so in a fraction of the time that most competent people think is possible (in part because of your ability to wield the latest in code generation and your intuition for prompt engineering). You must have a strong ability to collaborate with customers directly to build solutions tailored to their business needs. Must-Have Skills 5+ years of experience building backend services in Python, Node.js, or similar Deep experience and intuition with LLMs Cutting-edge knowledge of code generation and prompting techniques Experience building agents and tooling for agents Expertise in API design (REST, GraphQL) and relational databases Strong problem-solving, communication, and collaboration skills Top-tier institutes preferred (IITs, NITs, etc.) Our Tech Stack: NextJS | Python | LangGraph | AWS | Neo4j | PostgreSQL | MongoDB
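"Building agents and tooling for agents" boils down to a loop: ask the model, run the tool it requests, feed the result back, stop when it answers. A minimal sketch of that loop follows; the model here is a hand-written stub standing in for an LLM, and all names are invented for illustration (frameworks like LangGraph structure this same loop as a graph).

```python
# Minimal tool-calling agent loop with a stubbed "model".
def run_agent(model, tools, question, max_steps=5):
    """Loop: ask model -> run requested tool -> feed result back."""
    history = [("user", question)]
    for _ in range(max_steps):
        # The model returns either a tool request or a final answer.
        action = model(history)
        if "answer" in action:
            return action["answer"]
        result = tools[action["tool"]](*action["args"])
        history.append(("tool", result))
    raise RuntimeError("agent did not converge")

# Stub model: first asks for the calculator, then answers from the result.
def stub_model(history):
    if history[-1][0] == "user":
        return {"tool": "add", "args": (2, 3)}
    return {"answer": f"The sum is {history[-1][1]}"}

answer = run_agent(stub_model, {"add": lambda a, b: a + b}, "What is 2 + 3?")
```

A production version would swap the stub for an LLM call with a tool schema and add error handling, but the control flow is the same.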

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

New Delhi, Delhi, India

Remote

DHIRA Company Overview DHIRA is a leading company specializing in intelligent transformation, where we leverage advanced AI/ML and data-driven solutions to revolutionize business operations. Unlike traditional digital transformation, which focuses on transaction automation, our intelligent transformation encompasses both transactional automation and deep analytics for comprehensive insights. Our expertise in data engineering, data quality, and master data management ensures robust and scalable AI/ML applications. Utilizing cutting-edge technologies across AWS, Azure, GCP, and on-premises Hadoop systems, we deliver efficient and innovative data solutions. Our vision is embodied in the Akashic platform, designed to provide seamless, end-to-end analytics. At DHIRA, we are committed to excellence, driving impactful contributions to the industry. Join us to be part of a dynamic team at the forefront of intelligent transformation. Role- Data Architect – Evolution of Databases, Data Modeling, and Modern Data Practices Location: Bangalore, Remote Position Overview: We are seeking a Principal Data Architect with 5+ years of experience who has a comprehensive understanding of the evolution of databases, from OLTP to OLAP, and relational systems to NoSQL, Graph, and emerging Vector Databases. This role requires deep expertise in data modeling, from traditional ER modeling to advanced dimensional, graph, and vector schemas, along with a strong grasp of the history, best practices, and future trends in data management. The ideal candidate will bring both historical context and cutting-edge expertise to architect scalable, high-performance data solutions, driving innovation while maintaining strong governance and best practices. This is a leadership role that demands a balance of technical excellence, strategic vision, and team mentorship. Key Responsibilities: 1. 
Data Modeling Expertise: – Design and implement Entity-Relationship Models (ER Models) for OLTP systems, ensuring normalization and consistency. – Transition ER models into OLAP environments with robust dimensional modeling, including star and snowflake schemas. – Develop hybrid data models that integrate relational, NoSQL, Graph, and Vector Database schemas. – Establish standards for schema design across diverse database systems, focusing on scalability and query performance. 2. Database Architecture Evolution: – Architect solutions across the database spectrum: • Relational databases (PostgreSQL, Oracle, MySQL) • NoSQL databases (MongoDB, Cassandra, DynamoDB) • Graph databases (Neo4j, Amazon Neptune) • Vector databases (Pinecone, Weaviate, Milvus). – Implement hybrid data architectures combining OLTP, OLAP, NoSQL, Graph, and Vector systems for diverse business needs. – Ensure compatibility and performance optimization across these systems for real-time and batch processing. 3. Data Warehousing and Analytics: – Lead the development of enterprise-scale Data Warehouses capable of supporting advanced analytics and business intelligence. – Design high-performance ETL/ELT pipelines to handle structured and unstructured data with minimal latency. – Optimize OLAP systems for petabyte-scale data storage and low-latency querying. 4. Emerging Database Technologies: – Drive adoption of Vector Databases for AI/ML applications, enabling semantic search and embedding-based queries. – Explore cutting-edge technologies in data lakes, lakehouses, and real-time processing systems. – Evaluate and integrate modern database paradigms, ensuring scalability for future business requirements. 5. Strategic Leadership: – Define the organization’s data strategy, aligning with long-term goals and emerging trends. – Collaborate with business and technical stakeholders to design systems that balance transactional and analytical workloads. 
– Lead efforts in data governance, ensuring compliance with security and privacy regulations. 6. Mentorship and Innovation: – Mentor junior architects and engineers, fostering a culture of learning and technical excellence. – Promote innovation by introducing best practices, emerging tools, and modern methodologies in data architecture. – Act as a thought leader in database evolution, presenting insights to internal teams and external forums. Required Skills & Qualifications: • Experience: – 6+ years of experience in data architecture, with demonstrated expertise across OLTP, OLAP, NoSQL, Graph, and Vector databases. – Proven experience designing and implementing data models across relational, NoSQL, graph, and vector systems. – A strong understanding of the evolution of databases and their impact on modern data architectures. • Technical Proficiency: – Deep expertise in ER modeling, dimensional modeling, and schema design for modern database systems. – Proficient in SQL and query optimization for relational and analytical databases. – Hands-on experience with NoSQL databases like MongoDB, Cassandra, or DynamoDB. – Strong knowledge of Graph databases (Neo4j, Amazon Neptune) and Vector databases (Pinecone, Milvus, or Weaviate). – Familiarity with modern cloud-based DW platforms (e.g., Snowflake, BigQuery, Redshift) and lakehouse solutions. • Knowledge of Data Practices: – Historical and practical understanding of data practices, from schema-on-write to schema-on-read approaches. – Experience in implementing real-time and batch processing systems for diverse workloads. – Strong grasp of data lifecycle management, governance, and security practices. • Leadership and Communication: – Ability to lead large-scale data initiatives, balancing technical depth and strategic alignment. – Excellent communication skills to articulate complex ideas to technical and non-technical audiences. – Proven ability to mentor and upskill teams, fostering a collaborative environment. 
Preferred Skills: • Experience integrating Vector Databases into existing architectures for AI/ML workloads. • Knowledge of real-time streaming systems (Kafka, Pulsar) and their integration with modern databases. • Certifications in data-related technologies (e.g., AWS, GCP, Snowflake, Neo4j). • Hands-on experience with BI tools (e.g., Tableau, Power BI) and AI/ML platforms.
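The star and snowflake schemas mentioned under dimensional modeling can be made concrete with a toy example: one fact table keyed to two dimension tables, queried with a typical rollup. Sketched here in SQLite; the table and column names are illustrative, not from the posting.

```python
# Toy star schema: fact_sales references dim_product and dim_date.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE dim_date (date_id INTEGER PRIMARY KEY, year INTEGER);
    CREATE TABLE fact_sales (
        product_id INTEGER REFERENCES dim_product(product_id),
        date_id    INTEGER REFERENCES dim_date(date_id),
        amount     REAL
    );
""")
conn.executemany("INSERT INTO dim_product VALUES (?, ?)",
                 [(1, "widget"), (2, "gadget")])
conn.execute("INSERT INTO dim_date VALUES (1, 2024)")
conn.executemany("INSERT INTO fact_sales VALUES (?, ?, ?)",
                 [(1, 1, 10.0), (1, 1, 5.0), (2, 1, 7.5)])

# Typical OLAP-style rollup: revenue per product per year.
rows = conn.execute("""
    SELECT p.name, d.year, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_product p USING (product_id)
    JOIN dim_date d USING (date_id)
    GROUP BY p.name, d.year ORDER BY p.name
""").fetchall()
```

A snowflake schema would further normalize the dimensions (e.g. product into category tables); the fact table and the join pattern stay the same.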

Posted 3 weeks ago

Apply

5.0 - 10.0 years

15 - 27 Lacs

Pune

Work from Office

Hi, wishes from GSN! Pleasure connecting with you. About the job: This is a Neo4j Developer opportunity with a leading bootstrapped product company, a valued client of GSN HR. WORK LOCATION: Pune JOB ROLE: Neo4j Developer EXPERIENCE: 5+ Yrs CTC Range: 15 - 30 LPA WORK TYPE: Work from Office Job Summary: Key Responsibilities: Neo4j expertise - Proven experience with Neo4j, including its core concepts, the Cypher query language and best practices Designing and implementing graph database solutions: This includes creating and maintaining graph schemas, models and architectures Familiarity with graph theory, graph data modelling and other graph database technologies Developing and optimizing Cypher queries Integrating Neo4j with BI and other systems Providing technical guidance to junior developers Creating and maintaining documentation for system architecture, design and operational processes If interested, click Apply now for IMMEDIATE response. Best, Kaviya GSN HR | Kaviya@gsnhr.net | Google review: https://g.co/kgs/UAsF9W
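For readers new to the stack the posting describes: a basic Cypher pattern match walks typed relationships between nodes. The toy Python below mimics what a one-hop MATCH does over an adjacency list, with the roughly equivalent Cypher in the comment; the node names and relationship type are invented for illustration.

```python
# What a basic Cypher pattern match does, over a plain adjacency list.
# Roughly equivalent Cypher:
#   MATCH (a:Person)-[:KNOWS]->(b:Person)
#   WHERE a.name = 'Alice'
#   RETURN b.name
edges = {  # node -> list of (relationship_type, neighbour)
    "Alice": [("KNOWS", "Bob"), ("KNOWS", "Carol")],
    "Bob":   [("KNOWS", "Carol")],
}

def match_out(graph, start, rel):
    """Return neighbours reached from `start` via `rel` edges."""
    return [dst for r, dst in graph.get(start, []) if r == rel]

friends = match_out(edges, "Alice", "KNOWS")
```

Neo4j's value over this sketch is indexing, variable-length path patterns, and query planning; the mental model of typed-edge traversal carries over directly.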

Posted 3 weeks ago

Apply

6.0 - 8.0 years

0 Lacs

Bengaluru

On-site

Hello Visionary! We empower our people to stay resilient and relevant in a constantly evolving world. We’re looking for people who are always searching for creative ways to grow and learn. People who want to make a real impact, now and in the future. Does that sound like you? Then it seems like you’d make a great addition to our vibrant team. We are looking for a Semantic Web ETL Developer. Before our software developers write even a single line of code, they have to understand what drives our customers. What is the environment? What is the user story based on? Implementation means – trying, testing, and improving outcomes until a final solution emerges. Knowledge means exchange – discussions with colleagues from all over the world. Join our Digitalization Technology and Services (DTS) team based in Bangalore. YOU’LL MAKE A DIFFERENCE BY: Implementing innovative Products and Solution Development processes and tools by applying your expertise in the field of responsibility. JOB REQUIREMENTS/ SKILLS: International experience with global projects and collaboration with intercultural teams is preferred 6-8 years’ experience developing software solutions in Python. Experience in research and development processes (software-based solutions and products); in commercial topics; and in implementation of strategies and POCs Manage end-to-end development of web applications and knowledge graph projects, ensuring best practices and high code quality. Provide technical guidance and mentorship to junior developers, fostering their growth and development. Design scalable and efficient architectures for web applications, knowledge graphs, and database models. Enforce code standards and perform code reviews, ensuring alignment with standard methodologies like PEP8, DRY, and SOLID principles. Collaborate with frontend developers, DevOps teams, and database administrators to deliver cohesive solutions. 
Expert-level proficiency in Python web frameworks (Django, Flask, FastAPI) and knowledge graph libraries. Experience in designing and developing complex RESTful APIs and microservices architectures. Strong understanding of security best practices in web applications (e.g., authentication, authorization, and data protection). Extensive experience in building and querying knowledge graphs using Python libraries like RDFLib, Py2neo, or similar. Proficiency in SPARQL for advanced graph data querying. Experience with graph databases like Neo4j, GraphDB, Blazegraph, or AWS Neptune. Experience in expert functions like software development/architecture and software testing (unit testing, integration testing). Excellent grasp of DevOps practices, including CI/CD pipelines, containerization (Docker), and orchestration (Kubernetes). Excellent grasp of cloud technologies and architecture; should have exposure to S3, EKS, ECR and AWS Neptune. Exposure to and working experience in the relevant Siemens sector domain (Industry, Energy, Healthcare, Infrastructure and Cities) required. LEADERSHIP QUALITIES Visionary Leadership: Ability to lead the team towards long-term technical goals while managing immediate priorities. Strong Communication: Good interpersonal skills to work effectively with both technical and non-technical stakeholders. Mentorship & Coaching: Foster a culture of continuous learning, skill development, and collaboration within the team. Conflict Resolution: Ability to manage team conflicts and provide constructive feedback to improve team dynamics. Create a better #TomorrowWithUs! This role is in Bangalore, where you’ll get the chance to work with teams impacting entire cities, countries – and the craft of things to come. We’re Siemens. A collection of over 312,000 minds building the future, one day at a time in over 200 countries. We're dedicated to equality, and we encourage applications that reflect the diversity of the communities we work in. 
All employment decisions at Siemens are based on qualifications, merit and business need. Bring your curiosity and creativity and help us craft tomorrow. At Siemens, we are always challenging ourselves to build a better future. We need the most innovative and diverse Digital Minds to develop tomorrow’s reality. Find out more about the Digital world of Siemens here: www.siemens.com/careers/digitalminds
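The SPARQL and RDF skills in the requirements above reduce to pattern matching and joining over subject-predicate-object triples. A minimal in-memory sketch follows; libraries like RDFLib do this with proper indexing and a full query engine, and the data and query here are invented for illustration.

```python
# Toy triple store plus a pattern matcher. The two-pattern join below is
# roughly: SELECT ?city WHERE { ?city rdf:type :City . ?city :in :India }
triples = {
    ("Bangalore", "type", "City"),
    ("Bangalore", "in", "India"),
    ("Munich", "type", "City"),
    ("Munich", "in", "Germany"),
}

def match(pattern):
    """Match an (s, p, o) pattern against the store; None is a wildcard."""
    s, p, o = pattern
    return {t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)}

# Join the two patterns on their shared subject, like the WHERE clause.
cities = ({s for s, _, _ in match((None, "type", "City"))}
          & {s for s, _, _ in match((None, "in", "India"))})
```

In RDFLib the equivalent would be `Graph.query()` with the SPARQL string; the conceptual work, triple patterns joined on shared variables, is the same.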

Posted 3 weeks ago

Apply

0 years

4 - 7 Lacs

Bengaluru

On-site

Date: 7 Jul 2025 Location: Bangalore, KA, IN Job Description We are a technology-led healthcare solutions provider. We are driven by our purpose to enable healthcare organizations to be future-ready. We offer accelerated, global growth opportunities for talent that’s bold, industrious, and nimble. With Indegene, you gain a unique career experience that celebrates entrepreneurship and is guided by passion, innovation, collaboration, and empathy. To explore exciting opportunities at the convergence of healthcare and technology, check out www.careers.indegene.com Looking to jump-start your career? We understand how important the first few years of your career are, which create the foundation of your entire professional journey. At Indegene, we promise you a differentiated career experience. You will not only work at the exciting intersection of healthcare and technology but also will be mentored by some of the most brilliant minds in the industry. We are offering a global fast-track career where you can grow along with Indegene’s high-speed growth. We are purpose-driven. We enable healthcare organizations to be future ready and our customer obsession is our driving force. We ensure that our customers achieve what they truly want. We are bold in our actions, nimble in our decision-making, and industrious in the way we work. If this excites you, then apply below. Role: GenAI Architect Description: Key Responsibilities : 1. Architect scalable GenAI platforms using advanced RAG techniques. 2. Design multi-agent orchestration using MCP and A2A patterns. 3. Integrate Knowledge Graphs and vector databases for semantic search. 4. Lead prompt pipeline development and observability tooling. 5. Define secure API gateways and DevOps deployment strategies. 6. Optimize LLM performance and context management. 7. Ensure compliance with data privacy standards (HIPAA/GDPR). 8. Collaborate with cross-functional teams for system reliability. 9. 
Mentor engineers on GenAI architecture and best practices. 10. Drive innovation in agentic AI and schema-aware retrieval systems. Must Have 1. Deep expertise in LangChain, DSPy, LangGraph, and Python. 2. Strong understanding of RAG variants (Schema RAG, Agentic-RAG). 3. Experience with vector DBs like FAISS, pgvector, Pinecone. 4. Knowledge of Knowledge Graphs and Neo4j integration. 5. Familiarity with OpenAI APIs and prompt engineering. 6. Hands-on with Docker, Terraform, and GitHub Actions. 7. Cloud deployment experience (AWS, Azure, GCP). 8. Proficiency in secure API design and token management. 9. Strong documentation and architectural design skills. 10. Strategic thinking with mentoring and leadership capabilities. EQUAL OPPORTUNITY Indegene is proud to be an Equal Employment Employer and is committed to the culture of Inclusion and Diversity. We do not discriminate on the basis of race, religion, sex, colour, age, national origin, pregnancy, sexual orientation, physical ability, or any other characteristics. All employment decisions, from hiring to separation, will be based on business requirements, the candidate’s merit and qualification. We are an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, colour, religion, sex, national origin, gender identity, sexual orientation, disability status, protected veteran status, or any other characteristics.
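The RAG variants listed above all share the same retrieval core: rank stored chunks by embedding similarity to the query and pass the winners to the LLM as context. A minimal sketch with hand-made toy vectors follows; a real system would use an embedding model and a vector database such as FAISS or pgvector, and the chunk texts here are invented for illustration.

```python
# Retrieval step of RAG: cosine-similarity ranking over toy embeddings.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

chunks = {  # document chunk -> toy 3-dimensional embedding
    "HIPAA governs health data privacy.": [0.9, 0.1, 0.0],
    "Terraform provisions cloud infra.":  [0.0, 0.2, 0.9],
}

def retrieve(query_vec, k=1):
    """Return the k chunks most similar to the query embedding."""
    ranked = sorted(chunks,
                    key=lambda c: cosine(chunks[c], query_vec),
                    reverse=True)
    return ranked[:k]

top = retrieve([1.0, 0.0, 0.0])  # query vector near the HIPAA chunk
```

Schema-aware and agentic RAG variants change what gets indexed and who decides to retrieve, but this ranking step sits underneath all of them.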

Posted 3 weeks ago

Apply

8.0 years

0 Lacs

India

Remote

Job Title: Principal SDE
Experience: 8+ years
Location: Remote

Responsibilities
At Shakudo, we're building the world's first operating system for data and AI: a unified platform that streamlines powerful open-source and proprietary tools into a seamless, production-ready environment. We're looking for a Principal Software Development Engineer to lead the development of full end-to-end applications on our platform. This role is ideal for engineers who love solving real customer problems, moving across the stack, and delivering high-impact solutions that showcase what's possible on Shakudo.

What You'll Do
• Design and build complete applications, from backend to frontend, using Shakudo and open-source tools such as Neo4j, Ollama, Spark, and many more
• Solve real-world data and AI challenges with elegant, production-ready solutions
• Collaborate with Product and Customer Engineering to translate needs into scalable systems
• Drive architecture and design patterns for building on Shakudo, with high autonomy and self-direction
• Set the standard for building efficient, reusable, and impactful solutions

What You Bring
• 8+ years building production systems across the stack
• Strong backend and frontend experience (e.g., Python, React, TypeScript)
• Familiarity with cloud infrastructure, Kubernetes, and data/AI tooling
• A hands-on, solutions-first mindset and a passion for fast, high-quality delivery

Why This Role
You'll lead by example, building flagship applications that demonstrate the power of Shakudo. This role offers high ownership, high impact, and the chance to shape how modern data and AI solutions are built.

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Job Title: Python Developer – Data Science & AI Integration
Location: Chandkheda, Ahmedabad, Gujarat 382424
Experience: 2–3 years
Employment Type: Full-time
Work Mode: On-site

About the Role
We are seeking a talented and driven Python Developer to join our AI & Data Science team. The ideal candidate will have experience in developing backend systems, working with legal datasets, and integrating AI/LLM-based chatbots (text and voice). This is a hands-on role where you'll work across modern AI architectures such as RAG and embedding-based search using vector databases.

Key Responsibilities
• Design and implement Python-based backend systems for AI and data science applications.
• Analyze legal datasets and derive insights through automation and intelligent algorithms.
• Build and integrate AI-driven chatbots (text and voice) using LLMs and RAG architecture.
• Work with vector databases (e.g., Pinecone, ChromaDB) for semantic search and embedding pipelines.
• Implement graph-based querying systems using Neo4j and Cypher.
• Collaborate with cross-functional teams (data scientists, backend engineers, legal SMEs).
• Maintain data pipelines for structured, semi-structured, and unstructured data.
• Ensure code scalability, security, and performance.

Required Skills & Experience
• 2–3 years of hands-on Python development experience in AI/data science environments.
• Solid understanding of legal data structures and preprocessing.
• Experience with LLM integrations (OpenAI, Claude, Gemini) and RAG pipelines.
• Proficiency in vector databases (e.g., Pinecone, ChromaDB) and embedding-based similarity search.
• Experience with Neo4j and Cypher for graph-based querying.
• Familiarity with PostgreSQL and REST API design.
• Strong debugging and performance-optimization skills.

Nice to Have
• Exposure to Agile development practices.
• Familiarity with tools like LangChain or LlamaIndex.
• Experience working with voice-based assistant/chatbot systems.
• Bachelor's degree in Computer Science, Data Science, or a related field.

Why Join Us?
• Work on cutting-edge AI integrations in a domain-focused environment.
• Collaborate with a passionate and experienced cross-functional team.
• Opportunity to grow in the legal-tech and AI solutions space.
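The graph-querying requirement above (Neo4j with Cypher) typically means running parameterized Cypher from Python. A hedged sketch follows: the `Statute`/`Case` labels, the `CITES` relationship, and the helper function are all hypothetical, invented to illustrate the pattern against a legal dataset. The sketch only builds the query text; against a live instance you would hand the same query and parameters to the official `neo4j` driver.

```python
def find_citing_cases(statute_id: str, limit: int = 5):
    """Build a parameterized Cypher query for cases citing a statute.

    Hypothetical schema: (:Case)-[:CITES]->(:Statute).
    Parameters ($statute_id, $limit) are passed separately, never
    interpolated into the query string, to avoid Cypher injection.
    """
    query = (
        "MATCH (s:Statute {id: $statute_id})<-[:CITES]-(c:Case) "
        "RETURN c.title AS title "
        "ORDER BY c.decided_on DESC LIMIT $limit"
    )
    params = {"statute_id": statute_id, "limit": limit}
    return query, params

query, params = find_citing_cases("IPC-420")
print(query)

# Running it with the official driver (not executed here, requires a server):
# from neo4j import GraphDatabase
# driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
# with driver.session() as session:
#     records = session.run(query, params)
```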

Posted 3 weeks ago

Apply